DISTANCE-TWO COLORINGS OF BARNETTE GRAPHS TOMÁS FEDER, PAVOL HELL, AND CARLOS SUBI Abstract. Barnette identified two interesting classes of cubic polyhedral graphs for which he conjectured the existence of a Hamiltonian cycle. Goodey proved the conjecture for the intersection of the two classes. We examine these classes from the point of view of distance-two colorings. A distance-two $r$-coloring of a graph $G$ is an assignment of $r$ colors to the vertices of $G$ so that any two vertices at distance at most two have different colors. Note that a cubic graph needs at least four colors. The distance-two four-coloring problem for cubic planar graphs is known to be NP-complete. We show that the problem remains NP-complete for tri-connected bipartite cubic planar graphs, which we call type-one Barnette graphs, since they are the first class identified by Barnette. By contrast, we show that the problem is polynomial for cubic plane graphs with face sizes 3, 4, 5, or 6, which we call type-two Barnette graphs, because of their relation to Barnette’s second conjecture. We call Goodey graphs those type-two Barnette graphs all of whose faces have size 4 or 6. We fully describe all Goodey graphs that admit a distance-two four-coloring, and characterize the remaining type-two Barnette graphs that admit a distance-two four-coloring according to their face size. For quartic plane graphs, the analogue of type-two Barnette graphs is the class of graphs with face sizes 3 or 4. For this class, the distance-two four-coloring problem is also polynomial; in fact, we can again fully describe all colorable instances – there are exactly two such graphs. 1. Introduction Tait conjectured in 1884 [21] that all cubic polyhedral graphs, i.e., all tri-connected cubic planar graphs, have a Hamiltonian cycle; this was disproved by Tutte in 1946 [23], and the study of Hamiltonian cubic planar graphs has been a very active area of research ever since, see for instance [1, 10, 16, 18]. Barnette formulated two conjectures that have been at the centre of much of the effort: (1) that bipartite tri-connected cubic planar graphs are Hamiltonian (the case of Tait’s conjecture where all face sizes are even) [4], and (2) that tri-connected cubic planar graphs with all face sizes 3, 4, 5 or 6 are Hamiltonian, cf. [3, 19]. Goodey [11, 12] proved that the conjectures hold on the intersection of the two classes, i.e., that tri-connected cubic planar graphs with all face sizes 4 or 6 are Hamiltonian. When all faces have sizes 5 or 6, this was a longstanding open problem, especially since these graphs (tri-connected cubic planar graphs with all face sizes 5 or 6) are the popular fullerene graphs [8]. The second conjecture has now been affirmatively resolved in full [17]. For the first conjecture, two of the present authors have shown in [9] that if the conjecture is false, then the Hamiltonicity problem for tri-connected cubic planar graphs is NP-complete. In view of these results and conjectures, in this paper we call bipartite tri-connected cubic planar graphs type-one Barnette graphs; we call cubic plane graphs with all face sizes 3, 4, 5 or 6 type-two Barnette graphs; and finally we call cubic plane graphs with all face sizes 4 or 6 Goodey graphs. Note that it would be more logical, and historically accurate, to assume tri-connectivity also for type-two Barnette graphs and for Goodey graphs. However, we prove our positive results without needing tri-connectivity, and hence we do not assume it.
Cubic planar graphs have also been of interest from the point of view of colorings [6, 14]. In particular, they are interesting for distance-two colorings. Let $G$ be a graph with degrees at most $d$. A \textit{distance-two $r$-coloring} of $G$ is an assignment of colors from $[r] = \{1, 2, \ldots, r\}$ to the vertices of $G$ such that if a vertex $v$ has degree $d' \leq d$ then the $d'+1$ colors of $v$ and of all the neighbors of $v$ are all distinct. (Thus a distance-two coloring of $G$ is a classical coloring of $G^2$.) Clearly a graph with maximum degree $d$ needs at least $d+1$ colors in any distance-two coloring, since a vertex of degree $d$ and its $d$ neighbours must all receive distinct colors. It was conjectured by Wegner [24] that a planar graph with maximum degree $d$ has a distance-two $r$-coloring where $r = 7$ for $d = 3$, $r = d + 5$ for $d = 4, 5, 6, 7$, and $r = \lceil 3d/2 \rceil + 1$ for all larger $d$. The case $d = 3$ has been settled in the positive by Hartke, Jahanbekam and Thomas [13], cf. also [22]. For cubic planar graphs in general it was conjectured in [13] that if a cubic planar graph is tri-connected, or has no faces of size five, then it has a distance-two six-coloring. We propose a weaker version of the second case of the conjecture, namely, we conjecture that \textit{a bipartite cubic planar graph can be distance-two six-colored}. We prove this in one special case (Theorem 2.5), which of course also confirms the conjecture of Hartke, Jahanbekam and Thomas for that case. Heggernes and Telle [15] have shown that the problem of distance-two four-coloring cubic planar graphs is NP-complete. On the other hand, Borodin and Ivanova [5] have shown that subcubic planar graphs of girth at least 22 can be distance-two four-colored. In fact, there has been much attention focused on the relation between distance-two colorings and girth, especially in the planar context [5, 14]. Our results focus on distance-two colorings of cubic planar graphs, with particular attention to Barnette graphs, of both types. We prove that a cubic plane graph with all face sizes divisible by four can always be distance-two four-colored, and give a simple condition for when a bi-connected cubic plane graph with all face sizes divisible by three can be distance-two four-colored using only three colors per face. It turns out that the distance-two four-coloring problem for type-one Barnette graphs is NP-complete, while for type-two Barnette graphs it is not only polynomial, but the positive instances can be explicitly described. They include one infinite family of Goodey graphs (cubic plane graphs with all faces of size 4 or 6), and all type-two Barnette graphs which have all faces of size 3 or 6. Interestingly, there is an analogous result for quartic (four-regular) graphs: all quartic planar graphs with faces of only sizes 3 or 4 that have a distance-two five-coloring can be explicitly described; there are only two such graphs. Note that we use the term “plane” graph when the actual embedding is used, e.g., by discussing the faces; when the embedding is unique, as in tri-connected graphs, we stick with writing “planar”.
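Since a distance-two coloring of $G$ is exactly a classical coloring of $G^2$, and the definition above constrains only closed neighborhoods, a proposed coloring can be checked by a single pass over the vertices. A minimal sketch (ours, not from the paper), assuming graphs are given as adjacency dictionaries:

```python
def is_distance_two_coloring(adj, coloring):
    """Check the definition above: for every vertex v of the graph given
    by the adjacency dict `adj`, the colors of v and of all neighbors of
    v must be pairwise distinct.  Since any two vertices at distance two
    share a common neighbor, this is exactly a proper coloring of G^2."""
    for v, neighbors in adj.items():
        colors = [coloring[v]] + [coloring[u] for u in neighbors]
        if len(set(colors)) != len(colors):
            return False
    return True

# The 6-cycle: three colors suffice at distance two, but two do not.
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(is_distance_two_coloring(C6, {i: i % 3 for i in range(6)}))  # True
print(is_distance_two_coloring(C6, {i: i % 2 for i in range(6)}))  # False
```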
2. Relations to edge-colorings and face-colorings Distance-two colorings have a natural connection to edge-colorings. \textbf{Theorem 2.1.} Let $G$ be a graph with degrees at most $d$ that admits a distance-two $(d+1)$-coloring, with $d$ odd. Then $G$ admits an edge-coloring with $d$ colors. Proof. The even complete graph $K_{d+1}$ can be edge-colored with $d$ colors by the Walecki construction [2]. We fix one such coloring $c$, and then consider a distance-two $(d+1)$-coloring of $G$. If an edge $uv$ of $G$ has colors $ab$ at its endpoints, we color $uv$ in $G$ with the color $c(ab)$. It is easy to see that this yields an edge-coloring of $G$ with $d$ colors. \qed We call the resulting edge-coloring of $G$ the derived edge-coloring of the original distance-two coloring. In this paper, we mostly focus on the case $d = 3$ (the subcubic case). Thus we use the edge-coloring of $K_4$ by colors red, blue, green. This corresponds to the unique partition of $K_4$ into perfect matchings. Note that for every vertex $v$ of $K_4$ and every edge-color $i$, there is a unique other vertex $u$ of $K_4$ adjacent to $v$ in edge-color $i$. Thus if we have the derived edge-coloring we can efficiently recover the original distance-two coloring. In the subcubic case, it turns out to be sufficient to have just one color class of the edge-coloring of $G$. \textbf{Theorem 2.2.} Let $G$ be a subcubic graph, and let $R$ be a set of red edges in $G$. The question of whether there exists a distance-two four-coloring of $G$ for which the derived edge-coloring has $R$ as one of the three color classes can be solved by a polynomial time algorithm. If the answer is positive, the algorithm will identify such a distance-two coloring. Proof. We may assume in $K_4$ red joins colors 13, 24, blue joins colors 12, 34 and green joins colors 14, 23. Note that we may also assume that $R$ is a matching that covers every vertex of degree three; otherwise we answer in the negative. We may further assume that some vertex $v$ gets an even color (2 or 4). The parity of the color of a vertex $u$ determines the parity of the color of its neighbors, namely the parity is the same if they are adjacent by an edge in $R$, and they are of different parity otherwise. We may thus extend from $v$ the assignment of parities to all the vertices, unless an inconsistency is reached, in which case no coloring exists. Otherwise, at this point all vertices have only two possible colors, namely 1, 3 for odd and 2, 4 for even. Define an auxiliary graph $G'$ with vertices $V(G') = V(G)$, and edges $xy$ in $E(G')$ if $xy$ is a red edge in $E(G)$ or if there is a path $xzy$ without red edges in $E(G)$. Note that these edges $xy$ join vertices of the same parity, and $x, y$ must have different colors. If $G'$ has an odd cycle, then no solution exists. Otherwise $G'$ is bipartite, and we may choose 1, 3 in different sides of a bipartition of $G'$ for odd vertices, and 2, 4 in different sides for even vertices. Each vertex $u$ will have at most one neighbor $x$ of the same parity in $G$, namely the one joined to it by the red edge, and $ux$ is an edge of $G'$. This guarantees different colors for $u, x$. The at most two other neighbors $y, z$ of $u$ have different parity from $u, x$, and the path $yuz$ in $G$ ensures the edge $yz$ is in $G'$. This guarantees different colors for $y, z$. Thus the colors for $u, x, y, z$ are all different at each vertex $u$, and we have a distance-two coloring of $G$. \qed
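The proof of Theorem 2.2 is algorithmic and translates directly into code. The following sketch follows the proof step by step, with our own input conventions (adjacency dictionaries, red edges given as frozensets); it is an illustration under those assumptions, not the authors' implementation:

```python
from collections import deque

def color_with_red_class(adj, red):
    """Sketch of the algorithm of Theorem 2.2.  `adj` maps each vertex of
    a subcubic graph G to its set of neighbors; `red` is the prescribed
    red class, a set of frozenset edges.  Returns a distance-two
    four-coloring (colors 1..4, red joining 13 and 24) whose derived
    edge-coloring has `red` as a color class, or None if none exists."""
    # R must be a matching covering every vertex of degree three.
    red_deg = {v: sum(frozenset((u, v)) in red for u in adj[v]) for v in adj}
    if any(d > 1 for d in red_deg.values()):
        return None
    if any(len(adj[v]) == 3 and red_deg[v] == 0 for v in adj):
        return None

    # Propagate parities (0 = odd colors 1,3; 1 = even colors 2,4):
    # equal across a red edge, opposite across a non-red edge.
    parity = {}
    for root in adj:
        if root in parity:
            continue
        parity[root] = 0  # free choice: flipping a component permutes colors
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                p = parity[v] if frozenset((u, v)) in red else 1 - parity[v]
                if u not in parity:
                    parity[u] = p
                    queue.append(u)
                elif parity[u] != p:
                    return None  # parity inconsistency: no coloring exists

    # Auxiliary graph G': red edges, plus xy whenever G has a red-free path xzy.
    aux = {v: set() for v in adj}
    for u, v in map(tuple, red):
        aux[u].add(v)
        aux[v].add(u)
    for z in adj:
        free = [u for u in adj[z] if frozenset((u, z)) not in red]
        for a in range(len(free)):
            for b in range(a + 1, len(free)):
                aux[free[a]].add(free[b])
                aux[free[b]].add(free[a])

    # G' must be bipartite; its sides split {1,3} and {2,4} within each parity.
    side = {}
    for root in adj:
        if root in side:
            continue
        side[root] = 0
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for u in aux[v]:
                if u not in side:
                    side[u] = 1 - side[v]
                    queue.append(u)
                elif side[u] == side[v]:
                    return None  # odd cycle in G': no coloring exists
    color = {(0, 0): 1, (0, 1): 3, (1, 0): 2, (1, 1): 4}
    return {v: color[(parity[v], side[v])] for v in adj}
```

The two breadth-first passes correspond to the parity propagation and to the two-coloring of the auxiliary graph $G'$ in the proof; any coloring the sketch returns can be checked with the `is_distance_two_coloring` sketch above.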
There is also a relation to face-colorings. It is a folklore fact that the faces of any bipartite cubic plane graph $G$ can be three-colored [20]. This three-face-coloring induces a three-edge-coloring of $G$ by coloring each edge by the color not used on its two incident faces. (It is easy to see that this is in fact an edge-coloring, i.e., that incident edges have distinct colors.) We call an edge-coloring that arises this way from some face-coloring of $G$ a special three-edge-coloring of $G$. We first ask when is a special three-edge-coloring of $G$ the derived edge-coloring of a distance-two four-coloring of $G$. **Theorem 2.3.** A special three-edge-coloring of $G$ is the derived edge-coloring of some distance-two four-coloring of $G$ if and only if the size of each face is a multiple of 4. **Proof.** The edges around a face $f$ alternate in colors, and the vertices of $f$ can be colored consistently with this alternation if and only if the size of $f$ is a multiple of 4. This proves the “only if” part. For the “if” part, suppose the size of every face is a multiple of 4. If there is an inconsistency, it will appear along a cycle $C$ in $G$. If there is only one face inside $C$, there is no inconsistency. Otherwise we can join some two vertices of $C$ by a path $P$ inside $C$, and the two sides of $P$ inside $C$ give two regions that are inside two cycles $C', C''$. The consistency of $C$ then follows from the consistency of each of $C', C''$ by induction on the number of faces inside the cycle. \qed **Corollary 2.4.** Let $G$ be a cubic plane graph in which the size of each face is a multiple of four. Then $G$ can be distance-two four-colored. We now prove a special case of the conjecture stated in the introduction, that all bipartite cubic plane graphs can be distance-two six-colored. Recall that the faces of any bipartite cubic plane graph can be three-colored. **Theorem 2.5.** Suppose the faces of a bipartite cubic plane graph $G$ are three-colored red, blue and green, so that the red faces are of arbitrary even size, while the size of each blue and green face is a multiple of 4. Then $G$ can be distance-two six-colored. **Proof.** Let $G'$ be the multigraph obtained from $G$ by shrinking each of the red faces. Clearly $G'$ is planar, and since the sizes of blue and green faces in $G'$ are half of what they were in $G$, they will be even, so $G'$ is also bipartite. Let us label the two sides of the bipartition as $A$ and $B$. Now consider the special three-edge-coloring of $G$ associated with the face coloring of $G$. Each red edge in this special edge-coloring joins a vertex of $A$ with a vertex of $B$; we orient all red edges from $A$ to $B$. Now traversing each red edge in $G$ in the indicated orientation either has a blue face on the left and a green face on the right, or a green face on the left and a blue face on the right. In the former case we call the edge class one; in the latter case we call it class two. Each vertex of $G$ is incident with exactly one red edge; the vertex inherits the class of its red edge. The vertices around each red face in $G$ are alternately in class one and class two. We assign colors 1, 2, 3 to vertices of class one and colors 4, 5, 6 to vertices of class two. It remains to decide how to choose from the three colors available for each vertex. A vertex incident with a red edge of class $i$ has only three vertices within distance two in the same class, namely the vertex across the red edge, and the two vertices at distance two along the red face in either direction. Therefore distance-two coloring for class $i$ corresponds to three-coloring a cubic graph. Since neither class can yield a $K_4$, such a three-coloring exists by Brooks’ theorem [7]. This yields a distance-two six-coloring of $G$. \qed
### 3. Distance-two four-coloring of type-one Barnette graphs is NP-complete

We now state our main intractability result. **Theorem 3.1.** The distance-two four-coloring problem for tri-connected bipartite cubic planar graphs is NP-complete. We will begin by proving a weaker version of our claim. **Theorem 3.2.** The distance-two four-coloring problem for bipartite planar subcubic graphs is NP-complete. **Proof.** Consider the graph $H$ in Figure 1. **Figure 1.** The graph $H$ for the proof of Theorem 3.2 We will reduce the problem of $H$-coloring planar graphs to the distance-two four-coloring problem for bipartite planar subcubic graphs. In the $H$-coloring problem we are given a planar graph $G$ and the question is whether we can color the vertices of $G$ with colors that are vertices of $H$ so that adjacent vertices of $G$ obtain adjacent colors. This can be done if and only if $G$ is three-colorable, since the graph $H$ both contains a triangle and is three-colorable itself. (Thus any three-coloring of $G$ is an $H$-coloring of $G$, and any $H$-coloring of $G$ composed with a three-coloring of $H$ is a three-coloring of $G$.) It is known that the three-coloring problem for planar graphs is NP-complete, hence so is the $H$-coloring problem. **Figure 2.** The ring gadget Thus suppose $G$ is an instance of the $H$-coloring problem. We form a new graph $G'$ obtained from $G$ by replacing each vertex $v$ of $G$ by a *ring* gadget depicted in Figure 2. If $v$ has degree $k$, the ring gadget has $2k$ squares. A *link* in the ring is a square $a_i b_i c_i d_i a_i$ followed by the edge $c_i a_{i+1}$. A link is *even* if $i$ is even, and *odd* otherwise. Every even link in the ring will be used for a connection to the rest of the graph $G'$, thus vertex $v$ has $k$ available links. For each edge $vw$ of $G$ we add a new vertex $f_{vw}$ that is adjacent to a vertex $d_s$ in one available link of the ring for $v$ and a vertex $d'_t$ in one available link of the ring for $w$. (We use primed letters for the corresponding vertices in the ring of $w$ to distinguish them from those in the ring of $v$.) The actual choice of (the even) subscripts $s,t$ does not matter, as long as each available link is only used once. The resulting graph is clearly subcubic and planar. It is also bipartite, since we can bipartition all its vertices into one independent set $A$ consisting of all the vertices $a_i, c_i, b_{i+1}, d_{i+1}$ with odd $i$ in all the rings, and another independent set $B$ consisting of the vertices $a_i, c_i, b_{i+1}, d_{i+1}$ with even $i$ in all the rings. Moreover, we place all vertices $f_{vw}$ into the set $A$. Note that in any distance-two four-coloring of the ring, each link must have four different colors for vertices $a_i, b_i, c_i, d_i$, and the same color for $a_i$ and $a_{i+1}$. Thus all $a_i$ have the same color and all $c_i$ have the same color. The pair of colors in $b_i, d_i$ is also the same for all $i$; we will call it the *characteristic pair of the ring for $v$*. For any pair $ij$ of colors from $1,2,3,4$, there is a distance-two coloring of the ring that has the characteristic pair $ij$. We prove that $G$ is $H$-colorable if and only if $G'$ is distance-two four-colorable. In an $H$-coloring of $G$, the vertices of $G$ are actually assigned unordered pairs from $\{1,2,3,4\}$, since the vertices of $H$ are labeled by pairs.
(Note that two vertices of $H$ are adjacent if and only if the pairs they are labeled with intersect in exactly one element.) Thus suppose that we have an $H$-coloring $\phi$ of $G$. If $\phi(v) = ij$ (i.e., the vertex $v$ of $G$ is assigned the vertex of $H$ labeled by the pair $ij$), then we color the ring of $v$ so that its characteristic pair is $ij$. This still leaves a choice of which of the colors $i,j$ is in which of $b_s, d_s$, in each of the links $a_s, b_s, c_s, d_s$. Since $\phi$ is an $H$-coloring, adjacent vertices $v, w$ are assigned pairs that intersect in exactly one element. This makes it possible to color each $b_s, d_s$ so that all colors at distance at most two are distinct. For instance if vertices $v$ and $w$ are adjacent in $G$ and colored by 12, 13 by $\phi$, and if $f_{vw}$ is adjacent to the vertices $d_s$ in the ring for $v$ and $d'_t$ in the ring for $w$, then both $b_s$ in the ring for $v$ and $b'_t$ in the ring for $w$ are colored 1, as is $f_{vw}$, while $d_s$ in the ring for $v$ and $d'_t$ in the ring for $w$ are colored 2 and 3 respectively. It is easy to see that this is a distance-two four-coloring of $G'$. Conversely, in any distance-two four-coloring of $G'$, the color of a vertex $f_{vw}$ determines the same color in the $b$'s of its adjacent links of the rings for $v$ and $w$, whence the characteristic pairs of these two rings intersect in exactly one element. Thus we may define a mapping $\phi$ of $V(G)$ to $V(H)$ by assigning to each vertex $v \in V(G)$ the characteristic pair of the ring for $v$. Then $\phi$ is an $H$-coloring of $G$, since adjacent vertices of $G$ are assigned pairs that are adjacent in $H$. \qed To prove the full Theorem 3.1, the construction of the graph $G'$ is modified as suggested in Figure 3. Recall that in the construction of $G'$, for each edge $vw$ of $G$ a separate vertex $f_{vw}$ was made adjacent to $d_s$ in the ring of $v$ and $d'_t$ in the ring of $w$. Recall that both $s$ and $t$ are even, and the vertices $d_{s+1}, d'_{t+1}$ (with both subscripts odd) remained available for connection. We now make a new edge-gadget around the vertex $f_{vw}$, making it directly adjacent to $d'_{t+1}$, and connected to $d_s$ by a path, as depicted in Figure 3. In both rings, the two "b" type vertices in the two consecutive links are joined together by an additional edge; specifically, we add the edges $b_s b_{s+1}$ and $b'_t b'_{t+1}$. (Note that this forces the corresponding “$d$” type vertices $d_s$ and $d_{s+1}$ to be colored differently in any distance-two four-coloring, and similarly for $d'_t$ and $d'_{t+1}$.) Moreover, further vertices and edges are added, as depicted in Figure 3. The shaded ten-sided region is identified with the ten-sided exterior face of the graph depicted in Figure 5, which has a unique distance-two four-coloring, shown there. (The heavy edges correspond to the ten-sided shaded figure.) (Note that the graph in Figure 5 was obtained from the graph in Figure 7 by the deletion of two edges.) Note that the construction is not symmetric, as it depends on which ring is viewed as the “bottom” ring for the vertex $f_{vw}$. (The depicted figure has the ring of $v$ on the bottom, but the conclusions are the same if it were the ring of $w$.) We can choose either way, independently for each edge $vw$ of $G$. It can be seen that the resulting graph, which we denote by $G''$, is bipartite, planar, and cubic.
We may assume that $G$ is bi-connected (the three-coloring problem for bi-connected planar graphs is still NP-complete), and therefore $G''$ is also tri-connected (as no two faces share more than one edge). Using the unique distance-two four-coloring of the graph in Figure 5, it also follows that in any distance-two four-coloring of $G''$ the vertices $d_s$ and $d'_{t+1}$ have different colors, while both vertices $b_s$ and $b'_{t+1}$ have the same color (the color of $f_{vw}$). To facilitate checking this, we show in Figure 4 a partial distance-two four-coloring, by circles, squares, up triangles, and down triangles; this coloring is forced by arbitrarily coloring $f_{vw}$ and its three neighbours by four distinct colors. Since the colors of the pair $b_s, d_s$ and the pair $b'_{t+1}, d'_{t+1}$ have exactly one color in common, the previous NP-completeness proof applies, i.e., $G$ is $H$-colorable if and only if $G''$ is distance-two four-colorable. \qed **Figure 5.** The graph for the shaded region, with its unique distance-two four-coloring We remark that (with some additional effort) we can prove that the problem is still NP-complete for the class of tri-connected bipartite cubic planar graphs with no faces of sizes larger than 44. ### 4. Distance-two four-coloring of Goodey graphs Recall that Goodey graphs are type-two Barnette graphs with all faces of size 4 or 6 [11, 12]. In other words, a *Goodey graph* is a cubic plane graph with all faces having size 4 or 6. By Euler’s formula, a Goodey graph has exactly six square faces, while the number of hexagonal faces is arbitrary. A *cyclic prism* is the graph consisting of two disjoint even cycles $a_1a_2 \cdots a_{2k}a_1$ and $b_1b_2 \cdots b_{2k}b_1$, $k \geq 2$, with the additional edges $a_ib_i$, $1 \leq i \leq 2k$. It is easy to see that cyclic prisms have either no distance-two four-coloring (if $k$ is odd), or a unique distance-two four-coloring (if $k$ is even). Only the cyclic prisms with $k = 2, 3$ are Goodey graphs, and thus among the cyclic prisms that are Goodey graphs only the cube (the case of $k = 2$) has a distance-two four-coloring, which is moreover unique. In fact, all Goodey graphs that admit a distance-two four-coloring can be constructed from the cube as follows. The Goodey graph $C_0$ is the cube, i.e., the cyclic prism with $k = 2$. The Goodey graph $C_1$ is depicted in Figure 7. It is obtained from the cube by separating the six square faces and joining them together by a pattern of hexagons, with three hexagons meeting at a vertex tying together the three faces that used to meet in one vertex. The higher numbered Goodey graphs are obtained by making the connecting pattern of hexagons larger and larger. The next Goodey graph $C_2$ has two hexagons between any two of the six squares, with a central hexagon in the centre of any three of the squares, the following Goodey graph $C_3$ has three hexagons between any two of the squares and three hexagons in the middle of any three of the squares, and so on. Thus in general we replace every vertex of the cube by a triangular pattern of hexagons whose borders are replacing the edges of the cube. We illustrate the vertex replacement graphs in Figure 6, without giving a formal description. The entire Goodey graph $C_1$ is depicted in Figure 7. **Figure 6.** The vertex replacements for Goodey graphs $C_0, C_1, C_2,$ and $C_3$ **Figure 7.** The Goodey graph $C_1$
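The claim above about cyclic prisms is easy to confirm mechanically for small $k$. A brute-force sketch (ours; vertices are indexed $0,\dots,2k-1$ rather than $1,\dots,2k$) that counts distance-two four-colorings; a count of $24 = 4!$ means the coloring is unique up to permuting colors:

```python
def cyclic_prism(k):
    """Adjacency dict of the cyclic prism: two 2k-cycles a and b plus
    the rungs a_i b_i."""
    n = 2 * k
    adj = {(s, i): set() for s in "ab" for i in range(n)}
    for i in range(n):
        for s in "ab":
            adj[(s, i)].add((s, (i + 1) % n))
            adj[(s, (i + 1) % n)].add((s, i))
        adj[("a", i)].add(("b", i))
        adj[("b", i)].add(("a", i))
    return adj

def count_distance_two_colorings(adj, colors=4):
    """Count proper colorings of the square of the graph by backtracking."""
    # neighborhoods at distance at most two
    near = {v: adj[v].union(*(adj[u] for u in adj[v])) - {v} for v in adj}
    order = sorted(adj)
    coloring = {}
    def backtrack(i):
        if i == len(order):
            return 1
        v, total = order[i], 0
        for c in range(colors):
            if all(coloring.get(u) != c for u in near[v]):
                coloring[v] = c
                total += backtrack(i + 1)
                del coloring[v]
        return total
    return backtrack(0)

for k in range(2, 7):
    print(k, count_distance_two_colorings(cyclic_prism(k)))
# prints 24 (= 4!, a unique coloring up to color permutation) for even k,
# and 0 for odd k, as claimed in the text
```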
We have the following results. **Theorem 4.1.** The Goodey graphs $C_k, k \geq 0$, have a unique distance-two four-coloring, up to permutation of colors. **Proof.** We described $C_k$ as eight triangular regions $R$, each consisting of $\binom{k}{2}$ hexagons, one region $R$ for each vertex of the cube. Each $R$ has three squares at the corners, which we describe as two squares joined by a chain of $k$ hexagons horizontally at the bottom, and a third square on top. (See Figure 6.) We partition the vertices into $k + 2$ horizontal paths $P_i$, $0 \leq i \leq k + 1$, with each $P_i$ having endpoints of degree 2 and internal vertices of degree 3. The path $P_0$ has length $2k + 2$, and the remaining paths $P_i$, $i \geq 1$, have length $2k + 6 - 2i$. In particular the last path $P_{k+1}$ has length 4, and is the only $P_i$ that is actually a cycle, pictured as the square at the top. See Figure 8. **Figure 8.** The paths $P_i$ and the resulting distance-two colorings We denote $P_i = v^0_i v^1_i \cdots v^{\ell_i}_i$. The edges between $P_0$ and $P_1$ are $v^0_0 v^1_1$, $v^j_0 v^{j+1}_1$ for $1 \leq j \leq \ell_0 - 1$, $j$ odd, and $v^{\ell_0}_0 v^{\ell_1-1}_1$. We can choose the permutation of colors for the square $v^0_0 v^1_0 v^2_1 v^1_1$ to be 4132, forcing for the neighbors $v^0_1, v^3_1, v^2_0$ the colors 1, 4, 2, and completing the adjacent square or hexagon with the assignment to $v^4_1, v^3_0$ of colors 1, 3. This forced process extends similarly through the chain of hexagons until the last square. We have derived the beginning of $P_0$ as 4123 and the beginning of $P_1$ as 12341. After the forced extension, $P_0$ will be an initial segment of $(4123)^*$ and $P_1$ will be an initial segment of $(1234)^*$. For $i \geq 1$, the edges between $P_i$ and $P_{i+1}$ are $v^0_i v^1_{i+1}$, $v^j_i v^{j-1}_{i+1}$ for $3 \leq j \leq \ell_i - 3$, $j$ odd, and $v^{\ell_i}_i v^{\ell_{i+1}-1}_{i+1}$. A similar process derives the beginning of $P_i$ for $i$ odd as 1234 and the beginning of $P_i$ for even $i \geq 2$ as 4321. After the forced extension, $P_i$ for $i$ odd will be an initial segment of $(1234)^*$ and $P_i$ for even $i \geq 2$ will be an initial segment of $(4321)^*$. This gives a unique coloring for the triangular region after coloring one square $S$, which is uniquely extended to the four triangular regions surrounding $S$, and then uniquely extended to the four triangular regions surrounding the square $S'$ opposite to $S$. \qed **Theorem 4.2.** The Goodey graphs $C_k$, $k \geq 0$, are the only Goodey graphs having a distance-two four-coloring. **Proof.** Consider a Goodey graph $G$ with a fixed distance-two four-coloring. Recall that Goodey graphs have exactly six squares. Each of the squares is joined by four chains of hexagons to four squares. We consider the dual six-vertex graph $G'$ whose vertices are the squares of $G$, with $ab$ an edge in $G'$ if and only if there is a chain of hexagons joining squares $a$ and $b$. It can be readily verified that such a chain cannot cross itself or another chain in $G$. Indeed, the colors in the fixed distance-two four-coloring are uniquely forced along such chains, and they do not match if the chains were to cross. It follows that the graph $G'$ is planar. A similar argument shows that a chain cannot return to the same square, and two chains from the square $a$ cannot end at the same square $b$. Thus $G'$ has no faces of size one or two, and by Euler’s formula it has 12 edges and 8 faces; therefore all faces of $G'$ must be triangles, and $G'$ is the octahedron.
Let $T$ be a triangular face in $G'$, and let $s$ be a side of $T$ with the smallest number $d$ of hexagons in $G$. Then it can again be checked using the coloring that the other two sides of $T$ will also have $d$ hexagons in $G$. Then $T$ corresponds to a triangular region $R$ as in Theorem 4.1, and the octahedron $G'$ yields $G = C_k$ for $k = d$. \qed We can therefore conclude the following. Corollary 4.3. The distance-two four-coloring problem for Goodey graphs is solvable in polynomial time. Recognizing whether an input Goodey graph is some $C_k$ can be achieved in polynomial time; in the same time bound $G$ can actually be distance-two four-colored. 5. Distance-two four-coloring of type-two Barnette graphs is polynomial We now return to general type-two Barnette graphs, i.e., cubic plane graphs with face sizes 3, 4, 5, or 6. As a first step, we analyze when a general cubic plane graph admits a distance-two four-coloring which has three colors on the vertices of every face of $G$. Theorem 5.1. A cubic plane graph $G$ has a distance-two four-coloring with three colors per face if and only if (1) all faces in $G$ have size which is a multiple of 3, (2) $G$ is bi-connected, and (3) if two faces share more than one edge, the relative positions of the shared edges are congruent modulo 3 in the two faces. The last condition means the following: if faces $F_1, F_2$ meet in edges $e, e'$ and there are $n_1$ edges between $e$ and $e'$ in (some traversal of) $F_1$, and $n_2$ edges between $e$ and $e'$ in (some traversal of) $F_2$, then $n_1 \equiv n_2 \pmod{3}$. Proof. Suppose $G$ has a distance-two four-coloring with three colors in each face. The unique way to distance-two color a cycle with colors 1, 2, 3 is by repeating them in some order $(123)^*$ along one of the two traversals of the cycle. Therefore the length is a multiple of 3, so (1) holds. Moreover, there can be no bridge in $G$, as that would imply a face that self-intersects and is traversed in opposite directions along any traversal of that face, disagreeing with the order $(123)^*$ in one of them; thus (2) also holds. Finally, (3) holds because the common edges must have the same colors in both faces. Conversely, suppose the conditions hold, and consider the dual $G^D$ of $G$. (Note that each face of $G^D$ is a triangle.) We find a distance-two coloring of $G$ as follows. Let $F$ be a face in $G$; according to conditions (1) and (2), its vertices can be distance-two colored with three colors. That takes care of the vertex $F$ in $G^D$. Using condition (3), we can extend the coloring of $G$ to any face $F'$ adjacent to $F$ in $G^D$. Note that we can use the fourth color, 4, on the two vertices adjacent in $F'$ to the two vertices of a common edge. In this way, we can propagate the distance-two coloring of $G$ along the adjacencies in $G^D$. If this produces a distance-two coloring of all vertices of $G$, we are done. Thus it remains to show there is no inconsistency in the propagation. If there is an inconsistency, it will appear along a cycle $C$ in $G^D$. If there is only one face inside of $C$, then $C$ is a triangle corresponding to a vertex of $G$, and there is no inconsistency. Otherwise we can join some two vertices of $C$ by a path $P$ inside $C$, and the two sides of $P$ inside $C$ give two regions that are inside two cycles $C', C''$. The consistency of $C$ then follows from the consistency of each of $C', C''$ by induction on the number of faces inside the cycle. \qed
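Conditions (1) and (3) of Theorem 5.1 are straightforward to test once the faces are listed. A sketch (ours), assuming each face is given as a cyclic vertex sequence oriented so that the two faces on a common edge traverse it in opposite directions, and that condition (2), bi-connectivity, is checked separately; the sign flip in the congruence below restates condition (3) for these directed traversals:

```python
from collections import defaultdict

def satisfies_theorem_5_1(faces):
    """Test conditions (1) and (3) of Theorem 5.1 for a bi-connected cubic
    plane graph whose faces are given as cyclic vertex sequences."""
    # (1) every face size is a multiple of 3
    if any(len(face) % 3 != 0 for face in faces):
        return False
    # position of each directed edge within its face
    pos = {}
    for fi, face in enumerate(faces):
        for j, u in enumerate(face):
            pos[(u, face[(j + 1) % len(face)])] = (fi, j)
    # group the shared edges of each pair of faces, remembering positions
    shared = defaultdict(list)
    for (u, v), (fi, j) in pos.items():
        if (v, u) in pos:
            gi, k = pos[(v, u)]
            if fi < gi:
                shared[(fi, gi)].append((j, k))
    # (3) relative positions of shared edges agree modulo 3; since the two
    # faces traverse a common edge in opposite directions, agreement reads
    # (j - j0) = -(k - k0) (mod 3) for the directed positions used here
    for positions in shared.values():
        j0, k0 = positions[0]
        if any((j - j0) % 3 != (-(k - k0)) % 3 for j, k in positions[1:]):
            return False
    return True

# K4 (all faces triangles) passes; a distance-two four-coloring of K4
# indeed uses three colors per face.  The triangular prism fails (1).
K4_faces = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 3, 2)]
prism_faces = [(0, 1, 2), (5, 4, 3), (0, 3, 4, 1), (1, 4, 5, 2), (2, 5, 3, 0)]
print(satisfies_theorem_5_1(K4_faces))     # True
print(satisfies_theorem_5_1(prism_faces))  # False
```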
It turns out that conditions (1) – (3) are automatically satisfied for cubic plane graphs with faces of sizes 3 or 6. **Corollary 5.2.** Type-two Barnette graphs with faces of sizes 3 or 6 are distance-two four-colorable. **Proof.** Such a graph must be bi-connected, i.e., cannot have a bridge, since no triangle or hexagon can self-intersect. Moreover, only two hexagons can have two common edges, and it is easy to check that they must indeed be in relative positions congruent modulo 3 on the two faces (since all vertices must have degree three). Thus the result follows from Theorem 5.1. \qed **Theorem 5.3.** Let $G$ be a type-two Barnette graph. Then $G$ is distance-two four-colorable if and only if it is one of the graphs $C_k, k \geq 0$, or all faces of $G$ have sizes 3 or 6. **Proof.** If there are faces of both sizes 3 and 4 (and possibly size 6), then there must be (by Euler’s formula) two triangles and three squares, and as in the proof of Theorem 4.2, the squares must be joined by chains of hexagons, which is not possible with just three squares. If there is a face of size 5, then there is no distance-two four-coloring since all five vertices of that face would need different colors. \qed ### 6. Distance-two coloring of quartic graphs A *quartic graph* is a regular graph with all vertices of degree four. Thus any distance-two coloring of a quartic graph requires at least five colors. A *four-graph* is a plane quartic graph whose faces have sizes 3 or 4. The argument for viewing these as analogues of type-two Barnette graphs is as follows. For cubic plane graphs, Euler’s formula limits the numbers of faces that are triangles, squares, and pentagons, but does not limit the number of hexagon faces. Similarly, for plane quartic graphs, Euler’s formula implies that such a graph must have 8 triangle faces, but places no limits on the number of square faces. We say that two faces are *adjacent* if they share an edge. Lemma 6.1. If a four-graph can be distance-two five-colored, then every square face must be adjacent to a triangle face. Thus $G$ can have at most 24 square faces. Proof. We view the numbers 1, 2, 3, 4 modulo 4, and number 5 is separate. Let $u_1u_2u_3u_4$ be a square face that has no adjacent triangle face. (This is depicted in Figure 9 as the square in the middle.) Color $u_i$ by $i$. Let the adjacent square faces be $u_iu_{i+1}w_{i+1}v_i$. One of $v_i, w_i$ must be colored 5 and the other one $i + 2$. Then either all $v_i$ or all $w_i$ are colored 5; say all $w_i$ are colored 5, and then all $v_i$ are colored $i + 2$. Then $v_iu_iw_i$ cannot be a triangle face, or $w_i, w_{i+1}$ would be both colored 5 at distance two. Therefore $t_iv_iu_iw_i$ must be a square face. (In the figure, this is indicated by the corner vertices being marked by smaller circles; these must exist to avoid a triangle face.) This means that the original square $u_1u_2u_3u_4$ is surrounded by eight square faces, and $t_i$ must have color $i + 3$, since $u_i, v_{i+3}, v_i, w_i$ have colors $i, i + 1, i + 2, 5$. But then there cannot be a triangle face $x_iw_iw_{i+1}$, since $x_i$ is within distance two of $u_i, u_{i+1}, v_i, t_i, w_{i+1}$ of colors $i, i + 1, i + 2, i + 3, 5$, so each of the adjacent square faces $u_iu_{i+1}w_{i+1}v_i$ of $u_1u_2u_3u_4$ has adjacent square faces as well. This process of moving to adjacent square faces eventually reaches all faces as square faces, contrary to the fact that there are 8 triangle faces. \qed **Figure 9.** One square without adjacent triangles implies all faces must be squares
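For completeness, here is the Euler-formula count behind the statement that a four-graph has exactly 8 triangle faces. For a connected plane quartic graph with $t$ triangle faces and $s$ square faces:

$$E = 2V, \qquad F = E - V + 2 = V + 2, \qquad 3t + 4s = 2E = 4V,$$

so substituting $s = V + 2 - t$ gives $3t + 4(V + 2 - t) = 4V$, hence $t = 8$ and $s = V - 6$: the number of triangles is fixed at 8, while the number of squares grows with $V$.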
It follows that there are only finitely many distance-two five-colorable four-graphs. Corollary 6.2. The distance-two five-coloring problem for four-graphs is polynomial. In fact, we can fully describe all four-graphs that are distance-two five-colorable. Consider the four-graphs $G_0, G_1$ given in Figure 10. The graph $G_0$ has 8 triangle faces and 4 square faces, the graph $G_1$ has 8 triangle faces and 24 square faces. Note that $G_0$ is obtained from the cube by inserting two vertices of degree four in two opposite square faces. Similarly, $G_1$ is obtained from the cube by replacing each vertex with a triangle and inserting into each face of the cube a suitably connected degree four vertex. (In both figures, these inserted vertices are indicated by smaller size circles.) Theorem 6.3. The only four-graphs $G$ that can be distance-two five-colored are $G_0, G_1$. These two graphs can be so colored uniquely up to permutation of colors. Proof. We show that if $G$ can be so colored, then either $G$ is $G_0$ or every triangle in $G$ must be surrounded by six square faces, in which case $G$ is $G_1$. Suppose $G$ has two adjacent triangles $u_5u_1u_2$ and $T = u_5u_2u_3$. The vertices adjacent to $T$ must be given the two colors other than those of $u_5, u_2, u_3$. If $T$ has two adjacent squares, then it has five adjacent vertices, which must be given the two colors in alternation, a contradiction. Similarly, if $T$ is adjacent to three triangles then the three adjacent vertices would need three new colors, a contradiction. We may thus assume a triangle $u_5u_3u_4$. If there is a square $u_1u_5u_4t$, this square plus the two adjacent triangles would need six colors, a contradiction, so $u_5u_4u_1$ is a triangle, completing $u_5$ adjacent to the four-cycle $u_1u_2u_3u_4$. Color $u_i$ with color $i$. Then the additional vertex $v_i$ adjacent to $u_i$ for $1 \leq i \leq 4$ must be given color $i + 2$ (modulo 4), so these $v_i$ form a 4-cycle, and any additional vertex adjacent to a $v_i$ must get color 5, so there is a single additional vertex $v_5$ with color 5. This gives a uniquely colored $G_0$, up to permutation of colors. In the remaining case, each triangle $T' = u_1u_2u_3$ has adjacent squares $u_iu_{i+1}w_{i+1}v_i$, with addition modulo 3. The vertices $v_i, w_{i+1}$ must be given the two colors different from those of $T'$, and in alternation around $T'$, so there cannot be a triangle $u_iv_iw_i$, else $w_i, w_{i+1}$ with the same color would be at distance two. So there are squares $u_iv_it_iw_i$, and $T'$ is surrounded by six squares. By Lemma 6.1, we must have a triangle adjacent to the square $u_iv_it_iw_i$, either $v_it_ix_i$ or $w_it_iy_i$, but not both, since six colors would be needed. Let such a triangle be $T_i$, and we link $T'$ to the three $T_i$. These triangles, viewed as vertices with the links as edges, form a triangle-free cubic graph $G'$: a triangle face of $G'$ would correspond to three mutually linked triangles of $G$, which would have only three squares inside, contradicting Lemma 6.1. The graph $G'$ has 8 vertices for the 8 triangles, so this graph is the cube $C$. Replacing each vertex corresponding to a triangle by the corresponding triangle gives a graph $D$. Suppose the triangles adjacent to $T'$ are $v_it_ix_i$ for $1 \leq i \leq 3$. Then going around a face of $C$ we notice only one vertex inside this face by Lemma 6.1, giving the construction of $G_1$.
If we assign to the vertex inside this face the color 5, we notice that the surrounding triangles in $D$ must each use three of the colors 1, 2, 3, 4, and each must omit a different one of the four colors. This implies that all vertices in the centers of square faces must be colored 5, and only opposite triangles of $C$ use the same three out of the four colors. This proves the existence and uniqueness, up to permutation of colors, of the distance-two five-coloring of $G_1$. Suppose instead the adjacent triangles are $T_1 = w_1 t_1 y_1$, $T_2 = v_2 t_2 x_2$, and $T_3 = v_3 t_3 x_3$. If there is no triangle $v_1 w_2 x$, then the three squares $Q_1, Q_2, Q_3$ between $T_1$ and $T_2$ are respectively adjacent to squares $Q'_1, Q'_2, Q'_3$, and each $Q'_i$ must be adjacent to a triangle by Lemma 6.1. There must be triangles at both ends of the $Q'_i$, and these are adjacent to $T_1$ and $T_2$, a contradiction. Finally, suppose again the adjacent triangles are $T_1 = w_1 t_1 y_1$, $T_2 = v_2 t_2 x_2$, and $T_3 = v_3 t_3 x_3$, but there is a triangle $v_1 w_2 x$. This triangle faces $T'$, and $T_1$ faces $T_3$. Triangles facing each other give two diagonals in the square faces of $C$, which implies two opposite faces without such diagonals in $C$, while the four sets of two diagonals form a matching of the 8 vertices of $C$. If the center of a face without diagonals gets assigned 5, then the adjacent triangles will be assigned a subset of the colors $1, 2, 3, 4$. Then joining the sets of two diagonals assigns a 5 to a vertex of each remaining triangle, which is not possible for the center of the remaining face without diagonals. \qed We close with a few remarks and open problems. Wegner’s conjecture [24] that any planar graph with maximum degree $d = 3$ can be distance-two seven-colored has been proved in [13, 22]. That bound is actually achieved by a type-two Barnette graph, namely the graph obtained from $K_4$ by subdividing three incident edges. Thus the bound of 7 cannot be lowered even for type-two Barnette graphs. Wegner’s conjecture for $d = 4$ claims that any planar graph with maximum degree four can be distance-two nine-colored. The four-graph in Figure 11 actually requires nine colors in any distance-two coloring. Thus if Wegner’s conjecture for $d = 4$ is true, the bound of 9 cannot be lowered, even in the special case of four-graphs. It would be interesting to prove Wegner’s conjecture for four-graphs, i.e., to prove that any four-graph can be distance-two nine-colored. Finally, we have conjectured that any bipartite cubic planar graph can be distance-two six-colored (a special case of a conjecture of Hartke, Jahanbekam and Thomas [13]). The hexagonal prism (a cyclic prism with $k = 3$, which is a Goodey graph) actually requires six colors. Hence if our conjecture is true, the bound of 6 cannot be lowered even for Goodey graphs. It would be interesting to prove our conjecture for Goodey graphs, i.e., to prove that any Goodey graph can be distance-two six-colored. REFERENCES [1] R.E.L. Aldred, S. Bau, D.A. Holton, and B.D. McKay. Non-hamiltonian 3-connected cubic planar graphs. *SIAM J. Discrete Math.* 13:25–32, 2000. [2] B. Alspach. The wonderful Walecki construction. *Bull. Inst. Combin. Appl.* 52:7–20, 2008. [3] D. Barnette. On generating planar graphs. *Discrete Math.* 7:199–208, 1974. [4] D. Barnette. Conjecture 5. *Recent Progress in Combinatorics* (Ed. W.T. Tutte), Academic Press, New York, 343, 1969. [5] O.V. Borodin and A.O. Ivanova. 2-distance 4-colorability of planar subcubic graphs with girth at least 22. *Discussiones Math.
Graph Theory* 32:141–151, 2012. [6] O.V. Borodin. Colorings of plane graphs: a survey. *Discrete Math.* 313:517–539, 2013. [7] R.L. Brooks. On coloring the nodes of a network. *Proc. Cambridge Philos. Soc.* 37:194–197, 1941. [8] R. Erman, F. Kardoš, and J. Miškuf. Long cycles in fullerene graphs. *J. Math. Chemistry* 46:1103–1111, 2009. [9] T. Feder and C. Subi. On Barnette’s conjecture. *Electronic Colloquium on Computational Complexity (ECCC)* TR06-015, 2006. [10] M.R. Garey, D.S. Johnson, and R.E. Tarjan. The planar Hamiltonian circuit problem is NP-complete. *SIAM J. Comput.* 5:704–714, 1976. [11] P.R. Goodey. Hamiltonian circuits in polytopes with even sided faces. *Israel J. Math.* 22:52–56, 1975. [12] P.R. Goodey. A class of Hamiltonian polytopes. *J. Graph Theory* 1:181–185, 1977. [13] S.G. Hartke, S. Jahanbekam, and B. Thomas. The chromatic number of the square of subcubic planar graphs. arXiv:1604.06504. [14] F. Havet. Choosability of the square of planar subcubic graphs with large girth. *Discrete Math.* 309:3553–3563, 2009. [15] P. Heggernes and J.A. Telle. Partitioning graphs into generalized dominating sets. *Nordic J. Computing* 5:128–143, 1998. [16] D.A. Holton and B.D. McKay. The smallest non-Hamiltonian 3-connected cubic planar graphs have 38 vertices. *J. Combin. Theory B* 45:305–319, 1988. [17] F. Kardoš. A computer-assisted proof of Barnette-Goodey conjecture: Not only fullerene graphs are Hamiltonian. arXiv:1409.2440. [18] X. Lu. A note on 3-connected cubic planar graphs. *Discrete Math.* 310:2054–2058, 2010. [19] J. Malkevitch. Polytopal graphs. In *Selected Topics in Graph Theory* (L. W. Beineke and R. J. Wilson, eds.) 3:169–188, 1998. [20] O. Ore. The four colour problem. Academic Press, 1967. [21] P.G. Tait. Listing’s Topologie. *Philosophical Magazine*, 5th Series 17:30–46, 1884. Reprinted in Scientific Papers, Vol. II, pp. 85–98. [22] C. Thomassen. The square of a planar cubic graph is 7-colorable. Manuscript, 2006. [23] W.T. Tutte. On hamiltonian circuits. *J. Lond. Math. Soc.* 21:98–101, 1946. [24] G. Wegner. Graphs with given diameter and a coloring problem. Technical Report, University of Dortmund, Germany, 1977. 268 Waverley Street, Palo Alto, CA 94301, USA *E-mail address:* email@example.com School of Computing Science, Simon Fraser University, Burnaby, B.C., Canada V5A 1S6 *E-mail address:* firstname.lastname@example.org Los Altos Hills *E-mail address:* email@example.com
Pollination of Lowbush Blueberry (*Vaccinium angustifolium*) in Newfoundland by native and introduced bees Barry J. Hicks ABSTRACT Lowbush blueberry, *Vaccinium angustifolium* Aiton (Ericaceae), requires insects (mainly bees) for successful and adequate pollination. The bee fauna of Newfoundland is not well known, but native bees have been shown to be important to blueberry pollination in this province. The diversity and abundance of native bees were greater in managed blueberry plots than in unmanaged plots, but this did not translate into greater fruit-set. The supplementation of blueberry plots with imported bees, *Bombus impatiens* and *Apis mellifera*, did not increase fruit-set, and the diversity of native bee species was decreased in the supplemented plots. RÉSUMÉ La pollinisation du bleuet, *Vaccinium angustifolium* Aiton (Ericaceae), dépend des insectes (surtout les abeilles). La faune des abeilles de Terre-Neuve n’est pas bien connue mais joue un rôle important pour la pollinisation du bleuet. La diversité et l’abondance des abeilles sauvages étaient plus importantes dans les parcelles de bleuet cultivées que dans les parcelles sauvages mais cette différence ne s’est pas reflétée dans la quantité de fruit. L’ajout de bourdons *Bombus impatiens* ou d’abeilles domestiques, *Apis mellifera*, dans certaines parcelles de bleuet n’a pas augmenté la quantité de fruits et la diversité des abeilles sauvages a diminué. Received 16 June 2011. Accepted for publication 1 November 2011. Published on the Acadian Entomological Society website at www.acadianes.ca/journal.html on 22 December 2011. Barry Hicks: College of the North Atlantic, 4 Pikes Lane, Carbonear, NL, A1Y 1A7, Canada. Corresponding author (email email@example.com). INTRODUCTION Many species of insects visit flowers in search of nectar and pollen. In return for these foods, the insects inadvertently pollinate the flowers. Many native insect species (especially bees) are important pollinators of commercial food crops. About 20,000 species of bees are known throughout the world (Finnamore and Michener 1993). Presently, around 50 species occur in Newfoundland, representing 5 families: digger bees (Andrenidae), sweat bees (Halictidae), cellophane bees (Colletidae), leafcutting bees (Megachilidae), and bumble and cuckoo bees (Apidae) (Hicks 2009). The number of bee species recorded from Newfoundland is considerably lower than that of mainland Atlantic Canada, where 159 species have been recorded (Sheffield et al. 2003). As native Newfoundland bee species have not been well studied, the list of bees found in Newfoundland will undoubtedly continue to grow. Many native bee species are natural pollinators of lowbush blueberry in Newfoundland (Lomond and Larson 1983). Most species are solitary, with the exception of non-parasitic social bumble bees and some primitively social members of the Halictidae. Although many bee species are excellent pollinators of lowbush blueberry, the negative impact of year-to-year weather fluctuations results in a need for many blueberry growers to supplement native bee populations with imported bee species such as honey bees (*Apis mellifera*), bumble bees (*Bombus impatiens*), or leafcutting bees (*Megachile rotundata*). In many areas of North America, blueberry producers supplement pollination with commercial bees due to low numbers of native bees (Boulanger et al. 1967; Desjardins and de Oliveira 2006). In Nova Scotia, pollen collecting bees such as Bombus spp. and Andrena spp.
were more efficient at pollinating blueberry flowers than nectar collecting Apis mellifera and Megachile rotundata (Javorek et al. 2002). Meanwhile, a study of blueberry pollination on Newfoundland’s coastal barrens showed that Apis mellifera supplementation increased fruit set (Lomond and Larson 1983) and that native bee abundance was similar between treatment and control plots. The importation of non-native bee species to increase pollination may result in the transmission of bee diseases that could seriously impact the diversity and abundance of native bee species. Wild lowbush blueberry, Vaccinium angustifolium Aiton, is indigenous to northeastern North America. It has become an important commercial product in Nova Scotia, Newfoundland, New Brunswick, Prince Edward Island, Quebec, and Maine. While Quebec and Maine are the largest producers, Newfoundland is the smallest producer in North America (Statistics Canada 2011; Yarborough 2009), although blueberry is an important agricultural crop there. In 2008, Newfoundland and Labrador had 809 ha under blueberry cultivation, accounting for over half of the total acreage of planted fruits and vegetables (Statistics Canada 2011). Major production areas included Conception Bay North, Bonavista North, and Central Newfoundland. In the growing season of 2003, 12 commercial producers harvested 274,428 kg on 485 ha of land (Ricketts 2004). Production has slowed in recent years, with 122,500 kg in 2006 (Government of Newfoundland 2011) and 181,500 kg in 2008 (Statistics Canada 2011). Harvests from the limited number of managed blueberry farms represent only a fraction of the total volume produced in the province. In 2004, the total amount harvested from wild blueberries was estimated to be 823,265 kg (Ricketts 2004). As the native bee fauna of Newfoundland is so poorly known, it is important to investigate the diversity of bees and their ecology. Therefore, we investigated the following three questions: 1) Is there a difference in the biodiversity and abundance of native bee species between managed (farm) and unmanaged (wild) blueberry plots? The hypothesis is that the cultivated plots have higher abundance and diversity of native bees. 2) What is the impact of the introduced bumble bee, Bombus impatiens, on blueberry pollination in eastern Newfoundland? In this case the hypothesis is that fields supplemented with pollinators show increased pollination and berry production. 3) What is the impact of the introduced honey bee, Apis mellifera, on blueberry pollination in eastern Newfoundland? The hypothesis is that fields supplemented with pollinators show increased pollination and berry production. METHODS Biodiversity of native bees in blueberry fields Location of plots In 2006, four blueberry plots were chosen on a farm near Colliers, Avalon Peninsula, Newfoundland (47°27’30”N, 53°15’20”W). Two of the plots had additional nesting habitat around the periphery of the blueberry field, while two plots had soil-nesting sites restricted to the blueberry field due to rocky soil and bog land bordering the plots. Plots were spaced at 250 m intervals and located on top of a small hill (elevation = 150 m) with south-westerly exposure. Two additional uncultivated plots with rocky soil, located 4.76 km from the farm (47°25’55”N, 53°18’11”W), were chosen to represent a natural barren habitat. Along with blueberry, other common plants included Kalmia angustifolia L. (Ericaceae), Rhododendron groenlandicum (Oeder) K.A. Kron & W.S.
Judd (Ericaceae), Caribou lichen, Cladonia rangiferina (Cladoniaceae), and Larix laricina (Du Roi) K. Koch (Pinaceae). These natural sites were located on top of a small ridge and were exposed to wind in all directions. In 2007, two different blueberry plots were selected near Colliers (47°26’31”N; 53°18’52”W). Two additional plots were chosen 26 km to the north of these plots on a farm near Harbour Grace (47°40’21”N; 53°21’06”W). Two plots representing a natural habitat were selected in the same location as in 2006. **Fruit-set** Four 5 m-long transects were established in each plot, with two transects arranged in an east–west direction and two in a north–south direction. In each year, 10 stems, touching or closest to the transect line at 0.5 m intervals, were chosen and tagged. On each tagged stem, the number of flowers was counted on 16 June 2006 and on 26 June 2007, and the number of developed fruit on 25 July 2006 and 13 August 2007. **Sampling of Bees** Starting on 26 June 2006, five yellow bowl traps (10 cm in diameter and 4 cm deep) (Solo Cup Company, Toronto, Ontario) were placed in each plot. Bees were only collected on warm, sunny or partly sunny days when temperatures were greater than 18 °C. No bees were sampled during the blooming period. The bees that were captured over a 24-hr period were removed and pinned for later identification. Traps were not placed in the plots if inclement weather was forecast. The plots were sampled 10 times over July and August. In 2007, five traps were set out in each plot starting on 5 July 2007. The plots were sampled six times over July and August. In each plot, on sampling days in all years, air temperature, relative humidity, and wind speed were recorded. Verification of bee identification was by Cory Sheffield (York University, Toronto, Ontario), with voucher specimens being housed in the insect collection at the College of the North Atlantic, Carbonear Campus. **The impact of the introduced bumblebee, *Bombus impatiens*, and honey bee, *Apis mellifera*, on blueberry pollination** **Location of plots** In 2008 and 2009, one blueberry field was selected on each of four farms located near Colliers and Harbour Grace (Table 1). In 2008, one blueberry field was supplemented with a box (also known as a quad) of four colonies of *Bombus impatiens*, with ca 200 bees per colony, at a stocking rate of 4 colonies/ha; one blueberry field was supplemented at a stocking rate of 8 colonies/3.8 ha, and two other plots were not supplemented. In addition, two natural habitat sites, not supplemented and located <3 km from supplemented fields, were chosen. In 2009, eight *Apis mellifera* hives were placed in two fields, one at a stocking rate of 4 hives/3 ha and one at a stocking rate of 4 hives/2 ha. The hives contained from 40,000 to 60,000 bees and were oriented with their entrances facing south. **Sampling of Bees** Starting on 2 July 2008 and 16 June 2009, five yellow bowl traps (10 cm in diameter and 4 cm deep) (Solo Cup Company, Toronto, Ontario) were placed in each plot (total = 10/site). The bees that were captured over a 24-hr period were removed and pinned for later identification. The traps were not placed into the plots if inclement weather was forecast. In 2008 and 2009, the plots were sampled six times. In addition, on 3 June 2009, two Malaise traps were set up perpendicular to a forest edge, one trap 100 m from the *Apis mellifera* hives and one trap in a natural site. The traps stayed in place until 28 July 2009, and the bottles on the traps were changed every two weeks.
**Fruit set, berry weight and seed count** Fruit set was evaluated as above in 2008 and 2009, with flowers counted on 27 June 2008 and 16 June 2009 and developed fruit on 8 August 2008 and 24 July 2009. After fruit developed, 10 berries along the transects were chosen randomly, placed in a Ziploc bag, labeled, and transported back to the lab in an ice box. At the laboratory, each berry was weighed using an analytical balance (0.001 g) and the diameter of each berry was measured using digital calipers (0.01 mm). Each berry was crushed, washed with water and passed through a suction filter to harvest seeds. The seeds were counted under 5x magnification. **Pollen deposition on blueberry stigmata** The protocol for the assessment of pollination and pollen deposition was modified from Javorek et al. (2002). Percent pollination of blueberry flowers was assessed by randomly cutting 40 stigmata from flowers (mid-style) from each site (80 per treatment). Groups of five stigmata were placed in a drop of basic fuchsin gel (250 ml water, 75 ml glycerine, 7 g gelatin and a few crystals of basic fuchsin stain) on a microscope slide, viewed under light microscopy (400x), and assessed as pollinated if more than two pollen tetrads were present. Pollen load was classified as low (3–20 pollen tetrads/stigma), moderate (21–40), or heavy (>40). **Data handling** Shannon-Wiener diversity indices were calculated for each treatment and year using the PAST online calculator (Hammer et al. 2001). PAST was used to compare the diversity indices within years using a *t*-test described by Poole (1974). A one-way analysis of variance was used to compare means of variables (i.e., fruit-set, pollination, and environmental measurements) after the data were checked for normality. Proportional data, and data whose variance was not normally distributed, were transformed (arcsine and Log$_{10}$). In cases where transformation could not achieve normality, nonparametric tests were employed (Mann-Whitney for two-sample tests and Kruskal-Wallis for more than two-sample tests). All statistical tests were performed in Minitab Version 15.

**Table 1.** The location of blueberry plots in 2008 and 2009.

| Treatment | 2008 | 2009 |
|--------------------|-----------------------|-----------------------|
| Supplemented* | 47°25'56"N; 53°19'32"W | 47°26'08"N; 53°19'50"W |
| Un-supplemented | 47°26'32"N; 53°15'40"W | 47°25'21"N; 53°17'38"W |
| Supplemented | 47°40'18"N; 53°21'13"W | 47°26'28"N; 53°19'14"W |
| Un-supplemented | 47°40'04"N; 53°20'30"W | 47°25'14"N; 53°17'48"W |
| Natural habitat | 47°25'55"N; 53°18'11"W | 47°25'55"N; 53°18'11"W |
| Natural habitat | 47°26'49"N; 53°15'26"W | 47°27'38"N; 53°14'31"W |

\* In 2008 supplemented fields had *Bombus impatiens*; in 2009 supplemented fields had *Apis mellifera*.

RESULTS Biodiversity of native bees in blueberry fields Sixteen bee species were collected over the two years and from all sites combined (Table 2). The species richness and abundance of bees were higher in the farm habitat than in the unmanaged habitat for both years (Table 3). In 2006, while the natural habitat had the lowest diversity of bees, fruit-set did not differ significantly among the sites. In 2007, percent fruit-set was statistically different among the sites (Table 3). The managed field F1 had the same fruit-set as the natural habitat, but F2 had a significantly lower fruit-set compared to the other two sites.
As in the previous year, there was no difference in the Shannon-Wiener diversity index or among the measured environmental variables. Comparison of the data between the years (2006 and 2007) was not possible, as there were varying levels of sampling effort and the sites between the years were different.

**The impact of the introduced bees, *Bombus impatiens* and *Apis mellifera*, on blueberry pollination**

The species richness and abundance were considerably lower in 2008 and 2009 than in the previous years, and this is reflected in the lower Shannon-Wiener diversity indices (Table 4). However, the sampling effort was not the same during the four years of sampling (10 samples in 2006; 6 samples in 2007, 2008 and 2009). In the fields supplemented with *Bombus impatiens*, only four of the six species of bees collected were native pollinating species. The remaining two were the imported bee, *Bombus impatiens*, and the cleptoparasite, *Bombus (Psithyrus) fernaldae*. The species richness and abundance in the natural sites were lower compared to the un-supplemented field, a trend that was observed in previous years (Tables 4 and 5). In 2009, the supplemented (*Apis mellifera*) and natural sites had three and six pollinating species, respectively. The social parasite *Nomada cressonii* was also collected in both the supplemented and natural sites. The supplemented sites had the lowest abundance and least species richness among the sites studied, reflected in the significantly lower Shannon-Wiener diversity index.

**Table 2.** Bee species and their abundance in managed and unmanaged blueberry plots in eastern Newfoundland during 2006 and 2007. For 2006: F1 = M. Walsh farm with suitable nesting nearby; F2 = M. Walsh farm with unsuitable nesting nearby; Nat = natural habitat. For 2007: F1 = M. Walsh farm; F2 = D. Howell farm.

| Bee species | Family | 2006 F1 | 2006 F2 | 2006 Nat | 2007 F1 | 2007 F2 | 2007 Nat |
|---|---|---|---|---|---|---|---|
| *Andrena carolina* | Andrenidae | 35 | 34 | 13 | 27 | 31 | 1 |
| *Andrena rufosignata* | Andrenidae | 14 | 13 | 26 | 2 | 1 | - |
| *Andrena thaspii* | Andrenidae | 4 | 3 | - | - | 1 | 1 |
| *Andrena wilkella* | Andrenidae | 1 | 2 | 1 | - | - | - |
| *Osmia inermis* | Megachilidae | - | - | 1 | - | - | - |
| *Hylaeus modestus* | Halictidae | 1 | 1 | - | - | 2 | - |
| *Lasioglossum (Dialictis)* sp. | Halictidae | 34 | 29 | 13 | 20 | - | 17 |
| *Lasioglossum (Evylaeus) quebecense* | Halictidae | 35 | 55 | 17 | 18 | 13 | 15 |
| *Lasioglossum (Evylaeus) foxii* | Halictidae | 7 | 7 | 6 | - | - | - |
| *Lasioglossum (Evylaeus) ruftarsus* | Halictidae | 13 | 3 | 3 | - | - | - |
| *Sphecodes solonis* | Halictidae | 4 | 5 | - | 1 | 3 | - |
| *Nomada cressonii* | Anthophoridae | 2 | - | 1 | 1 | 4 | - |
| *Bombus borealis* | Apidae | - | 1 | - | - | - | - |
| *Bombus frigidus* | Apidae | 4 | - | - | 2 | 1 | 1 |
| *Bombus vagans bolsteri* | Apidae | 3 | 6 | 3 | 2 | 3 | 2 |
| *Bombus terricola* | Apidae | 3 | 14 | 1 | 4 | 4 | - |

**Table 3.** Bee species diversity, abundance, and selected environmental variables in managed and unmanaged blueberry plots in eastern Newfoundland during 2006 and 2007.
| Year | Site | No. of species | Total abundance | H' | Fruit-set (%) | Air temp (°C) | Wind speed (m/s) | RH (%) |
|---|---|---|---|---|---|---|---|---|
| 2006 | F1 | 14 | 157 | 2.09a | 65.8 (80)a | 23.8 (28)a | 2.6 (28)a | 80.0 (26)a |
| | F2 | 13 | 168 | 1.98a | 58.2 (78)a | 24.2 (28)a | 2.1 (28)a | 81.1 (25)a |
| | Nat | 11 | 61 | 1.89a | 66.8 (75)a | 24.6 (28)a | 2.3 (28)a | 78.8 (26)a |
| 2007 | F1 | 9 | 77 | 1.61a | 73.0 (39)a | 22.5 (9)a | 2.1 (9)a | 75.2 (9)a |
| | F2 | 10 | 63 | 1.62a | 51.7 (40)b | 23.2 (7)a | 2.8 (7)a | 75.1 (7)a |
| | Nat | 6 | 37 | 1.17a | 70.2 (37)a | 21.9 (9)a | 1.6 (9)a | 75.2 (9)a |

NOTE: Number of sampling days: 2006 = 10, 2007 = 6. H' = Shannon-Wiener diversity index. The numbers in brackets indicate the number of samples taken. Values followed by the same letter indicate no significant difference at P = 0.05.

**Table 4.** Bee species and their abundance in blueberry plots supplemented with *Bombus impatiens* (2008), *Apis mellifera* (2009), managed sites that were un-supplemented and natural sites also un-supplemented in eastern Newfoundland during 2008 and 2009. Site 1 = supplemented; Site 2 = un-supplemented; Nat = un-supplemented natural site.

| Bee species | Family | 2008 Site 1 | 2008 Site 2 | 2008 Nat | 2009 Site 1 | 2009 Site 2 | 2009 Nat |
|---|---|---|---|---|---|---|---|
| *Andrena carolina* | Andrenidae | 19 | 15 | 15 | 21 | 30 | 43 |
| *Andrena rufosignata* | Andrenidae | 1 | - | - | - | - | - |
| *Andrena thaspii* | Andrenidae | - | 1 | - | - | - | 1 |
| *Andrena wilkella* | Andrenidae | - | 1 | - | - | - | - |
| *Andrena frigida* | Andrenidae | - | - | 1 | - | - | - |
| *Andrena* sp. | Andrenidae | - | 1 | - | - | - | - |
| *Lasioglossum (Evylaeus) quebecense* | Halictidae | - | 5 | 1 | 2 | 7 | 10 |
| *Lasioglossum (Evylaeus) foxii* | Halictidae | - | 1 | - | - | - | - |
| *Sphecodes solonis* | Halictidae | - | - | - | - | - | 6 |
| *Sphecodes levis* | Halictidae | - | - | - | - | - | 1 |
| *Lasioglossum (Dialictis)* sp. | Halictidae | - | - | - | 1 | 7 | 14 |
| *Bombus frigidus* | Apidae | 1 | - | - | - | - | - |
| *Bombus vagans bolsteri* | Apidae | 4 | 4 | 1 | 2 | 2 | 1 |
| *Bombus terricola* | Apidae | - | - | 1 | - | - | - |
| *Bombus impatiens* | Apidae | 4 | - | - | - | - | - |
| *Bombus (Psithyrus) fernaldae* | Apidae | 1 | - | - | - | - | - |
| *Apis mellifera* | Apidae | - | - | - | 1 | - | - |
| *Nomada cressonii* | Anthophoridae | - | - | - | 4 | - | 3 |

**Table 5.** Measurement of selected environmental variables in blueberry plots supplemented (Sup) with *Bombus impatiens* (2008) and *Apis mellifera* (2009), un-supplemented (Un-sup) and natural habitat in eastern Newfoundland.

| Year | Treatment | No. of species | Total abundance | H' | Air Temp (°C) | RH (%) | Wind speed (m/s) |
|---|---|---|---|---|---|---|---|
| 2008 | Sup | 6 | 31 | 0.89a | 22.8 (8)a | 82.9 (8)a | 2.9 (8)a |
| | Un-sup | 7 | 28 | 1.40a | 22.4 (8)a | 84.0 (8)a | 3.8 (8)a |
| | Natural | 5 | 19 | 0.81a | 21.1 (8)a | 81.9 (8)a | 3.1 (8)a |
| 2009 | Sup | 4 | 26 | 0.69a | 23.5 (10)a | 58.9 (10)a | 2.8 (10)a |
| | Un-sup | 4 | 43 | 1.19b | 23.0 (10)a | 64.3 (10)a | 2.8 (10)a |
| | Natural | 7 | 65 | 1.39b | 22.4 (10)a | 64.4 (10)a | 2.4 (10)a |

NOTE: Values are means; the numbers of measurements taken are in the brackets. Number of sampling days: 2008 = 6, 2009 = 6. H' = Shannon-Wiener diversity index.
Values followed by the same letter indicate no significant difference at P = 0.05.

Four species, *Andrena carolina*, *Bombus vagans bolsteri*, *Lasioglossum (Evylaeus) quebecense* and *Nomada cressonii*, were captured in Malaise traps in both the supplemented site and the natural site. In addition, *Bombus (Psithyrus) fernaldae* was captured from only the natural site. The abundance of bees captured over the sampling period was surprisingly low: 18 specimens in the supplemented site and 25 in the natural site.

During both study years, the air temperature, RH and wind speed were not significantly different between the study sites (Table 5). Percent fruit-set was lowest in the sites that were supplemented with *Bombus impatiens* and *Apis mellifera*, while the un-supplemented and natural sites had similar fruit-set in both years (Table 6). Berry diameter and mass did not differ significantly among the sites in 2008 (Table 6). However, while the berry mass was the same in 2009, there was a difference in berry diameter, with the un-supplemented site having the largest diameter and the natural sites the smallest diameter. In 2008, the number of seeds per berry (fully developed and aborted seeds) was significantly lower in berries collected from the natural habitat compared to the other sites. In 2009, there was no difference in the fully developed seeds per berry between the sites. However, the supplemented sites had significantly lower pollination than the other two sites (supplemented vs. un-supplemented $P < 0.001$; supplemented vs. natural $P = 0.022$). The un-supplemented and natural sites had similar pollen deposited ($P = 0.258$). In 2008, the examination of the amount of pollen transferred to the stigmata of flowers showed that the supplemented sites had significantly more blueberry pollen tetrads deposited on the stigmata than the un-supplemented or natural sites (supplemented vs. un-supplemented $P = 0.003$; supplemented vs. natural $P = 0.005$). The un-supplemented and natural sites had similar pollen deposited ($P = 0.79$). The supplemented site had significantly more flowers pollinated than the two other sites.

**DISCUSSION**

The bee fauna associated with lowbush blueberry in Newfoundland is small compared to mainland North America. Boulanger et al. (1967) and Vander Kloet (1976) showed that the solitary bees *Andrena regularis*, *Andrena carlini*, *Andrena nivalis* and *Andrena vicina* and the bumble bees *Bombus bimaculatus*, *Bombus terricola* and *Bombus ternarius* are all important pollinators of blueberry in mainland areas.

**Table 6.** Fruit set, berry size, seed count and percent pollination in managed blueberry plots supplemented with *Bombus impatiens* (2008) and *Apis mellifera* (2009), managed sites that were un-supplemented and natural sites also un-supplemented.
| Treatment | Fruit-set (%) | Berry diameter (mm) | Berry mass (g) | Seed count per berry | Percent pollination |
|--------------------|---------------|---------------------|----------------|----------------------|---------------------|
| **2008** | | | | | |
| Supplemented | 47.5 (80)a | 9.26 (40)a | 0.426 (40)a | 34.6 (40)a | 91.0 (67)a |
| Un-supplemented | 57.4 (80)b | 9.38 (40)a | 0.448 (40)a | 39.7 (40)a | 83.6 (73)b |
| Natural habitat | 57.7 (80)b | 9.35 (40)a | 0.421 (40)a | 28.5 (40)b | 84.3 (70)b |
| **2009** | | | | | |
| Supplemented | 52.5 (80)a | 9.46 (40)ab | 0.409 (40)a | 17.0 (40)a | 55.3 (56)a |
| Un-supplemented | 72.7 (80)b | 9.95 (40)a | 0.467 (40)a | 18.7 (40)a | 87.3 (63)b |
| Natural habitat | 69.7 (80)b | 9.23 (40)b | 0.403 (40)a | 18.1 (40)a | 73.6 (53)b |

**NOTE:** Values are means; the numbers of measurements taken are in the brackets. Values followed by the same letter indicate no significant difference at $P = 0.05$.

However, only *Bombus terricola* and *Bombus ternarius* are known to occur in Newfoundland. The bees most associated with blueberry in eastern Newfoundland include *Andrena carolina*, *Lasioglossum (Evylaeus) quebecense* and *Bombus vagans bolsteri*. In 2006, 2007 and 2008 the managed plots had higher diversity than the natural plots, as shown by the Shannon-Wiener diversity index. While 2007 was the only year that showed a significant increase in fruit set in the natural site compared to one of the managed sites (Table 3), there was no significant overall effect among sites. As there is considerable year-to-year variation in bee abundance and the sampling effort differed between years, year-to-year comparisons could not be made.

The percent fruit set that was recorded (range = 51.7–73.0%) was higher than that reported by Lomond and Larson (1983) from an area close by. They had a fruit set of 39% in un-supplemented managed fields and suggested that an average fruit set over 35% was rare. However, Lomond and Larson (1983) utilized a destructive sampling technique for counting flowers and berries, compared with our non-destructive tagged stems.

Of the sampling methods available (e.g., Malaise trap, bowl trap or sweep netting), bowl trapping is considered the most efficient and cost effective, and it eliminates collector bias (Westphal et al. 2008). While Leong and Thorp (1999) showed that generalist bees are attracted to yellow bowls, the recent literature suggests that a combination of blue, white and yellow bowls should be used in studies on bee diversity and abundance (Toler et al. 2005; Campbell and Hanula 2007). The present study was limited to using yellow bowls. However, during the summer of 2011, transects of alternating blue, white and yellow bowls (12 per colour) were placed in the blueberry farms. The yellow bowls captured over twice as many halictid and *Bombus* bees as the blue bowls, but similar numbers to the white bowls (Hicks, unpublished data). Thus, it appears that the exclusive use of yellow bowls, while not ideal, should provide valuable information on the relative diversity and abundance among the plots. It must be noted here that bowl trapping generally fails to capture larger bodied bees such as bumblebees and honey bees at frequencies that are reflective of their perceived natural abundance (Toler et al. 2005).
Smaller and medium sized bees are captured more readily in the trap fluid, while the larger species may have the ability to escape the fluid. This appears to be the case during this study, as the proportion of bumble bees captured by the Malaise traps was greater than that trapped in the bowls. Therefore, it is possible that the bowl trapping underestimated the number of large bees in the habitats. The number of bees in general collected by the Malaise trap was considerably lower compared to the bowl traps. Campbell and Hanula (2007) suggested that bowls are better than Malaise traps for bee sampling, mainly because bees' flight abilities may allow them to avoid capture in the Malaise trap.

Despite the greater abundance of bees in the bowls in the managed plots compared to unmanaged plots in 2006 and 2007, there was no increase in the number of fruit produced. Notwithstanding the biases of bowl traps to underestimate the abundance of large bees in the habitat, the lower fruit set in the managed sites could be the result of the differences in the density of the flowers between the two types of plots. While it was not measured directly, the managed plots had a much greater flower density than the natural plots. It is possible that while the bee abundance was greater in the managed plots, it was not great enough to pollinate all of the flowers available. In contrast, the natural plots had a low abundance of bees, but those bees did not have as many flowers to visit and thus they pollinated those flowers with greater ease.

In 2009, the greater abundance and diversity of bees in the natural plots compared to that in the un-supplemented managed plots, opposite to what was observed in previous years of sampling, could be explained by the fact that one of the natural plots chosen in that year was structurally very different from the natural plots of previous years. The soil at the 2009 natural site was considerably less rocky and had ground cover composed of grasses and numerous herbaceous plants. In the previous years, the natural sites were composed mostly of bare rocky soil with patches of the ericaceous shrubs and conifer trees. These differences in habitat could account for the difference in abundance in the natural site compared to the managed sites for 2009, and they support the study by Steffan-Dewenter et al. (2002), who showed that structurally more complex habitats had increased species richness and abundance of wild bees.

Supplementation of blueberry fields in eastern Newfoundland with *Bombus impatiens* or *Apis mellifera* did not increase fruit set in those fields. In fact, there was significantly lower fruit set when the fields were supplemented with either species (Table 6). In 2008, the number of *Bombus impatiens* captured in the bowl traps was low and may have been an artifact of bowl sampling. While fruit set was lower in the supplemented field, the percent pollination was significantly higher in that field in 2008. In other words, *Bombus impatiens* seemed to be good at transporting pollen between flowers, but those flowers did not produce fruit. In this case, it is possible that the pollen transferred came from flowers of the same clone. The “near-neighbor” model of pollen distribution suggests that plants are expected to receive much of their own pollen and that of their nearest neighbors (Turner et al. 1982). In this case, a clone of blueberries, which is generally genetically homogeneous (Bell et al.
2009), dominates almost exclusively in patches ranging from 7–23 m$^2$ (Yarborough 2009). The results from the present study, where significantly more pollen was transferred to flower stigmata but where the flowers failed to produce fruit, may be explained by these flowers receiving incompatible pollen from their nearest neighbors. Lowbush blueberry is considered to be generally self-incompatible (Aalders and Hall 1961; Wood 1968; Hall et al. 1979). While significant amounts of pollen were transferred, this pollen was not compatible and, owing to physiological barriers to self-pollination, fruit production was aborted. This fruit abortion of self-infertile clones was shown by Aalders and Hall (1961).

Supplementation of farms with *Apis mellifera* did not show increased transfer of pollen or an increase in fruit-set (Table 6). In this case, the transects were close to the hives, but only a small proportion of the collected bees were *Apis mellifera*. Bowl sampling may have underestimated the abundance of honey bees in the fields; however, the Malaise trap placed 100 m from the hives did not collect any *Apis mellifera* during the blooming period. The recommended stocking rate for honey bees on Maine blueberry farms is 7.5–10 hives/ha (Stubbs and Drummond 2001). The present study may not have had an adequate number of hives available (1.5–2 hives/ha), although Lomond and Larson (1983) showed an increase in the rate of blueberry pollination using a stocking rate of 1.7 hives/ha. Generally, the ability of native bees to buzz-pollinate the blueberry flowers is an advantage over honey bees, which take nectar and do not sonicate (Javorek et al. 2002). While honey bees are not considered to be the most efficient bees for pollinating blueberry, some authors found that their higher abundances made up for their inefficiencies and increased fruit-set (Lomond and Larson 1983; Eaton 1992; Aras et al. 1996; Dedej and Delaplane 2003). In contrast, Wood (1961) did not find an increase in blueberry fruit set with increased honey bee density (stocking rate up to 3.6 hives/ha). Results from the present study do not support the idea that increased honey bee density increases fruit production, at least in eastern Newfoundland during that particular year and at the stocking rate used.

Unlike others who have shown that supplementation of blueberry fields with bees increased fruit-set (Lomond and Larson 1983; Eaton 1992; Aras et al. 1996; Stubbs and Drummond 2001; Desjardins and de Oliveira 2006; Tuell et al. 2009), supplementation with *Bombus impatiens* and *Apis mellifera* did not increase fruit set but was actually detrimental, as percent fruit-set was significantly lower in the supplemented fields compared to the un-supplemented fields. Intuitively, it would seem logical that supplementation with bees should combine with the pollination activities of native bees and thus show an overall increase. Greenleaf and Kremen (2006) showed that interactions with wild bees caused a 5-fold increase in the pollination efficiency of honey bees on hybrid sunflower. In addition, while honey bees are not aggressive toward other insects while foraging, they do compete with other species for floral resources (Goulson 2003 and references therein). Thomson (2004) showed that honey bees competitively suppressed a native social bee known to be an important pollinator. Winfree et al. (2007) suggested that native bees alone can provide sufficient pollination and that supplementation may not be required in some agro-ecosystems.
The supplementation of blueberry fields with *Bombus impatiens* is thought to be better than supplementation with honey bees (Stubbs and Drummond 2001; Desjardins and de Oliveira 2006). However, in eastern Newfoundland, the stocking rates of *Bombus impatiens* and *Apis mellifera* observed in this study failed to increase fruit set compared to un-supplemented areas and thus do not support their use by farmers in eastern Newfoundland. Increasing the stocking rates may increase pollination, but presently it is unknown if that would result in greater fruit production. As the bee fauna throughout Newfoundland is not well known, additional studies should be initiated to determine whether supplementation with introduced bees in those areas is worthwhile.

Presently, Newfoundland is in an enviable position regarding its population of *Apis mellifera*. The province has strict importation regulations, and because of its geographical isolation it does not harbour the same parasites that plague honey bees in other areas worldwide (Williams et al. 2010). Honey bees cannot survive as feral populations in Newfoundland, thus limiting the possibilities of disease transmission. The parasites and diseases of native *Bombus* spp. have not been studied, and we are unsure of their impact on these vulnerable populations. However, *Bombus* spp. in other areas of North America are known to harbor several pathogens. Pathogen spillover from commercial *Bombus impatiens* colonies to native species has been documented in other areas (Colla et al. 2006). In Newfoundland, several specimens of native *Bombus* species, including *Bombus ternarius*, *Bombus vagans bolsteri*, *Bombus terricola* and *Bombus (Psithyrus) fernaldae*, have been found inside the colony boxes of imported *Bombus impatiens* at the end of the season (personal observation). While it is unclear what impact the parasites and diseases may have on native bee species, they have been implicated as the cause of the decline of important bee pollinators in North America (Berenbaum et al. 2007). The decline in native species for various reasons (see Colla and Packer 2008) may open up new niches that could be filled by exotic species. With global warming, Newfoundland may be at greater risk of having non-native bees establish here as the climate becomes milder. In Newfoundland, many of the spring and fall flowering plants rely on native *Bombus* species for their pollination. The loss of native species through diseases and competition with exotic species may result in significant changes to the island’s ecosystem. We may see substantial changes in the availability of seeds and berries that will negatively impact the biodiversity of birds and mammals (Winter et al. 2006).

**ACKNOWLEDGMENTS**

I thank Martin and Beverly Walsh, John Clarke and Donald Howell for access to their blueberry fields. Cory Sheffield (York University, Toronto, Ontario) verified the identification of some of the bees. The following people assisted in fieldwork: M. Baird, J. Churchill, M. O’Grady, and V. Shute. This project was partially supported by an Industrial Research and Innovation Fund (IRIF) grant from the Department of Innovation, Trade and Rural Development, Government of Newfoundland.

**REFERENCES**

Aalders, L.E., and Hall, I.V. 1961. Pollen incompatibility and fruit set in lowbush blueberries. *Canadian Journal of Genetics and Cytology* **3**: 300–307.

Aras, P., de Oliveira, D., and Savoie, L. 1996.
Effect of a honey bee (Hymenoptera: Apidae) gradient on the pollination and yield of lowbush blueberry. *Journal of Economic Entomology* **89**: 1080–1083.

Bell, D.J., Rowland, L.J., Zhang, D., and Drummond, F.A. 2009. Spatial genetic structure of lowbush blueberry, *Vaccinium angustifolium*, in four fields in Maine. *Botany* **87**: 932–949.

Berenbaum, M., Bernhardt, P., Buchmann, S., Calderone, N.W., Goldstein, P., Inouye, D.W., Kevan, P.G., Kremen, C., Medellin, R.A., Ricketts, T., Robinson, G.E.A., Snow, A., Swinton, S.M., Thien, L.B., and Thompson, F.C. 2007. Status of Pollinators in North America. The National Academies Press, Washington, D.C.

Boulanger, L.W., Wood, G.W., Osgood, E.A., and Dirks, C.O. 1967. Native bees associated with lowbush blueberry in Maine and Eastern Canada. Maine Agricultural Experimental Station, Technical Bulletin 26. Orono, Maine.

Campbell, J.W., and Hanula, J.L. 2007. Efficiency of Malaise traps and colored pan traps for collecting flower visiting insects from three forested ecosystems. *Journal of Insect Conservation* **11**: 399–408.

Colla, S.R., Otterstatter, M.C., Gegear, R.J., and Thomson, J.D. 2006. Plight of the bumble bee: pathogen spillover from commercial to wild populations. *Biological Conservation* **129**: 461–467.

Colla, S.R., and Packer, L. 2008. Evidence for decline in eastern North American bumble bees (Hymenoptera: Apidae), with special focus on *Bombus affinis* Cresson. *Biodiversity and Conservation* **17**: 1379–1391.

Dedej, S., and Delaplane, K.S. 2003. Honey bee (Hymenoptera: Apidae) pollination of rabbiteye blueberry *Vaccinium ashei* var. ‘Climax’ is pollinator density-dependent. *Journal of Economic Entomology* **96**: 1215–1220.

Desjardins, E.C., and de Oliveira, D. 2006. Commercial bumble bee *Bombus impatiens* (Hymenoptera: Apidae) as a pollinator in lowbush blueberry (Ericales: Ericaceae) fields. *Journal of Economic Entomology* **99**: 443–449.

Eaton, L.J. 1992. Effect of pollinator number on fruit set and yield of lowbush blueberry. *Canadian Beekeeping* **17**: 32–34.

Finnamore, A.T., and Michener, C.D. 1993. Superfamily Apoidea. *In* Hymenoptera of the World: an identification guide to families. *Edited by* H. Goulet and J.T. Huber. Agriculture Canada Publication 1894/E. pp. 279–357.

Goulson, D. 2003. Effects of introduced bees on native ecosystems. *Annual Review of Ecology, Evolution and Systematics* **34**: 1–26.

Government of Newfoundland. 2011. Available from http://www.nr.gov.nl.ca/nr/agrifoods/crops/berries/blueberry.html [accessed 18 December 2011].

Greenleaf, S.S., and Kremen, C. 2006. Wild bees enhance honey bee’s pollination of hybrid sunflower. *Proceedings of the National Academy of Sciences* **103**: 13890–13895.

Hall, I.V., Aalders, L.E., Nickerson, N.L., and Vander Kloet, S.P. 1979. The Biological Flora of Canada. 1. *Vaccinium angustifolium* Ait., Sweet Lowbush Blueberry. *Canadian Field-Naturalist* **93**: 415–430.

Hammer, Ø., Harper, D.A.T., and Ryan, P.D. 2001. PAST: Paleontological Statistics Software Package for Education and Data Analysis. *Palaeontologia Electronica* **4**(1). Available from http://palaeo-electronica.org/2001_1/past/issue1_01.htm [accessed 22 December 2010].

Hicks, B.J. 2009. Observations of the nest structure of *Osmia inermis* (Hymenoptera: Megachilidae) from Newfoundland, Canada. *Journal of the Acadian Entomological Society* **5**: 12–18.

Javorek, S.K., McKenzie, K.E., and Vander Kloet, S.P. 2002.
Comparative pollination effectiveness among bees (Hymenoptera: Apoidea) on lowbush blueberry (Ericaceae: *Vaccinium angustifolium*). *Annals of the Entomological Society of America* **95**: 345–351.

Leong, J.M., and Thorp, R.W. 1999. Colour-coded sampling: the pan trap colour preferences of oligolectic and nonoligolectic bees associated with a vernal pool plant. *Ecological Entomology* **24**: 329–335.

Lomond, D., and Larson, D.J. 1983. Honey bees, *Apis mellifera* (Hymenoptera: Apidae), as pollinators of lowbush blueberry, *Vaccinium angustifolium*, on Newfoundland coastal barrens. *The Canadian Entomologist* **115**: 1647–1651.

Michener, C.D. 2007. The bees of the world. The Johns Hopkins University Press, Baltimore.

Poole, R.W. 1974. An introduction to quantitative ecology. McGraw-Hill, New York.

Ricketts, R. 2004. An overview of the Newfoundland and Labrador Agrifoods Industry. Available from http://www.nr.gov.nl.ca/nr/publications/agrifoods/overview04.pdf, Government of Newfoundland and Labrador, Department of Natural Resources [accessed 4 January 2011].

Sheffield, C.S., Kevan, P.G., Smith, R.F., Rigby, S.M., and Rogers, R.E.L. 2003. Bee species of Nova Scotia, Canada, with new records and notes on bionomics and floral relations (Hymenoptera: Apoidea). *Journal of the Kansas Entomological Society* **76**: 357–384.

Statistics Canada. 2011. Available from http://www.statcan.gc.ca/pub/22-003-x/22-003-x2010001-eng.htm [accessed 18 December 2011].

Steffan-Dewenter, I., Münzenberg, U., Bürger, C., Thies, C., and Tscharntke, T. 2002. Scale-dependent effects of landscape context on three pollinator guilds. *Ecology* **83**: 1421–1432.

Stubbs, C.S., and Drummond, F.A. 2001. *Bombus impatiens* (Hymenoptera: Apidae): an alternative to *Apis mellifera* (Hymenoptera: Apidae) for lowbush blueberry pollination. *Journal of Economic Entomology* **94**: 609–616.

Thomson, D. 2004. Competitive interactions between the invasive European honey bee and native bumble bees. *Ecology* **85**: 458–470.

Toler, T.R., Evans, E.W., and Tepedino, V.J. 2005. Pan trapping for bees in Utah’s west desert: the importance of color diversity. *Pan-Pacific Entomologist* **81**: 103–113.

Tuell, J.K., Ascher, J.S., and Isaacs, R. 2009. Wild bees (Hymenoptera: Apoidea: Anthophila) of the Michigan highbush blueberry agroecosystem. *Annals of the Entomological Society of America* **102**: 257–287.

Turner, M.E., Stephens, C., and Anderson, W.W. 1982. Homozygosity and patch structure in plant populations as a result of nearest-neighbor pollination. *Proceedings of the National Academy of Sciences* **79**: 203–207.

Vander Kloet, S.P. 1976. Nomenclature, taxonomy and biosystematics of *Vaccinium* section Cyanococcus (the blueberries) in North America. 1. Natural barriers to gene exchange between *Vaccinium angustifolium* Ait. and *Vaccinium corymbosum* L. *Rhodora* **78**: 503–515.

Westphal, C., Bommarco, R., Carré, G., Lamborn, E., Morrison, N., Petanidou, T., Potts, S.G., Roberts, S.P.M., Szentgyörgyi, H., Tscheulin, T., Vaissière, B.E., Woyciechowski, M., Biesmeijer, J.C., Kunin, W.E., Settele, J., and Steffan-Dewenter, I. 2008. Measuring bee diversity in different European habitats and biogeographical regions. *Ecological Monographs* **78**: 653–671.

Williams, G.R., Head, K., Burgher-MacLellan, L., Rogers, R.E.L., and Shutler, D. 2010. Parasitic mites and microsporidians in managed western honey bee colonies on the island of Newfoundland, Canada. *The Canadian Entomologist* **142**: 584–588.

Winfree, R., Williams, N.M., Dushoff, J., and Kremen, C. 2007.
Native bees provide insurance against ongoing honey bee losses. *Ecology Letters* **10**: 1105–1113.

Winter, K., Adams, L., Thorp, R., Inouye, D., Day, L., Ascher, J., and Buchmann, S. 2006. Importation of non-native bumble bees into North America: potential consequences of using *Bombus terrestris* and other non-native bumble bees for greenhouse crop pollination in Canada, Mexico and the United States. A White Paper of the North American Pollinator Protection Campaign. 33 pp.

Wood, G.W. 1961. The influence of honey bee pollination on fruit set of the lowbush blueberry. *Canadian Journal of Plant Science* **41**: 332–335.

Wood, G.W. 1968. Self-fertility in the lowbush blueberry. *Canadian Journal of Plant Science* **48**: 433–434.

Yarborough, D. 2009. Wild Blueberry Culture in Maine. Wild Blueberry Fact Sheet No. 220. Available from http://umaine.edu/blueberries/factsheets/production/wild-blueberry-culture-in-maine/ University of Maine, Orono, Maine [accessed 4 January 2011].
Don’t miss the opportunity to advertise your business, products, and services to this targeted audience! Attract new customers or supporters and connect with existing ones with a sponsorship, ad, or a vendor booth at our conference. Your support helps cover costs and allows us to keep registration affordable. If small producers, farmers, soil, and food are important to you, then get behind this conference!

The Utah Farm & Food Conference (UFC) is an educational and networking hub for farmers, ranchers, foodies, chefs, small-artisan producers, food retailers, activists, researchers, and educators. The conference features pre-conference events, a farm tour, film screening, farmer’s market, workshops, keynote speakers, exhibitors, vendors, a mixer, live entertainment, a seed exchange, and organic local culinary fare.

The Utah Farm & Food Conference is organized by Red Acre Center, a 501(c)(3). We pull together a group of farmers, ranchers, advocates, chefs, and consumers to make this conference happen. The 4th annual UFC is expected to draw over 250 attendees. Attendees are predominantly from Utah, but 10 states were represented in 2019. UFC brings together internationally acclaimed keynote speakers and local experts to strengthen the regenerative farming movement, to help farmers and small artisan producers create more successful businesses, and to foster a sense of community among those engaged in the local and sustainable food and farm movement. Conferences similar to ours are hosted across the United States, some as old as 40 years with 3,000 attendees, but in Utah, this is the first of its kind—the only one in Utah, and southern at that! The Utah Farm Conference focuses not only on learning but on gathering and being inspired.

Red Acre Center is a membership-based organization growing in membership. We have a growing social media presence on Instagram and Facebook and an active digital newsletter. Printed materials are distributed throughout the entire state of Utah, in parts of 10 other states, and at several of the largest organic and biodynamic farm conferences in the country.

Go “shopping”! Choose from the following list what would benefit you, your product, or company the most. Total the dollar amount up and that is the level sponsor you are!

**4 Season Sponsor** $2,000 monetary donation OR $2,200 value in trade (product or media sponsors welcome). Your business logo listed in our conference program and on our website as a 4 Season sponsor.

**Fall** $1,500 monetary donation OR $1,150 value in trade (product or media sponsors welcome). Your business logo listed in our conference program and on our website as a Fall sponsor.

**Summer** $1,000 monetary donation OR $1,100 value in trade (product or media sponsors welcome). Your business logo listed in our conference program and on our website as a Summer sponsor.

**Spring** $500 monetary donation OR $550 value in trade (product or media sponsors welcome). Your business logo listed in our conference program and on our website as a Spring sponsor.

**Winter** $250 monetary donation OR $275 value in trade (product or media sponsors welcome). Your business or name listed in our conference program and on our website as a Winter sponsor.

**Seasonal** $100 monetary donation OR $110 value in trade (product or media sponsors welcome). Your business or name listed in our conference program and on our website as a Seasonal sponsor.

**Sponsorship Opportunities**

Have an idea? Want to change something? Let’s switch it up! We love new ideas, so please let us know!
**Bites & Beverages** (1 of 4 events open to the community, 1 available) $2,000, Deadline: Jan 20

Be the sponsor of this ultimate foodie event! Conference attendees will come together along with the local community for this evening event. Bites & Beverages is a farm-inspired appetizer cook-off featuring the culinary creations of 5 Utah chefs. Guests will mix and mingle while visiting the chefs’ stations for a bite made from local ingredients and sip local beer and wine. Guests will vote for the bite they liked the best, followed by an award and cash prize for the winning chef. Special perks of sponsoring this event: each chef will receive a participation award, and the winning chef will receive an oversized check with your logo prominently displayed on both the check and the awards presented. We will put up as many banners as you supply, in any size, in the event space. Your logo and name will also be featured on the color poster that is circulated and posted, and wherever the Bites & Beverages event is listed: social media, press releases, emails, special event page, pre-conference page, and the schedule - both online and in the printed program.

**Keynote Speaker** (2 available, 1 for each speaker) $2,000, Deadline: Jan 20

Help bring an internationally recognized speaker to the Utah Farm and Food Conference and link your business name and logo to one of the great names currently leading out in the agriculture movement. Benefits include verbal recognition from the podium before the keynote address and your business logo and name featured on a visual backdrop before the keynote address and wherever the speaker is listed: social media, press releases, emails, the schedule page, presenters page, and workshop page - both online and in the printed program. We’re not done yet! This also includes a photo opportunity with a sponsored speaker.

**Farm to Fork Dinner** (1 available) $1,500, Deadline: Jan 20

A highlight of the conference! Attendees gather for dinner at long tables and eat locally sourced food, served family style, prepared by Chef Shon of Sego and Wood Ash Rye. Sponsoring this event would include having any size banner prominently placed during the dinner, being recognized verbally during dinner, and having your business logo and name on: social media, press releases, emails, special events page, schedule page - both online and in the printed program.

**Photographer** (1 available) $1,500, Deadline: Feb 4

Broken Banjo is a nationally recognized photographer in the local food movement - photographing some of the largest farm conferences in the country. Help capture and memorialize the spirit of the conference! Benefits include one photo with a presenter or setting of your choice, and your business logo and name wherever the photographer is listed: social media, press releases, emails, and photographer profile - both online and in the printed program.

**Farmers Market** (1 of 4 events open to the community, free to attend, 1 available) $1,000, Deadline: Jan 20

What a cool event to host! Sponsoring this event would include having a banner of any size prominently placed during the farmers market and your logo and name on the color poster that is circulated and posted in southern Utah and wherever the Farmers Market event is listed: social media, press releases, emails, special event page, and schedule page - both online and in the printed program.

**Program / Schedule** (1 available) $1,000, Deadline: Jan 20

You will have the color centerfold with artwork you provide!
The program runs 30 pages or more of color and black and white images, ads, articles from presenters, and the schedule, which you cannot live without for the conference. The conference program is a take-home keepsake for sure! It is distributed at the conference to every attendee and posted on the website for over a year and online for years to come. The year following the conference we are constantly circulating the program: we have a copy of the program at every booth Red Acre Center sets up at events, and we promote future conferences with previous programs, including all of the details about advertisers and sponsors.

**Farmer Guild Mixer** (1 available) $700, Deadline: Jan 20

A late-night event for everyone, but yes, spirits are served - so this could be a fun event to sponsor. Sponsoring this event would include having any size banner prominently placed during the mixer, being recognized verbally during the mixer, and having your business logo and name on: the bingo card (we do this as a get-to-know-you activity), social media, press releases, emails, and the special event page and schedule page - both online and in the printed program.

**Lanyards** (1 available) $550, Deadline: Jan 20

Have your name and logo on every attendee’s keepsake take-home lanyard.

**Live Music at Bites & Beverages** (1 available) $500, Deadline: Jan 20

Be the proud sponsor of bringing live music - a great local band - to this event. Benefits of sponsoring include your logo and name on a prominent sign in front of the band, verbal recognition from the band during their set, and your logo and name on the color posters that are circulated and posted and wherever the band is listed: social media, press releases, emails, and the special event page, pre-conference page, and schedule page - both online and in the printed program.

**Live Music at the Farm to Fork Dinner** (1 available) $500, Deadline: Jan 20

Be the proud sponsor of bringing live music - a great local band - to this event. Benefits include your logo and name on a prominent sign in front of the band, verbal recognition from the band during their set, and your logo and name wherever the band is listed: social media, press releases, emails, and the special event page and schedule page - both online and in the printed program.

**Pre-Conference Workshop** (4 available) $500, Deadline: Jan 20

The benefits of sponsoring a pre-conference workshop include being verbally recognized during the event, and your business logo and name featured on: social media, press releases, emails, the pre-conference page and schedule page - both online and in the printed program.

**Live Music at the Farmer Guild Mixer** (1 available) $250, Deadline: Jan 20

Be the proud sponsor of bringing live music - a great local band - to this event. Benefits include your logo and name on a prominent sign in front of the band, verbal recognition from the band during their set, and your logo and name wherever the band is listed: social media, press releases, emails, and the special event page and schedule page - both online and in the printed program.

**Pre-Conference Workshop Venue** (4 available), Deadline: Jan 20

Do you have a kitchen, farm, location or venue that could be just right for a pre-conference workshop? Host the workshop and your space will be associated with sponsoring the workshop by having your business logo and name on: social media, press releases, emails, and the pre-conference page and schedule page - both online and in the printed program.
**Farm Tour** (1 available) $500, Deadline: Jan 20

Support the farm tour highlighting small diversified farms. Benefits include two seats on the tour, verbal recognition during the tour, and your business logo and name on: social media posts, emails, and the schedule and pre-conference page - both online and in the printed program.

**Film Screening** (1 of 4 events open to the community, free to attend, 1 available) $500, Deadline: Jan 20

Bring a film unique to this movement not only to UFC attendees but to the public, free of charge! Your logo will be on every poster we put up in southern Utah! Benefits also include your business logo and name wherever the film is listed: social media, emails, the schedule, and the special events page - both online and in the printed program.

**Photobooth** (1 available) $500, Deadline: Feb 4

We’re looking for a like-minded sponsor with a cool logo that would be in EVERY photo taken at the photobooth, in its own creative, fun way! Your logo and name wherever the photobooth is listed: social media, emails, and the special event page - both online and in the printed program.

**Seed Exchange** (1 of 4 events open to the community, free to attend, 1 available) $500, Deadline: Jan 20

Benefits of sponsoring include being verbally recognized during the seed exchange, having a banner of any size prominently placed during the seed exchange, and your business logo and name on: social media, press releases, emails, and the special events page and schedule page - both online and in the printed program.

**Conference Bag** (12 available) $400, Deadline: Jan 20

Each attendee will receive their program and a bag full of swag! Have your business name and logo on every bag attendees take home and lovingly use forever.

**Conference Tickets** (unlimited) $275, Deadline: Feb 4

Attend the conference as a proud sponsor! Benefits include whatever level you reach by amount spent, and recognition on your name badge.

**Scholarships** (unlimited) $225, Deadline: Feb 3

Cover the cost for one attendee at the “early-bird” price. Help new and beginning farmers, small artisan producers, and those involved in working towards a sustainable food movement attend the conference! Benefits include meeting the recipients at the reception (if you attend the conference), a thank-you letter from the recipient after the conference, as well as name recognition for your business or you personally, or a message in memory or honor of someone in the conference program and on the website.

**Booth Space** (12 available) $175, Deadline: Feb 1

Reach an influential, diverse and niche group of farmers, ranchers, owners, retailers, activists, educators, and supporters. Benefits include a booth space for 2 days with a 2’ x 6’ table.

**Something in the Bag** (item(s) you provide, unlimited) $150, Deadline: Feb 3

Each attendee will receive their program and a bag full of swag! Have your business name and/or logo on every item/piece of swag that you provide, and we will be sure one is put in every bag.

**Banner at the Conference** (you provide, 25 available), Deadline: Feb 3

We will proudly display your banner on the main stage where all keynote addresses and the film screening are held.
$75 approximate size 1.7’ x 3’
$125 approximate size 2.5’ x 4’ - 2.5’ x 8’
$150 approximate size 4’ x 6’ - 4’ x 8’

**Posters** (25 available) $100, Deadline: Jan 15

Your business logo on every poster!
We canvass the entire state of Utah and parts of Washington, Oregon, California, Arizona, Idaho, Montana, Wyoming, New Mexico, Colorado, and Nevada, and at several of the largest organic and biodynamic farm conferences in the country. We also put up posters in feed stores, breweries, seed companies, coffee shops, bakeries - anywhere we can think of. The save-the-date poster is due by May 4, 2020; the second round of posters is due by September 1, 2020.

**Farmers** (Unlimited), Deadline: Jan 20

Those that donate anything they produce will have a special place in our hearts and a logo on a page in our program dedicated to those who grow and raise our food!

**Farmers** (Unlimited), Deadline: Feb 6

Those that donate anything they produce will have a special place in our hearts!

**Snack Sponsor** (5 available), Deadline: Feb 3

Have your product served during a break, along with any literature about the product and your business.

**Advertising**

Ads reach an influential, diverse and niche group of farmers, ranchers, owners, retailers, activists, educators, and supporters.

**Ad Submission**

| Ad Size | Dimensions | B&W | Color |
|-----------|-------------------------------------------------|------|-------|
| Full Page | 7.25”w x 9.5”h | $350 | $450 |
| 1/2 Page | 7.25”w x 4.626”h or 3.5”w x 9.5”h | $200 | $275 |
| 1/4 Page | 7.25”w x 2.375”h or 3.5”w x 4.626”h | $100 | $125 |
| 1/8 Page | 3.5”w x 2”h (biz card) | $50 | $80 |

The program is distributed to all attendees, sponsors, vendors, advertisers and to those who are interested in participating the following year. The program will be available for viewing on the conference website immediately following the conference and will stay up for approximately one year.

**Artwork Guidelines**

• Press-ready PDF, JPEG, TIFF or EPS files accepted.
• No bleeds.
• No color-matching guarantee can be provided for screen-viewable electronic files.

Ads due by: Monday, Jan 20
Send your ad file to: firstname.lastname@example.org
Questions? Contact email@example.com 435-704-1222
Current transport along the [001] axis of YBCO in low-temperature superconductor—normal metal—high-temperature superconductor heterostructures

F. V. Komissinskii
Institute of Radio Engineering and Electronics, Russian Academy of Sciences, 103907 Moscow, Russia; M. V. Lomonosov Moscow State University, 119899 Moscow, Russia

G. A. Ovsyannikov\(^{a)}\)
Institute of Radio Engineering and Electronics, Russian Academy of Sciences, 103907 Moscow, Russia

N. A. Tulina and V. V. Ryazanov
Institute of Solid-State Physics, Russian Academy of Sciences, 142432 Chernogolovka, Moscow Region, Russia

(Submitted 27 May 1998; resubmitted 14 July 1999)
Zh. Éksp. Teor. Fiz. **116**, 2140–2149 (December 1999)

The electrophysical properties of heterojunctions several microns in size, obtained by successive deposition of the metal-oxide high-temperature superconductor $\text{YBa}_2\text{Cu}_3\text{O}_x$, a normal metal Au, and the low-temperature superconductor Nb, were studied experimentally. Current flows in the [001] direction of the epitaxial $\text{YBa}_2\text{Cu}_3\text{O}_x$ film. It is shown, by comparing the experimental data with existing theoretical calculations, that for the experimentally realizable transmittances ($D = 10^{-5} - 10^{-6}$) of the $\text{YBa}_2\text{Cu}_3\text{O}_x$—normal metal boundary the critical current of the entire heterostructure is low (of the order of the fluctuation current) because of a sharp change in the amplitude of the potential of the superconducting carriers at this boundary. The current–voltage characteristics of the heterostructure studied correspond to tunnel junctions consisting of a superconductor with $d_{x^2-y^2}$ type symmetry of the superconducting wave function and a normal metal. © 1999 American Institute of Physics. [S1063-7761(99)02012-0]

1. INTRODUCTION

Currently, many properties of HTSCs are being described using a $d$-type wave function for the superconducting carriers. Specifically, this model explains the magnetic field dependence of the critical current in bimetallic two-junction SQUIDs consisting of $\text{YBa}_2\text{Cu}_3\text{O}_x$ (YBCO) and Pb\(^1\) and the spontaneous excitation of magnetic flux quanta in HTSC structures with three bicrystalline boundaries.\(^2\) At the same time, experiments on electron tunneling in the $c$ direction in HTSCs give contradictory results. On the one hand, in HTSC—low-temperature superconductor ($s$-type superconducting wave function) junctions there is no critical current for junctions in the $c$ direction,\(^{3-5}\) which agrees well with the theory of junctions consisting of superconductors with a $d$-type wave function for the superconducting carriers and an $s$-superconductor. On the other hand, an appreciable critical current, whose amplitude varies nonmonotonically as a function of the magnetic and microwave fields, as predicted for junctions with $s$-superconductors, has been observed in a number of experiments.\(^{6-8}\) To explain the experiments of Refs. 6–8, it has been conjectured that in yttrium-group HTSC materials a mixture of superconducting $s$- and $d$-type carriers arises because of the orthorhombic nature of these materials, and diffuse scattering near the boundary or twinning of HTSC films results in a larger contribution from the $s$ component.\(^{9,10}\) We note that an estimate of the parameters of the Pb/(Au,Ag)/YBCO structures investigated in Refs.
6–8 gives transmittances $D = 10^{-7} - 10^{-9}$, averaged over the directions of the momenta, for the HTSC—normal metal barriers, with quite large junction areas, $S = 0.1 - 1 \text{ mm}^2$.

In the present paper we report the results of an experimental investigation of current flow in $s$-superconductor—normal metal—HTSC heterojunctions, fabricated by successive deposition of YBCO, a normal metal (ordinarily Au), and Nb, with much smaller areas ($S \approx 8 \times 8 \mu\text{m}^2$) and higher transmittance ($10^{-5} - 10^{-6}$) of the YBCO—normal metal boundary. The experimental data are analyzed from two standpoints: on the basis of the isotropic theory of $s$ superconductivity, and from the standpoint of the modern theory, which assumes a $d$-type wave function in the superconducting YBCO film.

2. EXPERIMENTAL PROCEDURE AND EXPERIMENTAL SAMPLES

The junctions were prepared using the sequence of operations shown in Fig. 1. First, the epitaxial YBCO films were grown either by laser ablation or by cathodic sputtering in a diode configuration with dc current and high oxygen pressure. During YBCO film growth, a temperature of 700–800 °C was maintained, and the pure oxygen pressure was 0.3–1 mbar for laser ablation and 3 mbar for cathodic sputtering. Neodymium gallate with (110) orientation or the $r$ plane of sapphire with a CeO$_2$ buffer layer was used as the substrate. Epitaxial YBCO films, 100–150 nm thick, with $c$ orientation and the following superconducting parameters, measured by the resistive method, were obtained: 1) the critical temperature at which the resistance of the film deposited on a $5 \times 5$ mm$^2$ substrate is zero, $T_{cf} = 84 - 89$ K; 2) the width of the superconducting transition (determined at the levels 0.9 and 0.1 times the resistance of the film at the onset of the transition into the superconducting state), $\Delta T_c = 0.5 - 1$ K; 3) the ratio of the resistances at temperatures 300 K and 100 K, $\rho_{300\text{K}} / \rho_{100\text{K}} \approx 2.8$. The number of particles 0.3–1 μm in diameter on the surface of the YBCO film, which are caused by the formation of different phases of YBCO as well as Y, Ba, and Cu oxides, was $\sim 10^6$ cm$^{-2}$. Evidence of the high quality of the fabricated YBCO films is the small width of the (005) x-ray peak of YBCO, FWHM(005) $\approx 0.2^\circ$, for $\theta/2\theta$ scanning at 0.15 μm film thickness.

A thin, 20 nm thick, layer of normal metal (Au, Ag, Pt) was deposited at 100 °C immediately after the YBCO film, using either laser ablation or high-frequency cathodic sputtering (Fig. 1a). Next, a 100–150 nm thick Nb layer was deposited on a water-cooled substrate by magnetron cathodic sputtering. The critical temperature of the superconducting transition in the Nb films is 9.1–9.2 K. Niobium is used as the low-temperature superconductor because it does not enter into a solid-phase chemical reaction with Au. We note that in the experiments of Refs. 4–7, where Pb is used, a superconducting alloy of Au and Pb can form. In the trilayer heterostructure obtained, photolithography and ion and plasma-chemical etching were used to form the regions of the heterojunctions, which during photolithography were positioned on sections with the minimum number of particles on the surface of the YBCO films (Fig. 1b). To prevent electrical contact in the basal (a–b) plane of the YBCO film, the lateral region of the junction was insulated with a CuO$_2$ layer with a central window with the dimensions $S = 8 \times 8$ μm$^2$ (Fig. 1c).
At the final stage, explosive (lift-off) lithography was used to form the junction areas and Au wiring in the form of two stripes, which enable separate input of current and voltage to the top Nb electrode (Figs. 1d, e). The geometry used for the gold contacts (see Fig. 1) makes it possible to investigate the electrophysical properties of Nb/Au/YBCO structures with the YBCO film in the superconducting state. More than 30 Nb/normal metal/YBCO samples, where Au, Ag, and Pt were used as the normal metal, were prepared. In the present paper we report the results of investigations performed on nine Nb/Au/YBCO samples, in which the spread of the characteristic resistances $R_N S$ ($R_N$ is the differential resistance, measured for $V > 20$ mV) of the boundaries at liquid-helium temperature did not exceed a factor of 4 (see Table I).

3. EXPERIMENTAL RESULTS

The temperature dependences of the resistances $R$ of the heterojunctions and of 4 μm wide test bridges, consisting of YBCO films placed on the same substrate, were measured with 1–5 μA bias currents, and their current–voltage characteristics (IVCs) were recorded in the temperature range 4.2–300 K. Figure 2 shows the temperature dependences for one of the substrates. At temperatures $T > T_{cf}$ metallic behavior of $R(T)$ is observed, i.e., the resistance decreases with temperature, as is characteristic for a $c$-oriented YBCO film with current flow in the basal plane of YBCO. As a rule, $T_{cf}$ of the bridges and heterojunctions was less than the critical temperature of the YBCO films measured immediately after the trilayer heterostructure was prepared. The degradation of the superconducting properties of the film is evidently due to a decrease in the amount of oxygen during ion etching.

The inset in Fig. 2 shows the function $R(T)$ for a heterojunction at temperatures $T < T_{cf}$, demonstrating that the resistance of the heterojunction increases as the temperature decreases. The value of $R(T)$ at temperatures $T < T_{cf}$ depends on the current. This attests to a nonlinear current dependence of the differential resistance $R_d$ of the heterojunction. A family of curves of $R_d$ versus the voltage $V$ at various temperatures is shown in Fig. 3. It is evident that $R_d(0)$ increases as $T$ decreases. This growth is reflected in an increase of the resistance $R(T)$ (Fig. 2). The nonlinearity observed in the IVC in the temperature range 72 K $< T <$ 84 K is due to the destruction of the superconductivity of the YBCO film: the function $R_d(V)$ increases with $V$ because of the systematic destruction of superconductivity in sections of the YBCO electrode as the current $I$ increases. We note that the junction resistance at $T \approx T_{cf}$ is somewhat higher than the asymptotic resistance $R_N$, measured for $V > 20$ mV and $T \ll T_{cf}$.

The results of the measurements of the electrophysical parameters of several samples prepared by the same method are presented in Table I. The resistance $R_N S$ of the boundary at $T = 4.2$ K makes it possible to estimate the average (over the direction of the momentum of the quasiparticles) boundary transmittance, which we shall employ below, as
$$\bar{D} = \frac{2 \pi^2 \hbar^3}{e^2 p_F^2} \frac{1}{R_N S} = \frac{2 \rho^{\mathrm{YBCO}} l^{\mathrm{YBCO}}}{3 R_N S},$$
where $p_F$ is the smallest value of the Fermi momentum for YBCO or Au. The values of the transmittance of the boundaries of the fabricated structures for $\rho^{\mathrm{YBCO}} l^{\mathrm{YBCO}} \approx 3.2 \times 10^{-11}\ \Omega \cdot \text{cm}^2$ (Ref. 4) are also presented in Table I.
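As a rough numerical check of the displayed formula, the sketch below evaluates both forms of $\bar{D}$. The input numbers are illustrative assumptions on our part, not entries from Table I (which is not reproduced in this excerpt): $R_N S$ is taken as $10^{-6}\ \Omega\cdot\text{cm}^2$, $p_F$ is estimated for Au from $k_F \approx 1.2 \times 10^{10}$ m$^{-1}$, and $\rho^{\mathrm{YBCO}} l^{\mathrm{YBCO}}$ is the Ref. 4 value quoted above.

```python
import numpy as np

hbar = 1.0546e-34   # reduced Planck constant, J*s
e = 1.6022e-19      # elementary charge, C

# Illustrative assumptions (Table I is not reproduced in this excerpt):
R_N_S = 1.0e-6 * 1e-4    # R_N*S = 1e-6 Ohm*cm^2, converted to Ohm*m^2
k_F_Au = 1.2e10          # Fermi wave vector of Au, 1/m
p_F = hbar * k_F_Au      # Fermi momentum, kg*m/s
rho_l = 3.2e-11 * 1e-4   # rho^YBCO * l^YBCO = 3.2e-11 Ohm*cm^2 (Ref. 4), in Ohm*m^2

# Sharvin-type form of the displayed formula:
D_sharvin = 2 * np.pi**2 * hbar**3 / (e**2 * p_F**2 * R_N_S)

# Equivalent form via the resistivity-mean-free-path product:
D_rho_l = 2 * rho_l / (3 * R_N_S)

print(f"D (Sharvin form) ~ {D_sharvin:.1e}")  # ~5e-6
print(f"D (rho*l form)   ~ {D_rho_l:.1e}")    # ~2e-5
```

With these inputs, both forms land in the $10^{-5} - 10^{-6}$ range quoted above for the fabricated junctions.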
Test samples with bilayer heterostructures Au/YBCO, Nb/YBCO, and Au/Nb, fabricated under the same conditions as the experimental Nb/Au/YBCO heterostructures, were also investigated. The resistances $R_N S$ of these boundaries, measured at liquid-nitrogen temperature, are $R_N S(\text{Au/YBCO}) \sim 10^{-8} \Omega \cdot \text{cm}^2$, $R_N S(\text{Au/Nb}) \sim 10^{-12} \Omega \cdot \text{cm}^2$, and $R_N S(\text{Nb/YBCO}) \sim 10^{-4} \Omega \cdot \text{cm}^2$. Here the series resistance of the YBCO film for $T_{cf} < 77$ K was taken into account. Comparing these quantities with the data presented in Table I, it is evident that the resistance of the Au/Nb boundary can be neglected, and that the main contribution to the resistance of the experimental heterojunctions comes from the Au/YBCO boundary, whose resistance increases when Nb is deposited on top of Au, probably because of the interaction of Nb with YBCO. The resistance of a direct Nb/YBCO contact is very large. Most likely, the increase in the contact resistance is due to the displacement of oxygen out of the YBCO film into the Nb deposited on top, which has good gettering characteristics. We note that the oxygen mobility in the $a$–$b$ planes of YBCO is much higher than in the $c$ direction. Figure 4 shows the surface of a bilayer Au/YBCO heterostructure, measured with an atomic-force microscope. It is evident that the surface consists of Au granules separated by $\sim 1$ μm. The subsequently deposited Nb film covers the surface of the Au granules, where a good electric contact with the YBCO film is created, and elsewhere forms a direct contact with YBCO, where, as a result of a decrease in the amount of oxygen in the basal planes, the contact resistance is much higher. This could be the reason why the resistance of the trilayer Nb/Au/YBCO heterostructure is higher than that of a bilayer Au/YBCO heterostructure.

4. DISCUSSION OF THE EXPERIMENTAL RESULTS

The experimental trilayer heterostructure can be represented as Nb/Au/YBCO granules connected in parallel and sections of direct contact of Nb and YBCO via pores in the Au film. Since the characteristic resistance of the Nb/YBCO boundary is several orders of magnitude greater than $R_N S$ of the trilayer Nb/Au/YBCO heterostructure, and the surface areas of the granules and pores, according to our estimates, differ severalfold (see Fig. 4), current flows mainly through the boundary of the Nb/Au/YBCO granules. A trilayer Nb/Au/YBCO heterostructure can be described by the model shown in Fig. 5: a 100–150 nm superconducting YBCO electrode ($S_d$) with critical superconducting transition temperature $T_{cY} = 87$ K; a 1–3 nm YBCO layer ($S'_d$) with an oxygen deficit and therefore disrupted superconducting properties; a 10–20 nm thick layer of normal metal (Au); and a 100–150 nm thick superconducting Nb electrode ($S_s$) with $T_c = 9.2$ K. A similar model has been proposed in Ref. 4 to estimate the electrophysical parameters of the Pb/Au/YBCO system. First, we shall estimate the change in the superconducting order parameter in Nb as a result of the contact with Au. Since the measured value of the boundary resistance is quite small, it can be assumed that the superconducting Green's function characterizing the amplitude of the pair potential $\Phi$ of the superconducting carriers, and its derivative with respect to the coordinate $x$, are continuous at the boundary. Using the calculations of Refs.
12 and 13, we find that the superconducting order parameter $\Delta_1$ of Nb at the Nb/Au boundary is somewhat less than its equilibrium value $\Delta_{\text{Nb}}$ in the interior volume of the film and is $\Delta_1/e \approx 560\,\mu$V. For the estimates, the following values of the electrophysical parameters of Nb and Au at $T = 4.2$ K were used: $\rho^{\text{Nb}} l^{\text{Nb}} = 4 \times 10^{-12}\ \Omega \cdot \text{cm}^2$, $\xi^{\text{Nb}} = 0.73 \times 10^{-6} \text{ cm}$, $v_F^{\text{Nb}} = 3 \times 10^7 \text{ cm/s}$, $T_{c0}^{\text{Nb}} = 9.2$ K and $\rho^{\text{Au}} l^{\text{Au}} = 8 \times 10^{-12}\ \Omega \cdot \text{cm}^2$, $\xi^{\text{Au}} = 10^{-6} \text{ cm}$, and $v_F^{\text{Au}} = 1.4 \times 10^8 \text{ cm/s}$, where $v_F^{\text{Nb,Au}}$ is the Fermi velocity and $l^{\text{Nb,Au}}$ is the mean free path in Nb and Au, respectively. Let us estimate the change in the order parameter at the YBCO/Au boundary. We assume that, as a result of the interaction of YBCO and Nb, a superconducting surface layer $S'_d$ of the order of 3 nm thick with critical temperature less than 4 K is formed. Assuming that the coherence length of $S'_d$ differs negligibly from $\xi_{c\text{-YBCO}}$ and is $\xi_{S'_d} = 5 \times 10^{-8} \text{ cm}$, and that the resistivity increases by an order of magnitude, from $\rho_{c\text{-YBCO}} = 10^{-4}\ \Omega \cdot \text{cm}$ to $\rho_{S'_d} = 1 \times 10^{-3}\ \Omega \cdot \text{cm}$, we obtain that at the Au/YBCO boundary the order parameter on the YBCO side decreases by a factor of approximately 100, $\Delta_2'/e \approx 140\,\mu$V.

FIG. 4. Three-dimensional image of the surface of a bilayer Au/YBCO heterostructure. The image was obtained with an atomic-force microscope.

FIG. 5. Schematic diagram of the distribution of the order parameter (solid line) and the amplitude of the pair potential (dashed lines) in a direction perpendicular to the surface of an Nb/Au/YBCO heterostructure.

A potential barrier with low transmittance, $\tilde{D} \sim 10^{-6}$, is present at the Au/YBCO boundary. This barrier decreases $\Delta_2$ by another factor of $\tilde{D}$: $\Delta_2 = \Delta_2'\tilde{D}$. Here we used theoretical estimates that are strictly applicable to superconductors with $s$-type pairing. However, as the calculations of Refs. 10 and 14 show, the character of the change in the order parameter at the boundary of a $d$ superconductor with a normal metal or insulator does not differ much from that for a junction with an $s$ superconductor, for orientations of the normal to the $d$ superconductor along the principal crystallographic axes. As a result, we can estimate the amplitude of the superconducting current through the entire structure by using the model of a superconductor—normal metal—superconductor ($S'_dNS$) junction, on the boundaries of whose weak section the values of the order parameters are known: $\Delta_2/e \approx 0.004\,\mu$V and $\Delta_1/e \approx 560\,\mu$V. In what follows, we shall employ the theory developed for $S$–$N$–$S$ junctions. The thickness of the $N$ layer is of the order of the coherence length, so the change in the superconducting order parameter in the interlayer can be neglected. As a result, the product of the critical current $I_c$ and $R_N$ at low temperature is $I_cR_N \approx (\Delta_1\Delta_2)/e = 0.09\,\mu$V.
Taking account of the resistance of the heterojunctions ($R_N = 10\,\Omega$), we obtain that the critical current of the structure, $I_c \approx 0.009\,\mu$A, is less than the fluctuation current $I_f = 1\,\mu$A of the measuring system and does not affect the experiment, even if YBCO contains a mixture of $d$ and $s$ components of the superconducting order parameter and the $s$ component is larger than the $d$ component. For pure $d$ pairing, the superconducting current for flow along the $c$ direction in YBCO must be zero because of the symmetry of the superconducting order parameter. To estimate the critical current we assumed that the large width of the potential barrier (several coherence lengths) prevents direct tunneling of the superconducting current through the barrier. We note that we have considered quite strong suppression of the order parameter at the YBCO boundary because of degradation of the superconducting parameters of the HTSC film. However, even in the absence of suppression of the order parameter in the surface layer of YBCO, $\Delta_2'/e = 14\,$mV, the critical current of the Nb/Au/YBCO heterojunctions will once again be comparable to the fluctuation current because of the decrease in the order parameter at the low-transmittance barrier. The finite critical current observed in a number of works in Pb(Au,Ag)/YBCO heterostructures, with a much larger value of $R_N S$ and large junction areas, could be due to the fact that treatment of the YBCO electrode with a solution of bromine in alcohol, as was done in those works, opens up the basal planes of YBCO, the transmittance of whose boundaries with a normal metal or ordinary superconductors is three orders of magnitude higher than in the $c$ direction ($R_{ab} S_{ab} \ll R_c S_c$). Ultimately, the superconducting current flows along the contacts to the basal plane of YBCO, and the normal resistance is determined by the parallel connection of the resistances of the boundaries along the $c$ direction and in the basal plane. In our case current flow is impeded in the direction of the basal plane along the Nb/YBCO junctions, most likely because of substantial displacement of oxygen out of YBCO into Nb. It is important that lead can react with Au, forming a superconducting alloy. Then the Pb/Au/YBCO structure contains a superconductor instead of a layer of normal metal. This is confirmed by the appearance of gap features of lead in the IVCs at sufficiently low temperatures ($T = 1.2\,$K). A new explanation of the experimental data on the flow of a superconducting current through low-temperature superconductor—HTSC junctions was proposed recently. It has been shown theoretically that a strong spin-orbit interaction, which is observed in Pb/Ag structures, intensifies superconducting current flow through a barrier. Replacement of Pb by an Al- or Nb-type superconductor decreases the spin-orbit interaction, and the superconducting current decreases as a result. Let us discuss the dependences $R_d(V)$ for the heterojunctions as a function of temperature in the range 4.2–100 K (Fig. 3). For $T \ll T_c$ the IVC as a whole corresponds to heterojunctions of the superconductor—insulator—normal metal ($S$–$I$–$N$) type: there is a region of increased $R_d$ at low voltages. However, the feature on $R_d(V)$ that is due to the gap in YBCO is not observed in the experiment. This corresponds to a junction with a superconductor with gapless superconductivity, including $d$-type superconductivity.
According to the calculations performed in Ref. 14, the feature at $eV \approx \Delta$ in the density of states of a $d$ superconductor gives a logarithmic dependence, $R_d \propto \ln T,\ \ln(eV - |\Delta|)$, subject to strong temperature broadening, just as for a gapless $s$ superconductor. We note that for $s$ superconductors with a gap a power-law divergence is observed: $R_d \propto T^{-1/2},\ ((eV)^2 - \Delta^2)^{-1/2}$. The features in the form of changes in $R_d(V)$ at voltages $V < 2\,$mV due to the niobium gap have virtually no effect in our experiment, and we did not study them in detail. For $s$-type symmetry of the order parameter in a superconductor at low temperatures, $kT \ll \Delta$, the number of excited quasiparticles decreases exponentially with temperature. Therefore the resistance increases exponentially, $R_d(0) \propto \exp(\Delta/T)$. In a superconductor with $d$-type pairing, the presence of nodes with a zero order parameter makes it possible to excite quasiparticles even at very low temperature, $T \ll \Delta$. As a result, $R_d(0)$ grows more slowly as temperature decreases. As one can see in the inset in Fig. 2, nearly linear growth of $R_d(0)$ with decreasing $T$ is observed in the experiment. The dependence $R_d(V)$ is quadratic as $V \to 0$, which agrees qualitatively with calculations for a $d$ superconductor. One of the most surprising features of superconductors with $d$-type pairing is the appearance of two types of bound states, which, as a rule, are not observed in $s$ superconductors. Surface states with low energies at the boundary of the $d$ superconductor with an insulator are due to the change in sign of the order parameter at the Fermi surface for quasiparticles reflected from the boundary. The order parameter for a $d$-type superconducting wave function changes sign under a 90° circuit around the $c$ axis. Since the direction of the momentum of a quasiparticle changes on mirror reflection from a boundary, bound states arise at zero energies because of Andreev reflection. This leads to the appearance of a dip in $R_d(V)$ at small $V$, as is observed experimentally for a transport current in the [110] direction in a YBCO film (see Refs. 6–8, 18, and 19). In our case, the contribution of such quasiparticles is small because the normal to the boundary is oriented along one of the principal crystallographic directions in YBCO. For mirror-reflected quasiparticles there is no Andreev reflection, because the phases of the order parameter are the same for incident and reflected quasiparticles. An additional mechanism was recently predicted theoretically for the appearance of bound states due to the suppression of the order parameter of a $d$ superconductor for orientations of the normal to the boundary different from the principal crystallographic axes or for diffuse reflection at a boundary with an insulator (Ref. 17). These states are observed at energies different from zero, and estimates in Ref. 17 show that they are more stable with respect to the quality of the boundary. The appearance of bound states should be observed in the dependences $R_d(V)$ as a decrease of $R_d$ for $eV_r$ of the order of the gap in the $d$ superconductor; in addition, the ratio $eV_r/\Delta$ depends on the angle between the normal and the crystallographic axes of the $d$ superconductor. The condition for the existence of bound states with nonzero energy is suppression of the order parameter near the boundary.
This occurs in our experiment because of the degradation of the superconducting properties of the surface. Indeed, in all samples we observe features at $V_r = 15\,\text{mV}$, where $V_r$ is virtually temperature-independent.

5. CONCLUSIONS

In the present work, heterojunctions with dimensions of several microns, obtained by successive deposition of YBCO, Au, and Nb, with the transport current flowing in YBCO along the $c$ axis, were fabricated and studied experimentally. The transmittances of the heterostructures, as estimated from the resistance of the junctions, are two orders of magnitude greater than the existing experimental data, and the areas of the heterojunctions are much smaller. The IVCs of heterojunctions with resistances differing from one another by a factor of 4 were investigated. Estimates based on the proximity effect showed that the absence of a critical current in the heterojunctions is probably due to a decrease in the amplitude of the pair potential of the superconducting carriers at the Au/YBCO boundary. The curves of the differential resistance of the heterojunctions versus the voltage are similar to the case of $S$–$I$–$N$ junctions with a gapless superconductor; in particular, the absence of a YBCO gap feature could also correspond to $d$-type superconductivity, specifically, to the presence of nodes of the order parameter as the direction of the momentum of the quasiparticles changes by $45^\circ$. The dependence of $R_d(0)$ on $T$ also corresponds to a $d$-type superconductor. We thank Yu. S. Barash, D. A. Golubev, A. V. Zaitsev, Z. G. Ivanov, and M. Yu. Kupriyanov for a helpful discussion of the experimental results, and D. Ertz, P. B. Mozhaev, and T. Henning for assisting in the experiment. This work was supported in part by the program “Current Problems of Condensed-State Physics” (subsection “Superconductivity”), the Russian Fund for Fundamental Research, and the INTAS program of the European Union.

*E-mail: firstname.lastname@example.org

\begin{thebibliography}{99}
\bibitem{1} D. A. Wollman, D. J. Van Harlingen, W. C. Lee \textit{et al.}, Phys. Rev. Lett. \textbf{71}, 2134 (1993).
\bibitem{2} C. C. Tsuei, J. R. Kirtley, C. C. Chi \textit{et al.}, Phys. Rev. Lett. \textbf{73}, 593 (1994).
\bibitem{3} H. Akoh, C. Camerlingo, and S. Takada, Appl. Phys. Lett. \textbf{56}, 1487 (1990).
\bibitem{4} J. Yoshida, T. Hashimoto, S. Inoue \textit{et al.}, Jpn. J. Appl. Phys., Part 1 \textbf{31}, 1771 (1992).
\bibitem{5} J. Lesueur, L. H. Greene, W. L. Feldmann \textit{et al.}, Physica C \textbf{191}, 325 (1992).
\bibitem{6} A. G. Sun, A. Truscott, A. S. Katz \textit{et al.}, Phys. Rev. B \textbf{54}, 6734 (1996).
\bibitem{7} A. S. Katz, A. G. Sun, R. C. Dynes \textit{et al.}, Appl. Phys. Lett. \textbf{66}, 105 (1995).
\bibitem{8} J. Lesueur, M. Aprili, A. Goulon \textit{et al.}, Phys. Rev. B \textbf{55}, 3398 (1997).
\bibitem{9} J. R. Kirtley, K. A. Moler, and D. J. Scalapino, E-print archive cond-mat/9703067 (1997).
\bibitem{10} L. J. Buchholtz, M. Palumbo, D. Rainer, and J. A. Sauls, J. Low Temp. Phys. \textbf{101}, 1099 (1995).
\bibitem{11} A. V. Zaïtsev, Zh. Eksp. Teor. Fiz. \textbf{86}, 1742 (1984) [Sov. Phys. JETP \textbf{59}, 1015 (1984)].
\bibitem{12} M. Yu. Kupriyanov and K. K. Likharev, IEEE Trans. Magn. \textbf{27}, 2400 (1991).
\bibitem{13} G. Deutscher, Physica C \textbf{185--189}, 216 (1991).
\bibitem{14} Yu. S. Barash, A. V. Galaktionov, and A. D. Zaikin, Phys. Rev. B \textbf{52}, 665 (1995).
\bibitem{15} G. S. Lee, Physica C \textbf{292}, 171 (1997).
\bibitem{16} Y.
Tanaka and S. Kashiwaya, Phys. Rev. B \textbf{53}, 11957 (1996). \bibitem{17} Yu. S. Barash, A. A. Svidzinsky, and H. Burkhardt, Phys. Rev. B \textbf{55}, 15282 (1997). \bibitem{18} F. V. Komissinskii, G. A. Ovsyannikov, N. A. Tulina, and V. V. Ryazanov, in \textit{Abstracts of Reports at the 31st Conference on Low-Temperature Physics (LT-31)}, Moscow, 1998, p. 236. \bibitem{19} P. V. Komissinski, G. A. Ovsyannikov, N. A. Tulina, and V. V. Ryazanov, E-print archive cond-mat/9903065 (1999). \end{thebibliography} Translated by M. E. Alferieff
Internal Memo

To: Concerned Class Advisor through respective Chairperson.
From: Controller Students’ Affairs
Subject: Koshish Foundation Scholarship 2017 (Fresh & Renewal).
Ref: No. DSA / 1601 Dated: 10-10-17

It is submitted for your kind information that the representative of Koshish Foundation has forwarded the pledge forms of the concerned students (list attached). In this connection, you are therefore requested to inform the concerned students to obtain their pledge forms from the Students’ Affairs Department and fill in the said form on or before 12-10-2017. It is mandatory for every awardee to fill in the said form; otherwise the scholarship will be discontinued. Your cooperation in this regard shall be highly appreciated.

Controller Students’ Affairs
10/10/17

| Sr.# | Name of Selected Students with Father's Name | Class | Discipline | Roll No. | Batch |
|------|----------------------------------------------|-------|------------|----------|-------|
| 1. | OBAID UD DIN KHAN S/o SALAH UD DIN KHAN | First Year Spring / Fall Semester | Chemical Engineering | CH-049 | 2016-2017 |
| 2. | ZAHAAD HABIB S/o AZAM HABIB | First Year Spring / Fall Semester | Civil Engineering | CE-091 | 2016-2017 |
| 3. | FIZZAH NOOR D/o NOOR MUHAMMAD | First Year Spring / Fall Semester | Civil Engineering | CE-075 | 2016-2017 |
| 4. | TALIA RIAZ KHAN S/o RIAZ MUHAMMAD KHAN | First Year Spring / Fall Semester | Civil Engineering | CE-222 | 2016-2017 |
| 5. | ZULFIQAR ALI S/o AHMED ALI | First Year Spring / Fall Semester | Civil Engineering | CE-173 | 2016-2017 |
| 6. | MUHAMMAD RAFIQUE SHAR S/o RASOOL BUX SHAR | First Year Spring / Fall Semester | Civil Engineering | CE-140 | 2016-2017 |
| 7. | SHEHRISH SUBHAN D/o SUBHAN AHMED | First Year Spring / Fall Semester | Computer & Information Systems Engineering | CS-103 | 2016-2017 |
| 8. | RAMSHA ILYAS D/o MUHAMMAD ILYAS | First Year Spring / Fall Semester | Computer & Information Systems Engineering | CS-033 | 2016-2017 |
| 9. | ARESIA ZEHRA D/o SYED JAWED ABBAAS | First Year Spring / Fall Semester | Computer & Information Systems Engineering | CS-095 | 2016-2017 |
| 10. | SYEDA MINHAL BUKHARI D/o SYED WAMIQ BUKHARI | First Year Spring / Fall Semester | Electrical Engineering | EE-076 | 2016-2017 |
| 11. | ALEENA ISRAR D/o ISRAR AHMED | First Year Spring / Fall Semester | Electrical Engineering | EE-081 | 2016-2017 |
| 12. | MUNTAHA KHAN YOUSFI D/o MUHAMMAD YOUSUF KHAN YOUSFI | First Year Spring / Fall Semester | Industrial & Manufacturing Engineering | IM-074 | 2016-2017 |
| 13. | SIDRA TUL MUNTAHA D/o ABDUL MATEEN GHOURI | First Year Spring / Fall Semester | Electrical Engineering | EE-155 | 2016-2017 |
| 14. | RAMSHA JAWAID D/o MUHAMMAD JAWAID | First Year Spring / Fall Semester | Electrical Engineering | EE-072 | 2016-2017 |
| 15. | ASHAR HAROON S/o MUHAMMAD HAROON | First Year Spring / Fall Semester | Electrical Engineering | EE-161 | 2016-2017 |
| 16. | UROOF FATIMA D/o KHUSHNOOD AHMED | First Year Spring / Fall Semester | Electrical Engineering | EE-140 | 2016-2017 |
| 17. | NADIA IMRAN D/o IMRAN SIDDEQUE | First Year Spring / Fall Semester | Electronic Engineering | EI-073 | 2016-2017 |
| 18. | SHEEBA SHAHID D/o MUHAMMAD SHAHID | First Year Spring / Fall Semester | Industrial & Manufacturing Engineering | IM-077 | 2016-2017 |
| 19. | NOOR FATIMA D/o MUHAMMAD FAHJEM SALMAN | First Year Spring / Fall Semester | Industrial & Manufacturing Engineering | IM-002 | 2016-2017 |
| 20. | RUKHSAR SHEIKH D/o MUHAMMAD HAROON SHEIKH | First Year Spring / Fall Semester | Industrial & Manufacturing Engineering | IM-081 | 2016-2017 |
| 21. | OSAMA RASHID S/o SHAHZAD RASHID | First Year Spring / Fall Semester | Mechanical Engineering | ME-020 | 2016-2017 |
| 22. | MUHAMMAD MOHSIN IRSHAD S/o IRSHAD AHMED | First Year Spring / Fall Semester | Mechanical Engineering | ME-257 | 2016-2017 |
| 23. | NOUMAN ALEEM S/o ALEEM UD DIN | First Year Spring / Fall Semester | Mechanical Engineering | ME-139 | 2016-2017 |
| 24. | SYED RIAZ ALI S/o SYED MUNIR ALI | First Year Spring / Fall Semester | Mechanical Engineering | ME-229 | 2016-2017 |
| 25. | MUHAMMAD MOHSIN S/o GHULAM MUSTAFA KHAN | First Year Spring / Fall Semester | Petroleum Engineering | PE-012 | 2016-2017 |
| 26. | ABEEHA ASHRAF D/o SYED MISBAH UDDIN | First Year Spring / Fall Semester | Polymer & Petrochemical Engineering | PP-016 | 2016-2017 |
| 27. | AQSA AHSAN D/o MUHAMMAD AHSAN KHAN | Second Year Spring / Fall Semester | Bio-medical Engineering | BM-065 | 2015-16 |
| 28. | MUHAMMAD ARSALAN S/o ABDUL SALAM | Second Year Spring / Fall Semester | Chemical Engineering | CH-009 | 2015-16 |
| 29. | OSAMA SHAHAB S/o MUHAMMAD SHAHAB FAROOQ | Second Year Spring / Fall Semester | Chemical Engineering | CH-031 | 2015-16 |
| 30. | SYEDA FIZZA ZAIDI D/o SYED MUHAMMAD RAZA ZAIDI | Second Year Spring / Fall Semester | Chemical Engineering | CH-008 | 2015-16 |
| 31. | ASHBA JAWED D/o JAWED MAZHAR QURESHI | Second Year Spring / Fall Semester | Computer & Information System Engineering | CS-078 | 2015-16 |
| 32. | FATIMA SHOUKAT D/o SHOUKAT ALI | Second Year Spring / Fall Semester | Computer & Information System Engineering | CS-108 | 2015-16 |
| 33. | USMAN ABBAS S/o UMAR ABBAS | Second Year Spring / Fall Semester | Computer & Information System Engineering | CS-097 | 2015-16 |
| 34. | ZARSHA MOEED D/o MIRZA MOEED BAIG | Second Year Spring / Fall Semester | Computer & Information System Engineering | CS-119 | 2015-16 |
| 35. | SABA ZAFAR D/o ZAFAR IQBAL | Second Year Spring / Fall Semester | Computer Science & Information Technology | CT-030 | 2015-16 |
| 36. | AINUS SABA MALICK D/o MUHAMMAD RIAZ | Second Year Spring / Fall Semester | Electrical Engineering | EE-094 | 2015-16 |
| 37. | MUHAMMAD MUZAMMIL AHMED S/o HAFIZ AKHLAQ AHMED | Second Year Spring / Fall Semester | Electrical Engineering | EE-147 | 2015-16 |
| 38. | MUHAMMAD USMAN MAZHAR S/o MAZHAR HUSSAIN | Second Year Spring / Fall Semester | Electrical Engineering | EE-148 | 2015-16 |
| 39. | REHAB ASHRAF D/o MUHAMMAD ASHRAF | Second Year Spring / Fall Semester | Electrical Engineering | EE-179 | 2015-16 |
| 40. | SIDRA HAROON D/o MUHAMMAD YOUSUF HAROON KHAN | Second Year Spring / Fall Semester | Electrical Engineering | EE-181 | 2015-16 |
| 41. | WALID IRFAN S/o MUHAMMAD IRFAN | Second Year Spring / Fall Semester | Electrical Engineering | EE-192 | 2015-16 |
| 42. | FAYAZ AHMAD S/o SARFARAZ AHMAD | Second Year Spring / Fall Semester | Industrial & Manufacturing Engineering | IM-024 | 2015-16 |
| 43. | OSAMA AHMED S/o MUHAMMAD AHMED | Second Year Spring / Fall Semester | Mechanical Engineering | ME-211 | 2015-16 |
| 44. | SHAI SAAD S/o SYED ZULFIQAR ALI | Second Year Spring / Fall Semester | Mechanical Engineering | ME-137 | 2015-16 |
| 45. | SYED UZAIR UR REHMAN S/o SYED ZAFAR IQBAL | Second Year Spring / Fall Semester | Petroleum Engineering | PE-018 | 2015-16 |
| 46. | FARAH D/o MUHAMMAD IQBAL | Second Year Spring / Fall Semester | Software Engineering | SE-035 | 2015-16 |
| 47. | KASHMALA KHAN D/o ASHEFAQ AHMED KHAN | Second Year Spring / Fall Semester | Software Engineering | SE-049 | 2015-16 |
| 48. | MUHAMMAD UZAIR ALI GUL KHAN S/o SAADAT ALI GUL KHAN KAMAL | Second Year Spring / Fall Semester | Software Engineering | SE-011 | 2015-16 |
| 49. | ERAJ BADAR D/o MUHAMMAD BADARUDDIN | Second Year Spring / Fall Semester | Textile Engineering | TE-020 | 2015-16 |
| 50. | FARIJA KHAN D/o IMTIAZ HUSAIN | Second Year Spring / Fall Semester | Urban Engineering | UF-003 | 2015-16 |
| 51. | MUHAMMAD SHAFAY KALIM S/o KALIM UDDIN | Third Year Spring / Fall Semester | Automotive Engineering | AU-021 | 2014-15 |
| 52. | FAIZA JALIL D/o JALIL AKRAM ABBASI | Third Year Spring / Fall Semester | Civil Engineering | CE-197 | 2014-15 |
| 53. | HAMAIIS SAJJID S/o SAJJID KARIM | Third Year Spring / Fall Semester | Civil Engineering | CE-196 | 2014-15 |
| 54. | HAMMAD AHMED KHAN S/o SHAHID AHMED KHAN | Third Year Spring / Fall Semester | Civil Engineering | CE-042 | 2014-15 |
| 55. | TAIMOOR ALI S/o MUHAMMAD IQBAL | Third Year Spring / Fall Semester | Civil Engineering | CE-173 | 2014-15 |
| 56. | AREEBA ALI D/o SYED LIAQUAT ALI | Third Year Spring / Fall Semester | Computer & Information System Engineering | CS-082 | 2014-15 |
| 57. | AISHA SALEEM D/o MUHAMMAD SALEEM AHMED | Third Year Spring / Fall Semester | Electrical Engineering | EE-034 | 2014-15 |
| 58. | AYMEN MUJEEB BAIG D/o MUHAMMAD MUJEEB ANWAR BAIG | Third Year Spring / Fall Semester | Electrical Engineering | EE-017 | 2014-15 |
| 59. | IMRAN ZAMMARUD S/o ZAMMARUD HUSSAIN | Third Year Spring / Fall Semester | Electrical Engineering | EE-138 | 2014-15 |
| 60. | MISCHEAL KHALID D/o KHAWAJA KHALID KAMAL | Third Year Spring / Fall Semester | Electrical Engineering | EE-198 | 2014-15 |
| 61. | TOOBA SAEED D/o SAEED AHMED | Third Year Spring / Fall Semester | Electrical Engineering | EE-036 | 2014-15 |
| 62. | OSAMA HAFEEZ S/o MUHAMMAD HAFEEZ UR RAHMAN | Third Year Spring / Fall Semester | Industrial & Manufacturing Engineering | IM-086 | 2014-15 |
| 63. | ABDUL MUJEEB S/o ABDUL GHAFFAR GHANGHRO | Third Year Spring / Fall Semester | Mechanical Engineering | ME-077 | 2014-15 |
| 64. | SHAHZAD AKBAR ALI CHHATTA S/o AKBAR ALI CHHATTA | Third Year Spring / Fall Semester | Mechanical Engineering | ME-162 | 2014-15 |
| 65. | ASFA SHARIQ D/o MUHAMMAD SHARIQ | Third Year Spring / Fall Semester | Polymer & Petrochemical Engineering | PP-005 | 2014-15 |
| 66. | MUHAMMAD SHAHZAR KHAN S/o SHAMIM AKHTER | Third Year Spring / Fall Semester | Polymer & Petrochemical Engineering | PP-002 | 2014-15 |
| 67. | TAYYABA EJAZ D/o EJAZ UR REHMAN | Third Year Spring / Fall Semester | Polymer & Petrochemical Engineering | PP-021 | 2014-15 |
| 68. | ZEESHAN ALI S/o FARZAND ALI | Third Year Spring / Fall Semester | Polymer & Petrochemical Engineering | PP-047 | 2014-15 |
| 69. | SYEDA SARA IQBAL D/o SYED SHAHID IQBAL | Third Year Spring / Fall Semester | Software Engineering | SE-045 | 2014-15 |
| 70. | FIRZA AJAZ D/o SYED AJAZ HUSSAIN | Third Year Spring / Fall Semester | Telecommunication Engineering | TC-021 | 2014-15 |
| 71. | Muhammad Khiaqan Serwar S/o Muhammad Serwar Pervaiz | Fourth Year Spring / Fall Semester | Industrial Chemistry | 046 | 2013-14 |
| 72. | Nimra Hussain D/o Dilber Hussain | Fourth Year Spring / Fall Semester | Chemical Engineering | 009 | 2013-14 |
| 73. | Nadeem Matloob S/o Matloob Akhtar | Fourth Year Spring / Fall Semester | Civil Engineering | 115 | 2013-14 |
| 74. | Muhammad Ahsan Manzoor S/o Muhammad Manzoor Ul Haque | Fourth Year Spring / Fall Semester | Computational Finance Engineering | 027 | 2013-14 |
| 75. | Ayesha Shabbir D/o Syed Shabbir Ahmed | Fourth Year Spring / Fall Semester | Computer & Information System Engineering | 079 | 2013-14 |
| 76. | Abdul Rafay S/o Muhammad Arshad Khan | Fourth Year Spring / Fall Semester | Comp. Sci. & Info Tech | 019 | 2013-14 |
| 77. | Areeba Shahid D/o Muhammad Shahid | Fourth Year Spring / Fall Semester | Electrical Engineering | 031 | 2013-14 |
| 78. | Hira Haider D/o Syed Haider Ali | Fourth Year Spring / Fall Semester | Electrical Engineering | 097 | 2013-14 |
| 79. | Saad Ul Haq Haqqi S/o Sharf Ul Haq Haqqi | Fourth Year Spring / Fall Semester | Electrical Engineering | 206 | 2013-14 |
| 80. | Usama Gulzar S/o Gulzar Hussain | Fourth Year Spring / Fall Semester | Electronic Engineering | 111 | 2013-14 |
| 81. | Baniyah Zehra D/o Riaz Haider Shaikh | Fourth Year Spring / Fall Semester | Food Engineering | 025 | 2013-14 |
| 82. | Saad Tanveer S/o Tanveer Zafar | Fourth Year Spring / Fall Semester | Industrial & Manufacturing Engineering | 128 | 2013-14 |
| 83. | Muhammad Abdullah Alvi S/o Humayoun Zafar Alvi | Fourth Year Spring / Fall Semester | Materials Engineering | 009 | 2013-14 |
| 84. | Owais S/o Muhammad Ikram | Fourth Year Spring / Fall Semester | Materials Engineering | 034 | 2013-14 |
| 85. | Syed Muhammad Talha Wadood S/o Syed Abdul Wadood | Fourth Year Spring / Fall Semester | Mechanical Engineering | 188 | 2013-14 |
| 86. | Talha Ahmed Qasmi S/o Iqbal Ahmed Qasmi | Fourth Year Spring / Fall Semester | Mechanical Engineering | 017 | 2013-14 |
| 87. | Khalil Ur Rehman Alvi S/o Muhammad Amin Alvi | Fourth Year Spring / Fall Semester | Petroleum Engineering | 007 | 2013-14 |
| 88. | Hafiz Muhammad Anas Abdul Wahab S/o Abdul Wahab | Fourth Year Spring / Fall Semester | Software Engineering | 019 | 2013-14 |
| 89. | Sami Haroon Khan S/o Muhammad Haroon Khan | Fourth Year Spring / Fall Semester | Software Engineering | 036 | 2013-14 |
Learning to Rank for Synthesizing Planning Heuristics

Caelan Reed Garrett, Leslie Pack Kaelbling, Tomás Lozano-Pérez
MIT CSAIL, Cambridge, MA 02139 USA
{caelan, lpk, email@example.com}

Abstract

We investigate learning heuristics for domain-specific planning. Prior work framed learning a heuristic as an ordinary regression problem. However, in a greedy best-first search, the ordering of states induced by a heuristic is more indicative of the resulting planner’s performance than mean squared error. Thus, we instead frame learning a heuristic as a learning to rank problem which we solve using a RankSVM formulation. Additionally, we introduce new methods for computing features that capture temporal interactions in an approximate plan. Our experiments on recent International Planning Competition problems show that the RankSVM learned heuristics outperform both the original heuristics and heuristics learned through ordinary regression.

1 Introduction

Forward state-space greedy heuristic search is a powerful technique that can solve large planning problems. However, its success is strongly dependent on the quality of its heuristic. Many domain-independent heuristics estimate the distance to the goal by quickly solving easier, approximated planning problems [Hoffmann and Nebel, 2001; Helmert, 2006; Helmert and Geffner, 2008]. While domain-independent heuristics have enabled planners to solve a much larger class of problems, there is a large amount of room to improve their estimates. In particular, the effectiveness of many domain-independent heuristics varies across domains, with poor performance occurring when the approximations in the heuristic discard a large amount of information about the problem. Previous work has attempted to overcome the limitations of these approximations by learning a domain-specific heuristic correction [Yoon et al., 2006; 2008]. Yoon et al. formulated learning a correction for the FastForward (FF) heuristic [Hoffmann and Nebel, 2001] as a regression problem and solved it using ordinary least-squares regression. While the resulting planner is no longer domain-independent, the learning process is domain-independent, and the learned heuristic is more effective than the standard FF heuristic. In this paper, we improve on these results by framing the learning problem as a learning to rank problem instead of an ordinary regression problem. This is motivated by the insight that, in a greedy search, the ranking induced by a heuristic, rather than its numerical values, governs the success of the planning. By optimizing for the ranking directly, our RankSVM learner is able to produce a heuristic that outperforms heuristics learned through least-squares regression. Additionally, we introduce new methods for constructing features for heuristic learners. Like Yoon et al., we derive our features from an existing domain-independent heuristic [Yoon et al., 2006; 2008]. However, our features focus on the ordering and interaction between actions in approximate plans. Thus, they can be based on any existing heuristic that implicitly constructs an approximate plan, such as the context-enhanced additive (CEA) heuristic [Helmert and Geffner, 2008]. These features can be easily constructed and still encode a substantial amount of information for heuristic learners. In our experiments, we evaluate the performance of the different configurations of our learners on several of the International Planning Competition learning track problems [Vallati et al., 2015].
We find that the learned heuristics using the RankSVM approach allow more problems to be solved successfully than using the popular FF and CEA heuristics alone. Additionally, they significantly surpass the performance of heuristics learned through ordinary regression.

2 Related Work

Prior work in learning for planning spans many types of domain-specific planning knowledge [Jiménez et al., 2012]; our focus in this paper is on learning heuristic functions. Yoon et al. were the first to improve on a heuristic function using machine learning [Yoon et al., 2006; 2008]. They centered their learning on improving the FF heuristic [Hoffmann and Nebel, 2001], using ordinary least-squares regression to learn the difference between the actual distance-to-go and the estimate given by the FF heuristic. Their key contribution was deriving features using the relaxed plan that FF produces when computing its estimate. Specifically, they used taxonomic syntax to identify unordered sets of actions and predicates on the relaxed plan that shared common object arguments. Because there are an exponential number of possible subsets of actions and predicates, they iteratively introduced taxonomic expressions, greedily choosing the subset giving the largest decrease in mean squared error. This process resulted in an average of about 20 features per domain [Xu et al., 2009]. In contrast, our features encode ordering information about the plan and can be successfully applied without any taxonomic syntax or iterative feature selection. Xu et al. built on the work of Yoon et al. and incorporated ideas from structured prediction [Xu et al., 2007; 2009]. They adapted the learning-as-search optimization framework to the context of beam search, learning a discriminative model to rank the top $b$ successors per state to include in the beam search. In subsequent work, they used RankBoost to more reliably rank successors by bootstrapping the predictions of action-selection rules [Xu et al., 2010]. Although we also use a ranking approach, we use ranking as a loss function to train a heuristic from the position of states along a trajectory, resulting in a global heuristic that can be directly applied in greedy best-first search. Arfaee et al. learned heuristics by iteratively improving on prior heuristics for solving combinatorial search problems, using neural networks and user-defined features [Arfaee et al., 2011]. Finally, Virseda et al. learned combinations of existing heuristic values that would most accurately predict the cost-to-go [Virseda et al., 2013]. However, this strategy does not use features derived from the structure of the heuristics themselves. Wilt et al. investigated greedy heuristic search performance in several combinatorial search domains [Wilt and Ruml, 2012]. Their results suggest that heuristics that exhibit strong correlation with the distance-to-go are less likely to produce large local minima, and large local minima are thought to often dominate the runtime of greedy planners [Hoffmann, 2005; 2011]. They later use the Kendall rank correlation coefficient ($\tau$) to select a pattern database for some of these domains [Wilt and Ruml, 2015]. Their use of $\tau$ as a heuristic quality metric differs from our own because they score $\tau$ using sampled states near the goal, while we score $\tau$ by ranking the states on a plan.
### 3 Planning domains and training data

Our goal is to learn a heuristic that will improve the coverage, or the number of problems solved, for greedy forward-search planning on very large satisficing planning problems. Secondary goals are to decrease the resulting plan length and the time to solve these problems. The search control of our planners is greedy best-first search (GBFS) with alternating, dual open lists [Richter and Helmert, 2009]. The preferred operators in the second open list are computed by the base heuristic which, as we will later see, is used to generate our learning features [Hoffmann and Nebel, 2001]. We use the lazy variant of greedy best-first search, which defers heuristic evaluation of successors. We consider STRIPS planning problems [Fikes and Nilsson, 1971] with unit costs, and without axioms or conditional effects, but our techniques can be straightforwardly generalized to handle them.

**Definition 1 (Planning Domain).** A planning domain $D = \langle P, A \rangle$ consists of a set of predicate schemas $P$ and a set of action schemas $A$. Each action schema contains a set of precondition predicates and effect predicates. A predicate schema or action schema can be instantiated by assigning objects to its arguments.

**Definition 2 (Planning Problem).** A planning problem $\Pi = \langle D, O, s_0, g \rangle$ is given by a domain $D$, a set of objects $O$, an initial state $s_0$, and a goal partial-state $g$. The initial state $s_0$ is fully specified by a set of predicates. The goal partial-state $g$ is only partially specified by its set of predicates.

The overall approach will be, for each planning domain, to train a learning algorithm on several planning problem instances, and then to use the learned heuristic to improve planning performance on additional planning problems from that same domain. Note that the new problem instances use the same predicate and action schemas, but may have different universes of objects, initial states, and goal states. In order to learn a heuristic for a particular domain, we must first gather training examples from a set of existing training problems within the domain [Jiménez et al., 2012]. Suppose that we have a distribution over problems for a domain $D$, which will be used to generate testing problems. We will sample a set of training problems $\{\Pi^1, ..., \Pi^n\}$ from this distribution. From each problem $\Pi^i$, we generate a set of training examples in which the $j$th training example is the pair $\langle x_j^i, y_j^i \rangle$, where $x_j^i = \langle s_j^i, \Pi^i \rangle$ is the input composed of a state $s_j^i$ and the problem $\Pi^i$. Let $y_j^i$ be the length of a plan from $s_j^i$ to $g^i$. Ideally, $y_j^i$ would be the length of the shortest plan, but because obtaining optimal plans is intractable for the problems we consider, we construct approximately optimal plans and use their lengths as the $y$ values in the training data. We use the set of states on a single high-quality plan from the initial state to the goal state as training examples. Unfortunately, we have observed that using low-quality plans, which are more easily found, can be dangerous, as it introduces large amounts of noise into the training data. This noise can produce conflicting observations of $y_j^i$ for similar $x_j^i$, which can prevent the learner from identifying any meaningful predictive structure. Reducing at least this kind of local noise is important for the learning process even if the global plan is still suboptimal.
Thus, we post-process each candidate plan using two local search methods: action elimination and plan neighborhood graph search [Nakhost, 2010]. In separate experiments, we attempted learning a heuristic by instead using a sampled set of successors of the states on these plans as training examples. However, we found that the inclusion of these states slightly worsened the resulting performance of the learners. Our hypothesis is that the inclusion of successor states improves local accuracy at the expense of global accuracy. Because the runtime of greedy search methods is often dominated by the time to escape the largest local minima [Hoffmann, 2005; 2011; Wilt and Ruml, 2012; 2015], it is a worthwhile tradeoff to reduce the size of large local minima at the cost of increasing the size of small local minima.

### 4 Feature Representation

The majority of machine learning methods assume that the inputs are represented as points in a vector space. In our case, the inputs $x_j^i$ are a pair of a state and a planning problem, each of which is a complex structured symbolic object. So, we need to define a feature-mapping function $\phi$ that maps an $x$ value into a vector of numeric feature values. This can also be done implicitly by defining a kernel; here we restrict our attention to finite-dimensional $\phi$ that are straightforwardly computable. The objective in designing a feature mapping is to arrange for examples that are close in feature space to have similar output values. Thus, we want to reveal the structural aspects of an input value that encode important similarities to other input values. This can be particularly challenging in learning for planning: while problems within the same domain share the same schemas for predicates and actions, the set of objects can be arbitrarily different. For example, a feature representation with a feature for each predicate instance present in $s^i_j$ or $g^i$ will perform poorly on new problems, which may not share any predicate instances with the problems used to create the feature representation. Yoon et al. used information from the FF heuristic to construct additional features from the resulting relaxed plan [Yoon et al., 2006; 2008]. The relaxed plan compresses the large set of possible actions into a small plan of actions that are likely to be relevant to achieving the goal. Many modern heuristics either explicitly or implicitly generate approximate plans, similar to FF’s relaxed plan, that can be represented as directed acyclic graphs (DAGs) in which each action is a vertex, and directed edges indicate that the outgoing action is supported by the incoming action. We provide feature mappings that are applicable to any heuristic that gives rise to such a DAG, but in this paper, we focus on the FF [Hoffmann and Nebel, 2001] and CEA [Helmert and Geffner, 2008] heuristics. Our method can be extended to include additional features, for example those derived from landmark heuristics or domain-dependent heuristics, although we do not consider these extensions here. We can now view our training inputs as $x^i_j = \langle s^i_j, g^i, \pi^{ij}_h \rangle$ where $\pi^{ij}_h$ is the DAG generated by heuristic $h$ for state $s^i_j$ and goal $g^i$. The computation time of each feature affects the performance of the resulting planner in a complex way: the feature representation is computed for every state encountered in the search, but good features will make the heuristic more effective, causing fewer states to be encountered.
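To fix notation before the feature maps, here is a minimal Python rendering of Definitions 1 and 2 and of the DAG input $\pi^{ij}_h$. This is a sketch of our own; the class and field names are illustrative, not the planner's actual data structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionSchema:
    name: str
    preconditions: frozenset  # precondition predicate schemas (Definition 1)
    effects: frozenset        # effect predicate schemas

@dataclass(frozen=True)
class Domain:
    predicates: frozenset     # predicate schemas P
    actions: frozenset        # action schemas A

@dataclass(frozen=True)
class Problem:
    domain: Domain
    objects: frozenset        # O
    init: frozenset           # s_0: fully specified set of ground predicates
    goal: frozenset           # g: partially specified set of ground predicates

# A relaxed-plan DAG pi for one state: ground actions as vertices,
# with an edge (a1, a2) when a1 directly supports a2.
@dataclass(frozen=True)
class PlanDAG:
    vertices: tuple           # ground actions
    edges: frozenset          # pairs (a1, a2)
```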
### 4.1 Single Actions The first feature representation serves primarily as a baseline. Each feature is the number of instances of a particular action schema in the DAG $\pi^{ij}_h$. The number of features is the number of action schemas $|A|$ in the domain and thus around five for many domains. This feature representation is simple and therefore limited in its expressiveness, but it can be easily computed in time $O(|\pi^{ij}_h|)$ and is unlikely to overfit. If we are learning a linear function of $\phi(x)$, then the weights can be seen as adjustments to the predictions made by the DAG of how many instances of each action are required. So, for instance, in a domain that requires a robot to do a “move” action every time it “picks” an object, but where the delete relaxation only includes one “move” action, this representation would allow learning a weight of two on pick actions, effectively predicting the necessity of extra action instances. ### 4.2 Pairwise Actions The second feature representation creates features for pairs of actions, encoding both their intersecting preconditions and effects as well as their temporal ordering in the approximate plan. First, we solve the all-pairs shortest paths problem on $\pi^{ij}_h$ by running a BFS from each action vertex. Then, consider each pair of actions $a_1 \rightarrow a_2$ where $a_2$ descends from $a_1$, as indicated by having a finite, positive distance from $a_1$ to $a_2$ in the all-pairs shortest paths solution. This indicates $a_2$ must come after $a_1$ on all topological sorts of the DAG; i.e., $\pi^{ij}_h$ contains the implicit partial ordering $a_1 \prec a_2$. Moreover, if there is an edge $(a_1, a_2)$ in $\pi^{ij}_h$, then $a_1 \prec a_2$ is an explicit partial ordering because $a_1$ directly supports $a_2$. For every pair of action schemas $(A_1, A_2)$, we include two features, counting the number of times it happens that, for an instance $a_1$ of $A_1$ and instance $a_2$ of $A_2$, 1. $a_1 \prec a_2$, $\text{EFF}(a_1) \cap \text{PRE}(a_2) \neq \emptyset$ 2. $a_2 \succ a_1$, $\text{EFF}(a_2) \cap \text{PRE}(a_1) \neq \emptyset$ The current state and goal partial-state are included as dummy actions with only effects or preconditions respectively. This feature representation is able to capture information about the temporal spread of actions in the DAG: for example, whether the DAG is composed of many short parallel sequences of actions or a single long sequence. Additionally, the inclusion of the preconditions and effects that overlap encodes interactions that are not often directly captured in the base heuristic. For example, FF and CEA make predicate independence approximations, which can result in overestimating the distance-to-go. The learner can automatically correct for these estimations if it learns that a single sequence can be used to achieve multiple predicates simultaneously. In contrast to the single-action feature representation, the computation of the pairwise representation takes $O(|\pi^{ij}_h|^2)$ in the worst case. However, the DAG frequently is composed of almost disjoint subplans, so in practice, the number of pairs considered is fewer than $\binom{|\pi^{ij}_h|}{2}$. Additionally, this tradeoff is still advantageous if the learner is able to produce a much better heuristic. Finally, for both the single and pairwise feature representations, we add three additional features corresponding to the original heuristic value, the number of layers present in the DAG, and the number of unsatisfied goals. 
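As a concrete illustration of both feature maps, the sketch below is our own rendering under stated assumptions: the DAG is given as an adjacency dict over ground action names, and each action carries a schema name plus PRE/EFF sets. It counts schema instances (Section 4.1) and ordered schema pairs with intersecting effects and preconditions (Section 4.2).

```python
from collections import deque
from itertools import product

def single_action_features(actions, schemas):
    """actions: dict name -> (schema, PRE, EFF). One count per action schema."""
    counts = dict.fromkeys(schemas, 0)
    for schema, _, _ in actions.values():
        counts[schema] += 1
    return [counts[s] for s in schemas]

def descendants(succ, a):
    """Vertices reachable from a by a nonempty path (one BFS per vertex)."""
    seen, frontier = set(), deque(succ.get(a, ()))
    while frontier:
        b = frontier.popleft()
        if b not in seen:
            seen.add(b)
            frontier.extend(succ.get(b, ()))
    return seen

def pairwise_action_features(actions, succ, schemas):
    """Two counts per ordered schema pair (A1, A2): instances a1 < a2 with
    EFF(a1) & PRE(a2) nonempty, and a1 < a2 with EFF(a2) & PRE(a1) nonempty."""
    feats = {(s1, s2, k): 0
             for s1, s2 in product(schemas, repeat=2) for k in (1, 2)}
    for a1 in actions:
        s1, pre1, eff1 = actions[a1]
        for a2 in descendants(succ, a1):   # implicit partial order a1 < a2
            s2, pre2, eff2 = actions[a2]
            if eff1 & pre2:
                feats[(s1, s2, 1)] += 1
            if eff2 & pre1:
                feats[(s1, s2, 2)] += 1
    return [feats[key] for key in sorted(feats)]
```

As in the text, the current state and the goal would be folded in as dummy actions with only effects or only preconditions before calling these functions, and the three extra features (heuristic value, layer count, unsatisfied goals) appended to the vector.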
### 5 Models for heuristic learning

We consider two different framings of the problem of learning a heuristic function $f$. In the first, the goal is to ensure that the $f(x)$ values are an accurate estimate of the distance-to-go in the planning state and problem encoded by $x = (s, \Pi)$. In the second, the goal is to ensure that the $f(x)$ values accurately rank the distance-to-go for different states $s$ within the same planning problem $\Pi$, but do not necessarily reflect the actual distance-to-go values. These different framings of the problem lead to different loss functions to be optimized by the learner and to different optimization algorithms. Because our learning algorithms cannot optimize for search performance directly, the loss function serves as a proxy for the search performance. A good loss function will be highly correlated with the performance of learned heuristics. We restrict ourselves to linear models that learn a weight vector $w$ and make a prediction $\hat{f}(x) = \phi(x)^T w$.

### 5.1 Heuristic value regression

Because learning a heuristic is, at face value, a regression problem, a natural loss function is the root mean squared error (RMSE). A model with a low RMSE produces predictions close to the actual distance-to-go. Because each training problem $\Pi^i$ may produce a different number of examples $m_i$, we use the average RMSE over all problems. This ensures that we do not assign more weight to problems with more examples. If $f$ is a prediction function mapping a vector to the reals, then:

$$\text{RMSE} = \frac{1}{n} \sum_{i=1}^{n} \sqrt{\frac{1}{m_i} \sum_{j=1}^{m_i} (f(x_j^i) - y_j^i)^2}.$$

The first learning technique we applied is ridge regression (RR) [Hoerl and Kennard, 1970]. This serves as a baseline to compare to the results of Yoon et al. [Yoon et al., 2008]. Ridge regression is a regularized version of ordinary least squares (OLS) regression. The regularization trades off optimizing the squared error against preferring low-magnitude $w$ using a parameter $\lambda$. This results in the following optimization problem. Letting $\phi(X)$ be the design matrix of concatenated features $\phi(x_j^i)$ and $Y$ be the vector concatenation of $y_j^i$ for all $i, j$, we wish to find

$$\min_w ||\phi(X)w - Y||^2 + \lambda ||w||^2.$$

This technique is advantageous because it can be quickly solved in closed form for reasonably sized $\phi(X)$, yielding the weight vector

$$w = (\phi(X)^T \phi(X) + \lambda I)^{-1} \phi(X)^T Y.$$

Optimizing RMSE directly, with no penalty $\lambda$, will yield a weight vector that performs well on the training data but might not generalize well to previously unseen problems. Increasing $\lambda$ forces the magnitude of $w$ to be smaller, which prevents the resulting $f$ from “overfitting” the training data and therefore failing to generalize to new examples. This is especially important in our application, as we are trying to learn a heuristic that generalizes across the full state-space from only a few representative plans. We select an appropriate value of $\lambda$ by performing domain-wise leave-one-out cross validation (LOOCV): for different possible values of $\lambda$ in a domain with $n$ training problems, we train on data from $n-1$ training problems, evaluate the resulting heuristic on the remaining problem according to the RMSE loss function, and average the scores from holding out each problem instance. We select the $\lambda$ value for which the LOOCV RMSE is minimized over a logarithmic scale.
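For concreteness, here is a minimal NumPy sketch of the closed-form ridge solution above (our own illustration on toy stand-in data, not the paper's implementation):

```python
import numpy as np

def ridge_weights(Phi, Y, lam):
    """Closed-form ridge solution w = (Phi^T Phi + lam*I)^(-1) Phi^T Y."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ Y)

# Toy stand-in data: 100 examples, 5 features.
rng = np.random.default_rng(0)
Phi = rng.random((100, 5))
Y = Phi @ np.array([2.0, 0.0, 1.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(100)
print(ridge_weights(Phi, Y, lam=1.0))
```

In practice the same routine would be wrapped in the domain-wise LOOCV loop described above, sweeping `lam` over a logarithmic grid.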
### 5.2 Learning to Rank

The RMSE, however, is not the most appropriate metric for our learning application. We are learning a heuristic for greedy search, which uses the heuristic solely to determine open list priority. The value of the heuristic per se does not govern the search performance, which depends most directly on the ordering on states induced by the heuristic. In this context, any monotonically increasing function of a heuristic results in the same ranking and performance. A heuristic may have arbitrarily bad RMSE despite performing well. For these reasons, we consider the Kendall rank correlation coefficient ($\tau$), a nonparametric ranking statistic, as a loss function. It represents the normalized difference between the number of correct rankings and incorrect rankings over all ranking pairs. As with the RMSE, we compute the average $\tau$ across problems. The separation of problems is even more important here. Our $\tau$ only scores rankings between examples from the same problem, as examples from separate problems are never encountered together in the same search. This provides a major source of leverage over an ordinary regression framework. Heuristics are not penalized for producing inconsistent distance-to-go values across multiple problems, allowing the learner to devote more effort to improving the per-problem rankings. Let $s(i; j, k)$ score the concordance or discordance of the ranking function $f$ for examples $\langle x_j^i, y_j^i \rangle$ and $\langle x_k^i, y_k^i \rangle$ from the same problem $\Pi^i$:

$$s(i; j, k) = \begin{cases} +1 & \text{if } \text{sgn}(f(x_k^i) - f(x_j^i)) = \text{sgn}(y_k^i - y_j^i) \\ -1 & \text{if } \text{sgn}(f(x_k^i) - f(x_j^i)) = -\text{sgn}(y_k^i - y_j^i) \\ 0 & \text{if } f(x_k^i) - f(x_j^i) = 0 \end{cases}.$$

Then the Kendall rank correlation coefficient is given by

$$\tau = \frac{1}{n} \sum_{i=1}^{n} \frac{2}{m_i(m_i - 1)} \sum_{j=1}^{m_i} \sum_{k=j+1}^{m_i} s(i; j, k).$$

Note that each $y_j^i$ is unique per problem $\Pi^i$ because our examples come from a single trajectory. Observe that $\tau \in [-1, 1]$; values close to one indicate that the ranking induced by the heuristic $f$ has strong positive correlation with the true ranking of states as given by the actual labels. Conversely, values close to negative one indicate strong negative correlation. If our loss function is $\tau$, it is more effective to optimize $\tau$ directly in the learning process. To this end, we use Rank Support Vector Machines (RankSVM) [Joachims, 2002]. RankSVMs are variants of SVMs which penalize the number of incorrectly ranked training examples. Like SVMs, RankSVMs also have a parameter $C$ used to provide regularization. Additionally, their formulation uses the hinge loss function to make the learning problem convex. Thus, a RankSVM finds the $w$ vector that optimizes a convex relaxation of $\tau$. Our formulation of the RankSVM additionally takes into account the fact that we only wish to rank training examples from the same problem. Our formulation is the following:

$$\min_w ||w||^2 + C \sum_{i=1}^{n} \sum_{j=1}^{m_i} \sum_{k=j+1}^{m_i} \xi_{ijk}$$

s.t. $\quad \phi(x_j^i)^T w \geq \phi(x_k^i)^T w + 1 - \xi_{ijk}, \forall y_j^i \geq y_k^i, \forall i$

$\quad \xi_{ijk} \geq 0, \forall i, j, k.$

The first constraint can also be rewritten to look similar to the original SVM formulation. In this form, the RankSVM can be viewed as classifying whether $x^i_j, x^i_k$ are properly ranked.
$$\left( \phi(x^i_j) - \phi(x^i_k) \right)^T w \geq 1 - \xi_{ijk}, \quad \forall y^i_j \geq y^i_k, \quad \forall i$$

Notice that the number of constraints and slack variables, corresponding to the number of rankings, grows quadratically in the size of each problem. This makes training the RankSVM more computationally expensive than RRs or SVMs. However, there are efficient methods for training these, and other SVMs, when considering just the linear, primal form of the problem [Joachims, 2006; Franc and Sonnenburg, 2009]. It is important to note that we generate a number of constraints that is quadratic only in the length of any given training plan, and do not attempt to rank all the actions of all the training plans jointly; this allows us to increase the number of training example plans without dramatically increasing the size of the optimization problem. An additional advantage of RankSVM is that it supports the inclusion of the non-negativity constraint $w \geq 0$, which provides additional regularization. Because each feature represents a count of actions or action pairs, the values are always non-negative, as are the target values. We generally expect that DAGs with a large number of actions indicate that the state is far from the goal. The non-negativity constraint allows us to incorporate this prior knowledge in the model, which can sometimes improve the generalization of the learned heuristic. As in RR, we select $C$ using a line search over a logarithmic scale, to maximize a cross-validated estimate of $\tau$. As a practical note, we start with an over-regularized model where $C \approx 0$ and increase $C$ until reaching a local optimum, because SVMs are trained much more efficiently for small $C$.

### 6 Results

We implemented our planners using the FastDownward framework [Helmert, 2006]. Each planning problem is compiled to a representation similar to SAS+ [Bäckström and Nebel, 1995] using the FastDownward preprocessor. However, the predicates that represent each SAS+ (variable, value) pair are still stored so that actions and states can be mapped back to their prior form. We used the `dlib` C++ machine learning library to implement the learning algorithms [King, 2009]. We experimented on four domains from the 2014 IPC learning track [Vallati et al., 2015]: *elevators*, *transport*, *parking*, and *no-mystery*. For each domain, we constructed a set of unique examples with the competition problem generators by sampling parameters that cover the competition parameter space. We use a variant of the 2014 FastDownward Stone Soup portfolio planner [Helmert et al., 2011], with a large timeout and memory limit, to generate training example plans. We trained on at most 10 examples randomly selected from the set of problems our training portfolio planner was able to solve, and then tested on the remaining problems. For each experiment, we report the following values:

- **Cov**: coverage, or total number of problems solved;
- **Len**: mean plan length;
- **Run T**: mean planning time in seconds;
- **Exp**: mean number of expansions;
- **RMSE**: RMSE of the learned heuristic;
- $\tau$: Kendall rank correlation coefficient of the learned heuristic;
- $\lambda/C$: regularization parameter value ($\lambda$ for RR and $C$ for RankSVM);
- **Feat**: number of nonzero weights learned relative to the total number of features; and
- **Train T**: runtime to train the heuristic learner in seconds.

Each planner was run on a single 2.5 GHz processor for 30 minutes with 5 GB of memory.
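To connect the two objectives of Section 5 to code, here is a compact Python sketch of our own simplification: the paper's planner uses dlib, while scikit-learn's `LinearSVC` is assumed here as a stand-in primal hinge-loss solver. It computes the per-problem $\tau$ and trains a linear RankSVM on within-problem feature differences.

```python
import numpy as np
from sklearn.svm import LinearSVC

def kendall_tau(f_vals, y_vals):
    """Average of s(i; j, k) over pairs within one problem (labels assumed unique)."""
    m, total = len(y_vals), 0
    for j in range(m):
        for k in range(j + 1, m):
            df = np.sign(f_vals[k] - f_vals[j])
            dy = np.sign(y_vals[k] - y_vals[j])
            total += int(df == dy) - int(df != 0 and df == -dy)
    return 2.0 * total / (m * (m - 1))

def ranksvm_weights(problems, C=1.0):
    """problems: list of (Phi, y) per training problem, Phi of shape (m_i, d).
    Difference vectors are built only within each problem, as in the text."""
    diffs, labels = [], []
    for Phi, y in problems:
        for j in range(len(y)):
            for k in range(j + 1, len(y)):
                d = Phi[j] - Phi[k]
                sign = 1.0 if y[j] > y[k] else -1.0
                diffs.extend([sign * d, -sign * d])  # balanced mirrored pair
                labels.extend([1, -1])
    svm = LinearSVC(C=C, fit_intercept=False, loss="hinge", max_iter=100000)
    svm.fit(np.asarray(diffs), np.asarray(labels))
    return svm.coef_.ravel()  # w; the learned heuristic is f(x) = phi(x) @ w
```

The non-negativity (NN) variant would additionally constrain $w \geq 0$, which `LinearSVC` does not support directly; a projected or constrained solver would be needed for that.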
We only include the results of the original CEA heuristic on *elevators*, as the default heuristic was able to solve each problem and the heuristics learned using CEA all performed similarly. The heuristics learned by RankSVM are able to solve more problems than those learned using ridge regression. Within a domain, $\tau$ seems to be positively correlated with the number of problems solved, while the RMSE is not. The pairwise-action features outperform the single-action features in RankSVM, making it worthwhile to incur a larger heuristic evaluation time for improved heuristic strength. The CEA learned heuristics performed slightly better than the FF learned heuristics.

On *transport* and *parking*, the training portfolio planner was only able to solve the smallest problems within the parameter space. Thus, our RankSVM learners demonstrate the ability to learn from smaller problems and perform well on larger problems. In separate experiments, we observed that both artificially over-regularized and under-regularized learners performed poorly, indicating that selection of the regularization parameter is important to the learning process.

The learned heuristics perform slightly worse than the standard heuristics on *no-mystery* despite having almost perfect $\tau$ values. In separate experiments using eager best-first search, the learned heuristics perform slightly better on *no-mystery*, but the improvement is not significant. This domain is known to be challenging for heuristics because it contains a large number of dead-ends. We observed that $\tau$ does not seem sufficient for understanding heuristic performance on domains with harmful dead-ends. Our hypothesis is that failing to recognize a dead-end is often more harmful than incorrectly ranking nearby states and should be handled separately from learning a heuristic. A topic for future work is to combine our learned heuristics with learned dead-end detectors.

Inclusion of the non-negativity constraint (NN) on *transport* significantly improved the coverage of the FF learned heuristic over the normal RankSVM formulation. We believe that this constraint can sometimes improve generalization in domains with a large variance in size or specification. For example, the *transport* generator samples problems involving either two or three cities, leading to a bimodal distribution of problems.

Finally, we tested two learned heuristics on the five evaluation problems per domain chosen in the IPC 2014 learning track.

---
1 Because heuristic values are required to be integers in this framework, we scale up and then round predicted heuristic values, in order to capture more of the precision in the values. Recall that scaling will not alter the planner's performance, because arbitrary positive affine transformations of $f(x)$ will not affect the resulting ranking in greedy search.
2 We use the arithmetic mean for plan length and geometric means for planning time and number of expansions, and report these statistics only for solved instances; RMSE and $\tau$ values are cross-validation estimates.
*elevators*

| Method | Cov | Len | Run T | Exp. | RMSE | $\tau$ | $\lambda/C$ | Feat. | Train T |
|---|---|---|---|---|---|---|---|---|---|
| FF Original | 14 | 318 | 196 | 17833 | 34.370 | 0.9912 | N/A | N/A | N/A |
| FF RR Single | 22 | 546 | 504 | 34970 | 4.091 | 0.9948 | 100 | 9/9 | 3.133 |
| FF RR Pair | 15 | 561 | 308 | 20985 | 3.789 | 0.9971 | 1000 | 53/53 | 11.686 |
| FF RSVM Single | 34 | 375 | 403 | 23765 | 79.867 | 0.9967 | 0.1 | 9/9 | 55.681 |
| FF RSVM Pair | 34 | 631 | 123 | 7083 | 418.828 | 0.9996 | 1 | 53/53 | 140.786 |
| **FF NN RSVM Pair** | **35** | **655** | **61** | **10709** | **46.296** | **0.9992** | **1** | **51/53** | **125.702** |
| CEA Original | 35 | 397 | 163 | 4504 | 21.494 | 0.9973 | N/A | N/A | N/A |

*transport*

| Method | Cov | Len | Run T | Exp. | RMSE | $\tau$ | $\lambda/C$ | Feat. | Train T |
|---|---|---|---|---|---|---|---|---|---|
| FF Original | 5 | 588 | 470 | 18103 | 126.193 | 0.8460 | N/A | N/A | N/A |
| FF RR Single | 0 | None | None | None | 31.518 | 0.9303 | 100 | 6/6 | 3.569 |
| FF RR Pair | 4 | 529 | 560 | 27866 | 27.570 | 0.9392 | 10000 | 32/32 | 11.028 |
| FF RSVM Single | 21 | 1154 | 650 | 29452 | 149.003 | 0.9720 | 0.1 | 6/6 | 106.901 |
| FF RSVM Pair | 20 | 587 | 178 | 8896 | 162.141 | 0.9797 | 0.001 | 32/32 | 117.808 |
| **FF NN RSVM Pair** | **31** | **663** | **206** | **7803** | **141.273** | **0.9798** | **0.01** | **17/32** | **287.586** |
| CEA Original | 9 | 448 | 542 | 9064 | 57.819 | 0.9314 | N/A | N/A | N/A |
| CEA RR Single | 11 | 493 | 436 | 6921 | 33.032 | 0.9420 | 10000 | 6/6 | 4.536 |
| CEA RR Pair | 2 | 609 | 1602 | 40327 | 30.731 | 0.9318 | 100 | 45/45 | 15.716 |
| CEA RSVM Single | 18 | 722 | 588 | 11334 | 130.653 | 0.9748 | 0.1 | 6/6 | 158.523 |
| **CEA RSVM Pair** | **31** | **650** | **225** | **3526** | **159.139** | **0.9804** | **0.0001** | **45/45** | **244.164** |
| CEA NN RSVM Pair | 29 | 696 | 277 | 9006 | 191.064 | 0.9795 | 0.0001 | 29/45 | 528.665 |

*parking*

| Method | Cov | Len | Run T | Exp. | RMSE | $\tau$ | $\lambda/C$ | Feat. | Train T |
|---|---|---|---|---|---|---|---|---|---|
| FF Original | 0 | None | None | None | 6.101 | 0.9525 | N/A | N/A | N/A |
| FF RR Single | 0 | None | None | None | 4.571 | 0.9648 | 100 | 7/7 | 0.201 |
| FF RR Pair | 2 | 156 | 1419 | 33896 | 4.285 | 0.9757 | 100 | 40/40 | 0.570 |
| FF RSVM Single | 0 | None | None | None | 10.468 | 0.9745 | 0.01 | 7/7 | 8.423 |
| FF RSVM Pair | 8 | 185 | 208 | 2852 | 18.262 | 0.9918 | 0.1 | 40/40 | 7.030 |
| FF NN RSVM Pair | 6 | 183 | 358 | 5891 | 143.063 | 0.9941 | 10 | 26/40 | 7.119 |
| CEA Original | 0 | None | None | None | 15.885 | 0.9628 | N/A | N/A | N/A |
| CEA RR Single | 0 | None | None | None | 4.667 | 0.9669 | 0.01 | 7/7 | 0.277 |
| CEA RR Pair | 1 | 280 | 1230 | 48180 | 4.448 | 0.9660 | 10 | 47/47 | 0.738 |
| CEA RSVM Single | 0 | None | None | None | 7.950 | 0.9757 | 0.1 | 7/7 | 10.830 |
| **CEA RSVM Pair** | **10** | **272** | **81** | **2147** | **45.823** | **0.9918** | **1** | **47/47** | **10.237** |
| CEA NN RSVM Pair | 10 | 260 | 70 | 1690 | 140.297 | 0.9938 | 10 | 27/47 | 9.179 |
*no-mystery*

| Method | Cov | Len | Run T | Exp. | RMSE | $\tau$ | $\lambda/C$ | Feat. | Train T |
|---|---|---|---|---|---|---|---|---|---|
| FF Original | 4 | 31 | 583 | 5658745 | 3.462 | 0.9841 | N/A | N/A | N/A |
| **FF RR Single** | **4** | **30** | **1004** | **8385159** | **1.662** | **0.9861** | **100** | **6/6** | **0.085** |
| FF RR Pair | 2 | 31 | 700 | 3898861 | 1.622 | 0.9902 | 1000 | 21/21 | 0.193 |
| FF RSVM Single | 1 | 26 | 1411 | 16201215 | 21.069 | 0.9871 | 100 | 6/6 | 0.712 |
| FF RSVM Pair | 2 | 28 | 892 | 6894959 | 39.350 | 0.9968 | 1 | 21/21 | 0.914 |
| FF NN RSVM Pair | 1 | 29 | 1049 | 7973003 | 80.588 | 0.9972 | 10 | 17/21 | 1.024 |
| CEA Original | 3 | 30 | 73 | 107773 | 16.851 | 0.9579 | N/A | N/A | N/A |
| CEA RR Single | 2 | 28 | 9 | 26319 | 1.824 | 0.9890 | 100 | 6/6 | 0.069 |
| CEA RR Pair | 3 | 32 | 104 | 169434 | 1.717 | 0.9892 | 1000 | 32/32 | 0.342 |
| CEA RSVM Single | 2 | 28 | 12 | 33559 | 36.457 | 0.9916 | 1 | 6/6 | 1.283 |
| CEA RSVM Pair | 3 | 32 | 34 | 46501 | 6.358 | 0.9964 | 0.01 | 32/32 | 4.023 |
| CEA NN RSVM Pair | 3 | 31 | 190 | 264225 | 55.141 | 0.9970 | 1 | 16/32 | 62.608 |

Table 1: Results from the *elevators*, *transport*, *parking*, and *no-mystery* IPC Learning Track 2014 problems.

Both the FF RSVM Pair heuristic and the CEA RSVM Pair heuristic solved all 5/5 problems in *elevators*, *transport*, and *parking*, but only 1/5 problems in *no-mystery*.

### 7 Conclusion

Our results indicate that, for greedy search, learning a heuristic is best viewed as a ranking problem. The Kendall rank correlation coefficient $\tau$ is a better indicator of a heuristic's quality than the RMSE, and it is effectively optimized using the RankSVM learning algorithm. Pairwise-action features outperformed simpler features. Further work involves combining features from several heuristics, learning complementary search control using our features, and incorporating the learned heuristics in planning portfolios.

### Acknowledgments

We gratefully acknowledge support from NSF grants 1420927 and 1523767, from ONR grant N00014-14-1-0486, and from ARO grant W911NF1410433. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.

### References

[Arfaee et al., 2011] Shahab Jabbari Arfaee, Sandra Zilles, and Robert C Holte. Learning heuristic functions for large state spaces. *Artificial Intelligence*, 175(16):2075–2098, 2011.

[Bäckström and Nebel, 1995] Christer Bäckström and Bernhard Nebel. Complexity results for SAS+ planning. *Computational Intelligence*, 11(4):625–655, 1995.

[Fikes and Nilsson, 1971] Richard E. Fikes and Nils J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. *Artificial Intelligence*, 2:189–208, 1971.

[Franc and Sonnenburg, 2009] Vojtěch Franc and Sören Sonnenburg. Optimized cutting plane algorithm for large-scale risk minimization. *The Journal of Machine Learning Research*, 10:2157–2192, 2009.

[Helmert and Geffner, 2008] Malte Helmert and Héctor Geffner. Unifying the causal graph and additive heuristics. In *ICAPS*, pages 140–147, 2008.

[Helmert et al., 2011] Malte Helmert, Gabriele Röger, Jendrik Seipp, Erez Karpas, Jörg Hoffmann, Emil Keyder, Raz Nissim, Silvia Richter, and Matthias Westphal. Fast Downward Stone Soup. *Seventh International Planning Competition*, pages 38–45, 2011.

[Helmert, 2006] Malte Helmert. The Fast Downward planning system. *Journal of Artificial Intelligence Research*, 26:191–246, 2006.
[Hoerl and Kennard, 1970] Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. *Technometrics*, 12(1):55–67, 1970.

[Hoffmann and Nebel, 2001] Jörg Hoffmann and Bernhard Nebel. The FF planning system: Fast plan generation through heuristic search. *Journal of Artificial Intelligence Research*, 14:253–302, 2001.

[Hoffmann, 2005] Jörg Hoffmann. Where 'ignoring delete lists' works: Local search topology in planning benchmarks. *Journal of Artificial Intelligence Research*, pages 685–758, 2005.

[Hoffmann, 2011] Jörg Hoffmann. Where 'ignoring delete lists' works, part II: Causal graphs. In *ICAPS*, 2011.

[Jiménez et al., 2012] Sergio Jiménez, Tomás De la Rosa, Susana Fernández, Fernando Fernández, and Daniel Borrajo. A review of machine learning for automated planning. *The Knowledge Engineering Review*, 27(04):433–467, 2012.

[Joachims, 2002] Thorsten Joachims. Optimizing search engines using clickthrough data. In *Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pages 133–142. ACM, 2002.

[Joachims, 2006] Thorsten Joachims. Training linear SVMs in linear time. In *Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pages 217–226. ACM, 2006.

[King, 2009] Davis E King. Dlib-ml: A machine learning toolkit. *The Journal of Machine Learning Research*, 10:1755–1758, 2009.

[Nakhost, 2010] Hootan Nakhost. Action elimination and plan neighborhood graph search: Two algorithms for plan improvement. In *ICAPS*, 2010.

[Richter and Helmert, 2009] Silvia Richter and Malte Helmert. Preferred operators and deferred evaluation in satisficing planning. In *ICAPS*, 2009.

[Vallati et al., 2015] M. Vallati, L. Chrpa, M. Grzes, T.L. McCluskey, M. Roberts, and S. Sanner. The 2014 international planning competition: Progress and trends. *AI Magazine*, 2015.

[Virseda et al., 2013] Jesús Virseda, Daniel Borrajo, and Vidal Alcázar. Learning heuristic functions for cost-based planning. *Planning and Learning*, page 6, 2013.

[Wilt and Ruml, 2012] Christopher Makoto Wilt and Wheeler Ruml. When does weighted A* fail? In *SOCS*, 2012.

[Wilt and Ruml, 2015] Christopher Makoto Wilt and Wheeler Ruml. Building a heuristic for greedy search. In *Eighth Annual Symposium on Combinatorial Search*, 2015.

[Xu et al., 2007] Yuehua Xu, Alan Fern, and Sung Wook Yoon. Discriminative learning of beam-search heuristics for planning. In *IJCAI*, 2007.

[Xu et al., 2009] Yuehua Xu, Alan Fern, and Sungwook Yoon. Learning linear ranking functions for beam search with application to planning. *The Journal of Machine Learning Research*, 10:1571–1610, 2009.

[Xu et al., 2010] Yuehua Xu, Alan Fern, and Sung Wook Yoon. Iterative learning of weighted rule sets for greedy search. In *ICAPS*, pages 201–208, 2010.

[Yoon et al., 2006] Sung Wook Yoon, Alan Fern, and Robert Givan. Learning heuristic functions from relaxed plans. In *ICAPS*, 2006.

[Yoon et al., 2008] Sungwook Yoon, Alan Fern, and Robert Givan. Learning control knowledge for forward search planning. *The Journal of Machine Learning Research*, 9:683–718, 2008.
SESSION II
Chairman: PROFESSOR J. M. HINTON

The function of sleep

IAN OSWALD M.D., D.Sc.
Department of Psychiatry, University of Edinburgh

Summary
Evidence is reviewed that points to sleep as a time during which synthetic processes for growth and repair are enhanced. REM (paradoxical) sleep seems especially related to increased synthetic processes in the brain.

Introduction
The 1960s saw a great volume of new knowledge about what happens during sleep. Some research workers felt it improper then to inquire what purpose or function might be served. The first explicit modern proposal that sleep was related to anabolic activity in the brain was made, so far as I know, by West (1969). He suggested that certain metabolic products would have to be dissipated, that this might happen mostly during a catabolic period while slow wave sleep (NREM sleep) was in progress, and that then an 'anabolic phase occurs during fast-wave or paradoxical sleep' (REM sleep). Just at that time the implications of the Japanese discovery that growth hormone was secreted in large amounts during sleep (Takahashi, Kipnis and Daughaday, 1968; Honda et al., 1969), and during slow-wave sleep in particular (Sassin et al., 1969b), were beginning to dawn, and I was led to write of slow (NREM) sleep that 'its chief function is for bodily restitution, while REM sleep may be chiefly for brain repair' (Oswald, 1969). These themes were later developed (Oswald, 1970, 1973), and now Hartmann (1973) and Stern and Morgane (1974) have espoused the same general belief, but independently each suggests that REM sleep has a particular role of restoring or maintaining the functioning of catecholamine mechanisms within the brain. Hartmann points particularly to the fact that NREM sleep always precedes a shorter period of REM sleep; this, he argues, demands explanation, and can only be explained if one thinks of certain molecules being synthesized in the brain during NREM sleep and then being further processed during REM sleep.

There are of course those who will seek to explain sleep in behavioural terms, seeing the inertia as an adaptive response (Webb, 1974). Ideas of this nature are not incompatible with the notion that certain biochemical processes may be especially favoured during sleep. Nature is economical and in the course of evolution many purposes might have come to be best served during sleep. We can see oscillations or rhythms of activity throughout nature—motor activity and motor inactivity, wakefulness and sleep. Organisms have both external and internal requirements for energy. Energy is sometimes directed to the outside world, and at other times must be used for such internal needs as repair and renewal of cells and synthesis for cellular division. Many enzymes can serve more than one function, for example promoting a catabolic process and, when conditions are slightly changed, an anabolic process. Since nature is economical, some enzyme systems may function to direct energy expenditure towards the external world during wakefulness, but towards internal needs such as the synthesis of molecules during sleep.

Synthetic activity and sleep
Cell division is one customer for internal energy expenditure. Peaks of mitotic activity occur in human bone marrow and in human skin soon after the usual sleep onset time (Cooper, 1939; Killmann et al., 1962; Mauer, 1965; Fisher, 1968).
In rats and mice, cellular division in epidermis (Halberg and Barnum, 1961), bone marrow (Clark and Korst, 1969), pineal gland (Renzoni and Quay, 1964), liver parenchyma (Halberg and Barnum, 1961), blood reticulocytes (Clark and Korst, 1969) and eosinophils (Halberg, 1960) shows circadian rhythms with maxima during the hours when the animals are predominantly asleep.

A further step in the inferential chain is provided by the new knowledge about hormone secretion during sleep. We now know that four important hormones concerned with the regulation of tissue growth and development are sleep-dependent. Just because a hormone is present in greater amounts during sleep does not necessarily mean that it is sleep-dependent—corticosteroids rise during the later part of the sleep period, but they do so whether the individual is awake or asleep, and here the rise is a manifestation of a circadian rhythm and not of sleep-dependence. Sleep-dependent hormones are those which can be shown to be secreted in large amounts during sleep at the normal time, but not if the individual stays awake, yet will be secreted if he sleeps, say, 6 hr or 12 hr later than the normal time. Human growth hormone is not merely sleep-dependent but requires the presence of slow-wave sleep Stages 3 and 4 (Sassin et al., 1969a; Schnure, Raskin and Lipman, 1971). Also dependent on sleep, but not closely linked with any particular EEG-defined stage of sleep, are prolactin (Sassin et al., 1973) and, in early puberty only, luteinizing hormone and testosterone (Boyar et al., 1972, 1974). It seems a sensible provision of nature that maximal mitoses should be related in time to high blood levels of anabolic hormones. Growth hormone, especially, has been shown to increase the rate of synthesis of protein and RNA (Korner, 1965).

It has been known for many years that, after prolonged wakefulness, slow wave sleep Stages 3 and 4 take immediate priority (Berger and Oswald, 1962). It is now widely recognized that whereas the amount of REM sleep seems to be governed chiefly by the circadian cycle, the amount of slow wave sleep at any given age is determined by the need for sleep as judged by the number of hours of continuous wakefulness that have elapsed. An especial role for this kind of sleep in promoting restoration was further suggested by the report of Baekeland and Lasky (1966) that when athletes had exercised hard during the day they got more slow-wave sleep at night, and by the report of Hobson (1968) that when cats had been obliged to take extra physical exercise, they too got a significant excess of subsequent slow wave sleep. A variety of authors have failed to confirm the finding of Baekeland and Lasky, but none has exactly repeated their experiment by using athletes. Nevertheless, Adamson and his colleagues (1974) found that there is a significant increase of growth hormone during sleep in men who have taken strenuous exercise during the day, compared with days when they have taken only an ordinary amount of exercise. They also found that on the nights when the anabolism-promoting growth hormone was increased, the catabolism-promoting corticosteroids were significantly reduced. Another way of increasing the metabolism of tissue reserves is by starvation, and under these circumstances an increase in the protein-conserving growth hormone in the blood has been observed by Parker, Rossman and Vanderlaan (1972).
Acute starvation is, at the same time, associated with a significant increase in the amount of slow wave sleep Stages 3 and 4 (MacFadyen, Oswald and Lewis, 1973; Karacan et al., 1973). Yet another condition that increases demands on tissue reserves is hyperthyroidism. Its converse, hypothyroidism, was found to be associated with loss of Stages 3 and 4 sleep that returned during treatment (Kales et al., 1967). We have found greatly increased amounts of Stage 3 and 4 slow wave sleep in hyperthyroidism and suspect that there is also an increase in growth hormone secretion (Dunleavy et al., 1974). Where loss of weight is induced by the anorectic drug fenfluramine, there is also an increase of Stage 3 and 4 sleep (Lewis et al., 1971), and there can be an increase of nocturnal growth hormone (Dunleavy, Oswald and Strong, 1973). Even the extra demands of an additional hour of wakefulness in the middle of the night lead to a significant increase in the amount of Stage 3 and 4 sleep and of plasma growth hormone in the remainder of the night (Beck et al., 1975). It is as if the extra wakefulness demands extra amounts of the sleep of high restorative value.

**REM sleep and cerebral synthetic processes**

It has been repeatedly shown that cerebral blood flow during paradoxical sleep is considerably increased, even to well above the levels of wakefulness (Townsend, Prinz and Obrist, 1973). Since blood flow through a tissue is normally proportional to oxidative metabolism, and since during REM sleep the brain could hardly be working strenuously in order to cope with the external environment, one can only suppose that internal metabolic needs are being met. This is in keeping with the report by Van den Noort and Brine (1970) that brain ATP rises during the sleep of rats, and with the finding that RNA synthesis in rabbit cortex increased as the sleep EEG became less synchronized (Vitale-Neugebauer et al., 1970). There is, moreover, a lot of evidence suggesting that the high proportion of REM sleep, or its equivalent, in young animals is related in time to the period of most rapid brain growth, whereas senility, with its failure of cerebral synthetic processes and shrivelling of the brain, is associated with decreased REM sleep; there is also the deficit of REM sleep in mental defectives (Petre-Quadens and Lee, 1970; Feinberg, 1968). One might suppose that in mental defectives fewer synthetic processes are required for turnover and maintenance.

There have been a number of animal experimenters who have reported that extra learning tasks are associated with more REM sleep in rats and that lack of REM sleep impairs learning performance, but most of these experiments are open to uncertainties of interpretation. We were unable to find that massive learning through the wearing of distorting spectacles caused any increase of REM sleep in man (Allen et al., 1972), but since most cerebral protein synthesis must be for the maintenance of existing tissue, this is not really very surprising. Another telling line of evidence stems from the fact that when the brain recovers from poisoning, there is no excess of slow wave sleep during the subsequent weeks, but there is usually a very large excess of REM sleep (Haider and Oswald, 1970; Oswald et al., 1973). In the Soviet Union, Demin and Rubinskaya (1974) have measured protein and RNA in cerebral neurones and found them to be decreased in association with REM-deprivation, and one of their colleagues, Dr A.
Panov, has since repeated the work and confirmed that this is so even when the REM-deprivation procedure is relatively short and insufficient to cause any corticosteroid signs of a stress reaction. In Rostov-on-Don, Kogan et al. (1975) have now been able, with a most elegant technique, to determine the rate of protein and RNA synthesis in small cerebral biopsies in relation to the stages of sleep of the cat, and they find a 30% reduction below waking levels during slow wave sleep and a rise of about 7% above waking levels during REM sleep.

Stern and Morgane (1974) point to diminished responsiveness of catecholamine systems following REM sleep deprivation, and to greatly increased REM time when substances that depress catecholamine activity, such as reserpine, are given. Whether brain catecholamines really are in a special category in regard to brain synthetic processes during REM sleep, only time will tell. Although the most crucial experiments, which would have to involve incorporation of labelled amino acids, still remain to be done, we may by this time conclude that there is strong evidence for regarding sleep as a time specially important for synthetic processes in the body, with REM sleep being particularly important for synthetic processes in the brain.

References
Adamson, L., Hunter, W.M., Ogunremi, O.O., Oswald, I. & Percy-Robb, I.W. (1974) Growth hormone increase during sleep after daytime exercise. Journal of Endocrinology, 62, 473.
Allen, S.R., Oswald, I., Lewis, S. & Tagney, J. (1972) The effects of distorted visual input on sleep. Psychophysiology, 9, 498.
Bækeland, F. & Lasky, R. (1966) Exercise and sleep patterns in college athletes. Perceptual and Motor Skills, 23, 1203.
Beck, U., Březinová, V., Hunter, W.M. & Oswald, I. (1975) Plasma growth hormone and slow wave sleep increase after interruption of sleep. Journal of Clinical Endocrinology and Metabolism, 40, 812.
Berger, R.J. & Oswald, I. (1962) Effects of sleep deprivation on behaviour, subsequent sleep, and dreaming. Journal of Mental Science, 108, 457.
Boyar, R., Finkelstein, J., Roffwarg, H., Kapen, S., Weitzman, E. & Hellman, L. (1972) Synchronization of augmented luteinizing hormone secretion with sleep during puberty. New England Journal of Medicine, 287, 582.
Boyar, R.M., Rosenfeld, R.S., Kapen, S., Finkelstein, J.W., Roffwarg, H.P., Weitzman, E.D. & Hellman, L. (1974) Simultaneous augmented secretion of luteinizing hormone and testosterone during sleep. Journal of Clinical Investigation, 54, 609.
Clark, R.H. & Korst, D.R. (1969) Circadian periodicity of bone marrow mitotic activity and reticulocyte counts in rats and mice. Science, 166, 236.
Cooper, Z.K. (1939) Mitotic rhythm in human epidermis. Journal of Investigative Dermatology, 2, 289.
Demin, N.N. & Rubinskaya, N.L. (1974) Content of proteins and RNA in the neurones and their glial satellite cells of the supraoptical nucleus of the rat brain after deprivation of the paradoxical phase of sleep for 24 hours. Doklady Akademii nauk SSSR, 214, 940.
Dunleavy, D.L.F., Oswald, I., Brown, P. & Strong, J.A. (1974) Hyperthyroidism, sleep and growth hormone. Electroencephalography and Clinical Neurophysiology, 36, 259.
Dunleavy, D.L.F., Oswald, I. & Strong, J.A. (1973) Fenfluramine and growth hormone release. British Medical Journal, 3, 48.
Feinberg, I. (1968) The ontogenesis of human sleep and the relationship of sleep variables to intellectual function in the aged. Comprehensive Psychiatry, 9, 138.
Feinberg, I.
(1968) Eye-movement activity during sleep and intellectual function in mental retardation. Science, 159, 1256.
Fisher, L.B. (1968) The diurnal mitotic rhythm in the human epidermis. British Journal of Dermatology, 80, 75.
Haider, I. & Oswald, I. (1970) Late brain recovery processes after drug overdose. British Medical Journal, 2, 318.
Halberg, F. (1960) The 24-hour scale: a time dimension of adaptive functional organization. Perspectives in Biology and Medicine, 3, 491.
Halberg, F. & Barnum, C.P. (1961) Continuous light or darkness and circadian periodic mitosis and metabolism in C and D₈ mice. American Journal of Physiology, 201, 227.
Hartmann, E.L. (1973) The Functions of Sleep. Yale University Press, New Haven, Connecticut.
Hobson, J.A. (1968) Sleep and exercise. Science, 112, 1503.
Honda, Y., Takahashi, K., Takahashi, S., Azumi, K., Irie, M., Sakuma, M., Tsushima, T. & Shizume, K. (1969) Growth hormone secretion during nocturnal sleep in normal subjects. Journal of Clinical Endocrinology and Metabolism, 29, 20.
Kales, A., Heuser, G., Jacobson, A., Kales, J.D., Hanley, J., Zweizig, J.R. & Polson, M.J. (1967) All-night sleep studies in hypothyroid patients, before and after treatment. Journal of Clinical Endocrinology and Metabolism, 27, 1593.
Karacan, I., Rosenbloom, A.L., London, J.H., Salis, P.J., Thornby, J.I. & Williams, R.L. (1973) The effects of acute fasting on sleep and sleep growth hormone response. Psychosomatics, 14, 33.
Killmann, S.A., Cronkite, E.P., Fliedner, T.M. & Bond, V.P. (1962) Mitotic indices of human bone marrow cells. Blood, 19, 744.
Kogan, A.B., Brodsky, V.Y., Feldman, G.L. & Gusatinsky, V.N. (1975) Comparison of electrical and metabolic indices of sleep processes autoregulation. In: Proceedings of Symposium 'Self-Regulation of the Sleep Process', Academy of Sciences of the USSR, Leningrad, edited N.I. Moiseeva.
Korner, A. (1965) Growth hormone control of biosynthesis of protein and ribonucleic acid. Recent Progress in Hormone Research, 21, 205.
Lewis, S.A., Oswald, I. & Dunleavy, D.L.F. (1971) Chronic fenfluramine administration: some cerebral effects. British Medical Journal, 3, 67.
MacFadyen, U.M., Oswald, I. & Lewis, S.A. (1973) Starvation and human slow-wave sleep. Journal of Applied Physiology, 35, 391.
Mauer, A.M. (1965) Diurnal variation of proliferative activity in the human bone marrow. Blood, 26, 1.
Oswald, I. (1969) Human brain protein, drugs and dreams. Nature, London, 223, 893.
Oswald, I. (1970) Sleep, the great restorer. New Scientist, 46, 170.
Oswald, I. (1973) Is sleep related to synthetic purpose? In: Sleep: Physiology, Biochemistry, Psychology, Pharmacology, Clinical Implications, p. 225. Karger, Basel.
Oswald, I., Lewis, S.A., Tagney, J., Firth, H. & Haider, I. (1973) Benzodiazepines and human sleep. In: The Benzodiazepines, p. 613. Raven Press, New York.
Parker, C.C., Rossman, L.G. & Vanderlaan, E.F. (1972) Persistence of rhythmic human growth hormone release during sleep in fasted and nonisocalorically fed normal subjects. Metabolism, 21, 241.
Petre-Quadens, O. & Lee, C. (1970) Eye-movements during sleep: a common criterion of learning capacities and endocrine activity. Developmental Medicine and Child Neurology, 12, 730.
Renzoni, A. & Quay, W.B. (1964) Daily karyometric and mitotic rhythms of pineal parenchymal cells in the rat. American Zoologist, 4, 416.
Sassin, J.F., Frantz, A.G., Kapen, S. & Weitzman, E.D. (1973) The nocturnal rise of human prolactin is dependent on sleep.
Journal of Clinical Endocrinology and Metabolism, 37, 436.
Sassin, J.F., Parker, D.C., Johnson, L.C., Rossman, L.G., Mace, J.W. & Gotlin, R.W. (1969a) Effects of slow-wave sleep deprivation on human growth hormone release in sleep: preliminary study. Life Sciences, 8, Part I, 1299.
Sassin, J.F., Parker, D.C., Mace, J.W., Gotlin, R.W., Johnson, L.C. & Rossman, L.G. (1969b) Human growth hormone release: relation to slow-wave sleep and sleep-waking cycles. Science, 165, 513.
Schnure, J.J., Raskin, P. & Lipman, R.L. (1971) Growth hormone secretion during sleep: impairment in glucose tolerance and non-suppressibility by hyperglycemia. Journal of Clinical Endocrinology and Metabolism, 33, 234.
Stern, W.C. & Morgane, P.J. (1974) Theoretical view of REM sleep function: maintenance of catecholamine systems in the central nervous system. Behavioural Biology, 11, 1.
Takahashi, Y., Kipnis, D.M. & Daughaday, W.H. (1968) Growth hormone secretion during sleep. Journal of Clinical Investigation, 47, 2079.
Townsend, R.E., Prinz, P.N. & Obrist, W.D. (1973) Human cerebral blood flow during sleep and waking. Journal of Applied Physiology, 35, 620.
Van den Noort, S. & Brine, K. (1970) Effect of sleep on brain labile phosphates and metabolic rate. American Journal of Physiology, 218, 1434.
Vitale-Neugebauer, A., Giuditta, A., Vitale, B. & Giaquinto, S. (1970) Pattern of RNA synthesis in rabbit cortex during sleep. Journal of Neurochemistry, 17, 1261.
Webb, W.B. (1974) Sleep as an adaptive response. Perceptual and Motor Skills, 38, 1023.
West, L.J. (1969) In: Dream Psychology and the New Biology of Dreaming (Ed. by M. Kramer), p. xvi. Charles C. Thomas, Springfield, Illinois.
BRAIN ASTROCYTOMAS: A STUDY OF EPIDEMIOLOGICAL FINDINGS, TREATMENT RESULTS AND PROGNOSTIC FACTORS IN TEHRAN CANCER INSTITUTE'S RADIOTHERAPY PATIENTS

F. Amouzegar-Hashemi, P. Haddad, M. Sajjadi and K. Dehshiri
Department of Radiation Oncology, Cancer Institute, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran

Abstract - Astrocytomas, including glioblastoma multiforme (GBM), are the most common brain tumors. Post-operative radiotherapy plays an important role in their treatment. Records of all patients with a pathologic diagnosis of astrocytoma referred for radiotherapy from 1987-1992 were reviewed and prognostic factors with regard to recurrences were analyzed. During the study period, 162 astrocytoma patients were treated by radiation in our department. The male-to-female ratio was 1.4:1. The disease was most prevalent in the 3rd and 4th decades of life. Most tumors were in the cerebral hemispheres and grade IV. In nearly all patients only CT scan had been used for diagnosis, and subtotal resection had been performed. The radiation dose was mostly 5,000-5,500 cGy by standard fractionation. Follow-up was available for 91 patients, and in these patients CCNU (lomustine) chemotherapy was prescribed for high-grade tumors. Three-year local control was 77%. Grade, extent of surgery, and use of CCNU were statistically significant as prognostic factors. Also, 4 long-term GBM survivors were found. Treatment of brain astrocytomas by radiation in our department was concluded to be reasonably successful.

Acta Medica Iranica 37 (3): 155-160; 1999

Key Words: astrocytoma, glioblastoma multiforme, brain radiotherapy, chemotherapy, CCNU

INTRODUCTION

Brain tumor is the second most common cause of neurological death after cerebrovascular accidents. It is the most common solid tumor of childhood and the second most common cause of cancer death in children, after leukemias (1). Astrocytomas, including glioblastoma multiforme (GBM), are the most common primary brain tumors (2). Post-operative radiotherapy plays an important role in the treatment of astrocytomas and has led to significant improvements in their prognosis, but optimal therapy with a high rate of long-term disease-free survival is still not available (3). Adjuvant chemotherapy, too, is used in high-grade tumors with some success (4), but no major advantage has been reported yet for this modality. Considering the importance of brain tumors and the significant morbidity and mortality they cause, a retrospective study was undertaken to evaluate the epidemiological characteristics, radiotherapy details, treatment results, and prognostic factors of the brain astrocytoma patients treated in the Radiation Oncology Department of Cancer Institute, Tehran University of Medical Sciences.

MATERIAL AND METHODS

Records of all patients with a pathologic diagnosis of astrocytoma referred to our department for radiotherapy in the 5-year period of 1987-1992 were retrospectively studied. Epidemiological and treatment details were sought; patients' follow-up was checked and the number and time of tumor recurrences were evaluated. Prognostic factors (age, sex, tumor site, histologic grade, extent of surgery and use of chemotherapy) with regard to recurrence of disease were statistically analyzed by chi-square test and linear correlation.

RESULTS

During the study period, 162 patients with a pathologic diagnosis of astrocytoma were treated in our department. Ninety-five patients (57%) were male and 67 (43%) female, with a male to female ratio of 1.4:1.
The youngest patient was 2.5 and the oldest 83 years old, with a mean age of 28 years. The disease was most prevalent in the 3rd and 4th decades of life, and 40% of the patients were in this age range. The second most common age group was 1-10 years (20%). The distribution of age in our patients is shown in Fig. 1. The tumor site was in the cerebral hemispheres in 112 patients (69%), cerebellum in 22 (13%), thalamus, hypothalamus and optic chiasma in 15 (9%), brainstem in 8 (5%) and spine in 4 (2%) (Fig. 2).

Fig. 1. Age distribution. Fig. 2. Tumor site. Fig. 3. Tumor grade. Fig. 4. Symptoms.

Grade IV astrocytoma was found in 40 patients and grade III in 37, together comprising 49% of the patients. Twenty-nine patients (18%) had grade II tumors and 33 (23%) grade I. Grading had not been performed on tumors of 18 patients (Fig. 3). The most common symptoms were headache, followed by nausea and vomiting, which were present in more than 50% of the patients. Less common were sensory and motor disturbances, vision dysfunctions, convulsions, and mental problems (Fig. 4).

All patients had been diagnosed by computed tomography (CT). Only one patient had been evaluated by magnetic resonance imaging (MRI), in addition to CT. Most tumors (119, 71%) had been removed by subtotal resection, and total resection had been performed in only 9 patients (5%). In 24% of the cases (41 patients) the tumors had only been biopsied. Eleven patients had a shunt operation.

All patients received their radiotherapy by two opposed lateral fields on cobalt-60 units. Nearly all received a dose of 5,000-5,500 cGy by treatment fractions of 200 cGy, five daily fractions each week. Radiation fields were local fields with suitable margins in grade I and II astrocytomas, and whole-brain fields up to 3,000-5,000 cGy with localized boost fields afterwards in grade III and IV astrocytomas. Spine fields were posterior direct local fields with suitable margins and a dose of 5,000 cGy in 25 fractions. It should be mentioned that 16 patients (10%) did not complete their treatment, the reasons for which are not clear. Also, 4 children received their radiation to a dose of 4,400-4,600 cGy in 22-23 fractions, and two GBM patients received 6,000 cGy in 30 fractions in the last year of the study period.

From the 162 treated patients, only 91 (56%) returned for follow-up. In these patients the mean follow-up duration was 15 months, with a range of 3-60 months. Chemotherapy with CCNU (lomustine) was prescribed to 2/3 of the followed adult patients with grade IV tumors (14 out of 21) and nearly half of grade III patients (9 out of 21). Only one patient, with a grade II tumor, among the 49 followed grade I or II patients received CCNU. The decision for CCNU administration was made only by the judgment of the responsible radiation oncologist. Dose of CCNU was 100-130 mg/sq.m every 6 weeks.

Of the followed-up patients, 58 had follow-up for less than one year, 8 of whom suffered a local recurrence. Thirty-three patients were followed for more than one year, 13 of whom had a local recurrence. The total number of recurrences in the whole follow-up period was 21 (23%), which shows a local control of 77%.

The relationships of various patient, tumor, and treatment factors were evaluated for statistical significance. The highest significance was found for the correlation of age and tumor grade, showing an increase of tumor grade with age.
There was a significant association between grade and use of CCNU chemotherapy, showing more CCNU use in higher grades. Age had an effect on tumor site, with more hemispheric sites in older patients, and 3rd ventricle and spinal sites in younger ones. In addition, biopsy was often performed in 3rd ventricle and spinal tumors, with more complete surgeries in the cerebral hemispheres and cerebellum.

The effect of various epidemiological and treatment factors on tumor recurrence and prognosis was statistically analyzed. These factors included age, sex, tumor site, tumor grade, extent of surgery, and use of chemotherapy (CCNU). Statistical significance was found only for grade (higher rate of recurrence with higher grades), surgical extent (lower rate of recurrence with more complete surgery), and CCNU use (lower recurrence). It is noteworthy that after treatment with radiation and CCNU, four male GBM patients (out of 40 GBMs, 10%) survived for 4-4.5 years with no recurrence, and two of them were still disease-free at the last follow-up: 50-55 months after radiotherapy.

**DISCUSSION**

After surgical resection, radiation therapy is the single most effective treatment for malignant astrocytomas (4). Randomized trials of radiotherapy added to surgery have demonstrated a clear survival benefit (1,2). Dose escalation through radiosurgery (5) or brachytherapy boosts (6) in addition to standard external-beam treatment has been attempted, but application of these modalities is limited to approximately 20-30% of patients with malignant glioma. Another approach is the use of hyperfractionated (7) or accelerated radiotherapy (8) or concurrent use of chemotherapy with radiation (9,10), but to date these methods have not demonstrated a definite survival advantage (2).

Postoperative treatment of low-grade astrocytomas, especially in children, is controversial. Postoperative adjuvant therapy is clearly not indicated after complete surgical resection of pilocytic and other low-grade astrocytomas in children, but the use of radiotherapy following less than complete resection is reported in several series to result in better disease-free survival (11). The outcome of adult patients with total resection has been found in some series to be similar to that of patients undergoing less extensive surgery. Thus in adults postoperative radiation has been recommended after complete resection by some authors, whereas others advise that radiotherapy be withheld until there is evidence of tumor recurrence (1).

Our study could not present epidemiological data representing the Iranian patient population with astrocytomas, as the study population includes only the patients referred for radiotherapy and does not include the patients with low-grade tumors solely observed after surgery. Nonetheless, it could be informative in the absence of a population-based cancer registry. In our study, the male-to-female ratio of 1.4:1 is in accordance with international literature showing preponderance among males for most brain tumor types (2,4). Peak age was in the 3rd and 4th decades of life. The most common tumor site was the cerebral hemispheres and the most common grade was grade IV (25%), though the patients were almost equally divided between the malignant (grades III and IV) and low-grade (grades I and II) groups. This is again in agreement with international literature.
All our patients had been diagnosed by CT scanning, with only one MRI performed, although with its recent widespread availability MRI has replaced CT as the optimal method of imaging (12) and is now considered to be the imaging modality of choice for most brain tumors (1,2,4). This is probably the result of the difficult access and high cost of MRI during the study period in Iran, and of changing neuroimaging strategies.

Total resection had been performed for only 5% of our patients, and most had been treated by subtotal surgery. This demonstrates the difficulty of performing a complete resection in malignant astrocytomas, and is also a reflection of the referral nature of our patient population, as the totally resected low-grade astrocytomas might not have been sent for radiotherapy.

All patients in our study were treated by cobalt-60 radiotherapy units. It should be emphasized that brain irradiation presents no technical difficulties using either a cobalt-60 unit or a linear accelerator (13). The radiation fields included local (partial) brain fields in low-grade astrocytomas, and whole-brain fields with local boost fields after an initial dose in malignant astrocytomas. However, the use of localized fields for low-grade tumors has been the subject of dispute. Previously, whole-brain irradiation was recommended (13), and earlier trials of radiation therapy treated whole-brain fields, but recent trials of limited-field irradiation have shown survival times comparable to those obtained with whole-brain therapy (4). Therefore, generous margins and inclusion of all radiographic evidence of tumor and associated edema are the rule today (1,2) and the current treatment policy of our department.

Dose of radiation in our patients was mostly 5,000-5,500 cGy, but children tended to receive a lower dose, and GBM patients received a dose of 6,000 cGy in the last year of the study period. This again reflects the evolving treatment policies. Previously, doses like 4,500 cGy in 20 fractions were recommended for malignant astrocytomas (14), but the recent recommendation is 6,000 cGy in 30 fractions for GBM (1,2,4), and this is the current dose of radiation in our department. Attempts at higher doses through radiosurgery or brachytherapy boosts or hyperfractionated radiotherapy continue, as mentioned before.

Only 56% of the patients came back for follow-up, and in these patients the mean follow-up time was 15 months (max. 60 months). Lack of long-term follow-up covering the maximum number of patients has always been a problem of the retrospective studies in our institute, and, we believe, of Iranian studies in general. The reasons for this are multiple, and cannot be discussed here. In the followed patients the rate of tumor recurrence was 13% in one year and 23% at the last follow-up, which shows a 3-year local control of 77%. Considering the relatively short and incomplete follow-up in our study, this local control rate cannot be very accurate. But keeping in mind its limitations, it is a measure of treatment success in our department and compares favorably with international literature.

The statistical analysis of various patient, tumor and treatment factors showed significant relationships representing the increased incidence of high-grade tumors in older ages and the difficulty of more complete surgeries in tumors around the 3rd ventricle. It also showed the prevailing policy of using chemotherapy for high-grade tumors.
Increasing grades of malignancy within the astrocytoma group are generally associated directly with patient age. Low-grade astrocytomas are most common in patients 20-40 years old, anaplastic astrocytomas in patients 30-50 years old, and GBM (the most malignant tumor) in patients who are 50 or older (15). Also, the analysis of prognostic factors demonstrated the effects of grade, extent of surgery, and use of CCNU to be statistically significant.

It is recognized that higher grades of malignancy in gliomas are associated with a poorer patient prognosis (15). But the role of surgical extent in determining the eventual outcome of patients with malignant glioma is somewhat controversial. To date, no prospective, randomized studies have critically evaluated this (16), though the trend in recent literature is to support the strategy of removing as much tumor as possible. It is believed that maximal surgical resection improves the results of treatment (1,17), and our study is in agreement with this belief.

The chemotherapy prescribed to our patients was single-agent CCNU, and it was prescribed to more than half of the patients with grade III or IV tumors. The nitrosoureas, including carmustine (BCNU) and lomustine (CCNU), appear to be the most active single agents for malignant astrocytomas (18). These drugs readily cross the blood-brain barrier. BCNU is the most active in this group, but it must be used intravenously, while CCNU is used orally and is more convenient. Our study showed a lower rate of recurrence with the use of adjuvant chemotherapy. Adjuvant chemotherapy after irradiation has modestly improved survival in GBM, and more so in anaplastic astrocytoma (1,4,19). At least one study supports the use of the three-drug combination of procarbazine, CCNU, and vincristine (PCV) over single-agent BCNU (20) for this purpose. Because of this, some authorities recommend PCV for the adjuvant chemotherapy of malignant astrocytomas (1), while others consider the treatment of 60 Gy radiation plus adjuvant BCNU to be the standard to which other therapies should be compared (4); still others consider either BCNU or PCV currently acceptable (21). Considering the above, and considering the relative convenience of CCNU and its efficacy in our experience, the current policy in our department is to use CCNU as adjuvant to radiation in malignant astrocytomas. We have also proposed a randomized trial comparing adjuvant CCNU and PCV, which will hopefully be launched soon in our department.

Chemotherapy has no established role in the treatment of low-grade astrocytomas (22), but it could be useful in infants and very young children for controlling the lesion while deferring irradiation until the child is older (4,23).

No mention has been made in this report of the adverse effects of radiation. This is because, within therapeutic doses, acute radiation damage to CNS parenchyma is rare. Nausea and vomiting, and occasionally a transient worsening of pretreatment symptoms, may occur during the course of radiotherapy. But the significant side effects of CNS irradiation are late effects, which require long-term and accurate follow-up to discover and are generally difficult to diagnose and differentiate from tumor recurrence. The most serious late reaction to radiotherapy is radiation necrosis, which may appear years (peak at 3 years) after irradiation (2). We did not find any significant late effect of radiation in the study patients.
However, considering the limitations mentioned before, we may have remained ignorant of some late sequelae.

Despite all the efforts and the multimodality management of high-grade astrocytomas, treatment is unfortunately not ultimately successful, and recurrence leading to death is the rule, especially in GBMs. Long-term survivors of GBM are reported in the literature (24). In this study we found four GBM patients surviving without tumor recurrence for 4-4.5 years after irradiation and CCNU, two of them still disease-free at the last follow-up.

In conclusion, epidemiologic findings in patients with brain astrocytomas referred to our department for radiotherapy were similar to those reported in international literature. Our treatment with irradiation plus CCNU chemotherapy in high-grade tumors had a 3-year local control of 77%, comparable to international literature. Statistically significant prognostic factors included grade, extent of surgery, and use of CCNU. Considering this, the current policy in our department is to use 5,000-5,500 cGy radiotherapy with local fields for incompletely resected low-grade astrocytomas, and to use 6,000 cGy radiation with wide margins plus adjuvant CCNU in GBM. Also, a randomized trial is proposed to test combination chemotherapy (PCV) against single-agent CCNU.

REFERENCES

1. Levin VA, Leibel SA and Gutin PH. Neoplasms of the central nervous system. In: DeVita VT, Hellman S, Rosenberg SA (eds). Cancer: Principles & practice of oncology. 5th ed. Philadelphia: Lippincott-Raven, 1997: 2022-82.
2. Wara WM, Bauman GS and Sneed PK. Brain, brain stem and cerebellum. In: Perez CA, Brady LW (eds). Principles and practice of radiation oncology. 3rd ed. Philadelphia: Lippincott-Raven, 1998: 777-828.
3. Leibel SA, Scott CB and Loeffler JS. Contemporary approaches to the treatment of malignant gliomas with radiation therapy. Semin Oncol. 21:2:198-219; 1994.
4. Prados MD and Wilson CB. Neoplasms of the central nervous system. In: Holland JF, Bast RC, Morton DL and coworkers. Cancer medicine. 4th ed. Baltimore: Williams & Wilkins, 1997: 1471-1514.
5. Alexander E and Loeffler JS. Radiosurgery for primary malignant brain tumors. Semin. Surg. Oncol. 14:1:43-52; 1998.
6. Laperriere NJ, Leung PMK, McKenzie S and coworkers. Randomized study of brachytherapy in the initial management of patients with malignant astrocytoma. Int. J. Radiat. Oncol. Biol. Phys. 41:5:1005-11; 1998.
7. Bese NS, Uzel O, Turkan S and Okhan S. Continuous hyperfractionated accelerated radiotherapy in the treatment of high-grade astrocytomas. Radiother Oncol. 47:2:197-200; 1998.
8. Cardinale RM, Schmidt-Ullrich RK and Benedict SH. Accelerated radiotherapy regimen for malignant gliomas using stereotactic concomitant boost for dose escalation. Radiat. Oncol. Investig. 6:4:175-81; 1998.
9. Hellman R, Neuberg DS and Wagner H. A therapeutic trial of radiation therapy with vincristine, etoposide and procarbazine (VVP) in high-grade intracranial gliomas - an Eastern Cooperative Oncology Group study (E2392). J. Neurooncol. 37:1:55-62; 1998.
10. Brandes AA, Rigon A and Zampieri P. Carboplatin and teniposide concurrent with radiotherapy in patients with glioblastoma multiforme: a phase II study. Cancer. 82:2:355-61; 1998.
11. Freeman CR, Farmer JP and Montes J. Low-grade astrocytomas in children: evolving management strategies. Int. J. Radiat. Oncol. Biol. Phys. 41:5:979-87; 1998.
12. Byrne TN. Imaging of gliomas. Semin. Oncol. 21:2:162-71; 1994.
13. McKenzie CG and Thomas DGT.
Central nervous system. In: Price P, Sikora K (eds). Treatment of cancer. 3rd ed. London: Chapman & Hall, 1995: 221-47.
14. Hope-Stone HF. Malignant disease of the central nervous system. In: Hope-Stone HF (ed). Radiotherapy in clinical practice. 1st ed. London: Butterworths, 1986: 317-68.
15. Bruner JM. Neuropathology of malignant gliomas. Semin. Oncol. 21:2:126-38; 1994.
16. Berger MS. Malignant astrocytomas: surgical aspects. Semin. Oncol. 21:2:172-85; 1994.
17. Wisoff JH, Boyett JM and Berger MS. Current neurosurgical management and the impact of extent of resection in the treatment of malignant gliomas of childhood: a report of the Children's Cancer Group trial no. CCG-945. J Neurosurg. 89:1:52-9; 1998.
18. Grossman S and Lesser GL. The chemotherapy of high-grade astrocytomas. Semin. Oncol. 21:2:220-235; 1994.
19. Shapiro WR and Shapiro JR. Biology and treatment of malignant glioma. Oncology (Huntingt). 12:2:233-40; 1998.
20. Levin VA, Silver P and Hannigan J. Superiority of post-radiotherapy adjuvant chemotherapy with PCV over BCNU for anaplastic gliomas: NCOG 6G61 final report. Int J Radiat Oncol Biol Phys. 18:2:321-4; 1990.
21. Chamberlain MC and Kormanic PA. Practical guidelines for the treatment of malignant gliomas. West J Med. 168:2:114-20; 1998.
22. Macdonald DR. Low-grade gliomas, mixed gliomas, and oligodendrogliomas. Semin. Oncol. 21:2:236-248; 1994.
23. Castello MA, Schiavetti A and Varrasso G. Chemotherapy in low-grade astrocytoma management. Childs Nerv. Syst. 14:1-2:6-9; 1998.
24. Salvati M, Cervoni L and Artico M. Long-term survival in patients with supratentorial glioblastoma. J. Neurooncol. 36:1:61-4; 1998.
INTERLOOP SCULPTURE, REPURPOSED ORIGINAL 1930'S ESCALATOR TREADS BY ARTIST CHRIS FOX, WYNYARD RAILWAY STATION, SYDNEY
Photo credit: Department of Transport

TRANSPORT

MORE BUS SERVICES TO MEET GROWING DEMAND
- 14,000 extra weekly bus services across Sydney, Illawarra, Central Coast and the Lower Hunter.
- $67.9 million over four years to improve bus services across 15 regional towns.

PLANNING FOR TWEED LIGHT RAIL
- Provide $1 million to commence strategic planning for a future light rail between Tweed Heads and Coolangatta.

FASTER RAIL
- $295 million over four years initial investment in the fast rail network, including improved alignment north of Mittagong, duplication between Berry and Gerringong, planning of a new alignment between Sydney and Woy Woy and planning work to improve the route to the Central West.

URBAN ROAD UPGRADE AND CONGESTION PROGRAM
New urban road projects across Sydney, the Central Coast and the Lower Hunter, including:
- $450 million commitment to reduce traffic congestion at 12 pinch points across Sydney:
  - Pennant Hills Road / Carlingford Road, Carlingford
  - Forest Road and Stoney Creek Road, Beverly Hills
  - Forest Road at Boundary Road and Bonds Road, Peakhurst
  - Henry Lawson Drive at Rabaul Road and Haig Avenue, Georges Hall
  - Linden Street, between River Road and The Grand Parade, Sutherland
  - Princes Highway at Bates Drive, Kareela
  - Pennant Hills Road, between the M2 Motorway and Woodstock Avenue, Carlingford (Southbound)
  - The Horsley Drive / Polding Street, Fairfield
  - Cumberland Highway at The Horsley Drive, Smithfield
  - The Horsley Drive at Nelson Street, Fairfield
  - traffic lights at the intersection of Baker Street and Pennant Hills Road
  - Victoria Road widening at the West Ryde rail bridge between West Parade and Anzac Avenue.

REGIONAL & LOCAL ROADS
- $500 million over five years for the Fixing Local Roads program to assist regional councils with repairing, maintaining and sealing council roads.
- Establish a process to transfer up to 15,000 kilometres of council-owned regional roads back to the state.
- Regional and local road commitments include:
  - $17.6 million towards sealing and re-sealing roads in the Snowy Monaro region
  - $17 million to Kempsey Shire Council and Port Macquarie-Hastings Council for upgrades to Maria River Road between Port Macquarie and Crescent Head
  - $12.5 million to seal Pooncarie Road in Menindee from the Regional Growth Fund (with joint funding of $12.5 million from the Commonwealth Government)
  - $10 million to Kempsey Shire Council for upgrades to Armidale Road, Kempsey
  - $10 million to Richmond Valley Council to upgrade Woodburn-Coraki Road, Coraki
  - $10 million for Captains Flat Road near Queanbeyan
  - $10 million to seal, reseal, stabilise pavement, install new guardrail and drainage on Towamba and Burragate Roads
  - $10 million to upgrade Werris Creek Road near Duri
  - Over $8 million towards sealing Rangari Road between Manilla and Boggabri
  - $5.6 million to Griffith City Council to seal Boorga and Dickie Roads
  - $4.4 million to upgrade Federation Way in Albury from the Fixing Country Roads program
  - $3 million to Port Macquarie-Hastings Council for upgrades to Waitui Road
  - $0.3 million to Queanbeyan-Palerang Regional Council to improve Araluen Road at Braidwood
  - Funding to Port Stephens Council for Raymond Terrace Road upgrade works.
- **$695 million** commitment for technology upgrades on the road network, including:
  - upgrade traffic light systems at 500 intersections across New South Wales
  - Smart Motorways rollout between Sydney and Gosford, and planning for major freeways
  - development of smart parking and clearway signage
  - new drones to better respond to traffic incidents and virtual in-car messaging to better alert drivers.
- Urban road upgrades including:
  - **$387 million** to upgrade the Central Coast Highway between Bateau Bay and Wamberal
  - **$260 million** to upgrade Mulgoa Road from Jeanette Street to Glenmore Parkway, and Jamison Road to Blaikie Road
  - **$220 million** to upgrade Mamre Road between the M4 Motorway and Erskine Park Road
  - a further **$205 million** to duplicate Nelson Bay Road between Williamtown and Bobs Farm, in addition to $70 million previously allocated for improvements to Nelson Bay Road
  - **$188 million** to deliver the Fingal Bay Link Road
  - **$20 million** for a westbound on ramp to the M4 Motorway from Roper Road
  - **$16 million** for design and development of Spring Farm Link Road Stage 2
  - **$2 million** for planning to upgrade the Toongabbie Rail Bridge.

UPGRADING 68 MORE TRAIN STATIONS
- Upgrade a further 68 train stations under the Transport Access Program and Sydney Metro City and Southwest, to make train stations more accessible, including new lifts, ramps and footbridges.

REDUCE THE WEEKLY OPAL TRAVEL CAP
- **$69.6 million** over four years to reduce the Opal Weekly Travel cap by approximately 20 per cent to $50 a week for adults and $25 per week for child/youth and concession travel from 1 July 2019 for all train, bus, ferry and light rail customers. This will benefit approximately 55,000 commuters with savings of up to $686 a year.

ELECTRIC BUSES TRIAL
- **$10 million** over two years to trial 10 electric buses at Randwick Bus Depot, as part of the Government's Electric and Hybrid Vehicle Plan.

FIXING COUNTRY BRIDGES
- **$500 million** over five years for the Fixing Country Bridges program, to repair and replace poor quality timber bridges in rural and regional communities.

ACCELERATING SYDNEY METRO WEST
- **$6.4 billion** commitment over four years, for planning and the acceleration of construction of Sydney Metro West, to provide a faster, easier and more reliable journey between Greater Parramatta and the Sydney CBD in around 20 minutes.

NEW REGIONAL ROAD PROJECTS ACROSS NEW SOUTH WALES

Princes Highway
- **$960 million** for new upgrades to the Princes Highway between Nowra and the Victorian Border, as the first part of duplicating the highway across the next 20 years:
  - duplicate sections between Jervis Bay Road and Sussex Inlet Road
  - build the Moruya Bypass
  - start detailed planning work for the Milton and Ulladulla Bypass and upgrades between Burrill Lake and Batemans Bay.

Great Western Highway
- **$2.5 billion** for the first stages of the duplication of the Great Western Highway between Katoomba and Lithgow:
  - construction to commence on:
    - Medlow Bath Upgrade
    - Mount Victoria Bypass
    - upgrade between Jenolan Caves Road and South Bowenfels.
  - design and planning to begin on:
    - Katoomba to Medlow Bath
    - Medlow Bath to Blackheath
    - Blackheath Bypass Tunnel
    - Blackheath to Mount Victoria.
Additional regional road upgrades
- Upgrades on the following highways:
  - **$266 million** to deliver the New England Highway bypass of Muswellbrook
  - **$200 million** to reduce Newell Highway flooding between West Wyalong and Forbes
  - **$20 million** each for upgrades to the Kings Highway and Monaro Highway
  - **$18 million** for overtaking lanes on the Mitchell Highway between Dubbo and Narromine
  - **$11.2 million** for upgrades to the Bruxner Highway, including at Alstonville and Lismore
  - **$4.5 million** to address flooding at the Washpool causeway on the Gwydir Highway, 15 kilometres east of Moree
  - **$3 million** for planning the New England Highway (Goonoo Goonoo Road) duplication at Tamworth between Calala Lane and Jack Smyth Drive.
- Major road upgrades including:
  - **$60 million** for duplication of Ocean Drive at Port Macquarie
  - **$50 million** to upgrade Waterfall Way
  - **$27 million** for design and land acquisition for the Dunns Creek Road corridor
  - **$20 million** to seal Bobeyan Road
  - **$15 million** for the Taree Northern Gateway
  - **$3 million** for the upgrade of Main Street, Hay.

NEW COMMUTER CAR PARKING
- **$300 million** over four years to provide additional car spaces through the Commuter Car Parking Program at the following train stations:
  - Edmondson Park
  - Emu Plains
  - Engadine
  - Jannali
  - Leppington
  - Revesby
  - Riverwood
  - Schofields
  - Tuggerah
  - Warwick Farm
  - West Ryde.
- As well as additional car parking for bus commuters at Winston Hills.
- A new commuter car park will also be delivered at Hornsby.

OPAL PARK AND RIDE EXPANSION
- 10 train station commuter car parks to be converted to Opal Park and Ride car parks, to keep spaces available for public transport users, at:
  - Campbelltown
  - Gosford
  - Holsworthy
  - Hornsby
  - Jannali
  - Kiama
  - Penrith
  - Revesby
  - Sutherland
  - Warwick Farm.

MORE EXPRESS TRAINS FOR WESTERN SYDNEY
- Deliver an additional eight express train services on the T1 Western Line across the morning and evening peak periods on weekdays, adding over 35,000 extra seats each week.

CONNECTING RURAL AND REGIONAL COMMUNITIES
- Trial 13 new public transport routes (bus and train) to connect 44 isolated communities across regional New South Wales to a major centre or city.

REGIONAL SENIORS TRANSPORT CARD
- A Regional Seniors Transport Card providing $250 per year in 2020 and 2021 towards fuel or taxi travel from regional providers, or pre-booked NSW TrainLink tickets, for aged pensioners and Commonwealth Seniors Health card holders living in regional New South Wales.

TRANSPORT DISABILITY SUBSIDIES
- $173 million over four years for transport disability subsidies to extend the Taxi Transport Subsidy Scheme and the Wheelchair Accessible Taxi Driver Incentive Scheme.

MORE CYCLING AND PEDESTRIAN INFRASTRUCTURE
- $256 million over four years towards new walking and cycling infrastructure projects across the state to make walking and cycling a more convenient, safer and more enjoyable option that benefits everyone.

MORE TRAINS, MORE SERVICES
- Continue delivery of the More Trains, More Services program, including providing increased rail services on the Illawarra, Airport and South Coast Lines.
Including:
- fast-tracking the delivery of another 17 new air-conditioned Waratah Series 2 trains from 2020, in response to growing demand across the Sydney Trains network
- new train carriages and extra seats on the South Coast Line to address increased customer demand during the week and on weekends.

**ACCESSIBILITY IMPROVEMENTS FOR FERRY WHARVES**
- Improve accessibility at Taronga Zoo, South Mosman, North Sydney and Manly ferry wharves.

**HELPING KIDS GET TO SCHOOL SAFELY**
- **$18.5 million** over four years to provide an additional 300 School Crossing Supervisors across New South Wales primary schools to help children get to and from school safely each day.

**PLANNING FOR THE COFFS HARBOUR BYPASS**
- Conduct further community consultation on the design of the Coffs Harbour bypass.

**NEW REGIONAL RAIL FLEET**
- **$2.8 billion** commitment towards the design, build and maintenance of the new regional rail fleet, along with a new purpose-built maintenance facility in Dubbo, to create better, safer, more comfortable and more reliable services for customers travelling long distances.

**REVIEW SURPLUS LAND ACQUIRED UNDER WESTCONNEX**
- Review the sale of approximately 4,000 square metres of surplus land at Homebush, acquired as part of the WestConnex motorway project.

**FARE FREEZE FOR GOLD OPAL CARDS**
- Freeze fares for a further four years for Gold Opal Card holders at a maximum of $2.50 per day.

**HEATHCOTE ROAD UPGRADE**
- Widen a two-kilometre section of Heathcote Road at Holsworthy to improve traffic flow and road safety.

**EXPRESS TRAIN SERVICES BETWEEN GRANVILLE AND THE CITY**
- Provide new return express train services between Granville and the City.

**RAPID TRANSPORT OPTIONS FOR WOLLONDILLY AND THE SOUTHERN HIGHLANDS**
- Investigate new rapid public transport options to connect communities in Wollondilly and the Southern Highlands with Sydney's electrified rail network. New routes to be explored will include Bargo, Picton and Wilton to Campbelltown, as well as Moss Vale, Bowral and Mittagong to Campbelltown.

**PLAN AN EXTENSION OF THE SYDNEY METRO CITY AND SOUTHWEST LINE**
- Begin planning an extension of Sydney Metro City and Southwest between Bankstown and Liverpool.

**ADDITIONAL FERRY SERVICES AND VESSELS**
- Create 400 additional weekly ferry services across the network over the next two years.

**NORTH SOUTH METRO RAIL LINK TO THE NEW WESTERN SYDNEY AIRPORT**
- Invest over **$2 billion** over four years towards the New South Wales and Federal Government funded North South Metro Rail Link connecting to Western Sydney Airport, with construction expected to start in 2021 and be completed in 2026, in time for the opening of the airport.

**UPGRADE PROSPECT HIGHWAY AND MEMORIAL AVENUE**
- **$300 million** commitment to commence upgrading the Prospect Highway and Memorial Avenue to reduce congestion and to help meet future demands on this corridor.

**EASING CONGESTION AND CONNECTING COMMUNITIES**
- **$32.2 million** from the Housing Acceleration Fund for planning and design of eight road projects across Sydney and regional New South Wales.

**NORTHERN BEACHES BUSES**
- Deliver a new direct bus service linking Pittwater and Frenchs Forest via the Wakehurst Parkway, and start work on developing a turn-up-and-go express bus service linking Dee Why and Chatswood.
Abstract—Computer-aided diagnosis through biomedical image analysis is increasingly considered in the health sciences. This is due to progress on both the acquisition and the processing sides. In vivo visualization of human tissues, from which both anatomical and functional information can be determined, is now possible. The use of these images with efficient mathematical and processing tools allows the interpretation of the tissue state and facilitates the task of physicians. Segmentation and registration are the two most fundamental tools in bioimaging. The first aims to provide automatic tools for organ delineation from images, while the second focuses on establishing correspondences between observations inter- and intra-subject and across modalities. In this paper, we present some recent results towards a common formulation addressing these problems, based on Markov Random Fields. Such an approach is modular with respect to the application context, can be easily extended to deal with various modalities, provides guarantees on the optimality properties of the obtained solution, and is computationally efficient.

I. INTRODUCTION

Recent developments on the hardware side have led to a new generation of scanners, as well as image modalities where the in vivo visualization of anatomical structures of biological systems is possible in a non-invasive fashion. The exploitation of such an information space is a great challenge of our days and consists of understanding the anatomical structure of biological systems and, in particular, the effect of pathologies on their complex mechanisms of operation.

One can consider such a task from a mathematical perspective. In such a case, for a given modeling task, the first objective consists of parameterizing the problem, or associating the understanding of a complex mechanism with a mathematical model that describes a generic behavior and depends on a number of parameters. Given such a model, the next step aims to establish a relation between the theoretical model and the available observations. In simple words, we should be able to understand the impact of the model parameters on the data. Last, but not least, inference of the model parameters given the data is to be performed: recovering the set of values that, once applied to the model, will optimally explain the data. There are several challenges in such a process:
- curse of dimensionality: ideally, a complex model would have excellent capabilities in approximating the behavior of the organ under observation, but it will be hard to infer;
- curse of non-linearity: often the observations are not directly associated with the model, and the non-linear relationship between them makes inference quite problematic;
- curse of non-convexity: in most cases, the designed cost function is too complex, and therefore recovering the optimal solution computationally is not obvious or feasible;
- curse of non-modularity: the models are often hard-encoded in the data-association and inference steps, tying the proposed methods to specific clinical problems and even more specific classes of models.

The most common clinical scenario involves the extraction of a structure of interest from images, and the mathematical modeling of the normal case, which consists of recovering a probabilistic representation of the healthy and, in some cases, the non-healthy subjects.
In the above-mentioned scenario, one can point out an important limitation due to the interdependencies between the three tasks of the processing chain: the models are hard-encoded in the data-association and inference steps, which restricts the proposed methods to specific clinical problems and even more specific classes of models. The use of prior knowledge is often considered to reduce the model complexity while preserving its ability to capture the expected behavior. This is done either through the use of anatomy or through the use of machine learning techniques on a large set of training examples. In order to address the curse of non-linearity, the idea of decomposing the model and the data association is the most prominent. Such an approach aims to decouple the dependencies between the model parameters and the data, making a better association between them possible. The non-convexity issue can be addressed either by introducing additional regularization constraints (so that the objective function becomes convex) or by dropping some of the model constraints to simplify the objective function. Last but not least, modularity can be addressed through the use of gradient-free methods.

Markov Random Fields (MRFs) [4] are a popular paradigm in computer vision and medical image analysis. The central idea is to represent the parameter estimation problem through a graph. The connections between graph nodes exploit the co-dependencies/constraints between the model variables. Inference consists of finding the most appropriate labeling such that the corresponding objective function is minimized. In this paper, we introduce a novel approach to perform inference in biomedical image analysis using MRFs, relaxations and efficient linear programming. The generic formulation and the corresponding optimization methods are presented in section 2. Medical image analysis problems are briefly explained in section 3, while the last section concludes the paper.

II. MARKOV RANDOM FIELDS AND EFFICIENT LINEAR PROGRAMMING

A wide variety of tasks in medical imaging can be formulated as discrete labeling problems. In very simple terms, a discrete optimization problem can be stated as follows: we are given a discrete set of variables $\mathcal{V}$, all of which are vertices in a graph $\mathcal{G}$. The edges of this graph (denoted by $\mathcal{E}$) encode the variables' relationships. We are also given as input a discrete set of labels $\mathcal{L}$. We must then assign one label from $\mathcal{L}$ to each variable in $\mathcal{V}$. However, each time we choose to assign a label, say $x_p$, to an object $p$, we are forced to pay a price according to the so-called singleton potential function $V_p(x_p)$. Likewise, each time we choose to assign a pair of labels, say $x_p$ and $x_q$, to two interrelated variables $p$ and $q$ (two objects connected by an edge in the graph $\mathcal{G}$), we are forced to pay another price, determined by the so-called pairwise potential function $V_{pq}(x_p, x_q)$. Both the singleton and pairwise potential functions are problem specific and are thus assumed to be provided as input. Our goal is then to choose a labeling which will allow us to pay the smallest total price. In other words, based on what we have mentioned above, we want to choose a labeling that minimizes the sum of all the MRF potentials, or equivalently the MRF energy.
This amounts to solving the following optimization problem:
$$\arg \min_{\{x_p\}} \sum_{p \in \mathcal{V}} V_p(x_p) + \sum_{(p,q) \in \mathcal{E}} V_{pq}(x_p, x_q). \quad (1)$$
Such a model can describe a number of challenging problems in medical image analysis. Parameter inference is the most critical aspect in computational medicine, and efficient optimization algorithms are to be considered in terms of both computational complexity and inference performance. Discrete MRFs are a very promising framework that assumes local/limited interactions between the model variables. Such a paradigm can be used to efficiently model a number of problems in medical imaging, like denoising, enhancement, feature extraction, segmentation, shape alignment, registration, etc. However, most existing methods were constrained by the type of interactions that one can introduce between the model variables. The use of relaxation techniques, linear programming and duality is a prominent direction for the minimization of generic MRFs.

A. LP-Relaxations and Primal Dual Method

In [11] we introduced a novel method to address the minimization of static and dynamic MRFs. Our approach is based on principles from linear programming and, in particular, on primal-dual strategies. It generalizes prior state-of-the-art methods such as $\alpha$-expansion, while it can also be used for efficiently minimizing NP-hard problems with complex pairwise potential functions. Furthermore, it offers a substantial speedup, of about an order of magnitude, over existing techniques, due to the fact that it exploits information coming not only from the original MRF problem, but also from a dual one. The proposed technique consists of recovering a pair of solutions for the primal and the dual such that the gap between them is minimized. Therefore, it can also boost the performance of dynamic MRFs, where the new pair of primal-dual solutions can be expected to be close to the previous one.

B. Master-Slave Decomposition and Message Passing

In [8] a new message-passing scheme for MRF optimization was proposed. This scheme enjoys better theoretical properties than other state-of-the-art message-passing methods and, in practice, performs as well as or better than them. It is based on the very powerful technique of Dual Decomposition [1] and leads to an elegant and general framework for understanding and designing message-passing algorithms that can provide new insights into existing techniques. Promising experimental results and comparisons with the state of the art demonstrated the theoretical and practical potential of our approach.

C. Tighter LP-Relaxations and Cycle Repairing

In [7] we focused our attention on MRF problems where the relaxation is known to be loose, i.e., where the solution of the relaxed problem is not optimal for the original one. We introduced a novel generic solver that relies on a much tighter class of LP-relaxations, called cycle-relaxations. With the help of this class of relaxations, our algorithm tries to deal with a difficulty lying at the heart of MRF optimization: the existence of inconsistent cycles. To this end, it uses an operation called cycle-repairing, whose goal is to fix any inconsistent cycles that may appear during optimization, instead of simply ignoring them as is usually done. The more cycles are repaired, the tighter the underlying LP relaxation becomes. As a result of this procedure, our algorithm is capable of providing almost optimal solutions even for very general MRFs with arbitrary potentials.
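Before turning to higher-order models, it may help to make the objective in (1) concrete. The following minimal Python sketch evaluates the MRF energy of a labeling on a toy three-node chain and minimizes it by exhaustive enumeration; the potential values are made-up toy numbers, and brute force merely stands in for the LP-relaxation and primal-dual machinery discussed above.

```python
import itertools

# Toy instance of the labeling problem in Eq. (1): a 3-node chain
# with a binary label set. All potential values are illustrative.
labels = [0, 1]                       # label set L
nodes = [0, 1, 2]                     # variable set V
edges = [(0, 1), (1, 2)]              # edge set E

V_single = {0: [0.0, 1.5], 1: [2.0, 0.5], 2: [1.0, 1.0]}  # V_p(x_p)

def V_pair(xp, xq):
    # A Potts-style pairwise potential: penalize disagreeing labels.
    return 0.0 if xp == xq else 1.0

def energy(x):
    """MRF energy of labeling x, i.e., the objective of Eq. (1)."""
    unary = sum(V_single[p][x[p]] for p in nodes)
    pairwise = sum(V_pair(x[p], x[q]) for p, q in edges)
    return unary + pairwise

# Exhaustive search is exponential in |V| and only feasible on toys.
best = min(itertools.product(labels, repeat=len(nodes)), key=energy)
print(best, energy(best))             # -> (0, 1, 1) with energy 2.5
```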
D. LP-Relaxations and Higher Order MRFs

In [10], towards addressing higher-order MRFs with arbitrary dependencies between the model variables, we introduced a novel optimization approach. The method can be applied to almost any higher-order MRF and optimizes a dual relaxation of the input MRF problem. Such a generic approach is extremely flexible and can thus be easily adapted to yield far more powerful algorithms when dealing with subclasses of high-order MRFs. We introduced a new powerful class of high-order potentials, which are shown to offer enough expressive power and to be useful for many vision tasks. In order to address them, we derived a novel and extremely efficient message-passing algorithm, which goes beyond the aforementioned generic optimizer and is able to deliver almost optimal solutions of very high quality.

III. MEDICAL IMAGE ANALYSIS AND COMPUTER AIDED DIAGNOSIS

One can now combine the theoretical model with the efficient optimization techniques towards computer-aided diagnosis. Segmentation and registration are among the most fundamental problems in medical imaging. Knowledge-based segmentation consists of the automatic delineation of a structure of interest from an image, constrained by certain shape priors. The objective of image fusion is to determine a transformation that will allow direct comparison of measurements coming from the same or different modalities. Such a technology facilitates clinical diagnosis and a better understanding of the effects of different diseases.

A. Image Segmentation

In [2] and [3], we introduced a new approach to knowledge-based segmentation. Our method consists of a novel representation to model shape variations, as well as an efficient inference procedure to fit the model to new data.

The shape model: The considered shape model is similarity-invariant and refers to a graph where the nodes $p \in V$ represent control points and the edges $(p, q) \in E$ represent the dependencies between them. An example of such a model in the case of the left ventricle is presented in Fig. 1(a). These dependencies are encoded by the normalized Euclidean distances $d_{pq}$ between the connected control points. With this modeling, we introduce prior knowledge about the shape variations by learning the probability density distributions $Pr(d_{pq})$ of the relative positions of the control points, using a training set of labeled shapes. The idea behind this model is to deform the surface of the object by displacing the control points in a way that is consistent with the learned prior constraints.

Fig. 1: Our model: a deformable shape associated with control points. (a) The control points and the associated left ventricle surface. (b) The apical control point with the associated Voronoi cell, intersected with the blood pool and the myocardium.

The graph structure: Defining the graph structure by carefully selecting a subset of connections between nodes is an important issue in achieving a sparse representation that is computationally efficient on the one hand, and that does not suffer from redundancy on the other. Therefore, we construct an incomplete graph that consists of intra- and inter-cluster connections that represent the inter-dependencies of the control points.
We first determine the clusters according to the co-dependencies of the deformations of the control points within the training set. Shape maps [12] provide an embedding into a manifold where the Euclidean distance describes the latter criterion. A new linear-programming-based clustering algorithm [9] is then used to determine the clusters as well as their number. The connections between the components of a cluster then represent the local structure, while the connections between the clusters account for the global structure. The distributions of the normalized distances between these connections encode the prior model, as stated previously.

Model-based segmentation: During search, this model is used in an MRF framework (1), where the unknown variables $x_p$ are the positions of the control points in the image domain. To encode the image support, we consider a Voronoi decomposition of the domain (Fig. 1(b)) and use region-based statistics. Hence, the singleton potentials $V_p(x_p)$ evaluate, from the image point of view, the local deformation of the model when control point $p$ is displaced to position $x_p$. The prior knowledge is encoded in the pairwise potentials $V_{pq}(x_p, x_q)$ that express the cost of deforming the connection $(p, q)$ (of the incomplete learned graph) to the new positions $x_p$ and $x_q$, with respect to the learned distributions $Pr(d_{pq})$. The resulting model is computationally efficient, can encode complex statistical models of shape variations, and benefits from the image support of the entire spatial domain. Some experimental results for the segmentation of the left ventricle in 3D CT images are shown in Fig. 2.
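As a rough illustration of how such a learned shape prior can enter the pairwise term, the sketch below scores a candidate placement of two control points against a histogram of normalized training distances. Using the negative log-likelihood of $Pr(d_{pq})$ is our own assumption for illustration; the exact penalty used in [2], [3] may differ, and all numbers are synthetic.

```python
import numpy as np

class LearnedPairPrior:
    """Histogram model of the normalized distance d_pq seen in training."""
    def __init__(self, samples, bins=32):
        self.hist, self.edges = np.histogram(samples, bins=bins, density=True)

    def neg_log_likelihood(self, d):
        # Locate the bin containing d; the small constant avoids log(0).
        i = np.clip(np.searchsorted(self.edges, d) - 1, 0, len(self.hist) - 1)
        return -np.log(self.hist[i] + 1e-9)

def V_pq(prior, pos_p, pos_q, scale):
    """Pairwise potential for placing control points p, q at pos_p, pos_q."""
    d = np.linalg.norm(np.asarray(pos_p) - np.asarray(pos_q)) / scale
    return prior.neg_log_likelihood(d)

# Synthetic training distances concentrated around 1.0 (rel. units):
rng = np.random.default_rng(1)
prior = LearnedPairPrior(rng.normal(1.0, 0.1, size=500))
print(V_pq(prior, (0.0, 0.0), (0.95, 0.10), scale=1.0))  # likely placement, low cost
print(V_pq(prior, (0.0, 0.0), (2.00, 0.00), scale=1.0))  # unlikely placement, high cost
```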
B. Image Registration

In [5] we introduced a novel and efficient approach to dense image registration which does not require a derivative of the employed cost function. In this context, the registration problem is formulated using a discrete Markov Random Field objective function. Considering the common approach of energy minimization for the registration of two images $I$ and $J$,
$$T^* = \arg \min_T \phi(I, J \circ T),$$
one seeks to recover the optimal transformation $T^*$ w.r.t. a similarity measure $\phi$ such that the two images are perfectly aligned. The new location of an image point $x$ is given by the transformation $T(x) = \text{Id}(x) + D(x)$, which consists of the identity transformation $\text{Id}(x) = x$ and a dense displacement field $D(x) = \Delta x$. To reduce the dimensionality of the problem, we assume that the dense displacement field can be expressed using a small number of control points (the registration grid).

The key idea in our approach is to reformulate the registration problem as a discrete MRF labeling problem. Based on the previous definitions, the control points of the registration grid are considered as the discrete variables $\mathcal{V}$. Additionally, the discrete set of labels $\mathcal{L} = \{x_1, ..., x_i\}$ corresponds to a quantized version of the displacement space $\Theta = \{d_1, ..., d_i\}$. A label assignment $x_p$ to a grid node $p$ is associated with displacing the node by the corresponding vector $d_{x_p}$. Based on the general MRF energy, we encode the image costs of the registration problem through the singleton potential functions as
$$V_p(x_p) = \phi_p(I, J \circ T_{x_p}), \quad (6)$$
where $T_{x_p}$ is the potential transformation when $x_p$ is assigned to $p$. In [5], we propose an efficient approximation scheme for precomputing the singleton potentials. The idea is to approximate the image costs simultaneously for all grid nodes and a specific label $x_i$ by applying a global translation of $d_{x_i}$ to the image $J$. Additionally, the smoothness term is encoded through the pairwise potential functions as
$$V_{pq}(x_p, x_q) = (d_{x_p} - d_{x_q})^2. \quad (7)$$
The problem of dense image registration can then be solved by minimizing
$$E(x) = \sum_{p \in \mathcal{V}} \phi_p(I, J \circ T_{x_p}) + \lambda \sum_{(p,q) \in \mathcal{E}} (d_{x_p} - d_{x_q})^2, \quad (8)$$
where $x$ is the discrete labeling and $\lambda$ controls the influence of the smoothness term. In order to account for large deformations and to produce results at a high resolution, a multi-scale incremental approach is considered, where the optimal solution is iteratively updated. This is done through successive warping of the source image $J$ towards the target image $I$ on different image and grid resolutions. Simultaneously, the capture range of the quantized displacement space is successively refined. Efficient linear programming based on the primal-dual principles is used to recover the lowest potential of the cost function. To address the main limitation of discrete optimization methods, namely the quantization of the search space, in [6] we proposed the use of uncertainties to locally determine the range of the search space. Some experimental results for this application are shown in Fig. 3.
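The following sketch instantiates the registration energy (8) on a toy image pair, including the precomputation idea from [5]: for each displacement label, the source image is translated globally once and local costs are read off for all grid nodes simultaneously. The 5x5-patch SSD measure, the `np.roll` translation, and the grid geometry are our simplifying stand-ins, not the actual implementation.

```python
import numpy as np

def precompute_unary(I, J, displacements, grid):
    """unary[p, l] ~ phi_p(I, J o T_l): patch SSD around node p after
    translating J globally by the label displacement d_l (cf. Eq. (6))."""
    unary = np.zeros((len(grid), len(displacements)))
    for l, (dy, dx) in enumerate(displacements):
        J_shift = np.roll(np.roll(J, dy, axis=0), dx, axis=1)
        diff2 = (I - J_shift) ** 2
        for p, (y, x) in enumerate(grid):
            unary[p, l] = diff2[y - 2:y + 3, x - 2:x + 3].sum()  # 5x5 patch
    return unary

def energy(labeling, unary, displacements, grid_edges, lam=0.1):
    """The discrete registration energy of Eq. (8)."""
    data = sum(unary[p, labeling[p]] for p in range(len(labeling)))
    smooth = sum(np.sum((np.array(displacements[labeling[p]]) -
                         np.array(displacements[labeling[q]])) ** 2)
                 for p, q in grid_edges)
    return data + lam * smooth

# Toy instance: J is I shifted one pixel to the right, so the label
# (0, -1) applied everywhere should undo the shift at zero cost.
rng = np.random.default_rng(0)
I = rng.random((20, 20))
J = np.roll(I, 1, axis=1)
displacements = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]   # quantized Theta
grid = [(5, 5), (5, 14), (14, 5), (14, 14)]                  # 2x2 control grid
grid_edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
unary = precompute_unary(I, J, displacements, grid)
print(energy([2, 2, 2, 2], unary, displacements, grid_edges))  # -> 0.0
```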
IV. DISCUSSION

In this paper we have presented a generic methodological framework, as well as the corresponding inference method, to address medical image analysis. We have opted for the use of Markov Random Fields and efficient linear programming. Such an approach addresses most of the challenges of biomedical image analysis: it can cope with a large number of problems, can deal with non-linearity and non-convexity, and is gradient-free and modular. Two of the most important problems in the field of medical image analysis, segmentation and registration, were considered to demonstrate the potential of the method. The use of models involving higher-order interactions between variables is the most promising direction of our work. Modeling biological behaviors often requires interactions between a significant number of model variables, and the pairwise model is not the most adequate choice. Furthermore, exploring the same methodologies to address feature extraction, data structuring, dimensionality reduction and unsupervised clustering could be beneficial to a number of problems in medical image analysis.

REFERENCES

[1] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[2] A. Besbes, N. Komodakis, G. Langs, and N. Paragios. Shape priors and discrete MRFs for knowledge-based segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'09), 2009 (in press).
[3] A. Besbes, N. Komodakis, and N. Paragios. Graph-based knowledge-driven discrete segmentation of the left ventricle. In IEEE International Symposium on Biomedical Imaging (ISBI'09), 2009 (in press).
[4] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984.
[5] B. Glocker, N. Komodakis, G. Tziritas, N. Navab, and N. Paragios. Dense image registration through MRFs and efficient linear programming. Medical Image Analysis, 2008.
[6] B. Glocker, N. Paragios, N. Komodakis, G. Tziritas, and N. Navab. Optical flow estimation with uncertainties through dynamic MRFs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'08), 2008.
[7] N. Komodakis and N. Paragios. Beyond loose LP-relaxations: Optimizing MRFs by repairing cycles. In European Conference on Computer Vision (ECCV'08), 2008.
[8] N. Komodakis, N. Paragios, and G. Tziritas. MRF optimization via dual decomposition: Message-passing revisited. In IEEE International Conference on Computer Vision (ICCV'07), 2007.
[9] N. Komodakis, N. Paragios, and G. Tziritas. Clustering via LP-based stabilities. In Neural Information Processing Systems (NIPS'08), 2008.
[10] N. Komodakis, G. Tziritas, and N. Paragios. Optimization of MRFs with higher order cliques and arbitrary potentials. Technical report, Computer Science Department, University of Crete, 2008.
[11] N. Komodakis, G. Tziritas, and N. Paragios. Performance vs computational efficiency for optimizing single and dynamic MRFs: Setting the state of the art with primal-dual strategies. Computer Vision and Image Understanding, 2008.
[12] G. Langs and N. Paragios. Modeling the structure of multivariate manifolds: Shape maps. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'08), 2008.
[13] T. W. Sederberg and S. R. Parry. Free-form deformation of solid geometric models. In SIGGRAPH, New York, NY, USA, 1986. ACM Press.
Thermostable tensoresistors of Co-doped GaSb–FeGa$_{1.3}$ eutectic composites

R.N. Rahimov$^a$,*, A.A. Khalilova$^a$, D.H. Arasly$^a$, M.I. Aliyev$^a$, M. Tanoglu$^b$, L. Ozyuzer$^c$

$^a$ Institute of Physics of the Azerbaijan National Academy of Sciences, 33 H. Javid Avenue, Az-1143 Baku, Azerbaijan
$^b$ Izmir Institute of Technology, Department of Mechanical Engineering, TR-35430, Urla, Izmir, Turkey
$^c$ Izmir Institute of Technology, Department of Physics, Gulbahce Campus, TR-35430, Urla, Izmir, Turkey

**Article history:** Received 7 October 2007; received in revised form 16 April 2008; accepted 26 May 2008; available online 14 June 2008

**Keywords:** Eutectic composite; Microstructure; Tensoresistive effect; Strain sensitivity; Temperature coefficient of strain sensitivity

**Abstract** The microstructure and tensoresistive properties of GaSb–FeGa$_{1.3}$ eutectic composites doped with 0.1% Co have been investigated. It was found that the Co impurity atoms mainly accumulate in the metallic inclusions. The length of the inclusions in GaSb–FeGa$_{1.3}$(Co) was measured to be about half of that in undoped GaSb–FeGa$_{1.3}$ eutectics. The tensometric characteristics of gauges based on GaSb–FeGa$_{1.3}$(Co) were found to be more thermostable than those of undoped samples. © 2008 Elsevier B.V. All rights reserved.

1. Introduction

Semiconductor tensoresistors have the potential for use in automated machinery devices and measurement systems such as flowmeters. For example, by employing these materials it is possible to obtain strong signals during flow experiments without preliminary signal amplification that would increase the cost of the devices. The main disadvantages of semiconductor tensoresistors are the high temperature coefficient of the strain sensitivity and the brittleness, which create difficulties for their use over wide ranges of temperature and deformation. This limits the design of strain gauge devices using semiconductor tensoresistors. Therefore, the development of new thermostable semiconductor materials with lower temperature coefficients of strain sensitivity and less brittleness is critical for the applications mentioned above. In previous studies, we showed that these properties can be improved by fabricating strain gauges from semiconductor–metal eutectic composites [1]. The advantage of such composites is that the properties of semiconductors and metals combine, allowing their characteristics to be controlled with electric and magnetic fields, temperature, pressure, and various types of impurities. These eutectic composites also exhibit anisotropic properties for different directions of the electric current, heat flux, magnetic field, and metallic inclusions, which opens wide prospects for their application in different areas of science and technology. Within the microstructure of semiconductor–metal eutectic composites, the metal phase exists as needle-shaped inclusions that reduce the brittleness of the material and cause distinctive behavior in electron and phonon processes, in addition to changing the deformation characteristics [1,2]. Some characteristics of the composites may be adjusted by varying the size and density of the metallic inclusions. It has been previously reported that the size and density of the inclusions may be controlled by changing the freezing rate and by applying microgravitation, centrifugation, and magnetic fields during the solidification process [3].
3d transition-group (iron-group) elements form several deep levels in the band gap of III–V semiconductor compounds [4]. In previous studies [1], we showed that tensoresistors based on GaSb–FeGa$_{1.3}$ eutectic composites have thermostable deformation characteristics that are related to the deep impurity levels of iron atoms formed in the band gap of the GaSb matrix. It is expected that doping GaSb–FeGa$_{1.3}$ eutectic composites with Co atoms will create additional deep levels in the matrix, resulting in higher stability of the strain characteristics as a function of temperature. The present work focuses on the investigation of the influence of Co impurity atoms at 0.1% doping on the microstructure and the strain characteristics of GaSb–FeGa$_{1.3}$ eutectics.

2. Experimental

GaSb–FeGa$_{1.3}$ eutectic composites with and without Co doping were prepared using the vertical Bridgman method, as described in detail in ref. [2]. To avoid ampoule vibration that may disturb the solid–melt interface, the prepared sample was kept motionless, with the movement of the freezing interface accomplished by lifting the furnace. The solidified interfaces were planar and oriented perpendicular to the transport direction on all ingot sections. The solidification rate was set to about 1 mm/min.

A Philips™ FEG scanning electron microscope (SEM) was employed to characterize the microstructure of the alloys. An EDAX™ energy-dispersive X-ray spectroscopy (EDX) system was used to obtain quantitative information about the elemental composition of the samples. The accelerating voltage during the EDX analysis was 15 kV.

To determine the tensoresistive effect of the gauges, rectangular beams were cut from the grown crystals to obtain samples of the sensitive elements. Contacts were placed near the ends of the gauges, a minimum of 1 mm away from the ends. The tensoresistor based on the GaSb–FeGa$_{1.3}$(Co) composite was attached to the bending beam (as illustrated in Fig. 1) using VL-931 glue, as described in a previous work [1]. Characterization of the strain gauge was carried out using the compensation method in the range of 200–400 K and with deformation up to $2 \times 10^{-3}$ rel. unit. The measurements were performed with the current ($I$) perpendicular to the needles ($x$) and the needles parallel to the plane ($P$) of the gauge substrate ($I \perp x \parallel P$), since strain gauges in this configuration exhibit the greatest strain sensitivity coefficient ($S$) [1]. The relative deformation of the bending beams ($\varepsilon$) was determined from the following equation:
$$\varepsilon = \frac{hd}{L^2} \tag{1}$$
where $d$ is the thickness of the beam, $h$ the displacement of the beam on bending, and $L$ the working length of the beam.

3. Results and discussion

Fig. 2. SEM micrographs of GaSb–FeGa$_{1.3}$ and GaSb–FeGa$_{1.3}$(Co) showing cross sections of the sample along (a and c) the longitudinal and (b and d) lateral direction of the metallic inclusions, respectively.

Fig. 3. X-ray spectra of GaSb–FeGa$_{1.3}$(Co) obtained with SEM-EDX from the (a) inclusions and (b) matrix along the lateral direction of the specimens.

Fig. 4 shows the measured values of the relative change in resistance $(\Delta R/R)$ for the GaSb–FeGa$_{1.3}$(Co) composites as a function of strain $(\varepsilon)$ at various temperatures. The strain dependence of the gauges was found to be free of hysteresis at all measured temperatures.
As shown in the figures, there is a linear dependence of $\Delta R/R$ on both tension and compression strains within the measured strain range due to the flexural bending of the substrate. One of the critical parameters for strain gauges is the limit at which this linearity breaks down. This limit was found to be about $\pm 1.2 \times 10^{-3}$ rel. unit for GaSb–FeGa$_{1.3}$(Co). The linearity does not degrade with variation in temperature. The strain sensitivity $(S)$ and the temperature coefficient of strain sensitivity $(\alpha)$ were determined from the experiments using the following equations:
\[ S = \frac{\Delta R/R}{\varepsilon} \tag{2} \]
\[ \alpha = \frac{\Delta S/S_0}{\Delta T} \times 100 \ [\%/\text{degree}] \tag{3} \]
where $\Delta R = R_T - R_0$, $\Delta S = S_T - S_0$ and $\Delta T = T - T_0$; $R_T, S_T$ and $R_0, S_0$ are the resistance and the coefficient of strain sensitivity at a fixed temperature and at room temperature, respectively.

The dependence of $S$ on temperature for GaSb–FeGa$_{1.3}$(Co), compared with data for GaSb and GaSb–FeGa$_{1.3}$ eutectics, is presented in Fig. 5 for loading under tensional and compressive strains resulting from the bending of the substrate. The strain and temperature characteristics of the gauges show no hysteresis. Average values of $S$ at room temperature and of $\alpha$ for GaSb–FeGa$_{1.3}$(Co) were calculated to be 34 ± 5 and 0.17%/degree, respectively. The temperature coefficient of the sensitivity of the GaSb–FeGa$_{1.3}$(Co) gauge was found to be more than 15% lower than that of the GaSb–FeGa$_{1.3}$ gauge. The decrease in the values of $\alpha$ may be associated with the presence of additional deep impurity levels in the GaSb matrix. Based on the findings from the SEM-EDX, the Co atoms were found to accumulate mainly in the metallic needles; nevertheless, a fraction of the Co atoms is expected to form deep levels in the matrix. Under anisotropic strain deformation, the crystal symmetry is broken and the degeneracy is lifted: the valence band edges of light and heavy holes are displaced in opposite directions, and a redistribution of holes between the sub-bands takes place. When a semiconductor material is doped with an impurity that generates deep levels, the change of the charge carrier concentration under deformation (strain) is significant. Because the changes in hole concentration and mobility as a function of temperature have opposite signs, the relative change of resistance under deformation is stable. This is one of the conditions that reduces the temperature coefficient of the sensitivity in these strain gauges.

The contact resistance and the interaction between the metal inclusions and the semiconductor matrix in these composites play a significant role in the tensoresistive characteristics, and thus should be taken into consideration. As shown in our previous work [1], the presence of the oriented metallic phases generates an anisotropy in the strain characteristics. Any external effect, including additional doping, that changes the density and dimensions of the metallic inclusions causes a change in the interface resistance and in the degree of anisotropy. Therefore, the change in the density and dimensions of the inclusions of the GaSb–FeGa$_{1.3}$ eutectic composite doped with Co atoms directly results in an improvement in the temperature coefficient of sensitivity.
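Before the summary, a small worked example of Eqs. (1)–(3) in Python may be useful. All measurement values below (beam geometry, resistance changes, temperatures) are illustrative numbers chosen only to reproduce the order of magnitude of the reported $S \approx 34$ and $\alpha \approx 0.17$%/degree; they are not data from this work.

```python
def strain(h, d, L):
    """Relative deformation eps = h*d / L**2 of the bending beam, Eq. (1)."""
    return h * d / L ** 2

def sensitivity(dR_over_R, eps):
    """Strain sensitivity S = (dR/R) / eps, Eq. (2)."""
    return dR_over_R / eps

def temp_coefficient(S_T, S_0, T, T_0):
    """alpha = (dS/S_0) / dT * 100 in %/degree, Eq. (3)."""
    return (S_T - S_0) / S_0 / (T - T_0) * 100.0

# Illustrative values: a 50 mm beam, 2 mm thick, deflected by 0.5 mm.
eps = strain(h=0.5, d=2.0, L=50.0)                 # -> 4e-4 rel. unit
S_room = sensitivity(dR_over_R=0.0136, eps=eps)    # -> 34.0
S_T = sensitivity(dR_over_R=0.0150, eps=eps)       # -> 37.5 at T = 360 K
alpha = temp_coefficient(S_T, S_room, T=360.0, T_0=300.0)
print(eps, S_room, round(alpha, 2))                # -> 0.0004 34.0 0.17
```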
4. Summary

GaSb–FeGa$_{1.3}$ eutectics with and without Co doping were prepared using the vertical Bridgman technique. Microstructural investigations of the cross sections of the samples along the longitudinal and lateral directions revealed needle-shaped metallic phases embedded within the semiconductor matrix. The metallic inclusions in the GaSb–FeGa$_{1.3}$(Co) were observed to form as nail-shaped needles. It was found that the length of the inclusions in the GaSb–FeGa$_{1.3}$(Co) is about half of that in undoped GaSb–FeGa$_{1.3}$ eutectics. The EDX analysis revealed that the Co impurity atoms accumulated mainly in the metallic inclusions. The strain and temperature characteristics of the GaSb–FeGa$_{1.3}$(Co)-based gauges showed no hysteresis. Average values of $S$ at room temperature and of $\alpha$ for GaSb–FeGa$_{1.3}$(Co) were calculated to be 34 ± 5 and 0.17%/degree, respectively. The temperature coefficient of the sensitivity of the GaSb–FeGa$_{1.3}$(Co) gauge was found to be about 15% lower than that of the GaSb–FeGa$_{1.3}$.

References

[1] M.I. Aliyev, A.A. Khalilova, D.H. Arasly, R.N. Rahimov, M. Tanoglu, L. Ozyuzer, Appl. Phys. A 79 (2004) 2075–2079.
[2] M.I. Aliyev, A.A. Khalilova, D.H. Arasly, R.N. Rahimov, M. Tanoglu, L. Ozyuzer, J. Phys. D: Appl. Phys. 36 (2003) 2627–2633.
[3] W.R. Wilcox, L.L. Regel, Acta Astronautica 38 (1996) 511–516.
[4] E.M. Omelyanovskiy, V.I. Fistul, Impurity of Transition Metal in Semiconductors (in Russian), Metallurgy, Moscow, 1983, p. 282.

Biographies

Rashad Nizameddin Rahimov graduated from Azerbaijan State University, Baku (1973) in semiconductor physics and received his PhD in semiconductor and dielectric physics in 1983. At present, he is a leading researcher at the Institute of Physics of the Azerbaijan National Academy of Sciences. His main interests are electron and phonon processes in solid solutions and eutectic composites based on III–V semiconductor compounds, and their practical applications.

Almaz Ahmediyeva Khalilova graduated from Azerbaijan State University, Baku, in 1962. Since 1961, she has been a researcher at the Institute of Physics of the Azerbaijan National Academy of Sciences. She received the PhD degree in semiconductor and dielectric physics in 1967. At present, she is a leading researcher at the Institute of Physics of the Azerbaijan National Academy of Sciences. Her main interests include materials science and new tensosensors based on eutectic composites of III–V semiconductor compounds.

Durdana Hamid Arasly graduated from Azerbaijan State University, Baku, in 1961. Since 1961, she has been a researcher at the Institute of Physics of the Azerbaijan National Academy of Sciences. She received the PhD degree in 1967 and the Doctor of Sciences degree in semiconductor and dielectric physics in 1987. At present, she is a principal researcher at the Institute of Physics of the Azerbaijan National Academy of Sciences. Her main interests are electron and phonon processes in solid solutions and eutectic composites based on III–V semiconductor compounds, and their practical applications.

Maksud Isfendiyar Aliyev graduated from Azerbaijan State University, Baku, in 1950. Since 1961, he has been a researcher at the Institute of Physics of the Azerbaijan National Academy of Sciences. He received the PhD degree in 1957 and the Doctor of Sciences degree in semiconductor and dielectric physics in 1966. At present, he is a principal researcher at the Institute of Physics of the Azerbaijan National Academy of Sciences.
His main interests are transport phenomena in solid solutions and eutectic composites, and their practical applications.

Metin Tanoglu received a BS from Istanbul Technical University, Turkey, in 1992, and an MSc (1996) and a PhD (2000) in materials science and engineering from the University of Delaware, USA. Since 2004, he has been an associate professor in the Mechanical Engineering Department of Izmir Institute of Technology. His main research interests are the processing and characterization of composite materials; nanocomposites; layered clays and carbon nanotubes; and the mechanical, physical, and microstructural characterization of materials.

Lutfi Ozyuzer received a BS in physics engineering from Hacettepe University, Turkey, in 1991. He received an MS degree in physics from the Illinois Institute of Technology, USA, in 1995, and a PhD degree from the same institute in 1999. He worked as a research associate and postdoctoral researcher at the Materials Science Division of Argonne National Laboratory, USA, from 1995 to 2000. Since 2004, he has been working as an associate professor of physics at Izmir Institute of Technology, Turkey. His main research areas are tunneling spectroscopy of superconductors, Josephson junctions, and terahertz generation from superconductors.
The E-Myth Revisited
By Michael E. Gerber

Introduction
- Over 1 million new businesses are started each year in the U.S.
- At least 40% will not make it through the first year.
- Within five years, more than 80% will have failed.
- And 80% of those businesses that survive the first five years fail in the second five years.
- Therefore, only 40,000 businesses, or 4% of the original 1 million, survive the first 10 years.
- To change those odds you need to understand what a business really is and what it takes to make it work.

Part I – The E-Myth and American Small Business

1. The Entrepreneurial Myth
- In this country there is a romantic belief that small businesses are started by entrepreneurs, when, in fact, most are not.
- Rather, one day you wake up and say to yourself, "Why am I working for this guy? I know as much about the business as he does. Why not start my own business?" So you go into business for yourself.
- But there is a Fatal Assumption in your thinking – if you understand the technical work of a business, you understand a business that does that technical work. Wrong!
- The technical work of a business and a business that does that technical work are two totally different things!

2. The Entrepreneur, The Manager and The Technician
- Everybody who goes into business is three people in one: The Entrepreneur, The Manager and The Technician, who battle each other.
- The Entrepreneur
  - Is the visionary in us
  - Creates a great deal of havoc around him
  - Considers most people as problems getting in the way of the dream
- The Manager
  - Is pragmatic
  - Does the planning, keeps things in order
  - Cleans up the messes of the Entrepreneur
- The Technician
  - Is the doer

3. Infancy: The Technician's Phase
- Businesses, like people, are supposed to grow; and with growth comes change.
- Three phases of a business's growth: Infancy, Adolescence, and Maturity.
- It's easy to spot a business in Infancy:
  - The owner and the business are one and the same thing.
  - You are the business.
  - Without you there would be no business.
- Then subtle changes begin to occur:
  - You begin to fall behind.
  - There's more work than you can possibly get done.
  - You begin to drop some of the balls!
- Infancy ends when the owner realizes that the business cannot continue to run the way it has been; in order to survive it will have to change.
- To be a great Technician is simply insufficient to the task of building a great small business.
- If the Technician fills your work day, then you are avoiding the Entrepreneur's challenge of learning how to grow a business.
- The purpose of going into business is to get free of a job so you can create jobs for other people.
- Your Entrepreneur needs to be encouraged to build a small business that actually works.

4. Adolescence: Getting Some Help
- Adolescence begins in the life of your business when you decide to get help.
- A critical moment in every business is when the owner hires his first employee to do the work he doesn't know how to do or doesn't want to do.
- Management by Abdication – letting somebody else do it without supervision until your employee begins dropping the ball.
- As the balls continue to fall, you begin to realize that no one cares about your business the way you do. No one is willing to work as hard as you.

5. Beyond the Comfort Zone
- Every Adolescent business reaches a point where it pushes beyond its owner's Comfort Zone – outside of which he begins to lose control.
- The Technician's boundary is determined by how much he can do himself.
- The Manager's is defined by how many technicians he can supervise effectively.
- The Entrepreneur's boundary is a function of how many managers he can engage in pursuit of his vision.
- As a business grows, it invariably exceeds its owner's ability to control it.

6. Maturity and the Entrepreneurial Perspective
- A Mature business knows how it got to be where it is, and what it must do to get where it wants to go.
- Maturity is not an inevitable result of the first two phases. Mature companies started out that way! The people who started them had a totally different perspective about what a business is and why it works.
- A person who launches his company as a Mature company goes through Infancy, Adolescence and Maturity in an entirely different way, with an Entrepreneurial Perspective.
- The Entrepreneurial Perspective
  - Asks the question: "How must the business work?"
  - Sees the business as a system for producing results
  - Starts with a picture of a well-defined future, and then comes back to the present with the intention of changing it to match the vision.
  - Views the business as a network of integrated components, each contributing to produce a specifically planned result.
  - Each step in the development of such a business is measurable.
  - There is a standard for the business.
  - The business operates according to articulated rules and principles.
- The Entrepreneurial Model
  - It's a model of a business that fulfills the perceived needs of a specific segment of customers in an innovative way.
  - The commodity isn't what's important – the way it's delivered is.
  - It understands that without a clear picture of the customer, no business can succeed.
  - It answers the question, "How will my business stand out from all the rest?"

Part II – The Turn-Key Revolution: A New View of Business

7. The Turn-Key Revolution
- Turning the Key: The Business Format Franchise
  - It provides the franchisee with an entire system of doing business.
  - It is built on the belief that the true product of a business is not what it sells but how it sells it.
- Selling the Business Instead of the Product
  - Ray Kroc (founder of McDonald's) set about the task of creating a foolproof, predictable business.
  - A systems-dependent business, not a people-dependent business.
  - A system that can work without you.
- The Franchise Prototype is the model you need to make your business work successfully.

8. The Franchise Prototype
- Fact: While 80% of all new businesses fail in the first five years, 75% of all Business Format Franchises succeed!
- The Franchise Prototype is where all assumptions are put to the test to see how well they work before becoming operational in the business.
- The system runs the business. The people run the system.
- The system integrates all of the elements required to make a business work.
- The system leaves the franchisee with as little operating discretion as possible by sending him through rigorous training before ever allowing him to operate the franchise.
- Turn-Key Operation: the franchisee is licensed the right to use the system, learns how to run it, and then "turns the key." The business does the rest.
- A Business Format Franchise is a proprietary way of doing business that successfully and preferentially differentiates every extraordinary business from every one of its competitors.
- The question is: How do I build my Franchise Prototype?

9. Working On Your Business, Not In It
- The point is: your business is not your life – they are two totally separate things.
- The primary catalyst from this point forward is to work on your business, not in it.
- Pretend that you are going to franchise your business.
- Six Rules to follow in "franchising" your business:
  1) The model will provide consistent value to your customers, employees, and lenders beyond what they expect.
  2) The model will be operated by people with the lowest possible level of skill.
     - A systems-dependent business rather than a people-dependent business.
     - A way of doing things that compensates for the disparity between the skills your people have and the skills your business needs.
     - A business that depends on the ability of the employee will ultimately not deliver consistently excellent results.
  3) The model will stand out as a place of impeccable order.
  4) All work in the model will be documented in the Operations Manual.
     - Documentation provides your people with the structure they need and a written account of how to "get the job done" in the most efficient and effective way.
     - The Operations Manual – the company's How-To-Do-It Guide.
  5) The model will provide a uniformly predictable service to the customer.
  6) The model will utilize a uniform color, dress, and facilities code.
- Questions to ask yourself:
  - How can I get my business to work, but without me?
  - How can I get my people to work, but without my constant interference?
  - How can I own my business, and still be free of it?
- To successfully develop a business you need a process, a practice by which to obtain that information and, once obtained, a method with which to put that information to use in your business productively.

Part III – Building a Small Business That Works!

10. The Business Development Process
- Building the Prototype of your business is a continuous process, a Business Development Process. There are 3 distinct activities:
  1) Innovation
     - It's not the commodity that demands Innovation but the process by which it is sold.
     - The franchisor aims his innovative energies at the way in which his business does business.
     - It is at the heart of every exceptional business.
     - It poses the question: What is standing in the way of my customer getting what he wants from my business?
     - It simplifies your business to its critical essentials.
     - It should make things easier for your people, or it's not innovation.
     - It seeks the answer to, "What is the best way to do this?"
  2) Quantification
     - To be effective, all Innovations need to be quantified.
     - Without it, how would you know whether the Innovation worked?
     - Begin by quantifying everything related to how you do business.
     - Eventually, you and your people will think of your entire business in terms of the numbers. You'll quantify everything.
  3) Orchestration
     - Is the elimination of discretion, or choice, at the operating level of your business.
     - If everyone in your company is doing it differently each time you do a loan, you're creating chaos, not order.
     - If you haven't orchestrated it, you don't own it. And if you don't own it, you can't depend on it.
     - The definition of a franchise is simply your unique way of doing business.
     - When your system doesn't work any longer, change it!
- The Business Development Process is not static: it's not something you do once and are then done with. You do it all the time.
- Once you've innovated, quantified, and orchestrated your business, you must continue to innovate, quantify and orchestrate it.
- It is a habit – a way of doing something habitually.
11. Your Business Development Program
- Your Business Development Program is the step-by-step process through which you convert your existing business into a perfectly organized model for thousands more just like it.
- The Program is composed of seven distinct steps:
  - Your Primary Aim
  - Your Strategic Objective
  - Your Organizational Strategy
  - Your Management Strategy
  - Your People Strategy
  - Your Marketing Strategy
  - Your Systems Strategy

12. Your Primary Aim
- To determine what your role will be, you need to answer these questions:
  - What do I value most?
  - How would I wish my life to be on a day-to-day basis?
  - How would I like people to think about me?
  - What would I like to be doing five years from now? Ten years from now?
  - How much money will I need to do the things I wish to do? By when will I need it?
- Your Primary Aim answers all these questions.
- Great people have a vision of their lives that they practice emulating each and every day. Great people create their lives actively, while everyone else is created by their lives, passively waiting to see where life takes them next. The difference between the two is:
  - The difference between living fully and just existing.
  - The difference between living intentionally and living by accident.
- The answers to these questions become the standards against which you can begin to measure your life’s progress. Without such standards, your life will drift aimlessly, without purpose, without meaning.
- Your Primary Aim is the vision necessary to bring your business to life and your life to your business.

13. Your Strategic Objective
- It is a very clear statement of what your business has to ultimately do for you to achieve your Primary Aim.
- It is the vision of your finished product that is and will be your business.
- It is a product of your Life Plan, as well as your Business Strategy and Plan. Your Life Plan shapes your life, and the business that is to serve it. But unless your Business Strategy and Plan can be reduced to a set of simple and clearly stated standards, it will do more to confuse you than to help.
- Your Strategic Objective is just such a list of standards. It is a tool for measuring your progress toward a specific end.
- List of Standards:
  - **The First Standard: Money**
    - How much money do I need to live the way I wish? Not in income but in assets.
    - In other words, how much money do I need in order to be independent of work?
  - **The Second Standard: An Opportunity Worth Pursuing**
    - It is a business that can fulfill the financial standards I’ve created for my Primary Aim and my Strategic Objective.
    - If it is reasonable to assume that it can, the business is worth pursuing.
    - If it is unreasonable to assume that it can, then no matter how exciting and interesting the business is, forget it. Walk away from it.
  - **Standards Three Through…?** There is no specific number of standards in your Strategic Objective, only specific questions:
    - When is your Prototype going to be completed? In two years? Three? Ten?
    - What geographic market are you going to be in?
    - What standards are you going to insist upon regarding reporting, training, customer service, etc.?
- Standards create the energy by which the best companies, and the most effective people, produce results.

14. Your Organizational Strategy
- The Organization Chart can have a more profound impact on a small company than any other single Business Development Step.
- If everybody’s doing everything, then who’s accountable for anything?
- Once your Strategic Objective is completed, which defines how you will be doing business based on your list of standards, the creation of your Organization Chart is next:
  1. Identify all of the positions in the company as you visualize it in the future, when the company has matured to its optimal size.
  2. Next, write a Position Contract for each position on the Organization Chart. It is a summary of:
     - The results to be achieved by the position
     - The work the position is accountable for
     - A list of standards by which the results are to be evaluated
     - A line for the signature of the person who agrees to fulfill those accountabilities
  3. Finally, identify who is going to fill each position, understanding it cannot be more than one person.
- Prototyping the Position: Replacing Yourself with a System
  - When you find the right person, hire him, hand him the Operations Manual, have him learn the system, and finally let him go to work.
  - You have now replaced yourself with a system that works in the hands of someone who wants to work it.
  - Now your job becomes managing the system rather than doing the work.
- Your Organization Chart flows down from your Strategic Objective, which in turn flows down from your Primary Aim.

15. Your Management Strategy
- A Management System is a system designed into your Prototype to produce a marketing result.
- Its purpose is not just to create an efficient Prototype but an effective one.
- An effective Prototype is a business that finds and keeps customers better than any other.
- The Operations Manual is nothing but a series of checklists. Each checklist itemizes the specific steps to produce the desired result.

16. Your People Strategy
- How do I get people to do what I want? Create an environment in which “doing it” is more important to your people than not doing it.
- The idea behind our work:
  1) The customer is not always right, but whether he is or not, it is our job to make him feel that way.
  2) Everyone who works here is expected to work toward being the best he can possibly be at the tasks he’s accountable for. If he is unwilling to do this, he should leave.
  3) The business is a place where everything we know how to do is tested by what we don’t know how to do, and the conflict between the two is what creates growth, what creates meaning.
- People want to work where there is a clearly defined structure for acting in the world. Such a structure is called a game.
- The very best businesses represent, to the people who create them, a game to be played in which the rules symbolize the idea you, the owner, have about the world.
- The Rules of the Game:
  - First, create the game with defined rules.
  - Never create a game for your people you’re unwilling to play yourself.
  - Make sure there are ways of winning the game without ending it. The game can never end, but unless there are victories along the way, your people will grow weary. Victories keep people in the game and make the game appealing.
  - Change the game from time to time – the tactics, not the strategy. Know when change is called for; watch your people, as they will tell you when the game’s all but over.
  - Never expect the game to be self-sustaining. People need to be reminded of it constantly.
  - The game has to make sense. The best games are built on universally verifiable truths.
  - The game needs to be fun from time to time. No game needs to be fun all of the time. Part of the thrill of playing a game well is learning how to deal with the “no fun” parts of the game.
  - Fun needs to be planned into your game – not too often, maybe once a quarter.
  - If you can’t think of a good game, steal one. Always be on the lookout for how another company’s game can be incorporated into your game.
- What’s wrong with hiring experienced employees? They will work by the standards they have been taught at somebody else’s business. You must set the standard by establishing a Management System through which all managers, and all those who would become managers in your company, are expected to produce results. In short, you want people who want to play your game, not people who believe they have a better one.
- The foundation of your Management System is composed of four distinct components:
  1) How We Do It Here.
  2) How We Recruit, Hire, and Train People to Do It Here.
  3) How We Manage It Here.
  4) How We Change It Here.
- The “It” refers to the stated purpose of your business. Every bit of it is documented in your Operations Manual.
- It is the system, not only your people, that will differentiate your business from everyone else’s.

17. Your Marketing Strategy
- When it comes to marketing, what you want is unimportant – it’s what the customer wants that matters.
- When a customer says, “I want to think about it,” don’t believe him. He is saying one of two things:
  - He is emotionally incapable of saying no for fear of how you might react if he told you the truth, or
  - You haven’t provided him with the “food” his unconscious mind craves.
- The Two Pillars of a Successful Marketing Strategy:
  - Demographics – who your customer is
  - Psychographics – why he buys
- Having determined the who and why, you then begin constructing a Prototype to satisfy your customer’s unconscious needs: What must our business be in the mind of our customers in order for them to choose us over everyone else?
  - Lead Generation – the promise you make to attract them to you.
  - Lead Conversion – the sale you make once they get there.
  - Client Fulfillment – it ends with the delivery of the promise before they leave your door.
- The primary aim of every business is to get them to come back for more. The customer you’ve got is a lot less expensive to sell to than the customer you don’t have yet.

18. Your Systems Strategy
- There are three kinds of systems:
  - **Hard Systems** – inanimate things, e.g., a computer system.
  - **Soft Systems** – animate, living things or ideas. You are a Soft System; so is a script of Hamlet.
  - **Information Systems** – provide us with information about the interaction between the other two. Inventory control, cash flow forecasting, and sales activity reports are all examples.
- The integration of these three kinds of systems in your business is what your Business Development Program is all about.
- A Sales System is a **Soft System** – a selling system is a fully orchestrated interaction between you and your customer that follows six primary steps:
  1) Identification of the specific Benchmarks – or consumer decision points – in your selling process.
  2) The literal scripting of the words that will get you to each Benchmark successfully.
  3) The creation of various materials to be used with each script.
  4) The memorization of each Benchmark’s script.
  5) The delivery of each script by your salespeople in identical fashion.
  6) Allowing your people to communicate more effectively by engaging each and every prospect as fully as he needs to be.
- An **Information System** that interacts with your Sales System should provide the following information:
  - How many prospects were reached?
  - How many appointments were scheduled?
  - How many Needs Analysis Presentations were completed?
  - How many Solutions Presentations were completed?
  - How many sales were made?
  - What was the average dollar value?
- The Information System will track the activity of your Selling System from benchmark to benchmark.
Including the efficacy of land ice changes in deriving climate sensitivity from paleodata

Lennert B. Stap\textsuperscript{1}, Peter Köhler\textsuperscript{1}, and Gerrit Lohmann\textsuperscript{1}

\textsuperscript{1}Alfred-Wegener-Institut, Helmholtz-Zentrum für Polar- und Meeresforschung, Am Handelshafen 12, 27570 Bremerhaven, Germany

Correspondence: L.B. Stap (firstname.lastname@example.org)

Abstract. The equilibrium climate sensitivity (ECS) of climate models is calculated as the equilibrium global mean surface warming resulting from a simulated doubling of the atmospheric CO$_2$ concentration. In these simulations, long-term processes in the climate system, such as land ice changes, are not incorporated. Hence, they have to be compensated for when comparing climate sensitivity derived from paleodata to the ECS of climate models. Several recent studies found that the impact these long-term processes have on global temperature cannot be quantified directly through the global radiative forcing they induce. This renders the approach of deconvoluting paleotemperatures through a partitioning based on radiative forcings inaccurate. Here, we therefore implement an efficacy factor $\varepsilon_{[LI]}$ that relates the impact of land ice changes on global temperature to that of CO$_2$ changes in our calculation of climate sensitivity from paleodata. We apply our new approach to a proxy-inferred paleoclimate dataset, and base the range in $\varepsilon_{[LI]}$ we use on a multi-model assemblage of simulated relative influences of land ice changes on the Last Glacial Maximum temperature anomaly. We find that $\varepsilon_{[LI]}$ is smaller than unity, meaning that per unit of radiative forcing the impact on global temperature is less strong for land ice changes than for CO$_2$ changes. Consequently, our obtained ECS estimate of $5.8 \pm 1.3$ K, where the uncertainty reflects the implemented range in $\varepsilon_{[LI]}$, is $\sim 50\%$ higher than the result of the old approach that does not consider efficacy.

1 Introduction

Equilibrium climate sensitivity (ECS) expresses the simulated equilibrated surface air temperature response to an instantaneous doubling of the atmospheric CO$_2$ concentration. The simulated effect of the applied CO$_2$ radiative forcing anomaly includes the Planck response, as well as the fast feedbacks, e.g. through snow, sea ice, lapse rate, cloud and water vapour changes. ECS varies significantly between different state-of-the-art climate models; for instance, the CMIP5 ensemble shows a range of 1.9 to 4.4 K (Vial et al., 2013). Several ways have been put forward to constrain ECS, for example through the usage of paleoclimate data (e.g. Covey et al., 1996; Edwards et al., 2007), which is also the focus of this study. However, unlike results of models, which can be run ceteris paribus, temperature reconstructions based on paleoclimate proxy data always contain a mixed signal of all processes active in the climate system. Among these are long-term processes (or slow feedbacks) such as changes in vegetation, dust, and, arguably most importantly, land ice, which are not taken into account in the quantification of ECS. Therefore, it is necessary to correct paleotemperature records for the influence of these processes, in order to make a meaningful comparison to ECS calculated by climate models.
In a co-ordinated community effort, the PALAEOSENS project proposed to relate the temperature response caused by these long-term processes to the global averaged radiative forcing they induce (PALAEOSENS Project Members, 2012). Consequently, the paleotemperature record can be disentangled on the basis of the separate radiative forcings of these long-term processes (e.g. von der Heydt et al., 2014; Martínez-Botí et al., 2015; Köhler et al., 2015, 2017b, 2018; Friedrich et al., 2016). If all processes are accounted for in this manner, the sole effect of CO$_2$ changes, as is asserted by the ECS, can be quantified. However, several studies have shown that, depending on the type of radiative forcing, the same global average radiative forcing can lead to different global temperature changes (e.g. Stuber et al., 2005; Hansen et al., 2005; Yoshimori et al., 2011). For instance, in a previous article (Stap et al., 2018) we simulated the separate and combined effects of CO$_2$ changes and land ice changes on global surface air temperature using the intermediate complexity climate model CLIMBER-2, and showed that the specific global temperature change per unit radiative forcing change depends on which process is involved. As a possible solution to this problem, Hansen et al. (2005) formulated the concept of ‘efficacy’ factors, which express the impact of radiative forcing by a certain process in comparison to the effect of radiative forcing by CO$_2$ changes. Based on the concept of Hansen et al. (2005), here we introduce an efficacy factor for radiative forcing by albedo changes due to land ice variability in our method of deriving climate sensitivity from paleodata. We first validate our refined approach by applying it to transient simulations over the past 5 Myr using CLIMBER-2 (Stap et al., 2018). We compare the results of our approach of obtaining the sole effect of CO$_2$ changes on global temperature from a simulation forced by land ice and CO$_2$ changes, to a simulation where CO$_2$ changes are the only operating long-term process. Hence, we can assess the error resulting from using a constant efficacy factor. Thereafter, we refine a previous estimate of climate sensitivity based on a paleoclimate dataset of the past 800 kyr (Köhler et al., 2015, 2018). In this dataset, the sole effect of CO$_2$ is not a priori known. We therefore investigate the influence of the introduced efficacy factor on the calculated climate sensitivity. To do so, we appraise the influence of land ice changes and the associated efficacy using a range that is given by different modelling efforts of the Last Glacial Maximum (LGM; $\sim 21$ kyr ago) (Shakun, 2017). The climate sensitivity resulting from applying this range provides a quantification of the consequence of the uncertain efficacy of land ice changes.

2 Material and methods

In this section, we recapitulate the approach to obtain climate sensitivity from paleodata, used in numerous earlier studies (e.g. PALAEOSENS Project Members, 2012; von der Heydt et al., 2014; Martínez-Botí et al., 2015; Köhler et al., 2015, 2017b, 2018; Friedrich et al., 2016). We also discuss the main refinement we make in this study, which is the inclusion of the efficacy of land ice changes, and a further small refinement that unifies the dependent variable in cross-plots of radiative forcing and global temperature anomalies.
2.1 Approach to obtain climate sensitivity from paleodata

Equilibrium climate sensitivity (ECS) is the long-term global average surface air temperature change resulting from a CO$_2$ doubling, and is usually obtained from climate model simulations. In these simulations, fast feedbacks, i.e. processes in the climate system with timescales of less than $\sim 100$ yrs, are accounted for. However, slower processes, such as ice sheet, vegetation and dust changes, are commonly kept constant. The resulting response is also sometimes called ‘Charney’ sensitivity (Charney et al., 1979). Following the notation of PALAEOSENS Project Members (2012), taking the ratio of the temperature change ($\Delta T_{[CO_2]}$) over the radiative forcing due to the CO$_2$ change ($\Delta R_{[CO_2]}$) leads to $S^a$ (in K W$^{-1}$ m$^2$, where $a$ stands for *actuo*):

$$S^a = \frac{\Delta T_{[CO_2]}}{\Delta R_{[CO_2]}}.$$ (1)

The subscript denotes that CO$_2$ is the only long-term process involved. Analogously, paleoclimate sensitivity ($S^p$) can be deduced from paleotemperature reconstructions and paleo-CO$_2$ records as

$$S^p = \frac{\Delta T_g}{\Delta R_{[CO_2]}}.$$ (2)

In this case, the average global paleotemperature anomaly with respect to the pre-industrial (PI) ($\Delta T_g$) is, however, also affected by the long-term processes that are typically neglected in climate simulations. Therefore, a correction to the paleotemperature perturbation is needed to obtain $\Delta T_{[CO_2]}$ from $\Delta T_g$:

$$\Delta T_{[CO_2]} = \Delta T_g (1 - f),$$ (3)

or equivalently $S^a$ from $S^p$:

$$S^a = S^p (1 - f) = \frac{\Delta T_g}{\Delta R_{[CO_2]}} (1 - f).$$ (4)

Here, $f$ represents the effect of the slow feedbacks on paleotemperature (e.g. van de Wal et al., 2011). To obtain $f$, PALAEOSENS Project Members (2012) proposed an approach which has subsequently been used in numerous studies aiming to constrain climate sensitivity from paleodata (e.g. von der Heydt et al., 2014; Martínez-Botí et al., 2015; Köhler et al., 2015, 2017b, 2018; Friedrich et al., 2016). Their idea was that the influence of long-term processes (X) on global temperature is directly proportional to the radiative forcing perturbation they induce ($\Delta R_{[X]}$), hence:

$$f = \frac{\Delta R_{[X]}}{\Delta R_{[CO_2]} + \Delta R_{[X]}} = 1 - \frac{\Delta R_{[CO_2]}}{\Delta R_{[CO_2]} + \Delta R_{[X]}}.$$ (5)

Combining Eqs.
4 and 5 and following the PALAEOSENS nomenclature, we can then derive the ‘specific’ paleoclimate sensitivity $S_{[CO_2,X]}$, where X represents the processes that are accounted for in the calculation of $f$:

$$S_{[CO_2,X]} = \frac{\Delta T_g}{\Delta R_{[CO_2]}} \left(1 - \frac{\Delta R_{[X]}}{\Delta R_{[CO_2]} + \Delta R_{[X]}}\right) = \frac{\Delta T_g}{\Delta R_{[CO_2]} + \Delta R_{[X]}} = \frac{\Delta T_g}{\Delta R_{[CO_2,X]}}.$$ (6)

If, for instance, only the most important slow feedback in the climate system, namely the radiative forcing anomaly induced by albedo changes due to land ice (LI) variability, is taken into account, then one can correct $S^p$ to derive the following specific climate sensitivity:

$$S_{[CO_2,LI]} = \frac{\Delta T_g}{\Delta R_{[CO_2]} + \Delta R_{[LI]}} = \frac{\Delta T_g}{\Delta R_{[CO_2,LI]}}.$$ (7)

Using this approach, several studies performed a least-squares regression through scattered data from paleotemperature and radiative forcing records (Martínez-Botí et al., 2015; Friedrich et al., 2016; Köhler et al., 2015, 2017b, 2018), relating $\Delta T_g$ to $\Delta R_{[CO_2,LI]}$ in a time-independent manner, from which $S_{[CO_2,LI]}$ could be determined. In this way, a state dependency of $S_{[CO_2,LI]}$ as a function of background climate has been deduced for those data which are best approximated by a non-linear function. Furthermore, the quantification of $S_{[CO_2,LI]}$ for those state-dependent cases has been formalized in Köhler et al. (2017b). A synthesis of estimates of $S_{[CO_2,LI]}$ from both colder- and warmer-than-present climates has been compiled by von der Heydt et al. (2016). To obtain $S^a$, one needs to multiply $S_{[CO_2,LI]}$ by a factor of 0.64 that accounts for the influence of other long-term processes, namely vegetation, aerosol and non-CO$_2$ greenhouse gas changes (PALAEOSENS Project Members, 2012). Finally, we obtain the equivalent ECS by multiplying $S^a$ by 3.7 W m$^{-2}$, the radiative forcing perturbation representing a CO$_2$ doubling (Myhre et al., 1998).

2.2 Refinement 1: Taking the efficacy of land ice changes into account

The validity of the PALAEOSENS approach to calculate $f$ is contingent on the notion that identical global-average radiative forcing changes lead to identical global temperature responses, regardless of the processes involved. However, it has been demonstrated that the horizontal and vertical distribution of the radiative forcing affects the resulting temperature response (e.g. Stuber et al., 2005; Hansen et al., 2005; Yoshimori et al., 2011; Stap et al., 2018) because, e.g., different fast feedbacks are triggered depending on the location of the forcing. To address this issue, Hansen et al. (2005) introduced the concept of ‘efficacy’ factors, which we will explore further in this study. These factors ($\varepsilon_{[X]}$) relate the strength of the temperature response to radiative forcing caused by a certain process X ($\Delta T_{[X]} / \Delta R_{[X]}$), to a similar ratio caused by CO$_2$ radiative forcing.
This introduction of efficacy requires a reformulation of $f$ as $f_\varepsilon$:

$$f_\varepsilon = \frac{\varepsilon_{[X]} \Delta R_{[X]}}{\Delta R_{[CO_2]} + \varepsilon_{[X]} \Delta R_{[X]}} = 1 - \frac{\Delta R_{[CO_2]}}{\Delta R_{[CO_2]} + \varepsilon_{[X]} \Delta R_{[X]}},$$ (8)

and hence also of $S_{[CO_2, X]}$ as $S^\varepsilon_{[CO_2, X]}$:

$$S^\varepsilon_{[CO_2, X]} = \frac{\Delta T_g}{\Delta R_{[CO_2]} + \varepsilon_{[X]} \Delta R_{[X]}}.$$ (9)

In these reformulations, where in principle $\varepsilon_{[X]}$ can take any value, we introduce the superscript $\varepsilon$. This serves to clearly distinguish these newly-derived sensitivities from those of the PALAEOSENS project, in which efficacy was not taken into account, implying that identical radiative forcing of different processes leads to identical temperature changes. To calculate $S^\varepsilon_{[CO_2, LI]}$, we constrain the efficacy factor for radiative forcing by land ice changes ($\varepsilon_{[LI]}$) using the following formulation, which is based on, but slightly modified from, Hansen et al. (2005):

$$\frac{\Delta T_{[LI]}}{\Delta R_{[LI]}} = \varepsilon_{[LI]} \frac{\Delta T_g - \Delta T_{[LI]}}{\Delta R_{[CO_2]}}.$$ (10)

This leads to:

$$\varepsilon_{[LI]} = \frac{\omega}{1 - \omega} \frac{\Delta R_{[CO_2]}}{\Delta R_{[LI]}},$$ (11)

where $\omega$ represents the fractional relative influence of land ice changes on the global temperature change ($\omega = \Delta T_{[LI]} / \Delta T_g$). If $\varepsilon_{[LI]}$ is assumed to be constant in time (see Sect. 3.2 and 5), it can be calculated using Eq. 11 from data of any specific moment in time, and consequently applied to the whole record of $\Delta R_{[CO_2]}$ and $\Delta R_{[LI]}$ (Fig. 1a,c). As before, with this $\varepsilon_{[LI]}$ a quantification of $S^\varepsilon_{[CO_2, LI]}$ can be obtained by performing a least-squares regression through scattered data from paleotemperature and radiative forcing records, now relating $\Delta T_g$ to $(\Delta R_{[CO_2]} + \varepsilon_{[LI]} \Delta R_{[LI]})$ in a time-independent manner. Note that apart from the formulation based on Hansen et al. (2005) followed here, other formulations of the efficacy factor are possible. For instance, one can define an alternative efficacy factor ($\varepsilon_{[LI], alt}$) such that it relates the effect of land ice changes on global temperature directly to the radiative forcing anomaly caused by CO$_2$ changes, leading to:

$$S^\varepsilon_{[CO_2, X], alt} = \frac{\Delta T_g}{\Delta R_{[CO_2]} + \varepsilon_{[LI], alt} \Delta R_{[CO_2]}}.$$ (12)

In this alternative case, the efficacy factor $\varepsilon_{[LI], alt}$ relates to our original $\varepsilon_{[LI]}$ as:

$$\varepsilon_{[LI], alt} = \varepsilon_{[LI]} \frac{\Delta R_{[LI]}}{\Delta R_{[CO_2]}}.$$ (13)

This implies that if $\varepsilon_{[LI]}$ is indeed constant, any non-linearity in the relation between $\Delta R_{[CO_2]}$ and $\Delta R_{[LI]}$ would demand a more complex formulation of the alternative efficacy factor $\varepsilon_{[LI], alt}$ (e.g. via a higher-order polynomial). Since we find such a non-linearity in our data (Fig. 2), using an F test to determine that a second-order polynomial is a significantly (p value < 0.0001) better fit to the data than a linear function, we refrain from following this alternative formulation further.
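To make the bookkeeping of Eqs. 8–11 concrete, the following minimal Python sketch evaluates them for illustrative inputs; the function names and the numbers are ours and purely illustrative, not part of the published analysis.

```python
def efficacy_land_ice(omega, dR_co2, dR_li):
    """Eq. 11: efficacy of land ice forcing, derived from its fractional
    contribution omega = dT_li / dT_g to the global temperature anomaly."""
    return omega / (1.0 - omega) * dR_co2 / dR_li

def f_epsilon(eps_li, dR_co2, dR_li):
    """Eq. 8: fraction of the global temperature signal attributed to
    land ice once efficacy is taken into account."""
    return eps_li * dR_li / (dR_co2 + eps_li * dR_li)

# Illustrative LGM-like forcing anomalies (W m-2, negative w.r.t. PI):
dR_co2, dR_li = -2.0, -3.9
eps = efficacy_land_ice(omega=0.5, dR_co2=dR_co2, dR_li=dR_li)
print(eps)                            # ~0.51
print(f_epsilon(eps, dR_co2, dR_li))  # 0.5, i.e. omega is recovered
```

Note that inserting the $\varepsilon_{[LI]}$ of Eq. 11 back into Eq. 8 returns $f_\varepsilon = \omega$ at the calibration point by construction, which provides a quick consistency check on any implementation.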
2.3 Refinement 2: Unifying the dependent variable

In the cross-plots of radiative forcing and global temperature anomalies used to calculate $S^\varepsilon_{[CO_2,LI]}$, the radiative forcing on the x-axis is caused by a combination of CO$_2$ and land-ice changes. To more readily compare $S^\varepsilon_{[CO_2,LI]}$ to other specific paleoclimate sensitivities $S^\varepsilon_{[CO_2,X]}$, where more and/or different long-term processes are considered, the dependent variable has to be unified. Here, we therefore reformulate our equation to get $\Delta R_{[CO_2]}$ alone in the denominator, enabling the use of cross-plots that now have $\Delta R_{[CO_2]}$ on the x-axis:

$$S^\varepsilon_{[CO_2,X]} = \frac{\Delta T_g}{\Delta R_{[CO_2]} + \varepsilon_{[X]} \Delta R_{[X]}} = \frac{\Delta T_g}{\Delta R_{[CO_2]}} \frac{\Delta R_{[CO_2]}}{\Delta R_{[CO_2]} + \varepsilon_{[X]} \Delta R_{[X]}} = \frac{\Delta T^\varepsilon_{[-X]}}{\Delta R_{[CO_2]}}.$$ (14)

Here, $\Delta T^\varepsilon_{[-X]}$ is the global temperature change (with respect to PI) stripped of the inferred influence of processes X, defined as:

$$\Delta T^\varepsilon_{[-X]} := \Delta T_g \frac{\Delta R_{[CO_2]}}{\Delta R_{[CO_2]} + \varepsilon_{[X]} \Delta R_{[X]}}.$$ (15)

Hence, for the calculation of $S^\varepsilon_{[CO_2,LI]}$ we use:

$$\Delta T^\varepsilon_{[-LI]} := \Delta T_g \frac{\Delta R_{[CO_2]}}{\Delta R_{[CO_2]} + \varepsilon_{[LI]} \Delta R_{[LI]}}.$$ (16)

Now, we quantify $S^\varepsilon_{[CO_2,LI]}$ by performing a least-squares regression (regfunc) through scattered data from $\Delta T^\varepsilon_{[-LI]}$ and $\Delta R_{[CO_2]}$. We use the precondition that no change in CO$_2$ is related to no change in $\Delta T^\varepsilon_{[-LI]}$, meaning the regression passes through the origin ($(x,y) = (0,0)$). Following Köhler et al. (2017b), for any non-zero $\Delta R_{[CO_2]}$, we calculate $S^\varepsilon_{[CO_2,LI]}$ as:

$$S^\varepsilon_{[CO_2,LI]} \bigg|_{\Delta R_{[CO_2]}} = \frac{\text{regfunc}}{\Delta R_{[CO_2]}} \bigg|_{\Delta R_{[CO_2]}}.$$ (17)

If $\Delta R_{[CO_2]} = 0$ W m$^{-2}$, as is among others the case for pre-industrial conditions, $S^\varepsilon_{[CO_2,LI]}$ is quantified as:

$$S^\varepsilon_{[CO_2,LI]} \bigg|_{\Delta R_{[CO_2]}=0} = \frac{\delta(\text{regfunc})}{\delta(\Delta R_{[CO_2]})} \bigg|_{\Delta R_{[CO_2]}=0}.$$ (18)

Equations 17 and 18 yield a quantification of $S^\varepsilon_{[CO_2,LI]}$, which can be compared to the value obtained for $S_{[CO_2,LI]}$ using the approach without considering efficacy (equivalent to using $\varepsilon_{[LI]} = 1$) (Köhler et al., 2018). In this study, we continue to use a multiplication factor of 0.64 to obtain $S^a$ from $S^\varepsilon_{[CO_2,LI]}$. Note that this scaling still assumes unit efficacy for processes other than land ice changes. Therefore, it is a source of uncertainty to be investigated in future research. The equivalent ECS (in K per CO$_2$ doubling) can again be calculated by multiplying $S^a$ by 3.7 W m$^{-2}$.

3 Validation of the approach using model simulations

In this section, we validate our refined approach by applying it to transient simulations over the past 5 Myr using CLIMBER-2 (Stap et al., 2018). We compare the results of our approach of obtaining the sole effect of CO$_2$ changes on global temperature from a simulation forced by land ice and CO$_2$ changes, to a simulation where CO$_2$ changes are the only operating long-term process. By doing so, we assess the error resulting from using a constant efficacy factor.
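Before turning to the model-based validation, the pipeline of Sect. 2.3 (Eq. 16 plus the final scaling to $S^a$ and ECS) can be summarized in a few lines of Python; the names are ours, and the worked number simply anticipates the PI estimate of Sect. 4.2.

```python
def strip_land_ice(dT_g, dR_co2, dR_li, eps_li):
    """Eq. 16: global temperature anomaly stripped of the inferred
    land ice influence (Delta T^eps_[-LI])."""
    return dT_g * dR_co2 / (dR_co2 + eps_li * dR_li)

def s_a_and_ecs(S_co2_li, other_processes=0.64, dR_2xCO2=3.7):
    """Scalings of Sects. 2.1 and 2.3: S^a = 0.64 * S_[CO2,LI], and
    ECS = 3.7 W m-2 * S^a (in K per CO2 doubling)."""
    S_a = other_processes * S_co2_li  # K W-1 m2
    return S_a, dR_2xCO2 * S_a

# Illustrative: a PI S_[CO2,LI] of 2.45 K W-1 m2 (Sect. 4.2) maps onto
# S^a ~ 1.6 K W-1 m2 and an ECS of ~5.8 K:
print(s_a_and_ecs(2.45))  # -> (1.568, 5.8016)
```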
3.1 CLIMBER-2 model simulations

Using the intermediate complexity climate model CLIMBER-2 (Petoukhov et al., 2000; Ganopolski et al., 2001), climate simulations over the past 5 Myr were performed and analysed in Stap et al. (2018). CLIMBER-2 combines a 2.5-dimensional statistical-dynamical atmosphere model with a 3-basin zonally averaged ocean model (Stocker et al., 1992), and a model that calculates dynamic vegetation cover based on temperature and precipitation (Brovkin et al., 1997). In brief, the simulations are forced by solar insolation, which changes due to orbital (O) variations (Laskar et al., 2004), and further by land ice (I) changes on both hemispheres (based on de Boer et al., 2013) and CO$_2$ (C) changes (based on van de Wal et al., 2011). In the reference experiment (OIC) all input data are varied, while in other model integrations the land ice (experiment OC) or the CO$_2$ concentration (experiment OI) is kept fixed at PI level. The synergy of land ice and CO$_2$ changes is negligibly small, meaning their induced temperature changes add approximately linearly when both forcings are applied. Furthermore, the influence of orbital variations is also very small, so that experiment OC approximately yields the sole effect of CO$_2$ changes on global temperature ($\Delta T_{[OC]}$). As in Stap et al. (2018), we use the simple energy balance model of Köhler et al. (2010) to analyse the applied radiative forcing of land ice albedo and CO$_2$ changes and the simulated global temperature changes, after averaging to 1,000-year temporal resolution (Fig. 1a,b).

3.2 Analysis

First, we analyse experiment OC, which will serve as a target for our refined approach as deployed later in this section. We use a least-squares regression through scattered data of $\Delta R_{[CO_2]}$ and $\Delta T_{[OC]}$ to fit a second-order polynomial (Fig. 3a). Using a higher-order polynomial rather than a linear function allows us to capture the state dependency of paleoclimate sensitivity. Fitting even higher-order polynomials leads to negligible coefficients for the higher powers, and is not pursued further. From the fit, we calculate a specific paleoclimate sensitivity $S^\varepsilon_{[CO_2,LI]}$ of 0.74 K W$^{-1}$ m$^2$ for PI conditions ($\Delta R_{[CO_2]} = 0$ W m$^{-2}$) using Eq. 18. Note that, in this case, $S^\varepsilon_{[CO_2,LI]}$ is equal to $S^\varepsilon_{[CO_2]}$, $S_{[CO_2,LI]}$ and $S_{[CO_2]}$, as there are no land ice changes and therefore also no efficacy differences. The fit further shows decreasing $S^\varepsilon_{[CO_2,LI]}$ for rising $\Delta R_{[CO_2]}$. Now, we apply our approach to the results of experiment OIC, in which both CO$_2$ and land ice cover vary over time, with the aim of deducing the sole effect of CO$_2$ changes on global temperature. We calculate the efficacy of land ice changes for the Last Glacial Maximum (LGM; 21 kyr ago) from experiment OI, in which the CO$_2$ concentration is kept constant. We obtain $\omega = \Delta T_{[LI]} / \Delta T_g = \Delta T_{[OI]} / \Delta T_{[OIC]} = 0.54$. Consequently, we find $\varepsilon_{[LI]} = 0.58$ from Eq. 11, and apply this value to the whole record of $\Delta R_{[CO_2]}$ and $\Delta R_{[LI]}$. In this manner, we calculate $\Delta T^\varepsilon_{[-LI]}$ using Eq. 16. We then fit a second-order polynomial to the scattered data of the thus obtained $\Delta T^\varepsilon_{[-LI]}$ from the results of experiment OIC, and $\Delta R_{[CO_2]}$ (Fig. 3b).
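The fitting and evaluation steps of Eqs. 17 and 18 reduce to a short routine; the sketch below uses synthetic stand-in data, since it is meant only to illustrate the origin-constrained quadratic regression, not to reproduce the CLIMBER-2 numbers.

```python
import numpy as np

def fit_quadratic_through_origin(x, y):
    """Least-squares fit of y = a*x**2 + b*x, i.e. a second-order
    polynomial forced through the origin (precondition of Sect. 2.3)."""
    A = np.column_stack([x**2, x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # (a, b)

def specific_sensitivity(a, b, dR_co2):
    """Eq. 17 for dR != 0 (regfunc/dR = a*dR + b) and Eq. 18 for dR = 0
    (the slope of regfunc at the origin, which is b)."""
    return b if dR_co2 == 0.0 else a * dR_co2 + b

# Synthetic stand-in data for (Delta R_[CO2], Delta T^eps_[-LI]):
rng = np.random.default_rng(0)
x = np.linspace(-2.5, 0.5, 200)
y = 0.05 * x**2 + 0.7 * x + rng.normal(0.0, 0.05, x.size)
a, b = fit_quadratic_through_origin(x, y)
print(specific_sensitivity(a, b, 0.0))   # PI value, here ~0.7 K W-1 m2
print(specific_sensitivity(a, b, -2.0))  # a colder-than-PI state
```

The outlier screening described in the next paragraph amounts to masking points near $\Delta R_{[CO_2]} = 0$ whose residual against a first fit exceeds $3 \times$ RMSE, and then refitting.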
Between $\Delta R_{[CO_2]} = -0.5$ W m$^{-2}$ and $\Delta R_{[CO_2]} = 0.5$ W m$^{-2}$, outliers resulted from division by small numbers (not shown in Fig. 3b). To remove these outliers, we first calculate the root mean square error (RMSE) between the fit and the data in the remainder of the domain. We then exclude the 144 values in the range $\Delta R_{[CO_2]} = -0.5$ W m$^{-2}$ to $\Delta R_{[CO_2]} = 0.5$ W m$^{-2}$ for which the fit differs from the data by more than $3 \times$ RMSE, and perform the regression again. This yields an $S^\varepsilon_{[CO_2, LI]}$ of 0.72 K W$^{-1}$ m$^2$ for PI (Fig. 3b), which supports our approach since it is only slightly lower than the $S^\varepsilon_{[CO_2, LI]}$ of 0.74 K W$^{-1}$ m$^2$ obtained from experiment OC, which it should approximate. The relationship between $\Delta T^\varepsilon_{[-LI]}$ and $\Delta R_{[CO_2]}$ (Fig. 3b) is more linear than that between $\Delta T_{[OC]}$ and $\Delta R_{[CO_2]}$ (Fig. 3a); hence the state dependency of $S^\varepsilon_{[CO_2, LI]}$ is reduced. However, the difference between the $S^\varepsilon_{[CO_2, LI]}$ obtained from both experiments remains smaller than 0.07 K W$^{-1}$ m$^2$ through the entire 5-Myr interval, indicating that a constant efficacy is an acceptable assumption, which only introduces a negligible additional uncertainty. However, the possible time-dependency of efficacy could be investigated more rigorously in future research using more sophisticated climate models. In principle, $\varepsilon_{[LI]}$ can be obtained using data from any moment in time, preferably when the radiative forcing anomalies are large, to prevent outliers resulting from divisions by small numbers. For example, using the results from all glacial marine isotope stages of the past 810 kyr (MIS 2, 6, 8, 10, 12, 14, 16, 18, and 20), instead of just the LGM, leads to a mean ($\pm 1\sigma$) $\varepsilon_{[LI]}$ of 0.56 ± 0.09. The resulting PI $S^\varepsilon_{[CO_2, LI]}$ is 0.73$^{+0.06}_{-0.05}$ K W$^{-1}$ m$^2$ (Fig. 3c). The old approach, which is equal to using $\varepsilon_{[LI]} = 1$ in the refined approach, yields a PI $S_{[CO_2, LI]}$ of 0.54 K W$^{-1}$ m$^2$ (Fig. 3d). This is clearly much more off-target than the results of our refined approach, signifying the importance of considering efficacy.

4 Application to proxy-inferred paleoclimate data

In this section, we compare our refined approach to calculate $S^\varepsilon_{[CO_2, LI]}$, incorporating efficacy, to our previous quantification of $S_{[CO_2, LI]}$ (Köhler et al., 2018), by reanalysing the same paleoclimate dataset (introduced in Köhler et al., 2015). Unlike for climate model simulations, the influence of land ice changes on global temperature perturbations cannot be directly obtained from proxy-based datasets, and is hence a priori unknown. We therefore base the value of $\varepsilon_{[LI]}$ we implement here on a multi-model assemblage of simulated relative influences of land ice changes on the Last Glacial Maximum (LGM) temperature anomaly (Shakun, 2017).

4.1 Proxy-inferred paleoclimate dataset

The investigated dataset contains reconstructions of $\Delta T_g$, $\Delta R_{[CO_2]}$, and $\Delta R_{[LI]}$. Although it covers the past 5 Myr, here we focus on the past 800 kyr (Fig. 1c,d) because over this period $\Delta R_{[CO_2]}$ is constrained by high-fidelity ice core CO$_2$ data, whereas Pliocene and Early Pleistocene CO$_2$ levels are still heavily debated (e.g. Badger et al., 2013; Martínez-Botí et al., 2015; Willeit et al., 2015; Stap et al., 2016, 2017; Chalk et al., 2017; Dyez et al., 2018).
Radiative forcing by CO$_2$ is obtained from Antarctic ice core data compiled by Bereiter et al. (2015), using $\Delta R_{[CO_2]} = 5.35$ Wm$^{-2} \cdot \ln(\text{CO}_2/(278 \text{ppm}))$ (Myhre et al., 1998). Revised formulations of $\Delta R_{[CO_2]}$ following Etminan et al. (2016) lead to very similar results with less than 0.01 Wm$^{-2}$ differences between the approaches for typical late Pleistocene CO$_2$ values (Köhler et al., 2017a). Radiative forcing caused by land ice albedo changes, as well as the global surface air temperature record ($\Delta T_g$), are based on results of the 3D ice-sheet model ANICE (de Boer et al., 2014). ANICE was forced by northern hemispheric temperature anomalies with respect to a reference PI climate, obtained from a benthic $\delta^{18}$O stack (Lisiecki and Raymo, 2005) using an inverse technique. This provided geographically specific land ice distributions, and hence radiative forcing due to albedo changes with respect to PI on both hemispheres. In Köhler et al. (2015), the northern hemispheric (NH) temperature anomalies \((\Delta T_{\text{NH}})\) are translated into global temperature perturbations \((\Delta T_{g1}\) in Köhler et al. (2015)) using polar amplification factors \((f_{\text{PA}} = \Delta T_{\text{NH}} / \Delta T_g)\) as follows: at the LGM, \(f_{\text{PA}} = 2.7\) is taken from the average of PMIP3 model data (Braconnot et al., 2012), while at the mid-Pliocene Warm Period (mPWP, about 3.2 Myr ago), \(f_{\text{PA}} = 1.6\) is calculated from the average of PlioMIP results (Haywood et al., 2013). At all other times, \(f_{\text{PA}}\) is linearly varied as a function of NH temperature. In Appendix A, we investigate the influence of the chosen polar amplification factor (Köhler et al., 2015) on our results. The temperature dynamics follow from a benthic \(\delta^{18}\text{O}\) stack and are unconstrained by climatic boundary conditions such as insolation and greenhouse gases, since ANICE only simulates land ice dynamics. Therefore, these results are here considered to be more similar to those of proxy-based reconstructions than of climate-model-based simulations. The temporal resolution of the dataset is 2,000 years. Analysing this dataset, Köhler et al. (2018) found a temperature-\(\text{CO}_2\) divergence appearing mainly during, or in connection with, periods of decreasing obliquity related to land ice growth or sea level fall. For these periods, a significantly different \(S_{[\text{CO}_2,\text{LI}]}\) was obtained than for the remainder of the time frame. However, in the future we expect sea level to rise, hence these intervals of strong temperature-\(\text{CO}_2\) divergence should not be considered for the interpretation of paleodata in the context of future warming, e.g. by using paleodata to constrain ECS. In the following analysis, we therefore exclude these times with strong temperature-\(\text{CO}_2\) divergence, leaving 217 data points as indicated in Fig. 1c,d. ### 4.2 Analysis Shakun (2017) compiled the simulated relative impact of land ice changes on the LGM temperature anomaly (\(\omega\) in Eq. 11) using a 12-member climate model ensemble, and found a range of \(0.46 \pm 0.14\) (mean \(\pm 1\sigma\), full range \(0.20 - 0.68\)). Applying these values, in combination with the LGM values (taken here as the mean of the data at 20 and 22 kyr ago) \(\Delta R_{[\text{CO}_2]} = -2.04 \text{ W m}^{-2}\) and \(\Delta R_{[\text{LI}]} = -3.88 \text{ W m}^{-2}\), yields \(\varepsilon_{[\text{LI}]} = 0.45^{+0.34}_{-0.20}\). 
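As a quick check on these numbers, inserting the LGM values into Eq. 11 for the mean and the $\pm 1\sigma$ bounds of $\omega$ gives (our arithmetic, rounded to two decimals):

$$\varepsilon_{[LI]} = \frac{0.46}{1-0.46} \cdot \frac{-2.04}{-3.88} \approx 0.45, \qquad \omega = 0.60 \Rightarrow \varepsilon_{[LI]} \approx 0.79, \qquad \omega = 0.32 \Rightarrow \varepsilon_{[LI]} \approx 0.25.$$

The asymmetry of the quoted uncertainty range follows directly from the non-linearity of the mapping $\omega \mapsto \omega/(1-\omega)$.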
Implementing this range for $\varepsilon_{[LI]}$ in Eq. 16, we calculate $\Delta T^\varepsilon_{[-LI]}$ over the whole 800-kyr period. Fitting second-order polynomials by least-squares regression to the scattered data of $\Delta T^\varepsilon_{[-LI]}$ and $\Delta R_{[CO_2]}$, we infer a PI $S^\varepsilon_{[CO_2,LI]}$ of $2.45^{+0.53}_{-0.56}$ K W$^{-1}$ m$^2$ (Fig. 4a). The substantial uncertainty given here only reflects the $1\sigma$ uncertainty in $\varepsilon_{[LI]}$. Similar to Köhler et al. (2018), we also detect a state dependency with decreasing $S^\varepsilon_{[CO_2,LI]}$ towards colder climates for this dataset, more strongly so for lower $\varepsilon_{[LI]}$. This state dependency is opposite to the one found in the CLIMBER-2 results. The difference may be related to the fact that fast climate feedbacks are too linear, or that some slow feedbacks are underestimated, in intermediate complexity climate models like CLIMBER-2 (see Köhler et al., 2018, for a detailed discussion). At $\Delta R_{[CO_2]} = -2.04$ W m$^{-2}$, the LGM value, $S^\varepsilon_{[CO_2,LI]}$ is only $1.45^{+0.33}_{-0.37}$ K W$^{-1}$ m$^2$. The old approach, which does not consider efficacy and is therefore equivalent to the new approach using $\varepsilon_{[LI]} = 1$, yields $S_{[CO_2,LI]} = 1.66$ K W$^{-1}$ m$^2$ for PI, and $S_{[CO_2,LI]} = 0.93$ K W$^{-1}$ m$^2$ for the LGM (Fig. 4b). The specific paleoclimate sensitivities we find using the refined approach are hence generally larger than those obtained by using the old approach. This is because, for the implemented range of the impact of land ice changes on the LGM temperature anomaly ($\omega = 0.46 \pm 0.14$), the efficacy factor $\varepsilon_{[LI]}$ is smaller than unity. In other words, these land ice changes contribute comparatively less per unit radiative forcing to the global temperature anomalies than the CO$_2$ changes. Our inferred PI $S^\varepsilon_{[CO_2, LI]}$ is equivalent to an $S^a$ of $1.6^{+0.3}_{-0.4}$ K W$^{-1}$ m$^2$, and an ECS of $5.8 \pm 1.3$ K per CO$_2$ doubling. This is on the high end of the results of other approaches to obtain ECS (Knutti et al., 2017), e.g. the 2.0 to 4.3 K 95%-confidence range from a large model ensemble (Goodwin et al., 2018), and the 2.2 to 3.4 K 66%-confidence range from an emergent constraint based on global temperature variability and CMIP5 (Cox et al., 2018). Hence, the low end of our ECS estimate is in the best agreement with these other estimates. This could mean that the relative influence of land ice changes on the LGM temperature anomaly is on the high side of, or possibly higher than, the $0.46 \pm 0.14$ range we consider here. Alternatively, the factor of 0.64 we use to convert $S^\varepsilon_{[CO_2, LI]}$ to $S^a$ is an overestimation, which could be caused by a larger-than-unity efficacy of long-term processes besides CO$_2$ and land ice changes.

5 Conclusions

We have incorporated the concept of a constant efficacy factor (Hansen et al., 2005), which interrelates the global temperature responses to radiative forcing caused by land ice changes and CO$_2$ changes, into our framework of calculating the specific paleoclimate sensitivity $S^\varepsilon_{[CO_2, LI]}$.
The aim of this effort has been to overcome the problem that land ice and CO$_2$ changes can lead to significantly different global temperature responses, even when they induce the same global-average radiative forcing. Firstly, we have shown the importance of considering efficacy differences by applying our new approach to results of 5-Myr CLIMBER-2 simulations (Stap et al., 2018), where the separate effects of land ice changes and CO$_2$ changes can be isolated. In the results of these simulations, the error from assuming the efficacy factor to be constant in time is negligible. Thereafter, we have used our new approach to reanalyse an 800-kyr proxy-inferred paleoclimate dataset (Köhler et al., 2015). We have inferred a range in the land ice change efficacy factor $\varepsilon_{[LI]}$ from the $0.46 \pm 0.14$ (mean $\pm 1\sigma$) relative impact of land ice changes on the LGM temperature anomaly simulated by a 12-member climate model ensemble (Shakun, 2017). The efficacy factor $\varepsilon_{[LI]}$ obtained in this way is smaller than unity, implying that the impact on global temperature per unit of radiative forcing is less strong for land ice changes than for CO$_2$ changes. Consequently, our derived PI $S^\varepsilon_{[CO_2, LI]}$ of $2.45^{+0.53}_{-0.56}$ K W$^{-1}$ m$^2$ is $\sim 50\%$ larger than the result of the old approach. The uncertainty in this estimate is only caused by the implemented range in $\varepsilon_{[LI]}$. The equivalent $S^a$ and ECS corresponding to this $S^\varepsilon_{[CO_2, LI]}$ are $1.6^{+0.3}_{-0.4}$ K W$^{-1}$ m$^2$ and $5.8 \pm 1.3$ K per CO$_2$ doubling, respectively.

Data availability. The CLIMBER-2 dataset is available at https://doi.pangaea.de/10.1594/PANGAEA.887427, and the proxy-inferred paleoclimate dataset is available at https://doi.pangaea.de/10.1594/PANGAEA.855449, from the PANGAEA database. For more information or data, please contact the authors.

Appendix A: Influence of the polar amplification factor

In the analysis performed in Sect. 4.2, we have used a global temperature record that was obtained from northern high-latitude temperature anomalies using a polar amplification factor $f_{PA}$ that varies from 2.7 at the coldest to 1.6 at the warmest conditions (Sect. 4.1). However, recent climate model simulations of the Pliocene using updated paleogeographic boundary conditions show that in warmer times polar amplification could have been nearly the same as in colder times (Kamae et al., 2016; Chandan and Peltier, 2017). We therefore repeat the analysis using the same range in $\varepsilon_{[LI]}$ and the same dataset, but with a constant $f_{PA} = 2.7$ applied over the entire past 800 kyr to generate $\Delta T_g$ ($\Delta T_{g2}$ in Köhler et al. (2015)). The constant polar amplification used here counteracts the increasing state dependency towards low temperatures, as the temperature differences are no longer amplified by a changing polar amplification. Hence, $S^{\varepsilon}_{[CO_2, LI]}$ is smaller at PI, $1.96^{+0.42}_{-0.44}$ K W$^{-1}$ m$^2$ compared to $2.45^{+0.53}_{-0.56}$ K W$^{-1}$ m$^2$ using the variable $f_{PA}$, but diminishes less strongly towards colder conditions (Fig. A1a cf. Fig. 4a). As before, the old approach (equivalent to the new approach using $\varepsilon_{[LI]} = 1$) yields a lower PI $S_{[CO_2, LI]}$ of 1.34 K W$^{-1}$ m$^2$ (Fig. A1b).
The PI $S^{\varepsilon}_{[CO_2, LI]}$ inferred here using the refined approach corresponds to an $S^a$ of $1.3^{+0.2}_{-0.3}$ K W$^{-1}$ m$^2$, and an ECS of $4.6^{+1.0}_{-1.3}$ K per CO$_2$ doubling.

**Author contributions.** L.B.S. designed the research. L.B.S. and P.K. performed the analysis. L.B.S. drafted the paper, with input from all co-authors.

**Competing interests.** The authors declare that they have no conflict of interest.

**Acknowledgements.** This work is institutionally funded at AWI via the research program PACES-II of the Helmholtz Association. We further thank Roderik van de Wal for commenting on an earlier draft of the manuscript, and two anonymous referees for their constructive comments, which have helped to improve the quality of the manuscript.

References

Badger, M. P. S., Lear, C. H., Pancost, R. D., Foster, G. L., Bailey, T. R., Leng, M. J., and Abels, H. A.: CO$_2$ drawdown following the middle Miocene expansion of the Antarctic Ice Sheet, Paleoceanography, 28, 42–53, 2013.

Bereiter, B., Eggleston, S., Schmitt, J., Nehrbass-Ahles, C., Stocker, T. F., Fischer, H., Kipfstuhl, S., and Chappellaz, J.: Revision of the EPICA Dome C CO$_2$ record from 800 to 600 kyr before present, Geophysical Research Letters, 42, 542–549, 2015.

Braconnot, P., Harrison, S. P., Kageyama, M., Bartlein, P. J., Masson-Delmotte, V., Abe-Ouchi, A., Otto-Bliesner, B., and Zhao, Y.: Evaluation of climate models using palaeoclimatic data, Nature Climate Change, 2, 417–424, 2012.

Brovkin, V., Ganopolski, A., and Svirezhev, Y.: A continuous climate-vegetation classification for use in climate-biosphere studies, Ecological Modelling, 101, 251–261, 1997.

Chalk, T. B., Hain, M. P., Foster, G. L., Rohling, E. J., Sexton, P. F., Badger, M. P. S., Cherry, S. G., Hasenfratz, A. P., Haug, G. H., Jaccard, S. L., et al.: Causes of ice age intensification across the Mid-Pleistocene Transition, Proceedings of the National Academy of Sciences, 114, 13,114–13,119, 2017.

Chandan, D. and Peltier, W. R.: Regional and global climate for the mid-Pliocene using the University of Toronto version of CCSM4 and PlioMIP2 boundary conditions, Climate of the Past, 13, 919, 2017.

Charney, J. G., Arakawa, A., Baker, D. J., Bolin, B., Dickinson, R. E., Goody, R. M., Leith, C. E., Stommel, H. M., and Wunsch, C. I.: Carbon dioxide and climate: a scientific assessment, National Academy of Sciences, Washington, DC, 1979.

Covey, C., Sloan, L. C., and Hoffert, M. I.: Paleoclimate data constraints on climate sensitivity: the paleocalibration method, Climatic Change, 32, 165–184, 1996.

Cox, P. M., Huntingford, C., and Williamson, M. S.: Emergent constraint on equilibrium climate sensitivity from global temperature variability, Nature, 553, 319, 2018.

de Boer, B., van de Wal, R. S. W., Lourens, L. J., Bintanja, R., and Reerink, T. J.: A continuous simulation of global ice volume over the past 1 million years with 3-D ice-sheet models, Climate Dynamics, 41, 1365–1384, 2013.

de Boer, B., Lourens, L. J., and van de Wal, R. S. W.: Persistent 400,000-year variability of Antarctic ice volume and the carbon cycle is revealed throughout the Plio-Pleistocene, Nature Communications, 5, 2014.

Dyez, K. A., Hönisch, B., and Schmidt, G. A.: Early Pleistocene obliquity-scale pCO$_2$ variability at ~1.5 million years ago, Paleoceanography and Paleoclimatology, 33, 1270–1291, 2018.

Edwards, T. L., Crucifix, M., and Harrison, S.
P.: Using the past to constrain the future: how the palaeorecord can improve estimates of global warming, Progress in Physical Geography, 31, 481–500, 2007.

Etminan, M., Myhre, G., Highwood, E. J., and Shine, K. P.: Radiative forcing of carbon dioxide, methane, and nitrous oxide: A significant revision of the methane radiative forcing, Geophysical Research Letters, 43, 2016.

Friedrich, T., Timmermann, A., Tigchelaar, M., Timm, O. E., and Ganopolski, A.: Non-linear climate sensitivity and its implications for future greenhouse warming, Science Advances, 2, e1501923, 2016.

Ganopolski, A., Petoukhov, V., Rahmstorf, S., Brovkin, V., Claussen, M., Eliseev, A., and Kubatzki, C.: CLIMBER-2: a climate system model of intermediate complexity. Part II: model sensitivity, Climate Dynamics, 17, 735–751, 2001.

Goodwin, P., Katavouta, A., Roussenov, V. M., Foster, G. L., Rohling, E. J., and Williams, R. G.: Pathways to 1.5°C and 2°C warming based on observational and geological constraints, Nature Geoscience, 11, 102–107, 2018.

Hansen, J., Sato, M., Ruedy, R., Nazarenko, L., Lacis, A., Schmidt, G. A., Russell, G., Aleinov, I., Bauer, M., Bauer, S., et al.: Efficacy of climate forcings, Journal of Geophysical Research: Atmospheres, 110, 2005.

Haywood, A. M., Hill, D. J., Dolan, A. M., Otto-Bliesner, B. L., Bragg, F., Chan, W.-L., Chandler, M. A., Contoux, C., Dowsett, H. J., Jost, A., et al.: Large-scale features of Pliocene climate: results from the Pliocene Model Intercomparison Project, Climate of the Past, 9, 191–209, 2013.

Kamae, Y., Yoshida, K., and Ueda, H.: Sensitivity of Pliocene climate simulations in MRI-CGCM2.3 to respective boundary conditions, Climate of the Past, 12, 1619–1634, 2016.

Knutti, R., Rugenstein, M. A. A., and Hegerl, G. C.: Beyond equilibrium climate sensitivity, Nature Geoscience, 10, 727, 2017.

Köhler, P., Bintanja, R., Fischer, H., Joos, F., Knutti, R., Lohmann, G., and Masson-Delmotte, V.: What caused Earth’s temperature variations during the last 800,000 years? Data-based evidence on radiative forcing and constraints on climate sensitivity, Quaternary Science Reviews, 29, 129–145, https://doi.org/10.1016/j.quascirev.2009.09.026, 2010.

Köhler, P., de Boer, B., von der Heydt, A. S., Stap, L. B., and van de Wal, R. S. W.: On the state-dependency of the equilibrium climate sensitivity during the last 5 million years, Climate of the Past, 11, 1801–1823, 2015.

Köhler, P., Nehrbass-Ahles, C., Schmitt, J., Stocker, T. F., and Fischer, H.: A 156 kyr smoothed history of the atmospheric greenhouse gases CO₂, CH₄, and N₂O and their radiative forcing, Earth System Science Data, 9, 363–387, https://doi.org/10.5194/essd-9-363-2017, 2017a.

Köhler, P., Stap, L. B., von der Heydt, A. S., de Boer, B., van de Wal, R. S. W., and Bloch-Johnson, J.: A state-dependent quantification of climate sensitivity based on paleo data of the last 2.1 million years, Paleoceanography, 32, 1102–1114, https://doi.org/10.1002/2017PA003190, 2017b.

Köhler, P., Knorr, G., Stap, L. B., Ganopolski, A., de Boer, B., van de Wal, R. S. W., Barker, S., and Rüpke, L. H.: The effect of obliquity-driven changes on paleoclimate sensitivity during the late Pleistocene, Geophysical Research Letters, 45, 6661–6671, 2018.

Laskar, J., Robutel, P., Joutel, F., Gastineau, M., Correia, A. C. M., Levrard, B., et al.: A long-term numerical solution for the insolation quantities of the Earth, Astronomy & Astrophysics, 428, 261–285, 2004.

Lisiecki, L. E. and Raymo, M.
E.: A Pliocene-Pleistocene stack of 57 globally distributed benthic δ¹⁸O records, Paleoceanography, 20, 2005.

Martínez-Botí, M. A., Foster, G. L., Chalk, T. B., Rohling, E. J., Sexton, P. F., Lunt, D. J., Pancost, R. D., Badger, M. P. S., and Schmidt, D. N.: Plio-Pleistocene climate sensitivity evaluated using high-resolution CO₂ records, Nature, 518, 49–54, 2015.

Myhre, G., Highwood, E. J., Shine, K. P., and Stordal, F.: New estimates of radiative forcing due to well mixed greenhouse gases, Geophysical Research Letters, 25, 2715–2718, 1998.

PALAEOSENS Project Members: Making sense of palaeoclimate sensitivity, Nature, 491, 683–691, 2012.

Petoukhov, V., Ganopolski, A., Brovkin, V., Claussen, M., Eliseev, A., Kubatzki, C., and Rahmstorf, S.: CLIMBER-2: a climate system model of intermediate complexity. Part I: model description and performance for present climate, Climate Dynamics, 16, 1–17, 2000.

Shakun, J. D.: Modest global-scale cooling despite extensive early Pleistocene ice sheets, Quaternary Science Reviews, 165, 25–30, 2017.

Stap, L. B., de Boer, B., Ziegler, M., Bintanja, R., Lourens, L. J., and van de Wal, R. S. W.: CO₂ over the past 5 million years: Continuous simulation and new δ¹¹B-based proxy data, Earth and Planetary Science Letters, 439, 1–10, 2016.

Stap, L. B., van de Wal, R. S. W., de Boer, B., Bintanja, R., and Lourens, L. J.: The influence of ice sheets on temperature during the past 38 million years inferred from a one-dimensional ice sheet–climate model, Climate of the Past, 13, 1243–1257, 2017.

Stap, L. B., van de Wal, R. S. W., de Boer, B., Köhler, P., Hoencamp, J. H., Lohmann, G., Tuenter, E., and Lourens, L. J.: Modeled influence of land ice and CO₂ on polar amplification and paleoclimate sensitivity during the past 5 million years, Paleoceanography and Paleoclimatology, 33, 381–394, 2018.

Stocker, T. F., Mysak, L. A., and Wright, D. G.: A zonally averaged, coupled ocean-atmosphere model for paleoclimate studies, Journal of Climate, 5, 773–797, 1992.

Stuber, N., Ponater, M., and Sausen, R.: Why radiative forcing might fail as a predictor of climate change, Climate Dynamics, 24, 497–510, 2005.

van de Wal, R. S. W., de Boer, B., Lourens, L. J., Köhler, P., and Bintanja, R.: Reconstruction of a continuous high-resolution CO$_2$ record over the past 20 million years, Climate of the Past, 7, 1459–1469, https://doi.org/10.5194/cp-7-1459-2011, 2011.

Vial, J., Dufresne, J.-L., and Bony, S.: On the interpretation of inter-model spread in CMIP5 climate sensitivity estimates, Climate Dynamics, 41, 3339–3362, 2013.

von der Heydt, A. S., Köhler, P., van de Wal, R. S. W., and Dijkstra, H. A.: On the state dependency of fast feedback processes in (paleo) climate sensitivity, Geophysical Research Letters, 41, 6484–6492, 2014.

von der Heydt, A. S., Dijkstra, H. A., van de Wal, R. S. W., Caballero, R., Crucifix, M., Foster, G. L., Huber, M., Köhler, P., Rohling, E., Valdes, P. J., et al.: Lessons on climate sensitivity from past climate changes, Current Climate Change Reports, 2, 148–158, 2016.

Willeit, M., Ganopolski, A., Calov, R., Robinson, A., and Maslin, M.: The role of CO$_2$ decline for the onset of Northern Hemisphere glaciation, Quaternary Science Reviews, 119, 22–34, 2015.

Yoshimori, M., Hargreaves, J. C., Annan, J. D., Yokohata, T., and Abe-Ouchi, A.: Dependency of feedbacks on forcing and climate state in physics parameter ensembles, Journal of Climate, 24, 6440–6455, 2011.

Figure 1.
Time series of radiative forcing anomalies ($\Delta R$) caused by CO$_2$ changes (red) and land ice changes (blue), and global temperature anomalies ($\Delta T_g$) with respect to PI, from **a-b)** the CLIMBER-2 model dataset (Stap et al., 2018), with temperature data for experiment OIC in black and for experiment OC in green, and from **c-d)** the proxy-inferred dataset (Köhler et al., 2015), with solid lines for the whole dataset, and dots for the data used in this study, which exclude times with strong temperature-CO$_2$ divergence (see Sect. 4.1). Note the differing axis scales. Figure 2. The relation between radiative forcing anomalies caused by CO$_2$ changes ($\Delta R_{[CO_2]}$) and land ice changes ($\Delta R_{[LI]}$) from the proxy-inferred dataset (Köhler et al., 2015) (pink dots). The red line represents a second-order polynomial least-squares regression through the scattered data. Figure 3. Temperature anomalies with respect to PI over the last 5 Myr from CLIMBER-2 (Stap et al., 2018) against imposed radiative forcing of CO$_2$. **a)** Simulation with fixed PI land ice distribution (experiment OC) ($\Delta T_{[OC]}$). **b)** Calculated global temperature perturbations from experiment OIC stripped of the inferred influence of land ice ($\Delta T^{\varepsilon}_{[-LI]}$) using Eq. 16 with $\varepsilon_{[LI]} = 0.58$. Here, $\varepsilon_{[LI]}$ is obtained from matching climate sensitivity with the target value at the LGM. **c)** Same as in (b), but using $\varepsilon_{[LI]} = 0.47$ (cyan dots), $\varepsilon_{[LI]} = 0.56$ (pink dots), and $\varepsilon_{[LI]} = 0.65$ (yellow dots). Here, $\varepsilon_{[LI]}$ is obtained from the mean ($\pm 1\sigma$) of matching climate sensitivity with the target value at all glacial marine isotope stages of the past 810 kyr (MIS 2, 6, 8, 10, 12, 14, 16, 18, and 20). **d)** Same as in (b), but using $\varepsilon_{[LI]} = 1$, which is equivalent to the old approach where efficacy differences were not considered. The red lines (and in (c) also the orange and blue lines) represent second-order polynomial least-squares regressions through the scattered data. Figure 4. The global temperature perturbations stripped of the inferred influence of land ice ($\Delta T^{\varepsilon}_{[-LI]}$) calculated using Eq. 16 against $\Delta R_{[CO_2]}$ from the proxy-inferred paleoclimate dataset (Köhler et al., 2015), using: **a)** $\varepsilon_{[LI]} = 0.79$ (maroon dots), $\varepsilon_{[LI]} = 0.45$ (cyan dots), and $\varepsilon_{[LI]} = 0.25$ (green dots). Here, $\varepsilon_{[LI]}$ is obtained by converting the multi-model assemblage of simulated relative influences of land ice changes on the LGM temperature anomaly (0.46±0.14) (Shakun, 2017). **b)** Same as in (a), but using $\varepsilon_{[LI]} = 1$ (grey dots), which is equivalent to the old approach. The brown, blue, dark green (**a**), and black lines (**b**) represent second-order polynomial least-squares regressions through the data. Figure A1. The global temperature perturbations stripped of the inferred influence of land ice ($\Delta T^{\varepsilon}_{[-LI]}$) calculated using Eq. 16 against $\Delta R_{[CO_2]}$ from the proxy-inferred paleoclimate dataset (Köhler et al., 2015), using: **a)** $\varepsilon_{[LI]} = 0.79$ (maroon dots), $\varepsilon_{[LI]} = 0.45$ (cyan dots), and $\varepsilon_{[LI]} = 0.25$ (green dots). Here, $\varepsilon_{[LI]}$ is obtained from converting the multi-model assemblage of simulated relative influences of land ice changes on the LGM temperature anomaly (0.46±0.14) (Shakun, 2017).
**b)** Same as in (a), but using $\varepsilon_{[LI]} = 1$ (grey dots), which is equivalent to the old approach. The brown, blue, dark green (**a**), and black lines (**b**) represent second-order polynomial least-squares regressions through the data. Here, the global temperature anomalies are derived from the northern high-latitude temperature anomaly reconstruction assuming a constant polar amplification factor ($f_{PA}$) of 2.7, as opposed to the variable $f_{PA}$ used in Fig. 4.
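As a concrete illustration of the analysis these captions describe, the sketch below fits a second-order polynomial least-squares regression through a $\Delta T$-versus-$\Delta R_{[CO_2]}$ scatter and evaluates its slope, a state-dependent sensitivity parameter; it also applies the constant polar amplification factor of Fig. A1. The arrays are synthetic stand-ins for the proxy-inferred series (the study itself uses the CLIMBER-2 and Köhler et al. (2015) data), and Python/numpy is an implementation choice of this sketch, not something the paper specifies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the proxy-inferred series; values are illustrative only.
dR_co2 = rng.uniform(-3.0, 1.5, 800)   # Delta R_[CO2] anomaly (W m^-2)
dT_nh = 2.7 * (1.1 * dR_co2 + 0.1 * dR_co2**2) + rng.normal(0.0, 0.8, 800)

# A constant polar amplification factor (f_PA = 2.7, as in Fig. A1) converts
# the northern high-latitude anomaly into a global temperature anomaly.
dT_global = dT_nh / 2.7                # Delta T_g (K)

# Second-order polynomial least-squares regression through the scatter,
# the regression lines drawn in Figs. 2-4.
fit = np.poly1d(np.polyfit(dR_co2, dT_global, deg=2))

# The derivative dT/dR of the fitted curve gives the sensitivity parameter
# at any chosen forcing level, e.g. a glacial state at -2 W m^-2.
print("fitted coefficients (a2, a1, a0):", fit.coefficients)
print("slope at dR = -2 W m^-2:", fit.deriv()(-2.0), "K (W m^-2)^-1")
```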
The Tails of Two Myosins Laura M. Machesky School of Biosciences, Division of Molecular and Cell Biology, University of Birmingham, Birmingham B15 2TT United Kingdom Two papers in this issue of *The Journal of Cell Biology* uncover a possible new connection between the actin-nucleating complex of proteins, the Arp2/3 complex, and the type I myosin motors. Lechler et al. (2000) and Evangelista et al. (2000) show a direct interaction of the *S. cerevisiae* myosin I motors (Myo3p and Myo5p) with the Arp2/3 complex through an acidic COOH-terminal sequence motif. The data suggest that a large complex containing both myosin motors and actin nucleating proteins may be a functional unit for signal-induced actin assembly. Furthermore, both studies provide evidence that myosin I function is essential for assembly and maintenance of filamentous actin structures in cells. This is exciting and raises some controversy, given the recent discovery that intracellular pathogens such as *Shigella flexneri* and *Listeria monocytogenes* do not use myosin motors to achieve actin-based motility (Loisel et al., 1999). Clearly, future research will be devoted to resolving the role of myosin I motor activity in actin-based motility in eukaryotic cells, as this may constitute a mechanistic difference between the *Listeria* and *Shigella* model systems and eukaryotic cell motility. The Arp2/3 complex nucleates new actin filament assembly, most likely in response to signals such as the activation of receptor tyrosine kinases or receptors coupled to small GTPases of the Rho family (e.g., Rho, Rac, and Cdc42; Machesky and Insall, 1998; Svitkina and Borisy, 1999). It is named Arp2/3 because in addition to five unique polypeptides, it contains the actin-related proteins Arp2 and Arp3. In vitro, the Arp2/3 complex cross-links actin filaments, caps the slow-growing (pointed) end of filaments (Mullins et al., 1998), and nucleates actin assembly (Mullins et al., 1998; Welch et al., 1998). This activity can be greatly stimulated by direct interaction with proteins of the WASP family (Machesky and Insall, 1999; Svitkina and Borisy, 1999). WASP family proteins are named for Wiskott-Aldrich syndrome, a fatal immune disease in humans that results from mutations in the gene encoding WASP (Thrasher et al., 1998). In *S. cerevisiae*, the single WASP family protein is called Las17p or Bee1p (Fig. 1). While the details of how the WASP family proteins activate the Arp2/3 complex are not yet known, a conserved acidic tail sequence in all WASP family members binds directly to the complex (Machesky et al., 1998; Winter et al., 1999). This sequence motif is highly homologous to the Myo3p and Myo5p acidic tail sequences (Fig. 1), suggesting that these myosins may connect to the Arp2/3 complex in a similar fashion to WASP family proteins. Myosin I motors have been implicated in transport of membrane vesicles in endocytosis and also in polarized cell growth and motility (Coluccio, 1997; Raposo et al., 1999). Myosin I proteins can bind to membrane phospholipids via a conserved stretch of basic sequence (Fig. 1) and to actin filaments via the motor region (Fig. 1), providing a potential link between the actin cytoskeleton and membrane vesicles or the plasma membrane (Adams and Pollard, 1989). They also contain a conserved SH3 domain (Fig. 1), which, in *Acanthamoeba*, binds to a protein called Acan125, which has homologues in *Dictyostelium* and mammals (Xu et al., 1995, 1997).
Both Lechler's and Evangelista's studies emphasize that there is functional redundancy between the acidic Arp2/3 complex binding sites of Myo3p, Myo5p, and Bee1p. This is particularly interesting, as it suggests that multiple Arp2/3 activating motifs may exist in all cells, providing a backup or alternative system for regulating actin assembly. While it is easier to picture systems that work in a linear fashion, cells more frequently seem to use multiply redundant or circular systems, in which large complexes can form among proteins with multiple binding sites and many partners. Focal adhesion complexes of mammalian cells provide one example of this. This may provide flexibility, such as the ability to build large and small assemblies according to the task at hand; as with the myosin I motors, it may also allow several relatively weak interactions to add up into a fairly stable but dynamic assembly. The functional redundancy also suggests that Myo3p and Myo5p can somehow promote activation of the Arp2/3 complex in a way similar to WASP family proteins. This is surprising, given the lack of a WH2 motif in Myo3p or Myo5p (Fig. 1). The WH2 motif appears to be required for WASP family proteins to activate nucleation by the Arp2/3 complex (Machesky et al., 1999). Perhaps the interaction of Myo3p and Myo5p with verprolin, which lacks an Arp2/3-binding site but has a WH2 motif (Fig. 1), supports the stimulation of actin nucleation. Mechanisms of activation of the Arp2/3 complex and the importance of the WH2 motif require further study. Lechler et al. demonstrated a requirement for myosin I motor activity in actin assembly in permeabilized cells. This comes as a surprise, given that in a reconstituted system using *Listeria monocytogenes*, actin-based motility was supported by purified cofilin, Arp2/3 complex, and capping protein in the absence of any myosin motor activity (Loisel et al., 1999). However, intracellular pathogens may not be perfect models for actin-based motility. The bacterial surface lacks actin-membrane interfaces, while eukaryotic cells primarily seem to polymerize actin in association with lipid membrane surfaces. Alternatively, the constitutive actin-based motility of *Listeria* may mimic a different intracellular pathway than the Cdc42p-induced actin polymerization studied by Lechler et al. and Evangelista et al. Many different signals can trigger actin assembly in cells, so there could be some pathways that use myosins and others that do not. Myosin I motors work in clusters, giving them additive strength and perhaps processivity (Ostap and Pollard, 1996). The two studies featured here suggest that in addition to membrane binding, myosin I may be clustered via an interaction with WASP family proteins. Bee1p has a binding partner called verprolin or Vrp1p. Verprolin has sequence similarity to Bee1p in the proline-rich region and in the actin-binding WH2 sequence (Fig. 1). Verprolin also has an apparent mammalian counterpart, WIP, which binds to WASP (Fig. 1). The proline-rich sequences of both verprolin and Bee1p interact with the myosin I SH3 domain, creating several potential myosin I binding sites on the verprolin/Bee1p complex. This could, in turn, create several potential Arp2/3 complex binding sites. Altogether, a complex could form which contains from 2–21 myosin Is, 3–22 Arp2/3 complexes, and 2 actin monomers per Bee1p/verprolin complex.
Of course, steric hindrance may prevent such large complexes from forming, so we await further characterization of the actual stoichiometry. While it is attractive to speculate that actin filament assembly could involve clusters of membrane-bound myosin I and WASP family proteins, we do not yet have enough information to form a complete model.

**Figure 1.** Schematics of the sequences of Myo3p, Myo5p, WIP, verprolin, Bee1p/Las17p, and WASP. Molecules and sequence domains/motifs are drawn roughly to scale. Myo3p and Myo5p contain basic sequences (purple) that interact with negatively charged phospholipids, an SH3 domain (yellow), and an acidic tail sequence (green) that binds to the Arp2/3 complex. WIP and verprolin contain a WH2 motif (WASP homology 2, which binds to monomeric actin; red) and are mostly proline-rich (yellow). Bee1p/Las17p and WASP contain WH1 (WASP homology 1, white), proline-rich sequences (yellow), WH2 motifs (red), and acidic tail sequences that bind to the Arp2/3 complex (green). WASP also contains a Cdc42-binding motif (blue) while Bee1p/Las17p does not.

**Figure 2.** Myosin I may transport Arp2/3 complex to sites of actin polymerization. (1 and 2) Arp2/3 complex (blue) dissociates from an older branchpoint, allowing cofilin (yellow circles) to accelerate the disassembly of older filaments. Cofilin binds to the sides of actin filaments (red lines) and to actin monomers (red circles). (3) Myosin I clusters could then bind to the free Arp2/3 complex via the myosin I acidic tail sequence. (4) Once attached to a filament, the myosin I cluster could transport Arp2/3 complex back to the plasma membrane. (5) When the myosin I cluster arrives at the leading edge of the cell, it could dock via its SH3 domains contacting WASP family protein proline-rich sequences. This could provide activated Arp2/3 complex in zones of nucleation of new actin. Myosin I clusters could then hand the Arp2/3 complex over to a WASP family protein and dissociate, or remain bound in a large complex (see Discussion in Lechler et al. and Evangelista et al.). Although there are several potential binding sites on WASP family proteins and verprolin for myosin I, and myosin I must work in clusters to be processive, only one myosin I per Arp2/3 complex has been drawn here for simplicity.

Evangelista et al. and Lechler et al. both speculate that actin filament elongation may be regulated by myosin I in a similar way to microtubule motors, which grasp the ends of microtubules and allow or facilitate addition of subunits at the plus end. The myosin I clusters could stay associated with the plasma membrane and with actin filaments and push out the membrane, allowing the filaments to elongate. The biggest conceptual problem with this model is the lack of processivity of myosin I motors, due to their weak binding to actin filaments (Ostap and Pollard, 1996). Myosin I spends most of its time dissociated from the actin filament, unlike kinesin motors, which spend more time associated. Given the kinetics of association and dissociation of myosin I, Ostap and Pollard (1996) predicted that clusters of >20 myosin I molecules would be needed for processive motility. The reason for an apparent requirement for myosin I motor activity in actin assembly in the permeabilized-cell system of Lechler et al. thus remains a bit of a mystery.
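To see why weak binding forces such large clusters, it helps to make the arithmetic behind that estimate explicit. If a single motor spends only a fraction r of its ATPase cycle strongly bound to actin (its duty ratio), the probability that at least one motor in a cluster of n is bound at any instant is 1 − (1 − r)^n. The minimal sketch below solves this for the cluster size; the duty ratio and the 95% engagement threshold are illustrative assumptions of this sketch, not values taken from Ostap and Pollard (1996).

```python
import math

# Back-of-envelope cluster-size estimate for processive myosin I movement.
# Both numbers are illustrative assumptions, not measured values:
r = 0.15         # duty ratio: fraction of the cycle one motor spends bound
p_target = 0.95  # required chance that the cluster is engaged at any instant

# Smallest integer n with 1 - (1 - r)**n >= p_target:
n_min = math.ceil(math.log(1.0 - p_target) / math.log(1.0 - r))
print(f"minimum cluster size: {n_min} motors")  # 19 here, of the order of the predicted >20
```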
Another model suggests that myosin I could transport the nucleation machinery to the barbed (fast-growing) ends of filaments. This could include transport back to the plasma membrane of Arp2/3 complexes that dissociate from the actin filament network following depolymerization (Fig. 2). This model also has the processivity problem described above. It would also require either that clusters of myosin I travel in a branched network of actin filaments, or that some long, relatively unbranched filaments also exist in lamellipodial zones. It is not obvious how the WASP family proteins are important in this model, unless they also require transport or unless the myosin I uses its SH3 domain to dock on a WASP family protein when it reaches the plasma membrane. Clearly, there are several interesting possibilities, which future studies will no doubt resolve. Given the new information raised by these two studies, many questions arise. How does the mammalian system, where no known myosin I protein contains an acidic tail, compare to the *S. cerevisiae* system? Does the myosin I SH3 domain connect mammalian myosin Is to the Arp2/3 complex via a WASP family protein or WIP? How many redundant Arp2/3 complex binding sequences exist in eukaryotic cells? The next step may be to look for the proposed clusters of myosin I proteins to test whether these tentative models have a solid physiological grounding. Zot et al. (1992) showed that myosin I motors could move actin filaments along lipid substrates in vitro, so it should be possible to test whether myosin I can transport Arp2/3 complex along actin filaments. Submitted: 29 December 1999 Accepted: 3 January 2000 **References** Adams, R.J., and T.D. Pollard. 1989. Membrane-bound myosin-I provides new mechanisms in cell motility. *Cell Motil. Cytoskeleton*. 14:178–182. Coluccio, L.M. 1997. Myosin I. *Am. J. Physiol.* 273:C347–C359. Evangelista, M., B.M. Klebl, A.H.Y. Tong, B.A. Webb, T. Leeuw, E. Leberer, M. Whiteway, D.Y. Thomas, and C. Boone. 2000. A role for myosin I in actin assembly through interactions with Vrp1p, Bee1p, and the Arp2/3 complex. *J. Cell Biol.* 148:353–362. Lechler, T., A. Shevchenko, A. Shevchenko, and R. Li. 2000. Direct involvement of yeast type I myosins in Cdc42-dependent actin polymerization. *J. Cell Biol.* 148:363–373. Loisel, T.P., R. Boujemaa, D. Pantaloni, and M.F. Carlier. 1999. Reconstitution of actin-based motility of *Listeria* and *Shigella* using pure proteins. *Nature*. 401:613–616. Machesky, L.M., and R.H. Insall. 1998. Scar1 and the related Wiskott-Aldrich syndrome protein, WASP, regulate the actin cytoskeleton through the Arp2/3 complex. *Curr. Biol.* 8:1347–1356. Machesky, L.M., and R.H. Insall. 1999. Signaling to actin dynamics. *J. Cell Biol.* 146:267–276. Machesky, L.M., R.D. Mullins, H.N. Higgs, D.A. Kaiser, L. Blanchoin, R.C. May, M.E. Hall, and T.D. Pollard. 1999. Scar, a WASp-related protein, activates nucleation of actin filaments by the Arp2/3 complex. *Proc. Natl. Acad. Sci. USA*. 96:3739–3744. Mullins, R.D., J.A. Heuser, and T.D. Pollard. 1998. The interaction of Arp2/3 complex with actin: nucleation, high affinity pointed end capping, and formation of branching networks of filaments. *Proc. Natl. Acad. Sci. USA*. 95:6181–6186. Ostap, E.M., and T.D. Pollard. 1996. Biochemical kinetic characterization of the Acanthamoeba myosin I ATPase. *J. Cell Biol.* 132:1053–1060. Raposo, G., M.N. Cordonnier, D. Tenza, B. Menchi, A. Durrbach, D. Louvard, and E. Coudrier. 1999. Association of myosin I alpha with endosomes and lysosomes in mouse liver. *Mol. Biol. Cell*. 10:1477–1494. Svitkina, T.M., and G.G. Borisy. 1999.
Progress in protrusion: the tell-tale scar. *Trends Biochem. Sci.* 24:432–436. Thrasher, A.J., G.E. Jones, C. Kinnon, P.M. Brickell, and D.R. Katz. 1998. Is Wiskott–Aldrich syndrome a cell trafficking disorder? *Immunol. Today*. 19:537–539. Welch, M.D., J. Rosenblatt, J. Skoble, D.A. Portnoy, and T.J. Mitchison. 1998. Interaction of Arp2/3 complex and the *Listeria monocytogenes* ActA protein in actin filament nucleation. *Science*. 281:105–108. Winter, D., T. Lechler, and R. Li. 1999. Activation of the yeast Arp2/3 complex by Bee1p, a WASP family protein. *Curr. Biol.* 9:501–504. Xu, P., K.F. Mitchelhill, B. Kobe, B.E. Kemp, and H.G. Zot. 1997. The myosin-I-binding protein Acan125 binds the SH3 domain and belongs to the superfamily of leucine-rich repeat proteins. *Proc. Natl. Acad. Sci. USA*. 94:3685–3690. Xu, P., A.S. Zot, and H.G. Zot. 1995. Identification of Acan125 as a myosin-I-binding protein present with myosin-I on cellular organelles of Acanthamoeba. *J. Biol. Chem.* 270:25316–25319. Zot, H.G., S.K. Doberstein, and T.D. Pollard. 1992. Myosin I moves actin filaments on a phospholipid substrate: implications for membrane targeting. *J. Cell Biol.* 116:367–376.
What is this gliding stuff all about? How does it stay up, how does it take off and how safe is it? Read page 2 to find out. Going gliding for the first time? Check page 3 to find out what to bring along. Your guide to meet the gliding instructor of your dreams. See page 4 for the in-depth bios of the club’s instructors. A new national club class champion in the club? The story of the 2000/2001 national championships is on page 6. Where is the shed at West Beach and why is it important? Page 8. What’s been going on? Check out page 9 to see what you have been missing out on. Lots and lots of things will be happening this year. See page 10 for what’s going to be happening soon with the club. STOP PRESS Membership Renewal: Previous members: Contact Dennis Medlow to renew your club membership for this year. If you do not contact Dennis, your membership will not be renewed for this year. Another Bergfalke at Lochiel Soon: Anthony has bought the other Bergfalke 4, GZQ, from the Gympie Soaring Club. It should be arriving at the West Beach shed around Monday 12 March. Orientation Week BBQ Monday 26 Feb: Come on down to the West Beach shed on Monday night at 7:00 pm for free beers and a barbie and to meet all the other club members. There is a map on how to get there on page 8. Call Matt on 0412 870 963 to let us know that you are coming. ANNUAL GENERAL MEETING The club’s annual general meeting will be held on the evening of Wednesday 4 April at 7:30 pm in the Little Cinema. Meet at 6:30 pm in the Equinox Bistro for dinner if you are interested. This is an important meeting for the club, where the reports from last year’s Executive Committee are tabled and a new committee elected. The club development plan, which will set the direction of the club for the next 10 years, will also be tabled for discussion and approval. Every member of the club should come along (or present a note from their mum). WHAT IS GLIDING? Gliding is the art of flying an aircraft without using an engine. A glider is simply an aeroplane without an engine. It has all the same controls and instruments as a powered aircraft. Contrary to popular belief, engines do not make aeroplanes fly: wings do! For wings to work they must be moving forward. Engines are used in powered aircraft to supply this forward speed in a steady, convenient form. Gliders use gravity - they are always gliding downwards through the air, though their design means that they glide at a shallow angle, typically 30 meters forward for every 1 meter down. A light aircraft such as a single-engine Cessna with its engine off will glide around 10 meters forward for every 1 meter down - still controllable, but nowhere near as efficient as a true glider. **How does a glider stay up?** The air is rarely still. It moves laterally as wind and also vertically. The magic starts when the glider is in air that is rising faster than the glider is descending. The glider will be carried up by circling in the rising air, in exactly the same way as eagles and other soaring birds. When the glider leaves the rising air it will resume its slow descent. Using this rising air is called 'soaring'. Provided that there is enough rising air around, the glider can stay up indefinitely. Of course, air is invisible, so it can’t be directly seen when it is rising. Although there is much theory, and also some instruments, to help the pilot find rising air, it is here that gliding passes into the nether world between science and art.
The challenge of using rising air to the best advantage is akin to a sailor using the winds and currents of the ocean, and this challenge is what keeps most enthusiasts coming back. Rising air (also called ‘lift’) can be found in the form of bubbles of hot air called ‘thermals’. These bubbles can go very high during the summer. Rising air can also be found where the wind blows over a ridge or range of hills. The air is forced up over the face of the hill, which provides continuous, predictable rising air called ‘ridge lift’. Unfortunately this lift is limited to near the hill and doesn’t go very high unless the wind is strong and it is a big hill. One of the advantages that the Adelaide Uni Gliding Club has at its airfield near Lochiel is the Hummocks range, which is ideal for ridge lift. All that is required is a reasonable westerly wind. The club is fortunate that westerly winds are reasonably common in winter, so the club can fly all year round. Most gliding clubs suffer in winter when the thermals are few and weak. **How does a glider take off?** There are a number of ways to get a glider airborne. The most well known is to simply tow the glider behind a powered aeroplane (called a ‘tug’). This process is called ‘aerotowing’ and has the advantage that the glider can be towed to any height or position in the sky. But the downside is the cost. The maintenance and fuel costs associated with the tugs make aerotowing an expensive method of take-off, one that is likely to be beyond the budget of most students. The Adelaide Uni Gliding Club uses a cheaper method known as ‘winch launching’. Many people would argue that this is a more fun way of taking off as well as being safer. Winch launching is where a large engine, mounted on the back of a stationary truck, is used to wind in a cable at high speed. The glider is attached to the other end of the cable and is pulled into the air like a kite. An average launch gets the glider to 1300 feet above the ground, although heights in excess of 2000 feet can be gained in the right conditions (see the worked example at the end of this section). At the top of the launch the cable is either released by the pilot or automatically dropped by the glider itself. **How safe is gliding?** Gliding is a very safe sport. The most dangerous part of the day’s flying is probably the car trip to and from the airfield. Yes, there are risks just like in any other activity, but the risks are fully understood and catered for - procedures are put in place and religiously followed to make sure that the risks are minimised to the lowest extent humanly possible. Our self-preservation instincts are just as strong as yours. Before a glider is permitted to fly on any particular day, it must be carefully inspected by a qualified inspector. Furthermore, the gliders are put through a thorough annual inspection in which they are disassembled and checked. All of the club’s instructors are experienced pilots who have undergone rigorous training and testing that is supervised by the Gliding Federation of Australia. You can read about them on page 4. You learn to fly at your own pace and the more advanced aspects are only introduced as you are ready for them. The club’s aim is to produce a safety-conscious, competent pilot. Someone who flies regularly (at least once per fortnight) can expect to go solo at around 10 to 12 hours of flying. There are no minimum time requirements - once your instructor is satisfied that you have reached the required ability, you are given the opportunity to go it alone!
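A worked example to put those launch heights and glide angles together (rough numbers, assuming perfectly still air): a 1300 foot winch launch is about 400 meters of height. At a 30:1 glide angle, that height buys roughly 400 m × 30 = 12 kilometres of gliding before the glider is back on the ground - and that is before finding any rising air at all. The same launch in the 10:1 Cessna would give only about 4 kilometres.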
The first step is to call the contact person, Matt, on 0412 870 963 on the Thursday before, between 8:00 pm and 10:00 pm. He will be able to tell you what is going on and organise the instructors. He can arrange a lift for you (normally from the footbridge at uni or the Caltex service station at Bolivar) or give you directions on where to find us. If you drive up to the airfield, there are signs to show you the way once you reach Lochiel, but please take care on the dirt roads. When you get there, remember that it is a farmer’s paddock, and close the gate after you go through it. This will save us trying to round up the escaped sheep at the end of the day. The airfield is in a farmer’s paddock which is quite exposed to the elements. You should bring with you a hat, sunglasses, sunblock and a water bottle. Wear something cool that you are prepared to get a little dirty. You can bring lunch, but there is a supply of pies and soft drinks etc in the clubhouse. You can bring a camera or video along if you wish. Lastly, be sure to bring along your sense of fun and adventure. Once at the airfield, experienced club members will be able to show you how to help out. Flying finishes whenever everyone has had enough or the sun sets, whichever comes first. YOUR INSTRUCTORS GUIDE David Conway (Chief Flying Instructor): David started flying in 1984 and has been an instructor since 1986. He is widely known throughout the gliding community as “Catherine Conway’s husband”. He runs his own electronic systems company which, funnily enough, owns a Ventus glider. His hobbies include balancing light switches so that they can be turned on by an errant gust of wind, setting off fireworks, and landing in other paddocks after cable breaks and whilst doing hangar runs. Gliding Hours: 1000 Instructing Hours: 331 Redmond Quinn: Redmond joined the club a very long time ago in 1980 and still hasn’t given up yet. He has been instructing since 1983 and was the club’s Chief Flying Instructor for the decade or so before David Conway. Redmond is an engineer who is occasionally accused of actually working whilst being paid by Santos. Redmond enjoys hitting things with hammers in the hope that it will fix them, trying to create Wilpena Pound sized craters with burning LPG cylinders, and running over things in his 4WD. Flying Hours: 530 Instructing Hours: 300 Catherine Conway: Catherine was famous for her never-ending appearances in Australian Gliding magazine. She started flying in 1986 and has been instructing since 1989. Cathy enjoys flying her Boomerang, QZ, whenever she can dump the kids with David, but enjoys flying their Ventus even more if she can lever David out of it. Cathy works for an internet communications company and tries to forget that she ever worked for Telecom on the Jindalee Project. Flying Hours: 778 Instructing Hours: 173 Dennis Medlow: Dennis became an instructor in 1984 after joining the club in 1982. Dennis is not allowed to tell anybody what he does, but it involves communications and the army, which are two mutually exclusive items. He enjoys flying powered aircraft, which he finds are easier to gain height in, and doing tail slides in the Bergfalke. He really objects to being called Dippy. The silliest thing Dennis has ever done was send Peter Cassidy solo. Flying Hours: 664 Instructing Hours: 311 Peter Temple: Peter Temple started flying in 1982 and has been an instructor since 1989. Peter is ‘another engineer’ who works at the DSTO; beyond that he won’t say anything more.
He can regularly be found flying a long way from anywhere in the DG200 that he shares with Mandy Wilson. He is a prolific competition pilot and is the current National Champion in club class. Peter has done many silly things in the past, but he won’t tell us what they were. Peter and Mandy will be disappearing to the USA for a couple of years in the middle of this year. Flying Hours: 1713 Instructing Hours: 492 Anthony Smith: Anthony started gliding in 1987 as well. After escaping to Melbourne with the Air Force for a while, he was eventually caught and extradited back to Adelaide and forced to become an instructor in 1998. Anthony is an aeronautical engineer who has recently resigned from the Air Force to become a civilian and was promptly contracted back to the RAAF for more money. Anthony’s hobbies include being President of the gliding club for the term of his natural life, renovating Bergfalkes (he has recently bought one from Gympie) and outlanding as far away as possible when flying cross country. Gliding Hours: 713 Instructing Hours: 175 Stephen Were: Stephen has been flying since 1985 and instructing since 1986. Steve is an organic chemist who now works at the Bolivar Sewerage works (he gets all the smelly jobs at home too). He owns half of a PIK-20B, which he doesn’t fly very much (he got married). Steve’s hobbies include driving very fast into big fence posts and kicking tyres around the airfield when he is frustrated. Flying Hours: 1580 Instructing Hours: 754 Bradley Gould: Bradley started flying in 1988 and has been an instructor since 1991. He unexpectedly became State Champion in 1991 as well, much to everyone’s surprise and Bradley’s extreme embarrassment. Brad works with Catherine Conway in the same internet communications company. Bradley is infamous for his ‘Top Gun’ antics (and that is not just with his flying either) and briefly became notorious for having a romance with an inflatable doll (which he later put out of its misery by tying it to a tree and shooting it several times). Gliding Hours: 420 Instructing Hours: 225 Michael Texler: Michael started gliding in 1987 and became an instructor in 1996. He is a doctor who cuts up dead people to see how they died. This is OK, as his patients never complain about it afterwards. Michael also flies powered aircraft and is often found tugging (ie towing gliders) at Gawler. He occasionally instructs there as well. Michael collects jokes about bottoms, which he will tell you and then apologise for afterwards. Michael is planning on getting married soon. Gliding Hours: 510 Instructing Hours: 126 Greg Newbold: Greg also started gliding in 1987 (what a great year that was) and was coerced into being an instructor in 1996. He is a mechanical engineer who works for the CSIRO. Greg’s bad habits include letting his student pilots make him airsick and using the airfield fence as an arrestor wire for the Bergfalke. Gliding Hours: 211 Instructing Hours: 43 Mandy Wilson: Mandy began gliding late in 1995 after deciding that gliding slowly down through the air was far better than plummeting out of the sky with a parachute (skydiving). She then became an instructor in 1998. She is famous for supplanting Cathy Conway in the *Australian Gliding* magazine by getting her own photos of gliders published, mostly of the DG200 that she shares with Peter Temple. Mandy enjoys making matching cushions for her glider and cooking dumplings for Peter.
Gliding Hours: 501 Instructing Hours: 34 Raj Bholanat: Raj began flying in 1995 and became an instructor in 1999. Raj works as an engineer at Mitsubishi. Raj’s claim to fame is his inability to find thermals (even when they are going to 10,000 feet) while still being able to find anything that has been lost on the airfield, no matter how small. Raj also enjoys crash testing his Magna with kangaroos (so far the Magna is winning). Gliding Hours: 144 Instructing Hours: 28 2000/2001 NATIONAL CLUB CLASS CHAMPIONSHIPS Report by Peter Temple This year the Australian Club Class Nationals were held in Benalla, Victoria. The competition was held in November rather than the usual time of January to avoid a clash with the World Gliding Championships at Gawler, and was also a selection event for the next World Club Class Championship team for Musbach, Germany, in 2002. 23 gliders entered the competition, which is fewer than normal, probably because the competition was held early in the season. However AUGC had a strong presence with two entries: Cathy Conway in QZ, and myself in WUZ. Emilis was also there, flying the old AUGC Super Arrow TJ. This year a Sports Class was to be created for the lower-performance gliders, but unfortunately there were not enough entries. Club Class competition allows older gliders to fly in a handicapped race. This means that you don’t need the latest, and very expensive, gliders to compete at the top level of competition, and many of the competitors were flying club-owned gliders. The gliders are also flown without water ballast to be fairer to the gliders that can’t carry water. Traditionally Pilot Selected Tasks (PST) have been used, where the pilots fly as far as possible in an allocated time. With some restrictions the pilots can fly to any of the turnpoints from a list, and the fastest pilot around the task (after applying the glider’s handicap) wins. Overall the weather was complex, with combinations of strong and weak thermals, wave from the mountains in the south, and strong, cold southerly winds. Often the conditions changed dramatically during the day, requiring a change from high-speed cruising to very conservative flying. The combination of strong wind and weak thermals on some days made progress very difficult for Cathy and Emilis in their wooden gliders. Practice Days Mandy (crew extraordinaire) and I arrived 3 days before the competition to allow time for setting up and familiarisation with the site and local conditions. This paid off, and I was the best prepared by the first competition day. Bruce Taylor (champion for the past 2 years and on the Australian team for many years before that) arrived 2 days late with no preparation. He didn’t even have maps and was madly trying to load turnpoint coordinates into his GPS while we were already on task on the first competition day! During the practice I resolved to stay out of the mountains and also away from the green country to the east after getting low near Moyhu. Unfortunately the first two competition days were unflyable due to bad weather. Fortunately for Bruce Taylor this meant he was able to fly on the first competition day. We used the time to explore the local wineries and sights of the area. Day 1 On the 3rd attempt we finally had good enough weather to fly a competition task. It was a good day for me: I achieved 104 km/hr over 450 km to finish 1st for the day, narrowly ahead of Bruce. Day 2 – *The dreaded wave* A complex mix of wave and thermals dominated Day 2.
I couldn’t make sense of the lift and was very slow compared to most of the other pilots. Getting very low on task and spending 15 minutes drifting downwind waiting for a thermal to form did not help my speed. I found out after the competition that these conditions are common at Benalla. I have lots to learn about competing in wave conditions. Day 3 – *How much character can you have?* Conditions were forecast to be good and a long 5 hour task was set. As predicted, the thermals were initially strong with good climbs, but later strong southerly winds destroyed the lift and made getting home very difficult. These cold winds (apparently unusual at this time of year) were the bane of the rest of the competition. I eventually managed 470 km, and was glad to be home for 2nd place. Emilis commented that this was a “character building day”. **Day 4** This was the best weather so far, with 10 knot thermals to 11500’. Bruce won the day with a distance of 596 km and the fastest speed of the competition (132 km/hr). Once again the strong southerly winds late in the day slowed an otherwise perfect day. **Day 5** A poor day was predicted and only a 2 hour task was set. But in the event conditions were good and speeds of around 120 km/hr were achieved. I was happy with 117 km/hr for another 2nd place. **Day 6 - I always wanted to land on a cricket pitch!** A new type of task was tried, with some controversy. An Assigned Area Task (AAT) is a combination of PST and a set task. 2 or more turnpoints are set and the pilots must fly around them in the set order. The unique feature is that a radius, for example 30 km, is set on each turnpoint, and the pilot only needs to fly within the radius, giving the possibility of turning short or extending the task beyond the turnpoint. This was the first opportunity that we had to fly on the same task as the other competitors, and observing their tactics was very useful. The thermals were also weak and low, so the other competitors made great thermal markers. Bruce outlanded on this day and shortly before landing reported over the radio that “I always wanted to land on a cricket pitch”. He made a safe landing on the Yarrawonga cricket oval! After trying the task there was unanimous approval of the task type. **Day 7 – Don’t go to Moyhu!** Before the start there were beautiful cu’s overhead, but these rapidly dissipated before the start gate opened. Some of the pilots flew to the mountains and were reporting very strong conditions while the rest of us were low over the plains. As I was heading into Moyhu (ah yes, I seem to recall resolving not to go there…), chasing cu’s that were rapidly evaporating, Jonathon Shand reported over the radio the golden rule for the day: “Don’t go to Moyhu!”. Unfortunately I was committed and struggled past. Most of the pilots that went to the mountains didn’t get home. I was frustrated by finishing 2nd for the third time in the competition. Day 8 – The dreaded wave again At briefing on the final day the (infamous) Simon Brown was offering disks of the overall “winner’s” flights for sale at $5 (with Bruce’s permission). Bruce had already won 5 of the 7 days and was the hot favourite to take out the championship. The final competition task did not start well for me. I could not get above 4000’ when others were reporting over 10000’ in wave.
Starting anyway, I had a good downwind run to the first turn (arriving at 1200’ AGL) and then found 10 kt climbs to 9000’ under cu’s before once again the cold southerly came in (the Benalla “sea-breeze”) - the last 70 km taking 1.5 hours. The fastest speeds for the day were achieved by those brave enough to fly in the mountains. With some new names at the top of the daily scores and many of the top pilots outlanding (including Tom Gilbert, Toby Geiger and Jonathon Shand), the final aggregate result was going to be interesting. Bruce got into trouble but eventually sneaked home an hour after everyone else. Final Results Before the presentation dinner I had done some back-of-envelope calculations and was confident of finishing 2nd overall – one place better than the Gawler nationals. It wasn’t until the final presentation at the dinner that it was announced that I was the new Australian club class champion. Consistent scores every day and no outlandings took me ahead of Bruce. Emilis was the highest placed sports class glider.

Final Aggregate Placings:
1. Peter Temple 918.7 pts
2. Bruce Taylor* 903.8 pts
3. Rolf Buelter 877.7 pts
4. Tom Gilbert* 868.1 pts
5. Toby Geiger 867.4 pts
6. Terry Cubley 866.9 pts
7. Ron Sanders* 865.0 pts
...
14. Emilis Prelgauskas 718.7 pts
20. Cathy Conway 634.1 pts
* Competed at the recent Gawler World Championship

SO YOU WANT TO HELP AT WEST BEACH? West Beach is where we carry out the maintenance and repair of our gliders and equipment. There are usually volunteers working down there on Monday, Tuesday and Wednesday evenings. The entrance is at the end of Foreman St, West Beach. The Puchatek is being finished there at the moment. Winch #3 has had the gearbox fitted to the front engine, and now the transfer box needs to be located and fitted. So you want to help fix the gliders at West Beach, but can’t get there? A lift can be available from the Adelaide University footbridge at 7:30 pm by arrangement. Ring Anthony on (wk) 8393 3319, (hm) 8269 2687 or E-mail: firstname.lastname@example.org. WHAT’S BEEN GOING ON! Well, January was a busy month, especially if you were anywhere near the World Club Class Gliding Championships held at Gawler. These were held from 7 January through to 27 January and featured 46 pilots from 21 countries. There were also 7 entries in the Grand Prix event held simultaneously. The effort that had gone into the organisation of the event by the Adelaide Soaring Club yielded quite reasonable results. Seven tugs launched all of the entrants in about one hour of fast activity on each of the flying days. The weather was interesting, to say the least. A number of days were cancelled because the predicted soaring conditions were too poor (ie the thermals weren’t going high enough) despite the high temperatures. Most of the European entrants had difficulty with the heat and had underestimated the amount of drinking water required to be taken in the aircraft with them when they flew. The first flyable competition day was in very average conditions and resulted in almost half of the gliders outlanding. However, the number of outlandings decreased sharply as pilots adapted to the conditions over the course of the competition. The club hired the PIK-20D to Zeljko Roskar from Slovenia, a small country wedged in between Austria, Croatia and Hungary. He finished a respectable 21st overall, despite it being his first international competition and the first time that he had flown a flapped 15 meter aircraft of any kind.
SA FM’s Skyshow was (very conveniently) held after the closing ceremony, and a large club contingent dragged Zeljko and his girlfriend Jana, the Swiss team and the Belgian pilot along to see the fireworks. Apart from all that, there has been flying at Lochiel too! Congratulations this month go to Derrek Spencer and Steve Grey for gaining their C certificates. Derrek also finally got around to getting his A and B certificates. Congratulations also go to David Hichens, Matt Fenn and Scotts Battersby and Lewis for gaining their independent operator ratings (these guys can now go flying without having an instructor at the airfield). In other news, Matt Fenn has refreshed and updated the club’s web page: www.augc.aus-soaring.on.net. It is looking particularly good, but Matt is still looking for photos of club activities to stick on the photo page. If you have any worth displaying, please send him a copy. The newsletters are now available in Adobe PDF format and you can download them from the appropriate page on the web site. Please e-mail email@example.com if you would like a copy of the newsletter sent directly to you each month. In the future a selection of past newsletters will also be converted to PDF to add to the historical archive of the club. DO YOU STILL WANT TO BE A MEMBER OF THE CLUB? It is that time of the year again! Please contact Dennis Medlow to renew your club membership for 2001. If you do not contact Dennis, your membership will not be renewed. You can contact Dennis by phone on (mob) 0407 833 565, (hm) 8337 3265 or E-mail: firstname.lastname@example.org. WHAT IS GOING TO HAPPEN SOON X-country Weekend, Sat 17 to Sun 18 Feb. Another dedicated weekend for people to select an aircraft and go cross country. O’Week, Mon 19 Feb to Fri 23 Feb. Help recruit new, keen glider pilots from the masses of students at O’Week. Help will be needed to rig the Club Libelle in the morning and de-rig it in the afternoon. Club members will also be needed to talk to people and hand out pamphlets etc during the day. Call Scott L for details. Barbeque, Mon 26 Feb. Come on down to the West Beach shed to meet all the new people from O’Week and enjoy a free barbeque. 7:00 pm onwards. General Meeting, Wed 7 Mar: Welcome to new members. Come along and meet the people that joined during O’Week. Beers, pizzas and gliding videos. What more could you want in a night? 7:30 pm, Canon Poole Room, Union Building. 25th Anniversary of AUGC. The 25th anniversary celebrations will include a huge dinner for past and present members as well as a flying weekend. Date to be decided. Call Cathy if you want to help. Annual General Meeting, Wed 4 Apr: Little Cinema at 7:30 pm. Dinner at 6:30 pm in the Equinox Bistro if you are interested. The big meeting for the club, where the reports from last year’s Executive Committee are tabled and a new committee elected. The club development plan, which will set the direction of the club for the next 10 years, will also be tabled for discussion and approval. 13-22 April, Easter and mid-week flying at Lochiel. Go flying for the week during uni holidays. The best way to advance your flying, with several days of back-to-back practice. This may be extended to ANZAC Day, 25 April, if there is sufficient demand and an available instructor. 19-21 May, Pt Pirie Camp. The club is trying to arrange a camp with the Whyalla Gliding Club near Pt Pirie for the long weekend. Hopefully there will be westerly winds that will allow us to fly the big ridge there. 9-11 June, Flinders Ranges.
Visit the scenic Flinders Ranges for the Queen’s Birthday long weekend. Flying, bush walking, gorge touring and camp fires galore. We are inviting the Waikerie Gliding Club and the Gliding Club of Victoria along this year and may extend the camp to 17 June if there is enough demand.
AUTHOR: Behm, Mary; Behm, Richard TITLE: You Can Help Your Child with Reading and Writing! Ten Fun and Easy Tips = Puede ayudar a sus hijos a leer y escribir! Diez sugerencias faciles y divertidas. PUB DATE: 94 NOTE: 33p.; Separately published Spanish version, "Puede ayudar a sus hijos a leer y escribir," appended. AVAILABLE FROM: EDINFO Press, P.O. Box 5953, Bloomington, IN 47407 (Booklets come in packets of 20; 1-4 packets, $15 per packet; 5-19 packets, $12 per packet; 20-49 packets, $9 per packet; 50+ packets, $7.50 per packet). PUB TYPE: Guides - Non-Classroom Use (055) -- Multilingual/Bilingual Materials (171) LANGUAGE: English; Spanish EDRS PRICE: MF01/PC02 Plus Postage. DESCRIPTORS: Beginning Reading; Early Childhood Education; Mass Media Use; *Parent Participation; Parents as Teachers; *Parent Student Relationship; *Reading Aloud to Others; Television IDENTIFIERS: Beginning Writing; Emergent Literacy ABSTRACT: Adapted from "101 Ideas to Help Your Child Learn to Read and Write," this booklet presents 10 tips for parents to help their children learn and have a good time in the process. The booklet begins with a letter to parents which discusses five basic principles to remember as they help their children. Tips in the booklet include: read aloud to children; get children their personal library card; make a special place for the children's books and magazines; display children's art and writings; let children help in the kitchen; discuss television programs and commercials; and ask children specific questions every day about school. (RS) You Can Help Your Child with Reading and Writing: Ten Fun and Easy Tips Adapted from 101 Ideas to Help Your Child Learn to Read and Write by Mary and Richard Behm A Letter to Parents Dear Parent, You are your child's first and most important teacher! You can make your home a place where your child has many opportunities to learn and practice reading and writing. Reading and writing will be fun experiences that you share together. Here are some basic principles to remember as you help your child: - **Home is the center for learning.** When parents value reading and writing, their children will view them with pleasure. Let your children see you read and write often. When parents show they care about education, children care, too. - **Learning brings the family together.** Sharing learning experiences brings parents and children together. Helping your child with reading and writing not only leads to success in school, but also helps you communicate better with each other. - **Play is the essence of learning.** Children learn most effectively through play. Parents can keep kids busy with games which help strengthen reading and writing skills. - **Speaking, listening, reading, and writing are related.** Good speaking and listening skills lead to strong reading and writing. Listening to, writing, reading, and telling stories with your child reinforce each other and help your child succeed.
- **Praise fosters growth.** The mastery of reading and writing requires encouragement, praise, and emotional support. Allow children to experiment and play with language. Praise them often for their efforts. Enjoy these ten fun and easy tips with your son or daughter. You will help your children learn and have a good time in the process. Reading Aloud Begin reading stories to your child as soon as you can, even when he is a baby. He won’t understand the story, but will be fascinated with the sound of the words. Continue reading aloud as your child grows older. When he is old enough, take turns reading to each other. At the Library Apply for your child's personal library card as soon as possible, even before she can read. Let your child look through and choose her own books to read and have read to her. Bring home as many books as the library will allow, so your home will always be full of books. Books Galore! Make a special place in the living room, family room, or even the kitchen for your child's books and magazines. Make it as easy for your child to read a book as it is to turn on the television. Maria's Bookshelf Author! Artist! Every author or artist loves to be "published" or displayed. Display drawings on the refrigerator or have them framed and hang them at your work place. Offer to type out stories for younger children and make photocopies of your child's stories and pictures at a local library, post office, or copy shop. Photocopying offers an inexpensive way to make gifts or handmade stationery for relatives, as well as boosting your child's confidence in her own creativity. Help in the Kitchen Encouraging your children to help you in the kitchen not only helps them become future cooks, but aids them in learning to read and write. Read recipes to your child as he adds the ingredients, or let him read to you. Have your child copy down his favorite recipes and keep them in his own recipe box. On the Road Games in the car make long trips pass more quickly and short trips more bearable. Always keep a few books handy in the glove compartment, and play word games as you drive. Spell and say a word, such as C-A-R spells car, and help your child find and spell a word that rhymes. For older children, try scrambling the letters of a word in your head. Say the scrambled letters out loud and have your child unscramble them to reveal the word. For longer trips, build stories, where one person begins the story and stops at a crucial point, allowing the next person to continue. Using Television Read the television guide with your child, or let him read it before choosing a channel. Help your child develop critical thinking skills by discussing commercials and television programs as you watch them. What information did they give? How did it affect you? What strategies did they use to sell their product? Could the things that take place on the program happen in real life? A Special Place to Read and Write Set aside a special place in your child's room for bookshelves and reading space. Make sure there is a good bedside reading light, and as your child gets older, add a desk to the room and equip it with a few art supplies, pencils, pens, and paper. Reserving space for quiet reading, writing, studying, and working gives your child a sign that such activities are important, and teaches her a concentration skill that will last her throughout her formal education and the working world. School Books & More Have your child bring home school books and textbooks and read and discuss them with her.
Find out what books she is reading in class, and read them on your own so you can discuss them. When she begins reading longer books, have her recommend books for you to read. How Is School? Ask your child specific questions every day about school. What happened in science class today? What books did you read in reading class? What did you learn today? Visit the school often, at parent-teacher meetings and PTA meetings. Or offer to go talk to your child's class about your work. Set aside at least thirty minutes a day to work with your child on his homework, or if he has none, to read together or discuss current events. Conclusion Most important of all, make the time you share with your child fun for each of you. This will give your child positive feelings about reading, writing, and learning. © 1994 EDINFO Press P.O. Box 5247 • Bloomington, IN 47407 1-800-925-7853 ¡Puede ayudar a sus hijos a leer y escribir! Diez sugerencias fáciles y divertidas Una adaptación de Prepare a sus hijos para leer y escribir: 101 Ideas por Mary y Richard Behm Carta a los padres Estimado padre o madre: Usted es el primer y más importante maestro de su hijo. Usted puede hacer de su hogar un lugar donde su hijo encuentre muchas oportunidades para aprender y practicar la lectura y la escritura. Leer y escribir serán experiencias divertidas que compartirán juntos. A continuación, algunos principios básicos que recordar mientras ayuda a su hijo: - **El hogar es el centro de aprendizaje.** Cuando los padres valoran la lectura y la escritura, sus hijos visualizan estas actividades como placenteras. Permita que sus hijos lo vean leer y escribir frecuentemente. Cuando los padres muestran aprecio por la educación, los hijos la aprecian también. - **El aprender une la familia.** Compartir experiencias de aprendizaje une a padres e hijos. Ayudar a su hijo a leer y escribir, no solo lo ayuda a tener éxito en la escuela, sino que también los ayuda a ustedes a tener mejor comunicación. - **El juego es la esencia del aprendizaje.** El juego es el método más efectivo para que los niños aprendan. Los padres pueden mantener a sus hijos ocupados con juegos que ayuden a fortalecer sus destrezas de lectura y escritura. - **Hablar, escuchar, leer, y escribir están relacionados.** Tener buenas destrezas para hablar y escuchar, conduce a desarrollar buenas destrezas de lectura y escritura. Escuchar, escribir, leer, y narrar historias con su hijo son destrezas que se refuerzan entre sí, y ayudan a su hijo a alcanzar el éxito. - **El elogio fomenta el crecimiento.** El dominio de la lectura y la escritura requiere estímulo, elogio, y apoyo emocional. Permita a los niños experimentar y jugar con el lenguaje. Elogie su esfuerzo con frecuencia. Disfrute estos diez fáciles y divertidos consejos con su hijo o hija. Ayudará a su niño a aprender, y se divertirán en el proceso. Lectura en voz alta Comience a leerle cuentos a su hijo tan temprano como sea posible, aun si es todavía un bebé. No entenderá la historia, pero quedará fascinado con el sonido de las palabras. A medida que su hijo crece, continúe leyéndole en voz alta. Cuando sea lo suficientemente mayor, tomen turnos leyéndose el uno al otro. En la biblioteca Solicite una tarjeta de biblioteca para su hijo tan pronto como sea posible, aún antes de que comience a leer. Permita que su niño mire y escoja sus propios libros para leer y para que usted lea. Lleve a casa tantos libros como la biblioteca le permita tomar prestados. Así, su hogar estará siempre lleno de libros. ¡Libros a granel! 
Set aside a place in the living room, family room, or even the kitchen especially for your child's books and magazines. Make reading as easy for your child as watching television. Maria's Library

Author! Artist!
Every author or artist likes to publish or exhibit their work. Display your child's drawings by sticking them on the refrigerator, or frame them and display them at your workplace. Offer to type up the stories the children write, and make photocopies at the library or the post office. Photocopying is an inexpensive way to prepare gifts or writing paper for relatives, and it builds your child's confidence in his own creativity.

Help in the Kitchen
Encouraging your children to help you in the kitchen not only helps them become future cooks; it also helps their reading and writing skills. Read recipes to your child while he adds the ingredients, or let your child read the recipes to you. Have your child copy his favorite recipes and keep them in his own recipe box.

On the Road
Car games help long trips seem shorter and short trips feel more fun. Always keep a few books in the glove compartment, and play word games with your children while you drive. Spell out and say a word, such as C-A-R is car, and have your child find and spell a word that rhymes with it. With older children, try scrambling the letters of a word in your head. Say the scrambled letters, and have your child unscramble them to find the original word. For longer trips, build stories together: one person starts a story and stops at a crucial point, and the next person in turn continues it.

Use the Television
Read the television program guide with your child, or have him read it to you before you choose the channel you will watch. Help your child develop critical thinking skills by discussing commercials and television programs as you watch them. What information did they contain? How does it affect you? What strategies did the commercial use to sell the product? Could the things that happened in the program happen in real life?

A Special Place to Read and Write
Set aside a special place in your child's room for bookshelves and a reading space. Make sure there is good reading light beside your child's bed. As he grows, add a desk and provide art supplies, pencils, pens, and paper. Reserving a place for quiet reading, writing, studying, and working gives your child a sense that these activities are important, and teaches concentration skills that will last throughout his formal education and even into the world of work. It makes him a lifelong student.

Schoolbooks and More
Have your child bring home his schoolbooks and textbooks. Read and discuss them with your child. Find out what books your child is reading in class, and read them on your own so you can discuss their content together. When your child begins reading longer books, ask him to recommend books for you to read.

How Is School?
Every day, ask your child specific questions about school. What happened in science class today? What did you read in reading class? What did you learn today? Visit the school often and attend parent-teacher meetings.
Offer to talk to the children in your child's class about your work. Set aside at least thirty minutes each day to work with your child on his homework. If he has no homework assigned, read together or discuss current events during that time.

Conclusion
Most important of all, make the time you share with your child fun for both of you. This will foster positive attitudes in your child toward reading, writing, and learning.

© 1994 EDINFO Press P.O. Box 5247 • Bloomington, IN 47407 1-800-925-7853
**Das Schachmädchen** - Tim Crothers 2017-04-10 A book about the power of hope, which makes dreams come true. Phiona Mutesi is among the poorest of the poor in Africa. She lives with her mother and three siblings in a shabby corrugated-iron hut in Katwe, a slum on the edge of the Ugandan capital Kampala. Her mother cannot afford the money for school, and Phiona and her siblings often go to sleep hungry. But one day in 2005 will change her life forever. Searching for something to eat, she follows her brother to a dusty veranda, where she meets Robert Katende, who serves slum children a warm meal and teaches them to play chess: a game so foreign to them that there is no name for it in their language. To everyone's surprise, Phiona has enormous talent, and the incredible comes true: at 11 she becomes junior champion, at 15 national champion of Uganda, and in 2010 she travels to Siberia to take part in the Chess Olympiad.

**Garry Kasparov on My Great Predecessors: Euwe, Botvinnik, Smyslov, Tal** - Garri Kimovich Kasparov 2003 Garry Kasparov, the thirteenth world champion and widely acclaimed as the greatest player ever, assesses the contribution of his 12 great predecessors. This is the second part of a three-volume series.

**Computers and Games** - H. Jaap van den Herik 2007-09-28 This book constitutes the thoroughly refereed post-proceedings of the 5th International Conference on Computers and Games, CG 2006, colocated with the 14th World Computer-Chess Championship and the 11th Computer Olympiad. The 24 revised papers cover all aspects of artificial intelligence in computer-game playing. Topics addressed are evaluation and learning, search, combinatorial games and theory, opening and endgame databases, single-agent search and planning, and computer Go.

**Garry Kasparov on My Great Predecessors: Petrossian, Spassky** - Garri Kimovich Kasparov 2003 "The battle for the World Chess Championship has witnessed numerous titanic struggles which have engaged the interest not only of chess enthusiasts but of the public at large. The chessboard is the ultimate mental battleground and the world champions themselves are supreme intellectual gladiators."--Back cover.

**Bobby Fischer lehrt Schach** - Bobby Fischer 2003

**José Raúl Capablanca** - Miguel A. Sánchez 2015-07-06 This is the most complete and thorough biography of José Raúl Capablanca, one of the greatest players in the history of chess. Beginning with his family background, birth, childhood and introduction to the game in Cuba, it examines his life and play as a young man; follows his evolution as a player and rise to prominence, first as challenger and then world champion; his loss of the title to Alekhine and his efforts to recapture the championship in the last years of his too-short life.
What emerges is a portrait of a complex man with far-ranging interests and concerns, in stark contrast to his robotic reputation as "the chess machine." Meticulously researched, utilizing many sources available only in Capablanca's home country, it puts truth to legend regarding a man who stood astride the chess world in one of its most dynamic and dramatic eras. Numerous games and diagrams complement the text, as do a wealth of photographs.

**The Batsford Book of Chess** - Sean Marsh 2014-11-24 The Batsford Book of Chess is a landmark, full-colour chess instruction book, authoritatively written and beautifully designed. Arranged in the form of a course, it will take you all the way from tentative beginner to formidable chess player. 'Quick Start' reference pages help you retain the information you've learned, and puzzle sections let you test yourself as you go. To illustrate more advanced strategy and tactics, the author uses world-class 'chess heroes' such as Bobby Fischer and Mikhail Tal to bring the concepts to life. Essential topics include: • Pieces and Moves: the very basics, covering the chessboard, notation, the names of the pieces and how they move, plus an overview of chess etiquette • What Chess is All About: an exploration of chess culture and history • Winning, Drawing and Losing: covers the various ways of winning at chess, and how games are drawn • Six Openings for Life: coverage of six of the best chess openings, each illustrated by a different 'chess hero' • Tactical Weapons: an examination of forks, pins, skewers and other tactical devices, followed by illustrative games from Tactical Hero Mikhail Tal • Positional Play: looks at good and bad positions, plus the art of planning, seen through the games of Positional Hero Tigran Petrosian • Human Factors: typical mistakes and blunders you'll need to steer clear of. Easy to follow, yet more thorough and more challenging than other chess instruction books on the market, this book is an essential companion for all budding chess champions.

**Smyslov, Bronstein, Geller, Taimanov and Averbakh** - Andrew Soltis 2022-02-24 A crucial decision spared chess Grandmaster David Bronstein almost certain death at the hands of the Nazis; one fateful move cost him the world championship. Russian champion Mark Taimanov was touted as a hero of the Soviet state until his loss to Bobby Fischer all but ruined his life. Yefim Geller's dream of becoming world champion was crushed by a bad move against Fischer, his hated rival. Yuri Averbakh had no explanation of how he became the world's oldest grandmaster, other than the quixotic nature of fate. Vasily Smyslov, the only one of the five to become world champion, would reign for just one year--fortune, he said, gave him pneumonia at the worst possible time. This book explores how fate played a capricious role in the lives of five of the greatest players in chess history.

**Masterpieces and Dramas of the Soviet Championships** - Sergey Voronkov 2021-12-05 The second part of Sergey Voronkov's three-volume treatise continues from where Volume I left off. It covers the eleventh to fifteenth Soviet championships, the 1941 match tournament for the title of Soviet Absolute Champion, and the main events in the country's chess history between these tournaments. Themes include the downfall of Nikolai Krylenko, the persecution and disappearance of Soviet chess players during the purges, and the experience of chess players in World War Two.
The atmosphere of the time is captured in contemporary accounts and memoirs of key players and cultural figures. We see Botvinnik and Keres established as leading challengers for Alekhine's throne, with plans being made to arrange a title match. We encounter for the first time, and witness the rise of, great Soviet players such as Smyslov, Bronstein and Boleslavsky, and enjoy the games of many other stars including Flohr, Lilienthal, Bondarevsky, Kotov and Tolush. This volume contains 84 games and fragments, mostly annotated by the players themselves and their peers, and subjected to recent computer analysis. It is illustrated with around 250 photos and cartoons from the period, the main sources being Russian chess magazines and tournament bulletins. Volume I of Masterpieces and Dramas of the Soviet Championships was named the English Chess Federation's Book of the Year 2021. The jury stated: "The book reads like a novel... A most remarkable, absorbing and entertaining chess history which fully lives up to its title, Masterpieces and Dramas, on and off the board. A worthy winner of Book of the Year 2021 over strong competition."

**Chess Life** - 2007

**Garry Kasparov on Garry Kasparov, Part 2** - Garry Kasparov Garry Kasparov on Garry Kasparov: Part II is the second volume in a major three-volume series made unique by the fact that it records the greatest chess battles played by the greatest chessplayer of all time. Kasparov's series of historical volumes have received great critical and public acclaim for their rigorous analysis and comprehensive detail regarding the developments in chess that occurred both on and off the board. Part I of this series saw Kasparov emerging as a huge talent and eventually toppling his great rival Anatoly Karpov to gain the world title. This volume focuses on the period from 1985-1993, which witnessed three title defences against Karpov as well as a number of shorter matches against elite players including Hübner, Andersson, Timman and Miles. This period also saw Kasparov achieve spectacular results in both individual and team events. Kasparov won the board gold medal in three Olympiads (Dubai 1986, Thessaloniki 1988 and Manila 1992). The late 1980s also saw the emergence of the World Cup series, which Kasparov utterly dominated, finishing either clear first or equal first at Belfort 1988 (11½/15), Reykjavik 1988 (11/17), Barcelona 1989 (11/16) and Skelleftea 1989 (9½/15). Other major tournament victories include Brussels 1987 (8½/11), Amsterdam 1988 (9/12), Tilburg 1989 (12/14), Belgrade 1989 (9½/11) and Linares 1990 (8/11). During the late 1980s and early 1990s Kasparov emphasized his huge superiority over his rivals. Despite generally adopting an uncompromising, double-edged attacking style he almost never lost. The games in this volume feature many masterpieces of controlled aggression played against the world's absolute best.

**Attacking with g2 - g4** - Dmitry Kryakvin 2020-01-10 The secret of its success may be its anti-positional look. The pawn thrust g2-g4 is often so counter-intuitive that it's a perfect way to confuse your opponents and disrupt their position. Ever since World Champion Mikhail Botvinnik started using it to defeat the elite grandmasters of his day, it has developed, on all levels of play, into an ever more popular and attractive way to fight for the initiative. Grandmaster Dmitry Kryakvin owes a substantial part of his successes as a chess player to the g2-g4 attack.
In this book he shows how it can be used to defeat Black in a number of important Closed and Semi-Closed Defences and Flank Openings: the Dutch, the Queen's Gambit, the Nimzo-Indian, the King's Indian, the Slav and several variations of the English Opening. With lots of instructive examples, Kryakvin explains the ins and outs of the attack on the g-file: the typical ways to gain tempi and keep the momentum, and the manoeuvres that will maximize your opponent's problems. After working with this book you will be fully equipped to use this modern battering ram to define the battlefield. You will have fun and win games!

**The Greatest Chess Kings** - Sylvia Lovina Chidi 2014-06-08 This book covers the lives and selected chess games of the following players: George Koltanowski, Ruy Lopez de Segura, Wilhelm Steinitz, Paul Morphy, Emanuel Lasker, Jose Raul Capablanca, Bobby Fischer, Garry Kasparov, Anatoly Karpov, Carlsen Magnus, Kramnik Vladimir, Aronian Levon, Radjabov Teimour, Karjakin Sergey, Anand Viswanathan, Topalov Veselin, Nakamura Hikaru, Mamedyarov Shakhriyar, Grischuk Alexander, Caruana Fabiano, Morozevich Alexander, Ivanchuk Vassily, Svidler Peter, Leko Peter, Wang Hao, Kamsky Gata, Gelfand Boris, Gashimov Vugar, Jakovenko Dmitry, Maurice Ashley and Pontus Carlsson. It contains 242 chess games of current and past male chess players around the world. Eight fantastic games have been chosen from each of the modern chess Kings; the remaining 20 games include previous and current male chess pioneers. This book is full of history and is an excellent book for studying openings, middle games, end games and solving problems.

**Train Your Chess Pattern Recognition** - International Master Arthur van de Oudeweetering 2016-06-22 In this sequel to his instant classic Improve Your Chess Pattern Recognition, a highly original take on practical middlegame instruction, Arthur van de Oudeweetering presents players of almost every level with a fresh supply of essential, yet easy-to-remember building blocks for their chess knowledge. Pattern recognition is one of the most important mechanisms of chess improvement. It helps you to quickly grasp the essence of a position on the board and find the most promising continuation. In short, well-defined and practical chapters, experienced chess trainer Van de Oudeweetering presents hundreds of examples of middlegame themes. To test your understanding he provides an abundance of exercises. After working with this book, an increasing number of positions, pawn structures and piece placements will automatically activate your chess knowledge. As a result, you will find the right move more often and more quickly!

**Positionelles Schach** - Mark I. Dvoreckij 1996 How do you train your feel for position? How do you recognize typical positions and the plans that go with them? Rarely has a demanding textbook been so close to practical play. The chess school of the trainer team Dvoretsky/Yusupov is regarded by many chess friends as the best in the world.

**Bobby Fischer** - Frank Brady 2016-01-18 The model for Beth Harmon in The Queen's Gambit. Gifted with an IQ of 181, Bobby Fischer becomes America's youngest chess master at only 13. His successes soon earn him unprecedented popularity. At the height of the Cold War, Fischer finally achieves the unthinkable: he is the first American to stand up to the Soviets, long considered invincible, dethroning Boris Spassky as world champion in the 1972 "Match of the Century".
But over time Fischer, who had always been considered eccentric, changed and became paranoid: he believed that the Soviet government wanted to kill him after his victory, played no tournament chess for 20 years, developed an enthusiasm for the Mafia, became an anti-Semite and joined an apocalyptic sect. In this book, Frank Brady draws on the family archive and Bobby Fischer's private e-mails, as well as posthumously released FBI files, to tell the bizarre life story of the chess genius. It is a tragic odyssey that begins in poverty in Brooklyn and leads, via the world championship title and indescribable fame, into illness and bitter loneliness.

**Garry Kasparov on Garry Kasparov, Part 1: 1973-1985** - Garry Kasparov 2011-10-01 Garry Kasparov on Garry Kasparov, Part 1 is the first book in a major new three-volume series. This series will be unique in that it will record the greatest chess battles played by the greatest chessplayer of all time. The series is in itself a continuation of Kasparov's mammoth history of chess, comprising My Great Predecessors and Modern Chess. Kasparov's historical volumes have received great critical and public acclaim for their rigorous analysis and comprehensive detail regarding the developments in chess that occurred both on and off the board. This new volume and series continues in this vein, with Kasparov scrutinising his most fascinating encounters from the period 1973-1985 whilst also charting his development away from the board. This period opens with the emergence of a major new chess star from Baku and ends with Kasparov's first clash with reigning world champion Anatoly Karpov, a mammoth encounter that stretched out over six months. It had been known in Russia for some time that Kasparov had an extraordinary talent, but the first time this talent was unleashed on the western world was in 1979. The Russian Chess Federation had received an invitation for a player to participate in a tournament at Banja Luka and, under the impression that this was a junior event, sent along the fifteen-year-old Kasparov (as yet without even an international rating!). Far from being a junior tournament, Banja Luka was actually a major international event featuring numerous world-class grandmasters. Undeterred, Kasparov stormed to first place, scoring 11½/15 and finishing two points clear of the field. Over the next decade this 'broad daylight' between Kasparov and the rest of the field was to become a familiar sight in the world's leading tournaments.

**Garri Kasparow lehrt Schach** - Gari Kasparov 1988

**Invisible Chess Moves** - Emmanuel Neiman and Yochanan Afek Every chess player knows that some moves are harder to see than others. Why is it that, frequently, uncomplicated wins simply do not enter your mind? Even strong grandmasters suffer from blind spots that obscure some of the best ideas during a game. What is more: often both players fail to see the opportunity that is right in front of their eyes. Neiman and Afek have researched this problem and discovered that there are actually reasons why your brain discards certain ideas. In this book they demonstrate different categories of hard-to-see chess moves and clearly explain the psychological, positional and geometric factors which cloud your brain. Invisible Chess Moves, with its many unique examples, instructive explanations and illuminative tests, will teach you how to discover your blind spots and see the moves which remain invisible for others.
Your results at the board will improve dramatically because your brain will stop blocking winning ideas.

The game of chess can become a key factor in a person's education and in the formation of their character and thinking. Chess draws on the basic principles of the psychology of learning: memory, pattern recognition, strategy and tactics. All of these variables interact during a game of chess and produce the outcome of the human thinking process: a win or a loss. The number of possible moves and variations in chess is extremely large, even though it is finite. The game can therefore be analysed and organised for study, just like music, calculus or foreign languages. Within the rules, the player can thus draw on pre-established schemes and systems, including openings, the middlegame and the endgame, in order to achieve victory. Just as in life.

**Masterpieces and Dramas of the Soviet Championships, Volume III** - Sergey Voronkov The third volume of Sergey Voronkov's epic tale takes the reader on a historical journey through the late Stalinist period in the USSR. It covers in depth the five Soviet championships from 1948 to 1952 and the playoff match between Botvinnik and Taimanov in 1953, which concludes one month before Stalin's death. Against a background of rampant anti-Semitism, a new wave of repressions and descent into the First Cold War, in which chess was an important front, the USSR captures the world chess crown, and Botvinnik and the generation that followed him, including Smyslov, Keres, Bronstein and Boleslavsky, assert their places at the top tables of Soviet and indeed global chess. Yet a new group of legends begins to emerge, including Petrosian, Geller, Korchnoi, Taimanov, Averbakh, Simagin, Kholmov and Furman making their championship debuts, as well as a semi-final appearance by Nikitin and Spassky's first quarter-final. At the same time, the reader learns about the lesser-known masters Yuri Sakharov and Johannes Weltmander, victims of Stalinism who found solace in chess from their otherwise tragic lives. The present volume contains 77 games and fragments, once again mostly annotated by the participants and other contemporary masters, augmented with modern computer analysis. It is illustrated with over 220 photos and cartoons from the period. Many of these photos come from unique archives, including that of David Bronstein, and are published for the first time.

**Warum wir Putin stoppen müssen** - Garry Kasparov Kremlin opponent Kasparov on Putin's efforts to divide the free world. The rise of the high-ranking former KGB officer Vladimir Putin to the Russian presidency in 1999 could have been a warning signal to us that Russia was moving in an undemocratic direction. In the years that followed, however, while the USA and the other leading nations pursued a policy of appeasement towards Russia, Putin developed not only into a dictator but into a global threat. With its vast arsenal of nuclear weapons, Putin's Russia forms the centre of a worldwide assault on political freedom. Like ISIS or Al Qaeda, Putin's Russia sets itself against the democratic countries of this world. It remains stuck in the Cold War and has not learned its lessons from it. So that we do not become entangled in a new cold war, Kasparov demands that we in the USA and Europe take a clear stand against Putin at the economic and diplomatic level.
As long as the heads of state of the democratic countries continue to maintain relations with Putin and negotiate with him, he enjoys recognition, credibility and support at home. Kasparov argues with his characteristic clear logic, out of conviction and out of love for his country. Warum wir Putin stoppen müssen ("Why We Must Stop Putin") is a call to act and to stop ignoring the threat posed by Putin's Russia.

**Planning: Move by Move** - Zenón Franco 2019-09-01 "First the idea and then the move!" Miguel Najdorf used to say in his habitually enthusiastic fashion; that statement is the perfect summary of planning in chess. Planning is of crucial importance in chess, and yet this is an area that has not been well discussed or explained to ambitious players who wish to improve. A very well known saying in chess is "Better a bad plan than no plan at all". Playing without a plan, effectively staggering from one move to the next, is a recipe for disaster. It is essential to have some kind of idea of what you are trying to achieve and how to go about it. However, planning is not a straightforward matter. A good plan might be very short, lasting just two or three moves. Another plan might require almost an entire game to implement. A plan can be highly ambitious and complex or somewhat modest and simple. In chess, as in life, circumstances can change quickly, and when they do, new plans are needed. How is a player expected to juggle all these different concepts while dealing with the immediate problems posed by the opponent's most recent move? In this book, grandmaster and experienced author Zenón Franco explains planning in detail. He organises the material in terms of typical structures, advantage in space, manoeuvring play, simplification and, finally, attack and defence. Using games played by elite players, he explains how plans are formed and carried out in these different scenarios. If you want to take your game to the next level, then Planning: Move by Move will enable you to do this.

**The Chess Puzzle Book 4** - Karsten Mueller 2012-12-05 Welcome to The Chess Puzzle Book 4! It mostly deals with the important technical question of how to convert a static advantage. As noted by Mark Dvoretsky in his Foreword: "I cannot think of any books with high-quality exercises regarding such topics as domination, the 'do not hurry' principle, the principle of two weaknesses, etc., all of which are discussed by Müller and his co-author Alexander Markgraf ... I hope that you enjoy this new book by Müller and Markgraf and I encourage you to seriously study the positions discussed in the book. As a result, you will significantly progress in your understanding of chess and improve your results." Topics include Prophylaxis, The Principle of Two Weaknesses, The Right Exchange, Domination, Do Not Rush, and Converting an Advantage. There are also many well-chosen exercises with comprehensive solutions to help guide and instruct the reader. The Chess Puzzle Book 4 is the fourth volume in the series formerly known as the ChessCafe Puzzle Books.

**Garry Kasparov on Garry Kasparov, Part 3** - Garry Kasparov Garry Kasparov on Garry Kasparov: Part III is the final volume in a major three-volume series made unique by the fact that it records the greatest chess battles played by the greatest chessplayer of all time.
Kasparov's series of historical volumes have received great critical and public acclaim for their rigorous analysis and comprehensive detail regarding the developments in chess that occurred both on and off the board. The first two volumes in this series saw Kasparov emerging as a huge talent, toppling his great rival Anatoly Karpov and then defending the World Championship title on three occasions. This third volume focuses on the final 12 years of Kasparov's career up until his retirement from full-time chess in 2005. This period witnessed three further World Championship matches: wins against Short (London 1993) and Anand (New York 1995) before the loss against Kramnik (London 2000) which finally ended Kasparov's 15-year tenure as world champion. This period also saw Kasparov achieve a colossal 2851 rating (1999), a record which stood until 2013. Despite the loss of the World Championship, Kasparov continued to be ranked as the world number one and dominated the elite tournament circuit. He won the Linares super-tournament for four consecutive years (1999-2002), with the fourth of these victories in 2002 concluding an unprecedented run of ten straight wins in the world's elite events (Linares 4, Wijk aan Zee 3, Sarajevo 2 and Astana 1). The games in this volume feature many masterpieces of controlled aggression played against the world's absolute best.

**The Magic Tactics of Mikhail Tal** - Karsten Muller 2014-03-07 Mikhail Tal was one of the greatest geniuses of chess history. The magician from Riga, as he was known because of his dazzling attacking games, took the chess world by storm and in 1960, at the age of twenty-three, won the world championship. His sacrificial style made Tal immensely popular with chess players all over the world. In this book Grandmaster Karsten Muller and chess journalist Raymund Stolze have created an instructional chess tactics guide by investigating and explaining the secrets of his breathtaking combinations. Moreover, the authors have selected one hundred exercises from Tal's games which will teach amateurs how to finish a game with a stunning sacrifice.

**Meine besten Partien** - Anatoly Karpov 2006

**Garry Kasparov on My Great Predecessors, Part Three** - Garry Kasparov 2020-06-15 This magnificent compilation of play from the 1960s through to the 1970s forms the basis of the third part of Garry Kasparov's history of the World Chess Championship. This volume features the play of champions Tigran Petrosian (1963-1969) and Boris Spassky (1969-1972).

**Die sieben Todsünden des Schachspielers** - Jonathan Rowson 2003-07-01

**Kingwalks** - Yasser Seirawan 2021-06-20 The Fearsome Fascination of Kingwalks! Marching your king across the board – at times right through or into enemy lines – may be both exhilarating and terrifying. Nothing may be quite as satisfying as a majestic kingwalk across the board which brings you glorious victory. And nothing as tragicomic as a needless journey ending in epic failure. Chessplayers are fascinated by kingwalks, perhaps because of their inherent contradiction and even implausibility. The most important – and vulnerable – chess piece does something other than trying to remain safe.
Topics include: Kingwalks to Prepare an Attack; Kingwalks in Anticipation of an Endgame; Kingwalks to Defend Key Points; Kingwalks to Attack Key Points or Pieces; Mating Attacks; Escaping to Safety Across the Board; Escaping to Safety Up the Board; Kingwalks in the Opening; Kingwalks in the Endgame; Double Kingwalks; and Unsuccessful Kingwalks. For sheer entertainment as well as instructive value, the kingwalk is transcendent! Executing a successful kingwalk has the power to make a chessplayer happy, and the same can be said about playing over the many beautiful examples in this book. Enjoy! From the Foreword by Hans Ree

About the Authors: American grandmaster Yasser Seirawan is a four-time U.S. champion. He also won the World Junior Championship in 1979. He is one of the best-selling chess authors and is considered one of the top commentators for games broadcast on the web. Canadian master Bruce Harper has been champion of British Columbia many times and has also participated in several Canadian championships. He is the co-author with Yasser Seirawan of the highly acclaimed three-volume series, *Chess on the Edge*, chronicling the career of Canadian grandmaster Duncan Suttles. He is also co-author, with American grandmaster Hikaru Nakamura, of *Bullet Chess: One Minute to Mate*.

**The Wisest Things Ever Said About Chess** - Andrew Soltis 2013-01-08
• ‘The best opening is the opening your opponent doesn’t know.’
• ‘The winner of the game is the player who makes the next-to-last mistake.’
This fascinating book contains 300 of the most astute insights on chess ever uttered, culled from three centuries of great players. Each of these invaluable maxims is illustrated with an annotated chess position, making the book a short cut to learning from the masters. These snippets of wisdom are arranged into chapters for easy reference: Calculation, Intuition, Strategy, Position Evaluation, Openings, Sacrifices, Attitude, Endgames, Mistakes, Studying, Time Management and Tournament Tactics. This is a great book to dip in and out of – every page contains a nugget of wisdom that will help you hone your own chess skills and win your next game.

**The Shereshevsky Method to Improve in Chess** - Mikhail Shereshevsky 2018-01-25 Two instructional classics condensed into one practical volume! In 2014 the Russian Chess Federation started a wide-ranging programme aimed at the revival of chess in Russia. One of the first actions that were taken was commissioning legendary Belarusian chess coach Mikhail Shereshevsky to recapitulate and condense his famous training methods. In doing so Shereshevsky has created a totally reworked compendium of his acclaimed classics Endgame Strategy and The Soviet Chess Conveyor, with many new examples, exercises and discussions of various training methods. Furthermore, he has added a new and highly effective approach on how to calculate variations. Club players all over the world who wish to improve their game now have access to Shereshevsky's famous training programme in one volume and can learn:
• How to build an opening repertoire
• How to study the chess classics to maximum benefit
• How to master the most important endgame principles
• How to effectively and efficiently calculate variations
The Shereshevsky Method offers a unique opportunity to improve your game with one of the supreme examples of Russian chess training excellence. Studying this manual will enrich your understanding of chess enormously and help your progress on the way to chess mastery.
**Schach mit Neuem Schwung** - Jeremy Silman 2016-03-23 Schach mit neuem Schwung, the German translation of the fourth and completely revised edition of Silman's legendary "How to Reassess Your Chess", is a modern classic in which Silman takes his groundbreaking concept of imbalances to an entirely new level. The book is aimed at players rated between 1400 and 2100 and at trainers looking for an immediately applicable chess course. In it, the author takes the reader on a journey that broadens their thinking, explains the fundamentals of imbalances, makes sure that every detail of the imbalances is understood, and thereby gives the player and chess lover something they have always dreamed of but always considered unattainable: a basic positional understanding at master level. A section on practical chess psychology (entitled 'Psychological Meanderings') presents never-before-published ideas about the psychological processes that prevent players of all strengths from developing, and reveals easily applicable tips and techniques that will help anyone overcome these widespread mental and psychological weaknesses. Hundreds of games, brought to life by vivid explanations, and stories that are both humorous and instructive illustrate the book's themes in a personal and entertaining way. If the positional masterpieces of the chess legends have always been incomprehensible to you, if chess strategy has always been a closed book to you, and if you believe that in positional play you are a pawn rather than a master, then Schach mit neuem Schwung can change your life.

**Tal, Petrosian, Spassky and Korchnoi** - Andrew Soltis 2018-12-06 This book describes the intense rivalry--and collaboration--of the four players who created the golden era when USSR chess players dominated the world. More than 200 annotated games are included, along with personal details--many for the first time in English. Mikhail Tal, the roguish, doomed Latvian who changed the way chess players think about attack and sacrifice; Tigran Petrosian, the brilliant, henpecked Armenian whose wife drove him to become the world's best player; Boris Spassky, the prodigy who survived near-starvation and later bouts of melancholia to succeed Petrosian--but is best remembered for losing to Bobby Fischer; and "Evil" Viktor Korchnoi, whose mixture of genius and jealousy helped him eventually surpass his three rivals (but fate denied him the title they achieved: world champion).

**Garry Kasparov's Greatest Chess Games** - Igor Stohl 2006-02-05 Garry Kasparov has dominated the chess world for more than twenty years. His dynamism and preparation have set an example that is followed by most ambitious players. Igor Stohl has selected the best and most instructive games from Kasparov's later years, and annotated them in great detail. The emphasis is on explaining the thoughts behind Kasparov's decisions, and the principles and concepts embodied by his moves. Stohl provides a wealth of fresh insights into these landmark games, together with many new analytical points. This makes the book outstanding study material for all chess enthusiasts. Garry Kasparov was born in 1963, and burst onto the scene in the late 1970s with a series of astonishing results in Soviet and international events. In 1985 he became the youngest world champion in history by defeating Anatoly Karpov in an epic struggle.
When he announced his retirement from professional chess twenty years later, he was still world number 1. Kasparov is an internationally renowned figure, famous even among the non-chess-playing public.

**Learn to Play Chess Like a Boss** - Patrick Wolff 2019-09-17 Stop playing like a pawn and start playing like the king. You already know just how enjoyable--and challenging--the game of chess can be. For those who play, chess leads to a lifetime of fun. But how do you make the first move to learn the rules and transform from a pawn to a king? The path to a perfect checkmate is in your hands! In the pages of this book, you'll find an introduction to all the chess pieces including their strengths and weaknesses, tips on how to protect your pieces and prevent their capture, and guidance on when to attack and defend like a boss. You'll also find a bonus tear-out card to take your new tactics on the go!

**Play Unconventional Chess and Win** - Noam Manella The computer has changed the way top players think about chess. The silicon mind has no psychological barriers. It is "willing" to check moves that most humans, including top players, consider absurd and reject instantly. Thus this brave new computer era inevitably leads to a reassessment of old axioms, principles and evaluations. In this book the reader will discover the incredible power unconventional moves can have. These moves contradict the most fundamental principles of the "old chess", and yet most of them were played by leading grandmasters. At first sight these moves look so strange that the reader cannot avoid asking, "Was this grandmaster inspired or drunk?" The answer will definitely surprise you.

**The 3...Qd8 Scandinavian** - Daniel Lowinger 2013-11-25 What's Old Is New -- and Surprisingly Strong! The world's oldest opening variation, 3...Qd8 in the Scandinavian Defense, has resurfaced in the last decade to give players at all levels a winning edge. Whether you prefer a sharp tactical game or slower positional maneuvering, the 3...Qd8 Scandinavian provides a genuine alternative for club players and grandmasters seeking to play for a win from the outset. Elite players such as Michael Adams, Josif Dorfman, Kiril Georgiev and Julian Hodgson, among others, have successfully raised the banner of the 3...Qd8 Scandinavian. As the author demonstrates, this variation's doubtful reputation is undeserved. It is completely playable -- and easy to learn! 3...Qd8 is not the ugly duckling sibling of 3...Qa5 and 3...Qd6 -- it is a superb alternative. "Dan's a strong player, but he's an even stronger teacher. The book sparkles with practical insight, lucidly explained." International Grandmaster Zviad Izoria

**Dreihundert Schachpartien** - Siegbert Tarrasch 1909
On the meaning and use of contribution links

Sotirios Liaskos, Norah Alothman, Alexis Ronse, and Wisal Tambosi
School of Information Technology, York University, 4700 Keele St., Toronto, Canada, M3J 1P3
{liaskos,norah,aronse,email@example.com

Abstract. Contribution links are at the core of goal modelling languages of the $i^*$ family. They allow the representation of how the satisfaction of one goal is affected by the satisfaction of others, thereby supporting a deep and detailed understanding of the impact of low-level design decisions on high-level stakeholder objectives in various decision support scenarios. Several approaches have been proposed in the literature for representing and performing inferences with the construct. While theoretical arguments are typically evoked to support each such method, usability and intuitiveness for users are also important for deciding which method is suitable for which task. In this paper, we offer a short summary of some of those approaches for treating contribution links and review a group of initial experimental studies we have conducted to understand how untrained users perceive the meaning of contribution links, by observing the inferences users spontaneously make with them.

Keywords: Conceptual Modelling · Goal Models · Model Comprehension · Experimental Study.

1 Introduction

One of the most important features of goal modelling languages within the $i^*$ family [14,3,4] are contribution links. Such links allow the expression of the supposition that satisfaction of one goal within the model affects satisfaction of another goal in some way. The construct is particularly useful for representing and exploring how various low-level options encoded within goal models affect higher-level stakeholder objectives, thereby assisting decision making when precise quantitative decision models or hard evidence are lacking. Nevertheless, due to the abstract nature of the construct, it seems to be difficult to pinpoint its precise meaning and, subsequently, to find an obviously effective way to represent such meaning. The variety of ways found in the literature to represent and understand the semantics of contributions appears to be evidence of this difficulty. Thus, there are qualitative contribution links of various kinds, in which symbols and words are used to convey the quality and magnitude of the contribution, as well as quantitative contribution links, in which numbers, also of various formats, are employed together with symbols such as signs and subscripts to represent similar information. Newcomers to $i^*$ may well be perplexed as to which of the various proposals to adopt for their specific needs. We believe that the problem is too central to $i^*$'s usefulness and adoption potential to be ignored. In this paper, we offer a brief review of some of the proposals offered by the literature so far (Section 2), followed by a presentation of an experimental research program we have been engaging in to explore the intuitiveness of various contribution representation approaches (Section 3). We close with an outline of our medium-term research agenda (Section 4).

2 Understanding and Representing Contributions

A contribution link $A \xrightarrow{l} B$ from goal $A$ to goal $B$ generally shows that the satisfaction of goal $B$ is affected by the satisfaction of goal $A$ according to label $l$.
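A concrete encoding can make the construct easier to manipulate in what follows. The minimal sketch below is our own illustration (the class and field names are ours, not notation prescribed by the $i^*$ literature); the label type is deliberately left open, since, as discussed next, labels may be qualitative symbols or numbers.

```python
# A minimal encoding of a goal graph with labelled contribution links.
# Illustrative only: names are ours, not from the i* literature.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass(frozen=True)
class Goal:
    name: str

@dataclass
class Contribution:
    origin: Goal              # the goal A whose satisfaction propagates
    target: Goal              # the goal B being affected
    label: Union[str, float]  # e.g. "+", "--", or a weight such as 0.7

@dataclass
class GoalModel:
    goals: List[Goal] = field(default_factory=list)
    links: List[Contribution] = field(default_factory=list)

    def incoming(self, goal: Goal) -> List[Contribution]:
        """All contribution links targeting `goal` (the A_i of the text)."""
        return [c for c in self.links if c.target == goal]

A, B = Goal("A"), Goal("B")
model = GoalModel(goals=[A, B], links=[Contribution(A, B, "+")])
assert [c.origin.name for c in model.incoming(B)] == ["A"]
```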
The quality (e.g., positive or negative) and strength of the contribution effected on $B$ depend on our understanding of the state of satisfaction of $A$ and of label $l$. The literature offers several ways of representing $l$. In qualitative frameworks, the label $l$ can be a symbol such as “+”, “−” signifying partial and “++”, “−−” signifying sufficient contribution [14,5,2]. As of iStar 2.0, words are proposed instead of symbols (“help”, “hurt”, “make”, “break”). Labels can also be quantitative, i.e., a number from some numeric interval and, if relevant, signed [5,2,10]. Labels may also contain subscripts, as in “0.2$_{+D}$” and “−$_S$”, when more than one variable is used to denote goal satisfaction status [5].

Contribution labels allow users of the diagram to perform inferences about the satisfaction status of one goal given the corresponding status of other goals in the diagram. Typically, some notion and representation of belief or evidence about partial goal satisfaction is introduced to allow such inferences. In qualitative frameworks, partial goal satisfaction is represented by associating the goal with a variable that takes values from some ordered set characterising “levels” of satisfaction (evidence/belief), such as the set {N, P, F} denoting No, Partial, and Full satisfaction of a goal, respectively. Visually, various icons are used in place of the symbols {N, P, F} as annotations next to the goal they refer to [5,2]. In quantitative frameworks, a continuous domain is used for the variable, such as [0.0, 1.0] [10], [0,100] or [−100,100] [2], again commonly represented as annotations next to the goal in question. Giorgini et al. [5] define two variables for each goal, one to capture satisfaction and one to capture denial, thereby expressing inconsistencies in our beliefs about the satisfaction of goals.

The way by which contribution links $A \xrightarrow{l} B$ can be used to perform inferences about partial goal satisfaction is expressed via rules that show how a given partial satisfaction level of goal $A$, say $sat(A)$, affects the partial satisfaction level of goal $B$, $sat(B)$, based on what label $l$ is – noting that denial values $den(A)$ and $den(B)$ can also be considered. Moreover, in the general case, $B$ is targeted by more than one goal $A_1, A_2, \ldots$ through contributions carrying different labels $l_1, l_2, \ldots$. Thus, to fully define the satisfaction of $B$ we need rules which dictate (a) how the satisfaction level of each $A_i$ and the label $l_i$ are combined into an *effect* from $A_i$, and (b) how the corresponding effects from all $A_i$ should be *aggregated* to calculate the satisfaction of $B$.

| Approach | Quantitative: Effect | Quantitative: Aggregation | Qualitative: Effect | Qualitative: Aggregation |
|---|---|---|---|---|
| URN ([2]) | Multiplication | Grand Sum | Min | (Custom) |
| AHP-inspired ([10,13]) | Multiplication | Clustered Sums | – | – |
| Evidence-based ([5]) | Multiplication / Serial-Parallel / Min | Max | Min | Max |

**Table 1.** Alternative meanings of contribution links.

There is variability in the literature with regard both to how effects should be calculated and to how they should be aggregated. In qualitative frameworks [14,5,2] a set of rules in logical or tabular form is defined for deciding both of the above. Given their two-variable system, Giorgini et al. [5] follow an evidence maximization principle for aggregating such effects. Amyot et al.
[2] use a single-value system and, as such, use a more complex function that explicitly labels conflict. In both, the strength of the contribution effect is the minimum between the strength of the label and the satisfaction of the origin, noting that negative labels invert satisfaction into denial and denial into satisfaction. Aggregation, however, is different in Amyot et al., where strong and weak effects are counted and compared separately and then combined in a hybrid additive/maximization fashion, marking the co-presence of strong positive and negative effects with “conflict” labels (Table 3 of [2]). We note that, in the context of such conflicts, Horkoff and Yu [7] suitably propose human intervention for their resolution, instead of relying on rules.

Quantitative frameworks use algebraic expressions instead of rules and exhaustive tables. Amyot et al. [2] multiply the satisfaction values of goals $A_i$ (a number in [−100,100]) with the label $l_i$ (also a number in [−100,100]). The satisfaction of $B$ is calculated by adding up the results, as in $sat(B) = \sum sat(A_i) \times l_i$. In the AHP-based proposals by Liaskos et al. [10] and Maiden et al. [13], the same aggregation approach is followed, with the important difference, however, that each goal can receive multiple groups of incoming contribution links, each group independently concerned with a specific local decision. Thus, the AHP-based approach is not concerned with calculating a global satisfaction value that results from a total evaluation of a goal model, but rather sets of satisfaction values corresponding to options in decision problems expressed as OR-decompositions in the model. Another important difference of that approach is that it does not define goal denial, which greatly simplifies the problem of devising effect and aggregation rules. In their quantitative framework, Giorgini et al. avoid committing to a specific way by which the effect of a contribution is calculated: it can be the product $(sat(A_i) \cdot l_i)$ or a serial/parallel resistance model $(\frac{sat(A_i) \cdot l_i}{sat(A_i) + l_i})$, while a model more similar to the qualitative arrangement is that of minimization $(\min(sat(A_i), l_i))$. In all cases, aggregation follows a maximization principle, as in $sat(B) = \max(sat(A_1) \otimes l_1, sat(A_2) \otimes l_2, \ldots)$, where $\otimes$ represents any of the effect calculation methods above. A summary of effect calculation and aggregation approaches can be seen in Table 1; we stress that it is not exhaustive.

3 Evaluating The Intuitiveness Aspect

The variety of methods to represent and reason with contribution links brings up the question of which of the methods is appropriate and for what purpose. Theoretical approaches, such as ontological analysis (e.g., [6]) or demonstrative appeals to, e.g., expressiveness, flexibility or amenability to tractable automated reasoning, are normally followed to assess the usefulness of each option. However, an additional criterion is how the representations work for users in practice, i.e., how well they help users employ goal models to their benefit. In this context, we have been specifically exploring how contribution links are spontaneously understood by users who are not trained in the exact semantics of such constructs. Our goal is to see whether any version of the operational semantics we reviewed above is more intuitive for users, i.e., more readily understood. In our first study of this kind [1], we focussed on quantitative contribution links.
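To make the alternatives of Table 1 concrete, the following minimal Python sketch (our illustration; the function names are ours, and satisfaction values and weights are assumed to lie in [0, 1]) computes $sat(B)$ for a goal targeted by two contributions under three of the effect/aggregation pairings discussed above.

```python
# Three of the propagation semantics summarized in Table 1, applied to a
# goal B receiving contributions from origins with satisfactions `sats`
# and labels (weights) `labels`. Values are assumed to lie in [0, 1].

def effect_mult(sat, l):              # URN / AHP-inspired effect
    return sat * l

def effect_serial_parallel(sat, l):   # resistance model of Giorgini et al.
    return (sat * l) / (sat + l) if sat + l > 0 else 0.0

def effect_min(sat, l):               # minimization effect
    return min(sat, l)

def agg_sum(effects):                 # grand-sum aggregation
    return sum(effects)

def agg_max(effects):                 # evidence-maximization aggregation
    return max(effects)

def sat_B(sats, labels, effect, aggregate):
    return aggregate([effect(s, l) for s, l in zip(sats, labels)])

sats, labels = [0.9, 0.2], [0.5, 0.5]          # weights add up to 1.0
print(sat_B(sats, labels, effect_mult, agg_sum))             # 0.55
print(sat_B(sats, labels, effect_min, agg_max))              # 0.5
print(sat_B(sats, labels, effect_serial_parallel, agg_max))  # ~0.32
```

The same inputs thus yield three different values for $sat(B)$; the study described next exploits exactly this divergence by offering each method's output as a candidate answer.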
We developed a number of goal models with quantitative labels and trained a number of users on the abstract meaning of contribution links, without, however, exposing them to any of the precise inference rules of Table 1. We then presented them with small-to-medium-sized goal models and asked them to perform forward reasoning, i.e., to infer the satisfaction level of a top-level goal given the corresponding levels of the leaf-level goals. They were specifically given four options for the satisfaction of the top-level goal, each corresponding to the result obtained by following the rules of one of the methods of Table 1. An additional factor was added: for some goal models, the weights $l_i$ of contributions targeting a goal always add up to 1.0. In the results, we firstly saw that some semantic choices were preferred over others, with the AHP-inspired (multiplication/sum) and the min/max semantics being more popular and serial-parallel/max being the least popular. In other words, following the serial-parallel/max propagation rules we arrive at satisfaction levels that untrained users do not expect. Moreover, in goal models in which incoming contribution labels were restricted to add up to 1.0, users tended to pick the choice corresponding to the multiplication/sum rules, apparently, as we hypothesize, after spontaneously inferring that the meaning of contributions is that of the share each origin goal has in the satisfaction of the destination goal. The results also show some effect of size, with the min/max interpretation increasing in popularity as size increases.

In a different study [11], we took up qualitative contribution labels and the Giorgini et al. semantics of label propagation. This time we focussed exclusively on effect calculation, by considering only two goals, one contributing to the other. As before, we offered various examples of such pairs of goals with different labels and satisfaction levels of the origin goal, and asked participants what they thought the satisfaction of the destination goal was. We then compared their responses with the normative semantics. The most important finding concerns the perception problems of negative contributions, especially when combined with goal denial. According to the formal semantics, denial of the origin goal translates to satisfaction of the destination goal when the contribution label is negative (“−” or “−−”). However, our participants (first-year university students) did not assume that the two negatives combined would result in positive satisfaction. An additional interesting finding is that, even in cases where the origin had no satisfaction or denial assigned to it, participants assumed the destination to still have positive or negative values, interpreting contributions as generators of satisfaction or denial rather than mere propagators thereof.

In our latest effort [12], a direct comparison between qualitative and quantitative contribution links is attempted. Participants are presented with single decisions (OR-decompositions of goals) that are connected to a hierarchy of soft-goals through contribution links. They are asked to identify the option that, in their opinion based on what they see, best satisfies the top-level goal. Participant responses matched the multiplication/sum semantics of the quantitative models much more frequently than the min/max semantics of the qualitative ones, an effect we attribute to the familiarity of participants with the interpretation, aggregation and comparison of numbers.
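The inversion rule that tripped participants up can be stated compactly. The following toy rendering is our own simplification of the semantics of [5], covering a single link only: satisfaction and denial levels are ordered N < P < F, the effect strength is the minimum of the origin level and the label strength, and negative labels swap satisfaction and denial evidence, which is exactly the step untrained readers tended not to make.

```python
# Toy single-link qualitative propagation in the style of Giorgini et al. [5].
# Our simplification: one origin, one link, no aggregation across links.
LEVELS = {"N": 0, "P": 1, "F": 2}   # No < Partial < Full
NAMES = {v: k for k, v in LEVELS.items()}

def effect(origin_sat, origin_den, label):
    # label is "++", "+", "-" or "--"; partial labels cap strength at P
    strength = 2 if label in ("++", "--") else 1
    sat = min(LEVELS[origin_sat], strength)
    den = min(LEVELS[origin_den], strength)
    if label.startswith("-"):       # negative links invert sat and den
        sat, den = den, sat
    return NAMES[sat], NAMES[den]

# A fully denied origin over a "--" link fully satisfies the destination,
# the combination participants did not spontaneously accept:
print(effect("N", "F", "--"))   # ('F', 'N')
# A partial negative link caps the propagated evidence at P:
print(effect("N", "F", "-"))    # ('P', 'N')
```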
In parallel, we have also experimented with the impact of the way contributions are visualized on their intuitiveness and correct use [9]. Using optimal-decision detection exercises similar to the ones described above [12], we considered three different representations of contribution-link-based decision problems: traditional graphs, tree-maps, and a combination of bar- and pie-charts. We found that the latter allowed for more accurate identification of the optimal decision. Hence, attempting to replace symbolic representations with visual ones appears to improve the task of making inferences in goal models.

4 Future Work

The main motivation of the presented research program is to establish goal models as useful decision support tools, worthy of the effort investment to construct and maintain them. Key to this is the development of a deep understanding of contribution relationships in a way that also satisfies user expectations, and the development of intuitive ways to represent and perform inferences with them. Our plans for future empirical exploration follow a number of directions. Firstly, continuing the path of the works mentioned earlier, we plan to turn to more qualitative empirical methodologies – similar to those of Horkoff and Yu [7] – aimed at understanding what goes on in users' minds when they are confronted with a contribution link network and asked to perform reasoning with it. Secondly, we plan to make the plethora of associated automated reasoning techniques (see [8] for a survey) part of our investigation. Thus, we wish to explore the extent to which the way reasoners aggregate local contribution structures into a final evaluation of interest coincides with users' intuition, and also to understand what affects users' trust in the reasoner. Finally, we intend to continue exploring visualizations alternative to the traditional box-and-line ones, focussing on ways to replace symbolic representations of contribution and satisfaction with visual ones.

References

1. Alothman, N., Zhian, M., Liaskos, S.: User Perception of Numeric Contribution Semantics for Goal Models: an Exploratory Experiment. In: Proceedings of the 36th International Conference on Conceptual Modeling (ER’17). pp. 451–465 (2017)
2. Amyot, D., Ghanavati, S., Horkoff, J., Mussbacher, G., Peyton, L., Yu, E.S.K.: Evaluating goal models within the goal-oriented requirement language. International Journal of Intelligent Systems 25(8), 841–877 (2010)
3. Amyot, D., Mussbacher, G.: User Requirements Notation: The First Ten Years, The Next Ten Years. Journal of Software (JSW) 6(5), 747–768 (2011)
4. Dalpiaz, F., Franch, X., Horkoff, J.: iStar 2.0 Language Guide. The Computing Research Repository (CoRR) abs/1605.0 (2016)
5. Giorgini, P., Mylopoulos, J., Nicchiarelli, E., Sebastiani, R.: Reasoning with Goal Models. In: Proceedings of the 21st International Conference on Conceptual Modeling (ER’02). pp. 167–181. London, UK (2002)
6. Guizzardi, R.S., Franch, X., Guizzardi, G., Wieringa, R.: Ontological distinctions between means-end and contribution links in the i* framework. In: Proceedings of the 32nd International Conference on Conceptual Modeling (ER 2013). pp. 463–470. Hong-Kong, China (2013)
7. Horkoff, J., Yu, E.S.K.: Interactive goal model analysis for early requirements engineering. Requirements Engineering 21(1), 29–61 (2016)
8. Horkoff, J., Yu, E.S.: Comparison and evaluation of goal-oriented satisfaction analysis techniques. Requirements Engineering (REJ) 18(3), 1–24 (2011)
9. Liaskos, S., Dundjerovic, T., Gabriel, G.: Comparing Alternative Goal Model Visualizations for Decision Making: an Exploratory Experiment. In: Proceedings of the 33rd Annual ACM Symposium on Applied Computing (SAC’18). pp. 1272–1281. Pau, France (2018)
10. Liaskos, S., Jalman, R., Aranda, J.: On Eliciting Preference and Contribution Measures in Goal Models. In: Proceedings of the 20th International Requirements Engineering Conference (RE’12). pp. 221–230. Chicago, IL (2012)
11. Liaskos, S., Ronse, A., Zhian, M.: Assessing the Intuitiveness of Qualitative Contribution Relationships in Goal Models: an Exploratory Experiment. In: Proceedings of the 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM’17). pp. 466–471. Toronto, Canada (2017)
12. Liaskos, S., Tambosi, W.: Factors Affecting Comprehension of Contribution Links in Goal Models: an Experiment. In: Proceedings of the 38th International Conference on Conceptual Modeling (ER 2019) (to appear). Salvador, Brazil (2019)
13. Maiden, N.A.M., Pavan, P., Gizikis, A., Clause, O., Kim, H., Zhu, X.: Making Decisions with Requirements: Integrating i* Goal Modelling and the AHP. In: Proceedings of the 8th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ’02). Essen, Germany (2002)
14. Yu, E.S.K.: Towards Modelling and Reasoning Support for Early-Phase Requirements Engineering. In: Proceedings of the 3rd IEEE International Symposium on Requirements Engineering (RE’97). pp. 226–235. Annapolis, MD (1997)
Instructions to Supervisors

Confidential. To be given immediately to the teacher(s) responsible for GCE Physics. Open on receipt.

• These instructions are provided to enable centres to make appropriate arrangements for the Unit 6 Externally Marked Practical Assignment (EMPA).
• It is the responsibility of the Examinations Officer to ensure that these Instructions to Supervisors are given immediately to the Supervisor of the practical examination.

INSTRUCTIONS TO THE SUPERVISOR OF THE EXTERNALLY MARKED PRACTICAL EXAMINATION

General

Security/confidentiality

The instructions and details of the EMPA materials are strictly confidential. In no circumstances should information concerning apparatus or materials be given before the examination to a candidate or other unauthorised person. The EMPA supplied by AQA at AS and at A2 for a given academic year must only be used as the EMPA in that academic year; it may be used for practice in later academic years. Using information for any purpose beyond that permitted in this document is potentially malpractice. Guidance on malpractice is contained in the JCQ document Suspected Malpractice in Examinations and Assessments: Policies and Procedures. The Examinations Officer should give copies of the Teacher Notes (PHA3/B3/XTN and/or PHA6/B6/XTN) to the teacher entrusted with the preparation of the examination upon receipt.

Material from AQA

For each EMPA, AQA will provide:
- Instructions to Supervisors
- Section A Task 1 and Task 2 question paper/answer booklets
- Section B EMPA written test papers.

Preparation/Centre responsibility

This practical assessment should be carried out after candidates have acquired the necessary skills and after the appropriate sections of the specification have been taught, so that candidates are familiar with any specialist apparatus involved. The assessment must be carried out between the dates specified by AQA. It is the responsibility of the centre to ensure that each of the specified practical activities works with the materials provided to the candidates. The assessment and management of risks are the responsibility of the centre.

Practical Skills Verification (PSV)

Candidates must undertake the five practical activities specified, in order to demonstrate in the EMPA that they can use apparatus appropriate to the teaching of Physics at this level. In doing so, candidates will become familiar with the equipment and skills they will use in the EMPA. The teacher must confirm on the Candidate Record Form that this requirement has been met.

Section A: Task 1 and Task 2

- Candidates should work individually and be supervised throughout. They should not discuss their work with other candidates at any stage.
- The work can be carried out in normal timetabled lessons and at a time convenient to the centre. Teachers will be in the best position to judge how many sessions are appropriate for candidates in their own centre.
- The candidates’ work must be handed to the teacher at the end of each practical session and kept securely until the next stage of assessment.
- There is no specified time limit for these tasks; however, candidates should be informed by the Supervisor of the expected timescale and timetable arrangements involved in carrying out the EMPA. Candidates must also be instructed that all readings must be entered in the question paper/answer booklet provided and all working must be shown. **Scrap paper must not be used.**

**Sharing equipment / working in groups**

Candidates are to work individually.
Where resources mean that equipment has to be shared, teachers should ensure that the candidates complete the tasks individually. Where appropriate, spare sets of apparatus should be prepared to ensure that time is not lost due to any failure of equipment. Centres may choose to provide sufficient sets of apparatus for the candidates to work on Section A in a circus format, with some candidates completing the questions in reverse order. In such cases the changeover should be carefully supervised and the apparatus returned to its original state before being used again.

**Practical sessions**

Before the start of the test the apparatus and materials for each candidate should be arranged, ready for use, on the bench. The apparatus should not be assembled unless a specific instruction to do so is made in these Instructions. If a candidate is unable to perform any experiment, or is performing an experiment incorrectly, or is carrying out some unsafe procedure, the Supervisor is expected to give the minimum help required to enable the candidate to proceed. In such instances the *Supervisor’s Report* should be completed with the candidate’s name and number, reporting to the Examiner the nature and extent of the assistance given. No help may be given to candidates with the analysis of their experimental data. Any failure of equipment which, in the opinion of the Supervisor, may have disadvantaged any candidate should be detailed on the *Supervisor’s Report*.

Section B: EMPA written test

- The Section B EMPA written test should be taken as soon as convenient after completion of Section A.
- The test must be carried out under controlled conditions and must be completed in a single uninterrupted session.
- When carrying out the Section B EMPA written test, candidates should be provided with their completed copy of the Section A Task 2 question paper/answer booklet.
- Supervisors should ensure that candidates understand that Section A Task 2 is for reference only and that they must not make any written alterations to this previous work while undertaking Section B.
- The duration of the Section B EMPA written test is 1 hour 15 minutes, except where candidates have been granted additional time.

Administration

Candidates must not bring any paper-based materials into any session or take any assessment materials away at the end of a session. Mobile phones or other communication devices are not allowed.

Modifications

The equipment requirements for the experimental tasks are indicated in these Instructions. Centres are at liberty to make any reasonable minor modifications to the apparatus which may be required for the successful working of the experiment, but it is advisable to discuss these with the Assessment Advisor or with AQA. A written explanation of any such modification must be given in the Supervisor’s Report.

Absent candidates

Candidates absent for any of the Section A Tasks should be given an opportunity to carry out the tasks before attempting the Section B EMPA written test. In extreme circumstances, when such arrangements are not possible, the teacher can supply a candidate with class data. In this case, there will be no evidence for Task 1 or Task 2, so no marks can be awarded for Section A.

Redrafting

Candidates may make only one attempt at a particular EMPA and redrafting is not permitted at any stage during the EMPA.
The Supervisor’s Report

Details to be given on the Supervisor’s Report (page 19) should explain
- any part of the equipment provided that differs significantly from that specified in these Instructions
- any help given to candidates in those circumstances given on page 3.

Supervisors must also include any numerical data that is specified in the Instructions. This may involve the Supervisor performing an experiment before the test and collecting certain data. Such data should be given to the uncertainty indicated. Note that the Examiners may rely heavily on such data in order to make a fair assessment of a candidate’s work.

Security of assignments

Candidates’ scripts and any other relevant materials, printed or otherwise, should be collected and removed to a secure location at the end of each session. Under no circumstances should candidates be allowed to remove question papers from the examination room. Once completed, each candidate’s EMPA should be collated in the following order:
- Section A Task 1
- Section A Task 2
- Section B EMPA written test.

The assembled material should then be secured using a treasury tag. Completed EMPAs are to be treated in the same manner as other completed scripts and should be kept under secure conditions before their despatch to the Examiner.

Submission of materials to the AQA Examiner

By the specified deadline centres should assemble and then despatch the following materials:
- collated candidates’ scripts, in candidate number order
- the Supervisor’s Report (page 19 of these Instructions)
- a completed Candidate Record Form for each candidate, arranged in candidate number order
- a completed Centre Declaration Sheet.

Section A Task 1

Candidates are to investigate the small-amplitude oscillations of a chain, suspended from one end, in a vertical plane.

Question 1

Apparatus
- 24 steel paper clips, round ends, length 50 mm, width at widest point about 10 mm, of uniform quality; these should be formed into three short chains, each consisting of eight inter-connected paper clips with the paper clips arranged in the same way, i.e. the end with the larger diameter bend should always be linked to an end with a smaller diameter bend; mark the end of each chain with the larger diameter bend with a small blob of Tipp-Ex correction fluid or white paint
- retort stand of height at least 600 mm, fitted with a boss near the top
- strong nail or small screwdriver to be clamped horizontally; this provides the means of supporting the end of the chain
- digital stopwatch capable of reading to 0.01 s
- suitable means of providing a fiducial mark, e.g. additional stand with pointer, at the Centre’s discretion

The Examiners require no information for this question.

Question 2

Candidates are required to observe the motion of, and measure the time for energy transfer between, two coupled pendulums.

Apparatus
- two 200 g masses
- two equal lengths of strong thread, each about 80 cm long
- two retort stands of height at least 600 mm, each fitted with a boss and clamp near the top
- four small squares of thin wood or similar, to provide well-defined points of suspension for the pendulums
- one length of strong thread about 45 cm long
- the candidates will require 6 unconnected steel paper clips of the same type specified above, e.g.
round ends, length 50 mm, width at widest point about 10 mm, of uniform quality
- digital stopwatch capable of reading to 0.01 s
- labels, to be fixed to the edge of the bench (as shown in the diagram opposite), on which candidates may make fiducial marks

Supervisors should set up two identical coupled pendulums as shown in the diagram opposite. The pendulums should hang clear of the bench. Labels, on which candidates may make fiducial marks, should be fixed to the edge of the bench, as shown in the diagram. Ensure that fresh labels are provided for any candidate reusing the apparatus. The pendulums, 20.0 cm apart, each of length 70.0 cm, should be adjusted so that their periods of oscillation are identical. The masses should then be joined by a single length (approximately 40 cm) of thread. A chain of four paper clips should be suspended from the centre of the thread, as shown above. One pendulum should be displaced about 5 cm from its equilibrium position in the plane of the diagram, whilst the other is held in its equilibrium position. Both should then be released simultaneously and the time for the transfer of motion, from one to the other, and back again, should be measured. Trials have shown that this time may be between one and two minutes. For the system described above, Examiners require the period for the transfer of motion from one pendulum to the other and back again, when one mass is displaced by 5 cm while the other is held at rest and then both are released simultaneously (to ± 2 s).

Section A Task 2

Candidates are to make measurements on a chain, supported at each end, which hangs in equilibrium in a vertical plane.

Question 1

Apparatus
- steel paper clips of the type specified for Section A Task 1, e.g. round ends, length 50 mm, width at widest point about 10 mm, of uniform quality; the candidates will require between 12 and 15 unconnected paper clips for part (a) – these can be placed on a piece of A4 card with the label ‘for part (a)’ printed on it
- for part (c), the candidates will also require a chain of 24 interconnected clips: these should be joined, along the length of the chain, in the same way, i.e. an end with the larger diameter bend should always be linked to an end with a smaller diameter bend – this chain can be placed on a piece of A4 card with the label ‘for part (c)’ printed on it
- micrometer screw gauge, capable of reading to 0.01 mm
- metre ruler
- about one metre of (paper) ticker tape and short pieces of Sellotape with which to fix the ticker tape down on to the bench
- set-square
- two retort stands of height at least 600 mm, each fitted with a boss near the top
- two strong nails or small screwdrivers to be clamped horizontally; these provide the means of supporting the ends of the chain

Place all this apparatus on the bench beforehand. No prior assembly is required. Examiners require the mean length of the chains of 24 interconnected paper clips that the students will use, to ± 5 mm.

Section B

Apparatus
- small plane mirror

The mirror may be used to assist candidates in making their gradient determinations. Note that when completing Section B of the test candidates should be provided with their completed copy of Section A Task 2, whereas candidates’ copies of Section A Task 1 should not be made available to them.

1 You are to investigate the small-amplitude oscillations of a chain, suspended from one end, in a vertical plane.

1 (a) You are provided with three short chains, each consisting of eight paper clips joined together.
One end of each chain has a small white mark painted on it to show the end from which it should be suspended. Suspend one chain from the horizontally-clamped support so that the chain hangs freely in a vertical plane. The white mark should be at the point of suspension of this chain. Displace the lower end then release the chain so that it performs small-amplitude oscillations in a vertical plane, as shown in Figure 1.

Figure 1 (labels: end of chain with small white mark painted on it; suspended from horizontally-clamped support; direction of oscillation)

1 (a) (i) Make and record suitable measurements to calculate the period, $T_1$, of the oscillations of this chain. You should use a fiducial mark to assist in making these measurements.

1 (a) (ii) Connect one of the other chains to the lower end of the suspended chain, thereby doubling the number of inter-connected paper clips. The white mark on the lower chain should be at the point of suspension to the upper chain. Repeating the procedure as before, make and record suitable measurements to calculate the period, $T_2$, of the oscillations of this chain.

1 (a) (iii) Connect the remaining chain to the lower end of the suspended chain, thereby suspending all the paper clips in a single chain. The white mark on the lower chain should be at the point of suspension to the upper chain. Repeating the procedure as before, make and record suitable measurements to calculate the period, $T_3$, of the oscillations of this chain. (3 marks)

1 (b) It is suggested that $n$, the number of suspended paper clips, is related to $T$, the period of the paper clip chain, by an expression of the form $n \propto T^x$ where $x$ is an integer. With the aid of the grid provided or otherwise, use the results that you obtained in part (a) to determine the value of $x$. (4 marks)

1 (c) A student claims that $T$ can be calculated in the same manner as the period of a simple pendulum of length equal to that of the chain. Show that the student’s claim is false. (2 marks)

2 You are provided with two identical pendulums coupled to each other by thread from which four paper clips have been suspended.

2 (a) Displace the bob of the left-hand pendulum about 5 cm leftwards, keeping the string in the vertical plane defined by the rest position of the pendulums. Release the bob and observe the subsequent motion of both pendulums; you will see that the amplitude of the left-hand pendulum gradually decreases and the amplitude of the right-hand pendulum increases. After a certain time has elapsed, the left-hand pendulum briefly comes to rest and the right-hand pendulum swings with maximum amplitude; then the transfer of energy between the pendulums reverses, until the right-hand pendulum is once again at rest and the left-hand pendulum swings with maximum amplitude. Make suitable measurements to calculate the time, $\tau$, for the amplitude of either pendulum to increase from zero to a maximum and then fall to zero again. Labels, on which you may write, have been placed on the edge of the bench to assist you in making these measurements. (1 mark)

2 (b) It is suggested that $\tau$ may be inversely proportional to the number of paper clips suspended from the thread.

2 (b) (i) Make measurements to calculate $\tau$ with five paper clips suspended from the thread.

2 (b) (ii) Make additional measurements to calculate $\tau$ with six paper clips suspended from the thread.
2 (b) (iii) Explain whether your results from parts (a) and (b) show that $\tau$ is inversely proportional to the number of paper clips suspended from the thread. (4 marks)

2 (c) Explain one difficulty that might be encountered if you were to make measurements to determine $\tau$ with fewer than four paper clips suspended from the thread. (1 mark)

END OF QUESTIONS

In this experiment you are to make measurements on a chain of paper clips, supported at each end, which hangs in equilibrium in a vertical plane above the bench.

1 (a) You are provided with a number of unconnected paper clips. Place a metre ruler on the bench with the graduations uppermost and lay some paper clips against the edge of the ruler so that they are aligned in a single row, each paper clip touching the next without overlapping, as shown in Figure 2. Make suitable measurements to determine the mean length, $c$, of one paper clip. (1 mark)

1 (b) Using the micrometer screw gauge, make suitable measurements to determine the diameter, $d$, of the wire from which the paper clips have been formed. (1 mark)

1 (c) Adjust the height of the horizontally clamped supports until these are close to the top of the stands and the top surface of each is the same vertical distance above the bench. Position one metre of paper tape parallel to the edge of the bench, about 20 cm from the edge. Fix this down to the bench with Sellotape. You are also provided with a chain of 24 paper clips. Suspend one end of the chain from one horizontally-clamped support and the other end from the second horizontally-clamped support, so that the full length of the chain hangs in equilibrium in a vertical plane above the bench. Adjust the positions of the stands to which the horizontal supports are clamped until the chain lies directly above the length of paper tape and the horizontal distance, $s$, between the ends of the paper clip chain is 750 mm. Mark on the tape the point directly below the centre of the chain. Using the additional apparatus provided, measure and record values of $x$ and $y$, which are the horizontal and vertical distances respectively, from the point marked on the paper tape to junctions between paper clips in the chain, as shown in Figure 3. Take sufficient readings of $x$ and $y$ to define the shape of the chain from the centre to the right-hand end of the chain. Record all your measurements and observations.

Figure 3 (labels: $s = 750$ mm; horizontal support; junction between 12th and 13th paper clips; level of bench; $x$; $y$)

Measurements and observations. (6 marks)

1 (d) Plot, on the grid opposite, a graph of $y$ on the vertical axis and $x$ on the horizontal axis. (8 marks)

END OF QUESTIONS

Section B

Answer all the questions in the spaces provided.

1 In part (a) and part (b) of Section A Task 2 you obtained measurements to determine the mean length, $c$, of one paper clip, and $d$, the diameter of the wire from which the paper clips have been formed. It can be shown that $L$, the length of the paper clip chain used in part (c) of Section A Task 2, when laid out flat, is given by
$$L = nc - 2d(n - 1),$$
where $n$ is the number of paper clips in the chain.

1 (a) Evaluate $L$. (2 marks)

1 (b) A student suggests that because $d$ is much less than $c$, the length of the chain can be safely estimated by calculating $nc$. The student calculates the percentage difference between the calculated value of $nc$ and the true value of $L$, for different values of $n$. The student’s results are shown in Table 1.
| $n$ | percentage difference |
|-----|-----------------------|
| 1 | 0.00 |
| 2 | 2.17 |
| 4 | 3.28 |
| 8 | 3.85 |
| 16 | 4.14 |
| 32 | 4.28 |
| 64 | 4.35 |

1 (b) (i) Explain why the percentage difference increases as $n$ increases.

1 (b) (ii) The student suggests that the percentage difference tends towards a constant value when $n$ becomes very large. Explain, with reference to the data in Table 1, why the student’s suggestion might be correct.

1 (b) (iii) A different student decides that calculating $nc$ is an acceptable method of estimating $L$, provided that the percentage difference is less than 4%. Suggest how the student could use the data in Table 1 to determine the largest value of $n$ that meets this condition, and explain what the student should do so that this value of $n$ is determined accurately. You should illustrate your answer with a sketch. (5 marks)

2 A student performs the experiment using apparatus identical to that which you used. The student records the position of every junction between paper clips in the chain, starting at the centre of the chain where the 12th and 13th paper clips are joined, and finishing where the 24th paper clip meets the horizontal support at the right-hand end of the chain. Using all the data measured, the student uses a computer to produce the graph shown in Figure 4.

2 (a) Use Figure 4 to determine the gradient, $G$, at the junction between the 18th and 19th paper clips. You are provided with a small plane mirror which you may use to assist you in answering the question. (2 marks)

2 (b) The student calculates the length of the chain, $L$, and measures the horizontal distance, $s$, between the ends of the paper clip chain. The student’s results are $L = 1.17 \text{ m}$ and $s = 0.756 \text{ m}$. Using your result for $G$ and the student’s values for $L$ and $s$, evaluate

2 (b) (i) $p$, where $p = \frac{L}{4G}$,

2 (b) (ii) $q$, where $q = \frac{s}{2p}$. (1 mark)

2 (c) The sag, $r$, is the vertical distance between the point of suspension and the bottom of the chain. Evaluate $r$, where $r = \frac{p}{2}(e^q + e^{-q} - 2)$. (2 marks)

3 In Section A Task 1 you measured the period, $T$, of an oscillating chain of paper clips.

3 (i) Make a sketch to show how you used a fiducial mark (reference point) to reduce the uncertainty in your values of $T$.

3 (ii) Explain why you positioned the fiducial mark in the position shown in the sketch. (2 marks)

4 In Section A Task 1 you investigated the motion of coupled pendulums, measuring the time, $\tau$, for the amplitude of either pendulum to increase from zero to a maximum and then fall to zero again. A student performs this experiment and measures four values of $\tau$ with three, five and then seven paper clips suspended from the thread. The student’s results are shown in Table 2.

| $n$ | $\tau_1$/s | $\tau_2$/s | $\tau_3$/s | $\tau_4$/s | mean $\tau$/s | uncertainty/s | percentage uncertainty |
|-----|------------|------------|------------|------------|---------------|---------------|------------------------|
| 3 | 112.8 | 111.2 | 115.8 | 114.3 | | | |
| 5 | 67.3 | 69.9 | 64.2 | 66.2 | | | |
| 7 | 44.8 | 49.1 | 48.7 | 47.9 | | | |

4 (a) Complete the relevant column of Table 2 to show the mean value of $\tau$ for $n = 3$, $n = 5$ and $n = 7$. (1 mark)

4 (b) (i) Calculate the uncertainty in the mean values of $\tau$ for $n = 3$, $n = 5$ and $n = 7$; show the results of these calculations in the relevant column of Table 2.
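For Supervisors checking candidates’ arithmetic, the following worked example shows the kind of calculation intended in 4(a) and 4(b)(i) for the $n = 3$ row of Table 2, assuming the usual A-level convention (an assumption here, not taken from the mark scheme) that the uncertainty in a mean of repeat readings is half the range:

$$\bar{\tau} = \frac{112.8 + 111.2 + 115.8 + 114.3}{4}\,\text{s} = 113.5\,\text{s},$$
$$\Delta\tau = \frac{\tau_{\max} - \tau_{\min}}{2} = \frac{115.8 - 111.2}{2}\,\text{s} = 2.3\,\text{s},$$
$$\text{percentage uncertainty} = \frac{2.3}{113.5} \times 100\% \approx 2\%.$$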
4 (b) (ii) Use your results to calculate the percentage uncertainty in the mean values of $\tau$ for $n = 3$, $n = 5$ and $n = 7$; show the results of these calculations in the relevant column of Table 2. (2 marks)

4 (c) A student uses a motion sensor connected to a data logger to investigate the motion of one of the coupled pendulums. Data about the displacement, $x$, of the pendulum bob are recorded over an interval of 100 seconds and then displayed graphically, as shown in Figure 5.

Figure 5 (graph of the displacement of the pendulum bob against time)

4 (c) (i) Use Figure 5 to estimate $\tau$ for these coupled pendulums.

4 (c) (ii) Determine the period of the pendulum’s motion represented in Figure 5. (3 marks)

4 (d) State and explain two advantages of using a data logging technique to produce the data in an experiment such as this, compared with the method which you were required to use in Section A Task 1. (4 marks)

END OF QUESTIONS

PHYSICS (SPECIFICATIONS A AND B) PHA6/B6/XTN

Unit 6 SUPERVISOR’S REPORT

When completed by the Supervisor, this Report must be attached firmly to the attendance list, before despatch to the Examiner.

Information to be provided by the centre.

Section A Task 1

Question 1
The Examiners require no information for this question.

Question 2(a)
Examiners require the period for the transfer of motion (see page 7).

Section A Task 2

Question 1
Examiners require the length of the chains of 24 interconnected paper clips that the students will use, to ± 5 mm.

Supervisor’s Signature ...............................................................
Centre Number ............................................................................
Date ..........................................................................................

Centres may make copies of this Supervisor’s Report for attachment to individual scripts where necessary.

Copyright © 2010 AQA and its licensors. All rights reserved.
Histochemical localization of the PBAN receptor in the pheromone gland of *Heliothis peltigera*

Miriam Altstein\textsuperscript{a,*}, Orna Ben-Aziz\textsuperscript{a}, Kalpana Bhargava\textsuperscript{b}, Qijing Li\textsuperscript{c}, Manuela Martins-Green\textsuperscript{d}

\textsuperscript{a} Department of Entomology, The Volcani Center, Bet Dagan 50250, Israel
\textsuperscript{b} Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI 53226, USA
\textsuperscript{c} Department of Microbiology and Immunology, Stanford University, School of Medicine, Palo Alto, CA 94305, USA
\textsuperscript{d} Department of Cell Biology and Neurosciences, University of California, Riverside, CA 92521, USA

Received 26 June 2003; accepted 2 September 2003

**Abstract**

The presence of the pyrokinin (PK)/pheromone biosynthesis activating neuropeptide (PBAN) receptor in pheromone gland cells of *Heliothis peltigera* females was demonstrated, and its spatial distribution in the ovipositor was visualized with two photo-affinity biotinilated ligands: BpaPBAN1-33NH$_2$ and BpaArg$^{27}$-PBAN28-33NH$_2$. Light microscopy histological studies revealed that the gland is contained within the inter-segmental membrane (ISM) between the 8th and 9th abdominal segments. The gland was found to be composed of a single layer of columnar epithelial cells positioned under the inter-segmental cuticle. Similar epithelial cells were also found in the dorsal and ventral regions of the 9th abdominal segment. All regions containing the glandular cells bound both ligands, indicating the presence of the PK/PBAN receptor. The patterns obtained with both ligands were similar, hinting at the possibility that either both ligands bind to the same receptor or that, if there are two distinct receptors, their spatial distribution throughout the gland is very similar.

© 2003 Elsevier Inc. All rights reserved.

**Keywords:** PBAN receptor; Photo-affinity ligands; *Heliothis peltigera*; Pheromone gland; Insect neuropeptide

1. Introduction

The sexual communication between sexes in Lepidopteran species is mediated mainly by sex pheromones, which are volatile compounds used by Lepidopteran insects to attract potential mates from a distance [20]. Sex pheromones play an important role in the elicitation of mating behavior in moths and are, therefore, crucial for successful mating and maintenance of reproductive isolation. Understanding the mechanisms that underlie sex pheromone production is, therefore, of major interest and importance. Sex pheromones in Lepidopteran species are synthesized in a specialized gland, which is a modification of the inter-segmental membrane (ISM) located between the 8th and 9th abdominal segments. The pheromone-producing cells are epithelial cells, overlaid by a modified ISM cuticle, which, in most Lepidopterans, is produced by the cells themselves. The pheromone is produced within the epithelial cells, transported through the cuticle via special porous cuticular spines, and is disseminated from the surface [33,41]. Sex pheromone biosynthesis in moths is affected by a variety of exogenous and endogenous factors (such as temperature, photoperiod, host plants, age, mating, as well as hormonal and neurohormonal factors).
A major breakthrough in our understanding of the endogenous factors involved in this process occurred in 1984, when Raina and Klun [37] first reported that sex pheromone production in female *Helicoverpa* (then *Heliothis*) *zea* moths is controlled by a cerebral neuroendocrine factor, which they termed pheromone biosynthesis activating neuropeptide (PBAN). PBAN was found to control sex pheromone biosynthesis in many other moth species, and the peptide itself was found in many Lepidopteran species as well as in other insect orders [36]. PBAN was first isolated and characterized as a 33 amino acid C-terminally amidated neuropeptide by Raina et al. [38] in *H. zea*, and then its primary sequence was determined in numerous other moth species [17,36]. Determination of the primary amino acid sequence of PBAN revealed that its C-terminal pentapeptide sequence (FSPRLNH$_2$) is homologous with the C-terminal pentapeptide sequence of leucopyrokinin (LPK), the first pyrokinin (PK) peptide, which was isolated from the Madeira cockroach, *Leucophaea maderae* [28]. LPK, PBAN and other pheromonotropic (e.g. pheromonostatin) and myotropic [29] peptides were all grouped into a family termed the PK/PBAN family, characterized by a structural homology at their C-terminal pentapeptide sequence (FXPRLNH$_2$; X = S, T, G, V) [17,36], which constitutes their active core [2–5,25,29,32,39,40]. The PK/PBAN family is a multifunctional family of peptides. In addition to their ability to stimulate sex pheromone biosynthesis in moths, its members mediate key functions associated with feeding (gut muscle contractions) [28,42], development (pupariation and diapause) [21,30,31] and defense (melanin biosynthesis) [3,26] in a variety of insects (moths, cockroaches, locusts and flies) (for review see [17,36]). Currently, ca. 15 peptides have been identified (including pyrokinins, myotropins, PBAN, melanization and reddish coloration hormone (MRCH), diapause hormone and pheromonostatin) [17,36]. Studies performed in several laboratories, including ours, have shown that sex pheromone production (as well as all of the other above functions) can be stimulated by more than one peptide and that the peptides do not exhibit species specificity [17]. Although it is generally agreed that pheromone production in many Lepidopteran species involves the participation of PBAN and possibly other members of the family, many questions still remain unanswered. Currently, very little is known about the mechanism of action of PBAN and other members of this family, and much remains to be determined about the structural, chemical and cellular bases of their activity. Even the nature of the peptide(s) that actually elicit pheromonotropic activity in vivo (full-length PBAN or one of the other pheromonotropic peptides that share sequence homology with it) is not yet known. One reason for our lack of knowledge results from the fact that most studies on the role of the PK/PBAN peptides in pheromone biosynthesis are based on in vivo bioassays (e.g. [1,18,37]) in which peptides are injected into the hemolymph of insects under conditions in which sex pheromone production does not occur naturally (photophase, or decapitated females at scotophase). Despite their great advantage in providing a good estimate of the bioactivity of the tested compounds, such bioassays do not enable us to address the above issues, specifically those related to the target tissue on which PBAN acts and the downstream cellular events.
Contradictory results that accumulated over the years, regarding the principal site(s) of action of the neuropeptide (pheromone gland vs. other organs) and the possible involvement of additional factors in sex pheromone biosynthesis, complicated the issue even further. For example, Teal et al. [47] demonstrated that the prime target of PBAN is the terminal abdominal ganglion (TAG), which in turn provides a signal to the gland to produce pheromone. Supportive of this theory were the findings of Christensen and co-workers [10,11], who demonstrated that sex pheromone production in *H. zea* and *H. virescens* is elicited by the biogenic amine octopamine in the absence of PBAN, in an age- and photoperiod-dependent manner. These workers suggested that PBAN activates the TAG, and that this in turn secretes octopamine that activates the gland. The bursa copulatrix has also been suggested as a potential target for PBAN. In a study performed by Jurenka et al. [23] in *Argyrotaenia velutinana*, it was demonstrated that abdomen cultures responded to a much greater extent than the pheromone gland to exogenously applied synthetic PBAN, and that the bursa copulatrix was essential for a full stimulatory response to PBAN. The study suggested that a pheromonotropic factor, other than PBAN, originating in the bursa copulatrix, is essential for pheromonotropic activity and that the role of PBAN is to stimulate the release of such a factor. The involvement of a bursal factor in the pheromonotropic activity has been reported in other moths as well [13]. Furthermore, experiments performed on *Trichoplusia ni*, *Agrotis segetum*, *A. velutinana* and on the pink bollworm *Pectinophora gossypiella* with pheromone glands in vitro failed to show pheromone biosynthesis in response to application of brain-subesophageal ganglion (SOG) extracts or synthetic PBAN [35,45,50]. All of the above studies hinted at the possibility that PBAN may act on a target other than the pheromone gland. Other studies, though, clearly identified the pheromone gland as the prime target of PBAN. These studies demonstrated that in vitro gland cultures or preparations of ovipositor tips from a variety of insects could easily be stimulated to produce sex pheromone by application of brain extracts as well as of synthetic PBAN [15,34,43]. Anatomical evidence provided direct proof for the existence of PBAN release sites, presumably in the region of the pheromone gland [12,19,24], and recent studies demonstrated that viable pheromone gland cell clusters from the ISM could produce pheromone in response to a pheromonotropic peptide [16]. One way to get a better insight into the above issues and to resolve some of the above contradictions is by a direct demonstration of the presence of PBAN receptors on the pheromone gland cells. Receptors are central to the understanding of the biological function of any neuropeptide (especially in families where several peptides exhibit similar bio-activities) and central to proving a direct correlation between the activity of a given neuropeptide and its target. Currently, very little is known about the PBAN receptor. No one has shown its cellular localization in the pheromone gland cells (or in any other tissue) and no information exists about its spatial distribution within the glandular area.
It has also not been determined whether one or multiple receptors mediate sex pheromone biosynthesis in moths, and whether the same or different receptors mediate the various functions elicited by peptides of the PK/PBAN family in moths and other insects. In a previous study we used a radiolabeled ligand to develop a radio-receptor assay (RRA) that enabled us to partially characterize the PK/PBAN receptor of the pheromone gland of *Heliothis peltigera* females and to determine its expression under different conditions [6,7]. In the present study we have used two biotinilated photo-affinity (benzophenone-substituted) PBAN ligands, a full-length PBAN1-33NH$_2$ molecule and a shorter fragment derived from its C-terminus, Arg$^{27}$-PBAN28-33NH$_2$, to demonstrate the presence of the PBAN receptor in pheromone gland cells of *H. peltigera* females and to determine its spatial distribution in the ovipositor.

2. Material and methods

2.1. Insects

*H. peltigera* moths were reared on an artificial diet as described previously [14]. Pupae were sexed and females and males were placed in separate rooms with a dark/light regime of 10 h:14 h, at 25 ± 2 °C and 60–70% relative humidity. Adult moths were kept in screen cages and supplied with a 10% sugar solution. Moth populations were refreshed every year with males caught from the wild by means of pheromone traps, as previously described [14]. All females used in this study were 5.5 days old. As a routine, representatives of each colony used for histochemical studies were tested for their capability to synthesize sex pheromone in response to the injection of 1 pmol synthetic PBAN1-33NH$_2$. All tested colonies were positive, generating pheromone amounts that ranged from 144 to 189 ng per female.

2.2. Ligands and other peptides

Photo-affinity biotinilated peptide ligands were synthesized as described below. All other peptides were prepared as previously described [48]. The purity of all tested peptides was 90–95%.

2.2.1. Chemicals for peptide synthesis

Fmoc-protected amino acids with standard side-chain protecting groups, Rink amide 4-methylbenzhydrylamine (MBHA) resins and reagents for peptide synthesis were purchased from NOVA Biochem (Switzerland). Ultra-pure quality solvents were purchased from Baker (USA). Other reagents were purchased from Aldrich.

2.2.2. Synthesis of photo-affinity biotinilated ligands

Two biotinilated photo-affinity ligands were synthesized: a biotinilated photo-affinity full-length PBAN molecule (BpaPBAN1-33NH$_2$) and a shorter fragment derived from its C-terminus, Arg$^{27}$-PBAN28-33NH$_2$ (BpaArg$^{27}$-PBAN28-33NH$_2$). A $p$-benzoyl-Phe amino acid (Bachem, Switzerland) was substituted for Phe$^{29}$ in each PBAN molecule. Peptides were synthesized by the solid phase peptide synthesis methodology [9] on a Rink amide MBHA resin (0.6 mmol/g loading), by means of 9-fluorenylmethoxycarbonyl (Fmoc) chemistry. Coupling of Fmoc-protected amino acids was performed using a three-fold excess of amino acid pre-activated, for 10 min prior to coupling, with a three-fold excess of bromo-tris-pyrrolidino-phosphonium hexafluorophosphate (PyBroP) and a seven-fold excess of diisopropylethylamine (DIEA) in *N*-methylpyrrolidinone (NMP). Each coupling reaction was continued for 3 h. The Fmoc-protecting group was removed with 20% piperidine in NMP. After each coupling and Fmoc deprotection the resin was washed with NMP (5 × 2 min) followed by dichloromethane (DCM) washing (2 × 2 min).
2.2.3. Attachment of biotin to the peptides

After Fmoc deprotection of the last amino acid on the peptidyl resin, a coupling reaction was performed with a three-fold excess of biotin (Sigma, St. Louis, USA), pre-activated with a three-fold excess of PyBroP and a seven-fold excess of DIEA in NMP for 10 min prior to coupling. The coupling reaction was repeated twice, for 2 h each time. At the end of the reaction the peptidyl resin was washed with NMP (5 × 2 min), DCM (2 × 2 min) and methanol (twice), and the peptidyl resin was dried under vacuum.

2.2.4. Deprotection of side chain groups and cleavage of peptides from the resin

Peptides were deprotected and cleaved from the resin with 90% trifluoroacetic acid (TFA) in the presence of scavengers (1% double-distilled water (DDW) and 1% triisopropylsilane). The cleavage process continued for 3 h at room temperature (RT). The resin was removed by filtration and washed with TFA (2 × 5 ml). The TFA filtrate was evaporated to dryness under a stream of nitrogen. The peptide was precipitated with cold diethyl ether and washed twice with ether. The crude peptide thus obtained was dissolved in 50% acetonitrile (ACN) and lyophilized.

2.2.5. Purification of peptides

The crude peptides were purified by preparative reverse-phase high-performance liquid chromatography (RP-HPLC) on a Vydac RP-18 column (25 mm × 250 mm) with a Merck-Hitachi 655A liquid chromatography pump. The solvent systems used were ACN and DDW, each containing 0.1% TFA. The peptides were eluted between 15 and 70% ACN in 40 min. The flow rate was maintained at 9 ml/min and the peptides were detected at 220 nm. Purity and homogeneity of the peptides were cross-checked by analytical RP-HPLC on a Merck Lichrocart RP-18 column (5 mm × 250 mm) with a Merck-Hitachi L-700 Lachrom liquid chromatography pump. The flow rate was 1 ml/min and the absorbance was detected at 220 nm. The peptide purity was found to be in the range of 90–95%. Purified peptides were characterized by MALDI time-of-flight mass spectrometry (TOF-MS, linear CHCA matrix) and amino acid analysis of hydrolyzates. The molecular masses obtained were 4219.96 and 1253.61 for BpaPBAN1-33NH$_2$ and BpaArg$^{27}$-PBAN28-33NH$_2$, respectively.

2.3. Determination of the pheromonotropic activity of Bpa peptides

The pheromonotropic activities of Bpa peptides and their corresponding non-biotinilated analogs were determined in *H. peltigera* females by means of the pheromonotropic bioassay, as previously described [2]. Activity was determined by injecting each peptide at doses of 0.1–1000 pmol; pheromone production was allowed to proceed for 2 h. At the end of the experiment glands were excised and their pheromone contents were determined by capillary gas chromatography, as previously described [2].

2.4. Histology

2.4.1. Fixation and embedding of tissue for light microscopy

Fully extended ovipositors of 5.5-day-old *H. peltigera* females were excised at 5–7 h of photophase and fixed in 2% (w/v) paraformaldehyde (PFA) in phosphate buffered saline (PBS, 0.15 M NaCl in Na phosphate buffer, pH 7.2) for 2 h at RT. Fixed ovipositors were stored in 0.5% PFA at 4 °C until use. Before use, the glands were rinsed briefly with PBS (2 × 5 min), transferred to 3% (v/v) glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.2, and microwave irradiated (General Electric Model JES 633) at 480 W for 30 s to increase fixation efficiency [41].
Ovipositors were then transferred to 1 ml of fresh 3% (v/v) glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.2, and re-fixed for 2 h at RT. At the end of the fixation the tissue was rinsed with PBS, pH 7.2 (6 × 10 min) and post-fixed with 2% (v/v) osmium tetroxide in DDW for 2 h at RT. The tissue was then rinsed (3 × 30 min) with DDW and subjected to a series of dehydration steps comprising soaking in increasing percentages of acetone in DDW (30% for 60 min; 50% for 60 min; 70% overnight (ON); 95% for 60 min; 100% for 3 × 60 min). The tissue was then embedded in Spurr (Ted Pella, Redding, CA) according to the following protocol: 2:1 (acetone:Spurr) for 4 h at RT with shaking; 1:2 (acetone:Spurr) ON with very slow shaking; and 5 × 4 h of straight Spurr. The embedded tissue was transferred to rubber molds in straight Spurr, positioned to facilitate coronal or sagittal sectioning, and allowed to polymerize for 18–20 h at 60 °C.

2.4.2. Tissue sectioning

Three 1 μm coronal sections were collected every 20 μm and stained with methylene blue. A total of 50 tissue samples representing different areas of the ISM and the 9th abdominal segment were examined.

2.5. Scanning electron microscopy (SEM)

Fully extended ovipositors of 5.5-day-old *H. peltigera* females were excised at 5–7 h of photophase. Tissue was fixed and dehydrated in acetone as above (Section 2.4.1). Ovipositors were attached to a copper specimen holder with cryo-adhesive tape and critical point dried. At the end of the process, the tissue was coated with gold palladium at 15 mA for 3 min, and viewed with an SEM (Phillips XL-30 FEG) as previously described [27].

2.6. Histochemistry

2.6.1. Fixation of tissue for frozen sections

Fully extended ovipositors of 5.5-day-old *H. peltigera* females were excised at 5–7 h of photophase, fixed in 2% PFA in PBS for 2 h at RT and stored in 0.5% PFA at 4 °C until use. Before use, the ovipositors were washed briefly in PBS (2 × 5 min), re-fixed in 4% PFA in PBS for 30 min at RT, washed with PBS (3 × 15 min) and transferred to 0.1 M glycine in PBS for 30 min (to block free aldehyde groups). The tissue was then washed with PBS (2 × 10 min) and transferred to a 15% (w/v) sucrose solution in PBS at 4 °C with gentle shaking until the tissue sank (2–4 h). Next, the tissue was transferred to a 30% (w/v) sucrose solution in PBS and kept ON at 4 °C. On the next day, the tissue was removed from the sucrose solution, transferred to plastic molds, blotted free of all remaining sucrose solution and mounted (in the desired position) in cold (4 °C) OCT (Triangle Biomedical Sciences, Durham, NC). The molds were then placed on dry ice until the OCT froze, and were stored at −70 °C until further use.

2.6.2. Preparation of frozen sections

Coronal sections, 10 μm thick, were cut with a Microm HM500 OM cryostat. Sections were placed on gelatin-coated slides and kept at −70 °C until further use.

2.6.3. Receptor localization

Sections were immersed in PBS (2 × 10 min) to remove the OCT, and were re-fixed in 4% PFA in PBS for 15 min at RT. Excess PFA was washed off with PBS (2 × 5 min) and the sections were transferred to 0.1 M glycine in PBS for 20 min as above (to block reactive groups). After another rinse with PBS (10 min) the tissue was incubated with avidin (Sigma) in PBS at 0.1 mg/ml (30 min at RT) to block endogenous biotin.
Unbound avidin was washed away with PBS (4 × 5 min) and the sections were incubated with biotin (Sigma) in PBS at 0.01 mg/ml (30 min at RT) to block free avidin sites. Unbound biotin was washed away with PBS (4 × 5 min) and the sections received an additional rinse (5 min) with reaction buffer (10 mM NaHCO₃, 145 mM sucrose, 10 mM HEPES, pH 8.0). The sections were then incubated for 75 min at RT with 10 pmol of ligand (BpaPBAN1-33NH₂ or BpaArg²⁷-PBAN28-33NH₂) in the presence or absence of 6 nmol of competing peptides (PBAN1-33NH₂ or PBAN28-33NH₂) made up in reaction buffer. At the end of the incubation the sections were irradiated for 30 min at 4 °C with a 2 × 15 W 375 nm UV lamp placed at a distance of 9 cm from the reaction mixture. The sections were then rinsed with PBS (4 × 5 min) at RT, transferred to a blocking solution of 0.5% (w/v) bovine serum albumin (BSA) in PBS for 20 min and incubated with streptavidin fluorescein (FITC) or streptavidin Alexa Fluor 568 conjugates (Molecular Probes, Eugene, OR) (diluted 1:500) in 0.3 M NaCl made up in 50 mM Na phosphate, pH 7.2, for 60 min in the dark. Unbound reagents were washed away with PBS (4 × 5 min) and nuclei were stained with the nuclear dye TO-PRO-3 iodide (Molecular Probes) (diluted 1:1000 in PBS containing 0.1% Triton-X-100) for 15 min. Excess dye was washed away with PBS (3 × 5 min), the slides were dried and the sections were mounted with Vectashield mounting medium (Vector Laboratories, Burlingame, CA). The specimens were stored at 4 °C in the dark. Microscopic examination was performed with a laser confocal microscope (Zeiss LSM510). A minimum of three specimens, representing at least two different moth generations, was tested for ligand binding.

3. Results

A preliminary requirement for receptor localization studies is the availability of a highly potent ligand. For that purpose we synthesized photo-affinity ligands that could bind covalently to the receptor. Two biotinilated photo-affinity ligands (BpaPBAN1-33NH$_2$ and BpaArg$^{27}$-PBAN28-33NH$_2$) were synthesized, in which a $p$-benzoyl-Phe amino acid was substituted for Phe$^{29}$ in each PBAN molecule. In order to determine whether the substitution of Phe$^{29}$ and the addition of a biotin residue to the molecule affected activity, both Bpa ligands were tested for their in vivo pheromonotropic activity, which was compared with that of the parent molecules. As indicated in Fig. 1, BpaPBAN1-33NH$_2$ exhibited a slightly lower activity than the parent non-biotinilated peptide, although at a concentration of 10 pmol and above the activity was comparable with that of PBAN1-33NH$_2$. BpaArg$^{27}$-PBAN28-33NH$_2$, on the other hand, was more active than Arg$^{27}$-PBAN28-33NH$_2$ at concentrations of 10 pmol and above. The availability of highly bioactive photo-affinity biotinilated peptides enabled us to further characterize the receptor and to localize it in the ovipositor of *H. peltigera* by means of streptavidin-coupled fluorophore reporters. A secondary requirement for histochemical studies is elucidation of the structural details of the tested tissue. For that purpose we excised ovipositors of female *H. peltigera* moths and performed a detailed analysis of the pheromone gland structure by light microscopy (LM) and scanning electron microscopy (SEM). For the LM study we prepared 1 μm sagittal sections of various regions of the ovipositor as well as 1 μm coronal sections (at 20 μm intervals) of the entire ovipositor (regions I through III in Fig. 2A).
Examination of the intact ovipositor under a stereoscopic microscope revealed an ISM (region II, Fig. 2B and C) bordered by two heavily sclerotized cuticular regions that formed the 9th and 8th abdominal segments (regions I and III, respectively, Fig. 2B and C). The heavy sclerotized cuticle of the 8th segment formed an almost complete ring whereas that of the 9th abdominal segment formed two sclerotized valves that extended around the segment laterally but did not completely meet one another ventrally (Fig. 2C) or dorsally (not shown). LM examination of the stained coronal and sagittal sections revealed that the pheromone gland comprised the ISM (region II in Fig. 2A–C) between the 9th and 8th abdominal segments. The pheromone gland cells comprised a single layer of columnar epithelial cells positioned under the inter-segmental cuticle (Fig. 2D and E). These cells extended along the entire ISM and surrounded the entire circumference of the tissue, to form a complete ring-type gland (Fig. 2F). Epithelial cells, similar to those of the ISM, extended all the way to the tip of the ovipositor. Fig. 2G and H depict the ventral and dorsal epithelial layers (regions V and D) overlaid by the inter-segmental cuticle (light blue color, Fig. 2G) and the heavily sclerotized lateral cuticular valves of the 9th segment (dark blue color, Fig. 2G). SEM studies revealed that the pheromone gland region is highly convoluted (Fig. 3A–C), densely covered with hollow cuticular spines (Fig. 3D–F) that cover the entire circumference of the ISM, and through which the pheromone is, most likely, secreted. In order to localize PK/PBAN receptor-containing cells, ovipositors were fixed, and slide-mounted 10 μm frozen sections were incubated with the Bpa ligands, UV irradiated (for covalent attachment of the ligand to the tissue), and visualized by application of streptavidin conjugated to an Alexa or FITC fluorophore. Binding of BpaPBAN1-33NH$_2$ to the sections revealed that the epithelial cells of the ISM were heavily stained with the ligand (Fig. 4A and B). The BpaPBAN1-33NH$_2$ ligand was fully displaced in the presence of a 600-fold excess of non-biotinilated PBAN1-33NH$_2$ (Fig. 4C) or PBAN28-33NH$_2$ (Fig. 4D). Staining was also obtained with BpaArg$^{27}$-PBAN28-33NH$_2$ (Fig. 4E) and this ligand was also fully displaced with non-biotinilated
PBAN28-33NH$_2$ (Fig. 4F). The C-terminally free acid PBAN analog (PBAN1-33COOH), which is devoid of any pheromonotropic activity [2], did not displace the ligand (Fig. 4G). Also, the staining was highly specific and no binding of streptavidin–fluorophore conjugates could be demonstrated in the absence of ligands (Fig. 4H). The peripheral heavy red or green stain that appears in all sections was contributed by cuticular auto-fluorescence, which was also prominent in the absence of any ligand or in the presence of an excess of non-biotinilated PBAN1-33NH$_2$ or PBAN28-33NH$_2$ molecules. Next, we set out to determine the spatial distribution of the receptor within the pheromone gland. For that purpose, the whole ovipositor (regions I through III in Fig. 2A) was sectioned into 10 μm sections and sets of three serial sections were stained sequentially with the BpaPBAN1-33NH$_2$ and BpaArg$^{27}$-PBAN28-33NH$_2$ ligands. Throughout region II (represented by Fig. 5A), binding of both ligands stained the entire circumference of the tissue (Fig. 5B), clearly indicating the presence of the receptor in all parts of the ISM. In the 9th segment (represented by Fig. 5D), only the dorsal and ventral regions were stained (Fig. 5E). In the lateral areas, where a heavily sclerotized cuticle was visualized, no staining was observed (Fig. 5E). No cells, other than the glandular cells underneath the “thinner” cuticle, were stained in regions I and II (Fig. 5B, C, E and F), indicating the high specificity of the staining. No staining was observed in region III (8th abdominal segment). Interestingly, the staining in the ISM pheromone gland cells, as well as in the glandular cells of the dorsal and ventral parts of the 9th abdominal segment, exhibited a polar pattern, with intense staining appearing at the basal part of the epithelial cells (Fig. 5C and F). The patterns obtained with both ligands, BpaPBAN1-33NH$_2$ and BpaArg$^{27}$-PBAN28-33NH$_2$, were similar.

4. Discussion

In the present study we demonstrated the presence of the PK/PBAN receptor in pheromone gland cells of *H. peltigera* females, and visualized its spatial distribution within the ovipositor. Receptor visualization was performed with two photo-affinity biotinilated ligands: a full-length PBAN1-33NH$_2$ and Arg$^{27}$-PBAN28-33NH$_2$. Photo-affinity labeling is a very powerful technique for histochemical investigation of receptors, but it is necessary to consider carefully the choice of the peptide and the photo-label, the design of the photo-reactive peptide, and the irradiation process, to ensure that the photo-labeled peptide will remain active, that the chemistry of labeling will be relatively simple and highly efficient, and that the photo-ligand will be stable and reactive to a degree that will not cause high non-specific interactions. In light of the above considerations we chose to use benzophenone-substituted peptides, which fulfill the above requirements and have previously been successfully used to label a variety of receptors [44,49]. The PBAN-modified photo-affinity ligands were further biotinilated at the N-terminus, in a manner that did not affect their bioactivity and enabled their employment in histochemical studies for visualization of the PK/PBAN receptor with the aid of streptavidin-conjugated fluorophores.
In order to detect PBAN receptor-containing cells among the pheromone-producing glandular cells in the ISM and other regions of the ovipositor, it was necessary to perform a detailed histological study of the structure of the pheromone gland of *H. peltigera*. Although Noctuidae species have been examined more than any other family, and considerable information has accumulated concerning the structure of the cells that form the sex pheromone gland, the extent to which the ISM is glandular is extremely variable among moths. Our present study indicates that the pheromone gland of the *H. peltigera* female is a simple ring gland composed of a single layer of cells, whose glandular cells encircle the ovipositor and occupy the entire ISM between the 8th and the 9th abdominal segments. Glandular cells were also found in the dorsal and ventral parts of the 9th abdominal segment, laterally divided by a sclerotized cuticular wall. By analogy with findings in other *Heliothinae* species [33], we assume that the cells underneath the sclerotized cuticle are unmodified squamous cells or cells modified for the insertion of muscles. Based on the above, it seems that the structure of the pheromone gland of *H. peltigera* is similar to that of two other *Heliothinae* species: *H. zea*, which was found to have a ring gland with dorsal and ventral glandular cells in the 9th segment [8,22,33,41], and *H. phloxiphaga* [22]. Despite the information that has accumulated concerning the structure of the glandular cells, the biochemical elucidation of the structural–functional relationship of these cells is still incomplete. Several studies have tried to correlate the structure of the pheromone gland with its function as a pheromone-producing organ and to define the spatial distribution of pheromone-producing cells. The approach commonly employed to address this issue was based on examination of ultra-structural changes associated with the production and release of pheromone by maturing gland cells. Indeed, it has been found in several studies that the cells change from primarily protein-secreting cells to primarily lipid-secreting cells [33]. Recently, Raina et al. [41] addressed the issue by monitoring pheromone titers in various sections of the *H. zea* ovipositor, correlating them with morphological changes that follow pheromone production. This study led to a much more precise localization of the pheromone-producing regions within the ovipositor and indicated specific cellular changes that occur during pheromone production and non-production periods. An alternative and more direct approach to the localization of sex pheromone-producing cells in the ovipositor is the cellular approach, in which cells expressing receptors of pheromonotropic peptides (e.g. PK/PBAN) are visualized directly. PK/PBAN peptides have long been known to stimulate sex pheromone biosynthesis in moths; thus, a demonstration of the presence of PK/PBAN receptors on the epithelial cells of the ISM and other regions of the ovipositor provides direct evidence for the presence of pheromone-producing cells.
Two photo-affinity biotinylated ligands were chosen for this purpose: the full-length PBAN (BpaPBAN1-33NH$_2$), which is considered to be the prime pheromonotropic peptide; and a modified C-terminally derived analog, Arg$^{27}$-PBAN28-33NH$_2$ (BpaArg$^{27}$-PBAN28-33NH$_2$), which contains the “signature sequence” of the PK/PBAN family and has been found to elicit pheromone biosynthesis in a manner similar to that of the full-length PBAN [2,48]. The latter ligand is, theoretically, a more “universal” PK/PBAN ligand, exhibiting a higher potential than PBAN1-33NH$_2$ to bind to multiple receptors of the PK/PBAN family. Employment of both ligands on slide-mounted, fixed sections stained the columnar epithelial cells throughout the ISM, as well as the ventral and dorsal epithelial cells in the 9th abdominal segment. Epithelial cells underlying the sclerotized cuticle were not stained. The patterns obtained with the two ligands, BpaPBAN1-33NH$_2$ and BpaArg$^{27}$-PBAN28-33NH$_2$, were similar, indicating two possibilities: either both ligands bind to the same receptor under the tested conditions, or, if there are two distinct receptors, their spatial distribution throughout the gland is very similar. Staining exhibited a polar pattern, with intense staining appearing at the basal part of the epithelial cells. This polarity of the PBAN receptor most likely facilitates efficient contact with the hemolymph and the blood-borne hormones (e.g. PBAN) that stimulate sex pheromone production in these cells. Staining with both ligands was highly specific: no other cells in the tissue were stained, binding was fully displaced with an excess of non-biotinylated ligands (PBAN1-33NH$_2$ and PBAN28-33NH$_2$), and it was not displaced with the C-terminally free acid analog PBAN1-33COOH, which is devoid of pheromonotropic activity [2]. It is interesting to note that although the glands used in this study were excised at photophase, they still revealed the presence of a marked amount of PK/PBAN receptors. This is in line with previous findings from our laboratory [18], as well as others [36], that moths are responsive to PBAN at photophase, although natural pheromone production occurs during the scotophase. In summary, our data clearly demonstrate the presence of a PBAN receptor throughout the ISM region, as well as in the ventral and dorsal regions of the 9th abdominal segment, which indicates that the columnar epithelial cells in both regions (the ISM and parts of the 9th segment) are pheromone-producing cells. These data correlate well with the findings of the studies that concluded, on the basis of morphological criteria, that the columnar cells are pheromone-producing cells, and with the recent data of Raina et al. [41], who demonstrated that about 70% of pheromone-producing cells are present in the ISM (in both the dorsal and ventral regions) and that 17% of these cells are present in the 9th segment. Several authors [33,46] indicated, on the basis of behavioral and microscopic studies, that pheromone is produced in the 9th abdominal segment, but that this region may contribute differently to the composition of the pheromone blend. Our present findings on the presence of the PK/PBAN receptor in the 9th abdominal segment clearly support the hypothesis that pheromone is indeed produced in this region, although we are unable to determine whether the nature of the pheromone produced in this region differs from that produced in the main ISM area.
It should be noted, however, that since both ligands used in our study exhibited a similar pattern throughout the ovipositor, it is unlikely that pheromone components synthesized in the cells of the 9th segment differ from those synthesized in the ISM region, unless the same ligands activate similar receptors in different areas, leading to the initiation of different biosynthetic pathways. In addition to the spatial localization, our data also clearly indicate that the pheromone gland serves as a target for PBAN. As indicated in Section 1, the issue of the target tissue and transport route of PBAN has engaged, and still engages, many research groups dealing with the neuroendocrine control of sex pheromone biosynthesis in moths. Our present study, together with our previous biochemical and pharmacological study of the PK/PBAN receptor in the pheromone gland of *H. peltigera* [6,7], contributes direct evidence on the target-tissue issue, although it does not resolve the problem of the transport route. Although we indicated above that the basal location of the receptors on the epithelial cells hints at the possibility that the receptors are arranged so as to ensure easier access to humoral factors (which would indicate a hemolymph-borne factor as the stimulator of pheromone biosynthesis), it is still possible that nerve cells terminating on the epithelial glandular cells stimulate the biosynthetic process. Our study did not focus on this issue, however, and the presence of such nerve terminals was not visualized. Another issue that still needs to be addressed is the possible involvement of additional factors in the stimulation of sex pheromone biosynthesis in the gland. Despite our clear-cut data on the presence of PK/PBAN receptors in the gland (which strongly indicate a role for these peptides in this process), the possibility that other factors act directly or indirectly on the gland cannot be excluded and needs further investigation. To the best of our knowledge this is the first report on the histochemical visualization of the PK/PBAN receptor in the pheromone gland. The ligands described in this study, as well as the histochemical methods employed to visualize the receptor, could serve as a basis for the localization of receptors belonging to this family in the pheromone glands of other moth species, as well as in other organs. **Acknowledgments** The histochemical part of the present study was carried out while the senior author (M.A.) was on sabbatical in the laboratory of Prof. M. Martins-Green, Department of Cell Biology and Neurosciences, University of California, Riverside (UCR). Microscopy analysis was carried out at the Central Facility for Advanced Microscopy and Microanalysis (CFAMM) at UCR. Gratitude is extended to all the people in Dr. Martins-Green's laboratory, especially to Mrs. Lina Wong, for assistance during the performance of the study. We also thank: Dr. Larissa Dobrzhinetskaya of the Institute of Geophysics and Planetary Physics at UCR for the SEM analysis; Mrs. Irit Scheffer of the Department of Entomology, Chemistry Unit, at the Volcani Center, for gas chromatography analyses of pheromone content in moths; and Prof. Gilon of the Department of Organic Chemistry at the Hebrew University of Jerusalem, Israel, for assistance in the design of the photo-affinity ligands. Special appreciation is extended to Mr.
Danny Shavit of the Professional Scientific Photography Unit at the Volcani Center, ARO, for the highly professional artwork. This research was supported in part by the office of the vice-chancellor for research at UCR (M.M.G. and M.A.), by AHA grant #0050732Y and by TRDRP grant #10IT-0170 (M.M.G.), and by the Israel Science Foundation, administered by the Israel Academy of Sciences and Humanities (M.A.).
References
[1] Altstein M, Harel M, Dunkelblum E. Effect of a neuroendocrine factor on sex pheromone biosynthesis in the tomato looper, *Chrysodeixis chalcites* (Lepidoptera: Noctuidae). Insect Biochem 1989;19:645–9. [2] Altstein M, Dunkelblum E, Gabay T, Ben-Aziz O, Schafler I, Gazit Y. PBAN-induced sex pheromone biosynthesis in *Heliothis peltigera*: structure, dose and time-dependent analysis. Arch Insect Biochem Physiol 1995;30:309–17. [3] Altstein M, Gazit Y, Ben-Aziz O, Gabay T, Marcus R, Vogel Z, et al. Induction of cuticular melanization in *Spodoptera littoralis* larvae by PBAN/MRCH: development of a quantitative bioassay and structure–function analysis. Arch Insect Biochem Physiol 1996;31:355–70. [4] Altstein M, Ben-Aziz O, Gabay T, Gazit Y, Dunkelblum E. Structure–function relationship of PBAN/MRCH. In: Carde RT, Minks AK, editors. Insect pheromone research: new directions. New York: Chapman and Hall; 1996. p. 56–63. [5] Altstein M, Dunkelblum E, Gazit Y, Ben-Aziz O, Gabay T, Vogel Z, et al. Structure–function analysis of PBAN/MRCH: a basis for antagonist design. In: Rosen D, et al., editors. Modern agriculture and the environment. Dordrecht: Kluwer Academic Publishers; 1997. p. 109–16. [6] Altstein M, Gabay T, Ben-Aziz O, Daniel S, Zeltser I, Gilon C. Characterization of a putative pheromone biosynthesis activation neuropeptide (PBAN) receptor from the pheromone gland of *Heliothis peltigera*. Invertebrate Neurosci 1999;4:33–40. [7] Altstein M, Ben-Aziz O, Daniel S, Zeltser I, Gilon C. Pyrokinin/PBAN radio-receptor assay: development and application for the characterization of a putative receptor from the pheromone gland of *Heliothis peltigera*. Peptides 2001;22:1379–89. [8] Aubrey JG, Boudreaux HB, Grodner ML, Hammond AM. Sex pheromone producing cells and their associated cuticle in female *Heliothis zea* and *Heliothis virescens* (F.) (Lepidoptera: Noctuidae). Ann Entomol Soc Am 1983;76:343–8. [9] Barany G, Merrifield RB. In: Gross E, Meinehofer J, editors. The peptides, vol. 2. New York: Academic Press; 1980. p. 1–284. [10] Christensen TA, Itagaki H, Teal PEA, Jasensky RD, Tumlinson JH, Hildebrand JG. Innervation and neural regulation of the sex pheromone gland in female *Heliothis* moths. Proc Natl Acad Sci USA 1991;88:4971–5. [11] Christensen TA, Lehman HK, Teal PEA, Itagaki H, Tumlinson JH, Hildebrand JG. Diel changes in the presence and physiological actions of octopamine in the female sex-pheromone glands of *Heliothis* moths. Insect Biochem Mol Biol 1993;22:841–9. [12] Davis NT, Homberg U, Teal PEA, Altstein M, Agricola H-J, Hildebrand JG. Neuroanatomy and immunocytochemistry of the neurosecretory system of the subesophageal ganglion of the tobacco hawkmoth, *Manduca sexta*: immunoreactivity to PBAN and other neuropeptides. Microsc Res Tech 1996;35:201–29. [13] Delisle J, Picimbon J-F, Simard J. Physiological control of pheromone production in *Choristoneura fumiferana* and *C. rosaceana*. Arch Insect Biochem Physiol 1999;42:253–65. [14] Dunkelblum E, Kehat M.
Female sex pheromone components of *Heliothis peltigera* (Lepidoptera: Noctuidae): chemical identification from gland extracts and male response. J Chem Ecol 1989;15:2233–45. [15] Fabrias G, Marco M-P, Camps F. Effect of the pheromone biosynthesis activating neuropeptide on sex pheromone biosynthesis in *Spodoptera littoralis* isolated glands. Arch Insect Biochem Physiol 1994;27:77–87. [16] Fonagy A, Yokoyama N, Okano K, Tatsuki S, Maeda S, Matsumoto S. Pheromone producing cells in the silkmoth *Bombyx mori*: identification and their morphological changes in response to pheromonotropic stimuli. J Insect Physiol 2000;46:735–44. [17] Gäde G. The explosion of structural information on insect neuropeptides. Prog Chem Org Natural Products 1997;71:1–128. [18] Gazit Y, Dunkelblum E, Benichis M, Altstein M. Effect of synthetic PBAN and derived peptides on sex pheromone biosynthesis in *Heliothis peltigera* (Lepidoptera: Noctuidae). Insect Biochem 1990;20:853–8. [19] Golubeva E, Kingan TG, Blackburn MB, Masler EP, Raina AK. The distribution of PBAN (pheromone biosynthesis activating neuropeptide)-like immunoreactivity in the nervous system of the gypsy moth *Lymantria dispar*. Arch Insect Biochem Physiol 1997;34:391–408. [20] Howse PE. The role of pheromones in insect behavior and ecology. In: Howse P, Stevens I, Jones O, editors. Insect pheromones and their use in pest management. London: Chapman and Hall; 1998. p. 38–68. [21] Imai K, Konno T, Nakazawa Y, Komiya T, Isobe M, Koga K, et al. Isolation and structure of diapause hormone of the silkworm, *Bombyx mori*. Proc Japan Acad 1991;67(Ser B):98–101. [22] Jefferson RN, Shorey HH, Rubin RE. Sex pheromone of noctuid moths. XVI. The morphology of the female sex pheromone glands of eight species. Ann Entomol Soc Am 1968;61:861–5. [23] Jurenka RA, Fabrias G, Roelofs WL. Hormonal control of female sex pheromone biosynthesis in the redbanded leafroller moth, *Argyrotaenia velutinana*. Insect Biochem 1991;21:81–9. [24] Kingan TG, Blackburn MB, Raina AK. The distribution of pheromone-biosynthesis-activating neuropeptide (PBAN) immunoreactivity in the central nervous system of the corn earworm moth, *Helicoverpa zea*. Cell Tissue Res 1992;270:229–40. [25] Kochansky JP, Raina AK, Kempe TG. Structure–activity relationship in C-terminal fragment analogs of pheromone biosynthesis activating neuropeptide in *Helicoverpa zea*. Arch Insect Biochem Physiol 1997;35:315–32. [26] Matsumoto S, Kitamura A, Nagasawa H, Kataoka H, Orikasa C, Mitsui T, et al. Functional diversity of a neurohormone produced by the suboesophageal ganglion: molecular identity of melanization and reddish colouration hormone and pheromone biosynthesis activating neuropeptide. J Insect Physiol 1990;36:427–32. [27] Melkonian G, Le C, Zheng W, Talbot P, Martins-Green M. Normal patterns of angiogenesis and extracellular matrix deposition in chick chorioallantoic membranes are disrupted by mainstream and sidestream cigarette smoke. Toxicol Appl Pharmacol 2000;163:26–37. [28] Nachman RJ, Holman GM, Cook BJ. Active fragments and analogs of the insect neuropeptide leucopyrokinin: structure–function studies. Biochem Biophys Res Commun 1986;137:936–42. [29] Nachman RJ, Holman GM. Myotropic insect neuropeptide families from the cockroach *Leucophaea maderae*. In: Menn JJ, Kelly TJ, Masler EP, editors. Insect neuropeptides. Washington, DC: American Chemical Society; 1991. p. 194–214. [30] Nachman RJ, Holman GM, Schoofs L, Yamashita O.
Silkworm diapause induction activity of myotropic pyrokinin (FXPRLamide) insect neuropeptides. Peptides 1993;14:1043–8. [31] Nachman RJ, Zdarek J, Holman GM, Hayes TK. Pupariation acceleration in fleshfly (*Sarcophaga bullata*) larvae by the pyrokinin/PBAN neuropeptide family. Ann NY Acad Sci 1997;814:73–9. [32] Nagasawa H, Kuniyoshi H, Arima R, Kawano T, Ando T, Suzuki A. Structure and activity of *Bombyx* PBAN. Arch Insect Biochem Physiol 1994;25:261–70. [33] Percy-Cunningham JE, MacDonald JA. Biology and ultrastructure of sex pheromone-producing glands. In: Prestwich GD, Blomquist GJ, editors. Pheromone biochemistry. Orlando: Academic Press; 1987. p. 27–75. [34] Rafaeli A, Soroker V, Kamensky B, Raina AK. Action of pheromone biosynthesis activating neuropeptide on *in vitro* pheromone glands of *Heliothis armigera* females. J Insect Physiol 1990;36:641–6. [35] Rafaeli A, Klein Z. Regulation of pheromone production by female pink bollworm moth *Pectinophora gossypiella* (Saunders) (Lepidoptera: Gelechiidae). Physiol Entomol 1994;19:159–64. [36] Rafaeli A. Neuroendocrine control of pheromone biosynthesis in moths. Int Rev Cytol 2002;213:49–91. [37] Raina AK, Klun JA. Brain factor control of sex pheromone production in the female corn earworm moth. Science 1984;225:531–3. [38] Raina AK, Jaffe H, Kempe TG, Keim P, Blacher RW, Fales HM, et al. Identification of a neuropeptide hormone that regulates sex pheromone production in female moths. Science 1989;244:796–8. [39] Raina AK, Kempe TG. A pentapeptide of the C-terminal sequence of PBAN with pheromonotropic activity. Insect Biochem 1990;20:849–51. [40] Raina AK, Kempe TG. Structure activity studies of PBAN of *Helicoverpa zea* (Lepidoptera: Noctuidae). Insect Biochem Mol Biol 1992;22:221–5. [41] Raina AK, Wergin WP, Murphy CA, Erbe EF. Structural organization of the sex pheromone gland in *Helicoverpa zea* in relation to pheromone production and release. Arthropod Struct Dev 2000;29:343–53. [42] Schoofs L, Holman GM, Nachman RJ, Hayes TK, De Loof A. Isolation, primary structure, and synthesis of locustapyrokinin: a myotropic peptide of *Locusta migratoria*. Gen Comp Endocrinol 1991;81:97–104. [43] Soroker V, Rafaeli A. In vitro hormonal stimulation of [14C]acetate incorporation by *Heliothis armigera* pheromone glands. Insect Biochem 1989;19:1–5. [44] Suva LJ, Flannery MS, Caulfield MP, Findlay DM, Jüppner H, Goldring SR, et al. Design, synthesis and utility of novel benzophenone-containing calcitonin analog for photoaffinity labeling the calcitonin receptor. J Pharmacol Exp Ther 1997;283:876–84. [45] Tang JD, Charlton RE, Jurenka RA, Wolf WA, Phelan PL, Sreng L, et al. Regulation of pheromone biosynthesis by a brain hormone in two moth species. Proc Natl Acad Sci USA 1989;86:1806–10. [46] Teal PEA, Carlyle TC, Tumlinson JH. Epidermal glands in terminal abdominal segments of female *Heliothis virescens* (F.) (Lepidoptera: Noctuidae). Ann Entomol Soc Am 1983;76:242–7. [47] Teal PEA, Tumlinson JH, Oberlander H. Neural regulation of sex pheromone biosynthesis in *Heliothis* moths. Proc Natl Acad Sci USA 1989;86:2488–92. [48] Zeltser I, Gilon C, Ben-Aziz O, Scheffer I, Altstein M. Discovery of a linear lead antagonist to the insect pheromone biosynthesis activating neuropeptide (PBAN). Peptides 2000;21:1457–67. [49] Zhou AT, Bessalle R, Bisello A, Nakamoto C, Rosenblatt M, Suva LJ, et al. Direct mapping of an agonist-binding domain within the parathyroid hormone/parathyroid hormone-related protein receptor by photoaffinity crosslinking.
Proc Natl Acad Sci USA 1997;94:3644–9. [50] Zhu J, Millar J, Löfstedt C. Hormonal regulation of sex pheromone biosynthesis in the turnip moth, *Agrotis segetum*. Arch Insect Biochem Physiol 1995;30:41–59.
Using Picture Books to Promote Understanding of the Continent of Africa in the Elementary Classroom Dorothy N. Bowen, *Eastern Kentucky University* **Recommended Citation** Bowen, Dorothy N., "Using Picture Books to Promote Understanding of the Continent of Africa in the Elementary Classroom" (2009). *Curriculum and Instruction Faculty and Staff Scholarship*. Paper 24. [http://encompass.eku.edu/ci_fsresearch/24](http://encompass.eku.edu/ci_fsresearch/24) The continent of Africa covers six per cent of the earth's surface and over 20 per cent of its total land area. Its population is second only to the continent of Asia. The continent is made up of 53 distinct countries, counting the island of Madagascar and other islands associated with the continent such as the Seychelles. More than 1000 indigenous African languages are spoken on this vast continent, and over 300 of these languages are spoken in the country of Nigeria alone. How may the elementary classroom teacher convey something of Africa's beauty, and make at least some part of the continent come alive for students? Using trade books with our elementary students is an effective way of bringing a topic to life for them. The textbook normally presents the topic from the historian's or geographer's point of view and may treat it with limited depth, but the trade book may examine it from a child's point of view. Children will be much more engaged when they see the issue being presented through the eyes of another child. During the past decade many beautiful picture books have been published which provide a few windows of understanding into some of Africa's rich cultures and resourceful peoples. Let us look at a sample of these picture books and consider some possible ways they might be used in the classroom. An example is *Ryan and Jimmy: And the Well in Africa that Brought Them Together*, a story told by Herb Shoveller. When Ryan Hreljac was a six-year-old first grader he learned that in some parts of the world there is no safe drinking water, and as a result thousands and thousands of people, including children, become sick and even die. "That's crazy," Ryan thought. His teacher went on to tell how far some had to walk every day in search of water, which, even when found, was often not fit to drink. Ryan decided that he would raise the money to build a well in Africa. Many caught the vision as Ryan carried on his campaign. Ryan became pen pals with Jimmy, a Ugandan. When the well was built in Jimmy's village, Ryan was able to go to Uganda and meet him face-to-face. *Ryan and Jimmy* may inspire children in the classroom to learn about places like Jimmy's village and to seek ways they can be involved and make a difference. The web site for Ryan's Well Foundation ([http://www.ryanswell.ca/](http://www.ryanswell.ca/)) gives information on similar projects, shows maps locating these projects, and provides details about how one can be involved. A web site at [http://www.Timeforkids.com/TFK](http://www.Timeforkids.com/TFK) gives a virtual voyage to the country of Uganda. The power of one person to bring change is also told in two books about Wangari Maathai, who grew up in the highlands of Kenya when there were many trees and fish-filled streams.
Her story is told by Claire Nivola in *Planting the Trees of Kenya: The Story of Wangari Maathai* and also by Jeanette Winter in *Wangari's Trees of Peace: A True Story from Africa*. The authors tell of how Maathai came to the U.S. to attend college, where she studied science. When she returned to Kenya just five years later, she saw a great change in the landscape of her country. Trees had been cut down and people no longer grew their own food, but instead purchased much of it in stores. One result of the removal of many trees was the lack of clean drinking water that Ryan learned about. Wangari set out to change the situation by convincing people all over Kenya to plant trees. The job has not been an easy one, but in spite of protests and even personal danger, she persevered. In 2004 Wangari Maathai became the first woman from Africa to receive the Nobel Peace Prize. It was awarded to her for the connection she made between the health of her country's natural environment and the well-being of her country's people (Nivola). Reading about this brave Kenyan woman can spark a discussion of how children can take care of the environment where they live. Another topic that brings understanding of what life is like for some children in Africa is the AIDS pandemic. How can a child in an American classroom relate to the fact that AIDS has orphaned over 11 million children in sub-Saharan Africa, a number that is expected to rise to 20 million by the year 2020? In her book, *Our Stories, Our Songs: African Children Talk about AIDS*, Canadian author Deborah Ellis tries to show what it really means to be a child living in the midst of this terrible pandemic. Ellis traveled to Malawi and Zambia, where she actually met some of these children. Some are in their teens, but some are very young. In her book Ellis tells many of their stories, and the stories tell what life is like because of this devastating disease. The author explains in easy-to-understand language what AIDS is, what it does, and how it is and is not passed on. She also defines terms associated with the disease. Ellis writes, "This is what it is to be human: it's about knowing that other humans are just as we are. It's about shouting our stories, singing our songs, and letting them float out into the universe. It's about celebrating all our stories, all our songs, and all our histories." This title could be used as a resource in a study of many sub-Saharan African countries. Two picture books tell the stories of international projects that have enabled African children to earn money in order to go to school. Page McBrier's *Beatrice's Goat* relates the story of a young Ugandan girl who longs to attend school but does not have the money for uniforms and books. She comes from a family of six children, and the prospect looks impossible. When the family receives a gift of a goat through the Heifer Project, the goat provides milk for the family, and the sale of extra milk eventually allows Beatrice to attend school. The day Beatrice's mother tells her that she has saved enough to pay for school is the day her dream comes true. A similar story comes from Ghana in West Africa. The young boy Kojo is given a loan which he uses to buy a hen. The hen lays eggs, which Kojo is able to sell to buy more hens, enabling him to complete school and college. His success eventually changed his community by enabling others to succeed. Katie Smith Milway tells his story in *One Hen: How One Small Loan Made a Big Difference*.
These books help children to understand that schooling is not always a given for children in some parts of the world. The books not only inform children of how blessed they are to have an education provided for them, but also suggest that they can be involved in organizations that provide heifers, goats or small loans to people all over the world. The teacher can go to YouTube.com and search "aid projects in Africa" to find video clips that demonstrate what can be accomplished through such projects. The Coretta Scott King Award-winning book, *Brothers in Hope: The Story of the Lost Boys of Sudan*, enables children of elementary age to understand in part what effect war has had on the children of Sudan. It tells in picture book format the story of eight-year-old Garang, who is tending cattle in Southern Sudan when war breaks out in his village. He returns home and finds everything has been destroyed. He joins a band of other boys who walk hundreds of miles through Ethiopia and Kenya, and finally finds a new home in the United States. One of the criteria for using books that deal with such devastating issues as AIDS, poverty and war is that they should always give hope to the reader. Williams' book meets that criterion in spite of the hardship portrayed. On December 26, 2004 a tsunami struck in the Indian Ocean near Indonesia. *National Geographic News* reported that it is estimated to have released the energy of 23,000 Hiroshima-type atomic bombs. By the end of the day more than 150,000 people were dead or missing and millions more were homeless in 11 countries, making it perhaps the most destructive tsunami in history. In spite of the deadly destruction on that day, a beautiful story of hope and friendship has come out of that event. The true story of a baby hippo who loses his mother during the tsunami has inspired at least five picture books for children. Isabella and Craig Hatkoff and Paula Kahumbu have written three books about the hippo. All three are illustrated with photographs by Peter Greste. Jeanette Winter has published a wordless picture book telling the story as well. Her book has the title *Mama: A True Story in which a Baby Hippo Loses His Mama during a Tsunami, but Finds a New Home, and a New Mama*. Marion Dane Bauer wrote the story in fiction form, but her book, *A Mama for Owen*, is also based on the actual event. Winter's book is a charming story of the hippo Owen, showing him when he was at home with his mother and the rest of the herd; when the tsunami struck, his life changed forever. He is washed up on shore, captured in a net and taken to a game preserve where he meets Mzee, a 130-year-old male giant tortoise. Mzee means "old man" in Swahili. The two became inseparable. The Hatkoffs and Kahumbu tell the story in much greater detail, including many actual photographs of the event that brought Owen and Mzee together and the ongoing friendship that developed. Bauer embroiders the story a bit, but basically tells the same story told by the other picture books. The Owen and Mzee web site ([http://www.owenandmzee.com/omweb/](http://www.owenandmzee.com/omweb/)) is a wonderful resource to complement the picture books. It includes several videos, some actual footage of the rescue of Owen, some actual news reports, and some animated stories about the friends. The presentation of these videos is at a variety of age and interest levels. A search under "Owen and Mzee" will bring up a number of other web sites which can add to the resources for a study of many topics, e.g.
tsunamis, friendships, animal friendships, Kenya, game preserves, etc. Children could create a mural which tells the story of Owen and Mzee, truly a story of hope out of disaster. The trade books we have examined, and the many others which are available, can greatly enhance the exposure of elementary children to some of the cultures of Africa. These books will show them not only the differences which exist, but also the many similarities between childhood in the U.S. and in some areas of Africa. Most of all, they can help children to see that there is hope even in the most difficult of life's circumstances. Dorothy N. Bowen *Editor's Note:* Dr. Bowen was a librarian and teacher in Kenya for 33 years. WORKS CITED: Bauer, Marion Dane. *A mama for Owen*. Illustrated by John Butler. NY: Simon & Schuster Books for Young Readers, 2007. "Deadliest Tsunami in History?" 7 Jan. 2005 <http://news.nationalgeographic.com/news/2004/12/1227_041226_tsunami.html> Ellis, Deborah. *Our stories, our songs: African children talk about AIDS*. Ontario: Fitzhenry & Whiteside, 2005. Hatkoff, Isabella, Craig Hatkoff, and Paula Kahumbu. *Owen & Mzee: The true story of a remarkable friendship*. Photographs by Peter Greste. NY: Scholastic, 2006. Hatkoff, Isabella, Craig Hatkoff, and Paula Kahumbu. *Owen & Mzee: Best friends*. Photographs by Peter Greste. NY: Scholastic, 2007. Hatkoff, Isabella, Craig Hatkoff, and Paula Kahumbu. *Owen & Mzee: The language of friendship*. Photographs by Peter Greste. NY: Scholastic, 2007. McBrier, Page. *Beatrice's goat*. Illustrated by Lori Lohstoeter. NY: Atheneum Books for Young Readers, 2001. Milway, Katie Smith. *One hen: How one small loan made a big difference*. Illustrated by Eugenie Fernandes, 2008. Nivola, Claire A. *Planting the trees of Kenya: The story of Wangari Maathai*. New York: Frances Foster Books, 2008. Shoveller, Herb. *Ryan and Jimmy: And the well in Africa that brought them together*. Tonawanda, NY: Kids Can Press, 2006. Williams, Mary. *Brothers in hope: The story of the lost boys of Sudan*. Illustrated by R. Gregory Christie. NY: Lee & Low Books, 2005. Winter, Jeanette. *Mama: A true story in which a baby hippo loses his mama during a tsunami, but finds a new home, and a new mama*. Orlando: Harcourt, 2006. Winter, Jeanette. *Wangari's trees of peace: A true story from Africa*. Harcourt Brace Jovanovich, 2008. OTHER RESOURCES ON AFRICAN TOPICS: Arnold, Katya. *Elephants can paint too!* NY: Atheneum Books for Young Readers, 2005. Fontes, Justine and Ron Fontes. *A to Z Kenya*. NY: Children's Press, 2003. Lynch, Emma. *We're from Kenya*. Chicago: Heinemann, 2005. Rumford, James. *A giraffe's journey*. Boston: Houghton Mifflin, 2008.
Search for the Decays $B_s^0 \to \tau^+\tau^-$ and $B^0 \to \tau^+\tau^-$ R. Aaij et al.* (LHCb Collaboration) (Received 13 March 2017; revised manuscript received 25 April 2017; published 21 June 2017) A search for the rare decays $B_s^0 \to \tau^+\tau^-$ and $B^0 \to \tau^+\tau^-$ is performed using proton–proton collision data collected with the LHCb detector. The data sample corresponds to an integrated luminosity of 3 fb$^{-1}$ collected in 2011 and 2012. The $\tau$ leptons are reconstructed through the decay $\tau^- \to \pi^-\pi^+\pi^-\nu_\tau$. Assuming no contribution from $B^0 \to \tau^+\tau^-$ decays, an upper limit is set on the branching fraction $\mathcal{B}(B_s^0 \to \tau^+\tau^-) < 6.8 \times 10^{-3}$ at the 95% confidence level. If instead no contribution from $B_s^0 \to \tau^+\tau^-$ decays is assumed, the limit is $\mathcal{B}(B^0 \to \tau^+\tau^-) < 2.1 \times 10^{-3}$ at the 95% confidence level. These results correspond to the first direct limit on $\mathcal{B}(B_s^0 \to \tau^+\tau^-)$ and the world’s best limit on $\mathcal{B}(B^0 \to \tau^+\tau^-)$. DOI: 10.1103/PhysRevLett.118.251802 Processes where a $B$ meson decays into a pair of oppositely charged leptons are powerful probes in the search for physics beyond the Standard Model (SM). Recently, the first observation of the $B_s^0 \to \mu^+\mu^-$ decay was made [1,2] (the inclusion of charge-conjugate processes is implied throughout this Letter). Its measured branching fraction ($\mathcal{B}$) is compatible with the SM prediction [3] and imposes stringent constraints on theories beyond the SM. Complementing this result with searches for the tauonic modes $B \to \tau^+\tau^-$, where $B$ can be either a $B^0$ or a $B_s^0$ meson, is of great interest in view of the recent hints of lepton flavor nonuniversality obtained by several experiments. In particular the measurements of $R(D^{(*)}) = [\mathcal{B}(B^0 \to D^{(*)}\tau^+\nu_\tau)] / [\mathcal{B}(B^0 \to D^{(*)}\ell^+\nu_\ell)]$, where $\ell^+$ represents either a muon, an electron or both, are found to be larger than the SM prediction by 3.9 standard deviations ($\sigma$) [4], and the measurement of $R_K = [\mathcal{B}(B^+ \to K^+\mu^+\mu^-)] / [\mathcal{B}(B^+ \to K^+e^+e^-)]$ is 2.6$\sigma$ lower than the SM prediction [5]. Possible explanations for these and other [6] deviations from their SM expectations include leptoquarks, $W'/Z'$ bosons, and two-Higgs-doublet models (see, e.g., Refs. [7,8]). In these models, the $B \to \tau^+\tau^-$ branching fractions could be enhanced with respect to the SM predictions, $\mathcal{B}(B^0 \to \tau^+\tau^-) = (2.22 \pm 0.19) \times 10^{-8}$ and $\mathcal{B}(B_s^0 \to \tau^+\tau^-) = (7.73 \pm 0.49) \times 10^{-7}$ [3], by several orders of magnitude [8–12]. All minimal-flavor-violating models predict the same enhancement of $\mathcal{B}(B_s^0 \to \tau^+\tau^-)$ over $\mathcal{B}(B^0 \to \tau^+\tau^-)$ as in the SM. The experimental search for $B \to \tau^+\tau^-$ decays is complicated by the presence of at least two undetected neutrinos, originating from the decay of the $\tau$ leptons. The BABAR collaboration has searched for the $B^0 \to \tau^+\tau^-$ mode [13] and published an upper limit $\mathcal{B}(B^0 \to \tau^+\tau^-) < 4.10 \times 10^{-3}$ at the 90% confidence level (C.L.). There are currently no experimental results for the $B_s^0 \to \tau^+\tau^-$ mode, though its branching fraction can be indirectly constrained to be less than 3% at the 90% C.L. [14–16]. 
In this Letter, the first search for the rare decay $B_s^0 \to \tau^+\tau^-$ is presented, along with a search for the $B^0 \to \tau^+\tau^-$ decay. The analysis is performed with proton–proton collision data corresponding to integrated luminosities of 1.0 and 2.0 fb$^{-1}$ recorded with the LHCb detector at center-of-mass energies of 7 and 8 TeV, respectively. The $\tau$ leptons are reconstructed through the decay $\tau^- \to \pi^-\pi^+\pi^-\nu_\tau$, which proceeds predominantly through the decay chain $\tau^- \to a_1(1260)^- \nu_\tau$, $a_1(1260)^- \to \rho(770)^0\pi^-$ [17]. The branching fraction $\mathcal{B}(\tau^- \to \pi^-\pi^+\pi^-\nu_\tau)$ is $(9.31 \pm 0.05)\%$ [18]. Because of the final-state neutrinos, the $\tau^+\tau^-$ mass provides only a weak discrimination between signal and background, and cannot be used as a way to distinguish $B_s^0$ from $B^0$ decays. The number of signal candidates is obtained from a fit to the output of a multivariate classifier that uses a range of kinematic and topological variables as input. Data-driven methods are used to determine signal and background models. The observed signal yield is converted into a branching fraction using as a normalization channel the decay $B^0 \to D^-D_s^+$ [19,20], with $D^- \to K^+\pi^-\pi^-$ and $D_s^+ \to K^-K^+\pi^+$. The LHCb detector, described in detail in Refs. [21,22], is a single-arm forward spectrometer covering the pseudorapidity range $2 < \eta < 5$. The online event selection is performed by a trigger [23], which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. The hardware trigger stage requires events to have a muon with high transverse momentum ($p_T$) with respect to the beam line or a hadron, photon, or electron with high transverse energy in the calorimeters. For hadrons, the transverse energy threshold is around 3.5 GeV, depending on the data-taking conditions. The software trigger requires a two-, three-, or four-track secondary vertex with a significant displacement from the primary $pp$ interaction vertices (PVs). A multivariate classifier [24] is used for the identification of secondary vertices that are significantly displaced from the PVs, and are consistent with the decay of a $b$ hadron. At least one charged particle must have $p_T > 1.7$ GeV/$c$ and be inconsistent with originating from any PV. Simulated data are used to optimize the selection, obtain the signal model for the fit and determine the selection efficiencies. In the simulation, $pp$ collisions are generated using PYTHIA [25] with a specific LHCb configuration [26]. Decays of hadrons are described by EVTGEN [27], in which final-state radiation is generated using PHOTOS [28]. The interaction of the generated particles with the detector, and its response, are implemented using the GEANT4 toolkit [29] as described in Ref. [30]. The $\tau^- \to \pi^- \pi^+ \pi^- \nu_\tau$ decays are generated using the resonance chiral Lagrangian model [31] with a tuning based on the BABAR results for the $\tau^- \to \pi^- \pi^+ \pi^- \nu_\tau$ decays [32], implemented in the TAUOLA generator [33]. In the off-line selection of the candidate signal and normalization decays, requirements on the particle identification (PID) [34], track quality, and the impact parameter with respect to any PV are imposed on all charged final-state particles.
Three charged tracks, identified as pions for the $B \to \tau^+ \tau^-$ decays, and pions or kaons for the $B^0 \to D^- D_s^+$ decays, forming a good-quality vertex are combined to make intermediate $\tau$, $D^-$, and $D_s^+$ candidates. The kinematic properties of these candidates, such as momenta and masses, are calculated from the three-track combinations. The flight directions of the $\tau$, $D^-$, and $D_s^+$ candidates are estimated from their calculated momentum vectors. For the $\tau$ candidates this is a biased estimate due to the missing neutrinos. In turn, $B$-meson candidates are reconstructed from two oppositely charged $\tau$ candidates or from $D^-$ and $D_s^+$ candidates with decay vertices well separated from the PVs. The $B$-meson candidates are required to have $p_T > 2$ GeV/$c$, at least one $\tau$, $D^-$, or $D_s^+$ candidate with $p_T > 4$ GeV/$c$ and at least one pion or kaon with $p_T > 2$ GeV/$c$. No further selection requirements are imposed on the normalization mode. For each $\tau$ candidate, the two-dimensional distribution of the invariant masses $m_{\pi^+\pi^-}$ of the two oppositely charged two-pion combinations is divided into nine sectors, as illustrated in Fig. 1. Exploiting the intermediate $\rho(770)^0$ resonance of the $\tau$ decays, these sectors are used to define three regions. The signal region consists of $B$ candidates with both $\tau$ candidates in sector 5, and is used to determine the signal yield. The signal-depleted region, composed of $B$ candidates having at least one $\tau$ candidate in sectors 1, 3, 7, or 9, provides a sample used when optimizing the selection. The control region corresponds to $B$ candidates with one $\tau$ candidate in sectors 4, 5, or 8 and the other in sectors 4 or 8, and provides the background model. For the $B \to \tau^+ \tau^-$ modes, further requirements are imposed on two types of isolation variables that are able to discriminate signal from background from partially reconstructed decays with additional charged or neutral particles. The first class of isolation variables, based on the decision of a multivariate classifier trained on simulated signal and other $b$-hadron decays, discriminates against processes containing additional charged tracks that either make a good-quality vertex with any selected pion or $\tau$ candidate, or belong to the same $b$-hadron decay as the selected pion candidates. The second class of isolation variables is based on calorimeter activity due to neutral particles in a cone, defined in terms of the pseudorapidity and polar angle, centered on the $B$ candidate momentum. In addition to the isolation variables, a method to perform an analytic reconstruction of the $B \to \tau^+ \tau^-$ decay chain, described in detail in Refs. [35,36], has been developed. It combines geometrical information about the decay and mass constraints on the particles ($B$, $\tau$, and $\nu$) in the decay chain to calculate the $\tau$ momenta analytically. The possible solutions for the two $\tau$ momenta are found as solutions of a system of two coupled equations of second degree with two unknowns. The finite detector resolution and approximations made in the calculation prevent real solutions being found for a substantial fraction of the signal events. However, several intermediate quantities associated with the method are exploited to discriminate signal from background.
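To make the sector and region logic concrete, the following is a minimal sketch of the classification, not code from the analysis. The 3×3, row-major sector numbering and the use of the optimized boundaries of 615 and 935 MeV/$c^2$ (quoted later in the text) are assumptions made for illustration.

```python
# Sketch: assign tau candidates to sectors of the (m1, m2) dipion-mass plane
# and B candidates to the signal / control / signal-depleted regions.
M_LO, M_HI = 615.0, 935.0  # assumed rho(770)-band boundaries, MeV/c^2

def band(m):
    """Return 0, 1, or 2 for a dipion mass below, inside, or above the band."""
    return 0 if m < M_LO else (1 if m <= M_HI else 2)

def sector(m1, m2):
    """Sector 1..9 from the two pi+pi- masses of one tau candidate
    (row-major numbering over the 3x3 grid, an assumed convention)."""
    return 3 * band(m1) + band(m2) + 1

def region(tau1_sector, tau2_sector):
    s = {tau1_sector, tau2_sector}
    if s & {1, 3, 7, 9}:
        return "signal-depleted"   # >= 1 tau with both masses off the rho
    if s == {5}:
        return "signal"            # both taus in the central sector
    if s <= {4, 5, 8} and s & {4, 8}:
        return "control"
    return "other"

# Example: both taus have both dipion masses inside the rho(770) band.
print(region(sector(700.0, 800.0), sector(650.0, 900.0)))  # -> "signal"
```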
To make full use of the discrimination power present in the distributions of the selection variables, a requirement is added on the output of a neural network [37], built using seven variables: the $\tau^\pm$ candidate masses and decay times, a charged track isolation variable for the pions, a neutral isolation variable for the $B$ candidate, and one variable from the analytic reconstruction method, introduced in Ref. [36]. The classifier is trained on simulated $B \to \tau^+\tau^-$ decays, representing the signal, and data events from the signal-depleted region. In order to determine the signal yield, a binned maximum likelihood fit is performed on the output of a second neural network (NN), built with 29 variables and using the same training samples. The NN inputs include the eight variables from the analytic reconstruction method listed in Ref. [36], further isolation variables, as well as kinematic and geometrical variables. The NN output is transformed to obtain a flat distribution for the signal over the range $[0.0, 1.0]$, while the background peaks towards zero. Varying the two-pion invariant mass sector boundaries, the signal region is optimized for the $B_s^0 \to \tau^+\tau^-$ branching fraction limit using pseudoexperiments. The boundaries are set to 615 and 935 MeV/$c^2$. The overall efficiency of the selection, determined using simulated $B_{(s)}^0 \to \tau^+\tau^-$ decays, is approximately $2.2(2.4) \times 10^{-5}$, including the geometrical acceptance. Assuming the SM prediction, the number of $B_s^0 \to \tau^+\tau^-$ decays expected in the signal region is 0.02. After the selection, the signal, signal-depleted, and control regions contain, respectively, 16%, 13%, and 58% of the simulated signal decays. The corresponding fractions of selected candidates in data are 7%, 37%, and 47%. Most signal decays fall into the control region, but the signal region, which contains about 14 700 candidates in data after the full selection, is more sensitive due to its lower background contamination. For the fit, ten equally sized bins of NN output in the range $[0.0, 1.0]$ are considered, where the high NN region $[0.7, 1.0]$ was not investigated until the fit strategy was fixed. The signal model is taken from the $B_s^0 \to \tau^+\tau^-$ simulation, while the background model is taken from the data control region, correcting for the presence of expected signal events in this region. The fit model is given by $$N_{\text{data}}^{\text{SR}} = s \hat{N}_{\text{sim}}^{\text{SR}} + f_b \left( N_{\text{data}}^{\text{CR}} - s \frac{\epsilon^{\text{CR}}}{\epsilon^{\text{SR}}} \hat{N}_{\text{sim}}^{\text{CR}} \right),$$ where $N_{\text{sim(data)}}^{\text{SR}}$ ($N_{\text{sim(data)}}^{\text{CR}}$) is the NN output distribution in the signal (control) region from simulation (data), $s$ is the signal yield in the signal region, $f_b$ is a scaling factor for the background template, and $\epsilon^{\text{SR}}$ ($\epsilon^{\text{CR}}$) is the signal efficiency in the signal (control) region. The quantities $s$ and $f_b$ are left free in the fit. The corresponding normalized distributions $\hat{N}_{\text{sim}}^{\text{SR}}$, $\hat{N}_{\text{sim}}^{\text{CR}}$, and $\hat{N}_{\text{data}}^{\text{CR}}$ are shown in Fig. 2.
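Structurally, the fit of Eq. (1) is a two-parameter Poisson likelihood over the ten NN-output bins. The sketch below illustrates this with placeholder templates and yields (not the analysis inputs); only the efficiency ratio is derived from the quoted signal fractions (58% and 16%).

```python
# Sketch of the binned maximum-likelihood fit of Eq. (1): free parameters are
# the signal yield s and the background scale f_b. Inputs are placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
nbins = 10
nhat_sim_sr = np.full(nbins, 1.0 / nbins)   # flat signal template (by construction)
nhat_sim_cr = np.full(nbins, 1.0 / nbins)   # signal template in the control region
n_data_cr = rng.poisson(np.linspace(4000, 50, nbins)).astype(float)  # toy background
n_data_sr = rng.poisson(np.linspace(1500, 20, nbins)).astype(float)  # toy data
eps_ratio = 0.58 / 0.16   # epsilon^CR / epsilon^SR from the quoted signal fractions

def model(s, f_b):
    """Expected signal-region yield per NN bin, following Eq. (1)."""
    return s * nhat_sim_sr + f_b * (n_data_cr - s * eps_ratio * nhat_sim_cr)

def nll(params):
    mu = np.clip(model(*params), 1e-9, None)    # keep Poisson means positive
    return np.sum(mu - n_data_sr * np.log(mu))  # Poisson NLL up to a constant

res = minimize(nll, x0=[0.0, n_data_sr.sum() / n_data_cr.sum()],
               method="Nelder-Mead")
s_fit, fb_fit = res.x
print(f"signal yield s = {s_fit:.1f}, background scale f_b = {fb_fit:.3f}")
```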
The agreement between the background NN output distributions in the control and signal regions has been tested in different samples: in the data for the background-dominated NN output bins $[0.0, 0.7]$, in a generic $b\bar{b}$ simulated sample and in several specific simulated background modes (such as $B^0 \to D^- \pi^+ \pi^- \pi^+$ with $D^- \to K^0 \pi^- \pi^+ \pi^-$, or $B_s^0 \to D_s^- \pi^+ \pi^- \pi^+$ with $D_s^- \to \tau^- \nu_\tau$). Within the statistical uncertainty, the distributions have been found to agree with each other in all cases. The background in the control region can therefore be used to characterize the background in the signal region. Differences between the shapes of the background distribution in the signal and control regions of the data are the main sources of systematic uncertainties on the background model. These uncertainties are taken into account by allowing each bin in the $N_{\text{data}}^{\text{CR}}$ distribution to vary according to a Gaussian constraint. The width of this Gaussian function is determined by splitting the control region into two approximately equally populated samples and taking, for each bin, the maximum difference between the NN outputs of the two subregions and the unsplit sample. The splitting is constructed to have one region more signal-like and one region more backgroundlike. FIG. 2. (Left) Normalized NN output distribution in the signal ($\hat{N}_{\text{sim}}^{\text{SR}}$) and control ($\hat{N}_{\text{sim}}^{\text{CR}}$) regions for $B_s^0 \to \tau^+\tau^-$ simulated events. (Right) Normalized NN output distribution in the data control region, $\hat{N}_{\text{data}}^{\text{CR}}$. The uncertainties reflect the statistics of the (simulated) data. The signal can be mismodeled in the simulation. The $B^0 \to D^-D_s^+$ decay is used to compare data and simulation for the variables used in the NN. Ten variables are found to be slightly mismodeled and their distributions are corrected by weighting. The difference in the shape of the NN output distribution compared to the original unweighted sample is used to derive the associated systematic uncertainty. The fit procedure is validated with pseudoexperiments and is found to be unbiased. Assuming no signal contribution, the expected statistical (systematic) uncertainty on the signal yield is $^{+62}_{-40} \ (\ ^{+40}_{-43})$. The fit result on data is shown in Fig. 3 and gives a signal yield $s = -23^{+63}_{-53}\text{(stat)}^{+41}_{-40}\text{(syst)}$, where the split between the statistical and systematic uncertainties is based on the ratio expected from pseudoexperiments. The $B_s^0 \to \tau^+\tau^-$ signal yield is converted into a branching fraction using $\mathcal{B}(B_s^0 \to \tau^+\tau^-) = \alpha^s s$, with $$\alpha^s \equiv \frac{\epsilon^{D^-D_s^+}\, \mathcal{B}(B^0 \to D^-D_s^+)\, \mathcal{B}(D^+ \to K^-\pi^+\pi^+)\, \mathcal{B}(D_s^+ \to K^+K^-\pi^+)}{N_{D^-D_s^+}^{\text{obs}}\, \epsilon^{\tau^+\tau^-}\, [\mathcal{B}(\tau^- \to \pi^-\pi^+\pi^-\nu_\tau)]^2} \frac{f_d}{f_s},$$ (2) where $\epsilon^{\tau^+\tau^-}$ and $\epsilon^{D^-D_s^+}$ are the combined efficiencies of trigger, reconstruction, and selection of the signal and normalization channels. The branching fractions used are $\mathcal{B}(B^0 \to D^-D_s^+) = (7.5 \pm 1.1) \times 10^{-3}$ [19], $\mathcal{B}(D^- \to K^+\pi^-\pi^-) = (9.46 \pm 0.24)\%$ [18] and $\mathcal{B}(D_s^+ \to K^+K^-\pi^+) = (5.45 \pm 0.17)\%$ [18], and $f_s/f_d = 0.259 \pm 0.015$ [38] is the ratio of $B_s^0$ to $B^0$ production fractions.
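As a worked example of Eq. (2), the sketch below recomputes $\alpha^s$ from the quoted branching fractions, yield, and hadronization-fraction ratio. The normalization-channel efficiency is not quoted in the text, so `eps_dds` is a placeholder chosen to land near the quoted $\alpha^s = (4.07 \pm 0.70) \times 10^{-5}$.

```python
# Worked numeric example of Eq. (2); eps_dds is an assumed placeholder.
B_B2DDs    = 7.5e-3    # B(B0 -> D- Ds+), quoted value
B_D2Kpipi  = 9.46e-2   # B(D+ -> K- pi+ pi+), quoted value
B_Ds2KKpi  = 5.45e-2   # B(Ds+ -> K+ K- pi+), quoted value
B_tau3pi   = 9.31e-2   # B(tau- -> pi- pi+ pi- nu), quoted value
fs_over_fd = 0.259     # f_s / f_d, quoted value
N_DDs      = 10629     # observed B0 -> D- Ds+ yield, quoted value
eps_tautau = 2.4e-5    # approximate Bs -> tau tau efficiency, quoted value
eps_dds    = 6.0e-4    # ASSUMED normalization-channel efficiency (placeholder)

alpha_s = (eps_dds * B_B2DDs * B_D2Kpipi * B_Ds2KKpi) \
          / (N_DDs * eps_tautau * B_tau3pi**2) / fs_over_fd
print(f"alpha^s ~ {alpha_s:.2e}")  # ~ 4.1e-5, cf. the quoted (4.07 +/- 0.70) x 10^-5
```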
The efficiencies are determined using simulation, applying correction factors derived from data. The $B^0 \to D^-D_s^+$ yield, $N_{D^-D_s^+}^{\text{obs}}$, is obtained from a fit to the mass distribution, which has four contributions: the $B^0 \to D^-D_s^+$ component, modeled by a Hypatia function [39]; a combinatorial background component, described by an exponential function; and two partially reconstructed backgrounds, $B^0 \to D^{*-}D_s^+$ and $B^0 \to D^-D_s^{*+}$, modeled as in Ref. [40]. The resulting fit is shown in Fig. 4 and gives a yield of $N_{D^-D_s^+}^{\text{obs}} = 10629 \pm 114$, where the uncertainty is statistical. Uncertainties on $\alpha^s$ arise from the $B^0 \to D^-D_s^+$ fit model, the finite size of the simulated samples, the uncertainty from the corrections to the simulation, and external inputs. The latter contribution, which includes the branching fractions and hadronization fractions in Eq. (2), is dominant, giving a relative uncertainty of 17% on $\alpha^s$. The $B^0 \to D^-D_s^+$ fit model is varied using the sum of two Gaussian functions with a common mean and power-law tails instead of the Hypatia function for the signal, a second-order Chebyshev polynomial instead of an exponential function for the combinatorial background, and adding two other background components, from $B^0_s \to D^-D_s^{*+}$ and $B^0 \to a_1(1260)^-D_s^{*+}$ decays. The change in signal yield compared to the nominal fit is taken as a systematic uncertainty, adding the contributions from the four variations in quadrature. The overall relative uncertainty on $\alpha^s$ due to $N_{D^-D_s^+}^{\text{obs}}$ (including the fit uncertainty) is 1.7%. Corrections determined from $J/\psi \to \mu^+\mu^-$ and $D^0 \to K^-\pi^+$ data control samples are applied for the tracking, PID, and the hadronic hardware trigger efficiencies. The relative uncertainty on $\alpha^s$ due to selection efficiencies is 2.9%, taking into account both the limited size of the simulated samples and the systematic uncertainties. The normalization factor is found to be $\alpha^s = (4.07 \pm 0.70) \times 10^{-5}$.
FIG. 3. Distribution of the NN output in the signal region, $N_{\text{data}}^{\text{SR}}$ (black points), with the total fit result (blue line) and the background component (green line). The fitted $B_s^0 \to \tau^+\tau^-$ signal component is negative and is therefore shown multiplied by $-1$ (red line). For each bin of the signal and background components, the combined statistical and systematic uncertainty on the template is shown as a light-colored band. The difference between data and fit divided by its uncertainty (pull) is shown underneath.
FIG. 4. Invariant mass distribution of the reconstructed $B^0 \to D^-D_s^+$ candidates in data (black points), together with the total fit result (blue line) used to determine the $B^0 \to D^-D_s^+$ yield. The individual components are described in the text.
The shapes of the NN output distributions and the selection efficiencies depend on the parametrization used in the simulation to model the $\tau^- \to \pi^-\pi^+\pi^-\nu_\tau$ decay. The result obtained with the TAUOLA BABAR-tune model is therefore compared to available alternatives [41], which are based on CLEO data for the $\tau^- \to \pi^-\pi^0\pi^0\nu_\tau$ decay [42]. The selection efficiency for these alternative models can be up to 20% higher, due to different structures in the two-pion invariant mass, resulting in lower limits.
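Before turning to the limit extraction below, it may help to sketch how a fitted yield $s$ and a normalization factor $\alpha$ translate into an upper limit. The following uses a crude Gaussian approximation with a placeholder combined uncertainty, not the full CL$_s$ machinery of the analysis; it merely shows the mechanics of the hypothesis scan.

```python
# Sketch: Gaussian-approximation CLs upper limit from a fitted signal yield.
# sigma_s is an assumed rough combination of the quoted stat and syst errors.
import numpy as np
from scipy.stats import norm

s_obs, sigma_s = -23.0, 75.0   # fitted yield; placeholder combined uncertainty
alpha = 4.07e-5                # quoted normalization factor

def cls(s_hyp):
    p_sb = norm.cdf((s_obs - s_hyp) / sigma_s)  # P(s <= s_obs | signal + bkg)
    one_minus_pb = norm.cdf(s_obs / sigma_s)    # P(s <= s_obs | bkg only)
    return p_sb / one_minus_pb

# Scan signal-yield hypotheses; the limit is the first point with CLs < 0.05.
scan = np.linspace(0.0, 500.0, 5001)
s_up = scan[np.argmax(np.array([cls(x) for x in scan]) < 0.05)]
print(f"95% C.L. upper limit: s < {s_up:.0f}  ->  B < {alpha * s_up:.1e}")
# With these placeholder inputs the result is of the same order as the
# quoted B(Bs -> tau tau) < 6.8e-3 at the 95% C.L.
```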
Dependence of the NN signal output distribution on the $\tau$-decay model is found to be negligible. Since the alternative models are based on a different $\tau$ decay, the BABAR-tune model is chosen as default and no systematic uncertainty is assigned. The signal yield obtained from the likelihood fit is translated into an upper limit on the $B^0_s \to \tau^+\tau^-$ branching fraction using the CL$_s$ method [43,44]. Assuming no contribution from $B^0 \to \tau^+\tau^-$ decays, an upper limit is set on the $B^0_s \to \tau^+\tau^-$ branching fraction of $5.2(6.8) \times 10^{-3}$ at 90% (95%) C.L. This is the first experimental limit on $\mathcal{B}(B^0_s \to \tau^+\tau^-)$. The analysis is repeated for the $B^0 \to \tau^+\tau^-$ decay. The fit is performed by replacing the signal model with that derived from simulated $B^0 \to \tau^+\tau^-$ decays, giving $s = -15^{+67}_{-56}(\text{stat})^{+44}_{-42}(\text{syst})$ [36]. The expected statistical (systematic) uncertainty on the signal yield is $-64^{+41}_{-38}$. The corresponding normalization factor is $\alpha^d = (1.16 \pm 0.19) \times 10^{-5}$. The limit obtained is $\mathcal{B}(B^0 \to \tau^+\tau^-) < 1.6(2.1) \times 10^{-3}$ at 90% (95%) C.L., which constitutes a factor of 2.6 improvement with respect to the BABAR result [13] and is the current best limit on $\mathcal{B}(B^0 \to \tau^+\tau^-)$. We thank Jérôme Charles (CPT, Marseille, France) for fruitful discussions and help in developing the analytic reconstruction method. We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ, and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG, and MPG (Germany); INFN (Italy); FOM and NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MinES and FASO (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland), and OSC (USA). We are indebted to the communities behind the multiple open source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany), EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union), Conseil Général de Haute-Savoie, Labex ENIGMASS, and OCEVU, Région Auvergne (France), RFBR and Yandex LLC (Russia), GVA, XuntaGal, and GENCAT (Spain), Herchel Smith Fund, The Royal Society, Royal Commission for the Exhibition of 1851, and the Leverhulme Trust (United Kingdom). [1] V. Khachatryan et al. (CMS and LHCb Collaborations), Observation of the rare $B^0_s \to \mu^+\mu^-$ decay from the combined analysis of CMS and LHCb data, Nature (London) 522, 68 (2015). [2] R. Aaij et al. (LHCb Collaboration), Measurement of the $B^0_s \to \mu^+\mu^-$ branching fraction and effective lifetime and search for $B^0 \to \mu^+\mu^-$ decays, Phys. Rev. Lett. 118, 191801 (2017). [3] C. Bobeth, M. Gorbahn, T. Herrmann, M. Misiak, E. Stamou, and M. Steinhauser, $B_{s,d} \to \ell^+\ell^-$ in the Standard Model with Reduced Theoretical Uncertainty, Phys. Rev. Lett. 112, 101801 (2014). [4] Y. Amhis et al. 
We thank Jérôme Charles (CPT, Marseille, France) for fruitful discussions and help in developing the analytic reconstruction method. We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ, and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG, and MPG (Germany); INFN (Italy); FOM and NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MinES and FASO (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland), and OSC (USA). We are indebted to the communities behind the multiple open source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany), EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union), Conseil Général de Haute-Savoie, Labex ENIGMASS, and OCEVU, Région Auvergne (France), RFBR and Yandex LLC (Russia), GVA, XuntaGal, and GENCAT (Spain), Herchel Smith Fund, The Royal Society, Royal Commission for the Exhibition of 1851, and the Leverhulme Trust (United Kingdom).
[1] V. Khachatryan et al. (CMS and LHCb Collaborations), Observation of the rare $B^0_s \to \mu^+\mu^-$ decay from the combined analysis of CMS and LHCb data, Nature (London) 522, 68 (2015).
[2] R. Aaij et al. (LHCb Collaboration), Measurement of the $B^0_s \to \mu^+\mu^-$ branching fraction and effective lifetime and search for $B^0 \to \mu^+\mu^-$ decays, Phys. Rev. Lett. 118, 191801 (2017).
[3] C. Bobeth, M. Gorbahn, T. Herrmann, M. Misiak, E. Stamou, and M. Steinhauser, $B_{s,d} \to \ell^+\ell^-$ in the Standard Model with reduced theoretical uncertainty, Phys. Rev. Lett. 112, 101801 (2014).
[4] Y. Amhis et al. (Heavy Flavor Averaging Group Collaboration), Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2016, arXiv:1612.07233, http://www.slac.stanford.edu/xorg/hfag/.
[5] R. Aaij et al. (LHCb Collaboration), Test of lepton universality using $B^+ \to K^+\ell^+\ell^-$ decays, Phys. Rev. Lett. 113, 151601 (2014).
[6] R. Aaij et al. (LHCb Collaboration), Angular analysis of the $B^0 \to K^{*0}\mu^+\mu^-$ decay using 3 fb$^{-1}$ of integrated luminosity, J. High Energy Phys. 02 (2016) 104.
[7] A. Crivellin, G. D'Ambrosio, and J. Heeck, Addressing the LHC flavor anomalies with horizontal gauge symmetries, Phys. Rev. D 91, 075006 (2015).
[8] D. Bečirević, S. Fajfer, N. Košnik, and O. Sumensari, Leptoquark model to explain the $B$-physics anomalies, $R_K$ and $R_D$, Phys. Rev. D 94, 115021 (2016).
[9] A. Dighe and D. Ghosh, How large can the branching ratio of $B^0_s \to \tau^+\tau^-$ be?, Phys. Rev. D 86, 054023 (2012).
[10] R. Alonso, B. Grinstein, and J. M. Camalich, Lepton universality violation and lepton flavor conservation in $B$-meson decays, J. High Energy Phys. 10 (2015) 184.
[11] J. M. Cline, Scalar doublet models confront $\tau$ and $b$ anomalies, Phys. Rev. D 93, 075017 (2016).
[12] D. Bečirević, N. Košnik, O. Sumensari, and R. Z. Funchal, Palatable leptoquark scenarios for lepton flavor violation in exclusive $b \to s\ell_1\ell_2$ modes, J. High Energy Phys. 11 (2016) 035.
[13] B. Aubert et al. (BABAR Collaboration), A search for the rare decay $B^0 \to \tau^+\tau^-$ at BABAR, Phys. Rev. Lett. 96, 241802 (2006).
[14] Y. Grossman, Z. Ligeti, and E. Nardi, $B \to \tau^+\tau^-(X)$ decays: First constraints and phenomenological implications, Phys. Rev. D 55, 2768 (1997).
[15] A. Dighe, A. Kundu, and S. Nandi, Enhanced $B_s^0 - \bar{B}_s^0$ lifetime difference and anomalous like-sign dimuon charge asymmetry from new physics in $B_s^0 \to \tau^+\tau^-$, Phys. Rev. D 82, 031502 (2010).
[16] C. Bobeth and U. Haisch, New physics in $\Gamma_{12}^{s}$: $(\bar{s}b)(\bar{\tau}\tau)$ operators, Acta Phys. Pol. B 44, 127 (2013).
[17] S. Schael et al. (ALEPH Collaboration), Branching ratios and spectral functions of tau decays: Final ALEPH measurements and physics implications, Phys. Rep. 421, 191 (2005).
[18] C. Patrignani et al. (Particle Data Group), Review of particle physics, Chin. Phys. C 40, 100001 (2016).
[19] A. Zupanc et al. (Belle Collaboration), Improved measurement of $\bar{B}^0 \to D_s^-D^+$ and search for $\bar{B}^0 \to D_s^+D_s^-$ at Belle, Phys. Rev. D 75, 091102 (2007).
[20] B. Aubert et al. (BABAR Collaboration), Study of $B \to D^{(*)}D^{(*)}_{sJ}$ decays and measurement of $D_s^-$ and $D_{sJ}(2460)^-$ branching fractions, Phys. Rev. D 74, 031103 (2006).
[21] A. A. Alves Jr. et al. (LHCb Collaboration), The LHCb detector at the LHC, J. Instrum. 3, S08005 (2008).
[22] R. Aaij et al. (LHCb Collaboration), LHCb detector performance, Int. J. Mod. Phys. A 30, 1530022 (2015).
[23] R. Aaij et al., The LHCb trigger and its performance in 2011, J. Instrum. 8, P04022 (2013).
[24] V. V. Gligorov and M. Williams, Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree, J. Instrum. 8, P02013 (2013).
[25] T. Sjöstrand, S. Mrenna, and P. Skands, PYTHIA 6.4 physics and manual, J. High Energy Phys. 05 (2006) 026; A brief introduction to PYTHIA 8.1, Comput. Phys. Commun. 178, 852 (2008).
[26] I. Belyaev et al., Handling of the generation of primary events in Gauss, the LHCb simulation framework, J. Phys. Conf. Ser. 331, 032047 (2011).
[27] D. J. Lange, The EvtGen particle decay simulation package, Nucl. Instrum. Methods Phys. Res., Sect. A 462, 152 (2001).
[28] P. Golonka and Z. Was, PHOTOS Monte Carlo: A precision tool for QED corrections in $Z$ and $W$ decays, Eur. Phys. J. C 45, 97 (2006).
[29] J. Allison et al. (Geant4 Collaboration), Geant4 developments and applications, IEEE Trans. Nucl. Sci. 53, 270 (2006); S. Agostinelli et al. (Geant4 Collaboration), Geant4: A simulation toolkit, Nucl. Instrum. Methods Phys. Res., Sect. A 506, 250 (2003).
[30] M. Clemencic, G. Corti, S. Easo, C. R. Jones, S. Miglioranzi, M. Pappagallo, and P. Robbe, The LHCb simulation application, Gauss: Design, evolution and experience, J. Phys. Conf. Ser. 331, 032023 (2011).
[31] I. M. Nugent, T. Przedzinski, P. Roig, O. Shekhovtsova, and Z. Was, Resonance chiral Lagrangian currents and experimental data for $\tau^- \to \pi^-\pi^-\pi^+\nu_\tau$, Phys. Rev. D 88, 093012 (2013).
[32] I. M. Nugent, Invariant mass spectra of $\tau^- \to h^-h^-h^+\nu_\tau$ decays, Nucl. Phys. B, Proc. Suppl. 253–255, 38 (2014).
[33] N. Davidson, G. Nanava, T. Przedziński, E. Richter-Was, and Z. Was, Universal interface of TAUOLA: Technical and physics documentation, Comput. Phys. Commun. 183, 821 (2012).
[34] M. Adinolfi et al., Performance of the LHCb RICH detector at the LHC, Eur. Phys. J. C 73, 2431 (2013).
[35] A. Mordà, Ph.D. thesis, Aix-Marseille Université, CERN Report No. CERN-THESIS-2015-264, 2015 (unpublished).
[36] See Supplemental Material at http://link.aps.org/supplemental/10.1103/PhysRevLett.118.251802 for further details.
[37] M. Feindt, A neural Bayesian estimator for conditional probability densities, arXiv:physics/0402093.
[38] R. Aaij et al. (LHCb Collaboration), Measurement of the fragmentation fraction ratio $f_{s}/f_{d}$ and its dependence on $B$ meson kinematics, J. High Energy Phys. 04 (2013) 001; $f_{s}/f_{d}$ value updated in Report No. LHCb-CONF-2013-011.
[39] D. Martínez Santos and F. Dupertuis, Mass distributions marginalized over per-event errors, Nucl. Instrum. Methods Phys. Res., Sect. A 764, 150 (2014).
[40] R. Aaij et al. (LHCb Collaboration), First observations of $\bar{B}_s^0 \to D^+D^-$, $D_s^+D^-$ and $D^0\bar{D}^0$ decays, Phys. Rev. D 87, 092007 (2013).
[41] Z. Was and J. Zaremba, Study of variants for Monte Carlo generators of $\tau \to 3\pi\nu$ decays, Eur. Phys. J. C 75, 566 (2015).
[42] D. M. Asner et al. (CLEO Collaboration), Hadronic structure in the decay $\tau^- \to \nu_\tau\pi^-\pi^0\pi^0$ and the sign of the tau neutrino helicity, Phys. Rev. D 61, 012002 (1999).
[43] A. L. Read, Presentation of search results: The CL$_s$ technique, J. Phys. G 28, 2693 (2002).
[44] G. Cowan, K. Cranmer, E. Gross, and O. Vitells, Asymptotic formulae for likelihood-based tests of new physics, Eur. Phys. J. C 71, 1554 (2011); Erratum, Eur. Phys. J. C 73, 2501(E) (2013).

W. Barter, F. Baryshnikov, M. Baszczyk, V. Batozskaya, B. Batsukh, V. Battista, A. Bay, L. Beaucourt, J. Beddow, F. Bedeschi, I. Bediaga, A. Beiter, L. J. Bel, V. Bellee, N. Belloli, K. Belous, I. Belyaev, E. Ben-Haim, G. Bencivenni, S. Benson, S. Beranek, A. Berezhnoy, R. Bernet, A. Bertolin, C. Betancourt, F. Betti, M.-O. Bettler, M. van Beuzekom, Ia. Bezshyiko, S. Bifani, P. Billoir, A. Birnkraut, A. Bitadze, A. Bizzeti, T. Blake, F. Blanc, J. Blouw, S. Blusk, V. Bocci, T. Boettcher, A. Bondar, W. Bonivento, I. Bordyuzhin, A. Borgheresi, S. Borghi, M. Borisyak, M. Borsato, F. Bossu, M. Boubdir, T. J. V. Bowcock, E. Bowen, C. Bozzi, S. Braun, T. Britton, J. Brodzicka, E. Buchanan, C.
Burr, A. Bursche, J. Buytaert, S. Cadeddu, R. Calabrese, M. Calvi, M. Calvo Gomez, A. Camboni, P. Campana, D. H. Campora Perez, L. Capriotti, A. Carbone, G. Carboni, R. Cardinale, A. Cardini, P. Carniti, L. Carson, K. Carvalho Akiba, G. Casse, L. Cassina, L. Castillo Garcia, M. Cattaneo, G. Cavallero, R. Cenci, D. Chamont, M. Charles, Ph. Charpentier, G. Chatzikonstantinidis, M. Chefdeville, S. Chen, S.-F. Cheung, V. Chobanova, M. Chrzaszcz, A. Chubykin, X. Cid Vidal, G. Ciezarek, P. E. L. Clarke, M. Clemencic, H. V. Cliff, J. Closier, V. Coco, J. Cogan, E. Cogneras, V. Cogoni, L. Cojocariu, P. Collins, A. Comerma-Montells, A. Contu, A. Cook, G. Coombs, S. Coquereau, G. Corti, M. Corvo, C. M. Costa Sobral, B. Couturier, G. A. Cowan, D. C. Craik, A. Crocombe, M. Cruz Torres, S. Cunliffe, R. Currie, C. D'Ambrosio, F. Da Cunha Marinho, E. Dall'Occo, J. Dalseno, P. N. Y. David, A. Davis, K. De Bruyn, S. De Capua, M. De Cian, J. M. De Miranda, L. De Paula, M. De Serio, P. De Simone, C. T. Dean, D. Decamp, M. Deckenhoff, L. Del Buono, H.-P. Dembinski, M. Demmer, A. Dendek, D. Derkach, O. Deschamps, F. Dettori, B. Dey, A. Di Canto, P. Di Nezza, H. Dijkstra, F. Dordei, M. Dorigo, A. Dosil Suárez, A. Dovbnya, K. Dreimanis, L. Dufour, G. Dujany, K. Dungs, P. Durante, R. Dzhelyadin, M. Dziewiecki, A. Dziurda, A. Dzyuba, N. Déléage, S. Easo, M. Ebert, U. Egede, V. Egorychev, S. Eidelman, S. Eisenhardt, U. Eitschberger, R. Ekelhof, L. Eklund, S. Ely, S. Esen, H. M. Evans, T. Evans, A. Falabella, N. Farley, S. Farry, D. Fazzini, D. Ferguson, G. Fernandez, A. Fernandez Prieto, F. Ferrari, F. Ferreira Rodrigues, M. Ferro-Luzzi, R. Filippov, R. A. Fini, M. Fiore, M. Fiorini, M. Firlej, C. Fitzpatrick, T. Fiutowski, F. Fleuret, K. Fohl, M. Fontana, F. Fontanelli, D. C. Forshaw, R. Forty, V. Franco Lima, M. Frank, C. Frei, J. Fu, W. Funk, E. Furfaro, C. Färber, A. Gallas Torreira, D. Galli, S. Gallorini, S. Gambetta, M. Gandelman, P. Gandini, Y. Gao, L. M. Garcia Martin, J. García Pardiñas, J. Garra Tico, L. Garrido, P. J. Garsed, D. Gascon, C. Gaspar, L. Gavardi, G. Gazzoni, D. Gerick, E. Gersabeck, M. Gersabeck, T. Gershon, Ph. Ghez, S. Giani, V. Gibson, O. G. Girard, L. Giubega, K. Gizdov, V. V. Gligorov, D. Golubkov, A. Golutvin, A. Gomes, I. V. Gorelov, C. Gotti, E. Govorkova, R. Graciani Diaz, L. A. Granado Cardoso, E. Graugés, E. Graverini, G. Graziani, A. Grecu, R. Greim, P. Griffith, L. Grillo, B. R. Gruberg Cazon, O. Grünberg, E. Gushchin, Yu. Guz, T. Gys, C. Göbel, T. Hadavizadeh, C. Hadjivassiliou, G. Haefeli, C. Haen, S. C. Haines, B. Hamilton, X. Han, S. Hansmann-Menzemer, N. Harnew, S. T. Harnew, J. Harrison, M. Hatch, J. He, A. Heister, K. Hennessy, P. Henrard, L. Henry, E. van Herwijnen, M. Heß, A. Hicheur, D. Hill, C. Hombach, H. Hopchev, Z.-C. Huard, W. Hulsbergen, T. Humair, M. Hushchyn, D. Hutchcroft, M. Idzik, P. Ilten, R. Jacobsson, J. Jalocha, E. Jans, A. Jawahery, F. Jiang, M. John, C. R. Jones, C. Joram, B. Jost, N. Jurik, S. Kandybei, M. Karacson, J. M. Kariuki, S. Karodia, M. Kecke, M. Kelsey, M. Kenzie, T. Ketel, E. Khairullin, B. Khanji, C. Khurewathanakul, T. Kirn, S. Klaver, K. Klimaszewski, T. Klimkovich, S. Koliiev, M. Kolpin, I. Komarov, R. Kopecna, P. Koppenburg, A. Kosmyntseva, S. Kotriakhova, A. Kozachuk, M. Kozeiha, L. Kravchuk, M. Kreps, P. Krokovny, F. Kruse, W. Krzemien, W. Kucewicz, M. Kucharczyk, V. Kudryavtsev, A. Kuonen, K. Kurek, T. Kvaratskheliya, D. Lacarrere, G. Lafferty, A. Lai, G. Lanfranchi, C. Langenbruch, T. Latham, C. Lazzeroni, R. Le Gac, J. van Leerdam, A.
Leflat, L. Lefrançois, R. Lefèvre, F. Lemaitre, E. Lemos Cid, O. Leroy, T. Lesiak, B. Leverington, T. Li, Y. Li, Z. Li, T. Likhomanenko, R. Lindner, F. Lionetto, X. Liu, D. Loh, J. Longstaff, J. H. Lopes, D. Lucchesi, M. Lucio Martinez, H. Luo, A. Lupato, E. Luppi, O. Lupton, A. Lusiani, X. Lyu, F. Machefert, F. Maciuc, O. Maev, K. Maguire, S. Malde, A. Malinin, T. Maltsev, G. Manca, G. Mancinelli, P. Manning, J. Maratas, J. F. Marchand, U. Marconi, C. Marin Benito, M. Marinangeli, P. Marino, J. Marks, G. Martellotti, M. Martin, M. Martinelli, D. Martinez Santos, F. Martinez Vidal, D. Martins Tostes, L. M. Massacrier, A. Massafferri, R. Matev, A. Mathad, Z. Mathe, C. Matteuzzi, A. Mauri, E. Maurice, B. Maurin, A. Mazurov, M. McCann, A. McNab, R. McNulty, B. Meadows, F. Meier, D. Melnychuk, M. Merk, A. Merli, E. Michielin, D. A. Milanes, M.-N. Minard, D. S. Mitzel, A. Mogini, J. Molina Rodriguez, I. A. Monroy, S. Monteil, M. Morandin, A. Mordà, M. J. Morello, O. Morgunova, J. Moron, A. B. Morris, R. Mountain, F. Muheim, M. Mulder, M. Mussini, D. Müller, K. Müller, V. Müller, P. Naik, T. Nakada, R. Nandakumar, A. Nandi, I. Nasteva, M. Needham, N. Neri, S. Neubert, N. Neufeld, M. Neuner, T. D. Nguyen, C. Nguyen-Mau, S. Nieswand, R. Niet, N. Nikitin, T. Nikodem, A. Nogay, A. Novoselov, D. P. O'Hanlon, A. Oblakowska-Mucha, V. Obraztsov, S. Ogilvy, R. Oldeman, C. J. G. Onderwater, A. Ossowska, J. M. Otalora Goicoechea, P. Owen, R. P. Pais, A. Palano, M. Palutan, A. Papanestis, M. Pappagallo, L. L. Pappalardo, C. Pappenheimer, W. Parker, C. Parkes, G. Passaleva, A. Pastore, M. Patel, C. Patrignani, A. Pearce, A. Pellegrino, G. Penso, M. Pepe Altarelli, S. Perazzini, P. Perret, L. Pescatore, K. Petridis, A. Petrolini, A. Petrov, M. Petruzzo, E. Picatoste Olloqui, B. Pietrzyk, M. Pikies, D. Pinci, A. Pistone, A. Piucci, V. Placinta, S. Playfer, M. Plo Casasus, T. Poikela, F. Polci, M. Poli Lener, A. Poluektov, I. Polyakov, E. Polycarpo, G. J. Pomery, S. Ponce, A. Popov, D. Popov, B. Popovici, S. Poslavskii, C. Potterat, E. Price, J. Prisciandaro, C. Prouve, V. Pugatch, A. Puig Navarro, G. Punzi, C. Qian, W. Qian, R. Quagliani, B. Rachwal, J. H. Rademacker, M. Rama, M. Ramos Pernas, M. S. Rangel, I. Raniuk, F. Ratnikov, G. Raven, F. Redi, S. Reichert, A. C. dos Reis, C. Remon Alepuz, V. Renaudin, S. Ricciardi, S. Richards, M. Rihl, K. Rinnert, V. Rives Molina, P. Robbe, A. B. Rodrigues, E. Rodrigues, J. A. Rodriguez Lopez, P. Rodriguez Perez, A. Rogozhnikov, S. Roiser, A. Rollings, V. Romanovskiy, A. Romero Vidal, J. W. Ronayne, M. Rotondo, M. S. Rudolph, T. Ruf, P. Ruiz Valls, J. J. Saborido Silva, E. Sadykhov, N. Sagidova, B. Saitta, V. Salustino Guimaraes, D. Sanchez Gonzalo, C. Sanchez Mayordomo, B. Sanmartin Sedes, R. Santacesaria, C. Santamarina Rios, M. Santimaria, E. Santovetti, A. Sarti, C. Satriano, A. Satta, D. Saunders, D. Savrina, S. Schael, M. Schellenberg, M. Schiller, H. Schindler, M. Schlupp, M. Schmelling, T. Schmelzer, B. Schmidt, O. Schneider, A. Schopper, H. F. Schreiner, K. Schubert, M. Schubiger, M.-H. Schune, R. Schwemmer, B. Sciascia, A. Sciubba, A. Semennikov, A. Sergi, N. Serra, J. Serrano, L. Sestini, P. Seyfert, M. Shapkin, Y. Shcheglov, T. Shears, L. Shekhtman, V. Shevchenko, B. G. Siddi, R. Silva Coutinho, L. Silva de Oliveira, G. Simi, S. Simone, M. Sirendi, N. Skidmore, T. Skwarnicki, E. Smith, I. T. Smith, J. Smith, M. Smith, I. Soares Lavra, M. D. Sokoloff, F. J. P. Soler, B. Souza De Paula, B. Spaan, P. Spradlin, S. Sritharan, F. Stagni, M. Stahl, S. Stahl, P. Stefko, S.
Stefkova, O. Steinkamp, S. Stemmler, O. Stenyakin, H. Stevens, S. Stoica, S. Stone, B. Storaci, S. Stracka, M. E. Stramaglia, M. Straticiuc, U. Straumann, L. Sun, W. Sutcliffe, K. Swientek, V. Syropoulos, M. Szczekowski, T. Szumlak, S. T'Jampens, A. Tayduganov, T. Tekampe, G. Tellarini, F. Teubert, E. Thomas, J. van Tilburg, M. J. Tilley, V. Tisserand, M. Tobin, S. Tolk, L. Tomassetti, D. Tonelli, S. Topp-Joergensen, F. Toriello, R. Tourinho Jadallah Aoude, E. Tournefier, S. Tourneur, K. Trabelsi, M. Traill, M. T. Tran, M. Tresch, A. Trisovic, A. Tsaregorodtsev, P. Tsopelas, A. Tully, N. Tuning, A. Ukleja, A. Ustyuzhanin, U. Uwer, C. Vacca, V. Vagnoni, A. Valassi, S. Valat, G. Valenti, R. Vazquez Gomez, P. Vazquez Regueiro, S. Vecchi, M. van Veghel, J. J. Velthuis, M. Veltri, G. Veneziano, A. Venkateswaran, T. A. Verlage, M. Vernet, M. Vesterinen, J. V. Viana Barbosa, B. Viaud, D. Vieira, M. Vieites Diaz, H. Viemann, X. Vilasis-Cardona, M. Vitti, V. Volkov, A. Vollhardt, B. Voneki, A. Vorobyev, V. Vorobyev, C. Voß, J. A. de Vries, C. Vázquez Sierra, R. Waldi, C. Wallace, R. Wallace, J. Walsh, J. Wang, D. R. Ward, H. M. Wark, N. K. Watson, D. Websdale, A. Weiden, M. Whitehead, J. Wicht, G. Wilkinson, M. Wilkinson, M. Williams, M. P. Williams, M. Williams, T. Williams, F. F. Wilson, J. Wimberley, M. A. Winn, J. Wishahi, W. Wislicki, M. Witek, G. Wormser, S. A. Wotton, K. Wraight, K. Wyllie, Y. Xie, Z. Xing, Z. Xu, Z. Yang, Z. Yang, Y. Yao, H. Yin, Y. Yu, X. Yuan, O. Yushchenko, K. A. Zarebski, M. Zavertyaev, L. Zhang, Y. Zhang, A. Zhelezov, Y. Zheng, X. Zhu, V. Zhukov, and S. Zucchelli (LHCb Collaboration) 1 Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil 2 Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil 3 Center for High Energy Physics, Tsinghua University, Beijing, China 4 LAPP, Université Savoie Mont-Blanc, CNRS/IN2P3, Annecy-Le-Vieux, France 5 Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France 6 CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France 7 LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France 8 LPNHE, Université Pierre et Marie Curie, Université Paris Diderot, CNRS/IN2P3, Paris, France 9 I.
Physikalisches Institut, RWTH Aachen University, Aachen, Germany 10 Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany 11 Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany 12 Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany 13 School of Physics, University College Dublin, Dublin, Ireland 14 Sezione INFN di Bari, Bari, Italy 15 Sezione INFN di Bologna, Bologna, Italy 16 Sezione INFN di Cagliari, Cagliari, Italy 17 Sezione INFN di Ferrara, Ferrara, Italy 18 Sezione INFN di Firenze, Firenze, Italy 19 Laboratori Nazionali dell’INFN di Frascati, Frascati, Italy 20 Sezione INFN di Genova, Genova, Italy 21 Sezione INFN di Milano Bicocca, Milano, Italy 22 Sezione INFN di Milano, Milano, Italy 23 Sezione INFN di Padova, Padova, Italy 24 Sezione INFN di Pisa, Pisa, Italy 25 Sezione INFN di Roma Tor Vergata, Roma, Italy 26 Sezione INFN di Roma La Sapienza, Roma, Italy 27 Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland 28 AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland 29 National Center for Nuclear Research (NCBJ), Warsaw, Poland 30 Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania 31 Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia 32 Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia 33 Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia 34 Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN), Moscow, Russia 35 Yandex School of Data Analysis, Moscow, Russia 36 Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia 37 Institute for High Energy Physics (IHEP), Protvino, Russia 38 ICCUB, Universitat de Barcelona, Barcelona, Spain 39 Universidad de Santiago de Compostela, Santiago de Compostela, Spain 40 European Organization for Nuclear Research (CERN), Geneva, Switzerland 41 Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland 42 Physik-Institut, Universität Zürich, Zürich, Switzerland 43 Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands 44 Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, Netherlands 45 NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine 46 Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine 47 University of Birmingham, Birmingham, United Kingdom 48 H.H. 
Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom 49 Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom 50 Department of Physics, University of Warwick, Coventry, United Kingdom 51 STFC Rutherford Appleton Laboratory, Didcot, United Kingdom 52 School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom 53 School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom 54 Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom 55 Imperial College London, London, United Kingdom 56 School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom 57 Department of Physics, University of Oxford, Oxford, United Kingdom 58 Massachusetts Institute of Technology, Cambridge, Massachusetts, USA 59 University of Cincinnati, Cincinnati, Ohio, USA 60 University of Maryland, College Park, Maryland, USA 61 Syracuse University, Syracuse, New York, USA 62 Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil (associated with Institution Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil) 63 University of Chinese Academy of Sciences, Beijing, China (associated with Institution Center for High Energy Physics, Tsinghua University, Beijing, China) 64 School of Physics and Technology, Wuhan University, Wuhan, China (associated with Institution Center for High Energy Physics, Tsinghua University, Beijing, China) 65 Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China (associated with Institution Center for High Energy Physics, Tsinghua University, Beijing, China) 66 Departamento de Fisica, Universidad Nacional de Colombia, Bogota, Colombia (associated with Institution LPNHE, Université Pierre et Marie Curie, Université Paris Diderot, CNRS/IN2P3, Paris, France) 67 Institut für Physik, Universität Rostock, Rostock, Germany (associated with Institution Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany) 68 National Research Centre Kurchatov Institute, Moscow, Russia (associated with Institution Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia) 69 Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain (associated with Institution ICCUB, Universitat de Barcelona, Barcelona, Spain) 70 Van Swinderen Institute, University of Groningen, Groningen, Netherlands (associated with Institution Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands) a Also at Università di Ferrara, Ferrara, Italy b Also at P.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia. c Also at Università di Milano Bicocca, Milano, Italy. d Also at Università di Modena e Reggio Emilia, Modena, Italy. e Also at Novosibirsk State University, Novosibirsk, Russia. f Also at LIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain. g Also at Università di Bologna, Bologna, Italy. h Also at Università di Roma Tor Vergata, Roma, Italy. i Also at Università di Genova, Genova, Italy. j Also at Scuola Normale Superiore, Pisa, Italy. k Also at Università di Cagliari, Cagliari, Italy. l Also at Università di Bari, Bari, Italy. m Also at Laboratoire Leprince-Ringuet, Palaiseau, France. n Also at Università degli Studi di Milano, Milano, Italy. o Also at Universidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil. 
p Also at AGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Kraków, Poland. q Also at Università di Padova, Padova, Italy. r Also at Iligan Institute of Technology (IIT), Iligan, Philippines. s Also at Hanoi University of Science, Hanoi, Viet Nam. t Also at Università di Pisa, Pisa, Italy. u Also at Università di Roma La Sapienza, Roma, Italy. v Also at Università della Basilicata, Potenza, Italy. w Also at Università di Urbino, Urbino, Italy.
The 4th Down Syndrome Total Medical Care Forum
Sunday, March 7, 2010, 13:00-16:00, Memorial Auditorium, Nagasaki University School of Medicine
Host: Department of Pediatrics, Nagasaki University School of Medicine
Co-host: Society for Supporting Children and Adults with Chromosomal Disorders (Bambi no Kai)

Program
Greeting: Hiroyuki Moriuchi, Professor of Pediatrics, Nagasaki University School of Medicine
Part 1 (Chair: Tatsuro Kondoh)
(1) The natural history of Down syndrome, 13:00-13:30. Tadashi Matsumoto, Professor, School of Health Sciences, Nagasaki University School of Medicine
(2) Psychiatric issues in Down syndrome, 13:30-14:00. Yoshibumi Nakane, Professor Emeritus of Neuropsychiatry, Nagasaki University School of Medicine
(3) Aricept (donepezil) therapy for patients with Down syndrome, 14:00-14:30. Tatsuro Kondoh, Director of Clinical Services, Misakaenosono Mutsumi-no-Ie, a facility for children and adults with severe motor and intellectual disabilities
Break (10 minutes)
Part 2: Dance by the Bambies, 14:40-15:00
Part 3 (Chair: Hiroyuki Moriuchi): The current status of Down syndrome in the United States, 15:00-16:00. Dr. Karen L. Summar, Medical Director of the Jane and Richard Thomas Center for Down Syndrome

Greeting (Hiroyuki Moriuchi, Professor of Pediatrics, Nagasaki University School of Medicine): Thanks to the cooperation of many people, we are able to hold this fourth Down Syndrome Total Medical Care Forum. Since the first forum was held on June 18, 2006, I am delighted that this exchange of information on the many aspects of medical care for people with Down syndrome has been able to continue. The theme this time is "long-term perspectives for people with Down syndrome, including mental health issues." The number of people with Down syndrome is estimated to be increasing gradually: about 2,500 babies with Down syndrome are now born each year, and average life expectancy may already exceed 60 years. If so, comprehensive health management covering both mind and body will become ever more important. At this forum we will first hear about the questionnaire survey on the "natural history" of Down syndrome that was carried out to grasp the current situation of people with Down syndrome. It was planned from around last autumn, collection was completed at the end of last year, and it should give an overview centered on residents of Nagasaki Prefecture; I would like to take this opportunity to thank the many people who cooperated. Analyzing the situation revealed by this survey should bring into view the issues we need to address. Next, because various mental health problems can be encountered in adulthood, we have also arranged a talk on how to think about such situations. In addition, we will hear the latest on donepezil hydrochloride therapy, which the Nagasaki University Department of Pediatrics has pioneered nationwide, and on how it is contributing to the quality of life of people with Down syndrome. The dance unit "Bambies," made up of children of the Society for Supporting Children and Adults with Chromosomal Disorders (Bambi no Kai), will also give a performance; please do take the chance to see how actively they are engaged. Finally, we have invited Dr. Karen L. Summar from the Jane and Richard Thomas Center for Down Syndrome (USA) to tell us about the situation of people with Down syndrome in the United States. I sincerely hope that we can use this opportunity to learn many things together, exchange information, and deepen our mutual understanding.

I-1. The natural history of Down syndrome. Tadashi Matsumoto, Professor, School of Health Sciences, Nagasaki University School of Medicine
I-2. Psychiatric issues in Down syndrome. Yoshibumi Nakane, Professor Emeritus of Neuropsychiatry, Nagasaki University School of Medicine
I-3. Aricept therapy for patients with Down syndrome. Tatsuro Kondoh, Director of Clinical Services, Misakaenosono Mutsumi-no-Ie
III. The current status of Down syndrome in the United States. Dr. Karen L. Summar, Medical Director of the Jane and Richard Thomas Center for Down Syndrome

Translational research is defined by the United States National Institutes of Health (NIH) as scientific discoveries that are translated into clinical care. Historically, scientific discoveries began with basic, "bench" science and then progressed to the clinical level, "the bedside." Now it is understood that translational research is a two-way street: in addition to the model above, clinical scientists can make observations about the natural history of illness that, in turn, can stimulate novel scientific discovery.

The field of rare disease research is a fertile area for translational research. Rare diseases have very clearly defined phenotypes (symptoms) which can be observed for between-group comparisons and differences in epigenetic effects. This information can then be used to inform basic scientists pursuing novel discoveries.

Down syndrome is considered by some to be a rare disease. There are 400,000 people with Down syndrome living in the United States. In order for translational research to benefit people with rare disorders, including Down syndrome, there is a need to develop research infrastructure. This infrastructure must include formal as well as informal elements.

Dr. Karen Summar's work on one of the formal elements is the development of a patient registry for Down syndrome.
This will be a prospective, longitudinal study of individuals with Down syndrome. The information collected will be used to more clearly study the natural history of Down syndrome, particularly in adults, where almost nothing is known about the medical problems suffered.

In addition, Dr. Summar is interested in developing informal research infrastructure around Down syndrome. To this aim, she is working with a number of collaborators from many different fields.

Dr. Summar will discuss her plans for a patient registry as well as some of the results of collaborative research on which she is currently working.

Slide 1: I would like to thank you all for inviting me to speak about Down syndrome.

Slide 2: The objectives of today's lecture are the following:
- To review what is known about Down syndrome
- To discuss special education as it exists in the US
- To discuss clinical and translational research about Down syndrome
(*Translational research: a term that came into use in the late twentieth century for the mutual translation and fusion of rigorous deductive science with the long-established empirical science of clinical medicine; in other words, the process of planning and carrying out the series of studies needed to "translate" new findings from basic research into clinically useful applications.)

Slide 3: Down syndrome is the most common genetic cause of intellectual disability. There are 400,000 people with DS living in the US currently. Life expectancy has significantly increased for people with DS in the past 20 to 30 years. DS is associated with both increased and decreased frequency of secondary medical conditions. People with DS are a population that is underserved in both medical care and research.

Slide 4: John Langdon Down, for whom the syndrome is named, described in great detail the classic phenotype in his treatise "An Ethnic Classification of Idiots." He was the resident physician of the Royal Earlswood Asylum for the feeble-minded. He was also the grandfather of a child with DS. (*Resident: a physician in live-in specialty training after completing an internship.) Down was very observant and descriptive; however, his hypothesis that maternal tuberculosis caused the syndrome was not quite correct.

Slide 5: Jerome Lejeune was the physician-scientist who discovered the etiology of DS. In 1959, he reported that this syndrome was due to a duplication of human chromosome 21.

Slide 6: This slide shows a karyotype of typical trisomy 21. This is what Lejeune first saw with a light microscope (although his patient was a boy).

Slide 7: We now know that the cause of trisomy 21 can be due to several mechanisms.
Trisomy 21, which is responsible for approximately 95% of DS, is caused by nondisjunction, which can occur during meiosis or mitosis. This slide also shows that mosaicism is responsible for DS in approximately 2.5% of cases and that translocation of portions of chromosomes is responsible for another 2.5%. Recently, other descriptions have been made, including partial trisomies, ring chromosomes, and isochromosomes.

Slide 8: Most cases of DS are caused by nondisjunction that occurs during maternal meiosis. There is an observed correlation between maternal age and risk of DS.

Slide 9: Approximately 2.5% of cases of DS are caused by a Robertsonian translocation, named for R. B. Robertson, the geneticist who first described this finding in grasshoppers in 1916. A Robertsonian translocation is a non-reciprocal change involving two chromosomes from two different pairs of chromosomes. If the translocation is balanced (no net change of genetic information), the individual has a normal phenotype. If the translocation is not balanced, the individual will have DS.

Slide 10: The presence of a translocation becomes significant in genetic counseling. If a baby with DS has a karyotype revealing a non-balanced translocation, it is important to obtain karyotypes of the parents. If a parent is the carrier of a balanced translocation, the risk of having another baby with DS is significantly increased.

Slide 11: The physical phenotype of DS has been known since the time of John Langdon Down. (read slide)

Slide 12: (see slide)

Slide 13: In infants, particularly those who are not of Caucasian background, it can be difficult to decide if an infant has DS. I find Hall's criteria helpful in determining whether or not DS is present. One additional trick is to make the baby cry; this tends to accentuate the facial features of DS.

Slide 14: (see slide)

Slide 15: This is a cartoon of an atrioventricular canal defect, the most common defect in DS that requires surgery. (point out the defects)

Slide 16: This is a ventricular septal defect. (point out the defect)

Slide 17: This is a tetralogy of Fallot. This occurs less frequently in DS; it is one of the few cyanotic heart lesions seen in DS.

Slide 18: This slide demonstrates the "double bubble" sign of duodenal stenosis. (pointer) This is the air in the stomach of the baby. This second bubble is air in the intestine past the obstruction. This area in the middle is the obstruction due to stenosis or atresia.
Slide 19: This is an x-ray of the abdomen demonstrating the findings of Hirschsprung's disease. (pointer) Here you will notice an absence of intestinal gas.

Slide 20: (see slide)

Slide 21: In addition to the many physical features of DS, there is a behavioral phenotype as well. Many parents have told me that their child with DS is stubborn. I answer by saying that the stubborn gene is on chromosome 21. In all seriousness, however, children with DS are frequently perceived as sweet, loving children without any behavioral concerns. I agree that they are sweet and loving, but up to 20-40% will have significant behavioral problems, and many of these problems will persist into adulthood. 8 to 10% of children with DS will meet diagnostic criteria for autism spectrum disorder. Most of the behavior problems in DS are due to internalizing symptoms, such as depression and anxiety. Few children with DS have externalizing behaviors such as unprovoked aggression. These behaviors can occur at any time, but most commonly begin in late adolescence and young adulthood. 50% of adults with DS will develop Alzheimer's disease by the age of 50 years.

Slide 22: Due, in part, to improved medical care, including surgical repair of congenital heart and gastrointestinal malformations, life expectancy has greatly increased in the past 20 to 30 years for people with DS. A baby born in 2010 with DS can be expected to live to 60 or 70 years in the US. These graphs show the result of a study conducted by the Centers for Disease Control in the US. The investigators reviewed death certificates of 17,000 people with DS and compared them to the death certificates of 17,000 people without DS during the years 1983 until 1997. They found that the median age of death had increased from 29 years to 59 years in Caucasians with DS. (point to slide) They also found that not all racial groups benefited equally: the median age of death for blacks with DS in the US is approximately 20 years, and 10 years for other racial groups. This raises a question about access to proper medical care, or possibly a genetic variation that has not yet been identified.

Slide 23: The same CDC study looked at the causes of death and found that there is both an increased and decreased frequency of death due to secondary illnesses. As expected, there are more deaths in people with DS due to congenital heart disease, dementia, leukemia, and seizures than in the general population. (point to slide)
However, this study also found fewer than expected deaths due to atherosclerosis and solid tumors. (point to slide)

Slide 24: Now we will change directions and talk about the educational system for children with DS in the US. (read slide)

Slide 25: (see slide)

Slide 26: (see slide)

Slide 27: The phrase "individualized education plan," or IEP, is used to describe a legally binding contract between a child with a disability and the public school system. The term is also used to refer to the meeting at which the terms of this contract are determined. (read slide)

Slide 28: The previous slide described the legal contract. Now I will discuss the IEP meeting. (read slide)

Slide 29: (see slide)

Slide 30: Now I would like to talk about clinical and translational research in DS.

Slide 31: Because people with DS are living longer than ever, we know almost nothing about the secondary illnesses that affect adults with DS. People with DS are frequently excluded from clinical trials for new medicines. This occurs because they and their families are unaware of such trials or because investigators choose to exclude them. Because trisomy 21 is a unique biological model, there is no reason to assume that people with DS will respond to medicines in the same manner as people who do not have trisomy 21. In fact, we have evidence that they do not always tolerate the same doses of medications that others tolerate. For these reasons, my primary research focus has been on developing a patient registry for DS. This registry would serve two primary purposes: 1. collection of prospective, longitudinal data about secondary illnesses that occur in a large population of people with DS (there is currently no such collection of data), and 2. assembly of a pool of people with DS who are interested in participating in research and who could be quickly re-contacted by researchers when needed. By using a web-based electronic data capture system, we hope to enroll people with DS even when they live a long distance from our participating institutions. There are two steps to this process: the first is a web portal where individuals with DS (or their families) may enter data. From that portal, they will be able to opt to be contacted by one of the participating institutions. At that point, a research assistant from the participating institution can contact the family and obtain more detailed information, including source documents such as medical and educational records.
In the very near future, we would like to connect this database with a biobank which will collect DNA, plasma, and possibly other biological specimens. This would then be a tool for basic scientists to begin new investigations.

Slide 32: Now, to go from a large-scale project to a smaller project, I would like to discuss the DS sleep study that I did as my Master's thesis. I think that this study demonstrates translational research. The investigators included a pediatrician, a psychologist, and a neurologist with board certification in sleep disorders. (see slide)

Slide 33: Now, the disappointing thing to me was that Dr. Shott beat me to press. (point to slide) What I find significant about this is that Dr. Shott and I had very similar findings in our cohorts. This was despite the fact that the demographics of our studies were very different: my cohort was chosen from the general population of DS in Nashville, TN, and Dr. Shott's cohort was from patients referred for otolaryngologic care.

Slide 34: Now moving to another example of translational research. (see slide)

Slide 35: SMOR stands for Standardized Mortality Odds Ratio. This is a risk ratio: in this case we are comparing the risk of death due to a secondary illness in patients with DS to those without DS. A SMOR higher than 1 indicates an increased risk of death; a SMOR less than 1 indicates a reduced risk of death. Therefore, people with DS have a reduced risk of death due to solid tumors. I will also mention at this point that this epidemiologic finding has been confirmed in prospective tumor registries in the Netherlands and Great Britain.
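To make the definition concrete, a mortality odds ratio can be computed from a 2x2 table of death-certificate counts. This is a minimal sketch with invented counts, not the CDC study's data, and it omits the standardization for age, sex, and race that the published SMOR includes:

```python
# Illustration only: a (non-standardized) mortality odds ratio from a
# 2x2 table of death certificates. The counts are invented; the published
# SMOR additionally standardizes for age, sex, and race.
def mortality_odds_ratio(cause_ds, other_ds, cause_ref, other_ref):
    """Odds that a DS death certificate lists the cause of interest,
    divided by the same odds among non-DS death certificates."""
    return (cause_ds / other_ds) / (cause_ref / other_ref)

# Hypothetical counts: solid-tumor deaths vs. all other causes,
# among 17,000 DS and 17,000 non-DS certificates.
smor = mortality_odds_ratio(cause_ds=60, other_ds=16_940,
                            cause_ref=1_700, other_ref=15_300)
print(f"SMOR ~ {smor:.2f}")   # < 1: reduced odds of solid-tumor death
```

A value below 1, as in this toy example, corresponds to the reduced risk of death from solid tumors described on the slide.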
Slide 36: So, knowing this, my friend Roger Reeves at Johns Hopkins hypothesized that if you bred a DS mouse with a mouse with increased tumor risk, you should have progeny with fewer tumors than the parent mouse with increased tumor risk.

Slide 37: Dr. Reeves did just that. When he crossed these mice, he did indeed find that the progeny had fewer tumors than the parent. So he asked himself: why would trisomy 21 (or its homologue in the mouse) be protective against solid tumors?

Slide 38: Dr. Reeves identified 3 candidate genes on chromosome 21 that might explain cancer risk. RCAN1 and endostatin are both genes involved in regulation of angiogenesis. Ets2 is an oncogene. So he picked Ets2.

Slide 39: In order to drill down to his candidate gene, Dr. Reeves generated a mouse that was missing a copy of Ets2 (a knockout). Ms1Rhr was crossed with the mouse with increased tumor risk. The progeny that were euploid for Ets2 had fewer tumors than the progeny with only one Ets2. So, having more than one copy of Ets2 seemed to protect against tumors.

Slide 40: Finally, Dr. Reeves crossed a mouse that was trisomic for Ets2 (Ts1Rhr) with the mouse at increased cancer risk. The progeny that were trisomic for Ets2 had fewer tumors than their littermates with only 2 copies of Ets2.

Slide 41: This very complicated table shows that there is an inverse relationship between the number of Ets2 genes and tumor risk. Now, this is well and good for mice... but what if you had a population of people with extra copies of Ets2 and decreased cancer risk? But wait, we do: people with DS. Would this be a starting place to look at novel cancer treatments in people with cancer?

Slide 42: Now, just when I have you thinking in one direction, I am going to change directions again. As another example of clinical/translational research, my husband and I are studying the pro-oxidant state of people with DS. (see slide)

Slide 43: We found significantly elevated circulating NO levels in babies with DS undergoing heart surgery, compared to babies without DS undergoing similar surgery.

Slide 44: (see slide)

Slide 45: This cartoon depicts some of the known mechanisms of oxidant damage that occur due to peroxynitrite. (point to slide) Peroxynitrite is formed when free-radical oxygen molecules combine with NO. (*Peroxynitrite is generated in vivo by the reaction of NO with superoxide, and has been reported to have physiological activities distinct from those of NO, for example in the nervous system.) Many of these mechanisms of oxidant damage are currently under investigation as they relate to diseases of aging such as Alzheimer's disease.

Slide 46: (see slide)

Slide 47: Thank you very much for inviting me to speak today.
Addressing Debt Vulnerabilities: Role of Debt Strategies and Debt Managers. A Policy Perspective
Udaibir S. Das, Monetary and Capital Markets Department, International Monetary Fund
November 10, 2009
The views expressed are those of the author and do not necessarily reflect the views of UNCTAD.

A Debt Crisis? Debt Default?
- Focus on global economic recovery and sustainable growth
- Emphasis on re-establishing economic and financial stability
- All-round adjustment underway both within and across countries
- For some, adjustment is harder
- Some have reached a point where imbalances are acute and policy options are limited

A Debt Crisis? Debt Default?
- All efforts to account for the impact on the poor and most vulnerable
- Helping find necessary budgetary savings
- Ring-fencing social spending on the most vulnerable

The global economic crisis has hit the low-income countries (LICs) very hard. The IMF is responding with unprecedented actions to help support the efforts of its LICs.

Increased resources for LICs
- Resources expected to boost concessional lending ($17 billion through 2014)
- Zero interest payments on outstanding concessional loans (through 2011)
- New set of lending instruments tailored to the diverse needs of LICs

1. Rising debt and fiscal vulnerabilities
2. Positive but uncertain outlook for debt capital market conditions
3. Renewed focus on debt and risk management arrangements
4. Importance of the new IMF/World Bank MTDS framework
5. Significant challenges lie ahead for debt managers
6. A macroeconomic and policy-oriented focus is key for sustainable strategies

Increased deficit, debt, and fiscal vulnerabilities as a result of the global economic crisis

Global growth is projected to contract by 1.1 percent in 2009...
[Chart: Real Gross Domestic Product (percent; quarter over quarter, annualized) for emerging and developing countries, the world, and advanced economies. Sources: Fund staff projections, and Global Data.]

For LICs, growth projections have been revised down significantly since March.
[Chart: Low-Income Country GDP Growth (in percent): pre-shock (Spring 2008 WEO), post-shock (March paper projections), and post-shock (current projections). Sources: WEO database, and Fund staff calculations.]

LICs' fiscal balances are projected to deteriorate by 2.8 percent of GDP in 2009.
Change in Average Overall Fiscal Balance in 2009 relative to 2008, by Country Group 1/ (in percent of GDP): commodity exporters, -4%; non-commodity exporters, -3%; all low-income countries, -3%. Source: Fund staff estimates and projections. 1/ Including grants.

While fiscal policies in LICs are directed toward supporting growth, the risks to debt sustainability continue to rise.
[Chart: G-20 Countries: General Government Debt Ratios, 2000-14 (in percent of GDP).]
[Chart: LICs: Medium-Term Impact of the Crisis on Debt Burden Indicators (PV of debt-to-exports ratio). 1/ 2/ Sources: Most recent DSAs (issued after June 1), and Fund staff simulations. 1/ Results are compared to older DSAs. Simulation results are from the WEO fiscal scenario. 2/ For countries in Appendix I, except Azerbaijan, India, Maldives, Pakistan, and Uzbekistan, for which LIC DSAs are unavailable or were not produced because countries had significant market access.]

Stochastic simulations of medium-term debt paths confirm the risks of sustained increases in debt ratios...
[Chart: Representative Emerging G-20 Country: Evolution of Public Debt (gross debt in percent of GDP). Note: Lines represent the distribution (in deciles) of the simulated debt-to-GDP ratios. Staff estimates based on the October 2009 WEO. A toy version of such a simulation is sketched at the end of this section.]

...and a greater probability that more LICs could move into debt distress scenarios.
Source: Fund staff estimates. 1/ Based on debt sustainability analyses available as of end-July 2009, except for Georgia (low risk), which experienced a deterioration in its risk of debt distress. 2/ For all countries included in Appendix I, except Azerbaijan, India, Maldives, Pakistan, and Uzbekistan, for which LIC DSAs are unavailable or were not produced because countries had significant market access. 3/ Based on recent DSAs and staff simulations. The post-crisis risk ratings resulting from staff simulations are based on the worst-case scenario that all identified debt vulnerabilities automatically translate into a deterioration of the country's pre-crisis risk of debt distress rating.

Evolution of Debt Capital Market Conditions

After the shutdown in international debt markets in 4Q08, financing conditions facing EM issuers have improved considerably.
[Chart: Emerging Markets External Debt Spreads (bps): Global, Africa, Asia, Europe, LatAm.]
[Chart: EM Sovereign Issuance in International Capital Markets (USD bn): LatAm & Caribbean, Emerging Europe, Asia, Middle East & Africa. Source: JP Morgan Chase, Bloomberg.]

In 3Q09, sovereign upgrades outnumbered negative rating actions for the first time since the beginning of the crisis.
[Chart: Emerging Markets Sovereign Ratings Upgrades/Downgrades (as of October 27, 2009). 1/ All rating agencies combined. Does not include rating affirmations. Includes outlook changes from May 2009 onwards.]

Strong flows into EM debt bode well for the sovereign issuance pipeline building up moving into 2010.
[Chart: Cumulative Flows into EM Funds (USD bn). * As of September 11, 2009. Source: EPFR Global.]

Similarly, LIC sovereigns' access to financing has improved, particularly in the external syndicated loan market.
Public Sector's Syndicated Loan Issuance (in billions of USD): Q1: 3.2 (2008), 4.9 (2009); Q2: 2.3 (2008), 1.1 (2009); Q3: 3.6 (2008), 0 (2009); Q4: 2.5 (2008), 0 (2009).
Private Sector's Syndicated Loan Issuance (in billions of USD): Q1: 14.7 (2008), 9.2 (2009); Q2: 16.8 (2008), 19.2 (2009); Q3: 6.7 (2008), 0 (2009); Q4: 24.7 (2008), 0 (2009).
Source: Dealogic.

Conditions in international primary and secondary bond markets for LICs have also started to ease... with Sri Lanka becoming the first to tap international markets after the onset of the crisis (bid-to-cover of 13.6 times).

But the improvements in sentiment and debt supply conditions are not irreversible:
- Heightened volatility. Significant downside risk remains for EM sovereign bonds.
- Increased borrowing costs. Surge in bond supply could lead to higher interest rates.
- Portfolio deterioration. Could trigger weakening of debt structures/composition.

Role of Debt and Risk Management

Why is debt management important for LICs in the context of increased financing needs and improving debt market conditions?
- Improved access to funding. Better position to benefit from the movements in sovereign spreads.
- Increased use of non-concessional borrowing from multilaterals. More likely to qualify for non-concessional IMF lending.
- Renewed interest from foreign direct investors. Establish that long-term financing projects are creditworthy and that borrowed funds are put to proper end use.
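For intuition, the stochastic debt-path simulations referred to above amount to repeatedly applying the standard debt-dynamics identity d(t+1) = d(t)*(1+r)/(1+g) - pb (debt ratio d, effective interest rate r, nominal growth g, primary balance pb) under random shocks and reading off the distribution of outcomes. The following is a minimal sketch with illustrative parameter values, not Fund staff's model or estimates:

```python
# Minimal sketch of a stochastic debt-path (fan-chart) simulation.
# All parameter values are illustrative assumptions, not WEO-based estimates.
import numpy as np

rng = np.random.default_rng(0)
T, N = 5, 10_000                 # 5-year horizon, 10,000 scenarios
d0, pb = 0.40, -0.02             # initial debt/GDP; primary balance (deficit)
r_mu, r_sd = 0.05, 0.02          # nominal effective interest rate shocks
g_mu, g_sd = 0.04, 0.03          # nominal GDP growth shocks

d = np.full(N, d0)
paths = [d.copy()]
for _ in range(T):
    r = rng.normal(r_mu, r_sd, N)
    g = rng.normal(g_mu, g_sd, N)
    # Debt dynamics: d' = d * (1 + r) / (1 + g) - primary balance
    d = d * (1 + r) / (1 + g) - pb
    paths.append(d.copy())

# Deciles of the simulated debt ratio, as plotted in the fan chart above.
deciles = np.percentile(np.array(paths), np.arange(10, 100, 10), axis=1)
print(np.round(deciles[:, -1], 3))   # year-5 debt/GDP deciles
```

With a persistent primary deficit, even moderate rate and growth shocks fan the paths out quickly, which is exactly the "sustained increase in debt ratios" risk the chart illustrates.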
A Medium-Term Debt Management Strategy Framework (MTDS)

What is a debt strategy?
- A medium-term plan to achieve a desired composition of the government debt portfolio
- Operationalizes key debt management objectives, helping meet financing needs and payment obligations
- Makes funding decisions consistent with a prudent degree of risk while maintaining debt sustainability
- Facilitates reform of the local debt capital market

Objectives are set by policy makers, but strategies are proposed by the debt manager. Strategies must account for policy and market constraints; debt managers must therefore be an integral part of the full process, ensuring consistency with macroeconomic and financial policies.

A debt strategy can help anchor policy consistency.
[Diagram: **Debt Management Strategy Development** at the center, linked to **Cost/Risk Analysis**, the **Macroeconomic Framework/Debt Sustainability**, and **Financing Sources/Market Environment**, exchanging information on cost and risk, consistency/constraints, market development initiatives, and demand constraints.]

Clearly the debt manager cannot do it alone! A comprehensive approach is needed to raise awareness with policy makers. The new IMF/World Bank MTDS framework provides that comprehensive approach. Through the MTDS the debt manager can help ensure the two-way link, identify complementary reforms, and enable countries to design debt strategies.

Suggested steps in MTDS formulation:
1. Objectives and scope of the MTDS
2. Current strategy and costs and risks of existing debt
3. Potential sources of finance
4. Medium-term macro and market environment
5. Vulnerabilities, risks, and structural factors
6. Analysis of alternative debt management strategies (a toy cost/risk comparison is sketched below)
7. Review with fiscal and monetary authorities and market participants
8. Propose and approve the MTDS

Policy inter-linkages are key:
- Sustainable fiscal and exchange rate policies
- Credible monetary policies
- Efficient coordination of debt and monetary management
- Capital account policies and the exchange rate regime
- The nature of macro shocks should guide debt strategy choices
- Account for balance-of-payments financing constraints
- Portfolio restrictions on financial institutions

Key prerequisites for effective debt strategies:
- Capacity to elaborate medium-term fiscal and macro plans that determine financing needs
- Capacity to design and implement sound debt policies
- Access to deep and diversified financial markets
- Effective monetary and liquidity management capacity

Capacity building should take a project approach:
- Diagnostic studies (initial MTDS missions) identify reform areas
- Assistance in debt management capacity
- Capacity building in complementary reform areas for effective MTDS formulation and implementation
- Follow-up MTDS missions to evaluate progress

Enhance the visibility of the debt management function:
- Debt management should be viewed as a key policy area
- A separate debt management institution may be needed
- The debt manager's function may need to be raised in the ministry of finance and central bank hierarchy

Debt managers face considerable challenges in meeting increasing funding needs. Countries with sound overall macro and debt policies have fared better than others. Debt managers have shown more flexibility in instrument choices and currency mix, but have generally maintained pre-crisis medium-term debt strategies and targets, and have taken measures to shore up domestic primary and secondary markets.
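As a toy version of step 6 above (analysis of alternative debt management strategies), the sketch below compares the expected interest cost and a simple cost-at-risk measure for two stylized financing mixes. All figures are invented for illustration; an actual MTDS exercise evaluates strategies over full macro and market scenarios:

```python
# Sketch of an MTDS-style cost/risk comparison of two financing strategies.
# Rates, shares, and the 2% concessional rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
rates = rng.normal(0.06, 0.025, 10_000)   # simulated future domestic rates

debt = 100.0
# Strategy A: mostly short-term domestic debt -> cost refixes at market rates.
cost_a = debt * (0.8 * rates + 0.2 * 0.02)
# Strategy B: mostly long-term concessional debt -> cost largely locked at 2%.
cost_b = debt * (0.2 * rates + 0.8 * 0.02)

for name, cost in [("A (short-term domestic)", cost_a),
                   ("B (long-term concessional)", cost_b)]:
    expected = cost.mean()
    cost_at_risk = np.percentile(cost, 95) - expected   # tail minus mean
    print(f"Strategy {name}: expected cost {expected:.2f}, "
          f"cost-at-risk {cost_at_risk:.2f}")
```

The output makes the trade-off explicit: the strategy that refixes more of the portfolio at market rates carries a visibly larger cost-at-risk, which is the kind of quantified comparison the MTDS asks debt managers to put in front of policy makers.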
Some lessons from the crisis - Strategies must be anchored in sound macro fundamentals to be credible - Closer monitoring of foreign investor participation in domestic markets - Approach from an asset and liability management framework - Realistically and continually assess debt and portfolio related risk exposures - Study the links between deficits, debt and interest rates - Need to announce credible exit strategies now to avoid risk of public debt snowballing in the absence of corrective action. Forthcoming from the IMF.... - "Crisis and Policy Lessons for Debt Managers" - "Interlinkages between Effective Risk Management of Debt and Financial Stability" - Extension of IMF’s *Risk Measures Dynamic Toolkit* for a Sovereign Debt Portfolio - "Deficits, Debt and Interest Rates" - "Determination of EM Sovereign External Bond Spreads" - "Measurement and Management of Sovereign Contingent Liabilities: New Approaches" - "ALM and Debt Management"
The impact of daytime light exposures on sleep and mood in office workers Mariana G. Figueiro, PhD\textsuperscript{a,*}, Bryan Stevenson, MA\textsuperscript{b}, Judith Heerwagen, PhD\textsuperscript{b}, Kevin Kampschroer, MA\textsuperscript{b}, Claudia M. Hunter, PhD\textsuperscript{a}, Kassandra Gonzales, MS\textsuperscript{a}, Barbara Plitnick, RN\textsuperscript{a}, Mark S. Rea, PhD\textsuperscript{a} \textsuperscript{a} Lighting Research Center, Rensselaer Polytechnic Institute, Troy, NY \textsuperscript{b} Office of Federal High-Performance Green Buildings, US General Services Administration **Article history:** Received 7 November 2016 Received in revised form 9 March 2017 Accepted 15 March 2017 **Keywords:** Light exposure Circadian rhythms Sleep Mood Phasor analysis **Abstract** **Background:** By affecting the internal timing mechanisms of the brain, light regulates human physiology and behavior, perhaps most notably the sleep–wake cycle. Humans spend over 90% of their waking hours indoors, yet light in the built environment is not designed to affect circadian rhythms. **Objective:** Using a device calibrated to measure light that is effective for the circadian system (circadian-effective light), collect personal light exposures in office workers and relate them to their sleep and mood. **Setting:** The present study was conducted in 5 buildings managed by the US General Services Administration. **Participants:** This study recruited 109 participants (69 females), of whom 81 (54 females) participated in both winter and summer. **Measurements:** Self-reported measures of mood and sleep, and objective measures of circadian-effective light and activity rhythms were collected for 7 consecutive days. **Results:** Compared to office workers receiving low levels of circadian-effective light in the morning, receiving high levels in the morning is associated with reduced sleep onset latency (especially in winter), increased phasor magnitudes (a measure of circadian entrainment), and increased sleep quality. High levels of circadian-effective light during the entire day are also associated with increased phasor magnitudes and reduced sleep onset latency. **Conclusions:** The present study is the first to measure personal light exposures in office workers using a calibrated device that measures circadian-effective light and relate those light measures to mood, stress, and sleep. The study's results underscore the importance of daytime light exposures for sleep health. © 2017 National Sleep Foundation. Published by Elsevier Inc. All rights reserved. **Introduction** Retinal light exposures affect human physiology and behavior by directly stimulating the brain's biological clock.\textsuperscript{1} The daily pattern of light and dark falling on our retinas sets the timing of the biological clock, which, perhaps most notably, compels us to sleep at night and stay awake during the day in synchrony with Earth's 24-hour axial rotation.\textsuperscript{2} The human circadian clock free-runs in constant darkness, generally with a period slightly greater than 24 hours. Sustained morning light is needed to advance, and therefore synchronize, the biological clock to local time on Earth.\textsuperscript{3} In contrast to foveal vision, on which most building lighting standards are based, the human circadian system requires high retinal exposures from short-wavelength light to be activated.
Since electric lighting used in buildings is presently manufactured, designed, and specified only to meet visual requirements, the built environment may not provide a sufficient amount and the appropriate spectrum of light at the right time to stimulate the circadian system during the day. With the advent of self-luminous displays, there also may be too much light exposure during the night.\textsuperscript{4,5} Irregular light–dark patterns or exposure to light at the wrong time may lead to circadian disruption and poor sleep quality, both of which have been associated with mood disorders, including depression, and with health risks such as diabetes, obesity, cardiovascular disease, and cancer.\textsuperscript{6–14} Consistent with the idea that reduced daytime light exposure might affect sleep quality and mood in office workers, Boubekri et al.\textsuperscript{15} reported that office workers sitting close to windows, and presumably receiving higher amounts of light during the day than their colleagues in windowless offices, exhibited more activity overall and slept, on average, about 46 minutes longer at night. Office workers sitting close to windows also reported having better scores on the Pittsburgh Sleep Quality Index (PSQI) and the vitality scale of the Medical Outcomes Study 36-item short form health survey (SF-36). A limitation of the study was that light exposures were reported in terms of photopic illuminance using devices worn on participants' wrists. Figueiro et al.\textsuperscript{16} showed that light level measurements recorded on the wrist are not well correlated with circadian-effective light at the eye. Moreover, photopic illuminance, defined in terms of the spectral sensitivity of foveal cones, peaking at 555 nanometers (nm), misrepresents circadian-effective light because the spectral sensitivity of the human circadian system peaks at approximately 460 nm. A more appropriate measure is circadian light (CL\textsubscript{A}), which uses a spectral sensitivity function that best matches the response by the circadian system to light, as measured by acute melatonin suppression (discussed in Light exposure and activity measurements). This rapidly evolving understanding of the circadian system led us to hypothesize that in buildings where daylight was a major design consideration, people would be exposed to lighting conditions that were sufficient to reliably entrain the circadian system to local time on Earth, especially during the winter months. Specifically, we hypothesized that workers receiving morning circadian stimulus (CS) of ≤0.1, an exposure level needed for reliable measurements of nocturnal melatonin suppression in the laboratory,\textsuperscript{17} would be less synchronized to the natural day–night cycle than those experiencing morning CS ≥0.3. As a corollary, we further hypothesized that those receiving morning CS ≥0.3 would exhibit better sleep quality and mood than those receiving morning CS ≤0.1. To test these 2 hypotheses, participants were recruited from 5 different buildings managed by the General Services Administration (GSA), the largest landlord in the United States (US). GSA selected the buildings. Four were selected because daylight considerations were incorporated in their original design (GSA Central Office, Washington, DC) or during extensive renovations undertaken between 2009 and 2013 (Edith Green–Wendell Wyatt Federal Building, Portland, OR; Federal Center South Building 1202, Seattle, WA; and Wayne N.
Aspinall Federal Building and U.S. Courthouse, Grand Junction, CO). The fifth building (GSA Regional Office Building, Washington, DC), where daylight was not a major design consideration and many participants had little or no access to daylight, was selected as an experimental control. The selection was based on the notion that occupants in buildings with abundant daylight availability would be exposed to high levels of CS during work. Unfortunately, as we had usable data for only 5 participants in winter and 10 participants in summer from the non-daylit building, we do not have sufficient data to provide comparisons between participants from that building and the other 4 buildings.

**Participants and methods**

**Participants**

The study included 109 participants (69 females), of whom 81 (54 females) participated in both summer and winter (Table 1). One participant did not indicate their sex in the personal data. The total number of measurements obtained from these participants in both buildings for both seasons was 191 (124 from females); of those, 87 (58 from females) measurements were collected in summer and 104 (66 from females) were collected in winter. (Due to issues related to participant compliance and/or the absence of useable data, the numbers of participants noted for the analyses reported in Results vary from the totals listed here.) All participants were federal employees from the 5 federal buildings selected for the study. All participants were employed as office workers; to a limited extent, some participants in the Seattle and Portland buildings conducted fieldwork. No exclusion criteria were applied in the selection of participants, as the study did not include a lighting intervention. Generally, the participants in all 5 buildings received the Illuminating Engineering Society's recommended levels\textsuperscript{18} (ie, approximately 30 footcandles [300 lux]) of horizontal illuminance at their desk space. However, participants in the Grand Junction and Portland facilities sometimes received lower levels during winter. Data collection in all 5 buildings was conducted between 2014 and 2016, and the analyses reported herein were conducted in the spring and summer of 2016.

**Light exposure and activity measurements**

**Circadian light and circadian stimulus**

Using published action spectrum data for acute melatonin suppression, Rea et al. proposed a mathematical model of human circadian phototransduction.\textsuperscript{19,20} This model is also based on fundamental knowledge of retinal neurophysiology and neuroanatomy, including the operating characteristics of circadian phototransduction (converting light into electrical signals), from response threshold to saturation.\textsuperscript{19,21} The intrinsically photosensitive retinal ganglion cells (ipRGCs) are the central elements in the phototransduction model, consistent with electrophysiological and genetic knockout studies.\textsuperscript{22–27} The model also reflects neural input from the outer plexiform layer of the retina, consistent with studies showing that signals from rods and cones provide photic information to the ipRGCs.\textsuperscript{21} Using this phototransduction model, the spectral irradiance at the cornea is first converted into CL\textsubscript{A}, reflecting the spectral sensitivity of the circadian system, and then, second, transformed into the CS, reflecting the absolute sensitivity of the circadian system.
Thus, CS is a measure of the effectiveness of the retinal light stimulus for the human circadian system from threshold (CS = 0.1) to saturation (CS = 0.7).\textsuperscript{28,29} Fig. 1 shows the modeled spectral sensitivity of the human circadian system at one light level (300 scotopic lux at the cornea) needed to determine CL\textsubscript{A} at that light level, and Fig. 2 shows the absolute sensitivity of the human circadian system plotted as a function of CL\textsubscript{A}. For reference, corresponding values for photopic illuminance, CL\textsubscript{A}, and CS for common light sources (incandescent and daylight) are shown in Fig. 2.

Table 1. Number of measurements per building and season.

| GSA building, location | Summer | Winter | Both | Total per building |
|------------------------|--------|--------|------|--------------------|
| GSA Central Office, Washington, DC | 31 (16) | 43 (22) | 31 (16) | 74 (38) |
| Edith Green–Wendell Wyatt Federal Building, Portland, OR | 18 (13) | 18 (13) | 18 (13) | 36 (26) |
| Federal Center South Building 1202, Seattle, WA | 20 (15) | 26 (17) | 19 (14) | 46 (32) |
| Wayne N. Aspinall Federal Building and U.S. Courthouse, Grand Junction, CO | 7 (7) | 11 (10) | 7 (7) | 18 (17) |
| GSA Regional Office Building, Washington, DC | 11 (7) | 6 (4) | 6 (4) | 17 (11) |
| Total number of measurements | 87 (58) | 104 (66) | 81 (54) | 191 (124) |

Note: Number of measurements from females indicated in parentheses.

The following equations show how $CL_A$ and CS are determined.

$$CL_A = \begin{cases} 1548\left[\int M_{c\lambda}E_\lambda\,d\lambda + a_{b\text{-}y}\left(\int \frac{S_\lambda}{mp_\lambda}E_\lambda\,d\lambda - k\int \frac{V_\lambda}{mp_\lambda}E_\lambda\,d\lambda\right) - a_{rod}\left(1 - e^{-\int V'_\lambda E_\lambda\,d\lambda / RodSat}\right)\right] & \text{if } \int \frac{S_\lambda}{mp_\lambda}E_\lambda\,d\lambda - k\int \frac{V_\lambda}{mp_\lambda}E_\lambda\,d\lambda > 0 \\[2ex] 1548\int M_{c\lambda}E_\lambda\,d\lambda & \text{if } \int \frac{S_\lambda}{mp_\lambda}E_\lambda\,d\lambda - k\int \frac{V_\lambda}{mp_\lambda}E_\lambda\,d\lambda \leq 0 \end{cases}$$

Where:
- $CL_A$: circadian light. The constant 1548 sets the normalization of $CL_A$ so that 2856 K blackbody radiation at 1000 lux has a $CL_A$ value of 1000
- $E_\lambda$: light source spectral irradiance distribution
- $M_{c\lambda}$: melanopsin spectral efficiency function (corrected for crystalline lens transmittance)
- $S_\lambda$: S-cone fundamental
- $mp_\lambda$: macular pigment transmittance
- $V_\lambda$: photopic luminous efficiency function
- $V'_\lambda$: scotopic luminous efficiency function
- $RodSat$: half-saturation constant for bleaching rods = 6.5 W/m²
- $k = 0.2616$; $a_{b\text{-}y} = 0.7000$; $a_{rod} = 3.3000$

$$CS = 0.7 - \frac{0.7}{1 + \left(\frac{CL_A}{355.7}\right)^{1.1026}}$$

It should be noted that while CS was developed using data from studies that measured acute melatonin suppression, Zeitzer et al. showed that acute melatonin suppression and phase shifting of melatonin rhythms followed similar threshold and saturation response characteristics. Moreover, the model has been successfully used to predict the effectiveness of various light sources and spectra for activating the circadian system in laboratory and in field studies. For example, our field research with Alzheimer's disease patients, submariners, teenagers, and healthy older adults shows that exposure to a CS ≥0.3 at the eye, for at least 1 hour in the morning, improves sleep, mood, and behavior in these populations.
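As a concrete reading of the two equations above, the following is a minimal sketch that evaluates the piecewise $CL_A$ expression and the CS logistic. It assumes the four wavelength integrals have already been computed from the measured spectral irradiance; the function and argument names are ours, not part of the published model.

```python
import math

# Constants from the CL_A definition above
K = 0.2616        # S-cone vs. luminance opponency weight (k)
A_BY = 0.7000     # blue-versus-yellow channel gain (a_b-y)
A_ROD = 3.3000    # rod channel gain (a_rod)
ROD_SAT = 6.5     # rod half-saturation constant, W/m^2 (RodSat)

def circadian_light(mc, s_mp, v_mp, v_prime):
    """CL_A from pre-evaluated wavelength integrals:
    mc      = integral of Mc_lambda * E_lambda d_lambda (melanopsin term)
    s_mp    = integral of (S_lambda / mp_lambda) * E_lambda d_lambda
    v_mp    = integral of (V_lambda / mp_lambda) * E_lambda d_lambda
    v_prime = integral of V'_lambda * E_lambda d_lambda (scotopic, W/m^2)
    """
    b_y = s_mp - K * v_mp                   # blue-versus-yellow signal
    if b_y > 0:                             # spectrally "cool" branch
        rod = A_ROD * (1 - math.exp(-v_prime / ROD_SAT))
        return 1548 * (mc + A_BY * b_y - rod)
    return 1548 * mc                        # melanopsin-only branch

def circadian_stimulus(cla):
    """CS = 0.7 - 0.7 / (1 + (CL_A / 355.7)^1.1026); saturates at 0.7."""
    return 0.7 - 0.7 / (1 + (cla / 355.7) ** 1.1026)

# Example: CL_A = 1000 (2856 K blackbody at 1000 lux, by normalization)
print(round(circadian_stimulus(1000.0), 3))  # roughly 0.53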
**The Daysimeter**

The Daysimeter, a calibrated device that continuously measures light and motion, was used to collect personal light-exposure and activity data. Light-sensing by the Daysimeter is performed via an integrated circuit sensor array (Hamamatsu model S11059-78HT) that includes 4 measurement channels: red (R), green (G), blue (B), and infrared (IR). The R, G, B, and IR photo-elements have peak spectral responses at 615 nm, 530 nm, 460 nm, and 855 nm, respectively. The Daysimeter is calibrated in terms of $CL_A$; CS is determined through post-processing of the recorded $CL_A$ values. Recordings of activity–rest patterns were based upon the outputs from 3 solid-state accelerometers calibrated in g-force units (1 g-force = 9.8 m/s²) with an upper frequency limit of 6.25 Hz. An activity index ($AI$) was determined using the formula:

$$AI = k\sqrt{\left(SS_x + SS_y + SS_z\right)/n}$$

where $SS_x$, $SS_y$, and $SS_z$ are the sums of the squared deviations from the mean of each accelerometer channel over the logging interval, $n$ is the number of samples in a given logging interval, and $k$ is a calibration factor equal to 0.0039 g-force per count. (A short sketch of this computation accompanies the sleep-scoring example in the Data analyses section below.) Logging intervals for both light and activity were set at 90 seconds.

**Questionnaires**

Participants completed 5 questionnaires concerning mood and sleep habits at the end of the study.

a. **Center for Epidemiologic Studies Depression Scale**

The Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire is designed to measure depressive symptoms.\textsuperscript{40} This 20-item measure asks how often over the past week participants experienced symptoms associated with depression, such as restless sleep, poor appetite, and feelings of loneliness. Response options range from 0 to 3 for each item (0 = rarely or none of the time, 1 = some or little of the time, 2 = moderately or much of the time, 3 = most or almost all of the time). Total scores range from 0 to 60, with scores ≥16 indicating greater depressive symptoms.

b. **Perceived Stress Scale**

The Perceived Stress Scale (PSS-10) questionnaire assesses participants' thoughts and feelings over the past month by posing 10 questions concerning how often they have thought or felt a specific way.\textsuperscript{41} Answers are scored on a 5-point scale ranging from 0 (never) to 4 (almost always). Total scores ≥20 are considered to indicate high stress.

c. **Pittsburgh Sleep Quality Index**

The Pittsburgh Sleep Quality Index (PSQI) questionnaire is a subjective measure of sleep quality and patterns experienced for the majority of days and nights over the past month.\textsuperscript{42} It differentiates poor from good sleep by measuring responses in 7 areas: subjective sleep quality, sleep latency, sleep duration, sleep efficiency, sleep disturbance, use of sleeping medication, and daytime dysfunction. Answers are scored on a scale ranging from 0 to 3, and the questionnaire yields a single global score. A global score of ≥5 indicates a poor sleeper.

d. **Positive and Negative Affect Schedule**

In the Positive and Negative Affect Schedule (PANAS) questionnaire,\textsuperscript{43} subjective feelings about 10 positive affects (ie, interested, excited, strong, enthusiastic, proud, alert, inspired, determined, attentive, and active) and 10 negative affects (ie, distressed, upset, guilty, scared, hostile, irritable, ashamed, nervous, jittery, and afraid) are rated by participants on a scale ranging from 1 (very slightly or not at all) to 5 (extremely).
Total scores range from 10 to 50, with higher scores representing higher levels of positive affect and lower scores representing lower levels of negative affect.

e. **Patient-Reported Outcomes Measurement Information System Sleep Disturbance–Short Form 8a**

The Patient-Reported Outcomes Measurement Information System (PROMIS) Sleep Disturbance–Short Form 8a questionnaire requests responses to 8 statements regarding sleep quality (e.g., my sleep was refreshing, I had difficulty falling asleep, my sleep was restless, etc.).\textsuperscript{44} Answers are scored on a scale ranging from 1 to 5 (1 = very much, 2 = quite a bit, 3 = somewhat, 4 = a little bit, 5 = not at all). For this measure, raw scores are rescaled into a standardized T-score (mean ± standard deviation = 50 ± 10), with higher scores indicating greater sleep disturbance.

**Protocol**

Participants signed a consent form approved by the Institutional Review Board at Rensselaer Polytechnic Institute. Once enrolled in the study, participants were asked to wear the Daysimeter as a pendant for 7 consecutive days during 2 data collection periods: (1) December through February ("winter") and (2) late May through August ("summer"). Participants were instructed to keep the device uncovered at all times. To permit monitoring of their sleep–wake activity patterns at night, participants were asked to wear the Daysimeter on their wrist. During the 7-day data collection period, participants were asked to keep a sleep log of bedtime and wake time, sleep latency, quality of sleep, and any naps taken.

Participants in the Washington, DC, buildings had flexible schedules and were permitted to telecommute. These participants were asked to note: (a) the days on which they were in the office and (b) the desk space number they used during those days. All participants were free to choose their desk space for the day, and all desk spaces were equally available to each participant. In general, however, participants would stay in the same desk space, or at least in the same area of the office, for the entire week. The workers who were permitted to telecommute were asked to spend at least 3 days in the building during the data collection period.

The devices and questionnaires were mailed in sealed envelopes to the GSA staff volunteer serving as the on-site point person, who then distributed the envelopes to the participants. Upon completion of the 7-day data collection period, the staff volunteer collected the devices and questionnaires, again in sealed envelopes, and did not have access to any data at any time. The staff volunteer had no other role in the study, and no issues concerning this method of delivering/returning study materials from/to the researchers were reported.

**Data analyses**

a. Circadian stimulus

In terms of circadian-effective light exposures, we calculated the average CS during working hours in the building. These values were based upon self-reports from participants who were asked to record the days and times they were in the building. If this information was not available, we assumed their time in the building to be between 08:00 a.m. and 05:00 p.m. In addition, given that morning light is particularly relevant for circadian entrainment and that our main hypothesis was that better circadian entrainment would result in better sleep and mood, we calculated the CS exposure in the morning between 08:00 a.m. and 12:00 p.m.
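A minimal sketch of this windowed averaging, assuming the Daysimeter log has been parsed into a pandas DataFrame with a DatetimeIndex and a `cs` column (both names are our assumption; the synthetic records merely stand in for real data):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for one day of 90-second Daysimeter CS records
idx = pd.date_range("2016-01-18 07:00", "2016-01-18 18:00", freq="90s")
log = pd.DataFrame(
    {"cs": np.random.default_rng(1).uniform(0.0, 0.5, len(idx))}, index=idx
)

def mean_cs(frame, start, end):
    """Average CS over a clock-time window; rows outside it are dropped."""
    return frame.between_time(start, end)["cs"].mean()

morning_cs = mean_cs(log, "08:00", "12:00")  # morning CS window
workday_cs = mean_cs(log, "08:00", "17:00")  # default workday window
print(round(morning_cs, 3), round(workday_cs, 3))
```

In practice, self-reported in-building days and times would replace the fixed 08:00–17:00 default, per the text above.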
In order to test the hypothesis that a CS ≥0.3 was positively associated with better sleep quality and mood and less stress, the data set was divided into participants who were exposed to a CS ≥0.3 (high CS) and those who were exposed to a CS ≤0.15 (low CS). The low CS criterion was set at CS ≤0.15 to ensure a sample size comparable to that for the high CS criterion. (In a separate analysis, not reported here, we also tested the hypothesis using a low CS ≤0.1 and found similar results; however, only 6 subjects were included in the CS ≤0.1 group.) It is important to note that while measuring total light exposures over waking hours is needed to predict circadian entrainment,\textsuperscript{45} the main goal of the analyses presented here was to understand how light at work affected sleep and mood. Therefore, we limited our analyses to morning and daytime light exposures.

b. Light–dark and activity–rest synchrony (phasors)

To quantify the degree of circadian entrainment exhibited by participants, the synchrony between their measured 24-hour light–dark pattern and their simultaneously measured activity–rest pattern was determined using phasor analysis. Phasor analysis operationalizes circadian entrainment in terms of a vector amplitude (magnitude) and phase (angle). In the phasor analysis conducted for the present study, light was measured in units of CS and activity in units of AI. Conceptually, the full set (e.g., 7 days) of light–dark and of activity–rest data are each joined end-to-end in a continuous loop. One loop is then rotated with respect to the other. Periodically (e.g., every 5 minutes), the correlation ($r$) between the light–dark data and the activity–rest data is computed, giving a correlation spectrum for the entire data set. A Fast Fourier transform (FFT) is applied to the resulting correlation function to compute a phase and amplitude vector for every frequency in the power spectrum. The phase and amplitude vector, or phasor, for the 24-hour frequency is used to quantify circadian entrainment. The greater the phasor magnitude, the greater the synchrony between the light–dark and activity–rest patterns and, therefore, the greater the inferred circadian entrainment. Dayshift nurses, for example, were found to have phasor magnitudes averaging about 0.5, whereas nurses on rotating shifts have phasor magnitudes averaging about 0.1. This suggests, as would be expected, that dayshift nurses exhibit a high degree of circadian entrainment but rotating shift nurses exhibit a high degree of circadian disruption. Phasor angle, in hours, is a measure of the offset between the 24-hour activity–rest pattern and the 24-hour light–dark exposure pattern; a positive angle means that the activity pattern is delayed with respect to the light exposure pattern and a negative angle means that the activity pattern is advanced with respect to the light exposure pattern. Typically, entrained individuals have a positive phasor angle of about 1 hour,\textsuperscript{46} indicating low CS in the evening while people are still active. For consistency with previously published phasor analyses,\textsuperscript{37,46–50} the Daysimeter data collected during waking hours (when the device was worn as a pendant) were used in the analyses, and light and motion data were set to zero during reported sleep times.
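The loop-correlation-plus-FFT procedure just described can be sketched as follows. This is a simplified illustration assuming equally spaced CS and AI series spanning whole 24-hour days; the single-sided amplitude convention and function names are our own.

```python
import numpy as np

def phasor_24h(light, activity, samples_per_day):
    """Phasor magnitude and angle (hours) between a light-dark (CS) and
    an activity-rest (AI) series covering an integer number of days.

    The two loops are circularly correlated at every lag, the resulting
    correlation function is Fourier transformed, and the component at
    one cycle per 24 hours is kept."""
    light = np.asarray(light, float)
    activity = np.asarray(activity, float)
    n = len(light)
    corr = np.array([np.corrcoef(light, np.roll(activity, lag))[0, 1]
                     for lag in range(n)])
    spectrum = np.fft.fft(corr) / n
    k = n // samples_per_day                 # FFT bin of the 24-h rhythm
    c = spectrum[k]
    magnitude = 2 * np.abs(c)                # single-sided amplitude
    # Positive angle = activity delayed relative to light, as in the text
    angle_hours = np.angle(c) * 24 / (2 * np.pi)
    return magnitude, angle_hours

# Synthetic check: 7 days of 5-minute bins, activity lagging light by 1 h
t = np.arange(7 * 288)                       # 288 five-minute bins per day
light = 0.3 + 0.3 * np.cos(2 * np.pi * t / 288)
activity = 0.5 + 0.4 * np.cos(2 * np.pi * (t - 12) / 288)
print(phasor_24h(light, activity, samples_per_day=288))
```

On the synthetic series above, the sketch returns a phasor magnitude near 1 and a phasor angle of +1 hour, matching the sign convention for entrained individuals described in the text.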
c. Objective sleep analyses

The Daysimeter sleep algorithm was developed as an analogue of the algorithm in Actiwatch Sleep Version 3.4 (Mini Mitter Co., Inc. [now Philips Respironics, Murrysville, PA]). Modifications to the Actiwatch algorithm were introduced to produce similar results using the activity index (AI) provided by the Daysimeter instead of the activity counts provided by the Actiwatch. The Daysimeter data obtained when the device was assumed to be worn on the wrist during the self-reported time-in-bed were used for the sleep analyses. Every 90-second epoch during self-reported time-in-bed, as well as 20 minutes before reported bedtime and 20 minutes after reported wake time, was used as the sleep analysis period. Each of those epochs was scored as mobile or immobile based on whether the AI value for that epoch exceeded a "mobility threshold." The mobility threshold was defined as twice the "baseline activity," where baseline activity is the most frequent AI value greater than 0 and less than half of the maximum AI during the sleep analysis period. Usually, the average AI was non-zero when the Daysimeter was at rest because the accelerometers produce electronic noise. Epochs where AI was less than the mobility threshold were scored as immobile, and epochs where AI was greater than or equal to the mobility threshold were scored as mobile. In parallel, AI values during the sleep analysis period were transformed into "filtered activity" for every epoch ($FA_i$), where $FA_i$ is a weighted moving average of AI for a given epoch ($i$) within the sleep analysis period. $FA_i$ is computed from the AI value for that epoch ($AI_i$) together with the 2 AI epoch values occurring just before and the 2 AI epoch values occurring just after $AI_i$. Specifically,

$$FA_i = \frac{1}{25}AI_{i-2} + \frac{1}{5}AI_{i-1} + AI_i + \frac{1}{5}AI_{i+1} + \frac{1}{25}AI_{i+2}$$

where $i$ designates the current epoch being evaluated. Before the Daysimeter activity data could be scored as sleep or wake, however, the $AI_i$ values above the mobility threshold were used to set the "wake threshold," defined as 8/9 of the average $AI_i$ of epochs scored as mobile; $FA_i$ values less than the wake threshold were scored as sleep. Following the definitions from the Actiwatch algorithm, sleep onset latency, sleep time, wake time, and sleep efficiency were determined. Regardless of day of the week, only the nights after which participants reported being in the office were used in the sleep analyses.

d. Subjective sleep analyses

Scores obtained from self-reports of sleep quality, mood, and stress (see Questionnaires, above) were calculated and used in the statistical analyses.

e. Statistical analyses

All phasor results, objective sleep measurements collected via Daysimeter, and subjective sleep, stress, and mood measurements from questionnaires were submitted to mixed-model linear regressions using IBM SPSS Statistics 23.0 software (IBM Corp., Armonk, NY). In each regression, "participant" was entered as a random factor. Combinations of the following were entered as fixed factors: (1) season (summer or winter), (2) CS exposure in the morning on workdays (high versus low), (3) CS received during the entire workday (high versus low), and (4) CS throughout the workday (continuous variable). Interactions between CS measures and season were also submitted to mixed-model linear regressions. The results were considered significant if the associated p-value was ≤0.05.
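A condensed sketch of the epoch-scoring steps just described, together with the activity-index formula from the measurements section (referenced there). The threshold definitions follow the text; function names and edge-case handling are our own simplifications.

```python
import numpy as np

def activity_index(x, y, z, k=0.0039):
    """AI = k * sqrt((SSx + SSy + SSz) / n) for one 90-s logging interval,
    where SS* is the sum of squared deviations from each channel's mean."""
    n = len(x)
    ss = sum(((np.asarray(c) - np.mean(c)) ** 2).sum() for c in (x, y, z))
    return k * np.sqrt(ss / n)

def score_sleep(ai):
    """Score 90-s epochs of the sleep-analysis period as sleep (True) or
    wake (False), following the Daysimeter adaptation described above."""
    ai = np.asarray(ai, float)
    # Baseline activity: most frequent AI value in (0, max/2); real AI
    # values are quantized counts, so a mode is well defined
    candidates = ai[(ai > 0) & (ai < ai.max() / 2)]
    values, counts = np.unique(candidates, return_counts=True)
    baseline = values[np.argmax(counts)]
    mobile = ai >= 2 * baseline              # mobility threshold
    # Filtered activity: weighted moving average over +/- 2 epochs
    # (edge epochs are zero-padded by np.convolve)
    kernel = np.array([1 / 25, 1 / 5, 1.0, 1 / 5, 1 / 25])
    fa = np.convolve(ai, kernel, mode="same")
    # Wake threshold: 8/9 of the mean AI over epochs scored as mobile
    wake_threshold = (8 / 9) * ai[mobile].mean()
    return fa < wake_threshold
```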
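The statistical analyses themselves were run in SPSS. Purely as an illustration of the model structure (a random intercept per participant, with fixed effects for CS group, season, and their interaction), here is an equivalent sketch using Python's statsmodels; the toy data frame and its column names are hypothetical stand-ins for the study's measurements.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data: one PSQI score per participant per season
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "season":      ["winter", "summer"] * 6,
    "cs_group":    ["high"] * 6 + ["low"] * 6,
    "psqi":        [4, 5, 5, 6, 3, 4, 7, 6, 8, 7, 6, 7],
})

# Random intercept per participant; fixed effects for CS group, season,
# and their interaction, mirroring the analysis described above
model = smf.mixedlm("psqi ~ cs_group * season", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

Because mixed models use all available rows per participant, they tolerate the sporadically missing measurements discussed in the next section.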
As described in each section, the results include measurements from only those participants who provided complete and/or usable data, and not necessarily all of the participants in the study. Because of the study's longitudinal design, and because participants volunteered as part of their work duties, some did not complete all the data collection measures. Rather than dropping valuable data from the analyses, we chose to use mixed-model regression techniques that can produce valid results from data sets with missing points. The number of participants is relatively large for such a study, and the number of missing data points is relatively small, which should obviate some concerns that certain participants' data would have greater weight than others. Additionally, we contend the missing data are probably "missing at random." That is, those participants who did not complete data collection for a measure probably did not do so because of any factor related to the study design. For example, we do not believe that a participant's failure to complete a sleep questionnaire had anything to do with poor sleep quality.

**Results**

Table 2 lists the mean values and standard error of the mean (SEM) for the measures employed in the present study. Only those outcomes that were statistically significant are discussed below.

High versus low CS in the morning

Overall, 56 total participants were included in these analyses. Of those, 31 received high CS (≥0.3) between the time they arrived at work and 12:00 p.m., and 25 received low CS (≤0.15).

a. Effects of high versus low CS in the morning and season on phasors

High CS during the morning hours had a statistically significant effect on phasor magnitude ($F_{1,45} = 41.94, P < .0001$), suggesting greater circadian entrainment (Fig. 3). Phasor angle was significantly affected by season ($F_{1,6} = 37.72, P = .001$), irrespective of whether participants were exposed to high or low CS in the morning. Phasor angles in winter were higher than in summer. In general, a higher phasor angle in winter means that participants were active during the evening hours when CS values were low, while in summer, evening CS tended to be higher due to more daylight availability.

b. Effects of high versus low morning CS and season on sleep measures and mood

Receiving high CS exposure during morning hours had a statistically significant main effect on sleep onset latency ($F_{1,15} = 10.43, P = .005$). Participants who received low CS took longer to fall asleep than those receiving high CS during the morning (Fig. 4).

Table 2. Summary of mean values and standard error of the mean (SEM) for light exposure and activity, objective sleep analyses, and subjective sleep analyses.
| Measure | Morning CS high (≥0.3) | Morning CS low (≤0.15) | Workday CS high (≥0.3) | Workday CS low (≤0.15) |
|---------|------------------------|------------------------|------------------------|------------------------|
| **Light exposure and activity (phasor)** | | | | |
| Circadian stimulus | 0.35 | 0.12 | 0.35 | 0.11 |
| Phasor magnitude | 0.33 | 0.23 | 0.33 | 0.22 |
| Phasor angle (h) | 1.04 | 1.40 | 0.96 | 1.51 |
| **Objective sleep analyses** | | | | |
| Sleep onset latency (min) | 17.99 | 44.91 | 20.70 | 25.06 |
| Sleep time (min) | 355.39 | 335.69 | 349.78 | 345.87 |
| Wake time (min) | 60.13 | 48.40 | 53.01 | 50.91 |
| Sleep efficiency (%) | 76.87 | 74.06 | 77.52 | 76.37 |
| **Subjective mood and sleep analyses** | | | | |
| Total CES-D | 4.75 | 8.19 | 5.71 | 7.03 |
| PROMIS T-score | 45.96 | 50.95 | 46.32 | 49.97 |
| PANAS positive | 33.38 | 29.86 | 32.46 | 32.03 |
| PANAS negative | 14.72 | 15.25 | 14.57 | 14.92 |
| PSQI | 4.72 | 7.15 | 5.54 | 6.64 |
| PSS-10 | 12.31 | 12.29 | 11.83 | 13.44 |

Abbreviations: CES-D = Center for Epidemiologic Studies Depression Scale; CS = circadian stimulus; PANAS = Positive and Negative Affect Schedule; PROMIS = Patient-Reported Outcomes Measurement Information System Sleep Disturbance–Short Form 8a; PSQI = Pittsburgh Sleep Quality Index; PSS-10 = Perceived Stress Scale; SEM = standard error of the mean.

High morning CS was also associated with significant results in several of the mood and sleep questionnaire measures (Fig. 4). The mean CES-D score, for which lower scores indicate less depression, was significantly lower for participants with high morning CS ($F_{1,51} = 6.25, P = .016$). The mean PSQI score, for which lower scores indicate better sleep quality, was also significantly lower for participants with high morning CS ($F_{1,44} = 9.48, P = .004$). Participants with high morning CS also reported significantly less sleep disturbance ($F_{1,39} = 11.67, P = .002$), as shown by their mean PROMIS T-score.

c. Interaction between high versus low morning CS and season

The benefit of having high morning CS was sometimes affected by season (Fig. 5). Seasonal interaction with CS was significant in the case of the PSQI measure ($F_{1,36} = 4.56, P = .040$). Participants with high morning CS reported higher PSQI scores in summer than in winter. The pattern was reversed for participants with low morning CS; they reported a lower mean PSQI score in summer than in winter. With respect to PSS-10, for which lower scores indicate lower perceived stress, season also interacted with morning CS exposure ($F_{1,2} = 29.08, P = .041$), but for this measure there was no main effect of either season or CS. Participants with high morning CS had a higher mean PSS-10 score in summer than in winter. Participants with low morning CS exposure also had higher scores in summer than in winter.

Morning CS

The effect of morning CS, received between the time of arrival at work and 12:00 p.m., on the outcome measures was analyzed for all participants, not just those who received the high or low CS. In general, morning CS had beneficial effects. The morning CS analyses included 173 measurements overall, 79 in summer and 94 in winter.

a. Effects of morning CS and season on phasors

The amount of CS received in the morning affected phasor magnitude (Fig. 6). As morning CS increased, so did phasor magnitude, with a significant effect ($F_{1,109} = 63.12, P < .0001$). Season had a significant effect on phasor angle ($F_{1,123} = 9.82, P = .002$), with higher phasor angles observed in winter than in summer.
b. Effects of morning CS on sleep measures

The PSQI scores decreased with increasing morning CS ($F_{1,155} = 6.19, P = .014$; Fig. 7). The participants also reported less sleep disturbance, as their PROMIS T-scores decreased ($F_{1,165} = 4.76, P = .031$). Sleep onset latency declined as morning CS increased ($F_{1,162} = 13.49, P = .002$). Season had a main effect on sleep onset latency ($F_{1,130} = 4.49, P = .036$), with shorter times reported in summer compared to winter (Fig. 8).

c. Interaction between morning CS and season

There was also a significant interaction between season and morning CS ($F_{1,123} = 4.19, P = .043$). Although sleep onset latency values decreased with increasing morning CS in both seasons, sleep onset latency decreased to a lesser degree in the summer.

High versus low workday CS

While receiving high CS in the morning is hypothetically the most beneficial for entrainment, receiving high levels of CS over an entire workday may still improve participants' sleep and mood. These analyses included 67 participants: 31 received high CS during the entire workday, and 36 received low CS during the entire workday.

a. Effects of high versus low workday CS and season on phasors

A pattern of significant effects on phasors emerged from our analysis of high versus low workday CS (Fig. 9). Participants who had high workday CS had greater phasor magnitudes than those who had low workday CS. This effect was statistically significant ($F_{1,39} = 35.38, P < .0001$). Phasor angle, on the other hand, was only significantly affected by season ($F_{1,36} = 6.08, P = .019$). Phasor angles during winter were greater than during summer (Fig. 9).

b. Effects of high versus low workday CS and season on sleep measures and mood

High workday CS had a significant main effect on participants' reported sleep quality and mood (Fig. 10). Participants with high workday CS had significantly lower PSQI scores than those with low workday CS ($F_{1,21} = 6.12, P = .022$). Participants with high workday CS reported significantly less sleep disturbance ($F_{1,32} = 10.44, P = .003$), as shown by the lower PROMIS T-scores.

Fig. 3. The significant effects of high versus low morning CS on phasor magnitude and season on phasor angle. (The error bars represent standard error; **** designates statistical significance at $P < .0001$ and *** designates statistical significance at $P < .001$.)

Fig. 4. The significant effects of high versus low morning CS on sleep onset latency, depression, and sleep quality measures. (The error bars represent standard error; ** designates statistical significance at $P < .01$ and * designates statistical significance at $P < .05$.)

Finally, participants with high workday CS also reported significantly lower depression scores for the CES-D measure ($F_{1,44} = 4.68, P = .036$). Season also had a significant main effect on mood outcomes (Fig. 11). Participants' mean PANAS Negative score was significantly higher in summer than in winter ($F_{1,48} = 5.56, P = .030$). The CES-D scores were significantly lower in winter ($F_{1,28} = 5.49, P = .026$) than in summer.

**Discussion**

The present study set out to determine whether exposure to high circadian-effective light in the workplace during the day, particularly in the morning, was associated with significant changes in circadian entrainment (phasor magnitude), objective sleep quality (sleep onset latency), subjective sleep quality (PSQI and PROMIS), and mood (CES-D and PANAS), as well as lower stress (PSS-10).
These results are the first to demonstrate the utility of the CS metric for characterizing circadian-effective light in a field study. Several findings are noteworthy. First, as hypothesized, higher CS exposure in the morning was associated with shorter sleep onset latency than lower CS exposure in the morning. This association was stronger in winter months, when the opportunity to receive light prior to arriving at work is reduced due to the later occurrence of dawn. These results are consistent with the findings of Vetter et al.,\textsuperscript{51} who showed that office workers who were exposed to high correlated color temperature (CCT) light (8000 Kelvin [K]) for 5 consecutive weeks became entrained to light during working hours, whereas those exposed to a lower CCT (4000 K) at work exhibited a relatively advanced circadian phase that paralleled the seasonal progression of sunrise. (CCT is a specification used to describe a light source's dominant color tone, ranging from warm [yellows and reds] to cool [blue]. Lamps with a CCT rating ~3200 K are usually considered warm sources, whereas those with a CCT >4000 K are usually considered cool in appearance.) A high-CCT light source generally emits more short-wavelength radiation than a lower CCT light source, and therefore delivers higher CS values. Presumably, exposure to natural morning light before arriving at work served as the primary entraining light for those in the low-CCT group, but office lighting served as the primary entraining light for those in the high-CCT group. Second, consistent with our hypotheses, high CS exposure in the morning was associated with greater phasor magnitudes and better sleep quality (PSQI and PROMIS) than low CS exposure in the morning. Although CS exposure in the morning and phasor magnitudes were unrelated to season, there were somewhat complicated interactions between levels of morning CS exposure and season in terms of both sleep quality (PSQI) and mood (PSS-10). With regard to PSQI scores, high CS in the morning was associated with better sleep quality than low CS exposure in the morning. Consistent with what might be expected, low morning CS was associated with decreased sleep quality in winter, but unexpectedly, high CS in the morning was associated with better sleep quality in winter than in summer. Although it is not possible to determine conclusively, this decrease in sleep quality might have been due to increased evening light/daylight in summer compared to winter. (In fact, our evening light exposure data [not reported here] and data reported by Crowley et al.\textsuperscript{52} showed that workers do indeed receive more light in the evening hours during summer months than during winter.) Regarding PSS-10 scores, high CS in the morning was associated with lower self-reports of stress than low CS exposure in the morning during both summer and winter, but the difference between the 2 CS morning exposures was less pronounced during winter than during summer. It should be noted, however, that none of the other sleep quality and mood scores exhibited similar interactions. This might suggest that the statistically significant interactions associated with PSQI and PSS-10 might not manifest themselves again in a future study. These results may also be attributable to personal life events or lifestyles that had a stronger effect on self-reports than did individual light exposures.
Consistent with the findings associated with morning CS levels, high CS exposure during the entire workday was associated with greater phasor magnitudes than low CS exposure during the workday. High CS exposure during the entire workday was associated with lower depression scores (CES-D) and higher sleep quality (PSQI and PROMIS) scores than low CS exposure during the workday. Unlike the results of regression analyses relating CS during the morning to phasor magnitudes, CES-D scores, PSQI scores, and PROMIS scores, CS during the entire workday was not significantly related to these outcome measures. Nevertheless, it seems reasonable to infer that exposure to CS $\geq$0.3 during the day, particularly in the morning, was associated with better overall sleep quality and mood scores than exposure to CS $\leq$0.15. The CS metric has been successfully applied to quantify light intervention in many other laboratory and field studies. In the laboratory, CS was used to predict melatonin suppression from self-luminous displays,\textsuperscript{53} and in the field CS was used to predict entrainment in nuclear submariners, and sleep quality and mood in persons with Alzheimer's disease and related dementia living in senior facilities. This inference is consistent with the results from Viola et al.,\textsuperscript{54} who showed that exposure to a high-CCT light source reduced daytime sleepiness and increased self-reported sleep quality in office workers. Unlike Boubekri et al.,\textsuperscript{15} who showed an increase in sleep duration for office workers sitting close to windows, and who should be receiving higher circadian-effective light, the present study did not show any significant association between CS exposure and actual sleep times. The participants in this study had an objectively measured mean sleep time of <6 hours, perhaps resulting from active social lives and personal obligations that limited longer sleep. The mean sleep onset latency was close to 45 minutes for participants receiving low CS (CS $\leq$0.15) in the morning. These results suggest that greater phase delay in these participants was due to a lack of sufficient morning light, which is known to promote entrainment to the 24-hour solar day. Moreover, larger phasor magnitudes, which indicate greater behavioral circadian entrainment, are associated with those subjects receiving high CS (CS $\geq$0.3) values both during the morning and all day, and with those reporting better mood and sleep. Importantly, promoting circadian entrainment in the built environment has been associated with better sleep as well as reduced stress and anxiety. Sleep restriction, even after only a few days, has been linked to diabetes and obesity. Chronic circadian disruption, such as that experienced by rotating shift workers over the course of many years, has also been associated with higher risk for cardiovascular disease and cancer. The present study is novel because it is the first to measure personal circadian light exposure in office workers using a device calibrated to measure circadian-effective light. It is also the first to directly relate circadian-effective light measures to mood, stress, and sleep outcomes. As with any field study, the present study has limitations. Perhaps most importantly, although the Daysimeter is a calibrated light meter, when worn as a pendant it does not measure light at the eye level, and the CS exposures obtained from participants may be as much as 25% lower than those experienced at the eye level.
Furthermore, in this study it was assumed that workers spent their working hours inside their respective buildings, but it is unknown whether the CS measurements employed in our calculations were actually obtained inside or outside the office environment. Finally, while this study was not designed to measure social obligations and other personal issues, these may have affected the participants' self-reports of sleep, depression, and stress. The results of this study are significant because they have the potential to inform building owners and designers about the importance of delivering appropriate light for the circadian system in the built environment during the daytime. One interesting finding was that the presence of daylight in a building does not necessarily ensure high CS exposure for workers. Most of the buildings studied here were designed to maximize daylight availability in the space, yet CS exposures did not always reach the desired criterion level of 0.3. Furniture placement, window shade openness, desk space location, task locations, and visual and thermal comfort need to be taken into consideration when attempting to maximize exposure to CS in an office environment. Nevertheless, while much has been discussed about the detrimental effects of evening or night light on sleep and health, little attention has been paid to the importance of daytime light exposures, especially in winter months, for sleep health. The present results can be considered as a first step toward promoting the adoption of new, more meaningful metrics for field research, providing the sleep research community with new ways to measure and quantify circadian-effective light.

**Disclosure**

The authors have no conflicts of interest to disclose.

**Acknowledgements**

This study was funded by the US General Services Administration. The authors would like to acknowledge David Pedler, Geoffrey Jones, Jennifer Brons, Sharon Lesage, Greg Ward, Dennis Guyon, and Rebekah Mullaney for their technical and editorial assistance.

**References**

1. Turek FW. Circadian clocks: not your grandfather's clock. *Science*. 2016;354(6315):992–993.
2. Klein DC, Moore RY, Reppert SM. *Suprachiasmatic Nucleus: The Mind's Clock*. New York, NY: Oxford University Press; 1991.
3. Jewett ME, Rimmer DW, Duffy JF, Klerman EB, Kronauer RE, Czeisler CA. Human circadian pacemaker is sensitive to light throughout subjective day without evidence of transients. *Am J Physiol*. 1997;273:R1800–R1809.
4. Figueiro MG, Czeisler CA, O'Neill DP. Light and melatonin suppression in adolescents. *Light Res Technol*. 2016;48(8):966–975.
5. Figueiro MG, Plitnick B, Wood B, Rea MS. The impact of light from computer monitors on melatonin levels in college students. *Neuro Endocrinol Lett*. 2011;32(2):141–152.
6. Chang A-M, Aeschbach D, Duffy JF, Czeisler CA. Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness. *Proc Natl Acad Sci*. 2015;112(4):1232–1237.
7. Cajochen C, Frey S, Anders D, et al. Evening exposure to a light-emitting diode (LED)-backlit computer screen affects circadian physiology and cognitive performance. *J Appl Physiol*. 2011;110(5):1432–1438.
8. Murray G, Harvey A. Circadian rhythms and sleep in bipolar disorder. *Bipolar Disord*. 2010;12(5):459–472.
9. Costa IC, Carvalho HN, Fernandes L. Aging, circadian rhythms and depressive disorders: a review. *Am J Neurodegener Dis*. 2013;2(4):228–246.
10. Gold AK, Sylvia LG. The role of sleep in bipolar disorder. *Nat Sci Sleep*. 2016;8:207–214.
11.
Stevens RG. Light-at-night, circadian disruption and breast cancer: assessment of existing evidence. *Int J Epidemiol*. 2009;38(4):963–970.
12. Schernhammer ES, Feskanich D, Liang G, Han J. Rotating night-shift work and lung cancer risk among female nurses in the United States. *Am J Epidemiol*. 2013;178(9):1434–1441.
13. Smolensky MH, Hermida RC, Reinberg A, Sackett-Lundeen L, Portaluppi F. Circadian disruption: new clinical perspective of disease pathology and basis for chronotherapeutic intervention. *Chronobiol Int*. 2016;33(8):1101–1119.
14. Reutrakul S, Knutson KL. Consequences of circadian disruption on cardiometabolic health. *Sleep Med Clin*. 2015;10(4):455–468.
15. Boubekri M, Cheung IN, Reid KJ, Wang CH, Zee PC. Impact of windows and daylight exposure on overall health and sleep quality of office workers: a case-control pilot study. *J Clin Sleep Med*. 2014;10(6):603–611.
16. Figueiro MG, Hamner R, Bierman A, Rea MS. Comparisons of three practical field devices used to measure personal light exposures and activity levels. *Light Res Technol*. 2013;45(4):421–434.
17. Rea MS, Figueiro MG. A working threshold for acute nocturnal melatonin suppression from "white" light sources used in architectural applications. *J Circadian Rhythms*. 2007;5(1).
18. Illuminating Engineering Society. *The Lighting Handbook: Reference and Application*. 10th ed. New York: Illuminating Engineering Society; 2011.
19. Rea MS, Figueiro MG, Bullough JD, Bierman A. A model of phototransduction by the human circadian system. *Brain Res Rev*. 2005;50(2):213–228.
20. Rea MS, Figueiro MG, Bierman A, Hamner R. Modelling the spectral sensitivity of the human circadian system. *Light Res Technol*. 2012;44(4):386–396.
21. Rea MS, Bullough JD, Figueiro MG. Phototransduction for human melatonin suppression. *J Pineal Res*. 2002;32(4):209–213.
22. Lucas RJ, Hattar S, Takao M, et al. Diminished pupillary light reflex at high irradiances in melanopsin-knockout mice. *Science*. 2003;299:245–247.
23. Panda S, Sato TK, Castrucci AM, et al. Melanopsin (Opn4) requirement for normal light-induced circadian phase shifting. *Science*. 2002;298(5601):2213–2216.
24. Ruby NF, Brennan TJ, Xie X, et al. Role of melanopsin in circadian responses to light. *Science*. 2002;298(5601):2211–2213.
25. Berson DM, Dunn FA, Takao M. Phototransduction by retinal ganglion cells that set the circadian clock. *Science*. 2002;295(5557):1070–1073.
26. Belenky MA, Smeraski CA, Provencio I, Sollars PJ, Pickard GE. Melanopsin retinal ganglion cells receive bipolar and amacrine cell synapses. *J Comp Neurol*. 2003;460:380–393.
27. Hattar S, Lucas RJ, Mrosovsky N, et al. Melanopsin and rod-cone photoreceptive systems account for all major accessory visual functions in mice. *Nature*. 2003;424:76–81.
28. Bierman A, Klein TR, Rea MS. The Daysimeter: a device for measuring optical radiation as a stimulus for the human circadian system. *Meas Sci Technol*. 2005;16:2292–2299.
29. Rea MS, Figueiro MG, Bierman A, Bullough JD. Circadian light. *J Circadian Rhythms*. 2010;8:2.
30. Brainard GC, Hanifin JP, Greeson JM, Byrne B, Glickman G, Gerner E, Rollag MD. Action spectrum for melatonin regulation in humans: evidence for a novel circadian photoreceptor. *J Neurosci*. 2001;21:6405–6412.
31. Thapan K, Arendt J, Skene DJ. An action spectrum for melatonin suppression: evidence for a novel non-visual photoreceptor system in humans.
*J Physiol*. 2001;535:267–278.
32. Rea MS, Figueiro MG. Light as a circadian stimulus for architectural lighting. *Lighting Res Technol*. 2016. http://dx.doi.org/10.1177/1477153516682368 [in press].
33. Zeitzer JM, Dijk DJ, Kronauer RE, Brown EN, Czeisler CA. Sensitivity of the human circadian pacemaker to nocturnal light: melatonin phase resetting and suppression. *J Physiol*. 2000;526(2):695–702.
34. Figueiro MG, Bullough JD, Rea MS. Spectral sensitivity of the circadian system. Paper presented at: Proceedings of the International Society for Optical Engineering (SPIE); 2003; San Diego, CA.
35. Figueiro MG, Rea MS, Zhan NZ, Bullough JD. Implications of controlled short-wavelength light exposure for sleep in older adults. *BMC Res Notes*. 2011;4:334.
36. Figueiro MG, Hunter CM, Higgins PA, et al. Tailored lighting intervention for persons with dementia and caregivers living at home. *Sleep Health*. 2015;1(4):322–329.
37. Young CR, Jones GE, Figueiro MG, et al. At-sea trial of 24-h-based submarine watchstanding schedules with high and low correlated color temperature light sources. *J Biol Rhythms*. 2015;30(2):144–154.
38. Figueiro MG, Rea MS. Lack of short-wavelength light during the school day delays dim light melatonin onset (DLMO) in middle school students. *Neuro Endocrinol Lett*. 2010;31(1):92–96.
39. Sloane PD, Figueiro MG, Cohen L. Light as therapy for sleep disorders and depression in older adults. *Clin Gerontol*. 2008;16(3):25–31.
40. Radloff LS. The CES-D scale: a self-report depression scale for research in the general population. *Appl Psychol Meas*. 1977;1:385–401.
41. Cohen S, Williamson G. Perceived stress in a probability sample of the United States. In: Spacapan S, Oskamp S, eds. *The Social Psychology of Health*. Newbury Park, CA: Sage; 1988:31–67.
42. Buysse DJ, Reynolds CF, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. *Psychiatry Res*. 1989;28(2):193–213.
43. Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: the PANAS scales. *J Pers Soc Psychol*. 1988;54:1063–1070.
44. Cella D, Riley W, Stone A, et al. The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005–2008. *J Clin Epidemiol*. 2010;63(11):1179–1194.
45. Figueiro MG. Delayed sleep phase disorder: clinical perspective with a focus on light therapy. *Nat Sci Sleep*. 2016;8:91–106.
46. Miller D, Figueiro MG, Bierman A, Schernhammer E, Rea MS. Ecological measurements of light exposure, activity and circadian disruption. *Light Res Technol*. 2010;42:271–280.
47. Figueiro MG, Eggleston G, Rea MS. Effects of light exposure on behavior of Alzheimer's patients: a pilot study. In: *Light and Human Health: EPRI/LRO 5th International Lighting Research Symposium*; 2002:151–156.
48. Figueiro MG. Lessons from the Daysimeter: can circadian disruption in individuals with Alzheimer's disease be measured? *Neurodegener Dis Manag*. 2012;2(6):553–556.
49. Figueiro MG, Plitnick B, Lok A, et al. Tailored lighting intervention improves measures of sleep, depression and agitation in persons with Alzheimer's disease and related disorders living in long-term care facilities. *Clin Interv Aging*. 2014;9:1527–1537.
50. Rea MS, Figueiro MG, Bierman A, Bullough JD. Circadian light. *J Circadian Rhythms*. 2010;8:2.
51. Vetter C, Juda M, Lang D, Wojtysiak A, Roenneberg T.
Blue-enriched office light competes with natural light as a zeitgeber. *Scand J Work Environ Health*. 2011;37(5):437–445.
52. Crowley SJ, Molina TA, Burgess HJ. A week in the life of full-time office workers: work day and weekend light exposure in summer and winter. *Appl Ergon*. 2015;46(Part A):193–200.
53. Wood B, Rea MS, Plitnick B, Figueiro MG. Light level and duration of exposure determine the impact of self-luminous tablets on melatonin suppression. *Appl Ergon*. 2013;44(2):237–240.
54. Viola AU, James LM, Schlangen LJ, Dijk DJ. Blue-enriched white light in the workplace improves self-reported alertness, performance and sleep quality. *Scand J Work Environ Health*. 2008;34(2):176–184.
55. Broussard JL, Van Cauter E. Disturbances of sleep and circadian rhythms: novel risk factors for obesity. *Curr Opin Endocrinol Diabetes Obes*. 2016;23(5):352–359.
56. Kreier F, Kalsbeek A, Sauerwein HP, Fliers E, Romijn JA, Buijs RM. "Diabetes of the elderly" and type 2 diabetes in younger patients: possible role of the biological clock. *Exp Gerontol*. 2007;42(1–2):22–27.
57. Maemura K, Takeda N, Nagai R. Circadian rhythms in the CNS and peripheral clock disorders: role of the biological clock in cardiovascular diseases. *J Pharmacol Sci*. 2007;104(1):1–10.
58. Young ME, Bray MS. Potential role for peripheral circadian clock dyssynchrony in the pathogenesis of cardiovascular dysfunction. *Sleep Med*. 2007;8(6):656–667.
59. Hansen J. Risk of breast cancer after night- and shift work: current evidence and controversies. *Int J Occup Environ Health*. 2007;13(4):321–327.
60. Schernhammer ES, Laden F, Speizer FE, et al. Rotating night shifts and risk of breast cancer in women participating in the Nurses' Health Study. *J Natl Cancer Inst*. 2001;93(20):1563–1568.
Procedure: Evidence

George W. Pugh, Procedure: Evidence, 23 La. L. Rev. (1963). Available at: https://digitalcommons.law.lsu.edu/lalrev/vol23/iss2/22

that "the verdict is contrary to the law and the evidence,"\textsuperscript{33} and the defendant is urging that there is no evidence,\textsuperscript{34} or "no evidence of any probative value"\textsuperscript{35} of an essential element of the crime. In a recent federal case, \textit{United States ex rel. Weston v. Sigler},\textsuperscript{36} where a writ of habeas corpus had been applied for, the circuit court of appeals held that the failure to furnish a free transcript of all testimony to an indigent defendant who was appealing was a denial of "equal protection" of the law. The ultimate issue appears to be whether a complete transcript was, in the particular case, required for an adequate presentation of the defendant's appeal.\textsuperscript{37} It may be necessary, in order to provide for the situation where a defendant is claiming that there is a complete lack of probative evidence of an essential element of the crime, to amend and liberalize the provision of Article 500 of the Code of Criminal Procedure\textsuperscript{38} "that any accused desiring to send up the testimony of all of the witnesses so taken, shall pay for the same." In such an amendment the general right to a free transcript should be limited to indigent defendants.\textsuperscript{39}

\section*{EVIDENCE}

\textit{George W. Pugh*}

\section*{WITNESSES}

\subsection*{Testimonial "Judicial Confessions"}

Should a party-witness in a civil case be inexorably bound by his own disserving testimony? Relying upon Article 2291 of the Civil Code\textsuperscript{1} and a prior decision of the Orleans Court of Appeal,\textsuperscript{2} the First Circuit, in \textit{Franklin v. Zurich Insurance Co.},\textsuperscript{3} held that if the disserving testimony of a party-witness is completely consistent and unequivocal on a crucial factual issue, then it is a "judicial confession," compelling a judgment adverse to the party-witness, despite the existence in the record of contradictory testimony by other witnesses.

\begin{itemize}
\item \textsuperscript{33} La. R.S. 15:509(1) (1950).
\item \textsuperscript{34} State v. Linkletter, 239 La. 1000, 120 So.2d 835 (1960). \textit{Cf.} State v. Giangosso, 157 La. 360, 102 So. 429 (1924), where the facts certified by the trial judge showed that the defendant, convicted of receiving stolen things, really owned them.
\item \textsuperscript{35} State v. LaBorde, 234 La. 28, 99 So.2d 11 (1958). \textit{Accord}, Mayerhafer v. Department of Police, 235 La. 437, 104 So.2d 163 (1958), using the phrase "no probative evidence."
\item \textsuperscript{36} (Oct. 1962) 5th Circuit Case No. 19402, rehearing pending.
\item \textsuperscript{37} State v. Bueche, 243 La. 160, 142 So.2d 381 (1962), where the Louisiana Supreme Court upheld the trial judge's refusal of a complete transcript, stressing the fact that there was no allegation that an essential element of the crime was entirely unsupported by the proof.
\item \textsuperscript{38} La. R.S. 15:500 (1950).
\item \textsuperscript{39} The 1960 statute (Act 12), which was suspended in 1960 and repealed in 1962 (Act 449), had provided for a free transcript for all defendants.
\item *Professor of Law, Louisiana State University.
\end{itemize}
It appears that, at common law, it is improper to classify disserving testimony by a party as a judicial admission or judicial confession.\textsuperscript{4} The proper effect to be given to a party's disserving testimony, however, has been the subject of conflicting decisions. Professor McCormick takes the firm position that a mechanical rule of law requiring that a party be inexorably bound by his own testimony should not be adopted.\textsuperscript{5} Certainly, in weighing the evidence or determining whether a motion for a directed verdict should be granted, a party's testimony adverse to his own cause will receive great weight. A party may be scrupulously honest while giving disserving testimony—but may be in error. If a court is persuaded from all of the evidence that the party was mistaken, should it nonetheless be forced to render a judgment against him? If a party-witness is lying when he gives disserving testimony, is the loss of his lawsuit the price that he should be forced to pay, or a perjury prosecution the more appropriate remedy? A rule requiring the acceptance of a party-litigant's disserving testimony can present real problems when both plaintiff and defendant testify adversely to their respective interests on a crucial factual question.\textsuperscript{6} It seems to this writer that in the absence of controlling authority, the better view is to treat disserving testimony of the party-litigant along with all the other evidence in the record, weighing it in light of all of the circumstances.

At the time Article 2291 (defining and setting forth the effects of a judicial confession) was originally placed in the Civil Code,\textsuperscript{7} a party-litigant, because of the interest disqualification, was generally incompetent to testify as a witness.\textsuperscript{8} The application of the article in present-day law to a party-witness' disserving testimony, therefore, may appear somewhat anachronistic.\textsuperscript{9} It seems that in several instances where the courts have had real doubt as to the verity of the disserving testimony, the article has not been applied.\textsuperscript{10} In the instant case, the court stressed that the party-witness' testimony was completely consistent and unequivocal, and apparently would limit the application of Article 2291 to such cases. Earlier cases\textsuperscript{11} indicate like limitation of the article, but the article itself does not so stipulate. It seems unfortunate for Louisiana to have a rule of law requiring automatic acceptance of a party's disserving testimony, even when it is completely consistent and unequivocal—an undesirable departure from the normal practice of weighing all the evidence in light of the totality of circumstances.

\begin{enumerate}
\item \textsc{La. Civil Code} art. 2291 (1870).
\item Thompson v. Haubtman, 137 So. 362 (La. App. Orl. Cir. 1931).
\item 136 So.2d 735 (La. App. 1st Cir. 1962).
\item See McCormick, Evidence §§ 239, 243 (1954); 9 Wigmore, Evidence §§ 2588, 2594a (3d ed. 1940).
\item McCormick, Evidence § 243 (1954). See also 9 Wigmore, Evidence § 2594a (3d ed. 1940).
\item See McCormick, Evidence § 243 (1954); Sutherland v. Davis, 236 Ky. 743, 151 S.W.2d 1021 (1941).
\item As Article 237 of the Civil Code of 1808, and as Article 2270 of the Civil Code of 1825.
\end{enumerate}
\section*{EXAMINATION OF WITNESSES}

\subsection*{Sequestration of Witnesses}

Whether, in a criminal case, a motion to sequester witnesses should be granted, and whether, if such an order is granted, certain witnesses should be excluded from its coverage, is a matter within the sound discretion of the trial judge.\textsuperscript{12} The Supreme Court has stated that when an order of sequestration excludes from its coverage certain designated witnesses, the trial court will be reversed only if this exercise of discretion has been "arbitrary and unreasonable and the accused has been thereby prejudiced in obtaining a fair and impartial trial."\textsuperscript{13}

If a bill of exception has been taken to a ruling excluding certain persons from an order of sequestration, is it necessary for defense counsel, in order to protect his rights on appeal, to take an additional bill of exception when the witness in question is called to the stand to testify? Citing earlier cases, the court in \textit{State v. Ricks}\textsuperscript{14} indicates that the failure to take the additional bill "precludes consideration of the claim of defendant that their [the witnesses'] testimony was prejudicial to him."\textsuperscript{15} Although this position is clearly supported by the earlier \textit{Ferguson} case, it seems to this writer to be unwise and unduly technical. If, from the judge's per curiam on the first bill, or from other bills, it is possible for defense counsel to show that the discretionary refusal to sequester was "arbitrary or unreasonable" and that the defendant was thereby prejudiced, this should suffice.

\begin{footnotesize}
\begin{itemize}
\item \textsuperscript{8} Article 2260 of the Civil Code of 1825 (substantially the same as Article 248 of the Civil Code of 1808) provided: "The competent witness of any covenant or fact, whatever it may be, in civil matters, is that who is above the age of fourteen years complete, of a sound mind, free or enfranchised, and not one of those whom the law deem infamous. He must besides be not interested, neither directly nor indirectly, in the cause. The husband cannot be a witness either for or against his wife, nor the wife for or against her husband; neither can ascendants with respect to their descendants, nor descendants with respect to their ascendants." (Emphasis added.) See Brander v. Ferriday, Bennett & Co., 16 La. 296 (1840); Baudoin v. Nicolas, 12 Rob. 594 (La. 1846); Beer v. Word, 13 La. Ann. 467 (1858).
\item \textsuperscript{9} It has been the subject of conflicting jurisprudence in another context. In connection with the applicability of the article to allegations made by a party in pleadings in prior suits, see Farley v. Frost-Johnson Lumber Co., 133 La. 497, 63 So. 122 (1913), and numerous cases therein discussed; Sanderson v. Frost, 198 La. 295, 3 So.2d 626 (1941); \textit{The Work of the Louisiana Supreme Court for the 1957-1958 Term—Evidence}, 19 La. L. Rev. 431, 433 (1959).
\item \textsuperscript{10} See Stroud v. Standard Accident Insurance Co., 90 So.2d 477 (La. App. 2d Cir. 1956); Bowers v. Hardware Mutual Casualty Co., 119 So.2d 671 (La. App. 2d Cir. 1960); Richard v. Canning, 158 So. 598 (La. App. Orl. Cir. 1935).
\item \textsuperscript{11} Stroud v. Standard Accident Insurance Co., 90 So.2d 477 (La. App. 2d Cir. 1956); Bowers v. Hardware Mutual Casualty Co., 119 So.2d 671 (La. App. 2d Cir. 1960).
\item \textsuperscript{12} La. R.S. 15:371 (1950); State v. Barton, 207 La. 820, 22 So.2d 183
\end{itemize}
\end{footnotesize}
He has already registered his dissent from the judge's ruling, and should not be forced further to incur the disfavor of the witness. Thus, it is submitted, failure to renew the objection at the time the witness is called should not necessarily result in a forfeiture of his rights.

\subsection*{Prejudicial Effects of Unanswered Question}

Prior to 1952, Article 495 of the Code of Criminal Procedure provided that a witness (including a defendant who took the stand) could be compelled to answer on cross-examination whether or not he had ever been indicted or arrested, and, if so, how many times.\textsuperscript{16} If the witness answered in the negative, however, his cross-examiner was not allowed to prove the affirmative by extrinsic evidence.\textsuperscript{17} In 1952, the article was amended to provide that "no witness, whether he be defendant or not, can be \textit{asked} on cross-examination whether or not he has ever been indicted or arrested."\textsuperscript{18} (Emphasis added.) When, despite the prohibitory language of the 1952 amendment, a defendant is questioned on cross-examination concerning prior arrests, are his rights violated by the mere asking of the question, even if objection to the question is sustained and no answer to it is ever received? Phrased differently, does the statutory provision give a witness not only a right not to answer the question but also a right not to have the question asked?

In \textit{State v. Maney}\textsuperscript{19} the district judge had sustained defense counsel's objection to a question relative to previous arrests, and instructed the jury to disregard it. The Supreme Court found that the question violated the express language of the article as amended, but held that defendant's rights had been adequately protected, and that the district judge had not erred in refusing to grant defendant's motion for mistrial. In this connection, the court stated:

"The question itself furnishes no objectionable information, it is the answer to that question which could furnish the prohibited information. For an accused to be permitted to stand mute before such a question with the Court's sanction gives no cause for a jury to conclude that he has or has not been arrested before, or to draw any other inference therefrom harmful to defendant."\textsuperscript{20}

There appears to be a strong implication in the opinion that in no instance would the mere asking of the question afford a right to a mistrial. It seems to the writer, however, that this position is not in keeping with the purpose of the 1952 amendment, or the actualities of jury trial. Yet, when defense counsel is confronted with such a question, his interposition of an objection may itself strongly suggest to the jury that if the defendant were permitted to answer the question, the answer would be in the affirmative. The jury will presumably reason that otherwise the witness would have been permitted to answer. Thus the question and the objection thereto may be as communicative to the jury as the answer itself would have been.

\begin{itemize}
\item \textsuperscript{13} State v. Ferguson, 240 La. 593, 124 So.2d 558, 567 (1960).
\item \textsuperscript{14} 242 La. 823, 138 So.2d 589 (1962).
\item \textsuperscript{15} \textit{Id.} at 831, 138 So.2d at 592.
\item \textsuperscript{16} La. R.S. 15:495 (1950).
\item \textsuperscript{17} See State v. Vastine, 172 La. 137, 133 So. 389 (1931).
\item \textsuperscript{18} La. R.S. 15:495 (1950), as amended by La. Acts 1952, No. 180, § 1. A full discussion of admissibility, in Louisiana criminal trials, of evidence as to prior arrests is contained in Comment, \textit{Admissibility of Evidence of Prior Arrests in Louisiana Criminal Trials}, 19 La. L. Rev. 684 (1959).
\item \textsuperscript{19} 242 La. 223, 135 So.2d 473 (1961).
\item \textsuperscript{20} \textit{Id.} at 233, 135 So.2d at 476.
\end{itemize}
Unless a defendant is protected from the \textit{asking} of such a question, he may suffer incurable prejudice, as well recognized by Professor Wigmore,\textsuperscript{21} a recent Note in the \textit{Louisiana Law Review},\textsuperscript{22} and a recent federal case.\textsuperscript{23} The phraseology itself of Article 495 seems clearly to set the face of the law against the wafting of innuendo by the mere asking of the arrest question.\textsuperscript{24}

\section*{Motion To Strike}

The court in \textit{State v. Rogers},\textsuperscript{25} speaking of a motion to strike and quoting from \textit{State v. Saia},\textsuperscript{26} stated that "insofar as we have been able to ascertain, the Louisiana Code of Criminal Procedure does not provide for any such motion in the trial of a criminal case."\textsuperscript{27} In both the \textit{Rogers} and \textit{Saia} cases, the motions to strike appear to have been inappropriate and without merit. For the court to use language indicating that in no instance is a motion to strike available in Louisiana, however, seems unsound. Although it is true that there is no express authority in the Code of Criminal Procedure for the motion, it is also true that there is no express prohibition or abolition of this common law device. The motion is at times useful and convenient, often serving functionally the same purpose as the frequently employed motion for the court to instruct the jury to disregard certain testimony. In \textit{State v. Norris},\textsuperscript{31} decided during the same term as the \textit{Rogers} case, the Supreme Court noted that "the [trial] judge, at the instance of defense counsel and as requested to do, sustained an objection to the witness' testimony, ordered all of it stricken, and directed the jury to disregard it."\textsuperscript{32} (Emphasis added.) The Supreme Court expressly stated that the ruling had been correct.

Article 0.2 of the Code of Criminal Procedure\textsuperscript{33} provides that "in matters of criminal procedure where there is no express law the common law rules of procedure shall prevail." It is submitted that under the authority of this article, the motion to strike should still be available.

\section*{Hearsay}

\subsection*{Statements in the Presence of the Accused}

In \textit{State v. Ricks}\textsuperscript{34} the Supreme Court, upholding the trial court's overruling of a hearsay objection, stated: "The testimony was not hearsay for, as pointed out by the judge, the entire identification incident took place in the presence of the accused."\textsuperscript{35}

\begin{itemize}
\item \textsuperscript{21} 6 Wigmore, Evidence § 1808 (3d ed. 1940).
\item \textsuperscript{22} Note, 19 La. L. Rev. 881 (1959).
\item \textsuperscript{23} United States v. C.L. Guild Construction Co., 193 F. Supp. 268 (D.R.I. 1961).
\item \textsuperscript{24} See Justice Jackson's opinion in Michelson v. United States, 335 U.S. 469 (1948).
\item \textsuperscript{25} 241 La. 841, 132 So.2d 819 (1961).
\item \textsuperscript{26} 212 La. 868, 877, 33 So.2d 685, 688 (1948).
\item \textsuperscript{27} 241 La. 841, 898, 132 So.2d 819, 839 (1961).
\item \textsuperscript{28} See McCormick, Evidence § 52 (1954); 1 Wigmore, Evidence § 18 (3d ed. 1940).
\item \textsuperscript{29} Ibid.
\item \textsuperscript{30} See, for example, State v. Norris, 242 La. 1070, 141 So.2d 368 (1962); State v. Johnson, 229 La. 476, 86 So.2d 108 (1956); State v. Cooper, 223 La. 560, 66 So.2d 336 (1953); State v. Foster, 164 La. 813, 114 So. 696 (1927); State v. Armstrong, 118 La. 480, 43 So. 57 (1907); Roquest v. Boutin, 14 La. Ann. 44 (1859); McCormick, Evidence § 52 (1954); 1 Wigmore, Evidence § 18 (3d ed. 1940).
\item \textsuperscript{31} 242 La. 1070, 141 So.2d 368 (1962).
\item \textsuperscript{32} Id. at 1080-81, 141 So.2d at 372.
\end{itemize}
With deference, it is submitted that the mere fact that an out-of-court statement was made in the presence of an accused should not necessarily cause it to be classified as non-hearsay, or admissible as an admission.\textsuperscript{36} At times, because of its independent relevance, the fact that a statement was made in the presence of an accused outside of court may be admissible as fact of utterance rather than utterance of fact, as, for example, to show that the accused was possessed of certain information. If an accused remains silent or does not deny a statement made in his presence, and circumstances are such that an ordinary person would deny the validity of the statement if he believed it to be untrue, then the silence of the accused may qualify as an admission,\textsuperscript{37} and this may well have been the situation actually presented in the instant case. Although it is sometimes assumed that an out-of-court statement made in the presence of the accused is always automatically admissible, it is submitted that the better reasoned view is to the contrary.\textsuperscript{38} Let us consider, for example, (1) an accusatory statement made in the presence of a person (later the defendant) which he at the time expressly denied, or (2) a non-accusatory, non-incriminating statement to a third party made in the presence of a person (later the defendant) which he clearly had no interest either to affirm or deny. In such cases the presence of the defendant certainly should afford no magical balm transmuting the out-of-court assertion to admissible non-hearsay. Although, in the instant case, the evidence may well have been admissible as an admission, it seems unwise for Louisiana to take the position that no statement made in the presence of a defendant is subject to a hearsay objection.

\begin{itemize}
\item \textsuperscript{33} La. R.S. 15:0.2 (1950).
\item \textsuperscript{34} 242 La. 823, 138 So.2d 589 (1962).
\item \textsuperscript{35} \textit{Id.} at 831, 138 So.2d at 592.
\item \textsuperscript{36} Comment, \textit{Hearsay and Non-Hearsay as Reflected in Louisiana Criminal Cases}, 14 La. L. Rev. 611 (1954).
\item \textsuperscript{37} McCormick, \textit{Evidence} § 247 (1954); 4 Wigmore, \textit{Evidence} §§ 1069-1072 (3d ed. 1940).
\item \textsuperscript{38} McCormick, \textit{Evidence} § 247 (1954).
\end{itemize}

\section*{Confessions}

Rules for the protection of a defendant against jury consideration of an inadmissible confession are fundamental to our law. Article 451 of the Code of Criminal Procedure\textsuperscript{39} expressly provides that before a confession may be introduced in evidence, it must be affirmatively shown to have been freely and voluntarily made. The decisions make it clear that the same rule applies to admissions involving criminal intent or inculpatory fact.\textsuperscript{40} Speaking of this preliminary showing, the Supreme Court has stated\textsuperscript{41} that unquestionably the correct practice is to require that the jury be withdrawn, for if, after the jury has heard the state's evidence in this connection, the defendant's statement is held inadmissible, a mistrial must be granted.\textsuperscript{42}

What of indubitable knowledge coming to a juror prior to trial that a defendant has confessed? Can a juror, however conscientious, completely disregard such firsthand knowledge? Do our present rules afford defendant adequate protection? In \textit{State v. Rideau},\textsuperscript{43} a murder case, the defendant, prior to indictment, had been "interviewed" by the sheriff, and had "admitted his part in the crime."\textsuperscript{44} The entire interview had been filmed with sound track, and telecast locally three times.
Although other admissions and confessions were admitted in evidence, there is no indication in the Supreme Court's opinion that the televised interview was offered or received. As might be expected, however, the effect of the public showing presented problems at the trial.

Relying in part upon the "sensational" news coverage, defendant moved for a change of venue. The Supreme Court, citing traditional rules vesting wide discretion in the trial judge, found no error in the denial of the motion. One of the prospective jurors, when examined on \textit{voir dire}, had stated, in the presence of eleven jurors, that he had a fixed opinion and could not try the case solely on the evidence adduced at the trial, since he had seen the defendant confessing on television in the presence of the sheriff. The district court granted a challenge for cause and instructed the jury to disregard the remark, but denied defendant's motion for mistrial. Since the prosecution had not been responsible for the remark, the Supreme Court, relying upon prior jurisprudence, upheld the ruling.\textsuperscript{45}

Three of the jurors who tried the defendant had testified on \textit{voir dire} that they had seen the television "interview." Defendant's challenges for cause of these jurors had been denied by the trial court, since they had testified "that they could lay aside any opinion, give the defendant the presumption of innocence as provided by law, base their decision solely upon the evidence, and apply the law as given by the court."\textsuperscript{46} Applying the usual test, the Supreme Court also upheld this ruling.

It seems to this writer that it would be practically impossible for even the most conscientious juror completely to disregard defendant's filmed confession. The problems inherent in a pre-trial telecast of a defendant's confession were certainly not foreseen when the various traditional rules applied in the above holdings were formulated. If a defendant is to be protected from consideration of such confessions in our new electronic age, then it seems that some means must be designed either to prohibit telecasts such as that in the instant case, or to provide more effective rules for the implementation of fundamental principles.

\begin{itemize}
\item \textsuperscript{39} La. R.S. 15:451 (1950).
\item \textsuperscript{40} See \textit{The Work of the Louisiana Supreme Court for the 1955-1956 Term—Evidence}, 17 La. L. Rev. 421, 424-25 (1957).
\item \textsuperscript{41} State v. Green, 221 La. 713, 730, 60 So.2d 208, 213 (1952).
\item \textsuperscript{42} As to the incurable effect of a remark by a district attorney relative to an inadmissible statement made by the defendant, see State v. Coleman, 140 La. 417, 73 So. 252 (1916).
\item \textsuperscript{43} 242 La. 431, 137 So.2d 283 (1962).
\item \textsuperscript{44} \textit{Id.} at 447, 137 So.2d at 289.
\item \textsuperscript{45} A like incident occurred during the selection of an alternate juror, this time in the presence of all twelve jurors. Similar rulings were made in the district court and Supreme Court.
\item \textsuperscript{46} 242 La. 431, 462, 137 So.2d 283, 295 (1962).
\end{itemize}
Table of Contents

State/Territory Name: California
State Plan Amendment (SPA) #: 20-0024

This file contains the following documents in the order listed:
1) Approval Letter
2) CMS 179 Form/Summary Form (with 179-like data)
3) Approved SPA Pages

May 13, 2020

Jacey Cooper
Chief Deputy Director, Health Care Programs
California Department of Health Care Services
P.O. Box 997413, MS 0000
Sacramento, CA 95899-7413

Re: California State Plan Amendment (SPA) 20-0024

Dear Ms. Cooper:

We have reviewed the proposed amendment to add section 7.4, Medicaid Disaster Relief for the COVID-19 National Emergency, to your Medicaid state plan, as submitted under transmittal number (TN) 20-0024. This amendment proposes to implement temporary policies, which are different from those policies and procedures otherwise applied under your Medicaid state plan, during the period of the Presidential and Secretarial emergency declarations related to the COVID-19 outbreak (or any renewals thereof).

On March 13, 2020, the President of the United States issued a proclamation that the COVID-19 outbreak in the United States constitutes a national emergency by the authorities vested in him by the Constitution and the laws of the United States, including sections 201 and 301 of the National Emergencies Act (50 U.S.C. 1601 et seq.), and consistent with section 1135 of the Social Security Act (Act). On March 13, 2020, pursuant to section 1135(b) of the Act, the Secretary of the United States Department of Health and Human Services invoked his authority to waive or modify certain requirements of titles XVIII, XIX, and XXI of the Act as a result of the consequences of the COVID-19 pandemic, to the extent necessary, as determined by the Centers for Medicare & Medicaid Services (CMS), to ensure that sufficient health care items and services are available to meet the needs of individuals enrolled in the respective programs, and to ensure that health care providers that furnish such items and services in good faith, but are unable to comply with one or more of such requirements as a result of the COVID-19 pandemic, may be reimbursed for such items and services and exempted from sanctions for such noncompliance, absent any determination of fraud or abuse. This authority took effect as of 6PM Eastern Standard Time on March 15, 2020, with a retroactive effective date of March 1, 2020. The emergency period will terminate, and this state plan provision will no longer be in effect, upon termination of the public health emergency, including any extensions.

Pursuant to section 1135(b)(5) of the Act, for the period of the public health emergency, CMS is modifying the requirement at 42 C.F.R. 430.20 that the state submit SPAs related to the COVID-19 public health emergency by the final day of the quarter in order to obtain a SPA effective date during that quarter, enabling SPAs submitted after the last day of the quarter to have an effective date in a previous quarter, but no earlier than the effective date of the public health emergency.

The State of California also requested a waiver of public notice requirements applicable to the SPA submission process. Pursuant to section 1135(b)(1)(C) of the Act, CMS is waiving public notice requirements applicable to the SPA submission process. Public notice for SPAs is required under 42 C.F.R. §447.205 for changes in statewide methods and standards for setting Medicaid payment rates, 42 C.F.R. §447.57 for changes to premiums and cost sharing, and 42 C.F.R. §440.386 for changes to Alternative Benefit Plans (ABPs).
Pursuant to section 1135(b)(1)(C) of the Act, CMS is approving the state's request to waive these notice requirements otherwise applicable to SPA submissions. The State of California also requested a waiver to modify the tribal consultation timeline applicable to this SPA submission process. Pursuant to section 1135(b)(5) of the Act, CMS is also allowing states to modify the timeframes associated with tribal consultation required under section 1902(a)(73) of the Act, including shortening the number of days before submission or conducting consultation after submission of the SPA.

These waivers or modifications of the requirements related to SPA submission timelines, public notice, and tribal consultation apply only with respect to SPAs that meet the following criteria: (1) the SPA provides or increases beneficiary access to items and services related to COVID-19 (such as by waiving or eliminating cost sharing, increasing payment rates, or amending ABPs to add services or providers); (2) the SPA does not restrict or limit payment or services or otherwise burden beneficiaries and providers; and (3) the SPA is temporary, with a specified sunset date that is not later than the last day of the declared COVID-19 public health emergency (or any extension thereof). We nonetheless encourage states to make all relevant information about the SPA available to the public so they are aware of the changes.

We conducted our review of your submittal according to the statutory requirements at section 1902(a) of the Act and implementing regulations. This letter is to inform you that California's Medicaid SPA Transmittal Number 20-0024 is approved effective March 1, 2020. Please note that the effective date for the new COVID-19 testing eligibility group described at section 1902(a)(10)(A)(ii)(XXIII) of the Act is March 18, 2020. Enclosed is a copy of the CMS-179 summary form and the approved state plan pages.

Please contact Cheryl Young at 415-744-3598 or by email at email@example.com if you have any questions about this approval. We appreciate the efforts of you and your staff in responding to the needs of the residents of the State of California and the health care community.

Sincerely,

/s/ Anne Marie Costello
Deputy Director
Center for Medicaid & CHIP Services

Enclosures

cc: Anastasia Dodson, Department of Health Care Services (DHCS)
Lindy Harrington, DHCS
Rene Mollow, DHCS
Angeli Lee, DHCS
Amanda Font, DHCS

TRANSMITTAL AND NOTICE OF APPROVAL OF STATE PLAN MATERIAL FOR: CENTERS FOR MEDICARE & MEDICAID SERVICES

TO: REGIONAL ADMINISTRATOR, CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES

1. TRANSMITTAL NUMBER: 20-0024
2. STATE: California
3. PROGRAM IDENTIFICATION: Title XIX of the Social Security Act (Medicaid)
4. PROPOSED EFFECTIVE DATE: March 1, 2020
5. TYPE OF PLAN MATERIAL (Check One): □ NEW STATE PLAN □ AMENDMENT TO BE CONSIDERED AS NEW PLAN ☒ AMENDMENT
COMPLETE BLOCKS 6 THRU 10 IF THIS IS AN AMENDMENT (Separate transmittal for each amendment)
6. FEDERAL STATUTE/REGULATION CITATION: 42 U.S.C. § 1320b-5; 42 CFR Part 447, including Subpart F (see box 23); Title XIX of the Social Security Act
7. FEDERAL BUDGET IMPACT: a. FFY 2020: $44,583,623 ($8,784,000 monthly); b. FFY n/a: $n/a
8. PAGE NUMBER OF THE PLAN SECTION OR ATTACHMENT: Section 7.4, pages 90a-m
9. PAGE NUMBER OF THE SUPERSEDED PLAN SECTION OR ATTACHMENT (If Applicable): Attachment 4.19 A, Pages 38-40-5, Section D.a; Attachment 4.19 B, p 3-3, 21-25, 11-38, 41d, 66; Attachment 4.19 B, Sections C.1, D.1, E.1; Attachment 3.1 A, Page 1 (see more in box 23)
10. SUBJECT OF AMENDMENT: Medicaid Disaster Relief for the Novel Coronavirus Disease (COVID-19) National Emergency - Request for Additional Flexibilities to Waive or Modify Certain Requirements of California's State Plan
11. GOVERNOR'S REVIEW (Check One): □ GOVERNOR'S OFFICE REPORTED NO COMMENT □ COMMENTS OF GOVERNOR'S OFFICE ENCLOSED □ NO REPLY RECEIVED WITHIN 45 DAYS OF SUBMITTAL ☒ OTHER, AS SPECIFIED (see box 23)
12. SIGNATURE OF STATE AGENCY OFFICIAL:
13. TYPED NAME: Jacey Cooper
14. TITLE: State Medicaid Director
15. DATE SUBMITTED: April 3, 2020
16. RETURN TO: Department of Health Care Services, Attn: Director's Office, P.O. Box 997413, MS 0000, Sacramento, CA 95899-7413
17. DATE RECEIVED: April 3, 2020
18. DATE APPROVED: May 13, 2020
PLAN APPROVED - ONE COPY ATTACHED
19. EFFECTIVE DATE OF APPROVED MATERIAL: March 1, 2020
20. SIGNATURE OF REGIONAL OFFICIAL: Digitally signed by Anne M. Costello -S, Date: 2020.05.13 09:45:53 -07'00'
21. TYPED NAME: Anne Marie Costello
22. TITLE: CMCS Deputy Director
23. REMARKS:
For Box 6, additional responses are: 1902(a)(47)(B) of the Act; 42 CFR 435.1110. 5/4/20: CMS pen ink change.
For Box 9, additional responses are: Supplement 3 to Attachment 3.1 A, Pages 3-6a; Attachment 3.1K, Page 18; Attachment 4.19-D; Supplement 4 to Attachment 4.19-D. 5/4/20 Box 9: CMS pen ink change - pages remain in State Plan.
For Box 11 "Other, As Specified," please note: The Governor's Office does not wish to review the State Plan Amendment.
5/1/20: The state updated the monthly fiscal impact for box 9.

FORM CMS-179 (07/92) Instructions on Back

Section 7 – General Provisions

7.4. Medicaid Disaster Relief for the COVID-19 National Emergency

On March 13, 2020, the President of the United States issued a proclamation that the COVID-19 outbreak in the United States constitutes a national emergency by the authorities vested in him by the Constitution and the laws of the United States, including sections 201 and 301 of the National Emergencies Act (50 U.S.C. 1601 et seq.), and consistent with section 1135 of the Social Security Act (Act). On March 13, 2020, pursuant to section 1135(b) of the Act, the Secretary of the United States Department of Health and Human Services invoked his authority to waive or modify certain requirements of titles XVIII, XIX, and XXI of the Act as a result of the consequences of the COVID-19 pandemic, to the extent necessary, as determined by the Centers for Medicare & Medicaid Services (CMS), to ensure that sufficient health care items and services are available to meet the needs of individuals enrolled in the respective programs, and to ensure that health care providers that furnish such items and services in good faith, but are unable to comply with one or more of such requirements as a result of the COVID-19 pandemic, may be reimbursed for such items and services and exempted from sanctions for such noncompliance, absent any determination of fraud or abuse. This authority took effect as of 6PM Eastern Standard Time on March 15, 2020, with a retroactive effective date of March 1, 2020. The emergency period will terminate, and waivers will no longer be available, upon termination of the public health emergency, including any extensions.
The State Medicaid agency (agency) seeks to implement the policies and procedures described below, which are different from the policies and procedures otherwise applied under the Medicaid state plan, during the period of the Presidential and Secretarial emergency declarations related to the COVID-19 outbreak (or any renewals thereof), or for any shorter period described below: Describe shorter period here.

NOTE: States may not elect a period longer than the Presidential or Secretarial emergency declaration (or any renewal thereof). States may not propose changes on this template that restrict or limit payment, services, or eligibility, or otherwise burden beneficiaries and providers.

Request for Waivers under Section 1135

___X___ The agency seeks the following under section 1135(b)(1)(C) and/or section 1135(b)(5) of the Act:

a. ___X___ SPA submission requirements – the agency requests modification of the requirement to submit the SPA by March 31, 2020, to obtain a SPA effective date during the first calendar quarter of 2020, pursuant to 42 CFR 430.20.

b. ___X___ Public notice requirements – the agency requests waiver of public notice requirements that would otherwise be applicable to this SPA submission. These requirements may include those specified in 42 CFR 440.386 (Alternative Benefit Plans), 42 CFR 447.57(c) (premiums and cost sharing), and 42 CFR 447.205 (public notice of changes in statewide methods and standards for setting payment rates).

c. ___X___ Tribal consultation requirements – the agency requests modification of tribal consultation timelines specified in the California Medicaid state plan, as described below: Please describe the modifications to the timeline. To the extent there is a direct impact to Tribal Health Programs requiring a notice, California requests a 10 business-day notice period that will occur after the SPA is submitted to CMS for approval.

Section A – Eligibility

1. ___X___ The agency furnishes medical assistance to the following optional groups of individuals described in section 1902(a)(10)(A)(ii) or 1902(a)(10)(C) of the Act. This may include the new optional group described at section 1902(a)(10)(A)(ii)(XXIII) and 1902(ss) of the Act providing coverage for uninsured individuals.

The state elects to cover all uninsured individuals as defined under 1902(ss) of the Act pursuant to section 1902(a)(10)(A)(ii)(XXIII) of the Act, effective March 18, 2020.

2. _____ The agency furnishes medical assistance to the following populations of individuals described in section 1902(a)(10)(A)(ii)(XX) of the Act and 42 CFR 435.218:
a. _____ All individuals who are described in section 1905(a)(10)(A)(ii)(XX). Income standard: _______________
-or-
b. _____ Individuals described in the following categorical populations in section 1905(a) of the Act: Income standard: _______________

3. ___X___ The agency applies less restrictive financial methodologies to individuals excepted from financial methodologies based on modified adjusted gross income (MAGI) as follows.

Less restrictive income methodologies: California disregards income up to 138% FPL for the following eligibility groups:
• Individuals Eligible For But Not Receiving Cash Assistance--section 1902(a)(10)(A)(ii)(I)
• Age and Disability Poverty Level--section 1902(a)(10)(A)(ii)(X)

Less restrictive resource methodologies:
4. The agency considers individuals who are evacuated from the state, who leave the state for medical reasons related to the disaster or public health emergency, or who are otherwise absent from the state due to the disaster or public health emergency and who intend to return to the state, to continue to be residents of the state under 42 CFR 435.403(j)(3).

5. The agency provides Medicaid coverage to the following individuals living in the state, who are non-residents:

6. The agency provides for an extension of the reasonable opportunity period for non-citizens declaring to be in a satisfactory immigration status, if the non-citizen is making a good faith effort to resolve any inconsistencies or obtain any necessary documentation, or the agency is unable to complete the verification process within the 90-day reasonable opportunity period due to the disaster or public health emergency.

Section B – Enrollment

1. ___X___ The agency elects to allow hospitals to make presumptive eligibility determinations for the following additional state plan populations, or for populations in an approved section 1115 demonstration, in accordance with section 1902(a)(47)(B) of the Act and 42 CFR 435.1110, provided that the agency has determined that the hospital is capable of making such determinations. Please describe the applicable eligibility groups/populations and any changes to reasonable limitations, performance standards or other factors.

California allows HPE for the following eligibility groups:
- Individuals Eligible For But Not Receiving Cash Assistance--section 1902(a)(10)(A)(ii)(I)
- Individuals Receiving Home and Community-Based Services--section 1902(a)(10)(A)(ii)(VI)
- Optional State Supplement Beneficiaries--section 1902(a)(10)(A)(ii)(XI)
- PACE Enrollees--section 1934
- Age and Disability Poverty Level--section 1902(a)(10)(A)(ii)(X)
- Work Incentives/BBA--section 1902(a)(10)(A)(ii)(XIII)
- Uninsured individuals as defined under 1902(ss) of the Act pursuant to section 1902(a)(10)(A)(ii)(XXIII) of the Act, effective March 18, 2020

PE Period Limitations: California intends to add an additional PE period to the above HPE coverage groups, specifically allowing for the following total number of PE periods within a 12-month period: California allows 2 PE periods in a 12-month period, beginning on the date of the first PE approval.

2. The agency designates itself as a qualified entity for purposes of making presumptive eligibility determinations described below in accordance with sections 1920, 1920A, 1920B, and 1920C of the Act and 42 CFR Part 435 Subpart L. Please describe any limitations related to the populations included or the number of allowable PE periods.

3. The agency designates the following entities as qualified entities for purposes of making presumptive eligibility determinations or adds additional populations as described below in accordance with sections 1920, 1920A, 1920B, and 1920C of the Act and 42 CFR Part 435 Subpart L. Indicate if any designated entities are permitted to make presumptive eligibility determinations only for specified populations. Please describe the designated entities or additional populations and any limitations related to the specified populations or number of allowable PE periods.
4. The agency adopts a total of _____ months (not to exceed 12 months) continuous eligibility for children under age _____ (not to exceed age 19) regardless of changes in circumstances in accordance with section 1902(e)(12) of the Act and 42 CFR 435.926.

5. The agency conducts redeterminations of eligibility for individuals excepted from MAGI-based financial methodologies under 42 CFR 435.603(j) once every _____ months (not to exceed 12 months) in accordance with 42 CFR 435.916(b).

6. The agency uses the following simplified application(s) to support enrollment in affected areas or for affected individuals (a copy of the simplified application(s) has been submitted to CMS).
a. The agency uses a simplified paper application.
b. The agency uses a simplified online application.
c. The simplified paper or online application is made available for use in call-centers or other telephone applications in affected areas.

Section C – Premiums and Cost Sharing

1. ___X___ The agency suspends deductibles, copayments, coinsurance, and other cost sharing charges as follows: Please describe whether the state suspends all cost sharing or suspends only specified deductibles, copayments, coinsurance, or other cost sharing charges for specified items and services or for specified eligibility groups consistent with 42 CFR 447.52(d) or for specified income levels consistent with 42 CFR 447.52(g).

The state waives cost-sharing for testing services (including in vitro diagnostic products), testing-related services, and treatments for COVID-19, including vaccines, specialized equipment and therapies, for any quarter in which the temporary increased FMAP is claimed.

2. ___X___ The agency suspends enrollment fees, premiums and similar charges for:
a. _____ All beneficiaries
b. ___X___ The following eligibility groups or categorical populations: Please list the applicable eligibility groups or populations.
• Optional Targeted Low-Income Children (OTLIC) – see SPA 17-044; Attachment 4.18-F
• Work Incentives/BBA--section 1902(a)(10)(A)(ii)(XIII)

3. _____ The agency allows waiver of payment of the enrollment fee, premiums and similar charges for undue hardship. Please specify the standard(s) and/or criteria that the state will use to determine undue hardship.

Section D – Benefits

Benefits:

1. _____ The agency adds the following optional benefits in its state plan (include service descriptions, provider qualifications, and limitations on amount, duration or scope of the benefit):

2. ___X___ The agency makes the following adjustments to benefits currently covered in the state plan:

The state allows physicians and other licensed practitioners, in accordance with State law, to order Medicaid Home Health services as authorized in the COVID-19 Public Health Emergency Medicare interim final rule (CMS-1744-IFC).

The state modifies its rehabilitative services benefit in the Drug Medi-Cal State Plan to expand individual counseling visits to include visits focused on short-term personal, family, job/school or other problems and their relationship to substance use, in addition to the currently allowable visits for the purpose of intake, crisis intervention, collateral services, and treatment and discharge planning.

The state removes utilization controls on covered benefits to the extent such limits cannot be exceeded based on medical necessity in the relevant approved State plan.
3. ___X___ The agency assures that newly added benefits or adjustments to benefits comply with all applicable statutory requirements, including the statewideness requirements found at 1902(a)(1), comparability requirements found at 1902(a)(10)(B), and free choice of provider requirements found at 1902(a)(23).

4. ___X___ Application to Alternative Benefit Plans (ABP). The state adheres to all ABP provisions in 42 CFR Part 440, Subpart C. This section only applies to states that have an approved ABP(s).
a. ___X___ The agency assures that these newly added and/or adjusted benefits will be made available to individuals receiving services under ABPs.
b. _____ Individuals receiving services under ABPs will not receive these newly added and/or adjusted benefits, or will only receive the following subset: Please describe.

Telehealth:

5. ___X___ The agency utilizes telehealth in the following manner, which may be different than outlined in the state's approved state plan: Please describe.

Face-to-face requirement: Modify the face-to-face requirement for State Plan benefits/services to be provided via all forms of telehealth and telephone, regardless of originating or distant site. This affords providers the flexibility to safely and expeditiously render necessary care to people.

Drug Benefit:

6. ___X___ The agency makes the following adjustments to the day supply or quantity limit for covered outpatient drugs. The agency should only make this modification if its current state plan pages have limits on the amount of medication dispensed. Please describe the change in days or quantities that are allowed for the emergency period and for which drugs.

Removal of the six-prescription per calendar month limitation on covered outpatient drugs. This applies to all FFS Medi-Cal pharmacy providers and all covered outpatient drugs. Non-legend acetaminophen-containing drugs and non-legend cough and cold drugs that are covered outpatient drugs will be included in the pharmacy benefit. Providers may dispense up to a 100-day supply at one time of all covered outpatient drugs.

7. ___X___ Prior authorization for medications is expanded by automatic renewal without clinical review, or time/quantity extensions.

8. The agency makes the following payment adjustment to the professional dispensing fee when additional costs are incurred by the providers for delivery. States will need to supply documentation to justify the additional fees. Please describe the manner in which professional dispensing fees are adjusted.

9. The agency makes exceptions to their published Preferred Drug List if drug shortages occur. This would include options for covering a brand name drug product that is a multi-source drug if a generic drug option is not available.

Section E – Payments

Optional benefits described in Section D:

1. Newly added benefits described in Section D are paid using the following methodology:
a. Published fee schedules – Effective date (enter date of change): ____________ Location (list published location): ____________
b. Other: Describe methodology here.

Increases to state plan payment methodologies:

2. The agency increases payment rates for the following services: Please list all that apply.

Clinical laboratory or laboratory services, as generally described in State Plan Attachment 3.1-A, page 1, paragraph 3, that relate to the 2019 Novel Coronavirus (COVID-19).
The COVID-19 procedure codes include U0001, U0002, and 87635 for diagnostic laboratory testing, G2023 and G2024 for the related specimen collection, and any COVID-19 diagnostic testing or collection procedure code, or equivalent code, adopted or established by CMS in the future. The payment increases will be effective for dates of service on or after March 1, 2020, or the date the procedure code is adopted or established by CMS. This change will affect the clinical laboratory or laboratory services methodology described on pages 3d and 3f of Attachment 4.19-B and authorize 100 percent of the Medicare rate as the reimbursement methodology for procedure codes related to COVID-19.

Skilled Nursing Facilities (SNFs), including Freestanding Nursing Facilities Level-B; Nursing Facilities Level-A; Distinct Part Nursing Facilities Level-B; Freestanding Adult Subacute facilities; Distinct Part Adult Subacute facilities; Distinct Part Pediatric Subacute facilities; Freestanding Pediatric Subacute facilities; and Intermediate Care Facilities for the Developmentally Disabled (ICF/DDs), ICF/DD-Habilitative, and ICF/DD-Nursing, as described in State Plan Attachment 4.19-D and Supplement 4 to Attachment 4.19-D. This would not apply to state-owned SNFs and state-owned ICFs, inclusive of Developmental Centers and Veterans Homes.

a. Payment increases are targeted based on the following criteria: Please describe criteria.

Clinical laboratories and laboratory services are experiencing increased cost pressures to provide a high volume of COVID-19 diagnostic testing and related specimen collection services. The payment increases will provide sufficient reimbursement for providers to collect specimens and to conduct the necessary COVID-19 diagnostic testing during the COVID-19 outbreak and national emergency.

SNFs and ICF/DDs are experiencing increased cost pressures in a variety of areas as a result of the COVID-19 response, and the state is seeking flexibility to allow consideration of all costs being incurred by facilities to ensure the health and safety of residents. Increased costs related to the COVID-19 response could include, but are not limited to, increased staffing costs, medical equipment costs, and sanitizing costs.

b. Payments are increased through:
i. _____ A supplemental payment or add-on within applicable upper payment limits: Please describe.
ii. ___X___ An increase to rates as described below. Rates are increased:

___X___ Uniformly by the following percentage: 10 percent of current SNF (including Freestanding Nursing Facilities Level-B; Nursing Facilities Level-A; Distinct Part Nursing Facilities Level-B; Freestanding Adult Subacute Facilities; Distinct Part Adult Subacute Facilities; Distinct Part Pediatric Subacute facilities; Freestanding Pediatric Subacute facilities) and ICF/DD (including ICF/DDs, ICF/DD-Habilitative, and ICF/DD-Nursing) per diem rates. This increase would not apply to state-owned SNFs or ICFs, including Developmental Centers and Veterans Homes.
The SNF and ICF/DD per diem rates are inclusive of add-ons, the Freestanding Pediatric Subacute Facility supplemental payments described on page 37 of Attachment 4.19-D, and the ICF/DD supplemental payments as described on page 35 of Attachment 4.19-D, but exclusive of ancillary charges and other supplemental payments, including the Quality and Accountability Supplemental Program described on pages 20-24 of Supplement 4 to Attachment 4.19-D, the ICF/DD day treatment supplemental payment described on page 30 of Attachment 4.19-D, and the Special Treatment Program (STP) Patch under 22 CCR § 51511.1.

The state will provide demonstration that payments for the state fiscal year are within the applicable fee-for-service upper payment limits, including those as defined in 42 CFR 447.272 and 447.321, when the upper payment limit demonstrations are due for the fiscal year. If the demonstration shows that payments for any category have exceeded the upper payment limit, the state will take corrective action as determined by CMS.

_____ Through a modification to published fee schedules – Effective date (enter date of change): _______________ Location (list published location): _______________

___X___ Up to the Medicare payments for equivalent services. The payment for clinical laboratory COVID-19 related procedure codes will be equal to the Medicare payment for equivalent services.

_____ By the following factors: Please describe.

Payment for services delivered via telehealth:

3. ___X___ For the duration of the emergency, the state authorizes payments for telehealth services that:
a. ___X___ Are not otherwise paid under the Medicaid state plan;
b. _____ Differ from payments for the same services when provided face to face;
c. ___X___ Differ from current state plan provisions governing reimbursement for telehealth;
d. ___X___ Include payment for ancillary costs associated with the delivery of covered services via telehealth (if applicable), as follows:
i. _____ Ancillary cost associated with the originating site for telehealth is incorporated into fee-for-service rates.
ii. ___X___ Ancillary cost associated with the originating site for telehealth is separately reimbursed as an administrative cost by the state when a Medicaid service is delivered.

Payment for ancillary costs, as described in paragraph 3.d above, is applicable to Drug Medi-Cal services only.

4. ___X___ Other payment changes:

In accordance with the Emergency Paid Sick Leave Act under H.R. 6201, allow the In-Home Supportive Services (IHSS) Individual Provider Rate, which includes Wages, Payroll Tax, Benefits, Administrative Costs, and Paid Time Off within the negotiated rate, to include payment for paid time off of IHSS providers related to COVID-19 sick leave benefits for a limited time period, beginning April 2, 2020 through December 31, 2020, or the end of the COVID-19 public health emergency period if sooner. The State-approved county governmental, contracted, and private individual provider rates are documented in a fee schedule, and that fee schedule has been updated to reflect the additional sick leave mandated pursuant to the Emergency Paid Sick Leave Act on April 2, 2020, and is effective for services provided after that date through December 31, 2020, or the end of the COVID-19 public health emergency period if sooner.
This fee schedule is published on the California Department of Social Services website at: https://www.cdss.ca.gov/inforesources/ihss/county-ihss-wages-rates

For Drug Medi-Cal (DMC) non-Narcotic Treatment Program (non-NTP) services provided on or after March 1, 2020, until the COVID-19 public health emergency ends, the State will: (1) provide interim reimbursement equal to the lower of the county's billed amount or the Statewide Maximum Allowance (SMA) increased by 100 percent; and (2) in the settlement process described in Attachment 4.19-B at page 41b, settle these payments to allowable cost, and thereby waive the limitations of usual and customary charge or SMA. These updates are implemented as follows: (1) Interim payments for non-NTP services provided to Medi-Cal beneficiaries are reimbursed up to the SMA for the current year increased by 100 percent. Interim payments for NTP services provided to Medi-Cal beneficiaries are reimbursed up to the USDR rate for the current year. This methodology supersedes the methodology described in paragraph E.1 on page 41 of Attachment 4.19-B, except for the methodology described in paragraphs E.1.a and E.1.b on pages 41 and 41a of Attachment 4.19-B. (2) The reimbursement methodology for county and non-county operated providers of non-NTP services is the provider's allowable costs of providing these services. This methodology supersedes the methodology described in paragraph B.1 on page 39 of Attachment 4.19-B.

For Specialty Mental Health Services provided on or after March 1, 2020, until the COVID-19 public health emergency ends, the State will: (1) provide interim reimbursement to county owned and operated providers based upon the established interim rates for the current year increased by 100 percent; and (2) in the settlement process described in paragraphs C and D of Attachment 4.19-B, at pages 24 through 25.6, settle interim payments to private organizational providers and to private and state owned and operated hospital-based outpatient providers to allowable cost. These updates are implemented as follows: (1) Interim payments for services delivered by county owned and operated providers are based upon interim rates, which are established by the State for those providers on an annual basis, increased by 100 percent. (2) Total reimbursable costs for private organizational providers are equal to the provider's reasonable and allowable costs for the reporting period. Total reimbursable costs for private and state owned and operated hospital-based outpatient providers are equal to the provider's allowable costs determined in the CMS 2552 hospital cost report and supplemental schedules. (3) The change in paragraph (1) above supersedes any conflicting portions of paragraphs C.1 and D.1 of Attachment 4.19-B, at pages 24 through 25.4. The change in paragraph (2) above supersedes any conflicting portions of paragraphs C and D of Attachment 4.19-B, at pages 24 through 25.6.

The clinical laboratory COVID-19 diagnostic testing procedure codes mentioned above will be exempt from the 10 percent payment reductions in Welfare and Institutions Code section 14105.192, as described in Attachment 4.19-B, page 3.3, paragraph 13 of the State Plan.
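The interim-payment rule for DMC non-NTP services above reduces to a lesser-of comparison. Below is a minimal sketch of that arithmetic in Python, using hypothetical dollar amounts; the function name and figures are illustrative only, and the actual SMA values are set per service code by the State and are not reproduced here:

```python
def interim_non_ntp_payment(billed: float, sma: float) -> float:
    """Interim reimbursement for a DMC non-NTP service during the
    emergency period: the lower of the county's billed amount or the
    Statewide Maximum Allowance (SMA) increased by 100 percent
    (i.e., doubled). Interim payments are later settled to allowable
    cost in the settlement process. Amounts here are hypothetical."""
    return min(billed, 2 * sma)

# Hypothetical example: a service with an SMA of $70, billed at $120.
# The ordinary cap would be $70; under the temporary rule the interim
# payment is $120, since $120 < 2 * $70 = $140.
assert interim_non_ntp_payment(120.0, 70.0) == 120.0
assert interim_non_ntp_payment(160.0, 70.0) == 140.0  # capped at 2 x SMA
```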
The Clinical laboratory COVID-19 diagnostic testing procedure codes mentioned above will be exempt from the 10 percent payment reductions in Welfare and Institutions Code section 14105.192, as described in Attachment 4.19-B, page 3.3, paragraph 13 of the State Plan.

Add Associate Clinical Social Worker (ACSW) and Associate Marriage and Family Therapist (AMFT) as billable provider types in addition to the provider types listed on pages 6B.1 and 6C of Attachment 4.19-B for FQHCs and RHCs. Doing so allows the services of ACSWs and AMFTs furnished within their scope of practice in accordance with California state law to be billable services in Federally Qualified Health Centers and Rural Health Clinics (RHCs). Licensed practitioners will supervise and assume the professional liability of services furnished by the unlicensed ACSW and AMFT practitioners. The ACSW/AMFT services in RHCs are included under 42 CFR 440.20(c): Other ambulatory services furnished by a rural health clinic. Allow the State to supersede the scope of service change requirements for MFTs on page 6W of Attachment 4.19-B.

Section F – Post-Eligibility Treatment of Income

1. ___ The state elects to modify the basic personal needs allowance for institutionalized individuals. The basic personal needs allowance is equal to one of the following amounts:
   a. ___ The individual’s total income
   b. ___ 300 percent of the SSI federal benefit rate
   c. ___ Other reasonable amount: ____________

2. ___ The state elects a new variance to the basic personal needs allowance. (Note: Election of this option is not dependent on a state electing the option described in F.1. above.) The state protects amounts exceeding the basic personal needs allowance for individuals who have the following greater personal needs: Please describe the group or groups of individuals with greater needs and the amount(s) protected for each group or groups.

Section G – Other Policies and Procedures Differing from Approved Medicaid State Plan / Additional Information

PRA Disclosure Statement

According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid OMB control number. The valid OMB control number for this information collection is 0938-1148 (Expires 03/31/2021). The time required to complete this information collection is estimated to average 1 to 2 hours per response, including the time to review instructions, search existing data resources, gather the data needed, and complete and review the information collection. Your response is required to receive a waiver under Section 1135 of the Social Security Act. All responses are public and will be made available on the CMS web site. If you have comments concerning the accuracy of the time estimate(s) or suggestions for improving this form, please write to: CMS, 7500 Security Boulevard, Attn: PRA Reports Clearance Officer, Mail Stop C4-26-05, Baltimore, Maryland 21244-1850.

***CMS Disclosure*** Please do not send applications, claims, payments, medical records or any documents containing sensitive information to the PRA Reports Clearance Office. Please note that any correspondence not pertaining to the information collection burden approved under the associated OMB control number listed on this form will not be reviewed, forwarded, or retained. If you have questions or concerns regarding where to submit your documents, please contact the Centers for Medicaid & CHIP Services at 410-786-3870.

TN: 20-0024    Approval Date: 5/13/2020
Supersedes TN: None    Effective Date: 3/1/2020
Lobeline Analogs with Enhanced Affinity and Selectivity for Plasmalemma and Vesicular Monoamine Transporters

Dennis K. Miller,1 Peter A. Crooks, Guangrong Zheng, Vladimir P. Grinevich,2 Seth D. Norholm,3 and Linda P. Dwoskin

College of Pharmacy, University of Kentucky, Lexington, Kentucky

Received March 9, 2004; accepted April 28, 2004

ABSTRACT

Lobeline attenuates the behavioral effects of psychostimulants in rodents and inhibits the function of nicotinic receptors (nAChRs), dopamine transporters (DATs), and vesicular monoamine transporters (VMAT2s). Monoamine transporters are considered valid targets for drug development for the treatment of methamphetamine abuse. In the current study, a series of lobeline analogs was evaluated for affinity and selectivity at these targets. None of the analogs was more potent than nicotine at the [3H]methyllycaconitine binding site (α7* nAChR subtype). Lobeline tosylate was equipotent with lobeline in inhibiting [3H]nicotine binding but 70-fold more potent in inhibiting nicotine-evoked 86Rb+ efflux, demonstrating antagonism of α4β2* nAChRs. Compared with lobeline, the defunctionalized analogs lobelane, mesotransdiene, and (−)-trans-transdiene showed dramatically reduced affinity at α4β2* nAChRs and a 15- to 100-fold higher affinity (K_i = 1.95, 0.58, and 0.26 μM, respectively) at DATs. Mesotransdiene and (−)-trans-transdiene competitively inhibited DAT function, whereas lobelane and lobeline acted noncompetitively. 10S/10R-MEPP [N-methyl-2R-(2R/2S-hydroxy-2-phenylethyl)6S-(2-phenylethyl)piperidine] and 10R-MESP [N-methyl-2R-(2R-hydroxy-2-phenylethyl)6S-(2-phenylethen-1-yl)piperidine] were 2 to 3 orders of magnitude more potent (K_i = 0.01 and 0.04 μM, respectively) than lobeline in inhibiting [3H]serotonin uptake; 10S/10R-MEPP showed a 600-fold selectivity for this transporter. Uptake results using hDATs and human serotonin transporters expressed in human embryonic kidney-293 cells were consistent with native transporter assays. Lobelane and ketoalkene were 5-fold more potent (K_i = 0.92 and 1.35 μM, respectively) than lobeline (K_i = 5.46 μM) in inhibiting [3H]methoxytetrabenazine binding to VMAT2 in vesicle preparations. Thus, structural modification (defunctionalization) of the lobeline molecule markedly decreases affinity for α4β2* and α7* nAChRs while increasing affinity for neurotransmitter transporters, affording analogs with enhanced selectivity for these transporters and providing new leads for the treatment of psychostimulant abuse.

Lobeline, an alkaloid from Indian tobacco, inhibits the behavioral and neurochemical effects of psychostimulant drugs of abuse. For example, lobeline attenuates d-amphetamine-, methamphetamine- and nicotine-induced hyperactivity (Green et al., 2001; Miller et al., 2001, 2002) and inhibits the discriminative stimulus effects of methamphetamine (Miller et al., 2001). Although lobeline is not self-administered, it decreases methamphetamine self-administration in rats, which is not surmounted by increasing methamphetamine unit doses (Harrod et al., 2001, 2003). These results suggest that lobeline lacks abuse liability while decreasing the stimulant and rewarding effects of methamphetamine via a noncompetitive mechanism of action. Psychostimulant-induced behavioral activation and reinforcement are at least partly mediated via interaction with neurotransmitter transporters that regulate synaptic dopamine (DA) concentrations (Wise and Bozarth, 1987; Koob, 1992).
ABBREVIATIONS: DA, dopamine; DAT, DA transporter; VMAT2, vesicular monoamine transporter; SERT, serotonin transporter; nAChR, nicotinic receptor; 5-HT, serotonin; NE, norepinephrine; NET, NE transporter; MLA, methyllycaconitine; RTI-55, (−)-2β-carbomethoxy-3β-(4-iodophenyl)tropane; MTBZ, methoxytetrabenazine; BSA, bovine serum albumin; PEI, polyethyleneimine; TTX, tetrodotoxin; 10R-MESP, N-methyl-2R-(2R-hydroxy-2-phenylethyl)6S-(2-phenylethen-1-yl)piperidine; 10S/10R-MEPP, N-methyl-2R-(2R/2S-hydroxy-2-phenylethyl)6S-(2-phenylethyl)piperidine; MTD, mesotransdiene; TTD, trans-transdiene; GBR-12909, 1-[2-[bis(4-fluorophenyl)methoxy]ethyl]-4-(3-phenylpropyl)piperazine; HEK, human embryonic kidney; SAR, structure-activity relationship.

Methamphetamine is a substrate for the DA transporter (DAT) (Sulzer et al., 1995; Johnson et al., 1998) and decreases the activity of the vesicular monoamine transporter (VMAT2) (Brown et al., 2000, 2001). Studies with VMAT2 knockout mice in which amphetamine-induced conditioned place preference is attenuated (Takahashi et al., 1997) support a role for VMAT2 in mediating the behavioral effects of stimulant drugs. Although effects on DAT, SERT, and/or VMAT2 may not be the only mechanisms responsible for the reinforcing properties of psychostimulants (Rocha et al., 1998; Sora et al., 1998), these neurotransmitter transporters are considered prime targets for developing pharmacotherapies to treat psychostimulant abuse.

Until recently, the pharmacological activity of lobeline was believed to primarily result from its high-affinity ($K_i = 4–20$ nM) interaction with nicotinic acetylcholine receptors (nAChRs) (Abood et al., 1988; Reavill et al., 1990; Bhat et al., 1991; Court et al., 1994). Lobeline inhibits nAChR subtypes mediating both nicotine-evoked $[^3\text{H}]$DA release and nicotine-evoked $^{86}\text{Rb}^+$ efflux (Miller et al., 2000); however, lobeline also interacts with VMAT2 and DAT (Dwoskin and Crooks, 2002). Lobeline potently inhibits $[^3\text{H}]$dihydrotetrabenazine binding to VMAT2 ($IC_{50} = 0.90$ $\mu$M) and inhibits $[^3\text{H}]$DA uptake ($IC_{50} = 0.88$ $\mu$M) into rat striatal vesicle preparations (Teng et al., 1997, 1998) and is furthermore $\sim 90$-fold less potent ($IC_{50} = 80$ $\mu$M) in inhibiting $[^3\text{H}]$DA uptake into rat striatal synaptosomes (Teng et al., 1997). In addition to inhibiting DAT and VMAT2 function, high concentrations of lobeline (10–50 $\mu$M) increase $[^3\text{H}]$serotonin (5-HT) release from rat hippocampal slices in a mepyramine-insensitive manner (Lendvai et al., 1996), suggesting that lobeline interacts with SERT. Thus, in addition to interacting with nAChRs, lobeline inhibits VMAT2 more potently than DAT or SERT, suggesting that VMAT2 may be a critical target for its pharmacological activity. Consistent with the observation that lobeline is not self-administered in rats (Harrod et al., 2003), lobeline does not evoke DA release but stimulates dihydroxyphenylacetic acid overflow (Teng et al., 1997), which likely results from alterations in presynaptic DA storage via an interaction of lobeline with VMAT2 (Dwoskin and Crooks, 2002). Furthermore, lobeline inhibits $d$-amphetamine- and methamphetamine-evoked DA release from superfused rat striatal slices (Miller et al., 2001; S. Krishnamurthy, G. Zheng, P. A. Crooks, and L. P. Dwoskin, manuscript submitted for publication). These results are consistent with the effect of lobeline in inhibiting methamphetamine self-administration (Harrod et al., 2001).
These preclinical data suggest that lobeline has potential as a pharmacotherapy for psychostimulant abuse. Structural modification of the lobeline molecule has afforded compounds with differing affinities for nAChRs (Flammia et al., 1999); however, these analogs have not been evaluated for their activity at neurotransmitter transporters. The present study evaluates a series of lobeline analogs for their activities at $\alpha_4\beta_2*$ nAChRs, $\alpha_7*$ nAChRs, DAT, SERT, NET, and VMAT2 with the aim of identifying analogs with high affinity and selectivity for these target sites. Throughout, the asterisk (*) following a subunit designation indicates that the exact subunit composition, stoichiometry, and arrangement of native nAChRs remain to be elucidated (Lukas et al., 1999). Such analogs may be useful candidates for probing specific targets for elucidating the underlying neurochemical mechanism(s) responsible for lobeline-induced inhibition of the behavioral effects of methamphetamine.

**Materials and Methods**

**Animals.** Male Sprague-Dawley rats (200–250 g upon arrival) were purchased from Harlan (Indianapolis, IN) and housed two per cage with ad libitum access to food and water in the Division of Laboratory Animal Resources at the College of Pharmacy at the University of Kentucky (Lexington, KY). Experimental protocols involving the animals were in accordance with the *NIH Guide for the Care and Use of Laboratory Animals* and were approved by the Institutional Animal Care and Use Committee at the University of Kentucky.

**Chemicals.** $[^3\text{H}]$DA (specific activity, 25.6 Ci/mmol); $(\pm)$-$[^3\text{H}]$methyllycaconitine (MLA; specific activity, 25.4 Ci/mmol); $S(-)$-$[^3\text{H}]$nicotine (specific activity, 80 Ci/mmol); $[^3\text{H}]$norepinephrine (NE; specific activity 27.5 Ci/mmol); $^{86}\text{RbCl}$ (specific activity, 55.2 mCi/mmol); $[^{125}\text{I}]$RTI-55 (specific activity, 2200 Ci/mmol); and $[^3\text{H}]$serotonin (5-HT; specific activity, 27.5 Ci/mmol) were purchased from PerkinElmer Life and Analytical Sciences (Boston, MA). $[^3\text{H}]$MTBZ (specific activity, 56.8 Ci/mmol) was a generous gift from Dr. Michael Kilbourn (Department of Radiology, University of Michigan Medical School, Ann Arbor, MI). Bovine serum albumin (BSA), catechol, dopamine, EDTA, EGTA, fluoxetine HCl, GBR-12909 HCl, HEPES, $S(-)$-nicotine ditartrate (nicotine), nomifensine maleate, pargyline HCl, polyethyleneimine (PEI), serotonin, tetrodotoxin (TTX), tris(hydroxymethyl)aminomethane hydrochloride (Trizma HCl), tris(hydroxymethyl)aminomethane base (Trizma), and tropolone were purchased from Sigma-Aldrich (St. Louis, MO). $\alpha$-D-Glucose, L-ascorbic acid, and potassium phosphate monobasic were purchased from Aldrich Chemical Co. (Milwaukee, WI), BDH Ltd. (Poole, UK), and Mallinckrodt (St. Louis, MO), respectively. Lobeline hemisulfate was purchased from MP Biomedicals (Irvine, CA). All other commercially obtained chemicals were purchased from Fisher Scientific Co. (Pittsburgh, PA). The lobeline analogs ketoalkene ($N$-methyl-2R-(2-oxo-2-phenylethyl)6S-(2-phenylethen-1-yl)piperidine), 10R-MESP, 10S/10R-MEPP, lobelanine, lobelanidine, mesotransdiene (MTD), (−)-trans-transdiene (TTD), lobelane, and lobeline tosylate (lobeline-8-O-tosylate) were synthesized by structural modification of the lobeline molecule (G. Zheng, L. P. Dwoskin, A. G. Deacue, and P. A. Crooks, manuscript submitted for publication) and are illustrated in Fig. 1.
The structures of the lobeline analogs were verified by $^1$H and $^{13}$C NMR spectroscopy, mass spectrometry, and, in some cases, X-ray crystallography.

**$[^3\text{H}]$Nicotine Binding Assay.** Striata from 2 to 4 rats were homogenized using a Tekmar polytron (Tekmar-Dohrmann, Mason, OH) in 10 volumes of ice-cold modified Krebs-HEPES buffer (20 mM HEPES, 118 mM NaCl, 4.8 mM KCl, 2.5 mM CaCl$_2$, and 1.2 mM MgSO$_4$, pH 7.5). Homogenates were incubated (5 min at 37°C) and centrifuged (29,000g for 20 min at 4°C). Resulting pellets were resuspended in 10 volumes of ice-cold MilliQ water (Millipore Corporation, Molsheim, France), incubated (5 min at 37°C), and centrifuged (29,000g for 20 min at 4°C). Resulting pellets were again resuspended in 10 volumes of ice-cold 10% Krebs-HEPES buffer and then incubated and centrifuged as described above. Final pellets were stored at $-70$°C in fresh 10% Krebs-HEPES buffer until use. Upon assay, pellets were resuspended in 10% Krebs-HEPES buffer, incubated, and centrifuged as described above. Final pellets were resuspended in ice-cold MilliQ water (2.0 ml) to provide $\sim 200$ $\mu$g of protein/100 $\mu$l of membrane suspension. Inhibition of specific $[^3\text{H}]$nicotine binding by lobeline and its analogs was assessed using a previously described method (Crooks et al., 1995). Briefly, assays were performed in triplicate in a final volume of 200 $\mu$l of Krebs-HEPES buffer containing 250 mM Tris (pH 7.5, 4°C). Reactions were initiated by the addition of 100 $\mu$l of membrane suspension to tubes containing 50 $\mu$l of Krebs-HEPES buffer or 1 of 9 concentrations (final concentration, 0.1 nM–1 mM) of nicotine, lobeline, or analog and 50 μl of [3H]nicotine (final concentration, 3 nM). Nonspecific binding was determined in triplicate in the presence of nicotine (10 μM). Following incubation (90 min at 4°C), reactions were terminated by the dilution of samples with ice-cold Krebs-HEPES buffer followed by immediate filtration through Whatman GF/B glass fiber filters (presoaked in 0.5% PEI) using a cell harvester (MP-43RS; Brandel Inc., Gaithersburg, MD). Filters were processed, and radioactivity was determined by liquid scintillation spectroscopy (B1600TR scintillation counter; PerkinElmer Life and Analytical Sciences).

**$[^3\text{H}]$MLA Binding Assay.** Whole rat brain (minus cortex, striatum, and cerebellum) was homogenized in 20 volumes of ice-cold hypotonic buffer (2 mM HEPES, 14.4 mM NaCl, 0.15 mM KCl, 0.2 mM CaCl₂, and 0.1 mM MgSO₄, pH 7.5). Homogenates were incubated at 37°C for 10 min and centrifuged (25,000g for 15 min at 4°C). Pellets were washed three times by resuspension in 20 volumes of buffer followed by centrifugation. Final pellets were resuspended in the incubation buffer to provide ~150 μg of protein/100 μl of membrane suspension. Binding assays were performed in duplicate in a final volume of 250 μl of incubation buffer containing 20 mM HEPES, 144 mM NaCl, 1.5 mM KCl, 2 mM CaCl₂, 1 mM MgSO₄, and 0.05% BSA, pH 7.5. Assays were initiated by the addition of 100 μl of membrane suspension to 150 μl of sample containing 2.5 nM [3H]MLA and 1 of 8 concentrations (final concentration, 30 nM–100 μM) of lobeline or analog and incubated for 2 h at room temperature. Nonspecific binding was determined in the presence of nicotine (1 mM).
Assays were terminated by dilution with ice-cold incubation buffer (3 ml) followed by immediate filtration through glass fiber filters (Schleicher and Schuell, Inc., Keene, NH) presoaked with 0.5% PEI using a cell harvester (MP-43RS; Brandel Inc.). Filters were processed, and radioactivity was determined as described above.

**$^{86}$Rb$^+$ Efflux Assay.** The ability of lobeline and its analogs to evoke 86Rb⁺ efflux was determined using a previously published method (Miller et al., 2000). nAChR-mediated 86Rb⁺ efflux from preloaded rodent brain synaptosomes has been used to characterize functional interactions of ligands with [3H]nicotine binding sites based on the findings that the response to nAChR agonists in the 86Rb⁺ efflux assay is highly correlated with the displacement of high-affinity [3H]nicotine binding (α4β2* nAChRs) and nAChR agonist-evoked 86Rb⁺ efflux is eliminated when brain from β2-subunit knockout mice is used (Marks et al., 1993, 1995, 1999; Sharples et al., 2000). In the current study, thalamus was homogenized and centrifuged (1000g for 10 min at 4°C). Supernatants were centrifuged (12,000g for 20 min at 4°C) to obtain synaptosomes. Synaptosomes were incubated for 30 min in 35 μl of uptake buffer (140 mM NaCl, 1.5 mM KCl, 2.0 mM CaCl₂, 1.0 mM MgSO₄, and 20 mM α-D-glucose, pH 7.5) containing 86Rb⁺ (4 μCi). 86Rb⁺ uptake was terminated by filtration onto glass fiber filters (6 mm, type A/E; Gelman Instrument Co., Ann Arbor, MI) under gentle vacuum (0.2 atm), followed by three washes with uptake buffer (0.5 ml each). Each filter with 86Rb⁺-loaded synaptosomes (~40 μg of protein/μl) was subsequently placed on a glass fiber filter (13 mm, type A/E), mounted on a polypropylene platform. Synaptosomes were superfused at a rate of 2.5 ml/min with 86Rb⁺ efflux assay buffer (125 mM NaCl, 5 mM CsCl, 1.5 mM KCl, 2 mM CaCl₂, 1 mM MgSO₄, 25 mM HEPES, 20 mM α-D-glucose, 0.1 μM TTX, and 1.0 g/l BSA, pH 7.5). TTX and CsCl were included in the buffer to block voltage-gated Na⁺ and K⁺ channels, respectively, and to reduce the rate of basal 86Rb⁺ efflux. Lobeline- and analog-induced 86Rb⁺ efflux (intrinsic activity) and lobeline- and analog-induced inhibition of nicotine-evoked 86Rb⁺ efflux were determined. For these assays, the concentration (1 μM) of nicotine was chosen based on previous observations that this was the lowest concentration producing maximal 86Rb⁺ efflux (~1.0% tissue content) (Miller et al., 2000). After 8 min of superfusion, basal samples were collected for 2 min. Synaptosomes were subsequently superfused for 3 min with 1 of 5 concentrations (1 nM–100 μM) of lobeline or analog. Nicotine was then added to the buffer containing lobeline or analog, and superfusion continued for 3 min. Each aliquot part of thalamic synaptosomes was exposed to only one concentration of lobeline or analog. In each experiment, one synaptosomal aliquot part was also exposed to nicotine in the absence of lobeline or analog, and one synaptosomal aliquot part was superfused in the absence of lobeline, analog, and nicotine to determine basal 86Rb⁺ efflux during the entire course of the experiment. Samples were analyzed using liquid scintillation spectroscopy as described above.

**Inhibition of [3H]DA and [3H]5-HT Uptake into Rat Striatal and Hippocampal Synaptosomes, Respectively.** Lobeline- and analog-induced inhibition of [3H]DA and [3H]5-HT uptake into rat striatal and hippocampal synaptosomes, respectively, was assessed using modifications of a previously described method (Teng et al., 1997).
Analog-induced inhibition was compared with that induced by the selective DAT and SERT inhibitors GBR-12909 and fluoxetine, respectively (Fuller et al., 1991; Carroll et al., 2002). Brain regions were homogenized in 20 ml of ice-cold 0.32 M sucrose solution containing 5 mM NaHCO₃ (pH 7.4) with 12 up-and-down strokes of a Teflon pestle homogenizer (clearance ≈ 0.003). Homogenates and supernatants were centrifuged at 2,000g for 10 min at 4°C and 20,000g for 15 min at 4°C, respectively. Pellets were resuspended in 1.5 ml of Krebs buffer (125 mM NaCl, 5 mM KCl, 1.5 mM MgSO₄, 1.25 mM CaCl₂, 1.5 mM KH₂PO₄, 10 mM α-D-glucose, 25 mM HEPES, 0.1 mM EDTA, 0.1 mM pargyline, and 0.1 mM ascorbic acid saturated with 95% O₂/5% CO₂, pH 7.4). Final protein concentrations were 400 μg/ml and were determined by protein-dye binding (Bradford, 1976). Assays were performed in duplicate in a total volume of 500 μl. Aliquot parts of synaptosomal suspension (50 μl) were added to tubes containing 350 μl of Krebs buffer and 50 μl of buffer containing final concentrations of 1 nM to 1 mM lobeline, lobeline analog, GBR-12909, fluoxetine, or 50 μl of buffer without drug. Tubes were incubated at 34°C for 10 min before the addition of 50 μl of [3H]DA (final concentration, 10 nM) or 50 μl of [3H]5-HT (final concentration, 10 nM). Accumulation proceeded for 10 min at 34°C. Reactions were terminated by the addition of 3 ml of ice-cold Krebs buffer. Nonspecific [3H]DA and [3H]5-HT uptake was determined in the presence of nomifensine (10 μM) and fluoxetine (10 μM), respectively. Samples were rapidly filtered through a Whatman GF/B filter using a cell harvester (MP-43RS; Brandel Inc.), and filters were subsequently washed three times with 4 ml of ice-cold Krebs buffer containing catechol (1 mM). Radioactivity retained by the filters was determined by liquid scintillation spectroscopy (B1600 TR scintillation counter; PerkinElmer Life and Analytical Sciences).

**Kinetic Analysis of [3H]DA Uptake into Rat Striatal Synaptosomes.** To determine whether the inhibition of [3H]DA uptake was via a competitive or noncompetitive mechanism, kinetic analyses were performed for lobeline (60 μM), MTD (3 μM), (−)-TTD (1 μM), and lobelane (1 μM). The concentrations used in the kinetic analyses were chosen to approximate the IC$_{50}$ values from the above concentration-response experiments for analog-induced inhibition of [3H]DA uptake. Experiments were conducted in the absence and presence of lobeline or analog. The absence of lobeline or analog (buffer alone) represented the control condition. Nonspecific uptake was determined in the presence of nomifensine (10 μM). In the absence and presence of nomifensine, aliquot parts of rat striatal synaptosomal suspension (50 μl) were added to tubes containing 350 μl of Krebs buffer and 50 μl of lobeline, analog, or buffer alone. Tubes were incubated for 5 min at 34°C. Uptake was initiated by the addition of 50 μl of [3H]DA (final concentration, 1 nM–5 μM), isotopically diluted with unlabeled DA (0.3–83 μM) to achieve varying DA concentrations and a consistent amount of radioactivity (i.e., 500,000 dpm per tube). Accumulation proceeded for 10 min at 34°C. Reactions were terminated by the addition of 3 ml of ice-cold Krebs buffer. Samples were filtered, and filters were washed three times with 4 ml of ice-cold Krebs buffer containing catechol (1 mM). Radioactivity retained by the filters was determined by liquid scintillation spectroscopy as described above.
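For readers less familiar with this kind of kinetic analysis, the logic is: a competitive inhibitor raises the apparent $K_m$ with unchanged $V_{max}$, whereas a noncompetitive inhibitor lowers $V_{max}$ with unchanged $K_m$. A minimal sketch of such a fit (the study used commercial curve-fitting software; the data and helper names below are hypothetical, with parameter values merely echoing Table 2):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Specific uptake velocity V = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def fit_uptake(substrate_uM, velocity):
    """Return (Vmax, Km) fitted to specific [3H]DA uptake data."""
    (vmax, km), _ = curve_fit(michaelis_menten, substrate_uM, velocity,
                              p0=(30.0, 0.3))  # rough initial guesses
    return vmax, km

# Hypothetical curves over roughly the 1 nM - 5 uM DA range used here.
s = np.array([0.001, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0])       # uM
noncomp = michaelis_menten(s, 19.0, 0.27)  # Vmax reduced vs. control: noncompetitive
comp    = michaelis_menten(s, 35.7, 0.51)  # Km increased vs. control: competitive

for label, v in [("noncompetitive", noncomp), ("competitive", comp)]:
    vmax, km = fit_uptake(s, v)
    print(label, round(vmax, 1), round(km, 2))
```

Comparing the fitted $K_m$ and $V_{max}$ against the drug-free control (as done with paired t tests in this study) then classifies the mechanism.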
**Inhibition of [125I]RTI-55 Binding to hDAT, hSERT, and hNET Stably Expressed in HEK-293 Cells.** Lobeline-, lobelane- and MTD-induced inhibition of [125I]RTI-55 binding to hDAT, hSERT, and hNET was assessed using a previously described method (Eshleman et al., 1999). HEK-hDAT and HEK-hSERT cells were incubated in Dulbecco’s modified Eagle’s medium supplemented with 5% fetal bovine serum, 5% calf bovine serum, 0.05 U of penicillin/streptomycin, and puromycin (2 μg/ml). HEK-hNET cells were incubated in Dulbecco’s modified Eagle’s medium supplemented with 10% fetal bovine serum, 0.05 U of penicillin/streptomycin, and geneticin (300 μg/ml). HEK-293 cells stably expressing hDAT, hSERT, or hNET were grown to 80% confluence on 150-mm diameter tissue culture dishes in a humidified 10% CO$_2$ environment at 37°C. The medium was poured off the plates, and the plates were washed with 10 ml of calcium- and magnesium-free phosphate-buffered saline. Lysis buffer (2 mM HEPES with 1 mM EDTA) was added, and the plates were placed on ice. After 10 min, cells were scraped from the plates and centrifuged (30,000g for 20 min). Pellets were resuspended in 12 to 32 ml of 0.32 M sucrose using a Polytron homogenizer (setting 7 for 10 s). Resuspension volumes depended on the density of binding sites within a cell line, providing binding of <10% of the total radioactivity. Analog-induced inhibition of [125I]RTI-55 binding was compared with that induced by cocaine as the standard. Nonspecific binding was determined in the presence of mazindol (5 μM) for hDAT and hNET assays or imipramine (5 μM) for the hSERT assays. Competition assays were conducted with duplicate determinations for each point. Aliquot parts of membranes (50 μl, ~10–15 μg of protein) were added to tubes containing 15 μl of inhibitor (lobeline, lobelane, MTD, or cocaine; 20 nM–10 μM) or Krebs-HEPES assay buffer (122 mM NaCl, 2.5 mM CaCl$_2$, 1.2 mM MgSO$_4$, 10 μM pargyline, 100 μM tropolone, 0.2% glucose, and 0.02% ascorbic acid buffered with 25 mM HEPES, pH 7.4). Tubes containing membranes and inhibitor were preincubated at room temperature for 10 min before the addition of 25 μl of [125I]RTI-55 (final concentration, 40–80 pM) and sufficient Krebs-HEPES buffer to obtain a final volume of 250 μl. Tubes were incubated at 25°C for 90 min. Binding was terminated by filtration over GF/C filters using a 96-well cell harvester. Filters were washed for 6 s with ice-cold saline. Scintillation fluid (50 μl) was added to each tube, and radioactivity remaining on the filter was determined using a Wallac MicroBeta or Betaplate scintillation counter (EG & G Wallac, Turku, Finland).

**Inhibition of [3H]DA, [3H]5-HT, and [3H]NE Uptake by hDAT, hSERT, and hNET, Respectively, in Stably Expressed HEK-293 Cells.** Lobeline-, lobelane- and MTD-induced inhibition of [3H]DA, [3H]5-HT, and [3H]NE uptake by hDAT, hSERT, or hNET, respectively, in HEK-293 cells stably expressing these transporters was assessed using a previously described method (Eshleman et al., 1999). HEK-hDAT, HEK-hSERT, and HEK-hNET were grown on 150-mm diameter culture dishes as described above. The medium was removed, and cells were washed twice with phosphate-buffered saline at room temperature. Following the addition of 3 ml of Krebs-HEPES buffer, plates were placed in a 25°C water bath for 5 min. The cells were gently scraped, and clusters were separated by trituration using a pipette for 5 to 10 aspirations and ejections. Cells from multiple plates were combined for use in assays.
Analog-induced inhibition was compared with that induced by cocaine as the standard. Nonspecific uptake was determined in the presence of mazindol (5 μM) for hDAT and hNET assays or imipramine (5 μM) for hSERT assays. Aliquot parts of cell preparation (50 μl) were added to 1-ml vials containing 350 μl of Krebs-HEPES buffer, 50 μl of inhibitor (lobeline, lobelane, MTD, or cocaine; 20 nM–10 μM), and 50 μl of either mazindol or imipramine in a final assay volume of 500 μl to determine nonspecific uptake. Tubes were incubated at 25°C for 10 min before the addition of 50 μl of [3H]DA, [3H]5-HT, or [3H]NE (final concentration, 20 nM). Accumulation of [3H]neurotransmitter proceeded for 10 min. Reactions were terminated by filtration through Whatman GF/C filters presoaked in 0.05% polyethylenimine. Scintillation cocktail was added, and radioactivity remaining on the filter was determined as described above for the binding assay.

**Inhibition of [3H]MTBZ Binding to Vesicles Prepared from Rat Whole Brain.** Lobeline- and analog-induced inhibition of [3H]MTBZ binding was determined using modifications of a previously described method for [3H]dihydrotetrabenazine binding (Teng et al., 1998). Nonspecific binding was determined in the presence of tetrabenazine (20 μM). Rat whole brain (excluding cerebellum) was homogenized in 20 ml of ice-cold 0.32 M sucrose solution with seven up-and-down strokes of a Teflon pestle homogenizer (clearance = 0.003). Homogenates and supernatants were centrifuged at 1,000g for 12 min at 4°C and 22,000g for 10 min at 4°C, respectively. Resulting pellets were incubated in 18 ml of cold water for 5 min, and 2 ml of HEPES (25 mM) and potassium-tartrate (100 mM) solution was subsequently added. Samples were centrifuged (20,000g for 20 min at 4°C), and MgSO$_4$ (1 mM) solution was then added to the supernatants. Solutions were centrifuged (100,000g for 45 min at 4°C) and resuspended in cold assay buffer (25 mM HEPES, 100 mM potassium-tartrate, 5 mM MgSO$_4$, 0.1 mM EDTA, and 0.05 mM EGTA, pH 7.5). The final protein concentration was 15 μg of protein/100 μl (Bradford, 1976). Assays were performed in duplicate in 96-well plates. Aliquot parts of vesicular suspension (100 μl) were added to wells containing 50 μl of [3H]MTBZ (final concentration, 3 nM), 50 μl of lobeline or analog, and 50 μl of buffer. Reactions were terminated by filtration (Filtermate harvester; PerkinElmer Life and Analytical Sciences) onto Unifilter-96 GF/B filter plates (presoaked in 0.5% polyethylenimine). Filters were subsequently washed five times with 350 μl of ice-cold buffer (25 mM HEPES, 100 mM K$_2$-tartrate, 5 mM MgSO$_4$, and 10 mM NaCl, pH 7.5). Filter plates were dried and bottom-sealed, and each well was filled with 40 μl of scintillation cocktail (MicroScint 20; PerkinElmer Life and Analytical Sciences). Radioactivity in filters was determined by liquid scintillation spectroscopy (TopCount NXT scintillation counter; PerkinElmer Life and Analytical Sciences).

**Data Analysis.** For lobeline- and analog-induced inhibition of $[^3\text{H}]$nicotine and $[^3\text{H}]$MLA binding, specific binding was determined by subtracting nonspecific binding from total binding. Concentrations of inhibitor that produced 50% inhibition ($\text{IC}_{50}$ values) and 95% confidence intervals were determined from the concentration-effect curves via an iterative curve-fitting program (Prism 3.0; GraphPad Software Inc., San Diego, CA). Inhibition constants ($K_i$ values) were determined using the Cheng-Prusoff equation (Cheng and Prusoff, 1973).
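A minimal sketch of this two-step analysis (a sigmoidal fit for the IC50, then the Cheng-Prusoff correction $K_i = \mathrm{IC}_{50}/(1 + [L]/K_d)$), assuming a one-site competitive model. The radioligand concentration $[L]$ and its dissociation constant $K_d$ are assay-specific inputs; the $K_d$ used below is an assumed value, not one reported in this paper, and the function names are illustrative rather than the commercial software actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_conc, ic50_log, hill):
    """Percent-of-control specific binding vs. log10 inhibitor concentration."""
    return 100.0 / (1.0 + 10.0 ** (hill * (log_conc - ic50_log)))

def ic50_from_curve(conc_M, pct_control):
    """Fit a descending sigmoid and return (IC50 in M, Hill coefficient)."""
    (ic50_log, hill), _ = curve_fit(sigmoid, np.log10(conc_M), pct_control,
                                    p0=(-6.0, 1.0))
    return 10.0 ** ic50_log, hill

def cheng_prusoff_ki(ic50, radioligand_conc, kd):
    """Ki = IC50 / (1 + [L]/Kd) for competitive binding (Cheng and Prusoff, 1973)."""
    return ic50 / (1.0 + radioligand_conc / kd)

# Hypothetical competition data; [3H]nicotine was used at 3 nM in this assay,
# and Kd = 2 nM here is an assumption for illustration only.
conc = np.array([1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
pct = sigmoid(np.log10(conc), np.log10(2.5e-8), 1.0)
ic50, hill = ic50_from_curve(conc, pct)
print(cheng_prusoff_ki(ic50, 3e-9, 2e-9))  # Ki in M, here 1e-08
```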
For the $^{86}\text{Rb}^+$ efflux assay, the basal rate of efflux was determined by fitting an exponential decay curve to the data points preceding and following superfusion with lobeline or analog and nicotine (SigmaPlot version 8; Systat Software, Inc., Point Richmond, CA). The lobeline-, analog- or nicotine-evoked increase in $^{86}\text{Rb}^+$ efflux was calculated as the fractional increase above baseline. Increases were summed to obtain total $^{86}\text{Rb}^+$ efflux during the period of superfusion with lobeline, analog, and/or nicotine and normalized to $^{86}\text{Rb}^+$ content in the corresponding synaptosomal sample to reduce variability within and between experiments. To determine intrinsic activity of lobeline or analog, total $^{86}\text{Rb}^+$ efflux during the 3-min period of superfusion in the absence of nicotine was analyzed by one-way repeated measures analysis of variance with lobeline or analog concentration as a within-subject factor (SPSS version 9.0; SPSS Inc., Chicago, IL). To assess the lobeline- and analog-induced inhibition of nicotine-evoked $^{86}\text{Rb}^+$ efflux, total efflux during the 3-min period of superfusion in the presence of nicotine and lobeline or analog was analyzed by one-way repeated measures analysis of variance with lobeline or analog concentration as a within-subject factor. Concentrations of lobeline or analog that exhibited intrinsic activity were not included in the analysis to determine inhibition of the effect of nicotine. Additionally, $\text{IC}_{50}$ values were determined by nonlinear regression fit of the mean data to sigmoidal concentration-response curves (Prism 3.03; GraphPad Software Inc.). To generate concentration-response curves for the inhibition of $[^3\text{H}]$neurotransmitter uptake, specific uptake was determined by subtracting nonspecific uptake from total uptake. Similarly, for the competition binding curves for $[^3\text{H}]$MTBZ and $[^{125}\text{I}]$RTI-55, specific binding was determined by subtracting nonspecific from total binding. $\text{IC}_{50}$ values were determined from the curves by an iterative curve-fitting program (Prism 3.03; GraphPad Software Inc.), and $K_i$ values were calculated using the Cheng-Prusoff equation (Cheng and Prusoff, 1973). For $[^{125}\text{I}]$RTI-55 binding, Hill coefficients were determined. For analysis of $[^3\text{H}]$DA uptake kinetics, $K_m$ and $V_{\text{max}}$ values were determined from concentration-effect curves for specific $[^3\text{H}]$DA uptake (Taylor and Insel, 1990; Kenakin, 1997). Paired two-tailed $t$ tests were performed on the log $K_m$ and $V_{\text{max}}$ values to determine differences ($P < 0.05$) between the absence (control condition) and presence of lobeline, MTD, (−)-TTD, or lobelane.
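The efflux baseline-and-normalization scheme described above reduces to a few array operations. A sketch under the stated assumptions (counts-per-sample data and known tissue content; synthetic numbers and illustrative names only, and the paper's exact summation details may differ):

```python
import numpy as np

def evoked_efflux_pct(t_min, cpm, window, tissue_cpm):
    """Sum of 86Rb+ efflux above an exponential-decay baseline during the
    drug/nicotine superfusion window, as a percentage of tissue content."""
    inside = (t_min >= window[0]) & (t_min < window[1])
    # Log-linear (exponential decay) baseline fitted to the samples
    # preceding and following the superfusion window.
    slope, intercept = np.polyfit(t_min[~inside], np.log(cpm[~inside]), 1)
    baseline = np.exp(intercept + slope * t_min)
    above = np.clip(cpm[inside] - baseline[inside], 0.0, None)
    return 100.0 * above.sum() / tissue_cpm

# Synthetic example: decaying basal efflux with an evoked bump at 10-13 min.
t = np.arange(8.0, 16.0, 0.5)
cpm = 500.0 * np.exp(-0.05 * t)
cpm[(t >= 10) & (t < 13)] += 40.0
print(evoked_efflux_pct(t, cpm, (10.0, 13.0), 60000.0))  # ~0.4 (% of content)
```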
**Results**

**Inhibition of $[^3\text{H}]$Nicotine Binding.** Figure 2 illustrates the competition curves for lobeline and its analogs to inhibit $[^3\text{H}]$nicotine binding to rat striatal membranes and provides the $K_i$ values derived from these competition curves. Lobeline completely inhibited $[^3\text{H}]$nicotine binding to rat striatal membranes, and a $K_i$ value of 0.016 μM was obtained. Lobeline was ~3-fold less potent than nicotine in this assay. With the notable exception of MTD, which did not inhibit $[^3\text{H}]$nicotine binding, the remaining lobeline analogs inhibited $[^3\text{H}]$nicotine binding with wide variation in affinity. The tosyl sulfonic acid ester of lobeline, lobeline tosylate, completely inhibited $[^3\text{H}]$nicotine binding and was the most potent ($K_i = 0.011 \mu\text{M}$) of the lobeline analogs, being equipotent with lobeline. The dehydrated analog ketoalkene and its fully reduced analog 10S/10R-MEPP also completely inhibited binding, but these analogs were ~45-fold less potent than lobeline in this assay. The dihydroxy and monohydroxy analogs lobelanidine and 10R-MESP, respectively, completely inhibited $[^3\text{H}]$nicotine binding, and these analogs were more than 2 orders of magnitude less potent than lobeline. The defunctionalized analogs (−)-TTD and lobelane and the diketo analog lobelanine provided ~60% maximal inhibition of $[^3\text{H}]$nicotine binding at the concentrations examined, and these analogs were more than 3 orders of magnitude less potent than lobeline. The defunctionalized analog MTD did not inhibit $[^3\text{H}]$nicotine binding, demonstrating a lack of interaction of this analog with α4β2* nAChRs.

**Inhibition of Nicotine-Evoked $^{86}\text{Rb}^+$ Efflux.** $^{86}\text{Rb}^+$ efflux from preloaded rat thalamic synaptosomes has been used as a functional assay for α4β2* nAChRs to determine whether compounds act as agonists or antagonists at this site (Marks et al., 1995; Miller et al., 2000). Lobeline has been previously shown to act as a nAChR antagonist in this assay (Miller et al., 2000). Functional effects of the most potent analogs (lobeline tosylate and ketoalkene), as well as the effect of an analog (MTD) that had no affinity for the $[^3\text{H}]$nicotine binding site, were assessed in the $^{86}\text{Rb}^+$ efflux assay. Neither lobeline nor any of the above analogs evoked $^{86}\text{Rb}^+$ efflux (Table 1), demonstrating no agonist activity at α4β2* nAChRs. As expected, nicotine (1 μM) evoked an increase in $^{86}\text{Rb}^+$ efflux of ~1.0% of total $^{86}\text{Rb}^+$ tissue content (Fig. 3). Lobeline inhibited ($\text{IC}_{50} = 0.73 \mu\text{M}$) nicotine-evoked $^{86}\text{Rb}^+$ efflux ($F_{6,24} = 15.01, P < 0.001$), consistent with previous results (Miller et al., 2000). Post hoc analysis revealed that 1 and 10 μM lobeline decreased nicotine-evoked $^{86}\text{Rb}^+$ efflux compared with control (nicotine alone). Lobeline tosylate, which was equipotent to lobeline in the $[^3\text{H}]$nicotine binding assay (Fig. 2), was nearly 70-fold more potent ($\text{IC}_{50} = 0.011 \mu\text{M}$) than lobeline in inhibiting nicotine-evoked $^{86}\text{Rb}^+$ efflux ($F_{5,20} = 10.76, P < 0.01$). Post hoc analysis revealed that lobeline tosylate (10 nM–10 μM) inhibited the effect of nicotine. Ketoalkene also inhibited ($\text{IC}_{50} = 0.30 \mu\text{M}$) nicotine-evoked $^{86}\text{Rb}^+$ efflux ($F_{5,20} = 4.48, P < 0.05$) and had a similar potency to lobeline in this assay. Post hoc analysis revealed that 1 and 10 μM ketoalkene significantly inhibited the effect of nicotine. In contrast, MTD, which did not interact with the $[^3\text{H}]$nicotine binding site (Fig. 2), also did not inhibit nicotine-evoked $^{86}$Rb$^+$ efflux (Fig. 3).

Fig. 2. Lobeline and lobeline analogs inhibit $[^3\text{H}]$nicotine binding to rat striatal membranes. Nicotine was used as a standard for comparison. Nonspecific binding was determined in the presence of nicotine (10 μM). $K_i$ values for lobeline and its analogs are provided in brackets. Data are the mean (±S.E.M.) specific binding presented as a percentage of the control condition (mean ± S.E.M., 51.4 ± 2.4 fmol/mg; $n = 4$ rats per compound).
TABLE 1
Intrinsic activity of lobeline, lobeline tosylate, ketoalkene, and MTD on $^{86}$Rb$^+$ efflux from preloaded rat thalamic synaptosomes
Synaptosomes were superfused for a 3-min period in the absence or presence of lobeline or lobeline analog (at the column concentrations) before the addition of nicotine to buffer. Data from the period of superfusion following addition of nicotine to buffer are presented in Fig. 3.

| Compound | Control | 1 nM | 10 nM | 100 nM | 1 μM | 10 μM |
|------------------|---------|--------|--------|--------|--------|--------|
| Lobeline | 0.13 ± 0.04$^a$ | −0.16 ± 0.22$^b$ | 0.03 ± 0.07 | −0.02 ± 0.08 | 0.44 ± 0.20 | 0.18 ± 0.08 |
| Lobeline tosylate | 0.11 ± 0.06 | 0.46 ± 0.27 | 0.14 ± 0.13 | 0.30 ± 0.32 | −0.06 ± 0.06 | 0.00 ± 0.00 |
| Ketoalkene | 0.17 ± 0.14 | 0.36 ± 0.07 | 0.16 ± 0.13 | 0.16 ± 0.10 | 0.16 ± 0.12 | 0.01 ± 0.01 |
| MTD | −0.08 ± 0.10 | 0.03 ± 0.02 | 0.21 ± 0.03 | 0.30 ± 0.10 | 0.31 ± 0.15 | 0.27 ± 0.11 |

$^a$ Data are mean (±S.E.M.) percentages of $^{86}$Rb$^+$ efflux tissue content.
$^b$ Negative values indicate that $^{86}$Rb$^+$ efflux was below that obtained at baseline; $^{86}$Rb$^+$ efflux induced by a compound was calculated as the fractional change from baseline.

Fig. 3. Lobeline, ketoalkene, and lobeline tosylate, but not MTD, inhibit nicotine (1 μM)-evoked $^{86}$Rb$^+$ efflux from superfused rat thalamic synaptosomes. Thalamic synaptosomes were superfused with buffer containing lobeline or lobeline analog for 3 min. Nicotine (1 μM) was subsequently added to the buffer, and superfusion continued for an additional 3 min. Data are presented as the mean (±S.E.M.) of the percentage of $^{86}$Rb$^+$ tissue content during the latter 3-min period of superfusion ($n = 5–7$ rats per experiment). *, $P < 0.05$ different from control condition.

**Inhibition of $[^3\text{H}]$MLA Binding.** Although completely inhibiting $[^3\text{H}]$MLA binding to membranes prepared from whole rat brain, lobeline exhibited low affinity ($K_i = 11.6 \mu M$) for α7* nAChRs (Fig. 4) and was ~20-fold less potent than nicotine in this assay. The 10-hydroxy analogs 10$R$-MESP and 10$S$/10$R$-MEPP were ~4- to 8-fold more potent than lobeline in this assay. Lobelanidine was 3-fold more potent than lobeline. Lobeline tosylate, ketoalkene, lobelanine, and lobelane had similar affinity to lobeline at the $[^3\text{H}]$MLA binding site. MTD and (−)-TTD did not inhibit $[^3\text{H}]$MLA binding at the concentrations examined, demonstrating no affinity for α7* nAChRs. Thus, defunctionalization of lobeline decreases affinity for α7* nAChRs.

**Inhibition of $[^3\text{H}]$DA Uptake.** Lobeline and each of the lobeline analogs completely inhibited specific $[^3\text{H}]$DA uptake into rat striatal synaptosomes, but with varying affinity (Fig. 5). GBR-12909 inhibited $[^3\text{H}]$DA uptake ($K_i = 18 \text{nM}$), consistent with previously reported results (Carroll et al., 2002). Also in agreement with previous reports (Teng et al., 1997), lobeline inhibited ($K_i = 29.4 \mu M$) $[^3\text{H}]$DA uptake but had low affinity for this site.
The highly functionalized analogs lobelanidine (a dihydroxy analog), lobeline tosylate (the O-8-tosyl sulfonic acid ester), and lobelanine (a diketo analog) were equipotent ($K_i = 16–33 \mu M$) with lobeline in inhibiting $[^3\text{H}]$DA uptake. Ketoalkene, lobelane, 10$S$/10$R$-MEPP, and 10$R$-MESP were 5- to 34-fold more potent than lobeline. Although an order of magnitude less potent than GBR-12909, the defunctionalized and unsaturated C-6 epimers MTD and (−)-TTD were the most potent analogs in inhibiting $[^3\text{H}]$DA uptake, being 50- to 100-fold more potent in inhibiting DAT compared with lobeline and exhibiting a surprising lack of stereoselectivity. Thus, the removal of the functionalities from the C-2 and C-6 side chains of lobeline and the introduction of unsaturation [MTD and (−)-TTD] afforded the most potent analogs in the $[^3\text{H}]$DA uptake assay. **Kinetic Analysis of the Inhibition of $[^3\text{H}]$DA Uptake.** To determine whether lobeline and selected lobeline analogs inhibited $[^3\text{H}]$DA uptake competitively or noncompetitively, kinetic analyses were performed, and the results are illustrated in Fig. 6 and Table 2. Concentrations of lobeline (60 μM), MTD (3 μM), (−)-TTD (1 μM), and lobelane (1 μM) were chosen based on their IC$_{50}$ values obtained from the inhibition curves illustrated in Fig. 5. Lobeline decreased the $V_{\text{max}}$ compared with control ($t_5 = 3.16$, $P < 0.05$) without altering the $K_m$ ($t_5 = 0.39$, $P = 0.72$; Fig. 6A, Table 2), indicating that lobeline noncompetitively inhibits $[^3\text{H}]$DA uptake into striatal synaptosomes. Similarly, lobelane inhibited $[^3\text{H}]$DA uptake in a noncompetitive manner, decreasing the $V_{\text{max}}$ ($t_5 = 2.57$, $P = 0.05$) without altering the $K_m$ ($t_5 = 1.38$, $P = 0.23$; Fig. 6B, Table 2). In contrast to lobeline and lobelane, the defunctionalized unsaturated stereoisomers MTD and (−)-TTD increased the $K_m$ compared with control ($t_5 = 4.49$, $P < 0.05$ and $t_6 = 6.65$, $P < 0.001$, respectively; Fig. 6, C and D, Table 2) without altering the $V_{\text{max}}$ ($t_5 = 0.28$, $P = 0.79$ and $t_6 = 0.33$, $P = 0.76$, respectively), indicating competitive inhibition of DAT. **Inhibition of $[^3\text{H}]$5-HT Uptake.** Lobeline and its analogs completely inhibited specific $[^3\text{H}]$5-HT uptake into rat hippocampal synaptosomes, but with varying affinity (Fig. 7). Consistent with previous reports (Fuller et al., 1991), fluoxetine inhibited specific $[^3\text{H}]$5-HT uptake with a $K_i$ of 41 nM. Lobeline and lobelanidine were the least potent analogs, with $K_i$ values of ~25 μM. Lobelanine, lobelane, lobeline tosylate, and ketoalkene were 8- to 16-fold more potent than lobeline in inhibiting $[^3\text{H}]$5-HT uptake. (−)-TTD was 24-fold more potent than its C-6 epimer MTD in inhibiting $[^3\text{H}]$5-HT uptake, and both epimers were more potent than lobeline in this assay. Both C-10 monohydroxy analogs 10$S$/10$R$-MEPP and 10$R$-MESP had high affinity for SERT ($K_i = 10$ and 44 nM, respectively), constituting the most potent analogs in the series. Furthermore, 10S/10R-MEPP and 10R-MESP were equipotent with fluoxetine. Importantly, these monohydroxy analogs were ~2 to 3 orders of magnitude more potent than lobeline. 
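As a point of reference for the fold-selectivity figures quoted throughout, selectivity is simply the ratio of two $K_i$ values. For 10S/10R-MEPP at SERT ($K_i = 0.010\ \mu$M), the ~660-fold SERT-over-DAT selectivity reported in the Discussion implies, by the numbers given in this study,

$$\frac{K_i(\mathrm{DAT})}{K_i(\mathrm{SERT})} \approx 660 \quad\Longrightarrow\quad K_i(\mathrm{DAT}) \approx 660 \times 0.010\ \mu\mathrm{M} \approx 6.6\ \mu\mathrm{M}.$$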
**Inhibition of [125I]RTI-55 Binding and [3H]DA, [3H]5-HT, and [3H]NE Uptake by hDAT, hSERT, and hNET, Respectively, in Stably Expressed HEK-293 Cells.** Similar to what has been previously shown for a number of uptake inhibitors (Eshleman et al., 1999), the potency of lobeline and its analogs in inhibiting hDAT and hSERT function ([3H]DA and [3H]5-HT uptake, respectively) was correlated with potency in inhibiting [125I]RTI-55 binding in the cell lines expressing the respective transporters. The relationship between neurotransmitter uptake and binding was less apparent for hNET in both the previous and current studies. In binding and uptake assays, lobeline generally exhibited low affinity for hDAT, hSERT, and hNET compared with cocaine, and, similar to cocaine, lobeline did not exhibit selectivity at these transporter sites (Table 3). In both [125I]RTI-55 binding and [3H]DA uptake assays, the defunctionalized saturated analog lobelane ($K_i = 97$ and $87 \text{ nM}$, respectively) was ~50-fold more potent than lobeline in inhibiting hDAT. In contrast to lobeline, lobelane demonstrated high nanomolar affinity for hSERT. Whereas lobelane was equipotent to lobeline in the [125I]RTI-55 binding assay probing hNET, lobelane was 33-fold more potent than lobeline in inhibiting [3H]NE uptake via hNET expressed in HEK-293 cells. Furthermore, lobelane showed only a 6- to 10-fold selectivity for hDAT over hSERT. However, lobelane was 18-fold more selective for hDAT compared with hNET in the binding assay but demonstrated no selectivity between hDAT and hNET in the uptake assay. The defunctionalized unsaturated analog MTD was at least 100-fold more potent than lobeline at hDAT and hSERT in both [125I]RTI-55 binding and uptake assays. With respect to hNET, MTD was 7- to 14-fold more potent than lobeline in binding and uptake assays. Generally, the affinity of MTD for these three transporters was not different from that obtained for lobelane in these assays (≤ a 7-fold difference between MTD and lobelane; Table 3). In the [125I]RTI-55 binding assays, MTD was 44-fold more selective for hDAT over hSERT and 7-fold more selective for hDAT over hNET. Similarly, in the uptake assays, MTD was 40-fold more selective for hDAT over hSERT but showed no selectivity for hDAT compared with hNET. Generally, the Hill coefficients for all of the compounds approximated unity, suggesting competition with [125I]RTI-55 at a single binding site.

**Inhibition of [3H]MTBZ Binding.** Lobeline and its analogs inhibited [3H]MTBZ binding to vesicle membranes prepared from rat whole brain (Fig. 8). With the exception of lobelanidine, this series of compounds exhibited a narrow range of $K_i$ values (0.92–8.8 $\mu\text{M}$). Lobelane was the most potent of the analogs in this series ($K_i = 920 \text{ nM}$), and lobelanidine was the least potent ($K_i = 26 \mu\text{M}$). In contrast to the results obtained for inhibition of [3H]DA and [3H]5-HT uptake in rat brain synaptosomes, lobeline and its analogs were equipotent in inhibiting [3H]MTBZ binding to vesicle membranes, with the exception of ketoalkene and lobelane, which were both more potent at VMAT2 than lobeline. Furthermore, MTD was significantly more potent than its epimer (−)-TTD in inhibiting [3H]MTBZ binding to vesicle membranes.
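For reference, the Hill analysis mentioned above fits specific binding to a standard sigmoidal form (one common parameterization, with $[I]$ the inhibitor concentration and the fitted exponent $n_H$ the Hill coefficient):

$$B([I]) = \frac{B_0}{1 + \left([I]/\mathrm{IC}_{50}\right)^{n_H}}$$

A coefficient near unity, as observed here, is consistent with competition at a single class of binding sites.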
**Discussion**

The current SAR analysis investigated effects of defunctionalized, esterified, reduced, and oxidized lobeline analogs on α4β2* and α7* nAChRs and monoamine plasmalemma and vesicular monoamine transporters. As previously reported, lobeline has high affinity for [3H]nicotine binding sites in rodent striatal membrane preparations ($K_i = 4–20 \text{ nM}$) (Abood et al., 1988; Reavill et al., 1990; Bhat et al., 1991; Court et al., 1994; Flammia et al., 1999). Defunctionalized lobeline analogs lobelane, MTD, and (−)-TTD generally exhibited decreased affinity for [3H]nicotine binding sites (α4β2* nAChRs). MTD, an unsaturated, cis-defunctionalized analog of lobeline, did not interact with α4β2* sites; however, its C-6 epimer (−)-TTD showed enhanced interaction with the $\alpha 4\beta 2*$ site, indicating that stereochemical factors play a role in binding site recognition. However, the affinity of (−)-TTD for the $[^3\text{H}]$nicotine binding site was 3 orders of magnitude less than that of lobeline. Partially defunctionalized analogs ketoalkene and 10S/10R-MEPP exhibited intermediate affinity for $\alpha 4\beta 2*$ nAChRs, with 30-fold lower affinity than lobeline. Structurally related analogs lobelanine and lobelanidine had 2 to 3 orders of magnitude lower affinity than lobeline in the $[^3\text{H}]$nicotine binding assay, consistent with results from previous studies (Flammia et al., 1999).

Lobeline and lobeline tosylate were equipotent in the $[^3\text{H}]$nicotine binding assay, which assesses interaction with high-affinity $\alpha 4\beta 2*$ nAChRs. The latter results suggest that increases in molecular volume and steric bulk adjacent to the 8-hydroxy group by the addition of the tosyl sulfonate moiety can be accommodated at the high-affinity $\alpha 4\beta 2*$ site. Although a correlation between inhibition of $[^3\text{H}]$nicotine binding and inhibition of nicotine-evoked $^{86}\text{Rb}^+$ efflux was expected, tosylation of the 8-hydroxy group surprisingly provided an analog (lobeline tosylate) with higher potency than lobeline in the $^{86}\text{Rb}^+$ efflux assay. This lack of correlation may be the result of different modes of interaction of these compounds at the high-affinity $\alpha 4\beta 2*$ nAChRs (e.g., competitive versus noncompetitive).

The $^{86}\text{Rb}^+$ efflux assay assesses functional response at $\alpha 4\beta 2*$ nAChRs (Marks et al., 1995; Miller et al., 2000). Effects of fully and partially defunctionalized analogs were evaluated in this assay to ascertain whether these analogs act as nAChR agonists or antagonists. Neither lobeline nor its analogs evoked $^{86}\text{Rb}^+$ efflux, demonstrating that these compounds are not agonists at $\alpha 4\beta 2*$ nAChRs. Lobeline acts as an $\alpha 4\beta 2*$ nAChR antagonist (Miller et al., 2000), and the current findings are consistent with these previous results. Ketoalkene and lobeline tosylate both inhibited $[^3\text{H}]$nicotine binding to striatal membranes and inhibited nicotine-evoked $^{86}\text{Rb}^+$ efflux from thalamic synaptosomes, demonstrating antagonism of $\alpha 4\beta 2*$ nAChRs. MTD, which did not inhibit $[^3\text{H}]$nicotine binding, also did not inhibit nicotine-evoked $^{86}\text{Rb}^+$ efflux, indicating that it does not interact with $\alpha 4\beta 2*$ sites either competitively or noncompetitively. In general, the members of this series of lobeline analogs have low affinity for $\alpha 7*$ nAChRs.
The C-10 hydroxy analogs 10S/10R-MEPP and 10R-MESP were the most potent in the series but were 5-fold less potent than nicotine. As lobeline is progressively defunctionalized, affinity for $\alpha 7*$ nAChRs decreases, and selectivity between $\alpha 4\beta 2*$ and $\alpha 7*$ is diminished or eliminated, mainly due to the marked decrease in affinity for $\alpha 4\beta 2*$ nAChRs. Conversely, lobeline and lobeline tosylate are nearly 3 orders of magnitude more selective for α4β2* nAChRs than α7* nAChRs.

Fig. 6. Kinetic analysis of the inhibition of specific $[^3\text{H}]$DA uptake by lobeline, lobelane, MTD, and (−)-TTD. Concentrations of lobeline (60 μM; panel A), lobelane (1 μM; panel B), MTD (3 μM; panel C), and (−)-TTD (1 μM; panel D) were chosen from the concentration-response curves illustrated in Fig. 5. Nonspecific uptake was determined in the presence of nomifensine (10 μM). $K_m$ and $V_{max}$ values are presented in Table 2 ($n = 6–7$ rats per compound).

TABLE 2
Lobeline (60 μM) and lobelane (1 μM) inhibit DAT function via a noncompetitive mechanism, whereas MTD (3 μM) and (−)-TTD (1 μM) competitively inhibit DAT function in rat striatal synaptosomes
Data are presented as mean ± S.E.M. values for $K_m$ and $V_{\text{max}}$. Nonspecific [3H]DA uptake was determined in the presence of nomifensine (10 μM). No significant differences were found in $V_{\text{max}}$ or $K_m$ values between control groups (parameters determined in the absence of drug) in the four series of experiments. Control values for specific [3H]DA uptake were combined for tabular presentation. Concentration-response curves are presented in Fig. 6.

| Compound | $K_m$ (μM) | $V_{\text{max}}$ (pmol/min/mg) |
|--------------|----------|-----------------------------|
| Control | 0.270 ± 0.138a | 35.7 ± 5.75 |
| Lobeline | 0.312 ± 0.145 | 19.0 ± 3.95b |
| Lobelane | 0.135 ± 0.078 | 26.8 ± 4.74b |
| MTD | 0.507 ± 0.126b | 33.0 ± 12.5 |
| (−)-TTD | 0.398 ± 0.161b | 37.0 ± 3.78 |

a Values are presented as mean (±S.E.M.) $K_m$ or $V_{\text{max}}$ values; $n = 6$ to 7 rats per group.
b $P < 0.05$ different from control.

Lobeline interacts nonselectively with monoamine transporters (DAT, SERT, NET, and VMAT2), consistent with previous findings (Teng et al., 1997, 1998; Dwoskin and Crooks, 2002). Lobeline exhibited the highest affinity, albeit in the low micromolar range, for VMAT2. Generally, defunctionalization of the lobeline molecule provided analogs with higher affinity for the plasmalemma transporters. Specifically, two compounds, MTD and (−)-TTD, were 1 to 2 orders of magnitude more potent than lobeline in inhibiting DAT function. Lobelane (a defunctionalized, fully saturated analog) and 10R-MESP and ketoalkene (partially defunctionalized analogs) were 15- to 30-fold more potent than lobeline in inhibiting DAT function. Both lobeline and lobelane inhibited DAT function noncompetitively. Interestingly, the unsaturated, defunctionalized epimers MTD and (−)-TTD competitively inhibited DAT. Taken together, it seems that this series of analogs interacts with at least two different sites on DAT. The more rigid unsaturated epimers MTD and (−)-TTD compete at the substrate site, whereas the flexible, fully saturated analog lobelane seems to interact with an alternative site on DAT. With respect to inhibition of SERT, (−)-TTD was 24-fold more potent than its C-6 epimer MTD, indicating that SERT is sensitive to the stereochemistry at C-6, whereas DAT and VMAT2 are not.
Partially defunctionalized analogs 10S/10R-MEPP and 10R-MESP were 2 to 3 orders of magnitude more potent than lobeline in inhibiting SERT function. These analogs had affinity similar to fluoxetine, a drug that selectively inhibits SERT (Fuller et al., 1991). Defunctionalization of lobeline at C-8 and the introduction of a C-10 hydroxy group affords 10S/10R-MEPP and 10R-MESP, both of which exhibit high affinity for SERT. The removal of the C-10 hydroxy group from 10S/10R-MEPP and 10R-MESP afforded lobelane and MTD, respectively, both of which exhibited an ~200-fold lower affinity at SERT. Thus, the completely defunctionalized analogs (lobelane and MTD) have low affinity for and little or no selectivity between DAT and SERT. Moreover, 10S/10R-MEPP showed 600-fold more selectivity in inhibiting SERT over DAT. Therefore, the C-10 hydroxy group seems to be a critical functionality for selective interaction with SERT. Taken together, the results suggest that 10S/10R-MEPP may be a potential lead compound for the development of new therapeutic agents for the treatment of mood disorders.

Lobeline, lobelane, and MTD were investigated for their interaction with DAT, SERT, and NET expressed in HEK-293 cells to further investigate their selectivity for specific transporters. Consistent with previous findings (Eshleman et al., 1999), the current study generally shows similar results (i.e., the same rank order of transporter inhibition for the compounds) in the hDAT and hSERT expression systems compared with rat DAT and SERT in brain synaptosomal preparations. By and large, higher affinity for these analogs was observed in the expression systems than in native tissues, but this may be a consequence of comparison between human and rat transporters. In contrast, more variable results comparing binding and uptake assays using cell expression systems were obtained with respect to NET. Interaction of lobeline, lobelane, and MTD with NET was not determined in rat brain; however, results from HEK-293 cells suggest that lobelane and MTD should inhibit [3H]NE uptake in brain more potently than lobeline.

Lobelane and ketoalkene were significantly more potent than lobeline in inhibiting binding of [3H]MTBZ to VMAT2 in whole brain synaptic vesicle preparations. Although there was only a 5-fold difference in affinity between $K_i$ values for lobeline and these two analogs, the confidence intervals for lobelane and ketoalkene did not overlap with that for lobeline, indicating significant differences in affinity for VMAT2. Similarly, MTD exhibited a significantly higher affinity for VMAT2 compared with (−)-TTD, although only a 4-fold difference in affinity was observed. These results indicate a modest enhancement of affinity at VMAT2 for these analogs over lobeline. The SAR trends for VMAT2 interaction indicate that the introduction of a 10R-hydroxy group into the lobeline molecule reduces affinity for VMAT2, and complete defunctionalization of the lobeline molecule affords analogs with the highest affinity for VMAT2 in the series.

Fig. 7. Lobeline and lobeline analogs inhibit specific [3H]5-HT uptake into rat hippocampal synaptosomes. Fluoxetine, a specific SERT inhibitor, was used as a standard for comparison. Nonspecific uptake was determined in the presence of fluoxetine (10 μM). $K_i$ values are provided in brackets. Data are the mean (±S.E.M.) specific [3H]5-HT uptake presented as a percentage of the control condition (mean ± S.E.M., 1.39 ± 0.08 pmol/min/mg; $n = 4–6$ rats per compound).
Fig. 7. Lobeline and lobeline analogs inhibit specific [3H]5-HT uptake into rat hippocampal synaptosomes. Fluoxetine, a specific SERT inhibitor, was used as a standard for comparison. Nonspecific uptake was determined in the presence of fluoxetine (10 μM). $K_i$ values are provided in brackets. Data are the mean (±S.E.M.) specific [3H]5-HT uptake presented as a percentage of the control condition (mean ± S.E.M., 1.39 ± 0.08 pmol/min/mg; $n = 4$–6 rats per compound).

TABLE 3
Lobeline, lobelane, and MTD inhibit $[^{125}\text{I}]$RTI-55 binding and $[^3\text{H}]$neurotransmitter uptake into recombinant hDAT, hSERT, and hNET stably expressed in HEK-293 cells

| Compound | $[^{125}\text{I}]$RTI-55 Binding $K_i$ ($\mu$M) | Hill Coefficient | $[^3\text{H}]$Neurotransmitter Uptake IC$_{50}$ ($\mu$M) |
|----------------|-------------------|------------------|------------------------|
| HEK-hDAT cells | | | |
| Lobeline | $5.40 \pm 1.30^a$ | $-1.03 \pm 0.10$ | $>10$ |
| Lobelane | $0.097 \pm 0.033$ | $-0.92 \pm 0.06$ | $0.087 \pm 14$ |
| MTD | $0.043 \pm 0.015$ | $-0.80 \pm 0.07$ | $0.117 \pm 0.033$ |
| Cocaine | $0.469 \pm 0.037$ | $-1.19 \pm 0.12$ | $0.524 \pm 0.055$ |
| HEK-hSERT cells | | | |
| Lobeline | $>10$ | ND | |
| Lobelane | $0.530 \pm 0.130$ | $-0.930$ | $0.830 \pm 0.320$ |
| MTD | $1.89 \pm 0.29$ | $-0.99 \pm 0.09$ | $4.80 \pm 1.20$ |
| Cocaine | $0.353 \pm 0.030$ | $-1.07 \pm 0.03$ | $0.419 \pm 0.037$ |
| HEK-hNET cells | | | |
| Lobeline | $1.87 \pm 0.82$ | $-0.58 \pm 0.05$ | $2.79 \pm 0.48$ |
| Lobelane | $1.80 \pm 0.47$ | $-0.97 \pm 0.14$ | $0.085 \pm 0.006$ |
| MTD | $0.277 \pm 0.082$ | $-0.79 \pm 0.04$ | $0.198 \pm 0.070$ |
| Cocaine | $2.42 \pm 0.12$ | $-0.87 \pm 0.04$ | $0.370 \pm 0.056$ |

ND, not determined.
$^a$ Values are presented as means (±S.E.M.) from at least three independent experiments. Each experiment was conducted in duplicate (binding) or triplicate (uptake).

Fig. 8. Lobeline and lobelane analogs inhibit specific $[^3\text{H}]$MTBZ binding to vesicles prepared from rat whole brain. Nonspecific binding was determined in the presence of tetrabenazine (20 μM). $K_i$ values are reported in brackets. Data are the mean (±S.E.M.) specific $[^3\text{H}]$MTBZ binding presented as a percentage of the control condition (mean ± S.E.M., 226 ± 10.8 fmol/mg; $n = 4$–6 rats per compound).
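Table 3 reports competition-binding affinities as $K_i$ values alongside uptake IC$_{50}$ values. Binding $K_i$ values of this kind are conventionally obtained from the measured IC$_{50}$ via the Cheng–Prusoff relation $K_i = \text{IC}_{50} / (1 + [L]/K_d)$ (Cheng and Prusoff, 1973, cited in the references below). A minimal sketch of the arithmetic follows; the radioligand concentration and $K_d$ used are illustrative placeholders, not values from this study.

```python
def cheng_prusoff_ki(ic50_nm: float, ligand_nm: float, kd_nm: float) -> float:
    """Ki = IC50 / (1 + [L]/Kd), valid for competitive inhibition
    (Cheng-Prusoff). All concentrations in nM."""
    return ic50_nm / (1.0 + ligand_nm / kd_nm)

# Illustrative numbers only (not data from Table 3): an IC50 of 150 nM
# measured with 0.05 nM radioligand against a site with Kd = 1.5 nM.
print(round(cheng_prusoff_ki(150.0, 0.05, 1.5), 1))  # 145.2; Ki < IC50 whenever [L] > 0
```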
The epimeric mixture 10S/10R-MEPP exhibited the highest affinity ($K_i = 10$ nM) at SERT and was 660-fold more selective for SERT than for DAT and 260-fold more selective for SERT than for VMAT2. Thus, since 10S/10R-MEPP demonstrated good selectivity at SERT and was devoid of nAChR activity, selectivity between plasmalemma transporters can be achieved through structural modification of the lobeline molecule.

Findings from this initial SAR study demonstrate that defunctionalization of lobeline markedly decreases affinity for α4β2* and α7* nAChRs while increasing affinity and selectivity for monoamine neurotransmitter transporters. These results suggest that the oxygen functionalities in the lobeline molecule are critical for interaction with nAChRs but not for interaction with neurotransmitter transporters. Monoamine transporters are considered valid targets for drug development for the treatment of methamphetamine abuse. In this respect, little is known regarding the VMAT2 pharmacophore. Drug discovery targeting VMAT2 may provide a unique opportunity to probe the underlying neurochemical mechanisms responsible for psychostimulant abuse and yield novel approaches for treatment. Subsequent SAR studies will be directed at enhancing the selectivity of lobeline analogs for individual transporters.

Acknowledgments

We acknowledge the technical assistance of M. Dathan Chesnut, David Eaves, Gabriela Deaciu, and Anne Woods. We also acknowledge Dr. Michael Kilbourn, who generously supplied [3H]MTBZ [supported by National Institutes of Health (NIH) Grant MH 47611].

References

Abood LG, Shahid K, and Maiti A (1988) Structure-activity studies of carbamate and other esters: agonists and antagonists to nicotine. *Pharmacol Biochem Behav* **30**:403–408.
Bhat RV, Marks MJ, and Collins AC (1991) Effects of chronic nicotine infusions on kinetics of high-affinity nicotine binding. *J Neurochem* **62**:574–581.
Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. *Anal Biochem* **72**:248–254.
Brown JM, Hanson GR, and Fleckenstein AE (2000) Methamphetamine rapidly decreases vesicular dopamine uptake. *J Neurochem* **74**:2221–2223.
Brown JM, Hanson GR, and Fleckenstein AE (2001) Regulation of the vesicular monoamine transporter-2: a novel mechanism for cocaine and other psychostimulants. *J Pharmacol Exp Ther* **296**:762–767.
Carroll FI, Lewin AH, and Mascarella SW (2002) Dopamine-transporter uptake blockers: structure-activity relationships, in *Neurotransmitter Transporters: Structure, Function and Regulation* (Reith ME ed) pp 381–432, Humana Press, Totowa, NJ.
Cheng YC and Prusoff WH (1973) Relationship between the inhibition constant ($K_i$) and the concentration of inhibitor which causes 50 percent inhibition ($I_{50}$) of an enzymatic reaction. *Biochem Pharmacol* **22**:3099–3108.
Court JA, Perry EK, Spurden D, Lloyd S, Gillespie JI, Whiting P, and Barlow R (1994) Comparison of the binding of nicotinic agonists to receptors from human and rat cerebral cortex and from chick brain (α4β2) transfected into mouse fibroblasts with ion channel activity. *Brain Res* **667**:118–122.
Crooks PA, Ramon A, White LL, Teng LH, Buxton SR, and Dwoskin LP (1995) Inhibition of nicotine-evoked [3H]dopamine release by N-substituted nicotine analogues: a new class of nicotinic antagonist. *Drug Dev Res* **36**:91–102.
Dwoskin LP and Crooks PA (2002) A novel mechanism and potential use for lobeline as a treatment for psychostimulant abuse. *Biochem Pharmacol* **63**:89–98.
Eshleman AJ, Carmoli M, Cumbay M, Martins CR, Neve KA, and Janowsky A (1999) Characteristics of drug interactions with recombinant biogenic amine transporters expressed in the same cell type. *J Pharmacol Exp Ther* **289**:877–885.
Flammia D, Marcella D, Damaj MI, Martin B, and Glennon RA (1999) Lobeline: structure-affinity relationship of nicotinic acetylcholinergic receptor binding. *J Med Chem* **42**:3726–3751.
Fuller RW, Wong DT, and Robertson DW (1991) Fluoxetine, a selective inhibitor of serotonin uptake. *Med Res Rev* **11**:17–34.
Green TA, Miller DK, Wong MY, Harrod SB, Crooks PA, Bardo MT, and Dwoskin LP (2001) Lobeline attenuates methamphetamine and cocaine self-administration and locomotor sensitization in rats. *Soc Neurosci Abstr* **8**:783.
Harrod SB, Dwoskin LP, Crooks PA, Klebaur JE, and Bardo MT (2001) Lobeline attenuates d-amphetamine self-administration in rats. *J Pharmacol Exp Ther* **296**:170–179.
Harrod SB, Dwoskin LP, Green TA, Gehrke BJ, and Bardo MT (2003) Lobeline does not serve as a reinforcer in rats. *Psychopharmacology* **165**:397–404.
Johnson RA, Eshleman AJ, Meyers T, Neve KA, and Janowsky A (1998) Substrate- and cell-specific effects of uptake inhibitors on human dopamine and serotonin transporter-mediated efflux. *Synapse* **30**:97–106.
Kenakin T (1997) *Pharmacologic Analysis of Drug-Receptor Interaction*. Lippincott-Raven, Philadelphia.
Koob GF (1992) Neural mechanisms of drug reinforcement. *Ann NY Acad Sci* **654**:171–191.
Kuhar MJ, Ritz MC, and Boja JW (1991) The dopamine hypothesis of the reinforcing properties of cocaine. *Trends Neurosci* **14**:299–302.
Lendvai B, Sershen H, Lajtha A, Santha E, Baranyi M, and Vizi ES (1996) Differential mechanisms involved in the effect of nicotinic agonists DMPP and lobeline to release [3H]5-HT from rat hippocampal slices. *Neuropharmacology* **35**:1769–1777.
Lukas RJ, Changeux JP, Le Novère N, Albuquerque EX, Balfour DJ, Berg DK, Bertrand D, Chiappinelli VA, Clarke PB, Collins AC, et al. (1999) International Union of Pharmacology. XX. Current status of the nomenclature for nicotinic acetylcholine receptors and their subunits. *Pharmacol Rev* **51**:397–401.
Marks MJ, Bullock AK, and Collins AC (1995) Sodium channel blockers partially inhibit nicotine-stimulated $^{86}$Rb$^+$ efflux from mouse brain synaptosomes. *J Pharmacol Exp Ther* **274**:41–49.
Marks MJ, Farnham DA, Grady SR, and Collins AC (1993) Nicotinic receptor function determined by stimulation of rubidium efflux from mouse brain synaptosomes. *J Pharmacol Exp Ther* **264**:542–552.
Marks MJ, Whiteaker P, Calcaterra J, Stitzel JA, Bullock AE, Grady SR, Picciotto MR, Changeux JP, and Collins AC (1999) Two pharmacologically distinct components of nicotinic receptor-mediated rubidium efflux in mouse brain require the beta2 subunit. *J Pharmacol Exp Ther* **289**:1090–1103.
Miller DK, Crooks PA, and Dwoskin LP (2000) Lobeline inhibits nicotine-evoked [3H]dopamine release from rat striatal slices and nicotine-evoked $^{86}$Rb$^+$ efflux from thalamic synaptosomes. *Neuropharmacology* **39**:2651–2662.
Miller DK, Crooks PA, Teng L, Witkin JM, Munzar P, Goldberg SR, Acri JB, and Dwoskin LP (2001) Lobeline inhibits the neurochemical and behavioral effects of amphetamine. *J Pharmacol Exp Ther* **296**:1023–1034.
Miller DK, Harrod SB, Green TA, Wong MY, Bardo MT, and Dwoskin LP (2002) Lobeline attenuates the locomotor stimulation induced by repeated nicotine administration in rats. *Pharmacol Biochem Behav* **74**:279–286.
Reavill C, Walther B, Stolerman IP, and Testa B (1990) Behavioral and pharmacokinetic studies on nicotine, cytisine and lobeline. *Neuropharmacology* **29**:619–624.
Rocha BA, Fumagalli F, Gainetdinov RR, Jones SR, Ator R, Giros B, Miller GW, and Caron MG (1998) Cocaine self-administration in dopamine transporter knockout mice. *Nat Neurosci* **1**:132–137.
Sharples CG, Kaiser S, Soliakov L, Marks MJ, Collins AC, Washburn M, Wright E, Spencer JA, Gallagher T, Whiteaker P, et al. (2000) UB-165: a novel nicotinic agonist with subtype selectivity implicates the alpha4beta2 subtype in the modulation of dopamine release from rat striatal synaptosomes. *J Neurosci* **20**:2783–2791.
Sora I, Wichems C, Takahashi N, Li XF, Zeng Z, Revay R, Lesch KP, Murphy DL, and Uhl GR (1998) Cocaine reward models: conditioned place preference can be established in dopamine- and serotonin-transporter knockout mice. *Proc Natl Acad Sci USA* **95**:7693–7704.
Sulzer D, Chen TK, Lau YY, Kristensen H, Rayport S, and Ewing A (1995) Amphetamine redistributes dopamine from synaptic vesicles to the cytosol and promotes reverse transport. *J Neurosci* **15**:4102–4108.
Takahashi N, Miner LL, Sora I, Ujike H, Revay RS, Kostic V, Jackson-Lewis V, Przedborski S, and Uhl GR (1997) VMAT2 knockout mice: heterozygotes display reduced amphetamine conditioned reward, enhanced amphetamine locomotion and enhanced MPTP toxicity. *Proc Natl Acad Sci USA* **94**:9938–9943.
Taylor P and Insel PA (1990) Molecular basis of pharmacologic selectivity, in *Principles of Drug Action* (Pratt WB and Taylor P eds) pp 1–102, Churchill Livingstone, Philadelphia.
Teng L, Crooks PA, and Dwoskin LP (1998) Lobeline displaces [3H]dihydrotetrabenazine binding and releases [3H]dopamine from rat striatal synaptic vesicles: comparison with d-amphetamine. *J Neurochem* **71**:258–265.
Teng L, Crooks PA, Sonsalla PK, and Dwoskin LP (1997) Lobeline and nicotine evoke [3H]overflow from rat striatal slices preloaded with [3H]dopamine: differential inhibition of synaptosomal and vesicular [3H]dopamine uptake. *J Pharmacol Exp Ther* **280**:1432–1444.
Wise RA and Bozarth MA (1987) A psychomotor stimulant theory of addiction. *Psychol Rev* **94**:469–492.

**Address correspondence to:** Dr. Linda P. Dwoskin, College of Pharmacy, University of Kentucky, Lexington, KY 40536-0082. E-mail: email@example.com
Clarifying the Disaster Process of the Elderly from the Perspective of Social Welfare

Keiko Tamura*, Haruo Hayashi*, and Reo Kimura**

*Disaster Prevention Research Institute, Kyoto University
**Graduate School of Environmental Studies, Nagoya University

Synopsis

This study aims to gather basic data on the response to elders and to show the necessity of constructing the discipline of disaster care management, a systematic approach to the disaster process of elders under drastic social environmental changes. Two case studies were conducted to clarify the disaster process of the elderly in two disasters. The major findings were as follows: 1) care managers, professionals licensed under the government-sponsored Long-Term Care Insurance System, worked effectively to manage the needs of moving temporarily to care facilities; 2) 13% of the elders who moved to care facilities as temporary shelters still stayed in those facilities 6 months after the impact. These results suggest that care managers could be effective agents for responding to elders in disasters; however, they need to learn the disaster process of elders.

Keywords: disaster care management, Long-Term Care Insurance System, care manager, aging society

1. Introduction

Niigata Prefecture suffered two big disasters in 2004. The Niigata Flood occurred on July 13 and caused 15 deaths, 13 of which were senior citizens; of those 13 deaths, 8 were 75 or older. The disaster reminded the public that disaster-prevention measures for the elderly are necessary. The Mid-Niigata Prefecture Earthquake occurred on October 23 and was characterized by 1) long-lasting aftershocks, 2) evacuations by village units, 3) evacuation advisories over a wide area, and 4) substantial damage to lifelines, which resulted in a larger number of evacuees and more senior citizens needing care. The aim of this study is to gather basic data on the response to the elderly, to show the necessity of the discipline of disaster care management, and to develop a systematic approach to the disaster process for seniors under dramatic changes in the social environment.

Fig. 1. Age distribution of casualties in the two disasters (Niigata Flood and Mid-Niigata Prefecture Earthquake, including earthquake-related deaths); males shown in black, females in red.

2. The 2004 Niigata Flood

2.1 Purpose of Research in the Case of the Niigata Flood

The purpose of this study is to investigate the reason for the concentration of elderly victims in the Niigata Flood of July 13, 2004 and to propose appropriate measures for reducing the number of victims of flood disasters in the future. There are tens of thousands of people classified as elderly living in the districts stricken by the flood. It is essential to analyze why only twelve elderly persons lost their lives and what factors made these persons different from the others. Possible factors include the physical properties of the hazard, the geographical characteristics of the districts where the victims lived, and the personal attributes of the victims. This study aims to clarify and combine these factors and determine the causes of death in this disaster.
In other words, this study aims to “profile” the causes of death.

2.2 Method of Survey in the Case of the Niigata Flood

2.2.1 Subjects of disaster area

Sanjo City, Niigata Prefecture was the subject area of Survey (1) on “the Niigata Flood of July 13”. According to Hayashi et al. (2005), the Kakenhi (Grants-in-Aid for Scientific Research) report, four patterns of death were observed in “the Niigata Flood of July 13”. Deaths were caused by 1) a landslide, 2) house destruction by gushing water, 3) moving outdoors after the house was flooded well above the floor level, or 4) remaining in a house flooded well above the floor level. It was determined that most of the deaths of the elderly who normally needed daily care were caused by 4) remaining in a house flooded well above the floor level. Therefore, we collected data to assess this issue. We mailed the questionnaires to the subjects and asked them to return the completed survey via mail. We mailed the questionnaires on March 18, 2005 and collected them until April 5, 2005. Toward the end of March, reminders were mailed to those who had yet to return their questionnaires.

2.2.2 Subjects of this survey

Participants in Survey (1) included 1) leaders and sub-leaders of community associations in the area where the elderly victims lived, 2) nursing-care insurance service providers who provided services in the subject area, and 3) care managers who were caring for the elderly victims. Care managers are nationally certified specialists with special knowledge and techniques that help senior citizens receiving care to lead independent daily lives.

2.3 Results of Survey in the Case of the Niigata Flood

(1) The conditions that surrounded the deaths of the elderly care receivers

On July 13, 2004 at 13:07, the left bank of the Igarashi River broke at Suwa (Margaribuchi). The surveyed area (Rannan, Sanjo City, which is on the west side of the Shinetsu Line railroad) was somewhat far from the broken bank.

Table 1. Situations in which the people who needed assistance died in the Niigata Flood

| | Community Association A | Community Association B | Community Association C |
|---|---|---|---|
| Attributes | Female, 87, walked with a cane, living alone | Female, 76, needed nursing care, living alone | Female, 84, needed nursing care, living alone |
| Estimated time of death | 13th, in the afternoon | 13th, around 20:00 | 13th, in the afternoon |
| Time the body was found | 15th, around 12:40 | 14th, around 09:00 | 15th, around 17:45 |
| Place the body was found | Between the flooded kitchen and living room of her house | In the floodwater on the first floor of her house | In the flooded living room of her house |
| Cause of death | There are traces of the flood up to 130 cm above the floor in her house; it seems that she stayed home, unable to evacuate by herself, and drowned in the rush of the flood | It seems that a neighbor initially carried her to the second floor, but she later went downstairs by herself for some reason and drowned, probably after falling | There are traces of flooding up to 110 cm above the floor in her house; it appears that she stayed home, unable to evacuate by herself, and drowned in the rush of the flood |

(The table also records a fourth victim: a man who was confined to bed and received nursing care at home; he drowned in his house, which was flooded 120 cm above the floor level, although his wife tried to save him.)
Flooding began at 15:00 or later and reached 1.5 m. In the interviews, the leaders and sub-leaders of the community associations uniformly stated that 1) they assumed it was “a typical flood”, since the surrounding area had frequently flooded; 2) although the official advice to evacuate was issued at 11:40, they were unaware that an evacuation advisory had been issued; 3) when the floodwater rose swiftly, all they could do was go upstairs; and 4) they assumed that the residents of their community association, including the seniors living alone, would manage to survive the flood by themselves. The four senior citizens who died in this area were 75 or older, and all died in their homes. The interviews revealed that these four people “needed some assistance to walk” or “could not walk by themselves”. It was confirmed that three of them were receiving in-home nursing care services, but at the time of the flood none were receiving care from a caregiver (Table 1).

(2) Action taken for the elderly care receivers in the Niigata Flood

Care managers and nursing care workers have become social resources to assist the elderly, since many senior citizens living in their own homes receive nursing care insurance. These professionals essentially worked well during the disaster. However, one reason for the elderly deaths is that there was no systematic emergency preparedness plan for community care managers. It was also concluded that there was a clear gap between the nursing care workers and the leaders and sub-leaders of community associations or district welfare officers in understanding the senior citizens’ needs.

3. The 2004 Mid-Niigata Earthquake

3.1 Purpose of Research in the Case of the 2004 Mid-Niigata Earthquake

The Mid-Niigata Prefecture Earthquake occurred on October 23 and was characterized by 1) long-lasting aftershocks, 2) evacuations by village units, 3) evacuation advisories over a wide area, and 4) substantial damage to lifelines, which resulted in a larger number of evacuees and more senior citizens needing care. The aim of this study is to gather basic data on the response to the elderly, to show the necessity of the discipline of disaster care management, and to develop a systematic approach to the disaster process for seniors under dramatic changes in the social environment.

3.2 Survey 1: Random-Sample Social Survey in the Impacted Areas

3.2.1 Subjects of disaster area

The survey was conducted in Ojiya City and Kawaguchi Town, where the casualties and damage to houses were serious throughout the area. Both male and female adults living in this area participated in this survey. The adopted method was stratified two-stage sampling. Initially, 50 spots in this area were randomly selected: 43 spots in Ojiya City and 7 spots in Kawaguchi Town, in proportion to the population ratio. The sampling was conducted with probability proportional to size. Using the basic registers of the residents, we sampled 20 individuals from each spot, no two of whom resided in the same household. We specified the individuals to complete the survey so that equal numbers of male and female subjects were sampled. Consequently, 1,000 subjects were sampled, i.e., 2.19% of the population in the area (45,668 persons as of March 2005). We mailed the questionnaires to the subjects and asked them to return the completed survey via mail. We mailed the questionnaires on March 18, 2005 and collected them until April 5, 2005. Toward the end of March, reminders were mailed to those who had yet to return their questionnaires.
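A minimal sketch (ours, for illustration) of the stratified two-stage design described in 3.2.1: spots are drawn with probability proportional to population size, after which a fixed number of individuals (20 per spot) is planned per selected spot. The spot names and populations below are hypothetical placeholders, not the survey's actual sampling frame.

```python
import random

def pps_sample_spots(spot_pop: dict, n_spots: int, rng: random.Random) -> list:
    """Stage 1: draw n_spots distinct spots, each draw with probability
    proportional to the spot's population (PPS without replacement)."""
    remaining = dict(spot_pop)
    chosen = []
    for _ in range(n_spots):
        spots, weights = zip(*remaining.items())
        pick = rng.choices(spots, weights=weights, k=1)[0]
        chosen.append(pick)
        del remaining[pick]
    return chosen

rng = random.Random(0)
# Hypothetical spot populations; the survey used 50 spots (43 in Ojiya City,
# 7 in Kawaguchi Town) drawn in proportion to population.
spot_pop = {f"spot{i:03d}": rng.randint(300, 1500) for i in range(120)}
spots = pps_sample_spots(spot_pop, n_spots=50, rng=rng)
plan = {s: 20 for s in spots}  # stage 2: 20 individuals per spot, from resident registers
print(len(plan), sum(plan.values()))  # 50 spots, 1000 subjects in total
```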
3.2.2 Basic Attributes

We collected 543 responses (response rate: 54.3%). Responses that 1) were partially or not completed, 2) were error-laden, 3) did not specify sex or age, or 4) came from individuals who did not reside in Ojiya City or Kawaguchi Town during the earthquake were excluded. Hence, 518 completed surveys were collected (effective response rate: 51.8%). To validate the random sampling, we verified that the respondents reflected the features of the general population in terms of basic attributes, i.e., sex and age (generation). The number of households and estimated population per municipality (as of March 1, 2005) and the estimated population per age in 5-year increments (as of January 1, 2005), published by the Emergency Management and Disaster Division, Niigata Prefecture Government, were used to determine the basic attributes of the general population. The goodness-of-fit test did not show significant differences in sex or age (generation) between the respondents and the general population (sex: $\chi^2(1) = 0.85$, n.s.; generation: $\chi^2(2) = 5.82$, n.s.). Since there were no significant differences in the basic attributes, it was concluded that the respondents represent the tendencies of the area.

3.2.3 The Results of Survey 1

Fig. 2 shows that about 80% of the people living in the impacted area left their homes because of the numerous aftershocks (over 500 aftershocks occurred from October 23 to 31), while 10.6% chose to stay at home: 20.1% evacuated to open spaces, 19.2% went to roads, 17.9% went to their own garages, and 13.8% stayed in their cars. Respondents who chose to stay at home were asked why (Fig. 3); 35.7% of them answered that they did not move anywhere else because they had elderly family members.

3.3 Survey 2: Social Survey of the Care Managers and Senior Citizens

3.3.1 Subjects of this survey

Survey (2) included 23 care managers who provided home-based nursing care support in Ojiya City and 399 senior citizens who were sent to welfare facilities for the elderly or to hospitals during the emergency evacuation. Since it is difficult for some seniors to answer the questionnaire, we asked the care managers who cared for the senior citizens evacuated to welfare facilities or hospitals to answer the questions on the seniors’ behalf.

3.3.2 Results of Survey 2

(1) The general situation

The questionnaires were distributed on January 15, 2005 through the Geriatric Welfare Division, Ojiya City Office, and collected on January 31, 2005. A 100% response rate was achieved with the assistance of the Geriatric Welfare Division, since all 23 care managers who provide home-based nursing care in Ojiya City responded to Questionnaire-A of Survey (2). Questionnaire-B was distributed to 399 senior citizens; 382 were returned, but only 257 were assessed to be usable for the survey. Consequently, the effective response rate was 64.4%.

(2) Situation of the senior citizens received at welfare facilities and hospitals in Ojiya City after the Mid-Niigata Prefecture Earthquake

The graph in Fig. 4, which is based on data from the Geriatric Welfare Division, Ojiya City, chronologically shows the extent to which care managers coped with the emergency evacuation needs. The division accumulated the data by calling the welfare facilities and hospitals to confirm the senior citizens received during the emergency evacuation, based on reports from the care managers.
The division continued calling the facilities and hospitals until December 12, 2004, by which time the total number of people received had reached 399. On the day of the earthquake, the need for emergency evacuations arose mainly for “the elderly, who are highly dependent on medical care”. The needs then shifted to “senior citizens who have difficulties staying at the shelters for a long period”. The needs peaked on October 27, 4 days after the earthquake (Fig. 4).

(3) Actual situation of the elderly received at welfare facilities or hospitals during the emergency evacuation

The average age of the 257 “senior citizens received during the emergency evacuation” was 84; 89.9% were 75 or older, 6.2% were 65 to 74, and 3.9% were under 65. The reasons for the emergency hospitalization or reception at welfare facilities were: 1) their houses were damaged and it became impossible to receive home-based nursing care (58.2%), 2) it became impossible for the family member(s) to care for them at home (19.5%), 3) their facilities suffered physical damage (8.6%), 4) their needs changed (2.3%), 5) their conditions changed (1.8%), 6) they evacuated outside the city (1.8%), and 7) other (7.8%). The resources supporting home-based care decreased by 77.7%.

Before the earthquake, 65.9% of the received seniors dwelled at home and 27.1% resided in welfare facilities (others: 7%). On the day of the earthquake, 37.1% stayed in a tent or a car, 16.3% evacuated to shelters, 17.6% stayed in welfare facilities, 6.9% stayed in hospitals, and 10.6% remained at home. Two to four days after the earthquake, 20% were still living in tents, cars, or shelters. One week after the earthquake, the number of senior citizens received at welfare facilities and hospitals increased, peaking one month after the earthquake; by that time, 61.7% had moved to welfare facilities and 23.0% to hospitals (others: 15.3%). Two months after the earthquake, the number of seniors at welfare facilities and hospitals decreased, implying that they had started to return home. However, three months after the earthquake only 42.0% of the received seniors had returned home (Fig. 5).

(4) Senior citizens returning home from hospitals/welfare facilities

Figure 6 chronologically compares “the number of senior citizens received at hospitals/welfare facilities during the emergency evacuation” and “the number of evacuees staying at shelters”. The number of evacuees peaked five days after the earthquake: 29,000 of the 44,000 residents of Ojiya City evacuated. The number of evacuees started to decline two weeks after the earthquake, and all of them had returned home by December 20, 2004. On the other hand, the number of elderly people received at welfare facilities and hospitals during the emergency evacuation increased in accordance with the increase of evacuees at the shelters and peaked 12 days after the earthquake. Although the number of evacuees declined sharply two weeks after the earthquake, the number of seniors at the welfare facilities and hospitals did not decline as rapidly; it gradually declined to 177 as of December 12. These findings suggest that the senior citizens who needed care were unable to reconstruct their lives as quickly as the other evacuees. A verbal confirmation on February 17, 2005 found that 67 seniors were still at hospitals and welfare facilities; as of May 27, 2005, that number had decreased to 50. Thus, our current task is to help these 50 seniors return to their own homes.
4. Observations and Conclusions

An essential issue in disaster planning is how to help people who have difficulty evacuating by themselves. Typically, the welfare services that support the lives of these people are provided only during fixed hours and are unable to handle abrupt catastrophic disasters. Nevertheless, care managers play a pivotal role in the current welfare service and are knowledgeable about senior citizens and their need for assistance when a disaster occurs. Care managers also have a strong sense of responsibility for the safety of people in need. Thus, care managers must be involved in developing individual evacuation plans for senior citizens who will need support in the future. When care managers draw up evacuation plans for senior citizens who require care, the individuals and organizations that will serve as life-saving resources in an emergency must be secured. Once these resources are ascertained, the administration, the welfare-related employees, and the community need to simulate evacuations and provide training in order to improve the disaster-prevention abilities of the community.

After the emergency period of the Mid-Niigata Prefecture Earthquake was over, the care managers mainly supported the elderly requiring care by facilitating their reception at welfare facilities and hospitals. However, there are fundamental issues. 1) The facilities were selected from the limited options within the care managers’ networks; it was difficult to select facilities that might help these seniors reconstruct their lives. 2) These seniors tended to have prolonged stays at these facilities. In Ojiya City, which was hit by the Mid-Niigata Prefecture Earthquake, more than 50 senior citizens (as of May 27, 2005, when this paper was written) had yet to return to their previous lives within the community. The resources available to care receivers living at home decreased due to the disaster, which prevented these seniors from leaving the hospitals and welfare facilities and returning home. Hence, a new specialty called “disaster care management” must be established. This specialty would allow care managers to play a leading role in systematizing the knowledge, techniques, and networking needed for total care management during a disaster, while remaining within the framework of the nursing care insurance system.

Acknowledgments

This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Special Purposes, Research of the Niigata, Fukushima and Fukui Flooding Disaster in July 2004, and for COE Research, Center of Excellence for Natural Disaster Science and Disaster Reduction (Disaster Prevention Research Institute, Kyoto University, Japan).

Fig. 6. Comparison of the number of people staying at shelters and the number of senior citizens evacuated to hospitals/welfare facilities during the emergency evacuation.

Clarifying the Disaster Process of the Elderly from the Perspective of Social Welfare

Keiko Tamura*, Haruo Hayashi*, and Reo Kimura**

*Disaster Prevention Research Institute, Kyoto University
**Graduate School of Environmental Studies, Nagoya University

Abstract

Taking the Niigata heavy-rain flood and the Mid-Niigata Earthquake as case studies, we investigated the changes and problems experienced by the elderly and the actual responses to them. The results showed that care managers, as the agents of the Long-Term Care Insurance System, functioned effectively in supporting local people requiring assistance in these disasters. However, their activities during the disasters were sustained by high professional morale and devotion, and the increase in their burden at such times was substantial; a community framework for disaster-time care that also includes disaster-response personnel therefore needs to be promoted jointly by the welfare and disaster-prevention sectors.

Keywords: people requiring assistance in disasters, crisis management, aging society, long-term care insurance, social welfare, care management
Extended Islands of Tractability for Parsimony Haplotyping*

Rudolf Fleischer\textsuperscript{1}, Jiong Guo\textsuperscript{2}, Rolf Niedermeier\textsuperscript{3}, Johannes Uhlmann\textsuperscript{3}, Yihui Wang\textsuperscript{1}, Mathias Weller\textsuperscript{3}, Xi Wu\textsuperscript{1}

\textsuperscript{1} School of Computer Science, IIPL, Fudan University, Shanghai, China \{rudolf,yihuiwang,wuxi\}@fudan.edu.cn
\textsuperscript{2} Universität des Saarlandes, Campus E 1.4, D-66123 Saarbrücken, Germany email@example.com
\textsuperscript{3} Institut für Informatik, Friedrich-Schiller-Universität Jena, Ernst-Abbe-Platz 2, D-07743 Jena, Germany \{rolf.niedermeier,johannes.uhlmann,mathias.weller\}@uni-jena.de

* Supported by the DFG, research projects PABI, NI 369/7, and DARE, GU 1023/1, NI 369/11, NSF China (No. 60973026), Shanghai Leading Academic Discipline Project (project number B114), Shanghai Committee of Science and Technology of China (nos. 08DZ2271800 and 09DZ2272800), the Excellence Cluster on Multimodal Computing and Interaction (MMCI), and the Robert Bosch Foundation (Science Bridge China 32.5.8003.0040.0).

**Abstract.** Parsimony haplotyping is the problem of finding a smallest size set of haplotypes that can explain a given set of genotypes. The problem is NP-hard, and many heuristic and approximation algorithms as well as polynomial-time solvable special cases have been discovered. We propose improved fixed-parameter tractability results with respect to the parameter “size of the target haplotype set” $k$ by presenting an $O^*(k^{4k})$-time algorithm. This also applies to the practically important constrained case, where we can only use haplotypes from a given set. Furthermore, we show that the problem becomes polynomial-time solvable if the given set of genotypes is complete, i.e., contains all possible genotypes that can be explained by the set of haplotypes.

1 Introduction

Over the last few years, haplotype inference has become one of the central problems in algorithmic bioinformatics [10,2]. Its applications include drug design, pharmacogenetics, mapping of disease genes, and inference of population histories. One of the major approaches to haplotype inference is \textit{parsimony haplotyping}: given a set of genotypes, the task is to find a minimum-cardinality set of haplotypes that explains the input set of genotypes. The task of selecting as few haplotypes as possible (parsimony criterion) is motivated by the observation that in natural populations the number of haplotypes is much smaller than the number of genotypes [2]. For the background in molecular biology, we refer to the rich literature (see, e.g., the surveys by Catanzaro and Labbé [2] and Gusfield and Orzack [10]); here we focus on the underlying combinatorial problem. In an abstract way, a genotype can be seen as a length-$m$ string over the alphabet $\{0, 1, 2\}$, while a haplotype can be seen as a length-$m$ string over the alphabet $\{0, 1\}$. A set $H$ of haplotypes explains, or resolves, a set $G$ of genotypes if for every $g \in G$ there is either an $h \in H$ with $g = h$ (trivial case), or there are two haplotypes $h_1$ and $h_2$ in $H$ such that, for all $i \in \{1, \ldots, m\}$,

- if $g$ has letter 0 or 1 at position $i$, then both $h_1$ and $h_2$ have this letter at position $i$, and
- if $g$ has letter 2 at position $i$, then one of $h_1$ or $h_2$ has letter 0 at position $i$ while the other one has letter 1.

For example, $H = \{00100, 01110, 10110\}$ resolves $G = \{02120, 20120, 22110\}$.
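In code, the resolution operation reads as follows; this is our own minimal sketch (not from the paper), checked against the example above.

```python
def res(h1: str, h2: str) -> str:
    """Genotype resolved by two haplotypes: positions where the haplotypes
    agree keep the common letter; positions where they differ become 2."""
    assert len(h1) == len(h2)
    return "".join(a if a == b else "2" for a, b in zip(h1, h2))

H = ["00100", "01110", "10110"]
G = {res(a, b) for a in H for b in H if a < b}  # all unordered pairs
print(sorted(G))  # ['02120', '20120', '22110'], matching the example
```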
Parsimony haplotyping is NP-hard, and numerous algorithmic approaches based on heuristics and integer linear programming methods are applied in practice [2]. There is also a growing list of combinatorial approaches (with provable performance guarantees), including the identification of polynomial-time solvable special cases, approximation algorithms, and fixed-parameter algorithms [5,13,14,16,11]. In this work, we contribute new combinatorial algorithms for parsimony haplotyping, based on new insights into the combinatorial structure of a haplotype solution.

Lancia and Rizzi [14] showed that parsimony haplotyping can be solved in polynomial time if every genotype string contains at most two letters 2, while the problem becomes NP-hard if genotypes may contain three letters 2 [13]. Sharan et al. [16] proved that parsimony haplotyping is APX-hard even in very restricted cases and identified instances with a specific structure that allow for polynomial-time exact solutions or constant-factor approximations. Moreover, they showed that the problem is fixed-parameter tractable with respect to the parameter $k =$ “number of haplotypes in the solution set”. The corresponding exact algorithm has running time $O(k^{k^2+k}m)$. These results were further extended by van Iersel et al. [11] to cases where the genotype matrix (the rows are the genotypes and the columns are the $m$ positions in the genotype strings) has restrictions on the number of 2’s in the rows and/or columns. They identified various special cases of haplotyping with polynomial-time exact or approximation algorithms with approximation factors depending on the numbers of 2’s per column and/or row, leaving open the complexity of the case with at most two 2’s per column (and an unbounded number of 2’s per row). Further results in this direction have recently been provided by Cicalese and Milanic [3]. Finally, Fellows et al. [5] introduced the constrained parsimony haplotyping problem, where the set of haplotypes may not be chosen arbitrarily from $\{0, 1\}^m$ but only from a pool $\hat{H}$ of plausible haplotypes. Using an intricate dynamic programming algorithm, they extended the fixed-parameter tractability result of Sharan et al. [16] to the constrained case, proving a running time of $k^{O(k^2)} \cdot \text{poly}(m, |\hat{H}|)$. Jäger et al. [12] recently presented an experimental study of algorithms for computing all possible haplotype solutions for a given set of genotypes, where the integer linear programming and branch-and-bound algorithms were sped up using some insights into the combinatorial structure of the haplotype solutions, for example eliminating equal columns from the genotype matrix and recursively decomposing a large problem into smaller ones.

Our contributions are as follows. We simplify and improve the fixed-parameter tractability results of Sharan et al. [16] and Fellows et al. [5] by proposing fixed-parameter algorithms for the constrained and unconstrained versions of parsimony haplotyping that run in $k^{4k} \cdot \text{poly}(m, |\hat{H}|)$ time, which is a significant exponential speed-up over previous algorithms. Moreover, we develop polynomial-time data reduction rules that yield a problem kernel of size at most $2^k k^2$ for the unconstrained case.
A combinatorially demanding part is to show that the problems become polynomial-time solvable if we require that the given set of genotypes is complete in the sense that it contains all genotypes that can be resolved by some pair of haplotypes in the solution set $H$. We call this special case \textit{induced parsimony haplotyping}, and we distinguish between the case that the genotypes are given as a multiset (note that different pairs of haplotypes may resolve the same genotype) and the case that they are given just as a set without multiplicities. We show that, while there may be an exponential number of optimal solutions in the general case, there can be at most two optimal solutions in the induced case. For both induced cases, unconstrained and constrained, we propose algorithms running in $O(k \cdot m \cdot |G|)$ and $O(k \cdot m \cdot (|G| + |\hat{H}|))$ time, respectively. Note that these polynomial-time solvable cases stand in sharp contrast to previous polynomial-time solvable cases [3,14,16,11], all of which require a bound on the number of 2’s in the genotype matrix.

2 Preliminaries and Definitions

Throughout this paper, we consider \textit{genotypes} as strings of length $m$ over the alphabet $\{0, 1, 2\}$, while \textit{haplotypes} are considered as strings of length $m$ over the alphabet $\{0, 1\}$. If $s$ is a string, then $s[i]$ denotes the letter of $s$ at position $i$; this applies to both haplotypes and genotypes. Two haplotypes $h_1$ and $h_2$ \textit{resolve} a genotype $g$, denoted by $\text{res}(h_1, h_2) = g$, if, for all positions $i$, either $h_1[i] = h_2[i] = g[i]$, or $g[i] = 2$ and $h_1[i] \neq h_2[i]$. For a given set $H$ of haplotypes, let $\text{res}(H) := \{\text{res}(h_1, h_2) \mid h_1, h_2 \in H\}$ denote the set of genotypes resolved by $H$, and let $\text{mres}(H)$ denote the multiset of genotypes resolved by $H$ (the multiplicity of a genotype $g$ in $\text{mres}(H)$ corresponds to the number of pairs of haplotypes in $H$ resolving $g$). We also write $\text{res}(h, H)$ (respectively, $\text{mres}(h, H)$) for the (multi)set of genotypes resolved by $h$ with all haplotypes in $H$. We say a set $H$ of haplotypes \textit{resolves} a given set $G$ of genotypes if $G \subseteq \text{res}(H)$, and $H$ \textit{induces} $G$ if $\text{res}(H) = G$. If $G$ is a multiset, we similarly require $G \subseteq \text{mres}(H)$ and $\text{mres}(H) = G$, respectively. A haplotype $h$ is \textit{consistent} with a genotype $g$ if $h[i] = g[i]$ for all positions $i$ with $g[i] \neq 2$.

We refer to the monographs [4,6,15] for details concerning parameterized algorithmics and to the survey [9] for an overview of problem kernelization. We consider the following haplotype inference problems parameterized by the size of the haplotype set $H$ to be computed:

\textbf{Haplotype Inference by Parsimony (HIP)}:
\textbf{Input}: A set $G$ of length-$m$ genotypes and an integer $k \geq 0$.
\textbf{Question}: Is there a set $H$ of length-$m$ haplotypes such that $|H| \leq k$ and $G \subseteq \text{res}(H)$?

In \textbf{Constrained Haplotype Inference by Parsimony (CHIP)} the input additionally contains a set $\hat{H}$ of length-$m$ haplotypes, and the task is to find a set of at most $k$ haplotypes from $\hat{H}$ resolving $G$. Note that with $k$ haplotypes one can resolve at most \( \binom{k}{2} + k \) genotypes: \( \binom{k}{2} \) via pairs of distinct haplotypes, plus $k$ via the trivial case $g = h$. Hence, throughout this paper, we assume that \( |G| \) is bounded by \( \binom{k}{2} + k \). In this paper, we introduce the “induced case” of constrained and unconstrained parsimony haplotyping.
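The definitions above translate directly into code. A sketch of ours, reusing the res helper from the introduction; following the Section 3 convention introduced below, pairs of identical haplotypes are excluded.

```python
from collections import Counter
from itertools import combinations

def res_set(H: list) -> set:
    """res(H): genotypes resolved by unordered pairs of distinct haplotypes
    (matching the Section 3 assumption that res(H) contains no element of H)."""
    return {res(h1, h2) for h1, h2 in combinations(H, 2)}

def mres(H: list) -> Counter:
    """mres(H): the multiset version; a genotype's multiplicity is the number
    of haplotype pairs in H that resolve it."""
    return Counter(res(h1, h2) for h1, h2 in combinations(H, 2))

def consistent(h: str, g: str) -> bool:
    """h is consistent with g if they agree wherever g is not 2."""
    return all(gc == "2" or hc == gc for hc, gc in zip(h, g))
```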
To simplify the presentation of the results for the induced case, in Section 3 we assume that each genotype contains at least one letter 2; then we need two different haplotypes to resolve a genotype. Hence, in the induced case, we assume that \( \text{res}(H) \) does not contain an element of \( H \). We claim without proof that our algorithms in Section 3 can be adapted to instances without these restrictions. Formally, **Induced (Constrained) Haplotype Inference by Parsimony**, (C)IHIP for short, is defined as follows: given a set \( G \) of length-\( m \) genotypes (and, in the constrained case, a set \( \hat{H} \) of length-\( m \) haplotypes), the task is to find a set \( H \) (with \( H \subseteq \hat{H} \) in the constrained case) of length-\( m \) haplotypes such that \( G = \text{res}(H) \). Due to the lack of space, some proofs are deferred to a full version of this paper.

3 Induced Haplotype Inference by Parsimony

The main result of this section is that one can solve **Induced Haplotype Inference by Parsimony** (IHIP) and **Constrained Induced Haplotype Inference by Parsimony** (CIHIP) in \( O(k \cdot m \cdot |G|) \) and \( O(k \cdot m \cdot (|G| + |\hat{H}|)) \) time, respectively. In the first paragraph, we consider the following special case of IHIP: given a multiset of \( \binom{k}{2} \) length-\( m \) genotypes (which are not necessarily distinct), is there a multiset of \( k \) length-\( m \) haplotypes inducing them? By allowing genotype multisets, we enforce that the input contains information about how often each genotype is resolved by the haplotypes. This allows us to observe a special structure in the input, which makes it easier to present our results. In the second paragraph, we extend our findings to the case that the input genotypes are given as a set, that is, without multiplicities. In this case, some genotypes may be resolved multiple times, but we do not know in advance which of the input genotypes would be resolved more than once. This makes the set case more delicate than the multiset case; in fact, the set case can be interpreted as a generalization of the multiset case. However, being easier to present, the multiset case is treated first. Recall that, for ease of presentation, throughout this section we assume that every genotype contains at least one letter 2 and that \( \text{res}(H) \) and \( \text{mres}(H) \) do not intersect \( H \).

**The Multiset Case.** In this paragraph, we show that one can solve Induced Haplotype Inference by Parsimony (IHIP) in \( O(k \cdot m \cdot |G|) \) time in the multiset case; this easily generalizes to the constrained case. We need the following notation. Let \( \#_x(i) \) denote the number of genotypes in \( G \) which have letter \( x \) at position \( i \), for \( x \in \{0, 1, 2\} \). We start with a simple structural observation that must be fulfilled by yes-instances. If \( G \) is a yes-instance for IHIP, then the multiset of genotypes restricted to their first positions (i.e., single-letter genotypes) is also a yes-instance. By a simple column-exchange argument, this extends to all positions, implying the following observation (see Fig. 1 for an example).

Observation 1 (“Number Condition”) If a multiset of genotypes is a yes-instance for IHIP, then, for each position $i \in \{1, \ldots, m\}$, there exist two integers $k_0 \geq 0$ and $k_1 \geq 0$ such that $k = k_0 + k_1$, $\#_0(i) = \binom{k_0}{2}$, $\#_1(i) = \binom{k_1}{2}$, and $\#_2(i) = k_0 \cdot k_1$.
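The Number Condition yields a cheap per-position filter for ruling out no-instances. A sketch of ours, with $k$ the target number of haplotypes:

```python
from math import comb

def number_condition(G: list, k: int) -> bool:
    """Observation 1: at every position i there must exist k0 + k1 = k with
    #_0(i) = C(k0, 2), #_1(i) = C(k1, 2), and #_2(i) = k0 * k1."""
    for i in range(len(G[0])):
        counts = [sum(g[i] == c for g in G) for c in "012"]
        if not any(counts == [comb(k0, 2), comb(k - k0, 2), k0 * (k - k0)]
                   for k0 in range(k + 1)):
            return False
    return True
```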
The next lemma is the basis for recursively solving IHIP. For ease of presentation, we define the operation $\oplus$. It can be applied to a haplotype $h$ and a genotype $g$ if, for all $i \in \{1, \ldots, m\}$, either $h[i] = g[i]$ or $g[i] = 2$. It produces the unique length-$m$ haplotype $h' := h \oplus g$ such that $\text{res}(h, h') = g$. We further define $i^*$ as the first position for which there are genotypes $g, g' \in G$ with $g[i^*] \neq g'[i^*]$. Furthermore, for all $x \in \{0, 1, 2\}$, we denote the set of all genotypes $g \in G$ with $g[i^*] = x$ by $G_x$. Clearly, any solution for $G$ can be partitioned into a solution for $G_0$ and a solution for $G_1$, as formalized by Lemma 1.

Lemma 1. Let $G$ be a multiset of genotypes such that not all genotypes in $G$ are identical. Let $H$ be a set of haplotypes inducing $G$. For $x \in \{0, 1\}$, let $H_x$ denote the haplotypes in $H$ with $x$ at position $i^*$. Then, $H_0$ induces $G_0$, $H_1$ induces $G_1$, and $G_2$ is exactly the multiset of genotypes resolved by taking each time one haplotype from $H_0$ and one haplotype from $H_1$. Moreover, $H_0 \cap H_1 = \emptyset$.

The function $\text{solve}(G)$ (see Alg. 1) recursively computes a solution for $G$, with the base cases provided by the next two lemmas.

```
Function solve(G)
Input: A multiset of genotypes G ⊆ {0, 1, 2}^m.
Output: A set $\mathcal{H}$ containing at most two multisets of haplotypes, each of
        which induces G, if G is a yes-instance; otherwise "no".
begin
    if all genotypes in G are identical or G_x = ∅ for some x ∈ {0, 1} then
        return the unique solution {H} (see Lemma 2);
    else if |G_0| = 1 and |G_1| = 1 then
        return the at most two solutions {H, H'} (see Lemma 3);
    else
        Choose x ∈ {0, 1} such that |G_x| > 1 and |G_x| is minimal;
        $\mathcal{H}$ ← solve(G_x);
        forall H ∈ $\mathcal{H}$ do replace H with MultisetExtend(H, G, G_2) in $\mathcal{H}$;
        if $\mathcal{H}$ contains only the empty set then return "no";
        return $\mathcal{H}$;
end

Algorithm 1: solve(G) recursively computes all (at most two) solutions for G.
```

Lemma 2 identifies two cases in which there exists at most one solution for $G$, which can be computed in polynomial time.

Lemma 2. Assume that $|G| \geq 2$. If all genotypes in $G$ are identical or if $G_x = \emptyset$ for some $x \in \{0, 1\}$, then there exists at most one solution for $G$. Moreover, in $O(|G| \cdot m)$ time, one can compute a solution or report that $G$ is a no-instance.

Proof. First we consider the case that all genotypes are identical. Since every genotype has letter 2, $G$ is a no-instance: all pairs of haplotypes in a solution would have to resolve the same genotype $g$, but then any two haplotypes other than some fixed $h_1$ would both be the complement of $h_1$ at the 2-positions of $g$ and hence identical, so their resolved genotype would contain no letter 2. Now, assume that not all genotypes are identical and $G_x = \emptyset$ for some $x \in \{0, 1\}$. Without loss of generality, $G_0 = \emptyset$ and $G_1 \neq \emptyset$. By the definition of $i^*$, $G_2 \neq \emptyset$. Note that in a solution for $G$ there can be at most one haplotype having letter 0 at position $i^*$ (otherwise, we have a contradiction to the fact that $G_0 = \emptyset$). Moreover, there must exist at least one haplotype with 0 at position $i^*$ (otherwise one cannot resolve the genotypes in $G_2$). Thus, in any solution $H$ for $G$, there must exist a unique haplotype $h \in H$ with $h[i^*] = 0$; further, $G_2 = \text{mres}(h, H \setminus \{h\})$. One can now infer all haplotypes as follows. Clearly, one can answer “no” if there is an $i$, $1 \leq i \leq m$, such that both letters 0 and 1 appear at position $i$ of the genotypes in $G_2$.
If there is a position $i$ and a $g \in G_2$ with $g[i] \neq 2$, then one can set $h[i] := g[i]$; otherwise, to have a solution for $G$, all genotypes in $G_1$ must have the same letter $y \in \{0, 1\}$ at this position, so one can set $h[i] := 1 - y$. With $h$ settled, one can easily determine the haplotypes $h'$ with $h'[i^*] = 1$ (these are the haplotypes $h \oplus g$ for $g \in G_2$). Finally, one has to make sure that all these haplotypes indeed induce $G$; if not, then the input instance is a no-instance. The running time of this procedure is $O(|G| \cdot m)$. □

Next, we show that there are at most two solutions for $G$ if each of $G_0$ and $G_1$ contains only a single genotype.

Lemma 3. If $|G_0| = 1$ and $|G_1| = 1$, then there are at most two solutions for $G$. Moreover, in $O(m)$ time, one can compute these solutions or report that $G$ is a no-instance.

Proof. Let $g_0$ and $g_1$ be the genotypes in $G_0$ and $G_1$, respectively. By Lemma 1, two pairs of haplotypes are required to resolve them: $h_0^0$ and $h_0^1$ (resolving $g_0$), and $h_1^0$ and $h_1^1$ (resolving $g_1$). If $|G_2| \neq 4$, then return “no” (see Observation 1); otherwise, let $G_2 = \{g_2, g_3, g_4, g_5\}$. If neither $g_0$ nor $g_1$ contains letter 2, then the haplotypes are easily constructed (they are equal to the respective genotypes). Otherwise, let $i$ be the first position where $g_0$ or $g_1$ has letter 2, say $g_0[i] = 2$. Without loss of generality, let $h_0^0[i] := 0$ and $h_0^1[i] := 1$. We consider the following two cases.

Case 1: $g_1[i] \neq 2$. Without loss of generality, let $g_1[i] = 0$; then $h_1^0[i] = h_1^1[i] = 0$. Two of the genotypes in $G_2$ must have 0 at position $i$ and the other two must have 2 at position $i$; otherwise, return “no”. Without loss of generality, let $g_2[i] = g_3[i] = 0$ and $g_4[i] = g_5[i] = 2$. Since $g_2$ and $g_3$ must both be resolved using $h_0^0$, one can uniquely determine $h_0^0$ as follows. Consider any position $j$. If $g_2[j] \neq 2$ and $g_3[j] \neq 2$, then they must be equal (if not, return “no”), and one can set $h_0^0[j] := g_2[j]$. If exactly one of $g_2[j]$ and $g_3[j]$ equals 2, say $g_3[j] = 2$, then let $h_0^0[j] := g_2[j]$. If $g_2[j] = g_3[j] = 2$, then we know that $g_1[j] \neq 2$ (otherwise, return “no”) and thus $h_0^0[j] := 1 - g_1[j]$. Finally, let $h_0^1 := h_0^0 \oplus g_0$, $h_1^0 := h_0^0 \oplus g_2$, and $h_1^1 := h_0^0 \oplus g_3$. If these haplotypes also correctly resolve $g_1$, $g_4$, and $g_5$, then we have a unique solution for $G$; otherwise return “no”.

**Case 2:** $g_0[i] = g_1[i] = 2$. There is a genotype in $G_2$ having 0 at position $i$ and another having 1 at position $i$ (otherwise, return “no”). Without loss of generality, let $g_2[i] = 0$, $g_3[i] = 1$, $h_1^0[i] := 0$, and $h_1^1[i] := 1$. Then, $g_4[i] = g_5[i] = 2$, $g_2 = \text{res}(h_0^0, h_1^0)$, and $g_3 = \text{res}(h_0^1, h_1^1)$. Now there are two possibilities to resolve $g_4$ and $g_5$: either $g_4 = \text{res}(h_0^1, h_1^0)$ and $g_5 = \text{res}(h_0^0, h_1^1)$, or $g_4 = \text{res}(h_0^0, h_1^1)$ and $g_5 = \text{res}(h_0^1, h_1^0)$. By choosing one of these two possibilities, all four haplotypes are fixed. Thus, there are at most two solutions for $G$.

Note that there are only six genotypes; thus, for every position the computations are clearly doable in constant time. Hence, the whole procedure runs in $O(m)$ time. □

The next two lemmas show that one can solve an IHIP instance recursively if neither Lemma 2 nor Lemma 3 applies.
That is, we now assume that not all genotypes are identical and that $|G_x| > 1$ for some $x \in \{0, 1\}$. We show that, given a solution for $G_x$, one can uniquely extend this solution to a solution for $G$, or decide that $G$ is a no-instance, leading to the function $\text{MultisetExtend}$ (see Alg. 2).

**Lemma 4.** Let $|G_x| > 1$ for some $x \in \{0, 1\}$, let $H_x$ be a multiset of haplotypes inducing $G_x$, and let $g$ be a genotype in $G_2$ with the smallest number of 2’s. If $G$ is induced by $H$ with $H_x \subseteq H$, then all haplotypes in $H_x$ consistent with $g$ must be identical.

**Proof.** Without loss of generality, we assume that $|G_0| > 1$. Suppose that there is an $H$ with $H_x \subseteq H$ inducing $G$. Since $g[i^*] = 2$, there must be a haplotype $h_1 \in H_x$ and a haplotype $h_2 \in H \setminus H_x$ resolving $g$. Clearly, $h_1$ and $h_2$ are consistent with $g$. We show that there is no other haplotype $h \in H_x$ such that $h \neq h_1$ and $h$ is consistent with $g$. For the sake of contradiction, assume that there is such a haplotype $h$. First, note that $h$, $h_1$, and $h_2$ are consistent with $g$ and hence identical at the positions where $g$ does not have letter 2. Since $h \neq h_1$, $h$ differs from $h_1$ in at least one of the positions where $g$ has letter 2. Thus, $h_2$ (which together with $h_1$ resolves $g$ and hence is the complement of $h_1$ at the positions where $g$ has letter 2) must have the same letter as $h$ at some position where $h_1$ and $h_2$ differ. This implies that $\text{res}(h, h_2) \in G_2$ has fewer 2’s than $g$, contradicting the choice of $g$. □

**Lemma 5.** Let $|G_x| > 1$ for some $x \in \{0, 1\}$, and let $H_x$ be a multiset of haplotypes inducing $G_x$. If $G$ is induced by $H$ with $H_x \subseteq H$, then $H$ is uniquely determined, and the function $\text{MultisetExtend}$ (see Alg. 2) computes $H$ in $O(|H_x| \cdot |G_2| \cdot m)$ time.

```
Function MultisetExtend(H_x, G, G_2)
Input: A haplotype multiset H_x inducing G_x for some x ∈ {0, 1}, and a
       multiset G_2 of genotypes.
Output: A haplotype multiset H inducing G with H_x ⊆ H, if one exists;
        otherwise an empty set.
1  begin
2      H := H_x;
3      while G_2 ≠ ∅ do
4          Choose a g ∈ G_2 with the smallest number of 2's;
5          Choose an h ∈ H_x consistent with g;
6          h' := h ⊕ g;
7          H := H ∪ {h'};
8          G' := {g' | ∃ h'' ∈ H_x : g' = res(h', h'')};
9          if G' ⊈ G_2 then return ∅;
10         G_2 := G_2 \ G';
11     end
12     if mres(H) = G then return H; else return ∅;
13 end

Algorithm 2: An algorithm to extend a solution for G_x to G in the multiset case.
```

**Proof.** The correctness of lines 4–7 of $\text{MultisetExtend}$ follows from Lemma 4. Since including $h' := h \oplus g$ in $H$ is the only choice, the genotypes resolved by $h'$ together with the other haplotypes in $H_x$ must also be in $G_2$; otherwise, no solution exists. Thus, lines 8 and 9 of $\text{MultisetExtend}$ are correct. Line 10 safely removes the genotypes resolved by $h'$ from $G_2$. The next while-iteration proceeds to find the next pair consisting of a haplotype $h$ and a genotype $g \in G_2$ satisfying Lemma 4. If there is a solution for $G$ comprising $H_x$, then we must end up with an empty $G_2$. Moreover, $H \setminus H_x$ must resolve all genotypes in $G_{1-x}$ and, together with $H_x$, the genotypes in $G_2$; this is examined in line 12 of $\text{MultisetExtend}$. Thus, the function $\text{MultisetExtend}$ is correct, and by Lemma 4 the solution $H$ with $H_x \subseteq H$ is unique. Concerning the running time, note that the most time-consuming part of the function is finding the consistent haplotypes in $H_x$ for a given genotype in $G_2$. This can be done in $O(|H_x| \cdot |G_2| \cdot m)$ time by iterating over all haplotypes in $H_x$ and, for each haplotype, over all genotypes in $G_2$. □
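For concreteness, here is our Python transcription of MultisetExtend; a sketch, not the authors' implementation. It assumes the res, mres, and consistent helpers sketched in Section 2, represents multisets as collections.Counter objects, and inlines $\oplus$ as oplus.

```python
from collections import Counter

def oplus(h: str, g: str) -> str:
    """h ⊕ g: the unique h' with res(h, h') = g (requires h consistent with g)."""
    return "".join(hc if gc != "2" else "1" if hc == "0" else "0"
                   for hc, gc in zip(h, g))

def multiset_extend(H_x: list, G: Counter, G2: Counter):
    """Extend a solution H_x for G_x to the unique solution for G (Lemma 5),
    or return None if no extension exists."""
    H, G2 = list(H_x), Counter(G2)
    while +G2:                                      # while G2 is nonempty
        g = min(+G2, key=lambda s: s.count("2"))    # fewest 2's first (Lemma 4)
        cands = [h for h in H_x if consistent(h, g)]
        if not cands:
            return None
        h_new = oplus(cands[0], g)                  # all cands coincide by Lemma 4
        H.append(h_new)
        resolved = Counter(res(h_new, h) for h in H_x)
        if resolved - G2:                           # some resolved genotype not left in G2
            return None                             # mirrors line 9 of Algorithm 2
        G2 -= resolved                              # mirrors line 10 of Algorithm 2
    return H if mres(H) == G else None              # mirrors line 12 of Algorithm 2
```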
Putting everything together, we obtain the main theorem of this paragraph.

**Theorem 1.** In case of a multiset $G$ of length-$m$ genotypes, Induced Haplotype Inference by Parsimony and Constrained Induced Haplotype Inference by Parsimony can be solved in $O(k \cdot m \cdot |G|)$ and $O(k \cdot m \cdot (|G| + |H|))$ time, respectively.

**Proof.** (Sketch) We show that the algorithm solve($G$) (see Alg. 1) is correct. If all genotypes are identical or $G_x = \emptyset$ for some $x \in \{0, 1\}$, then the correctness follows from Lemma 2. Hence, in the following, assume that not all genotypes are identical, $G_0 \neq \emptyset$, and $G_1 \neq \emptyset$. Distinguish the cases that $|G_0| = |G_1| = 1$ and $|G_x| > 1$ for some $x \in \{0, 1\}$. In the case that $|G_0| = |G_1| = 1$, one can compute the solutions (at most two) for $G$ using Lemma 3. In the other case, for some $x \in \{0, 1\}$, it holds that $|G_x| > 1$ and $|G_{1-x}| > 0$. Without loss of generality, assume $|G_0| > 1$. By Lemma 1, a solution for $G$ consists of a solution $H_0$ for $G_0$ and a solution $H_1$ for $G_1$, with $H_0 \cap H_1 = \emptyset$. Since one tries to extend every solution for $G_0$ and these extensions are unique by Lemma 5, one will find every possible solution for $G$. Since the base cases have at most two solutions and extensions are uniquely determined by Lemma 5, there exist at most two solutions for $G$. In the constrained case, one only needs to check whether one of the computed solutions is in the given set of haplotypes. The claimed running time follows from Lemmas 2, 3, and 5. □

**The Set Case.** If the input is not a multiset, but a set $G$ of genotypes, that is, all genotypes in $G$ are pairwise distinct, then the Number Condition (Observation 1) does not necessarily hold. Consider the haplotype set $H = \{000, 001, 110, 111\}$, which induces the set $\text{res}(H) = \{002, 112, 221, 220, 222\}$, but also induces the multiset $\text{mres}(H) = \{002, 112, 221, 220, 222, 222\}$ (observe that $\text{res}(000, 111) = \text{res}(001, 110) = 222$). The problem is that we cannot directly infer from $G$ which genotypes should be resolved more than once. However, many properties of the multiset case (as for example Lemmas 1, 2, and 3) carry over to the set case, so we only need a moderate modification of the multiset algorithm to solve the set case. More specifically, the key to solving the set case is to adapt function MultisetExtend (all details are deferred to the long version of this paper).

**Theorem 2.** In case of a set $G$ of length-$m$ genotypes, Induced Haplotype Inference by Parsimony and Constrained Induced Haplotype Inference by Parsimony can be solved in $O(k \cdot m \cdot |G|)$ and $O(k \cdot m \cdot (|G| + |H|))$ time, respectively.
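As an aside, the set-versus-multiset subtlety above can be checked mechanically; this snippet (reusing `res` from the first sketch) reproduces the example.

```python
from collections import Counter
from itertools import combinations

H = ["000", "001", "110", "111"]
mres = Counter(res(a, b) for a, b in combinations(H, 2))
print(sorted(mres.elements()))  # ['002', '112', '220', '221', '222', '222']
print(sorted(mres))             # ['002', '112', '220', '221', '222']
```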
## 4 General Haplotype Inference by Parsimony

This section contains an algorithm to solve the general parsimony haplotyping problem for the unconstrained and the constrained versions in $O(k^{4k+1} \cdot m)$ and $O(k^{4k+1} \cdot m \cdot |H|)$ time, respectively, improving and partially simplifying previous fixed-parameter tractability results [16, 5]. In addition, we provide a simple kernelization. We start with some preliminary considerations.

Given a set of haplotypes resolving a given set of genotypes, the relation between the haplotypes and the genotypes can be depicted by an undirected graph, the *solution graph*, in which the edges are labeled by the genotypes and every vertex $v$ is labeled by a haplotype $h_v$. If an edge $\{u, v\}$ is labeled by genotype $g$, we require that $g = \text{res}(h_u, h_v)$. We call such a vertex/edge labeling *consistent*. If only the edges are labeled, the graph is an *inference graph* (because it allows us to infer all the haplotypes). Solution graphs and inference graphs may contain loops. In what follows, assume that the input is a yes-instance, i.e., a solution graph exists. Intuitively, our algorithm “guesses” an inference graph for $G$ (by enumerating all possible such graphs) and then infers the haplotypes from the genotype labels on the edges. To this end, it guesses for every connected component of the solution graph a spanning subgraph with edges labeled by some of the genotypes in $G$ in such a way that we have enough information at hand to infer the haplotypes. Then, one has to solve the following subproblem: Given an inference graph for a subset of genotypes of $G$, does there exist a consistent vertex labeling? The next three lemmas show how to solve this subproblem by separately considering the connected components of the inference graphs.

**Lemma 6.** Let $G$ be a set of genotypes and let $\Gamma = (V, E)$ be a connected inference graph for $G$. For each position $i$, $1 \leq i \leq m$, if there is a genotype $g \in G$ with $g[i] \neq 2$, then one can, in $O(|V| + |E|)$ time, uniquely infer the letters of all haplotypes at position $i$ or report that there is no consistent vertex labeling.

**Lemma 7.** Let $\Gamma = (V, E)$ be a connected inference graph, containing an odd-length cycle, for a set $G$ of genotypes. Then, there exists at most one consistent vertex labeling. Furthermore, one can compute in $O(m \cdot (|V| + |E|))$ time a consistent vertex labeling or report that no consistent vertex labeling exists.

**Lemma 8.** Let $\Gamma = (V_a, V_b, E)$ be a connected bipartite inference graph for a set $G$ of length-$m$ genotypes. Let $u \in V_a$ and $w \in V_b$ be arbitrarily chosen. Then,

1. one can compute in $O(m \cdot (|V_a| + |V_b| + |E|))$ time a consistent vertex labeling or report that no consistent vertex labeling exists, and
2. the genotypes resolved by $h_u$ and $h_w$ are identical for every consistent vertex labeling.

Next, we describe the algorithm for the unconstrained version (HIP), see Alg. 3.

Input: A set of genotypes $G \subseteq \{0, 1, 2\}^m$ and an integer $k \geq 0$.
Output: Either a set of haplotypes $H$ with $|H| \leq k$ and $G \subseteq \text{res}(H)$, or “no” if there is no solution of size at most $k$.
```
 1  forall size-k subsets $G' \subseteq G$ do
 2      forall inference graphs $\Gamma$ for $G'$ on $k$ vertices and $k$ edges do
 3          forall non-bipartite connected components of $\Gamma$ do
 4              if possible, compute the labels of all vertices of the component (Lemma 7), otherwise try the next inference graph (goto line 2);
 5          end
 6          forall bipartite connected components of $\Gamma$ do
 7              if possible, compute a consistent vertex labeling for the component (Lemma 8), otherwise try the next inference graph (goto line 2);
 8          end
 9          Let $H$ denote the inferred haplotypes (vertex labels);
10          if $G \subseteq \text{res}(H)$ then return $H$;
11      end
12  end
13  return "no";
```
Algorithm 3: An algorithm solving HIP in $O(k^{4k+1} \cdot m)$ time.
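The per-position propagation behind Lemma 6, used as a subroutine in lines 4 and 7, amounts to the following sketch (our own naming, not from the paper): a genotype letter other than 2 pins both endpoints of its edge, while a 2 forces opposite letters.

```python
def infer_position(vertices, edges, i):
    """Sketch of Lemma 6's propagation on a connected inference graph.
    edges: list of (u, v, g) with g a genotype string; returns a
    {vertex: letter} labeling at position i, or None on contradiction
    (also None if position i is ambiguous on every edge)."""
    adj = {v: [] for v in vertices}
    for u, v, g in edges:
        adj[u].append((v, g[i]))
        adj[v].append((u, g[i]))
    label = {}
    for u, v, g in edges:          # seed from edges unambiguous at position i
        if g[i] != "2":
            label[u] = label[v] = int(g[i])
    if not label:
        return None
    stack = list(label)
    while stack:                   # a '2' flips the letter, anything else pins it
        u = stack.pop()
        for v, c in adj[u]:
            want = 1 - label[u] if c == "2" else int(c)
            if v not in label:
                label[v] = want
                stack.append(v)
            elif label[v] != want:
                return None        # no consistent labeling at this position
    return label
```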
To solve HIP, we could enumerate all inference graphs for $G$ and then find the vertex labeling using Lemmas 7 and 8. However, to be more efficient, we first select a size-$k$ subset of genotypes (line 1 of Alg. 3), and then we enumerate all inference graphs on $k$ vertices containing exactly $k$ edges labeled by the $k$ chosen genotypes (line 2 of Alg. 3). Assume that there exists a solution graph for $G$. Of all inference graphs on $k$ vertices and $k$ edges, consider one with the following properties:

– it contains a spanning subgraph of every connected component of the solution graph, and
– the spanning subgraph of any non-bipartite connected component contains an odd cycle (thus, the bipartite components of the inference graph are exactly the bipartite components of the solution graph).

Obviously, this inference graph exists and is considered by Alg. 3. By Lemma 7, we can uniquely infer the vertex labels for all connected components of the inference graph containing an odd cycle. For every bipartite component, we can get a consistent vertex labeling from Lemma 8. In such a bipartite component, for any two vertices $u \in V_a$ and $v \in V_b$, the genotypes resolved by $h_u$ and $h_v$ are identical for every consistent vertex labeling. Thus, the haplotypes resolve all genotypes contained in the respective (bipartite) component of the solution graph. In summary, if the given instance is a yes-instance, then our algorithm will find a set of at most $k$ haplotypes resolving the given genotypes.

**Theorem 3.** Haplotype Inference by Parsimony and Constrained Haplotype Inference by Parsimony can be solved in $O(k^{4k+1} \cdot m)$ and $O(k^{4k+1} \cdot m \cdot |\bar{H}|)$ time, respectively.

**Proof.** (Sketch) We first consider the unconstrained case. By the discussion above, Alg. 3 correctly solves HIP. It remains to analyze its running time. First, there are $O(\binom{|G|}{k})$ size-$k$ subsets $G'$ of $G$. Second, there are $O(k^{2k})$ inference graphs on $k$ vertices containing exactly $k$ edges labeled by the genotypes in $G'$, because for every genotype $g \in G'$ we have $k^2$ choices for the endpoints (loops are allowed) of the edge labeled by $g$. For each of those inference graphs, applying Lemma 7 and Lemma 8 to its connected components takes $O(k \cdot m)$ time. Hence, the overall running time of Alg. 3 sums up to $O\big(\binom{|G|}{k} \cdot k^{2k} \cdot m \cdot k\big)$. Since $|G| \leq k^2$, the running time can be bounded by $O(k^{4k+1} \cdot m)$.

One can easily adapt Alg. 3 to solve CHIP as follows. As before, one enumerates all size-$k$ subsets $G' \subseteq G$ and all inference graphs for $G'$. Since, by Lemma 7, the vertex labels for the connected components containing an odd cycle are uniquely determined, one only has to check whether the inferred haplotypes are contained in the given haplotype pool $\bar{H}$ (otherwise, try the next inference graph). Basically, the only difference is how to proceed with the bipartite components of the inference graph. Let $(W, F)$ be a connected bipartite component of the current inference graph. Instead of choosing an arbitrary consistent vertex labeling as done in Lemma 8, proceed as follows. Choose an arbitrary vertex $v \in W$ and check for every haplotype $h \in \bar{H}$ whether there exists a consistent vertex labeling for this component where $v$ is labeled by $h$. Note that fixing the vertex label for $v$ implies the existence of at most one consistent vertex labeling of $(W, F)$.
If it exists, this labeling can be computed by a depth-first traversal starting at $v$. If for a haplotype $h$ there exists a consistent vertex labeling of $(W, F)$ such that all labels are contained in $\bar{H}$, then proceed with the next bipartite component. Otherwise, one can conclude that for the current inference graph there is no consistent vertex labeling using only the given haplotypes, and, hence, one can proceed with the next inference graph. The correctness and the claimed running time follow by almost the same arguments as in the unconstrained case.

**Problem Kernelization.** In this paragraph, we show that HIP admits an exponential-size problem kernel. To this end, we assume the input $G$ to be in the matrix representation that is mentioned in the introduction; that is, each row represents a genotype while each column represents a position. Since it is obvious that we can upper-bound the number $n$ of genotypes in the input by $k^2$, it remains to bound the number $m$ of columns (positions) in the input. The idea behind the following data reduction rule is that we can safely delete a column if there is another column that is identical. By applying this rule exhaustively, we can bound the number of columns by $2^k$.

**Reduction Rule.** Let $(G, k)$ be an instance of HIP. If two columns of $G$ are equal, then delete one of them.

The correctness of the reduction rule follows from the observation that, given at most $k$ haplotypes resolving the genotypes in the reduced instance, we can easily find a solution for the original instance by copying the respective haplotype positions. Next, we bound the number of columns.

**Lemma 9.** Let $(G, k)$ be a yes-instance of HIP that is reduced with respect to the reduction rule. Then, $G$ has at most $2^k$ columns.

**Proof.** Let $H$ denote a matrix of $k$ haplotypes resolving $G$. It is obvious that if two columns $i$ and $j$ of $H$ are equal, then columns $i$ and $j$ of $G$ are equal. Now, since $G$ does not contain a pair of equal columns, neither does $H$. Since there are only $2^k$ different strings in $\{0,1\}^k$, it is clear that $H$ cannot contain more than $2^k$ columns and thus, neither can $G$. □

Since the number $n$ of genotypes can be bounded by $k^2$ and the number $m$ of columns can be bounded by $2^k$ (Lemma 9), one directly obtains Proposition 1.

**Proposition 1.** Haplotype Inference by Parsimony admits a problem kernel of size at most $2^k \cdot k^2$ that can be constructed in $O(n \cdot m \cdot \log m)$ time.

Plugging Proposition 1 into Theorem 3, we obtain the following.

**Corollary 1.** HIP can be solved in $O(k^{4k+1} \cdot 2^k + n \cdot m \cdot \log m)$ time.

## 5 Conclusion

We contributed new combinatorial algorithms for parsimony haplotyping with the potential to make the problem more feasible in practice without giving up the demand for optimal solutions. Our results also lead to several new questions for future research. For instance, our kernelization result yields a problem kernel of exponential size. It would be interesting to know whether a polynomial-size problem kernel exists, which may also be seen in the light of recent breakthrough results on methods to prove the non-existence of polynomial-size kernels [1, 7]. A second line of research is to make use of the polynomial-time solvable induced cases to pursue a “distance from triviality” approach [8].
The idea here is to identify and exploit parameters that measure the distance of general instances of parsimony haplotyping to the “trivial” (that is, polynomial-time solvable) induced cases. Research in this direction is underway. A more speculative research direction could be to investigate whether our results on the induced case (with at most two optimal solutions) may be useful in the context of recent research [12] on finding all optimal solutions in the general case. Clearly, it remains an interesting open problem to find a fixed-parameter algorithm for parsimony haplotyping with an exponential factor of the form $c^k$ for some constant $c$.

References

1. H. L. Bodlaender, R. G. Downey, M. R. Fellows, and D. Hermelin. On problems without polynomial kernels. *J. Comput. System Sci.*, 75(8):423–434, 2009.
2. D. Catanzaro and M. Labbé. The pure parsimony haplotyping problem: Overview and computational advances. *International Transactions in Operational Research*, 16(5):561–584, 2009.
3. F. Cicalese and M. Milanič. On parsimony haplotyping. Technical Report 2008-04, Universität Bielefeld, 2008.
4. R. G. Downey and M. R. Fellows. *Parameterized Complexity*. Springer, 1999.
5. M. R. Fellows, T. Hartman, D. Hermelin, G. M. Landau, F. A. Rosamond, and L. Rozenberg. Haplotype inference constrained by plausible haplotype data. In *Proc. 20th CPM*, volume 5577 of *LNCS*, pages 339–352. Springer, 2009.
6. J. Flum and M. Grohe. *Parameterized Complexity Theory*. Springer, 2006.
7. L. Fortnow and R. Santhanam. Infeasibility of instance compression and succinct PCPs for NP. In *Proc. 40th STOC*, pages 133–142. ACM Press, 2008.
8. J. Guo, F. Hüffner, and R. Niedermeier. A structural view on parameterizing problems: Distance from triviality. In *Proc. 1st IWPEC*, volume 3162 of *LNCS*, pages 162–173. Springer, 2004.
9. J. Guo and R. Niedermeier. Invitation to data reduction and problem kernelization. *ACM SIGACT News*, 38(1):31–45, 2007.
10. D. Gusfield and S. H. Orzack. Haplotype inference. In *CRC Handbook on Bioinformatics*, chapter 1, pages 1–25. CRC Press, 2005.
11. L. van Iersel, J. Keijsper, S. Kelk, and L. Stougie. Shorelines of islands of tractability: Algorithms for parsimony and minimum perfect phylogeny haplotyping problems. *IEEE/ACM Trans. Comput. Biology Bioinform.*, 5(2):301–312, 2008.
12. G. Jäger, S. Climer, and W. Zhang. Complete parsimony haplotype inference problem and algorithms. In *Proc. 17th ESA*, volume 5757 of *LNCS*, pages 337–348. Springer, 2009.
13. G. Lancia, M. C. Pinotti, and R. Rizzi. Haplotyping populations by pure parsimony: Complexity of exact and approximation algorithms. *INFORMS Journal on Computing*, 16(4):348–359, 2004.
14. G. Lancia and R. Rizzi. A polynomial case of the parsimony haplotyping problem. *Operations Research Letters*, 34:289–295, 2006.
15. R. Niedermeier. *Invitation to Fixed-Parameter Algorithms*. Oxford University Press, 2006.
16. R. Sharan, B. V. Halldórsson, and S. Istrail. Islands of tractability for parsimony haplotyping. *IEEE/ACM Trans. Comput. Biology Bioinform.*, 3(3):303–311, 2006.
Argumentation for reconciling agent ontologies

Cássia Trojahn (INRIA & LIG), Jérôme Euzenat (INRIA & LIG), Valentina Tamma (University of Liverpool) and Terry R. Payne (University of Liverpool)

**Abstract** Within open, distributed and dynamic environments, agents frequently encounter and communicate with new agents and services that were previously unknown. However, to overcome the ontological heterogeneity which may exist within such environments, agents first need to reach agreement over the vocabulary and underlying conceptualisation of the shared domain that will be used to support their subsequent communication. Whilst there are many existing mechanisms for matching the agents’ individual ontologies, some are better suited to certain ontologies or tasks than others, and many are unsuited for use in a real-time, autonomous environment. Agents have to agree on which correspondences between their ontologies are mutually acceptable. As the rationale behind the preferences of each agent may well be private, one cannot always expect agents to disclose their strategy or rationale for communicating. This prevents the use of a centralised mediator or facilitator which could reconcile the ontological differences. The use of argumentation allows two agents to iteratively explore candidate correspondences within a matching process, through a series of proposals and counter-proposals, i.e., arguments. Thus, two agents can reason over the acceptability of these correspondences without explicitly disclosing the rationale for preferring one type of correspondence over another. In this chapter we present an overview of the approaches for alignment agreement based on argumentation.

## 1 Introduction

The problem of dynamic reconciliation of vocabularies, or *ontologies*, used by agents during interactions has recently received significant attention, motivated by the growing adoption of service-oriented and distributed computing. In such scenarios, agents are situated in open environments and may encounter unknown agents offering new services due to changes in a user’s context or goal. These multi-agent systems are, by nature, distributed and heterogeneous, and as such, ontologies play a fundamental role in formalising the concepts that agents perceive, share, or encounter. However, as the heterogeneity that permeates these environments increases, fewer assumptions on the vocabulary and content of these ontologies can be made, hindering seamless interactions between the agents. Thus, mechanisms that can dynamically and autonomously reconcile the differences between ontologies are essential if agents are to communicate within such open and evolving environments.
Early systems avoided the problem of ontological heterogeneity by relying on the existence of a shared ontology, or simply assuming that a canonical set of ontology correspondences (possibly defined at design time) could be used to resolve ontological mismatches. However, such assumptions work only when the environment is (semi-)closed and carefully managed, and no longer hold in open environments where a plethora of ontologies exist. Moreover, the assumption of a common ontology forces an agent to comply with a fixed, but highly constrained view of the world, with respect to a set of predefined tasks, and, as a consequence, to abandon its own world view, which may have evolved due to interactions with other agents [8].

To facilitate communication, two agents first need to establish a set of correspondences (or an *alignment*) between their respective ontologies. The reconciliation of heterogeneous ontologies has been investigated at length by research efforts in *ontology matching* [15], which tries to determine suitable correspondences between two ontologies. However, the increased availability of ontology matching mechanisms means that a plethora of different correspondence sets can be constructed between two ontologies, depending on the approach used. In addition, the majority of traditional matching approaches cannot be easily utilised as part of dynamic interaction protocols, since they either require human intervention or they align the ontologies at design time. Even when alignments are pre-computed and stored within some alignment library, the selection of a possible correspondence that would be mutually acceptable to two transacting agents can be problematic, as the choice of correspondences can be highly dependent on the current task or available knowledge. For example, an agent may prefer correspondences which have been approved by its own institution, while another may prefer correspondences designed for the task at hand; these preferences may not be easy to reconcile. Hence, some correspondences may be preferable to some agents, but unsuitable or untrustworthy to others. In addition, it may not always be desirable for an agent to disclose a preference for a given type of correspondence, as this may reveal its goal, and thus compromise its ability to negotiate strategically with other agents within a competitive environment. Thus it is not always possible to utilise a collaborative approach, or exploit the use of a third-party mediator, to determine a mutually acceptable set of correspondences.

The agreement on a mutually acceptable alignment is an important problem that therefore arises when different parties need to reconcile private, yet potentially conflicting, preferences over candidate correspondences. Such an agreement can be achieved through a negotiation process whereby agents iteratively exchange proposals and counter-proposals [28, 20] until some consensus is reached. Argumentation can be seen as a qualitative negotiation model based on the construction and comparison of arguments [12, 29, 3], either supporting or refuting a set of possible propositions. Thus, by considering these propositions as correspondences (with justifications that support their use), agents can strategically argue in favour of (or against) possible correspondences given their individual strategies or preferences. This chapter presents an overview of the approaches for alignment agreement based on argumentation.
The different approaches are presented following two scenarios. In the first one, agents with different preferences need to agree on the alignment of their ontologies in order to communicate with each other. In the second scenario, specialised matcher agents rely on different matching approaches and argue over their individual results in order to obtain a consensual alignment.

The remainder of this chapter is structured as follows. First, we provide the basic definitions of ontology matching and argumentation frameworks (§2). Next, two argumentation frameworks for alignment agreement are introduced (§3). The different proposals on argumentation for alignment agreement are then presented (§4). The limitations and challenges in this domain are discussed (§5). Finally, related work (§6) and final remarks (§7) are presented.

## 2 Foundations: Alignment and Argumentation Frameworks

### 2.1 Ontology matching

An **ontology** typically provides a vocabulary describing a domain of interest and a specification of the meaning of terms in that vocabulary. As different agents within an open multi-agent system may be developed independently, they may commit to different ontologies to model the same domain. Whilst these different ontologies may be similar, they may differ in granularity or detail, use different representations, or model the concepts, properties and axioms in different ways.

In order to illustrate the matching problem, let us consider an e-Commerce marketplace, where two agents, a *buyer* and a *seller*, need to negotiate the price of a digital camera. Before starting the negotiation, they need to agree on the vocabulary to be used for exchanging the messages. They use the ontologies $o$ and $o'$, respectively (Figure 1). These ontologies contain subsumption statements (e.g., DigitalCamera $\sqsubseteq$ Product), property specifications (e.g., price domain Product) and instance descriptions (e.g., ThisCamera price 250$).

**Ontology matching** is the task of finding correspondences between ontologies. Correspondences express relationships supposed to hold between entities in ontologies, for instance, that an Electronic in one ontology is the same as a Product in another one, or that DigitalCamera in an ontology is a subclass of CameraPhoto in another one. Of these two example correspondences, the first expresses an equivalence, while the second is a subsumption correspondence. A set of correspondences between two ontologies is called an alignment. An alignment may be used, for instance, to generate query expressions that automatically translate instances of these ontologies under an integrated ontology, or to translate queries with respect to one ontology into queries with respect to the other.

Fig. 1 Fragments of ontologies $o$ and $o'$ with alignment $A$.

Matching determines an alignment $A'$ for a pair of ontologies $o$ and $o'$. There are other parameters that can extend the definition of the matching process, namely: $(i)$ the use of an input alignment $A$, which is to be completed by the process; $(ii)$ the matching parameters, for instance, weights, thresholds, etc.; and $(iii)$ external resources used by the matching process, for instance, common knowledge and domain-specific thesauri.

Fig. 2 The ontology matching process (from [15]).

Each of the elements featured in this definition can have specific characteristics which influence the difficulty of the matching task.
As depicted in Figure 2, the matching process receives as input three main parameters: the two ontologies to be matched ($o$ and $o'$) and the input alignment ($A$). The input ontologies can be characterized by the languages in which they are described (e.g., OWL-Lite, OWL-DL, OWL-Full), their size (number of concepts, properties and instances) and their complexity, which indicates how deeply the hierarchy is structured and how densely the ontological entities are interconnected. Other properties, such as consistency, correctness and completeness, are also used for characterizing the input ontologies. The input alignment ($A$) is mainly characterized by its multiplicity (or cardinality, e.g., how many entities of one ontology can correspond to one entity of the other) and its coverage in relation to the ontologies to be matched. In a simple scenario, the input alignment is empty. Regarding the parameters, some systems take advantage of external resources, such as WordNet, sets of morphological rules, or previous general-purpose alignments (Yahoo and Google catalogs, for instance).

Different approaches to the problem of ontology matching have emerged from the literature [15]. The main distinction between them is due to the type of knowledge encoded within each ontology, and the way it is utilized when identifying correspondences between features or structures within the ontologies. Terminological methods lexically compare strings (tokens or n-grams) used in naming entities (or in the labels and comments concerning entities), whereas semantic methods utilise model-theoretic semantics to determine whether or not a correspondence exists between two entities. Approaches may consider the internal ontological structure, such as the range of their properties (attributes and relations), their cardinality, and the transitivity and/or symmetry of their properties, or alternatively the external ontological structure, such as the position of the two entities within the ontological hierarchy. The instances (or extensions) of classes could also be compared using extension-based approaches. In addition, many ontology matching systems do not rely on a single approach, but combine several.

The output alignment $A'$ is a set of correspondences between $o$ and $o'$. Generally, correspondences express a relation $r$ between ontology entities $e$ and $e'$ with a confidence measure $n$. These are abstractly defined in [15]. In this chapter, we will restrict the discussion to simple correspondences.

**Definition 1 (Simple correspondence).** Given two ontologies, $o$ and $o'$, a simple correspondence is a quintuple:
$$\langle id, e, e', r, n \rangle,$$
such that:

- $id$ is a URI identifying the given correspondence;
- $e$ and $e'$ are named ontology entities, i.e., named classes, properties, or instances;
- $r$ is a relation among equivalence ($\equiv$), more general ($\sqsupseteq$), more specific ($\sqsubseteq$), and disjointness ($\perp$);
- $n$ is a number in the $[0, 1]$ range.

The correspondence $\langle id, e, e', r, n \rangle$ asserts that the relation $r$ holds between the ontology entities $e$ and $e'$ with confidence $n$. The higher the confidence value, the higher the likelihood that the relation holds. Alignments may have different cardinalities: 1:1 (one-to-one), 1:m (one-to-many), n:1 (many-to-one) or n:m (many-to-many). An alignment is a 1:1 alignment if and only if no two different entities in one of the ontologies are matched to the same entity in the other ontology.
Mechanisms that facilitate the construction of alignments require access to both ontologies. Whilst it may be desirable to embed such mechanisms within agents that operate in transparent and collaborative environments, exposing one’s ontology may not always be desirable in competitive or adversarial environments, as this may allow other agents to infer, and exploit, this knowledge in subsequent negotiations. In addition, creating alignments can be costly, and thus the ability to cache or save previously generated alignments (possibly generated by trusted third parties) may be desirable. Thus, agents may rely on an external **alignment service**. For example, the *Alignment server*, built on the Alignment API [13], provides functionality to facilitate ontology matching, as well as storing and retrieving alignments. In addition, it can provide assistance to agents when attempting to determine relationships between their ontologies, so that they can understand and interpret each other’s messages. An agent plug-in has been developed to allow agents based on the JADE/FIPA ACL (*Agent Communication Language*) to interact with the server in order to retrieve alignments. Such a service can provide alignments over which the agents will argue in order to choose the most suitable correspondences.

Alignments, and the correspondences within such alignments, can be better qualified through the inclusion of *metadata*, which may refer to the provenance and origin of alignments, confidence ratings, and the original purposes for which they were created. Other metadata may also include any manual (human-based) checks or endorsements provided by some authority. This type of metadata is used, for instance, by *Bioportal* [26], an alternative alignment web service, where users can select correspondences based on provenance-based alignment metadata.

### 2.2 Argumentation frameworks

Argumentation is a decentralised, peer-based negotiation model for reasoning based on the construction and comparison of arguments. The central notion in argumentation systems is the notion of *acceptability*, and different argumentation frameworks embody different notions of acceptability. The classical argumentation framework (AF) was proposed by Dung [12], who defines it as follows:

**Definition 2 (Argumentation Framework [12]).** An Argumentation Framework (AF) is a pair $\langle \mathcal{A}, \prec \rangle$, such that $\mathcal{A}$ is a set of arguments and $\prec$ (attacks) is a binary relation on $\mathcal{A}$. $a \prec b$ means that the argument $a$ attacks the argument $b$. A set of arguments $S$ attacks an argument $b$ iff $b$ is attacked by an argument in $S$.

The key question about the framework is whether a given argument $a \in \mathcal{A}$ should be accepted or not. Dung proposes that an argument should be accepted only if every attack on it is attacked by an accepted argument. This notion then leads to the definition of acceptability (for an argument), admissibility (for a set of arguments) and preferred extension:

**Definition 3 (Acceptable argument [12]).** An argument $a \in \mathcal{A}$ is acceptable with respect to a set of arguments $S$, noted $\text{acceptable}(a, S)$, iff $\forall x \in \mathcal{A}, (x \prec a \longrightarrow \exists y \in S, y \prec x)$.
**Definition 4 (Conflict-free set [12]).** A set $S$ of arguments is conflict-free iff $\neg \exists x, y \in S, x \prec y$. A conflict-free set of arguments $S$ is admissible iff $\forall x \in S, \text{acceptable}(x, S)$.

**Definition 5 (Preferred extension [12]).** A set of arguments $S$ is a preferred extension iff it is a maximal (with respect to set inclusion) admissible set of $\mathcal{A}$.

Thus, a preferred extension represents a consistent position within an argumentation framework, which defends itself against all attacks and cannot be extended without raising conflicts.

In Dung’s framework, all arguments have equal strength, and therefore attacks always succeed. This is reasonable when dealing with deductive arguments, but in many domains, arguments may lack some coercive force: they provide reasons which may be more or less persuasive. For that purpose, preference-based argumentation has been designed [2]; it assigns preferences to arguments, so that preferred arguments successfully attack less preferred ones (but not vice versa). Bench-Capon [6] went one step further with the Value-based Argumentation Framework (VAF¹), which assigns to arguments the values they promote. Agents are distributed among different audiences which ascribe different preferences to such values. Hence, different audiences will have different preferences among the arguments and, similarly, successful attacks for an audience are those made by arguments of highest value to the audience.

**Definition 6 (Value-based AF [6]).** A Value-based Argumentation Framework (VAF) is a quintuple $\langle \mathcal{A}, \prec, \mathcal{V}, v, \succeq \rangle$ such that $\langle \mathcal{A}, \prec \rangle$ is an argumentation framework, $\mathcal{V}$ is a nonempty set of values, $v : \mathcal{A} \rightarrow \mathcal{V}$, and $\succeq$ is the preference relation over $\mathcal{V}$ ($v_1 \succeq v_2$ means that, in this framework, $v_1$ is preferred over $v_2$). To each audience $\alpha$ corresponds a value-based argumentation framework $\text{VAF}_\alpha$ such that $v_1 \succeq_\alpha v_2$ states that audience $\alpha$ prefers $v_1$ over $v_2$.

Attacks are then deemed successful based on the preference ordering on the arguments’ values. This leads to re-defining the notions seen previously:

---
¹ We describe here as VAF what [6] calls an audience-specific value-based argumentation framework, but the result is equivalent.

**Definition 7 (Successful attack [6]).** In a value-based argumentation framework $\langle \mathcal{A}, \prec, \mathcal{V}, v, \succeq \rangle$, an argument $a \in \mathcal{A}$ defeats (or successfully attacks) an argument $b \in \mathcal{A}$, noted $a \rhd b$, iff $a \prec b$ and $v(b) \not\succeq v(a)$.

**Definition 8 (Conflict-free set [6]).** A set $S$ of arguments is conflict-free for an audience $\alpha$ iff $\forall x, y \in S, \neg(x \prec y) \lor v(y) \succeq_\alpha v(x)$.

Acceptable arguments and preferred extensions are defined as before. In order to determine preferred extensions with respect to a value ordering promoted by distinct audiences, *objective* and *subjective* acceptance are defined. An argument is *subjectively acceptable* if and only if it appears in some preferred extension for some audience. An argument is *objectively acceptable* if and only if it appears in every preferred extension for every audience.
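For small frameworks, Definitions 3–5 can be applied by brute force; the following sketch (our own, exponential in the number of arguments and intended only for toy examples such as those later in this chapter) enumerates the preferred extensions of a Dung framework.

```python
from itertools import combinations

def preferred_extensions(args, attacks):
    """args: set of arguments; attacks: set of (a, b) pairs, a attacks b.
    Returns the maximal admissible sets (Definitions 3-5), by brute force."""
    def acceptable(a, S):
        # every attacker of a is attacked by some member of S
        return all(any((y, x) in attacks for y in S)
                   for x in args if (x, a) in attacks)
    admissible = [set(S) for r in range(len(args) + 1)
                  for S in combinations(sorted(args), r)
                  if not any((x, y) in attacks for x in S for y in S)
                  and all(acceptable(a, S) for a in S)]
    return [S for S in admissible
            if not any(S < T for T in admissible)]  # maximal by inclusion

# Two arguments attacking each other: two preferred extensions, {a} and {b}.
print(preferred_extensions({"a", "b"}, {("a", "b"), ("b", "a")}))
```

For a VAF and a given audience, the same routine applies once `attacks` is replaced by the audience's successful-attack relation of Definition 7.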
## 3 Argumentation Frameworks for Alignment Agreement

In alignment agreement, arguments can be seen as positions that support or reject correspondences. Such arguments interact following the notion of attack, and are selected according to the notion of acceptability. Argumentation frameworks for alignment agreement redefine the notion of acceptability, taking into account the confidence of the correspondences and the number of agents agreeing on a correspondence. In this section we first introduce the general definition of argument, which will be extended according to the scenario where argumentation is used (§4), and then we present the argumentation frameworks.

### 3.1 Arguments on correspondences

The different approaches presented below all share the same notion of correspondence argument, originally defined in [22]. The general definition of correspondence argument is as follows:

**Definition 9 (Argument [22]).** An argument $a \in \mathcal{A}$ is a tuple $a = \langle c, v, h \rangle$, such that $c$ is a correspondence $\langle e, e', r, n \rangle$; $v \in \mathcal{V}$ is the value of the argument; and $h$ is one of $\{+, -\}$ depending on whether the argument is that $c$ does or does not hold.

In this definition, the set of considered values may be based on: the types of matching techniques that agents tend to prefer; the type of targeted applications; information about various levels of endorsement of these correspondences; and whether or not they have been checked manually. Thus, any type of information which can be associated with correspondences (see §2.1) may be used. For example, an alignment may be generated for the purpose of information retrieval; however, this alignment may not be suitable for an agent performing a different task requiring more precision. This agent may therefore prefer the correspondences generated by a different agent for web service composition. Likewise, another agent may prefer human-curated alignments rather than alignments generated on the fly.

Arguments interact based on the notion of attack relation:

**Definition 10 (Attack [22]).** An argument $\langle c, v, h \rangle \in \mathcal{A}$ attacks another argument $\langle c', v', h' \rangle \in \mathcal{A}$ iff $c = c'$ and $h \neq h'$.

Therefore, if $a = \langle c, v_1, + \rangle$ and $b = \langle c, v_2, - \rangle$, then $a \prec b$ and vice versa ($b$ is the counter-argument of $a$, and $a$ is the counter-argument of $b$).

### 3.2 Strength-based argumentation framework (SVAF)

Bench-Capon’s framework acknowledges the importance of preferences when considering arguments. However, within the specific context of ontology matching, an objection can still be raised regarding the lack of complete mechanisms for handling persuasiveness. Indeed, many ontology matchers generate correspondences with a strength that reflects the confidence they have in the similarity between the two entities. These confidence levels are usually derived from similarity assessments made during the matching process, e.g., from the edit distance measure between labels, or the overlap measure between instance sets, and thus are often based on objective grounds. In order to represent arguments with *strength*, reflecting this confidence in a correspondence, [34] proposed the *Strength-based Argumentation Framework (SVAF)*, extending Bench-Capon’s *VAF* by redefining the notion of acceptability.
**Definition 11 (SVAF [34]).** A strength-based argumentation framework (SVAF) is a sextuple $\langle \mathcal{A}, \prec, \mathcal{V}, v, \succeq, s \rangle$ such that $\langle \mathcal{A}, \prec, \mathcal{V}, v, \succeq \rangle$ is a value-based argumentation framework and $s : \mathcal{A} \rightarrow [0, 1]$ represents the strength of the argument.

As in value-based argumentation frameworks, each audience $\alpha$ is associated with its own framework, in which only the preference relation $\succeq_\alpha$ differs. In order to accommodate the notion of *strength*, the notion of *successful attack* is extended:

**Definition 12 (Successful attack [34]).** In a strength-based argumentation framework $\langle \mathcal{A}, \prec, \mathcal{V}, v, \succeq, s \rangle$, an argument $a \in \mathcal{A}$ *successfully attacks* (or *defeats*, noted $a \rhd b$) an argument $b \in \mathcal{A}$ iff
$$a \prec b \land \big(s(a) > s(b) \lor (s(a) = s(b) \land v(a) \succeq v(b))\big).$$

### 3.3 Voting-based argumentation framework (VVAF)

The frameworks described so far assume that candidate correspondences between two entities may differ due to the approaches used to construct them, and thus these argumentation frameworks provide different mechanisms to identify correspondences generated using approaches acceptable to both agents. However, different alignment generators may often utilise the same approach for some correspondences, and thus the approach used for that correspondence may be significant. Some large-scale experiments involving several matching tools (e.g., the OAEI 2006 Food track campaign [14]) have demonstrated that the more matchers return a given correspondence, the more likely it is to be valid. Thus, the SVAF was adapted and extended in [19] to take into account the level of consensus between the sources of the alignments, by introducing the notions of support and voting into the definition of successful attacks. Support enables arguments to be counted as defenders or co-attackers during an attack:

**Definition 13 (VVAF [19]).** A voting-based argumentation framework (VVAF) is a septuple $\langle \mathcal{A}, \prec, \mathcal{S}, \mathcal{V}, v, \succeq, s \rangle$ such that $\langle \mathcal{A}, \prec, \mathcal{V}, v, \succeq, s \rangle$ is a SVAF, and $\mathcal{S}$ is a (reflexive) binary relation on $\mathcal{A}$, representing the support relation between arguments. $\mathcal{S}(x, a)$ means that the argument $x$ supports the argument $a$ (i.e., they have the same value of $h$). $\mathcal{S}$ and $\prec$ are disjoint relations.

A simple voting mechanism (e.g., plurality voting) can be used to determine the success of a given attack, based upon the number of supporters of the arguments involved.

**Definition 14 (Successful attack [19]).** In a VVAF $\langle \mathcal{A}, \prec, \mathcal{S}, \mathcal{V}, v, \succeq, s \rangle$, an argument $a \in \mathcal{A}$ successfully attacks (or defeats) an argument $b \in \mathcal{A}$ (noted $a \rhd b$) iff
$$a \prec b \land \big(\lvert\{x \mid \mathcal{S}(x, a)\}\rvert > \lvert\{y \mid \mathcal{S}(y, b)\}\rvert \lor (\lvert\{x \mid \mathcal{S}(x, a)\}\rvert = \lvert\{y \mid \mathcal{S}(y, b)\}\rvert \land v(a) \succeq v(b))\big).$$

This voting mechanism is based on simple counting. As some ontology matchers include confidence values with correspondences, a voting mechanism could instead exploit these confidence values, for example by simply summing the confidence values of the supporting arguments. However, this relies on the questionable assumption that all values are equally scaled (as is the case with the SVAF). In [19], a voting framework that normalised these confidence values (i.e., strengths) was evaluated, but the results were inconclusive. Another possibility would be to rely on a deeper justification for correspondences and to have only one vote for each justification. Hence, if several matchers considered two concepts to be equivalent because WordNet considers their identifiers as synonyms, this would be counted only once.
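A direct transcription of these defeat conditions makes the tie-breaking order explicit: strength (or vote count) decides first, value preference second. The sketch below is our own illustration (arguments as plain dictionaries, with `prefers` standing in for an audience's value ordering); it is not code from [34] or [19].

```python
# Arguments as dicts: c = correspondence, v = value, s = strength,
# h = '+' or '-'. prefers(v1, v2) encodes the audience's value ordering.

def attacks(a, b):
    # Definition 10: same correspondence, opposite signs
    return a["c"] == b["c"] and a["h"] != b["h"]

def defeats_svaf(a, b, prefers):
    # Definition 12: strength decides first, value preference breaks ties
    return attacks(a, b) and (
        a["s"] > b["s"] or (a["s"] == b["s"] and prefers(a["v"], b["v"])))

def defeats_vvaf(a, b, args, supports, prefers):
    # Definition 14: supporter counts decide, value preference breaks ties
    sa = sum(1 for x in args if supports(x, a))
    sb = sum(1 for y in args if supports(y, b))
    return attacks(a, b) and (
        sa > sb or (sa == sb and prefers(a["v"], b["v"])))
```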
## 4 Argumentation over Alignments

The use of argumentation has been exploited in the two different scenarios presented below. In the first, agents attempt to construct mutually acceptable alignments, based on existing correspondences, in order to facilitate communication; each agent has its own alignment preferences (which may be task-specific). The agents therefore argue directly over candidate correspondences provided by an alignment service, with each agent specifying an ordered preference over correspondence types and confidence thresholds. The second scenario focuses on the consensual construction of alignments involving several agents, each of which specialises in constructing correspondences using different approaches. These matching agents generate candidate correspondences and attempt to combine them to produce a new alignment through argumentation. Thus, whilst the first scenario utilises argumentation as a negotiating mechanism to find a mutually acceptable alignment between transacting agents, the latter scenario could be viewed as offering a service for negotiating alignments.

### 4.1 Argumentation over alignments for communication in multi-agent systems

#### 4.1.1 Meaning-based argumentation

Laera et al. proposed the meaning-based argumentation approach [22, 23, 21] to allow agents to propose, attack, and counter-propose candidate correspondences according to the agents’ preferences, in order to identify mutually acceptable alignments. Their approach utilises Bench-Capon’s VAF [6] to support the specification of preferences over correspondence types (as discussed in §2.1) within each argument. Thus, when faced with different candidate correspondences whose types differ, each agent’s preference ordering can be considered when determining whether an argument for one correspondence successfully attacks another. Different audiences therefore represent different preference orderings over the categories of arguments (identified in the context of ontology matching). Each agent is defined as follows:

**Definition 15 (Agent).** An agent $Ag_i$ is characterised by a tuple $\langle O_i, F, \epsilon_i \rangle$, such that $O_i$ is the ontology used by the agent, $F$ is its (value-based) argumentation framework, and $\epsilon_i$ is the private threshold value.

Candidate correspondences are retrieved from an alignment service (see §2.1), which also provides the justifications $G$ (described below) for each correspondence, based on the approach used to construct the correspondence. The agents use this information to exchange arguments supplying the reasons for their choices. In addition, as these grounds include a confidence value associated with each correspondence, each agent utilises a private threshold value $\epsilon$ to filter out correspondences with low confidence values². This threshold, together with the pre-ordering of preferences, is used to generate arguments for and against a correspondence.
The approach extends the notion of argument presented in §3.1:

**Definition 16 (Argument [22]).** An argument is a triple $\langle G, c, h \rangle$, where $c$ is a correspondence $\langle e, e', r, n \rangle$; $G$ is the grounds justifying a prima facie belief that the correspondence does, or does not, hold; and $h$ is one of $\{+, -\}$ depending on whether the argument is that $c$ does or does not hold.

---
² The use of confidence profiles has since been explored to specify correspondence-type-specific thresholds, resulting in agreement over a greater diversity of correspondences, and consequently more inclusive alignments [9].

The grounds $G$ justifying a correspondence between two entities are based on the five categories of correspondence types (as discussed in §2.1), namely Semantic (M), Internal Structural (IS), External Structural (ES), Terminological (T), and Extensional (E). These categories are used as the set of values $\mathcal{V}$, i.e., $\mathcal{V} = \{M, IS, ES, T, E\}$, which is then used to construct an agent’s partially ordered preferences, based on the agent’s ontology and task. Thus, an agent may specify a preference for terminological correspondences over semantic correspondences if the ontology it uses is mainly taxonomic, or vice versa if the ontology is semantically rich. Preferences may also be based on the type of task being performed; extensional correspondences may be preferred when queries are about instances that are frequently shared. The pre-ordering of preferences $\succeq$ for each agent $Ag_i$ is over $\mathcal{V}$, corresponding to the specification of an audience. Specifically, for each candidate correspondence $c$, if there exist one or more justifications $G$ for $c$ that correspond to the highest preferences of $Ag_i$ (with respect to the pre-ordering $\succeq$), and $n$ is greater than its private threshold $\epsilon$, the agent $Ag_i$ generates arguments $x = (G, c, +)$. If not, the agent generates arguments against: $x = (G, c, -)$. The arguments interact based on the notion of attack, as specified in §3.1.

The argumentation process takes four main steps: (i) each agent $Ag_i$ constructs an argumentation framework $VAF_i$ by specifying the set of arguments and the attacks between them; (ii) each agent $Ag_i$ extends its individual framework $VAF_i$ with the argument sets of all the other agents and then extends the attack relations by computing the attacks between the arguments present in its framework and the other arguments; (iii) for each $VAF_i$, the arguments which are undefeated by attacks from other arguments are determined, given a value ordering – the global view is considered by taking the union of these preferred extensions for each audience; and (iv) the arguments in every preferred extension of every audience are considered – the correspondences that have only arguments for them are included in a set called the agreed alignments, the correspondences that have only arguments against them are rejected, and the correspondences which are in some preferred extension of every audience are part of the set called the agreeable alignments.

The dialogue between agents consists of exchanging sets of arguments, and the protocol used to evaluate the acceptability of a single correspondence is based on a set of speech acts ($Support$, $Contest$, $Withdraw$).
For instance, when exchanging arguments, an agent sends $Support(c, x_1)$ for supporting a correspondence $c$ through the argument $x_1 = (G, c, +)$, or $Contest(c, x_2)$ for rejecting $c$ through $x_2 = (G, c, -)$. If the agents do not have any arguments or counter-arguments to propose, then they send $Withdraw(c)$ and the dialogue terminates.

To illustrate this approach, consider the two agents buyer $b$ and seller $s$, using the ontologies in Figure 1. First, the agents access the alignment service, which returns the correspondences with the respective justifications:

- $m_1$: $(zoom_o, zoom_{o'}, \equiv, 1.0)$, with $G = \{T, ES\}$
- $m_2$: $(Battery_o, Battery_{o'}, \equiv, 1.0)$, with $G = \{T\}$
- $m_3$: $(MemoryCard_o, Memory_{o'}, \equiv, 0.54)$, with $G = \{T\}$
- $m_4$: $(brand_o, brandName_{o'}, \equiv, 0.55)$, with $G = \{T, ES\}$
- $m_5$: $(price_o, price_{o'}, \equiv, 1.0)$, with $G = \{T, ES\}$
- $m_6$: $(CameraPhoto_o, DigitalCamera_{o'}, \equiv, 1.0)$, with $G = \{ES\}$
- $m_7$: $(resolution_o, pixels_{o'}, \equiv, 1.0)$, with $G = \{ES\}$

Agent $b$ selects the audience $R_1$, which prefers terminology to external structure ($T \succ_{R_1} ES$), while $s$ selects the audience $R_2$, which prefers external structure to terminology ($ES \succ_{R_2} T$). All correspondences have a degree of confidence $n$ that is above the threshold of each agent, and thus all of them are taken into account. Both agents accept $m_1$, $m_4$ and $m_5$; $b$ accepts $m_2$ and $m_3$, while $s$ accepts $m_6$ and $m_7$. Table 1 shows the arguments and corresponding attacks.

**Table 1. Arguments and attacks.**

| id | argument | attack | agent |
|----|-------------------|--------|-------|
| A | $(T, m_1, +)$ | | $b, s$ |
| B | $(ES, m_1, +)$ | | $b, s$ |
| C | $(T, m_2, +)$ | $D$ | $b$ |
| D | $(ES, m_2, -)$ | $C$ | $s$ |
| E | $(T, m_3, +)$ | $F$ | $b$ |
| F | $(ES, m_3, -)$ | $E$ | $s$ |
| G | $(T, m_4, +)$ | | $b, s$ |
| H | $(ES, m_4, +)$ | | $b, s$ |
| I | $(T, m_5, +)$ | | $b, s$ |
| J | $(ES, m_5, +)$ | | $b, s$ |
| L | $(ES, m_6, +)$ | $M$ | $s$ |
| M | $(T, m_6, -)$ | $L$ | $b$ |
| N | $(ES, m_7, +)$ | $O$ | $s$ |
| O | $(T, m_7, -)$ | $N$ | $b$ |

The arguments $A$, $B$, $G$, $H$, $I$, and $J$ are not attacked and are therefore acceptable to both agents (they form the *agreed alignment*). The arguments $C$ and $D$ attack each other and are each acceptable only in the corresponding audience, i.e., $C$ is acceptable for the audience of $b$ and $D$ is acceptable for the audience of $s$. The same occurs for the arguments $E$, $F$, $L$, $M$, $N$, and $O$. The correspondences in such arguments form the *agreeable alignments*.

#### 4.1.2 The approach by Trojahn and colleagues

In order to provide translations between messages in agent communication, [33] formally defines an alignment as a set of correspondences between *queries* over ontologies. The alignment is obtained by specialised matcher agents that argue in order to agree on a globally acceptable alignment. The set of acceptable arguments is then represented as conjunctive queries in OWL-DL [18]. A conjunctive query has the form $\bigwedge_i (P_i(s_i))$, where each $P_i(s_i)$ represents a correspondence. For instance, $(CameraPhoto_o, DigitalCamera_{o'}, \equiv, 1.0)$ is represented as $Q(x) : CameraPhoto(x) \equiv DigitalCamera(x)$.
Consider the example where the agents “buyer $b$” and “seller $s$” interact to agree on the price of a digital camera, using the ontologies $o$ and $o'$ of Figure 1, respectively. Before the agents can agree on the price, they need to agree on the terms used to communicate with each other. This task can be delegated to a matcher agent $m$, which receives the two ontologies and sends them to an argumentation module. This module, made up of different specialised agents $a_1, \ldots, a_n$ (which can be distributed on the web), receives the ontologies and returns a set of DL queries representing the acceptable correspondences. These interactions are loosely based on the Contract Net Interaction Protocol [16]. Table 2 describes the steps of the interaction between the agents.

**Table 2** Interaction steps [33].

| Step | Description |
|------|-------------|
| 1 | Matcher agent $m$ requests the ontologies to be matched from agents $b$ and $s$ |
| 2 | Ontologies are sent from $m$ to the argumentation module |
| 3 | Matchers $a_1, \ldots, a_n$ apply their algorithms |
| 4 | The matchers $a_1, \ldots, a_n$ communicate with each other to exchange their arguments |
| 5 | Preferred extensions of each $a_i$ are generated |
| 6 | Objectively acceptable arguments $o$ are computed |
| 7 | Correspondences in $o$ are represented as conjunctive queries |
| 8 | Queries are sent to $m$ |
| 9 | Queries are sent from $m$ to $b$ and $s$ |
| 10 | Agents $b$ and $s$ use the queries to communicate with each other |

In fact, only one of the agents, namely the one responsible for the translations, needs to receive the DL queries. We consider that the set of objectively acceptable arguments contains the correspondences shown in Figure 3, with the respective queries.

**Fig. 3** Conjunctive queries.

Figure 4 shows an AUML³ interaction diagram with the messages exchanged between the agents $b$ and $s$ during the negotiation of the price of the camera. The agents use the queries to search for correspondences between the messages sent by each other and the entities in the corresponding ontologies. In the example, the agent $b$ sends a message to the agent $s$, using its vocabulary. Then, the agent $s$ converts the message, using the DL queries.

---
³ AUML – Agent Unified Modelling Language [17].

#### 4.1.3 Reducing the argumentation space through modularization

Doran et al. [11] utilised modularization to identify the ontological descriptions relevant to the communication, and consequently reduce the number of correspondences necessary to form the alignment. The use of argumentation can be computationally costly, as the complexity can reach PSPACE-completeness in some cases. Thus, by reducing the number of arguments, the time required for generating the alignments can be significantly reduced, even when taking into account the time necessary for the modularization process itself. In an empirical study, the authors found that the use of modularization significantly reduced the average number of correspondences presented to the argumentation framework, and hence the size of the search space – in some cases by up to 97%, across a number of different ontology pairs. They also noted that three patterns emerged: i) where no reduction in size occurred (in 4.84% of cases within the study); ii) where the number of correspondences was reduced (55.14%); and iii) where modules of size zero were found (40.02%), corresponding to failure scenarios, i.e.,
Figure 4 shows an AUML\(^3\) interaction diagram with the messages exchanged between the agents $b$ and $s$ during the negotiation of the price of the camera. The agents use the queries to search for correspondences between the messages sent by each other and the entities in the corresponding ontologies. In the example, the agent $b$ sends a message to the agent $s$, using its vocabulary. Then, the agent $s$ converts the message, using the DL queries.

\(^3\) AUML – Agent Unified Modelling Language [17].

4.1.3 Reducing the argumentation space through modularization

Doran et al. [11] utilised modularization to identify the ontological descriptions relevant to the communication, and consequently reduce the number of correspondences necessary to form the alignment. The use of argumentation can be computationally costly, as the complexity can reach PSPACE-completeness in some cases. Thus, by reducing the number of arguments, the time required for generating the alignments can be significantly reduced, even when taking into account the time necessary for the modularization process itself. In an empirical study, the authors found that the use of modularization significantly reduced the average number of correspondences presented to the argumentation framework, and hence the size of the search space – in some cases by up to 97%, across a number of different ontology pairs. They also noted that three patterns emerged: i) where no reduction in size occurred (in 4.84% of cases within the study); ii) where the number of correspondences was reduced (55.14%); and iii) where modules of size zero were found (40.02%), corresponding to failure scenarios, i.e., where the subsequent transaction would fail due to insufficient alignment between the ontologies.

An ontology modularization technique extracts a consistent module $M$ from an ontology $O$ that covers a specified signature $Sig(M)$, such that $Sig(M) \subseteq Sig(O)$. $M$ is the part of $O$ that is said to cover the elements defined by $Sig(M)$. The first agent engaging in the communication specifies the signature $Sig(M)$ over its ontology $O$, containing the concepts relevant to its task. The resulting module contains the entities considered to be relevant for this task, including the subclasses and properties of the concepts in $Sig(M)$. The step-by-step interaction between two agents, following an argumentation based on modularization, is presented in Table 3.

**Table 3** Ontology modularization and argumentation for alignment agreement [10].

| Step | Description |
|------|-------------|
| 1 | $Ag_1$ asks a query, $query(A \in Sig(O))$, to $Ag_2$. |
| 2 | $Ag_2$ does not understand the query ($A \notin Sig(O')$) and informs $Ag_1$ that they need to use a server. |
| 3 | $Ag_1$ produces an ontology module, $M = om(O, Sig(A))$, to cover the concepts required for its task. |
| 4 | $Ag_1$ and $Ag_2$ invoke the server. $Ag_1$ sends its ontology, $O$, and the signature of $M$, $Sig(M)$. |
| 5 | The alignment service aligns the two ontologies and filters the correspondences according to $M$. Only those featuring an entity from $M$ are returned to both agents. |
| 6 | The agents begin the process of argumentation, with each agent generating arguments and counter-arguments. |
| 7 | The iteration terminates when the agents agree on a set of correspondences. |
| 8 | $Ag_1$ queries $Ag_2$ again, using the agreed correspondences, $query(A \in Sig(O) \land B \in Sig(O'))$, where $A$ and $B$ are aligned. |
| 9 | $Ag_2$ answers the query using the agreed correspondences. |

For communicating, only the initiating agent ($Ag_1$) is aware of its task and, consequently, of which concepts are relevant to this task (Steps 1 and 2). These concepts will be included in $Sig(M)$, the signature of the resulting ontology module (Step 3). The set of candidate correspondences (Step 4) is filtered (Step 5) according to the filtering function $filter()$, which returns the subset $Z$ of correspondences whose entities $e$ occur in $Sig(M)$. The set $Z$ is then used within the argumentation process. Modularization is therefore used to filter the correspondences that are passed to the argumentation process. The agents then argue (Steps 6–7) to reach an acceptable alignment. The combination of argumentation and modularization reduces the cost of reaching an agreement over an alignment by reducing the size of the set of correspondences argued over, and hence the number of arguments required. This greatly reduces the time consumed, at a minimal expense in accuracy. Following the example of the buyer and seller agents, the buyer agent knows which concepts will be used for communicating, and a module of the ontology $o$ is extracted containing these concepts (i.e., $CameraPhoto$, $resolution$, $zoom$, and $price$). The buyer agent then filters the correspondences in order to retrieve the subset containing only these concepts.
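A minimal sketch of the filtering step (Step 5), assuming correspondences are stored as 4-tuples and that signature membership can be decided by simple lookup; the names `Correspondence` and `filter_correspondences` are illustrative, not from [10]:

```python
from typing import NamedTuple, Set, List

class Correspondence(NamedTuple):
    e: str         # entity from ontology O
    e_prime: str   # entity from ontology O'
    relation: str  # e.g. "=" (equivalence)
    n: float       # confidence in [0, 1]

def filter_correspondences(candidates: List[Correspondence],
                           sig_m: Set[str]) -> List[Correspondence]:
    """Return the subset Z of candidate correspondences whose
    O-side entity occurs in the module signature Sig(M)."""
    return [c for c in candidates if c.e in sig_m]

# Buyer/seller example: only concepts relevant to the task survive.
sig_m = {"CameraPhoto", "resolution", "zoom", "price"}
candidates = [
    Correspondence("CameraPhoto", "DigitalCamera", "=", 1.0),
    Correspondence("Battery", "Battery", "=", 1.0),  # filtered out
    Correspondence("price", "price", "=", 1.0),
]
Z = filter_correspondences(candidates, sig_m)  # two correspondences remain
```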
### 4.2 Solving conflicts between matcher agents

In [34], alignments produced by different matchers are compared and agreed upon via an argumentation process. The matchers interact in order to exchange arguments, and the SVAF model (§3.2) is used to support the choice of the most acceptable of them. Each correspondence can be considered as an argument, because the choice of a correspondence may be a reason against the choice of another correspondence. Correspondences are represented as arguments, extending the notion of argument specified in §3.1:

**Definition 17 (Argument).** An argument $x \in \mathcal{A}$ is a tuple $x = \langle c, v, s, h \rangle$, such that $c$ is a correspondence $\langle e, e', r, n \rangle$; $v \in V$ is the value of the argument; $s$ is the strength of the argument, derived from the confidence $n$; and $h$ is one of $\{+, -\}$, depending on whether the argument claims that $c$ does or does not hold.

The matchers generate arguments representing their alignments following a *negative arguments as failure* strategy. It relies on the assumption that matchers return complete results: each possible pair of ontology entities that is not returned by the matcher is considered not to hold, and a negative argument is generated ($h = -$). The values $v$ in $V$ correspond to the different matching approaches, and each matcher $m$ has a preference ordering $\succeq_m$ over $V$ such that its preferred values are those it associates with its own arguments. For instance, consider $V = \{l, s, w\}$, i.e., the *lexical*, *structural* and *wordnet-based* approaches, respectively, and three matchers $m_l$, $m_s$ and $m_w$ using these approaches. The matcher $m_l$ has the preference order $l \succeq_{m_l} s \succeq_{m_l} w$. The basic idea is to obtain a consensus between different matchers, represented by different preferences between values. Arguments interact based on the notion of attack presented in §3.1. The argumentation process can be described as follows. First, each matcher generates its set of correspondences, using its specific approach, and the set of corresponding arguments is generated. Next, the matchers exchange their sets of arguments with each other – the dialogue between them consists of the exchange of individual arguments. When all matchers have received the sets of arguments of the others, they instantiate their SVAFs in order to generate their sets of acceptable correspondences. The consensual alignment contains the correspondences represented as arguments that appear in every set of acceptable arguments, for every specific audience (objectively acceptable arguments).
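Continuing the sketch above, Definition 17 and the negative-arguments-as-failure strategy can be rendered as follows; the assumption of completeness is modelled by enumerating the Cartesian product of entities, the constant `DEFAULT_NEG_CONFIDENCE = 0.5` mirrors the confidence used for negative arguments in the example below, and all names are illustrative:

```python
from dataclasses import dataclass
from itertools import product
from typing import List

@dataclass(frozen=True)
class Argument:
    c: Correspondence  # the correspondence argued about (see sketch above)
    v: str             # value, e.g. "l" (lexical) or "s" (structural)
    s: float           # strength, taken from the confidence n
    h: str             # "+" if the argument claims c holds, "-" otherwise

DEFAULT_NEG_CONFIDENCE = 0.5

def arguments_from_alignment(found: List[Correspondence], value: str,
                             entities_o: List[str],
                             entities_o_prime: List[str]) -> List[Argument]:
    """Negative arguments as failure: every entity pair the matcher
    did not return yields a negative argument with default confidence."""
    args = [Argument(c, value, c.n, "+") for c in found]
    matched = {(c.e, c.e_prime) for c in found}
    for e, e2 in product(entities_o, entities_o_prime):
        if (e, e2) not in matched:
            c = Correspondence(e, e2, "=", DEFAULT_NEG_CONFIDENCE)
            args.append(Argument(c, value, c.n, "-"))
    return args
```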
In order to illustrate this process, consider two matchers, $m_l$ (lexical) and $m_s$ (structural), trying to reach a consensus on the alignment between the ontologies in Figure 1. $m_l$ uses an edit distance measure to compute the similarity between labels of concepts and properties of the ontologies, while $m_s$ is based on the comparison of the direct super-classes of the classes, or of the classes of properties. Table 4 shows the correspondences and arguments generated by each matcher. The matchers generate complete alignments, i.e., if a correspondence is not found, an argument with $h = -$ is created; this includes correspondences that are not relevant to the task at hand. For the sake of brevity, we show only the arguments with $h = +$ and the corresponding counter-arguments (Table 5). We take 0.5 as the confidence $n$ for negative arguments ($h = -$). Considering $\mathcal{V} = \{l, s\}$, $m_l$ associates the value $l$ with its arguments, while $m_s$ generates arguments with value $s$. $m_l$ has the preference ordering $l \succ_{m_l} s$, while $m_s$ has the preference $s \succ_{m_s} l$.

**Table 4** Correspondences and arguments generated by the matchers.

| id | correspondence | argument | matcher |
|----|----------------|----------|---------|
| A | $c_{l,1} = \langle zoom_o, zoom_{o'}, \equiv, 1.0 \rangle$ | $\langle c_{l,1}, l, 1.0, + \rangle$ | $m_l$ |
| B | $c_{l,2} = \langle Battery_o, Battery_{o'}, \equiv, 1.0 \rangle$ | $\langle c_{l,2}, l, 1.0, + \rangle$ | $m_l$ |
| C | $c_{l,3} = \langle MemoryCard_o, Memory_{o'}, \equiv, 0.33 \rangle$ | $\langle c_{l,3}, l, 0.33, + \rangle$ | $m_l$ |
| D | $c_{l,4} = \langle brand_o, brandName_{o'}, \equiv, 0.22 \rangle$ | $\langle c_{l,4}, l, 0.22, + \rangle$ | $m_l$ |
| E | $c_{l,5} = \langle price_o, price_{o'}, \equiv, 1.0 \rangle$ | $\langle c_{l,5}, l, 1.0, + \rangle$ | $m_l$ |
| F | $c_{s,1} = \langle CameraPhoto_o, DigitalCamera_{o'}, \equiv, 1.0 \rangle$ | $\langle c_{s,1}, s, 1.0, + \rangle$ | $m_s$ |
| G | $c_{s,2} = \langle zoom_o, zoom_{o'}, \equiv, 1.0 \rangle$ | $\langle c_{s,2}, s, 1.0, + \rangle$ | $m_s$ |
| H | $c_{s,3} = \langle battery_o, brandName_{o'}, \equiv, 1.0 \rangle$ | $\langle c_{s,3}, s, 1.0, + \rangle$ | $m_s$ |
| I | $c_{s,4} = \langle resolution_o, pixels_{o'}, \equiv, 1.0 \rangle$ | $\langle c_{s,4}, s, 1.0, + \rangle$ | $m_s$ |
| J | $c_{s,5} = \langle price_o, price_{o'}, \equiv, 1.0 \rangle$ | $\langle c_{s,5}, s, 1.0, + \rangle$ | $m_s$ |

Having generated their arguments, the matchers exchange them: $m_l$ sends its set of arguments $\mathcal{A}_l$ to $m_s$, and vice versa. Next, based on the attack notion, each matcher $m_i$ generates its attack relation $\propto_i$ and then instantiates its $SVAF_i$. The arguments $A$, $D$, $E$, $G$, $H$ and $J$ are acceptable in both SVAFs (they are not attacked by counter-arguments with $h = -$). $F$, $I$, and $B$ ($h = +$) successfully attack their counter-arguments ($h = -$) $L$, $M$ and $N$, respectively, because they have higher strengths (1.0 against 0.5).

**Table 5** Counter-arguments (attacks) for the arguments in Table 4.

| id | correspondence | counter-argument | matcher |
|----|----------------|------------------|---------|
| L | $c_{l,6} = \langle CameraPhoto_o, DigitalCamera_{o'}, \equiv, 0.5 \rangle$ | $\langle c_{l,6}, l, 0.5, - \rangle$ | $m_l$ |
| M | $c_{l,7} = \langle resolution_o, pixels_{o'}, \equiv, 0.5 \rangle$ | $\langle c_{l,7}, l, 0.5, - \rangle$ | $m_l$ |
| N | $c_{s,6} = \langle Battery_o, Battery_{o'}, \equiv, 0.5 \rangle$ | $\langle c_{s,6}, s, 0.5, - \rangle$ | $m_s$ |
| O | $c_{s,7} = \langle MemoryCard_o, Memory_{o'}, \equiv, 0.5 \rangle$ | $\langle c_{s,7}, s, 0.5, - \rangle$ | $m_s$ |

The arguments in the preferred extension of both matchers $m_l$ and $m_s$ are $A$, $B$, $D$, $E$, $F$, $G$, $H$, $I$, $J$ and $O$. While $\langle resolution_o, pixels_{o'}, \equiv, 1.0 \rangle$, $\langle Battery_o, Battery_{o'}, \equiv, 1.0 \rangle$ and $\langle CameraPhoto_o, DigitalCamera_{o'}, \equiv, 1.0 \rangle$ have been accepted, $\langle MemoryCard_o, Memory_{o'}, \equiv, 0.33 \rangle$ has been discarded.
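Continuing the sketches above, the strength comparison at work in this example can be rendered as a minimal attack test between opposite arguments over the same entity pair, with the consensus being the arguments that survive every successful attack. This compresses SVAF acceptability (which also involves values and audiences, §3.2) into its confidence comparison, so it is an illustration rather than the full semantics:

```python
from typing import List

def attacks(x: Argument, y: Argument) -> bool:
    """x attacks y when they take opposite positions on the same
    entity pair; the attack succeeds only if x is at least as strong."""
    same_pair = (x.c.e, x.c.e_prime) == (y.c.e, y.c.e_prime)
    return same_pair and x.h != y.h and x.s >= y.s

def accepted(args: List[Argument]) -> List[Argument]:
    """Keep the arguments that are not successfully attacked
    without being able to counter-attack."""
    return [y for y in args
            if not any(attacks(x, y) and not attacks(y, x) for x in args)]
```

On the example, $F$ (strength 1.0, $h = +$) eliminates $L$ (0.5, $h = -$), while $O$ (0.5, $h = -$) eliminates $C$ (0.33, $h = +$), matching the accepted and discarded correspondences above.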
5 Weaknesses and Challenges

As discussed above, argumentation for alignment agreement has been exploited in different ways, for different scenarios. However, there are still various challenges ahead for achieving a fully satisfying approach. We briefly consider some of them.

**Confidence of arguments** In [34], the notion of attack between the arguments highly depends on the confidence associated with the correspondences. Such confidence levels are usually derived from similarity assessments made during the matching process, e.g., from an edit distance measure between labels, or an overlap measure between instance sets. However, there is no objective theory, nor even informal guidelines, for determining such confidence levels. Using them to compare results from different matchers is therefore questionable, especially because of potential scale mismatches: the same strength of 0.8 may not correspond to the same level of confidence for two different matchers.

**Complete alignments** Generating complete alignments is at first sight quite unrealistic, but it can nevertheless be supported by the observation that most matchers try to provide as many correspondences as possible. However, dealing with a large number of arguments can become prohibitively costly. Following the approach of [10], the search space within the argumentation process can be reduced by isolating only the correspondences that are relevant to the communication. Other authors isolate the subpart of the ontologies relevant for the communication before matching, so that only these pieces are matched instead of the whole ontologies [27]. These approaches have to be developed with guarantees that the isolated items are the relevant ones.

**Inconsistent alignments** An important issue in such argumentation for alignment agreement is the potential inconsistency of the agreed alignment. Indeed, even if the initial alignments are consistent, selected sets of correspondences may generate concepts that are not satisfiable. There are two alternatives for solving the inconsistency problem in alignments:

- express the inconsistency within the argumentation framework [1];
- deal alternately with the logical and argumentative parts of the problem.

Integrating the logics within the argumentation framework seems the more elegant solution, and it can be achieved straightforwardly when correspondences are arguments and incompatible correspondences mutually attack each other (see the sketch after this paragraph). However, this works only when incompatibility involves exactly two correspondences. When the set of incompatible correspondences is larger, the encoding is not so straightforward and may lead to the generation of an exponential number of arguments and attack relations. On the other hand, alternating logical and argumentative treatments may also lead to prohibitive computational costs. In this case, the solution seems to be a trade-off between the computational costs and the expected consistency.
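As an illustration of the simple pairwise case, incompatibility can be encoded as symmetric attacks in a Dung-style framework; a sketch continuing the ones above, assuming an `incompatible` predicate supplied by an external reasoner (not specified in the chapter):

```python
from itertools import combinations
from typing import Callable, List, Set, Tuple

def mutual_attacks(args: List[Argument],
                   incompatible: Callable[[Argument, Argument], bool]
                   ) -> Set[Tuple[Argument, Argument]]:
    """Encode pairwise incompatibility as mutual attacks.
    For incompatible sets larger than two, one would need attacks
    per minimally inconsistent subset, which is where the
    exponential blow-up mentioned in the text arises."""
    attack_relation = set()
    for x, y in combinations(args, 2):
        if incompatible(x, y):
            attack_relation.add((x, y))
            attack_relation.add((y, x))
    return attack_relation
```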
**Availability of justifications** The presented approaches argue for or against a correspondence based on justifications. They are thus highly dependent on the justifications provided with the alignments. Although alignment servers provide the necessary metadata for storing such justifications with alignments (see §2.1), it is not common for people or for matchers to provide this information. Ideally, matchers would provide such justifications as a way to understand why a particular alignment is found, or why a certain match is ranked higher than another, but this is not yet common practice. The development of such methods may therefore be slowed by the unavailability of justification metadata. It seems necessary to provide incentives for both automatic and manual matchers to generate this information. One such incentive could be, of course, the ability to be involved in an argumentation process and thereby provide better alignments. Another would be to better help explain matcher results to users [30].

6 Other Related Work

This chapter has covered all the work carried out in the domain of alignment argumentation per se. However, in order to find alignments between the ontologies used by agents, other works have proposed different techniques that we consider here.

[31] proposed alignment negotiation to establish a consensus between different agents using the MAFRA alignment framework [24]. The approach is based on utility functions used to evaluate the confidence in a particular correspondence in the context of each agent. These confidence values are combined in order to decide whether the correspondence is accepted, rejected or needs to be negotiated. A meta-utility function is also applied to evaluate whether the effort necessary to negotiate is beneficial or not; it may accordingly change the thresholds so that some correspondences are directly rejected or accepted. The approach is highly dependent on the MAFRA framework and cannot be directly applied to other environments.

Schemes for obtaining ontology alignments through the working cycles of agents have also been developed. They either observe the failure or success of the communication and statistically learn the alignments [7], or they use the interaction protocol of each agent to narrow down the possible meanings of the concepts used in performatives [4].

[8] presents an approach for agents to agree on a common ontology in a decentralised way. The approach assumes that each agent adopts a private ontology and shares an intermediate ontology. The private ontology is used for storing and reasoning with operational knowledge, i.e., knowledge relevant to a particular problem or task at hand. The intermediate ontology is used for communication. Communication proceeds by translating from the speaker's private ontology to the intermediate ontology, which the hearer translates back again into its own private ontology. The authors show how to establish such an intermediate ontology, which is the common goal for every agent in the system. In the approaches we have presented, on the other hand, the result of the negotiation is a set of correspondences between the terms of the different ontologies.

[5] presents an ontology negotiation protocol providing semantic interoperability in multi-agent systems in an automated fashion at run-time. The protocol enables agents to discover ontology conflicts or unknown terms. Then, it goes through (i) incremental interpretation of the unknown terms with the help of external resources, (ii) clarification, by proposing putative correspondences, (iii) evaluation, through the impact of such correspondences on some tasks, and (iv) update of the ontology with the correspondence. The final result of this process is that each agent converges on a single, shared ontology. In contrast, in the approaches presented in this chapter, agents keep the ontologies that they have been designed to reason with, whilst generating alignments with other agents' ontologies.

In [25], the authors propose an argumentation framework for inter-agent dialogue to reach an agreement on terminology, which formalizes a debate in which divergent representations (expressed in description logic) are discussed. The proposed framework is stated as being able to manage conflicts between claims, with different relevancies for different audiences, in order to compute their acceptance. However, no details are given about how agents would generate such claims.

[32] proposes a cooperative negotiation model in which agents apply individual matching algorithms and negotiate a final alignment. Basically, the negotiation process involves the exchange of proposals and counter-proposals that represent correspondences. Each correspondence is negotiated individually. Three kinds of agents interact (lexical, structural, and semantic), and the communication is managed by a mediator agent.
7 Final Remarks

This chapter has presented an overview of the approaches to alignment agreement based on argumentation. Such approaches provide a way for agents with different ontologies to agree upon mutually acceptable ontology alignments, facilitating communication within a dynamic environment. We have discussed how two agents committing to different ontologies can align their ontologies in order to interoperate, and how agents relying on different matching approaches can agree on a common alignment. The approaches for both scenarios are not yet fully satisfying, and various challenges remain before they reach maturity.

References

1. Amgoud, L., Besnard, P.: Bridging the gap between abstract argumentation systems and logic. In: Proceedings of the 3rd International Conference on Scalable Uncertainty Management, pp. 12–27. Springer-Verlag, Berlin, Heidelberg (2009)
2. Amgoud, L., Cayrol, C.: On the acceptability of arguments in preference-based argumentation. In: G. Cooper, S. Moral (eds.) Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (1998)
3. Amgoud, L., Cayrol, C.: A reasoning model based on the production of acceptable arguments. Annals of Mathematics and Artificial Intelligence 34(1–3), 197–215 (2002)
4. Atencia, M.: Semantic alignment in the context of agent interaction. Ph.D. thesis, Universitat Autonoma de Catalunya, Barcelona (SP) (2010)
5. Bailin, S.C., Truszkowski, W.: Ontology negotiation between intelligent information agents. Knowledge Engineering Review 17(1), 7–19 (2002). DOI http://dx.doi.org/10.1017/S0269888902000292
6. Bench-Capon, T.: Persuasion in practical argument using value-based argumentation frameworks. Journal of Logic and Computation 13(3), 429–448 (2003)
7. Besana, P., Robertson, D.: How service choreography statistics reduce the ontology mapping problem. In: Proceedings of the 6th International Semantic Web Conference, no. 4825 in Lecture Notes in Computer Science, pp. 44–57 (2007)
8. van Diggelen, J., Beun, R.J., Dignum, F., van Eijk, R.M., Meyer, J.J.: ANEMONE: An effective minimal ontology negotiation environment. In: Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 899–906. ACM, New York, NY, USA (2006). DOI http://doi.acm.org/10.1145/1160633.1160794
9. Doran, P., Payne, T.R., Tamma, V., Palmisano, I.: Deciding agent orientation on ontology mappings. In: Proceedings of the 9th International Semantic Web Conference, Shanghai, China (2010)
10. Doran, P., Tamma, V., Palmisano, I., Payne, T.R.: Efficient argumentation over ontology correspondences. In: Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems, pp. 1241–1242. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC (2009)
11. Doran, P., Tamma, V., Payne, T., Palmisano, I.: Dynamic selection of ontological alignments: A space partitioning mechanism. In: International Joint Conference on Artificial Intelligence (2009). URL http://www.ijcai.org/Proceedings/IJCAIJJCAI-09/paper/view/551
12. Dung, P.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77(2), 321–357 (1995)
13. Euzenat, J.: An API for ontology alignment. In: Proceedings of the 3rd International Semantic Web Conference, pp. 698–712. Hiroshima, Japan (2004)
14. Euzenat, J., Mochol, M., Shvaiko, P., Stuckenschmidt, H., Šváb, O., Svátek, V., van Hage, W.R., Yatskevich, M.: Results of the ontology alignment evaluation initiative 2006. In: Proceedings of the 1st International Workshop on Ontology Matching, Athens, GA, USA (2006)
15. Euzenat, J., Shvaiko, P.: Ontology matching. Springer, Heidelberg (DE) (2007)
16. FIPA: Contract net interaction protocol specification. Tech. Rep. SC00029H, Foundation for Intelligent Physical Agents (2002)
17. FIPA: Modeling: Interaction diagrams. Tech. rep., Foundation for Intelligent Physical Agents (2003)
18. Haase, P., Motik, B.: A mapping system for the integration of OWL-DL ontologies. In: Proceedings of the 1st International Workshop on Interoperability of Heterogeneous Information Systems, pp. 9–16. ACM, New York, NY, USA (2005). DOI http://doi.acm.org/10.1145/1096967.1096970
19. Isaac, A., dos Santos, C.T., Wang, S., Quaresma, P.: Using quantitative aspects of alignment generation for argumentation on mappings. In: P. Shvaiko, J. Euzenat, F. Giunchiglia, H. Stuckenschmidt (eds.) OM, CEUR Workshop Proceedings, vol. 431. CEUR-WS.org (2008)
20. Jennings, N., Faratin, P., Lomuscio, A., Parsons, S., Wooldridge, M., Sierra, C.: Automated negotiation: Prospects, methods and challenges. Group Decision and Negotiation 10(2), 199–215 (2001)
21. Laera, L., Blacoe, I., Tamma, V., Payne, T., Euzenat, J., Bench-Capon, T.: Argumentation over ontology correspondences in MAS. In: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 1–8. ACM, New York, NY, USA (2007). DOI http://doi.acm.org/10.1145/1329125.1329400
22. Laera, L., Tamma, V., Euzenat, J., Bench-Capon, T., Payne, T.R.: Reaching agreement over ontology alignments. In: Proceedings of the 5th International Semantic Web Conference, Lecture Notes in Computer Science, vol. 4273, pp. 371–384. Springer, Berlin/Heidelberg (2006). DOI 10.1007/11926078
23. Laera, L., Tamma, V.A.M., Euzenat, J., Bench-Capon, T.J.M., Payne, T.R.: Agents arguing over ontology alignments. In: B. Dunin-Keplicz, A. Omicini, J.A. Padget (eds.) Proceedings of the 4th European Workshop on Multi-Agent Systems, CEUR Workshop Proceedings, vol. 223. CEUR-WS.org (2006)
24. Maedche, A., Motik, B., Silva, N., Volz, R.: MAFRA – a MApping FRamework for distributed ontologies. In: Proceedings of the 13th International Conference on Knowledge Engineering and Knowledge Management: Ontologies and the Semantic Web, pp. 235–250. Springer-Verlag, London, UK (2002)
25. Morge, M., Routier, J.C., Secq, Y., Dujardin, T.: A formal framework for inter-agent dialogue to reach an agreement about a representation. In: R. Ferrario, N. Guarino, L. Prevot (eds.) Proceedings of the Workshop on Formal Ontologies for Communicating Agents (2006)
26. Noy, N.F., Shah, N.H., Whetzel, P.L., Dai, B., Dorf, M., Griffith, N., Jonquet, C., Rubin, D.L., Storey, M.A.D., Chute, C.G., Musen, M.A.: BioPortal: ontologies and integrated data resources at the click of a mouse. Nucleic Acids Research 37(Web-Server-Issue), 170–173 (2009)
27. Packer, H., Payne, T., Gibbins, N., Jennings, N.: Evolving ontological knowledge bases through agent collaboration. In: Proceedings of the 6th European Workshop on Multi-Agent Systems, Bath, UK (2008)
28. Parsons, S., Jennings, N.: Negotiation through argumentation – a preliminary report. In: Proceedings of the 2nd International Conference on Multi-Agent Systems, pp. 267–274. Kyoto, Japan (1996)
29. Prakken, H., Sartor, G.: Argument-based extended logic programming with defeasible priorities. Journal of Applied Non-Classical Logics 7(1), 25–75 (1997)
30. Shvaiko, P., Giunchiglia, F., da Silva, P.P., McGuinness, D.L.: Web explanations for semantic heterogeneity discovery. In: A. Gómez-Pérez, J. Euzenat (eds.) Proceedings of the 2nd European Semantic Web Conference, Lecture Notes in Computer Science, vol. 3532, pp. 303–317. Springer (2005)
31. Silva, N., Maio, P., Rocha, J.: An approach to ontology mapping negotiation. In: Proceedings of the Workshop on Integrating Ontologies at the 3rd International Conference on Knowledge Capture. Banff, Canada (2005)
32. Trojahn, C., Moraes, M., Quaresma, P., Vieira, R.: Using cooperative agent negotiation for ontology mapping. In: Proceedings of the 4th European Workshop on Multi-Agent Systems, CEUR Workshop Proceedings, vol. 223, pp. 1–10. CEUR-WS.org (2006)
33. Trojahn, C., Quaresma, P., Vieira, R.: Conjunctive queries for ontology based agent communication in MAS. In: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 829–836. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC (2008)
34. Trojahn, C., Quaresma, P., Vieira, R., Moraes, M.: A cooperative approach for composite ontology mapping. Journal on Data Semantics X, LNCS 4900, 237–263 (2008). DOI 10.1007/978-3-540-77688-8
Ethnicity effects in relative pitch

Michael J. Hove
Cornell University, Ithaca, New York, and Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Mary Elizabeth Sutherland
McGill University, Montreal, Quebec, Canada

and

Carol L. Krumhansl
Cornell University, Ithaca, New York

Absolute pitch (AP), the rare ability to identify a musical pitch, occurs at a higher rate among East Asian musicians. This has stimulated considerable research on the comparative contributions of genetic and environmental factors. Two studies examined whether a similar ethnicity effect is found for relative pitch (RP), identifying the distance, or interval, between two tones. Nonmusicians ($n = 103$) were trained to label musical intervals and were subsequently tested on interval identification. We establish similar ethnicity effects: Chinese and Korean participants consistently outperformed other participants in RP tasks, but not in a "relative rhythm" control task. This effect is not driven by previous musical or tone-language experience. The parallel with the East Asian advantage for AP suggests that enhanced perceptual–cognitive processing of pitch is more general and is not limited to highly trained musicians. This effect opens up many research questions concerning the environmental and genetic contributions to this more general pitch-based ability.

Previous research has uncovered ethnicity effects in pitch processing among highly trained musicians. Absolute pitch (AP), the rare ability to identify or produce by name (e.g., C, C♯, D) a musical pitch without a reference tone, is more common among East Asians than among non-Asians (e.g., Deutsch, Henthorn, Marvin, & Xu, 2006; Gregersen, Kowalsky, Kohn, & Marvin, 2000). This ethnicity effect has prompted considerable attention from geneticists and cognitive neuroscientists, primarily due to the potential interaction of genetic and environmental factors, such as tone-language experience, musical training style, a critical learning period, and the location of childhood. In the present article, we examine whether a similar ethnicity effect is found for relative pitch (RP), the ability to identify musical intervals by name (e.g., major second [M2], perfect fourth [P4], and perfect fifth [P5]).

Several studies provide evidence for a genetic component in AP. Siblings of AP possessors were more likely to have AP, and this familial aggregation was strong even when shared environmental factors were controlled for (Baharloo, Johnston, Service, Gitschier, & Freimer, 1998). In another study, the rate of AP among music theory students in the U.S. was only 12% (Gregersen et al., 2000). However, within that sample, the rate among East Asian students (47.5%) was markedly higher than among Caucasian students (9%), and higher rates of AP were "present among all the major ethnic subgroups—Japanese (26%), Korean (37%), and Chinese (65%)." This Asian advantage suggests a potential genetic component, inasmuch as higher rates among East Asians cannot be attributed simply to cultural factors (the three cultures are distinct) or to tone-language experience (Zatorre, 2003).

Environmental factors also appear to contribute to AP. A comparison of conservatory students in China and the U.S. showed higher rates of AP for the Chinese in China (~50%) than for the non-Asians in the U.S. (~10%; Deutsch, Henthorn, & Dolson, 2004).
This led the authors to argue that early exposure to a tone language can predispose individuals to AP: if pitch carries meaning in language, babies might attend more to pitch cues and be more likely to develop AP than if pitch carries no meaning. Consistent with this, a reanalysis of data from Gregersen et al. (2000) contends that the observed East Asian AP advantage appears only for individuals who spent their early childhoods in East Asia, potentially due to more tone or pitch-accent language exposure (Henthorn & Deutsch, 2007). Conversely, this geographic effect could be driven by a music training method that is more common in Asia, rather than by language exposure (Gregersen, Kowalsky, & Li, 2007). Finally, AP has a critical period: Adults with AP started music lessons early (Baharloo et al., 1998; Profita & Bidder, 1988). In sum, after considerable research attention, the relative contributions of these environmental and genetic factors remain elusive, due largely to the entanglement of genes and environment and to the extreme rarity of AP (AP rates are estimated as low as 1 in 10,000 in the general population; Takeuchi & Hulse, 1993; cf. Levitin & Rogers, 2005).

RP has been studied much less than AP, even though RP is well developed in almost all trained musicians, is more musically useful than AP, and thus is of interest in and of itself. Moreover, some considerations suggest commonalities between the two abilities. Both require forming associations between pitch-based sensory events and arbitrary labels. Indeed, a brain area involved in conditional associations, the dorsolateral prefrontal cortex (DLPFC), was similarly active when AP possessors heard a tone, when non-AP musicians made RP judgments (Zatorre, Perry, Beckett, Westbury, & Evans, 1998), and when nonmusicians identified chords with an arbitrary label (Bermudez & Zatorre, 2005). Because of this common neurocognitive mechanism underlying AP and RP, in conjunction with the Asian AP advantage, we examined potential ethnicity effects in RP.

Some evidence has emerged for both genetic and environmental components in non-AP pitch tasks. Identification of mistuned intervals in melodies was more similar for identical twin pairs than for fraternal twin pairs: Genetic heritability estimates of pitch recognition were around .75 (Drayna, Manichaikul, de Lange, Snieder, & Spector, 2001). Tone-language experience improved identification of speech tones, but not of pitch sweeps or the just-noticeable-difference threshold for pitch (Bent, Bradlow, & Wright, 2006). In a test of pitch memory that does not require labeling, Japanese children more accurately identified pitch-shifted familiar melodies than did their Canadian counterparts (Trehub, Schellenberg, & Nakata, 2008). However, in another study with this nonlabeling task, no difference was observed between Chinese- and European-Canadian children (Schellenberg & Trehub, 2007).

In the two present studies, we investigated RP recognition by East Asian and Caucasian nonmusicians who were unfamiliar with assigning labels to musical intervals. Working with nonmusicians is advantageous because it mitigates factors such as training method. Here, participants were trained to identify three pitch intervals by arbitrary color labels and subsequently were tested on identification accuracy. In addition to ethnicity, participants reported language experience, musical training, musical environment, primary language in the home, and country of early childhood.
**STUDY 1**

In order to examine the relative contributions of ethnicity and tone language in RP-interval identification, we compared the pitch-interval identification of three groups of nonmusicians: Chinese, a native tone-language-speaking group that has previously shown pitch-processing advantages; Hmong, also a native tone-language group, but culturally and genetically distinct (Wen et al., 2004); and Caucasian.

**Method**

**Participants.** Thirty-eight unpaid volunteers from secondary schools participated, in three groups: students in China and Taiwan (henceforth referred to as *Chinese*, $n = 10$, mean age = 16.6 years), Caucasian students in the U.S. ($n = 14$, mean age = 16.3 years), and Hmong students in the U.S. ($n = 14$, mean age = 16.5 years). The Hmong students were all native Hmong speakers; half were born in the U.S., and half were born in Thailand or Laos and immigrated to the U.S. (5 before age 5, 2 at age 10); this did not affect performance. Participants were nonmusicians, having no more than 3 years of musical instrument or singing lessons or classes. They were not currently playing an instrument or singing and were unfamiliar with labeling musical intervals. The extent of musical training did not differ across the Chinese ($M = 1.2$ years), Caucasian ($M = 1.8$ years), and Hmong ($M = 0.9$ years) groups ($p > .15$).

**Materials.** The stimuli consisted of three ascending intervals: M2, P4, and P5. The intervals began with one of two reference pitches, C4 or F#4, in order to limit a strategy of recognizing intervals on the basis of only the second pitch. Participants identified the intervals by using color labels chosen arbitrarily: M2s were red, P4s were green, and P5s were blue (see Figure 1). All tones were presented in a MIDI metallophone timbre over circumaural headphones, and each lasted 500 msec, with 250 msec of silence separating the reference tone and the higher tone in the interval. Stimuli were presented and responses were recorded on a Macintosh computer running a MAX/MSP program.

Figure 1. Stimuli in Study 1: three pitch intervals starting on one of two reference pitches. Participants identified the intervals by using arbitrarily chosen color labels: Major seconds (M2s) were red, perfect fourths (P4s) were green, and perfect fifths (P5s) were blue.
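A sketch of the stimulus layout, assuming standard MIDI note numbers (C4 = 60, F#4 = 66) and semitone offsets of 2, 5, and 7 for M2, P4, and P5; the function and variable names are illustrative, not taken from the study's MAX/MSP program:

```python
# Interval stimuli: color label -> semitone offset from the reference pitch.
INTERVALS = {"red": 2, "green": 5, "blue": 7}   # M2, P4, P5
REFERENCE_PITCHES = {"C4": 60, "F#4": 66}        # MIDI note numbers

TONE_MS, GAP_MS = 500, 250  # tone duration and inter-tone silence

def stimulus(reference: str, color: str) -> list:
    """Return (midi_note, onset_ms, duration_ms) events for one trial:
    the reference tone, 250 ms of silence, then the upper tone."""
    ref = REFERENCE_PITCHES[reference]
    upper = ref + INTERVALS[color]
    return [(ref, 0, TONE_MS), (upper, TONE_MS + GAP_MS, TONE_MS)]

# The six stimuli: 3 intervals x 2 reference pitches.
stimuli = [stimulus(r, c) for r in REFERENCE_PITCHES for c in INTERVALS]
```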
**Procedure.** First, participants were introduced to the experimental structure and the task of labeling pitch intervals. The experimenter presented the different intervals and explained that the distance between the two pitches determined the interval. Participants briefly practiced the interval-identification task, in which they heard the two tones of the interval in succession and indicated which interval they thought they had heard by pressing one of three colored keys. The subsequent training phase consisted of 20 blocks, each containing the six stimulus intervals (3 intervals × 2 reference pitches) in random order, for 120 total training trials. A colored square appeared on the screen, indicating the to-be-produced interval. Participants produced the intervals by first pressing the space bar for the reference pitch and then pressing the appropriate colored key for the higher interval tone. Following training, participants performed the interval-identification test: the interval sounded, and the participants responded with the appropriate colored key. The test was 96 trials for the Caucasian and Hmong groups, but was reduced to 48 trials for the Chinese group due to time limitations.$^2$ The experiment lasted approximately 25 min for the Caucasian and Hmong groups and 20 min for the Chinese group.

**Results and Discussion**

Pitch-interval identification was analyzed in a 3 (ethnicity: Chinese, Hmong, Caucasian) × 3 (interval: M2, P4, P5) × 2 (reference pitch: C, F#) mixed-model ANOVA. The main effect of ethnicity was highly significant [$F(2,35) = 18.1$, $p < .001$, $\eta^2_p = .51$]; see Figure 2. The Chinese ($M = 72.0\%$) outperformed the Caucasians ($M = 45.5\%$) and the Hmong ($M = 45.5\%$; pairwise comparisons, $ps < .001$), and no difference occurred between the Caucasian and Hmong groups ($p > .9$; Bonferroni corrections were applied to these and all subsequent pairwise comparisons). Additionally, a significant main effect of interval was observed [$F(2,70) = 4.9$, $p = .010$, $\eta^2_p = .12$]; pairwise comparisons revealed significantly higher recognition only for M2s over P4s ($p = .018$). No main effect of reference pitch was observed ($p > .5$). The interval × reference pitch interaction was highly significant [$F(2,70) = 55.2$, $p < .001$, $\eta^2_p = .61$]: Recognition was better for the small M2 interval on the lower reference pitch (C) than on the higher reference pitch (F#), whereas recognition was better for the large P5 interval on the F# than on the C. This indicates that AP height cues can influence the perception of interval size. Ethnicity did not interact with interval or reference pitch in two-way interactions ($ps > .1$). The ethnicity × interval × reference pitch three-way interaction approached significance [$F(4,70) = 2.2$, $p = .080$, $\eta^2_p = .11$]. This marginal three-way interaction was driven by the Chinese group's better identification of M2s on the F# reference pitch, which suggests that the Chinese group was less susceptible than the others to misapplying AP height cues. Previous musical training did not correlate with the interval-recognition scores ($r = .14$, $p = .4$). The Chinese advantage for nonmusicians in this RP task parallels the previously established AP advantage for highly trained Chinese musicians. The identical performance of the Caucasian group and the native tone-language-speaking Hmong group indicates that tone-language experience does not necessarily improve pitch-interval identification.

**STUDY 2**

Study 2 further investigated ethnicity effects in a similar RP identification task and included an analogous "relative rhythm" task (identifying time intervals rather than pitch intervals). This task was added in order to control for possible motivational, memory, or general cognitive differences. For example, both RP and relative-rhythm tasks involve context, or the relation between stimuli, and hence might favor more context-sensitive East Asian cultures (e.g., Masuda & Nisbett, 2001). Conversely, if groups performed differently in the pitch task, but not in the rhythm task, one could infer that those group differences truly arose from differences in the pitch domain.

**Method**

**Participants.** Sixty-five Cornell undergraduates participated for course credit or $8/hr. Participants were nonmusicians, having no more than 3 years of musical training. There were three groups: Caucasian ($n = 30$, musical training = 0.9 years), Chinese ($n = 24$, musical training = 1.4 years), and Korean ($n = 11$, musical training = 1.5 years).
The duration of musical training did not differ across groups ($p > .1$). Participants reported their primary home language and language experience on a 1–5 scale (1 = *understanding, but trouble speaking*; 5 = *fluency*). All Chinese participants reported having tone-language experience ($M = 4.0$, $SD = 1.4$), and all Korean participants reported speaking Korean ($M = 4.0$, $SD = 0.8$). Some dialects of Korean are pitch-accent languages, but the vast majority of our participants (and their parents) spoke only Seoul or standard South Korean, which do not have pitch accents (Sohn, 1999). We excluded 13 additional participants: 1 due to corrupted data and 12 due to ethnicities that were neither Caucasian, Chinese, nor Korean.

**Materials and Procedure.** The pitch-interval identification task included the same intervals as were used in Study 1: M2 (red), P4 (green), and P5 (blue), on two reference pitches (C and F#); see Figure 3A. Each stimulus presentation consisted of the reference tone alone, followed by two iterations of the interval (the second and third tones, and then the fourth and fifth tones, for five tones total). In the rhythmic-pattern task, participants learned to identify rhythmic patterns by color. As shown in Figure 3B, three rhythmic intervals were used: 8th notes with a 1:1 ratio (red), triplets with a 2:1 ratio (green), and 16th notes with a 3:1 ratio (blue). Each pattern consisted of seven clicks, with the rhythmic interval presented twice (by the second, third, and fourth clicks, and then by the fifth, sixth, and seventh clicks). Each rhythm pattern was presented at one of two tempi, to ensure that participants learned the relative time intervals, not the absolute times. The corresponding interonset intervals (IOIs) appear in msec for slow and fast tempi in Figure 3B. The rhythms were presented in a MIDI woodblock timbre. The procedure for the counterbalanced pitch and rhythm portions was identical. The experimenter defined the intervals for the participants, who were then given a short practice. The subsequent training phase consisted of 96 trials (16 blocks containing the randomized intervals). During training, the color of the upcoming interval appeared on the screen, and a space bar press started the interval. Each training phase lasted approximately 10 min. Then, participants completed a 96-trial test. The session was self-paced and lasted approximately 1 h.

**Results and Discussion**

Average percent correct for the pitch and rhythm tasks (see Figure 4) was analyzed in a 3 (ethnicity: Caucasian, Chinese, Korean) × 2 (task: pitch, rhythm) mixed-model ANOVA. Overall performance did not differ between the pitch ($M = 66.7\%$) and the rhythm ($M = 65.4\%$) tasks [$F(1,62) = 3.1$, $p = .08$, $\eta^2_p = .05$]. In the critical test, the task × ethnicity interaction was significant [$F(2,62) = 5.5$, $p < .01$, $\eta^2_p = .15$], indicating that ethnicity had a substantially larger effect on pitch-interval identification than on rhythm-pattern identification. Further unpacking of performance in separate pitch and rhythm analyses (reported below) revealed that ethnicity significantly affected pitch-interval recognition, but not rhythm-pattern recognition.
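A sketch of this critical 3 (ethnicity) × 2 (task) mixed-design test using the pingouin package, assuming a long-format table with one accuracy score per participant per task; the data values below are placeholders, not the study's data:

```python
import pandas as pd
import pingouin as pg

# Toy long-format data: one row per participant per task.
df = pd.DataFrame({
    "participant": ["p1", "p1", "p2", "p2", "p3", "p3", "p4", "p4"],
    "ethnicity":   ["Caucasian"] * 4 + ["Chinese"] * 4,
    "task":        ["pitch", "rhythm"] * 4,
    "accuracy":    [58, 63, 60, 65, 72, 66, 74, 68],  # percent correct
})

# Mixed-design ANOVA; the task x ethnicity interaction row is the
# critical test reported in the text.
aov = pg.mixed_anova(data=df, dv="accuracy", within="task",
                     subject="participant", between="ethnicity")
print(aov.round(3))
```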
Pitch-interval recognition was analyzed in a 3 (ethnicity: Chinese, Korean, Caucasian) × 3 (interval: M2, P4, P5) × 2 (reference pitch: C, F#) mixed-model ANOVA. Ethnicity significantly affected pitch-interval recognition [$F(2,62) = 8.77$, $p < .001$, $\eta^2_p = .22$]. Pairwise comparisons showed that the Chinese ($M = 72.2\%$ correct) and the Koreans ($M = 78.2\%$ correct) outperformed the Caucasians ($M = 58.2\%$, $ps < .01$), but no difference occurred between the Chinese and Korean groups ($p > .8$). Additionally, interval had a main effect on recognition [$F(2,62) = 13.50$, $p < .001$, $\eta^2_p = .18$], with better recognition for the M2s than for the P4s and P5s ($ps < .01$). Reference pitch had a main effect as well [$F(1,62) = 27.5$, $p < .001$, $\eta^2_p = .30$], with better recognition occurring for intervals with the higher, F# reference pitch. The interval × reference pitch interaction was highly significant [$F(2,124) = 94.30$, $p < .001$, $\eta^2_p = .60$]: Recognition was better for M2s that started on the C reference pitch than on the F#, whereas recognition was better for P5s that started on the F# reference pitch than on the C. Ethnicity did not interact with interval or reference pitch in two-way interactions ($ps > .25$). However, the three-way interaction, ethnicity × interval × reference pitch, was significant [$F(2,124) = 5.00$, $p = .001$, $\eta^2_p = .14$]: The Chinese and Korean groups were less likely to misapply AP height cues when identifying intervals. Unlike in Study 1, the duration of previous musical training significantly correlated with interval-recognition performance ($r = .41$, $p = .001$).$^3$

Rhythm-pattern recognition was analyzed in an analogous 3 (ethnicity: Chinese, Korean, Caucasian) × 3 (rhythmic pattern: 8th note, triplet, 16th note) × 2 (tempo: fast, slow) mixed-model ANOVA. No effect of ethnicity on rhythm recognition was observed among the Chinese ($M = 67.8\%$), Korean ($M = 66.5\%$), and Caucasian ($M = 63.2\%$) participants [$F(2,62) = 0.7$, $p = .5$]. Tempo did not affect performance ($p > .3$). A main effect of rhythmic pattern was observed [$F(2,124) = 88.9$, $p < .001$]: recognition was best for the 8th-note (1:1) rhythm ($ps < .001$), and performance was better for the triplet (2:1) than for the 16th-note (3:1) rhythm ($p < .05$). No ethnicity interactions were significant ($ps > .5$). The extent of previous musical training did not correlate with rhythm-recognition performance ($p > .3$).

Next, we examined the potential role of language in pitch-interval identification. As reported above, there was no difference between the Chinese participants (all with tone-language experience) and the Korean participants (all Korean speakers). The Korean participants were predominantly non-pitch-accent speakers; thus, we find no evidence that the ethnicity effect is driven by lexical use of pitch.$^4$ Additionally, the degree of tone-language experience did not affect performance: Within the Chinese group, the 15 participants who rated themselves as fluent ($M = 67.2\%$) tended to score lower on the pitch test than did the 9 participants with nonfluent tone-language experience ($M = 80.4\%$, $p = .07$). The primary language spoken in the home (tone language vs. non-tone language) had no effect ($ps > .6$). Finally, we observed no difference between those East Asians whose early childhoods were spent in East Asia ($M = 77.6\%$, $n = 15$) and those whose early childhoods were spent in North America ($M = 71.5\%$, $n = 20$, $p > .3$).$^5$ In sum, Study 2 demonstrates an ethnicity effect in RP, but not in relative-rhythm identification. This effect is not driven by tone-language experience.
Since ethnicity differences among the Chinese, Koreans, and Caucasians emerged in the pitch task, but not in the analogous rhythm task, we can more confidently conclude that the observed differences reliably reflect differences in pitch identification, rather than effects of motivation, strategy, or other cognitive differences.

**DISCUSSION**

The two studies reported here indicate an ethnicity effect in RP identification: The Chinese and Korean groups better identified musical pitch intervals, but showed no advantage in the relative-rhythm control task. This Asian advantage for RP among nonmusicians parallels the well-established Asian advantage in AP, suggesting that this perceptual–cognitive advantage in labeling pitch-based sensory events is more general and is not limited to highly trained musicians. Although the exact cultural, environmental, or genetic factors behind this ethnicity effect remain to be ascertained, we observe no evidence supporting tone-language effects: Compared with English-speaking Caucasians, the tone-language-speaking Hmong group showed no advantage in Study 1, whereas the non-tone-language-speaking Korean group did show an advantage in Study 2. Numerous uncontrolled-for factors could affect performance, but the large ethnicity effects remained stable across two studies with very different sets of participants.

Overall, participants more accurately identified small intervals starting on the low reference pitch and large intervals starting on the high reference pitch, indicating that AP height cues were misapplied in these RP judgments. This was less true for the Chinese and Korean groups, who more appropriately applied the relative cues. However, this did not drive the ethnicity effect, because the Chinese and Korean participants were consistently more accurate across intervals in both high- and low-pitch ranges.

RP identification inherently involves context, or the relation between stimuli; thus, the observed ethnicity effect might be an auditory analogue of the greater context sensitivity shown by East Asian cultures in the visual domain (e.g., Masuda & Nisbett, 2001). However, the null effect in the relative-rhythm task and the well-established ethnicity effects in AP, which does not involve relational processing, make it unlikely that this result is due only to context sensitivity; rather, it appears to be more specific to musical pitch.

Ethnicity effects in AP, and now RP, potentially involve the ability to form associations between pitch-based sensory events and labels, inasmuch as little or no ethnicity effect is observed in low-level auditory tasks (Bent et al., 2006) or in pitch memory tasks not requiring labeling (Schellenberg & Trehub, 2007). Both AP and RP identification necessitate forming an association between a pitch (AP) or a combination of pitches (RP) and a long-term memory representation, and they involve similar activation of the DLPFC, which has been implicated in conditional associations (Zatorre et al., 1998).

Genetics is another possible factor in ethnicity effects on RP identification; genetic components have been identified in other pitch-perception abilities, including identifying a mistuned interval (Drayna et al., 2001) and AP (Baharloo et al., 1998; Gregersen et al., 2000; Theusch, Basu, & Gitschier, 2009). Recently, Dediu and Ladd (2007) showed that the population frequencies of two derived haplotypes of the brain development genes ASPM and Microcephalin correlate with whether a population speaks a tone language.
The haplotypes could affect subtle cortical organization and lead to cognitive biases, such as in pitch perception or tonal-verbal associations, which would facilitate the acquisition of tone language (manifest in linguistic change over many generations). The East Asian populations showing better pitch-interval identification here have derived ASPM and Microcephalin frequencies consistent with the proposed cognitive bias for pitch processing (Evans et al., 2005; Mekel-Bobrov et al., 2005). However, the Hmong population also has this pattern of derived ASPM and Microcephalin, yet the Hmong participants in the present study showed no pitch-interval identification advantage. Thus, although behavioral genetic work represents a potentially fruitful avenue, genetic factors are clearly not acting alone in ethnicity effects of pitch perception. Indeed, in their recent article on AP, Gregersen et al. (2007) took a broad view: "'Ethnic' differences encompass all the cultural, environmental, and genetic differences that can be found between the major population groups" (p. 105).

Establishing the relative contribution of cultural, environmental, and genetic factors in the emergence of AP has proven to be difficult, due to the entanglement of these factors, compounded by the rarity of AP possessors. Here, we establish an ethnicity effect in RP identification in nonmusicians that previously was seen only in highly trained musicians with AP. This opens up many research questions concerning the genetic and environmental contributions to this and to more general pitch-based auditory abilities.

AUTHOR NOTE

We thank Robert Zatorre and Morten Christiansen for comments on an earlier draft. Address correspondence to M. J. Hove, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, Leipzig 04103, Germany (e-mail: email@example.com).

REFERENCES

Baharloo, S., Johnston, P. A., Service, S. K., Gitschier, J., & Freimer, N. B. (1998). Absolute pitch: An approach for identification of genetic and nongenetic components. *American Journal of Human Genetics*, **62**, 224-231.

Bent, T., Bradlow, A. R., & Wright, B. A. (2006). The influence of linguistic experience on the cognitive processing of pitch in speech and nonspeech sounds. *Journal of Experimental Psychology: Human Perception & Performance*, **32**, 97-103.

Bermudez, P., & Zatorre, R. J. (2005). Conditional associative memory for musical stimuli in nonmusicians: Implications for absolute pitch. *Journal of Neuroscience*, **25**, 7718-7723. doi:10.1523/JNEUROSCI.1560-05.2005

Dediu, D., & Ladd, D. R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. *Proceedings of the National Academy of Sciences*, **104**, 10944-10949.

Deutsch, D., Henthorn, T., & Dolson, M. (2004). Absolute pitch, speech, and tone language: Some experiments and a proposed framework. *Music Perception*, **21**, 339-356.

Deutsch, D., Henthorn, T., Marvin, E., & Xu, H. (2006). Absolute pitch among American and Chinese conservatory students: Prevalence differences, and evidence for a speech-related critical period. *Journal of the Acoustical Society of America*, **119**, 719-722.

Drayna, D., Manichaikul, A., de Lange, M., Snieder, H., & Spector, T. (2001). Genetic correlates of musical pitch recognition in humans. *Science*, **291**, 1969-1972. doi:10.1126/science.291.5510.1969

Evans, P. D., Gilbert, S. L., Mekel-Bobrov, N., Vallender, E. J., Anderson, J. R., Vaez-Azizi, L. M., et al. (2005). *Microcephalin*, a gene regulating brain size, continues to evolve adaptively in humans. *Science*, **309**, 1717-1720. doi:10.1126/science.1113722
Gregersen, P. K., Kowalsky, E., Kohn, N., & Marvin, E. W. (2000). Early childhood music education and predisposition to absolute pitch: Teasing apart genes and environment [Letter to the editor]. *American Journal of Medical Genetics*, **98**, 280-282.

Gregersen, P. K., Kowalsky, E., & Li, W. (2007). Reply to Henthorn and Deutsch: Ethnicity versus early environment: Comment on "Early childhood music education and predisposition to absolute pitch: Teasing apart genes and environment" by Peter K. Gregersen, Elena Kowalsky, Nina Kohn, and Elizabeth West Marvin [2000]. *American Journal of Medical Genetics*, **143A**, 104-105.

Henthorn, T., & Deutsch, D. (2007). Ethnicity versus early environment: Comment on "Early childhood music education and predisposition to absolute pitch: Teasing apart genes and environment" by Peter K. Gregersen, Elena Kowalsky, Nina Kohn, and Elizabeth West Marvin [2000]. *American Journal of Medical Genetics*, **143A**, 102-103.

Ladd, D. R. (2008). *Intonational phonology*. Cambridge: Cambridge University Press.

Levitin, D. J., & Rogers, S. E. (2005). Absolute pitch: Perception, coding, and controversies. *Trends in Cognitive Sciences*, **9**, 26-33.

Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. *Journal of Personality & Social Psychology*, **81**, 922-934.

Mekel-Bobrov, N., Gilbert, S. L., Evans, P. D., Vallender, E. J., Anderson, J. R., Hudson, R. R., et al. (2005). Ongoing adaptive evolution of *ASPM*, a brain size determinant in *Homo sapiens*. *Science*, **309**, 1720-1722. doi:10.1126/science.1116815

Profita, J., & Bidder, T. G. (1988). Perfect pitch. *American Journal of Medical Genetics*, **29**, 763-771.

Schellenberg, E. G., & Trehub, S. E. (2007). Is there an Asian advantage for pitch memory? *Music Perception*, **25**, 241-252.

Sohn, H.-M. (1999). *The Korean language*. Cambridge: Cambridge University Press.

Takeuchi, A. H., & Hulse, S. H. (1993). Absolute pitch. *Psychological Bulletin*, **113**, 345-361.

Theusch, E., Basu, A., & Gitschier, J. (2009). Genome-wide study of families with absolute pitch reveals linkage to 8q24.21 and locus heterogeneity. *American Journal of Human Genetics*, **85**, 112-119.

Trehub, S. E., Schellenberg, E. G., & Nakata, T. (2008). Cross-cultural perspectives on pitch memory. *Journal of Experimental Child Psychology*, **100**, 40-52.

Wen, B., Li, H., Lu, D., Song, X., Zhang, F., He, Y., et al. (2004). Genetic evidence supports demic diffusion of Han culture. *Nature*, **431**, 302-305.

Zatorre, R. J. (2003). Absolute pitch: A model for understanding the influence of genes and development on neural and cognitive functions. *Nature Neuroscience*, **6**, 692-695.

Zatorre, R. J., Perry, D. W., Beckett, C. A., Westbury, C. F., & Evans, A. C. (1998). Functional anatomy of musical processing in listeners with absolute pitch and relative pitch. *Proceedings of the National Academy of Sciences*, **95**, 3172-3177.

NOTES

1. However, note that Japanese and some Korean dialects are pitch-accent languages, in which pitch can carry some lexical meaning (Ladd, 2008; Sohn, 1999).
2. This did not alter performance: Analysis of the 96-trial tests for the Hmong and Caucasian groups revealed no differences (e.g., fatigue effects) between the first 48 trials and the second 48 trials.
3.
However, as is noted above, the extent of musical training did not differ between groups, and ethnicity effects in pitch-interval identification remained highly significant when removing the variance associated with the musical training covariate in a supplemental ANCOVA. 4. Excluding the 1 Korean participant who spoke a pitch-accent dialect (a mix of Gyeongsang and Seoul dialects) does not significantly alter the results. 5. These analyses on subsets of data involve less statistical power, and the null results should be interpreted cautiously. For example, although the location of childhood here did not approach significance, the numerical trend for higher pitch-interval recognition among East Asian participants who grew up in Asia might encourage further examination. (Manuscript submitted August 9, 2009; revision accepted for publication December 30, 2009.)
PROBABILISTIC FINITE ELEMENT ANALYSIS OF VERTEBRAE OF THE LUMBAR SPINE UNDER HYPEREXTENSION LOADING A. Zulkifli\textsuperscript{1,a}, A.K. Ariffin\textsuperscript{1,b} and M.M. Rahman\textsuperscript{2,c} \textsuperscript{1}Department of Mechanical & Materials Engineering Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia 46300 UKM, Bangi, Selangor, Malaysia Phone: +603-89250200, Fax: +603-89216106 E-mail: firstname.lastname@example.org\textsuperscript{a}; email@example.com\textsuperscript{b} \textsuperscript{2}Faculty of Mechanical Engineering, Universiti Malaysia Pahang 26600 Pekan, Kuantan, Pahang, Malaysia Phone: +609-4242346, Fax: +603-4242202 Email: firstname.lastname@example.org\textsuperscript{c} ABSTRACT The major goal of this study is to determine the stress on vertebrae subjected to hyperextension loading. In addition, probabilistic analysis was adopted in finite element analysis (FEA) to identify the parameters that affect failure. Probabilistic finite element (PFE) analysis plays an important role today in solving engineering problems in many fields of science and industry and has recently been applied in orthopaedic applications. A finite element model of the L2 vertebra was constructed in SolidWorks and imported into ANSYS 11.0 for the analysis. For simplicity, vertebra components were modelled as isotropic and linear materials. A tetrahedral solid element was chosen as the element type because it is better suited to, and more accurate in, modelling problems with curved boundaries such as bone. A Monte Carlo simulation (MCS) technique was used to conduct the probabilistic analysis with the built-in probabilistic module in ANSYS and 100 samples. It was found that the adjacent lower pedicle region showed the highest stress, 1.21 MPa, and that the probability of failure was 3%. The force applied to the facet (FORFCT) deserves particular attention, since the sensitivity assessment revealed that the stress and displacement output parameters are very sensitive to this variable. Keywords: Probabilistic, finite element analysis, lumbar spine, stress, hyperextension. INTRODUCTION In engineering, uncertainties must be quantified in order to make an analysis reflect conditions in nature. Neglecting the uncertainties present in a biological system and its environment can cause an application to fail even when a deterministic calculation suggests it is safe enough. The values of the variables acting on the system cannot be predicted with certainty: structural geometric properties, mechanical properties and external loads are all uncertain in nature. In particular, the uncertainties in the external loads are very serious (Qiu and Wu, 2010). However, Taddei et al. (2006) found that bone stresses and strains in the proximal femur were more sensitive to uncertainties in the geometric representation than in the material properties. In the probabilistic approach, all uncertain variables are considered to be random and the uncertain problems are analysed based on their statistical properties (Qiu and Wu, 2010). Hyperextension is a straightening movement that goes beyond the normal, healthy boundaries of the joint and often results in orthopaedic injury. This movement produces an extreme condition and can create a failure in the vertebra. It may occur during athletic training or by accident. The pedicle is most commonly the part where fractures are observed during trauma, both experimentally and clinically (Xia et al., 2006).
Occasionally, a pedicular fracture may occur that suggests a causative relationship with the patient’s hyperactivity (Sirvanci et al., 2002). Finite element analysis is one of the most advanced simulation techniques and has been used in orthopaedic biomechanics for many decades (Kayabasi and Ekici, 2008). Up to now, many finite element (FE) simulations as well as in vivo or in vitro studies have been conducted for biomechanical analyses of the lumbar spine (Kuo et al., 2010). They can also be successfully applied to the simulation of biomechanical systems (Odin et al., 2010). FE methods have become an important tool to evaluate mechanical stresses and strains in bone (Hernandez et al., 2001) and have been widely used to investigate the mechanical behaviour of bone tissue (Herrera et al., 2007). The purpose of this study is to determine the highest stress on the vertebra due to the hyperextension condition and to calculate the probability of failure for the current model. Sensitivity analyses were incorporated with the probabilistic analysis to support the results and identify the input random variables to which the output parameters are sensitive. The hypothesis for this study is that the pedicle is the most critical region of the vertebra when the facet joints are subjected to hyperextension loading. **METHODOLOGY** A three-dimensional finite element model of a lumbar vertebra was constructed using SolidWorks software and analysed with ANSYS 11.0. The lumbar segment has five vertebrae stacked vertically, but this study focused on a single vertebra, since the analysis is similar for each. The analysis target was the second lumbar vertebra (L2), which is frequently implicated in bone fractures (Sances et al., 1984) and has also been reported on by Woodhouse (2003). ![Figure 1. Anatomy of the vertebrae of the lumbar spine](image) The vertebra is composed of six components: the vertebral body, spinous process, transverse process, lamina, pedicle, and facet joints. Figure 1 shows the anatomy of a lumbar spine vertebra from various angles. The vertebra has two layers, of cortical and cancellous bone, which are here considered as one integrated region of material. The surface of the lumbar vertebra is not regular, so a simplified model was developed by removing unnecessary surfaces and smoothing irregular ones during the trimming process. Three-dimensional meshes with 20-node quadratic tetrahedral elements (SOLID186) were constructed using the automatic mesh function of ANSYS. The critical region was refined with a finer mesh so that reliable results are produced, especially in the vertebral body. To evaluate the effects of the hyperextension condition, a simple compressive loading was applied to the vertebral model shown in Figure 1. The lower vertebral body is fully constrained in all degrees of freedom, whereas the upper body and upper facet (indicated in red) represent the area subjected to a load based on the weight of an 80 kg person. This weight converts to a force of 460 N, or 59% of the total body weight, to represent the upper body comprising the head, trunk and limbs, as reported by Langrana et al. (1996). To quantitatively assess the changes of the hyperextension condition, the portion of load applied to the facet joints was calculated. The pressure applied to the vertebra was defined as in Eq.
(1): \[ \rho(i) = \begin{cases} F_t\,(1 - i/10)/A_b & \text{(body)} \\ F_t\,(i/10)/(2A_f) & \text{(facet)} \end{cases} \] where \( \rho \) (Pa) denotes the pressure applied to the vertebra, \( F_t \) is the total force, and \( A_b \) and \( A_f \) are the surface areas of the body and facet, respectively. **Material Properties** In nature, bone is a non-linear, inhomogeneous and anisotropic material and varies in the boundary regions between cortical and cancellous bone (Xia et al., 2006; Yang et al., 2010; Peng et al., 2006). However, for simplicity, most studies performed in this area have been based on the assumption that bone material has an isotropic and inhomogeneous distribution of material properties (Yang et al., 2010; Peng et al., 2006). Therefore, this study assumed linear isotropic behaviour, and the whole vertebra is considered as having cortical bone properties. The random input variables were arbitrarily assumed as defined in Table 1. Standard deviations were computed by assuming a coefficient of variation (COV) of 0.1, and distribution types were assumed based on experience. **Table 1. Type of model random variables** | Variables | Description | Mean | COV\(^a\) | Distribution type | Ref. | |---------------|-------------------|--------|-----------|-------------------|---------------| | YMODCOR | Young's modulus | 12 GPa | 0.21 | Lognormal | (Thacker et al., 2001) | | PSSNRAT | Poisson's ratio | 0.3 | ±0.017 | Uniform | (Sarah et al., 2007) | | FORBDY | Force to the body | 414 N | 0.1 | Normal | b | | FORFCT | Force to the facet| 46 N | 0.1 | Normal | b | | AREBDY | Body area | 1298 mm\(^2\) | 0.1 | Lognormal | b | | AREFCT | Facet area | 166 mm\(^2\) | 0.1 | Lognormal | b | \(^a\)COV = coefficient of variation \(^b\)Arbitrarily assumed Reliability and Probabilistic Analysis A probabilistic analysis was conducted of structural failure under uncertain material and geometric characteristics, subject to random loads applied to the model. $X$ denotes a vector of random variables, with components $X_1, X_2, ..., X_n$ representing the uncertainties in the load, material properties and geometry (Akramin et al., 2007). The probabilistic design system was modelled as Eq. (2): $$Z(X) = Z(X_1, X_2, X_3, ..., X_n) \quad (2)$$ where $Z(X)$ is a random variable describing the system response (e.g. stress, displacement) at a node or element. Each random variable is defined by a probability density function (PDF), which is commonly specified by parameters such as a mean value, standard deviation and distribution type. The structural uncertainties are generated by the Latin Hypercube Sampling (LHS) technique, which requires fewer simulation loops to achieve a given accuracy. The limit state function for the lumbar vertebra, $g(X)$, can be expressed as Eq. (3): $$g(X) = Y(X) - S(X) \quad (3)$$ where $Y(X)$ is the yield strength of bone, $S(X)$ is the von Mises stress computed from the FEA and $X$ is a random variable as defined earlier. Model failure occurs if $g \leq 0$, whereas no failure occurs if $g > 0$. The probability of failure ($P_f$) is the likelihood that the stress exceeds the yield strength of bone, i.e. that the function satisfies $g \leq 0$ (Sarah et al., 2007). The probability of survival, $P_s = 1 - P_f$, is referred to as the reliability. An MCS was performed on a powerful computer to minimize cost and time consumption. This method converges to the correct solution but needs many samples during the analysis.
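To make the workflow concrete, the following is a minimal Python sketch of the MCS just described, using the load split of Eq. (1) and the distributions of Table 1. It is a sketch under stated assumptions, not the authors' implementation: the ANSYS FE solve for the von Mises stress $S(X)$ is replaced by a simple nominal-stress placeholder, and the yield strength value is hypothetical (the paper does not quote one; here it is chosen so the illustrative $P_f$ lands near the study's 3%).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of MCS samples, as in the study

def pressures(i, F_t=460.0, A_b=1298e-6, A_f=166e-6):
    """Pressures (Pa) from Eq. (1): fraction i/10 of the total force F_t goes
    to the two facets, the rest to the vertebral body. 460 N is about 59% of
    an 80 kg person's weight (0.59 * 80 * 9.81 = 463 N)."""
    return F_t * (1 - i / 10) / A_b, F_t * (i / 10) / (2 * A_f)

def lognormal(mean, cov, size):
    """Lognormal samples parameterized by mean and coefficient of variation."""
    sigma = np.sqrt(np.log(1 + cov**2))
    return rng.lognormal(np.log(mean) - 0.5 * sigma**2, sigma, size)

# Random inputs following Table 1 (E in Pa, forces in N, areas in m^2).
E   = lognormal(12e9, 0.21, N)    # YMODCOR (affects displacements, not the
                                  # load-controlled nominal stress below)
F_b = rng.normal(414.0, 41.4, N)  # FORBDY
F_f = rng.normal(46.0, 4.6, N)    # FORFCT
A_b = lognormal(1298e-6, 0.1, N)  # AREBDY
A_f = lognormal(166e-6, 0.1, N)   # AREFCT

# Placeholder for the FE solve: the study computes the von Mises stress in
# ANSYS; a nominal-stress stand-in keeps this sketch runnable.
S = F_b / A_b + F_f / (2 * A_f)

Y = 0.55e6             # hypothetical yield strength (Pa), for illustration only
g = Y - S              # limit state function, Eq. (3)
P_f = np.mean(g <= 0)  # probability of failure
print(f"P_f = {P_f:.2f}, reliability P_s = {1 - P_f:.2f}")
```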
The number of simulations necessary in an MCS to provide this kind of information is usually between 50 and 200; this study therefore used 100 samples, after considering the complexity of the model and the range of the simulation. In general, the more simulation loops are performed, the more accurate the results will be. ![Figure 2. Work sequence of a probabilistic finite element program](image) The uncertainty of the mechanical properties of bone, especially the Young's modulus of vertebrae, depends on the person, since physiological loading affects the stress distribution of the vertebra. Therefore, the PFE program was developed using ANSYS software incorporating MCS. The work sequence of a patient-specific FEA using the ANSYS software program is shown in Figure 2. RESULTS AND DISCUSSION Figure 3 shows the stress distribution of the vertebra under compression loading, where the contours represent the level of stress. It was found that the highest stress concentration was at the adjacent lower posterior vertebral body, with a von Mises stress value of 1.2117 MPa. Stress concentration will reduce the mechanical integrity of the bone, making it susceptible to fracture during trauma (Kasiri & Taylor, 2008). This critical area of the vertebral body tends to act as a pivot when another load is applied to the facet joints, creating a bending effect. A longer distance between the facet joints and the vertebral body causes an increase in the bending moment, as well as in the stress concentration. ![Figure 3. The highest stress distribution of the vertebra](image) The displacement of the model is very small, at about $0.24758 \times 10^{-8}$ mm. This results from the assumption that all components act as one body with the same material, cortical bone. Cortical bone is brittle compared with the other materials and has the highest strength of the vertebral components. Failure or fracture of the bone starts at the highest stress concentration, which produces the weakest area of the bone. This result agrees well with research by El-Rich et al. (2009), which concluded that in extension loading the maximum stress is located in the lower pedicle region of L2 and fractures start in the left facet joint, then expand into the lower endplate. In Figure 4, the stress distributions for different load ratios represent the effect of hyperextension; the ratios compare the proportions of the load applied to the vertebral body and the facet joints. For the ratio $i = 1$ there is some stress in the vertebral body, whilst for the ratio $i = 3$ the vertebral body is not wholly affected. Hence, load ratio $i = 3$ means that hyperextension starts once the facet joints sustain in excess of 30% of the total load, as reported by Nabhani and Wake (2002) and Hall (1995). The cumulative distribution function (CDF) offers a way to assess a probabilistic design variable. This feature is very helpful for evaluating the probability of failure or the reliability of a component for a given limit value. For this study, the limit state function in Eq. (3) used the yield strength of the material as the limit value to determine the probability of failure if $g \leq 0$. The curve in Figure 5 indicates that the probability of success complies with the limit state function $g > 0$: there is about a 97% (0.97) probability that the stress remains below 1.2117 MPa. Therefore, the probability of failure is $1 - 0.97 = 0.03$, i.e. a 3% probability that the stress exceeds 1.2117 MPa.
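In code, the CDF readout above is simply an exceedance count over the MCS samples; the probabilistic sensitivities discussed in the next section are commonly computed as rank correlations between sampled inputs and outputs (Spearman's rank correlation is assumed here; the text does not state which measure the ANSYS module uses). A minimal sketch, reusing the arrays from the previous listing:

```python
import numpy as np
from scipy.stats import spearmanr

def prob_failure(stress_samples, limit):
    """P_f = 1 - CDF(limit): fraction of sampled stresses above the limit.
    In the study, CDF(1.2117 MPa) = 0.97, i.e. P_f = 0.03; the stand-in
    stress of the previous listing is lower, so this prints 0.0 here."""
    return float(np.mean(np.asarray(stress_samples) > limit))

def sensitivities(inputs, output):
    """Rank-correlation sensitivity of one output to each random input."""
    sens = {}
    for name, x in inputs.items():
        rho, _ = spearmanr(x, output)
        sens[name] = rho
    return sens

print(prob_failure(S, 1.2117e6))
print(sensitivities({"FORBDY": F_b, "FORFCT": F_f,
                     "AREBDY": A_b, "AREFCT": A_f}, S))
```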
From these results, a probability of failure of 3% indicates that the model is very reliable and safe to use: only a small fraction of the sampled input combinations drives the stress beyond the yield strength. The probabilistic sensitivity diagrams in Figure 6 illustrate the variables to which the maximum stress and the maximum deflection, respectively, are sensitive. The sensitivities are given as absolute values in the bar chart and as relative values in the pie chart. Four input variables strongly influence the stress and deflection, as shown in Figure 6. The variable that most strongly affects both the maximum stress and the maximum deflection is FORFCT, the force applied to the facet joints. This means that a small change in one of the important variables will result in a change in the computed probability. Positive sensitivity values indicate that a positive change in the mean value of a variable will result in an increase in the computed probability, and vice versa for negative sensitivities. The insignificant or unimportant random variables have been eliminated from the sensitivity chart to improve the computational efficiency. Figure 6. Sensitivity factors for (a) maximum stress and (b) maximum deflection Figure 7. Scatter plot for input variables (a) AREBDY and (b) FORFCT The scatter plot in Figure 7(a) shows the relationship between the AREBDY variable and the maximum stress, while Figure 7(b) shows that between the FORFCT variable and the maximum deflection. These two scatter plots represent the correlations between the input variables and the output parameters generated by the same sample set. There are 100 blue dots representing the 100 sampling points used in this analysis. Probabilistic sensitivities measure how much the range of scatter of an output parameter is influenced by the scatter of the random input variables. They are reflected in the slope of the gradient and the width of the scatter range; the slope of the gradient depends on the scatter ranges of the random input variables and the output parameters. To improve the reliability, there are two options: 1) reduce the width of the scatter, and 2) shift the range of scatter. However, we do not discuss these options here as they are beyond the scope of this study. Since these variables contribute the most to the computed probability, improved estimates for the mean, standard deviation, and distribution will have the most impact on the computed probability (Thacker et al., 2001). CONCLUSION This study has achieved its objectives of determining the stress concentration and the probability of failure of the lumbar vertebra using finite element analysis. The probabilistic analysis method investigated here is useful for understanding the inherent uncertainties and variations in biological structures. ACKNOWLEDGEMENT The authors would like to thank Universiti Malaysia Pahang for supporting this research. Thanks also to all the members who were involved in this study. REFERENCES Akramin, M.R.M., Abdulnaser, A., Hadi, M.S.A., Ariffin, A.K. and Mohamed, N.A.N. 2007. Probabilistic analysis of linear elastic cracked structures. *Journal of Zhejiang University SCIENCE A*, 8(11): 1795-1799. El-Rich, M., Arnoux, P.J., Wagnac, E., Brunet, C. and Aubin, C.A. 2009. Finite element investigation of the loading rate effect on the spinal load-sharing changes under impact conditions.
*Journal of Biomechanics*, 42: 1252-1262. Hall, S.J. 2007. *Basic Biomechanics (5th edition)*. New York: McGraw-Hill Companies. Hernandez, C.J., Beaupre, G.S., Keller, T.S. and Carter, D.R. 2001. The influence of bone volume fraction and ash fraction on bone strength and modulus. *Bone*, 29: 74-78. Herrera, A., Panisello, J.J., Ibarz, E., Cegonio, J., Puértolas, J.A. and Gracia, L. 2007. Long-term study of bone remodeling after femoral stem: A comparison between dexa and finite element simulation. *Journal of Biomechanics*, 40: 15-25. Kasiri, S. and Taylor, D. 2008. A critical distance study of stress concentrations in bone. *Journal of Biomechanics*, 41: 603-609. Kayabasi, O. and Ekici, B. 2008. Probabilistic design of a newly designed cemented hip prosthesis using finite element method. *Materials and Design*, 29: 963-971. Kuo, C.S., Hu, H.T., Lin, R.M., Huang, K.Y., Lin, P.C., Zhong, Z.C. and Hsieh, M.L. 2010. Biomechanical analysis of the lumbar spine on facet joint force and intradiscal pressure - a finite element study. *BMC Musculoskeletal Disorders*, 11: 1-13. Langrana, N.A., Edwards, W.T. and Sharma, M. 1996. Biomechanical analyses of loads on the lumbar spine. In: Wiesel, S.W., Weinstein, J., Herkowitz, H., Dvorak, J. and Bell, G. *The Lumbar Spine*. Philadelphia, W.B. Saunders Company. pp. 163-181. Nabhani, F. and Wake, M. 2002. Computer modelling and stress analysis of the lumbar spine. *Journal of Materials Processing Technology*, 127: 40-47. Odin, G., Savoldelli, C., Bouchard, P.O. and Tillier, Y. 2010. Determination of Young’s modulus of mandibular bone using inverse analysis. *Medical Engineering & Physics*, 32: 630-637. Peng, L., Bai, J., Zeng, X. and Zhou, Y. 2006. Comparison of isotropic and orthotropic material property assignments on femoral finite element models under two loading conditions. *Medical Engineering & Physics*, 28: 227-233. Qiu, Z.P. and Wu, D. 2010. A direct probabilistic method to solve state equations under random excitation. *Probabilistic Engineering Mechanics*, 25: 1-8. Sances Jr., A., Myklebust, J.B., Maiman, D.J., Larson, S.J., Cusick, J.F. and Jodat, R.W. 1984. The biomechanics of spinal injuries. *Critical Reviews in Biomedical Engineering*, 11: 1-76. Sarah, K.E., Saikat, P., Tomaszewski, P.R., Petrella, A.J., Rullkoetter, P.J. and Peter, J.L. 2007. Finite element-based probabilistic analysis tool for orthopaedic applications. *Computer Methods and Programs in Biomedicine*, 85: 32-40. Sirvanci, M., Ulusoy, L. and Duran, C. 2002. Pedicular stress fracture in lumbar spine. *Journal of Clinical Imaging*, 26: 187-193. Taddei, F., Martelli, S., Reggiani, B., Cristofolini, L. and Viceconti, M. 2006. Finite-element modeling of bones from CT data: sensitivity to geometry and material uncertainties. *IEEE Transactions on Biomedical Engineering*, 53: 2194-2200. Thacker, B.H., Nicolella, D.P., Kumaresan, S., Yoganandan, N. and Pintar, F.A. 2001. Probabilistic finite element analysis of the human lower cervical spine. *Math Model Sci Computer*, 13: 12-21. Woodhouse, D. 2003. Post-traumatic compression fracture. *Clinical Chiropractic*, 6: 67-72. Xia, Q.T., Tan, K.W., Lee, V.S. and Teo, E.C. 2006. Investigation of thoracolumbar T12-L1 burst fracture mechanism using finite element method. *Medical Engineering & Physics*, 28: 656-664. Yang, H., Ma, X. and Guo, T. 2010. Some factors that affect the comparison between isotropic and orthotropic inhomogeneous finite element material models of femur. *Medical Engineering & Physics*, 32: 553-560.
Christmas in All its Weirdness And now, your relative Elizabeth in her old age has also conceived a son; and this is the sixth month for her who was said to be barren. For nothing will be impossible with God.’ Then Mary said, ‘Here am I, the servant of the Lord; let it be with me according to your word.’ Then the angel departed from her. In those days Mary set out and went with haste to a Judean town in the hill country. – The Gospel of St. Luke. We just read those words and go, “Ok, cool. Virgin pregnancies. And Medicare pregnancies. That’s normal, it’s Christmas.” Maybe we better rethink Christmas. It’s like one church’s nativity scene: they had a living nativity one day a year during Advent … it was a little manger scene in the parking lot you could drive by. It was filled with straw and live animals and people dressed as Mary and Joseph and the other typical nativity characters. It was usually pretty cold so the shifts only lasted 20 minutes before new folks would step in. One year the pastor’s wife was helping the different folks get dressed when a 7-year-old boy came in from his shift. The pastor’s spouse asked him how he had liked being a shepherd in the nativity scene. “It was ok,” he answered, “but I think next year I wanna be a pirate”. You remember, right? The pirate that was at the birth of our Lord? This of course is absurd, but let’s be honest, a pirate was just about as likely as a drummer boy. Seriously, the Little Drummer Boy? It’s the perfect example of weird things creeping into nativity scenes. Like when along with the sheep and goats you occasionally see a pig in a crèche scene as though there were swine at the birth of our Jewish Lord. Maybe the worst are those awful nativities that include a pious little Santa Claus kneeling at the manger. Placing drummers and pigs and pirates and Santas in nativity scenes is inappropriate if not just Biblically illiterate. But, but, but: is a drummer boy any less strange than the magi? Think about those astrology-obsessed sharp-dressed gypsy dudes from St. Matthew’s Gospel. Whoa. All that to say, does our over-familiarity with the Christmas story prevent us from understanding how weird it really was? I mean, if it involves virgin pregnancies and old ladies from the hill country and soothsaying magi and rank shepherds and fearsome angels and celestial choirs and God being born as a refugee in straw and mud then who’s to say a pirate or a drummer is so weird? Don’t miss Christmas in all its weirdness. Memorials Building Fund In memory of Sharon Thomas from Randy George. Music Fund In memory of Rosemary Baldridge from Rick & Sandra Githens First Gift In memory of Oscar Edward Cloyd, Sr. from Martha Cloyd Peal In memory of William Ralph Peal from Martha Cloyd Peal Honorary Building Fund In honor of Mr. & Mrs. Paul Merkle from Tom & Carolyn Murphy & Martha Storer Our Sympathy To: Will and Lori Beth Rhodes and their children, Addison & Caroline, in the death of his grandmother, Doris L. Rhodes. Blue Christmas Service Thursday, December 21, 6:00 pm | Kilpatrick Chapel The Christmas and winter seasons are a complicated and difficult time of year for many people. From mourning what has been lost, to what was never had, from life circumstances such as illness, the death of a loved one, unemployment, and the excess of expectation during the holiday season, this time of year can be a very “blue” season. Join Dr. Jack O’Dell as he leads us in a time of worship reminding us that there is always light in the midst of the darkness.
"Silent night, holy night..." Coming Soon in Missions... Common Ground | 6806 Southern Ave. | Wednesdays, 8:30 - 10:30 am (Winter Hours) Attention: No Highland Blessing Dinner in December! Do to the high number of events during the week of Christmas, the Highland Center has decided not to host the Highland Blessing dinner on the day of the month we normally host. We will host again on Thursday, January 25. Pat Ferguson’s mother, Lurline Folk, was Broadmoor UMW President in the 80’s and wore this special pin. Many thanks to Pat for donating her mom’s pin to be worn by future UMW Presidents in memory of Lurline. Allison Wray Events Coming Soon... Adam Hamilton Study on “Why” The Covenant Tiger’s Sunday School class will be studying Adam Hamilton’s book “Why”. In it he addresses questions such as ‘where is God when the innocent suffer? Where is God when my prayers go unanswered? Why is God’s will so hard to understand? The Covenant Tigers meet in room 312 in the three story building and welcome all who are interested. Christmas Caroling | Thursday, December 21 Hanna and Reece Roark will host a gathering in their home at 5:30 pm on Thursday, December 21, 2017 for anyone interested in Christmas Caroling. Bring your favorite Christmas hors d’oeuvres or desserts to the Roark’s home to share with other carolers. The Roarks are members of the River worship leadership team and lend their incredible talents to that service faithfully. Now they are sharing their faith and abilities in continuing this century old Christmas tradition. After sharing Christmas snack foods, hot cider, and hot chocolate; Hanna and Reece will lead us in a brief carol rehearsal. As soon as we are warmed up, we will jump in cars and travel caravan style to homes of persons who have experienced a major life passage and for whom this Christmas season may be different from Christmases past. If you play an instrument, please bring that along as well. For more information, please contact Benny Vaughan, 318-751-7563 or Hanna Roark, 318-286-2286. “Silent night, holy night…” Women’s Retreat: “Stars at Night: When Darkness Unfolds as Light” | Saturday January 6 At some point in life, we all find ourselves lost and disillusioned, seeking to find our way. The death of a loved one, the loss of a job, facing the unexpected illness, both mental and physical, the pain of divorce or aging…Each of these has the potential to alter the way we look at ourselves and the way we think about the journey. Join us from 9 – 3:30 p.m. as we kick off the New Year sharing our stories, studying scriptures, and experiencing creative ways that we can rekindle our joy. Together in community we will find hope and love holding us in the midst of difficult times. We will be using the book, “Stars at Night: When Darkness Unfolds as Light” by Paula D’Arcy as our resource. A copy for you is included with your registration. Cost $30. First Graders Save the Date January 7th! We will celebrate you during our 11:00 services. To kick off the new year, you and your family will be presented with a family devotional. Don’t miss this milestone and opportunity to gather as a family and spend time in God’s Word. For questions or more information contact Kristin at the church office or by email email@example.com To Russia With Love This summer, thousands of people will be traveling to Yekaterinburg, Russia to watch several matches of the World Cup. Over 20 years ago, James Gillespie and members of his mission trip got off the train in the same city... by mistake. 
The grace-filled result of that trip has led to a relationship between Broadmoor United Methodist Church and First Yekaterinburg United Methodist Church that continues to this day. First Yekaterinburg now shares its building with another congregation, Holy Trinity, who joyfully work to invite people into a relationship with Christ. This Advent season, Olga Fateeva, one of the members of Holy Trinity, had the opportunity to visit her friends in Louisiana. She was able to visit with me on Friday to connect and share what’s going on in Russia. She was excited to share about the home groups her church is growing. They are a wonderful way to grow leadership in Christian communities. She and James also shared the plans for an upcoming Mission Trip to Yekaterinburg, Russia. This year’s trip will coincide with the end of the World Cup in Yekaterinburg. Our mission team plans include visiting home groups that practice speaking English, assisting with Bible Camp (known to us as Vacation Bible School), and offering support to a leadership retreat hosted by the United Methodist Bishop of Russia. I asked Olga what we could do to support her and her church family. Without hesitation, she asked us to pray for them. Prayer is a powerful way to keep the connection between our churches strong. We can also support them with supplies for their Bible Camp. In addition, Olga emphasized the importance of us visiting our Russian brothers and sisters. As Olga began attending church she remembered how important it was for her to see guests from other countries. There is great support found in fellowship as Christians from around the world share their beliefs and their struggles. This Advent season, remember to pray for Olga, Holy Trinity, and First Yekaterinburg United Methodist Church. Take a moment to see how the Spirit is calling you to strengthen our connection with a church on the other side of the globe. If you’d like to hear more stories and get connected to this ministry in any way, contact James Gillespie at firstname.lastname@example.org. Monday Pathways | Led by Laura Vaughan and Callie Hamm | Begins January 8 Grounded in being present to how God is working in our lives, we share our stories, our heartaches and our joys in a safe space. We will kick off the New Year with Anne Lamott’s book “Hallelujah Anyway” (Cost $15.) Monday mornings from 10:30 am – 11:30 am in the Parlor. Monday Women’s Bible Study Led by Bonnie Daniel | Begins January 8 Join us from 10:00 - 11:30 in the meeting room from January 8 – Feb. 26 as Bonnie Daniel leads us in the study of Kelly Minter’s “No Other Gods: Confronting Our Everyday Idols.” Our lives revolve around our deepest needs and greatest treasures: relationships, family, financial security, private hopes and dreams. No Other Gods offers a revealing look at the heart of a woman. Author Kelly Minter explores what happens when good desires become false gods, robbing us of an intimate relationship with our heavenly Father. So discover the freedom in surrender. The healing in worship. And the joy found in exchanging everyday gods for the one true God. Cost is $20 Wednesday Pathways Led by Mary Virginia Taylor | Begins January 10 Join us as we listen for God’s presence and voice through Lectio divina, a slow, contemplative praying of the scriptures or of various spiritual writings. Meets on Wednesday evenings from 4:30 pm – 5:30 pm in the library.
More than a Mom | Led by Kelie Taylor and Laura Vaughan | Begins January 10 Our young moms’ group meets on Wednesdays from 9:15 – 10:45 to share the joys and struggles of being a mom. Join us for a time to grow deeper friendships with other women, share life stories, and encourage each other on our faith journeys. We will kick the new year off focusing on “Tips for Taking Care of Yourself: Eight Weeks of Wellness”. As we begin a new year, the tradition is to make resolutions, usually in order to lose weight or become more physically active. But what about the rest of who we are? Join us as we learn about the eight areas of wellness and work together to explore how we can live a more balanced, healthy life. These sessions will be led by therapists Ashley Drew (LMSW) and Shannon Huertas (PLPC), who work at the Family Counseling Center in Shreveport. Child Care is provided. Enneagram Introductions | Led by Sarah Duet | Begins Wednesday January 10th (5 weeks) 6-7:30 p.m. Are you interested in being a better version of yourself? In having healthier relationships? Are you ever perplexed by yourself, loved ones, or coworkers? If you answered yes to any of these questions, the Enneagram could be a valuable tool for you! Join us for a 5-week study on the value of self-knowledge and the role of personality in our relationships with God, one another, and our own selves. You can expect to gain valuable insights and practical tools with which to grow and mature in your spiritual journey. Cost $25. Tiptoeing with Tiny Tim | Led by Dr. Greg Davis | Begins January 9 or 10 You are invited to a four-week study of Timothy with Dr. Greg Davis. There are two time options: 6 p.m. on Tuesdays or 10 a.m. Wednesdays. Cost is $20. Board Game Night | Young Adults | January 12 The Vine Sunday School Class is hosting “Board Game Night” on Friday, January 12th at 6:00 p.m. at the home of Peter and Leah Gaughan. For more information, please contact Rachel Sherman at email@example.com. Women’s Ministry Fundraiser January 19 Join us on Friday, January 19th from 6-8 p.m. for our annual Women’s Ministry Fundraiser! Individual tickets ($30) will go on sale Sunday, January 7th at Broadmoor UMC. To reserve a table ($300), call 318.861.0586 or email firstname.lastname@example.org. All proceeds will go to fund our Women’s Ministry. What a Fantastic Weekend! Last weekend Broadmoor was filled with the anticipation and excitement of the Christmas season. We kicked off the weekend on Saturday with the Big Man himself: Santa took time to spread some holiday cheer at our Pancakes and Parents’ Morning Out. Kids had the opportunity to get that last gift request in and to make some gifts of their own. On Sunday we celebrated March to the Manger as a church family. After a time of fellowship, we watched the story of the first Christmas unfold, a story of God’s promise of a Savior sent to redeem His people. Our greatest gift, God’s love and grace, came as a baby. Just as the Wise Men brought gifts to honor baby Jesus, we brought gifts to bless children in our community. It truly was a weekend full of joy. Thank you to everyone who helped make this weekend so special! Want to Volunteer? Haven’t found the right place to volunteer? Consider running sound for the traditional service. It’s FUN! and easy. Don’t think you can? We can show you how easy it is.
Contact Stephanie @ 861-0586 or email@example.com FIRST GIFTS TO THE CHRIST CHILD A Great Way to Enter Advent During December it has been a custom at Broadmoor UMC to receive a special offering called First Gifts. In this season of gift giving some find it most fitting that their first gift of Christmas is a gift back to the One who has given all for us. This special offering comes at an important time of the year for our church. It is a time when we are completing the financial year of the church by receiving the final donations and taking care of the last expenditures. In the last few years, with your First Gifts and other second-mile giving, we have been successful in covering all expenses and beginning each New Year with sufficient funds to move forward. It is out of your generosity that this has been possible. Thanks for your prayerful consideration of your First Gift this Christmas. Please have any donations you want counted in 2017 delivered to the church or postmarked by December 31, 2017. Gifts will be received December 27, 28, & 29 at the church office from 10:00 am - 2:00 pm. Here are some opportune ways to give at Year-End: • Cash – gifts of cash are easy and may be eligible for a federal tax deduction. • Securities – gifts of appreciated assets are appealing because long-term gains are not taxed, and the gift is generally deductible from income tax at the full value on the date of the gift (and can offset tax up to 30% of AGI). Please consult your professional advisers on how this gift would fit into your overall plans and your eligibility for tax benefits. For those over 70 ½, Federal legislation allows you to give from your IRA directly to a public charity. • Life Income Gifts – may bring many benefits: your generous support for our work, fixed income for life, and potential tax benefits.* Check our website for more information about gift annuities.
Coexistence of Distinct Single-Ion and Exchange-Based Mechanisms for Blocking of Magnetization in a Co$^{II}_2$Dy$^{III}_2$ Single-Molecule Magnet Kartik Chandra Mondal, Alexander Sundt, Yanhua Lan, George E. Kostakis, Oliver Waldmann,* Liviu Ungur, Liviu F. Chibotaru,* Christopher E. Anson, and Annie K. Powell* In memory of Ian J. Hewitt Research efforts in the quest for new single-molecule magnets (SMMs) have increasingly focused on systems either based on or else incorporating 4f ions.[1] For most pure 3d systems, and especially those containing the Mn$^{III}$ ion such as the original Mn$_{12}$-Ac coordination cluster, spin reorientation is blocked when the ground-state spin ($S$) combines with uniaxial magnetic anisotropy ($D$) to give an energy barrier to magnetic relaxation, with the superexchange interactions between the metal centers leading to a molecular spin ground state and a molecular anisotropy.[2,3] The resultant exchange-based blocking of magnetization can be analyzed using a giant spin model.[4] In systems incorporating highly anisotropic 4f ions,[1] it has become clear that magnetic interactions between 4f ions are weak and generally dipolar in nature. Here the single-ion spin and anisotropy become of greater relevance. For example, recent calculations on a Dy SMM showed that the blocking mechanism largely arises from the individual Dy$^{III}$ ions, with exchange-based behavior only seen at very low temperatures.[5] In systems combining 3d and 4f ions the aim is to embed highly anisotropic 4f ions into an exchange-coupled molecular 3d system, since 3d–4f interactions can be intermediate in magnitude between 3d–3d and 4f–4f. However, analysis of the origins of the blocking mechanism in such systems is not straightforward and can generally only be achieved through detailed ab initio calculations, such as we recently reported for a Cr$_4$Dy$_4$ SMM.[6] We now present a SMM comprising two Co$^{II}$ and two Dy$^{III}$ ions for which we can demonstrate the novel situation of single-ion blocking of the Dy$^{III}$ ions at higher temperatures with a crossover to molecular exchange-based blocking at low temperatures. Reaction of Dy(NO$_3$)$_3\cdot6$H$_2$O, Co(NO$_3$)$_2\cdot6$H$_2$O, H$_2$L and Et$_3$N in the molar ratio 1:1:2:4 in MeOH gives a crystalline red powder, which was recrystallized from THF to give pink crystals of [Co$_2$Dy$_2$(L)$_4$(NO$_3$)$_2$(THF)$_4$]·4THF (1) in 75% yield. H$_2$L is the Schiff base we previously described,[7] resulting from condensation of $o$-vanillin and 2-aminophenol to give a “pocket ligand” capable of binding two different types of metal ion (see Figure S1 in the Supporting Information). Compound 1 crystallizes in the triclinic space group $P\bar{1}$ with $Z = 1$. Within the core of the centrosymmetric complex, the metal ions are linked by four (L)$^{2-}$ ligands in the butterfly (or defect-dicubane) topology (Figure 1). One of the two crystallographically independent ligands chelates Dy(1) through its imine nitrogen and the two phenolate oxygens O(1) and O(3) (corresponding to pocket I, see Figure S1 in the Supporting Information). Co(1) and Co(1') are linked through a $\mu_2$-OR bridge from the phenolate O(3), which also connects to Dy(1) to form a Co$_2$Dy triangle. The other ligand chelates Co(1) through pocket I as well as through the O(4) and O(5) donors of pocket II, which also coordinate Dy(1), with the (amino)phenolate O(6) bridging to Dy(1').
The ligand and its inversion equivalent provide $\mu$-OR bridges along the four outer edges of the Co$_2$Dy$_2$ rhombus, with a chelating nitrate on Dy$^{III}$ and the O(10) oxygen of a THF on Co$^{II}$ completing the respective coordination spheres. Co(1) has a slightly distorted octahedral geometry with an O$_{5}$N donor set, whilst the O$_{6}$N donor set about Dy(1) is close to pentagonal-bipyramidal geometry if we regard the chelating nitrate as a single donor (similar cone angle as for a chloride) on an axial site (Figure S2 in the Supporting Information). The complexes are surrounded by lattice THF molecules, which prevent any intercomplex $\pi-\pi$ stacking, and the intermolecular Dy···Dy distances are over 10 Å. Full details of the structure are in the Supporting Information, with selected bond lengths and angles in Table S2. The magnetic data of **1** were collected on a powdered polycrystalline sample. The $\chi T$ product under applied direct-current (dc) magnetic fields ranging from 0.01 to 1 T in the temperature range 1.8 to 300 K (Figure 2) shows that, with decreasing temperature, $\chi T$ decreases slightly from 300 to 55 K, then increases sharply down to 3 K before decreasing again to 1.8 K. The behavior between 55 and 3 K suggests that intramolecular ferromagnetic interactions dominate. The room-temperature $\chi T$ value of about 35.3 cm$^3$ K mol$^{-1}$ per molecule is higher than the expected value of 32.1 cm$^3$ K mol$^{-1}$ (Co$^{II}$: $S = 3/2$, $g = 2$; Dy$^{III}$: $S = 5/2$, $L = 5$, $^{6}H_{15/2}$, $g_J = 4/3$), consistent with the presence of ferromagnetic interactions together with an unquenched orbital contribution from the Co$^{II}$ ions.[9] The magnetization (Figure 2, inset) below 5 K increases abruptly below 0.5 T, confirming the presence of ferromagnetic interactions in **1**. At higher fields the magnetization increases linearly, reaching 15.2 $\mu_B$ without saturation even at 7 T, suggesting the presence of low-lying excited states and/or magnetic anisotropy in **1**.[9,10] The strong temperature and frequency dependences of the in-phase ($\chi'$) and out-of-phase ($\chi''$) alternating current (ac) susceptibility signals under zero dc field (Figure 3 and Figures S4 and S5) are characteristic of SMM behavior. ![Figure 2](image) *Figure 2.* $\chi T$ versus $T$ at different applied dc fields. Inset: $M$ versus $H$ and $M$ versus $H/T$ plots at indicated temperatures. Experimental data: unfilled symbols; solid lines: ab initio calculated magnetism (see text for details). ![Figure 3](image) *Figure 3.* a) Logarithm of the relaxation time $\tau$ plotted as a function of $1/T$. Empty circles refer to the data extracted from the ac out-of-phase signal and full squares to those extracted from dc magnetic decay measurements. The lines represent Arrhenius fits to the data in the temperature ranges 1.6–8 K and 18–22 K, yielding $\Delta E_1 = 11.0$ cm$^{-1}$, $\tau_1 = 7.7 \times 10^{-4}$ s, and $\Delta E_2 = 82.1$ cm$^{-1}$, $\tau_2 = 6.2 \times 10^{-7}$ s, respectively. b) Frequency dependence of the ac out-of-phase susceptibility for $T = 2$ to 25 K. c) Dc magnetization decay curves for $T = 1.6$–2.6 K. The frequency dependence of the ac susceptibility was analyzed using the Debye model to extract the relaxation time $\tau$, plotted as a function of $1/T$ in Figure 3a.
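This analysis pipeline can be sketched in a few lines of Python: the generalized Debye (Cole–Cole) expression gives $\chi(\omega)$ for a single relaxation process, $\tau$ at each temperature can be read from the $\chi''$ maximum ($\omega\tau = 1$ at the peak, exact for the ideal Debye case and a good approximation for the small $\alpha$ values found below), and an Arrhenius fit of $\ln\tau$ versus $1/T$ yields the barrier. The function names and synthetic data are ours; the parameters quoted in the text are used only as a consistency check.

```python
import numpy as np

kB = 0.695  # Boltzmann constant in cm^-1 K^-1

def gen_debye(omega, chi_T, chi_S, tau, alpha):
    """Generalized Debye (Cole-Cole) ac susceptibility chi(omega)."""
    return chi_S + (chi_T - chi_S) / (1 + (1j * omega * tau) ** (1 - alpha))

def tau_from_peak(freq_hz, chi_out):
    """Relaxation time from the chi'' maximum (omega * tau = 1 at the peak)."""
    return 1.0 / (2 * np.pi * freq_hz[np.argmax(chi_out)])

def arrhenius_fit(T, tau):
    """Fit ln(tau) = ln(tau0) + dE/(kB*T); returns (dE in cm^-1, tau0 in s)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(tau), 1)
    return slope * kB, np.exp(intercept)

# Synthetic chi''(f) at one temperature, then peak read-off:
f = np.logspace(0, 4, 400)  # Hz
chi = gen_debye(2 * np.pi * f, chi_T=5.0, chi_S=0.1, tau=1e-3, alpha=0.02)
print(tau_from_peak(f, -chi.imag))  # ~1e-3 s

# Check against the high-temperature regime quoted in the text:
T = np.linspace(18, 22, 5)
tau = 6.2e-7 * np.exp(82.1 / (kB * T))
print(arrhenius_fit(T, tau))  # recovers ~(82.1 cm^-1, 6.2e-7 s)
```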
There are two thermally activated regimes, with $\Delta E_1 = 11.0$ cm$^{-1}$ and $\tau_1 = 7.7 \times 10^{-4}$ s in the temperature range 1.6–8 K and $\Delta E_2 = 82.1$ cm$^{-1}$ and $\tau_2 = 6.2 \times 10^{-7}$ s between 18 and 22 K. Notably, the regime of quantum tunneling of magnetization is still not reached within the investigated temperature domain. A nearly symmetrical Cole–Cole (Argand) plot results between 5 and 21 K (Figure S5). Fitting the diagram at each temperature to the generalized Debye model leads to a parameter $\alpha$ ranging from 0.024 to 0.012 over the temperature range 8–15 K, while in the high-temperature regime above 18 K the parameter $\alpha$ is always less than 0.009 (Figure S5, Table S3), indicating a very narrow distribution of relaxation times for each process. To confirm the SMM behavior, hysteresis loops were recorded using a micro-Hall magnetometer, for which a sufficient range of sweep rates is available,\textsuperscript{[11]} on a crystal well coated in Apiezon grease. Even if such a crystal fractures on cooling, the grease prevents any movement of the fragments and the measurements correspond to those on aligned single crystallites. Hysteresis was clearly observed below 3 K at a sweep rate of 235 mT s$^{-1}$. The coercive fields of the hysteresis loops increase with decreasing temperature and increasing field sweep rate (Figure 4 and Figure S6). The loops display steplike features below 1.5 K, indicating that resonant quantum tunneling occurs below this temperature. Time decay measurements of the dc magnetization were performed in zero magnetic field on a single crystal of 1 in the temperature range 1.6 to 2.6 K (Figure 3c). The data could be fitted well using single-exponential time decay curves. ![Figure 4](image) **Figure 4.** Temperature-dependent magnetic hysteresis loops of 1 below 4 K at a sweep rate of the external magnetic field of 235 mT s$^{-1}$. Fragment ab initio calculations using the previously described MOLCAS program package\textsuperscript{[12]} were performed on 1 (see also the Supporting Information).\textsuperscript{[6,13]} The calculated lowest energy levels of the Co$^{2+}$ and Dy$^{3+}$ centers in 1 are given in Table 1, and the dashed lines in Figure 5 show the resulting orientation of the local anisotropy axes on the Dy$^{3+}$ ions with respect to the molecular frame in 1. The exchange interaction in 1 was calculated within the Lines model (see the Supporting Information for details);\textsuperscript{[14]} Table 2 shows the spectrum of the lowest exchange levels. These levels are grouped into doublets split by an amount $\Delta_t$ because of the even number of electrons in 1. Such Ising doublets are characterized by a single direction of magnetization $Z$, which varies from doublet to doublet, and by zero transverse magnetization ($g_X = g_Y = 0$). On the other hand, the inversion symmetry of the complex, according to which the anisotropy axes on opposite ions in 1 are parallel to each other, in addition to the predominant Ising interaction, makes some of the exchange doublets in Table 2 nonmagnetic ($g_Z = 0$). **Table 1:** Energy (cm$^{-1}$) of the lowest Kramers doublets on the individual magnetic centers in 1. Only the states corresponding to the free ion $^4$T term on Co$^{2+}$ and to the multiplet $J = 15/2$ of the free ion Dy$^{3+}$ are presented.
| Dy | Co | |------|------| | 0.0 | 0.0 | | 192.6 | 109.1 | | 307.7 | 1045.5 | | 386.2 | 1276.2 | | 430.9 | 1624.0 | | 509.0 | 1750.8 | | 565.8 | | | 627.9 | | main values of the g tensor of the lowest Kramers doublets | $g_X = 0.005$ | $g_X = 1.89$ | | $g_Y = 0.008$ | $g_Y = 3.24$ | | $g_Z = 19.53$ | $g_Z = 6.74$ | **Figure 5.** Main anisotropy axes (dashed lines) on Dy and Co ions and local magnetizations (arrows) in the ground state in 1. **Table 2:** Lowest exchange spectrum (cm$^{-1}$) arising from the exchange interaction of the lowest Kramers doublets on the magnetic centers in 1. | Energy | $\Delta_t$ | $g_Z$ | |--------|------------|-------| | 0.0 | $1.7 \times 10^{-6}$ | 49.4 | | 12.5 | $7.8 \times 10^{-6}$ | 0.0 | | 13.4 | $5.9 \times 10^{-6}$ | 0.0 | | 15.9 | $1.4 \times 10^{-5}$ | 39.5 | | 17.9 | $8.4 \times 10^{-7}$ | 0.0 | | 20.1 | $7.0 \times 10^{-7}$ | 39.7 | | 20.5 | $1.4 \times 10^{-7}$ | 0.0 | | 26.5 | $3.7 \times 10^{-6}$ | 31.8 | This arises because for these doublet states, the magnetic moments on the two Dy$^{III}$ and two Co$^{II}$ ions point in opposite directions and completely compensate each other. In contrast, in the ground exchange doublet they are parallel to each other (Figure 5), resulting in a large magnetic moment $\mu_Z = \tfrac{1}{2} g_Z \mu_B = 24.7\,\mu_B$ ($g_Z = 49.4$, Table 2). As a result of the exchange interaction, the local magnetizations on the Co$^{II}$ ions make an angle of about 13.2° with the main anisotropy axis $Z$ of the ground doublet, while on Dy$^{III}$ this angle is only 0.3° (Figure 5). A comparison of measured and calculated magnetism is shown in Figure 2. Table 2 shows that the ground exchange doublet is characterized by a relatively small tunneling gap, which explains why quantum tunneling of magnetization is suppressed until very low temperatures are reached (Figure 3). The fourth exchange doublet has a tunneling gap of the order $10^{-5}$ cm$^{-1}$, which opens the channel for tunneling relaxation of magnetization through this state.\[15\] We can, therefore, associate the barrier height ($11.0$ cm$^{-1}$) of the relaxation regime at $T < 18$ K (dashed line in Figure 3a) with this state. On the other hand, the Arrhenius regime of relaxation observed at $T > 18$ K (solid line in Figure 3a) cannot be associated with the exchange states, since the highest of these states ($26$ cm$^{-1}$, Table 2) lies much lower than the value of the extracted barrier ($82$ cm$^{-1}$), meaning that this regime must be associated with relaxation through excited Kramers doublets of the individual metal ions, as previously inferred for some Dy complexes.\[16\] The condition for this relaxation regime to be observed in ac measurements is $\omega \tau_i \approx 1$,\[17\] where $\tau_i(T)$ is the intraionic relaxation time and $\omega$ is the frequency of the ac field. Table 1 tells us that the Dy$^{III}$ ions are much more axial ($g_X \approx g_Y \ll g_Z$) than the Co$^{II}$ ions and, therefore, should possess much longer relaxation times. Hence, for $\omega \leq 1000$ Hz the observed maximum of $\chi''(\omega)$ at $T > 18$ K (Figure 3b and Figure S4) should be attributed to the intraionic relaxation through the Dy$^{III}$ ion.\[17\] As additional confirmation of a very fast relaxation on the Co$^{II}$ ions in 1, the ac measurements made on the isostructural compound Co$_2$Y$_2$ show a zero out-of-phase signal.
The calculated lowest excitation energy on the Dy$^{III}$ sites (Table 1) is higher than the estimated height of the barrier for this regime, which probably results from an insufficient number of temperature points in the high-$T$ region of Figure 3 owing to instrumental limitations.\[18\] The two relaxation regimes can also be seen in the temperature dependence of $\chi'(\omega)T$ for $\omega > 1000$ Hz (Figure 6). The downturn of $\chi'T$ from the isothermal curve at 1500 Hz can be associated with the quenching of intraionic relaxation mediated by the Dy$^{III}$ ions when $\omega \tau_{Dy}(T)$ becomes $> 1$.\[4\] The $\chi'T$ value drops with approximately constant slope by an amount roughly corresponding to the contribution of two Dy$^{III}$ ions; this corresponds to the regime where the single-ion contributions dominate. At lower temperatures, the further drop of $\chi'T$, with a shallower slope (Figure 6), is where the presence of the Co$^{II}$ ions becomes important and can be attributed to the exchange-blocked relaxation regime, where the cooperative coupling of the 3d and 4f ions dominates. Such a switch to the exchange-blocked relaxation regime at low temperatures has been inferred recently for Dy$_2$ complexes.\[8\] Here, however, this can be unambiguously identified, since for 1 (Figure 3) the two regimes on the ln($\tau$) versus $1/T$ curve can be observed thanks to the 10-times larger exchange splitting of the low-lying levels seen here compared with the Dy$_2$ complex. This is a direct result of the presence of the 3d ions and allows us to come to the important conclusion that the observation of two curves of different gradient, as seen in Figure 3, can be taken as the signature of mixed 3d–4f SMMs. A similar, but more dramatic, behavior is expected in mixed 4,5d–4f complexes, which in addition will possess larger exchange barriers than 1 because of the more diffuse magnetic orbitals of the transition-metal ions and, therefore, should be regarded as most promising for the design of effective and efficient SMMs. **Experimental Section** Crystallography: Structures solved and refined using SHELXTL 6.14,\[19\] 1: C$_{36}$H$_{40}$Co$_2$Dy$_2$N$_2$O$_{24}$ (1964.46 g mol$^{-1}$), triclinic, $P\bar{1}$, $a = 11.7308(9)$, $b = 13.2206(11)$, $c = 14.9021(12)$ Å, $\alpha = 109.568(7)$, $\beta = 93.391(6)$, $\gamma = 113.742(6)^{\circ}$, $V = 1951.0(3)$ Å$^3$, $T = 150(2)$ K, $Z = 1$, $\rho_s = 1.672$ gcm$^{-3}$, $F(000) = 992$, $\mu$(Mo-K$\alpha$) = 2.393 mm$^{-1}$, 16703 reflections measured, 9316 unique ($R_{\text{int}} = 0.0256$), refinement (510 parameters) to $wR_2 = 0.1060$, $S = 0.997$ (all data), $R_1 = 0.0411$ (8192 data with $I > 2\sigma(I)$), largest peak/hole 0.86/−2.46 e Å$^{-3}$; 2: C$_{36}$H$_{40}$Co$_2$N$_2$O$_{24}$Y$_2$ (1817.28 g mol$^{-1}$), triclinic, $P\bar{1}$, $a = 11.7437(12)$, $b = 13.2227(13)$, $c = 14.9685(14)$ Å, $\alpha = 109.024(6)$, $\beta = 93.149(8)$, $\gamma = 113.694(7)^{\circ}$, $V = 1956.5(3)$ Å$^3$, $T = 180(2)$ K, $Z = 1$, $\rho_s = 1.542$ gcm$^{-3}$, $F(000) = 938$, $\mu$(Mo-K$\alpha$) = 1.968 mm$^{-1}$, 13449 reflections measured, 8261 unique ($R_{\text{int}} = 0.0209$), refinement (510 parameters) to $wR_2 = 0.1006$, $S = 0.972$ (all data), $R_1 = 0.0378$ (7037 data with $I > 2\sigma(I)$), largest peak/hole 0.74/−0.55 e Å$^{-3}$. CCDC 853440, 853441 contain the supplementary crystallographic data for this paper. These data can be obtained free of charge from The Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif.
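As a quick consistency check on the crystallographic data above, the quoted densities follow from $\rho_{\text{calc}} = Z M / (N_A V)$; a short Python computation (the helper name is ours):

```python
N_A = 6.02214e23  # Avogadro's number, mol^-1

def crystal_density(M_g_per_mol, Z, V_cubic_angstrom):
    """Calculated crystal density in g cm^-3 (1 A^3 = 1e-24 cm^3)."""
    return Z * M_g_per_mol / (N_A * V_cubic_angstrom * 1e-24)

print(crystal_density(1964.46, 1, 1951.0))  # ~1.672, matching compound 1
print(crystal_density(1817.28, 1, 1956.5))  # ~1.542, matching compound 2
```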
Received: February 23, 2012 Published online: June 12, 2012 **Keywords:** ab initio calculations · cobalt · high-energy barriers · lanthanides · single-molecule magnets --- [1] R. Sessoli, A. K. Powell, *Coord. Chem. Rev.* **2009**, *253*, 2328–2341, and references therein. [2] a) A. Caneschi, D. Gatteschi, R. Sessoli, A. L. Barra, L. C. Brunel, M. Guillot, *J. Am. Chem. Soc.* **1991**, *113*, 5873; b) L. Thomas, F. Lionti, R. Ballou, D. Gatteschi, R. Sessoli, B. Barbara, *Nature* **1996**, *383*, 145; c) J. R. Friedman, M. P. Sarachik, J. Tejada, R. Ziolo, *Phys. Rev. Lett.* **1996**, *76*, 3830; d) S. Hill, R. S. Edwards, N. Aliaga-Alcalde, G. Christou, *Science* **2003**, *302*, 1015. [3] G. Aromí, E. K. Brechin, *Struct. Bonding (Berlin)* **2006**, *122*, 1, and references therein. [4] D. Gatteschi, R. Sessoli, J. Villain, *Molecular nanomagnets*, Oxford University Press, Oxford, 2006. [5] Y.-N. Guo, G.-F. Xu, W. Wernsdorfer, L. Ungur, Y. Guo, J. Tang, H.-J. Zhang, L. F. Chibotaru, A. K. Powell, *J. Am. Chem. Soc.* **2011**, *133*, 11948. [6] J. Rinck, G. Novitchi, W. van den Heuvel, L. Ungur, Y. Lan, W. Wernsdorfer, C. E. Anson, L. F. Chibotaru, A. K. Powell, *Angew. Chem.* **2010**, *122*, 7746; *Angew. Chem. Int. Ed.* **2010**, *49*, 7583. [7] K. C. Mondal, G. E. Kostakis, Y. Lan, W. Wernsdorfer, C. E. Anson, A. K. Powell, *Inorg. Chem.* **2011**, *50*, 11604. [8] a) X.-Q. Zhao, Y. Lan, B. Zhao, P. Cheng, C. E. Anson, A. K. Powell, *Dalton Trans.* **2010**, *39*, 4911; b) V. Chandrasekhar, B. M. Pandian, R. Azhakar, J. J. Vittal, R. Clérac, *Inorg. Chem.* **2007**, *46*, 5140; c) V. Chandrasekhar, B. M. Pandian, R. Azhakar, J. J. Vittal, R. Clérac, *Inorg. Chem.* **2009**, *48*, 1148; d) F. Chen, W. Lu, Y. Zhu, B. Wu, X. Zheng, *J. Coord. Chem.* **2009**, *62*, 808; e) H. Xiang, Y. Lan, H.-Y. Li, L. Jiang, T.-B. Lu, C. E. Anson, A. K. Powell, *Dalton Trans.* **2010**, *39*, 4737. [9] J. D. Rinehart, K. R. Meihaus, J. R. Long, *J. Am. Chem. Soc.* **2010**, *132*, 7572. [10] C. Benelli, D. Gatteschi, *Chem. Rev.* **2002**, *102*, 2369, and references therein. [11] D. Schray, G. Abbas, Y. Lan, V. Mereacre, A. Sundt, J. Dreiser, O. Waldmann, G. E. Kostakis, C. E. Anson, A. K. Powell, *Angew. Chem.* **2010**, *122*, 5312; *Angew. Chem. Int. Ed.* **2010**, *49*, 5185. [12] F. Aquilante, L. De Vico, N. Ferre, G. Ghigo, P. A. Malmqvist, P. Neogrady, T. B. Pedersen, M. Pitonak, M. Reiher, B. O. Roos, L. Serrano-Andres, M. Urban, V. Veryazov, R. Lindh, *J. Comput. Chem.* **2010**, *31*, 224. [13] a) L. Ungur, W. Van den Heuvel, L. F. Chibotaru, *New J. Chem.* **2009**, *33*, 1224; b) L. F. Chibotaru, L. Ungur, A. Soncini, *Angew. Chem.* **2008**, *120*, 4194; *Angew. Chem. Int. Ed.* **2008**, *47*, 4126; c) F.-S. Guo, J.-L. Liu, J.-D. Leng, Z.-S. Meng, Z.-J. Lin, M.-L. Tong, S. Gao, L. Ungur, L. F. Chibotaru, *Chem. Eur. J.* **2011**, *17*, 2458. [14] L. F. Chibotaru, L. Ungur, Computer program POLY_ANISO, University of Leuven, 2006. [15] In a Co$^{II}$,Co$^{III}$ wheel the lack of SMM behavior was explained by the existence of a tunneling gap of $3 \times 10^{-3}$ cm$^{-1}$ in the ground exchange doublet in L. F. Chibotaru, L. Ungur, C. Aronica, H. Elmoll, G. Pilet, D. Luneau, *J. Am. Chem. Soc.* **2008**, *130*, 12445. [16] P. H. Lin, T. J. Burchell, L. Ungur, L. F. Chibotaru, W. Wernsdorfer, M. Murugesu, *Angew. Chem.* **2009**, *121*, 9653; *Angew. Chem. Int. Ed.* **2009**, *48*, 9489.
[17] A counterexample is the complex Dy$^{III}$,Cr$^{III}$, where the Dy$^{III}$ ions were found to be nonaxial ($g_X = 1.7$, $g_Y = 5.8$, $g_Z = 14.4$), and accordingly, no intraionic relaxation was detected in ac susceptibility measurements; see ref. [6].
[18] The good agreement of calculated and measured magnetic properties (Figure 2) rules out the possibility of a large deviation of the energy of the first excited Kramers doublet on Dy from the value in Table 1.
[19] G. M. Sheldrick, SHELXTL 6.14, Bruker AXS Inc., 6300 Enterprise Lane, Madison, WI 53719-1173, USA, **2003**.
REQUESTED COMMISSION ACTION: Consent [ ] Ordinance [ ] Resolution [X] Consideration/Discussion [ ] Presentation [ ]

SHORT TITLE: A RESOLUTION OF THE CITY COMMISSION OF THE CITY OF POMPANO BEACH, FLORIDA, MAKING CERTAIN FINDINGS AND DESIGNATING THE REAL PROPERTY LOCATED ON THE NORTHWEST QUADRANT OF THE INTERSECTION AT NW 31ST AVENUE AND W. ATLANTIC BLVD., IDENTIFIED BY FOLIO NO. 484232190010, AS A BROWNFIELD AREA PURSUANT TO SECTION 376.80(2)(C), FLORIDA STATUTES, FOR THE PURPOSE OF REHABILITATION, JOB CREATION AND PROMOTING ECONOMIC REDEVELOPMENT; AUTHORIZING THE CITY MANAGER TO NOTIFY THE FLORIDA DEPARTMENT OF ENVIRONMENTAL PROTECTION OF SAID DESIGNATION; PROVIDING AN EFFECTIVE DATE.

Summary of Purpose and Why: The property owner, West Atlantic Boulevard Apartment Investors, LLC (WABAI), has submitted a letter of application requesting that the City Commission designate the property identified by Folio # 484232190010 as a Brownfield site pursuant to Section 376.80(2)(c), Florida Statutes. The City has approved and permitted a 404-unit residential complex with a total capital cost estimated at over $62 million. Staff finds that WABAI has demonstrated that this project and property meet the five statutory criteria for designation of a Brownfield site as set forth in Section 376.80(2)(c), Florida Statutes, and as such, it is a mandatory designation. Proper notice has been provided in accordance with Sections 376.80(1) and 166.041(3)(c)2, Florida Statutes, for this proposed action.

(1) Origin of request for this action: City Manager's Office – Dennis Beach
(2) Primary staff contact(s): Chris Clemens / Greg Harrison, Ext. 4048
(3) Expiration of contract, if applicable: N/A
(4) Fiscal impact and source of funding: N/A

| Departmental Coordination | Date | Departmental Recommendation | Department Head Signature |
|---|---|---|---|
| Dev. Serv. Dept. | 10/23/2015 | Approval | |
| City Attorney | 10/28/2015 | | [Signature] |
| Advisory Board | | | [Signature] |
| City Manager [X] | | | [Signature] |

ACTION TAKEN BY COMMISSION: Resolution [X] Workshop [ ]; 1st Reading: 11/10/15 – Approved; 2nd Reading: 12/8/15. Results:

The following is a review of the West Atlantic Boulevard Apartments Investors (WABAI) brownfield application you had asked staff to review, specifically against the five applicable brownfield area designation criteria set forth in Section 376.80(2)(c), Florida Statutes, as follows:

**Agreement to Redevelop the Brownfield Site:** WABAI satisfies this criterion in that it owns the Subject Property, is requesting that the property be designated a Brownfield area, and has agreed to redevelop and, as necessary, rehabilitate the Subject Property. The applicant has provided proof of ownership.

**Economic Productivity:** WABAI satisfies this second criterion in that, when fully developed, the Project will employ 9 full-time associates. This figure exceeds the requirement of the "creation of at least 5 new permanent jobs at the brownfield site." It is estimated that the total capital cost of the WABAI project exceeds $62 million.

**Consistency with Local Comprehensive Plan and Permittable Use Under Local Land Development Regulations:** The land use and zoning designations at the Subject Property are Residential – Medium (16-25 du/ac) and Residential Planned Unit Development (RPUD), respectively. Both districts permit multifamily residential developments, which satisfies the criterion. The applicant has also provided the Development Order (No. 14-12000018) for the project, issued by the Planning & Zoning Board, authorizing the development of 19 new three-story multi-family buildings consisting of 404 residential units.
**Public Notice and Comment:** WABAI satisfied this criterion by posting notice at the Subject Property and in the Sun Sentinel, the Daily Business Review and on Craigslist. The applicant provided the City with a picture of the posting on the Subject Property as well as copies of the ads and the dates they ran. WABAI also hosted a public meeting at the Jan Moran Collier City Learning Library at 2800 NW 9th Court in Pompano Beach and stated to us that there were no attendees at the meeting.

**Reasonable Financial Assurance:** The applicant provided the City with a letter outlining the company's successful development record and the planned financing for the current project, as well as personal assurances from the Operating Member and Manager of the project as to the financial assurances provided. The City's Finance Department reviewed the letter submittal and was satisfied that the provided assurances satisfy the statutory requirement.

Based on a review of the Statute and the provided information, staff finds that WABAI has satisfied each of the Florida Statute requirements for the Brownfield area designation. Lastly, the Economic Development Council reviewed these findings on October 26, 2015.

A RESOLUTION OF THE CITY COMMISSION OF THE CITY OF POMPANO BEACH, FLORIDA, MAKING CERTAIN FINDINGS AND DESIGNATING THE REAL PROPERTY LOCATED ON THE NORTHWEST QUADRANT OF THE INTERSECTION AT NW 31ST AVENUE AND W. ATLANTIC BLVD., IDENTIFIED BY FOLIO NO. 484232190010, AS A BROWNFIELD AREA PURSUANT TO SECTION 376.80(2)(C), FLORIDA STATUTES, FOR THE PURPOSE OF REHABILITATION, JOB CREATION AND PROMOTING ECONOMIC REDEVELOPMENT; AUTHORIZING THE CITY MANAGER TO NOTIFY THE FLORIDA DEPARTMENT OF ENVIRONMENTAL PROTECTION OF SAID DESIGNATION; PROVIDING AN EFFECTIVE DATE.

WHEREAS, pursuant to Chapter 97-277, Laws of Florida, codified at §§ 376.77 – 376.86, Florida Statutes, the State of Florida has provided for designation of a "brownfield area" by resolution at the request of the person who owns or controls one or more real estate parcels, to provide for their environmental remediation and redevelopment and to promote economic development and revitalization generally; and

WHEREAS, West Atlantic Boulevard Apartment Investors, LLC ("WABAI") owns the property located on the Northwest quadrant of the intersection at NW 31st Avenue and West Atlantic Blvd., Pompano Beach, Broward County, Florida 33069, Folio number 484232190010 (hereinafter the "Property"), depicted and more particularly described in Exhibit "A", and is developing it for residential use; and

WHEREAS, WABAI has requested that the City Commission of Pompano Beach designate the Property as a "brownfield area" pursuant to § 376.80(2)(c), Florida Statutes; and

WHEREAS, the City Commission has reviewed the criteria set forth in § 376.80(2)(c), Florida Statutes, and has determined that the Property qualifies for designation as a "brownfield area" because the following requirements have been satisfied:

1. WABAI owns the Property which is proposed for designation and has agreed to rehabilitate and redevelop it;
2. The rehabilitation and redevelopment of the Property will result in economic productivity in the area;
3. The redevelopment of the Property is consistent with the City's Comprehensive Plan and is a permittable use under the City's Zoning and Land Development Code;
4. Proper notice of the proposed rehabilitation of the Property has been provided to neighbors and nearby residents, and WABAI has provided those receiving notice the opportunity to provide comments and suggestions regarding the rehabilitation; and
5. WABAI has provided reasonable assurance that it has sufficient financial resources to implement and complete a rehabilitation agreement and redevelopment plan.

WHEREAS, the City Commission desires to notify the Florida Department of Environmental Protection of its resolution designating the Property as a "brownfield area" to further its rehabilitation and redevelopment for purposes of §§ 376.77 – 376.86, Florida Statutes; and

WHEREAS, the applicable procedures set forth in §§ 376.80 and 166.041, Florida Statutes, have been followed and proper notice has been provided in accordance with §§ 376.80(1) and 166.041(3)(c)2, Florida Statutes; and

WHEREAS, such designation shall not render the City of Pompano Beach liable for costs of site remediation, rehabilitation and economic development, or source removal, as those terms are defined in Section 376.79(17) and (18), Florida Statutes, or for any other costs, above and beyond those costs attributed to the adoption of this Resolution; now, therefore,

BE IT RESOLVED BY THE CITY COMMISSION OF THE CITY OF POMPANO BEACH, FLORIDA:

SECTION 1. That the recitals and findings set forth in the Preamble to this Resolution are hereby adopted by reference thereto and incorporated herein as if fully set forth in this Section.

SECTION 2. That the City Commission finds that WABAI has satisfied the criteria set forth in § 376.80(2)(c), Florida Statutes.

SECTION 3. That the City Commission designates the Property depicted on Exhibit "A", attached hereto and incorporated herein by reference, as a "brownfield area" for purposes of §§ 376.77 – 376.86, Florida Statutes.

SECTION 4. That the City Manager, or his designee, is hereby authorized to notify the Florida Department of Environmental Protection of the City Commission's resolution designating the Property a "brownfield area" for purposes of §§ 376.77 – 376.86, Florida Statutes.

SECTION 5. This Resolution shall become effective upon passage.

PASSED AND ADOPTED this ______ day of ________________________, 2015.

LAMAR FISHER, MAYOR

ATTEST: ASCELETA HAMMOND, CITY CLERK

CLS:jrm 10/29/2015 L.reso/2016-33

EXHIBIT "A"

The just values displayed below were set in compliance with Sec. 193.011, Fla. Stat., and include a reduction for costs of sale and other adjustments required by Sec. 193.011(8).

| Year | Land | Building | Just / Market Value | Assessed / SOH Value | Tax |
|------|----------|----------|---------------------|----------------------|-----|
| 2016 | $1,169,250 | | $1,169,250 | $1,169,250 | |
| 2015 | $1,169,250 | | $1,169,250 | $321,510 | |
| 2014 | | | | | |

IMPORTANT: The 2016 values currently shown are "roll over" values from 2015. These numbers will change frequently online as we make various adjustments until they are finalized on June 1. Please check back here AFTER June 1, 2016, to see the actual proposed 2016 assessments and portability values.
### 2016 Exemptions and Taxable Values by Taxing Authority

| | County | School Board | Municipal | Independent |
|------------------------|--------|--------------|-----------|-------------|
| Just Value | $1,169,250 | $1,169,250 | $1,169,250 | $1,169,250 |
| Portability | 0 | 0 | 0 | 0 |
| Assessed/SOH | $1,169,250 | $1,169,250 | $1,169,250 | $1,169,250 |
| Homestead | 0 | 0 | 0 | 0 |
| Add. Homestead | 0 | 0 | 0 | 0 |
| Wid/Vet/Dis | 0 | 0 | 0 | 0 |
| Senior | 0 | 0 | 0 | 0 |
| Exempt Type | 0 | 0 | 0 | 0 |
| Taxable | $1,169,250 | $1,169,250 | $1,169,250 | $1,169,250 |

### Sales History

| Date | Type | Price | Book/Page or CIN |
|---------|------|-----------|------------------|
| 2/11/2015 | SWD-D | $3,640,600 | 112814101 |

### Land Calculations

| Unit Price | Factor (sq ft) | Type |
|-------|--------|------|
| $1.00 | 1,169,248 | SF |

LAND DESCRIPTION: A PARCEL OF LAND IN THE SOUTHEAST ONE-QUARTER (S.E. 1/4) OF SECTION 32, TOWNSHIP 48 SOUTH, RANGE 42 EAST, BROWARD COUNTY, FLORIDA, SAID PARCEL BEING MORE PARTICULARLY DESCRIBED AS FOLLOWS: COMMENCING AT THE SOUTHEAST CORNER OF SAID SECTION 32; THENCE NORTH 01°22'47" WEST ALONG THE EAST LINE OF SAID SECTION 32, SAME BEING THE WEST LINE OF SAID SECTION 33, A DISTANCE OF 365.78 FEET TO AN INTERSECTION WITH THE NORTH RIGHT OF WAY LINE OF WEST ATLANTIC BOULEVARD, A 120.00 FOOT RIGHT-OF-WAY ACCORDING TO FLORIDA DEPARTMENT OF TRANSPORTATION MAP NUMBER 410055, SECTION 86130-2504, SHEET 12 OF 18, SAME BEING THE SOUTHWEST CORNER OF TEXACO-POMPANO, ACCORDING TO THE PLAT THEREOF AS RECORDED IN PLAT BOOK 124, PAGE 10, OF THE PUBLIC RECORDS OF BROWARD COUNTY, FLORIDA, SAID POINT ALSO BEING ON THE ARC OF A CURVE CONCAVE TO THE SOUTHWEST, HAVING A RADIUS OF 1,587.89 FEET (A RADIAL LINE TO SAID POINT BEARS NORTH 11°21'03" EAST), SAID POINT ALSO BEING THE POINT OF BEGINNING; THENCE NORTHWESTERLY ALONG SAID NORTH RIGHT-OF-WAY LINE AND ALONG THE ARC OF SAID CURVE, THROUGH A CENTRAL ANGLE OF 02°27'04", AN ARC DISTANCE OF 67.93 FEET; THENCE NORTH 67°56'40" WEST CONTINUING ALONG SAID NORTH RIGHT-OF-WAY LINE, 164.17 FEET; THENCE NORTH 89°26'43" WEST CONTINUING ALONG SAID NORTH RIGHT-OF-WAY LINE, 160.24 FEET; THENCE SOUTH 76°08'26" WEST CONTINUING ALONG SAID NORTH RIGHT-OF-WAY LINE, 54.19 FEET; THENCE SOUTH 82°47'08" WEST CONTINUING ALONG SAID NORTH RIGHT-OF-WAY LINE, 240.26 FEET; THENCE SOUTH 76°36'32" WEST CONTINUING ALONG SAID NORTH RIGHT-OF-WAY LINE, 20.77 FEET; THENCE LEAVING SAID NORTH RIGHT-OF-WAY LINE, NORTH 15°01'12" WEST, 256.26 FEET; THENCE NORTH 12°02'24" EAST, 44.31 FEET; THENCE NORTH 55°23'30" WEST, 132.52 FEET; THENCE NORTH 00°00'00" EAST, 702.13 FEET; THENCE NORTH 90°00'00" WEST, 108.87 FEET; THENCE NORTH 00°00'00" EAST, 430.80 FEET TO THE SOUTH LINE OF THE NORTH ONE-HALF (N. 1/2) OF THE NORTHEAST ONE-QUARTER (N.E. 1/4) OF THE SOUTHEAST ONE-QUARTER (S.E.
1/4) OF SAID SECTION 32; THENCE NORTH 88°45'32" EAST ALONG THE AFORESAID SOUTH LINE, 793.50 FEET; THENCE SOUTH 01°22'47" EAST, 637.40 FEET; THENCE NORTH 88°48'35" EAST ALONG THE WESTERLY PROLONGATION OF A NORTH LINE OF PARCEL G, GIBSON'S PLAT, ACCORDING TO THE PLAT THEREOF RECORDED IN PLAT BOOK 99, PAGE 45, OF THE PUBLIC RECORDS OF BROWARD COUNTY, FLORIDA, 135.00 FEET TO THE WESTERLY MOST NORTHWEST CORNER OF SAID PARCEL G; THENCE SOUTH 01°22'47" EAST ALONG A WEST LINE OF SAID PARCEL G AND ALONG THE WEST LINE OF TRACTS 47, 48 AND 49 OF COLLIER CITY LOTS (UNRECORDED) AND ALONG THE WEST LINE OF TRACTS 1-3, PANTON FARMS, ACCORDING TO THE PLAT THEREOF AS RECORDED IN PLAT BOOK 89, PAGE 9, OF THE PUBLIC RECORDS OF BROWARD COUNTY, FLORIDA, AND ALONG THE WEST LINE OF SAID TEXACO-POMPANO PLAT, 909.26 FEET TO THE POINT OF BEGINNING. SAID LANDS SITUATED IN THE CITY OF POMPANO BEACH, BROWARD COUNTY, FLORIDA, CONTAINING 26.842 ACRES (1,169,248 SQUARE FEET), MORE OR LESS.
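Purely as an illustration of how such metes-and-bounds calls can be checked (this sketch is not part of the official record), the curve call and the stated acreage in the description above can be verified in a few lines of Python; all figures are taken directly from the legal description:

```python
import math

# Check the curve call: a curve of radius 1,587.89 ft subtending a central
# angle of 02°27'04" should give the arc length stated in the description.
radius_ft = 1587.89
central_angle_deg = 2 + 27 / 60 + 4 / 3600      # 02°27'04" in decimal degrees
arc_ft = radius_ft * math.radians(central_angle_deg)
print(f"arc length: {arc_ft:.2f} ft")           # 67.93 ft, as recorded

# Check the stated area: 1,169,248 sq ft at 43,560 sq ft per acre.
print(f"area: {1_169_248 / 43_560:.3f} acres")  # 26.842 acres, as recorded
```

Both results reproduce the recorded values, which is a quick way to catch transcription errors in survey descriptions.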
Chris, please be advised that, as advertised, we held a community meeting yesterday evening regarding the proposed brownfield designation of the Former Palm Aire Golf Course from 5:30 to 7:30 PM at the Jan Moran Collier City Learning Library located at 2800 NW 9th Court, Pompano Beach, FL 33069. There were no attendees. Thank you.

Regards,
Dalayna

Dalayna M. Tillman, Esq.
The Goldstein Environmental Law Firm, P.A.
One SE Third Avenue, Suite 2120, Miami, FL 33131
Direct Telephone: (305) 777-1686
Cell Phone: (703) 499-7132
Email: firstname.lastname@example.org
http://www.goldsteinenvlaw.com/

From: Dalayna Tillman
Sent: Monday, September 28, 2015 2:31 PM
To: 'Chris Clemens'
Cc: 'Fawn Powers'
Subject: West Atlantic Boulevard Apartments Investors, LLC - Proposed Former Palm Aire Golf Course Brownfield Area, Pompano Beach - Notices

Chris, please find enclosed copies of the notices that were published (i) in the Sun Sentinel Newspaper; (ii) in the New Times Newspaper, Community Bulletin Section; (iii) on Craigslist (Broward County Community Events); and (iv) at the property regarding the proposed former Palm Aire Golf Course Brownfield Area. Specifically, the notices advertise the community meeting, which will be held this evening, September 28th, at the Jan Moran Collier City Learning Library located at 2800 NW 9th Court, Pompano Beach, FL 33069, from 5:30 to 7:30 PM. We will supplement this email with a copy of the sign-in sheet from the meeting tonight. Thank you.

Regards,
Dalayna

Dalayna M. Tillman, Esq.
The Goldstein Environmental Law Firm, P.A.
One SE Third Avenue, Suite 2120, Miami, FL 33131
Direct Telephone: (305) 777-1686
Cell Phone: (703) 499-7132
Email: email@example.com
http://www.goldsteinenvlaw.com/

NOTICE OF PROPOSED TAX INCREASE

The City of Hallandale Beach has tentatively adopted a measure to increase its property tax levy.

Last year's property tax levy:
A. Initially proposed tax levy........... $24,743,900
B. Less tax reductions due to Value Adjustment Board and other assessment changes....... $2,259,242
C. Actual property tax levy............... $22,484,658

This year's proposed tax levy..... $24,668,042

All concerned citizens are invited to attend a public hearing on the tax increase to be held on: FRIDAY, SEPTEMBER 25, 2015, 5:05 P.M., AT 400 SOUTH FEDERAL HIGHWAY, HALLANDALE BEACH COMMISSION MEETING ROOM, HALLANDALE BEACH, FLORIDA 33009. A FINAL DECISION on the proposed tax increase and the budget will be made at this hearing.
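As an aside, the levy arithmetic in the notice above is internally consistent (line A minus line B equals line C); a minimal sketch using the published figures, for illustration only:

```python
# Check the published levy figures from the notice above: A - B = C.
initially_proposed = 24_743_900   # A. Initially proposed tax levy
reductions = 2_259_242            # B. Value Adjustment Board and other changes
actual_levy = 22_484_658          # C. Actual property tax levy
assert initially_proposed - reductions == actual_levy

this_year_proposed = 24_668_042   # this year's proposed levy
increase_pct = 100 * (this_year_proposed / actual_levy - 1)
print(f"proposed increase over last year's actual levy: {increase_pct:.1f}%")  # ~9.7%
```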
BUDGET SUMMARY – CITY OF HALLANDALE BEACH – FISCAL YEAR 2015-16

THE PROPOSED OPERATING BUDGET EXPENDITURES OF THE CITY OF HALLANDALE BEACH ARE 8.6% MORE THAN LAST YEAR'S TOTAL OPERATING EXPENDITURES

[The accompanying millage, estimated-revenue and expenditure tables of the published notice are illegible in the source scan; only scattered figures (e.g., a General Fund millage of 5.919) are recoverable.]

NOTICE OF PROPOSED BROWNFIELD DESIGNATION

Representatives for West Atlantic Boulevard Apartments Investors, LLC, will hold a community meeting on September 28, 2015, from 5:30 PM to 7:30 PM, for the purpose of affording interested parties an opportunity to provide comments and suggestions about the potential designation of the property identified by Folio Number 484232190010, Pompano Beach, Broward County, FL 33069, as a brownfield area pursuant to § 376.80(2)(c), Florida Statutes, and about development and rehabilitation activities associated with the potential designation, including public meetings to be held by the Pompano Beach City Commission to consider the request for designation. The community meeting will be held at the Collier City Library, 2800 NW 9th Court, Pompano Beach, FL, and is free and open to all members of the public. For more information regarding the community meeting, including directions, or to provide comments and suggestions at any time before or after the meeting date, please contact Michael R. Goldstein: by telephone: (305) 777-1682; by U.S. Mail: The Goldstein Environmental Law Firm, P.A., 1 SE 3rd Avenue, Suite 2120, Miami, FL 33131; and/or by email: firstname.lastname@example.org.
SUN SENTINEL
Published Daily
Fort Lauderdale, Broward County, Florida
Boca Raton, Palm Beach County, Florida
Miami, Miami-Dade County, Florida

STATE OF FLORIDA
COUNTY OF BROWARD/PALM BEACH/MIAMI-DADE

Before the undersigned authority personally appeared BETTY ARMAND, who on oath says that he/she is a duly authorized representative of the Classified Department of the Sun-Sentinel, a daily newspaper published in Broward/Palm Beach/Miami-Dade County, Florida; that the attached copy of advertisement, being a PUBLIC NOTICE in the matter of THE GOLDSTEIN ENVIRONMENTAL LAW FIRM of PROPOSED BROWNFIELD DESIGNATION, appeared in the paper on SEPTEMBER 19, 2015, AD ID# 3589741-1. Affiant further says that the said Sun-Sentinel is a newspaper published in said Broward/Palm Beach/Miami-Dade County, Florida, and that the said newspaper has heretofore been continuously published in said Broward/Palm Beach/Miami-Dade County, Florida, each day, and has been entered as second class matter at the post office in Fort Lauderdale, in said Broward County, Florida, for a period of one year next preceding the first publication of the attached copy of advertisement; and affiant further says that he/she has neither paid, nor promised, any person, firm or corporation any discount, rebate, commission or refund for the purpose of securing this advertisement for publication in said newspaper.

[Signature] BETTY ARMAND, Affiant

Sworn to and subscribed before me on SEPTEMBER 21, 2015, A.D.
NOREEN RUBIN, Notary Public – State of Florida
My Comm. Expires Oct 24, 2016. Commission # EE 213961. Bonded Through National Notary Assn.
[Signature of Notary Public] (Name of Notary typed, printed or stamped)
Personally Known X or Produced Identification ___

NOTICE OF PROPOSED BROWNFIELD DESIGNATION

Representatives for West Atlantic Boulevard Apartments Investors, LLC, will hold a community meeting on September 28, 2015, from 5:30 P.M. to 7:30 P.M., for the purpose of affording interested parties the opportunity to provide comments and suggestions about the potential designation of the property identified by Folio Number 484232190010, Pompano Beach, Broward County, FL 33069, as a brownfield area pursuant to §376.80(2)(C), Florida Statutes, and about development and rehabilitation activities associated with the potential designation, including public hearings to be held by the Pompano Beach City Commission to consider the request for designation. The community meeting will be held at the Collier City Library, 2800 NW 9th Court, Pompano Beach, FL, and is free and open to all members of the public. For more information regarding the community meeting, including directions, or to provide comments and suggestions at any time before or after the meeting date, please contact Michael R. Goldstein. By telephone: (305) 777-1682; by U.S. Mail: The Goldstein Environmental Law Firm, P.A., 1 SE 3rd Avenue, Suite 2120, Miami, FL 33131; and/or by email: email@example.com.

Community Meeting – Proposed Pompano Beach Brownfield Area Designation

Representatives for West Atlantic Boulevard Apartments Investors, LLC, will hold a community meeting on September 28, 2015, from 5:30 P.M. to 7:30 P.M.
for the purpose of affording interested parties the opportunity to provide comments and suggestions about the potential designation of the property identified by Folio Number 484232190010, Pompano Beach, Broward County, FL 33069, as a brownfield area pursuant to §376.80(2)(C), Florida Statutes, and about development and rehabilitation activities associated with the potential designation, including public hearings to be held by the Pompano Beach City Commission to consider the request for designation. The community meeting will be held at the Collier City Library, 2800 NW 9th Court, Pompano Beach, FL, and is free and open to all members of the public. For more information regarding the community meeting, including directions, or to provide comments and suggestions at any time before or after the meeting date, please contact Michael R. Goldstein. By telephone: (305) 777-1682; by U.S. Mail: The Goldstein Environmental Law Firm, P.A., 1 SE 3rd Avenue, Suite 2120, Miami, FL 33131; and/or by email (see above).

A DEVELOPMENT ORDER ISSUED BY THE PLANNING AND ZONING BOARD (LOCAL PLANNING AGENCY) OF THE CITY OF POMPANO BEACH, BROWARD COUNTY, FLORIDA, PURSUANT TO CHAPTER 155 OF THE CODE OF ORDINANCES; APPROVING WITH CONDITIONS THE APPLICATION FOR DEVELOPMENT PERMIT FOR PALM AIRE ASSOCIATES LIMITED PARTNERSHIP.

WHEREAS, Section 155.2407 of the Code of Ordinances defines the project referenced above as a Major Review; and

WHEREAS, Section 155.2204 of the Code of Ordinances authorizes the Planning and Zoning Board (Local Planning Agency) to issue a final development order for the subject project to construct nineteen (19) new three-story multi-family buildings with a total of 404 residential units, a clubhouse building, dog park, tot lot, associated parking, and landscape improvements. The property is located at 3491 W. Atlantic Boulevard, more specifically described in the legal description below:

A PARCEL OF LAND IN THE SOUTHEAST ONE-QUARTER (S.E. 1/4) OF SECTION 32, TOWNSHIP 48 SOUTH, RANGE 42 EAST, BROWARD COUNTY, FLORIDA. SAID LANDS SITUATED IN THE CITY OF POMPANO BEACH, BROWARD COUNTY, FLORIDA, CONTAINING 26.773 ACRES (1,166,233 SQUARE FEET), MORE OR LESS.

AS WELL AS:

A PARCEL OF LAND IN THE SOUTH ONE-HALF (S. 1/2) OF SECTION 32, TOWNSHIP 48 SOUTH, RANGE 42 EAST, BROWARD COUNTY, FLORIDA. SAID LANDS SITUATED IN THE CITY OF POMPANO BEACH, BROWARD COUNTY, FLORIDA, CONTAINING 46.3524 ACRES (2,019,111 SQUARE FEET), MORE OR LESS.

WHEREAS, the Development Review Committee has met to review this project and has provided the applicant with written comments; and

WHEREAS, the Application for Development Permit is not in compliance with the applicable standards and minimum requirements of this Code, but the developer has agreed in writing that no building permit will be issued until those conditions the Development Services Director finds reasonably necessary to insure compliance are met; and

WHEREAS, copies of the survey and final site plan are on file with the Department of Development Services, stamped with the meeting date of November 19, 2014.

The Application for Development Permit is hereby approved by the Planning and Zoning Board (Local Planning Agency) subject to the following conditions and the bases therefor:

1. Approval of the site plan is contingent upon the final approval of the RPUD.
2. Park dedication, at the northeast corner of the property, to the City of Pompano Beach must be completed prior to building permit approval.
3. Final plat approval is required prior to building permit approval.
4. No building may exceed 180 linear feet in length or exceed 20,000 square feet in building footprint.
5. Provide a photometric plan in compliance with Table 155.5401.E., showing a minimum of 1 foot-candle of illumination throughout the vehicular use area.
6. Provide details of each of the proposed amenities: tot lot, dog park, mail kiosks.
7. Applicant shall provide evidence that the project will achieve at least ten sustainable development points, prior to building permit approval.
8. Landscape & irrigation plans must meet zoning code requirements.
9. Address closure of the Atlantic Boulevard tunnel prior to final C.O. of the last building of Phase I.

Be advised that pursuant to Section 155.2407(G) of the Pompano Beach Code of Ordinances, a DEVELOPMENT ORDER for a site plan application shall remain in effect for a period of 24 months from the date of its issuance.

Heard before the Planning and Zoning Board/Local Planning Agency and Ordered this 19th day of November, 2014.

Jill Beeson, Chairman, Planning and Zoning Board/Local Planning Agency

Filed with the Advisory Board Secretary this 4th day of December, 2014.

MATTHEW DESANTIS, Zoning Technician
Tool 6 Waste Picker Integration Plan Template

This tool presents a simple template for a waste picker integration plan (WPIP). It is taken from Annexure 1 in the Waste Picker Integration Guideline.

- This template can be used by collaborative waste picker integration working groups that include waste pickers and municipal officials to develop a Waste Picker Integration Plan. It can also be used by collaborative working groups developing WPIPs for industries, companies, NGOs and any other organisation.
- The template is designed to be used with the Waste Picker Integration Guideline for South Africa. The Guideline provides information on who waste pickers are, what waste picker integration means, why it is important, how to integrate waste pickers, and why WPIPs are necessary.
- Section H of the Guideline presents a participatory process to develop a WPIP. Following these steps will generate all information required to complete this WPIP template.
- The template includes text drawn from the Guideline. Instructions are in square brackets. The instructions should be deleted when the section is complete.
- The template also includes tables that can be used to present information. A WPIP can also include additional sections.

Annexure 1 – Waste Picker Integration Plan Template

This annexure presents a simple template for a waste picker integration plan (WPIP). It can be used by municipalities, industries, and any company or organisation that works with waste pickers or provides recycling collection services. The template is designed to be used with the Department of Environment, Forestry and Fisheries Waste Picker Integration Guideline for South Africa. The Guideline provides information on who waste pickers are, what waste picker integration means, why it is important, how to integrate waste pickers, and why WPIPs are necessary. Section H of the Guideline presents a participatory process to develop a WPIP. Following these steps will generate all of the information required to complete this WPIP template. The template includes text drawn from the National Guideline. Instructions are in square brackets. They should be deleted when the section is complete. Text that is underlined indicates where specific information should be inserted. The template also includes tables that can be used to present information. A WPIP can also include additional sections.

[Name of Organisation] Waste Picker Integration Plan 20xx – 20xx

Table of Contents [Insert Table of Contents]

Abbreviations

| IDP | Integrated Development Plan |
|-----|-----------------------------|
| IWMP | Integrated Waste Management Plan |
| NGO | Non-governmental Organisation |
| WPIP | Waste Picker Integration Plan |

1. Introduction

1.1 Background

This Waste Picker Integration Plan (20xx – 20xx) sets out how waste pickers and the system they have created to salvage and revalue recyclable and reusable materials will be integrated with the formal waste management and recycling systems and programmes in name of municipality/company/sector. [Add a description of how the plan was developed in your municipality/industry – who was involved, how they were involved, the period of time when it was developed, any challenges encountered when developing it, areas that need to be strengthened in future WPIPs, and so on. Include information on how waste pickers and NGOs that assist them were involved in the process.]
1.2 Aim

The aim of this WPIP is to ensure that waste and recycling policies and programmes in name of municipality/company/sector recognise, value and integrate waste pickers and the systems they have created, build on the strengths of the existing system, improve the work and livelihoods of waste pickers, and increase recycling rates.

1.3 Objectives

The objectives of this plan are to:

1. involve waste pickers in all decisions that affect their work, livelihoods and lives;
2. ensure that waste pickers and their separation outside source (SoS) system are integrated into formal systems to collect recyclables at all levels of the value chain;
3. develop locally relevant, cost-effective programmes that increase current diversion of recyclable and reusable materials away from landfills and align with the waste picker integration principles;
4. generate data required to develop a comprehensive understanding of the intended and unintended effects of each integrated recycling option and make evidence-based decisions on the selection of options;
5. ensure that waste pickers' conditions and livelihoods are improved and not worsened by formal recycling and waste picker integration programmes;
6. minimise and mitigate harm caused to waste pickers by existing recycling and waste picker programmes to the greatest extent possible;
7. create alternatives for affected waste pickers when negative effects cannot be avoided; and
8. develop a coherent waste picker integration plan with a clear budget, timeline and allocation of responsibilities to ensure effective implementation.

2. Principles

This WPIP is guided by the following principles:

1. Recognition, respect and redress – Waste pickers' role in the recycling system is recognised and taken into account. Waste pickers are engaged respectfully. Unequal power relations between waste pickers and municipal and industry officials, as well as those rooted in gender, race, class, nationality and so on, are recognised and addressed.
2. Value waste picker expertise – Officials cannot presume to know what waste pickers want, how they are affected by changes in the recycling and waste management systems, what the best form of integration would be, or how waste pickers work. Successful integration programmes are based on waste pickers' needs and interests – as communicated by waste pickers.
3. Meaningful engagement – Legitimate platforms are created to meaningfully include waste pickers as equal partners in decision-making related to recycling programmes and waste picker integration. Waste pickers are supported to organise themselves so that they can better represent themselves.
4. Build on what exists – Waste pickers' informal system for collecting, preparing and selling recyclables is recognised and valued, and provides the basis for the development of new formal recycling programmes and contracts.
5. Increased diversion and cost effectiveness – New waste picker integration and recycling initiatives increase diversion of recyclables from landfills through cost-effective means.
6. Evidence-based – Waste picker integration and recycling policies and programmes are evidence-based. Piloting can assist in generating necessary evidence. Information generated through monitoring and evaluation contributes to revisions and future developments.
7. Enabling environment – Enabling environments for waste picker integration are created at national, provincial and local levels.
8. Improved conditions and income – Official waste picker integration and recycling policies and programmes improve waste pickers' working conditions, incomes and social security. Waste pickers are provided with alternatives and compensated for any displacement or deterioration of conditions and incomes resulting from official waste picker integration and recycling programmes and contracts.
9. Payment for services and savings – Waste pickers are remunerated for the collection services they provide, for costs avoided by municipalities and industry, and for environmental benefits they generate.
10. Holistic integration – Successful integration of waste pickers requires changing how they are seen and engaged by residents, industry and government. Waste pickers are recognised as active and equal participants in political, economic, social, cultural and environmental processes.

3. Accountability and decision-making

3.1 Responsible official [Provide details of the responsible official, who should have a senior position with decision-making authority.]

3.2 Waste picker integration engagement platform [As discussed in the Guideline, initiatives to integrate waste pickers cannot succeed unless waste pickers are involved in their design, implementation and evaluation. The WPIP should, therefore, include a waste picker integration engagement platform. The platform should include representatives of all waste picker organisations working at the relevant scale; representatives elected by independent waste pickers; and representatives from all relevant local and district or company or industry departments. In this section, describe the engagement platform. The Guideline suggests discussing this closer to the end of the process, once concrete issues have been discussed and relationships have been developed.]

3.3 Decision-making process [Provide information on how decisions related to waste picker integration will be made through the engagement platform and how they will be finalised.]

3.4 Resolving disputes [Provide information on how disagreements between stakeholders will be resolved.]

4. Analysis of the current system [As discussed in the Guideline, before changes are made to the current waste and recycling systems it is important to know what exists.]

4.1 Current commitments related to recycling and waste pickers [Complete the table of current targets and commitments related to recycling and waste pickers.]

| Document | Commitments | Targets | Indicators | Timeframe | Responsibility |
|----------|-------------|---------|------------|-----------|----------------|
| | | | | | |
| | | | | | |
| | | | | | |

4.2 Baseline information on waste pickers in the municipality or sector [Provide as much information as possible on the number, gender, race and nationality of waste pickers, the areas where they work, the materials they collect, where they sell, how much they earn, and so on. See the Guideline for some ideas regarding how to collect this information.]

4.3 Overview of the existing and planned recycling systems [Provide an overview of the formal and informal recycling systems and how they intersect. Include all parts of the systems, including buy-back centres, sorting spaces, etc. It would be useful to include a diagram.]

4.4 Existing and planned official recycling and waste picker programmes and contracts [Provide information on all official waste picker-specific programmes as well as all recycling programmes and contracts.
The Guideline includes a process for gathering this information] | Programme/Contract | Start Date | End Date | Areas | Activities | How waste pickers are included | Targets | Indicators | Budget | Responsible | |--------------------|------------|----------|-------|------------|-------------------------------|---------|------------|--------|-------------| | Recycling programmes and contracts | | | | | | | | | | | Waste picker specific projects and programmes | | | | | | | | | | 4.5 Current challenges in the formal and informal recycling systems [Provide information on the current challenges in the formal and informal recycling systems] 4.6 Effects of existing official programmes and contracts on waste pickers [Provide information on the effects of existing recycling and waste picker programmes and contracts on waste pickers.] 5. Addressing adverse effects of current programmes and contracts [The most immediate priority of a WPIP is to address the adverse effects on waste pickers of current programmes and contracts. Follow the Guideline process to develop initiatives to address the adverse effects identified in Section 4 of the WPIP, and present them in this section. This section should also include ways to strengthen positive effects of existing initiatives. It can be useful to present the information in the table below and also provide a written overview.] | Contract/Programme | Effects for waste pickers | Redress actions | Time frame | Targets | Indicators | Budget | Responsible | |--------------------|---------------------------|-----------------|------------|---------|------------|--------|-------------| | | | | | | | | | | | | | | | | | | | | | | | | | | | 6. New recycling and waste picker specific programmes [The Guideline provides some ideas of different ways to integrate waste pickers and their informal recycling system. Follow the process in the Guideline to develop an appropriate approach to integrate waste pickers and their system in the development of new official recycling programmes and projects. Ensure that these are in line with the principles. In this section, provide details on the planned projects, including why they were selected, targets, indicators, time frames, budgets and responsibility. Identify how waste pickers will be included as well as possible negative effects for waste pickers and how these will be mitigated. It is important to include ALL recycling programmes, projects and contracts, as well as all waste picker specific programmes. It is also useful to complete the summary table below.] | Programme/Contract | Start Date | End Date | Areas | Activities | How waste pickers are included | Targets | Indicators | Budget | Responsible | |--------------------|------------|----------|-------|------------|-------------------------------|---------|------------|--------|-------------| | Recycling programmes and contracts | | | | | | | | | | | Waste picker specific projects and programmes | | | | | | | | | | 7. Building capacity and support [Successful waste picker integration requires strengthening the capacity and support of officials, waste pickers, businesses or industry, and residents. Provide details of how this will be done. Include time frames, targets, indicators, budget and allocation of responsibility.] 8. 
Institutionalising waste picker integration [As discussed in the Guideline, to institutionalise waste picker integration, clearly state in this section what targets and so on will be included in relevant documents and what changes are required to bylaws (if relevant), policies and plans in order to achieve your commitments to waste picker integration.]

8.1 IDP [for municipalities]
8.2 Bylaws [for municipalities]
8.3 KPIs
8.4 Existing policies

9. Implementation Plan [Provide a detailed implementation plan]

10. Monitoring, evaluation and revision [Provide details on how the WPIP will be monitored and evaluated, how this will feed into revision of the plan and activities, and how waste pickers will be included in this process.]

11. Financial Framework

11.1 Budget [Provide a full budget for official programmes and budgets to implement the WPIP. Ensure that sufficient funds are allocated to support meaningful engagement by waste pickers.]

11.2 Funding sources [Identify potential sources of funding from all levels of government, the private sector, donors and so on. Identify the person or people responsible for raising funds.]

11.2.1 Municipal funding
11.2.2 Provincial funding
11.2.3 National funding
11.2.4 Private Sector Funding
11.2.5 Donor Funding
11.2.6 EPR funding

12. Appendices [Attach any necessary appendices]
HIGHLIGHTS – APRIL 2018

- Due to incessant rains in the month of April, the temporary shelters hosting early childhood development (ECD) classrooms for about 2,700 Burundian children (3-6 years) collapsed. UNICEF carried out a quick assessment and temporarily accommodated the affected children in the existing facilities. The other two ECD learning approaches – centre-based in the permanent centre and home-based – are continuing. Immediate funds of approximately US$ 200,000 are required to find a more permanent solution to the temporary shelters.
- Health and nutrition activities are ongoing with the support of the Government and implementing partners, exceeding the SPHERE standards. Child protection services are being integrated through the national child protection systems.

UNICEF's Response with Partners

| Sector | UNICEF Target¹ | UNICEF Results |
|-------------------------------------------|----------------|----------------|
| WASH: # of people provided with prepositioned materials² | 10,000 | 0 |
| Health: # of children vaccinated against measles | 9,900 | 2,097 |
| Nutrition: # of children admitted for SAM treatment | 300 | 62 |
| Early childhood development: children aged 0 to 6 years benefiting from the provision of ECD services through centre- and home-based care | 1,100 | 0³ |
| Child protection: # of children and adolescents, including UASC, receiving critical child protection services | 30,000 | 26,700⁴ |
| Child protection: # of UASC receiving appropriate alternative care services | 200 | 0⁵ |
| Education: # of children accessing quality education | 19,000 | 22,947 |

¹ The targets were set based on the planning figure of an expected 120,000 Burundian refugees in Mahama Camp and reception centres. Currently Burundian refugees are 47% of the planning figure.
² This activity relates to the prepositioning of WASH supplies that are expected to cater for 10,000 new refugees. Supplies will only be used if a new influx of refugees takes place.
³ Due to no change in the number of children aged 0-6 years during Jan-Feb 2018, the sector has reported zero progress.
⁴ This intervention is for the most vulnerable children. Since the population has remained static, there has been no change in this indicator.
⁵ Due to no change in the number of UASC children during Jan-Feb 2018, the progress is being shown as zero.

According to UNHCR statistics from 31 March 2018, there are 177,369 refugees and asylum seekers in Rwanda. Of these, 92,840 are Burundian refugees, 75,162 are Congolese refugees, 8,727 are asylum seekers, and 640 are refugees from various other countries. Nearly 50 per cent of the refugees and asylum seekers are children under 18. Two refugee camps for Congolese refugees were established in 1996 and 1997, and the other three camps were established in 2005, 2012 and 2014. In 2012, UNHCR took full responsibility for the Congolese refugee response. However, as an additional 10,000 Congolese refugees are expected in 2018, UNICEF has begun contingency planning and prepositioning of supplies. Mahama Camp currently hosts 57,407 Burundian refugees, while the three reception centres (Bugesera, Nyanza and Gatore) host a total of 2,229 Burundian refugees. For the first time, the new transit centre in Nyarushishi received 399 new arrivals from Burundi during this reporting period. In addition, there are 34,922 Burundian refugees in the urban areas of Kigali and Huye.
There are 21,451 refugees who are particularly vulnerable due to serious medical conditions or disabilities, or because they are unaccompanied or separated children, according to UNHCR.

**Humanitarian leadership and coordination**

The Ministry of Disaster Management and Refugee Affairs (MIDIMAR) and UNHCR are the overall coordinators of the inter-agency response to the refugee situation. For the Burundian refugees residing in Mahama Camp, UNICEF is the UN co-coordinator for the response in WASH, child protection, education, early childhood development, health (with WHO and UNFPA), and nutrition (with WFP). The main implementing partners are district and community authorities, the Ministry of Health, Rwanda Biomedical Center, district hospitals and health centres, Africa Humanitarian Action, American Refugee Committee (health, nutrition and shelter), Save the Children (child protection), the Adventist Development and Relief Agency (ADRA) in ECD and education, the Ministry of Infrastructure, Rwanda Water and Sanitation Corporation (WASAC), Global Humanitarian and Development Foundation (GHDF), and Oxfam (WASH). In 2016, the Government of Rwanda joined the Comprehensive Refugee Response Framework (CRRF),⁶ which aims to strengthen donor and Government engagement towards the inclusion of refugees in national systems, while at the same time promoting an equity approach in refugee-hosting areas so that development investments benefit both host and refugee communities.

**Humanitarian strategy**

The humanitarian strategy agreed between the Government and development partners is to provide comprehensive services to refugees and seek fulfilment of their basic rights. This includes provision of registration, shelter, household equipment, food and water, maintaining sanitation and hygiene, health and nutrition services, education, and protection. Refugee Coordination Meetings are held each month and include donors and development partners such as the World Bank. In Mahama Camp for Burundian refugees, UNICEF's continuing response includes technical assistance, screening and management of severe acute malnutrition, promotion of appropriate infant and young child feeding practices, and provision of polio and measles vaccines for children, as well as routine immunisations. In addition, unaccompanied and separated children are registered, their families are traced, and child-friendly spaces are established. Support for the prevention of and response to violence against children is being provided. UNICEF is also supporting access to early learning and basic education for refugee children.

⁶ The CRRF is a new framework adopted by all 193 Member States of the United Nations as part of the New York Declaration for Refugees and Migrants in September 2016 that provides for a more comprehensive, predictable and sustainable response that benefits both refugees and their hosts.

**Summary analysis of programme response for refugees from Burundi and DRC**

**Nutrition**

During the reporting period, in collaboration with the American Refugee Committee (ARC) and Save the Children in Mahama Camp for Burundian refugees, and Africa Humanitarian Action (AHA) in the five Congolese camps, UNICEF continued to provide technical support and nutrition supplies for malnourished children under five by integrating refugees into national programmes.
In March 2018, UNICEF provided 70 cartons of ready-to-use therapeutic food (RUTF) to AHA through the district for distribution in the Bugesera and Nyanza reception centres hosting Burundian refugees, and in Kigeme and Mugombwa Camps hosting Congolese refugees. Mahama Camp received 150 cartons of RUTF from Kirehe District Hospital for the treatment of severe acute malnutrition (SAM), as well as 75 cartons (6,840 boxes) of micronutrient powders (MNP) for the prevention of deficiencies such as anaemia in children under two. The estimated number of SAM cases for 2018 is 300. Supplies will be replenished as needed. Community-based activities for maternal, infant and young child nutrition continued in all villages. By March 2018, 62 cases of SAM had been identified (28 boys and 34 girls) and admitted to out-patient programmes. All of these children received treatment in the nutrition rehabilitation centre in the camp. 24 boys and 31 girls with SAM have been successfully rehabilitated and transferred to the moderate acute malnutrition (MAM) programme. Five cases (four boys and one girl) with medical complications were admitted and treated at Kirehe District Hospital. Three boys and one girl responded to the treatment and were discharged. In late February, there was one reported death, of a girl with severe anaemia in the outpatient programme who presented too late at the hospital.

**Health**

In March 2018, 849 children aged 0-5 years were reached with essential vaccines – BCG, polio, DTC, hepatitis B, Haemophilus influenzae type B, rotavirus, pneumococcal conjugate, and measles/rubella (MR) – and 111 pregnant women were provided with tetanus toxoid vaccines. The procurement of the vaccines was co-financed by GAVI and UNICEF.

**Water, Sanitation and Hygiene (WASH)**

Since 2015, UNHCR has provided, and continues to provide, all WASH support to Congolese refugees. While UNICEF contributed to the establishment of water supply and sanitation services in Mahama Camp for Burundian refugees in 2015-17, UNHCR is now responsible for all WASH services there. In the event of an influx of additional refugees, UNICEF will distribute WASH supplies to affected populations. In the current context, however, UNICEF monitors the situation in all camps and provides technical assistance to UNHCR where needed. In February, UNHCR requested and received technical support from UNICEF to determine the WASH requirements for a newly established isolation centre in Bugesera Transit Camp, which UNHCR began implementing in April. UNICEF is ready to respond with WASH services in the event of a disruption of services or an unexpected influx. The 2017 WFP-UNHCR Joint Assessment Mission indicated that SPHERE standards for water and sanitation services are not being met in all camps. UNICEF is assessing the current situation with partners and working to determine feasible solutions.

**Child Protection**

During this reporting period, UNICEF continued to work with Save the Children to provide child protection support to over 26,700 children in Mahama Camp. Child protection interventions responded to the different needs of girls and boys, especially unaccompanied and separated children, based on specific protection risks. Child- and youth-friendly spaces have been established to provide a safe environment for girls and boys to play and receive psychosocial support. On average, 12,476 children (6,488 boys and 5,988 girls) use these spaces on a weekly basis.
In addition, UNICEF is strengthening the technical capacity of partners to appropriately manage child protection cases; 66 child protection volunteers received refresher trainings on case management of the protection needs of unaccompanied and separated children. These trainings focused on how to identify and report child protection cases, and on overall case management and referral services for victims of abuse under the supervision of social work professionals. In total, 937 children (415 girls and 522 boys) have been provided with necessary protection services and continue to be monitored closely. The child protection services include provision of psychosocial support and follow-up of child protection case referrals. Furthermore, Save the Children provided child and youth capacity empowerment training as part of an effort to build vocational skills and self-resilience. Community-based volunteers make daily home visits to children living in alternative care arrangements, and monthly visits are made to those placed in foster families. Three community sensitisation campaigns were also conducted through cultural dramas and sketches focusing on specific risks. One drama focused on the prevention of early marriages, one on teenage pregnancies and one on back-to-school campaigns.

In 2018, UNICEF began transitioning from a predominantly camp-based approach to supporting refugee children within the national child protection system. As part of a mechanism to bridge the humanitarian–development divide, UNICEF, Save the Children and the National Commission for Children conducted two joint meetings that brought together the child protection workforce from Mahama District and the refugee camp. The meetings were organised by Save the Children and involved 30 (15 male, 15 female) host-community child and family protection volunteers known as Friends of the Family (Inshuti z'Umuryango), 6 (3 male, 3 female) administrative Sector and Cell Social Affairs officers, 2 Executive Sector and Cell leaders, and one officer in charge of Education. The purpose of the meetings was to increase awareness of and support to refugee children in need, including support in cases of referral and the prevention of child labour and abuse outside the camp. Plans are underway to facilitate a visit by 30 Friends of the Family to Mahama Camp in order to learn from and exchange experiences with the child protection volunteers in the camp. This activity is important to improve collaboration and the management of child protection referrals.

Similarly, an agreement was signed with Save the Children to intervene in 11 districts in a development setting, of which six are hosting refugees, with at least 50 per cent of the population being children under 18 years. This entails strengthening the national child protection system to include refugee children for the identification, management, and referral of child protection cases. This cooperation agreement covers Huye (total refugee population: 3,410), and the Nyarugenge and Gasabo Districts of Kigali City (30,632 refugees). These districts host a considerable number of urban refugees from Burundi. The agreement also covers Gicumbi and Nyamagabe Districts, which host Gihembe and Kigeme Refugee Camps, respectively, for Congolese refugees (12,418 and 14,469 respectively). Strengthening the national child protection system will be a good opportunity to ensure appropriate inclusion of refugee children and to maximise protection of their rights through more sustainable preventive and responsive actions.
**Education** During the reporting period, UNICEF continued to support quality education in Mahama schools through the provision of teaching supplies, including mathematics teaching kits and dustless chalk used by 386 teachers. UNICEF also maintained support for the integration of refugee children into the education system by providing learning materials, which benefit refugee children as well as children from the host community, who study together in the national government school Paysannat L. The school follows the national competence-based curriculum. To keep ICT materials (computers and accessories) functional in Mahama schools, UNICEF organised refresher trainings on computer maintenance for IT technicians. Fuel was also provided to power a generator and the school computers.

**Early Childhood Development (ECD)** A second permanent ECD centre, currently being constructed with support from the Government of Sweden, will be handed over to MIDIMAR and UNHCR by the end of May. The centre will have six stimulation rooms to accommodate 400 children attending in double shifts. The existing ECD centre is also being upgraded with two additional stimulation rooms supported by Government of Japan (GoJ) funds. The ongoing construction of a multi-purpose play park is also funded by the GoJ; outdoor play materials have been procured and installation has begun, and the play park will be operational by mid-June 2018.

ECD services are provided to children aged 0-6 in Mahama Camp through three main approaches: integrated ECD services in the permanent ECD centre; pre-primary services in temporary shelters; and home-based services through parent-led groups. In total, 5,756 children aged 0-6 have benefited from ECD services through these approaches, carried out by 88 caregivers. The permanent ECD centre has 420 children (219 girls and 201 boys), and 4,730 children (2,235 girls and 2,495 boys) are in the temporary shelters. A further 606 Burundian children (327 girls and 279 boys) benefit through the 60 home-based groups managed by 120 trained parent leaders. In addition to these approaches, broader outreach activities are conducted to build the parenting skills of parents with children aged 0-3 years. ECD caregivers facilitate parenting sessions and organise supervision and coaching visits for home-based groups.

On 17 April, three temporary ECD structures providing spaces for over 2,400 children collapsed due to heavy rains that week. The collapse occurred at night and no casualties were reported. Three of the five temporary structures collapsed, and the two remaining structures are in a compromised condition, which poses a danger to children and caregivers. Some of the affected children have been integrated into other classrooms, while others have been taken to the nearby permanent ECD centre. This has resulted in significant overcrowding. Other structures, such as those within child-friendly spaces and health facilities, remain vulnerable if weather conditions do not improve.

**Funding** In 2018, UNICEF Rwanda requires a total of US$ 2,837,000 for the refugee response, including US$ 1,837,000 for the Burundian refugee response and US$ 1 million for the Congolese response, as per the inter-agency RRRPs. Carry-over resources from 2017 will be utilised by the end of June 2018, and thus UNICEF is in critical need of funding support to continue responding to the needs of refugees.
| Sector | Requirements: Burundi refugees | Requirements: DRC refugees | Total requirements | Funds available: received current year* | Funds available: carry-over from 2017 | Funding gap ($)**** | Funding gap (%) |
|---|---|---|---|---|---|---|---|
| Nutrition | 198,000 | 50,000 | 198,000 | 0 | 0 | 198,000 | 100 |
| Health (includes C4D) | 440,000 | 110,000 | 600,000 | 0 | 0 | 440,000 | 100 |
| WASH | 220,000 | 240,000 | 220,000 | 0 | 0 | 220,000 | 100 |
| Education** | 385,000 | 240,000 | 645,000 | 0 | 0 | 385,000 | 100 |
| ECD** | 297,000 | 140,000 | 557,000 | 0 | 0 | 500,510 | 100 |
| Child Protection | 297,000 | 220,000 | 617,000 | 0 | 91,978 | 320,000 | 100 |
| **Total** | **1,837,000** | **1,000,000** | **2,837,000** | **0** | **103,744** | **2,744,533** | **97** |

* Carry-forward funds from 2017 were committed and utilised by June 2018.
** The RRRP combines the ECD and Education figures.
**** Results have been achieved through the allocation of regular resources, including carry-over from 2017 to the refugee response.

**For more information:**
Ted Maly, Representative, +250 788 302 716, firstname.lastname@example.org
Oliver Petrovic, Deputy Representative, +250 788 300 717, email@example.com
Rajat Madhok, Chief of Communications, Advocacy and Partnerships, +250 788 301 419, firstname.lastname@example.org
Evolution of genetic redundancy

Martin A. Nowak*, Maarten C. Boerlijst*, Jonathan Cooke† & John Maynard Smith‡

* Department of Zoology, University of Oxford, South Parks Road, Oxford OX1 3PS, UK
† National Institute for Medical Research, The Ridgeway, London NW7 1AA, UK
‡ School of Biological Sciences, University of Sussex, Brighton BN1 9QG, UK

Genetic redundancy means that two or more genes are performing the same function and that inactivation of one of these genes has little or no effect on the biological phenotype. Redundancy seems to be widespread in the genomes of higher organisms\textsuperscript{1–9}. Examples of apparently redundant genes come from numerous studies of developmental biology\textsuperscript{10–15}, immunology\textsuperscript{16,17}, neurobiology\textsuperscript{18,19} and the cell cycle\textsuperscript{20,21}. Yet there is a problem: genes encoding functional proteins must be under selection pressure. If a gene were truly redundant, then it would not be protected against the accumulation of deleterious mutations. A widespread view is therefore that such redundancy cannot be evolutionarily stable. Here we develop a simple genetic model to analyse the selection pressures acting on redundant genes. We present four cases that can explain why genetic redundancy is common. In three cases, redundancy is even evolutionarily stable. Our theory provides a framework for exploring the evolution of genetic organization.

A growing number of observations demonstrate that experimental inactivation of certain genes has no apparent effect on the phenotype or fitness of an animal. In specific cases, it seems that the natural function of a gene can be taken over by another gene. Such a redundant genetic organization is sensible from an engineer's point of view: important functions require backup devices that can take over in case of failure. But can natural selection favour the emergence and stability of redundant genes?

Consider a population of animals in which some essential function can be performed by genes at either of two loci, $A$ and $B$. (We use the word 'function' to refer to an effect of a gene during development; thus two genes coding for different proteins can have the same function.) Non-functional alleles, $a$ and $b$, arise by mutation at rates $u_a$ and $u_b$ per generation; reverse mutations are ignored. For simplicity, we consider a haploid population, but the models can be extended to diploid populations and the conclusions remain essentially unchanged. There are four genotypes: $AB$, $Ab$, $aB$ and $ab$. In each generation, random mating is followed by mutation and selection. Natural selection can maintain both genes if redundancy is only apparent, that is, if the $AB$ genotype is fitter than the other genotypes. Less obvious is the question of whether natural selection can maintain true redundancy in the sense that an individual with one of the two genes is as fit as an individual with both. Models 1–3 will address this question. Model 4 studies the consequence of developmental errors.

In model 1, we assume that both genes are equally effective, and that each can function perfectly on its own (Fig. 1a). The fitness of $AB$, $Ab$ and $aB$ is one, while the fitness of $ab$ is zero. Let us first consider the case where the mutation rates in both genes are the same: $u_a = u_b = u$. The system admits a line of equilibria. All trajectories converge to this line.
For small mutation rates, the maximum equilibrium frequency of $AB$ is approximately $1 - 2\sqrt{u/r}$, where $r$ is the recombination rate between the two loci. Thus a large proportion of individuals can carry functional alleles for both genes. There is, however, an important caveat. We have assumed that the mutation rates $u_a$ and $u_b$ are equal. But any small deviation from $u_a = u_b$ destroys the equilibrium line. If $u_a \neq u_b$, then model 1 does not admit any interior equilibrium, and redundancy does not survive\textsuperscript{22}. A simple way of understanding this result is as follows. At equilibrium, the rate at which deleterious genes arise by mutation must equal the rate at which they are removed by selection. Because only $ab$ individuals are removed selectively, the rates of removal of the two genes are equal; therefore the rates at which they arise by mutation must be equal. If $u_a > u_b$, then $A$ will become extinct while $B$ will be fixed. But if the mutation rates are very similar, this may take a long time. In model 1, $A$ declines as $\exp[-(u_a - u_b)T]$, where $T$ is the number of generations\textsuperscript{2}. (This represents an upper limit.) For a mutation rate of $10^{-6}$ per gene per generation, and a 10% difference between $u_a$ and $u_b$, the average lifetime of redundancy is about $10^7$ generations. Therefore, a certain amount of redundancy in our genomes could be the consequence of recent gene duplication events. We note, however, that redundancy cannot only be a consequence of gene duplication, because very different genes can also show overlapping redundancy. Several authors have studied stochastic versions of model 1 and computed the time it takes for random drift to eliminate one of the two genes even if mutation rates are exactly equal\textsuperscript{23–28}.

In model 2, we assume that the genes $A$ and $B$ perform the same function, but with slightly different efficacies (Fig. 1b). Suppose $A$ performs the function with an efficacy of one, while $B$ does it with a reduced efficacy, $h$. If both genes are present, the function is performed with the higher of the two efficacies; this is essentially a definition of redundancy. Thus the fitness of genotypes $AB$ and $Ab$ is one, while the fitness of genotype $aB$ is $h$. The fitness of $ab$ is zero. Unexpectedly, this can lead to a stable equilibrium with both genes $A$ and $B$ maintained in the population, provided that the mutation rate in $A$ is higher than in $B$. Redundancy is maintained because gene $B$, with the lower efficacy, also has a lower mutation rate, and is maintained by selection in $a$ genotypes. In this case, $B$ is fully redundant in the sense that its inactivation has no effect on fitness, whereas deletion of $A$ causes a small reduction in fitness. A stable equilibrium is also possible if the fitness of $Ab$ is higher than the fitness of $AB$, which in turn has a higher fitness than $aB$. In this case, redundancy is even maintained at a cost.

Model 3 relates pleiotropy to redundancy (Fig. 1c). Pleiotropy implies that genes perform more than one specific function (for example, by being expressed at more than one time and place in the developing organism\textsuperscript{29}).
The idea is that redundancy between two genes occurs only with respect to a given function, while the genes are maintained by selection because of another, independent function. Redundancy arises as a consequence of 'functional overlap' between genes. In the simplest case, there are two functions, $F_1$ and $F_2$, and two genes, $A$ and $B$. Suppose $A$ performs $F_1$ with an efficacy of one, while $B$ performs $F_1$ with a slightly reduced efficacy, $h$, and $F_2$ with an efficacy of one. Mutations in $A$ lead to the inactive variant $a$; the mutation rate is $u_a$. In the second locus, we consider two types of mutants: $b_1$ has lost the ability to perform $F_1$, but still performs $F_2$; $b_2$ is completely inactive. The mutation rate from $B$ to $b_1$ is $u_{b1}$. The fitness of each variant is evaluated by assuming that each function is performed with the efficacy of the most efficient gene, and the overall fitness is the product of the efficacies at which the two functions are performed. Therefore genotypes $AB$ and $Ab_1$ have a fitness of one, genotype $aB$ has fitness $h$, and all other genotypes have a fitness of zero. Using the same framework as above, we find that a stable equilibrium with $AB$ is possible provided the mutation rate $u_{b1}$ is smaller than $u_a$. Functional overlap (and therefore redundancy in performing function $F_1$) is stable if the mutation rate at which mutants are produced that have lost the functional overlap but still maintain the original function is lower than the mutation rate of producing inactive mutants at the other locus ($u_{b1} < u_a$).

**Figure 2** Complex redundancy–pleiotropy networks evolve if the mutation rate of complete inactivation of a gene is higher than the mutation rate of inactivating only one function of a gene while leaving other functions unaffected. Results from a stochastic computer simulation are shown. In each round, two individuals are chosen to reproduce, and their genes are recombined and mutated. Mutation can either inactivate a gene completely, inactivate one or more functions of a gene, or change the efficacies at which a gene performs a function. The fitness of the offspring is given by the product of the efficacies at which each function is performed, where each function is performed with the efficacy of the most efficient gene for this function. There are four loci, $A_1$ to $A_4$, and four functions, $F_1$ to $F_4$. Gene $i$ performs function $j$ with efficacy $h_{ij}$. Initially each gene performs one function with an efficacy of one. During the simulation, each gene can evolve to perform additional functions, but the sum over all efficacies is limited by $h_{\text{max}}$ for each gene: $\sum_j h_{ij} < h_{\text{max}}$. For example, $h_{\text{max}} = 1.9$ means that a gene can perform one function with an efficacy of one and a second function with an efficacy of 0.9. The figure shows the initial configuration, and the population averages of $h_{ij}$ after $t = 100,000$ and $t = 500,000$ generations for three different values of $h_{\text{max}}$. The population size is 20,000. The mutation rate for complete inactivation is $u_1 = 0.001$, $u_2 = 0.0011$, $u_3 = 0.0012$ and $u_4 = 0.0013$ for loci 1–4. A specific function is lost at mutation rate $u_i/10$ and changed to a random value between 0 and 1 (subject to $\sum_j h_{ij} < h_{\text{max}}$) at mutation rate $u_i/20$.
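To make the fitness rule of the Fig. 2 simulation concrete, the following Python sketch evaluates a genotype's fitness from an efficacy matrix $h_{ij}$: each function is performed at the efficacy of the best available gene, and fitness is the product over functions. The matrix values and the knockout interface are illustrative assumptions, not the simulation code used for Fig. 2.

```python
import numpy as np

def fitness(h, knocked_out=()):
    """Fitness of a genotype under the Fig. 2 rule: each function is
    performed at the efficacy of the best gene carrying it, and overall
    fitness is the product of these per-function efficacies.
    h[i, j] = efficacy with which gene i performs function j;
    knocked_out = indices of genes inactivated in this genotype."""
    h = h.copy()
    h[list(knocked_out), :] = 0.0    # an inactive gene performs nothing
    return np.prod(h.max(axis=0))    # best gene per function, then product

# Illustrative 4-gene / 4-function configuration (values assumed):
# gene 0 has evolved overlap on function 1 at efficacy 0.9, etc.
h = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.8],
              [0.0, 0.0, 0.0, 1.0]])

print(fitness(h))                    # intact genotype: 1.0
print(fitness(h, knocked_out=(1,)))  # gene 1 lost: function 1 covered at 0.9
```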
The condition $u_{b1} < u_a$ is plausible, as mutations that destroy all functions of a gene are likely to be more common than mutations destroying one function but leaving another unaffected. This is especially true if the two functions are very similar. Models 2 and 3 show that true redundancy can be evolutionarily stable, but in both cases the relevant selection pressures are weak if the mutation rates are low. For weak selection pressures to counteract random drift, the population size has to be large. In our case the population size has to exceed $1/u$ for redundancy to be maintained.

Models 1–3 explore the maintenance of redundancy. We would also like to understand the origin of redundancy and extend the idea to a larger number of genes and functions. Figure 2 shows computer simulations based on a stochastic version of our model. The starting configuration is neither redundant nor pleiotropic: each function is performed by one gene, and each gene performs one function. During the simulation, mutations that lead to functional overlap arise spontaneously and can be favoured by selection. Genes evolve to perform additional functions. Redundancy is selected and can be fixed in the population if the mutation rate for complete inactivation of a gene is higher than the mutation rate for inactivating only a specific function of a gene. Final configurations often consist of complex redundancy–pleiotropy networks in which each function is performed by several genes and each gene performs several functions.

Finally, model 4 considers 'developmental errors' (Fig. 3). The transmission of information from the egg to the adult organism is subject to errors. Let us therefore consider the possibility that a gene is intact in the germ line but fails to perform its function during development. We suggest that such developmental failure may arise by somatic mutation, by errors in the origin and maintenance of cell differentiation (for example, in copying DNA methylation patterns), or through errors in cell-to-cell signalling. In terms of our model, we assume that two genes $A$ and $B$ perform the same function, but genes $A$ and $B$ fail to perform this function in the course of development with probabilities $\delta_a$ and $\delta_b$, respectively. If both genes fail, the function is not performed and the animal does not survive. In the absence of developmental error, the genotypes $AB$, $Ab$ and $aB$ have a fitness of one; taking developmental error into account, the average fitnesses of $AB$, $Ab$ and $aB$ are $1 - \delta_a\delta_b$, $1 - \delta_a$ and $1 - \delta_b$, respectively. As before, the mutation rates to produce the inactive genes $a$ and $b$ are $u_a$ and $u_b$. We find that the redundant genotype, $AB$, is stable if $u_a < \delta_b$ and $u_b < \delta_a$. Thus the mutation rate in each gene has to be smaller than the developmental error rate of the other gene. It is plausible that such developmental failures are more frequent than germline mutations: repeated rounds of cell replication provide increased probabilities for somatic mutation; errors can occur in the DNA methylation pattern and may result in incorrect cell differentiation; and interactions with other cells and responses to signals can fail. Therefore we expect the error rate for normal expression of developmental genes, per individual ontogeny, to be higher than germline mutation rates.
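The arithmetic of model 4 is easy to check mechanically. The sketch below (Python; the rate values are illustrative assumptions, not taken from the paper) computes the average fitnesses just quoted and tests the stability condition $u_a < \delta_b$, $u_b < \delta_a$.

```python
def model4_fitnesses(delta_a, delta_b):
    """Average fitnesses of AB, Ab and aB under developmental error:
    the function fails only if every gene carried by the genotype fails."""
    return {"AB": 1 - delta_a * delta_b,
            "Ab": 1 - delta_a,
            "aB": 1 - delta_b}

def redundancy_stable(u_a, u_b, delta_a, delta_b):
    """Model 4 stability condition for the redundant genotype AB: each
    germline mutation rate must lie below the developmental error rate
    of the *other* gene."""
    return u_a < delta_b and u_b < delta_a

# Illustrative rates: germline mutation ~1e-6, developmental error ~1e-3.
print(model4_fitnesses(1e-3, 1e-3))               # AB fitter by ~1e-3
print(redundancy_stable(1e-6, 1e-6, 1e-3, 1e-3))  # True
```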
This expectation is also supported by observations that routine examination of large numbers of embryos in each phase of development reveals spontaneous cases of obvious morphogenetic failure much in excess of germline mutation rates. Model 4 suggests that redundancy should be more common in developmental genes that are expressed in specific spatio-temporal patterns in the body than in genes encoding 'housekeeping' functions that are required in all cells (for example, essential metabolic enzymes). Somatic mutations or failures of gene expression that simply kill the cell in which they occur may have much less phenotypic effect than similar events that misguide subsequent developmental signals. Therefore the developmental error rate should be higher in genes that are not required in every cell.

The model can be extended to more than two genes per function (Fig. 3b). An elegant result is obtained if we consider genes with similar mutation rate, $u$, and similar developmental error rate, $\delta$. The number of redundant genes that can be maintained per function is the largest integer less than $1 + (\log u)/(\log \delta)$. For example, if the mutation rate is $u = 10^{-6}$ and the developmental error rate is $\delta = 10^{-3}$, selection can maintain up to three genes for a given function.

Model 1 shows that in situations where true redundancy is not evolutionarily stable, it may nevertheless take a long time until it is eliminated from the population, provided that mutation rates are small. Models 2 and 3 describe situations in which true redundancy can be maintained indefinitely. Model 3 can lead to complex redundancy networks in which each function is performed by several genes and each gene performs more than one function. Such networks are evolutionarily stable provided that random mutations are more likely to destroy all functions of a gene than to destroy just one function while leaving other functions unaffected. Model 4 introduces the concept of developmental errors, and shows that redundancy is evolutionarily stable provided that developmental error rates are larger than germline mutation rates. According to model 4, redundancy should occur for those genes (or functions) that are subject to a high developmental error rate. The four models are not mutually exclusive; together they explain how mutation and selection can lead to redundant genetic organization.

**Methods**

**Model 1.** Consider a haploid population with genes at two loci, $A$ and $B$. Non-functional alleles, $a$ and $b$, arise at mutation rates $u_a$ and $u_b$. There are four genotypes, $AB$, $Ab$, $aB$ and $ab$. Their frequencies are $x_1$, $x_2$, $x_3$ and $x_4$, and their fitnesses are $f_1$, $f_2$, $f_3$ and $f_4$, respectively. In each generation there is mating (with recombination), followed by mutation and selection. Mating is described by the difference equations $x'_1 = x_1 + D$, $x'_2 = x_2 - D$, $x'_3 = x_3 - D$ and $x'_4 = x_4 + D$, where $D = r(x_2 x_3 - x_1 x_4)$ and $r$, the recombination rate between the $A$ and $B$ loci, is a number between 0 and 0.5. Mutation is described by $x'_1 = x_1(1 - u_a)(1 - u_b)$, $x'_2 = x_1(1 - u_a)u_b + x_2(1 - u_a)$, $x'_3 = x_1 u_a(1 - u_b) + x_3(1 - u_b)$ and $x'_4 = x_1 u_a u_b + x_2 u_a + x_3 u_b + x_4$. Selection is described by $x'_i = f_i x_i / f$, where $f = \sum_i x_i f_i$ denotes the average fitness of the population. Suppose both genes perform function $F$ with equal efficacy. We have $f_1 = f_2 = f_3 = 1$ and $f_4 = 0$.
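A minimal Python sketch of this generation cycle, assuming the difference equations above, is given below; the parameter values are illustrative, chosen only so that the iteration settles quickly.

```python
import numpy as np

def generation(x, u_a, u_b, r, f):
    """One generation of the haploid two-locus model: mating with
    recombination, then mutation, then selection.
    x = frequencies of (AB, Ab, aB, ab); f = genotype fitnesses."""
    x1, x2, x3, x4 = x
    # Mating: recombination moves the system by D = r(x2*x3 - x1*x4).
    D = r * (x2 * x3 - x1 * x4)
    x1, x2, x3, x4 = x1 + D, x2 - D, x3 - D, x4 + D
    # Mutation: A -> a at rate u_a, B -> b at rate u_b, no back mutation.
    y = np.array([
        x1 * (1 - u_a) * (1 - u_b),
        x1 * (1 - u_a) * u_b + x2 * (1 - u_a),
        x1 * u_a * (1 - u_b) + x3 * (1 - u_b),
        x1 * u_a * u_b + x2 * u_a + x3 * u_b + x4,
    ])
    # Selection: weight by fitness and renormalise.
    y *= f
    return y / y.sum()

# Model 1: fitnesses (1, 1, 1, 0) and equal mutation rates.
u, r = 1e-3, 0.5
f = np.array([1.0, 1.0, 1.0, 0.0])
x = np.array([0.25, 0.25, 0.25, 0.25])
for _ in range(100_000):
    x = generation(x, u, u, r, f)

# The iterate should settle on the line of equilibria quoted below:
# x2*x3 = x1*u / (r*(1 - u)).
print(x[1] * x[2], x[0] * u / (r * (1 - u)))
```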
For exactly equal mutation rates, $u_a = u_b = u$, there is a line of equilibria given by $x_2 x_3 = x_1 u / [r(1 - u)]$. For unequal mutation rates, the gene with the higher mutation rate will become extinct.

**Model 2.** This has the same framework as model 1, but genes $A$ and $B$ perform function $F$ with different efficacies, $h_a$ and $h_b$. Let $h_a > h_b$. The genotype fitnesses are $f_1 = f_2 = h_a$, $f_3 = h_b$ and $f_4 = 0$. Redundancy can be evolutionarily stable if $B$ has a lower mutation rate than $A$, $u_b < u_a$. If $1 - (h_b/h_a) > u_a > u_b[1 + (1/r)(h_a - h_b)/h_b]$, the equilibrium is $\hat{x}_1 = (1 - \hat{x}_2)[h_a(1 - u_a) - h_b(1 - u_b)] / [(h_a - h_b)(1 - u_a)]$, $\hat{x}_2 = (1/r)[u_b/(1 - u_b)][h_a(1 - u_a) - h_b(1 - u_b)] / [h_b(u_a - u_b)]$, $\hat{x}_3 = 1 - \hat{x}_1 - \hat{x}_2$ and $\hat{x}_4 = 0$. For low mutation rates, the equilibrium frequency of the redundant $AB$ genotype is approximately $\hat{x}_1 \approx 1 - (1/r)[u_b/(u_a - u_b)](h_a - h_b)/h_b$. For example, if $h_a = 1$, $h_b = 0.99$, $u_a = 1.1 \times 10^{-6}$, $u_b = 10^{-6}$ and $r = 0.5$, then the equilibrium frequency of $AB$ is about 0.8. This model can be expanded to $n$ genes with different mutation rates and different efficacies. The fitness of a particular genotype is given by the efficacy of its most efficient gene. If less efficient genes have lower mutation rates, then stability of several redundant genes is possible. For a large number of genes, however, the conditions on efficacies and mutation rates become very restrictive.

**Model 3.** Consider two genes, $A$ and $B$, and two functions, $F_1$ and $F_2$. Gene $A$ performs function $F_1$ with efficacy $h_{a1}$, and gene $B$ performs function $F_1$ with a lower efficacy $h_{b1}$ and function $F_2$ with an efficacy of one. Mutations in $A$ lead to the inactive variant $a$; the mutation rate is $u_a$. Mutations in $B$ can either lead to variant $b_1$, which has lost the ability to perform function $F_1$ but still performs $F_2$, or to variant $b_2$, which is completely inactive; the mutation rates are $u_{b1}$ and $u_{b2}$, respectively. Variant $b_2$ can also arise from $b_1$, at a mutation rate $u_{b3}$. The redundant organization for performing function $F_1$ is evolutionarily stable if $u_{b1} < u_a$. The analysis is similar to model 2 if $u_{b2} \approx u_{b3}$; for low mutation rates, the equilibrium frequency of $AB$ is approximately $\hat{x}_1 \approx 1 - (1/r)[u_{b1}/(u_a - u_{b1})](h_{a1} - h_{b1})/h_{b1}$. For the same numerical values as model 2, and assuming that $u_{b1}$ is 10 times smaller than $u_a$, we find that the equilibrium frequency of $AB$ is 0.998. Pleiotropy facilitates redundancy.

**Model 4.** Consider two genes $A$ and $B$ with mutation rates $u_a$ and $u_b$ and developmental error rates $\delta_a$ and $\delta_b$. Mutation and selection are described by the difference equations $x'_1 = (1 - \delta_a \delta_b)(1 - u_a)(1 - u_b)x_1/f$, $x'_2 = (1 - \delta_a)(1 - u_a)(x_1 u_b + x_2)/f$, $x'_3 = (1 - \delta_b)(1 - u_b)(x_1 u_a + x_3)/f$ and $x'_4 = 0$, where $f$ is such that $x'_1 + x'_2 + x'_3 + x'_4 = 1$. In contrast to models 1–3, recombination is not essential here. The equilibrium frequency of $AB$ is $x_1 = 1/\{1 + u_a(1 - \delta_b)/[\delta_b(1 - \delta_a) - u_a(1 - \delta_a \delta_b)] + u_b(1 - \delta_a)/[\delta_a(1 - \delta_b) - u_b(1 - \delta_a \delta_b)]\}$. For small values of $u$ and $\delta$, we obtain $x_1 \approx 1/[1 + u_a/(\delta_b - u_a) + u_b/(\delta_a - u_b)]$. Thus necessary conditions for a large $x_1$ are $u_a \ll \delta_b$ and $u_b \ll \delta_a$.
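The closed forms above can be checked numerically. This Python sketch, assuming the expressions as reconstructed here, evaluates the model 2 equilibrium for the quoted parameter values (expecting $\hat{x}_1 \approx 0.8$) and the model 3 approximation (expecting $\approx 0.998$).

```python
def model2_equilibrium(h_a, h_b, u_a, u_b, r):
    """Equilibrium (AB, Ab, aB, ab) from the model 2 closed form above;
    valid when 1 - h_b/h_a > u_a > u_b * (1 + (1/r)*(h_a - h_b)/h_b)."""
    s = h_a * (1 - u_a) - h_b * (1 - u_b)          # common factor
    x2 = (1 / r) * (u_b / (1 - u_b)) * s / (h_b * (u_a - u_b))
    x1 = (1 - x2) * s / ((h_a - h_b) * (1 - u_a))
    return x1, x2, 1 - x1 - x2, 0.0

# Model 2 example from the text: expect x1 close to 0.8.
x1, x2, x3, x4 = model2_equilibrium(1.0, 0.99, 1.1e-6, 1.0e-6, 0.5)
print(round(x1, 2))  # 0.8

# Model 3 approximation with u_b1 ten times smaller than u_a: expect ~0.998.
h_a1, h_b1, u_a, u_b1, r = 1.0, 0.99, 1.1e-6, 1.1e-7, 0.5
print(round(1 - (1 / r) * (u_b1 / (u_a - u_b1)) * (h_a1 - h_b1) / h_b1, 3))
```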
The model can be extended to $n$ genes. Suppose all genes have mutation rate $u$ and developmental error rate $\delta$. Let $x_i$ denote the frequency of genotypes with $i$ functional genes ($i = 0, \ldots, n$). The population dynamics are $x'_{n-k} = (f_{n-k}/f)\sum_{i=0}^{k} \binom{n-k+i}{i} u^i x_{n-k+i}$, where $f_{n-k} = (1 - \delta^{n-k})(1 - u)^{n-k}$ and $f$ is such that all frequencies add to one. The equilibrium can be solved recursively. An equilibrium with the genotype containing all $n$ redundant genes is possible if $f_n > f_{n-1}$. This leads to $n < 1 + (\log u)/(\log \delta)$.

**Diploid models.** Our results for haploid models also apply to diploid models. In diploid models, we distinguish four gametes, $AB$, $Ab$, $aB$ and $ab$, which form nine zygotes: $AB/AB$, $AB/Ab$, $Ab/Ab$, $AB/aB$, $aB/aB$, $AB/ab$, $Ab/ab$, $aB/ab$ and $ab/ab$. For each generation we assume that mutation acts on gamete frequencies, then zygotes are formed, selection acts on zygotes, and finally new gametes are formed, including the possibility of recombination. In agreement with haploid model 1, we find that the case where all zygotes have high fitness except $ab/ab$, which has low fitness, does not lead to stable redundancy. Cases similar to models 2 and 3 give stable redundancy. Diploid models with developmental errors also give stable redundancy. There are some additional cases that can lead to redundancy in diploid models. One such case was discovered by Brookfield: it assumes that the double heterozygote, $AB/ab$, is as fit as the wild type, $AB/AB$, but $Ab/ab$, $aB/ab$ and $ab/ab$ have low fitness\textsuperscript{1}. In addition, stable redundancy is also possible under partial dominance, where all homozygotes have high fitness, the double heterozygote has a lower fitness, the single heterozygotes have still lower fitness, and $ab/ab$ has the lowest fitness.

**Classification of redundancy.** It is helpful to distinguish three types of genetic redundancy. (1) 'True redundancy'\textsuperscript{1} denotes the situation where an individual with a redundant genotype, $AB$, is not fitter than one in which one of the redundant genes has been knocked out, $Ab$. In model 2, $B$ is truly redundant, but $A$ is not. In cases with pleiotropy, 'true redundancy' implies that the fully redundant genotype is not fitter than a genotype where the pleiotropic function of one gene has been eliminated. (2) 'Generic redundancy' is the case when an $AB$ individual is only occasionally fitter than an $Ab$ individual. This can be the consequence of rare developmental errors. Another possibility is that $AB$ is only fitter than $Ab$ in some environments. (3) 'Almost redundancy' means that the redundant genotype $AB$ is always slightly fitter than any genotype where one of the redundant genes has been knocked out. Of course, the fitness difference should be small if the situation is to be regarded as one of redundancy. Several such examples have been discussed previously\textsuperscript{2}.

Received 14 January; accepted 25 April 1997.

1. Brookfield, J. F. Y. Genetic redundancy. *Adv. Genet.* **36**, 137–155 (1997).
2. Brookfield, J. F. Y. Can genes be truly redundant? *Curr. Biol.* **2**, 553–554 (1992).
3. Tautz, D. Redundancies, development and the flow of information. *BioEssays* **14**, 263–266 (1992).
4. Goldstein, D. B. & Holsinger, K. E. Maintenance of polygenic variation in spatially structured populations. *Evolution* **46**, 412–429 (1992).
5. Thomas, J. H. Thinking about genetic redundancy.
*Trends Genet.* **9**, 395–399 (1993).
6. Dover, G. A. Evolution of genetic redundancy for advanced players. *Curr. Opin. Genet. Dev.* **3**, 902–910 (1993).
7. Pickett, F. B. & Meeks-Wagner, D. R. Seeing double: appreciating genetic redundancy. *Plant Cell* **7**, 1347–1356 (1995).
8. Bird, A. P. Gene number, noise reduction and biological complexity. *Trends Genet.* **11**, 94–100 (1995).
9. O'Brien, S. J. On estimating functional gene number in eukaryotes. *Nature New Biol.* **242**, 52–54 (1973).
10. Kastner, P. *et al.* Nonsteroid nuclear receptors: what are genetic studies telling us about their role in real life? *Cell* **83**, 859–869 (1995).
11. Rudnicki, M. A. *et al.* Inactivation of MyoD in mice leads to up-regulation of the myogenic HLH gene Myf-5 and results in apparently normal muscle development. *Cell* **71**, 383–390 (1992).

Distinct cortical areas associated with native and second languages

Karl H. S. Kim*†, Norman R. Relkin†, Kyoung-Min Lee*† & Joy Hirsch†

* Department of Neurology, Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, New York 10021, USA
† Department of Neurology and Neuroscience, Cornell University Medical College, 1300 York Avenue, New York, New York 10021, USA

The ability to acquire and use several languages selectively is a unique and essential human capacity. Here we investigate the fundamental question of how multiple languages are represented in a human brain. We applied functional magnetic resonance imaging (fMRI) to determine the spatial relationship between native and second languages in the human cortex, and show that within the frontal-lobe language-sensitive regions (Broca's area)¹⁻³, second languages acquired in adulthood ('late' bilingual subjects) are spatially separated from native languages. However, when acquired during the early language acquisition stage of development ('early' bilingual subjects), native and second languages tend to be represented in common frontal cortical areas. In both late and early bilingual subjects, the temporal-lobe language-sensitive regions (Wernicke's area)¹⁻³ show effectively little or no separation of activity based on the age of language acquisition. This discovery of language-specific regions in Broca's area advances our understanding of the cortical representation that underlies multiple language functions.

Indirect evidence for topographic specialization within the language-dominant hemispheres of multilingual subjects has been provided by clinical reports of selective impairments in one or more of several languages as a result of surgery involving the left perisylvian area⁴. Multilingual patients with complex partial seizure disorders of temporal-lobe origin have been reported to shift from a primary to a second language together with ictal progression⁵. Different languages have also been selectively disrupted in polyglots by electrical stimulation of discrete regions of the neocortex of the dominant hemisphere⁶,⁷. Changes in the topography of background electroencephalogram (EEG) coherence obtained during translation tasks also suggest spatial separation of cortical regions involved in multiple languages⁸. Although these reports are consistent with the existence of spatially separate representations for each language, such functions have not been localized.
Silent, internally expressive linguistic tasks were performed in two languages by subjects who either acquired conversational fluency in their second languages as young adults ('late' bilinguals) or acquired two languages simultaneously early in their development ('early' bilinguals) (Table 1). As Broca's and Wernicke's areas are known to perform central roles in human language functions¹⁻³,⁹⁻¹², we have focused our observations on these cortical areas.

The main findings for a typical 'late' bilingual subject (subject A) are shown in Fig. 1. The anterior language area is highlighted by the green box and shown expanded in the inset. Red indicates significant activity during the native-language task (English), whereas yellow indicates activity associated with the second-language task (French). Two distinct but adjacent centres of activation (+), separated by ~7.9 mm, were evident within the inferior frontal gyrus, suggesting that two specific regions served each of the two languages. In the posterior language area of the same subject (Fig. 2), the same tasks yielded centroids of activity with a centre-to-centre spacing of 1.1 mm, less than the width of a voxel, suggesting that similar or identical cortical regions served both languages in this posterior area.

For all six late bilingual subjects, distinct areas of activation were observed for the native and second languages in Broca's area (Table 2a and Fig. 3). The separation between centroids of activity ranged from ~4.5 mm to 9.0 mm within one slice, and the number of voxels for each language was similar for each subject. On the other hand, activity in Wernicke's area (Table 2b) showed centre-to-centre distances between the centre-of-mass centroids ranging from 1.1 to 2.8 mm. The mean centroid distance between the anterior …

**Figure 1** A representative axial slice from a 'late' bilingual subject (A) shows all voxels that pass the multistage statistical criteria at $P \leq 0.0005$ as either red (native language) or yellow (second acquired language). An expanded view of the pattern of activity in the region of interest (inferior frontal gyrus, Brodmann's area 44 (refs 2, 3, 18), corresponding to Broca's area¹⁻³) indicates separate centroids (+) of activity for the two languages. Centre-of-mass calculations indicate that the centroids are separated on this plane by 7.9 mm. The green line on the upper right mid-sagittal view indicates the plane location. R indicates the right side of the brain.
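The centre-of-mass comparison behind these separation figures is simple to state. The following Python sketch, using made-up voxel coordinates rather than the study's data, computes the in-plane distance between the activity centroids for two language tasks.

```python
import numpy as np

def centroid_separation(voxels_lang1, voxels_lang2):
    """In-plane distance between centre-of-mass centroids of the voxels
    activated by each language task. Inputs are (N, 2) arrays of voxel
    coordinates in mm (hypothetical; real inputs would come from
    thresholded fMRI activation maps)."""
    c1 = np.asarray(voxels_lang1, dtype=float).mean(axis=0)
    c2 = np.asarray(voxels_lang2, dtype=float).mean(axis=0)
    return np.linalg.norm(c1 - c2)

# Toy example: two small clusters roughly 8 mm apart, of the order of
# the Broca's-area separations reported above.
native = [(10.0, 20.0), (11.0, 21.0), (10.5, 19.0)]
second = [(18.0, 22.0), (19.0, 23.0), (18.5, 21.0)]
print(round(centroid_separation(native, second), 1))  # ~8.2 mm
```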
Think Tanks in America

THOMAS MEDVETZ

THE UNIVERSITY OF CHICAGO PRESS, CHICAGO AND LONDON

In 1982, Charles Murray was a 39-year-old independent writer with a background in government program evaluation. "Charles, at the time, was a not-very-well-known social scientist, but his analytical and writing skills impressed us greatly," says Lawrence J. Mone, president of the Manhattan Institute for Policy Research, the conservative think tank that hired Murray.\(^1\) Eight years earlier, Murray had completed a PhD in political science at the Massachusetts Institute of Technology with a dissertation titled "Investment and Tithing in Thai Villages: A Behavioral Study of Rural Modernization." While the topic of the study placed Murray outside the mainstream of his discipline, it nonetheless established a theme that he would return to again and again in his writing: the idea that government bureaucracies do more harm than good, even for their supposed beneficiaries. After finishing graduate school, Murray left the academic world and worked for seven years at the American Institutes for Research (AIR), a private research firm in Washington, DC. "You must understand, I was never attracted to the university track," Murray says. "I'm temperamentally—I find that the whole faculty world is uncongenial."\(^2\) The job at AIR did not suit him much better. "I would write these research reports," Murray remembers, "and they were lovingly crafted. And I worked . . . fifty, sixty hour weeks, routinely. But nobody ever read the damn things. You send them in to the sponsor and they're put on the shelf and nothing ever happens." Not only was the audience for Murray's work small, but the job afforded him little in the way of intellectual freedom: "What you worked on were the things that the government wanted to write contracts for," he says.

Equally disenchanted with, and marginal to, the worlds of government and academic research, Murray soon discovered an occupational niche located structurally in between the two: the growing world of public policy "think tanks." Murray quit his job at AIR and applied for positions at the Manhattan Institute, the Heritage Foundation, and the American Enterprise Institute—three of the top conservative think tanks. Each organization would eventually play a critical role in his success. Heritage vice president Burton Yale Pines received Murray's job application and became the first sponsor of his developing book project. As Murray recalls, "Burt Pines called me in for an interview. I was talking about the way that social programs that I'd evaluated just hadn't worked . . . and he gave me, I think, $2,500 to write a monograph that I spent three months writing. It was entitled *Safety Nets and the Truly Needy*, and that was the forerunner of . . . the book." Murray then joined the staff of the Manhattan Institute, where he converted the monograph into *Losing Ground*, a sweeping historical account of American social policy that sought to show the pernicious effects of government welfare programs. *Losing Ground*'s publication in 1984 was a momentous occasion for the Manhattan Institute, an organization still trying to establish a distinctive identity. (Until 1981, it had been called the International Center for Economic Policy Studies.)
The organization launched an aggressive promotional campaign for the book.\(^3\) In an internal memorandum, Manhattan president William Hammett wrote that "Any discretionary funds at our disposal for the next few months will go toward financing Murray's outreach activities."\(^4\) As Murray remembers, Hammett "had about 500 copies sent to the office, and I spent a day inscribing those copies . . . and they were sent out to lots of senior senators and Supreme Court justices and people of that sort." The organization also sent Murray on a national speaking tour, booked him on numerous radio and television programs, and, with funding from the conservative Scaife and Olin Foundations, convened a two-day symposium that brought together twenty leading welfare reform scholars to discuss the book.\(^5\)

*Losing Ground* quickly became an object of media attention. In Murray's view, the first important notice was a September 1984 *Newsweek* column by Robert Samuelson. "Bob was one of the ones that was sent a copy. But unlike Supreme Court justices and senators, he actually read the damn thing," Murray says. "And Bob Samuelson, when he writes about something, that starts things going." Samuelson called *Losing Ground* a "well-documented polemic" and concluded, "We cannot reduce poverty simply by being generous. Ultimately, only economic growth and individual effort will suffice." A week later, *Washington Post* columnist William Raspberry called Murray's book "thoughtful, well-reasoned and, in many ways, deeply disturbing." An echo effect began in the press. For example, in 1986, journalist Nicholas Lemann discussed Murray's arguments in a two-part *Atlantic Monthly* essay called "The Origins of the Underclass." "Once that got started," Murray says, "you cannot overestimate the degree to which journalists . . . just pick up on whatever else is going on." In the ensuing years, *Losing Ground* would be profiled, reviewed, and discussed in hundreds of newspaper and magazine articles.

Meanwhile, a very different conversation was developing about *Losing Ground* among academic social scientists, who found fault with the book for containing measurement errors and for using data selectively to support its claims. Economists David Ellwood and Mary Jo Bane, for example, tested and found no support for Murray's claim that welfare benefits caused an increase in single motherhood. Other critics charged Murray with neglecting important macroeconomic changes in his analysis of the poverty rate, overlooking evidence demonstrating the poverty-reducing effects of welfare entitlements, and failing to engage sufficiently with previous research. Some also noted that despite *Losing Ground*'s suggestions to the contrary, there had been no considerable rise in antipoverty spending over the previous decade, and that such increases, where they did exist, had gone primarily to the elderly. Summarizing the academic reception, sociologist S. M. Miller wrote in the November 1985 issue of *Contemporary Sociology* that "Murray's major theses" had been "substantially undermined, as social scientists' serious reviews have supplanted the puff pieces that first greeted the book."

Even apart from the negative scholarly reviews, there were signs that *Losing Ground*'s arguments might have little impact outside of conservative intellectual circles.
To some observers, the book's ambitious prescription for ending welfare entitlements to working-age, able-bodied citizens—including Aid to Families with Dependent Children (AFDC), food stamps, and medical assistance programs—went entirely beyond the political pale. To make matters worse, the Reagan administration had shown no interest in comprehensive welfare reform as a policy priority. As Murray himself puts it, "The people in the Reagan administration were actually quite scared of *Losing Ground*. Because, you know, the Reagan administration's line was that the problems were welfare queens who were cheating and you had to stop the cheating. They didn't want to have a radical rethinking of the whole welfare structure. There simply was, in the Reagan administration, zero policy to back it up with." In 1987, sociologist William Julius Wilson summarized the political orthodoxy of the day by suggesting that the "laissez-faire social philosophy represented by Charles Murray is . . . too extreme to be seriously considered by most policymakers."\(^{14}\)

The political winds shifted dramatically over the next several years, however, as conservatives carried on Murray's antiwelfare drumbeat. Murray himself left the Manhattan Institute in 1990 amid controversy surrounding his then-forthcoming book, *The Bell Curve* (cowritten with Harvard psychologist Richard Herrnstein), and became a fellow at the American Enterprise Institute.\(^{15}\) Three years later, Murray re-entered the welfare debate with a well-timed polemic in the *Wall Street Journal*. The October 1993 column argued that illegitimacy was the engine of social problems such as crime, drugs, poverty, and illiteracy, and that increasing rates of single motherhood among poor and less educated white women would lead to the emergence of a white "underclass."\(^{16}\) The column touched a nerve. The next month, ABC's David Brinkley devoted a portion of his Sunday morning telecast to the topic, with Murray present as a featured guest.\(^{17}\) Other media outlets continued the debate. As Murray remembers,

[NBC *Nightly News* anchor] Tom Brokaw was interviewing Bill Clinton the next week and somebody called me and said, "You've got to turn on Tom Brokaw . . . ," at which point Clinton said, "Well, Charles and I have had lots of disagreements over the years." You sort of imagine us drinking beer in the college dorm together or something. We had never met. "We've had a lot of disagreements over the years, but I think he's done the country a real service." I was watching the TV and I said, "Holy shit."

Murray, the pundit once considered too conservative by the Reagan administration, was now being cited approvingly by President Clinton as an expert on welfare policy, if not a personal friend. Clinton went on to declare that "[Murray's] analysis is essentially right."\(^{18}\) The defeat of Clinton's health care plan and the Republican takeover of Congress in 1994 further amplified the salience of welfare as a political issue. Needing a policy achievement with which to appeal to centrist and conservative swing voters in the 1996 elections, Clinton decided to make welfare reform the new centerpiece of his first-term domestic agenda. Over the next two years, culminating in the passage of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, Murray's arguments remained a compulsory point of reference in the debate.
The legislation captured both the spirit and many of the specific features of his proposals, including work requirements, the elimination of welfare as an entitlement program, and the focus on out-of-wedlock births. Following the law's passage, it became almost de rigueur for fans and critics alike to refer to Murray as having supplied the "intellectual groundwork" for welfare reform.\(^{19}\) As Murray summarizes, "It took ten years for *Losing Ground* to go from being controversial to conventional wisdom. And by the way, there is very little in *Losing Ground* right now that's not conventional wisdom."

**The Rise of Think Tanks in America**

Charles Murray's transformation from academic journeyman to guru of welfare reform mirrors another notable success story: the rapid rise of public policy "think tanks," both in the United States and around the world. As we have already seen, three such organizations—the Heritage Foundation, the American Enterprise Institute, and the Manhattan Institute—helped catapult Murray from marginality into the mainstream despite persistent doubts from social scientists about the tenability of his claims. Apart from welfare reform, think tanks have been involved in formulating some of the marquee policy ideas of our time. An early blueprint for the Iraq War, for example, was sketched in the late 1990s by a group of neoconservative foreign policy specialists at the Project for the New American Century. The zero-tolerance policing method known as the "broken windows" approach originated in the Manhattan Institute in the early 1980s before being implemented in New York City and exported to other countries. Likewise, the antievolution theory of intelligent design was born in the Seattle-based Discovery Institute during the 1990s. In other areas as well, such as environmental, tax, and regulatory policies, think tanks have been visible participants in policy debate.\(^{20}\)

At a more general level, think tanks have become fixtures of the national policy-making scene by helping to satisfy what the *Washington Post* once called the "desperate daily need for intellectual meat to feed the hearings, the speeches, [and] the unrelenting policy grinder." On Capitol Hill, for example, they supply expert testimony at legislative hearings. In the 24-hour world of cable news, think tank–affiliated "quotemeisters" speak as pundits about the burning issues of the day. Think tanks have also become indispensable to the practice of "politics as a vocation." Consider, for example, some of the notable roles they have played in the careers of recent American presidents: Ronald Reagan famously distributed copies of the Heritage Foundation's policy guide *Mandate for Leadership* to his inner circle upon taking office in 1981. A decade later, a young Arkansas politician named Bill Clinton emerged from relative obscurity with substantial help from a think tank called the Progressive Policy Institute, an offshoot of his party's "New Democrat" movement. And if plans for the Iraq War originated in a think tank, then perhaps it was fitting that Clinton's successor, George W. Bush, considered managing his post–White House reputation in these terms in 2006: "I would like to leave behind a legacy or a think tank, a place for people to talk about freedom and liberty and the de Tocqueville model." Bush followed through on these plans by forming the George W.
Bush Institute in 2011, but he was not the first ex-president to align himself with a think tank: Gerald Ford joined the American Enterprise Institute as a distinguished fellow in 1977, while Jimmy Carter created the Atlanta-based Carter Center in 1982. Finally, even the “candidate for change,” Barack Obama, adhered to what has now become the conventional practice among incoming presidents. After the 2008 elections, Obama selected his transition chief from one think tank, the Center for American Progress, and several of his key staff members from another, the Center for a New American Security. This book brings the tools of sociological investigation to bear on the rise of think tanks in the United States. It poses a series of basic questions about their origins, history, and modes of influence. What caused the “veritable explosion” of think tanks in this country over the last four decades? What forces shape their intellectual production? Do think tanks have an impact that matches their growing visibility, or has their influence been overstated? If they are influential, then how so? If not, then why has there been such a flurry of activity in this sphere? To put the central question in stark terms: Are think tanks the new machinery for creating policy and bounding public debate in America, or do they operate merely as “window dressing” for a political process that is actually centered elsewhere? To answer these questions, this book reports on a wide-ranging empirical study that brings together several kinds of data, including historical/archival records, in-depth interviews conducted with representatives from dozens of think tanks (from rank-and-file employees to think tank founders and presidents), firsthand observations carried out in several think tanks, and an original database of the educational and career backgrounds of more than 1,000 think tank–affiliated “policy experts.” (For a detailed overview of the data, see the appendix.) My central argument is that think tanks, the products of a long-term process of institutional growth and realignment, have become the primary instruments for linking political and intellectual practice in American life. Their proliferation over the last forty years has resulted in the formation of a new institutional subspace located at the crossroads of the academic, political, economic, and media spheres. Like a territorial buffer zone, this space of think tanks, as I will call it, has the paradoxical quality of being defined most readily in terms of what it is not, or in terms of its negative relationships with the more established institutions that it helps to separate and delimit. Nonetheless, through their growing interconnectedness, think tanks have collectively developed their own social forms, including their own conventions, norms, and hierarchies, built on a common need for political recognition, funding, and media attention. These needs powerfully limit the think tank’s capacity to challenge the unspoken premises of policy debate, to ask original questions, and to offer policy prescriptions that run counter to the interests of financial donors, politicians, or media institutions. To grasp the importance of think tanks in American life, we must recognize another way in which they are like a buffer zone. As I will argue, the space of think tanks produces its main effects, not with its interior landscape, but with its structure or boundary. 
By occupying a crucial point of juncture in between the worlds of political, intellectual, economic, and media production, think tanks increasingly regulate the circulation of knowledge and personnel among these spheres. As a result, any intellectual figure who wishes to take part in American political debate must increasingly orient his or her production to the rules of this hybrid subspace. Thus, my argument in this book is that the growth of think tanks over the last forty years has ultimately undermined the value of independently produced knowledge in the United States by institutionalizing a mode of intellectual practice that relegates its producers to the margins of public and political life. Before I can elaborate this argument, however, I will need to discuss the three main perspectives from which scholars have previously examined think tanks. As I will explain in the next section, the first of these approaches grasps think tanks as machinery of ruling class power oriented to the protection of capitalism and the defense of elite interests; the second approach classifies think tanks more open-endedly as instruments in a political setting marked by pluralistic struggle; and the third approach locates think tanks within their wider institutional environments while attempting to uncover their effects at various stages of the political process. I will argue that while each of these perspectives has served as the basis for illuminating studies of think tanks, none of them allows us to grasp what is most distinctive about the rise of think tanks in the United States or elsewhere in the world. Moreover, the gaps and tensions among these theories actually deepen some of the mysteries surrounding the topic. My goal in the next section, then, will be to survey briefly the terrain of existing knowledge about think tanks as a way of clarifying the aims of this study. **Three Views of the Think Tank** The first perspective—derived from the elite theory tradition inaugurated by C. Wright Mills—depicts think tanks as the intellectual machinery of a closed network of corporate, financial, and political elites.\(^{26}\) Mills’ followers have argued that think tanks should be analyzed, not as neutral centers of research and analysis, but instead as instruments deployed strategically in the service of a ruling class political agenda. A characteristic expression of this view comes from G. 
William Domhoff, who argues that, “In concert with the large banks and corporations in the corporate community, the foundations, think tanks, and policy-discussion groups in the policy-planning network provide the organizational basis for the exercise of power on behalf of the owners of all large income-producing properties.”\(^{27}\) On this view, while think tanks may issue reports or policy recommendations that are distinctive for their technicality and seeming rigor, their actual purpose is to assist in the business of “top down policymaking.”\(^{28}\) The elite theory approach is often set against the pluralist perspective, which builds on a longstanding tradition that grasps public policy making as the product of a dynamic interplay among organized interest groups, each with its own resources, strategies, and goals.\(^{29}\) In the pluralist view, think tanks should be analyzed, not as weapons of ruling class power, but as one kind of organization among many in a wide array of societal groups that compete to shape public policy—including labor unions, lobbying firms, social movement organizations, and regional and identity-based associations. The pluralist and elite theories of think tanks developed together during the 1960s and 1970s in the context of a wider scholarly debate about the nature and distribution of political power in the United States. Having set the terms for much of the early discussion about think tanks, they remain major reference points in the academic literature. Nevertheless, recent scholarship on think tanks has been deeply critical of both perspectives. Most scholars, for example, argue that the language of pure cooptation built into the elite theory perspective is far too mechanical, too functionalist, and too seamless to characterize think tanks adequately. While elite theory may offer a compelling macrostructural view of the networks connecting think tanks to economic, military, and political elites, it is less illuminating when it comes to how these networks actually translate into political influence.\(^{30}\) For example, the elite theorists exhaustively trace specific personnel connections among think tanks—how many trustees of the Council on Foreign Relations sat on various corporate boards, how many went on to serve in high government offices, and so on. And yet across many studies, these scholars have surprisingly little to say about all but the broadest contours of a think tank’s activity. Nor, of course, can the elite theory perspective account for the existence of think tanks that orient themselves \textit{against} ruling class interests, or those that lack ties to the rich and powerful. From the point of view of this theory, such organizations are merely “static” in an otherwise elite phenomenon. The pluralists, for their part, aimed to correct these shortcomings by refusing to assign any essential character or role to think tanks. Yet the extreme openness of their theory also came at a cost, since they could make fewer general claims about think tanks, which then tended to dissolve into the wider sea of interest group struggles. However, if the pluralist approach was in this sense too “open,” then in another sense it was too closed. 
As scholars such as Steven Lukes have shown of pluralist theory in general, the perspective focuses almost exclusively on decision-making processes carried out in the context of open, visible political struggle.\(^{31}\) It pays much less attention to the hidden dimensions of power, such as agenda-setting processes and what the elite theorists called “non-decision making.” When applied to the study of think tanks, this omission becomes a serious error. After all, if the guiding assumption of the pluralist approach is that the relevant target of a think tank’s activity is always a specific policy outcome, then think tanks can be described as influential only to the degree that they directly shape such outcomes. The problem, as other scholars have noted, is that think tanks may have other important effects not captured in a “billiard ball” model of cause and effect. Put differently, even if it is rare to find the “smoking gun” of direct policy influence in the world of think tanks, this is no reason to conclude that they are not influential in other ways. As the elite theorists already pointed out, it may be that think tanks are influential in their ability to create cohesion among political elites or otherwise shape the relations among classes. These are the standard critiques of the elite and pluralist perspectives, and while I agree with each of them, I would argue that scholars have overlooked what is actually the most glaring problem with the two approaches. The problem becomes apparent, however, only from a vantage point informed by the sociology of intellectuals. Put simply, if we take a step back and consider the wider relationship between the elite theorists and the pluralists themselves, then the debate begins to seem less like a straightforward argument about think tanks per se than a euphemized battle between two sets of intellectuals over their own proper social role. After all, the main thrust of the elite theory perspective was to say that think tanks, and by extension, those who aided and identified with them, were not “truly” intellectuals, but rather servants of power. It was no coincidence, then, that their opponents in the debate (not just the pluralists, but all defenders of American-style liberal democracy) tended to occupy structural positions more proximate to, and sometimes inside of, think tanks. Nelson Polsby, for example, a major pluralist scholar, was a fellow at the Brookings Institution and the Roosevelt Center for American Policy Studies and a member of the Council on Foreign Relations. Likewise, Seymour Martin Lipset, who was generally critical of both the Marxist and elite theory traditions, spent the latter part of his career at the Hoover Institution. It should come as no surprise, then, that the pluralists usually adopted a more sanguine view of think tanks, even as they charged the elite theorists with making unverifiable claims about the hidden mechanisms of power. Of course, these observations alone do not invalidate either theory. However, they do help to underscore the main problem with both approaches. Put simply, despite their differences, both theories built their ultimate conclusions into their definitions of a think tank. The pluralists, for example, often used the language of cognitive autonomy to define think tanks, and to differentiate them from non–think tanks. 
Polsby, for example, distinguished “true” think tanks from mere “public policy research institutes” in the following terms: Whereas “a true think tank obliges its inhabitants to follow their own intellectual agendas,” those at public policy research institutes “are generally not free to do what they please with their time or to follow their intellectual priorities without constraint.”\textsuperscript{32} As a definitional tenet, this distinction instantly disables any attempt a scholar might make to determine whether or not “actually existing” think tanks (by which I now mean organizations so named in public debate) truly enjoy cognitive autonomy. Put differently, Polsby’s statement is tautological: either a think tank maintains a certain level of cognitive independence or else it is not “really” a think tank. The elite theorists avoided this particular tautology, yet so focused were they on the task of revealing that policy making in the United States was not truly a pluralistic struggle that when they examined think tanks, all they could see was a menagerie of intellectual mercenaries and lobbyists-in-disguise. Their tendency, then, was to revert to the opposite view: namely, that any think tank disconnected from the elite machinery of power was therefore somehow a “lesser” think tank and should be relegated to the margins of the discussion. The overarching point is that both the pluralists and the elite theorists tended to lock themselves into certain categorical judgments about the nature of think tanks, even prior to their empirical investigations as such. More broadly, I would argue, both perspectives became mired in what Gil Eyal and Larissa Buchholz call the “problematic of allegiance” in their approach to intellectuals.\textsuperscript{33} By this phrase, Eyal and Buchholz mean a mode of analysis centered on the question of an intellectual’s ultimate loyalties or commitments. In the classical sociology of intellectuals, for example, the prototype of the intellectual was the “engaged man of letters” marked by his allegiance to the ideals of truth and justice (as exemplified by Émile Zola in the Dreyfus Affair).\textsuperscript{34} The main problem with thinking about intellectuals in this way, as Eyal and Buchholz show, is that it tends to draw scholars into the very struggles over intellectual authority that their work ostensibly aims to describe from an impartial point of view. Consequently, even seemingly neutral academic debates on questions of intellectual loyalty quickly become forms of \textit{boundary work}, or strategic attempts by intellectuals to establish where the “true” dividing line is between intellectuals and nonintellectuals.\textsuperscript{35} An argument about the so-called “demise of the intellectual,” for example (a common trope in the classical sociology of intellectuals), can also be read as an attempt to undermine or discredit efforts made by other intellectuals to lay claim to the title itself. To remedy the problem, Eyal and Buchholz recommend shifting the sociology of intellectuals toward the study of “how forms of expertise can acquire value as public interventions.”\textsuperscript{36} The purpose of this seeming digression into the sociology of intellectuals is to suggest that the “problematic of allegiance” was projected into the early scholarly debate on think tanks. Whereas the elite theorists were concerned mainly with showing that think tanks were not truly organs of intellectual production, the pluralists were inclined to defend them.
Doubtless both sides would disagree with my characterization and insist that their theories managed to transcend their social moorings. Yet their best defense would be to point out that their ultimate concerns lay, not in the development of a theory of think tanks per se, but in a more general attempt to theorize American politics, for which think tanks were only empirical anchors. And yet this defense would unwittingly underscore my central point, albeit in a different sense, since it would show that neither theory was especially well suited to capturing what was distinctive about think tanks. As Abelson puts it, the pluralists typically portrayed think tanks as “one voice among many” in the political sphere, while the elite theorists sought to show that the same organizations were nodes in an elite policy-planning network.\textsuperscript{37} On the other hand, if our aim is to understand think tanks without subsuming them into a pre-devised theory of politics, then neither approach has much to offer. A two-pronged methodological lesson follows from this discussion. The first prong is that we should be careful not to smuggle into the analysis any essentialist conclusions about a think tank’s ultimate political or intellectual proclivities. Instead, we should adopt a more flexible theoretical approach that allows us to investigate the properties and purposes of think tanks as empirical questions. The second prong, which might initially seem to be at odds with the first, is that we cannot excuse ourselves from the task of clarifying what we mean by the term \textit{think tank}. Analytically prior to the question of what think tanks do, after all, is the question of what they \textit{are}—and neither of the first two approaches offers a compelling answer. Here, then, is the first challenge of this book: How can we define the study’s subject matter clearly without also prejudging it? With this question in mind, let me turn now to the third, and chronologically the most recent, of the three perspectives that scholars have used to examine think tanks. I am referring to the family of approaches that fall under the heading of institutionalism, which focus on the structural environments in which think tanks are embedded, the rules and norms that shape their behavior, and the organizational arrangements and processes to which they must respond. Does institutional theory offer a set of useful tools for analyzing think tanks? More specifically, does it overcome the limitations of the pluralist and elite theory perspectives? With respect to the first problem mentioned above—that of prejudging think tanks—I believe the benefits of an institutionalist framework are obvious. The approach does not lock us into a tautological argument about what a think tank does. Nor does it force us to draw any advance conclusions about a think tank’s political or intellectual propensities. Instead, the working premise of an institutionalist approach is that think tanks comprise a heterogeneous array of organizations with a wide range of possible effects. 
As Abelson puts it, scholars operating in this tradition attempt to describe how think tanks “shape the political agenda, contribute to policy formation, and assist in policy implementation.” I would also point out that, when it comes to describing the think tank–affiliated actors commonly known as “policy experts,” the institutionalist framework seems to offer an escape from the problematic of allegiance that hampered the classical sociology of intellectuals.* On this point, the main contribution comes from the subset of institutionalist studies focused on epistemic communities, or networks of politically engaged experts and professionals who share certain basic cognitive frames and assumptions. By depicting think tank–affiliated policy experts as members of an epistemic community, institutionalist scholars free themselves from having to weigh in on the futile debate over whether or not these actors are “truly” intellectuals. Instead, they can shift their focus to the structure, reach, and function of the networks in which policy experts are embedded. Given these advantages, it might seem as if an institutionalist approach represents the perfect antidote to the shortcomings of the pluralist and elite theories. Yet I would disagree. In fact, I would argue that the solutions it offers to the problems sketched above are partial at best. Consider first the question of a think tank’s potential influence. The chief merit of the institutionalist framework, as I noted, is that it widens the analytic net to capture the effects of think tanks at every stage of the policy process. Yet even this expanded focus, I would argue, remains too narrow, as a simple rhetorical question illustrates: Given the tremendous uncertainty surrounding think tanks, why should we assume that their effects are focused entirely, or even primarily, within the sphere of official politics? One of the central arguments of this book, in fact, will be that the impact of think tanks extends well beyond the political sphere into other social settings. Situated at the crossroads of the academic, political, business, and media spheres, think tanks have generated effects in each setting. For example, as suppliers of media sound bites, facts and figures, and opinion pieces, they have been major participants in what Ronald Jacobs and Eleanor Townsley call “the rise of organized punditry.” Think tanks have also exercised a degree of influence in academic circles by serving as models for university-based policy institutes and employers of public policy school graduates—the growth of which over the last half-century coincides historically with the proliferation of think tanks. Moreover, think tanks have generated effects in the world of business by supplying vehicles through which corporations and wealthy individuals can intervene in political affairs, often without the unwanted visibility that accompanies more direct forms of political intervention.

* For stylistic purposes, I will omit the quotation marks around the phrase “policy expert” from this point forward. However, as I will elaborate below, I use the term in an emic sense to refer to a political folk category whose history and meaning must be examined empirically. Moreover, my central point about the term will be that it offers a selective—indeed misleading—description of think tank–affiliated actors by highlighting only a particular dimension of their activity (namely, that which involves the use of knowledge and technical proficiency).
In this way, think tanks have expanded the strategic repertoires of market actors in American politics, especially the members of the “business-activist” movement that has played a leading role in the promotion of promarket ideology since the 1960s. To summarize these effects, I would argue that it is at the macrostructural level, or in the articulation of the spheres of politics, the media, business, and academia, that we must look for the main effects of think tanks. I am also not convinced by the institutionalist solution to the problem of how to depict think tank–affiliated actors. While the concept of an *epistemic community* certainly moves beyond the problematic of allegiance as described above, it is still limiting as an analytic tool. After all, in the international relations literature from which the concept derives, the term refers to a network of policy-oriented actors whose members share a certain brand of expertise, such as legal or scientific knowledge. (In a widely cited article, Peter M. Haas defines epistemic community as “a network of professionals with recognized expertise and competence in a particular domain and an authoritative claim to policy-relevant knowledge within that domain or issue area.”) But when applied to the world of think tanks, this idea tends to conceal as much as it illuminates. In the first place, think tank–affiliated actors are not obviously engaged in a coherent professionalization project, being equipped with different resources, credentials, and forms of expertise. An institutionalist scholar might reply that multiple epistemic communities therefore coexist within the world of think tanks. But this only pushes the operative question to a different level: Why should we assume that think tank–affiliated actors are first and foremost “experts”? As I will argue, credentialled knowledge is only one of several resources that policy experts must deploy in order to succeed, even on their own terms. Other socially valued resources circulating in the space of think tanks include network ties to political elites and journalists, media savvy, the ability to raise money, and specialized political skills. Crucially, then, it is the *relative values* of these resources that remain the central unanswered question about the role of policy experts. For example, does the ability to raise money trump academic achievement in the space of think tanks, or is being “good on television” sufficient to compensate for a lack of relevant knowledge about a given policy issue? Furthermore, what counts as “relevant” knowledge? These are not questions with simple answers, nor can they be treated as entirely settled within the world of think tanks. Instead, they are also *stakes* in an ongoing competition among policy experts, who inevitably arrive at the think tank endowed with different resources, forms of expertise, and credentials. Together these observations point to what I believe is actually the most glaring problem with the institutionalist framework. Like its predecessors, this approach offers no analytic concept of a think tank, no adequate sense of the *distinctive social or organizational forms* denoted by the term. Whereas scholars operating in the elite theory tradition reduced think tanks to appendages of the “policy-planning network,” the pluralists vacillated between the idea that think tanks were havens for freethinking intellectuals and the notion that they could be subsumed analytically into the vast sea of interest groups.
The institutionalist approach usefully shifts our focus to the rules and constraints within which think tanks are embedded and the personnel networks they coordinate, albeit without clearly elucidating what a think tank is. To be sure, most scholars working in this tradition have taken care to formulate operational definitions of the term *think tank*, some of which I will discuss in the next chapter. Yet, as I will argue, these definitions are theoretically problematic because they inevitably rest on the arbitrary premise that “true” think tanks are marked by formal independence from bureaucratic, party, market, academic, and media institutions. As I will show, there are good reasons to discard this assumption altogether, since in certain ways think tanks are also highly dependent on these same institutions for their existence. Let me close this section, then, by noting what is undoubtedly the central irony in the study of think tanks. Despite decades of research on the topic, no one has yet offered a satisfying answer to the most basic question of all: What is a think tank?

**Plan of the Book**

Chapter 1 will address this question at length. The approach I will take is derived from the work of Pierre Bourdieu and recent extensions of his theory by scholars such as Gil Eyal and Loïc Wacquant. It rests on a seeming paradox: To clarify the status of the ambiguous creatures known as think tanks, we will need to build the structural blurriness of the object into our conceptualization itself. However, it is not the mere fact of blurriness that distinguishes think tanks from other organizations, since many social institutions exhibit this characteristic. Rather, it is the particular brand of blurriness exhibited by think tanks that holds the key to their identity. My argument will be that think tanks are best understood, not as a discrete class of organizations per se, but as a fuzzy network of organizations, themselves divided by the opposing logics of academic, political, economic, and media production. It is this series of oppositions that drives the interior dynamics of the space of think tanks. We can overcome any challenge posed by the fuzziness of think tanks by historicizing the organizational network in which they are embedded—that is, by documenting its formation and determining how its members have marked themselves off from more established institutions. Built into a think tank’s practical repertoire, I will argue, is an elaborate symbolic balancing act that involves gathering multiple institutionalized resources from neighboring social spheres, including samplings of academic, political, economic, social, and media capital.

Chapter 2 will proceed with the task of historicizing the space of think tanks by relating the long “prehistory” of think tanks to a transformation in what Bourdieu calls the field of power, or the system of struggles in which holders of various institutionalized resources “vie to impose the supremacy of the particular kind of power they wield.” Focusing on the period from the 1890s to the early 1960s, I will argue that the forerunners of American think tanks emerged in the context of a precarious encounter among elites, including politically moderate capitalists, aspiring bureaucrats and diplomats, and the members of an emerging intelligentsia.
At one level, this process can be read (just as the elite theorists would suggest) as a strategic collaboration among different segments of the “ruling class.” However, at another level, the same process must be understood as part of a struggle interior to the upper class over the relative values of their different resources or media of power. To their progressive capitalist cofounders, for example, the forerunners of the think tank were useful, both as tools for brokering compromises with organized labor and for resisting the expansion of the New Deal. More broadly, the same organizations were part of a wider effort by capitalists to “become modern” by harnessing the tools of science and rationality for their own ends. On the other hand, for the aspiring diplomats, foreign policy specialists, and social scientists, the forerunners of the think tank were significant mainly as vehicles of professionalization. The result of this ambivalent encounter among elites was the formation of a large, segmented machinery of “technoscientific reason” that filled the gap left by the absence of an official government technocracy in the United States. Chapter 3 will use this claim as a point of departure for an analysis of the formation of the space of think tanks starting in the 1960s. To understand how a diffuse set of organizations became oriented to one another in their judgments and practices, I will situate this process in the context of a wider struggle among groups with different claims to politically relevant knowledge. As scholars such as Eyal have argued, the 1960s was a decade of “intense and undecided conflict over the prototype of intellectual work,” both in the United States and in other countries around the world. In the United States, I will argue, this conflict took the form of a series of challenges to the technocratic specialists who had become the leading suppliers of policy advice during the first half of the twentieth century. The main such challenge, I will argue, was issued by an emergent group of conservative “activist-experts” who sought to undermine the power of technocrats from a standpoint of greater intellectual openness and public engagement in what Eyal calls the “field of expertise.” As the activist-experts gained influence, however, their struggles with the technocrats gave rise to a convergence between the two groups. The main result of this process was the formation of a new subspace of knowledge production with its own orthodoxies, conventions, and interior dynamics. As the technocrats and activist-experts drew closer together and became more interconnected, they gradually settled on common norms and criteria of intellectual judgment distinct from those of academia. It was through this process, I will argue, that think tanks collectively acquired an identity of their own. Having traced the formation of the space of think tanks historically, I will turn to an analysis of its present day form and functioning. Chapter 4 will develop both a structuralist mapping, or a *social topology*, of the space of think tanks and a general theory of “policy research” as a loosely coordinated system of intellectual practices. The chapter will begin by examining the external forces and determinations that are brought to bear on think tanks. 
To succeed in their complex missions, I will argue, think tanks must carry out a delicate balancing act that involves signaling their cognitive autonomy to a general audience while at the same time signaling their *heteronomy*—or willingness to subordinate their production to the demands of clients—to a more restricted audience. To reconcile this opposition, think tanks gather a combination of resources from the “parent” fields of academia, politics, the market, and the media, and assemble these into novel packages. To function stably, think tanks depend on a set of social agents who subscribe to the ethos of policy research. Turning then from structure to agency, chapter 4 will examine what I call the “occupational psyche” of the policy expert, or the antithetical combination of drives, perceptions, habits, and reflexes needed to excel in the world of think tanks. The most successful policy experts, I will argue, are those who blend styles, skills, and sensibilities that mirror the structural oppositions among the fields on which think tanks depend for their resources and recognition. By depicting think tanks as inhabitants of an *interstitial field*, we can arrive at a better understanding of both the considerable differences among think tanks and the unifying forces that draw them together in the practice of policy research. But how should we understand the distinctiveness of policy research as a form of intellectual practice? In one sense, it is tempting to describe the work of a think tank using a language of pure constraint—the think tank’s dependence on clients being the main factor that prevents it from questioning the basic orthodoxies of policy debate or posing its studies against the interests of donors, politicians, or journalists. However, I will argue that the same conditions that undermine the cognitive freedom of think tanks also operate as curious sources of flexibility and *power*. The nature of this power must be understood largely in terms of its reconfiguring effects within the wider space of knowledge production in the United States. By claiming for themselves a central role in policy debate, think tanks effectively limit the range of options available to more autonomous American intellectuals, whose products become increasingly dispensable in political and media fields dominated by moneyed interests and political specialists. The main conclusion of chapter 4, then, is that think tanks produce their most important effects, not in spite of, but precisely through their “blurriness.” It is this quality, I argue, that enables them to suspend conventional questions of identity and carry out practices not possible in any of their parent fields. A brief thought experiment can help to illustrate these points and bring this introductory discussion full circle. How would we identify the source of Charles Murray’s efficacy in the welfare reform debate of the 1990s as described in the opening vignette? In the classical sociology of intellectuals, the standard approach would have been to classify Murray using some typology of intellectual role-sets. We would be forced to decide, for example, whether Murray most closely resembled a noble “public intellectual,” an aloof “ivory tower” figure, a servile “technician,” or some other ideal-type. However, I believe we should be wary of this approach, not least because existing attempts to classify Murray in this way typically end up saying more about the classifier than about the presumed object of classification. 
To label Murray an “exemplary social scientist” (as American Enterprise Institute president Christopher DeMuth did while bestowing on him the Irving Kristol Award in 2009), for example, or to call him a “conservative evildoer” or a producer of “racist pseudo-science” (as progressive journalist and Center for American Progress fellow Eric Alterman did in his book *What Liberal Media?*) is to locate oneself in the system of political and intellectual struggles that one is attempting to analyze. Murray himself, however, remains strangely untouched by these descriptions. A better approach, I believe, is to recognize that Murray’s successful intervention in the welfare debate of the 1980s and 1990s depended not on his ability to embody a particular intellectual type but rather on his ability to exist “in between” types by merging disparate skills and switching roles as the situation demanded. As the opening vignette illustrated, Murray first entered the welfare debate with all of the outward appearances of a “public intellectual,” or someone who could challenge the political orthodoxy of the day from a standpoint of relative autonomy while speaking in terms that were accessible to the lay public. However, he also gained a degree of authority from the appearance of technical proficiency that came from his experience as a former government policy analyst. Once the Republicans took control of Congress, however, Murray subtly repositioned himself as a crusader and spokesman for the antiwelfare movement by testifying on Capitol Hill and serving on an official White House–sponsored commission to move the legislation forward. We can even find a hint of “ivory tower” scholasticism in Murray’s story, although the site of his privileged seclusion was not a university. As Murray himself says in an interview, “In the think tank world . . . I have—and this is not really an exaggeration—I have essentially spent the last twenty-one years doing exactly as I pleased, every day and all day.”\textsuperscript{46} Chapter 5 will put the general theory of think tanks developed in the book into action by examining the history of struggles over poverty and welfare policy in the United States from the late 1950s to the passage of the 1996 welfare reform legislation. I will argue that the formation of the space of think tanks during this period was one of the main institutional processes leading to the discursive shift from a problematic of \textit{deprivation}—or a policy debate centered briefly on mass poverty and its structural underpinnings—to a problematic of \textit{dependency} that identified welfare receipt itself as a form of moral degeneracy and a source of social ills. By transforming the institutional structures of knowledge production and consumption in the United States, the growth of think tanks made possible a shift in the cognitive framework within which policy makers worked to achieve policy solutions in the last decades of the twentieth century. In describing the history and present day role of think tanks, I would like this book to contribute to a wider discussion about the “time-honored question of the relationship between social knowledge and public action.”\textsuperscript{47} With this aim in mind, the concluding chapter will relate the study of think tanks to three ongoing debates connected to this question. 
The first requires us to consider think tanks in what will surely seem like a paradoxical and unfamiliar context: namely as heirs to the long and deep-seated \textit{anti-intellectual} tradition that commentators since Alexis de Tocqueville have identified as part of the national culture. Resituating the topic within a framework centered on the relations among intellectual groups, I will argue that the charge of anti-intellectualism is best understood as a strategic stance or “position-taking” in the intellectual field—one that typically involves an attempt by a relatively autonomous intellectual group to discredit its less autonomous counterparts. Focusing our attention on the struggles among intellectual groups will point the way toward a clearer understanding of the circumstances under which think tanks are likely to be regarded as organs of intellectualism or anti-intellectualism. The second debate I will address concerns the status of the so-called “public intellectual.” At one level, the lively debate on this topic engendered by Russell Jacoby’s 1987 book, *The Last Intellectuals*, might seem to offer a natural starting point for the study of think tanks. After all, in the standard narrative associated with Jacoby, the putative demise of the public intellectual takes place concurrently with the rise of think tanks, suggesting the possibility of a causal linkage. But at another level, the debate on public intellectuals only promises to hinder our understanding of think tanks. Having become predictably mired in confusion over the meaning of the central concept, the debate on public intellectuals has generated more heat than light. In keeping with the relational approach of this study, I will argue, first, that the term *public intellectual* is best understood as referring, not to a flesh-and-blood actor per se, but to a specific position in a space of relations among actors with claims to knowledge and expertise. Furthermore, while the germ of a public intellectual project may have incubated briefly in the late 1950s and early 1960s, it was quickly snuffed out. And yet the main process leading to its failure has been largely overlooked by scholars. Thus, against the prevailing wisdom, I will argue that the recent historical period has been marked neither by the demise of the public intellectual, as some writers have claimed, nor by the opposite process, that is, by a simple growth in the public role of intellectuals, as others have argued. Instead, the proliferation of think tanks has made possible a new kind of public figure in American life known as a “policy expert,” whose authority is built on a claim to mediate an encounter among holders of various forms of power. The last discussion with which I will engage in chapter 6 is the ongoing debate in academic sociology about the prospects for, and the desirability of, a civically engaged “public sociology.” Initiated in 2004 by sociologist Michael Burawoy, this discussion has generated a spirited conversation about the soul and direction of the sociological discipline. However, I will argue that the debate, being framed largely in terms of the relations between sociologists and their “publics” and among sociologists themselves, has generally failed to take into account the place of sociology within the wider American intellectual field. 
In particular, writings on public sociology have largely overlooked what I will argue is the chief obstacle to civic-sociological engagement in the United States: namely, the rise of heteronomous knowledge producers in the space of public debate since the 1960s. Relating public sociology to the rise of think tanks will provide a useful starting point for a theory of the institutional conditions under which sociological knowledge is produced, consumed, and (most often) ignored in American public debate. By issuing policy prescriptions tailored to the preferences of sponsors and consumers (especially politicians and journalists), think tanks tend to relegate the most autonomous sociologists to the margins of policy debate and draw others toward a more technocratic style of political-intellectual engagement.
CHAPTER X

TREATMENT OF THE AMPUTATED

In the first portion of this book, no mention was made of amputations performed at the front since these are strictly surgical in nature. The usual operation is a simple oval or circular amputation, executed as rapidly as possible, with little thought of any result other than saving the patient's life. When these patients with a limb already amputated reach the base hospital, their further treatment should fall into the hands of some one versed not merely in surgical technic but in orthopedic principles and, above all, in the application of artificial limbs. The practice of turning the patient over to the manufacturer of artificial limbs as soon as the amputation wound has healed is frequently responsible for much unnecessary suffering and many instances of poor function. Only by a rational harmonizing of surgical technic and orthopedic treatment with the brace-maker's art can satisfactory results be achieved.

Preliminary Treatment of the Stump.—When the Amputation Wound is Still Unhealed.—It frequently occurs that by the time the patient has reached the base hospital the loose sutures applied at the time of the primary amputation have torn out, the skin flaps have retracted, and a large granulating area lies exposed. Attempt must be made to prevent further retraction of the skin. This is best done by applying a piece of stockinette to the stump after first painting it with some adhesive mixture, such as a solution of mastic.\(^1\) The free ends of the stockinette projecting below the stump are gathered together by a stout cord, which, passing over a pulley, serves for the attachment of a suitable weight (3 to 10 pounds). To bandage the wound, the cord is loosened and the edges of the stockinette turned backward so as to expose the granulating area. In many cases where the skin has not already become adherent, this method suffices to coapt the skin edges; when much retraction has already taken place and the skin has become adherent to the deeper structures, it merely prevents further retraction.

\(^1\) The solution of mastic is made as follows: R. mastic, 20; chloroform, 50; linseed oil, gtt. xx.

Postural Treatment.—Care must be taken to prevent the development of contractures. The most frequent mistake is in the case of patients with amputations of the thigh or of the calf. The nurse, in her effort to make the patient comfortable, places a pillow beneath the stump, thus flexing the thigh at the hip or flexing the knee. This error, usually unnoticed at the time, results in flexion contractures whose significance is not appreciated until the first fitting of the artificial limb. Then the brace-maker tells the surgeon that something is wrong, and that he cannot make the artificial limb fit correctly. As a consequence months of treatment are required to lengthen the contracted tissues until the free range of motion has again been acquired. The same principle emphasized in the treatment of injuries to the muscles should be applied to the amputated; the position of the limb should be such as to prevent the overaction of the strong muscles at the expense of the weaker. Thus, at the hip and at the knee, every effort must be made to prevent the strong flexors from overcoming the action of their weaker antagonists. At the shoulder, the strong adductors must not be allowed to contract at the expense of the abductors. The application of the principle is simple.
In the case of a patient with thigh amputation, a small pillow is placed under the buttocks so as to allow the thigh by its own weight to fall into the position of slight hyper-extension. For the amputation of the calf, a pillow is placed not in the popliteal space, as is so frequently done, but near the end of the stump, so as to promote the full degree of extension. For amputations of the arm, a small pillow is placed between the chest and the limb, so as to promote abduction. For amputations of the lower arm, the limb is simply allowed to lie in the fully extended position. The one exception to this rule is in the case of amputation just below the knee, where the stump is so short that there is no possibility of affixing the artificial limb to the calf. In this event, it is particularly difficult to keep the short segment of the calf extended, and, as the artificial limb is constructed so as to permit the patient to walk about with the stump flexed, there is no advantage gained in attempting to maintain the extended position.

Re-amputation.—The surgeon should not be too hasty in deciding that re-amputation is necessary. I well recall two instances in which, despite the discouraging appearance of the stump (which led me to prepare the patient for operative revision), I was able within several weeks' time to secure excellent results by non-operative procedures. The extension method for exerting traction on the skin has already been described; in addition to this, every effort is made to encourage epithelialization. The presence of scar tissue over the end of the stump does not necessarily mean a poor stump, although it is, of course, preferable to have a normal skin covering. The indications for re-amputation are: (1) projection of the bone beyond the granulation tissue; (2) persistent ulceration of the stump owing to the thinness of the epithelial covering; (3) a fixed contraction of a short stump in such a position as to render application of the artificial limb impossible; (4) in rare instances, painful neuromata which yield to no other form of treatment. A conical stump is in itself no indication for re-amputation since it may, if properly exercised, develop excellent functional capacity. A discharging sinus, due to the presence of a sequestrum or foreign body, necessitates operative removal (easily accomplished through a small incision) but this operation is in no way analogous to a re-amputation. Whenever possible, re-amputation should be avoided, since it invariably necessitates shortening the stump. This means loss of power, since the longer the stump, the more accurate its coaptation to the artificial limb and the more effective its action. Of course, if the stump be a long one, with the site of the amputation just above the ankle or the knee, a few inches can be sacrificed without appreciable diminution of power. The principle of maintaining the maximum length of the stump disagrees with the practice of many eminent surgeons, and therefore deserves further consideration. Thus, it is maintained by Riedel, who himself suffered amputation below the knee-joint, that the stump of the calf, although amply sufficient for the attachment of the artificial limb, was a useless encumbrance. After one year's trial, he insisted upon a re-amputation at the knee, using the Gritti method, and professed himself far happier with the short stump than with the longer. My experience has led me to the opposite conclusion.
Except in those rare instances already referred to, where the stump of the calf is so short as to make it impossible to grip it in the socket of the artificial limb, every patient whom I treated found it of great advantage to be able to control the prosthesis by the action of the intact quadriceps extensor muscle. Whether the stump was suitable for weight-bearing or not, made far less difference than the additional security given by the voluntary control of the knee-joint. The longer the stump of the calf, the longer the leverage arm controlled by the patient, and the easier for the brace-maker to secure an accurate fit. This is made clear if one thinks of the stump as the piston of an air-pump. Just as the security of the piston is most marked when it is pressed downward its full length into the air-pump, so too, the stability of the stump within the artificial limb is greatest when there is the largest area of contact between it and the prosthesis. The same holds good for amputations of the thigh, where in the case of the short stump, it is exceedingly difficult for the patient to manipulate the apparatus; whereas, with the long stump, almost the normal stride can be attained. With the upper and lower arm, the effectiveness of the stump for practical purposes is in proportion to its length; and in the case of wounds shortly below the elbow, everything should be done to preserve a stump of the forearm, however short that may be. In applying the rule relative to the maximum length of the stump, the surgeon must beware of ultraconservatism. Thus, for instance, when an amputation at the ankle is indicated, it would be unwise to leave the astragalus attached to the stump, since in the first place, this bone would render the stump too long for the proper application of the prosthesis; in the second place it would not be as well suited to weight-bearing as an osteoplastic stump. George Marks recites an instance of amputation through the mid-calf in the case of a patient whose knee-joint had already been ankylosed. Naturally this ultraconservatism made the normal application of the prosthesis impossible, and the patient had to go about with one thigh apparently 6 inches longer than the other. The principle of maintaining the maximum length of the limb does not belittle the importance of securing, whenever possible, a weight-bearing stump. If the stump can be rendered capable of supporting the body, the problem of fixing the artificial limb is rendered much simpler. To this end, certain osteoplastic operations are of great value and should be performed wherever feasible. In a class by themselves stand the Pirogoff and Gritti amputations. Both these procedures are excellent examples of the physiological method, and when properly executed invariably give good results. Of course, an important condition for the success of all the osteoplastic operations is an absolutely aseptic field. When this cannot be had, the operations are contraindicated. In the calf, when the stump is a long one, so that several inches may be sacrificed with comparatively little loss of power, the Bier osteoplastic method usually results in a weight-bearing stump. When this operation is not feasible, it matters little whether the so-called "aperiosteal" technic is followed, or whether the periosteum is left adherent to the stump. Irrespective of the treatment of the periosteum, it will be found that in some cases bony spurs develop, and in others they do not. 
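The leverage argument above (the piston analogy) can be put in rough quantitative terms. The following display is an illustrative gloss of my own, not part of the original text; it assumes that the socket grips the stump along an effective length \(r\), and that walking requires a turning moment \(M\) to be transmitted between stump and prosthesis as a force couple acting near the ends of the socket:

\[
M \approx F\,r \qquad\Longrightarrow\qquad F \approx \frac{M}{r}.
\]

For a fixed moment \(M\), doubling the effective length \(r\) roughly halves the force \(F\), and with it the pressure, that the socket must transmit through the skin; the longer stump thus gives both a steadier gait and a more comfortable fit.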
In all cases of amputation of the calf, the fibula should be divided at least \( \frac{1}{2} \) inch above the level of the tibia. I have found the following technic to give good results in cases where the Bier osteoplastic method is contraindicated. The skin flaps are so planned that the anterior is large enough to cover the inferior surface of the stump. The muscle flap, on the contrary, is taken from the posterior aspect of the calf, since the fleshy gastrocnemius and soleus furnish the best covering for the inferior surface of the tibia. The muscles are attached to the periosteum by strong sutures anterior to the weight-bearing surface; as the skin suture lies posterior, there is no suture line subjected to pressure when the artificial limb is applied.

In amputations of the thigh, where the Gritti is not applicable, the Bier method can be followed provided the stump is sufficiently long. If the stump be short, as little tissue should be sacrificed as possible. An elliptical incision is made, and a cone of granulation tissue and muscle—with its apex at the bone—is excised, the bone sawed off at this level, and the parts drawn together by strong, coapting sutures. In patients with a femoral stump not more than 2 or 3 inches long, the presence of an abduction or flexion contracture renders the application of the artificial limb impossible. The problem in these cases is solved most simply by disarticulation of the femur at the hip. Nothing is lost, since the stump is too short to control the artificial limb, and much is gained in the ease of application. For amputations of the upper limb, the question of weight-bearing plays no rôle whatever. The stump should invariably be left as long as possible, and re-amputation performed only when there is urgent indication.

Kinetic Stumps.—Vanghetti and later Ceci attempted the utilization of the latent muscular force of the stump by freeing the tendons or muscle bellies in such a way as to enclose them with skin flaps. These flaps could then be moved by the voluntary muscular contraction of the patient's stump. During the last 3 years the method has been modified by Sauerbruch (until recently professor of surgery at the University of Zurich) and the technic so developed that it can be regarded as a perfected surgical procedure. Figs. 127 et seq. illustrate the steps of the operation. Instead of the original Vanghetti technic a much simpler method has been adopted. After freeing a skin flap of appropriate size (Fig. 127) a tunnel is bored through the muscle belly (in this instance the biceps) and widened sufficiently to admit the skin flap, which has been sutured to form an epithelial-lined tube (Fig. 128). A simple skin plastic completes the operation (Figs. 129 and 130). The canal is kept patent by means of a rubber drainage tube or ivory peg, and as soon as possible active exercise of the muscle (see Fig. 130) is begun.

**Fig. 127.**—The Sauerbruch method of producing a kinetic stump. First step of operation. A tunnel has been bored through the biceps muscle. A skin flap has been freed and is being sewed about a piece of rubber tubing with the epithelial surface turned inward.

Excellent though the operative results are, the practical benefit to the patient has thus far been slight, owing to the difficulty in constructing a prosthesis capable of utilizing the muscular force placed at its disposal. If this mechanical problem can be solved, the Sauerbruch procedure will constitute an important advance in our methods of treating the amputated.
Although Sauerbruch has, so far as I know, confined his operations to the upper extremity, the method's field of usefulness might well be extended to amputations of the thigh. Here voluntary control of the artificial limb by means of the quadriceps extensor would be of great assistance to the patient, particularly to one whose work called for walking over uneven ground, hill climbing, and ascending or descending steps.

The Education of the Stump.—Even before the wound has healed, the physician must begin treating the stump with a view to developing its function. The muscles should be massaged and the patient should be encouraged to move the limb. As soon as the wound has healed, more vigorous measures can be adopted. The stump should then be bathed daily with cold water, and in addition to the massage, graduated exercises should be performed. These consist of simple movements—flexion, extension, abduction, adduction and rotation—against the resistance of a weight running over a pulley, or of the hand of a trained masseur. Bandaging the stump firmly helps remove fat and reduce the oedema.

Fig. 129.—The Sauerbruch method of producing a kinetic stump. Third step of the operation. The sutures are being taken to unite the edges of the skin flap to the skin of the arm near the point of emergence from the muscular channel.

Fig. 130.—The Sauerbruch method of producing a kinetic stump. Fourth step of operation. The operation is completed by uniting the skin edges as shown in the illustration. The canal is kept patent by running a piece of rubber tubing or an ivory peg through it.

Fig. 131.—The Sauerbruch method of producing a kinetic stump. The after-treatment. To exercise the muscle through which the channel has been bored, the ivory peg running through it is attached to a pendulum apparatus. The patient can by a voluntary contraction of the muscle cause the ivory peg to move upward and thus move the lever of the apparatus. By regulating the length of the pendulum the exercises can be graduated to meet the increasing muscular power of the patient.

To assist in the hardening process leading to weight-bearing function, the patient should learn to rest the end of the stump against a chair or stool of suitable height. At first the chair is thickly padded; gradually the padding is removed, until the patient is able to bear his weight on the bare wood. He then begins to hammer with the end of the stump against the support, since a certain amount of this pounding motion is incidental to walking with the artificial limb. This treatment should, of course, be carefully graduated, otherwise the stump tends to become irritated instead of hardened.

Some authors have laid great emphasis on forming a deep circular furrow in the stump. This furrow serves for the attachment of the socket of the artificial limb, and does in some instances undoubtedly add to the stability of the prosthesis. I have found that with rare exceptions, however, the method is not of particular value. The exception consists of those instances of short stumps of the calf (about 3 inches long) which it is difficult to grasp firmly with the artificial limb. In these cases, a furrow is of distinct assistance. The Esmarch bandage, or better still, a strong piece of rubber tubing about \( \frac{3}{8} \) inch in diameter, is applied to the stump under as much pressure as the patient can stand, and kept in place for an increasing length of time with each application. After several days the patient is usually able to stand the pressure for several hours.
Within two weeks, a distinct furrow can be developed.

The greatest educator of the stump is the artificial limb itself. Therefore, it should be applied as soon as possible. The use of a crutch for the amputated is an indication of inadequate treatment. The early use of an artificial limb presents one great difficulty: the stump is still swollen, a large amount of fatty tissue is still present, and the muscles are usually flabby. With time, the stump changes its shape so markedly that the artificial limb, which fitted accurately when first applied, is no longer suitable. If this has been made of leather or wood, great expense has been involved, and the value of early training of the stump seems to be outbalanced by the economic waste of time and material involved in the construction of an artificial limb whose period of usefulness is so short-lived. Owing to this difficulty, the provisional or temporary prosthesis has been evolved.

The evolution of these provisional limbs has been most interesting. At first they were constructed in the crudest way of a broom-stick or a piece of bamboo incorporated in a plaster shell fitting the patient's stump (see Fig. 132). Later, an iron framework was substituted for the broom-stick, terminating in a flat metal plate which could be rivetted into the empty shoe of the patient. A still later development was the use of a hinged joint corresponding to the knee (see Fig. 133), in cases of amputation of the thigh, so that the patient could learn early to utilize the joint of the artificial limb instead of striding with a stiff leg. All of these contrivances served their purpose in helping to educate the stump and in teaching the patient how to walk.

Fig. 133.—A provisional artificial limb (Spitzy) with movable knee joint. The transverse pieces marked 11 are easily bent, so that they conform to the curve of the thigh and are easily attached to the plaster dressing which encircles the stump. The foot piece is rivetted to the patient's boot.

To Mommsen belongs the credit of evolving what is, in my experience, the most practical and efficient provisional artificial limb. Assume that the patient has been amputated six inches below the knee. An exact plaster impression is taken of the stump by enveloping it with a plaster-of-Paris bandage. The plaster should not be thicker than \( 1\frac{1}{16} \) inch. While it is hardening, the operator should carefully mould the tuberosity of the tibia (see Fig. 134), since this bony projection forms the chief weight-bearing area. The head of the fibula and the condyles of the tibia are not subjected to pressure, since experience has shown that they are not adapted to weight-bearing. The plaster negative is then turned over to the brace-maker, who makes the corresponding foot, steel supports, knee-joint, and thigh-piece, just as though he were making an artificial limb for a patient whose stump had assumed its final definite form. The one difference between the final prosthesis and this provisional one lies in the fact that the plaster shell has been substituted for the usual leather socket (see Fig. 135).

Fig. 134.—Making a provisional artificial limb for an amputation of the calf. (Mommsen.) The figure illustrates the first step in the process when the exact plaster impression is taken of the patient's stump. Note that the surgeon is bringing pressure to bear on each side of the tuberosity of the tibia. The condyles and the head of the fibula should not be exposed to pressure.
The steel uprights are firmly fixed to the plaster by means of two rivets, and a series of bandages soaked in a mixture of plaster-of-Paris and bone glue (see footnote below). In other words, the patient is given at once the same type of artificial limb which he is to wear after the stump has attained its constant shape. During the stump's transition period, the plaster negative can be changed whenever necessary, since the cost is minimal and the labor involved comparatively slight.

This mixture, which though light is extremely hard, is prepared as follows: 400 grams of bone glue, broken into small chips, are dissolved in half a liter of water, heated over a water-bath. When boiling, 400 grams of alabaster plaster-of-Paris in the form of a thin plaster cream are added slowly to the glue. The mixture is constantly stirred during the process, and the preparation kept as near 100°C. as possible. When thoroughly mixed and boiled, the requisite number of starched bandages of appropriate width are immersed in the fluid, and when saturated are wound about the plaster shell, so as to strengthen it and hold the steel upright of the artificial limb firmly in place. The process is completed by a few turns of a plain gauze bandage, and the limb is dried in a warm room for one to two days.

For amputations of the thigh, the technic is similar. In these cases, the surgeon must lay stress upon an accurate moulding of the tuberosity of the ischium, since this bone is to bear the weight of the patient's body (see Fig. 136). When the stump has, after many months, assumed a form which no longer changes, then leather is substituted for the plaster-of-Paris, and the patient is equipped with a finished prosthesis.

Fig. 136.—Making the provisional artificial limb for an amputation of the thigh. (Mommsen.) An exact plaster impression is taken of the stump. The surgeon's fist brings pressure to bear just below the tuberosity of the ischium, so as to mold the support for the weight of the body.

Types of Artificial Limbs for the Lower Extremities.—It would far exceed the limits of this book were even mention to be made of the hundreds of different varieties of artificial limbs designed for amputations of the lower extremities which have been devised during preceding centuries, or which are now on the market. Study of about fifty different specimens has impressed me with certain conclusions which are, I think, of greater importance than the details of each particular invention.

1. For amputations of the thigh, it is important to distinguish between those stumps which are weight-bearing and those which are not. In the latter case, the success or failure of the artificial limb depends upon an accurate fit at the ischial tuberosity. Most brace-makers fail to realize that the tuberosity does not slant from above downward and forward but in the reverse direction, namely, from below upward and forward. This upward inclination, be it ever so slight, must be taken into account. The usual type of support given by the brace-maker does not conform to this anatomical fact, but slants from above downward and forward, so that the patient slips downward on the support and almost invariably suffers pain anteriorly, near the pubic bone. The result is that the stump is rotated, and the artificial limb does not fit. In addition to the tuberosity of the ischium, the adductor muscles are capable of bearing great weight when they have been properly hardened. The pubic bone, however, cannot stand pressure and must be left free.
The gluteal muscles and the vasti also help to support the body-weight. When the stump is short, a pelvic girdle with a strong joint at the level of the trochanter is necessary; whereas in the long stumps, the pelvic band and trochanteric joint are unnecessary. In patients with marked atrophy of the muscles, unable to balance themselves securely upon their stump, the trochanter joint should allow flexion and extension only, since the pelvis would drop toward the opposite side of the body, were abduction permitted. In applying the steel uprights which support the body, or, in case of a wooden limb, in joining the thigh-piece with the calf, it is advisable to give the calf about 2° of genu valgum position. This adds markedly to the stability of the artificial limb.

The type of knee-joint does not, so far as I can observe, play an important rôle. In general, the simpler the mechanism the more effective. Complicated screws, ratchets, or springs add merely to the likelihood of breakage and to the cost of keeping the limb in order. Besides, for the majority of patients, who live at a distance from the industrial centres where brace-makers are to be found, the entire construction of the limb should be so simple as to permit the wearer himself to make the necessary repairs. In one European hospital there is an admirable custom of giving each amputated patient a 3 weeks' course in the brace-maker's shop, and discharge from the hospital is dependent upon ability to repair his own prosthesis.

An essential in the mechanical construction of the joint is the location of its axis posterior to the centre of gravity of the anatomical joint. If this demand is not complied with, the patient loses all sense of security, because the artificial leg tends to bend at the knee under the patient's weight. If the mechanical joint lies posterior to the normal, then the body-weight tends to lock the joint, as is seen by reference to the diagram (Fig. 137). An artificial quadriceps does not, I find, add to the naturalness of the stride, but almost invariably tends to hold the leg fully extended, so that the patient walks as though the knee were ankylosed. A freely swinging joint with some simple rubber or spring device to prevent jarring in extension or flexion gives the patient the best opportunity to imitate the normal gait.

Fig. 137.—Diagrams illustrating the importance of posterior displacement of the knee joint of the artificial limb. A, Body; B, hip; C, knee joint; D, ankle. In Fig. I, the axis of the artificial joint corresponds in position to the anatomical. A slight degree of flexion brings the body weight posterior to the axis and, as is evident from the figure, further flexion must result. For the patient this position of the axis causes insecurity, since the least degree of flexion is almost certain to cause him to fall. In Fig. II, the axis of the artificial limb has been displaced posteriorly. The body weight, represented by the dotted line, now falls anterior to C (the axis) and tends to lock the knee instead of producing further flexion.

2. For amputations of the calf, the type of limb depends upon the length of the stump. If it is short—less than one-half the length of the calf—there must invariably be a thigh-piece and a knee-joint. If it is long, these may be dispensed with provided the stump is capable of weight-bearing.
As already indicated, when the stump is not capable of weight-bearing, the artificial limb must be so moulded as to grasp the tuberosity of the tibia firmly, not the condyles, as is usually taught. The patella-tendon also is capable of weight-bearing, as can be learned by observing any patient who has worn an artificial limb for many years. Some difficulty is frequently experienced in bringing the leather socket of the artificial limb over the gastrocnemii. This can be obviated by slitting the socket posteriorly and inserting eyes so as to lace it up when once it is in proper position. The ankle-joint, like the knee, should be of the simplest type, allowing merely flexion and extension. In addition to the ankle-joint, there should be one corresponding to the metatarsophalangeal junction.

Types of Artificial Limb for Amputations of the Upper Extremity.—The problem of dealing with amputations of the upper extremity is far more difficult than is the case with amputations of the lower limbs. The legs merely have to carry the body, but the arm has a great variety of functions to perform. Depending upon the nature of these functions, and also to a great extent upon the site of the amputation, the artificial limb must vary from one case to another. Thus, an artificial limb which might be of value to a lawyer or business man would be of little use to the farmer or mechanic; and of two farmers, one with an amputation of the forearm, another with an amputation above the elbow, the one would have to be equipped with a type of limb differing markedly from that supplied to the other. There is no universal artificial limb applicable to all cases.

1. Types of Artificial Arms Designed for Amputations of the Fore-arm.—For the farmer and artisan, a simple and effective prosthesis has been designed by August Keller. Amputated himself some nine years ago, he constructed an artificial limb of the simplest materials, so well adapted to the needs of the farmer that the amputated scarcely note the handicap under which they are compelled to work. Keller's device consists of a leather socket reinforced by two longitudinal steel bars, held in place by a figure-of-eight strap which passes just above the elbow (Fig. 138). The hand-piece, made of wood, can be removed from the socket if desired (Fig. 141). Inserted into the wooden hand-piece are three strong steel hooks. These are not adjustable. They aid the patient in two ways: first, small objects, such as pencil or knife, can be inserted between them; second, they furnish the leverage for larger instruments. To hold these latter in place, a leather strap, attached to the anterior portion of the apparatus, is made to take a double turn about the handle of the article used (see Fig. 143) and then, passing backward between the hooks, is fixed to the posterior aspect by means of a steel pin. The illustrations indicate how Keller uses his own device. The speed, accuracy and power which he exhibits are scarcely inferior to those of the normal individual. A large number of other contrivances have been evolved to replace the fingers.

Fig. 138.—The Keller artificial hand. The picture illustrates Keller's method of inserting a small knife, with which he is sharpening his pencil. Note also the piece of cork attached to the pencil. This enables him to grip the pencil between the claws and to write with it. The lower arm socket is held firmly in place by a broad strap which makes a figure-of-eight turn about the elbow.
These consist of hooks, rings, clamps, and holders designed for special articles, such as knife, fork, spoon, pen or pencil, knitting needle, etc. Some of these are shown in Figs. 144, 145, 146 and 147. Several excellent devices have been invented by Judge Corley, of Dallas, Texas. One of these, a most ingenious arrangement enabling the wearer to button his own collar, is illustrated in Fig. 148.

For the business man, or the professional, a more suitable type is the arm designed by Carnes. In this, the mechanism is far more complicated, and the cost therefore proportionately greater. Despite the delicate mechanism, however, it is capable of standing the usual amount of wear and tear, and any broken constituent part can readily be replaced. The essential feature of the arm is the voluntary control of motion of the fingers and of the wrist by means of bands which become shortened or lengthened by motion of the elbow-joint. The arm requires considerable practice before the technic of its use can be acquired. To give a patient such an artificial limb and expect him to be able to use it at once is as illogical as presenting a man with a violin and telling him to play upon it. When, however, its use has been mastered, it gives surprisingly good results.

Fig. 140.—The Keller artificial hand. Keller pruning a small tree.

The mode of attachment of the artificial limb to the stump is of importance. The hinge-joint at the elbow with an upper arm cuff, the usual type found in the brace-maker's shop, should not be employed, since it gives no opportunity for pronation and supination. A simpler and far more advantageous method of attachment is the figure-of-eight strap, which passes just above the condyles of the humerus and, crossing the posterior surface of the humerus, descends again over the anterior surface (see Fig. 138).

2. Types of Arm Designed for Disarticulation of the Elbow or Amputations of the Upper Arm.—The classical type of limb is a useless encumbrance and is almost always relegated to the garret by the intelligent patient. To be of any assistance to its wearer, the prosthesis must, even more than in the case of that for the forearm amputation, be particularly designed for the special work to be performed. Fig. 149 shows a fourteen-year-old patient to whom belongs the credit of evolving a practical working arm for disarticulation at the elbow. When this lad was placed in the carpenter shop, I suggested that he construct an artificial limb to help him at his work. I expected to see the usual hinge-joint at the elbow, prolonged downward to serve for the attachment of a hook or a clamp. To my great surprise, after a few days the lad showed me the artificial limb pictured in Fig. 150.

Fig. 142.—The Keller artificial hand. For aesthetic purposes Keller draws a glove over the hooks. This he terms his "Sunday" hand.

Fig. 143.—Keller splitting wood. Note the double turn of the leather strap around the handle of the axe. This gave Keller so strong a grip on the handle that the united strength of three men was unable to pull the axe away. Keller's dexterity equalled that of an expert woodsman.

Fig. 144.—The Fischer clamp for the use of the one-armed. The three prongs facilitate holding objects obliquely as well as in the axis of the limb.
It will be noted that instead of a hinge-joint, there is a ball-and-socket joint at the elbow, which, according to the patient's statement, he had constructed because he wished not merely to bend at the elbow but also to turn the forearm. In other words, he had solved a problem which makers of artificial limbs had for centuries failed to answer; namely, the best method of combining flexion and extension with pro- and supination. Between the concave extremities of the upper and lower arm pieces was inserted a wooden sphere, bound to the adjacent concavities by a strong spring. The friction between the opposing surfaces was sufficient to lock the arm at any desired angle.

Fig. 145.—Clamp and hook serviceable for the amputated workman. The clamp serves to hold a file, brush, small hammer, etc. The hook can be used to carry a pail or to lift heavy objects.

Fig. 146.—A professional pianist, whose right hand had to be amputated because of gunshot injury. Equipped with a special device of Hoeftemann's, he was able to continue his profession. It was possible for him to strike single notes and chords with facility.

Fig. 147.—Hoeftemann's device for the professional pianist shown in Fig. 146.

Fig. 148.—Judge Corley's apparatus for helping the man who has lost both hands to button his own collar.

With the aid of this simple device, the patient within two years became an expert carpenter and, entirely unassisted, was able to do the finest kind of cabinet work. Of course it must be remembered that the artificial hand plays the rôle of assistant to the sound arm, and the success of the patient in becoming an expert artisan was due in large part to the fact that the major work done by the carpenter is performed by one hand, aided to a comparatively slight degree by the other.

Fig. 149.—A 14-year-old carpenter's apprentice amputated at the elbow, showing the artificial limb which he himself designed. By inserting a wooden sphere between the concave extremities of the upper and lower arm pieces he could not only flex and extend but supinate and pronate.

Another valuable type of arm is illustrated in Fig. 152. This device is purely for working purposes, and must be supplemented by another arm which hides the defect. It consists of a broad padded metal ring which fits over the shoulder and is held firmly in place by straps passing around the body. To this ring is attached a second, which, running on ball bearings, has perfect freedom of rotation on the first ring. To the second are attached steel uprights which run parallel with the stump and terminate at the level of the elbow in a circular disc to which various instruments useful to the carpenter can be attached. The stump is bound firmly to the steel uprights by means of straps, and owing to the ball-bearing joint at the shoulder the wearer has almost the normal range of motion. A little ingenuity in devising the tools to be inserted into the disc enables the amputated to do even the most delicate kind of carpentry work. One tool suffices to grasp the screw of the screw-and-bit; another grasps the nail so that the uninjured hand is free to hammer; another is designed to hold the chisel, etc.

Fig. 150.—The carpenter's apprentice shown in Fig. 149 guiding the plane with his artificial arm.

Fig. 151.—The carpenter's apprentice already pictured in the preceding figures, at work with the saw. The artificial limb is used to steady the board.
An interesting modification of the working arm suitable for amputations above the elbow is the utilization of a spring at the elbow-joint, which permits a springy motion of distinct value in hammering, filing, etc., work in which absolute fixation at the elbow takes away from the freedom of the stroke. Fig. 154 elucidates the principle of this arm. By fastening screws $A$ and $B$, the arm can be absolutely fixed at any desired angle. By releasing screw $A$, which controls the springs, the plunger is allowed to move backward and forward, allowing about 10° motion, but not beyond the limit set by the screw $B$. Pronation and supination are not possible in this type of arm except by rotating the tool which is inserted into the hollow barrel corresponding to the forearm.

Fig. 152.—The Siemens-Schuckert arm for amputations above the elbow. For descriptive text see page 209 et seq.

Fig. 153.—The Biesalski artificial arm for amputations above the elbow. (First model.) This arm was probably the first in which an elbow joint was constructed corresponding to the anatomy of the normal, and the first in which a working arm was combined with an aesthetic means of hiding the defect. The lower arm portion consists of a strong metal tube, into which working implements can be inserted and over which the artificial hand can be placed, when the wearer is through with his day's work.

Fig. 154.—Artificial arm in which a limited amount of springy motion can take place at the elbow by adjusting the screws $A$ and $B$. (Model of Biesalski.)

Two types of working arm have been constructed after the pattern of the ball-and-socket joint devised by the young carpenter's apprentice already mentioned. To render the fixation at the elbow firmer, a screw is attached to the elbow articulation which locks the upper and lower arm against the spherical surface of the intervening steel ball (see Fig. 155). Although these two arms are capable of withstanding great strain, they are not, so far as I have been able to judge, as advantageous as that pictured in Fig. 152, because the tool is not brought into sufficiently intimate contact with the stump. As a rule, with practically no exceptions, the nearer the stump can be brought to the instrument which it is to control, the more effective is the amputated's use of the implement.

Fig. 155.—A working arm designed for amputations above the elbow (Rota arm). The joint at the elbow is so constructed that not only flexion and extension but pro- and supination are made possible. The portion corresponding to the lower arm consists of a tube into which tools of various kinds can be inserted. It can be fixed in any desired position by a turn of the screw just above the elbow joint.

The Carnes arm, already described in speaking of amputations of the forearm, is also applicable to amputations of the upper arm. The motor power is then derived from the movements of the shoulders (see Figs. 156, 157 and 158). The difficulty in learning to use the arm is increased when the amputation lies above the elbow, nor is it particularly well suited to the use of the artisan. For aesthetic purposes, however, it is the most ingenious device of which I know.

Fig. 156.—A case of double amputation, on the right side through the elbow, on the left 4 inches below the shoulder. In a case of this kind, unlike that pictured in Fig. 171, an artificial limb is necessary, since the two stumps cannot be approximated. The Carnes artificial arms are seen lying on the table.
The patient can put these on without assistance and is then able to eat alone, dress, shave and use many tools. (See also Fig. 157.)

The shorter the upper arm stump, the more difficult the attachment of the prosthesis, and the more difficult it is to render the stump capable of doing its fair share of work. As a rule it is almost impossible to train a patient with a stump less than 4 inches long to be an independent farmer or artisan. An exception is pictured in Fig. 163. Despite the short stump, this young boy was able to work skillfully in the machine shop (see Figs. 159 et seq.). The prosthesis shows the excellent shoulder device designed by Riedinger. As a rule the patient with the short upper arm stump can be made capable of doing lighter garden work (see Figs. 164 et seq.), or in suitable instances, he can be trained to work at a factory machine. For this latter purpose close coöperation is necessary between physician, machinist, and the manager of the factory.

Fig. 157.—The Carnes artificial arm for the patient shown in Fig. 156.

Even when the entire arm has been disarticulated at the shoulder, a prosthesis can be applied with distinct benefit to the wearer. The artificial limb is controlled by the swing of the body, and enables the amputated to wield a broom, rake, etc. In these cases as well as in the higher amputations of the upper arm, the simple device shown in Fig. 164 has proven most serviceable. It consists of a round piece of wood resembling a spool, with a strap passing over it, fixed on the one side, ending in a catch on the other side, similar to that frequently used on ice or roller skates. The handle of the implement, spade, rake, wheelbarrow, etc., is fastened between the strap and the spool. There is sufficient fixation for all purposes, and at the same time enough latitude of motion to allow the wheelbarrow to be tipped, or the angle of the rake to be changed.

The Life of the Amputated.—Care of the stump and the application of the artificial limb constitute only two of the numerous problems which confront the physician in the care of the amputated. Particularly in the case of those who have lost a hand, the entire mode of life must be modified. Nothing can be done as it was previously done, and the simplest actions of everyday life must be relearned. First, the amputated must be taught to dress and undress with one hand. The question of washing gave me considerable trouble, since the amputated were unable properly to cleanse the fingers and hand of the sound arm. The simple device pictured in Fig. 167, a board fitting over the wash basin, to which scrubbing brush and nail file could be attached, solved the problem.

Lacing the shoes was another difficulty. Here I was aided by one of the amputated boys of the crippled children's hospital with which I was associated. He used a single, long lace, on the same principle as that employed in lacing a whipstock. One end was firmly attached to the lowermost eyelet of the shoe. The other end was then passed through the eyelets in the usual way, and then, allowing a loop long enough to be zigzagged between the hooks, was passed beneath the lacings back to the starting point. The loop was then caught zigzag from one hook to another, and the slack taken in by a vigorous pull on the end of the lacing projecting beyond the first eyelet.

In eating, the only difficulty was occasioned by the need of cutting and using the fork with the same hand. For this purpose a number of devices are on the market.
These consist of a knife-blade terminating in a fork-like projection (see Fig. 168). The blade is convex, so that the food is easily cut by a rocking movement. When the right hand has been lost, the patient must at once be taught to write with the left. This can be learned by the average man in about 3 weeks. It is advantageous to stimulate the patients by the competition afforded by class-room work.

Fig. 158.—Two diagrams illustrating the principle of the Carnes arm for a double amputation, with one arm amputated between the elbow and the shoulder, and the other arm disarticulated at the shoulder. I. View from in front. II. View from behind. The functions of the different straps are as follows: Strap No. 1. Bends or operates the elbow. This strap, coming from the back, passing over pulleys in the upper arm, and being anchored to the forearm, enables the wearer to get the elbow movement simply by moving his stump forward a little. Strap No. 2. Locks the rotating wrist. To unlock the wrist, the elbow is bent up to the extreme. When the wrist is not locked, it turns or rotates as the elbow is bent, but can be locked in any position desired, by first bending the elbow until the wrist and hand are rotated to the position desired, then holding it in this position while pulling on Strap No. 2 to lock it there. Strap No. 3. Opens and closes the fingers. On the amputation above the elbow, by throwing the shoulder down, a sufficient tension is had on this strap to open or close the fingers; then, by raising the shoulder, the cord is pulled back into the hand, allowing the mechanism to reverse, and then, by again pushing the shoulder down, the opposite movement of opening and closing the hand is obtained. Straps Nos. 4 and 5. Open and close the hand on the shoulder or disarticulated amputation. Strap No. 6. Simply an elastic support to hold the arm in place. For a single amputation on either side, the harness will be as shown, excepting that on the opposite side it would simply be looped up under the good arm. Straps No. 2 are the only ones which come across the chest, and these are not tight, it being necessary to throw the arm out to the side in order to lock the wrist. For the diagrams and explanatory text I am indebted to the Carnes Artificial Limb Co., Kansas City, Mo.

Fig. 159.—Patient of Riedinger with very short upper arm stump.

In hundreds of ways, the physician can help the amputated to readjust themselves to the new mode of life; and in many instances the amputated will teach the physician and his comrades new methods of usefulness. This training in proficiency, combined with the wholesome cheeriness of physician and instructor, does more than anything else to overcome the depression under which most of the patients are laboring, and fits them for the next important step in rendering them useful citizens of their community—specialized training of the stump, for the particular purposes for which it is to be used.

Fig. 160.—The same patient as in Fig. 159, equipped with a Riedinger prosthesis. Note the broad circular pad which closely surrounds the shoulder and serves as support for the leather socket which is attached to it by a strong joint, permitting motion in all directions.

Fig. 161.—The mechanic's tools employed by the patient shown in Fig. 160. These are inserted into the slot at the lower end of the forearm piece and fastened firmly in place by a turn of the screw.

Fig. 162.—The same patient as in Fig. 160, illustrating the method of using hammer and chisel.
Fig. 163.—The same patient as in Fig. 160 at work at the turning-lathe.

For this of course the men must be divided into groups depending upon the type of amputation and the nature of the work. In helping the patient to decide what work he is fitted for, the physician should have as consultant a staff of technical assistants versed in the details of all the handicrafts. Experience has shown that amputations of the forearm and of the upper arm, if not more than 2 or 3 inches above the elbow, do not debar a man from becoming a carpenter, farmer, or some type of mechanician. Of course, those possessing an elbow-joint have a great advantage over those amputated above the elbow. When the amputation has occurred near the shoulder-joint, it is foolish to attempt training a man for these branches. He should then be taught some handicraft allied to his previous occupation. Thus, the carpenter should be taught sufficient mechanical drawing and building construction to enable him to act as foreman; or, if he is not sufficiently well educated to assume this responsibility, he can be taught to be a furniture polisher. In this occupation, practically all the work is done with a sweeping motion of one arm; the other hand is used simply to hold the varnish or other polishing substance—a function which is quite as well filled by a small tray placed near the worker.

Fig. 164.—This patient suffered an amputation of the right arm $2\frac{1}{2}$ inches below the shoulder. Equipped with the Biesalski artificial arm shown in Fig. 153 he was able to do all forms of light gardening. Note the simple contrivance at the wrist consisting of a spool over which a strap passes. This device gives a firm grip and at the same time allows sufficient play to dump the wheelbarrow.

Fig. 165.—The same patient as in Fig. 164. He is shown at work with the spade.

Fig. 166.—The same patient as in Fig. 165. The spool device at the wrist enables him to use the rake effectively.

Fig. 167.—A simple toilet arrangement for the one-armed soldier. To permit proper cleansing of the hand, a scrubbing brush and a nail file are fastened firmly to the board which rests on the basin.

The artificial limb can be used to advantage in many instances, but for many men the stump is the best form of prosthesis. This applies particularly to a moderately long forearm stump. This can be used for filing, almost as effectively as the normal hand (see Fig. 169); for hammering, the handle is gripped in the elbow between the upper arm and the stump, as shown in Fig. 170. At the turning lathe, the stump can easily be trained to turn the adjusting swivel.

Fig. 168.—A combination of knife and fork for the one-armed.

In learning to use the stump, it is of great assistance to have an amputated man himself act as instructor. It is remarkable to what extent the delicacy of the skin improves. In one instance, in which I tested the fineness of perception by the two-point test, used by the physiologists in determining the number of tactile corpuscles in the cutis, I found almost the same degree of sensitiveness of the forearm stump as that normally found over the finger tips.

Fig. 169.—The bracemaker's apprentice pictured in Fig. 170. Here he is shown in the act of filing. The stump had become so hardened that he was able to use it exactly as the ordinary mechanic uses his left hand.

Those suffering amputation of a lower limb do not require the same specialized training. All they need is the proper stump treatment and the application of a well-fitting prosthesis to render them fit to return to their community.
With rare exceptions, they are able to return to their previous occupations. The exceptions are the cases of double amputation or amputation near the hip in men who previously did hard manual labor. They must be taught a trade which allows them to be seated most of the time.

Far and away the most difficult problem presented in the care of the amputated is that of those who have lost both hands. Provided the stumps are sufficiently long to allow them to be approximated, the loss is not as tragic as it at first appears.

Fig. 170.—The one-armed bracemaker's apprentice already pictured in Fig. 169. This illustration shows his method of gripping the hammer between the stump, upper arm and chest.

In Fig. 171 is shown one of the teachers of the crippled children's home already referred to. He is seen in the act of buttoning his collar by means of a button hook held between the two stumps. This man had learned to dress himself alone, to eat with delicacy and grace, to write a perfect hand with more than the normal speed, and had passed the examinations qualifying him as a licensed teacher. He did all this without use of artificial limbs. I also had opportunity of meeting several other men with double amputations who used their stumps as skillfully as he.

Fig. 171.—A teacher, both of whose hands had been amputated when he was six years old. He had learned to be absolutely independent and had passed his examination entitling him to a teacher's license. Without artificial limbs he could dress himself (the illustration shows him in the act of buttoning his collar), shave, eat with grace and assurance, write an unusually legible hand with more than normal rapidity, travel long distances alone, carry a suit case and pay his fares, just as the normal individual would. All this was done by careful education of the stump, which in his case had acquired almost the same sensitiveness as the tips of the fingers.

When, however, the stumps are so short that they cannot be brought together, then an artificial limb must be applied—either the Carnes arm (see Fig. 157) or Judge Corley's—since with sufficient training it enables the wearer to become a reasonably independent being, whereas without it he is absolutely helpless. The double-amputated require a school all to themselves, especially devised clothing with snap-hooks instead of buttons, trousers so devised as to fit directly to a vest (see Fig. 172), etc. In no instance, however, should the individual be allowed to feel that his case is hopeless. Even in one pathetic instance in which, in addition to the loss of both hands, the patient had been blinded by the explosion, much was accomplished, and he left the hospital ready to assume a post in the office of a large business establishment.

Fig. 172.—A patient with double amputation showing the vest and trousers designed by Spitzy and a type of artificial arm attachable directly to the clothing. Note the ring hanging down from the slit of the trousers. By pulling this upward with the hook of the artificial arm, the trousers are closed by means of a thin chain with interlocking teeth.
Bad directions in cryptographic hash functions

Daniel J. Bernstein\textsuperscript{1,2}, Andreas Hülsing\textsuperscript{2}, Tanja Lange\textsuperscript{2}, and Ruben Niederhagen\textsuperscript{2}

\textsuperscript{1} Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607–7045, USA

\textsuperscript{2} Department of Mathematics and Computer Science, Technische Universiteit Eindhoven, P.O. Box 513, 5600 MB Eindhoven, The Netherlands

\textbf{Abstract.} A 25-gigabyte “point obfuscation” challenge “using security parameter 60” was announced at the Crypto 2014 rump session; “point obfuscation” is another name for password hashing. This paper shows that the particular matrix-multiplication hash function used in the challenge is much less secure than previous password-hashing functions are believed to be. This paper’s attack algorithm broke the challenge in just 19 minutes using a cluster of 21 PCs.

\textbf{Keywords:} symmetric cryptography, hash functions, password hashing, point obfuscation, matrix multiplication, meet-in-the-middle attacks, meet-in-many-middles attacks

\section{Introduction}

\textit{Under normal circumstances, the system protected the passwords so that they could be accessed only by privileged users and operating system utilities. But through accident, programming error, or deliberate act, the contents of the password file could occasionally become available to unprivileged users. . . . For example, if the password file is saved on backup tapes, then those backups must be kept in a physically secure place. If a backup tape is stolen, then everybody’s password needs to be changed. Unix avoids this problem by not keeping actual passwords anywhere on the system.} —“Practical UNIX \& Internet Security” [23, p. 84], 2003

This work was supported by the National Science Foundation under grant 1018836, by the Netherlands Organisation for Scientific Research (NWO) under grant 639.073.005, and by the European Commission through the ICT program under contract INFSO-ICT-284833 (PUFFIN). Permanent ID of this document: 7c4f480d7f090d69c58b96437b6011b1. Date: 2015.02.23.

Consider a server that knows a secret password 11000101100100. The server could check an input password against this secret password using the following `checkpassword` algorithm (expressed in the Python language):

```python
def checkpassword(input):
    return int(input == "11000101100100")
```

But it is much better for the server to use the following `checkpassword_hashed` algorithm (see Appendix A for the definition of `sha256hex`):

```python
def checkpassword_hashed(input):
    return int(sha256hex(input) == (
        "ba0ab099c882de48c4156fc19c55762e"
        "83119f44b1d8401dba3745946a403a4f"
    ))
```

It is easy for the server to write down this `checkpassword_hashed` algorithm in the first place: apply SHA-256 to the secret password to obtain the string `ba0...a4f`, and then insert that string into a standard `checkpassword_hashed` template. (Real servers normally store hashed passwords in a separate database, but in this paper we are not concerned with superficial distinctions between code and data.)

There is no reason to believe that these two algorithms compute identical functions. Presumably SHA-256 has a second (and third and so on) preimage of SHA-256(11000101100100), i.e., a string for which `checkpassword_hashed` returns 1 while `checkpassword` returns 0.
However, finding any such string would be a huge advance in SHA-256 cryptanalysis. The `checkpassword_hashed` algorithm outputs 1 for input 11000101100100, just like `checkpassword`, and outputs 0 for all other inputs that have been tried, just like `checkpassword`.

The core advantage of `checkpassword_hashed` over `checkpassword` is that it is obfuscated. If the `checkpassword` algorithm is leaked to an attacker then the attacker immediately sees the secret password and seizes control of all resources protected by that password. If `checkpassword_hashed` is leaked to an attacker then the attacker still does not see the secret password without solving a SHA-256 preimage problem: the loss of confidentiality does not immediately create a loss of integrity.

Obfuscation is a broad concept. There are many aspects of programs that one might wish to obfuscate and that are not obfuscated in `checkpassword_hashed`: for example, one can immediately see that the program is carrying out a SHA-256 computation, and that (unless SHA-256 is weak) there are very few short inputs for which the program prints 1. In the terminology of some recent papers (see Section 2), what is obfuscated here is the key in a particular family of “keyed functions”, but not the choice of family. Further comments on general obfuscation appear below. We emphasize password obfuscation because it is an important special case: a widely deployed application using widely studied symmetric techniques.

### 1.1. State-of-the-art password hashing

Of course, some preimage problems can be efficiently solved. If the attacker knows (or correctly guesses) that the secret password is a string of 14 digits, each 0 or 1, then the attacker can simply try hashing all $2^{14}$ possibilities for that string. Even worse, if the attacker sees many `checkpassword_hashed` algorithms from many users’ secret passwords, the attacker can efficiently compare all of them to this database of $2^{14}$ hashes: the cost of multiple-target preimage attacks is essentially linear in the sum of the number of targets and the number of guesses, rather than the product.

There are three standard responses to these problems. First, to eliminate the multiple-target problem, the server randomizes the hashing. For example, the server might store the same secret password 11000101100100 as the following `checkpassword_hashed_salted` algorithm, where prefix was chosen randomly by the server for storing this password:

```python
def checkpassword_hashed_salted(input):
    prefix = "b1884428881e20fe61c7629a0f71fcda"
    return int(sha256hex(prefix + input) == (
        "5f5616075f77375f1e36e2b707e55744"
        "91a308c39653afe689b7a958455e55d2"
    ))
```

The attacker sees the prefix and can still find this password using at most $2^{14}$ guesses, but the attacker can no longer share work across multiple targets. (This benefit does not rely on randomness: any non-repeating prefix is adequate. For example, the prefix can be chosen as a counter; on the other hand, this requires maintaining state and raises questions of what information is leaked by the counter.)

Second, the server chooses a hash function that is much more expensive than SHA-256, multiplying the server’s cost by some factor $F$ but also multiplying the attack cost by almost exactly $F$, if the hash function is designed well.
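To make the multiple-target arithmetic of the first response concrete, here is a small sketch of our own (not code from this paper; `attack_unsalted` and `attack_salted` are hypothetical names, and `sha256hex` is the obvious hashlib-based guess at the Appendix A subroutine). Hashing all $2^{14}$ candidate passwords once yields a table that answers each additional unsalted target with a single lookup, whereas a per-user prefix forces all $2^{14}$ hashes to be recomputed for every target:

```python
import hashlib, itertools

def sha256hex(s):
    # Hex-encoded SHA-256 of the ASCII input (our guess at Appendix A).
    return hashlib.sha256(s.encode('ascii')).hexdigest()

# One-time work: hash all 2**14 candidate passwords.
table = {sha256hex("".join(bits)): "".join(bits)
         for bits in itertools.product("01", repeat=14)}

def attack_unsalted(leaked_hash):
    # Cost per additional target: one table lookup.
    return table.get(leaked_hash)

def attack_salted(prefix, leaked_hash):
    # The precomputed table is useless here: all 2**14 guesses
    # must be rehashed with this user's prefix.
    for bits in itertools.product("01", repeat=14):
        guess = "".join(bits)
        if sha256hex(prefix + guess) == leaked_hash:
            return guess
```

The total cost of the unsalted attack is thus roughly the number of guesses plus the number of targets, as stated above, rather than the product.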
The ongoing “Password Hashing Competition” [9] has received dozens of submissions of “memory-hard” hash functions that are designed to be expensive to compute even for an attacker manufacturing special-purpose chips to attack those particular functions. Third, users are encouraged to choose passwords from a much larger space. A password having only 14 bits of entropy is highly substandard: for example, the recent paper [14] reports techniques for users to memorize passwords with four times as much entropy.

### 1.2. Matrix-multiplication password hashing: the “point obfuscation” challenge

A “point obfuscation” challenge was announced by Apon, Huang, Katz, and Malozemoff [7] at the Crypto 2014 rump session. “Point obfuscation” is the same concept as password hashing: see, e.g., [33] (a hashed password is a “provably secure obfuscation of a ‘point function’ under the random oracle model”). The challenge consists of “an obfuscated 14-bit point function on Dropbox”: a 25-gigabyte program with the promise that the program returns 1 for one secret 14-bit input and 0 for all other 14-bit inputs. The goal of the challenge is to determine the secret 14-bit input: “learn the point and you win!” An accompanying October 2014 paper [5] described the challenge as having “security parameter 60”, where “security parameter $\lambda$ is designed to bound the probability of successful attacks by $2^{-\lambda}$”.

We tried the 25-gigabyte program on a PC with the following relevant resources: an 8-core 125-watt AMD FX-8350 “Piledriver” CPU (about $200), 32 gigabytes of RAM (about $400), and a 2-terabyte hard drive (about $100). The program took slightly over 4 hours for a single input. A brute-force attack using this program would obviously have been feasible but would have taken over 65536 hours worst-case and over 32768 hours on average, i.e., an average of nearly 4 years on the same PC, consuming 500 watt-years of electricity.

### 1.3. Attacking matrix-multiplication password hashing

In this paper we explain how we solved the same challenge in just 19 minutes using a cluster of 21 such PCs. The solution is 11000101100100; we reused this string above as our example of a secret password. Of course, knowing this solution allowed us to compress the original program to a much faster `checkpassword` algorithm. The time for our attack algorithm against a worst-case input point would have been just 34 minutes, about 5000 times faster than the original brute-force attack, using under 0.2 watt-years of electricity. Our current software is slightly faster: it uses just 29.5 minutes on 22 PCs, or 35.7 minutes on 16 PCs.

More generally, for an $n$-bit point function obfuscated in the same way, our attack algorithm is asymptotically $n^4/2$ times faster than a brute-force search using the original program. This quartic speedup combines four linear speedups explained in this paper, taking advantage of the matrix-multiplication structure of the obfuscated program. Two of the four speedups (Section 3) are applicable to individual inputs, and could have been integrated into the original program, preserving the ratio between attack time and evaluation time; but the other two speedups (Section 4) share work between separate inputs, making the attack much faster than a simple brute-force attack. See Section 1.6 for generalizations to other functions.

### 1.4. Matrix-multiplication password hashing vs. state-of-the-art password hashing

It is well known that a $2^n$-guess preimage attack against a hash function, cipher, etc.
does not cost exactly $2^n$ times as much as a single function evaluation: there are always ways to merge small amounts of initial work across multiple inputs, and to skip small amounts of final work. See, for example, [34] (“Reduce the DES encryption from 16 rounds to the equivalent of $\approx 9.5$ rounds, by shortcircuit evaluation and early aborts”), [29] (“biclique” attacks against various hash functions), and [13] (“biclique” attacks against AES). However, one expects these speedups to become less and less noticeable for functions that have more and more rounds. For any state-of-the-art cost-$C$ password-hashing function, the cost of a $2^n$-guess preimage attack is very close to $2^n C$.

The matrix-multiplication function is much weaker: the cost of our attacks is far below $2^n$ times the cost of the best method known to evaluate the function. Even worse, the matrix-multiplication approach has severe performance problems that end up limiting the number $n$ of input bits. The “obfuscated point function” includes $2n$ matrices, each matrix having $n+2$ rows and $n+2$ columns, each entry having approximately $4((\lambda + 1)(n + 4) + 2)^2 \log_2 \lambda$ bits; recall that $\lambda$ is the target “security parameter”. If $\lambda$ is just 60 and $n$ is above 36 then a single obfuscated password does not fit on a 2-terabyte hard drive, never mind the time and memory required to print and evaluate the function. Earlier password-hashing functions handle a practically unlimited number of input bits with negligible slowdowns; fit obfuscated passwords into far fewer bits (a small constant times the target security level); allow the user far more flexibility to select the amount of time and memory used to check a password; and do not have the worrisome matrix structure exploited by our attacks.

1.5. Context: obfuscating other functions.

Why, given the extensive hashing literature, would anyone introduce a new password-obfuscation method with unnecessary mathematical structure, obvious performance problems, and no obvious advantages? To answer this question, we now explain the context that led to the Apon–Huang–Katz–Malozemoff point-obfuscation challenge; we start by emphasizing that their goal was not to introduce a new point-obfuscation method.

Point functions are not the only functions that cryptographers obfuscate. Consider, for example, the following fast algorithm to compute the $pq$th power of an input mod $pq$, where $p$ and $q$ are particular prime numbers shown in the algorithm:

```python
def rsa_encrypt_unobfuscated(x):
    p = 37975227936943673922808872755445627854565536638199
    q = 40094690950920881030683735292761468389214899724061
    pinv = 23636949109494599360568667562368545559934804514793
    qinv = 15587761943858646484534622935500804086684608227153
    return (qinv*q*pow(x,q,p) + pinv*p*pow(x,p,q)) % (p*q)
```

The following algorithm is not as fast but uses only the product $pq$:

```python
def rsa_encrypt(x):
    pq = int("15226050279225333605356183781326374297180681149613"
             "80688657908494580122963258952897654000350692006139")
    return pow(x,pq,pq)
```

These algorithms compute exactly the same function $x \mapsto x^{pq} \mod pq$, but the primes $p$ and $q$ are exposed in `rsa_encrypt_unobfuscated` while they are obfuscated in `rsa_encrypt`. This obfuscation is exactly the reason that `rsa_encrypt` is safe to publish. In other words, RSA public-key encryption is an obfuscation of a secret-key encryption scheme. (Note that this size of $pq$ is too small for serious security.
The particular $pq$ shown here was introduced many years ago as the “RSA-100” challenge and was factored in 1991. See [3]. One should take larger primes $p$ and $q$.) In a FOCS 2013 paper [25], Garg, Gentry, Halevi, Raykova, Sahai, and Waters proposed an obfuscation method that takes any fast algorithm $A$ as input and “efficiently” produces an obfuscated algorithm $\text{Obf}(A)$. The security goal for $\text{Obf}$ is to be an “indistinguishability obfuscator”: this means that $\text{Obf}(A)$ is indistinguishable from $\text{Obf}(A')$ if $A$ and $A'$ are fast algorithms computing the same function. For example, if $\text{Obf}$ is an indistinguishability obfuscator, and if an attacker can extract $p$ and $q$ from $\text{Obf}(\text{rsa\_encrypt\_unobfuscated})$, then the attacker can also extract $p$ and $q$ from $\text{Obf}(\text{rsa\_encrypt})$, since the two obfuscations are indistinguishable; so the attacker can “efficiently” extract $p$ and $q$ from $pq$, by first computing $\text{Obf}(\text{rsa\_encrypt})$. Contrapositive: if $\text{Obf}$ is an indistinguishability obfuscator and the attacker cannot “efficiently” extract $p$ and $q$ from $pq$, then the attacker cannot extract $p$ and $q$ from $\text{Obf}(\text{rsa\_encrypt\_unobfuscated})$; i.e., $\text{Obf}(\text{rsa\_encrypt\_unobfuscated})$ hides $p$ and $q$ at least as effectively as $\text{rsa\_encrypt}$ does. Another example, returning to symmetric cryptography: It is reasonable to assume that $\text{checkpassword}$ and $\text{checkpassword\_hashed}$ compute the same function if the input length is restricted to, e.g., 200 bits. This assumption, together with the assumption that $\text{Obf}$ is an indistinguishability obfuscator, implies that $\text{Obf}(\text{checkpassword})$ hides a $\leq 200$-bit secret password at least as effectively as $\text{checkpassword\_hashed}$ does. These examples illustrate the generality of indistinguishability obfuscation. In the words of Goldwasser and Rothblum [27], efficient indistinguishability obfuscation is “best-possible obfuscation”, hiding everything that ad-hoc techniques would be able to hide. There are, however, two critical caveats. First, it is not at all clear that the $\text{Obf}$ proposal from [25] (or any newer proposal) will survive cryptanalysis. There are actually two alternative proposals in [25]: the first relies on multilinear maps [24] from Garg, Gentry, and Halevi, and the second relies on multilinear maps [22] from Coron, Lepoint, and Tibouchi. In a paper [19] posted early November 2014 (a week after we announced our solution to the “point obfuscation” challenge), Cheon, Han, Lee, Ryu, and Stehlé announced a complete break of the main security assumption in [22], undermining a remarkable number of papers built on top of [22]. The attack from [19] does not seem to break the application of [22] to point obfuscation (since “encodings of zero” are not provided in this context), but it illustrates the importance of leaving adequate time for cryptanalysis. A followup work by Gentry, Halevi, Maji, and Sahai [26] extends the attack from [19] to some settings where no “encodings of zero” below the “maximal level” are available, although the authors of [26] state that “so far we do not have a working attack on current obfuscation candidates”. Second, the literature already contains much simpler, much faster, much more thoroughly studied techniques for important examples of obfuscation, such as password hashing and public-key encryption. 
Even if the new proposals in fact provide indistinguishability obfuscation for more general functions, there is no reason to believe that they can provide competitive security and performance for functions where the previous techniques apply. We would expect the generality of these proposals to damage the security-performance curve in a broad range of real applications covered by the previous techniques, implying that these proposals should be used only for applications outside that range.

The goal of Apon, Huang, Katz, and Malozemoff was to investigate “the practicality of cryptographic program obfuscation”. Their obfuscator is not limited to point functions; it takes more general circuits as input. However, after performance evaluation, they concluded that “program obfuscation is still far from being deployable, with the most complex functionality we are able to obfuscate being a 16-bit point function”; see [5, page 2]. They chose a 14-bit point function as a challenge.

1.6. Attacking matrix-multiplication-based obfuscation of any function.

The real-world importance of password hashing justifies focusing on point functions, but we have also adapted our attack algorithm to arbitrary $n$-bit-to-1-bit functions. Specifically, we have considered the method explained in [5] to obfuscate an arbitrary $n$-bit-to-1-bit function, and adapted our attack algorithm to this level of generality. For the general case, with $u$ pairs of $w \times w$ matrices using $n$ input bits, we save a factor of roughly $uw/2$ in evaluating each input, and a further factor of approximately $n/\log_2 w$ in evaluating all inputs. The $n/\log_2 w$ increases to $n/2$ for the standard input-bit order described in [5], but for an arbitrary input-bit order our attack is still considerably faster than a simple brute-force attack. See Section 8.

We comment that standard cryptographic hashing can be used to obfuscate general functions. We suggest the following trivial obfuscation technique as a baseline for future obfuscation challenges: precompute a table of hashes of the inputs that produce 1; add fake random hashes to pad the table to size $2^n$ (or a smaller size $T$, if it is acceptable to reveal that at most $T$ inputs produce 1); and sort the table for fast lookups. This does not take polynomial time as $n \to \infty$ (for $T = 2^n$), but it nevertheless appears to be smaller, faster, and stronger than all of the recently proposed matrix-multiplication-based obfuscation techniques for every feasible value of $n$. A code sketch of this baseline appears below.

2 Review of the obfuscation scheme

Since the initial Obf proposal by Garg, Gentry, Halevi, Raykova, Sahai, and Waters [25], much research has been devoted to finding applications and to improving the proposed scheme. The challenge from [5] which we broke uses the relaxed-matrix-branching-program method by Ananth, Gupta, Ishai, and Sahai [4] to generate a size-reduced obfuscated program and combines it with the integer-based multilinear map (CLT) due to Coron, Lepoint, and Tibouchi [22]. As mentioned in Section 1, the recent CLT attack by Cheon, Han, Lee, Ryu, and Stehlé [19] relies on “encodings of zero” and therefore does not apply to this point-obfuscation scheme. Our attack will also work for other matrix-multiplication-type obfuscation schemes with a similar structure, and in particular we see no obstacle to applying the same attack strategy with the Garg–Gentry–Halevi [24] multilinear map in place of CLT.
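Returning briefly to the trivial baseline suggested at the end of Section 1.6: it fits in a few lines of code. The sketch below is our own illustration, not code from [5] or from this paper's appendices; `obfuscate` and `evaluate` are hypothetical names, and a real instantiation would use a salted, deliberately expensive hash as in Section 1.1.

```python
import bisect, hashlib, os

def sha256(x):
    return hashlib.sha256(x).digest()

def obfuscate(accepting_inputs, T):
    # Hash the inputs that produce 1, pad the table with fake random
    # "hashes" to size T, and sort it for fast lookups.
    table = {sha256(x) for x in accepting_inputs}
    while len(table) < T:
        table.add(os.urandom(32))
    return sorted(table)

def evaluate(table, x):
    # Binary search: output 1 exactly when the hash of x is in the table.
    h = sha256(x)
    i = bisect.bisect_left(table, h)
    return int(i < len(table) and table[i] == h)
```

Evaluation costs one hash plus a binary search, and for $n = 14$ and $T = 2^{14}$ the table occupies $2^{14} \cdot 32$ bytes, i.e., half a megabyte, compared to 25 gigabytes for the challenge program.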
Most of the Obf literature does not state concrete parameters and does not present computer-verified examples. The first implementations, first examples, and first challenge were from Apon, Huang, Katz, and Malozemoff in [5], [6], and [7], providing an important foundation for quantifying and verifying attack performance. The challenge given in [5] is an obfuscation of a point function, so we first give a self-contained description of these obfuscated point-function programs from the attacker’s perspective; we then comment briefly on more general functions. For details on how the matrices below are constructed, we refer the reader to [4], [22], and of course [5]; but these details are not relevant to our attack.

2.1. Obfuscated point functions.

A point function is a function on \(\{0, 1\}^n\) that returns 1 for exactly one secret vector of length \(n\) and 0 otherwise. The obfuscation scheme starts with this secret vector and an additional security parameter \(\lambda\) related to the security of the multilinear map. The obfuscated version of the point function is given by a list of \(2n\) public \((n + 2) \times (n + 2)\) matrices \(B_{b,k}\) for \(1 \leq b \leq n\) and \(k \in \{0, 1\}\) with integer entries; a row vector \(s\) of length \(n + 2\) with integer entries; a column vector \(t\) of length \(n + 2\) with integer entries; an integer \(p_{zt}\) (a “zero test” value, not to be confused with an “encoding of zero”); and a positive integer \(q\). All of the entries and \(p_{zt}\) are between 0 and \(q - 1\) and appear random. The number of bits of \(q\) has an essentially linear impact upon our attack cost; [5] chooses the number of bits of \(q\) to be approximately \(4((\lambda + 1)(n + 4) + 2)^2 \log_2 \lambda\) for multilinear-map security reasons.

The obfuscated program works as follows:

- Take as input an \(n\)-bit vector \(x = (x[1], x[2], \ldots, x[n])\).
- Compute the integer matrix \(A = B_{1,x[1]} B_{2,x[2]} \cdots B_{n,x[n]}\) by successive matrix multiplications.
- Compute the integer \(y(x) = sAt\) by a vector-matrix multiplication and a dot product.
- Compute \(y(x)p_{zt}\) and reduce mod \(q\) to the range \([-(q - 1)/2, (q - 1)/2]\).
- Multiply the remainder by \(2^{2\lambda+11}\), divide by \(q\), and round to the nearest integer. This result is by definition the matrix-multiplication hash of \(x\).
- Output 0 if this hash is 0; output 1 otherwise.

We have confirmed these steps against the software in [6].

The matrix-multiplication hash here is reminiscent of “Fast VSH” from [20]. Fast VSH hashes a block of input as follows: use input bits to select precomputed primes from a table, multiply those primes, and reduce mod something. The matrix-multiplication hash hashes a block of input as follows: use input bits to select precomputed matrices from a table, multiply those matrices, and reduce mod something. The matrices are secretly chosen with additional structure, but we do not use that structure in our attack.

2.2. Initial security analysis.

A straightforward brute-force attack determines the secret vector by computing the matrix-multiplication hash of all $2^n$ vectors $x$. Of course, the computation stops once a correct hash is found. Unfortunately [5] and [7] do not include timings for $\lambda = 60$ and $n = 14$, so we timed the software from [6] on one of our PCs and saw that each evaluation took 245 minutes, i.e., $2^{45.74}$ cycles at 4GHz. As the code automatically used all 8 cores of the CPU, this leads to a total of $2^{48.74}$ cycles per evaluation.
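Both the parameter sizes and the evaluation steps of Section 2.1 are easy to make concrete. The following sketch is our own illustration, not the software from [6]: `obfuscation_bytes` checks the bit-length formula quoted above against the sizes reported in Section 1, and `evaluate_obfuscated` transcribes the evaluation steps directly, with `B[b][k]` holding the matrix $B_{b+1,k}$ as a list of rows and with all names hypothetical.

```python
from math import log2

def obfuscation_bytes(lam, n):
    # Bits per entry, approximately 4*((lam+1)*(n+4)+2)**2*log2(lam);
    # the 2n matrices of (n+2)**2 entries dominate s, t, and p_zt.
    entry_bits = 4 * ((lam + 1) * (n + 4) + 2) ** 2 * log2(lam)
    return 2 * n * (n + 2) ** 2 * entry_bits / 8

print(obfuscation_bytes(60, 14) / 10**9)   # about 25.6: the 25-gigabyte challenge
print(obfuscation_bytes(60, 37) / 10**12)  # about 2.1 terabytes: n = 37 overflows the drive

def matmul(A, B):
    # Plain integer matrix product, as in the original program;
    # Section 3 replaces this with reductions mod q and vector products.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def evaluate_obfuscated(x, B, s, t, pzt, q, lam):
    n = len(x)
    A = B[0][x[0]]
    for b in range(1, n):
        A = matmul(A, B[b][x[b]])              # A = B_{1,x[1]} ... B_{n,x[n]}
    sA = [sum(si * aij for si, aij in zip(s, col)) for col in zip(*A)]
    y = sum(a * b for a, b in zip(sA, t))      # y(x) = s A t
    r = (y * pzt) % q
    if r > (q - 1) // 2:                       # reduce to [-(q-1)/2, (q-1)/2]
        r -= q
    num = r * 2 ** (2 * lam + 11)
    h = (2 * num + q) // (2 * q)               # round(num / q) to nearest
    return 0 if h == 0 else 1                  # 0 iff the hash is 0
```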
A brute-force computation using this software would take $2^{14} \cdot 2^{48.74} = 2^{62.74}$ cycles worst-case, and would take more than $2^{60}$ cycles for 85% of all inputs. For comparison, recall that the CLT parameters were designed to just barely provide $2^\lambda = 2^{60}$ security, although the time scale for the $2^{60}$ here is not clear. If the time scale of the security parameter is close to one cycle then the cost of these two attacks is balanced.

In their Crypto 2014 rump-session announcement [8], the authors declared this brute-force attack to be infeasible: “The great part is, it’s only 14 bits, so you think you can try all 2 to the 14 points, but it takes so long to evaluate that it’s not feasible.” The authors concluded in [5, Section 5] that they were “able to obfuscate some ‘meaningful’ programs” and that “it is important to note that the fact that we can produce any ‘useful’ obfuscations at all is surprising”.

We agree that a 500-watt-year computation is a nonnegligible investment of computer time (although we would not characterize it as “infeasible”). However, in Section 3 we show how to make evaluation two orders of magnitude faster, bringing a brute-force attack within reach of a small computer cluster in a matter of days. Furthermore, in Section 4 we present a meet-in-the-middle attack that is another two orders of magnitude faster.

2.3. Obfuscation of general functions and keyed functions.

The obfuscation scheme in [4] transforms any function into a sequence of matrix multiplications. At every multiplication the matrix is selected based on a bit of the input $x$ but usually the bits of $x$ are used multiple times. For general circuits of length $\ell$ the paper constructs an oblivious relaxed matrix branching program of length $n\ell$ which cycles $\ell$ times through the $n$ entries of $x$ in sequence to select from $2n\ell$ matrices. In that case most of the matrices are obfuscated identity matrices but the regular access pattern stops the attacker from learning anything about the function.

Sometimes (as in the password-hashing example) the structure of the circuit is already public, and all that one wants to obfuscate is a secret key. In other words, the circuit computes $f_z(x) = \phi(z, x)$ for some secret key $z$, where $\phi$ is a publicly known branching program; the obfuscation needs to protect only the secret key $z$, and does not need to hide the function $\phi$. This is called “obfuscation of keyed functions” in [4]. For this class of functions the length of the obfuscated program equals the length of the circuit for $\phi$; the bits of $x$ are used (and reused as often as necessary) in a public order determined by $\phi$. The designer can drive up the cost of brute-force attacks by including additional matrices as in the general case, but this also increases the obfuscation time, obfuscated-program size, and evaluation time.

3 Faster algorithms for one input

This section describes two speedups to the obfuscated programs described in Section 2. These speedups are important for constructive as well as destructive applications. Combining these two ideas reduced our time to evaluate the obfuscated point function for a single input from 245 minutes to under 5 minutes (4 minutes 51 seconds), both measured on the same 8-core CPU. The authors of [6] have recently included these speedups in their software, with credit to us.

3.1. Cost analysis for the original algorithm.
Schoolbook multiplication of the two \((n+2) \times (n+2)\) matrices \(B_{1,x[1]}\) and \(B_{2,x[2]}\) uses \((n+2)^3\) multiplications of matrix entries. Similar comments apply to all \(n-1\) matrix multiplications, for a total of \((n-1)(n+2)^3\) multiplications of matrix entries.

This quartic operation count understates the asymptotic complexity of the algorithm for two reasons, even when the security parameter \(\lambda\) is treated as a constant. The first reason is that the number of bits of \(q\) grows quadratically with \(n\). The second reason is that the entries in \(B_{1,x[1]} B_{2,x[2]}\) have about twice as many bits as the entries in the original matrices, the entries in \(B_{1,x[1]} B_{2,x[2]} B_{3,x[3]}\) have about three times as many bits, etc.

The paper [5] reports timings for point functions with \(n \in \{8, 12, 16\}\) for security parameter 52, and in particular reports microbenchmarks of the time taken for each of the matrix products, starting with the first; these microbenchmarks clearly show the slowdown from one product to the next, and the paper explains that “each multiplication increases the multilinearity level of the underlying graded encoding scheme and thus the size of the resulting encoding”.

We now account for the size of the matrix entries. Recall that state-of-the-art multiplication techniques (see, e.g., [11]) take time essentially linear in \(b\), i.e., \(b^{1+o(1)}\), to multiply \(b\)-bit integers. The original entries have size quadratic in \(n\), and the products quickly grow to size cubic in \(n\). More precisely, the final product \(A = B_{1,x[1]} \cdots B_{n,x[n]}\) has entries bounded by \((n+2)^{n-1}(q-1)^n\) and typically larger than \((q-1)^n\); similar bounds apply to intermediate products. More than \(n/2\) of the products have typical entries above \((q-1)^{n/2}\), so the multiplication time is dominated by integers having size cubic in \(n\). The total time to compute \(A\) is \(n^{7+o(1)}\) for constant \(\lambda\), equivalent to \(n^{5+o(1)}\) multiplications of integers on the scale of \(q\). This time dominates the total time for the algorithm.

3.2. Intermediate reductions mod \(q\). We do better by limiting the growth of the elements in the computation. The final result \(y(x)p_{zt}\) is in \(\mathbb{Z}/q\), the ring of integers mod \(q\), and is obtained by a sequence of multiplications and additions, so we are free to reduce mod \(q\) at any moment in the computation. Any of the initial integer multiplications has inputs at most \(q-1\); we allow the temporary values to grow to at most \((n+2)(q-1)^2\) by computing the sum of the products for one entry, and only then reducing mod \(q\). Thus any future multiplication also has its inputs at most \(q-1\). State-of-the-art division techniques take time within a constant factor of state-of-the-art multiplication techniques, so \((n + 2)^2\) reductions mod \(q\) take asymptotically negligible time compared to \((n + 2)^3\) multiplications.

The number of bits in each intermediate integer drops from cubic in \(n\) to quadratic in \(n\). More precisely, the asymptotic speedup factor is \(n/2\), since the original multiplication inputs had on average about \(n/2\) times as many bits as \(q\). We observe a smaller speedup factor for concrete values of \(n\), mainly because of the overhead for the extra divisions.
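This is exactly the pattern implemented by the `dot` subroutine of Appendix A. As a minimal sketch (ours), one entry of a vector-matrix product mod \(q\) is computed as:

```python
def dot_modq(L, R, q):
    # accumulate the w products as plain integers (the sum stays below
    # w * (q-1)**2) and reduce mod q only once at the end
    return sum(Li * Ri for Li, Ri in zip(L, R)) % q
```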
The total time to compute \(A \bmod q\) is \(n^{6+o(1)}\) for constant \(\lambda\), dominated by \((n - 1)(n + 2)^3 = n^4 + 5n^3 + 6n^2 - 4n - 8\) multiplications of integers bounded by \(q\), inside \((n - 1)(n + 2)^2 = n^3 + 3n^2 - 4\) dot products mod \(q\).

3.3. Matrix-vector multiplications. We further improve the computation by reordering the operations used to compute \(y(x)\): specifically, instead of computing \(A\), we compute
\[
y(x) = \left( \cdots \left( (sB_{1,x[1]})B_{2,x[2]} \right) \cdots B_{n,x[n]} \right) t.
\]
This sequence of operations requires \(n\) vector-matrix products and a final vector-vector multiplication. This combines straightforwardly with intermediate reductions mod \(q\) as above. The total time to compute \(y(x) \bmod q\) is \(n^{5+o(1)}\), dominated by \(n(n + 2) + 1 = (n + 1)^2\) dot products mod \(q\).

4 Faster algorithms for many inputs

A brute-force attack iterates through the whole input range and computes the evaluation for each possible input until the result of the evaluation is 1 and thus the correct input has been found. In terms of complexity, our improvements from Section 3 reduced the cost of brute-forcing an \(n\)-bit point function from time \(n^{7+o(1)}2^n\) to time \(n^{5+o(1)}2^n\) for constant \(\lambda\), dominated by \((n + 1)^22^n\) dot products mod \(q\). This algorithm is displayed in Figure 4.1.

```python
execfile('subroutines.py')
import itertools

def bruteforce():
    for x in itertools.product([0,1],repeat=n):
        L = s
        for b in range(n):
            L = [dot(L,[B[b][x[b]][i * w + j] for j in range(w)]) for i in range(w)]
        result = solution(x,dot(L,t))
        if result: return result

print bruteforce()
```

Fig. 4.1. Brute-force attack algorithm, separately evaluating $y(x) \bmod q$ for each $x$, including the speedups of Section 3: reducing intermediate matrix products mod $q$ (inside `dot`) and replacing matrix-matrix products with vector-matrix products. See Appendix A for definitions of subroutines.

This section presents further reductions to the complexity of the attack. These share computations between evaluations of many inputs, and have no matching speedups on the constructive side (which usually evaluates only a single input at a time and in any case cannot be expected to have related inputs).

4.2. Reusing intermediate products. Recall that Section 3 computes \(y(x) = sB_{1,x[1]} \cdots B_{n,x[n]}t \bmod q\) by multiplying from left to right: the last two steps are to multiply the vector \(sB_{1,x[1]} \cdots B_{n-1,x[n-1]}\) by \(B_{n,x[n]}\) and then by \(t\). Notice that this vector does not depend on the choice of \(x[n]\). By computing this vector, multiplying the vector by \(B_{n,0}\) and then by \(t\), and multiplying the same vector by \(B_{n,1}\) and then by \(t\), we obtain both \(y(x[1], \ldots, x[n-1], 0)\) and \(y(x[1], \ldots, x[n-1], 1)\). This saves almost half of the cost of the computation.

Similarly, we need only two computations of \(sB_{1,x[1]}\) for the two choices of \(x[1]\); four computations of \(sB_{1,x[1]}B_{2,x[2]}\) for the four choices of \((x[1], x[2])\); etc. Overall there are \(2 + 4 + 8 + \cdots + 2^n = 2^{n+1} - 2\) vector-matrix multiplications here, plus \(2^n\) final multiplications by $t$, for a total of $(n+2)(2^{n+1}-2)+2^n = (2n+5)2^n-2(n+2)$ dot products mod $q$.

To minimize memory requirements, we enumerate $x$ in lexicographic order, maintaining a stack of intermediate products. We reuse products on the stack to the extent allowed by the common prefix between $x$ and the previous $x$.
In most cases this common prefix is almost the entire stack. On average slightly fewer than two vector-matrix products need to be recomputed for each $x$. See Figure 4.3 for a recursive version of this algorithm.

4.4. A meet-in-the-middle attack. To do better we change the order of matrix multiplication yet again, separating $\ell$ “left” bits from $n-\ell$ “right” bits:
\[
y(x) = (sB_{1,x[1]} \cdots B_{\ell,x[\ell]})(B_{\ell+1,x[\ell+1]} \cdots B_{n,x[n]}t).
\]
We exploit this separation to store and reuse some computations. Specifically, we precompute a table of “left” products
\[
L[x[1],\ldots,x[\ell]] = sB_{1,x[1]} \cdots B_{\ell,x[\ell]}
\]
for all $2^\ell$ choices of $(x[1],\ldots,x[\ell])$. The main computation of all $y(x)$ works as follows: for each choice of $(x[\ell+1],\ldots,x[n])$, compute the “right” product
\[
R[x[\ell+1],\ldots,x[n]] = B_{\ell+1,x[\ell+1]} \cdots B_{n,x[n]}t,
\]
and then multiply each element of the $L$ table by this vector.

Computing a single left product $sB_{1,x[1]} \cdots B_{\ell,x[\ell]}$ from left to right, as in Section 3, takes $\ell$ vector-matrix products, i.e., $\ell(n+2)$ dot products mod $q$. Overall the precomputation uses $\ell(n+2)2^\ell$ dot products mod $q$.

Computing a single right product $B_{\ell+1,x[\ell+1]} \cdots B_{n,x[n]} t$ from right to left (starting from $t$) takes $n - \ell$ matrix-vector products, for a total of $(n - \ell)(n + 2)$ dot products mod $q$. The outer loop in the main computation therefore uses $(n - \ell)(n + 2)2^{n-\ell}$ dot products mod $q$ in the worst case. The inner loop in the main computation, computing all $y(x)$, uses just $2^n$ dot products mod $q$ in total in the worst case.

The total number of dot products mod $q$ in this algorithm, including precomputation, is $\ell(n+2)2^\ell + (n-\ell)(n+2)2^{n-\ell} + 2^n$. In particular, for $\ell = n/2$ (assuming $n$ is even), the number of dot products mod $q$ simplifies to $n(n+2)2^{n/2} + 2^n$.

For a traditional meet-in-the-middle attack, the outer loop of the main computation simply looks up each result in a precomputed sorted table. Our notion of “meet” is more complicated, and requires inspecting each element of the table, but this is still a considerable speedup: each inspection is simply a dot product, much faster than the vector-matrix multiplications used before.

We comment that taking $\ell$ logarithmic in $n$ produces almost the same speedup with polynomial memory consumption. More precisely, taking $\ell$ close to $2\log_2 n$ means that $2^{n-\ell}$ is smaller than $2^n$ by a factor roughly $n^2$, so the term $(n - \ell)(n + 2)2^{n-\ell}$ is on the same scale as $2^n$. The table then contains roughly $n^2$ vectors, of total size similar to the original $2n$ matrices. Taking slightly larger $\ell$ reduces the term $(n - \ell)(n + 2)2^{n-\ell}$ to a smaller scale. A similar choice of $\ell$ becomes important for speed in Section 8.2.

4.5. Combining the ideas. One can easily reuse intermediate products in the meet-in-the-middle attack. See Figure 4.6. This reduces the precomputation to $2^{\ell+1} - 2$ vector-matrix multiplications, i.e., $(n+2)(2^{\ell+1} - 2)$ dot products mod $q$. It similarly reduces the outer loop of the main computation to $(n+2)(2^{n-\ell+1} - 2)$ dot products mod $q$. The total number of dot products mod $q$ in the entire algorithm is now $(n + 2)(2^{\ell+1} + 2^{n-\ell+1} - 4) + 2^n$. For example, for $\ell = n/2$, the number of dot products mod $q$ simplifies to $4(n + 2)(2^{n/2} - 1) + 2^n$.
```python
execfile('subroutines.py')

l = n // 2

def precompute(xleft,L):
    b = len(xleft)
    if b == l: return [(xleft,L)]
    result = []
    for xb in [0,1]:
        newL = [dot(L,[B[b][xb][i * w + j] for j in range(w)]) for i in range(w)]
        result += precompute(xleft + [xb],newL)
    return result

table = precompute([],s)

def mainloop(xright,R):
    b = len(xright)
    if b == n - l:
        for xleft,L in table:
            result = solution(xleft + xright,dot(L,R))
            if result: return result
        return
    for xb in [0,1]:
        newR = [dot(R,[B[n - 1 - b][xb][j * w + i] for j in range(w)]) for i in range(w)]
        result = mainloop([xb] + xright,newR)
        if result: return result

print mainloop([],t)
```

Fig. 4.6. Meet-in-the-middle attack algorithm, including reuse of intermediate products, using $\ell = \lfloor n/2 \rfloor$ bits on the left and $n - \ell$ bits on the right.

This is not much smaller than the meet-in-the-middle attack without reuse: the dominant term is the same $2^n$. However, as above one can take much smaller $\ell$ to reduce memory consumption. The reuse now allows $\ell$ to be taken almost as small as $\log_2 n$ without significantly compromising speed, so the precomputed table is now much smaller than the original $2n$ matrices.

If memory consumption is not a concern then one should compute both an $L$ table and an $R$ table, interleaving the computations of the tables and obtaining each $LR$ product as soon as both $L$ and $R$ are known. For equal-size tables this means computing $L_0$, $R_0$, $L_0 R_0$, $L_1$, $L_1 R_0$, $R_1$, $L_0 R_1$, $L_1 R_1$, etc. This order of operations does not improve worst-case performance, but it does improve average-case performance. The same improvement has previously been applied to other meet-in-the-middle attacks: for example, Pollard applied this improvement to Shanks’s “baby-step giant-step” discrete-logarithm method. Compare [37, pages 419–420] to [35, page 439, top].

5 Parallelization

We implemented our attack for shared-memory systems using OpenMP and for cluster systems using MPI. In general, brute-force attacks are embarrassingly parallel, i.e., the search space can be distributed over the computing nodes without any need for communication, resulting in a perfectly scalable parallelization. However, for this attack, some computations are shared between consecutive iterations. Therefore, some cooperation and communication are required between computing nodes.

5.1. Precomputation. Recall that the precomputation step computes all $2^\ell$ possible cases for the “left” $\ell$ bits of the whole input space. A non-parallel implementation first computes $\ell$ vector-matrix multiplications for $sB_{1,0} \cdots B_{\ell,0}$ and stores the first $\ell - 1$ intermediate products on a stack. As many intermediate products as possible are reused for each subsequent case.

For a shared-memory system, all data can be shared between the threads. Furthermore, the vector-matrix multiplications expose a sufficient amount of parallelism such that the threads can cooperate on the computation of each multiplication. There is some loss in parallel efficiency due to the need for synchronization and work-share imbalance.

For a cluster system, communication and synchronization of such a workload distribution would be too expensive. Therefore, we split the input range for the precomputation between the cluster nodes, compute each section of the precomputed table independently, and finally broadcast the table entries to all cluster nodes. For simplicity, we split the input range evenly, which results in some workload imbalance.
(On each node, the workload is distributed as described above over several threads to use all CPU cores on each node.) This procedure has some loss in parallel efficiency due to the fact that each cluster node separately performs $\ell$ vector-matrix multiplications for the first precomputation in its range, due to some workload imbalance, and due to the final all-to-all communication.

5.2. Main computation. For simplicity, we start the main computation once the whole precomputed table $L$ is available. Recall that a non-parallel implementation of the main computation first computes the vector $R[0,\ldots,0] = B_{\ell+1,0} \cdots B_{n,0}t$ using $n - \ell$ matrix-vector multiplications, and multiplies this vector by all $2^\ell$ table entries. It then moves to other possibilities for the “right” $n - \ell$ bits, reusing intermediate products in a similar way to the precomputation and multiplying each resulting vector $R[\ldots]$ by all $2^\ell$ table entries.

For a shared-memory system, the computations of $R[\ldots]$ are distributed between the threads in the same way as for the precomputation. However, vector-vector multiplication does not expose as much parallelism as vector-matrix multiplication. Therefore, we distribute over the threads the $2^\ell$ independent vector-vector multiplications of the table entries with $R[0,\ldots,0]$. As in the parallelization of the precomputation, there is some loss of parallel efficiency due to synchronization and work-share imbalance for the vector-matrix multiplications, and some loss due to work-share imbalance for the vector-vector multiplications.

For a cluster system we again cannot efficiently distribute the workload of one vector-matrix multiplication over several cluster nodes. Therefore, we distribute the search space evenly over the cluster nodes and let each cluster node compute its share of the workload independently. This approach creates some redundant work because each cluster node computes its own initial $R[\ldots]$ using $n-\ell$ matrix-vector multiplications.

6 Performance measurements

We used 22 PCs in the Saber cluster [12] for the attack. Each PC is of the type described earlier, including an 8-core CPU. The PCs are connected by a gigabit Ethernet network. Each PC also has two GK110 GPUs, but we did not use these GPUs.

6.1. First break of the challenge. We implemented the single-input optimizations described in Section 3 and used 20 PCs to compute $2^{14}$ point evaluations for all possible inputs. This revealed the secret point 11000101100100 after about 23 hours. The worst-case runtime for this approach on these 20 PCs is about 52 hours for checking all $2^{14}$ possible input points. On 18 October 2014 we sent the authors of [5] the solution to the challenge, and a few hours later they confirmed that the solution was correct.

6.2. Second break of the challenge. We implemented the multiple-input optimizations described in Section 4 and the parallelization described in Section 5. Our optimized attack implementation found the input point in under 19 minutes on 21 PCs; this includes the time to precompute a table $L$ of size $2^7$. The worst-case runtime of the attack for checking all $2^{14}$ possible input points is under 34 minutes on 21 PCs.

6.3. Additional latency. Obviously “19 minutes” understates the real time that elapsed between the announcement of the challenge (19 August 2014) and our solution of the challenge with our second program (25 October 2014). See Table 6.4 for a broader perspective.
The largest deterrent was the difficulty of downloading 25 gigabytes. Whenever a connection broke, the server would insist on starting from the beginning (“HTTP server doesn’t seem to support byte ranges”), presumably because the server stores all files in a compressed format that does not support random access. The same restriction also meant that we could not download different portions of the file in parallel.

To truly minimize latency we would have had to overlap the download of the challenge, the broadcast of the challenge to the cluster, and the computation, and of course our optimizations and software would have had to be ready first. In this context, the precompute-$L$-table algorithm in Section 4 has a latency advantage compared to a bit-reversed algorithm that precomputes an $R$ table instead of an $L$ table: the portion of the input relevant to $L$ is available sooner than the portion of the input relevant to $R$.

| Attack component | Real time |
|---------------------------------------------------------------------------------|--------------------|
| Initial procrastination | a few days |
| First attempt to download challenge (failed) | 82 minutes |
| Subsequent procrastination | 40 days and 40 nights |
| Fourth attempt to download challenge (succeeded) | about an hour |
| Original program [6] evaluating one input | 245 minutes |
| Original program evaluating all inputs on one computer (extrapolated) | 7.6 years |
| Copying challenge to cluster (without UDP broadcasts) | about an hour |
| Reading challenge from disk into RAM | 2.5 minutes |
| Our faster program evaluating one input | 4.85 minutes |
| First successful break of challenge on 20 PCs | 23 hours |
| Further procrastination (“this is fast enough”) | about half a week |
| Our faster program evaluating all inputs on 21 PCs | 34 minutes |
| Second successful break of challenge on 21 PCs | 19 minutes |
| Our current program evaluating all inputs on 1 PC | 444.2 minutes |
| Our current program evaluating all inputs on 22 PCs | 29.5 minutes |
| Time for an average input point on 22 PCs | 19.9 minutes |
| Successful break of challenge on 22 PCs | 17.5 minutes |

Table 6.4. Measurements of real time actually consumed by various components of the complete attack, starting from the announcement of the challenge.

6.5. Timings of various software components. We have put the latest version of our software online at http://obviouscation.cr.yp.to. We applied this software to the same challenge on 22 PCs. The software took a total time of 1769 seconds (29.5 minutes) to check all $2^{14}$ input points. An average input point was checked within 1191 seconds (19.9 minutes). The secret challenge point was found within 1048 seconds (17.5 minutes). The rest of this section describes the time taken by various components of this computation.

Each vector-matrix multiplication took 15.577 s on average (15.091 minimum, 16.421 maximum), using all eight cores jointly. For comparison, on a single core, a vector-matrix multiplication requires about 115 s. Therefore, we achieve a parallel efficiency of $\frac{115\text{ s}/8}{15.577\text{ s}} \approx 92\%$ for parallel vector-matrix multiplication.

Each $y$ computation took 8.986 s on average (7.975 minimum, 9.820 maximum), using a single core. Each $y$ computation consists of one vector-vector multiplication, one multiplication by $p_{zt}$ (which we could absorb into the precomputed table, producing a small speedup), and one reduction mod $q$.
On a single machine (no MPI parallelization), after a reboot to flush the challenge from RAM, the timing breaks down as follows (there is no step 3: the all-to-all communication occurs only in the cluster setting):

1. Loading the matrices for “left” bit positions: 83.999 s.
2. Total precomputation of $2^7 = 128$ table entries: 4055.408 s.
   (a) Computing the first $\ell = 7$ vector-matrix products: 107.623 s.
4. Loading the matrices for “right” bit positions: 78.490 s.
5. Total computation of all $2^{14}$ evaluations: 22518.900 s.
   (a) Computing the first $n - \ell = 7$ matrix-vector products: 109.731 s.

Overall total runtime: 26654 s (444.2 minutes). Of these computations, steps 1, 2a, 4, and 5a are not parallelized for cluster computation.

The total timing breakdown on 22 PCs, after a reboot of all PCs, is as follows:

1. Loading the matrices for “left” bit positions: 89.449 s average (75.786 on the fastest node, 104.696 on the slowest node). With more effort we could have overlapped most of this loading (and the subsequent loading) with computation, or skipped all disk copies by keeping the matrices in RAM.
2. Total precomputation of $2^7 = 128$ table entries: 253.346 s average (217.893 minimum, 295.999 maximum).
   (a) Computing the first $\ell = 7$ vector-matrix products: 107.951 s average (107.173 minimum, 109.297 maximum).
3. All-to-all communication: 153.591 s average (100.848 minimum, 199.200 maximum); i.e., about 53 s average idle time for the busier nodes to catch up, followed by about 101 s of communication. With more effort we could have overlapped most of this communication with computation.
4. Loading the matrices for “right” bit positions: 85.412 s average (73.710 minimum, 97.526 maximum).
5. Total computation of all $2^{14}$ evaluations: 1097.680 s average (942.981 minimum, 1169.520 maximum).
   (a) Computing the first $n - \ell = 7$ matrix-vector products: 108.878 s average (107.713 minimum, 110.001 maximum).
6. Final idle time waiting for all other nodes to finish computation: 80.277 s average (0.076 minimum, 80.277 maximum).

Overall total runtime, including MPI startup overhead: 1769 s (29.5 minutes). The overall parallel efficiency of the cluster parallelization is thus $\frac{26654 \text{ s}/22}{1769 \text{ s}} \approx 68\%$.

Steps 1, 2a, 3, 4, and 5a, totaling 545.281 s, are those parts of the computation that contain parallelization overhead (in particular, the communication time in step 3 is added compared to the single-machine case). Removing these steps from the efficiency calculation (their single-machine counterparts, steps 1, 2a, 4, and 5a, total 380 s) results in a parallel efficiency of $\frac{(26654 \text{ s} - 380 \text{ s})/22}{1769 \text{ s} - 545 \text{ s}} \approx 98\%$, which shows that those steps are responsible for almost all of the loss in parallel efficiency.

7 Further speedups

In this section we briefly discuss two ideas for further accelerating the attack. We considered further implementation work to evaluate the concrete impact of these ideas, but decided that this work was unjustified, given that solving the existing challenge on our cluster took only 19 minutes.

7.1. Reusing transforms. One fast way to compute an $m$-coefficient product of two univariate polynomials is to evaluate each polynomial at the $m$th roots of 1 (assuming that there is a primitive $m$th root of 1 in the coefficient ring), multiply the values, and interpolate the product polynomial from the products of values. The evaluation and interpolation take only $\Theta(m \log_2 m)$ arithmetic operations using a standard radix-2 FFT (assuming that $m$ is a power of 2), and multiplying values takes only $m$ arithmetic operations.
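As an illustration of this evaluate-multiply-interpolate pattern, here is a minimal sketch (ours, not code from [11]); the toy parameters $p = 17$, $m = 8$ and the primitive root $\omega = 2$ are our own choices for readability, whereas real implementations work with far larger transforms:

```python
p = 17     # toy prime; the group of units has order 16, divisible by m = 8
m = 8      # transform length, a power of 2
omega = 2  # 2 has order 8 mod 17, so it is a primitive 8th root of 1

def fft(f, root):
    # evaluate the polynomial with coefficient list f (power-of-2 length)
    # at the successive powers of root, working mod p
    if len(f) == 1: return f
    even = fft(f[0::2], root * root % p)
    odd = fft(f[1::2], root * root % p)
    out, power = [0] * len(f), 1
    for i in range(len(f) // 2):
        t = power * odd[i] % p
        out[i] = (even[i] + t) % p
        out[i + len(f) // 2] = (even[i] - t) % p
        power = power * root % p
    return out

def polymul(f, g):
    # pad to length m, evaluate, multiply values, interpolate
    F = fft(f + [0] * (m - len(f)), omega)
    G = fft(g + [0] * (m - len(g)), omega)
    H = [F[i] * G[i] % p for i in range(m)]
    minv = pow(m, p - 2, p)  # 1/m mod p
    return [c * minv % p for c in fft(H, pow(omega, p - 2, p))]

# (x + 2x^2)(3 + x) = 3x + 7x^2 + 2x^3
print polymul([0, 1, 2], [3, 1])   # [0, 3, 7, 2, 0, 0, 0, 0]
```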
More generally, to multiply two $w \times w$ matrices of polynomials where each entry of the output is known to fit into $m$ coefficients, one can evaluate each polynomial at the $m$th roots of 1, multiply the matrices of values, and interpolate the product matrix. Note that intermediate values are computed in the evaluation domain; interpolation is postponed until the end of the matrix multiplication. The evaluation takes only $\Theta(w^2 m \log_2 m)$ arithmetic operations; schoolbook multiplication of the resulting matrices of values takes only $\Theta(w^3 m)$ arithmetic operations; and interpolation takes only $\Theta(w^2 m \log_2 m)$ arithmetic operations. The total is smaller, by a factor $\Theta(\min\{w, \log_2 m\})$, than the $\Theta(w^3 m \log_2 m)$ that would be used by schoolbook multiplication of the original matrices. Smaller exponents than 3 are known for matrix multiplication, but there is still a clear benefit to reusing the evaluations (called “FFT caching” in [11]) and merging the interpolations (called “FFT addition” in [11]). Similar, somewhat more complicated, speedups apply to multiplication of integer matrices; see, e.g., [38, Table 17].

Obviously FFT caching and FFT addition can also be applied to matrix-vector multiplication, dot products, etc. For example, in the polynomial case, multiplying a $w \times w$ matrix by a length-$w$ vector takes only $\Theta(w^2 m)$ arithmetic operations on values and $\Theta(wm \log_2 m)$ arithmetic operations for interpolation, if the FFTs of the matrix entries have already been cached. Similarly, computing the dot product of two length-$w$ vectors takes only $\Theta(wm)$ arithmetic operations on values and $\Theta(m \log_2 m)$ arithmetic operations for interpolation, if the FFTs of the vector entries have already been cached.

The speedup here is applicable to both the constructive and the destructive algorithms in this paper. We would expect the speedup factor to be noticeable in practice, as in [38]. We would also expect an additional benefit for the attack: a high degree of parallelization is supported by the heavy use of arithmetic on values at independent evaluation points.

7.2. Asymptotically fast rectangular matrix multiplication. The computation of many dot products between all combinations of left vectors and right vectors in our point-obfuscation attack can be viewed as a rectangular matrix-matrix multiplication. An algorithm of Coppersmith [21] multiplies an $N \times N$ matrix by an $N \times \lfloor N^{1/\beta} \rfloor$ matrix using just $N^{2+o(1)}$ multiplications of matrix entries, where $\beta = (5 \log 5)/(2 \log 2) < 6$. With the same number of multiplications one can multiply an $N \times \lfloor N^{1/\beta} \rfloor$ matrix by a $\lfloor N^{1/\beta} \rfloor \times N$ matrix. See [31] for context, and for techniques to achieve smaller $\beta$.

Substitute $N = \lceil w^\beta \rceil$, and note that $\lfloor N^{1/\beta} \rfloor = w$, to see that one can multiply a $\lceil w^\beta \rceil \times w$ matrix by a $w \times \lceil w^\beta \rceil$ matrix, obtaining $\lceil w^\beta \rceil^2$ results, using $w^{2\beta + o(1)}$ multiplications. Note that this is $w^{1+o(1)}$ times faster than computing separate dot products between each of the $\lceil w^\beta \rceil$ vectors in the first matrix and each of the $\lceil w^\beta \rceil$ vectors in the second matrix. Our attack has $2^\ell$ left vectors and $2^{n-\ell}$ right vectors, each of length $w = n+2$.
Asymptotically Coppersmith’s algorithm applies to any choice of $\ell$ between $\beta \log_2 w$ and $n/2$, allowing all of the dot products to be computed using just $w^{o(1)} 2^n$ multiplications, rather than $w 2^n$. Fast matrix multiplication has a reputation for hiding large constant factors in the $w^{o(1)}$, and we do not claim a speedup here for any particular $w$, but asymptotically $w^{o(1)}$ is much faster than $w$. Our operation count also ignores the cost of additions, but we speculate that a more detailed analysis would show a similar improvement in the total number of bit operations. 8 Generalizing the attack beyond point functions This section looks beyond point functions: it considers the general obfuscation method explained in [5] for any program. Recall from Section 2 that for general programs the number of pairs of matrices, say $u$, is no longer tied to the number $n$ of input bits: usually each input bit is used multiple times. Furthermore, each matrix is $w \times w$ and each vector has length $w$ for some $w > n$, where the choice of $w$ depends on the function and is no longer required to be $n + 2$. The speedups from Section 3 rely only on the general matrix-multiplication structure, not on the pattern of accessing input bits. Reducing intermediate results mod $q$ saves a factor approximately $u/2$. Using vector-matrix multiplication rather than matrix-matrix multiplication saves a factor $w$. However, the attacks from Section 4 rely on having each input bit used exactly once. We cannot simply reorder the matrices to bring together the uses of an input bit: matrix multiplication is not commutative. Usually many of the matrices are obfuscated identity matrices, but the way the matrices are randomized prevents these matrices from being removed or reordered; see [5] for details. This section explains two attacks that apply in more generality. The first attack allows cycling through the input bits any number of times, and saves a factor approximately $n/2$ compared to brute force. The second attack allows using and reusing input bits any number of times in any pattern, and saves a factor approximately $n/(2 \log_2 w)$ compared to brute force. The first attack is what one might call a “meet-in-many-middles” attack; the second attack does not involve precomputations. Both attacks exploit the idea of reusing intermediate products, sharing computations between adjacent inputs; both attacks can be parallelized by ideas similar to Section 5. 8.1. Speedup $n/2$ for cycling through input bits. Our first attack applies to any circuit obfuscated as explained in [5, Section 2.2.1]. The obfuscated circuit is constructed to “cycle through each of the input bits \(x_1, x_2, \ldots, x_n\) in order, \(m\) times”, using \(u = mn\) pairs of matrices. In other words, \(y(x)\) is defined as \[ s(B_{1,x[1]} \cdots B_{n,x[n]})(B_{n+1,x[1]} \cdots B_{2n,x[n]}) \cdots (B_{(m-1)n+1,x[1]} \cdots B_{mn,x[n]})t. \] Evaluating \(y(x)\) for one \(x\) from left to right takes \(mn\) vector-matrix multiplications and 1 vector-vector multiplication, i.e., \(uw + 1\) dot products mod \(q\). A straightforward brute-force attack thus takes \((uw + 1)2^n\) dot products mod \(q\). One can split the sequence of \(mn\) matrices at some position \(\ell\), and carry out a meet-in-the-middle attack as in Section 4. 
However, this produces at most a constant-factor speedup once \(m \geq 2\): either the precomputation has to compute products spanning most of the matrix positions for all \(2^n\) inputs, or the main computation has to compute products spanning most of the matrix positions for all \(2^n\) inputs, or both, depending on \(\ell\).

We do better by splitting the sequence of *input bits* at some position \(\ell\). This means grouping the matrix positions into two disjoint “left” and “right” sets as follows, splitting each input cycle:
\[ y(x) = (sB_{1,x[1]} \cdots B_{\ell,x[\ell]}) \left( B_{\ell+1,x[\ell+1]} \cdots B_{n,x[n]} \right) \]
\[ (B_{n+1,x[1]} \cdots B_{n+\ell,x[\ell]}) \left( B_{n+\ell+1,x[\ell+1]} \cdots B_{2n,x[n]} \right) \]
\[ \vdots \]
\[ (B_{(m-1)n+1,x[1]} \cdots B_{(m-1)n+\ell,x[\ell]}) \left( B_{(m-1)n+\ell+1,x[\ell+1]} \cdots B_{mn,x[n]}t \right) \]
\[ = L_1[x[1], \ldots, x[\ell]]R_1[x[\ell + 1], \ldots, x[n]] \]
\[ L_2[x[1], \ldots, x[\ell]]R_2[x[\ell + 1], \ldots, x[n]] \]
\[ \vdots \]
\[ L_m[x[1], \ldots, x[\ell]]R_m[x[\ell + 1], \ldots, x[n]] \]
where
\[ L_1[x[1], \ldots, x[\ell]] = sB_{1,x[1]} \cdots B_{\ell,x[\ell]}, \]
\[ L_i[x[1], \ldots, x[\ell]] = B_{(i-1)n+1,x[1]} \cdots B_{(i-1)n+\ell,x[\ell]} \quad \text{for } 2 \leq i \leq m, \]
\[ R_i[x[\ell + 1], \ldots, x[n]] = B_{(i-1)n+\ell+1,x[\ell+1]} \cdots B_{in,x[n]} \quad \text{for } 1 \leq i \leq m - 1, \]
\[ R_m[x[\ell + 1], \ldots, x[n]] = B_{(m-1)n+\ell+1,x[\ell+1]} \cdots B_{mn,x[n]}t. \]

We exploit this grouping as follows. We use \(2^{\ell+1} - 2\) vector-matrix multiplications to precompute a table of the vectors \(L_1[x[1], \ldots, x[\ell]]\) for all \(2^\ell\) choices of \(x[1], \ldots, x[\ell]\), as in Section 4. Similarly, for each \(i \in \{2, \ldots, m\}\), we use \(2^{\ell+1} - 4\) matrix-matrix multiplications to precompute a table of the matrices \(L_i[x[1], \ldots, x[\ell]]\) for all \(2^\ell\) choices of \(x[1], \ldots, x[\ell]\). The tables use space for \((w + (m - 1)w^2)2^\ell\) integers mod \(q\).

After this precomputation, the outer loop of the main computation runs through each choice of \(x[\ell + 1], \ldots, x[n]\), computing the corresponding matrices \(R_1[\ldots], \ldots, R_{m-1}[\ldots]\) and vector \(R_m[\ldots]\). The inner loop runs through each choice of $x[1], \ldots, x[\ell]$, computing each $y(x)$ by multiplying $L_1, R_1, \ldots, L_m, R_m$; each $x$ here takes $2m - 2$ vector-matrix multiplications and 1 vector-vector multiplication.

Overall the precomputation costs $((m - 1)w^2 + w)(2^{\ell+1} - 2) - 2(m - 1)w^2$ dot products mod $q$; the outer loop of the main computation costs $((m - 1)w^2 + w)(2^{n-\ell+1} - 2) - 2(m - 1)w^2$ dot products mod $q$; and the inner loop costs $((2m - 2)w + 1)2^n$ dot products mod $q$. In particular, taking $\ell = n/2$ (assuming as before that $n$ is even) simplifies the total cost to $4w(2^{n/2} - 1) + 2^n$ for $m = 1$, exactly as in Section 4, and $4w((m - 1)w + 1)(2^{n/2} - 1) + ((2m - 2)w + 1)2^n - 4(m - 1)w^2$ for general $m$.

Recall that brute force costs $(uw + 1)2^n = (mnw + 1)2^n$. For large $n$, large $w$, and $m \geq 2$, the asymptotically dominant term has dropped from $mnw2^n$ to $2mw2^n$, saving a factor of $n/2$. The same asymptotic savings appears with much smaller $\ell$, almost as small as $\log_2 w$. Beware that this does not make the tables asymptotically smaller than the original $2mn$ matrices for $m \geq 2$: most of the table space here is consumed by matrices rather than vectors.
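A sketch (ours, not the paper's code) of this attack follows, in the style of Figure 4.6 and Appendix A: matrices are column-major length-$w \cdot w$ lists and `dot` reduces mod $q$. The helpers `vecmat`, `matvec` and `matmat`, the variable `m` (the number of cycles), and the extended `B` array of $u = mn$ matrix pairs are our own notation and assumptions; reuse of intermediate products within the tables (Section 4.5) is omitted for brevity.

```python
execfile('subroutines.py')  # n, w, q, s, t, dot, solution; see Appendix A
import itertools
# assumed given: m (number of cycles) and B holding all u = m*n matrix pairs
l = n // 2

def vecmat(L, M):  # row vector times matrix, mod q
    return [dot(L, [M[i * w + j] for j in range(w)]) for i in range(w)]

def matvec(M, R):  # matrix times column vector, mod q
    return [dot([M[j * w + i] for j in range(w)], R) for i in range(w)]

def matmat(A, M):  # matrix times matrix, mod q (column-major storage)
    return [dot([A[k * w + j] for k in range(w)],
                [M[i * w + k] for k in range(w)])
            for i in range(w) for j in range(w)]

Ltab = []
for xl in itertools.product([0, 1], repeat=l):
    v = s                       # L_1 is a vector: s times l matrices
    for b in range(l):
        v = vecmat(v, B[b][xl[b]])
    Ls = [v]
    for i in range(1, m):       # L_2, ..., L_m are matrices
        M = B[i * n][xl[0]]
        for b in range(1, l):
            M = matmat(M, B[i * n + b][xl[b]])
        Ls.append(M)
    Ltab.append((list(xl), Ls))

def attack():
    for xr in itertools.product([0, 1], repeat=n - l):
        Rs = []
        for i in range(m - 1):  # R_1, ..., R_{m-1} are matrices
            M = B[i * n + l][xr[0]]
            for b in range(1, n - l):
                M = matmat(M, B[i * n + l + b][xr[b]])
            Rs.append(M)
        v = t                   # R_m ends with t, so it is a vector
        for b in reversed(range(n - l)):
            v = matvec(B[(m - 1) * n + l + b][xr[b]], v)
        Rs.append(v)
        for xl, Ls in Ltab:     # 2m - 2 vecmats and 1 dot per input x
            v = Ls[0]
            for i in range(1, m):
                v = vecmat(vecmat(v, Rs[i - 1]), Ls[i])
            result = solution(xl + list(xr), dot(v, Rs[m - 1]))
            if result: return result

print attack()
```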
8.2. Speedup $n/\log_2 w$ for any order of input bits. One can try to spoil the above attack by changing the order of input bits. A slightly different order of input bits, rotating positions in each round, is already stated in [4, Section 3, Claim 2, final formula], but it is easy to adapt the attack to this order. It is more difficult to adapt the attack to an order chosen randomly, or an order that combinatorially avoids keeping bits together. Varying the input order is not a new idea; see, e.g., the compression functions inside MD5 [36] and BLAKE [10]. Many other orders of input bits also arise naturally in “keyed” functions; see Section 2.

The general picture is that $y(x)$ is defined by the formula
$$y(x) = sB_{1,x[\text{inp}(1)]}B_{2,x[\text{inp}(2)]}\cdots B_{u,x[\text{inp}(u)]}t$$
for some constants $\text{inp}(1), \text{inp}(2), \ldots, \text{inp}(u) \in \{1, 2, \ldots, n\}$. As a first unification we multiply $s$ into $B_{1,0}$ and into $B_{1,1}$, and then multiply $t$ into $B_{u,0}$ and into $B_{u,1}$. Now $B_{1,0}, B_{1,1}, B_{u,0}, B_{u,1}$ are vectors, except that they are integers if $u = 1$; and $y(x)$ is defined by
$$y(x) = B_{1,x[\text{inp}(1)]}B_{2,x[\text{inp}(2)]}\cdots B_{u,x[\text{inp}(u)]}.$$

We now explain a general recursive strategy to evaluate this formula for all inputs without exploiting any particular pattern in $\text{inp}(1), \text{inp}(2), \ldots, \text{inp}(u)$. The strategy reduces the number of variable bits in $x$ by one at each level of recursion.

Assume that not all of $\text{inp}(1), \text{inp}(2), \ldots, \text{inp}(u)$ are equal to $n$. Substitute $x[n] = 0$ into the formula for $y(x)$. This means, for each $i$ with $\text{inp}(i) = n$ in turn, eliminating the expression “$B_{i,x[n]}$” as follows:

- multiply $B_{i,0}$ into $B_{i+1,0}$ and into $B_{i+1,1}$ if $i < u$;
- multiply $B_{i,0}$ into $B_{i-1,0}$ and into $B_{i-1,1}$ if $i = u$;
- set $B_i \leftarrow B_{i+1}$, then $B_{i+1} \leftarrow B_{i+2}$, $\ldots$, then $B_{u-1} \leftarrow B_u$;
- reduce $u$ to $u - 1$.

Recursively evaluate the resulting formula for all choices of $x[1], \ldots, x[n - 1]$. Then do all the same steps again with $x[n] = 1$ instead of $x[n] = 0$.

More generally, one can recurse on the two choices of $x[b]$ for any $b$. It is most efficient to recurse on the most frequently used index $b$ (or one of the most frequent indices $b$ if there are several), since this minimizes the length of the formula to handle recursively. This is equivalent to first relabeling the indices so that they are in nondecreasing order of frequency, and then always recursing on the last bit.

Once $n$ is sufficiently small (see below), stop the recursion. This means separately enumerating all possibilities for $(x[1], \ldots, x[n])$ and, for each possibility, evaluating the given formula
$$y(x) = B_{1,x[\text{inp}(1)]}B_{2,x[\text{inp}(2)]}\cdots B_{u,x[\text{inp}(u)]}$$
by multiplication from left to right. Recall that $B_{1,x[\text{inp}(1)]}$ is actually a vector (or an integer if $u = 1$). Each computation takes $u - 1$ vector-matrix multiplications, i.e., $(u - 1)w$ dot products mod $q$. (Here we ignore the extra speed of the final vector-vector multiplication.) The total across all inputs is $(u - 1)w2^n$ dot products mod $q$.

To see that the recursion reduces this complexity, consider the impact of using exactly one level of recursion, from $n$ down to $n - 1$.
If index $n$ is used $u_n$ times then eliminating the matrices $B_{i,x[n]}$ costs $2u_n$ matrix multiplications (two for each of the $u_n$ positions $i$), and produces a formula of length $u - u_n$ instead of $u$, so each recursive call uses $(u - u_n - 1)w2^{n-1}$ dot products mod $q$. The bound on the total number of dot products mod $q$ drops from $(u - 1)w2^n$ to $4u_nw^2 + (u - u_n - 1)w2^n$, saving $u_nw2^n - 4u_nw^2$. This analysis suggests stopping the recursion when $2^n$ drops below $4w$, i.e., at $n = \lceil \log_2 w \rceil + 1$.

More generally, the algorithm costs a total of
$$4u_nw^2 + 8u_{n-1}w^2 + 16u_{n-2}w^2 + \cdots + 2^{n-\ell+1}u_{\ell+1}w^2 + 2^n(u_\ell + \cdots + u_1 - 1)w$$
dot products mod $q$ if the recursion stops at level $\ell$. We relabel as explained above so that $u_n \geq u_{n-1} \geq \cdots \geq u_1$, and assume $n > \ell$. The sum $u_\ell + \cdots + u_1$ is at most $\ell u/n$, and the sum $u_n + 2u_{n-1} + 4u_{n-2} + \cdots + 2^{n-\ell-1}u_{\ell+1}$ is at most $2^{n-\ell}u/(n-\ell)$, for a total of less than $(4w2^{-\ell}/(n-\ell) + \ell/n)uw2^n$. Taking $\ell = \lceil \log_2 w \rceil + 1$ reduces this total to at most $(4/(n - \lceil \log_2 w \rceil - 1) + (\lceil \log_2 w \rceil + 1)/n)uw2^n$.

For comparison, a brute-force attack against the original problem (separately evaluating $y(x)$ for each $x$) costs $(u - 1)w2^n$. We have thus saved a factor of approximately $n/\log_2 w$.

References

[1] — (no editor), *53rd annual IEEE symposium on foundations of computer science, FOCS 2012, New Brunswick, New Jersey, 20–23 October 2012*, IEEE Computer Society, 2012. See [31].
[2] — (no editor), *54th annual IEEE symposium on foundations of computer science, FOCS 2013, 26–29 October, 2013, Berkeley, CA, USA*, IEEE Computer Society, 2013. See [25].
[3] — (no editor), *RSA numbers*, Wikipedia page (2014). URL: https://en.wikipedia.org/wiki/RSA_numbers. Citations in this document: §1.5.
[4] Prabhanjan Ananth, Divya Gupta, Yuval Ishai, Amit Sahai, *Optimizing obfuscation: avoiding Barrington’s theorem*, in ACM-CCS 2014 (2014). URL: https://eprint.iacr.org/2014/222. Citations in this document: §2, §2, §2.3, §2.3, §8.2.
[5] Daniel Apon, Yan Huang, Jonathan Katz, Alex J. Malozemoff, *Implementing cryptographic program obfuscation*, version 20141005 (2014). URL: https://eprint.iacr.org/2014/779. Citations in this document: §1.2, §1.5, §1.6, §1.6, §2, §2, §2, §2, §2.1, §2.2, §2.2, §3.1, §6.1, §8, §8, §8.1.
[6] Daniel Apon, Yan Huang, Jonathan Katz, Alex J. Malozemoff, *Implementing cryptographic program obfuscation (software)* (2014). URL: https://github.com/amaloz/obfuscation. Citations in this document: §2, §2.1, §2.2, §3, §6.3, §A, §A, §A.
[7] Daniel Apon, Yan Huang, Jonathan Katz, Alex J. Malozemoff, *Implementing cryptographic program obfuscation (slides)*, Crypto 2014 rump session (2014). URL: http://crypto.2014.rump.cr.yp.to/bca480a4e7fcdaf5bfaf9dec75ff890c8.pdf. Citations in this document: §1.2, §2, §2.2, §A.
[8] Daniel Apon, Yan Huang, Jonathan Katz, Alex J. Malozemoff, *Implementing cryptographic program obfuscation (video)*, Crypto 2014 rump session, starting at 3:56:25 (2014). URL: https://gauchocast.ucsb.edu/Panopto/Pages/Viewer.aspx?id=d34af80d-bdb5-464b-a8ac-2c3adefc5194. Citations in this document: §2.2.
[9] Jean-Philippe Aumasson, *Password Hashing Competition* (2013). URL: https://password-hashing.net/. Citations in this document: §1.1.
[10] Jean-Philippe Aumasson, Luca Henzen, Willi Meier, Raphael C.-W. Phan, *SHA-3 proposal BLAKE (version 1.3)* (2010). URL: https://www.131002.net/blake/blake.pdf.
Citations in this document: §8.2.
[11] Daniel J. Bernstein, *Fast multiplication and its applications*, in [15] (2008), 325–384. URL: http://cr.yp.to/papers.html#multapps. Citations in this document: §3.1, §7.1, §7.1.
[12] Daniel J. Bernstein, *The Saber cluster* (2014). URL: http://blog.cr.yp.to/20140602-saber.html. Citations in this document: §6.
[13] Andrey Bogdanov, Dmitry Khovratovich, Christian Rechberger, *Biclique cryptanalysis of the full AES*, in Asiacrypt 2011 [30] (2011), 344–371. URL: https://eprint.iacr.org/2011/449. Citations in this document: §1.4.
[14] Joseph Bonneau, Stuart E. Schechter, *Towards reliable storage of 56-bit secrets in human memory*, in USENIX Security Symposium 2014 (2014), 607–623. URL: https://www.usenix.org/conference/usenixsecurity14/technical-sessions/presentation/bonneau. Citations in this document: §1.1.
[15] Joe P. Buhler, Peter Stevenhagen (editors), *Surveys in algorithmic number theory*, Mathematical Sciences Research Institute Publications, 44, Cambridge University Press, 2008. See [11].
[16] Christian Cachin, Jan Camenisch (editors), *Advances in cryptology—EUROCRYPT 2004, international conference on the theory and applications of cryptographic techniques, Interlaken, Switzerland, May 2–6, 2004, proceedings*, Lecture Notes in Computer Science, 3027, Springer, 2004. ISBN 3-540-21935-8. See [33].
[17] Ran Canetti, Juan A. Garay (editors), *Advances in cryptology—CRYPTO 2013—33rd annual cryptology conference, Santa Barbara, CA, USA, August 18–22, 2013, proceedings, part I*, Lecture Notes in Computer Science, 8042, Springer, 2013. See [22].
[18] Anne Canteaut (editor), *Fast software encryption—19th international workshop, FSE 2012, Washington, DC, USA, March 19–21, 2012, revised selected papers*, Lecture Notes in Computer Science, 7549, Springer, 2012. ISBN 978-3-642-34046-8. See [29].
[19] Jung Hee Cheon, Kyoohyung Han, Changmin Lee, Hansol Ryu, Damien Stehlé, *Cryptanalysis of the multilinear map over the integers* (2014). URL: https://eprint.iacr.org/2014/906. Citations in this document: §1.5, §1.5, §1.5, §2.
[20] Scott Contini, Arjen K. Lenstra, Ron Steinfeld, *VSH, an efficient and provable collision-resistant hash function*, in Eurocrypt 2006 [39] (2006), 165–182. URL: https://eprint.iacr.org/2005/193. Citations in this document: §2.1.
[21] Don Coppersmith, *Rapid multiplication of rectangular matrices*, SIAM Journal on Computing 11 (1982), 467–471. Citations in this document: §7.2.
[22] Jean-Sebastien Coron, Tancrede Lepoint, Mehdi Tibouchi, *Practical multilinear maps over the integers*, in Crypto 2013 [17] (2013), 476–493. URL: https://eprint.iacr.org/2013/183. Citations in this document: §1.5, §1.5, §1.5, §1.5, §2, §2.
[23] Simson Garfinkel, Gene Spafford, Alan Schwartz, *Practical UNIX & Internet security*, 3rd edition, O’Reilly, 2003. Citations in this document: §1.
[24] Sanjam Garg, Craig Gentry, Shai Halevi, *Candidate multilinear maps from ideal lattices*, in Eurocrypt 2013 [28] (2012), 40–49. URL: https://eprint.iacr.org/2012/610. Citations in this document: §1.5, §2.
[25] Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, Brent Waters, *Candidate indistinguishability obfuscation and functional encryption for all circuits*, in FOCS 2013 [2] (2013), 40–49. URL: https://eprint.iacr.org/2013/451. Citations in this document: §1.5, §1.5, §1.5, §2.
[26] Craig Gentry, Shai Halevi, Hemanta K. Maji, Amit Sahai, *Zeroizing without zeroes: Cryptanalyzing multilinear maps without encodings of zero* (2014).
URL: https://eprint.iacr.org/2014/929. Citations in this document: §1.5, §1.5.
[27] Shafi Goldwasser, Guy N. Rothblum, *On best-possible obfuscation*, Journal of Cryptology 27 (2014), 480–505. Citations in this document: §1.5.
[28] Thomas Johansson, Phong Q. Nguyen (editors), *Advances in cryptology—EUROCRYPT 2013, 32nd annual international conference on the theory and applications of cryptographic techniques, Athens, Greece, May 26–30, 2013, proceedings*, Lecture Notes in Computer Science, 7881, Springer, 2013. ISBN 978-3-642-38347-2. See [24].
[29] Dmitry Khovratovich, Christian Rechberger, Alexandra Savelieva, *Bicliques for preimages: attacks on Skein-512 and the SHA-2 family*, in FSE 2012 [18] (2011), 244–263. URL: https://eprint.iacr.org/2011/286. Citations in this document: §1.4.
[30] Dong Hoon Lee, Xiaoyun Wang (editors), *Advances in cryptology—ASIACRYPT 2011, 17th international conference on the theory and application of cryptology and information security, Seoul, South Korea, December 4–8, 2011, proceedings*, Lecture Notes in Computer Science, 7073, Springer, 2011. ISBN 978-3-642-25384-3. See [13].
[31] François Le Gall, *Faster algorithms for rectangular matrix multiplication*, in FOCS 2012 [1] (2012), 514–523. URL: https://arxiv.org/abs/1204.1111. Citations in this document: §7.2.
[32] Donald J. Lewis (editor), *1969 Number Theory Institute: proceedings of the 1969 summer institute on number theory: analytic number theory, Diophantine problems, and algebraic number theory; held at the State University of New York at Stony Brook, Stony Brook, Long Island, New York, July 7–August 1, 1969*, Proceedings of Symposia in Pure Mathematics, 20, American Mathematical Society, 1971. ISBN 0-8218-1420-6. MR 47:3286. See [37].
[33] Benjamin Lynn, Manoj Prabhakaran, Amit Sahai, *Positive results and techniques for obfuscation*, in Eurocrypt 2004 [16] (2004), 20–39. Citations in this document: §1.2.
[34] Dag Arne Osvik, Eran Tromer, *Cryptologic applications of the PlayStation 3: Cell SPEED*, Workshop record of “SPEED—Software Performance Enhancement for Encryption and Decryption” (2007). URL: https://hyperelliptic.org/SPEED/slides/Osvik_cell-speed.pdf. Citations in this document: §1.4.
[35] John M. Pollard, *Kangaroos, Monopoly and discrete logarithms*, Journal of Cryptology 13 (2000), 437–447. Citations in this document: §4.5.
[36] Ronald L. Rivest, *The MD5 message-digest algorithm*, RFC 1321 (1992). URL: https://tools.ietf.org/html/rfc1321. Citations in this document: §8.2.
[37] Daniel Shanks, *Class number, a theory of factorization, and genera*, in [32] (1971), 415–440. MR 47:4932. Citations in this document: §4.5.
[38] Joris van der Hoeven, Grégoire Lecerf, Guillaume Quintin, *Modular SIMD arithmetic in Mathemagix* (2014). URL: https://arxiv.org/abs/1407.3383. Citations in this document: §7.1, §7.1.
[39] Serge Vaudenay (editor), *Advances in cryptology—EUROCRYPT 2006, 25th annual international conference on the theory and applications of cryptographic techniques, St. Petersburg, Russia, May 28–June 1, 2006, proceedings*, Lecture Notes in Computer Science, 4004, Springer, 2006. ISBN 3-540-34546-9. See [20].

A Subroutines

The `sha256hex` function is defined as the following wrapper around Python’s `hashlib`:

```python
import hashlib

def sha256hex(input):
    return hashlib.sha256(input).hexdigest()
```

In other words, `sha256hex` returns the hexadecimal representation of the SHA-256 hash of its input.
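As a quick check (ours, using the standard SHA-256 test vector):

```python
print sha256hex('abc')
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```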
The software from [6] stores nonnegative integers on disk in a self-delimiting format defined by GMP’s `mpz_out_raw` function (for integers that fit into $2^{32} - 1$ bytes): a 4-byte big-endian length $b$ precedes a $b$-byte big-endian integer. The following `load_mpz` and `load_mpzarray` functions parse the same format and return `gmpy2` integers:

```python
import struct
import gmpy2

def mpz_inp_raw(f):
    bytes = struct.unpack('>i', f.read(4))[0]
    if bytes == 0: return 0
    return gmpy2.from_binary('\x01\x01' + f.read(bytes)[::-1])

def load_mpzarray(fn, n):
    f = open(fn, 'rb')
    result = [mpz_inp_raw(f) for i in range(n)]
    f.close()
    return result

def load_mpz(fn):
    return load_mpzarray(fn, 1)[0]
```

Integers such as $w$, $q$, the $s$ entries, etc. are then read from files as gmpy2 integers:

```python
w = load_mpz('size')
pzt = load_mpz('pzt')
q = load_mpz('q')
nu = load_mpz('nu')
s = load_mpzarray('s_enc', w)
t = load_mpzarray('t_enc', w)
n = w - 2
B = [[load_mpzarray('%d.%s' % (b, xb), w * w) for xb in ['zero', 'one']] for b in range(n)]
```

The file names are specified by the software from [6]. The challenge announced in [7] used an older version of the software from [6], using file name $x0$ instead of $q$, so we copied $x0$ to $q$. Note that the $B$ array is indexed $0, 1, \ldots, n-1$ rather than $1, 2, \ldots, n$.

The dot function computes a dot product of two length-$w$ vectors and reduces the result mod $q$:

```python
def dot(L, R):
    return sum([L[i]*R[i] for i in range(w)]) % q
```

The solution function takes $x$ and $y(x)$ as input, and returns $x$ as a string of ASCII digits if the output of the corresponding obfuscated program is 1:

```python
def solution(x, y):
    y *= pzt
    y %= q
    if y > q - y: y -= q
    if y.bit_length() > q.bit_length() - nu:
        return ''.join([str(xb) for xb in x])
```
THE DUPIC ALTERNATIVE FOR BACKEND FUEL CYCLE

J.S. LEE, M.S. YANG, H.S. PARK
Korea Atomic Energy Research Institute, Taejon, Republic of Korea

P. BOCZAR, J. SULLIVAN, R.D. GADSBY
Atomic Energy of Canada Ltd, Sheridan Park, Canada

Abstract

The DUPIC\(^1\) fuel cycle was conceived as an alternative to the conventional backend fuel cycle options, with a view to the multiple benefits expected from burning spent PWR fuel again in CANDU reactors. It is based on the idea that the bulk of spent PWR fuel can be directly refabricated into reusable fuel for CANDU reactors, whose high efficiency in neutron utilization would exhaustively burn the fissile remnants in the spent PWR fuel to a level below that of natural uranium. This "burn again" strategy of the DUPIC fuel cycle implies that the spent PWR fuel becomes CANDU fuel of higher burnup, with attendant benefits such as spent PWR fuel disposition, savings of natural uranium fuel, etc. A salient feature of the DUPIC fuel cycle is that neither the fissile content nor the bulk radioactivity is separated from the DUPIC mass flow, which must be contained and shielded all along the cycle. This feature can be considered a factor of proliferation resistance, by deterrence against access to sensitive materials. It also means that remote systems technologies are required for DUPIC fuel operation. The conflict between better safeguardability and the harder engineering problems of radioactive fuel operation may be the main reason why the decades-old concept, dating back to INFCE\(^2\), of a "hot" fuel cycle has not been pursued with much progress. In this context, the DUPIC fuel cycle could be a live example for the development of a proliferation-resistant fuel cycle. As the DUPIC fuel cycle looks for synergism of fuel linkage from PWR to CANDU (or, in a broader sense, from LWR to HWR), Korea is in the best position for a DUPIC exercise, with its unique strategy of a reactor mix of both reactor types. But the DUPIC benefits can be extended to a global bonus, expectable from successful development of the technology.

1. INTRODUCTION

Spent fuel from a PWR contains fissile remnants at roughly twice the level of natural uranium. They can be separated by reprocessing, or disposed of intact (i.e., as spent PWR fuel assemblies) in a geological repository. The DUPIC fuel cycle is in fact a third alternative in between the two conventional options, in that spent PWR fuel is neither directly disposed of nor reprocessed to separate the fissile remnants in it; instead, the bulk of the spent PWR fuel is reformed into CANDU-compatible DUPIC fuel bundles with new cladding and appendages [1].

A precursor of the DUPIC concept is the AIROX\(^3\) technology developed in the early sixties by Atomics International in the USA. It was intended to recycle spent LWR fuel back into LWRs by adding higher-enrichment material to compensate for the depleted portion of the spent fuel. Notwithstanding the differences from DUPIC in fuel form and content, many of its technical features are common to both fuel cycles, which require remote fabrication of bulk oxides. The AIROX concept attracted interest for its proliferation resistance in the late seventies [2], and for its possibility as an alternative for managing spent fuel in the USA [3]. More distant relatives in the family tree of fuel cycles are the remote fuel fabrication methods conceived for thorium oxides and metals [4].
---
\(^1\) DUPIC = Direct Use of Spent PWR Fuel in CANDU
\(^2\) INFCE = International Nuclear Fuel Cycle Evaluation
\(^3\) AIROX = Atomics International Reduction Oxidation
\(^4\) OREOX = Oxidation and Reduction of Oxide Fuel

Studies of the DUPIC fuel cycle were initiated in the early nineties by joint efforts involving Korea, Canada and the USA. The three parties investigated the technical feasibility of the DUPIC fuel cycle, including analysis of the CANDU system with DUPIC fuel, comparison of various options and selection of a reference process for DUPIC fuel fabrication, examination of safeguardability, etc. The positive conclusions of the feasibility study, wrapped up in 1992, led to the next phase of the DUPIC program, experimental verification, with a target of the year 2000 and tasks shared between the three parties [5]. The IAEA has recently joined the program in safeguards affairs [6].

2. THE DUPIC TECHNOLOGY

A basic premise adopted for the DUPIC fuel cycle development is to minimize the retrofitting that may be required in order to use DUPIC fuel in CANDU reactors. This philosophy is based on the rationale that it is much cheaper and safer to fit the fuel to the reactor than to fit the reactor to the fuel. With this background, the technical question for the DUPIC fuel cycle converges on the feasibility of DUPIC fuel fabrication. For the fabrication of DUPIC fuel, a process called OREOX\(^4\) was selected as the most promising among the competing options, because it would allow maximum flexibility in fuel design while satisfying the requirement of compatibility with the existing CANDU system. The OREOX process is based on a thermal treatment of bulk powder from spent PWR fuel to prepare it for the manufacturing of pellets, from which the DUPIC fuel bundle can be fabricated by a remotized version of conventional technology for CANDU fuel production. Remote fabrication of DUPIC fuel thus lies at the heart of the DUPIC technology.

2.1. Compatibility with the existing CANDU system

Compatibility of DUPIC fuel with the existing CANDU system may be grouped in two major categories: nuclear and mechanical.

2.1.1. Nuclear compatibility

Spent PWR fuel contains fissile remnants of around 1.5%, depending on the burnup attained in the PWR. This is roughly twice the fissile content of natural uranium, which is used for fresh CANDU fuel. This is a theoretical indication that DUPIC fuel could be burnt about twice as long as natural uranium fuel in a CANDU of equivalent output. The frequency of fuel replacement will have to be adjusted accordingly. Major technical considerations for DUPIC fuel use in CANDU have been under analysis since the feasibility study. The neutronic and thermohydraulic compatibilities are integrated into the homogeneity of the DUPIC fuel composition. As there is a variety of PWR fuel burnups resulting in compositional differences, consideration is given to the preparatory selection of batch mixtures in such a way as to fabricate DUPIC fuel of homogeneous composition. Much effort is also focused on the analysis of safety margins in CANDU, and on the resulting adjustment of control factors, in the use of DUPIC fuel with reference to natural uranium fuel use [7].

2.1.2. Mechanical compatibility

Concerning the feasibility of DUPIC fuel handling, both for charge and discharge, it was found that the existing refueling machine could be used for DUPIC fuel loading into the fuel channel in reverse sequence to the current handling of spent fuel discharged from the core.
A minor addition of a lifting mechanism nevertheless seems to be needed down in the spent fuel bay, in order to move DUPIC fuel up to the refueling position, because the current spent CANDU fuel handling system is designed only to slide spent fuel down.

2.2. Feasibility of DUPIC fuel fabrication

There are various methods for transforming spent PWR fuel into CANDU-type DUPIC fuel. They fall into two technical groups: mechanical reconfiguration, and powder conditioning (after decladding) for pellet formation or vibratory packing (vipac). A comparative assessment of these options during the early feasibility study of the DUPIC program concluded that the latter is preferred, mainly for the reason of fuel compatibility explained previously. Within the latter category, the powder-pellet route was preferred mainly for its wide commercial experience, although the other option (vipac) may have advantages in remote fabrication owing to its simpler process. For powder conditioning for pellet fabrication, which has been adopted as the reference process for DUPIC fuel development and testing, the OREOX process uses repeated oxidation and reduction to make the bulk powder more amenable to pellet formation and sintering. Once the fuel is pelletized, the rest of the process for producing a DUPIC bundle does not differ much from conventional CANDU fuel fabrication [8]. The radioactive wastes arising from DUPIC fuel fabrication are mainly the non-fuel-bearing structural materials removed during disassembly and decladding of spent PWR fuel, and the off-gases released from the OREOX and sintering processes. These wastes can be treated and managed by existing technologies, except for the semi-volatile gases, for which special trapping methods are being developed within the DUPIC program. As remarked above, all processes involved in DUPIC fuel fabrication must be performed remotely, within biological shielding and containment, to protect workers from radiation hazards. The remote-systems character of DUPIC fuel fabrication requires a new dimension of technological effort and cost. This is simply the penalty paid for the enhanced safeguardability of the radioactive process. The direction is, however, convergent with the recent technical trend toward increasing automation in manufacturing industry to reduce labor costs and risks.

2.3. Safeguards

DUPIC fuel fabrication is resistant to proliferation not only because it involves no separation of fissile material, but also because the heavy shielding enclosing the radioactive process acts as a barrier against diversion. Similar technical principles attracted considerable attention during the seventies, when measures to enhance the safeguardability of conventional reprocessing were sought through such methods as low decontamination or denaturing of special nuclear materials with radioactive spikants. An extension of such principles has recently been revived as the "spent fuel standard" by the U.S. National Academy of Sciences, as a key security criterion for judging options for weapons plutonium disposition [9]. In the DUPIC program, systems for containment and surveillance are being developed to augment the safeguardability of DUPIC fuel fabrication. A recent outcome of this development effort is an instrument that can measure the fissile content of spent fuel material with enhanced accuracy [10].
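As a recap of the nuclear compatibility estimate in section 2.1.1, the "roughly two times" figure can be checked with a short back-of-envelope calculation. The sketch below is purely illustrative: the 1.5% fissile content is the value quoted in the text, the 0.711% U-235 content of natural uranium is a standard value, and the assumption that attainable burnup scales linearly with initial fissile content is a deliberate simplification, not a reactor physics calculation.

```python
# Back-of-envelope check of the section 2.1.1 estimate. The linear scaling
# of burnup with initial fissile content is an assumed simplification.
SPENT_PWR_FISSILE = 0.015    # fissile fraction of spent PWR fuel (from the text)
NATURAL_U_FISSILE = 0.00711  # U-235 fraction of natural uranium (standard value)

ratio = SPENT_PWR_FISSILE / NATURAL_U_FISSILE
print(f"fissile content ratio: {ratio:.1f}x")  # ~2.1x, i.e. "roughly two times"

# Under the linear-scaling simplification, DUPIC fuel would reach roughly
# double the burnup of natural uranium CANDU fuel, so the fuel replacement
# frequency could be roughly halved for equivalent output.
```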
3. BENEFITS OF THE DUPIC ALTERNATIVE

The DUPIC alternative, as a proliferation resistant fuel cycle concept, offers multiple benefits expected from PWR-CANDU synergism in comparison with the once-through option. These benefits are maximized at a reactor ratio of about 3 PWRs to 1 CANDU (depending on burnup). At this optimal ratio, a saving of up to 30% in natural uranium is possible. Another advantage, more significant from today's perspective, is the compounded reduction in spent fuel arisings: spent PWR fuel is removed from the waste stream, and burnup is doubled in CANDU. As a corollary to this quantity reduction, to about one third of the once-through case, it was also found that DUPIC would produce a "quality effect" of long-term radiotoxicity reduction in the final disposal of spent fuel [11]. Regarding DUPIC economics, a study within the DUPIC program has indicated that the DUPIC alternative can be competitive with once-through, as well as with other recycle options, when the synergistic effects are taken into account [12].

4. INTERNATIONAL DUPIC LINK

The DUPIC fuel cycle concept is characterized by burning spent PWR fuel again in CANDU, without separating any fissile material, taking advantage of the high neutron economy of heavy water reactors. It therefore requires a PWR-CANDU reactor mix, which Korea happens to have adopted. The possibility of a DUPIC fuel linkage from LWR to HWR is not, however, limited to mixed-reactor countries like Korea: it can be extended to countries operating only LWRs or only HWRs through international cooperation, if such a linkage is agreed between the interested countries.

5. CONCLUSION

The DUPIC fuel cycle is an emerging alternative in the fuel cycle backend for synergism between PWR and CANDU (and, in general, between LWR and HWR). A conspicuous feature of the DUPIC fuel cycle concept is, among others, its proliferation resistance, which is unique of its kind. Other benefits include not only savings of natural uranium for CANDU fuel, but also the removal of spent PWR fuel, which is transformed into DUPIC fuel giving twice the burnup in CANDU, thus contributing to more power production without additional burden on the environment. Development efforts are now in full swing, within an international cooperation framework, in anticipation of multiple benefits at the national and international level.

REFERENCES

[1] ASQUITH, J.G., GRANTHAM, L.F., A Low-Decontamination Approach to a Proliferation-Resistant Fuel Cycle, Nuclear Technology, Vol. 41 (Dec. 1978), pp. 137-148.
[2] THOMAS, T.R., AIROX Nuclear Fuel Recycling and Waste Management, Global '93 Conference (Sept. 12-17, 1993, Seattle), pp. 722-728.
[3] SHUCK, A.B., LOTTS, A.L., DRUMHELLER, K., The Remote Fabrication of Reactor Fuels (in Reactor Technology - Selected Reviews, pp. 71-141), USAEC (1965).
[4] LEE, J.S., et al., R&D Program of KAERI for DUPIC, Global '93 Conference (Sept. 12-17, 1993, Seattle), pp. 733-739.
[5] RIM, C.C., Burning LWR Spent Fuel in Heavy Water Reactors, Presentation at the IAEA Scientific Program, Advanced Nuclear Fuel Cycle, New Concept for the Future, 40th General Conference of the IAEA (17 September 1996).
[6] YANG, M.S., et al., DUPIC Fuel Development Program in Korea, Companion paper of this Symposium.
[7] CHOI, H.B., RHEE, B.W., PARK, H.S., Physics Study on Direct Use of Spent PWR Fuel in CANDU (DUPIC), Nuclear Science & Engineering, 126 (1997), pp. 80-93.
[8] CHUN, K.S., TAYLOR, P., Basic Concept of Radioactive Waste Management for DUPIC Fuel Cycle in Korea, Global '93 Conference (Sept. 12-17, 1993, Seattle), pp. 201-203.
[9] HONG, J.S., et al., Safeguards for DUPIC Fuel Cycle, Companion paper of this Symposium.

BIBLIOGRAPHY

DOUST, R., Recycling PWR Fuel: CANDU Can Do, Nuclear Engineering International (Feb. 1993).
LEE, J.S., PARK, H.S., GADSBY, R.D., SULLIVAN, J., Burn Spent PWR Fuel Again in CANDU Reactors by DUPIC, Global '95 Conference (Sept. 11-14, 1995, Versailles), pp. 355-359.
LEE, J.S., PARK, H.S., The DUPIC Fuel Cycle Alternative: Status and Perspective, 10th PBNC (Oct. 20-25, 1996, Kobe), pp. 1059-1061.
Next-generation phenomics for the Tree of Life

June 26, 2013 · AVAToL

Gordon Burleigh¹, Kenzley Alphonse², Andrew J Alverson³, Holly M Bik⁴, Carrine Blank⁵, Andrea L Cirranello⁶, Hong Cui⁷, Marymegan Daly⁸, Thomas G Dietterich⁹, Gail Gasparich¹⁰, Jed Irvine⁹, Matthew Julius¹¹, Seth Kaufman¹², Edith Law¹³, Jing Liu¹⁴, Lisa Moore¹⁵, Maureen A O’Leary¹⁶, Maria Passarotti¹², Sonali Ranade⁷, Nancy B Simmons⁶, Dennis W. Stevenson¹⁷, Robert W Thacker, Edward C Theriot¹⁸, Sinisa Todorovic⁹, Paul M. Velazco¹⁹, Ramona L Walls²⁰, Joanna M Wolfe¹⁹, Mengjie Yu¹⁸

¹ University of Florida, ² KenX Technology, Medford, New York, ³ Department of Biological Sciences, University of Arkansas, ⁴ UC Davis Genome Center, ⁵ University of Montana, ⁶ Department of Mammalogy, American Museum of Natural History, ⁷ School of Information Resources & Library Science, University of Arizona, ⁸ Department of Evolution, Ecology, and Organismal Biology, The Ohio State University, ⁹ School of Electrical Engineering and Computer Science, Oregon State University, ¹⁰ Department of Biological Sciences, Towson University, ¹¹ Department of Biological Sciences, St. Cloud State University, ¹² Whirl-i-Gig, Greenport, New York, ¹³ Center for Research on Computation and Society, School of Engineering and Applied Sciences, Harvard University, ¹⁴ Department of Biology, University of Florida, ¹⁵ University of Southern Maine, ¹⁶ Department of Anatomical Sciences, Stony Brook University, ¹⁷ The New York Botanical Garden, ¹⁸ The University of Texas at Austin, Texas Natural Science Center, ¹⁹ American Museum of Natural History, ²⁰ University of Arizona

Burleigh G, Alphonse K, Alverson AJ, Bik HM, Blank C, Cirranello AL, Cui H, Daly M, Dietterich TG, Gasparich G, Irvine J, Julius M, Kaufman S, Law E, Liu J, Moore L, O’Leary MA, Passarotti M, Ranade S, Simmons NB, Stevenson DW, Thacker RW, Theriot EC, Todorovic S, Velazco PM, Walls RL, Wolfe JM, Yu M. Next-generation phenomics for the Tree of Life. PLOS Currents Tree of Life. 2013 Jun 26 [last modified: 2013 Jun 26]. Edition 1. doi: 10.1371/currents.tol.085c713acafc8711b2ff7010a4b03733.

Abstract

The phenotype represents a critical interface between the genome and the environment in which organisms live and evolve. Phenotypic characters also are a rich source of biodiversity data for tree-building, and they enable scientists to reconstruct the evolutionary history of organisms, including most fossil taxa, for which genetic data are unavailable. Therefore, phenotypic data are necessary for building a comprehensive Tree of Life. In contrast to molecular sequencing, which has become faster and cheaper through recent technological advances, phenotypic data collection often remains prohibitively slow and expensive. The next-generation phenomics project is a collaborative, multidisciplinary effort to leverage advances in image analysis, crowdsourcing, and natural language processing to develop and implement novel approaches for discovering and scoring the phenome, the collection of phenotypic characters for a species. This research represents a new approach to data collection that has the potential to transform phylogenetics research and to enable rapid advances in constructing the Tree of Life. Our goal is to assemble large phenomic datasets built using new methods and to provide the public and scientific community with tools for phenomic data assembly that will enable rapid and automated study of phenotypes across the Tree of Life.
Funding Statement

This work is funded by the NSF grant DEB-1208256, AVAToL: Next Generation Phenomics for the Tree of Life.

Introduction

Biologists and non-biologists alike relate intuitively to the natural world and its underlying scientific principles through phenotypes. Phenomic data (e.g., morphology, behavior, physiology and other phenotypic traits) are also fundamental to inferring evolutionary histories\textsuperscript{1,2,3,4,5} and enable systematists to integrate fossil taxa directly into phylogenetic trees. The placement of extinct taxa in phylogenies is essential for understanding patterns of diversification\textsuperscript{6,7} and can greatly improve our understanding of trait evolution\textsuperscript{8}. Thus, constructing the Tree of Life (ToL), representing the evolutionary history of all organisms, and understanding the patterns and processes of evolution are impossible without phenomic data.

Technological advances, such as next-generation sequencing, have dramatically increased the scale and decreased the cost of molecular sequencing, transforming the field of molecular phylogenetics. By contrast, matrices of phenotypic characters for phylogenetic analysis are still largely generated manually using methods that have not changed significantly for decades. This situation represents a major bottleneck for assembling the ToL and for evolutionary biology research in general. This problem motivated the organization of the next-generation phenomics project. Biologists working with phenotypic data from taxa across the ToL are challenged by the difficulty of discovering and scoring characters, generating images that describe characters, and annotating and extracting phylogenetically informative data from legacy taxonomic and natural history literature. The next-generation phenomics project is leveraging innovations from computer science and engineering to build new tools to assemble phenomic character-by-taxon matrices cheaply and efficiently. Our project focuses on three distinct areas: 1) computer vision approaches to discover and score characters; 2) crowdsourcing approaches to increase the speed of scoring matrices and generating datasets enriched with labeled anatomical images; and 3) natural language processing approaches to extract character data to build matrices from legacy taxonomic literature (Figure 1). Our team includes experts in these fields and a consortium of phylophenomic practitioners studying diverse groups across the ToL. These practitioners will provide data and guidance for developing methods and testing new tools.

**Computer Vision for Character Discovery and Character Learning**

Traditionally, phenotypic characters have been scored in phylogenetic matrices by scientists with physical access to specimens. However, with the proliferation of inexpensive high-resolution digital cameras and the widespread availability of Scanning Electron Microscopy (SEM) and Computed Tomography (CT) equipment for natural history research, it is now feasible to capture high quality images that can be scored directly without access to specimens. Computer vision studies suggest that it should be possible to automate scoring from such images, which would improve the speed and consistency of matrix construction. In addition, advances in machine learning show promise of being able to discover new characters, accelerating the construction of new matrices and the expansion of existing ones.
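To make the automated-scoring idea concrete, the sketch below casts character learning as ordinary supervised image classification. It is a minimal illustration only, not the project's actual pipeline: the directory layout (one folder of training images per character state), the HOG features, the linear SVM, and all parameter values are assumptions chosen for brevity.

```python
# A minimal sketch of scoring a two-state character from specimen images.
# The images/<state>/*.png layout and all parameters are illustrative
# assumptions, not the AVAToL project's actual pipeline.
from pathlib import Path

import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def featurize(path, shape=(128, 128)):
    """Load as grayscale, normalize size, extract HOG shape/texture features."""
    img = resize(imread(path, as_gray=True), shape)
    return hog(img, orientations=8, pixels_per_cell=(16, 16))

X, y = [], []
for state_dir in Path("images").iterdir():        # e.g. images/fused, images/separate
    if not state_dir.is_dir():
        continue
    for img_path in sorted(state_dir.glob("*.png")):
        X.append(featurize(img_path))
        y.append(state_dir.name)                  # character state = folder name

# Cross-validated scoring accuracy gives a first estimate of whether this
# character can be scored automatically from this view.
print(cross_val_score(LinearSVC(), np.array(X), y, cv=5).mean())
```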
In this project, we expand upon recent work on automated species identification of arthropods\textsuperscript{9,10,11,12,13} to perform both character learning (including cell scoring) and character discovery (identification of new characters and states). In character learning the goal is to teach the computer to score the presence, absence, or quantitative value of a known character. In character discovery the goal is for the computer to discover and score new candidate characters. Character learning begins with a set of training images for the computer, accompanied by metadata applied by a scientist who specifies particular character states. In addition, character learning may require that some of the images have graphical annotations indicating image regions relevant to the character (e.g., a bounding box around the character). The graphical annotations can capture constraints such as the presence of a feature (e.g., an additional wing, a protrusion or indentation) or the spatial relationships between features (e.g., fused or separated). The graphical annotations distinguish between structural (presence/absence of subpart), topological (fused vs. separate), and appearance (color, texture) features.

Character discovery begins with a set of images, but without any graphical annotations. The metadata in this case must specify the taxonomic group, and may include information such as anatomical orientation (e.g., dorsal, ventral), scale (e.g., entire specimen, detailed view), and part (e.g., skull, pelvis, leaf, flower). We are developing algorithms that search for characters in the image set. Of course the discovery of homologies is a complex process, and the computers might discover differences of little biological significance. However, it is of interest to learn whether computers can identify homologies that biologists happened to miss. An important challenge is to develop search criteria that lead to the discovery of meaningful characters as opposed to accidental properties of the specimens.

**Crowdsourcing Applied to Character Discovery and Character Learning**

Having images that support character states and cell scoring (enriched matrices) is increasingly important to scientists collecting phenomic data for the ToL. Labeled images illustrating homologies foster clear communication of an investigator’s character concepts, and joining words to pictures has greatly improved communication in emergent phylophenomic projects done by large teams\textsuperscript{5,14}. Having images in cells makes the process of character scoring more repeatable because other researchers can better understand the words and numbers used to score a cell when there is an image associated with it. Unfortunately, building enriched phylogenetic character matrices linking thousands of cells with images can be a tedious and prohibitively time-consuming process. Even with powerful NSF-supported online tools to organize and store such data (e.g., MorphoBank\textsuperscript{15}), much of the data entry must still be done manually\textsuperscript{16}. We are developing software to automate image entry into unscored matrix cells and crowdsourcing approaches to score these cells based on character state exemplars that have been established by scientists. We can then perform experiments to compare the abilities of experts, citizen scientists, and computers to score the cells from images.
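One simple way to turn such volunteer comparisons into matrix scores is majority voting with an agreement threshold, so that contested cells are routed back to an expert. The sketch below is a toy illustration under assumed data; the project's actual aggregation protocol is not specified here.

```python
# Toy aggregation of volunteer scores for one matrix cell. The votes and
# the 60% agreement threshold are assumptions, not the project's protocol.
from collections import Counter

def aggregate_cell(votes, min_agreement=0.6):
    """Return the winning character state, or None when volunteers disagree
    too much and the cell should be routed to an expert for review."""
    state, n = Counter(votes).most_common(1)[0]
    return state if n / len(votes) >= min_agreement else None

# Five hypothetical volunteers compared a cell image against labeled
# exemplar images for the states "fused" and "separate".
print(aggregate_cell(["fused", "fused", "separate", "fused", "fused"]))  # -> fused
print(aggregate_cell(["fused", "separate", "fused", "separate"]))        # -> None
```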
Once a phenomics researcher establishes which “view(s)” to associate with a character and specifies exemplar images (with labels) for the character states, images for those views can be generated for the entire collection of species in the matrix, and these unscored images can be placed into matrix cells automatically by a computer. Crowdsourcing experts on our team are designing experiments in which volunteer human annotators score the cells and label cell images to complete a sample of enriched matrices. A major challenge in crowdsourcing is to find effective ways to elicit broad participation in a task. We are exploring several such methods, including: (a) creating a game where the motivation is having fun or earning rewards or recognition, (b) creating an interface that emphasizes the citizen science aspect (motivation is altruistic: to help science), (c) paying people to score the cells via Amazon Mechanical Turk (https://www.mturk.com/), an online marketplace where requestors can solicit human intelligence to solve problems for pay, and (d) assigning the task to student groups in undergraduate classes. In all cases, volunteers will be shown an image that they must compare to at least two labeled exemplar character state images for a given character. The volunteer then selects which of the states they think the new image is most like and places the label. Software will then organize these global observations into new phenomic matrices for analysis. **Natural Language Processing for Building Phylogenetic Character Matrices** For centuries scientists have composed detailed descriptions of species and groups of organisms. This rich legacy of taxonomic literature includes descriptions of phenotypic characters from thousands of species, including many without molecular data. Little of this wealth of taxonomic literature has, however, been mined for phylogenetic data. Mining literature can be discouragingly tedious and is complicated by the different styles and formats, or even languages, of character descriptions for different taxa and different vocabularies used to describe homologies\textsuperscript{17}. Nonetheless, recent efforts in (1) large scale digitization of scientific texts, such as the Biodiversity Heritage Library, (2) optical character recognition (OCR) technologies to convert digital images of printed material into text, (3) creation of sophisticated and detailed ontologies for phenomic characters, as in the Phenoscape project\textsuperscript{18}, and (4) development of automated text mining and natural language processing approaches to annotate and extract relevant data tailored to the taxonomic literature, make possible high-throughput generation of phenomic data sets from legacy taxonomic literature. We are creating a set of automated tools that systematists studying any part of the ToL can use to transform legacy taxonomic or natural history texts into phylogenetic character matrices. Our first focus is improving automated, semantic annotation methods to identify characters from taxonomic descriptions. CharaParser is a new tool to automate the annotation of phenomic descriptions that (a) describe a variety of taxon groups, (b) are written in telegraphic sublanguage (concise, technical descriptions), and (c) are published in a variety of formats\textsuperscript{19}. The output includes an annotation of all characters as an XML file. 
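The annotated XML can then be reduced to rows of a character-by-taxon matrix. The element and attribute names in the sketch below are invented for illustration; CharaParser's real output schema is richer and differs in detail.

```python
# Minimal sketch: reduce a character-annotated XML description to one row
# of a character-by-taxon matrix. The tag and attribute names below are
# invented for illustration; CharaParser's actual schema differs.
import xml.etree.ElementTree as ET

SAMPLE = """\
<description taxon="Navicula sp.">
  <character name="valve shape" value="lanceolate"/>
  <character name="striae orientation" value="parallel"/>
</description>"""

def matrix_row(xml_text):
    """Return (taxon, {character name: scored state}) for one description."""
    root = ET.fromstring(xml_text)
    return root.get("taxon"), {c.get("name"): c.get("value")
                               for c in root.iter("character")}

taxon, row = matrix_row(SAMPLE)
print(taxon, row)  # Navicula sp. {'valve shape': 'lanceolate', ...}
```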
CharaParser has performed well on texts describing numerous organisms\textsuperscript{19,20}, and we are extending CharaParser for use across the ToL by further developing algorithms for discourse analysis of natural language descriptions and algorithms for disambiguating the senses (i.e., meanings) of words, phrases, or text segments. We also are developing approaches to translate annotated XML files into phylogenetic character matrices. Although we anticipate that our natural language processing tools will be useful across the ToL, we are focusing attention on descriptions of microbial taxa. Microbial lineages of both eukaryotes and prokaryotes, which may appear to have few obvious and easily observable morphological characters, represent an important frontier for phenomic research. Many microbial phenomic features of interest, such as metabolism, are described by text and not represented by images. Furthermore, microbial diversity studies that rely on genomic approaches rarely incorporate phenomic data; trait data are often not available in a format that is amenable to computational approaches routinely used in molecular biology. Natural language processing approaches may, therefore, represent our best opportunity to build large-scale phenomic data matrices for microbial lineages and link these historical data to the growing body of genomic knowledge.

**Community Involvement and Outreach**

For our new tools and methods to be optimally effective, they must be built in close collaboration with the user community. Since we want our tools to be useful throughout the ToL, we have assembled a consortium of phylophenomic practitioners working on diverse groups (e.g., sponges, mammals, diatoms, nematodes, seed plants, as well as Archaea and Bacteria) to work with the tool developers. These practitioners will generate images applicable to real phenomic research problems, supply relevant textual sources for natural language processing, and test and evaluate the performance of our tools. Although we are currently in the early stages of most tool development, we will actively solicit input and participation from the broader systematics and evolutionary biology community and provide education and training resources for assembling phenomic data sets. Readers interested in learning more about our project, following our progress, and eventually testing or using our tools can obtain more information from our project webpage (http://avatol.org/).

The goal of this project is to reduce the barriers that prevent systematists and evolutionary biologists from using the wealth of phenomic data. We believe that such changes could have a profound effect on efforts to build and interpret the ToL. We stress that these tools will in no way obviate the need for organismal and phenotypic expertise or direct study of specimens; no scientists will be replaced by robots or computers. Rather, we seek to reduce the time and resources needed to assemble phenotypic datasets and provide tools to help discover new characters, allowing scientists to work with phenotypic data, not work for phenotypic data.

**References**

1. Gauthier, J. A., A. Kluge, and T. Rowe. Amniote phylogeny and the importance of fossils. *Cladistics* 1988; 4:105-209.
2. Lewis, P. O. A likelihood approach to estimating phylogeny from discrete morphological character data. *Syst Biol.* 2001; 50(6):913-925.
3. Wiens, J. J. The role of morphological data in phylogeny reconstruction. *Syst Biol.* 2004; 53:653-661.
4. Losos, J. B., D. M. Hillis, and H. W. Greene. Who speaks with a forked tongue? *Science* 2012; 338:1428-1429.
5. O’Leary, M. A., J. I. Bloch, J. J. Flynn, T. J. Gaudin, A. Giallombardo, N. P. Giannini, S. L. Goldberg, B. P. Kraatz, Z.-X. Luo, J. Meng, X. Ni, M. J. Novacek, F. A. Perini, Z. Randall, G. W. Rougier, E. J. Sargis, M. T. Silcox, N. B. Simmons, M. Spaulding, P. M. Velazco, M. Weksler, J. R. Wible, and A. L. Cirranello. The placental mammal ancestor and the post-KPg radiation of placentals. *Science* 2013; 339:662-667.
6. Quental, T. B., and C. R. Marshall. Diversity dynamics: molecular phylogenies need the fossil record. *Trends Ecol. Evol.* 2010; 25:434-441.
7. Rabosky, D. L. Extinction rates should not be estimated from molecular phylogenies. *Evolution* 2010; 64:1816-1824.
8. Slater, G. J., L. J. Harmon, and M. E. Alfaro. Integrating fossils with molecular phylogenies improves inference of trait evolution. *Evolution* 2012; 66:3931-3944.
9. Mortensen, E., E. L. Delgado, H. Deng, D. Lytle, A. Moldenke, R. Paasch, L. Shapiro, P. Wu, W. Zhang, and T. G. Dietterich. Pattern recognition for ecological science and environmental monitoring: An initial report. In Automated Taxon Identification in Systematics: Theory, Approaches and Applications. CRC Press; 2007; 189-206.
10. Larios, N., H. Deng, W. Zhang, M. Sarpola, J. Yuen, R. Paasch, A. Moldenke, D. A. Lytle, S. R. Correa, E. N. Mortensen, L. G. Shapiro, and T. G. Dietterich. Automated insect identification through concatenated histograms of local appearance features. Machine Vision and Applications 2008; 19:105-123.
11. Martinez, G., W. Zhang, N. Payet, S. Todorovic, N. Larios, A. Yamamuro, D. Lytle, A. Moldenke, E. Mortensen, R. Paasch, L. Shapiro, and T. G. Dietterich. Dictionary-free categorization of very similar objects via stacked evidence trees. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009) 2009; 1-8.
12. Lytle, D. A., G. Martínez-Muñoz, W. Zhang, N. Larios, L. Shapiro, R. Paasch, A. Moldenke, E. N. Mortensen, S. Todorovic, and T. G. Dietterich. Automated processing and identification of benthic invertebrate samples. J N Amer Benthological Soc 2010; 29:867-874.
13. Larios, N., J. Lin, M. Zhang, D. Lytle, A. Moldenke, L. Shapiro, and T. Dietterich. Stacked spatial-pyramid kernel: An object-class recognition method to combine scores from random trees. 2011 IEEE Workshop on Applications of Computer Vision (WACV) 2011; 329-335.
14. Ramírez, M. J., J. A. Coddington, W. P. Maddison, P. E. Midford, L. Prendini, J. Miller, C. E. Griswold, G. Hormiga, P. Sierwald, N. Scharff, S. P. Benjamin, and W. C. Wheeler. Linking of digital images to phylogenetic data matrices using a morphological ontology. *Syst Biol.* 2007; 56:283-294.
15. O’Leary, M. A., and S. Kaufman. MorphoBank: phylophenomics in the cloud. *Cladistics* 2011; 27:1-9.
16. Ekdale, E. G., A. Berta, and T. A. Demere. The comparative osteology of the petrotympanic complex (ear region) of extant baleen whales (Cetacea: Mysticeti). *PLoS One* 2011; 6:1-42.
17. Deans, A. R., M. J. Yoder, and J. P. Balhoff. Time to change how we describe biodiversity. *Trends Ecol. Evol.* 2012; 27:78-84.
18. Dahdul, W. M., J. P. Balhoff, J. Engeman, T. Grande, E. J. Hilton, C. Kothari, H. Lapp, J. G. Lundberg, P. E. Midford, T. J. Vision, M. Westerfield, and P. M. Mabee. Evolutionary characters, phenotypes and ontologies: curating data from the systematic biology literature. *PLoS One* 2010; 5:e10708.
19. Cui, H. CharaParser for fine-grained semantic annotation of organism morphological descriptions.
J Amer Soc Inform Sci Tech 2012; 63:738-754.
20. Cui, H., S. Singaram, and A. Janning. Combine unsupervised learning and heuristic rules to annotate morphological characters. Proceedings of the 2011 Annual Meeting of the American Society of Information Science and Technology 2011.
CAN PUBLIC DEBT ENHANCE DEMOCRACY?

CLAYTON P. GILLETTE*

ABSTRACT

This Essay draws on historical and current examples to examine the extent to which public creditors can enhance democracy by monitoring public officials in a manner that compensates for the failures of the government debtor’s constituents to monitor public officials. Creditors and constituents may share significant interests, depending on the structure of security arrangements for public debt and the identity of the debtors. Where interests overlap, the capacity of creditors to overcome collective action problems suffered by constituents may transform creditors into surrogates for constituents. Whether creditors are willing to play this role, however, may depend on the existence of alternatives to creditor monitoring, such as diversification and market constraints on default. The Essay concludes with an examination of the plausible scope of creditor monitoring in contemporary settings of sovereign and state and local debt.

* Max E. Greenberg Professor of Contract Law, NYU School of Law. An earlier version of this Essay was presented as the annual George Wythe Lecture at the College of William and Mary Marshall-Wythe School of Law in April 2008. Thanks to Alyssa Bell for substantial research support and to Kevin Davis and Katharina Pistor for conversations and comments.

# Table of Contents

**INTRODUCTION** ... 939
I. **THE IDENTITY OF CREDITOR AND CONSTITUENT INTERESTS** ... 944
II. **PUBLIC CREDITORS AS PUBLIC MONITORS** ... 950
  A. *The Limits of Monitoring: Shareholders and Constituents* ... 950
  B. *Creditors to the Rescue?* ... 966
III. **WILLINGNESS TO MONITOR** ... 975
IV. **THE PLAUSIBLE SCOPE OF CONTEMPORARY CREDITOR MONITORING** ... 981
CONCLUSION ... 987

INTRODUCTION

In the early fifteenth century, the Republic of Genoa was teetering on the brink of financial disaster. Plague, war, and internal dissent had increased Genoa’s debt to the point that 90 percent of the Republic’s ordinary income was required to service interest obligations.\textsuperscript{1} Creditors of the Republic recognized the situation as unsustainable, and acted to protect their interests from what must have seemed like imminent bankruptcy.\textsuperscript{2} In 1407, they founded the Casa di San Giorgio (“San Giorgio”), an institution that was nominally a private association, but the function of which was to bring order to the Republic’s finances and reduce the risk of debt repudiation.\textsuperscript{3} San Giorgio began its operations by exchanging Genoa’s massive debt for equity shares in the new association.\textsuperscript{4} San Giorgio then made “grants” to fund the Republic’s governmental activities. In return, San Giorgio received the right to collect taxes, to operate the Republic’s profitable salt monopoly and mint, and to govern some of the Republic’s overseas territories.\textsuperscript{5} Thus, although San Giorgio’s 11,000 shareholders, governed by a corporate structure that involved several councils and an eight-person board of directors called “The Protectors of San Giorgio,” were nominally equity holders, they were effectively lenders secured by payments from dedicated revenue streams of the Republic.\textsuperscript{6} San Giorgio was, in some ways, a beneficent creditor.
It distributed large sums to Genoese charities,\textsuperscript{7} forgave many of the Republic’s debts during times of excusable fiscal distress, and dedicated substantial contributions to the operation of the state.\textsuperscript{8} This beneficence may have been motivated largely by the self-interest of the shareholders, who also tended to be residents of Genoa.\textsuperscript{9} Thus, imposing unduly harsh conditions on the Republic would mean imposing such conditions on themselves.\textsuperscript{10} But most importantly, San Giorgio regularized access to credit, attracted shareholders by reducing the risk of default, and—by assuring repayment of loans—brought political stability to a state that had previously so abused the financial wherewithal of its constituents\textsuperscript{11} through policies such as forced loans that, in 1339, the enraged constituents burned the Republic’s tax and debt records in a popular revolt.\textsuperscript{12} Machiavelli so highly regarded the subsequent effect of San Giorgio on the finances and governance of Genoa that he referred to the association as “the preserver of the country and the Republic.”\textsuperscript{13}

It would be inappropriate, however, to think of San Giorgio as a crucible of democracy. Although a “state within a state,”\textsuperscript{14} San Giorgio was primarily interested in protecting creditors’ rights and reversing Genoa’s reputation for debt repudiation.\textsuperscript{15} In doing so, San Giorgio arguably enriched its own shareholders by transforming the Republic into a mere pensioner and shifting the obligation to support the state from the merchant beneficiaries of the state’s commercial ventures to the general public.\textsuperscript{16} Moreover, San Giorgio enforced its contractual rights in ways that might be seen as inconsistent with democratic values. San Giorgio had the right not only to prosecute tax evaders, but to torture them, excommunicate them, and sentence them to death.\textsuperscript{17} There is, however, no indication of the use of waterboarding.

---
1. STEVEN A. EPSTEIN, GENOA AND THE GENOESE, 958-1528, at 229 (1996).
2. See id.
3. See id.; Michele Fratianni, Government Debt, Reputation, and Creditors’ Protections: The Tale of San Giorgio, 10 REV. FIN. 487, 487 (2006).
4. See id.
5. See JAMES MACDONALD, A FREE NATION DEEP IN DEBT: THE FINANCIAL ROOTS OF DEMOCRACY 95 (2003).
6. Id. at 95-96.
7. See Fratianni, supra note 3, at 496. As I hope to indicate below, the capacity to distinguish between excusable fiscal distress and inexcusable strategic nonpayment by the debtor is a hallmark of the monitoring capacity of creditors. The fact that San Giorgio shareholders made this distinction indicates their capacity to monitor the conduct of the state.
8. Id.
9. See MACDONALD, supra note 5, at 142 (concluding that “the eleven thousand shareholders of San Giorgio represented the large majority of households in the city”).
10. See Fratianni, supra note 3, at 488.
11. I use “constituents” throughout to refer to a group broader than the electorate, which may be limited by voting qualifications.
12. MACDONALD, supra note 5, at 80-81.
13. Id. at 96 (quoting Machiavelli).
14. Fratianni, supra note 3, at 495.
15. Id. at 487.
16. See id. at 495.
17. See MACDONALD, supra note 5, at 95.
Even if San Giorgio was undemocratic, its role in creating more widespread wealth, diluting the authority of the few autocratic families that had theretofore ruled Genoa, and constraining the exercise of political power by controlling financial affairs suggests that its policies greatly facilitated the growth of democratic values.\textsuperscript{18} San Giorgio certainly did not intend the fomentation of democracy to be one of its objectives, but governance through structures more likely to align the interests of officials and constituents may have been an inevitable byproduct of its activities. It was not simply that San Giorgio involved a complicated governance structure—a General Assembly of 480 shareholders and an elected protectorate of 8 members with financial expertise\textsuperscript{19}—that reduced concentrations of power by giving rise to a large pool of prospective public officials. It was also the case that the control that creditors exercised over the Republic’s access to credit constrained those who sought political control over Genoa from attaining power through the abuse of constituents’ rights.\textsuperscript{20}

18. Id. at 96. Macdonald explains that San Giorgio effectively displaced political feuds by omitting the parties who placed power politics ahead of financial security: The Genoese, unable to form a cohesive polity on the basis of one-man-one-vote, had effectively formed [through San Giorgio] a parallel polity based on formalized power-sharing that largely excluded the two most disruptive elements in city life: the ex-feudal aristocracy and the urban poor. The Fieschi and Grimaldi families, so dangerous politically, were almost unrepresented in its administration; whereas the urban mercantile nobility, such as the Spinola family, featured prominently. Id.
19. Fratianni, supra note 3, at 493-94 (reporting that the election was not fully democratic, in that persons eligible for election were limited to a subset of citizens listed in a secret book that was updated annually). Indeed, it is not clear that even this level of democratic election was always available. Cf. MACDONALD, supra note 5, at 96 (noting that “protectors” were appointed, and they, in turn, appointed their successors at the end of their term).
20. See AVNER GREIF, INSTITUTIONS AND THE PATH TO THE MODERN ECONOMY: LESSONS FROM MEDIEVAL TRADE 250 (2006).
21. See, e.g., MICHAEL SONENSCHER, BEFORE THE DELUGE: PUBLIC DEBT, INEQUALITY, AND THE INTELLECTUAL ORIGINS OF THE FRENCH REVOLUTION 3-8 (2007).

San Giorgio, then, stands as an exemplar of an interesting but underanalyzed phenomenon of public finance. It has become commonplace to suggest that the institutions that support public credit simultaneously create incentives for democratic governance within constitutional constraints.\textsuperscript{21} According to this theory, public
creditors will condition their loans on the sovereign’s creation or toleration of institutions that increase the probability of payment by constraining the capacity of the debtor either to use loaned funds for unanticipated objectives (the moral hazard problem) or to repudiate or unilaterally alter the repayment obligation.\textsuperscript{22} Frequently, these institutions take the form of representative bodies—at least representative of creditors—that are able to control tax collection and sovereign expenditures.\textsuperscript{23} Through these footholds in the political process, creditors arguably set in motion the forces that eventually matured into full political participation in advanced democracies. Moreover, commentators now perceive these institutions as precursors to rapid commercialization, economic growth, and the general enforcement of contract and property rights. In short, the presence of public debt is seen as a catalyst for democracy and robust markets rather than simply a means of financing the self-interested objectives of political officials.\textsuperscript{24}

In this Essay, I explore an additional mechanism that identifies public credit with democratic governance. I suggest that, notwithstanding some inevitable divergence between the interests of governmental creditors and debtors, public debt can enhance the representative nature of democratic governance to the extent that creditors engage in monitoring that transforms them into surrogates or virtual representatives for the debtor’s constituents. To be specific, creditors may have the capacity to monitor government officials in a manner that both complements and improves the institutions that constituents at large utilize to constrain public officials. Indeed, creditors’ incentives allow them to overcome collective action problems that frustrate constituent monitoring of their officials. It was just this form of monitoring that presumably allowed the members of San Giorgio to distinguish between threatened defaults generated by benign conditions and those that arose from opportunistic behavior of public officials.\textsuperscript{25} Ostensibly, the creditors were willing to waive the former, but not the latter. Any such strategy, however, required that creditors actively monitor officials to determine the source of the threatened default.\textsuperscript{26}

On reflection, the claim that creditor monitoring can enhance democracy should not be surprising. The capacity of creditors to constrain the activities of officials in private firms is the subject of a vast literature.\textsuperscript{27} Creditors of firms presumably have interests that, to some extent, coincide with those of shareholders, insofar as profit-maximizing activities increase both the capacity of debtors to repay debts and the value of shares. Thus, although creditors provide financial advice to debtors or monitor fiscal behavior to constrain the firm’s officers from misusing corporate assets in a manner that would threaten repayment,\textsuperscript{28} they simultaneously confer a benefit on shareholders who lack either the capacity or the willingness to engage in similar monitoring.

22. See, e.g., Fratianni, supra note 3, at 494-96.
23. See, e.g., MACDONALD, supra note 5, at 95 (referring to San Giorgio as an example of such a body).
24. See SONENSCHER, supra note 21, at 3 & n.7.
25. See Fratianni, supra note 3, at 488-89, 496.
My objective here is to explore the extent to which a similar relationship exists between creditors of government and the constituents of government debtors. My reference to democracy in this Essay is somewhat idiosyncratic. It does not necessarily entail direct participation by constituents in political processes. Instead, it entails any political system in which officials face significant institutional constraints to comport themselves in a manner that is consistent with the interests of their constituents. Typically, those constraints come from the constituents themselves in their role as voters, or from designated third parties, such as courts, that are charged with enforcement of constitutionally dictated restrictions on governmental authority. Thus, my use of the term “democracy” necessarily embraces the concept of virtual representation by proxies who are not elected or accountable to constituents, but who, by virtue of sharing the constituents’ interests, serve to advance their preferences, which public officials might otherwise ignore. The alignment of interests between creditors and constituents is important because constituents—even when acting as voters—face numerous obstacles to serving as effective monitors of their officials. Thus, the claim that public creditors can enhance democracy assumes that creditors can overcome the limitations on monitoring by noncreditor constituents. It is plausible, of course, that public creditors could serve as effective monitors of officials, but still not serve as an effective substitute for the electorate. This would be the case, for instance, if public creditors are monitoring for behavior that varies from the behavior that concerns the electorate, or if the time horizon of creditors varies from the time horizon of the electorate. Thus, the claim that creditors enhance democracy also assumes that within the range that creditors can monitor and noncreditor constituents cannot, the interests of creditors, and thus the conditions for which they would monitor, coincide with the interests of noncreditor constituents.

26. See id. at 496.
27. See Andrei Shleifer & Robert W. Vishny, A Survey of Corporate Governance, 52 J. FIN. 737, 757 (1997) (describing some of the existing literature).
28. See Frances E. Freund, Lender Liability: A Survey of Common Law Theories, 42 VAND. L. REV. 853, 856 (1989).

I. THE IDENTITY OF CREDITOR AND CONSTITUENT INTERESTS

The possibility that public debt could actually benefit democratic governance is by no means self-evident. Indeed, public debt is frequently considered antithetical to good government.\textsuperscript{29} After all, capital to which credible commitment provides easy access can be used for bad reasons as well as good.
Much of the early history of public debt, particularly that incurred by hereditary monarchs, is written in the blood of destructive wars financed by foreign capital, the corruption of officials by financiers who advanced sums in return for a subsequently abused right of tax collection, and the reduced productivity created by the use of capital for forced loans rather than for the infrastructure and public goods that one might imagine constituents would have preferred.\textsuperscript{30} Because political officials have the capacity to raise taxes, they suffer fewer constraints than officials of firms, for whom debt may serve as a bond to pay future cash flows to shareholder recipients of the debt.\textsuperscript{31}

The San Giorgio experience illustrates how financial arrangements that mollify creditors are not necessarily embraced by those who must pay the debt service, even when capital is put to good use. Governments could favor use of limited funds to repay creditors over use of the same funds to fulfill political and social obligations to constituents.\textsuperscript{32} The common practice in late and post-medieval England and Europe of granting creditors the right to collect debts directly, rather than to receive payments funneled through the debtor’s treasury,\textsuperscript{33} suggests distrust of governmental willingness both to collect sufficient revenues from constituents and to pay creditors those funds that were collected. The fact that creditors were from a small, propertied class of nobles and merchants,\textsuperscript{34} or worse yet, foreigners who feared debt repudiation more than they feared the hostility of taxpayers,\textsuperscript{35} exacerbated the divergent interests between creditors and constituents.

29. See, e.g., SONENSCHER, supra note 21, at 3.
30. See, e.g., MACDONALD, supra note 5, at 138-44; DAVID STASAVAGE, PUBLIC DEBT AND THE BIRTH OF THE DEMOCRATIC STATE: FRANCE AND GREAT BRITAIN, 1688-1789, at 52-53 (2005).
31. See Michael C. Jensen, Agency Costs of Free Cash Flow, Corporate Finance, and Takeovers, 76 AM. ECON. REV. 323, 324 (1986).
32. See SONENSCHER, supra note 21, at 11.
33. See, e.g., STASAVAGE, supra note 30, at 60 (noting that English kings “often secured loans by giving their creditors the right to directly collect certain Crown revenues”).

Moreover, there are conditions under which creditors will fail to monitor, so that the mere extension of credit, for good reasons or bad, reveals little reason to believe that creditors and constituents share interests that would make the former group a useful proxy for the latter. Monitoring may be futile where sovereigns suffer little compunction about default or debt repudiation. France, for instance, witnessed five defaults between the mid-sixteenth and mid-seventeenth centuries.\textsuperscript{36} Subsequent monarchs avoided repudiation, but unilaterally reduced contracted-for interest rates.\textsuperscript{37} Spanish debt was “restructured” after defaults on at least eight occasions between 1557 and 1662.\textsuperscript{38} Indeed, monitoring becomes superfluous to the extent that there is no effective way to enforce payments by sovereigns who wish to default, a phenomenon that resurfaced in the 1920s when sovereign borrowers as diverse as Russia, Mexico,
and Turkey defaulted notwithstanding the apparent absence of any fiscal distress that warranted nonpayment.\textsuperscript{39}

The ineffectiveness of monitoring is, of course, a relative matter. Creditors will still be willing to extend credit to risky debtors if there exists a reasonable substitute for monitoring. Traditionally, creditors have demanded risk premiums in the form of higher interest rates from sovereigns whose history indicated a higher likelihood of default, to compensate for the higher risk of nonpayment.\textsuperscript{40} Although this tactic has been economically rational for creditors, it necessarily imposes a more significant financial burden on the debtors’ constituents. After French sovereigns in the late seventeenth and early eighteenth centuries defaulted on outstanding debts and unilaterally reduced interest rates, subsequent French sovereign borrowing occurred only at interest rates significantly higher than those charged for English sovereign borrowing,\textsuperscript{41} a fact that ostensibly contributed to the fiscal crises underlying the French Revolution.\textsuperscript{42} Alternatively, potential creditors may refuse to extend credit at all to sovereigns that have reneged on existing debts without justification.\textsuperscript{43} Assuming that the sovereign has the need for capital infusions and would use the proceeds to advance constituent interests, one can hardly consider creditor refusals to lend based on the defaults of former governments to be evidence of unity of interests with constituents.

These concerns about public credit found voice in David Hume’s reaction to the vast debts incurred by England to fight the Seven Years War. In floating the prospect of the state’s declaration of voluntary bankruptcy, Hume famously wrote: “[E]ither the nation must destroy public credit or public credit will destroy the nation.”\textsuperscript{44}

34. Id. at 135 tbl.6.1 (reporting that in France, nobles and merchants made 77 percent of the loans to the State from 1682 to 1700, and 81 percent of the loans from 1730 to 1788).
35. Id. at 135 tbl.6.2 (reporting that in France, foreign creditors made 12 percent of loans to the State from 1730 to 1749, and 18 percent of the loans from 1770 to 1789).
36. See MACDONALD, supra note 5, at 143 (noting that there were “five major bankruptcies ... the last four of which were accompanied by massive repudiations of debt”).
37. See STASAVAGE, supra note 30, at 89.
38. MACDONALD, supra note 5, at 129-30.
39. MICHAEL TOMZ, REPUTATION AND INTERNATIONAL COOPERATION: SOVEREIGN DEBT ACROSS THREE CENTURIES 86-87 (2007).
40. See STASAVAGE, supra note 30, at 89.
41. See id. at 69.
42. See id. at 95.
43. See TOMZ, supra note 39, at 86-89.
44. DAVID HUME, On Public Credit (1752), reprinted in HUME: POLITICAL ESSAYS 166, 174 (Knud Haakonssen ed., 1994).

Hume’s objections, however, were not simply economic. Instead, they were based largely on the political implications of debt. In particular, his concerns implied a necessary contradiction between
public debt and the conditions for democracy.\textsuperscript{45} Public debt exacerbated divergent interests within the nation, insofar as it facilitated concentration of capital among an urban merchant class that was supported by taxation on the “provinces.”\textsuperscript{46} Debt inflated prices and taxes; it placed too much authority in the hands of foreign creditors whose allegiance was inconsistent with the interests of Englishmen; and, in a reminder of the biblical admonition to live by the sweat of our labor,\textsuperscript{47} it would encourage a “useless and unactive life” by allowing creditors to live idly off the interest of their investments.\textsuperscript{48} The state’s voluntary declaration of bankruptcy might sacrifice the welfare of the thousands of people who held state obligations, but the alternative was to sacrifice millions “for ever to the temporary safety of thousands.”\textsuperscript{49} That would be the case if servicing debt were to consume so much of the sovereign’s assets as to render the state defenseless against its enemies. The sovereign debt would be extinguished, but it would be a “violent death,” as compared with the “natural death” of bankruptcy.\textsuperscript{50}

More recent literature is kinder to the relationship between public debt and democratic governance.\textsuperscript{51} In the most obvious connection, creditors who impose strict requirements on debtors that increase the probability of payment necessarily constrain the use of cash and reduce borrowing costs for the debtor’s constituents. San Giorgio’s tough tactics for debt collection revealed that relationship: long-term interest rates in Genoa were lower than those in virtually every other European financial center.\textsuperscript{52} More to the point, however, the terms that creditors demand from sovereign debtors induce the creation of democratic institutions.\textsuperscript{53} Those requirements result from the puzzle of public debt: although its extension could make a government economically secure enough to collect revenues necessary for repayment, any sovereign sufficiently powerful to enforce a regime of taxation necessary to service its debt might also be secure enough to repudiate its obligations; any government with the resources necessary to attract public credit could also have the capability of defaulting with relative impunity. Thus, creditors should prefer not only financial covenants to ensure repayment, such as negative pledge clauses and promises to maintain revenues (taxes) sufficient to service the debt, but also institutional changes that reduce the moral hazard of using borrowed funds for high risk endeavors or that frustrate incentives to default.

45. See id. at 169-70.
46. See id.
47. See Genesis 3:19 (“In the sweat of thy face shalt thou eat bread.”).
48. HUME, supra note 44, at 170.
49. Id. at 177.
50. Id. at 176.
51. See, e.g., SONENSCHER, supra note 21, at 3 n.7.
52. Fratianni, supra note 3, at 502.
53. See, e.g., SONENSCHER, supra note 21, at 3 n.7.
It may not be completely off the mark to think of those who loaned funds to rulers as the medieval equivalent of today’s venture capitalists, who, in return for funding, demand a seat on the board of directors and guide decisions so that they are consistent with the interests of shareholders generally. In effect, the adoption of institutional structures that restrict discretion constitutes a credible commitment from sovereigns to repay their debts. Historically, these credible commitments have consisted of private and public institutions that ensured the collection of funds sufficient to repay debts and the imposition of constraints on executives who might otherwise divert funds that were collected.\textsuperscript{54} As demonstrated by the work of Douglass North and Barry Weingast, these institutions of credible commitment have included the removal of authority from the executive (in England’s case, the King), such as occurred with the creation of the Bank of England to handle the government’s loan accounts, or the assignment to Parliament of the right to approve loans.\textsuperscript{55} The consequence was that public credit expanded, generating economic growth even as interest rates fell, and constitutional constraints initially instituted to ensure creditors’ rights evolved into constitutional and political constraints on the capacity of rulers to interfere with property and
Hist.} 803, 820-21 (1989).} contract rights generally.\textsuperscript{56} The expanded powers of Parliament, the inclusion of the Contracts Clause into the American Constitution, and the evolution of an independent judiciary in both societies are all related in large part to demands imposed on sovereigns by creditors to restrict the discretion of debtors.\textsuperscript{57} Indeed, the development of a rich jurisprudence that injected significant substance into the Contracts Clause emerged in the late nineteenth century, after states and cities, through irrational exuberance or corruption, borrowed to finance poorly capitalized railroads and then sought to repudiate their debt obligations when the promised commercial benefits failed to materialize.\textsuperscript{58} More recent and more nuanced analyses suggest that the mere presence of representative assemblies and democratic governance does not establish the ability of sovereigns to make credible commitments.\textsuperscript{59} That is, democratic institutions are neither a necessary nor a sufficient condition to forestall default or to create an environment of trust that translates into a financial environment that makes economic growth more plausible.\textsuperscript{60} Nevertheless, democratic institutions may be more conducive to the circumstances that permit the creation of credible commitments.\textsuperscript{61} Political parties that allow logrolling among various interests can provide creditors with assurances that they can build coalitions to resist default, and administrative bureaucracies can shield default decisions from the demands of an autonomous executive.\textsuperscript{62} In short, political institutions can reduce the risk of financial distress that might discourage creditors from lending to sovereigns.\textsuperscript{63} \textsuperscript{56} See North & Weingast, \textit{supra} note 55, at 824-28. \textsuperscript{57} See NORTH, \textit{supra} note 55, at 17-38; Lino A. Graglia, \textit{The Burger Court and Economic Rights}, 33 TULSA L.J. 41, 46-47 (1997). \textsuperscript{58} See 6 Charles Fairman, \textit{History of the Supreme Court: Reconstruction and Reunion, 1864-88, Part One} 918 (1971). The most important decisions about the reach of the Contracts Clause continue to involve constraints on governments that seek to repudiate their debts. \textit{See, e.g., U.S. Trust Co. v. New Jersey}, 431 U.S. 1, 30-32 (1977). \textsuperscript{59} David Stasavage, \textit{Credible Commitment in Early Modern Europe: North and Weingast Revisited}, 18 J.L. ECON. \& ORG. 155, 155-56 (2002). \textsuperscript{60} See STASAVAGE, \textit{supra} note 30, at 45-47. \textsuperscript{61} See Stasavage, \textit{supra} note 59, at 183-84 (describing the structure of partisan interests and political coalitions important for the creation of credible commitment). \textsuperscript{62} See STASAVAGE, \textit{supra} note 30, at 2-3. \textsuperscript{63} See id. 
These analyses suggest that, although the efforts of creditors to ensure repayment may be motivated by self-interest, they can generate as a byproduct a series of constraints on the antidemocratic tendencies of officials that is consistent with the interests of constituents who would otherwise have less capacity to monitor their officials.\textsuperscript{64} Moreover, economic historians attribute the developmental divide between rich and poor countries to the presence or absence of institutions that emerge from—or that are consistent with—the desire to facilitate public debt, while political scientists credit the same institutions with the degree of freedom enjoyed by a nation’s citizens.\textsuperscript{65} But I want to explore a stronger claim: that public credit can enhance democracy not simply because the desire to attract credit generates institutions that check the exercise of executive discretion, but also because creditors have incentives to monitor the exercise of that discretion in ways that overcome limits on the capacity of constituents to deploy those democratic institutions. In short, the institutions that simultaneously attract credit, support democracy, and encourage economic development are not self-enforcing. These institutions provide avenues of opportunity that constituents can exploit to monitor their officials.

II. PUBLIC CREDITORS AS PUBLIC MONITORS

A. The Limits of Monitoring: Shareholders and Constituents

If constituents fail to take advantage of institutions that facilitate monitoring, then there is little reason to believe that the presence of these institutions will enhance democracy. Unless, of course, some substitute group can compensate for the shortcomings in constituent monitoring. Can creditors play that role? That is, can creditors improve democratic governance, not simply by demanding institutional arrangements that make commitments to repay credible, but also by direct supervision of the governing process? And if they do so, can they exercise that supervision in a manner that aligns with the interests of constituents at large? In this section, I suggest that there is at least a theoretical basis for concluding that creditors have the capacity to engage in monitoring that constituents of the state will otherwise avoid. “Monitoring” in this context means monitoring for fiscal propriety. The interests of creditors lie in obtaining repayment of the funds they have loaned. There is little reason to believe that creditors would fill any gap in monitoring for official conduct that exhibits moral turpitude or lax governing skills, but does not have budgetary implications. Yet so much of what governments do, and—as I will suggest below—so much of what escapes the notice of constituents,\textsuperscript{66} is so directly tied to budgetary issues that creditor interventions with respect to financial conditions would appear to have significant implications for democratic governance. Here, a corporate analogy may be appropriate. \textsuperscript{64} See id. \textsuperscript{65} See generally \textit{Democracy, Governance, and Growth} (Stephen Knack ed., 2003) (containing articles describing these institutions as determinants of the relative poverty or development of nations).
One underlying assumption of corporate governance is that capital structure affects the performance of firms, largely because different capital structures influence the mechanisms of corporate governance and impose different forms of discipline on managers.\textsuperscript{67} Adding debt to the firm’s capital structure has the positive effect of inducing creditors to scrutinize managers in a manner that may be unavailable at the same cost to shareholders.\textsuperscript{68} Different creditors of firms have different capacities to monitor their debtors, although there is debate about which creditors enjoy which advantage.\textsuperscript{69} Much of the debate about the efficiency of secured credit, for instance, assumes that unsecured creditors or creditors who have wraparound security interests in all the firm’s assets may monitor the entire firm, while creditors who lend against specific assets of the firm may closely monitor only those assets.\textsuperscript{70} \textsuperscript{66} See infra notes 86-97 and accompanying text. \textsuperscript{67} See Frank H. Easterbrook, \textit{Two Agency-Cost Explanations of Dividends}, 74 AM. ECON. REV. 650, 653-54 (1984). \textsuperscript{68} See Jensen, \textit{supra} note 31, at 324 (noting that “debt ... reduces the cash flow available for spending at the discretion of managers”); George G. Triantis & Ronald J. Daniels, \textit{The Role of Debt in Interactive Corporate Governance}, 83 CAL. L. REV. 1073, 1078 (1995). \textsuperscript{69} See Robert E. Scott, \textit{A Relational Theory of Secured Financing}, 86 COLUM. L. REV. 901, 909-11 (1986) (discussing contradictory positions in context of secured credit). \textsuperscript{70} For a summary of the debate as it has played out in the literature, see id. at 904-11. Credit facilities for firms may also include covenants against actions that serve as observable proxies for imprudent debtor behavior.\footnote{See Clifford W. Smith, Jr., \textit{A Perspective on Accounting-Based Debt Covenant Violations}, 68 \textit{ACCT. REV.} 289, 289-90 (1993).} For example, covenants may mandate certain levels of performance, proscribe activities suggestive of managerial self-dealing, and constitute barometers of financial difficulties within the firm.\footnote{See Triantis \& Daniels, \textit{supra} note 68, at 1093.} Indeed, the very existence of fixed payment obligations is assumed to impose significant discipline on managers, because missed payments provide a readily detectable indication of mismanagement and trigger consequences more salient than those that attend other characteristics of managerial slack.\footnote{See \textit{id}.} Finally, debt issuance may reduce agency costs by retarding the capacity of managers to use corporate assets for projects that cannot be readily detected, but that deviate from the firm’s mission.\footnote{Easterbrook, \textit{supra} note 67, at 653-54; Scott, \textit{supra} note 69, at 909.} Debt covenants, for instance, may obligate managers to pay out “free cash flows,” that is, cash that cannot profitably be reinvested by the corporation, but that officers and managers might otherwise use to purchase perquisites or invest in projects with negative net present value.\footnote{See Richard A. Brealey \& Stewart Myers, \textit{Principles of Corporate Finance} 528 (6th ed. 2000); Easterbrook, \textit{supra} note 67, at 653-54; Jensen, \textit{supra} note 31, at 323-24.}
These positive aspects of debt are, in theory at least, enhanced when debt is issued on a secured basis.\footnote{See Saul Levmore, \textit{Monitors and Freeriders in Commercial and Corporate Settings}, 92 YALE L.J. 49, 56 (1982).} The effects of granting security interests are twofold. First, they create a bond between the debtor and creditor that encourages the latter to become significantly involved in the operations of the former.\footnote{See \textit{id.} at 55-56; Scott, \textit{supra} note 69, at 926.} Interactions that increase the debtor’s likelihood of success not only ensure repayment of outstanding debt, but also increase the likelihood of future profitable interactions between the parties.\footnote{See Scott, \textit{supra} note 69, at 937.} These same interactions facilitate monitoring by giving the creditor significant information about the debtor’s operations. Simultaneously, security interests encourage monitoring by providing the creditor with a significant payoff for reviewing the firm’s financial activity.\textsuperscript{79} Should the creditor detect fiscal distress, the security interest provides a relatively inexpensive mechanism by which the creditor can exercise leverage, since the assets subject to the security interest are likely to be essential to the continued operation of the firm and the threat to foreclose is both credible and relatively inexpensive.\textsuperscript{80} Should that threat go unheeded, the existence of the security interest permits the creditor to extricate itself from the relationship at a lower loss than might be realized by unsecured creditors.\textsuperscript{81} By taking security interests, creditors can elevate their status in bankruptcy proceedings and preclude the firm from liquidating useful assets or from incurring additional debt that might be used to engage in low-expected-return activities.\textsuperscript{82} Perhaps most importantly, creditors hold both an ex ante and an ex post threat against managers who might otherwise be able to shirk in recognition of the limits of the monitoring capacity of shareholders. Creditors can effectively foreclose access to capital markets either by demanding negative pledge clauses or by tying up sufficient assets to leave little for subsequent creditors.\textsuperscript{83} The incentives for creditors to act in this way may simply be the \textit{in terrorem} effect that presumably deters debtor misbehavior in the first instance. But more to the immediate point, the capacity of creditors to detect misbehavior is derivative of the claim that they monitor the firm. Indeed, Henry Hansmann and Reiner Kraakman have suggested that the limited liability of corporations can best be explained as an inducement for creditors to monitor the corporation, because the assets of shareholders are unavailable in the event of default.\textsuperscript{84} Creditors, they contend, may have access to information unavailable to other stakeholders, such as shareholders who suffer from collective action problems. Additionally, the fact that recoveries can be had only against the firm’s assets induces creditors to seize that advantage by monitoring the firm’s financial condition.\textsuperscript{85} \textsuperscript{79} Henry Hansmann & Reiner Kraakman, \textit{The Essential Role of Organizational Law}, 110 YALE L.J. 387, 425 (2000). \textsuperscript{80} See Ronald J. Mann, \textit{Explaining the Pattern of Secured Credit}, 110 HARV. L. REV. 625, 645–47 (1997). \textsuperscript{81} \textit{Id.} at 648–49. \textsuperscript{82} See George G. Triantis, \textit{A Free-Cash-Flow Theory of Secured Debt and Creditor Priorities}, 80 VA. L. REV. 2155, 2158–61 (1994); Triantis & Daniels, \textit{supra} note 68, at 1078. \textsuperscript{83} See Mann, \textit{supra} note 80, at 641–45. \textsuperscript{84} Hansmann & Kraakman, \textit{supra} note 79, at 425. \textsuperscript{85} \textit{Id.} Many of these aspects of debt have some, if imperfect, analogy to the market for public credit. If debtholders and shareholders are analogous to public creditors and residents respectively, then the capacity of private creditors of firms to compensate for any slack in shareholder monitoring may apply with equal force to the capacity of creditors of public entities to substitute for passive constituents. Consider in this regard the need for creditor monitoring. Although it may occur only out of creditors’ own self-interests, it confers a public benefit if constituents fail to monitor their officials and creditor monitoring serves as a substitute. There is significant support for the proposition that constituent monitoring is seriously limited.\textsuperscript{86} Typically, the imperfections that characterize constituent monitoring emanate from agency costs in the government-citizen relationship.\textsuperscript{87} If government officials were faithful servants of their constituents, then monitoring would be superfluous. If, on the other hand, government officials were self-interested, then they would have no independent reason to pursue the public good when it deviates from their own self-interest. There is no shortage of claims that public officials deviate from the public interest model that underlies the most optimistic view of democratic governance.\textsuperscript{88} Examples of self-interested objectives that might induce public officials to act in a manner inconsistent with the interests of their constituents include maximization of governmental budgets, maximization of leisure time, and maximization of post-public service private sector employment opportunities.\textsuperscript{89} Even the assumed desire that public officials seek reelection or higher office, which might be thought to require performance of one’s current task in a manner reflective of the public interest, does not dilute the need for substantial monitoring of official performance. \textsuperscript{86} \textit{E.g.,} Joseph P. Kalt & Mark A. Zupan, \textit{The Apparent Ideological Behavior of Legislators: Testing for Principal-Agent Slack in Political Institutions}, 33 J.L. \& ECON. 103, 107-08 (1990). \textsuperscript{87} \textit{See id.} \textsuperscript{88} \textit{See, e.g., id.} at 103-04. \textsuperscript{89} See Clayton P. Gillette, \textit{Local Redistribution, Living Wage Ordinances and Judicial Intervention}, 101 NW. U. L. REV. 1057, 1067-68, 1102-03 (2007). Continuation in office or elevation to higher office may depend on the support of discrete groups who may seek a disproportionate share of public resources or reciprocal support for programs that either return social benefits less than their costs or are inconsistent with principles of optimal redistribution.\footnote{Id. at 1085-86.} Monitoring, therefore, can detect and deter conduct that is aligned with the interests of particular groups that might serve the limited objectives of public officials, but is inconsistent with the interests of constituents at large.
Stated in these terms, the imperfections of constituent monitoring follow from standard models of collective action.\footnote{See id. For the paradigmatic discussion of collective action theory, see \textsc{Mancur Olson}, \textit{The Logic of Collective Action} (1965).} Monitoring itself constitutes the quintessential public good; an individual act of monitoring confers benefits on all constituents, none of whom can be excluded from enjoying the rewards of others’ efforts and each of whom could monitor without foreclosing others’ similar conduct. Thus, no potential beneficiary has an incentive to undertake the costs of monitoring, as he or she can enjoy identical benefits from monitoring by others. On this theory, free riding among constituents on matters of public finance should be prevalent, because the small consequences that befall any one constituent when an official misuses public funds are unlikely to justify any individual’s expenditure necessary to detect and publicize the misconduct.\footnote{See \textsc{Russell Hardin}, \textit{Collective Action} 17-18 (1982).} This remains true even when the aggregate costs of the misconduct outweigh the costs of the monitoring expenditure. In this sense, constituents may be perceived as the functional equivalent of shareholders. According to what one commentator has called the “passivity story,” shareholders similarly face collective action problems in monitoring corporate officers.\footnote{Bernard S. Black, \textit{Shareholder Passivity Reexamined}, 89 Mich. L. Rev. 520, 526-29 (1990).} The analogy to shareholders offers both good news and bad news for constituents concerned about monitoring public officials. The first piece of bad news is that, in the absence of monitoring, public officials are more likely to impose agency costs on their constituents than are unmonitored corporate officers. When monitoring is implausible, firms can adopt alternative strategies to counteract the tendencies of officers to pursue personal interests.\textsuperscript{94} Firms, for instance, can bond officers to shareholder interests through means such as including in compensation packages stock options and incentive pay schemes that are tied to firm performance. Additionally, firms are sufficiently flexible to adopt organizational structures that separate agents who initiate and implement decisions from those who ratify and monitor decisions. A board of directors may thus explicitly disapprove proposed officer actions that conflict with shareholder interests.\textsuperscript{95} Constituents of public entities have fewer available mechanisms to bond public officials.\textsuperscript{96} As often noticed in the literature on privatization of governmental functions, the operations that public officials supervise do not generate residual profits in which officials can be granted an interest.\textsuperscript{97} Governments may use external monitors, but any such review is likely to be haphazard, rather than the systematic review conducted by a board of directors.\textsuperscript{98} For example, the prospect of judicial review should constrain officials' willingness to deviate from the interests of constituents. But even when official defalcations diverge sufficiently from expectations to create a reviewable claim, judicial intervention requires that interested litigants initiate legal action. This requirement simply replicates the collective action problem of finding a party with a sufficient stake in the outcome to justify the litigation costs.
Even when fiscal programs are challenged, judicial deference to political decisions that concern public expenditures may be appropriate, given the relative institutional competence of political and judicial decision makers.\textsuperscript{99} There is little reason to believe that courts have any advantage over even flawed political processes in applying the potentially broad scope that might legitimately be given to “public purpose” expenditures or to financial arrangements that might plausibly comply with the objectives of “debt” limitations.\textsuperscript{100} Courts are also likely to be highly imperfect monitors because they have limited ability to reverse engineer political decisions and distinguish benign political deals from malign rent-seeking.\textsuperscript{101} Judges, after all, have little capacity to replicate or second-guess the kinds of budgetary tradeoffs and investment strategies that affect decisions about capital expenditures on local public goods.\textsuperscript{102} In theory, courts can effectively limit the capacity of discrete, well-organized interests to override constituents’ preferences. In practice, however, courts will have difficulty distinguishing between financial decisions that respond to interest group entreaties and those that respond to well-intentioned constituent desires, but that nevertheless impose diffuse costs and confer concentrated benefits—the hallmark of interest group dominance.\textsuperscript{103} Indeed, fiscal decisions will systematically share those characteristics, insofar as taxes used to construct or operate a facility will be widely imposed, while the facility’s benefits may be enjoyed differentially.\textsuperscript{104} Without some relatively clear indication of fiscal impropriety, it is therefore unsurprising that courts tend to refrain from the kind of financial risk assessment for which the doctrines of public purpose, debt, and lending of credit serve as proxies.\textsuperscript{105} Moreover, governments could not easily adopt more rigorous private sector models that provide oversight or division of decision making authority. Constitutional structures and legal doctrines preclude local governments from delegating responsibilities in a manner that would provide disinterested nonresidents significant authority over local decision making.\footnote{For instance, the nondelegation doctrine and constitutional grants of home rule would arguably preclude localities from creating the functional equivalent of an outside board of directors to review the intramural decisions of local officials. See William N. Eskridge, Jr., \textit{Vetogates, Chevron, Preemption}, 83 NOTRE DAME L. REV. 1441, 1461 (2008) (defining the nondelegation doctrine as prohibiting Congress from delegating “law-elaborating authority” to agencies without guidelines sufficient to permit robust judicial review).} At the same time, shirking of duties by public officials may be less detectable than shirking by officers of firms. The willingness of constituents or shareholders to monitor will be inversely proportional to costs, and the costs of monitoring increase when information about officials’ performance is not readily available in a digestible form. Even when financial information about governmental performance exists, it is rarely transparent. Governmental budgets are long, complex, and difficult to decipher, such that it is not in the interest of the average constituent to take the time to discover defalcations. Even at the local level, where free riding on the monitoring efforts of others might be diminished by lower populations than at the state or federal level, systematic review of fiscal programs is likely to be rare. Just as an example, I would venture that few residents of the City of Williamsburg, Virginia have perused either the 280-page Fiscal Year 2008 Adopted Budget\footnote{See 2008 City of Williamsburg, Va., Adopted Budget, available at http://www.ci.williamsburg.va.us/Index.aspx?page=300.} or the 91-page Comprehensive Annual Financial Report for Fiscal Year 2007,\footnote{See 2007 City of Williamsburg, Va., Comprehensive Ann. Fin. Rep. [hereinafter Ann. Fin. Rep.], available at http://www.ci.williamsburg.va.us/modules/ShowDocument.aspx?documentId=237.} even though both are readily available on the city’s website. \textsuperscript{94} Easterbrook, \textit{supra} note 67, at 653-54. \textsuperscript{95} See Eugene F. Fama & Michael C. Jensen, \textit{Separation of Ownership and Control}, 26 J.L. \& ECON. 301, 307-08, 311 (1983). \textsuperscript{96} See generally Einer R. Elhauge, \textit{Does Interest Group Theory Justify More Intrusive Judicial Review?}, 101 YALE L.J. 31, 70-80 (1991) (describing the shortcomings of using judicial review to curb interest group influence over public officials). \textsuperscript{97} See, e.g., Clayton P. Gillette, \textit{Opting Out of Public Provisions}, 73 DENV. U. L. REV. 1185, 1189-90 (1996); Michael H. Schill, \textit{Privatizing Federal Low Income Housing Assistance: The Case of Public Housing}, 75 CORNELL L. REV. 878, 883-84 (1990). \textsuperscript{98} See Fama \& Jensen, \textit{supra} note 95, at 311. \textsuperscript{99} See Elhauge, \textit{supra} note 96, at 78 (describing problems related to judicial review of legislation). \textsuperscript{100} Gillette, \textit{supra} note 89, at 1117-18. \textsuperscript{101} See \textit{id.} at 1096-97. \textsuperscript{102} See Elhauge, \textit{supra} note 96, at 84. \textsuperscript{103} \textit{See id.} \textsuperscript{104} See Gillette, \textit{supra} note 89, at 1117-18 (describing a scenario in which public funding is used to construct a golf course used predominately by the rich). \textsuperscript{105} See Richard Briffault, \textit{The Disfavored Constitution: State Fiscal Limits and State Constitutional Law}, 34 RUTGERS L.J. 907, 956 (2003). I have argued that there may be some satisfactory metrics, predicated on the issue of who bears the risk of project failure, for whether a particular financing scheme violates constitutional debt limits. See Clayton P. Gillette, \textit{Direct Democracy and Debt}, 13 J. CONTEMP. LEGAL ISSUES 365, 381-83 (2004). Although such a test is more readily administrable by a court, it admittedly does not eliminate the need for a more difficult inquiry in some cases. At the same time, even if courts could adopt an easily administrable test, the very existence of debt limits must be judged against alternative measures for constraining fiscal overextension. The vast array of debt limitations reveals that even drafters of such provisions in deliberative constitutional conventions could not develop a single metric that can properly be used to determine the existence of debt.
In firms, information about performance can be garnered from comparing the performances of competitors; however, since government-provided services tend to be monopolies, similar comparisons cannot readily be made.\textsuperscript{109} Even when multiple localities offer the same service (for example, mass transportation), intercity comparisons are of limited utility. Comparisons of subway fares per mile in New York and San Francisco, for instance, cannot easily be made without considering relative costs of living, construction costs, number of riders, and age of the system. Even property tax rates are not easily comparable, because the utility of the information that they generate will depend on the more mysterious metric of the proportion of market value used to derive the actual taxes that property holders pay.\textsuperscript{110} From the perspective of constituents, the second piece of bad news in the shareholder-resident analogy is that shareholders can solve collective action problems in ways that residents cannot. Shareholders vary significantly in the extent of their interests in the firm; some may have invested little of their wealth in the firm, while others may be substantially invested in the same firm. But the public goods nature of monitoring suggests that not all principals have to monitor to deter agent misconduct. Since deterrence of officer misconduct by some confers benefits on all, it is sufficient if there exist groups with a large enough stake in corporate performance to warrant intervention. Thus, institutional investors are frequently seen as surrogates for smaller investors.\textsuperscript{111} Of course, constituents also have disparate interests. Some will have disproportionately high stakes in their locality, by virtue of property ownership, job, or other relatively immobile investment.\textsuperscript{112} As in the case of institutional investors with significant stakes in firms, these constituents may find it worthwhile to monitor against official misconduct. \textsuperscript{109} The real distinction may be between entities that face competition and those that do not. \textit{See John Donahue}, \textit{The Privatization Decision} 64, 67, 147 (1989). Still, there is a greater tendency for firms to face competition for a particular service than for municipalities to do so. \textsuperscript{110} \textit{See John A. Miller}, \textit{Rationalizing Injustice: The Supreme Court and the Property Tax}, 22 \textit{HOFSTRA L. REV.} 79, 85 (1993) (asserting that assessments often fail to keep pace with property appreciation, resulting in fractional assessments; and that these assessments serve to cap property "tax bills in an inflationary economy"). \textsuperscript{111} \textit{See Frank H. Easterbrook & Daniel R. Fischel}, \textit{The Economic Structure of Corporate Law} 66-67 (1991); Black, \textit{supra} note 93, at 523-25, 587; John C. Coffee, \textit{Liquidity Versus Control: The Institutional Investor as Corporate Monitor}, 91 \textit{COLUM. L. REV.} 1277, 1284-85 (1991) (pointing out that institutional investors may have interests that deviate from those of other shareholders). \textsuperscript{112} \textit{William A. Fischel}, \textit{The Homevoter Hypothesis} 5, 74-75 (2001).
Landowners who pay large sums in property taxes or individuals who benefit from government programs that will be underfunded if the government is operated inefficiently all have incentives to overcome the collective action obstacles that are typically attributed to political constituents.\textsuperscript{113} In addition, to the extent that those with high stakes are willing to underwrite the costs of discovering and disseminating information about the misconduct of local officials, they—like institutional shareholders of firms—can reduce search costs for other constituents who are interested in detecting misconduct. Thus, one might initially believe that those with low stakes will be able to free ride on those to whom monitoring is worthwhile in the public as well as the private sector. But the analogy between institutional investors and constituents with significant local stakes collapses once we consider the extent to which those who do monitor serve as proxies for the interests of those who do not. In the corporate context we fairly assume that all shareholders, regardless of the size of their stake in the firm, share a monolithic objective of maximizing the value of their shares.\textsuperscript{114} There may be variations among shareholders, so that those who anticipate short shareholding periods may prefer corporate strategies that differ from those preferred by shareholders who anticipate longer shareholding periods.\textsuperscript{115} But the fact that all shareholders seek some form of profit maximization, and that both large and small shareholders could have either long- or short-term interests, ensures that the objectives of shareholders with high stakes, for whom monitoring is cost effective, will largely coincide with those of shareholders with low stakes.\textsuperscript{116} Thus, the latter can free ride on the former with relative impunity. The market for managers and the takeover market further facilitate these efforts. Even if monitoring does not occur on a very wide scale, market mechanisms may provide substitute constraints on self-interested officers of firms. Officers who do not pursue the profit-maximizing interests of shareholders are likely to lose employees who can obtain better terms of employment at more financially successful competitors.\textsuperscript{117} The possibility of takeovers provides potential acquirers with an incentive to monitor the firm’s performance and to disseminate negative information at low cost to other shareholders, simultaneously providing them with opportunities to support those who promise more significant returns for all shareholders.\textsuperscript{118} The situation is different in the public sector. \textsuperscript{113} See \textit{id.} at 75 (arguing that homeowners’ reduced mobility makes them eager to organize to protect home values). \textsuperscript{114} Easterbrook & Fischel, \textit{supra} note 111, at 6. \textsuperscript{115} However, some believe that even institutional investors are overly concerned with the short-term performance of public firms and therefore are less likely to play a role in monitoring the corporation. See, e.g., Martin Lipton & Steven A. Rosenblum, \textit{A New System of Corporate Governance: The Quinquennial Election of Directors}, 58 U. CHI. L. REV. 187, 205-06 (1991). \textsuperscript{116} See \textit{id.} at 208 (presenting the efficient capital markets hypothesis as a result of a blend of the long- and short-term).
Consider first the conflicting interests that emerge from different residents’ expected periods of residency. These periods may be analogous to shareholders’ expected periods of shareholding. Constituents who rent rather than own their homes and who anticipate leaving the jurisdiction within a relatively short period of time may prefer local officials to incur significant capital burdens in order to provide substantial public goods and services in the near term. Those constituents will receive the current benefit of the projects, but may not bear the full share of their costs through either tax or rent payments, some of which will be deferred to future residents. Thus, assuming imperfect capitalization of future payments into current property values, current short-term constituents may systematically prefer that the government incur more debt than residents who anticipate long periods of residency.\textsuperscript{119} If the latter group is more likely to monitor officials, they would not necessarily serve as good proxies for the former group. It is less clear, however, that this divergence would problematically distort monitoring. If residents who are expecting to exit the jurisdiction in a relatively short period of time (especially tenants who are unconcerned about the capitalization of new projects into higher property taxes) would otherwise be urging a supraoptimal amount of debt, then the omission of their interests from the process is unlikely to skew local officials toward acting irresponsibly. There is, moreover, an alternative source of friction that complicates monitoring in the public sector. Even the most well-meaning (publicly interested) resident may fail to represent residents generally, because the objective function that governments legitimately pursue varies more widely than is the case with firms.\footnote{Clayton P. Gillette, \textit{Constraining Misuse of Funds from Intergovernmental Grants}, in \textsc{Fiscal Federalism in Unitary States} 101, 102 n.1 (Per Molander ed., 2004) (stating that local officials may think about the public welfare differently than other constituents).} As I noted above, all shareholders will be concerned with profit maximization in the firm. But governments, at least multifunction governments, have no such single objective.\footnote{Jean Tirole, \textit{The Internal Organization of Government}, 46 \textsc{Oxford Econ. Papers} 1, 3 (1994).} Governments provide some services, such as paving roads, police or defense services, and environmental cleanup, to solve market failures. \textsuperscript{117} See id. at 214-15. \textsuperscript{118} See id. at 197-98. \textsuperscript{119} Residents who anticipate emigrating soon may also have less reason to incur the costs associated with voting, because they will not be able to enjoy the benefits of their votes, or may belong to the group of relatively poor who do not vote with the same frequency as wealthier residents. See, e.g., Mark Thomas Quinlivan, Comment, \textit{One Person, One Vote Revisited: The Impending Necessity of Judicial Intervention in the Realm of Voter Registration}, 137 U. PA. L. REV. 2361, 2368 (1989) (noting that “[v]oter participation has always been strongly related with socioeconomic factors,” and that as a result, the poor experience lower rates of voter participation).
But governments also offer some services, or modify the market allocation of collective goods, to engage in redistribution (for instance, municipal day care centers or other welfare services); and offer still other services that implicate both efficiency and distributional concerns (education may be an example). Moreover, the absence of a readily verifiable metric of success, such as profits, complicates the problem of determining whether public services and the officials who run them are performing in the public interest.\footnote{\textit{Id.} at 3-4.} Even if we were to disagree over whether long-term or short-term profitability were the proper measure of corporate success, it is still more complex to determine, for instance, whether a school system is doing “its job.” We could analyze standardized test scores, graduation rates, college acceptance rates, or any of a number of other variables that presumably serve as proxies for the quality of education. Thus, both the objective function of governments and the determination of whether that function has been satisfied may be subject to disputes far more contentious than in the case of firms. This multiplicity of functions and ambiguity in measurement have several consequences. First, they frustrate efforts of potential monitors to determine whether local officials are doing a “good job.” Conduct that satisfies such a standard is less observable by principals and less verifiable to third parties when agents must simultaneously perform potentially conflicting tasks (for example, efficient waste disposal may conflict with offering all residents equal access to waste disposal) than when agents can be evaluated against a single observable metric, such as profit maximization.\textsuperscript{123} This phenomenon means that officials themselves cannot easily determine which financial strategy is consistent with constituent preferences. The binary nature of voting in public elections requires that votes be cast based on an assessment of the overall performance of candidates. An official who wins an election will be unable to determine whether constituents were pleased with all of his or her policies, a majority of those policies, or only a minority of those policies. The multiplicity of government objectives that I referred to above makes any message from electoral results still more difficult to discern. Second, to the extent that officials pursue multiple objectives, even those constituents who have high enough stakes to justify monitoring are unlikely to represent the interests of all constituents, at least outside of small jurisdictions that offer limited services and attract a largely homogeneous population.\textsuperscript{124} Even if we assume publicly interested actors, constituents with interests sufficiently intense to warrant monitoring are likely to equate the public’s interest with their own. If, for instance, I live next door to a public park, I will care with great intensity of purpose that it be maintained in a manner that ensures its cleanliness and safety. I am likely to identify my own interest in the adequacy of the park with that of the public generally. I may then feel perfectly righteous in lobbying for and monitoring the use of maximum expenditures for the park, even though funding for that purpose may require reduced expenditures for other services like school nurses or paved sidewalks—services in which I have a lesser interest. \textsuperscript{123} See id. \textsuperscript{124} These jurisdictions form the basis for William Fischel’s “Homevoter Hypothesis” that local taxpayers constrain the capacity of local officials to provide local public goods other than those preferred by residents. See FISCHEL, supra note 112, at 4, 19, 73-76. My desire for maximum spending on my objective, in short, is likely to vary from the median constituent’s view of optimal spending on the same objective. But the median constituent, by virtue of having no special interest in any subject, is unlikely to monitor for the sake of his or her competing objectives, simply because those with only average interest in expenditures suffer from the inducement to free ride on others with similar preferences. Thus, with respect to those who have high enough stakes in the public entity to monitor, the problem is not failure to constrain public officials; rather, it is that if these constituents have an idiosyncratic stake in the conduct of public officials sufficient to justify incurring monitoring costs, then one might wonder what they are monitoring for. Think, for instance, of the media, which qualifies as one of the standard theoretical substitutes for lax constituent monitoring.\textsuperscript{125} By discovering and reporting on official scandals, the media may be able to increase circulation and individual reporters may enhance their reputations, so one might think that their self-interested objectives would transform them into effective proxies for free riding constituents. But, short of criminal activity that affects budgetary outlays, the media tends to scandalize low value defalcations in lieu of making more costly investigations into misappropriation of public funds. Recent events in New York provide a revealing illustration: In a two-week span, New York government suffered two substantial setbacks. One was related to the failure of the New York Metropolitan Transportation Authority to make the capital improvements that it promised to deliver in return for fare hikes that it had received.\textsuperscript{126} The other involved former Governor Eliot Spitzer’s personal expenditures.\textsuperscript{127} I will leave to the reader’s speculation the issue of which story consumed more ink and newsprint. Public unions that have a stake in the public budget may also monitor to ensure their financial security. But the objectives for which they monitor will not necessarily coincide with those of constituents whose concern is for the overall health of the municipality. For example, teachers’ unions may, in the name of educational quality, monitor the school budget to ensure that salaries are consistent with those of other school districts, but be less concerned with total educational expenditures or with whether taxes, which are used to pay their salaries, are set at optimal rates. \textsuperscript{125} See Timothy Besley, Robin Burgess & Andrea Prat, \textit{Mass Media and Political Accountability}, in \textsc{The Right to Tell} 45, 45 (Roumeen Islam et al. eds., 2002). \textsuperscript{126} William Neuman, \textit{M.T.A. Delays Improvements, Citing Drop in Real Estate Sales Taxes}, N.Y. \textsc{Times}, Mar. 25, 2008, at B1. \textsuperscript{127} See Danny Hakim & William K. Rashbaum, \textit{Spitzer, Linked to a Sex Ring as a Client, Gives an Apology}, N.Y. \textsc{Times}, Mar. 11, 2008, at A1.
This is not to say that there are no public analogues to the market mechanisms that constrain officers of firms in the absence of monitoring. Take, for instance, the constraint on officers of a firm that is created by the takeover market, which suggests that officers will seek to maximize returns for shareholders.\textsuperscript{128} Local officials likewise face a robust takeover market in the form of political opposition. Potential political opponents have significant incentives both to monitor the behavior of incumbent officials and to disseminate information about shirking to the electorate. The problem with relying on electoral monitoring, therefore, is not limited to the high costs of discovering information (political opponents should be willing to subsidize those costs), the infrequency of elections, or small turnouts. The various objectives of constituents and the preference of voters for low value, but salient, indicia of success provide political challengers little incentive to scrutinize fiscal data with more than cursory attention. If the streets are clean and property taxes have remained stable, a decrease in bond rating will be of less concern, even though it may foretell a more difficult financial future. In some respects, one might anticipate more monitoring by constituents than by shareholders. Costly monitoring becomes unnecessary if one can exit an investment at relatively low cost after one discovers misconduct.\textsuperscript{129} Those who hold shares in firms—at least in publicly held firms—typically face thick markets for shares, and thus can exit easily once their tolerance for misconduct is exceeded.\textsuperscript{130} Selling shares may entail some financial loss, but a well-diversified shareholder should be able to absorb that loss with minimal dislocation. Investments in homes, jobs, and communities are more costly to exit and less easily diversified.\textsuperscript{131} As a result, we might anticipate that constituents would invest more in monitoring to forestall or detect value-reducing misconduct at an early stage. In light of all the disincentives to monitor that constituents face, however, it is difficult to believe that high exit costs alone are sufficient to overcome the collective action problem. The result may be that the sum total of public monitoring is not undersupplied, as classic collective action theory suggests.\textsuperscript{132} Instead, monitoring may be maldistributed. That is, it is oversupplied for discrete functions that affect an intensely interested group, and undersupplied for functions that have diffuse effects. As a result, public officials may not face optimal monitoring for any functions. Members of an intensely interested group may effectively lobby for services and expenditures that provide them with significant benefits. If that group’s preferences fail to coincide with the preferences of constituents generally, there is little reason to predict that the group serves as a representative proxy for those who would prefer to free ride. \textsuperscript{128} See Easterbrook & Fischel, \textit{supra} note 111, at 96-97 (observing that takeover markets increase future costs of poor performance, thus helping to assure contractual performance). \textsuperscript{129} See Fischel, \textit{supra} note 112, at 74. \textsuperscript{130} \textit{Id}.

\textit{B. Creditors to the Rescue?}

Can creditors enter this void and solve these collective action problems?
The tentative answer I want to give is, “it depends.” Let me begin with reasons for optimism. There are a variety of ways in which the interests of creditors compensate for the collective action failures that dilute the monitoring capacity of constituents. The first is simply one of numbers. There will tend to be fewer creditors than there are constituents. Since numbers have some, if imperfect, relationship to free riding,\textsuperscript{133} the relative inability of a small number of creditors to free ride on the efforts of others suggests that any given creditor will be more willing to play a role in monitoring officials than any given noncreditor constituent. To return to Williamsburg, Virginia, the financial statements reveal that public credit tends to be extended through bank loans or through the issuance of bonds.\textsuperscript{134} A bank that is the sole lender will obviously have a significant incentive to monitor the source of repayment. Even in the event of bonded debt, in which the ultimate bondholders may be numerous, the collective action problem may be at least partially solved by the presence of a trustee, who is appointed to receive funds for repayment and who can at least provide early warning signals of impending financial distress.\textsuperscript{135} Second, creditors, at least those within the same class, have a common interest. They want to be \textit{paid}; they care about the overall fiscal health of the debtor in ways that divided interests within the jurisdiction are willing to ignore.\textsuperscript{136} Thus, creditors can overcome the problems related to the multiplicity of objectives that preclude one set of constituents from serving as proxies for others. At least to the extent that creditors are secured by the general revenues of the debtor, they are less interested in the provision of any particular service than in the overall fiscal health of the jurisdiction. Here, the analogy to corporate creditors threatens to break down. I noted above that monitoring by corporate creditors is likely to be enhanced by secured credit, which both provides a bond between the creditor and debtor and allows the exercise of leverage in the event of threatened fiscal distress.\textsuperscript{137} Sovereign debtors, however, are less likely to be able to grant security interests to private creditors. In the event of default, creditors will not be able to seize the city's fire trucks or the state's military equipment. Even when creditors lend against a dedicated revenue stream, such as tolls from a toll bridge erected with loan proceeds, creditors may benefit from a rate covenant that assures that minimum tolls are charged. But creditors will not be able to foreclose on the toll bridge in the event that collections are insufficient to service the debt. The unavailability of security interests, however, does not mean that creditors will fail to monitor. It may instead mean that creditors will find a substitute for pledged physical assets. For instance, creditors may develop benchmarks that are observable and that serve as indicia of financial success or failure, and monitor to see whether those benchmarks have been achieved. If creditors are able to withhold additional funding or accelerate repayments in the event of failure to maintain benchmarks, then the effect may be the same as if the creditor could make a credible threat to foreclose on collateral essential for the firm's success. Indeed, let me go further and claim that creditors will exercise their monitoring capacity in a manner that actually improves decision making over what would occur even if constituents \textit{could} overcome the obstacles to collective action. Creditors may be absorbed in the financial wherewithal of the debtor to avoid default. But that interest requires a commitment to stability, overall welfare, and tradeoffs among different governmental functions, a commitment that decision making by a more participatory process, dominated by interest groups that divide an expanded budget pie rather than engage in pluralistic compromise, will endanger. Thus, even Hume, with all his antipathy to public debt, acknowledged the mollifying influence that creditors could impose on a public driven by internal strife to be “factious, mutinous, seditious, and even perhaps rebellious.”\textsuperscript{138} In a rare moment of praise for debt, he responds:

But to this evil the national debts themselves tend to provide a remedy. The first visible eruption, or even immediate danger, of public disorders must alarm all the stock-holders [by which he meant creditors], whose property is the most precarious of any; and will make them fly to the support of government, whether menaced by Jacobitish violence or democratical frenzy.\textsuperscript{139}

Here, we face the next assumption about the democratizing effects of public debt: that creditors who monitor will do so in a manner that reflects the interests of constituents. Of course, the alignment of interests between creditors and constituents will be closer when the two classes are composed of the same individuals. \textsuperscript{131} See \textit{id.} at 74–75. \textsuperscript{132} See \textsc{Olson}, \textit{supra} note 91, at 31. But cf. Eric Biber, \textit{The Importance of Resource Allocation in Administrative Law}, 60 \textsc{Admin. L. Rev.} 1, 45 (2008) (attributing the undersupply of monitoring to inaction); John O. McGinnis & Ilya Somin, \textit{Federalism vs. States’ Rights: A Defense of Judicial Review in a Federal System}, 99 \textsc{Nw. U. L. Rev.} 89, 98 (2004) (attributing undersupply of monitoring to rational ignorance and multiple principals). \textsuperscript{133} See \textsc{Hardin}, \textit{supra} note 92, at 182. \textsuperscript{134} See, e.g., ANN. FIN. REP., supra note 108, at 6 ("At June 30, 2007, outstanding liabilities were $17.3 Million, with $14.4 Million in bonds and notes payable."). \textsuperscript{135} See Levmore, supra note 76, at 73-74 (discussing the trustee as a provider of a warning system that aids in monitoring). \textsuperscript{136} See Omer Kimhi, \textit{Reviving Cities: Legal Remedies to Municipal Financial Crises}, 88 B.U. L. REV. 635, 664 (2008) (remarking that creditor monitoring plays an important role in maintaining fiscal health). \textsuperscript{137} See supra text accompanying notes 76-84. \textsuperscript{138} \textsc{Hume}, \textit{supra} note 44, at 170. \textsuperscript{139} \textit{Id}.
Certainly, public creditors who are also stakeholders in other aspects of the debtor’s activities, by virtue of their roles as taxpayers, tenants, or business operators, are likely to balance their various roles and subordinate their interests as creditors when doing so generates net benefits to them in their other roles.\textsuperscript{140} I have previously indicated that the large overlap between San Giorgio shareholders and Genoa residents may have facilitated the latter’s willingness to forgo technical defaults motivated by true financial distress.\textsuperscript{141} The same phenomenon may explain the success of the Dutch financing system. Dutch creditors were, to a large extent, Dutch citizens.\textsuperscript{142} Macdonald cites estimates that at a time when there were approximately 100,000 Dutch households, 65,000 were creditors of the state.\textsuperscript{143} These included public officials who, albeit not popularly elected, provided comfort to citizens that their financial interests would be served, because failure to do so would adversely affect the decision makers as well as the populace.\textsuperscript{144} To some extent, this alignment of interests between creditors and constituents seemed to underlie Alexander Hamilton’s views about public credit.\textsuperscript{145} His argument for national assumption of state debts and for embracing a policy of national debt generally was based in part on the capacity of debt to create affinities between an important property-holding class and the national government.\textsuperscript{146} In his January 1790 report to Congress on public credit, Hamilton famously wrote:

If all the public creditors receive their dues from one source, distributed with an equal hand, their interest will be the same. And having the same interests, they will unite in the support of the fiscal arrangements of the Government—as these, too, can be made with more convenience when there is no competition.... If, on the contrary, there are distinct provisions, there will be distinct interests, drawing different ways. That union and concert of views, among the creditors, which in every government is of great importance to their security, and to that of public credit, will not only not exist, but will be likely to give place to mutual jealousy and opposition.\textsuperscript{147}

When Hamilton then pronounced a properly funded national debt to be “a national blessing,”\textsuperscript{148} did he have in mind that creditors would confer on the United States a class of monitors who would demand more democratic processes than constituents alone would require? \begin{itemize} \item \textsuperscript{140} See supra note 6 and accompanying text. \item \textsuperscript{141} MACDONALD, supra note 5, at 142; Fratianni, supra note 3, at 188; see also supra notes 8-12 and accompanying text. \item \textsuperscript{142} MACDONALD, supra note 5, at 154-55. \item \textsuperscript{143} Id. at 156. \item \textsuperscript{144} See id. (“Because the officers of the state themselves held large portions of their fortunes in government debt, every public creditor could be sure that his investment was safe.”). \item \textsuperscript{145} See ALEXANDER HAMILTON, REPORT OF THE SECRETARY OF THE TREASURY ON PUBLIC CREDIT (1790), reprinted in THE WORKS OF ALEXANDER HAMILTON 1 (N.Y., Williams & Whiting 1810). \item \textsuperscript{146} See id. \end{itemize}
Was he simply saying that national creditors would be more drawn to identify with the United States and thus assist in strengthening a federal government? Or was he also saying that they would constitute a propertied class that would improve the quality of decision making otherwise made by an electorate that, although narrow by today's standards, could be driven by sensitivities inconsistent with Hamilton's mercantile vision? I am not certain. One thing that does seem clear, though, is that Hamilton viewed creditors as having interests that could reduce the risks of factionalism that might otherwise endanger collective welfare.\textsuperscript{149} But what are the implications of this phenomenon for the situation in which creditors are not constituents of the debtor? Does it necessarily follow that these creditors will be poor representatives? At least in one respect, Hamilton's concerns reflect a possibility that creditors may actually make better financial decisions than would be made by the constituents of debtors.\textsuperscript{150} Hamilton's comments reflect a difficult inquiry posed by any democratic theory based on government accountability to constituent preferences: whose preferences count.\textsuperscript{151} The question of long-term fiscal planning implicates that issue to the extent that it deals with intertemporal externalities. Current financial decisions that require long-term payments can impose significant costs on future generations who have little to say about the desirability of the long-term obligation at the time it is incurred. Optimal financial decisions, one would think, would reflect the interests of those who pay long-term costs as well as those who enjoy the short-term benefits. Who, as between creditors who fund long-term projects and current constituents, are better positioned to represent those future constituents? Because public credit necessarily requires attention to the risk of future payments, perhaps public creditors better internalize the benefits and burdens that financial decisions impose on both current and future generations than the more traditional delegation of those issues to the present generation of constituents alone. Decisions to fund capital improvements with a payment stream that extends for several decades necessarily commit creditors to a time horizon that exceeds the notoriously short attention span of public officials concerned primarily about the next election,\textsuperscript{152} or the high discount rate of constituents concerned about the level of taxes that they must pay today.\textsuperscript{153} Current constituents, for instance, may favor projects that generate immediate benefits, imposing the costs on future generations who may find the projects superfluous. Or, current constituents may favor default, especially when the creditors are nonresidents and those who would bear the burdens of taxation necessary to service the current debt are residents. Those current constituents may be either oblivious or indifferent to the default premium that future generations of residents will be required to pay.

\textsuperscript{147} \textit{Id.} \textsuperscript{148} \textit{Id.} at 52. \textsuperscript{149} \textit{See id.} at 19. \textsuperscript{150} \textit{See id.} at 19-20.
This is precisely the situation that arose in the late nineteenth century when cities and states incurred substantial debts to attract railroads that promised to confer commercial benefits sufficient to offset any tax burden necessary to service the government's financial obligations. When those railroads failed to materialize, taxpayers were left with the legal obligation to pay debt service, but none of the promised benefits. Throughout the South and Midwest, cities and states did what any debtor with a brief time horizon would do—they repudiated their debts.\textsuperscript{154} Did they have to? That is, were they facing such financial distress if they complied with their obligations that repudiation was the only way to avoid dissolution? In that case, repudiation may not have been a manifestation of a brief time horizon, but only of an exogenous shock that makes repayment impracticable, all things considered. Nevertheless, urban historian Eric Monkkonen's study of the phenomenon suggests that many of the defaults on railroad bonds were less the result of the kinds of fiscal distress that might have generated sympathy and waiver from the shareholders of San Giorgio\textsuperscript{155} than the consequence of class, ethnic, and political interests that ignored consequences for future generations.\textsuperscript{156} If we include within "constituents" of the debtor those future residents who pay for the fiscal errors of prior generations, then creditor demands may better reflect the interests of at least that class of constituents than current taxpayers alone.\textsuperscript{157} Creditors may also vary from constituents in that they are likely to have a different preference for risk.\textsuperscript{158} Although this may suggest a lack of alignment in the interests of the two groups, we might favor creditor monitoring for risk if creditors exhibit a more rational strategy of dealing with the long-term health of the debtor. Let us return again to the potentially analogous corporate sector. In the corporate setting, the divergence of interests between creditors and debtors is largely related to the role of each in setting the proper level of risk taking by the firm. Equity holders may favor a high degree of risk taking because they are essentially gambling with creditors' money.

\textsuperscript{151} See id. at 20. \textsuperscript{152} See Sungjoon Cho, \textit{Doha's Development}, 25 Berkeley J. Int'l L. 165, 201 (2007). \textsuperscript{153} See Peter H. Aranson & Kenneth A. Shepsle, \textit{The Compensation of Public Officials as a Campaign Issue: An Economic Analysis of Brown v. Hartlage}, 2 Supreme Ct. Econ. Rev. 213, 249 (1983). \textsuperscript{154} See \textit{Eric Monkkonen}, \textit{The Local State: Public Money and American Cities} 24, 27-30 (1995) ("[T]he city of Duluth and the Minnesota state legislature used legal maneuvers to cheat the city's bondholders of the early 1870s out of any hope of full debt recovery."). \textsuperscript{155} See \textit{supra} notes 8-10 and accompanying text. \textsuperscript{156} See \textit{Monkkonen}, \textit{supra} note 154, at 24. \textsuperscript{157} \textit{See id.} at 22-23. Monkkonen discussed Memphis, Tennessee, and Watertown, Wisconsin, as examples in which future residents suffered as a result of decisions by current taxpayers. \textit{Id.} \textsuperscript{158} For example, William A. Fischel argues that homeowners are unique in that they are particularly sensitive to "the vulnerability of their largest asset," which is their home. \textit{Fischel}, \textit{supra} note 112, at 12. Fischel also notes that homeowners may attach a sentimental value to their homes. \textit{Id.}
If the firm borrows money with a promise of engaging in relatively low risk activities, and subsequently engages in relatively high risk activities, the firm gets all the upside of its gamble—the creditors, though, receive only repayment of principal and interest.\footnote{See, e.g., Easterbrook \& Fischel, supra note 111, at 68.} If the venture fails, the creditors who supplied the funds bear the loss. As a result, creditors have incentives to monitor the firm that proposes to borrow funds for one purpose, but then deploys the funds for a riskier endeavor. Governmental entities are more constrained in the activities in which they can engage, and thus the level of risk they can take with borrowed funds may be less variable. It is, for example, difficult to hide a risky sports stadium in the guise of the municipal power plant that the locality indicated was the objective for which it was borrowing funds. But money is fungible, and governments can use funds from one source to free up funds from an alternative source and gamble with the latter in ways that expose the government as a whole to greater risk. A creditor who has both the capacity and the incentive to examine revenues and expenditures can serve much of the same risk-reducing function that is attributed to the general lender of private firms. Thus, creditor monitoring may also limit governmental risk taking to a level more consistent with constituent preferences. I am not, of course, positing a perfect identity of interests between creditors and constituents. The complicated issue of who the "constituents" are means that at least some within that group—perhaps tenants with short-term interests in residence—will find little similarity of interests with long-term creditors. Additionally, even if creditor monitoring forestalls government insolvency, should bankruptcy occur, the interests of constituents in continuing governmental services will diverge greatly from the interests of creditors in raising taxes and liquidating governmental assets to assure payment. Should the municipal borrower prove able to pay only school teachers or creditors, local residents may opt for the former, while bondholders would obviously desire the latter. Alternatively, once doubts are raised about the locality's future ability to make debt service payments, creditors are likely to want the locality to increase fees, while residents will want to shift the risk of nonpayment to creditors.\textsuperscript{160} These disputes about remedies indicate divergent interests of creditors and residents; they do not, however, indicate differences in the desire to detect fiscal impropriety before the events that would create such disputes arise. What I am positing is that, at least from a theoretical perspective, it is plausible that the interests of creditors and constituents will overlap sufficiently to allow the former to compensate for some of the monitoring lapses of the latter. Whether creditors will do so depends on a variety of factors, such as the extent to which creditors and constituents overlap and the structure of the transaction.
Obviously, when creditors can be repaid without regard to the overall health of the borrower, such as when their payments come from only a single resource, they have little incentive to monitor more than that resource. Historically, creditor monitoring would have been diluted by allowing creditors to seek repayment directly from taxpayers rather than from the state.\textsuperscript{161} Seventeenth- and eighteenth-century French debts were incurred largely by the selling of offices to creditors who were willing to advance cash in return for subsequent payments from the state or the value of future tax revenues that could be collected through the offices.\textsuperscript{162} Venal office holders had a claim to the first taxes collected, so that once they collected the sums due to them, they had weakened incentives to collect the remaining sums that would be paid to the state.\textsuperscript{163} I will say only that this mechanism does not inspire confident predictions either that an optimal level of taxes will be collected or that, once collected, state funds will be expended in pursuit of social welfare. Our thicker understanding of sovereignty suggests that we are less likely to delegate tax collection than were city-states and capital-hungry monarchs. But transactional structures still matter. The extent to which creditors can credibly substitute for constituents depends on the extent to which the creditors' repayment rights are linked to the overall fiscal health of the debtor. Some transactional structures (such as bonds secured by a locality's general revenues) align those interests, but others, which limit creditors' rights to particular assets, may not. A lender secured solely by waterworks revenues, for example, has little incentive to monitor the debtor's receipt of property taxes, although the latter may be a better indicator of officials' performance. Legal doctrines may further frustrate monitoring by denying creditors the ability to take security interests in assets that might be easily monitored and that might serve as proxies for overall fiscal health. Potential lenders might also find monitoring worthwhile if they believed that the default risk was sufficiently high and had no lower cost way of dealing with such risk. In the next section, I suggest that significant obstacles to creditor monitoring arise from the availability of low cost alternatives to risk management that may reduce the scope of monitoring.

\textsuperscript{160} For example, in \textit{Patterson v. Carey}, 363 N.E.2d 1146 (N.Y. 1977), New York State had granted Jones Beach State Parkway Authority the power to increase the toll on the parkway, but subsequently passed a law rescinding an increase that the Jones Beach State Parkway Authority authorized. \textit{Id.} at 1151–53. Much of the jurisprudence of the Contracts Clause has been written in terms of conflicts between the interests of municipal creditors and residents when fiscal distress precludes simultaneous satisfaction of each group's preference. \textit{See, e.g.,} U.S. Trust Co. v. New Jersey, 431 U.S. 1, 32 (1977); Mobile v. Watson, 116 U.S. 289, 305 (1886); Van Hoffman v. City of Quincy, 71 U.S. (4 Wall.) 535, 555 (1866). \textsuperscript{161} \textit{See Stasavage, supra} note 30, at 86. \textsuperscript{162} \textit{Id.} \textsuperscript{163} Macdonald, \textit{supra} note 5, at 141.
But, as I will conclude, it may also focus monitoring on those situations where constituents are also most in need of external support to create democratic governance.

III. WILLINGNESS TO MONITOR

My argument to this point has been that historical lessons and corporate analogies tell us a great deal about the extent to which the theoretical capacity of creditors to compensate for suboptimal constituent monitoring can actually be realized. But there is also reason to believe that creditors may fail to take advantage of these opportunities. Monitoring is costly, and potential monitors, if rational, will only undertake that task when (1) the costs of monitoring are less than expected benefits, such as by reducing the probability of default; and (2) no less costly alternative for loss avoidance exists.\footnote{See Douglas W. Diamond, \textit{Monitoring and Reputation: The Choice Between Bank Loans and Directly Placed Debt}, 99 J. Pol. Econ. 689, 697 (1991).} The second condition is perhaps more difficult to satisfy under current circumstances than has been true in the past. Monitoring and reputation may be substitutes, in that creditors will avoid monitoring costs when borrowers have developed a solid reputation for repayment.\textsuperscript{165} When governments have sufficiently invested in reputation that the perceived expected loss from default is less than the costs of monitoring, it is unlikely that creditors will engage in monitoring at all. The development of financial models and the longer history of repayment for sovereign borrowers during both good times and bad have allowed markets to distinguish between more and less reliable debtors and to adjust interest rates to reflect risk rather than to engage in monitoring.\textsuperscript{166} At least in the United States, default risk for governmental debt is remarkably low, typically below 2 percent when all municipal bonds are included, and significantly lower when the bonds are issued for general municipal purposes rather than when issued to provide low interest finance for a private firm.\textsuperscript{167} For instance, one study found that sixteen- to twenty-three-year cumulative default rates for tax-backed and traditional revenue bonds were less than 0.25 percent.\textsuperscript{168} Joel Seligman reports that the default rate on municipal bonds between 1983 and 1988 was 0.7 percent, while the default rate for corporate debt was 1.1 percent.\textsuperscript{169} Given these statistics, rational creditors are likely to forgo costly monitoring. Next, consider losses, or the risk that municipal creditors face in the event of default. Creditors are unlikely to monitor if they believe that default, should it occur, will be cured with little expense or loss on their part. Municipal defaults, especially in the case of sizeable cities, are likely to generate external costs that deprive surrounding areas of easy access to capital or that generate concerns about residents' access to basic municipal services.\textsuperscript{170} As a result, defaults trigger significant calls for bailouts by more centralized levels of government.\textsuperscript{171}

\textsuperscript{165} \textit{Id.} at 690. \textsuperscript{166} See TOMZ, \textit{supra} note 39, at 86-113. \textsuperscript{167} See, e.g., Good Jobs First, Municipal Bonds and Defaults, http://www.publicbonds.org/public_fm/default.htm (last visited Nov. 25, 2009).
Although those bailouts may require that the defaulting city suffer reduction of local fiscal autonomy, and hence more rigorous scrutiny by state agencies,\textsuperscript{172} creditors who anticipate bailouts in the event of default will rationally fail to monitor pre-default. A variety of legal doctrines also reduce creditor losses in the event of default and thus dissuade municipal creditors from monitoring. In some states in the United States, specific constitutional or statutory provisions protect municipal creditors in the event of default. Virginia, for instance, provides that any state funds that would otherwise be appropriated to a local government must be paid directly to creditors if the locality is in default on its general obligation bonds.\textsuperscript{173} Additionally, the New York Constitution famously provides that constitutional tax limitations can be exceeded in order to pay debts to which a locality's faith and credit has been pledged.\textsuperscript{174} One would anticipate that creditors prefer these bailouts to the extent that they impose default costs on municipal residents while simultaneously reducing the need for costly pre-default scrutiny or for the costs associated with municipal debt adjustment under Chapter 9 of the Bankruptcy Code. When creditors have found monitoring to be useful, they may condition their lending on metrics that are easily monitored or that can serve as low cost proxies for risky activity that would otherwise require costly investigations.\textsuperscript{175}

\textsuperscript{168} \textit{Id.} \textsuperscript{169} Joel Seligman, \textit{The Obsolescence of Wall Street: A Contextual Approach to the Evolving Structure of Federal Securities Regulation}, 93 MICH. L. REV. 649, 699 (1995). \textsuperscript{170} See Robert P. Inman, \textit{Transfers and Bailouts: Enforcing Local Fiscal Discipline with Lessons from U.S. Federalism}, in \textit{FISCAL DECENTRALIZATION AND THE CHALLENGE OF HARD BUDGET CONSTRAINTS} 35, 42-43 (Jonathan Rodden, Gunnar S. Eskeland & Jennie Litvak eds., 2005). \textsuperscript{171} Notwithstanding the famous "Ford to City: Drop Dead" headline, Congress ultimately provided a modest debt guarantee that assisted New York City in averting fiscal disaster, and the state created a municipal assistance authority that provided payments to bondholders. \textit{See id.} at 59. Additionally, Congress has provided a federal bailout of Washington, D.C., and states have provided bailouts of the cities of Detroit and Oakland, California. \textit{Id.} at 60-61. But note the absence of bailouts in the Washington Public Power Supply System (WPPSS) and Orange County. \textit{See id.} at 59-61; Gerald J. Miller, \textit{Debt Management Networks}, 53 \textit{Pub. Admin. Rev.} 50, 50-51 (1993). Robert Inman reports that the Illinois Constitution of 1870 contained a prohibition on local bailouts by the State. \textit{See Inman, supra} note 170, at 58 & n.33; Michael W. McConnell & Randal C. Picker, \textit{When Cities Go Broke: A Conceptual Introduction to Municipal Bankruptcy}, 60 U. Chi. L. Rev. 425, 442 (1993). \textsuperscript{172} For example, New York State maintains quarterly reports on the City of New York, including such information as the city's financial statements and a review by an independent accountant. \textit{See Municipal Assistance Corporation of the City of New York}, http://www.nysl.nysed.gov/scandolinks/ocm18935828.htm (last visited Nov. 25, 2008). \textsuperscript{173} \textit{See} Va. Code Ann. § 15.2-2659 (2000). \textsuperscript{174} \textit{See Flushing Nat'l Bank v. Mun. Assistance Corp. of N.Y.}, 358 N.E.2d 848, 852 (N.Y. 1976) (interpreting N.Y. Const. art. VIII, § 2).
For instance, when credible information about some government assets can be obtained at low cost, creditors may restrict the use of their loans to the purchase of those relatively transparent assets.\textsuperscript{176} When that is the case, the interest of creditors in ensuring that the funded asset generates sufficient revenue to support debt service is less likely to coincide perfectly with the general interest of constituents in the overall financial security of the state. In effect, creditors in such a case provide comfort to constituents that is parallel to the comfort that creditors of firms provide to shareholders when the creditors take security interests in specific assets of the firm rather than a wraparound security interest in all the firm's assets.\textsuperscript{177} Alternatively, creditors may eschew examination of the underlying conditions of debt and consider only the amount of debt that a borrower has incurred, presumably on the theory that sovereigns will be able to service relatively small debts. Tamim Bayoumi, Morris Goldstein, and Geoffrey Woglom tested a market discipline hypothesis for sovereign debt.\textsuperscript{178} Their conclusions indicated that yields on debt of states within the United States rise at an increasing rate with the level of borrowing, and that at some level of borrowing, the market stops supporting a sovereign's debt issuance.\textsuperscript{179} The result is that borrowers have market incentives to avoid issuing excessive debt.\textsuperscript{180} I do not want to make too much of these conclusions. To conclude that borrowers are attentive to market constraints is quite different from saying that borrowers' officials properly respond to market incentives, an issue on which the authors are agnostic.\textsuperscript{181} Moreover, market constraints do not necessarily indicate that potential creditors are monitoring borrowers in a manner that compensates for constituent passivity. They may suggest only that creditors review the per capita debt burden of the issuer, which may be a very rough surrogate for quality of debt. These studies do, however, suggest that creditors react at least to some degree to the incentive to obtain information about their sovereign borrowers.\textsuperscript{182} Contemporary theories of finance may also reduce incentives to monitor in other ways. Creditors may be able to manage risk by diversifying their portfolios rather than by incurring monitoring costs.

\textsuperscript{175} See Michael D. Bordo, Barry Eichengreen & Douglas A. Irwin, \textit{Is Globalization Today Really Different than Globalization a Hundred Years Ago?} 32-33 (Nat'l Bureau of Econ. Research, Working Paper No. W7195, 1999). \textsuperscript{176} See id. at 32-34. \textsuperscript{177} See supra note 70 and accompanying text. \textsuperscript{178} See Tamim Bayoumi, Morris Goldstein & Geoffrey Woglom, \textit{Do Credit Markets Discipline Sovereign Borrowers? Evidence from U.S. States}, 27 J. MONEY, CREDIT & BANKING 1046, 1046-47 (1995). \textsuperscript{179} See id. at 1050. \textsuperscript{180} Id. at 1057. \textsuperscript{181} See id.
Indeed, in a world of securitization, even creditors who wish to specialize in a particular portfolio of loans, such as sovereign debt, can diversify by investing in funds that carry multiple loans rather than by investing in a single loan and monitoring the borrower.\textsuperscript{183} Although some have blamed securitization for the absence of monitoring that has allegedly contributed to credit crises, that literature only suggests that substituting securitization for monitoring has social costs, not that it is irrational for investors.\textsuperscript{184} The implication of these developments is that even investors who theoretically have the capacity to enhance democracy by monitoring for misconduct that constituents are otherwise likely to ignore will often fail to seize their comparative advantage and confer the benefits of monitoring on passive constituents. Indeed, the structure of the transactions may further frustrate any efforts to impress public creditors into service as monitors. By allowing credit to be extended against specific assets, debtors dilute the incentives of creditors who might otherwise monitor broadly, instead causing these creditors to direct their efforts only at specific sources of repayment.\textsuperscript{185} Consider in this context recent developments in the esoteric area of state and municipal debt finance. Those of us who play in the fields of state constitutional law—and who understand that the law school curriculum does a great disservice by concentrating only on the musings of a single supreme court when there are fifty state constitutions to analyze—sometimes consider the constraints placed on states and municipalities that seek to incur debt.\textsuperscript{186} Those limitations—which typically take the form of election requirements, flat dollar limitations, or percentages of taxable property—generally apply only to what is called general obligation debt, that is, debt secured by all the revenue-generating capacity of the issuer.\textsuperscript{187} They therefore do not apply to revenue bonds, that is, debt secured solely by the revenue produced by a single revenue-producing project, such as a toll bridge or a municipal water works.\textsuperscript{188} The history of debt limitations, therefore, is dominated by the efforts of highly paid, intelligent attorneys and investment bankers to structure transactions to look more like revenue debt, which is not subject to debt limitations, than like general obligation debt.\textsuperscript{189} One perhaps unanticipated consequence of this phenomenon has been to dilute the incentives of creditors to serve as proxies for constituents because the jurisdiction's revenue sources are balkanized and the creditors' interest is limited to a particular revenue source rather than to the general fiscal health of the debtor.

\textsuperscript{182} See id. \textsuperscript{183} For an example of such a fund, see Invesco PowerShares, \textit{PowerShares Emerging Markets Sovereign Debt Portfolio}, June 30, 2008, http://www.invescopowershares.com/pdf/PCY-PC-1.pdf (last visited Nov. 25, 2008). \textsuperscript{184} See, e.g., Benjamin J. Keys et al., \textit{Did Securitization Lead to Lax Screening? Evidence from Subprime Loans 2001-2006}, at 26 (Eur. Fin. Ass'n, meeting paper, 2008), available at http://faculty.london.edu/evig/index_files/securitize.pdf. \textsuperscript{185} See \textit{supra} text accompanying note 70.
If bonds issued to fund street improvements, for example, are secured by parking meter revenues, bondholders need only monitor meter collections and disbursements, notwithstanding that they are well positioned to review a broader array of local fiscal activity.\textsuperscript{190} One recent example of this phenomenon is in some respects eerily reminiscent of fifteenth-century Genoa. The governor of New Jersey, a former chairman of Goldman Sachs, recently advocated a plan to reduce outstanding state debt by establishing a nonprofit public benefit corporation that would collect tolls and manage highways in the state,\textsuperscript{191} a procedure that perhaps qualifies the corporation as the type of "state within a state" that characterized San Giorgio.\textsuperscript{192} As initially proposed, the corporation would issue approximately $40 billion worth of its own bonds, and use the proceeds both to pay off existing state debt and to finance the next seventy-five years of multi-modal transportation projects in the state.\textsuperscript{193} The corporation's own bonds would then be paid by substantial toll hikes on the highways.\textsuperscript{194} The initial plan appears to have met its demise in massive resistance from legislative leaders who found the projected 800 percent toll increase over fifteen years politically nonviable.\textsuperscript{195} Apart from whether the public benefit corporation would share San Giorgio's right to torture toll evaders on the Garden State Parkway, this end run around the New Jersey constitutional debt limitation arguably reduces the democratizing effects of credit. Although creditors of existing general obligation debt of the state might monitor for a broad range of fiscal activities, holders of the corporation's bonds would be limited to a single revenue source—toll payments—and thus would have little incentive to monitor beyond those highway payments. To the extent that New Jersey constituents face collective action problems in monitoring their officials, they would find few reliable proxies in the new set of bondholders that would arise out of the proposed highway corporation.

IV. THE PLAUSIBLE SCOPE OF CONTEMPORARY CREDITOR MONITORING

Does the presence of low cost alternatives to monitoring, combined with restricted collateral that reduces the incentives of creditors to monitor, mean that, notwithstanding the theoretical possibility that creditors could compensate for constituent passivity, they will fail to serve as democracy-enhancing surrogates? I conclude with a suggestion that there remains some range within which creditors can enhance democratic monitoring. Moreover, creditor monitoring is perhaps most likely, and thus its benefits most plausible, in those contemporary situations that are strikingly similar to my historical examples.

\textsuperscript{186} Gillette, \textit{supra} note 105, at 370-72. \textsuperscript{187} See \textit{id.} at 367-68. \textsuperscript{188} \textit{Id.} at 368 n.6. \textsuperscript{189} \textit{Id.} at 370-71. \textsuperscript{190} Among the other defects of debt limitations, which I will not examine at this time, is that they have potentially antidemocratic effects. \textsuperscript{191} \textsc{State of New Jersey Office of the Governor}, \textsc{Financial Restructuring and Debt Reduction} [hereinafter \textsc{Financial Restructuring}], available at http://www.state.nj.us/frdr/pdf/background.pdf. \textsuperscript{192} See \textit{supra} note 14 and accompanying text.
The successful credit arrangements that arose in fifteenth-century Genoa, seventeenth-century England, and eighteenth-century America all responded to and made possible demands for commercial expansion and the sharing of political and economic capital. In those situations, the debtor states may have been ambitious about the future, but they lacked the reputations, the thick credit markets, or the effective constituent political cohesion that would have rendered creditor monitoring superfluous. Instead, these situations cried out for some form of institutional constraints on the debtor governments, constraints that neither a small taxpayer base nor a limited electorate could supply. It was in that kind of environment that small numbers of creditors not only could, but had to, fill the political gap, reduce corruption, and induce the creation of institutions that would both constitute credible commitments against default and lay the groundwork for broad political participation. The emerging nations of today stand in a similar situation. These potential debtors necessarily pose greater risks than developed nations insofar as their success in creating wealth that will support debt payments remains untested,\textsuperscript{196} and they have not generated reputations that can substitute for more costly monitoring.\textsuperscript{197} Although the creation of funds pooling multiple emerging nations' debt allows some diversification that reduces the need for monitoring, many of these funds do not include debt of the least developed emerging nations.\textsuperscript{198} Rather, nations with limited or no credit history are likely to obtain capital through individual lenders who have informational advantages over the broader capital markets and are thus willing to lend at rates that more closely reflect the actual risks of payment.

\textsuperscript{193} See \textit{FINANCIAL RESTRUCTURING}, supra note 191, at 12-19. \textsuperscript{194} See \textit{id.} at 17. \textsuperscript{195} See Richard G. Jones & David Chen, \textit{Corzine Weighs Options on Toll Increases}, N.Y. TIMES, Apr. 30, 2008, at B5. \textsuperscript{196} See Ruth Bosauer, Note, \textit{Emerging Market Instruments Pay Siren Song for Pension Plans}, 7 MINN. J. GLOBAL TRADE 211, 213-14 (1998) (describing several factors that made it difficult for developing nations to pay their debt). \textsuperscript{197} See supra notes 165-66 and accompanying text. \textsuperscript{198} In reaching this conclusion, I reviewed the top ten holdings of each of the following emerging market bond funds (ticker symbols are provided in parentheses): AllianceBernstein High Income (ACDAX); PIMCO Emerging Markets Bond A (PAEMX); TCF Galileo Emerging Markets Income I (TGEXI); MPS Emerging Markets Debt A (MEDAX); T. Rowe Price Emerging Markets Bond (PREMX); Fidelity New Markets Income (FNMX); MainStay Global High Income A (MGHAX); Fidelity Advisor Emerging Markets Inc T (FAEMX). Popular holdings were from Argentina, Brazil, Colombia, Mexico, Russia, Turkey, and Venezuela. None of the funds listed an African or former Soviet bloc debtor among their top holdings. The major holdings of an exchange-traded fund that specializes in investments in the Middle East and Africa—State Street Global Advisors SPDR S&P Emerging Middle East & Africa ETF—consist of stocks from Middle Eastern and African countries rather than bonds. See State Street Global Advisors, SPDR S&P Emerging Middle East & Africa ETF (GAF), http://www.ssgafunds.com/etf/fund/etf_detail_GAF.jsp (last visited Sept. 17, 2008). The Invesco PowerShares Sovereign debt fund mentioned earlier appears to be more diversified, and includes sovereign debt from Bulgaria, Hungary, Poland, South Africa, and Vietnam. See Invesco PowerShares, supra note 183.
For similar reasons, relatively new firms will seek capital through bank loans rather than through sales of equity or the debt markets. Thus, lenders may find free riding implausible and monitoring financially worthwhile, given the absence of alternatives. Just as banks that make loans to new firms will want to monitor those firms to reduce moral hazard and to capitalize on their informational advantage about the firm, so may individual lenders to developing nations desire to take advantage of the informational advantage that they have over capital markets generally. A potentially happy coincidence that arises from this situation is that these same nations may be in the greatest need of the kind of creditor monitoring that can enhance democracy by substituting for low levels of constituent monitoring. Developing nations, by definition, are unlikely to have either a broad taxpayer base or politically cohesive institutions that can represent the financial interests of all constituents.\footnote{See generally Edmund Jan Osmanczyk, \textit{Encyclopedia of the United Nations and International Agreements} 527 (Anthony Mango ed., 2003).} If the incentive for monitoring arises out of fear that taxpayers' funds will be misused, the absence of a significant taxpayer class necessarily undermines constituent monitoring.\footnote{This phenomenon, of course, can reduce constituent monitoring in extremely wealthy nations as well as extremely poor ones. For instance, rentier nations that can fund their activities from sales of resources, such as oil, do not have to tax their citizens. \textit{See Between Fitna, Fauda and the Deep Blue Sea}, \textit{The Economist}, Jan. 12, 2008, at 40-41 ("No taxation without representation, said America's revolutionaries. Arab governments have inverted this refrain: by appropriating national energy resources and other rents, they neatly absolve themselves of the need to levy heavy taxes and therefore to win the consent of the governed.").} The essential question is whether creditors who participate in monitoring do so in a manner that is consistent with the interests of the constituents of developing nations. Clearly, the creditors of contemporary emerging nations are not, like the shareholders of San Giorgio or Dutch citizens, constituents of the debtors.\footnote{See supra notes 11-12, 142-44 and accompanying text.} Thus, the natural alignment of interests that arises from serving as both creditor and constituent does not exist in these cases. But given that creditors' interests in repayment may require monitoring of the same conditions that constituents would prefer, the availability of monitoring may still serve as a proxy for weaker domestic politics. In short, as democracy comes to the developing world, it is just as likely to come through the back door of financial monitoring as it is to come through the front door of political participation. International credit markets provide at least as much opportunity to generate political reforms today as credit markets provided several centuries ago. It is in this context that policymakers should evaluate the conditions of lending for potential monitors such as the World Bank, the International Monetary Fund (IMF), or other international financial institutions (IFIs).
Loans made through the World Bank or the IMF typically are governed by loan documents that contain specific provisions that exploit the lender's capacity to dictate repayment terms.\textsuperscript{202} As one might expect, these terms tend to address payment provisions that protect the interest of the creditor in repayment.\textsuperscript{203} There is significant criticism of these institutions, and of IFIs generally, for imposing Western values on resistant cultures, for measuring success only by reference to narrow economic objectives and thus failing to remedy social issues that have only indirect economic implications, or for sponsoring globalization that adversely disrupts domestic labor markets.\textsuperscript{204} Nevertheless, the stated objectives and mandates of IFIs, including the World Bank and the regional development banks, involve not profit maximization, but rather the promotion of economic or social development or the reduction of poverty.\textsuperscript{205} This is not to say that the IFIs are indifferent to repayment or that, in practice, efforts to obtain repayment do not trump the stated objectives. Of course, one effect of those conditions could be the creation of institutions that, as a happy byproduct of serving creditor interests, also obligate or induce debtor governments to enact reforms consistent with democratizing institutions. Indeed, it would be difficult to claim that the creation of incentives to subordinate other domestic objectives to repayment is necessarily at odds with constituent preferences, because repayment of IFI loans assists in the creation of a reputation that permits subsequent access to capital markets at low interest rates.\footnote{See \textit{Tomz}, supra note 39, at 86-88; Douglas W. Diamond, \textit{Reputation Acquisition in Debt Markets}, 97 J. Pol. Econ. 828, 830-31 (1989).} But my claims in this Article relate to the possibility that creditor monitoring can not only enhance democracy directly, but also indirectly by demanding the creation of institutions or reputations that, as a happy byproduct of monitoring, create greater consistency between official conduct and constituent preferences. The conditions of IFI lending have the possibility of conferring far more specific benefits than the creation of reputation that will have long-term benefits to the constituents of borrowers. IFI monitoring is likely to focus on benchmarks and the creation of institutions that can be monitored at a relatively low cost.

\textsuperscript{202} See International Bank for Reconstruction and Development, Article IV: Operations, Feb. 16, 1989, available at http://web.worldbank.org/WBSITE/EXTERNAL/EXTABOUTUS/0,,contentMDK:20049605~pagePK:43912~menuPK:58863~piPK:36602,00.html. \textsuperscript{203} See id. (allowing the Bank, for example, to set the terms and conditions of payments and to modify the terms of an amortization). \textsuperscript{204} See, e.g., INTERHEMISPHERE RESOURCE CENTER & INST. FOR POL'Y STUDIES, FOREIGN POLICY IN FOCUS: INTERNATIONAL FINANCIAL INSTITUTIONS 2 (1996), available at http://www.ifpi.org/pdf/vol1/08ififi.pdf. \textsuperscript{205} The World Bank, Multilateral Development Banks, http://go.worldbank.org/F3REECOMB1 (last visited Nov. 25, 2008). The regional development banks are the Inter-American Development Bank, the European Bank for Reconstruction and Development, the Asian Development Bank, and the African Development Bank. Id.
But if those benchmarks and institutions reflect the objectives for which constituents would lobby if they were politically cohesive, then creditor monitoring serves as virtual representation of constituent interests. The controversial conditions offered by IFIs provide a basis for determining whether creditors actually play this role. In theory at least, conditions of lending can improve the quality and effect of aid. In practice, however, interest group pressures within both the IFI\footnote{See, e.g., Joseph Stiglitz, \textit{Globalization and Its Discontents} (2002); Roland Vaubel, \textit{Bureaucracy at the IMF and the World Bank: A Comparison of the Evidence}, 19 World Econ. 195, 209 (1996).} and recipient countries may significantly distort the effects of aid.\footnote{See Wolfgang Mayer & Alexandros Mourmouras, \textit{The Political Economy of IMF Conditionality: A Common Agency Model}, 9 Rev. Dev. Econ. 449, 453-56 (2005).} Thus, developing nations may not realize the theoretical benefits of conditionality that are consistent with the monitoring capabilities of creditors. Indeed, to some extent, IFIs appear, at least superficially, to be reluctant to seize opportunities to use their monitoring capacities in ways that might distort decisions that recipient governments might otherwise render. The World Bank pledges in its documents not to interfere in the political affairs of members and to be guided only by economic considerations.\textsuperscript{209} Nevertheless, again in practice, the World Bank has utilized its role as lender to "recommend" structural reforms that seem to transcend financial concerns. For instance, although a recent report by World Bank staff with respect to Botswana suggested that authorities implement long-term plans to solve budget deficits, that same report recommended that avenues toward that goal include HIV/AIDS programs, deregulation of the labor market, stronger measures to enforce tax compliance, and trade liberalization.\textsuperscript{210} The same possibility seems to be inherent in the recent movement to reform conditionality to respond to criticisms of external intervention in domestic affairs. The IMF has advertised the requirement of conditionality through a relatively narrow lens that appears consistent with constituent preferences: "a way for the IMF to monitor that its loan is being used effectively in resolving the borrower's economic difficulties, so that the country will be able to repay promptly."\textsuperscript{211} The imposition of these conditions has, at least on occasion, ignored the preferences of officials in developing nations, as evidenced by their refusal to accept IMF loans even during periods of financial distress. Where democratic regimes are not in place, of course, that refusal does not necessarily mean that the conditions are inconsistent with the preferences of constituents.\textsuperscript{212} Even in the case of democratic borrowers, it is plausible that loans that depend on conditions such as those imposed by the IMF would be refused because meeting the conditions is deemed too costly. The World Bank has recently reduced its conditions and taken steps to make them more consistent with presumed internal preferences of borrowers, although it has accomplished the latter through reference to standards that are inherently ambiguous, such as "ownership" of the policy by the borrower and "customization" of policy.\textsuperscript{213}

\textsuperscript{209} See International Bank for Reconstruction and Development, \textit{supra} note 202, at art. IV, § 10.
But the number of conditions should matter less than their substance. The key is to create conditions that are both vulnerable to monitoring and reflective of constituent preferences for which constituents themselves have limited monitoring capacity. Wolfgang Mayer and Alexandros Mourmouras, for instance, suggest that conditions should be tailored to weaken interest groups that frustrate domestic institutional reforms necessary to broader national welfare.\textsuperscript{214}

CONCLUSION

Little of the reasoning provided throughout this Essay would be lost on the Protectors of San Giorgio. They certainly understood the relationship between reducing payment risks and reducing political distortions between officials and constituents.\textsuperscript{215} The historical institutional changes wrought by creditors of developing nations, motivated largely by self-interest, have similarly sought to induce political officials to conduct themselves in a manner consistent with the interests of constituents.\textsuperscript{216} So my claims boil down to the following: creditors have the capacity to solve some collective action problems and thereby compensate for defects in monitoring by constituents. Whether or not creditors have incentives to seize those opportunities depends on the structure of the debt transaction; the value of creditor monitoring increases as the probability that constituents will monitor decreases, and all these characteristics converge when credit is being extended to a jurisdiction in the birth pangs of democracy. If these claims have any resonance, then the primary implications for current public debt are to apply greater scrutiny to the transactional structures used by those who lend to developing nations; to celebrate their efforts to create institutions of credible commitment; but even more so, to recognize how their self-interested pursuit of repayment—the conditions that sometimes earn these institutions substantial scorn—may be as crucial as their financial capital in contributing to the stability and accountability that historically is the precursor of both economic and political success.

\textsuperscript{210} See WORLD BANK, AN ASSESSMENT OF THE INVESTMENT CLIMATE IN BOTSWANA 3-5 (2007), available at http://siteresources.worldbank.org/INTAFRSUMAFTPS/Resources/BWA_ICA_Volume_1_FINAL.pdf. \textsuperscript{211} International Monetary Fund, IMF Conditionality, http://www.imf.org/external/np/exr/facts/conditio.htm (last visited Nov. 25, 2008). \textsuperscript{212} See Silvia Marchesi & Jonathan P. Thomas, \textit{IMF Conditionality as a Screening Device}, 109 ECON. J. C111, C114 (1999). \textsuperscript{213} See \textsc{World Bank}, \textit{Conditionality in Development Policy Lending}, at i-iii (2007), available at http://siteresources.worldbank.org/PROJECTS/Resources/40940-1114615847489/Conditionalityfinalreport120407.pdf. \textsuperscript{214} See Mayer \& Mourmouras, \textit{supra} note 208, at 463. \textsuperscript{215} See \textit{supra} notes 22-23 and accompanying text. \textsuperscript{216} See \textit{supra} note 25 and accompanying text.
Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on above-indicated "Notification Date" to the following e-mail address(es): email@example.com

Office Action Summary

Application No. 14/002,365
Applicant(s) KOBAYASHI ET AL.
Examiner ERICA LIN
Art Unit 2853
AIA (First Inventor to File) Status: No

-- The MAILING DATE of this communication appears on the cover sheet with the correspondence address --

Period for Reply

A SHORTENED STATUTORY PERIOD FOR REPLY IS SET TO EXPIRE 3 MONTHS FROM THE MAILING DATE OF THIS COMMUNICATION.
- Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX (6) MONTHS from the mailing date of this communication.
- If NO period for reply is specified above, the maximum statutory period will apply and will expire SIX (6) MONTHS from the mailing date of this communication.
- Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
- Any reply received by the Office later than three months after the mailing date of this communication, even if timely filed, may reduce any earned patent term adjustment. See 37 CFR 1.704(b).

Status
1) ☒ Responsive to communication(s) filed on 8/30/2013. □ A declaration(s)/affidavit(s) under 37 CFR 1.130(b) was/were filed on _______.
2a) □ This action is FINAL. 2b) ☒ This action is non-final.
3) □ An election was made by the applicant in response to a restriction requirement set forth during the interview on _______; the restriction requirement and election have been incorporated into this action.
4) □ Since this application is in condition for allowance except for formal matters, prosecution as to the merits is closed in accordance with the practice under Ex parte Quayle, 1935 C.D. 11, 453 O.G. 213.

Disposition of Claims*
5) ☒ Claim(s) 1-7 is/are pending in the application. 5a) Of the above claim(s) ______ is/are withdrawn from consideration.
6) □ Claim(s) ______ is/are allowed.
7) ☒ Claim(s) 1-7 is/are rejected.
8) □ Claim(s) ______ is/are objected to.
9) □ Claim(s) ______ are subject to restriction and/or election requirement.
* If any claims have been determined allowable, you may be eligible to benefit from the Patent Prosecution Highway program at a participating intellectual property office for the corresponding application. For more information, please see http://www.uspto.gov/patents/init_events/pph/index.jsp or send an inquiry to firstname.lastname@example.org.

Application Papers
10) □ The specification is objected to by the Examiner.
11) ☒ The drawing(s) filed on 8/30/2013 is/are: a) ☒ accepted or b) □ objected to by the Examiner. Applicant may not request that any objection to the drawing(s) be held in abeyance. See 37 CFR 1.85(a). Replacement drawing sheet(s) including the correction is required if the drawing(s) is objected to. See 37 CFR 1.121(d).

Priority under 35 U.S.C. § 119
12) ☒ Acknowledgment is made of a claim for foreign priority under 35 U.S.C. § 119(a)-(d) or (f). Certified copies: a) ☒ All b) □ Some** c) □ None of the:
1. ☒ Certified copies of the priority documents have been received.
2. □ Certified copies of the priority documents have been received in Application No. ______.
3.
□ Copies of the certified copies of the priority documents have been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).
** See the attached detailed Office action for a list of the certified copies not received.

Attachment(s)
1) ☒ Notice of References Cited (PTO-892)
2) ☒ Information Disclosure Statement(s) (PTO/SB/08a and/or PTO/SB/08b) Paper No(s)/Mail Date 8/30/2013
3) □ Interview Summary (PTO-413) Paper No(s)/Mail Date ______.
4) □ Other: ______.

The present application is being examined under the pre-AIA first to invent provisions.

**DETAILED ACTION**

**Priority**

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

**Information Disclosure Statement**

The information disclosure statement (IDS) submitted on August 30, 2013 was filed with the application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

**Claim Rejections - 35 USC § 112**

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Claim 2 recites the limitation "the other first surface electrode row". There is insufficient antecedent basis for this limitation in the claim.

**Claim Rejections - 35 USC § 102**

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.

Claims 1-2 and 5-7 are rejected under pre-AIA 35 U.S.C. 102(b) as being anticipated by European Patent No. EP1336489 ("Hirota").

Regarding claim 1, Hirota discloses a piezoelectric actuator (Fig. 9A), comprising: a ceramic substrate being long in one direction (actuator 21), the ceramic substrate comprising a vibrating plate (Fig. 9A, vibrating plate 42), a common electrode disposed on the vibrating plate (Fig. 9A, common electrode 34a), and a piezoelectric ceramic layer disposed on the common electrode (Fig. 9A, piezoelectric ceramic layer 41) and having a plurality of first through holes connected to the common electrode (Fig. 9A, through-hole 41a within piezoelectric layer 41 connects to common electrode 34a); a plurality of individual electrodes disposed in a region of the piezoelectric ceramic layer opposed to the common electrode (Fig. 9A, individual electrodes 35a are opposite common electrode 34a);
and a plurality of first surface electrodes respectively disposed inside a plurality of the first through holes in the piezoelectric ceramic layer and on a circumference of a plurality of the first through holes (paragraph [0056], the through-holes 41a are filled with conductive material to serve as electrodes), wherein a plurality of the first through holes are arranged along the one direction at a central part of the ceramic substrate in a direction orthogonal to the one direction (Fig. 9A, through-holes 41a are arranged... in a lateral direction along the length of actuator 21), and the first surface electrodes are long in the one direction (Fig. 9A with paragraph [0056], through-holes forming electrodes are vertical).

Regarding claim 2, **Hirota** discloses the piezoelectric actuator according to claim 1, wherein the first surface electrodes comprise one first surface electrode row and the other first surface electrode row (Fig. 6, the first surface electrodes corresponding to paragraph [0056] also correspond to the pressure chambers 10 which alternate), and the first surface electrodes constituting the one first surface electrode row and the first surface electrodes constituting the other first surface electrode row are shiftedly arranged in the one direction (Fig. 6, the rows are slightly shifted in a lateral direction along the length of actuator 21).

Regarding claim 5, **Hirota** discloses the piezoelectric actuator according to claim 1, wherein an arrangement is made so that the single first surface electrode is overlapped with the two or more first through holes (Fig. 9A, the first surface electrode of paragraph [0056] overlaps through-holes 41a and 42a).

Regarding claim 6, **Hirota** discloses a liquid discharge head, comprising: the piezoelectric actuator (Fig. 7 incorporates the actuator of Fig. 9) according to claim 1; and a passage member comprising a plurality of discharge holes (passage unit 4) and a plurality of pressurizing chambers respectively connected to a plurality of the discharge holes (pressure chambers 10), a plurality of the pressurizing chambers and a plurality of the individual electrodes being overlappedly stacked one upon another on a side of the piezoelectric actuator located closer to the vibrating plate (Fig. 6, pressure chambers 10 and electrodes 35a and 35b are overlapping).

Regarding claim 7, **Hirota** discloses a recording device, comprising: the liquid discharge head according to claim 6 (Fig. 7); a conveyance section for conveying a recording medium to the liquid discharge head (Fig. 1, paper feed unit 111); and a control section for controlling a voltage applied to a plurality of the individual electrodes (paragraph [0033], driver IC 132).

**Claim Rejections - 35 USC § 103**

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

**Claims 3-4 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over European Patent No. EP1336489 ("Hirota").**
103(a) as being unpatentable over European Patent No. EP1336489 ("Hirota").**

Regarding claim 3, **Hirota** discloses the piezoelectric actuator according to claim 1, wherein a second electrode 38 with conductive column 55a, which is connected to the piezoelectric layer from an upper edge of the actuator 21 (paragraph [0040]) and is connected to the common electrode (paragraph [0040], with electrodes 35a), is disposed at at least one of the end parts of the ceramic substrate in the one direction (Fig. 6, electrodes 38 are positioned at the outer edge), and a second surface electrode is disposed inside the second through hole in the piezoelectric ceramic layer and on a circumference of the second through hole, the second surface electrode being long in a direction orthogonal to the one direction (Fig. 9B, electrode 38 is long in the lateral direction, with 55a being filled with conductive material and thus the circumference). Hirota discloses that the second through hole is formed through a cover film 52 but does not explicitly disclose a second through hole penetrating through the piezoelectric ceramic layer. It would have been obvious to one of ordinary skill in the art at the time of the invention to have formed the through hole through the piezoelectric ceramic layer, as in Fig. 9A, rather than the cover film, because at the time of the invention there had been a recognized need in the art to decrease the size of actuators to increase the droplet density. There were a finite number of identified and predictable potential solutions to decreasing the size of actuators. One of ordinary skill in the art could have pursued the known potential solutions with a reasonable expectation of success, including eliminating the cover film and providing a second through hole through the piezoelectric layer, since both solutions of penetrating a cover film or penetrating a piezoelectric layer provide voltage to the actuator.

Regarding claim 4, Hirota discloses the piezoelectric actuator according to claim 3 (Fig. 9A), wherein an arrangement is made so that the single second surface electrode is overlapped with the two or more second through holes (Fig. 6 with 9B, pressure chambers 10 and electrodes 38 and 55a are overlapping).

**Conclusion**

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERICA LIN whose telephone number is (571) 270-7911. The examiner can normally be reached on 7:30 AM - 5:30 PM (Mon - Thurs). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Luu, can be reached on (571) 272-7663. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERICA LIN/
Examiner, Art Unit 2853
2021 GMiS Conference Virtual Convening Report
October 11-22, 2021
www.gmisconference.org

Throughout the pandemic, GMiS pivoted to deliver its access and services to underserved students to continue to amplify their talents and experiences. Building on the success of its first-ever virtual conference in 2020, and coupled with the uncertainty of how the country would fare in Fall 2021 due to COVID-19, GMiS decided to deliver its 2021 GMiS Conference virtually. Understanding the risks of Zoom fatigue, vaccination protocols, fluctuating travel restrictions, and higher education enrollment patterns, GMiS pushed forward to showcase "First-Class Diversity, World-Class STEM."

COVID-19 caused setbacks all across the STEM Enterprise, especially in the educational progress of our underserved students. The achievement gaps that were closing were suddenly widened, the fading digital divide re-emerged, and students' mental well-being became a more apparent concern. According to Dorn et al. (2021), at the pre-college level, "unfinished learning" has resulted in students being five months behind in mathematics and four months behind in reading. Nationally, the percentage of freshmen who return for their second year of college declined by an "unprecedented" level during the pandemic, according to a July 2021 report from the National Student Clearinghouse Research Center, which tracks national enrollment trends. While in the past few years around 75% of students returned, just 73% of students came back in 2020, the steepest drop in a decade, the center found (Cal Matters, https://nscresearchcenter.org/).

By hosting its national conference virtually, GMiS sought to mitigate the negative impacts the pandemic created on STEM career pathways. GMiS successfully convened its strategic partners representing a cross section of leading industries, laboratories, government agencies and academic institutions. The Class of 2021 HENAAC honorees, a stellar class of strong, resilient and visionary STEM role models, shared meaningful and inspirational success stories that serve as pathways and beacons of hope.

To enhance the virtual student engagement, GMiS incorporated the following:

- Expanded its STEM-Career Readiness Coaching Sessions for an entire month pre-conference
- Brought back its GMiS College Bowl – virtually
- Implemented Discord as the primary student chat platform
- Hosted three virtual student testimonial contests
- Expanded the virtual career fair by two hours each of the two days
- Added a second Speed Networking Session
- Added Graduate School Roundtables

Students appreciated these additional engagement activities, which provided them more flexibility as they pursued their on-campus and online classes. As this country emerges from the COVID-19 pandemic, GMiS remains committed to providing programs and services to empower the STEM Identity and STEM-career readiness of our underserved students.

This report is a summary of the impact on student engagement. To learn more about how you can support GMiS in advocating for student success, please visit our website at www.greatmindsinstem.org or email firstname.lastname@example.org. We look forward to seeing you in-person at our 2022 GMiS Conference, in Pasadena, CA.

Juan Rivera, Ph.D.
Acting CEO and Board Chairman

Bertha Haro
Executive Director

| Metric | Value |
|---------------------------|---------|
| Participants | 1,993 |
| Events | 84 |
| Days | 12 |
| Virtual Platforms | 6 |
| Career Fair Days | 2 |
| Down Time | 0 |
| Conference Portal Loads* | 139,217 |
| Conference Website Loads* | 5,400 |

* The reporting period for these metrics is 10/11 – 10/22

Thank You to Our Sponsors

Host Sponsors

Academic Hosts

PRE-CONFERENCE ENGAGEMENT

STEM Career Readiness Coaching Sessions

In advance of the two-week conference, GMiS hosted a series of small-group, focused webinars to prepare students for the conference. These 45-minute sessions provided one-on-one resume reviews and covered virtual interviewing strategies, how to curate an online presence, and how to navigate a virtual career fair.

Conference Website Launched!

GMiS officially launched its Conference Website on August 22. Through the start of the conference, there were over 13,800 conference page loads.

CAREER FAIR

- Est. Unique Participants: 1,286
- Entities Represented: 55
- Chats: 4,245
- Avg. Chats*: 3.3
- Recruiter:Job Seeker Ratio: 1:2

*GMiS engaged Brazen to power its virtual career fair. According to Brazen, the average number of chats per job seeker is two, and the average Recruiter:Job Seeker ratio is 1:10.

Percent Exhibitors by Sector

- Corporate: 34%
- Universities: 13%
- Military: 19%
- Government: 28%
- Non-Profit: 6%

COLLEGE STUDENTS

*Based on participant data

Gender:
- Female: 30%
- Male: 68%
- Unknown: 2%

Race/Ethnicity:
- Hispanic: 54%
- Asian/Pacific Islanders: 19%
- African American: 6%
- Native American: 1%
- Other/No Response: 2%

INFORMATION CURRENT AS OF 10/31/21

Percent Students by Classification

- Freshmen: 6%
- Sophomore: 10%
- Junior: 20%
- Senior: 45%
- Graduate: 15%
- Unknown: 5%

Top 5 STEM Disciplines Represented

1. Computer Science: 37%
2. Mechanical Engineering: 22%
3. Computer Engineering: 6%
4. Aerospace Engineering: 5%
5. Electrical Engineering: 5%

Percent Distribution of Self-Reported GPAs

- 4.00+: 10%
- 3.5 - 3.99: 44%
- 3.00 - 3.49: 34%
- 2.50 - 2.99: 10%
- 2.00 - 2.49: 1%

"All the advice gave me a sense of comfort and understanding on how to approach different things and to continue to learn about myself and understand what roads to take."
– Student Participant

INFORMATION CURRENT AS OF 10/31/21

Student Attendance by Institutional Type

- 4-Yr, Public: 86%
- 4-Yr, Private: 8%
- 2-Yr, Public: 4%
- Hispanic-Serving Institutions: 81%

133 Colleges and Universities Represented – Conference Wide

Albion College Arizona State University Athens State University Binghamton University California Baptist University California Institute of Technology California State Polytechnic University, Pomona California State University Maritime Academy California State University, Bakersfield California State University, Chico California State University, Dominguez Hills California State University, Fresno California State University, Fullerton California State University, Long Beach California State University, Los Angeles California State University, Northridge California State University, Sacramento California State University, Stanislaus College of the Canyons Colorado School of Mines Colorado Technical University Columbia University Del Mar College Dominican University Duke University East Los Angeles College El Camino College El Paso Community College Embry-Riddle Aeronautical University - Daytona Beach Florida Gulf Coast University Florida Institute of Technology Florida International University Georgia Institute of Technology Glendale Community College Houston Community College Illinois Institute of Technology Illinois State University Indiana Institute of Technology Indiana University-Purdue University Indianapolis Inter American University of Puerto Rico, Aguadilla Inter American University of Puerto Rico, Bayamon Kean University Lamar University Lehman College Lone Star College System Los Angeles Pierce College Los Angeles Trade Technical College Louisiana State University Louisiana State University Shreveport Loyola Marymount University Macalester College Marist College Massachusetts Institute of Technology Merced College Miami Dade College Missouri University of Science and Technology Morgan State University New Jersey Institute of Technology New Mexico Institute of Mining and Technology New Mexico State University New Mexico State University at Alamogordo New York City College of Technology North Carolina State University Northeastern Illinois University Northern Illinois University Oral Roberts University Oregon State University Polytechnic University of Puerto Rico Prairie View A&M University Reedley College Rensselaer Polytechnic Institute Rice University Rio Hondo College Rio Salado College Rutgers, The State University of New Jersey Saint Louis University San Antonio College San Francisco State University San Jose State University Santa Ana College Seattle University Stanford University Teachers College Texas A&M University Texas A&M University - Corpus Christi Texas A&M University - Kingsville Texas Tech University Texas Woman's University The Pennsylvania State University Tulane University Universidad Ana G.
Mendez - Gurabo Universidad de Puerto Rico Recinto de Rio Piedras University of Arizona University of Arkansas at Little Rock University of California, Berkeley University of California, Davis University of California, Irvine University of California, Los Angeles University of California, Merced University of California, Riverside University of Central Florida University of Chicago University of Florida University of Houston University of Houston-Clear Lake University of Houston-Downtown University of Houston-Victoria University of Illinois at Chicago University of Illinois at Urbana-Champaign University of La Verne University of Maryland - Baltimore County University of New Mexico University of Oklahoma University of Puerto Rico at Arecibo University of Puerto Rico at Bayamon University of Puerto Rico at Mayagüez University of South Florida University of Southern California University of Texas at Arlington University of Texas at Austin University of Texas at Dallas University of Texas at El Paso University of Texas at San Antonio University of Texas Rio Grande Valley University of Washington Vanderbilt University Ventura College Virginia Commonwealth University Virginia Polytechnic Institute and State University Waukesha County Technical College West Virginia University

COMPETITIONS

Research Poster Competition
- 77 Accepted Submissions
- 18 Competition Awards

CAHSI Hackathon
- 159 Participants
- 28 CAHSI Institutions
- Hidden Messages: TEAM 57
- PenTest: TEAM 40
- Forensics: TEAM 11
- Password Cracking: TEAM 17
- 1st Overall: TEAM 41
- 2nd Overall: TEAM 10
- 3rd Overall: TEAM 49

CAHSI Data Analytics
- 27 Participants
- 15 CAHSI Institutions
- 1st Place: SOLO
- 2nd Place: UHD2021

COLLEGE BOWL 22
- 126 Participants
- 50 Unique Winners
- 10 Individual Winners
- 1st Place Team: NASA
- 2nd Place Team: Northrop Grumman Alpha
- 3rd Place Team: Northrop Grumman One

Students competed in Individual Competitions and on two different teams for the Team Competition. Over $22,400 in competition awards.

GMiS STEM SCHOLARS

GMiS received over 800 completed applications for the GMiS STEM Scholarships, which offer both merit-based academic scholarships and STEM Civic Leadership Scholarships. This year, scholars were honored virtually at the Scholars Reception and the Student Leadership Awards. By the end of 2021, GMiS will have awarded over $5.3M to more than 1,700 college students since 2000.

"As a Hispanic woman in STEM, it is important to me to have a support system that understands my background and experiences, and I am grateful that Lockheed Martin has provided me with role models and opportunities that will encourage me in my academic and professional development... this award will provide a springboard that will allow me to make contributions to the advancement of computing, data sciences, and information technology."
– Isabel O. Gallegos, 2021 GMiS Outstanding Student Leadership Award Recipient

Number of GMiS STEM Scholarships: 97
Average GPA: 3.71

Classification:
- 9% Graduate Students
- 47% Seniors
- 20% Juniors
- 13% Sophomores
- 11% Freshmen

Gender:
- 56% Male
- 44% Female

Total Awarded: $237,524

The GMiS STEM Scholars are part of the GMiS Scholarship Program, which offers year-round scholarships including:

Artemio G. Navarro Scholarships
For eligible graduating seniors attending Bishop Mora Salesian High School in Los Angeles.
California Medical Scholarships
For eligible underrepresented students pursuing a medical profession in one of these California medical schools – University of California, Davis School of Medicine; University of California, Los Angeles David Geffen School of Medicine; University of California, San Francisco School of Medicine; and University of Southern California Keck School of Medicine.

California Health Scholarships
For eligible undergraduate students pursuing a health-related discipline at a four-year institution in California.

PROFESSIONALS

Percent Professional Participants By Sector (n=744)
- Corporate: 64%
- Government: 6%
- Military: 20%
- Academic: 6%
- Non-Profit: 3%
- Other: 1%

2021 HENAAC Engineer of the Year
Marla E. Pérez-Davis, Ph.D.
Director, Office of the Director
NASA Glenn Research Center

2021 HENAAC Scientist of the Year
Daniela Brunner, Ph.D.
Chief Innovation Officer, Data Sciences & Services Department
PsychoGenics Inc.

2021 Chairman's Award
Major General R. Mark Toy
Chief of Staff, UNC Headquarters
United Nations Command

Percent HENAAC Award Honorees By Sector (n=47) (47% Female)
- Corporate: 51%
- Government: 17%
- Military: 23%
- Academic: 9%

GMiS used Discord for the first time. Over 530 users engaged with this communications platform during the virtual conference.

| Platform | Impressions | Engagements | Audience |
|-----------|-------------|-------------|----------|
| Facebook | 48,559 | 5,194 | 5,110 |
| Twitter | 28,532 | 333 | 16,099 |
| LinkedIn | 61,510 | 2,416 | 3,451 |
| Instagram | 14,296 | 618 | 1,206 |

Numbers from 10/11 – 10/29

On behalf of the entire GMiS Team, thank you for being part of the virtual 2021 GMiS Conference! Please continue to visit our GMiS Conference Website - [www.gmisconference.org](http://www.gmisconference.org) - to access the on-demand webinars, award shows, GMiS STEM Scholars, and Research Poster Exhibit Hall for the next several months. We look forward to seeing you at the 2022 GMiS Conference, in Pasadena, CA, from October 5 - 8, 2022!
Recruitment of PRC1 function at the initiation of X inactivation independent of PRC2 and silencing

Stefan Schoeftner\textsuperscript{1}, Aditya K Sengupta\textsuperscript{1}, Stefan Kubicek\textsuperscript{1}, Karl Mechtler\textsuperscript{1}, Laura Spahn\textsuperscript{2}, Haruhiko Koseki\textsuperscript{3}, Thomas Jenuwein\textsuperscript{1} and Anton Wutz\textsuperscript{1,*}

\textsuperscript{1}Research Institute of Molecular Pathology, Vienna, Austria, \textsuperscript{2}Centre of Molecular Medicine, Vienna, Austria and \textsuperscript{3}RIKEN Research Center for Allergy and Immunology (RCAI), RIKEN Yokohama Institute, Suehiro, Tsurumi-ku, Yokohama, Japan

In mammals, X inactivation is initiated by expression of \textit{Xist} RNA and involves the recruitment of Polycomb repressive complex 1 (PRC1) and 2 (PRC2), which mediate chromosome-wide ubiquitination of histone H2A and methylation of histone H3, respectively. Here, we show that PRC1 recruitment by \textit{Xist} RNA is independent of gene silencing. We find that \textit{Eed} is required for the recruitment of the canonical PRC1 proteins Mph1 and Mph2 by \textit{Xist}. However, functional Ring1b is recruited by \textit{Xist} and mediates ubiquitination of histone H2A in \textit{Eed} deficient embryonic stem (ES) cells, which lack histone H3 lysine 27 tri-methylation. \textit{Xist} expression early in ES cell differentiation establishes a chromosomal memory, which allows efficient H2A ubiquitination in differentiated cells and is independent of silencing and PRC2. Our data show that \textit{Xist} recruits PRC1 components by both PRC2 dependent and independent modes and in the absence of PRC2 function is sufficient for the establishment of Polycomb-based memory systems in X inactivation.

The EMBO Journal (2006) 25, 3110–3122. doi:10.1038/sj.emboj.7601187; Published online 8 June 2006
Subject Categories: chromatin & transcription
Keywords: Eed; Polycomb; Ring1b; X inactivation; \textit{Xist}

**Introduction**

Mammals equalise the dosage of X-linked genes between males and females by inactivation of one of the two female X chromosomes early in development. In female mice, the paternal X chromosome is silenced in preimplantation embryos, giving rise to the imprinted pattern of X inactivation in the extraembryonic lineages. In the cells forming the embryo, the inactive X (Xi) becomes reactivated at the blastocyst stage, followed by random inactivation of either the paternal or the maternal X before gastrulation (Huynh and Lee, 2003; Mak et al., 2004; Okamoto et al., 2004). Random X inactivation is recapitulated during the differentiation of mouse embryonic stem (ES) cells. The formation of an inactive X chromosome comprises an ordered series of chromatin modifications, including post-translational modifications of histones and the recruitment of Polycomb group (PcG) complexes (Plath et al., 2002). Initiation of silencing depends on the expression of the noncoding \textit{Xist} RNA (Borsani et al., 1991; Brockdorff et al., 1991; Brown et al., 1991a, b). However, \textit{Xist} is dispensable for the maintenance of the Xi at later stages of differentiation, when multiple pathways including DNA methylation and hypoacetylation of histone H4 stably propagate the inactive state (Csankovszki et al., 2001; Hernandez-Munoz et al., 2005).
The silent state at the initiation of X chromosome inactivation is initially reversible (Wutz and Jaenisch, 2000) and is associated with chromosome-wide tri-methylation of histone H3 on lysine 27 (H3K27me3), mono-methylation of histone H4 on lysine 20 (H4K20me1) and ubiquitination of lysine 119 on histone H2A (H2AK119ub1) as well as the recruitment of the Polycomb repressive complexes 1 (PRC1) and 2 (PRC2; Cao et al., 2002; Plath et al., 2003; de Napoles et al., 2004; Fang et al., 2004; Kohlmaier et al., 2004). PRC2 contains the Ezh2, Eed, Suz12 and RbAp46/48 proteins and has histone H3 specific lysine methylase activity (Cao et al., 2002; Czermin et al., 2002; Kuzmichev et al., 2002, 2004; Muller et al., 2002). Recruitment of PRC2 by \textit{Xist} and appearance of H3K27me3 along the Xi are among the earliest events in X inactivation (Mak et al., 2002; Plath et al., 2003; Silva et al., 2003). This has led to the prevailing view that PRC2 and H3K27me3 have a crucial function in X inactivation. However, recruitment of the PRC2 complex and H3K27me3 also occur in the absence of transcriptional silencing (Plath et al., 2003; Kohlmaier et al., 2004). In differentiated cells, \textit{Xist} is necessary but not sufficient for recruitment of H3K27me3, and thus H3K27me3 also depends on epigenetic information residing on the chromosome (Kohlmaier et al., 2004). When \textit{Xist} is expressed during an early time window in differentiation, a chromosomal memory is established that enables efficient histone methylation later in differentiation. This memory is maintained in differentiated cells independent of \textit{Xist} and gene silencing (Kohlmaier et al., 2004). Establishment of the memory temporally coincides with the transition from reversible to irreversible silencing, consistent with a role in the maintenance of X inactivation. The observation that recruitment of PRC2 and H3K27me3 is strictly dependent on \textit{Xist} RNA and is reversible excludes PRC2 as a stable component of the memory. However, this finding is compatible with a role of PRC2 in memory establishment. PcG complexes are thought to maintain a transcriptional memory for several developmental control genes in flies and mammals (Ringrose and Paro, 2004). It has been proposed that PRC2 recruits PRC1 based on the specificity of the chromodomain of Polycomb towards H3K27me3 (Fischle et al., 2003; Min et al., 2003). \textit{Eed} is required for the maintenance of the paternal Xi exclusively in differentiating extraembryonic trophoblast cells (Wang et al., 2001). However, no defect in the maintenance of imprinted X inactivation has been observed in *Eed* mutant trophoblast stem cells or extraembryonic endoderm tissue, which lack H3K27me3 (Kalantry *et al.*, 2006). In trophoblast stem cells, *Eed* is necessary for *Xist* RNA stabilisation and reactivation of the Xi is observed only after onset of differentiation. The function of *Eed* in the initiation of random X inactivation in embryonic cells has not been studied and its significance in the embryo proper remains unclear.

Here, we test the idea that PRC2 acts to recruit PRC1 in random X inactivation. Contrary to the expectation, we find that *Xist* recruits the PRC1 protein Ring1b independent of H3K27me3 and Ring1b acts independently in the establishment of memory systems for the maintenance of X inactivation. This suggests that the present models for PcG complex recruitment in X inactivation need to be revised.
**Results**

*Xist mediated H2A ubiquitination is regulated by a memory in differentiated cells and independent of gene silencing*

Biochemically purified mammalian PRC1 consists of several PcG proteins, including Ring1b, and its histone H2A lysine 119 specific ubiquitination activity has been shown (de Napoles *et al.*, 2004; Wang *et al.*, 2004). To investigate the function of PRC1 in X inactivation, we have elucidated the kinetics of H2AK119ub1 in ES cells containing an inducible *Xist* expression system (Figure 1A). In the clone 36 ES cell line, an *Xist* cDNA transgene under control of the doxycycline inducible promoter is inserted into chromosome 11, and recapitulates chromosome-wide silencing (Wutz and Jaenisch, 2000). In ΔSX ES cells, the endogenous *Xist* locus has been modified by a targeted deletion of repeat A sequences of *Xist*, which are required for silencing, and concomitant introduction of an inducible promoter. This achieves inducible expression of a mutant *Xist* RNA, which does not cause gene silencing and thus circumvents the lethality associated with inactivation of the single X chromosome in this male ES cell line (Wutz *et al.*, 2002).

H2AK119ub1 was established rapidly upon *Xist* induction in undifferentiated clone 36 ES cells. Importantly, induction of the silencing-deficient *Xist* RNA in ΔSX ES cells was also able to establish H2AK119ub1 on the chromosome (Figure 1B), indicating that H2AK119ub1 is not sufficient for gene silencing in X inactivation. We next studied the kinetics and stability of H2AK119ub1 during ES cell differentiation. We induced *Xist* starting at different time points in differentiating clone 36 ES cells and measured the levels of H2AK119ub1 and H3K27me3 at day 12 of differentiation (Figure 1C and D). In continuous presence of doxycycline, we detected a strong focal H2AK119ub1 signal in 69% of the nuclei, whereas no focus was observed if *Xist* was not induced. When *Xist* was turned off after 8 days of differentiation, focal H2AK119ub1 staining was observed in 7% of the cells on day 12, showing that H2AK119ub1 was reversible and *Xist*-dependent during differentiation. *Xist* induction starting from day 4 in differentiation resulted in low levels of H2AK119ub1 (16%) at day 12 compared to cultures where induction had occurred early. Therefore, in differentiated cells *Xist* is not sufficient for efficient imposition of H2AK119ub1, suggesting that H2A ubiquitination could be regulated by a chromosomal memory similar to H3K27me3. To test this, we induced *Xist* expression during the first 4 days of differentiation in clone 36 ES cells, subsequently turned off *Xist* for 4 days by withdrawing doxycycline and then measured H2AK119ub1 levels after re-induction of *Xist* for 4 more days. H2AK119ub1 staining was observed in 70% of these cells, comparable to the percentage after 12 days of differentiation in continuous presence of doxycycline (Figure 1C). We conclude that *Xist* expression during an early time window in ES cell differentiation establishes a memory that is maintained independently of *Xist*. Reinduction of *Xist* in conjunction with this memory allows efficient H2AK119ub1 in differentiated cells. The recruitment of PRC1 mediated H2AK119ub1 therefore parallels the recruitment of PRC2 mediated H3K27me3 (Figure 1D) and could be a result of a dependency of PRC1 recruitment on H3K27me3.
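The induction schedules and outcomes above can be summarised compactly. The following minimal Python sketch is ours, not the authors'; the schedule encoding, function names and the day-4 memory cutoff are illustrative assumptions. It tabulates the reported clone 36 percentages and makes the two conditions for efficient H2AK119ub1 explicit: Xist must be expressed at the time of assay, and it must also have been expressed during the early differentiation window.

```python
# Illustrative sketch, not from the paper: reported clone 36 time courses,
# encoded as doxycycline-on windows (days of differentiation) together with
# the percentage of nuclei showing focal H2AK119ub1 at day 12 (from the text).
SCHEDULES = {
    "continuous (days 0-12)":           ([(0, 12)],         69),
    "turned off after day 8":           ([(0, 8)],           7),
    "late induction (days 4-12)":       ([(4, 12)],         16),
    "pulse + re-induction (0-4, 8-12)": ([(0, 4), (8, 12)], 70),
}

def on_at_assay(windows, assay_day=12):
    """Xist expressed at the time H2AK119ub1 is scored (the mark is reversible)."""
    return any(start <= assay_day <= end for start, end in windows)

def induced_early(windows, memory_cutoff=4):
    """Xist expressed during the early window that sets the memory (assumed days 0-4)."""
    return any(start < memory_cutoff for start, _ in windows)

for name, (windows, percent) in SCHEDULES.items():
    efficient = on_at_assay(windows) and induced_early(windows)
    print(f"{name:34s} predicted efficient: {efficient!s:5s} observed: {percent}%")
```

Only the two schedules satisfying both predicates reach roughly 70% in the reported data, which is the memory effect the authors describe.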
**Generation of ES cells lacking Eed**

To directly investigate the function of the PRC2 complex in the recruitment of PRC1 at the initiation of X inactivation, we disrupted the *Eed* gene by targeting in clone 36 and ΔSX ES cells. The targeting vector replaced sequences encoding the first and second WD40 domains of the Eed protein with a stop cassette, which terminates transcription resulting in a null allele (Supplementary Figure 1A). After removal of the selection cassette from targeted ES clones by Cre-recombinase mediated deletion, the second allele of *Eed* was targeted using the same strategy. This yielded the cell lines 36*Eed−/−* (clone 1 and 2) and ΔSX*Eed−/−*, derivatives of clone 36 and ΔSX ES cells, respectively. Northern analysis confirmed the absence of wild-type *Eed* transcripts in these cells (Figure 2A). Two truncated *Eed* RNA species were observed in Eed−/− ES cells, consistent with the termination of transcription at the introduced stop cassette. Western analysis revealed that Eed protein was absent in the Eed−/− cell lines (Figure 2B), while in control clone 36 and ΔSX ES cell lines the four Eed isoforms were resolved. We further reconstituted *Eed* expression in 36*Eed−/−* (clone 2) ES cells by introducing a transgene expressing an amino terminal fusion of the enhanced green fluorescent protein (EGFP) with the short Eed isoform. In these 36*EedTG* ES cells, we observed one protein migrating with the expected molecular weight of the EGFP-Eed fusion protein and a faster migrating product, likely due to proteolysis (Figure 2B).

In *Eed* deficient ES cells, *Suz12* RNA levels were reduced whereas steady-state levels of the *Ezh2* transcripts remained unchanged compared to control cell lines (Figure 2A). Western analysis revealed that Ezh2 was drastically reduced below the detection limit and Suz12 was found in reduced amounts in *Eed*−/− cells (Figure 2B). In 36*EedTG* cells, *Ezh2* and *Suz12* RNA and protein levels were rescued, confirming that the effect was specific and caused by the lack of *Eed* (Figure 2B, and data not shown). *Eed* deficient 36*Eed−/−* and ΔSX*Eed−/−* ES cells showed a reduced ability to form colonies compared to control 36 and ΔSX ES cells, but proliferation and self-renewal of ES cells was largely independent of *Eed* (Supplementary Figure 1B and C). Furthermore, the plating efficiency was rescued in 36*EedTG* ES cells, showing that the defect is specific and due to lack of *Eed*. Eed−/− ES cells could be induced to differentiate with retinoic acid, but showed a reduced developmental potential, indicated by the formation of irregularly shaped embryoid bodies and the absence of contractile structures indicative of cardiomyocytes in embryoid body outgrowths (Supplementary Figure 1D and E, and data not shown).

**Figure 1** PRC1 recruitment by *Xist*. (A) Overview of the inducible *Xist* expression system (TetOP) on chromosome 11 and the X in clone 36 and ΔSX ES cells, respectively. In clone 36 ES cells, *Xist* induction silences a linked puromycin marker gene (puro). In ΔSX cells, the A repeat of *Xist* (triangle) is deleted. (B) Recruitment of the PRC1 components Ring1b and Mph1 as well as resulting H2AK119ub1 was observed by combined *Xist* RNA FISH (red) and immunofluorescence analysis (green) in undifferentiated ΔSX ES cells after 3 days of *Xist* induction. (C) H2AK119ub1 is regulated by a chromosomal memory in differentiated cells.
Bar graphs representing the percentage of nuclei with focal H2AK119ub1 signals (grey bars) and *Xist* RNA (white bars) are given (above). Error bars represent the standard deviation. Below, a scheme of the ES cell differentiation time course shows the presence (black) or absence (white) of doxycycline. An asterisk marks the *Xist* induction scheme revealing the chromosomal memory. (D) Analysis of H3K27me3 in parallel cultures to (C).

**Xist recruits Suz12 independent of functional PRC2**

Western analysis of *Eed* deficient ES cells revealed reduced Suz12 protein levels compared to control 36 ES cells and a loss of Ezh2 protein (Figure 2B). This was verified by combined immunofluorescence and *Xist* RNA fluorescence *in situ* hybridisation (FISH) analysis on ES cells after 3 days of *Xist* induction with doxycycline. In control clone 36 ES cells, 89, 79 and 88% of cells showed colocalisation of *Xist* RNA with Eed, Ezh2 and Suz12, respectively (Figure 2C–E; Table I). In *Eed* deficient 36*Eed−/−* ES cells, *Xist* RNA showed normal localisation and no signal for Eed and Ezh2 was detected, consistent with the loss of these proteins (Figure 2C and D). The Suz12 signal was markedly decreased in Eed deficient cells. However, we observed colocalisation of Suz12 with Xist RNA in 13 and 7% of 36Eed−/− clone 1 and clone 2 ES cells, respectively (Figure 2E). This demonstrates that recruitment of Suz12 by Xist RNA can occur, at least in part, independent of Ezh2 and Eed, suggesting a role for Suz12 in PRC2 recruitment in X inactivation.

**Figure 2** Generation of ES cells lacking Eed. (A) Northern analysis of Eed, Suz12 and Ezh2 in undifferentiated control clone 36 and Eed deficient 36Eed−/− ES cells after Xist was induced for 3 days (+) or not (−); Gapdh as loading control. (B) Western analysis of Eed, Ezh2 and Suz12 in nuclear extracts from uninduced ES cells (−) or induced for 3 days (+). hnRNP A as loading control, asterisk indicates a nonspecific band. (C–E) Indirect immunofluorescence (green) of Eed (C), Ezh2 (D) or Suz12 (E) and subsequent Xist RNA FISH (red) of representative nuclei of undifferentiated 36Eed−/− and control clone 36 ES cells after 3 days of Xist induction. DAPI (blue) stains DNA. Statistics of the number of nuclei showing colocalisation of Suz12 staining with Xist in 36 and 36Eed−/− ES cells are shown; error bars indicate standard deviation (n > 600).

**Xist recruits PRC1 independent of Eed and H3K27me3 in ES cells**

To study the chromosomal marks at the initiation of X inactivation in Eed deficient ES cells, we performed combined Xist RNA FISH immunofluorescence analysis on 36Eed−/− and control ES cells (Figure 3 and Table I). After Xist induction for 3 days, we observed a strong focal H3K27me3 staining colocalising with Xist RNA in clone 36 ES cells. However, in 36Eed−/− ES cells di- and tri-methylation of H3K27 were drastically reduced and no colocalisation with Xist was observed, consistent with a loss of PRC2 function in these cells (Figure 3B and C). A faint H3K27me3 signal was still observed at pericentric heterochromatin, possibly due to weak cross-reactivity of the antibody with H4K20me3 (Peters et al., 2003). We detected a robust H3K27me1 signal at pericentric heterochromatin in 36Eed−/− cells comparable to controls (Figure 3A). In 36EedTG ES cells, transgenic expression of EGFP-Eed rescued H3K27me3 (Supplementary Figure 1F). H2AK119ub1 and H4K20me1 are two marks associated with the initiation of X inactivation.
H2AK119ub1 colocalised with Xist RNA in 97 and 98% of clone 36 and 36Eed−/− ES cells, respectively (Figure 3E). A robust H4K20me1 signal colocalising with Xist RNA was detectable in 82% of control clone 36 ES cells. In 36Eed−/− ES cells, the H4K20me1 signal appeared less intense and was detected in 50% (clone 1) and 36% (clone 2) of cells (Figure 3D). We conclude that ubiquitination of H2A on lysine 119 is independent of Eed, but PRC2 function supports the establishment of H4K20me1 by Xist (Table I).

We observed normal H2A ubiquitination upon Xist expression in Eed deficient cells, which in ES cells is thought to be mediated by Ring1b, a core component of PRC1 (Figure 3E). To assess if PRC1 was indeed recruited by Xist independent of PRC2, we performed immunofluorescence analysis using antisera specific for the PRC1 core components Ring1b, Mph1 and Mph2. Colocalisation of Ring1b with Xist RNA was observed in ES cells independent of Eed (Figure 4A). The Mph1 signal colocalised with Xist in 48% of control 36 ES cells, but no colocalisation was observed in 36Eed−/− ES cells (Figure 4B). Colocalisation of Mph2 with Xist RNA was observed only in differentiated cells (Figure 4C), and was detected in 33% of clone 36 but not in Eed deficient 36Eed−/− ES cells on day 8 of differentiation. We conclude that recruitment of Mph1 and Mph2 by Xist is dependent on PRC2 function, but Ring1b is recruited independently of PRC2, Mph1 and Mph2. Despite the lack of detectable Mph1 and Mph2 recruitment, the Ring1b protein is enzymatically active as shown by ubiquitination of H2A.

**PRC2 is critical for H3K27me2 and H3K27me3 in ES cells**

To assess if disruption of Eed in 36Eed−/− and ΔSXEed−/− ES cells indeed caused a loss of PRC2 function, we performed an analysis of histone modifications. By Western analysis, H3K27me2 and H3K27me3 were lost in 36Eed−/− and ΔSXEed−/− ES cells, but we found mono-methylation of H3K27 only slightly reduced, consistent with our immunofluorescence data (Figure 3F). The mono-, di- and tri-methylation states of histone H3 lysine 9 or of H4 lysine 20, and ubiquitination of histone H2A lysine 119 were not altered in Eed deficient ES cells (data not shown). To further quantify the histone methylation marks, we performed a mass spectrometric analysis of nuclear extracts prepared from undifferentiated 36Eed−/−, ΔSXEed−/− and control ES cells. In control 36 ES cells, 17% of bulk histone H3 was mono-methylated, 58% di-methylated and 14% tri-methylated on lysine 27 (Figure 3G), consistent with previous reports (Peters et al., 2003). In Eed deficient ES cells, H3K27me3 and H3K27me2 were dramatically reduced compared to controls, but only a moderate reduction in the H3K27me1 signal was observed (Figure 3G). The loss of H3K27 di- and tri-methylation in Eed deficient ES cells resulted in a concomitant increase in unmodified but not mono-methylated H3K27. Di- and tri-methylation of H3K27 was restored in 36EedTG ES cells to 42 and 7%, corresponding to 72 and 50% of wild-type levels, respectively (Figure 3G). The methylation levels of H3K9 or H4K20 were unchanged by the absence of Eed (Supplementary Figures 2B and 3). However, H3K36me2 levels were significantly reduced from 50% in control clone 36 ES cells to 34% in 36Eed−/− ES cells, and 31% in ΔSXEed−/− ES cells (Supplementary Figures 2A and 5). Restoration of H3K36me2 levels in 36EedTG to 43% demonstrated that the PRC2 complex regulates global H3K36me2 marks.
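As a quick sanity check of the rescue arithmetic just reported (our sketch, not the authors'; the values are taken from the Figure 3G percentages in the text), the restored levels can be expressed as fractions of the control values:

```python
# Quick check of the reported rescue arithmetic (our sketch; values from Figure 3G):
# control clone 36 bulk H3K27 levels are 58% (me2) and 14% (me3); the 36EedTG
# rescue restores 42% and 7%, respectively.
control = {"H3K27me2": 58.0, "H3K27me3": 14.0}   # percent of bulk histone H3
rescued = {"H3K27me2": 42.0, "H3K27me3": 7.0}

for mark, wt in control.items():
    print(f"{mark}: {rescued[mark]:.0f}% = {100 * rescued[mark] / wt:.0f}% of wild type")
# H3K27me2: 42% = 72% of wild type
# H3K27me3: 7% = 50% of wild type
```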
We conclude that in ES cells PRC2 is crucial for H3K27 di- and tri-methylation, but has no detectable contribution to H3K9 methylation. The H3K27me1 mark on pericentric heterochromatin was unaffected in Eed deficient ES cells (Figure 3A).

**Table I** PcG proteins and histone modifications recruited by Xist

| | Eed | Ezh2 | Suz12 | Ring1b | Mph1 | Mph2* | H3K27me3 | H4K20me1 | H2AK119ub1 |
|---|---|---|---|---|---|---|---|---|---|
| 36 | 89 ± 3% (n = 624) | 79 ± 9% (n = 346) | 88 ± 3% (n = 629) | 56 ± 7% (n = 368) | 48 ± 11% (n = 502) | 33 ± 8% (n = 470) | 96 ± 1% (n = 346) | 82 ± 13% (n = 410) | 97 ± 2% (n = 488) |
| 36Eed−/− Clone 1 | 0 (n = 510) | 0 (n > 600) | 13 ± 2% (n = 624) | ND | ND | ND | 0 (n > 600) | 50 ± 10% (n = 227) | 98 ± 0% (n = 479) |
| 36Eed−/− Clone 2 | 0 (n = 634) | 0 (n > 600) | 7 ± 1% (n = 629) | 53 ± 8% (n = 478) | 0 (n = 650) | 0 (n = 456) | 0 (n > 600) | 36 ± 7% (n = 224) | ND |

The percentage of focal signals colocalising with Xist RNA in ES cells treated with doxycycline for 3 days, or after 8 days of differentiation in the presence of doxycycline (*). Mean ± s.d. of three independent slides and the total number of nuclei counted (n) are indicated.

**Figure 3** Histone modifications in *Eed* deficient ES cells. (A–E) Combined *Xist* RNA FISH (red) and indirect immunofluorescence (green) analysis of the indicated histone modifications on undifferentiated 36*Eed−/−* and control 36 ES cells after 3 days of *Xist* expression. Representative images are shown; for statistics see Table I. (F) Western analysis of mono-, di- and tri-methylation of H3K27 in 36*Eed−/−* clones 1 and 2 and control 36 ES cells after *Xist* induction for 3 days (+) or not (−); loading control hnRNP A. (G) Mass-spectrometric analysis of histone H3 lysine 27 methylation in clone 36, 36*Eed−/−* and 36*EedTG* ES cells. The percentage of the indicated modification state is given for three independent experiments; error bars indicate standard deviation.

**Initiation of silencing by *Xist* is independent of PRC2**

In clone 36 ES cells, inducible *Xist* expression causes reversible silencing of a puromycin resistance gene, which was co-integrated with the *Xist* cDNA transgene on chromosome 11 (Wutz and Jaenisch, 2000). To establish whether *Eed* is required for initiation of silencing, we induced *Xist* expression in 36*Eed−/−* ES cells for 3 days and analysed puromycin resistance gene expression by Northern analysis (Figure 5A). Silencing was equally efficient in control 36, 36*EedTG* and *Eed* deficient ES cells, demonstrating that Eed and H3K27me3 are dispensable for initiation of silencing by *Xist*. To investigate the role of *Eed* for the maintenance of silencing, we induced *Xist* in differentiating 36*Eed−/−* and control 36 ES cells. In retinoic acid differentiated cells in the presence of doxycycline for 8 days, or for 4 days followed by 4 days without *Xist* induction, we observed efficient maintenance of silencing of the puromycin gene compared to cultures in which *Xist* had not been induced (Figure 5B). Notably, there was no difference between *Eed* deficient 36*Eed−/−* and control 36 ES cells, demonstrating that the shift from reversible to irreversible gene silencing had occurred in the absence of PRC2 function.
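Returning to Table I above: each entry is a mean and sample standard deviation over three independently scored slides. The sketch below is ours, not the authors'; the per-slide percentages are hypothetical, since the paper reports only the summary values and the total n, but it shows how an entry of that form is computed.

```python
# Minimal sketch (ours) of how a Table I entry is formed: the mean +/- sample
# standard deviation of colocalisation percentages from three independent slides.
# The per-slide values below are hypothetical; the paper reports only summaries.
import statistics

slide_percentages = [86.0, 90.0, 91.0]   # hypothetical scores for one marker

mean = statistics.mean(slide_percentages)
sd = statistics.stdev(slide_percentages)  # sample (n-1) standard deviation
print(f"{mean:.0f} \u00b1 {sd:.0f}%")     # -> 89 ± 3%, the format used in Table I
```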
To test the function of *Eed* for the maintenance of silencing in a more physiological differentiation model, we established embryoid body outgrowth cultures from *Eed* deficient 36*Eed−/−* and control 36 ES cells in the presence or absence of doxycycline and measured expression of the puromycin marker gene after 4 weeks of differentiation. Northern analysis revealed that silencing was maintained in the absence of *Eed* (Figure 5C). Finally, to establish that long-range silencing was maintained, as opposed to merely silencing of the marker gene in proximity of the *Xist* transgene, we established differentiated cultures by induction with retinoic acid and measured expression of genes on chromosome 11 by quantitative PCR analysis. The *Npm1* gene, which is located 20 Mb from the transgene integration site, was repressed by *Xist* on day 8 of differentiation in 36*Eed−/−* as well as in control cells to approximately 50%. *Npm1* repression was also maintained in cells that were differentiated in the presence of doxycycline for 4 days followed by 4 days without (Figure 5D). The repression of three other genes, *Cct4*, *Igf2bp1* and *Tk1*, in differentiated cells showed the same trend but was more variable, probably because of heterogeneous regulation in the differentiated cultures. We conclude that *Eed* is not required for the initiation of *Xist* mediated silencing, and that PRC2 function and H3K27me3 are dispensable for the maintenance of long-range silencing.

**Figure 4** Recruitment of PRC1 components in the absence of Eed. (A, B) Indirect immunofluorescence (IF) of Ring1b (A), Mph1 (B) and subsequent *Xist* RNA FISH (red) analysis on undifferentiated 36*Eed−/−* and control clone 36 ES cells after *Xist* expression for 3 days. (C) Analysis for Mph2 in ES cells differentiated for 8 days in the presence of doxycycline. The percentage of nuclei showing focal IF staining colocalising with *Xist* RNA is given for undifferentiated (ES), day 3 (DD3) and day 8 (DD8) of differentiation. Error bars represent standard deviation (n > 350).

**Figure 5** Initiation and maintenance of silencing independent of Eed. (A) Northern analysis of PGKpuromycin (puro) silencing in 36Eed−/− and control clone 36 cells after Xist induction for 24, 48 and 72 h; Gapdh as loading control. (B) Maintenance of puro silencing in cells differentiated in the presence (+, lanes 2, 5 and 8) or absence (−, lanes 1, 4 and 9) of doxycycline, or differentiated for 4 days in the presence followed by 4 days in the absence of doxycycline (lanes 3, 6 and 7). (C) Northern analysis of puro expression in embryoid body outgrowths established in the presence of doxycycline (+) or without (−) after 4 weeks. (D) Quantitative expression analysis of Cct4, Npm1, Igf2bp1 and Tk1 on chromosome 11 in control 36 and 36Eed−/− ES cells at day 8 of differentiation in the absence (red bars), continuous presence (blue bars) of doxycycline, or presence of doxycycline for the first 4 days (green bars). Means of three independent measurements normalised to Gapdh are shown; error bars represent standard deviation. The scheme on the left shows the genes relative to the Xist transgene.

**Memory recruitment for H2AK119ub1 is independent of PRC2 function**

We observed that H2AK119ub1 is regulated by a chromosomal memory in differentiated cells.
To investigate whether this memory would be established in *Eed* deficient ES cells and could still contribute in this context to *Xist*-mediated silencing, we analysed the establishment of H2AK119ub1 in *Eed* deficient 36*Eed−/−* (clone 1 and 2) and ΔSX*Eed−/−* ES cells (Figure 6A). The latter express a mutant *Xist* RNA that does not cause transcriptional repression, thus allowing us to follow memory establishment on an active chromosome. In all ES cell lines, *Xist* expression at early differentiation enabled efficient H2AK119ub1 at later time points, comparable to control clone 36 ES cells (Figures 1C and 6A). We induced *Xist* expression for 4 days beginning at the start of differentiation, followed by withdrawal of doxycycline for 4 days, after which *Xist* RNA and H2AK119ub1 had been lost from the chromosome; *Xist* expression was then re-induced for 4 more days. In these cells we observed efficient re-ubiquitination (70, 59, 60 and 31% in clone 36, 36*Eed−/−* clone 1, 36*Eed−/−* clone 2 and ΔSX*Eed−/−* ES cells, respectively). This is comparable to cells which were differentiated in continuous presence of doxycycline (69, 61, 58 and 41%). Importantly, efficient re-establishment of H2AK119ub1 was observed in differentiated ΔSX*Eed−/−* ES cells and was therefore independent of silencing. In contrast, *Xist* induction starting at day 4 in differentiation resulted in focal H2AK119ub1 staining in a low percentage of cells (16, 19, 22 and 17%). Moreover, when *Xist* expression was turned off by withdrawing doxycycline from the medium, H2AK119ub1 was lost from the chromosome at all time points examined in ES cell differentiation (Figure 6, and data not shown). We conclude that *Xist* expression establishes a chromosomal memory independent of *Eed* and gene silencing, suggesting a possible explanation for maintenance of X inactivation in *Eed* deficient embryonic cells.

**Discussion**

**PRC1 recruitment in X inactivation is strictly dependent on *Xist* RNA**

Using an inducible *Xist* expression system, we have analysed the recruitment of PRC1 function in X inactivation. We find that *Xist* recruits Ring1b and concomitant H2AK119ub1 independent of transcriptional silencing. Recruitment of Polycomb complexes has been associated with heritable silencing of genes (Ringrose and Paro, 2004). We find that PRC1 and PRC2 also associate in the absence of gene silencing with the chromosome expressing *Xist*. Polycomb recruitment alone is therefore not sufficient for transcriptional repression in X inactivation. This is consistent with data in the fly, where loading of PcG proteins onto Polycomb response elements (PREs) precedes the silencing of developmental control genes (Orlando *et al.*, 1998). Polycomb binding and H3K27me3 on PREs have been observed independent of silencing (Ringrose *et al.*, 2004) and loss of dRING function leads to derepression of genes despite the persistence of H3K27me3 (Wang *et al.*, 2004). Alternatively, coordinate loading of PcG complexes on the promoter and a PRE could be required for repression. It is tempting to speculate that in X inactivation *Xist* repeat A acts as a signal to repress gene expression, thereby enabling recruitment of promoters to the PcG territory of the chromosome. In ΔSX ES cells, promoters would then be predicted not to associate with the repressive PcG territory established by the silencing deficient *Xist* RNA lacking repeat A.
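Pulling the induction schemes and the four cell lines together, the following summary sketch (ours, not the authors'; "dSX" is an ASCII stand-in for ΔSX) lays the reported percentages side by side:

```python
# Summary sketch (ours) of the re-induction experiment; percentages of nuclei
# with focal H2AK119ub1 are taken from the text ("dSX" stands for deltaSX).
lines      = ["clone 36", "36Eed-/- clone 1", "36Eed-/- clone 2", "dSXEed-/-"]
reinduced  = [70, 59, 60, 31]   # Xist on days 0-4, off days 4-8, on again days 8-12
continuous = [69, 61, 58, 41]   # Xist on throughout differentiation
late_only  = [16, 19, 22, 17]   # Xist first induced at day 4

for name, re_, cont, late in zip(lines, reinduced, continuous, late_only):
    print(f"{name:17s} re-induced {re_:2d}%  continuous {cont:2d}%  late-only {late:2d}%")

# In every genotype, re-induction roughly matches continuous induction and far
# exceeds late-only induction, including the silencing-deficient, PRC2-deficient
# dSXEed-/- line: the memory requires neither PRC2 nor gene silencing.
```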
We find that PRC1 recruitment is dependent on *Xist* RNA localisation and is reversible throughout ES cell differentiation when *Xist* is turned off. From this we conclude that PRC1 is not stable once loaded onto the chromosome, but depends on *Xist* and a chromosomal memory. Consistent with this, dynamic turnover of PRC1 has been observed on chromatin in the fly (Ringrose *et al.*, 2004; Ficz *et al.*, 2005). In striking contrast to the fly, where noncoding RNA transcription over PREs has been associated with gene activation (Schmitt *et al.*, 2005; Sanchez-Elsner *et al.*, 2006), in X inactivation the noncoding *Xist* RNA is associated with the repressed state. This suggests different mechanisms for Polycomb loading in the fly and in X inactivation and demonstrates a novel, strictly RNA dependent recruitment mode for mammalian PRC1.

**H2AK119ub1 activity of PRC1 does not require Mph1 or Mph2**

Using *Eed*-deficient ES cells, we show that the recruitment of the PRC1 core proteins Mph1 and Mph2 by *Xist* is dependent on *Eed*. This observation is consistent with the idea that PRC2 has a recruitment function for PRC1 components. A fly PRC1 core complex has been reconstituted containing the four components Psc, Pc, Ph and dRing (Francis *et al.*, 2001). A similar composition has been proposed for mammalian PRC1 like complexes based on purification and reconstitution experiments (Lavigne *et al.*, 2004; Wang *et al.*, 2004). However, we observe in *Eed* deficient ES cells that Ring1b is not only recruited by *Xist* in the absence of the PRC1 core components Mph1 and Mph2 but also appears to be functional, as demonstrated by the concomitant ubiquitination of lysine 119 on histone H2A. Thus, in X inactivation Ring1b is either functional alone and can be recruited independently of other PRC1 members by *Xist*, or is part of a distinct complex that is yet to be identified (Ogawa *et al.*, 2002; Dou *et al.*, 2005; Isono *et al.*, 2005b). Based on our data we therefore propose two mechanisms for recruitment of PRC1 function by *Xist* in X inactivation (Figure 6B). A PRC2 dependent mode involves the binding of the Polycomb chromodomain to the H3K27me3 mark and operates via Mph1 or Mph2. This is predicted from biochemical evidence that H3K27me3 acts as an affinity signal recognised by the chromodomain of mammalian homologues of Polycomb (Fischle *et al.*, 2003; Min *et al.*, 2003). Our data provide evidence for a second mode of recruitment for PRC1 function. In the absence of PRC2 function, *Xist* can recruit Ring1b independent of the PRC1 core proteins Mph1 or Mph2. Both recruitment modes for PRC1 by *Xist* act synergistically to mediate H2AK119ub1 in the initiation of X inactivation.

**Ring1b and PRC2 are regulated by a chromosomal memory**

Establishment of H2AK119ub1 is restricted to an early time window in ES cell differentiation, such that little H2AK119ub1 is imposed if *Xist* is induced at late time points in ES cell differentiation. Thus, *Xist* expression during an early window in differentiation establishes a chromosomal memory that in differentiated cells is required for H2AK119ub1. This memory is established at the time when X inactivation becomes irreversible and is stably maintained independent of *Xist* expression. To date, the molecular nature of this chromosomal memory is unknown. Our data demonstrate that the memory regulating the imposition of H2AK119ub1 and H3K27me3 in differentiated cells is independent of silencing.
Recruitment of both PRC1 and PRC2 is also dependent on *Xist* RNA localisation and both histone marks are lost from the chromosome when *Xist* is turned off. Hence, H2AK119ub1 and H3K27me3 are reversible modifications and depend on *Xist* expression. We conclude that PRC1 and PRC2 are not stably maintained on the Xi throughout ES cell differentiation and can be excluded as integral components of the memory. Yet, the recruitment of PcG proteins early in X inactivation is consistent with a role in the establishment of a special chromatin structure that functions as chromosomal memory. Importantly, our data demonstrate that once established this chromatin structure is self-perpetuating and stable in the absence of *Xist*, PRC1 and PRC2. We show that a chromosomal memory regulating H2AK119ub1 is established independent of PRC2, Mph1 and Mph2.

**Eed and H3K27me3 are not crucial for X inactivation in embryonic cells**

Disruption of *Eed* in ES cells caused a lack of Eed and Ezh2 protein and reduced levels of Suz12, consistent with earlier reports (Pasini *et al.*, 2004; Montgomery *et al.*, 2005). In the absence of Eed, the levels of Ezh2 protein, which contains the catalytically active SET domain required for PRC2 histone methylase function, are reduced below detection. This could be the result of impaired translation or enhanced turnover of Ezh2 protein in the absence of Eed. In support of this notion, the disruption of PRC2 function in *Eed* deficient ES cells is clearly demonstrated by the loss of H3K27me3. Interestingly, we find that *Xist* RNA can recruit Suz12 independent of a functional PRC2 complex. Suz12 is a core component of the biochemically purified PRC2 complex, suggesting that PRC2 might be recruited at least in part via Suz12 in X inactivation. Consistent with this, Suz12 also has roles in position effect variegation in the fly and thus can act independent of PRC2 (Birve *et al.*, 2001). Western, immunofluorescence and mass spectrometric analyses show that disruption of PRC2 function leads to a specific loss of di- and tri- but not mono-methylation of H3K27 *in vivo* without affecting global levels of H3K9 methylation. This finding is consistent with and extends data from *Suz12* deficient embryos (Pasini *et al.*, 2004). Notably, the H3K27me1 marks at pericentric heterochromatin are not affected by loss of PRC2 function, consistent with an independent regulation.

In ES cells, *Xist* expression leads to rapid establishment of H3K27me3 along the chromosome, which requires PRC2 function. From mass spectrometric data we obtained a rough estimate that induction of *Xist* causes an approximately seven-fold increase in H3K27me3. Such an increase would require that 90% of the nucleosomes of the *Xist* expressing chromosome are tri-methylated on H3 lysine 27, compared to the 14% total nuclear average (a roughly 6.4-fold local enrichment over that average). Given that in bulk chromatin 60% of histone H3 is di-methylated on lysine 27, the effect of *Xist* is a shift from di- to tri-methyl marks that could provide increased affinity for PRC1. Our observation that recruitment of Mph1 and Mph2 by *Xist* is abolished in the absence of PRC2 supports this view.

Reactivation of the paternal Xi was observed previously in differentiating trophoblast stem cells in *Eed* deficient embryos, indicating a role for PRC2 in maintenance of X inactivation (Wang *et al.*, 2001).
However, maintenance of the Xi in trophoblast stem cells and extraembryonic endoderm is not affected by a mutation in *Eed* (Kalantry *et al.*, 2006). Imprinted X inactivation is initiated very early in embryogenesis and a maternal contribution of Eed could possibly function early in the initiation of imprinted X inactivation in *Eed* mutant embryos. Using *Eed* deficient ES cells, we can rule out a requirement for PRC2 function at the initiation of *Xist* mediated silencing in embryonic cells. *Xist* expression in ES cells lacking functional PRC2 fails to establish H3K27me3 and recruit Mph1 and Mph2. However, in the absence of *Eed*, stable X inactivation can still be achieved. This unexpected finding suggests that functionally redundant mechanisms compensate for the loss of PRC2 function to maintain *Xist* mediated silencing in ES cell differentiation. PRC1 and PRC2 function independently in gene regulation, as indicated by the requirement of both *Eed* and *Ring1b* for embryonic development (Wang *et al.*, 2002; Voncken *et al.*, 2003). Our data show that Ring1b can be recruited by *Xist* independent of PRC2. This recruitment of PRC1 function provides a likely explanation for the lack of an obvious defect on Xi maintenance in *Eed* deficient embryonic cells. This is in contrast to PRC2 action in the regulation of other genes, where a recruitment function of PRC2 is essential (Zhang *et al.*, 2004). The requirement of PRC2 for recruitment of some PRC1 components is also observed in X inactivation, as *Xist* is unable to recruit Mph1 and Mph2 in the absence of Eed. In conclusion, we find that *Xist* can establish a chromatin structure that mediates a chromosomal memory in X inactivation independent of PRC2, suggesting the masking of a more dramatic defect in the maintenance of X inactivation in *Eed* deficient cells by a PRC2 independent mechanism for recruitment of PRC1 function by *Xist*. Future studies will be directed to establish the interplay between transcriptional silencing and the PcG complex mediated chromosomal memory during X inactivation.

**Materials and methods**

**Cell culture and generation of ES cell lines**

ES cells were cultured as described previously (Wutz and Jaenisch, 2000). *Xist* expression was induced by the addition of 1 µg/ml of doxycycline. Differentiation medium contained 100 nM all-trans-retinoic acid and no LIF. Embryoid bodies were generated by the hanging drop method in medium without LIF. After 2 days, aggregates were pooled and cultured in suspension for 3 days and subsequently plated on gelatin-coated culture dishes for 3 weeks. Cell numbers were determined using a Casy 1 cell counter (Schaerfe System GmbH, Germany). For construction of the *Eed* targeting vector, a 12 kb *XhoI–ClaI* genomic fragment was subcloned from a BAC isolated from the RPCI-22 129 mouse BAC library (CHORI). The 2.8 kb *SacI–EcoRI* fragment containing three exons coding for WD40 domains 1 and 2 of the Eed protein was replaced by a stop cassette containing the adenoviral splice acceptor and polyadenylation signal separated by a loxP-flanked hygromycin-thymidine kinase selection cassette. Finally, a diphtheria toxin A chain cassette was inserted for counter selection of random insertions (see Figure 1B). Targeted clones were identified after selection with Hygromycin B (130 µg/ml) by a 12 kb band in Southern analysis of EcoRV digested DNA using probe pEed (the wild-type band runs at 23 kb). The targeting frequency was between 17 and 37%.
After Cre recombinase mediated excision of the selection cassette, the second allele was targeted using the same strategy, yielding *Eed*−/− cells. For pCAG-EGFP-Eed-IREShygPA, the short Eed isoform, corresponding to the human isoform 3 (Kuzmichev *et al.*, 2004), was tagged with EGFP at the N-terminus and cloned into pCAG-IREShygPA. 36*Eed*−/− clone 2 ES cells were electroporated with 50 µg of pCAG-EGFP-Eed-IREShygPA to generate 36*Eed*TG cells.

**Immunostaining and RNA FISH**

ES cells were attached to poly-L-lysine coated coverslips or cytocentrifuged using a Cytospin 3 centrifuge (Thermo Shandon, USA). Differentiated cells were grown on Roboz slides (CellPoint Scientific, USA). Immunostaining was performed as described (Peters *et al.*, 2003; Kohlmaier *et al.*, 2004). Briefly, cells were fixed for 10 min at RT in 4% PFA in PBS, permeabilised for 5 min at RT in 0.1% Na citrate/0.1% Triton X-100, and blocked for 60 min at RT in PBS containing 5% (wt/vol) BSA and 0.1% Tween-20. For H2AK119ub1 immunostaining, cells were pre-extracted in 100 mM NaCl, 300 mM sucrose, 3 mM MgCl₂, 10 mM Pipes pH 6.8 and 0.5% Triton for 2 min at RT before fixation. RNA FISH probes were generated by random priming (Stratagene, USA) using Cy3-dCTP (Amersham). After immunostaining, cells were fixed in 4% PFA in PBS for 10 min at 4°C, dehydrated, hybridised and washed as described (Wutz and Jaenisch, 2000). Images were obtained using a fluorescence microscope (Zeiss Axioplan) equipped with a CCD camera and the MetaMorph image analysis software (Universal Imaging, USA).

**RNA and protein analysis**

Northern analysis was performed using 20 µg of RNA (Trizol; Invitrogen) as described previously (Wutz and Jaenisch, 2000). Antibodies for histone lysine methylation states and Western analysis were previously described (Peters *et al.*, 2003; Kohlmaier *et al.*, 2004) and the following dilutions were used (immunostaining/Western blot): α-H3K9m1 (#4858, 1:1000/1:500); α-H3K9m2 (#4677, 1:1000/1:1000); α-H3K9m3 (#4861, 1:750/1:1000); α-H3K27m1 (#4835, 1:6000/1:1000); α-H3K27m2 (#8841, 1:1000/1:2000); α-H3K27m3 (#6253, 1:1000/1:7000); α-H4K20m1 (#0077, 1:500/1:3000); α-H4K20m2 (#0080, 1:1000/1:1000); α-H4K20m3 (#0083, 1:3000/1:3000). Additional antibodies were as follows: α-H2AK119ub1 (α-ubiquityl-histone H2A, clone E6C5; 05-678, Upstate Biotechnology, Lake Placid, New York, USA), 1:50/1:400; α-Suz12 (#07-379; Upstate), 1:1000/1:1000; α-Eed (rabbit polyclonal antiserum, AKS and AW, unpublished results), 1:1000; α-Ezh2 (rabbit polyclonal antiserum, M Busslinger, unpublished results), 1:1000/1:1000; α-Ring1b (Atsuta *et al.*, 2001), 1:100 for IF; α-Mph1 (Isono *et al.*, 2005a), 1:5 for IF; α-Mph2 (Isono *et al.*, 2005a), 1:100 for IF; α-hnRNP A1 (4B10 mouse monoclonal antiserum), 1:1000 for Western. Secondary antibodies: Alexa Fluor 488 goat anti-rabbit IgG (H+L) (A-11034) and Alexa Fluor 488 goat anti-mouse IgG (H+L), both at 1:500 (Molecular Probes, USA); HRP-conjugated AffiniPure goat anti-rabbit IgG (H+L), 1:10000, and HRP-conjugated AffiniPure goat anti-mouse IgG (H+L), 1:5000 (Jackson ImmunoResearch Laboratories, Inc., USA).

**Quantitative PCR expression analysis**

Random primed cDNA was generated from 10 µg total RNA from clone 36 and *Eed*−/− ES cells using the Superscript II reverse transcription kit (Invitrogen).
Quantitative PCR using the Taqman method (Applied Biosystems) for *Tk1* (primers: GCAACAGCCTTCTCCACACATGA, CGCGGACATGCAGGGCT; probe: CGGAACACCATGGACCATTTGC), *Npm1* (TGATGAGAAAGATGCAGACTCTGAA, CCTTCAGGCAGACATCGCT; AGGAGGAGCCTAAAACCTTCTAGGATCTC), *Igfbp2* (CGCAAAGCGCGCAA, TGCCACTACACCCTCAGCTG; AGCGTAATGAGCTCGACAACCTTGC), *Cct4* (CTTACCGAGCACCGCACA, GCTTTGGCCGGCGAA; CCAGGCCCAATCGCCTTACCAAT) and *Gapdh* (CATGGCCTTCCGTGTTCCTA, TGTCTATCATCTTGGCAGGTTTCT; TCGGTATGACTTGCAGTGGCCGC) was performed in triplicate on an ABI PRISM 7000 detection machine as described (Pauler *et al.*, 2005). Quantification was achieved by the standard curve method using serial dilutions of cDNA generated from uninduced ES cells at day 8 of differentiation. Samples were normalised to *Gapdh* and the expression levels of uninduced clone 36 ES cells at day 8 of differentiation were set to 100 for each gene.
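For illustration only, the standard curve quantification and *Gapdh* normalisation described above can be sketched as follows. This is a minimal sketch of the general method, not software used in the study; all Ct values, dilution factors and function names are hypothetical.

```python
# Sketch of relative quantification by the standard curve method:
# fit Ct against log10(input) for serial dilutions of reference cDNA,
# interpolate sample quantities, normalise to Gapdh, and scale the
# reference sample to 100. All numbers below are hypothetical.
import numpy as np

def fit_standard_curve(dilutions, ct_values):
    """Fit Ct = slope * log10(relative input) + intercept."""
    slope, intercept = np.polyfit(np.log10(dilutions), ct_values, 1)
    return slope, intercept

def quantity_from_ct(ct, slope, intercept):
    """Invert the standard curve to get a relative input quantity."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical standard curves from a 5-fold dilution series.
dilutions = np.array([1.0, 0.2, 0.04, 0.008])
gene_curve = fit_standard_curve(dilutions, np.array([22.1, 24.5, 26.9, 29.3]))
gapdh_curve = fit_standard_curve(dilutions, np.array([18.0, 20.3, 22.7, 25.1]))

def normalised(gene_ct, gapdh_ct):
    """Target-gene quantity divided by the Gapdh quantity of the same sample."""
    return (quantity_from_ct(gene_ct, *gene_curve)
            / quantity_from_ct(gapdh_ct, *gapdh_curve))

# Hypothetical mean Cts of triplicate reactions.
sample = normalised(gene_ct=25.0, gapdh_ct=20.0)
reference = normalised(gene_ct=23.5, gapdh_ct=19.8)  # uninduced day 8 cells

print(f"Relative expression (reference = 100): {100 * sample / reference:.1f}")
```

Refitting a separate curve for each gene means differences in amplification efficiency between the target gene and *Gapdh* are absorbed into the per-gene slopes rather than biasing the normalised ratio.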
**Nuclear extracts and mass spectrometry**

ES cell cultures were harvested by trypsinisation and feeders were removed by plating on cell culture dishes twice for 30 min. Nuclear extracts were prepared as described (Peters *et al.*, 2003). For mass spectrometry, 20 μg of nuclear extracts were separated by 15% SDS–PAGE and bands containing histones H3 and H4 were excised after Coomassie staining. Processing of the samples and quantitative mass spectrometric analyses were carried out as described (Peters *et al.*, 2003).

**Supplementary data**

Supplementary data are available at *The EMBO Journal* Online.

**Acknowledgements**

We thank Gideon Dreyfuss, Arie Otte and Meinrad Busslinger for kindly providing antibodies, and Leonie Ringrose for critically reading the manuscript. This research is supported by the IMP through Boehringer Ingelheim and by grants from the GEN-AU initiative of the Austrian Ministry of Education, Science and Culture, and the European Union 6th framework program Epigenome Network of Excellence.

**References**

Atsuta T, Fujimura S, Moriya H, Vidal M, Akasaka T, Koseki H (2001) Production of monoclonal antibodies against mammalian Ring1B proteins. *Hybridoma* **20**: 43–46

Birve A, Sengupta AK, Beuchle D, Larsson J, Kennison JA, Rasmuson-Lestander A, Muller J (2001) Su(z)12, a novel *Drosophila* Polycomb group gene that is conserved in vertebrates and plants. *Development* **128**: 3371–3379

Borsani G, Tonlorenzi R, Simmler MC, Dandolo L, Arnaud D, Capra V, Grompe M, Pizzuti A, Muzny D, Lawrence C et al (1991) Characterization of a murine gene expressed from the inactive X chromosome. *Nature* **351**: 325–329

Brockdorff N, Ashworth A, Kay GF, Cooper P, Smith S, McCabe VM, Norris DP, Penny GD, Patel D, Rastan S (1991) Conservation of position and exclusive expression of mouse *Xist* from the inactive X chromosome. *Nature* **351**: 329–331

Brown CJ, Ballabio A, Rupert JL, Lafreniere RG, Grompe M, Tonlorenzi R, Willard HF (1991a) A gene from the region of the human X inactivation centre is expressed exclusively from the inactive X chromosome. *Nature* **349**: 38–44

Brown CJ, Lafreniere RG, Powers VE, Sebastio G, Ballabio A, Pettigrew AL, Ledbetter DH, Levy E, Craig IW, Willard HF (1991b) Localization of the X inactivation centre on the human X chromosome in Xq13. *Nature* **349**: 82–84

Cao R, Wang L, Wang H, Xia L, Erdjument-Bromage H, Tempst P, Jones RS, Zhang Y (2002) Role of histone H3 lysine 27 methylation in Polycomb-group silencing. *Science* **298**: 1039–1043

Csankovszki G, Nagy A, Jaenisch R (2001) Synergism of *Xist* RNA, DNA methylation, and histone hypoacetylation in maintaining X chromosome inactivation. *J Cell Biol* **153**: 773–784

Czermin B, Melfi R, McCabe D, Seitz V, Imhof A, Pirrotta V (2002) *Drosophila* enhancer of Zeste/ESC complexes have a histone H3 methyltransferase activity that marks chromosomal Polycomb sites. *Cell* **111**: 185–196

de Napoles M, Mermoud JE, Wakao R, Tang YA, Endoh M, Appanah R, Nesterova TB, Silva J, Otte AP, Vidal M, Koseki H, Brockdorff N (2004) Polycomb group proteins Ring1A/B link ubiquitylation of histone H2A to heritable gene silencing and X inactivation. *Dev Cell* **7**: 663–676

Dou Y, Milne TA, Tackett AJ, Smith ER, Fukuda A, Wysocka J, Allis CD, Chait BT, Hess JL, Roeder RG (2005) Physical association and coordinate function of the H3 K4 methyltransferase MLL1 and the H4 K16 acetyltransferase MOF. *Cell* **121**: 873–885

Fang J, Chen T, Chadwick B, Li E, Zhang Y (2004) Ring1b-mediated H2A ubiquitination associates with inactive X chromosomes and is involved in initiation of X inactivation. *J Biol Chem* **279**: 52812–52815

Ficz G, Heintzmann R, Arndt-Jovin DJ (2005) Polycomb group protein complexes exchange rapidly in living *Drosophila*. *Development* **132**: 3963–3976

Fischle W, Wang Y, Jacobs SA, Kim Y, Allis CD, Khorasanizadeh S (2003) Molecular basis for the discrimination of repressive methyl-lysine marks in histone H3 by Polycomb and HP1 chromodomains. *Genes Dev* **17**: 1870–1881

Francis NJ, Saurin AJ, Shao Z, Kingston RE (2001) Reconstitution of a functional core polycomb repressive complex. *Mol Cell* **8**: 545–556

Hernandez-Munoz I, Lund AH, van der Stoop P, Boutsma E, Muijrers I, Verhoeven E, Nusinow DA, Panning B, Marahrens Y, van Lohuizen M (2005) Stable X chromosome inactivation involves the PRC1 Polycomb complex and requires histone MACROH2A1 and the CULLIN3/SPOP ubiquitin E3 ligase. *Proc Natl Acad Sci USA* **102**: 7635–7640

Huynh KD, Lee JT (2003) Inheritance of a pre-inactivated paternal X chromosome in early mouse embryos. *Nature* **426**: 857–862

Isono K, Fujimura Y, Shinga J, Yamaki M, O-Wang J, Takihara Y, Murahashi Y, Takada Y, Mizutani-Koseki Y, Koseki H (2005a) Mammalian polyhomeotic homologues Phc2 and Phc1 act in synergy to mediate polycomb repression of Hox genes. *Mol Cell Biol* **25**: 6694–6706

Isono K, Mizutani-Koseki Y, Komori T, Schmidt-Zachmann MS, Koseki H (2005b) Mammalian polycomb-mediated repression of Hox genes requires the essential spliceosomal protein SF3b1. *Genes Dev* **19**: 536–541

Kalantry S, Mills KC, Yee D, Otte AP, Panning B, Magnuson T (2006) The Polycomb group protein Eed protects the inactive X-chromosome from differentiation-induced reactivation. *Nat Cell Biol* **8**: 195–202

Kohlmaier A, Savarese F, Lachner M, Martens J, Jenuwein T, Wutz A (2004) A chromosomal memory triggered by *Xist* regulates histone methylation in X inactivation. *PLoS Biol* **2**: E171

Kuzmichev A, Jenuwein T, Tempst P, Reinberg D (2004) Different EZH2-containing complexes target methylation of histone H1 or nucleosomal histone H3. *Mol Cell* **14**: 183–193

Kuzmichev A, Nishioka K, Erdjument-Bromage H, Tempst P, Reinberg D (2002) Histone methyltransferase activity associated with a human multiprotein complex containing the Enhancer of Zeste protein. *Genes Dev* **16**: 2893–2905
Lavigne M, Francis NJ, King IF, Kingston RE (2004) Propagation of silencing: recruitment and repression of naive chromatin in trans by polycomb repressed chromatin. *Mol Cell* **13**: 415–425

Mak W, Baxter J, Silva J, Newall AE, Otte AP, Brockdorff N (2002) Mitotically stable association of polycomb group proteins Eed and Enx1 with the inactive X chromosome in trophoblast stem cells. *Curr Biol* **12**: 1016–1020

Mak W, Nesterova TB, de Napoles M, Appanah R, Yamanaka S, Otte AP, Brockdorff N (2004) Reactivation of the paternal X chromosome in early mouse embryos. *Science* **303**: 666–669

Min J, Zhang Y, Xu RM (2003) Structural basis for specific binding of Polycomb chromodomain to histone H3 methylated at Lys 27. *Genes Dev* **17**: 1823–1828

Montgomery ND, Yee D, Chen A, Kalantry S, Chamberlain SJ, Otte AP, Magnuson T (2005) The murine polycomb group protein Eed is required for global histone H3 lysine-27 methylation. *Curr Biol* **15**: 942–947

Muller J, Hart CM, Francis NJ, Vargas ML, Sengupta A, Wild B, Miller EL, O'Connor MB, Kingston RE, Simon JA (2002) Histone methyltransferase activity of a *Drosophila* Polycomb group repressor complex. *Cell* **111**: 197–208

Ogawa H, Ishiguro K, Gaubatz S, Livingston DM, Nakatani Y (2002) A complex with chromatin modifiers that occupies E2F- and Myc-responsive genes in G0 cells. *Science* **296**: 1132–1136

Okamoto I, Otte AP, Allis CD, Reinberg D, Heard E (2004) Epigenetic dynamics of imprinted X inactivation during early mouse development. *Science* **303**: 644–649

Orlando V, Jane EP, Chinwalla V, Harte PJ, Paro R (1998) Binding of trithorax and Polycomb proteins to the bithorax complex: dynamic changes during early *Drosophila* embryogenesis. *EMBO J* **17**: 5141–5150

Pasini D, Bracken AP, Jensen MR, Lazzerini Denchi E, Helin K (2004) Suz12 is essential for mouse development and for EZH2 histone methyltransferase activity. *EMBO J* **23**: 4061–4071

Pauler FM, Stricker SH, Warczok KE, Barlow DP (2005) Long-range DNase I hypersensitivity mapping reveals the imprinted Igf2r and Air promoters share cis-regulatory elements. *Genome Res* **15**: 1379–1387

Peters AH, Kubicek S, Mechtler K, O'Sullivan RJ, Derijck AA, Perez-Burgos L, Kohlmaier A, Opravil S, Tachibana M, Shinkai Y, Martens JH, Jenuwein T (2003) Partitioning and plasticity of repressive histone methylation states in mammalian chromatin. *Mol Cell* **12**: 1577–1589

Plath K, Fang J, Mlynarczyk-Evans SK, Cao R, Worringer KA, Wang H, de la Cruz CC, Otte AP, Panning B, Zhang Y (2003) Role of histone H3 lysine 27 methylation in X inactivation. *Science* **300**: 131–135

Plath K, Mlynarczyk-Evans S, Nusinow DA, Panning B (2002) Xist RNA and the mechanism of X chromosome inactivation. *Annu Rev Genet* **36**: 233–278

Ringrose L, Ehret H, Paro R (2004) Distinct contributions of histone H3 lysine 9 and 27 methylation to locus-specific stability of polycomb complexes. *Mol Cell* **16**: 641–653

Ringrose L, Paro R (2004) Epigenetic regulation of cellular memory by the Polycomb and Trithorax group proteins. *Annu Rev Genet* **38**: 413–443

Sanchez-Elsner T, Gou D, Kremmer E, Sauer F (2006) Noncoding RNAs of Trithorax response elements recruit *Drosophila* Ash1 to Ultrabithorax. *Science* **311**: 1118–1123

Schmitt S, Prestel M, Paro R (2005) Intergenic transcription through a polycomb group response element counteracts silencing. *Genes Dev* **19**: 697–708
Silva J, Mak W, Zvetkova I, Appanah R, Nesterova TB, Webster Z, Peters AH, Jenuwein T, Otte AP, Brockdorff N (2003) Establishment of histone H3 methylation on the inactive X chromosome requires transient recruitment of Eed-Enx1 polycomb group complexes. *Dev Cell* **4**: 481–495

Voncken JW, Roelen BA, Roefs M, de Vries S, Verhoeven E, Marino S, Deschamps J, van Lohuizen M (2003) Rnf2 (Ring1b) deficiency causes gastrulation arrest and cell cycle inhibition. *Proc Natl Acad Sci USA* **100**: 2468–2473

Wang H, Wang L, Erdjument-Bromage H, Vidal M, Tempst P, Jones RS, Zhang Y (2004) Role of histone H2A ubiquitination in Polycomb silencing. *Nature* **431**: 873–878

Wang J, Mager J, Chen Y, Schneider E, Cross JC, Nagy A, Magnuson T (2001) Imprinted X inactivation maintained by a mouse Polycomb group gene. *Nat Genet* **28**: 371–375

Wang J, Mager J, Schneider E, Magnuson T (2002) The mouse PcG gene eed is required for Hox gene repression and extraembryonic development. *Mamm Genome* **13**: 493–503

Wutz A, Jaenisch R (2000) A shift from reversible to irreversible X inactivation is triggered during ES cell differentiation. *Mol Cell* **5**: 695–705

Wutz A, Rasmussen TP, Jaenisch R (2002) Chromosomal silencing and localization are mediated by different domains of Xist RNA. *Nat Genet* **30**: 167–174

Zhang Y, Cao R, Wang L, Jones RS (2004) Mechanism of Polycomb group gene silencing. *Cold Spring Harb Symp Quant Biol* **69**: 309–317
they must appear in a § 1 plaintiff's complaint. As the Court noted, in the antitrust context:

[a] statement of parallel conduct, even conduct consciously undertaken, needs some setting suggesting the agreement necessary to make out a § 1 claim; without that further circumstance pointing toward a meeting of the minds, an account of a defendant's commercial efforts stays in neutral territory. An allegation of parallel conduct is thus much like a naked assertion of conspiracy in a § 1 complaint: it gets the complaint close to stating a claim, but without some further factual enhancement it stops short of the line between possibility and plausibility of "entitle[ment] to relief."

*Twombly*, 127 S.Ct. at 1966 (citation omitted). Rather, the Court held that "stating such a claim requires a complaint with enough factual matter (taken as true) to suggest that an agreement was made." *Id.* at 1965.

2. Assessing the SCAC After *Twombly*

At its heart, the SCAC alleges that Defendants imposed the same price and use restrictions on their sale of Internet Music to make that means of delivery of Digital Music less attractive to consumers, thereby buoying the prices of CDs.⁸ Plaintiffs advance essentially three arguments to support an inference that Defendants' parallel conduct resulted from an agreement: (a) that Defendants' creation of and participation in the joint ventures makes plausible the inference that their subsequent parallel conduct was the result of an agreement; (b) that further factors -- acts against Defendants' economic self-interests, motive to conspire, suspicious price increases, Defendants' "antitrust record" and opportunities to conspire through the RIAA (see Pls.' Opp'n 9-12) -- indicate that Defendants' parallel conduct resulted from agreement; and (c) that certain economic indicators -- market concentration and high barriers to market entry -- are sufficient to ground a § 1 conspiracy (see id. at 8-9, 11). I discuss each individually below and, affording Plaintiffs every reasonable inference, see Zinermon, 494 U.S. at 118, conclude that the further facts alleged by Plaintiffs, considered alone and collectively, do not place Defendants' conduct "in a context that raises a suggestion of a preceding agreement." Twombly, 127 S.Ct. at 1966. Therefore, Plaintiffs fail to state a claim for relief under § 1 of the Sherman Act and Count One of the SCAC must be DISMISSED.

---

⁸ Plaintiffs do not argue that the joint ventures themselves violate the antitrust laws: "[I]t is not the existence or creation of these joint ventures that form the basis of the Plaintiffs' allegations. Rather, Plaintiffs allege that Defendants . . . used those ventures as a means to implement their anticompetitive agreements." Pls.' Supp. Opp'n 10; see also *Copperweld Corp. v. Independence Tube Corp.*, 467 U.S. 752, 768 (1984) (noting that joint ventures which "hold the promise of increasing a firm's efficiency and enabling it to compete more effectively" are reviewed under the rule of reason).

a. The Joint Ventures

Because Plaintiffs do not challenge the legality of the joint ventures themselves, it is somewhat unclear how they contend those ventures support an inference of agreement. They appear to argue that the creation and operation of the joint ventures yields an inference of agreement because those ventures were mere sham organizations designed solely to provide a forum in which to discuss and agree to the terms of the later agreement.
To begin, the bald allegation that the joint ventures were shams is conclusory and implausible. It ignores the context in which those entities were created: an environment of widespread unauthorized downloading of Internet Music. (See, e.g., Almeida Decl. Ex. B (Bulcao Compl.) ¶ 37 ("The distribution of digital music exploded in the late 1990s with the emergence of Napster, the most popular online music service (which had tens of millions of users), Kazaa and other services offering free peer to peer file sharing, i.e., the ability of one person to share Online Music with anyone else via a website . . . . Napster initially provided file sharing for free . . . .").)⁹ As a result of that unauthorized downloading, "the major recording companies that control the copyrights to most popular music [were] generally unwilling to license their music for online sale except in protected formats." (See id. Ex. C (Tucker Compl.) ¶¶ 33-34.)¹⁰ Viewed in that context, each reason offered by Plaintiffs to support their sham allegation has an entirely reasonable independent justification: "unpopular" use restrictions and compromise in the collaboration's pricing structure are each consistent with a collaborative effort to address widespread music piracy. In the absence of any formal veil-piercing allegations and without challenging the legality of those joint ventures under the antitrust laws, Plaintiffs cannot now call into question their legitimacy simply by describing conduct consistent with rational business decisions. For that reason alone, I could decline to infer that the joint ventures were vehicles to create an antitrust conspiracy.

---

⁹ I may consider the Bulcao complaint as a predecessor to the SCAC. See United States v. GAF Corp., 928 F.2d 1253, 1259 (2d Cir. 1991) ("[T]he law is quite clear that superseded pleadings in civil cases may constitute admissions of party opponents, admissible in the case in which they were originally filed, as well as any subsequent litigation involving that party." (citing United States v. McKeon, 738 F.2d 26, 31 (2d Cir. 1984))).

¹⁰ I am permitted to take judicial notice of the Tucker complaint under Rule 201(b) of the Federal Rules of Evidence. See Kramer v. Time Warner Inc., 937 F.2d 767, 773 (2d Cir. 1991).

There is a further reason, however, not to draw such a negative inference. It is common sense that some level of information sharing must inevitably occur in the operation of a joint venture. As Judge Marilyn Hall Patel recently observed in a passage upon which Plaintiffs rely, "even a naif must realize that in forming and operating a joint venture, [record label] representatives must necessarily meet and discuss pricing and licensing." *In re Napster, Inc. Copyright Litig.*, 191 F. Supp. 2d 1087, 1109 (N.D. Cal. 2002). Judge Patel drew a negative inference from the possibility of such communication, allowing further discovery into Napster's allegation that the joint ventures themselves violated the antitrust laws. *See id.* at 1108-10. Of course, Plaintiffs offer no direct challenge to the joint ventures here. This situation is, therefore, more like the situation in *Twombly*, where the Supreme Court declined to draw a negative inference from allegations of information sharing that resulted from defendants' participation in a concededly legal industry trade group. *See* 127 S.Ct. at 1971 n.12.
It is similarly unwarranted to draw a negative inference from allegations involving the unchallenged collaboration between and among Defendants.¹¹

A more subtle argument could be made that a later illegal tacit agreement can be inferred from the fact of Defendants' explicit prior agreement with materially the same terms, reached in the context of the joint ventures. What scarce authority there is on this issue -- the parties have cited no reported decision, and research has disclosed but one -- does not address the precise issue. See United States v. Nat'l Malleable & Steel Castings Co., Civ. No. 30,281, 1957 U.S. Dist. LEXIS 4209, 1957 Trade Cas. ¶ 68,890 (N.D. Ohio Nov. 26, 1957), aff'd, 358 U.S. 38 (1958) (mem.). In the Steel Castings case, the court confronted a price-fixing conspiracy that was alleged to have existed after defendants discontinued a trust agreement among themselves. See id. at *10-12. Though its legality was unchallenged, all appear to have agreed that the prior trust agreement was discontinued because it would have been considered illegal under then-recent changes in the law. See id. at *12. The court refused to conclude that the prior trust agreement was illegal; the court further refused to conclude that the prior agreement had "ended only in its outward manifestations" based on certain economic evidence and other testimony about the market in question. See id. at *19-20.¹²

I conclude that an inference of subsequent agreement based on prior, unchallenged explicit agreement is unreasonable. By not challenging the legality of the joint ventures, Plaintiffs concede the possibility that Defendants, acting collectively through the joint ventures, were permissibly motivated in imposing the price and use restrictions in question. Cf. U.S. Dep't of Just. & Fed. Trade Comm'n, *Antitrust Guidelines for Collaborations Among Competitors* 5-6 (2000) (recognizing that joint ventures offer significant pro-competitive benefits). Conceding that possibility, it is just as likely that each Defendant was motivated on its own by the same permissible impulses that motivated the group as a collective, and Plaintiffs offer nothing now to create a reasonable inference that Defendants were not so motivated.¹³

---

¹¹ Plaintiffs' allegation that Defendants "conspired to mask their anticompetitive conduct by pretextually establishing rules . . . to prevent antitrust violations" (SCAC ¶ 90) is wholly conclusory. Further, I decline to infer that the joint ventures were designed to hide a true purpose of information sharing simply because Defendants structured them so as to comply with the antitrust laws.

¹² Some guidance may also be taken from the cases limiting the inference that may be drawn from allegations of antitrust conspiracy in other markets, see, e.g., Matsushita, 475 U.S. at 595-96, or from commentary recognizing the limits of allegations of earlier conspiracy in the same market, see 6 Areeda & Hovenkamp, *supra*, § 1421b(3). Of course, the inference of agreement is weaker here *a fortiori* because Plaintiffs do not claim that the joint ventures were illegal.

¹³ Inertia is yet another possible explanation for Defendants' parallel conduct that does not implicate prior agreement. As Areeda and Hovenkamp discuss, parallel conduct can just as easily result from convention, under which circumstances an inference of prior agreement is illogical. See 6 Areeda & Hovenkamp, *supra*, § 1410c (quoting and discussing D. Lewis, Convention: A Philosophical Study (1969)).
For these reasons, I reject as unreasonable Plaintiffs' invitation to infer that Defendants' subsequent adoption of parallel price and use restrictions resulted from agreement based on their creation of or membership in the unchallenged joint ventures.

b. Other Circumstantial Evidence

The other circumstances alleged by Plaintiffs are similarly equivocal and do not justify the inference that Defendants' parallel conduct resulted from agreement. For instance, Plaintiffs' allegation of a "motive to conspire" is nothing more than an assertion of interdependence. Plaintiffs contend that Defendants possessed such a motive because they understood that price competition among them would only drive down the price of Digital Music. (See SCAC ¶ 83.) There is no agreement, however, merely because an oligopolist charges an inflated price knowing (or even hoping) that other oligopolists will match his high price. Such is bald conscious parallelism, and, as the Supreme Court has stated, "parallel conduct, even conduct consciously undertaken," does not itself state an antitrust conspiracy. See Twombly, 127 S.Ct. at 1966; see also 6 Areeda & Hovenkamp, supra, § 1433 (surveying cases); id. § 1432a (concluding that no agreement exists "merely from recognized interdependence without the addition of any facilitators").

As noted above, the Supreme Court observed in Twombly that the mere participation in an industry trade association would not yield an inference of improper inter-firm communication. See 127 S.Ct. at 1971 n.12. Plaintiffs' allegation concerning the RIAA in this action suffers a similar fate. That fact is, at best, neutral and thus adds nothing that would "'nudge [plaintiffs'] claims across the line from conceivable to plausible.'" In re Elevator Antitrust Litig., 502 F.3d 47, 50 (2d Cir. 2007) (quoting Twombly, 127 S.Ct. at 1974).

Plaintiffs' allegation that Defendants' "antitrust record" supports an inference of agreement is even less helpful. First, Plaintiffs overstate the weight that should be afforded to such evidence. See 6 Areeda & Hovenkamp, supra, § 1421b(1) ("prior conspiracy is not alone probative of present collusion"); Richard A. Posner, Antitrust Law 79 (2d ed. 2001) (antitrust record of an industry is useful to help enforcement agencies target limited resources). Indeed, as one commentator has suggested, "caution is required lest the defendants' demonstrated moral infirmities distract the court's attention from the distinction between tacit coordination through mere interdependence and traditional conspiracy." 6 Areeda & Hovenkamp, *supra*, § 1421b(2). Still greater caution is required here, where the alleged "antitrust record" hardly illustrates any "demonstrated moral infirmities." As at least one other court has noted, mere investigation by governmental agencies does not show an "antitrust record." See *In re Graphics Processing Units Antitrust Litig.*, 527 F. Supp. 2d 1011, 1024 (N.D. Cal. 2007) (investigation alone "carries no weight in pleading an antitrust conspiracy claim").
Moreover, the investigations alleged here do not support the inference Plaintiffs urge: the DOJ closed its investigation after it "uncovered no evidence that the major record labels' joint ventures have harmed competition or consumers of digital music" (Almeida Decl. Ex. 5 (DOJ Press Release)), and the relevance of the New York State Attorney General's payola investigation is not apparent. Such an "antitrust record" cannot justify the already problematic inference that "once a criminal, always a criminal."

Plaintiffs' conclusion that the imposition of price and use restrictions was against Defendants' economic self-interests is implausible and, likewise, cannot support an inference of agreement. As discussed above, the imposition of use restrictions was, in fact, not contrary to Defendants' collective economic self-interests when viewed against the backdrop of widespread unauthorized music downloading. (See supra 15-17.) That observation remains true for each individual Defendant. Indeed, contrary to Plaintiffs' suggestion, the unpopularity of Defendants' Internet Music use restrictions with consumers is hardly reflective of each Defendant's economic self-interest. (See SCAC ¶ 76 ("Any one of the Defendants might have removed these unpopular DRM and gained additional market share and profits . . . .").) Surely, any Defendant who decided to give its product away for free would have been popular with consumers, but refusing to do so is hardly the economically irrational decision Plaintiffs portray it to be. Especially under the circumstances of widespread pirating, the fact that customers disliked each Defendant's attempt to secure its copyrights shows nothing.

Nor do Plaintiffs derive support from the fact that the price for Defendants' Internet Music converged at a higher price than that charged by the independent music labels. It is beyond peradventure that different products will fetch different prices, and, though the parties have not briefed the issue of what price disparity would be reasonable here, I need not decide that issue to conclude that the mere existence of a disparity does not itself bespeak an act against self-interest.

Finally, Plaintiffs' ambiguous allegation of price increases does not support an inference of agreement because that conduct, as alleged, is consistent with sequential parallelism.¹⁴ As commentators note, "[n]o additional fact, such as advance agreement, is needed to explain that process," and, therefore, "agreement is ordinarily more difficult to infer from sequential actions." 6 Areeda & Hovenkamp, *supra*, § 1425d. On the other hand, an inference of prior agreement may be warranted from simultaneous parallel price conduct where no actor had prior knowledge of or time to consider the other actors' conduct. See *Taxi Weekly, Inc. v. Metro. Taxicab Board of Trade, Inc.*, 539 F.2d 907, 911-12 (2d Cir. 1976) (inference of prior agreement justified where taxi fleet owners each called to cancel subscription to trade publication within one half hour of each other one day after meeting); see also 6 Areeda & Hovenkamp, *supra*, § 1425c. Here, Plaintiffs allege only that prices rose "in or about May 2005." Affording Plaintiffs every reasonable inference, *Twombly* nevertheless requires that they plead further facts tending to show conspiracy; "facts" such as these that are just as consistent with independent action are insufficient as a matter of law. See, e.g., *Matsushita*, 475 U.S.
at 588 ("[C]onduct as consistent with permissible competition as with illegal conspiracy does not, standing alone, support an \textsuperscript{14} Plaintiffs seek leave to amend SCAC Paragraph 99. Because I conclude that their proposed amendment would be futile, leave to amend is DENIED. See \textit{Foman v. Davis}, 371 U.S. 178, 182 (1962); \textit{Jin v. Metro. Life Ins. Co.}, 310 F.3d 84, 101 (2d Cir. 2002). inference of antitrust conspiracy . . . ." (citing Monsanto, 465 U.S. at 764)). c. Economic Indicators Finally, Plaintiffs suggest that the existence of certain economic indicators is sufficient to justify the inference that Defendants' parallel conduct resulted from agreement. (See Pls.' Opp'n 9.) For this proposition, they rely principally on the work of Judge Richard A. Posner, who describes an approach to identifying and punishing tacit antitrust collusion based solely on economic evidence. See Posner, supra, at 69. That approach posits two sets of economic data: indicators that "identify] those markets in which conditions are propitious for the emergence of collusion" and indicators that reveal "whether there really is collusive pricing in any of those markets." Id. The first set of indicators, while valuable to help enforcement agencies direct limited resources, see id. at 69, 79, do not show that the alleged conduct "stemmed from independent decision or from an agreement, tacit or express." Theatre Enters., 346 U.S. at 540. Without reaching the question whether economic evidence alone may be sufficient to support an inference of agreement.\textsuperscript{15} Plaintiffs' attempt to do so here fails on its own terms. In this case, Plaintiffs allege only facts that would identify the market for Digital Music as one "in which conditions are propitious for the emergence of collusion."\textsuperscript{16} (See SCAC ¶¶ 5 (high seller-side concentration), 47 (low buyer-side \textsuperscript{15} Judge Posner observes how judicial treatment of the "plus factors" analysis has often mistakenly demanded evidence of actual agreement: "[w]hat the cases seem to mean, however, and what some of them make explicit, is that there must be an explicit agreement based upon actual communication between the parties." See Posner, \textit{supra}, at 94 (emphasis in original, footnote omitted); see id. at 99-100 (discussing language in Monsanto that aggravates judicial confusion regarding proof of tacit agreement). He argues against that requirement: "[i]f the economic evidence presented in a case warrants an inference of collusive pricing, there is neither legal nor practical justification for requiring evidence that will support the further inference that the collusion was explicit rather than tacit." See \textit{id.} at 94. \textsuperscript{16} Plaintiffs allege that the Digital Music market is characterized by low buyer-side concentration because "there are thousands of class members." (See PIs.' Opp'n 9.) That assertion is undermined somewhat by the allegation elsewhere in the SCAC that Defendants sold largely to retailers (see SCAC ¶¶ 56-57, 79), a group as to whose size the SCAC is silent. Further, it is worth noting that SCAC's description of the market for Internet Music is inconsistent in some basic respects with the type of market Judge Posner describes as vulnerable to price collusion. 
That is to say, as it is described in the SCAC, the Internet Music market is not characterized by the relative inability of competitors to increase supply or decrease prices to challenge effectively the conspirators' market control, see Posner, *supra*, at 63-64, but rather as one where, for instance, eMusic was able to increase its "production" rapidly through relationships with "hundreds of independent record labels," sufficient even to surpass Defendants in the market (see SCAC ¶ 104).

For the foregoing reasons, I conclude that the SCAC does not allege the further facts required by Twombly to state a § 1 claim based upon parallel conduct. Count One is, therefore, DISMISSED.

B. The State Antitrust and Consumer Protection Count

1. The State Antitrust Claims

As noted above, Count Two of the SCAC asserts claims under the antitrust laws of the following 16 jurisdictions: Arizona, California, Washington, D.C., Iowa, Kansas, Maine, Michigan, Minnesota, Nevada, North Carolina, North Dakota, South Dakota, Tennessee, Vermont, West Virginia and Wisconsin.¹⁸ Defendants argue that those claims must be dismissed for the same reason as the federal claim. I agree.

---

¹⁷ It should be noted that the paragraphs in the SCAC invoked to support the claim that Plaintiffs have pleaded high barriers to market entry (see Pls.' Opp'n 11 (citing SCAC ¶¶ 55-57)) do not mention barriers to market entry. Paragraph 55 states: "Defendants have acted on grounds generally applicable to the entire Class, thereby making final injunctive relief or corresponding declaratory relief appropriate with respect to the Class as a whole." Paragraphs 56 and 57 duplicate each other, and state: "Defendants produce, license and distribute Digital Music, including Internet Music and CDs, to retailers for sale throughout the United States and in some instances sell Internet Music and CDs directly to consumers through Internet sites, record clubs and other entities which they own or control."

At its heart, *Twombly* is a decision about the Federal Rules of Civil Procedure: to survive a Rule 12(b)(6) motion to dismiss, a pleading must include allegations that make its claim for relief plausible, not merely possible. See 127 S.Ct. at 1974 (pleading must include "enough facts to state a claim to relief that is plausible on its face"). That purely procedural standard of pleading binds this Court's evaluation of state law claims, see, e.g., 5B Charles Alan Wright & Arthur R. Miller, *Federal Practice & Procedure* § 1357 (3d ed. 2008), and, while it
has caused much ado in the legal community, see, e.g., *Iqbal v. Hasty*, 490 F.3d 143, 155 (2d Cir. 2007) (finding "[c]onsiderable uncertainty concerning the standard for assessing the adequacy of pleadings" after *Twombly*), it did not alter the substantive federal law of antitrust: parallel conduct alone, even if consciously undertaken by individual firms, does not constitute a conspiracy to restrain trade in violation of § 1 of the Sherman Act. The question, therefore, is not whether the relevant state courts would decide *Twombly* the same way but rather whether the state's antitrust law incorporates the same substantive principle of federal antitrust law regarding conscious parallelism.

---

¹⁸ Paragraph 136(n) of the SCAC purports to assert claims under the "New York common law against restraints of trade." New York law includes an antitrust provision, called the Donnelly Act. See N.Y. Gen. Bus. Law § 340 et seq. (McKinney 2004). Nevertheless, Plaintiffs state that it is not their intention to bring any claim under that Act (see Pls.' Opp'n 29 n.29); and they do not discuss or even identify the distinct "common law against restraints of trade" upon which to base their claim. Therefore, to the extent the SCAC asserts claims under New York law apart from its claims under New York's Consumer Protection from Deceptive Acts and Practices provisions, see N.Y. Gen. Bus. Law § 349 (McKinney 2004), those claims are DISMISSED. In any event, the substantive provisions of the Donnelly Act mirror federal antitrust law, see, e.g., *State v. Mobil Oil Corp.*, 38 N.Y.2d 460, 463, 344 N.E.2d 357, 359 (1976); *Reading Int'l, Inc. v. Oaktree Capital Mgmt. LLC*, 317 F. Supp. 2d 301, 333 (S.D.N.Y. 2003), and, thus, any New York antitrust claims would be dismissed for the same reasons as were the federal and other state antitrust claims.

I answer this question in the affirmative for several reasons. First, some courts have explicitly adopted, as a matter of state substantive antitrust law, the federal approach to the question of whether consciously parallel conduct alone constitutes an antitrust conspiracy.¹⁹ Second, several states' antitrust statutes explicitly direct state courts to consider, as persuasive or controlling authority, federal court decisions construing the federal antitrust laws.²⁰ Third, even absent such
a statutory mandate, the courts in each jurisdiction overwhelmingly look to federal antitrust decisions to construe their own antitrust statutes.²¹

---

¹⁹ See *Aguilar v. Atl. Richfield Co.*, 25 Cal. 4th 826, 851-52, 24 P.3d 493, 511-12 (2001) ("Ambiguous evidence or inferences showing or implying conduct that is as consistent with permissible competition by independent actors as with unlawful conspiracy by colluding ones do not allow such a trier of fact [to find an unlawful conspiracy]." (citing Areeda & Hovenkamp)); *Pease v. Jasper Wyman & Son*, 00 Civ. 15, 2002 WL 1974081, at *11-12 (Me. Super. Ct. Aug. 9, 2002); *Desgranges Psychiatric Ctr., PC v. Blue Cross & Blue Shield of Mich.*, 124 Mich. App. 237, 244-45, 333 N.W.2d 562, 565 (Mich. Ct. App. 1983) ("A unilateral action, no matter how anticompetitive it may be, does not amount to a combination to restrain trade.") (citing Theatre Enters., 346 U.S. at 537); *Wrench v. Assoc. Milk Producers, Inc.*, No. 78-131, 1979 WL 30778, at *6 n.21 (Wis. Ct. App. 1979) ("We recognize that similar practices by competitors, i.e., 'conscious parallelism,' will sometimes support an inference of an agreement. Only where the pattern of action undertaken is inconsistent with the self-interest of the individual actors, were they acting alone, may an agreement be inferred solely from such parallel action." (quotation marks omitted)); *State v. Heritage Realty of Vermont*, 137 Vt. 425, 429-30, 407 A.2d 509, 511-12 (1979) ("Price uniformity among competitors does not, of itself, violate the antitrust laws; however, if it is the result of independently reached pricing decisions, the element of 'agreement' necessary to establish an illegal price-fixing combination or conspiracy is absent." (citations omitted)).

²⁰ See Ariz. Rev. Stat. § 44-1412 (2008) ("It is the intent of the legislature that in construing this article, the courts may use as a guide interpretations given by the federal courts to comparable federal antitrust statutes."); D.C. Code § 28-4515 (2008) ("It is the intent of the Council of the District of Columbia that in construing this chapter, a court of competent jurisdiction may use as a guide interpretations given by federal courts to comparable antitrust statutes."); Iowa Code § 553.2 (2008) ("This chapter shall be construed to complement and be harmonized with the applied laws of the United States which have the same or similar purpose as this chapter."); Mich. Comp. Laws 445.784(2) (2008) ("It is the intent of the legislature that in construing all sections of this act, the courts shall give due deference to interpretations given by the federal courts to comparable antitrust statutes . . . ."); Nev. Rev. Stat. § 598A.050 (2008) ("The provisions of this chapter shall be construed in harmony with prevailing judicial interpretations of the federal antitrust statutes."); S.D. Codified Laws § 37-1-22 (2008) ("It is the intent of the Legislature that in construing this chapter, the courts may use as a guide interpretations given by the federal or state courts to comparable antitrust statutes."); W.Va. Code § 47-18-16 (2008) ("This article shall be construed liberally and in harmony with ruling judicial interpretations of comparable federal antitrust statutes.").

²¹ The following authorities are organized by jurisdiction. Arizona: See *Johnson v. Pac. Lighting Land Co.*, 817 F.2d 601, 604 (9th Cir. 1987) (noting that "United States Supreme Court Sherman Act decisions [are] used to construe Arizona antitrust statute") (citing *Three Phoenix Co. v. Pace Indus., Inc.*, 133 Ariz. 113, 659 P.2d 1258, 1260 (1983)); see also *Brooks Fiber Commc'ns of Tucson, Inc. v. GST Tucson Lightwave, Inc.*, 990 F. Supp. 1124, 1130 (D. Ariz. 1997). California: See *Corwin v. Los Angeles Newspaper Serv. Bureau, Inc.*, 4 Cal. 3d 842, 852, 484 P.2d 953, 959 (1971) ("Sections 16720 and 16726 of the Cartwright Act were patterned after the Sherman Act and decisions under the latter act are applicable to the former."); see also *County of Tuolumne v. Sonora Cmty. Hosp.*, 236 F.3d 1148, 1160 (9th Cir. 2001) (dismissing state antitrust claims because "[t]he analysis under California's antitrust law mirrors the analysis under federal law because [it] was modeled after the Sherman Act" (citing *Mailand v. Burckle*, 20 Cal. 3d 367, 375, 572 P.2d 1142, 1147 (1978))). District of Columbia: See *WAKA LLC v. DC Kickball*, 517 F. Supp. 2d 245, 252 (D.D.C. 2007) (failure to state a claim under § 1 equated to failure to state a claim under D.C. antitrust provision); *GTE New Media Servs., Inc. v. Ameritech Corp.*, 21 F. Supp. 2d 27, 45 (D.D.C. 1998) ("The only difference between the two statutes is that the D.C. Code does not require an interstate nexus, but rather a connection within this jurisdiction."); *Mazanderan v. Independent Taxi Owners' Assoc., Inc.*, 700 F. Supp. 588, 591 n.9 (D.D.C. 1988) ("Analysis of plaintiff's state antitrust claim necessarily follows that of the federal claim . . . .").
. . ."). Iowa: \textit{See Davies v. Genesis Med. Ctr. Anesthesia & Analgesia, P.C.}, 994 F. Supp. 1078, 1103 (S.D. Iowa 1998) ("When interpreting Iowa antitrust statutes, Iowa courts are required by section (continued on next page) 553.2 to give considerable weight to federal cases construing similar sections of the Sherman Act."); see also Fed. Land Bank of Omaha v. Tiffany, 529 N.W.2d 294, 296-97 (Iowa 1995) (federal decisions about whether farm credit banks are subject to federal antitrust laws was dispositive of same question under Iowa antitrust law). Kansas: See Orr v. Beamon, 77 F. Supp. 2d 1208, 1211-12 (D. Kan. 1999) ("While recognizing that federal antitrust cases are not binding on the court in interpreting Kansas antitrust statutes, the court finds such cases sufficiently persuasive to guide its decision . . . ."); Bergstrom v. Noah, 266 Kan. 829, 845, 974 P.2d 520, 531 (1999) ("While such cases may be persuasive authority for any state court interpreting its antitrust laws, such authority is not binding upon any court in Kansas interpreting Kansas antitrust laws."). Maine: See Davric Maine Corp. v. Rancourt, 216 F.3d 143, 149 (1st Cir. 2000) ("We have noted that the 'Maine antitrust statutes parallel the Sherman Act,' and thus have analyzed claims thereunder according to the doctrines developed in relation to federal law." (quoting Tri-State Rubbish, Inc. v. Waste Mgmt., Inc., 998 F.2d 1073, 1081 (1st Cir. 1993))). Michigan: See First Med Representatives, LLC v. Futura Medi Corp., 195 F. Supp. 2d 917, 922 (E.D. Mich. 2002) ("[B]ecause Michigan courts apply Sherman Act analysis to the MARA, the following analysis applies to the entirety of Count I, for the allegations of both state and federal antitrust violations" (citing Blair v. Checker Cab Co., 219 Mich. App. 667, 675, 558 N.W.2d 439 (Mich. Ct. App. 1996))); Danou v. Kroger Co., 567 F. Supp. 1266, 1268 (E.D. Mich. 1983) ("The Michigan antitrust statute is patterned after the Sherman Act. Accordingly, the federal courts' interpretations of the Sherman Act are persuasive authority as to the meaning of the Michigan Act.") (citing Goldman v. Loubella Extendables, 91 Mich. App. 212, 283 N.W.2d 695 (Mich. Ct. App. 1979)). Minnesota: See State by Humphrey v. Alpine Air Prods., Inc., 490 N.W.2d 888, 894 (Minn. Ct. App. 1992) ("Minnesota antitrust law should be interpreted consistently with federal court interpretations of the Sherman Act unless state law is clearly in conflict with federal law."); see also Lamminen v. City of Cloquet, 987 F. Supp. 723, 734 (D. Minn. 1997) (same). North Carolina: See Rose v. Vulcan Materials Co., 282 N.C. 643, 655, 194 S.E.2d 521, 530 (1973) ("[T]he body of law applying the Sherman Act, although not binding upon this Court in applying (continued on next page) [North Carolina's antitrust law], is nonetheless instructive in determining the full reach of that statute."); see also United Roasters Inc. v. Colgate-Palmolive Co., 485 F. Supp. 1041, 1047-48 (C.D.N.C. 1979) ("[C]aution must be exercised in [taking guidance from Sherman Act decisions] because the Sherman Act is in some respects broader than [North Carolina's antitrust law]."). South Dakota: See Byre v. City of Chamberlain, 362 N.W.2d 69, 74 (S.D. 1985) ("[B]ecause of the legislative suggestion for interpretation found in SDCL 37-1-22, great weight should be given to the federal cases interpreting the federal statute."); see also In re S.D. Microsoft Antitrust Litig., 707 N.W.2d 85, 100 (S.D. 2005) (reiterating Byre); Assan Drug Co., Inc. v. 
Vermont: See State v. Heritage Realty of Vermont, 137 Vt. 425, 429-30, 407 A.2d 509, 511-12 (1979) (analyzing claim under Vermont antitrust law exclusively by reference to federal court Sherman Act decisions). West Virginia: See Kessel v. Monongalia County Gen. Hosp. Co., 220 W.Va. 602, 610, 648 S.E.2d 366, 374 (2007) ("[T]he Legislature has directed that the [West Virginia antitrust law] 'shall be construed liberally and in harmony with ruling judicial interpretations of comparable federal antitrust statutes.' Moreover, this Court held . . . that '[t]he courts of this state are directed by the legislature . . . to apply the federal decisional law interpreting the Sherman Act . . . to our own parallel antitrust statute.'" (citations omitted)). Wisconsin: See State v. Waste Mgmt. of Wis., Inc., 81 Wis. 2d 555, 574, 261 N.W.2d 147, 155 (1978) ("Except for the fact that the state act applies to intrastate commerce while the federal act applies to interstate commerce, what amounts to a conspiracy in restraint of trade under the Sherman Act amounts to a conspiracy in restraint of trade under the Wisconsin antitrust act."); see also Indep. Milk Producers Co-op v. Stoffel, 102 Wis. 2d 1, 6, 298 N.W.2d 102, 104 (Wis. Ct. App. 1980) ("[The Wisconsin antitrust law] is drawn largely from federal antitrust law. Interpretation of [the Wisconsin law], prohibiting conspiracies in restraint of trade or commerce, is controlled by federal case law." (citing Grams v. Boss, 97 Wis. 2d 332, 346, 294 N.W.2d 473, 480 (1980))).

It is irrelevant that states have declined to follow federal antitrust law for the proposition that indirect purchasers lack standing to sue. See Illinois Brick Co. v. Illinois, 431 U.S. 720 (1977). As the Supreme Court of Iowa explained:

The purpose behind both state and federal antitrust law is to apply a uniform standard of conduct so that businesses will know what is acceptable conduct and what is not acceptable conduct. To achieve this uniformity or predictability, we are not required to define who may sue in our state courts in the same way federal courts have defined who may maintain an action in federal court. Harmonizing our construction and interpretation of state law as to what conduct is governed by the law satisfies the harmonization provision.

Comes v. Microsoft Corp., 646 N.W.2d 440, 446 (Iowa 2002); accord Hyde v. Abbott Labs., Inc., 123 N.C. App. 572, 579, S.E.2d 680, 685 (N.C. Ct. App. 1996) (declining to follow Illinois Brick for other reasons). That is to say, disagreement about who can sue does not entail disagreement about when they may recover. Finally, however, the simple fact remains that each state statute requires some form of agreement,²² and
independently undertaken parallel conduct, even if undertaken consciously, does not itself demonstrate agreement. For these reasons and in light of my discussion of the federal claims, the state antitrust claims are DISMISSED.

---

²² See Ariz. Rev. Stat. § 44-1402 (prohibiting "[a] contract, combination or conspiracy between two or more persons in restraint of, or to monopolize, trade or commerce"); Cal. Bus. & Prof. Code § 16720 (2008) ("A trust is a combination of capital, skill or acts by two or more persons for any of the following purposes."); D.C. Code § 28-4502 (prohibiting "[e]very contract, combination in the form of a trust or otherwise, or conspiracy in restraint of trade or commerce."); Iowa Code § 553.4 ("A contract, combination, or conspiracy between two or more persons shall not restrain or monopolize trade or commerce"); Kansas Stat. Ann. § 50-101 (2008) (defining "[a] trust is a combination of capital, skill, or acts, by two or more persons"); Me. Rev. Stat. Ann. tit. 10, § 1101 (2008) (prohibiting "[e]very contract, combination in the form of trusts or otherwise, or conspiracy, in restraint of trade"); Mich. Comp. Laws § 445.772 (prohibiting "[a] contract, combination, or conspiracy between 2 or more persons in restraint of, or to monopolize, trade or commerce"); Minn. Stat. § 325D.51 (2008) (prohibiting "[a] contract, combination, or conspiracy between two or more persons in unreasonable restraint of trade or commerce"); Nev. Rev. Stat. § 598A.050 (enumerating and prohibiting various types of agreements that "constitute[] a contract, combination or conspiracy in restraint of trade"); N.C. Gen. Stat. § 75-1 (2008) (prohibiting "[e]very contract, combination in the form of trust or otherwise, or conspiracy in restraint of trade"); N.D. Cent. Code § 51-08.1-02 (2008) (prohibiting "[a] contract, combination, or conspiracy between two or more persons in restraint of, or to monopolize, trade or commerce"); S.D. Codified Laws § 37-1-3.1 (prohibiting "[a] contract, combination, or conspiracy between two or more persons in restraint of trade or commerce"); Tenn. Code Ann. § 47-25-101 (2008) (prohibiting "[a]ll arrangements, contracts, agreements, trusts, or combinations between persons or corporations made with a view to lessen, or which tend to lessen, full and free competition"); W.Va. Code § 47-18-3 (prohibiting "[e]very contract, combination in the form of trust or otherwise, or conspiracy in restraint of trade or commerce"); Wis. Stat. § 133.03 (2008) (prohibiting "[e]very contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce").

2. State Consumer Protection Claims

As noted above, Count Two of the SCAC also asserts claims under the consumer protection laws of the following eight jurisdictions: California, Washington D.C., Florida, Maine, Massachusetts, Nebraska, New Mexico and North Carolina.²³ To support those claims, Plaintiffs allege the same conduct that forms the basis of their antitrust claims. (See, e.g., Pls.' Opp'n 2 ("The pertinent state consumer protection laws encompass price-fixing claims because price fixing is a form of unfair, unconscionable or deceptive conduct."); see also id. at 34 n.39.) While the statutes at issue may embrace a violation of federal antitrust laws as a ground for relief,²⁴ my conclusion that Plaintiffs have not adequately alleged such a violation necessarily precludes their attempt to recast that violation as an unfair business practice.²⁵ For the reasons stated above, therefore, Count Two of the SCAC is DISMISSED.

---

²³ While the SCAC asserts claims broadly under the Kansas Unfair Trade and Consumer Protection Act, see Kansas Stat. Ann. ch. 50; see also SCAC ¶ 136(f), Plaintiffs clarify that they assert only claims under Article 1 of that Act, prohibiting certain restraints of trade, see id. § 50-101 et seq., and not the portion of Article 6 of that Act entitled the Kansas Consumer Protection Act, see id. § 50-623 et seq.; see also Pls.' Opp'n 33.
²⁴ See, e.g., Fla. Stat. § 501.203(3)(c) (2008) (Florida's consumer protection act violated by violations of "[a]ny law, statute, rule, regulation, or ordinance which proscribes unfair methods of competition, or unfair, deceptive, or unconscionable acts or practices"); 940 Mass. Code Regs. 3.16(4) (2008) (Massachusetts consumer protection act violated by violation of "the Federal Trade Commission Act, the Federal Consumer Credit Protection Act or other Federal consumer protection statutes"); see also Sunbelt Television, Inc. v. Jones Intercable, Inc., 795 F. Supp. 333, 338 (C.D. Cal. 1992) ("[S]ince plaintiffs have adequately plead a violation of the Sherman Act, they have clearly stated a cause of action under California's Unfair Competition law."); Dist. Cablevision Ltd. P'ship v. Bassin, 828 A.2d 714, 723 (D.C. 2003) ("Trade practices that violate other laws, including the common law, also fall within the purview of the [Washington D.C. Consumer Protection Procedures Act]."); Mack v. Bristol-Myers Squibb Co., 673 So. 2d 100, 104 (Fla. Ct. App. 1 Dist. 1996) ("Thus, the acts proscribed by subsection 501.204(1) include antitrust violations."); Triple 7, Inc. v. Intervet, Inc., 338 F. Supp. 2d 1082, 1087 (D. Neb. 2004) (Nebraska consumer protection statute violated by violations of Sherman Act); ITCO Corp. v. Michelin Tire Corp., 722 F.2d 42, 48 (4th Cir. 1983) ("We thus hold that proof of conduct violative of the Sherman Act is proof sufficient to establish a violation of the North Carolina Unfair Trade Practices Act.").

²⁵ See, e.g., In re Tamoxifen Citrate Antitrust Litig., 466 F.3d 187, 198 (2d Cir. 2006) (affirming district court's dismissal of state consumer protection claims upon district court's conclusion that plaintiffs failed to state a federal antitrust claim); Triple 7, 338 F. Supp. 2d at 1087 ("Plaintiff has failed to state a claim under the CPA for the same reasons discussed in connection with its Sherman Antitrust Act claim."); R.J. Reynolds Tobacco Co. v. Philip Morris Inc., 199 F. Supp. 2d 362, 396 (M.D.N.C. 2002) ("Because Plaintiffs do not allege any facts that suggest that Defendant's conduct is unlawful beyond the conduct that is the basis for their failed federal claims, Plaintiffs' state common law and statutory claims fail as well."); Carter v. Variflex, Inc., 101 F. Supp. 2d 1261, 1270 (C.D. Cal. 2000) ("Thus, in light of the Court's findings under the Sherman Act, the Court finds that Variflex has failed to produce sufficient evidence to support its California unfair competition claim.").

C. The Unjust Enrichment Count

Count Three of the SCAC alleges unjust enrichment: "The economic benefit of the overcharges and unlawful profits sought by and derived by Defendants through charging supracompetitive and artificially inflated prices for Internet Music and CDs is a direct and proximate result of Defendants' unlawful practices." (SCAC ¶ 141; see also Pls.' Opp'n 40 ("[T]he economic benefit gained by Defendants from Plaintiffs through Defendants' price-fixing and anticompetitive conduct is precisely the issue here. The proper focus is on the amounts by which Defendants were enriched." (emphasis in original))).
Having concluded above that the SCAC fails to allege a violation of the antitrust laws, Plaintiffs cannot now maintain their unjust enrichment claim predicated on the benefit accruing to Defendants as a result of that alleged violation. Therefore, Count Three of the SCAC is DISMISSED.

CONCLUSION

For the reasons stated above, Defendants' motion to dismiss the SCAC [dkt. no. 75] is GRANTED, and Plaintiffs' motion to amend SCAC Paragraph 99 [dkt. no. 104] is DENIED as futile. The Clerk of the Court shall mark this action closed and all pending motions denied as moot.

SO ORDERED.

DATED: New York, New York
October 9, 2008

Loretta A. Preska
LORETTA A. PRESKA, U.S.D.J.
Application of Vernacular Landscape Symbols in Du Fu Thatched Cottage of Chengdu

Dingying Ye, Xian Zhao, Yuanyuan Jiang

College of Landscape Architecture, Sichuan Agricultural University, Chengdu, Sichuan Province, China

Keywords: vernacular landscape symbols; Du Fu Thatched Cottage; application

Abstract: This paper takes the perspective of semiotics and focuses mainly on the signified and referent features of symbols. The researchers conducted an on-site investigation of Du Fu Thatched Cottage and collected the vernacular landscape symbols used in this garden. Through screening and classifying these symbols, the characteristics of their application in Du Fu Thatched Cottage are analyzed. This paper also excavates the regional cultural features of Chengdu and provides a reference for landscape construction in the Chengdu area.

1. Introduction

Since the beginning of the 21st century, with the rapid development of the social economy and culture, our country's comprehensive strength has kept growing. In the rapid course of urban construction, however, many historical relics have been submerged, and the features and cultural heritage of cities face enormous challenges. With the promotion of movements such as the "cultural return" and the "tourism wave", the exploration of regional culture has attracted much attention. The uniqueness of vernacular landscape plays an important role in the formation of a diversified regional landscape and can effectively counter the convergence of landscapes under the influence of globalization. [1] As prominent and representative features of vernacular landscape, special element symbols have wide application value and are closely related to the development of urban landscapes with local cultural characteristics and to the inheritance of regional history and culture. This study explores the application of vernacular landscape symbols in Du Fu Thatched Cottage, a famous garden located in western Chengdu, Sichuan Province. It is conducive to excavating the regional cultural characteristics of Chengdu and can provide a reference for the landscape construction and design innovation of the city.

2. The Concept of Vernacular Landscape Symbols

In Ci Hai, the word "Xiang Tu", or vernacular, is interpreted as "native land or hometown", and also refers to "local regions". [2] With the joint development of many disciplines, the meaning of "Xiang Tu" is no longer limited to its original sense. Its connotation has gradually expanded and changed, and many related concepts have come into being, such as vernacular landscape, vernacular culture and vernacular architecture. They all center on the concepts of vernacular elements, regional places and hometown. [3] In short, "Xiang Tu" is the place where human beings are born, grow up and die. With the in-depth study of "Xiang Tu", the new concept of "vernacular landscape" appeared and attracted attention. The term is used in the literature with different connotations depending on the focus of study. At present, there are mainly three interpretations of vernacular landscape, namely rural landscape, regional landscape and ordinary landscape. These three interpretations share certain commonalities. They all hold that vernacular landscape is the process by which people come to know nature and adapt to the land, as well as the spaces and patterns formed on the land. It is the manifestation of people's way of life on the land at a particular period of time, and is closely related to the modes of human life and production.
It is the result of a spontaneous or semi-spontaneous process based on the accumulation of human activity and experience. [4]

Symbols are abstract; their essence is to represent one substance or concept through another. Semiotics is the science of signs and symbolic structures; its scope is wide, and it can be applied to almost anything. Vernacular landscape symbols derive from the combination of symbols and landscapes; they refer to landscape patterns formed spontaneously or semi-spontaneously through the accumulation of people's life experience. These symbols embody humans' recognition of nature and the land, as well as the methods they adopted to adapt to specific spatial patterns. Vernacular landscape symbols have the three semiotic characteristics of signifier, signified and referent. The creation and inheritance of human culture are usually accomplished through symbols: the process of creating human culture is in fact the process of constructing a symbol system, and the process of symbol inheritance is also the process of cultural continuity. Therefore, in the process of building the human living environment, the traditional symbols created by people's subjective initiative are all vernacular landscape symbols.

3. The Application of Vernacular Landscape Symbols in Du Fu Thatched Cottage

3.1 Basic situation of the cottage

Du Fu Thatched Cottage, located in the western suburbs of Chengdu, was once the residence of Du Fu, a famous poet of the Tang Dynasty who came to Chengdu to flee the war. It is also known as the Du Fu Cao Tang, Shaoling Thatched Cottage, Huanhua Thatched Cottage and Gongbu Thatched Cottage. The garden was first built when Du Fu settled here in the Tang Dynasty; it was rebuilt by the poet Wei Zhuang in the Five Dynasties period, and was expanded and repaired in the Song, Yuan, Ming and Qing dynasties. The existing buildings of Du Fu Thatched Cottage were renovated in the 16th year of the reign of Emperor Jiaqing of the Qing Dynasty. The Thatched Cottage is a complete architectural complex with five thematic buildings arranged along a central axis: the main entrance, the Daxie Hall, the Shishitang Hall, the Chaimen Gate and the Temple of Gongbu. Based on the evolution of traditional Chinese cultural symbols, the construction history of the garden, and the life of Du Fu, the protagonist of the site, we can extract the possible signified meanings of the vernacular landscape symbols in the park. These symbols express meanings such as poetry, politics, life, official career and leisurely living. An overview of the corresponding symbols is given in Table 1.

Table 1. Symbols in Du Fu Thatched Cottage

| Dynasty | Typical symbols |
|---------|-----------------|
| Tang | Animal pattern (poetry and official career intention), rolling grass pattern (life intention), Bao Xiang pattern (beautiful vision), brocade pattern (beautiful vision), twig pattern (life intention), bird, flower and grass pattern (life and poetry intention), Lian Zhu pattern (beautiful vision) |
| Song | Melon and fruit pattern (beautiful vision), flower and bird pattern (poetry intention), grass pattern (life intention), ribbon pattern (official career intention), character pattern (life intention) |
| Ming | Flower pattern (life intention) |
| Qing | Character Wan pattern (beautiful vision), animal pattern (poetry and official career intention), flower and bird pattern (poetry intention), character pattern (life intention) |

3.2 Application of vernacular landscape symbols

Under the guidance of these symbolic images, 117 vernacular landscape symbols were collected; 55 were retained after excluding damaged and unclear samples. According to the nature of these symbols and a comparison with the patterns described in Chinese Symbols (2008) and other works, the vernacular landscape symbols of the site were identified and their applications counted. The results are shown in Table 2.

Table 2. Summary of symbols in Du Fu Thatched Cottage

| Symbol | Dynasty | Meaning | Frequency |
|---------------------------------------------|------------------|----------------------------------------------|-----------|
| Cloud and thunder pattern | Shang and Zhou | Power | 1 |
| Nipple pattern | Qin and Han | Order | 1 |
| Rolling grass pattern | Tang | Good luck and happiness | 1 |
| Sun pattern | Tang | Mystery and power | 1 |
| Flower and bird pattern | Song | Beautiful vision | 2 |
| Character pattern | Song | Meditation on the past | 1 |
| Plant pattern | Song | Different intentions | 25 |
| Gold ingot pattern | Ming and Qing | Wealth | 2 |
| Character Wan pattern | Qing | Good luck and happiness | 2 |
| Calligraphy and painting pattern (scenery pattern) | Qing | Different intentions | 9 |
| Myth pattern | Qing | Beautiful vision | 2 |
| Geometric pattern | All dynasties | Order, power and preciseness | 8 |

Du Fu Thatched Cottage has a long history and was renovated in several dynasties, so symbols of the Tang and pre-Tang dynasties have gradually disappeared. The existing vernacular landscape symbol patterns include plants, figures and paintings, as well as clouds and thunder; among them, the plant patterns of the Tang and Song dynasties are the most frequent, followed by the calligraphy and painting patterns of the Qing Dynasty and various geometric symbols. Other symbols are scattered around the site as accents. Below, the typical symbols are analyzed in detail.

Plant patterns. Plant patterns account for nearly half of all symbols in Du Fu Thatched Cottage. They are mostly used on bridges, pavements, structures, flower pots and handrails. The main motifs are peonies, orchids, bamboo, plum blossom, lotus flowers and cherries. In areas with dense plant patterns, however, the implication of an individual plant is not the most important point; the key is to create a landscape with "a group of plants" as an aggregation of symbols. The common "plum blossom, orchid, bamboo and chrysanthemum" and the "three durable plants of winter: pine, bamboo and plum blossom" are typical examples.
At the same time, patterns featuring a single kind of flower are widely applied, mainly in the ancient pagoda area of Du Fu Thatched Cottage; most of these patterns are peony, lotus, orchid, peach blossom and twig motifs, and the majority of them carry the meaning of "longevity" (as shown in Figure 1).

Figure 1. Plant patterns form a set of symbols in the space

Geometric patterns. Geometric patterns account for nearly one fifth of the symbols in Du Fu Thatched Cottage. Most of them express the landscape through different materials. For example, bamboo strips are woven into rhombic railings with a countryside air, and the landscape walls made of bricks and tiles express a strong pastoral flavor, which conforms to the humanistic spirit of the site. The combination of bamboo weaving, bricks and tiles makes the landscape simple and lively. Bricks and tiles, straight eaves, red beams and columns, as well as lintels and railings, create a unique vernacular garden that is completely different from both northern and southern gardens. In the partitions of the landscape walls, repeated geometric patterns are often used to create an orderly aesthetic. At the same time, geometric symbols imitating bricks and tiles make the landscape of the site more vivid (as shown in Figure 2).

Figure 2. Fences with geometric patterns in Du Fu Thatched Cottage

The application of geometric patterns, together with the series of landscapes formed by the thatched cottage, the pond and the five-mu field, ties Du Fu's life closely to the environment of the cottage. It not only provided Du Fu with a place to settle down, but also shows his feeling of living in a humble hut while remaining concerned for the whole world. Moreover, the association of Du Fu Thatched Cottage with the famous poem *My Cottage Unroofed by Autumn Gales* not only increases the popularity of the cottage, but also deepens the Confucian cultural accumulation of the place and elevates its humanistic mood. For the aesthetic subjects, the experience of staying in the thatched house strengthens their aesthetic feeling, enriches their aesthetic experience and helps them realize aesthetic transcendence. Visitors can truly feel the Confucian mind of "with thousands of miles in sight and centuries of history in mind" described in the Five Poems of Spring Villages and Rivers.

Painting and calligraphy patterns. Painting and calligraphy patterns account for about one fifth of the symbols in Du Fu Thatched Cottage. This kind of pattern is commonly used in memorial gardens to express the theme of commemoration. Du Fu Thatched Cottage takes Du Fu's residence as its prototype; Du Fu's life experiences and feelings, as well as later generations' memory of him, are the sources of the calligraphy and painting patterns. Through landscape modeling, the spirit of this place is created and its story passed down (as shown in Figure 3).

Figure 3. Painting and calligraphy patterns in Du Fu Thatched Cottage

4. Characteristics in the Application of Vernacular Landscape Symbols in Du Fu Thatched Cottage

In Du Fu Thatched Cottage there are many kinds of symbols, which cannot be neatly classified into categories from the perspective of history or archaeology. Therefore, to facilitate generalization and summarization, the symbols investigated in the garden are interpreted in a popular way according to their contents or images.
For example, narrative-oriented symbols, as the name implies, are the calligraphy, painting and figure patterns. After classifying and summarizing the vernacular landscape symbols in Du Fu Thatched Cottage from the aspects of signified and referent meaning, this paper summarizes the characteristics of their application from the perspective of the "signified", one of the three characteristics of symbols.

4.1 Narrative is the main way to express the theme of the landscape

In Du Fu Thatched Cottage, Du Fu, as the protagonist of the place, is carved into the painting and calligraphy patterns. These symbols are mainly narrative, presenting themes such as "Du Fu's image", "the original cottage" and "pastoral scenery". The image of Du Fu and the cottage can be found on the railings and pavements of the garden. This not only establishes the landscape atmosphere of the place, but also enables visitors to sense the historical distance in the comparison of symbols and landscapes, to experience the changes of the site, and to form an image of Du Fu. In the landscape-based painting and calligraphy patterns, the main elements are nature, the cottage and the native land, which conform to the character of the site and bring visitors a kind of "spiritual sustenance". These elements lift the meaning of the symbols to a higher level. The story patterns about characters related to the site differ from patterns reflecting the real life of the Spring and Autumn and Warring States periods: the patterns here have a commemorative and nostalgic nature (as shown in Figure 4).

Figure 4. Narrative landscape expressions

4.2 The types of symbol application are greatly influenced by Confucian, Buddhist and Taoist cultures

Influenced by the strict feudal ruling ideology of Confucianism, expressed in precepts such as "ministers submit to the monarch, while sons submit to the father", geometric patterns standing for order, as well as cloud and thunder patterns standing for power, often appear in Du Fu Thatched Cottage. The lotus-seed, lotus-flower, lotus-seat and Wan-character patterns in the cottage are concrete manifestations of the influence of Buddhist culture and of ideas such as "achieving Buddhahood". The painting patterns and cloud patterns that evoke a mythological mood and the realm of the immortals are the embodiment of Taoist culture. Traditional Chinese culture is deeply influenced by Confucianism, Buddhism and Taoism; although Du Fu Thatched Cottage has been repaired and perfected several times, its cultural foundation has not been affected. The vernacular landscape symbols in the cottage are therefore an inheritance and development of our traditional culture (as shown in Figure 5).

Figure 5. Comprehensive influences of Confucianism, Buddhism and Taoism

4.3 The meaning of the symbols is full of auspicious and beautiful visions

In Du Fu Thatched Cottage there are a large number of landscape symbols with auspicious implications, such as the patterns of the character Wan, which means ten thousand in Chinese, the patterns of gold ingots, and various plant, flower and bird patterns. In the fence symbols of the garden, the combination of Wan-character patterns and gold ingot patterns expresses the beautiful vision of "ten thousand words do not reach the end" and symbolizes long-lasting happiness and longevity. The gold ingot patterns are integrated into the design of the landscape wall.
Through repetition, it forms a kind of screening element in the landscape. The application blends into the natural environment, satisfies the functions of the landscape and expresses people's pursuit of a better life. The large number of plant and bird patterns is also a concentrated expression of aspirations for an auspicious and beautiful life. For example, there are orchids representing tranquility and leisurely living, peonies representing glory and wealth, vinca representing exuberant vitality, as well as lively and festive birds and animals (as shown in Figure 6).

Figure 6. Landscape expressions of beautiful wishes

5. Conclusion

Vernacular landscape symbols evolve through history according to users' different functional needs, as well as their aesthetic and spiritual pursuits in different periods. Although the existing Du Fu Thatched Cottage was renovated during the reign of Emperor Jiaqing of the Qing Dynasty, the vernacular landscape symbols in the park are mostly patterns popular after the Song Dynasty. In this famous literary shrine and memorial garden, the meaning of these symbols has not changed fundamentally. The extensive use of plant and painting patterns creates a strong scholarly atmosphere and gives the site a literati temperament. It is not only a simulation of Du Fu's living environment, but also a mark of respect for the existing site. Therefore, when applying landscape symbols, we must innovate on the basis of a full understanding of their cultural connotations, in order to design landscapes that have genuine cultural characteristics and connotations and that reflect the spirit of the times.

Acknowledgement

This paper is supported by the Foundation for Key Projects of the Sichuan Landscape and Recreation Research Center of the Education Department of Sichuan in 2016. Project No.: JGYQ2016002.

References

[1] Y.L. Cui, F. Xue, Research on visual landscape planning of domestic and international cities, J. Anhui Architecture 5 (2011) 7-8.
[2] Z.Y. Hong, Application of Vernacular Landscape Elements in Modern Urban Parks, Fujian Agricultural and Forestry University, 2011.
[3] L.J. Yu, Seeking for the God of Land, China Architecture and Building Press, Beijing, 2006.
[4] Z.H. Chen, On the Study of Vernacular Architecture, Henan Science and Technology Press, Zhengzhou, 1999.
The motion picture industry has a history of anticompetitive practices.\(^1\) Since the early days of the motion picture industry, movie producers and distributors have sought absolute control of the industry by dominating the production, distribution, and exhibition of movies.\(^2\) In 1948, the United States Supreme Court held that such control of the industry was anticompetitive and forced major movie studios to separate the exhibition aspect of the industry from its production and distribution aspects.\(^3\) The emergence of home videos introduced an entirely new market, where video retailers became key players in the distribution of movies to the public. In *Cleveland v. Viacom, Inc.*, the Fifth Circuit analyzed output revenue-sharing agreements between movie studios and large chain video retailers, addressing the antitrust issues that emerge when a movie studio oligopoly uniformly refuses to deal with small independent retailers on similar terms as large chain video retailers.\(^4\) According to the court, because the small independent retailers were unable to offer the studios a deal similar to that of the large chain video retailers, the studios' uniform refusal to deal was not illegal.

The implications of *Cleveland*, however, become increasingly significant as the movie industry moves further along its digital evolution. Namely, as technology for disseminating movies over digital networks becomes more secure and affordable, the online movie distribution market will grow, and an increasing number of potential online movie distributors will emerge. In addition, as the technology becomes more secure and affordable, the competitive gap between small and large online movie distributors will likely shrink. The popularity of online movie distribution, coupled with lower prices stemming from competition, may then be seen as a viable threat to the existing video retail industry. Indeed, renting physical copies of movies may soon be obsolete in light of technology that instead allows users to obtain digital copies at home.

---

© 2005 Daniel Castro

1. See generally Barak Y. Orbach, *Antitrust and Pricing in the Motion Picture Industry*, 21 YALE J. ON REG. 317 (2004).
2. See generally Ralph Cassady, Jr., *Monopoly in Motion Picture Production and Distribution: 1908-1915*, 32 S. CAL. L. REV. 325 (1959).
3. United States v. Paramount Pictures, Inc., 334 U.S. 131 (1948).
4. Cleveland v. Viacom, Inc., 73 Fed. Appx. 736 (5th Cir. Aug. 25, 2003), cert. denied, 520 U.S. 1219 (2004).

This Note examines the implications of *Cleveland* on the movie industry within a digital marketplace. In particular, the Note examines the likelihood of digital technology sufficiently evolving to the point where small retailers may offer studios a deal legally similar to that of large retailers such as Blockbuster Inc. Part I begins by giving a background of the in-home movie market. Part II then provides a legal background of section 1 of the Sherman Act, the Robinson-Patman Act, and the First Sale Doctrine. Next, Part III provides a summary of *Cleveland*. Part IV discusses the online movie distribution market, the impediments potential online distributors will face in the future, and possible solutions to these impediments. Finally, this Note concludes in Part V that the reduced costs involved in digitally distributing movies will enable small retailers to enter this marketplace and that studios will ultimately have to distribute their movies via these retailers.

I. BACKGROUND OF THE IN-HOME MOVIE MARKET

The rental home video market emerged in the 1980s and was largely influenced by consumers' desire to rent a video for temporary viewing, as opposed to purchasing a copy outright at a much higher price. Under this business model, video retailers would purchase videos and rent them to the public by invoking their first sale rights.\(^5\) Currently, home videos comprise the largest category of copyrighted works widely disseminated by rental.\(^6\) Despite the popularity of rental videos, the market has experienced difficulties. For example, during the mid-1990s, many video retailers had insufficient copies of "new release" titles available to accommodate customer demand. As a result, customers were often unable to rent the popular movies that were most in demand, which translated to a loss in potential profits for video retailers.

---

5. The first sale doctrine is the right to resell, rent, or lend copies of copyrighted works to individual purchasers. See 17 U.S.C. § 109(a) (2000) ("[T]he owner of a particular copy or phonorecord lawfully made under this title, or any person authorized by such owner, is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy or phonorecord.").
6. R. Anthony Reese, *The First Sale Doctrine in the Era of Digital Networks*, 44 B.C. L. REV. 577, 587-88 (2003).

In 1997, Blockbuster responded to this problem by entering into long-term output revenue sharing contracts with various major movie studios. By dealing directly with the studios and bypassing the distributors, Blockbuster was able to obtain videos for lower upfront payments in exchange for a percentage of their revenues. It agreed to purchase all titles a studio released, regardless of performance or perceived popularity. As a result, it significantly increased its volume of new releases and subsequently distanced itself from the independent retailers in the video rental market.

In August 2001, five of the seven largest movie studios\(^7\) announced their joint plan to offer Internet users digitized movies through a service called Moviefly.\(^8\) This service, which has been renamed Movielink, allows customers to download movies from the Internet.\(^9\) These movies may be downloaded approximately two months after they are released on video, and must be viewed within a 24-hour period.\(^{10}\) Approximately two months after the movies are released on video, customers may also order them on a pay-per-view basis from their local cable or satellite provider.\(^{11}\) Similar to the Movielink business model, movies ordered via pay-per-view must be viewed during a 24-hour viewing period. And similar to renting movies from a video store, customers may rewind and fast-forward these movies as they desire.\(^{12}\)

**II. LEGAL BACKGROUND**

This Note primarily focuses on three legal issues. The first two issues involve interpreting the two laws under which the plaintiffs' antitrust claims were brought in *Cleveland*: section 1 of the Sherman Act and price discrimination under the Robinson-Patman Act. The third issue involves the first sale doctrine, which is the copyright law that enables video retailers to rent copyrighted movies.

---

7. Metro-Goldwyn-Mayer ("MGM"), Paramount Pictures, Sony Pictures Entertainment, Universal Studios, and Warner Brothers.
8. See Gary Gentile, *Studios in Video on Demand Venture*, AP ONLINE, Aug. 16, 2001, available at 2001 WL 26180234.
9.
See Movielink, http://www.movielink.com (last visited Mar. 4, 2005). 10. Ron Grover, *Video-on-Demand, Hollywood Style*, BUS. WK. ONLINE, Aug. 21, 2001, at http://www.businessweek.com/bwdaily/dnflash/aug2001/nf20010821_006.htm. 11. Id. 12. See Comcast, http://www.comcast.com/Benefits/CableDetails/Slot6PageOne.asp (last visited Mar. 4, 2005). A. Concerted Action Under Section 1 of the Sherman Act Under section 1 of the Sherman Act, it is illegal to enter into a contract or conspiracy in restraint of trade or commerce.\textsuperscript{13} Despite the statute's admonition that every concerted trade restraint is illegal, judicial interpretation of the statute has limited its scope by implementing a "standard of reason."\textsuperscript{14} Nevertheless, the statutory requirement of concerted action remains a fundamental threshold burden for any antitrust plaintiff. Concerted action requires an antitrust plaintiff to prove the defendant conspired either horizontally with a competitor, or vertically with a firm involved in a different stage of production than the defendant. In fact, courts have held that unilateral conduct simply cannot be deemed a violation of section 1 of the Sherman Act, no matter how anticompetitive.\textsuperscript{15} Courts have further held that unless the competitive process is harmed by a single buyer's particular purchasing agreement, no conspiracy to monopolize may be inferred.\textsuperscript{16} B. Price Discrimination Under the Robinson-Patman Act Under the Robinson-Patman Act, it is unlawful for a seller to either directly or indirectly discriminate in price between different purchasers of similar commodities.\textsuperscript{17} The application of this statute may occur in either of two situations. First, it may apply where the discrimination substantially lessens competition or tends to create a monopoly.\textsuperscript{18} Second, it may apply where the effect may be substantially "to injure, destroy, or prevent competition with any person who either grants or knowingly receives the benefit of such discrimination, or with customers of either."\textsuperscript{19} According to the Supreme Court, "price discrimination" under the Robinson-Patman Act is defined as being merely a difference in price.\textsuperscript{20} This difference in price, however, also includes discount prices that are theoretically available to all, but functionally not.\textsuperscript{21} Indeed, according to \begin{itemize} \item[13.] 15 U.S.C. § 1 (2000). \item[14.] See United States v. Am. Tobacco Co., 221 U.S. 106, 179 (1911); Standard Oil Co. v. United States, 221 U.S. 1, 60 (1911). \item[15.] See, e.g., Copperweld Corp. v. Independence Tube Corp., 467 U.S. 752, 768 (1984); Monsanto Co. v. Spray-Rite Serv. Corp., 465 U.S. 752, 761 (1984). \item[16.] NYNEX Corp. v. Discon, Inc., 525 U.S. 128 (1998). \item[17.] 15 U.S.C. § 13(a) ("It shall be unlawful for any person engaged in commerce, in the course of such commerce, either directly or indirectly, to discriminate in price between different purchasers of commodities of like grade and quality."). \item[18.] Id. \item[19.] Id. \item[20.] FTC v. Anheuser-Busch, Inc., 363 U.S. 536, 549 (1960). \item[21.] FTC v. Morton Salt Co., 334 U.S. 37, 42 (1948). 
the Supreme Court, the legislative history of the Robinson-Patman Act clearly shows Congress's intent to prevent large buyers from securing a competitive advantage over small buyers simply based on the large buyers' superior purchasing power.\textsuperscript{22} A price discrimination claim under the Robinson-Patman Act, therefore, could be described as focusing primarily on the rivals of both the discriminating seller and the buyer receiving lower prices, rather than on the ultimate consumer of the product.

C. First Sale Doctrine

Under the first sale doctrine, a copyright owner ceases to have control of a particular copy of a copyrighted work after the owner's first transfer of that copy.\textsuperscript{23} As a result, anyone who \textit{legally} purchases copies of a copyrighted work is free to resell, rent, or lend those copies. For the video rental industry, this doctrine thus provides a legal shelter under which renters can operate. The emergence of digital networks, however, created a new landscape for interpreting the first sale doctrine, in which users can readily forward copyrighted works they may have legally obtained from the copyright owner. In 1995, a presidential task force formed to research this issue determined that such conduct was not permitted under the first sale doctrine. And although Congress has considered creating a "digital first sale doctrine," nothing has actually passed.\textsuperscript{24} Instead, Congress has adopted a "wait and see" approach because of the inherent uncertainties involved with rapid technological advances in e-commerce and encryption.\textsuperscript{25} This policy remains in effect today.

III. CASE SUMMARY

A. Facts and Procedural History

Several independent video retailers sued home video affiliates of the seven major Hollywood movie studios, including Blockbuster Inc. ("Blockbuster") and its parent company Viacom Inc. ("Viacom").\textsuperscript{26}

\begin{itemize}
\item \textsuperscript{22} \textit{Id.} at 43.
\item \textsuperscript{23} 2 \textsc{Melville Nimmer} \& \textsc{David Nimmer}, \textsc{Nimmer on Copyright} § 8.12 (2004).
\item \textsuperscript{24} \textit{Id.}; Reese, \textit{supra} note 6, at 581-83.
\item \textsuperscript{25} Congress adopted this policy in 2001 under the recommendation of the United States Copyright Office. Reese, \textit{supra} note 6, at 581-83.
\item \textsuperscript{26} The defendants are Buena Vista Home Entertainment, Inc.; Columbia Tri-Star Home Video, Inc.; Metro-Goldwyn-Mayer Home Entertainment, Inc.; Paramount Home Video, Inc.; Time Warner Entertainment Company, L.P.; and Twentieth Century Fox Home Entertainment, Inc. (collectively "studios").
\end{itemize}

Plaintiffs alleged that Blockbuster conspired with the studios to deny independent retailers long-term output revenue-sharing agreements equivalent to its own.\textsuperscript{27} As a result, the plaintiffs alleged that the defendants violated antitrust and price discrimination statutes.\textsuperscript{28} The issues presented by the \textit{Cleveland} decision include whether disparate pricing agreements between media producers and retailers violate the antitrust laws where 1) small independent retailers lack the resources to enter into agreements similar to those negotiated by large chain retailers, and 2) there is little evidence of any bad faith intent to exclude small independent retailers from entering into such agreements.
Defendants moved for judgment as a matter of law at the close of plaintiffs' case-in-chief. This motion was granted by the district court in 2001,\textsuperscript{29} and later affirmed by the appellate court in an unpublished opinion in 2003.\textsuperscript{30} Plaintiffs' petition for certiorari to the United States Supreme Court was subsequently denied in 2004.\textsuperscript{31}

B. The Fifth Circuit's Analysis

The statutory basis of plaintiffs' antitrust claim was section 1 of the Sherman Act.\textsuperscript{32} In its review, the Fifth Circuit conceded that it should consider all evidence in the light most favorable to the nonmovants, here the plaintiffs.\textsuperscript{33} The court insisted, however, that antitrust cases such as this one require "the range of permissible inferences [to be] limited by particular principles of antitrust law."\textsuperscript{34} In particular, the court held that inferences of a conspiracy cannot be supported by evidence of conduct that is equally consistent with both permissible competition and an illegal conspiracy.\textsuperscript{35} Accordingly, since direct evidence of a conspiracy was lacking, the court required plaintiffs to present sufficient circumstantial evidence to exclude the possibility of independent action.\textsuperscript{36}

The court rejected plaintiffs' argument that Blockbuster's plan to increase its market share was proof of a conspiracy. According to the court, a company's ambitious desire to significantly increase its market share cannot support an inference of conspiracy without proof that it intends to achieve these goals via illegal means.\textsuperscript{37} The court further held that, despite plaintiffs' evidence of Blockbuster requesting a "special deal" from Fox's vice-president, an exclusive deal was never made and an inference of conspiracy would be premature.\textsuperscript{38}

The court also rejected plaintiffs' argument that a conspiracy may be inferred from the studios' parallel conduct. In particular, the court held that plaintiffs cannot establish an inference of conspiracy by simply showing that the studios followed similar courses of action, without providing evidence that these acts stemmed from an agreement as opposed to each studio's independent business judgment.\textsuperscript{39} According to plaintiffs' expert testimony, the studios' conduct was contrary to their economic self-interest.

\begin{itemize}
\item[27.] 73 Fed. Appx. 736, 739 (5th Cir. Aug. 25, 2003).
\item[28.] \textit{Id.}
\item[29.] Cleveland v. Viacom, 166 F. Supp. 2d 535 (W.D. Tex. 2001).
\item[30.] Cleveland v. Viacom, 73 Fed. Appx. 736 (5th Cir. Aug. 25, 2003).
\item[31.] Cleveland v. Viacom, 520 U.S. 1219 (2004).
\item[32.] 15 U.S.C. § 1 (2000).
\item[33.] \textit{Cleveland}, 73 Fed. Appx. at 739.
\item[34.] \textit{Id.} (citing Viazis v. Am. Ass'n of Orthodontists, 314 F.3d 758, 762 (5th Cir. 2002)).
\item[35.] \textit{Id.}
\item[36.] \textit{Id.}
\end{itemize}
Plaintiffs argued that "because the studios received greater revenues under the terms of their deals with Blockbuster, they likewise would have received greater revenues under similar deals with distributors serving independents."\textsuperscript{40} The court rejected this testimony, holding that it ignored key differences between independent retailers and large video chains such as Blockbuster.\textsuperscript{41}

Plaintiffs' price discrimination claim was brought under the Robinson-Patman Act.\textsuperscript{42} According to the court, however, this statute applies only where customers are otherwise purchasing on like terms and conditions.\textsuperscript{43} Here, the court found the studios' transactions with Blockbuster and with plaintiffs to be dissimilar. Namely, the court found Blockbuster's purchasing agreement with the studios readily distinguishable from the agreements entered into by plaintiffs because Blockbuster's agreement required Blockbuster to 1) commit to long-term contracts and 2) purchase a studio's entire output.\textsuperscript{44} According to the court, this distinction is so significant that the disparity in price cannot support a claim of price discrimination.\textsuperscript{45}

\begin{itemize}
\item \textsuperscript{37} \textit{Id.} at 740-41.
\item \textsuperscript{38} \textit{Id.} at 740.
\item \textsuperscript{39} \textit{Id.}
\item \textsuperscript{40} \textit{Id.} at 741.
\item \textsuperscript{41} \textit{Id.}
\item \textsuperscript{42} See 15 U.S.C. § 13 (2000). The plaintiffs also claimed the price discrimination was in violation of the California Unfair Trade Practices Act. CAL. BUS. & PROF. CODE §§ 16750(a), 17078-17080 (West 1997), 17203 (West Supp. 2005).
\item \textsuperscript{43} \textit{Cleveland}, 73 Fed. Appx. at 741 (citing FTC v. Borden, 383 U.S. 637, 643 (1966)).
\item \textsuperscript{44} \textit{Id.}
\item \textsuperscript{45} \textit{Id.}
\end{itemize}

IV. DISCUSSION

This Note attempts to foreshadow the significance of *Cleveland* in a future era of digital networks. In so doing, it examines how the court's original analysis fits within the framework of a digital marketplace where small website operators can offer movie studios the same deal as the Blockbusters of the world. Furthermore, this Note examines the likely options for the movie studios and the new antitrust considerations with which they will inevitably have to deal. Section A provides some background with respect to the studios' attempts to vertically integrate the online movie distribution market. Section B discusses the impediments potential online distributors will face in the future. Section C offers possible solutions to these impediments.

A. Studios Seek to Vertically Integrate the Online Movie Distribution Market

The studios have historically been reluctant to distribute their movies over the Internet because of the threat posed by piracy.\(^{46}\) The emergence of Movielink, however, reveals a desire of the studios to cautiously embrace this new market through vertical integration.
In particular, the studios hope this venture will enable them to set the technical and security standards necessary for the online movie market to safely flourish.\(^{47}\) Having learned from their music industry counterparts, the studios hope Movielink deters consumers from engaging in piracy by providing them with a legitimate way to obtain movies over the Internet.\(^{48}\) The threat posed by legal file-sharing ventures has also expedited the emergence of Movielink.\(^{49}\) CenterSpan Communications, for example, plans to provide a legal version of a Napster-like software service that enables users to share movies online.\(^{50}\) Fortunately for the studios, third parties such as CenterSpan currently lack the technology necessary to make such ventures profitable. By launching Movielink, however, the studios clearly want to get a head start.

---

46. See Gentile, *supra* note 8.
47. See Grover, *supra* note 10.
48. Laura Rich, *Analysis: Hollywood Braces for "Napsterization"*, CNN.COM TECH PAGE, Jan. 10, 2001, at http://asia.cnn.com/2001/TECH/computing/01/10/hollywood.napsterization.idg/index.html.
49. See id.
50. CenterSpan acquired this software from Scour, Inc. in a court-supervised bankruptcy auction following Scour's copyright infringement litigation against members of the Motion Picture Association of America (MPAA), the Recording Industry of America (RIAA), and the National Music Publishers' Association (NMPA). Carey D. Ramos, *The Security of Music and Motion Pictures Distributed on the Internet: Legal Background and Developments*, in *MUSIC ON THE INTERNET: UNDERSTANDING THE NEW RIGHTS AND SOLVING NEW PROBLEMS* 402 (PLI Patents, Copyrights, Trademarks, and Literary Property Course, Handbook Series No. G0-00PP, 2001).

1. Prospects of Movies Being Widely Disseminated over the Internet

Despite having the legality of its revenue sharing agreements challenged in *Cleveland*, Blockbuster ultimately prevailed because it negotiated deals with the various studios that the independent retailers could not match. Today, however, the possibility of disseminating movies via the Internet poses a significant threat to Blockbuster's current business model. Such technology may eventually eliminate demand for large retail chains that rent tangible copies of movies. Indeed, the convenience of either downloading a movie from a studio website or ordering it via pay-per-view may soon supersede the convenience of going to a local Blockbuster.

Although Blockbuster has already taken steps toward securing a share of this new technology's market, it is struggling to obtain a firm foothold in it. Blockbuster executives, for example, have openly expressed their commitment to "delivering movies to people at home however they want to receive them" and have already begun negotiating deals with studios.\(^{51}\) Some of these negotiations have failed, however, including a failed negotiation with its corporate cousin Paramount Studios in which an insider is quoted as saying, "Hollywood isn't about to give Blockbuster another blank check."\(^{52}\) Although Blockbuster did secure limited video-on-demand rights from Universal Pictures, it failed in its efforts to start an Internet venture with Enron.\(^{53}\) Even if Blockbuster were to negotiate a deal with a particular movie studio to distribute movies online, it would still have to compete with independent websites that may offer studios the same deal.
Current technological conditions also make it difficult for the online movie distribution market to emerge. In particular, the number of homes currently equipped with high-speed Internet connections has not yet reached a level at which sustaining a movie distribution site would be profitable. Nevertheless, the studios are optimistic that the popularity of broadband Internet access will continue to grow rapidly and eventually make their Movielink joint venture profitable.

---

51. Konrad Gatien, *Internet Killed the Video Star: How In-House Internet Distribution of Home Video Will Affect Profit Participants*, 13 FORDHAM INTELL. PROP. MEDIA & ENT. L.J. 909, 923-24 (2003).
52. Id.
53. Id.

Experts agree that these conditions may arrive relatively soon. According to some analysts, the number of homes with high-speed Internet connections is expected to grow from two million in 2000 to approximately 47 million in 2005.\textsuperscript{54} By the year 2009, experts predict that approximately 107 million Internet users will have broadband connections (approximately 90% of all Internet users).\textsuperscript{55} Furthermore, according to reports from the Federal Communications Commission (FCC), the number of Internet users watching Internet videos has been steadily increasing over the past few years.\textsuperscript{56}

The need to establish a secure technological standard is also critical to making online movie distribution a reality.\textsuperscript{57} Indeed, without a common technological standard, the industry may never emerge because of compatibility issues. The irony of developing such a standard, however, is that it would require the cooperation and agreement of all the major studios. In fact, according to some studio executives who were involved in launching Movielink, the opportunity to set the technical and security standards of the industry partially motivated them to take part in the joint venture.\textsuperscript{58}

**2. Movielink Business Model Inconsistent with Paramount?**

As mentioned previously, the Supreme Court in \textit{Paramount} forced the major movie studios to separate the exhibition aspect of the industry from its production and distribution aspects. One of the concerns alleged by the government in \textit{Paramount} was the apparent favoritism the major studios exhibited toward one another over smaller independent studios.\textsuperscript{59} Namely, the government alleged that the major studios were exhibiting only movies produced by the major studios and licensing first run movies only amongst themselves, rather than to independent theatres.\textsuperscript{60}

\begin{itemize}
\item[54.] Mary Rasenberger & M. Lorrane Ford, \textit{Untangling the Web of Rights to Film and Video: Before Putting Such Content On-Line, Clear It for Use on the Internet}, N.Y.L.J., Sept. 18, 2000, at S3.
\item[55.] \textit{Id}.
\item[56.] \textit{In re Annual Assessment of the Status of Competition in the Market for the Delivery of Video Programming}, Seventh Annual Report, CS Docket No. 00-132, at 49, ¶ 107 (Fed. Communications Comm'n 2001), available at http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-01-1A1.pdf.
\item[57.] Mark A. Lemley, \textit{Antitrust and the Internet Standardization Problem}, 28 CONN. L. REV. 1041, 1042-43 (1996).
\item[58.] Grover, \textit{supra} note 10.
\item[59.] United States v. Paramount Pictures, Inc., 334 U.S. 131, 161-62 (1948).
\item[60.] \textit{Id}.
\end{itemize}
The Movielink business model will inevitably be scrutinized by antitrust regulators as potentially inconsistent with *Paramount*. By allowing the studios to control not only the production of movies, but also all aspects of distribution and pricing, consumers are left vulnerable to monopolistic prices stemming from a lack of competition. Indeed, outside the post-*Paramount* distribution channels of the movie industry, true competition is noticeably lacking. With these antitrust concerns in mind, the studios have taken affirmative steps toward circumventing these problems. Most importantly, they have agreed to offer a nonexclusive license to any potential distributor with a service that complies with the Movielink partners' security and anti-piracy provisions.\(^{61}\) Therefore, the door is theoretically open for third-party vendors to launch their own distribution sites, so long as they obtain a license from the studios. Regardless of whether Movielink is ever found to violate antitrust law, though, it is clear that third-party distributors will always play *some* role in the online movie market, either competing against the studios, or against each other.

**B. Future Impediments for Small Online Movie Distributors**

According to the holding in *Cleveland*, absent direct evidence of a conspiracy, a plaintiff's claim under section 1 of the Sherman Act effectively requires evidence of explicit collusion amongst the studios. Tacit collusion, however, provides a legal loophole through which studios may achieve the same objective without violating any antitrust laws. Indeed, while legal scholars have identified the act of communication as the distinguishing feature between explicit and tacit collusion, economists view the two as being essentially the same.\(^{62}\) Therefore, so long as each member of the studio cartel has an "understanding" of how other members will react, no formal communication is ever needed.\(^{63}\)

Technological barriers to entry may also be significant for potential online distributors. As discussed previously, the threat of piracy gives studios added leverage to develop a highly sophisticated technological standard. But the more sophisticated a standard becomes, the more expensive it is to abide by it. Therefore, in order to limit the number of potential competitors, the studios may simply tacitly agree to an overly sophisticated standard.

---

\(^{61}\) *Id.*
\(^{62}\) Ian Ayres, *How Cartels Punish: A Structural Theory of Self-Enforcing Collusion*, 87 COLUM. L. REV. 295, 296-97 (1987).
\(^{63}\) *Id.* at 297.

Such a strategy would parallel the facts in *Cleveland* in that the independent retailers in *Cleveland* lacked the financial resources to offer the same deal as Blockbuster. In the future, small potential online distributors may lack the technological resources to offer the same deal as larger ones.

The lack of a first sale doctrine will be another significant hurdle for potential online distributors to overcome. Unlike before, where retailers would purchase tangible copies of movies, retailers would now only be purchasing a license to access the servers of each studio. Therefore, because an actual purchase is never made, the first sale doctrine does not apply. As a result, the studios are able to control all copies of their copyrighted works.
The same technology that enables small distributors to potentially offer the studios the same deal as large distributors may also, in the end, be the reason why small distributors fail. For example, if we assume that technology will evolve to the point where a significant number of individuals can comply with the standards set forth by the industry, the market may become so saturated with competitors that most will fail. This ironic fate would stem from the *perfect competition* scenario towards which antitrust law is designed to gravitate.\(^{64}\)

**C. Possible Solutions**

The *Cleveland* court's rejection of plaintiffs' vertical and horizontal theories of concerted action in violation of section 1 of the Sherman Act,\(^{65}\) as well as its rejection of plaintiffs' allegation of price discrimination under the Robinson-Patman Act,\(^{66}\) reveals a general reluctance of the judiciary to resolve the problem of tacit collusion in oligopolies.

---

\(^{64}\) PHILLIP AREEDA ET AL., ANTITRUST ANALYSIS: PROBLEMS, TEXT, AND CASES 5 (6th ed. 2004). A market economy will be perfectly competitive if the following conditions hold: 1) Sellers and buyers are so numerous that no one's actions can have a perceptible impact on the market's price, and there is no collusion among buyers and sellers. 2) Consumers register their subjective preferences among various goods and services through market transactions at fully known market prices. 3) All relevant prices are known to each producer, who also knows of all input combinations technically capable of producing any specific combination of outputs and who makes input-output decisions solely to maximize profits. 4) Every producer has equal access to all input markets and there are no artificial barriers to the production of any product. *Id.* (footnotes omitted).

\(^{65}\) 15 U.S.C. § 1 (2000).

In *Rebel Oil*, for example, the Ninth Circuit refused to rule on this issue despite explicitly recognizing that oligopolies may use this jurisprudential gap to engage in acts that would otherwise be prohibited by the Sherman Act.\(^{67}\) In particular, the court in *Rebel Oil* felt that Congress, and not the judiciary, was authorized to fill this gap.\(^{68}\) This reasoning is flawed, however, because antitrust law in the United States is primarily driven by judicial interpretation of broadly written legislative statutes. In fact, some scholars suggest that the legislative intent of drafting the Sherman Act with extremely broad and vague sentences may have been "little more than a legislative command that the judiciary develop a common law of antitrust."\(^{69}\) With this in mind, the judiciary should feel authorized to equate tacit collusion with explicit collusion without having to wait for a cue from Congress.\(^{70}\)

In addition, judicial bodies outside the United States have recognized the threat posed by shared monopolies and have taken affirmative steps toward enforcing their anticompetition laws when appropriate.\(^{71}\) For example, in a case known as *Magill*, the European Commission (EC) inferred that individuals were refusing to license their intellectual property in an attempt to prevent the creation of a comprehensive product.\(^{72}\) *Magill* involved television listings of various programs that were exclusively published by networks in the United Kingdom and Ireland.\(^{73}\) Each of these networks individually refused to license their proprietary listings to Magill, who was seeking to publish all program listings in one guide.\(^{74}\)

---

66. Id. § 13.
67. Rebel Oil Co. v. Atlantic Richfield Co., 51 F.3d 1421, 1443 (9th Cir. 1995).
68. Id.
69.
See PHILLIP AREEDA & HERBERT HOVENKAMP, ANTITRUST LAW, ch. 1, ¶ 103d (2d ed. 2002) ("[M]ore than a century of judicial interpretation has now largely preempted contrary indications in the legislative history. Congress can amend the Sherman Act any time it wishes, but in more than a century has rarely done so, and then largely to correct technical deficiencies.").
70. AREEDA ET AL., supra note 64, at 3; see also AREEDA & HOVENKAMP, supra note 69, ch. 1, ¶ 103d.
71. Brian A. Facey & Dany H. Assaf, Monopolization and Abuse of Dominance in Canada, the United States, and the European Union: A Survey, 70 ANTITRUST L.J. 513, 539-43 (2002).
72. Case 241/91 P, Radio Telefis Eireann v. Commission, 1995 E.C.R. I-743, ¶¶ 47, 58, 104, [1995] 4 C.M.L.R. 718, 730-33 (1995).
73. Id. ¶¶ 24-30, [1995] 4 C.M.L.R. at 726-28.
74. Id.

The Court of Justice held each network liable for jointly monopolizing "television guides by excluding all competition . . . [and] den[ying] access to the basic information which is the raw material indispensable for the compilation of such a guide."\(^{75}\)

Some scholars suggest that antitrust agencies such as the Department of Justice (DOJ) and the Federal Trade Commission (FTC) are actually better equipped than either Congress or the judiciary to regulate the dissemination of copyrighted work over the Internet.\(^{76}\) In particular, it is suggested that because these agencies have both the authority and the experience to initiate programs that monitor fair competition and protect fair use, their involvement is ideal.\(^{77}\) The DOJ, for example, may play an active role in scrutinizing the studios' efforts to further expand their market power, much as it did with Microsoft under the Clinton and Bush administrations.\(^{78}\) Similarly, the FTC's inquiries into Intel and CD price-fixing\(^{79}\) may foreshadow the agency's future role in monitoring licensing agreements between the studios and potential online movie distributors.

By working together, the DOJ and FTC may also take advantage of their respective institutional competences. For example, because the FTC can sue under the Federal Trade Commission Act,\(^{80}\) the FTC has much more discretion to protect consumers from the unfair and deceptive conduct of the studios. In fact, an FTC investigation may be initiated by letters from consumers or businesses and followed by either an "attempt to obtain voluntary compliance by entering into a consent order with the company . . . [or] an administrative complaint."\(^{81}\) The DOJ, on the other hand, is more familiar with traditional antitrust analysis and may more readily investigate the market structure of the movie industry.\(^{82}\) Ideally, this DOJ investigation would be coupled with a public FTC investigation focused on consumer protection to create leverage against the studios.\(^{83}\) The agencies may then use this leverage to induce the studios to offer more favorable licensing agreements to prospective online movie distributors. This leverage may also be used to give prospective online movie distributors a voice in the standard-setting process, as well as to force the studios to disclose and limit their digital rights management techniques.

---

75. Id. ¶ 56, [1995] 4 C.M.L.R. at 791.
76. See Matthew Fagin et al., Beyond Napster: Using Antitrust Law to Advance and Enhance Online Music Distribution, 8 B.U. J. SCI. & TECH. L. 451, 549 (2002).
77. Id.
78. See id.
79. See Robert Pitofsky, Antitrust and Intellectual Property: Unresolved Issues at the Heart of the New Economy, 16 BERKELEY TECH. L.J. 535 (2001).
80. 15 U.S.C. § 45 (2000).
81. FTC, How the FTC Brings an Action, at http://www.ftc.gov/ftc/action.htm (last visited Dec. 16, 2004).
82. See generally United States Department of Justice, at http://www.usdoj.gov (last visited Jan. 16, 2005).
83. See Fagin et al., supra note 76.

V. CONCLUSION

The adjudication following the *Cleveland* decision will have a significant effect on the mass commercial distribution of motion pictures. Other large video retailers will inevitably try to emulate the Blockbuster business model to take advantage of their smaller competitors' inability to offer the studios similar deals. Rumors regarding the industry, however, suggest that the number of these large video retailers may be shrinking.\(^{84}\) Higher prices resulting from this reduction in competition, coupled with advances in digital network technology, may lead to the eventual demise of the video retail industry. If so, we will likely revisit *Cleveland* within the context of a digital marketplace, where small retailers can offer the studios a deal similar to that of large retailers. Moreover, because the economic results associated with particular antitrust policies are generally unpredictable, only time will tell whether the *Cleveland* decision will truly be in the best interest of the individual consumer and, more importantly, society as a whole.

\(^{84}\) See Associated Press, *Hollywood Video Seeks Merger to Challenge Rival Blockbuster*, ABCLOCAL.COM MONEYSCOPE, Jan. 13, 2005 (discussing Hollywood Video's announcement of a proposed merger with Movie Gallery, Inc., as well as Blockbuster's continued efforts to acquire Hollywood Video), at http://abclocal.go.com/ktrk/business/011305_APbusiness_hollywood.html.
Manual of the WST03-2 Dual-Axis Solar Tracker Controller Company: Shenzhen Ming Wei Technology Co., Ltd. Tel: (+86) 18617166340 Email: email@example.com Controller Box: | ① LCD Display | ② Parameter Setting Buttons | |---------------|-----------------------------| | ③ LED Indicator Lights | ④ Power Switch | | ⑤ LCD Display Backlight Switch | ⑥ Limit Port (from left to right: COM/North/South/West/East) | | ⑦ Motor Output Port, East-West Axis | ⑧ Motor Output Port, South-North Axis | | ⑨ Power Supply Port (DC 12V/24V; left: positive pole, right: negative pole) | | | ⑩ Fuse 1 (7.5 A) - for controller power and S/N motor | ⑪ Fuse 2 (7.5 A) - for E/W motor | | ⑫ Wind Speed Sensor (Anemometer) Port (from left to right: power positive pole, power negative pole, signal) | | | ⑬ Light Sensor Port (from left to right: East, West, South, North, C) | | **Definition of the buttons:** **QUIT**: Save parameter settings, exit Manual Mode, and enter the auto standby state. **SET**: Press once briefly to enter Manual Mode; press and hold for 5 seconds to enter Parameter Setting Mode. **E/W/S/N**: In Manual Mode, the platform moves in the corresponding direction when the corresponding button is pressed. In Parameter Setting Mode, **E**: next page; **W**: previous page; **S**: subtract; **N**: add. The buttons on the IR remote control are identical in function to those on the controller box, so parameters can also be set from the remote control. Controller Features: 1. High tracking accuracy, with an average tracking error ≤ 1° (actual accuracy depends on the platform's moving speed); 2. The light sensor has a wide detection angle and high accuracy; it is housed in a waterproof, dustproof, and aging-resistant case; 3. An LCD display shows the working modes and parameters; 4. Many parameters are adjustable (tracking accuracy, light-sensor threshold, wind-speed threshold, the interval time of cyclical tracking, etc.); 5. Strong Wind Protection Mode (requires a wind speed sensor); 6. The platform can be parked at any chosen position at night (or on cloudy days); 7. Limit switches can be connected; 8. The controller can be operated by IR remote control or by the buttons on the controller box; 9. Reverse-polarity protection and overcurrent protection; standby current ≤ 15 mA. Controller Attention Matters: 1. Input power supply: DC 12V/24V (DC 9V~35V). Output voltage = input voltage. Output current: E/W ≤ 7.5 A, S/N ≤ 7.5 A, total current ≤ 15 A. 2. The controller needs a stable power supply, either a battery or a switching power supply; solar panels must not be connected directly as the power supply, since unstable voltage may damage the controller. The rated current of the power supply must be greater than 1.5 times the total load current of the platform, because the starting current of a DC motor is relatively large (a worked example follows this list). 3. The controller can directly drive brushed DC motors whose rated voltage matches the controller's supply voltage. Other motor types cannot be driven directly and require an additional control circuit. 4. If the motor current in a single direction exceeds 7.5 A, a relay module is needed to increase the load capacity of the controller. 5. The controller should be fixed in place so that it does not move with the platform. It is not waterproof, so prevent water from entering the enclosure.
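As a quick illustration of the power-supply sizing rule in item 2 above, here is a minimal sketch (Python; the motor currents are hypothetical example values, not specifications of any particular platform):

```python
def min_supply_current(motor_currents_a, margin=1.5):
    """Minimum power-supply current rating in amperes.

    Applies the manual's rule that the supply must deliver more than
    1.5 times the total load current, to cover DC-motor starting current.
    """
    return margin * sum(motor_currents_a)

# Hypothetical example: a 3 A east-west motor and a 4 A south-north motor
# call for a supply rated above 1.5 * (3 + 4) = 10.5 A.
print(min_supply_current([3.0, 4.0]))  # -> 10.5
```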
The Light Sensor and IR Remote Control: (Pictured: the light sensor with its 3-meter wire, and the IR remote control.) Wiring of the Light Sensor: | Red Wire | Green Wire | Yellow Wire | White Wire | Black Wire | |----------|------------|-------------|------------|------------| | E | W | S | N | C | Light Sensor Attention Matters: 1. The light sensor should be fixed on the platform so that it moves together with the platform, and it must not be blocked by anything. 2. The base of the light sensor must be parallel to the solar panel or lie in the same plane. 3. The "East" direction marked on the light sensor must match the local geographic east. 4. The connecting wire of the light sensor must be longer than the platform's range of movement (the factory default length is 3 meters; it can be extended to at most 30 meters). Recommended installation position of the light sensor in the Northern Hemisphere (figure). Recommended installation position of the light sensor in the Southern Hemisphere (figure). IR Remote Control Attention Matters: 1. The remote control requires two AAA batteries (not included). 2. The remote control is infrared; it must be pointed at the controller during use, and its range is ≤ 6 meters. 3. The buttons on the remote control have the same functions as the buttons on the controller box. About the Limit Switch: Use limit switches of the normally-open-contact type; they do not require a power supply. When the COM terminal of the controller is connected to the E1/W1/S1/N1 terminal, output in the corresponding direction stops. If the motors have built-in limit switches (e.g., linear actuators), external limit switches are not required, but adding them is still recommended to protect the drive motors. (Limit diagram.) About the Wind Speed Sensor: The controller works normally without a wind speed sensor, but the Strong Wind Protection function requires one, purchased separately. The sensor's output signal should be of voltage type with a 0-5 V range, and the sensor's supply voltage should match the controller's supply voltage. If the wind speed sensor has only signal and negative wires, connect these two wires to the corresponding terminals of the controller and it will work. (Different anemometer models output different values for the same wind; set V2 according to the anemometer actually used.) General Introduction of the Controller: The controller analyzes the signal from the light sensor and drives the motor of the corresponding axis forward, in reverse, or to a stop, so that the platform stays aligned with the sun. Three modes occur during the controller's automatic operation, and the controller switches among them according to conditions. A brief introduction of the three modes follows:
1. Sunny Day Auto Tracking Mode: When the voltage of any one direction (E/W/S/N) of the light sensor is greater than V3, and is still greater than V3 after the T13 countdown completes, the controller decides that it is a sunny day. It then compares the voltages of the East, West, South, and North directions of the light sensor and drives the corresponding outputs so that the platform is aligned with the sun. After alignment, the controller runs the TX and TY countdowns, and after they complete it continues tracking the sun in the next cycle. 2. Night (Cloudy Day) Mode: When the voltages of all directions (E/W/S/N) of the light sensor are less than V3, and are still less than V3 after the T8 countdown completes, the controller decides that it is night (or a cloudy day) and returns the platform to the night standby position: first it outputs toward the East and North (T9 is the time toward the East, T11 the time toward the North); on completion it outputs toward the West and South (T10 is the time toward the West, T12 the time toward the South). After completing these actions, the controller enters the standby state. 3. Strong Wind Protection Mode (requires a wind speed sensor): When the value of F (the signal voltage of the wind speed sensor) is greater than V2 for longer than 5 seconds, the controller decides that the current wind threatens the platform and immediately executes Strong Wind Protection Mode: first it outputs toward the East and North (T3 is the time toward the East, T5 the time toward the North); on completion it outputs toward the West and South (T4 is the time toward the West, T6 the time toward the South). After completing these actions, the controller waits out the time T7; T7 is a sleeping time during which the platform performs no action. If F remains greater than V2, the T7 value is held unchanged. Once F falls below V2 and the T7 countdown completes, the controller exits wind protection and enters the auto standby state. If Strong Wind Protection Mode occurs on a sunny day, T7 equals the set value; if it occurs at night (cloudy day), T7 is held without further countdown and the controller sleeps in place, waiting for Sunny Day Mode. (A simplified sketch of this mode logic is given below.)
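The following sketch summarizes, in simplified form (written as Python), how the three automatic modes and the V1 tracking deadband interact. It is an illustrative reconstruction of the behavior described above, not the controller's firmware; in particular, the `sustained` debounce helper and the sensor-reading interface are assumptions.

```python
# Factory-default thresholds from the Parameter Definition section (V, s)
V1, V2, V3 = 0.04, 2.00, 1.80   # deadband, wind threshold, light threshold
T8, T13 = 1800, 10              # night-mode delay, sunny-day delay

def select_mode(light, wind_f, sustained):
    """Choose the working mode from sensor voltages.

    light:     dict of photo-sensor voltages keyed 'E', 'W', 'S', 'N'
    wind_f:    anemometer signal voltage F
    sustained: assumed helper; returns True once its condition has
               held continuously for the given number of seconds
    """
    if sustained('wind', wind_f > V2, 5):
        return 'STRONG_WIND_PROTECTION'   # park via T3..T6, then sleep T7
    if sustained('dark', all(v < V3 for v in light.values()), T8):
        return 'NIGHT_OR_CLOUDY'          # park via T9..T12, then standby
    if sustained('sun', any(v > V3 for v in light.values()), T13):
        return 'SUNNY_TRACKING'
    return 'STANDBY'

def track_step(light, drive):
    """One sunny-day alignment step: move an axis only when the
    opposing sensor voltages differ by more than the deadband V1."""
    if abs(light['E'] - light['W']) > V1:
        drive('E' if light['E'] > light['W'] else 'W')
    if abs(light['S'] - light['N']) > V1:
        drive('S' if light['S'] > light['N'] else 'N')
```

After both axes are aligned, the controller waits out the TX and TY countdowns before the next tracking cycle; that bookkeeping is left to the caller in this sketch.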
The Setting Method of the Standby Position in Night (Cloudy Day) Mode: In Manual Mode, use a stopwatch to measure the time values T9, T10, T11, and T12, then set them on the Parameter Setting Page: T9 must be greater than or equal to the total time the platform takes to move from the westernmost to the easternmost position. T10 equals the time the platform takes to move from the easternmost position westward to the custom position. T11 must be greater than or equal to the total time the platform takes to move from the southernmost to the northernmost position. T12 equals the time the platform takes to move from the northernmost position southward to the custom position. The Setting Method of the Standby Position in Strong Wind Protection Mode: In Manual Mode, use a stopwatch to measure the time values T3, T4, T5, and T6, then set them on the Parameter Setting Page: T3 must be greater than or equal to the total time the platform takes to move from the westernmost to the easternmost position. T4 equals the time the platform takes to move from the easternmost position westward to the custom position. T5 must be greater than or equal to the total time the platform takes to move from the southernmost to the northernmost position. T6 equals the time the platform takes to move from the northernmost position southward to the custom position. Reasonable standby positions for Night Mode and Strong Wind Protection Mode reduce damage to the solar panel in bad weather. The night standby position can be chosen arbitrarily; when the sun appears the next day, the controller resumes tracking. Usage Steps: 1. Install and fix the light sensor and the controller as required, then connect the light sensor to the controller correctly. 2. Connect the east-west axis motor and the south-north axis motor of the platform to the corresponding terminals on the controller. 3. Connect the limit switches (optional). 4. Connect the wind speed sensor (optional). 5. After confirming that the above wiring is correct, connect the controller to the power supply. 6. Press the SET button once so the controller enters Manual Mode, then press the E/W/S/N buttons in turn; the platform should move in the corresponding directions. If a moving direction is wrong, swap the motor wiring for that direction. 7. The range through which the platform can move should be greater than or equal to the travel range of the motors. If limit switches are used, move the platform to each limit position in Manual Mode and check that the limit function works. 8. After confirming that wiring and installation are correct, press the QUIT button to exit Manual Mode; the controller enters Auto Standby Mode. LCD Screen Page Introduction: When powered on, the controller displays the following. Press E/W to switch pages. ``` SL E:1.73 W:1.71 ``` This page shows the current voltage values for the East and West directions of the light sensor. ``` SL S:1.27 N:1.22 ``` This page shows the current voltage values for the South and North directions of the light sensor. ``` SL F:0.30 V:13.0 ``` (F:0.30) means the current wind-speed signal voltage is 0.3 V; (V:13.0) means the current controller supply voltage is 13 V. Press the SET button once and the controller enters Manual Mode: ``` MT E=1.74 W=1.08 S:1.26 N:1.20 ``` This page shows the real-time voltages of the light sensor in all four directions. The E/W/S/N buttons now move the platform in the corresponding directions, and the corresponding LED indicator lights up. When manual testing is complete and everything is normal, press the QUIT button to exit Manual Mode. Press and hold the SET button for 5 seconds and the controller enters the Parameter Setting Page: **SET TX:060s** **E/W Wait Time** TX: In Auto Tracking Mode, the interval before the next tracking cycle after the east-west direction is aligned; press the S/N buttons to add/subtract the parameter. Press the E button for the next page: **SET TY:050s** **N/S Wait Time** TY: In Auto Tracking Mode, the interval before the next tracking cycle after the south-north direction is aligned; press the S/N buttons to add/subtract the parameter. (Reasonable TX/TY values prevent the controller from wasting power on overly frequent tracking.) Press the E button for the next page: | SET T3:005s | Wind To The East | |-------------|-----------------| | SET T4:005s | Wind To The West| | SET T5:005s | Wind To North | | SET T6:005s | Wind To South | | SET T7:600s | Wind Lock Time | T3/T4/T5/T6/T7: the parameters of Strong Wind Protection Mode.
Press the E/W buttons to switch pages and the S/N buttons to add/subtract a parameter. For the definitions of these parameters, refer to the introduction of Strong Wind Protection Mode. Press the E button for the next page: **SET T8:1800s** **Sun Low Delay** T8: the waiting time before Night (Cloudy Day) Mode is executed when the sunlight is weak; press the S/N buttons to add/subtract the parameter. When the voltages of all directions (E/W/S/N) of the light sensor are less than V3, the controller decides that the current sunlight is too weak to be worth tracking and waits in place for the time T8. Press the E button for the next page: | SET T9:005s | Sun Low to East | |-------------|-----------------| | SET T10:005s| Sun Low to West| | SET T11:005s| Sun Low to North| | SET T12:005s| Sun Low to South| T9/T10/T11/T12: the parameters of Night (Cloudy Day) Mode. Press the E/W buttons to switch pages and the S/N buttons to add/subtract a parameter. For the definitions of these parameters, refer to the introduction of Night (Cloudy Day) Mode. T13: the waiting time before Sunny Day Auto Tracking Mode is executed when the sunlight is strong; press the S/N buttons to add/subtract the parameter. When the voltage in any one direction of the light sensor is greater than V3, and is still greater than V3 after the T13 countdown completes, the controller decides that it is a sunny day and executes Sunny Day Auto Tracking Mode. (T13 prevents false triggering by car headlights at night, lightning, and other interference.) V1: the tracking-accuracy value; press the S/N buttons to add/subtract the parameter. The smaller the value, the higher the accuracy. V2: the threshold of the wind speed sensor; press the S/N buttons to add/subtract the parameter. Definition of V2: when the signal voltage F of the wind speed sensor is greater than V2 for longer than 5 seconds, the controller decides that the current wind threatens the platform and immediately executes Strong Wind Protection Mode. V3: the threshold of the light sensor; press the S/N buttons to add/subtract the parameter. Definition of V3: when the voltage in any direction of the light sensor is greater than V3, and is still greater than V3 after the T13 countdown completes, the controller decides that it is a sunny day. When the voltages in all directions of the light sensor are less than V3, and are still less than V3 after the T8 countdown completes, the controller decides that it is night (or a cloudy day). When all parameter settings are complete, press the QUIT button to save and exit; the controller enters the auto standby state. Parameter Definition: **SH**: Auto Tracking Mode on sunny days. **SL**: the sunlight is weak. **MT**: Manual Mode. **E**: East. **W**: West. **S**: South. **N**: North. **FS**: Strong Wind Protection Mode. **F**: the real-time signal voltage of the wind speed sensor. **Lock**: after Strong Wind Protection Mode completes, if the wind-speed signal F continuously exceeds the threshold V2, the current state stays locked. **V**: the input voltage of the controller. **TX**: in Auto Tracking Mode, the interval before the next tracking cycle after the east-west direction is aligned; the east-west axis motor does not move during this time. (Range is 000-999, factory default is 060) **TY**: in Auto Tracking Mode, the interval before the next tracking cycle after the south-north direction is aligned; the south-north axis motor does not move during this time.
(Range is 000-999, factory default is 050) **T3**: in Strong Wind Protection Mode, the total time for the platform to move from the westernmost to the easternmost position. (Range is 000-999, factory default is 005) **T4**: in Strong Wind Protection Mode, after the T3 time completes, the time for the platform to move from the easternmost position westward to the custom position. (Range is 000-999, factory default is 005) **T5**: in Strong Wind Protection Mode, the total time for the platform to move from the southernmost to the northernmost position. (Range is 000-999, factory default is 005) **T6**: in Strong Wind Protection Mode, after the T5 time completes, the time for the platform to move from the northernmost position southward to the custom position. (Range is 000-999, factory default is 005) **T7**: in Strong Wind Protection Mode, the time the controller sleeps in place after the T3/T4/T5/T6 movements complete. (Range is 000-999, factory default is 600) **T8**: the waiting time before Night (Cloudy Day) Mode is executed when the sunlight is weak. (Range is 0000-9990, factory default is 1800) **T9**: in Night (Cloudy Day) Mode, the total time for the platform to move from the westernmost to the easternmost position. (Range is 000-999, factory default is 005) **T10**: in Night (Cloudy Day) Mode, after the T9 time completes, the time for the platform to move from the easternmost position westward to the custom position. (Range is 000-999, factory default is 005) **T11**: in Night (Cloudy Day) Mode, the total time for the platform to move from the southernmost to the northernmost position. (Range is 000-999, factory default is 005) **T12**: in Night (Cloudy Day) Mode, after the T11 time completes, the time for the platform to move from the northernmost position southward to the custom position. (Range is 000-999, factory default is 005) **T13**: the waiting time before Sunny Day Auto Tracking Mode is executed when the sunlight is strong. (Range is 000-999, factory default is 010) **V1**: the tracking-accuracy value; the smaller the value, the higher the accuracy. (Range is 0.01-0.10, factory default is 0.04) **V2**: the threshold of the wind speed sensor. (Range is 0.00-5.00, factory default is 2.00) **V3**: the threshold of the light sensor. (Range is 0.05-3.00, factory default is 1.80) Note: 1. All time units above are seconds, and all voltage units are volts. 2. The values of T3/T4/T5/T6/T9/T10/T11/T12 can be modified according to the actual positions required. 3. The values of TX/TY/T7/T8/T13/V1/V2/V3 are factory set; before modifying them, make sure you understand the definition of the parameter. (These defaults and ranges are restated in machine-readable form after the packing list.) Packing List: 1× Controller Box. 1× Light Sensor. 1× IR Remote Control (batteries not included). 1× English Manual.
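For convenience, the documented defaults and ranges can be collected into a small settings table; the sketch below (Python, with names chosen for illustration) simply transcribes the Parameter Definition section above and can be used to sanity-check a value before entering it on the controller.

```python
# (name: (default, min, max)) -- transcribed from the Parameter Definition
# section above; T* values in seconds, V* values in volts.
PARAMS = {
    'TX': (60, 0, 999),  'TY': (50, 0, 999),
    'T3': (5, 0, 999),   'T4': (5, 0, 999),
    'T5': (5, 0, 999),   'T6': (5, 0, 999),
    'T7': (600, 0, 999), 'T8': (1800, 0, 9990),
    'T9': (5, 0, 999),   'T10': (5, 0, 999),
    'T11': (5, 0, 999),  'T12': (5, 0, 999),
    'T13': (10, 0, 999),
    'V1': (0.04, 0.01, 0.10),
    'V2': (2.00, 0.00, 5.00),
    'V3': (1.80, 0.05, 3.00),
}

def in_range(name, value):
    """True if a proposed setting lies within its documented range."""
    _default, lo, hi = PARAMS[name]
    return lo <= value <= hi

assert in_range('V2', 2.5) and not in_range('V1', 0.5)
```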
Understanding by Design Grant Wiggins 2005 Presents a multifaceted model of understanding, which is based on the premise that people can demonstrate understanding in a variety of ways. A Writer's Guide to Mindful Reading Ellen C. Carillo 2017 Offering a comprehensive approach to literacy instruction by focusing on reading and writing, *A Writer's Guide to Mindful Reading* supports students as they become more reflective, deliberate, and mindful readers and writers by working within a metacognitive framework. *The Reading Strategies Book* Jennifer Serravallo 2021-09-28 With hit books that support strategic reading through conferring, small groups, and assessment, Jen Serravallo gets emails almost daily asking, "Isn't there a book of the strategies themselves?" Now there is. "Strategies make the often invisible work of reading actionable and visible," Jen writes. In *The Reading Strategies Book*, she collects 300 strategies to share with readers in support of thirteen goals—everything from fluency to literary analysis. Each strategy is cross-linked to skills, genres, and Fountas & Pinnell reading levels to give you just-right teaching, just in time. With Jen's help you'll: develop goals for every reader; give students step-by-step strategies for skilled reading; guide readers with prompts aligned to the strategies; adjust instruction to meet individual needs with Jen's Teaching Tips; craft demonstrations and explanations with her Lesson Language; and learn more with Hat Tips to the work of influential teacher-authors. Whether you use readers workshop, Daily 5/CAFE, guided reading, balanced reading, a core reading program, whole-class novels, or any other approach, *The Reading Strategies Book* will complement and extend your teaching. Rely on it to plan and implement goal-directed, differentiated instruction for individuals, small groups, and whole classes. "We offer strategies to readers to put the work in doable terms for those who are still practicing," writes Jen Serravallo. "The goal is not that they can do the steps of the strategy but that they become more comfortable and competent with a new skill." With *The Reading Strategies Book*, you'll have ways to help your readers make progress every day. Driven by Data Paul Bambrick-Santoyo 2010-04-12 Offers a practical guide for improving schools dramatically that will enable all students from all backgrounds to achieve at high levels. Includes assessment forms, an index, and a DVD. A Field Guide for Science Writers Deborah Blum 2006 This guide offers practical tips on science writing - from investigative reporting to pitching ideas to magazine editors.
Some of the best known science writers in the US share their hard earned knowledge on how they do their job. The Eight Answers for Happiness Hooseo B. Park 2014-12-20 Although everybody in the world is singing about love, dream, money, and happiness, ironically, not many people know what these things really are. I became an artist for my dream, which was to do something for the environment, and I came to New York City for the dream. But ironically, I realized that I didn't even know what the environment really was before I came to New York City. I have done a nonprofit sign campaign for the environment in Times Square in the fall of 2011 and spring of 2014. I saw and met many people who were from all over the world. The people and New York City have given me some answers in life. To do something for the environment, I have also picked up a lot of cigarette butts that no one really cared about on the streets of New York City. Then I put the thrown cigarette butts into many different glass jars. I have slowly realized that the most important thing for the environment was happiness--because happiness is the primary reason for all lives. There are many different reasons for each of our lives, but all the reasons lean to one point. That is happiness. And unhappiness is the third pollution source that makes people careless with everything. No one can well take care of other things if the person's level of happiness is too low. We have to be happy first no matter what. To be happy, we need to know what happiness really is, as well as the other very important things in life. These eight things are the answers that I've learned from my life and from New York City. *Reading Critically, Writing Well with 2020 APA and 2021 MLA Updates* Rise B. Axelrod 2021-09-23 This ebook has been updated to provide you with the latest guidance on documenting sources in MLA style and follows the guidelines set forth in the MLA Handbook, 9th edition (April 2021). *Reading Critically, Writing Well* is a diverse collection of readings from established, emerging, and student writers, combined with expert support for writing across genres. The readings aim to inspire engaged reading, spark curious conversations, and provoke thoughtful writing. *Reading Critically, Writing Well* provides both the readings and the support you need to make effective rhetorical choices in your own writing. *Research in Basic Writing* Michael G. Moran 1990 This reference handbook surveys research on the central issue associated with the teaching of unprepared writers. Though basic writing has only been recognized as a distinct area of teaching and research since 1975, the existing bibliographic texts already seem limited due to their age or lack of annotation. This volume provides current and extensive bibliographic essays and will help to define this new field of study for teachers and researchers. Following an introduction that summarizes the origins and significant texts in basic writing, the book is divided into three sections, Social Science Perspectives, Linguistic Perspectives, and Pedagogical Perspectives. The first section, which contains three essays, views the field through the lens of social, psychological, and political issues. The second section, also containing three essays, examines contributions made from studies of grammar, dialects, and second-language acquisition. The third section, in its four essays, focuses on... 
the design, development, administration, and evaluation of basic writing courses, the use of computers in basic writing classrooms, the role of the writing lab, and the preparation of basic writing teachers. An appendix that reviews current textbooks for basic writing courses is also included, as well as an index. This book will be a valuable resource for teachers of basic writing, in education courses and workshops that train teachers and tutors, and in fields such as linguistics, technical writing, and Teaching English as a Second Language. It will also be an important addition to public and university libraries and many education programs. **The Norton Field Guide to Writing, with Handbook** Richard Bullock 2013-02-01 Flexible, easy to use, just enough detail, and now the number-one best seller. With just enough detail (and color-coded links that send students to more detail if they need it) this is the rhetoric that tells students what they need to know and resists the temptation to tell them everything there is to know. Designed for easy reference, with menus, directories, and a combined glossary/index. The Third Edition has new chapters on academic writing, choosing genres, writing online, and choosing media, as well as new attention to multimodal writing. The Norton Field Guide to Writing is available with a handbook, an anthology, or both, and all versions are now available as low-cost ebooks. **On Writing Well** William Knowlton Zinsser 1994 Warns against common errors in structure, style, and diction, and explains the fundamentals of conducting interviews and writing travel, scientific, sports, critical, and humorous articles. **Military Review** 2015 **The Case Writing Workbook** Gina Vega 2017-04-27 This book offers a modular set of chapters that focus specifically on the challenges related to case writing. Exercises, worksheets, and training activities help guide readers sequentially through the entire process of writing both a case and an instructor's manual (teaching note). Designed as an individualized workshop to assist case authors to structure their writing, this book combines the easy-to-understand, student-focused language of the first edition with new material covering the latest developments and challenges in the world of case writing. These include: ● A section on writing cases in condensed time frames ● A new module on writing short cases in various formats ● A new module on turning research papers into teaching tools ● A section about growing communities of practice in a university ● An expansion of the student case writing module to include a section on case writing for graduate students ● Twelve new worksheets ● A complete index to facilitate use of the book Finishing all the book's assignments will result in a complete case and instructor's manual that can be tested in the classroom and submitted to a conference or journal. The Case Writing Workbook is a must for the shelf of any academic or student conducting qualitative research and looking to enhance their skill set. On Writing Well William Knowlton Zinsser 1976 The Joy of Teaching Harry Hazel 2010-01-01 Over the centuries, multitudes of women and men have gone into teaching as their chosen profession. Most successful instructors find joy in teaching and are glad to share that joy with others. Harry Hazel is one teacher who has found his forty years in the classroom highly satisfying.
In this book, he not only includes insights from other Canadian and American teachers he once interviewed, but primarily reflects on a long and happy career. While the material in this book is slanted toward college teaching, many of the techniques could also be applied to other levels of instruction, such as elementary, secondary, or adult education. Key principles include motivating yourself, motivating students, polishing your speaking skills, taking the pain out of writing, and making the joy last. **A Teacher's Guide to Writing Conferences (Classroom Essentials)** Carl Anderson 2018 "A getting-started primer for teachers conferring with writers in the K-8 classroom" -- Instructor's Manual to Accompany *Steps to Writing Well* Jean Wyrick 1993 **Including Students With Special Needs** Marilyn Friend 1999-07-01 **Steps to Writing Well** Jean Wyrick 2001-10-01 The informal, student-friendly tone of these rhetorically organized rhetoric/reader/handbooks provides step-by-step instructions on writing a variety of 500-800-word essays. **Technical Communication for Engineers** Shalini Verma Technical Communication for Engineers has been written for undergraduate students of all engineering disciplines. It provides well-researched content meticulously developed to help them become strategic assets to their organizations and have successful careers. The book covers the entire spectrum of learning required by a technical professional to effectively communicate the technicalities of the subject to other technocrats or to a non-technical person at the proper level. It is unique inasmuch as it provides some thoughtful pedagogical tools that help students attain proficiency in all the modes of communication. Key Features: - Marginalia, spread throughout the book to clarify and highlight the key points. - Tech Talk passages, which throw light on the latest advancements in communication technology and their innovative use. - Application-based Exercises, which encourage readers to apply the concepts learnt to real-life situations. - Language-based Exercises (Grammar & Vocabulary) to help readers assess their language competency. - Ethical Dilemmas, which pose complex hypothetical situations of mental conflict over choosing between difficult moral imperatives. - Experiential Learning-based Exercises (Project Work) devised to help learners 'feel' or 'experience' the concepts and theories learnt and thereby gain hands-on experience. 8th Standard English Questions and Answers - Tamil Nadu State Board Syllabus Mukil E Publishing And Solutions Pvt Ltd 2021-03-11 8th Standard English - Tamil Nadu State Board - solutions and guide. For the first time in Tamil Nadu, technical books are available as ebooks. Students and teachers, make use of them. Study Guide for CTET Paper 2 (Class 6 - 8 Teachers) Social Studies/ Social Science with Past Questions 4th Edition Disha Experts 2019-10-10 The new edition of the book Study Guide for CTET Paper 2 - English 4th edition (Class 6 - 8 Social Studies/Social Science teachers) has been updated with the CTET solved papers of July 2013 to Sep 2018. • The languages covered in the book are English (1st language) and Hindi (2nd language). • The book provides separate sections for Child Development & Pedagogy, English Language, Hindi Language and Social Studies/Social Science. • Each section has been divided into chapters. For each chapter an exhaustive theory has been provided which covers the complete syllabus as prescribed by the CBSE/NCERT/NCF 2005.
• This is followed by 2 sets of exercises. • Exercise 1 contains a set of MCQs from the previous-year question papers of the CTET and various STETs. • Exercise 2, "TEST YOURSELF", provides carefully selected MCQs for practice. • The book is a must for all candidates appearing in the Paper 2, Social Studies stream of the CTET and State TETs such as UPTET, Rajasthan TET, Haryana TET, Bihar TET, Uttarakhand TET, Punjab TET, Tamil Nadu TET, etc. The Complete Idiot's Guide to Writing Well Laurie Rozakis 2000-01-09 You're no idiot, of course. You know how to tap out an email to your boss, scrawl a note to your sweetheart, even throw in an extra flourish when you sign a greeting card. But when it comes to really writing (that excruciating process of transferring your thoughts to paper without inventing some strange new language), well, let's just say you think you lack the write stuff. The written word was a great achievement in human history; don't give up on it just yet! 'The Complete Idiot's Guide to Writing Well' is the writing book you've been waiting for: everything you need to know to make writing of any kind as easy as thinking or speaking. In this 'Complete Idiot's' Guide, you'll get: - Expert advice on making your writing as clear, persuasive, and painless as possible, whether it's a thank-you note, a school paper, or an executive briefing. - Easy-to-follow guidelines on structure, spelling, punctuation, vocabulary, and style. - No-nonsense advice on figuring out the three hardest parts of any writing: the beginning, middle, and end. The World Book Encyclopedia 2002 An encyclopedia designed especially to meet the needs of elementary, junior high, and senior high school students. Helping Your Students with Homework Nancy Paulu 1998 Modern Radiant Readers: Teacher's Manual 6-8 Steps to Writing Well with Additional Readings Jean Wyrick 2016-01-01 With the most coverage of the writing process and the most professional readings, STEPS TO WRITING WELL WITH ADDITIONAL READINGS has helped thousands of students learn to write effective academic essays. Jean Wyrick's text is known for its student-friendly, approachable tone and the way it presents rhetorical strategies for composing essays in an easy-to-follow progression of useful lessons and activities. With thoughtful instruction, almost 70 student and professional readings, and a wealth of short and long assignments, the text gives students the models and practice they need to write well-constructed essays with confidence. This 10th edition features useful new visual learning aids; many new student samples, professional readings, and advertisements; new essay assignments that promote using sources and multiple rhetorical strategies; a new organization for expository writing assignments and research; and updated discussions of drafting and reading multimodal texts. Each student text is packaged with a free Cengage Essential Reference Card to the MLA HANDBOOK, Eighth Edition. Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version. Steps to Writing Well Jean Wyrick 2016-01-01 With the most coverage of the writing process of any rhetorical writing guide, STEPS TO WRITING WELL has helped thousands of students learn to write effective academic essays. Jean Wyrick's text is known for its student-friendly, approachable tone and the way it presents rhetorical strategies for composing essays in an easy-to-follow progression of useful lessons and activities.
With thoughtful instruction, almost 40 student and professional readings, and a wealth of short and long assignments, the text gives students the models and practice they need to write well-constructed essays with confidence. This 13th edition features useful new visual learning aids; many new student samples, professional readings, and advertisements; new essay assignments that promote using sources and multiple rhetorical strategies; a new organization for expository writing assignments and research; and updated discussions of drafting and reading multimodal texts. Each student text is packaged with a free Cengage Essential Reference Card to the MLA HANDBOOK, Eighth Edition. Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version. **Writing for College: the Eight Step Program to Writing Academic Argument Papers Using the Template Method** Rebecca Smith **The Writing Workshop Teacher's Guide to Multimodal Composition (6-12)** Angela Stockman 2022-05-31 Multimodal composition is a meaningful and critical way for students to tell their stories, make good arguments, and share their expertise in today's world. In this helpful resource, writer, teacher, and best-selling author Angela Stockman illustrates the importance of making writing a multimodal endeavor in 6-12 workshops by providing peeks into the classrooms she teaches within. Chapters address what multimodal composition is, how to situate it in a writing workshop that is responsive to the unique needs of writers, how to handle curriculum design and assessment, and how to plan instruction. The appendices offer tangible tools and resources that will help you implement and sustain this work in your own classroom. Ideal for teachers of grades 6-12, literacy coaches, and curriculum leaders, this book will help you and your students reimagine what a workshop can be when the writers within it produce far more than written words. **The Well-Trained Mind: A Guide to Classical Education at Home (Fourth Edition)** Susan Wise Bauer 2016-08-09 Is your child getting lost in the system, becoming bored, losing his or her natural eagerness to learn? If so, it may be time to take charge of your child's education—by doing it yourself. The Well-Trained Mind will instruct you, step by step, on how to give your child an academically rigorous, comprehensive education from preschool through high school—one that will train him or her to read, to think, to understand, to be well-rounded and curious about learning. Veteran home educators Susan Wise Bauer and Jessie Wise outline the classical pattern of education called the trivium, which organizes learning around the maturing capacity of the child's mind and comprises three stages: the elementary school "grammar stage," when the building blocks of information are absorbed through memorization and rules; the middle school "logic stage," in which the student begins to think more analytically; and the high-school "rhetoric stage," where the student learns to write and speak with force and originality. Using this theory as your model, you'll be able to instruct your child—whether full-time or as a supplement to classroom education—in all levels of reading, writing, history, geography, mathematics, science, foreign languages, rhetoric, logic, art, and music, regardless of your own aptitude in those subjects.
Thousands of parents and teachers have already used the detailed book lists and methods described in The Well-Trained Mind to create a truly superior education for the children in their care. This extensively revised fourth edition contains completely updated curricula and book lists, links to an entirely new set of online resources, new material on teaching children with learning challenges, cutting-edge math and sciences recommendations, answers to common questions about home education, and advice on practical matters such as standardized testing, working with your local school board, designing a high-school program, preparing transcripts, and applying to colleges. You do have control over what and how your child learns. The Well-Trained Mind will give you the tools you'll need to teach your child with confidence and success. *Writing without Teachers* Peter Elbow 1998-06-25 In *Writing Without Teachers*, well-known advocate of innovative teaching methods Peter Elbow outlines a practical program for learning how to write. His approach is especially helpful to people who get "stuck" or blocked in their writing, and is equally useful for writing fiction, poetry, and essays, as well as reports, lectures, and memos. The core of Elbow's thinking is a challenge against traditional writing methods. Instead of editing and outlining material in the initial steps of the writing process, Elbow celebrates non-stop, free, uncensored writing, without editorial checkpoints first, followed much later by the editorial process. This approach turns the focus toward encouraging ways of developing confidence and inspiration through free writing, multiple drafts, diaries, and notes. Elbow guides the reader through his metaphor of writing as "cooking": his term for heating up the creative process, where the subconscious bubbles up to the surface and the writing gets good. 1998 marks the twenty-fifth anniversary of *Writing Without Teachers*. In this edition, Elbow reexamines his program and the subsequent influence his techniques have had on writers, students, and teachers. This invaluable guide will benefit anyone, whether in the classroom, boardroom, or living room, who has ever had trouble writing. **Writing Well in School and Beyond** Michael Berger 2013-06-22 The author passionately believes that everyone can improve as a writer, a conviction earned through twenty years of empowering young writers. This handy guidebook explains in an accessible way the keys to writing well, providing valuable insight into the fundamentals of the writing process and seventy incisive pieces of advice. It can help teachers inspire students' engagement and guide their development. A student reading this concise book will be encouraged by realizing that writing well is not a magical gift some people have and others don't, but a skill she or he can develop. It clarifies that writing is thinking, and that vigorous revision, as the heart of the writing process, empowers students to express themselves clearly and precisely. This makes writing an engaging activity that reflects students' most vibrant intellectual experience, valuable not just for assessing learning but as an essential means of fostering deep learning and critical thinking. As a resource for faculty development, it provides many insights and activities to help instructors across the curriculum support student writing. It makes a powerful case for establishing a writing center in schools that do not yet have one.
This concise guidebook is useful for secondary and post-secondary students, for educators in training, for advocates of and consultants in writing across the curriculum, for high school teachers and college faculty in professional development, and for all people who are eager to improve their writing skills and take pleasure in an activity essential to modern life that has not yet proved satisfying to them. For instructor exam copies and bulk discounts (30% on purchases of eight or more), email request to: email@example.com The Teachers' assistant and pupil teachers' guide 1876 Writing Your Journal Article in Twelve Weeks Wendy Laura Belcher 2009-01-20 This book provides you with all the tools you need to write an excellent academic article and get it published. Classroom Community Builders Walton Burns 2017-07-18 Students thrive in classrooms where they feel safe, welcome, and supported. Building a sense of community and teamwork is an effective means of facilitating student success. Burns skillfully blends community-building activities with real classroom content, providing students with opportunities to practice language skills while acclimatizing to the classroom. While intended primarily for language arts and English as a second language classrooms, Burns's activities readily adapt to a range of disciplines and age groups. Beginning with a section on setting classroom and instructor expectations, Burns moves on to team-building exercises focused on lesson content. His section on getting-to-know-you activities is designed to foster a sense of belonging, while the five get-to-know-your-teacher exercises introduce you to your students in a fun, relaxed manner. Supported by information on material requirements, time limits, and resources, Classroom Community Builders provides handouts and worksheets, available both within the book and online, offering new ideas to experienced and novice instructors alike. Steps to Writing Well, 2016 MLA Update Jean Wyrick 2017-01-27 With the most coverage of the writing process of any rhetorical writing guide, STEPS TO WRITING WELL has helped thousands of students learn to write effective academic essays. Jean Wyrick's text is known for its student-friendly, approachable tone and the way it presents rhetorical strategies for composing essays in an easy-to-follow progression of useful lessons and activities. With thoughtful instruction, almost 40 student and professional readings, and a wealth of short and long assignments, the text gives students the models and practice they need to write well-constructed essays with confidence. This 13th edition features useful new visual learning aids; many new student samples, professional readings, and advertisements; new essay assignments that promote using sources and multiple rhetorical strategies; a new organization for expository writing assignments and research; and updated discussions of drafting and reading multimodal texts. This edition has been updated to reflect guidelines from the 2016 MLA HANDBOOK, Eighth Edition. Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version. WPA, Writing Program Administration 1994 How to Write a Book in 24 Hours James Green 2015-03-09 Best-selling author James Green shares his own ground-breaking 6-step formula for producing top quality, highly successful nonfiction books in just 24 hours. 
24 Hour Bestseller: How to Write a Book in 24 Hours will provide you with a 6-step writing blueprint that you can set on full 'rinse and repeat' mode, giving you a step-by-step recipe for writing success. After becoming disillusioned with his own writing struggles, the author decided to completely re-engineer the entire process, providing a plan for: generating and validating new book ideas; creating comprehensive book outlines; writing in a quick, easy and enjoyable way; publishing the completed books effortlessly. Inside 24 Hour Bestseller, you will learn: How to stir your creative juices to constantly think up new book ideas; How to validate and evaluate your ideas for maximum profit; How to create a solid book outline that will make the writing process a breeze; How to turn your writing into a fun game; How to stay motivated; When to outsource (and when not to); How to craft your book title and description for maximum impact; How to publish your book to KDP easily; Book pricing strategies; And much more... If you've become overwhelmed and disillusioned with the whole writing process, this book will be your guide and your tonic, re-energizing your authoring efforts. You'll be more productive than ever, and most importantly, you will find writing enjoyable once again! Whether you're a complete novice who has never written a book before, are struggling to come up with new book ideas, or are a seasoned author who simply needs some tips on how to write more effectively, this book is for you. 24 Hour Bestseller will guide you step-by-step through the entire formula and get you authoring for success once more! Ground Instructor Instrument Written Test Guide United States. Federal Aviation Administration 1968 On Writing Well, 30th Anniversary Edition William Zinsser 2012-09-11 On Writing Well has been praised for its sound advice, its clarity, and the warmth of its style. It is a book for everybody who wants to learn how to write or who needs to do some writing to get through the day, as almost everybody does in the age of e-mail and the Internet. Whether you want to write about people or places, science and technology, business, sports, the arts, or about yourself in the increasingly popular memoir genre, *On Writing Well* offers you fundamental principles as well as the insights of a distinguished writer and teacher. With more than a million copies sold, this volume has stood the test of time and remains a valuable resource for writers and would-be writers.
Effect of the final-state interaction of $\eta'N$ on the $\eta'$ photoproduction off the nucleon Shuntaro Sakai$^{1,a}$, Atsushi Hosaka$^{1,2}$, and Hideko Nagahiro$^{1,3}$ $^1$Research Center for Nuclear Physics (RCNP), Osaka University, Ibaraki, Osaka, 567-0047, Japan. $^2$Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki, 319-1195, Japan. $^3$Department of Physics, Nara Women's University, Nara 630-8506, Japan. (Dated: July 18, 2018) Abstract We investigate the $\eta'$ photoproduction off the nucleon with a particular interest in the effect of the final-state interaction (FSI) of the $\eta'$ meson and nucleon ($\eta'N$), based on the three-flavor linear $\sigma$ model. We find an enhancement of the cross section of the $\eta'$ photoproduction near the $\eta'N$-threshold energy owing to the $\eta'N$ FSI. For the $\eta'$ meson emitted at forward angles, the energy dependence near the $\eta'N$ threshold is well reproduced when the $\eta'N$ FSI is included. The cross section at backward angles can also serve as a good probe of the strength of the $\eta'N$ interaction. $^a$ email@example.com Present address: Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC, Institutos de Investigación de Paterna, Aptdo. 22085, 46071 Valencia, Spain I. INTRODUCTION Hadrons are elementary excitations of the vacuum of quantum chromodynamics (QCD), and their properties reflect the vacuum structure of low-energy QCD. Chiral symmetry is a basic feature of QCD, and it is spontaneously broken at low energies. In the nuclear medium, the spontaneously broken symmetry is expected to be restored, which we call chiral restoration, and the possible change of hadron properties associated with chiral restoration at finite baryon densities has been an important subject of hadron physics (see, for example, Ref. [1] for a recent review). For example, some theoretical analyses suggest the mass reduction of vector mesons as evidence of the restoration of chiral symmetry [2]. There have been several theoretical and experimental attempts to study the in-medium properties of vector mesons [3]. In the case of the $\omega$ meson, the experimental data are consistent with a weakly attractive optical potential. Analyses of the pion-nucleus system suggest partial restoration of chiral symmetry in the nuclear medium; the pion decay constant, the order parameter of the spontaneous breaking of chiral symmetry, is expected to be reduced by about 35% at normal nuclear density [4–7]. The pseudoscalar meson $\eta'$ is another candidate for probing such a change of the vacuum property. Its mass is larger than those of the other low-lying pseudoscalar mesons, such as $\pi$, $K$, or $\eta$, due to the chiral symmetry breaking in the three-flavor system [8–14] together with the U$_A$(1) anomaly of QCD [15]. According to the argument of the restoration of chiral symmetry in the nuclear medium, the $\eta$-$\eta'$ mass difference can be as large as 150 MeV at normal nuclear density, even if the property of the U$_A$(1) anomaly is unchanged in the medium [16]. By now there are many theoretical and experimental studies of both the $\eta$ and $\eta'$ that investigate property changes of these mesons [8–33]. For the $\eta'$ meson, several interesting experimental results have recently been reported and/or planned [29–41], which give information on the $\eta'N$ interaction and the $\eta'$-nucleus interaction.
These possible property changes of the $\eta'$ in the nuclear medium have also been discussed in several kinematical situations. Unfortunately, so far there is no theoretical framework that explains all the available data consistently. One of the reasons is the complexity coming from the nuclear many-body effects for mesons in the nuclear medium, and hence in the extraction of the basic hadron interactions. Therefore, comparisons between theoretical predictions and experimental observables are not so simple. An example is the $\eta$-nucleus system [18, 19]. From a naive chiral-symmetry argument, the mass of the $\eta$ meson does not change much in the nuclear medium because of its Nambu-Goldstone nature. However, it is also known that the $\eta$ meson couples strongly to the $N^*(1535)$, which in the nuclear medium provides a strong attraction. This attraction for the $\eta$ is also referred to as an effective mass reduction of the $\eta$, which should be distinguished from that due to partial restoration of chiral symmetry. In such situations, we consider it very important to know the basic interactions of the relevant mesons and nucleons, which have been investigated theoretically in Refs. [42–61]. To this end, in the present paper we investigate the $\eta'$ photoproduction off a free nucleon with the final-state interaction (FSI) between the $\eta'$ meson and the nucleon, which is the simplest process probing the $\eta'N$ interaction. For this purpose, we employ a three-flavor linear $\sigma$ model. In this model, a strongly attractive $\eta'N$ interaction is allowed by the U$_A$(1) anomaly and the scalar-meson exchange, such that an $\eta'N$ bound state is generated with a binding energy of typically a few tens of MeV [27, 46]. In the present study, we supplement the linear $\sigma$ model with the $\rho$ meson, which is empirically known to be important for the $\eta'$ photoproduction, with the relevant couplings fixed by existing data. We then focus on the final-state interaction of the $\eta'$ meson with the nucleon, which can affect the energy dependence of the production cross sections near and above the threshold. For this purpose, we perform our analysis by varying the strength of the $\eta'$-nucleon coupling from the original value of the linear $\sigma$ model. By doing this, we discuss how the effect of the $\eta'N$ interaction shows up in the experimental observables. This paper is organized as follows. In Sec. II, we explain the model setup used in this analysis of the $\eta'$ photoproduction. The $\eta'N$ interaction and the photoproduction amplitude used in the present study are also explained in this section. Section III is devoted to the discussion of the cross section and the beam asymmetry of the $\eta'$ photoproduction off the nucleon with the inclusion of the $\eta'N$ FSI. The summary and outlook of this study are given in Sec. IV. II. FORMULATION A. Model Lagrangian In this section, we explain the model setup for the $\eta'$ photoproduction in the three-flavor linear $\sigma$ model. In the linear $\sigma$ model, hadrons, including pseudoscalar mesons, scalar mesons, and baryons, are introduced as linear representations of chiral symmetry, and their interactions are determined accordingly. This is done by first constructing a chirally invariant Lagrangian; the vacuum is then determined so as to minimize the effective potential, and the neutral scalar fields acquire non-zero expectation values in association with the chiral symmetry breaking. Hadron properties in such a framework can naturally be related to the vacuum structure.
The Lagrangian used in this calculation is given by
\begin{align}
\mathcal{L} &= \mathcal{L}_M + \mathcal{L}_N + \mathcal{L}_{\gamma VP}, \nonumber\\
\mathcal{L}_M &= \frac{1}{2}\,\mathrm{tr}\!\left[ D_\mu M (D^\mu M)^\dagger \right] - \frac{\mu^2}{2}\,\mathrm{tr}\!\left[ MM^\dagger \right] - \frac{\lambda}{4}\,\mathrm{tr}\!\left[ (MM^\dagger)^2 \right] - \frac{\lambda'}{4}\left[ \mathrm{tr}\,(MM^\dagger) \right]^2 \nonumber\\
&\quad + \sqrt{3}\, B\, (\det M + \det M^\dagger) + A\,\mathrm{tr}\!\left( \chi M^\dagger + M \chi^\dagger \right) \nonumber\\
&\quad - \frac{1}{4}\,\mathrm{tr}\!\left[ (L^{\mu\nu})^2 + (R^{\mu\nu})^2 \right] + \frac{m_0^2}{2}\,\mathrm{tr}\!\left[ (L^\mu)^2 + (R^\mu)^2 \right], \nonumber\\
\mathcal{L}_N &= \bar{N} \left[ i \left\{ \slashed{\partial} + ig_V \left( \slashed{V} + \frac{\kappa_V}{2m_N}\, \sigma_{\mu\nu}\, \partial^\mu V^\nu \right) \right\}
- m_N - g \left\{ \left( \frac{\bar{\sigma}_0}{\sqrt{3}} + \frac{\bar{\sigma}_8}{\sqrt{6}} \right) + i \gamma_5 \left( \frac{\eta_0}{\sqrt{3}} + \frac{\vec{\pi} \cdot \vec{\tau}}{\sqrt{2}} + \frac{\eta_8}{\sqrt{6}} \right) \right\} \right] N, \nonumber\\
\mathcal{L}_{\gamma VP} &= e\, g_{\gamma VP}\, \epsilon^{\mu\nu\alpha\beta} (\partial_\mu V^a_\nu) (\partial_\alpha A_{\mathrm{em}\,\beta})\, P^a, \nonumber\\
D_\mu M &= \partial_\mu M + ig_V (L_\mu M - M R_\mu), \nonumber\\
M &= M_s + i M_{ps} = \sum_{a=0}^{8} \frac{\sigma^a \lambda^a}{\sqrt{2}} + i \sum_{a=0}^{8} \frac{\pi^a \lambda^a}{\sqrt{2}}, \nonumber\\
N &= {}^t(p,n), \quad V^\mu = \sum_{a=0}^{3} \frac{V^{a\mu} \tau^a}{\sqrt{2}}, \nonumber\\
\chi &= \sqrt{3}\,\mathrm{diag}(m_u, m_d, m_s) = \sqrt{3}\,\mathrm{diag}(m_q, m_q, m_s), \nonumber
\end{align}
where we write $L^\mu = V^\mu + A^\mu$ and $R^\mu = V^\mu - A^\mu$ with the vector and axial-vector fields $V^\mu$ and $A^\mu$, and $e > 0$ is the elementary charge. $A^\mu_{\mathrm{em}}$ denotes the electromagnetic field, and $P^a$ stands for the pseudoscalar field coupled to the vector meson $V^a$ (here $P = \eta'$, with the couplings $g_{\gamma\eta'\rho}$ and $g_{\gamma\eta'\omega}$ of Table I). $\bar{\sigma}_i$ ($i = 0, 8$) appearing in the nucleon part is the fluctuation of the neutral scalar field around its mean-field value. The mean field is determined so as to minimize the effective potential, which is obtained in the tree-level approximation in this study. Isospin symmetry is implemented through degenerate $u$ and $d$ quark masses. Except for the vector-field part, the Lagrangian is the same as that used in Refs. [27, 46]. The Lagrangian is constructed to be invariant under chiral transformations of the hadron fields; the meson field $M$ transforms as $U_L M U_R^\dagger$, with $U_{L/R}$ an element of SU(3)$_{L/R}$. We note that the term proportional to $B$, which is not invariant under the U$_A$(1) transformation, reflects the effect of the U$_A$(1) anomaly. In the fermion part, the hyperons, which are irrelevant to this study, are omitted. The values of the various coupling constants in the Lagrangian are summarized in Table I.

TABLE I. Values of the parameters in the Lagrangian.

| $g_V$ [−] | $\kappa_\rho$ [−] | $\kappa_\omega$ [−] | $g_{\gamma \eta' \rho}$ [MeV$^{-1}$] | $g_{\gamma \eta' \omega}$ [MeV$^{-1}$] | $g$ [−] |
|-----------|------------------|------------------|---------------------------------|---------------------------------|-------|
| 5.95 | 3.586 | 0 | $1.625 \times 10^{-3}$ | $5.622 \times 10^{-4}$ | 7.698 |

The coupling of the vector meson to the nucleon, $g_V$, is fixed by the Kawarabayashi-Suzuki-Fayyazuddin-Riazuddin (KSFR) relation $g_V = \frac{m_V}{\sqrt{2}f}$ [62], where $m_V = (m_\rho + m_\omega)/2$ and $f = 92.2$ MeV. The masses of the $\rho$ and $\omega$ mesons are taken from Ref. [63].
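As a quick numerical cross-check, the KSFR value of $g_V$ quoted in Table I can be reproduced from the stated inputs. The minimal sketch below uses rounded PDG masses for the $\rho$ and $\omega$ (an assumption; the text takes them from Ref. [63]):

```python
import math

# Minimal check of the KSFR determination g_V = m_V / (sqrt(2) f),
# with m_V = (m_rho + m_omega)/2 and f = 92.2 MeV as in the text.
# The rho and omega masses are rounded PDG values (an assumption).
m_rho, m_omega, f = 775.3, 782.7, 92.2   # MeV
m_V = 0.5 * (m_rho + m_omega)
g_V = m_V / (math.sqrt(2.0) * f)
print(f"g_V = {g_V:.2f}")  # ~5.97, consistent with the 5.95 of Table I
```

The small residual difference from 5.95 presumably reflects the precise mass inputs used in the original fit.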
The coefficient of the Pauli coupling between the nucleon and the vector meson, $\kappa_V$, is determined so as to reproduce the anomalous magnetic moment of the proton, $\kappa_p = 1.793$. Following Refs. [27, 46], the parameter $g$ in the nucleon part is fixed so that $\langle \sigma \rangle$, the chiral order parameter, is reduced by 35% at normal nuclear density, as suggested by the analysis of the pion-nucleus system [4]. The masses of the $\eta$ and $\eta'$ mesons and of the nucleon are constrained within our model; in the present study of the $\eta'$ photoproduction, however, we employ the experimental values for these masses. The coupling of the photon $\gamma$, the vector meson $V^a$ ($V^0 = \omega$ and $V^3 = \rho^0$), and the $\eta'$ meson is the so-called anomalous coupling, which is induced by the chiral anomaly [64] combined with vector-meson dominance. Here, we use $g_{\gamma V^a \eta'}$ determined from the observed partial widths of the radiative $\eta'$ decays [63].

B. $\eta'N$ amplitude for FSI

In this section, we briefly revisit the $\eta'N$ amplitude in the framework of the linear $\sigma$ model [27, 46], which is relevant to the purpose of this study. The $\eta'$ photoproduction amplitude is given by the $T$ matrix $T_{\gamma N \rightarrow \eta' N}$ as
$$T_{\gamma N \rightarrow \eta' N} = V_{\gamma N \rightarrow \eta' N}(1 + G_{\eta' N} T_{\eta' N \rightarrow \eta' N}), \tag{2}$$
whose diagrammatic expression is shown in Fig. 1. In Eq. (2), $V_{\gamma N \rightarrow \eta' N}$, $G_{\eta' N}$, and $T_{\eta' N \rightarrow \eta' N}$ are the $\eta'$-photoproduction kernel, the $\eta'N$ two-body Green's function, and the $\eta'N$ $T$ matrix, respectively. The amplitude $T_{\eta' N \rightarrow \eta' N}$ is responsible for the rescattering of the $\eta'$ meson and the nucleon in the final state. The $T$ matrix is obtained from a two-channel coupled equation for $\eta'N$ ($i = 1$) and $\eta N$ ($i = 2$). With the interaction kernels $V_{ij}$ ($i, j = 1, 2$) of the $\eta'N$ and $\eta N$ channels, the $T$ matrices $T_{ij}$ satisfy the scattering equation
$$T_{ij} = V_{ij} + V_{ik} G_k T_{kj}, \tag{3}$$
where
$$V_{11} = -\frac{6gB}{\sqrt{3}\,m^2_{\sigma_0}}, \quad V_{12} = V_{21} = +\frac{6gB}{\sqrt{6}\,m^2_{\sigma_8}}, \quad V_{22} = 0. \tag{4}$$
The diagrammatic expression of Eq. (3) is given in Fig. 2. The interaction kernels $V_{ij}$ in Eq. (4) are obtained from the scattering amplitude within the tree-level approximation and at leading order of the momentum expansion in the flavor-SU(3) symmetric limit. The diagrams taken into account in this calculation are shown in Fig. 3: the scalar-meson exchange in the $t$ channel and the Born diagrams in the $s$ and $u$ channels. One can see from Eq. (4) that an attractive interaction between the $\eta'$ meson and the nucleon is induced by the scalar-meson exchange in this approximation. It is noteworthy that this interaction kernel is proportional to $B$, which reflects the effect of the U$_A$(1) anomaly, as mentioned in Sec. II.A. Owing to this attraction, a bound state of the $\eta'$ meson and the nucleon can be generated. In this study, the vector-meson contribution to the $\eta'N$ interaction is not taken into account, because it does not give a leading contribution in the momentum expansion.
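To make the algebraic structure of Eqs. (3) and (4) concrete, the following minimal sketch solves the on-shell two-channel equation as $T = (1 - VG)^{-1}V$. The Green's-function values below are placeholder complex numbers, not the dimensionally regularized loop functions used in this paper, so the output is purely illustrative:

```python
import numpy as np

# Hedged sketch of the two-channel scattering equation T = V + V G T,
# solved on shell as T = (1 - V G)^{-1} V.  Channel 1 = eta'N, channel 2 = etaN.
# The kernel follows Eq. (4); B, g, m_sigma0, m_sigma8 are the values quoted
# in the text.  The diagonal Green's functions G are placeholders, NOT the
# regularized loop functions of the paper.
B, g = 997.95e-3, 7.698            # B in GeV, coupling g
m_s0, m_s8 = 0.700, 1.225          # sigma_0, sigma_8 masses in GeV

V = np.array([[-6.0 * g * B / (np.sqrt(3.0) * m_s0**2),
                6.0 * g * B / (np.sqrt(6.0) * m_s8**2)],
              [ 6.0 * g * B / (np.sqrt(6.0) * m_s8**2), 0.0]])  # GeV^-1

G = np.diag([-0.01 - 0.005j, -0.02 - 0.01j])    # placeholder loop functions
T = np.linalg.solve(np.eye(2) - V @ G, V)       # T = (1 - V G)^{-1} V
print(T[0, 0])                                  # eta'N -> eta'N amplitude
```

Note that $V_{22} = 0$: the $\eta N$ channel contributes only through its coupling to $\eta'N$, so the attraction in the $\eta'N$ channel is driven entirely by the anomaly-induced $V_{11}$.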
Here, we take into account the $\eta'N$ and $\eta N$ channels and omit the $\pi N$ one, because we expect the contribution from the $\pi N$ channel to be small owing to the smallness of the $\pi N \rightarrow \eta'N$ cross section [65]. The divergence contained in the two-body Green's function $G_i$ is removed by dimensional regularization, and the subtraction constant is fixed with the natural renormalization scheme [66]. The relevant parameters are $B = 997.95$ MeV, $g = 7.698$, $m_{\sigma_0} = 700$ MeV, $m_{\sigma_8} = 1225$ MeV, and the subtraction constants are $a_{\eta'N}(\mu = m_N) = -1.838$ and $a_{\eta N}(\mu = m_N) = -1.239$, where $\mu$ is the renormalization point. We use the same subtraction constant in the Green's function of Eq. (2) as in the $\eta'N$ $T$ matrix of Eq. (3).

For the purpose of seeing the effect of the $\eta'N$ FSI, we show results with the coupling parameter $g$ varied as (a) $g \times 0.0$, (b) $g \times 1.0$, (c) $g \times 0.5$, (d) $g \times 1.5$, and (e) $g \times (-0.5)$. The coupling strength, the $\eta'N$ scattering length, and the binding energy of the $\eta'N$ bound state in these cases are summarized in Table II.

TABLE II. Coupling strength, $\eta'N$ scattering length, and binding energy of the $\eta'N$ bound state for the cases (a) to (e), characterized by the coupling parameter $g$.

| | (a) | (b) | (c) | (d) | (e) |
|-------------------|------|------|------|------|------|
| coupling strength | $g \times 0.0$ | $g \times 1.0$ | $g \times 0.5$ | $g \times 1.5$ | $g \times (-0.5)$ |
| scattering length [fm] | | | | | |
| binding energy [MeV] | $-$ | $9.79 - 7.10i$ | $-$ | $98.6 - 24.6i$ | $-$ |

In this model, there is no parameter set that reproduces the scattering length suggested by the COSY-11 experiment [34], whose imaginary part is larger than its real part. On the other hand, the $\eta'$ optical potential obtained by CBELSA/TAPS, translated with the linear-density approximation (admittedly a crude one), seems to be consistent within the errors with the scattering length of case (c) in Table II. Here, we do not restrict our analysis to the scattering length suggested by the COSY-11 analysis [34], in order to provide an independent and complementary analysis of the $\eta'N$ interaction.

C. Photoproduction amplitude

In this section, we explain the $\eta'$-photoproduction kernel $V_{\gamma N \to \eta'N}$ in Eq. (2). It is evaluated within the tree-level approximation shown in Fig. 4, which contains the Born diagrams in the $s$ and $u$ channels and the vector-meson exchange in the $t$ channel.

FIG. 4. Diagrams for the $\eta'$ photoproduction amplitude in the tree-level approximation, with the same notation as in Fig. 1. The first, second, and third terms are the contributions from the $s$, $u$, and $t$ channels, respectively.
We can write down the amplitude corresponding to these diagrams as follows:
\begin{align}
-i\mathcal{M}_{\text{tree}} &= e\,\bar{u}(p', s') \Bigg[ g_{PN} \left\{ \gamma_5\, \frac{F_s \slashed{k} + F_c (\slashed{p} + m_N)}{(p + k)^2 - m_N^2}\, \slashed{\epsilon} + \slashed{\epsilon}\, \frac{-F_u \slashed{k}' + F_c (\slashed{p} + m_N)}{(p - k')^2 - m_N^2}\, \gamma_5 \right\} \nonumber\\
&\quad + \frac{\kappa_p}{4 m_N} \left( F_s\, \gamma_5\, \frac{\slashed{p} + \slashed{k} + m_N}{(p + k)^2 - m_N^2}\, [\slashed{k}, \slashed{\epsilon}] + F_u\, [\slashed{k}, \slashed{\epsilon}]\, \frac{\slashed{p} - \slashed{k}' + m_N}{(p - k')^2 - m_N^2}\, \gamma_5 \right) \nonumber\\
&\quad + i F_t\, \frac{g_V\, g_{\gamma\eta' V}}{t - m_V^2 + i\epsilon}\, g_{\mu\sigma}\, \epsilon^{\rho\sigma\alpha\beta}\, k'_\rho\, k_\alpha\, \epsilon_\beta \left\{ \gamma^\mu + \frac{\kappa_V}{4 m_N} [\slashed{q}, \gamma^\mu] \right\} \Bigg] u(p, s), \tag{5}
\end{align}
where $\epsilon^\mu$ is the polarization vector of the photon, and the momentum transfer $q^\mu$ is written as $q^\mu = p'^\mu - p^\mu$. The form factors $F_x$ ($x = s, t, u$) are introduced in a gauge-invariant manner following Ref. [67] and references therein. The form factors $F_x$ appearing in Eq. (5) are written as $F_x = \Lambda_x^4 / ((x - m_x^2)^2 + \Lambda_x^4)$, and $F_c$ is given by $F_c = F_s + F_u - F_s F_u$, where $m_x$ denotes the mass of the hadron exchanged in the channel $x$. The form factor reflects the size of the hadron, and the typical value of the cutoff parameter $\Lambda_x$ is about 1 GeV; we will discuss the actual values in the next section. For the kernel $V_{\gamma N \rightarrow \eta' N}$ in Eq. (2), we use the production amplitude $\mathcal{M}_{\text{tree}}$ of Eq. (5), factorizing the amplitude with its on-shell value. In the present calculation, we have omitted the direct production of the $\eta$ meson from the photon, $V_{\gamma N \rightarrow \eta N}$, expecting that the energy dependence of that channel is weak in the region of the $\eta'N$ threshold, because the pole of the $N^*(1535)$, which gives the dominant contribution to the $\eta$-meson photoproduction, lies far from this region.

III. RESULT

Let us first discuss the differential cross sections of the $\eta'$ photoproduction without the $\eta'N$ FSI as functions of the total energy $W$ in the center-of-mass (c.m.) frame. The results are shown in Fig. 5. The left and right panels of the figure correspond to $\eta'$ production at a forward angle ($\cos \theta_{\eta'}^{\text{c.m.}} = 0.75$) and a backward one ($\cos \theta_{\eta'}^{\text{c.m.}} = -0.75$), respectively, where $\theta_{\eta'}^{\text{c.m.}}$ denotes the angle between the initial photon and the produced $\eta'$ meson in the c.m. frame. For the cutoff parameters, we use $\Lambda = \Lambda_x = 700$ MeV ($x = s, t, u$). In this figure, the separate contributions from the $s$, $t$, and $u$ channels are also plotted. From the left panel of Fig. 5, we find that the cross section at the forward angle is dominated by the $t$-channel contribution with the vector-meson exchange. On the other hand, the $u$-channel contribution, the second term of Fig. 4, has a large fraction at the backward angle. As is often the case, the reaction cross sections depend on the cutoff parameters of the form factor.

FIG. 5. Differential cross sections of the $\eta'$ photoproduction without FSI at $\cos\theta_{\eta'}^{\text{c.m.}} = 0.75$ (left) and $-0.75$ (right) as functions of the total energy $W$ in the c.m. frame. In the left figure, the $s$- and $u$-channel contributions are multiplied by factors 150 and 5, respectively; the $s$-channel one in the right figure is multiplied by a factor 100.
Thus, in Fig. 6 we show the differential cross sections at the forward angle with the cutoff parameter varied as $\Lambda = 500$, 700, and 900 MeV, without the $\eta'N$ FSI. With the introduction of the form factor, a characteristic peak structure may appear in the energy dependence of the cross section, due to the competition between the increasing phase-space volume and the decreasing form factor as the energy (or the relative momentum $q$) is increased. The cross section is proportional to $q|F(q)|^2$, where $q$ is the relative momentum of the final-state $\eta'N$ pair, related to the kinetic energy $E$ by $E = q^2/2\mu$ in the non-relativistic approximation for small $q$ ($\mu$ is the reduced mass). Using the typical cutoff $\Lambda \sim 1$ GeV and $\mu \sim 0.5$ GeV for the $\eta'N$ system, we find the peak position at around some hundreds of MeV above the threshold. Indeed, in Fig. 6 one finds a peak at $2.5-2.8$ GeV, that is, $600-900$ MeV above the $\eta'N$ threshold, as expected, and no characteristic structure around the threshold appears with these cutoff parameters.

FIG. 6. Cutoff dependence of the differential cross sections of the $\eta'$ photoproduction off the nucleon without the $\eta'N$ FSI. The cutoff parameter $\Lambda$ is varied as $\Lambda = 500$, 700, and 900 MeV. Note that the results for $\Lambda = 500$ and 900 MeV are scaled by factors 11.5 and 0.2, respectively.

Now, Fig. 7 shows the total-energy ($W$) dependence of the differential cross sections with the inclusion of the $\eta'N$ FSI. The strength of the $\eta'N$ interaction is varied by changing the parameter $g$ appearing in Eq. (4), to see the dependence on the strength of the $\eta'N$ FSI. In the figure, the cases (a) to (e) correspond to those explained in Sec. II.B: (a) without FSI; (b), (c), (d) with attractive FSI; and (e) with repulsive FSI. We use the same value of the cutoff parameter, $\Lambda = 700$ MeV, in all cases (a) to (e). In this study, only the $S$-wave part of the $\eta'N$ FSI is included; therefore, we mainly focus on energies around the $\eta'N$ threshold in the following discussion.

FIG. 7. Differential cross sections at the forward (left, $\cos \theta_{\eta'}^{\text{c.m.}} = 0.75$) and the backward (right, $\cos \theta_{\eta'}^{\text{c.m.}} = -0.75$) angles as functions of $W$ with and without the $\eta'N$ FSI. The cases (a) to (e) in the legend follow those given in Table II.

In the left panel of Fig. 7, for the forward production of the $\eta'$ meson, we find a broad bump structure around 2.6 GeV in the case (a) without FSI, which originates from the form factor as mentioned above. With the inclusion of the $\eta'N$ FSI, the structure is modified: in the case (b), a significant enhancement near the $\eta'N$ threshold appears, which stems from the existence of a bound state just below the threshold. The enhancement becomes more moderate in the cases (c) and (d), where no bound state exists near the $\eta'N$ threshold. Thus, we find an enhancement of the forward cross section near the $\eta'N$ threshold due to the attractive $\eta'N$ FSI.
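As an aside, the rough peak-position estimate given above (cross section $\propto q|F(q)|^2$ with $E = q^2/2\mu$) can be checked with a minimal sketch. The form $F(q) = \Lambda^4/(q^4 + \Lambda^4)$ used below is an assumed stand-in for the Mandelstam-variable form factors $F_x$ of Eq. (5):

```python
import numpy as np

# Sketch of the q|F(q)|^2 peak estimate with Lambda ~ 1 GeV and the eta'N
# reduced mass mu ~ 0.5 GeV, as in the text.  F(q) = L^4/(q^4 + L^4) is an
# assumed stand-in for the Mandelstam-variable form factors F_x.
Lam, mu = 1.0, 0.5                                  # GeV

q = np.linspace(1e-4, 2.0, 4000)                    # relative momentum (GeV)
y = q * (Lam**4 / (q**4 + Lam**4))**2               # ~ q |F(q)|^2
q_peak = q[np.argmax(y)]
E_peak = q_peak**2 / (2.0 * mu)                     # E = q^2 / (2 mu)
print(f"q_peak = {q_peak:.2f} GeV -> E_peak ~ {1e3 * E_peak:.0f} MeV")
```

With these inputs the maximum lies at $q \approx 0.6$ GeV, i.e. roughly 400 MeV of kinetic energy above threshold, consistent with the qualitative statement of "some hundreds of MeV" above.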
TABLE III. Cutoff parameters $\Lambda_x$ used for the results shown in Fig. 8, in units of MeV. The cases (a) to (e) follow those of Table II.

| | (a) | (b) | (c) | (d) | (e) |
|-------|-------|-------|-------|-------|-------|
| $\Lambda_{s,u}$ | 600 | 680 | 680 | 650 | 0 |
| $\Lambda_t$ | 750 | 610 | 650 | 790 | 840 |

FIG. 8. Differential cross sections of the $\eta'$ photoproduction at $\cos \theta_{\eta'}^{\text{c.m.}} = 0.75$ (left) and $-0.75$ (right) as functions of the total energy $W$. The cases (a) to (e) in the legend are the same as those in Fig. 7. The points with error bars are the experimental data taken from Ref. [36].

In the case (e), where the $\eta'N$ FSI is repulsive, no such enhancement is found. When the $\eta'$ meson is emitted at the backward angle, the $\eta'N$ FSI has a similar effect on the energy dependence of the cross sections, as shown in the right panel of Fig. 7: the cross sections near the $\eta'N$ threshold are larger in the cases (b), (c), and (d) than in the case (a), and one can see a suppression in the case (e) compared with the case (a). In Fig. 8, we show the resulting differential cross sections compared with the experimental data [36]. In doing so, we have tuned the cutoff parameters $\Lambda_{s,u}$ and $\Lambda_t$ for each strength of the $\eta'N$ FSI to make an optimal comparison with the experimental data near the threshold at both forward and backward angles. The resulting cutoff parameters $\Lambda_x$ are summarized in Table III. At the forward angle, shown in the left panel of Fig. 8, the rapid increase near the threshold is well reproduced in the cases (b), (c), and (d), where the $\eta'N$ FSI is attractive, while such behavior is not seen in the case without the $\eta'N$ FSI, (a), nor with the repulsive one, (e). In the present approach, we cannot reproduce the broad peak at around $W = 2.1$ GeV in the experimental data, which is considered to be due to a resonance, as discussed in Ref. [56]. In the present study, however, we do not consider a resonance in that energy region, and rather focus our discussion on the near-threshold behavior induced by the $\eta'N$ FSI. As mentioned before, there is no parameter set that reproduces the scattering length suggested by the COSY-11 experiment [34]. We expect that such a small scattering length would lead to a result similar to the case (a), where the effect of the $\eta'N$ FSI is not taken into account and the rapid increase near the $\eta'N$ threshold is not reproduced well. Next, we move to the backward production of the $\eta'$ meson, given in the right panel of Fig. 8. Here, we note that the experimental data of Ref. [36] very near the $\eta'N$ threshold are missing, and only the data above 2 GeV are available. We find a clear effect of the FSI at total energies $W$ below 2 GeV. The attractive $\eta'N$ FSI, the cases (b), (c), and (d), leads to a rapid increase of the cross sections around the $\eta'N$-threshold energy, while in the case (e) the cross section is smaller than in the case (a). This difference of the cross sections near the $\eta'N$ threshold can serve as a probe of the strength of the low-energy $\eta'N$ interaction. Corresponding to the broad peak seen in the experimental data at the forward angle, a dip-like structure is seen at the backward angle in the same energy region, $W \sim 2.1$ GeV. Once again, we do not discuss this structure, because it may come from the resonance effect mentioned above. The differential cross sections at $W = 1.925$, 2.045, 2.230, and 2.420 GeV as functions of $\cos\theta_{\eta'}^{\text{c.m.}}$ are shown in Fig. 9.
Around the $\eta'N$-threshold energy, $W = 1.925$ GeV, the calculated cross sections depend only weakly on $\cos\theta_{\eta'}^{\text{c.m.}}$, owing to the expected $S$-wave dominance, though the experimental data show some structure. As mentioned above, the differences among the theoretical curves come from the strength of the $\eta'N$ FSI: in the cases (b), (c), and (d), which contain an attractive $\eta'N$ FSI, the cross sections near the $\eta'N$ threshold are larger than in the case (a). At $W = 2.045$ GeV and around $\cos\theta_{\eta'}^{\text{c.m.}} = 1$, there is a discrepancy between our calculation and the experimental data. This energy corresponds to the peak around 2.1 GeV in the experimental data of Fig. 8, which may come from the resonance contribution mentioned above. At higher energies ($W = 2.23$ and 2.42 GeV), the forward peak structure stemming from the $t$-channel contribution becomes more apparent. The difference of the behavior at the backward angle is caused by that of the $u$-channel contribution associated with the change of the parameter $g$.

FIG. 9. Differential cross sections of the $\eta'$ photoproduction as functions of $\cos\theta_{\eta'}^{\text{c.m.}}$ with and without the $\eta'N$ FSI at $W = 1.925$ (upper left), 2.045 (upper right), 2.230 (lower left), and 2.420 (lower right) GeV. The legend is the same as that in Fig. 7.

In Fig. 10, we show the total cross sections of the $\eta'$ photoproduction as functions of the total energy $W$. As in the case of the differential cross sections, an enhancement near the $\eta'N$ threshold is seen in the cases (b), (c), and (d) with the attractive $\eta'N$ FSI, while in the case (e) the cross section is smaller than in the case (a).

FIG. 10. Total cross sections of the $\eta'$ photoproduction off the nucleon as functions of the total energy $W$. The legend is the same as that in Fig. 7.

Finally, we show the beam asymmetries $\Sigma$ against the scattering angle $\theta_{\eta'}^{\text{c.m.}}$. $\Sigma$ is defined as
$$\Sigma = \left( \frac{d\sigma}{d\Omega} \bigg|_{\phi=\pi/2} - \frac{d\sigma}{d\Omega} \bigg|_{\phi=0} \right) \bigg/ \left( \frac{d\sigma}{d\Omega} \bigg|_{\phi=\pi/2} + \frac{d\sigma}{d\Omega} \bigg|_{\phi=0} \right),$$
where $\phi$ is the azimuthal angle measured from the polarization vector of the photon in the initial state. The positive values of the beam asymmetries shown in Fig. 11 originate from the dominant contribution of the $t$-channel diagram, which is of magnetic nature, associated with the anomalous $\gamma\eta'\rho$ coupling. The behavior of the beam asymmetry is qualitatively different from the observed one [38]. The difference may come from interference effects, as pointed out in Ref. [38]; further development of the model, such as the inclusion of higher partial-wave contributions, may then be necessary for the description of the beam asymmetry.

FIG. 11. Beam asymmetries $\Sigma$ as functions of the scattering angle $\theta_{\eta'}^{\text{c.m.}}$ at the total energies $W = 1.903$ (left) and 1.912 (right) GeV. The cases (a) to (e) in the legend are the same as those in Fig. 7.

IV. SUMMARY AND OUTLOOK

In this paper, we investigated the $\eta'$ photoproduction off a nucleon with the inclusion of the final-state interaction between the $\eta'$ meson and the nucleon, based on the linear $\sigma$ model.
When there is an attractive final-state interaction, we found an enhancement of the differential cross section near the $\eta'N$ threshold, typically around or below 2 GeV, at both the forward and backward angles ($\cos \theta_{\eta'}^{\text{c.m.}} = \pm 0.75$). With an attractive $\eta'N$ interaction, the energy dependence of the cross section near the $\eta'N$ threshold is reproduced fairly well. In particular, the magnitude of the enhancement near the threshold in the backward production of the $\eta'$ meson seems to be sensitive to the strength of the $\eta'N$ interaction. The angular dependence of the differential cross section also agrees with the experimental data of Ref. [36]. The enhancement around the $\eta'N$ threshold appears in the energy dependence of the total cross section as well. Therefore, a precise analysis of the threshold behavior is useful for determining the $\eta'N$ interaction. Despite these agreements, the angular dependence of the beam asymmetry shows qualitatively different behavior from the observed one, as was also the case in previous theoretical calculations [38]. The present study was based on a rather simple model and on $S$-wave scattering. Other ingredients, such as coupled channels (e.g., $\eta N$ and $\pi N$), higher partial waves, resonances, and so on, may be included; these are expected to improve the aspects that cannot be explained in the present study.

ACKNOWLEDGMENTS

This work is supported in part by Grants-in-Aid for Scientific Research (C) by the JSPS (Grant Nos. JP26400273 for A. H. and JP26400275 for H. N.).

---

[1] R. S. Hayano and T. Hatsuda, Rev. Mod. Phys. 82 (2010) 2949. [2] G. E. Brown and M. Rho, Phys. Rev. Lett. 66 (1991) 2720. T. Hatsuda and S. H. Lee, Phys. Rev. C 46 (1992) no.1, R34. [3] M. Naruki et al., Phys. Rev. Lett. 96 (2006) 092301. S. Friedrich et al. [CBELSA/TAPS Collaboration], Phys. Lett. B 736 (2014) 26. P. Gubler and W. Weise, Nucl. Phys. A 954 (2016) 125. [4] K. Suzuki et al., Phys. Rev. Lett. 92 (2004) 072302. [5] E. Friedman et al., Phys. Rev. Lett. 93 (2004) 122302. [6] E. E. Kolomeitsev, N. Kaiser and W. Weise, Phys. Rev. Lett. 90 (2003) 092501. [7] D. Jido, T. Hatsuda and T. Kunihiro, Phys. Lett. B 670 (2008) 109. [8] R.D. Pisarski and F. Wilczek, Phys. Rev. D29, 338 (1984). [9] H. Kikuchi and T. Akiba, Phys. Lett. B 200 (1988) 543. [10] T. Kunihiro and T. Hatsuda, Phys. Lett. B 206 (1988) 385 Erratum: [Phys. Lett. 210 (1988) 278]. T. Kunihiro, Phys. Lett. B219, 363 (1989). [11] T. D. Cohen, Phys. Rev. D 54 (1996) 1867. [12] S.H. Lee and T. Hatsuda, Phys. Rev. D54, (1996) 1871. [13] N. J. Evans, S. D. H. Hsu and M. Schwetz, Phys. Lett. B 375 (1996) 262. [14] M. C. Birse, T. D. Cohen and J. A. McGovern, Phys. Lett. B 388 (1996) 137. [15] W.A. Bardeen, Phys. Rev. 184, 1848 (1969). M. Kobayashi and T. Maskawa, Prog. Theor. Phys. 44 (1970) 1422. J. Schechter and Y. Ueda, Phys. Rev. D 3 (1971) 168. G. 't Hooft, Phys. Rev. D 14 (1976) 3432 Erratum: [Phys. Rev. D 18 (1978) 2199]. E. Witten, Nucl. Phys. B 156, 269 (1979). G. Veneziano, Nucl. Phys. B 159, 213 (1979). C. Rosenzweig, J. Schechter and C. G. Trahern, Phys. Rev. D 21 (1980) 3388. K. Kawarabayashi and N. Ohta, Nucl. Phys. B 175 (1980) 477. [16] D. Jido, H. Nagahiro and S. Hirenzaki, Phys. Rev. C 85 (2012) 032201. [17] J.I. Kapusta, D. Kharzeev, and L.D. McLerran, Phys. Rev. D53, 5028 (1996). T. Csorgo, R. Vertesi and J. Sziklai, Phys. Rev. Lett. 105 (2010) 182301. S. Benic, D. Horvatic, D. Kekez and D. Klabucar, Phys. Rev. D 84 (2011) 016006. G. Fejos and A. Hosaka, Phys. Rev.
D 94 (2016) no.3, 036005. [18] T. Waas and W. Weise, Nucl. Phys. A 625 (1997) 287. D. Jido, E. E. Kolomeitsev, H. Nagahiro and S. Hirenzaki, Nucl. Phys. A 811 (2008) 158. H. Nagahiro, D. Jido and S. Hirenzaki, Phys. Rev. C 80 (2009) 025205. [19] M. Pfeiffer et al., Phys. Rev. Lett. 92 (2004) 252001. J. Smyrski et al., Phys. Lett. B 649 (2007) 258. T. Mersmann et al., Phys. Rev. Lett. 98 (2007) 242301. F. Pheron et al., Phys. Lett. B 709 (2012) 21. [20] P. Costa, M.C. Ruivo, and Yu.L. Kalinovsky, Phys. Lett. B560, 171 (2003). [21] H. Nagahiro and S. Hirenzaki, Phys. Rev. Lett. 94, 232503 (2005). [22] S.D. Bass and A.W. Thomas, Phys. Lett. B634, 368 (2006). [23] K. Saito, K. Tsushima and A. W. Thomas, Prog. Part. Nucl. Phys. 58 (2007) 1. [24] H. Nagahiro, M. Takizawa and S. Hirenzaki, Phys. Rev. C 74 (2006) 045203. [25] Y. Kwon, S. H. Lee, K. Morita and G. Wolf, Phys. Rev. D 86 (2012) 034014 [26] H. Nagahiro, S. Hirenzaki, E. Oset, and A. Ramos, Phys. Lett. B709, 87 (2012). [27] S. Sakai and D. Jido, Phys. Rev. C 88, 064906 (2013). [28] M. Miyatani, H. Nagahiro, S. Hirenzaki and N. Ikeno, Acta Phys. Polon. B 47 (2016) 367. [29] K. Itahashi et al., Prog. Theor. Phys. 128 (2012) 601. [30] M. Nanova et al. [CBELSA/TAPS Collaboration], Phys. Lett. B 710 (2012) 600. [31] M. Nanova et al. [CBELSA/TAPS Collaboration], Phys. Lett. B 727 (2013) 417. [32] M. Nanova et al. [CBELSA/TAPS Collaboration], Phys. Rev. C 94 (2016) no.2, 025205. [33] Y. K. Tanaka et al. [η-PRiME/Super-FRS Collaboration], Phys. Rev. Lett. 117 (2016) no.20, 202501. [34] P. Moskal et al., Phys. Lett. B474, 416 (2000). P. Moskal et al., Phys. Lett. B482, 356 (2000). E. Czerwinski et al., Phys. Rev. Lett. 113, 062004 (2014) [35] P. G. Moyssides et al., Nuovo Cim. A 75, 163 (1983). [36] M. Williams et al. [CLAS Collaboration], Phys. Rev. C 80 (2009) 045213. [37] Y. Morino et al., PTEP 2015 (2015) no.1, 013D01. [38] P. Levi Sandri et al., Eur. Phys. J. A 51 (2015) no.7, 77. [39] M. Sumihama et al. [LEPS Collaboration], Phys. Rev. C 80 (2009) 052201. [40] V. Crede et al. [CBELSA/TAPS Collaboration], Phys. Rev. C 80 (2009) 055202. [41] V. L. Kashevarov et al., arXiv:1701.04809 [nucl-ex]. [42] K. Kawarabayashi and N. Ohta, Prog. Theor. Phys. 66, 1789 (1981). [43] S. D. Bass, Phys. Lett. B 463 (1999) 286. [44] B. Borasoy, Phys. Rev. D 61, 014011 (2000). [45] E. Oset and A. Ramos, Phys. Lett. B 704, 334 (2011). [46] S. Sakai and D. Jido, Hyperfine Interact. 234, 71 (2015). [47] T. Sekihara, S. Sakai and D. Jido, Phys. Rev. C 94 (2016) no.2, 025203. [48] J. F. Zhang, N. C. Mukhopadhyay and M. Benmerrouche, Phys. Rev. C 52 (1995) 1134. [49] Z. p. Li, J. Phys. G 23 (1997) 1127. [50] B. Borasoy, Eur. Phys. J. A 9 (2000) 95. [51] S. D. Bass, S. Wetzel and W. Weise, Nucl. Phys. A 686 (2001) 429. [52] B. Borasoy, E. Marco and S. Wetzel, Phys. Rev. C 66 (2002) 055208. [53] W. T. Chiang, S. N. Yang, L. Tiator, M. Vanderhaeghen and D. Drechsel, Phys. Rev. C 68 (2003) 045202. [54] A. Sibirtsev, C. Elster, S. Krewald and J. Speth, AIP Conf. Proc. 717 (2004) 837. [55] K. Nakayama and H. Haberzettl, Phys. Rev. C 69 (2004) 065212. [56] K. Nakayama and H. Haberzettl, Phys. Rev. C 73 (2006) 045211. [57] V. A. Tryasuchev, Phys. Part. Nucl. 39 (2008) 64. [58] X. Cao and X. G. Lee, Phys. Rev. C 78 (2008) 035207. [59] X. H. Zhong and Q. Zhao, Phys. Rev. C 84 (2011) 065204. [60] F. Huang, H. Haberzettl and K. Nakayama, Phys. Rev. C 87 (2013) 054004. [61] V. L. Kashevarov, L. Tiator and M. Ostrick, Bled Workshops Phys. 16 (2015) 9. [62] K. Kawarabayashi and M. 
Suzuki, Phys. Rev. Lett. 16 (1966) 255. N. Fayyazuddin and N. Riazuddin, Nucl. Phys. 31 (1962) 649. [63] K. A. Olive et al. [Particle Data Group Collaboration], Chin. Phys. C 38 (2014) 090001. [64] S.L. Adler, Phys. Rev. 177, 2426 (1969). J.S. Bell and R. Jackiw, Nuovo Cim. A 60, 47 (1969). [65] R. K. Rader et al., Phys. Rev. D 6 (1972) 3059. [66] T. Hyodo, D. Jido and A. Hosaka, Phys. Rev. C 78, 025203 (2008). [67] K. S. Choi, S. i. Nam, A. Hosaka and H. C. Kim, Phys. Lett. B 636 (2006) 253. K. S. Choi, S. i. Nam, A. Hosaka and H. C. Kim, J. Phys. G 36 (2009) 015008.
Face Recognition Using LDA-Based Algorithms

Juwei Lu, K.N. Plataniotis, and A.N. Venetsanopoulos

Bell Canada Multimedia Laboratory, The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, M5S 3G4, ONTARIO, CANADA

Submitted January 15, 2001. Revised and re-submitted as a BRIEF April 16, 2002. Accepted for publication by IEEE Transactions on Neural Networks in May 2002.

CORRESPONDENCE ADDRESS: Prof. K.N. Plataniotis, Bell Canada Multimedia Laboratory, The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, 10 King’s College Road, Toronto, Ontario M5S 3G4, Canada. Tel: (416) 946-5605, fax: (416) 978-4425, e-mail: email@example.com, http://www.comm.toronto.edu/~kostas

Abstract

Low-dimensional feature representation with enhanced discriminatory power is of paramount importance to face recognition (FR) systems. Most traditional linear discriminant analysis (LDA) based methods suffer from the disadvantage that their optimality criteria are not directly related to the classification ability of the obtained feature representation. Moreover, their classification accuracy is affected by the “small sample size” (SSS) problem often encountered in FR tasks. In this short paper, we propose a new algorithm that deals with both of these shortcomings in an efficient and cost-effective manner. The method proposed here is compared, in terms of classification accuracy, to other commonly used FR methods on two face databases. Results indicate that the performance of the proposed method is overall superior to that of traditional FR approaches, such as the Eigenfaces, Fisherfaces and D-LDA methods.

Keywords: Face Recognition, Linear Discriminant Analysis (LDA), direct LDA, fractional-step LDA, principal component analysis (PCA), Eigenfaces, Fisherfaces.

I. INTRODUCTION

Feature selection for face representation is one of the central issues in face recognition (FR) systems. Among the various solutions to the problem (see [1], [2] for a survey), the most successful seem to be the appearance-based approaches, which generally operate directly on images or appearances of face objects and process the images as 2D holistic patterns, thereby avoiding difficulties associated with 3D modeling and shape or landmark detection [2]. Principal component analysis (PCA) and linear discriminant analysis (LDA) are two powerful tools used for data reduction and feature extraction in the appearance-based approaches. Two state-of-the-art FR methods, Eigenfaces [3] and Fisherfaces [4], built on the two techniques respectively, have proved very successful. It is generally believed that, when it comes to solving problems of pattern classification, LDA-based algorithms outperform PCA-based ones, since the former optimizes the low-dimensional representation of the objects with a focus on the most discriminant feature extraction while the latter merely achieves object reconstruction [4], [5], [6]. However, the classification performance of traditional LDA is often degraded by the fact that its separability criterion is not directly related to classification accuracy in the output space [7]. A solution to the problem is to introduce weighting functions into LDA: object classes that are closer together in the output space, and thus can potentially result in mis-classification, should be more heavily weighted in the input space.
This idea has been further extended in [7] with the introduction of the fractional-step linear discriminant analysis algorithm (F-LDA), where the dimensionality reduction is implemented in a few small fractional steps, allowing the relevant distances to be more accurately weighted. Although the method has been successfully tested on low-dimensional patterns with dimensionality $D \leq 5$, it cannot be directly applied to high-dimensional patterns, such as the face images used in this short paper (note that a typical image pattern of size $(112 \times 92)$ (Fig. 2) results in a vector of dimension $D = 10304$), due to two factors: (1) the computational difficulty of the eigen-decomposition of matrices in the high-dimensional image space; (2) the degenerated scatter matrices caused by the so-called “small sample size” (SSS) problem, which widely exists in FR tasks where the number of training samples is smaller than the dimensionality of the samples [4], [5]. The traditional solution to the SSS problem requires the incorporation of a PCA step into the LDA framework. In this approach, PCA is used as a pre-processing step for dimensionality reduction, so as to discard the null space of the within-class scatter matrix of the training data set; LDA is then performed in the lower-dimensional PCA subspace [4]. However, it has been shown that the discarded null space may contain significant discriminatory information [5], [6]. To prevent this from happening, solutions without a separate PCA step, called direct LDA (D-LDA) methods, have been presented recently [5], [6]. In the D-LDA framework, data are processed directly in the original high-dimensional input space, avoiding the loss of significant discriminatory information due to the PCA pre-processing step.

In this short paper, we introduce a new feature representation method for FR tasks. The method combines the strengths of the D-LDA and F-LDA approaches while at the same time overcoming their shortcomings and limitations. In the proposed framework, hereafter DF-LDA, we first lower the dimensionality of the original input space by introducing a new variant of D-LDA that results in a low-dimensional, SSS-free subspace where the most discriminatory features are preserved. The variant of D-LDA developed here utilizes a modified Fisher’s criterion to avoid a problem resulting from the usage of the zero eigenvalues of the within-class scatter matrix as possible divisors in [6]. Also, a weighting function is introduced into the proposed variant of D-LDA, so that a subsequent F-LDA step can be applied to carefully re-orient the SSS-free subspace, resulting in a set of optimal discriminant features for face representation.

II. THE DIRECT FRACTIONAL-STEP LDA (DF-LDA)

The problem of low-dimensional feature representation in FR systems can be stated as follows: given a set of $L$ training face images $\{z_i\}_{i=1}^L$, each represented as a vector of length $N (= I_w \times I_h)$, i.e. $z_i \in \mathbb{R}^N$, belonging to one of $C$ classes $\{Z_c\}_{c=1}^C$, where $(I_w \times I_h)$ is the image size and $\mathbb{R}^N$ denotes an $N$-dimensional real space, the objective is to find a transformation $\varphi$, based on optimization of certain separability criteria, that produces a representation $y_i = \varphi(z_i)$, where $y_i \in \mathbb{R}^M$ with $M \ll N$. The representation $y_i$ should enhance the separability of the different face objects under consideration.
A. Where are the optimal discriminant features?

Let $S_{BTW}$ and $S_{WTH}$ denote the between- and within-class scatter matrices of the training image set, respectively. LDA-like approaches, such as the Fisherface method [4], find a set of basis vectors, denoted by $\Psi$, that maximizes the ratio between $S_{BTW}$ and $S_{WTH}$:
$$\Psi = \arg \max_\Psi \frac{\left|\Psi^T S_{BTW} \Psi\right|}{\left|\Psi^T S_{WTH} \Psi\right|} \hspace{1cm} (1)$$
Assuming that $S_{WTH}$ is non-singular, the basis vectors $\Psi$ correspond to the first $M$ eigenvectors with the largest eigenvalues of $(S_{WTH}^{-1} S_{BTW})$. The $M$-dimensional representation is then obtained by projecting the original face images onto the subspace spanned by the $M$ eigenvectors. However, a degenerated $S_{WTH}$ in (1) may be generated due to the SSS problem, which widely exists in most FR tasks. It was noted in the introduction that a possible solution is to apply a PCA step in order to remove the null space of $S_{WTH}$ prior to the maximization in (1). Nevertheless, it has recently been shown that the null space of $S_{WTH}$ may contain significant discriminatory information [5], [6]. As a consequence, some significant discriminatory information may be lost due to this pre-processing PCA step. The basic premise of the D-LDA methods, which attempt to solve the SSS problem without a PCA step, is that the null space of $S_{WTH}$ contains significant discriminant information if the projection of $S_{BTW}$ is not zero in that direction, and that no significant information is lost if the null space of $S_{BTW}$ is discarded. Assuming that $\mathcal{A}$ and $\mathcal{B}$ represent the null spaces of $S_{BTW}$ and $S_{WTH}$ respectively, while $\mathcal{A}' = \mathbb{R}^N - \mathcal{A}$ and $\mathcal{B}' = \mathbb{R}^N - \mathcal{B}$ are their complement spaces, the optimal discriminant subspace sought by D-LDA is the intersection space $(\mathcal{A}' \cap \mathcal{B})$. The method in [6] first diagonalizes $S_{BTW}$ to find $\mathcal{A}'$ when seeking the solution of (1), while [5] first diagonalizes $S_{WTH}$ to find $\mathcal{B}$. Although the two methods do not appear significantly different, it may be intractable to calculate $\mathcal{B}$ when the size of $S_{WTH}$ is large, which is the case in most FR applications. For example, a typical face pattern of size $(112 \times 92)$ results in $S_{WTH}$ and $S_{BTW}$ matrices of dimensionality $(10304 \times 10304)$. Fortunately, the rank of $S_{BTW}$ is determined by $\mathrm{rank}(S_{BTW}) = \min(N, C - 1)$, with $C$ the number of image classes, which is usually a small value in most FR tasks, e.g. $C = 40$ in the ORL database, resulting in $\mathrm{rank}(S_{BTW}) = 39$. $\mathcal{A}'$ can thus be found by solving for the eigenvectors of a $(39 \times 39)$ matrix rather than the original $(10304 \times 10304)$ matrix, through an algebraic transformation [3], [6]. Then $(\mathcal{A}' \cap \mathcal{B})$ can be obtained by solving for the null space of the projection of $S_{WTH}$ onto $\mathcal{A}'$, where the projection is a small matrix of size $(39 \times 39)$. Based on the analysis given above, the most significant discriminant information exists in the intersection subspace $(\mathcal{A}' \cap \mathcal{B})$, which is usually low-dimensional, so that it becomes possible to further apply sophisticated techniques, such as the rotation strategy of the LDA subspace used in F-LDA, to derive the optimal discriminant features from the intersection subspace.
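The small-matrix computation described above can be sketched as follows; shapes and data are hypothetical placeholders, and only the algebraic trick is the point:

```python
import numpy as np

# Sketch of finding A' without ever forming the (N x N) between-class
# scatter: with S_BTW = Phi_b Phi_b^T and Phi_b of size (N x C), the
# non-null eigenvectors of S_BTW are recovered from the small (C x C)
# matrix Phi_b^T Phi_b.  The random Phi_b below is a stand-in for the
# weighted class-mean matrix of the paper.
N, C = 10304, 40
rng = np.random.default_rng(0)
Phi_b = rng.standard_normal((N, C))

lam, E = np.linalg.eigh(Phi_b.T @ Phi_b)   # (C x C) problem, not (N x N)
keep = lam > 1e-10                          # discard the null space of S_BTW
V = Phi_b @ E[:, keep]                      # eigenvectors of S_BTW spanning A'
print(V.shape)                              # (10304, m), m <= C - 1 for
                                            # centered class means
```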
B. A Variant of D-LDA

The maximization process in (1) is not directly linked to the classification error, which is the criterion of performance used to measure the success of the FR procedure. Modified versions of the method, such as the F-LDA approach, use a weighting function in the input space to penalize those classes that are close together and can potentially lead to mis-classifications in the output space. Thus, the weighted between-class scatter matrix can be expressed as:
$$\hat{S}_{BTW} = \sum_{i=1}^{C} \phi_i \phi_i^T \hspace{1cm} (2)$$
where $\phi_i = (L_i/L)^{1/2} \sum_{j=1}^{C} (w(d_{ij}))^{1/2} (\bar{z}_i - \bar{z}_j)$, $\bar{z}_i$ is the mean of class $Z_i$, $L_i$ is the number of elements in $Z_i$, and $d_{ij} = \| \bar{z}_i - \bar{z}_j \|$ is the Euclidean distance between the means of class $i$ and class $j$. The weighting function $w(d_{ij})$ is a monotonically decreasing function of the distance $d_{ij}$, the only constraint being that the weight should drop faster than the Euclidean distance between the means of class $i$ and class $j$; the authors in [7] recommend weighting functions of the form $w(d_{ij}) = (d_{ij})^{-2p}$ with $p = 2, 3, \ldots$. Most LDA-based algorithms, including Fisherfaces [4] and D-LDA [6], utilize the conventional Fisher’s criterion given by (1). In this work we propose the utilization of a variant of this conventional metric, expressed as follows:
$$\Psi = \arg \max_{\Psi} \frac{\left| \Psi^T \hat{S}_{BTW} \Psi \right|}{\left| \Psi^T S_{TOT} \Psi \right|} \hspace{1cm} (3)$$
where $S_{TOT} = S_{WTH} + \hat{S}_{BTW}$, and $\hat{S}_{BTW}$ is the weighted between-class scatter matrix defined in (2). This modified Fisher’s criterion can be proven equivalent to the conventional one by invoking the analysis of [11], where it was shown that, $\forall x \in \mathbb{R}^N$, if $f(x) \geq 0$, $g(x) > 0$, $f(x) + g(x) > 0$, $h_1(x) = f(x)/g(x)$ and $h_2(x) = f(x)/(f(x) + g(x))$, then $h_1(x)$ attains its maximum (including positive infinity) at a point $x_0 \in \mathbb{R}^N$ iff $h_2(x)$ attains its maximum at $x_0$. For the reasons explained in Section II-A, we start by solving the eigenvalue problem of $\hat{S}_{BTW}$. It is intractable to directly compute the eigenvectors of $\hat{S}_{BTW}$, which is a large $(N \times N)$ matrix. Fortunately, the first $m$ ($\leq C - 1$) most significant eigenvectors of $\hat{S}_{BTW}$, which correspond to non-zero eigenvalues, can be indirectly derived from the eigenvectors of the $(C \times C)$ matrix $(\Phi_b^T \Phi_b)$, where $\Phi_b = [\phi_1 \ldots \phi_C]$ [3]. Let $\lambda_i$ and $\mathbf{e}_i$ be the $i$-th eigenvalue and its corresponding eigenvector of $(\Phi_b^T \Phi_b)$, $i = 1 \cdots C$, sorted in decreasing order of the eigenvalues. Since $(\Phi_b \Phi_b^T)(\Phi_b \mathbf{e}_i) = \lambda_i (\Phi_b \mathbf{e}_i)$, $\mathbf{v}_i = \Phi_b \mathbf{e}_i$ is an eigenvector of $\hat{S}_{BTW}$. To remove the null space of $\hat{S}_{BTW}$, only the first $m$ ($\leq C - 1$) eigenvectors, $\mathbf{V} = [\mathbf{v}_1 \cdots \mathbf{v}_m] = \Phi_b \mathbf{E}_m$, whose corresponding eigenvalues are greater than 0, are used, where $\mathbf{E}_m = [\mathbf{e}_1 \ldots \mathbf{e}_m]$. It is not difficult to see that $\mathbf{V}^T \hat{S}_{BTW} \mathbf{V} = \Lambda_b$, with $\Lambda_b = \mathrm{diag}[\lambda_1^2 \cdots \lambda_m^2]$, an $(m \times m)$ diagonal matrix. Let $\mathbf{U} = \mathbf{V} \Lambda_b^{-1/2}$.
Projecting $\hat{S}_{BTW}$ and $S_{TOT}$ onto the subspace spanned by $\mathbf{U}$, we have $\mathbf{U}^T \hat{S}_{BTW} \mathbf{U} = \mathbf{I}$ and $\mathbf{U}^T S_{TOT} \mathbf{U}$. We then diagonalize $\mathbf{U}^T S_{TOT} \mathbf{U}$, which is a tractable matrix of size $(m \times m)$. Let $\mathbf{p}_i$ be the $i$-th eigenvector of $\mathbf{U}^T S_{TOT} \mathbf{U}$, $i = 1 \cdots m$, sorted in increasing order of the corresponding eigenvalues $\lambda'_i$. In the set of ordered eigenvectors, those corresponding to the smallest eigenvalues maximize the ratio in (1) and should be considered the most discriminatory features. We can discard the eigenvectors with the largest eigenvalues, and denote the $M'$ ($\leq m$) selected eigenvectors by $\mathbf{P} = [\mathbf{p}_1 \cdots \mathbf{p}_{M'}]$. Defining the matrix $\mathbf{Q} = \mathbf{U}\mathbf{P}$, we obtain $\mathbf{Q}^T S_{TOT} \mathbf{Q} = \Lambda_w$, with $\Lambda_w = \mathrm{diag}[\lambda'_1 \cdots \lambda'_{M'}]$, an $(M' \times M')$ diagonal matrix. Based on the derivation presented above, a set of optimal discriminant feature basis vectors can be derived through $\Gamma = \mathbf{Q} \Lambda_w^{-1/2}$. To facilitate comparison, it should be mentioned at this point that the D-LDA method of [6] uses the conventional Fisher’s criterion of (1), with $S_{TOT}$ replaced by $S_{WTH}$. However, since the subspace spanned by $\Gamma$ contains the intersection space $(\mathcal{A}' \cap \mathcal{B})$, it is possible that zero eigenvalues exist in $\Lambda_w$. To prevent this from happening, a heuristic threshold was introduced in [6]: a small threshold value $\epsilon$ was set, and any value below $\epsilon$ was adjusted to $\epsilon$. Obviously, performance heavily depends on the proper choice of the artificial threshold $\epsilon$, which is done in a heuristic manner [6]. Unlike the method in [6], owing to the modified Fisher’s criterion of (3), the non-singularity of $\Lambda_w = Q^T S_{TOT} Q$ can be guaranteed by the following lemma.

**Lemma 1:** Suppose $B$ is a real matrix of size $(N \times N)$ that can be represented as $B = \Phi \Phi^T$, where $\Phi$ is a real matrix of size $(N \times M)$. Then the matrix $(I + B)$ is positive definite, i.e. $I + B > 0$, where $I$ is the $(N \times N)$ identity matrix.

**Proof:** Since $B^T = B$, $I + B$ is a real symmetric matrix. Let $x$ be any non-zero $(N \times 1)$ real vector; then $x^T (I + B)x = x^T x + x^T Bx = x^T x + (\Phi^T x)^T (\Phi^T x) > 0$. According to [12], a matrix $I + B$ satisfying this condition is positive definite, i.e. $I + B > 0$. ■

Similarly to $\hat{S}_{BTW}$, $S_{WTH}$ can be expressed as $S_{WTH} = \Phi_w \Phi_w^T$, and then $U^T S_{WTH} U = (U^T \Phi_w)(U^T \Phi_w)^T$. Since $U^T \hat{S}_{BTW} U = I$ and $(U^T S_{WTH} U)$ is real symmetric, it follows from Lemma 1 that $(U^T S_{TOT} U)$ is positive definite, and thus $\Lambda_w = Q^T S_{TOT} Q$ is non-singular.

C. Rotation and re-orientation of the D-LDA subspace

Through the enhanced D-LDA step discussed above, a low-dimensional, SSS-free subspace spanned by $\Gamma$ has been derived without losing the most important information for discrimination purposes. In this subspace, $S_{TOT}$ is non-singular and has been whitened, since $\Gamma^T S_{TOT} \Gamma = I$. Thus, an F-LDA step can now be safely applied to further reduce the dimensionality from $M'$ to the required $M$.
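For concreteness, the enhanced D-LDA step just derived (Steps 1 to 5 of the pseudo-code in Fig. 1) can be sketched in NumPy as below. This is a minimal illustration under our own naming (`dlda_variant`), not the authors' implementation:

```python
import numpy as np

def dlda_variant(X, y, M_prime, p=2):
    """Hedged sketch of the enhanced D-LDA step (Sec. II-B).
    X: (L, N) training images as row vectors, y: (L,) class labels,
    M_prime: output dimensionality, weighting w(d) = d^(-2p)."""
    classes = np.unique(y)
    L, C = len(y), len(classes)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])    # (C, N)

    # Weighted between-class "square roots" phi_i of Eq. (2).
    d = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    w = np.where(d > 0, d, 1.0) ** (-2 * p)                        # w(d_ij)
    np.fill_diagonal(w, 0.0)
    Li = np.array([(y == c).sum() for c in classes])
    Phi_b = np.stack([np.sqrt(Li[i] / L) *
                      (np.sqrt(w[i])[:, None] * (means[i] - means)).sum(axis=0)
                      for i in range(C)]).T                        # (N, C)

    # Small (C x C) eigenproblem; keep the non-null directions of S_BTW-hat.
    lam, E = np.linalg.eigh(Phi_b.T @ Phi_b)
    keep = lam > 1e-10
    V = Phi_b @ E[:, keep]
    U = V / lam[keep]         # U = V Lambda_b^{-1/2}, Lambda_b = diag(lam^2)

    # Whitened projection of S_TOT: U^T S_TOT U = I + U^T S_WTH U.
    Xc = X - means[np.searchsorted(classes, y)]                    # centered
    S_tot_U = (U.T @ Xc.T) @ (Xc @ U) + np.eye(U.shape[1])
    mu, P = np.linalg.eigh(S_tot_U)    # ascending: smallest = most discriminant
    Gamma = (U @ P[:, :M_prime]) / np.sqrt(mu[:M_prime])
    return Gamma                       # feature basis: x = Gamma^T z
```

The subsequent F-LDA re-orientation (Step 6 of Fig. 1) would then operate on the $M'$-dimensional outputs $\Gamma^T z_i$, which is what makes the fractional steps computationally feasible here.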
To apply this F-LDA step, we first project the original face images into the $M'$-dimensional subspace, obtaining the representations $x_i = \Gamma^T z_i$, $i = 1, 2, \ldots, L$. Let $S_b$ be the between-class scatter matrix of $\{x_i\}_{i=1}^L$, and let $\gamma_{M'}$ be the $M'$-th eigenvector of $S_b$, corresponding to the smallest eigenvalue of $S_b$. This eigenvector will be discarded when the dimensionality is reduced from $M'$ to $(M' - 1)$. A problem may be encountered during this dimensionality reduction procedure: if classes $Z_i$ and $Z_j$ are well separated in the $M'$-dimensional input space, this produces a very small $w(d_{ij})$; as a result, the two classes may heavily overlap in the $(M' - 1)$-dimensional output space, which is orthogonal to $\gamma_{M'}$. To avoid this problem, a kind of “automatic gain control” is introduced into the weighting procedure in F-LDA [7], where the dimensionality is reduced from $M'$ to $(M' - 1)$ in $r \geq 1$ fractional steps instead of a single step. In each step, $S_b$ and its eigenvectors are recomputed based on the changes of $w(d_{ij})$ in the output space, so that the $(M' - 1)$-dimensional subspace is re-oriented and severe overlap between classes in the output space is avoided; $\gamma_{M'}$ is not discarded until all $r$ iterations are done. It should be noted at this point that the approach of [7] has only been applied in small-dimensionality pattern spaces. To the best of the authors’ knowledge, the work reported here constitutes the first attempt to introduce fractional re-orientation in a realistic application involving large-dimensionality spaces. This becomes possible due to the integrated structure of the DF-LDA algorithm, a pseudo-code implementation of which can be found in Fig. 1. The effect of the above rotation strategy of the D-LDA subspace is illustrated in Fig. 3, where the first two most significant features of each image, extracted by PCA, D-LDA (the variant proposed in Section II-B) and DF-LDA respectively, are visualized. The PCA-based representation shown in Fig. 3 (left) is optimal in terms of image reconstruction and thereby provides some insight into the original structure of the image distribution, which is highly complex and non-separable. Although the separability of subjects is greatly improved in the D-LDA-based subspace, some classes still overlap, as shown in Fig. 3 (middle). It can be seen from Fig. 3 (right) that the separability is further enhanced, and different classes tend to be equally spaced, after a few fractional (re-orientation) steps.

III. EXPERIMENTAL RESULTS

Two popular face databases, the ORL [8] and the UMIST [13], are used to demonstrate the effectiveness of the proposed DF-LDA framework. The ORL database contains 40 distinct persons, with 10 images per person. The images are taken at different time instances, with varying lighting conditions, facial expressions and facial details (glasses/no glasses). All persons are in an up-right, frontal position, with tolerance for some side movement. The UMIST repository is a multi-view database, consisting of 575 images of 20 people, each covering a wide range of poses from profile to frontal views. Fig. 2 depicts some samples contained in the two databases, where each image is scaled to $(112 \times 92)$, resulting in an input dimensionality of $N = 10304$. To start the FR experiments, each of the two databases is randomly partitioned into a training set and a test set with no overlap between the two.
The partition of the ORL database is done following the recommendation of [14], [15], which calls for 5 images per person randomly chosen for training and the other 5 for testing. Thus, a training set of 200 images and a test set of 200 images are created. For the UMIST database, 8 images per person are randomly chosen to produce a training set of 160 images; the remaining 415 images are used to form the test set. In the following experiments, the figures of merit are error rates averaged over 5 runs (4 runs in [14] and 3 runs in [15]), each run being performed on such a random partition of the two databases. It is worth mentioning here that both experimental setups introduce SSS conditions, since the number of training samples is in both cases much smaller than the dimensionality of the input space. Also, we have observed some partition cases where zero eigenvalues occurred in $\Lambda_w$, as discussed in Section II-B; in these cases, in contrast to the failure of D-LDA [6], DF-LDA was still able to perform well. In addition to D-LDA [6], DF-LDA is compared against two popular feature selection methods, namely Eigenfaces [3] and Fisherfaces [4]. For each of the four methods, the FR procedure consists of: (i) a feature extraction step, where four kinds of feature representations of each training or test sample are extracted by projecting the sample onto the four feature spaces generated by Eigenfaces, Fisherfaces, D-LDA and DF-LDA respectively; (ii) a classification step, in which each feature representation obtained in the first step is fed into a simple nearest-neighbor classifier. It should be noted at this point that, since the focus of this short paper is on feature extraction, a very simple classifier, namely nearest neighbor, is used in step (ii). We anticipate that the classification accuracy of all four methods compared here would improve if a more sophisticated classifier were used instead; however, such an experiment is beyond the scope of this short paper. The error rate curves obtained for the four methods are shown in Fig. 4 as functions of the number of feature vectors. The number of fractional steps used in DF-LDA is $r = 20$, and the weighting function utilized is $w(d) = d^{-8}$. From Fig. 4, it can be seen that the performance of DF-LDA is overall superior to that of the other three methods on both databases. Let $\alpha_i$ and $\beta_i$ be the error rates of DF-LDA and of one of the other three methods respectively, where $i$ is the number of feature vectors. We can obtain the average percentage of the error rate of DF-LDA with respect to that of the other method by $\mathcal{E}_{orl} = \frac{1}{21}\sum_{i=5}^{25} (\alpha_i / \beta_i)$ for the ORL database and $\mathcal{E}_{umist} = \frac{1}{10}\sum_{i=3}^{12} (\alpha_i / \beta_i)$ for the UMIST database. The results summarized in Table I indicate that the average error rate of DF-LDA is approximately 50.5%, 43% and 80% of that of Eigenfaces, Fisherfaces and D-LDA respectively. It is of interest to observe the performance of Eigenfaces versus that of Fisherfaces: not surprisingly, Eigenfaces outperform Fisherfaces on the ORL database, because Fisherfaces may lose significant discriminant information due to the intermediate PCA step; a similar observation has also been made in [10], [16]. The weighting function $w(d_{ij})$ influences the performance of the DF-LDA method. For different feature extraction tasks, appropriate values for the weighting function exponent should be determined through experimentation using the available training set.
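As a concrete illustration of the two-step procedure (i)-(ii) above, a minimal sketch using the illustrative `dlda_variant` helper given earlier and a 1-NN classifier follows; the function name and default parameters are ours, not the authors':

```python
import numpy as np

# Hedged sketch of the FR procedure: (i) feature extraction, (ii) simple
# 1-nearest-neighbor classification.  `dlda_variant` is the helper sketched
# in Sec. II-B; any of the four compared feature extractors could be
# substituted for it.
def error_rate(X_train, y_train, X_test, y_test, M_prime=22):
    Gamma = dlda_variant(X_train, y_train, M_prime)      # (N, M') basis
    F_train, F_test = X_train @ Gamma, X_test @ Gamma    # project samples
    # 1-NN: each test feature takes the label of its closest training feature.
    d2 = ((F_test[:, None, :] - F_train[None, :, :]) ** 2).sum(axis=2)
    y_pred = y_train[np.argmin(d2, axis=1)]
    return float((y_pred != y_test).mean())              # error rate per run
```

Averaging such runs over the 5 random partitions gives the error-rate figures of merit plotted in Fig. 4.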
Returning to the weighting function, it appears that there is a set of values for which good results can be obtained for a wide range of applications. Following the recommendation in [7], we examine the performance of the DF-LDA method for $w(d_{ij}) \in \{d^{-4}, d^{-8}, d^{-12}, d^{-16}\}$. Results obtained through the utilization of these weighting functions are depicted in Fig. 5, where error rates are plotted against the number of feature vectors selected (output space dimensionality). The lowest error rate on the ORL database is approximately 4.0%; it is obtained using the weighting function $w(d) = d^{-16}$ and a set of $M = 22$ feature basis vectors, a result comparable to the best results reported previously in the literature [14], [15].

IV. CONCLUSIONS

In this short paper a new feature extraction method for face recognition tasks has been proposed. The method introduced here utilizes the well-known framework of linear discriminant analysis and can be considered a generalization of a number of techniques currently in use. The new method utilizes a new variant of D-LDA to safely remove the null space of the between-class scatter matrix, and then applies a fractional-step LDA scheme to enhance the discriminatory power of the obtained D-LDA feature space. The effectiveness of the proposed method has been demonstrated through experimentation using two popular face databases. The DF-LDA method presented here is a linear pattern recognition method. Compared with nonlinear models, a linear model is rather robust against noise and most likely will not overfit. Although it has been shown that the distribution of face patterns is highly non-convex and complex in most cases, linear methods are still able to provide cost-effective solutions to FR tasks through integration with other strategies, such as the principle of “divide and conquer,” in which a large nonlinear problem is divided into a few smaller, locally linear sub-problems. The development of mixtures of localized DF-LDA for the problem of large-size face recognition, as well as the development of a non-linear DF-LDA through the utilization of kernel machine techniques, are research topics under current investigation.

ACKNOWLEDGMENTS

The authors would like to thank Dr. Daniel Graham and Dr. Nigel Allinson for providing the UMIST face database, and AT&T Laboratories Cambridge for providing the ORL face database.

REFERENCES

[1] R. Chellappa, C.L. Wilson, and S. Sirohey, “Human and machine recognition of faces: A survey”, *Proceedings of the IEEE*, vol. 83, pp. 705–740, 1995. [2] M. Turk, “A random walk through eigenspace”, *IEICE Trans. Inf. & Syst.*, vol. E84-D, no. 12, pp. 1586–1695, December 2001. [3] M. Turk and A. P. Pentland, “Eigenfaces for recognition”, *Journal of Cognitive Neuroscience*, vol. 3, no. 1, pp. 71–86, 1991. [4] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: recognition using class specific linear projection”, *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 19, no. 7, pp. 711–720, 1997. [5] L.-F. Chen, H.-Y. Mark Liao, M.-T. Ko, J.-C. Lin, and G.-J. Yu, “A new LDA-based face recognition system which can solve the small sample size problem”, *Pattern Recognition*, vol. 33, pp. 1713–1726, 2000. [6] H. Yu and J. Yang, “A direct LDA algorithm for high-dimensional data with application to face recognition”, *Pattern Recognition*, vol. 34, pp. 2067–2070, 2001. [7] R. Lotlikar and R.
Kothari, “Fractional-step dimensionality reduction”, *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 22, no. 6, pp. 623–627, 2000. [8] ORL face database, *AT&T Laboratories Cambridge*, website: http://www.cam-orl.co.uk/facedatabase.html. [9] D. L. Swets and J. Weng, “Using discriminant eigenfeatures for image retrieval”, *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 18, pp. 831–836, 1996. [10] C. Liu and H. Wechsler, “Evolutionary pursuit and its application to face recognition”, *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 22, no. 6, pp. 570–582, June 2000. [11] K. Liu, Y. Q. Cheng, J. Y. Yang, and X. Liu, “An efficient algorithm for Foley-Sammon optimal set of discriminant vectors by algebraic method”, *Int. J. Pattern Recog. Artif. Intell.*, vol. 6, pp. 817–829, 1992. [12] R. A. Horn and C. R. Johnson, *Matrix Analysis*, Cambridge University Press, 1992. [13] D. B. Graham and N. M. Allinson, “Characterizing virtual eigensignatures for general purpose face recognition”, in *Face Recognition: From Theory to Applications, NATO ASI Series F, Computer and Systems Sciences*, H. Wechsler, P. J. Phillips, V. Bruce, F. Fogelman-Soulie, and T. S. Huang, Eds., vol. 163, pp. 446–456, 1998. [14] S. Z. Li and J. Lu, “Face recognition using the nearest feature line method”, *IEEE Transactions on Neural Networks*, vol. 10, pp. 439–443, 1999. [15] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: A convolutional neural network approach”, *IEEE Transactions on Neural Networks*, vol. 8, no. 1, pp. 98–113, 1997. [16] A. M. Martinez and A. C. Kak, “PCA versus LDA”, *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 23, no. 2, pp. 228–233, 2001.

Table I. Average error-rate ratios of DF-LDA relative to the other three methods.

| Methods | Eigenfaces | Fisherfaces | D-LDA |
|------------------|------------|-------------|-------|
| $\mathcal{E}_{orl}$ | 74.18% | 38.51% | 80.03% |
| $\mathcal{E}_{umist}$ | 26.75% | 47.68% | 79.6% |
| $(\mathcal{E}_{orl} + \mathcal{E}_{umist})/2$ | 50.47% | 43.1% | 79.82% |

**Input:** A set of training face images $\{z_i\}_{i=1}^L$, each of which is represented as an $N$-dimensional vector. **Output:** A low-dimensional representation $y$ of $z$ with enhanced discriminatory power, after a transformation $y = \varphi(z)$. **Algorithm:** Step 1. Calculate those eigenvectors of $\Phi_b^T \Phi_b$ with non-zero eigenvalues: $$E_m = [e_1 \ldots e_m],$$ where $m \leq C - 1$ and $\Phi_b$ is from $\hat{S}_{BTW} = \Phi_b \Phi_b^T$. Step 2. Calculate the first $m$ most significant eigenvectors and their corresponding eigenvalues of $\hat{S}_{BTW}$ by $V = \Phi_b E_m$ and $\Lambda_b = V^T \hat{S}_{BTW} V$. Step 3. Let $U = V \Lambda_b^{-1/2}$. Calculate the eigenvectors $P$ of $U^T S_{TOT} U$. Step 4. Optionally discard those eigenvectors in $P$ with the largest eigenvalues. Let $P_{M'}$ and $\Lambda_w$ be the $M' (\leq m)$ selected eigenvectors and their corresponding eigenvalues. Step 5. Map all face images $\{z_i\}_{i=1}^L$ to the $M'$-dimensional subspace spanned by $\Gamma = UP_{M'} \Lambda_w^{-1/2}$, obtaining $\{x_i\}_{i=1}^L$, where $x_i = \Gamma^T z_i$. Step 6. Further reduce the dimensionality of $x_i$ from $M'$ to $M$ by performing an F-LDA on $\{x_i\}_{i=1}^L$, and let $W$ (size $M' \times M$) be the bases of the output space. Step 7. The optimal discriminant feature representation of $z$ can be obtained by $y = \varphi(z) = (\Gamma W)^T z$. Fig. 1. Pseudo-code for the computation of the DF-LDA algorithm
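A compact NumPy rendering of Steps 1-5 may help in reading the pseudo-code. This is a minimal sketch, not the authors' implementation: the scatter normalisations (the $1/L$ factor in $S_{TOT}$ and the $\sqrt{L_c/L}$ scaling inside $\Phi_b$) and all names are assumptions, and the F-LDA refinement of Steps 6-7 is omitted.

```python
import numpy as np

def df_lda_projection(Z, labels, M_prime):
    """Sketch of Steps 1-5: Z is N x L (one image per column), labels is a
    length-L array of class indices; returns Gamma with x_i = Gamma.T @ z_i."""
    N, L = Z.shape
    mean_all = Z.mean(axis=1, keepdims=True)
    # Phi_b built so that S_BTW = Phi_b @ Phi_b.T (scaling is an assumption).
    cols = [np.sqrt(np.sum(labels == c) / L) *
            (Z[:, labels == c].mean(axis=1, keepdims=True) - mean_all)
            for c in np.unique(labels)]
    Phi_b = np.hstack(cols)                      # N x C
    # Step 1: eigenvectors of the small C x C matrix Phi_b.T @ Phi_b.
    vals, E = np.linalg.eigh(Phi_b.T @ Phi_b)
    E_m = E[:, vals > 1e-10]                     # keep non-zero eigenvalues only
    # Step 2: eigenvectors of S_BTW and their eigenvalues Lambda_b.
    V = Phi_b @ E_m                              # N x m
    B = Phi_b.T @ V
    Lam_b = B.T @ B                              # equals V.T @ S_BTW @ V
    # Step 3: whiten S_BTW, then diagonalize the projected total scatter
    # U.T @ S_TOT @ U without ever forming the N x N matrix S_TOT.
    U = V / np.sqrt(np.diag(Lam_b))
    A = U.T @ (Z - mean_all) / np.sqrt(L)        # m x L
    w_vals, P = np.linalg.eigh(A @ A.T)
    # Step 4: discard the largest eigenvalues, keeping the M' smallest.
    order = np.argsort(w_vals)[:M_prime]
    P_M, Lam_w = P[:, order], w_vals[order]
    # Step 5: Gamma = U @ P_M' @ Lambda_w^(-1/2).
    return U @ (P_M / np.sqrt(Lam_w))
```

Note that with these conventions $S_{TOT} = \hat{S}_{BTW} + S_{WTH}$ and $U$ whitens $\hat{S}_{BTW}$, so the eigenvalues collected in $\Lambda_w$ are bounded below by 1 and the final $\Lambda_w^{-1/2}$ scaling is always defined; the fractional-step refinement of Steps 6-7 would then operate on the resulting $\{x_i\}$.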
Fig. 2. Some sample images of 3 persons randomly chosen from the two databases, left: the ORL, right: the UMIST. Fig. 3. Distribution of 170 face images of 5 subjects (classes) randomly selected from the UMIST database in left: PCA-based subspace, middle: D-LDA-based subspace and right: DF-LDA-based subspace. Fig. 4. Comparison of error rates obtained by the four FR methods as functions of the number of feature vectors, where $w(d) = d^{-12}$ is used in DF-LDA for the ORL, $w(d) = d^{-8}$ for the UMIST, and $r = 20$ for both. Fig. 5. Error rates of DF-LDA as functions of the number of feature vectors with $r = 20$ and different weighting functions.
$E^2XB$: A DOMAIN-SPECIFIC STRING MATCHING ALGORITHM FOR INTRUSION DETECTION K. G. Anagnostakis*, S. Antonatos, E. P. Markatos, M. Polychronakis† Institute for Computer Science (ICS) Foundation for Research and Technology - Hellas (FORTH) P.O. Box 1385 - Heraklio, Crete, GR-711-10 GREECE {kanag,antonat,markatos,mikepo}@ics.forth.gr appears in the Proceedings of the 18th IFIP International Information Security Conference, 2003 Abstract We consider the problem of string matching in Network Intrusion Detection Systems (NIDSes). String matching computations dominate the overall cost of running a NIDS, despite the use of efficient general-purpose string matching algorithms. Aiming at increasing the efficiency and capacity of NIDSes, we have designed $E^2xB$, a string matching algorithm that is tailored to the specific characteristics of NIDS string matching. We have implemented $E^2xB$ in snort, a popular open-source NIDS, and present experiments comparing $E^2xB$ with the current best alternative solution. Our results suggest that for typical traffic patterns $E^2xB$ improves NIDS performance by 10%-36%, while for certain rulesets and traffic patterns string matching performance can be improved by as much as a factor of three. Keywords: network security, intrusion detection, string matching, network monitoring, network performance ### 1. Introduction Network Intrusion Detection Systems (NIDSes) are receiving considerable attention as a mechanism for shielding against “attempts to compromise the confidentiality, integrity, availability, or to bypass the security mechanisms of a computer network” (Bace and Mell, 2001). The typical function of a NIDS is based on a set of signatures, each describing one known intrusion threat. A NIDS examines network traffic and determines whether any signatures indicating intrusion attempts are matched. *Author is with the CIS Department, University of Pennsylvania, Email: firstname.lastname@example.org †Authors are also with the Computer Science Department, University of Crete The simplest and most common form of NIDS inspection is to match string patterns against the payload of packets captured on a network link. The use of existing efficient string matching algorithms for this purpose, such as (Boyer and Moore, 1977; Aho and Corasick, 1975), bears a significant cost: recent measurements of the snort NIDS (Roesch, 1999) on a production network show that as much as 31% of total processing is due to string matching (Fisk and Varghese, 2002). The same study also reports that in the case of Web-intensive traffic, this cost increases to as much as 80% of the total processing time. At the same time, NIDSes need to be highly efficient to keep up with increasing link speeds, and, as the number of potential threats (and associated signatures and rules) is expected to grow, the cost of string matching is likely to increase even further. These trends motivate the study of new string matching algorithms tailored to the particular requirements and characteristics of Intrusion Detection, much like domain-specific algorithms were developed for efficient routing lookups and packet classification in IP forwarding (Lakshman and Stiliadis, 1998; Gupta and McKeown, 1999). In this context, we present $E^2xB$, a string matching algorithm that is designed specifically for the relatively small input size (in the order of packet size) and small expected matching probability that are common in a NIDS environment.
These assumptions allow string matching to be enhanced by first testing the input (e.g., the payload of each packet) for missing fixed-size sub-strings of the original signature string, called elements. The false positives induced by $E^2xB$, e.g., cases with all fixed-size sub-strings of the signature showing up in arbitrary positions within the input, can then be separated from actual matches using standard string matching algorithms, such as the Boyer-Moore algorithm (Boyer and Moore, 1977). Experiments with $E^2xB$ implemented in snort show that in common cases $E^2xB$ is more efficient than existing algorithms by up to 36%, while in certain scenarios $E^2xB$ can be three times faster. This improvement is due to an overall reduction in executed instructions and, in most cases, a smaller memory footprint than existing algorithms. ### 2. Background The general problem of designing algorithms for string matching is well-researched. One of the most widely used algorithms was first proposed in (Boyer and Moore, 1977). The Boyer-Moore algorithm compares the search string with the input starting from the rightmost character of the search string. This allows the use of two heuristics that may reduce the number of comparisons needed for string matching (compared to the naive algorithm). Both heuristics are triggered on a mismatch. The first heuristic, called the bad character heuristic, works as follows: if the mismatching character appears in the search string, the search string is shifted so that the mismatching character is aligned with the rightmost position at which the mismatching character appears in the search string. If the mismatching character does not appear in the search string, the search string is shifted so that its first character is one position past the mismatching character in the input. The second heuristic, called the *good suffixes heuristic*, is also triggered on a mismatch. If the mismatch occurs in the middle of the search string, then there is a non-empty suffix that matches. The heuristic then shifts the search string up to the next occurrence of the suffix in the string. Horspool (1980) improved the Boyer-Moore algorithm with a simpler and more efficient implementation that uses only the bad-character heuristic. Aho and Corasick (1975) provided an algorithm for concurrently matching multiple strings. The set of strings is used to construct an automaton which is able to search for all strings concurrently. The automaton consumes the input one character at a time and keeps track of patterns that have (partially) matched the input. Fisk and Varghese (2002) were the first to consider the design of NIDS-specific string matching algorithms. They proposed an algorithm called Set-wise Boyer-Moore-Horspool, adapting the Boyer-Moore algorithm to simultaneously match a set of rules. This algorithm is shown to be faster than both Aho-Corasick and Boyer-Moore for medium-size pattern sets. Their experiments suggest triggering a different algorithm depending on the number of rules: Boyer-Moore-Horspool if there is only one rule; Set-wise Boyer-Moore-Horspool if there are between 2 and 100 rules, and Aho-Corasick for more than 100 rules. This heuristic has been incorporated in *snort* and provides the baseline for our comparison in Section 4. Independently of Fisk and Varghese, Coit et al.
(2002) implemented a similar algorithm in *snort*, adapting Boyer-Moore for simultaneously matching multiple strings, derived from the exact set matching algorithm of Gusfield (1997). Recently, we have proposed ExB, a precursor of $E^2xB$, providing quick negatives when the search string does not exist in the packet payload (Markatos et al., 2002). $E^2xB$ provides several improvements over ExB, the most important being a faster pre-processing phase, removing much of the overhead associated with initializing the occurrence map, and a wider set of experimental results that also highlight NIDS properties that are interesting beyond the scope of the specific algorithm. ### 3. $E^2xB$: Exclusion-based string matching We present an informal description of $E^2xB$, first in its simplest and most intuitive form and then in its more general form. $E^2xB$ is based on the following simple observation: Suppose that we want to check whether an input string $I$ contains a small string $s$. If there exists at least one character of string $s$ that is not contained in $I$, then $s$ is not a substring of $I$. The above simple observation can be used to quickly determine several cases where a given string $s$ does *not* appear in the input string $I$: **if $s$ contains at least one character that is not in $I$, then $s$ is not a substring of $I$**. However, this observation cannot be used to determine the cases where $s$ is a substring of $I$. Indeed, if every character of string $s$ belongs to input string $I$, then we should use a standard string matching algorithm (e.g., Boyer-Moore-Horspool) to confirm whether $s$ is actually a substring of $I$ or not. The cases where every character of $s$ is in $I$, but $s$ is not a substring of $I$, are called \textit{false matches}, or \textit{false positives}. This method is effective only if there is a fast way of checking whether a given character $c$ belongs to $I$ or not. We perform this check with the help of an \textit{occurrence map}. Specifically, we first \textit{pre-process} the input string $I$, and for each (8-bit) character $c$ that appears in string $I$, we mark the corresponding (i.e. $c$-th) \textit{cell} on the (256-cell) map. Although we could use a binary value to mark the mentioned cells (i.e. if the $c$-th position of the cell map is 1, then the character $c$ appears in $I$, otherwise it does not), our experiments in (Markatos et al., 2002) suggest that the cost of cleaning (i.e. filling with zeros) the cell map for each new packet can be very high. To reduce this cost, we decided to mark the cell with the (index) number of the current network packet. Thus, if the $c$-th position of the cell map contains the number of the current network packet, the character $c$ appears in $I$, otherwise it does not.\footnote{To reduce the number of bits needed to store the cell map, the numbers of network packets are limited to a predefined number of bits, which we call \textit{cell\_size}. If the number of network packets exceeds $2^{cell\_size}$, then the next packet gets the number 0.} In order to reduce the percentage of false matches, the above algorithm can be generalized for \textit{pairs} of (8-bit) characters: instead of recording the occurrence of single characters in string $I$, it is possible to record the appearance of each \textit{pair} of consecutive characters in string $I$.
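As a concrete illustration of this idea, the following is a minimal Python sketch of the pair-of-characters variant with packet-index cells. It is not snort's code (which is written in C), and the paper's preferred configuration hashes the two bytes into shorter elements rather than using the full 16-bit concatenation shown here; all names are illustrative.

```python
CELL_BITS = 8                 # the "cell_size" of the footnote above
MAP_SIZE = 1 << 16            # one cell per pair of consecutive bytes

occ = [0] * MAP_SIZE          # occurrence map, never cleared between packets
packet_no = 0                 # current packet index, wraps at 2**CELL_BITS

def preprocess(payload: bytes) -> None:
    """Mark every pair of consecutive payload bytes with the packet index."""
    global packet_no
    packet_no = (packet_no + 1) % (1 << CELL_BITS)
    # After a wrap, stale cells from 2**CELL_BITS packets ago may look
    # current; this only creates extra false positives, which the exact
    # search below filters out.
    for i in range(len(payload) - 1):
        occ[(payload[i] << 8) | payload[i + 1]] = packet_no

def maybe_contains(s: bytes) -> bool:
    """Exclusion test: False means s is certainly not in the payload."""
    for i in range(len(s) - 1):
        if occ[(s[i] << 8) | s[i + 1]] != packet_no:
            return False      # quick negative: no exact search needed
    return True               # candidate match, possibly a false positive

def match(payload: bytes, s: bytes) -> bool:
    """Confirm candidates with an exact search (Boyer-Moore in snort;
    Python's substring operator stands in for it here)."""
    return maybe_contains(s) and s in payload
```

Calling preprocess() once per packet and then match() once per rule mirrors how the exclusion test front-ends the exact matcher.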
In the matching process, instead of determining whether each character of $s$ appears in $I$, the algorithm then checks whether each pair of consecutive characters of $s$ appears in $I$. If a pair is found that does not appear in $I$, $E^2xB$ knows that $s$ is not in $I$. Generalizing further, instead of using 8-bit characters, or 16-bit pairs of characters, $E^2xB$ can use bit-strings of arbitrary length (hereafter called \textit{elements}). That is, $E^2xB$ records all (byte-aligned) bit-strings of length $x$. The element size exposes a trade-off: larger elements are likely to result in fewer false matches, but also increase the size of the occurrence map, which could, in turn, increase capacity misses and degrade performance. The pseudo-code for pre-processing \texttt{input} and for matching a string $s$ on \texttt{input} is presented in Figure 1. The main difference between $E^2xB$ and ExB is the use of cells: ExB assumed an occurrence \textit{bitmap} where each element was marked by setting the 1-bit cell to 1. This required the bitmap to be cleared for each packet, adding unnecessary overhead. A second difference lies in the way the two bytes forming an element are \textit{hashed} together: $E^2xB$ uses \textit{XOR} while ExB used \textit{OR}. Although in theory \textit{XOR} does provide a better hash than \textit{OR}, the difference in the number of collisions was found to be negligible. The value of using \textit{XOR} lies more in that \textit{XOR} instructions were found to result in slightly better performance. Finally, an important implementation detail that has been addressed in $E^2xB$ is support for case-insensitive matching, as many NIDS signatures are case-insensitive. This is done by modifying the search procedure to test for the occurrence of all four combinations of upper- and lower-case for each of the two bytes used to compute the element index. ### 4. Experimental evaluation Using trace-driven execution, we evaluate the performance of $E^2xB$ against the heuristic of (Fisk and Varghese, 2002) (denoted as FVh in the rest of this paper) and the implementation of (Boyer and Moore, 1977) in snort. ### 4.1 Environment For all the experiments we used a PC with a Pentium 4 processor running at 1.7 GHz, with an L1 cache of 8 KB and an L2 cache of 256 KB, and 512 Mbytes of main memory. The measured memory latency is 1 ns for the L1 cache, 10.9 ns for the L2 cache and 170.4 ns for the main memory, measured using lmbench (McVoy and Staelin, 1996). The host operating system is Linux (kernel version 2.4.14, RedHat 7.3). We use snort version 1.9.0 (build 205) compiled with gcc version 2.96. Each packet is checked against the “default” rule-set of the snort distribution. The ruleset is organized as a two-dimensional chain data-structure, where each element - called a chain header - tests the input packet against a packet header rule. When a packet header rule is matched, the chain header points to a set of signature tests, including payload signatures that trigger the execution of the string matching algorithm. The default rule-set consists of 187 chain headers with a total of 1661 rules, 1575 of which are string matching rules. We use packet traces from four different sources: - A set of full-packet traces from the DEFCON “capture the flag” data-set.\(^2\) These traces contain numerous intrusion attempts. - A full packet trace containing Web traffic, generated by concurrently running a number of recursive `wget` requests on popular portal sites. - Three header-only traces from the NLANR archive.
These packet traces were taken on backbone links. Because these are header-only traces, for our experiments we added random payloads. We argue that the results are representative after determining that random payloads do not significantly alter NIDS performance. - A set of header-only traces collected on the OC3 link connecting the University of Crete campus network (UCNET) to the Greek academic network (GRNET) (Courcoubetis and Siris, 1999), with random payloads. For the experiments of Sections 4.2 and 4.3, we use the DEFCON `eth0.dump2` trace containing 1,035,736 packets. For simplicity, traces are read from a local file by using the appropriate `snort` option, which is passed to the underlying `pcap(3)` library. (Replaying traces from a remote host provided similar results.) \(^2\)Available at http://www.shmoo.com/cctf/ ### 4.2 Element and cell size We first determine the optimal size for $E^2xB$ elements and cells. In Figure 2 we show the fraction of false positives for different element and cell sizes, and in Figure 3 the corresponding running time of snort, obtained using the time(1) facility of the host operating system. We observe that the fraction of false positives is well below 2% when using elements of 13 bits or more. Completion time decreases with increasing element size, as the fraction of false positives that have to be searched using Boyer-Moore is reduced. However, it is not strictly decreasing: it is minimized at 13 bits but exhibits a slight increase for more than 13 bits, apparently because of the effect of data-structure size (8 KB for 13-bit elements, 64 KB for 16 bits, for a cell size of 8 bits) on cache performance. For our specific configuration, 13-bit elements and 8-bit cells appear to offer the best performance. ### 4.3 Experiments with the default rule-set We determine whether $E^2xB$ offers any overall improvement compared to FVh and BM using the eth0.dump2 trace. The completion times for $E^2xB$, BM and FVh are 30.20, 47.31 and 47.36 seconds, respectively. We observe that using $E^2xB$, snort is 36% faster than both known algorithms. $E^2xB$ is faster because, in the common case, it can quickly decide that a given set of strings is not contained in a packet. More specifically, in this experiment, the string matching function was invoked 22,716,676 times. Out of those, $E^2xB$ was able to quickly state that the considered string was not a substring of the input packet in 22,395,210 of the invocations (or 98.4%). Thus, in 98.4% of all invocations, $E^2xB$ was able to deliver the correct answer without actually searching for the pattern in the packet. In the remaining 1.6%, $E^2xB$ used the Boyer-Moore string searching algorithm to find whether the string is really in the packet. ### 4.4 Other packet traces We repeated the experiments with the three algorithms on the full set of traces. The results are summarized in Table 1. We first confirm that random payloads behave similarly to real payloads for the DEFCON eth0.dump2 trace: the difference in performance between the original trace and the trace with the payload replaced with random data is negligible for all three algorithms. Based on this observation, we can argue that using random payloads on the NLANR and UCNET traces provides a reasonably accurate estimate of how the algorithms would perform with real payloads. Comparing the performance of the string matching algorithms, we observe that $E^2xB$ performs better than FVh and BM on all traces except for one, and that the relative improvement varies.
It is also interesting to see that FVh, reported in (Fisk and Varghese, 2002) to perform better than BM, sometimes performs worse for the traces examined. Although the improvement of $E^2xB$ is typically between 25% and 35%, and can be as high as 36.17%, there are cases where the gain is only around 8% or, as in the case of the NLANR AIX trace, even 8% worse than BM. This appears to relate, at least in part, to differences in the packet size distribution: the average packet size is 835 bytes for the DEFCON eth0.dump2 trace and 364 bytes for the NLANR AIX trace. For larger packets, snort spends more time in string matching, and $E^2xB$ offers significant benefits, while for smaller packets, snort spends less time in string matching, and $E^2xB$ is less useful. On the other hand, results can be very different for traces with similar packet size statistics. For example, the average packet sizes for webtrace and MRA are 761 and 760 bytes, respectively, but the gain of $E^2xB$ is 8.76% and 27.23%, respectively. More detailed analysis is therefore needed to understand the benefits of our approach.

| trace name | ID | nr. of packets | avg. pkt (bytes) | BM (sec) | FVh (sec) | $E^2xB$ (sec) | % |
|---------------|------|----------------|-----------------|----------|-----------|--------------|---------|
| eth0.dump2 | D.02 | 1035736 | 835 | 47.31 | 47.36 | 30.20 | +36.17 |
| eth0.dump2.r | D.02.R | 959267 | 1481 | 46.35 | 46.60 | 29.77 | +35.77 |
| eth0.dump4 | D.04 | 497302 | 1111 | 14.11 | 56.24 | 9.81 | +30.47 |
| eth0.dump8 | D.08 | 1188660 | 761 | 9.79 | 41.51 | 6.74 | +31.15 |
| webtrace | W.0 | | | | | | +8.76 |
| NLANR IND | N.IND | 2254931 | 703 | 93.53 | 83.8 | 62.04 | +25.97 |
| NLANR MRA | N.MRA | 2760531 | 760 | 137.39 | 122.40 | 89.07 | +27.23 |
| NLANR AIX | N.AIX | 1624223 | 364 | 13.17 | 14.00 | 14.26 | -8.28 |
| UCNET 0000 | UC.00 | 1564131 | 422 | 103.93 | 82.35 | 66.84 | +18.83 |
| UCNET 0100 | UC.01 | 2245938 | 413 | 108.69 | 84.20 | 62.54 | +25.72 |

Table 1. Completion time of snort with different string matching algorithms – all traces

| trace | rules | % pkts | % bytes | avg pkt |
|-------|-------|--------|---------|---------|
| D.02 | 60 | 21.13 | 35.53 | 1336 |
| | 62 | 21.18 | 36.20 | 1358 |
| | 66 | 54.09 | 26.45 | 388 |
| D.04 | 13 | 24.71 | 24.90 | 1472 |
| | 32 | 73.98 | 74.60 | 1473 |
| D.08 | 13 | 24.82 | 24.84 | 1093 |
| | 32 | 74.83 | 74.91 | 1092 |
| N.AIX | 28 | 87.63 | 92.20 | 330 |
| | 36 | 5.56 | 2.84 | 160 |
| N.IND | 36 | 4.98 | 5.25 | 692 |
| | 38 | 40.07 | 30.31 | 495 |
| | 60 | 30.82 | 36.90 | 785 |
| | 62 | 8.38 | 9.22 | 721 |
| N.MRA | 60 | 43.72 | 44.04 | 713 |
| | 61 | 9.82 | 9.96 | 718 |
| | 62 | 13.89 | 14.17 | 722 |
| | 63 | 14.04 | 13.90 | 701 |
| | 101 | 6.16 | 5.80 | 667 |
| W.0 | 103 | 56.47 | 33.41 | 419 |
| | 107 | 0.53 | 0.31 | 410 |
| | 820 | 42.99 | 66.28 | 1092 |
| UC.00 | 36 | 15.81 | 13.52 | 316 |
| | 38 | 7.44 | 6.42 | 320 |
| | 60 | 18.85 | 16.63 | 326 |
| | 62 | 5.35 | 6.58 | 456 |
| | 68 | 12.71 | 10.13 | 295 |
| | 101 | 16.89 | 24.86 | 545 |
| | 102 | 9.62 | 10.47 | 402 |
| | 820 | 4.79 | 3.76 | 290 |
| UC.01 | 36 | 11.54 | 9.51 | 296 |
| | 38 | 5.75 | 4.78 | 299 |
| | 60 | 42.71 | 39.91 | 336 |
| | 61 | 5.35 | 4.76 | 320 |
| | 68 | 7.35 | 5.70 | 279 |
| | 101 | 10.42 | 17.33 | 599 |
| | 102 | 7.68 | 10.09 | 473 |

Table 2. Analysis of rule-set invocations (rules rarely triggered are not presented)
We obtain processor-level statistics of executed instructions and L2 data cache misses for each trace using the brink/abyss toolkit, which collects data from the Pentium performance counters (Sprunt, 2002). The results are presented in Figures 4 and 5. We observe that the number of instructions for $E^2xB$ is significantly smaller in all cases except for the AIX trace. The reduction in L2 data cache misses is relatively small compared to the reduction in executed instructions. For example, for the W.0 trace (Web traffic) $E^2xB$ executes 30% fewer instructions but incurs a slightly higher number of cache misses. This explains the relatively small overall performance gain (roughly 8%) for $E^2xB$ on this trace. To further understand the differences in the results, we instrumented snort to provide a trace of the chain headers and content rules invoked for each packet. The results for all packet traces are presented in Table 2. We observe that the string matching workload for different traces varies significantly. For instance, for the AIX trace 87.6% of the packets are checked against only 28 rules, while for the Web trace 56.4% of the packets are checked against 103 rules, and 43% against 820 rules. Considering these statistics, it appears that $E^2xB$ offers larger improvements in cases where a large fraction of packets are checked against 30 to 100 content rules (as in the IND, MRA and all DEFCON traces). This also indicates that it may be necessary to consider hybrid algorithms, especially in cases where there is either a very small or a very large number of rules applying to a significant fraction of packets. In such cases, $E^2xB$ may not perform as well as BM when the number of content rules in a chain header is very small, or as well as the Aho-Corasick algorithm used in FVh when the number of content rules per chain header is large. Although the details of such a hybrid algorithm are beyond the scope of this paper, we run a simple experiment to confirm that the cost of $E^2xB$ is higher than BM for small sets of content rules and higher than Aho-Corasick for large sets of content rules. For this, we measure algorithm performance off-line, i.e., as an isolated standalone program, with random inputs checked against a set of random rules. We fix the input size at 1500 bytes and obtain the average number of cycles for each input “packet” for different numbers of rules. Each rule is assumed to be a 20-byte string. The results are presented in Figure 6 (left). We see that $E^2xB$ is indeed more expensive than FVh for fewer than 20 rules, and that the relative performance benefits are maximized at around 700-1000 rules. After a certain point, the cost of $E^2xB$ rises sharply, possibly due to the joint effect of increasing false-match rates per packet and capacity misses (due to the size of the rule-set). We also run the same experiment with the input size set to 64 and 512 bytes, and compute the ratio of the average number of cycles consumed per packet of $E^2xB$ over FVh. These results are presented in Figure 6 (right). As expected, the relative benefits of the two algorithms and the ranges in which they perform better depend strongly on packet size. Experimentation with the actual NIDS and a more realistic traffic model and rule-set (or rule-set model) is, therefore, required to obtain the right thresholds for such a hybrid algorithm.
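To make the shape of such a hybrid concrete, a hypothetical dispatcher could look as follows. The thresholds echo the crossover points observed in Figure 6 for 1500-byte inputs, but, as argued above, the right values depend on packet size and must come from experimentation; nothing here is part of snort or the paper's code.

```python
def choose_matcher(num_content_rules: int,
                   small: int = 20, large: int = 1000) -> str:
    """Pick a string matching algorithm for one chain header.

    Hypothetical sketch: 'small' and 'large' are placeholder crossover
    points, not thresholds established by the paper's experiments.
    """
    if num_content_rules < small:
        return "boyer-moore"    # cheapest when very few rules apply
    if num_content_rules > large:
        return "aho-corasick"   # automaton scales best for huge rule sets
    return "e2xb"               # quick negatives pay off in between
```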
Beyond the hybrid algorithm, these results also provide some insights on the scalability of different algorithms: $E^2xB$ appears to cover a reasonable range of rule-set sizes that is likely to be sufficient as NIDS rulesets continue to increase in size. ### 4.5 Different architectures We repeat the experiments on a system with a 1 GHz Pentium 3 processor and a 512 KB L2 cache. The results for the Pentium 3 are presented in Figure 7. We see that the gain for $E^2xB$ is slightly higher on the Pentium 3 compared to the Pentium 4, with the proportion of the gain roughly consistent across the different traces. When comparing the performance of the P3 vs. the P4 system, the results may appear somewhat surprising: the P3 is almost always faster than the P4, as shown in Figure 8. This happens because the P3 has a 512 KB cache while the P4 we used has a 256 KB cache. For the Web trace, which has the highest memory usage among all traces, the P3 is almost 4 times faster than the P4. Besides highlighting the importance of considering the underlying system architecture when designing (and deploying) NIDSes, this experiment also demonstrates the great care needed in evaluating NIDS performance, as the results can be very sensitive to the environment. ### 5. Summary and concluding remarks We have studied the performance of NIDS string matching algorithms, and presented the design of $E^2xB$, a new algorithm for NIDS string matching. Using an extensive set of packet traces, we have evaluated $E^2xB$ against existing algorithms. Our results show that in most cases $E^2xB$ offers significant overall improvement in NIDS performance. We have shown realistic cases in which our approach improves performance by as much as 36%. The impact of $E^2xB$ appears to relate to the packet size distribution and the number of string matching rules invoked per packet: small packets and very small or very large sets of rules per packet reduce the effectiveness of $E^2xB$. For medium-size rule-sets, $E^2xB$ appears to be much faster than existing algorithms. These results point to the need for a hybrid algorithm, with $E^2xB$ covering a range of medium-size rulesets. Determining the details of such a hybrid algorithm, including exact thresholds, will be the subject of future work. Our results also allow for some more general observations to be made on the performance as well as the modelling, analysis and benchmarking of NIDSes: we have found that results are very sensitive to the traffic and the NIDS host processor, and that random payloads behave similarly to real payloads. We expect these results to be useful towards more effective NIDS benchmarking and design. Acknowledgments This work was supported in part by the IST project SCAMPI (IST-2001-32404) funded by the European Union. Work of the first author was also supported in part by the DoD University Research Initiative (URI) program administered by the Office of Naval Research under Grant N00014-01-1-0795, and by the USENIX/NLnet Research Exchange Program (ReX). We would also like to thank Dionisis Pnevmatikatos for his constructive comments, and Vasilis Siris for providing the UCnet traces. References Aho, A. and Corasick, M. (1975). Efficient string matching: an aid to bibliographic search. *Communications of the ACM*, 18(6):333–340. Bace, R. and Mell, P. (2001). *Intrusion Detection Systems*. National Institute of Standards and Technology (NIST), Special Publication 800-31. Boyer, R. and Moore, J. (1977). A fast string searching algorithm. *Communications of the ACM*, 20(10):762–772. Coit, C.
J., Staniford, S., and McAlerney, J. (2002). Towards faster pattern matching for intrusion detection, or exceeding the speed of snort. In *Proceedings of the 2nd DARPA Information Survivability Conference and Exposition (DISCEX II)*. Courcoubetis, C. and Siris, V. A. (1999). Measurement and analysis of real network traffic. In *Proceedings of the 7th Hellenic Conference on Informatics (HCI’99)*. Fisk, M. and Varghese, G. (2002). An analysis of fast string matching applied to content-based forwarding and intrusion detection. Technical Report CS2001-0670 (updated version), University of California - San Diego. Gupta, P. and McKeown, N. (1999). Packet classification on multiple fields. In *Proceedings of the conference on Applications, technologies, architectures, and protocols for computer communication*, pages 147–160. ACM Press. Gusfield, D. (1997). *Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology*. Cambridge University Press. Horspool, R. (1980). Practical fast searching in strings. *Software - Practice and Experience*, 10(6):501–506. Lakshman, T. V. and Stiliadis, D. (1998). High-speed policy-based packet forwarding using efficient multi-dimensional range matching. In *Proceedings of the ACM SIGCOMM ’98 conference on Applications, technologies, architectures, and protocols for computer communication*, pages 203–214. ACM Press. Markatos, E. P., Antonatos, S., Polychronakis, M., and Anagnostakis, K. G. (2002). ExB: Exclusion-based signature matching for intrusion detection. In *Proceedings of the IASTED International Conference on Communications and Computer Networks (CCN)*, pages 146–152. McVoy, L. and Staelin, C. (1996). lmbench: Portable tools for performance analysis. In *Proc. of the 1996 Usenix Technical Conference*, pages 279–294. Roesch, M. (1999). Snort: Lightweight intrusion detection for networks. In *Proceedings of the 1999 USENIX LISA Systems Administration Conference*. (Available from http://www.snort.org/). Sprunt, B. (2002). Brink and abyss: Pentium 4 performance counter tools for Linux. Available from http://www.eg.bucknell.edu/~bsprunt/.
A comparison of green-winged teal *Anas crecca* survival and harvest between Europe and North America Olivier Devineau, Matthieu Guillemain, Alan R. Johnson & Jean-Dominique Lebreton *Wildlife Biology* 16(1): 12-24, Nordic Board for Wildlife Research. https://doi.org/10.2981/08-071 The impact of waterfowl harvest on the dynamics of duck populations remains incompletely understood. While wide-scale monitoring and management programs have been set up in North America, far less has been done in Europe, where populations and harvest are essentially managed at country level with a sole focus on population size. Hence, comparing North American waterfowl populations with European waterfowl populations could be useful in suggesting flyway-scale management options in Europe. In our paper, we analyse historical capture-recapture-recovery data for the European teal *Anas crecca crecca* and we compare the computed survival and harvest rates to those obtained from a North American recovery data set for the green-winged teal *Anas crecca carolinensis*, its sister taxon. During 1960-1976, the annual probability of survival was slightly lower in Europe (average over sexes: $0.485 \pm 0.101$) than in North America ($0.545 \pm 0.010$ for both sexes). Assuming a 30% ring reporting rate, our estimate of the annual harvest rate was about three times higher in Europe (average over sexes: $0.178 \pm 0.051$) than in North America (average over sexes: $0.071 \pm 0.014$). Although the European population increased over the study period and continues to do so, such hunting pressure may reduce our flexibility in managing this population in the face of uncertainties such as environmental changes, and may have deleterious effects in the long term. We use our results to discuss waterfowl research and management in Europe. Initiating studies to estimate the ring reporting rate would be an essential first step to properly evaluate the impact of harvest on the dynamics of the teal population in Europe. **Key words:** *Anas crecca carolinensis*, *Anas crecca crecca*, capture-mark-recapture, Eurasian teal, green-winged teal, harvest, population dynamics, recoveries, waterfowl Olivier Devineau*, Centre d’Ecologie Fonctionnelle et Evolutive, 1919 Route de Mende, F-34293 Montpellier, France - e-mail: firstname.lastname@example.org Matthieu Guillemain, Office National de la Chasse et de la Faune Sauvage, CNERA Avifaune Migratrice, La Tour du Valat, Le Sambuc, F-13200 Arles, France - e-mail: email@example.com Alan R. Johnson, Tour du Valat, Le Sambuc, F-13200 Arles, France - e-mail: firstname.lastname@example.org Jean-Dominique Lebreton, Centre d’Ecologie Fonctionnelle et Evolutive, 1919 Route de Mende, F-34293 Montpellier, France - e-mail: email@example.com *Present address: Fundacion Charles Darwin, Puerto Ayora, Isla Santa Cruz, Galapagos, Ecuador Corresponding author: Olivier Devineau Received 10 October 2008, accepted 30 October 2009 Associate Editor: Anthony D. Fox The impact of recreational harvest on population dynamics remains poorly understood for most species of waterfowl (Anderson & Burnham 1976, Elmberg et al. 2006). For example, whether harvest acts in a compensatory or additive way has been a contentious issue and is still unresolved (Nichols et al. 1995a, Nichols & Johnson 1996).
In North America, this lack of knowledge gradually led to the setting up of monitoring programmes and, ultimately, to the implementation of an adaptive management scheme for waterfowl populations and harvest, in which information about population dynamics plays a central role (Nichols et al. 2006). By contrast, the impact of harvest on the dynamics of exploited waterfowl populations has seldom been explored in Europe where, in accordance with directives from the European Union, waterfowl hunting regulations are implemented at a country-specific level, with monitoring largely based on wintering numbers, which are of little help in understanding future and past changes in population size (Elmberg et al. 2006). In this respect, waterfowl population dynamics are better known in North America than in Europe, and thus North American populations provide an interesting reference to which the dynamics of European waterfowl populations can be compared. In our paper, we compare some basic demographic parameters between Europe and North America using the example of the green-winged teal *Anas crecca*. The green-winged teal is of great management interest because it is the second-most harvested duck species after the mallard *Anas platyrhynchos*, both in Europe and in North America (Baldassarre & Bolen 2006, Mooij 2005, Mondain-Monval & Girard 2000). However, while thousands of captive-bred mallards are released every year for hunting purposes (Mondain-Monval & Girard 2000), there is no significant release of captive-bred teal, thus potentially making the impact of harvest more acute for the teal population. In addition, mid-January counts indicate that about 270,000 individuals winter in France, Italy, Spain and Portugal (Gilissen et al. 2002). By comparison, during 1998-1999, harvest was estimated at about 300,000 teal in France alone (Mondain-Monval & Girard 2000). This seems to be a paradoxically high harvest, even if one considers that most hunting mortality likely occurs before the mid-January count. Nonetheless, the northwest European teal population is increasing, while the western part of the Mediterranean population shows a slight decline (Delany & Scott 2006). Throughout our paper, we use the term ‘teal’ for both the European and the North American subspecies of the green-winged teal, *Anas crecca crecca* and *A. c. carolinensis*, respectively. Several hypotheses can be put forward to explain the apparent paradox of the European teal population. Firstly, counts are influenced, sometimes strongly, by differences between observers (Faanes & Bystrak 1981, Sauer et al. 1994, Cunningham et al. 1999), or by the site coverage (Delany & Scott 2006). The teal is a small bird that favours vegetation cover (Johnson 1995), which may lead to many birds being missed by observers. In addition, counts do not account for movements of individuals and thus only produce an instantaneous, and potentially biased, snapshot of the status of a population (Frederiksen et al. 2004). Moreover, there is often an important turnover on the wintering grounds, and the number of birds counted on a given site at a given time generally represents only a fraction of the birds actually using this site (Pradel et al. 1997b, Devineau 2007). Bird counts are generally considered as underestimates of actual numbers (Delany & Scott 2006, Dervieux et al. 1980). Therefore, if the actual teal population is larger than counts suggest, harvest would comparatively not be as high as it seems.
Another explanation of the apparent paradox of the teal population could be density-dependence mechanisms. Under this hypothesis, the reduction in density caused by harvest allows surviving individuals to have a higher survival and/or reproduction rate, which would compensate for the losses due to hunting. Although compensatory harvest has been widely discussed (Anderson & Burnham 1976, Burnham & Anderson 1984, Boyce et al. 1999), the importance or even the existence of such mechanisms is still under debate (Pöysä et al. 2004, Lebreton 2005). Finally, the actual impact of harvest on the population could be concealed by some particularities in the population dynamics. Indeed, in Europe, hunting regulations vary from one country to another, and available information indicates that annual duck harvest varies as well (Mooij 2005). Hence, hunting could induce source-sink dynamics (see for example Novaro et al. 2005), in which areas with low hunting pressure would supply birds to the wintering grounds where hunting pressure is higher. In our paper, we use a 20-year capture-recapture-recovery data set to provide robust estimates of important demographic parameters (survival and harvest rates) of the teal in Europe. Because population dynamics are difficult to analyse based on a single population study, we compare our results to those obtained from another, similar data set from North America. This may provide useful insights into the European teal population dynamics, which may eventually be translated into adequate management and conservation procedures. **Methods** **Study area/species** In Europe, the teal breeds from Scandinavia and northern Russia to France, Switzerland, and the northern edge of the Black Sea (see distribution map in Scott & Rose 1996). Wintering grounds cover most of southern Europe, North Africa (Nile region), and the Middle East (Cramp & Simmons 1977, Johnson 1995). Specific ‘flyways’ have been recognised, but no clear populations can be distinguished (Scott & Rose 1996), and evidence for a fairly large amount of exchange among these flyways has challenged these delineations (Guillemain et al. 2005). In North America, the teal breeds throughout much of Canada, and winters throughout the United States and Mexico. Migration occurs along four major flyways (Johnson 1995).

Table 1. Brief description of the European data. The European data were a mixture of live recaptures and dead recoveries. This table only presents the data distribution at time of ringing and at time of recovery. Juveniles and adults indicate the age of the birds at time of ringing. First-year birds were only considered as juveniles for the first time interval following ringing. They were considered as adults from the first encounter event following ringing and thereafter. Hence, numbers given for recoveries should be read as ‘among the 18,849 female birds ringed during their first year, 2,484 were later recovered’. Counts given for recaptures represent the number of birds ‘ringed as’ that were recaptured alive at least once. The total number of recaptures was 5,315.

| | Females: Juveniles | Females: Adults | Males: Juveniles | Males: Adults | Total | Sex ratio (♂:♀) | Age ratio (Juv:Ad) |
|---|---|---|---|---|---|---|---|
| Ringed | 18849 | 6289 | 18322 | 11715 | 55175 | ~1.2:1 | ~2.0:1 |
| Recaptured at least once | 780 | 178 | 689 | 587 | 2234 | ~1.3:1 | ~1.9:1 |
| Recovered | 2484 | 788 | 2917 | 1727 | 7916 | ~1.4:1 | ~2.1:1 |
**Data** Duck ringing in France was fairly intensive from the mid-1950s to the mid-1970s, but was then interrupted until new ringing programs were initiated in the early 2000s (e.g. Guillemain et al. 2007). However, this latter program has not yet provided enough data to adequately estimate demographic parameters, and we therefore used historical data from teal ringed during the internuptial seasons of 1954/55-1975/76 at the Tour du Valat biological station in the Camargue, southern France (43°30′28″N, 04°40′07″E). A large proportion of the French teal population winters in the Camargue (Hémery et al. 1979), which is a wetland of international importance according to the Ramsar criteria (i.e. > 1% of the considered population present in the area, Deceuninck et al. 2009). Because the Camargue is located at the limit between the northwest European and the Black Sea/Mediterranean regions, it attracts wintering birds from both sub-populations and, as such, birds ringed in the Camargue are fairly representative of the (western) European teal population. Our data consisted of a mixture of live recaptures and dead recoveries, with the latter mainly occurring in September-March, i.e. during the most common hunting season in southwestern Europe for the period of interest (nowadays, the hunting season commonly ends in late January). Of the recoveries, > 95% were from hunting and, given the low number of other reported causes of mortality (e.g. predation), all reports were considered to be hunting mortalities in the analyses. Among the 55,175 individuals initially ringed, 2,234 were subsequently recaptured at least once by the same ringing crew (5,315 recaptures in total) and 7,916 were recovered by hunters (Table 1). Dead recoveries of teal in North America were obtained between 1960/61 and 1997/98 at various ringing stations across North America (see Gustafson et al. 1997 for details). Ringings were carried out in January-February, i.e. at the very end of or after the hunting season occurring from late September to February. Subsequent information consisted only of recoveries of dead birds (i.e. no live recaptures), with > 99% of reported recoveries being due to hunting. No capture-recapture event of any kind was recorded outside the September-February period. A total of 47,276 individuals were ringed, of which 2,381 were shot and reported (Table 2).

Table 2. Brief description of the North American data. The data were based only on dead recoveries (i.e. no live recaptures). Given that in North America ringing was carried out in January and February (as opposed to September-March in Europe), all ringed birds were at least in their second (calendar) year at time of ringing (i.e. second-year and after-hatching-year birds), and were thus all considered as adults.

| | Females | Males | Total | Sex ratio (♂:♀) |
|------------------|---------|-------|-------|-----------------|
| Ringed | 12600 | 34676 | 47276 | ~2.7:1 |
| Recovered | 513 | 1868 | 2381 | ~3.6:1 |

**Model structure** Traditional capture-recapture studies (Lebreton et al. 1992) imply that marked individuals are reported to the ringing laboratory when they are encountered, i.e. when they are recaptured alive. When the population of interest is exploited, marked individuals are encountered not only when recaptured alive, but also when harvested. It is thus possible to consider two states, alive and dead, and to consider the encounter of marked individuals within the context of multistate models (Lebreton & Pradel 2002).
Traditional capture-recapture models (Lebreton et al. 1992) can then be considered as two-state models (i.e. birds can be alive or dead), in which only live birds can be encountered. Similarly, dead-recovery models (Brownie et al. 1985) can be considered as two-state models, in which only dead birds can be encountered. Both live recaptures and dead recoveries can be analysed as a mixture of information within the multistate framework (Lebreton et al. 1999). We applied this approach to the European data. The main advantage of including live recaptures in the analysis was to increase the number of releases, i.e. to increase sample size. This effectively corresponds to a recovery analysis with a larger number of marked individuals, which improves the precision of estimates. To a lesser extent, live recaptures also contribute to the estimate of the probability of survival (J-D. Lebreton, unpubl. data). In Europe, ducks were ringed from September to March, which roughly corresponded to the prevalent hunting season. A bird ringed early in the season was therefore more likely to be shot during the first hunting season than a bird ringed at the end of the season, which induced heterogeneity in survival estimates. In addition, spring hunting was common in Russia in the 1960s and 1970s, which led to an appreciable number of recoveries actually occurring outside the September-March period. To account for these characteristics, we performed a combined analysis of live recaptures and dead recoveries, and divided the year into three periods: fall-winter (hereafter FW: September-December), winter-spring (WS: January-March), and spring-fall (SF: April-August). The year was considered to start in September, with the beginning of the hunting season, and ringings occurred in FW and WS only. Most live recaptures actually occurred within a few months following ringing, and given that only one encounter event was possible for each period, live recaptures were limited to WS. Most recoveries occurred in FW and WS, but recoveries in period SF were also included in the analyses. Finally, ringing from September to March implied that first-year birds were present in the data. Given that most teal attempt breeding as early as their second calendar year (Johnson 1995), these individuals were considered as juveniles for the first time interval only (i.e. from ringing to first re-encounter). The modelling of the first time interval as different from subsequent intervals is denoted by ‘age’ in the model notation. The usual assumptions of ring recovery models (Brownie et al. 1985) were more closely met by the North American data, which did not require any further model adjustments and were analysed using Brownie models for dead recoveries. The model applied to the North American data was based on a 1-year interval starting in September, with recoveries occurring only between September and February. Given that they had been ringed in January-February, i.e. when aged $\geq 5$ months, all individuals were considered as adults at time of ringing. **Statistical methods** **Goodness-of-fit** Due to the particular structure of the European data, no appropriate goodness-of-fit test was available for a global model. Hence, we assessed the goodness-of-fit of our most general model recognising full temporal variation in survival and recapture/recovery rates using the multistate goodness-of-fit tests 3G and M in the software U-Care (Choquet et al. 2005a).
When a lack of fit was detected, we modelled the first occasion after ringing separately from subsequent occasions, either for survival (transient model) or for capture (trap-dependence model), according to the main significant effect. Only the main effect was accounted for in the model structure, and other significant components were used to calculate a variance inflation factor (Lebreton et al. 1992), which was used to adjust the Akaike information criterion (i.e. QAIC) for model selection (Burnham & Anderson 2003). **Model selection** All models were fitted using program M-Surge (Choquet et al. 2004, 2005b), and models were selected based on their lowest QAIC$_c$ value. However, since our data were relatively sparse, we could not fit the full model, and thus we instead started model selection from a simple model which we gradually made more complex. Effects were considered first on recapture/recovery parameters and then on survival parameters (Lebreton et al. 1992). Priority was given to biologically relevant models (e.g. different survival between sexes), but models adjacent to low-QAIC$_c$ ones were also considered in search of unexpected effects, or interactions between effects. A similar approach to model selection was used for the North American data, and example models for Europe and North America are given in Tables 3 and 4, respectively. **Parameterisation** The parameterisation used in M-Surge was based on the probability $\lambda$ that the ring was reported, conditional on the death of the bird with probability $1-S$ (Lebreton et al. 1999). This differed from the traditional parameterisation of Brownie et al. (1985), which provides an estimate of the ring recovery probability $f$, i.e. the probability that the bird was shot and reported. However, hunting is not the only source of mortality, and thus $(1-S) > H$, with $S$ being the probability of survival and $H$ being the probability of mortality due to hunting. In this model only the product $H*\delta$, i.e. the probability that the bird was shot ($H$) and reported ($\delta$), is actually identifiable. The two parameterisations are then simply related by $H*\delta = (1-S)*\lambda = f$. For clarity, we hereafter discuss our results using the $f$ notation. We also note that the Brownie parameterisation (based on the ring recovery probability $f$) can be modelled directly within the multistate framework (Gauthier & Lebreton 2008). **Obtaining annual estimates for European data** While the North American data produced yearly estimates directly, the specificities of the European data implied seasonal estimates, which had to be combined in order to obtain annual estimates. The annual probability of recovery $f_{yr}$ was thus obtained as \[ f_{yr} = f_{FW} + (S_{FW}*f_{WS}) + (S_{FW}*S_{WS}*f_{SF}) \] indicating that a teal recovered in a given year was either shot and reported during FW with probability $f_{FW}$, or it survived FW (probability $S_{FW}$) but was shot and reported during WS (probability $f_{WS}$), or it survived both FW and WS (probability $S_{FW}*S_{WS}$) but was shot and reported during SF (probability $f_{SF}$). Similarly, we calculated the annual probability of survival as the product of seasonal estimates $S_{yr} = S_{FW}*S_{WS}*S_{SF}$, because a bird that survived the whole year must have survived the three periods. The standard deviations associated with annual estimates were derived from empirical variances over years using the delta method, and corrected as suggested by Burnham et al.
(1987). Finally, the proportion of the population $H$ that is harvested during a given period is related to the probability of recovery by the proportion $\delta$ of ringed birds taken by hunters that are reported to the ringing lab (Williams et al. 2002). Provided $\delta$ is known, it can be used to compute an index of harvest rate as $H = f/\delta$, which becomes $H = f*(1+c)/\delta$ when accounting for crippling loss, i.e. for birds that were shot but not retrieved. To our knowledge, the ring reporting rate $\delta$ has only been estimated for mallard in North America (Henny & Burnham 1976, Nichols et al. 1991, Royle & Garrettson 2005), and no estimate is available for waterfowl in Europe. Given the time period over which our data were collected, the ring reporting estimate provided by Henny & Burnham (1976) may seem more appropriate, but the methods used by Nichols et al. (1991) were actually more accurate. Hence, we used the value $\delta = 0.32 \pm 0.063$ (SE) provided by Nichols et al. (1991) to estimate harvest rate for North America as well as for Europe. We discuss below the use of this estimate to evaluate harvest rate in Europe, as well as how different values of $\delta$ may influence the evidence for a difference in harvest rate between Europe and North America. **Intra-annual comparison of survival in Europe** The seasonal estimates obtained from the European data allowed us to compare the probability of survival between hunting and non-hunting seasons, as well as between males and females. In southwestern Europe, the most commonly observed hunting season during 1954-1976 ranged from early September to late March, which corresponds to periods FW and WS (for short, FW + WS = FS). The non-hunting season corresponded to period SF. Since these two periods (hunting season FS and non-hunting period SF) were not of equal length, the corresponding survival probabilities were scaled to the month for comparison. In addition, since the probability of survival was sex-dependent for SF but not for FS, we used the average over sexes for SF for the comparison between periods. **Comparison of demographic parameters between Europe and North America** A comparison of demographic parameters between Europe and North America was only possible during 1960/61-1975/76. Comparisons were carried out on annual estimates, associated with their corresponding standard errors, using a Wald test (for an example of a Wald test in a capture-recapture context, see Lebreton et al. 1992: 90). Inter-annual variability was accounted for in the Wald test by using whole vectors of estimates for time-dependent parameters (e.g. the annual probability of survival in Europe). However, estimates for North America were not sex-dependent, while those for Europe were sex-dependent (due to sex-dependence during the SF period), and thus the comparison was carried out using the average over sexes for Europe (variability was also accounted for when calculating this average). Because spring hunting was common in the former Soviet Union during the years covered by our European data, period SF included a fairly large number of recoveries due to hunting. By contrast, the annual harvest rate obtained from the North American data concerned the September to February hunting season only. Hence, for the comparison with North America, we only considered periods FW and WS to estimate the annual harvest rate in Europe.
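The combination rules above are simple enough to state in a few lines of code; the following sketch is illustrative only (point estimates, no delta-method standard errors), and the function names are not from the paper.

```python
def annual_recovery(S_FW, S_WS, f_FW, f_WS, f_SF):
    """f_yr = f_FW + S_FW*f_WS + S_FW*S_WS*f_SF (recovered in FW, WS or SF)."""
    return f_FW + S_FW * f_WS + S_FW * S_WS * f_SF

def annual_survival(S_FW, S_WS, S_SF):
    """S_yr = S_FW * S_WS * S_SF: surviving the year means surviving all periods."""
    return S_FW * S_WS * S_SF

def harvest_rate(f, delta=0.32, c=0.0):
    """H = f*(1+c)/delta, with delta the ring reporting rate and c the
    crippling-loss correction (0 when crippling loss is ignored)."""
    return f * (1.0 + c) / delta
```

For instance, harvest_rate(0.074) gives roughly 0.23 for the year-round European recovery probability reported below; the Europe-North America comparison itself uses recoveries from FW and WS only, as explained above.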
**Ring reporting rate when harvest rate is the same in Europe as in North America** When using the ring reporting rate provided by Nichols et al. (1991) to estimate the harvest rate in Europe, we implicitly assumed that the ring reporting rate was the same for teal as for mallard and, more importantly, that the ring reporting rate was the same in Europe as in North America, i.e. we considered $\delta_{EU} = \delta_{US}$. In order to discuss this assumption as well as our estimate of the harvest rate, we also estimated what would be the minimum ring reporting rate in Europe for the harvest rate to be the same in Europe as in North America, i.e. the value of $\delta$ for which $H_{EU} = H_{US}$. **Results** **Goodness-of-fit tests** For the European data, almost all components of the goodness-of-fit tests were significant, indicating lack of fit of the $\{S_t, P_t\}$ model. Component 3G.SR (see Choquet et al. 2005a for details on test components) indicated that a large number of teal were ringed but never re-encountered. This was accounted for by considering an age structure on survival parameters (Pradel et al. 1997a), i.e. by differentiating the first interval following ringing ($a_1$ in model notation hereafter) from subsequent ones (noted $a_2$). The variance inflation factor calculated with the other components was $\hat{c} = 1.682$. Table 3 presents the initial model, some example models and the best QAIC$_c$ model for the European data. Given that the North American data did not include live recaptures, only the M component of the goodness-of-fit tests could be computed, and this test was not significant. In addition, there was no need to account for overdispersion. However, we detected a lack of direct recoveries due to the data structure. Indeed, in North America, teal were ringed in January and February, i.e. at the end of the hunting season. As a consequence, newly ringed birds had little chance of being shot before the end of the hunting season (i.e. direct recoveries), and were more likely to be recovered in later hunting seasons (i.e. indirect recoveries). This lack of direct recoveries may result in overestimating the probability of survival. --- **Table 3.** Model selection results for the European data. Only the initial model, three intermediate models and the best QAIC$_c$ model (bold) are presented. Main effects are in plain text, and supplementary details are provided in subscripts when necessary. For example, time indicates year-to-year variation, and time$_{\{FW \text{ Ad, SF}\}}$ indicates that the time effect applies only to individuals ringed as adults for period FW, and to all individuals for period SF. Transience was accounted for in all models by considering direct recoveries ($a_1$ in subscripts) separately from indirect recoveries ($a_2$). Age at ringing was also included in all models presented here, although this effect was partially confounded with the model structure accounting for transience, because first-year individuals were considered as juveniles only during the first time interval following ringing. For all models presented here, both capture and survival parameters differed between periods FW, WS and SF. The number of parameters in each model is indicated by k.
| Model | Survival, S | Recovery, f | Deviance | k | $\Delta$QAIC$_c$ |
|-----------|------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------|------|------------------|
| Initial | sex | sex | 106750.9 | 30 | 1397.4 |
| Model 180 | sex$_{\{SF \text{ Ad}\}}$ * time$_{\{FW \text{ Juv, WS, FW Ad a2}\}}$ | sex$_{\{FW \text{ Ad, WS Ad, SF Ad}\}}$ * time$_{\{FW \text{ Juv}\}}$ | 103907.1 | 180 | 24.5 |
| Model 179 | sex$_{\{SF \text{ Ad}\}}$ * time$_{\{FW \text{ Juv, WS, FW Ad a2}\}}$ | sex$_{\{WS \text{ Juv, SF Ad}\}}$ * time$_{\{FW \text{ Juv}\}}$ | 104901.4 | 181 | 23.1 |
| Model 163 | sex$_{\{SF \text{ Ad}\}}$ * time$_{\{FW \text{ Juv, WS, FW Ad a2}\}}$ | sex$_{\{FW \text{ Ad, WS Ad, SF Ad}\}}$ * time$_{\{FW \text{ Juv}\}}$ | 103855.8 | 184 | 2.3 |
| **Model 172** | sex$_{\{SF \text{ Ad}\}}$ * time$_{\{FW \text{ Juv, FW Ad a2, WS}\}}$ | sex$_{\{FW \text{ Ad, WS, SF Ad}\}}$ * time$_{\{FW \text{ Juv}\}}$ | **103848.5** | **185** | **0.0** |

**Table 4.** Summary of model selection results for the North American data. Only the initial model, three examples and the best QAIC$_c$ model (bold) are presented. Unlike in Table 3, transience was not accounted for in all models presented here; it is therefore denoted as transient when relevant. For the North American data, parameters were estimated on an annual basis (see text), thus there is no 'season' effect as in Table 3. Notation is otherwise the same as in Table 3, with main effects in plain text and additional details in subscripts. Time indicates year-to-year variation.

| Model | Survival, S | Recovery, f | Deviance | k | $\Delta$QAIC$_c$ |
|---------|-------------|-------------|----------|---|------------------------|
| Initial | sex | sex | 25645.1 | 4 | 21.4 |
| Model 15| sex * transient$_{[F]}$ | sex * transient$_{[M]}$ | 25628.4 | 6 | 8.7 |
| Model 21| sex | sex * transient$_{[M]}$ * time$_{[M, a1]}$ | 25467.4 | 121 | 77.7 |
| Model 49| sex * transient$_{[F]}$ | sex * transient * time$_{[M, a1]}$ | 25545.5 | 44 | 1.8 |
| **Model 42**| transient | sex * transient$_{[M]}$ * time$_{[M, a1]}$ | **25545.7** | **43** | **0.0** |

To account for this particularity, the conditional probability of recovery (i.e. \( \lambda \)) was estimated separately for the first year following ringing. This is denoted \( a_1 \) in model notation (see Table 4), as opposed to \( a_2 \) for subsequent years. The model selection for the North American data is summarised in Table 4. **Parameter estimates** **Parameter estimates for Europe** In the model best describing the European data (see Table 3), the probability of recovery \( f \) varied between sexes and seasons but was constant over years. When considering year-round recoveries, on average over years 1954-1976, the annual probability of recovery was \( 0.064 \pm 0.018 \) (estimate ± standard error) for males and \( 0.084 \pm 0.014 \) for females. These values were significantly different (Wald test, \( z = 3.945, P < 0.001 \)). The average over years and across sexes, calculated while accounting for variability, was \( 0.074 \pm 0.016 \). In Europe, the annual probability of survival varied from year to year as well as between sexes.
On average over years 1954-1976, the probability of survival was significantly different between males (average over years: \( 0.525 \pm 0.108 \)) and females (average over years: \( 0.445 \pm 0.092 \)) (Wald test: \( z = -5.367, P < 0.001 \)). The average over years and across sexes was \( 0.485 \pm 0.100 \). Annual harvest rate was sex- and time-dependent for European teal. On average over years 1954-1976, the annual probability of harvest was \( 0.201 \pm 0.060 \) for males and \( 0.262 \pm 0.046 \) for females, when considering the whole year (Wald test: \( z = 14.312, P < 0.001 \)). The average over sexes for the period 1954-1976 was \( 0.227 \pm 0.022 \). Seasonal estimates obtained for European data were sex-specific only for period SF. The probability of survival during SF was \( 0.807 \pm 0.018 \) for females and \( 0.952 \pm 0.003 \) for males. These values were significantly different (Wald test: \( z = 10.667, P < 0.001 \)). Within the model applied to the European data, the hunting season was represented by periods FW and WS (September-March), whereas period SF (April-August) represented the non-hunting season. Estimates of the probability of survival were scaled to the month in order to be compared between hunting and non-hunting seasons. The monthly probability of survival was significantly different (Wald test, \( z = 6.360, P < 0.001 \)) between the hunting (average over sexes: \( 0.915 \pm 0.034 \)) and the non-hunting (\( 0.975 \pm 0.002 \)) seasons. **Parameter estimates for North America** In North America, the annual probability of recovery \( f \) was different for males and females but constant over years (see Table 4). During 1960-1998, the annual probability of recovery was \( 0.027 \pm 0.002 \) for males and \( 0.019 \pm 0.001 \) for females. These values were significantly different (Wald test: \( z = -4.345, P < 0.001 \)). The average over sexes was \( 0.023 \pm 0.001 \). Similarly, during 1960-1998, the annual probability of survival for North American teal was constant over years, and was \( 0.545 \pm 0.010 \) for both sexes. The annual probability of harvest was constant over time and not significantly different between sexes (Wald test: \( z = -1.229, P = 0.055 \)). For the period 1960-1998, it was \( 0.058 \pm 0.012 \) for females, and \( 0.084 \pm 0.017 \) for males. The average over sexes was \( 0.071 \pm 0.014 \). **Comparison between Europe and North America** As indicated earlier, recoveries were restricted to September-February in North America, whereas in Europe a substantial number of recoveries occurred during the spring/summer period. For the comparison between Europe and North America, these SF recoveries were discarded, and the annual probability of recovery \( f_{\text{yr}} \) was estimated in Europe using periods FW and WS only. In addition, we considered only the overlapping period between the two data sets, i.e. 1960/61-1975/76. Over this reduced period, the probability of recovery (average over sexes) was $0.057 \pm 0.019$ in Europe and $0.023 \pm 0.001$ in North America. These values were significantly different (Wald test: $z = 11.701$, $P < 0.001$). To compare the annual probability of survival between Europe and North America, we used the weighted (using sex ratio) average over sexes ($0.492 \pm 0.101$) for Europe and the estimate provided by the best QAIC$_c$ model for North America ($0.544 \pm 0.010$).
The annual probability of survival was highly significantly different between Europe and North America (Wald test: $z = -3.130$, $P < 0.001$). However, the sex ratio among recoveries was 3.6 males per female in the North American data, whereas it was more balanced (1.4 males per female) in Europe. Given that in Europe the probability of survival was higher for males than for females, such a differential sex ratio may lead to a higher overall apparent probability of survival in North America. Nonetheless, applying a 3.6:1 sex ratio to the European data and estimating the weighted average of the annual probability of survival as $0.22*S_{F} + 0.78*S_{M}$ (with $S_F$ and $S_M$ the female and male survival probabilities), i.e. artificially increasing the probability of survival in Europe, did not change the conclusion. Even then, the annual probability of survival remained significantly lower (Wald test: $z = -1.982$, $P = 0.012$) in Europe (weighted average over sexes: $0.508 \pm 0.105$) than in North America ($0.544 \pm 0.010$). For the period common to both data sets (1960-1976), the annual probability of harvest (average over sexes weighted using sex ratio) was estimated at $0.165 \pm 0.003$ in Europe and at $0.071 \pm 0.014$ in North America, assuming the same ring reporting rate $\delta = 0.320 \pm 0.063$ for both locations. These two estimates were highly significantly different (Wald test, $z = 5.357$, $P < 0.001$). **Ring reporting rate when harvest rate is the same in Europe as in North America** When considering that the harvest rate is the same in Europe as in North America, the null hypothesis of the Wald test used for the comparison becomes $H_{EU} = H_{US}$. Given the survival and recovery rate estimates we obtained for Europe and North America, this hypothesis would not be rejected (i.e. $z \leq 1.96$) only if the ring reporting rate in Europe was $\geq 0.797$. We discuss the relevance of such a high value below. **Discussion** **Results concerning Europe only** The particularities of the European data presented in our paper allowed us to obtain seasonal estimates of survival and harvest probabilities. Monthly survival was lower during the hunting period (i.e. periods FW and WS) than in the non-hunting period (SF), thus suggesting an impact of hunting on survival of the Eurasian teal. However, during the period considered in our paper, spring hunting was allowed in some European countries (Kostin 1996), and therefore period SF cannot be considered an entirely non-hunting season. In addition, the hunting season we considered in our paper also included migration events, which can take a substantial toll on survival (Menu et al. 2005, Newton 2006), as well as winter and possible cold spells, to which teal are particularly sensitive (Lebreton 1973, Ridgill & Fox 1990, Bennett & Bolen 1978). Our results indicated that the annual probability of survival was sex-specific. However, seasonal survival was sex-specific only for period SF. This suggests that the difference in annual survival between males and females is likely due to differential parental investment during the reproduction season. Indeed, males are known to desert immediately after eggs are laid, and females provide all parental care, thus comparatively increasing their energy demand and risk of predation while on the nest (del Hoyo et al. 1992). In any case, our estimates of the annual probability of survival were similar to those obtained by Gitay et al.
(1990) and by Boyd (1957), but slightly lower than estimates by Bell & Mitchell (1996), although the latter were derived from collected wings and population trends, and these methods are not as reliable as capture-recapture for estimating survival. When calculated over the whole year, including the 'non-hunting' period SF, the annual harvest rate was higher for females than for males, which suggests that females could be more vulnerable to hunting, even though the sex ratio of recoveries was skewed towards males. However, the data also included more males at the time of ringing, reflecting the traditionally skewed sex ratio in wintering populations, due to the differential parental investment during the reproduction season. **Results concerning North America only** While one may expect the annual probability of survival to be sex-specific in North America, as in Europe, our estimate was constant over time and across sexes. It was similar to that obtained by Chu et al. (1995). Both the ring recovery rate and the harvest rate were nonetheless different between sexes and higher for males than for females. Although this could reflect an actual difference in reporting and/or kill rate, this is unlikely. It is worth noting that in our North American data, the sex ratio was strongly biased towards males, both at the time of ringing and among recoveries. Hence, female-related data may not have been sufficient to properly estimate a separate probability of survival for females, although it was sufficient to estimate a sex-specific probability of recovery. These results could also stem from an artifact in the data due to post-season ringing. **Comparison between Europe and North America** According to our best QAIC$_c$ models for both Europe and North America, the annual probability of recovery was higher in Europe than in North America. During the considered period, European hunters were possibly more inclined to report rings, or were actually killing more birds. However, our data do not allow us to resolve this point. Annual survival probability was significantly higher in North America than in Europe. This held true even when artificially biasing the sex ratio to 3.6:1 males in the European data, thus increasing the average survival across sexes due to the higher survival of males. This difference in survival between Europe and North America was thus fairly robust. Although many factors could explain the difference in survival between Europe and North America, one possible explanation is the impact of harvest. During the years included in our study, the annual harvest rate was much higher in Europe than in North America. However, estimating the harvest rate is conditional on the availability of an estimate of the ring reporting rate. Although the ring reporting rate was estimated at $\sim 50\%$ in the 1970s (Henny & Burnham 1976), we used the value of $32\%$ provided by Nichols et al. (1991) in our analyses. Hence, we assumed that the reporting rate was the same in Europe as in North America, and the same for teal as for mallard. While a potential difference between bird species should be adequately tested, it is known that within a species, the reporting rate varies geographically across North America (Nichols et al. 1995b), and is thus likely to be different between Europe and North America. To our knowledge, no reward ring scheme has ever been carried out in Europe, and incentives to report rings have been put in place only in the last few years.
By comparison, reward ring studies are almost routinely carried out in North America, and a toll-free phone number which hunters can call to report rings has been engraved on rings for more than a decade (Royle & Garrettson 2005). Although we do not have any information for the 1950s and 1960s, we believe that the ring reporting rate is lower in Europe than in North America. We acknowledge that our data are fairly old and may not adequately represent the current situation. In particular, one may argue that the harvest rate may have decreased substantially since the 1950s. Indeed, during the last 10-20 years, spring hunting has been banned, and hunting season length has been reduced in most European countries (Mooij 2005). Other European measures such as the Birds Directive or the recent ban of lead ammunition, as well as the loss of interest of younger generations in hunting, also contributed to reducing the annual waterfowl harvest. Similarly, the ring reporting rate seems to vary significantly over time. For example, the proportion of fitted rings that were returned (which is only an approximation of reporting rate, since it also includes kill rate) decreased from $\sim 18\%$ to $\sim 10\%$ between the 1950s and the 1970s in teal (Guillemain et al. unpubl. data), as it did in other bird species (Grantham 2009). Could the harvest rate have decreased to a level similar to our estimates for North America? Instead of estimating the harvest rate in Europe $H_{EU}$ under the assumption that $\delta_{EU} = \delta_{US}$ (with $\delta$ being the ring reporting rate), it is also possible to estimate the ring reporting rate in Europe $\delta_{EU}$ under the assumption that $H_{EU} = H_{US}$. Based on our data, for the annual harvest rate to be the same in Europe as in North America (i.e. $\sim 7\%$), at least $80\%$ of rings would need to be actually reported by hunters. In the late 1980s in North America, such a high value could only be reached if a $40 reward was granted to hunters reporting rings (Nichols et al. 1991). Without a reward, this value was reached only recently in North America, after more than 10 years of use of a toll-free phone number engraved on the rings, and several advertising and incentive campaigns (Royle & Garrettson 2005). Therefore, the ring reporting rate is very unlikely to be as high as $80\%$ in Europe, where virtually nothing had been done until very recently to encourage hunters to report rings. This result supports the conclusion that the annual harvest rate is higher in Europe than in North America. Indeed, if the ring reporting rate in Europe is unlikely to be higher than 80%, then the annual harvest rate is equally unlikely to be lower than $\sim7\%$, and therefore lies between 7% and 18%. Assuming that harvest has an impact on survival, one might have expected a stronger difference in the annual probability of survival between the two continents, given the observed difference in annual harvest rates. In addition, there is no noticeable difference in the reproductive output of the two subspecies. Although hatching success is not well documented, egg size, clutch size and brood size are about the same in Europe and in North America ($\sim45 \times 33$ mm, 8-10 eggs and $\sim5$ ducklings, respectively, Johnson 1995, Cramp & Simmons 1977). Overall, teal is a 'fast' species that reproduces early in life, produces numerous offspring and dies relatively young.
Good reproduction probably plays an important role in the teal population dynamics, as it compensates for losses due to hunting (Kalchreuter 1996). **Compensation** Our study neither rules out, nor allows testing for, the possible compensation of hunting mortality through density-dependent mechanisms, mostly because year-round recoveries prevent us from estimating survival in the absence of hunting. However, evidence for compensatory mortality is fairly elusive and the principle itself is still debated (e.g. Pöysä et al. 2004). In particular, the effects of compensation are confounded with those of harvest (Sedinger et al. 2007), which favours using additive models for management rather than compensatory models (Conn & Kendall 2004). Yet, this latter point is valid only when model-based management is implemented, which is far from being the case in Europe (see Elmberg et al. 2006). If compensation occurs, it is probably at a fairly low level (Lebreton 2005), which would be insufficient to compensate for an 18% harvest rate, especially if we consider that our estimate of the harvest rate did not account for crippling (and lead poisoning) loss. It is also unclear how much harvest can be compensated for in the presence of other sources of mortality such as prolonged bad weather. Little is known about the interaction between harvest and weather conditions, and its effects on the dynamics of waterfowl populations. Although we did not specifically test for the effect of cold winters on survival, we noticed that in Europe, the peaks of mortality corresponded to the worst winters on record (winters 1955/56 and 1970/71). The currently available information does not allow us to determine the relative contribution of hunting, cold spells and migration to the variation in survival. Teal are particularly sensitive to cold spells, and they move towards southwestern Europe during adverse weather (Lebreton 1973, Ridgill & Fox 1990, Bennett & Bolen 1978). In addition, teal also frequently change flyways during migration events (Guillemain et al. 2005). As shown by simple population modelling (not presented in our paper, though see Devineau 2007), such movements may contribute to the apparently paradoxical increase of the population mentioned above. With our estimates, the population crashes when it is modelled as a whole, which is inconsistent with the observed stable/increasing trend (Delany & Scott 2006). Demographic parameters are probably not homogeneous across Europe, and the population may exhibit source-sink dynamics. For example, the total number of harvested ducks is higher in western Europe than in eastern Europe (Mooij 2005), and other demographic parameters are likely spatially variable as well. Hence, when modelling the population as two sub-populations differing by their harvest rate, the population no longer crashes. In particular, a small amount of exchange from the low-harvest region to the high-harvest region allows the high-harvest sub-population to maintain itself, whereas it would otherwise crash in the absence of exchange. However, a higher rate of exchange from the low-harvest to the high-harvest sub-population eventually leads to the crash of the whole population, because immigration is then insufficient for the sink to sustain itself (Lebreton & Gonzalez-Davila 1993).
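To illustrate the source-sink mechanism sketched above, here is a deliberately crude two-patch projection in Python. This is our own toy illustration with invented rates; it is not the model of Devineau (2007) and carries no empirical content.

```python
# Toy two-patch (source-sink) projection; all rates are invented.

def project(n_low, n_high, years=50, g_low=1.05, g_high=0.92, m=0.05):
    """Project a low-harvest patch (growth factor g_low) and a
    high-harvest patch (growth factor g_high), with a fraction m of the
    low-harvest patch moving to the high-harvest patch each year."""
    for _ in range(years):
        moved = m * n_low
        n_low = g_low * n_low - moved
        n_high = g_high * n_high + moved
    return n_low, n_high

# A small exchange rate lets the high-harvest patch persist:
print(project(1000.0, 1000.0, m=0.05))
# Without exchange, the high-harvest patch collapses:
print(project(1000.0, 1000.0, m=0.0))
# Too much exchange drains the source and the whole system declines:
print(project(1000.0, 1000.0, m=0.10))
```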
**Management implications** During the 1950s and 1960s, the harvest rate of teal in Europe was about three times higher than in North America. Survival was not so different, indicating that harvest has relatively little impact on the survival of a fast species such as the teal. Although only 15% of the juveniles produced in a given year reach the wintering grounds (Guillemain et al. 2010), good reproduction seems to compensate, at least partially, for losses due to hunting. Other compensation mechanisms could not be ruled out by our study. Because of the historical nature of our data, our results do not necessarily represent the current situation. New ringing programmes have been carried out in various European countries since the early 2000s, which will help update our results. In particular, the annual harvest rate and the ring reporting rate have probably decreased since the 1950s (Grantham 2009). North America has a several decades-long history of science-based waterfowl population and harvest management, together with several incentives for hunters to report rings, whereas most European countries barely have any information at all on hunting statistics, let alone a proper management strategy. It is thus very unlikely that the current ring reporting rate in Europe has reached the level now achieved in North America, i.e. about 80% (Royle & Garrettson 2005). In other words, the current harvest rate in Europe probably lies somewhere between our estimate for Europe (~18%) and our estimate for North America (~7%). As a fast species, teal has a good capacity to withstand some level of harvest, and compensation mechanisms other than reproduction could not be ruled out by our study. However, it is unknown how much harvest the European teal population can withstand, and how this compensation of hunting losses interacts with other factors such as weather conditions. In addition, the population is currently considered to be globally increasing (Delany & Scott 2006), but on the basis of counts, which are only moderately reliable as management tools. The stability may be only apparent, and the population may actually involve a source-sink system that maintains regions where harvest is high at the (hidden) expense of regions where harvest is lower. In conclusion, it seems clear to us that subtle population mechanisms, such as the intricacies of spatial heterogeneity in harvest intensity and movement, may seriously complicate attempts to progress towards scientifically-based management of harvested populations. Comparing populations and situations seems to us particularly relevant and worthwhile in such a context. Acknowledgements - We are most grateful to Luc Hoffmann, Hubert Kowalski, Heinz Hafner and others who have ringed teal at the Tour du Valat for more than 25 years, and to all hunters and observers who reported rings. We would also like to thank Marc Lutz, Paul Isenmann and the Centre de Recherche sur la Biologie des Populations d'Oiseaux (Muséum National d'Histoire Naturelle, Paris) for their help while computerising the European teal data. In addition, we are particularly grateful to Jim Nichols and Jim Hines who provided us access to the North American ringing database, as well as to the ringers who collected these data. Jim Nichols also provided useful comments on an earlier version of the manuscript. Our thanks also go to Roger Pradel, Rémi Choquet, Olivier Gimenez, Paul Doherty and others for their valuable suggestions regarding the data analysis and the manuscript. The analyses presented here were funded through a graduate stipend from the French Ministry of Research. References Anderson, D.R. & Burnham, K.P.
1976: Population ecology of the mallard VI. The effect of exploitation on survival. - United States Fish and Wildlife Service Resource Publication 128, 66 pp. Baldassarre, G.A. & Bolen, E.G. 2006: Waterfowl ecology and management. - Krieger Publishing, Malabar, Florida, USA, 576 pp. Bell, M.C. & Mitchell, C.R. 1996: Survival in surface feeding ducks. - Wildfowl and Wetlands Trust Report, Slimbridge, UK, 96 pp. Bennett, J.W. & Bolen, E.G. 1978: Stress response in wintering green-winged teal. - Journal of Wildlife Management 42: 81-86. Boyce, M.S., Sinclair, A.R.E. & White, G.C. 1999: Seasonal compensation of predation and harvesting. - Oikos 87: 419-426. Boyd, H. 1957: Mortality and kill amongst British-ringed teal Anas crecca. - Ibis 99: 157-177. Brownie, C., Anderson, D.R., Burnham, K.P. & Robson, D.S. 1985: Statistical inference from band recovery data - a handbook. - US Fish and Wildlife Service, US Department of the Interior, Washington DC, USA, 212 pp. Burnham, K.P. & Anderson, D.R. 1984: Tests of compensatory vs. additive hypotheses of mortality in mallards. - Ecology 65: 105-112. Burnham, K.P. & Anderson, D.R. 2003: Model Selection and Inference: A Practical Information-Theoretic Approach. - Springer, New York, New York, USA, 488 pp. Burnham, K.P., Anderson, D.R., White, G.C., Brownie, C. & Pollock, K.H. 1987: Design and analysis methods for fish survival experiments based on release-recapture. - American Fisheries Society, Bethesda, Maryland, USA, 437 pp. Choquet, R., Reboulet, A.M., Lebreton, J-D., Gimenez, O. & Pradel, R. 2005a: U-CARE 2.2 User's Manual. - CEFE, UMR 5175, CNRS, Montpellier, France, 53 pp. Choquet, R., Reboulet, A.M., Pradel, R., Gimenez, O. & Lebreton, J-D. 2004: M-SURGE: new software specifically designed for multistate capture-recapture models. - Animal Biodiversity and Conservation 27: 207-215. Choquet, R., Reboulet, A.M., Pradel, R., Gimenez, O. & Lebreton, J-D. 2005b: M-SURGE 1.8 User's Manual. - CEFE, UMR 5175, CNRS, Montpellier, France, 76 pp. Chu, D.S., Nichols, J.D., Hestbeck, J.B. & Hines, J.E. 1995: Banding reference areas and survival rates of green-winged teal, 1950-89. - Journal of Wildlife Management 59: 487-498. Conn, P.B. & Kendall, W.L. 2004: Evaluating mallard adaptive management models with time series. - Journal of Wildlife Management 68: 1065-1081. Cramp, S. & Simmons, K.E.L. 1977: Handbook of the birds of Europe, the Middle East and North Africa. The birds of the Western Palearctic. - Oxford University Press, Oxford, UK, 695 pp. Cunningham, R.B., Lindenmayer, D.B., Nix, H.A. & Lindenmayer, B.D. 1999: Quantifying observer heterogeneity in bird counts. - Australian Journal of Ecology 24: 270-277. Deceuninck, B., Maillet, N., Ward, A., Dronneau, C. & Mahéo, R. 2009: Synthèse des dénombrements d'Anatidés et de foulques hivernant en France - Janvier 2008. - Ligue pour la Protection des Oiseaux/Wetlands International, Rochefort-sur-Mer, France, 41 pp. (In French). Delany, S. & Scott, D.A. 2006: Waterbird Population Estimates. 4th edition. - Wetlands International, Wageningen, The Netherlands, 239 pp. del Hoyo, J., Elliott, A. & Sargatal, J. (Eds.) 1992: Handbook of the birds of the World, Volume 1; Ostrich to Ducks. - Lynx Edicions, Barcelona, Spain, 696 pp. Dervieux, A., Lebreton, J-D. & Tamisier, A. 1980: Censusing by air the wintering ducks and coots of the Camargue - viability study. - La Terre et la Vie, Revue d'Ecologie Appliquée 34: 69-99. Devineau, O. 2007: Dynamique et gestion des populations exploitées: l'exemple de la sarcelle d'hiver.
- PhD thesis, Université Montpellier 2, Sciences et Techniques du Languedoc, 96 pp. (In French). Elmberg, J., Nummi, P., Pöysä, H., Sjöberg, K., Gunnarsson, G., Clausen, P., Guillemain, M., Rodrigues, D. & Väänänen, V.M. 2006: The scientific basis for new and sustainable management of migratory European ducks. - Wildlife Biology 12(2): 121-127. Faanes, C. & Bystrak, D. 1981: The role of observer bias in the North American Breeding Bird Survey. - Studies in Avian Biology 6: 353-359. Frederiksen, M., Hearn, R.D., Mitchell, C., Sigfusson, A., Swann, R.L. & Fox, A.D. 2004: The dynamics of hunted Icelandic goose populations: a reassessment of the evidence. - Journal of Applied Ecology 41: 315-334. Gauthier, G. & Lebreton, J-D. 2008: Analysis of band-recovery data in a multistate capture-recapture framework. - Canadian Journal of Statistics 36: 59-73. Gilissen, N., Haastra, L., Delany, S., Boere, G. & Hagemeijer, W. 2002: Numbers and distribution of wintering waterbirds in the Western Palearctic and southwest Asia in 1997, 1998 and 1999. Results from the international waterbird census. - Wetlands International, Wageningen, The Netherlands, Technical Report 11, 182 pp. Gitay, H., Fox, A.D. & Ridgill, S.C. 1990: Survival estimates of teal (*Anas crecca*) ringed at three stations in Britain. - The Ring 13: 45-58. Grantham, M. 2009: Why should I report a ringed bird? - BTO News May-June 2009: 8-9. Guillemain, M., Bertout, J.M., Christensen, T.K., Pöysä, H., Väänänen, V.M., Triplet, P., Schricke, V. & Fox, A.D. 2010: How many juvenile Teal *Anas crecca* reach the wintering grounds? Flyway-scale survival rate inferred from age-ratio during wing examination. - Journal of Ornithology 151: 50-60. Guillemain, M., Poisbleau, M., Denonfoux, L., Lepley, M., Moreau, C., Massez, G., Leray, G., Caizergues, A., Arzel, C., Rodrigues, D. & Fritz, H. 2007: Multiple tests of the effect of nasal saddles on dabbling ducks: combining field and aviary approaches. - Bird Study 54: 35-45. Guillemain, M., Sadoul, N. & Simon, G. 2005: European flyway permeability and abmigration in teal *Anas crecca*, based on ringing recoveries. - Ibis 147: 688-696. Gustafson, M.E., Hildenbrand, J. & Metras, L. 1997: The North American Bird Banding Manual. Version 1.0. - Available at: http://www.pwrc.usgs.gov/BBL/manual/manual.htm (Last accessed on 28 May 2007). Hémery, G., Houtsa, F., Nicolau-Guillaumet, P. & Roux, F. 1979: Distribution géographique, importance et évolution numériques des effectifs d'Anatidés et de Foulques hivernant en France (janvier 1967 à 1976). - Bulletin Mensuel de l'Office National de la Chasse - Numéro Spécial Sciences et Techniques Mai 79: 5-91. (In French). Henny, C.J. & Burnham, K.P. 1976: Reward band study of mallards to estimate band reporting rates. - Journal of Wildlife Management 40: 1-14. Johnson, K. 1995: Green-winged teal (*Anas crecca*). - In: Poole, A. (Ed); The Birds of North America Online. Cornell Lab of Ornithology, Ithaca, New York, USA. Available at: http://bna.birds.cornell.edu/bna/species/193 (Last accessed on 28 May 2007). Kalchreuter, H. 1996: Waterfowl harvest and population dynamics: a review. - Gibier Faune Sauvage 13: 991-1008. Kostin, I.O. 1996: Subsistence hunting of arctic Anatidae in Russia. - Gibier Faune Sauvage 13: 1083-1089. Lebreton, J-D. 1973: Etude des déplacements saisonniers des sarcelles d'hiver, *Anas c. crecca* L., hivernant en camargue à l'aide de l'analyse factorielle des correspondances. - Comptes Rendus des Séances de l'Académie des Sciences.
Série D, Sciences Naturelles 277: 2417-2420. (In French). Lebreton, J-D. 2005: Dynamical and statistical models for exploited populations. - Australian & New Zealand Journal of Statistics 47: 49-63. Lebreton, J-D., Almeras, T. & Pradel, R. 1999: Competing events, mixtures of information and multistratum recapture models. - Bird Study 46: 39-46. Lebreton, J-D., Burnham, K.P., Clobert, J. & Anderson, D.R. 1992: Modeling survival and testing biological hypotheses using marked animals - a unified approach with case-studies. - Ecological Monographs 62: 67-118. Lebreton, J-D. & Gonzalez-Davila, G. 1993: An introduction to models of subdivided populations. - Journal of Biological Systems 1: 389-423. Lebreton, J-D. & Pradel, R. 2002: Multistate recapture models: modelling incomplete individual histories. - Journal of Applied Statistics 29: 353-369. Menu, S., Gauthier, G. & Reed, A. 2005: Survival of young greater snow geese (Chen caerulescens atlantica) during fall migration. - Auk 122: 479-496. Mondain-Monval, J. & Girard, O. 2000: Le canard colvert, la sarcelle d'hiver et autres canards de surface. - Faune Sauvage 251: 124-139. (In French). Mooij, J.H. 2005: Protection and use of waterbirds in the European Union. - Beiträge zur Jagd und Wildforschung 30: 49-76. Newton, I. 2006: Can conditions experienced during migration limit the population levels of birds? - Journal of Ornithology 147: 146-166. Nichols, J.D., Blohm, R.J., Reynolds, R.E., Trost, R., Hines, J.E. & Bladen, J.P. 1991: Band reporting rates for mallards with reward bands of different dollar values. - Journal of Wildlife Management 55: 119-126. Nichols, J.D. & Johnson, F.A. 1996: The management of hunting of Anatidae. - Gibier Faune Sauvage 13: 977-989. Nichols, J.D., Johnson, F.A. & Williams, B.K. 1995a: Managing North American waterfowl in the face of uncertainty. - Annual Review of Ecology and Systematics 26: 177-199. Nichols, J.D., Reynolds, R.E., Blohm, R.J., Trost, R.F., Hines, J.E. & Bladen, J.P. 1995b: Geographic-variation in band reporting rates for mallards based on reward banding. - Journal of Wildlife Management 59: 697-708. Nichols, J.D., Runge, M.C., Johnson, F.A. & Williams, B.K. 2006: Adaptive harvest management of North American waterfowl populations - recent successes and future prospects. - Journal of Ornithology 147: 28-28. Novaro, A.J., Funes, A.D. & Walker, R.S. 2005: An empirical test of source-sink dynamics induced by hunting. - Journal of Applied Ecology 42: 910-920. Pöysä, H., Elmberg, J., Gunnarsson, G., Nummi, P., Sjöberg, G.G. & Sjöberg, K. 2004: Ecological basis of sustainable harvesting: is the prevailing paradigm of compensatory mortality still valid? - Oikos 104: 612-615. Pradel, R., Hines, J.E., Lebreton, J-D. & Nichols, J.D. 1997a: Capture-recapture survival models taking account of transients. - Biometrics 53: 60-72. Pradel, R., Rioux, N., Tamisier, A. & Lebreton, J-D. 1997b: Individual turnover among wintering teal in Camargue: a mark-recapture study. - Journal of Wildlife Management 61: 816-821. Ridgill, S.C. & Fox, A.D. 1990: Cold weather movements of waterfowl in Western Europe. - IWRB Special publication, Wageningen, The Netherlands, 91 pp. Royle, J.A. & Garrettson, P.R. 2005: The effect of reward band value on mid-continent mallard band reporting rates. - Journal of Wildlife Management 69: 800-804. Sauer, J.R., Peterjohn, B.G. & Link, W.A. 1994: Observer differences in the North American Breeding Bird Survey. - Auk 111: 50-62. Scott, D.A. & Rose, P.M.
1996: Atlas of Anatidae populations in Africa and Western Eurasia. - Wetlands International, Wageningen, The Netherlands, 336 pp. Sedinger, J.S., Nicolai, C.A., Lensink, C.J., Wentworth, C. & Conant, B. 2007: Black brant harvest, density dependence, and survival: A record of population dynamics. - Journal of Wildlife Management 71: 496-506. Williams, B.K., Nichols, J.D. & Conroy, M. 2002: Analysis and management of animal populations. Modeling, estimation, and decision making. - Academic Press, San Diego, California, USA, 817 pp.
STATEMENT OF BENJAMIN R. CIVILETTI ATTORNEY GENERAL BEFORE THE SUBCOMMITTEE ON CIVIL AND CONSTITUTIONAL RIGHTS COMMITTEE ON THE JUDICIARY U.S. HOUSE OF REPRESENTATIVES FBI CHARTER SEPTEMBER 6, 1979 Mr. Chairman: It is a pleasure to appear before the Subcommittee on Civil and Constitutional Rights this morning to endorse H.R. 5030, the proposed charter for the FBI. The proposal submitted by this Administration and introduced by Chairman Rodino for himself, Mr. McClory, Mr. Hyde and Mr. Sensenbrenner, was the product of extensive work over a long period of time. We believe it is a sound charter which will enhance civil and constitutional rights and, at the same time, strengthen law enforcement. We hope that it will receive favorable consideration before this Committee and ultimately the full Senate and House of Representatives. The Charter is intended to be a constitution for the FBI. Its main purpose is to define the jurisdiction and duties of the FBI. It is not and should not be a rigid encyclopedia of do's and don'ts, nor an exhaustive code of incomprehensible regulations. The charter is comprehensive, for it deals with the fundamental authority and responsibility of the FBI in every important part of the Bureau's work. But it will not stand alone. There are also other important statutes, Attorney General guidelines, manuals and other regulations which govern the work of the FBI. First, the full range of the federal criminal laws, as well as state and local laws, applies to all Department of Justice and Bureau personnel. Second, the body of constitutional and other case law, both civil and criminal, continues in full force and effect. These civil and criminal remedies supplement the provisions within the charter itself to ensure that the FBI enforces the law within the law. In addition, there are existing mechanisms and practices for congressional oversight, Department review, and internal disciplinary investigations and compliance audits. The charter is intended to be the foundational statement of the basic duties and responsibilities of the FBI and also its general investigative powers and the principal minimum limitations on those powers. But it need not and should not contain exhaustive, detailed and lengthy provisions on all these matters. After all, the charter will be supplemented by several other provisions, not in statutory form. First, the charter will be interpreted, as all statutes are, by reference to legislative history which this committee will carefully develop. In this regard, our proposal was accompanied by an extensive section-by-section analysis or commentary designed to explain and interpret the intent behind various provisions of the charter and to make clear the meaning of the charter language. It is expected that this commentary would serve as one key source for the development of legislative history, together with the series of hearings which start today and other materials which will be developed in the normal legislative process. Second, the charter expressly requires the Attorney General to promulgate guidelines in some eight major areas of FBI activity. As you know, guidelines were promulgated by former Attorney General Levi in 1976 concerning three areas:

(1) Domestic Security Investigations
(2) Informants
(3) Civil Disturbances

These guidelines will be supplemented by additional provisions and by new guidelines in each of the other areas required by the charter.
I believe that the experience in the past three years with the Levi guidelines has been highly encouraging. It has demonstrated that guidelines can be drawn which are well understood by Bureau personnel and by the public and which can be filed and reviewed by the appropriate Congressional committees. It has also shown that guidelines can be successfully applied to particular kinds of investigative activity and even to certain specific decisions made on a case-by-case basis. The reasonable conclusion which can be drawn from the success of these guidelines is that the charter need not detail every limitation or safeguard by express statutory terms. Such details are better covered in guidelines, with the charter setting forth the obligatory principles and objectives which the guidelines must meet and achieve. I would like to assure the committee that the guidelines to be written will be thorough, that they will be drafted in consultation with appropriate members and staff of the oversight committees, that they will be promulgated at the earliest possible time, and that they will fully meet the objectives set forth in the charter. I can report to the Committee that teams of selected lawyers in the Department and appropriate officials in the Bureau have already begun the initial work on guidelines. A review group will make recommendations to the Attorney General once the initial process of drafting and revision has been completed. Please bear in mind that in promulgating guidelines, the Attorney General may choose, on the basis of advice and current information and developments, to impose additional or even higher standards or levels of authorization and review than the minimum levels contained in the charter itself. Turning to the charter itself, I would like to point out that it is an integrated document; that is, various provisions located in different sections work together. Recognizing this inter-relationship is critical to understanding the purposes and effects of the charter, both in terms of what it authorizes the FBI to do and what it prevents the FBI from doing. Stated very simply, the charter consists essentially of four types of provisions:

(1) Provisions containing general principles by which all criminal investigations must be conducted;
(2) Provisions which limit who and what can be investigated and establish threshold requirements which must be met before an investigation can even be started;
(3) Provisions which authorize and limit the use of the various sensitive investigative techniques; and
(4) Provisions which limit retention of information collected during investigations and the specific purposes and parties for which investigative information can be disseminated outside the FBI.

The charter is intended as an exclusive statement of jurisdiction. Accordingly, if authority for a particular kind of investigative activity is not found in the charter, there is no authority. Thus, for example, activity of the type associated with COINTELPRO is not authorized in the charter and is therefore precluded absolutely as outside the jurisdiction of the FBI. The broad purpose and intention of the charter is to direct the FBI at criminal activity under criminal standards.
Specifically, before an investigation can be initiated, there must be "facts" indicating a criminal violation, and the purpose of the investigation and the manner of carrying it out must be directed toward and limited to three ends:

(1) The detection of crime;
(2) The prevention of crime; and
(3) The prosecution of criminal offenders.

Nevertheless, in order to remove any doubt whatever, the charter explicitly commands that there shall be no investigation by the FBI of the lawful exercise of the right to dissent -- the right to peaceably assemble and petition the government, or of any other right guaranteed by the Constitution and laws of the United States. 1/ In addition, Exhibit 1, attached hereto, lists the Charter provisions which by their terms or necessary effects prohibit the improper activities commonly referred to as COINTELPRO. The heart of the charter is Subchapter III, which contains the basic authorization for the FBI to conduct criminal investigations. The key section is Section 533, which contemplates investigation on two levels:

(1) Preliminary investigations, which are called "inquiries"
(2) Full investigations, which are called simply "investigations"

The purpose of inquiries is limited to determining rationally whether there is a basis for conducting an investigation. The purpose of an investigation, of course, is to collect evidence on which to base a prosecution as well as to seize evidence, fruits and tools of crime and to apprehend perpetrators. We believe it is essential for the FBI to have specific authority to conduct brief preliminary activities called "inquiries", which are far more limited in duration and scope than investigations. Otherwise, the government would be powerless to act even tentatively on specific allegations of crime which did not meet the requirement of "facts or circumstances" that would reasonably indicate criminal activity. This is the standard that must be met before an "investigation" could be initiated. However, such allegations frequently contain sufficient information to demonstrate a substantial risk and to make it clear as a matter of common sense that some effort should be made to determine if there is some substance to the allegation. It is important to emphasize that inquiries ordinarily are of very short duration. Frequently, they can be completed in a matter of a few weeks. Also, their purpose is limited to making an initial assessment of the validity of the allegation or general information; they are not a means for attempting to secure evidence for prosecution. Moreover, in most inquiries it is not necessary to resort to sensitive investigative techniques. Generally, inquiries are limited to interviewing persons, checking existing law enforcement files and reviewing other publicly available information. Section 533, which contemplates two levels of investigation, also specifically identifies two different kinds of investigation:

(1) Investigation of a specific criminal act;
(2) Investigation of an ongoing criminal enterprise engaged in either racketeering or terrorist activities.

The investigation of a specific criminal act, such as an interstate theft, ordinarily does not involve great issues of sensitivity from either a legal or a policy standpoint. Moreover, the scope of such investigations is self-defining since the essential purpose of the investigation is plainly limited to identifying and apprehending the criminal and proving the elements of the particular crime.
The duration of such a criminal investigation cannot be projected because it depends on circumstances which vary enormously from one case to another, but what can be said with confidence is that such an investigation ordinarily ends with the indictment of the subject. The second type of investigation concerns ongoing criminal enterprises engaged either in racketeering or terrorist activities. Special and broader investigative authority is necessary in these two narrowly defined areas because the ongoing nature and the organizational strength of these criminal groups pose real and special problems for society and for law enforcement. In order to effectively combat these threats, we believe it is necessary that the FBI be authorized to conduct investigations which are substantially greater as to scope, duration and emphasis on future criminal acts than the investigations authorized in section 533(b)(1). To be effective, racketeering and terrorist investigations need to focus not only on particular criminal acts, whether past, present or future, but also on the overall membership of the criminal group, its financing, its capabilities for various kinds of harm, its plans, its relationship to other criminal groups, its possible targets, etc. These considerations are generally outside the scope of a regular criminal investigation of a specific act because that investigation is limited to collecting evidence to prove the specific elements of the offense involved. Similarly, it is necessary to continue to investigate racketeering and terrorist groups as long as they retain vitality, even though a particular member or members may have been apprehended, prosecuted and sent to prison. Thus, enterprise investigations will continue as long as the group continues its criminal enterprise activity. We recognize that the ongoing nature of such groups requires us to investigate broadly into past acts, current activity and potential for future criminal acts. While demonstrably necessary in order to protect the society from very great harm, enterprise investigations, we acknowledge, may create apprehension of danger to lawful activities, privacy interests, and constitutionally protected free speech and association. To guard against this potential threat, we have fashioned these provisions far more tightly than those concerning ordinary investigation of specific offenses. First, we have limited the investigation to circumstances where there is "reasonable indication" of crime, so that the same level of certainty is required to open a racketeering or terrorist enterprise investigation as to open a more conventional investigation focusing on particular acts. Second, we have very deliberately limited the basis of the investigation to activities which are clearly criminal and serious. This plainly precludes the FBI from investigating all forms of non-criminal activity. Third, in the case of both racketeering and terrorism, we have specifically required that there be information indicating that the enterprise presently exists, that it is a continuing enterprise, and that its essential nature and purpose is criminal. Thus, we have excluded circumstances which involve little more than speculation that a group that is now lawful may later adopt a criminal philosophy. Terrorism enterprise investigations are generally believed to be more sensitive than racketeering enterprise investigations since the former avowedly involve some political purposes and motivations while the latter ordinarily do not.
We felt that the necessities and realities of modern-day society require us to authorize the FBI to conduct terrorism investigations under the same standard as organized crime investigations. That is, we require the same standard of reasonableness: facts or circumstances which reasonably indicate the criminal enterprise. However, in recognition of their greater sensitivity and for the protection of all lawful political activities, we have provided special safeguards which apply only to terrorism enterprise investigations. These include special standards and limitations on informant infiltration, extra reporting requirements for opening and continuing terrorist enterprise investigations, the involvement of high-level FBI officials, including the Director, and notice to the Attorney General or his designee of investigations which continue beyond one year. It must be emphasized that the group which can be investigated under this subsection is only the actual criminal enterprise. Where that group is a subgroup of a larger organization which is engaged in lawful political activity, the larger group itself cannot be investigated. Finally, the investigation must be conducted pursuant to Attorney General Guidelines. As you know, we presently are governed by the Levi Domestic Security Guidelines for terrorist investigations. These Guidelines will be continued and, if amended at all, will be strengthened. Another important part of the charter is section 533b, which contains limitations on the use of the more sensitive investigative techniques. The section mandates that the Attorney General issue guidelines concerning the sensitive techniques covered by the section, which are:

(1) Informants and Undercover Agents
(2) Physical Surveillance
(3) Mail Surveillance
(4) Electronic Surveillance
(5) Access to Third Party Records
(6) Access to Tax Records
(7) Miscellaneous investigative techniques (including trash covers, pen registers, consensual monitoring, electronic location detectors, covert photographic surveillance and pretext interviews)

Of course, mail and electronic surveillance are already covered by explicit statutes and court decisions and require judicial warrants. The others are discussed in some detail in the charter itself, particularly informants and access to third-party records pursuant to the new investigative demand authority which the charter would give to the FBI. The section requires that the guidelines meet three important and express limiting purposes:

(1) To ensure that the investigative techniques are used in such a way as to keep intrusion into privacy to a minimum;
(2) To require that the greater the potential intrusion into a true area of privacy, the more formalized and higher-level the review and authorization procedures must be;
(3) To ensure that information obtained through the use of sensitive techniques is used by the FBI only for lawful and authorized purposes as set forth in the charter itself.

This section also authorizes the FBI to issue investigative demands, which are similar to administrative subpoenas, for specific categories of records:

(1) Toll records of communications common carriers, such as the phone company;
(2) Insurance records maintained by insurance companies or agencies;
(3) Records of credit institutions not covered by the Right to Financial Privacy Act of 1978;
(4) Banking and other financial records that are covered by that Act.
Concerning bank records and other records covered by the Right to Financial Privacy Act, the charter simply grants the FBI authority to issue an investigative demand which is contemplated by that Act and specifies that every procedural requirement of the Act must be followed to the letter. Briefly, the need for the investigative demand power arises from the following circumstances. First, the FBI has recently been giving increased priority to investigation of white-collar crime, public corruption, fraud against government programs, financing of organized crime groups and other similar areas. In each of these areas, the ability to obtain financial records is important to success, and indeed it is hard to make real progress in such an investigation without access to these kinds of records. Second, the FBI previously obtained many of these kinds of records on a voluntary basis from the custodians. But it has recently encountered a growing reluctance of custodians to turn over such records for fear of possible legal liability or loss of trade from favored customers. As a result, the FBI in most places has recently lost the capacity to obtain these records. Rules governing issuance of investigative demands would be covered by guidelines which the charter requires the Attorney General to issue. As I mentioned earlier, the initial work on producing the guidelines in this and all the other areas has already begun. I would expect that the use of investigative demands would be limited to cases where there was a demonstration of need, where there was a substantiated allegation, and where a grand jury was not already involved in obtaining records on the matter. However, more detailed rules must await the completion of the study, review, and drafting that is now underway. The limitations in this section on the use of informants, particularly their use to infiltrate groups under investigation for terrorism, are of special concern to many, including some on this Committee. First, the charter seeks to prevent unreliable or truly uncontrollable persons from becoming regular informants in the first place by requiring a background investigation of each potential informant. Second, written approval must be given by a supervisory-level FBI official before the informant can be used on a continuing basis to provide information on a particular person. Such approval must include findings that, based on the background investigation, the person is "suitable" for use as an informant, and that he is likely to have information pertinent to matters which the charter authorizes the FBI to investigate. Third, these findings must be reviewed on a regular basis by the Director or his designee. Fourth, the informant must be told that under no circumstances may he instigate or initiate a plan to commit criminal acts, or use illegal techniques such as break-ins or wiretaps without court warrant to obtain information or evidence on behalf of the FBI. He must also be warned not to engage in violence. Finally, he is told that his working as an informant for the FBI will not protect him from prosecution for participating in criminal activity except the activity which is under investigation, and even then only if a supervisory official determines in writing that such participation is justified because it is necessary to obtaining information or saving lives, and that this need outweighs the seriousness of the conduct in which the informant is to participate. Moreover, these determinations must be reviewed annually by the Director or his designee.
In addition to all this, before an informant may infiltrate a terrorist group, the group itself must be properly under investigation for violent crimes and the infiltration must have been found "necessary" under the circumstances in a written finding by a supervisory official.

The charter provides for enforcement in a number of ways. First, the charter, as I mentioned before, relies on the existing criminal law, which applies to FBI agents, Justice Department attorneys and everybody else. As you know, the law is plain on matters such as wiretapping without a court warrant and breaking into homes without warrants, and prosecutions have been brought in such cases. Secondly, there is the full range of civil suits which can be brought against government officials who act illegally and without authority. Thirdly, the charter depends for enforcement on the internal disciplinary system of the FBI. This is highlighted by the requirement, under Duties of the Director, that the Director must maintain an "effective" internal disciplinary system. Moreover, the charter adds further sanctions by authorizing the Director to impose fines of up to $5,000 for willful violations of the section of the charter governing the use of sensitive investigative techniques. Accordingly, we believe that the charter is enforceable and will be complied with. There is simply no need to create new civil suits, new criminal offenses, or new procedural rights for defendants.

With this brief summary of some of the charter's key provisions, I would like to conclude my remarks by saying simply that we look forward to discussing the specific terms of the charter with the Committee at future hearings. I would be pleased to answer any questions on the main thrust of the proposal.

Provisions Barring COINTELPRO

1. Section 531a, General Principles
   Subsection (c): Investigation of Criminal Conduct only (p. 4)
   Subsection (d): Limitations: no investigation of
   - political views
   - peaceable assembly
   - exercise of other rights
2. Section 531a(b): Investigations must be conducted with minimal intrusion (p. 4)
3. Section 533(b)(3): Terrorist Enterprise Investigation only if
   - significant criminal violence for purpose of political intimidation, and
   - facts or circumstances reasonably indicate
4. Section 533a: Attorney General Guidelines for Investigation of Criminal Matters (p. 13)
   Subsection (a)(1): Investigation must "focus" on criminal activity; purposes of investigation must be limited to
   - detection
   - prevention
   - prosecution
5. Section 533b(a)(3): General Restrictions: information may be used "only for lawful government purposes" (p. 14)
6. Section 533c: Retention, dissemination and destruction of information (p. 26)
   Subsection (a): retain only what's pertinent to investigations authorized by charter
   Subsection (b): disseminate only for proper official uses, e.g., to local police on a matter within their investigative jurisdiction
Rethinking Design Thinking: Empathy Supporting Innovation

McDonagh, D. and Thomas, J. (2010) Rethinking Design Thinking: Empathy Supporting Innovation. Australasian Medical Journal - Health and Design 1, volume 3 (8): 458-464. http://dx.doi.org/10.4066/AMJ.2010.391

Deana McDonagh and Joyce Thomas
School of Art + Design, University of Illinois at Urbana-Champaign, Illinois, USA

Abstract

Background
The material landscape we construct within our personal lives and inherit in public environments has a significant impact upon our daily experiences. It affects our productivity, our feeling of wellbeing, and our sense of being socially connected. Products that provide a positive user experience can empower people and contribute to a healthful environment. Products that do not meet the product user's functional or emotional needs can cause a person's sense of independence to be eroded.

Method
The authors have developed an empathic design research strategy that builds on the capitals (e.g., background, physical abilities, and education) of the individual and the designer, to ensure that more intuitive design outcomes are generated which meet real needs rather than assumed needs. Acknowledging that all people have an empathic horizon (a boundary to their knowledge, experience, and awareness), further learning can take place by the designer in direct consultation and collaboration with the users.

Results
Well-designed products that are intuitive to use contribute to a person's quality of life and independence. The possessions surrounding us can generate a sense of balance, harmony, and wellbeing. The number of possessions we own is not critical, but their usefulness and meaning to us is. As we age and develop disabilities, being able to live independent lives becomes increasingly important.

Conclusion
Designers are developing ways in which to bridge the divide that exists between lived experiences, user needs, and existing products that fail to satisfy the user.

Key Words: empathy, material landscape, designing process

Background
The products which people surround themselves with have a significant impact on how they experience the activities of their daily living. We engage with our material landscape on both rational and emotional levels\(^1\). This helps us to communicate and construct who we are\(^2\). Products that provide a positive user experience can empower people and contribute to a healthful environment. Products that do not meet the user's functional or emotional needs can cause their sense of independence to be eroded.
As worldwide demographics shift towards an older population who are likely to begin to experience disabilities, these design issues become increasingly critical.

**Empathy**
This research relies upon the belief that a deeper understanding of users' needs is critical for a designer to respond with more effective product outcomes. By employing empathic modelling strategies, designers can gain insight and shared understanding with their target users. Design thinking and understanding need to be flexible as the user's situation and cultural cues evolve and are shaped by the material and historical dimensions of their lives. Designers, in turn, must expand and push beyond their own empathic horizon to include life-expert-users. This can take the designer outside his or her own personal comfort zone.

**Material Landscape**
Material landscape is a dynamic concept that considers the changing requirements and roles that people need for their personal and public environments. We fill our homes with products that represent our achievements (e.g. trophies, certificates), cultural affiliations (e.g. football memorabilia, music CDs or film DVD collections), and status objects (e.g. expensive cars, perfume bottles) that provide insight into selected lifestyle aspirations\(^3\). In addition, how we display these objects (e.g. highlight, cluster) and even hide stigma objects (e.g. dandruff shampoo, condoms, acne cream) provides valuable life-experience indicators of an individual's daily life.

"Never have more of us had more possessions than we do now, even as we make less and less use of them. The homes in which we spend so little time are filled with things."\textsuperscript{4}

Personal environments offer us a flexible place to be social, reclusive, quiet, or studious. We design the mood of our environments through product/object placement, lighting, scented candles, decoration, comfortable furniture and similar home comforts. The products and our environments have a significant impact on how we communicate and present ourselves to the outside world (others) and help to support us with positive affirmations (e.g. photographs of loved ones, mementoes of experiences).

**Product abandonment**
When a gulf exists between the user and the product or environment, significant psychological barriers can develop which become increasingly difficult to remove. Products that present difficulty can strip us of our dignity (e.g. opening basic food packaging or medicine containers, or even struggling with remote controls). Such difficulties can lead to diminishing food choices, eroding key elements of people's nutritional and sensorial experiences. This can result in product abandonment, avoidance, and/or misuse, and can be especially true for people with disabilities. For example, it is common for patients recovering from invasive hip replacement surgery to undergo extensive physiotherapy and pain management, and yet fail to use a walking cane or walker. Product stigma can repel the user from utilising valuable assistive technologies, not because they are not functional and helpful, but because the product does not resonate with the user. If a product carries a stigma, it can lead to product abandonment. There are many examples in our personal and public environments where we navigate, accommodate, and adapt our behaviour to overcome such disconnections.
For example, engaging with a door which visually indicates that it should be pulled towards the user to open, when in fact it needs to be pushed, can generate significant embarrassment for an individual. Rarely does the individual acknowledge the design failure; rather, they blame themselves for "getting it wrong." In reality, the product (door) failed them.

"...'hidden geographies' of small but deceptively important things such as the size of print, the positioning of furniture, the location of the toilets, the juxtaposition of offices, doorways, and so on."\textsuperscript{5}

Figure 1 illustrates that compliance with legal requirements under the Americans with Disabilities Act\textsuperscript{6} does not always respond to the lived experience. This public space offers signage for those with vision, but fails to accommodate those with visual impairments. The man in Figures 1a-1c is blind and 5 ft. 2 in. tall. He must climb on top of a piece of furniture in order to read the Braille on the sign (Figure 1b).

Figure 1 (a), (b) and (c): Signage which incorporates Braille but does not take accessibility into account, resulting in problems for the intended user

In Figure 1c, the individual demonstrates how he must stretch to reach even the bottom of the sign when standing on the floor after the furniture has been moved out of the way. Though this example may appear to be rather extreme, as soon as we become more sensitive and conscious of our environments, we begin to identify such product failures in our daily interactions. As people age and develop various disabilities, navigating less-than-accommodating environments can result in individuals becoming marginalised, isolated, excluded and literally impaired by products and environments. As designers, we try to generate products, environments, and services that will support the user for many years, and this attention to detail does not necessarily mean the products' retail costs would increase.

**Supra-Functionality**
Designers have many challenges to ensure that product design outcomes are relevant and appropriate for users whose needs, expectations, and desires can be very dynamic. Products that are simply functional and do not create an enjoyable experience will normally not satisfy a user. When a purchaser considers products whose price points and functional capabilities are similar, the design, style, colours, and physical sensations are frequently the deciding factor in choosing one product over another. These more ephemeral needs of users, which go beyond the utilitarian functionality of the product itself, are referred to as supra-functionality.\textsuperscript{7} Elements that contribute to an enjoyable experience are often rooted in our social, emotional, and cultural desires. Purchase decision-making, user-product bonding, and brand loyalty are impacted by this experience. These often difficult-to-grasp elements of supra-functionality\textsuperscript{7} can be the final deciding factor for which product is finally chosen. In order to meet these needs, designers must actively develop research methodologies that are specifically aimed at collecting design-relevant data. A shift in design thinking is required to consider the "normality of doing things differently"\textsuperscript{5}. Rather than aiming to design products for the persona of the ideal user, this focus utilises Empathic Design Research Strategies to reveal and discover product opportunities for real people.
As designers use empathy to support their research, "design moments" emerge which provide them with more design-relevant data and support product innovation. Design thinking is changing. Figure 2 illustrates various approaches to user involvement within the designing process: (a) historically, products were designed for the user; (b) then designers began to utilise user input; and (c) finally, designers are actively involving users. Discussion of the designing process is significant because, for the first time in product development history, the target life-expert-user is being consulted whilst also becoming personally active within the development process (e.g. Freitag bags, Puma's Mongolian Barbeque shoes).

Empathic design strategies utilise the most appropriate research methods available to the designers. Methods may include passive ethnographic-type observations, through which designers can gain insight into the life-expert-user's interaction with their material landscape: watching, listening, and absorbing without interfering in the user's actions. Informal conversations provide the basis for developing trust between the designer and life-expert-user. Another approach may include collaboration, which tends to rely on natural respect, patience, tolerance and a shared goal. Empathic modelling places the designer actively into the life-expert-user role and provides a supporting process to achieve a more thorough understanding of their experience. The designer temporarily views the world through the life-expert-user's eyes, from his or her physical viewpoint, to become aware of frustrations and challenges in dealing with their material landscape. Other methods which may be useful for designers include focus groups, shadowing, and role-playing. In this approach, the designer and user engage as collaborators and together develop knowledge and understanding in order to generate appropriate solutions for real needs. Life-expert-users, who often have very different personal capital (e.g., background, physical abilities, and education) than the designer, are embraced as co-creators to inform the designing process. Empathic design research relies on the user being an active and participatory partner within the information creation and designing process.\textsuperscript{8,9,10,11}

"…listening to the voices of difference."\textsuperscript{11}

**Empathic Horizon**
"In order to develop empathy with users, it is clear that designers need to be able to engage, listen, and understand the outlook of other people, which means involving actual people in the design process."\textsuperscript{12}

Empathy deepens designers' understanding of people whose background, education, and culture may be very different from their own. Gaining insight into a user's emotions, aspirations, and fears can provide the designer with critical cues and inspiration to create products with a better balance of functional and supra-functional qualities. Employing an empathic design research strategy enables the designer to expand his/her empathic horizon.\textsuperscript{7,9,13} Fulton Suri\textsuperscript{14} advocates that empathy "is simply about achieving greater awareness, an extended imagination, and sensitivity to another person's world in a powerfully memorable way." Plowman\textsuperscript{15} wrote that empathy is "the altered subjectivity that can come from immersion into a particular context," a view that is helpful for designers learning about human communication during the design process.
According to Hoffman, empathy is "[an] affective response more appropriate to someone else's situation than one's own".\textsuperscript{16} Hickman discussed empathy with regard to the creative process: "I believe that one feature of creative behaviour is the ability to empathize. Asking people to put themselves into the place of another person . . . can facilitate 'empathic understanding': a way of knowing intuitively about people and things outside of our own personal world."\textsuperscript{17}

**Integrating Users in the Design Process**

Figure 3: Designers and users blend together as a team of co-designers.

The authors have developed an ongoing course at a North American university which involves students with and without disabilities designing together as a single group/community. Since 2007, Industrial Design students (engaged in Masters and Bachelors of Fine Arts degree programmes) have been partnered with students with various physical and sensory disabilities who are studying diverse subjects outside design (refer to Figure 3). This course is conducted under the guidelines of the university's Institutional Review Board (IRB) and has included students with a variety of disabilities, including: amputation, cerebral palsy, dystonia, muscular atrophy, muscular dystrophy, retinitis pigmentosa, multiple sclerosis, spinal scoliosis, and transverse myelitis.

Figure 4: Ethnographic shadowing: a student with disabilities eating in public with a Personal Assistant helping him, and a student baking a cake in her apartment.

The students are taught empathic research strategies that consider user needs to support wellness and wellbeing and the creation of more empowering products and spaces. In the process of developing empathy, awareness and understanding, all the students carry out empathic design research activities to help support their personal insights into living with a disability (Figures 4 and 5). They observe daily living from the perspective of individuals with different life experiences, listening to what people tell them about their experiences and watching how they behave in relation to things and environments.\textsuperscript{14}

Figure 5: It takes an Industrial Design student (without any physical disabilities) a short period of time to appreciate how it feels to eat in a public restaurant when you cannot feed yourself and rely on another person to assist you.

Even though the student was with friends, she reported that she was overwhelmed by the reaction from other diners (e.g. staring, negative expressions). Empathic modelling activities include very brief artificial experiences such as using a wheelchair, restricting mobility in the limbs, and restricting vision. Though this offers the students only a relatively superficial level of understanding of another's abilities, it is still a powerful method to alert designers to how the most basic of activities can be challenging for individuals. Simulation is an important technique that may facilitate building empathy; however, empathy is about relationship. To build understanding and collaboration, student design teams are encouraged to talk to each other and learn about each other's lives, dreams, goals, and aspirations. Ideally, the person without a disability will be as self-revealing as the person with a disability, making it a two-way street between designer and life-expert-user, breaking the boundaries generated by physical differences.
Unlike the traditional scientific research relationship of researcher/subject, more equal partnerships of designer/life-expert develop. Utilising these empathic research strategies, design students have developed simple, insightful personal or assistive products intended to improve the quality of life (QOL) for students with disabilities. The goal was to create products that did not carry stigma and would visually integrate into the individual's lifestyle and personal environment.

**Results**
The resulting product development was driven predominantly by design moments discovered during engagement between the pairs of designer and counterpart. Some of the innovative products that have been conceptualised include a standing device to help a person with paraplegia engage in golf as a leisure activity, an electronic "direction finder" for use in public buildings by visually impaired people, and a headset for a student with cerebral palsy that uses puffs of breath to dial a mobile telephone. The findings of this student project show that collaboration between designers and life-expert-users allows development of a different kind of design-specific capital. For instance, the group developed a shared working language, which is an example of the redefinition of values, beliefs, actions, and processes. Conversations heard during the designing process made it apparent that the students were not only gaining an understanding of a different worldview, but were also beginning to demonstrate an intimacy that moves towards empathising with the challenges inherent to certain kinds of disabilities. The students became more mindful of their designing process, including the people-centred focus on users, which relates to Inclusive Design (ID). Whilst similar, these approaches are distinctly different: ID requires user involvement in the process, while we employ an approach that requires the designer to develop empathy with the user so that they design as if they were the user. In addition, after the students participated in this course they exhibited a greater desire and ease in engaging with real users.

Individuals are impaired by products and environments. It is only when one is faced with unnecessary challenges that one feels less able. "The built environment directly affects how people feel and behave."\(^{18}\)

In professional practice, empathic design research is increasingly playing a role in the development of successful products. Dan Formosa is one of the founders of SMART Design in New York and participated in the development of the OXO Good Grips range of products. He views design as being less about generating products and more about creating positive experiences for the user. His designs offer the mainstream marketplace good examples of more intuitive assistive products without the usual visual stigmas. The OXO products (Figure 6) were developed specifically for users with arthritis whilst being adopted enthusiastically by people of all levels of ability. Reducing stigma reduces the risk of product abandonment.

Figure 6: OXO Good Grips

A compelling case study involves a woman taking her husband's medicine by mistake due to poor visibility and legibility of information on the container. Clearly, taking the wrong medicine can have dire consequences.
Deborah Adler developed a wedge-shaped form (Figure 7) which provides more space for critical information, is easier both to read and to open, and introduces a colour-coding system so that individuals in multiple-person homes can readily identify their own medication. It is now used widely throughout the Target Store pharmacy service within the United States.

Figure 7: (a) Typical medicine bottle and (b) Adler's design response.

**Discussion**
In North America, industrial designers tend to go into professional practice immediately upon completion of their Bachelor's Degree. They are likely to be involved in the development of mainstream products that are on the market within a year of their graduation from university. As educators, we recognise the importance of preparing our students for rapid immersion into the profession, and we encourage the adoption and adaptation of more empathic design research strategies by student designers. These less conventional design strategies require alternative interventions and support in order to provide a meaningful learning environment for all classroom participants.

Additional immersive empathic modelling studies are being developed that could lead to a more in-depth understanding of others with visual impairments. Simulating walking in total blindness, the authors sought to assess the level of risk to the student and monitor the length of time it took to complete the task.

Figure 8 (a), (b), (c) and (d): Walking Blind

Figure 8 illustrates two significant moments during the study. Figure 8a shows one author walking down a corridor in a public space where she feels completely alone. As we take a wider view in Figure 8b, we see she was surrounded by students and colleagues. In Figure 8c, another author is experiencing unexpected barriers that were above ground level. This experiment certainly took the authors outside their comfort zones. Within only a couple of minutes, it became evident that their sense of hearing seemed 'amplified' and other senses seemed to compensate for the lack of vision. The difficulty of this exercise was significantly greater than anticipated, leading to a reduction in the distance they covered and a revision of the planned classroom activity.

A disability specialist at the University of Illinois at Urbana-Champaign raises a concern over negative stereotyping of having disabilities and of those living with disabilities: "…we want to be careful and mindful of how we present and execute simulated activities … as they sometimes can backfire and perpetuate stereotypes rather than diminish them, even with good intentions."\textsuperscript{19} The authors continue to explore sensorial impairment as one of multiple research approaches to help support understanding within the product development of everyday objects.

**Conclusion**
Though our ideal is for all individuals to be able to conduct their daily lives without unnecessary challenges, we still have a long way to go, and the value of developing empathy must not be underestimated. Why does oral contraceptive packaging offer no tactile indication to the user as to which pills contain the active medicine (e.g. weeks 1-3) and which the placebo (e.g. week 4)? Picking up the packaging upside down and taking the placebo instead of the active medicine could have serious consequences. Why do single-serving coffee packs in hotel rooms offer no tactile indication of which is caffeinated and which decaffeinated? How does one operate a hotel shower if one is unable to read the visual cues?
Though these may seem minor irritations to the majority, with an increasing proportion of our population developing disabilities they represent the constant erosion of one's ability to function in an able-bodied world. Designers, the designing process, and ultimately the resulting products are beginning to respond to authentic user needs. Health care maintenance is of critical importance as we strive to maintain a good quality of life for all. Shifting demographics will result in more seniors than ever before. As disability and aging are no longer perceived as a barrier to quality of life, products and environments that are less than empowering will no longer be acceptable.

The authors believe that there will be significant changes in personal and social engagement in the future. The individual will take more of an active role in their own health maintenance, with an emphasis on prevention rather than cure. The focus will be on weight management and wellness rather than superficial cosmetic surgery. We could be controlling our health care and medicine management via the web, as we now do our money. It is possible that video communication systems will replace person-to-person medical appointments, especially if touch and smell can be conveyed via computer in the future. Clothing will contain sensors and monitors that alert us to a drop in body temperature, salt levels in our perspiration, and urine concentration. Life-long learning will increasingly result in universities hosting multi-generational classrooms accommodating students from ages 18–80+. Multiple careers for individuals have become more common, which requires a more flexible approach to education and to re-education. There is an overlap between the office and home as more workers telecommute and the numbers of home-based businesses increase. However, many people will continue to relocate for work opportunities, which suggests an investment in customised housing units (e.g. modular systems) which can literally be relocated when we change jobs. Public and private space will continue to merge beyond what we have experienced today with Wi-Fi, constant electronic contact, and the need for social connectedness.

Focusing on the lived experience of users offers the product developer a significant resource to bridge the gulf between existing product solutions and future design outcomes that will enhance quality of life for all. Material landscapes need to be more empowering. Built environments need to consider users with various sensorial abilities. By including the marginalised voice now, we will be instilling the product developers of tomorrow with valuable insight, awareness, and sensitivity to their target users. We recommend employing empathic research strategies early within the education curricula of designers to enhance their awareness of others. Rather than designing only for the mainstream and general user, let our designers design for real people. Enable them to begin "… listening to the voices of difference."\textsuperscript{11}

References
1. Chapman J. Emotionally Durable Design: Objects, Experiences and Empathy. London: Earthscan; 2005.
2. Walker S. Sustainable By Design: Explorations in Theory and Practice. London: Earthscan; 2006.
3. Riggins SH. Fieldwork: "The Living Room". In: Riggins SH, editor. The Socialness of Things. New York: Mouton de Gruyter; 1994.
4. Sudjic D. The language of things: understanding the world of desirable objects. London: W W Norton; 2009. p. 7.
5. Hansen N and Philo C. The Normality of Doing Things Differently: Bodies, Spaces and Disability Geography. Tijdschrift voor Economische en Sociale Geografie. 2007; 98 (4): 493-506. p. 498.
6. Americans with Disabilities Act [homepage on the Internet]. [cited 9 August 2009]. Available from: http://www.eeoc.gov/types/ada.html
7. Weightman D and McDonagh D. People are doing it for themselves. In: Proceedings of the International Conference on Designing Pleasurable Products and Interfaces; 2003; Pittsburgh, Pennsylvania: ACM Press; 2003. p. 34-39.
8. Clarkson J, Coleman R, Keates S, and Lebbon C. Inclusive Design: Design for the whole population. London: Springer; 2003.
9. McDonagh D. Do It Until It Hurts!: Empathic Design Research. Design Principles and Practices: An International Journal. 2008; 2 (3): 103-110.
10. Formosa D. Social responsibility through design: Smart Design calls for a wide-angle view of Universal Design. Smart News [homepage on the Internet]. [updated 2006, cited 14 September 2009]. Available from: http://www.smartdesignworldwide.com/news/article.php?id=98
11. Galanakis M. Space Unjust: Socio-Spatial Discrimination in Urban Public Space. Cases from Helsinki and Athens. Helsinki, Finland: School of Design, University of Art and Design; 2008. p. 23.
12. Strickfaden M, Devlieger P, and Heylighen A. Building empathy through dialogue. In: Malins J, editor. Proceedings of the Eighth International Conference of the European Academy of Design: Design Connexity; 2009 April; Aberdeen, Scotland: Gray's School of Art; 2009. p. 451.
13. Denton H and McDonagh D. Using focus group methods to improve students' design project research in schools: drawing parallels from action research at undergraduate level. International Journal of Technology and Design Education. 2003; 13(2): 129-144.
14. Fulton Suri J. Empathic design: Informed and inspired by other people's experience. In: Koskinen I, Battarbee K, and Mattelmäki T, editors. Empathic design: User experience in product design. Helsinki, Finland: IT Press; 2003. p. 52.
15. Plowman I. Ethnography and critical design practice. In: Laurel B, editor. Design research: Methods and perspectives. Cambridge, MA: MIT Press; 2003. p. 34.
16. Hoffman ML. Empathy and moral development: Implications for caring and justice. New York: Cambridge University Press; 2000. p. 4.
17. Hickman R. Why we make art and why it is taught. Bristol, UK: Intellect; 2005. p. 138.
18. Goldstein E. Applications of Universal Design to Facilities. In: Burgstahler SE and Cory RC, editors. Universal Design in Higher Education: From Principles to Practice. Cambridge, MA: Harvard Education Press; 2008. p. 211.
19. Heft Sears S. Personal correspondence with the authors (October 2009).

PEER REVIEW
Not commissioned. Externally peer reviewed.

CONFLICTS OF INTEREST
The authors declare that they have no competing interests.
Lithium Prevents and Ameliorates Experimental Autoimmune Encephalomyelitis

Patrizia De Sarno, Robert C. Axtell, Chander Raman, Kevin A. Roth, Dario R. Alessi, and Richard S. Jope

*J Immunol* 2008; 181:338-345; doi: 10.4049/jimmunol.181.1.338
http://www.jimmunol.org/content/181/1/338

Experimental autoimmune encephalomyelitis (EAE) models, in animals, many characteristics of multiple sclerosis, for which there is no adequate therapy. We investigated whether lithium, an inhibitor of glycogen synthase kinase-3 (GSK3), can ameliorate EAE in mice. Pretreatment with lithium markedly suppressed the clinical symptoms of EAE induced in mice by myelin oligodendrocyte glycoprotein peptide (MOG\textsubscript{35–55}) immunization and greatly reduced demyelination, microglia activation, and leukocyte infiltration in the spinal cord. Lithium administered postimmunization, after disease onset, reduced disease severity and facilitated partial recovery. Conversely, in knock-in mice expressing constitutively active GSK3, EAE developed more rapidly and was more severe. In vivo lithium therapy suppressed MOG\textsubscript{35–55}-reactive effector T cell differentiation, greatly reducing the in vitro MOG\textsubscript{35–55}-stimulated proliferation of mononuclear cells from draining lymph nodes and spleens, and MOG\textsubscript{35–55}-induced IFN-\(\gamma\), IL-6, and IL-17 production by splenocytes isolated from MOG\textsubscript{35–55}-immunized mice. In relapsing/remitting EAE induced with proteolipid protein peptide, lithium administered after the first clinical episode maintained long-term (90 days after immunization) protection, and after lithium withdrawal the disease rapidly relapsed. These results demonstrate that lithium suppresses EAE and identify GSK3 as a new target for inhibition that may be useful for therapeutic intervention in multiple sclerosis and other autoimmune and inflammatory diseases afflicting the CNS.

Multiple sclerosis is the most common autoimmune inflammatory disease of the CNS. It is characterized by immune-mediated demyelination and neurodegeneration of the CNS, with lesions predominantly occurring in the white matter (1–4).
Although multiple sclerosis afflicts more than two million people, its etiology remains unresolved and currently there are no adequate therapeutic interventions. Experimental autoimmune encephalomyelitis (EAE)\(^1\) is widely studied in animals to model many of the clinical, immunological, and neuropathological features of multiple sclerosis (5). EAE is induced in susceptible mice by eliciting an immune response to injected myelin Ags, such as myelin oligodendrocyte glycoprotein (MOG) peptide or proteolipid protein peptide (PLP) (6). In EAE, the integrity of the blood-brain barrier is impaired, allowing perivascular infiltrates in the CNS, leading to demyelination and loss of neuronal function, and culminating in paralysis. CD4\(^+\) T cells infiltrating the CNS are the initiators and the early effector cells in the development of EAE, but infiltrating macrophages, dendritic cells, and resident glia are the ultimate effector cells that amplify neuroinflammation and cause demyelination and axonal damage (3, 7).

Lithium has been used for >50 years for the therapeutic treatment of psychiatric diseases in humans (8). The basis of this therapeutic action of lithium remains unresolved, but accumulating evidence indicates that it stems largely from its inhibition of glycogen synthase kinase-3 (GSK3) (9), a serine-threonine protein kinase with regulatory actions affecting many cellular functions (10). The activity of GSK3 is regulated mainly by the inhibitory phosphorylation of N-terminal serines of the two isoforms of GSK3, Ser21 in GSK3\(\alpha\) and Ser9 in GSK3\(\beta\) (10). Mutation of the N-terminal serine to alanine provides a form of GSK3 that cannot be inactivated by this mechanism. Recently, knock-in mice were produced in which the regulatory serines of both GSK3 isoforms were mutated to alanines, S21A-GSK3\(\alpha\) and S9A-GSK3\(\beta\), with both GSK3 isoforms expressed at normal levels (11). Thus, in these GSK3 knock-in mice, GSK3\(\alpha\) and GSK3\(\beta\) are present at physiological levels, but they cannot be inhibited by serine phosphorylation, so GSK3 is maximally active.

Growing evidence in the literature indicates that GSK3 is a major regulator of inflammation. By inhibiting GSK3, lithium greatly reduced the production of major proinflammatory cytokines following stimulation of several types of Toll-like receptors in human monocytes and mouse peripheral blood monocytes (12). In rodents, lithium and other inhibitors of GSK3 also increased survival from lethal sepsis (12) and lethal lupus (13), attenuated organ injury associated with sepsis (14), provided significant protection from a wide variety of apoptotic insults in the CNS (15), and ameliorated several inflammatory and immune conditions, such as arthritis, peritonitis, and colitis (14, 16–18).

---

\(^1\)Department of Psychiatry and Behavioral Neurobiology, \(^2\)Department of Medicine, and \(^3\)Department of Pathology, University of Alabama at Birmingham, Birmingham, AL 35294; and \(^4\)Medical Research Council Protein Phosphorylation Unit, School of Life Sciences, University of Dundee, Dundee, United Kingdom

Received for publication December 13, 2007. Accepted for publication April 24, 2008.

The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
This work was supported by grants from the National Multiple Sclerosis Society, PP1337 (to P.D.) and RG3891 (to A.R. and C.R.), and by National Institutes of Health Grants MH38752 and NS37768 (to R.S.J.).

\(^2\)Address correspondence and reprint requests to Dr. Richard S. Jope and Dr. Patrizia De Sarno, Department of Psychiatry and Behavioral Neurobiology, University of Alabama at Birmingham, 1720 Seventh Avenue South, Birmingham, AL 35294. E-mail addresses: email@example.com and firstname.lastname@example.org

Current address: Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA 94305.

Abbreviations used in this paper: EAE, experimental autoimmune encephalomyelitis; MOG, myelin oligodendrocyte glycoprotein; PLP, proteolipid protein peptide; GSK3, glycogen synthase kinase-3; CDI, cumulative disease index; Treg, T regulatory.

Notably, administration of GSK3 inhibitors reduced the development of inflammation and tissue injury associated with spinal cord trauma, significantly blocking the development of hind limb motor impairments (19). Most importantly for the present study, in 1991, before lithium was known to inhibit GSK3, intraperitoneal injections of lithium in rats were reported to inhibit the development of EAE (20). Unfortunately, high toxic doses of lithium were used and it was concluded that "the immunosuppression was a toxic effect" (20), which appears to have discouraged further studies. Considering the greater understanding of the effects of GSK3 and the long history of safe usage of lithium in humans, we considered the possibility that administration of low, therapeutically relevant levels of lithium may provide protection from inflammatory autoimmune diseases affecting the CNS. Lithium at therapeutic levels is nontoxic and is commonly administered to mice in the diet to achieve serum levels equivalent to those attained therapeutically in human patients (21). The results reported in this study show that pretreatment with therapeutically relevant levels of lithium almost completely blocked the onset of EAE; lithium promoted recovery when administered after the development of EAE; and, remarkably, chronic treatment blocked relapse episodes of EAE, which rapidly returned after lithium was withdrawn.

**Materials and Methods**

**Animals**

Male C57BL/6 and female SJL mice were purchased from Frederick Cancer Research. To test whether constitutively active GSK3 exacerbates EAE, GSK3 knock-in mice (11) and matched controls were used. These mice express constitutively active GSK3, with S21A-GSK3α and S9A-GSK3β in place of endogenous GSK3α/β, which disables the inhibitory serine phosphorylation so that both GSK3 isoforms retain maximal activity. All mice were housed and treated in accordance with National Institutes of Health and University of Alabama Animal Care and Use Committee guidelines. For lithium pretreatment, lithium was administered in pelleted food containing 0.2% lithium carbonate (Harlan Teklad) for 1 wk before immunization and maintained after immunization. This lithium administration to mice in the diet is nontoxic and is commonly used to achieve serum levels equivalent to those attained therapeutically in human patients (21). For lithium treatment after immunization, mice were administered the lithium-containing food and were given an i.p. injection of LiCl (10 mg/kg) on the first and second days of lithium treatment to increase lithium levels more rapidly than can be attained by dietary administration alone.
The concentration of lithium in the serum was measured by inductively coupled plasma/mass spectrometry performed by Medtox Laboratories.

**Induction of active EAE**

Male C57BL/6 mice (8–12 wk old, from Frederick Cancer Research) or GSK3 knock-in mice (11) and matched controls were immunized with a s.c. injection of 150 μg of MOG\textsubscript{35-55} peptide (BiosynTech International) emulsified in CFA on day 0, and an i.p. injection of 500 ng pertussis toxin (List Biological Laboratories) on days 0 and 2. Female SJL mice (8–12 wk old, from Frederick Cancer Research) were immunized with a s.c. injection of 150 μg of PLP\textsubscript{139-151} peptide (BiosynTech International) emulsified in CFA on day 0. Onset and progression of EAE symptoms were monitored daily using a standard scale of 0 to 6: 0, no clinical signs; 1, loss of tail tone; 2, flaccid tail; 3, incomplete paralysis of one or two hind legs; 4, complete hind limb paralysis; 5, moribund (animals were humanely euthanized); 6, death. To compare the time course of disease development in different groups of mice, the daily average of the clinical scores was calculated for each group. A cumulative disease index (CDI) was calculated for each mouse as the sum of its daily clinical scores and averaged for each treatment group. Statistically significant differences were calculated using a Student's t test, and values of p < 0.05 were considered statistically significant.

**T cell proliferation and cytokine production**

Single-cell suspensions from draining lymph nodes and spleen were obtained 10 days after MOG\textsubscript{35-55} immunization, a time at which very efficient MOG\textsubscript{35-55}-specific responses can be detected (22). Cells were cultured in 96-well plates (2 × 10\textsuperscript{5} cells/well) and stimulated with 0, 1, or 10 μg/ml MOG\textsubscript{35-55} peptide, or 1 μg/ml anti-CD3 (145-2C11), in triplicate. After 72 h, cells were labeled with [\textsuperscript{3}H]thymidine (1 μCi/well) for 18 h, and incorporation of [\textsuperscript{3}H]thymidine was measured. Single-cell suspensions of splenocytes were stimulated with 0, 1, or 10 μg/ml MOG\textsubscript{35-55} peptide, or 1 μg/ml anti-CD3, and after 48 h cytokines in the culture supernatants were measured by ELISA (eBioscience).

**Flow cytometry**

Mice were anesthetized, spleens and draining lymph nodes were removed, and single-cell suspensions were prepared. Mice were then perfused, and spinal cords were removed and incubated with 2 mg/ml collagenase D (Roche) and 5 U/ml DNase (Sigma-Aldrich) for 1 h at 37°C. Mononuclear cells from the spinal cord were purified by two-step Percoll gradient centrifugation, as described previously (23). Mononuclear cell preparations were incubated with anti-CD16/32 (2.4G2; FcR block), then stained with PE-conjugated anti-CD8 (53-6.7), PerCP-conjugated anti-CD4 (L3T4), FITC-conjugated anti-CD11b (DX5), or PE-conjugated anti-CD25 (PC61.5), as indicated. For intracellular staining, surface-stained cells were permeabilized... and stained with Alexa647-anti-FoxP3. All Abs were obtained from eBioscience. Stained cells were analyzed using a FACSCalibur (BD Biosciences).

**NK cell cytotoxicity assay**

YAC-1 cells expressing firefly luciferase (YAC-1-Luc) were used as target cells to measure NK activity in spleen cells as described (24). In brief, spleen mononuclear cells isolated on day 10 postimmunization were incubated with YAC-1-Luc cells at a ratio of 5:1 for 4 h at 37°C in a tissue culture incubator. Separate wells contained only YAC-1 cells.
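To make the scoring arithmetic above concrete, here is a minimal sketch (not from the paper; the data values and variable names are hypothetical) of how daily clinical scores on the 0-6 scale yield a per-group daily average and a cumulative disease index, assuming the CDI is the sum of each mouse's daily scores, averaged over its treatment group:

```python
# Hypothetical illustration of the EAE clinical scoring arithmetic.
# Each inner list holds one mouse's daily scores on the 0-6 scale
# (values invented for illustration only).
group_scores = {
    "lithium-free": [[0, 1, 2, 3, 4], [0, 0, 2, 3, 3]],
    "lithium":      [[0, 0, 0, 1, 1], [0, 0, 1, 1, 0]],
}

for group, mice in group_scores.items():
    n_days = len(mice[0])
    # Daily average clinical score across all mice in the group.
    daily_means = [sum(m[day] for m in mice) / len(mice) for day in range(n_days)]
    # Cumulative disease index: sum of daily scores per mouse,
    # then averaged over the treatment group (assumed interpretation).
    cdis = [sum(m) for m in mice]
    mean_cdi = sum(cdis) / len(cdis)
    print(f"{group}: daily means {daily_means}, mean CDI {mean_cdi:.1f}")
```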
The level of luciferase activity was determined at the end of the incubation by a chemiluminescence assay according to the manufacturer's instructions (Promega). For each target, three replicates of the internal references for the 0% viability background (MIN\(_{SDS}\)) and the 100% viability maximal signal (MAX\(_{media}\)) were run. The 0% viability reference point (MIN\(_{SDS}\)) was determined by plating target cells in media with a final concentration of 1% SDS. The 100% viability reference point (MAX\(_{media}\)) was determined by plating target cells in media without effector cells. Percent viability was calculated as the mean luminescence of the experimental sample minus background (MIN\(_{SDS}\)), divided by the mean luminescence of the input number of target cells (MAX\(_{media}\)) minus background (MIN\(_{SDS}\)). Percent-specific lysis is equal to (1 − percent viability) × 100, i.e., % specific lysis = [1 − (experimental − MIN\(_{SDS}\))/(MAX\(_{media}\) − MIN\(_{SDS}\))] × 100, with luminescence measured as counts per 5 s.

**T regulatory (Treg) cell activity assay**

Treg cell activity was determined by a proliferation-suppression assay (25). CD4\(^+\)CD25\(^+\) T cells (Treg cells) from spleens and lymph nodes of untreated and day 10 lithium-treated mice, and CD4\(^+\)CD25\(^-\) responder T cells from C57BL/6 mice, were fractionated using magnetic bead chromatography (Stem Cell Technologies). Cultures of Treg cells and responder cells at a ratio of 2:1 were stimulated with 1 µg/ml anti-CD3 (145-2C11) in the presence of gamma-irradiated T cell-depleted spleen cells that served as APCs for 48 h. Proliferation was assessed by incubation with 1 µCi of [\textsuperscript{3}H]thymidine for an additional 18 h of culture. Separate wells contained anti-CD3 with responder T cells plus APCs only, Treg cells plus APCs, and APCs only. The purity of responder cells and Treg cells prepared as above is routinely greater than 95%.

**Histopathology and immunohistochemistry**

Cross-sections made through the whole length of the spinal cords were immersion-fixed in Bouin's fixative and paraffin-embedded, and six sections (5 µm) from a minimum of three animals per group were deparaffinized and stained with Luxol fast blue for evaluation of demyelination, or with biotin-conjugated *Griffonia simplicifolia* lectin (GS-I-B\(_4\)) for staining of microglia. For immunohistochemistry, sections were deparaffinized, followed by Ag retrieval and inhibition of endogenous peroxidase activity, and blocked with serum (1% BSA, 0.2% normal milk, 0.05% Triton X-100 in PBS for rabbit Abs, or 5% horse serum, 0.3% Triton X-100 in PBS for mouse Abs). Sections were incubated overnight at 4°C with rabbit anti-myeloperoxidase (Lab Vision) or with goat anti-mouse CD4 (R&D Systems) for detection of neutrophils and CD4\(^+\) T cells, respectively, followed by PBS washes and incubation with HRP-conjugated anti-rabbit or anti-goat secondary Abs (Jackson ImmunoResearch Laboratories) for 1 h at room temperature. After three washes in PBS, cyanine 3-conjugated tyramide was deposited according to the manufacturer's protocol (TSA Plus, PerkinElmer Life Science Products). Sections were washed, counterstained with Hoechst 33258 (Sigma-Aldrich), coverslipped with PBS:glycerol (1:1), and viewed with a Zeiss Axioscope microscope (Carl Zeiss) equipped with a CCD camera. Digital images were captured using a Zeiss AxioCam and Zeiss Axiovision software.
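As a worked illustration of the specific-lysis formula above (the luminescence readings below are invented for the example, not taken from the paper):

```python
# Hypothetical worked example of the % specific lysis calculation above.
# Luminescence readings in counts per 5 s; all values are invented.
min_sds = 150.0        # 0% viability reference: target cells in 1% SDS
max_media = 9800.0     # 100% viability reference: target cells alone in media
experimental = 6100.0  # target cells incubated with spleen effector cells

viability = (experimental - min_sds) / (max_media - min_sds)
specific_lysis = (1.0 - viability) * 100.0
print(f"% specific lysis = {specific_lysis:.1f}")  # -> 38.3 for these inputs
```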
All sections used for analysis were processed in parallel for detection in the same staining group, using...

FIGURE 3. Lithium pretreatment reduces Ag-specific T cell proliferation. Mice were immunized with MOG\textsubscript{35–55} peptide with or without lithium pretreatment and analyzed after 10 days. Mononuclear cells isolated from draining lymph nodes or spleens were stimulated with 0, 1, or 10 μg/ml MOG\textsubscript{35–55} peptide (A) or 0 or 1 μg/ml anti-CD3 (B), and proliferation was measured. For each experiment, cells from two mice were pooled. Means ± SEM (n = 6); *, p < 0.05 compared with mice not pretreated with lithium.

FIGURE 4. Lithium pretreatment reduces Ag-specific T cell production of cytokines. Mice were immunized with MOG\textsubscript{35–55} peptide with or without lithium pretreatment and analyzed after 10 days. Isolated splenocytes were stimulated with 0, 1, or 10 μg/ml MOG\textsubscript{35–55} peptide (A), or 0 or 1 μg/ml anti-CD3 (B), and the production of IFN-γ, IL-6, and IL-17 was measured. For each experiment, cells from two mice were pooled. Means ± SEM (n = 3); *, p < 0.05 compared with mice not pretreated with lithium.

**Results**

**Lithium administration ameliorates clinical symptoms of EAE**

To assess whether lithium is protective and anti-inflammatory in EAE, C57BL/6 mice with or without lithium pretreatment were immunized with MOG\textsubscript{35–55} peptide to induce EAE. Lithium-free mice developed clinical EAE after 19.5 ± 0.7 days, with an incidence of 100% and a CDI of 48.5 ± 3.1 (Fig. 1A). Lithium pretreatment completely prevented EAE in 81% (13/16) of mice, and the afflicted 19% of mice had a delayed onset (28 ± 2.1 days) and greatly reduced severity, with a CDI of only 3.4 ± 2.0 (p < 0.05). The lithium concentration in the serum of mice on a lithium diet for 5 wk after EAE induction was 0.53 ± 0.03 mEq/l (n = 4).

To test whether lithium administration was capable of ameliorating ongoing EAE, MOG\textsubscript{35–55}-immunized mice were treated with lithium after the onset of clinical symptoms. In one protocol, lithium treatment was initiated when mice attained a clinical score of 2, which was achieved on day 16.8 ± 1.3 postimmunization, and mice were monitored to day 53. Whereas lithium-free mice with EAE continued to deteriorate with increasing clinical scores, mice with EAE given lithium upon reaching a score of 2 stabilized at that level of disease and did not worsen (Fig. 1B). The overall severity of EAE as measured by the CDI was significantly lower, at 77.8 ± 4.4 for lithium-treated EAE mice compared with 107.6 ± 5.1 for lithium-free mice (p < 0.05). A more challenging protocol was also tested, in which lithium treatment was initiated 20 days postimmunization, at the peak of the acute phase (Fig. 1C). The CDI from day 0 to 19 (before lithium treatment) was 23.5 ± 3.4 for the mice that were not going to be treated with lithium, and 24.4 ± 3.2 for the mice that were subsequently administered lithium. The CDI from days 20 to 72 was 97.0 ± 16.7 for the lithium-free mice and 69.3 ± 11.0 for the lithium-treated mice (p < 0.05). Therefore, lithium treatment enabled a significant recovery in the clinical course of EAE. Thus, lithium treatment before immunization with MOG\textsubscript{35–55} peptide rendered mice resistant to the development of EAE, and lithium treatment after establishment of EAE lowered disease severity and/or facilitated partial recovery.
**Lithium administration ameliorates neuropathology associated with EAE**

Spinal cords examined 33 days after MOG\textsubscript{35–55} peptide immunization contained activated microglia that colocalized with extensive demyelination, both of which were absent in the spinal cords of lithium-pretreated mice (Fig. 2, A and B, respectively). The spinal cords from mice with EAE that were treated with lithium after reaching a clinical score of 2 displayed lower microglial activation and less demyelination than those of lithium-free mice with EAE (Fig. 2, A and B, respectively). Thus, pre- or posttreatment with lithium attenuated clinical progression, demyelination, and microglia activation in mice with EAE. Amelioration of EAE by lithium treatment was further confirmed by examination of leukocyte infiltration into the CNS. Spinal cords from MOG\textsubscript{35–55}-immunized mice examined after the lithium pretreatment and lithium posttreatment paradigms described in Fig. 1, A and B, contained much less evidence of infiltrated CD4\(^+\) T cells (Fig. 2C) and neutrophils (Fig. 2D) than matched spinal cords from MOG\(_{35–55}\)-immunized mice not given lithium.

FIGURE 5. Effects of lithium treatment of mice on NK cells and Treg cells. A, Lithium treatment did not alter the number of NK cells. The histogram depicts the percentage of NK1.1-positive cells from lithium-untreated and -treated mice. The scatter plot (lower panel) shows the number of NK1.1\(^+\) cells per million spleen cells ± SD from lithium-untreated and -treated mice (n = 5). B, Treg cells are reduced in spleens of lithium-treated mice. The dot plot shows the percentage of CD25\(^+\)FoxP3\(^+\) T cells within a CD4\(^+\)-gated population of cells from spleens of lithium-untreated and -treated mice. The bar graph plots the mean number of CD4\(^+\)-gated CD25\(^+\)FoxP3\(^+\) T cells per million spleen cells from lithium-untreated and -treated mice ± SD (n = 5). C, Lithium treatment did not alter Treg cell activity. Treg cells from lithium-treated and untreated mice were evaluated for their ability to suppress the anti-CD3-induced proliferation of CD4\(^+\)CD25\(^-\) responder T cells from C57BL/6 mice in a [\textsuperscript{3}H]thymidine incorporation assay. Data represent means ± SEM (n = 3).

Surface staining of mononuclear cells from spinal cords of MOG\(_{35–55}\)-immunized mice treated with lithium after reaching score 2 confirmed that there was a much lower proportion of CD4\(^+\) T cells in spinal cords of lithium-treated than of lithium-free mice (Fig. 2E).

**Lithium administration reduces effector T cells**

The resistance to EAE provided by lithium treatment could be due to attenuated generation of MOG\(_{35–55}\)-specific T cells. Therefore, we measured the in vitro-stimulated proliferation of T cells isolated from draining lymph nodes and spleens 10 days after MOG\(_{35–55}\) immunization, with or without in vivo lithium pretreatment. The MOG\(_{35–55}\)-stimulated proliferation of T cells from primed mice was greatly reduced in cells of both tissues prepared from lithium-pretreated mice (Fig. 3A). The low response of T cells from lithium-treated MOG\(_{35–55}\)-immunized mice to Ag restimulation could be due to compromised effector cell generation and/or an intrinsic loss of T cell ability to be activated. To determine whether lithium treatment compromises the ability of T cells to be activated, we examined the proliferative response to anti-CD3, and this was similar between T cells from lithium-treated and untreated mice (Fig. 3B).
These results indicate that in vivo lithium pretreatment selectively inhibits the generation of MOG\(_{35–55}\)-specific effector T cells. This conclusion was further supported by measurements of MOG\(_{35–55}\)-stimulated production of cytokines by splenocytes isolated from MOG\(_{35–55}\)-immunized mice 10 days postimmunization. In cells from mice pretreated in vivo with lithium, the MOG\(_{35–55}\)-induced production of IFN-\(\gamma\), IL-6, and IL-17 was much lower than the amounts produced by splenocytes isolated from MOG\(_{35–55}\)-immunized mice not treated with lithium, whereas anti-CD3-induced IFN-\(\gamma\), IL-6, and IL-17 production was unaffected by lithium treatment (Fig. 4). IL-10 is a regulatory cytokine in inflammatory autoimmune diseases, and its elevated expression is associated with amelioration of, or protection from, EAE (26–28). We therefore determined whether lithium treatment induced generation of IL-10-producing effector T cells. Restimulation of splenocytes isolated from lithium-treated or untreated MOG\(_{35–55}\)-immunized mice with MOG peptide did not result in the production of detectable levels of IL-10 (data not shown). However, stimulation of splenocytes from MOG-immunized lithium-treated mice, but not untreated mice, with anti-CD3 resulted in IL-10 production (67.9 pg/ml). This result suggests that one mechanism of beneficial action of lithium in EAE is the generation of IL-10-producing T cells, but this is limited to non-Ag (MOG\(_{35–55}\))-specific T cells.

**Lithium administration does not affect NK cells, and reduces the number but not the activity of Treg cells**

NK cells and Treg cells have a role in modulating disease activity in EAE (29–33). Therefore, lithium could alter the development and/or severity of EAE by altering the numbers and functions of NK or Treg cells. To address these possibilities, we evaluated the number and activity of NK and Treg cells in spleens of mice treated for 10 days with lithium compared with untreated mice. The results show that the number of NK1.1-expressing NK cells was similar in lithium-treated and untreated mice (Fig. 5A). The NK cell activity of spleen mononuclear cells against YAC-1 targets was also similar in lithium-treated and untreated mice (data not shown). Remarkably, lithium treatment resulted in a significant reduction in the proportion of Treg cells compared with untreated mice (Fig. 5B). However, the ability of Treg cells from lithium-treated mice to suppress the proliferation of responder CD4⁺ T cells was comparable to that of untreated mice (Fig. 5C). Overall, these results demonstrate that lithium-mediated protection and/or amelioration of EAE is not due to changes in NK cell number or activity or to enhanced Treg cell numbers or activity.

**Increased severity of EAE in constitutively active GSK3 knock-in mice**

GSK3α/β knock-in mice containing serine-to-alanine mutations in the regulatory serines of both GSK3 isoforms, S21A-GSK3α and S9A-GSK3β, and matched wild-type mice were immunized with MOG₃₅₋₅₅ peptide to test whether constitutively active GSK3 promoted EAE. Wild-type mice developed symptoms of EAE similar to C57BL/6 mice (Fig. 6). The development of the acute phase of the disease was accelerated in the constitutively active GSK3 knock-in mice compared with wild-type mice. Furthermore, during the chronic phase, constitutively active GSK3 knock-in mice exhibited more severe disease than wild-type mice.
Overall, severity of EAE was significantly different between the two groups of mice, as the CDI was 54.5 ± 4.6 for wild-type mice and 80.3 ± 28.9 for constitutively active GSK3 knock-in mice \((p < 0.05)\). Incidence of disease was 6/6 in the wild-type and 5/6 in the GSK3 mutant knock-in mice. Thus, mice expressing constitutively active GSK3 exhibited more severe EAE than wild-type mice.

**Lithium administration controls relapsing EAE**

A major form of clinical multiple sclerosis is a relapsing/remitting disease, which is modeled in female SJL mice immunized with PLP (34). PLP₁₃₉₋₁₅₁-immunized mice developed an acute episode of clinical EAE, followed by remission (Fig. 7). During the first remission, 20 days after immunization, half the mice were administered lithium. All mice displayed a secondary relapse episode, but in the lithium-treated mice the severity was approximately half that displayed by the lithium-free mice, which reached clinical scores equivalent to the first episode. Subsequently, the lithium-free mice displayed a third episode of clinical EAE, which stabilized in a chronic progressive phase with an average clinical score near 2. In contrast, the lithium-treated mice stabilized with mild symptoms, which remained below an average clinical score of 1, through 90 days postimmunization. To determine whether continuous lithium treatment was blocking an active disease process, lithium treatment was withdrawn on day 90. Remarkably, after a washout period of a few days, the mice that had been treated with lithium relapsed to reach a clinical score equivalent to that of the mice that had never received lithium. Restoration of lithium treatment on day 109 promoted recovery. Thus, chronic lithium treatment suppressed an ongoing disease process, which was reactivated upon withdrawal of lithium, demonstrating that lithium treatment is therapeutic in relapsing/remitting EAE.

**Discussion**

EAE is a debilitating immune-mediated inflammatory and demyelinating disease of the CNS induced in rodents by the administration of CNS-derived Ags. EAE is widely used to model multiple sclerosis, to identify physiological cascades that lead to clinical symptoms, and to identify potential therapeutic targets. The results reported in this study show that the symptoms of EAE were significantly relieved in mice using four different lithium treatment protocols: pretreatment, treatment at the onset of EAE, treatment during severe disease, and treatment during remission. Especially notable is the effectiveness of lithium treatment in the relapsing/remitting EAE paradigm, where lithium administered after the first disease episode, during remission, provided long-term suppression of EAE. After nearly 3 mo of protection by lithium, relapse rapidly occurred after lithium was withdrawn, and recovery followed subsequent readministration of lithium. The ability to repress or induce clinical symptoms of EAE at any time after immunization by lithium administration or withdrawal, respectively, provides a unique and valuable model for assessing the disease process long after initial onset. Attenuation of the clinical symptoms of EAE by lithium treatment was accompanied by reduced leukocyte infiltration into the spinal cord, reduced demyelination, and reduced microglial activation. Remarkably, the extent of demyelination in spinal cords of mice treated with lithium after onset of disease was less than in untreated mice. This could be due to lithium inhibiting demyelination, promoting remyelination, or a combination of both.
Lithium could inhibit demyelination by suppressing microglial activation and inflammatory cytokine production. This is consistent with several studies highlighting microglia as a major mediator of neuronal damage in EAE and multiple sclerosis (35–39). Our data also suggest the intriguing possibility that lithium promotes remyelination; however, this is a speculative observation and needs to be examined further. The broad effectiveness of lithium on characteristic signs of EAE indicated that it affects an early stage in the immunological cascade leading to EAE, and this was confirmed by the finding that lithium attenuated the generation of MOG\textsubscript{35–55} peptide-responsive T cells. This block likely stems, in part, from the recent finding that GSK3, which is inhibited by lithium (9), is crucial for the differentiation and activation of proinflammatory dendritic cells (40). The Th17 lineage of CD4\textsuperscript{+} T cells has recently been identified as the major effector T cell for EAE development (41–43), and IL-6 has a crucial role in inducing IL-21, which in cooperation with TGFβ is necessary for the generation of Th17 cells (44–46). Notably, in vivo lithium treatment significantly reduced the \textit{in vitro} MOG\textsubscript{35–55}-stimulated production of both IL-6 and IL-17 by splenocytes isolated from MOG\textsubscript{35–55}-immunized mice, indicating that lithium reduced the development of MOG\textsubscript{35–55}-responsive Th17 cells, which would retard the development of EAE. However, lithium did not block only the development of Th17 cells, as indicated by the finding that lithium treatment also greatly reduced the \textit{in vitro} MOG\textsubscript{35–55}-induced production of IL-6 and of IFN-γ; the latter is produced by Th1 cells and by a population of Th cells that coexpress both IL-17 and IFN-γ, cells that correlate closely with disease severity in EAE (23, 47, 48). Thus, it appears that by inhibiting GSK3, lithium suppresses the development and differentiation of Ag-responsive T cells, possibly at the level of dendritic cell activation. Consistent with this possibility, the ability of T cells obtained from lithium-treated mice to be activated and to produce inflammatory cytokines following direct engagement of the Ag receptor with anti-CD3 was unaffected. We also found that lithium-mediated inhibition of GSK3 did not lead to increases in numbers or activity of Treg or NK cells. These findings extend to a therapeutically relevant lithium administration paradigm the previous report that high doses of lithium were capable of blocking EAE in rats (20), which preceded the discovery that lithium is a selective inhibitor of GSK3 (9). The importance of GSK3 is also highlighted by the finding that EAE was moderately but significantly promoted in GSK3 knock-in mice in which GSK3 is constitutively active and unable to be inhibited by the phosphorylation of regulatory serines (11). Lithium has also been shown to reduce inflammatory cytokine production by inhibiting GSK3-dependent activation of NF-κB transcriptional activity (12, 17, 49). Inhibitors of GSK3, in some cases including lithium, previously were reported to reduce LPS-induced production of inflammatory cytokines, such as IL-6, in monocytes and other cells (12), and to reduce disease severity in animal models of sepsis, arthritis, peritonitis, and colitis (12, 14, 16–19).
This study demonstrates that lithium, likely by inhibiting GSK3, has profound actions in both the innate and adaptive immune systems and in signaling mechanisms controlling the production of inflammatory molecules. Although these actions of GSK3 may at first seem surprising considering its original identification as a kinase regulating glycogen metabolism, research during the last 10 years has revealed that GSK3 regulates many cellular functions and signaling pathways, for example phosphorylating >20 transcription factors (10). Thus, GSK3 seems to be the most likely target mediating lithium's therapeutic effects in EAE, but because lithium also has other targets (8), we cannot rule out the possibility that other actions of lithium may contribute to its effects on EAE. Nonetheless, because lithium was found to be highly effective in providing protection from EAE, and it has been used for >50 years in human patients with psychiatric diseases, these findings taken together suggest that lithium treatment and targeting GSK3 may be a rational strategy to diminish the effects of autoimmune diseases as well as of inflammatory diseases affecting the CNS.

Acknowledgments

We thank Dr. Huang-Ge Zhang for assisting us with the NK cell activity assay, Cecelia Latham, Dr. Simer Preet Singh, and Anna Zimiełewska for excellent technical assistance, and the University of Alabama Neuroscience Core Facilities (NS47466, NS57098).

Disclosures

The authors have no financial conflict of interest.

References

1. McFarland, H. F., and R. Martin. 2007. Multiple sclerosis: a complicated picture of autoimmunity. \textit{Nat. Immunol.} 8: 913–919.
2. Sospedra, M., and R. Martin. 2005. Immunology of multiple sclerosis. \textit{Annu. Rev. Immunol.} 23: 683–747.
3. Hartung, H. P. 2004. Multiple sclerosis. \textit{J. Clin. Invest.} 113: 788–794.
4. Noseworthy, J. H., C. Lucchinetti, M. Rodriguez, and B. G. Weinshenker. 2000. Multiple sclerosis. \textit{N. Engl. J. Med.} 343: 938–952.
5. Steinman, L., and S. S. Zamvil. 2005. Virtues and pitfalls of EAE for the development of therapies for multiple sclerosis. \textit{Trends Immunol.} 26: 565–571.
6. Bernard, C. N., and P. R. Carnegie. 1975. Experimental autoimmune encephalomyelitis in mice: immune response to mouse spinal cord and myelin basic proteins. \textit{J. Immunol.} 114: 1537–1540.
7. Kuchroo, V. K., A. C. Anderson, H. Waldner, M. Munder, E. Bettelli, and L. B. Nicholson. 2002. T cell response in experimental autoimmune encephalomyelitis (EAE): role of self and cross-reactive antigens in shaping, tuning, and regulating the autoreactive T cell repertoire. \textit{Annu. Rev. Immunol.} 20: 101–123.
8. Jope, R. S. 1999. Anti-bipolar therapy: mechanism of action of lithium. \textit{Mol. Psychiatry} 4: 111–128.
9. Klein, P. S., and D. A. Melton. 1996. A molecular mechanism for the effect of lithium on development. \textit{Proc. Natl. Acad. Sci. USA} 93: 8455–8459.
10. Jope, R. S., and G. V. Johnson. 2004. The glamour and gloom of glycogen synthase kinase-3. \textit{Trends Biochem. Sci.} 29: 95–102.
11. McManus, E. J., K. Sakamoto, L. J. Armit, L. Ronaldson, N. Shpiro, R. Marquez, and D. R. Alessi. 2005. Role that phosphorylation of GSK3 plays in insulin and Wnt signalling defined by knockin analysis. \textit{EMBO J.} 24: 1571–1583.
12. Martin, M., K. Rehani, R. S. Jope, and S. M. Michalek. 2005. Toll-like receptor-mediated cytokine production is differentially regulated by glycogen synthase kinase 3. \textit{Nat. Immunol.} 6: 777–784.
13. Hart, D. A., S. J. Donn, H. Benediktsdottir, and S. P. Lenz. 1994. Partial characterization of the enhanced survival of female NZB/W mice treated with lithium chloride. \textit{Int. Immunol.} 6: 167–172.
14. Dugo, L., M. Collin, D. A. Allen, N. S. Patel, I. Bauer, E. M. Mervaala, M. Louhelainen, S. J. Foster, M. M. Yaqoob, and C. Thiemermann. 2005. GSK-3β inhibitors attenuate the organ injury/dysfunction caused by endotoxemia in the rat. \textit{Crit. Care Med.} 33: 1905–1912.
15. Beurel, E., and R. S. Jope. 2006. The paradoxical pro- and anti-apoptotic actions of GSK3 in the intrinsic and extrinsic apoptosis signaling pathways. \textit{Prog. Neurobiol.} 79: 173–189.
16. Whittle, B. J., C. Varga, A. Posa, A. Molnar, M. Collin, and C. Thiemermann. 2006. Reduction of experimental colitis in the rat by inhibitors of glycogen synthase kinase-3β. \textit{Br. J. Pharmacol.} 147: 575–582.
17. Hu, X., P. K. Paik, J. Chen, A. Yarilina, L. Kockeritz, T. T. Lu, J. R. Woodgett, and L. B. Ivashkiv. 2006. IFN-γ suppresses IL-10 production and synergizes with TLR2 by regulating GSK3 and CREB/AP-1 proteins. \textit{Immunity} 24: 563–574.
18. Cuzzocrea, S., E. Mazzon, R. Di Paola, C. Muia, C. Crisafulli, L. Dugo, M. Collin, D. Britti, A. P. Caputi, and C. Thiemermann. 2006. Glycogen synthase kinase-3β inhibition attenuates the degree of arthritis caused by type II collagen in the mouse. \textit{Clin. Immunol.} 120: 57–67.
19. Cuzzocrea, S., T. Genovese, E. Mazzon, C. Crisafulli, R. Di Paola, C. Muia, M. Collin, D. Britti, A. P. Caputi, and C. Thiemermann. 2006. Glycogen synthase kinase-3β inhibition reduces secondary damage in experimental spinal cord trauma. \textit{J. Pharmacol. Exp. Ther.} 318: 78–89.
20. Levine, S., and A. Saltzman. 1991. Inhibition of experimental allergic encephalomyelitis by lithium chloride: specific effect or nonspecific stress? \textit{Immunopathol.} 11: 111–116.
21. De Sarno, P., X. Li, and R. S. Jope. 2002. Regulation of Akt and glycogen synthase kinase-3β phosphorylation by sodium valproate and lithium. \textit{Neuropharmacology} 43: 1001–1008.
22. Axtell, R. C., M. S. Webb, S. R. Barnum, and C. Raman. 2004. Cutting edge: critical role for CD5 in experimental autoimmune encephalomyelitis: inhibition of engagement reverses disease in mice. \textit{J. Immunol.} 173: 2928–2932.
23. Axtell, R. C., L. Xu, S. R. Barnum, and C. Raman. 2006. CD5-CK2 binding/activation-deficient mice are resistant to experimental autoimmune encephalomyelitis: protection is associated with diminished populations of IL-17-expressing T cells in the central nervous system. \textit{J. Immunol.} 177: 8842–8849.
24. Liu, C., S. Yu, J. Kappes, J. Wang, W. E. Grizzle, K. R. Zinn, and H. G. Zhang. 2007. Expansion of spleen myeloid suppressor cells represses NK cell cytotoxicity in tumor-bearing host. \textit{Blood} 109: 4336–4343.
25. Reddy, J., Z. Illes, X. Zhang, J. Encinas, J. Pyrdol, L. Nicholson, R. A. Sobel, K. W. Wucherpfennig, and V. K. Kuchroo. 2004. Myelin proteolipid protein-specific CD4\textsuperscript{+}CD25\textsuperscript{+} regulatory cells mediate genetic resistance to experimental autoimmune encephalomyelitis. \textit{Proc. Natl. Acad. Sci. USA} 101: 15434–15439.
26. Bettelli, E., M. P. Das, E. D. Howard, H. L. Weiner, R. A. Sobel, and V. K. Kuchroo. 1998. IL-10 is critical in the regulation of autoimmune encephalomyelitis: evidence from studies of IL-10- and IL-4-deficient and transgenic mice. *J. Immunol.* 161: 3298–3308.
27. Kennedy, M. K., D. S. Torrance, K. S. Picha, and K. M. Mohler. 1992.
Analysis of cytokine mRNA expression in the central nervous system of mice with experimental autoimmune encephalomyelitis reveals that IL-10 mRNA expression correlates with recovery. *J. Immunol.* 149: 2496–2505.
28. Segal, B. M., B. K. Dwyer, and E. M. Shevach. 1998. An interleukin (IL)-10/IL-12 immunoregulatory circuit controls susceptibility to autoimmune disease. *J. Exp. Med.* 187: 537–548.
29. Xu, W., C. Fasano, H. Hara, and T. Tabira. 2005. Mechanism of natural killer (NK) cell regulatory role in experimental autoimmune encephalomyelitis. *J. Neuroimmunol.* 163: 77–84.
30. Vollmer, T., R. Liu, M. Price, S. Rhodes, A. La Cava, and F. D. Shi. 2005. Differential effects of IL-21 during initiation and progression of autoimmunity against neuroantigen. *J. Immunol.* 174: 2696–2701.
31. Reddy, J., H. Waldner, X. Zhang, Z. Illes, K. W. Wucherpfennig, R. A. Sobel, and V. K. Kuchroo. 2005. Cutting edge: CD4⁺CD25⁺ regulatory T cells contribute to gender differences in susceptibility to experimental autoimmune encephalomyelitis. *J. Immunol.* 175: 5591–5595.
32. Jahng, A. W., I. Maricic, B. Pedersen, N. Burdin, O. Naidenko, M. Kronenberg, Y. Koezuka, and V. Kumar. 2001. Activation of natural killer T cells potentiates or prevents experimental autoimmune encephalomyelitis. *J. Exp. Med.* 194: 1789–1799.
33. Kumar, V., K. Stellrecht, and E. Sercarz. 1996. Inactivation of T cell receptor peptide-specific CD4 regulatory T cells induces chronic experimental autoimmune encephalomyelitis (EAE). *J. Exp. Med.* 184: 1609–1617.
34. Tuohy, V. K., Z. Lu, R. A. Sobel, R. A. Laursen, and M. B. Lees. 1989. Identification of an encephalitogenic determinant of myelin proteolipid protein for SJL mice. *J. Immunol.* 142: 1523–1527.
35. Ponomarev, E. D., L. P. Shriver, and B. N. Dittel. 2006. CD40 expression by microglial cells is required for their completion of a two-step activation process during central nervous system autoimmune inflammation. *J. Immunol.* 176: 1402–1411.
36. Crocker, S. J., J. K. Whitmire, R. F. Frausto, P. Chertboonmuang, P. D. Soloway, J. L. Whitton, and I. L. Campbell. 2006. Persistent macrophage/microglial activation and myelin disruption after experimental autoimmune encephalomyelitis in tissue inhibitor of metalloproteinase-1-deficient mice. *Am. J. Pathol.* 169: 2104–2116.
37. Butovsky, O., G. Landa, G. Kunis, Y. Ziv, H. Avidan, N. Greenberg, A. Schwartz, I. Smirnov, A. Pollack, S. Jung, and M. Schwartz. 2006. Induction and blockage of oligodendrogenesis by differently activated microglia in an animal model of multiple sclerosis. *J. Clin. Invest.* 116: 905–915.
38. Heppner, F. L., M. Greter, D. Marino, J. Falsig, G. Raivich, N. Hövelmeyer, A. Waisman, T. Rülicke, M. Prinz, J. Priller, et al. 2005. Experimental autoimmune encephalomyelitis repressed by microglial paralysis. *Nat. Med.* 11: 146–152.
39. Benveniste, E. N. 1997. Role of macrophages/microglia in multiple sclerosis and experimental allergic encephalomyelitis. *J. Mol. Med.* 75: 165–173.
40. Rodionova, E., M. Conzelmann, E. Maraskovsky, M. Hess, M. Kirsch, T. Giese, A. D. Ho, M. Zöller, P. Dreger, and T. Luft. 2007. GSK-3 mediates differentiation and activation of proinflammatory dendritic cells. *Blood* 109: 1584–1592.
41. Weaver, C. T., R. D. Hatton, P. R. Mangan, and L. E. Harrington. 2007. IL17 family cytokines and the expanding diversity of effector T cell lineages. *Annu. Rev. Immunol.* 25: 821–852.
42. Langrish, C. L., Y. Chen, W. M. Blumenschein, J. Mattson, B. Basham, J. D. Sedgwick, T. McClanahan, R. A. Kastelein, and D. J. Cua. 2005.
IL-23 drives a pathogenic T cell population that induces autoimmune inflammation. *J. Exp. Med.* 201: 233–240.
43. Harrington, L. E., R. D. Hatton, P. R. Mangan, H. Turner, T. L. Murphy, K. M. Murphy, and C. T. Weaver. 2005. Interleukin 17-producing CD4⁺ effector T cells develop via a lineage distinct from the T helper type 1 and 2 lineages. *Nat. Immunol.* 6: 1123–1132.
44. Zhou, L., I. I. Ivanov, R. Spolski, R. Min, K. Shenderov, T. Egawa, D. E. Levy, W. J. Leonard, and D. R. Littman. 2007. IL-6 programs TH17 cell differentiation by promoting sequential engagement of the IL-21 and IL-23 pathways. *Nat. Immunol.* 8: 967–974.
45. Nurieva, R., X. O. Yang, G. Martinez, Y. Zhang, A. D. Panopoulos, L. Ma, K. S. Schluns, Q. Tian, S. S. Watowich, A. M. Jetten, and C. Dong. 2007. Essential autocrine regulation by IL-21 in the generation of inflammatory T cells. *Nature* 448: 480–483.
46. Korn, T., E. Bettelli, W. Gao, A. Awasthi, A. Jager, T. B. Strom, M. Oukka, and V. K. Kuchroo. 2007. IL-21 initiates an alternative pathway to induce proinflammatory T(H)17 cells. *Nature* 448: 484–487.
47. Suryani, S., and I. Sutton. 2007. An interferon-γ-producing Th1 subset is the major source of IL-17 in experimental autoimmune encephalitis. *J. Neuroimmunol.* 187: 10–16.
48. Bettelli, E., M. Oukka, and V. K. Kuchroo. 2007. T(H)-17 cells in the circle of immunity and autoimmunity. *Nat. Immunol.* 8: 345–350.
49. Steinbrecher, K. A., W. Wilson III, P. C. Cogswell, and A. S. Baldwin. 2005. Glycogen synthase kinase 3β functions to specify gene-specific, NF-κB-dependent transcription. *Mol. Cell. Biol.* 25: 8444–8455.
THE INFLUENCE OF A D. C. ELECTRIC FIELD ON CHEMISORPTION OF OXYGEN ON ZINC OXIDE

by John Raymond Lane

A Thesis Submitted to the Faculty of the DEPARTMENT OF ELECTRICAL ENGINEERING In Partial Fulfillment of the Requirements For the Degree of MASTER OF SCIENCE In the Graduate College THE UNIVERSITY OF ARIZONA 1968

STATEMENT BY AUTHOR

This thesis has been submitted in partial fulfillment of requirements for an advanced degree at the University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library. Brief quotations from this thesis are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author. SIGNED: John R. Lane

APPROVAL BY THESIS DIRECTOR

This thesis has been approved on the date shown below: S. A. HOENIG, Professor of Electrical Engineering, May 14, 1968

ACKNOWLEDGMENT

The writer is indebted to Dr. Stuart A. Hoenig for the initial inspiration of this research topic and for his encouragement during the months of the investigation. His practical suggestions were invaluable in developing the content of this thesis. The writer is also grateful to Dr. Vern R. Johnson for his helpful criticism and evaluation of the manuscript. Acknowledgment is made to Mr. Ira Clough for his assistance in the fabrication of the vacuum system. And finally, the writer greatly appreciates the financial assistance provided by the National Aeronautics and Space Administration which made this research possible.

# TABLE OF CONTENTS

- LIST OF ILLUSTRATIONS
- ABSTRACT
- 1. INTRODUCTION
- 2. CHEMISORPTION ON ZINC OXIDE
  - Zinc Oxide
  - Chemisorption
  - Mechanism of Chemisorption
  - Zinc Oxide Chemisorptive Properties
  - Rate of Chemisorption
  - Boundary Layer Theory
- 3. FIELD EFFECT
  - Field Effect Theory
  - Field Effect Calculations
- 4. DESIGN OF EXPERIMENTAL SYSTEM
  - Vacuum System
  - Field Voltage Apparatus
  - Preparation of Zinc Oxide Plates
  - Zinc Oxide Plate Assembly
- 5. EXPERIMENT
  - Objectives of Experiment
  - Pre-Run Preparations
  - Procedures for Taking Data
  - Chemisorption with Oxygen as the Test Gas
  - Chemisorption of Nitrogen
- 6. RESULTS
- 7. DISCUSSION AND CONCLUSIONS
  - Comparison of Theory and Experiment
  - Future Efforts
- Nomenclature
- References

LIST OF ILLUSTRATIONS

| Figure | Description |
|--------|-------------|
| 1 | Energy plot of ZnO band structure |
| 2 | Influences of field on ZnO thin film band structure |
| 3 | Vacuum system for ZnO studies |
| 4 | ZnO plate assembly |
| 5 | Electric schematic |
| 6 | Experimental curve: Adsorption-desorption vs. applied potential |
| 7 | Pressure vs. time |

ABSTRACT

An experimental system and procedures are developed to determine the effect of electrostatic fields on the chemisorption of oxygen on zinc oxide thin films.
A theoretical model is constructed, and the theoretical performance curves are compared with experimental data. The experimental results are qualitatively similar to the theoretical predictions. The adsorption-desorption process of oxygen on zinc oxide can be controlled by the application of transverse electrostatic fields. Results show $1.67 \times 10^{-12}$ grams of oxygen adsorbed per square centimeter of oxide surface per volt per centimeter.

CHAPTER 1

INTRODUCTION

All reactions between gases and solids begin with the capture of gas at the solid. It is customary to distinguish between absorption, which implies a solution or concentration of the gas in the solid and might be typified by the action of a sponge, and adsorption, in which the gas is bound at the surface of the solid. In fact it is difficult to separate the phenomena, and the general term sorption is sometimes used for both processes. The word adsorption will be used here to describe any reaction in which a gas is bound to a solid in some more or less permanent fashion. This bonding is the first step in any rusting, oxidation, catalysis, or gas drying process. The commercial aspects of these problems have generated a great deal of effort aimed at understanding adsorption processes. Unfortunately there is no general theory which explains adsorption. However, some processes have begun to be understood, especially in the case of adsorption on semiconductors.

A major difficulty with adsorbents is their non-specific nature. For example, an adsorbent for water vapor will take up other condensable gases, and this limits the use of adsorption for gas separation. Adsorbents are often used for the removal of carbon dioxide or carbon monoxide from confined environments, e.g., submarines. Other applications include separation of sulfur dioxide from smelter stack gases or oxygen from hydrocarbons. Again the adsorbent must be discarded after use or recycled by heating. In view of the foregoing discussion, it is clear that a need exists for an adsorbent which can be recycled in situ without heating. If the adsorbent were specific for a certain gas or class of gases, this would be an even greater advantage in terms of industrial and scientific applications. The term specific is defined, in this case, as being chemically reactive with only certain particular gases.

Adsorption on semiconductors has been studied for a number of years, and in 1961 a book by a Russian author, Volkenshtein, appeared. Volkenshtein suggested that a semiconductor could serve as an adsorbent which would be specific to a particular type of gas and could be cycled by application of electrical fields. The electrostatic field method of controlling chemisorption is attractive as it offers low power requirements and unsophisticated circuitry and requires no direct contact between the adsorbent surface and the field circuitry.

CHAPTER 2

CHEMISORPTION ON ZINC OXIDE

Zinc Oxide

Zinc oxide is an n-type semiconductor belonging to the II-VI family of chemical compounds. Zinc oxide exhibits the properties characteristic of the compounds in this family (Heiland, Mollwo, and Stockmann, 1959, p. 195). At room temperature zinc oxide crystals have the hexagonal wurtzite lattice structure. In this lattice arrangement the zinc ions and oxygen ions possess the same relative arrangement, with the oxygen ions placed in hexagonal close packing. Half of the tetrahedral interstitial positions are occupied by the zinc ions.
In addition to the allowed energy levels for electrons in the conduction and valence bands, zinc oxide crystals also contain discrete energy levels within the forbidden region (Morrison, 1955, p. 261). These forbidden band energy levels are localized at impurity atoms and crystal imperfections. An energy plot of the zinc oxide band structure is shown in Fig. 1. (Figures appear at the end of the text.) It has been established that excess interstitial zinc atoms are primarily responsible for the forbidden region levels (Morrison, 1955, p. 261). These levels are located approximately 0.04 eV below the conduction band edge; as a result, the ionization energy of the interstitial zinc atoms is quite low. At room temperature most of these atoms are ionized, and the n-type conductivity observed in zinc oxide results from electrons that are thermally excited into the conduction band from these ionized interstitial zinc atoms.

Investigations have been performed on different forms of zinc oxide by various workers (Heiland et al., 1959; Barry and Stone, 1960; McDaniel, Mitchell, and Watson, 1967). These forms include single crystals, powder, sintered pellets, and evaporated thin layers. Zinc oxide powders and evaporated layers are commonly used in surface property investigations as they each possess a large effective surface area. In this experiment, evaporated thin layers were used because of their ease of fabrication and handling and their adaptability to electric field experimental configurations. The thin evaporated layers of zinc oxide fabricated for this experiment were translucent. The observable colors of thin zinc oxide layers result primarily from optical interference (Heiland et al., 1959, p. 196). The surface of these layers is rough and nonreflective and has been found to possess statistically ordered crystallites. These surface crystallites are responsible for the large specific area typical of evaporated layers. The size of the crystallites is dependent on the thickness of the layer. A layer having a thickness of 1000 angstroms will have crystallites of the order of 300 angstroms (Heiland et al., 1959, p. 196).

Chemisorption

Mechanism of Chemisorption

Chemisorption in a gas-semiconductor system is generally initiated when a gas molecule strikes the semiconductor surface and is physically adsorbed. While physically adsorbed, the molecule migrates over the semiconductor surface until it contacts a preferred site which has sufficient activation energy to form a chemical bond with the gas molecule. The molecule is then said to be chemisorbed (Trapnell and Hayward, 1964, p. 2). Quite often, however, a molecule is desorbed before a preferred site is found. Hence, the overall process can be understood as a chemical adsorption reaction occurring in two steps: physical adsorption followed by chemisorption. The chief distinction between these two stages lies primarily in the strength and type of bond which holds the gas molecule to the semiconductor surface. The physical adsorption bond is essentially a Van der Waals force which is of sufficient strength to weakly bind the gas molecule to the surface and yet allow the molecule to move about the surface. The chemisorption bond involves an actual electron exchange between the preferred site and the gas molecule, and requires a much higher heat of activation. The resulting chemisorption bond will be much stronger than the physical adsorption bond.
The chemisorption bond requires the direct contact of the adsorbate with the substrate (Trapnell and Hayward, 1964, p. 4); therefore chemisorption can account for at most a monolayer of coverage. Physical adsorption does not depend on direct contact of the gas molecules with the substrate and therefore is not limited to a monolayer of coverage.

Zinc Oxide Chemisorptive Properties

Zinc oxide may chemisorb oxygen in the form of $O_2^-$, $O^-$, or $O^{2-}$. This gas-semiconductor reaction is known as depletion chemisorption. Excess interstitial zinc atoms serve as the donors of electrons for this process. These donor electrons come from the conduction band and are supplied by the ionized interstitial zinc atoms (see Chapter 2, Zinc Oxide). Therefore it is felt that adsorption will be non-localized on the surface. Barry and Stone (1960, p. 128) present a somewhat different picture of ZnO chemisorption. They suggest that adsorption of oxygen molecules is localized to zinc atoms in the crystal face. For dissociative chemisorption they mention that zinc atoms in the next lower layer may also act as adsorption sites, thus leading to multilayer adsorption. Most investigators incline to the non-localized view discussed above. However, the work of Barry and Stone is of interest and their ideas warrant further investigation.

Zinc oxide shows a strong correlation between adsorption and its electrical and optical properties (Morrison, 1955, p. 260). For example, zinc oxide strongly absorbs ultraviolet light at energies corresponding to its band gap of approximately 3.2 eV. The radiation produces electron-hole pairs within the zinc oxide crystal. Chemisorption of oxygen has a direct influence on the surface conductivity of zinc oxide. Depletion of surface electrons occurs when oxygen molecules combine with electrons in the zinc oxide conduction band to form oxygen ions. The resulting negatively charged surface further reduces conductivity by repelling free electrons from this high-mobility region (Hoenig and Lane, 1968). If the zinc oxide is illuminated to produce electrons and holes, the holes will migrate toward the negatively charged semiconductor surface and discharge the chemisorbed oxygen ions. The neutralized oxygen molecules then desorb into the gas phase.

**Rate of Chemisorption**

If the mean free path of the gas is less than the field plate spacing (see Fig. 4), the rate of adsorption can be expressed in terms of the number of gas molecules, in the gas phase, which collide per second with a unit area of surface at an ambient pressure $p$. This adsorption rate, defined as molecules adsorbed per $\text{cm}^2$ per second, can be written as (Trapnell and Hayward, 1964, p. 87): $$u = sp/\sqrt{2\pi mkT}. \quad (1)$$ The sticking probability, $s$, is defined as the fraction of the gas molecule-surface collisions which result in chemisorption. For a single activated adsorption reaction, $s$ can be expressed as $$s = \sigma f(\theta)e^{-E/RT}. \quad (2)$$ The term $f(\theta)$ is the probability that a molecule out of the gas phase will collide with a vacant chemisorption site and is a function of the degree of surface coverage. In this equation the temperature is denoted by $T$, while $E$ is the activation energy required for chemisorption. Therefore $u$ can be expressed as \[ u = \sigma p f(\theta) e^{-E/RT} / \sqrt{2\pi mkT}. \quad (3) \] For a system operating in a partial vacuum, the number of physically adsorbed molecules will be neglected as a first approximation.
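To put magnitudes on Eqs. (1)–(3), the collision term can be evaluated directly in CGS units. A minimal Python sketch; the sticking probability s = 1 is an illustrative assumption (the Arrhenius factor of Eq. (2) would reduce it), not a value from the thesis:

```python
import math

K_BOLTZMANN = 1.380e-16   # erg/K (CGS units)
TORR_TO_CGS = 1333.2      # dyn/cm^2 per torr
AVOGADRO = 6.022e23

def adsorption_rate(p_torr, molar_mass_g, T=300.0, s=1.0):
    """Eq. (1): u = s*p / sqrt(2*pi*m*k*T), molecules per cm^2 per second."""
    m = molar_mass_g / AVOGADRO        # mass of one molecule, g
    p = p_torr * TORR_TO_CGS           # pressure, dyn/cm^2
    return s * p / math.sqrt(2.0 * math.pi * m * K_BOLTZMANN * T)

# O2 at the 5-micron base pressure (5e-3 torr) and 300 K:
print(f"u = {adsorption_rate(5e-3, 32.0):.2e} molecules/cm^2/s")  # ~1.8e18
```

Even at these low pressures the wall flux is enormous, which is why the coverage-limiting effects (the f(θ) term and the potential barrier discussed below) dominate the observable kinetics.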
It can be assumed that desorption will take place primarily from occupied sites. The velocity of desorption, \( u' \), is then \[ u' = Kf'(\theta) e^{-E'/RT}. \quad (4) \] The terms \( E' \) and \( K \) are the activation energy of desorption and the velocity constant, respectively. \( f'(\theta) \) equals the proportion of sites available for desorption at coverage \( \theta \). The number of gas molecules which are chemisorbed per unit area, per unit time, is equal to the difference between the chemisorption and desorption rates. Therefore, by combining expressions (3) and (4), the net rate of chemisorption can be expressed as \[ N_{fr} = u - u' = \sigma p f(\theta) e^{-E/RT} / \sqrt{2\pi mkT} - Kf'(\theta) e^{-E'/RT}. \quad (5) \] The chemisorption rate is therefore a function of pressure, temperature, degree of surface coverage, and the mass of the gas molecules. If the mean free path of the gas is greater than the plate spacing, a gaseous boundary layer will begin to form which will limit the speed of reaction. An approximation for the mean free path length is \[ l \sim 5/p, \quad (6) \] where \( l \) is the path length in centimeters and \( p \) is the vacuum system pressure in microns (Kennard, 1938, Chapter 8). As is discussed in Chapter 4, the plate spacing of the zinc oxide plate assembly is 0.15 cm. Therefore, for test pressures below about 30 microns, the chemisorption rate will be retarded by this gaseous boundary layer. The chemisorption of \( O_2 \) on ZnO proceeds until a potential barrier builds up which limits further chemisorption. A widely accepted theory which is applicable to this "limiting action" is known as the boundary layer theory.

**Boundary Layer Theory**

Oxygen tends to be adsorbed on zinc oxide as negative ions (Trapnell and Hayward, 1964, p. 261). The partially filled conduction band of the n-type oxide supplies the electrons for negative-ion formation. As these conduction band electrons are rapidly depleted, additional electrons must be furnished, by levels deep in the crystal, for chemisorption to continue. This removal of electrons causes a charge transfer between the semiconductor bulk and surface. The chemisorption-limiting potential barrier increases the activation energy for adsorption while decreasing the heat of adsorption. For illustration, at zero coverage the activation energy of adsorption is equal to $\eta$ (Barry and Stone, 1960, p. 138). As the potential, $V$, builds up, the activation energy becomes $(\eta + V)$. Furthermore, the heat of adsorption, $(\alpha - \phi)e$, becomes $(\alpha - \phi - V)e$ as the potential barrier develops. The term $\phi$ is the work function of the oxide and can be expressed as the work necessary to remove an electron from the semiconductor bulk to a point just outside its surface. $\alpha$ is defined as the electron affinity of the oxygen molecule. In other words, $\alpha$ is a measure of the attraction of the oxygen molecule for an electron from the conduction band of the zinc oxide crystal. The height of the potential barrier is a function of the number of electrons removed from the bulk. It will increase in magnitude as chemisorption continues until the potential energy of the electrons on either side of the depletion region (interface) is equal. At this point, further increases in chemisorption do not occur. This type of chemisorption is described as "depletive" chemisorption because it results in a reduction of the majority charge carriers by the adsorbate.
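Returning to Eq. (6) for a moment: the mean-free-path criterion against the 0.15 cm plate spacing can be tabulated directly. A minimal sketch using the thesis's rough l ≈ 5/p rule:

```python
PLATE_GAP_CM = 0.15  # spacing of the ZnO plate assembly (Chapter 4)

def mean_free_path_cm(p_microns):
    """Eq. (6): l ~ 5/p, with p the system pressure in microns."""
    return 5.0 / p_microns

for p in (5, 10, 20, 30, 40):
    l = mean_free_path_cm(p)
    relation = ">" if l > PLATE_GAP_CM else "<"
    print(f"{p:>2} microns: l = {l:.3f} cm {relation} {PLATE_GAP_CM} cm gap")
# The crossover falls near 5/0.15 ~ 33 microns, consistent with the text's
# statement that the rate is retarded for test pressures below ~30 microns.
```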
The potential barrier inhibits adsorption from proceeding to full monolayer coverage. The boundary layer theory can be used to quantitatively describe the mechanisms of depletive chemisorption (Trapnell and Hayward, 1964; Stone, 1961). If oxygen is chemisorbed on the surface of zinc oxide as $O^-$, the energy of chemisorption for the first atom adsorbed will be \[ (\alpha - \phi)e + K. \quad (7) \] \( K \) is the interaction energy of the oxygen ion with the oxide surface. \( (\alpha - \phi)e \) is described as the heat of adsorption for zero coverage of the semiconductor surface. As adsorption continues, a potential barrier \( V \) builds up. The Fermi Level decreases by the amount \( V \), and the energy of chemisorption becomes \[ (\alpha - \phi - V)e + K. \quad (8) \] When the potential barrier reaches the value \( V_f \), the energy of chemisorption is zero: \[ (\alpha - \phi - V_f)e + K = 0. \quad (9) \] At this stage, further chemisorption is precluded. In order to find the number of adsorbed ions, \( N_f \), in terms of the potential barrier, \( V_f \), it is assumed that every donor site has yielded its electron and that the charge density, \( \rho \), in the boundary region is a constant. Therefore, \[ \rho = en_o, \quad (10) \] and applying Poisson's equation we get \[ d^2V/dx^2 = 4\pi\rho/k. \quad (11) \] Integrating equation (11) between \( x = 0 \) and \( x = l \), where \( l \) is the width of the boundary layer, we get \[ V_f = (2\pi \rho / k)\, l^2 = 2\pi e n_o l^2 / k. \quad (12) \] The total charge in the boundary layer equals the magnitude of the charge in the chemisorbed layer. Therefore \[ V_f = 2\pi e N_f^2 / n_o k \quad (13) \] and \[ N_f = (n_o k V_f / 2\pi e)^{1/2}, \quad (14) \] where \( N_f \) is expressed as molecules chemisorbed per cm\(^2\).

CHAPTER 3

FIELD EFFECT

Field Effect Theory

It has been suggested that an electric field can influence chemisorption on a semiconductor by either attracting or repelling charge carriers near the surface (Volkenshtein, 1963, p. 126). The effect of this field-induced charge transfer is to change the position of the Fermi Level at the surface of the semiconductor. If the field is oriented to attract electrons toward the zinc oxide surface, the Fermi Level will approach the bottom of the conduction band and chemisorption of oxygen will be enhanced. A field which is oriented to repel electrons from the surface will force the Fermi Level downward toward the valence band. This field orientation would be expected to inhibit chemisorption of oxygen and cause desorption from the semiconductor surface. In this connection, Volkenshtein (1963) suggests that the effect of an electric field on semiconductor thin films (evaporated layers) will be to increase the adsorption capacity of the surface facing the positive field plate while decreasing the capacity of the surface facing the negative field plate, but not to the same degree. Figure 2 shows the influence of an electric field whose direction is perpendicular to the plane of the oxide surface on the zinc oxide thin film band structure.

Field Effect Calculations

The maximum shift of the Fermi Level due to the application of a DC electric field is the product of the Debye length and the field existing at the semiconductor surface (Battelle, 1967). The Debye length is a measure of the distance over which departures from electrical neutrality smooth themselves out (Adler, 1964, p. 151). The Debye length for an extrinsic semiconductor is \[ L_D = \left[ \left( \frac{kT}{q} \right) \left( \frac{\varepsilon}{qn_o} \right) \right]^{1/2}. \quad (15) \]
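Before any field is applied, Eq. (14) fixes the zero-field coverage. A minimal sketch in Gaussian units; taking the dielectric constant k as unity is an assumption made here because it reproduces the numerical coefficient the thesis later obtains in Eq. (21) (the true dielectric constant of ZnO is larger):

```python
import math

E_ESU = 4.803e-10            # electron charge, esu (Gaussian units)
VOLTS_PER_STATVOLT = 299.79

def ions_per_cm2(n_o_per_cm3, V_f_volts, k_dielectric=1.0):
    """Eq. (14): N_f = sqrt(n_o * k * V_f / (2*pi*e)), Gaussian units."""
    V_f = V_f_volts / VOLTS_PER_STATVOLT
    return math.sqrt(n_o_per_cm3 * k_dielectric * V_f / (2.0 * math.pi * E_ESU))

# Barry and Stone parameters used later in the thesis:
print(f"N_f = {ions_per_cm2(1e19, 0.055):.2e} ions/cm^2")  # ~7.8e11
```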
The field strength at the semiconductor surface, due to this external field, equals the magnitude of the field in the gap between the positive and negative plates divided by the relative dielectric constant of zinc oxide (Lindmayer and Wrigley, 1965, p. 324): \[ F = \left( \frac{V_g}{d} \right) \left( \frac{1}{\varepsilon_r} \right). \quad (16) \] Therefore \( V_s \), the maximum shift of the Fermi Level, will be equal to the product of equations (15) and (16): \[ V_s = \left( \frac{V_g}{d} \right) \left[ \left( \frac{kT}{q} \right) \left( \frac{\varepsilon}{qn_o \varepsilon_r^2} \right) \right]^{1/2}. \quad (17) \] The field at the semiconductor surface is now the sum of the original field across the semiconductor boundary layer and the external field imposed by the DC voltage source. Likewise, the new potential barrier value will be equal to the sum of equations (13) and (17): \[ V_n = V_f + V_s. \quad (18) \] The change in the number of adsorbed molecules per \( \text{cm}^2 \) is \[ \Delta N_f = \sqrt{n_o k/2\pi e}\, (\sqrt{V_n} - \sqrt{V_f}). \quad (19) \] We have no accurate data on the exact specific surface of the semiconductor (zinc oxide) used in this experiment, but Heiland et al. (1959, p. 195) state that the specific surface of active zinc oxide may be in excess of 80 \( \text{m}^2/\text{g} \). Specific surface is defined as surface area which is chemically reactive with the adsorbate and is much larger than the nominal or linearly measured surface. To obtain a numerical answer for the number of adsorbed molecules, this figure (80 \( \text{m}^2/\text{g} \)) will be used in the calculations. The adsorption-effective surface area is approximately \( 2.0 \times 10^4 \text{ cm}^2 \) (see Chapter 4, Preparation of Zinc Oxide Plates). The total change in the amount of adsorbed molecules in the system is \[ \Delta N_{ft} = 2.0 \times 10^4 \sqrt{n_o k/2\pi e}\, (\sqrt{V_n} - \sqrt{V_f}). \quad (20) \] To arrive at an actual number for \( \Delta N_{ft} \) as a function of \( V_s \), the following values for the zinc oxide parameters were used (Barry and Stone, 1960, p. 139): \( n_o = 10^{19} \) donors per cm\(^3\) and \( V_f = 0.055 \) V. Therefore, \[ \Delta N_{ft} = 6.5 \times 10^{16} (\sqrt{V_s + 0.055} - 0.235) \text{ molecules}. \quad (21) \] The number of adsorbed molecules is therefore a function of the shift in the Fermi Level resulting from the applied electric field.

CHAPTER 4

DESIGN OF EXPERIMENTAL SYSTEM

Vacuum System

The vacuum system consisted of a stainless steel vacuum chamber, a Welch Duo-Seal rotary mechanical pump, a Consolidated Vacuum Corp. 300-watt oil diffusion pump, and two pressure gauges. The vacuum chamber, into which the zinc oxide plate assembly was placed, had interior dimensions of 73 mm diameter by 254 mm length. The fittings and tubing leading into the chamber added substantially to its volume. The over-all volume of the system was approximately 2.05 liters. Figure 3 shows a diagram of the vacuum chamber and associated fittings. The mechanical pump was used to back the diffusion pump and also had a direct connection to the vacuum system through a bellows-sealed valve. For fast pump cycles, in which a high vacuum was not required, the mechanical pump was used alone to pump down the system. An ultra-high-vacuum valve was used to regulate the quantity of test gas within the vacuum system. System pressure was monitored by a discharge vacuum gauge for pressures of 5 microns or less.
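The numerical form of Eq. (21) can be checked against Eq. (20). A minimal sketch, again taking the dielectric constant as unity (the same assumption as above, which is what yields the 6.5 × 10¹⁶ coefficient):

```python
import math

E_ESU = 4.803e-10            # electron charge, esu
VOLTS_PER_STATVOLT = 299.79
AREA_CM2 = 2.0e4             # adsorption-effective area (Chapter 4)
N_O = 1e19                   # donor density, cm^-3 (Barry and Stone)
V_F = 0.055                  # zero-field potential barrier, volts

# Eq. (20) coefficient, converted so that V may be entered in volts:
coeff = AREA_CM2 * math.sqrt(N_O / (2.0 * math.pi * E_ESU * VOLTS_PER_STATVOLT))
print(f"coefficient = {coeff:.2e}")  # ~6.6e16, vs. 6.5e16 quoted in Eq. (21)

def delta_N_ft(V_s):
    """Eq. (21): total change in adsorbed molecules for a Fermi shift V_s (V)."""
    return coeff * (math.sqrt(V_s + V_F) - math.sqrt(V_F))

for V_s in (0.01, 0.1, 0.5):
    print(f"V_s = {V_s:4.2f} V -> dN_ft = {delta_N_ft(V_s):.2e} molecules")
```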
A calibrated Pirani vacuum gauge was used to monitor pressures in excess of 5 microns.

Field Voltage Apparatus

A source of DC field voltage was provided by a Gyra Electronics Corporation reversible-polarity DC power supply. This power supply has an infinitely adjustable output ranging from 0 to 3000 VDC. The field voltage was supplied to the zinc oxide plate assembly by a high voltage twin feed-through connector (Fig. 3).

Preparation of Zinc Oxide Plates

The substrates for the evaporated zinc layers were 4 round stainless steel plates, 6.36 cm in diameter and 1 mm thick. The nominal surface area of these 4 plates (both sides) was $250 \text{ cm}^2$. The plates were sand blasted, cleaned with Alconox and acetone, and finally rinsed in distilled water before deposition of zinc. Zinc was evaporated onto each side of the stainless steel plates at a pressure of $5 \times 10^{-6}$ torr to a thickness of approximately 2000 Å. These zinc-coated plates were oxidized in an atmosphere of pure oxygen at 600 °C for 14 hours. The total volume of zinc oxide in the system is the product of the nominal surface area times the oxide thickness. Taking $2.0 \times 10^{-5}$ cm as the depth of the oxide and $250 \text{ cm}^2$ as the surface area, the volume of zinc oxide was $5.0 \times 10^{-3} \text{ cm}^3$. The density of zinc oxide is approximately 5.0 g/cm$^3$; therefore the mass of zinc oxide on the plates was of the order of $2.5 \times 10^{-2}$ g. Assuming that the specific surface of the zinc oxide is of the order of $80 \text{ m}^2/\text{g}$ (see Chapter 2, Chemisorption), the total adsorption-effective surface area of oxide in the system was approximately $2.0 \times 10^4 \text{ cm}^2$.

Zinc Oxide Plate Assembly

The zinc oxide coated plates were situated between 5 uncoated stainless steel plates. These uncoated plates were of the same dimensions as the zinc oxide plates. The 9 plates were mounted on a glass jig as shown in Fig. 4. The plates were separated by 1.5 mm glass spacers. The total geometrical area of the 4 zinc oxide coated plates was approximately $250 \text{ cm}^2$. Electrical contact was made to the plates by means of stainless steel tabs which had been spot welded to the plates. Nickel wire was used to connect the plates to the high voltage twin feed-through connector. The experiment was set up so that a positive power supply voltage made the uncoated plates positive with respect to the zinc oxide substrate plates. The electric field between the uncoated and zinc oxide coated plates resulting from this polarity was designated as a positive field. This positive field should cause adsorption, as noted by a decrease in system pressure. Likewise, a negative field produced by a voltage of opposite polarity should cause desorption, as noted by an increase in system pressure. Figure 5 shows a schematic of the electric field-producing circuit. The ideal gas law was used to relate pressure changes to the numbers of gas molecules adsorbed or desorbed. The gas law states that $$P V_o = n R T, \quad (22)$$ where $n$ is the number of gram-moles. The heat liberated during chemisorption is too small to appreciably alter the temperature of the gas in the over-all vacuum system, so the reaction is essentially isothermal. Since the volume of the system and the gas constant, $R$, are constant, changes in $n$ are proportional to changes in $P$. Therefore, $$\Delta n = \frac{\Delta P V_o}{RT}. \quad (23)$$
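The plate arithmetic above can be traced mechanically, since it fixes the 2.0 × 10⁴ cm² figure used in Eq. (20). A minimal sketch with the values from this chapter:

```python
NOMINAL_AREA_CM2 = 250.0        # 4 plates, both sides
OXIDE_THICKNESS_CM = 2.0e-5     # ~2000 angstroms
ZNO_DENSITY_G_PER_CM3 = 5.0     # approximate
SPECIFIC_SURFACE_CM2_PER_G = 80.0 * 1.0e4   # 80 m^2/g (Heiland et al.)

volume_cm3 = NOMINAL_AREA_CM2 * OXIDE_THICKNESS_CM       # 5.0e-3 cm^3
mass_g = volume_cm3 * ZNO_DENSITY_G_PER_CM3              # 2.5e-2 g
effective_area_cm2 = mass_g * SPECIFIC_SURFACE_CM2_PER_G # ~2.0e4 cm^2
print(f"{volume_cm3:.1e} cm^3, {mass_g:.1e} g, {effective_area_cm2:.1e} cm^2")
```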
Assuming that pressure changes are caused only by adsorption or desorption, $\Delta n$ equals the number of moles adsorbed or desorbed from the zinc oxide plates. Therefore $\Delta N_{ft}$, the number of oxygen molecules adsorbed or desorbed, is equal to the product of $\Delta n$ and Avogadro's number: $$\Delta N_{ft} = 6.02 \times 10^{23} \Delta n = 6.02 \times 10^{23} \frac{\Delta P V_o}{RT}. \quad (24)$$ These results were used for calculations of $\Delta N_{ft}$ from the experimental data.

CHAPTER 5

EXPERIMENT

Objectives of Experiment

The experimental procedures were designed to determine the amount of adsorption/desorption as a function of field voltage and base pressure. This was done by relating pressure changes due to field-induced adsorption/desorption to molecules of gas being adsorbed or desorbed by the zinc oxide plates. Repeatability of the data was noted by repeating the experiment a minimum of three times for each data point. An assessment of zinc oxide adsorption/desorption life was made by noting any degradation of results from repeated adsorption/desorption cycles. Finally, it was of interest to determine whether the field-induced adsorption/desorption was specific to oxygen as opposed to nitrogen.

Pre-Run Preparations

Prior to taking data on adsorption/desorption, the vacuum system was twice flushed out with oxygen. The flush cycle consisted of first pumping down to $10^{-6}$ torr and then backfilling with oxygen to a pressure of one atmosphere. The purpose of this was to reduce to a very low value the concentration of other residual gases in the system with respect to oxygen. Then, before each experimental run, the vacuum system was pumped down to $10^{-6}$ torr and filled with oxygen to a pressure of 50 microns. The system was allowed to sit at this pressure for 5 minutes to oxygenate the vacuum system to a stabilized reference value. It has been established that zinc oxide has a relatively fast response time in this pressure range (McDaniel et al., 1967, p. 18). This pressure was also low enough to allow quick pump-down cycles to the desired test pressures.

**Procedures for Taking Data**

**Chemisorption with Oxygen as the Test Gas**

The procedure for taking data was to set the system at the base pressure of interest and to allow the system to stabilize for 5 minutes. The pressure was monitored for two minutes on the Pirani gauge, with readings taken every 30 seconds. The field voltage was then applied, and any changes in pressure were noted. The pressure was then monitored for 2 additional minutes, with readings noted first 15 seconds after the field was applied and then every 30 seconds.

**Chemisorption of Nitrogen**

To show that the reaction was specific to oxygen, nitrogen was substituted for oxygen in the experiment. The system was first flushed out with nitrogen and prepared according to the pre-run preparations. Two experimental runs were then taken following the same procedures as described above for oxygen.

CHAPTER 6

RESULTS

As was predicted in Chapter 3, FIELD EFFECT, adsorption and desorption on zinc oxide were controllable by transverse electric fields. Figure 6 shows the amounts of adsorption and desorption for positive and negative field voltages at various base (test) pressures. Base pressure, in this case, was taken as the pressure at which the system was stabilized before the transverse field was applied. For positive fields, it is seen that the amounts of adsorption increase as the field voltage increases.
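Before examining the curves in detail, Eqs. (23)–(24) can be exercised on the largest adsorption response reported in Fig. 6. A minimal SI-units sketch; T = 300 °K is taken from the figure, and ΔP ≈ 0.7 micron at +800 V is read from it:

```python
AVOGADRO = 6.022e23
R_SI = 8.314                 # J/(mol K)
PA_PER_TORR = 133.322
SYSTEM_VOLUME_M3 = 2.05e-3   # 2.05 liters (Chapter 4)

def molecules_from_delta_p(dP_microns, T=300.0):
    """Eqs. (23)-(24): pressure change (microns of Hg) -> molecules."""
    dP_pa = dP_microns * 1.0e-3 * PA_PER_TORR   # 1 micron = 1e-3 torr
    moles = dP_pa * SYSTEM_VOLUME_M3 / (R_SI * T)
    return moles * AVOGADRO

print(f"{molecules_from_delta_p(0.7):.2e} molecules")  # ~4.6e16
```

The result, a few times 10¹⁶ molecules, sits in the same decade as the ΔN_ft values Eq. (21) yields for Fermi-level shifts of a few tenths of a volt.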
However, for negative transverse fields, the desorption curves appear to approach a limit asymptotically as the negative field voltage increases. Actually, for higher base pressures, the curves reverse themselves at certain values of field voltage. The 20-micron base curve in Fig. 6 reverses at -250 volts, while the 40-micron base curve is seen to reverse at -100 volts. This seems to indicate that a field-induced discharge, resulting in a loss of field strength, was occurring at a fairly constant value of the pressure-voltage product, on the order of 5000 micron-volts (20 μ × 250 V and 40 μ × 100 V). A limit to desorption is predicted by the theoretical curve seen in Fig. 6. A negative field voltage of sufficient magnitude should be able to force the potential barrier to a value where complete desorption occurs. However, there was no such limit to the amount of adsorption as the positive field voltage increased. Amounts of adsorption for a given positive field were less than the corresponding amounts of desorption for a negative field of the same magnitude. This was also predicted by the theoretical curves. It was noted that as base pressures increased, desorption increased but adsorption decreased. A reason for this result is seen in the mode of operation. With the electric field off, the system was filled to the base pressure with oxygen. But zinc oxide normally adsorbs oxygen, so that by the time the field was applied, part of the adsorption capacity of the oxide was already taken up. Therefore desorption effects should be greater than adsorption effects. Figure 7 shows the pressure versus time relation occurring with application of an adsorption-enhancing field, for two different values of the applied field. The two runs taken with nitrogen in the system (after flushing the system with nitrogen) showed no adsorption taking place on zinc oxide, as would have been evidenced by a decrease in system pressure. However, the first desorption run showed a very slight increase in pressure occurring on application of the negative field. The slight desorption was undoubtedly due to a small amount of residual chemisorbed oxygen in the semiconductor being released by the field. This speculation was borne out by the results of the second consecutive run in nitrogen, which showed no desorption at all taking place when the field was applied. The results of the experiment were used to compute a "figure of merit" for zinc oxide. This figure of merit was computed in terms of grams of oxygen adsorbed per square centimeter of geometric surface area per unit voltage gradient. Taking the amount of oxygen adsorbed at the base pressure of 5 microns and a field voltage of +800 volts, the figure of merit is \[ L = 1.67 \times 10^{-12} \ \text{grams O}_2 / (\text{cm}^2 \cdot \text{V/cm}). \] No detailed studies were undertaken on the long-term stability of the zinc oxide system. At this time we can only indicate that no degradation was noted in the response of the zinc oxide after 3 months of operation involving hundreds of adsorption and desorption cycles. For the pressures and temperature ranges encountered in this experiment, the zinc oxide reactions have been completely reversible.

CHAPTER 7

DISCUSSION AND CONCLUSIONS

Comparison of Theory and Experiment

The experimental results show, in general, qualitative agreement with the theoretical curves (Fig. 6). It is of interest to note that the field-induced desorption does show the limiting behavior, at low base pressures, predicted by theory in Chapter 3, FIELD EFFECT.
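The quoted figure of merit can be approximately reproduced from the numbers above. A minimal sketch combining the ideal-gas conversion with the geometric area and the field gradient; ΔP ≈ 0.7 micron at +800 V is again read from Fig. 6:

```python
R_SI = 8.314                 # J/(mol K)
PA_PER_TORR = 133.322
SYSTEM_VOLUME_M3 = 2.05e-3   # m^3
GEOMETRIC_AREA_CM2 = 250.0
PLATE_GAP_CM = 0.15
O2_MOLAR_MASS_G = 32.0

def figure_of_merit(dP_microns, volts, T=300.0):
    """Grams of O2 adsorbed per cm^2 of geometric surface per (V/cm)."""
    dP_pa = dP_microns * 1.0e-3 * PA_PER_TORR
    grams = dP_pa * SYSTEM_VOLUME_M3 / (R_SI * T) * O2_MOLAR_MASS_G
    gradient_v_per_cm = volts / PLATE_GAP_CM
    return grams / GEOMETRIC_AREA_CM2 / gradient_v_per_cm

print(f"L = {figure_of_merit(0.7, 800):.2e} g O2 / (cm^2 * V/cm)")  # ~1.8e-12
```

This lands within about 10% of the quoted 1.67 × 10⁻¹² figure; the residual gap is plausibly read-off error on ΔP or a slightly different gas temperature.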
Quantitative differences between the theoretical and experimental results are due to the idealized theoretical model used for the calculations. Parameters of the model, such as donor density, Debye length, dielectric constant, and specific area, were "educated estimates" of the characteristics of the actual zinc oxide used in the experiments. The important result of this experiment is that a practical means of influencing chemisorption on a semiconductor has been shown to be possible.

**Future Efforts**

Many other parameters in the realm of controllable chemisorption remain to be investigated. The actual adsorbent-effective life of zinc oxide has not yet been determined. Zinc oxide preparation techniques and binding procedures can greatly affect the reversible and irreversible adsorbent capacity of the oxide (Barry and Stone, 1960, p. 124). It will be of interest to investigate the effects of oxide preparation on field-controlled chemisorption reactions. The figure of merit shows that $1.67 \times 10^{-12}$ grams of oxygen can be adsorbed per square centimeter of surface per volt per centimeter. An improvement on these results should follow from the use of an adsorbent having a greater specific surface area than zinc oxide thin films, e.g., zinc oxide powder. It is apparent that many investigations remain to be carried out before our knowledge in this area is complete. That further investigations are warranted is clear, both from the promising results to date and from the many scientific and industrial applications that await the development of a controllable chemisorber.

Fig. 1. Energy Plot of ZnO Band Structure

Fig. 2. Influences of Field on ZnO Thin Film Band Structure (solid line: band structure with no field applied; dotted line: effect of transverse field; L = thickness of thin layer)

Fig. 3. Vacuum System for ZnO Studies

Fig. 4. ZnO Plate Assembly (stainless steel plates coated on both sides with ZnO, mounted in a glass jig; spacing between plates ~1.5 mm)

Fig. 5. Electric Schematic (ZnO-coated and uncoated plates connected by coaxial cable through a high-voltage vacuum twin feed-thru connector to a reversible-polarity 0-3000 VDC power supply)

Fig. 6. Experimental Curve: Adsorption-Desorption vs. Applied Potential (data points are the average of three runs at 300 °K; pressure change with respect to base pressures of 5, 10, 20, and 40 microns; theoretical curve and average scatter shown; adsorption ΔP = 0.7 at 800 V; max. desorption ΔP = 0.09 at 800 V)

Fig. 7. Pressure vs. Time
LIST OF SYMBOLS

| Symbol | Description |
|--------|-------------|
| $d$ | Gap width |
| $e$ | Electron charge |
| $E$ | Activation energy of chemisorption |
| $E'$ | Activation energy of desorption |
| $F$ | Field strength at semiconductor surface |
| $k$ | Boltzmann constant |
| $K$ | Interaction energy of $O^-$ ion with surface |
| $\lambda$ | Thickness of boundary layer |
| $L$ | Figure of merit for zinc oxide |
| $L_D$ | Debye length of extrinsic semiconductor |
| $m$ | Mass of gas molecule |
| $n$ | Number of gram-moles |
| $n_o$ | Concentration of donor sites |
| $N_f$ | Number of adsorbed ions per cm$^2$ |
| $N_{fr}$ | Net rate of chemisorption per cm$^2$ |
| $N_{ft}$ | Total number of adsorbed ions |
| $p$ | Pressure |
| $R$ | Universal gas constant |
| $T$ | Temperature in °K |
| $V$ | Potential barrier |
| $V_g$ | Gap voltage |
| $V_f$ | Potential barrier for which energy of chemisorption is zero |
| $V_n$ | Potential barrier with field applied |
| $V_s$ | Shift in potential barrier due to field |
| $V_o$ | Volume of vacuum system |
| $x$ | Distance into boundary layer |
| $\alpha$ | Electron affinity of the atom |
| $\varepsilon$ | Dielectric constant of zinc oxide |
| $\varepsilon_r$ | Relative dielectric constant |
| $\sigma$ | Condensation coefficient |
| $\phi$ | Work function of zinc oxide |
| $\rho$ | Charge density in boundary layer |
| $\eta$ | Activation energy of adsorption |

REFERENCES

Adler, R. B. *Introduction to Semiconductor Physics*. New York: John Wiley and Sons, 1964.

Barry, T. I., and F. S. Stone. "Reactions of Oxygen at Dark and Irradiated Zinc Oxide Surfaces," *Proc. Roy. Soc.*, A255: 124-144, 1960.

Battelle Memorial Institute, Columbus Laboratories. Private correspondence from Joseph H. Oxley to Edward N. Wise, October 17, 1967.

Heiland, G., E. Mollwo, and F. Stockmann. "Electronic Processes in Zinc Oxide," *Solid State Physics*, F. Seitz and D. Turnbull, editors. Vol. 8, 193-298. New York: Academic Press, 1959.

Hoenig, S. A., and J. R. Lane. "Chemisorption of Oxygen on Zinc Oxide. Effect of a DC Electric Field." (To be published in *Surface Science*, 1968.)

Kennard, E. H. *Kinetic Theory of Gases*. New York: McGraw-Hill Book Company, Inc., 1938.

Lindmayer, J., and C. Wrigley. *Fundamentals of Semiconductor Devices*. New Jersey: Van Nostrand Company, 1965.

McDaniel, M. L., R. R. Mitchell, and H. J. Watson. "Conductivity Phenomena in Polycrystalline Zinc Oxide Films," Technical Note R-237, Research Laboratories, Brown Engineering Company, Inc., Huntsville, Alabama, June 1967.

Morrison, S. R. "Surface Barrier Effects in Adsorption, Illustrated by Zinc Oxide," *Advances in Catalysis*, W. G. Frankenburg et al., editors. Vol. 7, 259-300. New York: Academic Press, 1955.

Stone, F. S. "Chemisorption and Catalysis on Metallic Oxides," *Advances in Catalysis*, D. D. Eley et al., editors. Vol. 13, 1-60. New York: Academic Press, 1961.

Trapnell, B. M. W., and D. O. Hayward. *Chemisorption*. London: Butterworths, 1964.

Volkenshtein, F. *The Electronic Theory of Catalysis on Semiconductors*. New York: MacMillan and Company, 1963.
The Board of Directors (the "Board") of Trail of the Lakes Municipal Utility District (the "District") met in regular session, open to the public, at the offices of Radcliffe Bobbitt Adams Polley PLLC, 2929 Allen Parkway, Suite 3450, Houston, Harris County, Texas 77019, a place outside the boundaries of the District, on Monday, May 21, 2018, at 12:00 p.m.; whereupon, the roll was called of the members of the Board, to-wit:

| Name | Position |
|------|----------|
| Jeff Campbell | President |
| Jo A. Smith | Vice President |
| Virginia Elkins | Secretary |
| Crystal Kirby | Assistant Secretary |
| Kim Pendleton | Director |

All members of the Board were present, except Director Pendleton, thus constituting a quorum. Also attending the meeting were: Mr. Jim Caldwell of C&C Water Services LLC ("C&C"); Mr. Russell Wolff of Residential Recycling & Refuse of Texas, Inc. ("RRRT"), garbage and recycling collection service provider for the District; Mr. Ross Madia of Si Environmental, LLC ("SE"), operator for the District; Ms. Keli Schroeder, P.E., of BGE, Inc. ("BGE" or the "Engineer"), engineers for the District; Ms. Amy Symmank of Myrtle Cruz, Inc. (the "Bookkeeper" or "MCI"), bookkeepers for the District; Ms. Kristen Scott of Bob Leared Interests, Inc. ("Tax Assessor/Collector"), tax assessor/collectors for the District; and Ms. Regina D. Adams and Ms. Monica Garza, attorneys, and Ms. Rita R. Rodriguez, paralegal, of Radcliffe Bobbitt Adams Polley PLLC ("Radcliffe Bobbitt" or the "Attorney"), attorneys for the District.

WHEREUPON, the President called the meeting to order and evidence was presented that public notice of the meeting had been given in compliance with the law. The posted notices of the meeting are attached hereto.

PUBLIC COMMENT

There were no comments from the public.

APPROVAL OF MINUTES

The Board then considered approval of the April 30, 2018 regular meeting minutes and the May 8, 2018 special meeting minutes, which were previously distributed to the Board. Upon motion by Director Smith, seconded by Director Elkins, after full discussion and the question being put to the Board, the Board voted unanimously to approve the April 30, 2018 regular meeting minutes and the May 8, 2018 special meeting minutes, as presented.

Ms. Schroeder then reported that remote water well no. 3 is inoperable and repairs are necessary. Ms. Schroeder then introduced Mr. Caldwell, who reviewed with the Board cost estimates for replacement of equipment and preliminary maintenance, a copy of which is attached to the Engineer's Report. Mr. Caldwell explained that the cost to televise and remove any blockages is $18,000 and the cost to remove the pump equipment, bail the oil, and televise is $9,800. Mr. Madia also reported that an insurance claim will be filed, but because the remote water well no. 3 failure is still under investigation and it has not yet been determined what caused the failure, he is unsure what the insurance will cover. Upon motion by Director Smith, seconded by Director Elkins, after full discussion and the question being put to the Board, the Board voted unanimously to approve the proposals in the amounts of $18,000 and $9,800. Upon motion by Director Elkins, seconded by Director Smith, after full discussion and the question being put to the Board, the Board voted unanimously to declare the repair/replacement of remote water well no. 3 an emergency.
Upon motion by Director Smith, seconded by Director Elkins, after full discussion and the question being put to the Board, the Board voted unanimously to authorize replacement of remote water well no. 3, as may be necessary and pursuant to the cost estimate provided by C&C.

Director Kirby entered the meeting at this time. Mr. Caldwell exited the meeting at this time.

**SECURITY REPORT**

Director Campbell reviewed with the Board the Harris County Constable's Office, Precinct No. 4 Report, a copy of which is attached hereto. Director Campbell also reviewed with the Board the TrafficLogix report, a copy of which is attached hereto. The Board discussed the potential for a pedestrian crossing near the District's future walking trails and parks located near the Wastewater Treatment Plant (the "STP"), and the Board requested that information regarding the speeding statistics be included in the District's Summer 2018 Newsletter to encourage District residents to reduce their speed.

**GARBAGE AND RECYCLING REPORT**

Mr. Wolff then reviewed the Garbage and Recycling Reports, including additional information provided in such reports, copies of which are attached hereto. Mr. Wolff reported that there were two (2) complaints during the prior month. Director Kirby noted that one (1) of the recycling collection workers threw a recycling bin and it rolled down the street. Director Kirby then inquired whether the recycling collection trucks have cameras. Mr. Wolff explained that the particular truck used that day does not and that he will address the matter with the RRRT employees. Ms. Adams then reviewed with the Board a complaint Radcliffe Bobbitt received from Mr. Rick Acosta, of 17922 Evergreen Trace Lane. A copy of the email detailing such complaint is attached hereto. Ms. Adams noted that such complaint was not listed on the complaint log and asked the status of the investigation of such complaint. Mr. Wolff explained that Mr. Acosta's complaint was addressed and resolved the same day. Director Elkins then stated that she had observed an excessive amount of garbage on the curb of Crestline Drive at Wells Mark Drive. Mr. Wolff then stated that it appeared the residents had moved out and that RRRT will do its best to ensure all garbage is collected. Director Campbell then noted that there are many residents moving in and suggested more recycling bins be ordered. Upon motion by Director Smith, seconded by Director Kirby, after full discussion and the question being put to the Board, the Board voted unanimously to approve the Garbage Report and authorize RRRT to purchase an additional 98 recycling bins on behalf of the District.

**TAX ASSESSOR/COLLECTOR'S REPORT**

Ms. Scott then presented the Tax Assessor/Collector's Report for the month of April, a copy of which is attached hereto. Ms. Scott noted that the District has collected 97.7% of its 2017 taxes as of April 30, 2018, compared to 97.896% at this same time last year. Ms. Scott then reviewed with the Board the Homestead Payment Plan Report, a copy of which is attached to the Tax Assessor/Collector's Report. Ms. Scott also reported that notices regarding implementation of the twenty percent (20%) penalty for delinquent 2017 taxes will be mailed soon. Upon motion by Director Smith, seconded by Director Elkins, after full discussion and the question being put to the Board, the Board voted unanimously to approve the Tax Assessor/Collector's Report and authorize payment of the checks reflected therein.

**DELINQUENT TAX ATTORNEY'S REPORT**
Ms. Scott then reviewed the Delinquent Tax Attorney's Report and an uncollectable accounts report with the Board, copies of which are attached hereto. Ms. Scott explained that seven (7) uncollectable accounts (the "Uncollectable Accounts") have a total base tax of $232.66 and requested that the Board authorize writing off such accounts. Upon motion by Director Smith, seconded by Director Elkins, after full discussion and the question being put to the Board, the Board voted unanimously to approve the Delinquent Tax Attorney's Report and authorize writing off the Uncollectable Accounts.

BOOKKEEPER'S REPORT

Ms. Symmank next reviewed the District's Bookkeeper's Report and the Quarterly Investment Report, copies of which are attached hereto, including the revenues and expenses of the District, the budget comparison, and the checks being presented for payment. Ms. Symmank then reviewed the Bookkeeper's Report for the STP, a copy of which is attached hereto. Mr. Wolff exited the meeting at this time. Upon motion by Director Smith, seconded by Director Kirby, after full discussion and the question being put to the Board, the Board voted unanimously to approve the Bookkeeper's Reports, including the Quarterly Investment Report, and authorize payment of the checks being presented for payment.

ADOPT ORDER REGARDING ANNUAL REVIEW OF RULES, POLICIES, CODE OF ETHICS, AND LIST OF AUTHORIZED BROKERS FOR THE INVESTMENT OF DISTRICT FUNDS ("INVESTMENT POLICY")

Ms. Adams then explained that the Public Funds Investment Act, as amended, requires the Board to review the District's Investment Policy on an annual basis and presented the Investment Policy for the Board's consideration and adoption. Ms. Adams noted that the list of approved financial institutions/brokers has been updated by MCI. Upon motion by Director Elkins, seconded by Director Smith, after full discussion and the question being put to the Board, the Board voted unanimously to adopt the amended Investment Policy, including the updated broker list, a copy of which is attached hereto.

REVIEW OF ARBITRAGE COMPLIANCE ANALYSES PREPARED BY ARBITRAGE COMPLIANCE SPECIALISTS, INC. ("ACS")

Ms. Adams reviewed with the Board the arbitrage compliance analyses prepared by ACS, copies of which are attached hereto. Ms. Adams reported that ACS has indicated that the District's debt service fund balance is high and that she has informed RBC Capital Markets (the "Financial Advisor"), the District's financial advisor, of same. Ms. Adams added that such elevated balance could be attributed to the anticipation of selling the first (1st) issue of park bonds. Ms. Adams noted that no action was necessary at this time.

AMENDED RATE ORDER STUDY

The Board deferred this matter until the June 25th Board meeting.

OPERATIONS REPORT

Mr. Madia then presented the Operations Report, including the Production Report and Management Report, for the month of April, copies of which are attached hereto. Mr. Madia reported that the District had a water accountability ratio of 98.92% for the prior month and that there are currently 2,851 total connections in the District. Mr. Madia also reported that there were no excursions at the District's STP. Mr. Madia then reviewed the Delinquent Letter Accounts List, a copy of which is attached to the Operations Report.
Mr. Madia reported that during the prior month, SE sent 199 termination letters for delinquent accounts, 40 accounts were tagged, ten (10) accounts had service terminated for nonpayment, and two (2) accounts had water service restored. Mr. Madia also reported that 204 delinquent letters were mailed for non-payment of water service and that 163 accounts are set to have door tags hung on June 1st, for a service disconnection date of June 6th. Mr. Madia then reported that the District received 405 customer-related telephone calls during the prior month.

Mr. Madia then reviewed with the Board the 2017 Consumer Confidence Report (the "CCR"), a copy of which is attached to the Operations Report. Mr. Madia added that the CCR is required to be delivered to the District's customers no later than July 1, 2018, and that such report will be delivered in June.

Mr. Madia then reported that a propeller, wear plate, and seal gland on the return activated sludge ("RAS") pump no. 1 are worn. Mr. Madia explained that the cost to repair RAS pump no. 1 is $26,590 and the cost to replace same is $29,288. Mr. Madia recommended replacement of RAS pump no. 1. The Board then noted that because the cost estimates for repair or replacement exceed $25,000, pursuant to the Regional Sewage Treatment Plant Agreement with Harris County Municipal Utility District No. 290 ("HCMUD No. 290"), the STP Advisory Committee must approve such work. Ms. Schroeder then stated that she will forward the cost estimate to the engineer for HCMUD No. 290 for review.

Mr. Madia reported that repair and maintenance items completed during the prior month included: 1) rehabilitating 259 water line taps at a cost of $107,000; 2) lowering the hydrant on Silver Bend Drive at a cost of $3,272; 3) repairing one (1) tapline at a cost of $3,004; and 4) performing preventative maintenance on all systems at water plant nos. 2 and 4. Mr. Madia then reported that water well no. 3 will be inoperable for approximately eight (8) weeks. The Board discussed the need for further well testing and repairs. Mr. Madia reported that repair and maintenance items at the STP completed during the prior month included excavating and repairing the non-potable water line at a cost of $3,966.

Upon motion by Director Kirby, seconded by Director Smith, after full discussion and the question being put to the Board, the Board voted unanimously to: 1) approve the Operations Report; 2) authorize termination of service to the delinquent accounts, in accordance with the District's Rate Order; 3) approve and authorize distribution of the 2017 CCR; and 4) authorize repair or replacement of RAS pump no. 1, subject to approval by the STP Advisory Committee. Ms. Scott exited the meeting at this time.

Director Smith reported that the Coalition has scheduled the following events: 1) the First (1st) Annual Gala, to be held on October 20th; 2) the Third (3rd) Annual Regatta, to be held on October 13th; and 3) Volunteer Appreciation, to be held on November 14th.

**DETENTION FACILITIES REPORT**

In the absence of Mr. Tom Dillard of Champions Hydro-Lawn, Inc., detention pond maintenance service provider for the District, Ms. Schroeder reported that the Harris County Flood Control District has authorized renewal of the Storm Water Quality Permit for the Clayton's Park ("CP") detention pond.
Ms. Schroeder then reported that, in connection with the reinstallation of the bollards and fence on Woodland Hills Drive at the Rankin Road right-of-way, Mr. Dillard has some concerns as to whether an easement exists that would require a consent to encroachment. Ms. Schroeder then requested Board authorization to perform a title search. Upon motion by Director Smith, seconded by Director Elkins, after full discussion and the question being put to the Board, the Board voted unanimously to authorize a title search, as described above.

**DEVELOPER'S REPORT**

Due to the absence of Mr. Aaron Alford of Woodmere Development Company, developer in the District, Ms. Adams reviewed the Developer's Report. Ms. Adams reported that Mr. Alford has reported the following: 1) 29 homes have been sold year-to-date; 2) proposals from James Coney Island and Events By Kerry for the 2018 National Night Out will be presented at the June 25th Board meeting; and 3) construction of the CP Splashpad will follow the installation of the electric meter by CenterPoint Energy.

**ENGINEER'S REPORT**

Ms. Schroeder then reviewed the Engineer's Report with the Board, a copy of which is attached hereto. Ms. Schroeder reported that, in connection with the waterline rehabilitation project, phase 2 (the "Waterline Project"), being performed by Vaca Underground Utilities ("Vaca"), SE is working on completion of meter installations. Ms. Schroeder also reported that SE has completed all the house meters but is still working on the irrigation meters, which require Harris County permits. Ms. Schroeder further reported that Vaca is currently working on restorations throughout the project area. Ms. Schroeder added that a full inspection will be performed once all parties have completed construction. Ms. Schroeder then reported that BGE will be preparing plans for: 1) the proposed STP improvements; 2) the waterline connection along Will Clayton Parkway; and 3) the CP generator installation. Upon motion by Director Kirby, seconded by Director Smith, after full discussion and the question being put to the Board, the Board voted unanimously to approve the Engineer's Report.

EMERGENCY PREPAREDNESS PLAN

The Board tabled this matter until the June 25th Board meeting.

DEVELOPMENT OF RECREATIONAL AMENITIES

Ms. Schroeder then reported that the Park Bond Application for the First (1st) Issue of Park Bonds is ready for submittal, pending final approval of the final draft by the Attorney and the Financial Advisor, and should be submitted by May 25th. The Board then scheduled its special meeting regarding development of recreational amenities for Tuesday, June 19, 2018, at 3:30 p.m. in Room T-16 at Atascocita High School.

DRIVER FEEDBACK SIGN

Mr. Madia reported that the second (2nd) driver feedback sign, on the southbound side of Woodland Hills Drive at Woodland Path, should be installed by mid-June.

DISTRICT COMMUNICATIONS

Ms. Rodriguez then reported that the District's website is being updated, as necessary. The Directors then requested that their business cards be printed prior to the American Water Works Association 2018 Annual Conference and Exposition. The Board discussed the draft Summer 2018 Newsletter and requested that such publication also include information regarding the speeding report. Upon motion by Director Smith, seconded by Director Elkins, after full discussion and the question being put to the Board, the Board voted unanimously to approve and authorize distribution of the Summer 2018 Newsletter, as revised.

ATTORNEY'S REPORT
Ms. Adams then explained that the District's current insurance policies expire on June 13, 2018, and, therefore, Board action on the insurance renewal proposal is necessary. Ms. Adams then reviewed with the Board the insurance renewal proposal from Arthur J. Gallagher & Co. ("AJG"), the District's current insurance carrier. Ms. Adams then reviewed with the Board correspondence from Ms. Kim Courte of AJG, who explained that Texas Municipal League ("TML") wrote the property and boiler-and-machinery quote for the prior several years, and that TML's insurance renewal increased by approximately $9,000, with major windstorm deductibles and new requirements for flood zone A locations. Ms. Adams went on to explain that AJG compared TML's quote to other markets that quoted the District and that AJG was able to save the District the majority of the increase that TML quoted. Ms. Adams further reported that Ms. Courte noted the District will see an $8,000 savings, better coverage, lower property deductibles, and a slight increase in the boiler-and-machinery deductible. Upon motion by Director Kirby, seconded by Director Smith, after full discussion and the question being put to the Board, the Board voted to accept the insurance proposal from AJG, a copy of which is attached hereto.

Ms. Adams then reported that Radcliffe Bobbitt has been preparing the draft of the District's Parks, Trails and Recreation Facilities Rules and Regulations.

**REVIEW OF CONSULTANT CONTRACTS**

The Board had no comments on the consultant contracts at this time.

**MISCELLANEOUS MATTERS**

Ms. Adams then reminded the Board that a special meeting has been scheduled for Tuesday, June 19, 2018, at 3:30 p.m. in Room T-16 at Atascocita High School, and that the next regular meeting has been scheduled for Monday, June 25, 2018, at 12:00 p.m., at the offices of Radcliffe Bobbitt.

There being no further business to come before the Board, the meeting was adjourned.

PASSED, APPROVED and ADOPTED this 25th day of June, 2018.

Secretary, Board of Directors
Scalable and Probabilistic Leaderless BFT Consensus through Metastability

Team Rocket, Maofan Yin, Kevin Sekniqi, Robbert van Renesse, and Emin Gün Sirer

Cornell University*

Abstract—This paper introduces a family of leaderless Byzantine fault tolerance protocols, built around a metastable mechanism via network subsampling. These protocols provide a strong probabilistic safety guarantee in the presence of Byzantine adversaries, while their concurrent and leaderless nature enables them to achieve high throughput and scalability. Unlike blockchains that rely on proof-of-work, they are quiescent and green. Unlike traditional consensus protocols, where one or more nodes typically process linear bits in the number of total nodes per decision, no node processes more than logarithmic bits. They do not require accurate knowledge of all participants, and they expose new possible tradeoffs and improvements in safety and liveness for building consensus protocols. The paper describes the Snow protocol family, analyzes its guarantees, and describes how it can be used to construct the core of an internet-scale electronic payment system called Avalanche, which is evaluated in a large-scale deployment. Experiments demonstrate that the system can achieve high throughput (3400 tx/s), provide low transaction latency (1.35 sec), and scale well compared to existing systems that deliver similar functionality. For our implementation and setup, the bottleneck of the system is in transaction verification.

I. INTRODUCTION

Achieving agreement among a set of distributed hosts lies at the core of countless applications, ranging from Internet-scale services that serve billions of people [12], [30] to cryptocurrencies worth billions of dollars [1]. To date, there have been two main families of solutions to this problem. Traditional consensus protocols rely on all-to-all communication to ensure that all correct nodes reach the same decisions with absolute certainty. Because they require quadratic communication overhead and accurate knowledge of membership, they have been difficult to scale to large numbers of participants.

On the other hand, Nakamoto consensus protocols [8], [24], [26], [35], [43]–[46], [53]–[55] have become popular with the rise of Bitcoin. These protocols provide a probabilistic safety guarantee: Nakamoto consensus decisions may revert with some probability $\varepsilon$. A protocol parameter allows this probability to be rendered arbitrarily small, enabling high-value financial systems to be constructed on this foundation. This family is a natural fit for open, permissionless settings where any node can join the system at any time. Yet these protocols are costly, wasteful, and limited in performance. By construction, they cannot quiesce: their security relies on constant participation by miners, even when there are no decisions to be made. Bitcoin currently consumes around 63.49 TWh/year [20], about twice as much as all of Denmark [14]. Moreover, these protocols suffer from an inherent scalability bottleneck that is difficult to overcome through simple reparameterization [17].

This paper introduces a new family of consensus protocols called Snow. Inspired by gossip algorithms, this family gains its properties through a deliberately metastable mechanism. Specifically, the system operates by repeatedly sampling the network at random, and steering correct nodes towards a common outcome.
Analysis shows that this metastable mechanism is powerful: it can move a large network to an irreversible state quickly, where the irreversibility implies that a sufficiently large portion of the network has accepted a proposal and a conflicting proposal will not be accepted with anything higher than negligible ($\varepsilon$) probability. Similar to Nakamoto consensus, the Snow protocol family provides a probabilistic safety guarantee, using a tunable security parameter that can render the possibility of a consensus failure arbitrarily small. Unlike Nakamoto consensus, the protocols are green, quiescent, and efficient; they do not rely on proof-of-work [23] and do not consume energy when there are no decisions to be made. The efficiency of the protocols stems partly from removing the leader bottleneck: each node requires $O(1)$ communication overhead per round and $O(\log n)$ rounds in expectation, whereas classical consensus protocols have one or more nodes that require $O(n)$ communication per round (phase). Further, the Snow family tolerates discrepancies in knowledge of membership, as we discuss later. In contrast, classical consensus protocols require full and accurate knowledge of $n$ as their safety foundation.

Snow's subsampled voting mechanism has two additional properties that improve on previous approaches for consensus. Whereas the safety of quorum-based approaches breaks down immediately when the predetermined threshold $f$ is exceeded, Snow's probabilistic safety guarantee degrades smoothly when Byzantine participants exceed $f$. This makes it easier to pick the critical threshold $f$. It also exposes new tradeoffs between safety and liveness: the Snow family is more efficient when the fraction of Byzantine nodes is small, and it can be parameterized to tolerate more than a third of the nodes being Byzantine by trading off liveness.

To demonstrate the potential of this protocol family, we illustrate a practical peer-to-peer payment system, Avalanche. In effect, Avalanche executes multiple Snowball instances with the aid of a Directed Acyclic Graph (DAG). The DAG serves to piggyback multiple instances, reducing the cost from $O(\log n)$ to $O(1)$ per node and streamlining the path where there are no conflicting transactions.

Overall, the main contribution of this paper is to introduce a brand new family of consensus protocols, based on randomized sampling and metastable decisions. The next section provides the model, goals, and necessary assumptions for the new protocols. Section III gives the intuition behind the protocols, followed by their full specification; Section IV summarizes the methodology used by our formal analysis of safety and liveness in Appendix A; Section V describes Avalanche, a Bitcoin-like payment system; Section VI evaluates Avalanche; Section VII presents related work; and finally, Section VIII summarizes our contributions.

II. MODEL AND GOALS

a) Key Guarantees

Safety: Unlike classical consensus protocols, and similar to longest-chain-based consensus protocols such as Nakamoto consensus [43], we adopt an $\varepsilon$-safety guarantee that is probabilistic. In practice, this probabilistic guarantee is as strong as traditional safety guarantees, since appropriately small choices of $\varepsilon$ can render consensus failure negligible, lower than the probability of hardware failure due to random events.
Fig. 1: The relation between $f/n$ and the probability of a system safety failure (decision of two conflicting proposals), given a choice of finality. Classical BFT protocols that tolerate $f$ failures encounter total safety failure when the threshold is exceeded by even one additional node. The Bitcoin curve shows a typical finality choice for Bitcoin, where a block is considered final when it is "buried" in a branch having 6 additional blocks compared to any competing fork. Snowflake belongs to the Snow family and is configured here with $k = 10$, $\beta = 150$; Snowflake-7 and Snowflake-8 use $\alpha = 7$ and $\alpha = 8$, respectively.

Liveness: All our protocols provide a non-zero probability guarantee of termination within a bounded amount of time. This bounded guarantee is similar to various protocols such as Ben-Or [7] and longest-chain protocols. In particular, for Nakamoto consensus, the number of required blocks for a transaction increases exponentially with the number of adversarial nodes, with an asymptote at $f = n/2$ at which the number is infinite. In other words, the time required for finality approaches $\infty$ as $f$ approaches $n/2$ (Figure 3). Furthermore, the required number of rounds is calculable ahead of time, so as to allow the system designer to tune liveness at the expense of safety. Lastly, unlike traditional consensus protocols and similar to Nakamoto, our protocols benefit from lower adversarial presence, as discussed in property P3 below.

Fig. 2: Figure 1 with a log-scaled y-axis.

Fig. 3: The relation between $f/n$ and the convergence speed, given $\varepsilon = 10^{-20}$. The left figure shows the expected number of blocks needed to guarantee $\varepsilon$ in Bitcoin, which, counter to commonly accepted folk wisdom, is not a constant 6 but depends on the adversary size for the same $\varepsilon$. The right figure shows the maximum number of rounds required by Snowflake, where, unlike Bitcoin, the asymptote lies below 0.5 and varies with the choice of parameters.

Formal Guarantees: Let the system be parameterized for an $\varepsilon$ safety failure probability under a maximum expected number $f$ of adversarial nodes. Let $O(\log n) < t_{\max} < \infty$ be the upper bound on the execution time of the protocols. The Snow protocols then provide the following guarantees:

P1. Safety. When decisions are made by any two correct nodes, they decide on conflicting transactions with negligible probability ($\leq \varepsilon$).

P2. Liveness (Upper Bound). Snow protocols terminate with a strictly positive probability within $t_{\max}$ rounds.

P3. Liveness (Lower Bound). If $f \leq O(\sqrt{n})$, then the Snow protocols terminate with high probability ($\geq 1 - \varepsilon$) in $O(\log n)$ rounds.

b) Network

In the standard definition of asynchrony [7], message transmission time is finite, but its distribution is undefined. This implies that the scheduling of message transmission itself could behave arbitrarily, and potentially even maliciously. We use a modified version of this model, which is well-accepted [6], [22], [25], [33], [39] in the analysis of epidemic networks and gossip-based stochastic systems. In particular, we fix the distribution of message delay to an exponential distribution. We note that, just as in the standard asynchronous model, there is a strictly non-zero probability that any correct node may execute its next local round only after an arbitrarily large amount of time has passed. Furthermore, we also note that this scheduling applies only to correct nodes; the adversary may execute arbitrarily, as discussed later.
c) Achieving Liveness

Classical consensus protocols that work under asynchrony do not get stuck in a single phase of voting because the vote initiator always polls votes from all known participants and waits for \( n - f \) responses. In our system, however, nodes operate via subsampling, so it is possible for a single sample to select a majority of adversarial nodes, in which case the node would get stuck waiting for responses. To ensure liveness, a node should be able to wait with some timeout. Therefore, our protocols are synchronous in order to guarantee liveness. Lastly, it is worth noting that Nakamoto consensus is synchronous as well: the required difficulty of its proof-of-work depends on the maximum network delay [44].

d) Adversary

The adversarial nodes execute under their own internal scheduler, which is unbounded in speed, meaning that all adversarial nodes can execute at any infinitesimally small point in time, unlike correct nodes. The adversary can view the state of every honest node at all times and can instantly modify the state of all adversarial nodes. It cannot, however, schedule or modify communication between correct nodes. Finally, we make zero assumptions about the behavior of the adversary, meaning that it can choose any execution strategy of its liking. In short, the adversary is computationally bounded (it cannot forge digital signatures) but otherwise informationally unbounded (it knows all state, including point-to-point communication) and round-adaptive (it can modify its strategy at any time).

e) Sybil Attacks

Consensus protocols provide their guarantees based on assumptions that only a fraction of participants are adversarial. These bounds could be violated if the network is naively left open to arbitrary participants. In particular, a Sybil attack [21], wherein a large number of identities are generated by an adversary, could be used to exceed the adversarial bound. A long line of work, including PBFT [13], treats the Sybil problem separately from consensus, and rightfully so, as Sybil control mechanisms are distinct from the underlying, more complex agreement protocol\(^1\). Nakamoto consensus, for instance, uses proof-of-work [4] to limit Sybils, which requires miners to continuously stake a hardware investment. Other protocols, discussed in Section VII, rely on proof-of-stake or proof-of-authority. The consensus protocols presented in this paper can adopt any Sybil control mechanism, although proof-of-stake is most aligned with their quiescent operation. One can use an already established proof-of-stake based mechanism [27]. The full design of a peer-to-peer payment system incorporating staking, unstaking, and minting mechanisms is beyond the scope of this paper, whose focus is on the core consensus protocol.

\(^1\)This is not to imply that every consensus protocol can be coupled/decoupled with every Sybil control mechanism.

f) Flooding Attacks

Flooding/spam attacks are a problem for any distributed system. Without a protection mechanism, an attacker can generate large numbers of transactions and flood protocol data structures, consuming storage. There is a multitude of techniques to deter such attacks, including network-layer protection, proof-of-authority, local proof-of-work, and economic mechanisms. In Avalanche, we use transaction fees, making such attacks costly even if the attacker is sending money back to addresses under its control.

g) Additional Assumptions

We do not assume that all members of the network are known to all participants; rather, participants may temporarily have some discrepancies in their view of the network. We quantify the bounds on this discrepancy in Appendix A-F.
We assume a safe bootstrapping mechanism, similar to that of Bitcoin, that enables a node to connect with sufficiently many correct nodes to acquire a statistically unbiased view of the network. We do not assume a PKI. Finally, we make standard cryptographic assumptions related to digital signatures and hash functions.

III. PROTOCOL DESIGN

We start with a non-BFT protocol called Slush and progressively build up to Snowflake and Snowball, all based on the same common majority-based metastable voting mechanism. These protocols are single-decree consensus protocols of increasing robustness. We provide full specifications for the protocols in this section, defer the analysis to the next section, and present formal proofs in the appendix.

A. Slush: Introducing Metastability

The core of our approach is a single-decree consensus protocol, inspired by epidemic or gossip protocols. The simplest protocol, Slush, is the foundation of this family, shown in Figure 4. Slush is not tolerant to Byzantine faults, only crash faults (CFT), but it serves as an illustration for the BFT protocols that follow. For ease of exposition, we will describe the operation of Slush using a decision between two conflicting colors, red and blue.

In Slush, a node starts out initially in an uncolored state. Upon receiving a transaction from a client, an uncolored node updates its own color to the one carried in the transaction and initiates a query. To perform a query, a node picks a small, constant-sized (\( k \)) sample of the network uniformly at random, and sends a query message. Upon receiving a query, an uncolored node adopts the color in the query, responds with that color, and initiates its own query, whereas a colored node simply responds with its current color. Once the querying node collects \( k \) responses, it checks if a fraction \( \geq \alpha \) are for the same color, where \( \alpha > \lfloor k/2 \rfloor \) is a protocol parameter. If the \( \alpha \) threshold is met and the sampled color differs from the node's own color, the node flips to that color. It then goes back to the query step, and initiates a subsequent round of query, for a total of \( m \) rounds. Finally, the node decides the color it ended up with at time \( m \).

procedure ONQUERY(v, col')
    if col = ⊥ then col := col'
    RESPOND(v, col)

procedure SLUSHLOOP(u, col₀ ∈ {R, B, ⊥})
    col := col₀  // initialize with a color
    for r ∈ [1 . . . m] do
        // if ⊥, skip until ONQUERY sets the color
        if col = ⊥ then continue
        // draw a random sample from the known nodes
        K := SAMPLE(N \ {u}, k)
        P := [ONQUERY(v, col) for v ∈ K]
        for col' ∈ {R, B} do
            if P.COUNT(col') ≥ α then col := col'
    ACCEPT(col)

Fig. 4: Slush protocol. Timeouts elided for readability.

Slush has a few properties of interest. First, it is almost memoryless: a node retains no state between rounds other than its current color, and in particular maintains no history of interactions with other peers. Second, unlike traditional consensus protocols that query every participant, every round involves sampling just a small, constant-sized slice of the network at random. Third, Slush makes progress under any network configuration (even a fully bivalent state, i.e. a 50/50 split between colors), since random perturbations in sampling will cause one color to gain a slight edge, and repeated samplings afterwards will build upon and amplify that imbalance. Finally, if \(m\) is chosen high enough, Slush ensures that all nodes will be colored identically with high probability (whp).
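To make these dynamics concrete, here is a minimal round-based Python simulation of the Slush loop. It is an illustrative sketch, not the paper's implementation: the paper's model is continuous-time, whereas a synchronous round abstraction is used here purely for demonstration, and the parameter values are arbitrary.

```python
import random
from collections import Counter

def slush(n=1000, k=10, alpha=7, m=100, seed=1):
    """Toy Slush simulation: n correct nodes, sample size k, threshold
    alpha > k//2, m query rounds. Starts from a fully bivalent 50/50 split."""
    rng = random.Random(seed)
    colors = ['R'] * (n // 2) + ['B'] * (n - n // 2)
    rng.shuffle(colors)
    for _ in range(m):
        nxt = colors[:]                          # query the current configuration
        for u in range(n):
            peers = rng.sample(range(n - 1), k)  # k peers, excluding u itself
            sample = [colors[v if v < u else v + 1] for v in peers]
            for col, cnt in Counter(sample).items():
                if cnt >= alpha:                 # supermajority in the sample
                    nxt[u] = col                 # adopt (or keep) that color
        colors = nxt
    return Counter(colors)

print(slush())  # typically collapses to an all-'R' or all-'B' configuration
```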
Each node has a constant, predictable communication overhead per round, and \(m\) grows logarithmically with \(n\).

The Slush protocol does not provide a strong safety guarantee in the presence of Byzantine nodes. In particular, if the correct nodes develop a preference for one color, a Byzantine adversary can attempt to flip nodes to the opposite color so as to keep the network in balance, preventing a decision. We address this in our first BFT protocol, which introduces more state storage at the nodes.

B. Snowflake: BFT

Snowflake augments Slush with a single counter that captures the strength of a node's conviction in its current color. This per-node counter stores how many consecutive samples of the network by that node have all yielded the same color. A node accepts the current color when its counter exceeds \(\beta\), another security parameter. Figure 5 shows the amended protocol, which includes the following modifications:

1) Each node maintains a counter \(cnt\);
2) Upon every color change, the node resets \(cnt\) to 0;
3) Upon every successful query that yields \(\geq \alpha\) responses for the same color as the node, the node increments \(cnt\).

When the protocol is correctly parameterized for a given threshold of Byzantine nodes and a desired \(\varepsilon\)-guarantee, it can ensure both safety (P1) and liveness (P2, P3). As we later show, there exists an irreversible state after which a decision is inevitable; correct nodes that commit past the irreversible state adopt the same color, whp. For additional intuition, which we do not expand on in this paper, there also exists a phase-shift point beyond which the Byzantine nodes lose the ability to keep the network in a bivalent state.

C. Snowball: Adding Confidence

Snowflake's notion of state is ephemeral: the counter gets reset with every color flip. Snowball augments Snowflake with confidence counters that capture the number of queries that have yielded a threshold result for their corresponding color (Figure 6). A node decides if it gets \(\beta\) consecutive chits for a color. However, it only changes preference based on the total accrued confidence. The differences between Snowflake and Snowball are as follows (see the sketch after this list):

1) Upon every successful query, the node increments its confidence counter for that color;
2) A node switches colors when the confidence in its current color becomes lower than the confidence value of the new color.
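The per-node update rules just listed can be summarized in a few lines of Python. This is a schematic sketch of the two state machines rather than the paper's code: class and method names are our own, sampling and transport are abstracted into the `winning_color` argument (the color that obtained at least α of the k responses, or `None` if no color did), and the reset of the counter on an unsuccessful query is our reading of the protocol figures.

```python
class SnowflakeNode:
    """Snowflake: a single counter of consecutive successful queries."""
    def __init__(self, color, beta):
        self.color, self.cnt, self.beta = color, 0, beta
        self.decided = None

    def on_query_result(self, winning_color):
        if winning_color is None:        # no alpha-majority in this sample
            self.cnt = 0                 # (assumed reset, per Fig. 5's behavior)
            return
        if winning_color != self.color:  # a flip resets the conviction counter
            self.color, self.cnt = winning_color, 1
        else:
            self.cnt += 1
        if self.cnt > self.beta:         # accept after beta consecutive wins
            self.decided = self.color

class SnowballNode:
    """Snowball: preference follows the total accrued confidence per color."""
    def __init__(self, color, beta):
        self.pref = self.last = color
        self.cnt, self.beta = 0, beta
        self.confidence = {color: 0}
        self.decided = None

    def on_query_result(self, winning_color):
        if winning_color is None:
            self.cnt = 0
            return
        d = self.confidence
        d[winning_color] = d.get(winning_color, 0) + 1   # accrue confidence
        if d[winning_color] > d.get(self.pref, 0):       # switch only on higher
            self.pref = winning_color                    # total confidence
        if winning_color != self.last:                   # streak bookkeeping
            self.last, self.cnt = winning_color, 1
        else:
            self.cnt += 1
        if self.cnt > self.beta:         # beta consecutive chits decide
            self.decided = self.pref
```

The key design difference is visible in the two `on_query_result` bodies: Snowflake's state is a single ephemeral streak, while Snowball's accrued `confidence` dictionary means an adversary must overcome the entire history of a color's wins to change a node's preference.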
IV. ANALYSIS

Due to space limits, we move some core details to Appendix A, where we show that, under certain independent and distinct assumptions, the Snow family of consensus protocols provides the safety (P1) and liveness (P2, P3) properties. In this section, we summarize our core results and provide some proof sketches.

a) Notation

Let the network consist of a set of $n$ nodes (represented by the set $\mathcal{N}$), of which $c$ are correct nodes (represented by the set $\mathcal{C}$) and $f$ are Byzantine nodes (represented by the set $\mathcal{B}$). Let $u, v \in \mathcal{C}$ refer to any two correct nodes in the network. Let $k, \alpha, \beta \in \mathbb{Z}^+$ be positive integers with $\alpha > \lfloor k/2 \rfloor$. From now on, $k$ will always refer to the network sample size, where $k \leq n$, and $\alpha$ will be the majority threshold required to consider a voting experiment a "success". In general, we will refer to $\mathcal{S}$ as the state (or configuration) of the network at any given time.

b) Modelling Framework

To formally model our protocols, we use continuous-time Markov chains (CTMCs). The state space is enumerable (and finite), and state transitions occur in continuous time. CTMCs naturally model our protocols since state transitions do not occur in epochs, in lockstep for every node (at the end of every time unit), but rather occur at any time and independently of each other. We focus on binary consensus, although the safety results generalize to more than two values. We can think of the network as a set of nodes colored either red or blue, and we will refer to this configuration at time $t$ as $\mathcal{S}_t$. We model our protocols through a continuous-time process with two absorbing states, where either all nodes are red or all nodes are blue. The state space $\mathcal{S}$ of the stochastic process is a condensed version of the full configuration space, where each state $i \in \{0, \ldots, n\}$ represents the total number of blue nodes in the system. The simplification that allows us to analyze this system is to obviate the need to keep track of all execution paths, as well as all possible adversarial strategies, and instead focus entirely on a single state of interest, without regard to how this state is reached. More specifically, the core extractable insight of our analysis is in identifying the irreversibility state of the system: the state at which so many correct nodes have adopted either red or blue that reverting back to the minority color is highly unlikely.

A. Safety

a) Slush

Unless explicitly stated, we assume that $\mathcal{L}(u) = \mathcal{N}$ for all $u \in \mathcal{N}$. We model the dynamics of the system through a continuous-time process in which two states are absorbing, namely the all-red and all-blue states\footnote{Note that, in reality, we do not require that all nodes be the same color in order to ensure that we decide on that color, only $n - \alpha + 1$. This is only a simplification in our description.}. Let $\{X_t\}_{t \geq 0}$ be the random variable that describes the state of the system at time $t$, where $X_0 \in \{0, \ldots, c\}$. We begin by immediately discussing the most important result concerning the safety dynamics of our processes: the reversibility probabilities of the Slush process. All the other formal results in this paper are, informally speaking, intuitive derivations and augmentations of this result.

**Theorem 1.** Let the configuration of the system at time $t$ be $\mathcal{S}_t = n/2 + \delta$, meaning that the network has drifted to $2\delta$ more blue nodes than red nodes ($\delta = 0$ means that red and blue are equal). Let $\xi_\delta$ be the probability of absorption into the all-red (minority) state. Then, for all $0 \leq \delta \leq n/2$, we have

$$\xi_\delta \leq \left( \frac{1/2 - \delta/n}{\alpha/k} \right)^{\alpha} \left( \frac{1/2 + \delta/n}{1 - \alpha/k} \right)^{k-\alpha} \leq e^{-2\left((\alpha/k) - (1/2) + (\delta/n)\right)^2 k} \quad (1)$$

**Proof.** This bound follows from the Hoeffding-derived tail bounds for the hypergeometric distribution due to Chvátal [15]. □

We note that Chvátal's bounds are introduced for simplicity of exposition and are extremely weak. We leave the full closed-form expression to Theorem 2 in the appendix; it is significantly stronger than the Chvátal bound.
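As a quick numerical illustration (ours, not the paper's), the snippet below evaluates both forms of the bound in Eq. (1) for one arbitrary parameterization, showing how sharply the bound on $\xi_\delta$ decays as the drift $\delta$ grows.

```python
import math

def chvatal_bound(n, k, alpha, delta):
    """First bound in Eq. (1) on the minority-absorption probability."""
    p = 0.5 - delta / n            # fraction of minority (red) nodes
    a = alpha / k
    return (p / a) ** alpha * ((1 - p) / (1 - a)) ** (k - alpha)

def hoeffding_bound(n, k, alpha, delta):
    """Weaker exponential form on the right-hand side of Eq. (1)."""
    return math.exp(-2 * (alpha / k - 0.5 + delta / n) ** 2 * k)

n, k, alpha = 1000, 10, 8          # arbitrary illustrative parameters
for delta in (50, 100, 200, 300):
    print(f"delta={delta:3d}  chvatal={chvatal_bound(n, k, alpha, delta):.3e}"
          f"  hoeffding={hoeffding_bound(n, k, alpha, delta):.3e}")
```

Even this loose form drops by several orders of magnitude between $\delta = 50$ and $\delta = 300$, in line with the discussion that follows.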
Nonetheless, even using the loose Chvátal bound, we can make the key observation that as the drift $\delta$ increases, for fixed $\alpha$ and $k$, the probability of moving towards the minority value decreases exponentially fast (in fact, even faster, since the exponent contains a quadratic term). Additionally, the same result holds for increasing $\alpha$ with $k$ fixed. The outcome of this theorem demonstrates a key property: once the network loses full bivalency (i.e. $\delta > 0$), it tends to topple and converge rapidly towards the majority color, unable to revert back to the minority color with any significant probability. This is the fundamental property exploited by our protocols, and it is what makes them secure despite only sampling a small, constant-sized set of the network. The core result that follows for the safety guarantees of Snowflake lies in finding regions (given specific parameter choices) where reversibility occurs with probability no higher than $\varepsilon$, even under adversarial presence.

b) Snowflake

For Snowflake, we relax the assumption that all nodes are correct and assume that some fraction of the nodes is adversarial. In Slush, once the network gains significant majority support for one proposal (e.g., the color blue), it becomes unlikely for a minority proposal (e.g., the color red) to ever become decided in the future (irreversibility). Furthermore, in Slush, nodes simply have to execute the protocol for a deterministic number of rounds, $m$, which is known ahead of protocol execution. When introducing adversarial nodes with arbitrary strategies, however, nodes cannot simply execute the protocol for a deterministic number of rounds, since the adversary may nondeterministically affect the value of $m$. Instead, correct nodes must implement a mechanism to explicitly detect that irreversibility has been reached. To that end, in Snowflake, every correct node implements a decision function, $\mathcal{D}(u, \mathcal{S}_t, \text{blue}) \rightarrow \{0, 1\}$, which is a random variable that outputs 1 if node $u$ detects that the network has reached an irreversibility state at time $t$ for blue. The decision mechanism is probabilistic, meaning that it can fail, although it is designed to fail with negligible probability. We now sketch the proof for Snowflake.

**Proof Sketch.** We define a safety failure to be the event wherein two correct nodes $u$ and $v$ decide on blue and red respectively, i.e. $\mathcal{D}(u, \mathcal{S}_t, \text{blue}) \rightarrow 1$ and $\mathcal{D}(v, \mathcal{S}_{t'}, \text{red}) \rightarrow 1$, for any two times $t$ and $t'$. We again model the system as a continuous-time random process. The state space is defined the same way as in Slush. However, we note some critical subtleties. First, unlike in Slush, where it is clear that once all nodes are the same color a decision has been made, this is no longer the case for Snowflake; in fact, even if all correct nodes accept a color, it is entirely possible for a correct node to switch again. Second, we also have to consider the decision mechanism $\mathcal{D}(\cdot)$. To analyze the system, we obviate the need to keep track of all possible network configurations under all possible adversarial strategies and assume that a node $u$ first decides on blue. Then, conditioned on the state of the network upon $u$ deciding, we calculate the probability that another node $v$ decides on red, which is a function of both the probability that the network reverts to a state where blue is the minority and the probability that $v$ decides at that precise state.
We show that under appropriate choices of $k$, $\alpha$, and $\beta$, we can construct highly secure instances of Snowflake (i.e. instances that suffer safety failure with probability $\leq \varepsilon$) once the network reaches some bias $\delta$, as shown in Figure 7. A concrete example is provided in Figure 1.

Fig. 7: Representation of the irreversibility state, which exists when – even under $f$ Byzantine nodes – the number of blue correct nodes exceeds that of red correct nodes by more than $2\delta$.

c) Snowball

Snowball is an improvement over Snowflake, in which random perturbations in network samples are damped by introducing a limited form of history, which we refer to as confidence. The fundamental takeaway is that this history enables Snowball to provide stronger security against safety failures than Snowflake.

**Proof Sketch.** We structure the model as a game of balls and urns, where each urn represents one of the correct nodes and the ball counts correspond to the confidences in either color. Using this model, the analysis applies martingale concentration inequalities to prove that once the system has reached the irreversibility state, the confidence of the majority-decided color will perpetually grow and drift further away from that of the minority color, effectively rendering reversibility less likely over time. If the drifts ever revert, the reversibility analysis becomes identical to that of Snowflake. Since the adversary must now overcome the confidence drifts as well as the irreversibility dynamics, the security of Snowball is strictly stronger than that of Snowflake.

B. Liveness

We assume that the observed adversarial presence satisfies $0 \leq f' \leq n(k - \alpha - \psi)/k \leq f$, where we refer to $\psi$ as the buffer zone. The bigger $\psi$, the quicker the decision mechanism is able to finalize a value. If $\psi$ approaches zero or becomes negative, then we violate the upper bound of adversarial tolerance for the parameterized system, and the adversary can, with high probability, stall termination by simply choosing not to respond, although the safety guarantees may still hold. Assuming that $\psi$ is strictly positive, termination is strictly finite under all network configurations in which a proposal has at least $\alpha$ support. Furthermore, not only is termination finite with probability one, there is also a strictly positive probability of termination within any bounded amount of time $t_{\max}$, as discussed in Lemma 4, which follows from Theorem 3. This captures liveness property P2.

**Proof Sketch.** Using the construction of the system used to prove irreversibility, we characterize the distribution of the average time spent (the sojourn time) at each state before the system terminates execution by absorption at either absorbing state. The termination time is then the sum of these sojourn times.

For non-conflicting transactions, since the adversary is unable to forge a conflict, the time to a decision is simply the mixing time of the network starting from a configuration where every correct node is uninitialized.

**Proof Sketch.** Mixing times for gossip are well characterized to be $O(\log n)$, and this result holds for all our protocols. Liveness guarantees under a fully bivalent network configuration reduce to an optimal convergence time of $O(\log n)$ rounds if the adversary is at most $O(\sqrt{n})$, for $\alpha = \lfloor k/2 \rfloor + 1$. We leave additional details to Lemma 5.
When the adversary surpasses $O(\sqrt{n})$ nodes, the worst-case number of rounds increases polynomially, and as $f$ approaches $n/2$ the convergence rate becomes exponential.

**Proof Sketch.** We modify Theorem 3 to include the adversary, which reverts any imbalance in the network, keeping the network fully bivalent.

a) Multi-Value Consensus

Our binary consensus protocol could support multi-value consensus by running logarithmically many binary instances, one for each bit of the proposed value. However, such a theoretical reduction might not be efficient in practice. Instead, we could directly incorporate multiple values as multiple colors in the protocol, for which the safety analysis still generalizes. As for liveness, we sketch a leaderless initialization mechanism, which in expectation uses $O(\log n)$ rounds under the assumption that the network is synchronized. Every node operates in three phases: in the first phase, it gossips and collects proposals for $O(\log n)$ rounds, where each round lasts for the maximum message delay; in the second phase, it stops collecting proposals and instead gossips all newly seen values for an additional $O(\log n)$ rounds; in the third phase, it locally samples the proposals it knows of, checking for values that have an $\alpha$ majority, ordered deterministically, for example by hash value. Finally, a node selects the first value in this order as its initial state when it starts the subsequent consensus protocol. In a cryptocurrency setting, the deterministic ordering function would incorporate the fees paid by every new proposal, which means that the adversary is financially limited in its ability to launch a fairness attack against initialization. While the design of initialization mechanisms is interesting, note that it is not necessary for a decentralized payment system, as we show in Section V. Finally, we discuss churn and view discrepancies in the appendix.

V. PEER-TO-PEER PAYMENT SYSTEM

We have implemented a bare-bones payment system, Avalanche, which supports Bitcoin transactions. In this section, we describe the design and sketch how the implementation can support the value-transfer primitive at the center of cryptocurrencies. Deploying a full cryptocurrency involves bootstrapping, minting, staking, unstaking, and inflation control. While we have solutions for these issues, their full discussion is beyond the scope of this paper, whose focus is centered on the novel Snow consensus protocol family.

In a cryptocurrency setting, cryptographic signatures enforce that only a key owner is able to create a transaction that spends a particular coin. Since correct clients follow the protocol as prescribed and never double-spend coins, in Avalanche they are guaranteed both safety and liveness for their virtuous transactions. In contrast, liveness is not guaranteed for rogue transactions, submitted by Byzantine clients, which conflict with one another. Such decisions may stall in the network, but they have no safety impact on virtuous transactions. We show that this is a sensible tradeoff, and that the resulting system is sufficient for building complex payment systems.

A. Avalanche: Adding a DAG

Avalanche consists of multiple single-decree Snowball instances instantiated as a multi-decree protocol that maintains a dynamic, append-only directed acyclic graph (DAG) of all known transactions. The DAG has a single sink, the genesis vertex. Maintaining a DAG provides two significant benefits.
First, it improves efficiency, because a single vote on a DAG vertex implicitly votes for all transactions on the path to the genesis vertex. Second, it improves security, because the DAG intertwines the fate of transactions, similar to the Bitcoin blockchain. This renders past decisions difficult to undo without the approval of correct nodes.

When a client creates a transaction, it names one or more parents, which are included inseparably in the transaction and form the edges of the DAG. The parent-child relationships encoded in the DAG may, but do not need to, correspond to application-specific dependencies; for instance, a child transaction need not spend or have any relationship with the funds received in the parent transaction. We use the term ancestor set to refer to all transactions reachable via parent edges back in history, and progeny to refer to all children transactions and their offspring.

The central challenge in the maintenance of the DAG is to choose among conflicting transactions. The notion of conflict is application-defined and transitive, forming an equivalence relation. In our cryptocurrency application, transactions that spend the same funds (double-spends) conflict, and form a conflict set (shaded regions in Figure 11), out of which only a single one can be accepted. Note that the conflict set of a virtuous transaction is always a singleton.

Avalanche embodies a Snowball instance for each conflict set. Whereas Snowball uses repeated queries and multiple counters to capture the amount of confidence built in conflicting transactions (colors), Avalanche takes advantage of the DAG structure and uses a transaction's progeny. Specifically, when a transaction $T$ is queried, all transactions reachable from $T$ by following the DAG edges are implicitly part of the query. A node will only respond positively to the query if $T$ and its entire ancestry are currently the preferred option in their respective conflict sets. If more than a threshold of responders vote positively, the transaction is said to collect a chit. Nodes then compute their confidence as the total number of chits in the progeny of that transaction. A node queries a transaction just once and relies on new vertices and possible chits, added to the progeny, to build up its confidence. Ties are broken by an initial preference for first-seen transactions. Note that chits are decoupled from the DAG structure, making the protocol immune to attacks where the attacker generates large, padded subgraphs.

B. Avalanche: Specification

Each correct node $u$ keeps track of all transactions it has learned about in a set $\mathcal{T}_u$, partitioned into mutually exclusive conflict sets $\mathcal{P}_T$, $T \in \mathcal{T}_u$. Since conflicts are transitive, if $T_i$ and $T_j$ are conflicting, then they belong to the same conflict set, i.e. $\mathcal{P}_{T_i} = \mathcal{P}_{T_j}$. This relation may seem counter-intuitive: conflicting transactions belong to the same equivalence class precisely because they are equivocations spending the same funds.

We write $T' \leftarrow T$ if $T$ has a parent edge to transaction $T'$. The "$\leftarrow^*$"-relation is its reflexive transitive closure, indicating a path from $T$ to $T'$. DAGs built by different nodes are guaranteed to be compatible, though at any one time, two nodes may not have a complete view of all vertices in the system.
Specifically, if $T' \leftarrow^* T$, then every node in the system that has $T$ will also have $T'$ and the same relation $T' \leftarrow^* T$; and conversely, if $T' \not\leftarrow^* T$, then no node will end up with $T' \leftarrow^* T$.

1: procedure AVALANCHELOOP
2:   while true do
3:     find $T$ that satisfies $T \in \mathcal{T} \land T \notin \mathcal{Q}$
4:     $\mathcal{K} := \text{SAMPLE}(\mathcal{N} \setminus u, k)$
5:     $P := \sum_{v \in \mathcal{K}} \text{QUERY}(v, T)$
6:     if $P \geq \alpha$ then
7:       $c_T := 1$
8:       // update the preference for ancestors
9:       for $T' \in \mathcal{T} : T' \leftarrow^* T$ do
10:        if $d(T') > d(\mathcal{P}_{T'}.\text{pref})$ then
11:          $\mathcal{P}_{T'}.\text{pref} := T'$
12:        if $T' \neq \mathcal{P}_{T'}.\text{last}$ then
13:          $\mathcal{P}_{T'}.\text{last} := T'$, $\mathcal{P}_{T'}.\text{cnt} := 1$
14:        else
15:          $\text{++}\mathcal{P}_{T'}.\text{cnt}$
16:     else
17:       for $T' \in \mathcal{T} : T' \leftarrow^* T$ do
18:         $\mathcal{P}_{T'}.\text{cnt} := 0$
19:     // otherwise, $c_T$ remains 0 forever
20:     $\mathcal{Q} := \mathcal{Q} \cup \{T\}$ // mark $T$ as queried

Fig. 9: Avalanche: the main loop.

1: function ISPREFERRED($T$)
2:   return $T = \mathcal{P}_T.\text{pref}$
3: function ISSTRONGLYPREFERRED($T$)
4:   return $\forall T' \in \mathcal{T}, T' \leftarrow^* T : \text{ISPREFERRED}(T')$
5: function ISACCEPTED($T$)
6:   return $((\forall T' \in \mathcal{T}, T' \leftarrow^* T : \text{ISACCEPTED}(T')) \land |\mathcal{P}_T| = 1 \land d(T) > \beta_1)$ // safe early commitment
     $\lor\ (\mathcal{P}_T.\text{cnt} > \beta_2)$ // consecutive counter
7: procedure ONQUERY($j, T$)
8:   ONRECEIVETX($T$)
9:   RESPOND($j, \text{ISSTRONGLYPREFERRED}(T)$)

Fig. 10: Avalanche: voting and decision primitives.

Each node $u$ can compute a confidence value, $d_u(T)$, from the progeny as follows:

$$d_u(T) = \sum_{T' \in \mathcal{T}_u,\, T' \leftarrow^* T} c_{uT'}$$

where $c_{uT'}$ stands for the chit value of $T'$ at node $u$. Each transaction initially has a chit value of 0 before the node gets the query results. If the node collects a threshold of $\alpha$ yes-votes after the query, the value $c_{uT'}$ is set to 1; otherwise it remains 0 forever. Therefore, a chit value reflects the result of the one-time query of its associated transaction and becomes immutable afterwards, while $d(T)$ can increase as the DAG grows by collecting more chits in its progeny. Because $c_T \in \{0, 1\}$, confidence values are monotonic.

In addition, node $u$ maintains its own local list of known nodes $\mathcal{N}_u \subseteq \mathcal{N}$ that comprise the system. For simplicity, we assume for now $\mathcal{N}_u = \mathcal{N}$, and elide the subscript $u$ in contexts without ambiguity.

Each node implements an event-driven state machine, centered around a query that serves both to solicit votes on each transaction and to notify other nodes of the existence of newly discovered transactions. In particular, when node $u$ discovers a transaction $T$ through a query, it starts a one-time query process by sampling $k$ random peers and sending a message to them, after $T$ is delivered via ONRECEIVETX.

Node $u$ answers a query by checking whether each $T'$ such that $T' \leftarrow^* T$ is currently preferred among its competing transactions $\forall T'' \in \mathcal{P}_{T'}$. If every single ancestor $T'$ fulfills this criterion, the transaction is said to be strongly preferred, and receives a yes-vote (1). A failure of this criterion at any $T'$ yields a no-vote (0). When $u$ accumulates $k$ responses, it checks whether there are $\alpha$ yes-votes for $T$, and if so grants the chit (chit value $c_T := 1$) for $T$.
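As an illustration of the confidence computation just defined, the following sketch (our own, with assumed data structures; the real implementation uses the lazy updates described later in the Optimizations discussion) walks the progeny of a transaction and sums its chits.

```python
from collections import defaultdict, deque

def confidence(t, parents, chits):
    """d(T): total chits over T's progeny, i.e. T itself plus every
    transaction that reaches T via parent edges (T' <-* T)."""
    children = defaultdict(set)            # invert parent edges once
    for tx, ps in parents.items():
        for p in ps:
            children[p].add(tx)
    total, seen, frontier = 0, {t}, deque([t])
    while frontier:                        # BFS over descendant edges
        cur = frontier.popleft()
        total += chits.get(cur, 0)
        for child in children[cur]:
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return total

# Example DAG: T4 names T2 and T3 as parents; T2 and T3 name T1.
parents = {"T2": {"T1"}, "T3": {"T1"}, "T4": {"T2", "T3"}}
chits = {"T1": 1, "T2": 1, "T3": 0, "T4": 1}
assert confidence("T2", parents, chits) == 2   # chits of T2 and T4
```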
The above process will yield a labeling of the DAG with a chit value and associated confidence for each transaction $T$. Figure 11 illustrates a sample DAG built by Avalanche. Similar to Snowball, sampling in Avalanche will create a positive feedback for the preference of a single transaction in its conflict set. For example, because $T_2$ has larger confidence than $T_3$, its descendants are more likely to collect chits in the future compared to $T_3$.

Similar to Bitcoin, Avalanche leaves determining the acceptance point of a transaction to the application. An application supplies an ISACCEPTED predicate that can take into account the value at risk in the transaction and the chances of a decision being reverted to determine when to decide. Committing a transaction can be performed through a safe early commitment. For virtuous transactions, $T$ is accepted when it is the only transaction in its conflict set and has a confidence greater than threshold $\beta_1$. As in Snowball, $T$ can also be accepted after a $\beta_2$ number of consecutive successful queries. If a virtuous transaction fails to get accepted due to a problem with parents, it could be accepted if reissued with different parents. Figure 8 shows how Avalanche performs parent selection and entangles transactions. Because transactions that consume and generate the same UTXO do not conflict with each other, any transaction can be reissued with different parents.

Figure 9 illustrates the protocol main loop executed by each node. In each iteration, the node attempts to select a transaction $T$ that has not yet been queried. If no such transaction exists, the loop will stall until a new transaction is added to $\mathcal{T}$. It then selects $k$ peers and queries those peers. If at least $\alpha$ of those peers return a positive response, the chit value is set to 1. After that, it updates the preferred transaction of each conflict set of the transactions in its ancestry. Next, $T$ is added to the set $\mathcal{Q}$ so it will never be queried again by the node. The code that selects additional peers if some of the $k$ peers are unresponsive is omitted for simplicity.

Figure 10 shows what happens when a node receives a query for transaction $T$ from peer $j$. First it adds $T$ to $\mathcal{T}$, unless it already has it. Then it determines if $T$ is currently strongly preferred. If so, the node returns a positive response to peer $j$. Otherwise, it returns a negative response. Notice that in the pseudocode, we assume that when a node knows $T$, it also recursively knows the entire ancestry of $T$. This can be achieved by postponing the delivery of $T$ until its entire ancestry is recursively fetched. In practice, an additional gossip process that disseminates transactions is used in parallel, but is not shown in the pseudocode for simplicity.
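The acceptance rule of Figure 10 can be transcribed as follows. This is a hedged sketch with assumed state dictionaries (`conflict_set_size`, `d`, `cnt`); `cnt` stands in for the conflict set's consecutive-success counter.

```python
def is_accepted(t, parents, conflict_set_size, d, cnt, beta1, beta2,
                memo=None):
    """Sketch of ISACCEPTED: safe early commitment (accepted ancestry,
    singleton conflict set, confidence above beta1), or beta2 consecutive
    successful queries for t's conflict set."""
    memo = {} if memo is None else memo    # memoize over the (acyclic) DAG
    if t in memo:
        return memo[t]
    ancestry_ok = all(
        is_accepted(p, parents, conflict_set_size, d, cnt, beta1, beta2, memo)
        for p in parents.get(t, ()))
    memo[t] = ((ancestry_ok and conflict_set_size[t] == 1 and d[t] > beta1)
               or cnt[t] > beta2)
    return memo[t]
```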
The full scripting language is used to ensure that a redeem script is authenticated to spend a UTXO. UTXOs are fully consumed by a valid transaction, and may generate new UTXOs spendable by named recipients. Multi-input transactions consume multiple UTXOs, and in Avalanche, may appear in multiple conflict sets. To account for these correctly, we represent \textit{transaction-input} pairs (e.g. In$_{n_1}$) as Avalanche vertices. The conflict relation of transaction-input pairs is transitive because each pair spends exactly one unspent output. Then, we use the conjunction of ISACCEPTED over all inputs of a transaction to ensure that no transaction will be accepted unless all its inputs are accepted (Figure 12). In other words, a transaction is accepted only if all its transaction-input pairs are accepted in their respective Snowball conflict sets. Following this idea, we finally implement the DAG over transaction-input pairs such that multiple transactions can be batched together per query.

**a) Optimizations** We implement some optimizations to help the system scale. First, we use \textit{lazy updates} to the DAG, because the recursive definition of confidence may otherwise require a costly DAG traversal. We maintain the current $d(T)$ value for each active vertex on the DAG, and update it only when a descendant vertex gets a chit. Since the search path can be pruned at accepted vertices, the cost of an update is constant if the rejected vertices have a limited number of descendants and the undecided region of the DAG stays at constant size. Second, the conflict sets could be very large in practice, because a rogue client can generate a large volume of conflicting transactions. Instead of keeping a container data structure for each conflict set, we create a mapping from each UTXO to the preferred transaction that stands as the representative for the entire conflict set. This enables a node to quickly determine future conflicts, and the appropriate response to queries. Finally, we speed up the query process by terminating early as soon as the $\alpha$ threshold is met, without waiting for $k$ responses.

**b) DAG** Compared to Snowball, Avalanche introduces a DAG structure that entangles the fate of unrelated conflict sets, each of which is a single-decree instance. This entanglement embodies a tension: attaching a virtuous transaction to undecided parents helps propel transactions towards a decision, while it puts transactions at risk of suffering liveness failures when parents turn out to be rogue. We can resolve this tension and provide a liveness guarantee with the aid of two mechanisms. First, we adopt an adaptive parent selection strategy, where transactions are attached at the live edge of the DAG, and are retried with new parents closer to the genesis vertex. This procedure is guaranteed to terminate with uncontested, decided parents, ensuring that a transaction cannot suffer a liveness failure due to contested, rogue transactions. A secondary mechanism ensures that virtuous transactions with decided ancestry will receive sufficient chits: correct nodes examine the DAG for virtuous transactions that lack sufficient progeny and emit no-op transactions to help increase their confidence. With these two mechanisms in place, it is easy to see that, at worst, Avalanche will degenerate into separate instances of Snowball, and thus provide the same liveness guarantee for virtuous transactions.
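A minimal sketch of the adaptive parent selection idea above follows (our own illustration of the mechanism as described, not the paper's reference code; `is_strongly_preferred` and `is_decided` are assumed predicates): new transactions attach at the strongly preferred live edge, and a transaction that repeatedly fails falls back to parents closer to the genesis vertex.

```python
def select_parents(frontier, genesis, is_strongly_preferred, is_decided,
                   retries=0):
    """Pick parents at the live edge; after failed attempts, fall back
    toward already-decided vertices, and ultimately the genesis vertex,
    which is uncontested by construction."""
    if retries == 0:
        live = [v for v in frontier if is_strongly_preferred(v)]
        if live:
            return live
    decided = [v for v in frontier if is_decided(v)]
    return decided or [genesis]
```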
Unlike other cryptocurrencies [48] that use graph vertices directly as votes, Avalanche only uses the DAG for the purpose of batching queries in the underlying Snowball instances. Because confidence is built from collected chits, and not by the mere presence of a vertex, simply flooding the network with vertices attached to the rejected side of a subgraph will not subvert the protocol.

D. Communication Complexity

Let the DAG induced by Avalanche have an expected branching factor of $p$, corresponding to the width of the DAG, and determined by the parent selection algorithm. Given the $\beta_1$ and $\beta_2$ decision thresholds, a transaction that has just reached the point of decision will have an associated progeny $\mathcal{Y}$. Let $m$ be the expected depth of $\mathcal{Y}$. If we were to let the Avalanche network make progress and then freeze the DAG at a depth $y$, then it will have roughly $py$ vertices/transactions, of which $p(y - m)$ are decided in expectation. Only $pm$ recent transactions would lack the progeny required for a decision. For each node, each query requires $k$ samples, and therefore the total message cost per transaction is in expectation $(pky)/(p(y - m)) = ky/(y - m)$. Since $m$ is a constant determined by the undecided region of the DAG as the system constantly makes progress, message complexity per node is $O(k)$, while the total complexity is $O(kn)$.

VI. Evaluation

A. Setup

We conduct our experiments on Amazon EC2 by running from hundreds (125) to thousands (2000) of virtual machine instances. We use c5.large instances, each of which simulates an individual node. AWS provides bandwidth of up to 2 Gbps, though the Avalanche protocol utilizes at most around 100 Mbps.

Our implementation supports two versions of transactions: one is the customized UTXO format, while the other uses the code directly from Bitcoin 0.16. Both supported formats use the secp256k1 crypto library from Bitcoin and provide the same address format for wallets. All experiments use the customized format except for geo-replication, where results for both are given.

We simulate a constant flow of new transactions from users by creating separate client processes, each of which maintains a separate wallet, generates transactions with new recipient addresses and sends the requests to Avalanche nodes. We use several such client processes to max out the capacity of our system. The number of recipients for each transaction is tuned to achieve average transaction sizes of around 250 bytes (1–2 inputs/outputs per transaction on average and a stable UTXO size), the current average transaction size of Bitcoin. To utilize the network efficiently, we batch up to 40 transactions during a query, but maintain confidence values at individual transaction granularity.

All reported metrics reflect end-to-end measurements taken from the perspective of all clients. That is, clients examine the total number of confirmed transactions per second for throughput, and, for each transaction, subtract the initiation timestamp from the confirmation timestamp for latency. Each throughput experiment is repeated 5 times and the standard deviation is indicated in each figure. As for security parameters, we pick $k = 10$, $\alpha = 0.8$, $\beta_1 = 11$, $\beta_2 = 150$, which yields an MTTF of $\sim 10^{24}$ years.

B. Throughput

We first measure the throughput of the system by saturating it with transactions and examining the rate at which transactions are confirmed in the steady state.
For this experiment, we first run Avalanche on 125 nodes with 10 client processes, each of which maintains 400 outstanding transactions at any given time. As shown by the first group of bars in Figure 13, the system achieves 6851 transactions per second (tps) for a batch size of 20 and above 7002 tps for a batch size of 40. Our system saturates at a small batch size compared to other blockchains with known performance: Bitcoin batches several thousand transactions per block, Algorand [27] uses 2–10 Mbyte blocks, i.e., 8.4–41.9K tx/batch, and Conflux [38] uses 4 Mbyte blocks, i.e., 16.8K tx/batch. These systems are relatively slow in making a single decision, and thus require a very large batch (block) size for better performance. Achieving high throughput with a small batch size implies low latency, as we will show later.

![Throughput vs. network size](image)

Fig. 13: Throughput vs. network size. Each pair of bars is produced with batch sizes of 20 and 40, from left to right.

C. Scalability

To examine how the system scales in terms of the number of nodes participating in Avalanche consensus, we run experiments with identical settings and vary the number of nodes from 125 up to 2000. Figure 13 shows that overall throughput degrades about 1.34% to 6909 tps when the network grows by a factor of 16 to $n = 2000$. This degradation is minor compared to the fluctuation in performance of repeated runs. Avalanche acquires its scalability from three sources: first, maintaining a partial order that captures only the spending relations allows for more concurrency than a classical BFT replicated log that linearizes all transactions; second, the lack of a leader naturally avoids bottlenecks; finally, the number of messages each node has to handle per decision is $O(k)$ and does not grow as the network scales up.

D. Cryptography Bottleneck

We next examine where bottlenecks lie in our current implementation. The purple bar on the right of each group in Figure 14 shows the throughput of Avalanche with signature verification disabled. Throughput gets approximately 2.6x higher, compared to the blue bar on the left. This reveals that cryptographic verification overhead is the current bottleneck of our system implementation. This bottleneck can be addressed by offloading transaction verification to a GPU. Even without such optimization, 7K tps is far in excess of extant blockchains.

E. Latency

The latency of a transaction is the time spent from the moment of its submission until it is confirmed as accepted. Figure 15 tallies the latency distribution histogram using the same setup as for the throughput measurements with 2000 nodes. The x-axis is the time in seconds while the y-axis is the portion of transactions that are finalized within the corresponding time period. This figure also outlines the Cumulative Distribution Function (CDF) by accumulating the number of finalized transactions over time. This experiment shows that most transactions are confirmed within approximately 0.3 seconds. The most common latencies are around 206 ms and variance is low, indicating that nodes converge on the final value as a group around the same time. The second vertical line shows the maximum latency we observe, which is around 0.4 seconds.

Figure 16 shows transaction latencies for different numbers of nodes. The horizontal edges of the boxes represent the minimum, first quartile, median, third quartile and maximum latency respectively, from bottom to top.
Crucially, the experimental data show that median latency is more-or-less independent of network size.

F. Misbehaving Clients

We next examine how rogue transactions issued by misbehaving clients that double spend unspent outputs can affect latency for virtuous transactions created by honest clients. We simulate misbehaving clients by letting a fraction (from 0% to 25%) of the pending transactions conflict with some existing ones. The client processes achieve this by designating some double-spending transaction flows among all simulated pending transactions and sending the conflicting transactions to different nodes. We use the same setup with $n = 1000$ as in the previous experiments, and only measure throughput and latency of confirmed transactions.

Avalanche's latency is only slightly affected by misbehaving clients, as shown in Figure 17. Surprisingly, maximum latencies drop slightly when the percentage of rogue transactions increases. This behavior occurs because, with the introduction of rogue transactions, the overall effective throughput is reduced, which alleviates system load. This is confirmed by Figure 18, which shows that throughput (of virtuous transactions) decreases with the ratio of rogue transactions. Further, the reduction in throughput appears proportional to the number of misbehaving clients; that is, there is no leverage provided to the attackers.

G. Geo-replication

The next experiment shows the system in an emulated geo-replicated scenario, patterned after the same scenario in prior work [27]. We selected 20 major cities that appear to be near substantial numbers of reachable Bitcoin nodes, according to [9]. The cities cover North America, Europe, West Asia, East Asia, Oceania, and also cover the top 10 countries with the highest number of reachable nodes. We use the latency and jitter matrix crawled from [58] and emulate network packet latency in the Linux kernel using tc and netem. 2000 nodes are distributed evenly across the cities, with no additional network latency emulated between nodes within the same city. Like Algorand's evaluation, we also cap our bandwidth per process to 20 Mbps to simulate internet-scale settings with many commodity network links. We assign a client process to each city, maintaining 400 outstanding transactions per city at any moment.

In this scenario, Avalanche achieves an average throughput of 3401 tps, with a standard deviation of 39 tps. As shown in Figure 19, the median transaction latency is 1.35 seconds, with a maximum latency of 4.25 seconds. We also support native Bitcoin code for transactions; in this case, the throughput is 3530 tps, with $\sigma = 92$ tps.

H. Comparison to Other Systems

Though there are seemingly abundant blockchain or cryptocurrency protocols, most of them only present a sketch of their protocols and do not offer practical implementations or evaluation results. Moreover, among those that do provide results, most are not evaluated in realistic, large-scale (hundreds to thousands of full nodes participating in consensus) settings. Therefore, we choose Algorand and Conflux for our comparison. Algorand, Conflux, and Avalanche are fundamentally different in their design. Algorand's committee-scale consensus algorithm falls into the classical BFT consensus category, Conflux extends Nakamoto consensus with a DAG structure to facilitate higher throughput, while Avalanche belongs to a new protocol family based on metastability. Additionally, we use Bitcoin [43] as a baseline.
Both Algorand and Avalanche evaluations use a decision network of size 2000 on EC2. Our evaluation used shared c5.large instances, while Algorand used m4.2xlarge. These two platforms are very similar except for a slight CPU clock speed edge for c5.large, which goes largely unused because our process only consumes about 30% of the CPU in these experiments. The security parameters chosen in our experiments guarantee a safety violation probability below $10^{-9}$ in the presence of 20% Byzantine nodes, while Algorand's evaluation guarantees a violation probability below $5 \times 10^{-9}$ with 20% Byzantine nodes.

Neither the Algorand nor the Conflux evaluation takes into account the overhead of cryptographic verification. Their evaluations use blocks that carry megabytes of dummy data and present the throughput in MB/hour or GB/hour units, so we use the average size of a Bitcoin transaction, 250 bytes, to derive their throughputs. In contrast, our experiments carry real transactions and fully take all cryptographic overhead into account. The throughput is 3–7 tps for Bitcoin, 874 tps for Algorand (with 10 Mbyte blocks), and 3355 tps for Conflux (the paper claims 3.84x Algorand's throughput under the same settings). In contrast, Avalanche achieves over 3400 tps consistently on up to 2000 nodes without committees or proof-of-work. As for latency, a transaction is confirmed after 10–60 minutes in Bitcoin, around 50 seconds in Algorand, 7.6–13.8 minutes in Conflux, and 1.35 seconds in Avalanche.

Avalanche performs much better than Algorand in both throughput and latency because Algorand uses a verifiable random function to elect committees and maintains a totally-ordered log, while Avalanche establishes only a partial order. Algorand is leader-based and performs consensus by committee, while Avalanche is leader-less. Avalanche has throughput similar to Conflux, but its latency is 337–613x better. Conflux also uses a DAG structure to amortize the cost of consensus and increase throughput; however, it is still rooted in Nakamoto consensus (PoW), which precludes the near-instant confirmation achieved by Avalanche. In a blockchain system, one can usually improve throughput at the cost of latency through batching. The real bottleneck of performance is the number of decisions the system can make per second, and this is fundamentally limited by Byzantine Agreement (BA*) in Algorand and by Nakamoto consensus in Conflux.

VII. RELATED WORK

Bitcoin [43] is a cryptocurrency that uses a blockchain based on proof-of-work (PoW) to maintain a ledger of UTXO transactions. While techniques based on proof-of-work [4], [23], and even cryptocurrencies with mining based on proof-of-work [49], [57], had been explored before, Bitcoin was the first to incorporate PoW into its consensus process. Unlike more traditional BFT protocols, Bitcoin has a probabilistic safety guarantee and assumes an honest majority of computational power rather than a known membership, which in turn has enabled an internet-scale permissionless protocol. While permissionless and resilient to adversaries, Bitcoin suffers from low throughput (< 3 tps) and high latency (~5.6 hours for a network with 20% Byzantine presence and a $2^{-32}$ security guarantee). Furthermore, PoW requires a substantial amount of computational power that is consumed only for the purpose of maintaining safety.

Countless cryptocurrencies use PoW [4], [23] to maintain a distributed ledger. Like Bitcoin, they suffer from inherent scalability bottlenecks.
Several proposals exist that try to better utilize the effort made by PoW. Bitcoin-NG [24] and the permissionless version of Thunderella [46] use Nakamoto-like consensus to elect a leader that dictates writing of the replicated log for a relatively long time, so as to provide higher throughput. Moreover, Thunderella provides an optimistic bound that, with 3/4 honest computational power and an honest elected leader, allows transactions to be confirmed rapidly. ByzCoin [35] periodically selects a small set of participants and then runs a PBFT-like protocol within the selected nodes.

Protocols based on Byzantine agreement [37], [47] typically make use of quorums and require precise knowledge of membership. PBFT [13], a well-known representative, requires a quadratic number of message exchanges in order to reach agreement. The Q/U protocol [2] and HQ replication [16] use a quorum-based approach to optimize for contention-free operation, achieving consensus in only a single round of communication. However, although these protocols improve performance, they degrade very poorly under contention. Zyzzyva [36] couples BFT with speculative execution to improve the failure-free operation case. Past work in permissioned BFT systems typically requires at least $3f + 1$ replicas. CheapBFT [32] leverages trusted hardware components to construct a protocol that uses $f + 1$ replicas.

Other work introduces new protocols under redefinitions and relaxations of the BFT model. Large-scale BFT [50] modifies PBFT to allow for an arbitrary choice of the number of replicas and the failure threshold, providing a probabilistic guarantee of liveness for some failure ratio while protecting safety with high probability. In another form of relaxation, Zeno [52] introduces a BFT state machine replication protocol that trades consistency for high availability. More specifically, Zeno guarantees eventual consistency rather than linearizability, meaning that participants can be inconsistent but eventually agree once the network stabilizes. By providing an even weaker consistency guarantee, namely fork-join-causal consistency, Depot [40] describes a protocol that guarantees safety under $2f + 1$ replicas. NOW [28] uses sub-quorums to drive smaller instances of consensus; the insight of that work is that small, logarithmic-size quorums can be extracted from a potentially large set of nodes in the network, allowing smaller instances of consensus protocols to be run in parallel.

Snow White [18] and Ouroboros [34] are among the earliest provably secure PoS protocols. Ouroboros uses a secure multiparty coin-flipping protocol to produce the randomness for leader election. The follow-up protocol, Ouroboros Praos [19], provides safety in the presence of fully adaptive adversaries. HoneyBadger [42] provides good liveness in a network with heterogeneous latencies. Tendermint [10], [11] rotates the leader for each block and has been demonstrated with as many as 64 nodes. Ripple [51] achieves low latency by utilizing collectively-trusted sub-networks in a large network; the Ripple company provides a slow-changing default list of trusted nodes, which renders the system essentially centralized. In the synchronous and authenticated setting, the protocol in [3] achieves a three-round commit in expectation, at the cost of quadratic message complexity. Stellar [41] uses Federated Byzantine Agreement, in which *quorum slices* enable heterogeneous trust for different nodes.
Safety is guaranteed when transactions can be transitively connected by trusted quorum slices. Algorand [27] uses a verifiable random function to select a committee of nodes that participate in a novel Byzantine consensus protocol.

Some protocols use a Directed Acyclic Graph (DAG) structure instead of a linear chain to achieve consensus [5], [8], [53]–[55]. Instead of choosing the longest chain as in Bitcoin, GHOST [54] uses a chain selection rule that also takes into consideration transactions not on the main chain, increasing efficiency. SPECTRE [53] uses transactions on the DAG to vote recursively with PoW to achieve consensus, followed by PHANTOM [55], which achieves a linear order among all blocks. Like PHANTOM, Conflux also finalizes a linear order of transactions by PoW in a DAG structure, with better resistance to liveness attacks [38]. Similar to Thunderella, Meshcash [8] combines a slow PoW-based protocol with a fast consensus protocol that allows a high block rate regardless of network latency, offering fast confirmation time. Hashgraph [5] is a leader-less protocol that builds a DAG via randomized gossip. It requires full membership knowledge at all times, and it is a PBFT variant that requires a quadratic number of messages in expectation.

VIII. CONCLUSION

This paper introduced a novel family of consensus protocols, coupled with the appropriate mathematical tools for analyzing them. These protocols are highly efficient and robust, combining the best features of classical and Nakamoto consensus. They scale well, achieve high throughput and quick finality, work without precise membership knowledge, and degrade gracefully under catastrophic adversarial attacks. There is much work to do to improve this line of research. One such improvement could be the introduction of an adversarial network scheduler. Another would be to characterize the system's guarantees under an adversary whose powers are realistically limited, under which the bounds would improve even further. Finally, more sophisticated initialization mechanisms would bear fruit in improving the liveness of multi-value consensus. Overall, we hope that the protocols and analysis techniques presented here add to the arsenal of distributed systems developers and provide a foundation for new lightweight and scalable mechanisms.

REFERENCES

[1] Crypto-currency market capitalizations. https://coinmarketcap.com/. Accessed: 2017-02.

[2] ABD-EL-MALEK, M., GANGER, G. R., GOODSON, G. R., REITER, M. K., AND WYLIE, J. J. Fault-scalable byzantine fault-tolerant services. In ACM SIGOPS Operating Systems Review (2005), vol. 39, ACM, pp. 59–68.

[3] ABRAHAM, I., DEVADAS, S., DOLEV, D., NAYAK, K., AND REN, L. Efficient synchronous byzantine consensus. arXiv preprint arXiv:1704.02397 (2017).

[4] ASPNES, J., JACKSON, C., AND KRISHNAMURTHY, A. Exposing computationally-challenged byzantine impostors. Tech. Rep. YALEU/DCS/TR-1352, Yale University Department of Computer Science, 2005.

[5] BAIRD, L. Hashgraph consensus: fair, fast, byzantine fault tolerance. Tech. rep., Swirlds Tech Report, 2016.

[6] BANERJEE, S., CHATTERJEE, A., AND SHAKKOTTAI, S. Epidemic thresholds with external agents. In IEEE INFOCOM 2014 - IEEE Conference on Computer Communications (2014), IEEE, pp. 2202–2210.

[7] BEN-OR, M. Another advantage of free choice (extended abstract): Completely asynchronous agreement protocols. In Proceedings of the Second Annual ACM Symposium on Principles of Distributed Computing (1983), ACM, pp. 27–30.
[8] BENTOV, I., HUBÁČEK, P., MORAN, T., AND NADLER, A. Tortoise and hares consensus: the Meshcash framework for incentive-compatible, scalable cryptocurrencies. IACR Cryptology ePrint Archive 2017 (2017), 300.

[9] BITNODES. Global Bitcoin nodes distribution. https://bitnodes.earn.com/. Accessed: 2018-04.

[10] BUCHMAN, E. Tendermint: Byzantine fault tolerance in the age of blockchains. Master's thesis, 2016.

[11] BUCHMAN, E., KWON, J., AND MILOSEVIC, Z. The latest gossip on BFT consensus, 2018.

[12] BURROWS, M. The Chubby lock service for loosely-coupled distributed systems. In 7th Symposium on Operating Systems Design and Implementation (OSDI '06), November 6-8, Seattle, WA, USA (2006), pp. 335–350.

[13] CASTRO, M., AND LISKOV, B. Practical byzantine fault tolerance. In Proceedings of the Third USENIX Symposium on Operating Systems Design and Implementation (OSDI), New Orleans, Louisiana, USA, February 22-25, 1999 (1999), pp. 173–186.

[14] CENTRAL INTELLIGENCE AGENCY. The world factbook. https://www.cia.gov/library/publications/the-world-factbook/geos/da.html. Accessed: 2018-04-01.

[15] CHVÁTAL, V. The tail of the hypergeometric distribution. Discrete Mathematics 25, 3 (1979), 285–287.

[16] COWLING, J., MYERS, D., LISKOV, B., RODRIGUES, R., AND SHRIRA, L. HQ replication: A hybrid quorum protocol for byzantine fault tolerance. In Proceedings of the 7th Symposium on Operating Systems Design and Implementation (2006), USENIX Association, pp. 177–190.

[17] CROMAN, K., DECKER, C., EYAL, I., GENCER, A. E., JUELS, A., KOSBA, A., MILLER, A., SAXENA, P., SHI, E., SIRER, E. G., SONG, D., AND WATTENHOFER, R. On scaling decentralized blockchains - (a position paper). In Financial Cryptography and Data Security - FC 2016 International Workshops, BITCOIN, VOTING, and WAHC, Christ Church, Barbados, February 26, 2016, Revised Selected Papers (2016), pp. 106–125.

[18] DAIAN, P., PASS, R., AND SHI, E. Snow White: Provably secure proofs of stake. Cryptology ePrint Archive, Report 2016/919, 2016. https://eprint.iacr.org/2016/919.

[19] DAVID, B., GAŽI, P., KIAYIAS, A., AND RUSSELL, A. Ouroboros Praos: An adaptively-secure, semi-synchronous proof-of-stake blockchain. In Advances in Cryptology - EUROCRYPT 2018 - 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, April 29 - May 3, 2018, Proceedings, Part II (2018), pp. 66–98.

[20] DIGICONOMIST. Bitcoin energy consumption index. https://digiconomist.net/bitcoin-energy-consumption. Accessed: 2018-04.

[21] DOUCEUR, J. R. The Sybil attack. In International Workshop on Peer-to-Peer Systems (2002), Springer, pp. 251–260.

[22] DRAIEF, M., GANESH, A., AND MASSOULIÉ, L. Thresholds for virus spread on networks. In Proceedings of the 1st International Conference on Performance Evaluation Methodologies and Tools (2006), ACM, p. 51.

[23] DWORK, C., AND NAOR, M. Pricing via processing or combatting junk mail. In Advances in Cryptology - CRYPTO '92: 12th Annual International Cryptology Conference, Santa Barbara, California, USA, August 16-20, 1992, Proceedings (1992), pp. 139–147.

[24] EYAL, I., GENCER, A. E., SIRER, E. G., AND VAN RENESSE, R. Bitcoin-NG: A scalable blockchain protocol. In 13th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2016, Santa Clara, CA, USA, March 29 - April 1, 2016 (2016), USENIX Association, pp. 1–14.

[25] GANESH, A., MASSOULIÉ, L., AND TOWSLEY, D. The effect of network topology on the spread of epidemics.
In Proceedings IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies (2005), vol. 2, IEEE, pp. 1455–1466.

[26] GARAY, J. A., KIAYIAS, A., AND LEONARDOS, N. The Bitcoin backbone protocol: Analysis and applications. In Advances in Cryptology - EUROCRYPT 2015 - 34th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Sofia, Bulgaria, April 26-30, 2015, Proceedings, Part II (2015), pp. 281–310.

[27] GILAD, Y., HEMO, R., MICALI, S., VLACHOS, G., AND ZELDOVICH, N. Algorand: Scaling byzantine agreements for cryptocurrencies. In Proceedings of the 26th Symposium on Operating Systems Principles, Shanghai, China, October 28-31, 2017 (2017), pp. 51–68.

[28] GUERRAOUI, R., HUC, F., AND KERMARREC, A.-M. Highly dynamic distributed computing with byzantine failures. In Proceedings of the 2013 ACM Symposium on Principles of Distributed Computing (2013), ACM, pp. 176–185.

[29] HOEFFDING, W. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association 58, 301 (1963), 13–30.

[30] HUNT, P., KONAR, M., JUNQUEIRA, F. P., AND REED, B. ZooKeeper: Wait-free coordination for internet-scale systems. In 2010 USENIX Annual Technical Conference, Boston, MA, USA, June 23-25, 2010 (2010).

[31] JOHANSEN, H. D., VAN RENESSE, R., VIGFUSSON, Y., AND JOHANSEN, D. Fireflies: A secure and scalable membership and gossip service. ACM Trans. Comput. Syst. 33, 2 (2015), 5:1–5:32.

[32] KAPITZA, R., BEHL, J., CACHIN, C., DISTLER, T., KUHNLE, S., MOHAMMADI, S. V., SCHRÖDER-PREIKSCHAT, W., AND STENGEL, K. CheapBFT: Resource-efficient byzantine fault tolerance. In Proceedings of the 7th ACM European Conference on Computer Systems (2012), ACM, pp. 295–308.

[33] KEELING, M. J., AND ROHANI, P. Modeling Infectious Diseases in Humans and Animals. Princeton University Press, 2011.

[34] KIAYIAS, A., RUSSELL, A., DAVID, B., AND OLIYNYKOV, R. Ouroboros: A provably secure proof-of-stake blockchain protocol. In Advances in Cryptology - CRYPTO 2017 - 37th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 20-24, 2017, Proceedings, Part I (2017), pp. 357–388.

[35] KOKORIS-KOGIAS, E., JOVANOVIC, P., GAILLY, N., KHOFFI, I., GASSER, L., AND FORD, B. Enhancing Bitcoin security and performance with strong consistency via collective signing. In 25th USENIX Security Symposium, USENIX Security 16, Austin, TX, USA, August 10-12, 2016 (2016), pp. 279–296.

[36] KOTLA, R., ALVISI, L., DAHLIN, M., CLEMENT, A., AND WONG, E. L. Zyzzyva: Speculative byzantine fault tolerance. ACM Trans. Comput. Syst. 27, 4 (2009), 7:1–7:39.

[37] LAMPORT, L., SHOSTAK, R. E., AND PEASE, M. C. The byzantine generals problem. ACM Trans. Program. Lang. Syst. 4, 3 (1982), 382–401.

[38] LI, C., LI, P., XU, W., LONG, F., AND YAO, A. C. Scaling Nakamoto consensus to thousands of transactions per second. CoRR abs/1805.03870 (2018).

[39] LIGGETT, T. M., ET AL. Stochastic models of interacting systems. The Annals of Probability 25, 1 (1997), 1–29.

[40] MAHAJAN, P., SETTY, S., LEE, S., CLEMENT, A., ALVISI, L., DAHLIN, M., AND WALFISH, M. Depot: Cloud storage with minimal trust. ACM Transactions on Computer Systems (TOCS) 29, 4 (2011), 12.

[41] MAZIÈRES, D. The Stellar consensus protocol: A federated model for internet-level consensus. Stellar Development Foundation (2015).

[42] MILLER, A., XIA, Y., CROMAN, K., SHI, E., AND SONG, D. The honey badger of BFT protocols.
In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, October 24-28, 2016 (2016), pp. 37–52.

[43] NAKAMOTO, S. Bitcoin: A peer-to-peer electronic cash system, 2008.

[44] PASS, R., SEEMAN, L., AND SHELAT, A. Analysis of the blockchain protocol in asynchronous networks. In Advances in Cryptology - EUROCRYPT 2017 - 36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Paris, France, April 30 - May 4, 2017, Proceedings, Part II (2017), pp. 643–673.

[45] PASS, R., AND SHI, E. FruitChains: A fair blockchain. IACR Cryptology ePrint Archive 2016 (2016), 916.

[46] PASS, R., AND SHI, E. Thunderella: Blockchains with optimistic instant confirmation. In Advances in Cryptology - EUROCRYPT 2018 - 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, April 29 - May 3, 2018, Proceedings, Part II (2018), pp. 3–33.

[47] PEASE, M. C., SHOSTAK, R. E., AND LAMPORT, L. Reaching agreement in the presence of faults. J. ACM 27, 2 (1980), 228–234.

[48] POPOV, S. The tangle. https://www.iota.org/research/academic-papers. Accessed: 2018-04.

[49] RIVEST, R., AND SHAMIR, A. PayWord and MicroMint: Two simple micropayment schemes. In Security Protocols (1997), Springer, pp. 69–87.

[50] RODRIGUES, R., KUZNETSOV, P., AND BHATTACHARJEE, B. Large-scale byzantine fault tolerance: Safe but not always live. In Proceedings of the 3rd Workshop on Hot Topics in System Dependability (2007).

[51] SCHWARTZ, D., YOUNGS, N., AND BRITTO, A. The Ripple protocol consensus algorithm. Ripple Labs Inc White Paper 5 (2014).

[52] SINGH, A., FONSECA, P., KUZNETSOV, P., RODRIGUES, R., AND MANIATIS, P. Zeno: Eventually consistent byzantine-fault tolerance. In Proceedings of the 6th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2009, April 22-24, 2009, Boston, MA, USA (2009), pp. 1–14.

[53] SOMPOLINSKY, Y., LEWENBERG, Y., AND ZOHAR, A. SPECTRE: A fast and scalable cryptocurrency protocol. IACR Cryptology ePrint Archive 2016 (2016), 1159.

[54] SOMPOLINSKY, Y., AND ZOHAR, A. Secure high-rate transaction processing in Bitcoin. In Financial Cryptography and Data Security, San Juan, Puerto Rico, January 26-30, 2015, Revised Selected Papers (2015), pp. 507–527.

[55] SOMPOLINSKY, Y., AND ZOHAR, A. PHANTOM: A scalable blockdag protocol. IACR Cryptology ePrint Archive 2018 (2018), 104.

[56] TAN, W. On the absorption probabilities and absorption times of finite homogeneous birth-death queues. Biometrika (1976), 745–762.

[57] VISHNUMURTHY, V., CHANDRAKUMAR, S., AND SIRER, E. G. KARMA: A secure economic framework for peer-to-peer resource sharing. In Workshop on Economics of Peer-to-Peer Systems (2003), vol. 35.

[58] WONDERNETWORK. Global ping statistics: Ping times between WonderNetwork servers. https://wondernetwork.com/pings. Accessed: 2018-04.

**APPENDIX A**

**ANALYSIS**

In this appendix, we provide an analysis of Slush, Snowflake and Snowball.

**A. Preliminaries**

We assume the network model discussed in Section II. We let $R$ ("red") and $B$ ("blue") represent two generic conflicting choices. Without loss of generality, we focus our attention on counts of $B$, i.e. the total number of nodes that prefer blue.

**a) Hypergeometric Distribution** Each network query of $k$ peers corresponds to a sample without replacement out of a network of $n$ nodes, also referred to as a hypergeometric sample.
We let the random variable $\mathcal{H}(\mathcal{N}, x, k) \rightarrow \{0, \ldots, k\}$ denote the resulting count of $B$ in the sample (unless otherwise stated), where $x$ is the total count of $B$ in the population. The probability that the query achieves the required threshold of $\alpha$ or more votes is given by:

$$P(\mathcal{H}(\mathcal{N}, x, k) \geq \alpha) = \sum_{j=\alpha}^{k} \binom{x}{j} \binom{n-x}{k-j} \Big/ \binom{n}{k}. \quad (2)$$

For ease of notation, we overload $\mathcal{H}(*)$ by implicitly referring to $P(\mathcal{H}(\mathcal{N}, x, k) \geq \alpha)$ as $\mathcal{H}(\mathcal{N}, x, k, \alpha)$.

**b) Tail Bounds on the Hypergeometric Distribution** We can reduce some of the complexity of Equation 2 by bounding the tail of the hypergeometric distribution. Let $p = x/n$ be the ratio of support for $B$ in the population. The expectation of $\mathcal{H}(\mathcal{N}, x, k)$ is exactly $kp$. Then, the probability that $\mathcal{H}(\mathcal{N}, x, k)$ will deviate from the mean by more than some small constant $\psi$ is given by the Hoeffding tail bound [29], as follows:

$$P(\mathcal{H}(\mathcal{N}, x, k) \leq (p - \psi)k) \leq e^{-kD(p-\psi \,\|\, p)} \leq e^{-2\psi^2 k} \quad (3)$$

where $D(a \,\|\, b)$ is the Kullback–Leibler divergence, measured as

$$D(a \,\|\, b) = a \log \frac{a}{b} + (1-a) \log \frac{1-a}{1-b}. \quad (4)$$

**c) Concentration of Sub-Martingales** Let $\{X_t\}_{t \geq 0}$ be a sub-martingale with $|X_t - X_{t-1}| \leq c_t$ almost surely. Then, for all positive reals $\psi$ and all positive integers $t$,

$$P(X_t \geq X_0 + \psi) \leq \exp\!\left(-\frac{\psi^2}{2 \sum_{i=1}^{t} c_i^2}\right) \quad (5)$$

**B. Slush**

Slush operates in a non-Byzantine setting; that is, $f = 0$, $c = n$. In this section, we characterize the irreversibility properties of Slush (which reappear in Snowflake and Snowball), as well as the precise convergence rate distribution. The analysis of both the safety and liveness of Slush translates well to the Byzantine setting.

The procedural version of Slush in Figure 4 made use of a parameter $m$, the number of rounds for which a node executes Slush queries. What we ultimately want to extract is the total number of rounds $\phi$ that the scheduler will need to execute in order to guarantee that the entire network is the same color, whp.

We analyze the system mainly using a continuous time process. Let $\{X_t\}_{t \geq 0}$ be a CTMC. The state space $S$ of the stochastic process is a condensed version of the full configuration space, where each state $i \in \{0, \ldots, n\}$ represents the total number of blue nodes in the system. Let $\mathcal{F}_{X_s}$ be the filtration, or the history pertaining to the process, up to time $s$. This process is Markovian and time-homogeneous, conforming to

$$P\{X_t = j \mid \mathcal{F}_{X_s}\} = P\{X_t = j \mid X_s\} = P\{X_{t-s} = j \mid X_0\}$$

Throughout the paper, we use the $Q \equiv (q_{ij},\, i, j \in S)$ notation to refer to the infinitesimal generator of the process, where the death $(i \to i - 1)$ and birth $(i \to i + 1)$ rates of configuration transitions are denoted by $\mu_i$ and $\lambda_i$ ($\lambda_i$ is distinct from the clock parameter $\lambda$, and will be clear from context). These rates are

$$\begin{cases} \mu_i = i \cdot \mathcal{H}(\mathcal{N}, c - i, k, \alpha), & \text{for } i \to i - 1 \\ \lambda_i = (c - i) \cdot \mathcal{H}(\mathcal{N}, i, k, \alpha), & \text{for } i \to i + 1 \end{cases}$$

for $1 \leq i \leq c - 1$, and where $i = 0$ and $i = c$ are absorbing.
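Equation 2 and the rates above translate directly into code. The following sketch uses exact binomial coefficients and is purely illustrative (no optimization for large $n$; function names are ours).

```python
from math import comb

def hyp_tail(n, x, k, alpha):
    """Equation 2: P(H(N, x, k) >= alpha), the probability that at least
    alpha of k nodes sampled without replacement are blue, when x of the
    n nodes are blue. math.comb returns 0 for out-of-range terms."""
    return sum(comb(x, j) * comb(n - x, k - j)
               for j in range(alpha, k + 1)) / comb(n, k)

def slush_rates(n, c, k, alpha):
    """Death (mu_i) and birth (lambda_i) rates for 1 <= i <= c - 1."""
    mu = {i: i * hyp_tail(n, c - i, k, alpha) for i in range(1, c)}
    lam = {i: (c - i) * hyp_tail(n, i, k, alpha) for i in range(1, c)}
    return mu, lam

# Example: probability that one query reaches the alpha threshold when
# 60% of a 2000-node network prefers blue.
print(hyp_tail(2000, 1200, 10, 8))
```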
Let $p_{ij}(t)$ refer to the probability of transitioning from state $i$ to $j$ at time $t$. We always assume that

$$p_{ij}(t) = \begin{cases} \lambda_i t + o(t), & \text{for } j = i + 1 \\ \mu_i t + o(t), & \text{for } j = i - 1 \\ 1 - (\lambda_i + \mu_i)t + o(t), & \text{for } j = i \\ o(t), & \text{otherwise} \end{cases}$$

where all $o(t)$ are uniform in $i$.

**a) Irreversibility** In Section IV, we discussed the loose Chvátal bound, which provided intuition into the strong irreversibility dynamics of our core subsampling mechanism. In particular, once the network drifts to some majority value, it reverts with only an exponentially small probability. We compute the closed-form expression for reversibility, and show that it is exponentially small.

**Theorem 2.** Let $\xi_\delta$ be the probability of absorption into the all-red state $(s_0)$, starting from a drift of $\delta$ (i.e. $\delta$ drift away from $n/2$). Then, assuming $\delta > 1$,

$$\xi_\delta = 1 - \frac{\sum_{t=1}^{\delta} \prod_{i=1}^{t-1} \mu_i^2 \prod_{j=t}^{n-t} \lambda_j}{2 \sum_{t=1}^{n/2} \prod_{i=1}^{t-1} \mu_i^2 \prod_{j=t}^{n-t} \mu_j} \tag{6}$$

and

$$\frac{\xi_\delta - \xi_{\delta+1}}{\xi_{\delta+1} - \xi_{\delta+2}} = \Gamma_{\delta+1} = \frac{\lambda_{\delta+1}}{\mu_{\delta+1}} \approx \frac{n - \delta - 1}{\delta + 1} \sum_{j=\alpha}^{k} \frac{(n - \delta - 1)^k (\delta + 1)^{k-j}}{n^{2k-j}} \tag{7}$$

where from now on we refer to $\Gamma_{\delta+1}$ as the drift of the process.

**Proof.** Our results are derived based on constructions from Tan [56]. We construct a sub-matrix of $Q$, denoted $B$, as shown in Figure 20. Let $W_1 = (\mu_1, 0, \ldots, 0)'$ and $W_{n-1} = (0, \ldots, 0, \lambda_{n-1})'$. Then, we can express $Q$ as

$$Q = \begin{bmatrix} 0 & \ldots & 0 \\ W_1 & B & W_{n-1} \\ 0 & \ldots & 0 \end{bmatrix}$$

As a reminder, the stationary distribution can be found via $\lim_{t \to \infty} P(t) = \lim_{t \to \infty} e^{Qt}$, where for $i \geq 1$ we have

$$e^{Qt} = \sum_{i=0}^{\infty} \frac{t^i}{i!} Q^i = \sum_{i=0}^{\infty} \frac{t^i}{i!} \begin{bmatrix} 0 & \cdots & 0 \\ B^{i-1} W_1 & B^i & B^{i-1} W_{n-1} \\ 0 & \cdots & 0 \end{bmatrix}$$

As Tan (eq. 2.3) shows, we have

$$\xi(t) = B^{-1} \left[ e^{Bt} - I_{n-1} \right] W_1$$

Since we want the ultimate probabilities, we have

$$\xi = \lim_{t \to \infty} \xi(t) = -B^{-1} W_1$$

We can explicitly compute $\xi_\delta$ in terms of our rates $\mu_i$ and $\lambda_i$, getting

$$\xi_\delta = \frac{\sum_{l=1}^{n-\delta} \prod_{i=1}^{n-l} \mu_i \prod_{j=n-l+1}^{n-1} \lambda_j}{\sum_{l=1}^{n} \prod_{i=1}^{n-l} \mu_i \prod_{j=n-l+1}^{n-1} \lambda_j}$$

However, we note that $\mu_k = \lambda_{n-k}$. Algebraic manipulation from this observation leads to the two equations in the theorem. This expression is strictly lower than the Chvátal bounds used in Section IV. $\square$

Using the construction for the absorption (and (ir)reversibility) probabilities discussed previously, a natural follow-up computation concerns the **mean convergence time**. Let $T_z = \inf\{t \geq 0 : X_t \in \{0, n\} \mid X_0 = z\}$, and let $\tau_z = \mathbb{E}(T_z)$. $\tau_z$ is the mean time to reach either absorbing state, starting from state $z$, which corresponds to the mean convergence time. The next theorem characterizes this distribution.
**Theorem 3.** Let $\tau_z$ be the expected time to convergence, starting from state $z > n/2$, to either of the two converging states of the network (all-red or all-blue). Then,

$$\tau_z = \frac{\sum_{d=1}^{n-1} x(d)y(d)}{2 \sum_{l=1}^{n/2} \prod_{i=1}^{l-1} \mu_i^2 \prod_{j=l}^{n-l} \mu_j} \tag{8}$$

where $x(d)$ and $y(d)$ are

$$x(d) = \sum_{l=1}^{\min(z,d)-1} \prod_{i=1}^{l-1} \mu_i \prod_{j=l}^{d-1} \lambda_j$$

$$y(d) = \sum_{l=1}^{n-d-\max(z,d-1)} \prod_{i=d+1}^{n-l} \mu_i \prod_{j=n-l+1}^{n-1} \lambda_j \tag{9}$$

**Proof.** Following the calculations from before, row $z$ of $-B^{-1}$ provides the expected time spent in each transient state, starting from $z$. Summing that row yields the result; the equation above is the full expression of that matrix row sum. $\square$

Theorem 3 leads to the next lemma, which captures property P2 under the assumption that at the beginning of the protocol, one proposal has at least $\alpha$ support in the network.

**Lemma 4.** Slush reaches an absorbing state in finite time almost surely.

**Proof.** Starting from any non-absorbing, transient state, there is a non-zero probability of being absorbed. Additionally, since the termination time is finite and its distribution everywhere differentiable, Theorem 3 also implies that the probability of termination within any bounded time $t_{max}$, from any network configuration where a proposal has $\geq \alpha$ support, is strictly positive. $\square$

**C. Snowflake**

In Snowflake, the sampled set of nodes includes Byzantine nodes. We introduce the decision function $D(\cdot)$, which is constructed by having each node also keep track of the total number of consecutive times it has sampled a majority of the same color ($\beta$). Finally, we introduce a function $A(S_t)$, the adversarial strategy, that takes as parameters the entire configuration of the network at time $t$, as well as the next set of nodes chosen by the scheduler to execute, and, as a side effect, modifies the colors of the Byzantine nodes to some arbitrary configuration.

In order for our prior framework to apply to Snowflake, we must deal with a key subtlety. In Slush, once the network has reached one of the converging states, it may not revert. This no longer applies to Snowflake, since any adversary with $f \geq \alpha$ has a strictly positive probability of reverting the system, albeit this probability may be infinitesimally small. The CTMC is flexible enough to deal with a system where there is only one absorbing state, but the long-term behavior of the system is no longer meaningful since, after an infinite amount of time, the system is guaranteed to revert, violating safety. We could trivially bound the amount of time, and show safety using this bounded-time assumption by simply characterizing the distribution of $e^{tQ}$, where $Q$ is the generator. However, we can make the following observation: if the probability of going from state $c$ (all-blue) to $c-1$ is exponentially small, then it will take the attacker exponential time (in expectation; note, this is a lower bound, and in reality it will take much longer) to succeed in reverting the system. Hence, we can assume that once all correct nodes are the same color, the attack from the adversary will terminate, since it is impractical to continue. In fact, under reasonably bounded timeframes, the variational distance between the exact approach and this approximation is very small.
We leave details to the accompanying paper, but we briefly discuss how the analysis proceeds for Snowflake. As stated in Section IV, the way to analyze the adversary using the same construction as in Slush is to condition reversibility on the first node $u$ deciding blue, which can happen at any state (as specified by $D(*)$). At that point, the adversarial strategy collapses to a single function, which is to continually vote for red. The probabilities of reversibility, for all states $\{1, \ldots, c-1\}$, must encode the probability that additional blue nodes commit, together with this single adversarial function. The birth and death rates are transformed as follows:

$$\begin{cases} \mu_i = i\,(1 - \mathbb{I}[D(s, i, B)])\, \mathcal{H}(\mathcal{N}, c - i + f, k, \alpha) \\ \lambda_i = (c - i)\,(1 - \mathbb{I}[D(s, c - i, R)])\, \mathcal{H}(\mathcal{N}, i, k, \alpha) \end{cases}$$

From here on, the analysis is the same as in Slush. Under various $k$ and $\beta$, we can find the minimal $\alpha$ that provides the system strong irreversibility properties. The next lemma captures P3; its proof follows from the central limit theorem.

**Lemma 5.** If $f < O(\sqrt{n})$ and $\alpha = \lfloor k/2 \rfloor + 1$, then Snowflake terminates in $O(\log n)$ rounds with high probability.

**Proof.** The result follows from the central limit theorem, wherein for $\alpha = \lfloor k/2 \rfloor + 1$, the expected bias in the network after sampling will be $O(\sqrt{n})$. An adversary smaller than this bias will be unable to keep the network in a fully-bivalent state for more than a constant number of rounds. The logarithmic factor remains from the mixing time lower bound. $\square$

**D. Snowball**

We make the following observation: if the confidences between red and blue are equal, then the adversary has the same leverage over the irreversibility of the system as in Snowflake, regardless of the network configuration. In fact, Snowflake can be viewed as Snowball where drifts in confidences never exceed one. The same analysis applies to Snowball as to Snowflake, with the additional requirement of bounding the long-term behavior of the confidences in the network. To that end, the analysis uses martingale concentration inequalities, in particular the one introduced in Equation 5. Snowball can be viewed as a two-urn system, where each urn is a sub-martingale. The guarantee that can be extracted is that the confidence of the majority committed value (in our frame of reference, always blue) always grows faster than that of the minority value, with high probability, drifting away as $t \to t_{max}$.

**E. Safe Early Commitment**

As we reasoned previously, each conflict set in Avalanche can be viewed as an instance of Snowball, where each progeny instance iteratively votes for the entire path of its ancestry. This feature provides various benefits; however, it can also lead to some virtuous transactions that depend on a rogue transaction suffering the fate of the latter. In particular, rogue transactions can interject in between virtuous transactions and reduce the ability of the virtuous transactions to ever satisfy the ISACCEPTED predicate. As a thought experiment, suppose that a transaction $T_i$ names a set of parent transactions that are all decided, per the local view. If $T_i$ is sampled over a large enough set of successful queries without discovering any conflicts, then, since by assumption the entire ancestry of $T_i$ is decided, it must be the case (probabilistically) that we have achieved irreversibility.
To statistically measure the assurance that $T_i$ has been accepted by a large percentage of correct nodes without any conflicts, we make use of a one-way birth process, where a birth occurs when a new correct node discovers the conflict of $T_i$. Necessarily, deaths cannot exist in this model, because a conflicting transaction cannot be unseen once a correct node discovers it. Our birth rates are as follows:
$$\lambda_t = \frac{c - t}{c} \left( 1 - \frac{\binom{n-t}{k}}{\binom{n}{k}} \right) \hspace{1cm} (10)$$
Solving for the expected time to reach the final birth state provides a lower bound on the $\beta_1$ parameter in the isACCEPTED fast-decision branch. The table below shows an example of the analysis for $n = 2000$, $\alpha = 0.8$, and various $k$, where $\varepsilon \ll 10^{-9}$, and where $\beta$ is the minimum required value before deciding.

| $k$ | 10 | 20 | 30 | 40 |
|-----|------|------|------|------|
| $\beta$ | 10.87625 | 10.50125 | 10.37625 | 10.25125 |

Overall, a very small number of iterations are sufficient for the safe early commitment predicate. This supports the choice of $\beta$ in our evaluation.

F. Churn and View Updates

Any realistic system needs to accommodate the departure and arrival of nodes. We now demonstrate that Avalanche nodes can admit a well-characterized amount of churn, by showing how to pick parameters such that Avalanche nodes can differ in their view of the network and still safely make decisions. Consider a network whose operation is divided into epochs of length $\tau$, and a view update from epoch $t$ to $t + 1$ during which $\gamma$ nodes join the network and $\bar{\gamma}$ nodes depart. Under our static construction, the state space $S_t$ of the network had a key parameter $\Delta^t$ at time $t$, induced by $c^t$, $f^t$, $n^t$ and the chosen security parameters. A view update can, at worst, impact the network by adding $\gamma$ nodes of color B and removing $\bar{\gamma}$ nodes of color R. At time $t + 1$, $n^{t+1} = n^t + \gamma - \bar{\gamma}$, while $f^{t+1}$ and $c^{t+1}$ will be modified by an amount $\leq \gamma - \bar{\gamma}$, and will thus induce a new $\Delta^{t+1}$ for the chosen security parameters. This new $\Delta^{t+1}$ has to be chosen such that the probability of reversibility from state $c^{t+1}/2 + \Delta^{t+1} - \gamma$ is $\leq \varepsilon$, which ensures that the system will converge under the previous pessimistic assumptions. The system designer can easily do this by picking an upper bound on $\gamma, \bar{\gamma}$. The final step in assuring the correctness of a view change is to account for a mix of nodes that straddle the $\tau$ boundary. We would like the network to avoid an unsafe state no matter which nodes are using the old and the new views. The easiest way to do this is to determine $\Delta^t$ and $\Delta^{t+1}$ for the desired bounds on $\gamma, \bar{\gamma}$, and then to use the conservative value $\Delta^{t+1}$ during epoch $t$. In essence, this ensures that no commitments are made in configuration $S_t$ unless they conservatively fulfill the safety criteria in state space $S_{t+1}$. As a result, there is no possibility of a node deciding red at time $t$, and then the network going through an epoch change and finding itself to the left of the new irreversibility state $\Delta^{t+1}$. This approach trades off some of the feasibility space for the ability to accommodate $\gamma, \bar{\gamma}$ node churn per epoch.
Overall, if $\tau$ is in excess of the time required for a decision (on the order of minutes to hours), and nodes are loosely synchronized, they can add or drop up to $\gamma, \bar{\gamma}$ nodes in each epoch using the conservative process described above. We leave the precise method of entering and exiting the network by staking and unstaking to a subsequent paper, and instead rely on a membership oracle that acts as a sequencer and $\gamma$-rate-limiter, using technologies like Fireflies [31].
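Looking back at the safe-early-commitment analysis of Section E, the expected time for the one-way birth process to reach its final state is simply the sum of the reciprocal rates $1/\lambda_t$ from Eq. (10), since each holding time is exponential. The sketch below illustrates that computation; it assumes the hypergeometric-style miss probability $\binom{n-t}{k}/\binom{n}{k}$ as reconstructed in Eq. (10), and its absolute output depends on the paper's time normalization, so it demonstrates the method rather than reproducing the table of $\beta$ values.

```python
from math import comb

def expected_full_discovery_time(n, c, k, t0=1):
    """Expected time for all c correct nodes to see a conflict of T_i.

    State t = number of correct nodes aware of the conflict; the process
    only moves up, so E[time] = sum over t of 1 / lambda_t, with lambda_t
    taken from Eq. (10).  t0 is the number of initially aware nodes.
    """
    total = 0.0
    for t in range(t0, c):
        miss = comb(n - t, k) / comb(n, k)  # a k-sample avoids all t aware nodes
        lam = ((c - t) / c) * (1.0 - miss)
        total += 1.0 / lam
    return total
```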
Scalable Video Coding in Content-Aware Networks: Research Challenges and Open Issues

Christian Timmerer\textsuperscript{1}, Michael Grafl\textsuperscript{1}, Hermann Hellwagner\textsuperscript{1}, Daniel Negru\textsuperscript{2}, Eugen Borcoci\textsuperscript{3}, Daniele Renzi\textsuperscript{4}, Anne-Lore Mevel\textsuperscript{5}, and Alex Chernilov\textsuperscript{6}

\textsuperscript{1} Klagenfurt University, Universitätsstrasse 65-67, A-9020 Klagenfurt, Austria \textsuperscript{2} CNRS LaBRI Lab., University of Bordeaux 1, 351 Cours de la Libération, 33400 Talence, France \textsuperscript{3} University "Politehnica" of Bucharest (UPB), 1-3 Iuliu Maniu Ave., 061071, Bucharest 6, Romania \textsuperscript{4} bSoft ltd, 156 via Velini, 62100 Macerata, Italy \textsuperscript{5} Thomson Grass Valley France, 40, rue de Bray, 35510 Cesson-Sevigné, France \textsuperscript{6} Optibase, 7 Shenkar St., P.O. Box 2170, Herzlia, 46120 Israel

Abstract The demand for access to advanced, distributed media resources is nowadays omnipresent due to the availability of Internet connectivity almost anywhere, anytime, and on a huge number of different devices. This calls for a rethinking of the current Internet architecture, making the network aware of which content it actually transports. This paper introduces Scalable Video Coding (SVC) as a tool for Content-Aware Networks (CANs), which is currently being researched as part of the EU FP7 ALICANTE project. The architecture of ALICANTE with respect to SVC and CAN is reviewed, use cases are described, and, finally, research challenges and open issues are discussed.

1 Introduction

In recent years the number of content items, devices, users, and means of communication over the Internet has grown rapidly, and with it the heterogeneity of all involved entities. The many issues arising from this growth are addressed by ongoing research in the area of the Future Internet (FI) [1]. One project in this area is the European research FP7 Integrated Project "MediA Ecosystem Deployment Through Ubiquitous Content-Aware Network Environments" (ALICANTE) [2], which proposes a novel concept towards the deployment of a new networked *Media Ecosystem*. The proposed solution is based on a flexible cooperation between providers, operators, and end-users, finally enabling every user (1) to access the offered multimedia services in various contexts, and (2) to share and deliver her/his own audiovisual content dynamically, seamlessly, and transparently to other users. Towards this goal, ALICANTE's advanced concept provides *content-awareness* to the network environment, *context-awareness* (network/user) to the service environment, and *adapted services/content* to the end-user for her/his best possible service experience, whether taking the role of consumer or producer. The term *environment* is understood here as a generic and comprehensive name emphasizing a grouping of functions defined around the same functional goal, possibly spanning, vertically, one or several architectural (sub-)layers. The name is used to characterize its broader scope with respect to the term *layer*.
The ALICANTE architecture introduces two novel virtual layers on top of the traditional network layer, i.e., a Content-Aware Network (CAN) layer for network packet processing and a Home-Box (HB) layer for the actual content adaptation and delivery. Furthermore, Scalable Video Coding (SVC) is heavily employed for the efficient, bandwidth-saving delivery of media resources across heterogeneous environments (cf. Section 2). Technical use cases that will benefit from this architecture are outlined in Section 3, and Section 4 details the research challenges and open issues to be addressed in the course of the project. Finally, the paper is concluded in Section 5.

2 ALICANTE: MediA Ecosystem Deployment Through Ubiquitous Content-Aware Network Environments

2.1 Overview and System Architecture

The ALICANTE architecture promotes advanced concepts such as content-awareness in the network environment, user context-awareness in the service environment, and adapted services/content for the end-user's best service experience while being a consumer and/or producer. Two novel virtual layers are proposed on top of the traditional network layer, as depicted in Figure 1: the *Content-Aware Network (CAN) layer* for network packet processing and the *Home-Box (HB) layer* for the actual content adaptation and delivery.

Figure 1. ALICANTE concept and system architecture.

Innovative components instantiating the CAN are called *Media-Aware Network Elements (MANEs)*. They are CAN-enabled routers and associated managers, together offering content-aware and context-aware Quality of Service/Experience, security, and monitoring features, in cooperation with the other elements of the ecosystem. The upper layer, i.e., the *Service Environment*, uses information delivered by the CAN layer and enforces network-aware application procedures, in addition to user context-aware ones. The novel *Home-Box (HB)* is a physical and logical entity located at end-users' premises, gathering the context, content, and network information essential for realizing the big picture. Associated with the architecture there exists an open, metadata-driven, interoperable middleware for the adaptation of advanced, distributed media resources to the users' preferences and heterogeneous contexts, enabling an improved Quality of Experience. The adaptation will be deployed at both the HB and CAN layers, making use of scalable media resources as outlined in the next section. For more detailed information the interested reader is referred to [3].

2.2 Scalable Video Coding and Content-Aware Networks

The adaptation relies on Scalable Video Coding (SVC) [4]. SVC follows a layered coding scheme comprising a base layer and one or more enhancement layers in various dimensions. Three basic scalable coding modes are supported, namely spatial scalability, temporal scalability, and Signal-to-Noise Ratio (SNR) scalability, which can be combined into a single coded bit stream:

- **Spatial (picture size) scalability.** A video is encoded at multiple spatial resolutions. By exploiting the correlation between different representations of the same content at different spatial resolutions, the data and decoded samples of lower resolutions can be used to predict data or samples of higher resolutions, in order to reduce the bit rate needed to code the higher resolutions.
- **Temporal (frame rate) scalability.** The motion compensation dependencies are structured so that complete pictures (i.e., their associated packets) can be dropped from the bit stream. Note that temporal scalability is already enabled by AVC and that SVC has only provided supplemental enhancement information to improve its usage.
- **SNR/Quality/Fidelity scalability.** A video is encoded at a single spatial resolution but at different qualities. The data and decoded samples of lower qualities can be used to predict data or samples of higher qualities, in order to reduce the bit rate needed to code the higher qualities.

The adaptation deployed at the CAN layer will be performed in a Media-Aware Network Element (MANE) [5]. MANEs, which receive feedback messages about the terminal capabilities and channel conditions, can remove the non-required parts from a scalable bit stream before forwarding it. Thus, the loss of important transmission units due to congestion can be avoided and the overall error resilience of the video transmission service can be substantially improved.

**Figure 2. Concept of SVC (layered-multicast) tunnel.**

Design options for in-network adaptation of SVC have been described in previous work [6], and first measurements of SVC-based adaptation in an off-the-shelf WiFi router have been reported in [7]. More complex adaptation operations required to create scalable media resources, such as transcoding [8], which has increased memory and CPU requirements, will be performed at the edge nodes only, i.e., in the Home-Boxes. Therefore, the ALICANTE project will develop an SVC (layered-multicast) tunnel, as depicted in Figure 2, inspired by IPv6 over IPv4 tunnels. That is, within the CAN layer only scalable media resources – such as SVC – are delivered, adopting a layered-multicast approach [9] which allows the adaptation of scalable media resources by the MANEs, implementing the concept of distributed adaptation. At the border to the user, i.e., the Home-Box, adaptation modules are deployed, enabling device-independent access to the SVC-encoded content by providing X-to-SVC and SVC-to-X transcoding/rewriting functions with $X=\{\text{MPEG-2, MPEG-4 Visual, MPEG-4 AVC, etc.}\}$. An advantage of this approach is the reduction of the load on the network (i.e., no duplicates), freeing capacity for (other) data (e.g., more enhancement layers). However, multiple adaptations may introduce challenges that have not (yet) been addressed in their full complexity (cf. Section 4).

3 Use Cases

In order to evaluate the concept of SVC in the context of CANs/HBs, several use cases have been defined; they are briefly introduced in the subsequent sections.

3.1 Multicast/Broadcast

In this scenario, multiple users consume the same content from a single provider (e.g., live transmission of sport events). The users may have different terminals with certain capabilities, as depicted in Figure 3. The ALICANTE infrastructure is simplified in Figure 3 to highlight the interesting parts for this scenario (i.e., the HBs and the MANEs). Note that the SVC layers depicted in the figure are only examples and that SVC streams in ALICANTE may comprise temporal, spatial, and quality (SNR) scalability with multiple layers.
The properties and numbers of SVC layers will be determined by the HB at the SP/CP side based on several parameters (e.g., diversity of terminal types, expected network fluctuations, size overhead for additional layers, available resources for SVC encoding/transcoding, etc.) which are known a priori or dynamically collected through a monitoring system operating across all network layers.

3.2 Home-Box Sharing

In this scenario, a user consumes content through a foreign (shared) HB, e.g., the user accesses the content/service to which she/he has subscribed while being abroad (e.g., business trip, vacation). Figure 4 depicts a user consuming content at two different locations on two different terminals, connected to different HBs. Note that the user might as well use her/his mobile phone to consume content through HB2.

Figure 4. Home-Box sharing.

3.3 Video Conferencing

This scenario consists of an n:m video conferencing session (e.g., family meetings, office meetings, etc.) as depicted in Figure 5. The media distribution is handled over a multicast shared bi-directional non-homogeneous tree in the ALICANTE network. In this way, only the minimum amount of network resources is spent, while assuring the end user a maximum of quality.

Figure 5. Video conferencing.

3.4 Peer-to-Peer Media Streaming

The HBs operate in peer-to-peer (P2P) mode within the ALICANTE ecosystem, as illustrated in Figure 6. The MANEs through which the P2P traffic flows act as proxy caches which intercept requests for content pieces issued by HBs and aggregate them, including the capabilities of the requesting terminals. Furthermore, content pieces are only forwarded if the requesting terminals can decode them. Therefore, unnecessary traffic is reduced to a minimum, freeing up network resources for other data (e.g., additional enhancement layers).

4 Research Challenges and Open Issues

In this section we point out some research challenges and open issues with respect to utilizing Scalable Video Coding within Content-Aware Networks.

**Distributed adaptation decision-taking framework.** Due to the fact that many, possibly heterogeneous entities are involved – in the production, ingestion, distribution and consumption stages – there is a need to develop a framework for distributed adaptation decision-taking, that is, finding the optimal decision regarding the adaptation of the content for a single entity (i.e., HB, MANE) within a network of various entities in the delivery system. Note that decision-taking is needed both at request time and during the delivery of the multimedia content, as (network) conditions might change.

**Distributed adaptation at HB and CAN layers.** The actual adaptation at both layers needs to be done efficiently, based on several criteria, in order to obtain low (end-to-end) delay and minimum quality degradation while assuring scalability in terms of the number of sessions that can be handled in parallel.

**Efficient, scalable SVC tunneling and signaling thereof.** The approach of tunneling the content within SVC streams in the (core) network opens up a number of issues due to the SVC adaptation within the MANEs, the SVC transcoding/rewriting within the HBs, and the associated signaling thereof. The issues range from efficiency and scalability to quality degradation.
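As a concrete illustration of the in-network adaptation discussed above (cf. Section 2.2), the sketch below shows a MANE-style filter that drops enhancement-layer packets exceeding what a downstream terminal has signaled it can use. It is a simplification under stated assumptions: packets are taken to be pre-parsed into the dependency/temporal/quality identifiers (DID/TID/QID) carried in the SVC NAL-unit header, and the `SvcPacket` type and threshold values are hypothetical.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class SvcPacket:
    did: int        # dependency_id: spatial layer
    tid: int        # temporal_id: frame-rate layer
    qid: int        # quality_id: SNR layer
    payload: bytes

def mane_filter(stream: Iterable[SvcPacket],
                max_did: int, max_tid: int, max_qid: int) -> Iterator[SvcPacket]:
    """Forward only the SVC layers the receiver can use; the rest are
    dropped here instead of congesting the downstream link."""
    for pkt in stream:
        if pkt.did <= max_did and pkt.tid <= max_tid and pkt.qid <= max_qid:
            yield pkt

# Example: a terminal decoding the base spatial resolution (DID 0),
# a reduced frame rate (TID 1), and base quality (QID 0):
#   for pkt in mane_filter(incoming, max_did=0, max_tid=1, max_qid=0): forward(pkt)
```

Because each SVC enhancement layer only depends on layers with lower identifier values, threshold-based dropping of this kind leaves a still-decodable stream.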
**The impact on the Quality of Service/Experience (QoS/QoE).** As there may be many adaptations happening during the delivery of the content, the impact on the Quality of Service/Experience needs to be studied in order to find the best trade-off for the use cases in question. While for QoS many objective measures are available, QoE is highly subjective and requires tests involving end users; these tests are time-consuming and costly. In any case, a good test-bed is needed for both objective and subjective tests for the evaluation of QoS and QoE, respectively. Possible mappings between QoS and QoE will also be considered in this work.

5 Conclusions and Future Work

In this paper we have introduced the usage of scalable video coding in content-aware networks for the various use cases described above. In particular, SVC is a promising tool for making the network aware of the actual content being delivered, specifically its technical properties such as bit rate, frame rate, and spatial resolution. Furthermore, it allows for efficient and easy-to-use in-network adaptation due to the inherent structure of SVC. The use cases described in the paper indicate the advantages of using SVC and in-network adaptation, and we have highlighted the corresponding research challenges and open issues. However, as this work is in its early stage, it lacks validation results for the scenarios and solutions proposed; these remain part of our future work.

**Acknowledgments** This work is supported in part by the European Commission in the context of the ALICANTE project (FP7-ICT-248652), http://www.ict-alicante.eu/

References

1. Tselentis, G., et al.: Towards the Future Internet – Emerging Trends from European Research, IOS Press (2010)
2. ALICANTE Web site, http://www.ict-alicante.eu/. Accessed 21 June, 2010
3. Borcoci, E., Negru, D., Timmerer, C.: A Novel Architecture for Multimedia Distribution Based on Content-Aware Networking, In: Proc. Third International Conference on Communication Theory, Reliability, and Quality of Service (CTRQ 2010), Athens/Glyfada, Greece (2010)
4. Schwarz, H., Marpe, D., Wiegand, T.: Overview of the Scalable Video Coding Extension of the H.264/AVC Standard, IEEE Transactions on Circuits and Systems for Video Technology, 17(9), 1103–1120 (2007)
5. Wenger, S., Wang, Y.-K., Schierl, T.: Transport and Signaling of SVC in IP Networks, IEEE Transactions on Circuits and Systems for Video Technology, 17(9), 1164–1173 (2007)
6. Kuschnig, R., Kofler, I., Ransburg, M., Hellwagner, H.: Design options and comparison of in-network H.264/SVC adaptation, Journal of Visual Communication and Image Representation, 19(8), 529–542 (2008)
7. Kofler, I., Prangl, M., Kuschnig, R., Hellwagner, H.: An H.264/SVC-based adaptation proxy on a WiFi router, In: 18th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV 2008), Braunschweig, Germany (2008)
8. Shen, B., Tan, W.-T., Huve, F.: Dynamic Video Transcoding in Mobile Environments, IEEE MultiMedia, 15(1), 42–51 (2008)
9. McCanne, S., Jacobson, V., Vetterli, M.: Receiver-driven layered multicast, In: SIGCOMM (1996)
Layna Rush focuses her practice on representing clients in managed care, insurance-related regulatory and compliance, and privacy and security issues. Ms. Rush leads the Firm's Data Incident Response Team, which is a NetDiligence Authorized Breach Coach® firm. She assists clients in the investigation of and response to privacy and security incidents. She is a U.S. and Canadian Certified Information Privacy Professional and a member of the Firm's Data Protection, Privacy and Cybersecurity Team. Ms. Rush also assists clients with managed care and insurance regulatory issues.

**Privacy and Security**

Ms. Rush counsels clients on the investigation of, and response to, privacy and security incidents and data breaches. She represents clients in investigations by the Office for Civil Rights, state attorneys general and insurance departments, and in litigation involving privacy challenges. She routinely advises clients on state and federal laws related to privacy and security of personal information, including HIPAA, Part 2 Substance Abuse laws, the Genetic Information Nondiscrimination Act, the Telephone Consumer Protection Act, the CAN-SPAM Act and state laws related to privacy protections and breach notification requirements. She has assisted clients in the development of policies and procedures for compliance with HIPAA and state and federal laws related to privacy and security. Her privacy and security experience includes:

- Assisting clients with response to security incidents, including phishing scams and ransomware attacks.
- Advising clients on notification requirements under state and federal breach notification laws.
- Representing clients in investigations by the Office for Civil Rights, state attorneys general and other state agencies that enforce privacy and security regulations.
- Assisting with review and development of privacy and security compliance manuals.
- Drafting and advising on business associate obligations and contracts.
- Reviewing and drafting website privacy policies.
- Assisting clients with development and testing of incident response plans.

**Managed Care**

Ms. Rush represents clients in regulatory, compliance, and operational issues related to managed care. She drafts agreements between health care entities including provider agreements for participation in IPAs, ACOs, TPAs, PPOs and Medicare Advantage and Medicaid managed care plans. She routinely counsels clients on state and federal legislation that impacts managed care contracting. Her managed care experience includes:

- Drafting and editing managed care contracts including IPA Participating Provider Agreements, ACO Participation Agreements, Medicaid Managed Care Participating Provider Agreements, PPO/Network Agreements, Commercial Health Plan Provider Agreements, Value Based/Risk Based Participation Agreements and Pharmacy Benefit Manager Agreements.
- Assisting providers with resolution of disputes with payers including disputes related to payment obligations and contract terminations.
- Preparing payment-related documents and agreements including assignment of rights, patient agreements to pay contracts and negotiated rate agreements.
- Advising clients on federal and state laws related to out-of-network payments and balance billing, including the No Surprises Act.
- Counseling clients on state laws related to network adequacy, prompt pay provisions, "any willing provider" requirements, Medicaid managed care and Association Plans/Multiple Employer Welfare Associations (MEWAs).
- Advising clients on federal laws and regulations related to Medicare Advantage plans, Medicaid managed care plans, Affordable Care Act provisions related to plan benefit design and premium payment, and Coordination of Benefits provisions/Medicare Secondary Payer regulations.
- Assisting clients with the formation, registration and/or licensing of Health Maintenance Organizations, insurance companies, Third-Party Administrators, Utilization Review Agents, Preferred Provider Organizations/Networks, and Independent Physician Associations.

**Insurance Regulatory**

Ms. Rush routinely assists clients with insurance regulatory matters and is familiar with all lines of insurance including property and casualty, title, life, and health. Her work on insurance matters includes:

- Advising insurance companies on acquisitions, redomestications, mergers, formations and expansions, reinsurance, insolvency and holding company matters.
- Counseling clients on compliance and operational issues.
- Representing regulated clients before insurance departments around the country.
- Assisting clients with response to cease and desist orders, civil investigation demands and related regulatory proceedings.
- Assisting clients with preparation and filing of rate and form documents.
- Advising clients on insurance premium tax matters and on the handling of unclaimed property laws and regulations.
- Assisting insurance agents and brokers with obtaining licensure in all 50 states, advising agents and brokers on regulatory matters, and representing them in administrative actions.
- Counseling clients on compliance with insurance data security laws, drafting information security programs and assisting with cyber breach reporting obligations.
- Counseling clients on issues related to captive formation and operations.

**Professional Honors & Activities**

- American Bar Association – Vice Chair – Health Lawyer Editorial Board (2018 – 2024)
- American Health Law Association
- Baton Rouge Bar Association
- Executive Secretary – Louisiana Business Group on Health Board of Directors
- Louisiana Hospital Association
- Louisiana State Bar Association
- Wex S. Malone Chapter of the American Inns of Court
- Member – International Association of Privacy Professionals (CIPP/US, CIPP/C)
- Fellow – American Bar Foundation
- Listed in *The Best Lawyers in America*® for Health Care Law (2022 – 2025)
- Listed in *Who's Who Legal* in Healthcare (2020)
- Recognized as a "Top Forty Under 40" by the *Baton Rouge Business Report* (2013)

**Community and Other Activities**

- Board of Directors – Environment and Health Council of Louisiana (2010 – present; President, 2013 – 2019)
- Board of Directors – March of Dimes Capital Area (2010 – 2016)
- Board of Directors – Charles W.
Lamar YMCA (2011 – 2017)
- Board of Directors – Girls on the Run of Greater Baton Rouge (2012 – 2022)
- Board of Directors – Louisiana Business Group on Health (2012 – present)

**Publications**

- "Proposed HIPAA Security Rule Updates" (January 2025)
- "The Office for Civil Rights Recently Settled Two Ransomware Related Investigations" (October 2024)
- "Best Practices for Protecting Operations from Vendor's Cyber Incidents" (October 2024)
- "Coming Soon to a Health Care Provider Near You: Requirements of the New Final Rule for Section 1557," *AHLA Post-Acute and Long Term Services Practice Group Briefing* (May 2024)
- "How to Comply with HHS' New Nondiscrimination Compliance Infrastructure Requirements in Your Facility" (May 2024)
- "The New Law of the Land: HHS Finalizes Health Care Nondiscrimination Provisions in Section 1557 Final Regulation" (May 2024)
- "HIPAA Updates: The Obligations Continue to Unfold" (February 2024)
- "Baker's Dozen: What Are Your Best "Balance" Tips?," *Women's Initiative Newsletter* (June 2023)
- "MOVEit Transfer Zero-Day Vulnerability: What Companies Need to Know," republished in *CPO Magazine* (June 2023)
- "U.S. Health Care Sector Should Take Immediate Mitigating Actions Due to Targeted Attacks by Pro-Russia Hacktivist Group" (February 2023)
- "Cultivating Good Relationships with Insurance Regulators," *Law360* (January 2023)
- "Beware of Cyber Attacks During the Holiday Season – Royal Ransomware Group Highlighted as Threats to the Health and Public Health Sectors" (December 2022)
- "Creating and Maintaining Good Relationships between Regulators and Insurers," LexisNexis (December 2022)
- "Office For Civil Rights Seeks Input on Implementation of HITECH Amendments" (April 2022)
- "Imminent Deadline for Submitting Annual Notice of HIPAA Breach" (January 2022)
- "Baker's Dozen: Honoring Moms Everywhere," *Women's Initiative Newsletter* (May 2021)
- "FBI Warns Hospital and Health Care Providers are Under Attack" (October 2020)
- "Health Info Authorization Ruling Is A Mixed Bag For Providers," *Law360* (February 2020)
- "Proposed Part 2 Rule Addressing Rehab Data Confidentiality Has Significant Gaps," *Bloomberg BNA Health IT Law & Industry Report* (April 2016)
- "A Recent State Supreme Court Ruling Opens the Door for Breach of Privacy Claims Against Health Care Providers" (November 2014)
- "Conditional Class Certification Granted on Plan's Recoupment Practices as Applied to Out-of-Network Health Care Providers," American Health Law Association alert (September 2014)

**Speaking Engagements**

- Presenter – "HIPAA 2.0: New Proposed Requirements Under the HIPAA Security Rule," HCCA Regional Healthcare Compliance Conference (March 2025)
- Presenter – "Operational Governance: Mastering AI, Technology, and Cybersecurity Risk," South Carolina Home Care & Hospice Association Annual Conference (November 2024)
- Panelist – "Healthcare Compliance Emerging Trends and Best Practices for Managing Risk (CYBER)," National Symposium for Healthcare Executives, University of Alabama at Birmingham (October 2024)
- Co-presenter – "Trends in Data Privacy Enforcement and Litigation," American Health Law Association (May 2024)
- Co-presenter – "Privacy Dos and Don'ts and Social Media Risks," North Carolina Assisted Living Association (December 2022)
- Panelist – "Cyber Extortion and Ransomware – Your Business is at Risk: Third Party Liability, Data Exfiltration and Information Disclosure Issues," ISACA North America Conference, New Orleans, Louisiana (May 2022)
- Panelist – "Cyber: Business Interruption and Vendor Risk," WSIA 2022 Insurtech Conference, New Orleans, Louisiana (March 2022)
- Panelist – "Regulatory Roulette: Leveraging and Navigating State Insurance Departments," ABA 2022 Insurance Coverage Litigation Seminar, Tucson, Arizona (March 2022)
- Co-presenter – "Overview of Cyber and Automation Risks in the Oilfield," 20th Annual Energy Litigation Conference (November 2021)
- Co-presenter – "Enterprise Risk in Cyber Extortion and Ransomware," RIMS Canada 2021 Virtual Conference (October 2021)
- "A Ransomware Tale," Mississippi Society of CPAs' Health Care Services Conference (September 2021)
- Co-presenter – "Cyber Security and Privacy Practices to Protect Patient Data and Mitigate Litigation and Regulatory Risk," ACI Managed Care Disputes and Litigation (June 9, 2021)
- Panelist – "Cybersecurity – Risks, Trends, Resources, and Legal Requirements," WEN 2019 National Conference, Denver, Colorado (March 2019)
- "Professionalism," 2018 Louisiana Association of Health Plans' Annual Meeting CLE, "Issues in Health Law" (December 2018)
- "Negotiating Effective Payer-Provider Contracts," ABA Physicians Legal Issues Conference, Chicago, Illinois (June 2018)
- "HIPAA Compliance in the Current Healthcare Landscape," American Portable Diagnostic Association's Mid-Year Meeting, Louisville, Kentucky (May 2018)
- "Cybersecurity for the C-Suite and Management," 31st Annual MSU Insurance Day, Starkville, Mississippi (April 2018)
- "Surprise! You've Been Billed: Emerging Trends in Balance Billing Disputes," ABA Emerging Issues in Healthcare Law (February 2018)
- "Ethics and E-Discovery," 2017 Louisiana Association of Health Plans' Annual Meeting CLE—Issues in Health Law (December 2017)
- "Section 1557: Unfunded Federal Mandate or Landmark Civil Rights Healthcare Law? Understanding Healthcare Provider Responsibilities Under the ACA's Anti-Discrimination Rule," HBA Health Law Section CLE (January 2017)
- Co-presenter – "Section 1557 – Are You Ready?," webinar, LeadingAge Texas (August 2016)
- "Cyber Security: What Should be on Your Radar," Southern Gaming Summit/BingoWorld 2016 conference (May 2016)
- "Legislation Affecting Our Industry," Louisiana Association of Health Plans Health Care at the Capitol conference (March 2014)
- "Health Insurance Exchange Challenges and Solutions, Part II: Enrollment Assistance and Privacy and Security," American Health Law Association webinar (August 2013)
- "The Health Care Reform Act: A Look at Key Health Coverage Provisions," VenueConnect (July 2013)
- "The Impact of Health Reform's State Exchanges," Spring Managed Care Forum (May 2013)
- "Making the Grade: Compliance in the Exchange Market," National Association of Specialty Health Organizations Webinar (November 2012)
- "Explaining Health Care Reform: Exchanges And The Impact On Employers," Louisiana Business Group on Health's Webinar Series (October 2012)
- "Explaining Health Care Reform: Wrap Up for 2011 and What to Expect in 2012," Louisiana Business Group on Health's Webinar Series (December 2011)
- "The Impact of Healthcare Reform Law," New Orleans Regional Leadership Institute (February 2011)

**Webinars**

- An Overview of HIPAA Issues for Financial Institutions and Best Practices for Your Vendor Management Program (June 2021)
- Incident Response: What You Need to Know (April 2021)

**Education**

- Louisiana State University Paul M. Hebert Law Center, J.D., 1999; *Law Review*
- Louisiana State University, B.S., 1996, magna cum laude; Phi Kappa Phi

**Admissions**

- Louisiana, 1999
- United States District Court for the Eastern District of Louisiana
- United States District Court for the Middle District of Louisiana
- United States District Court for the Western District of Louisiana
- United States Court of Appeals for the Fifth Circuit
What Happened on Deliberation Day?

David Schkade† Cass R. Sunstein†† Reid Hastie†††

† Jerome Katzin Professor, Rady School of Management, University of California, San Diego. †† Karl N. Llewellyn Distinguished Service Professor, Law School and Department of Political Science, University of Chicago. ††† Robert S. Hamada Professor of Behavioral Science, Graduate School of Business, University of Chicago. Thanks to Bruce Ackerman for valuable discussions and to Matthew Tokson for excellent research assistance. Copyright © 2007 California Law Review, Inc. California Law Review, Inc. (CLR) is a California nonprofit corporation. CLR and the authors are solely responsible for the content of their publications.

What are the effects of deliberation about legal and political issues by like-minded people? This Essay reports the results of an experimental investigation involving sixty-three citizens in Colorado. Groups from Boulder, a predominantly liberal city, met to discuss global warming, affirmative action, and civil unions for same-sex couples. Groups from Colorado Springs, a predominantly conservative city, discussed the same issues. The major effect of deliberation was to make group members more extreme in their views than they were before they started to talk. Liberals became more liberal on all three issues; conservatives became more conservative. As a result of intragroup deliberation, the division between the citizens of Boulder and the citizens of Colorado Springs significantly increased. Deliberation also increased consensus and significantly reduced diversity within the groups. Even anonymous statements of personal opinion became more extreme and less diverse after deliberation. Because political views are often distributed along geographical lines, these findings are highly likely to be replicated in actual deliberative processes unless safeguards and careful procedures are introduced.

INTRODUCTION

The American constitutional system aspires to be a deliberative democracy—one that combines accountability with a high degree of reflection.\textsuperscript{1} Embracing this deliberative ideal, many people have explored the foundations of political deliberation and its implications for legal and political reform.\textsuperscript{2} An evident hope is that deliberation will lead people to accurate understandings and sensible solutions to social problems. Emphasizing that hope, Bruce Ackerman and James Fishkin have argued on behalf of a formal “Deliberation Day,” designed to foment citizen deliberation.\textsuperscript{3} But under what circumstances is this hope realistic? What are the likely effects of deliberation on judgments about law and politics? It should be clear that in order for deliberation to realize its promise, a reasonable variety of views must be expressed and discussed.
Without exposure to competing views, citizens cannot engage in a balanced and informed weighing of positions—a prerequisite of effective deliberation. But sufficient diversity is unlikely if people sort themselves into homogeneous groups, or if citizens are segregated geographically; sheer demographics may well mean that many social groups consist of like-minded people.\textsuperscript{4} In fact, there is evidence that different communities in the United States are becoming more homogeneous in ideological terms.\textsuperscript{5} To the extent that this is so, deliberating groups may lack the requisite diversity. What are the effects of deliberation in these ideologically sorted groups? Perhaps they spread falsehoods rather than truth, or produce confusion rather than clarity.

We created an experiment in political deliberation, designed to examine the effects of deliberation on communities of people having relatively homogeneous views—a special kind of Deliberation Day. In this experiment, citizens from two cities in Colorado were assembled into several groups, each containing about six people from a particular city. The groups were asked to deliberate about three highly contested issues: global warming, affirmative action, and same-sex civil unions. The two cities were Boulder, which is known by its voting patterns to be a predominantly liberal city, and Colorado Springs, known by its voting patterns to be a mainly conservative enclave.\textsuperscript{6} Citizens were first asked to record their views individually and anonymously. After this initial survey, the citizens deliberated about the three issues together and were instructed to reach a group consensus on each issue. After deliberation, individual participants were asked to record their post-deliberation views, again individually and anonymously.

The effects of deliberation on participants were simple. First, the groups from Boulder became more liberal on all three issues; the groups from Colorado Springs became more conservative. Deliberation with like-minded groups thus shifted individual opinions toward more extremity. Second, deliberation increased consensus and decreased diversity. Many of the groups showed substantial heterogeneity in individual opinions before deliberation began. As a result of a brief discussion period, group members showed significantly more agreement and less heterogeneity, not only in their public statements but also in their anonymous post-deliberation expressions of their private views.

\begin{enumerate} \item See Joseph M. Bessette, The Mild Voice of Reason (1994). \item See Jürgen Habermas, Between Facts and Norms (1996) (elaborating deliberative conception of democracy); Deliberative Democracy (Jon Elster ed., 1998) (collecting diverse treatments of deliberative democracy); Amy Gutmann & Dennis Thompson, Democracy and Disagreement (1996) (defending deliberative democracy and discussing its preconditions). \item See Bruce Ackerman & James S. Fishkin, Deliberation Day (2004). \item See Diana C. Mutz, Hearing the Other Side 46–48 (2006). \item See id.; see also Bill Bishop, The Great Divide, Austin Am.-Statesman (2004), http://www.statesman.com/greatdivide (showing increased uniformity within communities, defined in geographical terms). \item David Leip, Atlas of U.S. Presidential Elections, 2004 Presidential General Election Data Graphs Colorado, http://www.uselectionatlas.org/RESULTS/datagraph.php?year=2004&fips=8&f=0&off=0&elect=0 (last visited Mar. 5, 2007). \end{enumerate}
Third, deliberation sharply increased the disparities between the views of the largely liberal citizens of Boulder and those of the largely conservative citizens of Colorado Springs. Before deliberation, there was considerable overlap between many individuals in the two cities. After deliberation, the overlap in views was much smaller. The simplest statement of our findings is that deliberation among like-minded people produced *ideological amplification*—an amplification of preexisting ideological tendencies, in which group discussion leads to greater extremism. If our experimental findings translate to the real world, deliberation will amplify the ideological tendencies of like-minded people, decrease internal group diversity, and create greater divisions across ideological lines. These effects should be expected whenever groups sort themselves along political lines in purely geographical terms; they should also occur when the sorting occurs through a person’s voluntary decisions about what to read in the newspaper and watch on television.\footnote{7} Various kinds of ideological amplification have been established in other experimental settings, but the phenomenon has received little attention in the context of contested political issues. As we shall see, our experimental design diverges from related experiments, including those undertaken by prominent supporters of political deliberation.\footnote{8} In key ways, our design corresponds more closely to the real world of such deliberation, both formal and informal. Our findings therefore have implications for many questions in law and politics. These include the likely judgments of three-judge panels consisting of all-Republican appointees or all-Democratic appointees; the effects of freedom of association; the performance of private or public boards of like-minded people; and the consequences of movements—geographical, technological, or otherwise—that increase the likelihood that like-minded people will form communities of their own.

In this Essay, we report the results of our experiment, offer an explanation for our results, and provide some brief remarks on the implications of those results for deliberation in politics. We suggest that the Colorado experiment has analogies in many domains of democratic life. To be sure, we did not create the kind of Deliberation Day favored by the most enthusiastic proponents of deliberation in public life: we did not offer the various safeguards that they propose,\(^9\) and indeed our findings might well be taken to provide strong support for those safeguards. But actual deliberation days, and weeks, will often closely resemble our own experiment. As we shall show, our findings offer a vivid warning about the consequences of sorting along political lines—and suggest the need for careful design of any proposal intended to promote political and legal deliberation.

\footnote{7}{See Shanto Iyengar & Richard Morin, *Red Media, Blue Media*, *Wash. Post*, May 3, 2006, available at http://www.washingtonpost.com/wp-dyn/content/article/2006/05/03/AR2006050300865.html.} \footnote{8}{See infra notes 79–89 and accompanying text (discussing the treatment of James Fishkin’s studies).}

I POLITICAL DELIBERATION IN COLORADO

A.
Study Procedures

Sixty-three voting-eligible adults between the ages of twenty and seventy-five participated; thirty-four participants were women and twenty-nine were men.\(^{10}\) Participants were recruited from two Colorado counties for a study on opinions about social and political issues by a professional survey research firm using random telephone digit dialing. Each participant received $100 for a two-hour session. The choice of Colorado as the study’s location was purely for logistical convenience; a similar recruitment protocol could have been followed in any state or geographical area. The study drew half of its sample from Boulder County, which voted 67% for Democratic candidate John Kerry in the 2004 presidential election. The other half of study participants hailed from the city of Colorado Springs in El Paso County, which voted 67% for Republican candidate George W. Bush in the same election.\(^{11}\) The first and key level of screening for this study was geographical. The study also screened the candidates individually, so as to ensure that the Boulder participants held generally liberal political views and the Colorado Springs participants had generally conservative political beliefs.\footnote{Screening questions included the following: (a) “In general, would you describe your political views as very conservative, conservative, moderate, liberal, or very liberal?” (b) “Suppose you were in the voting booth and you came across an office for which two candidates . . . were running and you had never heard of either one. Which candidate would you choose—the Democrat or the Republican—or would you just not vote for that office?” Participants were also asked to assign grades to various people, predicting how they would be as president. The conservative names included Dick Cheney, Wayne Allard (the Republican U.S. Senator from Colorado), Rush Limbaugh, and Pat Robertson. The liberal names included Edward Kennedy, Hillary Rodham Clinton, Jesse Jackson, and John Kerry.} Despite these general inclinations, we did not screen participants for their views on the particular issues involved in the experiment, and many groups showed a degree of pre-deliberation diversity on the issues that they were asked to discuss. There were a total of five conservative groups and five liberal groups, with five to seven members each. In each county, participants came to a central location at a local university for the study. In the first session, each person completed an individual questionnaire about his or her private personal views on several topics.

\(^9\) See Ackerman & Fishkin, supra note 3. \(^{10}\) Consistent with the general demographics of the two counties, 90% of respondents were white. In both counties, three of the five groups contained one non-white voter. There were no significant differences between groups with and without a non-white voter on any group or individual responses related to the affirmative action question. There was no significant difference in age between the samples (the median age was forty-six in both counties). Age did not have a significant effect on the willingness to change one’s opinion—the correlation between age and the extent to which a person changed his or her opinion in the direction of the group was +.12, which is not statistically significant in this sample. \(^{11}\) CNN.com, 2004 Election Results, http://www.cnn.com/ELECTION/2004/pages/results/president (last visited Mar. 5, 2007).
Participants engaged in this task before being informed that they would be part of a group discussion. After all participants had completed their individual questionnaires, they were moved to a different room and told that they would discuss some of the issues as a group. The following instructions were read aloud (verbatim) by a study administrator:

Next you will meet as a group to discuss some of the topics you just considered in the survey. As a group, your job will be to try to reach a consensus among you about each topic. As an individual, your job is to express your personal opinion on each discussion topic, and to attempt to reach a group consensus through discussion. You will have 15 minutes per topic. One member of your group has been randomly selected to be the ‘monitor.’ The monitor’s job is to (1) read instructions and questions aloud to the group, (2) make sure the group performs each discussion task in the proper order, (3) set the timer at 15 minutes for each discussion and (4) record the group’s final consensus opinion at the end of each discussion. The monitor will be given 5 numbered envelopes, which should be opened in numerical order.\footnote{In the group sessions, the designated monitor was given five numbered envelopes, to be opened in order as soon as the previous envelope’s task was completed. The first three each contained instructions for the group to discuss and reach a consensus, if possible, on one of the three focal issues. A fourth contained individual forms, identical to those they completed before the groups were convened, which asked for their private individual opinions on all three topics after the group discussions were completed. The other envelope asked the group to discuss an unrelated issue.} For instance, the monitor will first open Envelope 1, read the question and instructions inside to the group, and then set the timer for 15 minutes. At the end of the 15 minutes, the monitor will record the ‘Group Consensus Opinion’ (if there is consensus), and then open Envelope 2. Each discussion should last approximately 15 minutes. DO NOT take straw votes until you are close to the end of your time—use the full 15 minutes. IMPORTANT NOTE: Be sure not to close discussion before everyone has had a chance to talk. If you understand these instructions, you can open Envelope 1 and begin discussion on the first topic.

Participants discussed each of the three issues as a group while being videotaped and tried to reach a consensus—defined as a unanimous opinion—in fifteen minutes of discussion. After the discussion, they filled out another questionnaire in which they re-rated each issue privately as individuals.

B. Materials Given to Study Participants

Each group discussed the same three issues, and all members privately rated their personal opinions before and after discussion on a one (Disagree Very Strongly) to ten (Agree Very Strongly) scale.

Rating Scale

| Disagree Very Strongly | Disagree Strongly | Disagree Somewhat | Disagree Slightly | Agree Slightly | Agree Somewhat | Agree Strongly | Agree Very Strongly |
|------------------------|-------------------|-------------------|-------------------|---------------|----------------|---------------|---------------------|
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |

The text of the three issues given to study participants said:

1. The United States should sign an international agreement to reduce the greenhouse gases produced in this country that contribute to global warming.

2.
When different applicants for the same job or educational opportunity are almost equal on relevant criteria, then the job or admission should be given to members of groups in society that have been discriminated against in the past.

3. Two adults of the same sex should be able to form a “civil union,” which would entitle them to certain legal rights such as joint home ownership, or access to the other’s retirement or medical benefits.

We chose these issues because they divide people sharply along political lines and have done so for a significant period of time. Undoubtedly, other political issues would have worked as well. The first questionnaire also included demographic information and some filler items.\textsuperscript{14}

C. Study Results

The recruitment process was successful in assembling groups in Boulder that were, on average, significantly more liberal than those in Colorado Springs in their initial opinions.\textsuperscript{15} When combined across all three issues, individual pre-deliberation opinions show substantial differences between the two counties.\textsuperscript{16}

Table 1. Summary of Individual Responses

\begin{tabular}{l c c c c c c}
\textit{Boulder (liberal)} & Mean pre-deliberation & Mean post-deliberation & Moved down & Stayed same & Moved up & \% groups polarized \\ \hline
Global Warming & 9.19 & 9.44 & 5 & 18 & 8 & 60\% \\
Affirmative Action & 5.81 & 6.38 & 6 & 11 & 15 & 80\% \\
Civil Unions & 9.22 & 9.69 & 1 & 19 & 12 & 100\% \\
Overall & 8.07 & 8.50 & 12 & 48 & 35 & 80\% \\
\end{tabular}

\begin{tabular}{l c c c c c c}
\textit{Colorado Springs (conservative)} & Mean pre-deliberation & Mean post-deliberation & Moved down & Stayed same & Moved up & \% groups polarized \\ \hline
Global Warming & 5.13 & 2.97 & 21 & 7 & 3 & 100\% \\
Affirmative Action & 2.84 & 1.61 & 19 & 10 & 2 & 100\% \\
Civil Unions & 2.48 & 2.19 & 8 & 18 & 5 & 80\% \\
Overall & 3.48 & 2.26 & 48 & 35 & 10 & 93\% \\
\end{tabular}

We now explore the effects of deliberation, separately analyzing the consequences for individual views and the consequences for group decisions.

\textsuperscript{14} The filler items appeared in between the three group discussion issues, and were: “Having family members nearby is an important part of a good quality of life,” “It is better to live in the country than in the city or a suburb,” and “The health care that I receive is worse than it was in the past.” \textsuperscript{15} \textit{See infra} Table 1. \textsuperscript{16} A repeated measures ANOVA (Analysis of Variance) showed that there were highly significant differences between the two samples in their pre-deliberation opinions on the issues to be discussed: $F(1,61) = 234.3$ ($p < .001$). This difference was separately significant for each of the three issues (each issue $p < .001$).

1. Individual Mean Shifts Toward Extremity

The opinions of individuals showed consistent evidence of ideological amplification. Six groups produced individual means that shifted in the same direction as the general leaning of the group for all three issues, and the other four groups did so on two of the three issues. There were a total of thirty group discussions, or ten groups discussing three issues per group. Overall, then, twenty-six of thirty discussions, or 87%, produced ideological amplification in individual judgments. An analysis of the medians produced essentially identical results. This pattern of amplification is confirmed in a more formal analysis.
For all individuals, we subtracted pre-deliberation opinions from post-deliberation opinions on each issue, so as to produce an opinion shift “difference score.” For the liberal groups, a positive difference would represent amplification, while a negative difference would represent the same thing for the conservative groups. This is in fact exactly what we observe, as Table 1 demonstrates. This difference between geographical locations is highly significant, $F(1,61) = 56.1$, $p < .001$, and is separately significant for each issue (global warming $p < .001$, affirmative action $p < .001$ and civil unions $p < .02$). Thus we clearly observe a shift toward more extreme opinions in both liberal and conservative groups, but in opposite directions. There is a small—but statistically significant—tendency for the conservative groups to shift their opinions more, after discussion, than do the liberal groups ($p < .01$). But it would be a mistake to pay much attention to this difference. While some groups would undoubtedly shift more than others, the difference found here is probably an artifact of the fact that on global warming and civil unions, liberal groups were more extreme at the beginning, so that there was less room for them to move after discussion.\footnote{See supra Table 1.}

2. Differentiation: The Gap Between Liberals and Conservatives

Liberals and conservatives have different opinions and beliefs about many social and political issues, and it is no surprise that they might come to our study with differences on the particularly salient and controversial issues that we chose for discussion.\footnote{See supra Table 1. See also supra note 12.} Despite this general pattern of differences, before deliberation there was actually a substantial amount of overlap between opinions in Boulder and Colorado Springs.\footnote{See infra Figure 1, top panel.} What is the effect of deliberation, by like-minded groups, on the differences? The answer is simple: because of the ideological amplification resulting from the group process, the initial gulf between opinions in the two counties (8.07 for Boulder vs. 3.48 for Colorado Springs, a difference of 4.59) grew far wider (8.50 for Boulder vs. 2.26 for Colorado Springs, a now much larger difference of 6.24, $p < .001$). Perhaps more disturbing, the distribution of opinions is now heavily concentrated in the extremes, and most of the overlap in opinions between the two locations has disappeared.\footnote{See infra Figure 1, bottom panel.} A main effect of deliberation among like-minded people, then, was a growing gap between liberals and conservatives.

\textit{Figure 1. Pre- and Post-Deliberation Distributions of Opinions (panels show the distribution of attitudes by location, before and after deliberation)}

3. Reduced Internal Diversity

Another important question about deliberation is whether participants will converge or diverge during and as a result of the process of deliberation. A common method for measuring diversity in opinions is by their standard deviation.
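For concreteness, the diversity measure and the sign test reported below can be sketched as follows. The ratings here are synthetic, and the shrink-toward-the-group-mean step is invented purely to mimic convergence; the exact `binomtest` stands in for the normal-approximation sign test behind the reported z-statistic.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

# Synthetic stand-in: 30 group-issue combinations (10 groups x 3 issues),
# six members each, rating on the 1-10 scale before deliberation.
pre = [rng.integers(1, 11, size=6) for _ in range(30)]
# Mimic convergence: shrink each rating halfway toward the group mean.
post = [np.clip(np.round((x + x.mean()) / 2), 1, 10) for x in pre]

sd_pre = np.array([x.std(ddof=1) for x in pre])
sd_post = np.array([x.std(ddof=1) for x in post])

# Sign test: under the null, a group's SD is equally likely to fall or rise.
falls = int((sd_post < sd_pre).sum())
print(falls, "of 30 group-issue SDs fell; p =", binomtest(falls, n=30, p=0.5).pvalue)
```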
The result is clear: the diversity of opinion within our groups, as measured by the standard deviation of their ratings on an issue, was markedly lower after deliberation.\footnote{See infra Figure 2.} The standard deviation of individual opinions in the group was lower after deliberation for no fewer than twenty-nine of the thirty group-issue combinations, and fell from a median of 1.17 pre-deliberation to 0.69 post-deliberation ($z = 4.7$, $p < .001$, by a sign test). In other words, deliberation greatly decreased the heterogeneity of opinions within a group. A similar pattern can be found if we look across groups within the same county. The standard deviation among groups in Boulder declined from 0.67 to 0.51, and in Colorado Springs from 0.85 to 0.76. After deliberation, the opinions of even different groups of people from the same place were more similar—despite not talking with each other. Overall, then, deliberation created far more homogeneity of opinion within different groups from the same geographical location. \textit{Figure 2. Opinion Diversity Declines After Deliberation: Comparison of Pre- and Post-Deliberation Standard Deviations of Individual Opinions Within a Group} 4. Group Decisions What is the relationship between pre-deliberation individual views and the views of deliberating groups? This question is of independent interest. Much of the time, what matters is what groups think and do as such, not only what their members think and do as individuals. The basic answer is that group decisions were more extreme than the mean or median of pre-deliberation judgments. Overall, twenty-five of the thirty group discussions, or 83%, reached a unanimous decision on a numerical scale response within fifteen minutes—ten of ten on global warming, seven of ten on affirmative action, and eight of ten on civil unions. Among the twenty-five group-issue combinations on which a consensus was reached, nineteen groups, or 76%, reached a consensus decision that was more extreme than the mean pre-deliberation individual opinion of group members. The same figure holds for median pre-deliberation responses. II EXPLANATIONS AND IMPLICATIONS On our Deliberation Day, individual liberals grew more liberal and individual conservatives grew more conservative. Within groups, internal diversity diminished, and the gap between liberals and conservatives grew. Why did this happen? A. Conformity, Ideological Amplification, and Group Polarization 1. Consensus and Polarization in General When people discuss their beliefs and preferences in groups, consensus increases for two reasons. The first involves basic conformity or herding habits, which lead people to defer to the opinions of others—whether or not an individual actually agrees with those opinions.\textsuperscript{22} The second is that people learn from the information and views of others. As a result, discussion can produce significant changes in points of view.\textsuperscript{23} Mere deference, in public, to the views of others would not always be expected to affect anonymous statements of opinion. People’s public statements on an issue may well diverge from their private views.\textsuperscript{24} Indeed, we observed greater diversity in people’s anonymous statements than in the views of groups.
But when a group member has signed onto an official view, the private statement might be affected as well—if only because it is disconcerting to maintain a view in private that diverges from a statement made in public.\textsuperscript{25} In any event, group members who learn from one another are likely to be affected in their anonymous statements as well as their public ones, and hence we observe a significant increase in internal group consensus, even with respect to privately held views, as a result of deliberation. That phenomenon has not been studied extensively in empirical terms, but it seems familiar in many types of groups, including political parties, religious organizations, university faculties, labor unions, student groups, and corporate boards. \textsuperscript{22} See Solomon Asch, \textit{Opinions and Social Pressure}, in \textit{Readings About the Social Animal} 13 (Elliot Aronson ed., 7th ed. 1995); Leon Festinger, \textit{A Theory of Social Comparison Processes}, 7 \textit{Hum. Rel.} 117 (1954); Muzafer Sherif, \textit{An Experimental Approach to the Study of Attitudes}, 1 \textit{Sociometry} 90 (1937). A good discussion of the effects of conformity can be found in Lee Ross & Richard E. Nisbett, \textit{The Person and the Situation} 28–30 (1991). \textsuperscript{23} See Daniel Gigone & Reid Hastie, \textit{Proper Analysis of the Accuracy of Group Judgment}, 121 \textit{Psychol. Bull.} 149, 161–62 (1997); Reid Hastie, \textit{Review Essay: Experimental Evidence of Group Accuracy}, in \textit{Information Pooling and Group Decision Making} 129, 133–46 (Bernard Grofman & Guillermo Owen eds., 1986). \textsuperscript{24} See Timur Kuran, \textit{Private Truths, Public Lies} (1995) (discussing general phenomenon of “preference falsification,” in which people’s public statements are systematically inconsistent with their actual private views). \textsuperscript{25} See id. See also Leon Festinger, \textit{A Theory of Cognitive Dissonance} (1957) (giving more information on the complex relations between public statements and private views). More strikingly, a well-known effect of discussion is group polarization, by which deliberating groups end up in a more extreme position in line with their pre-deliberation tendencies.\textsuperscript{26} In our experiment, group polarization occurred in the particular form of ideological amplification. We find unmistakable evidence of group polarization on the current political and legal issues discussed in this study. Our findings are noteworthy because most studies of group polarization do not involve legal or political issues.\textsuperscript{27} The original polarization experiments involved risk-taking behavior, with a demonstration that risk-inclined people, when considering (for example) the decision whether to take a new job in a new city or to become a concert pianist, became still more risk-inclined as a result of deliberation.\textsuperscript{28} With respect to business-related decisions, groups seemed to be willing to take risks that their individual members would avoid.\textsuperscript{29} Later studies of group polarization showed that under some conditions, the “risky shift” could also be a “cautious shift,” as risk-averse people become more averse to certain risks after speaking with one another.\textsuperscript{30} It turned out that the direction of the shift—toward greater risk-taking or greater caution—was related to the domain of experience in which the risky choice was embedded.
The principal examples of “cautious shifts” involved the decision whether to marry and the decision whether to board a plane despite severe abdominal pain, possibly requiring medical attention.\textsuperscript{31} In these cases, deliberating groups together moved toward a more cautious approach, as did the individual members who composed each of the groups.\textsuperscript{32} \textsuperscript{26} See Roger Brown, Social Psychology (2d ed. 1986). Two of the present authors have discussed this phenomenon in other places. See, e.g., Cass R. Sunstein, The Law of Group Polarization, 10 J. Pol. Phil. 175 (2002) [hereinafter Sunstein, Law of Group]; David Schkade et al., Deliberating About Dollars: The Severity Shift, 100 Colum. L. Rev. 1139 (2000); Cass R. Sunstein, Deliberative Trouble? Why Groups Go To Extremes, 110 Yale L.J. 71 (2000) [hereinafter Sunstein, Deliberative Trouble]. \textsuperscript{27} See Brown, supra note 26, at 200–245 (studying group polarization). \textsuperscript{28} See J.A.F. Stoner, A Comparison of Individual and Group Decisions Involving Risk (1961) (unpublished Master’s thesis, Massachusetts Institute of Technology) (on file with Massachusetts Institute of Technology). An interesting replication of Stoner’s findings can be found in Lawrence K. Hong, Risky Shift and Cautious Shift: Some Direct Evidence on the Culture-Value Theory, 41 Social Psychol. 342 (1978). Hong finds that Americans are more risk-inclined in groups than as individuals with respect to the decision whether to take a new job, have a heart operation, buy stocks, choose a risky play in football, invest in a foreign country, choose a risky move in chess, become a concert pianist, and run for political office. Interestingly, Chinese subjects showed a cautious shift for all these questions, with a single exception: choosing a risky play in football. \textsuperscript{29} Stoner, supra note 28; Hong, supra note 28 (finding a risky shift for American subjects). \textsuperscript{30} See Serge Moscovici & Marisa Zavalloni, The Group as a Polarizer of Attitudes, 12 J. Personality & Soc. Psychol. 125, 125–35 (1969). See also Hong, supra note 28. \textsuperscript{31} See Moscovici & Zavalloni, supra note 30. See also Hong, supra note 28, at 344 (finding a cautious shift for both Chinese and American subjects with respect to the decision whether to marry). \textsuperscript{32} See Moscovici & Zavalloni, supra note 30. More careful analysis of these results demonstrated that the pre-deliberation median is the best predictor of the direction of the post-deliberation shift.\textsuperscript{33} Where group members were predisposed toward risk-taking behavior, a risky shift was observed. Where members were more disposed toward caution from the beginning, a cautious shift was observed. Hence, group polarization refers to the tendency of deliberating groups to shift to a more extreme position in line with the pre-deliberation tendencies of their members. Ideological amplification, as we use the term here, is best understood as a special case of group polarization.
In the behavioral laboratory, group polarization is evident in a remarkably wide range of contexts, including robbery, aesthetic judgments, and factual observations.\textsuperscript{34} For instance, even groups of burglars can show a shift in the cautious direction when they discuss prospective criminal endeavors.\textsuperscript{35} Group deliberation produces more extreme judgments about the attractiveness of people shown in slides.\textsuperscript{36} Deliberation can also produce more extreme group judgments for obscure factual questions, such as how far Sodom (on the Dead Sea) is below sea level.\textsuperscript{37} Our focus here has been on disputed political questions. In the domain of law, there is some evidence of group polarization as well. In punitive damage cases, deliberating juries have been found to polarize, producing awards that are often higher than those of the median juror before deliberation begins.\textsuperscript{38} When individual jurors begin with a high level of moral indignation about a defendant’s conduct, juries become more indignant after deliberation than their median member had been before discussion. This effect ultimately produces dollar awards that are often as high as or even higher than the highest award favored before deliberation by any individual juror.\textsuperscript{39} Group polarization also occurs for jury judgments of guilt and sentencing in criminal cases.\textsuperscript{40} With respect to legal questions, panels of appellate judges polarize too. In ideologically contested areas, Republican appointees show especially conservative voting patterns when sitting on panels consisting entirely of Republican appointees, and Democratic appointees show especially liberal \textsuperscript{33} See id. See also Brown, supra note 26, at 210–12. \textsuperscript{34} John C. Turner et al., Rediscovering the Social Group: A Self-Categorization Theory 142–70 (1987). \textsuperscript{35} Paul F. Cromwell et al., Group Effects on Decision-Making by Burglars, 69 Psychol. Rep. 579, 586 (1991). \textsuperscript{36} Turner et al., supra note 34, at 153. \textsuperscript{37} Id. \textsuperscript{38} See Schkade et al., supra note 26. \textsuperscript{39} See id. (finding that in 27% of the cases, the jury’s award was as high as or higher than the award favored by any individual juror before deliberation). \textsuperscript{40} David G. Myers & Martin F. Kaplan, Group-Induced Polarization in Simulated Juries, 2 Personality & Soc. Psychol. Bull. 63 (1976); Martin F. Kaplan, Discussion Polarization Effects in a Modified Jury Decision Paradigm: Informational Influences, 40 Sociometry 262 (1977). voting patterns when sitting solely with other Democratic appointees.\textsuperscript{41} There is also suggestive evidence of group polarization on political issues. As a result of deliberation, French people, on average, become more distrustful of the United States and its intentions with respect to foreign aid.\textsuperscript{42} Similarly, feminist ideals can become more attractive to women after internal group discussions.\textsuperscript{43} White people who are not inclined to show racial prejudice show less prejudice after deliberation with one another than before; but white people who are inclined to show such prejudice show more prejudice after deliberation with group members holding similar views.\textsuperscript{44} \textit{2. Sorting versus Mixing} In our experiment, people were sorted into like-minded groups.
Geographical factors—the different voting patterns in Boulder and Colorado Springs—greatly simplified this sorting process. Such sorting was a central part of the design of our study, because we were explicitly interested in the effects of deliberation among and across like-minded groups. As we suggested, actual sorting appears to be increasing in geographical terms, as geographically defined areas within the United States are becoming more uniform in their political commitments.\textsuperscript{45} In addition, virtual sorting along political lines is far easier with the rise of media organized along ideological lines.\textsuperscript{46} It is natural to ask what would have happened if there had been a degree of mixing—if people from Colorado Springs had participated in groups with people from Boulder. Advocates of deliberation typically prefer heterogeneity rather than uniformity.\textsuperscript{47} Mixing might have occurred voluntarily, as it often does. Alternatively, mixing might have been engineered by the experimental design. In terms of ultimate outcomes, existing work suggests two principal possibilities. First, and most likely, the pre-deliberation median might well have been predictive here as well, in the sense that it would likely predict both the group’s decision and the shift in individual views.\textsuperscript{48} Suppose, for example, that a group of six people tended to oppose civil unions for same-sex couples, \begin{itemize} \item \textsuperscript{41} See Cass R. Sunstein, David A. Schkade, Lisa M. Ellman & Andres Sawicki, \textit{Are Judges Political? An Empirical Analysis of the Federal Judiciary} (2006) [hereinafter Sunstein et al., \textit{Are Judges Political}]; Cass R. Sunstein et al., \textit{Ideological Voting on Federal Courts of Appeals: A Preliminary Investigation}, 90 Va. L. Rev. 301 (2004) [hereinafter Sunstein et al., \textit{Ideological Voting}]. \item \textsuperscript{42} Brown, \textit{supra} note 26, at 224. \item \textsuperscript{43} David G. Myers, \textit{Discussion-Induced Attitude Polarization}, 28 Hum. Rel. 699, 707–11 (1975) (finding increase in support for feminism among women inclined to show feminist attitudes). \item \textsuperscript{44} David G. Myers & George D. Bishop, \textit{Discussion Effects on Racial Attitudes}, 169 Science 778, 778–79 (1970). \item \textsuperscript{45} See Bishop, \textit{supra} note 5. \item \textsuperscript{46} See id. \item \textsuperscript{47} See Ackerman & Fishkin, \textit{supra} note 3. \item \textsuperscript{48} See Brown, \textit{supra} note 26, at 210–12. \end{itemize} because two members sharply opposed them, two members mildly opposed them, and two members mildly favored them. In light of the initial distribution of views, the group and its individual members would probably move in the direction of greater opposition, notwithstanding a degree of internal heterogeneity. In many settings, the pre-deliberation median is the best predictor of the movement of individuals and groups, even if there is a degree of antecedent heterogeneity.\textsuperscript{49} Note, in this regard, that all of the groups in our study began with some such heterogeneity, and they nonetheless moved in the way predicted by group polarization research. As discussed below, this conclusion follows from an understanding of the sources of polarization.
The second possibility is that individuals will become further entrenched in their preexisting positions and will fail to move at all, as group members may show a reluctance to listen to those with competing positions. Polarization may not be found when the relevant group consists of individuals drawn equally from two extremes.\textsuperscript{50} Consider the finding that “familiar and long-debated issues do not depolarize easily.”\textsuperscript{51} We have said that ideological amplification generally occurs in the federal judiciary.\textsuperscript{52} But on two issues—capital punishment and abortion—Republican appointees are not affected by sitting with two Democratic appointees, and Democratic appointees are impervious to the influences of two Republican appointees.\textsuperscript{53} Evidently judicial judgments about abortion and capital punishment are firmly held and hence amplification does not occur. For political issues on which people do not have rigidly determined positions, polarization is more likely, as our own experiment suggests. Mixed groups have, however, been shown to have two desirable social effects. First, exposure to competing positions generally increases political tolerance.\textsuperscript{54} After hearing a variety of views, including those divergent from their own, many people become more respectful of alternative positions and are more willing to consider them plausible or legitimate. An important result of seeing a political conflict as legitimate is a “greater willingness to extend civil liberties to even those groups whose political views one dislikes a great deal.”\textsuperscript{55} Second, mixing increases the likelihood that people will be aware of competing rationales and will see potential counterarguments.\textsuperscript{56} This effect is especially pronounced for those who antecedently show a “civil orientation toward \textsuperscript{49} See Schkade et al., \textit{supra} note 26, at 1140–41 (finding that the pre-deliberation median predicts movements, even when there is considerable internal diversity). \textsuperscript{50} See E. Burnstein, \textit{Persuasion As Argument Processing}, in \textit{GROUP DECISION MAKING} (Hermann Brandstetter et al. eds., 1982). \textsuperscript{51} Brown, \textit{supra} note 26, at 226. \textsuperscript{52} Sunstein et al., \textit{Are Judges Political}, \textit{supra} note 41, at 22–24. \textsuperscript{53} See \textit{id.} at 62–63 (discussing decisions of three-judge panels). \textsuperscript{54} See Moritz, \textit{supra} note 4, at 76–77. \textsuperscript{55} \textit{Id.} at 85. \textsuperscript{56} \textit{Id.} at 74–76. conflict,” in the sense that they are committed to a degree of social harmony and are willing to acknowledge, in advance, that dissenting views should be expressed.\textsuperscript{57} These desirable effects of deliberation within mixed groups will not be realized in any deliberative process in which people are sorted, or sort themselves, into politically homogeneous groups. \textit{B. Explaining Polarization} Why does group polarization occur, and what accounts for such ideological amplification? Contributing factors include (a) informational influences, (b) corroboration effects, (c) social comparison, and (d) shared identity and self-categorization.\textsuperscript{58} 1.
\textit{Informational Influences} The first and perhaps most important reason is that group members provide relevant information.\textsuperscript{59} In Colorado, group members were willing to consider both the conclusions and the arguments offered during deliberation. For example, skeptics made slippery slope arguments about same-sex marriage, expressing the fear that further changes to the institution of marriage would be difficult to prevent if that institution is not limited to one man and one woman. With respect to affirmative action, those rejecting color blindness emphasized the long history of discrimination in the United States and argued that a principle of color blindness might return the nation to a time of greater racial inequality. Group members were responsive to both of these concerns. In any group with some initial inclination, the views of most people in the group, and the information that they have and that they provide, will inevitably tend in the direction of that inclination.\textsuperscript{60} Suppose, for example, that most people in a group believe that an international treaty to control global warming is a bad idea. As a statistical matter, the arguments favoring that initial position will be more numerous than the arguments pointing in the other direction. Individuals may have been exposed to some, but not all, of the arguments that emerge from group deliberation; perhaps they will not have heard concerns about the expense of international controls, the dangers of ceding national controls over energy policy, or the possibility that global warming will have only modest adverse effects for the United States. As a result of hearing the various arguments, deliberation will lead people toward a more extreme point aligned with the initial beliefs of group members. Through this process, many \textsuperscript{57} \textit{Id.} at 75. \textsuperscript{58} See Brown, \textit{supra} note 26, at 212–22, 226–45; Robert S. Baron et al., \textit{Social Corroboration and Opinion Extremity}, 32 \textit{J. Experimental Soc. Psychol.} 537 (1996). Overlapping accounts are provided in Schkade et al., \textit{supra} note 26, and Sunstein, \textit{Deliberative Trouble}, \textit{supra} note 26. \textsuperscript{59} Brown, \textit{supra} note 26, at 217–22. \textsuperscript{60} \textit{Id.} at 219. minds can polarize, and in exactly the same direction. 2. The Effects of Corroboration and “Self-Discovery” The second explanation stresses the close links among confidence, extremism, and corroboration by others.\textsuperscript{61} If people lack confidence, they will tend toward the middle and avoid the extremes.\textsuperscript{62} As people gain confidence, they usually become more extreme in their beliefs.\textsuperscript{63} Agreement from others tends to increase confidence, and in this way like-minded people become more certain they are right and thus more extreme after deliberating with each other.\textsuperscript{64} In a wide variety of experimental contexts, people’s opinions have been shown to become more extreme simply because their views have been corroborated, and because they have become more confident after learning that others share their views.\textsuperscript{65} A process of this kind undoubtedly occurred in Colorado.
Within both liberal and conservative groups, some people began with a degree of tentativeness, in a way that moved them toward the middle of the relevant scale.\textsuperscript{66} After hearing both conclusions and arguments that fortified their original inclinations, they moved, with remarkable regularity, to a more extreme position.\textsuperscript{67} A distinctive but related account of group polarization, and hence ideological amplification, suggests that deliberation can operate as a form of “self-discovery.”\textsuperscript{68} This account begins with the observation that particular people are likely to find particular arguments especially persuasive. Fundamentalist Christians might be convinced, for example, that climate change, induced by some human beings to the detriment of other human beings and the natural world, is inconsistent with their deepest theological commitments. For those who think about political issues, some reasons are “active,” in the sense that they are known to be valid and relevant, whereas other reasons are “latent,” in the sense that people are uncertain of their pertinence and strength, but might be much affected by them if they are pressed in deliberation.\textsuperscript{69} When people find themselves in groups of like-minded people, their latent judgments are made active, as others press reasons in favor \begin{itemize} \item[61.] See Baron et al., \textit{supra} note 58, at 557–59 (showing that corroboration increases confidence and hence extremism). \item[62.] See id. \item[63.] Id. \item[64.] Id. \item[65.] See Baron et al., \textit{supra} note 58, at 541, 546–47, 557 (concluding that corroboration of one’s views has effects on opinion extremity). \item[66.] See \textit{supra} Figure 1, top panel. \item[67.] See \textit{supra} Figure 1, bottom panel. \item[68.] See Catherine Hafer & Dimitri Landa, \textit{Deliberation and Social Polarization}, Jan. 25, 2006, available at http://ssrn.com/abstract=887634. \item[69.] Id. at 2. \end{itemize} of those judgments. It is in this sense that deliberation can operate as a form of “self-discovery,” producing ideological amplification. This account is also consistent with what we observed in Colorado. 3. Social Comparison The third explanation involves social comparison.\textsuperscript{70} Sometimes people’s publicly stated views are partly a function of how they want to present themselves.\textsuperscript{71} People usually want to be perceived favorably by other group members.\textsuperscript{72} Once they hear what others believe, some will adjust their positions at least slightly in the direction of the dominant position, to present themselves in the way that they prefer. Reputational concerns are only part of the story here; people also want to preserve their preferred self-conception, and if they ordinarily think of themselves as slightly left-of-center, they might shift a bit, in a liberal group, in order to preserve that self-conception. In a liberal group, movements in the liberal direction will be favored and, for this reason, all members might end up leaning somewhat more to the left. This explanation fits well with the changes we observed. 4. Shared Identity and Self-Categorization A great deal of research indicates that group polarization is heightened when people have a sense of shared identity.\textsuperscript{73} People may polarize because they are attempting to conform to the position they see as typical or normative within their own group.
If a group’s particular identity is especially salient, the in-group norms “are likely to become more extreme so as to be more clearly differentiated from outgroup norms, and the within-group polarization will be enhanced.”\textsuperscript{74} When Democrats or Republicans polarize, the desire to ensure intergroup differentiation is likely a motive. In our own experiment, many groups were even more prone to polarization when their discussions referred to groups with whom they disagreed, such as “those loony liberals” or “those crazy conservatives.” C. The Limits of Polarization: Diverse Deliberation Days We have traced several social-cognitive processes that contribute to ideological amplification within like-minded groups: (1) informational \textsuperscript{70} See Brown, \textit{supra} note 26, at 213–17. \textsuperscript{71} Id. \textsuperscript{72} Id. \textsuperscript{73} See id. at 209–11; Turner et al., \textit{supra} note 34, at 159–70 (discussing evidence for the “self-categorization theory of polarization”); Joel Cooper et al., \textit{Attitudes, Norms, and Social Groups}, in \textit{BLACKWELL HANDBOOK OF GROUP PSYCHOLOGY: GROUP PROCESSES} 259, 269–70 (Michael A. Hogg & R. Scott Tindale eds., 2001). \textsuperscript{74} Turner et al., \textit{supra} note 34, at 210. influences, (2) corroboration effects, (3) social comparison, and (4) shared identity and self-categorization. An understanding of these processes suggests that political deliberation is extremely likely to lead to ideological amplification. It also suggests circumstances that may dampen or prevent ideological amplification. Imaginable interventions might produce different kinds of shifts and could either intensify or dampen amplification. 1. \textit{Informational interventions.} We could easily imagine that information flows could affect amplification. Suppose, for example, that we gave the Boulder participants credible information suggesting global warming is not a particularly serious problem for the United States, that affirmative action greatly hurts those whom it is intended to help, or that civil unions for same-sex couples provide few benefits to such couples while posing legitimate threats to children. Such information could counteract the dynamics of ideological amplification, especially if the participants had some reason to trust the source of that information. For instance, participants may trust news sources that share their political beliefs.\textsuperscript{75} If we told participants from Colorado Springs that President Bush supported a treaty to control global warming or that Vice President Cheney favored civil unions for same-sex couples, some of them would likely be influenced, especially if arguments supported the relevant views.\textsuperscript{76} \textsuperscript{75} See Geoffrey L. Cohen, \textit{Party Over Policy: The Dominating Impact of Group Influence On Political Beliefs}, 85 J. PERSONALITY & SOC. PSYCHOL. 808 (2003) (showing that identification of a political party’s view greatly affects people’s judgments on political issues, enough so as to press them away from the view that they would otherwise hold). \textsuperscript{76} See id. (noting that the policy favored by the relevant party affected participants’ views, even without supporting arguments); see also World Public Opinion.org, \textit{Global Warming}, http://americans-world.org/digest/global_issues/global_warming/gw2.cfm (last visited Feb.
2, 2007) (noting that about 70% of Americans favor the Kyoto Protocol to curtail global warming but that figure drops to about 43% when people are informed that President Bush rejects the Kyoto Protocol). On the other hand, a deliberately balanced presentation that offers plausible arguments from both sides should diminish the effects that produce ideological amplification. 2. \textit{Administrators, moderators, and leaders.} Administrators, moderators, and leaders might affect and perhaps even prevent ideological amplification. If group members trust them, administrators can dampen amplification by countering the group’s prevailing tendency and attempting to prevent extreme movements. Alternatively, deliberation could include planted members—confident, likeable, and apparently expert group members who are actually confederates of the experimenter. These confederates should be able to increase or to decrease amplification.\textsuperscript{77} \textsuperscript{77} Sherif, \textit{supra} note 22. A good outline can be found in Ross & Nisbett, \textit{supra} note 22, at 28–30. For demonstrations of the powerful effect of a confident confederate on the views of group members, see Robert Jacobs and Donald Campbell, \textit{The Perpetuation of An Arbitrary Tradition Through Several Generations of a Laboratory Subculture}, 62 J. ABNORMAL AND SOCIAL PSYCH. 649 (1961); Gregory Moschetti, \textit{Individual Maintenance and Perpetuation of A Means/Ends Arbitrary Tradition}, 40 SOCIOMETRY 78 (1977). The group’s assessment of administrators or confederates as relatively similar or relevantly different will affect their influence on the group.\textsuperscript{78} In our experiment, deliberators attempted to reach a group decision, and they succeeded in doing so in 83% of group discussions. The effects we describe would likely diminish if experimenters asked deliberators to speak to one another without reaching a decision and then polled them privately on their views. 3. Other Deliberation Days. With respect to the effects of information flows and administrators, there is no need to speculate here. James Fishkin, an advocate of a Deliberation Day with balanced presentations of views, has illuminatingly explored the idea of a “deliberative opinion poll,” in which diverse people are asked to engage in deliberation about various issues.\textsuperscript{79} Fishkin finds significant changes in individual opinions, suggesting that deliberation is having a large effect, but he does not find a systematic tendency toward ideological amplification.\textsuperscript{80} In England, for example, deliberation led to reduced interest in using imprisonment as a tool for combating crime even when there was no antecedent hostility to the use of imprisonment.\textsuperscript{81} In the United States, deliberation increased the percentage of people holding a minority position about some issues. For example, deliberation led to a jump from 36% to 57% of people favoring policies making divorce “harder to get.”\textsuperscript{82} Before deliberation, 36% of people agreed that the “biggest problem facing the American family” is “economic pressure.” After deliberation, that number jumped to 51%.\textsuperscript{83} By contrast, the percentage believing that the biggest problem is the breakdown in family values fell from 58% to 48%.\textsuperscript{84} These changes are very different from what we observed in Colorado, and they do not show ideological amplification. The deliberative opinion poll uses several of the interventions described above.
A trained moderator oversaw Fishkin’s groups to ensure a level of openness and likely altered some of the dynamics that produce amplification.\textsuperscript{85} Fishkin also presented participants with a set of written materials that attempted \textsuperscript{78} See Wendy Wood et al., Minority Influence: A Meta-Analytic Review of Social Influence Processes, 115 PSYCHOL. BULL. 323 (1994) (exploring when minority has impact and when it does not). \textsuperscript{79} See James S. Fishkin, The Voice of the People: Public Opinion and Democracy 161–81 (1995) [hereinafter Fishkin, Voice of the People]. For valuable and up-to-date materials, see James Fishkin, Ctr. for Deliberative Democracy, Deliberative Polling®: Towards a Better-Informed Democracy, http://cdd.stanford.edu/polls/docs/summary (last visited Mar. 5, 2007) [hereinafter Fishkin, Deliberative Polling]. \textsuperscript{80} See Fishkin, Deliberative Polling, supra note 79. \textsuperscript{81} Id. at 178–79. \textsuperscript{82} Fishkin, Deliberative Polling, supra note 79; see also Fishkin, Voice of the People, supra note 79, at 22 (showing an increase, on a scale of 1 to 3, from 1.40 to 1.59 in commitment to spending on foreign aid; also showing a decrease, on a scale of 1 to 3, from 2.38 to 2.27 in commitment to spending on social security). \textsuperscript{83} Fishkin, Deliberative Polling, supra note 79. \textsuperscript{84} Id. \textsuperscript{85} Id. to be balanced and that contained detailed arguments supporting competing positions.\textsuperscript{86} In our study, people relied only on the beliefs, information, and values they brought with them to the room. Fishkin’s balanced presentation would likely influence people in a way that simple group discussion without external materials would not. Whatever the experimenter’s goals, the materials that are provided will undoubtedly affect the direction in which deliberation takes group members. Finally, Fishkin instructed his participants not to reach a group decision, and the absence of such a decision probably attenuated the influences discussed here. We have suggested that when individuals commit themselves to a group judgment, it is likely that their individual responses, even if anonymous, will be somewhat affected by that commitment. To disclose a private judgment that diverges from one’s public judgment is certainly possible, but it produces a degree of dissonance, which is often resolved in favor of the public statement.\textsuperscript{87} Group polarization has been found after mere exposure to the arguments of other group members, but it is typically smaller than after discussion and group judgment.\textsuperscript{88} It is tempting to explain Fishkin’s results by noting that his groups were diverse and did not consist of like-minded people. But the temptation should be resisted. Even if a group has a degree of internal diversity on some question, the pre-deliberation median is a good predictor of the post-deliberation median, at least if individual views are not entrenched.\textsuperscript{89} It would be most informative to test the effects of a variety of interventions into deliberative processes in order to see their various contributions to ideological amplification or dampening. It would also be informative to conduct deliberative opinion polls specifically testing the claim that deliberating groups will reach the \textit{correct} result on political questions that have answers that can be shown to be right.
We do not know whether a hypothesis of that kind might be vindicated. Undoubtedly, there is a relationship between the nature of the interventions and the likelihood that the group will arrive at the truth. There is no question that other Deliberation Days, offering distinctive safeguards and procedures, would have different consequences from those we found in Colorado. Our only suggestion here is that on political issues, the likely result for deliberating groups, unaccompanied by an external moderator or a set of independent arguments, is amplification of preexisting views, \begin{itemize} \item \textsuperscript{86} \textit{Id.} \item \textsuperscript{87} See Robert B. Cialdini, \textit{Influence: The Psychology of Persuasion} ch. 3 (1993). \item \textsuperscript{88} See Brown, \textit{supra} note 26, at 220 (noting mere exposure produces significant shifts). \item \textsuperscript{89} See \textit{id.} at 210–11; Schkade et al., \textit{supra} note 26, at 1140–41 (finding that the pre-deliberation median is a predictor of the shift, whether or not there was internal diversity before discussion began). \end{itemize} especially if group members are asked to reach a collective decision. D. Implications Does ideological amplification lead to accurate or inaccurate answers? Do deliberating groups err when they polarize? No general answer would make sense. A great deal will turn on the relationship between the correct answer and the group’s pre-deliberation tendencies. If the group is leaning toward the right answer, polarization might lead it directly to the truth. But there are no guarantees here. When individuals are leaning in a direction that is mistaken, the mistake will be amplified by group deliberation. Consider some results from domains in which mistakes and biases can be identified without taking a controversial stand on normative issues. With respect to questions with correct answers, deliberating groups tend to do about as well as or slightly better than their average member, but not as well as their best members.\textsuperscript{90} Further, deliberating groups do not reliably arrive at the correct answer.\textsuperscript{91} Group polarization occurs when jury members are biased as a result of pretrial publicity; the jury as a group becomes more biased than the individual jurors.\textsuperscript{92} When most people are prone to make conjunction errors (believing that A and B are more likely together than either A or B alone), group processes lead to more errors, not fewer.\textsuperscript{93} The propensity to make conjunction errors is amplified, rather than reduced, by deliberation, apparently as a direct result of the mechanisms discussed here. Hence it is possible to show that in many domains, deliberation results in an amplification of individual mistakes.\textsuperscript{94} When individuals show a high degree of bias, groups are likely to be more biased than their median or average members.\textsuperscript{95} It is true that deliberating groups do well on “eureka” problems—those for which the answer is obvious once announced.\textsuperscript{96} It has been found, for example, that when sending and receiving information is costless, groups do significantly better on math problems than do individuals, apparently because people are able to recognize a \textsuperscript{90} See Gigone & Hastie, \textit{supra} note 23. \textsuperscript{91} See id. at 161–62 (summarizing findings that groups do not perform as well as best members); Hastie, \textit{supra} note 23, at 133–49.
To the same effect, see also Garold Stasser \& Beth Dietz-Uhler, \textit{Collective Choice, Judgment, and Problem Solving}, in \textit{BLACKWELL HANDBOOK OF GROUP PSYCHOLOGY: GROUP PROCESSES}, \textit{supra} note 73, at 31, 49–50 (collecting findings). \textsuperscript{92} Robert J. MacCoun, \textit{Comparing Micro and Macro Rationality}, in \textit{JUDGMENTS, DECISIONS, AND PUBLIC POLICY} 116, 127–28 (Rajeev Gowda \& Jeffrey C. Fox eds., 2002). \textsuperscript{93} Norbert L. Kerr et al., \textit{Bias in Judgment: Comparing Individuals and Groups}, 103 \textit{PSYCHOL. REV.} 687, 692 (1996). \textsuperscript{94} William P. Bottom et al., \textit{Propagation of Individual Bias Through Group Judgment: Error in the Treatment of Asymmetrically Informative Signals}, 25 \textit{J. RISK \& UNCERTAINTY} 147, 152–54 (2002). \textsuperscript{95} See MacCoun, \textit{supra} note 92. \textsuperscript{96} See Cass R. Sunstein, \textit{INFOTOPIA: HOW MANY MINDS PRODUCE KNOWLEDGE} 60–61 (2006). correct answer as such.\textsuperscript{97} But many problems do not have this feature because the correct answer is not immediately recognizable, and hence group error is pervasive. More generally, a comprehensive study demonstrated that majority pressures can be powerful even for factual questions on which some people know the right answer.\textsuperscript{98} The study involved 1200 people, forming groups of six, five, and four members. Individuals were asked true-false questions involving art, poetry, public opinion, geography, economics, and politics. They were then asked to assemble into groups to discuss the questions and produce answers by consensus. The clearest result was that the views of the majority played a large role in determining the group’s answers. When a majority of individuals in the group gave the right answer, the group’s decision followed the majority in no less than 79% of the cases. The truth played a role too, but a lesser one. If a majority of individuals in the group gave the wrong answer, the group decision nonetheless moved toward the majority position in 56% of the cases. Hence, the truth did have an influence—79% is higher than 56%—and this is a definite point in favor of the potentially beneficial effects of deliberation. But the judgment of the majority, and not the truth, was the dominant influence. And because the majority was influential even when wrong, the average group decision was right only slightly more often than the average individual decision (66% versus 62%). There is a final question. Is our experiment representative of the effects of political deliberation in most domains? We specifically attempted to ensure that our deliberating groups would consist of like-minded people. And it is reasonable to think that much of the time, real-world deliberation looks a lot like our experiment. To be sure, deliberation sometimes occurs within mixed groups, showing far more diversity than those assembled in Boulder and Colorado Springs. As we have seen, mixed groups are also likely to amplify preexisting tendencies, but that pattern is not inevitable,\textsuperscript{99} and such groups have significant advantages, mostly because of the potential effect of minority positions.\textsuperscript{100} Yet it is plausible to suggest that some countries, including the United States, operate to a greater or lesser extent as a collection of special interest enclaves, in which people are especially likely to associate and deliberate with \textsuperscript{97} See Mathew D. McCubbins and Daniel B.
Rodriguez, \textit{When Does Deliberating Improve Decisionmaking?}, 15 J. Contemp. Legal Issues 9, 27–29 (2006), available at http://ssrn.com/abstract=900258. Note that when receiving information is costly, individuals did not do better with deliberation than on their own. \textit{See id.} at 28–32. \textsuperscript{98} Robert L. Thorndike, \textit{The Effect of Discussion Upon the Correctness of Group Decisions, When the Factor of Majority Influence Is Allowed For}, 9 J. Soc. Psychol. 343, 348–61 (1938) (exploring effects of both correctness and majority pressure on group judgments). \textsuperscript{99} \textit{See supra} text accompanying note 50. \textsuperscript{100} \textit{See} the discussion of minority influences in Cass R. Sunstein, \textit{Why Societies Need Dissent} 30–32 (2003). others who agree with them.\textsuperscript{101} To the extent that migration patterns are now producing more homogeneous subcultures, routine exposure to diverse opinions may become less likely for many people.\textsuperscript{102} Similar results might be produced and reinforced by the rise of highly specialized information sources, above all the internet, which makes it increasingly easy for people to avoid opinions that differ from theirs.\textsuperscript{103} Indeed, there is a well-documented tendency for people to seek information that confirms their existing beliefs and to avoid or devalue disconfirming information (“confirmation bias”).\textsuperscript{104} The ease of finding confirmatory evidence is likely to increase the balkanization of opinion. Consider in this regard an illuminating little experiment.\textsuperscript{105} Members of a nationally representative group of Americans were asked whether they would like to read news stories from one of four sources: Fox (known to be conservative), National Public Radio (known to be liberal), CNN (often thought to be liberal), and the British Broadcasting Corporation (whose politics are not widely known to Americans). The stories came in different news categories: American politics, the war in Iraq, racial issues in America, crime, travel, and sports. It turns out that for the first four categories, Republicans chose Fox by an overwhelming margin. By contrast, Democrats split their votes between National Public Radio (NPR) and CNN, while showing a general aversion to Fox. For travel and sports, the divide between Republicans and Democrats was much smaller. By contrast, independents showed no preference for any particular source. In a sense, the experiment showed that private choices tend to replicate our Colorado study, with people gravitating toward stories that shared their antecedent views. There was another finding, an equally striking one: the network label greatly affected people’s level of interest in the same news stories. For Republicans, the identical headline became far more interesting, and the story became far more attractive, if it carried the Fox label. In fact the Republican hit rate for the same news stories was three times higher when they were labeled “Fox News.” Interestingly, the hit rate also doubled when sports and travel stories were so labeled. Democrats showed a real aversion to stories labeled “Fox News,” and the CNN and NPR labels created a modest increase in their interest.
The overall conclusion is that Fox attracts substantial Republican support and that Democratic viewers and readers take pains to avoid Fox—while CNN and NPR have noticeable but weak brand loyalty among \textsuperscript{101} See Bishop, \textit{supra} note 5; see also Alan I. Abramowitz et al., \textit{Incumbency, Redistricting, and the Decline of Competition in U.S. House Elections}, 68 J. Pol. 75 (2006). \textsuperscript{102} See Bishop, \textit{supra} note 5. \textsuperscript{103} See Cass R. Sunstein, \textit{REPUBLIC.COM} 2.0 (forthcoming 2007) for discussion. The Internet also makes it very easy to encounter new and different positions. Ideological amplification might well be less likely if people use the Internet to find such positions. \textsuperscript{104} See Raymond S. Nickerson, \textit{Confirmation Bias: A Ubiquitous Phenomenon in Many Guises}, 2 Rev. Gen. Psychol. 175 (1998). \textsuperscript{105} See Iyengar & Morin, \textit{supra} note 7. Democrats. This is only one experiment, to be sure, but there is every reason to suspect that the result would generalize—that people with identifiable leanings are consulting sources, including websites, that match their predilections, and are avoiding sources that do not cater to those predilections. It is important to note that there is a distinction between deliberating groups that attempt to reach a shared conclusion (as in our study) and deliberating groups that simply talk (as in Fishkin’s studies). Amplification might well be heightened for the former groups, and many real-world groups talk without having to reach a shared conclusion. To this extent, such groups may not show the same degree of amplification that we find. Recall, however, that mere exposure to the views of other like-minded people can produce group polarization.\textsuperscript{106} For this reason, we anticipate that movements of the kind found in Colorado, even if somewhat smaller, would also be found without group decisions. Nothing said here denies that deliberation might be structured so as to diminish the likelihood of ideological amplification; we have seen that neutral arbiters, providing information and helping to manage discussion, might have a substantial effect. Various efforts to prime participants might also influence the effects of deliberation. If participants are reminded of the 9/11 attacks, or of events that cast a favorable or unfavorable light on certain positions or even officials, they are likely to be affected, perhaps in a way that will diminish the effects found here.\textsuperscript{107} But whatever the intervention and priming effects, the outcome of our experiment offers important cautionary notes about the consequences of deliberation on political judgments. Ideological amplification, and not necessarily reason or truth, may well be the result of political deliberation, at least if group members share antecedent commitments. There is a final point. If deliberation results in ideological amplification, it does not follow that deliberation has moved group members in the wrong direction. Suppose that after deliberation, group members become especially hostile to affirmative action or especially receptive to same-sex unions. Has deliberation helped or hurt? Any answer would have to turn on some judgment about the merits. With a factual question, we can readily test whether members have been led to error or truth. But with some questions, no such test is easily available.
If amplification occurs, perhaps groups are led, much of the time, in the right direction. Nonetheless, we should be suspicious of situations in which social interactions lead people to believe a more extreme version of what they thought before they began to talk. If group members were exposed to competing arguments, and to a range of perspectives, at least there would be greater reason \textsuperscript{106} Brown, \textit{supra} note 26, at 220. \textsuperscript{107} See Thomas Pyszczynski et al., \textit{In the Wake of 9/11: The Psychology of Terror} (2003). for confidence that ultimate conclusions were not an artifact of artificially limited “argument pools.” CONCLUSION As a result of deliberation with like-minded others, liberals became more liberal and conservatives became more conservative. On some of the largest issues of the time, discussions by like-minded group members fueled greater extremism and increased divisions between liberals and conservatives. At the same time, both liberal and conservative groups became more homogeneous; deliberation significantly reduced internal diversity. We have emphasized that our Deliberation Day was not the same as every imaginable deliberation day, and that many advocates of more deliberation argue in favor of distinctive safeguards and procedures that might ensure different results from those that we have described here. But there is every reason to believe that results of that kind occur not simply in experimental settings, but in many real-world domains in which citizens and officials engage in political discussions with one another—especially if they sort themselves into actual or virtual groups of like-minded people. Those who seek to foster broader deliberation, or to celebrate deliberative conceptions of democracy, would do well to keep these points in view.
The Village of Gowanda Board of Trustees special meeting was called to order by Mayor Heather McKeever at 7:05 p.m. at the Municipal Hall. The pledge of allegiance was recited. Present: Mayor Heather McKeever; Trustee Carol Sheibley; Trustee Pete Sisti; Trustee Barb Nephew; Trustee Paul Zimmermann. Village Employees: Village Clerk Kathy Mohawk, Public Works Superintendent Jason Opferbeck, Officer-in-Charge Steve Raiport, Fire Chief Mark Hebner. Media Present: Phil Palen, Cable Channel 22; Samantha McDonnell, Observer. Public Present: Joe Gorenflo, Jim Anderson, Charles Beaver, Mr. and Mrs. Frank Markewitz, John Walgus, Jack Broyles, Cattaraugus County Legislator Paula Stockman, Anne and Earl Clabeaux, Monica Nephew, Joe Sweda, Aaron Markham, Cattaraugus County Legislator Dick Klancer, Tom Povhe, Charlie Maine, Lew and Jean Gabel, Carol Regan, Terry and Pam Howard, Joe Vogtli, Don Offhaus, Elton Hansen, Dorothy and Lou Selan, Rollin and Karen Besse, John Girome, Andy Burr, Eric Zielinski, David Schwedt, Charles and Vicki Toy, Bonnie and Bob Gabel, Janet Vogtli, Susan Bettker, Randy Latona, Cliff Lincoln, Kathy Schwedt, Terri DeHos, Sharon and Dale Hartlieb, Dave Latona, Charity Sweda, Deb Bennett, Kim Syracuse, Wanda Koch, Mr. and Mrs. Glende, Thomas Kielar. Motion 19-15. Motion by Trustee Zimmermann, seconded by Trustee Sisti to open the public hearing for the 2015-2016 budget. Motion carried 5-0. Mayor McKeever advised the hearing was for the purpose of public input for the upcoming budget. The previous discussion during the work session regarding the garbage options was also part of the budget process. Elton Hansen said he is in favor of keeping the garbage service as it is, but doubling the sticker price. He also stated the Village needs to enforce the regulations against violators. John Girome feels the Village should start fining the violators of the garbage regulations. He mentioned 45 Palmer Street and said he doesn’t feel the majority of the residents should have to pay for the violators. There was discussion of a tanker cab dumping at the foot of Broadway Road. Public Works Superintendent Opferbeck advised that Cattaraugus County is dumping leachate from Five Points. The County has issues with the other sewage treatment plants in the County. They will dump about 10 loads per day. It is a state of emergency for the County, and the Village has a shared services agreement with the County. The Village will receive payment for the use of the sewer. Andy Burr stated he is opposed to putting trash charges on the water bill. He reported on the Village of South Dayton refuse program. They self-perform the trash pickup. The Village of South Dayton receives about $15,500 in ticket revenues, the cost of the program is $12,000, and the tipping fees are $4,000. He suggested the Village purchase a 1-ton pickup truck with a trailer and maybe increase the sticker prices. Mr. Burr asked the Village to investigate the cost of self-performance. Mr. Burr also stated that the 2009 flood damage is now showing in the budget for repayment. Janet Vogtli is a property owner in the Village and she pays to have the garbage taken away. She said the Village could consider closing the office one day a week to save money. Ms. Vogtli asked about the full-time police officer who gets paid overtime. Mayor McKeever indicated it has been a good benefit for the Village in terms of safety. Trustee Sheibley asked why the Village is looking for an offsetting issue for garbage but not for other departments.
She is not in favor of a garbage tax. Janet Vogtli asked how the Village is going to assess all the apartments in the Village. How about the not-for-profits? Trustee Sheibley also asked whether the County would collect unpaid garbage charges with the unpaid water bills. The County does not collect for lawn mowing services. Several residents felt the violators should be fined, but Mayor McKeever advised that the towns get the money for any tickets that are issued. Charity Sweda asked why the police budget went from $217,000 to $245,000 for payroll. Mayor McKeever advised the Village is looking at shared services between the Village and the County to share another full-time officer 20 hours per week. That one position will be paid $17,000 by the Village. Andy Burr stated that police is actually under budget for personal services. Mayor McKeever indicated the Village may also do away with the D-line shift, which is only once per week. Officer-in-Charge Raiport indicated some of the extra cost for personal services involves transports, extra training, etc. Ms. Sweda asked about the Village getting its own Court. Public Works Superintendent Opferbeck advised there is an initial set-up cost, and the staff would be paid between $50,000 and $80,000 per year. Officer-in-Charge Raiport said that state law takes precedence over the Village ordinances. Wanda Koch said the Village needs a full-time person in the police department to know the Village and the people. The Village will be working with the Southern Tier Task Force to have an undercover officer in the Village. A resident asked if the Village employees have to be Village residents. Mayor McKeever said the Village tries to give preference to Village residents. Joe Gorenflo, Frederick Street, stated he is against adding a garbage tax to the water bill. This is excessive, especially for the older residents who only got a 1.5% raise on their Social Security. Kathy Schwedt asked who is going to pick up her garbage. She indicated she doesn’t want the Village to cut the police protection due to the drug problems in the Village. Dave Latona asked why the garbage company picks up garbage bags without whole stickers. Trustee Sisti advised the garbage is more likely to get picked up in the summer months for odor control. Trustee Sisti agreed this is an ongoing issue to look at more thoroughly going forward. He indicated the Village Board will continue to look at the issue and they have heard some strong opinions from the public. Deb Bennett is opposed to a garbage increase on the water bill. People are retired and live on strict budgets. Continuous increases will drive residents, and eventually businesses, out of the community. Anne Clabeaux stated that the minimum price for water went up. If the Village adds a garbage tax to someone who barely has garbage, the elderly residents will be paying for other people. She stated that is not fair. Phil Palen, historian and videographer, gets paid $900. He indicated he would take no compensation for either job. He would like to see a picnic table placed on the Union Street Village property at the entrance to Creekside Park. He indicated that people eat lunch there. He offered to add the $900 to the tree budget. Monica Nephew indicated she is willing to pay for what she uses, but not for what everyone else uses. Mayor McKeever advised that the $163,000 price from Casella does not include recycling. To add recycling would be over $200,000.
Public Works Superintendent Opferbeck spoke about the grants the Village has received: $155,000 for Creekside Park, the Safe Routes to School grant, the BOA grant, and an asset management grant.

Joe Vogtli stated that the Town of Collins bills have a breakdown of how much of the tax money goes to each item. Village Clerk Mohawk indicated the Village bills used to have the breakdown on the back; she will inquire why this was discontinued and ask if it can be reinstated.

Public Works Superintendent Opferbeck advised the Village cut close to $200,000 from last year's budget. The big items, i.e., retirement, utility costs, and BAN charges, have all increased.

Andy Burr stated there is $1 million in bonding, $85,000 for bonding expenses, the uncertainty of reserve funds, and a $50,000 contingency for unexpected expenses all included in the 2015-2016 budget.

Village Clerk Mohawk read a statement presented by Dale DeCarlo, 117 Broadway Road: “I would like this letter into the minutes as a matter of public record. Starting with the water increase, it was strongly suggested to the Board multiple times over the past several years for a periodic small increase in rates. This was shot down by the majority. Next is the garbage issue. Unfortunately if costs go up, you pass the cost on. I suggested an increase in stickers as a way to avoid the possible changes. This was again shot down. One reason given was that the elderly couldn’t afford a dollar increase. How is someone who doesn’t drive going to go to Dayton or Collins dump? I object to the comment in today’s paper “We’re in this situation because of misuse of the system.” We’re in this situation because no one wants to increase stickers. Had the suggestion to increase stickers been implemented, we may not be in this situation. The Board needs to plan more for the future than just now. Back to the garbage stickers, Casella would give a list of “violators” to Kathy and not pick up the bags. If needed, the “violators” would be contacted, and the issue resolved.”

Motion 20-15. Motion by Trustee Zimmermann, seconded by Trustee Sisti to close the public hearing at 8:40 p.m. Motion carried 5-0.

Motion 21-15. Motion by Trustee Zimmermann, seconded by Trustee Sisti to open the public hearing on Local Law No. 3 of the year 2015, authorizing a property tax levy in excess of the limit established in General Municipal Law §3-c, at 8:40 p.m. Motion carried 5-0.

Janet Vogtli stated that any efficiency plan the Village presents will not be implemented if the Board adopts an override of the tax levy. She stated it is unfair that the taxpayers have to pay more money because the former treasurer wasn't doing her job. Jack Broyles said if the Village adopts the override, there is no limit to how high the tax rate can go. Legislator Paula Stockman advised that 1.68% is the tax limit for the Village.

Eric Zielinski asked about the proposed budget at 5.22%: does that mean the Board will already go over that limit? Treasurer Lauer stated that almost $250,000 is owed back to the general fund for the 2009 flood; if the taxes go up that high, it is catching up and is only a one-time charge. If the levy is at or under the limit and the Village submits an efficiency plan by June 1, 2015, all residents will get a rebate. If the Village doesn't do both of these things, the people will not get their rebates.

Motion 22-15. Motion by Trustee Sheibley, seconded by Trustee Nephew to close the public hearing at 8:55 p.m. Motion carried 5-0.

Motion 23-15.
Motion by Trustee Zimmermann, seconded by Trustee Sisti to adopt Local Law No. 3 of 2015 authorizing a property tax levy in excess of the limit established in General Municipal Law §3-c. Motion carried 5-0.

Mayor McKeever thanked Legislators Stockman and Klancer for their efforts in getting the old China King property transferred to the Village. She advised the Village received a $155,000 no-match grant for the project.

Public Works Superintendent Opferbeck indicated the Village is seeking approval to use the old dump site on Palmer Street; the Village would fill in the old pump house reservoir with the debris removed from Thatcher Brook. Mr. Opferbeck and Trustee Sheibley met with the Town of Persia and Cattaraugus County regarding this issue. He reported that the property is for sale by Cattaraugus County for $1,400.

Motion 24-15. Motion by Trustee Nephew, seconded by Trustee Zimmermann to purchase the property, tax map No. 17.021-01-6, from Cattaraugus County for the cost of $1,400, to be used for cleanup of the Thatcher Brook property. Mayor McKeever thanked Trustee Sheibley, Councilman Walgus, and Legislators Stockman and Klancer for this effort.

Mayor McKeever advised that some students from the University of Buffalo have developed a flood mitigation device, a brick wall, and have an interest in using the Village as a model to try the device.

Mayor McKeever scheduled two more budget workshops: April 21 at 6:00 p.m. and April 28 at 6:00 p.m. She indicated that if a new RFP for garbage services is necessary, the Village Board needs to know within a week. Trustee Sheibley indicated that the Board must reject or accept the bid from Casella now that the price has been divulged; the Village cannot talk to anyone else.

Motion 25-15. Motion by Trustee Zimmermann, seconded by Trustee Sisti to adjourn the special Village Board meeting at 9:20 p.m. Motion carried 5-0.

The next Village of Gowanda board meeting is May 12, 2015 at 7:00 p.m.

Respectfully submitted,
Kathleen V. Mohawk
Village Clerk
CASE STUDY

Bhutan: Blending Happiness and Hazelnuts with Finance

June 2016

ABOUT IFC
International Finance Corporation, a member of the World Bank Group, is the largest global development institution focused exclusively on leveraging the power of the private sector to tackle the world's most pressing development challenges. Working with private enterprises in more than 100 countries, IFC uses its capital, expertise and influence to help eliminate extreme poverty and promote shared prosperity.

ABOUT GAFSP
The Global Agriculture and Food Security Program (GAFSP) invests in agriculture to reduce poverty and improve food and nutrition security in low-income countries. GAFSP targets the entire value chain in agriculture and related sectors through its complementary Public and Private Sector Windows, recognizing that investments from both public and private sectors are critical to a well-developed, resilient food system, improved agricultural productivity, increased incomes, and the highest development impact. Our donors – Australia, Bill and Melinda Gates Foundation, Canada, Republic of Korea, Ireland, Japan, the Netherlands, Spain, United Kingdom, and the United States – work in partnership with recipients, civil society organizations, and other stakeholders to improve the lives of smallholder farmers and their families. Millions of poor and vulnerable people around the world will directly benefit from GAFSP's continued commitment and support.

ABOUT THE CASE STUDY
Expanding access to markets, financing and storage, inputs and technology for smallholder farmers is a central element of eliminating extreme poverty and promoting shared prosperity. This case study highlights the development impact of an unusual IFC- and GAFSP-led investment in a semi-greenfield company in the agribusiness sector in Bhutan.

WRITTEN BY
This case study was written by Caitriona Palmer with input from Philipp Farenholtz, Laura Mecagni, Elizabeth Price and Niraj Shah. Special thanks to Irina Sarchenko for her creative design.

FUNDING
Funding for this publication was provided by GAFSP.

DISCLAIMER
The findings, interpretations, views and conclusions expressed herein are those of the author and do not necessarily reflect the views of the Executive Directors of the International Finance Corporation (IFC) or of the World Bank or the governments they represent. While IFC believes that the information provided is accurate, the information is provided on a strictly "as-is" basis, without assurance or representation of any kind. IFC may not require all or any of the described practices in its own investments, and in its sole discretion may not agree to finance or assist companies or projects that adhere to those practices. Any such practices or proposed practices would be evaluated by IFC on a case-by-case basis with due regard for the particular circumstances of the project.

COVER PHOTO
© Mountain Hazelnuts

RIGHTS AND PERMISSIONS
© International Finance Corporation 2016. All rights reserved. The material in this work is copyrighted. Copying and/or transmitting portions or all of this work without permission may be a violation of applicable law.

Bhutan: Blending Happiness and Hazelnuts with Finance

In 2015, IFC, ADB and GAFSP together invested US$12 million in Mountain Hazelnuts, a project to promote hazelnut production by smallholder farmers across Bhutan, the land of Gross National Happiness.
This unusual investment in a semi-greenfield company with significant execution risks was IFC's first ever in the agribusiness sector in Bhutan and was made possible with concessional finance from the Private Sector Window of GAFSP. The project has the potential to improve the lives of 15 percent of Bhutan's entire population: a mountainous return on a US$6 million GAFSP investment.

BACKGROUND

On the mountains of Bhutan, where happiness is akin to holiness, a quiet agricultural revolution is taking place. Dotted along the vertiginous Himalayan slopes are millions of young hazelnut trees, the vision of an entrepreneurial couple, Daniel Spitzer and Teresa Law, who dared to bring commercial hazelnut production to Bhutan.

In 2010, Spitzer established Mountain Hazelnuts, a smallholder-farmer-based company designed to take advantage of the growing demand for hazelnuts from European confectionery producers and snack producers in Asia. Spitzer initially planned to build his hazelnut business in western China, where he had a proven track record of developing large-scale projects, but the devastating Sichuan earthquake of May 2008 prompted many farmers there to abandon their land and move to the cities to work on reconstruction. At about the same time, Bhutan announced it would consider foreign investments. Spitzer did some research and discovered that the tiny kingdom, cradled by the Himalayas and wedged between India and China, had climate and soil characteristics perfect for growing commercial crops of hazelnuts. Bhutanese farmers, despite a high rate of urban migration, were well used to tending the steep slopes.

Digging out his contact list from over thirty years of working in Asia, Spitzer and his team met with hundreds of potential stakeholders. They built credibility with government officials and people in the villages, working with experts on agricultural studies, land surveys and training programs for the farmers. Public officials helped spread the word amongst farmers. Months later, Mountain Hazelnuts was born.

For many outside observers, the humble hazelnut may not seem like a big market opportunity, but it is the world's second most valuable tree-nut crop after almonds, thanks to the European confectionery market and the expanding health consciousness of western consumers anxious to capitalize on the nut's antioxidant qualities. Currently Turkey and Italy grow most of the world's hazelnut crop, used most notably in the popular 'Nutella' spread. However, in recent years Turkey's market has become extremely volatile, and a new market has opened up in Bhutan's backyard, where the growing middle classes in China are rapidly developing a taste for exotic snacks. With proximity to these two neighbors, Mountain Hazelnuts is well positioned to supply this burgeoning export market.

THE MODEL

Mountain Hazelnuts' business model is deceptively simple but not without considerable risks. The company grows hazelnut saplings in its own nursery in Bhutan and distributes them to farmers to plant on fallow land that has no commercial use.
An agreement brokered through the Bhutanese government allows farmers without land to participate in the project by leasing land from the Government. Mountain Hazelnuts then provides agricultural inputs and training to ensure that farmers know how best to care for their young shrubs. Once the trees flourish and bear nuts, the farmers sell the crop back to Mountain Hazelnuts at a guaranteed minimum price. Each full-grown tree can yield 4 to 6 kilos of nuts for sale to Mountain Hazelnuts.

With the typical rural household in Bhutan earning a cash income of less than $500 a year, these incremental earnings from the sale of the hazelnuts will help farmers dramatically boost their incomes. By improving the lives of these farmers, Mountain Hazelnuts also hopes to stem the crippling flow of younger Bhutanese villagers migrating to urban areas. By planting on thousands of acres of overgrazed and deforested foothills, the company also hopes to halt hillside erosion.

The value chain works in eight steps:

1. **SOURCE MATERIAL** - MH sources hazelnut tissue culture and seed nuts to produce hazelnut tree saplings. As a third source, multiplication orchards were set up to produce tree seedlings from suckers of existing trees.
2. **NURSERY** - MH operates two state-of-the-art nurseries in Central Bhutan where the tissue culture and germinated seed nuts are stabilized and developed into hazelnut tree saplings for distribution to farmers.
3. **FARMER REGISTRATION/OUTREACH** - MH's outreach and advocacy division manages the farmer registration process, which involves consultative meetings, advocacy workshops and land registration meetings with farmers and village administration.
4. **TREE DISTRIBUTION & PLANTING** - MH distributes tree saplings free of charge to farmers across Central and Eastern Bhutan; trees are planted on farmers' degraded/fallow land under MH supervision.
5. **TREE GROWING/MAINTENANCE** - Farmers are responsible for growing and properly maintaining the trees on their own land; some inputs are provided.
6. **MONITORING & FARMER TRAINING** - Almost 200 field monitors train the farmers and monitor orchard performance. Detailed performance data and issues are reported back to management via MH's mobile reporting systems.
7. **COLLECTION/PROCESSING** - MH will collect and purchase the nuts at a pre-agreed price and carry out initial processing at its own processing facility (under construction).
8. **SELLING** - Nuts will be shipped via road, rail and sea (Kolkata, Hong Kong) and sold to international nut traders. Initial target markets are Asian snack markets, with European high-end confectionery producers to follow.

THE PROJECT AND PERCEIVED RISKS

In 2011, with Mountain Hazelnuts preparing to deliver the first truckloads of hazelnut seedlings to its first batch of participating farmers, Daniel Spitzer engaged with IFC about a possible investment in the company. Spitzer was well known to IFC from a previously successful investment in one of his companies in China in the late 1990s. However, a detailed appraisal and investment review meeting that year did not materialize into an investment due to disagreements on potential valuation and terms. At the time, Mountain Hazelnuts had fewer than 3,000 farmers registered for the program and had just begun nursery operations. However, IFC maintained contact with Spitzer, and in 2014, with Mountain Hazelnuts well positioned to serve the Chinese market, a decision was taken to re-engage.
Despite the company's improved capacity, a proposed investment in Mountain Hazelnuts gave IFC cause for concern. The investment would be the institution's first in a unique shared-prosperity model: providing money to a hazelnut production company that technically owned neither the trees nor the land that the project's success depended on. How could Mountain Hazelnuts manage and motivate 15,000 untrained farmers to adopt good agricultural practices and properly grow hazelnut trees across a mountainous country with limited infrastructure? And even if they did, what if a virulent pest or a sudden flood or earthquake wiped out the orchards?

There was apprehension too about Mountain Hazelnuts' ambitious timeline. Could the company meet its aggressive target of planting 10 million hazelnut trees and establish a logistics and international marketing infrastructure before seeing its first meaningful cash inflows? And what of the volatile global hazelnut market, which in 2014 alone saw prices fluctuate by more than 100%? Could the bond of trust, so essential between participating farmers and Mountain Hazelnuts, survive the potential of side selling? What if buyers from nearby India or China offered a better price?

Mountain Hazelnuts was far better positioned in 2014 than it had been in 2011. The company had moved closer to commercial production by planting in excess of 2.5 million hazelnut trees, and was poised to generate the first marketable yields. With hardy hazelnut trees now taking root on Bhutan's rocky mountainsides, the suitability of Bhutan's natural conditions for hazelnut cultivation had been confirmed. IFC technical specialists noted that Mountain Hazelnuts' nursery operation was fully developed, with increased nut germination rates. Most importantly, the company had won farmer buy-in and could boast over 6,000 farmers on its books and an additional 6,000 hectares of land registered for planting.

The unusual partnership between a Bhutanese start-up and a Buddhist nun

As a child, Ani Kinzang helped her family tend cattle in the mountainous village of Mukazor. Contemplative and drawn to the spiritual, Ani would seek out her uncle during her free time, listening to his stories about the Buddha and memorizing Buddhist prayers. At 14, determined to become a Buddhist nun, Ani ran away from home. In a remote nunnery in Sandencholing, she found a community of like-minded women. But life in the nunnery was extremely harsh, with very little food and few basic comforts. That's when Ani decided to dedicate her religious practice and earnings to fund a retreat for other nuns, a place where Ani and her colleagues could devote time, in relative comfort, to silent meditation.

But how could a nun like Ani raise enough income to build a retreat? Returning to her family land and the life she once knew as a farmer, Ani decided to grow commercial trees: bamboo, walnut, pear and sandalwood. But animals gnawed at the roots and insects devoured sap from the plant tissues. Ani's earnings slowed further when she struggled, without logistical support, to get her paltry produce to market. That's when Ani's brother-in-law told her about the hazelnut tree, a "tree that will grow where nothing else will." Investigating further, Ani learned about Mountain Hazelnuts, a company that would not only provide Ani with hazelnut saplings, inputs and training to plant her own orchard, but would also return to purchase her crop at a guaranteed minimum price.
“I’d never seen or heard about the tree before,” Ani said. “But to hear that I wouldn’t have the burden of bringing everything to market was a huge relief. I wanted to try.”

That was in 2013. Now, Ani's burgeoning hazelnut orchard has taken root, with more than 80% of the original plants alive and growing. While Ani waits for the trees to bear fruit, she enjoys frequent visits from Mountain Hazelnuts staff who advise her on how best to care for her trees. In the meantime, with a brick press borrowed from her brother, Ani makes by hand the mud bricks that will one day enclose the retreat.

“This life is precious,” said Ani. “We cannot waste it. I only hope that by planting trees such as hazelnuts, I can help others move closer to enlightenment.”

THE BUSINESS CASE FOR GAFSP

Although there was interest in moving forward with Mountain Hazelnuts, operational risks were still too high to offer long-term capital to this relatively early-stage, pre-revenue company. That's when an approach was made to the Private Sector Window of the Global Agriculture and Food Security Program. Through blended finance solutions involving concessional funding, the Private Sector Window specializes in supporting early-stage but potentially impactful agribusiness projects targeted at improving the livelihoods of smallholder farmers. However, instead of a grant-based approach, the goal is returnable capital.

Following discussions, it was agreed that IFC and ADB would each invest US$3 million of equity, while Daniel Spitzer and existing shareholders would convert US$3 million of existing bridge loans into equity. The Private Sector Window of GAFSP agreed to invest quasi-equity of US$6 million, matching the total amount of IFC's and ADB's investments. The use of GAFSP blended finance in the form of cumulative redeemable preferred shares was essential to mobilize IFC's and ADB's funding and to close the remaining funding gap for the project's completion. In short, it made this deal a reality.

In the absence of alternative funding offers, the structure of this investment did not distort the market and did not price any competitor out of the market. With its cash-flow-friendly profile, it is the appropriate instrument for Mountain Hazelnuts' capital structure, as the company will not generate substantial cash flows for some time and thus cannot service regular debt. This concessional quasi-equity instrument from GAFSP, together with the investments from IFC and ADB, will help Mountain Hazelnuts reach its break-even point and ramp up profitability and cash generation. Once that occurs, the company will be in a position to accept commercial funding, especially trade finance to support its operations.
US$12 MILLION EQUITY INVESTMENT BY IFC, ADB AND GAFSP

PRODUCTS
- US$3 million IFC equity investment in preferred shares
- US$3 million ADB equity investment in preferred shares
- US$6 million GAFSP quasi-equity investment in Cumulative Redeemable Preferred Shares (CRPS)

FEATURES
IFC and ADB:
- Common and specific shareholder rights (voting, consent, policy, information, exit, preemptive, anti-dilution, nomination of Board member)
- Policy put, liquidity redemption

GAFSP:
- Cumulative dividend at the base IRR, paid at exit/redemption
- Specific consent rights, information rights and policy rights
- Senior in repayment of dividend and capital to all shareholders, subordinated only to IFC and ADB in line with GAFSP's mandate
- Redemption in line with IFC/ADB exit

GAFSP CONCESSIONALITY
- Front-ended disbursement disproportionate to IFC & ADB
- Waterfall distribution of proceeds to provide capital protection to IFC & ADB

PHASED DISBURSEMENT FOR RISK MITIGATION
Subsequent disbursements are subject to reaching the following operational and financial milestones:

| Milestone | 2016 | 2017 | 2018 |
|---|---|---|---|
| Cumulative planted trees since Jan. 2015 | 2 million | 4.5 million | 6 million |
| Revenue of previous year | -- | $0.1 million | $2 million |

RATIONALE FOR THE STRUCTURE
- Given the high risk profile inherent in this semi-greenfield project, neither IFC nor ADB would have invested without substantial support from GAFSP.
- The company will not be in a position to service any interest or principal repayment of debt for a number of years, ruling out a straight loan from GAFSP. The CRPS, technically an equity instrument ranking senior to all other classes of shares, has characteristics similar to a loan, with a fixed coupon and redemption timeline and no dilution of other shareholders. However, the biggest advantage it offers the company and its shareholders is that the base IRR takes the form of a cumulative dividend, i.e., it does not have to be paid by the company to GAFSP every year.
- The phased, milestone-based disbursement schedule and waterfall arrangement provide critical risk mitigation and capital protection for IFC and ADB only, thus restricting GAFSP's concessionality in line with its mandate.
- The waterfall mechanism allows for potentially higher returns for GAFSP, but only out of the proceeds received by IFC and ADB. In other words, any downside or upside arrangement stays between IFC, ADB and GAFSP and does not benefit or penalize other shareholders.
- Mirroring the CRPS redemption timeline to the IFC and ADB equity exit timeline leads to a significant alignment of interests between IFC, ADB and GAFSP despite their not investing via the same instrument.

WATERFALL ARRANGEMENT FOR DISTRIBUTION OF PROCEEDS
Proceeds are distributed in six steps (a code sketch of this logic follows below):
1. Pooling of all proceeds received by IFC, ADB and GAFSP
2. First US$6 million paid to IFC & ADB (principal recovery)
3. Second US$6 million paid to GAFSP (principal recovery)
4. Next amounts proportionately paid to IFC, ADB and GAFSP until GAFSP reaches its base IRR
5. Next amounts paid out equally to IFC and ADB only, until both reach a commercial IRR
6. Next amounts proportionately paid to IFC, ADB and GAFSP (representing additional upside for GAFSP beyond the base IRR)
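To make the six-step arrangement concrete, the sketch below models the waterfall logic in Python. It is an illustration only, not the deal's legal terms: the base-IRR and commercial-IRR hurdles are represented by hypothetical cumulative dollar targets (`gafsp_base_target`, `ifc_adb_commercial_target`), and the pro rata split in steps 4 and 6 is assumed to be 50/50 between the US$6 million IFC/ADB tranche and the US$6 million GAFSP tranche.

```python
def waterfall(proceeds, gafsp_base_target, ifc_adb_commercial_target):
    """Distribute pooled exit proceeds (US$ millions) in six steps.

    Hypothetical hurdle amounts stand in for the undisclosed base and
    commercial IRRs; the 50/50 pro rata split mirrors the equal US$6m
    principal amounts. An illustrative sketch, not the deal terms.
    """
    paid = {"ifc_adb": 0.0, "gafsp": 0.0}   # step 1: pool all proceeds
    remaining = proceeds

    def pay(party, amount):
        nonlocal remaining
        amount = min(amount, remaining)     # never pay out more than is left
        paid[party] += amount
        remaining -= amount

    pay("ifc_adb", 6.0)                     # step 2: IFC & ADB principal
    pay("gafsp", 6.0)                       # step 3: GAFSP principal

    # Step 4: 50/50 pro rata until GAFSP reaches its base-IRR amount.
    tranche = min(remaining, 2 * max(gafsp_base_target - paid["gafsp"], 0.0))
    pay("ifc_adb", tranche / 2)
    pay("gafsp", tranche / 2)

    # Step 5: IFC & ADB only, up to their commercial-IRR amount.
    pay("ifc_adb", max(ifc_adb_commercial_target - paid["ifc_adb"], 0.0))

    # Step 6: any surplus is shared pro rata again (GAFSP upside).
    pay("gafsp", remaining / 2)
    pay("ifc_adb", remaining)
    return paid

# Example: US$20m of proceeds against hypothetical hurdles of US$9m
# (GAFSP base) and US$10m (IFC & ADB commercial).
print(waterfall(20.0, 9.0, 10.0))   # {'ifc_adb': 10.5, 'gafsp': 9.5}
```

Note how steps 2 and 5 protect IFC and ADB while step 6 lets GAFSP share in the upside: this is the capital-protection versus concessionality trade-off described in the rationale above.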
DEVELOPMENT IMPACT

The overall projected financial returns on GAFSP Private Sector Window projects are in the low single digits: clearly low investment returns if that were the instrument's only measure of success. However, GAFSP and IFC recognize the significance of their projects in terms of development impact, and in the case of Mountain Hazelnuts there is a multitude of development-rich results to mine: alleviating poverty among farmers, linking them to markets, creating jobs, restoring Bhutan's eroded landscape, improving the environment, and ensuring a financial gain for investors.

FARMER REACH AND EXPECTED INCOME
Mountain Hazelnuts is expected to eventually involve 15,000 farmer households, mostly located in Bhutan's poorer eastern regions. Farmers will grow hazelnuts to generate income on degraded, unused land, which would otherwise be left barren. Under the agreement between Mountain Hazelnuts and participating farmers, farmers cannot replace existing crops with hazelnuts, making the income from hazelnut cultivation entirely incremental. The additional earnings from hazelnut sales are projected to eventually double the household incomes of a large portion of participating farmers. Including all farmer household dependents, this translates into a project impact on approximately 15 percent of Bhutan's population.

LINKING FARMERS TO MARKETS
Mountain Hazelnuts will support the development of an organized, structured marketing system for hazelnuts produced by smallholder farmers. The farmers involved will get market access and be integrated into an international supply chain.

JOB CREATION, WOMEN'S PARTICIPATION AND SKILLS DEVELOPMENT
Over time, 400 additional jobs will be created at Mountain Hazelnuts, and the company plans to expand existing linkages with approximately 1,200 entrepreneurs offering support services (e.g., trucking, construction). Female employment is expected to triple, growing from 29 percent to 50 percent of the workforce by 2020. Despite its semi-greenfield state, Mountain Hazelnuts is a very professionally run operation, with increasingly formal policies and procedures and solid governance in place, which is uncommon in Bhutan. This offers employees a unique opportunity to gain experience working for a business that has implemented global best practices. In addition, several staff members have been given stipends to enroll in university programs abroad to upgrade their skills, an arrangement that is likely to be repeated. The company aims to develop a management team with the skills to manage a medium-size business in Bhutan over the long run, without an overt dependence on expatriate staff.

GREENHOUSE GAS (GHG) MITIGATION AND FOREST PRESERVATION
Up to 1.5 million metric tons of carbon dioxide (CO2) will be sequestered over the productive lifetime of the targeted 10.8 million hazelnut trees. Annual pruning of the trees will provide a sustainable source of fuel wood, instead of logging natural forests (equivalent to approximately 21,000 mature pine trees each year). In addition, the hazelnut trees will be planted on degraded land that was either deforested and left vulnerable, or subjected to 'slash and burn' or 'shifting cultivation'. The hazelnut trees will be planted along the contour, like retaining walls, which will stabilize the ground and reduce erosion by capturing the soil with their spreading roots.

REDUCED URBAN MIGRATION
Mountain Hazelnuts works predominantly with farmers in the Eastern and Central regions of Bhutan. The attractive returns for farmers from the sale of their hazelnuts are expected to reduce or slow urban migration towards the capital Thimphu in the West, where employment opportunities are limited.
CONCLUSION

Through Mountain Hazelnuts' creativity, entrepreneurial spirit, and commitment to development, the lives of thousands of Bhutanese farmers and their families will soon be improved with the first expected harvest of hazelnuts in the autumn of 2016. The children of these farmers will have better employment prospects as they grow up and will be better positioned to remain with their families in their Himalayan mountain communities, rather than migrate to urban slums. Mountain Hazelnuts, despite the potential risks and pitfalls ahead, is playing a catalytic role in enabling these vulnerable mountain communities to thrive by creating long-term sustainable income opportunities and numerous other positive impacts. This investment could not have taken place without the concessional finance support of the Private Sector Window of GAFSP.

As world leaders come together to help meet the Sustainable Development Goals to end poverty and achieve food security by 2030, blended finance is now recognized as a viable model for mobilizing capital to meet these ambitious development challenges. As investments like Mountain Hazelnuts show, GAFSP projects are difficult and risky, but they offer a way to achieve real impact and reach small farmers in some of the world's most challenging areas.

"Mountain Hazelnuts is a risky investment," Daniel Spitzer said. "It's a very long term venture. Trees take time to grow, they don't produce hazelnuts immediately. The conventional financial mechanisms and financial institutions didn't have the patience to provide capital to us on terms that made sense. GAFSP takes an interesting approach to the development of Mountain Hazelnuts. It thinks about the risks involved. It thinks about the actual needs of the project and it really plays a bridging role. We are delighted to have GAFSP involved."

About GAFSP
The Global Agriculture and Food Security Program (GAFSP) is a global effort that pools donor resources to fund programs focused on increasing agricultural productivity as a way to reduce poverty and increase food and nutrition security. GAFSP targets countries with the highest rates of poverty and hunger. The public sector window helps governments with national agriculture and food security plans. The private sector window, managed by IFC and supported by the governments of Australia, Canada, Japan, the Netherlands, the United Kingdom and the United States, provides long- and short-term loans, credit guarantees, and equity to private sector companies to improve productivity growth, deepen farmers' links to markets, and increase capacity and technical skills. GAFSP is also committed to helping meet the United Nations Sustainable Development Goals (SDGs) to end poverty and achieve food security in every corner of the globe by 2030. GAFSP focuses exclusively on the regions and sectors where significant progress will be required to meet several of the SDGs, including poverty reduction (SDG-1), hunger and food security (SDG-2), gender equality (SDG-5), and climate change (SDG-13).

About IFC
IFC, a member of the World Bank Group, is the largest global development institution focused on the private sector in developing countries. Established in 1956, IFC is owned by 184 member countries, a group that collectively determines its policies.
With a global presence in 100 countries, a network consisting of hundreds of financial institutions, and more than 2,000 private sector clients, IFC is uniquely positioned to create opportunity where it's needed most. IFC uses its capital, expertise, and influence to help end extreme poverty and boost shared prosperity.

STAY CONNECTED
WEB: www.ifc.org/GAFSP
LINKEDIN: www.linkedin.com/company/ifc-agribusiness
TWITTER: #GAFSP

For more information about GAFSP's Private Sector Window please contact:
Laura Mecagni, firstname.lastname@example.org
Bradford Roberts, email@example.com
Lina Tolvaisaite, firstname.lastname@example.org
Philipp Farenholtz, email@example.com

2121 Pennsylvania Ave, NW
Washington, DC 20433
Tel. 1-202-473-1000
COLLECTIVE EXPERT APPRAISAL: SUMMARY AND CONCLUSIONS

Regarding the "expert appraisal for recommending occupational exposure limits for chemical agents"

Assessment of health effects and methods for the measurement of exposure levels in workplace atmospheres for acetic anhydride (CAS No 108-24-7)

This document summarises the work of the Expert Committees on “health reference values” and on “expert appraisal for recommending occupational exposure limits for chemical agents” (OEL Committee), and of the Working Groups on “health effects” and on “metrology”.

Presentation of the issue

On 12 June 2007, AFSSET was requested by the Directorate General for Labour to conduct the expert appraisal work required for establishing recommendations on measures to be taken in the event of specific exposure profiles such as those with peaks. In 2010, ANSES published a report that recommended studying the 36 substances in France that have a short-term exposure limit but no time-weighted average (TWA) limit, in order to recommend health-based values drawn from the most recent scientific literature (ANSES, 2010). France currently has an indicative 15-minute exposure limit value for acetic anhydride of 20 mg.m\(^{-3}\) (or 5 ppm), set by the Circular dated 5 March 1985\(^1\).

Scientific background

The French system for setting OELVs consists of three clearly distinct phases:
- Independent scientific expertise (the only phase entrusted to ANSES);
- Proposal by the Ministry of Labour of a draft regulation for the establishment of limit values, which may be binding or indicative;
- Stakeholder consultation during the presentation of the draft regulation to the French Steering Committee on Working Conditions (COCT). The aim of this phase is to discuss the effectiveness of the limit values and, if necessary, to determine a possible implementation timetable, depending on any technical and economic feasibility problems.

The organisation of the scientific expertise phase required for the establishment of Occupational Exposure Limit Values (OELVs) was entrusted to AFSSET in the framework of the 2005-2009 Occupational Health Plan (PST) and then to ANSES after AFSSET and AFSSA merged in 2010.

\(^1\) Circular of 5 March 1985 completing and amending the annex to the Circular of 19 July 1982 relative to permitted levels for concentrations of certain hazardous substances in the workplace atmosphere.

The OELs, as proposed by the Committee on expert appraisal for recommending occupational exposure limits for chemical agents (OEL Committee), are concentration levels of pollutants in workplace atmospheres that should not be exceeded over a determined reference period and below which the risk of impaired health is negligible. Although reversible physiological changes are sometimes tolerated, no organic or functional damage of an irreversible or prolonged nature is accepted at this level of exposure for the large majority of workers. These concentration levels are determined by considering that the exposed population (the workers) excludes both children and the elderly. They are determined by the OEL Committee experts based on information available from epidemiological, clinical and animal toxicology studies, etc. Identifying concentrations that are safe for human health generally requires adjustment factors to be applied to the values identified directly by the studies.
These factors take into account a number of uncertainties inherent to the extrapolation process conducted as part of an assessment of the health effects of chemicals on humans.

The Committee recommends the use of three types of values:
- 8-hour occupational exposure limit (8h-OEL): this corresponds to the limit of the time-weighted average (TWA) of the concentration of a chemical in the worker's breathing zone over the course of an 8-hour work shift. In the current state of scientific knowledge (toxicology, medicine, epidemiology, etc.), the 8h-OEL is designed to protect workers exposed regularly and for the duration of their working life from the medium- and long-term health effects of the chemical in question;
- Short-term exposure limit (STEL): this corresponds to the limit of the time-weighted average (TWA) of the concentration of a chemical in the worker's breathing zone over a 15-minute reference period during the peak of exposure, irrespective of its duration. It aims to protect workers from adverse health effects (immediate or short-term toxic effects such as irritation phenomena) due to peaks of exposure;
- Ceiling value: this is the limit of the concentration of a chemical in the worker's breathing zone that should not be exceeded at any time during the working period. This value is recommended for substances known to be highly irritating or corrosive, or likely to cause serious, potentially irreversible effects after a very short period of exposure.

These three types of values are expressed:
- either in mg.m\(^{-3}\), i.e. in milligrams of chemical per cubic metre of air, and in ppm (parts per million), i.e. in cubic centimetres of chemical per cubic metre of air, for gases and vapours;
- or in mg.m\(^{-3}\) only, for liquid and solid aerosols;
- or in f.cm\(^{-3}\), i.e. in fibres per cubic centimetre, for fibrous materials.

The 8h-OEL may be exceeded for short periods during the working day provided that:
- the weighted average of values over the entire working day is not exceeded;
- the short-term exposure limit (STEL), when it exists, is not exceeded.
(These two compliance rules are illustrated with a short code sketch at the end of this section.)

In addition to the OELs, the OEL Committee assesses the need to assign a “skin” notation when significant penetration through the skin is possible (ANSES, 2014a). This notation indicates the need to consider the dermal route of exposure in the exposure assessment and, where necessary, to implement appropriate preventive measures (such as wearing protective gloves). Skin penetration of substances is not taken into account when determining the atmospheric limit levels, yet it can potentially cause health effects even when the atmospheric levels are respected.

The OEL Committee also assesses the need to assign an “ototoxic” notation\(^2\), indicating a risk of hearing impairment in the event of co-exposure to noise and the substance below the recommended OELs, to enable prevention specialists to implement appropriate measures (collective, individual and/or medical) (ANSES, 2014a).

Finally, the OEL Committee assesses the applicable reference methods for the measurement of exposure levels in the workplace. The quality of these methods and their applicability to the measurement of exposure levels for comparison with an OEL are assessed, particularly with regard to their compliance with the performance requirements of the NF EN 482 Standard and their level of validation.
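As flagged above, the two exceedance rules for the 8h-OEL can be expressed directly in code. The following sketch is not taken from the ANSES report; it is a minimal illustration assuming a shift recorded as 32 consecutive 15-minute average concentrations, checked against an 8h-OEL and an optional 15min-STEL:

```python
def shift_compliant(samples_mg_m3, oel_8h, stel_15min=None):
    """Check one 8-hour shift against an 8h-OEL and optional 15min-STEL.

    `samples_mg_m3` is assumed to hold 32 consecutive 15-minute average
    concentrations in mg/m3; both rules described above must hold.
    """
    if len(samples_mg_m3) != 32:
        raise ValueError("expected 32 x 15-minute samples for an 8h shift")
    twa_8h = sum(samples_mg_m3) / len(samples_mg_m3)
    if twa_8h > oel_8h:
        return False   # full-shift weighted average exceeded
    if stel_15min is not None and max(samples_mg_m3) > stel_15min:
        return False   # a 15-minute exposure peak exceeds the STEL
    return True

# Example: a shift that averages well below a hypothetical 8h-OEL of
# 5 mg/m3 still fails because one peak exceeds a STEL of 20 mg/m3.
samples = [1.0] * 31 + [25.0]
print(shift_compliant(samples, oel_8h=5.0, stel_15min=20.0))   # False
```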
**Organisation of the expert appraisal**

ANSES entrusted examination of this request to the Expert Committee on expert appraisal for recommending occupational exposure limits for chemical agents (OEL Committee). The Agency also mandated:
- the working group on health effects, to conduct the expert appraisal work on health effects;
- the working group on metrology, to assess measurement methods in workplace atmospheres.

Several ANSES employees contributed to the work and were responsible for the scientific coordination of the different expert groups. The methodological and scientific aspects of the work of each group were regularly submitted to the Expert Committee. The report produced by each working group takes account of observations and additional information provided by the Committee members. This expert appraisal was therefore conducted by a group of experts with complementary skills. It was carried out in accordance with the French Standard NF X 50-110 “Quality in Expertise Activities”.

**Preventing risks of conflicts of interest**

ANSES analyses the interests declared by the experts before they are appointed and throughout their work, in order to prevent potential conflicts of interest in relation to the points addressed in expert appraisals. The experts' declarations of interests are made public on ANSES's website (www.anses.fr).

**Description of the method**

*For the assessment of health effects*

A summary report was prepared by the working group on health effects and submitted to the OEL Committee (term of office 2010-2013), which commented on it and added to it. The information in the summary report on the health effects of acetic anhydride was taken from the Medline and Toxline databases, consulted up to January 2012, and from summary documents written by the ACGIH (last revised in 2001).

\(^2\) Since the publication of the 2014 ANSES report, the “ototoxic” notation has been replaced by the “noise” notation, which was adopted by the European Scientific Committee and has been used in the French regulation for styrene.

**For the assessment of methods for measuring exposure levels in the workplace**

A summary report was prepared by the working group on metrology and submitted to the OEL Committee (term of office 2010-2013), which added its own comments. The summary report presented the various protocols identified for measuring acetic anhydride in workplace atmospheres, grouped together based on the methods they use. These methods were then assessed and classified based on the performance requirements set out, in particular, in the French Standard NF EN 482, “Workplace atmospheres - General requirements for the performance of procedures for the measurement of chemical agents”, and the decision-making criteria listed in the methodology report. A list of the main sources consulted is detailed in the methodology report.

The methods were classified as follows:
- Category 1A: the method has been recognised and validated (all of the performance criteria of the NF EN 482 Standard are met);
- Category 1B: the method has been partially validated (the essential performance criteria of the NF EN 482 Standard are met);
- Category 2: the method is indicative (certain essential validation criteria are not clear enough);
- Category 3: the method is not recommended (essential validation criteria are lacking or inappropriate).
A detailed comparative study of the methods in categories 1A, 1B and 2 was conducted with respect to their different validation data and technical feasibility, in order to recommend the most suitable method(s) for measuring concentrations for comparison with OELs.

The OEL Committee (term of office 2014-2017) adopted:
- the assessment of health effects at its meeting on 12 October 2015;
- the evaluation of measurement methods in workplace atmospheres at its meeting on 12 October 2015.

The OEL Committee (term of office 2014-2017) adopted the collective expert appraisal work and its conclusions and recommendations on 12 October 2015. The collective expert appraisal work and the summary report were submitted to public consultation from 30/06/2017 to 30/08/2017. No comments were received. The Health Reference Values Committee (term of office 2017-2020) adopted this version on 17 October 2017.

**Results of the collective expert appraisal on the health effects**

**Occupational uses**

Acetic anhydride is mainly used as:
- an acetylating agent for the manufacture of acetic esters (in particular cellulose acetates), pharmaceutical products (aspirin, etc.) and agrochemical products;
- a dehydrating agent.
(Source: Toxicological data sheet No 219, INRS-2004)

**Toxicokinetic data**

Inhaled acetic anhydride is absorbed by the upper and lower respiratory tracts, where it hydrolyses into acetate ions. In aqueous solution, acetic anhydride hydrolyses rapidly, with a half-life at 25°C of 4.40 min (ECHA 2011).

**General toxicity**

*Toxicity in humans*

Exposure to acetic anhydride in liquid or vapour form causes severe irritation of the skin, eyes and mucous membranes (ACGIH 2001). A study by Sinclair et al. (1994) describes the consequences of accidental contact with acetic anhydride following the explosion of a container. The worker developed a pulmonary oedema within 24 hours of the accident and died on the 69th day. The authors suggest degradation and necrosis of the pulmonary tissues following an exothermic reaction between the acetic anhydride and the water in the tissues. There are no studies available on the effects of chronic exposure to acetic anhydride vapours in humans.

*Toxicity in animals*

There are few studies on the toxicity of acetic anhydride by inhalation in animals. The two studies described below and mentioned later in the report were not published in the peer-reviewed scientific literature, but were described in the REACH registration dossier for acetic anhydride, available on the ECHA website, and are the subject of a report (in the Screening Information Data Set - SIDS) by the OECD as part of the United Nations Environment Programme (UNEP)\(^3\) (OECD 1997, ECHA 2011).

A study of acute toxicity by inhalation in rats, prior to a toxicity study on reproduction, was carried out by a group of industrial producers of acetic anhydride in 1994. Mated Charles River male and female rats were exposed 6 hours a day, 5 days a week: for two weeks for the male rats, and from the 6th to the 15th day after mating for the female rats. Groups of 5 animals were exposed to acetic anhydride concentrations of 0, 104, 418 and 1670 mg.m\(^{-3}\) (respectively 0, 24, 104 and 407 ppm). The exposure at 407 ppm took place only once, as it resulted in the death of 2 animals and a poor general condition in the surviving animals; the autopsy revealed severe degeneration of all the tissues of the respiratory tract. The animals exposed at 104 ppm showed less severe irritation of the respiratory tract and significant weight loss.
Lastly, the animals exposed at 24 ppm also showed signs of irritation of the respiratory tract: a lower level of irritation and signs of discomfort (eyes half closed) were observed in these animals during exposure (OECD 1997, ECHA 2011).

A study of subchronic toxicity by inhalation over 90 days in rats was carried out by the same group of industrial producers (OECD 1997, ECHA 2011). A control group was included in the study (15 male and 15 female rats), and three levels of exposure were tested: 1, 5 and 20 ppm, or 4.2, 21 and 83.5 mg.m\(^{-3}\) (target concentrations), with 15 rats per dose and per sex. The animals exposed to the highest concentration of acetic anhydride (20 ppm) all presented clinical and histopathological signs of severe irritation of the respiratory tract (local inflammatory lesions with hyperplasia and squamous metaplasia of the underlying respiratory epithelium) in the nose, larynx, trachea and lungs, as well as in the eyes. At 5 ppm, some minor signs of irritation in the nose, larynx and eyes were observed in some of the exposed animals. Slight haematological modifications were observed at 5 ppm but are described as being without toxicological significance. No clinical, biochemical or haematological signs were observed in the animals exposed at 1 ppm. The study also shows that the signs of irritation in the exposed animals had reversed 13 weeks after exposure. The study led to a proposed NOAEL of 1 ppm.

\(^3\) For “high production volume” (HPV) substances (produced or imported into Member States in volumes exceeding 1000 tonnes per year).

**Genotoxicity**

Genotoxicity tests on bacteria gave negative results for acetic anhydride.

**Carcinogenic effects**

No data were found in the literature concerning potential carcinogenicity.

**Reproductive toxicity**

A reprotoxicity study by inhalation of acetic anhydride in rats followed the study of maternal toxicity described in the section on acute toxicity (OECD 1997, ECHA 2011). Female Charles River rats were exposed to acetic anhydride vapours for six hours a day, five days a week, from the 6th to the 15th day after mating. In this study the female rats were only exposed to concentrations of 0, 104 and 418 mg.m\(^{-3}\) (respectively 0, 24 and 104 ppm). Exposure of the female rats to 104 ppm of acetic anhydride was interrupted after seven exposures because of the observed general toxicity (weight loss, low feed consumption, and noisy, gasping respiration) leading to litter resorption in two of the animals. In the group exposed to 24 ppm, the animals also showed signs of maternal toxicity, but to a lesser extent, and no foetal toxicity was observed.

**Establishment of OELs**

**15min-STEL**

The critical effect selected is severe irritation of the eyes and respiratory tract following exposure to acetic anhydride vapours. The available data cannot be used to establish a short-term limit value for acetic anhydride directly. In the event of missing or insufficient data, the profile of substances with a similar structure can be considered for establishing the limit values, as indicated in the methodological document on *the establishment of limit values for irritating or corrosive substances* (ANSES 2014b). An analogy can be made with the acetic acid produced during the hydrolysis of acetic anhydride. The moistening and warming of the inhaled air by the mucous membranes of the upper airways, with their abundant blood vessels, provides a suitable environment for the rapid hydrolysis of acetic anhydride.
The same is true for the mucous membranes of the eyes. Accordingly, exposure of skin tissue to acetic anhydride in the presence of moisture, and of eye and respiratory tract tissue in a humid atmosphere, is comparable to exposure of these tissues to acetic acid vapours. The ANSES collective expert appraisal on acetic acid carried out in 2014 led to a proposed short-term limit value for that substance of $22.5 \text{ mg.m}^{-3}$, rounded to $20 \text{ mg.m}^{-3}$.

Calculation of the limit value for acetic anhydride is based on the molar balance of its hydrolysis reaction in the presence of water, which gives rise to two molecules of acetic acid:

$$15\text{min-STEL}_{\text{acetic anhydride}} = \frac{\text{STEL}_{\text{acetic acid}} \times M_{\text{acetic anhydride}}}{2 \times M_{\text{acetic acid}}}$$

with the molar masses of acetic anhydride and acetic acid equal to $102.09 \text{ g.mol}^{-1}$ and $60.05 \text{ g.mol}^{-1}$ respectively. This gives $(22.5 \times 102.09) / (2 \times 60.05) = 19.1 \text{ mg.m}^{-3}$, rounded up to $20 \text{ mg.m}^{-3}$.

Thus, a 15min-STEL of $20 \text{ mg.m}^{-3}$ is recommended for acetic anhydride, or 4 ppm (conversion factor at 20°C and 101 kPa).
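The derivation above is easy to verify numerically. The sketch below is an illustrative check, not part of the appraisal; it reproduces the molar-balance calculation and adds the mg.m\(^{-3}\)-to-ppm conversion from the ideal gas law at the stated reference conditions of 20°C and 101 kPa:

```python
M_ANHYDRIDE = 102.09    # g/mol, acetic anhydride (from the report)
M_ACID = 60.05          # g/mol, acetic acid (from the report)
STEL_ACID = 22.5        # mg/m3, unrounded acetic acid 15min-STEL

# One anhydride molecule hydrolyses into two acid molecules, hence the
# factor of 2 in the denominator of the molar balance.
stel_anhydride = STEL_ACID * M_ANHYDRIDE / (2 * M_ACID)
print(f"15min-STEL (acetic anhydride): {stel_anhydride:.1f} mg/m3")  # 19.1

# mg/m3 -> ppm via the ideal-gas molar volume at 20 degrees C, 101 kPa.
R, T, P = 8.314, 293.15, 101_000        # J/(mol.K), K, Pa
molar_volume_l = R * T / P * 1000       # about 24.1 L/mol
ppm = 20 * molar_volume_l / M_ANHYDRIDE
print(f"20 mg/m3 is about {ppm:.1f} ppm")  # the report quotes 4 ppm
```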
**8h-OEL**

The literature review identified no long-term effects for acetic anhydride. As a result, no 8h-OEL is recommended for acetic anhydride.

**“Skin” notation**

As the substance does not cause any systemic effect and there are no quantitative data from which to calculate skin absorption, the “skin” notation is not assigned for acetic anhydride.

**“Noise” notation**

In the absence of scientific data on any ototoxic effect of acetic anhydride, the “noise” notation is not assigned for this substance.

**Conclusions**

No 8h-OEL recommended
15min-STEL = 20 mg.m\(^{-3}\)
“Skin” notation: not assigned
“Noise” notation: not assigned

Results of the collective expert appraisal on measurement methods in workplace atmospheres

Three methods for measuring acetic anhydride in the air at the workplace were identified and assessed (see Table 1).

Table 1: Assessment of methods for measuring acetic anhydride in workplace atmospheres

| No | Method | Similar protocols | Category for monitoring short-term exposure | Category for regulatory technical control of the 15min-STEL\(^4\) |
|----|--------|-------------------|----------------------------------------------|-------------------------------------------------------------------|
| 1 | Active sampling by bubbling through an alkaline solution of hydroxylamine – addition of a ferric chloride solution – analysis by visible spectrophotometry of the acetic anhydride/hydroxylamine-ferric chloride complex | NIOSH 3506: 1994 | 3 | 3 |
| 2 | Active sampling through a glass fibre filter impregnated with 1-(2-pyridyl)piperazine – extraction with a mixture of 2-propanol/toluene – analysis by GC/NPD | OSHA 82: 1990 | 1B | 1B |
| 3 | Active sampling through a glass fibre filter impregnated with veratrylamine and di-n-octyl phthalate – extraction with a mixture of 2-propanol/toluene – analysis by GC/NPD | OSHA 102: 1993 | 1B | 1B |

\(^4\) Validation and performance criteria for methods for monitoring STELs are defined in the NF EN 482 Standard over an interval of 0.5 to 2 times the STEL.

Under the French regulations, for the technical control of the exposure limit, the measurement method must be able to measure one tenth of the 15min-STEL (Ministerial Order of 15 December 2009 on technical controls of occupational exposure limits in workplace atmospheres and conditions for accrediting the organisations in charge of controls, published in the French Official Journal of 17 December 2009). As such, when a method cannot measure one tenth of the 15min-STEL, it cannot be classified in Category 1A or 1B for regulatory control of the 15min-STEL; it might, however, be classified in Category 1A or 1B solely for assessing occupational exposure.

[Figure 1: Ranges of validity and limits of quantification of the various methods, from 0.1 to 2 times the 15min-STEL recommended for acetic anhydride]

Method 1, described in the NIOSH 3506 protocol, was classified in Category 3 because it is not suitable for assessing the atmospheric concentration of acetic anhydride for comparison with the 15min-STEL of 20 mg.m\(^{-3}\) recommended by the OEL Committee: its limit of quantification is too high, and validation data, in particular on the breakthrough volume, are lacking.

Method 2, described in the OSHA 082 protocol, is validated between 1 and 2 times the 15min-STEL, with a limit of quantification below 0.1 times the 15min-STEL. The OSHA 082 protocol mentions possible interferences but does not examine their influence on collection efficiency. The method is classified in Category 1B for the monitoring of short-term exposure. For technical control of the 15min-STEL in a regulatory framework, this method can measure one tenth of the 15min-STEL and is classified in Category 1B.

Method 3, described in the OSHA 102 protocol, is validated between 1 and 2 times the 15min-STEL, with a limit of quantification below 0.1 times the 15min-STEL. The extraction efficiency was also studied over the range of 0.1 to 2 times the 15min-STEL. The OSHA 102 protocol mentions possible interferences but does not examine their influence on the trapping ability. The method is classified in Category 1B for the monitoring of short-term exposure. For technical control of the 15min-STEL in a regulatory framework, this method can measure one tenth of the 15min-STEL and is classified in Category 1B. It should be noted that recovery in the method described in the OSHA 102 protocol is higher than that of the method described in the OSHA 082 protocol.

Conclusions of the collective expert appraisal

Based on the data currently available, the OEL Committee:
- recommends a 15min-STEL of 20 mg.m\(^{-3}\) for acetic anhydride;
- does not recommend setting an 8h-OEL for acetic anhydride;
- does not recommend a “skin” notation;
- does not recommend a “noise” notation.

Regarding the assessment of the methods for measuring acetic anhydride in the workplace, the OEL Committee recommends Methods 2 and 3 for the regulatory technical control of the 15min-STEL and the monitoring of short-term exposure. These two methods, both classified in Category 1B, consist of active sampling on glass fibre filters impregnated with either 1-(2-pyridyl)piperazine (OSHA 082) or veratrylamine (OSHA 102), extraction with a mixture of 2-propanol/toluene, and quantification by gas chromatography with a thermionic-specific nitrogen-phosphorus detector (GC/NPD).

References
(French Agency for Food, Environmental and Occupational Health & Safety, France). 115 p. ANSES. (2014b). Document repère pour l’établissement de valeurs limites applicables en milieu professionnel pour les agents chimiques ayant un effet uniquement irritant et corrosif. (Agence nationale de sécurité sanitaire de l’alimentation, de l’environnement et du travail, France). 50 p. ANSES. (2010). Recommandations émises en vue de limiter l’importance et le nombre de pics d’exposition dans une journée de travail : cas des substances ayant une VLCT-15min mais pas de VLEP-8h (partie 2). (French Agency for Food, Environmental and Occupational Health & Safety, France). 36 p. Health effects section American Conference of Governmental Industrial Hygienists (ACGIH) (2001). Acetic Anhydride in ‘Threshold limit values for chemical substances and physical agents and biological exposure indices’. 7th ed. (ACGIH: Cincinnati, United States). 4 p. European Chemicals Agency. ECHA Chem Database, registered substances. Acetic anhydride. Available on the website http://echa.europa.eu/web/guest/information-on-chemicals/registered-substances. Viewed on 23/06/2011. European Chemicals Bureau (ECB). International Uniform Chemical Information Database – Acetic anhydride, February 2000. INRS (2004). Anhydride acétique – FT 219. In ‘Fiches Toxicologiques’. (Institut National de Recherche et de Sécurité: Paris, France). Available on the website http://www.inrs.fr/acceuil/produits/bdd/doc/fichetox.html?refINRS=FT%20219. Consulted 27/05/2013. Sinclair JS, McManus DT, O’Hara MD, Millar R (1994). Fatal inhalation injury following an industrial accident involving acetic anhydride. Burns 20(5):469–470. OECD. Screening Information Data Sheets on acetic anhydride (June 1997). Available on the website http://www.inchem.org/documents/sids/sids/108247.pdf. Viewed on 23/06/2011. Metrology section (date of inventory of methods: January 2012) AFNOR NF EN 482: 2012: Atmosphères des lieux de travail - Exigences générales concernant les performances des modes opératoires de mesurage des agents chimiques. NIOSH – Manual of Analytical Methods, 4th ed., Cincinnati, Ohio, NIOSH, 1994, method 3506 Acetic Anhydride. (http://www.cdc.gov/niosh/docs/2003-154/pdfs/3506.pdf. Viewed on 08/10/2012) OSHA – Sampling and analytical methods. Salt Lake City, method 082 (Acetic anhydride), April 1990. (http://www.osha.gov/dts/sltc/methods/organic/org082/org082.html. Viewed on 08/10/2012) OSHA – Sampling and analytical methods. Salt Lake City, method 102 (Acetic anhydride), October 1993. (http://www.osha.gov/dts/sltc/methods/organic/org102/org102.html) Date summary validated by the Health reference values Committee: 17 October 2017
STYLEWAVEGAN: STYLE-BASED SYNTHESIS OF DRUM SOUNDS WITH EXTENSIVE CONTROLS USING GENERATIVE ADVERSARIAL NETWORKS Antoine Lavault, Axel Roebel, Matthieu Voiry To cite this version: Antoine Lavault, Axel Roebel, Matthieu Voiry. STYLEWAVEGAN: STYLE-BASED SYNTHESIS OF DRUM SOUNDS WITH EXTENSIVE CONTROLS USING GENERATIVE ADVERSARIAL NETWORKS. 19th Sound and Music Computing Conference (SMC 2022), Jun 2022, Saint-Etienne, France. hal-03693950 STYLEWAVEGAN: STYLE-BASED SYNTHESIS OF DRUM SOUNDS WITH EXTENSIVE CONTROLS USING GENERATIVE ADVERSARIAL NETWORKS Antoine Lavault Apeira Technologies STMS, Sorbonne Université firstname.lastname@example.org Axel Roebel STMS, IRCAM, CNRS Ministère de la Culture email@example.com Matthieu Voiry Apeira Technologies firstname.lastname@example.org ABSTRACT In this paper we introduce StyleWaveGAN, a style-based drum sound generator that is a variation of StyleGAN, a state-of-the-art image generator [1, 2]. By conditioning StyleWaveGAN on both the type of drum and several audio descriptors, we are able to synthesize waveforms faster than real-time on a GPU, directly in CD quality, up to a duration of 1.5s, while retaining a considerable amount of control over the generation. We also introduce an alternative to the progressive growing of GANs and experiment with the effect of dataset balancing for generative tasks. The experiments are carried out on an augmented subset of a publicly available dataset comprised of different drums and cymbals. We evaluate against two recent drum generators, WaveGAN [3] and NeuroDrum [4], demonstrating significantly improved generation quality (measured with the Fréchet Audio Distance) and interesting results with perceptual features. ### 1. INTRODUCTION Drum machines are musical devices creating percussion sounds using analogue or digital signal processing [5] [6]. The characteristic sound of this synthesis process contributed to their use in the ’80s and to their appreciation nowadays. However, these drum machines did not provide an extensive set of controls over the generation. Following the success of deep learning, several generative processes for percussive sounds have been proposed in recent years, and two approaches caught our attention. [7] used a GAN for waveform generation with conditioning on the type of drum, generating 0.3s at 44.1kHz. There is also [8], where a GAN was trained to generate STFTs of drum sounds while controlling the generator with audio descriptors, allowing them to generate 1s at 16kHz. Both of them used the progressive growing of GANs [9]. Another contribution that does not use GANs is Controllable Raw Audio Synthesis with High-resolution (CRASH) [10], a score-based generative model that supports a large variety of applications (class-conditional synthesis, inpainting, interpolation) but unfortunately suffers from rather long inference times. In this paper, we build upon the same idea of conditional synthesis using discrete and continuous controls, with time-domain generation as in [7], control by means of perceptual features derived from the AudioCommons project as in [4, 8], and a style-based approach (SGAN) [1, 2]. The characteristics of these networks are summarized in table 1. We expand on the idea of control with perceptual features by replacing the trained auxiliary network used in [8, 11] with a differentiable implementation of the feature estimators, increasing the robustness of the feature evaluation. 
We conduct our experiments on an augmented version of the ENST-Drums [12] dataset, containing kicks, snares, toms and hi-hats, and comprising about 120k samples amounting to 100 hours of recordings. To evaluate the quality of the model on this dataset, we use the Fréchet Audio Distance (FAD) [13], in an attempt to obtain a reference-free automatic evaluation of the generated samples. Finally, we explore the ability of the network to use the information from the perceptual features. All in all, our goal is to create an algorithm for drum sound synthesis suitable for professional music production. In other words, we expect good output quality, real-time generation and relevant controls. The Fréchet Audio Distance (FAD) is used for the quality evaluation, real-time ability is measured by timing plain generation, and the quality of the controls is assessed with the descriptor consistency metric from [4]. | Reference | Sample Rate | Duration | |-----------------|-------------|----------| | WaveGAN [3] | 16kHz | 1.1s | | NeuroDrum [4] | 16kHz | 1s | | DrumGAN [8] | 16kHz | 1.1s | | Drysdale et al. [7] | 44.1kHz | 0.4s | | **Ours** | **44.1kHz** | **1.5s** | Table 1. Comparison of state-of-the-art neural drum synthesizers ### 2. MODEL #### 2.1 Audio-Commons Timbre Models The Audio Commons project implements a collection of perceptual models that describe high-level timbral characteristics of a sound [14]. These features are crafted from the study of popular timbre designations given to a collection of sounds from the Freesound dataset. The perceptual models are built by combining existing low-level features found in the literature [15] which correlate with the chosen timbral designation. Contrary to [8], we reimplemented those features in order to make them fit directly into the training as differentiable functions. Our motivation comes from the use of an auxiliary network for conditioning in [8]: constructing a differentiable proxy for these timbral features by training a neural network does not guarantee the correct evaluation of the features to the same degree as implementing the features following the reference implementation. Moreover, the direct implementation allows a correct evaluation of signals whose descriptor values lie outside the range of values that were available for training the proxy. Our implementation of these descriptors as well as the supplementary material can be found at https://alavault.github.io/stylewavegan/ #### 2.2 Generative Adversarial Networks and StyleGAN Generative Adversarial Networks (GAN) are a family of training procedures in which a generative model (the generator) competes against a discriminative adversary (the discriminator) that learns to distinguish whether a sample is real or fake [16]. Instead of a vanilla GAN, we use an evolution called StyleGAN [1, 2]. StyleGAN attempts to mitigate the entangled representation that arises when noise is used as both the latent code and the input of the generator. The key idea is to use a style encoding, a vector obtained through a mapping network, which is then used to control (through an affine transform) every layer of a synthesis network. #### 2.3 Proposed Architecture Since StyleGAN was originally used for high-quality image generation, we have to modify it for direct waveform generation. 
In particular, we transform the 2D convolutions \((3 \times 3)\) into 1D causal convolutions \((1 \times 9)\) [17], the upsampling is done with an averaging filter before each convolution block in the synthesis network, the mapping network has 4 layers instead of 8, and the loss function is WGAN-LP [18] (see figure 1). We use the same number of filters, with respect to the depth, as StyleGAN2 [2]. Just like StyleGAN2, the synthesis network uses input/output skips and the discriminator is a residual network. In this work, we follow [4, 7] in using a temporal signal representation. Informal perceptual evaluations performed in the initial phase of this study supported our idea that the temporal representation produces better audio quality than a spectral representation; we suppose this is because of the high amount of noise and the importance of the transient in drum sounds. #### 2.4 Noise Addition Layers and Output Envelopes We modified the noise addition layers of StyleGAN to make them style-dependent. We also add noise shaping (with a linear fade-out) to avoid noisy tails. Having controlled noise addition is useful since some classes need more noise than others to reach a good synthesis quality. This can be summarized in the following equation: \[ y = x + w \cdot n + b \quad (1) \] where \(y\) is the output of the layer, \(x\) is the signal input of the layer, \(w\) is the transformed style vector, \(n\) is the shaped noise (the same on every channel) and finally \(b\) is a bias term. One of the drawbacks of noise addition layers is the lack of control over the decay of said noise. Because of this, the generated sounds have an audible noisy tail which makes them easily identifiable by a human listener. To avoid this pitfall, we added envelopes after the output of the network. These envelopes were generated from the training dataset, one per type of drum. For each given type, the final envelope is the filtered mean of the amplitudes of the analytic signals (obtained via the Hilbert transform) of the normalized samples of that type. A small fade-out is applied to avoid audible clicks at the end of the generated sounds. The Hilbert transform is calculated using the Discrete Fourier Transform on the first 1.5s (65536 samples at 44.1kHz) of each normalized sound of the dataset. Figure 2. Generated envelopes from the training dataset The final audio is obtained by multiplying the output of the synthesis network and the matching envelope element-wise. This ensures a quasi-constant energy representation inside the synthesis network. We hypothesize this helps by reducing the dynamic range to be generated by the nonlinearities inside the network. The output time signal $y_n$ is obtained from the network output $x_n$ by multiplying it with the envelope signal $e_{n,c}$ for drum class $c$: $$y_n = x_n e_{n,c} \quad (2)$$ #### 2.5 Controlling the Network The labels and audio descriptors are fed into an embedding layer whose output is concatenated to the latent $z$ (cf. figure 1) and fed to the mapping network. These labels and descriptors are concatenated after the mapping network too. In our experiments, we use 5 labels, added to the network as a one-hot vector. The descriptors, if used, are concatenated after the labels. We expect this method to yield a better disentanglement between the class label and the descriptors during the style encoding. We use the L1 loss to measure the deviation between the target descriptors and the generated values. 
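As an illustration of this descriptor loss, here is a minimal PyTorch-style sketch; it assumes each differentiable AudioCommons feature is available as a function mapping a batch of waveforms to one value per example (the names and shapes are our assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def descriptor_loss(audio: torch.Tensor,
                    targets: torch.Tensor,
                    descriptor_fns) -> torch.Tensor:
    """L1 deviation between target descriptors and values measured on audio.

    audio:          (batch, time) generated waveforms
    targets:        (batch, n_descriptors) target descriptor values
    descriptor_fns: list of differentiable functions, each mapping
                    (batch, time) -> (batch,); stand-ins for the
                    differentiable AudioCommons feature implementations.
    """
    estimates = torch.stack([fn(audio) for fn in descriptor_fns], dim=-1)
    return F.l1_loss(estimates, targets)
```

Because every operation is differentiable, the loss gradient flows through the feature estimators back into the generator, which is the point of replacing the trained proxy network of [8, 11].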
#### 2.6 AutoFade Progressive growing of GANs was proposed in [9] and used in [7, 8]. In our experiments, we developed and evaluated a variant of progressive growing that we denote AutoFade. It is a ResNet architecture with a convolution path and a bypass, where a learned parameter is used to fade more or less of one path. Rather than fixing a value as in ResNet, we let the network choose the best value as part of the training process, without the need to train it block by block. If $x$ and $y$ represent the two different branches, we have: $$\sin(\alpha)x + \cos(\alpha)y \quad (3)$$ $\alpha$ is independent of $x$ and $y$, which makes this structure an intermediate between ResNet and Highway Networks. By using trigonometric functions in equation (3), we guarantee the conservation of the standard deviation, provided both inputs have equal variance. Similarly to [2], we did not find any benefit in using progressive growing or AutoFade in the generator. On the other hand, using progressive growing in the discriminator did improve the results. The AutoFade feature will therefore be evaluated in the following sections only as part of the discriminator. All in all, AutoFade is similar to progressive growing in the sense that the $\alpha$ parameter changes over time. But contrary to progressive growing, the parameter is not forced to increase by a hyperparameter schedule; it changes with the gradient information. The “growing” is thus made dependent on the data and the training iteration. ### 3. EXPERIMENTAL SETUP #### 3.1 Dataset We use a subset of ENST-Drums [12], comprised of 350 samples of close miking of kicks, snares, toms and hi-hats. Since 350 elements is too low for a data-driven approach, we used an augmentation method similar to [19]. We used SuperVP \(^1\) to process the original dataset. The modifications applied to the sounds consist of a gain applied to transient/attack components [20] and noise components, as well as independent transposition of the signal source and the spectral envelope. The set of parameters is shown in table 2. The limits have been obtained by subjective evaluation of the modified sounds, aiming to avoid transformations that would be perceived as unnatural by a human listener. Examples are available in the supplementary material. As a supplementary metric, the Fréchet Audio Distance between the original dataset and the augmented one is 0.62. | Process | Parameters | |--------------------------------|---------------------| | Remix attack | 0.1, 0.3, 0.6, 1.5, 2, 3 | | Remix noise | 0.6, 1.5, 2, 3 | | Transposition | 0, ±100, ±200 | | Spectral envelope transposition| 0, ±200 | Table 2. Augmentation operations and parameters #### 3.2 Training Procedure The training procedure is the same as for StyleGAN2 [2], except that we trained the network on 2M samples. With a batch size of 10, this amounts to 200k iterations, which takes 7 days without the descriptors and 10 days with them on a single NVIDIA GTX 1080. #### 3.3 Imbalanced Dataset Balancing datasets is common in classification tasks but, to our knowledge, quite uncommon for generation tasks; one such example is [21]. As shown in table 3, our augmented dataset is quite unbalanced, so to obtain a balanced dataset we use a sampler which takes elements from sub-datasets (one per label) at random according to a uniform distribution. We call this “equal-proportion sampling”; contrary to [21], it does not require any downsampling. 
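A minimal sketch of this equal-proportion sampling (function and variable names are our own, not the paper's code):

```python
import random

def equal_proportion_sampler(subsets, n_samples, rng=random):
    """Draw n_samples by first picking a label uniformly at random, then an
    element of that label's sub-dataset uniformly at random. No sub-dataset
    is downsampled; rare classes are simply revisited more often.

    subsets: dict mapping each label to the list of its samples.
    """
    labels = list(subsets)
    for _ in range(n_samples):
        label = rng.choice(labels)               # uniform over the labels
        yield label, rng.choice(subsets[label])  # uniform within the label
```

In expectation, each label then contributes an equal share of every batch, regardless of the proportions shown in table 3.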
#### 3.4 Baseline The most appropriate candidates for our baseline are DrumGAN [8] and [7]. Unfortunately, these are not reproducible because of missing source code and/or missing or unknown meta-parameters. Therefore, we compare to [4] using the distributed code and to a reimplementation of [3], both trained on our augmented dataset. --- \(^1\) SuperVP is available free of charge in form of a Max/MSP object at https://forum.ircam.fr/projects/detail/supervp-for-max/ | Element | Proportion | |------------------|------------| | Kick | 3% | | Snare | 18% | | Toms | 45% | | Closed hi-hat | 10% | | Open hi-hat | 22% | Table 3. Dataset population Because NeuroDrum [4] works at a 16kHz sample rate, we adapted our model to use this sample rate for this comparison. We also compared with WaveGAN [3] using our dataset at 44.1kHz. Here we configured both networks to generate 0.3s (@44.1kHz). #### 3.5 Evaluation We chose the Fréchet Audio Distance (FAD) [13], a reference-free evaluation metric for audio generation algorithms using a VGGish model trained on AudioSet. We compare the embedding of the augmented database to the embedding obtained from 64k samples generated by the evaluated network. In terms of computational cost, we achieve a generation rate of 52 drum sounds/s on one GTX 1080 with the network at full resolution (StyleWaveGAN@44.1kHz + descriptors). | Network | FAD | |--------------------------------|-------| | Baseline [4] | 25.35 | | StyleWaveGAN@16kHz | 11.48 | Table 4. FAD comparison to NeuroDrum [4] (lower is better) | Network | FAD | |----------------------------------------------|-------| | WaveGAN@44.1kHz [3] | 13.08 | | StyleWaveGAN@44.1kHz (SWG) | 7.75 | | SWG + AutoFade (AF) | 6.84 | | SWG + Balanced dataset (B) | 7.89 | | SWG + AF + B | 7.92 | Table 5. FAD on networks without labels (lower is better) | Network | FAD | |----------------------------------------------|-------| | SWG + labels | 6.85 | | SWG + labels + AF | 6.72 | | SWG + labels + AF + Balanced data (B) | 6.65 | | SWG + labels + AF + B + Envelope | 3.62 | Table 6. FAD on label-conditioned networks (lower is better) | Class | SWG | SWG + AF + B | SWG + AF + B + Env | |----------------|---------|--------------|--------------------| | Kick | 8.79 | 11.71 | **3.58** | | Snare | 7.87 | 7.53 | **4.29** | | Tom | 8.17 | 8.09 | **6.27** | | Closed HH | 10.12 | 6.97 | **4.23** | | Open HH | 8.26 | 8.91 | **4.12** | Table 7. Intra-class FAD for label-conditioned StyleWaveGAN ### 4. EXPERIMENTAL RESULTS This section describes the results obtained with StyleWaveGAN in three main configurations: the first unconditioned, the second conditioned on the labels, and the third with labels and descriptors. #### 4.1 Impact of Our Contributions The first result, for unconditioned synthesis, is that we improved on our baseline in terms of FAD (tables 4 and 5). We can also see from table 5 that using AutoFade in the discriminator helped achieve better generation in this context. The results with dataset balancing are mixed. Without label conditioning, it did not bring any decrease in the FAD: since it makes the training and evaluation datasets differ (in proportions), the learned distribution differs, negatively impacting the FAD. This can be seen in table 5. However, it improved the supervised generation, as seen in table 6. The impact of AutoFade and dataset balancing on the intra-class FAD is shown in table 7. They lower the FAD generally, except for the kick and the open hi-hat. 
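For reference, the FAD values reported throughout are Fréchet distances between Gaussians fitted to VGGish embeddings of the reference and generated sets [13]. A minimal sketch of the computation (embedding extraction omitted; names are our own):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb_ref: np.ndarray, emb_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two embedding sets
    (rows = samples, columns = embedding dimensions)."""
    mu_r, mu_g = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
    cov_r = np.cov(emb_ref, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g).real  # drop numerical imaginary residue
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```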
Output envelopes have a very strong impact on the FAD for all drum classes: they reduce it by a factor of nearly two for every class besides the tom. #### 4.2 Control with Audio Descriptors We now investigate the control of perceptual features further. We trained a network on the same dataset, but made it generate longer audio: 65536 samples, equivalent to 1.48s. Examples are available in the supplementary material. ##### 4.2.1 Brightness We focus on only one class (snare) and one descriptor (brightness) as a first presentation of the idea. Figure 3 shows the relation between target and synthesized brightness for NeuroDrum and StyleWaveGAN. Results are shown as mean values and standard deviations, with black dots (StyleWaveGAN) and blue crosses (NeuroDrum). The solid red vertical lines show the limiting values in the training dataset. Finally, the reference target values used for the ordering comparison according to [4, 8] and discussed below are marked with dotted green lines. This figure demonstrates clearly that while the mean value of the perceptual brightness of a sound produced by NeuroDrum increases with the target brightness, it still remains far off the target brightness most of the time. In contrast, the synthesized brightness of StyleWaveGAN is very close to the target value for all values present in the training set, and even remains somewhat close to the target outside the brightness limits of the training data. To compare with [4, 8], we use their ordering criterion. It compares pairs of sounds generated with a pair of target values (situated at levels 0.2, 0.5 and 0.8 on a min/max normalized scale), and evaluates whether the ordering of the targets is preserved in the generated features. As in [4], E1 uses the extreme points, E2 the low and mid values, and E3 the mid and high values. The very small error in the synthesized feature values generated with StyleWaveGAN results in a consistent ordering for all three criteria. Table 8 reproduces the results for brightness control from table 3 in [8], comparing NeuroDrum and DrumGAN trained on a different dataset, under the columns “D1”. The results under the columns “D2” are for our network, trained on our augmented dataset. We matched and improved on the results of NeuroDrum and DrumGAN in this configuration. All these results support our hypothesis that replacing a trained feature estimator as in [8, 11] by a direct implementation of the feature estimator allows for a significantly improved control consistency of the final network. Figure 3. Target brightness vs. generated brightness (single descriptor). Black dots are for StyleWaveGAN and blue crosses are for NeuroDrum | Features | E1 | | E2 | | E3 | | |------------|--------|-------|--------|-------|--------|-------| | Dataset | D1 | D2 | D1 | D2 | D1 | D2 | | DrumGAN | 0.74 | - | 0.71 | - | 0.7 | - | | NeuroDrum | 0.99 | 0.91 | 0.99 | 0.80 | 0.99 | 0.68 | | SWG | - | 1.00 | - | 0.94 | - | 0.98 | Table 8. Ordering accuracy for the feature coherence tests for brightness on samples generated with the baseline NeuroDrum [4] and DrumGAN (from [8]); higher is better Figure 4. Target depth vs. generated depth (single descriptor) Figure 5. Target warmth vs. generated warmth (single descriptor) ##### 4.2.2 Other Descriptors We discuss here the results for other descriptors of interest for our task: depth and warmth. Results are shown in table 9 as well as in figures 4 and 5. 
In these figures, a histogram of the dataset values is overlaid in light blue. Figure 4 shows the results for the depth descriptor. Performance is slightly worse than for the brightness descriptor, due to some outliers. The same extrapolation property is found here, though slightly less smooth. We conclude that the depth descriptor is harder for the network to learn. The difference for low depth (< 30, marked by the first blue dashed line) can be explained by the low number of samples available to train the network at this level. Figure 5 shows the results for the warmth descriptor. The performance is on par with the brightness descriptor except in the region above 80% of the min/max value. This can be explained by a lack of training data in this region, as shown by the overlaid histogram. ##### 4.2.3 Multi-dimensional Descriptor Controls Using three individual networks, one per descriptor, is not practical for a real-world application. In the next step we therefore investigate controlling the network with a 3-dimensional vector of warmth, depth and brightness descriptors. | Features | E1 | E2 | E3 | |----------|-----|-----|-----| | Depth | 0.99| 0.99| 0.71| | Warmth | 1.00| 0.86| 0.90| Table 9. Ordering accuracy for other features of interest using StyleWaveGAN (higher is better) When using several descriptors simultaneously as part of the control, we can expect conflicts between them as well as dependence on the training data. Since the network is trained on data, it learns to reproduce features similar to the real data, which also means it covers only a part of the possible combinations. To evaluate the quality of control, we use the same labels but change the evaluation method slightly. While we use the same criterion, we generate samples in a way that can create sounds outside of the training dataset. More precisely, we take a set of real features from a batch of the training data and then set the descriptor under evaluation to 20, 50 or 80 percent of the min/max value of said descriptor. Results obtained with this method are shown in table 10. | Features | E1 | E2 | E3 | |----------|-----|-----|-----| | Brightness | 1.0 | 1.0 | 1.0 | | Depth | 1.0 | 1.0 | 0.99| | Warmth | 0.98| 0.59| 0.97| Table 10. Ordering accuracy for multiple descriptors using StyleWaveGAN (higher is better) As shown in table 10, training the descriptors with the proposed differentiable error function produces a network that follows controls with a precision such that the ordering criterion proposed in [4] and used in [8] is no longer sufficient to evaluate the control precision. In the following we therefore propose a refined evaluation criterion that evaluates control precision in more detail, taking into account not only ordering but also errors. To achieve this, we use the Mean Absolute Error (MAE) between the target values and the output values over three regions based on quantiles of the dataset values: - F1: MAE evaluated using only the target descriptor values within the 20th and 50th quantiles - F2: MAE evaluated using only the target descriptor values within the 50th and 80th quantiles - F3: MAE evaluated using only the target descriptor values within the 20th and 80th quantiles The interest of working with quantiles rather than percentages of the min/max values is that we cover the same share of the dataset values each time while avoiding extreme values. The results are shown in table 11. 
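A minimal sketch of these quantile-restricted MAE criteria (array names are our own):

```python
import numpy as np

def quantile_mae(targets, outputs, dataset_values, q_lo, q_hi):
    """MAE restricted to targets lying between two quantiles of the
    dataset's descriptor values (the F-criteria defined above)."""
    targets = np.asarray(targets, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    lo, hi = np.quantile(dataset_values, [q_lo, q_hi])
    mask = (targets >= lo) & (targets <= hi)
    return float(np.mean(np.abs(targets[mask] - outputs[mask])))

# F1, F2 and F3 as defined above:
# f1 = quantile_mae(t, y, data, 0.20, 0.50)
# f2 = quantile_mae(t, y, data, 0.50, 0.80)
# f3 = quantile_mae(t, y, data, 0.20, 0.80)
```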
The values given in table 11 are neither percentages nor relative to the descriptor values; they are absolute errors, in the same unit as the descriptors. In table 11, the lines labelled single show the results using networks with only one descriptor, and the lines labelled combined show the results when the descriptor of interest is set but the others are taken from a real sound from the training dataset. Finally, the lines labelled combined, dataset show the results when all the descriptor values are taken from the training dataset. | Features | F1 | F2 | F3 | |---------------------------|-----|-----|-----| | NeuroDrum (brightness) | 7.22| 10.40| 8.81| | Brightness (single) | 0.83| 1.06| 0.98| | Depth (single) | 1.06| 1.15| 1.10| | Warmth (single) | 1.15| 1.01| 1.08| | Brightness (combined) | 0.97| 1.36| 1.17| | Depth (combined) | 1.33| 1.50| 1.41| | Warmth (combined) | 1.29| 3.31| 2.33| | Brightness (dataset, combined) | 0.75| 0.95| 0.85| | Depth (dataset, combined) | 0.99| 1.03| 1.0 | | Warmth (dataset, combined)| 1.42| 1.37| 1.39| Table 11. Mean absolute error for several configurations (lower is better) Since a perfect output would follow the control input exactly, we expect to see a good linear fit on the output. To evaluate this, we calculate a linear least-squares regression on the domain bounded by the 20th and 80th quantiles, and use its determination coefficient $R^2$ as a metric of linearity. In this case, $R^2$ is equal to: $$R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$$ \hspace{1cm} (4) where $n$ is the number of samples, $y_i$ is the output value of the $i$-th measure, $\hat{y}_i$ the corresponding predicted value and $\bar{y}$ the average of the measured values. The results are compiled in table 12. We can also note that the slope of the least-squares regression can serve as an ordering criterion. | Features | $R^2$ | |---------------------------|-------| | NeuroDrum (brightness) | 0.03 | | Brightness (single) | 0.75 | | Depth (single) | 0.70 | | Warmth (single) | 0.76 | | Brightness (combined) | 0.47 | | Depth (combined) | 0.67 | | Warmth (combined) | 0.08 | | Brightness (dataset, combined) | 0.72 | | Depth (dataset, combined) | 0.62 | | Warmth (dataset, combined)| 0.45 | Table 12. Determination coefficient for several configurations (higher is better) Apart from providing a better fit than NeuroDrum, the $R^2$ coefficient is generally quite satisfying, except for warmth when used with values outside of the dataset. This is illustrated in figure 8, where there is a bend in the output values. This bend is due to the dataset value distribution: for high warmth values, the set of values for the other descriptors gets small (a variation of less than 5 points around 50 for brightness and 66 for depth, values that are already quite rare in the dataset). So, when the control input contains brightness and depth values drawn from the rest of the dataset, the warmth value has to be extrapolated by the network, since such a combination was not seen during training. However, this behaviour does not appear when evaluating on control values from the dataset ($R^2$ of 0.08 vs. 0.45). For the other descriptors, the linearity remains satisfying whatever the evaluation method used. These considerations can be seen in figures 6 through 8. 
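A minimal sketch of this linearity metric; for a simple linear fit, the squared correlation coefficient returned by scipy equals the $R^2$ of equation (4) (names are our own):

```python
import numpy as np
from scipy.stats import linregress

def control_linearity(targets, outputs, dataset_values):
    """R^2 (and slope) of a least-squares fit of measured descriptor values
    against their targets, restricted to the 20th-80th quantile range of
    the dataset values, as in table 12."""
    targets = np.asarray(targets, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    lo, hi = np.quantile(dataset_values, [0.2, 0.8])
    mask = (targets >= lo) & (targets <= hi)
    fit = linregress(targets[mask], outputs[mask])
    # The slope can double as an ordering criterion, as noted above.
    return fit.rvalue ** 2, fit.slope
```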
When iterating over the whole scale (i.e. 0 to 100) while setting the other descriptors with values from the training dataset, the output control stays mostly consistent and linear, and even allows generating samples outside the minimum and maximum values of the dataset. To conclude, our method works well almost everywhere within the min/max values of the training dataset, and it can extrapolate beyond those values as well as to unseen combinations of descriptors. ### 5. CONCLUSION AND FUTURE WORK In this paper, we presented a new method for drum synthesis using StyleWaveGAN, an adaptation of a state-of-the-art image generator. The proposed method has explicit controls on the drum type and additional continuous controls for selected perceptual audio features. We have shown that the proposed style-based synthesis achieves a significantly reduced FAD compared to recent DNN-based drum synthesis methods [3, 4]. We have proposed a new means of training the feature control by using a differentiable implementation of the AudioCommons features to calculate the feature loss, and have demonstrated that this method significantly improves the feature coherence between target and measured features in the synthesized sounds when compared to [4]; we argue that the same improvement would hold compared to [8]. We also introduced a way to measure the fidelity of the control with respect to the input. To the best of our knowledge, the proposed DNN is the first achieving drum synthesis at a 44.1kHz sample rate (for sounds with a duration of 1.5s) with an inference speed more than 50 times faster than real time on a consumer GPU. In terms of future work, we will continue to work on sound quality and additional controls, notably regarding velocity. ### 6. REFERENCES [1] T. Karras, S. Laine, and T. Aila, “A Style-Based Generator Architecture for Generative Adversarial Networks,” 2018. [Online]. Available: http://arxiv.org/abs/1812.04948 [2] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, “Analyzing and Improving the Image Quality of StyleGAN,” Dec. 2019. [Online]. Available: http://arxiv.org/abs/1912.04958 [3] C. Donahue, J. McAuley, and M. Puckette, “Adversarial Audio Synthesis,” 2018. [Online]. Available: http://arxiv.org/abs/1802.04208 [4] A. Ramires, P. Chandna, X. Favory, E. Gomez, and X. Serra, “Neural Percussive Synthesis Parameterised by High-Level Timbral Features,” in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, vol. 2020-May. Institute of Electrical and Electronics Engineers Inc., Nov. 2020, pp. 786–790. [Online]. Available: http://arxiv.org/abs/1911.11853 [5] G. Reid, “Practical Snare Drum Synthesis.” [Online]. Available: https://www.soundonsound.com/techniques/practical-snare-drum-synthesis [6] ——, “Practical Cymbal Synthesis.” [Online]. Available: https://www.soundonsound.com/techniques/practical-cymbal-synthesis [7] J. Drysdale, M. Tomczak, and J. Hockman, “Adversarial Synthesis of Drum Sounds,” in Proceedings of the 23rd International Conference on Digital Audio Effects (DAFx2020), Sep. 2020, pp. 24–30. [8] J. Nistal, S. Lattner, and G. Richard, “DrumGAN: Synthesis of Drum Sounds With Timbral Feature Conditioning Using Generative Adversarial Networks,” 2020. [Online]. Available: http://arxiv.org/abs/2008.12073 [9] T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive Growing of GANs for Improved Quality, Stability, and Variation,” pp. 1–26, 2017. 
[Online]. Available: http://arxiv.org/abs/1710.10196 [10] S. Rouard and G. Hadjeres, “CRASH: Raw Audio Score-based Generative Modeling for Controllable High-resolution Drum Sound Synthesis,” Jun. 2021. [Online]. Available: http://arxiv.org/abs/2106.07431 [11] A. Odena, C. Olah, and J. Shlens, “Conditional Image Synthesis With Auxiliary Classifier GANs,” 2016. [Online]. Available: http://arxiv.org/abs/1610.09585 [12] O. Gillet and G. Richard, “ENST-Drums: An extensive audio-visual database for drum signals processing,” in ISMIR 2006 - 7th International Conference on Music Information Retrieval, 2006, pp. 156–159. [13] K. Kilgour, M. Zuluaga, D. Roblek, and M. Sharifi, “Fréchet audio distance: A reference-free metric for evaluating music enhancement algorithms,” in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2019, pp. 2350–2354. [14] A. Pearce, T. Brookes, and R. Mason, “Hierarchical ontology of timbral semantic descriptors,” AudioCommons - Deliverable D5.1, pp. 1–34, 2016. [15] G. Peeters, “A Large Set of Audio Features for Sound Description,” Tech. Rep., 2004. [16] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Networks,” pp. 1–9, 2014. [Online]. Available: http://arxiv.org/abs/1406.2661 [17] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, “WaveNet: A Generative Model for Raw Audio,” Sep. 2016. [Online]. Available: http://arxiv.org/abs/1609.03499 [18] H. Petzka, A. Fischer, and D. Lukovnicov, “On the regularization of Wasserstein GANs,” Sep. 2017. [Online]. Available: http://arxiv.org/abs/1709.08894 [19] C. Jacques and A. Roebel, “Data Augmentation for Drum Transcription with Convolutional Neural Networks,” 2019. [Online]. Available: http://arxiv.org/abs/1903.01416 [20] A. Röbel, “A new approach to transient processing in the phase vocoder,” in Proc. of the 6th Int. Conf. on Digital Audio Effects (DAFx03), 2003, pp. 344–349. [Online]. Available: https://hal.archives-ouvertes.fr/hal-01161124 [21] K. Su, X. Liu, and E. Shlizerman, “Audeo: Audio generation for a silent performance video,” in Advances in Neural Information Processing Systems, 2020. [Online]. Available: https://arxiv.org/abs/2006.14348v1
Operational Issues Associated with an IBS Laying out the general case for an IBS is one thing. Writing the specifics of such a standard is another. Much of the devil is in the details. In this chapter, I offer specific answers to the following key questions: - Should the IBS be a unitary or two-level standard? - What elements of banking and banking supervision should an IBS include? - Who should set the standard? - How should compliance with an IBS be monitored and encouraged? Before addressing those specific operational issues, I will consider eight broad points about what an IBS can achieve and how it ought to be designed. **Broad Features of an IBS** First, an IBS is not a panacea. It would be unrealistic to expect an IBS to eliminate banking crises in developing countries—particularly if these countries do not make significant progress in reducing macroeconomic instability and the size and frequency of exchange rate misalignments. When the macroeconomy is in trouble and the real exchange rate is allowed to get way out of line, the banking system is sure to suffer. An IBS can improve mechanisms that cushion against macroeconomic volatility—bank capital, provisioning for loan losses, etc.—and it can reduce the independent contribution of banking-system weakness to an unhealthy macroeconomic environment. But an IBS cannot be a substitute for disciplined monetary, fiscal, and exchange rate policies, and it cannot engineer structural changes in the real economy (such as greater diversification in a country’s export structure) to reduce volatility.\footnote{As emphasized in chapter 2, a lack of diversification in the loan book of developing-country banks contributes to their vulnerability.} Also, even in countries with the most developed systems of banking supervision, many future bank failures go undetected during bank examinations. For example, a recent Federal Deposit Insurance Corporation (FDIC) (1996) study found that of the US banks that failed from 1980 to 1994, 36 percent of them had received the highest bank examination ratings (that is, CAMEL ratings of 1 and 2) two years prior to failure.\footnote{If one excludes types of bank failures that cannot be anticipated by safety and soundness examinations, as well as bank examinations that were more than one year old, the percentage of failed banks that had CAMEL ratings of 1 or 2 two years before failure drops to 16 percent (FDIC 1996). CAMEL is an abbreviation for five components of bank soundness: capital, assets, management, earnings, and liquidity. In an earlier study of the same issue, Benston (1973) found that of US commercial banks that failed from 1959 to 1971, almost 60 percent had been rated “no problem” on the last bank exam prior to collapse. Benston (1973) goes on to argue that the main reason examinations fail to predict bank failures is} An IBS should be seen as part of a comprehensive reform effort for banking and banking supervision (that would also include increased training for bank supervisors and improvements in the broader financial and legal infrastructure). A realistic objective for an IBS is that it lead to a lower frequency of serious banking crises in developing countries than would occur in its absence; given the costs of past banking crises in developing countries, this objective—if it can be achieved—would represent an important accomplishment. 
Second, if an IBS is going to make a real dent in the incidence of serious banking crises in developing countries, it will need to \textit{encompass an interrelated set of banking system and supervisory reforms}. Changing one or two elements of the banking architecture is unlikely to make a large difference. For example: - If nothing is done to improve accounting and provisioning practices, neither statements of a bank’s financial condition nor measures of bank capital will be accurate; as such, public disclosure will not fortify market discipline, and prompt-corrective-action supervisory measures based on capital-zone tripwires will be ineffective. - If nothing is done about connected lending, increasing capital requirements for banks will not alter the incentives for excessive risk taking by bank owners. - If nothing is done to institute prompt corrective action by bank supervisors, there may be little consequence of bank capital—even correctly measured—dropping below the regulatory requirement. - If nothing is done to make government policymakers more accountable for granting “too big to fail” assistance to severely undercapitalized banks, then private creditors will not heed any improved public information on banks. - If nothing is done to reduce the proclivity of governments to use banks as their quasi-fiscal agents, efforts to improve the credit review process are apt to be frustrated. - If nothing is done to buttress the legal authority of bank supervisors, then tougher prudential standards are not likely to be enforceable. In other words, there is a critical mass of reforms in developing countries that, if not achieved, may result in little improvement in the bottom line. One frequent criticism of such a comprehensive approach to banking reform is that some of these elements would go beyond the traditional jurisdiction of banking supervisors (e.g., the Basle Committee of Bank Supervisors). For example, efforts to increase the transparency of government involvement in the banking system (by, for example, including such quasi-fiscal operations in the government’s budgetary figures) are more the responsibility of the IMF than of the Basle Committee. Similarly, international accounting standards fall in the sphere of the International Federation of Accountants’ International Accounting Standards Committee (IASC). Facilitating bank seizure of collateral on nonperforming loans involves changes in countries’ legal codes. And better preparation for financial liberalization will require, inter alia, more training of bank supervisors, which is part of ongoing activities of the World Bank and the regional development banks. My rebuttal to the jurisdictional argument is that if serious banking reform requires a coordinated effort among bank supervisors and other interested official parties, then a vigorous effort should be made to obtain such cross-agency cooperation. If that means one official institution cannot be solely responsible for designing an IBS, so be it.\(^3\) Third, an IBS does not imply (full) international harmonization of banking standards. 
So long as an IBS is designed as a *minimum set of international banking standards*, it represents only a *partial international harmonization* of standards; that is, it still leaves room—beyond the minimum—for individual countries to maintain their national preferences toward risk, as well as to maintain some of their institutional diversity.\(^4\) For example, if Argentina wants its banks to disclose more information on their financial condition than stipulated in the IBS, it would be free to do so. Likewise, since an IBS would not step into the debate on the securities and insurance activities of banks, it would not stop France and Germany from maintaining their universal banking structures, while the United States and Japan could continue their de jure limitation on such activities by banks. An IBS that stops well short of full harmonization of banking structures and supervisory practices merits emphasis because, as illustrated in appendix C, tables C.1 and C.2, there remain significant differences on these matters even among the G-10 and EU countries. Fourth, an IBS would not necessarily decrease competition in the banking industry. As highlighted by L. White (1996), in industries where national governments act to reduce competition, an international standard can serve to reduce national protectionism. Two examples suffice to illustrate the point. If governments provide state-owned banks with cheap capital and routinely bail out such institutions when they suffer large credit losses, an IBS that discourages these subsidies (or taxes) can increase global competition in the banking industry. Likewise, if generous national safety nets induce banks to substitute official (implicit or explicit) safety-net guarantees for private capital, then an IBS that sets a minimum international capital standard can reduce these national subsidies toward banks and increase competition (L. White 1996). Fifth, like other international regulatory initiatives, an IBS needs to confront the test that there be market failures, externalities (spillovers), or public goods that extend beyond national borders, and that cannot be handled adequately by national regulation (Herring and Litan 1995; L. White 1996). As argued in chapter 2, I believe an IBS can pass that test: there are nontrivial cross-border spillover effects of developing-country banking crises; there are market failures associated with asymmetric information, with connected lending, and with heavy involvement of national governments in the banking industry; and accurate and timely public information on the financial condition of banks has attributes of a public good. Also, based on the history of the past 15 years, it is unlikely that competition among national banking regulators in developing countries will motivate serious banking reform. Sixth, an IBS must consider the costs of regulation and the possibility that flawed or outmoded regulations could make matters worse.\(^5\) That is, there can be government failure as well as market failure. It is partly for this reason that an IBS ought to be voluntary. If countries view the costs of participating in an IBS as higher than the benefits, they need not sign up. --- \(^3\) This issue will be taken up again later in this chapter. \(^4\) For a discussion of different levels of harmonization of international regulatory standards, see Herring and Litan (1995). For an analysis of why it is not desirable to impose the same organizational structure on financial markets in all countries, see Kaufman and Kroszner (1996). 
Similarly, if they decide that changes in the structure of the banking industry have made an IBS outmoded or counterproductive—and agreement cannot be reached on a revision of the IBS—they can withdraw. In this sense, countries will vote with their feet as to whether the IBS is a club worth joining. Seventh, an IBS should include both quantitative and qualitative elements. The prescription for some regulatory and supervisory problems (e.g., minimum bank capital ratios, limits on connected lending) can and should be delineated in quantitative terms, but many other problems (improved public disclosure, prompt corrective action on the part of bank supervisors, stricter accounting and provisioning practices, etc.) are best handled primarily in qualitative terms. Indeed, appendix B shows that many of the more useful international guidelines in the financial area have been qualitative in nature. Whether quantitative or qualitative, IBS guidelines need to be specific enough to serve as benchmarks for performance evaluation by the monitoring agency (e.g., it will not be sufficient to call for appropriate asset classification unless some indication is given about what “appropriate” means). Eighth, as with the IMF’s SDDS, countries (not individual banks) would sign on to an IBS. Once a country agreed to participate, it would alter its national banking laws (if necessary) to accommodate any features of the IBS not already included; at that point, a country’s banks would be covered.\(^6\) But what about banks in a country that chose not to participate in the IBS? In that case, individual banks wanting to distinguish themselves from their less creditworthy competitors could indicate that they voluntarily comply with all elements of an IBS under their control (much in the same way that some derivative dealers advertise that they voluntarily implement the G-30 guidelines on risk management of derivatives). Admittedly, they could not claim that national supervisory practices were subject to international monitoring, but they still might get some market premium by subscribing to a higher code of conduct. **A Unitary or Two-Level Standard?** An IBS could be a unitary standard applicable to all countries, or alternatively, a two-level standard where countries themselves would decide at which level to join. All previous international banking agreements have been unitary standards. It is argued that a unitary standard ensures all countries receive uniform treatment; it is also easier to administer. --- 5. See Merton (1995) for a discussion of the risks associated with implementing the wrong global regulatory standard. 6. Should all banks be covered or only internationally active banks? The Basle Capital Adequacy Accord, for example, was directed only at the latter group. The rationale for covering only internationally active banks is that these banks generate the largest international spillovers. The argument for wider coverage is that widespread failures at domestic banks generate (smaller but still) nontrivial spillover effects; domestically oriented banks represent too large a share of vulnerability to ignore; and, as developing countries increase their financial links with the rest of the world, more of their banks will become internationally active. I would argue for the wider definition of participating banks. 
Despite these considerations, I vote for a two-level standard on three grounds: differences in country circumstances, relevant transition periods, and lessons from standards in other areas. Moreover, potential difficulties with a two-level mandatory standard are reduced when the IBS is voluntary instead. Some of the most widespread and severe banking problems are among the transition economies and developing countries of Africa and Asia. Yet financial and banking structures and the degree of market orientation in these countries are typically quite different from those in the more advanced emerging economies. What is of first priority and feasible in the way of banking reform is therefore likely to be different in, say, China, Russia, and India than in, say, Hong Kong and Chile. For example, the share of total banking assets owned by the state is almost 90 percent in India, whereas it is zero in both Hong Kong and Singapore. A two-level standard would better accommodate these differences. A two-level standard would lead to a more desirable transition period than would a unitary one. Note that implementation of the Basle Accord on risk-weighted capital standards took four years for the G-10 countries; similarly, as noted earlier, implementation of the Minimum Standards guidelines has been incomplete and uneven across countries four years after their agreement. The IMF’s SDDS, applicable only to countries heavily involved in international capital markets, will have a transition period of two and one-half years. If there is a unitary standard, then a choice must be made between setting it at a high level (i.e., “best practice” guidelines) or a low level (i.e., a minimum standard); the former could imply that many developing countries could not meet the standard for very considerable periods of time (perhaps a decade or more), while the latter may not yield much incentive for the emerging market economies to make important further improvements in their regimes. Looking beyond international banking agreements, two-level standards are more common, especially when such agreements are meant to cover a heterogeneous group of countries. The IMF’s Articles of Agreement, for example, specify that countries can adopt transitional arrangements (Article XIV status) before accepting the obligations of current-account convertibility (Article VIII status). At present, more than a third of the IMF’s member countries still avail themselves of such transitional arrangements. There is an even closer parallel with the IMF’s new data standards, which feature a basic, transitional standard that all countries should satisfy, and a stricter standard that would apply to countries that are more heavily involved with international capital markets. Global and regional trade agreements, likewise, often specify longer transitional periods for developing countries. For example, APEC’s recent “free trade” commitment calls for industrial countries to meet the target by 2010, but gives developing countries until 2020. A similar arrangement might work for an IBS: an upper-level (stricter) standard that would probably attract banks and countries more heavily involved with international capital markets, and a basic (transitional) standard that would apply to all participants. The main incentive to sign on to the higher standard would be the market premium attached to having satisfied more rigorous entry qualifications. But other incentives could also be contemplated. 
For example, in line with supervisory arrangements in the United States, countries and banks meeting the higher standard (including higher capital requirements and stricter disclosure) could be subject to lighter supervisory oversight. So long as subscription to an IBS is voluntary and qualification for both levels is based on objective criteria rather than merely an industrial-country/developing-country classification, administering a two-level standard might not be much harder than administering a unitary one. Also, claims of “unequal treatment” would carry less weight. If countries rather than the monitoring agency choose the level they subscribe to, the monitoring agency need not decide when to “graduate” countries from the lower level to the upper one; instead, it would reject or accept a country’s application based on objective criteria for a given level. As regards equal treatment, there is a strong case against assigning industrial countries ex ante to the upper level and developing countries to the lower one—even though the primary focus of an IBS is on improving banking systems and banking supervision in developing countries and ex post most industrial countries would probably be in the upper level and most developing ones in the lower level (at least to start). Just because the incidence of serious banking crises in industrial countries has been lower than in developing countries over the past 15 years does not mean that banking systems and banking supervision in industrial countries are free from serious shortcomings. For example, “evergreening” of bad loans (i.e., poor asset classification) and regulatory forbearance have been prominent features of the ongoing banking crisis in Japan. Heavy and misguided government involvement has been evident in the sizable public bailout of Credit Lyonnais in France. Poor preparation for financial liberalization was instrumental in the late 1980s/early 1990s banking crises in Finland, Norway, and Sweden. Poor internal controls were a key factor in the recent troubles at Daiwa and Barings. And the bitter fruits of an incentive-incompatible official safety net were dramatically illustrated in the US savings and loan crisis.\(^7\) In short, industrial countries should not get a free ride; they need to satisfy the same objective entry criteria as developing countries do. An IBS can therefore be a vehicle for motivating further improvements in industrial-country banking systems. Any IBS that discriminated against developing countries would not provide those countries with the proper incentives for reform. For example, if one could make the case on objective grounds that, say, Hong Kong and Chile were better placed to qualify for an upper-level IBS than a few industrial countries, that differentiation should not be thwarted by some country-group classification. Following the same line of argument, it would be totally inappropriate to design an IBS only for developing countries. There can be different levels of certification, but qualification for those levels must be nondiscriminatory. While I believe that a two-level IBS would be superior to a unitary standard, the latter would be much better than having no IBS at all. --- \(^7\) See Goldstein et al. (1993) for a discussion of these industrial-country banking problems. Calomiris and White (1994) have calculated that the deposit insurance cost to taxpayers of the US savings and loan debacle exceeded in real magnitude the losses of all failed banks during the Great Depression. 
**What Should an IBS Include?**

To be truly comprehensive, an IBS would need to specify guidelines for all the important aspects of banking supervision, including, inter alia: deposit insurance; lender-of-last-resort operations; bank licensing and permissible banking activities; external audits; internal controls and internal audits; information requirements of bank supervisors; public disclosure; limits on large exposures and connected lending; capital adequacy; asset valuation and provisioning; foreign-exchange exposures; on-site banking inspections; legal powers and political independence of bank supervisors; the mix between rules and discretion in the implementation of corrective actions; globally consolidated supervision; cooperation (including exchange of information) between home- and host-country supervisors; and measures to combat money laundering.\(^8\) In addition, one would want to offer some guidance on the relevant infrastructure for good banking, including: interbank and government securities markets; payments, delivery, and settlement systems; and the legal and judicial framework.

Clearly, analysis of each of these elements would go beyond the scope of this study. I will therefore concentrate on *eight priority elements of an IBS, selected primarily for their past and potential contribution to banking crises in developing countries*. For each element, I attempt to convey the flavor of what should be required, along with some indication of which provisions might be reserved for the stricter (upper-level) standard (if an IBS were designed as a two-level standard rather than a unitary one).

--- \(^8\) For an excellent analysis of “best practice” in each of these supervisory dimensions, see IMF (1997a).

**Public Disclosure**

IBS participants should be required to publish timely and accurate information on the financial condition of banks so that both sophisticated professional investors and less sophisticated retail depositors can make an informed assessment of bank performance and profitability. At a minimum, such information should include a balance sheet, income statement, large off-balance-sheet exposures, and a summary of major concentrations of credit and market risk.\(^9\) This material should be prepared on a globally consolidated basis, in accordance with international accounting standards, and should be audited by a reliable independent external auditor.\(^10\) There should be enough detail so that readers can gauge the breakdown between interest and noninterest income and expenses, the relationship between nonperforming loans and loan-loss provisions, how well or poorly the bank is capitalized, and how profitable the bank is relative to its competitors (as revealed by traditional indicators, such as the return on equity, the return on assets, etc.). If a common format for such public disclosure of banks could be agreed, this, like a common international accounting standard, would be most welcome (since it would both reduce transaction costs and facilitate comparisons among banks within and across countries). IBS participants would agree to review their legal codes to ensure that banks are liable for serious penalties if they are found to have been issuing false or misleading information to the public.

For upper-level status, banks could also be required to display prominently their most recent ratings from internationally recognized credit-rating agencies (including any downgradings). If they have not been rated, banks should disclose that fact.
Upper-level participants would also commit to adopting public disclosure recommendations (jointly agreed by the Basle Committee, IOSCO, and the Eurocurrency Standing Committee) on the trading and derivative activities of banks and securities firms.\(^11\) Appendix D provides two examples of good public disclosure—one for the banking system as a whole and one for individual banks. The first shows the aggregate data published quarterly for 3,000 national banks in the United States, while the second gives the disclosure requirements for individual banks under New Zealand’s new supervisory regime.\(^{12}\)

--- 9. Later in this chapter, I introduce two additional disclosure requirements for IBS participation, specifically related to the problems of government involvement in the banking system and connected lending. 10. One problem here is that there are presently two competing international accounting standards: International Accounting Standards as drawn up by the International Accounting Standards Committee and Generally Accepted Accounting Principles (GAAP) used in the United States. See W. White (1996) for a discussion of their relative advantages and disadvantages. Discussions are ongoing among accounting bodies in the major industrial countries to see if agreement can be reached on a single international accounting standard. In the interim, use of either GAAP or international accounting standards might be acceptable for an IBS. 11. See Basle Committee on Banking Supervision (1996) for an explanation of this disclosure agreement.

**Accounting and Legal Framework**

The aim here should be to move closer to internationally recognized loan classification and provisioning practices and to remove undesirable legal impediments to the pledging, transfer, and seizure of loan collateral and to the statutory authority of supervisors to carry out their mandate. IBS participants would agree to set out clearly the criteria and rules/practices they employ to classify loans, provision for loan losses, and suspend accrual of overdue interest. In classifying loans, participants would agree to give appropriate weight to an assessment of the borrower’s current repayment capacity, to the market value of collateral, and to the borrower’s past record, and they would not rely exclusively on the loan’s payment status.\(^{13}\) Participants would also pledge to discourage and monitor accounting devices that facilitate the “evergreening” of bad loans.\(^{14}\) The time a loan could be in arrears before it was classified as nonperforming would be no longer than 150 days. For upper-level status, that time period could be 90 days. Each participant should have mandatory provisioning rules against bad loans. For upper-level status, participants would agree to meet an international provisioning standard (if one can be agreed); pending such an agreement, upper-level participants would maintain a provisioning coverage ratio (of loan-loss reserves to nonperforming loans) not more than 10 percent below the Organization for Economic Cooperation and Development (OECD) average for the previous five-year period.
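A minimal sketch of how these timing and provisioning floors could be checked mechanically. The 150-day and 90-day arrears limits come from the text above; the function names, the loan representation, and the reading of “10 percent below the OECD average” as ten percentage points are illustrative assumptions, not part of the proposal.

```python
# Illustrative only: encodes the arrears limits and provisioning floor
# sketched above. Names and the percentage-point reading of "10 percent
# below the OECD average" are assumptions.

def nonperforming(days_in_arrears: int, upper_level: bool) -> bool:
    """Classify a loan as nonperforming: 90-day limit under the upper-level
    standard, 150-day limit under the basic standard."""
    limit = 90 if upper_level else 150
    return days_in_arrears > limit

def meets_provisioning_floor(loan_loss_reserves: float,
                             nonperforming_loans: float,
                             oecd_avg_coverage: float) -> bool:
    """Upper-level floor: coverage ratio (reserves / NPLs) no more than
    10 percentage points below the OECD five-year average."""
    if nonperforming_loans == 0:
        return True                      # nothing to provision against
    coverage = loan_loss_reserves / nonperforming_loans
    return coverage >= oecd_avg_coverage - 0.10

print(nonperforming(days_in_arrears=120, upper_level=True))            # True
print(meets_provisioning_floor(45.0, 60.0, oecd_avg_coverage=0.80))    # True: 0.75 >= 0.70
```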
On the legal side, IBS participants would review their legal and commercial codes to certify that laws governing bankruptcy and the recovery and pledging of collateral (for bank loans) do not impose undue costs on banks. In addition, participants would confirm the legal authority of bank supervisors to carry out their responsibilities (e.g., issuance and revocation of banking licenses, requests for information, setting of prudential guidelines/regulations, conducting on-site inspections, closure of insolvent banks, etc.).\(^{15}\)

--- \(^{12}\) Note that disclosure requirements for banks in New Zealand are more demanding than those in most other industrial countries, and that New Zealand’s new supervisory regime places greater reliance on public disclosure (relative to prudential requirements) to discipline banks than do regimes in other industrial countries. It is sometimes argued that New Zealand can afford to rely so much on disclosure because large banks in New Zealand are foreign owned and thus subject to supervision in their home country. \(^{13}\) See Meltzer (1995) for a description and analysis of how Chile strengthened its asset classification and provisioning regime. \(^{14}\) De Juan (1996, 101) highlights three signs of “evergreening” and weak repayment capacity: “… (i) the financial statements of the borrower show negative net worth and/or negative cash flow; (ii) the loan has a history of consecutive rollovers, and the volume of each new loan is equal to or above the principal plus interest of the previous loan; and (iii) the principal or interest of previous loans is not paid in cash, but through refinancing facilities extended by the same creditor bank.”

**Internal Controls**

Because of increased bank involvement in trading activities and the tremendous growth of complex financial instruments over the past decade, it is more difficult for bank supervisors and creditors to monitor accurately the risk profile of banks.\(^{16}\) During the same period, there have been several notable failures at financial firms (e.g., Barings, Daiwa, Sumitomo) where time-honored principles of prudent risk management (e.g., separation of authority as between front- and back-office operations and awareness by senior management of the size of exposures) were violated (IMF 1996a). These developments underscore the importance of good internal controls at banks as the first line of defense against excessive risk taking—be it market risk, credit risk, legal risk, or operational risk.

Participating banks would agree to have available for inspection a clear written account of what procedures and safeguards are in place as part of their internal risk management. It should address how risks are measured and tracked in real time; which members of senior management and the board are responsible for oversight and for “pulling the plug” if actual exposures exceed prespecified limits; how exposure limits in the loan book and trading book are set; how different functional risks within the firm are segregated; how the consistency and accuracy of internal record keeping is cross-checked; the amount of capital that is available to cover losses in various risk categories; what backup there is in case of computer breakdowns or other information technology problems; and what safeguards have been introduced to discourage and detect fraud and money laundering. In addition, IBS participants should certify that a reliable, independent internal audit function is in operation.
For upper-level status, participants would certify that banks with significant involvement in derivative markets are implementing the G-30 (1993) guidelines on risk management of derivatives, as well as the recommendations for combating money laundering promulgated by the Financial Action Task Force on Money Laundering (1990).

--- 15. A particularly important area here is the ability of supervisors to get the data they need to evaluate a bank, including data on off-balance-sheet and off-shore activities; see IMF (1997a). 16. See BIS (1996) and IMF (1996a) for figures on the growth of the over-the-counter and exchange-traded derivative markets during the 1990s. Goldstein (1995b) and Hoenig (1996) discuss the difficulties that financial regulators face in trying not to “fall behind the curve” in an innovative global capital market. Garber (1996) provides an account of how Mexican banks in 1994 used off-shore structured notes to evade national prudential regulations on net open currency positions.

**Government Involvement**

As highlighted in chapter 2, state-owned banks and burdensome developing-country government involvement in privately owned banks have drained public finances and generated inefficient resource allocation in banking services. Despite this dismal track record, it is neither realistic nor desirable that an IBS call for immediate privatization of all state-owned banks or mandate an end to all policy-directed lending in developing countries. After all, almost all countries have at some time intervened to influence the allocation of bank credit for what they deemed socially desirable purposes. Also, there may well be situations in developing countries where some government involvement can be legitimately defended. But what an IBS can do is bring greater transparency and accountability to government ownership and involvement in the banking system. This should subject such operations to greater public scrutiny and make it more difficult to use the banking system as a quasi-fiscal device to circumvent legislative and political constraints on the budget. Moreover, an IBS can encourage financial institutions that operate with policy-based lending constraints to give greater weight to commercial considerations in their credit decisions, to avoid costly future bailouts. And an IBS can even ask governments to consider more carefully whether privatization of some or most of their state-owned banks would not be in their long-term interest.

Toward this end, IBS participants would agree to

- include in the government budget all government costs and quasi-fiscal operations that involve the banking system (as recently recommended by the IMF [1996b]);\(^{17}\)
- annually publish data on nonperforming loans in state-owned banks (on a basis that permits comparison with privately owned banks);
- disclose the nature and extent of government instructions to banks on the allocation of credit (be it in state-owned or privately owned banks);
- subject state-owned banks to an external audit by a private independent external auditor and publish the results of that audit; and
- direct state-owned banks to give due attention to creditworthiness in their lending decisions.

17. Mackenzie and Stella (1996) explain how one might define and measure quasi-fiscal operations of public financial institutions. 18.
Kaufman (1996a) urges developing countries where state-owned banks account for an important share of total bank assets to recapitalize all banks so that they are market-value solvent.

For upper-level status, countries where state-owned banks account for a significant share of total banking assets would agree to review the costs and benefits of their state-owned banks, with an eye toward assessing the scope for privatization of such institutions.

**Connected Lending**

IBS participants would establish an exposure limit on lending to connected parties, endorse the principle that lending to connected parties should be on terms that are no more favorable than those extended to nonrelated borrowers of a similar risk class, outlaw practices that make it difficult or impossible for supervisors to verify the accuracy of reported connected-lending exposure (e.g., use of fictitious names, dummy corporations, etc.), and publicly disclose the share of loans going to connected parties and the identity of large shareholders and their affiliations.\(^{19}\) For upper-level status, participants would establish reporting thresholds (to bank supervisors) set below the maximum connected-lending limit, to give supervisors advance warning of rapidly rising exposure to connected lending.

--- \(^{19}\) Exposure limits on connected lending should be additional to those on maximum exposure to a single borrower. According to a recent survey of the Basle Committee (Padoa-Schioppa 1996), 90 percent of countries do not allow lending to a single customer to exceed 60 percent of the bank’s capital, and roughly two-thirds of countries maintain the stricter exposure limit of 25 percent of capital. See Goldstein and Turner (1996) for the exposure limits on single borrowers in a group of emerging economies.

**Bank Capital**

Signatories to an IBS would adopt the existing 8 percent risk-weighted capital standard for credit risk, along with the recent amendment for market risk. To reflect the need for higher capital when the operating environment is relatively volatile, countries seeking upper-level status would apply a “safety factor” if their recent history of loan defaults, restructured loans, and/or government assistance to troubled banks was significantly higher than the OECD average over, say, the past five years. This safety factor could possibly involve multiplying the level I capital requirement by 1.5, so that “volatile” countries would apply a minimum risk-weighted capital standard for credit risk of 12 percent. This approach would respect the principle of equal treatment. Any country—industrial or developing—that had a relatively volatile operating environment for its banks would apply the higher requirement if it wanted to meet the upper-level standard. Also, a country’s actions to reduce that volatility (e.g., more stable macroeconomic policies) would, if sustained, eventually be reflected in a lower capital requirement. Much of this parallels the Basle Committee’s approach to the determination of regulatory capital for market risk (Basle Committee on Banking Supervision 1996; Padoa-Schioppa 1996).
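To make the safety-factor arithmetic explicit—8 percent multiplied by 1.5 gives the 12 percent requirement cited above—here is a minimal sketch. The comparison of a country’s default history against the OECD average is a stand-in for the text’s qualitative “significantly higher” criterion, and the function name is our own.

```python
# Illustrative sketch of the two-tier capital rule described above. The 8%
# standard and the 1.5x safety factor are from the text; the volatility
# test against the OECD average is a hypothetical stand-in.

BASIC_STANDARD = 0.08    # Basle risk-weighted capital standard for credit risk
SAFETY_FACTOR = 1.5      # multiplier for volatile operating environments

def required_capital_ratio(default_history: float,
                           oecd_average: float,
                           upper_level: bool) -> float:
    """Minimum risk-weighted capital ratio a country would apply."""
    if not upper_level:
        return BASIC_STANDARD
    volatile = default_history > oecd_average   # hypothetical test
    return BASIC_STANDARD * SAFETY_FACTOR if volatile else BASIC_STANDARD

# A volatile country seeking upper-level status applies 0.08 * 1.5 = 0.12.
print(round(required_capital_ratio(0.09, 0.04, upper_level=True), 4))   # 0.12
```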
**An Incentive-Compatible Safety Net and Resisting Pressures for Regulatory Forbearance**

The aim here should be to retain the positive features of an official safety net for banks (i.e., discouragement of bank runs and limitation of systemic risk) while reducing its negative (moral hazard) effects (i.e., less market discipline from bank creditors, excessive risk taking by banks, increased costs for taxpayers, and delay in enforcing corrective actions on undercapitalized banks by financial regulators). To do that, the safety net must incorporate incentives that tilt the behavior of the main players in the right direction.

The most promising approach to date for designing an incentive-compatible official safety net is the system of structured early intervention and resolution (SEIR), put forward by Benston and Kaufman (1988) in the late 1980s and incorporated with some modifications in US banking legislation in the early 1990s.\(^{20}\) The losses (at least $150 billion) incurred in the savings and loan debacle and the prospect of similar difficulties for US commercial banks supplied the political motivation for reform. The key legislative vehicle was the Federal Deposit Insurance Corporation Improvement Act (FDICIA) of 1991. The underlying strategy has two pillars: first, to maintain deposit insurance for banks but to use regulatory sanctions to mimic the penalties that the private market would impose on banks (as their financial condition deteriorated) if they were not insured, and second, to reduce greatly the discretion that regulators have in imposing both corrective actions and closure of a bank.

The safety-net reforms embodied in FDICIA legislation can be summarized as follows: (1) government deposit insurance is retained for small depositors;\(^{21}\) (2) deposit insurance premiums paid by banks are risk weighted (depending on their capital and bank examination rating);

--- 20. Benston and Kaufman (1996) argue that while FDICIA was a big step forward in deposit-insurance reform, it should have set the capital-zone thresholds higher, used a simple leverage ratio to measure capital (rather than using both this ratio and the Basle risk-weighted one), embraced market-value accounting, established stiffer penalties for Federal Reserve lending through the discount window to banks that subsequently failed, made wider spreads between the deposit insurance premiums paid by the safest and riskiest banks, and given even less scope for discretion in applying prompt corrective action and least cost resolution. 21.
The rationale for covering small depositors is that they might otherwise run into currency when banks get into trouble, they are generally less adept than large bank creditors in evaluating the true financial condition of banks, and they have enough political muscle that they would likely be protected ex post in any case.

(3) banks become subject to progressively harsher regulatory sanctions (e.g., eliminating dividends, restricting asset growth, and changing management) as their capital falls below multiple capital-zone tripwires; (4) by the same token, well-capitalized banks receive “carrots” in the form of wider bank powers and lighter regulatory oversight; (5) regulators’ discretion is sharply curtailed with respect to initiating “prompt corrective actions” and resolving a critically undercapitalized bank at least cost to the insurance fund (“least-cost resolution”); (6) effective 1 January 1995, the insurance fund is generally prohibited from protecting uninsured depositors or creditors at a failed bank if this would increase the loss to the deposit insurance fund; and (7) provision is made for a discretionary, systemic-risk override to protect all depositors in exceptional circumstances (when not doing so “would have serious adverse effects on economic conditions or financial stability”)—but activation of this override requires explicit, unanimous approval by the most senior economic officials and subjects any bailout to increased accountability (Benston and Kaufman 1988, 1996; Kaufman 1996a, 1996b). Table 3.1 summarizes the prompt-corrective-action features of FDICIA.

Proponents of SEIR argue that it improves incentives on at least five counts (Benston and Kaufman 1996; Kaufman 1996b). Because uninsured creditors of banks realize they will be at the end of the queue if a bank gets into trouble, they will monitor banks more assiduously, thereby enhancing market discipline. Because bank owners and managers know the penalties in advance if losses are sustained and banks become undercapitalized, they will be less inclined to engage in excessive risk taking and will not allow bank capital to fall too low. Because bank supervisors are largely obliged to undertake prompt corrective action and least-cost resolution, they will be less susceptible to pressures for regulatory forbearance. Because the most senior economic officials know that granting “too large to fail” assistance requires unanimous approval and involves increased public scrutiny, they will be dissuaded from doing so unless there is a clear systemic threat at hand. And because the explicit closure rule calls for resolving a failed bank while it still has positive net worth, losses to the deposit insurance fund should be small (thereby making it less costly to keep the fund fully funded).\(^{22}\) In contrast, safety-net regimes that do not incorporate SEIR often leave a key question unanswered: What happens when bank capital drops below the regulatory standard?

\(^{22}\) If the deposit insurance scheme lacks sufficient financial resources, even insured depositors may be tempted to run during periods of bank weakness; moreover, regulators will be more inclined to grant regulatory forbearance because there are insufficient resources to liquidate the bank.
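Table 3.1 itself is not reproduced in this excerpt, but the structure it summarizes—capital-zone tripwires mapped to largely mandatory, progressively harsher responses—can be sketched as follows. The numeric thresholds and the sample sanctions are placeholders invented for illustration; they do not reproduce FDICIA’s actual zone definitions.

```python
# Placeholder sketch of an SEIR-style prompt-corrective-action ladder.
# Zone names follow the FDICIA discussion above; thresholds and sanctions
# are invented for illustration and do NOT reproduce table 3.1.

CAPITAL_ZONES = [  # (minimum capital ratio, zone name, sample mandatory response)
    (0.10, "well capitalized", "lighter oversight, wider powers"),
    (0.08, "adequately capitalized", "no new sanctions"),
    (0.06, "undercapitalized", "suspend dividends, restrict asset growth"),
    (0.02, "significantly undercapitalized", "order recapitalization, replace management"),
]
CRITICAL = ("critically undercapitalized", "resolve at least cost to the insurance fund")

def prompt_corrective_action(capital_ratio: float) -> tuple[str, str]:
    """Map a bank's capital ratio to its zone and the required response."""
    for floor, zone, response in CAPITAL_ZONES:
        if capital_ratio >= floor:
            return zone, response
    return CRITICAL

print(prompt_corrective_action(0.07))   # ('undercapitalized', 'suspend dividends, ...')
```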
As acknowledged by Benston and Kaufman (1996), FDICIA has been in operation for only five years, the US economy has not undergone a major cyclical downturn during that period, and no US money-center bank has become critically undercapitalized during this period. In addition, broader economic factors have no doubt contributed to the recovery of the US banks and S&Ls. It is, therefore, too early to come to a definitive verdict on the effectiveness of FDICIA. Nevertheless, the preliminary signs are encouraging. Not only are bank failures and bank problems down and bank capital and profitability up, but as shown in table 3.2, a much higher share of uninsured depositors has gone unprotected since FDICIA came on stream. This is a strong signal that market discipline is beginning to bite.

Some exporting of FDICIA is already going on. In drawing lessons from its recent and ongoing banking difficulties, Japan plans to establish a prompt-corrective-action system in April 1998, and the banking laws of some developing countries (e.g., Chile) contain significant precommitment features. With no superior alternatives out there for reforming official safety nets, FDICIA-like features (to combat moral hazard and regulatory forbearance) ought also to be included in an IBS. For example, IBS participants could agree to make some corrective actions mandatory if bank capital dropped below the regulatory minimum, ensure there is a well-defined closure rule/procedure for banks, make it publicly known that uninsured creditors (including sellers of interbank funds) stand behind insured depositors and the deposit insurance fund in being protected from bank losses, and require that granting of “too large to fail” emergency financial assistance to banks be publicly approved by both the governor of the central bank and the minister of finance.

**Consolidated Supervision and Cooperation Among Host- and Home-Country Supervisors**

The Basle Committee on Banking Supervision has been on target in insisting that (1) all international banks be supervised on a globally consolidated basis by a capable home-country supervisor; (2) home-country supervisors be able to gather information from their cross-border banking establishments; (3) before a cross-border banking establishment is created, it receive prior consent from both the host- and home-country authorities; and (4) host countries have recourse to certain defensive actions (e.g., prohibit the establishment of banking offices) if they determine that conditions (1)-(3) are not being satisfied (Basle Committee on Banking Supervision 1996). Participants in an IBS should therefore agree to implement the 1992 Basle Minimum Standards.

**Could an IBS Be Agreed On?**

So much for the makeup of an IBS. But wouldn’t an IBS represent such an ambitious extension of existing international banking agreements as to preclude agreement? After all, several of these items were no doubt raised in the Basle Committee in previous years without garnering the requisite support. If agreement could not be reached among the G-10 countries, wouldn’t it be unrealistic to expect agreement on a wider list of banking reforms among a broader group of countries?

I find this criticism unpersuasive. Prior to the Mexican economic crisis of late 1994 to 1995, few would have anticipated reaching agreement on an international standard for publication of economic and financial data (the IMF’s SDDS), or on doubling the IMF’s line of credit from the General Arrangements to Borrow and its extension to 14 new member countries, or on establishing a concerted official position on the rescheduling of sovereign bank debt (the so-called “orderly workout” issue).
Yet, barely two years after the onset of the Mexican crisis, the international community has reached agreement on all three (Goldstein 1996b).

In analyzing past international agreements in the area of financial stability, Kapstein (1992) identifies three underlying factors: a shared recognition of a common problem, some agreement on how the financial system should function and how problems might best be addressed, and the continuing exercise of state power to make it happen. It is only within the last year or two, with the publication of several comprehensive studies, that the scope and severity of banking problems in developing countries have come to be more widely appreciated, particularly by observers in the G-10 countries (Lindgren, Garcia, and Saal 1996; Caprio and Klingebiel 1996a, 1996b; Honohan 1996). And it is only recently that research has produced a reasonable consensus on the factors behind these banking crises and the policy changes that would help to alleviate the problem (Caprio and Klingebiel 1996a, 1996b; Goldstein 1996a; Goldstein and Turner 1996; Kane 1995; Kaufman 1996a; Meltzer 1995; Rojas-Suarez and Weisbrod 1995, 1996a, 1996b, 1996c, 1996d). As regards leadership from the official sector, it was only at the Lyon Summit in June 1996 that G-7 heads of state (G-7 1996, 3) put the “… adoption of strong prudential standards in emerging economies” on their crisis prevention agenda, and it has been primarily during the past six months that senior international policymakers have begun to stress the need for a coordinated international approach to banking problems in developing countries (G-7 1996; Camdessus 1996; Summers 1996; Pou 1996). In short, each of Kapstein’s (1992) criteria for agreement is considerably closer to being satisfied now than it was even three years ago. What could not be agreed then is not necessarily what cannot be agreed now.

**Who Should Set the IBS?**

Since more robust banking systems and more effective banking supervision would be in their common interest, an IBS ought to be sponsored jointly by the international financial institutions (the IMF, World Bank, and BIS), the Basle Committee on Banking Supervision, regulatory and supervisory authorities from the developing world, and representatives of the banking industry. But who should set the specific guidelines for an IBS?

The main expertise needed to draft an IBS is banking supervision. This suggests that the Basle Committee on Banking Supervision should play a key role in the exercise; that is, it should draft the key provisions of the IBS that relate specifically to banking supervision. The Basle Committee’s leadership would give the IBS a brand name and provide a sense of continuity with earlier international banking agreements. But the Basle Committee should not be the only group working on an IBS, for at least four reasons.

First, as outlined above, a good IBS would be somewhat broader in design (e.g., international accounting standards, greater transparency for government involvement in the banking system, etc.) than the confines of traditional banking supervision; as such, other groups that have more direct responsibility for these adjacent issues (e.g., the IASC or IMF) should be involved and their contributions folded into the final product.
Such interagency collaboration would be even more essential if the ultimate aim were not merely to produce an IBS but rather to produce a minimum international standard for, say, banking and securities activities; in that case, the guidelines of securities regulators (i.e., IOSCO) would need to be folded into the broader standard. In either case, some international umbrella group at a higher level than the Basle Committee (e.g., the Interim Committee of the IMF or a working group of ministers of finance and central bank governors from larger industrial and developing countries) would need to coordinate the assembly of the final product.

Second, and pointing in the same direction, because an IBS introduces some issues not raised by earlier international banking agreements (e.g., international monitoring of national supervisory regimes), enlarges the intersection between the microeconomic and macroeconomic aspects of financial regulation, and would require large changes in banking practices in some countries, the Basle Committee’s collaboration with other interested parties should be more extensive and intensive than normal. For example, if the international financial agencies (the IMF, World Bank, and regional development banks) were assigned the tasks of advising countries on how to alter their banking and supervisory arrangements to conform to an IBS, of monitoring participating countries’ compliance with the standard (as discussed later in this chapter), and of intensifying their normal financial surveillance and financial-sector restructuring work, then their views ought to be sought as to whether any guidelines drawn up by the Basle Committee are sufficiently specific and comprehensive. Their views would be particularly valuable on whether any consensus on an IBS reached in the Basle Committee had ducked some of the tough issues (e.g., whether countries with volatile operating environments should have higher regulatory capital requirements, whether an IBS includes incentives that will reduce over time government involvement in the banking system, etc.) that are apt to be crucial in reducing the vulnerability of developing-country banking systems. Given the BIS’s considerable expertise in the intersection of the microeconomic and macroeconomic elements of financial regulation, it should likewise be accorded an important role in reviewing any draft IBS guidelines produced by the Basle Committee.

Third, the banking industry needs to be given ample opportunity to record its views on what elements should and should not be included in an IBS; after all, it is the banking industry that would need to absorb any costs associated with meeting the requirements of an IBS. The influence of their input should not be minimized. For example, the decision by the Basle Committee to permit banks to employ their own internal risk-management models to help calculate regulatory capital requirements for market risk occurred only after banks in several larger industrial countries expressed their dissatisfaction with the earlier proposal to base these capital requirements on a preset formula.\(^{23}\)

Fourth, the group that sets the IBS should have strong representation from developing countries. Since developing-country banking systems and banking supervision are the primary focus of the IBS exercise, the drafting group needs to have firsthand experience with developing-country banking supervision issues.
Without that experience, the IBS guidelines are not likely to be as well suited to the practical banking problems faced by developing countries as they could be with strong representation from these countries. Without adequate support from the developing countries, an IBS is unlikely to get off the ground. Some groups in developing countries are likely to resist the banking reforms necessary to qualify for IBS admission; some may even argue that an IBS is a scheme by industrial countries and their banks to reduce the competitiveness of developing-country banks by imposing onerous prudential standards—standards that many industrial countries will already have met or exceeded. Spokespersons for banking reform in developing countries will be better able to overcome opposition and convince their publics and their banking industries that such (voluntary) reforms are in the best interest of their country if they can legitimately say that they were full participants in drafting an IBS. Although the Basle Committee presently includes only bank supervisors from G-10 countries, there is no reason why the working group that drafts the IBS should not include significant representation from developing countries. Developing-country representatives should also serve on any other working groups that are contributing to an IBS.

\(^{23}\) See IMF (1995) for a discussion of this background to the recent amendment of the Basle Capital Adequacy Accord to cover market risk.

In the end, after the interagency collaboration is complete, there must be full agreement on the guidelines in an IBS. If, for example, the IMF and World Bank had a different view on minimum standards for accounting and provisioning than did the Basle Committee, countries participating in an IBS would not understand their obligations and the monitoring process would be unnecessarily complicated. When the smoke clears, there can be only one IBS.\(^{24}\)

**How Should Compliance with an IBS Be Monitored and Encouraged?**

This is probably the single toughest operational issue facing an IBS. There are basically two approaches. The traditional one, at least in the field of international banking agreements, is to have international recommendations ratified by ministers and governors, incorporated into national law or regulation, and then monitored/enforced by the national banking supervisor (W. White 1996). This approach has its advantages. By maintaining home-country control, the chances that reforms are “owned” by the home country are maximized, and the criticism that conditions are being imposed by an international agency is avoided. Also, national supervisors are apt to be more knowledgeable about local banking conditions than an outside group would be. The rub is that exclusive home-country control will weaken the implementation/credibility of an IBS in those countries where weak banking supervision is part of the problem. In those cases, an independent outside monitor should render an objective evaluation of whether an IBS is being implemented as agreed.
A hint of the complacency that might be associated with home-country monitoring is offered by a recent Basle Committee survey (Padoa-Schioppa 1996), which covered 129 countries: two-thirds of the countries reported that the supervisory agency is independent from the government; only 13 countries acknowledged that their banks grant loans in compliance with governmental directives; 72 percent of nonindustrial countries responded that they do not allow lending to a single customer to exceed 25 percent of the bank’s capital; and over 90 percent of all countries reported that supervisors verify the adequacy of banks’ accounting systems. It makes you wonder. Either all those studies showing political pressures on supervisors, government-directed lending, connected lending, and weak accounting systems to be major factors in banking crises were wrong (BIS 1996; Caprio and Klingebiel 1996a, 1996b; Folkerts-Landau et al. 1995; Goldstein and Turner 1996; Honohan 1996; Lindgren, Garcia, and Saal 1996; Meltzer 1995; Rojas-Suarez and Weisbrod 1996a, 1996b, 1996c, 1996d; Sheng 1996), or there has recently been a tremendous improvement in supervision.

\(^{24}\) This is not to say that in their financial surveillance and financial restructuring work, the IMF and the World Bank should not address areas that are not covered in an IBS. But for the areas that are covered, everyone has to be singing from the same hymn book.

The second approach is to entrust at least part of the monitoring to an international agency. This has been a long-standing practice in the areas of trade policy, macroeconomic stabilization, and sectoral reform (including the financial sector); note the roles of the General Agreement on Tariffs and Trade (GATT)-World Trade Organization (WTO), IMF, World Bank, and regional development banks (e.g., European Bank for Reconstruction and Development, Inter-American Development Bank, Asian Development Bank, etc.). Here, countries have decided that, despite the dilution of home-country control, evaluation by an international agency is critical to the agreement’s credibility.

But which international agency or agencies should do the monitoring? I believe the IMF and the World Bank group are the most logical candidates. Only they have the universal membership that would include all potential participants in an IBS. Also, monitoring compliance with an IBS would require on-site inspections and discussions with local supervisory authorities and local banks. The IMF and the World Bank already send missions to countries and only they currently have enough personnel to make on-site visits throughout the developing world.

I envision the Bretton Woods institutions carrying out at least three functions associated with an IBS. First, the World Bank and regional development banks could incorporate the IBS guidelines into the training, technical assistance, and financial restructuring advice that they already provide to many countries. In this sense, the IBS would help guide banking system reform in developing countries and delineate the banking-system preconditions that are necessary for developing countries to benefit from greater financial integration.\(^{25}\) Second, the IMF could carry the primary responsibility for determining whether countries voluntarily subscribing to the IBS were meeting their obligations. They would base that determination on off-site analysis and information obtained during missions (on-site) to the country.
During those missions, they would hold discussions with national bank supervisors and a sample of local banks. National banking supervisors would continue to have the primary oversight responsibility for their banks, but the mission would seek to reach a view, inter alia, as to whether national banking supervision itself was implementing faithfully the IBS guidelines. If a determination was made that a country was not meeting its IBS responsibilities, it could be given a fixed period of time to remedy the situation. If the country displayed serious and persistent nonobservance of the IBS guidelines, then the IMF would indicate publicly that the country’s subscription to the IBS was suspended. Like the IMF’s SDDS, an electronic bulletin board could be established on the internet listing those countries that subscribed to the IBS and were in good standing; persistent noncompliance would be signaled by taking a country “off the board.” This option of taking a noncomplying IBS member off the board is necessary to give the IBS credibility with private capital markets. If there are no significant penalties for not behaving as a good club member, then club membership will not yield a market return.

\(^{25}\) See the recent World Bank report (1997) for an extensive discussion of the preconditions for successful financial integration.

That said, it would be a mistake for the IMF to create the impression that an IBS member of good standing is immune to banking problems. As laid out in chapter 2, banking-sector vulnerability depends on a number of factors in addition to banking supervision, including the state of the macroeconomy and the appropriateness of the country’s exchange rate policy; the IBS does not address those sources of vulnerability, and implying otherwise would only lead to a downgrading of the monitoring agency’s credibility. Instead, the IMF should make it clear that being an IBS member in good standing carries a much narrower interpretation—namely, that the country is meeting IBS minimum standards of good banking supervision.

Third, the IMF and the World Bank could provide further incentive for signing on to the IBS and honoring its obligations by factoring compliance with the IBS guidelines into their policy conditionality decisions and/or by publishing their analysis of banking-sector developments. By including compliance with IBS guidelines as an element of policy conditionality, the IMF in its stabilization programs and the World Bank in its financial restructuring programs would give those countries seeking financial assistance a further incentive to undertake banking reform. From the perspective of the international financial institutions (IFIs), there is good reason for such conditionality: if nothing is done to overcome banking-sector fragility, other elements of stabilization and financial reform could be rendered ineffective and the IFIs’ chances of being repaid on time diminished. Also, at least in crisis situations, some banking system reforms are presumably already part of Bretton Woods conditionality; the IBS guidelines would just bring a more widely accepted framework to that element of conditionality. On the negative side, the more the IBS guidelines become a requirement, the less one captures the aforementioned advantages of a voluntary standard. Also, this additional source of leverage would only apply to countries seeking financial assistance from the IMF and the World Bank.
On top of that, the potential ambiguities of judging compliance with an IBS that contains many elements should not be underestimated.

Rather than simply conveying an “on-off” signal to the private capital markets about IBS membership (i.e., country x is or is not a member in good standing of the IBS), the IMF could publish a more informative signal—its analysis of banking-sector developments and the quality of banking supervision in individual countries. The IBS guidelines could then serve as a useful organizing device for such reports. Presumably, such an analysis would be included, along with an analysis of monetary, fiscal, and structural policies, in the IMF’s Article IV consultation report for the country. Again, such a monitoring role would affect incentives via its effect on information flows to private capital markets and ultimately on the country’s cost of borrowing in those markets. Whether it would be desirable to release to the markets that part of IMF consultation reports containing the staff’s analysis of economic policies and prospects has been hotly debated for at least a decade, and publishing an assessment of banking system soundness focusing on IBS benchmarks raises a similar debate; that is, enhanced market discipline versus concerns about precipitating crises and reducing the frankness of IMF consultations. Although the choice is not an easy one, I have concluded elsewhere (Goldstein 1995a) that, on balance, there is much more to be gained than lost from publishing Article IV reports, and I would extend that conclusion to analyses of banking systems as well.

At least three criticisms might be leveled against such a monitoring role for the IFIs. For one thing, it can be argued that IFIs do not have enough personnel with the requisite training and experience to make reliable evaluations of banking systems and banking supervision. This criticism carries some currency but should become less relevant over the medium term. Both the World Bank and the IMF have gained valuable and wide-ranging experience in assessing and providing technical assistance on developing-country banking systems. Yet, if an IBS were agreed on, their increased responsibilities in this area would no doubt require additional staff with banking supervisory expertise. This will take some time. In the interim, some assistance might come from short-term loans of bank supervisors from G-10 countries and from those emerging-market economies with more advanced supervisory systems.

A second line of criticism is that the IFIs are too politicized to make the hard decision of taking a nonperforming IBS member country off the board. But the IFIs have shown a willingness over several decades to interrupt their loans to countries under stabilization and restructuring programs when the latter have failed to meet agreed-upon performance criteria (a decision that can also generate significant effects in private capital markets). Why should suspension from the IBS be different in kind?

Yet a third objection is that having the IFIs comment—perhaps even publicly—on the performance of national bank supervisors would compromise the latter’s independence and effectiveness. I find this argument unconvincing. To begin with, there is something to criticize.
As detailed in chapter 2, there is no precedent for the wave of severe banking crises that have enveloped developing countries over the past 15 years; likewise, there have been some serious breakdowns in banking supervision in industrial countries, including that surrounding the current Japanese banking problem. Moreover, the contention that banking crises had little to do with banking supervision does not seem to be supported by existing analysis. Caprio and Klingebiel (1996b), for example, studied the factors contributing to 29 severe developing-country banking crises from 1980 to 1996 and concluded that poor supervision and regulation (broadly defined) were instrumental in more crises than was any other factor (e.g., recessions, declines in the terms of trade, fraud, lending to state enterprises, political interference, and deficient bank management). Also, one should not confuse independence with immunity from IFI criticism. For example, IMF and OECD publications have long provided an assessment of national monetary and fiscal policies (including evaluation of the monetary policies of independent central banks), without any claims that such assessments reduce the effectiveness of national authorities. Why then should national banking supervision receive a “special” exemption from such international surveillance?\(^{26}\) Indeed, with serious banking problems continuing to surface at disarming frequency (note the recent problems in South Korea and Thailand) and with weak national banking supervision identified as an important contributory factor, it seems incumbent on us to ask, “Who’s supervising the supervisors?”

26. Another possible instrument for international assessment of national supervisory policies would be “peer review” within the Basle Committee. While this would probably be some improvement over what we have now, such an exercise could easily succumb to “nonaggression pacts” that essentially eliminate criticism. See Bergsten and Henning (1996) on how such nonaggression pacts have reduced the effectiveness of peer pressure within the G-7.
## 1 Introduction

In this report we survey some results on the complexity of finding the spectral gap of a quantum Hamiltonian. First, let us review some basic definitions. A $k$-local Hamiltonian acting on a system of $n$ qudits with local dimension $d$ is a Hermitian positive semidefinite operator $H$ with a decomposition $$H = \sum_{i=1}^{m} h_i$$ of local terms $h_i$, where each term satisfies $\|h_i\| \leq 1$ and acts nontrivially on at most $k$ qudits. Physically, the Hamiltonian represents the energy of the system. The spectral gap of a Hamiltonian $H$ is defined to be the difference $\Delta(H) \equiv \lambda_2 - \lambda_1 \geq 0$ of its smallest two eigenvalues.

## 2 Finite systems

The first setting we study is the case of finite system sizes. This case has been most fully investigated by Ambainis [Amb14]. This work studies the following decision problem:

**Definition 1.** Given a local Hamiltonian $H$ over $n$ qudits and an error parameter $\epsilon(n) > 0$, the problem SPECTRAL-GAP is to decide whether the spectral gap $\Delta(H)$ is either (i) $\leq \epsilon(n)$ or (ii) $\geq 2\epsilon(n)$, promised that one of the two cases holds.

We restrict our attention to the case where $\epsilon(n) = \Omega(1/\text{poly}(n))$. Ambainis relates the complexity of this problem to two classes, $P^{\text{QMA}[\log(n)]}$ and $P^{\text{UQMA}[\log(n)]}$. The class $P^{\text{QMA}[\log(n)]}$ is the class of polynomial-time machines that are allowed to make $O(\log(n))$ queries to an oracle for QMA. The definition of $P^{\text{UQMA}[\log(n)]}$ is similar, but with queries to an oracle for UQMA: this is a class similar to QMA but with the added promise that in a YES instance, there is a unique satisfying witness state. The following two theorems are the main results achieved by Ambainis.

**Theorem 2.** SPECTRAL-GAP for $O(\log(n))$-local Hamiltonians is contained in $P^{\text{QMA}[\log(n)]}$.

*Proof.* Let the given Hamiltonian be $H$, and let the Hilbert space it acts on be $\mathcal{H}$. First we use $O(\log(n))$ queries to the QMA oracle to determine by binary search a value $a$ such that the minimum energy $\lambda_1$ of $H$ lies in the interval $[a, a + \epsilon/4]$. Next, we call the QMA oracle on the following verification procedure, which acts on a witness state in $\mathcal{H} \otimes \mathcal{H}$:

1. First, project the input state onto the antisymmetric subspace of \( \mathcal{H} \otimes \mathcal{H} \). If the measurement succeeds, then the post-measurement state must have the form \[ |\Psi\rangle = \sum_{i<j} c_{ij} (|\psi_i\rangle |\psi_j\rangle - |\psi_j\rangle |\psi_i\rangle), \tag{1} \] where \( \{|\psi_i\rangle\} \) is an orthonormal basis of \( \mathcal{H} \); without loss of generality, we can choose this basis to consist of eigenstates of \( H \).

2. Next, estimate the value of \( H \otimes I + I \otimes H \)\(^{1}\) on the post-measurement state using phase estimation, up to precision \( \epsilon/5 \). Accept if the value is below \( 2a + \frac{7\epsilon}{4} \), and reject otherwise.

It is easy to show that the state of the form (1) that minimizes the expectation value of \( H \otimes I + I \otimes H \) is the state \[ |\Psi\rangle = \frac{1}{\sqrt{2}} (|\psi_1\rangle |\psi_2\rangle - |\psi_2\rangle |\psi_1\rangle), \] where \( |\psi_1\rangle \) and \( |\psi_2\rangle \) are the two lowest-energy eigenstates of \( H \), with eigenvalues \( \lambda_1 \) and \( \lambda_2 \) respectively. This state \( |\Psi\rangle \) is in turn an eigenstate of \( H \otimes I + I \otimes H \) with eigenvalue \( \lambda_1 + \lambda_2 \).
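As a quick numerical sanity check of this last claim (not part of [Amb14]’s argument), one can verify for a small random Hermitian matrix that the minimum of \( H \otimes I + I \otimes H \) over the antisymmetric subspace is exactly \( \lambda_1 + \lambda_2 \); the dimension and random seed below are arbitrary choices.

```python
import numpy as np

# Sanity check: the minimum eigenvalue of H (x) I + I (x) H restricted to
# the antisymmetric subspace equals lambda_1 + lambda_2, the sum of the two
# smallest eigenvalues of H. Illustrative only.

rng = np.random.default_rng(0)
dim = 6
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                    # random Hermitian matrix
lam = np.linalg.eigvalsh(H)                 # eigenvalues in ascending order

two_copy = np.kron(H, np.eye(dim)) + np.kron(np.eye(dim), H)

# Orthonormal basis of the antisymmetric subspace: (|ij> - |ji>)/sqrt(2), i<j.
basis = []
for i in range(dim):
    for j in range(i + 1, dim):
        v = np.zeros(dim * dim, dtype=complex)
        v[i * dim + j] = 1 / np.sqrt(2)
        v[j * dim + i] = -1 / np.sqrt(2)
        basis.append(v)
B = np.array(basis).T                       # columns span the subspace

restricted = B.conj().T @ two_copy @ B      # compression to the subspace
min_antisymmetric = np.linalg.eigvalsh(restricted)[0]

print(np.isclose(min_antisymmetric, lam[0] + lam[1]))   # True
```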
Thus, in the YES case, the result of phase estimation will with high probability be below \( (a + \epsilon/4) + (a + 5\epsilon/4) + \epsilon/5 = 2a + \frac{17\epsilon}{10} < 2a + \frac{7\epsilon}{4} \), and in the NO case, the result of phase estimation will with high probability be above \( 2a + 2\epsilon - \epsilon/5 = 2a + \frac{9\epsilon}{5} > 2a + \frac{7\epsilon}{4} \), thus establishing completeness and soundness. \( \square \)

**Theorem 3.** SPECTRAL-GAP for \( O(\log(n)) \)-local Hamiltonians is hard for \( P^{\text{UQMA}[\log(n)]} \).

*Proof.* This proof uses a “history state” construction, in which each time step in the “history” corresponds to a single oracle query. Suppose we are given a machine in \( P^{\text{UQMA}[\log(n)]} \). This machine makes a series of \( T = O(\log(n)) \) queries to a UQMA oracle. It is fairly straightforward to show that a complete problem for UQMA is to decide whether a Hamiltonian has a low-energy state, promised that at most one such state exists. Thus, every query to the UQMA oracle is specified by a Hamiltonian obeying this promise. We denote the answer to the \( i \)-th oracle query by \( y_i \), and the Hamiltonian sent as the \( i \)-th query by \( H_{i|y_1y_2\ldots y_{i-1}} \), acting on a Hilbert space \( \mathcal{H} \)—the notation reminds us that the \( i \)-th query is allowed to depend on the results of the previous queries. At the end, the machine decides to accept or reject based on the answers to its oracle queries.

We construct a new Hamiltonian \( H \) whose spectral gap will encode whether the given machine accepts or rejects. This Hamiltonian acts on an enlarged Hilbert space \( \mathcal{H}' = \mathbb{C}^2 \otimes (\mathbb{C}^2 \otimes \mathcal{H})^{\otimes T} \), and consists of two terms: \[ H = I \otimes H_{\text{accepting history}} + \epsilon |0\rangle \langle 0| \otimes H_{\text{query}}, \] where the first factor in each tensor product acts on the first qubit register in the Hilbert space. The term \( H_{\text{accepting history}} \) enforces that the sequence of answers to the oracle queries causes the \( P^{\text{UQMA}[\log(n)]} \) machine to accept; it is given by \[ H_{\text{accepting history}} = \sum_{y_1\ldots y_T \text{ rejecting}} |y_1\rangle \langle y_1| \otimes I_{\mathcal{H}} \otimes |y_2\rangle \langle y_2| \otimes I_{\mathcal{H}} \otimes \cdots \otimes |y_T\rangle \langle y_T| \otimes I_{\mathcal{H}}. \]

\(^{1}\) In [Amb14] this is written as \( H \otimes H \), which is presumably a typo.

The other term $H_{\text{query}}$ enforces that the answers in the history state are correct for the queries issued by the machine. It is given by $$H_{\text{query}} = \sum_{i=1}^{T} \frac{1}{4^{i-1}} \sum_{y_1 \ldots y_{i-1}} |y_1\rangle\langle y_1| \otimes I_{\mathcal{H}} \otimes \cdots \otimes |y_{i-1}\rangle\langle y_{i-1}| \otimes I_{\mathcal{H}} \otimes \left( |0\rangle\langle 0| \otimes H_0 + |1\rangle\langle 1| \otimes H_{i|y_1 \ldots y_{i-1}} \right) \otimes I \otimes I_{\mathcal{H}} \otimes \cdots,$$ where $H_0$ is a certain fixed Hamiltonian. It is shown by Ambainis that if the $P^{\text{UQMA}[\log n]}$ machine accepts, then the spectral gap of $H$ is 0—i.e., there exist two orthogonal degenerate ground states—and if the machine rejects, then the spectral gap is at least $\epsilon/4^T$, where $\epsilon$ is a constant related to the completeness-soundness gap for the Hamiltonians sent to the UQMA oracle.
The intuition is that in the accepting case, the “accepting history” term and the “query” term will both be satisfied by the same history state $|\psi\rangle$, so $|0\rangle \otimes |\psi\rangle$ and $|1\rangle \otimes |\psi\rangle$ will both be ground states of $H$; in the rejecting case, this degeneracy is broken.

Ambainis’s results show that the problem SPECTRAL-GAP is closely related to the class QMA, for which many interesting questions remain open. Any progress on QMA would also tell us about the complexity of SPECTRAL-GAP. Below, we list some other open questions of interest.

**Problem 4.** What is the complexity of SPECTRAL-GAP for Hamiltonians of constant locality? Recent unpublished work of Wu et al. shows that $P^{\text{UQMA}[\log(n)]} = P^{\text{QMA}[\log(n)]}$, but this does not completely resolve the question. Can perturbative gadgets be used to reduce the locality of the history state Hamiltonian?

**Problem 5.** Can we put SPECTRAL-GAP in QMA(2)? Since coNP $\subseteq P^{\text{UQMA}[\log(n)]}$, and it is not even known whether coNP $\subseteq$ QMA(2), this problem may be intractable with present knowledge.

**Problem 6.** Is the SPECTRAL-GAP problem for frustration-free Hamiltonians any easier? Could results analogous to Ambainis’s be found relating SPECTRAL-GAP for such Hamiltonians to $P^{\text{QMA}_1[\log(n)]}$?

## 3 Translation-invariant infinite systems

The second setting in which the complexity of finding spectral gaps has been studied is the limit of infinitely many particles. In this case, in order for the input size of the computational problem to be finite, we restrict our attention to translation-invariant Hamiltonians acting on qudits laid out spatially in a lattice. The decision problem is to determine whether the spectral gap $\Delta(H)$ of a given translation-invariant Hamiltonian $H$ tends to 0 as the system size $n$ tends to $\infty$ (this limit is known as the thermodynamic limit). If the gap tends to 0, the system is called gapless; otherwise, it is called gapped.

In a tour-de-force result, it was shown by Cubitt, Perez-Garcia, and Wolf [CPGW15] that this problem is in fact undecidable for qudits on a 2D lattice, with local dimension above a certain universal constant. Their construction encodes the specification of a classical Turing machine into the entries of a local term in the Hamiltonian. The ground state of the overall Hamiltonian is a history state for a quantum Turing machine that first applies phase estimation to read out the input, and then applies a universal classical Turing machine to the input string. Thus, the halting problem is embedded into the spectral properties of this Hamiltonian. The full construction is quite involved for several technical reasons; most importantly, in order to achieve a spectral gap in the thermodynamic limit, the Hamiltonian must be “composed” with another Hamiltonian encoding a classical tiling problem. The ground state of the combined Hamiltonian consists of a pattern of tiles, with copies of the quantum history state along edges of the tiles.

**Problem 7.** Can the construction of [CPGW15] be improved to 1D chains? How low can we reduce the local dimension?

From the other end, recent work of Bravyi and Gosset [BG15] has shown that the spectral gap question in the thermodynamic limit is decidable for a restricted set of Hamiltonians with the important property of being frustration free. For our purposes, a frustration-free Hamiltonian is one that can be written as a sum of local projectors such that the ground energy is 0.
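As a small illustration of this definition (and of the chains analyzed next), the following sketch builds \( H = \sum_i |\psi\rangle\langle\psi|_{i,i+1} \) for the two-qubit singlet state and checks numerically that its ground energy is 0, even though neighboring terms do not commute; the helper function is ours, not from [BG15].

```python
import numpy as np

# Illustration of frustration freeness: for the singlet
# |psi> = (|01> - |10>)/sqrt(2), the all-zeros state is annihilated by every
# nearest-neighbour projector, so the chain Hamiltonian has ground energy 0.

def chain_hamiltonian(psi: np.ndarray, n: int) -> np.ndarray:
    """H = sum_i |psi><psi|_{i,i+1} on an open chain of n qubits."""
    proj = np.outer(psi, psi.conj())
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H += np.kron(np.eye(2**i), np.kron(proj, np.eye(2**(n - i - 2))))
    return H

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
H = chain_hamiltonian(singlet, n=6)
print(np.isclose(np.linalg.eigvalsh(H)[0], 0.0))   # True: frustration free
```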
Bravyi and Gosset analyze frustration-free Hamiltonians on 1D chains of qubits, and provide a simple criterion for whether the Hamiltonian is gapped or gapless in the thermodynamic limit. (In fact, their criterion implies that the spectral gap problem for these Hamiltonians is not only decidable but also solvable in polynomial time.) **Theorem 8** ([BG15]). Let $H$ be a translation-invariant Hamiltonian acting on a 1D chain of $n$ qubits with the form $$H = \sum_{i=1}^{n-1} |\psi\rangle\langle\psi|_{i,i+1},$$ where $|\psi\rangle \in \mathbb{C}^2 \otimes \mathbb{C}^2$ is a fixed two-qubit state; by construction, this Hamiltonian is frustration free. Then the spectral gap of $H$ goes to 0 as $n \to \infty$ iff the eigenvalues of the matrix $$T_\psi = \begin{pmatrix} \langle\psi|0,1\rangle & \langle\psi|1,1\rangle \\ -\langle\psi|0,0\rangle & -\langle\psi|1,0\rangle \end{pmatrix}$$ have equal non-zero absolute value. The proof of both directions of this result is quite nontrivial. One of the key tools used in the proof is a remarkable result of Knabe [Kna88], which relates the spectral gap in the thermodynamic limit to spectral gaps at fixed finite size. **Lemma 9** ([Kna88]). Let $\Pi$ be a projector acting on two qudits, and consider the translation-invariant Hamiltonians $H_n^\circ$ and $H_n$ given by $$H_n = \sum_{i=1}^{n-1} \Pi_{i,i+1}, \quad H_n^\circ = H_n + \Pi_{n,1}.$$ (We refer to $H_n$ as the Hamiltonian over an $n$-qudit chain with open boundary conditions, and $H_n^\circ$ as the Hamiltonian with periodic boundary conditions.) Further suppose that $H_n$ and $H_n^\circ$ are frustration free for all $n$. Then for all $m \geq n \geq 2$, it holds that $$\Delta(H_m^\circ) \geq \frac{n-1}{n-2} \left( \Delta(H_n) - \frac{1}{n-1} \right).$$ Note that the right-hand side of the bound is independent of $m$. Thus, any value of $n$ for which $\Delta(H_n) > \frac{1}{n-1}$ is a certificate that the periodic chains $H_m^\circ$ are gapped in the thermodynamic limit. A recent work of Gosset and Mozgunov [GM15] improves this result by replacing $\frac{1}{n-1}$ with $\frac{6}{n(n-1)}$, and shows that this is asymptotically tight by finding examples of gapless systems where $\Delta(H_n) = \Omega(\frac{1}{n^2})$. The same work also establishes a variant of this lemma for 2D lattices. The proofs of the original lemma of Knabe and its extensions all proceed by establishing the positivity of a polynomial function of the Hamiltonians $H_m^\circ$ and $H_n$ by decomposing it as a sum of squared terms. For instance, in the original Knabe lemma, it is shown that \[ (H_m^\circ)^2 + \frac{1}{n-2} H_m^\circ \succeq \frac{1}{n-2} \sum_{k=1}^{m} A_{n,k}^2, \] where $A_{n,k}$ is a copy of $H_n$ acting on the length-$n$ subchain of the system starting from index $k$. To complete the proof, one uses the fact that for a frustration-free Hamiltonian $H$, $H^2 \succeq \epsilon H \iff \Delta(H) \geq \epsilon$ to bound the $A_{n,k}^2$ terms on the right-hand side. After some simple manipulations, one obtains \[ (H_m^\circ)^2 \succeq \frac{n-1}{n-2} \left( \Delta(H_n) - \frac{1}{n-1} \right) H_m^\circ, \] which establishes the desired bound on the spectral gap of $H_m^\circ$. The extensions of [GM15] are proved by modifying the sum-of-squares decomposition that is used. While this approach seems to be inherently limited to frustration-free systems (since lower bounds on $H^2$ do not imply anything about the gap of a general Hamiltonian $H$), there is scope to achieve better results for other lattice configurations than those studied to date.
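Both Theorem 8 and the Knabe certificate are easy to operationalize numerically. The following sketch is our transcription — the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ and all function names are our choices — implementing the $T_\psi$ test as stated and the Lemma 9 lower bound:

```python
import numpy as np

def t_matrix(psi):
    """T_psi from Theorem 8; psi is a length-4 vector in the basis
    |00>, |01>, |10>, |11>. Entries <psi|ab> are conjugated amplitudes."""
    c = np.asarray(psi, dtype=complex).conj()
    return np.array([[ c[1],  c[3]],
                     [-c[0], -c[2]]])

def bg_gapless(psi, tol=1e-9):
    """[BG15] criterion: the chain is gapless iff the eigenvalues of
    T_psi have equal non-zero absolute value."""
    a1, a2 = np.abs(np.linalg.eigvals(t_matrix(psi)))
    return a1 > tol and a2 > tol and abs(a1 - a2) < tol

def knabe_gap_bound(delta_n, n):
    """Lemma 9: for n >= 3, every periodic chain of length m >= n has
    gap at least this value; a positive result certifies a gapped system."""
    return (n - 1) / (n - 2) * (delta_n - 1.0 / (n - 1))

print(bg_gapless(np.array([0, 1, -1, 0]) / np.sqrt(2)))  # singlet: True (gapless)
print(bg_gapless([1, 0, 0, 0]))  # |00>: eigenvalues {0, 0} -> False (gapped)
print(knabe_gap_bound(0.6, 5))   # 0.4667 > 0: certifies a gap for all m >= 5
```

For the singlet state, $T_\psi$ is $\tfrac{1}{\sqrt{2}} I$, so both eigenvalues have equal non-zero modulus and the criterion declares the chain gapless, consistent with the frustration-free example above.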
**Problem 10.** Can an improved Knabe-type lemma be found for other lattices besides the 1D chain and the 2D square lattice? Moreover, the Knabe lemma is only one of several crucial ingredients in the result of [BG15], and generalizing their whole result even to the case of 1D chains of qudits with local dimension $d > 2$ would be significant progress. **Problem 11.** Can the classification of gapped and gapless phases of frustration-free systems in [BG15] be extended beyond the case of 1D chains of qubits? Finally, the frustrated case remains wide open. **Problem 12.** Is there a criterion for gapped and gapless phases of 1D qubit chains with non-frustration-free Hamiltonians? ## 4 Acknowledgements The author thanks David Gosset and Xiaodi Wu for helpful discussions, and for sharing unpublished results. ## References [Amb14] Andris Ambainis. On physical problems that are slightly more difficult than QMA, 2014. arXiv:1312.4758v2. [BG15] Sergey Bravyi and David Gosset. Gapped and gapless phases of frustration-free spin-1/2 chains, 2015. arXiv:1503.04035v3. [CPGW15] Toby S. Cubitt, David Perez-Garcia, and Michael M. Wolf. Undecidability of the spectral gap. *Nature*, 528:207–211, 2015. [GM15] David Gosset and Evgeny Mozgunov. Local gap threshold for frustration-free systems, 2015. arXiv:1512.00088. [Kna88] Stefan Knabe. Energy gaps and elementary excitations for certain VBS-quantum antiferromagnets. *J. Stat. Phys.*, 52(3/4):627–638, 1988.
Intel Exec. Asks for Single-Sourced Test Tooling, But Multi-Sourced ATE In what should have been the Keynote address at last month's SouthWest Test Workshop – or International Wafer Test Workshop, as it should be known – Steven Strauss presented Intel's outlook on test tooling problems. Strauss is Intel's Tooling Operations manager, located in Chandler, AZ. In addition to making a pitch for single-sourcing, shorter lead times and lower pricing of test tools, he also called for open-architecture VLSI ATE testers. Strauss pointed out that Intel is spending less on capital equipment (testers, probers, handlers) every year (as shown in the graph at the bottom of p. 3 of this issue), but is spending more on tooling – making it a bigger percentage of the cost-of-test. Strauss defined "test tooling" as anything that "provides a temporary thermal, mechanical and/or electrical interface to the DUT, e.g. probe cards, sockets, DUT boards, etc. – all of which must be customized for packaging form factors, electrical and thermal requirements and device function." Strauss called for a Revolution in Test Tooling, saying that the tooling suppliers "have not changed with the times to meet customers' needs" – and implied that this situation is not limited to Intel. He said all chipmakers are looking for more comprehensive solutions, lower cost, shorter lead times and better capability than they are presently getting from their socket, board and probe suppliers. He pointed out that chip sales have grown at a compound annual rate (CAGR) of about 15 percent since 1958 and were expected to at least maintain that rate through 2006. He said that forecast will hold despite last year's drop in chip sales of almost 40 percent. Even more important for test tooling, according to Strauss, process cycles have continued to ramp up faster and fall faster as well. Strauss pointed out that 130-nm process technology took just four quarters to go from development to high-volume manufacturing (HVM) at Intel – arguing that there is "no time for mistakes" – prototypes and HVM are now "one and the same." In addition, speed improvements, yield improvements, packaging and other changes now result in an effective product cycle of 3 to 6 months – and thus new tooling – including probe cards, sockets, DUT boards and burn-in boards – must be designed, manufactured and installed in production quantities in that same time-frame. Tooling is, according to Strauss, "a technology, development and HVM enabler!" As he was talking mainly to probe card suppliers at this conference, Strauss chose to use probe cards as an example. (He had given a similar presentation in March at the BiTS conference, where he focused on test sockets.) He noted that the number of different probe cards required by Intel is increasing. New designs grew 22 percent between 2000 and 2001, and by 38 percent from 2001 to 2002. Until 2001, Intel had designed all SIUs (Sort Interface Units, in Intel's parlance) in-house, but then changed its strategy to "enable" outsourcing of those designs. Intel expects to outsource about one-third of all such designs this year. However, while Intel 'enabled' suppliers to do these designs, those same suppliers could not provide total solutions – only designs. As a result, "lead time reduction showed only marginal improvement in 2 years," he said.
The problem, according to Strauss, is that a typical tooling supply chain contains 2-4 "poorly synchronized suppliers." As an example he cited vertical probe cards: one supplier provides the design, another the PCB/space transformer, and a third the probes and integration – and the 'customer' ends up being responsible for the card's functionality. What is needed, he said, is "turn-key tooling suppliers" – a single supplier which can provide the design, all of the components and volume production. A single supplier who: - Enables fungible designs that last multiple product generations - Is synchronized with the specific technologies of the customers - Provides complete turn-key solutions – allowing the customer to negotiate with a single supplier - Has 2-4 week lead times, and finds innovative ways to continue to drive costs down. According to Strauss, "What it takes is Revolution – Evolution will not yield these goals!" He used the example of VLSI ATE equipment as having gone through such a "Revolution." That industry, he said, has implemented testers which allow the use of advanced DFT to manage test complexity and has produced "Distributed Test" capability – e.g., partitioning test capability by socket, allowing chipmakers to move a significant percentage of test content to less expensive DFT-based structural testers. It is also moving to provide parallel test capability for complex chips. The result is simplified tester hardware designs, while maintaining state-of-the-art capabilities and reducing capital expenditures. But the ultimate tester solution, according to Strauss, is "open architecture" VLSI ATE. (See the Opinion column on p. 3 of this issue for a detailed discussion and the industry's reaction to that idea – and FTR's comments on such a development.) Returning to the problems of test tooling – and particularly the probe card suppliers in the audience – Strauss asked, "Can you do this [be part of a revolution and not just an evolution]?" He said, "If not, you won't survive!" He then offered the audience the following **Strauss's Prediction:** About one-half of you will not be around in 2 years! He then asked, "Will you be one of them?" In what Intel's Steve Strauss himself admits "appeared to be something of a contradiction" in his presentation at last month's SWTW gathering in Long Beach, CA, he called for "single-sourced" test tooling while at the same time asking for 'multi-sourced' testers. As we described in this issue's cover story, he believes that the chip industry would greatly benefit from a 'consolidation' of tooling (probe cards, sockets, DUT boards, burn-in boards, etc.) suppliers, giving chipmakers 'turnkey' solutions to their tooling needs. However, he took quite a different tack in that same presentation, calling for "open architecture" VLSI ATE. (Intel has been promoting this approach to its vendors for some time, but this was one of the first 'public' presentations of the idea beyond several papers at recent ITC meetings.) While Strauss said that the latest "modular" testers are an "evolutionary" improvement over conventional testers (although one slide, top of p. 2, called them "revolutionary"), they are not sufficiently so. He said that 'conventional' testers, with their custom infrastructure, are too difficult to support and improvements are only 'generational.' And they are available from only a single supplier.
Modular or "Tester-on-a-Board" systems provide more flexible configurations, but are still a "closed architecture" and still are available from only a single supplier. Strauss is asking for a "revolutionary" change – to truly "open architecture" testers – by the ATE industry. Such an architecture would allow chip makers to purchase tester mainframes, test heads and modules from different suppliers. He likens it to a PC maker, which has a wide choice of suppliers for each of the components in its products. The result, in Strauss' opinion, would be test equipment which truly "scales across price, performance, pin count and application requirements". He recognizes that such a "revolution" would require "disruptive" changes in the ATE industry. It would require the development of official or at least de facto industry standards for every tester component interface – and, inevitably, standard software, along the lines of Microsoft Windows. All of that would represent a 180-degree turn in ATE industry thinking – which since its beginnings more than 35 years ago has been based solely on proprietary tester architectures and software. This writer has had some relatively recent experience with the industry's refusal to change that mindset. In 1994 we began an effort to work with SEMI and equipment makers to develop 'standards' for chip testers and related equipment. After four years of frustration – with both SEMI's lack of interest in "back-end" standards and ATE makers' almost total indifference – FTR abandoned the effort. (It has been continued – led by Xandex's Roger Sinsheimer – but with limited results.) However, in recent months, at least two new efforts to develop standards for an "open architecture" have been quietly created. One is reportedly being led by Schlumberger CTO Bernie West and the other by Advantest VP Sergio Perez. Each group is attempting to develop a 'consensus' open architecture, but doing it outside of industry groups such as SEMI and IEEE. Teradyne reportedly has not joined either group, but is already embracing the concept. In May it announced the creation of an Open Architecture Initiative for its newly introduced Integra FLEX test system. "This initiative enables third parties to both cooperatively and independently develop and market instrumentation options for the FLEX system. It will provide our customers with access to a wider range of instrumentation on the FLEX platform with an accelerated time to market," said Mark Jagiela, VP/GM of Teradyne's Semiconductor Test Group. You can look for both groups – or perhaps a merged "consortium" – to surface publicly at SEMICON/West next month and at ITC in October. Given the present distress of the ATE industry – where most companies have little except price cuts to close orders – the time may just be right for the required mindset change. That's my opinion. SEMI reported that No. American chip equipment vendors saw an upturn in their bookings for the sixth consecutive month in May. Total net new orders were $1,084 million in May (three-month rolling average) – about 9 percent above the April figure of $995.6 million, and about 50 percent above the level of May 2001. Billings were $861.7 million, a 6 percent increase over April, but still more than 40 percent below May 2001. The book-to-bill was 1.26, up from the 1.22 (revised) in April. Front-end orders in May were $861.8 million, up 7 percent sequentially and 37.3 percent higher than in May of last year.
Billings totaled $704 million, a 5 percent improvement over April, but about 43 percent below billings in May of last year. The resulting book-to-bill ratio for front-end equipment was 1.22. TAP (Test, Assembly, Packaging) orders were $222.2 million, more than double the level of May 2001, when TAP orders reached only $95.7 million. TAP billings rose just over 10 percent MOM, reaching $157.7 million, but YOY TAP billings are still down about 31 percent. The book-to-bill for TAP equipment was 1.41.

| May '02 TAP Book-to-Bill | Apr '02 | May '02 | May '01 |
|--------------------------|---------|---------|---------|
| Book | $190.3 | $222.2 | $95.7 |
| Bill | $143.0 | $157.7 | $228.0 |
| B/B | 1.33 | 1.41 | 0.42 |

WW Chip Sales Fell 24.8% in April According to preliminary data released by the SIA, the dollar value of worldwide chip sales fell 24.8 percent between March and April, to $10.03 billion. This drop is typical of the first month of a new quarter. But the YOY trend remains worrisome, as sales for the first four months of 2002 were about 22 percent below the same period in 2001. Things have improved in recent months: last April chip sales were down over 24.8 percent YOY, while this year that figure has fallen to just 8.2 percent. Last September, worldwide chip sales were trailing the year-earlier total by 44.4 percent, so the April 2002 vs. April 2001 comparison shows real improvement over the dismal late-2001 state of the industry. Nevertheless, when semiconductor sales are viewed in a larger historical perspective there is still reason for concern: at the end of 2000, sales on a three-month-average basis were growing at an annualized rate of almost 22 percent, but during the first quarter of this year chip sales were declining at a 33.9 percent annualized rate.

| April 2002 Chip Sales | MOM | YOY |
|-----------------------|-----|-----|
| Americas | -30.0% | -13.5% |
| Europe | -36.9% | -24.4% |
| Japan | -11.3% | -25.1% |
| Asia-Pacific | -22.0% | +23.9% |
| Total | -24.8% | -8.2% |

2002 10 BEST Chip Equipment Suppliers This year, VLSI Research added two overall categories to its 10 BEST Customer Satisfaction awards: - **Focused Suppliers** – companies that focus on individual segments. - **Large Suppliers** of chipmaking equipment – companies that rank among the top fifteen in revenues.

| Focused Suppliers | Rank | Company | Rating |
|-------------------|------|-----------|--------|
| | 1 | Tegal | 8.26 |
| | 2 | Datacon | 8.23 |
| | 3 | Universal | 8.21 |
| | 4 | Orthodyne | 8.10 |
| | 5 | Alphasm | 8.06 |
| | 6 | EBARA | 8.03 |
| | 7 | SUSS Micro | 7.78 |
| | 8 | Multitest | 7.69 |
| | 9 | Disco | 7.68 |
| | 10 | Axcelis | 7.68 |
| | 11 | SZ Test | 7.65 |
| | 12 | Credence | 7.47 |
| | 14 | Schlumberger | 7.42 |
| | 15 | TSK | 7.41 |
| | 19 | Electrogas | 7.21 |
| | 21 | Shinkawa | 7.11 |
| | 23 | K & S | 6.95 |
| | 27 | Yokogawa | 6.49 |
| | 30 | Ando | 6.36 |

| Large Suppliers | Rank | Company | Rating |
|-----------------|------|-----------|--------|
| | 1 | ASM | 7.90 |
| | 2 | Varian | 7.89 |
| | 3 | Agilent | 7.38 |
| | 4 | Teradyne | 7.35 |
| | 5 | Nikon | 7.31 |
| | 6 | Novellus | 7.29 |
| | 7 | Advantest | 7.01 |
| | 8 | Canon | 7.00 |
| | 9 | Hitachi | 6.95 |
| | 10 | TEL | 6.93 |

April Global Eqpt. Sales Dn. 41.9% YOY SEMI reported that global sales of chipmaking equipment fell 41.9 percent YOY in April, the smallest drop in 11 months. Worldwide sales totaled US$1.69 billion in April, it said. The data showed strength in Taiwan, where sales rose 5.1 percent YOY to US$387.6 million, and in Korea, where sales were down just 9.2 percent YOY. On a brighter note, No.
American equipment suppliers said net new orders were up 9 percent, and Japanese equipment makers reported orders up 48.9 percent YOY in May.

| April 2002 Chip Equipment Sales By Product Segment (US$M) | |
|----------------------------------------------------------|--------|
| Type | Amount |
| Mask | $53.81 |
| Wafer Fab | $1,281.24 |
| Packaging | $62.07 |
| Testing | $215.00 |
| Related | $57.57 |
| Total | $1,669.00 |

| April '02 Chip Equipment Sales By Geographical Region | | |
|------------------------------------------------------|-------|-------|
| Region | Sales | YOY |
| No. America | $483.6 | -44.9% |
| Europe | $204.0 | -48.5% |
| Japan | $208.5 | -71.1% |
| Korea | $180.7 | -9.2% |
| Taiwan | $387.6 | +5.1% |
| Other | $204.6 | -82.9% |
| TOTAL | $1,669.0 | -41.9% |

FINANCIAL REPORTS MOSAID Technologies (figures in Canadian dollars, C$000)

Q4 ending April 26:

| | 2001 | 2002 |
|------------------|------|------|
| Sales | C$9,762 | C$24,292 |
| Net | (3,186) | 2,175 |
| Per Shr. | (0.31) | 0.22 |

Year ending April 26:

| | 2001 | 2002 |
|------------------|------|------|
| Sales | C$51,861 | C$82,926 |
| Net | (24,686) | 7,002 |
| Per Shr. | (2.45) | 0.72 |

STATS ST Assembly Test Services (STATS) is a supplier of complete back-end turnkey services – from wafer sort and test through assembly to drop shipment – with a particular focus on mixed-signal testing. STATS is headquartered in Singapore with worldwide offices in the United States, United Kingdom, Germany, Japan and Taiwan. Its main manufacturing plants are located in Singapore and Taiwan, with operational space of 300,000 square feet and 220,000 square feet respectively. It also has test development centers in Singapore, the U.S. and the U.K., and has approximately 2,500 employees – half of them technical professionals – worldwide. STATS began operations in January 1995, and has been listed on the U.S. Nasdaq (STTS) since January 2000. As was the case for most semiconductor-related companies, 2001 was a tough year for STATS – as reflected in its ADR price. It had revenues of $145.9 million – down from $353.3 million in 2000 – and a loss of $133.9 million. In its various manufacturing facilities it has a large portfolio of state-of-the-art testers, including platforms servicing digital, mixed-signal, radio frequency (RF) and Bluetooth test requirements. In the area of advanced packaging, STATS offers an extensive range of packages and options including BGAs, QFPs, PLCCs, near-CSP packages, Stacked Die Ball Grid Array and lead-free packaging targeted at mid- to high-end packaging applications. In February of this year – in an aggressive bid to strengthen its global presence – STATS opened its FastRamp Test Services facility in Milpitas, CA: a high-end engineering and production test laboratory which focuses on providing engineering and pre-production test services. According to FastRamp GM Mark Kelley, the company had looked at purchasing one of the available existing test centers in Silicon Valley, but finally decided to build its own facility. For the new 34,000 sq. ft. facility, an initial investment of $10 million has already been made, and STATS plans a total capital outlay of $20 million. Much of the investment was allocated to the development of a premier test engineering area to meet the demands of fabless companies looking for solutions for testing the products they are rushing to market. A unique feature of the facility is that the test floor is surrounded by large, comfortable customer offices which offer a full view of the testers in operation.
The offices are fully equipped for operation of the testers and for data collection. The facility also provides catered meals for customers who work through lunch or dinner. In addition, personnel, test equipment and processes are aligned to help customers launch new products and meet volume ramps and production cost targets. STATS has begun equipping FastRamp so that it mirrors the test hardware and tester configurations of its main facility in Singapore. Testers already installed include: Teradyne Tiger, Catalyst and J750, Agilent 93000 and 83000, Credence Quartet and Duo, and LTX Fusion. According to Kelley, a second Agilent 93000 will "arrive soon." Many of the testers were transferred from STATS' Singapore facility to FastRamp, and STATS Singapore's technical staff and engineers provided training on each of the systems and aided in the launch of FastRamp's operation. According to Kelley, "technical and technology knowledge is shared between STATS and FastRamp, with the regular cross-training of technical staff." The goal of FastRamp is to provide test engineering solutions which include lab-to-factory compatibility for the transition from development to production. When the customer is ready for the transition to volume production, FastRamp will provide production off-loads and capacity coordination in STATS' manufacturing facilities in Singapore and Taiwan. Kelley said, "Customers who use STATS' testers and platforms for development work can now easily transfer their devices to volume production." Device Tracking for Strip & Matrix Test The following article was written for FTR by Dave Huntley, president of KINESYS Software, Petaluma, CA. Significant cost savings can be achieved in semiconductor TAP (test, assembly and packaging) factories by using matrix (strip) substrates in conjunction with parallel unit strip testing instead of conventional singulated unit testing. These savings apply not only to test, but also to the actual assembly of the packages themselves. These savings are made possible by the existence of a strip map, an electronic representation of the devices on the strip. The strip map opens the possibility of tracing a failure back, in reverse order, to the individual device, the individual piece of assembly equipment and the wafer. This article will explore the cost-saving opportunities and what it takes for TAP manufacturers and subcontractors to realize them. Strip Test Strip test is the testing of the device before singulation into individual semiconductor components, while it is still mounted on the matrix (strip) substrate (ceramic, leadframe, laminate or tape). It is much easier for human operators to handle the strip, as opposed to individual devices, particularly when the devices are small, light and/or thin. A factory can standardize on a strip size and handler and perform parallel test on several different devices at once with high-pin-count testers. The improved utilization of the tester and the reduction in material handling errors can lower test costs significantly. For example, Amkor invested $50 million to be able to carry out strip testing on-line. The company, the world's leading independent supplier of outsourced semiconductor packaging, test and interconnect services, is now reporting test cost savings of as much as 80 percent. Traceability Welcome by-products of strip test are strip mapping and traceability. When a device is singulated, its connection to the strip is severed both physically and logically.
When devices are tested on the strip, the result is a strip map – a computerized representation of the electrical test results specific to each individual strip, as identified by the strip designator. With the strip map, it is possible to analyze failure patterns with regard to the strip geometry. Perhaps more important, in the event of device electrical failures, and assuming a strip tracking system is in place, it is possible to identify potential causes of the failure and implement correction plans within the factory as needed. If the devices are marked, the device identifier can be correlated with the device's location on the strip. If a marked device fails in the field, its history can be traced via the strip it came from and the factory equipment on which it was processed. Using strip mapping and traceability to identify and correct process problems is in its infancy, and cost reduction figures are not yet available. However, initial results look promising. Substrate Tracking Today, matrix (strip) substrate tracking and failure analysis are largely manual. TAP factories typically rely on the operator to manually read the magazine or scan a bar code on the magazine. Since only the magazine is tracked, traceability is lost if strips are transferred to another magazine (for example, as a result of a lot split or merge). A better approach to substrate tracking is to mark each strip with a 2D matrix code that uniquely identifies it. The 2D matrix cannot be read by human operators, and scanning every strip by hand would not be cost-effective; to track strips individually, the equipment itself must read and report the strip identifier. There is now a standard for substrate tracking (SEMI E90) that is being widely implemented in 300-mm wafer processing plants. Feed Forward Map Data There is also now a standard for exchanging strip map information with equipment (SEMI E84). If this standard were implemented in die-attach equipment, traceability from wafer to strip could become a reality. Subsequent equipment (for example, inspection) supporting this standard could modify the map so that any further yield loss could be recorded and skipped at strip test. With wafer map data being fed forward in the TAP factory, it becomes possible to correlate wafer and strip test results in real time to look for early indications of process drift. Device Tracking Once the link is made from wafer to strip at die attach, it becomes possible to trace an individual device that has proven defective in the field right back to the wafer. The wafer identifier can be used to zero in on the wafer processing equipment responsible for the failure. If the device location on the wafer is tracked, then it becomes possible to analyze failure patterns with regard to the wafer geometry. Conclusion Although yields are typically high in TAP factories, the cost of failures is also high, since the devices are at their maximum value and profit margins are at their slimmest at this point in the semiconductor manufacturing process. Integrating strip mapping with die attach is the key to enabling feed-forward and feedback control of the TAP process, as well as delivering critical failure data back to the wafer processing plant. All failure patterns can be analyzed with respect to the strip, the assembly equipment, the wafer and the wafer processing equipment. Automated map data collection and substrate tracking, coupled with failure analysis software, can offer real-time process correction. There are standards now in place for traceability and substrate tracking; a minimal sketch of the kind of record they enable appears below.
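As a concrete illustration of the strip map and trace-back just described — not KINESYS' actual product; every class and field name below is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DeviceRecord:
    """One device site on a strip; field names are illustrative."""
    wafer_id: str     # source wafer, linked at die attach
    wafer_xy: tuple   # (x, y) die location on the wafer
    bin_code: int     # electrical test result bin (0 = pass)

@dataclass
class StripMap:
    """Electronic representation of the devices on one strip."""
    strip_id: str                                # 2D-matrix identifier
    devices: dict = field(default_factory=dict)  # (row, col) -> DeviceRecord

    def record_test(self, row, col, bin_code):
        """Store a strip-test result against a strip position."""
        self.devices[(row, col)].bin_code = bin_code

    def trace_failures(self):
        """Yield (strip position, source wafer, wafer location) for every
        failing device -- the reverse-order traceability the article
        describes."""
        for pos, dev in self.devices.items():
            if dev.bin_code != 0:
                yield pos, dev.wafer_id, dev.wafer_xy
```

Keeping the wafer link inside each device record is what turns a simple pass/fail map into the feed-forward and feedback traceability the article advocates.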
It will take time for these standards to become widely accepted. LogicVision Unveils Hdw. IC Debug Tool LogicVision has entered the hardware arena with its *Validator* – composed of software, intellectual property (IP) and hardware – which it describes as "the industry's fastest software and hardware solution for silicon debugging." It is targeted at the broad range of chips for consumer, computer, communications and other applications, said the San Jose, CA-based supplier of built-in self-test (BIST) software and hardware. LogicVision claims that in beta trials the Validator has cut silicon debugging times by a factor of more than 100 compared with traditional methods. It said that in one case – first silicon consisting of 10 million gates on 0.13-micron technology – the at-speed test was successfully completed within 45 minutes after the first silicon was received. It also claims that the Validator "eliminates dependence on test vectors, test programs, and hard to access test equipment." The Validator will be available in Q3 of this year, the company said.

### Validator Specifications

| **Clocks** | 2 or 4 – 3.8V Max |
|------------|------------------|
| **Clock Freq.** | 0 – 330MHz |

| **Power supplies** | |
|--------------------|------------------|
| Programmable voltage ranges | |
| Option 1 | 0 – 8 Volts |
| Option 2 | 0 – 20 Volts |
| Max Current | 30A @ 8 Volts |

| **Debug Data Interfaces:** | |
|---------------------------|------------------|
| Chip | JTAG, 9 In, 4 Out |
| Board | 1 – JTAG |
| Voltage | 2.5V – 5.0V |

| **CPU 1** | |
|-----------|------------------|
| SUN SPARC | 500MHz |
| RAM | 512MB |
| Storage | 80GB |

| **CPU 2** | |
|-----------|------------------|
| Intel Pentium 3 | 1.1GHz |
| RAM | 256MB |
| Storage | 80GB |

| **Dimensions** | |
|---------------|------------------|
| | 21"W x 14"H x 28"D |

Aehr Gets Full Wafer Test/BI Contract Aehr Test Systems said that it has received an order – from an undisclosed source – totaling over $2 million for engineering development of a full wafer contact test system. The system will be developed using proprietary interconnect and parallel test technology currently utilized in its full wafer contact FOX product line. The full wafer contact system is expected to parallel test 200-mm and 300-mm wafers, and will include individual DUT power supplies using Aehr's MTX test technology. C.J. Meurell, president of Aehr Test, said, "A DFT or JTAG test strategy eliminates many of the barriers to full wafer contact and allows for an extremely cost-effective test solution. Testing an entire wafer or die at the same time certainly changes the dynamics of manufacturing test costs and throughput improvements." Aehr's FOX full wafer contact burn-in and test systems contact, burn in and test up to 14 wafers simultaneously, with a capability of more than 30,000 contact points per wafer. The FOX systems use full algorithmic test (N, N², N^(3/2)) for memories, and a vector pattern generator for devices using BIST. Aehr's contact system utilizes micro pogo spring contacts, which the company claims provide a high touch-down life and high compliance, and work with most pad metallurgies. However, as Steve Steps of Aehr pointed out in his presentation at SWTW last month, contact pressure requirements are substantial. In his example, an 8-inch SDRAM wafer with 500 die and 50 pads/die requires a 25,000-pin contactor; at 10 grams/pin, that works out to about 250 kg (about 550 pounds) of total contact force. A major challenge is to maintain planarity to within a few microns at such force levels.
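Steps' contact-force arithmetic is worth making explicit; a quick sanity check of the figures quoted above (our calculation, units handled step by step):

```python
# Contact-force arithmetic for the Aehr full-wafer example.
die_per_wafer = 500
pads_per_die = 50
grams_per_pin = 10.0  # spring force per pogo pin

pins = die_per_wafer * pads_per_die        # 25,000 pins
force_kg = pins * grams_per_pin / 1000.0   # 250 kg
force_lb = force_kg * 2.2046               # ~551 lb ("about 550 pounds")
force_kn = force_kg * 9.81 / 1000.0        # ~2.45 kN

print(f"{pins} pins -> {force_kg:.0f} kg ({force_lb:.0f} lb, {force_kn:.2f} kN)")
```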
Wafer alignment is accomplished off-line – using Electroglas equipment – in wafer/PWB cassettes held together with air pressure. Aehr admits that the development of the FOX system has taken considerably longer than it expected, due to the number of thermal, mechanical and electrical barriers which must be dealt with. The company would not provide specific information about existing installations of its FOX system, but reportedly does have at least one customer which is using it for laser diode burn-in. In addition, the system is said to be under evaluation by 'several' memory device makers. This development contract includes performance milestones which are scheduled to be completed during calendar 2002 and 2003. Non-recurring engineering (NRE) revenue will be recorded as earned, upon milestone completion, Aehr said. Teradyne/Test Insight Test-to-EDA Tool Teradyne has partnered with Israel-based Test Insight to provide that company's WaveWizard test development product for design-to-test solutions. WaveWizard enables test engineers to create test programs for Teradyne's J973, Integra and Catalyst testers, utilizing EDA software design data. Teradyne's Test Assistance Group (TAG) will standardize on the WaveWizard tool set for test generation solutions. TAG's standardization provides the foundation for worldwide applications support and training for all Teradyne and Test Insight customers, establishing "the first industry-wide accessible solution for easily moving design information into test," according to the companies. The WaveWizard productivity tool facilitates an efficient transition from EDA software into fully functioning ATE test programs, complete with patterns, timings, levels and pin configurations. With WaveWizard, test and design engineers can emulate device timing architecture and design, removing the constraints of cycle-based methods typically found in EDA-to-test conversion products. Teradyne has benchmarked WaveWizard against several commercially available tools and selected it for its ease of use, graphical display, faster code development, debug and characterization capability, and flexible, intelligent timing generation. "The intuitive approach to device timing, combined with WaveWizard's ease of use, shortens customers' test program development cycle and reduces errors," explains Meir Gellis, CEO of Test Insight. EDA Finally Getting Some Respect? When the Design Automation Conference (DAC) returned to New Orleans last month, it was not just the city that was heating up. At one panel – led by Synopsys CEO Aart de Geus – a "Man on the Street" video was presented. Shot in New York, it showed an interviewer asking passersby whether they thought investing in EDA or pork bellies was more lucrative. When people got a definition of EDA from the interviewer, they overwhelmingly voted for pork bellies. A closing question to a passing woman: "Would you vote for Wally Rhines?" "Never heard of him," came the reply. However, de Geus pointed out in his presentation that while Nasdaq spiked during the boom years of the dot-com craze, EDA stocks have remained a remarkably stable investment, "even though people don't understand what we do." Others argued that from an investment perspective, EDA can be extremely attractive, especially in uncertain times: EDA growth is stable and comparatively predictable, and only goes one direction—up.
You can count on the EDA industry to deliver positive growth at a compounded annual rate of about 12 percent to 15 percent over the long haul, and EDA has never had a down year. Also, EDA companies, even those that sell some hardware, usually have "software-like" business models in the sense that there's little physical inventory and the margins are high. The aggregate gross margins of the 15 publicly traded EDA companies last year totaled 81 percent. Operating margins ranged from 10 percent to 30 percent, with an aggregate of 15 percent operating income last year. This was a very attractive financial profile for a single equity—not to mention an entire industry—in troubled 2001. EDA Industry Still Consolidating And the EDA industry winnows down to a precious few: a spate of acquisitions of publicly held EDA companies by the three industry leaders – Cadence, Synopsys and Mentor – over the last couple of months has further increased the domination of the EDA industry by those three companies. - Cadence Design acquired Simplex Solutions as of June 27 for $3.95/share, or about $165 million. Simplex had revenues of about $48 million for the last four quarters. - Mentor Graphics is acquiring Innoveda at $3.95/share, or about $160 million. That company had revenues of about $80 million for its last four quarters as an independent company. Mentor also acquired IKOS Systems in late April, at $11.00 per share or about $135 million. IKOS had revenues of $53.3 million for the previous four quarters. - Synopsys' acquisition of Avanti was completed on June 7 at about $18.36/share – about $730 million – well above Avanti's 52-week low of $2.62 on Sept. 27, 2001, but below the 52-week high of $21.23 reached on Jan. 9 of this year. Avanti reported 2001 revenue of $398.7 million. Although these three companies claim to represent over three-quarters of worldwide EDA revenues, there are a total of about 145 other companies which classify themselves as EDA companies. (Avanti, Innoveda and IKOS had previously been 'tracked' by FTR, but now have been removed from our weekly and monthly charts. We are presently evaluating other public companies to replace them.)

| COMPANY | Ticker | Close 06/28 | Change Month | 52 Week High | 52 Week Low |
|-----------|--------|-------------|--------------|--------------|-------------|
| Cadence | CDN | $16.12 | -16.3% | $24.94 | $14.10 |
| LogicVision | LGVN | $12.75 | 0.0% | $15.45 | $3.97 |
| Mentor | MENT | $14.23 | -12.4% | $27.15 | $12.84 |
| Synopsys | SNPS | $55.48 | 8.9% | $61.00 | $36.15 |

Average Change during June: -5.0% Japan Eqpt. Orders up 48% YOY in May Worldwide orders for Japan-made chip equipment grew 48.9 percent YOY in May, to ¥112.06 billion ($940 million) – the third month of YOY increase and the highest level since January 2001. May's WW orders represented a 32 percent rise from April, according to the SEAJ. However, orders placed by Japan's chipmakers with both Japanese and foreign firms decreased 17 percent YOY in May, to ¥35.57 billion ($297.4 million), down 5.2 percent from April, it said. Worldwide sales of Japan-made equipment declined 50.2 percent YOY in May, to ¥46.14 billion (about $386 million). Domestic sales of chip equipment made by both Japanese and foreign firms dropped 53.0 percent YOY to ¥25.10 billion ($210 million) in the month. The global book-to-bill ratio for Japanese equipment climbed to 1.61 in May from 1.26 in April. That ratio had topped the key 1.00 mark in April for the first time since January 2001.
The book-to-bill for Japanese equipment was 0.98 in March and 0.74 in February, according to the SEAJ. The data shows that chipmakers, particularly in Asia, are increasing capital spending as global chip demand improves, the SEAJ noted. However, industry observers say that Japanese chip-manufacturing equipment makers "shouldn't get their hopes up too much as the order outlook remains uncertain. A recovery in the global chip market still looks fragile in the absence of strong demand for finished products." Furthermore, the industry can't count on a rise in orders from Japanese chipmakers, who remain hesitant to boost capital spending after sinking deeply into the red last fiscal year, ended March 31. Most industry observers believe that the decline was guaranteed during the 1997-98 chip recession, when Japan's chipmakers cut their CAPEX by 40 percent YOY, to a collective $5.3 billion. They increased their spending in 1999, but then cut it again in 2000, and again in 2001 – by 63 percent YOY. According to IC Insights, since 1992 Japan's chipmakers have been steadily falling behind their foreign rivals when capital expenditures are measured as a percentage of IC sales. Last year, capital spending for the average Japanese semiconductor company was 19 percent of sales, far below the 27 percent global average, according to IC Insights. Japan is already depending on both foundries and test/assembly contractors to provide the capacity its chipmakers are unwilling – or unable – to provide for themselves. Most observers believe that within a few years, many of Japan's large chip companies will become essentially fabless. Most won't build new fabs and instead will turn to foundries to make their leading-edge chip designs. Japanese chipmakers are also expanding production in China, attempting to take advantage of China's low costs to boost their competitiveness in its semiconductor market. Tough Times Test Japan's Chipmakers Japan's IC industry is under intense pressure and scrambling for answers, saddled with billions of dollars in fresh losses: Toshiba's IC operations lost $1.32 billion in the business year ended March 31; Hitachi's semiconductor business lost $1.28 billion; NEC's Electron Devices fell $1.14 billion into the red; and Mitsubishi Electric's chip division posted a $615 million operating loss. Now, chipmakers there appear to be responding by dismantling the strategies that just a decade ago appeared to make them invincible. NEC has moved to spin off almost all its semiconductor and flat-panel-display operations into a series of subsidiaries and joint ventures that will essentially eliminate its Electron Devices group. Hitachi has already spun off its DRAM design and marketing business into the Elpida Memory joint venture with NEC, and is apparently planning to merge its remaining microcontroller and logic-IC operations into a joint venture with Mitsubishi. Japanese chipmakers controlled 51 percent of the worldwide market in 1988, but that share slipped to just 23 percent in 2001, while the U.S. chip industry now controls 52 percent of the global market.

JAPANESE ATE STOCKS

| INDEX | Ticker | Close 06/28 | Change Month |
|---------|--------|-------------|--------------|
| NIKKEI 225 | N225 | 10,622 | -9.7% |
| Advantest | 6857 | 7,460 | -12.2% |
| Ando | 6847 | 480 | -15.5% |
| JEM | 6855 | 1,160 | -5.7% |
| MJC | 6871 | 910 | -14.2% |
| TEL | 8035 | 7,810 | -6.8% |
| TSK | 7729 | 4,070 | -14.0% |
| Yokogawa | 6841 | 930 | -14.3% |

Elpida is now Japan's No. 2 DRAM maker – behind Micron Technology's plant in Kobe, Japan.
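Book-to-bill ratios recur throughout these SEMI and SEAJ items. They are computed on three-month rolling averages of bookings and billings rather than single-month figures (SEMI states this explicitly; we assume SEAJ follows the same convention, which would explain why May's ¥112.06 billion in orders against ¥46.14 billion in sales yields a published ratio of 1.61 rather than 2.4). A minimal sketch of the convention, with made-up monthly figures:

```python
def rolling_avg(series, window=3):
    """Trailing moving average, as used for book-to-bill reporting."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

def book_to_bill(bookings, billings, window=3):
    """Ratio of averaged bookings to averaged billings; a value above
    1.00 means more orders coming in than equipment being shipped."""
    return [bk / bl for bk, bl in zip(rolling_avg(bookings, window),
                                      rolling_avg(billings, window))]

# Illustrative (made-up) monthly figures, US$M:
bookings = [900.0, 950.0, 1084.0]
billings = [800.0, 820.0, 861.7]
print(book_to_bill(bookings, billings))  # -> [~1.18]
```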
The annual *SouthWest Test Workshop* (SWTW) moved – over the objections of many of its long-time attendees – from its previous venue at Paradise Point in San Diego, CA to Long Beach, CA's convention center area. The new venue was generally viewed as adequate, but little more. The workshop had 282 advance registrants before early registration was cut off a week before the conference, and 62 more registered on-site, for a total of 344 – up slightly from 330 in 2001. About one-third were first-time attendees, and the mix of vendors and users was substantially better than at last year's version – when relatively few users attended. As usual, this workshop – which, as we repeatedly say, should be renamed the *International Wafer Probe Conference* (or *Workshop*) – provided a good mix of presentations, including 'hands-on' problem descriptions and solutions for those who are directly involved in wafer probing on a day-to-day basis. It also offered more general presentations on the future of wafer test. SWTW began on Sunday afternoon with a first-class description of the state of *Wafer Level Burn-in*. A very detailed description of the status of chip burn-in, and of various companies' efforts at both in-house and commercial equipment to accomplish full-wafer burn-in, was presented by Bill Mann, General Chair of SWTW. His presentation was followed by Teresa McKenzie, a Motorola engineer, who described her company's work with a sacrificial-metal wafer-level burn-in and test methodology (as *FTR* described in the Jan. '02 issue, p. 7). McKenzie was followed by Steve Steps of Aehr, presenting *Solutions to Technical Challenges for WLBI*. He struck a solid note when he said there are "only three major technical challenges in developing a full-wafer test and burn-in system – thermal, mechanical, and electrical." (See p. 8 of this issue for a description of Aehr's "FOX" test system.) The main workshop produced a wide variety of offerings, from high-power probing to RF and parametric probing. Safe to say, anyone involved in wafer test would have found something of value during the two and one-half days of the workshop. The award for *Best Overall Presentation* went to Brett Grossman and Tim Swettlen of Intel for their presentation titled *Modeling Distributed Power Delivery Effects in High Performance Sort Interface Units*. The award for which this conference is famous – the *Golden Wheelbarrow Full of Crap*, for the most poorly disguised sales pitch – was, for the first time ever, awarded to a company rather than to individual presenters: JEM America, for its two papers, *HAWK: High Parallel Hybrid Probe Card for Memory Devices* and *VSCC: Vertical Spring Contact Card for Bump Probing*, by Phill Mai et al. and Patrick Mui et al. respectively. (How those papers got past the SWTW program committee will forever remain a mystery.) On Monday evening Steve Strauss of Intel gave what should have been the Keynote Address, to which we have devoted a substantial part of this issue of *FTR*. (The actual Keynote, titled *Wafer Testing – Where Back-End Meets the Front-End*, was given by Neil Moskowitz of Prismark Partners. While it was interesting, it seemed off the subject of this conference.) In addition to the presentations – in the long-time tradition of this gathering – long breaks and a number of social gatherings provided lots of opportunity for networking and discussions. All in all, this is one conference where you do get your money's worth – in information, food and booze.
In summary, in a very difficult year for technical conferences and exhibitions – due to tight travel budgets – this year's *SouthWest Test Workshop* has to be rated a substantial success. **ATE/DFT MEETINGS** **July 2002** 17-19 SEMICON/West Test, Assembly & Packaging San Jose, CA Convention Ctr www.semi.org **September 2002** 16-18 SEMICON Taiwan 2002 www.semi.org **October 2002** 8-10 Intl. Test Conference Baltimore, MD Convention Ctr. email@example.com. 15-16 SEMICON Southwest 2002 Austin, TX Convention Ctr. www.semi.org **INDUSTRY** The SIA released its 2002 mid-year forecast last month, outlining its view that an industry-wide recovery is now under way. The SIA expects semiconductor sales to increase by 3.1% in 2002, with the growth rate accelerating to 23.2% in 2003 and 20.9% in 2004. VLSI Research expects that chip equipment sales will reach $100 billion in 2007 – a compound annual growth rate of 22% from $36.8 billion in 2002. Though this forecast seems high considering the industry's recent woes, VLSI notes that when the figure is calculated as a CAGR from the high point of $60 billion in 2000, it translates to only 8% per year. SEMI said that its office in Washington, D.C. will become the headquarters for SEMI North America, and Victoria Hadfield has been named president of its North American operations. Hadfield had been VP for industry advocacy for SEMI. She replaces Bobby Greenberg, who has resigned "to pursue other interests." **COMPANIES** MCT has received notification from the Nasdaq Stock Market that it does not meet the $50,000,000 market capitalization required for continued listing on the Nasdaq National Market. It said it will appeal to the Nasdaq Listing Qualifications Panel. LogicVision said Agere Systems has licensed its Embedded Test 4.0 for design, debug and production test. Morgan Stanley's chip equipment analyst in Japan, Noriko Oki, expects Advantest to miss its FY2002 (ending March 2003) sales target. Credence Systems will fund a Master of Science (MS)-level fellowship program in electrical and computer engineering at Portland State University. Electroglas has donated an EG41200c parametric wafer prober to that same school's new IC Design and Test Laboratory. Kulicke & Soffa will revise the wafer test portion of its chip test tooling business by consolidating its multiple U.S.-based probe card manufacturing facilities – in Gilbert, AZ, Austin, TX and San Jose, CA – followed by consolidation of its Taiwan-based manufacturing operations into the Hsin Chu location. No changes are expected in European operations at this time. **PEOPLE** David Tacelli was named president and COO of LTX. He had been an Exec. VP of the company since 1999. Jim Healy has resigned his positions as president of ASAT USA and Sr. VP of worldwide sales and marketing for ASAT. Sales and marketing will report to Harry Rozakis, ASAT's new CEO. Bryan Hoadley has been named STM Worldwide Account Manager for Credence Systems, based in Grenoble, France. He had been Sr. Manager of Field Operations for that company. Todd Delvecchio will assume Hoadley's previous position. Dennis Bibeau has joined LogicVision as Sales Manager. Bibeau had been with Symtx, in Austin, TX, and prior to that with LTX in Boston. Ray Sites has rejoined LTX as Account Manager for the Western Region. Sites also comes from Symtx. Chin Koon Koh has been named GM of Asian manufacturing operations for Electroglas.
Tan Lay Koon has been named President/CEO of STATS, replacing Harry Davoozy, who resigned after just six months in that position "to pursue interests in the United States", according to the company.
Climate Change Loss and Damage Compensation Katak Malla* Abstract The Conference of the Parties (COP) to the UN Framework Convention on Climate Change (UNFCCC), held in Doha (2012), recognised “protection against loss and damage caused by climate change” as an agenda item for the negotiation of a new treaty on climate change. This is obviously one of the most controversial agenda items of the COP negotiations: e.g., who is responsible for the harm that results from climate change, and how could or should the harmed states (or individuals) be compensated appropriately? The present author suggests that some national case law developments may offer useful guidance for the future COP, especially when negotiating the controversial issues of harm and compensation. The reasoning behind this suggestion is that case law developments help us to understand the nexus between national court litigation, legislation and the domestic policy of those countries which are generally not favourable to binding obligations of emission reductions. An understanding of the nexus (or tensions) that currently exists at different national levels could be instrumental in comprehending and acknowledging the domestic reality of the parties, and in conducting future COP negotiations accordingly. This paper focuses on the emerging trend of national adjudication of climate-change-related disputes in some of the influential states in the COP, assessing how these litigations are building pressure for the necessary legislation on greenhouse gas emission reductions at the national level. The nexus between litigation and legislation, as well as the domestic climate policy of states, could be decisive in shaping the content of “climate change loss and damage” in a new climate treaty that is slated to be concluded in 2015 and implemented from 2020. 1. Introduction The “loss and damage caused by climate change” has formally been added as an agenda item of the international negotiations for a new treaty on climate change. The Conference of the Parties (COP) to the UN Framework Convention on Climate Change (UNFCCC) held in Doha in 2012, specifically COP18, recognised the agenda.\(^1\) The United States and some other like-minded states opposed the use of the concept of loss and damage in the text of the COP18. At the same time, the European Union (EU) and the group of developing countries endorsed the use of this phrase in the COP18 decision, and it is being described as a significant step towards a new treaty. The agenda may be a necessity for the COP, but it is certainly a difficult obstacle for the COP to overcome. How the agenda might be incorporated into a new treaty, scheduled to be concluded in 2015 and implemented from 2020 onward, remains to be seen.\(^2\) The present author suggests that some national case law developments may be used as a guide for future COP negotiations. This paper implicitly focuses on emissions from fossil fuel use, particularly from industrial sectors. Other types of emissions, e.g. from deforestation, methane, livestock and agriculture, are not addressed. \(^1\) Subsidiary Body for Implementation, Thirty-seventh session, Doha, 26 November to 1 December 2012, Agenda item 10; Approaches to address loss and damage associated with climate change impacts in developing countries, Decision 1/CP.16, paragraphs 26–29; see full document: >http://unfccc.int/resource/docs/2012/sbi/eng/l44.pdf>.
The main issue surrounding climate change impact mitigation and adaptation, when looked at from a strictly legal point of view, is that greenhouse gas emissions are not defined as an illegal act \textit{per se} by any national or international law. This means that the act of greenhouse gas emission may fall under the category of those harmful acts that are not prohibited by law, and could therefore perhaps be addressed under the Common Law of equity and torts. The question is which option the COP will choose: whether the future COP negotiations could and should address climate change loss and damage in line with the ILC Draft Articles on the Responsibility of States for Internationally Wrongful Acts;\(^3\) whether it should rather be addressed on the basis of the consequential damage; or whether the COP negotiations will follow the idea of control and reduction of the source of damage, which is somewhat similar to the approach employed by the Ozone treaty regime.\(^4\) For example, the Climate Fund can be developed and managed in a similar manner to the Ozone Fund established under the Montreal Protocol, involving and assisting developing countries as a compliance mechanism. The COP negotiations should thus, in particular, be focused on how countries like China, India and Brazil, as well as the other developing and least developed countries, could be guaranteed as beneficiaries of the Climate Fund.\(^5\) \(^2\) The negotiations on loss and damage are not formally linked to the 2015 agreement, but to the implementation of the UNFCCC; i.e., COP18 referred the issue to the subsidiary body responsible for negotiating the 2015 agreement. The work programme on loss and damage originates from COP16. \(^3\) The International Law Commission (ILC) initially started its work on draft articles on the liability for harmful activities not prohibited under international law, on which basis the ILC later adopted the Draft Articles on Responsibility of States for Internationally Wrongful Acts (2001 and 2006). Text adopted by the ILC at its fifty-third session, in 2001, and submitted to the General Assembly as a part of the Commission’s report covering the work of that session (A/56/10). The report also contains commentaries on the draft articles, also presented in \textit{Yearbook of the International Law Commission, 2001}, vol. II, Part Two. \(^4\) The Ozone treaty regime consists of the Vienna Convention on Ozone depletion (1985) and its Montreal Protocol (1987), which aims to control production and consumption of specific chemicals: CFCs (HCFCs), methyl bromide and similar chemicals. Specific targets are set under the ozone treaty regime, aiming at the reduction of chemicals under a timetable agreed by the parties. The Protocol has been amended in London (1990), Copenhagen (1992), Montreal (1997) and Beijing (1999). The London Amendment provided for an Interim Multilateral Fund to assist and qualify developing countries for compliance procedures, among others. In the Copenhagen Amendment, parties made the Interim Multilateral Fund permanent. The Montreal Amendment obligated countries to establish and implement a licensing system for the import and export of new, used, recycled and reclaimed controlled substances, and to control trade in the banned substances by parties not in compliance with the Protocol.
The Beijing Amendment provided for a "basic domestic needs" exception for certain controlled chemicals and added bromochloromethane to the list of controlled substances.

\(^5\) The Ozone Fund was agreed upon with fair cost-sharing and a reasonable grace period for the developing countries. In a similar approach to the grace period under the Montreal Protocol, China, India and Brazil could be offered a reasonable (greenhouse gas emissions) grace period in the short term, the other developing countries in the medium term, and the least developed countries in the long term; see Katak Malla, "The EU and Strategies for New Climate Treaty Negotiations", \textit{European Policy Analysis}, November issue 2011:12epa, p. 5.

The first set of questions is of a political and legal nature, and these questions are also generally relevant to the COP negotiations: who is responsible for the harm that results, or could result, from greenhouse gas emissions? Is greenhouse gas emission reduction an essentially political issue and, if so, what is the political obligation of states (or individuals) to mitigate climate change? If climate harm is also a legal issue, then who has the right to file a case, against whom (governments or companies, or both), and in which court of law? Should climate change be considered a part of the law of public nuisance and, if so, what are the possibilities for compensating the victims of climate change? What conclusion can be drawn from the practices of some national courts in this regard? Does this line of litigation represent a solution to the problem and, if not, what possible solutions are available with regard to climate change mitigation and compensation of climate harm? The second group of questions relates directly to the COP negotiations: how does the "damage aid" recognised by the COP18 decisions differ from classic official development aid (ODA)? In what sense does "damage aid" differ from the earlier COP decisions on mitigating climate change, e.g. the "green climate fund" (COP15) and "long-term finance" (COP16 and COP17)? Will the least developed countries and the small island countries receive funds to repair "loss and damage" incurred as a result of climate change, based on a pledge made by industrial states? If future COP decisions are simply going to be policy statements, what is the relevance of such decisions in terms of legal "injury", "harm" and "compensation" to victims of climate change? Generally speaking, a state responsibility-based claim for damages under international law has to fulfil the following criteria: "(i) identifying the damaging activity attributable to a state; (ii) establishing a causal link between the activity and the damage, (iii) determining either a violation of international law or a violation of a duty of care (due diligence), which is (iv) owed to the damaged state, and (v) in a court of law would be to quantify the damage caused and relate those back to the activity."\(^6\) Keeping these criteria in view, it can be useful to examine some case law developments as a way to explore the two sets of questions mentioned above. In doing so, a few key case law examples from a number of countries will be presented first. Afterwards, some noteworthy legal opinions will be discussed, and finally conclusions will be presented.
2. Case Law

Some key pieces of litigation are selected from Canada, India and the United States. Primarily because of the present author's language barriers, the case law developments in China, Brazil, Russia and other countries are not included in this work. Because of the EU's longstanding support of the COP negotiations, and its "climate and energy package"\(^7\) already in place, the EU's case law is less relevant to this study. With their democratic governments and independent judiciaries, Canada, India and the United States offer case law that is particularly relevant in exploring the possibilities for climate harm compensation. This discussion will focus on the tension between litigation and the climate policy of states which are generally not favourable to a binding obligation of emission reductions. The selection of the case law is based on the conflict between these countries' climate policy towards a binding obligation of emission reduction and their national litigation.\(^8\) A more careful study of the approaches of national courts—especially those of Canada, India and the United States—towards climate change litigation could serve as an indicator. After its formal withdrawal from the Kyoto Protocol in 2011, Canada's position, in particular, has become more relevant to some of the above-mentioned questions. The case law from India and the United States is considered instructive, because the former does not have the same obligation of emission reductions (as the Annex I Parties to the Kyoto Protocol) and the latter remains outside the Protocol. The case law examples selected for the discussion are national court decisions, together with one important decision from an international legal body, the WTO. These decisions differ in terms of national and international jurisdiction, but they are also interrelated through the prism of the need for emission reductions and sustainable energy development. For instance, one case concerns Canada's obligation to reduce greenhouse gas emissions under a Canadian federal law relating to the Kyoto Protocol, and another concerns Canada's withdrawal from the Kyoto Protocol. Yet another case, decided by a WTO panel, concerns Canada's renewable energy projects. The decisions selected from the Supreme Court of India deal with important principles of international environmental law. Similarly, the decisions in focus from the US Supreme Court deal with the abatement of carbon dioxide emissions by fossil fuel-based utility companies. It should be acknowledged that domestic case law development is mostly not about liability in the strict sense (i.e., compensation for damages) but about injunctive relief (i.e., mitigation of greenhouse gas emissions). How could domestic case law, which is often motivated by slow progress on climate change mitigation, be expected to influence the COP negotiations? Generally, it is assumed that there is a nexus, or tension, between litigation and legislation at various national levels.

\(^6\) Richard S.J. Tol and Roda Verheyen, "State responsibility and compensation for climate change damages—a legal and economic assessment", *Energy Policy* 32, pp. 1109–1130, (2004).

\(^7\) See <http://ec.europa.eu/clima/policies/package/>.
For example, despite the lack of a proactive national policy for a binding obligation of emission reductions, India's courts and tribunals have interpreted legislative provisions relating to environmental protection so that sustainable development must be taken into account.\(^9\) Given that national legislation is increasingly becoming a necessity for low-carbon economic growth in developed and developing countries alike, the author considers this progress a linchpin of climate change mitigation solutions. As well, climate change and energy policies are being integrated and put into practice in various national legislations. The EU's climate and energy package can be seen as a noteworthy example in that regard.\textsuperscript{10} Thus, it is considered necessary to demonstrate the tension, or nexus, between the legislation, litigation and climate policy of the states in focus. More specifically, Canada's internal tension can be seen in its withdrawal from the Kyoto Protocol, the Federal Court decision confirming the right to withdraw, and the implications of the WTO panel decision relating to its renewable energy development. In India, the tension lies in the dilemma posed by the judicial activism of the Supreme Court of India concerning harm and compensation, on the one hand, and India's policy of voluntary, rather than binding, emission reductions, on the other hand. The tensions between litigation and legislation in the United States are strikingly demonstrative. For example, while US Supreme Court decisions have pointed to legislation as the necessary tool for greenhouse gas emission reductions, a legislative bill on emission reduction stalled and died in the US Senate as a result of opposition to the bill. Afterwards, US President Barack Obama announced in public that, "if Congress won't act soon", he will act "to reduce pollution, prepare our communities for the consequences of climate change, and speed the transition to more sustainable sources of energy."\textsuperscript{11} It is thus logical to expect that the internal situation in the United States would lead the country towards adopting appropriate national legislation on climate change, or actively negotiating a new climate treaty under the COP, or even both.

\(^8\) For example, the EU – and its member states – has accepted a legal obligation of greenhouse gas emission reduction. The United States has not, and does not seem ready to accept a legal obligation so long as the developing economic powers, i.e. China, India, Brazil and other countries, whose fossil fuel industrial emissions have increased in recent decades, are not ready to do so. Currently, China, India and Brazil are rising economic powers whose respective capabilities have increased considerably, both in terms of emissions and of technological know-how. These three countries still consider themselves developing countries and, therefore, insist on the developed countries' responsibility for greenhouse gas emission reductions.

\(^9\) For example, the decision of the Supreme Court in *Narmada Bachao Andolan v. Union of India* 2000 (10) SCC 664 at p. 727; the Taj Trapezium case, *M.C. Mehta v. Union of India*, AIR 1997 SC 734; see also Ilona Millar, "The Environmental Law Framework for Sustainable Development – Principles of Sustainable Development in International, National and Local Laws", http://www.actpla.act.gov.au/_data/assets/pdf_file/0006/13893/Millar_paper.pdf.
Therefore, the above-mentioned national case law developments and some relevant legal opinions on harm and compensation (to be discussed later), pertaining to the rationale, risks and benefits of climate litigation, may be a useful guide for future COP negotiations.

2.1 Rationale, risk and benefit of litigation

The rationale for analysing litigation is that a single case may be a small dot in the wider environmental law context, but a combination of such dots may lead to the development of environmental jurisprudence. For instance, a decision made by the Federal Court of Canada, determining who can represent whom in a court of law concerning the reduction of greenhouse gas emissions, could be an inspiration for the Supreme Court of India or of the United States. When the independent courts of various countries decide the same issue, reaching the same or different conclusions, it helps jurists to form opinions which promote an evolution of the jurisprudence towards broader changes. We should, however, be mindful that legal experts have identified a number of difficulties and risks associated with climate change-related litigation at the national and international levels.\textsuperscript{12} Pursuing these types of lawsuits in the various courts of law is problematic, mainly because of the difficulty of demonstrating causal links between greenhouse gas emissions and climate harm. However, some progress is slowly being made. This kind of litigation exercise has opened up some possibilities for the adjudication of climate change-related cases. With regard to litigation concerning climate change mitigation through the use of non-fossil fuel-based energy, we should be aware of the fact that in some situations the outcome of litigation may have a "deterrent effect on the expansion of production capacity for renewable energy if it spreads uncertainty about the types of support that really are legally acceptable."\textsuperscript{13} In other situations, the litigation's outcome may "involve countermeasures of various kinds, or a desire to create 'pawns' to use in negotiations that do not necessarily involve the same substantive issues."\textsuperscript{14} One specific study of climate change-related litigation suggests that "it could be a useful tool to draw media attention."\textsuperscript{15} It is thus not unreasonable to assume that genuine media attention creates favourable national and international public opinion, which in turn influences the nexus between litigation and legislation, i.e. litigation influencing legislation by way of public opinion, and vice versa. Analysis of litigation of this sort is considered necessary because it is possible that public opinion in favour of environmental protection may result in national legislation, or even in the conclusion of a new climate treaty. Similarly, the burden of litigation may also lead to legislation. The mutual influence between litigation and legislation could be considered a means of accommodating competing policies, if not of converging contradictory interests.

\textsuperscript{10} See <http://ec.europa.eu/clima/policies/package/>.

\textsuperscript{11} President Barack Obama's speech, broadcast directly in the world's visual media on February 13, 2013.

\textsuperscript{12} Laura Horn, "Is Litigation an Effective Weapon for Pacific Island Nations in the War Against Climate Change?", \textit{Asia Pacific Journal of Environmental Law}, Vol. 12, issue 1, 2009, 169–202.
The nexus between litigation and legislation could also influence institutional aspects of the legislative and judicial branches and their competence. A number of cases relating to climate change have been initiated in different countries using a variety of statutes under the Common Law and international law. Public interest litigations, or class actions, are the lawsuits most relevant to climate change mitigation and sustainable energy. Public interest litigation means that an individual or a group of people (collectively or individually) can bring a claim to a court involving the interests not just of the parties to the case, but of the general public as a whole.\textsuperscript{16} This type of litigation is not usually practised in the Continental legal system. How this type of litigation is used in Canada, India and the United States, and in what ways it highlights the issues raised in this discussion, is the central focus of the following.

3. Canada

First, let us review and examine the case law from Canada to understand who is entitled to file a case, against whom, and where (i.e. in which national courts), especially when the dispute relates to climate change mitigation, or to climate harm and compensation for that matter. One case law example from Canada revolves around the question whether or not non-governmental organizations (NGOs) have a right to file a case against governments, demanding the implementation of a particular national law that also relates to a global common concern, i.e. climate change mitigation. If NGOs do have those rights, does the litigation result in any tangible achievement towards mitigation? The Canadian case law example, together with one WTO ruling, will also shed light on the complexities involving free trade and renewable energy development.

\textsuperscript{13} David Langlet, \textit{Förnybar energi – den nya konfliktytan mellan miljöskydd och frihandel – analys} [Renewable energy – the new line of conflict between environmental protection and free trade – an analysis], JP Miljönret, 2013-03-12.

\textsuperscript{14} Ibid.

\textsuperscript{15} Laura Horn, "Is Litigation an Effective Weapon for Pacific Island Nations in the War Against Climate Change?", \textit{Asia Pacific Journal of Environmental Law}, Vol. 12, issue 1, 2009, 169–202; e.g. "the Pacific Island nations seeking to recover compensation from developed countries for the adverse effects of climate change".

\textsuperscript{16} Litigation filed in a court of law for the protection of the "public interest", e.g. against pollution and hazardous waste, etc.

3.1 Friends of the Earth v Canada

Despite the formidable difficulties of litigating climate change in national courts, a noteworthy attempt was made in *Friends of the Earth v Canada* (2008).\(^{17}\) From the start, the issue at stake before the Federal Court of Canada was whether or not an NGO could represent the general public interest. The plaintiff, Friends of the Earth (an NGO), had challenged the Government of Canada for not fulfilling its obligations under the Kyoto Protocol Implementation Act (KPIA). As background to the case, it should be noted that Canada had initially agreed, under the Kyoto Protocol to the UNFCCC, to reduce its greenhouse gas emissions by six per cent from 1990 levels by 2012. The KPIA is a federal law of Canada aiming for the effective implementation of the Kyoto Protocol. The case is thus based on the KPIA, which includes Canada's legal obligation to ensure that the country takes effective and timely action to meet its international treaty obligations under the Kyoto Protocol.
In *Friends of the Earth v Canada*, the Court recognised the *locus standi* of Friends of the Earth—a right to sue the Government of Canada. This needs to be seen in the context of international law, under which NGOs are generally not recognised as subjects of international law. Whether Canada's Federal Court decision remotely recognised Friends of the Earth as a subject of international law may still be debatable. The decision has, nonetheless, opened an avenue for NGOs to bring public interest litigation to national courts of law. Apart from some exceptional circumstances, such as genocide, crimes against humanity and the protection of human rights, individuals are not generally considered subjects of international law, but signatories to the 1998 Aarhus Convention have agreed to take a rights-based approach to environmental matters.\(^{18}\) The NGO's right to engage in public interest litigation has, since 2008, been established by the Federal Court of Canada. That decision stands as an example for other national courts to follow, especially in countries where NGOs can bring cases against governments for failing to meet international obligations. Such a possibility, however, may exist only in countries where the court system is able to exercise judicial independence. Although the recognition of NGOs' right to represent the public interest through litigation in a court of law is an achievement of the case, the Federal Court of Canada did not recognise the plaintiff's claim demanding that the Government of Canada fulfil its obligations to reduce its share of emissions. Instead, it concluded that "the Court has no role to play reviewing the reasonableness of the government's response to Canada's Kyoto commitments."\(^{19}\) The Court also concluded that, "while there may be a limited role for the Court in the enforcement of the clearly mandatory elements of the Act such as those requiring the preparation and publication of Climate Change Plans, statements and reports, those are not matters which are at issue in these applications."\textsuperscript{20} Nonetheless, it must be noted that while Canada's Federal Court did not order the Government of Canada to comply with the demands of the plaintiff, neither did the Court hold that Canada is free from the commitments that the country has made under the Kyoto Protocol for its share of emission reductions.

\(^{17}\) *Friends of the Earth v. Canada*, 2008 FC 1183, [2009] 3 F.C.R. 201, T-2013-07, T-78-08, 1683-07. The Court found that Parliament had, with the Act, "created a comprehensive system of public and Parliamentary accountability as a substitute for judicial review"; see also *Emissions Trading and Climate Change Bulletin*, November 2008, McMillan LLP, <www.mcmillan.ca>.

\(^{18}\) This convention grants the public rights regarding information, public participation and access to justice in governmental decision-making processes on matters concerning the local, national and transboundary environment (The UNECE Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters). See also Jonas Ebbesson, "Public Participation and Privatisation in Environmental Matters: An Assessment of the Aarhus Convention", *Erasmus Law Review*, Volume 4, Issue 2 (2011).

\(^{19}\) *Friends of the Earth v. Canada*, 2008 FC 1183, [2009] 3 F.C.R. 201, T-2013-07, T-78-08, 1683-07.
A few years after \textit{Friends of the Earth v Canada}, the Government of Canada notified the UN Secretary-General (December 15, 2011) that the country was withdrawing from the Kyoto Protocol. In the aftermath of the notification, law professor Daniel Turp applied to the Federal Court of Canada, asking for judicial review of the decision concerning Canada's withdrawal from the Protocol. In \textit{Turp v Canada} (Minister of Justice), the Federal Court dismissed the application, concluding that "the executive branch of the Government had the ability to withdraw from the treaty."\textsuperscript{21} As a result of its withdrawal from the Kyoto Protocol, Canada has become subject to international criticism. In response to the increasing international criticism, the Canadian Minister for the Environment, Peter Kent, argued that he had invoked his country's "legal right" to do so.\textsuperscript{22} At the same time, UN Climate Chief Christiana Figueres commented that Canada had both "a legal and moral obligation" to reduce emissions and lead efforts to fight climate change.\textsuperscript{23} Whatever the interests involved may be, Canada has withdrawn from the Kyoto Protocol. In the context of the extension of the Kyoto Protocol for its second commitment period by COP18, Canada's withdrawal could be a point of further legal dispute, domestically as well as internationally. It could be a matter of contention between Parties to the Protocol, especially under the rubric of the Vienna Convention on the Law of Treaties (VCLT). If and when any dispute arises, the enforcement mechanisms established under the Kyoto Protocol could and should take priority over the VCLT-based general international obligations of states, because the Protocol is a specific treaty instrument,\textsuperscript{24} whereas the VCLT is a general framework treaty. As a rule, the Parties to the Protocol are required to demonstrate that they are within their assigned amounts of greenhouse gas emissions,\textsuperscript{25} according to the first commitment period (2008–2012).

\textsuperscript{20} Ibid.

\textsuperscript{21} \textit{Turp v. Canada} (Minister of Justice) et al. 2012 FC 893. Whether Canada's withdrawal from the Kyoto Protocol violated the KPIA was not considered by the Court in \textit{Turp v. Canada}. The separation of powers between the branches of government also remained unaddressed by the Court, i.e. is the executive branch of the government free to withdraw from a treaty without the consent of the legislative branch?

\textsuperscript{22} "Canada pulls out of Kyoto Protocol", CBC News, posted Dec 12, 2011, 4:00 PM ET; <http://www.bbc.co.uk/news/world-us-canada-16151310>.

\textsuperscript{23} "Canada's withdrawal from Kyoto Protocol regrettable – UN climate official", <http://www.un.org/apps/news/story.asp?newsid=40714#.UhNGa5hvmfA>.

\textsuperscript{24} In case of failure to meet these obligations, there are two branches established under the Kyoto Protocol's compliance mechanism: the Facilitative Branch and the Enforcement Branch. The Enforcement Branch is entitled to determine whether a Party (Annex I) is not in compliance with its emissions limitation. In that case, the Party is required to cut emissions by an additional 30 per cent, and a Party can be suspended from the Clean Development Mechanism (CDM), thereby being prohibited from making transfers by way of the Emission Trading Mechanisms.
The procedural non-compliance issues concerning Canada should have been dealt with during the commitment period by the oversight body. On the other hand, substantive non-compliance would require a Party that has exceeded its emission allocation to purchase equivalent carbon emission rights. If the Party refuses to comply, then economic measures such as fines or trade-related enforcement measures may be used.

\textsuperscript{25} According to Article 18 of the Kyoto Protocol, "The Conference of the Parties serving as the meeting of the Parties to this Protocol shall, at its first session, approve appropriate and effective procedures and mechanisms to determine and to address cases of non-compliance with the provisions of this Protocol, including through the development of an indicative list of consequences, taking into account the cause, type, degree and frequency of non-compliance."

Whether Canada's withdrawal from the Kyoto Protocol at the end of the first commitment period is subject to legal judgment by a court of law remains an open question. Canada's withdrawal from the Protocol could also be challenged from the point of view of *pacta sunt servanda*, which in this case may imply that non-fulfilment of the obligation during the first commitment period constitutes a breach of the Kyoto Protocol. According to Article 27 of the Protocol, "Any Party that withdraws from the Convention shall be considered as also having withdrawn from this Protocol."\(^{26}\) It seems that Canada's withdrawal is aimed only at the Protocol: Canada remains a party to the UNFCCC and continues to participate in the COP negotiations. So far, no further legal action has been taken against Canada's withdrawal from the Kyoto Protocol, either by the Facilitative Branch or by the Enforcement Branch. Neither the Parties to the Protocol nor the EU—maybe for jurisdictional or political reasons—seem ready to bring a case against Canada before the ICJ concerning its withdrawal from the Protocol on the basis of the VCLT. The Kyoto Protocol foresees the possibility of a party legally withdrawing, but the question arises which court's jurisdiction would be appropriate if a case were to be filed against Canada.

3.2 WTO ruling

It is relevant to note that a related WTO case from 2011, dealing particularly with energy and trade, has led to a new twist in Canada's position concerning climate change mitigation. This litigation started when Japan and the EU brought a complaint against Canada at the WTO concerning Ontario's renewable energy program. It should be noted that Canada has both federal and province-based energy laws, one of which is Ontario's 2009 Green Energy and Green Economy Act (GEGEA). The GEGEA aims to ensure access to alternative energy, as well as energy conservation and efficiency. Japan and the EU considered some rules of the GEGEA to be contrary to the WTO principle of non-discrimination. In particular because of the "local content requirement" under the GEGEA, Japan and the EU brought the matter before a WTO adjudication panel against Canada.\(^{27}\) In 2012, the WTO ruled in favour of the plaintiffs. The WTO panel ruled that the renewable energy scheme had breached some WTO rules, but it failed to agree on whether this constituted an illegality. The subsidy clause, which is intertwined with the "local content requirements", is the core issue of disagreement.

\(^{26}\) Article 27 of the Protocol reads: "(1) At any time after three years from the date on which this Protocol has entered into force for a Party, that Party may withdraw from this Protocol by giving written notification to the Depositary. (2)
Any such withdrawal shall take effect upon expiry of one year from the date of receipt by the Depositary of the notification of withdrawal, or on such later date as may be specified in the notification of withdrawal."

\(^{27}\) World Trade Organization, DS412/R and DS426/R. In the summer of 2012, Argentina initiated dispute settlement proceedings against the EU, arguing that Spain's implementation of EU Directive 2009/28/EC on the promotion of the use of energy from renewable sources is contrary to WTO rules by improperly favouring EU-based producers (*Certain Measures Concerning the Importation of Biodiesels*). As negotiations in the autumn did not result in a solution, Argentina requested in December 2012 that a panel—the first instance in the WTO dispute settlement process—be established (DS443). It is not the EU's sustainability criteria for biofuels, which have been disputed on both political and scientific grounds, that are subject to review, but certain national implementing measures.

After the decision, Canada lodged an appeal against the WTO ruling, arguing that "Ontario's feed-in tariff (FiT) scheme aims to support renewable energy by guaranteeing electricity generators above-market rates on certain renewable sources of energy, such as wind and solar."\textsuperscript{28} In response to Canada's appeal, the WTO, in its ruling of May 2013, found the incentives Canada offered to local companies over foreign firms to be discriminatory.\textsuperscript{29} This ruling has made it clear that the use of high-quality, cost-effective technologies for sustainable energy development should not be hampered by protectionist measures. The ruling has, in fact, left Canada no choice but to work with the provincial authorities to respond to it. Some scepticism has, however, arisen as to whether the situation after the ruling will spur more WTO disputes. Such disputes are likely to arise among countries that are desperate for economic growth, and also among countries suspecting that their energy development projects are being closed off to foreign interest as a result of the WTO ruling.\textsuperscript{30} One could argue, in any case, that alternative energy development that leads to greenhouse gas emission reduction should prevail over trade issues. The WTO panel ruling has not prohibited renewable energy incentives as such, but only incentives that favour local content products over products from other countries. Canada, or any other state, could have a FiT as long as it treats foreign and domestic renewable energy components equally. It is relevant to note that China has filed a complaint with the WTO against the EU, requesting consultations regarding domestic content restrictions affecting the renewable energy generation sector, including feed-in tariff programs.\textsuperscript{31} The WTO decision has thus become a source of legal uncertainty.
"While there are a number of potential opportunities associated with investments in emission reduction projects, there are also a number of potential liabilities associated with investing in firms or projects that have high emissions", according to Chris Rolfe, Staff Counsel at West Coast Environmental Law.\textsuperscript{32} Rolfe argues that "emitters will pay carbon taxes, … have to buy allowances or credits, or pay more for fossil fuels."\textsuperscript{33} Yet, "where long-term fixed price contracts commit an emitter to production of greenhouse gas intensive products, the emitter should consider trying to control its potential liability."\textsuperscript{34} However, fault-based liability in the strict sense of compensation for damage is difficult to establish, particularly in the case of greenhouse gas emissions. The seriousness of the damage (or injury) becomes the prime matter of legal relevance in any case involving liability for compensation of harm. The identification of a wrongful act is necessary to establish climate harm liability for compensation.\textsuperscript{35}

\textsuperscript{28} ICTSD Reporting, 5 November 2012; <http://ictsdo.org/i/news/biores/154399/>.

\textsuperscript{29} DS426.

\textsuperscript{30} "For example, the United States has already charged India with illegally favoring local producers in its solar sector, and China has hit the EU with a claim that Greece and Italy favored solar power firms that bought local components. Other potential disputes are simmering, with Brazil, Indonesia, Nigeria, Russia, Ukraine and the United States all under scrutiny in sectors such as energy, mining, car making and telecoms", as reported by \textit{Reuters}, Mon May 6, 2013, 12:39pm EDT.

\textsuperscript{31} WTO, DS452; <http://www.wto.org/english/tratop_e/dispu_e/cases_e/ds452_e.htm>.

\textsuperscript{32} Chris Rolfe, Staff Counsel, "Opportunities and Liabilities from Greenhouse Gas Emissions and Greenhouse Gas Emission Reductions", \textit{West Coast Environmental Law}, March (1999).

\textsuperscript{33} Ibid.

\textsuperscript{34} Ibid.

\textsuperscript{35} For example, according to Article 2 of the Draft Articles on Responsibility of States for Internationally Wrongful Acts (DASR), an internationally wrongful act occurs when conduct consisting of an action or omission: a) is attributable to the State under international law; and b) constitutes a breach of an international obligation of the State.

If any court of law is ever asked to decide the legality of greenhouse gas emissions, judges will have to rely on natural science-based evidence of what constitutes significant harm. Making a successful claim for climate change loss and damage compensation would require demonstrating a clear linkage between cause and effect, as was done, for example, in linking tobacco use to lung cancer.\textsuperscript{36}

4. India

It is worthwhile to contemplate how independent courts in other countries would have decided cases like \textit{Friends of the Earth v Canada} and \textit{Turp v Canada}. For instance, how would the Supreme Court of India have decided such cases, given the exceptional judicial activism exercised by that Court in cases relating to harm and compensation, as well as to important environmental law principles? Because of its landmark decisions, India's Supreme Court is somewhat unique in its high level of judicial activism as concerns environmental rights and principles.
Legal experts believe that the Supreme Court of India "will continue to play a significant role in facilitating adaptation to climate change."\textsuperscript{37} This has led to the Indian Parliament's creation of the National Green Tribunal (NGT), a court dealing with environmental cases. The Tribunal is empowered to render decisions against violators of environmental laws and to enforce the payment of civil damages. The Supreme Court of India is known for its judicial activism and its exercise of public interest litigation. In this context, a few noteworthy examples need to be taken into perspective. Greenhouse gas emissions have not yet been proven to be a toxin. If and when such emissions are eventually scientifically proven to be toxic, India's Supreme Court decision in \textit{M.C. Mehta v Union of India},\textsuperscript{38} in which the Court defined polluters' "strict and absolute liability", could become relevant. In that case, it was held that if an enterprise is engaged in a hazardous or inherently dangerous activity, such as emitting toxic gases, the enterprise is strictly and absolutely liable to compensate all those who are affected by the toxic emissions. One international case concerning transboundary herbicide spraying is relevant to mention here. Ecuador filed a case against Colombia at the International Court of Justice (ICJ) concerning transboundary environmental harm, arguing that Colombia's aerial herbicide spraying at the border with Ecuador had resulted in significant environmental harm. The \textit{Ecuador v Colombia} case was eventually settled by an agreement between the parties.\textsuperscript{39} According to the 2013 Agreement, Colombia will not conduct aerial spraying operations across its border with Ecuador.\textsuperscript{40}

\textsuperscript{36} One relevant example of how to prove a link between human activities and climate change is the causal link between smoking and lung cancer. This link was initially proved by Richard Doll (in 1950); nicotine substances were recognised as addictive by United States District Court Judge Gladys Kessler, and a federal appeals court in Washington upheld Kessler's findings and found large tobacco companies liable in the case in 2006. Source: news.bbc.co.uk, June 29th, 2010.

\textsuperscript{37} Aitken Hem, "The Role of the Supreme Court in Facilitating Adaptation to Climate Change Impacts in India", \textit{Journal of Environmental Research and Development}, Vol. 7, No. 1, July–September 2012, pp. 155–165.

\textsuperscript{38} \textit{M.C. Mehta v. Union of India}, AIR 1987 SC (1965).

\textsuperscript{39} On September 13, 2013, the ICJ made an Order recording the discontinuance by Ecuador of the proceedings and directing the removal of the case from the Court's List. \textit{Aerial Herbicide Spraying (Ecuador v. Colombia)}, case removed from the Court's List at the request of the Republic of Ecuador; see <http://www.icj-cij.org/docket/files/138/17526.pdf>.

\textsuperscript{40} The Agreement of 9 September 2013 between the parties to the case, http://www.icj-cij.org/docket/

Again turning to the cases decided by the Supreme Court of India, it is remarkable that India's Supreme Court has acknowledged the Polluter Pays Principle as the law of the land in *Indian Council for Enviro-Legal Action v Union of India*, a case involving an industrial chemical plant.
In addition, in *Vellore Citizens Welfare Forum v Union of India*, the Indian Supreme Court held that the Precautionary Principle and the Polluter Pays Principle are part of the environmental law of the country. The above-mentioned decisions indicate that the jurisprudence of the Indian Supreme Court has evolved significantly, which could be useful for climate change mitigation through litigation. In *Municipal Council, Ratlam v Vardhichand*, the Court held that pollutants discharged by big factories constitute a "public nuisance", and that open drains, garbage and pollutants discharged by big factories to the detriment of those living nearby are detrimental to "social justice." This is the current state of jurisprudence as defined by the Indian Supreme Court regarding nuisance and social justice. How the law of nuisance is argued with respect to climate change mitigation and the reduction of fossil fuel industrial emissions will be seen in the following cases decided by the Supreme Court of the United States.

5. The United States

Two important legal issues decided by the United States Supreme Court stand out concerning the theme of this paper: whether states and private parties are entitled under the public law of nuisance to bring a lawsuit against utility companies, demanding their share of carbon dioxide emission reductions; and whether issues involving greenhouse gas emission reductions are purely political issues. If these are also legal issues, what legal conclusion can be drawn from the US case law development?

5.1 Connecticut v American Electric Power Co

*Connecticut v American Electric Power Co* (2011) is a noteworthy case from the United States.\(^{44}\) The case was filed at the United States District Court for the Southern District of New York (2004). Eight states, as well as New York City and three non-profit land trusts, sued the five largest electric power companies in the United States. The plaintiffs claimed that the companies' emissions had created a "substantial and unreasonable interference with public rights" and that this was being done "in violation of the federal common law of interstate nuisance." The plaintiffs asked the Court for a permanent injunction requiring each of the five defendants, including *American Electric Power Co*, to abate its share of carbon dioxide emissions.\(^{45}\) The United States District Court of New York initially dismissed the lawsuits, suggesting that greenhouse gas emission reduction is a political issue and that such a claim should therefore be resolved by the legislature. The Court of Appeals for the Second Circuit, however, reversed the District Court's dismissal of the lawsuits and held that the dispute is not restricted to resolution in the political arena; the Court considered the claim valid under the federal common law of nuisance.

\(^{44}\) *Connecticut v. American Electric Power Co*, 564 U.S. (2011). This is litigation against the fossil fuel electricity suppliers of the United States, which emit 650 million tons annually, accounting for 25 per cent of domestic emissions, 10 per cent of domestic anthropogenic emissions and 2.5 per cent of global anthropogenic emissions. For the full decision, see <http://www.supremecourt.gov/opinions/10pdf/10-174.pdf>.

\(^{45}\) It should be noted that an injunction is a traditional writ of the Common Law courts (which may be difficult to apply in the Continental or civil law systems, where legislation is considered more appropriate than writ petitions).
The defendants demanded a rehearing of the case, but the Second Circuit denied their request, on the grounds that the US Environmental Protection Agency had "failed to publicize any regulations on emissions" and could not "speculate whether the hypothetical regulation of emission would pertain to the issues" raised in the case. The Supreme Court granted the writ of *certiorari*.\(^{46}\) The question presented to the Court was whether federal common law public nuisance claims could be made against carbon dioxide emitters. The Supreme Court held that the plaintiffs in *Connecticut v American Electric Power Co* could not pursue their claims under the federal common law of nuisance. The reason given for the decision is that, in the Clean Air Act, the United States delegates the federal role in managing greenhouse gas emissions to the Environmental Protection Agency (EPA). The Court held that the EPA is better equipped than federal judges to decide how strictly to regulate emissions. This was seen as a setback for those who had hoped to use federal common law to litigate against carbon dioxide emitters, but it says nothing about the "ability of states to use their own public nuisance laws to curb environmental harms."\(^{47}\) The outcome of the case suggests that attempts to limit emissions have to be made through the legislative and executive branches. Earlier, in *Commonwealth of Massachusetts et al. v EPA*, the United States Supreme Court had also held that carbon dioxide is an air pollutant under section 202(a)(1) of the Clean Air Act, which provides that the EPA "shall by regulation prescribe…standards applicable to the emission of any air pollutant from…new motor vehicles…which in his judgment cause, or contribute to, air pollution which may reasonably be anticipated to endanger public health or welfare."\(^{48}\) The plaintiffs in *Connecticut v American Electric Power Co* had demanded an injunction, not compensation, for damage that may have resulted from the defendants' share of the carbon emissions that lead to global warming and climate change. It is obvious that the burden of proof would have been higher had the plaintiffs asked for compensation. The outcomes of the United States case examples suggest that legislation, not litigation, is the basis for climate change mitigation. So, what is the internal tension in the United States? A legislative bill on climate change was abandoned in the United States Senate in 2010, in the face of opposition. United States President Barack Obama, in his State of the Union speech (2013), made a pledge that "if Congress won't act soon to protect future generations, I will. I will direct my Cabinet to come up with executive actions we can take, now and in the future, to reduce pollution, prepare our communities for the consequences of climate change, and speed the transition to more sustainable sources of energy."\textsuperscript{49}

\(^{46}\) A writ of *certiorari* is an order by a higher court directing a lower court, tribunal, or public authority to send the record in a given case for review.

\(^{47}\) David R. Brody, "American Electric Power Co. v. Connecticut", *Harvard Environmental Law Review*, Vol. 36, 298–304.

\(^{48}\) The judgment of 2nd April 2007, on certiorari to the Court of Appeals for the District of Columbia Circuit, is available at <http://www.climatelaw.org/media/Mass.v.EPA.USSC>. A similar view was arrived at in *Australian Conservation Foundation v Minister for Planning*, which held that "greenhouse gas (GHG) emissions from burning coal must be taken into account in a planning decision to approve a coal mine extension, i.e.
the use to which the coal would be put must be taken into account in determining the environmental effects." Judgment of Justice Stuart Morris, available at <http://www.austlii.edu.au/au/cases/vic/VCAT/2004/2029.html>. It should be noted that the Renewable Energy (Electricity) Act of Australia (2000) has set a mandatory national renewable energy target.

It remains to be seen whether the President's words will be matched by future actions that lead to combating climate change and ensuring sustainable energy access and supply. But there is certainly internal stress concerning climate change mitigation liability (or obligation), especially between the climate policy of the United States, the courts' litigation and the national legislation. The current internal situation in the United States would not be sustainable in the longer term, according to a new "national strategic narrative" published under the pseudonym "Mr Y".\textsuperscript{50} Mr Y suggests that there is a need for a new narrative to frame the national policy decisions of the United States, including policy on environmental protection and climate change.

6. Legal opinions

Some relevant legal issues relating to climate harm and compensation have been thoroughly examined by Professor Daniel Farber: who caused the harm? Are emitters of greenhouse gases under an obligation to compensate?\textsuperscript{51} Farber argues that from the start "some of this [emission] activity was innocent, because the reality of climate change was not known at the time."\textsuperscript{52} An innocent act cannot be subject to culpability, without which liability for the compensation of damage cannot be ascertained. This is one important criterion for determining either a violation of international law or a violation of a duty of care (due diligence) towards the harmed state. There is no disagreement among jurists about these criteria.\textsuperscript{53} Farber thus suggests that, "for those concerned about culpability, apportioning responsibility on the basis of emissions after some cut-off date would be an appropriate response."\textsuperscript{54} What is the cut-off date, according to Farber? He considers that "one possible cut-off date is 1992, when the United States and other nations entered a framework agreement to reduce greenhouse gasses."\textsuperscript{55} The reason given for this cut-off date is that "at that point, the international community had clearly identified the harm; any source of emissions after that date was at least on notice of the damaging nature of the conduct."\textsuperscript{56}

\textsuperscript{49} President Barack Obama's speech, broadcast directly in the world's visual media, February 13, 2013.

\textsuperscript{50} Mr. Y, "A National Strategic Narrative"; "Captain Porter's and Colonel Mykleby's 'Y article' could not come at a more propitious time", writes Anne-Marie Slaughter, Director of Policy Planning, U.S. Department of State, 2009–2011, in the preface to the article; see <http://www.foreignpolicy.com/articles/2011/04/13/the_y_article#sthash.BM9xxSYk.dpbs>.

\textsuperscript{51} Daniel A. Farber, \textit{Basic Compensation for Victims of Climate Change}, Environmental Law Institute®, Washington, DC, reprinted with permission from ELR®, http://www.eli.org, 1-800-433-5120.
Prof. Daniel Farber argues that compensation for harm caused by climate change is a moral imperative, and he surveys various mechanisms that have been used in other circumstances to compensate large numbers of victims for environmental and other harms. In response, Professor Feinberg cautions that significant hurdles remain before any realistic compensation system could be considered, but suggests that the most effective approach may be evolving parallel tracks of civil litigation and government action to address climate harm. Peter Lehner and William Dornbos argue that using common-law doctrines to find greenhouse gas (GHG) emitters liable for harm is a more pressing concern than creating a compensation system. Finally, Raymond Ludwiszewski and Charles Haake claim that the basic elements of liability are not readily discernible with climate change and that it would be more productive to invest in curtailing GHG emissions.

\textsuperscript{52} Ibid.

\textsuperscript{53} For example, see Richard S.J. Tol and Roda Verheyen, "State responsibility and compensation for climate change damages—a legal and economic assessment", \textit{Energy Policy} 32, pp. 1109–1130, (2004).

\textsuperscript{54} Ibid.

\textsuperscript{55} Ibid.

\textsuperscript{56} Ibid.

Farber's critics, specifically Raymond B. Ludwiszewski and Charles H. Haake, argue that, "assuming such a cut-off date could be established, how would a court differentiate from a liability damages standpoint what is caused by post-1992 emissions—which would be actionable—and pre-1992 emissions—which would not be?"\(^{57}\) Farber acknowledges that "it is obviously impossible to link any specific greenhouse gas emissions with any specific injury from a particular company or governmental entity due to the cumulative nature of the (GHG) effect."\(^{58}\) Ludwiszewski and Haake argue that "liability would require a finding that a putative defendant engaged in conduct that was unreasonable under the circumstances."\(^{59}\) A vital question against Farber's arguments, raised by Ludwiszewski and Haake, is "what constitutes unreasonable conduct when it comes to emissions?"\(^{60}\) The two critics note that Farber suggests "it may have been unreasonable for manufacturers to not use environmentally friendly technologies or to reduce production to account for the impacts of global warming."\(^{61}\) The two critics further note that "Farber does not identify what viable alternative sources of energy could have been relied upon, nor does he provide any formula for determining what level of output is reasonable and what level is unreasonable; output, after all, is dictated by the law of supply and demand."\(^{62}\) However, neither Farber nor his critics take into account that 80 per cent of the world's energy needs can be met through alternatives to fossil fuels.\(^{63}\) Thus, it would be unreasonable for states not to agree on the use of alternatives to fossil fuel energy, especially to prevent further loss and damage from climate change. Even if states fail to negotiate an international agreement on sustainable energy, they will, sooner or later, have to accommodate the competing interests, primarily as a result of the nexuses between litigation arising from loss and damage caused by climate change and legislation on sustainable energy development as a part of climate change mitigation.
The WTO will have to balance environmental protection interests against economic interests.\(^{64}\) There are, however, certain limitations to climate change mitigation through litigation. The UNFCCC provides for dispute settlement, but it precludes avenues of legal redress from the Convention process.\(^{65}\) In contrast to transboundary air or water pollution cases, where it may be relatively easy to identify the victims and the sources of harm, it is much more complicated to demonstrate causality in the present context, where there can even be a dual identity of injured (victims) and emitters (wrongdoers).

\(^{57}\) Raymond B. Ludwiszewski and Charles H. Haake, "Response: Comment on Basic Compensation for Victims of Climate Change", Environmental Law Institute®, Washington, DC, reprinted with permission from ELR®, http://www.elr.org, 1-800-433-5120.

\(^{58}\) Ibid.

\(^{59}\) Ibid.

\(^{60}\) Ibid.

\(^{61}\) Ibid.

\(^{62}\) Ibid.

\(^{63}\) IPCC Special Report on Renewable Energy Sources and Climate Change Mitigation (IPCC 2011), prepared by Working Group III of the Intergovernmental Panel on Climate Change [O. Edenhofer, R. Pichs-Madruga, Y. Sokona, K. Seyboth, P. Matschoss, S. Kadner, T. Zwickel, P. Eickemeier, G. Hansen, S. Schlömer, C. von Stechow (eds)], Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, p. 1075.

\(^{64}\) There seem to be sufficient legal grounds to argue convincingly for the prioritisation of alternative energy development under the "local content requirement." But, at the same time, importing goods and services essential for sustainable development is equally valuable under the WTO rules of non-discrimination and the most-favoured-nation clause. Until that case is decided, it will have to be sufficient to rely on legislation and/or treaties to balance economic and environmental interests.

\(^{65}\) Article 14 of the UNFCCC.

New Zealand's High Court rejection of an appeal in a Kiribati climate refugee case (2013) is an indication of the difficulties of reconciling the country's generally favourable policy of emission reduction with the notion of a "climate refugee".\(^{66}\) It is also difficult, if not impossible, to prove a case of climate harm by linking any specific anthropogenic emissions to any specific injury from a particular company or state, as distinct from the cumulative effect of emissions. It is, however, argued by some that "harmed states are not bound to tolerate damage and liability that can be established according to the case facts at hand."\(^{67}\) Some, therefore, consider climate harm mitigation as a part of "prevention duties and state responsibility",\(^{68}\) and still others consider climate change a "wrongful harm to future generations."\(^{69}\) Yet it remains difficult to define greenhouse gas emissions as a wrongful act. In this situation, should not the international community of states acknowledge the principle of unjust enrichment\(^{70}\) in dealing with climate harm and compensation?

6.1 Unjust enrichment

Harm and compensation are also part of the Common Law principles of equity and tort. Relevant to these concepts is unjust enrichment, which suggests that those benefiting disproportionately at the expense of others should compensate the victims, even if the use of the resources involved is not illegal.
It follows from this principle that any person, natural or corporate, who unjustly obtains wealth or property owes compensation to the injured party, even if the property was not obtained illegally. This suggests that even if greenhouse gas emissions may not be an illegal act as such, it is illegal to harm the common interest of humanity while taking advantage of the situation in order to fulfil the individual interest of a state or person. Thus, the principle of unjust enrichment scrutinises one party's right to use natural or human resources to optimise the fulfilment of its needs to the detriment of another party's pursuit of the same. In addition, the principle can be a basis for restitution, compensation and the introduction of global taxation, which could hold excessive greenhouse gas emitters directly responsible for global climate harm. Keeping in view the difficulties of establishing a fault-based compensation system, and in light of the value of the principle of unjust enrichment, a no-fault insurance scheme could be a suitable mechanism for dealing with climate change loss and damage compensation.

\(^{66}\) Petra Šurková, Anna Gromilova, Barbara Kiss, Megi Plaku, *Climate refugees in the 21st century*, December 2012, <http://acuns.org/wp-content/uploads/2013/01/Climate-Refugees-1.pdf>. The asylum claim "based on vulnerability to climate change highlights the fact that international refugee law cannot respond to climate-induced displacement", <http://www.ejfoundation.org/node/997>.

\(^{67}\) Christina Voigt, "State Responsibility for Climate Change Damages", *Nordic Journal of International Law*, Vol. 77, No. 1–2, (2008).

\(^{68}\) Roda Verheyen, *Climate Change Damage and International Law: Prevention Duties and State Responsibility*, Martinus Nijhoff Publishers, (2005).

\(^{69}\) Marc D. Davidson, "Wrongful Harm to Future Generations: The Case of Climate Change", *Environmental Values*, Volume 17, Number 4, November 2008, pp. 471–488(18).

\(^{70}\) John Bede Donnelly (in a paper for the degree of Doctor of Juridical Science, Deakin University, February 2004) suggests that a like concept has had a place in the common law since its inception, under several characterisations. It bears the mark of ancient Roman jurisprudence, but relates to independent principles. The jurisprudence was formed by the special characteristics of its history. It is distinct from modern Roman/Dutch law, but the doctrinal overtones of its foundational case law reflect the basis of reasoning which in Continental law founded the adopted ancient codes. It is this foundation of reasoning, and the firm rejection of a normative general principle, that makes Anglo/Australian law different in character and jurisprudence from unjust enrichment in the USA and Canada. Stifled for centuries by quasi-contract misconceptions, the law of unjust enrichment entered the modern law in the 20th century through the seminal judgments of Lord Wright in *Fibrosa Spolka Akcyjna v Fairbairn Lawson Combe Barbour Ltd* [1943 AC 32] and related cases, and through the strong judicial and juristic following they inspired. Donnelly seems to suggest that any civilised system of law is bound to provide remedies for unjust enrichment, as it "became an imperative across the common law world: it has long held a place in the Roman Dutch jurisdictions of South Africa and Continental Europe."
Before reaching any conclusion, it is important to address one crucial question: whether the existing legal concepts, rules and mechanisms are equipped to meet the challenges and complexity posed by climate change, including adequate compensation for climate change loss and damage. The responsibility of states to reduce greenhouse gas emissions is based on the UNFCCC, including the Kyoto Protocol. It is important to note that there is a clear legal obligation on states to provide climate finance under Article 4 of the UNFCCC.\footnote{71} There are political obligations of states as well, especially those recognised by the 2009 Copenhagen Accord (COP15)\footnote{72} in the form of self-imposed obligations. It should also be noted that there is historical evidence of such self-imposed political obligations evolving into de facto legal obligations. For example, the Helsinki Accords and Final Act on Security and Cooperation in Europe\footnote{73} have, over decades, acquired legal significance in areas including politico-military security, economic and environmental issues, as well as the protection of human rights.\footnote{74} Therefore, the importance of the political commitments under the Copenhagen Accord should not be underestimated,\footnote{75} particularly concerning the Green Climate Fund (COP15). In this context, the Fund could be developed in the future into a global no-fault insurance scheme for compensation. As mentioned earlier, the future COP negotiations might use the ozone treaty regime as a model, focusing on control and reduction of sources of damage, instead of concentrating on consequential damage and compensation. \footnote{71}{The relevant parts of Article 4 of the UNFCCC, paragraphs 4 and 8, read as follows: 4) “The developed country Parties and other developed Parties included in Annex II shall also assist the developing country Parties that are particularly vulnerable to the adverse effects of climate change in meeting costs of adaptation to those adverse effects”; 8) “In the implementation of the commitments in this Article, the Parties shall give full consideration to what actions are necessary under the Convention, including actions related to funding, insurance and the transfer of technology, to meet the specific needs and concerns of developing country Parties arising from the adverse effects of climate change and/or the impact of the implementation of response measures”.} \footnote{72}{FCCC/CP/2009/L.7, 18 December 2009.} \footnote{73}{The Final Act of the Conference on Security and Co-operation in Europe, known as the Helsinki Final Act, Helsinki Accords or Helsinki Declaration, was the final act of the Conference on Security and Co-operation in Europe held in Helsinki, Finland, during July and August of 1975; see also the book review by Leo Gross and Anthony D’Amato, \textit{78 American Journal of International Law} 960 (1984) (Code BR1-84); see also Igor I. Kavass, Jacqueline Paquin Granier and Mary Frances Dominick, ed., \textit{Human Rights, European Politics, and the Helsinki Accord: the Documentary Evolution of the Conference on Security and Co-operation in Europe 1973–1975}. The Helsinki Accord type documents “engage States politically and morally, in the sense that they are not free to act as if they did not exist”, see Gidon Gottlieb, “Relationism: Legal Theory for a Relational Society”, \textit{50 University of Chicago Law Review}, 1983, pp.
567–582.} \footnote{74}{The Helsinki process includes respect for human rights and fundamental freedoms, including the freedom of thought, conscience, religion or belief. At the Conference on Security and Cooperation in Europe, held in Helsinki, Finland, in 1975, thirty-five states, including the United States, Canada, and all European states except Albania and Andorra, signed the declaration known as the Helsinki Final Act, Helsinki Accords or Helsinki Declaration. This was an attempt to improve relations between the Communist bloc and the West.} \footnote{75}{Katak Malla, \textit{The International Negotiations for a New Global Climate Treaty: Legal Analysis of COP 15-16 and Basis for Further Action}, Climore and Stockholm Miljörättscentrum publication, (2011).} COP19 has, thus, “decided to establish an international mechanism to provide most vulnerable populations with better protection against loss and damage caused by extreme weather events and slow onset events such as rising sea levels.”\textsuperscript{76} There are ongoing efforts to distinguish climate finance from Official Development Aid (ODA).\textsuperscript{77} The ongoing discussions on loss and damage are seeking to introduce a concept based on a different logic than ODA. The alternative concept is supposed to be in line with the notion of Article 4 of the UNFCCC, i.e. compensation owed to vulnerable countries due to damage caused by climate change.\textsuperscript{78} \section*{7. Conclusion} An examination of the case law developments in Canada, India and the United States shows that national court litigation has been driven, in part, by the aim of guaranteeing individuals’ right to file climate-related cases against governments and/or individual corporations. These litigations have certainly created considerable pressure on national governments and corporations to mitigate climate harm. Given the internal situation of the United States described earlier, in terms of litigation, legislation and President Obama’s policy statements, it can be concluded that the United States will sooner or later have to adopt national climate change legislation, or actively take part in the COP negotiations, or even both. At the international level, a stalemate persists in the COP negotiations concerning a new climate treaty.\textsuperscript{79} Appropriate national legislation by all industrialised countries, as well as by developing countries, whose share of global emissions is on the rise, would be an important step towards climate change mitigation and adaptation. A fault-based approach to climate change loss and damage compensation would be difficult, if not impossible, to include in a new treaty. An act of greenhouse gas emission, as well as liability to pay compensation for climate harm, could have been part of the international liability for injurious consequences arising out of acts not prohibited by international law, but the ILC’s work encountered difficulties in developing draft articles. The ILC, therefore, shifted its approach towards the responsibility of states for “internationally wrongful acts.”\textsuperscript{80} Serious obstacles remain in recognising greenhouse gas emissions as a wrongful act. Similar difficulties exist concerning recognition of the legal status of climate “victim” or “refugee” in different national laws and international law.\textsuperscript{81}
\textsuperscript{76} Detailed work on the so-called “Warsaw international mechanism for loss and damage” remains to be done; <http://unfccc.int/files/press/news_room/press_releases_and_advisories/application/pdf/131123_pr_closing_cop19.pdf>. \textsuperscript{77} Felix Fallasch and Laetitia De Marez, “New and Additional? A discussion paper on fast-start finance commitments of the Copenhagen Accord”, \textit{Climate Analytics}, 1 December (2010). \textsuperscript{78} According to Article 4, paragraph 8, of the UNFCCC, the following countries are listed as vulnerable: a) small island countries; b) countries with low-lying coastal areas; c) countries with arid and semi-arid areas, forested areas and areas liable to forest decay; d) countries with areas prone to natural disasters; e) countries with areas liable to drought and desertification; f) countries with areas of high urban atmospheric pollution; g) countries with areas with fragile ecosystems, including mountainous ecosystems; h) countries whose economies are highly dependent on income generated from the production, processing and export, and/or on consumption of fossil fuels and associated energy-intensive products; and i) land-locked and transit countries. \textsuperscript{79} Especially between the United States and the BASIC group – Brazil, South Africa, India and China – on the one hand, and between the EU and the United States, on the other; Katak Malla, “The EU and Strategies for New Climate Treaty Negotiations”, \textit{European Policy Analysis}, November 2011, Issue 2011:12epa. \textsuperscript{80} The United States had insisted that the ILC’s Draft Articles on Wrongful Acts should be crafted as non-binding guidelines. Given the situation, a no-fault-based compensation scheme for climate change loss and damage, owed to vulnerable countries, seems to be a workable option for the COP negotiations to pursue in establishing a new global climate treaty regime. \textsuperscript{81} “Climate change refugee bid denied by New Zealand court, High court in Auckland rules against Kiribati man’s claim for asylum over rising sea levels caused by global warming,” http://www.theguardian.com/environment/2013/nov/26/climate-change-refugee-new-zealand-court.
Siesta exclusive contract

Siesta, established in 1987, serves an international public with the most varied needs through products conceived in the name of functionality and design. Continuous attention to the needs and tendencies of the market has led to constant and balanced development over time, securing a leading position in terms of both production capacity and commercial reach. This has been possible thanks to constant research into production and technology as well as into aesthetics and product promotion. The freshness of the design of Siesta tables and chairs, together with competitive prices, satisfies different levels of needs. These characteristics give Siesta access to different market sectors, for example contract and domestic. The collection includes chairs, bar stools and tables, all designed to be multifunctional and multi-purpose, user friendly and with an indisputable style. With their colour, sense of irony, play on senses and unique forms, Siesta products are immediately recognizable throughout the world, offering long-lasting, practical enjoyment. Today Siesta exports 75% of its production to 80 countries around the globe through its comprehensive network of agents and dealers. Siesta offers an extremely wide range of indoor and outdoor products, with innumerable colour variations, with plastic materials like polypropylene and fiberglass matched to wood or metal, or the latest trends like transparent polycarbonate or glossy PA6 nylon, with strong technical characteristics that give the product inimitable qualities such as softness, opaqueness, flexibility and resistance to atmospheric agents. A concentrate of quality that makes the Siesta product so unique and special.

CHECKLIST OF OBJECTS

| BLOOM 4 | CRYSTAL 6 | ARTHUR 8 | DEJAVU 10 | ELIZABETH 12 | ELIZABETH-C 14 |
| BABY ELIZABETH 16 | OPERA 18 | CHIAVARI 20 | BO 22 | BEE 24 | ALLEGRA-PP 26 |
| ALLEGRA 28 | CARMEN 30 | Miss BIBI 32 | MR BOBO 34 | MOON 36 | FLASH 38 |
| BOX 40 | BOX SOFA 42 | ARTEMIS XL 44 | ARTEMIS 46 | AIR XL 48 | DIVA 50 |
| PLUS 52 | MILA 54 | PIA 56 | SUNSET 58 | SNOW 60 | ARES 62 |
| AIR 64 | MIRANDA 66 | SOHO 68 | JOSEPHINE 70 | NAPOLEON 72 | TIFFANY 74 |
| TIFFANY'S 76 | MAYA 78 | LUCCA 80 | LUCCA-T 82 | MIO-PP 84 | MIO 86 |
| BELLA 88 | GALA 90 | ROMEO 92 | JULIETTE 94 | DOLCE 96 | VITA 98 |
| AIR BAR 100 | MAYA BAR 104 | ARES BAR 108 | OPERA BAR 110 | CHIAVARI BAR 114 | ARIA 118 |
| FOX 120 | GIO 124 | PACIFIC 126 | OCEAN SIDE TABLE 128 | OCEAN TABLE 130 | QUEEN 132 |
| BOX TABLE 134 | AIR TABLE 80 136 | AIR TABLE 140 138 | AIR TABLE 180 140 | AIR LEGS 142 | OCTOPUS 60 144 |
| OCTOPUS BAR 146 | ARES 80 148 | ARES 140 150 | MAYA 80 152 | MAYA 120 152 | MAYA 140 152 |
| ICE 158 | ICE 158 | ICE Leg&Base 162 | MANGO 164 | MANGO ALU 166 | FORZA 168 |
| SORTIE 170 | POPPY / ELFO 172 | DODO 176 | NOVA 178 | SMART 180 | ARTEMIS XL Cushions 182 |

Bloom chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 048 Bloom Colors White Black Dark Grey Silver Grey Taupe Red Yellow Stacking armchair for indoor and outdoor use in clear polycarbonate moulded with gas technology of the second generation. Scratch resistant, UV-resistant.
052 Crystal Colors Glossy White Black Transparent Mist Transparent Clear Transparent Smoke Grey Transparent Orange Transparent Amber Transparent Stacking armchair for indoor and outdoor use in shiny technopolymer PA6 nylon or clear polycarbonate. Scratch resistant, self-extinguishing classification V2, UV-resistant. 053 Arthur Colors Glossy White Glossy Red Glossy Black Clear Transparent Amber Transparent Black Transparent Stacking armchair for indoor and outdoor use in shiny technopolymer PA6 nylon or clear polycarbonate moulded with gas technology of the second generation. Scratch resistant, self-extinguishing classification V2, UV-resistant. 032 Dejavu Colors Glossy White Glossy Red Smoke Grey Transparent Amber Transparent Black Transparent Clear Transparent Elizabeth Stacking chair for indoor and outdoor use in shiny technopolymer PA6 nylon or clear polycarbonate. Scratch resistant, self-extinguishing classification V2, UV-resistant. 034 Elizabeth Colors Glossy White Glossy Red Glossy Black Clear Transparent Amber Transparent Pink Transparent Black Transparent Elizabeth-C Stacking chair for indoor use with removable cushion on the seat, in shiny technopolymer PA6 nylon or clear polycarbonate. Cushion Colors Dark Brown Light Brown Light Ivory Black White Stacking children's chair Baby Elizabeth for indoor and outdoor use in clear polycarbonate. Scratch resistant, UV-resistant. 05 | Baby Elizabeth Colors Glossy White Pink Transparent Pink Transparent Blue Transparent Violet Transparent Clear Transparent Opera Stacking wedding chair for indoor and outdoor use in clear or shiny polycarbonate. Scratch resistant, UV-resistant. 061 Opera Colors Glossy White Amber Transparent Clear Transparent Chiavari Stacking wedding chair for indoor and outdoor use in clear or shiny polycarbonate. Scratch resistant, UV-resistant. 071 Chiavari Colors Glossy White Amber Transparent Clear Transparent Stacking chair for indoor and outdoor use in clear or shiny polycarbonate. Scratch resistant, UV-resistant. 005 Bo Colors: - Glossy White - Glossy Black - Clear Transparent Bee Stacking chair for indoor and outdoor use in clear or shiny polycarbonate. Scratch resistant, UV-resistant. Design by LARRY GUNNEN 021 Bee Colors Glossy White Glossy Black Red Transparent Amber Transparent Black Transparent Clear Transparent Allegra-PP Recyclable polycarbonate main body. Legs frame in polypropylene. For indoor and outdoor use. Can be disassembled. 096 Allegra-PP Colors White / Glossy White Black / Black Transparent Brown / Glossy White Recyclable polycarbonate main body. Legs frame in tubular chrome steel. Stackable. For indoor use. 057 Allegra Colors Glossy White Past Transparent Amber Transparent Black Transparent Clear Transparent Carmen Stacking armchair with the latest generation of air moulding thermoplastic injection. Seat in polypropylene reinforced with glass fiber, backrest in transparent polycarbonate. For indoor and outdoor use. 059 Carmen Colors White / Glossy White White / Red Transparent White / Violet Transparent White / Black Transparent White / Clear Transparent White / Amber Transparent Black / Clear Transparent Black / Black Transparent Black / Amber Transparent Miss Bibi Stacking chair with the latest generation of air moulding thermoplastic injection. Seat in polypropylene reinforced with glass fiber; backrest in transparent polycarbonate. For indoor and outdoor use.
055 Miss Bibi Colors - Dark Grey / Smoke Grey Transparent - Dark Grey / Clear Transparent - Black / Amber Transparent - Black / Black Transparent - Black / Clear Transparent - White / Amber Transparent - White / Clear Transparent - White / Black Transparent - White / Violet Transparent - White / Red Transparent Mr Bobo Stacking chair with the latest generation of air moulding thermoplastic injection. Seat in polypropylene reinforced with glass fiber, backrest in transparent polycarbonate. For indoor and outdoor use. 056 Mr Bobo Colors - Dark Grey / Smoke Grey Transparent - Dark Grey / Clear Transparent - Black / Black Transparent - Black / Clear Transparent - White / Amber Transparent - White / Black Transparent - White / Clear Transparent Stacking chair with the latest generation of air moulding thermoplastic injection. Seat in polypropylene reinforced with glass fiber, backrest in transparent polycarbonate. For indoor and outdoor use. 090 Moon Colors White / Glossy White Black / Black Transparent White / Red Transparent Black / Amber Transparent White / Amber Transparent White / Clear Transparent Black / Red Transparent Black / Clear Transparent Stacking chair with the latest generation of air moulding thermoplastic injection. Seat in polypropylene reinforced with glass fiber, backrest in transparent polycarbonate. For indoor and outdoor use. Colors - White / Glossy White - White / Red Transparent - White / Amber Transparent - White / Clear Transparent - Black / Clear Transparent - Black / Black Transparent - Black / Amber Transparent - Black / Red Transparent Recyclable polypropylene stacking armchair; strong and stable. Suitable for outdoor contract use. 058 Box Colors - Black - Rust - Silver Grey - Dark Grey - Tropical Green - Orange - White Box Sofa Recyclable polypropylene stackable sofa, very sturdy and stable. Suitable for outdoor contract use. 063 Box Sofa Colors - Black - Rust - Silver Grey - Dark Grey - Tropical Green - Orange - White Artemis XL Lounge armchair is stackable and produced with a single injection of polypropylene reinforced with glass fiber, for indoor and outdoor use with or without cushion and non-slip feet. The Artemis collection also includes a coffee table, in various sizes, designed to match the styling silhouette of the chair to perfection. 004 Artemis XL Colors Black Dark Grey Silver Grey Taupe White Artemis armchair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 01 | Artemis Colors - White - Black - Dark Grey - Silver Grey - Brown - Taupe - Teak Air XL armchair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 007 Air XL Colors - White - Black - Dark Grey - Red - Taupe - Tropical Green - Orange - Yellow Diva armchair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 028 Diva Colors - Black - Rust - Dark Grey - Silver Grey - Light Blue - Tropical Green - Orange - White Plus armchair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use.
093 Plus Colors White Black Red Mila armchair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 085 Mila Colors - White - Black - Dark Grey - Taupe - Red - Yellow Pia chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 086 Pia Colors - White - Black - Dark Grey - Taupe - Red - Yellow Sunset chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 088 Sunset Colors - White - Black - Dark Grey - Brown - Beige Snow chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 092 Snow Colors - White - Black - Dark Grey - Taupe - Red - Yellow Ares chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 009 Ares Colors - White - Black - Dark Grey - Silver Grey - Brown - Taupe - Teak Air chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 014 Air Colors - White - Black - Dark Grey - Red - Taupe - Tropical Green - Orange - Yellow Miranda chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. Colors - White - Black - Dark Grey - Brown - Taupe Soho chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 054 Soho Colors - White - Black - Dark Grey - Brown - Beige - Red Josephine wedding chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 050 Josephine Colors White Black Napoleon wedding chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 044 Napoleon Colors - White - Black - Silver Grey - Gold Tiffany chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. Stackable. For indoor and outdoor use. 018 Tiffany Colors - Black - Brown - Dark Grey - Silver Grey - Light Blue - Tropical Green - Orange - White Tiffany-S chair is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. Stackable. For indoor and outdoor use. 
019 Tiffany-S Colors - Brown - Dark Grey - Light Blue - White The latest generation air moulding process, as used for Maya, allows perfect control of the channel draining and the wall thickness. In designing Maya, the designers developed a formally characterized shape yet suitable for different ambiances – contemporary, classic, informal or elegant – where it easily settles in, thanks to a colour range including brights as well as neutral tones. For indoor and outdoor use. 025 Maya Colors Light Blue Black Dark Grey Silver Grey Orange Tropical Green White Stacking chair with the latest generation of air moulding thermoplastic injection. For indoor and outdoor use. 026 Lucca Colors - Light Blue - Dark Grey - Silver Grey - Orange - Tropical Green Stacking chair with the latest generation of air moulding thermoplastic injection. Back in transparent polycarbonate. For indoor and outdoor use. 029 Lucca-T Colors - Light Blue - Orange - White Mio-PP chair is produced with recyclable polypropylene seat and legs reinforced with glass fiber. For indoor and outdoor use. Can be disassembled. 094 Mio-PP Colors White / White Black / Black Brown / White Mio Stacking chair with recyclable polypropylene seat and backrest and painted steel legs. For indoor use. 046 Mio Colors - Black - Orange - Red - Beige - White Stacking armchair in anodized aluminium with seat and back in recyclable polypropylene with the latest generation of air moulding thermoplastic injection. Diameter 25 mm. For indoor and outdoor use. 040 Bella Colors Teak Orange White Stacking armchair in anodized aluminium with seat and back in recyclable polypropylene with the latest generation of air moulding thermoplastic injection. Diameter 25 mm. For indoor and outdoor use. 04 | Gala Colors Teak Orange White Stacking armchair with recyclable polypropylene seat and backrest and anodized aluminium legs. Diameter 25 mm. For indoor and outdoor use. 043 Romeo Colors Blue, Black, Light Green, Orange, Red, Beige, White Stacking chair with recyclable polypropylene seat and backrest and anodized aluminium legs. Diameter 25 mm. For indoor and outdoor use. 045 Juliette Colors Blue Black Light Green Orange Red Beige White Stacking armchair with recyclable polypropylene seat and backrest and anodized aluminium legs. Diameter 25 mm. For indoor and outdoor use. 047 Dolce Colors - Black - Light Green - Orange - Red - Beige - Dark Grey - White Vita Stacking chair with recyclable polypropylene seat and backrest and anodized aluminium legs. Diameter 25 mm. For indoor and outdoor use. 049 Vita Colors - Black - Light Green - Orange - Red - Beige - Dark Grey - White Air Bar stool h.75 cm is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 068 Air Bar 75 Colors - White - Black - Dark Grey - Red - Taupe - Tropical Green - Orange Air Bar stool h.65 cm is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 067 Air Bar 65 Colors - White - Black - Dark Grey - Red - Taupe - Tropical Green - Orange Maya stacking bar stool h.75 cm is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use.
099 Maya Bar 75 Colors - White - Black - Dark Grey - Taupe Maya stacking bar stool h.65 cm is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 100 Maya Bar 65 Colors - White - Black - Dark Grey - Taupe Ares stacking bar stool h.75 cm is produced with a single injection of polypropylene reinforced with glass fiber obtained by means of the latest generation of air moulding technology with neutral tones. For indoor and outdoor use. 101 Ares Bar 75 Colors White Black Dark Grey Brown Taupe Stacking wedding bar stool h.75 cm for indoor and outdoor use in clear or shiny polycarbonate. Scratch resistant, UV-resistant. 073 Opera Bar 75 Colors Glossy White Amber Transparent Clear Transparent Opera Bar 65 Stacking wedding bar stool h.65 cm for indoor and outdoor use in clear or shiny polycarbonate. Scratch resistant, UV-resistant. 074 Opera Bar 65 Colors - Glossy White - Amber Transparent - Clear Transparent
COLLABFUZZ: A Framework for Collaborative Fuzzing Sebastian Österlund* Vrije Universiteit Amsterdam firstname.lastname@example.org Elia Geretto* Vrije Universiteit Amsterdam email@example.com Andrea Jemmett* Vrije Universiteit Amsterdam firstname.lastname@example.org Emre Güler Ruhr-Universität Bochum email@example.com Philipp Görz Ruhr-Universität Bochum firstname.lastname@example.org Thorsten Holz Ruhr-Universität Bochum email@example.com Cristiano Giuffrida Vrije Universiteit Amsterdam firstname.lastname@example.org Herbert Bos Vrije Universiteit Amsterdam email@example.com ABSTRACT In the recent past, there has been a great deal of work on improving fuzz testing. In prior work, EnFuzz showed that by sharing progress among different fuzzers, they can perform better than the sum of their parts. In this paper, we continue this line of work and present COLLABFUZZ, a collaborative fuzzing framework allowing multiple different fuzzers to collaborate under an informed scheduling policy based on a number of central analyses. More specifically, COLLABFUZZ is a generic framework that allows a user to express different test case scheduling policies, such as the collaborative approach presented by EnFuzz. COLLABFUZZ can control which test cases are handed out to which fuzzer and allows the orchestration of different fuzzers across the network. Furthermore, it allows the centralized analysis of the test cases generated by the various fuzzers under its control, making it possible to implement scheduling policies based on the results of arbitrary program (e.g., data-flow) analyses. CCS CONCEPTS • Security and privacy → Software security engineering; • Software and its engineering → Software testing and debugging. KEYWORDS fuzzing, parallel fuzzing, collaborative fuzzing, ensemble fuzzing, automated bug finding ACM Reference Format: Sebastian Österlund, Elia Geretto, Andrea Jemmett, Emre Güler, Philipp Görz, Thorsten Holz, Cristiano Giuffrida, and Herbert Bos. 2021. COLLABFUZZ: A Framework for Collaborative Fuzzing. In 14th European Workshop on Systems Security (EuroSec ’21), April 26, 2021, Online, United Kingdom. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3447852.3458720 *Equal contribution, joint first authors. 1 INTRODUCTION In recent years, fuzzing has become an essential tool for finding bugs and vulnerabilities in software. Fuzzers, such as AFL [26] and Honggfuzz [13], have successfully been applied to generate inputs and find bugs in a large number of applications [22]. Recent work in fuzzing [1, 5, 25] has focused on improving the fraction of the target application covered by the fuzzer by implementing new input mutation techniques and branch constraint solving strategies. Since it is common to use automated bug finding tools to find new bugs in software development scenarios (on every new commit/release), in pentesting scenarios (to find evidence of vulnerabilities), or in server consolidation scenarios (where spare CPU cycles can be dedicated to fuzzing), producing results in bounded time is crucial. Consequently, we target practical use cases where the time budget available for fuzzing is limited and it may be difficult to saturate coverage within that budget. It is, thus, important to look at how existing tools can be utilized in a more efficient way. Large-scale fuzzing efforts, such as OSS-Fuzz [22], have shown that fuzzing scales well with additional computing resources in order to find security-relevant bugs in software.
Moreover, researchers further improved the speed of fuzzing by parallelizing and distributing the fuzzing workload [17, 18, 26]. Typically, in these setups, multiple instances of the same fuzzer run in parallel and their results are periodically synchronized [22]. In contrast, EnFuzz [7] demonstrated that running combinations of different fuzzers in parallel leads to a noticeable variation in performance, paving the way for further improvement. Intuitively, this makes sense as fuzzers that have different properties and advantages in some areas often come with disadvantages in others. Hence, a collaborative fuzzing run using a combination of fuzzers with different abilities can outperform multiple instances of the same fuzzer. Given a set of COTS (commercial-off-the-shelf) fuzzers and a number of cores, collaborative fuzzing has two possible ways to improve beyond simply running the fuzzers as independent tasks in a predetermined configuration. First, we can synchronize the fuzzers so that a good input found by one can benefit the others. Second, we can determine the right mix of fuzzers to run to combine their strengths—e.g., one fuzzer may be better at solving some constraints and another may be better at solving others. While the second direction has already been explored in the literature [11], to our knowledge, optimally sharing test cases represents a resource allocation problem that has not yet received the attention it deserves. Yet, this problem is important, as each fuzzer has its own strengths and weaknesses that influence the time it takes to get past specific obstacles in the program. For example, a heavyweight symbolic execution-based approach might be better at solving certain constraints, while a lightweight greybox fuzzer might be better at rapidly exploring a program under test. In this paper, we investigate whether *test case scheduling* on a fuzzer level (i.e., selectively handing out test cases to particular fuzzers) can improve the overall results of a collaborative fuzzing campaign. To this end, we introduce **CollabFuzz**, a collaborative fuzzing framework capable of orchestrating fuzzing campaigns of a diverse set of fuzzers and deciding how these fuzzers share their progress with each other. Using **CollabFuzz**, we implement a number of relatively simple test case scheduling policies and evaluate whether such policies can improve fuzzing performance. Summarizing, we make the following contributions: - We present **CollabFuzz**, a distributed collaborative fuzzing framework. - We implement and evaluate a number of test case scheduling policies on top of **CollabFuzz**. - We release **CollabFuzz** as open source software, available at https://github.com/vusec/collabfuzz. ## 2 BACKGROUND ### 2.1 Fuzzing Fuzzing is the process of automatically finding bugs by generating randomly mutated inputs and observing the behavior of the application under test. Current fuzzers are mainly *coverage-guided*, meaning that they try to generate inputs to maximize code coverage. 
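To make the coverage-guided loop concrete, the sketch below shows its core in Python. The helpers `mutate` and `run_with_coverage` are hypothetical stand-ins for a fuzzer's mutation engine and instrumented execution; they are not part of any specific fuzzer's API.

```python
import random

def coverage_guided_fuzz(seeds, mutate, run_with_coverage, iterations=100_000):
    """Minimal coverage-guided fuzzing loop (sketch).

    `run_with_coverage(data)` is assumed to return the set of
    control-flow edges exercised by the target on input `data`.
    """
    corpus = list(seeds)
    global_coverage = set()
    for data in seeds:                      # trace the initial seeds
        global_coverage |= run_with_coverage(data)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        edges = run_with_coverage(candidate)
        if edges - global_coverage:         # new edges: keep the input
            global_coverage |= edges
            corpus.append(candidate)
    return corpus, global_coverage
```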
These fuzzers are generally classified into three categories: *blackbox*, where the fuzzer has no inherent knowledge of the target program (with the advantage of being fast and easily compatible, but with less opportunity for generating high-quality test cases); *whitebox*, with a focus on heavyweight and high-quality input generation (but suffering from scalability and compatibility issues); and *greybox*, which combines the strengths of the first two, trying to be compatible while still using some lightweight analysis to produce high-quality test cases. Besides improving the fuzzing techniques themselves, the growing code size of projects like web browsers has required developers to scale performance by running fuzzers in parallel [18, 22, 26]. When automatically testing large applications like Chrome, with over 25 million lines of code [8], it becomes increasingly clear that even optimized fuzzing tools need access to multi-core and distributed systems to maximize code coverage and their likelihood of finding bugs, as shown e.g. by the ClusterFuzz project [22]. For this purpose, fuzzers like AFL ship with a parallel-mode [18, 26], where multiple AFL instances share a corpus and thus synchronize their efforts. Although this approach does indeed increase code coverage, it does not solve some of the limitations inherent to AFL. For instance, whenever AFL has difficulties solving *magic bytes* comparisons, multiple instances of AFL will still have a low probability of solving these conditions. ### 2.2 Collaborative fuzzing To counter the limitations imposed by using a single type of fuzzer, EnFuzz [7] introduces *ensemble fuzzing*. The authors demonstrate that combining a *diverse* set of fuzzers leads to greater code coverage than running multiple instances of the *same* fuzzer. The boost in performance seems to stem from the symbiosis of the different fuzzing techniques, where the combination of fuzzers is more likely to cancel out individual disadvantages. Recently, Güler et al. [11] showed how it is possible to automatically select a *good* set of diverse fuzzers to use in such a scenario. While state-of-the-art fuzzers typically focus on increasing code coverage, a recent area of research focuses on minimizing the latency of reaching specific or interesting parts of the program [21]. Within such a constrained budget, some combinations of fuzzers most likely provide a higher return on investment than others. Besides looking at *which* fuzzers to run together, there is also the question of *how* they should collaborate. Is handing out all the generated test cases to all the fuzzers always the best choice? Certain fuzzers, such as QSYM [25], are good at finding new branches, but their performance can degrade significantly if they get too many (low-quality) test cases. We thus investigate how selectively handing out test cases—or, in other words, *test case scheduling*—can improve the performance in a collaborative setting. In summary, **CollabFuzz** is a framework that allows multiple fuzzers to collaborate on a large scale, while a central scheduling component can optimize the fuzzing process by improving the exchange of information between fuzzers—in other words, improving resource allocation. ## 3 DESIGN Since fuzzing is a parallelizable task, it is reasonable to run several fuzzers collaboratively to improve code coverage and bug finding.
Without test case scheduling, a large fraction of each fuzzer’s execution is spent just getting to a point in the target program that another fuzzer may have already found. This is true not only for several instances of a fuzzer, despite the randomness involved in fuzzing, but even for different fuzzers. Indeed, different fuzzers have different strengths that influence the time it takes for them to get past specific obstacles in the program, yet they are inherently similar. With **CollabFuzz**, we want to implement a generic, flexible fuzzer orchestration framework that can be used for large-scale fuzzing campaigns as well as fuzzer evaluation. In contrast to prior fuzzer orchestration efforts, such as OSS-Fuzz [22] and FuzzBench [20], **CollabFuzz** allows multiple different fuzzers to collaborate while supporting the user in running fine-grained analysis during the fuzzing campaign. In this paper, we showcase how we can use **CollabFuzz** to implement a number of *test case scheduling* mechanisms, allowing the manager to selectively hand out test cases according to an informed scheduling policy. We identify three main criteria for **CollabFuzz**’s design: (1) **Flexibility.** We want a framework that can easily be extended by future work. As such, we design the different components to also be reusable for uses other than those presented here. (2) **Reproducibility.** In fuzzing, being able to reliably repeat experiments is paramount. CollabFuzz uses Docker to achieve a reproducible environment for all the fuzzing targets. (3) **Scalability.** We want to support large-scale fuzzing. As such, CollabFuzz allows the framework to run in a distributed setting, making it easy to scale fuzzing campaigns to large clusters. At a high level, the central scheduling manager interacts with the fuzzer drivers to control the fuzzers. The manager hands out test cases to the different fuzzer drivers that interact with the fuzzer in question, in turn allowing the fuzzer to mutate the input in an attempt to increase coverage. When a fuzzer finds a new test case, the driver sends this test case back to the manager. The subsequent action is determined by the scheduling policy and informed by various analyses. In a typical scenario, if the generated test case provides new coverage (as decided by an analysis pass), the scheduler will hand out the new test case to one or more fuzzers or cache it for scheduling at a later stage. Specifically, for a single run, CollabFuzz has to be provided with the source of the program under test, a set of seeds for that program, a combination of fuzzers to run, and a policy to coordinate them. As shown in Figure 1, the source will be compiled into several instrumented binaries (which will be used to analyze test cases) and all the binaries each fuzzer requires. At this point, the framework can be started. While running, the following three components are of interest: (1) Central manager. This component schedules test cases to the different fuzzers, depending on the scheduling policy set by the user. We discuss a number of scheduling policies we implemented in CollabFuzz in Section 5. (2) Fuzzer. We can add any off-the-shelf fuzzer to the mix. The fuzzer fetches test cases and mutates them, typically optimizing for finding new coverage. (3) Fuzzer driver. The driver interacts with the off-the-shelf fuzzer, handing it new test cases when the manager needs to schedule them and reporting new findings back to the manager.
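As a rough illustration of how a scheduling policy can be expressed against these components, the following sketch defines a minimal policy interface plus the trivial broadcast behavior. All names here are ours, chosen for illustration; they are not COLLABFUZZ's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class TestCase:
    data: bytes
    origin_fuzzer: str
    analysis: dict = field(default_factory=dict)  # filled in by analysis workers

class SchedulingPolicy(ABC):
    """Hypothetical policy interface; the manager calls these hooks."""

    @abstractmethod
    def on_new_test_case(self, tc: TestCase) -> list[tuple[TestCase, str]]:
        """Called when a driver reports a test case; returns
        (test case, fuzzer id) pairs to hand out now."""

    @abstractmethod
    def on_tick(self) -> list[tuple[TestCase, str]]:
        """Called periodically; may flush test cases cached earlier."""

class BroadcastPolicy(SchedulingPolicy):
    """Forward every new test case to every other fuzzer immediately."""

    def __init__(self, fuzzer_ids):
        self.fuzzer_ids = fuzzer_ids

    def on_new_test_case(self, tc):
        return [(tc, f) for f in self.fuzzer_ids if f != tc.origin_fuzzer]

    def on_tick(self):
        return []  # nothing is cached, so nothing to flush
```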
Upon receiving a new test case from one of the drivers, the manager first places the incoming test case in storage, after which it starts up a number of analysis jobs that are defined by the scheduler. For example, a scheduler might require coverage analysis, in which case the manager would start up a coverage-gathering job on an analysis worker for the incoming test case. The results of these jobs are stored as analysis states, which can later be queried by the scheduler when making a scheduling decision. The scheduler is invoked both periodically (with a user-configurable interval) and in an event-driven fashion when a new test case arrives at the manager, allowing for maximum flexibility when implementing scheduling policies. When the scheduler is invoked, it can reason over the stored analysis states and then make an informed decision on whether to hand out zero or more test cases to any running fuzzers. When the scheduler makes its decision, the selected test cases are sent out to the corresponding fuzzer drivers. The new test cases are then inserted into the fuzzer queue. We further discuss the design choices and implementation details in the following sections. 4 IMPLEMENTATION CollabFuzz consists of three components to facilitate the collaboration between fuzzers: the scheduling manager, to coordinate and schedule different fuzzers and inputs; the fuzzer drivers, to allow the fuzzers to interact with the scheduling manager; and the (off-the-shelf) fuzzers. We implemented the scheduling manager in Rust (about 6k LOC) and C++ (about 1k LOC), while the fuzzer drivers are written in Python (about 2k LOC). Each fuzzer runs in its own Docker [19] container and communicates with the scheduling manager over ZeroMQ [12] sockets, allowing the whole setup to run in a large-scale distributed setting. The framework is designed in an extensible way, allowing developers of new fuzzers to easily add support by simply creating a new container image. **Scheduling Manager.** The central scheduling manager listens for incoming new test cases from the fuzzers. When a test case arrives, a scheduler is invoked, which decides how to react to the event. It is also possible to let the manager invoke the scheduler periodically, allowing for a flexible way to implement different schedulers. When a scheduler is invoked, it typically selects one or more test cases to send out to a group of fuzzers. The scheduler registers a number of analyses that are executed for incoming test cases. These analyses (such as coverage tracing) are performed by analysis workers (which can be distributed over the network) and stored globally in analysis states that the scheduler can query. This design allows for flexible and possibly heavyweight analysis without a significant performance penalty. For example, some schedulers might require data-flow analysis as part of their scheduling decision-making. In such cases, the scheduler would register a data-flow analysis pass, which is run on every incoming test case that is deemed interesting by one of the fuzzers. **Fuzzer Driver.** We implemented a generic fuzzer driver (using Python), which listens for files created in a number of directories (e.g., queue, crashes, hangs for AFL). When a new inotify event is dispatched, the driver sends the new test case to the scheduling manager over a ZeroMQ socket. In a similar fashion, the driver also listens for incoming messages from the scheduling manager, placing these incoming test cases in a specified directory.
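A stripped-down driver along these lines might look as follows. This is only a sketch: it polls directories instead of using inotify, and the endpoints, directory names, and message format are invented for illustration rather than taken from the real driver's wire protocol.

```python
import os
import time
import zmq  # pyzmq

WATCH_DIRS = ["sync/fuzzer0/queue", "sync/fuzzer0/crashes"]  # e.g., AFL output dirs
INBOX = "sync/fuzzer0/incoming"  # where scheduled test cases are dropped

ctx = zmq.Context()
report = ctx.socket(zmq.PUSH)          # new findings -> manager
report.connect("tcp://manager:5555")   # illustrative endpoint
incoming = ctx.socket(zmq.PULL)        # manager -> this fuzzer
incoming.connect("tcp://manager:5556")

seen = set()
while True:
    # Report files the fuzzer has produced (the real driver reacts
    # to inotify events instead of polling).
    for d in WATCH_DIRS:
        for name in os.listdir(d):
            path = os.path.join(d, name)
            if path not in seen:
                seen.add(path)
                with open(path, "rb") as f:
                    report.send(f.read())
    # Place test cases scheduled for this fuzzer into its sync directory,
    # where the fuzzer's own import mechanism will pick them up.
    while incoming.poll(0):
        data = incoming.recv()
        fname = os.path.join(INBOX, f"id_{len(seen):06d}")
        with open(fname, "wb") as f:
            f.write(data)
        seen.add(fname)
    time.sleep(1)
```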
This generic design allows this single driver to work with a variety of fuzzers. COLLABFUZZ currently supports AFL, AFLFAST, FAIRFUZZ, QSYM, RADAMSA, HONGGFUZZ, and LIBFUZZER. We extended LIBFUZZER and HONGGFUZZ with an AFL-style synchronization mechanism to allow all fuzzers to share test cases. Each target application for a particular fuzzer is based on a Docker image. Each fuzzing campaign is configured using a YAML file, allowing for repeatable runs of the campaign. **Analyses.** As mentioned before, each scheduler bases its decisions on data produced by a series of static and dynamic analyses. These analyses are implemented as LLVM [15] passes and thus require source code. Despite the absence of technical limitations in implementing them at the binary level, we chose this approach to ease development. As an example, we implemented the following analyses: (1) **Global coverage.** Extracts the exact edge coverage of a test case and then aggregates it during a single campaign. (2) **Test case benefit.** Implements the design described in Section 5 using DataFlowSanitizer and a static interprocedural control flow graph. (3) **Instruction count.** Uses a modified version of DataFlowSanitizer to dynamically compute the length of the dynamic backward slice for each instruction, storing the minimum. New analysis passes can easily build on top of the individual building blocks in our existing passes. The data generated by the analysis passes and all scheduler events are stored in a SQLite database, which can be queried for further analysis.
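To sketch how one of these analyses feeds the stored analysis state, the global-coverage pass reduces to set arithmetic over edge traces. Here `trace_edges` is a placeholder for running the instrumented binary on a test case; the class itself is our illustration, not COLLABFUZZ's actual analysis code.

```python
class GlobalCoverage:
    """Aggregate edge coverage across a campaign (sketch)."""

    def __init__(self, trace_edges):
        # trace_edges(data) -> set of edges exercised by the target
        self.trace_edges = trace_edges
        self.covered = set()

    def analyze(self, test_case: bytes) -> dict:
        edges = self.trace_edges(test_case)
        new_edges = edges - self.covered
        self.covered |= edges
        # Returned dict is stored as the test case's analysis state;
        # a scheduler can query it, e.g., to hand out only
        # coverage-increasing inputs.
        return {"edges": edges, "new_edges": new_edges}
```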
**Patches for compatibility.** COLLABFUZZ includes fuzzing targets as Docker containers. We include containers for every target binary in LAVA-M [9], Binutils, and Google fuzzer-test-suite [10] for every fuzzer that we used (AFL, AFLFAST, FAIRFUZZ, QSYM, HONGGFUZZ, LIBFUZZER, RADAMSA, LAFINTEL), allowing for a consistent environment when performing our benchmarks. To make the target programs compatible with our DFSan-based analysis passes, we had to patch a number of issues in the build systems (e.g., of the Google fuzzer-test-suite), as well as in DFSan. We compiled all C++ programs against LLVM’s libcxx to be able to get adequate DFSan coverage. Furthermore, we had to patch a number of fuzzers to allow for external test case syncing at runtime. 5 CASE STUDY: TEST CASE SCHEDULING As a case study of applying COLLABFUZZ, we evaluate whether coarse-grained fuzzer-level scheduling (i.e., the scheduler has no insight into the fuzzer’s internal queue) of test cases can be utilized to improve the overall results of a collaborative fuzzing campaign. We consider EnFuzz as a baseline, and see whether other strategies of synchronizing (i.e., scheduling) the corpus yield a noticeable effect on the overall result of the fuzzing campaign. To showcase how COLLABFUZZ allows for diverse scheduling policies, we implemented four relatively simple test case scheduling policies. We believe that more informed and effective policies are possible, but leave this as future work. Our goal is to demonstrate that COLLABFUZZ can be used as a platform for reasoning about such scheduling and resource allocation policies. **EnFuzz scheduler.** This scheduler is a reimplementation of the approach described by Chen et al. [7]. The scheduler continuously receives new test cases from each fuzzer and, every 2 minutes, it forwards them to all the other fuzzers participating in the collaboration. **Broadcast scheduler.** This second scheduler is a simple optimization over the one employed by EnFuzz. It simply eliminates the synchronization delay by forwarding test cases as soon as they are received by the coordination server. The intuition behind this approach is that the 2-minute delay may build up over the run and thus negatively influence the increase in global coverage over time. Since COLLABFUZZ is continuously analyzing new incoming test cases and caching the analysis results, there is no need for a 2-minute delay for coverage information analysis. **Benefit scheduler.** In contrast to the previous scheduler, the benefit scheduler introduces a synchronization delay in order to focus the fuzzers on important test cases and delay less interesting ones. In detail, the benefit scheduler fills a priority queue with all the received test cases, but flushes only 1% of it every 5 seconds (these parameters ensure a continuous small stream of test cases). The prioritization happens based on benefit, a novel metric that, given a test case, we define as the count of unseen basic blocks in the program that are reachable from the frontier of the test case. In turn, we define the frontier of a test case as the set of basic blocks in the trace for that test case which match the following criteria: 1. They have at least one unseen basic block as neighbor in the interprocedural CFG. 2. The terminator instruction of the basic block from which they can be reached is tainted by the input given by the fuzzer. The intuition behind this scheduler is that focusing the fuzzing effort on test cases with a high benefit can potentially increase the global coverage more rapidly. This is particularly important for fuzzers that employ heavyweight analyses and thus have a lower execution count, like QSYM, since they can be focused on important test cases first, without wasting cycles on fuzzing, for example, error-handling code. A similar approach, which prioritizes certain test cases based on some heuristic, has previously been presented by DigFuzz [27]. **Cost-benefit scheduler.** This last scheduler partially overlaps with the previous one because it uses the same priority queue system, but it changes the way in which the priority is calculated. Apart from the benefit metric defined before, it also relies on *cost*, another novel metric that, given a basic block, we define as the minimum number of instructions in the trace for a test case that match the following criteria: 1. They manipulate their arguments in some way, e.g. arithmetic operations, and do not just move them around in memory, e.g. store instructions. 2. They belong to the dynamic backward slice of the terminator instruction of the basic block for which the cost metric is being computed. These two metrics are then aggregated in the following way: \[ \text{cost\_benefit}(t) = \frac{\sum_{b \in \text{frontier}(t)} \text{benefit}(t)/\text{cost}(b)}{|\text{frontier}(t)|} \] (1) The intuition behind this scheduler is that the benefit that can be produced when solving a specific branch constraint needs to be weighed against the difficulty of solving that constraint, which is approximated with the cost metric. For example, a very beneficial constraint that is almost impossible to solve is probably not worth focusing on. In turn, the intuition behind the cost metric is that the more instructions contribute to the computation of a single value, the more complex the constraint will be. An example of the distribution of the cost metric at the end of a run is shown in Figure 2.
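A minimal sketch of the shared priority-queue machinery, with the priority computed as in Equation (1), is shown below. The `frontier`, `benefit`, and `cost` callbacks are assumed to be backed by the analyses of Section 4; all names and the class itself are illustrative, not the actual implementation.

```python
import heapq
import math

class CostBenefitScheduler:
    """Sketch of the benefit/cost-benefit priority queue described above."""

    def __init__(self, frontier, benefit, cost):
        # frontier(t): frontier basic blocks of test case t
        # benefit(t):  count of unseen blocks reachable from t's frontier
        # cost(b):     minimum dynamic-backward-slice length for block b
        self.frontier, self.benefit, self.cost = frontier, benefit, cost
        self.queue = []       # max-heap emulated via negated priority
        self.counter = 0      # tie-breaker so heapq never compares test cases

    def priority(self, t):
        fr = self.frontier(t)
        if not fr:
            return 0.0
        # Equation (1): average of benefit(t)/cost(b) over the frontier.
        return sum(self.benefit(t) / self.cost(b) for b in fr) / len(fr)

    def push(self, t):
        self.counter += 1
        heapq.heappush(self.queue, (-self.priority(t), self.counter, t))

    def flush(self):
        """Invoked by the manager every 5 seconds: release the top 1%."""
        n = max(1, math.ceil(len(self.queue) * 0.01))
        n = min(n, len(self.queue))
        return [heapq.heappop(self.queue)[2] for _ in range(n)]
```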
### 5.1 Evaluation of schedulers We performed experiments on 32-core/64-thread AMD Threadripper 2990WX processors with 128 GB of RAM. For each experiment, we allocated 4 hardware threads (running 4 fuzzers + framework/drivers). We ran the fuzzers inside Docker containers and enabled core pinning for AFL. We ran each experiment for 10 hours with 10 repetitions. We show the median of the branch coverage count and the area-under-the-curve (AUC) of the branch coverage at the end of the campaign. We also indicate whether the results are statistically significant using the Mann-Whitney U-test, as suggested by Klees et al. [14]. We evaluate our Cost-Benefit scheduler against the EnFuzz scheduling policy on the Google fuzzer-test-suite with the well-performing diverse selection of fuzzers suggested by Cupid [11] (AFL, FairFuzz, LibFuzzer, QSYM). The results are presented in Table 1. We observed similar initial results for the other scheduling policies. Not surprisingly, our results show that, with the given selection of fuzzers, the coverage at saturation is typically similar regardless of the scheduling policy, with no statistically significant difference for the overwhelming majority of our target programs. In the end, the achieved coverage that a set of fuzzers reaches is determined by the individual mutation techniques of the fuzzers. As such, the manner in which they share their progress (as long as it is shared somehow) has little influence on how much of the target program can possibly be explored, if coverage saturation is reached within the time limit. Despite the achieved coverage being the same, we also investigate whether scheduling policies can improve *how quickly* said coverage is reached. In other words, can different scheduling policies affect the latency of reaching a certain amount of coverage? The AUC metric shows the evolution of coverage over time. The more coverage is found early on in the campaign, the higher the AUC will be. On the other hand, reaching the same end coverage at a later time will result in a lower AUC metric. To make the AUC metric somewhat more tangible, we derive some samples from it. Namely, looking at the latency of achieving a partial amount of coverage is a useful indication of the real-world speedup a user can expect when fuzzing with limited time and resources. We show the difference in latency to achieve the 90th, 95th, 97th, and 99th percentile of the total coverage. For example, as shown in Table 2, the EnFuzz scheduler reaches 95% of its end coverage roughly 13% faster than Cost-Benefit. At first glance, the differences appear significant. Namely, the EnFuzz scheduler seems to outperform the Cost-Benefit scheduler in every case. However, after some more thorough analysis of the data obtained through COLLABFUZZ, we can conclude that there is *no statistically significant difference* in the AUC between the different schedulers (as can be seen in Table 1) and thus the aforementioned latency deviations can likely be attributed to randomness. Latency-wise, there can be a large difference in when the different setups reach a particular milestone. However, in practice, this large difference in latency might simply be due to a very small skew (even due to a single branch) in the distribution of when coverage is found. For example, our broadcast scheduler performs similarly to the EnFuzz scheduler, despite it cutting out the 2-minute synchronization time window of EnFuzz.
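For reference, the two statistics used in this evaluation can be computed in a few lines. This is a sketch assuming each repetition is recorded as (time, branch-count) samples; it is not the exact evaluation script used for the tables.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def auc(times, coverage):
    """Area under the branch-coverage-over-time curve for one run."""
    return np.trapz(coverage, times)

def compare_schedulers(runs_a, runs_b, alpha=0.05):
    """Mann-Whitney U-test over per-repetition AUCs, following Klees et al. [14].

    runs_a / runs_b: lists of (times, coverage) arrays, one per repetition.
    """
    aucs_a = [auc(t, c) for t, c in runs_a]
    aucs_b = [auc(t, c) for t, c in runs_b]
    stat, p = mannwhitneyu(aucs_a, aucs_b, alternative="two-sided")
    return p, p < alpha  # significant if p falls below alpha
```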
Overall, our results show that the different schedulers we have presented do not significantly affect the overall achieved coverage of a fuzzing campaign, nor do they affect the AUC of the coverage. As such, we can conclude that a fuzzer-level coarse-grained scheduling of test cases is unlikely to yield any significant performance improvements. Nonetheless, we believe our analysis is an important first step to study scheduling policies in a collaborative fuzzing scenario and COLLABFUZZ can serve as a basis to quickly evaluate a variety of more sophisticated policies in future work. Table 1: Median branch coverage for different scheduling policies. $\uparrow$ indicates that Cost-Benefit was significantly better than EnFuzz; $\downarrow$ indicates that EnFuzz performed better; $\times$ means no statistically significant difference. | Binary | Cost-Benefit | EnFuzz | $p$-value | AUC $p$-value | |-----------------|--------------|----------|-----------|---------------| | c-ARES | 45 | 45 | — | $\times$ | | GUETZLI | 5047.5 | 4983.5 | $\times$ | $\times$ | | JSON | 1544 | 1545 | $\times$ | $\times$ | | LIBARCHIVE | 5264 | 5424.5 | $\downarrow$ | $\times$ | | LIBPNG | 1514 | 1516.5 | $\times$ | $\times$ | | LIBXML2 | 5407.5 | 5354.5 | $\uparrow$ | $\times$ | | OPENSSL-1.0.2d | 1442 | 1442 | — | $\times$ | | OPENSSL-1.1.0c | 1281 | 1281 | $\times$ | $\downarrow$ | | OPENTHREAD | 1915.5 | 1912.5 | $\times$ | $\times$ | | PROJ4 | 5773 | 5882 | $\times$ | $\times$ | | SQLITE | 1733 | 1733 | — | $\times$ | | WOFF2 | 2990.5 | 3021.5 | $\downarrow$ | $\downarrow$ | | Geomean final coverage | 1847.23 | 1855.05 | | | Table 2: Speedup in achieving partial coverage compared to the EnFuzz scheduler. | Coverage | Cost-Benefit | |----------|--------------| | 90% | -0.32% | | 95% | -12.75% | | 97% | -18.78% | | 99% | -0.76% | 6 RELATED WORK Existing work on fuzzing has investigated how prioritizing certain test cases can improve the performance within one single fuzzer. FAIRFUZZ [16] prioritizes input mutations, such that “rare” branches are given priority over commonly exercised branches. In AFLFast [3], the authors model fuzzing as a Markov chain and use it to steer fuzzing towards low-frequency paths. In contrast, COLLABFUZZ does not look at the individual fuzzers at the queue level, but rather implements scheduling policies at a global level over a variety of different fuzzers. COLLABFUZZ’s scheduling policies can be applied to any off-the-shelf fuzzer, and require little or no modification to the actual fuzzer. In [24], the authors evaluate a large number of scheduling algorithms for blackbox fuzzing. AFLGo [2], HAWKEYE [4], and ParmeSan [21] all use static analysis and instrumentation to allow for prioritization (i.e., scheduling) of test cases that lead to coverage of pre-specified locations in the target program. Hybrid fuzzing [23, 25] shows that augmenting lightweight greybox fuzzing with more heavyweight analysis (e.g., symbolic execution) can yield more bugs without significantly slowing down the whole process. This approach can be seen as a type of test case scheduling, where the hard-to-solve cases are offloaded to the heavyweight analysis. In fact, these schemes can be easily expressed as a scheduling policy in COLLABFUZZ. Recent work by Chen et al. [6] and Zhao et al. [27] shows how adaptive scheduling policies can further improve hybrid fuzzing. EnFuzz [7] introduces ensemble fuzzing, i.e., having a diverse set of fuzzers collaborate, showing how selecting an ensemble of diverse fuzzers can increase code coverage.
In this paper, we generalize the intuition provided by EnFuzz and present COLLABFUZZ, a framework that can model such collaboration between different fuzzers in a more generic fashion.

7 DISCUSSION & FUTURE WORK

In this paper, we have limited ourselves to COTS fuzzers with the explicit goal of not making significant modifications to the fuzzers themselves. We are thus limited to the interface that the selected fuzzers provide. This means that we can select which test cases to hand out to a fuzzer, but we cannot select which test case any particular fuzzer should work on at a given moment. With a more fine-grained interface, the scheduler could have more control over, for example, the kinds of branches to target. While the scheduling policies presented in this paper did not yield a statistically significant improvement, we see ample opportunity for improving test case scheduling by introducing such fine-grained control mechanisms. Furthermore, in our current implementation, the set of selected fuzzers is static over the whole run. In some cases, changing the resource allocation among the fuzzers over the course of the fuzzing campaign might yield better results. Our current COLLABFUZZ prototype has rudimentary support for this, but we have limited the scope of this study to a static set of fuzzers.

8 CONCLUSION

We have presented COLLABFUZZ, a collaborative fuzzing framework that allows multiple fuzzers to share their progress towards one end goal. By using COLLABFUZZ's orchestration of large-scale fuzzing campaigns on a cluster, we have shown that coarse-grained test case scheduling between fuzzers has a negligible effect on the result of the fuzzing campaign. Nevertheless, COLLABFUZZ enables developers to easily express different fuzzing techniques by means of scheduling policies and allows them to easily collect fuzzer statistics for further analysis. The source code for our COLLABFUZZ prototype is available at https://github.com/vusec/collabfuzz.

ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for their constructive feedback. This work was supported by Cisco Systems, Inc. through grant #1138109 and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy, EXC-2092 CaSA, 390781972. In addition, this project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 786669 (ReAct). This paper reflects only the authors' view. The funding agencies are not responsible for any use that may be made of the information it contains.

REFERENCES

[1] Cornelius Aschermann, Sergej Schumilo, Tim Blazytko, Robert Gawlik, and Thorsten Holz. 2019. REDQUEEN: Fuzzing with Input-to-State Correspondence. In *Symposium on Network and Distributed System Security (NDSS)*.
[2] Marcel Böhme, Van-Thuan Pham, Manh-Dung Nguyen, and Abhik Roychoudhury. 2017. Directed greybox fuzzing. In *ACM Conference on Computer and Communications Security (CCS)*.
[3] Marcel Böhme, Van-Thuan Pham, and Abhik Roychoudhury. 2017. Coverage-based Greybox Fuzzing as Markov Chain. *IEEE Transactions on Software Engineering*.
[4] Hongxu Chen, Yinxing Xue, Yuekang Li, Bihuan Chen, Xiaofei Xie, Xiuheng Wu, and Yang Liu. 2018. Hawkeye: Towards a desired directed grey-box fuzzer. In *ACM Conference on Computer and Communications Security (CCS)*.
[5] Peng Chen and Hao Chen. 2018. Angora: Efficient fuzzing by principled search. In *IEEE Symposium on Security and Privacy (S&P)*.
[6] Yaohui Chen, Mansour Ahmadi, Boyu Wang, Long Lu, et al. 2020. MEUZZ: Smart Seed Scheduling for Hybrid Fuzzing. In *23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020)*. 77-92.
[7] Yuanliang Chen, Yu Jiang, Fuchen Ma, Jie Liang, Mingzhe Wang, Chijin Zhou, Xun Jiao, and Zhuo Su. 2019. EnFuzz: Ensemble Fuzzing with Seed Synchronization among Diverse Fuzzers. In *USENIX Security Symposium*.
[8] Chrome [n.d.]. The Chromium (Google Chrome) Open Source Project on Open Hub. https://www.openhub.net/p/chrome/analyses/latest/languages_summary. Accessed: March 31, 2021.
[9] Brendan Dolan-Gavitt, Patrick Hulin, Engin Kirda, Tim Leek, Andrea Mambretti, Wil Robertson, Frederick Ulrich, and Ryan Whelan. 2016. LAVA: Large-scale automated vulnerability addition. In *IEEE Symposium on Security and Privacy (S&P)*.
[10] Google, Inc. 2018. fuzzer-test-suite. https://github.com/google/fuzzer-test-suite. Accessed: March 31, 2021.
[11] Emre Güler, Philipp Görz, Ella Geretto, Andrea Jennett, Sebastian Österlund, Herbert Bos, Cristiano Giuffrida, and Thorsten Holz. 2020. Cupid: Automatic Fuzzer Selection for Collaborative Fuzzing. In *Annual Computer Security Applications Conference (ACSAC)*. https://doi.org/10.1145/3427228.3427266
[12] Pieter Hintjens. 2013. *ZeroMQ: Messaging for Many Applications*. O'Reilly Media, Inc.
[13] Honggfuzz [n.d.]. Security oriented fuzzer with powerful analysis options. https://github.com/google/honggfuzz. Accessed: March 31, 2021.
[14] George Klees, Andrew Ruef, Benji Cooper, Shiyi Wei, and Michael Hicks. 2018. Evaluating Fuzz Testing. In *ACM Conference on Computer and Communications Security (CCS)*.
[15] Chris Lattner and Vikram Adve. 2004. LLVM: A compilation framework for lifelong program analysis & transformation. In *International Symposium on Code Generation and Optimization (CGO)*.
[16] Caroline Lemieux and Koushik Sen. 2018. FairFuzz: Targeting rare branches to rapidly increase greybox fuzz testing coverage. In *ACM International Conference on Automated Software Engineering (ASE)*.
[17] Yang Li, Chao Feng, and Chaojing Tang. 2018. A Large-scale Parallel Fuzzing System. In *International Conference on Advances in Image Processing*.
[18] Jie Liang, Yu Jiang, Yuanliang Chen, Mingzhe Wang, Chijin Zhou, and Jiaguang Sun. 2018. PAFL: Extend fuzzing optimizations of single mode to industrial parallel mode. In *ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE)*.
[19] Dirk Merkel. 2014. Docker: Lightweight Linux containers for consistent development and deployment. *Linux Journal* 2014, 239 (2014), 2.
[20] Jonathan Metzman, Abhishek Arya, and László Szekeres. 2020. FuzzBench: Fuzzer benchmarking as a service. *Google Security Blog* (2020).
[21] Sebastian Österlund, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida. 2020. ParmeSan: Sanitizer-guided Greybox Fuzzing. In *USENIX Security Symposium*.
[22] Kostya Serebryany. 2017. OSS-Fuzz: Google's continuous fuzzing service for open source software. In *USENIX Security Symposium*.
[23] Nick Stephens, John Grosen, Christopher Salls, Andrew Dutcher, Ruoyu Wang, Jacopo Corbetta, Yan Shoshitaishvili, Christopher Kruegel, and Giovanni Vigna. 2016. Driller: Augmenting Fuzzing Through Selective Symbolic Execution. In *Symposium on Network and Distributed System Security (NDSS)*.
[24] Maverick Woo, Sang Kil Cha, Samantha Gottlieb, and David Brumley. 2013.
Scheduling black-box mutational fuzzing. In *ACM Conference on Computer and Communications Security (CCS)*.
[25] Insu Yun, Sangho Lee, Meng Xu, Yeongjin Jang, and Taesoo Kim. 2018. QSYM: A Practical Concolic Execution Engine Tailored for Hybrid Fuzzing. In *USENIX Security Symposium*.
[26] Michał Zalewski. [n.d.]. american fuzzy lop. http://lcamtuf.coredump.cx/afl/. Accessed: March 31, 2021.
[27] Lei Zhao, Yue Duan, Heng Yin, and Jifeng Xuan. 2019. Send Hardest Problems My Way: Probabilistic Path Prioritization for Hybrid Fuzzing. In *Symposium on Network and Distributed System Security (NDSS)*.
Structural Concrete: The Bridge Between People. VOLUME 1: Plenary Session Keynote Addresses; Session 1, Design of Concrete Structures for Structural Beauty and Elegance; Session 2, Practical Design of Structural Concrete. VIACON Agency, Prague, 1999.

Size Effect in Concrete Structures: Nuisance or Necessity?

Author: Zdeněk P. Bažant, W.P. Murphy Professor of Civil Engineering and Materials Science, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, U.S.A. Phone: +1 847 491 4025, Fax: +1 847 467 1078, E-mail: firstname.lastname@example.org

Summary

Dedicated to Hans W. Reinhardt on his 60th birthday.

The lecture reviews the case for incorporating the size effect into the code provisions for concrete structures and into computational evaluation. After commenting on the long history of the problem, beginning in the Renaissance, some recent major structural catastrophes in which the size effect must have played a role are discussed. The mechanism of the energetic size effect is explained, and numerous formulae for the size effect that have recently been derived for different purposes from various theories or from experimental evidence are reviewed. The possibility of extending the strut-and-tie (truss) models in a way that captures the size effect is emphasized. Although the size effect might be seen as a nuisance, spoiling the beauty of the theory of limit states, its incorporation into the codes cannot be avoided. It is a necessity.

Introduction

The size effect is the dependence of the nominal strength of the structure, $\sigma_N$, on the characteristic size (dimension) $D$ of the structure when geometrically similar structures (with similar loading and similar failure modes) are compared. For three-dimensional similarity $\sigma_N = P/D^2$, and for two-dimensional similarity $\sigma_N = P/(bD)$, where $P$ = load capacity and $b$ = structure thickness in the third dimension. Elastic analysis, as well as plastic limit analysis according to the limit state concept, exhibits no size effect. However, when a significant range of sizes is considered, tests of brittle failures of concrete structures, as well as of structures made of other quasibrittle materials, reveal a significant size effect [1,2]. Although this is not a great concern for ordinary structures, the size effect needs to be introduced into the design procedures when sensitive or large structures, or new types of structural systems, are considered. For such purposes, updates of the existing design codes are inevitable.

Interest in the size effect is older than the mechanics of materials itself. Leonardo da Vinci [3] suggested that the strength of ropes is inversely proportional to their length. Such an excessive size effect was rejected by Galileo [4]. Later in the 17th century, Mariotte [5] advanced in qualitative terms the basic idea of the statistical size effect due to strength randomness, namely that the probability of encountering in a structure an element of a certain low strength increases with the structure size. A mathematical formulation of this idea had to await the development of the statistical weakest-link model by Fisher, Tippett and Fréchet in the 1920's, and the discovery of the proper extreme value distribution by Weibull in 1939 [6]. From Weibull's time until recently, the size effect, if observed experimentally, was generally considered to be statistical, something to be relegated to statisticians and buried in the safety factors.
In the mid 1970's, however, researchers at Lund [8,9] and at Northwestern University [10-12] revealed that a large deterministic size effect in quasibrittle structures is caused by stress redistribution, strain localization, and the consequent energy release associated with large fractures or large cracking zones developing before the maximum load. The theory of the energetic size effect has gone through rapid development and its basic aspects are today well understood [1,2]. Extensive experimental data have recently been accumulated in reduced-scale laboratory testing, and some large-scale tests of real structures have been carried out. A positive development in this decade has been that the need for introducing the size effect into design practice is no longer generally dismissed but is taken seriously. Various size effect provisions are appearing in the codes of various countries. Even though extensive experimental evidence for large structures is still lacking, it is now clear that it is imprudent to omit the size effect from the design of large or innovative structures. As for the classical statistical size effect, it was shown to be minor and usually negligible whenever large fractures or large cracking zones develop prior to the maximum load. Still other explanations of the observed size effect have recently been proposed, particularly the fractal nature of crack surfaces and microcrack distributions, or the width and spacing of cracks. However, there are good reasons to conclude that such hypothetical mechanisms do not play a significant role [1,2,7].

**A New Look at Structural Catastrophes in the Past**

Since it is either too expensive or outright impossible to test large structures to failure, lessons regarding the size effect should be drawn from structural catastrophes that happened in the past. These are, for example, the following: (1) the sinking of the Sleipner A oil platform in 1991 [13]; (2) the toppling of the Hanshin freeway viaduct in the Kobe earthquake in 1995; (3) the collapse of the Cypress Viaduct on the Nimitz Freeway in Oakland in the Loma Prieta earthquake in 1989 [14]; (4) the collapse of the St. Francis dam near Los Angeles in 1928 [15]; (5) the collapse of the Malpasset arch dam in the French Maritime Alps in 1959 [14]; and (6) the collapse of the Schoharie Creek Bridge on the New York Thruway in 1987. Although, with the exception of Schoharie, the investigating committees did not list the size effect among the causes, from today's perspective it is clear that it must have been a significant additional contributing factor.

**Causes of Size Effect**

The cause of the size effect may be explained, without any calculations, by considering the mechanism of the diagonal shear failure of reinforced concrete beams with or without stirrups (Fig. 1a). Prior to the maximum load, a long "shear" crack caused by diagonal tension develops, while significant compression stresses are transmitted along a so-called "compression strut" running parallel to the crack. For failure to occur, this compression strut must be crushed. Therefore, a crushing band (B in Fig. 1a), in which concrete undergoes axial splitting fractures, must propagate across the strut. If the area 12341 of this band covered the entire length of the strut (i.e., if the strut were failing simultaneously all along its length), there would be no size effect; neither would there be one if the width \( h \) of the band in the strut direction were proportional to the beam depth. But neither is the case.
The crushing localizes into the narrowest band possible, of a width equal to several maximum aggregate sizes, and so the width \( h \) of the band in the direction of the strut is approximately constant, independent of the beam size. Since the energy dissipation per unit area is also approximately constant, independent of the beam depth \( d \), the dissipation of energy in the crushing band is proportional to the length \( c \) of the band, which in turn is approximately proportional to the beam depth \( d \), because the failures of small and large beams are known to be similar. Thus, the energy dissipated in the band is approximately proportional to beam depth \( d \). This energy must be supplied by a release of strain energy from the beam. The crushing band causes a stress reduction in a strip of width \( c \) running all along the compression strut (area 67856 in Fig. 1a left, or 12561 in Fig. 1a right). The inclination of the strut being similar for various sizes, the area of this strip is proportional to \( cd \), and since \( c \) is also proportional to \( d \), this area is proportional to \( d^2 \). So the energy release is proportional to \( (\sigma_N^2/E)\, d^2 \). To preserve the energy balance, this must be proportional to \( d \), which characterizes the energy consumed. Evidently, this is possible only if \( \sigma_N \) is proportional to \( d^{-1/2} \). Thus, the source of the energetic size effect is simply the fact that the energy consumed increases with size \( d \) linearly, while the energy released increases quadratically. This mismatch must be offset by a decrease of \( \sigma_N \) with increasing size. With this simple argument one can immediately realize, without any calculations, that there indeed must be a size effect, and that its source is the energy release. This size effect would be avoided only if the failure occurred at the very inception of cracking (at which point both the energy released and the energy consumed are negligible). But ample experimental evidence shows that this is not the case.
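The foregoing argument can be condensed into a single energy balance. In the sketch below, \( k_1 \) and \( k_2 \) are unspecified geometry factors introduced here only for illustration, and \( G_f \) denotes the fracture energy dissipated per unit area of the crushing band (consistent with the notation of Table 1 below):

\[
k_1\,\frac{\sigma_N^2}{E}\, d^2 \;=\; k_2\, G_f\, d
\quad\Longrightarrow\quad
\sigma_N \;=\; \sqrt{\frac{k_2\, E\, G_f}{k_1}}\; d^{-1/2}.
\]

The left side is the strain energy released from the strip of area proportional to \( d^2 \); the right side is the energy dissipated by the crushing band of length proportional to \( d \). Equating the two forces the \( d^{-1/2} \) decay of the nominal strength.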
Fig. 1 a) Zones of energy release and dissipation in a "compression strut". b) Size effect laws for $P_{\max}$: (a) after large crack growth, (b) at crack initiation. c) Size effect curves based on various hypotheses or assumed empirically. d) Simple size effects implied by two subsequent ACI specifications for splices.

\[
\sigma_N = \frac{\sigma_0}{\sqrt{1 + (D/D_0)}} \quad \text{(large crack) [1,2,12], where } D_0 = c_f\,\frac{g'}{g},\ \ \sigma_0 = \sqrt{\frac{E G_f}{g' c_f}} \tag{1}
\]

\[
\sigma_N = \frac{\sigma_0 + \sigma_R\sqrt{1 + (D/D_1)}}{\sqrt{1 + (D/D_0)}} \approx \frac{\sigma_0}{\sqrt{1 + (D/D_0)}} + \sigma_R \quad \text{[1,2], if } D_1 \approx D_0 \tag{2}
\]

where \( D_1 = c_f\,\dfrac{\gamma'}{\gamma}, \quad \sigma_R = \sigma_r\sqrt{\dfrac{\gamma'}{\gamma}} \)

\[
\sigma_N = \sigma_0\left(1 + \frac{r D_b}{D}\right)^{1/r} \quad \text{(crack initiation) [17], where } D_b = \frac{\langle -g'' \rangle\, c_f}{4 g'} \tag{3}
\]

\[
\sigma_N = \sqrt{A_1 + \frac{A_2}{D}} \quad \text{(Carpinteri's "MFSL"; CEB, diagonal shear)} \tag{4}
\]

where \( A_1 = \dfrac{E G_f}{g' c_f}, \quad A_2 = \dfrac{\langle -g'' \rangle E G_f}{2 g'^2} \) (crack initiation) [16]

\[
\sigma_N = C D^{-n/m} \quad \text{(Weibull; } n/m \approx \tfrac{1}{12}\text{, but JCI: } n/m = \tfrac{1}{4} \text{ for diagonal shear)} \tag{5}
\]

\[
\sigma_N = \min\left(\frac{\sigma_0}{1 + (D/D_0)},\ \tau_c\right) \quad \text{(Swiss code for punching shear [18]; "Leonardo" asymptote)} \tag{6}
\]

\[
\sigma_N = C D^{-1/2} \quad \text{(LEFM; German and ACI codes for anchor pullout)} \tag{7}
\]

\[
\sigma_N = \text{graph in Fig. 1d} \quad \text{(ACI 318R-89 for splices, "Leonardo" size effect) [3]} \tag{8}
\]

\[
\sigma_N = \frac{\sigma_0}{\sqrt{(D/D_0)^{n/m} + (D/D_0)}} \quad \text{(large crack, statistical) [1,19]} \tag{9}
\]

\[
\sigma_N = \sigma_0 (D_b/D)^{n/m}\left[1 + r(D/D_b)^{1 - rn/m}\right]^{1/r} \quad \text{(crack initiation, statistical)} \tag{10}
\]

\[
\sigma_N = \sigma_0\left(1 + \frac{D}{D_0}\right)^{-1/2}\left(1 + \frac{D}{D_{so}}\right)^{-1/4} \quad \text{(composite beams, studs scaled) [20]} \tag{11}
\]

\[
\sigma_N = C D^{-2/5} + \sigma_0 \quad \text{(compression fracture, large scale) [1]} \tag{12}
\]

\[
\sigma_N = C D^{-3/8} + \sigma_0 \quad \text{(thermal bending of sea ice plate) [1]} \tag{13}
\]

\[
\sigma_N = \sigma_0 D^{(\delta-1)/2}\left[1 + (D/D_0)\right]^{-1/2} \quad \text{(fractal, large crack) [2]} \tag{14}
\]

\[
\sigma_N = \sigma_0 D^{(\delta-1)/2}\left[1 + r(D_b/D)\right]^{1/r} \quad \text{(fractal, crack initiation) [2]} \tag{15}
\]

**Tab. 1** Size effect formulae obtained from different theories or from experience, used for different purposes, some incorporated in codes.

Size Effect Formulae

While until almost the end of the 1980's no size effect provisions were present in the codes, a number of them have since been introduced for various types of failure in the codes of various countries. This is a healthy trend; what is striking, however, is the variety of the formulae and of the underlying theories. A point to be noted in this regard is that the energetic size effect is inevitably present if the failure does not occur at the initiation of cracking. This means that if some other theory is assumed, it can only come on top of the energetic theory, not without it and not as a replacement. Most of the existing formulae for the size effect are listed in Table 1, in which $D$ is the characteristic structure size; $E$, $G_f$, $c_f$, $m$, $\sigma_r$, $r$, $\delta$ are material constants; $\sigma_0$, $D_0$, $D_1$, $\sigma_R$, $D_b$, $A_1$, $A_2$, $n$, $C$, $\tau_c$, $D_{so}$ are structural constants depending on geometry; $g = g(\alpha)$ = energy release function of the relative crack length $\alpha$, based on linear elastic fracture mechanics (LEFM); and $g' = dg/d\alpha$. Eqs. 1, 2 and 3 are based on the energetic theory. Eq. 4 (curve 4 in Fig. 1c), called by Carpinteri et al. [16] the MFSL ("multi-fractal" scaling law), can also be justified by the energetic theory [17] (being a special case of Eq. 3), although it was originally proposed on the basis of geometrical (non-mechanical) arguments relying on fractal aspects of fracture geometry (the partly fractal nature of crack surfaces and microcrack distributions in concrete is not questioned, only its role in the mechanics of the size effect is). Unlike the fractal hypothesis, the energy release analysis provides the geometry dependence of the coefficients of the MFSL (Eq. 4). The fracture mechanics expressions in Table 1 using the LEFM functions $g$ and $g'$ are useful only if the effective fracture situation of a very large structure at maximum load is known, which is a difficult problem.
Eq. 1 (Fig. 1b left, curve 1 in Fig. 1c) is the original simple size effect law derived by Bažant [12], applicable to brittle failures occurring after large stable crack growth, which is typical of reinforced concrete. Eq. 2 is its modification, applicable when there is a significant residual stress $\sigma_r$ transmitted across the cracking band, which is important for compression fracture. Eq. 3 (curve 3 in Fig. 1c) is applicable to failures occurring at fracture initiation, which is typical of plain concrete and is exemplified by the modulus of rupture test. Eqs. 1-4 have been justified in a number of ways: by simple analysis of energy release zones, by asymptotic expansions of the J-integral, by equivalent LEFM analysis based on asymptotic matching, and by numerical simulations with nonlocal finite elements or with discrete elements (random particle method). They have been verified experimentally for many types of brittle failure, including diagonal shear of beams, punching shear, torsion, bar pullout, anchor pullout, splice failure, slender column failure, and failure of steel-concrete composite beams due to failure of connectors. Eq. 4 (curve 4 in Fig. 1c) for the MFSL is a special case of Eq. 3 for $r = 2$; however, the value $r = 1.44$ has been found optimal by comparisons with many test data on the modulus of rupture (in collaboration with Drahoslav Novák, Brno). Fracture analysis indicates Eq. 4 to be applicable to failure at crack initiation, yet this equation has been proposed for the diagonal shear failure even though this failure occurs after large fracture growth (German and European codes). Formulae of the type of Eqs. 1 and 2 are proposed as size effect factors to be incorporated into the code formulae for most types of brittle failure (diagonal shear, torsion, punching shear, anchor pullout, bar pullout, splice failure, stud failure in composite beams and failure of slender columns).

Eq. 5 (curve 5 in Fig. 1c) represents the size effect obtained from the Weibull statistical theory; $m$ = Weibull modulus of the material (widely considered to be 12, but better taken as 24 according to the latest studies), and $n$ = 2 or 3 for two- or three-dimensional similarity. This formula is in theory applicable only when the failure occurs at fracture initiation. However, based on the results of the largest-scale tests so far, conducted at Shimizu Corp. in Japan, this formula was introduced into the JCI code for the diagonal shear failure of beams, with the value $n/m = 1/4$. Another weakness of Eq. 5 is that a power law implies the structure to possess no characteristic dimension (complete self-similarity), yet a characteristic dimension must exist due to the size of the aggregate as well as the spacing and size of the reinforcing bars.

Eq. 6 (curve 6 in Fig. 1c) is an interesting formula introduced into the Swiss code SIA 162 to describe the size effect in punching shear; $\sigma_0$, $D_0$ and $\tau_c$ = constants. The formula was based strictly on test results, but its form is theoretically objectionable. For sufficiently large sizes $D$ it gives an impossibly strong size effect: it approaches the "Leonardo" size effect [3], namely $\sigma_N$ inversely proportional to $D$, which is thermodynamically impossible. Nevertheless, the Swiss code deserves praise for being the first to accept that there indeed is a strong size effect in punching shear. Eq. 7 (the asymptote of curve 1 in Fig. 1c) represents the size effect of LEFM, which was introduced into the German and ACI code recommendations, based mainly on the tests of Eligehausen.
This is a strong size effect, which is excessive for anchors that are small, but such anchors might not be of great concern. There are other provisions in various codes which imply a size effect, although this is not stated explicitly. For example, the code ACI 318R-89 implied a huge size effect for the failure of splices, shown graphically in Fig. 1d. This was in fact the "Leonardo" size effect of slope 1, which is thermodynamically impossible. Fig. 1d also shows, for comparison, the size effect in ACI 318R-95, in which the discontinuous jump is objectionable (note that these diagrams are plotted assuming the cover thickness to be proportional to the bar diameter, or else one could not speak of a size effect). The enormous sudden change between the two ACI plots in Fig. 1d, a "U-turn" made in the absence of any new revolutionary finding, is striking.

Eqs. 9 and 10 represent statistical generalizations of Eqs. 1 and 3. However, the additional Weibull-type statistical effect, given by the terms with exponents $n/m$, is very small. The fit of size effect test data with these formulae is not any better than with the deterministic ones. Eq. 11 represents the size effect in steel-concrete composite beams that fail due to shear failures of studs. The studs do not all fail simultaneously; rather, their failures propagate along the beam, which is a behavior similar to crack propagation. In this problem there are two size effects superimposed on each other: (1) the size effect in stud failure, and (2) another size effect due to the propagation of the stud failures through the steel-concrete interface in the beam as a whole. Due to the combination of these two, the compound size effect given by Eq. 11 can be stronger than in LEFM [20], provided that the studs are scaled with the beam. If they are not, then the size effect given by Eq. 1 applies. Eqs. 12 and 13 represent special size effects applicable to the compression fracture of concrete propagating laterally to the direction of the axial splitting cracks, and to the thermal bending fracture of a floating sea ice plate. Finally, Eqs. 14 and 15 represent the size effects derived on the basis of energy balance under the assumption that the crack surface or the microcrack distribution has a fractal geometry and that the fracture energy can be treated as fractal [25, 21]. They differ from Eq. 4, which was derived strictly geometrically, without any mechanical analysis. Still another size effect is obtained by a numerical calculation proposed by M. Collins for the diagonal shear failure (see Ref. [17]).

Among the formulae listed, Eqs. 1-3 have the strongest theoretical and experimental support and appear to be appropriate for the design code. A difficult problem is the theoretical prediction of the geometry dependence of the constants $D_0$ and $\sigma_0$. Formulae for this purpose need to be worked out for many cases. In the absence of a theoretical prediction model, these coefficients can be determined empirically for each type of failure. The size effects given by Eqs. 1, 2, 4, 5 and 6 are plotted in Fig. 1c. The difficulty in deciding which formula is appropriate is that the scatter of the existing data is too wide for the range of sizes tested. If the decision between the various formulae were to be made strictly empirically, it would be necessary to greatly extend the test data into larger size ranges, and to obtain a statistically significant number of test results for geometrically scaled structures.
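Determining $\sigma_0$ and $D_0$ empirically is straightforward because Eq. 1 linearizes: $1/\sigma_N^2 = a + bD$, with $\sigma_0 = a^{-1/2}$ and $D_0 = a/b$, which is the basis of the linear regression fits shown later in Fig. 2. A minimal sketch of this fit follows; the data values are made up purely for illustration.

```python
# Linear-regression fit of the size effect law, Eq. (1), via the
# linearization 1/sigma_N^2 = a + b*D  =>  sigma_0 = a**-0.5, D_0 = a/b.
# The test data below are invented for illustration only.
import numpy as np

D = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])   # characteristic sizes (mm)
sigma_N = np.array([3.10, 2.80, 2.30, 1.85, 1.45])   # nominal strengths (MPa)

Y = 1.0 / sigma_N**2              # linearizing transformation
b, a = np.polyfit(D, Y, 1)        # least-squares slope b and intercept a
sigma_0, D_0 = a**-0.5, a / b
print(f"sigma_0 = {sigma_0:.2f} MPa, D_0 = {D_0:.0f} mm")

# Predicted size effect curve from the fitted constants:
sigma_pred = sigma_0 / np.sqrt(1.0 + D / D_0)
```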
Unfortunately, most large-scale tests conducted in the past were on structures that were not geometrically scaled (e.g., the bar sizes, cover thickness and bar spacing were not geometrically similar). The effect of the changes of shape (geometry) is known only crudely and introduces additional errors. Therefore, formulae that have the strongest theoretical support ought to be preferred. The theory itself may be verified by checks other than an extension of the size range of the tests of the given particular failure. Unless such an approach is adopted, the choice between the formulae will be random, depending solely on the voting of committee members.

Energetic Modification of Truss Model (Strut-and-Tie Model)

The strut-and-tie model, in which the action of reinforced concrete at maximum load is approximated by a statically determinate truss and the load capacity is calculated from equilibrium and compatibility at some assumed limit state, has gained enormous popularity and has unquestionably had considerable success. However, complacency has settled in as to its capability. The chief problem is that the compression struts exhibit strain softening. This causes the failure to localize and propagate. Thus the failure is not simultaneous, as required by plastic limit analysis based on the limit state concept. The most important consequence of the progressive and localized nature of failure is the size effect, although this is not readily apparent from the existing comparisons with test data because of the lack of large-scale tests with proper geometrical scaling. The strut-and-tie model does nevertheless capture well at least a part of the behavior of reinforced concrete at ultimate load, and therefore the model should not be scrapped but extended. The required extension has been worked out in detail for the diagonal shear failure [7], and can be applied in a similar way to all the other situations for which the strut-and-tie model has been used. The equilibrium analysis of the strut-and-tie model can be retained, and so can the simplified concept of the compression strut. The load capacity, however, needs to be calculated from the energy balance during the propagation of a compression (or shear-compression) failure band (cracking zone, crushing zone) across the compression strut. From the forces determined by the equilibrium analysis, the release of strain energy from the equivalent truss needs to be calculated and equated to the rate of energy consumed and dissipated by the failure band propagating across the strut. Such an analysis inevitably provides a size effect, which generally has the form of Eq. 1 or 2 in Table 1, or of Eq. 16 in Table 2, in which the structure size $D$ is now represented by the depth $d$ to the reinforcement; $v_p$, $d_0$ and $v_r$ are structural constants taking into account the beam geometry; see Eqs. 17 and 18 of Table 2, in which $c$ = width of the cracking band (crushing band) at maximum load, $c/d$ = empirical constant, $a$ = shear span, $E_c$ = Young's modulus, $G_f$ = fracture energy of the material, $s_c$ = spacing of the axial splitting cracks; $h_0$, $w_0$ = constants; $\theta$ = inclination of the compression strut in a beam with stirrups; and $v_r$ = residual strength calculated by plastic limit analysis from the residual compression strength of the strut (which might or might not vanish). Fig. 2 shows the basic test data from the literature, as presented in [21].
\[
v_u = v_p\left[1 + \frac{d}{d_0}\right]^{-1/2} \tag{16}
\]

For beams without stirrups:

\[
d_0 = w_0\,\frac{d}{c}, \quad v_r = c_p K_c\left(\frac{a}{d} + \frac{d}{a}\right)^{-1}, \quad K_c = \sqrt{E_c G_f}, \quad c_p = k\sqrt{\frac{2 h_0}{w_0 s_c}}\;\frac{c/d}{a/d} \tag{17}
\]

For beams with stirrups:

\[
d_0 = w_0\,\frac{d}{c}, \quad v_r = \frac{\sin 2\theta}{2}\,\sigma_r, \quad v_p = K_c\sqrt{\frac{h_0}{2 s_c w_0}}\;\frac{c}{d}\,\sin 2\theta \tag{18}
\]

Tab. 2 Equations of the fracturing strut-and-tie model for the diagonal shear failure of reinforced concrete beams.

Nuisance or Necessity?

Until recently, the size effect was widely regarded as a nuisance foisted on the designers by some theoreticians. There were exceptions, though. Kani [22] in the mid 1960's carried out large-scale beam tests which clearly indicated the presence of a size effect in the shear failure of beams. Reinhardt [27] pointed out that the size effect may be due to fracture mechanics. Recent tests (e.g. [23]) have clearly proved the existence of the size effect for real-size beams. A nuisance might be that the size effect formula cannot be established strictly experimentally, because it is difficult to adhere to geometrical similarity in large-scale tests, and because large-scale experiments in a number sufficient to provide an adequate statistical basis are not in sight. This makes the use of a theory inevitable. The size effect cannot be avoided unless concrete could be made to behave perfectly plastically (which requires triaxial compression with the compressive principal stress of lowest magnitude exceeding the uniaxial compression strength [24]). The size effect is a necessity that concrete designers must learn to live with.

Acknowledgment

Financial support under NSF Grant CMS-9713944 to Northwestern University is gratefully acknowledged.

**Fig. 2** Linear regression fits of various test data for diagonal shear failure of beams: a), b) without stirrups, fit by Eq. 1; c) with stirrups, fit by Eq. 2 (if no size effect existed, all these plots would have to be horizontal).

**Bibliography**

[1] Bažant, Z.P., and Planas, J. (1998). *Fracture and Size Effect in Concrete and Other Quasibrittle Materials*. CRC Press, Boca Raton, Florida.
[2] Bažant, Z.P., and Chen, E.-P. (1997c). "Scaling of structural failure." *Applied Mechanics Reviews ASME* 50 (10), 593-627.
[3] da Vinci, L. (1500's) - see *The Notebooks of Leonardo da Vinci* (1945), Edward McCurdy, London (p. 546); and *Les Manuscrits de Léonard de Vinci*, transl. in French by C. Ravaisson-Mollien, Institut de France (1881-91), Vol. 3.
[4] Galileo Galilei Linceo (1638). *Discorsi e Dimostrazioni Matematiche intorno a due Nuove Scienze*, Elzevir, Leiden; English transl. by T. Weston, London (1730), pp. 178-181.
[5] Mariotte, E. (1686). *Traité du mouvement des eaux*, posthumously edited by M. de la Hire; Engl. transl. by J.T. Desaguliers, London (1718), p. 249; also *Mariotte's collected works*, 2nd ed., The Hague (1740).
[6] Weibull, W. (1939). "The phenomenon of rupture in solids." *Proc., Royal Swedish Institute of Engineering Research (Ingeniorsvetenskaps Akad. Handl.)* 153, Stockholm, 1-55.
[7] Bažant, Z.P. (1997). "Fracturing truss model: Size effect in shear failure of reinforced concrete." *J. of Engrg. Mechanics ASCE* 123 (12), 1276-1288.
[8] Hillerborg, A., Modéer, M. and Petersson, P.E. (1976).
"Analysis of crack formation and crack growth in concrete by means of fracture mechanics and finite elements." *Cement and Concrete Research* 6 773-782. [9] Petersson, P.E. (1981). "Crack growth and development of fracture zones in plain concrete and similar materials." *Report TVBM-1006*, Div. of Building Materials, Lund Inst. of Tech., Lund. [10] Bažant, Z.P. (1976). "Instability, ductility, and size effect in strain-softening concrete." *J. Engng. Mech. Div.*, Am. Soc. Civil Engrs., 102, EM2, 331-344; disc. 103, 357-358, 775-777, 104, 501-502. [11] Bažant Z.P., and Oh B.-H. (1983). "Crack band theory for fracture of concrete." *Materials and Structures* (RILEM, Paris), 16, 155-177. [12] Bažant, Z.P. (1984). "Size effect in blunt fracture: Concrete, rock, metal." *J. of Engrg. Mechanics* ASCE, 110, 518-535. [13] R. Jacobsen, F. Rosendahl (1994). "The Sleipner Platform accident." *Struct. Engrg. International* 3, 190-193. [14] M. Levy and M. Salvadori (1992). "Why buildings fall down." W.W. Norton, N.Y. [15] K. Pattison (1998). "Invention and Technology." 14(1), 23-31. [16] Carpinteri, A., Chiaia, B., and Ferro, G. (1994). "Multifractal scaling law for the nominal strength variation of concrete structures," in *Size effect in concrete structures* (Proc., Symp., Sendai 1993), M. Mihashi, H. Okamura and Z.P. Bažant, eds., E & FN Spon, London (1994) 193-206. [17] Bažant, Z.P. (1988). "Size effect in tensile and compression fracture of concrete structures: computational modeling and design." *Fracture Mechanics of Concrete Structures* (3rd Int. Conf., FraMCoS-3, held in Gifu, Japan), H. Mihashi and K. Rokugo, eds., Aedificatio Publishers, Freiburg, Germany, Vol. 3, 1905-1922. [18] Schweizerische Ingenieur- und Architekten Verein, Norm SIA162, Teilrevision 1993, articles 3 25 32 and 3 25 402. [19] Bažant, Z.P. and Xi, Y. (1991). "Statistical size effect in quasi-brittle structures: II. Nonlocal theory." *ASCE J. of Engineering Mechanics* 117 (11), 2623-2640. [20] Bažant, Z.P., Vitek, J.L. (1998). "Fracture and size effect in composite beams with deformable connectors." *Fracture Mechanics of Concrete Structures* (3rd Int. Conf., FraMCoS-3, held in Gifu, Japan), H. Mihashi and K. Rokugo, eds., Aedificatio Publishers, Freiburg, Germany, Vol. 2, 839-848. [21] Bažant, Z.P., and Becq-Giraudon, E. (1988). "Size effects in shear fracture of reinforced concrete beams." *Fracture Mechanics of Concrete Structures* (3rd Int. Conf., FraMCoS-3, held in Gifu, Japan), H. Mihashi and K. Rokugo, eds., Aedificatio Publ., Freiburg, Germany, V.3, 2063-2074. [22] Kani, G.N.J. (1967). "Basic facts concerning shear failure," *ACI Journal, Proc.* 64 (3), 128-141. [23] Walraven, J., and Lehwalter (1994). "Size effects in short beams loaded in shear." *ACI Structural Journal* 91 (5), 585-593. [24] Bažant, Z.P., Kim, J.J., and Brocca, M. (1999). "Finite strain tube-squash test of concrete at high pressures and shear angles up to 70°E." *ACI Materials Journal* 96, in press. [25] Bažant, Z.P. (1997b). "Scaling of quasibrittle fracture: Hypotheses of invasive and lacunar fractality, their critique and Weibull connection." *Int. J. of Fracture* 83 (1), 41-65. [26] Petroski, H. (1994). "Design paradigms." Cambridge University Press. [27] Reinhardt, H.W. (1981). "Massstabseinfluss bei Schubversuchen im Licht der Bruchmechanik." *Beton und Stahlbetonbau* (Berlin) No. 1, 19-21 (also Proc., IABSE Symp., Delft 1981, 201-210).
Neural Mechanisms of the Rejection-Aggression Link

David S. Chester, *Virginia Commonwealth University*
Donald R. Lynam, *Purdue University*
Richard Milich, *University of Kentucky*, email@example.com
C. Nathan DeWall, *University of Kentucky*, firstname.lastname@example.org

**Repository Citation**
Chester, David S.; Lynam, Donald R.; Milich, Richard; and DeWall, C. Nathan, "Neural Mechanisms of the Rejection-Aggression Link" (2018). *Psychology Faculty Publications*. 151. https://uknowledge.uky.edu/psychology_facpub/151

Notes/Citation Information
Published in *Social Cognitive and Affective Neuroscience*, v. 13, no. 5, p. 501-512. © The Author(s) (2018). Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact email@example.com

Digital Object Identifier (DOI)
https://doi.org/10.1093/scan/nsy025

Neural mechanisms of the rejection–aggression link

David S. Chester,1 Donald R. Lynam,2 Richard Milich,3 and C. Nathan DeWall3

1Department of Psychology, Virginia Commonwealth University, Richmond, VA 23284, USA, 2Department of Psychological Sciences, Purdue University, West Lafayette, IN 47907, USA, and 3Department of Psychology, University of Kentucky, Lexington, KY 40506, USA

Correspondence should be addressed to David S. Chester, Department of Psychology, Virginia Commonwealth University, 808 West Franklin St., Richmond, VA 23284, USA. E-mail: firstname.lastname@example.org

Abstract

Social rejection is a painful event that often increases aggression. However, the neural mechanisms of this rejection–aggression link remain unclear. A potential clue may be that rejected people often recruit the ventrolateral prefrontal cortex's (VLPFC) self-regulatory processes to manage the pain of rejection. Using functional MRI, we replicated previous links between rejection and activity in the brain's mentalizing network, social pain network and VLPFC. VLPFC recruitment during rejection was associated with greater activity in the brain's reward network (i.e. the ventral striatum) when individuals were given an opportunity to retaliate. This retaliation-related striatal response was associated with greater levels of retaliatory aggression. Dispositionally aggressive individuals exhibited less functional connectivity between the ventral striatum and the right VLPFC during aggression. This connectivity exerted a suppressing effect on dispositionally aggressive individuals' greater aggressive responses to rejection. These results help explain how the pain of rejection and the reward of revenge motivate rejected people to behave aggressively.
Key words: aggression; social rejection; reward; frontostriatal; fMRI

Introduction

It is not hard to imagine how an individual, pushed against their will to the fringes of a community, might react violently towards the perceived sources of such pain. Yet our understanding of the precise neural mechanisms that link experiences of rejection to aggressive retaliation is imperfect. In what follows, we summarize a brain imaging project that sought to understand the biological and psychological forces that drive the rejection–aggression link. Specifically, we tested whether the neural regulation of the pain of rejection magnified the subsequent 'sweetness' or reward of taking revenge upon the source of rejection. We then tested whether this interplay between regulatory and reward reactivity was linked to greater severity of the aggressive retaliatory response. Building on this, we tested whether this mechanism helped explain why some people tend to be more dispositionally aggressive than others.

The rejection–aggression link

Social rejection occurs when active attempts at social inclusion and belonging are rebuffed by the targets of affiliative acts (Williams, 2009). Such experiences are pervasive and have long-term adverse effects on human health. Indeed, social rejection is reliably linked to greater anxiety, depression and learned helplessness (Leary, 1990; Williams, 2009). These internalized consequences of rejection are juxtaposed against this phenomenon's more externalizing consequences, such as aggression. Social rejection is one of the most well-established causes of aggression (Leary et al., 2006; Ren et al., 2018). In laboratory settings, rejected individuals exhibit heightened aggressive behavior towards both their rejectors and innocent bystanders (Twenge et al., 2001; Buckley et al., 2004). As evidence that this phenomenon extends out into the broader world, experiences of rejection are thematic among the lives of the majority of school shooters (Leary et al., 2003). Investigations into the psychological processes that motivate the rejection–aggression link have revealed anger (Chow et al., 2008), hostile cognitive biases (DeWall et al., 2009), disinhibition (Rajchert and Winiewski, 2016), non-adherence to societal norms (Poon and Teng, 2017) and desires to re-establish feelings of control (Warburton et al., 2006; Wesselmann et al., 2010) as likely motivations. Although the psychological mechanisms of this effect are becoming well explicated, the neural mechanisms of the rejection–aggression link remain largely unexamined.

**Neural correlates of rejection**

Rejection threatens the fundamental human need for social connections (MacDonald and Leary, 2005). Social Pain Overlap Theory posits that the brain evolved to respond to this social injury with a broad and powerful recruitment of multiple neural systems that include brain regions critical to the experience of pain (Eisenberger and Lieberman, 2004).

**Social pain: DACC and anterior insula.** As outlined in Social Pain Overlap Theory, social pain is the aversive, affective and somatic response to perceptions of social rejection (Eisenberger and Lieberman, 2004; Eisenberger, 2012). As evidence of this, functional neuroimaging has associated experimental inductions of exclusion with activity in brain regions associated with the affective component of pain: the anterior insula and the dorsal anterior cingulate cortex (DACC; Eisenberger et al., 2003; Eisenberger, 2012; Rotge et al., 2014).
Reactivity of the DACC to rejection is associated with greater retaliatory aggression, though only among individuals with relatively poor executive functioning (Chester et al., 2014). This finding offers initial evidence that the brain's social pain response to rejection may exert a motivating influence on subsequent aggressive retaliation.

**Regulation of social pain: VLPFC.** Social pain, like physical pain, is not an unchecked response to stimuli. Instead, robust regulatory mechanisms exist to modulate neural pain reactivity (Price, 2000). Numerous studies point to the ventrolateral prefrontal cortex's (VLPFC) role in responding to socially painful events (Eisenberger et al., 2003; Chester and DeWall, 2014). In this context, the VLPFC plays a regulatory role, reducing the subjective experience of pain by inhibiting the brain regions that generate the distressing experience of pain (Eisenberger et al., 2003; Yanagisawa et al., 2011; Kawamoto et al., 2012). These findings fit with a much larger literature demonstrating the critical role of the VLPFC in regulating distress and negative affect (Wager et al., 2008). Affect regulation is one of an array of VLPFC functions, most of which involve inhibition (Aron et al., 2014). Indeed, the VLPFC is a relatively large region of the prefrontal cortex that is anatomically and functionally heterogeneous (Levy and Wagner, 2011). The most posterior regions, which are close to the motor regions of the frontal lobe, tend to facilitate the inhibition of concrete motor and behavioral responses (pars opercularis/triangularis; Levy and Wagner, 2011; Aron et al., 2014), whereas more anterior VLPFC regions tend to subserve the inhibition and regulation of more abstract emotional and cognitive processes (pars triangularis/orbitalis; Wager et al., 2008; Buhle et al., 2014). Yet how might VLPFC recruitment during rejection impact the neural underpinnings of retaliatory aggression?

**Neural correlates of retaliatory aggression**

Retaliatory aggressive behavior is associated with greater activity in a host of cortical regions *during* the act (dorsomedial PFC, posterior cingulate cortex, anterior and posterior insula; Krämer et al., 2007; Lotze et al., 2007; Dambacher et al., 2014; Chester and DeWall, 2016; Emmerling et al., 2016). Germane to this project, retaliatory aggression has also been linked to activity in the ventral striatum during the aggressive act, a neural region reliably linked to the experience of pleasure and reward (Chester and DeWall, 2016). The VLPFC exerts a robust regulatory influence on the ventral striatum, the connectivity of which predicts greater self-regulatory success (e.g. Wagner et al., 2013). Retaliatory aggression is associated with reduced connectivity between the ventral striatum and the VLPFC, potentially indicating a dysregulated reward response (Buades-Rotger et al., 2016; Chester and DeWall, 2016; Chester, 2017). These findings suggest that a likely neural mechanism underlying retaliatory aggression is a magnified and dysregulated striatal response that may serve to reinforce such aggressive acts. These findings further fit with psychological research that implicates reward and pleasure as central components of revenge-seeking tendencies (Chester and DeWall, 2018b). The pleasure of retaliation likely motivates aggression in both prospective and concurrent manners, with individuals seeking out acts of retaliatory aggression for the anticipated and currently felt rewards they bring.
However, it remains unknown how such striatal mechanisms interact with neural responses to rejection. **The excessive recruitment model** Conventionally, VLPFC recruitment during aversive experiences is theorized to be an adaptive regulatory response (Wager et al., 2008). Indeed, self-regulation failures (e.g. aggression) that occur because of aversive experiences (e.g. rejection) are thought to arise from an under-recruitment of the VLPFC (Heatherton and Wagner, 2011). Recent work has called for modification to this prevailing paradigm, suggesting that while this conceptualization of VLPFC recruitment may be correct in the short-term (i.e. effective inhibition of distress *during* the aversive experience), such VLPFC recruitment undermines *longer-term* self-regulatory success (Chester and DeWall, 2014; Chester and Riva, 2016; Chester et al., 2016). As evidence for this new approach, performing aversive, taxing and prefrontally mediated tasks has been linked to subsequent self-regulatory failure (e.g. Inzlicht and Gutsell, 2007). Participants who were attempting to restrict their calorie intake exhibited greater reward reactivity to food stimuli and less functional connectivity between reward regions and the lateral PFC after they had to repeatedly regulate their attention away from a distracting stimulus (Wagner et al., 2013). When racially biased Whites interacted face-to-face with a Black individual, their lateral PFC recruitment to Black faces predicted subsequent self-control impairment (Richeson et al., 2003). These findings suggest that greater lateral PFC recruitment during aversive experiences may not prove adaptive in the long-term. In the context of rejection, greater VLPFC recruitment during rejection predicted a magnified and prefrontally dysregulated ventral striatum response to appetitive cues on a subsequent lab task (Chester and DeWall, 2014). Extending outside of the lab, greater rejection-related VLPFC activity was associated with self-regulatory failures and increased cravings (Chester and DeWall, 2014). These results formed the basis of the *excessive recruitment model*, which posits that VLPFC recruitment in response to aversive experiences undermines subsequent self-regulation by impairing the VLPFC’s regulatory effects on the ventral striatum (Chester and DeWall, 2014; Chester and Riva, 2016; Chester et al., 2016). Further evidence for this model was found by observing that individuals who chronically experienced self-regulatory failure in response to aversive experiences also exhibited an exacerbated VLPFC response to aversive situations (Chester et al., 2016). This greater VLPFC response was associated with poorer inhibitory success and predicted greater alcohol consumption 1 month and 1 year later. It is important to note that self-regulation is not localized to the VLPFC, but is subserved by a host of other regions including the dorsolateral and dorsomedial PFC (Kober et al., 2010). The excessive recruitment model may indeed apply to these other regions, yet the evidence for making such claims about these other brain regions is currently lacking. As such, our hypotheses focus on the role of the VLPFC in self-regulatory failure. Because aggression can be construed as a self-regulatory failure (Denson et al., 2012), this excessive recruitment model might help explain why rejected people behave aggressively. Further, this model may help explain larger patterns of aggressive behavior that extend beyond the individual rejection incident. 
**Trait aggression: a potential reinforcing role of striatal responses to revenge**

Physical aggression is typically thought of in the context of a single act, but aggression is also a dispositional, trait-like construct (Buss and Perry, 1992). Trait aggression shows substantial generalizability across cultures, between-individual variability, predictive validity for actual aggressive behavior and within-individual test-retest reliability (Huesmann et al., 1984; Buss and Perry, 1992; Côté et al., 2006; Gerevich et al., 2007; Webster et al., 2014). Together, these findings provide substantial support for the existence of physical aggressiveness as a personality trait. The underlying neurobiology of trait physical aggression remains largely unknown, with few published studies on this topic (e.g. Carré et al., 2013). An unexamined neural mechanism might underlie trait aggression: striatally mediated reinforcement. The ventral striatum's role in promoting the reinforcement of behaviors that become habitual (e.g. cocaine and alcohol abuse) is well-established (Everitt and Robbins, 2005). As a behavior that recruits the ventral striatum (Chester and DeWall, 2016), retaliatory aggression is a candidate for an act that can become striatally reinforced, leading to durable patterns of aggressive behavior across time and situations.

**The present study**

The main goal of this project was to better understand the neural mechanisms of the rejection–aggression link and how they might contribute to larger patterns of aggressive traits. Based on the excessive recruitment model (Chester et al., 2016), we predicted that greater VLPFC activity during social rejection would be associated with more ventral striatum activity during opportunities for retaliatory aggression. Seeking to replicate previous work (Chester and DeWall, 2016), we further predicted that ventral striatum activity during retaliatory opportunities would positively correlate with greater actual retaliation. These findings would support a temporal sequence whereby VLPFC activity during rejection promotes subsequent retaliation through a magnified ventral striatum response. We further predicted that dispositionally physically aggressive individuals would exhibit greater dysregulation in the ability of the VLPFC to functionally inhibit the ventral striatum during retaliatory aggression. To test these predictions, a sample of undergraduates experienced social acceptance and then rejection from two same-sex strangers and were then given an opportunity to aggressively retaliate against one of their rejecters, all while undergoing functional magnetic resonance imaging (fMRI). Participants then reported the extent of their aggressive traits and completed a measure of whether they typically experienced pleasure during retaliatory aggression, which served to support our reverse inference that the ventral striatum activity we expected to observe during retaliatory aggression reflected reward and not some other process.

**Materials and methods**

**Ethics statement**

All participants provided informed consent before performing any research procedures, and all research procedures were conducted in accordance with human participants protection regulations as set forth by governmental and institutional policies. We, as authors, declare no conflicts of interest relevant to the research described in our manuscript. Data from a subset of these participants have been published in a separate manuscript (Chester and DeWall, 2018a).
**Participants** Participants were 60 healthy, right-handed, English-fluent, young adults (38 females, 22 males; age: $M = 20.28$, $SD = 2.77$, range: 18–30). Participants were either undergraduates recruited through the introductory psychology subject pool in exchange for credit towards their course’s research requirement and an image of their brain, or general community members recruited in exchange for $50 and an image of their brain. Exclusionary criteria were assessed by an online questionnaire, which included: body mass index above 30, claustrophobia, color blindness, mental or neural pathology, metallic objects in the body, prior head trauma and psychoactive medication use. **Materials** **Angry mood improvement inventory.** The 32-item Angry Mood Improvement Inventory (AMII) assesses the degree to which individuals tend to control and express their aggressive behavior to improve their mood when they are upset (Bushman et al., 2001). The eight-item Expression-Outwards subscale of the AMII assesses the tendency to express aggression outwardly in order to experience mood repair (sample items: ‘To improve my mood when I am upset, I express my anger’, ‘To improve my mood when I am upset, I strike out at whatever angers me’). Participants indicate the frequency of these mood-motivated actions along a 1 (Never) to 5 (Often) scale. **Brief aggression questionnaire.** To measure trait physical aggression, we employed the Brief Aggression Questionnaire (BAQ; Webster et al., 2014). The BAQ contains 12 items that comprise four factors: anger (sample item: ‘I have trouble controlling my temper’), hostility (sample item: ‘I sometimes feel that people are laughing at me behind my back’), physical aggression (sample item: ‘Given enough provocation, I may hit another person’) and verbal aggression (sample item: ‘My friends say that I’m somewhat argumentative'). Participants responded to each item along a 1 (disagree) to 7 (agree) scale. **Procedure** Participants arrived at the neuroimaging laboratory where they had the study explained to them, which entailed a cover story that the study was actually examining the role of brain functioning during various cognitive tasks in promoting alcohol misuse. Further, participants were instructed that they would be completing the study with two partners who were in nearby testing rooms. To ensure the believability of this deception, participants were told that they were the first participant to arrive and then asked to select a piece of paper that would determine which of the three MRI scanners they would be placed in (in reality, there was only one MRI scanner). After being ‘assigned’ to their MRI scanner, participants were then screened to ensure they would be safe and comfortable in the MRI environment, and then practiced the aggression task they would complete in the MRI scanner. Participants were then placed in an fMRI scanner and had a high-resolution structural scan taken of their brain (see MRI Data Acquisition & Preprocessing section for more details). **Cyberball task.** To induce an experience of social rejection in the functional neuroimaging environment, we employed the Cyberball social rejection task (Williams *et al.*, 2000; Eisenberger *et al.*, 2003; Chester *et al.*, 2014). In this task, participants were instructed to play a virtual ball-tossing game with two fictitious partners. 
**Procedure** Participants arrived at the neuroimaging laboratory, where the study was explained to them under a cover story: that the study was examining the role of brain functioning during various cognitive tasks in promoting alcohol misuse. Further, participants were instructed that they would be completing the study with two partners who were in nearby testing rooms. To ensure the believability of this deception, participants were told that they were the first participant to arrive and then asked to select a piece of paper that would determine which of the three MRI scanners they would be placed in (in reality, there was only one MRI scanner). After being ‘assigned’ to their MRI scanner, participants were screened to ensure they would be safe and comfortable in the MRI environment, and then practiced the aggression task they would complete in the MRI scanner. Participants were then placed in an fMRI scanner and had a high-resolution structural scan taken of their brain (see MRI Data Acquisition & Preprocessing section for more details).

**Cyberball task.** To induce an experience of social rejection in the functional neuroimaging environment, we employed the Cyberball social rejection task (Williams *et al.*, 2000; Eisenberger *et al.*, 2003; Chester *et al.*, 2014). In this task, participants were instructed to play a virtual ball-tossing game with two fictitious partners. The ostensible purpose of the task was for participants to mentally visualize the task as if it were occurring in real life, so that we might understand the neural underpinnings of the human imagination. The task proceeded across three blocks. In the first two blocks, participants received an equal number of ball-tosses from their two partners for 60 s per block (Acceptance condition). However, in the third block, after 30 s, participants stopped receiving the ball from their partners, who threw it back and forth to one another for the remaining 50 s (Rejection condition). Baseline activation was captured by 10 s ‘Rest’ trials that preceded each of the three blocks. Total task time was 3 min 50 s.

**Aggression task.** After the rejection task, participants completed an aggression task used in previous fMRI studies of aggression (Krämer *et al.*, 2007; Chester and DeWall, 2016). In this task, participants competed against one of their fictitious Cyberball partners, who was supposedly in an MRI scanner nearby, to see who could press a button faster. As an ostensible motivational component of the task, participants were punished if they lost the competition via an aversive noise blast. Conversely, if participants won the competition, their opponent heard the noise blast and they did not. Crucially, the volume of the noise blast delivered to their opponent was set by the participant and served as the measure of aggressive behavior (scoring of these volume settings is sketched at the end of this section). The task consisted of 14 blocks, with each block containing six trials. Each block began with a 10 s fixation cross that modeled baseline neural activity. Then, participants completed a 7.5 s aggression trial in which they pressed a button that set the volume of their partner’s noise blast. A blank screen then appeared for a jittered duration (0.5/1.0/1.5 s), which gave way to a competition trial in which participants pressed a button as fast as they could when a red square appeared on the screen (4.5/4.0/3.5 s duration). Participants then saw what volume level their opponent set for them (5 s duration). Finally, participants saw whether they won or lost the competition (5 s duration). If participants lost the competition, they heard an aversive noise blast whose volume level varied from 1 (silence) to 4 (extremely loud, though not dangerous). Whether a given aggression trial was preceded by their opponent setting a loud (3, 4) or soft (1, 2) volume level determined whether that trial was retaliatory (after a loud blast) or non-retaliatory (after a soft blast). Retaliatory and non-retaliatory trials were split fairly evenly (six retaliatory and eight non-retaliatory) and randomly presented, with the exception of the first trial, which was always non-retaliatory. Wins and losses were randomized and split evenly (seven wins and seven losses). Each of the 14 blocks lasted for 32.5 s, for a total task time of 7 min 35 s. Participants completed a series of other functional scans that were part of a separate project on impulsivity, and then exited the scanner. Participants were placed in a nearby testing room and completed a computerized battery of questionnaires including a demographics survey, the Brief Aggression Questionnaire and the Angry Mood Improvement Inventory. Participants were then fully debriefed about the deception inherent in the study and escorted from the laboratory with thanks. At the onset of this debriefing, an experimenter verbally administered a structured suspicion probe to each participant (e.g. ‘What do you think this study was about?’). No participants expressed significant suspicion about the deceptive elements of the study.
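To make the scoring of this task concrete, here is a minimal sketch under the trial structure described above (six retaliatory and eight non-retaliatory trials, first trial always non-retaliatory); the volume settings are invented purely for illustration.

```python
import numpy as np

# Volume settings (1-4) chosen on the 14 aggression trials by one
# hypothetical participant, and whether each trial followed a loud
# blast (retaliatory) or a soft blast (non-retaliatory).
volumes = np.array([2, 3, 1, 4, 2, 3, 2, 1, 4, 3, 2, 2, 3, 1], dtype=float)
provoked = np.array([0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0], dtype=bool)

assert not provoked[0]      # the first trial is always non-retaliatory
assert provoked.sum() == 6  # six retaliatory, eight non-retaliatory trials

total_aggression = volumes.mean()            # overall aggression score
retaliatory = volumes[provoked].mean()       # retaliatory aggression score
non_retaliatory = volumes[~provoked].mean()  # non-retaliatory aggression score
print(total_aggression, retaliatory, non_retaliatory)
```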
**MRI data acquisition and preprocessing** All MRI data were obtained using a 3.0 tesla Siemens Magnetom Trio scanner. Echo-planar BOLD images were acquired with a T2*-weighted gradient-echo sequence across the entire brain with a 3D shim (matrix size = 64 × 64, field of view = 224 mm, echo time = 28 ms, repetition time = 2.5 s, 3.5 mm isotropic voxels, 40 interleaved axial slices, flip angle = 90°). To allow for registration to native space, a coplanar T1-weighted MP-RAGE scan was also acquired from each participant (1 mm isotropic voxels, echo time = 2.56 ms, repetition time = 1.69 s, flip angle = 12°). The Oxford Centre for Functional MRI of the Brain (FMRIB) Software Library (FSL, version 5.0) was used to conduct all preprocessing and fMRI analyses (Smith *et al.*, 2004; Woolrich *et al.*, 2009). Reconstructed functional volumes underwent head motion correction to the median functional volume using FSL’s MCFLIRT tool. FSL’s Brain Extraction Tool was used to remove non-brain tissue from all functional and structural volumes using a fractional intensity threshold of 0.5. After a series of data quality checks, functional volumes underwent interleaved slice-timing correction, pre-whitening, spatial smoothing (using a 5 mm full-width-half-maximum Gaussian kernel) and temporal high-pass filtering (120 s cutoff). These processed brain volumes were then fed into subsequent data analyses.

**Statistical analyses: Functional MRI** Preprocessed fMRI datasets from both the rejection and aggression tasks were analyzed using two levels of general linear models.

**First level (within-participants).** Each participant’s whole-brain functional volumes were entered into a fixed-effects analysis that modeled trials as events using a canonical double-gamma hemodynamic response function with a temporal derivative (a sketch of such a design matrix appears at the end of this section). Regressors-of-interest for the rejection task included Acceptance and Rejection blocks, while leaving ‘Rest’ trials un-modeled; ‘Get Ready’ trials were modeled as a nuisance regressor. Regressors-of-interest for the aggression task included Retaliatory Aggression and Non-Retaliatory Aggression, while leaving fixation trials un-modeled. Competition, Pre-Competition, High Provocation, Low Provocation, Win and Lose trials were included as nuisance regressors. All six head motion parameters from each participant were modeled as nuisance regressors for each task. For the rejection task, linear contrasts compared rejection to acceptance (Rejection > Acceptance contrast). For the aggression task, linear contrasts compared retaliatory and non-retaliatory aggression to each other and to baseline fixation trials (Retaliatory Aggression > Non-Retaliatory Aggression contrast, Retaliatory Aggression > Baseline contrast, Non-Retaliatory Aggression > Baseline contrast). Resulting contrast images from these analyses were first linearly registered to native space structural volumes and then spatially normalized to a Montreal Neurological Institute (MNI) stereotaxic space template image (resampled into $2 \times 2 \times 2$ mm$^3$ voxels).

**Second level (across-participants).** Each participant’s contrast volumes from the first level were then fed into FLAME 1’s group-level, mixed-effects GLM, which created group average maps for all four contrasts across the entire brain. Cluster-based thresholding was applied to each of the group activation maps (Worsley, 2001; Heller et al., 2006). Clusters were determined by applying a $Z > 2.3$ threshold to the voxels of each of the group-average, whole-brain activation maps. Family-wise error correction was then applied to each cluster based on Gaussian random field theory ($P < 0.05$).
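To make the first-level model concrete, the sketch below builds a design matrix with a double-gamma HRF plus temporal derivative for the rejection task. The authors used FSL; this sketch uses nilearn purely for illustration, and the block onsets are assumptions pieced together from the task timings reported above.

```python
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

TR = 2.5                              # repetition time (s), as reported
n_vols = 92                           # assumed: 3 min 50 s of task / 2.5 s
frame_times = np.arange(n_vols) * TR

# Assumed onsets/durations mirroring the Cyberball structure; the 10 s
# 'Rest' trials are left un-modeled (no rows), as described in the text.
events = pd.DataFrame({
    "trial_type": ["acceptance", "acceptance", "rejection"],
    "onset": [10.0, 80.0, 180.0],
    "duration": [60.0, 60.0, 50.0],
})

design = make_first_level_design_matrix(
    frame_times,
    events,
    hrf_model="glover + derivative",  # double-gamma HRF + temporal derivative
    drift_model="cosine",
    high_pass=1.0 / 120,              # 120 s high-pass cutoff, as reported
)
print(design.columns.tolist())        # conditions, derivatives, drifts, constant
```

A contrast such as Rejection > Acceptance would then weight the ‘rejection’ column against the ‘acceptance’ column of this matrix.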
**Psychophysiological interaction analysis.** To assess functional connectivity during retaliatory aggression, a psychophysiological interaction (PPI) analysis was performed with the bilateral ventral striatum as the seed region-of-interest (ROI), using an anatomically and functionally defined mask from the Wake Forest University PickAtlas (Maldjian et al., 2003). This took the form of a first-level, within-participants analysis with the addition of two new regressors to the previously described GLM: the mean-centered timecourse of ventral striatum activity across the aggression task, and an interaction term multiplying the ventral striatum timecourse by the retaliatory aggression regressor. Linear contrasts compared this interaction term to participants’ implicit baseline. Activation maps from this analysis were then fed into a whole-brain regression analysis in which brain activity estimates from the PPI analysis were correlated with participants’ trait physical aggression levels. Clusters were determined by applying a $Z > 2.3$ threshold to each of the group-average, whole-brain activation maps. Family-wise error correction was then applied to each cluster based on Gaussian random field theory ($P < 0.05$). All tests were two-tailed.

**Statistical analyses: mediation modeling, ROI creation and parameter estimate extraction** In order to test whether participants’ VLPFC recruitment during rejection predicted greater subsequent aggression through greater striatal activity during retaliatory aggression, we conducted a mediation analysis (using the PROCESS version 2.0 macro for SPSS, model 4, 5000 bias-corrected and accelerated resamples; Hayes, 2012); a simplified bootstrap sketch of this indirect-effect test appears at the end of this section. VLPFC activity was obtained from 8 mm spherical ROIs centered on peak activation voxels from the rejection main effect contrast. Voxels were determined to be within the VLPFC using the Automated Anatomical Labeling (AAL) atlas’ opercular, orbital and triangular portions of the inferior frontal gyrus (Tzourio-Mazoyer et al., 2002). Functional data from the voxels that comprised each ROI were converted to units of percent signal change, averaged within each ROI for each participant, and extracted as outlined by Mumford (http://mumford.bol.ucla.edu/perchange_guide.pdf). Another such mediation model was run, except that connectivity estimates from the Retaliatory Aggression PPI analysis, rather than ventral striatum activity during retaliatory aggression, were entered as the mediator. This additional, exploratory mediation analysis served the purpose of examining whether striatum-based functional connectivity helped to explain the link between VLPFC recruitment during rejection and subsequent retaliatory aggression. All tests were two-tailed.

**Open practices** De-identified data necessary to reproduce all analyses from this project have been made publicly available at https://osf.io/n5bwh/files/.
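The indirect-effect test itself was run in the PROCESS macro. As a rough stand-in, the sketch below bootstraps a percentile confidence interval for an a × b indirect effect in plain NumPy; note that PROCESS uses bias-corrected and accelerated resampling, which this sketch does not implement, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # a-path: M regressed on X
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # b-path: Y on M, controlling X
    return a * b

# Synthetic stand-ins: x = VLPFC activity during rejection, m = ventral
# striatum activity during retaliatory aggression, y = retaliation.
n = 60
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

boot = np.array([indirect_effect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(5000))])
print(indirect_effect(x, m, y), np.percentile(boot, [2.5, 97.5]))
```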
**Results**

**Descriptive statistics**

**BAQ.** One participant failed to complete the Brief Aggression Questionnaire. Analyses were constrained to the Physical Aggression subscale of this questionnaire, as the aggressive behavior measured by our MRI task was physical, as opposed to verbal, in nature. Physical Aggression subscale scores exhibited substantial variability across the scale’s possible 1–7 range, $M = 2.92$, $SD = 1.70$, observed range = 1.00–7.00, Cronbach’s $\alpha = 0.84$.

**AMII.** One participant failed to complete the Angry Mood Improvement Inventory. We calculated Expression-Outwards ($\alpha = 0.74$), Expression-Inwards ($\alpha = 0.81$), Control-Outwards ($\alpha = 0.84$) and Control-Inwards ($\alpha = 0.77$) subscale scores by averaging across each participant’s corresponding responses. Expression-Outwards subscale scores exhibited substantial variability across the scale’s possible 1–5 range, $M = 2.19$, $SD = 0.49$, observed range = 1.38–3.38, and were positively correlated with trait physical aggression from the BAQ, $r(57) = 0.38$, $P = 0.003$.

**Aggression task.** Volume settings were internally consistent ($\alpha = 0.91$) and were thus averaged across all 14 trials of the aggression task, as well as across the six retaliatory trials ($\alpha = 0.81$) and the eight non-retaliatory trials ($\alpha = 0.85$), to create three aggression scores (i.e. total, retaliatory and non-retaliatory) for each participant, possible range 1–4. Validating our within-subjects provocation manipulation, participants selected louder noise blasts after high provocation (i.e. retaliatory aggression: $M = 2.58$, $SD = 0.87$, observed range = 1.00–4.00) than after low provocation (i.e. non-retaliatory aggression: $M = 2.41$, $SD = 0.79$, observed range = 1.00–4.00), $t(59) = 2.96$, $P = 0.004$, Cohen’s $d_{\text{dependent-means}} = 0.41$.

**Neural correlates of rejection** Social rejection (compared to social acceptance) was both positively and negatively associated with large swaths of neural activity (see Tables 1 and 2). The positively associated regions included the bilateral anterior insula, VLPFC and the mentalizing network (Figure 1).

Table 1. Brain regions positively associated with Reject > Accept during Cyberball

| Cluster | Voxels | Brain region | Peak Z | Peak x, y, z |
|---------|----------|---------------------------------------------------|--------|--------------|
| 1 | 25 205 | VLPFC/anterior insula | 6.07 | 48, 22, 12 |
| | | | 5.98 | 52, 22, 12 |
| | | | 5.88 | 46, 36, −6 |
| | | Temporoparietal junction | 5.81 | 56, 26, 12 |
| | | Middle temporal gyrus/temporal pole | 5.77 | 50, −44, 20 |
| | | | 5.68 | 56, −10, −16 |
| 2 | 10 661 | VLPFC | 6.46 | −42, 20, −26 |
| | | Temporoparietal junction | 6.00 | −46, −66, 22 |
| | | Middle temporal gyrus/temporal pole | 5.89 | −46, 8, −26 |
| | | Temporoparietal junction | 5.72 | −50, −66, 18 |
| | | | 5.65 | −44, −70, 24 |
| | | Anterior insula | 5.61 | −36, 24, −6 |
| 3 | 9239 | Dorsomedial PFC | 7.52 | 10, 52, 36 |
| | | | 7.09 | 8, 46, 30 |
| | | | 7.01 | 6, 46, 25 |
| 4 | 1492 | Posterior cingulate cortex | 5.49 | 4, −54, 34 |
| 5 | 1382 | Thalamus/caudate | 5.34 | 6, −10, 10 |
| 6 | 1218 | Occipital cortex | 5.03 | −12, −94, 4 |
| 7 | 599 | Brainstem | 3.53 | 4, −30, −30 |

Notes and Sources: Each cluster is displayed with rows for all local maxima.
Table 2. Brain regions negatively associated with Reject > Accept during Cyberball

| Cluster | Voxels | Brain region | Peak Z | Peak x, y, z |
|---------|----------|---------------------------------------------------|--------|--------------|
| 1 | 7616 | Posterior insula | −6.54 | −38, −6, 16 |
| | | Precentral gyrus | −5.69 | −30, −14, 64 |
| | | Supplemental motor area | −5.33 | −36, −16, 66 |
| | | Postcentral gyrus | −5.26 | −4, −6, 54 |
| | | | −5.14 | −48, −34, 56 |
| | | | −4.98 | −52, −28, 50 |
| 2 | 834 | Dorsolateral PFC | −4.29 | −40, 44, 8 |
| | | | −3.67 | −46, 28, 28 |
| | | | −3.60 | −34, 32, 28 |
| | | | −3.58 | −34, 24, 24 |
| | | | −3.46 | −40, 34, 34 |
| | | | −3.29 | −38, 52, 16 |
| 3 | 478 | Superior parietal lobule | −3.77 | 30, −48, 68 |
| | | Supramarginal gyrus | −3.56 | 48, −36, 54 |
| | | Postcentral gyrus | −3.51 | 42, −28, 40 |
| | | Superior parietal lobule | −3.46 | 36, −46, 64 |
| | | Postcentral gyrus | −3.09 | 46, −32, 50 |
| | | Superior parietal lobule | −3.02 | 32, −54, 64 |

Notes and Sources: Each cluster is displayed with rows for all local maxima.

Crucially, the neuroimaging results from the Retaliatory Aggression > Non-Retaliatory Aggression contrast only reflect the opportunity to engage in retaliatory aggression and do not reflect the actual neural correlates of retaliatory aggressive behavior itself. We regressed participants’ retaliatory aggression scores onto whole-brain neural activity from the Retaliatory Aggression > Non-Retaliatory Aggression contrast to identify the true neural correlates of retaliatory aggression. Whole-brain regression analyses revealed no significant correlates of retaliatory aggressive behavior. However, when analyses were constrained to the bilateral ventral striatum using an ROI approach, we observed significant positive associations with the left (9 voxels; peak voxel: $Z = 2.88$, coordinates = −12, 4, −10) and right (10 voxels; peak voxel: $Z = 2.63$, coordinates = 14, 6, −12) ventral striatum (Figure 3).

**Frontostriatal associations with retaliatory aggression** Of the three spherical VLPFC ROIs, rejection-related activity in the most rostral of the ROIs was positively associated with bilateral ventral striatum activity during retaliatory aggression (Retaliatory Aggression > Non-Retaliatory Aggression contrast), $r(58) = 0.346$, $P = 0.007$. Bilateral ventral striatum activity during retaliatory aggression was in turn associated with greater retaliatory aggression, $r(58) = 0.342$, $P = 0.007$. This pattern of correlations suggested the presence of an indirect effect. To test this potential indirect effect, bilateral ventral striatum activity during retaliatory aggression was modeled as a mediator of the effect of VLPFC activity during rejection on subsequent retaliatory aggressive behavior. Based on the previously significant association with aggression-related activity in the ventral striatum, parameter estimates from the most rostral VLPFC ROI were modeled as the independent variable. A significant indirect effect was observed from this model, $B = 1.41$, $SE = 0.70$, 95% CI = 0.30, 3.22. This overall model explained 11.90% of the variance in retaliatory aggression, $F(2, 57) = 3.85$, $P = 0.027$ (Figure 4).

Fig. 1. Greater neural activity from the Reject > Accept contrast of the Cyberball task in bilateral anterior insula, VLPFC and mentalizing network. Coordinates are in MNI space.

Fig. 2. (A–C) Spherical VLPFC ROIs constructed by centering each sphere on one of three local maxima from the Reject > Accept contrast.
(D) All three ROIs displayed simultaneously; red voxels display overlap between ROIs B and C. Coordinates are in MNI space.

**Does VLPFC activity simply reflect stronger affective responses to rejection?** An alternative account for our findings could be that the VLPFC activity we observed during social rejection might simply reflect a greater affective response to this event, and not the regulatory account we have proposed. If this alternative explanation is correct, then we should be able to replicate the positive associations that were observed between VLPFC activity (during rejection) and ventral striatum activity (during aggression) with other neural regions that subserve social pain (i.e. the DACC and anterior insula; Eisenberger, 2012). To test whether this was the case, we extracted rejection-related activation from the DACC and bilateral anterior insula (for the DACC mask see Chester et al., 2015; for the anterior insula mask see Chester et al., 2014) and correlated them with retaliatory-aggression-related activity in the bilateral ventral striatum. Failing to support this alternative account, ventral striatum activity during retaliatory aggression was unassociated with rejection-related activity in the DACC, $r(58) = -0.12$, $P = 0.378$, or the anterior insula, $r(58) = -0.01$, $P = 0.929$.

**Dispositional aggression and frontostriatal connectivity during retaliatory aggression** Dispositional physical aggression was positively associated with retaliatory aggression, $r(57) = 0.29$, $P = 0.029$, though not with greater ventral striatum activity during retaliatory aggression, $r(57) = 0.10$, $P = 0.435$. Subsequent analyses tested whether trait physical aggression was associated with altered functional connectivity between cortical brain regions and the bilateral ventral striatum during retaliatory aggressive behavior. The combined psychophysiological interaction (PPI) and whole-brain regression analyses revealed a single, negatively correlated cluster in the right VLPFC (peak voxel: $Z = -4.21, P = 0.004$, MNI coordinates [x, y, z] = 34, 52, 6; 529 contiguous voxels; Figure 5; Brodmann’s Areas 46 and 47), though some voxels extended into the rostral (Brodmann’s Area 10) and ventral (Brodmann’s Area 11) medial PFC. In a post hoc, exploratory fashion, functional connectivity estimates from the VLPFC cluster observed in the Retaliatory Aggression PPI analysis were entered as a mediator into a mediation analysis (these connectivity estimates were so small in magnitude that we rescaled them by a factor of 100 for this analysis so that effect size estimates would be interpretable). Retaliatory aggression was modeled as the dependent variable and trait physical aggression was modeled as the independent variable. A significant indirect effect was observed from this model, $B = -0.08, SE = 0.04$, 95% CI = $-0.18$, $-0.01$, whereby trait physical aggression predicted more retaliatory aggression through reduced functional connectivity between the ventral striatum and VLPFC during retaliatory aggression. This overall model explained 17.17% of the variance in retaliatory aggression, $F(2, 56) = 5.80$, $P = 0.005$ (Figure 6). As an indicator of statistical suppression, the direct effect of trait physical aggression on retaliatory aggression became stronger after controlling for the indirect effect of frontostriatal connectivity (Figure 6; MacKinnon et al., 2000).

**Discussion** Why do rejected people behave aggressively? Although rejection reliably increases aggression, it remains unclear what neurological mechanisms help explain this relationship.
Our investigation fills this gap in the literature by offering a comprehensive account of the neural correlates of rejection-related aggression. Building on the excessive recruitment model (Chester and DeWall, 2014; Chester et al., 2016), we examined whether taxing the brain’s regulatory functions during rejection would promote subsequent retaliatory aggression by unleashing neural reward circuitry, which would motivate retaliation by rendering such revenge more enticing.

**Implications for the neuroscience of social rejection** We replicated previous research demonstrating that social rejection, compared to acceptance, is associated with greater activity in brain networks that subserve social pain (anterior insula; Eisenberger et al., 2003), social pain regulation (VLPFC; Chester and DeWall, 2014) and mentalizing about others’ mental states (Falk et al., 2014). These replications support Social Pain Overlap Theory’s central tenet that rejection is a markedly painful experience that recruits the affective, and less often somatic, aspects of the brain’s pain circuitry (Eisenberger and Lieberman, 2004; Eisenberger, 2012). Notably absent from our observed neural correlates of rejection was the DACC, which is typically observed in experiences of social rejection (Kawamoto et al., 2012). It remains uncertain why this region was not reactive to rejection in our experiment.

**Implications for the neuroscience of retaliatory aggression** Opportunities to retaliate against one of the participants’ rejecters were associated with greater activity in the posterior insula, which replicates previous work on retaliatory aggression (Chester and DeWall, 2016; Emmerling et al., 2016). The posterior insula’s role in somatic and visceral interoception (Craig, 2011) may indicate that retaliatory aggression is experienced as a physically grounded state, or may signify an alertness to somatic cues. When actual retaliatory aggressive behavior was correlated with these retaliatory brain activity estimates, we observed significant, positive associations in the bilateral ventral striatum (Chester and DeWall, 2016). The added sensitivity gained from this regression approach suggests that it may be useful to researchers who regularly model the types of trials employed in their tasks as explanatory variables, but less commonly model participants’ actual responses to those task features. Adopting an ROI approach, we also replicated the association between activity in the ventral striatum during retaliatory aggression and the extent of actual retaliation that participants inflicted on their opponents (Chester and DeWall, 2016). Further, we replicated the correlation between such striatal activity and participants’ self-reported tendencies to react aggressively in order to improve their mood (as in Chester and DeWall, 2016). This replication lends confidence to our striatum-reward reverse inference, which is further supported by the extensive evidence for the ventral striatum’s involvement in reward (Burgdorf and Panksepp, 2006; Ikemoto, 2007; Sabatinelli et al., 2007; Kringelbach and Berridge, 2009; Berridge and Kringelbach, 2013). A reliably observed striatal response to retaliatory aggression supports the growing literature implicating positive affect, pleasure and reward as central psychological components of retaliatory aggression (Chester, 2017; Chester and DeWall, 2018b).
**Neurochemical nuances of reward** However, ‘reward’ is a heterogeneous construct, and its various sub-processes are subserved by distinct neurobiological pathways that fMRI cannot currently disentangle (Berridge and Robinson, 2003). Indeed, ‘liking’ or pleasure is likely to be mediated by opioid transmitter pathways and spiny neurons in the shell of the ventral striatum, whereas the motivational component of ‘wanting’ is likely to be mediated by dopaminergic transmitter pathways in this same region (Kringelbach and Berridge, 2009). Imaging and pharmacological techniques that measure and manipulate neural dopamine and opioid levels and binding sites are needed to better investigate whether ‘wanting’ or ‘liking’ is at play in retaliatory aggression. Disentangling these experiences is crucial for a comprehensive understanding of aggression’s phenomenological qualities.

**Support for the excessive recruitment model: sometimes less is more** In support of the excessive recruitment model of self-regulatory failure (Chester and DeWall, 2014; Chester et al., 2016), anterior VLPFC recruitment during rejection was associated with greater ventral striatum activation during retaliatory aggression. These findings support the VLPFC–striatum circuit’s role in self-regulation and further suggest that retaliatory aggression is a rewarding activity, as similar findings are observed in other rewarding behaviors such as unhealthy eating (Wagner et al., 2013). Crucially, only the most anterior ROI in our rejection-related VLPFC cluster was associated with striatal activity during aggression. The specificity of our findings to the anterior VLPFC can be interpreted in the light of research demonstrating that behavioral inhibition is localized to posterior VLPFC subregions, whereas emotional and cognitive inhibition is localized to the anterior VLPFC (Aron et al., 2014; Buhle et al., 2014). As such, we can tentatively infer that it was the VLPFC’s regulation of social pain, and not of the aggressive motor act itself, that predicted subsequent increases in retaliation-related striatal activity. Further, the null associations between rejection-related activity in the DACC and anterior insula and aggression-related ventral striatum activity undermine the potential alternative account that our VLPFC activation reflected not self-regulation but more intense affective responses to rejection. These findings cast doubt on ‘the more lateral PFC, the better’ approaches to the neuroscience of self-regulation and support the excessive recruitment model (Chester and DeWall, 2014; Chester et al., 2016). However, it may be that the excessive recruitment model only holds for situations characterized by negative affect (e.g. rejection), as other research has found that greater VLPFC recruitment to self-regulatory challenges outside of this affective context is predictive of self-regulatory success, not failure (Lopez et al., 2014). Further, chronic recruitment of brain regions can increase their plasticity and baseline activity (e.g. Teneback et al., 1999). If this is the case, then individuals who show exacerbated recruitment of the VLPFC during each aversive event might eventually train this brain region to become more flexible and robust, promoting better emotion regulation. Future work is needed to understand how some forms of cortical recruitment can lead to adaptive or maladaptive changes in behavior.
**A reinforcement model of trait aggression** Dispositionally physically aggressive individuals exhibited less functional connectivity between the VLPFC and ventral striatum during retaliatory aggression. Further, this connectivity exerted a suppressing effect on their aggression, suggesting that without such neural control over reward circuitry, those high in aggressive traits would act much more aggressively than they typically do. Such dysregulated reward activity during retaliatory belligerence might explain how aggressive traits are developed and maintained. Classic models of aggression emphasize distinct social-cognitive and affective-learning mechanisms as the process through which some individuals develop to be more aggressive than others (e.g. the General Aggression Model; Anderson and Bushman, 2002). This project argues for a more integrated conceptualization of affective and cognitive processes in these models, in which cognitive control mechanisms and affective impulses dynamically interact to reinforce aggressive behavior. These results also suggest avenues for future research to investigate how striatally mediated reinforcement processes can be altered to reduce the occurrence of physically aggressive dispositions. However, it remains unclear whether the VLPFC’s connectivity with the striatum represents inhibition of the affective reward experience or a more general, behavioral inhibition of aggressive impulses. The anterior location of this VLPFC cluster does suggest that the inhibition targeted affective rather than behavioral processes, yet more work is needed to be certain.

**A role for appraisal processes?** Emotional responses to situations are determined by the appraisals of those situations (Siemer et al., 2007; Brosch et al., 2010). Further, such emotional appraisals can fundamentally alter the neural responses to emotional stimuli (Brosch and Sander, 2013; Buhle et al., 2014). Emotional responses to social rejection and aggression are no exception. For instance, externalizing responses to rejection are contingent upon the appraisal of the event (Sandstrom et al., 2003). The present research did not include measures or manipulations of appraisal processes. However, establishing the role of subjective appraisals, beyond the objective experimental manipulations we employed, is critical to understanding the neural mechanisms of the rejection–aggression link.

**Limitations and future directions** Our findings must be considered in light of our sample of 60 young adults, which poses some concerns for the generalizability of our findings and the extent to which our analyses were sufficiently powered. The fact that most of our findings were close replications of previous studies, which we linked together in the same sample, provides some confidence that our results were not merely due to sampling error. An additional concern is that all findings were correlational in nature. Future work that experimentally modulates these neural systems, such as with brain stimulation and pharmacological techniques, can circumvent this issue. The reverse inferences that we made are also a limitation, as the neural activity we observed may not actually be reflective of the psychological processes we interpreted it to signify (Poldrack, 2006). Our striatum-reward associations were supported by self-report evidence, but more work is needed to ensure the fidelity of our reverse inferences.
Further, the association between retaliatory aggression and ventral striatum activity (displayed in Figure 3) was observed in a relatively small number of voxels (i.e. 19), which potentially undermines the reliability of these findings. However, voxels from the entire anatomical region were used in all other analyses, which provides more inferential confidence than just using the significant voxels from the regression analysis. Each of our aggression-related neural activity estimates may have been biased by the interleaved baseline fixation screens that we used, which may have been influenced by residual reactivity to previous elements of the task. Future work may benefit from using baseline estimates that are acquired before the aggression task. Additionally, the ways in which striatal activity motivates aggression cannot be articulated given our data. It may be that the anticipation of the striatal reward associated with aggression prospectively motivates such behavior. Conversely, it may be that the reward experienced during the aggressive act motivates more severe acts of aggression, in order to magnify the ongoing hedonic experience. Research that is able to disentangle the anticipated vs in-the-moment motivational capabilities of aggression’s rewarding qualities is a necessary future endeavor. Our rejection block was also confounded with the duration of the Cyberball task, which may have produced spurious patterns of brain activity, such as those we observed in the visual and auditory cortices. Finally, readers should use caution when interpreting our findings because we did not correct for the multiple comparisons that were made across the various VLPFC regions-of-interest.

**Conclusions** Does the pain of rejection promote the sweetness of revenge? Our findings suggest that the answer to this question is yes, albeit indirectly. People are motivated to maintain their social connections, but also to avoid having those connections inflict excessive costs upon them (McCullough et al., 2013). Pain and pleasure are two proximate mechanisms that evolution may have co-opted to motivate individuals to avoid rejection (i.e. social pain) and to seek retribution against those who harm them (i.e. aggressive pleasure). By understanding how coping with the pain of rejection impacts our self-regulatory abilities, and how these regulatory changes render retaliation an appetitive option, we may better understand how to prevent the aggressive dividends that rejection often yields.

**Funding** This work was supported by the National Institute on Drug Abuse (award # DA05312; Lynam, Milich and DeWall), the Foundation for Personality and Social Psychology’s Heritage Initiative (Chester), and the Robert S. Lipman Research Fund for the Prevention of Drug and Alcohol Abuse (Chester).

**Conflict of interest.** None declared.

**References**

Anderson, C.A., Bushman, B.J. (2002). Human aggression. *Annual Review of Psychology*, 53(1), 27–51.

Aron, A.R., Robbins, T.W., Poldrack, R.A. (2014). Inhibition and the right inferior frontal cortex: one decade on. *Trends in Cognitive Sciences*, 18(4), 177–85.

Berridge, K.C., Kringelbach, M.L. (2013). Neuroscience of affect: brain mechanisms of pleasure and displeasure. *Current Opinion in Neurobiology*, 23(3), 294–303.

Berridge, K.C., Robinson, T.E. (2003). Parsing reward. *Trends in Neurosciences*, 26(9), 507–13.

Brosch, T., Pourtois, G., Sander, D. (2010). The perception and categorisation of emotional stimuli: a review. *Cognition and Emotion*, 24(3), 377–400.
Brosch, T., Sander, D. (2013). Comment: the appraising brain: towards a neuro-cognitive model of appraisal processes in emotion. *Emotion Review*, 5(2), 163–8.

Buades-Rotger, M., Brunnlieb, C., Münte, T.F., Heldmann, M., Krämer, U.M. (2016). Winning is not enough: ventral striatum connectivity during physical aggression. *Brain Imaging and Behavior*, 10(1), 105–14.

Buckley, K.E., Winkel, R.E., Leary, M.R. (2004). Reactions to acceptance and rejection: effects of level and sequence of relational evaluation. *Journal of Experimental Social Psychology*, 40(1), 14–28.

Buhle, J.T., Silvers, J.A., Wager, T.D., et al. (2014). Cognitive reappraisal of emotion: a meta-analysis of human neuroimaging studies. *Cerebral Cortex*, 24(11), 2981–90.

Burgdorf, J., Panksepp, J. (2006). The neurobiology of positive emotions. *Neuroscience and Biobehavioral Reviews*, 30(2), 173–87.

Bushman, B.J., Baumeister, R.F., Phillips, C.M. (2001). Do people aggress to improve their mood? Catharsis beliefs, affect regulation opportunity, and aggressive responding. *Journal of Personality and Social Psychology*, 81(1), 17–32.

Buss, A.H., Perry, M. (1992). The aggression questionnaire. *Journal of Personality and Social Psychology*, 63(3), 452–9.

Carré, J.M., Murphy, K.R., Hariri, A.R. (2013). What lies beneath the face of aggression? *Social Cognitive and Affective Neuroscience*, 8(2), 224–9.

Chester, D.S. (2017). The role of positive affect in aggression. *Current Directions in Psychological Science*, 26(4), 366–70.

Chester, D.S., DeWall, C.N. (2014). Prefrontal recruitment during social rejection predicts greater subsequent self-regulatory imbalance and impairment: neural and longitudinal evidence. *NeuroImage*, 101(1), 485–93.

Chester, D.S., DeWall, C.N. (2016). The pleasure of revenge: retaliatory aggression arises from a neural imbalance toward reward. *Social Cognitive and Affective Neuroscience*, 11(7), 1173–82.

Chester, D.S., DeWall, C.N. (2018a). Aggression is associated with greater subsequent alcohol consumption: shared neural basis in the ventral striatum. *Aggressive Behavior*, 44(3), 285–93.

Chester, D.S., DeWall, C.N. (2018b). Personality correlates of revenge-seeking: multidimensional links to physical aggression, impulsivity, and aggressive pleasure. *Aggressive Behavior*, 44(3), 235–45.

Chester, D.S., Eisenberger, N.I., Pond, R.S., Richman, S.B., Bushman, B.J., DeWall, C.N. (2014). The interactive effect of social pain and executive functioning on aggression: an fMRI experiment. *Social Cognitive and Affective Neuroscience*, 9(5), 699–704.

Chester, D.S., Lynam, D.R., Milich, R., Powell, D.K., Andersen, A.H., DeWall, C.N. (2016). How do negative emotions impair self-control? A neural model of negative urgency. *NeuroImage*, 132(1), 43–50.

Chester, D.S., Pond, R.S., DeWall, C.N. (2015). Alexithymia is associated with blunted anterior cingulate response to social rejection: implications for daily rejection. *Social Cognitive and Affective Neuroscience*, 10(4), 517–22.

Chester, D.S., Riva, P. (2016). Brain mechanisms to regulate negative reactions to social exclusion. In: Riva, P., Eck, J., editors. *Social Exclusion: Psychological Approaches to Understanding and Reducing Its Impact*, Cham, Switzerland: Springer International Publishing.

Chow, R.M., Tiedens, L.Z., Govan, C.L. (2008). Excluded emotions: the role of anger in antisocial responses to ostracism. *Journal of Experimental Social Psychology*, 44(3), 896–903.

Côté, S., Vaillancourt, T., LeBlanc, J.C., Nagin, D.S., Tremblay, R.E. (2006). The development of physical aggression from toddlerhood to pre-adolescence: a nationwide longitudinal study of Canadian children. *Journal of Abnormal Child Psychology*, 34(1), 68–82.
Craig, A.D. (2011). Significance of the insula for the evolution of human awareness of feelings from the body. *Annals of the New York Academy of Sciences*, 1225(1), 72–82.

Dambacher, F., Sack, A.T., Lobbestael, J., Arntz, A., Brugman, S., Schuhmann, T. (2014). Out of control: evidence for anterior insula involvement in motor impulsivity and reactive aggression. *Social Cognitive and Affective Neuroscience*, 10(4), 508–16.

Denson, T.F., DeWall, C.N., Finkel, E.J. (2012). Self-control and aggression. *Current Directions in Psychological Science*, 21(1), 20–5.

DeWall, C.N., Twenge, J.M., Gitter, S.A., Baumeister, R.F. (2009). It’s the thought that counts: the role of hostile cognition in shaping aggressive responses to social exclusion. *Journal of Personality and Social Psychology*, 96(1), 45–59.

Eisenberger, N.I. (2012). The pain of social disconnection: examining the shared neural underpinnings of physical and social pain. *Nature Reviews Neuroscience*, 13(6), 421–34.

Eisenberger, N.I., Lieberman, M.D. (2004). Why rejection hurts: a common neural alarm system for physical and social pain. *Trends in Cognitive Sciences*, 8(7), 294–300.

Eisenberger, N.I., Lieberman, M.D., Williams, K.D. (2003). Does rejection hurt? An fMRI study of social exclusion. *Science*, 302(5643), 290–2.

Emmerling, F., Schuhmann, T., Lobbestael, J., Arntz, A., Brugman, S., Sack, A.T. (2016). The role of the insular cortex in retaliation. *PLoS One*, 11(4), e0152000.

Everitt, B.J., Robbins, T.W. (2005). Neural systems of reinforcement for drug addiction: from actions to habits to compulsion. *Nature Neuroscience*, 8(11), 1481–9.

Falk, E.B., Cascio, C.N., O’Donnell, M.B., et al. (2014). Neural responses to exclusion predict susceptibility to social influence. *Journal of Adolescent Health*, 54(5), S22–31.

Frith, C.D., Frith, U. (2006). The neural basis of mentalizing. *Neuron*, 50(4), 531–4.

Gerevich, J., Bácskai, E., Czobor, P. (2007). The generalizability of the Buss–Perry Aggression Questionnaire. *International Journal of Methods in Psychiatric Research*, 16(3), 124–36.

Hayes, A.F. (2012). PROCESS: a versatile computational tool for observed variable mediation, moderation, and conditional process modeling (Version 2.0) [Software]. Available: http://www.afhayes.com/public/process2012.pdf [August 18, 2017].

Heatherton, T.F., Wagner, D.D. (2011). Cognitive neuroscience of self-regulation failure. *Trends in Cognitive Sciences*, 15(3), 132–9.

Heller, R., Stanley, D., Yekutieli, D., Rubin, N., Benjamini, Y. (2006). Cluster-based analysis of fMRI data. *NeuroImage*, 33(2), 599–608.

Huesmann, L.R., Eron, L.D., Lefkowitz, M.M., Walder, L.O. (1984). Stability of aggression over time and generations. *Developmental Psychology*, 20(6), 1120–34.

Ikemoto, S. (2007). Dopamine reward circuitry: two projection systems from the ventral midbrain to the nucleus accumbens–olfactory tubercle complex. *Brain Research Reviews*, 56(1), 27–78.

Inzlicht, M., Gutsell, J.N. (2007). Running on empty: neural signals for self-control failure. *Psychological Science*, 18(11), 933–7.

Kawamoto, T., Onoda, K., Nakashima, K.I., Nittono, H., Yamaguchi, S., Ura, M. (2012). Is dorsal anterior cingulate cortex activation in response to social exclusion due to expectancy violation? An fMRI study. *Frontiers in Evolutionary Neuroscience*, 4, 11.
Kober, H., Mende-Siedlecki, P., Kross, E.F., et al. (2010). Prefrontal–striatal pathway underlies cognitive regulation of craving. *Proceedings of the National Academy of Sciences*, 107(33), 14811–6.

Krämer, U.M., Jansma, H., Tempelmann, C., Münte, T.F. (2007). Tit-for-tat: the neural basis of reactive aggression. *NeuroImage*, 38(1), 203–11.

Kringelbach, M.L., Berridge, K.C. (2009). Towards a functional neuroanatomy of pleasure and happiness. *Trends in Cognitive Sciences*, 13(11), 479–87.

Leary, M.R. (1990). Responses to social exclusion: social anxiety, jealousy, loneliness, depression, and low self-esteem. *Journal of Social and Clinical Psychology*, 9(2), 221–9.

Leary, M.R., Kowalski, R.M., Smith, L., Phillips, S. (2003). Teasing, rejection, and violence: case studies of the school shootings. *Aggressive Behavior*, 29(3), 202–14.

Leary, M.R., Twenge, J.M., Quinlivan, E. (2006). Interpersonal rejection as a determinant of anger and aggression. *Personality and Social Psychology Review*, 10(2), 111–32.

Levy, B.J., Wagner, A.D. (2011). Cognitive control and right ventrolateral prefrontal cortex: reflexive reorienting, motor inhibition, and action updating. *Annals of the New York Academy of Sciences*, 1224(1), 40–62.

Lopez, R.B., Hofmann, W., Wagner, D.D., Kelley, W.M., Heatherton, T.F. (2014). Neural predictors of giving in to temptation in daily life. *Psychological Science*, 25(7), 1337–44.

Lotze, M., Veit, R., Anders, S., Birbaumer, N. (2007). Evidence for a different role of the ventral and dorsal medial prefrontal cortex for social reactive aggression: an interactive fMRI study. *NeuroImage*, 34(1), 470–8.

MacDonald, G., Leary, M.R. (2005). Why does social exclusion hurt? The relationship between social and physical pain. *Psychological Bulletin*, 131(2), 202–23.

MacKinnon, D.P., Krull, J.L., Lockwood, C.M. (2000). Equivalence of the mediation, confounding and suppression effect. *Prevention Science*, 1(4), 173–81.

Maldjian, J.A., Laurienti, P.J., Kraft, R.A., Burdette, J.H. (2003). An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. *NeuroImage*, 19(3), 1233–9.

McCullough, M.E., Kurzban, R., Tabak, B.A. (2013). Putting revenge and forgiveness in an evolutionary context. *Behavioral and Brain Sciences*, 36(1), 41–58.

Poldrack, R.A. (2006). Can cognitive processes be inferred from neuroimaging data? *Trends in Cognitive Sciences*, 10(2), 59–63.

Poon, K.T., Teng, F. (2017). Feeling unrestricted by rules: ostracism promotes aggressive responses. *Aggressive Behavior*, 43(6), 558–67.

Price, D.D. (2000). Psychological and neural mechanisms of the affective dimension of pain. *Science*, 288(5472), 1769–72.

Rajchert, J., Winiewski, M. (2016). The behavioral approach and inhibition systems’ role in shaping the displaced and direct aggressive reaction to ostracism and rejection. *Personality and Individual Differences*, 88(1), 272–9.

Ren, D., Wesselmann, E.D., Williams, K.D. (2018). Hurt people hurt people: ostracism and aggression. *Current Opinion in Psychology*, 19(1), 34–8.

Richeson, J.A., Baird, A.A., Gordon, H.L., et al. (2003). An fMRI investigation of the impact of interracial contact on executive function. *Nature Neuroscience*, 6(12), 1323–8.

Rotge, J.Y., Lemogne, C., Hinfray, S., et al. (2014). A meta-analysis of the anterior cingulate contribution to social pain. *Social Cognitive and Affective Neuroscience*, 10(1), 19–27.

Sabatinelli, D., Bradley, M.M., Lang, P.J., Costa, V.D., Versace, F. (2007). Pleasure rather than salience activates human nucleus accumbens and medial prefrontal cortex. *Journal of Neurophysiology*, 98(3), 1374–9.
Sandstrom, M.J., Cillessen, A.H., Eisenhower, A. (2003). Children’s appraisal of peer rejection experiences: impact on social and emotional adjustment. *Social Development*, 12(4), 530–50.

Siemer, M., Mauss, I., Gross, J.J. (2007). Same situation–different emotions: how appraisals shape our emotions. *Emotion*, 7(3), 592.

Smith, S.M., Jenkinson, M., Woolrich, M.W., et al. (2004). Advances in functional and structural MR image analysis and implementation as FSL. *NeuroImage*, 23(1), S208–19.

Teneback, C.C., Nahas, Z., Speer, A.M., et al. (1999). Changes in prefrontal cortex and paralimbic activity in depression following two weeks of daily left prefrontal TMS. *Journal of Neuropsychiatry and Clinical Neurosciences*, 11(4), 426–35.

Twenge, J.M., Baumeister, R.F., Tice, D.M., Stucke, T.S. (2001). If you can’t join them, beat them: effects of social exclusion on aggressive behavior. *Journal of Personality and Social Psychology*, 81(6), 1058–69.

Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., et al. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. *NeuroImage*, 15(1), 273–89.

Wager, T.D., Davidson, M.L., Hughes, B.L., Lindquist, M.A., Ochsner, K.N. (2008). Prefrontal-subcortical pathways mediating successful emotion regulation. *Neuron*, 59(6), 1037–50.

Wagner, D.D., Altman, M., Boswell, R.G., Kelley, W.M., Heatherton, T.F. (2013). Self-regulatory depletion enhances neural responses to rewards and impairs top-down control. *Psychological Science*, 24(11), 2262–71.

Warburton, W.A., Williams, K.D., Cairns, D.R. (2006). When ostracism leads to aggression: the moderating effects of control deprivation. *Journal of Experimental Social Psychology*, 42(2), 213–20.

Webster, G.D., DeWall, C.N., Pond, R.S., et al. (2014). The brief aggression questionnaire: psychometric and behavioral evidence for an efficient measure of trait aggression. *Aggressive Behavior*, 40(2), 120–39.

Wesselmann, E.D., Butler, F.A., Williams, K.D., Pickett, C.L. (2010). Adding injury to insult: unexpected rejection leads to more aggressive responses. *Aggressive Behavior*, 36(4), 232–7.

Williams, K.D. (2009). Ostracism: a temporal need-threat model. In: Zanna, M.P., editor. *Advances in Experimental Social Psychology*, New York, NY: Academic Press.

Williams, K.D., Cheung, C.K., Choi, W. (2000). Cyberostracism: effects of being ignored over the Internet. *Journal of Personality and Social Psychology*, 79(5), 748–62.

Woolrich, M.W., Jbabdi, S., Patenaude, B., et al. (2009). Bayesian analysis of neuroimaging data in FSL. *NeuroImage*, 45(Suppl 1), S173–86.

Worsley, K.J. (2001). Statistical analysis of activation images. *Functional MRI: An Introduction to Methods*, 14(1), 251–70.

Yanagisawa, K., Masui, K., Onoda, K., et al. (2011). The effects of the behavioral inhibition and activation systems on social inclusion and exclusion. *Journal of Experimental Social Psychology*, 47(2), 502–5.
**Serum Iron Markers Are Inadequate for Guiding Iron Repletion in Chronic Kidney Disease** Paolo Ferrari, Hemant Kulkarni, Shyam Dheda, Susanne Betti, Colin Harrison, Timothy G. St. Pierre, and John K. Olynyk

**Summary**

**Background and objectives** Iron (Fe) overload may complicate parenteral Fe therapy used to enhance the efficacy of erythropoiesis-stimulating agents in the treatment of anemia of chronic kidney disease. However, serum Fe markers are influenced by inflammation or malignancy and may not accurately reflect the amount of body Fe.

**Design, setting, participants, & measurements** We studied the relationship between parenteral Fe therapy, conventional serum Fe markers, and liver iron concentration (LIC) measured using magnetic resonance $R_2$ relaxometry (FerriScan) in 25 Fe-deficient predialysis chronic kidney disease patients before and 2 and 12 weeks after a single high-dose intravenous Fe infusion, and in 15 chronic hemodialysis patients with elevated serum ferritin (>500 μg/L).

**Results** In predialysis patients, there was strong dose dependency between the administered Fe dose and changes in LIC at weeks 2 and 12; however, no dose dependency between Fe dose and changes in ferritin or transferrin saturation (TSAT) was observed. In hemodialysis patients, LIC correlated with the cumulative Fe dose and duration of dialysis but not with current ferritin or TSAT. The cumulative Fe dose remained a significant independent predictor of LIC in a multiple regression model. Two dialysis patients who received >6 g parenteral Fe had substantially elevated LIC (>130 μmol/g), in the range associated with hemochromatosis.

**Conclusions** In Fe-deficient predialysis patients, intravenous Fe therapy is associated with increases in LIC unrelated to changes in conventional Fe markers. In hemodialysis patients, TSAT and ferritin are poor indicators of body Fe load, and some patients have LICs similar to those found in hemochromatosis. Clin J Am Soc Nephrol 6: 77–83, 2011. doi: 10.2215/CJN.04190510

**Introduction** Chronic kidney disease (CKD) is commonly accompanied by the development of anemia that is characterized by poor intestinal iron (Fe) absorption, low ferritin levels, and a requirement for parenteral Fe supplementation (1–3). There is good evidence that intravenous Fe therapy should be administered as a standard therapy for Fe deficiency (ferritin < 100 μg/L, transferrin saturation [TSAT] < 20%) in conjunction with or before therapy with erythropoiesis-stimulating agents in anemia of CKD, and to maintain adequate Fe stores in dialysis patients (4,5). Hemoglobin and ferritin concentrations have been shown to increase significantly in CKD patients after intravenous Fe compared with oral Fe therapy (6). For ease and convenience, intravenous Fe 500 to 1500 mg can be administered over a 4-hour period (7). Dialysis patients regularly require 50 to 200 mg monthly of intravenous Fe to maintain their Fe stores (8,9). Although it is generally assumed that restoration of hemoglobin toward the target range is a good outcome of this therapy, it is well known that Fe overload and Fe toxicity may be adverse consequences. Both the American and Australian guidelines (4,5) recommend caution with the routine administration of intravenous Fe if the serum ferritin is >500 μg/L. However, the upper limit of a safe serum ferritin level remains unresolved (10).
Serum ferritin levels between 300 and 1200 μg/L are associated with the lowest mortality risk after adjusting for malnutrition and inflammation (11), and only serum ferritin levels >2000 μg/L have been associated with hemochromatosis in dialysis patients (3). Unfortunately, serum Fe markers do not accurately reflect the amount of Fe in the body, because factors such as infections, inflammatory diseases, or malignancy can alter ferritin levels. Thus, some authors have suggested that no specific level of serum ferritin should be stated as an upper limit for Fe treatment (10). Recently, a noninvasive magnetic resonance imaging (MRI)-based $R_2$ relaxometry method (FerriScan) was developed that accurately measures liver Fe concentration (LIC) in a range of diseases (12). LICs >130 μmol/g dry weight are associated with increased risks of liver injury (13), whereas cardiac toxicity occurs when LIC exceeds 270 μmol/g (14). Liver biopsy is rarely performed in subjects with CKD, and retrospective quantitative phlebotomy as a direct measure of total body Fe burden is not possible because of the anemic status of the patients and their consequent limited tolerance of phlebotomy. Given the uncertainties surrounding the accuracy of serum Fe biochemistry as a direct measure of Fe status, $R_2$ relaxometry presents a readily available and validated noninvasive method of measuring LICs *in vivo* to assess the relationship between conventional biochemical markers of Fe stores, intravenous Fe therapy, and liver Fe load in CKD patients. The aims of this study were to (1) prospectively evaluate the effects of a single high-dose Fe infusion on LIC in Fe-deficient CKD stage 3 to 5 patients who had previously not received parenteral Fe and (2) characterize the extent of hepatic Fe overload in hemodialysis patients receiving regular intravenous Fe replacement.

**Materials and Methods**

**Prospective Study in Iron-Deficient CKD Subjects** The primary objective of this substudy was to prospectively assess LIC before and after a single high-dose intravenous Fe infusion and to compare LIC with conventional markers of Fe stores. This cohort comprised patients with stage 3 to 5 CKD (estimated GFR, 10–60 ml/min per 1.73 m$^2$) who were scheduled to receive 10 to 20 mg/kg of intravenous Fe-polymaltose (Ferrosig, Sigma Pharmaceuticals, Croydon, Australia) if they had (1) anemia, using World Health Organization definitions of hemoglobin <120 g/L for women and <130 g/L for men (15), (2) Fe deficiency (TSAT < 20% and/or serum ferritin < 100 μg/L), (3) not previously received parenteral Fe, and (4) no contraindications to MRI. The first MRI assessment of LIC was scheduled on the day of intravenous Fe therapy, just before infusion; postinfusion scans and biochemical studies were obtained at weeks 2 and 12. The primary endpoint was the change in mean LIC from baseline to follow-up. A sample size of 20 was expected to provide 80% power to detect a change in mean LIC over time of 7.7 μmol/g dry weight, equating to an effect size (Cohen's d) of approximately 0.66, using a paired t test with a two-tailed α level of 0.05 (16). This was based on an SD of LIC of 11.7 μmol/g reported in hemodialysis patients (17); the calculation is sketched below.
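This sample-size calculation can be reproduced with standard power software; below is a sketch using statsmodels (our choice of package, not the study's), under the stated assumptions of a 7.7 μmol/g mean change and an SD of 11.7 μmol/g.

```python
from statsmodels.stats.power import TTestPower

effect_size = 7.7 / 11.7  # Cohen's d for the paired t test, ~0.66
n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                             power=0.80, alternative="two-sided")
print(round(effect_size, 2), round(n))  # ~0.66 and ~20 subjects, as reported
```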
**Cross-Sectional Study in Hemodialysis Subjects** The primary objective of this substudy was to assess whether LIC in hemodialysis patients with serum ferritin levels above the upper recommended guidelines was best predicted by ferritin, TSAT, or cumulative Fe dose. This cohort comprised chronic hemodialysis patients treated between September 2009 and February 2010. The inclusion criteria were (1) maintenance hemodialysis of $\geq$12 months, (2) native arteriovenous fistula, (3) parenteral Fe therapy (Ferrosig, usually 50 to 200 mg monthly) over the past 12 months, (4) serum ferritin >500 μg/L on two consecutive occasions 3 months apart, (5) full history of total parenteral Fe treatment since start of dialysis, (6) alcohol consumption of less than two standard drinks per day, and (7) exclusion of significant liver disease. Serologic testing was used to exclude chronic viral hepatitis B or C infection. Serum Fe markers and LIC by $R_2$ relaxometry were measured at least 2 weeks after the last intravenous Fe dose. These studies were approved by the Hospital Ethics Committee, and all subjects consented to participation in the study.

**MRI** MRI was conducted on a 1.5-T whole body imaging unit (Siemens MAGNETOM Vision Plus) using a recently described method (12). This method measures the mean liver proton transverse relaxation rate ($R_2$), which has a high sensitivity and specificity for prediction of LIC (12). Images were acquired in partial Fourier mode with a multislice single-spin-echo pulse sequence, with pulse repetition time = 2500 ms, spin-echo times = 6, 9, 12, 15, and 18 ms, and slice thickness = 5 mm. All spin-echo sequences were performed using fixed gain control. Subjects were scanned together with a bag of normal saline, which acted as a long-T2 standard to enable correction of any gain drift between different image acquisitions. Subjects were positioned to locate the liver central to the chest coil. Eleven slices were collected, with a 5 mm gap between slices. $R_2$ values were calculated throughout the largest cross-section of the liver by curve fitting the equation for the bi-exponential decay in transverse magnetization after a single-spin-echo pulse sequence to the voxel intensity data as a function of echo time, with radiofrequency field intensity-weighted spin density projection (18); this fitting step is sketched schematically below. Mean LIC was calculated from mean $R_2$ values as previously reported (12). The accepted normal range of LIC in healthy subjects is <30 μmol/g (Figure 1) (19). Serum Fe and ferritin concentrations and TSAT were measured or derived using standard methods.
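The curve-fitting step can be illustrated as a simple nonlinear least-squares fit of a bi-exponential decay $S(TE) = S_0[f e^{-R_{2,\text{slow}} TE} + (1-f) e^{-R_{2,\text{fast}} TE}]$ to voxel intensities across the five echo times. The intensities below are synthetic, and this sketch is not the FerriScan algorithm (which also applies the spin-density weighting cited above).

```python
import numpy as np
from scipy.optimize import curve_fit

TE = np.array([6.0, 9.0, 12.0, 15.0, 18.0]) * 1e-3  # echo times (s), as reported

def biexp(te, s0, f, r2_slow, r2_fast):
    """Bi-exponential transverse decay with two R2 pools (s^-1)."""
    return s0 * (f * np.exp(-r2_slow * te) + (1 - f) * np.exp(-r2_fast * te))

# Synthetic voxel intensities generated from s0=1000, f=0.6,
# r2_slow=20 s^-1, r2_fast=150 s^-1 (rounded).
signal = np.array([694.8, 604.9, 538.1, 486.6, 445.5])

p0 = [1000.0, 0.5, 30.0, 120.0]  # assumed starting values for the fit
params, _ = curve_fit(biexp, TE, signal, p0=p0, maxfev=20000)
print(params)  # recovers approximately the generating parameter values
```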
**Statistical Analyses**

Statistical analysis was performed with the Systat software package, version 10 (SPSS, Chicago, IL). Results are presented as mean ± SD for continuous variables and as number and percentage for categorical variables. Differences between subjects were tested by the nonparametric Mann-Whitney $U$ test for continuous data and the Fisher exact test for categorical data. ANOVA was used to compare within-group changes over time in the first substudy. Non-CKD patients with secondary Fe overload and LIC >60 μmol/g are considered eligible for chelation therapy, whereas those with LIC >130 μmol/g are at increased risk of liver injury (13). Thus, these values were used as threshold levels for determining the number of dialysis patients above and below these limits. All $P$ values quoted are two tailed.

**Results**

**Prospective Study in Iron-Deficient CKD Subjects**

Twenty-five CKD patients (stage 3, $n = 6$; stage 4, $n = 14$; stage 5, $n = 5$) qualified for intravenous Fe infusion according to the hospital protocol and completed the three follow-up visits. Baseline characteristics of these patients are summarized in Table 1.

**Figure 1.** | Liver $R_2$ images and distributions for four subjects with different degrees of iron (Fe) overload and pathologic conditions: (A) healthy control, (B) hereditary hemochromatosis, (C) ESKD patient with >6 g cumulative Fe dose, and (D) Fe-deficient CKD patient before and 2 and 12 weeks after 1 g intravenous Fe (top to bottom). The liver $R_2$ images are superimposed on standard spin-echo images for registration purposes. To enable visualization of the heterogeneity of $R_2$ within each liver, the color scale is adjusted for each image such that zero corresponds to a voxel $R_2$ of zero, whereas the maximum of the color scale is scaled to the maximum $R_2$ value within the liver.

Table 1. Baseline characteristics of the iron-deficient CKD cohort and the hyperferritinemic hemodialysis (HD) cohort

| Cohort | CKD | HD |
|--------|-----|----|
| N | 25 | 15 |
| Age (years) | 65 ± 15 | 61 ± 12 |
| Gender (male/female) | 17/8 | 10/5 |
| Serum creatinine (μmol/L) | 296 ± 130 | — |
| Hemoglobin (g/L) | 107 ± 8 | 116 ± 9 |
| Transferrin saturation (%) | 15 ± 6 | 31 ± 10 |
| Ferritin (μg/L) | 67 ± 56 | 782 ± 170 |
| CRP (mg/L) | 1.2 ± 2.6 | 4.9 ± 4.1 |
| Transfusions (N) | 0 | 3.8 ± 2.7 |
| Time on dialysis (days) | — | 899 ± 353 |
| Cumulative Fe dose (mg) | — | 6560 ± 3098 |
| Mean monthly Fe dose (mg) | — | 217 ± 87 |
| Weekly erythropoietin dose (U/wk) | — | 7870 ± 4360 |
| Liver Fe concentration (μmol/g) | 20.6 ± 7.9 | 81.2 ± 58.3 |

The dose of intravenous Fe ranged from 1000 to 1500 mg, and the average Fe dose per body weight was 15 ± 3 mg/kg. At week 2, hemoglobin averaged 113 ± 12 g/L ($P$ = not significant versus baseline), TSAT averaged 31 ± 12% ($P < 0.00001$ versus baseline), and ferritin averaged 563 ± 282 μg/L ($P < 0.0001$ versus baseline); at week 12, mean hemoglobin was 120 ± 13 g/L ($P < 0.00001$ versus baseline), TSAT was 25 ± 9% ($P < 0.0001$ versus baseline), and ferritin was 299 ± 221 μg/L ($P < 0.001$ versus baseline; Figure 2). LIC increased significantly to 46.1 ± 15.6 μmol/g at week 2 and 33.7 ± 11.3 μmol/g at week 12 (Figure 2). A comparison of LIC at week 2 in subjects receiving Fe amounts below or above the median dose showed that LIC remained ≤30 μmol/g in 25% of subjects given <14.5 mg/kg Fe, whereas all subjects receiving ≥14.5 mg/kg of Fe had LIC >30 μmol/g. The mean increase in LIC from baseline was 25.4 ± 15.6 μmol/g (152 ± 126%) at week 2 and 13.0 ± 11.2 μmol/g (80 ± 89%) at week 12. The change in TSAT from baseline tended to show a dose dependency at week 2 but not at week 12, whereas there was no dose dependency for changes in serum ferritin at either week 2 or week 12. The increase in LIC showed a clear dependence on the administered Fe dose at both weeks 2 and 12 (Figure 3).

**Figure 2.** | Baseline and follow-up serum ferritin, transferrin saturation, and liver iron concentration in 25 iron-deficient CKD patients not yet on dialysis after single high-dose (mean, 15 ± 3 mg/kg body weight) parenteral Fe administration. Boxes represent medians with interquartile ranges, and bars represent minimum and maximum values. $P$ values for differences versus baseline are reported.
**Figure 3.** | Changes, compared with baseline, in serum ferritin, transferrin saturation, and liver iron concentration in 25 iron-deficient CKD patients not yet on dialysis in relation to the administered Fe dose. Individual changes and regression lines are shown as open circles with a solid line at week 2 and as crosses with a dashed line at week 12.

**Cross-Sectional Study in Hemodialysis Subjects**

Twenty patients were screened for participation in the study, but five failed to undergo $R_2$ relaxometry (claustrophobia, $n = 1$; pacemaker, $n = 1$; canceled two appointments, $n = 2$; malignancy, $n = 1$). Patients were dialyzed three times weekly for 272 ± 24 minutes on a Fresenius 4008 machine using FX80 high-flux dialyzers at an average blood flow of 329 ± 25 ml/min. Baseline characteristics of these patients are summarized in Table 1. LIC correlated with the cumulative Fe dose ($R^2 = 0.44$, $P < 0.01$) and the duration of dialysis ($R^2 = 0.39$, $P < 0.05$) but not with current ferritin or TSAT (Figure 4). The cumulative Fe dose remained a significant independent predictor of LIC ($R^2 = 0.69$, $P < 0.05$) in a multiple regression model that included C-reactive protein (CRP), ferritin, TSAT, current and cumulative Fe dose, and duration of dialysis. Nine patients (60%) had LIC >60 μmol/g. All seven subjects with ≥6000 mg cumulative Fe dose had LIC >60 μmol/g, compared with only two of eight subjects with cumulative Fe dose <6000 mg. Two of seven subjects with ≥6000 mg cumulative Fe dose had LIC >130 μmol/g (Figure 1).
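For readers who want to reproduce this style of analysis, the sketch below fits a simple linear model of LIC on cumulative Fe dose with statsmodels. The numbers are synthetic placeholders, not study data, and the full model above also included CRP, ferritin, TSAT, current Fe dose, and duration of dialysis.

```python
# Illustrative regression of LIC on cumulative Fe dose (synthetic data only;
# the study's multiple regression also included CRP, ferritin, TSAT, current
# Fe dose, and duration of dialysis).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
cum_dose_mg = rng.uniform(2000, 12000, size=15)                # cumulative Fe
lic = 10.0 + 0.009 * cum_dose_mg + rng.normal(0, 20, size=15)  # LIC, umol/g

X = sm.add_constant(cum_dose_mg)
fit = sm.OLS(lic, X).fit()
print(f"R^2 = {fit.rsquared:.2f}, P(dose) = {fit.pvalues[1]:.4f}")
```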
**Discussion**

Our study showed that, in predialysis Fe-deficient CKD patients and in ESKD patients on hemodialysis, intravenous Fe therapy can cause significant increases in LIC that are not related to or predicted by conventional serum markers of Fe metabolism such as TSAT and ferritin. LIC was assessed using a noninvasive MRI-based technique using tissue proton transverse relaxation rates ($R_2$). The accuracy and precision of this technology have been shown in several previous studies comparing $R_2$-LIC with liver biopsy LIC in non-CKD patients (12,20–22).

In the first substudy, we prospectively assessed the response to a single, high-dose Fe infusion in Fe-deficient predialysis patients. Clinically, the primary aim of this therapy is to correct anemia, and in our subjects, hemoglobin increased significantly from 107 ± 8 to 120 ± 13 g/L by 12 weeks after Fe administration, as previously reported (6). Compared with baseline, 2 weeks after intravenous Fe there was an 11-fold increase in serum ferritin and a 2-fold increase in TSAT. However, these changes are not expected to reflect Fe overload, because changes in Fe indicators may take up to 14 days to reach steady state after intravenous Fe (23). At 12 weeks, changes in TSAT and serum ferritin compared with baseline were still significant, although all subjects had TSAT <50%, and all but one subject had serum ferritin <500 μg/L, as recommended by guidelines (4,5). On the other hand, 56% of the predialysis CKD subjects had LIC in excess of the upper limit of normal (<30 μmol/g) 12 weeks after high-dose parenteral Fe, although none had levels >60 μmol/g. In our cohort, only changes in LIC, but not in serum ferritin or TSAT, showed a dependency on the administered Fe dose. These findings suggest that, in Fe-deficient CKD patients, a single, high-dose Fe infusion leads only to a transient liver Fe loading that is dependent on the amount of Fe administered. Although increases in LIC in predialysis CKD patients after Fe infusion are smaller in magnitude and transient, and thus seem safe, repeated infusions at intervals of <12 weeks could result in a significant Fe load.

In the second substudy, we deliberately selected CKD patients on dialysis with serum ferritin levels above what is considered to be the safety threshold (4,5). Thus, one would expect the majority of these patients to show significantly increased LIC and serum ferritin to predict LIC. Our findings suggest that this may occur only in hemodialysis patients with persistently increased serum ferritin levels who have received considerable amounts of parenteral Fe. Interestingly, we found no correlation between serum Fe markers and LIC, but a strong correlation between LIC and both the cumulative Fe dose received and the time elapsed since the start of dialysis. This observation suggests that uremia itself may progressively lead to Fe accumulation in tissues over time. Almost 40% of all hemodialysis patients in Australia have "unsafe" serum ferritin levels >500 μg/L (24). The perception of toxicity of therapeutic Fe in CKD patients is based largely on a limited number of observational studies reporting an increased infection rate in Fe-deficient subjects on Fe replacement (25) or in CKD patients with high ferritin concentrations (26), and increased cardiovascular disease and death in dialysis patients (27,28). Serum ferritin was shown to correlate with LIC assessed noninvasively by a superconducting quantum interference device (17). Although our study is limited by its small sample size and by the selection of hemodialysis patients with elevated serum ferritin and variable duration of dialysis, it is consistent with previous postmortem studies showing a lack of association between excessive tissue Fe and high serum ferritin but a strong association with duration of dialysis >3 years and a cumulative Fe dose between 6 and 25 g (29). Our results do not corroborate the findings of Canavese et al. (17), and it is possible that differences in the timing of parenteral Fe administration relative to the timing of the TSAT and ferritin assays (23) may account for the observed differences between studies. A lack of correlation between serum ferritin levels and LIC has also been reported in other diseases, such as the common liver disorder nonalcoholic fatty liver disease (19). Although discrepancies between LIC and ferritin levels could be influenced by the presence of overt or subclinical inflammatory states, this seems unlikely because the mean CRP in our study (4.9 ± 4.1 mg/L) did not differ from the mean CRP in the study of Canavese et al. (6.2 ± 8.3 mg/L). Proinflammatory cytokines such as IL-1β, IL-6, and TNF-α are known to increase the synthesis of ferritin through increased translation of ferritin mRNA (30,31). It is possible that higher amounts of ferritin may trap more body Fe and protect the individual against worsening infection, because free Fe is believed to enhance the formation of free oxygen radicals, which are the mediators of cell damage in infection-associated inflammation. Hence, inflammation-induced hyperferritinemia may result in a so-called "functional Fe deficiency," which may be useful in "acute" inflammation by containing Fe in reticuloendothelial sites but harmful in "chronic" inflammation by leading to anemia of chronic disease.
This hypothesis is supported by recent findings of increased circulating IL-6 levels that are reduced after treatment with pentoxifylline in patients with stage 4 to 5 CKD (32). Hemodialysis treatment can precipitate a recurrent inflammatory state (33,34), and hemodialysis patients without native vascular access are notably more prone to recurrent infection (35). It is therefore not surprising that, in our hemodialysis cohort, serum ferritin does not provide an accurate assessment of body Fe stores. Our finding of an inverse association between LIC and TSAT suggests that the latter is also not helpful in the setting of hemodialysis. Indeed, TSAT does not correlate with LIC even in the context of inherited Fe overload disorders (36). It is possible that substantial Fe overload, similar to that observed in chronically transfused patients, is occurring in CKD subjects who are receiving excessive Fe therapy. Indeed, 60% of our hemodialysis cohort had LIC >60 μmol/g, a threshold above which chelation therapy may be recommended in non-CKD patients with secondary Fe overload disorders (14), and 13% had LIC >130 μmol/g, a level associated with increased risks of liver injury and fibrosis in hemochromatosis (37) (Figure 1). Whether Fe overload in hemodialysis patients leads to end-organ damage involving the heart, liver, or brain remains unknown, and thus we cannot currently advocate chelation therapy in dialysis patients with LIC >60 μmol/g.

The most objective parameters of Fe overload are those reliant on direct measurement of organ Fe concentration. Although liver biopsy provides quantitative measurement of LIC, it is invasive, subject to sampling error, and not suited for use (nor likely to be accepted) in subjects who already have significant comorbidity. Thus, we did not perform routine liver biopsies for the purposes of this study, because there was no clinical hepatologic indication for the procedure in our study subjects. The most accurate noninvasive methods are reliant on MRI (19,22). In nonuremic subjects with various degrees of Fe overload, an excellent correlation between $R_2$-LIC and LIC quantified using liver biopsy specimens has been shown (12,20,21,38). It could be argued that this relationship remains to be shown in uremic patients because of the lack of liver biopsy studies. Nevertheless, MRI-based $R_2$ relaxometry is solely dependent on the physical properties of Fe and its tissue concentration, whereas the effects of tissue composition or milieu are negligible. We developed and validated MRI-based $R_2$ relaxometry methods (FerriScan) that are able to accurately quantitate hepatic Fe concentration in CKD patients and that can be undertaken in 10 minutes. Clearly, we do not advocate that $R_2$-LIC be routinely used in all dialysis subjects receiving erythropoiesis-stimulating agents and Fe therapy; however, selected cases may benefit from $R_2$-LIC measurement to determine whether Fe overload is present and whether Fe administration should be withheld. We suggest that this is most likely to yield clinically meaningful results in hyperferritinemic hemodialysis patients who have received large amounts of parenteral Fe (>6000 mg) and are currently requiring regular Fe administration for the management of their anemia. The issue of how to treat Fe overload in this setting remains problematic, with no proven therapy. Improved knowledge of the key regulators of Fe metabolism should result in further study and validation of new therapies for the treatment of Fe overload in CKD.
**Acknowledgments**

This work was supported by a Nephrology Research Grant, Roche, Australia. J.K.O. is the recipient of a National Health and Medical Research Council of Australia Practitioner Fellowship.

**Disclosures**

T.G.S. is a shareholder of Resonance Health Ltd. and is a member of the Board of Directors of Resonance Health Ltd., the company that provides the FerriScan® service.

**References**

1. Carter RA, Hawkins JB, Robinson BH: Fe metabolism in the anaemia of chronic renal failure. Effects of dialysis and of parenteral iron. *BMJ* 3: 206–210, 1969
2. Milman N: Fe absorption measured by whole body counting and the relation to marrow Fe stores in chronic uremia. *Clin Nephrol* 17: 77–81, 1982
3. Kalantar-Zadeh K, Rodriguez RA, Humphreys MH: Association between serum ferritin and measures of inflammation, nutrition and Fe in haemodialysis patients. *Nephrol Dial Transplant* 19: 141–149, 2004
4. Roger S: The CARI guidelines. Haematological targets. Iron. *Nephrology (Carlton)* 11[Suppl 1]: S217–S229, 2006
5. KDOQI Clinical Practice Guidelines and Clinical Practice Recommendations for Anemia in Chronic Kidney Disease. *Am J Kidney Dis* 47: S11–S145, 2006
6. Charytan C, Qunibi W, Bailie GR: Comparison of intravenous Fe sucrose to oral Fe in the treatment of anemic patients with chronic kidney disease not on dialysis. *Nephron Clin Pract* 100: c55–c62, 2005
7. Auerbach M, Witt D, Toler W, Fierstein M, Lerner RG, Ballard H: Clinical use of the total dose intravenous infusion of Fe dextran. *J Lab Clin Med* 111: 566–570, 1988
8. Macdougall IC, Tucker B, Thompson J, Tomson CR, Baker LR, Raine AE: A randomized controlled study of Fe supplementation in patients treated with erythropoietin. *Kidney Int* 50: 1694–1699, 1996
9. Taylor JE, Peat N, Porter C, Morgan AG: Regular low-dose intravenous Fe therapy improves response to erythropoietin in haemodialysis patients. *Nephrol Dial Transplant* 11: 1079–1083, 1996
10. Kalantar-Zadeh K, Lee GH: The fascinating but deceptive ferritin: To measure it or not to measure it in chronic kidney disease? *Clin J Am Soc Nephrol* 1[Suppl 1]: S9–S18, 2006
11. Kalantar-Zadeh K, Regidor DL, McAllister CJ, Michael B, Warnock DG: Time-dependent associations between Fe and mortality in hemodialysis patients. *J Am Soc Nephrol* 16: 3070–3080, 2005
12. St Pierre TG, Clark PR, Chua-anusorn W, Fleming AJ, Jeffrey GP, Olynyk JK, Pootrakul P, Robins E, Lindeman R: Noninvasive measurement and imaging of liver Fe concentrations using proton magnetic resonance. *Blood* 105: 855–861, 2005
13. Cartwright GE, Edwards CQ, Kravitz K, Skolnick M, Amos DB, Johnson A, Buskjaer L: Hereditary hemochromatosis. Phenotypic expression of the disease. *N Engl J Med* 301: 175–179, 1979
14. Olivieri NF, Brittenham GM: Iron-chelating therapy and the treatment of thalassemia. *Blood* 89: 739–761, 1997
15. McLean E, Cogswell M, Egli I, Wojdyla D, de Benoist B: Worldwide prevalence of anaemia, WHO Vitamin and Mineral Nutrition Information System, 1993–2005. *Public Health Nutr* 12: 444–454, 2009
16. Lenth RV: Statistical power calculations. *J Anim Sci* 85: E24–E29, 2007
17. Canavese C, Bergamo D, Ciccone G, Longo F, Fop F, Thea A, Martina G, Piga A: Validation of serum ferritin values by magnetic susceptometry in predicting Fe overload in dialysis patients. *Kidney Int* 65: 1091–1098, 2004
18. St Pierre TG, Clark PR, Chua-Anusorn W: Single spin-echo proton transverse relaxometry of iron-loaded liver. *NMR Biomed* 17: 446–458, 2004
19. Olynyk JK, Gan E, Tan T: Predicting Fe overload in hyperferritinemia. *Clin Gastroenterol Hepatol* 7: 359–362, 2009
20. Papakonstantinou OG, Maris TG, Kostaridou V, Goulianos AD, Koutoulas GK, Kalovidouris AE, Papavassiliou GB, Kordas G, Kattamis C, Vlahos LJ, Papavassiliou CG: Assessment of liver Fe overload by T2-quantitative magnetic resonance imaging: Correlation of T2-QMRI measurements with serum ferritin concentration and histologic grading of siderosis. *Magn Reson Imaging* 13: 967–977, 1995
21. Clark PR, St Pierre TG: Quantitative mapping of transverse relaxivity (1/T2) in hepatic Fe overload: A single spin-echo imaging methodology. *Magn Reson Imaging* 18: 431–438, 2000
22. Clark PR, Chua-Anusorn W, St Pierre TG: Proton transverse relaxation rate (R2) images of liver tissue: Mapping local tissue Fe concentrations with MRI. *Magn Reson Med* 49: 572–575, 2003
23. Van Wyck DB, Roppolo M, Martinez CO, Mazey RM, McMurray S: A randomized, controlled trial comparing IV Fe sucrose to oral Fe in anemic patients with nondialysis-dependent CKD. *Kidney Int* 68: 2846–2856, 2005
24. Polkinghorne K, McDonald S, Excell L, Livingston B, Dent H: Ferritin and transferrin saturation, 2009 ANZDATA Registry Annual Data Report. Available at: http://www.anzdata.org.au/anzdata/AnzdataReport/32ndReport/Ch05.pdf. Accessed August 27, 2010
25. Murray MJ, Murray AB, Murray MB, Murray CJ: The adverse effect of Fe repletion on the course of certain infections. *BMJ* 2: 1113–1115, 1978
26. Seifert A, von Herrath D, Schaefer K: Fe overload, but not treatment with desferrioxamine, favours the development of septicemia in patients on maintenance hemodialysis. *Q J Med* 65: 1015–1024, 1987
27. Bregman H, Gelland MC: Fe overload in patients on maintenance hemodialysis. *Int J Artif Organs* 4: 56–57, 1981
28. Drüeke T, Witko-Sarsat V, Massy Z, Descamps-Latscha B, Guerin AP, Marchais SJ, Gausson V, London GM: Fe therapy, advanced oxidation protein products, and carotid artery intima-media thickness in end-stage renal disease. *Circulation* 106: 2212–2217, 2002
29. Gokal R, Millard PR, Weatherall DJ, Callender ST, Ledingham JG, Oliver DO: Fe metabolism in haemodialysis patients. A study of the management of Fe therapy and overload. *Q J Med* 48: 369–391, 1979
30. Rogers J, Lacroix L, Durmowitz G, Kasschau K, Andriotakis I, Bridges KR: The role of cytokines in the regulation of ferritin expression. *Adv Exp Med Biol* 356: 127–132, 1994
31. Harrison PM, Arosio P: The ferritins: Molecular properties, Fe storage function and cellular regulation. *Biochim Biophys Acta* 1275: 161–203, 1996
32. Ferrari P, Mallon D, Trinder D, Olynyk JK: Pentoxifylline improves haemoglobin and interleukin-6 levels in chronic kidney disease. *Nephrology* 15: 344–349, 2010
33. Bossola M, Sanguinetti M, Scribano D, Zuppi C, Giungi S, Luciani G, Torelli R, Posteraro B, Fadda G, Tazza L: Circulating bacterial-derived DNA fragments and markers of inflammation in chronic hemodialysis patients. *Clin J Am Soc Nephrol* 4: 379–385, 2009
34. Samouilidou EC, Grapsa EJ, Kakavas I, Lagouranis A, Agrogiannis B: Oxidative stress markers and C-reactive protein in end-stage renal failure patients on dialysis. *Int Urol Nephrol* 35: 393–397, 2003
35. Xue JL, Dahl D, Ebben JP, Collins AJ: The association of initial hemodialysis access type with mortality outcomes in elderly Medicare ESRD patients. *Am J Kidney Dis* 42: 1013–1019, 2003
36. Bassett ML, Halliday JW, Ferris RA, Powell LW: Diagnosis of hemochromatosis in young subjects: Predictive accuracy of biochemical screening tests. *Gastroenterology* 87: 628–633, 1984
37. Brittenham GM, Griffith PM, Nienhuis AW, McLaren CE, Young NS, Tucker EE, Allen CJ, Farrell DE, Harris JW: Efficacy of deferoxamine in preventing complications of Fe overload in patients with thalassemia major. *N Engl J Med* 331: 567–573, 1994
38. Wood JC, Enriquez C, Ghugre N, Tyzka JM, Carson S, Nelson MD, Coates TD: MRI R2 and R2* mapping accurately estimates hepatic Fe concentration in transfusion-dependent thalassemia and sickle cell disease patients. *Blood* 106: 1460–1465, 2005

**Received:** May 13, 2010. **Accepted:** July 23, 2010.

Published online ahead of print. Publication date available at www.cjasn.org.
**Abstract**

Double R Model is a computational psycholinguistic model of natural language understanding founded on the linguistic principles of Cognitive Linguistics and implemented using the Atomic Components of Thought – Rational (ACT-R) cognitive architecture and modeling environment. Double R Grammar is the Cognitive Linguistic theory underlying Double R Model. In Double R Grammar, the focus is on the representation and integration of referential and relational meaning—two key dimensions of meaning that get grammatically encoded. Double R Process is the psycholinguistic theory of language processing underlying Double R Model. Double R Process is a highly interactive theory of language processing which eschews a separate syntactic analysis feeding a semantic interpretation component in favor of a direct interpretation of the referential and relational meaning of input texts. Double R Model is intended to validate the representation and processing commitments of Double R Grammar and Double R Process and to form the basis for the development of large-scale, functional natural language understanding systems.

**Introduction**

Double R Model (i.e. Referential and Relational Model) is a computational psycholinguistic model of natural language understanding founded on the linguistic principles of Cognitive Linguistics (Langacker, 1987, 1991; Lakoff, 1988; Talmy, 2003) and implemented using the Atomic Components of Thought – Rational (ACT-R) cognitive architecture and modeling environment (Anderson & Lebiere, 1998). Double R Grammar is the Cognitive Linguistic theory underlying Double R Model. In Double R Grammar, the focus is on the representation and integration of referential and relational meaning—two key dimensions of meaning that get grammatically encoded. Double R Process is the psycholinguistic theory of language processing underlying Double R Model. Double R Process is a highly interactive theory of language processing which eschews a separate syntactic analysis feeding a semantic interpretation component in favor of a direct interpretation of the referential and relational meaning of input texts. Double R Model is intended to validate the representation and processing commitments of Double R Grammar and Double R Process, together called Double R Theory, and to form the basis for the development of large-scale, functional natural language understanding systems. After introducing the basic theoretical commitments of Double R Model, this paper discusses some of the representational and processing commitments in more detail. It concludes with a processing example that demonstrates a subset of these commitments.

**Double R Grammar**

Double R Grammar is the Cognitive Linguistic theory underlying Double R Model. In Cognitive Linguistics, all grammatical elements have a semantic basis, including parts of speech, grammatical markers, phrases and clauses. Our understanding of language is embodied and based on experience in the world (Lakoff & Johnson, 1980). Categorization is a key element of linguistic knowledge, and categories are seldom absolute—exhibiting, instead, effects of prototypicality, basic level categories (Rosch, 1978), family resemblance (Wittgenstein, 1953), fuzzy boundaries, radial structure and the like (Lakoff, 1987). Our linguistic capabilities derive from basic cognitive capabilities—there is no autonomous syntactic component (Chomsky, 1957, 1965) separate from the rest of cognition. Knowledge of language is for the most part learned and not innate. Abstract linguistic categories (e.g.
noun, verb, nominal, clause) are learned on the basis of experience with multiple instances of words and expressions which are members of these categories, with the categories being abstracted and generalized from experience. Also learned are schemas which abstract away from the relationships between linguistic categories. Over the course of a lifetime, humans acquire a large stock of schemas at multiple levels of abstraction and generalization, representing knowledge of language and supporting language comprehension. These schemas constitute what might be called **grammatical semantics** in contrast to the **lexical semantics** of individual lexical items, although the schemas are, for the most part, associated with specific lexical items.

Two key dimensions of meaning that get grammatically encoded are referential meaning and relational meaning. Double R Grammar is focused on the representation and integration of these two dimensions of meaning within the wider scope of Cognitive Linguistics. Consider the expressions

1. The book on the table
2. The book is on the table

These two expressions have essentially the same relational meaning. They both express the relation "on" existing between "a book" and "a table". However, their referential meaning is significantly different. The first expression, as a whole, refers to an object and is called an **object referring expression** in Double R Grammar. In referring to an object, the first expression uses the determiner "the" to **specify** that the object is salient in the context of use of the expression (and may have previously been referred to). The first expression also uses the word "book" to indicate the type of object being referred to, with "book" functioning as the **head** of the expression. Further, the phrase "on the table" refers to a location with respect to which the object can be identified and functions as a **modifier** in the expression. In referring to a location, the expression "on the table" refers to a second object "the table" and indicates the location of the first object with respect to the second object. Within the modifying expression, the relation "on" functions as the **relational head** with the object referring expression "the table" functioning as a **complement**. In the first expression the relational meaning of "on" is subordinated to referential meaning with the modifying function of "on the table" dominating the relational meaning of "on". That is, although "on" is the relational head of the prepositional phrase "on the table", it is not the head of the overall expression and does not determine the semantic type of that expression. The second expression refers to a situation and is called a **situation referring expression** in Double R Grammar. The second expression uses the auxiliary "is" to provide a temporal specification for the situation, fulfilling a referential function similar to that of the determiner "the" in "the book" and "the table". The relational meaning of the second expression is about "being on" and not just "being", with "on" functioning as the relational head of the situation referring expression. The relational head of a situation referring expression is called a **predicate** in Double R Grammar—reflecting the assertional function of the relational head. Note that "on" in the first expression is not functioning as a predicate, since it is presupposed and not asserted.
That is, relational heads of modifying expressions are not predicates in Double R Grammar, they are (modifying) **functions**. In the second expression, the object referring expression "the book" functions as the subject (argument) of "being on" with "the table" functioning as the object (argument). Referentially, there is also a reference to a location "on the table", which competes with the expression of the relational meaning of "on" as reflected in the difference between:

3. What is the book on?
4. Where is the book?

where 3 highlights the relation "on" in asking about the object of that relation and 4 highlights the reference to a location using "where" to do so.

The terms **specifier**, **head**, **modifier** and **complement** are borrowed from X-Bar Theory (Chomsky, 1970). It is acknowledged that X-Bar Theory captures an important grammatical generalization, but X-Bar Theory is in need of semantic motivation (Ball, 2003a). In Double R Grammar, these terms are used to express referential functions that combine to form object, location and situation referring expressions (among others). Referring expressions in turn function as arguments in relational structures (and complements in corresponding referential structures). The joint encoding of referential and relational meaning leads to representations that simultaneously reflect both these important dimensions of meaning, with trade-offs occurring where the encoding of referential and relational meaning compete for expression. The specifier determines the referential type of a referring expression whereas the head determines the semantic type of the expression. Consider the referring expression

5. The kick

in which the specifier "the" determines the expression to be an object referring expression, whereas the word "kick" determines the expression to be a type of action (called a **Type Specification** in Langacker, 1991). In this expression, the specifier has the effect of objectifying the action expressed by "kick" and allowing it to be referred to as though it were an object. Note that since the inherent meaning of "kick" is not affected (only its function), there is no need to assume that the part of speech of "kick" is a noun instead of a verb in this expression. And if we allow verbs (especially action verbs) to function as heads of object referring expressions (i.e. noun phrases), then one of the primary syntactic arguments against the meaning-based definition of parts of speech is nullified (Ball, 2003b).

**Double R Process**

Double R Process is the psycholinguistic theory underlying Double R Model. It is a highly interactive theory of language processing. Representations of referential and relational meaning are constructed directly from input texts. There is no separate syntactic analysis that feeds a semantic interpretation component. The processing mechanism is driven by the input text in a largely bottom-up, lexically driven manner. There is no top-down assumption that a privileged linguistic constituent like the sentence will occur (contra Townsend & Bever, 2001). There is no phrase structure grammar and no top-down control mechanism. How then are representations of input text constructed? As the text is processed from left to right, schemas corresponding to lexical items are activated. For those lexical items which are relational or referential, these schemas establish expectations which both determine the possible structures and drive the processing mechanism.

A short-term working memory (Kintsch, 1998) is available for storing arguments which have yet to be integrated into a relational or referential structure, partially instantiated relational and referential structures, and completed structures. If a relational or referential entity is encountered which expects to find an argument to its left in the input text, then that argument is assumed to be available in short-term working memory. If the relational or referential entity expects to find an argument to its right in the input text, then the entity is stored in short-term working memory as a partially completed structure and waits for the occurrence of the appropriate argument. When that argument is encountered, it is instantiated into the stored relational or referential structure. Instantiated arguments are not separately available in short-term working memory. This keeps the number of separate linguistic units which must be maintained in short-term working memory to a minimum.
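A toy sketch of this integration discipline is given below. It assumes a two-way split of lexical items into referring and relational kinds, and it illustrates only the bookkeeping, not the model's actual productions.

```python
# Toy illustration (not the model's productions) of the integration strategy:
# a relational item takes its left argument from working memory immediately
# and waits in working memory as a partial structure for its right argument.
wm = []  # short-term working memory: completed and partial structures

def process(word, lexicon):
    kind = lexicon[word]["kind"]
    if kind == "referring":
        # A stored partial relation expecting a right argument consumes it.
        for item in reversed(wm):
            if item.get("kind") == "relation" and item["obj"] is None:
                item["obj"] = {"kind": "referring", "head": word}
                return  # instantiated arguments are not separately stored
        wm.append({"kind": "referring", "head": word})
    elif kind == "relation":
        rel = {"kind": "relation", "head": word, "subj": None, "obj": None}
        if wm and wm[-1]["kind"] == "referring":
            rel["subj"] = wm.pop()  # left argument assumed present in WM
        wm.append(rel)              # partial structure awaits its object

lexicon = {"book": {"kind": "referring"}, "table": {"kind": "referring"},
           "on": {"kind": "relation"}}
for w in ["book", "on", "table"]:
    process(w, lexicon)
print(wm)  # a single 'on' relation with 'book' as subj and 'table' as obj
```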
**ACT-R**

ACT-R is a cognitive architecture and modeling environment for the development of computational cognitive models. It is a psychologically validated cognitive architecture which has been used extensively in the modeling of higher-level cognitive processes (see the ACT-R web site for an extensive list of models and publications). ACT-R includes symbolic **production** and declarative **memory** systems integrated with subsymbolic **production selection** and **spreading activation** and **decay** mechanisms. Production selection involves the parallel matching of the left-hand side of all productions against a collection of **buffers** (e.g. goal buffer, retrieval buffer, visual buffer, auditory buffer) which contain the active contents of memory and perception. Production execution is a serial process—only one production is executed at a time. The parallel spreading activation and decay mechanism determines which declarative memory chunk is put into the retrieval buffer for comparison against productions. The combination of symbolic and subsymbolic mechanisms makes ACT-R a hybrid system of cognition. The **noise** parameter used by these computational mechanisms adds stochasticity to the system. ACT-R supports single **inheritance** of declarative memory chunks and limited, variable-based **pattern matching** (including a **partial-matching** capability). ACT-R incorporates **learning** mechanisms for learning both declarative and procedural knowledge. Version 5 of ACT-R (Anderson et al., 2002) adds a **perceptual-motor component** supporting the development of embodied cognitive models. With the addition of the perceptual-motor component, and the use of buffers as the interface between various cognitive modules (e.g. vision module, auditory module, production system, declarative memory), ACT-R is referred to as an "integrated theory of the mind".

**Double R Model**

Double R Model is the computational implementation of Double R Theory in ACT-R. Double R Model is currently capable of processing an interesting range of grammatical constructions including: 1) intransitive, transitive and ditransitive verbs; 2) verbs taking clausal complements; 3) predicate nominals, predicate adjectives and predicate prepositions; 4) conjunctions of numerous grammatical types; 5) modification by attributive adjectives, prepositional phrases and adverbs, etc.
Double R Model accepts as input as little as a single word or as much as an entire chunk of discourse—using the perceptual component of ACT-R to read words from a text window. Unrecognized words are simply ignored. Unrecognized grammatical forms result in partially analyzed text, not failure. The output of the model is a collection of declarative memory chunks that represent the referential and relational meaning of the input text. Although Double R Model is essentially a computational psycholinguistic model, it is intended to be used as the basis for the development of large-scale, functional language comprehension systems, and the current coverage of the model will need to be extended significantly to support that objective.

**Inheritance vs. Unification**

Unification allows for the unbounded, recursive matching of two logical representations. It is an extremely powerful pattern matching technique used in many language processing systems based on Prolog. Unfortunately, it is psychologically too powerful. For example, the following two logical expressions can be unified:

\[ p(a,B,c(d,e,f(g,h(i,j),K),l)) \\ p(X,b,c(Y,e,f(Z,T,U),l)) \]

where capitalized letters are variables and lowercase letters are constants. Humans are unlikely to be capable of performing such unifications, consciously or otherwise, without significant effort and an external scratch pad (i.e. short-term working memory does not have the capacity to retain more than a few variable bindings simultaneously). On the other hand, although extremely powerful, unification does not support the matching of types to subtypes. Thus, if we have a verb type with intransitive and transitive verb subtypes, unification cannot unify a chunk of type verb with a chunk of type intransitive verb or transitive verb. Unification's inability to match types to subtypes often results in a proliferation of rules (or conditions on rules) to handle the various combinations. For example, the verb type can be variableized and a test for the valid types can be used to constrain the variable (e.g. Verb-Type equal verb or Verb-Type equal intrans-verb or Verb-Type equal trans-verb). With inheritance, a production that checks for a verb type will also match a transitive verb and an intransitive verb type (assuming an appropriate inheritance hierarchy). Humans appear to be able to use types and subtypes in appropriate contexts with little awareness of the transitions. For example, when processing a verb, all verbs (used predicatively) expect to be preceded by a subject, but only transitive verbs expect to be followed by an object. Thus, humans presumably have available a general production that applies to all verbs (or even all predicates), which will look for a subject preceding the verb, and a more specialized production for transitive verbs (or transitive predicates) only, which will look for an object following the verb. Inheritance supports the matching of two representations without requiring the recursive matching of their subparts (unlike unification), so long as the types of the two representations are compatible. Types are essentially an abstraction mechanism which makes it possible to ignore the detailed internal structure of representations when comparing them. For example, once the model has identified an expression as an object referring expression, the model can match the object referring expression against productions without consideration of the internal structure of the object referring expression. Of course, there may be productions that do consider the internal structure, but types are useful here as well. Instead of having to fully elaborate the internal structure, types can be used to partially elaborate that structure. For example, if a production is specifically concerned with object referring expressions headed by a quantifier (e.g. "some" in "some of the books"), the production can check that the head is of the appropriate type, providing a (limited) unification-like capability where needed. In sum, inheritance and limited pattern matching provide a psychologically plausible alternative to a full unification capability.
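The contrast can be made concrete in a few lines of Python, where `isinstance` plays the role of inheritance-based matching; the class names are hypothetical stand-ins for the chunk types discussed here.

```python
# Inheritance-based matching: one general rule covers all verb subtypes,
# while strict type equality (the analogue of unification over types)
# fails for subtypes and would force one rule per subtype.
class Verb: pass
class IntransVerb(Verb): pass
class TransVerb(Verb): pass

def general_verb_rule(chunk):
    return isinstance(chunk, Verb)       # matches verb and all its subtypes

def trans_verb_rule(chunk):
    return isinstance(chunk, TransVerb)  # more specialized: transitive only

v = TransVerb()
print(general_verb_rule(v), trans_verb_rule(v))  # True True
print(type(v) is Verb)                           # strict equality: False
```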
To take advantage of inheritance, Double R Model incorporates a type hierarchy (a tangled hierarchy or lattice, with multiple inheritance, is preferred, but ACT-R currently only supports single inheritance). Representative elements of the top levels of the current hierarchy of types (below `top-type`) are shown below:

```
Lexical-type
    Pronoun
    Proper-noun
    Noun
    Adjective
    Verb
    Preposition
    Adverb
    Determiner
    Quantifier
    Auxiliary
    Negative
Referential-type
    Head
    Specifier
        Object Specifier
        Predicate Specifier
    Modifier
        Object Modifier
        Relation Modifier
    Complement
Referring-expression-type
    What-referring-expression
    Object-referring-expression
    Situation-referring-expression
    Predicate-referring-expression
    Where-referring-expression
        Location-referring-expression
        Direction-referring-expression
    When-referring-expression
    Why-referring-expression
    How-referring-expression
    How-much-referring-expression
Relation-type
    Relation
    Predicate
    Function
    Argument
    Term
```

The more specialized a production is, the more specialized the types of the chunks in the goal and retrieval buffers to which the production matches will need to be. The most general productions match a goal chunk whose type is `top-type` and ignore the retrieval buffer chunk.

**Default Rules**

ACT-R's inheritance mechanism can be combined with the subsymbolic production utility parameter—which influences production selection—to establish default rules. Since all types extend a base type (i.e. `top-type`), using the base type as the value of the goal chunk in a production will cause the production to match any goal chunk. If the production is assigned a production utility value that is lower than competing productions, it will only be selected if no other production matches (ignoring stochasticity). A sample default production is shown below:

```
(p process-default--retrieve-prev-chunk
   =goal>
      ISA top-type
   =context>
      ISA context
      state process
      chunk-stack =chunk-stack
   =chunk-stack>
      ISA chunk-stack-chunk
      this-chunk =chunk
      prev-chunk =prev-chunk
==>
   =context>
      state retrieve-prev-chunk
      chunk-stack =prev-chunk
   +retrieval> =chunk)

(spp process-default--retrieve-prev-chunk :p 0.75)
```

where the parentheses reflect the underlying lisp implementation, `p` identifies a production, `process-default--retrieve-prev-chunk` is the name of the production, `=goal>` identifies the goal chunk, `=context>` identifies a context chunk, `ISA context` is a chunk type, `state` is a chunk slot, `process` is a slot value, `==>` separates the left-hand side from the right-hand side, and variables are preceded by `=` as in `=chunk`. This default production causes the previous chunk to be retrieved from declarative memory (using the `+retrieval>` form) if no other production is selected. To make this production a default production, the production utility parameter is set using the `spp` (set production parameter) command to a value of 0.75 (the default value is 1.0).
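The selection logic can be sketched as follows, ignoring stochasticity as the text does; the dictionaries and the 1.0 utility value are illustrative, with only the 0.75 default utility taken from the production above.

```python
# Illustrative conflict resolution: among the productions whose condition
# matches the goal, the one with the highest utility fires, so a catch-all
# default (utility 0.75 < 1.0) fires only when nothing else matches.
def select_production(goal, productions):
    matching = [p for p in productions if p["matches"](goal)]
    return max(matching, key=lambda p: p["utility"])["name"]

productions = [
    {"name": "process-verb", "utility": 1.0,
     "matches": lambda g: g.get("type") == "verb"},
    {"name": "process-default--retrieve-prev-chunk", "utility": 0.75,
     "matches": lambda g: True},   # top-type condition matches any goal
]
print(select_production({"type": "verb"}, productions))  # process-verb
print(select_production({"type": "noun"}, productions))  # the default fires
```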
**The Context Chunk and Chunk Stack**

The current ACT-R environment provides only the goal and retrieval buffers (and perhaps the visual and aural buffers) to store the partial products of language comprehension. The lack of a stack is particularly constraining, since a stack is the primary data structure for managing the kind of (limited) recursion that occurs in language. There needs to be some mechanism for retrieving previously processed words from short-term working memory in last-in/first-out order during processing (subject to various kinds of error that can occur in the retrieval process). A stack provides this (essentially error-free) capability. It is expected that a capacity to maintain about 5 separate linguistic chunks in short-term working memory is needed to handle most input—supporting at least one level of recursion (and perhaps two for the more gifted). The goal chunk could be adapted for this purpose, except that it is also the basis for the creation of new declarative memory chunks and activation spread, and these architectural needs would conflict. Further, it would be difficult to get the kind of stack-like behavior needed out of the slots in the goal chunk. To overcome these problems, Double R Model introduces a context chunk containing a bounded, circular stack of links to declarative memory. As chunks are stacked in the circular stack, if the number of chunks exceeds the limit of the stack, then new chunks replace the least recently stacked chunks (supporting at least one type of short-term working memory error). The actual number of chunks allowed in the stack is specified by a global parameter. This parameter is settable to reflect individual differences in short-term working memory capacity. Chunks cannot be directly used from the stack. Rather, the stack is used to provide a template for retrieving the chunk from declarative memory. Essentially, the chunk on the stack provides a link to the corresponding declarative memory chunk. Since the chunk must be retrieved from declarative memory before use, the spreading activation and partial matching mechanisms of ACT-R are not circumvented and retrieval errors are possible—unlike the goal stack of earlier versions of ACT-R. Thus, the bounded, circular stack of links to declarative memory avoids the arguments against the goal stack of earlier versions of ACT-R, adds the insight of activated pathways to declarative memory, and retains the insights that motivated the inclusion of a goal stack in earlier versions of ACT-R.
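A minimal sketch of such a bounded, circular stack is given below. It illustrates the data structure only, with hypothetical names, and omits the declarative-memory retrieval step that the model performs on each popped link.

```python
# Bounded, circular stack of links into declarative memory: pushing beyond
# capacity overwrites the least recently stacked link, modeling one kind of
# short-term working-memory error.
class CircularChunkStack:
    def __init__(self, capacity=5):
        self.capacity = capacity   # settable: individual WM differences
        self.links = []            # chunk names, i.e. links, not chunks

    def push(self, link):
        if len(self.links) == self.capacity:
            self.links.pop(0)      # the oldest link is silently lost
        self.links.append(link)

    def pop(self):
        # Returns a retrieval cue; the model must still retrieve the chunk
        # from declarative memory, so retrieval errors remain possible.
        return self.links.pop() if self.links else None

stack = CircularChunkStack(capacity=3)
for name in ["goal25", "goal26", "goal32", "goal33"]:
    stack.push(name)
print(stack.links)  # ['goal26', 'goal32', 'goal33']: 'goal25' was displaced
```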
Besides storing the chunk stack, the context chunk is also used to separate out state information from the goal chunk. Since the goal chunk is the basis for creating new declarative memory chunks, storing the chunk stack in it would result in the chunk stack being stored with each new declarative memory chunk. While this might be used to support a kind of episodic memory where the context in which a word occurs is stored with the declarative memory chunk created during the processing of the word, ACT-R 5.0 does not currently provide a mechanism for transitioning episodic memory into semantic memory (i.e. abstracting from the context of use), and storing the context with a chunk has undesirable side-effects within the ACT-R environment (e.g. it interferes with the spreading activation mechanism). To avoid such problems, a separate context chunk is maintained and made available to all productions. Although the existence of a separate context chunk that productions match against violates the ACT-R 5.0 architecture, in which only the buffers are supposed to be used for this purpose, earlier versions of ACT-R allowed multiple chunks to be matched on the left-hand side of productions, and this functionality is still available in the ACT-R 5.0 environment. The context chunk maintains several pieces of information in addition to the chunk stack. Its definition (as specified by a chunk-type) in the model is shown below:

```
(chunk-type context state rel-context sit-context text-context word
            prev-word-1 prev-word-2 repeat chunk chunk-stack)
```

In this chunk-type definition, context is the name of the chunk type; state is a slot that provides state information to guide production selection; rel-context is a slot that identifies the current relational context (typically determined by a specifier); sit-context is a slot that contains information about the current situation context; text-context is a slot that contains information about the larger discourse context; word contains the lexical item being processed; prev-word-1 and prev-word-2 contain the two previously processed words; repeat is yes if the word has been attended to previously and no-more if there are no more words in the input; chunk contains the most recently processed chunk; and chunk-stack contains the entire chunk stack.

**Lexical and Functional Entries**

The lexical entries in the model provide a limited amount of information which is stored in the word and word-info chunks. The definitions of the word and word-info chunk types are provided below:

```
(chunk-type word word-form word-marker)
(chunk-type word-info word-marker word-root word-type word-subtype word-morph-type)
```

The word-form slot of the word chunk contains the physical form of the word (represented as a string in ACT-R); the word-marker slot contains an abstraction of the physical form. The word-root slot contains the root form of the word. The word-type slot contains the lexical type of the word and is used to convert a word-info chunk into a lexical-type chunk for subsequent processing. A word-subtype slot is provided as a workaround for the lack of multiple inheritance in ACT-R 5.0. The word-morph-type slot supports the encoding of additional grammatical information (although that information is not currently being used). Sample lexical entries for a noun and a verb are provided below:

```
(cow-wf isa word
   word-form "cow"
   word-marker cow)

(cow isa word-info
   word-marker cow
   word-root cow
   word-type noun
   word-morph-type third-per-sing)

(running-wf isa word
   word-form "running"
   word-marker running)

(running isa word-info
   word-marker running
   word-type verb
   word-root run
   word-subtype intrans-verb
   word-morph-type pres-part)
```

Note that there is no indication of the functional roles (e.g. head, modifier, specifier, predicate, argument) that particular lexical items may fulfill. Following conversion of word-info chunks into lexical-type chunks (e.g. verb, adjective), functional roles are dynamically assigned by the productions that are executed during the processing of a piece of text. Since functional role chunks are dynamically created, only chunk-type definitions exist for functional categories prior to that processing.
As an example of a chunk-type definition for a functional category, consider the category pred-trans-verb (i.e. a transitive verb functioning as a predicate), whose definition involves several hierarchically related chunk-types as shown below:

```
(chunk-type top-type head)
(chunk-type (rel-type (:include top-type)))
(chunk-type (pred-type (:include rel-type)) subj spec mod post-mod)
(chunk-type (pred-trans-verb (:include pred-type)) obj)
```

The top-type chunk-type contains the single slot head. All types are subtypes of top-type and inherit the head slot. Rel-type is a subtype of top-type that doesn't add any additional slots. Pred-type is a subtype of rel-type that adds the slots subj, spec, mod, and post-mod. It is when a relation is functioning as a predicate that these slots become relevant. Pred-trans-verb is a subtype of pred-type that adds the slot obj. Summarizing, pred-trans-verb contains the slots head, subj, spec, mod (i.e. pre-head), post-mod (i.e. post-head), and obj, all of which are inherited from parent types except for the obj slot. The following production creates an instance of a pred-trans-verb and provides initial values for the slots:

```
(p process-verb--convert-to-pred-trans-verb
   =goal>
      ISA verb
      head =verb
      subtype trans-verb
   =context>
      ISA context
      state convert-verb-to-pred-verb
==>
   +goal>
      ISA pred-trans-verb
      subj none
      spec none
      mod none
      head =goal
      post-mod none
      obj none
   =context>
      state retrieve-prev-chunk)
```

In this production, a verb (a subtype of lexical-type) whose subtype slot has the value trans-verb is converted into a pred-trans-verb for subsequent processing. The only slot of pred-trans-verb that is given a value other than none is the head slot, whose value is set to be the goal chunk (i.e. head =goal). This production has the effect of assigning a transitive verb the functional role of predicate (specialized as a transitive verb predicate). Its selection and execution are based on the previous context, which set the value of the state slot of the context chunk to convert-verb-to-pred-verb, and on having a goal chunk of type verb whose subtype slot has the value trans-verb.

**Productions**

Sample productions were shown above in the discussion of default rules and in the creation of functional roles. This section provides some additional examples. The read-next-word production initiates the find-attend-encode sequence for reading the next word from the computer screen (using ACT-R's perceptual component).

```
(p read-next-word
   =goal>
      ISA word
   =context>
      ISA context
      state start
    - repeat no-more   ;; no more words
==>
   =context>
      state find)
```

Note that the goal is represented by the declarative word chunk as opposed to a more procedurally oriented goal chunk like read-word. This is fallout from the fact that the goal chunk is the basis for creating declarative memory chunks. The start value of the state slot in the context chunk is the primary basis for the selection of this production (along with the type of the goal chunk). Again, putting the state slot in the context chunk avoids the need to encode procedural information in the goal chunk. The "- repeat no-more" entry in the production indicates that this production only applies if the value of the repeat slot is not (negation is indicated by the "-") no-more, where no-more indicates that the last word in the text has already been read. The next production uses the word-marker slot of the word chunk to retrieve the word-info chunk.
```
(p retrieve-word-info
   =goal>
      ISA word
      word-marker =word-marker
   =context>
      ISA context
      state retrieve
==>
   +retrieval>
      ISA word-info
      word-marker =word-marker)
```

The word-info chunk is then used to create a lexical-type chunk (e.g. verb) which becomes the goal. The productions that convert word-info chunks into lexical-type chunks are special in that the :effort parameter is set to 0.0. The :effort parameter determines how long it takes a production to execute (the default is 0.05 sec, or 50 msec). Setting the value to 0.0 means that the production takes no time to execute. The presumption is that the procedure that effects this conversion is substituting for an automated type conversion mechanism that the ACT-R 5.0 modeling environment does not currently provide.

```
(p convert-word-to-verb
   =goal>
      ISA word-info
      word-type verb
      word-subtype =word-subtype
   =context>
      ISA context
      state convert
==>
   +goal>
      ISA verb
      head =goal
      subtype =word-subtype
   =context>
      state process)

(spp convert-word-to-verb :effort 0.0)
```

There are a few other kinds of "housekeeping" productions which are accorded zero effort in the model (e.g. the stack chunking procedures). In general, "housekeeping" productions are used to effect various data manipulations that are external to the basic processing mechanism. The process-verb--convert-to-pred-trans-verb production discussed above is another example of a "housekeeping" production. The next production matches a verb goal chunk and, in the context of an obj (i.e. object referring expression), converts the verb type into a rel-head type:

```
(p process-verb--obj-context--convert-to-rel-head
   =goal>
      ISA verb
      head =verb
   =context>
      ISA context
      state retrieve-prev-chunk
      rel-context obj
==>
   +goal>
      ISA rel-head
      mod none
      head =goal
      post-mod none)
```

Rel-head (i.e. relational-head) is a subtype of head. The next production matches a head goal chunk (which could be a rel-head) and an obj-spec (i.e. object-specifier) retrieval chunk and creates a new obj-refer-expr (i.e. object-referring-expression) which becomes the goal. Together, these two productions support the use of verbs as (relational) heads of object referring expressions following an object specifier (e.g. "kick" in "the kick").

```
(p process-head--prev-chunk-is-obj-spec
   =goal>
      ISA head
   =context>
      ISA context
      state retrieve-prev-chunk
   =retrieval>
      ISA obj-spec
==>
   +goal>
      ISA obj-refer-expr
      spec =retrieval
      mod none
      head =goal
      post-mod none
      referent none-for-now
   =context>
      state process
      rel-context none)
```

The creation of an object referring expression causes the value of the rel-context slot to be set to none, indicating the end of the object referring expression context.

**Context Accommodation vs. Backtracking**

Context accommodation is a mechanism for changing the function of an expression based on the context, without backtracking. For example, when an auxiliary verb like "did" occurs, it is likely functioning as a predicate specifier, as in "he did not run", where the predicate is "run" and "did not" provides the specification for that predicate. However, auxiliary verbs may also function as predicates when they are followed by a noun phrase, as in "he did it". The ultimate function of an auxiliary verb can only be determined when the expression following the auxiliary is processed.
In a backtracking system, if the auxiliary verb is initially determined to be functioning as a predicate specifier, then when the noun phrase "it" occurs, the system will backtrack and reanalyze the auxiliary verb, perhaps selecting the predicate function on backtracking. However, note that backtracking mechanisms typically lose the context that forced the backtracking. Thus, on backtracking to the auxiliary verb, the system has no knowledge of the subsequent occurrence of a noun phrase to indicate the use of the auxiliary verb as a predicate. As a result, the system can only randomly select a new function for the auxiliary verb, which may or may not be that of a predicate. A better alternative is to accommodate the function of the auxiliary verb in the context which forces that accommodation. In this approach, when the noun phrase "it" is processed and the auxiliary verb functioning as a predicate specifier is retrieved, the function of the auxiliary verb can be accommodated to that of a predicate in the context of the subsequent noun phrase. Context accommodation avoids the need to backtrack and allows the context to adjust the function of an expression just where that accommodation is supported by the context. Of course, there may be cases where the context accommodation mechanism breaks down and some form of backtracking is needed (e.g. garden-path sentences), but in such cases backtracking is likely to involve a jump back to the beginning of a major constituent (e.g. clause), and some contextual information will be carried back with the jump. In any case, a reverse-depth-first, context-unraveling backtracking mechanism like that provided in Prolog is psychologically implausible.
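The accommodation step for the auxiliary example can be sketched as follows; the dictionaries and function are hypothetical illustrations of the idea, not productions from the model.

```python
# Toy accommodation step: an auxiliary first tagged as a predicate specifier
# is re-tagged as a predicate when an object referring expression follows,
# with no backtracking over earlier input.
def accommodate(prev, current):
    if (prev is not None and prev["function"] == "pred-specifier"
            and current["function"] == "obj-refer-expr"):
        prev["function"] = "predicate"  # "he did it": 'did' is the predicate
        prev["obj"] = current
    return prev

aux = {"head": "did", "function": "pred-specifier"}
np_it = {"head": "it", "function": "obj-refer-expr"}
print(accommodate(aux, np_it))
```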
**Processing Example**

As an example of the processing of a piece of text and the creation of declarative memory chunks to represent the meaning of the text, consider the processing of the following text: The old dog lover is asleep.

The processing of the word “the” results in the creation of the following declarative memory chunks:

```
Goal25
   isa DETERMINER
   head The

Goal26
   isa OBJ-SPEC
   head Goal25
   mod None
```

The first chunk, goal25, is a determiner whose head slot has the value The. This chunk represents the inherent part of speech of the word “the”. The second chunk, goal26, is an obj-spec (i.e. object-specifier) whose head slot has the value goal25 and whose mod slot has the value none. This second chunk represents the function of “the” in this particular text. Note that if “the” were the only word in the input text, the creation of these two chunks would still occur, since the processing mechanism works bottom-up from the lexical items and makes no assumptions about what will occur independently of the lexical items.

The processing of the second word “old” leads to the creation of the following declarative memory chunks:

```
Goal32
   isa ADJECTIVE
   head Old

Goal33
   isa OBJ-MOD
   head Goal32
   mod None
```

where goal32 represents the inherent part of speech of “old” and goal33 represents the function of “old” (i.e. object-modifier) in the current context.

The processing of the third word “dog” creates the following declarative memory chunks:

```
Goal39
   isa NOUN
   head Dog

Goal40
   isa HEAD
   head Goal39
   mod Goal33
   post-mod None

Goal41
   isa OBJ-REFER-EXPR
   head Goal40
   referent None-For-Now
   spec Goal26
   mod None
   post-mod None
```

Goal41 is a full object referring expression and contains a referent slot to support a link to an object in the situation model corresponding to this piece of text. The model does not currently establish the value of the referent slot since this capability is not yet implemented.

The processing of the fourth word “lover” creates or modifies the following declarative memory chunks:

```
Goal47
   isa NOUN
   head Lover

Goal48
   isa HEAD
   head Goal47
   mod Goal40
   post-mod None

Goal41
   isa OBJ-REFER-EXPR
   head Goal48
   referent None-For-Now
   spec Goal26
   mod None
   post-mod None
```

Note that goal47 (i.e. “lover”) is now the head of goal48, which is the head of the object-referring-expression (i.e. goal41), with goal39 (i.e. “dog”) functioning as the head of goal40, which is functioning as a modifier of goal48. Also, note that goal33 (i.e. “old”) modifies goal39 (i.e. “dog”) and not goal48 (i.e. “dog lover”). This is equivalent to the expression “the lover of old dogs” rather than “the old lover of dogs”, both of which are possible interpretations. Having “old” modify “dog” rather than “dog lover” is probably not the preferred interpretation of this piece of text, but it does point out the advantage of having an implemented model to make such decision points apparent. The current model can be modified to support the alternative interpretation by the addition of a production that has the intended effect; however, there is currently no mechanism for preferring this new production over the existing production. Such a mechanism would need to be able to distinguish between collocations like “old dog lover” and collocations like “old house renovator”, where the modification works the other way.

Continuing with the next word “is” leads to the creation of the following chunks:

```
Goal54
   isa REG-AUX
   head Is-Aux

Goal55
   isa PRED-SPEC
   head Aux-1
   mod None
   modal-aux None
   neg None
   aux-1 Goal54
   aux-2 None
   aux-3 None
```

Note that the pred-spec (i.e. predicate-specifier) chunk type has a modal-aux slot, a neg slot, and three auxiliary slots (aux-1, aux-2, and aux-3) to handle the range of predicate specifiers that can occur in English. For this instance of a pred-spec (i.e. goal55), goal54 fills the aux-1 slot and functions as the head of goal55.

The processing of the final word “asleep” creates the following declarative memory chunks:

```
Goal61
   isa ADJECTIVE
   head Asleep

Goal62
   isa PRED-ADJ
   head Goal61
   subj Goal41
   spec Goal55
   mod None
   post-mod None
```

In the context of the predicate specifier “is” (i.e. goal55), the adjective “asleep” functions as a predicate adjective filling the head slot of goal62. Goal41 (an object referring expression) fills the subj slot (i.e. subject) of goal62.

Following the processing of “asleep” the model attempts to read the next word. The failure to read a word signals the end of processing and a wrap-up production is executed. This production converts goal62 into a situation referring expression, resulting in the creation of goal65 with goal62 filling the head slot.

```
Goal65
   isa SIT-REFER-EXPR
   head Goal62
   referent None-For-Now
   mod None
```

At the end of processing, a single chunk of type *situation-referring-expression* is available in the *chunk-stack* to support subsequent processing.

**Summary and Future Research**

Double R Model may be the first attempt at the development of a Natural Language Understanding system founded on the principles of Cognitive Linguistics and implemented in the ACT-R cognitive modeling environment. Much work remains to be done. Double R Model has not yet reached a scale at which it can handle more than a token set of English.
To expand the symbolic capabilities of Double R Model we are evaluating the integration of the CYC knowledge base (Lenat et al., 2003), WordNet (Miller et al., 2003), and FrameNet (Fillmore et al., 2003). CYC could provide the basis for creation of a situation model to ground the referring expressions in a text, thereby supporting a fuller representation of referential meaning. WordNet will support the expansion of the lexicon to a full complement of lexical items. FrameNet, with some mapping to Double R Grammar, could provide constructional schemas for relational and referential lexical items. To expand the subsymbolic capabilities of Double R Model (e.g. in support of lexical disambiguation), we are evaluating the use of Latent Semantic Analysis (LSA) (Landauer et al., 2003) and considering improvements to ACT-R’s single-level spreading activation mechanism. In this regard, LSA might provide an empirical basis for determining the strength of association of declarative memory chunks, and multiple-level spreading activation (like that proposed in the earlier ACT* theory, Anderson, 1983) would eliminate the need for direct association of all related declarative memory chunks.

**References**

Anderson, J. R. (1983). *The Architecture of Cognition*. Cambridge, MA: Harvard University Press.

Anderson, J. & Lebiere, C. (1998). *The Atomic Components of Thought*. Mahwah, NJ: LEA.

Anderson, J., Bothell, D., Byrne, M. and Lebiere, C. (2002). *An Integrated Theory of the Mind*. [http://act-r.psy.cmu.edu/papers/403/IntegratedTheory.pdf](http://act-r.psy.cmu.edu/papers/403/IntegratedTheory.pdf)

Ball, J. (2003a). “Towards a Semantics of X-Bar Theory.” [http://www.DoubleRTheory.com/papers/other/SemanticsOfXBarTheoryPDF.pdf](http://www.DoubleRTheory.com/papers/other/SemanticsOfXBarTheoryPDF.pdf)

Ball, J. (2003b). “Is the Head of a Noun Phrase Necessarily a Noun?” Presentation at ICLC 2003. [http://www.DoubleRTheory.com/presentations/doubler/ICLCPresentation.pps](http://www.DoubleRTheory.com/presentations/doubler/ICLCPresentation.pps)

Chomsky, N. (1957). *Syntactic Structures*. The Hague: Mouton.

Chomsky, N. (1965). *Aspects of the Theory of Syntax*. Cambridge, MA: The MIT Press.

Chomsky, N. (1970). “Remarks on Nominalization.” In R. Jacobs & P. Rosenbaum, eds., *Readings in English Transformational Grammar*. Waltham, MA: Ginn.

Fillmore et al. (2003). FrameNet. http://www.icsi.berkeley.edu/~framenet/

Kintsch, W. (1998). *Comprehension: A Paradigm for Cognition*. New York, NY: Cambridge University Press.

Lakoff, G. (1987). *Women, Fire and Dangerous Things*. Chicago: The University of Chicago Press.

Lakoff, G. (1988). “Cognitive Semantics.” In U. Eco, M. Santambrogio & P. Violi, eds., *Meaning and Mental Representation*. Indianapolis: Indiana University Press.

Lakoff, G. & Johnson, M. (1980). *Metaphors We Live By*. Chicago: The University of Chicago Press.

Landauer et al. (2003). Latent Semantic Analysis (LSA). http://lsa.colorado.edu/

Langacker, R. (1987). *Foundations of Cognitive Grammar, Volume 1, Theoretical Prerequisites*. Stanford, CA: Stanford University Press.

Langacker, R. (1991). *Foundations of Cognitive Grammar, Volume 2, Descriptive Applications*. Stanford, CA: Stanford University Press.

Lenat et al. (2003). CYC. http://www.cyc.com

Miller et al. (2003). WordNet. http://www.cogsci.princeton.edu/~wn/

Rosch, E. (1978). “Principles of Categorization.” In E. Rosch & B. Lloyd, eds., *Cognition and Categorization*. Hillsdale, NJ: LEA.

Talmy, L. (2003). *Toward a Cognitive Semantics, Vols I and II*. Cambridge, MA: The MIT Press.
Townsend, D. and Bever, T. (2001). *Sentence Comprehension*. Cambridge, MA: The MIT Press.

Wittgenstein, L. (1953). *Philosophical Investigations*. New York: MacMillan.
Network Lift from Dual Alters: Extended Opportunity Structures from a Multilevel and Structural Perspective

Emmanuel Lazega\textsuperscript{1,*}, Marie-Thérèse Jourda\textsuperscript{2} and Lise Mounier\textsuperscript{3}

\textsuperscript{1}Institut d’Etudes Politiques de Paris, CSO-CNRS, Paris, France; \textsuperscript{2}CEPEL-CNRS, Montpellier, France; \textsuperscript{3}CMH-CNRS, Paris, France. *Corresponding author.

Source: *European Sociological Review*, Vol. 29, No. 6 (December 2013), pp. 1226–1238. Published by Oxford University Press. Stable URL: https://www.jstor.org/stable/24480018. Submitted: September 2011; revised: February 2013; accepted: February 2013.

\textit{Abstract}: This article uses multilevel network analysis to identify an extended and latent opportunity structure for actors dually positioned in both intra-organizational and inter-organizational networks. This extended opportunity structure combines actors’ direct ties with indirect ties that they can add to their own network by ‘borrowing’ some of their boss’s two-path contacts. We call ‘dual alters’ contacts that can be reached through this multilevel path with help (or absence of obstruction) from such ‘embedded brokers’. We test the specific effect of this extension on actors’ performance using a data set derived from a multilevel study of the elite of French cancer researchers (1996–2005). We find a significant effect of this extension on members’ performance when dual alters provide complementary resources, thus providing proof of a ‘network lift’ from dual alters’ presence in the focal actor’s network. Network lift allows sociologists to measure the extent to which performance measured at the individual level depends in a complex way on the multilevel and combined characteristics of the intra- and inter-organizational context to which individuals belong. We believe that this measurement of latent and extended opportunity structures will help meso-level sociologists in their approach to social processes and inequalities in the organizational society.

\textbf{Introduction}

The fundamental question of the influence of social structure on behavior and performance of actors has been examined with renewed interest in recent decades owing to the development of structural sociology and the analysis of social and organizational networks, particularly from a multilevel perspective (Snijders and Bosker, 1999; Kozlowski and Klein, 2000). Network data help model opportunity structures in new ways (White \textit{et al.}, 1976; Burt, 2005). In particular, new approaches to ‘duality’ in social life, i.e. co-constitution of individuals and groups initially measured by bipartite or two-mode networks (Breiger, 1974), have enriched this multilevel perspective. These approaches complement rather than compete with more established hierarchical linear modeling (Bryk and Raudenbush, 1992), especially by taking into account new elements in the definition of opportunity structures.
They usually observe two or more systems of superposed and partially interlocked interdependencies, at the same time inter-individual and inter-organizational, and provide several formalisms (Wilson, 1982; Fararo and Doreian, 1980; Snijders and Baerveldt, 2003; Lubbers, 2003; Robins \textit{et al.}, 2005; Van Duijn, 2006; Wang \textit{et al.}, 2012) that craft a formal theory of interpenetration of distinct entities such as individuals and groups. Various modes of articulation for the different levels have thus opened new avenues for research in that area, exploring, for example, meso-level networks (Hedström \textit{et al.}, 2000) or ‘linked design’ networks (Lazega \textit{et al.}, 2008). In this article, we further explore the possibilities offered by multilevel network analysis. We argue that it can develop sociologists’ understanding of opportunity structures. As each level of agency constitutes a system of exchange between different resources that has its own temporality, logic and processes (Lazega, 2012), it is important to examine both levels separately and jointly. Joint study allows us to identify opportunity structures and the actors that benefit from relatively easy access to the resources that circulate at each level, and also to measure their relative performance; in addition, it allows us to identify situations in which the interplay between the levels has less positive effects for the actors. We suggest that the knowledge of multilevel interdependencies, and additionally of the manner in which actors manage these interdependencies at each level, adds an original dimension to multilevel reasoning and to meso-level exploration in sociology. We use the ‘linked design’ approach because we think that dual-positioning individual actors (in the network of their inter-individual relationships and in the network of relationships between the organizations to which they belong, i.e. in which they are affiliated) facilitates identification and measurement of ‘extended’ opportunity structures. This in turn leads to specific hypotheses concerning the relationship between position in these complex multilevel structures, strategy, and performance (measured at the individual level). The term ‘strategy’ refers here to the fact that actors manage their interdependencies at different levels by appropriating, accumulating, exchanging, and sharing resources, both with peers and with hierarchical superiors or subordinates. Determinants of performance are widely examined in the social network literature (see Flap et al., 1998, or Quintane et al., 2012, for example). Our way of presenting the problem of contextualizing action and actors’ performances echoes the preoccupations of organizational sociologists who reason in terms of individual and collective social capital (Leenders and Gabbay, 1999; Hsung et al., 2009). Our purpose is equivalent to addressing the difficult question of the integration of different levels of analysis in which they situate social capital. For this, we extend Burt’s (1992, 2005) work on ‘borrowing’ social capital in a new way.
With respect to the link between individual social capital and performance, Burt’s theory is that, in social settings in which individuals compete, performance increases when actors have dense ties within their workgroup and high brokerage scores beyond the group. In particular, his work shows that members benefit from brokerage and structural holes, unless they are in a highly dependent and dominated situation, in which case they can try to borrow someone else’s (a champion’s, a mentor’s) relationships to reach the same levels of performance as their average competitors. Borrowing social capital can be an efficient strategy for members who suffer from a lack of legitimacy (e.g. women in a male-dominated organization): it consists of benefiting from a colleague’s or a superior’s support through use of the latter’s network, a kind of support that leads to an increase in work performance. Based on this perspective, we propose to observe extended opportunity structures, as defined above, by focusing on actors’ multilevel relational strategies of borrowing contacts from their boss’s network. We identify a specific kind of borrowing effect on performance, an effect originating in the multilevel dimension of the structure, and we call it ‘network lift’. This borrowing can be considered an outcome of a specific mechanism of ‘embedded brokerage’ in the sense that it is provided by hierarchical superiors who act as bridges embedded in the inter-organizational networks. For embedded brokers, embeddedness in a relatively closed inter-organizational network lowers the risks of building bridges between subordinates compared with the risks taken by brokers, for example, in open markets. We measure the role of specific kinds of actors’ indirect and potential contacts provided by such embedded brokers. We call these contacts ‘dual alters’ and argue that they have an important and specific role in shaping this lift effect, i.e. in creating a focal actor’s latent, multilevel, and extended opportunity structure. We carry out this approach using a data set from the sociology of science, collected by Lazega et al. (2008), that measures the networks of an ‘elite’ of French cancer researchers in 1999–2000, examined at both the inter-individual and the inter-organizational levels. In itself, the study of ‘elites’ is not new in the sociology of science or in network analysis (see for example, Zuckerman, 1977, or Hargens et al., 1980). In particular, several studies about complete networks of scientists or laboratories have been presented before, beginning with the pioneering work of Mullins et al. (1977). Jansen (2004) provides a literature review. But this data set is particularly suitable for the purpose of analyzing performance in relation to extended opportunity structures. The article is organized as follows: we first discuss the notion of extended opportunity structure and the role of hierarchical superiors and dual alters in shaping it. We then hypothesize that the richer members’ dual alters are in complementary resources, the higher these members’ performance. The data set used in the analyses is then presented: it is focused on tacit learning networks among scientists and the complex determinants of their performance. A model is fitted to measure the effect of access to such dual alters on actors’ performance. New descriptive analyses are provided to illustrate and interpret these results. Finally, we explore the limitations of our approach and suggest further developments.
**Borrowing Social Capital from Hierarchical Superiors as Embedded Brokers**

Extending Burt’s idea of borrowing social capital, we focus specifically on the actor’s own hierarchical superior as the strategic partner who can sponsor, in our case indirectly, the actor’s access to resources. Here, actors and ties are nested within organizational units, but the ties can also be among actors of different organizational units. From this perspective, borrowing relational capital is seen as a complex social process that provides performance with a collective dimension even when it is measured at the individual level. Actors have a complex inter-individual network at the personal level combined with an equally complex inter-organizational network at the collective level; the latter is based on affiliation ties or personal ties with managers (hierarchical superiors) who themselves have ties in other organizations across the boundaries of their organization. We aggregate the two networks and treat the ‘extended’ network as a latent, meso-level structure adding actual and indirect relational capital for the focal individual members. The linked design allows analysts to look at the extent to which the extended relational context adds an extra effect on performance by providing this indirect potential social capital for these individual members. This extended context can either increase, i.e. lift, or maintain, or decrease members’ individual performance. The expression ‘lift’ is meant to signify that an increase is partly traceable to the multilevel dimension of the system. We call the chain of ties that creates, at the aggregate level, this extended context a dual 3-path (or tetrad); Figure 1 visualizes this chain.

**Figure 1** Organizational extension: closing a dual 3-path by adding indirect contacts from one’s boss’s network to one’s own. Star is an indirect contact of Member \(i\), called ‘dual alter’ \(k\) of \(i\), reachable via \(i\)’s and \(k\)’s managers and embedded brokerage (as defined in the text). He/she is part of Member 1’s potential relational capital, accessible through a direct tie with Manager 1 (based on common affiliation) and an indirect tie with Manager 2. The dotted edge represents a potential collaboration tie. Other black nodes represent direct contacts \(j\) of \(i\), as observed in his/her self-reported network.

We call actors \(i\) (i.e. respondents in the empirical research), their observed direct contacts \(j\), and their indirect potential contacts, accessible through the embedded brokerage of their managers, dual alters \(k\). Dual alters are the induced, indirect contacts that are part of a focal actor \(i\)’s relational capital via such substructures. This organizationally extended network is constructed by adding to the observed network of actors \(i\) all the indirect ties \(k\) that \(i\) can access through their manager and through the manager’s ties to other managers at the inter-organizational level. We assume that this potential can be more easily realized than creation of ties from scratch by individual actors, provided that members and managers get along reasonably well within the context of cooperation in their organization, and provided also that managers see the value for their organization of helping their members in accessing new resources through their inter-organizational ties. Realizing this potential is equivalent to closing a complex and multilevel 3-path, thus reaching potential collaborators leading to complementary resources. Closing this dual 3-path means adding dual alters that one can reach through one’s boss’s network. Dual alters are thus equivalent to Burt’s borrowed contacts generated by Breiger’s duality. Borrowing relational capital, however, is a complex social process that provides performance with a collective dimension even when it is measured at the individual level. In that sense, this complex process adds a new effect to the effects already taken into account in the literature on social capital based on measurement of the relationship between networks and performance.
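Before qualifying this effect, the following minimal sketch (our illustration in Python, not the authors’ code; the data structures are hypothetical) closes the dual 3-path of Figure 1: dual alters k are the members of laboratories tied to i’s laboratory, minus i and i’s own direct contacts j.

```
# Minimal sketch of closing the dual 3-path: i -> manager(i) -> manager(k) -> k.
# `lab` maps researchers to their laboratory, `org_ties` is the dichotomized
# inter-organizational network, `members` lists each laboratory's researchers,
# and `direct` holds the observed inter-individual ties j. All hypothetical.

def dual_alters(i, lab, org_ties, members, direct):
    """Dual alters k of i: members of laboratories tied to i's laboratory,
    excluding i and i's own direct contacts j."""
    alters = set()
    for other_lab in org_ties.get(lab[i], set()):
        for k in members.get(other_lab, []):
            if k != i and k not in direct.get(i, set()):
                alters.add(k)
    return alters

# Toy data mirroring Figure 1: Member 1 reaches Star via the two managers' tie.
lab = {"member1": "lab1", "star": "lab2", "member2": "lab2"}
members = {"lab1": ["member1"], "lab2": ["star", "member2"]}
org_ties = {"lab1": {"lab2"}, "lab2": {"lab1"}}
direct = {"member1": {"member2"}}
print(dual_alters("member1", lab, org_ties, members, direct))  # {'star'}
```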
First, variations in kinds of contacts are important. Thus, it may be relevant to identify indirect contacts $k$ that vary in terms of wealth: some have many resources to share, as distinguished from others who are poorer contacts with fewer resources to share. Second, variations in kinds of resources exchanged also matter for identifying such effects. The value of the observed and potential relational capital depends on the nature of the task to be performed. This presupposes that the expanded network must be specified resource by resource. In an environment in which actors behave strategically, the issue of the relative utility of their contacts’ resources is also an appropriate one. The expanded network may or may not provide each member with access to complementary sources of resources. The network lift effect is relative to the utility of the resources of the indirect contact. These complementary sources may be useful or not useful with respect to providing resources that ego does not already have. In the case of social capital that is managed in a tightly organized environment, it may thus be relevant to distinguish resources of rich contacts $k$ that are complementary to those of $i$ from resources of rich contacts $k$ that are redundant with resources already available to $i$. Network lift can thus be thought to be provided by complementary resources borrowed from rich dual alters. Given this link between relative utility and network lift, a network of indirect ties that provides non-complementary resources, i.e. ‘more of the same’ in terms of resources, should be an inter-organizational network that does not provide lift, in the sense that it does not carry the individual upward and increase his/her performance.

**Hypothesis**

Based on this discussion of social capital focusing on dual alters as indirect and potential sources of performance in the extended network accessible through the inter-organizational level of agency, we expect the following in terms of shaping a latent extended opportunity structure: *Controlling for the effect of direct contacts, the richer members’ dual alters are in complementary resources, the higher these members’ performance*. Assuming that members get along with their hierarchical superiors who will broker their access to indirect contacts, and assuming that these indirect contacts are willing to share their complementary resources, these contacts’ help should be identified as a specific source of members’ performance increase over time, i.e. network lift. The hypothesis that actors may benefit from such a network lift by access to dual alters is not a surprise in itself for social scientists who have long argued that position in the structure has an effect on behaviour and performance. But our approach adds a new multilevel dimension to opportunity structure, as well as methodology and data for its measurement. We can thus measure, for example, the specific effect of indirect social capital potentially provided by hierarchical superiors embedded in inter-organizational networks.
**A Case of Tacit Learning Networks among Scientists across Laboratories**

We explore the value of this extended network for individual performance under these conditions in a study of ‘elite’ cancer researchers in France (1999–2000). A detailed presentation of this data set can be found in Lazega et al. (2008), who measure the critical importance of resources directly provided by the organizations to their members in explaining the latter’s performance. Evidence for the existence of a ‘network lift from dual alters’ effect is based on measurement of the multilevel networks and performance levels of these scientists over 10 years (1996–2005). This data set is well suited for measurement of the extended opportunity structure because it includes and combines inter-laboratory networks, inter-individual networks within a specific subpopulation of scientists in that field, and performance variations. Performance is measured by impact factor scores (IF)—a measure of a scientific journal’s impact, based on citations to its articles. Each researcher’s performance was measured at the individual level by assigning to each of his/her publications the IF score of the journal in which it was published, and then by summing across all publications. Respondents belonged to five broad specialties: first, diagnostic, screening, prevention, and epidemiology (a set of disciplines, which will be pooled below under the label ‘epidemiology’); second, clinical research without fundamental research; third, clinical and fundamental research in hematology-immunology; fourth, fundamental research focused on pharmacology; and fifth, fundamental research in molecular/cellular biology or genetics. At the time of the study, hematology-immunology was the dominant specialty in cancer research in France, following generations of investments in the study of leukemia by French researchers who were among the first to learn and apply collectively the methodologies of molecular biology, with a visible effect in their performance measurements. However, although this specialty had the highest IF scores at the time, those scores were no longer increasing. At the same time, another specialty was making new progress: diagnostic, screening, prevention, and epidemiology had increasing IF scores, a sufficiently strong collective development for a visible effect in their performance measurements.

The networks of interdependencies among laboratories and among scientists in France at the time were reconstituted during face-to-face interviews: first, the inter-organizational networks between the majority of laboratories engaged in cancer research; second, the advice networks, i.e. networks of access to tacit knowledge, constructed by members of the ‘elite’. This was done in the following manner. At the individual level, each researcher is considered a ‘scientific entrepreneur’ who needs resources that may be social or monetary. From the individual researcher’s point of view, scientific research may be analytically broken down into a sequence of five steps, each characterized by a strong degree of uncertainty: selecting a line of research, finding institutional support, finding sources of financing, recruiting personnel, and publishing articles. At each step, one must suppose that the researchers depend on their relational capital and that they seek advice from other members of the research community to handle these uncertainties. In this competitive and uncertain environment, access to advisors is an important resource because carrying out these tasks is facilitated by access to advice offered by competent colleagues who agree to help. Five advice networks were thus reconstituted, one for each of these five steps.
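Before turning to the network data, a minimal sketch (ours; the journal names and IF values are invented) of the IF-based performance measure described above, in which each publication inherits the impact factor of the journal it appeared in:

```
# Each researcher's score is the sum, over his/her publications, of the
# impact factor of the publishing journal. Lookup table is hypothetical.
journal_if = {"Journal A": 30.0, "Journal B": 10.0, "Journal C": 1.0}

def if_score(publications):
    """publications: list of journal names, one per article published."""
    return sum(journal_if[j] for j in publications)

print(if_score(["Journal B", "Journal B", "Journal C"]))  # 21.0
```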
At the inter-organizational level, systematic data about inter-laboratory networks and about laboratory characteristics were also collected. The laboratory directors indicated with which other laboratories, among those in France practicing cancer research, their laboratory exchanged different types of resources. The list of reconstituted transfers and exchanges includes the recruitment of post-docs and researchers, the development of programs of joint research, joint responses to tender offers, sharing of technical equipment, sharing of experimental material, mobility of administrative personnel, and invitations to conferences and seminars. The complete inter-organizational network examined here is the aggregated and dichotomized network of all these flows; dichotomization created a tie between two actors if there was at least one tie between them in one of the aggregated matrices. At the inter-individual level, the five advice networks are aggregated and dichotomized to reconstitute a complete network of 126 researchers with density 0.06 and average degree 8.8. Likewise, the inter-organizational network of 82 laboratories reaches a density of 0.04 with an average degree of 6. The number of cases in which the director of the laboratory and the researcher i answered the organization-level questionnaire and the individual-level questionnaire, respectively, is 93. We use the characteristics of i, j, and k over two periods: four years before the measurement of the networks (1999–2000) and five years after these measurements. The effect we are mostly interested in is the effect of characteristics of indirect ties k on i’s performance.

Specifically, we create two kinds of articulation between the individual’s network and the organizational network. First, we measured the status of the actor, using his/her centrality in the advice network of this population, and the importance of his/her organization. Centralities used here are indegrees (incoming ties) and outdegrees (outgoing ties). This provides a uniform basis for the interpretation of our results. The status of the organization is measured by three criteria: its indegree centrality in inter-organizational networks, the indirect resources to which its members declare having access (its outdegree), and its size. This produces an endogenous partition of the population into four classes that are baptized metaphorically for a more intuitive understanding of this dual positioning. Class 1 actors cluster the Big Fish in the Big Pond (BFBP), class 2 the Big Fish in the Small Pond (BFSP), class 3 the Little Fish in the Big Pond (LFBP), and class 4 the Little Fish in the Small Pond (LFSP). The construction of the four classes positioning actors at the meso level used the following median-value thresholds (sketched below): to be considered a BFBP, the researcher’s indegree centrality must be higher than 5.2, that of the laboratory higher than 2.75; the laboratory’s outdegree must be higher than 2 and its size higher than 26 researchers.
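The aggregation and classification steps just described can be sketched as follows (our reconstruction in Python; the variable names are hypothetical, the thresholds are those reported in the text):

```
import numpy as np

def aggregate_dichotomize(flows):
    """A tie exists if at least one of the stacked per-resource matrices has one."""
    return (np.sum(flows, axis=0) > 0).astype(int)

def fish_pond_class(res_indegree, lab_indegree, lab_outdegree, lab_size):
    """BFBP/BFSP/LFBP/LFSP from the reported thresholds (5.2, 2.75, 2, 26)."""
    fish = "B" if res_indegree > 5.2 else "L"
    pond = "B" if (lab_indegree > 2.75 and lab_outdegree > 2 and lab_size > 26) else "S"
    return fish + "F" + pond + "P"

recruit = [[0, 1], [0, 0]]           # two toy per-resource matrices
joint_projects = [[0, 0], [1, 0]]
print(aggregate_dichotomize([recruit, joint_projects]).tolist())  # [[0, 1], [1, 0]]
print(fish_pond_class(8.8, 3.0, 6.0, 40))  # 'BFBP'
print(fish_pond_class(2.0, 1.0, 1.0, 10))  # 'LFSP'
```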
The same thresholds are used for the three other categories (Big Fish in a Small Pond, etc.). Given the high number of internationally visible publications used to select this population, even the researchers that we call LFSP are researchers at an exceptional level. Consistent with Lazega et al. (2008), performances are not considered as simply floating on the extended networks of individuals, but as supported also by the status of their organizations. We then create a second combination of the individual network and the organizational network by identifying, for each individual actor: (i) his/her contacts j, who are both his/her own personal direct contacts and persons who belong to organizations with which his/her own organization has inter-organizational ties; and (ii) his/her dual alters k, who are reachable through the managers of their organizations and who could provide access to new and complementary resources. The dependent variables used here to test our hypothesis are IF scores in period 2. Independent variables include, for each actor i, j, and k, their specialty as identified above and their class derived from centralities (BFBP, etc.) and size. In this case, k’s centralities are measurements of k’s wealth with respect to various resources: k is rich if his/her centrality is above the mean of the outdegrees of j, which is very selective (a toy version of this selection rule is sketched below).
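A toy version of this selection rule (ours, not the authors’ code; names are hypothetical):

```
from statistics import mean

# A dual alter k is kept only if its outdegree centrality exceeds the mean
# outdegree of the focal actor's direct contacts j.

def rich_dual_alters(i, direct, alters, outdegree):
    js = direct.get(i, set())
    threshold = mean(outdegree[j] for j in js) if js else 0.0
    return {k for k in alters if outdegree[k] > threshold}

outdeg = {"j1": 4, "j2": 8, "k1": 9, "k2": 3}
print(rich_dual_alters("i", {"i": {"j1", "j2"}}, {"k1", "k2"}, outdeg))  # {'k1'}
```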
**Results**

To test our hypotheses, we look at the effect of rich k contacts.\(^1\) Not all members benefit in the same way from indirect ties, and managers’ ties do not represent all the relational possibilities offered by the organization; peers can also broker ties to new contacts. However, sharing relational capital is part of a manager’s job, and observing this potential represents a good start for specifying network lift as a specific organizational effect on members’ performance. For the purpose of this article, we will remain at the level of the structure and average across all members to look at the pattern of the new extended network and its effect on members’ performance. Table 1 presents the results of the final regression models computed for this purpose. The dependent variable is the researcher’s performance in time T2. Researchers’ characteristics are measured in time T1 and may have an effect on performances in time T2. In these models, we only look at researchers i who have dual alters in Model 1 (\(N=93\)) and at researchers who have rich dual alters in Model 2 (\(N=58\)). Effects are sorted based on characteristics of i, j, and k. Recall that, based on our hypotheses, not all researchers with dual alters necessarily have a high performance. Here, epidemiology includes diagnostic, screening, prevention, and epidemiology proper; clinical research is separated from fundamental research; clinical and fundamental research are mainly combined within hematology-immunology (at the time of the study); and fundamental research can be either focused on pharmacology or use molecular/cellular biology or genetics.

Results show that rich indirect contacts k matter in our population’s performance levels. After selection of the richest dual alters k, i.e. taking out of the analysis all dual alters that are below the threshold level (i.e. the mean of j’s outdegrees), the usefulness of these k contacts for researchers’ performance level comes to light—at least for the actors/researchers who can take advantage of the existence of these indirect contacts.

Table 1 Effect of dual alters on focal researchers’ Period 2 performance levels

| Independent variables | Model 1 β (S.E.) | Model 2 β (S.E.) |
|---|---|---|
| Intercept | 19.8* (8.02) | 17.97 (14.98) |
| *Focal actor i characteristics* | | |
| Specialty | | |
| Epidemiology (diagnostics, prevention, etc.) | 13.3 (10.3) | 16.7 (13.07) |
| Clinical (without fundamental research) | — | — |
| Clinical (with fundamental research in hematology) | 35.02* (13.2) | 39.14* (17.86) |
| Fundamental with pharmacology | — | — |
| Fundamental in molecular biology | 15.02 (10.63) | — |
| Fish-Pond category membership | | |
| BFBP | 15.65 (8.9) | — |
| BFSP | — | −33.11* (11.26) |
| LFBP | — | — |
| LFSP | — | — |
| *Direct contact j characteristics* | | |
| Average performance at Period 2 | — | −0.17 (0.24) |
| Specialty | | |
| Epidemiology (diagnostics, prevention, etc.) | — | — |
| Clinical research (without fundamental research) | — | −2.74 (4.53) |
| Clinical (with fundamental research in hematology) | — | — |
| Fundamental with pharmacology | 7.95 (4.66) | 12.22 (6.27) |
| Fundamental in molecular biology | 10.6** (3.40) | 20.15*** (4.97) |
| Fish-Pond category membership | | |
| BFBP | — | −14.14* (5.68) |
| BFSP | −9.88** (3.30) | — |
| LFBP | — | 12.33 (6.35) |
| LFSP | — | — |
| *Dual alter k characteristics* | | |
| Average performance at Period 2 | — | 0.3 (0.19) |
| Specialty | | |
| Epidemiology (diagnostics, prevention, etc.) | — | **21.64* (10.93)** |
| Clinical (without fundamental research) | — | — |
| Clinical (with fundamental research in hematology) | — | — |
| Fundamental with pharmacology | — | — |
| Fundamental in molecular biology | — | 15.93 (11.25) |
| Fish-Pond category membership | | |
| BFBP | 2.55 (1.77) | — |
| BFSP | — | — |
| LFBP | — | — |
| LFSP | — | — |

Significance levels for P-values: *P<0.05; **P<0.005; ***P<0.0005.

Note. Outcome of stepwise ANOVA models explaining Period 2 performance levels of researchers i based on explanatory variables including the characteristics of i’s indirect contacts k (dual alters), controlling for i’s characteristics and the characteristics of i’s direct contacts j. All dual alters of actor i are retained in Model 1; only ‘rich’ dual alters (as defined in the text) are retained in Model 2. Variables for which there are no parameter estimates were removed from the model during the stepwise procedure. The equations are presented so as to facilitate the comparison between the effects of characteristics of i, j, and k on i’s performance levels. For example, Model 2 shows that, at the time of the study, the fact that i’s extended opportunity structure includes dual alters specialized in epidemiology (in bold; coefficient 21.64, standard error 10.93) is likely to increase i’s performance significantly during period 2. Model fit: F=3.7, P=0.0019 for Model 1; F=3.43, P=0.0016 for Model 2. Non-significant variables that are retained in both final models are variables that nevertheless improved model fit. N=93 for Model 1, N=58 for Model 2.

In this model, the more LFBP actors have among their direct contacts, the fewer BFBP they have among their direct contacts, and the more rich indirect contacts in epidemiology they have, the higher their performance in period 2. LFBP have low performance levels during the first period, but these levels increase during period 2 owing to these dual alters, whose performance is high and can be shared in co-publications. This confirms that for actors’ performances to improve, they need to count on rich potential resources (dual alters) that are complementary to their own and to those of their direct contacts. Performances of j during the second period, as independent variables, are never significant, perhaps because their resources were already used in the previous period. In contrast, dual alters’ performances do lift members’ performances, controlling for variables that routinely explain variations in returns on investment in relational capital. As will be shown by Figure 2 later in the text, researchers i with access to high-performing dual alters k are not necessarily those whose performances are the highest. Interpreting this result seems to require a closer look at the evolution of the performances of these researchers so as to provide interpretations that are different for different sub-populations. In the next section, we identify six groups of researchers for whom these lifting effects from dual alters, whenever present, can have different explanations. Reintroducing the content of the network to explain the relative contribution of each network is thus highly illuminating, because it helps with differentiating extended networks and dual alters that provide new and complementary resources from networks that provide redundant ‘more of the same’ resources.

**Interpretation and Illustration: Rebounds from Dual Alters’ Resource Complementarity**

Here, we measure complementarity as the result of two requirements: first, the focal actor’s resources in a group must be below the mean; and second, the alters’ resources for this group must be above the mean; if they are not, resources are considered redundant and unhelpful in terms of providing network lift (a toy version of this test is sketched below). In other words, as measured here, complementarity is not a purely individual variable but a measure of redundancy or non-redundancy of resources at the level of performance groups. In addition, the value of this complementarity—and with it the specific effect of the mechanism of embedded brokerage on performance—is contingent upon the compound of researchers’ specialties involved. For example, in this case and at that time, performance levels increased when focal actors were clinical researchers contributing to fundamental research in hematology, directly in contact with fundamental researchers specialized in molecular biology, as well as indirectly in contact with researchers in epidemiology (including diagnostics, screenings, and prevention—the hot specialties at the time of the study, i.e. making real progress in terms of IF scores at the end of the 1990s).

The importance of complementarity of resources can be teased out of the data by identifying different performance groups in the population and by looking at the relative utility of resources provided by j and k in such groups. Analyzing the evolution of performances at the individual level for all researchers in our population, we are able to cluster these researchers into six groups of performance, i.e. six categories of evolution of such performances over 10 years.
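A toy version of this complementarity test (our reconstruction; the standardized outdegrees are invented):

```
from statistics import mean

# For one resource and one performance group, dual alters count as
# complementary when focal actors sit below the population mean while the
# dual alters sit above it; otherwise resources count as redundant.

def complementary(i_outdegs, k_outdegs, population_mean):
    return mean(i_outdegs) < population_mean < mean(k_outdegs)

# Hypothetical standardized outdegrees for one advice network
print(complementary([0.6, 0.8], [1.4, 1.2], 1.0))  # True  -> potential lift
print(complementary([1.3, 1.1], [0.7, 0.5], 1.0))  # False -> redundant
```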
In effect, to better understand the effects identified in the models above, we calculated for each researcher his/her ‘career’ of IF scores between 1996 and 2005 and his/her position compared with the annual average of all IF scores. We also calculated the evolution of this score as compared with the evolution of this mean in the first and the second periods. Figure 2 presents these evolutions for the six groups of performance identified by this analysis. Group 1, the ‘top of the top’ in terms of performance, is always above average and progressing towards the top over time. Group 2 is also above average during the two periods, but it does not make much progress. Group 4’s performances decrease: it starts above the mean in period 1 and is below in period 2. Group 3 starts below the mean in period 1 but is in the amazing position to try to catch up with Group 1 in the second period. Group 5, although it remains below the mean in both periods, makes progress during the second period. Group 6 performances are below the mean in period 1 and decrease even further during period 2.

In the following Figure 3a–c, we compare outdegree centralities, resource by resource, of the members of each performance group with those of their direct contacts and those of their dual alters. We focus on three contrasted groups, Groups 1, 3 and 4, to illustrate the effects of the latent and expanded network that is accessible through dual alters that are rich indirect contacts k (i.e. dual alters with indegrees and outdegrees at least as high as those of the average j contact). Selecting among dual alters only those with indegrees and outdegrees equal or superior to the mean values for direct j contacts shows, for example, that Group 1 (the top performers) had dual alters that are not complementary in the resources they provide. Group 1 members are atypical. Resources of their indirect contacts k are redundant, almost always below the mean. However, there is strong complementarity between their resources and those of their direct contacts, suggesting that they had already rebounded after having transformed their dual alters k into direct alters. Members of this group are not always very central overall but are strongly supported and compensated by their direct ties. They select and use their direct contacts ‘perfectly’. They seem to have ‘filled up’ already in the past in terms of contacts and resources that they need to carry out their work. Here, those who make progress are not the most central but those who have direct access to complementary social resources.

**Figure 3** (a) Complementarity of resources: comparison of standardized average outdegrees (in each of the five advice networks observed) of focal actors i, of their direct contacts j, and of their dual alters k, in performance group 1 (the top of the top). Group 1 members, the highest performers, are below average (white columns) in number of contacts who can provide insights with respect to a new line of research (Discussion network), with respect to funding, and with respect to manuscript reading before they submit the manuscript to a journal. To some extent, their direct contacts (grey columns) are rich in four of five resources (Discussion, Project reading, advice related to Funding, and advice concerning Manuscript reading). Their indirect contacts k (black columns) are all relatively poor with respect to all resources. Selecting among dual alters only those with indegrees and outdegrees equal or superior to the mean values for direct j contacts shows, for example, that group 1 had dual alters that are not complementary in the resources they provide. (b) Complementarity of resources: comparison of standardized average outdegrees (in each of the five advice networks observed) of focal actors i, of their direct contacts j, and of their dual alters k, in performance group 3. Group 3 members, the only group that catches up with Group 1 members over time, are below average (white columns) in number of contacts who can provide insights with respect to a new line of research (Discussion network) and with respect to funding, but highly above average with respect to manuscript reading before they submit the manuscript to a journal. In three kinds of resources (Project, Funding, and Recruitment), indirect contacts k (black columns) are relatively less poor than i. (c) Complementarity of resources: comparison of standardized average outdegrees (in each of the five advice networks observed) of focal actors i, of their direct contacts j, and of their dual alters k, in performance group 4. Group 4 members, among the lowest performers in this population, are above average (white columns) in number of direct contacts who can provide insights with respect to a new line of research (Discussion network) and with respect to funding, but not with respect to manuscript reading before they submit the manuscript to a journal. Their direct contacts (grey columns) are also rich in four of five resources (Discussion, Project reading, advice related to Funding, and advice concerning Recruitment). Their indirect contacts k (black columns), however, are almost all very poor with respect to all resources except advice concerning where to get funding.
In contrast, Group 3 members, the ‘catchers up’, who are not central, do have a few rich indirect contacts that are more resourceful and prestigious than they themselves and their direct contacts are. Weak centralities of i can be compensated by their j and k in two networks out of five. These indirect contacts can provide them with complementary resources with respect to Project, Funding, Recruitment, and Manuscript reading. Group 3 members, mostly LFBP, are thus particularly helped by the centrality of their laboratory. Access to these dual alters is thus likely to have helped them rebound in the second period with respect to their performance increase. These are ‘mixed’ results: when they need it to start off again, these actors may count on a certain level of complementarity in the resources of their direct and indirect contacts. This tends to show that those who make most progress start low and have access to such complementary resources to compensate for the lack of their own.

For Group 4, however, who have decreasing IF scores and among the lowest performance levels in the second period, lack of access to rich dual alters (with respect to Discussion, Project support, Recruitment, and Manuscript reading) may have slowed down their progress. When Group 4 needed complementary resources, neither its actual contacts nor its dual alters could provide them. This group counts on itself exclusively and thus does not benefit from network lift. Their extended network will not give them access to BFBPs who share, but perhaps access to BFBPs who exploit. The case of Groups 4 and 6 shows that our multilevel approach does not necessarily focus on situations in which the interplay between inter-organizational and inter-individual networks has a joint positive effect. It is complementarity between resources of the focal actors and resources provided by their dual alters that creates the joint positive effect for which we have found evidence.

In sum, the contribution of the extended network of dual alters is thus relative to the nature of resources that the actor already has to perform his/her tasks. To provide lift, the multilevel network must provide both dual alters and complementarity of the resources that they make available. Examining centralities in each detailed kind of advice network at the inter-individual level helps in specifying the relative utility or contribution of the extended network. Figure 3a–c illustrates how an extended network including dual alters and providing complementary resources provides network lift and increases its members’ performance levels. In contrast, an extended network providing non-complementary resources, *more of the same* resources, does not provide network lift. We do not have enough observations in each performance group (12, 10, and 22, respectively) to replicate the aforementioned regression models (Table 1)
for each one of them. With the aforementioned qualifications in mind, however, especially those related to relative utility and complementarity of resources, these descriptions are sufficient to identify and specify the network lift effect in the learning process among colleagues. The implication is that the network lift effect is unevenly distributed, because actors have access, owing to their hierarchical superior, to different kinds of dual alters providing either complementary or non-complementary resources.

**Conclusions**

In this article, we have outlined a new, multilevel network approach to social capital that identifies an extended intra- and inter-organizational opportunity structure. This extended opportunity structure is created by a mechanism of embedded brokerage. We show that it produces a ‘network lift’ effect on the performance of members of organizations in an inter-organizational system. Actors have a complex inter-individual network at the personal level combined with an equally complex inter-organizational network at the collective level; the latter is based on affiliation ties or personal ties with managers (hierarchical superiors) who themselves have ties in other organizations across the boundaries of their organization. We aggregate the two networks and treat the extended network as a latent structure adding actual and indirect relational capital. Our analyses found empirical evidence confirming our hypothesis about the role of members’ dual alters. This is not a latent and extended opportunity structure that individual actors can always take advantage of on their own, without help from their hierarchy. Our multilevel approach thus confirms the existence of a relationship between ‘borrowing’ relational capital by individual members, sharing relational capital by hierarchical superiors, and performance. Further analyses of this network lift effect also show that it is unevenly distributed. It works under specific conditions of levels of resources of dual alters and complementarity of their resources in terms of relative utility for the focal actor.
Even if individual performance is thus even less individual than previously thought, borrowing seems more complicated for some groups than for others, and the same kind of borrowing does not always have the same lift effects. For some, dual alters make a difference (suggesting the existence of a ‘rich get richer through embedded borrowing’ effect); for others, they do not. The advantage of the proposed approach is that it specifies and measures an organizational dimension of what has become a rather complex network determinant of performance.

Several limitations of our study of network lift can be reported at this point. First, the conditions under which, and the extent to which, individuals can rely on such a lift effect are still a matter of debate. As already called for by Snijders and Baerveldt (2003), the simple presence of a sufficient number of multilevel sub-structures in such complex dual opportunity structures will be tested only when adapted instruments are made available by statistical research (Wang et al., 2012) and when more knowledge is gathered about the intra-organizational relationships between members and managers of the organizations—i.e. unpacking the affiliation tie. Our purpose is not to put a positive twist on the old story about French science as a system of ‘mandarins’ in which powerful directors of laboratories control the access of their members to other researchers and to the environment, and especially prevent their subordinates from going directly to someone working with or under another mandarin. Indeed, further work should test the extent to which researchers go directly to colleagues (who happen to be dual alters) in whom they are interested, or whether managers introduce dual alters to their members, or whether the latter would have to go up to their own mandarin, who could (or not) put them in contact with the dual alter. Consistent with considering each level as a full-fledged level of collective action, our next hypothesis would be that, even when this system does not keep each researcher dependent upon his/her mandarin (which is usually thought to stifle progress and innovative thinking), the inter-organizational network of the laboratory opens up opportunities that many individual members would not perceive, access, or seize on their own (Lazega et al., 2008).

Second, empirical analyses and interpretations of the relationship between extended networks and performance need to be further refined. For example, the richness of direct contacts in complementary resources might also be a factor that should be controlled for in further research explaining the differences observed. Third, indirect ties and access to rich dual alters have an effect that is measurable only over time. This raises the question of the actual dynamics of such multilevel opportunity structures and of their specific impact on network lift effects. This kind of analysis could prove particularly well adapted to certain types of questions that sociologists explore in complex meso-level processes. Further use of this approach could test the generality of this method of extension of opportunity structures in other areas of interest to sociologists. It will hopefully prove to be useful in measuring the dynamics of multilevel opportunity structures in contemporary organizational societies (Perrow, 1991) and in revisiting old substantive and theoretical questions addressing social inequalities.
**Note**

1. An online document—a companion to this paper on the site of the journal—provides a visualization of this effect.

**Acknowledgements**

The empirical research on which this article is based was funded by the Association pour la Recherche sur le Cancer, a French non-profit organization. The authors are grateful to Ronald Breiger, Ronald Burt, David Dekker, David Lazega, Marijtje van Duijn, the members of the Multilevel Network Modelling Group led by Mark Tranmer, as well as to the members of the Multi-Level Social Network project at the Observatoire des Réseaux Intra- et Inter-Organisationnels funded by the French Agence Nationale de la Recherche, for stimulating discussions. They are also grateful to two anonymous reviewers and to the Editors of ESR for supplying them with constructive comments that helped improve this manuscript.

**Supplementary Data**

Supplementary data are available at ESR online.

**References**

Breiger, R. L. (1974). The duality of persons and groups. *Social Forces*, 53, 181–190.

Bryk, A. S. and Raudenbush, S. W. (1992). *Hierarchical Linear Models*. Newbury Park, CA: Sage.

Burt, R. S. (1992). *Structural Holes: The Social Structure of Competition*. Cambridge: Harvard University Press.

Burt, R. S. (2005). *Brokerage and Closure: An Introduction to Social Capital*. Oxford: Oxford University Press.

Fararo, T. J. and Doreian, P. (1980). Tripartite structural analysis: generalizing the Breiger–Wilson formalism. *Social Networks*, 6, 141–175.

Flap, H., Bulder, B. and Völker, B. (1998). Intra-organizational networks and performance: a review. *Computational & Mathematical Organization Theory*, 4, 1–39.

Hargens, L., Mullins, N. and Hecht, P. K. (1980). Research areas and stratification process in science. *Social Studies of Science*, 10, 55–75.

Hedström, P., Sandell, R. and Stern, C. (2000). Mesolevel networks and the diffusion of social movements: the case of the Swedish social democratic party. *American Journal of Sociology*, 106, 145–172.

Hsung, R.-M., Lin, N. and Breiger, R. L. (Eds.) (2009). *Contexts of Social Capital: Social Networks in Markets, Communities, and Families*. New York and London: Routledge.

Jansen, D. (2004). *Networks, Social Capital, and Knowledge Production*. Speyer: Forschung für Öffentliche Verwaltung, Universität Speyer. Discussion paper series 8.

Kozlowski, S. W. J. and Klein, K. J. (2000). A multilevel approach to theory and research in organizations: contextual, temporal and emergent processes. In Klein, K. and Kozlowski, S. (Eds.), *Multi-Level Theory, Research and Methods in Organizations*. San Francisco: Jossey-Bass, pp. 3–90.

Lazega, E. (2012). Sociologie néo-structurale. In Keucheyan, R. and Bronner, G. (Eds.), *Introduction à la Théorie Sociale Contemporaine*. Paris: Presses Universitaires de France.

Lazega, E. et al. (2008). Catching up with big fish in the big pond? Multi-level network analysis through linked design. *Social Networks*, 30, 157–176.

Leenders, R. and Gabbay, S. (Eds.) (1999). *Corporate Social Capital and Liability*. Boston: Kluwer.

Lubbers, M. J. (2003). Group composition and network structure in school classes: a multilevel application of the p* model. *Social Networks*, 25, 309–332.

Mullins, N. et al. (1977). The group structure of co-citation clusters: a comparative study. *American Sociological Review*, 42, 552–562.

Perrow, C. (1991). A society of organizations. *Theory and Society*, 20, 725–762.

Quintane, E. et al. (2012). An investigation of the temporality of structural holes. In *Academy of Management Best Papers Proceedings*,
OMT division, ISSN 2151-6561.
Robins, G. L., Woolcock, J. and Pattison, P. (2005). Small and other worlds: global network structures from local processes. *American Journal of Sociology*, 110, 894–936.
Snijders, T. A. B. and Bosker, R. (1999). *Multi-level Analysis*. London: Sage.
Snijders, T. A. B. and Baerveldt, C. (2003). A multilevel network study of the effects of delinquent behaviour on friendship evolution. *Journal of Mathematical Sociology*, 27, 123–151.
Van Duijn, M. A. J. (2006). The multilevel $p^2$ model: a random effects model for the analysis of multiple social networks. *Methodology: European Journal of Research Methods for the Behavioral and Social Sciences*, 2, 42–47.
Wang, P. et al. (forthcoming). Exponential random graph models for multilevel networks. *Social Networks*.
White, H. C., Boorman, S. and Breiger, R. L. (1976). Social structure from multiple networks: I. Blockmodels of roles and positions. *American Journal of Sociology*, 81, 730–780.
Wilson, T. P. (1982). Relational networks: an extension of sociometric concepts. *Social Networks*, 4, 105–116.
Zuckerman, H. (1977). *Scientific Elite: Nobel Laureates in the United States*. New York: The Free Press.
Virtual CME Church
2021 Women’s Missionary Council Executive Board
Theme: “Missionaries Embracing a New Era of Innovation For Greater Service”
Wednesday & Thursday, February 24 & 25, 2021
Dr. Jacqueline I. Scott, International President, Women’s Missionary Council
Patron Bishop, James B. Walker
Women’s Missionary Council Senior Bishop, Lawrence L. Reddick, III

Thelma J. Dudley Missionary Education
Introduction of the 2021–2022 Study Course
Presenter – Ida P. Suggs, Secretary, Thelma J. Dudley Missionary Education Department
• Greetings
• Purpose – To introduce the 2021–2022 Study Course
• Tribute to Our Deceased Missionary Institute Director – Mrs. Nita L. Threadgill
• Thank You – Missionary Institute Directors/Division and Departmental Secretaries/Region Presidents/Our Missionary Sisters and Supporters at the Local Level

Her obituary stated that she was a faithful member of Miles Chapel CME Church, where she was noted as a strong pillar of the church. Her wisdom, knowledge, and leadership will be truly missed and can never be replaced.

Meet Our Episcopal Missionary Institute Directors
- 1st – Mrs. Ella Watson
- 2nd – Wanda P. Henry
- 3rd – Eleanor Ellis
- 4th – Mrs. Annie Williams
- 5th – Ms. Tiffanie E. Thompson
- 6th – Mrs. Jacqueline Henry Carter
- 7th – Rev. Carole E. Richardson
- 8th – Linda M. Woolage
- 9th – Danette Armstead

2020–2021 Thelma J. Dudley Study Course
Goal: To revisit
Adult Books
• Life Interrupted by Shari
• Joyous Faith – The Key to Aging With Resilience by Michelle Howe
• Bible Study – Messy People
Young Adult Books
• Millennials

Adult & Young Adult Book Recommended for 2021: Daily Meditation
Each Chapter:
• About 4–5 pages in length
• Contains a Bible verse and a biblical quote at the beginning of each chapter
• Take-away Action Thought
• My Heart Cries Out to You, O Lord
• Faith Steps

Bible Study, Adult & Young Adult – Messy People
Phyllis H. Bedford Course of Study

The Young Adult Harvest That We Dare Not Miss
Millennials: Recapturing the Generation That Checked Out of Church
P. Douglas Small; Foreword by Dr. Tom Cheyney, The Renovate Group

Course of Study – Mattie E. Coleman Department
Well-Read Black Girl: Finding Our Stories, Discovering Ourselves
“A brilliant collection of essential American reading... smart, powerful, and complete.” —Min Jin Lee, author of Pachinko

Courageous Teens
Michael Catt and Amy Parker
Course of Study for Mattie E. Coleman Girls, 12–17
• Teen Girls

Just like your best friend ... DISNEY CHANNEL STAR TRINITEE STOKES SHARES IT ALL! From embarrassing, laugh-out-loud moments of life to favorite memories with family, friends, and costars—this book features Trinitee’s insight and answers to real fans about friends, faith, and fame. Best known as Judy Cooper from Disney Channel’s K.C. Undercover, Trinitee is also a tween like you who knows how hard it can be to stay true to yourself while following your dreams. Since she began entertaining at the age of 3, Trinitee has faced her share of pressure and obstacles, and she’ll help you handle your own life challenges with courage and charm. Page by page, you’ll feel like you’re sitting with your best friend as Trinitee empowers you to chase what matters most and have fun along the way! Bold and Blessed includes a special photo insert with pictures from Trinitee’s childhood to now.

“Dreaming is good, but dreaming is for dreamers.” That’s what Michael Jordan’s mother tells him—when he lives, breathes, and dreams basketball. It’s the 1976 Olympics and the U.S. basketball team is playing in Germany.
Everyone is following it—including Michael, but he would rather play basketball than watch it on television. What he dreams of is being on the U.S. Olympic team. How will Michael make his dream come true? Deloris Jordan, mother of the basketball great Michael Jordan, tells this true story of inspiration, hard work, determination, and what it takes to be a champion.

DELORIS JORDAN is the author of Salt in His Shoes and Michael’s Golden Rules, both illustrated by Kadir Nelson. BARRY ROOT is the illustrator of many books for children, including *By My Brother’s Side*, *Game Day*, and *Teammates*, all written by Tiki and Ronde Barber with Robert Burleigh.

Every person has a purpose, a unique effect on the world around us. And sometimes a person’s achievements are so extraordinary, they shape generations to come. Highlighting key figures in the African American fight for equality, this beautiful picture book—brought to life by thirty-four award-winning artists—takes readers through a people’s history. From George Washington Carver to Jackie Robinson, from Rosa Parks to Barack Obama, here are true pioneers of change.

“A cohesive and affecting collective portrait... African-American history is ‘the story of hope.’” —Publishers Weekly, starred review
“Celebration, inspiration, and connection are the themes that drive this big, handsome picture book... The book’s message of hope will inspire parents and grandparents to share their memories and talk with children about the future.” —Booklist, starred review

STUNNINGLY ILLUSTRATED BY Cozbi A. Cabrera • R. Gregory Christie • Bryan Collier • Pat Cummings • Leo and Diane Dillon • AG Ford • E. B. Lewis • Frank Morrison • James Ransome • Charlotte Riley-Webb • Shadra Strickland • Eric Velasquez

“It’s OK to be Different” celebrates children who have the courage to be themselves, and to accept others as they are.
“Young readers with an eye for exceptional artwork and clever wordplay will enjoy it over and over again. Parents seeking read-alouds that educate kids about diversity and acceptance will find ‘It’s OK to be Different’ the perfect lesson of choice.” (Midwest Book Review)
“‘It’s OK to be Different’ has a great message and one that is especially important in a modern world that is connected globally like never before. The book is easy and fun to read. It has delightful illustrations to capture the eyes and minds of its audience.” (Literary Titan)

When Brokenness Is Not about You
Featured Post by Be Thee Inspired
Broken World & Broken Relationships
BROKEN VESSELS
LIFE’S SAVINGS
ENDING THE CYCLE OF BROKENNESS
BROKEN YOU BROKEN ME BROKEN COMMUNITY
Finding Beauty In Brokenness
Broken Vessels
Motherhood, Moses, and the Beauty of Broken Vessels
BLACK WOMEN & BROKEN HOMES

Brokenness
• The broken person is able to verbalize his needs to others, as well. There is no brokenness where there is no openness. Almost without exception, the greatest victories over sin and temptation that I have experienced have been won when I was willing to humble myself and confess my need to a mature believer who could pray for me and help hold me accountable to obey God.
• Ultimately, brokenness is a matter of surrendering control of our life to God. The heart that has been emptied of itself and broken of its willfulness is the heart that will experience the filling and the reviving of our glorious, holy God, who humbled Himself, that He might lift us up.
“Chosen Vessels” The phrase conjures up a variety of images: sassy career women, wise church women, strong grandmothers, welfare mothers. But how about “chosen vessels”? Or “keys to change”? Perhaps we need some new images.

Women of color have historically been on the bottom of the economic and social ladder. But the paradox of the kingdom of God is that being on the bottom is a plus. God often chooses the rejected and despised to confound the wise and mighty (1 Corinthians 1:27-29). By examining our spiritual history and God-ordained destiny, this book is designed to help us turn the tide of evil in our own lives and in the lives of our families, cities and nations. We are chosen vessels. This book will help us each to find our significance in the eyes of God. This revised edition includes new Bible studies to accompany each chapter.

“Chosen Vessels gripped my heart! Like no other book … White Chosen Vessels, Black Chosen Vessels, Red, Yellow, every Chosen Vessel should read this book. Rebecca Osaigbovo has given us a glimpse into the fact that we are a chosen vessel so we can be free to become the woman God wants us to be.” CAROL KENT, speaker and author of Becoming a Woman of Influence

“I believe this book will be a source of inspiration and a source of exploration for all who read it.” DR. MYLES MUNROE, author of Understanding the Purpose and Potential of Women

Voice of an African American Woman Speaking to Other AA Women
• Women of color have historically been on the bottom of the economic and social ladder. But the paradox of the kingdom of God is that being on the bottom is a plus. God often chooses the rejected and despised to confound the wise and mighty (1 Corinthians 1:27-29). By examining our spiritual history and God-ordained destiny, this book is designed to help us turn the tide of evil in our own lives and in the lives of our families and nations.
• We are chosen vessels. This book will help us each to find our significance in the eyes of God.

Using This Book
• This book is divided into 5 parts
• Each part contains 3–4 chapters
• Each chapter is about 5–7 pages long
• Each chapter begins with 1 or 2 Bible verses
• Questions at the end of each chapter provide for deeper discussion
• The book is designed to facilitate growth
• This book also helps us to understand the pivotal role African American Christian women have in the kingdom of God

We must come to see God as He really is, for the closer we get to God, the more we will see our own need in the light of His holiness. In the fifth chapter of Isaiah, the great prophet pronounces well-deserved woes on the materialistic, sensual, proud, immoral people of his day. Over and over he cries out, “Woe to them . . . .” But then Isaiah comes face-to-face with the holiness of God, and his next words are, “Woe to me!” (Isa. 6:5, emphasis mine). The broken man or woman is more conscious of the corruption in his own breast than in the heart of his neighbor. Having seen God for who He is, we must cry out to Him for mercy. Learning to acknowledge and verbalize our spiritual need to God is essential to a lifestyle of brokenness. The broken person does not blame others. His heart attitude is, “It’s not my brother nor my sister, but it’s me, oh, Lord, standing in the need of prayer.”

Sometimes a Heart Must Be Broken To Let Light In

Conducting Your Institute
Please take a count.
The Handbook requires us to complete 10 hours via Zoom, conference call, or in person (if your state okays going back into your building).

BOOKS CAN BE SECURED FROM
• Amazon
• Christianbook.com
• Urban Spirit Publishing and Media Company, LLC
• www.bloomsburykids.com
• 4 Kids Like Mine
• Walmart / Target / Books-a-Million

STAY CONNECTED
Thank You!
Brane solutions of a spherical sigma model in six dimensions

Hyun Min Lee\textsuperscript{1} and Antonios Papazoglou\textsuperscript{2}

Physikalisches Institut der Universität Bonn, Nussallee 12, D-53115 Bonn, Germany

Abstract

We explore solutions of six dimensional gravity coupled to a non-linear sigma model, in the presence of co-dimension two branes. We investigate the compactifications induced by a spherical scalar manifold and analyze the conditions under which they are of finite volume and singularity free. We discuss the issue of single-valuedness of the scalar fields and provide a special embedding of the scalar manifold to the internal space which solves this problem. These brane solutions furnish some self-tuning features; however, they do not provide a satisfactory explanation of the vanishing of the effective four dimensional cosmological constant. We discuss the properties of this model in relation with the self-tuning example based on a hyperbolic sigma model.

1 Introduction

Recently, there has been a lot of work on extra dimensional models with brane sources in relation with the cosmological constant problem [1]. The aim has been to find solutions with zero effective four dimensional cosmological constant regardless of the value of the brane vacuum energy. This adjustment mechanism has been called self-tuning and is particularly promising for codimension-two branes [2]. These branes have the property that they do not curve the extra space but only induce a conical deficit in the internal geometry. Thus, it is conceivable that the brane vacuum energy in this case is absorbed in a change of the deficit angle without affecting the properties of the bulk solution.

The latter scenario was studied in detail in the framework of compactifications in the presence of gauge field fluxes. This kind of compactification was first considered in the seventies [3] (under the name of spontaneous compactification) and has been revisited recently because of its ability to fix some or all of the moduli of extra dimensional models. The first attempt to realize self-tuning in flux compactifications was made in [4], where an example of a “rugby-ball”-shaped internal space was constructed and it was shown that flat solutions exist for any value of the brane tension. The model included a bulk fine-tuning between the flux and the bulk cosmological constant, which can be relaxed if supersymmetry is invoked in the bulk [5] (generalizing the supergravity solution of [6]). However, it was soon realized that due to flux quantization [7] (or even flux conservation [8]), a relation between the brane tension and the bulk cosmological constant is introduced and so the self-tuning is ruined. For a detailed discussion of the properties of these models (with or without supersymmetry) see [9–14].

There is another mechanism that induces compactification of the extra space dimensions, which utilizes a non-linear sigma model and has not been discussed in as much detail as the previous case. This kind of compactification was first considered in the eighties [15, 16] in the framework of supergravity and used non-trivial backgrounds of the fields of a hyperbolic sigma model to compactify the internal space to a manifold called “tear-drop”. Recently, this kind of solution was generalized by [17] with codimension-two branes, yielding self-tuning solutions which do not have the complications of the previous flux models.
These models, however, possess a naked singularity in the bulk, and although one can prevent energy flow into the singularity by appropriate boundary conditions [16], the solution cannot be trusted close to the singularity since the curvature becomes significant. In the present paper, we consider instead a spherical sigma model and derive solutions with codimension-two branes. We first present the analog of the “rugby-ball” compactification and then generalize to more general compactifications with azimuthal symmetry. Depending on the values of the sigma model coupling and the brane tensions, we can find non-singular solutions of finite volume. We discuss the issue of the single-valuedness of the sigma model fields and a special embedding of the scalar manifold to the internal space which circumvents this problem. We note that although the above solutions have self-tuning features, they cannot provide a satisfactory explanation to the cosmological constant problem, since there exist nearby curved solutions for non-zero bulk cosmological constant. Finally, we compare this model with the self-tuning model based on a hyperbolic sigma model and conclude.

2 Model setup

We will consider a six-dimensional model with gravity coupled to a two-dimensional non-linear sigma model with metric $f_{ij}(\phi)$, in the presence of codimension-two branes. The full action of the system is $$S = \int d^6 x \sqrt{-g} \left( \frac{1}{2} R - \frac{1}{2} k f_{ij} \partial_M \phi^i \partial^M \phi^j \right) + S_4, \tag{1}$$ where $\phi^i$ ($i = 1, 2$) are real scalar fields and $k > 0$ is the coupling of the sigma model to gravity. The scalar manifold is chosen to be a sphere with metric $f_{ij}$ given by $$d\sigma_f^2 = (d\phi^1)^2 + \sin^2 \phi^1 (d\phi^2)^2. \tag{2}$$ The brane action $S_4$ is given by the following localized terms $$S_4 = \sum_{i=1,2} \int d^4 x d^2 y \sqrt{-g^{(i)}} (-\Lambda_i) \delta^2(y - y_i), \tag{3}$$ where $g_{\mu\nu}^{(i)}$, $\Lambda_i$ and $y_i$ are the induced metrics, tensions and positions of the two branes in the extra dimensions, respectively. We wish to find solutions of the above system where the internal two dimensional manifold is compactified and is axisymmetric. The two 3-branes are placed at antipodal points on the axis of symmetry of the internal manifold.
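For later use in the scalar field equation below, it is convenient to record (a short computation of ours, not spelled out in the text) the non-vanishing Christoffel symbols of the target-space metric (2):
$$\gamma^1_{22} = -\sin \phi^1 \cos \phi^1, \qquad \gamma^2_{12} = \gamma^2_{21} = \cot \phi^1,$$
with all other components vanishing.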
The metric variation of the above action (1) gives rise to the Einstein equation $$R_{MN} - \frac{1}{2} g_{MN} R = k \, f_{ij} \left( \partial_M \phi^i \partial_N \phi^j - \frac{1}{2} g_{MN} \partial_P \phi^i \partial^P \phi^j \right) + T_{MN}^{(4)}, \tag{4}$$ with the brane energy-momentum tensor $$T_{MN}^{(4)} = - \sum_{i=1,2} \frac{\sqrt{-g^{(i)}}}{\sqrt{-g}} \Lambda_i g_{\mu\nu}^{(i)} \delta^{\mu}_M \delta^{\nu}_N \delta^2(y - y_i). \tag{5}$$ We can rewrite the Einstein equation in a simpler way in terms of the Ricci tensor as $$R_{MN} = k \, f_{ij} \partial_M \phi^i \partial_N \phi^j - \sum_{i=1,2} \frac{\sqrt{-g^{(i)}}}{\sqrt{-g}} \Lambda_i (\delta^{\mu}_M \delta^{\nu}_N g_{\mu\nu}^{(i)} - g_{MN}) \delta^2(y - y_i). \tag{6}$$ \footnote{We restrict the sign of $k$ in order not to have a ghost-like kinetic term for the sigma model.} On the other hand, the field equation for the scalars is \[ \frac{2}{\sqrt{-g}} \partial_M \left( \sqrt{-g} f_{ij} \partial^M \phi^j \right) = \frac{\partial f_{kl}}{\partial \phi^i} \partial_M \phi^k \partial^M \phi^l, \] (7) or equivalently \[ \square \phi^i = \frac{1}{\sqrt{-g}} \partial_M \left( \sqrt{-g} \partial^M \phi^i \right) = -\gamma^i_{kl} \partial_M \phi^k \partial^M \phi^l, \] (8) where \( \gamma^i_{kl} \) are the Christoffel symbols of the sigma model metric \( f \).

3 “Rugby-ball”-shaped internal space

In order to find a background solution in this model, let us take the ansatz for the metric with factorizable extra dimensions as \[ ds^2 = g_{\mu\nu}(x) dx^\mu dx^\nu + \gamma_{mn}(y) dy^m dy^n, \] (9) where \( g_{\mu\nu}(x) \) denotes the four dimensional spacetime, which is taken to be maximally symmetric. Then, when the scalars depend only on the extra coordinates, eq. (6) implies that \( R_{\mu\nu} = 0 \), i.e. the only maximally symmetric spacetime that is a solution is Minkowski spacetime. With the 4d flat metric, \( g_{\mu\nu} = \eta_{\mu\nu} \), let us take the ansätze as follows \[ ds_\gamma^2 = R_0^2(d\theta^2 + \beta^2 \sin^2 \theta \, d\psi^2), \] (10) \[ \phi^1 = \theta, \] (11) \[ \phi^2 = \beta \psi + c, \] (12) with \( c \) being an integration constant. Then the field equation (8) for the scalars is satisfied, while the Einstein equation (6) in the bulk is satisfied only for \( k = 1 \). We note that the radius of the extra dimensions \( R_0 \) is not determined from the equations of motion. Matching the singular terms coming from the conical singularities with the brane source terms leads to the following relation between the brane tensions and the parameter \( \beta \) \[ \Lambda_1 = \Lambda_2 = 2\pi(1 - \beta), \] (13) with the deficit angle at each of the two branes being equal to \( \delta = 2\pi(1 - \beta) \). Therefore, we find that changing the brane tensions is compensated by the deficit angle on the sphere. To avoid the tuning between the brane tensions, we need to consider the orbifold \( S^2/Z_2 \) with \( Z_2 \) acting on the sphere coordinates as \[ \theta \rightarrow \pi - \theta, \quad \psi \rightarrow \psi. \] (14) Then, in order to have a well defined solution, it is enough to impose the \( Z_2 \) parities on \( \phi^1 \) and \( \phi^2 \) as \[ \phi^1 \rightarrow \pi - \phi^1, \quad \phi^2 \rightarrow \phi^2. \] (15)

4 General internal space

In the previous section we obtained a flat solution with “rugby-ball”-shaped extra dimensions, for which the two brane tensions are the same. This happens only for $k = 1$. This is a sort of a bulk fine-tuning of the scalar coupling to gravity.
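Before proceeding, here is a minimal symbolic sketch of ours (not part of the original text) verifying the claim of section 3: on the ansatz (10)–(12), the internal Einstein equation $R_{mn} = k f_{ij}\partial_m\phi^i\partial_n\phi^j$ holds exactly when $k = 1$.

```python
# Sketch (ours): check that the ansatz (10)-(12) solves the internal
# Einstein equation R_mn = k f_ij d_m phi^i d_n phi^j only for k = 1.
import sympy as sp

theta, psi, R0, beta, k = sp.symbols('theta psi R_0 beta k', positive=True)
coords = [theta, psi]
n = 2

# Internal metric (10): ds^2 = R0^2 (dtheta^2 + beta^2 sin^2(theta) dpsi^2)
g = sp.diag(R0**2, R0**2 * beta**2 * sp.sin(theta)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the internal metric
Gamma = [[[sp.simplify(sp.Rational(1, 2) * sum(
              ginv[a, d] * (sp.diff(g[d, b], coords[c])
                            + sp.diff(g[d, c], coords[b])
                            - sp.diff(g[b, c], coords[d]))
              for d in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor: R_bc = d_a Gamma^a_bc - d_c Gamma^a_ba
#                      + Gamma^a_ad Gamma^d_bc - Gamma^a_cd Gamma^d_ba
Ric = sp.zeros(n, n)
for b in range(n):
    for c in range(n):
        expr = 0
        for a in range(n):
            expr += (sp.diff(Gamma[a][b][c], coords[a])
                     - sp.diff(Gamma[a][b][a], coords[c]))
            for d in range(n):
                expr += (Gamma[a][a][d] * Gamma[d][b][c]
                         - Gamma[a][c][d] * Gamma[d][b][a])
        Ric[b, c] = sp.simplify(expr)

# Scalar ansatz (11)-(12) (the constant c drops out) and sphere metric (2)
phi = [theta, beta * psi]
f = sp.diag(1, sp.sin(phi[0])**2)
src = sp.Matrix(n, n, lambda m, l: sp.simplify(
    sum(f[i, j] * sp.diff(phi[i], coords[m]) * sp.diff(phi[j], coords[l])
        for i in range(n) for j in range(n))))

print(sp.simplify(Ric - src))              # zero matrix: equation holds for k = 1
print(sp.solve((Ric - k * src)[0, 0], k))  # -> [1]
```

The same computation also shows that $R_0$ drops out of both sides, which is the statement in the text that the radius is left undetermined by the equations of motion.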
In this section, we find more general solutions with different embeddings of the brane singularities in the metric, for which one does not need $k = 1$. Let us define a complex scalar field in terms of the $\phi^i$’s as $$\Phi = \left( \tan \frac{\phi^1}{2} \right) e^{i\phi^2}. \tag{16}$$ Then, the scalar manifold metric in eq. (2) becomes $$d\sigma_f^2 = \frac{4 \, d\Phi d\bar{\Phi}}{(1 + |\Phi|^2)^2}. \tag{17}$$ With the above definition of the fields, the Einstein equation (6) and the field equation (8) for $\Phi$ read, respectively, $$R_{MN} = \frac{2k}{(1 + |\Phi|^2)^2} (\partial_M \Phi \partial_N \bar{\Phi} + \partial_N \Phi \partial_M \bar{\Phi}) - \sum_{i=1,2} \frac{\sqrt{-g^{(i)}}}{\sqrt{-g}} \Lambda_i (\delta^\mu_M \delta^\nu_N g^{(i)}_{\mu\nu} - g_{MN}) \delta^2(y - y_i), \tag{18}$$ and $$\frac{1}{\sqrt{-g}} \partial_M (\sqrt{-g} \partial^M \Phi) = \frac{2\bar{\Phi}}{1 + |\Phi|^2} \partial_M \Phi \partial^M \Phi. \tag{19}$$ In order to find a flat solution, let us assume that the extra dimensions are factorized and take the ansätze for the internal metric and the complex scalar field in complex coordinates as $$ds_2^2 = r_0^2 e^{2A(z,\bar{z})} dz d\bar{z}, \tag{20}$$ $$\Phi = \Phi(z, \bar{z}), \tag{21}$$ where the “radius” $r_0$ is a scale typical of the size of the internal space. Then, the $(z\bar{z})$ Einstein equation and the field equation are $$-2 \partial \bar{\partial} A = \frac{2k}{(1 + |\Phi|^2)^2} (\partial \Phi \bar{\partial} \bar{\Phi} + \bar{\partial} \Phi \partial \bar{\Phi}) + \sum_{i=1,2} \Lambda_i \delta^2(z - z_i), \tag{22}$$ $$\partial \bar{\partial} \Phi = \frac{2\bar{\Phi}}{1 + |\Phi|^2} \partial \Phi \bar{\partial} \Phi. \tag{23}$$ The \((zz)\) Einstein equation dictates that \(\Phi\) is either holomorphic or antiholomorphic. Then the scalar field equation is automatically satisfied for any (anti)holomorphic function \(\Phi = \Phi(z)\) (\(\Phi = \Phi(\bar{z})\)). Assuming that one of the branes is located at \(z_1 = 0\), we can readily get the solution for the metric in terms of the scalar field as \[ A = -k \ln(1 + |\Phi|^2) - a \ln |z| + f(z) + \bar{f}(\bar{z}), \] (24) where the functions \(\Phi(z)\), \(f(z)\) and \(\bar{f}(\bar{z})\) are regular at \(z = 0\). At this point, in order to illustrate some explicit solutions, we take a simple holomorphic function for the complex scalar \[ \Phi(z) = c_0 z^b, \] (25) with \(c_0\) a phase (\textit{i.e.} \(|c_0| = 1\)) and \(b\) a real parameter. Then the internal metric becomes \[ ds_2^2 = r_0^2 |z|^{-2a} \frac{dz d\bar{z}}{(1 + |z|^{2b})^{2k}}. \] (26) In order to see how the parameter \(a\) is related to the tension of the brane sitting at \(z = 0\), we examine the metric at the origin \[ ds_2^2 = r_0^2 [d\rho^2 + (1 - a)^2 \rho^2 d\psi^2], \] (27) where a change of coordinates \(\rho = |z|^{1-a}/(1-a)\) has been performed. Then we find that the conical singularity at \(z = 0\) must be matched with the brane tension as \[ \frac{\Lambda_1}{2\pi} = 1 - |1 - a| \equiv 1 - \beta_1, \] (28) and the deficit angle of the brane sitting at \(z = 0\) is \(\delta_1 = 2\pi(1 - \beta_1)\). As we will see shortly, the condition for finite volume of the internal space forces \(a < 1\), so finally \[ \frac{\Lambda_1}{2\pi} = a \equiv 1 - \beta_1. \] (29) As we see from the metric, the antipodal point of \(z = 0\) on the axis of symmetry of the internal space is \(z \to \infty\) (note that this point is at finite proper distance from \(z = 0\)). At this point we should in principle put a second 3-brane.
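As a worked check of the coordinate change used above (our own steps, filling in the computation): writing $z = r e^{i\psi}$ so that $dz\,d\bar z = dr^2 + r^2 d\psi^2$, the metric (26) near $z = 0$ (where the factor $(1+|z|^{2b})^{-2k} \to 1$ for $b > 0$) reduces to
$$ds_2^2 \simeq r_0^2 \left( r^{-2a}\, dr^2 + r^{2-2a}\, d\psi^2 \right),$$
and the substitution $\rho = r^{1-a}/(1-a)$, $d\rho = r^{-a}\, dr$, brings this to the conical form (27). A small circle of proper radius $r_0 \rho$ then has circumference $2\pi |1-a|\, r_0 \rho$, i.e. a deficit angle $2\pi(1 - |1-a|)$, which is the matching condition (28).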
In order to see how the parameter \(b\) is related to the tension of the brane sitting at \(z \to \infty\), we examine the asymptotic form of the metric \[ ds_2^2 = r_0^2 [d\rho^2 + (1 - a - 2kb)^2 \rho^2 d\psi^2], \] (30) where a change of coordinates \(\rho = |z|^{1-a-2kb}/(1-a-2kb)\) has been performed. Then, we find that the additional conical singularity at \(z \to \infty\) must be matched with the other brane tension as \[ \frac{\Lambda_2}{2\pi} = 1 - |1 - a - 2kb| \equiv 1 - \beta_2, \] (31) and the deficit angle of the brane sitting at $z \to \infty$ is $\delta_2 = 2\pi(1 - \beta_2)$. As we will see shortly, the condition for finite volume of the internal space forces $a + 2kb > 1$, so finally $$\frac{\Lambda_2}{2\pi} = 2 - a - 2kb \equiv 1 - \beta_2. \tag{32}$$ Using eq. (29), we can rewrite the above condition as $$\Lambda_1 + \Lambda_2 = 4\pi(1 - kb). \tag{33}$$ Therefore, in view of eqs. (29) and (33), we find that any change of the brane tensions can be compensated via the parameters $a$ and $b$ of the solution, maintaining the flat solution. In the following we will assume that $b > 0$, since the $b < 0$ case is its dual under $z \to 1/z$. The size of the extra dimensions $r_0$ is not determined, just as in the previous section. Therefore, there exists a modulus in the system which corresponds to the volume.

The solution we have obtained up to now could in principle suffer from curvature singularities at $z = 0$ or $z \to \infty$. We should first of all make sure that the following curvature invariants (computed for the specific background) are everywhere finite $$R^2 = 2R_{MN}^2 = R_{MNAB}^2 = \frac{64b^4k^2}{r_0^4}r^{4(a+b-1)}(1 + r^{2b})^{4(k-1)}, \tag{34}$$ where we have set $z = r e^{i\psi}$. Thus, to have regular solutions the following conditions should be satisfied $$a + b \geq 1 \quad , \quad a + b(2k - 1) \leq 1, \tag{35}$$ which control the behaviour of (34) at $r \to 0$ and $r \to \infty$, respectively. Additionally, we wish the volume of the internal space to be finite. This is proportional to $$\int_0^\infty dr \frac{r^{1-2a}}{(1 + r^{2b})^{2k}}. \tag{36}$$ Thus, to have finite volume solutions the following conditions should be satisfied $$a < 1 \quad , \quad a + 2kb > 1; \tag{37}$$ indeed, near $r = 0$ the integrand behaves as $r^{1-2a}$, integrable precisely for $a < 1$, while at large $r$ it behaves as $r^{1-2a-4kb}$, integrable precisely for $a + 2kb > 1$.

Let us discuss now some special points in the above allowed parameter space.

**i.** For $k = 1$ we see that the only way to satisfy all the constraints is when $a + b = 1$ and $a < 1$. Then $\Lambda_1 = \Lambda_2 = 2\pi a$ and we obtain exactly the “rugby-ball”-shaped solution of the previous section. The metric is then indeed written as $$ds_2^2 = r_0^2 \frac{dzd\bar{z}}{(|z|^a + |z|^{2-a})^2} = r_0^2 \frac{dzd\bar{z}}{|z|^2(|z|^\beta + |z|^{-\beta})^2}, \tag{38}$$ which is the metric of the “rugby-ball” as in Ref. [4], with $r_0 = 2R_0\beta$, where $R_0$ is the radius of the “rugby-ball”. The case $a = 0$, $b = k = 1$ corresponds to the sphere.

**ii.** For \( bk = 1 \) we see from (33) that \( \Lambda_1 + \Lambda_2 = 0 \), so the two branes have opposite tensions and the geometry resembles a “heart”-shaped internal space, as in Fig. 1. From the constraints (35), (37), we obtain \[ a + b \geq 1 \quad , \quad a \leq b - 1 \quad , \quad -1 < a < 1. \] (39) The allowed parameter space is shown in Fig. 2. Let us note that for the semi-line \( a = 0, \ b > 1 \) we obtain a configuration without branes, which is an ellipsoid. The point \( a = 0, \ b = k = 1 \) corresponds to the sphere.

[Figure 2: “Heart”-shaped solution: the allowed parameter space of \( a, b \) for \( kb = 1 \). The dot and the solid line in the shaded region correspond to a sphere and an ellipsoid without branes, respectively.]
**iii.** For \( a = 0, \ bk \neq 1 \) or for \( a = 2(1 - kb), \ bk \neq 1 \), the internal space supports only one three-brane, as in Fig. 3. These two cases are related by the duality \( z \rightarrow 1/z \), so let us examine only the case with \( a = 0, \ bk \neq 1 \), where \( \Lambda_1 = 0 \) and \( \Lambda_2 = 4\pi(1 - kb) \). Then the constraints (35), (37) give \[ b \geq 1 \quad , \quad bk > 1/2 \quad , \quad b(2k - 1) \leq 1. \] (40) The allowed parameter space is shown in Fig. 4. Let us note that \( \Lambda_2 \) can have either sign, depending on \( b \).

[Figure 4: One three-brane solution: the allowed parameter space of \( b, k \) for \( a = 0 \) and \( kb \neq 1 \). The brane tension is positive in the green region and negative in the red region.]

Apart from the above special cases, it is easy to see that there exist generic regions in the \((a, b, k)\) parameter space where the conditions (35), (37) are satisfied. The important observation is that these allowed regions are not isolated points in the parameter space but rather continuous intervals. So the parameters are allowed to vary continuously without affecting the flatness of the solution. As an illustration of the above remark, let us consider a specific example with $k = 1/2$. Then from (35), (37) we have $$a < 1 \quad , \quad a + b > 1. \tag{41}$$ The possible geometries are shown in Fig. 5 and the allowed parameter space in Fig. 6.

[Figure 5: The possible geometries with $k = 1/2$. The figures depict two positive tensions, opposite tensions and two negative tensions, in order from left to right.]

[Figure 6: The allowed parameter space of $a, b$ for $k = \frac{1}{2}$. Both brane tensions are positive (negative) in the green (red) region, while the brane tensions take opposite signs in the yellow region.]

5 Single-valuedness of the scalar field

In finding the above solutions we have omitted checking whether the scalar field is single-valued. In that sense all the above solutions are incomplete. In this section we will show that generically there exists a problem, which can be solved by an appropriate embedding of the scalar manifold coordinates into the internal space geometry. Let us first note the problem: the scalar field should be single-valued once we perform a $2\pi$ rotation (of $\psi$) around the axis of symmetry\footnote{We thank Stefan Förste for discussions on this point.}. In other words $$\Phi(r, 0) = \Phi(r, 2\pi) \quad \Rightarrow \quad e^{2\pi i b} = 1 \quad \Rightarrow \quad b = n, \quad n = \text{integer}. \quad (42)$$ But the parameter $b$ is not generically an integer (since it is related to the tension of one of the branes), so one is directed to identifying points of the scalar manifold in order for the $\Phi$ field to be periodic. The latter amounts, however, to changing the scalar manifold, and thus the dynamics of the system, every time the brane tensions change. This is a mere fine-tuning which we want to avoid if we wish to use the previous solutions for obtaining self-tuning. To circumvent this problem, we need to embed the scalar manifold in the internal space in a more contrived way. For this purpose we define the new field $X$ instead of $\Phi$ $$X = \left( \tan \frac{\phi^1}{2} \right) e^{iK(\phi^2)}, \quad (43)$$ where $K(\phi^2)$ is a function which is to be determined by the requirement that the scalar field $X$ is single-valued. The latter condition reads $$K[\phi^2(2\pi)] = K[\phi^2(0)] + 2\pi n, \quad (44)$$ where $n$ is an integer.
Remember also that for our solutions $\phi^2 = b\psi + c$, where $c$ is the integration constant appearing in (12) and in (25) if we set $c_0 = e^{ic}$. The trivial choice $K(\phi^2) = \phi^2$ fails to give an equation which determines $c$ and instead quantizes $b$. But for generic choices of $K(\phi^2)$, it is possible to have arbitrary (and continuous) $b$ and satisfy the single-valuedness condition by choosing an appropriate integration constant $c$. Let us discuss a simple example of an embedding which serves the above purpose $$K(\phi^2) = \phi^2 + \epsilon (\phi^2)^2, \quad (45)$$ where $\epsilon$ is a parameter characteristic of the embedding. Inserting $\phi^2 = b\psi + c$ into (44) with the quadratic embedding (45), the condition becomes $2\pi b + \epsilon \left[ (2\pi b + c)^2 - c^2 \right] = 2\pi n$, i.e. $b\left[ 1 + 2\epsilon(\pi b + c) \right] = n$, so that the condition (44) gives $$c = -\pi b + \frac{1}{2\epsilon} \left( \frac{n}{b} - 1 \right). \quad (46)$$ For this choice of the integration constant $c$, the scalar field $X$ is rendered single-valued. Finally, let us get a better insight into the special embedding that we have chosen. The redefinition of the fields gives a mapping $X = f(\Phi, \bar{\Phi})$, so it gives a solution for $X$ which is non-holomorphic. What we have actually done by passing from the (multi-valued) field $\Phi$ to the (single-valued) $X$ is to find a solution of the equations of motion for non-holomorphic embeddings, keeping the same solution for the spacetime metric.

6 Nearby curved solutions

Up to now, we have assumed that there is no bulk cosmological constant. In this section we consider the model with a non-zero bulk cosmological constant. We show that there exist nearby curved solutions to the flat solution with the “rugby-ball”-shaped internal space. When we add a nonzero bulk cosmological constant $\Lambda_b$ to the sigma model action, the bulk action becomes $$S_{bulk} = \int d^6 x \sqrt{-g} \left( \frac{1}{2} R - \Lambda_b - 2k \frac{\partial_M \Phi \partial^M \bar{\Phi}}{(1 + |\Phi|^2)^2} \right). \quad (47)$$ Then, the modified Einstein equation is $$R_{MN} = \frac{1}{2} \Lambda_b g_{MN} + \frac{2k}{(1 + |\Phi|^2)^2} (\partial_M \Phi \partial_N \bar{\Phi} + \partial_N \Phi \partial_M \bar{\Phi}) - \sum_{i=1}^{2} \frac{\sqrt{-g^{(i)}}}{\sqrt{-g}} \Lambda_i (\delta^\mu_M \delta^\nu_N g^{(i)}_{\mu\nu} - g_{MN}) \delta^2(y - y_i), \quad (48)$$ and the field equation for $\Phi$ is the same as eq. (19). Let us take the metric ansatz with factorized extra dimensions as $$ds^2 = g_{\mu\nu}(x) dx^\mu dx^\nu + r_0^2 e^{2A(z,\bar{z})} dz d\bar{z}, \quad (49)$$ where $g_{\mu\nu}(x)$ denotes the four dimensional maximally symmetric spacetime, with its Ricci tensor given by $R_{\mu\nu} = 3\lambda g_{\mu\nu}$. Here, $\lambda$ is a constant parameter which gives a 4d dS solution for $\lambda > 0$, a 4d flat solution for $\lambda = 0$, and a 4d AdS solution for $\lambda < 0$. Then, the Einstein equations give rise to $$\lambda = \frac{1}{6} \Lambda_b, \quad (50)$$ and $$-2\partial \bar{\partial} A = \frac{1}{4} \Lambda_b r_0^2 e^{2A} + \frac{2k}{(1 + |\Phi|^2)^2} (\partial \Phi \bar{\partial} \bar{\Phi} + \bar{\partial} \Phi \partial \bar{\Phi}) + \sum_{i=1,2} \Lambda_i \delta^2(z - z_i). \quad (51)$$ The field equation for the scalar field is the same as eq. (23). As in the flat case, the $(zz)$ Einstein equation dictates that $\Phi$ is (anti)holomorphic, and for any such function the scalar field equation is trivially satisfied. Then, taking the solution for the metric as $$A = -\ln(1 + |\Phi|^2) - a \ln |z|, \quad (52)$$ we find from eq. (51) the holomorphic solution for $\Phi$ as follows, $$\Phi = c_0 z^{1-a}, \quad (53)$$ with $c_0$ a constant phase.
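To see how eq. (51) fixes the radius (a short check of ours, filling in the computation): away from the branes, the ansatz (52) gives
$$-2\partial \bar{\partial} A = 2\, \partial \bar{\partial} \ln(1 + |\Phi|^2) = \frac{2\, |\Phi'|^2}{(1 + |\Phi|^2)^2}$$
for holomorphic $\Phi$, while the right-hand side of (51) contributes $\frac{1}{4} \Lambda_b r_0^2 e^{2A} + \frac{2k\, |\Phi'|^2}{(1 + |\Phi|^2)^2}$. Matching the two sides therefore requires
$$\frac{1}{4} \Lambda_b r_0^2\, e^{2A} = \frac{2(1-k)\, |\Phi'|^2}{(1 + |\Phi|^2)^2},$$
and since (52)–(53) give $e^{2A} = |z|^{-2a} \big( 1 + |z|^{2(1-a)} \big)^{-2}$ and $|\Phi'|^2 = (1-a)^2 |z|^{-2a}$, the $z$-dependence drops out of both sides and the radius is fixed as in the relation quoted next.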
The radius of the sphere is then determined in terms of $k$ and $\Lambda_b$ as $$r_0^2 = \frac{8(1-k)(1-a)^2}{\Lambda_b}. \quad (54)$$ The parameter $a$ is related as usual to the brane tensions as $$a = \frac{\Lambda_1}{2\pi} = \frac{\Lambda_2}{2\pi}. \quad (55)$$ Therefore, the curved solutions for nonzero $\Lambda_b$ and $k \neq 1$ are continuously connected to the “rugby-ball”-shaped flat solution with $\Lambda_b = 0$ and $k = 1$ (see Fig. 7). Note that for $0 < k < 1$ there exist only dS solutions, while for $k > 1$ there exist only AdS solutions: positivity of $r_0^2$ in (54) requires $\Lambda_b$ and $1-k$ to have the same sign, while (50) ties the sign of $\lambda$ to that of $\Lambda_b$. We should note here that the above is a simple solution obtained by assuming the ansatz (52), and that there could exist more general solutions with more complicated embeddings of $\Phi$ if we change the ansatz (52). These more general solutions could provide nearby curved analogues for the other non-constant curvature compactifications that we found in section 4.

7 Discussion and conclusions

In this paper we have explored brane solutions for the spherical sigma model. It is helpful to recall the analogous brane solutions of the hyperbolic sigma model [17] and compare the two. The first important difference between the two approaches has to do with supersymmetry. In the hyperbolic case, the sigma model arises in six dimensional supergravity and thus the fine-tuning of the bulk cosmological constant can be explained. We should note here that the model as it stands, with vacuum branes, is supersymmetric and thus there is no mystery as to why Weinberg’s theorem is not applicable\footnote{We thank Hans-Peter Nilles and Gianmassimo Tasinato for discussions on this point.}. [This also raises a question about the behaviour of the system regarding the effective cosmological constant if supersymmetry is broken.] On the other hand, the spherical sigma model cannot arise in supergravity and thus one has no justification for setting the bulk cosmological constant to zero.

A second important difference between these models has to do with the presence of a naked singularity in the hyperbolic case\footnote{We note that for $k = -1$ (when the scalars have a ghost-like kinetic term), one can see that the singularity is absent but the volume of the internal space diverges.}. There have been arguments in [16, 17] that with appropriate boundary conditions on the naked singularity one can prevent energy, angular momentum and $U(1)$ charge (related to the azimuthal isometry) from flowing into the singularity. However, the solution remains troublesome because infinitesimally close to the singularity the curvature explodes and the description of the theory breaks down. It is not clear if the completion of the theory in that high-curvature regime will retain the properties discussed before. On the other hand, the spherical sigma model has no singularity problem for certain intervals of the brane tensions and sigma model coupling, and thus is completely under control in the present theory.

Finally, a third difference between the two models is the issue of the single-valuedness of the scalar fields. In the hyperbolic case there is again a problem of single-valuedness, which however can be solved easily by having $b = n$, $n$ an integer. Note that without this condition one is forced to identify points of the scalar manifold in such a way that the scalar becomes single-valued, which as discussed in section 5 amounts to fine-tuning. However, unlike in the spherical case, requiring $b$ to be an integer poses no problem there.
In the spherical case the parameter $b$ was related to the tension of the second three-brane and thus could not in general be an integer. On the contrary, in the hyperbolic case there is no relation of $b$ to other input quantities, and thus a solution with $b$ an integer is satisfactory, without any need of a special embedding of the scalar manifold into the internal space.

Let us now briefly comment on the moduli of the system of the spherical sigma model (similar conclusions are expected to hold also for the hyperbolic case). As was seen in the previous sections, the “radius” $r_0$ of the internal space is undetermined for the flat space compactifications. Thus there exists a massless scalar in the four dimensional theory which should be stabilized by some mechanism. There is no guarantee that the stabilization mechanism will not disturb the self-tuning property. On the other hand, for curved vacua like the ones of the previous section, the “radius” $r_0$ of the internal space is fixed and the radion is massive, with a mass that depends on the effective four dimensional cosmological constant.

In conclusion, we have presented new solutions of a six dimensional non-linear sigma model in the presence of codimension-two branes. We discussed in detail the conditions that should be satisfied for the absence of singularities and for obtaining an internal space of finite volume. We noted that there is parameter space where there exist one or two branes along the axis of symmetry and that they can have any combination of tensions (i.e. positive or negative). In order for the solutions to be single-valued, one has to use a special embedding of the scalar manifold into the internal space. These solutions furnish a self-tuning feature, in the sense that the brane vacuum energy is not related to the flatness of the four dimensional effective theory. However, these solutions still cannot give a satisfactory resolution of the cosmological constant problem, since the bulk cosmological constant controls the flatness of the solutions and should be set to zero in order to obtain Minkowski four dimensional spacetime. Nevertheless, if one accepts a single fine-tuning, that of the bulk cosmological constant, the above self-tuning mechanism can guarantee flatness of the solutions irrespective of the brane tensions. Then the challenge that is posed is to find some dynamics which can relax the bulk cosmological constant to zero, in order to avoid the latter fine-tuning.

**Acknowledgments:** We would like to acknowledge helpful discussions with Stefan Förste, Hans-Peter Nilles and Gianmassimo Tasinato. This work is supported in part by the European Community’s Human Potential Programme under contracts HPRN–CT–2000–00131 Quantum Spacetime, HPRN–CT–2000–00148 Physics Across the Present Energy Frontier and HPRN–CT–2000–00152 Supersymmetry and the Early Universe. H.M.L. was also supported by priority grant 1096 of the Deutsche Forschungsgemeinschaft.

**Note added:** While this work was being completed, [18] appeared in the pre-print archives, considering the same spherical sigma model and obtaining similar solutions.

**References**

[1] N. Arkani-Hamed, S. Dimopoulos, N. Kaloper and R. Sundrum, Phys. Lett. B 480, 193 (2000) [arXiv:hep-th/0001197]; S. Kachru, M. B. Schulz and E. Silverstein, Phys. Rev. D 62, 045021 (2000) [arXiv:hep-th/0001206]; S. Förste, Z. Lalak, S. Lavignac and H. P. Nilles, Phys. Lett. B 481, 360 (2000) [arXiv:hep-th/0002164]; S. P. de Alwis, A. T. Flournoy and N.
Irges, JHEP 0101, 027 (2001) [arXiv:hep-th/0004125]; C. Csaki, J. Erlich, T. J. Hollowood and J. Terning, Nucl. Phys. B 584, 359 (2000) [arXiv:hep-th/0004133]; S. Förste, Z. Lalak, S. Lavignac and H. P. Nilles, JHEP 0009, 034 (2000) [arXiv:hep-th/0006139]; S. P. de Alwis and N. Irges, Phys. Lett. B 492, 171 (2000) [arXiv:hep-th/0007223]; J. E. Kim, B. Kyae and H. M. Lee, Phys. Rev. Lett. 86, 4223 (2001) [arXiv:hep-th/0011118]; C. Csaki, J. Erlich and C. Grojean, Nucl. Phys. B 604, 312 (2001) [arXiv:hep-th/0012143]; J. E. Kim, B. Kyae and H. M. Lee, Nucl. Phys. B 613, 306 (2001) [arXiv:hep-th/0101027]; C. Grojean, F. Quevedo, G. Tasinato and I. Zavala, JHEP 0108, 005 (2001) [arXiv:hep-th/0106120]; J. E. Kim, B. Kyae and Q. Shafi, [arXiv:hep-th/0305239].
[2] J.-W. Chen, M. A. Luty and E. Ponton, JHEP 0009, 012 (2000) [arXiv:hep-th/0003067].
[3] E. Cremmer and J. Scherk, Nucl. Phys. B 108, 409 (1976); Z. Horvath, L. Palla, E. Cremmer and J. Scherk, Nucl. Phys. B 127, 57 (1977).
[4] S. M. Carroll and M. M. Guica, [arXiv:hep-th/0302067]; I. Navarro, JCAP 0309, 004 (2003) [arXiv:hep-th/0302129].
[5] Y. Aghababaie, C. P. Burgess, S. L. Parameswaran and F. Quevedo, Nucl. Phys. B 680, 389 (2004) [arXiv:hep-th/0304256].
[6] A. Salam and E. Sezgin, Phys. Lett. B 147, 47 (1984).
[7] I. Navarro, Class. Quant. Grav. 20 (2003) 3603 [arXiv:hep-th/0305014].
[8] J. Vinet and J. M. Cline, [arXiv:hep-th/0406141]; J. Garriga and M. Porrati, [arXiv:hep-th/0406158].
[9] J. M. Cline, J. Descheneau, M. Giovannini and J. Vinet, JHEP 0306, 048 (2003) [arXiv:hep-th/0304147].
[10] G. W. Gibbons, R. Güven and C. N. Pope, [arXiv:hep-th/0307238].
[11] Y. Aghababaie, C. P. Burgess, J. M. Cline, H. Firouzjahi, S. L. Parameswaran, F. Quevedo, G. Tasinato and I. Zavala, JHEP 0309, 037 (2003) [arXiv:hep-th/0308064].
[12] H. P. Nilles, A. Papazoglou and G. Tasinato, Nucl. Phys. B 677, 405 (2004) [arXiv:hep-th/0309042].
[13] H. M. Lee, Phys. Lett. B 587, 117 (2004) [arXiv:hep-th/0309050].
[14] M. L. Graesser, J. E. Kile and P. Wang, [arXiv:hep-th/0403074].
[15] M. Gell-Mann and B. Zwiebach, Phys. Lett. B 147, 111 (1984).
[16] M. Gell-Mann and B. Zwiebach, Nucl. Phys. B 260, 569 (1985).
[17] A. Kehagias, [arXiv:hep-th/0406025].
[18] S. Randjbar-Daemi and V. Rubakov, [arXiv:hep-th/0407176].
November 17, 1997

Diane Clark
Calibron Systems
7861 E. Gray Road
Scottsdale, AZ 85260-3405
Phone: 602-991-3550
Fax: 602-998-5589

CERTIFICATE OF COMPLIANCE

This is to certify that Adalet Explosionproof Enclosures, catalog Series XDHX, are CSA Certified for use in Class I, Division 1, Groups B, C & D and Class II, Division 1, Groups E, F & G, and FM Approved for use in Class I, Division 1, Groups B, C & D and Class II, Division 1, Groups E, F & G hazardous locations.

Sincerely,
ADALET-PLM, A Scott Fetzer Company
Tim Snelly, Standards Engineer
Enclosure

Certificate of Compliance
Certificate Number: LR 27991-34
Revision: LR 27991-60
Date Issued: September 20, 1995
Issued To: Adalet-PLM, A Scott Fetzer Co., 4801 West 150th St., Cleveland, OH 44135 USA
Attention: Mr. Timothy Snelly
The products listed below are eligible to bear the CSA Mark.
Issued By: D. Somma, C.E.T., Toronto, ON, Canada

PRODUCTS

CLASS 4418 02 - OUTLET BOXES AND FITTINGS - Boxes - For Hazardous Locations
Class I, Groups B, C and D; Class II, Groups E, F and G; Class III; CSA Encl. 4: Dual cover instrument Enclosure, XDH Series. Additional alphanumeric suffixes are added to the catalogue number to denote mechanical variations: different type covers, zero-span adjustments and terminal strips. Enclosures equipped with terminal strips are rated 300 V (max), 3 A (max).

APPLICABLE REQUIREMENTS

CSA Std C22.2 No. 14-M1987 - Industrial Control Equipment
CSA Std C22.2 No. 25-1966 - Enclosures for Use in Class II, Groups E, F and G Hazardous Locations
CSA Std C22.2 No. 30-M1986 - Explosion-Proof Enclosures for Use in Class I Hazardous Locations
CSA Std C22.2 No. 94-1976 - Special Purpose Enclosures 2, 3, 4 and 5
Electrical Notice 417B - Impact Test Requirements for Explosion-Proof Enclosures for Use in Class I Hazardous Locations
Electrical Bulletin 1310 - Requirements for Enclosures for Use in Class II Hazardous Locations

MARKINGS

The following markings are shown on a metal nameplate. The nameplate material is aluminum of 0.020 inch thickness, and the nameplate is secured with four drive pins.
- CSA Mark.
- Company name.
- Hazardous Locations designation: Class I, Groups B, C, D; Class II, Groups E, F, G; Class III; CSA Enclosure 4.
- Catalogue number.
- "CAUTION: DISCONNECT FROM SUPPLY BEFORE OPENING - KEEP COVER TIGHT WHILE CIRCUITS ARE ALIVE." and "ATTENTION: OUVRIR LE CIRCUIT AVANT D'ENLEVER LE COUVERCLE - GARDER LE COUVERCLE BIEN FERME TANT QUE LES CIRCUITS SONT SOUS TENSION."
Conduit seals must be installed within 18 inches of the enclosure. The date code is ink-stamped, month and year numerically, on the inner surface of the casting. Enclosures equipped with terminal strips shall have 300 V maximum, 3 A maximum electrical ratings.

ALTERATIONS

1. Marking as above.
2. Enclosures equipped with GB Type CSA Certified terminal strips shall have 6-in-long, 20 AWG, 600 V, 105°C CSA Certified wire soldered to the terminal pin, encapsulated in epoxy, with a minimum 1/16 in thickness of epoxy insulating the soldered joint from the enclosure material.
3. In addition to the above markings, enclosures shall have 300 V max, 3 A max electrical markings.

FACTORY TESTS

N/A.

MARKINGS

Refer to Dwg No 6643 attached as Ill 1. Refer to Ills 2 and 2A for Series XDHX (European Enclosure).

DESCRIPTION

A. Catalogue Designation: Ill 4
\[ \begin{array}{cccccc} XDH & FGC & X & -1 & 2T & Z \\ I & II & III & IV & V & VI \\ \end{array} \]
I Dual cover instrument housing.
II Instrument Side
- FC - Solid flat cover
- FGC - Flat glass cover
- DC - Solid dome cover
- DGC - Dome glass cover
III External Grounding
- X - Denotes European-style enclosure (external grounding added)
- Blank - No external ground.
IV Power Side (solid flat cover is standard, with no suffix number)
- -1 - Dome glass cover DGC
- -2 - Flat glass cover FGC
- -3 - Solid dome cover DC
V Terminal strip
- 2T - 2-point terminal strip
- 4T - 4-point terminal strip
VI Z - Zero-span adjustments

CHAPTER 2 HAZARDOUS (CLASSIFIED) LOCATION ELECTRICAL EQUIPMENT

Equipment listed within this chapter, unless referenced elsewhere within the Approval Guide, has been examined only for its hazardous location suitability. Specific types of equipment may be located by consulting the Equipment Index under "Hazardous Location Electrical Equipment."

Explanation of Hazardous Location Coding System

The equipment is preceded by an underlined code describing the hazardous location for which it is Approved. A key to this code follows.

Type of Protection
- XP - Explosionproof
- IS - Intrinsically Safe Apparatus
- AIS - Associated with Intrinsically Safe Apparatus
- ANI - Nonincendive Circuit Field Wiring
- PX, PY, PZ - Pressurized
- APX, APY, APZ - Associated Purge Systems/Components
- NI - Nonincendive
- DIP - Dust-Ignitionproof
- S - Special Protection
Class
- I - Class I
- II - Class II
- III - Class III
Division
- 1 - Division 1
- 2 - Division 2
Group
- A - Group A
- B - Group B
- C - Group C
- D - Group D
- E - Group E
- F - Group F
- G - Group G
Control Documentation
- Control drawing, instruction manual, installation diagram etc., if applicable.

APPROVAL DESIGNATION

The fields read: Type of Protection / Class / Division / Group, followed by the control documentation and revision.
Example 1: IS / I / 1 / CDEFG — 699008 / B.
Example 2: XP-DIP-IS / I / 1 / CDEFG — 689008 / B.
(An illustrative parsing sketch for this designation format follows the equipment listings below.)

This Approved electrical equipment is suitable for use in hazardous (classified) locations as defined by Article 500 of the National Electrical Code. The equipment is listed alphabetically by manufacturer, including the class, division and group of the location for which it is Approved. Installation and maintenance of equipment listed in this chapter should be according to the National Electrical Code or other applicable code. Hazardous locations are divided into class, division and group as described below. Equipment listed in this chapter for Hazardous (Classified) Locations is also suitable for installations in areas that are non-hazardous (ordinary) locations.

Class I, Division 1, Groups A, B, C & D

Class I, Division 1 locations are those in which hazardous concentrations of flammable gases or vapors exist continuously, intermittently or periodically under normal operating conditions. Electrical equipment for use in such locations may be "explosionproof," "intrinsically safe," "purged," or otherwise protected to meet the intent of Articles 500 and 501 of the National Electrical Code. Explosionproof protection consists of equipment designed to be capable of containing an internal explosion of a specified flammable vapor-air mixture. In addition, the equipment must operate at a safe temperature with respect to the surrounding atmosphere. Intrinsically safe electrical equipment and associated wiring are incapable of releasing sufficient electrical or thermal energy to cause ignition of a specific hazardous material under "normal" or "fault" operating conditions.
Normal operation assumes maximum supply voltage and rated environmental extremes; fault conditions assume any single or dual independent electrical faults plus field-wiring opens, shorts or connections to ground. Equipment rated as intrinsically safe is recognized by Article 500 as safe for use in hazardous locations without the special enclosures or physical protection that would otherwise be required. Purged systems have fresh air or an inert gas under positive pressure to exclude ignitable quantities of flammables from the electrical equipment enclosure. Equipment Approved for Division 1 locations shall be permitted in Division 2 locations of the same class and group, using associated apparatus defined in the manufacturer's control documentation.

Class I, Division 2, Groups A, B, C & D

Class I, Division 2 locations are those in which hazardous concentrations of flammables exist only under unlikely conditions of operation. As such, equipment and associated wiring which are incapable of releasing sufficient electrical and thermal energy to ignite flammable gases or vapors under "normal" operation and environmental conditions are safe to use in Class I, Division 2 locations. Equipment of this type is called "nonincendive" and needs no special enclosure or other physical safeguard.

Class II, Divisions 1 & 2, Groups E, F & G

Electrical equipment suitable for use in Class II locations, as defined by the National Electrical Code, is constructed to exclude ignitable amounts of dust from the equipment enclosure. Approved equipment of this type has also been evaluated to assure that hazardous surface temperatures do not exist. Equipment listed as suitable for Class II locations is "dust-ignitionproof" or otherwise designed to meet the intent of Articles 500 and 502 of the National Electrical Code.

d = Engineering unit for tens.
e = Optional conduit box C or blank.
f = Conformal urethane coating U or blank.

IS / I,II,III / 1 / ABCDEFG — 750-0028-00 / A; Entity; NI / I / 2 / ABCD
Max Entity Parameters: V_Max = 33 V, I_Max = 178.5 mA, C_i = 12 nF, L_i = 0 µH.
Wide-Range V/I Isolating Transmitter. Model T703-2000.

IS / I,II,III / 1 / ABCDEFG — 741-0014-00A / 2; Entity
Max Entity Parameters:

| Group | V_Max (V) | I_Max (mA) | C_i (nF) | L_i (mH) |
|-------|-----------|------------|----------|----------|
| ABCDEFG | * | 90.0 | 0 | 0 |
| CDEFG | * | 145.0 | 0 | 0 |
| DFG | * | 250.0 | 0 | 0 |

*V_OC must be greater than 7.5 V and less than 43.0 V.

Current-to-Pressure Transducer. Model IPa1-b000
a = Series 5 or 6.
b = Output 2 or 3.

XP / I / 1 / BCD; DIP / II / 1 / EFG; S / III / 1
Current-to-Pressure Transducer. Model IPe2-b000
a = Output 2 or 3.

NI / I / 2 / ABCD; S / II / 2 / FG; S / III / 2
Current-to-Pressure Transducer. Model IP-abc000
a = Series 5 or 6.
b = Series 1 or 2.
c = Output 2 or 3.

Adalet-PLM, Scott Fetzer Co, 4801 W 150th St, Cleveland OH 44135

XP / I / 1 / D; DIP / II,III / 1 / EFG
Control Panel Enclosure. Model XJF-081006abcdefg
a = Options mounting pan metallic or nonmetallic.
b = Unit size 1 or 2.
c = Pilot lights (max of 4 installed in cover), XL-S-Q Series.
d = Standard pushbutton (max of 4 installed in cover), XH-PBQ Series.
e = Dual pushbutton (max of 4 installed in cover), XH-PBBS Series.
f = Standard selector switch (max of 4 installed in cover), XH-SQS Series.
g = Key lock selector switch (max of 4 installed in cover), XH-KSQ Series.

XP / I / 1 / BCD; DIP / II,III / 1 / EFG
Instrument Housing. Series XIH-abc, XIH-d
a = Housing cover FC, DC, FGC or DGC.
b = Conduit hub size 2 or 3.
c = External mounting lugs L or blank.
d = Custom housing FGC-2-5-4094 (no options), FCS-4-068 (no options).
Options: Four 1/4-20 UNC-2B holes on 3/4 x 1 3/4 centers. One 1/2-14 NPT hole centered, or one 3/4-14 NPT hole centered. No mounting bosses. Four mounting bosses on vertical and horizontal centerlines. Four mounting bosses on 45° centerlines. Two mounting bosses on vertical centerline. Two mounting bosses on horizontal centerline. Two mounting bosses on +45° centerline. Two mounting bosses on -45° centerline. Back wall mounting boss.

European Instrument Housing. Series XIH-aXbc
a = Housing cover FC, DC, FGC or DGC.
b = Conduit hub size 2 or 3.
c = External mounting lugs L or blank.

Instrument Housing. Models XIHLA, XIHSbC
a = FC, FGC, DC or DGC.
b = BP, CF, DFC, TP, XF, YF, BD, CD, DD, LD, TD, XD, YD, BDG, CDG, DDG, LDG, TDD, XDD or YDG.

XP / I / 1 / BCD; DIP / II,III / 1 / EFG
Junction Box. Type S-3352, S-3763.
Control Enclosure. Model XJF-101206-S4030-ab
a = Non-removable hinge B.
b = Grounding kit XGK-1.
Panel-Meter Enclosure. Model XDF 040908N4.

XP / I / 1 / BCD; DIP / II,III / 1 / EFG
Instrument Housing. Model XDH-abcdZe
European Instrument Housing. Model XDH-aXbcdZe
a = Instrument side cover FC, FGC, DC or DGC.
b = Conduit hub size 2 or 3.
c = Power side cover A, B, C or blank.
d = Option 2T or 4T.
e = Option blank or S.

XP / I / 1 / BCD; DIP / II / 1 / EFG
Junction Box. Catalog No. XJHA and XJHA N4.

Aero-Motive Mfg Co, 5558 M L Av E, Kalamazoo MI 49001

Electric Cable Reel. Models 11a-b-c-X or 11aM-b-c-X; 12d-b-c-X or 12dM-b-c-X; 14d-b-c-X or 14dM-b-c-X; 20f-g-c-X or 20fM-g-c-X; hl-g-c-X or hlM-g-c-X; jkq-g-c-X or jkM-g-c-X
a = Flange spacing and No. of springs 22, 42, 63, 64 or 65.
b = Spring type 05, 06, 10, 16 or 17.
c = Slip ring current rating and quantity 303, 304, 306, 308, 310, 403 or 404.
d = Flange spacing and No. of springs 33, 55 or 67.
e = Flange spacing and No. of springs 43, 45 or 46.
f = Flange spacing and No. of springs 47, 52, 53, 54, 55, 66, 67, 80 or 89.
g = Flange spacing and No. of springs 52, 53, 55, 56, 57, 60, 62, 63, 65, 66, 70, 75, 80 or 85.
h = Model series 23 or 24.
i = Flange spacing and No. of springs 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 70, 71, 72, 73, 74, 75, 76, 77, 78 or 79.
j = Model series 25, 26, 27 or 28.
k = Flange spacing and No. of springs 70, 71, 72, 73, 74, 75, 76, 77, 78 or 79.

Agema Infrared Systems AB, Box 3, S-182 11 Danderyd, Sweden

IS / I,II,III / 1 / CDEFG — 05070/A
Detector Head. Type H2. Models LTSFIS, LTCFIS, HTSFIS, HTCFIS, PTSFIS, GSSFIS, M1S-FIS, MTCFIS or FASFIS.

IS / I / 1 / ABCD *
*When used with 9V alkaline battery.
Portable Noncontact Infrared Thermometer. Model TP70bIS
a = Functional selection 20, 30, 40 or 50.
b = Option EM, CF or TR.

Air Monitor Corp, 1050 Hopper Av, Box 6358, Santa Rosa CA 95408

XP / I / 1 / BCD; DIP / II,III / 1 / EFG
Differential Pressure Transmitter. Model DPT-a-b-c
a = Square root 00 or 01.
b = Display 00 or 01.
c = Range 01, 02, 03, 04, 05, 08, 09, 03B or 04B.
Model DPT-Plus-a
a = Range 01, 02, 03, 04, 05, 06, 09, 03B or 04B.

Altrimar Technology Corp, 89 Meadowbrook Dr, Milford NH 03055

S / I,II,III / 1 / ABCDEFG — SK950504/A
Ultrasonic Transducer. Models 38-C31-1, 38-032-1.

Akron Electric Inc, Box 26505, Akron OH 44319

XP / I / 1 / BCD; DIP / II,III / 1 / EFG
Instrument Housing. Model XJIHab
a = Version G or S.
b = Dimensions 1, 2, 4 or 5.
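As flagged in the coding-system key above, the following is a small illustrative sketch (ours; the class name, field names and parsing rules are assumptions based on the format described in that key, not part of the Guide itself) of how a designation line such as those in the listings above splits into its fields:

```python
# Illustrative sketch (ours): split an Approval Guide designation such as
# "XP-DIP-IS / I / 1 / CDEFG — 689008 / B" into its named fields.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ApprovalDesignation:
    protections: List[str]              # e.g. ["XP", "DIP", "IS"]
    hazard_class: str                   # "I", "II", "III" or e.g. "I,II,III"
    division: str                       # "1" or "2"
    groups: str                         # e.g. "CDEFG"
    control_doc: Optional[str] = None   # e.g. "689008" (if listed)
    revision: Optional[str] = None      # e.g. "B" (if listed)

def parse_designation(text: str) -> ApprovalDesignation:
    # The long dash separates the location code from the control documentation.
    code, _, doc = text.partition('—')
    fields = [f.strip() for f in code.split('/')]
    protections = fields[0].split('-')  # hyphens join multiple protection types
    control_doc = revision = None
    if doc.strip():
        doc_fields = [f.strip() for f in doc.split('/')]
        control_doc = doc_fields[0]
        if len(doc_fields) > 1:
            revision = doc_fields[1]
    return ApprovalDesignation(protections, fields[1], fields[2], fields[3],
                               control_doc, revision)

print(parse_designation("XP-DIP-IS / I / 1 / CDEFG — 689008 / B"))
print(parse_designation("NI / I / 2 / ABCD"))
```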
= Engineering unit for lens. = Optional conduit box C or blank. = Conformal urethane coating U or blank.
IS / I,II,III / 1 / ABCDEFG — 790-0028-00 / A; Entity; NI / I / 2 / ABCD
Max Entity Parameters: $V_{\text{Max}} = 33$ V, $I_{\text{Max}} = 178.5$ mA, $C_i = 12$ nF, $L_i = 0$ µH.
Wide-Range V/I Isolating Transmitter. Model T703-2000.
IS / I,II,III / 1 / ABCDEFG — 741-0014-00A / 2; Entity
Max Entity Parameters:

| Type | $V_{\text{Max}}$ (V) | $I_{\text{Max}}$ (mA) | $C_i$ (μF) | $L_i$ (mH) |
|------|------------------|------------------|----------|----------|
| ABCDEFG | * | 90.0 | 0 | 0 |
| DEFG | * | 185.0 | 0 | 0 |
| G | * | 250.0 | 0 | 0 |

* must be greater than 7.8 V and less than 40.0 V.
Current-to-Pressure Transducer. Model IPa1-b000
a = Series 5 or 6. b = Output 2 or 3.
XP / I / 1 / BCD; DIP / II / 1 / EFG; S / III / 1
Current-to-Pressure Transducer. Model IP62-a000
a = Output 2 or 3.
NI / I / 2 / ABCD; S / II / 2 / FG; S / III / 2
Current-to-Pressure Transducer. Model IP-abc000
a = Series 5 or 6. b = Series 1 or 2. c = Output 2 or 3.
Adalet-PLM Scott Fetzer Co 4801 W 150th St Cleveland OH 44135
XP / I / 1 / D; DIP / II,III / 1 / EFG
Control Panel Enclosure. Model XJF-081008abcdefg
a = Optional mounting pan metallic or nonmetallic. b = Left or right hinge BL or BR. c = Indicator lights (max of 4 installed in cover), XLS-G Series. d = Standard pushbutton (max of 4 installed in cover), XHPBS Series. e = Dual pushbutton (max of 4 installed in cover), XHDPS Series. f = Standard selector switch (max of 4 installed in cover), XHSSS Series. g = Key lock selector switch (max of 4 installed in cover), XHKSSS Series.
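The entity listings above give only the field-device side ($V_{\text{Max}}$, $I_{\text{Max}}$, $C_i$, $L_i$). As a minimal sketch of how such parameters are used, the entity concept accepts a pairing when the device tolerates the associated apparatus' open-circuit voltage and short-circuit current, and when device plus cable capacitance and inductance stay within the allowed $C_a$/$L_a$. The barrier and cable values below are hypothetical placeholders, not from the guide, and the helper function is illustrative only.

```python
# Entity-concept pairing check (sketch). Field-device values are the ones
# listed above for the isolating transmitter; the associated-apparatus
# (barrier) and cable values are hypothetical placeholders.
def entity_ok(field: dict, assoc: dict, cable: dict) -> bool:
    return (field["Vmax"] >= assoc["Voc"]                 # device withstands barrier voltage
            and field["Imax"] >= assoc["Isc"]             # device withstands barrier current
            and field["Ci"] + cable["C"] <= assoc["Ca"]   # total capacitance within limit
            and field["Li"] + cable["L"] <= assoc["La"])  # total inductance within limit

field = {"Vmax": 33.0, "Imax": 178.5e-3, "Ci": 12e-9, "Li": 0.0}  # from the listing
assoc = {"Voc": 28.0, "Isc": 93e-3, "Ca": 83e-9, "La": 4.2e-3}    # hypothetical barrier
cable = {"C": 20e-9, "L": 1.0e-3}                                 # hypothetical cable run

print(entity_ok(field, assoc, cable))  # True for these numbers
```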
EC-TYPE EXAMINATION CERTIFICATE
Component intended for use in equipment or protective systems intended for use in Potentially Explosive Atmospheres, Directive 94/9/EC
EC-Type Examination Certificate Number: DEMKO 02 ATEX 0205350U
Component: Type XIHM, XIHMK and XDHM Flameproof Enclosures
Manufacturer: Adalet, A Scott Fetzer Co.
Address: 4801 W. 150th Street, Cleveland, OH 44135 USA
This component and any acceptable variation thereof is specified in the schedule to this certificate and the documents therein referred to. DEMKO, notified body number 0539 in accordance with Article 9 of the Council Directive 94/9/EC of 23 March 1994, certifies that this component has been found to comply with the Essential Health and Safety Requirements relating to the design and construction of equipment and protective systems intended for use in potentially explosive atmospheres given in Annex II to the Directive. The examination and test results are recorded in confidential report no. 02NK05350.
Compliance with the Essential Health and Safety Requirements has been assured by compliance with: EN 50014 (June 1997) + A1-A2 (1999); EN 50018 (July 2000); EN 50281-1-1 (September 1998).
The sign "U" placed after the certificate number indicates that this certificate must not be mistaken for a certificate intended for an equipment or protective system. This partial certification may be used as a basis for certification of an equipment or protective system.
This EC-Type Examination Certificate relates only to the design and construction of the specified component. If applicable, further requirements of this Directive apply to the manufacture and supply of this component.
The marking of the component shall include the following: Ex II 2 G D, EEx d IIC and EEx d IIB.
For and on behalf of UL International Demko A/S: Steen Lumby, Certification Manager.
UL International Demko A/S, Lyskaer 8, P.O. Box 514, DK-2730 Herlev, Denmark. Telephone: +45 44656868. Fax: +45 44634950.
Certificate: 02 ATEX 0205350U. This certificate may only be reproduced in its entirety and without any change, schedule included.
Schedule
EC-TYPE EXAMINATION CERTIFICATE No. DEMKO 02 ATEX 0205350U
Description of Component: The flameproof aluminum and stainless steel XIHM, XIHMK, and XDHM enclosures with flat, solid dome, and mid-size solid covers are intended to be used primarily as instrument housings. The XDHM enclosure is similar to the XIHM and XIHMK enclosures, except it is provided with a threaded cover on both ends of the body. Up to three conduit entries may be provided in the XDHM body, and four in the XIHM and XIHMK bodies, with threads as specified in control drawings No. DS430 and DS431. The inner diameter of the XIHM and XIHMK bodies may be increased to a dimension of 3.75 in. Enclosures employed with cemented joints or solid covers are suitable for gas group IIC.
Enclosures not employed with cemented joints, or not employed with solid covers, are suitable for gas group IIB.
Degree of protection by enclosure: IP66
Type Variants Covered By The Approval:
I - Enclosure material and type: XIHM- single body; XIHMK- single body; XDHM- dual body.
II - Enclosure cover: FCX- small (flat) cover, no window (Group IIC); FGCX- small (flat) cover, cemented window (Group IIC); FJCX- small cover, flanged window joint (Group IIB); DCX- large cover, no window (Group IIC); DGCX- large cover, cemented window (Group IIC); DJCX- large cover, flanged window joint (Group IIB); MCX- medium size cover, no window (Group IIC).
Report No.: 02NK05350
REPORT
The certificate entitles the licensee to provide the product with the registered mark ® and the Epsilon-x (Ex) mark.
Drawings:

| Number | Date | Description |
|----------|------------|-----------------------------------------------------------------------------|
| DS431 Rev. F | 2-14-02 | XDHM Explosion Proof Dual Housing Medium |
| DS430 Rev. F | 1/25/02 | XIHM Series Explosion Proof Instrument Housing Medium |
| DS643 Rev. A | 05AUG02 | Installation Sheet |
| 7421 Rev. A | 2001-04-20 | XIHK Explosion Proof Housing Machine Drawing |
| 7444 Rev. A | 06/18/02 | XIHM/XDHM Series Name Plate Group B, C, D IIB + H2 |
| 7445 Rev. A | 06/18/02 | XIHM/XDHM Series Name Plate For Housing with Flat Jointed RTV Sealed Glass Covers GRP. C D IIB |
| 7447 | 10AUG01 | Empty EX Enclosure Label |

The manufacturer shall inform the notified body concerning all modifications to the technical documentation as described in Annex III to Directive 94/9/EC of the European Parliament and the Council of 23 March 1994.
[17] **Special conditions for safe use:** Before opening the enclosure in a flammable atmosphere, the circuits must be interrupted. The approval applies to equipment without cable glands. When mounting the flameproof enclosure in a hazardous area, only flameproof cable glands certified to EN 50018 must be used. All unused conduit entries must be closed with a plug certified to EN 50018.
The maximum reference pressure measured in accordance with EN 50018:2000, sub-clause 15.1.2, was 13.72 bar. The enclosure has been tested by applying a static pressure of 56.88 bar for 1 min. Routine testing may be omitted, since representative enclosures withstood over 4 times the reference pressure (60.95 bar, against the required 4 x 13.72 = 54.88 bar).
These component enclosures have been certified as components, that is, with a "U" after the number of the certificate and without stating the temperature class (T1 - T6). They were evaluated for use in a -40°C to +100°C ambient. Consideration should be given to the effects of use outside of these temperatures. When the boxes have been mounted with their contents, it is a requirement that the total product is re-certified, among other things to determine the temperature class.
[18] **Essential Health and Safety Requirements:** Concerning the ESRs, this Schedule verifies compliance with the Ex standards only. The manufacturer's Declaration of Conformity declares compliance with other relevant Directives.
On behalf of UL International Demko A/S, Herlev, 2002-08-19: Steen Lumby, Certification Manager.
UL International Demko A/S, Lyskaer 8, P.O. Box 514, DK-2730 Herlev, Denmark. Telephone: +45 44356988. Fax: +45 44356900.
Certificate: 02 ATEX 0205350U. Report: 02NK05350. This certificate may only be reproduced in its entirety and without any change, schedule included.
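The schedule's variant list ties the gas group to the cover construction (cemented-window and solid covers for group IIC, flanged window joints for group IIB). Purely as a reading aid, here is a sketch encoding that list; the cover codes and groups are exactly those given in the schedule, while the lookup and helper function are hypothetical.

```python
# Cover-variant to gas-group lookup (sketch; data transcribed from the schedule).
COVER_GAS_GROUP = {
    "FCX": "IIC",   # small (flat) cover, no window
    "FGCX": "IIC",  # small (flat) cover, cemented window
    "FJCX": "IIB",  # small cover, flanged window joint
    "DCX": "IIC",   # large cover, no window
    "DGCX": "IIC",  # large cover, cemented window
    "DJCX": "IIB",  # large cover, flanged window joint
    "MCX": "IIC",   # medium size cover, no window
}

def marking_for(body: str, cover: str) -> str:
    # body is XIHM, XIHMK or XDHM; the EEx d marking follows the certificate text.
    return f"{body}-{cover}: EEx d {COVER_GAS_GROUP[cover]}"

print(marking_for("XIHM", "FGCX"))  # EEx d IIC
print(marking_for("XDHM", "DJCX"))  # EEx d IIB
```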
**January 2004 Approval Guide — Electrical Equipment**

**XIHS-aC. Single Ended Domestic Instrument Housing.** XP / I / 1 / BCD / T6; DIP / II,III / 1 / EFG / T6; Type 4X
a = Housing Cover: BF, CF, DF, LF, TF, XF, YF, BD, CD, DD, LD, TD, XD, YD, BDG, CDG, DDG, LDG, TDG, XDG or YDG.

**XIHM-aXbc and S5222. Single Ended Instrument Housing — Domestic/European.** XP / I / 1 / BCD / T6; XP / I / 1 / IIB+H₂ / T6; DIP / II,III / 1 / EFG / T6; Type 4X, IP66
a = Cover Options: DC, DGC, DJC, FC, FGC, FJC or MC (with J in model number, for Groups C, D, (IIB), E, F and G only). b = Conduit Hub Size: Z or none. c = External Mounting Lugs: L or blank.
Non-Designated Options: Front Pad Mounting Option; Front Pad Entrance Options; Internal Wall Entrance Options; Internal Wall Mounting Options; Internal Wall Terminal Strip Options; 316 Stainless Steel Option; Painting Options.

**XIHMK-aXbc. Single Ended Instrument Housing — Domestic/European.** XP / I / 1 / BCD / T6; DIP / II,III / 1 / EFG / T6; Type 4X
a = Cover Options: FG or FC. b = Conduit Hub Size: Z or none. c = External Mounting Lugs: L or blank.
Non-Designated Options: Front Pad Mounting Option; Front Pad Entrance Options; Internal Wall Entrance Options; Internal Wall Mounting Options; Internal Wall Terminal Strip Options; 316 Stainless Steel Option; Painting Options.

**XDHM-aXbc. Double Ended Instrument Housing — Domestic/European.** XP / I / 1 / BCD / T6; XP / I / 1 / IIB+H₂ / T6; DIP / II,III / 1 / EFG / T6; Type 4X, IP66
a = Instrument Side Cover: DC, DGC, DJC, FC, FGC, FJC or MC (with J in model number, for Groups C, D, (IIB), E, F and G only). b = Conduit Hub Size: Z or none. c = Power Side Cover: FC, DC, FJC, DGC, DJC, FGC or MC (with J in model number, for Groups C, D, (IIB), E, F and G only).
Non-Designated Options: Front Pad Mounting Option; Front Pad Entrance Options; Internal Wall Entrance Options; Internal Wall Mounting Options; Internal Wall Terminal Strip Options; 316 Stainless Steel Option; Painting Options.

**XJF-081008abcdefg. Control Panel Enclosure.** XP / I / 1 / D / T6; DIP / II,III / 1 / EFG / T6
a = Optional mounting pan metallic or nonmetallic. b = Left or right hinge BL or BR. c = Pilot light (max of 4 installed in cover), XLS-G Series. d = Standard pushbutton (max of 4 installed in cover), XHPBS Series. e = Dual pushbutton (max of 4 installed in cover), XHDPS Series. f = Standard selector switch (max of 4 installed in cover), XHSSS Series. g = Key lock selector switch (max of 4 installed in cover), XHKSSS Series.

**XJF-101206-S4030-ab. Control Enclosure.** XP / I / 1 / CD / T6; DIP / II,III / 1 / EFG / T6
a = Nonremovable hinge B. b = Grounding kit XGK-1.

**XJHA and XJHA N4. Junction Box.** XP / I / 1 / CD / T6; DIP / II,III / 1 / EFG / T6; Type 4

**S-3352. Junction Box.** XP / I / 1 / CD / T6; DIP / II,III / 1 / EFG / T6; Type 4

**S-3763 and S-5418. Junction Box.** XP / I / 1 / BCD / T6; DIP / II,III / 1 / EFG / T6; Type 4

---

**Advanced Flow Technology Company, Box 906, Lakeland FL 33802**

**DeltaPulse DPM PE a b B c d XL. Intelligent Magnetic Flow Transmitter.** NI / I / 2 / ABCD / T4 Ta = 60°C; S / II / 2 / FG / T4 Ta = 60°C; S / III / 2 / T4 Ta = 60°C; Type 4X, IP66
a = Analog Output Signal A or B. b = Communication: RS or H. c = Analog Output Signal 1 or 2.
d = Pulse Output Signal 0, 1 or 2.

**Aero-Motive Mfg Co, 5888 M L Ave E, Kalamazoo MI 49001**

**11a-b-c-X or 11aM-b-c-X; 12d-b-c-X or 12dM-b-c-X; 14e-b-c-X or 14eM-b-c-X; 20f-g-c-X or 20fM-g-c-X; hi-g-c-X or hiM-g-c-X; jk-g-c-X or jkM-g-c-X. Electric Cable Reels.** XP / I / 1 / CD / T5; DIP / II / 1 / EFG / T5; Type 3R
a = Flange spacing and number of springs 22, 42, 53, 64 or 85. b = Spring type 10 or 17. c = Slip ring current rating and number 303, 304, 306, 308, 310, 403 or 404. d = Flange spacing and number of springs 33, 55 or 67. e = Flange spacing and number of springs 43, 45 or 46. f = Flange spacing and number of springs 50, 51, 52, 53, 54, 55, 56, 57, 58 or 59. g = Chain drive ratio 50, 52, 53, 55, 56, 57, 60, 62, 63, 65, 66, 70, 76, 80 or 85.

**SFT100-a1-bcxxx. NEXGEN Mass Flow Transmitter.** XP / I / 1 / CD / T6 Ta = 65°C; DIP / II,III / 1 / EFG / T6 Ta = 65°C; NI / I / 2 / ABCD / T4 Ta = 65°C; AIS / I,II,III / 1 / CDEFG — 70500-008 / F; D001029 / B; Type 4X
a = Display options 0, 1, 2. b = Batch option 0. c = Communication board options 0, 1. xxx = Software options not affecting safety.

**Adalet-PLM Scott Fetzer Co, 4801 W 150th St, Cleveland OH 44135**

**XCE-101406N4-S5194. Control Enclosure.** XP / I / 1 / CD / T6; DIP / II,III / 1 / EFG / T6; Type 4

**XDF-040908N4. Panel-Meter Enclosure.** XP / I / 1 / CD / T6; DIP / II,III / 1 / EFG / T6; Type 4, 4X

**XDH-aXbc. Double Ended Instrument Housing — Domestic/European.** XP / I / 1 / BCD / T6; DIP / II,III / 1 / EFG / T6; Type 4X
a = Instrument Side Cover: FC, FGC, DC or DGC. b = Conduit Hub Size: 2 or 3. c = Power Side Cover: A, B, C, D, E or blank.
Non-Designated Options: Front Pad Mounting Option; Front Pad Entrance Options; Internal Wall Entrance Options; Internal Wall Mounting Options; Internal Wall Terminal Strip Options; 316 Stainless Steel Option; Painting Options.

**XDHL-aXbc. Double Ended Instrument Housing — Domestic/European.** XP / I / 1 / BCD / T6; XP / I / 1 / IIB+H₂ / T6; DIP / II,III / 1 / EFG / T6; Type 4X, IP66
a = Instrument Side Cover: FC, FGC, DC or DGC. b = Conduit Hub Size: 2 or none. c = Power Side Cover: B or C.
Non-Designated Options: Front Pad Mounting Option; Front Pad Entrance Options; Internal Wall Entrance Options; Internal Wall Mounting Options; Internal Wall Terminal Strip Options; 316 Stainless Steel Option; Painting Options.

**XIH-aXbc. Single Ended European Instrument Housing. XIH-abcd. Single Ended Domestic Instrument Housing.** XP / I / 1 / BCD / T6; DIP / II,III / 1 / EFG / T6; Type 4X
a = Housing cover: FC, FGC, DC or DGC. b = Conduit hub size: 2 or 3. c = External mounting lugs: L or blank. d = Options: FC-3 S4084 (no options), FC-3 S4088 (no options).
Non-Designated Options: Front Pad Mounting Option; Front Pad Entrance Options; Internal Wall Entrance Options; Internal Wall Mounting Options; Internal Wall Terminal Strip Options; 316 Stainless Steel Option; Painting Options.

**XIHLC-aXbc. Single Ended Instrument Housing — Domestic/European.** XP / I / 1 / CD / T6; XP / I / 1 / IIB+H₂ / T6; DIP / II,III / 1 / EFG / T6; Type 4X, IP66
a = Housing Cover: FC, FGC, DC or DGC. b = Conduit Hub Size: 2 or none. c = External Mounting Lugs: L or blank.
Non-Designated Options: Front Pad Mounting Option; Front Pad Entrance Options; Internal Wall Entrance Options; Internal Wall Mounting Options; Internal Wall Terminal Strip Options; 316 Stainless Steel Option; Painting Options.

**XIHFGC3L-S4080. Single Ended Instrument Housing.** XP / I / 1 / BCD / T6; DIP / II,III / 1 / EFG / T6; Type 4X

**XIH-L-aXbc.**
**Single Ended Instrument Housing — Domestic/European.** XP / I / 1 / BCD / T6; XP / I / 1 / IIB+H₂ / T6; DIP / II,III / 1 / EFG / T6; Type 4X, IP66
a = Housing Cover: FC, FGC, DC or DGC. b = Conduit Hub Size: 2 or none. c = External Mounting Lugs: L or blank.
Non-Designated Options: Front Pad Mounting Option; Front Pad Entrance Options; Internal Wall Entrance Options; Internal Wall Mounting Options; Internal Wall Terminal Strip Options; 316 Stainless Steel Option; Painting Options.

### Temperature Class

| Class | Maximum Surface Temperature |
|-------|-----------------------------|
| T1 | 450°C |
| T2 | 300°C |
| T2A | 280°C |
| T2B | 260°C |
| T2C | 240°C |
| T2D | 215°C |
| T3 | 200°C |
| T3A | 180°C |
| T3B | 165°C |
| T3C | 160°C |
| T4 | 135°C |
| T4A | 120°C |
| T5 | 100°C |
| T6 | 85°C |
| XXX°C | As marked |

The temperature class is based on a 40°C ambient unless a higher ambient is shown, e.g. "T4 Ta = 60°C". A temperature class is not shown for associated apparatus designed to be located in an unclassified location.

### Control Documentation

When critical details for the installation are specified in a control drawing, instruction manual, installation diagram, etc., the document number will be specified.

### Entity

Intrinsically Safe apparatus Approved under the Entity concept shows the word "Entity" and may include the entity parameters in the Listing.

### FISCO

Intrinsically Safe apparatus Approved under the Fieldbus Intrinsically Safe Concept shows the word "FISCO" and may include the FISCO parameters in the Listing.

### Nonincendive Field Wiring

Apparatus Approved under the Nonincendive Field Wiring concept will include a control drawing reference and may include the nonincendive field wiring parameters in the Listing.

### Enclosure Type

Enclosure type/ingress protection designation per ANSI/NEMA 250 and/or IEC 60529.

### Special Conditions of Use

Some products, typically components, include Special Conditions of Use that must be observed when installing and using the product. The conditions are shown following each applicable Listing.

### APPROVAL DESIGNATION

A listing reads, in order: Type of Protection / Class / Division / Group / Temperature Class, optionally followed by a Control Documentation reference, an Entity or FISCO note, and an Enclosure Type.

**Example 1** 123-abc. Temperature Transmitter. IS / I,II,III / 1 / CDEFG / T4 — 699007; Entity; Type 4X
Entity Parameters: $V_{\text{Max}} = 16.4$ V, $I_{\text{Max}} = 33$ mA, $C_i = 0.9$ μF, $L_i = 110$ mH

**Example 2** 456-def. Temperature Transmitter. XP-AIS / I / 1 / CD / T4 — 699008; Type 4X, IP68. XP-AIS / I / 1 / IIB / T4 — 699008; Type 4X, IP68

**Example 3** 789-ghi. Temperature Transmitter.

CHAPTER 2 HAZARDOUS (CLASSIFIED) LOCATION ELECTRICAL EQUIPMENT

Equipment listed in this chapter for Hazardous (Classified) Locations is also suitable for installations in areas that are unclassified locations and, unless referenced in the listing to another part of the Approval Guide, has been examined only for its hazardous location suitability. The equipment is listed alphabetically, by manufacturer. Beginning with the September 1998 Approval Guide, a reorganization of this material has begun to provide better usability. Not all listings have been updated in this issue. Where listings have been changed, the specific models are listed in alphanumeric order under the respective manufacturers. Each model listing includes the specific ratings for which it is Approved.
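As an aside, here is a hypothetical sketch (not from the guide) of how a marked temperature class is checked against the table above: the class's maximum surface temperature must not exceed the ignition temperature of the surrounding atmosphere. The ignition temperatures used in the examples are placeholders.

```python
# Temperature-class lookup per the table above (values in °C, 40°C ambient
# basis). The "XXX°C" row, where an explicit temperature is marked instead
# of a class, is omitted from this sketch.
TEMP_CLASS_MAX_C = {
    "T1": 450, "T2": 300, "T2A": 280, "T2B": 260, "T2C": 240, "T2D": 215,
    "T3": 200, "T3A": 180, "T3B": 165, "T3C": 160,
    "T4": 135, "T4A": 120, "T5": 100, "T6": 85,
}

def class_ok(marked: str, ignition_temp_c: float) -> bool:
    # Device surface stays at or below the atmosphere's ignition temperature.
    return TEMP_CLASS_MAX_C[marked] <= ignition_temp_c

print(class_ok("T4", 200.0))  # True: 135°C <= 200°C
print(class_ok("T3", 160.0))  # False: 200°C > 160°C
```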
Note: The addition of the temperature class and enclosure type/ingress protection, along with the reorganization according to model number, is a "work-in-progress". Where a temperature class or enclosure type/ingress protection is not shown, the manufacturer or FM Approvals should be consulted to obtain the applicable ratings.

Installation and maintenance of equipment listed in this chapter shall be according to the National Electrical Code® (NEC) or other applicable code. Two different Hazardous Location rating systems are defined by Articles 500 and 505 of the National Electrical Code®. The following are explanations of the two systems:

Hazardous Location Coding System — NEC 500. Class I / II / III, Division 1 / 2

| Type of Protection | Description |
|--------------------|--------------------------------------------------|
| XP | Explosionproof |
| IS | Intrinsically Safe Apparatus |
| AIS | Associated Apparatus with Intrinsically Safe Connections |
| ANI | Associated Nonincendive Field Wiring Circuit |
| PX PY PZ | Pressurized |
| APX APY APZ | Associated Pressurization Systems/Components |
| NI | Nonincendive apparatus and nonincendive field wiring apparatus |
| DIP | Dust-Ignitionproof |
| S | Special Protection |

Equipment utilizing more than one type of protection is shown by joining the applicable types of protection with hyphens, see Example 2.

Class
- I = Class I
- II = Class II
- III = Class III

Division
- 1 = Division 1
- 2 = Division 2

Group
- A = Group A
- B = Group B
- C = Group C
- D = Group D

The chemical formula of a specific gas or vapor for which the apparatus is Approved may be shown alone or concatenated with an apparatus group.

- E = Group E
- F = Group F
- G = Group G

Attestation of Conformity

This is to declare, in accordance with Directive 94/9/EC, that the following product(s) are designed and manufactured in accordance with Annex II of Directive 94/9/EC. The manufacturer attests on their own responsibility that the apparatus has been constructed in accordance with the principles of good engineering in safety matters, and that any routine verification and test required by Clause 24 of EN 50014:1997 has been successfully completed.

Manufacturer: ADALET, Scott Fetzer Company, 4801 West 150th Street, Cleveland, Ohio 44135, USA

Product Description: Cast Aluminum, Flameproof Enclosure.
Type XIHMX Single Body, for use in potentially explosive atmospheres. (EEx d IIC, ATEX: II 2 GD) (enclosures employed with cemented joints or solid cover); (EEx d IIB, ATEX: II 2 GD) (enclosures not employed with cemented joints, or not solid cover).
Type XIHMKX Single Body, for use in potentially explosive atmospheres. (EEx d IIC, ATEX: II 2 GD) (enclosures employed with cemented joints or solid cover).
Type XDHMX Dual Body, for use in potentially explosive atmospheres. (EEx d IIC, ATEX: II 2 GD) (enclosures employed with cemented joints or solid cover); (EEx d IIB, ATEX: II 2 GD) (enclosures not employed with cemented joints, or not solid cover).
Ambient Temperature Range Of All Enclosures: -40°C to +100°C
Certifying Agency: UL International DEMKO A/S Testing & Certification (0539)
P.O. Box 514, Lyskaer 8, DK-2730 Herlev, Denmark
EC-Type Examination Certificate: DEMKO 02 ATEX 0205350U
This Declaration is based on compliance with the following standards:
EN 50014:1997 + A1-A2:1999 ELECTRICAL APPARATUS FOR POTENTIALLY EXPLOSIVE ATMOSPHERES – GENERAL REQUIREMENTS
EN 50018:2000 ELECTRICAL APPARATUS FOR POTENTIALLY EXPLOSIVE ATMOSPHERES – FLAMEPROOF ENCLOSURE 'd'
EN 50281-1-1:1998 ELECTRICAL APPARATUS FOR USE IN THE PRESENCE OF COMBUSTIBLE DUST – PART 1-1: ELECTRICAL APPARATUS PROTECTED BY ENCLOSURES – CONSTRUCTION AND TESTING
EN 60529 DEGREES OF PROTECTION PROVIDED BY ENCLOSURES (IP CODE)
DEMKO APPROVED. For and on behalf of ADALET: Timothy Smalley, Standards Engineer. NO REVISION TO DRAWING WITHOUT DEMKO APPROVAL. Aug. 8, 2002. DS642 Rev. A, 08Aug2002.

ADALET Installation Sheet
Series XIHMX (single body), XIHMKX (single body), XDHMX (dual body) Flameproof Enclosures
Adalet's XIHM & XIHMK Single Ended and XDHM Double Ended Series of Flameproof Enclosures are cast from copper-free aluminum. The enclosures are intended to be used primarily as instrument housings. The XDHM enclosures are similar to the XIHM enclosures, except they are provided with a threaded cover on both ends of the body. Up to three entries may be provided in the XDHM body and four in the XIHM and XIHMK body, with threads as specified in control drawings DS430 and DS431. The XIHM & XDHM enclosures are available with flat solid, flat glass, dome solid and dome glass style covers. The XIHMK enclosure is available with a flat solid or flat glass cover. The inside diameter of the XIHM/XIHMK enclosure may be increased to a dimension of 3.75 inches. These enclosures are ideal for indoor and outdoor areas where dampness and corrosive atmospheres are present. For added corrosion protection, bodies and covers of all models are available in 316 stainless steel.

Certifications
EN 50018: EEx d IIC (enclosures employed with cemented joints or solid covers); EEx d IIB (enclosures not employed with cemented joints or not solid covers).
Directive 94/9/EC: 0539 Ex II 2 GD. DEMKO Certificate: 02 ATEX 0205350U. Ambient Temperature Range: -40°C to +100°C.
UL 1203: Class I, Groups BCD; Class II, Groups EFG; Class III.
cUL: Class I, Groups BCD; Class II, Groups EFG; Class III (investigated to CSA C22.2 No. 30 by UL).
FM 3615: Class I, Groups BCD; Class II, Groups EFG; Class III. (Jointed covers UL, cUL & FM rated: Class I, Groups CD; Class II, Groups EFG; Class III.)
UL 50, NEMA 250: Type 4X. IEC 60529: IP66.

Enclosure Catalog Numbers
XIHM SERIES (single body) and XDHM SERIES (dual body):
A) With small (flat) cover, cemented window, _FGCX
B) With small (flat) cover, no window, _FCX
C) With small (flat) cover, flanged window joint, _FJCX
D) With large (dome) cover, no window, _DCX
E) With large (dome) cover, cemented window, _DGCX
F) With large (dome) cover, flanged window joint, _DJCX
G) With medium size cover, no window, _MCX
XIHMK (single body):
A) With small (flat) cover, cemented window, _FGCX
B) With small (flat) cover, no window, _FCX

SPECIAL CONDITIONS FOR SAFE USE:
1) Before opening the enclosure in a flammable atmosphere, the circuits must be interrupted.
2) The approval applies to equipment without cable glands. When mounting the flameproof enclosure in a hazardous area, only flameproof cable glands certified to EN 50018 must be used.
3) All unused conduit entries must be closed with a plug certified to EN 50018.

ADALET, Scott Fetzer Co.
4801 West 150th Street, Cleveland, OH 44135, USA. Phone: (216) 267-9000 / Fax: (216) 267-1681. www.adalet.com
DS643 Rev. A, 05AUG02. DEMKO APPROVED. NO REVISION TO DRAWING WITHOUT DEMKO APPROVAL. Signed: Timothy Smalley.
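Stepping back from the individual listings: the guide's coding system reads Type of Protection / Class / Division / Group / Temperature Class (see the APPROVAL DESIGNATION notes above). The following minimal parser is a hypothetical reading aid for that string format, not an official tool; the class and field names are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApprovalRating:
    protection: str       # XP, IS, DIP, NI, S, ... (may be hyphen-joined, e.g. XP-AIS)
    klass: str            # I, II, III or combinations such as "II,III"
    division: str         # 1 or 2
    groups: str           # e.g. "BCD" or "EFG"
    temp_class: Optional[str] = None  # e.g. "T6"; absent in older listings

def parse_rating(text: str) -> ApprovalRating:
    # Split "XP / I / 1 / BCD / T6" on the slash separators used in the guide.
    parts = [p.strip() for p in text.split("/")]
    if len(parts) < 4:
        raise ValueError(f"unrecognized rating: {text!r}")
    temp = parts[4] if len(parts) > 4 else None
    return ApprovalRating(parts[0], parts[1], parts[2], parts[3], temp)

print(parse_rating("XP / I / 1 / BCD / T6"))
print(parse_rating("DIP / II,III / 1 / EFG"))
```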
Planitherm high performance glazing for a more comfortable home
www.planitherm.com

Considering new windows?

When it comes to replacing your windows, **the glass really does make a difference**. Glass will typically make up 70% of your window and is at the heart of controlling the flow of natural light, the level of safety and security, the level of heat retention and even the level of furniture fade protection. Choosing the right glass for your home will have a huge impact on the performance of your windows, so it's important to get it right.

Insist on Planitherm glass

Our modern glazing can contribute so much to the comfort of your home, making you more connected to the world around you. Whether you're looking for more warmth, a peaceful night's sleep, or to enjoy a sun-filled room without overheating, Planitherm is the perfect glass for you. We want you to feel as passionate about glass as we do and feel the difference that Planitherm can make to your home. Planitherm is the UK's leading energy efficient glass and our full range can be incorporated into any style of window frame. All of our packages are energy efficient as standard and are manufactured in the UK specifically for our changeable climate!

We've made it simple

Planitherm glass can do amazing things. Through our unique combination of coatings and laminated layers, our high performance glazing is designed to make your home feel comfortable and secure. You might not be able to 'see' it, but you'll certainly feel the difference. To simplify the process, we've developed our Planitherm range into three clear options, helping you select the right level of comfort for your home. It's as simple as choosing the option that's right for you.

Comfort comes to life

To help you make your choice, we like to bring our comfort features to life. Find out more about each of our Planitherm high performance glazing features below and select the options that work for your home.

| Comfort Feature | Energy Standard | Comfort | Comfort Plus |
|-------------------------------|-----------------|---------|--------------|
| Energy Efficiency | ✓ | ✓ | ✓ |
| Enhanced Security | | ✓ | ✓ |
| Noise Reduction | | ✓ | ✓ |
| Furniture Fade Protection | | ✓ | ✓ |
| Reduce Overheating | | | ✓ |

With Planitherm, energy efficiency comes as standard. A special coating on our glass helps stop heat escaping, so you'll use less energy to keep your home warm. Our Energy Standard and Comfort glazing captures and makes use of the warmth from natural daylight, while Comfort Plus helps limit and control it – ideal for very sunny rooms.

Help protect your family from break-ins and vandalism. Our high-security transparent layer on Comfort and Comfort Plus glass makes them much tougher to break through than standard, unlaminated glass. When used in an approved window frame, they're designed to meet the official police security initiative 'Secured by Design' standard, so you'll feel safer and more secure in your home.

Make your home a peaceful respite from the world outside. Perfect for bedrooms to ensure a good night's sleep, a built-in acoustic layer keeps exterior noise exactly where it belongs – outside. If you live on a busy road or close to other disturbing sounds, noise control glazing should be on your list.

Keep your furniture looking newer for longer. Prolonged exposure to ultra-violet light from the sun will contribute to your furniture and curtains fading over time. Our hidden layer of protection on Comfort and Comfort Plus glass blocks out 99% of the sun's UV rays.
Think of it as sun-block for your sofa, curtains and carpets.

Perfect balance of light and warmth. Rooms with lots of glass will give you a great view, but the warmth from natural daylight can make it uncomfortably hot on sunny days. Our Comfort Plus glass has a special coating designed to keep out the heat from the sun's rays, for a comfortable and bright environment and greater control of your internal temperature. It's particularly suited to south- and west-facing rooms with sunny aspects, or larger glazed areas such as bi-fold and patio doors.

Energy Standard

Keep your home warm and comfortable. Keeping your home warm can be challenging when energy prices are rising. Choosing windows with Planitherm Energy Standard glass can help to combat this. The special coating on our glass captures the warmth from natural daylight, and stops 56% more internal heat escaping than older-style double glazing. So, it will cost you less to keep your home warm.

How it works: The crystal clear, microscopic coating applied to the inner pane of glass allows more light and heat into the room, helping to warm it, while keeping more heat in too.

Benefits
• Improved energy efficiency
• Can help lower energy bills

Where to use it: Ideal for standard windows in any room, but not ideal for south- or west-facing rooms with sunny aspects or large glazed areas.

Comfort

Choose better security and better sleep. Disturbed by outside noise? Concerned about security? Choosing windows with Planitherm Comfort can help provide the answer. With a built-in transparent layer, Comfort glass is much tougher to break through than standard unlaminated glass. Ideal for bedrooms or street-facing rooms, the layer also reduces noise by 20%, helping to improve your overall well-being. As an added bonus, it also blocks 99% of UV rays, helping your furnishings to look as good as new for longer.

How it works: The transparent layer laminated between two panes of glass creates a strong barrier, dampens noise and blocks over 99% of UV light.

Benefits
• Helps protect against burglary and vandalism – designed to meet the official police security initiative 'Secured by Design' standard
• Reduces unwanted noise
• Improved energy efficiency
• Reduces fading of your furniture

Where to use it: Ideal for standard windows in any room, particularly bedrooms for a peaceful night's sleep, or those overlooking busier streets where noise and/or security are a concern.

Comfort Plus

Make your home into the ultimate sanctuary. For the perfect balance of light, warmth and comfort, choose Planitherm Comfort Plus. An invisible coating is specifically designed to block out 50% of external heat from the sun while keeping in 63% more internal heat. Your home will be cooler in summer and warmer in winter. Particularly useful for large glazed areas, such as bi-fold or patio doors, and for south- and west-facing rooms with sunny aspects. The glass is also combined with a built-in transparent layer to give you all the comfort features of the Comfort option.

• Keeps out 50% of the sun's heat
• 20% better noise reduction*
• 99% UV reduction**
• 63% energy efficiency improvement*

How it works: The special clear coating on the glass reflects away heat from the sun, while optimising heat retention in the room. The transparent, laminated layer creates a strong barrier, dampens noise and blocks UV light.

Benefits
• Blocks out heat from the sun's rays, giving you greater control of your internal temperature
• Improved energy efficiency
• Helps protect against burglary and vandalism – designed to meet the official police security initiative 'Secured by Design' standard
• Reduces fading of your furniture
• Reduces unwanted noise

Where to use it: Ideal for homes with large glazed areas, bi-fold doors and south- and west-facing aspects. An excellent choice for both new builds and renovations.

For further information

To find out more and choose the right Planitherm Comfort Features for your home, visit www.planitherm.com. You will find product information and advice on selecting the best options for your home, and you can search for a local installer.

Planitherm Network

The Planitherm Network is a nationwide group of independent window installers and fabricators. All have been vetted to meet our minimum membership criteria and many have undertaken additional training on Planitherm glass. Choosing a Planitherm Network member local to you gives you access to a wealth of advice and expertise and ensures that you can choose Planitherm with total confidence. Find an installer at www.planitherm.com

Please note that the installers listed on the Planitherm Network are for guidance only and are not endorsed by us. We would strongly recommend that you obtain your own quotations, obtain your own references from satisfied customers, and make your own decisions. We cannot be held responsible and we do not accept any liability for any disputes between yourselves and any of these installers.

**FAQs**

**Q** Are the coatings and laminated layers on the glass permanent?
**A** Yes. The coating is permanent, invisible and hermetically sealed within the glass unit, so it will last the lifetime of the window. For product options that include a laminated layer, the layer is permanently adhered between two panes of glass.

**Q** Will the coatings and laminated layers affect the appearance of my windows?
**A** No, the appearance remains neutral. The coating is completely transparent and microscopically thin, so it is not visible to the naked eye. Laminated layers are also very thin and totally clear.

**Q** Can I specify Planitherm glass with any type of window frame?
**A** Yes, all Planitherm Comfort glazing options can be included in any size of double glazed window unit and used with all popular frame materials and types, including UPVC, wood and aluminium. Members of the Planitherm Network are installers who can provide you with comprehensive advice and installation expertise. You can search for an installer near you at [www.planitherm.com](http://www.planitherm.com).

**Q** Is the performance of Planitherm glass proven?
**A** Yes, the performance of Planitherm meets the requirements of European standard BS EN 1096-4 and carries the relevant CE Marking. For more information, visit [www.saint-gobain-glass.com/ce](http://www.saint-gobain-glass.com/ce).

**Q** What does this performance mean to me in real terms?
**A** By switching from older-style, non-coated double glazing to one of the new Planitherm Comfort glazing options, you can benefit from: over 50% improved energy efficiency, enhanced security, up to a 20% reduction in noise, furniture fade protection and, with the Comfort Plus option blocking out 50% of external heat from the sun, a reduction in overheating for large glazed areas.

**Q** Where can I buy Planitherm glass?
**A** Planitherm glass, and each of the Comfort glazing options, is available to buy via window companies across the UK.
Look for members of the Planitherm Network – installers who can provide you with comprehensive advice and installation expertise. You can search for an installer near you at [www.planitherm.com](http://www.planitherm.com).

**Q** Does Saint-Gobain Building Glass guarantee the quality of work of installer members of the Planitherm Network?
**A** No. Installers who use Planitherm can register on the Planitherm Network and have met our Network membership requirements. However, we do not guarantee their work, and any queries or issues with your windows should be directed to your installer.

For more frequently asked questions, visit [www.planitherm.com](http://www.planitherm.com)

---

*Compared with a 4mm/16mm cavity/4mm (uncoated) double glazed unit filled with air.
**Calculated from UV transmittance in accordance with EN 410.
MEMORANDUM

TO: Stephen Schneider, Arnold Arboretum
FROM: Jennifer Relstab, P.E. and Hannah Carlson, RLA
RE: Roslindale Gateway Path 25% Conceptual Plan (Draft)
CC: Brian Kuchar, RLA, P.E.

The Horsley Witten Group, Inc. (HW) is providing this memorandum to summarize the design elements of the draft 25% conceptual plan for the Roslindale Gateway Path in Roslindale, MA. The draft 25% conceptual plan incorporates revisions to the 10% design concept based on the following:

- Detailed review of surveyed topography and proposed grading;
- Recommendations suggested by project partners and stakeholders at the project kickoff meeting;
- Information provided by the City of Boston regarding the South Street and Bussey Street intersection; and
- Input from the Arnold Arboretum regarding potential impacts to vegetation and trees.

The draft 25% conceptual plan as well as additional relevant information is provided as attachments to this memorandum as noted herein.

Background

This project is a continuation of design work completed by HW for the Arboretum Park Conservancy and WalkUP Roslindale for a multi-use path that connects residents and commuters from the Forest Hills Massachusetts Bay Transportation Authority (MBTA) station to the Roslindale Village MBTA commuter rail station (HW, 2016; HW, 2017). The purpose of this project is to: merge the two previous path designs into one continuous path; advance design and alignment of the path; and examine connectivity and wayfinding for both cyclists and pedestrians. Also included in this project is a limited survey of a portion of the Arnold Arboretum (an approximately 30-ft wide segment along the proposed path), which was completed on November 9, 2017. The survey work included site topography, existing pathways, drainage infrastructure, utilities, walls and other key natural and man-made features.

Summary of Existing Conditions

A locus map showing the overall project area is provided in Attachment A, Figure A.1. There are three major sections within the Roslindale Gateway Path that have been defined through previous projects:

• **Section 1: MBTA** – This section extends from the northeastern edge of the existing platform at the Roslindale Village MBTA commuter rail station to the existing stone wall that abuts the Arnold Arboretum property. This parcel is currently owned by the MBTA.
• **Section 2: Peters Hill** – This section is within the Arnold Arboretum Peters Hill area, located east of the MBTA property and west of South Street. This section ends at the Poplar Gate.
• **Section 3: Blackwell Path Extension** – This section is between Poplar Gate and the Blackwell Footpath at Bussey Brook Meadow Gate. A portion of this path is adjacent to the Bussey Brook Meadow.

Each section abuts the MBTA railroad bed to the south and is defined by unique physical characteristics, which are summarized briefly below. A more detailed description will be provided in the final report.

**Section 1: MBTA**

The MBTA parcel is wooded with flat topography (<5% slope) near the tracks and a steep hillside (~20% slope) to the north and east. Based on Natural Resources Conservation Service (NRCS) data, soils in this area are primarily in hydrologic soil group (HSG) B and have a moderate infiltration capacity. During the survey, depressed wet areas were identified, along with several large diameter trees (> 24") of various species. A stone wall, which extends to Arborough Road, defines the eastern boundary of this parcel.
**Section 2: Peters Hill**

The western portion of this section of Peters Hill is characterized by open, grassed areas with flat topography, which transitions to a moderate slope (5 to 10%) in the wooded area. In the eastern portion, closer to South Street and Bussey Street, the terrain is varied, with a steep hillside approaching the South Street underpass. The NRCS soils in this area are generally HSG A, indicating a high infiltration capacity, though it is understood that soil conditions were disturbed during the construction of the railroad bed. This section contains several collections of trees belonging to the Arnold Arboretum, including dawn redwoods (*Metasequoia*), poplars, crabapples, oaks, pines and others. An existing drainage swale drains a portion of the area to two 30" diameter culverts that cross under South Street to the south.

**Section 3: Blackwell Path Extension**

The final section has two distinct subsections on either side of Arboretum Road. Between the South Street underpass and Arboretum Road, the area is largely wooded with wetland species (primarily purple loosestrife) in the valley. There is a large low area situated between the steeply sloping embankment along South Street and the MBTA railroad bed, which is identified on existing plans as a detention basin. An existing footpath traverses the low area. The 30" diameter culverts from the Peters Hill section and a smaller pipe connected to catch basins on South Street discharge into the detention basin. The outlet is a 36" diameter reinforced concrete pipe, which discharges to the Bussey Brook Meadow. Between Arboretum Road and the Bussey Brook Meadow Gate, the area is grassed and open, and the topography generally follows road grade and the existing stone wall. The NRCS soils in this area are HSGs A and C.

Summary of Concept

The draft 25% conceptual plan for the Roslindale Gateway Path provides a new 10-foot wide accessible pathway with 2-foot wide shoulders on either side, connecting the Roslindale Village Commuter Rail Station to the Blackwell Footpath. The design makes use of the existing Peters Hill Road, connects to Poplar Gate, and crosses the road at the intersection of Bussey Street and South Street. There are three new primary gateways onto the path. Secondary gateways and paths provide additional connections for residents in adjacent neighborhoods and visitors to the Arnold Arboretum. The draft plan is provided in Attachment B and is summarized below. The path alignments and profiles are provided in Attachment C. A zoom-in of the section of the alignment where the multi-use path meets Peters Hill Road is provided in Attachment D to highlight the impacts to poplar trees (*Populus*). Additional information on materials is provided in the last section of this memorandum, with supplemental precedent images provided in Attachment E.

Section 1: MBTA

Gateways and Path Alignment

A proposed primary gateway entrance to the Roslindale Gateway Path is located where the existing platform abuts the MBTA parcel and the commuter rail right-of-way. The path alignment follows the shallow slope of the existing grade and gently meanders around existing large diameter trees, helping to reduce speed. From the commuter rail platform, the path is relatively straight to allow users to maintain sight lines. The path follows grade, except where slopes are greater than 20% at the connection to the Arnold Arboretum property.
In this portion, a boardwalk is recommended to maintain accessibility and continue the gently curving path without disturbing a large area with earthwork and tree removal. Stormwater management features, such as bioretention areas, are recommended to manage runoff from the surrounding area as well as the path. A secondary gateway is proposed at the stone wall along the MBTA and Arnold Arboretum property line. This gateway reuses the existing stone wall by creating a gap for the pathway to cut through the wall, similar to the entrance into the Arnold Arboretum at Arborough Road. This gateway marks the entrance into the Arnold Arboretum and the change of the landscape from wooded to open meadow.

Potential Impacts to Landscape

This alignment reduces disturbance to existing vegetation and habitat and is in keeping with the goals of the Urban Wilds Initiative, a potential opportunity suggested by the stakeholders. It also encourages management of overgrown understory and cleanup of debris and trash that has accumulated in the area. As shown, the alignment will impact most trees in the vicinity with calipers below 12", but several of the larger diameter trees (> 18") should not be impacted.

Site Amenities

Wayfinding signage is recommended at the gateway to orient path users and to identify the entrance into the Arnold Arboretum. Path lighting is not recommended in this section, which corresponds with the Arnold Arboretum's policy on lighting within their property. However, in-grade textured surface materials or reflective materials (on or along the path or on the boardwalk) can be used as indicators to key path changes (e.g., approaching the boardwalk).

Materials

The recommended path material in this section is stabilized soil, which provides a more natural appearance in this heavily wooded area. The boardwalk decking material and railings are recommended to be of Ipe wood, with helical piles providing support. This gateway is recommended to have two "shoulder-height" stone columns flanking the entrance, with signage naming the entrance, similar in appearance and height to the Bussey Street and Peters Hill gates. These would visually indicate that it is an entrance to the Arnold Arboretum while maintaining sight lines from the MBTA commuter rail station down the path and vice versa.

Section 2: Peters Hill

Gateways and Path Alignment

On the northeast side of the proposed secondary gateway, where the wooded MBTA section opens up into the Arnold Arboretum, a secondary pathway connects from the primary path up the hill to Arborough Road and the Mendum Street gateway. The primary path alignment continues into the Arnold Arboretum from the stone wall and sweeps through the existing meadow towards the stand of oaks and then curves towards the stand of dawn redwoods (*Metasequoia*), offering closer views and interactions with the botanical landscape. The meander also encourages reduced speeds for bicyclists as they travel to and from the boardwalk on the MBTA property. To discourage visitors from continuing to use the existing desire line, a small berm with additional meadow plantings would be added. The path turns uphill through existing collections of mainly crabapples (*Malus*) to meet with Peters Hill Road. Similar to Section 1, the path generally follows existing grade, except in the area of an existing shallow swale and at Peters Hill Road (where grades are greater than 5%).
In order to cross over the existing swale without adding fill and without extensive grading or walls, a low profile bridge or boardwalk is shown on the plans. This crossing allows water to pass under it and creates a simple, attractive feature in the landscape. For safety, the crossing requires a toe curb. The path meets Peters Hill Road south of the intersection and continues onto the access road leading to Poplar Gate. This junction avoids the disturbance of the majority of the poplar collection on the hill leading towards Poplar Gate. However, at least one poplar, as well as approximately 20 other trees and shrubs in the vicinity, will need to be removed to make this accommodation. The alignment ties into the road at an angle that allows for open sight lines and safe turning radii for bicyclists. The use of the existing road reduces the amount of "hardscaping" and allows room for other types of uses. The section of the path at the intersection with Peters Hill Road is steep; therefore, the path is graded out to create a more level "landing" off of the loop road. From Peters Hill Road, the path follows the existing asphalt path to Poplar Gate at the intersection of Bussey and South Streets. Pathway markings are recommended on the existing road, either with paint or with in-grade materials such as granite cobbles. A bump-out is currently proposed at the entrance to Poplar Gate based on plans provided by the City of Boston's Public Works Department (City of Boston, 2017), which would be hardscaped with concrete similar to sidewalks in the area. A secondary path connection mostly follows existing footpaths and connects the South Street underpass to the existing access road near Poplar Gate. The existing entrance into the Arnold Arboretum from the South Street underpass is recommended to be formalized with a small accessible ramp to create a safer and more visible access point to Peters Hill. In the future, a new pedestrian crossing is recommended along the curve of South Street, in a location to be vetted to meet safety and traffic requirements.

**Potential Impacts to Landscape**

The existing footpath which runs parallel to the railroad tracks will likely be impacted through added grading and/or landscaping to encourage pedestrians to use the proposed path. The path and grading appear to impact existing trees where the path connects to Peters Hill Road, including:

- 1 poplar (*Populus*)
- 8 crabapples (*Malus*)
- 3 firs (*Abies*)
- 2 spruces (*Picea*)
- Others: mountain ash (*Sorbus*), lilac (*Syringa*), hawthorns (*Crataegus*), and honeysuckle (*Lonicera*).

It should also be noted that other footpaths may be interrupted by this alignment.

**Site Amenities**

Wayfinding signage is recommended by the stone wall at the MBTA/Arnold Arboretum property line to orient users coming from Arborough Road and Peters Hill and entering/exiting the MBTA property. Pavement markings or in-grade materials provide direction and show visitors where to continue on the path from the existing road at the junction. Wayfinding signage at the enhanced secondary entrance by the South Street underpass directs visitors to various places within the Arnold Arboretum as well as to the primary path and transportation hubs. Interpretive signage is recommended at the location of the dawn redwoods (*Metasequoia*) as an education and outreach opportunity for the Arnold Arboretum. No visible lighting is recommended in this section, which corresponds with the Arnold Arboretum's policy on lighting within their property.
However, reflective materials (on or along the path) and/or in-grade materials can be used as indicators to key path changes.

**Materials**

The recommended path material in this section is stabilized soil, which provides a more natural appearance, as preferred by the Arnold Arboretum. The swale crossing material and toe curb would be made of Ipe wood. The enhanced secondary gateway would use granite blocks to formalize steps into the Arnold Arboretum, and a tilted or cut granite block is recommended to provide a ramp. Alternatively, a small bike ramp could also be attached.

**Section 3: Blackwell Path Extension**

**Gateways and Path Alignment**

A bump-out is currently being proposed at the intersection opposite Poplar Gate by the City of Boston's Public Works Department (City of Boston, 2017). The bump-out is shown in this concept as well, which promotes traffic calming and creates a safer crosswalk to Poplar Gate. A primary gateway would be added across South Street from Poplar Gate, which would lead users onto the new section of pathway and tie together the old entrance and the new one. The existing stone wall would be extended into the bump-out and connect to the new gateway, creating a similar appearance to other Arnold Arboretum gateways. A hardscaped area in the bump-out is proposed to ensure open sight lines at the intersection; to create a safe place for users to wait before they cross the street; and to enhance the aesthetics and view of the new gateway. The new gateway would be similar in appearance to the Bussey Brook Meadow gate, with an opening for pedestrians and another for maintenance vehicles. Stormwater management features (bioretention areas) would be included to capture and treat stormwater runoff and enhance aesthetics at the gateway. There are two paths proposed: 1) a path to the South Street underpass and 2) a path to the existing Blackwell Path. The path to the underpass would follow grade close to the existing stone wall. The path leads users to the location of the future pedestrian crossing to Peters Hill and extends to a break in the existing wall to the proposed sidewalk extension, connecting users to Archdale Road. The path connecting to the Blackwell Path would follow along the grade of South Street with a boardwalk, meeting existing grades near the Arboretum Road underpass. A lookout platform off of the boardwalk provides a place to rest as well as views into the detention basin below. Interpretive signage is recommended at the lookout as an education and outreach opportunity for the Arnold Arboretum. The path continues at grade until it meets with the existing Blackwell Path at the Bussey Brook Meadow Gate. A primary gateway at the Arboretum Road underpass is proposed to provide an accessible connection for residents to the Arnold Arboretum. The gateway would include signage on the wall on the sides of the underpass entrances. Due to the steep slopes (> 10%) between the underpass and the primary path, a stairway with a bike ramp may be necessary.

**Potential Impacts to Landscape**

This path will impact several trees in the wooded area between the primary gate at the intersection of Bussey and South Streets and the South Street underpass. However, the path will open up sight lines for both pedestrians and vehicles, which will improve safety. Limited impacts to the existing landscape are likely for the path to Bussey Brook Meadow Gate, as that canopy is relatively open and the path is proposed to follow grade.
Stabilization is currently needed at two outfall points on the slope going down into the detention basin. It is recommended that the slope be restored with an erosion control blanket and vegetation (erosion control mix and plant plugs). If needed, a meandering outfall swale along the hillside could be implemented to further manage and treat stormwater runoff.

**Site Amenities**

Wayfinding signage is recommended at the bump-out to orient users as they approach Poplar Gate and as they continue towards Bussey Brook Meadow. Wayfinding signage is also recommended at the Arboretum Road underpass to orient users to the location of the gateway connections. A lookout platform along the boardwalk is proposed with interpretive signage to educate users about stormwater and/or Bussey Brook Meadow. No visible lighting is recommended in this section, which corresponds with the Arnold Arboretum's policy on lighting within their property. However, reflective materials (on or along the path) and/or in-grade materials can be used as indicators to key path changes.

**Materials**

The recommended path material to the South Street underpass is stabilized soil, which would allow a more natural appearance in the wooded area and would be more resistant to erosion. The hardscaped landing at the bump-out would be concrete to match typical sidewalk materials in the area. The boardwalk and railing material would be Ipe wood. The remaining path between the boardwalk and the Blackwell Path would be a dense graded stone to match the material used along the existing Blackwell Path. It is recommended that the new primary gateway have three "shoulder-height" stone columns with signage naming the entrance, similar in appearance and height to the Bussey Brook Meadow Gate. The existing stone wall would be extended using the same materials.

Review of Materials

**Stabilized Soil**

Stabilized soil with an organic binder is the preferred material for the majority of the path segments. Stabilized soil is recommended for the following characteristics:

- has a natural appearance;
- has smaller aggregate sizes;
- is more firm than dense graded stone;
- uses an organic binder that allows the soil to perform similar to an asphalt path without the use of chemicals; and
- can be installed on slopes up to 8% (a simple grade check is sketched at the end of this review).

Maintenance of stabilized soil pathways is comparable to other pathway surfaces and generally less than with dense graded stone. Typical maintenance would be small repairs that involve re-wetting and re-compacting of the stabilized soil, or adding small amounts of new material. These repairs are typically less intensive than repairs to asphalt surfacing.

**Dense Graded Stone**

Dense graded stone is preferred for the Blackwell Path Extension up to the Bussey Brook Meadow Gate to provide continuity with the existing Blackwell Path. Also, the proposed shallow slopes will discourage rutting and erosion, so a more stabilized material is not needed. Further, there is proven functionality of the Blackwell Path as a natural-looking multi-use path in this area.

**Boardwalk**

Sections of boardwalk will be built from Ipe wood planking, which is a very strong, high density hardwood. The wood is rot-resistant and long-lasting, aging to a silver grey color.

**Ramps**

A ramp for accessibility is proposed to enhance access at the formalized existing gateway by the South Street underpass. A tilted or cut rough thermal finish granite stone to match the existing stone wall is proposed. An additional set of stairs and ramp may be required at the Arboretum Road underpass, which could be of a concrete or granite material.
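As a small illustration of the grade thresholds discussed in this review (roughly 5% where the path must follow accessible grades, and 8% as the stated installation limit for stabilized soil), the sketch below checks hypothetical segment profiles. The station lengths and elevation changes are illustrative placeholders, not surveyed values from this project.

```python
# Grade check for path segments (illustrative sketch; not project data).
# Thresholds follow the memo: ~5% where accessibility governs, 8% as the
# stated installation limit for stabilized soil.
segments = [  # (length_ft, rise_ft) -- hypothetical profile data
    (200.0, 6.0),   # 3.0%
    (150.0, 9.0),   # 6.0%
    (100.0, 9.5),   # 9.5%
]

for length, rise in segments:
    grade = 100.0 * rise / length
    if grade <= 5.0:
        note = "ok"
    elif grade <= 8.0:
        note = "exceeds 5%: consider boardwalk, ramp or regrading"
    else:
        note = "exceeds 8%: beyond stabilized soil installation limit"
    print(f"{grade:.1f}% -> {note}")
```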
An additional set of stairs and ramp may be required at the Arboretum Road underpass, which could be of concrete or granite.

**Surface Markings**

In-grade textured surfacing, pavement markings and reflectors are speed-reducing measures and wayfinding options that can help orient both multi-use path users and other visitors to the Arnold Arboretum. Subtle visual measures as well as textured surfacing highlight the route and areas where the path intersects with others.

**Gateways**

Primary gateways would use the same materials and design as existing gateways into the Arnold Arboretum. Secondary gateways would be either openings in stone walls or steps and ramps up over existing walls. Primary gateways would clearly be entrances into the Arnold Arboretum with the same aesthetic as the existing gateways, while secondary gateways would provide clear access through or over the walls that form the visual boundary of the Arnold Arboretum. Examples of these materials are shown in Attachment E.

References

Horsley Witten Group, Inc. (HW). 2016. *Revised Conceptual Plan and Cost Estimate, Blackwell Path Extension*. Technical Memorandum. January 20, 2016. Prepared for: Arboretum Park Conservancy.

Horsley Witten Group, Inc. (HW). 2017. *Roslindale Gateway Path Conceptual Plan and Cost Estimate*. Technical Memorandum. April 14, 2017. Prepared for: WalkUp Roslindale and Livable Streets Alliance.

City of Boston Public Works Department Engineering Division (City of Boston). 2017. South Street & Bussey Street, West Roxbury. CIP 17-32, Sheets 12–17. May 2017.

Attachment A: Locus Map
Figure A.1: Roslindale Gateway Path locus map (Horsley Witten Group, 12/20/2017), showing Arnold Arboretum (Peters Hill), Roslindale Village and its commuter station, the Mendum Street, Poplar and South Street Gates, the Blackwell Footpath to Forest Hills, and Bussey Brook Meadow. Legend: project area; MBTA property (approx.); MBTA commuter rail line and station; proposed primary path; future secondary path; existing mulch path; parcels; Bussey Brook.

Attachment B: 25% Conceptual Design Plan Draft

Attachment C: Path Alignments and Profiles
Plan sheets showing the MBTA section alignment and profile and the Blackwell Path section alignment and profile (horizontal scale: 1" = 60'; vertical scale: 1" = 5'). Draft; not for construction.

Attachment D: Junction of Multi-Use Path and Peters Hill Road – Populus Trees Affected by Path Alignment (Roslindale, MA)

Attachment E: Material Examples
Figure E.1: Stabilized soil driveway at historic house
Figure E.2: Dense graded stone at Blackwell Path
Figure E.3: Alewife Wetland Ipe boardwalk
Figure E.4: Example of bike ramp. This photo shows a typical stair profile, not the granite material.
(Pinterest photo, Russell Baxley, Swamp Rabbit Trail)
Figure E.5: In-grade textured surfacing, in this case cobble rumble strips
Figure E.6: Bike/pedestrian lane pavement markings
Figure E.7: Bussey Brook Meadow Gate, precedent for new primary gateways
Figure E.8: Arborough Road opening in stone wall, precedent for new secondary gateways
Dirty Fashion: Spotlight on China

Why the Chinese Collaboration for Sustainable Development of Viscose will not be able to deliver on its promise

China is the largest textile producer in the world and a dominant player in the global viscose market. With a 63% share of a growing market already worth US$12 billion worldwide, the Chinese viscose industry is also under pressure to clean up its performance. Ten leading Chinese viscose producers, along with two trade associations, came together in March 2018 to form their own initiative to promote sustainable viscose sourcing and manufacturing. The so-called Collaboration for Sustainable Development of Viscose (CV) has launched a three-year Roadmap, which claims to provide a way for CV members to achieve sustainable viscose supply chains.

However, far from driving meaningful transformation of the sector in line with best practices for responsible viscose production, this report shows that the CV Roadmap fails to drive ambition among its members, and gives Chinese producers the option to pick and choose between different standards. At a time when global fashion brands and retailers are sending a clear message to their suppliers to commit to cleaner viscose-sourcing and production methods, this approach appears short-sighted and unstrategic.

To date, eight major brands and retailers – ASOS, C&A, Esprit, H&M, Inditex, Marks & Spencer (M&S), Next and Tesco – have publicly pledged to integrate Changing Markets' *Roadmap towards responsible viscose and modal fibre manufacturing* into their sustainability policies. This Roadmap sets the viscose industry on a pathway to closed-loop manufacturing, in line with the most ambitious current guidelines for clean viscose manufacturing: the European Commission's 2007 Reference document on best available techniques (BAT) in the production of polymers. In addition, 160 brands have pledged to stop sourcing wood pulp (used in the production of viscose) from ancient and endangered forests, in line with their commitment to CanopyStyle, which goes beyond the approach set out in the CV Roadmap.

This report finds that, through the CV initiative, Chinese producers are committing to an approach that will make them fall short of what some viscose producers (including Austria-based Lenzing, a member of the CV initiative) are already achieving, or have committed to achieve in the coming years. This is all the more concerning considering ongoing government and media accounts, highlighted in this briefing, that speak of serious pollution issues around CV members' production sites.

---

1 China Chemical Fibres Association, China Cotton Textile Association, CHTC Helon, Fuming Asyong, Jilin Chemical Fibre, Sateri, Shandong Yanxi, Shandong Yinying (Silver Hawk), Tangshan Sanyou, Xinjiang Balu Chemical Fibre, Yihai Grace and Zhejiang Fulida.

In particular, this report finds that the CV Roadmap:

- **Lacks ambition**, by not obliging its members to achieve the highest level of production standard recommended by the Chinese government for companies selling to the international market, or a standard that would align with EU BAT, which several leading fashion brands and retailers support.
- **Allows members to pick and choose from a selection of certification standards and industry self-assessment tools**, which non-governmental organisations (NGOs) have criticised for their lack of ambition (for example, the Programme for the Endorsement of Forest Certification (PEFC) standard) or for not covering some key parameters (for example, OEKO-TEX does not take a comprehensive approach towards viscose manufacturing).

- **Lacks clarity and transparency**, by failing to provide publicly available information about how the CV Roadmap will be enforced, monitored and verified, and whether it will sanction non-complying members.

For all these reasons, the CV initiative will not deliver on its promise to improve the environmental performance of CV members – a shortcoming that needs to be acknowledged and urgently addressed. This report provides a set of recommendations for how the CV secretariat can increase the level of ambition and commit to a robust approach to responsible viscose production, in line with the requirements of the CanopyStyle commitment and EU BAT as laid down in the Changing Markets *Roadmap*.

---

1. **Introduction: China's place in the global viscose market**

The rapid development of China's textile industry has become one of the biggest threats to China's environment. Historically one of the country's most polluting industries, it has repeatedly been identified as a major contributor to water stress, due to production generating large quantities of inadequately treated wastewater.\(^2\) China's Ministry of Environmental Protection reports that the industry is the third-biggest source of wastewater, accounting for over 10% of China's total industrial wastewater in 2015 alone.\(^3\) In 2017, the Chinese NGO Institute of Public & Environmental Affairs (IPE) recorded the textile industry committing over 300,000 violations of environmental standards.\(^4\)

China is also the world's top viscose producer, accounting for around 63% of global viscose output. The industry, once concentrated in North America and Europe, shifted to Asia in the late 20th century as a result of its cheaper labour costs and looser environmental protection rules. In the first decade of the 21st century, China quadrupled its viscose-production capacity.\(^5\)

Viscose is an increasingly popular textile widely used in high-street and high-end fashion alike. It is currently the third most commonly used fibre in the world, after synthetics and cotton.\(^6\) As a fibre which is in principle biodegradable, viscose has the potential to be a sustainable alternative to oil-derived synthetics and water-hungry cotton. Also, market research suggests that biodegradability will be a key factor influencing consumers' purchasing decisions, boosting demand for materials that are plant-based and replenishable.\(^7\) However, many viscose manufacturers have yet to adopt responsible production methods and sourcing practices to make viscose a sustainable fibre.

While Austria's Lenzing and India's Aditya Birla Group are the two largest individual players on the viscose market, collectively, Chinese companies dominate the industry. In 2017, the revenues generated by Chinese viscose producers reached more than US$7.3 billion. By way of comparison, in the same year the two next-largest markets, Europe and India, had estimated revenues of US$1.4 billion and US$1.2 billion respectively.
Annual production of viscose staple fibre (VSF)\(^8\) globally is nearly 5 million tonnes, of which China accounted for 3.6 million tonnes in 2017.\(^9\) The Chinese viscose-fibre industry is highly concentrated; in 2017, 65% of its viscose-fibre sales came from its top eight producers. Most of the companies are located in eastern coastal areas, as well as Xinjiang province in the country's northwest.\(^10\)

An investigation into conditions at viscose-manufacturing sites carried out by the Changing Markets Foundation in 2017\(^11\) found that major Chinese viscose manufacturers were dumping highly toxic chemicals in local waterways, destroying aquatic life and directly exposing workers and local people to harmful chemicals. In a striking example of the industry's impact on iconic nature spots, pollution from viscose manufacturing was found to be polluting Lake Poyang, China's largest freshwater lake.

In response to China's considerable environmental challenges, in recent years the government has strengthened enforcement of pollution regulations. This has significantly affected China's manufacturing sector. Tens of thousands of factories have been shut down and fined, and their management accused of criminal offences, following inspections by the Chinese Ministry of Environmental Protection. This wave of enforcement has also hit the textile industry.\textsuperscript{21}

Due to this increased government scrutiny – combined with pressure from clothing brands, retailers and initiatives such as IPE's Blue Map Database (which provides greater transparency on the Chinese textile sector's environmental performance) – an industry-led initiative has been created to develop a more sustainable viscose-manufacturing industry in China. This initiative – the Collaboration for Sustainable Development of Viscose – brings together China's leading viscose producers, which collectively account for more than half of global VSF production. The initiative commits its members to adopt and implement a three-year Roadmap that promises to provide a sustainability pathway for the Chinese viscose industry and drive real market transformation. This briefing analyses the merits of this initiative, and provides recommendations for its improvement.

China in the Global Viscose Fibre Market – Key Facts

Size of the global viscose market:
- 2017: US$12 billion
- 2023 (projected): US$15.9 billion

Compound annual growth rate 2017–2023: 4.76%

Top countries in sales (2017):
- China: 63.54%
- India: 10.03%
- Southeast Asia: 9.34%
- Europe: 9.22%
- North America: 2.32%
- Others: 5.56%

Top markets by revenue (2017):
- China: US$7.3 billion
- Europe: US$1.4 billion
- India: US$1.2 billion

In the first decade of the 21st century, China quadrupled its viscose-production capacity.

Top manufacturers by revenue share (2017):
1. Lenzing: 19.81%
2. Aditya Birla Group: 19.11%
3. Sateri: 6.4%
4. Ajayang: 4.63%
5. Bohi: 4.14%
6. Xiangsheng Group: 3.48%
7. Grace: 3.10%
8. Zhongtai Chemical: 3.02%
9. Fulida: 2.41%
(Others: 24.26%)

The Chinese viscose-fibre market is growing:
- 2017: US$7,353 million
- 2023 (projected): US$9,575 million

---

2 Viscose fibre exists as viscose filament yarn and viscose staple fibre. Viscose filament yarn is a spun thread ready for weaving into textiles. Viscose staple fibres, which represent about 95% of the market, are cut into short pieces after the spinning bath and can be blended with other fibres into textile yarns or processed into 'non-woven' products later on.
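As a quick sanity check on the market figures quoted in this section, the short Python sketch below recomputes the implied compound annual growth rates from the cited 2017 and 2023 values. It is purely illustrative; the inputs are the figures cited in the key-facts box above, and the small gap between the ~4.8% it returns for the global market and the stated 4.76% reflects rounding in the source figures.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# Global viscose market, US$ billions, 2017 -> 2023 (figures cited above)
print(f"Global market CAGR:  {cagr(12.0, 15.9, 6):.2%}")   # ~4.80%, close to the stated 4.76%

# Chinese viscose-fibre market, US$ millions, 2017 -> 2023
print(f"Chinese market CAGR: {cagr(7353, 9575, 6):.2%}")   # ~4.50%
```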
2. Environmental impacts of viscose production

Several aspects of the viscose supply chain are environmentally destructive, including the potentially devastating impacts of wood-pulp production on ancient and endangered forests, pollution and the release of toxic chemicals at fibre-manufacturing plants, and the unsustainable use of water and harmful chemicals in the dyeing and finishing process. With responsible logging and chemical management, viscose can be produced in a way that minimises impacts on people and the environment. However, many manufacturers across the industry are yet to adopt such best practices.

According to Canopy, dissolving pulp for viscose production wastes approximately 70% of the tree, and its production is a chemically intensive manufacturing process.\textsuperscript{22} Moreover, around 30% of viscose that goes into clothing comes from pulp logged from endangered and ancient forests.\textsuperscript{23}

In addition, the viscose-fibre manufacturing process still depends on the use of toxic chemicals to transform wood pulp into viscose fibre, and, as a result, is linked to alarming environmental and health impacts at and around production sites. Carbon disulphide (CS\textsubscript{2}), which is at the heart of the process, is a toxic and endocrine-disrupting chemical linked to numerous serious health conditions. Most notoriously, it was found to be a cause of insanity in factory workers over a century ago,\textsuperscript{24} but it also contributes to illnesses ranging from kidney disease and Parkinson's-like symptoms to heart attack and stroke.\textsuperscript{25} The chemical can be present in both water and air as a result of pollution from viscose factories.\textsuperscript{26}

Similarly, sodium hydroxide (NaOH, also known as caustic soda) and sulphuric acid (H\textsubscript{2}SO\textsubscript{4}), which are used in the process, as well as hydrogen sulphide (H\textsubscript{2}S), which is created as a by-product, are linked to severe negative impacts on people exposed to them. These include eye damage, function impairment, neurobehavioural changes,\textsuperscript{27} skin burns and shortness of breath. Evidence suggests that occupational exposure to sulphuric acid mists, in combination with other acid mists, can be carcinogenic.\textsuperscript{28}

Without proper chemical management and treatment, these toxic chemicals find their way into the air and waterways surrounding viscose factories, affecting the delicate natural balance of ecosystems and water bodies. Pollutants characteristically found in wastewater from viscose production are sulphuric acid, sulphates, sulphur and sulphides. There can also be some metals present, namely zinc salts. Inadequately treated wastewater can also contain a lot of organic material, which can lead to high levels of chemical oxygen demand (COD); this means that less dissolved oxygen is available for aquatic organisms, such as fish, resulting in their death.\textsuperscript{29}

3. Collaboration for Sustainable Development of Viscose (CV)

In March 2018, the Chinese viscose sector launched its own industry-led initiative for the development of a sustainable viscose-manufacturing industry in China – the so-called Collaboration for Sustainable Development of Viscose (CV).
CV brings together China's ten largest viscose producers, which collectively account for about 60% of the world's VSF production.\textsuperscript{29} The initiative also includes two trade associations – the China Chemical Fibre Association and the China Cotton Textile Association – and lists Austrian viscose producer Lenzing as a member.\textsuperscript{29}

In August 2018, the initiative launched its CV Roadmap, which includes ten best-practice standards that cover the full viscose supply chain – from raw-material sourcing to responsible production and product safety. This means its members are expected to adopt a number of certification schemes and standards, including certification of all viscose cellulosic raw materials by either the PEFC or the Forest Stewardship Council (FSC); alignment with Zero Discharge of Hazardous Chemicals (ZDHC) wastewater guidelines; certification of facilities under OEKO-TEX Step; and completion of the Higg Facility Environmental Module (FEM) 3.0 using self-assessment. CV members are expected to achieve preselected standards within three years, by meeting the CV Roadmap's basic requirements by June 2019 and its advanced requirements by the end of 2020. This is what the CV identifies as a system of continuous improvement. Although the initiative identifies the CV Roadmap as a 'living document', which will be subjected to periodic reviews and updated as needed, it is not clear whether there are plans to consistently scale up its ambition in line with a policy of continuous improvement.

The CV initiative was launched in March 2018 and brings together ten leading Chinese viscose producers (Source: cvroadmap.com)

The Roadmap towards responsible viscose & modal fibre manufacturing was published by Changing Markets in February 2018

Best available techniques (BAT) for the production of polymers

The EU's BAT Reference Document (BREF) on Polymers was published in 2007 and defines the most effective techniques for achieving environmentally responsible production of synthetics and cellulose-based fibres, including viscose.\textsuperscript{24} Conclusions on BAT are used as the main reference when issuing operating permits and licences in the EU, which are granted by authorities in Member States.\textsuperscript{25}

The Polymers BREF was drafted under the auspices of the European Commission, and is based on an exchange of information between EU Member States, the EU viscose industry and NGOs carried out between 2003 and 2005. It is based on operating data supplied by EU industry players, meaning it reflects what the best performers in the industry were already achieving over a decade ago.

The world's two biggest manufacturers, Aditya Birla Group and Lenzing, are currently developing plans to bring all their manufacturing sites in line with EU BAT. Lenzing already has two sites performing in line with EU BAT (Lenzing in Austria and Nanjing in China), and has established a global standard based on EU BAT for all its factories.\textsuperscript{26}

In the Changing Markets Foundation's *Roadmap towards responsible viscose and modal fibre manufacturing*,\textsuperscript{27} we identified EU BAT on viscose (as described in the Polymers BREF) as the most comprehensive and ambitious standard: it sets limits on chemicals usually discharged from the viscose-manufacturing process, and addresses both air and water pollution during VSF production.
Eight major brands and retailers – ASOS, C&A, Esprit, H&M, Inditex, M&S, Next and Tesco – have already publicly pledged to integrate Changing Markets' Roadmap into their sustainability policies. With this commitment, clothing brands and retailers are sending a clear message to viscose manufacturers that they expect the industry to move towards more responsible viscose production by 2023–2025.

View from the ground: pollution at manufacturing sites operated by CV members

In spring 2017, the Changing Markets Foundation worked with local NGOs and investigators to carry out on-the-ground investigations at viscose-manufacturing sites in China. The team visited seven viscose-production sites, including facilities operated by the following CV members: **Tangshan Sanyou** (Tangshan Sanyou Group Xingda Chemical Fibre Co. Ltd and Tangshan Sanyou Group Yuanda Chemical Fibre Co. Ltd, both situated in Hebei province), **Sateri** (Jiangxi Chemical Fibre Co. Ltd and Jiujiang Fibre Co. Ltd), and **Shandong Silverhawk Chemical Fibre** and **CHTC Helon** (both situated in Shandong province). The findings of the investigation were published in Changing Markets' *Dirty Fashion* report.\(^{34}\)

At all sites, including factories belonging to the aforementioned four CV members, we found clear evidence of viscose producers dumping untreated wastewater, contaminating local lakes and waterways, or discharging air pollutants that exceeded national and local environmental standards. Air pollution characterised by an intense smell of rotten eggs was observed around all four CV member sites. The investigators found levels of hydrogen sulphide exceeding the permitted limits at Sateri's Jiangxi site, levels of carbon disulphide exceeding permitted limits in the residential areas around the Tangshan Sanyou and CHTC Helon sites, and levels of both chemicals in breach of regulations at Shandong Silverhawk Chemical Fibre.

We also found evidence of severe water pollution at all four sites. Sateri's Jiangxi factory was found discharging effluent into Lake Poyang – China's largest freshwater lake, home to several critically endangered species (including the finless porpoise) and critical habitat for half a million migratory birds each year. Pollution from viscose manufacturing there has played a role in turning the water black, killing fish and shrimps, and stunting crop growth. The COD level of residential drinking water was found to be above the regulatory limit around the sites operated by Sateri and Tangshan Sanyou. Villagers around the Tangshan Sanyou factories complained that water pollution had impacted fisheries, with dead fish regularly found near wastewater outfalls. Local people living around the CHTC Helon and Silverhawk factories had stopped drinking well water because they feared it would make them ill, and even avoided using it for irrigation because it could kill their plants. According to some locals, in the past few years an increasing number of people living near the Shandong Helon factory had died of cancer; they reported that cases of lung cancer, gastric cancer and oesophageal cancer were common.\(^{35}\)

Since our investigation, the Chinese government and media have recorded multiple violations of national and local regulations and pollution incidents at sites operated by CV members. In 2018, Sateri's newest plant, Sateri (Fujian) Fibre Co. Ltd, was issued with several violation notices.
On separate occasions, the site was found to be improperly managing hazardous waste and sewage treatment,\(^{36}\) and the company reported several instances of excessive dust emissions and abnormal nitrogen oxide emissions.\(^{37}\) In July 2017, Sateri Jiujiang and Shandong Yamei Technology were fined over US$100,000 (¥724,797)\(^{38}\) and over US$300,000 (¥2,465,208)\(^{39}\) respectively for discharging wastewater that exceeded national emissions standards. In October 2017, the Jilin Environmental Protection Bureau issued Jilin Chemical Fiber Group with a penalty for improperly stacking coal, which was leading to dust pollution.\(^{40}\) In the same month, Xinxiang Chemical Fiber was fined for operating its boilers despite the Xinxiang City Government calling for their suspension due to an orange alert signalling heavy pollution.\(^{41}\) In November 2017, *The Paper* reported that Xinxiang Chemical Fiber was still operating despite the heavy-pollution warning.\(^{42}\) In the same month, residents in Weifang complained about a pungent smell, which an investigation by the local Environmental Bureau traced to CHTC Helon.\(^{43}\)

Pollution from Chinese viscose factories found during the investigation in spring 2017

4. Chinese companies' approach to addressing the environmental impact of viscose manufacturing

With regard to wood-pulp sourcing, the CV Roadmap stipulates that all its members use viscose cellulosic raw materials certified by either the PEFC or the FSC. The CV Roadmap does not require or recommend an additional independent audit, such as the CanopyStyle Audit, to ensure that wood is not sourced from ancient and endangered forests.

On the fibre-manufacturing side, the CV uses the Chinese Clean Production Standard to address the impacts of VSF production. According to communication with the CV secretariat, the Clean Production Standard was updated in summer 2018 and is based on the standard formulated in 2014 under the leadership of the China Chemical Fibres Association (Assessment Indicator System of Production for the Viscose Industry (HX/T 52005-2014)). However, our researchers could not find the updated version online. The Clean Production Standard is not mandatory, but it was drafted and recommended by the Chinese government. Assuming that the 2018 version uses the same framework as the 2014 version, it defines three levels, with Level I being the most ambitious:

- **Level I** for an 'internationally advanced' level of cleaner production;
- **Level II** for a 'domestic advanced' level of cleaner production; and
- **Level III** for a 'domestic basic' level of cleaner production.

---

The CV Roadmap was launched in August 2018 and includes a range of standards on wood-pulp sourcing and manufacturing (source: cvroadmap.com)

5. Shortcomings of the CV Roadmap

Our analysis of the CV Roadmap identifies a number of pitfalls that the CV needs to address to ensure this initiative drives real transformation.

5.1 Lack of transparency and clarity

There is very limited public information available about the specifics of the CV Roadmap, including what the different certification schemes and selected standards entail, how it will be enforced, monitored and verified, and whether that process will be independent and transparent. There is also an absence of information about whether any sanctions will be taken against members who do not comply with its requirements.
The *Three-year action plan for green development of the regenerated cellulose fibre industry* report on the official CV website is only available in Mandarin,\(^{40}\) making it difficult for the global marketplace to understand how the selection of standards is meant to support the transition to responsible viscose production in practical terms. Our researchers could not find the updated Clean Production Standard that the CV uses in its Roadmap on either the CV website or any government platform, which calls into question the transparency of the initiative.

Moreover, the units of measurement used for the pollution parameters identified by the Clean Production Standard are in most cases not comparable to the units used by internationally recognised standards and best practices, such as the EU Ecolabel and EU BAT. This makes it almost impossible for third parties to assess the level of ambition behind the CV Roadmap, and how its requirements compare to what the best-performing producers in the viscose industry are already achieving. This lack of transparency and clarity makes it close to impossible for international stakeholders to meaningfully scrutinise the Chinese viscose industry, and enables CV members to create an illusion of progress while, in reality, failing to take steps to transition to more responsible production methods.

5.2 Weak ambition and lack of measures to drive continuous improvement

The CV initiative is meant to provide a platform for Chinese viscose producers 'to achieve sustainable viscose and help their customers deliver on their sustainability commitments'.\(^{41}\) However, there are several problems with this – highlighted below – including the fact that the initiative does not oblige its members to achieve the level of production intended for companies selling to the international market (Level I, i.e. the most ambitious level).

5.2.1 Responsible forestry requirements

CV members have the option of demonstrating responsible harvesting and respectful forestry practices through PEFC certification. PEFC and its globally associated certifications, such as the Sustainable Forestry Initiative, have been criticised or found inadequate by a number of NGOs (including the World Wildlife Fund (WWF),\textsuperscript{45} Sierra Club,\textsuperscript{46} Canopy\textsuperscript{47} and Greenpeace) for lacking credibility and failing to ensure responsible forest management. In March 2018, Greenpeace International also withdrew its membership of the FSC, stating: 'we no longer have confidence that FSC alone can consistently guarantee enough protection, especially when forests are facing multiple threats.'\textsuperscript{48} This indicates that relying only on FSC certification (or, even worse, on the PEFC) is no longer a sufficient guarantee of sustainable sourcing, and that further measures are needed.

A more appropriate and comprehensive approach to verifying performance at this stage of the supply chain would be to implement the requirements of the CanopyStyle Guide's tool, *Making the cut: Sustainable cellulosic fibre staircases*, which sets out expectations of rayon and viscose producers.\textsuperscript{49} The tool provides six levels of ambition, from 'High risk' to 'Gold', and encourages suppliers to continuously 'move up the staircase'. Sourcing fibres from FSC-certified forests is only one of the requirements with which companies need to comply to achieve Canopy's 'Silver' level.
The foundational requirement is completion of CanopyStyle Audits to verify that no sourcing from ancient and endangered forests or controversial sources is taking place. In other words, the CanopyStyle Audits confirm whether viscose fibres are coming from the right or wrong places globally, and FSC certification then layers on top to confirm sustainable forest practices regionally. In 2017–2018, Aditya Birla Group and Lenzing completed the CanopyStyle Audit, along with ENKA and three Chinese producers: Tangshan Sanyou, Sateri\textsuperscript{50} and Zhejiang Fulida.\textsuperscript{51}

Beyond simply mitigating risk, leading viscose producers are expected to:

- support research and development of alternative fibres, such as recycled fabrics or agricultural residues, and work towards sourcing fibre made from these lower-impact, non-wood alternatives;
- demonstrate a business strategy and investments for making these alternative fibres commercial-scale and cost-competitive;
- meet CanopyStyle Audit expectations for other products and businesses in which they use wood products; and
- support lasting, legislated protection in critical areas of ancient and endangered forests.

If the CV persists in following its lowest-common-denominator approach by relying on PEFC and/or FSC certification only, there is a real risk that CV members will find themselves complicit in the destruction of ancient and endangered forests and excluded from sourcing by retailers, brands and designers that do not want endangered orangutan or bear habitats traced to their stores.

5.2.2 Responsible production requirements

At the next stage of production, i.e. the processing of wood pulp into fibre, the Clean Production Standard defines three levels of ambition that aim to address environmental impacts from production. However, CV members are not required to reach the highest level (Level I), which is referred to as the 'internationally advanced' level and, according to our analysis, comes the closest to EU BAT.

According to communication between Changing Markets and the CV secretariat, CV members have generally met the requirements of Level III, the 'domestic basic' level of production. The CV Roadmap instructs every member company to meet the 'domestic advanced' level of cleaner production (Level II) by 2020, but does not compel them to go beyond this to reach EU BAT, or even Level I (the 'internationally advanced' level of cleaner production).

Our analysis of information supplied by the CV secretariat shows that the limits on emissions of sulphur to air are weak and not in line with EU BAT. For example, the BREF document shows operational data from a European plant that, in 2007, had already achieved 96–98% recovery of carbon disulphide and elementary sulphur. However, the CV Roadmap only requires CV members to achieve a minimum of 89% sulphur recovery by 2020. Limits set on zinc to water by the CV Roadmap are also weak and fall short of EU BAT values. Moreover, the CV Roadmap does not require members to track COD in water in viscose-fibre production, which is a parameter included in EU BAT.
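Because the CPS expresses its sulphur limit as a recovery rate and its zinc limit as a concentration, while EU BAT uses mass per tonne of fibre, the two standards can only be compared after a unit conversion. A minimal sketch of that conversion follows. The total sulphur input (~236 kg per tonne of fibre) and the wastewater volumes per tonne are hypothetical values back-calculated from the CV secretariat's bracketed figures in Table 1 below, not published constants.

```python
# Assumed total sulphur input per tonne of fibre, back-calculated so that a 92%
# recovery rate reproduces the CV secretariat's figure of ~18.9 kg/t emitted.
SULPHUR_INPUT_KG_PER_T = 236.0

def sulphur_to_air(recovery_rate: float) -> float:
    """kg of sulphur released to air per tonne of fibre at a given recovery rate."""
    return SULPHUR_INPUT_KG_PER_T * (1 - recovery_rate)

print(f"CPS Level II (>89% recovery): {sulphur_to_air(0.89):.1f} kg/t")  # ~26 kg/t, above EU BAT's 12-20 kg/t
print(f"CPS Level I  (>92% recovery): {sulphur_to_air(0.92):.1f} kg/t")  # ~18.9 kg/t, within EU BAT's range

def zinc_per_tonne(limit_mg_per_l: float, wastewater_l_per_t: float) -> float:
    """g of zinc discharged per tonne of fibre for a given concentration limit."""
    return limit_mg_per_l * wastewater_l_per_t / 1000.0

# Hypothetical wastewater volumes (litres per tonne) chosen to reproduce the bracketed figures.
print(f"CPS Level II (5 mg/l): {zinc_per_tonne(5, 55_000):.0f} g/t")  # ~275 g/t, vs EU BAT 10-50 g/t
print(f"CPS Level I  (2 mg/l): {zinc_per_tonne(2, 45_000):.0f} g/t")  # ~90 g/t
```

Converted this way, even the CPS Level I zinc limit sits far above EU BAT's 10–50 g/t range, which is why Table 1 flags both levels as not in line with EU BAT.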
Table 1: EU BAT values compared with the Chinese Clean Production Standard (CPS), based on information supplied by the CV secretariat

| Pollution parameter | EU BAT | CPS Level II | CPS Level I |
|---------------------|--------|--------------|-------------|
| **Waste gas** | | | |
| Sulphur (S) to air | 12–20 kg/t | Sulphur recovery rate >89% (>25 kg/t)* – not in line with EU BAT | Sulphur recovery rate >92% (≈18.9 kg/t)* – in line with EU BAT |
| **Water** | | | |
| Sulphate (SO₄²⁻) to water | 200–300 kg/t | Recovered calcium sulphate ≥400 (≈434)* – not directly comparable to EU BAT | Recovered calcium sulphate ≥500 (≈330)* – not directly comparable to EU BAT |
| COD | 3,000–5,000 g/t | Not measured by the CPS | Not measured by the CPS |
| Zinc (Zn) to water | 0.01–0.05 g/kg (10–50 g/t)* | 5 mg/l (≈275 g/t)* – not directly comparable; suggests the CPS limit is not in line with EU BAT | 2 mg/l (≈90 g/t)* – not directly comparable; suggests the CPS limit is not in line with EU BAT |

*Values in brackets were provided by the CV secretariat.

In an exchange with Changing Markets, the CV secretariat stated that there are no limits for COD because water-treatment processes differ among CV Roadmap members: while some have their own wastewater-treatment plant, others use a centralised plant. Even if this were the case, investigations by the Chinese NGO IPE have shown that many centralised industrial wastewater-treatment facilities in China turn out to be 'centralized sources of pollution' because they fail to meet legal discharge standards. Statistics from IPE's China Water Pollution Map show that, between 2008 and 2013, wastewater-treatment facilities around the country had an average of 1.4 violation records per facility. This shows that additional requirements are needed for CV members – which are committing to becoming more responsible viscose producers – to verify that their COD levels do indeed comply with the highest standards.

In contrast, Austrian producer Lenzing already has two sites performing in line with EU BAT (Lenzing in Austria and Nanjing in China), and has set up a global standard based on EU BAT for all its factories. The company has measured the relevant pollutant values (sulphur to air, sulphate to water, zinc to water and COD), and confirms that these are in line with EU BAT. India's Aditya Birla Group is also in the process of developing a plan to achieve EU BAT at its sites. Moreover, any new viscose producer operating on the European market will need to comply with EU BAT levels to obtain the operating permits and licences granted by EU Member State authorities.

Given that the Chinese producers that are members of the CV initiative operate on the international market, the lack of any requirement to produce in line with Level I (the 'internationally advanced' level of cleaner production) seems like a major failing of the CV Roadmap. In addition, given that many brands have pledged to source from suppliers committed to EU BAT, we recommend that the CV Roadmap adopt an approach in line with this.

5.3 Failure to adopt a holistic approach

The CV initiative seeks to address environmental impacts throughout the viscose supply chain.
However, it sets out to do this by piling together a variety of certification schemes, standards, industry initiatives (e.g. ZDHC) and self-assessment tools (such as the Higg Index). Our analysis shows that many of these are incomplete, and/or only certify a small part of the supply chain or simply the quality of the end product, and often lack sufficiently strict criteria. It is highly concerning that, in many cases, the CV initiative has not selected the most ambitious standards available and is allowing its members to pick and choose which standards they wish to use (e.g. PEFC or FSC, even though these do not achieve the same level of ambition). In addition, our previous analysis showed that OEKO-TEX does not cover parameters specific to the viscose-manufacturing process, while ZDHC is only now working on its standard for the production of viscose, the ambition of which remains to be seen. As things currently stand, using any one of these schemes as proof of responsible manufacturing would convey the false impression that viscose production is 'clean', without accounting for the full range of relevant pollution parameters or every stage of the viscose-production chain at which environmental impacts occur.

In addition, it is of concern that CV members are only required to reach Level II of the CPS, which is not in line with what other companies producing for the international market are achieving, or committing to achieve, within a similar timeframe. As a rule, any industry initiative that aims to improve environmental performance must go beyond national regulatory requirements and should only accept the best industry players, ensuring the level of ambition remains high and reflects the top-performing percentile of companies in that industry. The CV initiative should also put in place criteria on how its members are expected to report on progress – and what happens if they fail to comply with the requirements. Based on our analysis, the CV Roadmap currently falls far short of these guiding principles, and its members cannot therefore be considered to be producing viscose responsibly.

Our analysis shows that the CV Roadmap, in its current form, constitutes a weak attempt to clean up the Chinese viscose industry and will not lead to transformation of the sector in line with international standards of responsible production. This is especially concerning at a time when other big players on the market are already achieving higher standards, or have committed to achieving them in the near future. While Chinese companies collectively occupy the largest share of the viscose-fibre market, as global suppliers they also have many major European and North American brands as their key customers. With many of these brands adopting a more robust approach to responsible sourcing and manufacturing of viscose, Chinese manufacturers run the risk of losing out to their competitors in other parts of the world, which are coming forward with more ambitious plans to improve their operations. Our analysis shows that brands and retailers should not consider membership of the CV initiative and commitment to the CV Roadmap as proof of good environmental performance and responsible production methods – unless the initiative undergoes significant reform, in line with the recommendations outlined below.

**Recommendation 1: Higher ambition**

For the CV Roadmap to drive meaningful transformation, it needs to oblige its members to move towards the most ambitious sourcing and production standards.
While the Roadmap's current requirements do define milestones to drive improvements in the Chinese viscose industry, the level of ambition should be raised. Requirements for wood-pulp sourcing should, at a minimum, include completing the CanopyStyle Audit, along with the other provisions of the CanopyStyle Guide's tool. Without this, there is a real risk that CV members will find themselves complicit in the destruction of ancient and endangered forests. For viscose manufacturing, the CV Roadmap should stipulate that members go beyond Level II of the CPS, requiring them to implement EU BAT by 2023–2025, as set out in the Changing Markets *Roadmap*.

**Recommendation 2: Clarity and transparency**

The CV initiative must provide more clarity about how the standards and initiatives included in the CV Roadmap will contribute to cleaner viscose production. Specifically, the CV must lay down emissions limits for relevant pollution parameters (see Table 1) in a form that is comparable to, and aligns with, internationally recognised standards – specifically, EU BAT on Polymers. Only this will ensure meaningful scrutiny of the Chinese viscose industry by international stakeholders. Moreover, the initiative must disclose how the CV Roadmap will be enforced and progress measured and verified. Monitoring and verification should be independent and regular, and the progress of CV members transparently disclosed on the CV website.

**Recommendation 3: Incentive for continuous improvement**

The CV initiative should create an incentive for Chinese viscose producers to improve over time, and as technology progresses, by consistently ramping up the ambition of the CV Roadmap in line with a policy of continuous improvement. The initiative should also commit to addressing non-compliance by defining sanctions and exclusion criteria for cases where members consistently fail to meet its requirements.

**Recommendation for ZDHC: Guidelines for measuring and recording performance in the viscose-manufacturing process**

This report points to an urgent need for guidelines for consistent measuring and recording of pollution parameters that could be applied across viscose producers in different countries. Such guidelines are required to make producers' performances internationally comparable and subject to meaningful scrutiny. ZDHC could develop these under its upcoming framework of guidelines for wastewater, sludge, waste and air emissions specific to man-made cellulosic-fibre production. To ensure that they drive a shift to responsible viscose production, ZDHC should set the bar high from the outset by aligning its standards with the best available technology – meaning, at a minimum, EU BAT values.

20 Gebbie et al. (2009) A review of health effects of carbon disulphide in the viscose industry and a proposal for an occupational exposure limit. *Critical Reviews in Toxicology*, 39(Suppl 2): 21–26; Tan et al. (2001) Carbon disulphide exposure assessment in a Chinese viscose filament plant. *International Journal of Hygiene and Environmental Health*, 203(5–6): 465–471.

21 Corn, M. (ed.) (1993) *Handbook of hazardous materials*. San Diego, CA: Academic Press.

22 WHO Regional Office for Europe (2000) Chapter 5.4: Carbon disulphide. In: *Air quality guidelines*. Second edition. [ONLINE] Available at: http://www.euro.who.int/__data/assets/pdf_file/0019/123058/AQG2ndEd_5_4carbodisulfide.PDF.
23 United States Environmental Protection Agency (2000) *Ambient aquatic life water quality criteria for dissolved oxygen (saltwater): Cape Cod to Cape Hatteras*. [ONLINE] Available at: https://tinyurl.com/y8f8rguq.

24 European Commission (2007) Reference document on best available techniques in the production of polymers. [ONLINE] Available at: http://ecspdc.jrc.ec.europa.eu/reference/BREF/pal_bref_0807.pdf.

25 European Commission (2018) The industrial emissions directive. [ONLINE] Available at: http://ec.europa.eu/environment/industry/stationary/waste/legislation.htm.

26 Changing Markets Foundation (2018) *Dirty fashion: On track for transformation*. [ONLINE] Available at: http://changingmarkets.org/wp-content/uploads/2018/08/Dirty_Fashion_on_track_for_transformation.pdf.

27 Changing Markets Foundation (2018) *Roadmap towards responsible viscose and modal fibre manufacturing*. [ONLINE] Available at: http://changingmarkets.org/wp-content/uploads/2018/02/Roadmap_towards_responsible_viscose_and_modal_fibre_manufacturing.2018.pdf.

28 Sateri (2017) *Sateri's sustainability report 2017*. [ONLINE] Available at: http://www.sateri.com/wp-content/uploads/2018/10/Sateri-Sustainability-Report2017English.pdf.

29 Collaboration for Sustainable Development of Viscose (2018) Profile. [ONLINE] Available at: http://www.cvroadmap.com/abouten.html.

30 Changing Markets Foundation (2017) *Dirty fashion: How pollution in the global textiles supply chain is making viscose toxic*. [ONLINE] Available at: http://changingmarkets.org/wp-content/uploads/2017/06/CHANGING_MARKETS_DIRTY_FASHION_REPORT_SPREAD_WEB.pdf.

31 Changing Markets Foundation (2017) *Dirty fashion: How pollution in the global textiles supply chain is making viscose toxic*. [ONLINE] Available at: http://changingmarkets.org/wp-content/uploads/2017/06/CHANGING_MARKETS_DIRTY_FASHION_REPORT_SPREAD_WEB.pdf.

32 Putian City Environmental Protection Bureau (2017) Putian City Environmental Protection Bureau ordered the correction of the illegal decision letter. [ONLINE] Available at: http://hjsj.putian.gov.cn/xgk/wry-hjgxgk/zxzf/201711/t20171128_745916.htm.

33 IPE (2017) Sateri (Fujian) Fibre Co., Ltd. enterprise feedback. [ONLINE] Available at: http://www.ipe.org.cn/IndustryRecord/regulatory-record.aspx?companyId=0&companyId=1704&T=dataType=1&isdefault=all&isHy=0.

34 Hukou County Environmental Protection Bureau (2018) Environmental administrative punishment case information disclosure form. [ONLINE] Available at: http://www.hukou.gov.cn/xgk/33035/zxfbnxxgq/hjhj/jrjc/201806/t20180622_90781.html.

35 Boxing County Environmental Protection Bureau (2017) Public information disclosure on administrative punishments. [ONLINE] Available at: http://126.96.36.199.809/n16/n1n354/n355/2018070503277fe312.html.

36 Jilin Environmental Protection Bureau (2017) Administrative penalty decision. [ONLINE] Available at: http://www.jiepb.gov.cn/hjxgk/hjcf/201712/t20171221_365796.html.

37 Xinxiang Environmental Protection Bureau (2017) Penalty notice [2017] No. 53: Xinxiang Chemical Fiber Co., Ltd. [ONLINE] Available at: http://www.xxb.gov.cn/news/102_12329.

38 The Paper (2017) Ministry of Environmental Protection: These 29 companies did not stop production during the heavy pollution weather emergency plan. [ONLINE] Available at: https://www.thepaper.cn/newsDetail_forward_1854884.
39 Phoenix New Media (2017) Henglian Hallong and Henglian pulp and paper factory area, 17 November. [ONLINE] Available at: http://news.ifeng.com/a/20171117/53343332_5.shtml.

40 Collaboration for Sustainable Development of Viscose (2018) Green development of regenerated cellulose fiber industry: Three-year action plan. [ONLINE] Available at: http://www.cvroadmap.com/reports/201809/43.html.

41 Collaboration for Sustainable Development of Viscose (2018) Profile. [ONLINE] Available at: http://www.cvroadmap.com/abouten.html.

42 WWF (2015) WWF forest certification assessment tool (CAT). [ONLINE] Available at: https://wwf.panda.org/724687/WWF-Forest-Certification-Assessment-Tool-CAT.

43 Stand.Earth (n.d.) *Environmental leaders critique SFI*. [ONLINE] Available at: https://www.stand.earth/page/environmental-leaders-critique-sfi.

44 Greenpeace (2018) Greenpeace International to not renew FSC membership. [Press release] 26 March. [ONLINE] Available at: https://www.greenpeace.org/international/press-release/15358/greenpeace-international-to-not-renew-fsc-membership/.

45 Canopy (2017) The hot button issue: CanopyStyle update on viscose producers and forests. [ONLINE] Available at: https://canopyplanet.org/resources/hotbutton2017/.

46 Canopy (2017) *The hot button issue: Detailed matrix of viscose producer performance – 2017 update*. [ONLINE] Available at: https://canopyplanet.org/wp-content/uploads/2017/11/Canopy-Style-Update-Matrix-EN.pdf; Canopy (2018) The CanopyStyle audit. [ONLINE] Available at: https://canopyplanet.org/resources/canopystyle-audit/.

47 Fulida (2018) Fulida completes independent verification audit by Rainforest Alliance to exclude wood pulp sourcing risk. [ONLINE] Available at: http://www.fulla.com/en/index.php/news/show/4/242.

48 IPE (2014) No excuses: Taking full responsibility for pollution from manufacturing. [ONLINE] Available at: http://www.ipe.org.cn/upload/IPE_Reports/Report-Textiles-Phase-IV-EN.pdf.

49 Changing Markets Foundation (2018) *The false promise of certification*. [ONLINE] Available at: https://changingmarkets.org/wp-content/uploads/2018/05/False-promise_full-report-ENG.pdf.

50 Changing Markets Foundation (2018) *Dirty fashion: On track for transformation*. [ONLINE] Available at: http://changingmarkets.org/wp-content/uploads/2018/08/Dirty_Fashion_on_track_for_transformation.pdf.

Changing Markets Foundation
UNIVERSITÉ DE SHERBROOKE

CELLULAR IMPACT OF THE EXPRESSION OF A CONSTITUTIVELY ACTIVE MUTANT OF THE ANGIOTENSIN II AT₁ RECEPTOR

by MANNIX AUGER-MESSIER, Département de Pharmacologie

Thesis presented to the Faculté de Médecine for the degree of Philosophiæ Doctor (Ph.D.), May 2005

If everyday life is necessarily made up of constraints, the realm of the mind demands a radical emancipation. — From *Forbidden Knowledge: From Prometheus to Pornography* by Roger Shattuck

You are never given a wish without also being given the power to make it come true. You may have to work for it, however. — From *Illusions: The Adventures of a Reluctant Messiah* by Richard Bach

To Nancy and Jacob!

# TABLE OF CONTENTS

**LIST OF ARTICLES PRESENTED IN THIS THESIS**
**OTHER ARTICLES**
**LIST OF FIGURES**
**LIST OF TABLES**
**LIST OF ABBREVIATIONS**
**ABSTRACT**

## INTRODUCTION

1. **THE TYPE 1 ANGIOTENSIN II RECEPTOR (AT$_1$)**
1.1. The renin-angiotensin system and its physiological/pathological roles
1.2. Expression and structure of the AT$_1$ receptor
1.3. Signalling pathways triggered by the AT$_1$ receptor

2. **CONSTITUTIVE ACTIVITY OF GPCRs**
2.1. Theoretical models of GPCR activation
2.2. Physiological roles of the constitutive activity of wild-type GPCRs
2.3. Pathologies associated with the constitutive activity of mutant GPCRs
2.4. The constitutively active mutant receptor N111G-AT$_1$

3. **RESEARCH PROBLEM AND AIM OF THE STUDY**

**RESULTS**

**ARTICLE 1 – FOREWORD**
The constitutively active N111G-AT$_1$ receptor for angiotensin II maintains a high affinity conformation despite being uncoupled from its cognate G protein $G_{q/11}\alpha$

ARTICLE 2 – FOREWORD
Down-regulation of inositol 1,4,5-trisphosphate receptor in cells stably expressing the constitutively active angiotensin II N111G-AT$_1$ receptor

ARTICLE 3 – FOREWORD
The Constitutively Active N111G-AT$_1$ Receptor for Angiotensin II Modifies Cellular Morphology and Cytoskeletal Organization of HEK-293 Cells

DISCUSSION
CONCLUSION
PERSPECTIVES
ACKNOWLEDGEMENTS
BIBLIOGRAPHY

1) Auger-Messier, M., Clement, M., Lanctot, P.M., Leclerc, P.C., Leduc, R., Escher, E., Guillemette, G. (2003) The constitutively active N111G-AT1 receptor for angiotensin II maintains a high affinity conformation despite being uncoupled from its cognate G protein Gq/11alpha. *Endocrinology* 144(12): 5277-5284

2) Auger-Messier, M., Arguin, G., Chaloux, B., Leduc, R., Escher, E., Guillemette, G. (2004) Down-Regulation of Inositol 1,4,5-Trisphosphate Receptor in Cells Stably Expressing the Constitutively Active Angiotensin II N111G-AT$_1$ Receptor. *Mol. Endocrinol.* 18(12): 2967-2980

3) Auger-Messier, M., Turgeon, E.S., Leduc, R., Escher, E., Guillemette, G. The Constitutively Active N111G-AT$_1$ Receptor for Angiotensin II Modifies Cellular Morphology and Cytoskeletal Organization of HEK-293 Cells. (Manuscript accepted in *Exp. Cell Res.*)

OTHER ARTICLES

4) Lanctot, P.M., Leclerc, P.C., Clement, M., **Auger-Messier, M.**, Escher, E., Leduc, R., Guillemette, G. (2005) Importance of N-glycosylation positioning for cell-surface expression, targeting, affinity and quality control of the hAT1 receptor. *Biochem. J.* May 4; doi:10.1042/BJ20050189

5) Leclerc, P.C., **Auger-Messier, M.**, Lanctot, P.M., Escher, E., Leduc, R., Guillemette, G. (2002) A polyaromatic caveolin-binding-like motif in the cytoplasmic tail of the type 1 receptor for angiotensin II plays an important role in receptor trafficking and signaling. *Endocrinology* 143(12): 4702-4710

6) Perodin, J., Deraet, M., **Auger-Messier, M.**, Boucard, A.A., Rihakova, L., Beaulieu, M.E., Lavigne, P., Parent, J.L., Guillemette, G., Leduc, R., Escher, E. (2002) Residues 293 and 294 are ligand contact points of the human angiotensin type 1 receptor. *Biochemistry* 41(48): 14348-14356

7) Rihakova, L., Deraet, M., **Auger-Messier, M.**, Perodin, J., Boucard, A.A., Guillemette, G., Leduc, R., Lavigne, P., Escher, E. (2002) Methionine proximity assay, a novel method for exploring peptide ligand-receptor interaction. *J. Recept. Signal Transduct. Res.* 22(1-4): 297-313

8) Gosselin, M.J., Leclerc, P.C., **Auger-Messier, M.**, Guillemette, G., Escher, E., Leduc, R. (2000) Molecular cloning of a ferret angiotensin II AT(1) receptor reveals the importance of position 163 for Losartan binding. *Biochim. Biophys. Acta* 1497(1): 94-102

LIST OF FIGURES

Figure 1. Metabolism of angiotensin II within the renin-angiotensin system
Figure 2. Schematic representation of the AT₁ receptor
Figure 3. Coupling of the AT₁ receptor to heterotrimeric G proteins
Figure 4. Cubic ternary complex model of GPCR activation

Article 1:
Figure 1. Photoaffinity labeling of wild-type and mutant AT₁ receptors
Figure 2.
Figure 2. Functional properties of the wild-type and mutant AT₁ receptors: IP production .......... 54
Figure 3. Functional properties of the wild-type and mutant AT₁ receptors: Ca²⁺ mobilization .......... 56
Figure 4. Binding properties in the presence of an uncoupling agent .......... 57
Figure 5. Coimmunoprecipitation of Gq/11α with the different AT₁ receptors .......... 60

Article 2:
Figure 1. Spontaneous and Ang II-induced Ca²⁺ oscillations in single cells .......... 81
Figure 2. Ang II-induced Ca²⁺ release and Ca²⁺ entry .......... 83
Figure 3. Dose-response curves for Ang II-induced Ca²⁺ release and Ca²⁺ entry .......... 85
Figure 4. Capacitative Ca²⁺ entry under basal conditions .......... 87
Figure 5. Integrity of the intracellular Ca²⁺ stores .......... 89
Figure 6. EXP3174 rescues the Ca²⁺ release and Ca²⁺ entry activities of N111G cells .......... 91
Figure 7. IP₃-induced Ca²⁺ release activity in permeabilized cells .......... 93
Figure 8. [³H]IP₃ binding to permeabilized cells .......... 96
Figure 9. Immunoblot analysis of IP₃RIII .......... 97
Figure 10. Ca²⁺ responses and IP₃RIII expression in heterogeneous populations of G-418-resistant transfected cells .......... 99
Figure 11. Mechanism of IP₃RIII down-regulation in N111G cells .......... 101

Article 3:
Figure 1. Proliferation rate of clonal cell lines .......... 132
Figure 2. Morphological and cytoskeletal reorganization of N111G cells .......... 133
Figure 3. EXP3174 prevents phenotypic changes in N111G cells .......... 136
Figure 4. Phenotypic modifications of WT cells following Ang II stimulation .......... 139
Figure 5. Impact of a Rho-kinase inhibitor on cell-cell contact formation .......... 142

LIST OF TABLES

Article 1:
Table 1. Binding properties of wild-type and mutant AT₁ receptors .......... 51

Article 2:
Table 1. Binding and functional properties of receptors expressed in clonal cell lines .......... 80
LIST OF ABBREVIATIONS

| Abbreviation | Definition |
|--------------|------------|
| AC | Adenylyl cyclase |
| ACE | Angiotensin-converting enzyme |
| cAMP | Cyclic adenosine monophosphate |
| Ang I | Angiotensin I |
| Ang II | Angiotensin II |
| Ang III | Angiotensin III |
| Ang IV | Angiotensin IV |
| mRNA | Messenger ribonucleic acid |
| AT₁R | Type 1 angiotensin II receptor |
| AT₂R | Type 2 angiotensin II receptor |
| CHAPS | 3-[(3-cholamidopropyl)-dimethylammonio]propanesulfonate |
| DAG | Diacylglycerol |
| TMD | Transmembrane domain |
| ERK | Extracellular signal-regulated kinase |
| FAK | Focal adhesion kinase |
| FRAP | Fluorescence recovery after photobleaching |
| FRET | Fluorescence resonance energy transfer |
| GDP | Guanosine diphosphate |
| GIRK | G protein-linked inwardly rectifying K⁺ channel |
| GPCR | G protein-coupled receptor |
| GRK | G protein-coupled receptor kinase |
| GTP | Guanosine triphosphate |
| GTPγS | Guanosine-5'-O-(3-thiotriphosphate) |
| HEK-293 | Human Embryonic Kidney 293 cell line |
| IP₃ | Inositol 1,4,5-trisphosphate |
| IP₃R | IP₃ receptor channel |
| IP₃ sponge | IP₃-binding domain of the mouse IP₃RI |
| IRAP | Insulin-regulated membrane aminopeptidase |
| JAK | Janus-activated kinase |
| MAPK | Mitogen-activated protein kinase |
| mGluR | Metabotropic glutamate receptor |
| p130PH | PH domain of the phospholipase C-like protein p130 |
| PAR1 | Protease-activated receptor 1 |
| PKA | Protein kinase A |
| PKC | Protein kinase C |
| PIP₂ | Phosphatidylinositol 4,5-bisphosphate |
| PI3K | Phosphoinositide 3-kinase |
| PLA₂ | Phospholipase A₂ |
| PLC | Phospholipase C |
| PLD | Phospholipase D |
| PMCA | Plasma membrane calcium ATPase |
| PS | Pentosan sulfate |
| PYK2 | Proline-rich tyrosine kinase 2 |
| ER | Endoplasmic reticulum |
| RGS2 | Regulator of G protein signaling 2 |
| Sar | Sarcosine |
| SERCA | Sarcoplasmic and endoplasmic reticulum calcium ATPase |
| SOC | Store-operated channel |
| RAS | Renin-angiotensin system |
| STAT | Signal transducer and activator of transcription |
| TRP | Transient receptor potential |

SUMMARY

Cellular impact of the expression of a constitutively active mutant of the angiotensin II AT₁ receptor

By MANNIX AUGER-MESSIER
Université de Sherbrooke, Département de Pharmacologie
Thesis presented to the Faculté de Médecine for the degree of Philosophiae Doctor (Ph.D.)

The constitutively active mutant receptor N111G-AT₁ is able to stimulate the $G_{q/11}$ protein in the absence of angiotensin II (Ang II), the endogenous agonist of the AT₁ receptor. We showed that the N111G-AT₁ receptor, although it couples efficiently to the $G_{q/11}$ protein, maintains a high-affinity state for Ang II even in the presence of uncoupling agents. To determine whether this property of the N111G-AT₁ receptor stems from an intrinsic conformational change or rather from a more stable interaction with the $G_{q/11}$ protein, we showed by co-immunoprecipitation that the N111G-AT₁ receptor couples reversibly to the Gα$_{q/11}$ protein. We also showed that stable expression of the N111G-AT₁ receptor in HEK-293 cells (N111G cells) leads to the spontaneous generation of calcium oscillations in the absence of Ang II.
However, despite this constant calcium mobilization at rest, N111G cells display a heterologous desensitization of the Ca²⁺-release pathway. Using functional assays (inositol 1,4,5-trisphosphate (IP₃)-induced Ca²⁺ release in saponin-permeabilized cells) and biochemical assays ([³H]IP₃ binding and immunoblotting of the IP₃ receptors (IP₃Rs)), we showed that the level of IP₃Rs is strongly reduced in N111G cells. This desensitization arises through an increased degradation of IP₃Rs via the lysosomal pathway. Prolonged treatment (24-48 h) of N111G cells with EXP3174, an inverse agonist of the AT₁ receptor, reverses this desensitization of the calcium-release pathway. The constant signaling of the N111G-AT₁ receptor also translates into marked morphological changes of N111G cells when they form a confluent cell sheet. This distinctive phenotype of N111G cells involves a reorganization of the actin cytoskeleton and can be reproduced by stimulating HEK-293 cells expressing the AT₁ receptor with Ang II. We showed that the adoption of this phenotype depends on the activation of the Rho-kinase protein. Here again, EXP3174 proved effective in preventing and reversing the adoption of the N111G cell phenotype. Altogether, these results suggest that the constitutive activation of the signaling pathways downstream of the N111G-AT₁ receptor forces the cell to adapt by modifying its phenotype.

INTRODUCTION

The identification of the proteins forming the large family of G protein-coupled receptors (GPCRs) contributed to the spectacular progress of 20th-century medicine. Besides playing a key role in a myriad of physiological and pathological processes, GPCRs currently account for roughly 50% of the therapeutic targets exploited by the pharmaceutical industry (FLOWER, 1999). Commercial and fundamental interest in the study of this protein superfamily remains among the highest, since an estimated 2% of the human genome codes for GPCRs (LANDER et al., 2001; FREDRIKSSON et al., 2003). GPCRs are integral plasma membrane proteins displaying an architecture of seven hydrophobic transmembrane domains (TMDs) connected consecutively by hydrophilic sequences (intracellular and extracellular loops) of variable length. These proteins also carry N-terminal (extracellular) and C-terminal (intracellular) tails of various lengths. With more than 800 distinct members, the GPCR superfamily manages to decode a multitude of different extracellular signals, ranging from simple ions to large proteins (DOHLMAN et al., 1991; COUGHLIN, 1994; STRADER et al., 1994). By binding specifically to their GPCR, these ligands modify the structure of its TMDs and intracellular loops and thereby translate extracellular information into intracellular signals through the activation of various effectors (GILMAN, 1987; BIRNBAUMER et al., 1990; GUDERMANN et al., 1997; BOCKAERT and PIN, 1999). Among the latter, the classical activation of heterotrimeric G proteins is, in fact, the origin of the name GPCR.
The identification of the proteins involved in these classical GPCR signaling pathways is the fruit of major studies carried out over the past forty years. Research groups such as those of Sutherland and Rall, Rodbell and Birnbaumer, Ross and Gilman, and Clapham and Neer uncovered the remarkable complexity surrounding GPCR signaling through the Gα and Gβγ protein subunits (VAUGHAN, 1998). However, the full range of actions produced by GPCRs cannot be explained by the activation of heterotrimeric G proteins alone. Over the past decade, the capacity of GPCRs to activate other signaling pathways traditionally attributed to receptor tyrosine kinases (e.g., Jak/STAT, pp60$^{C-SRC}$, small G proteins of the Rho family), with a long-term impact on the cellular proteome, has been brought to light (FUKATA et al., 2001; LUTTRELL and LUTTRELL, 2004; PARSONS and PARSONS, 2004; THOMAS et al., 2004). A thorough understanding of the molecular activation mechanisms and downstream signaling pathways of GPCRs (known receptors as well as orphan GPCRs) therefore holds the promise of identifying new therapeutic strategies to improve human quality of life and longevity.

To this end, various homology models have been proposed to identify common features, if any exist, of the molecular activation mechanisms of GPCRs (BALDWIN, 1993; GETHER and KOBILKA, 1998; GETHER, 2000; KARNIK et al., 2003). Traditionally, sequence homologies between GPCRs have defined three receptor subfamilies (A, B and C) in mammals (ATTWOOD and FINDLAY, 1994; KOLAKOWSKI, 1994). Rhodopsin-like receptors constitute the largest subfamily, containing approximately 90% of GPCRs (subfamily A: characterized by a DRY sequence in the N-terminal portion of the second intracellular loop, a disulfide bridge connecting the first and second extracellular loops, and an NPxxY motif in the seventh TMD). GPCRs related to the calcitonin receptor (subfamily B: characterized by twenty well-conserved cysteines and the absence of the DRY sequence) and to the metabotropic receptors (subfamily C: characterized by the disulfide bridge connecting the first and second extracellular loops and the absence of the DRY sequence) make up the remaining 10% of GPCRs (GETHER and KOBILKA, 1998; GETHER, 2000). Note that a more recent nomenclature, the GRAFS classification system (for Glutamate, Rhodopsin, Adhesion, Frizzled/Taste2 and Secretin), was proposed following the complete sequencing of the human genome (FREDRIKSSON et al., 2003). Taken together, the characteristics that allow GPCRs to be classified into subfamilies suggest that evolutionary pressure maintains certain common determinants required for the proper activation of these receptors. Indeed, several studies tend to support the existence of such activation mechanisms shared among GPCRs (HAN et al., 1997; MIURA et al., 1999; SHEIKH et al., 1999; SCHULZ et al., 2000).
The validation of these activation mechanisms would benefit greatly from the elucidation of the three-dimensional structures of GPCRs in their inactive and active conformations. However, although the three-dimensional structures of G proteins are known (COLEMAN et al., 1994; MIXON et al., 1995; TESMER et al., 1997), the crystallography of membrane proteins such as GPCRs remains difficult to this day. The determination of the spatial arrangement of the seven TMDs of rhodopsin and the elucidation of its high-resolution crystal structure in an inactive conformation remain a unique example among mammalian GPCRs (UNGER et al., 1997; PALCZEWSKI et al., 2000). Fortunately, crystallography of 7-TMD receptors is not the only methodology that can yield important information on their structure. For example, random mutagenesis (SPALDING et al., 1998; PARNOT et al., 2000; BEUKERS et al., 2004), site-directed mutagenesis (COTECCHIA et al., 1990; KJELSBERG et al., 1992; MARIE et al., 1999), the introduction of metal-ion binding sites (SHEIKH et al., 1996; ALTENBACH et al., 1999), photoaffinity labeling (KENNEDY et al., 1996; BOUCARD et al., 2000), intra- or intermolecular cross-linking with bifunctional bridging agents (ITOH et al., 2001; KLEIN-SEETHARAMAN et al., 2001) and the mapping of binding pockets by the substituted-cysteine accessibility method (JAVITCH et al., 1994; BOUCARD et al., 2003) are among the approaches actively exploited to better define GPCR structure. In sum, molecular modeling of a GPCR by sequence homology with rhodopsin, combined with the full set of structure-function data available for that receptor, will refine our understanding of its pharmacological and functional properties. Elucidating the different conformational states (inactive and active) of a GPCR, as well as the dynamic movements that accompany its activation, represents a major challenge for the scientific community (BARTFAI et al., 2004). The accumulation of this knowledge nonetheless promises to lead to the development of even more effective and safer drugs, such as inverse agonists and superagonists, which favor the adoption of inactive or active conformational states, respectively, by the targeted GPCR (KENAKIN, 2003a).

1. THE ANGIOTENSIN II TYPE 1 RECEPTOR (AT₁)

Over the past 25 years, the demonstration of several fundamental properties of GPCRs (e.g., constitutive activity, dimerization, allosterism, coupling to different G proteins, influence of accessory proteins, and the enrichment of specific conformational states depending on the ligand used and the cellular proteome studied) has opened new therapeutic avenues (BRADY and LIMBIRD, 2002; KENAKIN, 2004a). In this regard, the study of the AT₁ receptor has contributed greatly to advancing our knowledge of GPCR properties in general.

1.1. The renin-angiotensin system and its physiological and pathological roles

Ang II (Asp-Arg-Val-Tyr-Ile-His-Pro-Phe) is an octapeptide hormone that maintains cardiovascular homeostasis by regulating blood pressure (blood volume and vascular resistance), water and electrolyte balance, the secretion of several hormones (e.g., aldosterone, vasopressin and adrenocorticotropin) and renal function (DE GASPARO et al., 2000).
These physiological actions of Ang II reflect the diversity of its target tissues, such as vascular smooth muscle, the brain, the pituitary, the sympathetic nervous system, the adrenal glands and the kidneys. The formation of Ang II begins with the synthesis of the precursor angiotensinogen (a globular protein of 452 amino acids) which, under the successive action of the proteolytic enzymes renin (the rate-limiting step of the process) and ACE (the angiotensin-converting enzyme, which also regulates bradykinin levels), is hydrolyzed into Ang II (Figure 1).

Figure 1. Metabolism of angiotensin II within the renin-angiotensin system. Angiotensinogen, synthesized mainly in the liver, is hydrolyzed into angiotensin I (Ang I) by the enzyme renin (synthesized and secreted by the juxtaglomerular cells of the kidneys). Ang I is rapidly hydrolyzed into angiotensin II (Ang II) by the angiotensin-converting enzyme (ACE), which is found at the surface of endothelial cells and, consequently, in large amounts in the lungs (NG and VANE, 1967; RYAN et al., 1976). Ang II produces various cellular responses by binding its AT₁ and AT₂ receptors. Ang II can also be hydrolyzed by aminopeptidases into active fragments (angiotensin III: Ang III; and angiotensin IV: Ang IV) that bind the AT₄/IRAP receptor, or by various proteases into inactive fragments.

By reducing the secretion rate of renin, Ang II regulates its own blood level and thereby closes a feedback-inhibition loop within the renin-angiotensin system (RAS). The malformations observed in mice deficient in angiotensinogen or ACE, or treated in utero with losartan (a specific antagonist of the AT₁ receptor), highlight the importance of the RAS in the normal growth and development of its target tissues (TUFROMCREDDIE et al., 1995). For the past 20 years, it has been suggested that local RAS within these different target tissues are responsible for the paracrine and autocrine actions of Ang II in the processes of cellular hypertrophy, proliferation, migration, inflammation and fibrosis (DZAU and GIBBONS, 1987; GRIFFIN et al., 1991; WEBER et al., 1995; FUKUHARA et al., 2000; DANSER, 2003; NERI SERNERI et al., 2004). The etiology of cardiovascular pathologies such as hypertension, atherosclerosis, left-ventricular cardiac hypertrophy and heart attacks also reflects the important role played by the AT₁ receptor in these diseases (GRIENDLING et al., 1996; SWYNGHEDAUW, 1999; VAUGHAN, 2000; MOLKENTIN and DORN, 2001; HUNYADY and TURU, 2004). The equivalent efficacy of specific AT₁ receptor antagonists and ACE inhibitors in the treatment of arterial hypertension has confirmed the involvement of the RAS in this pathology (RAMSAY and YEO, 1995). Nevertheless, several large-scale clinical studies (e.g., LIFE, ELITE, RENAAL) have shown that, by blocking the AT₁ receptor specifically, the specific antagonists prove safer and better tolerated than ACE inhibitors (BALL and WHITE, 2003). Moreover, a new class of non-peptide drugs directly inhibiting renin (such as aliskiren) may soon be added to the pharmacopoeia used against various pathologies of the cardiovascular system (STANTON, 2003).
1.2. Expression and structure of the AT₁ receptor

In humans, Ang II binds two receptors sharing 30% homology with one another, the hAT₁ and hAT₂ receptors. The cloning of the AT₁ and AT₂ receptors followed their pharmacological identification, which was initially based on their differential binding of selective non-peptide antagonists such as losartan (also named DuP753 and developed by Du Pont Pharmaceuticals; AT₁-selective) and PD123177 (a spinacine derivative; AT₂-selective) (CHIU et al., 1989; WHITEBREAD et al., 1989; MURPHY et al., 1991; SASAKI et al., 1991; KAMBAYASHI et al., 1993; NAKAJIMA et al., 1993). More precisely, the hAT₁ receptor is encoded by a single gene composed of 5 exons and located on band q24 of chromosome 3 (CURNOW et al., 1992; GUO et al., 1994). In the rat, two distinct genes encoding the AT₁A and AT₁B receptor subtypes have been cloned (SASAMURA et al., 1992; YE and HEALY, 1992; MURASAWA et al., 1993; GUO and INAGAMI, 1994).

Following the cloning of the AT₁ receptor, several research groups set out to identify the mechanisms regulating its expression. It has been suggested that the protective effect of estrogens against cardiovascular disease stems from their capacity to modulate the expression level of the rat AT₁A receptor through the 5'LS BPs (proteins that bind cytosolic RNAs and recognize, in this case, the promoter sequence in the 5' region of the AT₁A receptor mRNA) (KRISHNAMURTHI et al., 1999). Interferon-γ inhibits AT₁A receptor expression in rat vascular smooth muscle cells by modulating the transcription of the gene through pathways dependent on the activation of the MAPK (mitogen-activated protein kinase) and Jak2 (Janus-activated kinase) proteins (IKEDA et al., 1999). However, the accumulation of knowledge on the regulation of AT₁ receptor expression in humans has been slowed by the scarcity of human cell lines endogenously expressing this GPCR. Only very recently has the study of an immortalized human trophoblast line shown that hAT₁ receptor expression is regulated in part by the ubiquitous transcription factors Sp1 and Sp3 (DUFFY et al., 2004).

Despite the limited information on the regulation of AT₁ receptor expression, many of its structural and functional determinants are known. Hydropathy analysis of the AT₁ receptor suggests that this 359-amino-acid protein contains seven hydrophobic TMDs forming α-helices within the lipid bilayer of cell membranes. In addition, an eighth, amphipathic α-helix, located after the seventh TMD and interacting with anionic lipids, has been proposed to influence the functionality of the AT₁ receptor (MOZSOLITS et al., 2002; THOMAS and QIAN, 2003). The secondary structure of the AT₁ receptor is depicted in Figure 2.
The tertiary structure of this GPCR is maintained by two disulfide bridges connecting the N-terminal region to the third extracellular loop, and the first extracellular loop to the second extracellular loop. The integrity of these disulfide bridges is required for Ang II binding to the AT₁ receptor and renders it sensitive to reducing agents such as dithiothreitol and 2-mercaptoethanol (GUNTHER, 1984; WHITEBREAD et al., 1989; OHYAMA et al., 1995).

Figure 2. Schematic representation of the AT₁ receptor. The seven TMDs and the eighth α-helix composing the AT₁ receptor are numbered (I-VIII). The AT₁ receptor carries three N-glycosylation sites (Asn⁴, Asn¹⁷⁶ and Asn¹⁸⁸) essential for its membrane targeting. Its tertiary structure is maintained by two disulfide bridges connecting Cys¹⁸ to Cys²⁷⁴ and Cys¹⁰¹ to Cys¹⁸⁰. Residue Asn¹¹¹ (middle of the third TMD) and the DRY and NPxxY motifs (C-terminal portions of the third and seventh TMDs, respectively) are intimately linked to the activation mechanisms of the AT₁ receptor. The potential phosphorylation sites (serines/threonines and tyrosines) on the intracellular loops and C-terminal tail are also highlighted (red diamonds and triangles, respectively).

The three consensus N-glycosylation sites (NxS/T) found on the N-terminal tail (N4) and the second extracellular loop (N176 and N188) of the AT₁ receptor are required for its membrane targeting during protein synthesis (DESLAURIERS et al., 1999; JAYADEV et al., 1999; LANCTOT et al., 1999). The glycosylation process is, however, complex and can vary considerably from one cell type to another (ROTH, 2002). This is probably why the apparent mass of the AT₁ receptor, like that measured for other GPCRs (e.g., the AT₂ receptor, the β-adrenergic receptor, the glucagon receptor), often differs between tissues and expression systems (CARSON et al., 1987; ARBABIAN et al., 1989; SERVANT et al., 1994; IWANIJ, 1995). As in many GPCRs, the DRY (residues 125-127) and NPxxY (residues 298-302) sequences at the C-terminal portions of the third and seventh TMDs, respectively, are intimately linked to the functionality of the AT₁ receptor (LAPORTE et al., 1996; MIURA et al., 2000; KARNIK et al., 2003). Together, these structural determinants place this glycoprotein in subfamily A of the GPCRs. In addition, the three intracellular loops and the cytoplasmic tail of the AT₁ receptor contain several potential phosphorylation sites for certain Ser/Thr kinases (e.g., protein kinase C and G protein-coupled receptor kinases) and Tyr kinases (e.g., Jak2, pp60$^{C-SRC}$ and p125$^{FAK}$) (BERK and CORSON, 1997; FERGUSON, 2001). Following phosphorylation of the AT₁ receptor, desensitization mechanisms such as β-arrestin recruitment and internalization via clathrin-coated pits are engaged (OAKLEY et al., 2000; PIERCE et al., 2001; THOMAS and QIAN, 2003). Although phosphorylation of Ser³³⁵ and Thr³³⁶ promotes the rapid and maximal internalization of the AT₁ receptor upon Ang II binding, this post-translational modification is not essential for its internalization (THOMAS et al., 1996; THOMAS et al., 1998).
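Since several of these positional landmarks recur throughout the thesis, the short sketch below collects them into a small Python record. This is purely an illustrative aid: the names `AT1_FEATURES` and `features_at` are invented here and are not part of the thesis, but all residue numbers are taken from the text and the legend of Figure 2.

```python
# Hypothetical, illustrative annotation of the hAT1 receptor's structural
# landmarks, as described in the text (359 residues, GPCR subfamily A).
AT1_FEATURES = {
    "n_glycosylation_sites": [4, 176, 188],        # Asn4, Asn176, Asn188 (NxS/T)
    "disulfide_bridges": [(18, 274), (101, 180)],  # Cys18-Cys274, Cys101-Cys180
    "DRY_motif": range(125, 128),                  # residues 125-127 (end of TMD III)
    "NPxxY_motif": range(298, 303),                # residues 298-302 (TMD VII)
    "activation_switch": [111],                    # Asn111, middle of TMD III
    "internalization_phosphosites": [335, 336],    # Ser335, Thr336
}

def features_at(position: int) -> list[str]:
    """Return the names of annotated features involving a residue position."""
    hits = []
    for name, value in AT1_FEATURES.items():
        if isinstance(value, range):
            if position in value:
                hits.append(name)
        elif value and isinstance(value[0], tuple):
            if any(position in pair for pair in value):
                hits.append(name)
        elif position in value:
            hits.append(name)
    return hits

# The N111G mutation studied in this thesis touches only the activation switch:
assert features_at(111) == ["activation_switch"]
assert features_at(101) == ["disulfide_bridges"]
```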
Interestingly, moreover, the internalization motifs of the AT₁ receptor (contained in the N-terminal portion of its third intracellular loop and in its C-terminal tail) do not entirely overlap the structural determinants that allow stimulation of phospholipase C (CHAKI et al., 1994; HUNYADY et al., 1994b; HUNYADY et al., 1995; THOMAS et al., 1995; LAPORTE et al., 1996). Accordingly, Ang II triggers robust internalization of the mutant receptors D74N-AT₁ and D74N/Δ221-226-AT₁ even though these mutants fail to activate the $G_{q/11}$ protein (HUNYADY et al., 1994a). The capacity of various Ang II analogues (e.g., [Sar¹,Ile⁸]Ang II and [Sar¹,Ile⁴,Ile⁸]Ang II) to direct the AT₁ receptor toward endocytic vesicles without first activating the $G_{q/11}$ protein also strongly suggests that this GPCR adopts distinct conformations during its coupling to heterotrimeric G proteins and during its internalization (THOMAS et al., 2000).

1.3. Signaling pathways triggered by the AT₁ receptor

The rapid action of Ang II on the cardiovascular system is mostly due to its binding to the AT₁ receptor. First, activation of phospholipase C-β by the GTP-bound Gα$_{q/11}$ protein subunit allows the hydrolysis of phosphatidylinositol 4,5-bisphosphate into inositol 1,4,5-trisphosphate (IP₃) and diacylglycerol (Figure 3).

Figure 3. Coupling of the AT₁ receptor to heterotrimeric G proteins. Binding of Ang II to the AT₁ receptor (AT₁R) triggers the activation of the $G_{q/11}$ and $G_{i/o}$ proteins by promoting the exchange of guanosine diphosphate (GDP) for guanosine triphosphate (GTP). The GTP-bound Gα$_{q/11}$ subunit activates phospholipase C-β (PLC-β), which hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP₂) into diacylglycerol (DAG) and IP₃. DAG is a direct activator of protein kinase C (PKC), whereas IP₃, by binding its receptor channel (IP₃R), leads to the activation of several effectors (e.g., calmodulin kinase, protein kinase C) through the release of the Ca²⁺ stored in the endoplasmic reticulum (ER). This rise in cytoplasmic Ca²⁺ concentration is counterbalanced by the SERCA and PMCA pumps. The entry of extracellular Ca²⁺ through calcium channels of the plasma membrane, such as the transient receptor potential (TRP) and store-operated channels (SOC), prevents the long-term emptying of intracellular Ca²⁺ stores. For its part, the GTP-bound Gα$_{i/o}$ subunit inhibits adenylyl cyclase (AC), thereby reducing the level of cyclic adenosine monophosphate (cAMP) and the activity of protein kinase A (PKA). The Gβγ subunits can also modulate the activity of PLC-β and of various adenylyl cyclases.

While IP₃ promotes the opening of its tetrameric receptor channels (IP₃Rs) and triggers the rapid intracellular mobilization of the Ca²⁺ stored in the endoplasmic reticulum (ER), diacylglycerol directly activates certain protein kinase C isoforms (NEWTON, 2001; PATTERSON et al., 2004). The Ca²⁺ thus released can produce a multitude of cellular effects (through the activation of different effectors), ranging from rapid responses such as hormone secretion and
contraction to long-term regulation such as cellular hyperplasia and hypertrophy, inflammatory processes and apoptosis (BERRIDGE et al., 2000). The cell must in fact protect itself against prolonged exposure to high intracellular Ca²⁺ concentrations by expelling the ion through the SERCA pumps (sarcoplasmic and endoplasmic reticulum Ca²⁺ ATPase: pumping into the ER) and the PMCA pumps (plasma membrane Ca²⁺ ATPase: pumping into the extracellular milieu). To avoid severe depletion of the intracellular Ca²⁺ stores, various calcium channels of the plasma membrane (e.g., transient receptor potential and store-operated channels) are subsequently opened to allow refilling of the ER (SPASSOVA et al., 2004).

Activation of the $G_{i/o}$ protein also accounts for some of the hemodynamic actions of the AT₁ receptor (Figure 3). For example, it is by activating the $G_{i/o}$ protein that the AT₁ receptor stimulates angiotensinogen synthesis in the liver (POBINER et al., 1991; KLETT et al., 1993). More precisely, inhibition of adenylyl cyclase by the Gα$_{i/o}$ subunit in rat hepatocytes leads to stabilization of the angiotensinogen mRNA and, consequently, to an increase in its protein synthesis (KLETT et al., 1993). The attenuation by the AT₁ receptor of transport across the renal proximal tubule epithelium (reabsorption of water, salt and endogenous organic compounds vs. secretion of organic wastes) is also due to activation of the $G_{i/o}$ protein (DOUGLAS et al., 1990). Finally, the use of pertussis toxin (which inhibits the $G_{i/o}$ protein through an ADP-ribosylation mechanism) showed that the AT₁ receptor reduces renin secretion in part through the inhibition of adenylyl cyclase (HACKENTHAL et al., 1985). The cellular effects arising from $G_{i/o}$ activation can, however, also stem from the action of the Gβγ subunit on its various effectors, such as PI3K (phosphoinositide 3-kinase), certain GRKs (G protein-coupled receptor kinases) and different isoforms of adenylyl cyclase and PLC-β (CLAPHAM and NEER, 1997).

Several studies have shown that heterotrimeric G proteins interact directly with different portions of the intracellular loops and of the C-terminal tail of GPCRs (KONIG et al., 1989; DOHLMAN et al., 1991; BLUML et al., 1994; BOURNE, 1997; WESS, 1998; WU et al., 1998; WELSBY et al., 2002; CHAN et al., 2003; AUGER et al., 2004; ROGINSKAYA et al., 2004). For example, Ser²⁴⁰ of rhodopsin (located at the C-terminal portion of its third intracellular loop) is required for the interaction of this GPCR, in its active conformation, with the N-terminal (residues 19-28) and C-terminal (residues 310-314 and 342-345) portions of transducin (the heterotrimeric G protein present in the retina) (FRANKE et al., 1992; CAI et al., 2001; ITOH et al., 2001).
L’activation des protéines G hétérotrimériques par le récepteur AT$_1$ repose en partie sur l’intégrité de la séquence DRY (2e boucle intracellulaire), des résidus 219-225 (3e boucle intracellulaire) et des résidus 312-314 (queue C-terminale) (OHYAMA et al., 1992; SHIRAI et al., 1995; WANG et al., 1995; SHIBATA et al., 1996; SANO et al., 1997; KAI et al., 1998; MIURA et al., 2000). Toutefois, peu de distinction entre le couplage des protéines G$_{q/11}$ et G$_{i/0}$ au récepteur AT$_1$ n’a été porté au cours de ces études. De telles différences semblent pourtant exister puisque l’orientation et la rigidité du 4e DTM du récepteur AT$_1$ influence sa spécificité de couplage avec les protéines G$_{q/11}$ et G$_{i/0}$ (FENG et KARNIK, 1999). D’autre part, la sélectivité de couplage des GPCRs (e.g. récepteur D$_2$ de la dopamine, récepteur CB1 des cannabinoides) à différentes protéines G hétérotrimériques varie selon le ligand utilisé (GLASS et NORTHUP, 1999; GAZI et al., Le groupe de Lefkowitz a aussi montré que l'état de phosphorylation des récepteurs β-adrénergiques (type 1 et 2) et V2 de la vasopressine gouverne la nature des voies de signalisation pouvant être enclenchées suite à l'activation de ces GPCRs (ZAMAH et al., 2002; MARTIN et al., 2004a; REN et al., 2005). On peut ainsi suggérer que l'ensemble des déterminants moléculaires permettant au récepteur AT₁ d'activer la protéine $G_{q/11}$ ne correspond pas entièrement à ceux permettant son couplage fonctionnel à la protéine $G_{i/0}$. L'action du récepteur AT₁ s'étend toutefois bien au-delà des quelques effecteurs classiques des protéines $G_{q/11}$ et $G_{i/0}$ mentionnés ci-haut. En fait, l'activation du récepteur AT₁ enclenche un vaste réseau de voies de signalisation faisant appel à la phospholipase A₂ (génération d'acide arachidonique), la phospholipase D (production d'acide phosphatidique), différents canaux ioniques (e.g. canaux calciques voltage-dépendant de type L et T) ainsi qu'une grande variété de sérine/thréonine kinases (e.g. MAPK, PKC, Akt/PKB) et de tyrosine kinases (e.g. Jak2, Pyk2, pp60$^{C-SRC}$, p125$^{FAK}$, p130$^{CAS}$) (SAYESKI et al., 1998; DE GASPARO et al., 2000; TOUYZ et BERRY, 2002). Par exemple, le récepteur AT₁ exprimé dans les cellules vasculaires de muscles lisses active la voie de signalisation des PI3K-Akt/PKB grâce à la production d'espèces réactives en oxygène par la NADH/NADPH oxidase (USHIO-FUKAI et al., 1999). Le récepteur AT₁ provoque aussi la translocation rapide des STAT ("signal transducers and activators of transcription") au noyau de divers types de cellules (e.g. cardiomyocytes, cellules vasculaires de muscles lisses, cellules "Chinese Hamster Ovary") en activant directement la tyrosine kinase Jak2 (BHAT et al., 1995; MARRERO et al., 1995; MCWHINNEY et al., 1997). Plus précisément, la protéine Jak2 s'associe au récepteur AT₁ en interagissant avec le motif YIPP (région 319-322) situé sur la queue C-terminale de ce GPCR (ALI et al., 1997). Il est intéressant de noter que deux mutants du récepteur AT₁ ("M5-AT₁" et D74E-AT₁) n'activant pas les protéines $G_{q/11}$ et $G_{i/0}$ parviennent tout de même à enclencher la voie de signalisation Jak2-STAT (DOAN et al., 2001). En interagissant directement avec le récepteur AT₁, la protéine ATRAP ("AT₁ receptor-associated protein") diminue le niveau d'activation de la voie de signalisation calcineurin-NFAT ("nuclear factor of activated T cells") (DAVIET et al., 1999; GUO et al., 2005). 
Furthermore, an entire range of physiological and pathological actions of Ang II on the cardiovascular system relies on effects typically associated with growth factors such as EGF ("epidermal growth factor") and PDGF ("platelet-derived growth factor") (SHAH and CATT, 2003; SMITH et al., 2004). In fact, transactivation of the EGF receptor (EGFR) by the AT₁ receptor appears to play an important role in the progression of pathological hypertrophy of the cardiac muscle (KAGIYAMA et al., 2002; THOMAS et al., 2002). This process of EGFR transactivation by different GPCRs requires the activation of extracellular matrix metalloproteases such as ADAM12 ("a disintegrin and metalloprotease 12") (DAUB et al., 1996; GSCHWIND et al., 2001; ASAKURA et al., 2002; SHAH and CATT, 2004). Although the mechanism by which GPCRs activate metalloproteases remains to be clarified, two studies from the Sadoshima group have suggested that the AT₁ receptor can transactivate EGFRs without the help of heterotrimeric G proteins (SETA et al., 2002; SETA and SADOSHIMA, 2003). Indeed, even without coupling to the $G_{q/11}$ protein, the mutant receptor D125G/R126G/Y127A/M134A-AT₁ still internalizes and activates the tyrosine kinase and MAPK signaling pathways, in addition to transactivating EGFRs (SETA et al., 2002). This research team has furthermore reported a direct (and controversial) interaction of the AT₁ receptor with the EGFR following phosphorylation of Tyr319 on the C-terminal tail of this GPCR (SETA and SADOSHIMA, 2003; THOMAS et al., 2004). Thus, to assume that heterotrimeric G proteins are exclusively responsible for activating the entire set of signaling pathways downstream of the AT₁ receptor would be to trivialize the capacity of this GPCR to interact with other regulatory proteins.

Although heterotrimeric G proteins play a major role in the functionality of GPCRs, they are by no means the only proteins that interact with these 7-TMD receptors (BOCKAERT and PIN, 1999; HALL et al., 1999; MARINISSEN and GUTKIND, 2001; KREIENKAMP, 2002; PIERCE et al., 2002; REBOIS and HEBERT, 2003). For example, the β-arrestin protein family has been known for more than a decade to direct the internalization of many GPCRs toward clathrin-coated pits by interacting directly with the intracellular portions of these receptors (FERGUSON, 2001; LUTTRELL and LEFKOWITZ, 2002). The stable recruitment of β-arrestins by certain GPCRs (e.g., the AT₁ receptor, the vasopressin V2 receptor, the substance P receptor) at endocytic vesicles in fact allows activation of the MAPK and JNK signaling pathways independently of heterotrimeric G proteins (MCDONALD et al., 2000; OAKLEY et al., 2000; WEI et al., 2003; WEI et al., 2004). The interaction of one of the two types of receptor-activity-modifying proteins with the calcitonin receptor-like receptor allows the proper folding and membrane targeting of this GPCR while also determining its pharmacological properties (MCLATCHIE et al., 1998).
The binding of the Homer scaffolding proteins (types 1a-c, 2 and 3) to metabotropic glutamate receptors (via a proline-rich domain (PPxxF) located in their cytoplasmic tail) allows the dimerization and functional coupling of these GPCRs to the IP₃Rs (BRAKEMAN et al., 1997; TU et al., 1998; XIAO et al., 1998). Indeed, the growing identification of proteins interacting directly with 7-TMD receptors (e.g., calmodulin, the Na⁺/H⁺ exchanger regulatory factor, tubulin, A kinase-anchoring proteins, etc.) suggests that the network of signaling pathways that GPCRs can engage and regulate is complex and diversified (MINAKAMI et al., 1997; HALL et al., 1998; CIRUELA et al., 1999; WANG et al., 1999; FRASER et al., 2000). In light of these recent data, it is clear that the cellular context (or proteome) in which a GPCR is expressed can drastically influence its response to endogenous ligands and to drugs directed against it (KENAKIN, 2003b).

2. CONSTITUTIVE ACTIVITY OF GPCRs

The action of GPCRs on their biological systems does not rest solely on the prior binding of an agonist ligand. These receptors in fact have the intrinsic capacity to adopt an active conformation in the absence of agonist (constitutive activity). This constitutive activity of GPCRs plays a fundamental role in the maintenance of various physiological functions and in the evolution of certain pathological processes (SPIEGEL, 1996; LEURS et al., 2000; PARNOT et al., 2002; SEIFERT and WENZEL-SEIFERT, 2002; MILLIGAN, 2003). To date, more than 60 wild-type GPCRs have displayed constitutive activity in various expression systems, regardless of the type of heterotrimeric G protein to which they couple or of the subfamily from which they originate (SEIFERT and WENZEL-SEIFERT, 2002). The first evidence of constitutive activity in a GPCR was reported for the δ-opioid and β₂-adrenergic receptors (KOSKI et al., 1982; CERIONE et al., 1984; COSTA and HERZ, 1989). Soon afterwards, numerous site-directed and random mutagenesis studies showed that constitutive activity can be enhanced in various GPCRs (e.g., rhodopsin, the AT₁ receptor, the M5 muscarinic receptor, the α₁-adrenergic receptor, the δ-opioid receptor) (COTECCHIA et al., 1990; KJELSBERG et al., 1992; COHEN et al., 1993; BURSTEIN et al., 1995; SPALDING et al., 1998; PARNOT et al., 2000; DECAILLOT et al., 2003). These constitutively active mutant GPCRs are characterized by an increase in their basal activity relative to that of their wild-type homologue (PARNOT et al., 2002). For more than a decade, the study of constitutive GPCR activation has greatly enriched our knowledge of their structure and our understanding of their activation and desensitization mechanisms (LEFKOWITZ et al., 1993; SCHEER and COTECCHIA, 1997; LEURS et al., 2000; BARAK et al., 2003; MILLIGAN, 2003; GOULDSON et al., 2004; PRATHER, 2004).

2.1. Theoretical models of GPCR activation

Following the first observations of constitutive activity in GPCRs, the ternary complex activation model of a GPCR (in vogue in the 1980s) was refined and completed by the cubic ternary complex activation model (DE LEAN et al., 1980; LEFKOWITZ et al., 1993; SAMAMA et al., 1993; LEFF, 1995; WEISS et al., 1996) (Figure 4).
The innovation of this cubic model lies in the fact that it accounts for all possible combinations of interactions between the receptor in its inactive or active conformation, the ligand and the heterotrimeric G protein.

Figure 4. Cubic ternary complex activation model of a GPCR. The isomerization reaction determining the proportion of receptors existing in an inactive ($R_I$) or active ($R_A$) conformation is governed by a thermodynamic equilibrium. In either of these conformations, the receptor can bind the ligand (L) and/or the heterotrimeric G protein (G). The constitutive activity of a GPCR is explained by the formation of a functional complex between $R_A$ and G.

Although these theoretical models are simplifications of a far more complex reality, they have nonetheless made it possible to understand and predict many GPCR behaviors (MILLIGAN and IJZERMAN, 2000; STRANGE, 2002). For example, stable expression of the α₂D-adrenergic receptor in PC-12 cells (rat pheochromocytoma) confirmed the existence of this GPCR (in the absence of ligand) in an active conformation capable of interacting with and activating the $G_{i/o}$ protein ($R_A G$) (TIAN et al., 1994). The strong constitutive activity of the melatonin Mel₁A receptor expressed at a physiological level in HEK-293 cells is explained by the extensive precoupling of the Gα$_i$ protein to this GPCR ($R_I G$ and/or $R_A G$ detected by co-immunoprecipitation) (ROKA et al., 1999). The cubic ternary complex activation model also makes it possible to understand the agonist, neutral antagonist or inverse agonist character of ligands toward their GPCRs. Thus, an agonist ligand stabilizes a larger amount of receptors in the active conformation that couples to and activates heterotrimeric G proteins ($LR_A G$ in equilibrium with $R_A G$) than the amount found initially in the absence of ligand ($R_A G$ only). This is why the amount of heterotrimeric G protein pulled down by immunoprecipitation of certain GPCRs (e.g., the cholecystokinin CCK-B receptor, the δ-opioid receptor, the melatonin Mel₁A receptor) increases markedly upon agonist binding (LAW and REISINE, 1997; BRYDON et al., 1999; ROKA et al., 1999; GALES et al., 2000). By contrast, an inverse agonist depletes the total amount of functional complexes ($LR_A G$ and $R_A G$) by shifting the thermodynamic equilibrium toward complexes that do not activate heterotrimeric G proteins ($LR_I G$ and $LR_I$). The existence of a ternary complex of the $LR_I G$ type was recently confirmed for the endocannabinoid CB2 receptor and the histamine H₁ and H₂ receptors binding the inverse agonists SR 144528, mepyramine and tiotidine, respectively (BOUABOULA et al., 1999; MONCZOR et al., 2003; FITZSIMONS et al., 2004). Remarkably, these inverse agonists have the unexpected property of causing heterologous desensitization of the signaling pathways regulated by the heterotrimeric G proteins held in complex with these GPCRs, by depleting their availability for other types of GPCRs. The physiological consequence of using such ligands remains to be defined. Finally, neutral antagonists bind their GPCRs without reducing the amount of $R_A G$ complex at equilibrium. In doing so, however, they block access of the GPCR's binding pocket to inverse agonists and agonists.
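To make the workings of this scheme explicit, the basal equilibria of Figure 4 can be written out in a reduced, two-state form. The sketch below is only an illustrative simplification: the constants $J$ and $K_G$ and the efficacy factor $\alpha$ are standard in this modeling literature but are not defined in the thesis itself.
\[
R_I \;\overset{J}{\rightleftharpoons}\; R_A, \qquad
R_A + G \;\overset{K_G}{\rightleftharpoons}\; R_A G, \qquad
J = \frac{[R_A]}{[R_I]}, \quad K_G = \frac{[R_A G]}{[R_A][G]},
\]
so that, in the absence of ligand, the constitutive signal scales as $[R_A G] = J\,K_G\,[R_I][G]$. A ligand L binds both conformations, and its character can be summarized by a single factor $\alpha$ rescaling the isomerization constant of the occupied receptor:
\[
L R_I \;\overset{\alpha J}{\rightleftharpoons}\; L R_A, \qquad
\begin{cases}
\alpha > 1: & \text{agonist (enriches } LR_A G\text{)}\\
\alpha = 1: & \text{neutral antagonist (leaves the } R_A G \text{ equilibrium untouched)}\\
\alpha < 1: & \text{inverse agonist (depletes } LR_A G \text{ and } R_A G\text{).}
\end{cases}
\]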
Interestingly, most inverse agonists were initially perceived as neutral antagonists (MILLIGAN et al., 1995; KENAKIN, 2004b). Together, the cloning of GPCRs, the possibility of expressing these receptors in more sensitive systems and the identification of mutations increasing their constitutive activity unmasked the inverse agonist character of these "pseudo-neutral antagonists". For example, overexpression of the β₂-adrenergic receptor in mice (thereby revealing constitutive activity) confirmed the inverse agonist nature of the ligand ICI-118,551 (BOND et al., 1995). Although inverse agonists offer definite therapeutic potential, the use of neutral antagonists could prove wiser in certain cases (MILLIGAN et al., 1995; MILLIGAN and BOND, 1997). For example, the neutral antagonist 6β-naltrexol directed against the mu-opioid receptor appears safer than the inverse agonists naloxone and naltrexone for easing physical dependence on morphine (SADEE et al., 2005).

The cubic ternary complex activation model also explains why the expression levels of the receptor and of the heterotrimeric G protein influence the detection of constitutive activity (KENAKIN, 1997). Indeed, overexpression of either of these two components will, by the law of mass action, increase the total amount of $R_A G$ complexes. Numerous studies have observed an increase in the constitutive activity of a GPCR (e.g., the dopamine D₂ receptor, the M3 muscarinic receptor) following overexpression of its cognate heterotrimeric G protein (SENOGLES et al., 1990; BURSTEIN et al., 1997). Likewise, overexpression of several GPCRs (e.g., the β₂-adrenergic receptor, the thyrotropin receptor, the dopamine D₁B receptor, the calcitonin receptor) in various cell types also leads to increased second-messenger production in the absence of agonist ligand (CHIDIAC et al., 1994; TIBERI and CARON, 1994; VAN SANDE et al., 1995; POZVEK et al., 1997). Besides the expression levels of the GPCR and the heterotrimeric G protein, many other factors can raise the constitutive activity of GPCRs (e.g., polymorphisms, alternative splicing, RNA editing, interspecies variation, point mutations and the cellular proteome). In short, any modification that pushes the receptor's isomerization equilibrium toward an active conformation, or that favors the formation of a complex with the heterotrimeric G protein, is likely to raise the level of constitutive activity of the GPCR.

Obviously, the cubic model does not cover every aspect of GPCR behavior. As mentioned above, these receptors are now known to interact with a wide range of proteins, not only heterotrimeric G proteins. It would nonetheless be futile to try to fold these various interactions into the theoretical models, since they often do not generalize across GPCRs. On the other hand, by describing the interaction of a GPCR with a heterotrimeric G protein at a 1:1 ratio, the cubic model obviously does not account for the possible dimerization of these receptors.
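The mass-action argument invoked above can be quantified with the same reduced notation (again an illustrative sketch rather than a model developed in the thesis). Writing the receptor conservation as $R_{\mathrm{tot}} = [R_I] + [R_A] + [R_A G] = [R_I]\,(1 + J + J\,K_G\,[G])$ gives
\[
[R_A G] \;=\; \frac{J\,K_G\,[G]}{1 + J + J\,K_G\,[G]}\; R_{\mathrm{tot}},
\]
so the basal complex grows linearly with the total receptor and saturably with the free G protein, which is why overexpressing either partner unmasks constitutive activity.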
The dimerization process is nevertheless essential to the trafficking (membrane targeting and internalization) and signaling of many GPCRs (HEBERT and BOUVIER, 1998; DEVI, 2001; RIOS et al., 2001). For example, it was recently shown that homodimerization of the β₂-adrenergic receptor at the endoplasmic reticulum is required for its proper targeting to the plasma membrane (SALAHPOUR et al., 2004). Using a peptide derived from the sixth TMD of this same GPCR, it was possible to prevent homodimerization of the β₂-adrenergic receptor and thereby block the activation of adenylyl cyclase upon addition of the agonist isoproterenol (HEBERT et al., 1996). Interestingly, this peptide also inhibits the constitutive activity of the β₂-adrenergic receptor, suggesting that the dimerization process may play an important role in the constitutive activation of a GPCR (HEBERT et al., 1996). Although several other constitutively active GPCRs can form homodimers (e.g., the serotonin 5-HT₁B and 5-HT₁D receptors, the dopamine D₂ and D₃ receptors, the endocannabinoid CB1 receptor), it remains to be proven that their constitutive activation depends on their dimerization (NG et al., 1993; NIMCHINSKY et al., 1997; ZAWARYNSKI et al., 1998; XIE et al., 1999; LEE et al., 2000; MUKHOPADHYAY et al., 2000). No doubt these findings will influence the theoretical models of GPCR activation, just as the discovery of constitutive activity did in the past.

2.2. Physiological roles of the constitutive activity of wild-type GPCRs

Although constitutive activity is an intrinsic property of GPCRs, its role in the normal functioning of biological systems remains difficult to prove (MILLIGAN, 2003). The presence of endogenous agonists can be a major problem in the in vivo study of GPCR constitutive activity, particularly in systems as complex as neurons, where the release of neurotransmitter vesicles muddies the waters. For a sound analysis of GPCR constitutive activity in a native system, it is crucial to have at hand a panel of agonists, neutral antagonists and inverse agonists in order to exclude the contribution of endogenous agonists to the detected activity (MORISSET et al., 2000; WIELAND et al., 2001). It is indeed by using the neutral antagonist Proxyfan that the involvement of the constitutive activity of the histamine H₃ receptor in controlling the activity of histaminergic neurons in the rat (important for wakefulness, attention and learning mechanisms) was revealed (MORISSET et al., 2000). Although this example remains unique at present, several other GPCRs known to play a central role in maintaining tonic responses (e.g., the oxytocin receptor for uterine contraction, the endocannabinoid receptors for controlling the threshold of thermal nociception) are suspected of contributing to them through constitutive activation of their downstream signaling pathways (DE LIGT et al., 2000). Moreover, the intense neuronal activity that animates our brain at all times may well rest on the constitutive activity of several other GPCRs (e.g.,
the α₂- and β₂-adrenergic receptors binding noradrenaline, the melatonin Mel₁A receptor, the sphingosine 1-phosphate S1P₅ receptor, the endocannabinoid CB1 and CB2 receptors, the 5-hydroxytryptamine 5-HT₂A receptor, the melanocortin MC₄ receptor) (TIAN et al., 1994; SMIT et al., 1996; ROKA et al., 1999; HOPKINSON et al., 2000; ROULEAU et al., 2002; HARVEY, 2003; NIEDERBERG et al., 2003; SRINIVASAN et al., 2004; PERTWEE, 2005). The recent identification of endogenous inverse agonists also supports the possible involvement of GPCR constitutive activity in certain physiological processes. Indeed, the control of body weight relies in part on the inverse agonist action of the endogenous ligand Agouti-related protein at the melanocortin MC₃ and MC₄ receptors (OLLMANN et al., 1997; HASKELL-LUEVANO and MONCK, 2001; NIJENHUIS et al., 2001; ADAN and KAS, 2003). Furthermore, the chemokines interferon-γ-inducible protein 10 and stromal cell-derived factor-1α prevent the development of Kaposi's sarcoma by reversing the constitutive activity of the viral GPCR ORF74 ("open reading frame 74") (ROSENKILDE et al., 1999). Clearly, the identification of other physiological responses dependent on constitutive GPCR activation and on the action of endogenous inverse agonists will reshape our view of how homeostasis is maintained in organisms in general.

2.3. Pathologies associated with the constitutive activity of mutant GPCRs

The advent of molecular genetics has uncovered a multitude of point mutations involved in the development of several serious diseases. The proteins at fault include enzymes and ion channels as well as GPCRs. Diseases caused by the mutation of a GPCR often result from a substitution that inactivates the receptor (SPIEGEL and WEINSTEIN, 2004; ULLOA-AGUIRRE et al., 2004). For example, substituting Arg¹³⁷ of the vasopressin V₂ receptor with a His renders this GPCR unable to stimulate the Gₛ protein and leads to diabetes insipidus (ROSENTHAL et al., 1993). Dwarfism can arise from the inability of the growth hormone-releasing factor receptor to activate the Gₛ protein following substitution of Asp⁶⁰ by a Gly (LIN et al., 1993). However, various hereditary or acquired human pathologies result from germline or somatic mutations that increase the constitutive activity of GPCRs (SPIEGEL, 1996; PARNOT et al., 2002; SEIFERT and WENZEL-SEIFERT, 2002). For example, several distinct mutations increasing the constitutive activity of the luteinizing hormone receptor (e.g., M571I, T577I and D578G) lead to familial male precocious puberty (a hereditary, gonadotropin-independent metabolic disorder causing severe developmental problems) (SHENKER et al., 1993; SHENKER, 2002; THEMMEN and VERHOEF-POST, 2002). Rhodopsin, involved in phototransduction, is also subject to such mutations (JIN et al., 2003). This receptor is normally activated by the absorption of a photon, which triggers the isomerization of the retinal bound covalently through a Schiff base formed by the amino acids Glu¹¹³ and Lys²⁹⁶ (located in the third and seventh TMDs, respectively) (HUBBELL et al., 2003).
Constitutive activation of rhodopsin upon substitution of Lys$^{296}$ by a Glu has been found in a family suffering from retinitis pigmentosa, a disease characterized by degeneration of the photosensitive tissue of the retina (ROBINSON *et al.*, 1992). It is interesting to note that increased expression or inappropriate degradation of the PAR1 receptor ("protease-activated receptor 1"; irreversibly activated by the proteolytic action of thrombin and thus acting as a fully constitutively active receptor) contributes to the metastatic character of breast cancer cells (EVEN-RAM et al., 1998; O'BRIEN et al., 2001; BOODEN et al., 2004). The role of constitutive activation of certain GPCRs in the development of various tumors further underscores the oncogenic potential of seven-TMD receptors (initially suggested by the study of the mutant R288K/K290H/A293L-\(\alpha_{1B}\)-adrenergic receptor) (ALLEN et al., 1991). Indeed, several cases of thyroid adenoma presenting hyperplasia of the thyroid gland and causing hyperthyroidism are caused by different somatic mutations (\(\sim 50\) of which have been identified) that drastically increase the constitutive activity of the receptor for the pituitary hormone thyrotropin (PARMA et al., 1993). Recently, a new isoform of the cholecystokinin B/gastrin receptor, arising from alternative splicing and displaying increased constitutive activity, was discovered specifically in human colorectal cancer cells (HELLMICH et al., 2000). In addition, some rare cases of cancer (e.g. Kaposi's sarcoma, Burkitt's lymphoma, adult leukemia, hepatic carcinoma) stem from the expression of constitutively active mutant GPCRs encoded by various viruses (e.g. herpesviruses, retroviruses and hepatitis B virus; these viruses having pirated the human genome and subsequently mutated the captured receptors) (SMIT et al., 2000; ROSENKILDE et al., 2001; SMIT et al., 2003). In all likelihood, the continuous activation of signaling pathways downstream of these constitutively active GPCRs is at the origin of the resulting cellular and metabolic disorders. Thus, there is no doubt that the development of inverse agonists will offer strong therapeutic potential in the fight against the various pathologies caused by constitutive GPCR activation (PRATHER, 2004). To date, the existence of a link between polymorphism of the AT\textsubscript{1} receptor gene and the incidence of cardiovascular disease remains controversial (MILLER and SCHOLEY, 2004; BAUDIN, 2005). For example, some studies report that substitution of a cytidine for the adenosine at position 1166 of the gene encoding the hAT\textsubscript{1} receptor (in the 3' untranslated region of the gene) is a polymorphism frequently observed in hypertensive patients (BONNARDEAUX \textit{et al.}, 1994; SZOMBATHY \textit{et al.}, 1998; KOBASHI \textit{et al.}, 2004; RUBATTU \textit{et al.}, 2004). Conversely, other research teams have been unable to establish a link between the A1166C polymorphism of the AT\textsubscript{1} receptor and the evolution of various cardiovascular pathologies such as essential hypertension and myocardial hypertrophy (SCHMIDT \textit{et al.}, 1997; ONO \textit{et al.}, 2003; ARAUJO \textit{et al.}, 2004; KUZNETSOVA \textit{et al.}, 2004; SUGIMOTO \textit{et al.}, 2004).
Given the divergence of the many studies that have explored the involvement of the A1166C polymorphism of the AT\textsubscript{1} receptor, it is surprising that the actual consequence of this polymorphism for AT\textsubscript{1} receptor expression and activity is still not known (DANSER and SCHUNKERT, 2000). On the other hand, the development of adenomas (benign tumors) of the adrenal glands causing hyperaldosteronism does not appear to result from a mutation increasing the constitutive activity of the AT\textsubscript{1} receptor (DAVIES \textit{et al.}, 1997; SACHSE \textit{et al.}, 1997). By contrast, the recent demonstration of the decisive role played by AT\textsubscript{1} receptor activation in the angiogenic processes associated with the growth of certain tumors hints at the therapeutic potential of inverse agonists of this GPCR (e.g. losartan, irbesartan and valsartan) (RICHARD et al., 2001; YOSHIJI et al., 2001; EGAMI et al., 2003; FUJITA et al., 2005). Moreover, the administration of ACE inhibitors, more than any other antihypertensive drug (e.g. diuretics, calcium channel blockers, β-adrenergic receptor blockers), reduces the mortality rate from malignant tumors (LEVER et al., 1998). The involvement of constitutive AT₁ receptor activation in the pathogenesis of cardiovascular diseases nevertheless remains to be established.

2.4. The constitutively active mutant receptor N111G-AT1

Over the past decade, the efforts of several research groups have identified some of the molecular determinants required for the activation mechanism of the AT₁ receptor (MIURA et al., 2003a). For example, the integrity of residues Asp\textsuperscript{74} and Tyr\textsuperscript{292} (located in the middle of the 2\textsuperscript{nd} and 7\textsuperscript{th} TMDs, respectively) is essential for activation of the G\textsubscript{q/11} protein by the Ang II-bound AT₁ receptor (BIHOREAU et al., 1993; MARIE et al., 1994). These early observations led to a model of the structure of the rat AT\textsubscript{1A} receptor (based on the crystallographic coordinates of bacteriorhodopsin) suggesting that Tyr\textsuperscript{292} interacts with Asn\textsuperscript{111} in an inactive conformation (JOSEPH et al., 1995). Although this interaction has not yet been confirmed, the key role of Asn\textsuperscript{111} in the molecular activation mechanism of the AT₁ receptor was demonstrated simultaneously by three research teams (NODA et al., 1996; BALMFORTH et al., 1997; GROBLEWSKI et al., 1997). In fact, the degree of activation of the AT₁ receptor is inversely proportional to the size of the residue at position 111 (FENG et al., 1998). Thus, substituting Asn$^{111}$ with smaller residues (e.g. Cys, Ala, Gly) increases the constitutive activity of the AT$_1$ receptor, whereas bulkier residues (e.g. Gln, Tyr, Phe) render it refractory to activation by Ang II (FENG et al., 1998). It is interesting to note that substitution of the equivalent position in other GPCRs (e.g.
E113Q-rhodopsin, the N113A-B2 bradykinin receptor, the C128F-$\alpha_{1B}$-adrenergic receptor, the C116F-$\beta_2$-adrenergic receptor, the N119A-CXCR4 chemokine receptor, the N100A-PAF receptor for "platelet-activating factor") also leads to constitutive activation of these receptors, supporting the existence of an activation mechanism common to subfamily A of the GPCRs (ROBINSON et al., 1992; PEREZ et al., 1996; ISHII et al., 1997; PAUWELS and WURCH, 1998; ZUSCIK et al., 1998; MARIE et al., 1999; ZHANG et al., 2002; KARNIK et al., 2003). Systematic mapping of an AT$_{1A}$ receptor cDNA library generated by random mutagenesis has further identified other molecular determinants involved in the activation of this GPCR (PARNOT et al., 2000). Although other single mutations can increase the constitutive activity of the AT$_1$ receptor (e.g. I245T and L305Q), the mutant N111G-AT$_1$ receptor remains the one showing the strongest constitutive activation of the $G_{q/11}$ protein (PARNOT et al., 2000). To date, the study of the mutant N111G-AT$_1$ receptor has helped broaden our knowledge of the structure, $G_{q/11}$ coupling and desensitization mechanisms of the AT$_1$ receptor. In the past, the use of the constitutively active mutant L266S/K267R/H269K/L272A-$\beta_2$-adrenergic receptor made it possible to better define the active state of that GPCR through a "Substituted-Cysteine Accessibility Method" approach (JAVITCH et al., 1997). Recently, the same strategy was successfully applied to the N111G-AT\textsubscript{1} receptor to reveal movements of the 2\textsuperscript{nd}, 3\textsuperscript{rd} and 7\textsuperscript{th} TMDs that participate in the activation mechanism of the AT\textsubscript{1} receptor (BOUCARD \textit{et al.}, 2003; MIURA \textit{et al.}, 2003b; MARTIN \textit{et al.}, 2004b). Different research groups have reported that peptide ligands acting as partial agonists of the wild-type AT\textsubscript{1} receptor fully activate the N111G-AT\textsubscript{1} receptor with increased efficacy (NODA \textit{et al.}, 1996; GROBLEWSKI \textit{et al.}, 1997; LE \textit{et al.}, 2002). These studies suggest that the molecular constraints stabilizing the inactive state of the AT\textsubscript{1} receptor are weaker in the N111G-AT\textsubscript{1} receptor, thereby increasing the efficacy with which partial agonists activate this mutant GPCR. As with several other constitutively active GPCRs (e.g. the T42A-A\textsubscript{2B} adenosine receptor, the T373K-\(\alpha_{2A}\)-adrenergic receptor), the affinity of AT\textsubscript{1} receptor inverse agonists (e.g. candesartan, EXP3174, irbesartan, losartan) is reduced at the N111G-AT\textsubscript{1} receptor (NODA \textit{et al.}, 1996; WADE \textit{et al.}, 2001; LE \textit{et al.}, 2003; BEUKERS \textit{et al.}, 2004). In the absence of Ang II, the N111G-AT\textsubscript{1} receptor is also rapidly internalized and slowly recycled in HEK-293 cells, thus recapitulating the desensitization process of the AT\textsubscript{1} receptor in the presence of Ang II (MISEREY-LENKEI \textit{et al.}, 2002). However, the absence of N111G-AT\textsubscript{1} receptor phosphorylation following its activation by Ang II suggests that this mutant receptor does not fully reproduce the behavior of the Ang II-activated AT\textsubscript{1} receptor (THOMAS \textit{et al.}, 2000).
3. PROBLEM STATEMENT AND AIM OF THE STUDY

As described above, the phenotype of a cell can be altered by increasing the constitutive activity of a GPCR through a single point mutation. How, then, should the constitutive activation of a GPCR be defined, when most of the molecular mechanisms underlying the activation of these receptors remain to be elucidated (BARTFAI et al., 2004)? How does a constitutively active mutant receptor manage to accelerate the rate of activation of heterotrimeric G proteins? Does it form a more stable complex with the heterotrimeric G protein, or does it instead adopt a new conformation that facilitates spontaneous adoption of the active state of the GPCR? We sought to answer these questions using the mutant N111G-AT$_1$ receptor as a model of a GPCR constitutively activating the G$_{q/11}$ protein (Article 1). Our objective was to determine whether the increased constitutive activity of the N111G-AT$_1$ receptor stems from an intrinsic conformational change or rather from a more stable interaction with the G$_{q/11}$ protein. Following this first study, we wished to determine how cells respond to the continuous activation of the signaling pathways downstream of the constitutively active mutant N111G-AT$_1$ receptor. Knowing that this mutant GPCR spontaneously activates the G$_{q/11}$ protein, we set out to evaluate the impact of N111G-AT$_1$ receptor expression on calcium mobilization mechanisms (Article 2). In the course of that study, we noticed that the morphology of HEK-293 cells expressing the N111G-AT$_1$ receptor changes spontaneously once these cells form a compact monolayer. We therefore sought to identify the mechanism by which this phenotypic change is triggered by the N111G-AT$_1$ receptor (Article 3).

RESULTS

Article status: published.

Reference: Mannix Auger-Messier, Martin Clement, Pascal M. Lanctot, Patrice C. Leclerc, Richard Leduc, Emanuel Escher, and Gaetan Guillemette (2003) The Constitutively Active N111G-AT$_1$ Receptor for Angiotensin II Maintains a High Affinity Conformation Despite Being Uncoupled from Its Cognate G Protein G$_{q/11\alpha}$. Endocrinology 144(12): 5277-5284.

Contribution: I participated actively in the design of this study, planning and providing 75% of the results presented in the article, and I wrote the first draft of the manuscript.

The Constitutively Active N111G-AT$_1$ Receptor for Angiotensin II Maintains a High Affinity Conformation Despite Being Uncoupled from Its Cognate G Protein G$_{q/11\alpha}$

Mannix Auger-Messier, Martin Clement, Pascal M. Lanctot, Patrice C. Leclerc, Richard Leduc, Emanuel Escher, and Gaetan Guillemette

Department of Pharmacology, Faculty of Medicine, Université de Sherbrooke, Sherbrooke, Quebec, Canada, J1H 5N4

Abbreviated title: High affinity conformation of the N111G-AT$_1$ receptor

List of index terms - AT$_1$ receptor, G protein coupling, coimmunoprecipitation, constitutive activity, Ca$^{2+}$ mobilization, site-directed mutagenesis.

This work is part of the Ph.D. thesis of M.A.M. and was supported by grants from the Canadian Institutes of Health Research. R.L. is a Scholar of the Fonds de la Recherche en Santé du Québec (FRSQ). E.E. is a recipient of a J.C. Edwards Chair in cardiovascular research. P.C.L. is a recipient of studentships from the Natural Sciences and Engineering Research Council of Canada (NSERC). M.A.M. and P.M.L. are recipients of studentships from FRSQ.
Address all correspondence and requests for reprints to: Gaetan Guillemette, Ph.D., Department of Pharmacology, Faculty of Medicine, Université de Sherbrooke, 3001, 12th Avenue North, Sherbrooke, Quebec, Canada, J1H 5N4, Tel.: (819) 564-5347, Fax: (819) 564-5400, E-mail: email@example.com

Abbreviations: Ang II, angiotensin II; AT$_1$ receptor, angiotensin II type 1 receptor; B$_{MAX}$, maximal binding capacity; Bpa, $p$-benzoyl-L-phenylalanine; CNBr, cyanogen bromide; ECL, enhanced chemiluminescence; FBS, fetal bovine serum; GPCR, G protein-coupled receptor; InsP$_2$, inositol bisphosphate; InsP$_3$, inositol trisphosphate; IP, inositol phosphate; K$_D$, dissociation constant; PVDF, polyvinylidene fluoride; RIPA, radioimmunoprecipitation assay; SDS, sodium dodecyl sulfate; SERCA, sarcoplasmic and endoplasmic reticulum calcium ATPase; STI, soybean trypsin inhibitor.

Asn111, localized in the third transmembrane domain of the AT$_1$ receptor for angiotensin II, plays a critical role in stabilizing the inactive conformation of the receptor. We evaluated the functional and G protein coupling properties of mutant AT$_1$ receptors in which Asn111 was substituted with smaller (Ala or Gly) or larger residues (Gln or Trp). All four mutants were expressed at high levels in COS-7 cells and, except for N111W-AT$_1$, recognized $^{125}$I-Ang II with high affinities comparable to that of the wild-type AT$_1$ receptor. In phospholipase C assays, the four mutants encompassed the entire spectrum of functional states, ranging from constitutive activity (without agonist) for N111A-AT$_1$ and N111G-AT$_1$ to a significant loss of activity (upon maximal stimulation) for N111Q-AT$_1$ and a major loss of activity for N111W-AT$_1$. In Ca$^{2+}$ mobilization studies, N111W-AT$_1$ produced a weak Ca$^{2+}$ transient and, unexpectedly, N111G-AT$_1$ also produced a Ca$^{2+}$ transient that was much weaker than that of the wild-type AT$_1$. The agonist-binding affinity of N111W-AT$_1$ was not modified in the presence of GTP$\gamma$S, suggesting that this receptor is not basally coupled to a G protein. GTP$\gamma$S did not modify the high agonist-binding affinity of N111G-AT$_1$ but abolished the coimmunoprecipitation of G$_{q/11\alpha}$ with this constitutively active mutant receptor. These results are a direct demonstration that the N111G-AT$_1$ receptor maintains a high affinity conformation despite being uncoupled from its cognate G protein G$_{q/11}$.

Introduction

The octapeptide hormone angiotensin II (Ang II) is the active component of the renin-angiotensin system and exerts a wide variety of physiological effects, including vascular contraction, aldosterone secretion, sodium and water retention, neuronal activation, and cardiovascular cell growth and proliferation (for reviews see 1, 2). Virtually all the known physiological effects of Ang II are produced through activation of the AT$_1$ receptor, which belongs to the G protein-coupled receptor (GPCR) superfamily. The AT$_1$ receptor interacts with the G protein $G_{q/11}$, which activates a phospholipase C, which in turn generates inositol 1,4,5-trisphosphate (InsP$_3$) and diacylglycerol from the cleavage of phosphatidylinositol 4,5-bisphosphate (3, 4). InsP$_3$ causes the release of Ca$^{2+}$ from an intracellular store and diacylglycerol recruits and activates protein kinase C at the plasma membrane. As with all the hormones that trigger this signaling pathway, Ang II is recognized as a Ca$^{2+}$-mobilizing hormone.
In the last decade, great emphasis has been placed on elucidating the various structural determinants involved in AT$_1$ receptor activation at the molecular level. Like other GPCRs, the AT$_1$ receptor undergoes spontaneous isomerization between its inactive state (favored in the absence of the agonist) and its active state (induced or stabilized by the agonist). With a photoaffinity labeling approach, we directly identified ligand-contact points within the second extracellular loop and the seventh transmembrane domain of the AT$_1$ receptor (5-7). These contact points delimit the ligand-binding pocket of the receptor and may be important for its activation. Numerous mutagenesis studies have also provided indirect evidence for the involvement of transmembrane segments and/or specific amino acid side chains in agonist recognition and receptor activation (reviewed in 1 and 8). Based on these studies, Joseph et al. (9) proposed a preliminary model postulating that an interaction between Asn111 in the third transmembrane domain and Tyr292 in the seventh transmembrane domain maintains the AT\textsubscript{1} receptor in the inactive conformation. The agonist Ang II would disrupt this interaction, allowing Tyr292 to interact with Asp74 in the second transmembrane domain and promote an active conformation. These authors later validated their model in part by showing that substituting Asn111 with Ala produces a constitutively active mutant receptor that signals in an agonist-independent fashion (10). Almost simultaneously, two other studies reported that the substitution of Asn111 with Ala produces a constitutively active AT\textsubscript{1} receptor (11, 12). Feng et al. (13) further showed that a reduction of the side chain size of residue 111 induces an intermediate active conformation (N111G-AT\textsubscript{1} being the most active mutant). Constitutively active mutant AT\textsubscript{1} receptors obtained by replacing Asn111 have high affinities for agonist ligands and also exhibit increased efficacies in response to partial agonist and even some antagonist ligands (10-16). It is unclear whether the constitutive activity is due to a conformational state conferring a more efficient coupling to the G protein or to a conformational state that resembles the agonist-occupied receptor. In order to clarify this question, we produced constitutively active mutant AT\textsubscript{1} receptors by substituting Asn111 with smaller amino acid residues (Ala or Gly), and we also obtained less activatable mutant receptors by substituting Asn111 with larger amino acid residues (Gln or Trp). The pharmacological and functional properties of these receptors were analyzed after transient expression in COS-7 cells. Their coupling properties were indirectly assessed by binding studies in the presence of uncoupling agents and directly assessed in receptor/G protein coimmunoprecipitation studies. We showed that the less activatable mutant receptors couple poorly, whereas the constitutively active mutant receptors couple efficiently and reversibly to $G_{q/11\alpha}$. More importantly, we showed that the constitutively active N111G-AT$_1$ receptor maintains a high affinity conformation despite its uncoupling from $G_{q/11}$.

Materials and Methods

**Materials.** The cDNA encoding the human AT$_1$ receptor with an N-terminal FLAG epitope was constructed in our laboratory and subcloned into the mammalian expression vector pcDNA3 (Invitrogen, San Diego, CA).
The Sculptor *in vitro* mutagenesis kit, restriction endonucleases, polymerases, *myo-*[$^3$H]-inositol (80 Ci/mmol), $^{125}$Iodine (2000 Ci/mmol) and ECL plus Western blotting detection reagents were from Amersham Pharmacia Biotech (Piscataway, NJ). Dulbecco's Modified Eagle Medium (DMEM), fetal bovine serum (FBS), penicillin-streptomycin-glutamine, lipofectamine and oligonucleotide primers were from Gibco Life Technologies (Gaithersburg, MD). COS-7 cells were from the American Type Culture Collection (Manassas, VA). Ang II, [Sar$^1$,Ile$^8$]Ang II, bovine serum albumin (BSA), bacitracin, soybean trypsin inhibitor (STI), CNBr, monoclonal anti-FLAG M1 antibody and FLAG peptide were from Sigma (Oakville, ON). The AG 1-X8 resin was from Bio-Rad (Mississauga, ON). The protease inhibitors aprotinin, leupeptin and pefabloc SC were from Roche (Mannheim, Germany). Goat polyclonal anti-rabbit-IgG antibody conjugated to horseradish peroxidase, rabbit polyclonal anti-G$_{q/11\alpha}$ antibody (C-19) and protein A/G plus-agarose beads were from Santa Cruz Biotechnology (Santa Cruz, CA). Immobilon-P polyvinylidene fluoride (PVDF) transfer membranes were from Millipore (Bedford, MA). [Sar$^1$,Bpa$^8$]Ang II was synthesized in our laboratory by the solid-phase method and purified by high-performance liquid chromatography (HPLC) as previously described (17). $^{125}$I-Ang II, $^{125}$I-[Sar$^1$,Ile$^8$]Ang II and $^{125}$I-[Sar$^1$,Bpa$^8$]Ang II (1000 Ci/mmol) were prepared with IODO-GEN (Pierce, Rockford, IL) according to the method of Fraker and Speck (18) in an acetic acid buffer (pH 5.4) and purified by HPLC on a C-18 column (Waters, Mississauga, ON) as previously reported (19). The specific radioactivities of the radiolabeled peptides were determined by self-displacement and saturation binding experiments as described by Boulay et al. (20). **Construction of the mutant receptors.** The cDNA encoding the human AT\textsubscript{1} receptor was inserted into the HindIII and XbaI sites of M13mp19. Site-directed mutagenesis was done using the Sculptor \textit{in vitro} mutagenesis kit. Four oligonucleotides were constructed to introduce different mutations at Asn111. The oligonucleotide primers were the following (altered nucleotides are underlined): Asn111→Gly (N111G-AT\textsubscript{1}), 5'-GCGTACAGGCCGAAACTGACG-3'; Asn111→Ala (N111A-AT\textsubscript{1}), 5'-CTAGCGTACAGGGCGAAACTGACGCT-3'; Asn111→Gln (N111Q-AT\textsubscript{1}), 5'-GCTAGCGTACAGCTGGAAACTGACGC-3'; Asn111→Trp (N111W-AT\textsubscript{1}), 5'-GCTAGCGTACAGCCAGAAACTGACGCT-3'. After confirmation of the site-directed mutations and integrity of the cDNAs by DNA sequencing, the N111G-AT\textsubscript{1}, N111A-AT\textsubscript{1}, N111Q-AT\textsubscript{1} and N111W-AT\textsubscript{1} cDNAs were excised from the M13mp19 replicative form (RF) by digestion with HindIII and XbaI and subcloned into the multiple cloning site of pcDNA3 digested with the same restriction enzymes. The FLAG epitope was inserted in frame with the coding sequence of the different constructs by a subcloning strategy using the restriction endonucleases AccI, HindIII and XbaI. **Cell culture and transfection.** COS-7 cells were grown in DMEM supplemented with 10% [v/v] heat-inactivated FBS, 2 mM L-glutamine, 100 IU/mL penicillin and 100 μg/mL streptomycin (complete DMEM medium). A total of 1×10\textsuperscript{6} cells were seeded into 75 cm\textsuperscript{2} culture dishes.
After 24 h of growth, the cells were washed once with serum-free DMEM and transfected with 4 μg of plasmid DNA and 25 μL of lipofectamine in 8 mL of serum-free DMEM. The cells were incubated for 5 h at 37°C and the medium was replaced with complete DMEM. The transfected cells were allowed to grow for 12 h, then transferred to six-well culture plates for IP production experiments or seeded directly on coverslips for Ca$^{2+}$ mobilization studies. Cells were used 36–60 h after the initial transfection. For photoaffinity labeling, binding and coimmunoprecipitation assays, the cells were grown for 48 h in 75 cm$^2$ culture dishes and stored at –80°C. **Binding experiments.** Broken cells (frozen and thawed) were gently scraped into 10 mL of washing buffer (25 mM Tris-HCl, pH 7.4, 100 mM NaCl and 5 mM MgCl$_2$) and centrifuged at 2,500 x g for 15 min at 4°C. The pellet was dispersed in binding buffer (25 mM Tris-HCl, pH 7.4, 100 mM NaCl, 5 mM MgCl$_2$, 0.1% [w/v] BSA, 0.01% [w/v] bacitracin and 0.01% [w/v] STI). Saturation binding studies were performed by incubating broken cell aliquots (20–50 μg of protein) for 1 h at room temperature in a final volume of 0.5 mL of binding buffer containing varying concentrations of radioactive and nonradioactive ligands. Non-specific binding was measured in the presence of 1 μM unlabeled Ang II. Bound radioactivity was separated from free ligand by vacuum filtration through GF/C filters presoaked for 2 h in binding buffer. Receptor-bound radioactivity was evaluated by γ counting. Binding affinities ($K_D$) and receptor expression levels ($B_{MAX}$) were calculated by Scatchard analysis of the saturation curves. **Photoaffinity labeling.** Photoaffinity labeling experiments were essentially done as previously described (21). Briefly, broken cell aliquots were incubated for 1 h at room temperature in the presence of 5 nM $^{125}$I-[Sar$^1$,Bpa$^8$]Ang II in 0.5 mL of binding buffer. After washing by centrifugation at 500 x g for 15 min, the broken cells were resuspended in 0.5 mL of ice-cold binding buffer (without BSA) and irradiated for 1 h at 0°C under filtered UV light (365 nm). Broken cells were then solubilized in RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 0.5% [w/v] deoxycholate, 0.1% [w/v] SDS and 1% [v/v] Nonidet P-40). After centrifugation for 15 min at 15,000 x g, the supernatant was mixed with an equal volume of 2x Laemmli buffer (60 mM Tris-HCl, pH 6.8, 10% [v/v] glycerol, 2% [w/v] SDS, 100 mM dithiothreitol and 0.05% [w/v] bromophenol blue), incubated for 1 h at 37°C and analyzed by SDS-PAGE on a 7.5% [w/v] polyacrylamide Tris-glycine gel. **Chemical digestion.** Partially purified photolabeled receptors (5,000–10,000 cpm) were incubated in a mixture containing 200 µL of 70% [v/v] trifluoroacetic acid and 200 µL of CNBr to obtain a final concentration of 100 mg/mL. Samples were incubated for 18 h at room temperature in the dark. Reactions were terminated by the addition of 1 mL of water. After lyophilization and resuspension in 1x Laemmli buffer, samples were analyzed by SDS-PAGE on 16.5% [w/v] polyacrylamide Tris-tricine gels and revealed by autoradiography on BioMax MS film (Eastman Kodak, Rochester, NY). **Inositol phosphate production.** COS-7 cells were seeded into six-well plates 12 h post-transfection, allowed to grow for 24 h and labeled for 16–24 h in inositol-free DMEM containing 10 µCi/mL of *myo-*[^3H]-inositol.
After preincubation for 30 min at 37°C in Medium 199 containing 25 mM Hepes, pH 7.4, 10 mM LiCl and 0.1% [w/v] BSA, cells were stimulated with 100 nM Ang II for 20 min. The incubation was stopped by adding perchloric acid (PCA, 5% [v/v]). Water-soluble inositol phosphates were then extracted with an equal volume of a 1:1 mixture of 1,1,2-trichlorotrifluoroethane and tri-*n*-octylamine. The samples were vigorously mixed and centrifuged at 15,000 x g for 1 min. The upper phase was applied to an AG 1-X8 resin column and the inositol phosphates were sequentially eluted by the addition of an ammonium formate/formic acid solution of increasing ionic strength. **Measurement of intracellular Ca$^{2+}$.** Transfected COS-7 cells were grown on coverslips, washed twice with HBSS (20 mM Hepes, pH 7.4, 120 mM NaCl, 5.3 mM KCl, 0.8 mM MgSO$_4$, 1.8 mM CaCl$_2$ and 11.1 mM glucose) and loaded with Fura-2/AM (1 µM in HBSS, Molecular Probes, Eugene, OR) for 20 min at room temperature in the dark. After washing and incubating in fresh HBSS for 20 min at room temperature, the coverslips were inserted into a circular open-bottom chamber and placed on the stage of a Zeiss Axiovert microscope fitted with an Attofluor Digital Imaging and Photometry System (Attofluor Inc., Rockville, MD). The system allows data acquisition from up to 99 user-defined, variably sized regions of interest per field of view. From 40 to 99 isolated Fura-2-loaded cells were selected and the [Ca$^{2+}$]$_i$ was measured by fluorescence videomicroscopy at room temperature using alternating excitation wavelengths of 334 and 380 nm and measuring emitted fluorescence at 520 nm. The data are expressed as a ratio of Fura-2 fluorescence (334/380). Data from 20 to 40 individual cells that responded to Ang II were collected from each coverslip. **Coimmunoprecipitation studies.** Broken cells (800 µg/500 µL of binding buffer) were added to an equal volume of 2x solubilizing buffer (binding buffer with 0.4% [w/v] CHAPS, 0.4 mM Pefabloc SC, 20 µM leupeptin, 10 µg/mL aprotinin and 1 mM CaCl$_2$) and incubated for 30 min at 4°C with gentle agitation. After centrifugation at 20,000 x g for 30 min at 4°C, the supernatant was transferred to 15 µL of wet protein A/G plus-agarose beads that had been preincubated with 4 µL of anti-FLAG M1 antibody (10 µg/mL). Immune complex formation was allowed to proceed for 3 h at 4°C with rotation. The agarose beads were sedimented by centrifugation at 5,000 x g for 3 min and washed three times with ice-cold 1x solubilizing buffer. Beads were resuspended in 35 µL of loading buffer (60 mM Tris-HCl, pH 6.8, 10% [v/v] glycerol, 3% [w/v] SDS, 5% [v/v] β-mercaptoethanol and 0.05% [w/v] bromophenol blue) and incubated for 1 h at 50°C. After centrifugation at 5,000 x g for 3 min, proteins from the supernatant were separated by SDS-PAGE on a 10% [w/v] polyacrylamide Tris-glycine gel and transferred to a PVDF membrane. The membrane was then blocked in Tris-buffered saline (20 mM Tris-HCl, pH 7.6, 200 mM NaCl) supplemented with 5% [w/v] nonfat dry milk and 0.1% [v/v] Tween 20. The $G_{q/11\alpha}$ protein was probed with rabbit polyclonal anti-$G_{q/11\alpha}$ antibody (C-19) and goat polyclonal anti-rabbit-IgG antibody conjugated to horseradish peroxidase. The immunostained bands were revealed by enhanced chemiluminescence according to the manufacturer's instructions on a BioMax ML film. Autoradiograms of membranes were digitized on a Hewlett Packard Scan Jet 5100c.
Integrated peak areas were determined using the Quantity One gel analysis software (version 4.2; Bio-Rad, Mississauga, ON).

**Data analysis.** Results are presented as the mean ± SD. Binding data ($B_{MAX}$ and $K_D$ values) were analyzed with the Kell program (Biosoft, Ferguson, MO), which uses a weighted nonlinear curve-fitting routine.

Results

**Binding properties of mutant AT$_1$ receptors.** cDNAs encoding wild-type and mutant human AT$_1$ receptors were subcloned into the pcDNA3 mammalian expression vector and transfected into COS-7 cells. The pharmacological properties of the different receptors were assessed in saturation binding studies with the radioactive agonist $^{125}$I-Ang II. As summarized in Table 1, the wild-type AT$_1$ and the N111Q-AT$_1$ receptors displayed two affinity states, suggesting that they could adopt a high affinity (G protein-coupled) conformation and a low affinity (G protein-uncoupled) conformation. The N111W-AT$_1$ receptor displayed a single low affinity state ($K_D$ of 2.7 nM), whereas the N111G-AT$_1$ receptor displayed a single high affinity state ($K_D$ of 0.7 nM). Interestingly, in competitive binding assays, the N111G-AT$_1$ receptor exhibited a lower affinity than the WT-AT$_1$ receptor for the nonpeptide antagonist losartan (data not shown). The level of expression of the different receptors varied between 1.7 and 2.7 pmol/mg of protein (Table 1).

| Receptor | $K_{D1}$ (nM) | $K_{D2}$ (nM) | $B_{MAX1}$ (pmol/mg) | $B_{MAX2}$ (pmol/mg) |
|--------------|-----------|-----------|-----------|-----------|
| WT-AT$_1$ | 0.3 ± 0.1 | 3.1 ± 0.1 | 0.5 ± 0.1 | 1.3 ± 0.1 |
| N111G-AT$_1$ | 0.7 ± 0.1 | — | 2.7 ± 0.2 | — |
| N111A-AT$_1$ | 1.0 ± 0.2 | — | 2.4 ± 0.3 | — |
| N111Q-AT$_1$ | 0.5 ± 0.1 | 3.8 ± 1.4 | 0.6 ± 0.2 | 1.2 ± 0.3 |
| N111W-AT$_1$ | 2.7 ± 0.6 | — | 1.8 ± 0.3 | — |

**TABLE 1. Binding properties of wild-type and mutant AT$_1$ receptors.** Affinities ($K_D$) for $^{125}$I-Ang II and expression levels ($B_{MAX}$) of the wild-type and mutant AT$_1$ receptors transiently expressed in COS-7 cells were obtained by Scatchard analysis of the saturation curves. In typical experiments with the WT-AT$_1$ receptor, total binding of $^{125}$I-Ang II (1 nM) was 2,765 cpm and non-specific binding was 245 cpm. Results are expressed as means ± standard deviations of three independent experiments.

**Covalent labeling of mutant AT$_1$ receptors.** In photoaffinity labeling experiments, the photosensitive analogue $^{125}$I-[Sar$^1$,Bpa$^8$]Ang II specifically labeled the wild-type and mutant AT$_1$ receptors, which migrated as glycoproteins with a typical broad band pattern between 60–130 kDa (Figure 1A). To further characterize the ligand binding site, the photolabeled receptors were treated with CNBr, which cleaves on the C-terminal side of methionine residues (Figure 1B). This chemical digestion of the wild-type receptor (Figure 1B) produced two typical fragments migrating as sharp bands with apparent molecular masses of 6.9 kDa and 9.4 kDa, as previously reported (5, 6). CNBr digestion of the mutant AT$_1$ receptors also produced the same typical fragments (Figure 1B),
indicating that there were no major differences between the conformational states adopted by the different receptors in the presence of the photosensitive ligand.

FIG. 1. Photoaffinity labeling of wild-type and mutant AT$_1$ receptors. COS-7 cells expressing the wild-type AT$_1$ receptor (WT) and the various mutant receptors (N111G, N111A, N111Q, N111W) were incubated in the presence of 5 nM $^{125}$I-[Sar$^1$,Bpa$^8$]Ang II for 1 h at room temperature. Cells were then irradiated under 365 nm filtered UV light for 1 h at 0°C. **Panel A**: After solubilization, samples were resolved by SDS-PAGE on a 7.5% [w/v] polyacrylamide Tris-glycine gel followed by autoradiography as described in the experimental procedures. **Panel B**: CNBr (100 mg/mL) hydrolysis of partially purified $^{125}$I-[Sar$^1$,Bpa$^8$]Ang II-labeled receptors proceeded for 18 h at room temperature in the dark before resolution by SDS-PAGE. Protein standards with the indicated molecular masses were run in parallel. These results are representative of three independent experiments.

**Functional properties of mutant AT$_1$ receptors.** The functional properties of wild-type and mutant AT$_1$ receptors were evaluated by assessing the basal and Ang II-induced production of inositol phosphates (IPs) in transiently transfected COS-7 cells. Figure 2 shows the relative amounts of IPs accumulated under basal conditions (white columns) and after maximal stimulation with Ang II (black columns). The basal levels of IPs in cells expressing the mutant N111Q-AT$_1$ and N111W-AT$_1$ receptors were relatively low and not significantly different from the basal level in cells expressing the wild-type AT$_1$ receptor. As expected, the basal levels of IPs in cells expressing the mutant N111A-AT$_1$ and N111G-AT$_1$ receptors were significantly higher than those in cells expressing the wild-type AT$_1$ receptor. These results illustrate the constitutive activity of the mutant N111A-AT$_1$ and N111G-AT$_1$ receptors. After maximal stimulation with Ang II, the constitutively active receptors increased the IPs to levels not significantly different from those of the wild-type AT$_1$ receptor. Interestingly, after maximal stimulation, the N111Q-AT$_1$ receptor caused only a weak production of IPs, whereas the N111W-AT$_1$ receptor did not significantly elevate the level of IPs. The mutant N111Q-AT$_1$ and N111W-AT$_1$ receptors are thus clearly less activatable than the wild-type and the constitutively active mutant AT$_1$ receptors. These results provide strong support for the previous suggestion by Feng et al. (13) that an increase in the side chain size of residue 111 leads to a form of the receptor with reduced basal and maximal activities. The functional properties of the different receptors were also evaluated with a Fura-2 fluorescence approach that measures the free calcium concentration within individual living cells.

FIG. 2. Functional properties of the wild-type and mutant AT$_1$ receptors: IP production. Transfected COS-7 cells were loaded for 16–24 h with 10 µCi/mL of myo-[³H]-inositol in inositol-free DMEM. Cells were then incubated for 20 min in the presence (black columns) or absence (white columns) of 100 nM Ang II and IP levels (sum of InsP$_2$ and InsP$_3$) were determined as described in the experimental procedures. These results represent the means ± standard deviations of at least three experiments (done in triplicate) in which IP production was normalized to the receptor expression level (determined by saturation binding assays). *, P < 0.05 compared to the basal value for WT-AT$_1$; †, P < 0.05 compared to Ang II-stimulated WT-AT$_1$.
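The Ca$^{2+}$ responses described next are reported as Fura-2 fluorescence ratios (334/380). As a minimal sketch of how such a background-corrected ratio trace can be converted into an approximate [Ca$^{2+}$]$_i$, the snippet below applies the standard Grynkiewicz calibration equation; this conversion is not performed in the article itself, and the function name and calibration constants (R$_{min}$, R$_{max}$, Sf$_2$/Sb$_2$ and the Fura-2 K$_d$) are illustrative placeholders, not values determined in this study.

```python
import numpy as np

def fura2_ratio_to_ca(f334, f380, f334_bg, f380_bg,
                      r_min=0.3, r_max=6.0, sf2_sb2=8.0, kd_nM=224.0):
    """Convert background-corrected Fura-2 334/380 ratios to [Ca2+]i (nM)
    with the Grynkiewicz equation:
        [Ca2+] = Kd * (R - Rmin) / (Rmax - R) * (Sf2/Sb2)
    All calibration constants here are placeholders; real values come from
    Rmin/Rmax determinations in Ca2+-free and Ca2+-saturating media."""
    r = (f334 - f334_bg) / (f380 - f380_bg)        # ratiometric signal
    r = np.clip(r, r_min + 1e-6, r_max - 1e-6)     # stay inside calibration range
    return kd_nM * (r - r_min) / (r_max - r) * sf2_sb2

# Example with a synthetic 3-min trace (hypothetical raw fluorescence counts).
t = np.linspace(0, 180, 361)                       # 0.5 s sampling
f334 = 100 + 200 * np.exp(-t / 20) * (t > 10)      # transient rise at t = 10 s
f380 = 100 - 40 * np.exp(-t / 20) * (t > 10)       # mirror-image 380 nm signal
ca = fura2_ratio_to_ca(f334, f380, 10.0, 10.0)
print(f"peak [Ca2+]i ≈ {ca.max():.0f} nM")
```

In practice, R$_{min}$ and R$_{max}$ would be measured on the same imaging setup in Ca$^{2+}$-free (EGTA) and Ca$^{2+}$-saturating (ionomycin) media before any such conversion is trusted.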
Addition of Ang II (100 nM) to COS-7 cells transfected with the wild-type AT\textsubscript{1} receptor resulted in a rapid, robust increase in intracellular Ca\textsuperscript{2+}, which reached a maximum level within a few seconds and then declined toward a lower level that remained slightly above the resting level for at least 3 min (Figure 3A). Under similar conditions, the N111W-AT\textsubscript{1} receptor caused a much weaker Ca\textsuperscript{2+} transient that also declined to a level slightly above the resting level (Figure 3A). Although we could not measure any significant increase of IPs upon stimulation of this receptor, these results demonstrated that it can nevertheless couple to G$_{q/11\alpha}$ with a low efficacy and produce enough InsP\textsubscript{3} to cause a weak Ca\textsuperscript{2+} transient. Surprisingly, maximal stimulation of the N111G-AT\textsubscript{1} receptor consistently produced a Ca\textsuperscript{2+} transient that had a lower amplitude than that produced by the wild-type receptor and that declined to the resting level within a few minutes (Figure 3A). Since this constitutively active receptor was shown to be as efficient as the wild-type receptor in producing IPs, these results suggest that cells expressing the N111G-AT\textsubscript{1} receptor have developed a refractoriness in their Ca\textsuperscript{2+} mobilization machinery at a step downstream of phospholipase C. A possible cause for this reduced response could be the level of Ca\textsuperscript{2+} within the intracellular stores, which might be maintained lower in cells containing higher basal InsP\textsubscript{3} concentrations. This possibility is unlikely, however, since in a nominally Ca\textsuperscript{2+}-free extracellular medium, thapsigargin (a SERCA inhibitor) released the same amount of Ca\textsuperscript{2+} from cells expressing the N111G-AT\textsubscript{1} receptor as from cells expressing the WT-AT\textsubscript{1} receptor (Figure 3B).

**G protein coupling of mutant AT\textsubscript{1} receptors.** We used a classical binding approach to assess the G protein coupling efficacies of the mutant AT\textsubscript{1} receptors in the presence of an uncoupling agent.

FIG. 3. Functional properties of the wild-type and mutant AT$_1$ receptors: Ca$^{2+}$ mobilization. **Panel A:** Transfected COS-7 cells were loaded with Fura-2/AM and their [Ca$^{2+}$]$_i$ was monitored upon stimulation with 100 nM Ang II (filled arrow). Typical traces represent the average Ang II-induced (100 nM) Ca$^{2+}$ transients in 150–350 cells transfected with the wild-type AT$_1$ receptor (filled circles), the mutant N111G-AT$_1$ receptor (empty circles), the mutant N111W-AT$_1$ receptor (filled squares) and the empty pcDNA3 plasmid (empty squares). **Panel B:** HEK-293 cells stably expressing the WT-AT$_1$ receptor (WT) or the N111G-AT$_1$ receptor (N111G) were loaded with Fura-2/AM and the content of their Ca$^{2+}$ stores was evaluated by addition of thapsigargin (1 µM; empty arrow) in a nominally Ca$^{2+}$-free extracellular medium. These typical traces summarize the results of four independent experiments.

FIG. 4. Binding properties in the presence of an uncoupling agent.
Broken cells (20–50 μg of protein) expressing the wild-type AT$_1$ receptor (Panel A), the N111W-AT$_1$ receptor (Panel B) or the N111G-AT$_1$ receptor (Panel C) were incubated for 1 h at room temperature in binding buffer containing 0.1 nM $^{125}$I-Ang II and increasing concentrations of Ang II, in the absence (empty symbols) or presence (filled symbols) of 10 μM GTP$\gamma$S. In Panel D, broken cells expressing the N111G-AT$_1$ receptor were preincubated for 1 h in the presence of 0.1 nM $^{125}$I-Ang II and then incubated in the presence (empty symbols) or absence (filled symbols) of the uncoupling agent for different periods of time. Incubations were terminated by vacuum filtration as described in the experimental procedures. Each point represents the mean ± experimental variation of duplicate data. Similar results were obtained with three different cell preparations.

Figure 4A shows a typical dose-displacement experiment in which the binding of $^{125}$I-Ang II to the wild-type AT$_1$ receptor was progressively decreased in the presence of increasing concentrations of nonradioactive Ang II. Under control conditions (white diamonds), the tracer bound with a high affinity (15,700 cpm specifically bound) and the concentration of nonradioactive Ang II required to inhibit 50% of tracer binding (IC$_{50}$) was 0.8 nM. In the presence of the uncoupling agent GTP$\gamma$S (black squares), the tracer bound with a lower affinity (3,600 cpm specifically bound) and the IC$_{50}$ of Ang II was increased by about 3-fold (2.1 nM). This typical experiment clearly illustrates the loss of affinity of the AT$_1$ receptor for its agonist ligand upon treatment with an uncoupling agent. The same experiment was repeated with the less activatable mutant N111W-AT$_1$ receptor (Figure 4B). Interestingly, under control conditions (white diamonds), the tracer bound with a low affinity (3,000 cpm specifically bound) and the IC$_{50}$ of Ang II was 5.1 nM. In the presence of the uncoupling agent (black squares), the dose-displacement curve was superimposable on the control curve. These results suggest that the mutant N111W-AT$_1$ receptor is not basally coupled to its cognate G protein. This interpretation is consistent with the relatively low affinity of this mutant for the agonist ligand, with its poor activation of phospholipase C and with its induction of a weak Ca$^{2+}$ transient. These experiments were repeated with the constitutively active N111G-AT$_1$ receptor. Figure 4C shows that under control conditions (white diamonds), the tracer bound with a high affinity (21,300 cpm specifically bound) and the IC$_{50}$ of Ang II was 1.8 nM. In the presence of the uncoupling agent (black squares), the dose-displacement curve was superimposable on the control curve. The naïve interpretation of these results would be that this receptor does not couple to a G protein because it is insensitive to the effect of the uncoupling agent. However, this interpretation is unlikely considering that the mutant N111G-AT$_1$ receptor is a strong activator of phospholipase C, most probably through activation of the G protein G$_{q/11}$. One possibility could be that the coupling between the constitutively active receptor and its G protein is so strong that a longer period of treatment with the uncoupling agent is required to dissociate the two proteins. Figure 4D shows that the binding of $^{125}$I-Ang II to the constitutively active receptor was not modified during incubation with the uncoupling agent for periods as long as 5 hours.
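As a hedged illustration of how IC$_{50}$ values such as those quoted above are typically extracted from a dose-displacement curve, the sketch below fits a one-site competition model to synthetic counts-per-minute data with scipy; the data points, initial guesses and fitted parameters are stand-ins loosely shaped like the control curve, not the values plotted in Figure 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_competition(log_conc, top, bottom, log_ic50):
    """One-site dose-displacement model: specifically bound tracer (cpm)
    as a function of log10 of the unlabeled Ang II concentration (M)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_conc - log_ic50))

# Synthetic displacement data (log10 M of unlabeled Ang II vs bound cpm),
# with an IC50 placed near 1 nM -- illustrative values only.
log_conc = np.array([-11, -10.5, -10, -9.5, -9, -8.5, -8, -7.5, -7])
bound_cpm = np.array([15600, 15200, 13900, 11200, 7800, 4300, 2100, 1100, 700])

p0 = [bound_cpm.max(), bound_cpm.min(), -9.0]      # initial parameter guesses
params, _ = curve_fit(one_site_competition, log_conc, bound_cpm, p0=p0)
top, bottom, log_ic50 = params
print(f"IC50 ≈ {10 ** log_ic50 * 1e9:.2f} nM")
```

Running the same fit on the control and GTP$\gamma$S curves separately would quantify the rightward shift in IC$_{50}$ that the text describes for the wild-type receptor.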
These results suggest a very strong, virtually undissociable coupling between the constitutively active receptor and its G protein. An alternative explanation for these results could be that the receptor always remains in a high agonist affinity conformation, whether or not it is coupled to its G protein. To discriminate between these two possibilities, we used a coimmunoprecipitation approach to evaluate the coupling state of the mutant AT$_1$ receptors in a more direct fashion. Figure 5 shows that under basal conditions, immunoprecipitation of the wild-type AT$_1$ receptor coprecipitated a small amount of the G protein G$_{q/11\alpha}$ (panel A). The coprecipitation of G$_{q/11\alpha}$ increased upon stimulation with Ang II and decreased in the presence of the uncoupling agent GTP$\gamma$S. The coimmunoprecipitation was specific and did not occur when the anti-FLAG antibody was previously blocked with a saturating amount of FLAG peptide (panel B). Interestingly, when the constitutively active N111G-AT$_1$ receptor was immunoprecipitated under basal conditions, only a small amount of G$_{q/11\alpha}$ was coprecipitated (panel A). Stimulation with Ang II strongly stabilized the complex between the constitutively active receptor and G$_{q/11\alpha}$, whereas the uncoupling agent very efficiently destabilized this complex. With this approach, the less activatable N111W-AT$_1$ receptor appeared poorly coupled to G$_{q/11\alpha}$, both under basal conditions and in the presence of Ang II (panel A). Panel C shows the densitometric analysis of the results from panel A. These results demonstrate that the constitutively active AT$_1$ receptor reversibly couples to the G protein G$_{q/11\alpha}$ and that the coupling is stabilized in the presence of the agonist Ang II and efficiently destabilized in the presence of GTP$\gamma$S.

FIG. 5. Coimmunoprecipitation of $G_{q/11\alpha}$ with the different AT$_1$ receptors. COS-7 cells expressing the FLAG-AT$_1$ receptor (WT), the FLAG-N111G-AT$_1$ receptor (N111G) or the FLAG-N111W-AT$_1$ receptor (N111W) were incubated for 1 h at room temperature in the presence or in the absence of 100 nM Ang II and/or 10 $\mu$M GTP$\gamma$S. After solubilization with 0.4% CHAPS, FLAG-tagged receptors were immunoprecipitated with the anti-FLAG M1 antibody and the presence of $G_{q/11}$ in the immune complex was revealed by Western blot with the anti-$G_{q/11\alpha}$ antibody (C-19). **Panel A**: lanes 1-4, coimmunoprecipitation of $G_{q/11\alpha}$ under different conditions. As controls, a standard $G_{q\alpha}$ from E. coli (lane 5, empty arrow) and $G_{q/11\alpha}$ from COS-7 cells (lane 6, filled arrow) were run in parallel. **Panel B**: The presence of the FLAG-WT-AT$_1$ receptor in the immune complex under the different conditions was revealed by Western blot with the anti-FLAG M1 antibody. **Panel C**: Densitometric analysis of the results shown in panel A. These results are representative of three experiments performed with different cell preparations. [Lane annotations for panels A and B — Ang II (100 nM): −, +, +, +; GTP$\gamma$S (10 $\mu$M): −, −, +, −; FLAG peptide: −, −, −, +; lanes labeled WT, N111G and N111W; panel C reports integrated peak areas in relative units.]

Discussion

Three independent studies previously suggested that Asn111 plays a conformational switch function in the activation of the AT\textsubscript{1} receptor (10-12).
Another recent study demonstrated that only residues smaller than Asn can confer constitutive activity on the AT\textsubscript{1} receptor and therefore concluded that a feature responsible for the constitutive activity of the AT\textsubscript{1} receptor is the side chain size of the residue at position 111 (13). In the study presented here, we evaluated the pharmacological and functional properties of mutant AT\textsubscript{1} receptors in which Asn111 was replaced by residues of various sizes, including the smaller Gly and the larger Trp natural amino acid residues. In transient transfection studies, all the mutant receptors were expressed at high levels similar to that of the wild-type receptor (1–2 pmol/mg protein). In photoaffinity labeling experiments, identical CNBr fragmentation patterns revealed that the mutant receptors were covalently labeled at the same location as the wild-type receptor. Except for the N111W mutant, which exhibited a 4-fold reduction in binding affinity, all the mutant receptors recognized Ang II with a high affinity similar to that of the wild-type receptor. These results are in agreement with those of previous studies reporting minor reductions in binding affinity upon substitution of Asn111 with the larger residues Phe and Lys (11, 13). As also previously reported in other studies (16, 22), we observed that the substitution of Asn111 with smaller residues (Ala, Gly) caused a significant decrease in binding affinity for the nonpeptide antagonist losartan but no major changes for peptide agonists (data not shown). These results demonstrate that substitutions of Asn111 with smaller or larger amino acid residues caused only minor changes in the binding conformation of the AT\textsubscript{1} receptor. These minor changes did not interfere with the proper folding of the receptor, which was correctly targeted to and expressed at the plasma membrane and which maintained the same interaction with the photosensitive analogue of Ang II. The entire spectrum of functional states could be obtained by replacing Asn111 of the AT\textsubscript{1} receptor with amino acid residues of increasing sizes. Substitutions with smaller residues (Gly, Ala) produced constitutively active receptors; the smallest residue at position 111 produced the strongest constitutive activity. Substitutions with larger residues (Gln, Trp) produced less activatable receptors; the largest residue at position 111 produced an apparently inactive receptor with respect to the production of IPs. However, Ca\textsuperscript{2+} mobilization, a much more sensitive functional assay, revealed that all the mutant receptors, including the less activatable N111W-AT\textsubscript{1}, could produce an intracellular Ca\textsuperscript{2+} transient in response to Ang II. These results suggest that despite its weak activity, the N111W mutant could still couple with a low efficacy to the G protein G\textsubscript{q/11} and induce the production of a sufficient amount of InsP\textsubscript{3} to generate a detectable Ca\textsuperscript{2+} response. Another explanation could be that the intracellular Ca\textsuperscript{2+} transient obtained with the N111W mutant receptor is produced by a mechanism that does not require the activation of a G protein.
It was recently shown that a mutant AT\textsubscript{1} receptor lacking G protein coupling was able to activate Src tyrosine kinase, possibly leading to transactivation of the epidermal growth factor receptor and ultimately to activation of phospholipase C$\gamma$ (23). Previous studies have shown that substituting Asn111 with smaller residues (Ala or Gly) conferred constitutive activity on the AT\textsubscript{1} receptor (10-16). Noda et al. (11) further observed that the N111I and N111F mutant receptors had lower basal activities and lower Ang II-stimulated maximal activities. These results led them to propose that Asn111 plays a crucial role in constraining the AT\textsubscript{1} receptor in a basal inactive conformation. Substitution of Asn111 with a smaller residue would provide more flexibility to the receptor and favor acquisition of the active conformation. In contrast, substitution of Asn111 with a larger residue would further constrain the receptor in an inactive conformation, thus reducing its basal and maximal (agonist-induced) activities. This interpretation is very compatible with the model proposed by Groblewski et al. (10), who suggested that Asn111 restrains the AT\textsubscript{1} receptor in an inactive conformation by forming an intramolecular bond with Tyr292. During the activation process of the AT\textsubscript{1} receptor, the agonist would disrupt this intramolecular interaction and promote conformational flexibility. In the study presented here, we substituted Asn111 with Trp, the largest natural amino acid, and observed that the resulting mutant receptor was barely activatable. Our results thus strongly support the current models of the molecular activation mechanism of the AT\textsubscript{1} receptor. The first mechanistic event occurring after activation of a GPCR is the recruitment and activation of its cognate G protein. Mutations that influence the functionality of the AT\textsubscript{1} receptor may therefore also affect its G protein coupling properties. As expected, we observed that the agonist-binding affinity of the wild-type AT\textsubscript{1} receptor was decreased in the presence of the uncoupling agent GTP$\gamma$S. This well-known phenomenon illustrates the efficient coupling and uncoupling capacities of the receptor (24, 25). Not surprisingly, the agonist-binding affinity of the N111W mutant was not modified in the presence of GTP$\gamma$S, suggesting that this mutant couples poorly to G\textsubscript{q/11}. This is consistent with the 4-fold lower agonist-binding affinity of this mutant compared with that of the wild-type receptor (Table 1). Interestingly, as previously observed by Noda et al. (11), the agonist-binding affinity of the constitutively active N111G mutant was not modified in the presence of GTP$\gamma$S. This result could mean that the constitutively active receptor does not couple to $G_{q/11}$. However, the high efficiency of this receptor in producing IPs argues against this interpretation. Two possibilities could therefore explain the lack of effect of GTP$\gamma$S on the binding affinity of the N111G mutant: either the interaction between the receptor and $G_{q/11}$ is so strong that GTP$\gamma$S cannot dissociate the two proteins, or the receptor maintains a high affinity conformation despite its dissociation from $G_{q/11}$. Our coimmunoprecipitation approach revealed that under basal conditions the AT$_1$ receptor was not strongly coupled to its cognate G protein $G_{q/11\alpha}$.
However, the agonist Ang II stabilized the complex between the two proteins whereas GTP$\gamma$S destabilized it. To our knowledge, these are the first results showing the agonist-dependent coimmunoprecipitation of the AT$_1$ receptor with $G_{q/11\alpha}$. These results are consistent with those of previous studies that demonstrated the agonist-dependent coimmunoprecipitation of the cholecystokinin receptor with $G_{q/11\alpha}$ (26), the δ opioid receptor with $G_{i\alpha}$ (27), the interleukin-8 receptor with $G_{i\alpha}$ (28) and the melatonin receptor with $G_{i\alpha}$ and $G_{q/11\alpha}$ (29). Not surprisingly, the N111W mutant did not coimmunoprecipitate a detectable amount of $G_{q/11\alpha}$ either under basal conditions or after stimulation with Ang II. Interestingly, the constitutively active N111G mutant behaved like the wild-type receptor with respect to coupling to $G_{q/11\alpha}$. Since the N111G mutant could induce a relatively significant production of IPs (about 40% of maximal production) in the absence of Ang II, indicating a functional interaction with its cognate G protein, it was expected that it could coimmunoprecipitate $G_{q/11\alpha}$. However, it appears that under basal conditions the coupling of this mutant receptor to $G_{q/11\alpha}$ is rather weak or unstable and that Ang II is required to strengthen the complex. Ang II probably stabilizes the receptor in a conformational state conducive to a strong and efficient interaction with $G_{q/11\alpha}$. This interpretation is consistent with the suggestion by Noda et al. (11) that a decrease in the size of the Asn111 side chain induces an intermediate activated receptor conformation ($R'$) that can isomerize to the fully activated conformation ($R^*$) either spontaneously or after induction by Ang II. Because the physical coupling between the N111G mutant and $G_{q/11\alpha}$ was relatively weak in the absence of agonist, it is likely that the spontaneous isomerization from the $R'$ to the $R^*$ conformation occurs only transiently and that the agonist is necessary to stabilize it. Our results further showed that the N111G-AT$_1$ receptor maintains a high agonist-binding affinity in the presence of GTP$\gamma$S despite being completely uncoupled from $G_{q/11\alpha}$. These results lend credence to the hypothesis of Kjelsberg et al. (30) that the high affinity state of constitutively active mutants of $\alpha_{1B}$-adrenergic receptors does not require an interaction with a G protein but is rather an intrinsic property of the receptors themselves. What would be the physiological consequence of the occurrence of an N111G-AT$_1$ mutant in a living organism? The N111G-AT$_1$ receptor adopts a high agonist-binding affinity state that appears to correspond to an intermediate activated receptor conformation. In the absence of Ang II, this receptor can induce 40% of maximal phospholipase C activation. However, under normal physiological conditions where cells are constantly exposed to low levels of Ang II, the high affinity of this receptor would promote the formation of a large number of functional ternary agonist-receptor-G protein complexes and therefore cause very significant activation (probably much more than 40% of the maximal level) of the intracellular mechanisms regulated by the AT\textsubscript{1} receptor.
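The quantitative intuition behind this argument follows from the law of mass action: fractional occupancy at equilibrium is [L]/([L] + K$_D$), so a receptor locked in a single high-affinity state is occupied far more often at sub-nanomolar hormone levels than a low-affinity site. The sketch below works this out with the K$_D$ values from Table 1; the circulating Ang II concentration used is an assumed picomolar placeholder, not a measurement from this study.

```python
def fractional_occupancy(ligand_nM, kd_nM):
    """Equilibrium receptor occupancy from the law of mass action:
    occupancy = [L] / ([L] + Kd)."""
    return ligand_nM / (ligand_nM + kd_nM)

ang_ii_nM = 0.05  # assumed low circulating Ang II (~50 pM); placeholder value
for name, kd in [("WT high-affinity site (Kd1 = 0.3 nM)", 0.3),
                 ("N111G single site (Kd = 0.7 nM)", 0.7),
                 ("WT low-affinity site (Kd2 = 3.1 nM)", 3.1)]:
    print(f"{name}: {100 * fractional_occupancy(ang_ii_nM, kd):.1f}% occupied")
```

Under this assumption the high-affinity sites are occupied roughly an order of magnitude more often than the low-affinity site (about 14% and 7% versus 2%), illustrating why a uniformly high-affinity N111G-AT$_1$ population would accumulate many more ternary complexes at tonic hormone levels.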
Our Ca$^{2+}$ mobilization studies revealed some refractoriness in cells expressing the N111G-AT$_1$ receptor, suggesting that a desensitization process has developed as a consequence of the permanent activity of this receptor. We showed that this reduced response was not due to a reduced content of Ca$^{2+}$ within the intracellular stores of cells expressing the N111G-AT$_1$ receptor. Tovey et al. (31) reported that prolonged stimulation of HeLa and SH-SY5Y neuroblastoma cells reduced the amplitude, duration and frequency of Ca$^{2+}$ puff sites and showed that, under these conditions, this effect was unlikely to be due to a change in InsP$_3$ production but rather to IP$_3$R down-regulation. Further work is needed to identify the exact refractory component(s) of the Ca$^{2+}$ cascade and also the phenotypic and mechanistic changes occurring in cells expressing the constitutively active N111G-AT$_1$ receptor. In conclusion, we have shown that the constitutively active N111G-AT$_1$ receptor behaves similarly to the wild-type AT$_1$ receptor with regard to G protein coupling but adopts a high agonist affinity conformation that is maintained in spite of its uncoupling from $G_{q/11\alpha}$.

References

1. de Gasparo M, Catt KJ, Inagami T, Wright JW, Unger T 2000 International union of pharmacology. XXIII. The angiotensin II receptors. Pharmacol Rev 52:415-472
2. Burnier M 2001 Angiotensin II type 1 receptor blockers. Circulation 103:904-912
3. Kojima I, Kojima K, Kreutter D, Rasmussen H 1984 The temporal integration of the aldosterone secretory response to angiotensin occurs via two intracellular pathways. J Biol Chem 259:14448-14457
4. Balla T, Baukal AJ, Guillemette G, Morgan RO, Catt KJ 1986 Angiotensin-stimulated production of inositol trisphosphate isomers and rapid metabolism through inositol 4-monophosphate in adrenal glomerulosa cells. Proc Natl Acad Sci USA 83:9323-9327
5. Laporte SA, Boucard AA, Servant G, Guillemette G, Leduc R, Escher E 1999 Determination of peptide contact points in the human angiotensin II type I receptor (AT$_1$) with photosensitive analogs of angiotensin II. Mol Endocrinol 13:578-586
6. Perodin J, Deraët M, Auger-Messier M, Boucard AA, Rihakova L, Beaulieu ME, Lavigne P, Parent JL, Guillemette G, Leduc R, Escher E 2002 Residues 293 and 294 are ligand contact points of the human angiotensin type I receptor. Biochemistry 41:14348-14356
7. Boucard AA, Wilkes BC, Laporte SA, Escher E, Guillemette G, Leduc R 2000 Photolabeling identifies position 172 of the human AT(1) receptor as a ligand contact point: receptor-bound angiotensin II adopts an extended structure. Biochemistry 39:9662-9670
8. Hunyady L, Balla T, Catt KJ 1996 The ligand binding site of the angiotensin AT1 receptor. Trends Pharmacol Sci 17:135-140
9. Joseph MP, Maigret B, Bonnafous JC, Marie J, Scheraga HA 1995 A computer modeling postulated mechanism for angiotensin II receptor activation. J Protein Chem 14:381-398
10. Groblewski T, Maigret B, Larguier R, Lombard C, Bonnafous JC, Marie J 1997 Mutation of Asn111 in the third transmembrane domain of the AT1A angiotensin II receptor induces its constitutive activation. J Biol Chem 272:1822-1826
11. Noda K, Feng YH, Liu XP, Saad Y, Husain A, Karnik SS 1996 The active state of the AT1 angiotensin receptor is generated by angiotensin II induction. Biochemistry 35:16435-16442
12. Balmforth AJ, Lee AJ, Warburton P, Donnelly D, Ball SG 1997 The conformational change responsible for AT1 receptor activation is dependent upon two juxtaposed asparagine residues on transmembrane helices III and VII. J Biol Chem 272:4245-4251
13. Feng YH, Miura S, Husain A, Karnik SS 1998 Mechanism of constitutive activation of the AT1 receptor: influence of the size of the agonist switch binding residue Asn(111). Biochemistry 37:15791-15798
14. Thomas WG, Qian H, Chang CS, Karnik S 2000 Agonist-induced phosphorylation of the angiotensin II (AT(1A)) receptor requires generation of a conformation that is distinct from the inositol phosphate-signaling state. J Biol Chem 275:2893-2900
15. Miserey-Lenkei S, Parnot C, Bardin S, Corvol P, Clauser E 2002 Constitutive internalization of constitutively active angiotensin II AT(1A) receptor mutants is blocked by inverse agonists. J Biol Chem 277:5891-5901
16. Le MT, Vanderheyden PM, Szaszak M, Hunyady L, Vauquelin G 2002 Angiotensin IV is a potent agonist for constitutive active human AT1 receptors. Distinct roles of the N- and C-terminal residues of angiotensin II during AT1 receptor activation. J Biol Chem 277:23107-23110
17. Bosse R, Servant G, Zhou LM, Guillemette G, Escher E 1993 Sar1-p-benzoylphenylalanine-angiotensin, a new photoaffinity probe for selective labeling of the type 2 angiotensin receptor. Regul Pept 44:215-223
18. Fraker PJ, Speck JC 1978 Protein and cell membrane iodinations with a sparingly soluble chloroamide, 1,3,4,6-tetrachloro-3a,6a-diphenylglycoluril. Biochem Biophys Res Commun 80:849-857
19. Laporte SA, Servant G, Richard DE, Escher E, Guillemette G, Leduc R 1996 The tyrosine within the NPXnY motif of the human angiotensin II type 1 receptor is involved in mediating signal transduction but is not essential for internalization. Mol Pharmacol 49:89-95
20. Boulay G, Chrétien L, Richard DE, Guillemette G 1994 Short-term desensitization of the angiotensin II receptor of bovine adrenal glomerulosa cells corresponds to a shift from a high to a low affinity state. Endocrinology 135:2130-2136
21. Servant G, Laporte SA, Leduc R, Escher E, Guillemette G 1997 Identification of angiotensin II-binding domains in the rat AT2 receptor with photolabile angiotensin analogs. J Biol Chem 272:8653-8659
22. Groblewski T, Maigret B, Nouet S, Larguier R, Lombard C, Bonnafous JC, Marie J 1995 Amino acids of the third transmembrane domain of the AT1A angiotensin II receptor are involved in the differential recognition of peptide and nonpeptide ligands. Biochem Biophys Res Commun 209:153-160
23. Seta K, Nanamori M, Modrall JG, Neubig RR, Sadoshima J 2002 AT1 receptor mutant lacking heterotrimeric G protein coupling activates the Src-Ras-ERK pathway without nuclear translocation of ERKs. J Biol Chem 277:9268-9277
24. Glossmann H, Baukal A, Catt KJ 1974 Angiotensin II receptors in bovine adrenal cortex. Modification of angiotensin II binding by guanyl nucleotides. J Biol Chem 249:664-666
25. Poitras M, Sidibe A, Richard DE, Chretien L, Guillemette G 1998 Effect of uncoupling agents on AT1 receptor affinity for antagonist analogs of angiotensin II. Receptors Channels 6:65-72
26. Gales C, Kowalski-Chauvel A, Dufour MN, Seva C, Moroder L, Pradayrol L, Vaysse N, Fourmy D, Silvente-Poirot S 2000 Mutation of Asn-391 within the conserved NPXXY motif of the cholecystokinin B receptor abolishes Gq protein activation without affecting its association with the receptor. J Biol Chem 275:17321-17327
27. Law SF, Reisine T 1997 Changes in the association of G protein subunits with the cloned mouse delta opioid receptor on agonist stimulation. J Pharmacol Exp Ther 281:1476-1486
28. Damaj BB, McColl SR, Mahana W, Crouch MF, Naccache PH 1996 Physical association of Gi2alpha with interleukin-8 receptors. J Biol Chem 271:12783-12789
29. Brydon L, Roka F, Petit L, de Coppet P, Tissot M, Barrett P, Morgan PJ, Nanoff C, Strosberg AD, Jockers R 1999 Dual signaling of human Mel1a melatonin receptors via G(i2), G(i3), and G(q/11) proteins. Mol Endocrinol 13:2025-2038
30. Kjelsberg MA, Cotecchia S, Ostrowski J, Caron MG, Lefkowitz RJ 1992 Constitutive activation of the alpha 1B-adrenergic receptor by all amino acid substitutions at a single site. Evidence for a region which constrains receptor activation. J Biol Chem 267:1430-1433
31. Tovey SC, de Smet P, Lipp P, Thomas D, Young KW, Missiaen L, De Smedt H, Parys JB, Berridge MJ, Thuring J, Holmes A, Bootman MD 2001 Calcium puffs are generic InsP(3)-activated elementary calcium signals and are downregulated by prolonged hormonal stimulation to inhibit cellular calcium responses. J Cell Sci 114:3979-3989

Article status: published

Reference: Mannix Auger-Messier, Guillaume Arguin, Benoit Chaloux, Richard Leduc, Emanuel Escher, and Gaetan Guillemette (2004) Down-Regulation of Inositol 1,4,5-Trisphosphate Receptor in Cells Stably Expressing the Constitutively Active Angiotensin II N111G-AT$_1$ Receptor. Mol Endocrinol 18(12):2967-2980

Contribution: I participated actively in the development of this study, planning the majority of the experiments and providing 50% of the results presented in this article. I wrote the first draft of the manuscript.

Down-Regulation of Inositol 1,4,5-Trisphosphate Receptor in Cells Stably Expressing the Constitutively Active Angiotensin II N111G-AT$_1$ Receptor

Mannix Auger-Messier, Guillaume Arguin, Benoit Chaloux, Richard Leduc, Emanuel Escher, and Gaetan Guillemette

Department of Pharmacology, Faculty of Medicine, Université de Sherbrooke, Sherbrooke, Quebec, Canada, J1H 5N4

Running title: Ca$^{2+}$ dyshomeostasis caused by N111G-AT$_1$

Keywords: constitutively active mutant GPCR, IP$_3$R down-regulation, desensitization, lysosomal degradation, adaptive process, inverse agonist

M.A.-M. and G.A. contributed equally to this work and should both be considered first authors.

Address all correspondence and requests for reprints to: Gaetan Guillemette, Ph.D., Department of Pharmacology, Faculty of Medicine, Université de Sherbrooke, 3001, 12th Avenue North, Sherbrooke, Quebec, Canada, J1H 5N4. Tel.: (819) 564-5347; Fax: (819) 564-5400; E-mail: firstname.lastname@example.org

ABSTRACT

The diverse cellular changes brought about by the expression of a constitutively active receptor are poorly understood. HEK-293 cells stably expressing the constitutively active N111G-AT$_1$ receptor (N111G cells) showed elevated levels of inositol phosphates and frequent spontaneous intracellular Ca$^{2+}$ oscillations. Interestingly, Ca$^{2+}$ transients triggered with maximal doses of angiotensin II were much weaker in N111G cells than in WT cells. These blunted responses were observed independently of the presence or absence of extracellular Ca$^{2+}$ and were also obtained when endogenous muscarinic and purinergic receptors were activated, revealing a heterologous desensitization process. The desensitized component of the Ca$^{2+}$ signaling cascade was neither the G protein G$_q$ nor phospholipase C.
The intracellular Ca$^{2+}$ store of N111G cells and their mechanism of Ca$^{2+}$ entry also appeared to be intact. The most striking adaptive response of N111G cells was a down-regulation of their inositol 1,4,5-trisphosphate receptor (IP$_3$R), as revealed by reduced IP$_3$-induced Ca$^{2+}$ release, lowered [$^3$H]IP$_3$ binding capacity, diminished IP$_3$R immunoreactivity and accelerated IP$_3$R degradation involving the lysosomal pathway. Treatment with the inverse agonist EXP3174 reversed the desensitized phenotype of N111G cells. Down-regulation of IP$_3$R represents a reversible adaptive response that protects cells against the adverse effects of constitutively active Ca$^{2+}$-mobilizing receptors.

INTRODUCTION

The AT$_1$ receptor belongs to the G protein-coupled receptor (GPCR) superfamily and plays an active role in the renin-angiotensin system. The AT$_1$ receptor mediates virtually all the known physiological actions of angiotensin II (Ang II), including vascular contraction, aldosterone secretion, sodium and water retention, neuronal activation and cardiovascular cell growth and proliferation (1, 2). The AT$_1$ receptor functions primarily through its productive coupling to the heterotrimeric guanyl nucleotide binding regulatory protein (G protein) $G_{q/11}$, which activates phospholipase C; the latter hydrolyses membranous phosphatidylinositol 4,5-bisphosphate into inositol 1,4,5-trisphosphate (IP$_3$) and diacylglycerol (3, 4). While IP$_3$ causes a rapid release of Ca$^{2+}$ from intracellular stores upon activation of its receptor-channel (IP$_3$R), diacylglycerol recruits and activates protein kinase C at the plasma membrane. IP$_3$-induced Ca$^{2+}$ release is generally followed by an increase in Ca$^{2+}$ entry across the plasma membrane that can serve to replenish stores or contribute to Ca$^{2+}$-dependent signaling. This entry of Ca$^{2+}$ occurs through a poorly defined mechanism that is initiated by the depletion of Ca$^{2+}$ stores, a process known as capacitative Ca$^{2+}$ entry (5). A GPCR able to adopt an active conformation in the absence of an agonist is said to be constitutively active. The AT$_1$ receptor belongs to a large group of about 60 wild-type GPCRs exhibiting constitutive activity (6). The constitutive activity of the AT$_1$ receptor became apparent after its overexpression in COS-1 cells, which showed enhanced basal activity of phospholipase C (7). Recently, three independent studies simultaneously reported a marked increase in the constitutive activity of the AT$_1$ receptor when Asn$^{111}$ (in the third transmembrane domain) was substituted with the smaller residues Ala or Gly (7-9). Regardless of the type of G protein they couple to, numerous other examples of constitutively active mutant (CAM) GPCRs have been reported in the literature, including the A293E-$\alpha_{1B}$ adrenergic receptor (10), M257Y-rhodopsin (11) and the T279K-mu opioid receptor (12). It is believed that intramolecular interactions preferentially constrain GPCRs in the inactive conformation and that agonists or specific mutations relieve these constraints, thus favoring the active conformation (13). CAM-GPCRs may profoundly modify cell functions. Diseases such as retinitis pigmentosa (14) and Kaposi's sarcoma (15) have been ascribed to CAM-GPCRs. More studies are needed to determine the effects of CAM-GPCRs on intracellular signaling mechanisms and cell functions.
In recent work, we noticed some refractoriness in the Ca$^{2+}$ response of cells expressing the constitutively active N111G-AT$_1$ receptor, suggesting that a desensitization process had developed as a consequence of the permanent activity of this receptor (16). In the study presented here, we selected a HEK-293 clonal cell line stably expressing the N111G-AT$_1$ receptor (N111G cells) in order to examine in greater detail the mechanism of intracellular Ca$^{2+}$ regulation under basal conditions and after stimulation with different Ca$^{2+}$-mobilizing agonists. We noted that agonist-induced intracellular Ca$^{2+}$ release and subsequent Ca$^{2+}$ entry activities were heterologously desensitized in N111G cells. This refractory state was mainly caused by a down-regulation of IP$_3$R and could be reversed by a prolonged treatment with EXP3174, an inverse agonist of the AT$_1$ receptor.

RESULTS

**Pharmacological Properties of the N111G-AT$_1$ Receptor**—The constitutively active N111G-AT$_1$ receptor and the AT$_1$ receptor were stably transfected into HEK-293 cells, and representative clonal cell lines were analyzed for their functional properties (Table 1). In saturation binding studies, the AT$_1$ receptor exhibited high and low affinity states for the agonist $^{125}$I-Ang II (0.5 ± 0.2 and 3.4 ± 0.9 nM, respectively), with expression levels ($B_{MAX}$) of 0.4 ± 0.1 and 0.8 ± 0.3 pmol/mg of protein, respectively (Table 1). The high affinity state was completely converted to the low affinity state in the presence of guanyl nucleotides (data not shown). The N111G-AT$_1$ receptor exhibited a single high affinity state (0.9 ± 0.2 nM) that was not affected by the presence of guanyl nucleotides (data not shown), and its $B_{MAX}$ was 2.4 ± 0.6 pmol/mg of protein. Under basal conditions, N111G cells contained levels of IP at least 6-fold higher than those found in WT cells. These results confirm the constitutive activity of the N111G-AT$_1$ receptor. Upon stimulation with a high concentration of Ang II, N111G and WT cells accumulated comparable levels of IP (Table 1). These results are similar to those previously obtained after transient transfection of these receptors in COS-7 cells (16).

**Impaired Ca$^{2+}$ Response in Single N111G Cells**—The temporal patterns of intracellular Ca$^{2+}$ signals in HEK-293 cells expressing the constitutively active N111G-AT$_1$ receptor were compared with those elicited by the wild-type AT$_1$ receptor. Under basal conditions (without agonist), the AT$_1$ receptor did not generally elicit any fluctuations in intracellular Ca$^{2+}$ concentrations (Fig. 1A).
Upon stimulation with a relatively low concentration of Ang II (0.1 nM), the AT$_1$ receptor elicited repetitive baseline-separated Ca$^{2+}$ transients (Ca$^{2+}$ oscillations) with a frequency of 32 ± 16 oscillations/h. Interestingly, under basal conditions, the N111G-AT$_1$ receptor elicited spontaneous Ca$^{2+}$ oscillations with a frequency of 18 ± 11 oscillations/h. Upon activation with 0.1 nM Ang II, N111G cells showed only a modest acceleration of the oscillatory rate, which increased by about 13 oscillations/h, barely reaching the oscillatory rate of WT cells stimulated with 0.1 nM Ang II (Fig. 1B). These results suggest that the N111G-AT$_1$ receptor is less responsive than the AT$_1$ receptor to stimulation by Ang II. No significant difference was noted between the amplitude of the oscillations elicited in N111G cells (0.40 ± 0.02 fluorescence ratio unit) and those elicited in WT cells (0.41 ± 0.02 fluorescence ratio unit).

**TABLE 1** *Binding and Functional Properties of Receptors Expressed in Clonal Cell Lines*

| Clonal cell line | $K_{D1}$ (nM) | $K_{D2}$ (nM) | $B_{MAX1}$ (pmol/mg of protein) | $B_{MAX2}$ (pmol/mg of protein) | Basal IP (% of stimulated WT cells) | Ang II-stimulated IP (% of stimulated WT cells) |
|------------------|---------------|---------------|--------------------------------|--------------------------------|-------------------------------------|------------------------------------------------|
| WT cells | 0.5 ± 0.2 | 3.4 ± 0.9 | 0.4 ± 0.1 | 0.8 ± 0.3 | 4 ± 3 | 100 |
| N111G cells | 0.9 ± 0.2 | --- | 2.4 ± 0.6 | --- | 28 ± 1 * | 109 ± 6 |

Affinities ($K_D$) for $^{125}$I-Ang II and expression levels ($B_{MAX}$) of the wild-type AT$_1$ receptor and the mutant N111G-AT$_1$ receptor stably expressed in HEK-293 cells were obtained by Scatchard analysis of saturation binding experiments. In a typical experiment with the WT cells, total binding of $^{125}$I-Ang II (1 nM) was 910 cpm and non-specific binding was 250 cpm. These results are expressed as means ± SD of three independent experiments. Functional properties were evaluated by measuring the basal and Ang II-stimulated phospholipase C activity in clonal cell lines pre-loaded for 18-24 h with 15 µCi/mL of myo-[$^3$H]inositol. Total inositol phosphates (sum of IP$_1$, IP$_2$, IP$_3$ and IP$_4$) accumulated within a period of 20 min were determined as described under "Material and Methods". In a typical experiment with WT cells, basal IP were 4920 cpm and Ang II-stimulated IP were 15990 cpm. The results are expressed as means ± SD values from three independent experiments (done in triplicate) where IP production was normalized for incorporation of myo-[$^3$H]inositol into phospholipids and for receptor expression level (determined by saturation binding assays). *, P < 0.05 compared with the basal value of WT cells.

FIG. 1. Spontaneous and Ang II-induced Ca$^{2+}$ oscillations in single cells. Attached WT cells (A and C) or N111G cells (B and D) were loaded with Fura2/AM (0.1 μM) for 20 min at room temperature in HBSS, washed for 20 min at room temperature in HBSS and mounted onto a videomicroscopy system. Fura2 fluorescence in single cells was monitored under basal conditions for an initial period of 750 s before adding either 0.1 nM Ang II (A and B) or 1 μM Ang II (C and D). These typical traces show variations of the fluorescence ratio (F$_{334}$/F$_{380}$) obtained at room temperature as described under "Material and Methods". Similar results were obtained with three different cell preparations.
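As an illustration of the curve fitting behind the $K_D$ and $B_{MAX}$ estimates in Table 1, the sketch below fits a one-site saturation binding model by nonlinear least squares and also applies the Scatchard linearization used in the paper. The "measured" points are synthetic, invented for illustration, and `numpy`/`scipy` are assumed to be available.

```python
# Minimal sketch: estimating KD and Bmax from a saturation binding
# experiment. The "measured" points below are synthetic, for illustration
# only; they are not data from this study.
import numpy as np
from scipy.optimize import curve_fit

def one_site(free_nm, bmax, kd_nm):
    """Specific binding for a single class of sites: B = Bmax*L/(KD+L)."""
    return bmax * free_nm / (kd_nm + free_nm)

free = np.array([0.1, 0.3, 1.0, 3.0, 10.0])        # free 125I-Ang II, nM
bound = np.array([0.21, 0.52, 1.15, 1.80, 2.22])   # bound, pmol/mg (synthetic)

(bmax, kd), _ = curve_fit(one_site, free, bound, p0=(2.0, 1.0))
print(f"Bmax = {bmax:.2f} pmol/mg, KD = {kd:.2f} nM")

# Scatchard transform of the same data: bound/free vs. bound is a line
# with slope -1/KD and x-intercept Bmax.
slope, intercept = np.polyfit(bound, bound / free, 1)
print(f"Scatchard: KD = {-1/slope:.2f} nM, Bmax = {-intercept/slope:.2f} pmol/mg")
```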
Upon activation with a high dose of Ang II (1 μM), the AT$_1$ receptor produced a single large Ca$^{2+}$ transient (amplitude of 0.87 fluorescence ratio unit) that slowly declined toward a low level slightly above the resting level (Fig. 1C), whereas the N111G-AT$_1$ receptor also produced a single Ca$^{2+}$ transient but with a lower amplitude of 0.66 fluorescence ratio unit (Fig. 1D). These results indicate that although the constitutively active N111G-AT$_1$ receptor is capable of inducing spontaneous Ca$^{2+}$ oscillations that can be accelerated upon activation with low doses of agonist, single cells expressing this receptor show some refractoriness in their Ca$^{2+}$ response.

**Impaired Ca$^{2+}$ Response in N111G Cell Populations**—Despite their elevated IP content, the basal Ca$^{2+}$ concentration in N111G cells (99 ± 13 nM) was not significantly different from that of WT cells (99 ± 22 nM). In N111G cells, however, 1 μM Ang II produced a low amplitude Ca$^{2+}$ transient (387 ± 97 nM; Fig. 2B) that was significantly lower than that produced in WT cells (790 ± 84 nM; Fig. 2A). Agonist-induced Ca$^{2+}$ transients consist of two main components: the release of Ca$^{2+}$ from intracellular stores and Ca$^{2+}$ entry from the extracellular medium. To identify which of these two components could be responsible for the refractory state of N111G cells, we performed experiments in a nominally Ca$^{2+}$-free medium. Under these conditions, WT cells maintained a stable low level of intracellular Ca$^{2+}$ for at least 3 min (Fig. 2C). When extracellular Ca$^{2+}$ was added to the medium, a very minor increase in the intracellular Ca$^{2+}$ concentration was observed. According to the "capacitative Ca$^{2+}$ entry" model (5), this minor increase could be due to the entry of Ca$^{2+}$ resulting from a small leakage of the intracellular Ca$^{2+}$ store. Interestingly, under the same conditions, N111G cells displayed a larger intracellular Ca$^{2+}$ increase when extracellular Ca$^{2+}$ was added (Fig. 2D). These results suggest that N111G cells have a dynamic capacitative Ca$^{2+}$ entry activity under basal conditions. This is consistent with the constitutive activity of the N111G-AT$_1$ receptor, which maintains an elevated level of IP$_3$ in these cells (Table 1), thus causing a larger depletion of their intracellular Ca$^{2+}$ store.

FIG. 2. Ang II-induced Ca$^{2+}$ release and Ca$^{2+}$ entry. Populations (1.25×10$^6$ cells/assay) of WT cells (A, C and E) or N111G cells (B, D and F) were loaded with Fura2/AM (5 μM) for 20 min at 37°C, washed by centrifugation, resuspended either in an extracellular-like medium (A and B) or in a nominally Ca$^{2+}$-free medium (C, D, E and F) and their intracellular Ca$^{2+}$ concentration was monitored upon stimulation with 1 μM Ang II or upon addition of 1.8 mM CaCl$_2$, as indicated. These experiments were performed at 37°C and [Ca$^{2+}$] variations were monitored with a Hitachi F-2000 spectrofluorometer as described under "Material and Methods". These typical traces are representative of at least three independent experiments done in duplicate.
In the absence of extracellular Ca$^{2+}$, a high concentration of Ang II (1 μM) caused a robust intracellular Ca$^{2+}$ transient (amplitude of 522 ± 66 nM) in WT cells that reflected a major depletion of their IP$_3$-sensitive intracellular Ca$^{2+}$ store (Fig. 2E). Under these conditions, the addition of extracellular Ca$^{2+}$ caused a significant capacitative Ca$^{2+}$ entry with a maximal amplitude of 197 ± 25 nM. Fig. 2F shows that the IP$_3$-induced Ca$^{2+}$ release (amplitude of 306 ± 33 nM) and the capacitative Ca$^{2+}$ entry (amplitude of 152 ± 4 nM) activities elicited by 1 μM Ang II in N111G cells were smaller than those observed in WT cells. Ang II dose-dependent effects on IP$_3$-induced Ca$^{2+}$ release and capacitative Ca$^{2+}$ entry activities were evaluated with a protocol similar to that used in Fig. 2E. In WT cells, increasing concentrations of Ang II from 1 pM to 1 µM caused intracellular Ca$^{2+}$ releases of increasing amplitude (Fig. 3A, filled circles). The threshold dose was approximately 30 pM Ang II, the maximal amplitude (522 ± 66 nM Ca$^{2+}$) was obtained with 1 μM Ang II and the EC$_{50}$ (dose producing 50% of the maximal release) was 1.0 ± 0.1 nM. In N111G cells (Fig. 3A, empty circles), the dose-response curve revealed an EC$_{50}$ of 0.7 ± 0.5 nM, not significantly different from that obtained in WT cells, but the maximal amplitude (306 ± 33 nM Ca$^{2+}$) was significantly lower than that obtained in WT cells. Capacitative Ca$^{2+}$ entry in WT cells showed a typical dose-response curve with an EC$_{50}$ of 0.3 ± 0.1 nM and a maximal amplitude of 223 ± 49 nM (Fig. 3B, filled circles). In N111G cells (Fig. 3B, empty circles), the dose-response curve for capacitative Ca$^{2+}$ entry was very different, with a relatively large entry (69 ± 16 nM Ca$^{2+}$) under basal conditions and a maximal amplitude (153 ± 16 nM Ca$^{2+}$) much lower than that obtained in WT cells. The EC$_{50}$ was 0.3 ± 0.1 nM, but the maximal Ang II-induced Ca$^{2+}$ entry (the difference between basal and maximal amplitude) was only 84 nM Ca$^{2+}$.

FIG. 3. Dose-response curves for Ang II-induced Ca$^{2+}$ release and Ca$^{2+}$ entry. Populations (1.25×10$^6$ cells/assay) of WT cells (filled circles) or N111G cells (empty circles) were loaded with Fura2/AM (5 μM) for 20 min at 37°C, washed by centrifugation, resuspended in a nominally Ca$^{2+}$-free medium and their releases of intracellular Ca$^{2+}$ were measured after stimulation with increasing concentrations of Ang II (A). Three min after Ang II stimulation, Ca$^{2+}$ entry was measured by adding 1.8 mM CaCl$_2$ to the medium (B). These experiments were performed at 37°C and [Ca$^{2+}$]$_i$ variations were monitored with a Hitachi F-2000 spectrofluorometer as described under "Material and Methods". Each point represents the maximal amplitude of the Ca$^{2+}$ variation (nM) and is expressed as the mean ± SD of at least three independent experiments done in duplicate.

**The Constitutive Activity of the N111G-AT$_1$ Receptor Increases Basal Ca$^{2+}$ Entry**—As previously argued, basal Ca$^{2+}$ entry in N111G cells is likely due to the constitutive activity of the N111G-AT$_1$ receptor. To support this hypothesis, cells were incubated in a nominally Ca$^{2+}$-free medium for different periods of time before assessing their capacitative Ca$^{2+}$ entry following the addition of extracellular Ca$^{2+}$.
Under these conditions, WT cells showed only a slight time-dependent increase in Ca$^{2+}$ entry, likely due to a minor leakage of Ca$^{2+}$ from the intracellular pool toward the exterior of the cells (Fig. 4A, filled circles). N111G cells showed a marked time-dependent increase in Ca$^{2+}$ entry that was consistent with an important leak of intracellular Ca$^{2+}$ due to the constitutive activity of the N111G-AT$_1$ receptor (Fig. 4A, empty circles). EXP3174 is an inverse agonist known to block the constitutive activity of the N111G-AT$_1$ receptor (7). In the presence of a saturating concentration of EXP3174, the time-dependent increase in capacitative Ca$^{2+}$ entry was not significantly affected in WT cells but was completely blunted in N111G cells (Fig. 4B). The elevated basal Ca$^{2+}$ entry in N111G cells is therefore likely due to the constitutive activity of the mutant AT$_1$ receptor.

FIG. 4. Capacitative Ca$^{2+}$ entry under basal conditions. Populations (1.25×10$^6$ cells/assay) of WT cells (filled circles) or N111G cells (empty circles) were loaded with Fura2/AM (5 μM) for 20 min at 37°C, washed by centrifugation and resuspended in a nominally Ca$^{2+}$-free medium for varying periods of time (ranging from 5 to 16 min) before measuring their Ca$^{2+}$ entry activity following the addition of 1.8 mM CaCl$_2$ to the medium (A). With a similar protocol, WT cells (filled columns) or N111G cells (empty columns) were pre-treated for 6 min without (Control) or with 4 μM EXP3174 before measuring their Ca$^{2+}$ entry activity following the addition of 1.8 mM CaCl$_2$ to the medium (B). These experiments were performed at 37°C and [Ca$^{2+}$]$_i$ variations were monitored with a Hitachi F-2000 spectrofluorometer as described under "Material and Methods". Data are expressed as mean ± SD of triplicate values and are representative of three independent experiments.

**Integrity of Internal Ca$^{2+}$ Stores**—Ang II dose-response curves revealed a diminished intracellular Ca$^{2+}$ release and a diminished maximal amplitude of Ca$^{2+}$ entry in N111G cells. The content of the intracellular Ca$^{2+}$ pool has a strong influence on IP$_3$-induced Ca$^{2+}$ release and capacitative Ca$^{2+}$ entry. In a nominally Ca$^{2+}$-free medium, thapsigargin, a potent SERCA inhibitor, caused the same amount of Ca$^{2+}$ to be released from the intracellular stores of WT cells (339 ± 30 nM) and N111G cells (340 ± 21 nM) (Fig. 5, left). Comparable results were obtained with the ionophore ionomycin, revealing that the total cellular Ca$^{2+}$ content was similar in both cell types (data not shown). Interestingly, capacitative Ca$^{2+}$ entry after depletion of the intracellular stores with thapsigargin was very similar in both cell types (WT cells: 418 ± 30 nM; N111G cells: 466 ± 49 nM) (Fig. 5, right). After treatment with EXP3174, thapsigargin-induced Ca$^{2+}$ release and the subsequent capacitative Ca$^{2+}$ entry in WT and N111G cells were not modified (data not shown). These results suggest that the intrinsic mechanisms responsible for capacitative Ca$^{2+}$ entry are not modified in N111G cells.
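The EC$_{50}$ values quoted for Fig. 3 come from sigmoidal dose-response analysis. A minimal sketch of such an analysis is shown below, assuming a standard Hill equation; the data points are invented for illustration and are not measurements from this study.

```python
# Minimal sketch: extracting an EC50 from a dose-response experiment by
# fitting a Hill equation. The data points are synthetic, for illustration
# only; they are not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc_nm, top, ec50_nm, n):
    """Sigmoidal dose-response: top * C^n / (EC50^n + C^n)."""
    return top * conc_nm**n / (ec50_nm**n + conc_nm**n)

ang_ii = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 1000.0])       # nM
ca_release = np.array([2.0, 8.0, 55.0, 260.0, 480.0, 522.0])   # nM Ca2+ (synthetic)

(top, ec50, n), _ = curve_fit(hill, ang_ii, ca_release, p0=(500.0, 1.0, 1.0))
print(f"max release = {top:.0f} nM Ca2+, EC50 = {ec50:.2f} nM, Hill n = {n:.2f}")
```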
**Heterologous Desensitization of Ca$^{2+}$ Responses in N111G Cells**—To verify whether the refractory state of N111G cells is an AT$_1$ receptor-specific phenomenon, we analyzed the Ca$^{2+}$ responses induced by carbachol (CCh) and ATP, two Ca$^{2+}$-mobilizing agonists of endogenously expressed muscarinic and purinergic receptors, respectively. In the absence of extracellular Ca$^{2+}$, maximal doses of CCh and ATP caused intracellular Ca$^{2+}$ releases that were weaker in N111G cells (Fig. 6A, empty columns) than in WT cells (Fig. 6A, filled columns). Likewise, the addition of extracellular Ca$^{2+}$ to cells pretreated with CCh or ATP resulted in capacitative Ca$^{2+}$ entries that were weaker in N111G cells than in WT cells (Fig. 6B). These results indicate that the refractory state of N111G cells is due to some factor(s) common to the mechanism of action of several G$_q$-coupled receptors. The reversibility of the phenomenon was assessed by pretreating the cells with the inverse agonist EXP3174. The deficits in CCh-induced intracellular Ca$^{2+}$ release (Fig. 6C) and CCh-induced capacitative Ca$^{2+}$ entry (Fig. 6D) were totally eliminated in N111G cells after a relatively long (at least 24 h) pretreatment with EXP3174. Losartan and L-158,809, two partial inverse agonists, could only partially reverse the desensitized state of N111G cells (data not shown).

FIG. 5. Integrity of the intracellular Ca$^{2+}$ stores. Populations (1.25×10$^6$ cells/assay) of WT cells (A) or N111G cells (B) were loaded with Fura2/AM (5 μM) for 20 min at 37°C, washed by centrifugation and resuspended in a nominally Ca$^{2+}$-free medium before being exposed successively to 1 μM thapsigargin and 1.8 mM CaCl$_2$. These experiments were performed at 37°C and [Ca$^{2+}$]$_i$ variations were monitored with a Hitachi F-2000 spectrofluorometer as described under "Material and Methods". These typical traces are representative of three independent experiments done in duplicate.

**Down-regulation of IP$_3$ Receptor in N111G Cells**—The IP$_3$ receptor is an intracellular Ca$^{2+}$ channel that is a common component of the mechanism of action of all G$_q$-coupled receptors. The functional properties of the IP$_3$ receptor were assessed by fura2 spectrofluorometry in saponin-permeabilized cells. Fig. 7A shows that permeabilized WT cells could take up Ca$^{2+}$ into their intracellular store by an ATP-dependent process, thus decreasing the ambient Ca$^{2+}$ concentration to a low, steady level. The addition of increasing doses of IP$_3$ caused rapid, transient releases of increasing amounts of Ca$^{2+}$ until a maximal effect (11.8 ± 1.4 nmol of Ca$^{2+}$ released) was obtained. Note that the efficient Ca$^{2+}$ re-uptake following each IP$_3$-induced Ca$^{2+}$ release is a consequence of the rapid degradation of IP$_3$ under these experimental conditions. In permeabilized N111G cells, relatively high doses of IP$_3$ were required to release sequestered Ca$^{2+}$, and the maximal effect (7.4 ± 0.2 nmol of Ca$^{2+}$ released) was lower than in WT cells (Fig. 7B).
FIG. 6. EXP3174 rescues the Ca$^{2+}$ release and Ca$^{2+}$ entry activities of N111G cells. Populations (1.25×10$^6$ cells/assay) of WT cells (filled columns) or N111G cells (empty columns) were loaded with Fura2/AM (5 μM) for 20 min at 37°C, washed by centrifugation, resuspended in a nominally Ca$^{2+}$-free medium and their intracellular Ca$^{2+}$ releases were measured upon stimulation with 100 μM CCh or 100 μM ATP (A). Three min after stimulation with agonists, Ca$^{2+}$ entry was measured by adding 1.8 mM CaCl$_2$ to the medium (B). With a similar protocol, CCh-induced Ca$^{2+}$ release (C) and Ca$^{2+}$ entry (D) activities were measured after a pretreatment of the cells for varying periods of time (ranging from 0.1 to 48 h) with 4 μM EXP3174. These experiments were performed at 37°C and [Ca$^{2+}$]$_i$ variations were monitored with a Hitachi F-2000 spectrofluorometer as described under "Material and Methods". These data are expressed as mean ± SD of triplicate values and are representative of three independent experiments.

FIG. 7. IP$_3$-induced Ca$^{2+}$ release activity in permeabilized cells. Populations (20×10$^6$ cells/assay) of WT cells (A) or N111G cells (B) were permeabilized for 3 min at 37°C in a cytosol-like buffer supplemented with 50 μg/ml of saponin, 0.5 μM fura2 acid, 20 units of creatine kinase and 10 mM phosphocreatine. Ca$^{2+}$ taken up upon addition of 1 mM ATP (A) was partially released with increasing concentrations of IP$_3$ (ranging from 0.1 to 3 μM). The amount of Ca$^{2+}$ released was calibrated by adding a known amount of exogenous Ca$^{2+}$ (4 nmol CaCl$_2$: C). Maximal fluorescence was measured by adding a saturating concentration of Ca$^{2+}$ (1.8 mM CaCl$_2$: S). Panel C shows the dose-response curves for IP$_3$-induced Ca$^{2+}$ releases from WT cells (filled circles), N111G cells (empty circles) and EXP3174-treated (4 μM for 48 h) N111G cells (empty squares). Panel D shows the results of Panel C represented as % of maximal release. These experiments were performed at 37°C and ambient [Ca$^{2+}$] variations were monitored with a Hitachi F-2000 spectrofluorometer as described under "Material and Methods". These typical traces are representative of at least three independent experiments done in duplicate and summarized as mean ± SD values in C and D.

The dose-response curves shown in Fig. 7C indicate that the IP$_3$-induced Ca$^{2+}$ release activity of N111G cells (empty circles) was clearly less efficient than that of WT cells (filled circles). Interestingly, after a 48 h pretreatment of N111G cells with EXP3174 (Fig. 7C, empty squares), their IP$_3$-induced Ca$^{2+}$ release activity was not significantly different from that of the WT cells. When the results of Fig. 7C were plotted as a percentage of maximal release under each condition, the three curves were superimposable, with similar EC$_{50}$s of 0.29 ± 0.03 μM (Fig. 7D). These results are consistent with a reduction of IP$_3$ receptors in N111G cells. To directly assess the pharmacological properties of IP$_3$ receptors expressed in our clonal cell lines, [$^3$H]IP$_3$ binding studies were performed after permeabilization of the cells with saponin. The typical dose-displacement curves shown in Fig. 8 clearly indicate that the IP$_3$ binding activity of N111G cells (empty circles) was weaker than that of WT cells (filled circles).
In both cell types, however, [$^3$H]IP$_3$ binding was inhibited in a similar fashion by increasing concentrations of unlabelled IP$_3$. A Scatchard analysis of the data (inset) showed that the binding affinity of N111G cells (19.2 ± 1.3 nM) was not significantly different from that of WT cells (20.3 ± 1.7 nM). The maximal binding capacity, however, was significantly lower in N111G cells ($B_{MAX}$ of 0.38 pmol/mg of protein) than in WT cells ($B_{MAX}$ of 0.86 pmol/mg of protein). The level of expression of type III IP$_3$R (IP$_3$RIII), an abundant subtype in HEK-293 cells, was directly assessed by immunoblot analysis with a selective anti-IP$_3$RIII antibody. Fig. 9A (upper panel) shows that IP$_3$RIII migrates on SDS-PAGE as a single sharp band with an Mr of ~230 kDa. The intensity of this band was higher in extracts from WT cells (lanes 1 and 2) than in extracts from N111G cells (lanes 3 and 4). A 48 h pretreatment of cells with EXP3174 did not modify the intensity of this band in extracts from WT cells (lanes 5 and 6) but significantly increased the intensity of the band in extracts from N111G cells (lanes 7 and 8). The densitometric analysis shown in Fig. 9B clearly indicates that N111G cells expressed less IP$_3$RIII than WT cells and that the level of expression of IP$_3$RIII was significantly increased (by about 2-fold) after treatment of N111G cells with the inverse agonist EXP3174. These results suggest that the refractory state of N111G cells was due, at least in part, to a down-regulation of IP$_3$R. It is important to note that most of the results obtained with our N111G cell clone (weaker intracellular Ca$^{2+}$ release, weaker capacitative Ca$^{2+}$ entry and lower expression of IP$_3$RIII than in WT cells) were reproduced with a heterogeneous population of G-418-resistant N111G-AT$_1$ receptor-transfected cells (Fig. 10). Therefore, the down-regulation of IP$_3$R appears to be a common cellular response to the expression of a constitutively active AT$_1$ receptor and not a mere epiphenomenon observed with an "eccentric" cell clone.

FIG. 8. [$^3$H]IP$_3$ binding to permeabilized cells. WT cells (filled circles) or N111G cells (empty circles) were permeabilized for 10 min at 37°C in an intracellular-like medium supplemented with saponin (50 μg/mL) and then incubated (20×10$^6$ cells/tube) for 15 min at 0°C with ~2 nM [$^3$H]IP$_3$ (15,000 cpm) and increasing concentrations of unlabelled IP$_3$ (ranging from 0.1 nM to 1 μM) as described under "Material and Methods". The $K_D$ and $B_{MAX}$ values were calculated from the Scatchard analysis of the data (inset). These data, expressed as means ± SEM of duplicates, are representative of three independent experiments.

FIG. 9. Immunoblot analysis of IP$_3$RIII. WT cells and N111G cells were grown for 48 h in the absence (lanes 1-4) or in the presence of 4 µM EXP3174 (lanes 5-8). In panel A, proteins from WT cell lysates (lanes 1, 2, 5 and 6) and from N111G cell lysates (lanes 3, 4, 7 and 8) (1×10$^5$ cells/lane) were resolved on a 7% polyacrylamide gel and immunoblotted with an anti-IP$_3$RIII antibody (upper panel) or with an anti-actin antibody (lower panel) as described under "Material and Methods". Bands corresponding to IP$_3$RIII (~230 kDa) and actin (42 kDa) are identified with black and white arrows, respectively. Panel B shows the densitometric analysis of IP$_3$RIII, performed as described under "Material and Methods" and expressed as means ± SEM of relative units of integrated peaks (ratio of IP$_3$RIII/actin densities). These results are representative of at least three independent experiments. *, P < 0.05 compared with the respective EXP3174-untreated cells.
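For the homologous dose-displacement design of Fig. 8 (the same ligand serving as tracer and competitor), $K_D$ and $B_{\max}$ can also be estimated directly from the displacement curve via the classic relations of DeBlasi et al. (1989), $K_D = \mathrm{IC}_{50} - [L]$ and $B_{\max} = B_0 \cdot \mathrm{IC}_{50}/[L]$. The sketch below applies these relations to illustrative numbers, not to the actual data of this study.

```python
# Minimal sketch: KD and Bmax from a homologous competition (dose-
# displacement) experiment, following DeBlasi et al. (1989):
#   KD = IC50 - L   and   Bmax = B0 * IC50 / L,
# valid when the labelled and unlabelled ligand share the same affinity.
# The numbers are illustrative, loosely echoing the ~2 nM [3H]IP3 used here.
HOT_LIGAND_NM = 2.0      # [3H]IP3 concentration in the assay
ic50_nm = 22.0           # midpoint of the displacement curve (illustrative)
b0_pmol_per_mg = 0.08    # specific binding with no competitor (illustrative)

kd_nm = ic50_nm - HOT_LIGAND_NM
bmax = b0_pmol_per_mg * ic50_nm / HOT_LIGAND_NM
print(f"KD = {kd_nm:.1f} nM, Bmax = {bmax:.2f} pmol/mg")
# KD = 20.0 nM -- the same ballpark as the Scatchard estimates quoted in
# the text (KD ~20 nM for both cell types).
```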
**Mechanism of IP$_3$RIII Down-regulation in N111G Cells**—Different mechanisms could be responsible for the down-regulation of IP$_3$RIII in N111G cells. RT-PCR analyses performed over a range of amplification cycles revealed that the mRNA levels of IP$_3$RIII were identical in WT and N111G cells, indicating no change in the expression or stability of the gene product (Fig. 11A). With a pulse-chase labelling approach, the rates of synthesis and degradation of IP$_3$RIII were analyzed. Metabolically labelled IP$_3$RIII appeared within about 30 min and gradually increased for 4 h (Fig. 11B). Within this time period, the rate of appearance of metabolically labelled IP$_3$RIII did not differ between WT and N111G cells, indicating no change in the rate of synthesis of IP$_3$RIII.

Fig. 10. Ca$^{2+}$ responses and IP$_3$RIII expression in heterogeneous populations of G-418-resistant transfected cells. Heterogeneous populations (1.25×10$^6$ cells/assay) of G-418-resistant WT- (A and B) or N111G-AT$_1$ receptor-transfected cells (C and D) were loaded with Fura2/AM (5 μM) for 20 min at 37°C, washed by centrifugation, resuspended in a nominally Ca$^{2+}$-free medium and their intracellular Ca$^{2+}$ concentration was monitored upon stimulation with 100 μM CCh or upon addition of 1.8 mM CaCl$_2$, as indicated. These experiments were performed at 37°C and [Ca$^{2+}$]$_i$ variations were monitored with a Hitachi F-2000 spectrofluorometer as described under "Material and Methods". These typical traces are representative of at least three independent experiments done in triplicate. In panel E, proteins from lysates of heterogeneous populations of G-418-resistant WT- (lanes 1 and 2) or N111G-AT$_1$ receptor-transfected cells (lanes 3 and 4) were resolved on a 7% polyacrylamide gel and immunoblotted with an anti-IP$_3$RIII antibody (upper panel) or with an anti-actin antibody (lower panel) as described under "Material and Methods". Bands corresponding to IP$_3$RIII (~230 kDa) and actin (42 kDa) are identified with black and white arrows, respectively. This result is representative of three independent experiments.

FIG. 11. Mechanism of IP$_3$RIII down-regulation in N111G cells. RT-PCR analysis (from cycles 25 to 50) of IP$_3$RIII mRNA extracted from WT and N111G cells is shown in panel A. In panels B, C and E, WT and N111G cells were pulse-labelled with Expre$^{35}$S$^{35}$S-Protein labelling mix (50 µCi) for 0.5 to 6 h. Cell lysates were immunoprecipitated with an anti-IP$_3$RIII antibody and resolved on a 7% polyacrylamide gel. The gel was dried and subjected to fluorography for at least 5 days. In panel C, WT and N111G cells were pulse-labelled for 6 h and subsequently chased for 1 to 24 h. In panel D, WT and N111G cells were treated with cycloheximide (50 µg/ml) for the indicated times and IP$_3$RIII from cell lysates was revealed by immunoblot analysis as described in Fig. 9. In panel E, WT and N111G cells were pulsed for 6 h and the indicated inhibitors were applied to the cells together with cycloheximide (50 µg/ml) during an 8 h chase.
Final concentrations of the inhibitors were as follows: chloroquine (ChQ, 200 µM), NH$_4$Cl (5 mM), lactacystin (LC, 10 µM) and ALLN (30 µM). Bands corresponding to IP$_3$RIII (~230 kDa) are identified with black arrows and the actin control with a white arrow. These typical results are representative of at least three independent experiments.

At the end of a 6 h pulse-labelling, a 24 h chase showed that the level of metabolically labelled IP$_3$RIII declined more rapidly in N111G cells than in WT cells (Fig. 11C). Measurements made at 8, 12, 16, 20 and 24 h demonstrated different disappearance rates in the two cell types. After 24 h, pulse-labelled IP$_3$RIII had declined by 24% in WT cells and by 83% in N111G cells. In the presence of the inverse agonist EXP3174, the decline of pulse-labelled IP$_3$RIII in N111G cells was substantially reduced (data not shown). These results indicate that the down-regulation of IP$_3$RIII in N111G cells occurs via a protein degradation pathway. Immunoblot assays after pre-incubation of cells for different times with the protein synthesis inhibitor cycloheximide further suggested that IP$_3$RIII was degraded more rapidly in N111G cells than in WT cells (Fig. 11D). Again, in the presence of EXP3174, the degradation of immunoreactive IP$_3$RIII was reduced (data not shown). At the end of a 6 h pulse-labelling, an 8 h chase in the presence of cycloheximide revealed an important decline of the metabolically labelled IP$_3$RIII in N111G cells (Fig. 11E). The addition of chloroquine or NH$_4$Cl (two inhibitors of lysosomal activity) during the chase period offered good protection against degradation, whereas lactacystin and ALLN (two inhibitors of proteasomal activity) offered only weak protection. These results suggest that the main degradation pathway for the down-regulation of IP$_3$RIII in N111G cells involves the lysosome.

DISCUSSION

It is expected that the expression of a mutant receptor with constitutive activity would perturb cell homeostasis either by causing exaggerated downstream activities or by mediating adaptive refractory processes. In the study presented here, we showed that HEK-293 cells expressing the constitutively active N111G-AT$_1$ receptor exhibited a refractory state in their Ca$^{2+}$ signaling mechanism. Despite their spontaneous Ca$^{2+}$ oscillatory activity and their basal capacitative Ca$^{2+}$ entry activity, N111G cells had weaker responses than WT cells to low and high concentrations of Ang II. When the different phases of their Ang II-induced Ca$^{2+}$ response were analyzed, N111G cells displayed a lower release of intracellular Ca$^{2+}$ and a weaker capacitative Ca$^{2+}$ entry than WT cells. Because thapsigargin released the same amount of intracellular Ca$^{2+}$ and triggered a similar Ca$^{2+}$ entry in both cell types, the refractory state of N111G cells cannot be attributed to a lower content of their intracellular Ca$^{2+}$ pool nor to the desensitization of a component of their capacitative Ca$^{2+}$ entry mechanism. Relatively few studies have examined the underlying effects of the expression of a constitutively active G$_q$-coupled receptor on cellular Ca$^{2+}$ homeostasis. We previously observed that COS-7 cells expressing the N111G-AT$_1$ receptor produced a diminished intracellular Ca$^{2+}$ transient in response to Ang II (16).
In HEK-293 cells stably transfected with the N111A-AT$_1$ receptor, Ang II-induced intracellular Ca$^{2+}$ transients were no different from those obtained with cells expressing the wild-type AT$_1$ receptor (9). In our hands, the N111A-AT$_1$ receptor had a relatively weak constitutive activity (18% of maximal WT activity) that may not be strong enough to cause a significant adaptive response (16). The expression of a constitutively active mutant TRH receptor caused a desensitization of the Ca$^{2+}$ response to TRH (homologous) and also to CCh (heterologous) in AtT20 cells (17). The authors provided evidence that the desensitization process was dependent on the availability of extracellular Ca$^{2+}$ and on the activity of protein kinase C, but no specific Ca$^{2+}$-handling component or intracellular target was identified. The GTPase-deficient Q212L-G$_{16}\alpha$, which constitutively activates phospholipase C, creates a cell environment that may theoretically be similar to that created by a constitutively active G$_q$-coupled GPCR. NIH-3T3 cells stably transfected with Q212L-G$_{16}\alpha$ had a desensitized Ca$^{2+}$ response to ATP (18). These cells did not show any capacitative Ca$^{2+}$ entry under basal conditions and their intracellular Ca$^{2+}$ store was partially depleted. These different adaptive responses between Q212L-G$_{16}\alpha$ cells and N111G cells could be directly related to phenotypic differences between the two cell types. They could also be related to the fact that a specific GPCR may activate several downstream effectors through direct interactions with G proteins and also with diverse adaptor or scaffolding proteins (arrestins, A-kinase anchoring proteins, InaD, Homer, Janus kinase, etc.; for a review, see 19), whereas a specific G protein is known to interact with a more restricted set of downstream partners. Xenopus oocytes expressing the constitutively active Kaposi's sarcoma-associated herpes virus GPCR (KSHV-GPCR) showed homologous and heterologous (elicited by TRH or acetylcholine) desensitization of their Ca$^{2+}$ response, accompanied by an impaired response to IP$_3$ injection (20). The main cause of this impairment appeared to be a depletion of their thapsigargin-sensitive intracellular Ca$^{2+}$ pool. In HEK-293 cells expressing a wild-type TRH receptor, pre-exposure to a high concentration of TRH caused an adaptive response mainly due to the depletion of their intracellular Ca$^{2+}$ pool (21). Again, this adaptive response was very different from that of N111G cells, which showed a significant capacitative Ca$^{2+}$ entry activity under basal conditions and no apparent depletion of their intracellular Ca$^{2+}$ pool. N111G cells therefore possess an efficient store-operated Ca$^{2+}$ influx mechanism that contributes adequately to the refilling of their intracellular Ca$^{2+}$ pool. The refractory state of N111G cells was also revealed when their endogenous muscarinic (CCh) and purinergic (ATP) receptors were stimulated. These results indicated that the diminished agonist-induced Ca$^{2+}$ release activity observed in N111G cells is a heterologous desensitization phenomenon that is not directly related to the specific properties of the N111G-AT$_1$ receptor but rather to a deficient component downstream of the receptor. The G protein G$_q$ and phospholipase C are located immediately downstream of the receptor.
Agonist-dependent desensitization of $G_q\alpha$ or phospholipase C has been observed in different cell types (22-25). However, Rat-1 fibroblasts expressing a constitutively active $\alpha_{1B}$-adrenergic receptor did not show any desensitization of phospholipase C or any down-regulation of $G_q\alpha$ unless they were chronically stimulated with phenylephrine (24). These studies suggest that very strong stimulations are necessary to desensitize $G_q\alpha$ and phospholipase C. Because we showed that, in response to a high dose of Ang II, N111G cells could produce the same maximal amount of IP as WT cells, it is unlikely that the activities of the G protein $G_q$ and of phospholipase C are depressed in these cells. This interpretation is supported by immunoblot studies showing similar amounts of the G protein $G\alpha_q$ and of phospholipase C$\beta_3$ in WT and N111G cells (data not shown). The next component of the Ca$^{2+}$ signaling mechanism downstream of phospholipase C is the IP$_3$R. We showed that IP$_3$ released less Ca$^{2+}$ from permeabilized N111G cells than from permeabilized WT cells, suggesting that the refractory state of N111G cells could be related to the function of IP$_3$R. This possibility was supported by binding studies showing a diminished amount of IP$_3$R in N111G cells. Our immunoblot analysis further showed that IP$_3$RIII is less abundant in N111G cells than in WT cells. Agonist-induced down-regulation of IP$_3$R was first demonstrated in SH-SY5Y neuroblastoma cells activated with high concentrations of CCh (26). Other studies showed that chronic activation of different G$_q$-coupled receptors causes a down-regulation of IP$_3$R in different cell types (27-30). Agonist-induced IP$_3$R down-regulation occurs by a mechanism involving the proteasome pathway (31, 32). Interestingly, the mechanism responsible for the down-regulation of IP$_3$RIII in N111G cells appears to involve primarily the lysosome and, to a minor extent, the proteasome. This difference could be related to the fact that the N111G-AT$_1$ receptor chronically produces a sub-maximal stimulation, whereas agonist-induced IP$_3$R down-regulation was obtained with supra-maximal and relatively acute doses of agonists. Further studies are needed to clarify this question. Nonetheless, N111G cells represent another interesting model for studying the degradation pathways of IP$_3$R. In conclusion, it is known that prolonged elevations of intracellular Ca$^{2+}$ may be very deleterious for cells (33). The survival of cells expressing a constitutively active Ca$^{2+}$-mobilizing receptor is therefore dependent on some desensitization of their Ca$^{2+}$ signaling pathway. Our results, obtained with an N111G cell clone and also with a heterogeneous population of G-418-resistant N111G-AT$_1$ receptor-transfected cells, indicate that IP$_3$RIII is down-regulated in these cells, which nonetheless maintain a normal Ca$^{2+}$ concentration under basal conditions and can still respond, although less efficiently than WT cells, to Ang II and other Ca$^{2+}$-mobilizing stimuli. Interestingly, long-term treatments with the inverse agonist EXP3174 restored the CCh-induced Ca$^{2+}$ transient, the IP$_3$-induced Ca$^{2+}$ release and the level of expression of IP$_3$RIII in N111G cells. The time course of these recovery responses was consistent with de novo protein synthesis and with the metabolic turnover of IP$_3$R (30, 34).
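As a rough quantitative reading of the pulse-chase data above (24% vs. 83% loss of labelled IP$_3$RIII over a 24 h chase), the sketch below converts these fractional declines into apparent half-lives, assuming simple first-order decay; this back-of-the-envelope calculation is purely illustrative and was not part of the published analysis.

```python
# Back-of-the-envelope sketch: apparent IP3RIII half-lives from the 24 h
# pulse-chase data (24% decline in WT cells, 83% in N111G cells),
# assuming simple first-order (exponential) decay. Illustrative only;
# this calculation is not part of the published analysis.
import math

CHASE_H = 24.0

def half_life_h(fraction_lost: float, chase_h: float = CHASE_H) -> float:
    """t1/2 = chase * ln(2) / -ln(fraction remaining), for first-order decay."""
    remaining = 1.0 - fraction_lost
    return chase_h * math.log(2) / -math.log(remaining)

print(f"WT:    t1/2 ~ {half_life_h(0.24):.0f} h")   # ~61 h
print(f"N111G: t1/2 ~ {half_life_h(0.83):.1f} h")   # ~9.4 h
# Under this assumption, IP3RIII turnover is accelerated roughly 6.5-fold
# in N111G cells.
```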
While the activation of IP$_3$R plays a fundamental role in cellular Ca$^{2+}$ responses, its down-regulation appears to be a key mechanism to protect cells against the deleterious effects of chronic Ca$^{2+}$ elevations.

MATERIAL AND METHODS

**Materials**—The cDNA clones encoding the AT$_1$ receptor and the N111G-AT$_1$ receptor, both with an N-terminal FLAG epitope, were constructed in our laboratory as described previously (16). HEK-293 cells were from Qbiogene (QBI-HEK-293A cells; Carlsbad, CA). Dulbecco's Modified Eagle's Medium (DMEM), fetal bovine serum (FBS), geneticin (G-418 sulfate), lipofectamine, Met/Cys-free DMEM and penicillin-streptomycin-glutamine were from Gibco Life Technologies (Gaithersburg, MD). EN$^3$HANCE reagent, [$^3$H]IP$_3$ (23 Ci/mmol), myo-[$^3$H]inositol (65 Ci/mmol) and Expre$^{35}$S$^{35}$S-Protein labelling mix (1175 Ci/mmol) were from PerkinElmer (Boston, MA). IP$_3$ was from Alexis Biochemicals (San Diego, CA). Ang II, ATP, bacitracin, bovine serum albumin (BSA), chloroquine diphosphate salt, creatine phosphokinase, phosphocreatine, poly-L-lysine hydrobromide, saponin and thapsigargin were from Sigma-Aldrich (Oakville, ON). AG 1-X8 resin was from Bio-Rad (Mississauga, ON). ALLN, carbachol (CCh), fura2 (free acid), fura2/AM and lactacystin were from Calbiochem (San Diego, CA). Protease inhibitor cocktail (Complete™) and Expand High Fidelity DNA polymerase were from Roche Molecular Biochemicals (Laval, QC). Moloney murine leukemia virus reverse transcriptase was from Promega (Mississauga, ON). TRIzol® reagent was from Invitrogen Life Technologies (Burlington, ON). Immobilon-P polyvinylidene fluoride (PVDF) transfer membranes were from Millipore (Bedford, MA). Mouse anti-IP$_3$RIII antibody (recognizing an N-terminal epitope) was from BD Biosciences Transduction Laboratories (Mississauga, ON). Mouse anti-actin antibody was from Chemicon International (Temecula, CA). AMDEX™ sheep anti-mouse IgG antibody coupled to horseradish peroxidase and ECL Plus Western blotting detection reagents were from Amersham Biosciences (Piscataway, NJ). EXP3174 was a generous gift previously obtained from DuPont Merck Pharmaceutical Co. (Wilmington, DE). $^{125}$I-Ang II (1000 Ci/mmol) was prepared with IODO-GEN (Pierce, Rockford, IL) according to the method of Fraker and Speck (35) in an acetic acid buffer (pH 5.4) and purified by HPLC on a C-18 column (Waters, Mississauga, ON) as previously reported (36). The specific radioactivities of the radiolabelled peptides were determined by self-displacement and saturation binding experiments as described previously (37).

**Cell Culture**—HEK-293 clonal cell lines were cultured in complete DMEM (supplemented with 10% heat-inactivated FBS, 2 mM L-glutamine, 100 IU/ml penicillin, 100 μg/ml streptomycin and 0.4 mg/ml G-418) at 37°C in a humidified atmosphere containing 5% CO$_2$ and 95% air. To establish clonal cell lines expressing either the AT$_1$ receptor or the N111G-AT$_1$ receptor, HEK-293 cells at 60-70% confluence were transfected with 4 μg of the cDNA constructs and 25 μl of lipofectamine. Cells were grown for 36 h before selection with 0.8 mg/ml G-418 for 2 weeks. Most of these G-418-resistant cells were conserved as a heterogeneous population of stably transfected cells. Some of the G-418-resistant cells were seeded at a density of 0.5 cell per well into 96-well plates. G-418-resistant clones were amplified and tested for AT$_1$ receptor expression with a $^{125}$I-Ang II binding assay as previously described (16).
**Dynamic Video Imaging of Cytosolic Ca$^{2+}$**—Fluorescence from fura2-loaded cells was monitored as previously described (38). Briefly, HEK-293 cells were allowed to attach to glass coverslips (number 1) coated with poly-L-lysine and to grow in complete DMEM for 36-48 h before being washed twice with a HEPES-buffered physiological saline solution (HBSS: 20 mM HEPES at pH 7.4, 120 mM NaCl, 5.3 mM KCl, 0.8 mM MgSO$_4$, 1.8 mM CaCl$_2$ and 11.1 mM dextrose). The coverslips were clamped into a Teflon circular open-bottom chamber and cells were incubated with 0.2 µM fura2/AM for 20 min at room temperature in the dark. Cells were then washed and bathed in fresh HBSS for 20 min to ensure complete hydrolysis of the fura2/AM prior to mounting the Teflon chamber onto the stage of a Carl Zeiss Axiovert inverted microscope fitted with an Attofluor Digital Imaging and Photometry System (Attofluor Inc., Rockville, MD). The system allows data acquisition from up to 99 user-defined, variably sized regions of interest per field of view. Fluorescence from isolated fura2-loaded cells was monitored by videomicroscopy using alternating excitation wavelengths of 334 and 380 nm and recording emitted fluorescence at 510 nm. All experiments were done at room temperature and the data are expressed as the fura2 fluorescence ratio ($F_{334}/F_{380}$). Data acquisition was typically at 3 s intervals and lasted for 1600 s.

*Cytosolic [Ca$^{2+}$] Measurement*—HEK-293 cells (1.25×10$^6$ cells grown in 10-cm dishes for 24-40 h) were detached by a brief trypsin/EDTA treatment, resuspended in complete DMEM and washed by centrifugation for 4 min at 100×g before being incubated with 5 µM fura2/AM in an extracellular-like medium (ECM: 15 mM HEPES at pH 7.4, 140 mM NaCl, 5 mM KCl, 1 mM MgCl$_2$, 10 mM dextrose, 1.8 mM CaCl$_2$ and 0.1% BSA) for 20 min at 37°C. After a wash by centrifugation, cells were resuspended in ECM and incubated for 20 min at 37°C to ensure complete hydrolysis of the fura2/AM. Cells were then centrifuged again and resuspended in 2 ml of ECM or of nominally Ca$^{2+}$-free ECM (identical in composition except for the omission of CaCl$_2$). The cell suspension was gently stirred in a quartz cuvette maintained at 37°C while [Ca$^{2+}$]$_i$ was monitored on a Hitachi F-2000 spectrofluorometer (Hialeah, FL) with alternating excitation wavelengths of 340 and 380 nm and an emission wavelength of 510 nm to measure changes in intracellular fura2 fluorescence intensity ($F$). At the end of each recording, the maximal fluorescence ratio ($R_{\text{max}}$) and minimal fluorescence ratio ($R_{\text{min}}$) were determined by adding successively 0.1% Triton X-100 and 10 mM EGTA to the cell suspensions. The following equation from Grynkiewicz et al. (39) was used to relate the intensity ratios to Ca$^{2+}$ levels:
$$[\text{Ca}^{2+}] = K_D \times \frac{R - R_{\text{min}}}{R_{\text{max}} - R} \times \frac{F\lambda_{2,\text{min}}}{F\lambda_{2,\text{max}}}$$
where $R$ represents the fluorescence intensity ratio $F\lambda_1$ (340 nm)/$F\lambda_2$ (380 nm), $F\lambda_{2,\text{min}}$ and $F\lambda_{2,\text{max}}$ are the 380-nm fluorescence intensities measured under Ca$^{2+}$-free and Ca$^{2+}$-saturating conditions, respectively, and $K_D$ is the Ca$^{2+}$ dissociation constant of the indicator (224 nM). Drugs were added in small volumes (<20 µl) of concentrated stocks (dissolved either in water or dimethylsulfoxide).
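For illustration only, the calibration above can be scripted as follows. This is a minimal sketch of the Grynkiewicz conversion, not code from the original study; the function name and the calibration values in the example call are hypothetical.

```python
import numpy as np

def fura2_to_ca(R, R_min, R_max, F380_min, F380_max, Kd=224.0):
    """Convert fura2 340/380 ratios to [Ca2+] (nM), per Grynkiewicz et al. (39).

    R                  -- measured 340/380 nm fluorescence ratio(s)
    R_min, R_max       -- ratios at zero Ca2+ (EGTA) and saturating Ca2+ (Triton)
    F380_min, F380_max -- 380-nm intensities at zero and saturating Ca2+
    Kd                 -- Ca2+ dissociation constant of fura2 (224 nM)
    """
    R = np.asarray(R, dtype=float)
    return Kd * (R - R_min) / (R_max - R) * (F380_min / F380_max)

# hypothetical calibration values, for illustration only
print(fura2_to_ca(R=[0.8, 1.5, 3.0], R_min=0.4, R_max=7.0,
                  F380_min=1200.0, F380_max=180.0))
```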
**Intracellular IP Accumulation Measurement**—HEK-293 cells ($5 \times 10^5$ cells per well into 6-well plates) were labelled for 18-24 h in inositol-free DMEM containing 15 µCi/ml of *myo*-[$^3$H]inositol. Cells were then stimulated with 100 nM Ang II for 20 min at 37°C in Medium 199 containing 25 mM HEPES at pH 7.4, 10 mM LiCl and 0.1% BSA. Incubations were stopped by addition of ice-cold perchloric acid (5% [v/v]). Water-soluble IP were then extracted with an equal volume of a 1:1 mixture of 1,1,2-trichlorotrifluoroethane and tri-*n*-octylamine. The samples were vigorously mixed and centrifuged at 15,000×g for 15 min at 4°C. The upper phase was applied to an AG 1-X8 resin column and the IP were sequentially eluted by addition of ammonium formate/formic acid solutions of increasing ionic strength. The radioactive content of each sample was evaluated with a Beckman LS 6800 liquid scintillation counter (Fullerton, CA).

IP$_3$-induced Ca$^{2+}$ Release—HEK-293 cells ($20 \times 10^6$ cells grown in 15-cm dishes) were detached by a brief trypsin/EDTA treatment, resuspended in 10 ml of DMEM and washed by centrifugation. After rinsing with 5 ml of a cytosol-like buffer (20 mM Tris/HCl at pH 7.4, 110 mM KCl, 10 mM NaCl, 5 mM KH$_2$PO$_4$ and 2 mM MgCl$_2$), cells were resuspended in 2 ml of permeabilization buffer composed of the cytosol-like buffer supplemented with 50 µg/ml of saponin, 0.5 µM fura2 acid, 20 units of creatine kinase and 10 mM phosphocreatine. After 3 min of permeabilization, less than 10% of the cells excluded Trypan Blue. Ca$^{2+}$ uptake (upon ATP addition) and release (upon IP$_3$ addition) were monitored at 37°C with a Hitachi F-2000 spectrofluorometer as previously described (40).

[$^3$H]IP$_3$ Binding Assay—HEK-293 cells were permeabilized in a binding buffer (25 mM Tris/HCl at pH 8.5, 110 mM KCl, 10 mM NaCl, 5 mM KH$_2$PO$_4$, 1 mM EDTA) supplemented with 50 µg/ml saponin for 10 min at 37°C. Cells ($20 \times 10^6$ cells/0.5 ml) were then incubated for 15 min at 0°C in the presence of $\sim 2$ nM [$^3$H]IP$_3$ (15,000 cpm) and increasing concentrations of unlabelled IP$_3$ (ranging from 0.1 nM to 1 µM). Nonspecific binding was determined in the presence of 2 µM IP$_3$. Incubations were terminated by centrifugation at 15,000×g for 5 min at 4°C. The pellets were solubilized with 1% Triton X-100 and the receptor-bound radioactivity was evaluated with a Beckman LS 6800 liquid scintillation counter.
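Displacement data of this kind are commonly analyzed by fitting a one-site competition model to estimate the IC$_{50}$ and the specific-binding window. The sketch below is our illustration, not the authors' analysis code; the simulated counts and the resulting fitted values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(log_conc, top, bottom, log_ic50):
    """One-site competition: bound cpm as a function of log10[unlabelled IP3]."""
    return bottom + (top - bottom) / (1.0 + 10.0**(log_conc - log_ic50))

# hypothetical displacement data, 0.1 nM to 1 uM unlabelled IP3 (in M)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000]) * 1e-9
cpm = np.array([14800, 14300, 13100, 10500, 7200, 4100, 2400, 1700, 1500])

popt, _ = curve_fit(one_site, np.log10(conc), cpm,
                    p0=[cpm.max(), cpm.min(), np.log10(5e-9)])
top, bottom, log_ic50 = popt
print(f"IC50 ~ {10**log_ic50 * 1e9:.1f} nM; "
      f"specific binding ~ {top - bottom:.0f} cpm")
```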
Electrophoresis and Immunoblotting—HEK-293 cells ($10^7$ cells/ml) were solubilized for 1 h at 4°C in solubilization buffer (50 mM Tris/HCl at pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100 and the protease inhibitor cocktail Complete™, 1×). Insoluble material was pelleted by centrifugation at 35,000×g for 30 min at 4°C. The supernatant was mixed with 1 ml of 2× Laemmli buffer (60 mM Tris/HCl at pH 6.8, 10% glycerol, 2% SDS, 125 mM dithiothreitol and 0.3% Bromophenol Blue) and heated for 5 min at 95°C. Samples were loaded onto a 7% polyacrylamide gel that was subjected to a constant current of 15 mA for 105 min. Proteins were electro-transferred to a PVDF membrane at a constant current of 0.5 A for 24 h at 4°C. Blotted membranes were incubated for 2 h at room temperature in PBS-T (PBS containing 0.1% Tween-20) supplemented with 5% non-fat dried milk. Blots were then incubated overnight at 4°C with either the anti-IP$_3$RIII antibody or the anti-actin antibody. After extensive washing with PBS-T, the blots were incubated for 1 h at room temperature with a peroxidase-conjugated secondary antibody; after further extensive washing with PBS-T, the immunostained bands were revealed with ECL Plus according to the manufacturer's instructions on a BioMax ML film. Autoradiograms were digitized on a Hewlett-Packard ScanJet 5100c and integrated peak areas were determined using the Quantity One gel analysis software (version 4.2; Bio-Rad, Mississauga, ON).

**Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR)**—Total RNA was isolated from WT and N111G cells using TRIzol® reagent according to the manufacturer's instructions. cDNA synthesis was carried out using Moloney murine leukemia virus reverse transcriptase with random hexamers as primers according to the manufacturer's instructions. PCR was performed using Expand High Fidelity DNA polymerase with standard buffer. For each reaction, 1 µg of cDNA template was used, and cycling conditions consisted of 94°C for 30 s, 55°C for 1 min, and 72°C for 2 min for 50 cycles of PCR carried out with the iCycler system (Bio-Rad). Approximately 5 µl of product was collected every 5 cycles (beginning at cycle 25) and run on a 2% agarose gel in 0.5× Tris-borate/EDTA. The specific oligonucleotide primer sets used for amplification of IP$_3$RIII (nt 1975-2534) were as follows: (sense) 5'-TACCCCAAGAGCTCATCTGCAAG-3' and (antisense) 5'-ACTTGTTCTTCTTGTCACTCTGGGG-3'. The separated PCR fragments were visualized using the Gel Doc system from Bio-Rad.

**Metabolic Labelling and Immunoprecipitation**—Metabolic labelling experiments were performed on WT and N111G cells ($1.5 \times 10^6$ cells in 6-cm dishes). Cells were incubated for 1 h with Met/Cys-free DMEM supplemented with 2 mM L-glutamine, 100 IU/ml penicillin, 100 µg/ml streptomycin and 0.4 mg/ml G-418 before adding 50 µCi of EXPRE$^{35}$S$^{35}$S protein labelling mix for the indicated periods of time (pulse). The chase was done by replacing the pulse labelling medium with DMEM without FBS, supplemented with different inhibitors as indicated. After washing twice, cell lysates were prepared as described above and immunoprecipitated with the anti-IP$_3$RIII antibody (5 µl) on wet protein A/G-PLUS agarose beads (50 µl; Santa Cruz Biotechnology, Santa Cruz, CA) for 16 h at 4°C under constant rotation. The agarose beads were sedimented by centrifugation at 5,000×g for 2 min and the immune complexes were washed once with ice-cold solubilization buffer before resuspension in 45 µl of 1× Laemmli buffer. Labelled proteins were resolved by SDS-PAGE on a 7% polyacrylamide gel. Separated proteins were fixed before the gels were treated with EN$^3$HANCE for 1 h, dried for 2 h at 60°C under a vacuum and exposed (for at least 5 days) on a BioMax MS film (Eastman Kodak, Rochester, NY) with an intensifying screen. Integrated peak areas were determined using the Quantity One gel analysis software.

REFERENCES

1. de Gasparo M, Catt KJ, Inagami T, Wright JW, Unger T 2000 International union of pharmacology. XXIII. The angiotensin II receptors. Pharmacol Rev 52:415-472
2. Burnier M 2001 Angiotensin II type 1 receptor blockers. Circulation 103:904-912
3. Kojima I, Kojima K, Kreutter D, Rasmussen H 1984 The temporal integration of the aldosterone secretory response to angiotensin occurs via two intracellular pathways. J Biol Chem 259:14448-14457
4. Balla T, Baukal AJ, Guillemette G, Morgan RO, Catt KJ 1986 Angiotensin-stimulated production of inositol trisphosphate isomers and rapid metabolism through inositol 4-monophosphate in adrenal glomerulosa cells. Proc Natl Acad Sci USA 83:9323-9327
5. Putney JW Jr 2003 Capacitative calcium entry in the nervous system. Cell Calcium 34:339-344
6. Seifert R, Wenzel-Seifert K 2002 Constitutive activity of G-protein-coupled receptors: cause of disease and common property of wild-type receptors. Naunyn-Schmiedeberg's Arch Pharmacol 366:381-416
7. Noda K, Feng YH, Liu XP, Saad Y, Husain A, Karnik SS 1996 The active state of the AT1 angiotensin receptor is generated by angiotensin II induction. Biochemistry 35:16435-16442
8. Groblewski T, Maigret B, Larguier R, Lombard C, Bonnafous JC, Marie J 1997 Mutation of Asn111 in the third transmembrane domain of the AT1A angiotensin II receptor induces its constitutive activation. J Biol Chem 272:1822-1826
9. Balmforth AJ, Lee AJ, Warburton P, Donnelly D, Ball SG 1997 The conformational change responsible for AT1 receptor activation is dependent upon two juxtaposed asparagine residues on transmembrane helices III and VII. J Biol Chem 272:4245-4251
10. Kjelsberg MA, Cotecchia S, Ostrowski J, Caron MG, Lefkowitz RJ 1992 Constitutive activation of the alpha 1B-adrenergic receptor by all amino acid substitutions at a single site. Evidence for a region which constrains receptor activation. J Biol Chem 267:1430-1433
11. Han M, Smith SO, Sakmar TP 1998 Constitutive activation of opsin by mutation of methionine 257 on transmembrane helix 6. Biochemistry 37:8253-8261
12. Huang P, Li J, Chen C, Visiers I, Weinstein H, Liu-Chen LY 2001 Functional role of a conserved motif in TM6 of the rat mu opioid receptor: constitutively active and inactive receptors result from substitutions of Thr6.34(279) with Lys and Asp. Biochemistry 40:13501-13509
13. Gether U, Kobilka BK 1998 G protein-coupled receptors. II. Mechanism of agonist activation. J Biol Chem 273:17979-17982
14. Robinson PR, Cohen GB, Zhukovsky EA, Oprian DD 1992 Constitutively active mutants of rhodopsin. Neuron 9:719-725
15. Arvanitakis L, Geras-Raaka E, Varma A, Gershengorn MC, Cesarman E 1997 Human herpesvirus KSHV encodes a constitutively active G-protein-coupled receptor linked to cell proliferation. Nature 385:347-350
16. Auger-Messier M, Clement M, Lanctot PM, Leclerc PC, Leduc R, Escher E, Guillemette G 2003 The constitutively active N111G-AT1 receptor for angiotensin II maintains a high affinity conformation despite being uncoupled from its cognate G protein Gq/11alpha. Endocrinology 144:5277-5284
17. Grimberg H, Zaltsman I, Lupu-Meiri M, Gershengorn MC, Oron Y 1999 Inverse agonist abolishes desensitization of a constitutively active mutant of thyrotropin-releasing hormone receptor: role of cellular calcium and protein kinase C. Br J Pharmacol 126:1097-1106
18. Lobaugh LA, Eisfelder B, Gibson K, Johnson GL, Putney JW Jr 1996 Constitutive activation of a phosphoinositidase C-linked G protein in murine fibroblasts decreases agonist-stimulated Ca2+ mobilization. Mol Pharmacol 50:493-500
19. Pierce KL, Premont RT, Lefkowitz RJ 2002 Seven-transmembrane receptors. Nat Rev Mol Cell Biol 3:639-650
20. Lupu-Meiri M, Silver RB, Simons AH, Gershengorn MC, Oron Y 2001 Constitutive signaling by Kaposi's sarcoma-associated herpesvirus G-protein-coupled receptor desensitizes calcium mobilization by other receptors. J Biol Chem 276:7122-7128
21. Yu R, Hinkle PM 1997 Desensitization of thyrotropin-releasing hormone receptor-mediated responses involves multiple steps. J Biol Chem 272:28301-28307
22. Galas MC, Harden TK 1995 Receptor-induced heterologous desensitization of receptor-regulated phospholipase C. Eur J Pharmacol 291:175-182
23. Wise A, Lee TW, MacEwan DJ, Milligan G 1995 Degradation of G11 alpha/Gq alpha is accelerated by agonist occupancy of alpha 1A/D, alpha 1B, and alpha 1C adrenergic receptors. J Biol Chem 270:17196-17203
24. Lee TW, Wise A, Cotecchia S, Milligan G 1996 A constitutively active mutant of the alpha 1B-adrenergic receptor can cause greater agonist-dependent down-regulation of the G-proteins Gq alpha and G11 alpha than the wild-type receptor. Biochem J 320:79-86
25. Kai H, Fukui T, Lassegue B, Shah A, Minieri CA, Griendling KK 1996 Prolonged exposure to agonist results in a reduction in the levels of the Gq/G11 alpha subunits in cultured vascular smooth muscle cells. Mol Pharmacol 49:96-104
26. Wojcikiewicz RJ, Furuichi T, Nakade S, Mikoshiba K, Nahorski SR 1994 Muscarinic receptor activation down-regulates the type I inositol 1,4,5-trisphosphate receptor by accelerating its degradation. J Biol Chem 269:7963-7969
27. Lee B, Gai W, Laychock SG 2001 Proteasomal activation mediates down-regulation of inositol 1,4,5-trisphosphate receptor and calcium mobilization in rat pancreatic islets. Endocrinology 142:1744-1751
28. Sipma H, Deelman L, Smedt HD, Missiaen L, Parys JB, Vanlingen S, Henning RH, Casteels R 1998 Agonist-induced down-regulation of type 1 and type 3 inositol 1,4,5-trisphosphate receptors in A7r5 and DDT1 MF-2 smooth muscle cells. Cell Calcium 23:11-21
29. Willars GB, Royall JE, Nahorski SR, El-Gehani F, Everest H, McArdle CA 2001 Rapid down-regulation of the type I inositol 1,4,5-trisphosphate receptor and desensitization of gonadotropin-releasing hormone-mediated Ca2+ responses in alpha T3-1 gonadotropes. J Biol Chem 276:3123-3129
30. Wojcikiewicz RJ, Ernst SA, Yule DI 1999 Secretagogues cause ubiquitination and down-regulation of inositol 1,4,5-trisphosphate receptors in rat pancreatic acinar cells. Gastroenterology 116:1194-1201
31. Bokkala S, Joseph SK 1997 Angiotensin II-induced down-regulation of inositol trisphosphate receptors in WB rat liver epithelial cells. Evidence for involvement of the proteasome pathway. J Biol Chem 272:12454-12461
32. Oberdorf J, Webster JM, Zhu CC, Luo SG, Wojcikiewicz RJ 1999 Down-regulation of types I, II and III inositol 1,4,5-trisphosphate receptors is mediated by the ubiquitin/proteasome pathway. Biochem J 339:453-461
33. Orrenius S, Zhivotovsky B, Nicotera P 2003 Regulation of cell death: the calcium-apoptosis link. Nat Rev Mol Cell Biol 4:552-565
34. Joseph SK 1994 Biosynthesis of the inositol trisphosphate receptor in WB rat liver epithelial cells. J Biol Chem 269:5673-5679
35. Fraker PJ, Speck JC 1978 Protein and cell membrane iodinations with a sparingly soluble chloroamide, 1,3,4,6-tetrachloro-3a,6a-diphenylglycoluril. Biochem Biophys Res Commun 80:849-857
36. Laporte SA, Servant G, Richard DE, Escher E, Guillemette G, Leduc R 1996 The tyrosine within the NPXnY motif of the human angiotensin II type 1 receptor is involved in mediating signal transduction but is not essential for internalization. Mol Pharmacol 49:89-95
37. Boulay G, Chrétien L, Richard DE, Guillemette G 1994 Short-term desensitization of the angiotensin II receptor of bovine adrenal glomerulosa cells corresponds to a shift from a high to a low affinity state. Endocrinology 135:2130-2136
38. Zhu X, Jiang M, Peyton M, Boulay G, Hurst R, Stefani E, Birnbaumer L 1996 trp, a novel mammalian gene family essential for agonist-activated capacitative Ca2+ entry. Cell 85:661-671
39. Grynkiewicz G, Poenie M, Tsien RY 1985 A new generation of Ca2+ indicators with greatly improved fluorescence properties. J Biol Chem 260:3440-3450
40. Guillemette G, Balla T, Baukal AJ, Spat A, Catt KJ 1987 Intracellular receptors for inositol 1,4,5-trisphosphate in angiotensin II target tissues. J Biol Chem 262:1010-1015

FOOTNOTES

Abbreviations: Ang II, angiotensin II; AT$_1$ receptor, angiotensin II type-1 receptor; CAM, constitutively active mutant; CCh, carbachol; ECM, extracellular-like medium; FBS, fetal bovine serum; GPCR, G protein-coupled receptor; HBSS, HEPES-buffered physiological saline solution; HEK, human embryonic kidney; IP, inositol phosphate; IP$_3$, inositol 1,4,5-trisphosphate; IP$_3$R, inositol 1,4,5-trisphosphate receptor; IP$_3$RIII, type III IP$_3$R; PBS-T, PBS containing 0.1% Tween-20; WT, wild-type.

ACKNOWLEDGMENTS

This work was supported by the Canadian Institutes of Health Research (CIHR). M.A.-M. is the recipient of a studentship from the CIHR. G.A. is the recipient of a studentship from the Natural Sciences and Engineering Research Council of Canada. R.L. is a Senior Scholar of the Fonds de la Recherche en Santé du Québec (FRSQ). E.E. is the recipient of a J.C. Edwards Chair in cardiovascular research. This work is part of the Ph.D. thesis of M.A.-M.

ARTICLE 3 – FOREWORD

Article status: accepted for publication in Experimental Cell Research.

Reference: Mannix Auger-Messier, Eric S. Turgeon, Richard Leduc, Emanuel Escher, and Gaetan Guillemette, The Constitutively Active N111G-AT₁ Receptor for Angiotensin II Modifies Cellular Morphology and Cytoskeletal Organization of HEK-293 Cells.

Contribution: I participated actively in the design of this study by planning all of the experiments and providing 80% of the results presented in this article. I wrote the first draft of the manuscript.

Ms. No.: ECR-05-29R1
Title: The constitutively active N111G-AT1 receptor for angiotensin II modifies the morphology and cytoskeletal organization of HEK-293 cells
Corresponding Author: Professor Gaetan Guillemette
Authors: Mannix Auger-Messier, PhD; Eric S Turgeon, MSc; Richard Leduc, PhD; Emanuel Escher, PhD;

Dear Professor Guillemette,

I am pleased to inform you that your manuscript has been accepted for publication in Experimental Cell Research. Galley proofs will be sent to you in due course. Thank you for submitting your work to Experimental Cell Research.

Yours Sincerely,
Graham Carpenter, Ph.D.
Associate Editor for North America, Experimental Cell Research
ECR Editorial Office, 525 B St., Suite 1900, San Diego, CA 92101
tel: 619-699-6793; fax: 619-699-6211; email@example.com

The constitutively active N111G-AT$_1$ receptor for angiotensin II modifies the morphology and cytoskeletal organization of HEK-293 cells. Mannix Auger-Messier, Eric S.
Turgeon, Richard Leduc, Emanuel Escher, and Gaetan Guillemette* Department of Pharmacology, Faculty of Medicine, Université de Sherbrooke, Sherbrooke, Quebec, Canada, J1H 5N4

Running title: Morphological changes caused by N111G-AT$_1$

Keywords – constitutively active mutant GPCR, AT$_1$ receptor, N111G-AT$_1$, inverse agonist, EXP3174, cell-cell contact, cell morphology, cytoskeleton, actin reorganization

* To whom correspondence and reprint requests should be addressed: Gaetan Guillemette, Ph.D., Department of Pharmacology, Faculty of Medicine, Université de Sherbrooke, 3001, 12th Avenue North, Sherbrooke, Quebec, Canada, J1H 5N4, Tel.: (819) 564-5347, Fax: (819) 564-5400, E-mail: firstname.lastname@example.org

Abstract

The expression of a constitutively active G protein-coupled receptor is expected to trigger diverse cellular changes ranging from normal to adaptive responses. We report that confluent HEK-293 cells stably expressing the constitutively active mutant N111G-AT$_1$ receptor for angiotensin II spontaneously exhibited dramatic morphological changes and cytoskeletal reorganization. Phase-contrast microscopy revealed that these cells formed a dense monolayer, whereas cells expressing the WT-AT$_1$ receptor displayed large intercellular spaces and numerous filopodia. Confocal microscopy revealed an elaborate web of polymerized actin at the apical and basolateral surfaces of cells expressing the N111G-AT$_1$ receptor. Interestingly, these phenotypic changes were prevented by culturing the cells in the presence of the inverse agonist EXP3174. Similar morphologic rearrangements and *de novo* polymerized actin structures were found in Ang II-stimulated cells expressing the WT-AT$_1$ receptor. We further showed that AT$_1$ receptor-induced cell-cell contact formation did not require an increase in intracellular Ca$^{2+}$ concentration or the activity of protein kinase C. However, pretreatment with Y-27632 revealed that Rho-kinase activity was required for cell-cell contact formation upon AT$_1$ receptor activation. These observations demonstrate that the expression of the constitutively active mutant N111G-AT$_1$ receptor had a significant impact on the morphology and cytoskeletal organization of HEK-293 cells, possibly via a mechanism involving the activity of Rho-kinase.

Introduction

Numerous physiological processes are regulated by G protein-coupled receptors (GPCRs) via multiple downstream signaling pathways. Consequently, unregulated GPCR signaling may contribute to diverse pathologies such as peptic ulcer [1], asthma [2], left ventricular hypertrophy [3], and cancer [4]. Not surprisingly, the GPCR superfamily is one of the most common targets of therapeutic drugs. Some human diseases are caused by naturally occurring GPCR mutants with increased constitutive (agonist-independent) activity (e.g., the L457K-luteinizing hormone receptor in male precocious puberty [5], the H223R-parathyroid hormone receptor in Jansen-type metaphyseal chondrodysplasia [6], and the A843E-calcium-sensing receptor in autosomal dominant hypocalcemia [7]). Numerous constitutively active mutant GPCRs have been discovered by either random or site-directed mutagenesis [8-11]. Constitutively active mutant GPCRs, which are defined with respect to their coupling with G proteins, have extended the conceptual framework for understanding the theoretical [12] and structural [13, 14] basis of GPCR activation.
The AT$_1$ receptor is a GPCR that mediates virtually all known physiological actions of angiotensin II (Ang II) in cardiovascular, renal, neuronal, and endocrine target cells [15]. The AT$_1$ receptor rapidly evokes Ca$^{2+}$ mobilization and PKC activation through its productive coupling with the heterotrimeric G protein G$_{q/11}$. The AT$_1$ receptor displays a modest constitutive activity that is noticeable only at high expression levels [16]. The substitution of Asn111 (in the third transmembrane domain) with the smaller residue Gly confers strong constitutive activity on the N111G-AT$_1$ receptor [17, 18], which behaves like the ligand-activated WT-AT$_1$ receptor in terms of desensitization [19]. We previously showed that the N111G-AT$_1$ receptor maintains a high affinity conformation even when it is uncoupled from its cognate G protein G$_{q/11}$ [20]. This property was exploited to study the active conformation of the AT$_1$ receptor with the substituted-cysteine accessibility method [21, 22]. Moreover, HEK-293 cells expressing the constitutively active N111G-AT$_1$ receptor (N111G cells) show diminished agonist-induced Ca$^{2+}$ mobilization and accelerated lysosomal-dependent degradation of the inositol 1,4,5-trisphosphate receptor [23]. These alterations may be part of an adaptive response that protects N111G cells against the deleterious effects of chronic intracellular Ca$^{2+}$ elevations. Importantly, a prolonged (24–48 h) treatment of N111G cells with EXP3174 (an inverse agonist of the AT$_1$ receptor) completely restored their agonist-induced Ca$^{2+}$ mobilization activity. We noticed striking morphological differences between monolayers of N111G cells and monolayers of cells expressing the WT-AT$_1$ receptor (WT cells). The purpose of the present study was therefore to identify the underlying mechanisms responsible for the morphological changes observed in HEK-293 cells expressing the constitutively active N111G-AT$_1$ receptor.

Materials and Methods

*Materials*—cDNA clones encoding the AT$_1$ and N111G-AT$_1$ receptors, both of which had an N-terminal FLAG epitope, were constructed in our laboratory as described previously [20]. HEK-293 cells were from Qbiogene (QBI-HEK-293A cells; Carlsbad, CA). Dulbecco’s Modified Eagle’s Medium (DMEM), fetal bovine serum (FBS), G-418 sulfate, lipofectamine, and penicillin-streptomycin-glutamine were from Gibco Life Technologies (Gaithersburg, MD). Ang II and bovine serum albumin (BSA) were from Sigma-Aldrich (Oakville, ON). BAPTA-AM, Ro-31-8425, and Y-27632 were from Calbiochem (San Diego, CA). EXP3174 was a generous gift from DuPont Merck Pharmaceutical Co. (Wilmington, DE). Alexa Fluor® 568 phalloidin was from Molecular Probes (Eugene, OR). VECTASHIELD® Hard+Set Mounting Medium was from Vector Laboratories Inc. (Burlingame, CA).

*Cell Cultures*—The clonal HEK-293 cell lines expressing either the WT-AT$_1$ receptor or the N111G-AT$_1$ receptor were established as previously described [23]. The G-418-resistant clonal cell lines were cultured in complete DMEM (supplemented with 10% heat-inactivated FBS, 2 mM L-glutamine, 100 IU/ml penicillin, 100 µg/ml streptomycin, and 0.4 mg/ml G-418) at 37°C in a humidified atmosphere containing 5% CO$_2$ and 95% air.
To evaluate the proliferation rate of both clonal cell lines, HEK-293 cells grown for varying periods of time were detached by a brief trypsin/EDTA treatment and resuspended in complete DMEM before being counted with a Levy hemacytometer (Hausser Scientific, VWR International Inc., Montreal, QC). To evaluate the total cellular protein content, cells were washed twice with ice-cold PBS and lysed with 1 N NaOH. Triplicate samples were diluted to 0.25 N NaOH and protein concentrations were evaluated using the Lowry method [24] with BSA as a standard.

*Phase-Contrast Microscopy*—The morphology of clonal cell line monolayers grown in 6-cm plastic dishes was analyzed by phase-contrast microscopy using an Axioskop 2 fluorescent microscope (Carl Zeiss Inc.; Thornwood, NY) with a 40× objective (Nikon; Montreal, QC). HEK-293 cells were treated with Ang II, EXP3174, or Y-27632 for the indicated periods of time at 37°C. Photomicrographs were taken using a digital camera (Empix Imaging Inc.; Niagara, NY) and enhanced with SPOT software (Diagnostic Instruments; Sterling Heights, MI).

*Confocal Microscopy*—The cytoskeletal organization of clonal cell line monolayers grown on 22-mm coverslips was analyzed by confocal microscopy. Quiescent cells and cells treated with Ang II or EXP3174 (for the indicated periods of time at 37°C) were fixed in 3.7% paraformaldehyde in phosphate-buffered saline (PBS: 1.76 mM KH$_2$PO$_4$, 10.14 mM Na$_2$HPO$_4$, 2.68 mM KCl, and 136.8 mM NaCl) for 15 min at room temperature (all subsequent steps were done at room temperature). The cells were permeabilized with 0.2% Triton X-100 in PBS for 20 min, washed three times with PBS for 10 min, and incubated with 0.5% BSA in PBS for 2 h. The cells were then incubated with Alexa Fluor®568 phalloidin (1 unit) for 45 min before being washed three times with PBS for 10 min. The coverslips were mounted on microscope slides using VECTASHIELD® Hard+Set Mounting Medium, and the cells were examined with a scanning confocal microscope (NORAN Instruments Inc.; Middleton, WI) equipped with a krypton/argon laser and coupled to an inverted microscope (Carl Zeiss Inc.) with a 100× oil immersion objective (Nikon). Specimens were excited at 568 nm and emitted fluorescence was measured using a 590 nm long-pass barrier filter. Optical sections were collected at 0.2 μm intervals with a 10 μm pinhole aperture (minimal opening; maximal confocal component). Digitized 512×480 pixel images were obtained with 256 times line averaging and enhanced using INTERVISION software (NORAN Instruments Inc.) on a Silicon Graphics O2 workstation (Mountain View, CA).

Results

Morphological Changes in N111G Cells—Since several transcription factors and their regulators are modulated by Ca$^{2+}$, it was conceivable that stable expression of the constitutively active mutant N111G-AT$_1$ receptor could affect the growth profile of HEK-293 cells. After seeding at a relatively low density ($1.25 \times 10^6$ cells per 10-cm dish), WT and N111G cells displayed a similar growth profile with an initial doubling time of 24 h (Fig. 1A). Once they had formed a monolayer ($30 \times 10^6$ cells per dish after approximately 5 days), contact inhibition slowed their proliferation (Fig. 1A). Also, WT and N111G cells had a similar rate of protein synthesis (Fig. 1B). These data indicated that the mitogenic and trophic responses of N111G cells were similar to those of WT cells.
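As a simple illustration of how an initial doubling time such as the reported 24 h can be derived from serial cell counts, the sketch below fits an exponential growth model; the time points and counts are hypothetical, not the data of Fig. 1.

```python
import numpy as np

def doubling_time(t_hours, counts):
    """Fit ln(N) = ln(N0) + k*t by least squares; return ln(2)/k in hours."""
    k, _ = np.polyfit(np.asarray(t_hours, dtype=float),
                      np.log(np.asarray(counts, dtype=float)), 1)
    return np.log(2.0) / k

# hypothetical early-phase counts (cells per 10-cm dish)
t = [24, 48, 72, 96]              # hours after seeding
n = [2.5e6, 5.1e6, 9.8e6, 2.0e7]  # cell counts
print(f"initial doubling time ~ {doubling_time(t, n):.1f} h")
```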
Despite the fact that both clonal cell lines reached confluence simultaneously, the morphology of the N111G cell monolayers was strikingly different from that of the WT cell monolayers. Phase-contrast microscopy of the WT monolayers revealed numerous filopodia and sparse intercellular contacts (Fig. 2B). MOCK-transfected cells showed a similar phenotype (Fig. 2A). Conversely, N111G cell monolayers displayed striking morphological differences characterized by dense cell-cell contacts and the absence of filopodia (Fig. 2C). The typical phenotype of the N111G cells was not adopted by WT cells even after growth to post-confluence (data not shown). Since significant adhesive interactions exist between the plasma membrane and the actin cytoskeleton [25], we then looked at the distribution of polymerized actin (F-actin). Confocal microscopy revealed that the entire basolateral surface of the N111G cells was covered by an elaborate web of F-actin with numerous stress fibers (Fig. 2F). Moreover, the presence of F-actin at the cell-cell interfaces, with honeycomb-like patterns just beneath the apical plasma membrane, was consistent with the tightly packed phenotype of N111G cell monolayers (Fig. 2I). The disorganized distribution of F-actin at the apical surface of MOCK-transfected and WT cells could account for their scarce, scattered cell-cell contacts (Fig. 2G and 2H).

Fig. 1. Proliferation rate of clonal cell lines. WT cells (solid circles) and N111G cells (open circles) were seeded at $1.25 \times 10^6$ cells per 10-cm plastic dish and grown in culture medium with 10% FBS. Their growth profiles were monitored by determining the cell counts at various time points (24 to 190 h) using a Levy hemacytometer (A). Total cellular protein content was determined using the Lowry assay as described in Materials and Methods (B). The data are expressed as means ± SD of triplicate values and are representative of three independent experiments.

Fig. 2. Morphological and cytoskeletal reorganization of N111G cells. MOCK (A, D, and G), WT (B, E, and H), and N111G cells (C, F, and I) were grown in culture medium with 10% FBS until they formed a confluent monolayer. The MOCK (A), WT (B), and N111G cells (C) were examined by phase-contrast microscopy (40×) using SPOT software. Cytoskeletal organization in these clonal cell lines was assessed after fixing the HEK-293 cells in paraformaldehyde and staining the F-actin with Alexa Fluor®568 phalloidin. Confocal images of sections at the basolateral (D, E, and F) and apical (G, H, and I) surfaces were captured with a Zeiss inverted microscope (100×) using INTERVISION software as described in Materials and Methods. In panels A and B, representative filopodia and open intercellular spaces are indicated by arrows and arrowheads, respectively. In panels D and E, representative actin-rich spots are indicated by arrows. In panel I, a representative honeycomb-like F-actin structure is encircled. The photographs are representative of three independent experiments showing similar results. Scale bars, 30 μm (A–C) and 10 μm (D–I).
The sparse actin stress fibers and the presence of actin-rich spots at the basolateral surface of MOCK-transfected and WT cells are further characteristics that differentiate their cytoskeletal ultrastructure from that of N111G cells (Fig. 2D and 2E). These morphological differences were also observed when heterogeneous populations of G-418-resistant WT cells were compared to N111G cells (data not shown). These striking phenotypic changes in N111G cells thus appeared to be a typical response common to all cells expressing the constitutively active AT$_1$ receptor and not a mere epiphenomenon observed with an atypical cell clone.

**Agonist-Induced Cell-Cell Contact Formation**—Inverse agonists stabilize the inactive conformation of GPCRs and inhibit their constitutive activity [26]. EXP3174 is an inverse agonist of the AT$_1$ receptor and efficiently blocks the production of inositol phosphates induced by the N111G-AT$_1$ receptor [16, 19, 23]. Phase-contrast microscopy of N111G cells grown in the presence of EXP3174 (4 μM) revealed that they had morphological features (many filopodia and sparse intercellular contacts) characteristic of WT cells (Fig. 3C). Confocal microscopy also revealed that after treatment with EXP3174, N111G cells had fewer actin stress fibers, more actin-rich spots at their basolateral surface (Fig. 3F), and a disorganized distribution of F-actin at their apical surface (Fig. 3I). These observations further corroborated the inhibitory effect of EXP3174 on the formation of dense cell-cell contacts between N111G cells.

Fig. 3. EXP3174 prevents phenotypic changes in N111G cells. MOCK (A, D, and G), WT (B, E, and H), and N111G cells (C, F, and I) were grown in culture medium with 10% FBS and EXP3174 (4 μM) until they formed a confluent monolayer. The overall morphologies of EXP3174-treated MOCK (A), WT (B) and N111G cells (C) were assessed by phase-contrast microscopy (40×) using SPOT software. The cytoskeletal organization of these clonal cell lines was assessed after fixing the HEK-293 cells in paraformaldehyde and staining the F-actin with Alexa Fluor®568 phalloidin. Confocal images of sections at both basolateral (D, E, and F) and apical (G, H, and I) surfaces were captured with a Zeiss inverted microscope (100×) using INTERVISION software as described in Materials and Methods. The photographs are representative of three independent experiments showing similar results. Scale bars, 30 μm (A–C) and 10 μm (D–I).

Interestingly, once formed, these intimate cell-cell contacts could be dismantled by a prolonged treatment (at least 24 h) of N111G cells with EXP3174 (data not shown). As expected, MOCK-transfected and WT cell monolayers displayed the same morphological (Fig. 3A and 3B) and cytoskeletal ultrastructures (Fig. 3D, 3E, 3G and 3H) whether or not they were treated with EXP3174. Stimulation of WT monolayers for 30 min with Ang II (100 nM) rapidly evoked marked morphological changes, including retraction of filopodia and formation of dense cell-cell contacts (Fig. 4A).
Under these conditions, a substantial increase in actin stress fibers and a disappearance of actin-rich spots were noted at the basolateral surface of WT cells (Fig. 4C). Moreover, Ang II-stimulated WT cells displayed a honeycomb-like pattern of F-actin at cell-cell interfaces just beneath the apical plasma membrane (Fig. 4E). EXP3174 completely blocked these Ang II-induced changes in WT cells (data not shown). While Ang II-stimulated N111G cells displayed no obvious morphological changes (Fig. 4B), closer examination of their cytoskeletal organization showed that they had numerous actin stress fibers covering their basolateral surface (Fig. 4D) and clear-cut honeycomb-like F-actin structures that seemed to reinforce cell-cell contacts at their apical surface (Fig. 4F).

Fig. 4. Phenotypic modifications of WT cells following Ang II stimulation. WT (A, C, and E) and N111G cells (B, D, and F) were grown in culture medium with 10% FBS until they formed a confluent monolayer. They were then incubated with Ang II (100 nM) for 30 min at 37°C. The morphologies of Ang II-stimulated WT (A) and N111G cells (B) were assessed by phase-contrast microscopy (40×) using SPOT software. The cytoskeletal organization of both clonal cell lines was assessed after fixing the HEK-293 cells in paraformaldehyde and staining the F-actin with Alexa Fluor®568 phalloidin. Confocal images of sections at basolateral (C and D) and apical (E and F) surfaces were captured with a Zeiss inverted microscope (100×) using INTERVISION software as described in Materials and Methods. The photographs are representative of three independent experiments showing similar results. Scale bars, 30 μm (A and B) and 10 μm (C–F).

**Rho-Kinase-Dependent Morphological Changes**—Pretreatment for 30 min with the intracellular Ca$^{2+}$ chelator BAPTA-AM (50 μM) or the PKC inhibitor Ro-31-8425 (100 nM) did not prevent WT cells from adopting their typical Ang II-induced morphology (data not shown). Moreover, N111G monolayers did not change their morphology after a 24-h treatment with either BAPTA-AM or Ro-31-8425 (data not shown). These results suggest that neither Ca$^{2+}$ elevation nor PKC activation was required for AT$_1$ receptor-mediated cell-cell contact formation. Many G$_{q/11}$-coupled GPCRs have recently been shown to activate the Rho family of small GTPases (important regulators of cellular actin cytoskeletal dynamics) [27]. Interestingly, recent *in vivo* studies have suggested that the AT$_1$ receptor can regulate Rho-kinase activity through its effect on RhoA signaling [28-30]. A pretreatment with the specific Rho-kinase inhibitor Y-27632 (5 µM) had no perceptible effect on the morphology of MOCK-transfected cells (Fig. 5A) or of WT cells (Fig. 5B). However, the pretreatment with Y-27632 severely compromised the Ang II-induced morphological changes in WT cells (Fig. 5D) and also inhibited the spontaneous morphological changes occurring in N111G cells (Fig. 5C), suggesting that Rho-kinase activity was essential for AT$_1$ receptor-mediated cell-cell contact formation in HEK-293 cells and for the spontaneous adoption of the particular phenotype of N111G cells.

Fig. 5. **Impact of a Rho-kinase inhibitor on cell-cell contact formation.** MOCK (A), WT (B and D), and N111G cells (C) were grown in culture medium with 10% FBS and Y-27632 (5 μM) until they formed a monolayer. In panel D, WT cells were then incubated with Ang II (100 nM) for 30 min at 37°C.
The overall morphology of these clonal cell lines was assessed by phase-contrast microscopy (40×) using SPOT software. The images are representative of three independent experiments showing similar results. *Scale bar, 30 μm.*

Discussion

The N111G-AT$_1$ receptor is a constitutively active mutant AT$_1$ receptor with the highest constitutive activity known to date [16-18, 20]. To study the effect of the constitutive activity of this receptor on cellular functions, we stably expressed the N111G-AT$_1$ receptor in HEK-293 cells. We noticed striking morphological features of N111G cells, characterized essentially by dense cell-cell contacts and the absence of filopodia. The well-structured actin cytoskeleton of N111G cells was characterized by numerous actin stress fibers at their basolateral surface and a honeycomb-like F-actin structure at their apical surface. Interestingly, treatment of N111G cells with the inverse agonist EXP3174 totally prevented the acquisition of these phenotypic features. This particular phenotype was noticeable only once N111G cells had formed a confluent monolayer. At pre-confluent stages, the morphology of N111G cells was similar to that of WT cells (data not shown). Moreover, N111G cells had the same mitogenic and trophic responses as WT cells under our culture conditions. We previously reported that N111G cells express more receptors than WT cells (2.4 pmol/mg of protein and 1.2 pmol/mg of protein, respectively) [23]. One could argue that the morphological changes observed in N111G cells were due to their high level of receptor expression. However, this possibility is unlikely since a heterogeneous population of G-418-resistant WT cells expressing approximately 4 pmol of WT-AT$_1$ receptor per mg of protein did not show the characteristic N111G cell phenotype (data not shown). Since the WT-AT$_1$ receptor displays only weak constitutive activity even when expressed at a density as high as 5.4 pmol/mg of protein [16], much higher expression levels of this GPCR would likely be needed to cause spontaneous cell-cell contact formation by HEK-293 cells. To our knowledge, relatively few studies have examined the effects of constitutively active mutant GPCRs on cellular morphology. Recent studies with the cholecystokinin 2 receptor provided results that are qualitatively different from ours. Indeed, it was shown that the expression of the constitutively active mutant E151A-cholecystokinin 2 receptor in NIH 3T3 cells alters their morphology at a pre-confluent stage [31]. The cells are spindle-shaped and have a highly refractile morphology with long protrusions and pseudopodia. These morphological changes efficiently revert after prolonged exposure (48 h) to the inverse agonist RPR048. The study also showed that NIH 3T3 cells expressing the E151A-cholecystokinin 2 receptor exhibit enhanced agonist-independent cell proliferation and thus reach a higher cell density. However, the authors did not describe the cellular morphology of their clonal cell lines at confluence. The activation of the WT-cholecystokinin 2 receptor with gastrin disrupts cell-cell contacts between Madin-Darby canine kidney cells [32], and decreases intercellular adhesion and alters cell differentiation in murine pancreatic cells [33]. Additionally, another constitutively active mutant cholecystokinin 2 receptor, containing 69 additional amino acids in the third intracellular loop, greatly increases the proliferation rate of Balb3T3 and NRK-49F cells, even in the presence of FBS [34].
The different morphologic and mitogenic responses obtained with two different GPCRs known to couple to G$_q$ can likely be explained by different signaling pathways (independent of G$_q$) downstream from the N111G-AT$_1$ and E151A-cholecystokinin 2 receptors. Although the N111G-AT$_1$ receptor did not enhance the proliferation of HEK-293 cells, our culture conditions (in the presence of 10% FBS) might have masked such a mitogenic response. Kaposi's sarcoma-associated herpesvirus open reading frame 74 encodes a constitutively active mutant GPCR that triggers the spontaneous formation of actin stress fibers in NIH 3T3 cells [35]. However, as these experiments were performed at a preconfluent stage, the authors did not observe any GPCR-induced cell-cell contact formation as we did with N111G cells. In Madin-Darby canine kidney cells, an epithelial cell line that spontaneously forms dense cell-cell contacts, a constitutively active isoform of the prostaglandin E receptor increases the formation of actin stress fibers in the absence of agonist, with no alteration in the formation of cell-cell contacts [36]. The FP$_B$ receptor for prostaglandin F$_{2\alpha}$ constitutively activates the hydrolysis of phosphatidylinositol 4,5-bisphosphate [37]. Interestingly, the expression of the FP$_B$ receptor in HEK-293 cells causes major changes in cell morphology and the cell cytoskeleton resembling those spontaneously adopted by N111G cells [38]. The authors did not report spontaneous cell-cell contact formation between HEK-293 cells expressing the FP$_B$ receptor, as their experiments were performed using preconfluent cells. N111G cells thus represent a useful model for studying spontaneous cell-cell contact formation driven by a constitutively active GPCR. In the study presented here, we also showed that the morphology of HEK-293 cells was rapidly modified upon activation of the AT$_1$ receptor. Indeed, dense cell-cell contacts were readily detected 30 min after the addition of Ang II to WT cell monolayers. The numerous actin stress fibers and the honeycomb-like actin structures demonstrated that Ang II-stimulated WT cells could adopt a cytoskeletal organization similar to that spontaneously adopted by N111G cells. Interestingly, neither Ca$^{2+}$ elevation nor PKC activation was essential for the modifications to WT cells caused by Ang II. However, the inhibition of Rho-kinase activity by Y-27632 efficiently blocked Ang II-induced cell-cell contact formation between WT cells. Similar agonist-induced cell-cell contact formation has been described following the activation of M$_3$ muscarinic and FP receptors in small cell lung carcinoma and HEK-293 cells, respectively [38-40]. The mechanisms underlying these GPCR-induced morphologic changes are both receptor- and cell-type-dependent. The activation of GPCRs influences many different aspects of cellular morphology by acting on members of the small GTPase family (Rho, Rac, Cdc42). In endothelial cells, these small GTPases regulate cell-cell contacts through mechanisms that are only beginning to be clarified [27, 41]. Interestingly, a very recent study by Barnes et al. showed that β-arrestin1 and the G protein G$_q$ act together to activate RhoA and actin stress fiber formation following AT$_1$ receptor stimulation in HEK-293 cells [42]. Moreover, Barnes et al. showed that the activation of Rho-kinase through RhoA is insensitive to Ca$^{2+}$ chelators and PKC inhibitors.
Their results revealed a novel β-arrestin1-dependent mechanism by which the AT$_1$ receptor induces the formation of actin stress fibers in HEK-293 cells. While actin stress fibers stabilize various components of cell-cell interactions [43], actin stress fiber formation can also negatively regulate cell-cell contacts [44]. Whether the formation of actin stress fibers is responsible for cell-cell contact formation following AT$_1$ receptor activation in HEK-293 cells thus remains to be clarified. How do the other members of the small GTPase family influence the phenotypic changes observed following AT$_1$ receptor activation in HEK-293 cells? Given that the treatment of neonatal rats with losartan downregulates the renal expression of approximately 20 different genes involved in cell-cell and cell-matrix interactions [45], it is important to determine how the spontaneous, continuous cell-cell contact formation observed in N111G cells modulates other structural components. Our finding that cytoskeleton reorganization and cell-cell contact formation were spontaneously and constitutively triggered in N111G cells provides a new model to dissect the molecular mechanisms linking the activation of the AT$_1$ receptor to cellular morphologic changes. As both the AT$_1$ receptor and small GTPases play major roles in the pathophysiology of cardiovascular diseases [15, 46], this model may shed light on major mechanisms mediating the effects of this GPCR.

References

1. Lehmann, F., Hildebrand, P., and Beglinger, C. (2003). New molecular targets for treatment of peptic ulcer disease. *Drugs* **63**, 1785-97.
2. Ringdal, N. (2003). Long-acting beta2-agonists or leukotriene receptor antagonists as add-on therapy to inhaled corticosteroids for the treatment of persistent asthma. *Drugs* **63 Suppl 2**, 21-33.
3. Molkentin, J. D., and Dorn, G. W., 2nd (2001). Cytoplasmic signaling pathways that regulate cardiac hypertrophy. *Annu Rev Physiol* **63**, 391-426.
4. Balkwill, F. (2004). Cancer and the chemokine network. *Nat Rev Cancer* **4**, 540-50.
5. Latronico, A. C., Abell, A. N., Arnhold, I. J., Liu, X., Lins, T. S., Brito, V. N., Billerbeck, A. E., Segaloff, D. L., and Mendonca, B. B. (1998). A unique constitutively activating mutation in third transmembrane helix of luteinizing hormone receptor causes sporadic male gonadotropin-independent precocious puberty. *J Clin Endocrinol Metab* **83**, 2435-40.
6. Schipani, E., Kruse, K., and Juppner, H. (1995). A constitutively active mutant PTH-PTHrP receptor in Jansen-type metaphyseal chondrodysplasia. *Science* **268**, 98-100.
7. Zhao, X. M., Hauache, O., Goldsmith, P. K., Collins, R., and Spiegel, A. M. (1999). A missense mutation in the seventh transmembrane domain constitutively activates the human Ca2+ receptor. *FEBS Lett* **448**, 180-4.
8. Kjelsberg, M. A., Cotecchia, S., Ostrowski, J., Caron, M. G., and Lefkowitz, R. J. (1992). Constitutive activation of the alpha 1B-adrenergic receptor by all amino acid substitutions at a single site. Evidence for a region which constrains receptor activation. *J Biol Chem* **267**, 1430-3.
9. Li, J., Huang, P., Chen, C., de Riel, J. K., Weinstein, H., and Liu-Chen, L. Y. (2001). Constitutive activation of the mu opioid receptor by mutation of D3.49(164), but not D3.32(147): D3.49(164) is critical for stabilization of the inactive form of the receptor and for its expression. *Biochemistry* **40**, 12039-50.
10. Decaillot, F. M., Befort, K., Filliol, D., Yue, S., Walker, P., and Kieffer, B. L. (2003). Opioid receptor random mutagenesis reveals a mechanism for G protein-coupled receptor activation. *Nat Struct Biol* **10**, 629-36.
11. Beukers, M. W., van Oppenraaij, J., van der Hoorn, P. P., Blad, C. C., den Dulk, H., Brouwer, J., and IJzerman, A. P. (2004). Random mutagenesis of the human adenosine A2B receptor followed by growth selection in yeast. Identification of constitutively active and gain of function mutations. *Mol Pharmacol* **65**, 702-10.
12. Kenakin, T. (2001). Inverse, protean, and ligand-selective agonism: matters of receptor conformation. *FASEB J* **15**, 598-611.
13. Scheer, A., and Cotecchia, S. (1997). Constitutively active G protein-coupled receptors: potential mechanisms of receptor activation. *J Recept Signal Transduct Res* **17**, 57-73.
14. Pauwels, P. J., and Wurch, T. (1998). Review: amino acid domains involved in constitutive activation of G-protein-coupled receptors. *Mol Neurobiol* **17**, 109-35.
15. de Gasparo, M., Catt, K. J., Inagami, T., Wright, J. W., and Unger, T. (2000). International union of pharmacology. XXIII. The angiotensin II receptors. *Pharmacol Rev* **52**, 415-72.
16. Noda, K., Feng, Y. H., Liu, X. P., Saad, Y., Husain, A., and Karnik, S. S. (1996). The active state of the AT1 angiotensin receptor is generated by angiotensin II induction. *Biochemistry* **35**, 16435-42.
17. Feng, Y. H., Miura, S., Husain, A., and Karnik, S. S. (1998). Mechanism of constitutive activation of the AT1 receptor: influence of the size of the agonist switch binding residue Asn(111). *Biochemistry* **37**, 15791-8.
18. Parnot, C., Bardin, S., Miserey-Lenkei, S., Guedin, D., Corvol, P., and Clauser, E. (2000). Systematic identification of mutations that constitutively activate the angiotensin II type 1A receptor by screening a randomly mutated cDNA library with an original pharmacological bioassay. *Proc Natl Acad Sci USA* **97**, 7615-20.
19. Miserey-Lenkei, S., Parnot, C., Bardin, S., Corvol, P., and Clauser, E. (2002). Constitutive internalization of constitutively active angiotensin II AT(1A) receptor mutants is blocked by inverse agonists. *J Biol Chem* **277**, 5891-901.
20. Auger-Messier, M., Clement, M., Lanctot, P. M., Leclerc, P. C., Leduc, R., Escher, E., and Guillemette, G. (2003). The constitutively active N111G-AT1 receptor for angiotensin II maintains a high affinity conformation despite being uncoupled from its cognate G protein Gq/11alpha. *Endocrinology* **144**, 5277-84.
21. Boucard, A. A., Roy, M., Beaulieu, M. E., Lavigne, P., Escher, E., Guillemette, G., and Leduc, R. (2003). Constitutive activation of the angiotensin II type 1 receptor alters the spatial proximity of transmembrane 7 to the ligand-binding pocket. *J Biol Chem* **278**, 36628-36.
22. Martin, S. S., Boucard, A. A., Clement, M., Escher, E., Leduc, R., and Guillemette, G. (2004). Analysis of the third transmembrane domain of the human type 1 angiotensin II receptor by cysteine-scanning mutagenesis. *J Biol Chem* **279**, 51415-23.
23. Auger-Messier, M., Arguin, G., Chaloux, B., Leduc, R., Escher, E., and Guillemette, G. (2004). Down-regulation of inositol 1,4,5-trisphosphate receptor in cells stably expressing the constitutively active angiotensin II N111G-AT(1) receptor. *Mol Endocrinol* **18**, 2967-80.
24. Lowry, O. H., Rosebrough, N. J., Farr, A. L., and Randall, R. J. (1951). Protein measurement with the Folin phenol reagent. *J Biol Chem* **193**, 265-75.
25. Raucher, D., Stauffer, T., Chen, W., Shen, K., Guo, S., York, J. D., Sheetz, M. P., and Meyer, T. (2000). Phosphatidylinositol 4,5-bisphosphate functions as a second messenger that regulates cytoskeleton-plasma membrane adhesion. *Cell* **100**, 221-8.
26. Kenakin, T. (2002). Drug efficacy at G protein-coupled receptors. *Annu Rev Pharmacol Toxicol* **42**, 349-79.
27. Fukata, Y., Amano, M., and Kaibuchi, K. (2001). Rho-Rho-kinase pathway in smooth muscle contraction and cytoskeletal reorganization of non-muscle cells. *Trends Pharmacol Sci* **22**, 32-9.
28. Moriki, N., Ito, M., Seko, T., Kureishi, Y., Okamoto, R., Nakakuki, T., Kongo, M., Isaka, N., Kaibuchi, K., and Nakano, T. (2004). RhoA activation in vascular smooth muscle cells from stroke-prone spontaneously hypertensive rats. *Hypertens Res* **27**, 263-70.
29. Kobayashi, N., Nakano, S., Mita, S., Kobayashi, T., Honda, T., Tsubokou, Y., and Matsuoka, H. (2002). Involvement of Rho-kinase pathway for angiotensin II-induced plasminogen activator inhibitor-1 gene expression and cardiovascular remodeling in hypertensive rats. *J Pharmacol Exp Ther* **301**, 459-66.
30. Aoki, H., Izumo, S., and Sadoshima, J. (1998). Angiotensin II activates RhoA in cardiac myocytes: a critical role of RhoA in angiotensin II-induced premyofibril formation. *Circ Res* **82**, 666-76.
31. Gales, C., Sanchez, D., Poirot, M., Pyronnet, S., Buscail, L., Cussac, D., Pradayrol, L., Fourmy, D., and Silvente-Poirot, S. (2003). High tumorigenic potential of a constitutively active mutant of the cholecystokinin 2 receptor. *Oncogene* **22**, 6081-9.
32. Bierkamp, C., Kowalski-Chauvel, A., Dehez, S., Fourmy, D., Pradayrol, L., and Seva, C. (2002). Gastrin mediated cholecystokinin-2 receptor activation induces loss of cell adhesion and scattering in epithelial MDCK cells. *Oncogene* **21**, 7656-70.
33. Bierkamp, C., Bonhoure, S., Mathieu, A., Clerc, P., Fourmy, D., Pradayrol, L., Seva, C., and Dufresne, M. (2004). Expression of cholecystokinin-2/gastrin receptor in the murine pancreas modulates cell adhesion and cell differentiation in vivo. *Am J Pathol* **165**, 2135-45.
34. Hellmich, M. R., Rui, X. L., Hellmich, H. L., Fleming, R. Y., Evers, B. M., and Townsend, C. M., Jr. (2000). Human colorectal cancers express a constitutively active cholecystokinin-B/gastrin receptor that stimulates cell growth. *J Biol Chem* **275**, 32122-8.
35. Shepard, L. W., Yang, M., Xie, P., Browning, D. D., Voyno-Yasenetskaya, T., Kozasa, T., and Ye, R. D. (2001). Constitutive activation of NF-kappa B and secretion of interleukin-8 induced by the G protein-coupled receptor of Kaposi's sarcoma-associated herpesvirus involve G alpha(13) and RhoA. *J Biol Chem* **276**, 45979-87.
36. Hasegawa, H., Negishi, M., Katoh, H., and Ichikawa, A. (1997). Two isoforms of prostaglandin EP3 receptor exhibiting constitutive activity and agonist-dependent activity in Rho-mediated stress fiber formation. *Biochem Biophys Res Commun* **234**, 631-6.
37. Pierce, K. L., Bailey, T. J., Hoyer, P. B., Gil, D. W., Woodward, D. F., and Regan, J. W. (1997). Cloning of a carboxyl-terminal isoform of the prostanoid FP receptor. *J Biol Chem* **272**, 883-7.
38. Pierce, K. L., Fujino, H., Srinivasan, D., and Regan, J. W. (1999). Activation of FP prostanoid receptor isoforms leads to Rho-mediated changes in cell morphology and in the cell cytoskeleton. *J Biol Chem* **274**, 35944-9.
39. Williams, C. L., Hayes, V. Y., Hummel, A. M., Tarara, J. E., and Halsey, T. J. (1993). Regulation of E-cadherin-mediated adhesion by muscarinic acetylcholine receptors in small cell lung carcinoma. *J Cell Biol* **121**, 643-54.
Acknowledgments
We thank Dr. Leonid Volkov for his technical assistance with the confocal microscopy and for his valuable input. This work was supported by the Canadian Institutes of Health Research (CIHR). M.A.-M. is the recipient of a CIHR studentship. R.L. is a Senior Scholar of the Fonds de la Recherche en Santé du Québec (FRSQ). E.E. is the recipient of a J.C. Edwards Chair in cardiovascular research. This work is part of the PhD thesis of M.A.-M.

DISCUSSION
Following the studies that explored the constitutive activation of the $\alpha_{1B}$- and $\beta_2$-adrenergic receptors, it was proposed that constitutively active mutant receptors maintain a high-affinity state for their agonist ligands independently of their coupling to heterotrimeric G proteins (KJELSBERG et al., 1992; LEFKOWITZ et al., 1993; SAMAMA et al., 1993). We tested and confirmed this hypothesis for the mutant N111G-AT$_1$ receptor. Indeed, we measured a strong constitutive activation of the G$_{q/11}$ protein (e.g. IP production, spontaneous generation of calcium oscillations) upon expression of the N111G-AT$_1$ receptor, underscoring the efficiency of the coupling between these two proteins. We also showed that, in contrast to the wild-type AT$_1$ receptor, the high-affinity state of the N111G-AT$_1$ receptor for Ang II is unaffected by the addition of uncoupling agents such as pentosan sulfate and GTP$\gamma$S. These results agree with previous studies demonstrating the constitutive activity of the N111G-AT$_1$ receptor (NODA et al., 1996; BALMFORTH et al., 1997; GROBLEWSKI et al., 1997; FENG et al., 1998). Moreover, we showed by co-immunoprecipitation that the efficiency of this functional coupling between the N111G-AT$_1$ receptor and the G$_{q/11}$ protein does not translate into the formation of a stable complex in the absence of Ang II. In fact, under our conditions, the formation of a ternary complex (ligand, GPCR and heterotrimeric G protein) is required to efficiently pull down the G$\alpha_{q/11}$ protein together with the AT$_1$ and N111G-AT$_1$ receptors.
Moreover, this interaction is destabilized in both cases by the addition of GTP$\gamma$S. Clearly, the similarity of this interaction profile of the AT$_1$ and N111G-AT$_1$ receptors with the G$_{q/11}$ protein argues against the hypothesis that a more stable complex forms between the N111G-AT$_1$ receptor and the G$_{q/11}$ protein. Rather, these results show that the constitutive activity of the N111G-AT$_1$ receptor stems from an intrinsic conformational change that confers high affinity for Ang II and increases the receptor's ability to isomerize more frequently into an active state capable of activating the G$_{q/11}$ protein (a minimal equilibrium scheme is sketched below, after this paragraph). How do these results fit into our current understanding of the spatio-temporal events leading to the activation of heterotrimeric G proteins? Do GPCRs interact freely with heterotrimeric G proteins at the cell surface (fluid mosaic model), or do they instead take part in the formation of stable multimolecular complexes (protein compartmentalization model) (SINGER and NICOLSON, 1972; SIMON et al., 1991; REBOIS and HEBERT, 2003; VEREB et al., 2003)? Like several other studies, our results support the model of a transient association of GPCRs with heterotrimeric G proteins. Recently, the random and transient interaction of the muscarinic M2 and M3 receptors with the G$_o$ protein was demonstrated in CHO cells through the application of technologies such as FRAP ("fluorescence recovery after photobleaching") and FRET ("fluorescence resonance energy transfer") (AZPIAZU and GAUTAM, 2004). The internalization of the G$\alpha_s$ protein and of the $\beta_2$-adrenergic receptor through distinct internalization pathways further underscores the need for a physical separation of these two proteins following GPCR activation (ALLEN et al., 2005). Furthermore, the inverse agonists SR 144528, mepyramine and tiotidine can sequester the heterotrimeric G proteins coupling, respectively, to the CB2 endocannabinoid receptor and the histamine H$_1$ and H$_2$ receptors (BOUABOULA et al., 1999; MONCZOR et al., 2003; FITZSIMONS et al., 2004). The heterologous desensitization resulting from the rapid action of these inverse agonists thus provides another eloquent example of the validity of the fluid mosaic model. Nevertheless, several studies have also revealed the existence of stable complexes between GPCRs and heterotrimeric G proteins (REBOIS et al., 1997; REBOIS and HEBERT, 2003). For example, it has been reported that, regardless of its activation state, the $\beta_2$-adrenergic receptor expressed in insect Sf9 cells interacts stably with the G$\alpha_s$ protein (LACHANCE et al., 1999). Likewise, binding of the agonist U46619 to the TP$\beta$ thromboxane A$_2$ receptor does not increase the amount of G$_q$ protein recovered by co-immunoprecipitation (THERIAULT et al., 2004). In all likelihood, the phenomena taking place in biological membranes are best described by a fluid mosaic model that is both structured and dynamic (VEREB et al., 2003). In the course of this first study, we also observed a desensitization of calcium release in COS-7 cells transiently expressing the N111G-AT$_1$ receptor.
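As a reading aid, the display below restates this isomerization argument in the language of the two-state/extended ternary complex formalism associated with the studies cited at the start of this discussion (e.g. SAMAMA et al., 1993; see also DE LEAN et al., 1980 and LEFF, 1995 in the reference list). It is a minimal sketch for illustration only: $K_{\mathrm{act}}$ (the $R \rightleftharpoons R^{*}$ isomerization constant), $K_{A}$ (the affinity of ligand $A$ for $R$) and the cooperativity factor $\alpha$ are generic symbols of the model, not parameters measured in this work.

$$
R \;\overset{K_{\mathrm{act}}}{\rightleftharpoons}\; R^{*}, \qquad
A + R^{*} \;\overset{\alpha K_{A}}{\rightleftharpoons}\; AR^{*}, \qquad
AR^{*} + G \;\rightleftharpoons\; AR^{*}G \qquad (\alpha > 1 \text{ for an agonist})
$$

In this notation, constitutive activity corresponds to an increased $K_{\mathrm{act}}$: the N111G substitution enriches $R^{*}$ even in the absence of ligand, so that high affinity for Ang II is a property of $R^{*}$ itself rather than of the $AR^{*}G$ ternary complex. This reading is consistent with the observations above: GTP$\gamma$S, which strips $G$ from $AR^{*}G$, lowers agonist affinity at the wild-type receptor but not at N111G-AT$_1$, and an inverse agonist such as EXP3174 behaves as a ligand that binds $R$ preferentially, pulling the equilibrium back toward the inactive state.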
Given the important role played by Ca$^{2+}$ in a multitude of cellular responses, it is not surprising under these conditions that the cell engages mechanisms aimed at maintaining its calcium homeostasis. We therefore sought to determine at what level, and by what mechanism, cells must adapt to the constitutive activity of the N111G-AT$_1$ receptor. We showed that stable expression of the N111G-AT$_1$ receptor in HEK-293 cells causes heterologous desensitization of the calcium mobilization pathway. Indeed, the release of Ca$^{2+}$ from intracellular stores is smaller in N111G cells stimulated with the agonists ATP (purinergic receptor) and carbachol (muscarinic receptor) than in AT$_1$ cells. This decrease in calcium release in N111G cells is not, however, due to desensitization of the G$_{q/11}$ protein or of phospholipase C$\beta$. Indeed, EGF (an EGFR agonist activating phospholipase C$\gamma$) also elicits a smaller Ca$^{2+}$ release in N111G cells (data not shown). Using several approaches (binding assays, Ca$^{2+}$ release from saponin-permeabilized cells, and immunoblotting), we showed that this desensitization of Ca$^{2+}$ release is in fact caused by a decrease in the number of IP$_3$Rs expressed in N111G cells. This decrease in IP$_3$Rs is mainly due to their accelerated degradation in the lysosome. It is interesting to note that the inverse agonist EXP3174 efficiently reverses the desensitization of calcium release in N111G cells by blocking the constitutive activity of the N111G-AT$_1$ receptor. To date, increased degradation of IP$_3$Rs following acute stimulation of G$_{q/11}$-coupled GPCRs with saturating doses of agonists (e.g. Ang II, substance P, carbachol, cholecystokinin, bombesin) has been reported in several cell types (WOJCIKIEWICZ and NAHORSKI, 1991; WOJCIKIEWICZ et al., 1994; BOKKALA and JOSEPH, 1997; SIPMA et al., 1998; OBERDORF et al., 1999; LEE et al., 2001; WILLARS et al., 2001). Under these conditions, the strong production of IPs appears to increase the ubiquitination of IP$_3$Rs and thereby direct their degradation to the proteasome (ZHU et al., 1999; ZHU and WOJCIKIEWICZ, 2000). It is worth noting, however, that the basal degradation of IP$_3$Rs in WB rat liver epithelial cells takes place in the lysosome (BOKKALA and JOSEPH, 1997). Besides degradation mechanisms, the expression level of IP$_3$Rs can also be regulated by decreasing their protein synthesis (GENAZZANI et al., 1999; CAI et al., 2004). Could it be that the degradation of IP$_3$Rs is initially handled by the lysosome when IP production is maintained at intermediate (physiological) levels, and that the proteasome comes into play only at much higher (supraphysiological) IP concentrations? If so, our N111G cell model will certainly facilitate the study of the mechanisms of lysosomal IP$_3$R degradation. In the course of this second study, we also observed major morphological changes (e.g. tight cell-cell contacts, absence of filopodia) occurring spontaneously in N111G cells forming a uniform monolayer.
We therefore sought to identify the mechanism underlying this phenotypic change, which presumably results from the constitutive activity of the N111G-AT$_1$ receptor. In contrast to AT$_1$ cells, we showed that N111G cells display a marked reorganization of their actin cytoskeleton supporting these changes in cell structure. While treatment of N111G cells with the inverse agonist EXP3174 prevents and reverses the adoption of this particular phenotype (confirming that the constitutive activity of the N111G-AT$_1$ receptor is the origin of the altered morphology of the HEK-293 cells), we showed that stimulation of AT$_1$ cells with Ang II rapidly reproduces the full set of morphological changes observed at the basal level in N111G cells. Finally, we showed that activation of Rho-kinase contributes to the morphological changes underlying AT$_1$ receptor signaling. Over the past decade, the impact of GPCR signaling on cytoskeletal organization through the activation of small G proteins of the Rho family has been clearly established (NARUMIYA, 1996; BISHOP and HALL, 2000; FUKATA and KAIBUCHI, 2001; BHATTACHARYA et al., 2004). For example, activation of the gonadotropin-releasing hormone receptor leads to a rapid reorganization of the cytoskeleton of HEK-293 cells through a Rac-dependent mechanism (DAVIDSON et al., 2004). Activation of the Rho/Rho-kinase pathway in vascular smooth muscle cells results from the activation of GPCRs coupling to the G$_{q/11}$ or G$_{12/13}$ proteins (GOHLA et al., 2000). Although other studies have also reported activation of the RhoA protein by these heterotrimeric G proteins, the precise mechanism supporting Rho-kinase activation in N111G cells remains to be identified (KOZASA et al., 1998; GOHLA et al., 1999). And what about the activity of the Rac and cdc42 proteins following AT$_1$ receptor activation? Given their respective roles in the formation of lamellipodia and filopodia, their activity is likely also regulated in N111G cells (BRAGA, 2000). Clearly, the study of the morphological changes in N111G cells provides an attractive new model in which the activity of small G proteins of the Rho family can be increased or decreased with the ligands EXP3174 and Ang II.

CONCLUSION
In summary, these results show that the constitutive activity of the N111G-AT$_1$ receptor stems from the adoption of a new conformation that has high affinity for Ang II and facilitates the spontaneous adoption of an active state coupling to the G$_{q/11}$ protein. The N111G-AT$_1$ receptor is nevertheless not refractory to adopting an inactive state, since the inverse agonist EXP3174 efficiently prevents the constitutive activation of this mutant GPCR. The constitutive activation of the signaling pathways downstream of the N111G-AT$_1$ receptor triggers cellular changes ranging from the normal response (e.g. calcium oscillations and reorganization of the actin cytoskeleton) to adaptive mechanisms (e.g. decreased expression of IP$_3$Rs). Without a doubt, the nature and extent of these changes depend on the cellular context (or proteome) and on the magnitude of the constitutive activity of the GPCR under study. These studies open the way to several other fundamental questions.
For example, does the N111G-AT$_1$ receptor spontaneously activate signaling pathways other than that of the G$_{q/11}$ protein? Since the AT$_1$ receptor couples efficiently to the G$_{i/0}$ protein in HEK-293 cells, it is possible that certain cellular functions (e.g. Ca$^{2+}$ entry through membrane channels) are modulated by the constitutive activation of this heterotrimeric G protein by the N111G-AT$_1$ receptor (MUNDELL and BENOVIC, 2000). And what about the heterotrimeric G protein-independent signaling pathways that the AT$_1$ receptor can activate? Does the N111G-AT$_1$ receptor manage to activate the Jak2 protein, or to transactivate the EGFR, in the absence of Ang II? In fact, the constitutive activation of GPCRs (e.g. the serotonin 5-HT$_{2C}$ receptor, the C128F-$\alpha_{1B}$-, D142A-$\alpha_{1B}$- and A293E-$\alpha_{1B}$-adrenergic receptors, the Y305A-B2 bradykinin receptor) does not necessarily reproduce the full set of conformational changes resulting from agonist binding to the wild-type receptor (PEREZ et al., 1996; WESTPHAL and SANDERS-BUSH, 1996; MHAOUTY-KODJA et al., 1999; KALATSKAYA et al., 2004). Indeed, as mentioned previously, the N111G-AT$_1$ receptor does not adopt a conformation conducive to its phosphorylation in COS-7 cells (THOMAS et al., 2000). Clearly, now that GPCRs no longer appear to adopt a single active state, identifying the determinants required for the adoption of a given conformation could enable the development of safer drugs by favoring the enrichment of a therapeutically favorable conformational state (KENAKIN, 2003a). On another front, what is the signal that accelerates the lysosomal degradation of IP$_3$Rs in N111G cells? Does this phenomenon result directly from the binding of IP$_3$ to the IP$_3$Rs (already known to trigger the proteasomal degradation of these receptor-channels by increasing their ubiquitination) (OBERDORF et al., 1999; ZHU et al., 1999; ZHU and WOJCIKIEWICZ, 2000)? If so, the ability of the p130PH protein ("PH domain of the phospholipase C-like protein p130") and of the "IP$_3$ sponge" (the ligand-binding domain of the mouse IP$_3$R1) to buffer IP$_3$ production and prevent its binding to IP$_3$Rs could allow us to answer this question (TAKEUCHI et al., 2000; UCHIYAMA et al., 2002). It is worth noting, however, that protein degradation through the lysosomal pathway depends on active vesicle transport requiring a dynamic reorganization of the cytoskeleton (APODACA, 2001; TAUNTON, 2001). Indeed, the very distribution of lysosomal vacuoles within a polarized cell depends on the integrity of the intermediate filaments of the cytoskeleton (STYERS et al., 2004). Moreover, the interaction of IP$_3$Rs with certain cytoskeletal proteins (e.g. myosin, talin, vinculin, ankyrin) is known to influence their cellular localization as well as the magnitude of calcium release (BOURGUIGNON et al., 1993; RIBEIRO et al., 1997; SUGIYAMA et al., 2000; WALKER et al., 2002). Does the reorganization of the actin cytoskeleton in N111G cells contribute to this increased lysosomal degradation of IP$_3$Rs? Given the importance of the mechanisms regulating calcium mobilization in many physiological and pathological processes (e.g.
insulin secretion, biliary secretion, skeletal muscle contraction, cardiac hypertrophy), it is essential to deepen our understanding of the mechanisms of IP$_3$R degradation (HAGAR and EHRLICH, 2000; MISSIAEN et al., 2000; BERRIDGE et al., 2003; PUSL and NATHANSON, 2004). Finally, by what mechanism does the N111G-AT$_1$ receptor manage to activate Rho-kinase spontaneously? Is its coupling to the G$_{q/11}$ protein the origin of this activity, or is it rather through the G$_{12}$ or G$_{13}$ proteins that the N111G-AT$_1$ receptor influences the morphology of HEK-293 cells? Since each of these heterotrimeric G proteins can increase Rho-kinase activity, specific inhibition of the signaling pathway downstream of the G$_{q/11}$ protein, by expressing the Q209L/D277N-G$\alpha_q$ protein (a dominant negative) or RGS2 ("regulator of G protein signaling 2", which increases the GTPase activity of the G$_{q/11}$ protein), could allow us to identify which of them is responsible for the cytoskeletal reorganization in N111G cells (BUHL et al., 1995; HEXIMER et al., 1997; YU et al., 2000; CHIKUMI et al., 2002; VOGT et al., 2003). Furthermore, it was recently shown that Rho-kinase activation and stress fiber formation following AT$_1$ receptor stimulation in HEK-293 cells require the participation of the G$_{q/11}$ protein and of $\beta$-arrestin 1 (BARNES et al., 2005). What is the significance of the recruitment of $\beta$-arrestin 1 to the N111G-AT$_1$ receptor if the latter is refractory to phosphorylation (THOMAS et al., 2000)? Is the formation of a stable complex between $\beta$-arrestin 1 and the AT$_1$ receptor on internalization vesicles required for Rho-kinase activation (OAKLEY et al., 2000)? Given that $\beta$-arrestin 2 does not support RhoA activation and competes with the recruitment of $\beta$-arrestin 1, could its overexpression prevent the change in morphology of N111G cells (BARNES et al., 2005)? Since the involvement of the AT$_1$ receptor and of small G proteins of the Rho family in the development of various cardiovascular pathologies is clearly established, it will also be relevant to assess whether the activity of the Rac and cdc42 proteins (which also contribute to cytoskeletal organization) is influenced by the constitutive activity of the N111G-AT$_1$ receptor (BISHOP and HALL, 2000; DE GASPARO et al., 2000; LAUFS and LIAO, 2000).

ACKNOWLEDGEMENTS
Research training at the graduate level is a decisive stage in building a scientific career. In this respect, I had the privilege of being supervised by Drs Gaétan Guillemette and Emanuel Escher throughout this period. Their personal discipline, their scientific logic and their constructive originality greatly inspired me. Moreover, they made me realize that one can (and must) combine a "passion for science" with "scientific rigor" in order to excel in research. For the whole of the solid training they offered me and for the many reflections they encouraged in me, I thank them sincerely! I would like to thank Drs Terence E.
Hébert, Nathalie Rivard and Guylain Boulay for having agreed to evaluate this thesis. I would also like to thank Dr Richard Leduc who, through his actions and his words, made me aware of the place I wish to hold in our scientific community. Thanks also to all those who contributed, directly or indirectly, to my training during my graduate studies (professors, students, secretaries…). Fortunately, I have taken away from the Département de Pharmacologie of the Université de Sherbrooke much more than a simple PhD diploma. I developed precious friendships there that nourished and inspired me throughout these years of study and work. I therefore warmly thank Jacqueline (Jackie) Pérodin, Lenka Rihakova, Maud Deraët, Marie-Reine Lefebvre, Danny Fillion, Brian Holleran, Anthony Boucard, Pascal Lanctôt, Christophe Proulx, Patrice Leclerc, Jean-Bernard Denault, Stéphane Poirier, Benoît Chaloux, Yannik Régimbal-Dumas, Annabelle Caron, Éric Turgeon and Guillaume Arguin. I benefited enormously from the keen intelligence, energy and good humor of these people. I thank my mother for her great strength, her patience and her love. Nancy and Jacob! Together you are my greatest source of happiness, inspiration and motivation. You fill my life with joy and drive me to fill yours with the best I can offer. Finally, I thank the Fondation Georges Phénix, the Conseil de Recherche en Sciences Naturelles et Génie du Canada, the Fonds de la Recherche en Santé du Québec and the Instituts de Recherche en Santé du Canada for their financial support throughout my graduate studies.

REFERENCES
ADAN, R.A. & KAS, M.J. (2003) Inverse agonism gains weight. *Trends Pharmacol Sci.* 24: 315-21. ALI, M.S., SAYESKI, P.P., DIRKSEN, L.B., HAYZER, D.J., MARRERO, M.B. & BERNSTEIN, K.E. (1997) Dependence on the motif YIPP for the physical association of Jak2 kinase with the intracellular carboxyl tail of the angiotensin II AT1 receptor. *J Biol Chem.* 272: 23382-8. ALLEN, J.A., YU, J.Z., DONATI, R.J. & RASENICK, M.M. (2005) beta-Adrenergic Receptor Stimulation Promotes Galphas Internalization through Lipid Rafts: A Study in Living Cells. *Mol Pharmacol.* 67: 1493-504. ALLEN, L.F., LEFKOWITZ, R.J., CARON, M.G. & COTECCHIA, S. (1991) G-protein-coupled receptor genes as protooncogenes: constitutively activating mutation of the alpha 1B-adrenergic receptor enhances mitogenesis and tumorigenicity. *Proc Natl Acad Sci U S A.* 88: 11354-8. ALTENBACH, C., CAI, K., KHORANA, H.G. & HUBBELL, W.L. (1999) Structural features and light-dependent changes in the sequence 306-322 extending from helix VII to the palmitoylation sites in rhodopsin: a site-directed spin-labeling study. *Biochemistry.* 38: 7931-7. APODACA, G. (2001) Endocytic traffic in polarized epithelial cells: role of the actin and microtubule cytoskeleton. *Traffic.* 2: 149-59. ARAUJO, M.A., MENEZES, B.S., LOURENCO, C., CORDEIRO, E.R., GATTI, R.R. & GOULART, L.R. (2004) The A1166C polymorphism of the angiotensin II type-1 receptor in acute myocardial infarction. *Arq Bras Cardiol.* 83: 409-13; 404-8. ARBABIAN, M., GRAZIANO, F.M., JICINSKY, J., HADCOCK, J., MALBON, C. & RUOHO, A.E. (1989) Photoaffinity labeling of the guinea pig pulmonary mast cell beta-adrenergic receptor. *Am J Respir Cell Mol Biol.* 1: 351-9.
ASAKURA, M., KITAKAZE, M., TAKASHIMA, S., LIAO, Y., ISHIKURA, F., YOSHINAKA, T., OHMOTO, H., NODE, K., YOSHINO, K., ISHIGURO, H., ASANUMA, H., SANADA, S., MATSUMURA, Y., TAKEDA, H., BEPPU, S., TADA, M., HORI, M. & HIGASHIYAMA, S. (2002) Cardiac hypertrophy is inhibited by antagonism of ADAM12 processing of HB-EGF: metalloproteinase inhibitors as a new therapy. *Nat Med.* 8: 35-40. ATTWOOD, T.K. & FINDLAY, J.B. (1994) Fingerprinting G-protein-coupled receptors. *Protein Eng.* 7: 195-203. AUGER, G.A., SMITH, B.M., PEASE, J.E. & BARKER, M.D. (2004) The use of membrane translocating peptides to identify sites of interaction between the C5a receptor and downstream effector proteins. *Immunology.* 112: 590-6. AZPIAZU, I. & GAUTAM, N. (2004) A fluorescence resonance energy transfer-based sensor indicates that receptor access to a G protein is unrestricted in a living mammalian cell. *J Biol Chem.* 279: 27709-18. BALDWIN, J.M. (1993) The probable arrangement of the helices in G protein-coupled receptors. *Embo J.* 12: 1693-703. BALL, S.G. & WHITE, W.B. (2003) Debate: angiotensin-converting enzyme inhibitors versus angiotensin II receptor blockers--a gap in evidence-based medicine. *Am J Cardiol.* 91: 15G-21G. BALMFORTH, A.J., LEE, A.J., WARBURTON, P., DONNELLY, D. & BALL, S.G. (1997) The conformational change responsible for AT1 receptor activation is dependent upon two juxtaposed asparagine residues on transmembrane helices III and VII. *J Biol Chem.* 272: 4245-51. BARAK, L.S., WILBANKS, A.M. & CARON, M.G. (2003) Constitutive desensitization: a new paradigm for g protein-coupled receptor regulation. *Assay Drug Dev Technol.* 1: 339-46. BARNES, W.G., REITER, E., VIOLIN, J.D., REN, X.R., MILLIGAN, G. & LEFKOWITZ, R.J. (2005) beta-Arrestin 1 and Galphaq/11 coordinately activate RhoA and stress fiber formation following receptor stimulation. *J Biol Chem.* 280: 8041-50. BARTFAI, T., BENOVIC, J.L., BOCKAERT, J., BOND, R.A., BOUVIER, M., CHRISTOPOULOS, A., CIVELLI, O., DEVI, L.A., GEORGE, S.R., INUI, A., KOBILKA, B.K., LEURS, R., NEUBIG, R., PIN, J.-P., QUIRION, R., ROQUES, B.P., SAKMAR, T.P., SEIFERT, R., STENKAMP, R.E. & STRANGE, P.G. (2004) The state of GPCR research in 2004. *Nat Rev Drug Discov.* 3: 575, 577-626. BAUDIN, B. (2005) Cardiovascular genomics Special Issue. *Exp Physiol.* BERK, B.C. & CORSON, M.A. (1997) Angiotensin II signal transduction in vascular smooth muscle: role of tyrosine kinases. *Circ Res.* 80: 607-16. BERRIDGE, M.J., LIPP, P. & BOOTMAN, M.D. (2000) The versatility and universality of calcium signalling. *Nat Rev Mol Cell Biol.* 1: 11-21. BERRIDGE, M.J., BOOTMAN, M.D. & RODERICK, H.L. (2003) Calcium signalling: dynamics, homeostasis and remodelling. *Nat Rev Mol Cell Biol.* 4: 517-29. BEUKERS, M.W., VAN OPPENRAAIJ, J., VAN DER HOORN, P.P., BLAD, C.C., DEN DULK, H., BROUWER, J. & AP, I.J. (2004) Random mutagenesis of the human adenosine A2B receptor followed by growth selection in yeast. Identification of constitutively active and gain of function mutations. *Mol Pharmacol.* 65: 702-10. BHAT, G.J., THEKKUMKARA, T.J., THOMAS, W.G., CONRAD, K.M. & BAKER, K.M. (1995) Activation of the STAT pathway by angiotensin II in T3CHO/AT1A cells. Cross-talk between angiotensin II and interleukin-6 nuclear signaling. *J Biol Chem.* 270: 19059-65. BHATTACHARYA, M., BABWAH, A.V. & FERGUSON, S.S. (2004) Small GTP-binding protein-coupled receptors. *Biochem Soc Trans.* 32: 1040-4. BIHOREAU, C., MONNOT, C., DAVIES, E., TEUTSCH, B., BERNSTEIN, K.E., CORVOL, P. & CLAUSER, E. 
(1993) Mutation of Asp74 of the rat angiotensin II receptor confers changes in antagonist affinities and abolishes G-protein coupling. *Proc Natl Acad Sci U S A.* 90: 5133-7. BIRNBAUMER, L., ABRAMOWITZ, J. & BROWN, A.M. (1990) Receptor-effector coupling by G proteins. *Biochim Biophys Acta.* 1031: 163-224. BISHOP, A.L. & HALL, A. (2000) Rho GTPases and their effector proteins. *Biochem J.* 348 Pt 2: 241-55. BLUML, K., MUTSCHLER, E. & WESS, J. (1994) Insertion mutagenesis as a tool to predict the secondary structure of a muscarinic receptor domain determining specificity of G-protein coupling. *Proc Natl Acad Sci USA.* 91: 7980-4. BOCKAERT, J. & PIN, J.P. (1999) Molecular tinkering of G protein-coupled receptors: an evolutionary success. *Embo J.* 18: 1723-9. BOKKALA, S. & JOSEPH, S.K. (1997) Angiotensin II-induced down-regulation of inositol trisphosphate receptors in WB rat liver epithelial cells. Evidence for involvement of the proteasome pathway. *J Biol Chem.* 272: 12454-61. BOND, R.A., LEFF, P., JOHNSON, T.D., MILANO, C.A., ROCKMAN, H.A., MCMINN, T.R., APPARSUNDARAM, S., HYEK, M.F., KENAKIN, T.P., ALLEN, L.F. & ET AL. (1995) Physiological effects of inverse agonists in transgenic mice with myocardial overexpression of the beta 2-adrenoceptor. *Nature.* 374: 272-6. BONNARDEAUX, A., DAVIES, E., JEUNEMAITRE, X., FERY, I., CHARRU, A., CLAUSER, E., TIRET, L., CAMBIEN, F., CORVOL, P. & SOUBRIER, F. (1994) Angiotensin II type 1 receptor gene polymorphisms in human essential hypertension. *Hypertension.* 24: 63-9. BOODEN, M.A., ECKERT, L.B., DER, C.J. & TREJO, J. (2004) Persistent signaling by dysregulated thrombin receptor trafficking promotes breast carcinoma cell invasion. *Mol Cell Biol.* 24: 1990-9. BOUABOULA, M., DESNOYER, N., CARAYON, P., COMBES, T. & CASELLAS, P. (1999) Gi protein modulation induced by a selective inverse agonist for the peripheral cannabinoid receptor CB2: implication for intracellular signalization cross-regulation. *Mol Pharmacol.* 55: 473-80. BOUCARD, A.A., WILKES, B.C., LAPORTE, S.A., ESCHER, E., GUILLEMETTE, G. & LEDUC, R. (2000) Photolabeling identifies position 172 of the human AT(1) receptor as a ligand contact point: receptor-bound angiotensin II adopts an extended structure. *Biochemistry.* 39: 9662-70. BOUCARD, A.A., ROY, M., BEAULIEU, M.E., LAVIGNE, P., ESCHER, E., GUILLEMETTE, G. & LEDUC, R. (2003) Constitutive activation of the angiotensin II type 1 receptor alters the spatial proximity of transmembrane 7 to the ligand-binding pocket. *J Biol Chem.* 278: 36628-36. BOURGUIGNON, L.Y., JIN, H., IIDA, N., BRANDT, N.R. & ZHANG, S.H. (1993) The involvement of ankyrin in the regulation of inositol 1,4,5-trisphosphate receptor-mediated internal Ca2+ release from Ca2+ storage vesicles in mouse T-lymphoma cells. *J Biol Chem.* 268: 7290-7. BOURNE, H.R. (1997) How receptors talk to trimeric G proteins. *Curr Opin Cell Biol.* 9: 134-42. BRADY, A.E. & LIMBIRD, L.E. (2002) G protein-coupled receptor interacting proteins: emerging roles in localization and signal transduction. *Cell Signal.* 14: 297-309. BRAGA, V. (2000) Epithelial cell shape: cadherins and small GTPases. *Exp Cell Res.* 261: 83-90. BRAKEMAN, P.R., LANAHAN, A.A., O'BRIEN, R., ROCHE, K., BARNES, C.A., HUGANIR, R.L. & WORLEY, P.F. (1997) Homer: a protein that selectively binds metabotropic glutamate receptors. *Nature.* 386: 284-8. BRYDON, L., ROKA, F., PETIT, L., DE COPPET, P., TISSOT, M., BARRETT, P., MORGAN, P.J., NANOFF, C., STROSBERG, A.D. & JOCKERS, R. 
(1999) Dual signaling of human Mel 1a melatonin receptors via G(i2), G(i3), and G(q/11) proteins. *Mol Endocrinol.* **13**: 2025-38. BUHL, A.M., JOHNSON, N.L., DHANASEKARAN, N. & JOHNSON, G.L. (1995) G alpha 12 and G alpha 13 stimulate Rho-dependent stress fiber formation and focal adhesion assembly. *J Biol Chem.* **270**: 24631-4. BURSTEIN, E.S., SPALDING, T.A., HILL-EUBANKS, D. & BRANN, M.R. (1995) Structure-function of muscarinic receptor coupling to G proteins. Random saturation mutagenesis identifies a critical determinant of receptor affinity for G proteins. *J Biol Chem.* **270**: 3141-6. BURSTEIN, E.S., SPALDING, T.A. & BRANN, M.R. (1997) Pharmacology of muscarinic receptor subtypes constitutively activated by G proteins. *Mol Pharmacol.* **51**: 312-9. CAI, K., ITOH, Y. & KHORANA, H.G. (2001) Mapping of contact sites in complex formation between transducin and light-activated rhodopsin by covalent crosslinking: use of a photoactivatable reagent. *Proc Natl Acad Sci USA.* **98**: 4877-82. CAI, W., HISATSUNE, C., NAKAMURA, K., NAKAMURA, T., INOUE, T. & MIKOSHIBA, K. (2004) Activity-dependent expression of inositol 1,4,5-trisphosphate receptor type 1 in hippocampal neurons. *J Biol Chem.* **279**: 23691-8. CARSON, M.C., HARPER, C.M., BAUKAL, A.J., AGUILERA, G. & CATT, K.J. (1987) Physicochemical characterization of photoaffinity-labeled angiotensin II receptors. *Mol Endocrinol.* **1**: 147-53. CERIONE, R.A., CODINA, J., BENOVIC, J.L., LEFKOWITZ, R.J., BIRNBAUMER, L. & CARON, M.G. (1984) The mammalian beta 2-adrenergic receptor: reconstitution of functional interactions between pure receptor and pure stimulatory nucleotide binding protein of the adenylate cyclase system. *Biochemistry.* **23**: 4519-25. CHAKI, S., GUO, D.F., YAMANO, Y., OHYAMA, K., TANI, M., MIZUKOSHI, M., SHIRAI, H. & INAGAMI, T. (1994) Role of carboxyl tail of the rat angiotensin II type 1A receptor in agonist-induced internalization of the receptor. *Kidney Int.* **46**: 1492-5. CHAN, A.S., LAW, P.Y., LOH, H.H., HO, P.N., WU, W.M., CHAN, J.S. & WONG, Y.H. (2003) The first and third intracellular loops together with the carboxy terminal tail of the delta-opioid receptor contribute toward functional interaction with Galpha16. *J Neurochem.* **87**: 697-708. CHIDIAC, P., HEBERT, T.E., VALIQUETTE, M., DENNIS, M. & BOUVIER, M. (1994) Inverse agonist activity of beta-adrenergic antagonists. *Mol Pharmacol.* **45**: 490-9. CHIKUMI, H., VAZQUEZ-PRADO, J., SERVITJA, J.M., MIYAZAKI, H. & GUTKIND, J.S. (2002) Potent activation of RhoA by Galpha q and Gq-coupled receptors. *J Biol Chem.* **277**: 27130-4. CHIU, A.T., HERBLIN, W.F., MCCALL, D.E., ARDECKY, R.J., CARINI, D.J., DUNCIA, J.V., PEASE, L.L., WONG, P.C., WEXLER, R.R., JOHNSON, A.L. & ET AL. (1989) Identification of angiotensin II receptor subtypes. *Biochem Biophys Res Commun.* 165: 196-203. CIRUELA, F., ROBBINS, M.J., WILLIS, A.C. & MCILHINNEY, R.A. (1999) Interactions of the C terminus of metabotropic glutamate receptor type 1 alpha with rat brain proteins: evidence for a direct interaction with tubulin. *J Neurochem.* 72: 346-54. CLAPHAM, D.E. & NEER, E.J. (1997) G protein beta gamma subunits. *Annu Rev Pharmacol Toxicol.* 37: 167-203. COHEN, G.B., YANG, T., ROBINSON, P.R. & OPRIAN, D.D. (1993) Constitutive activation of opsin: influence of charge at position 134 and size at position 296. *Biochemistry.* 32: 6111-5. COLEMAN, D.E., BERGHUIS, A.M., LEE, E., LINDER, M.E., GILMAN, A.G. & SPRANG, S.R.
(1994) Structures of active conformations of Gi alpha 1 and the mechanism of GTP hydrolysis. *Science.* 265: 1405-12. COSTA, T. & HERZ, A. (1989) Antagonists with negative intrinsic activity at delta opioid receptors coupled to GTP-binding proteins. *Proc Natl Acad Sci USA.* 86: 7321-5. COTECCHIA, S., EXUM, S., CARON, M.G. & LEFKOWITZ, R.J. (1990) Regions of the alpha 1-adrenergic receptor involved in coupling to phosphatidylinositol hydrolysis and enhanced sensitivity of biological function. *Proc Natl Acad Sci U S A.* 87: 2896-900. COUGHLIN, S.R. (1994) Expanding horizons for receptors coupled to G proteins: diversity and disease. *Curr Opin Cell Biol.* 6: 191-7. CURNOW, K.M., PASCOE, L. & WHITE, P.C. (1992) Genetic analysis of the human type-1 angiotensin II receptor. *Mol Endocrinol.* 6: 1113-8. DANSER, A.H. & SCHUNKERT, H. (2000) Renin-angiotensin system gene polymorphisms: potential mechanisms for their association with cardiovascular diseases. *Eur J Pharmacol.* 410: 303-316. DANSER, A.H. (2003) Local renin-angiotensin systems: the unanswered questions. *Int J Biochem Cell Biol.* 35: 759-68. DAUB, H., WEISS, F.U., WALLASCH, C. & ULLRICH, A. (1996) Role of transactivation of the EGF receptor in signalling by G-protein-coupled receptors. *Nature.* 379: 557-60. DAVIDSON, L., PAWSON, A.J., MILLAR, R.P. & MAUDSLEY, S. (2004) Cytoskeletal reorganization dependence of signaling by the gonadotropin-releasing hormone receptor. *J Biol Chem.* 279: 1980-93. DAVIES, E., BONNARDEAUX, A., PLOUIN, P.F., CORVOL, P. & CLAUSER, E. (1997) Somatic mutations of the angiotensin II (AT1) receptor gene are not present in aldosterone-producing adenoma. *J Clin Endocrinol Metab.* 82: 611-5. DAVIET, L., LEHTONEN, J.Y., TAMURA, K., GRIESE, D.P., HORIUCHI, M. & DZAU, V.J. (1999) Cloning and characterization of ATRAP, a novel protein that interacts with the angiotensin II type 1 receptor. *J Biol Chem.* 274: 17058-62. DE GASPARO, M., CATT, K.J., INAGAMI, T., WRIGHT, J.W. & UNGER, T. (2000) International union of pharmacology. XXIII. The angiotensin II receptors. *Pharmacol Rev.* 52: 415-72. DE LEAN, A., STADEL, J.M. & LEFKOWITZ, R.J. (1980) A ternary complex model explains the agonist-specific binding properties of the adenylate cyclase-coupled beta-adrenergic receptor. *J Biol Chem.* 255: 7108-17. DE LIGT, R.A., KOUROUNAKIS, A.P. & AP, I.J. (2000) Inverse agonism at G protein-coupled receptors: (patho)physiological relevance and implications for drug discovery. *Br J Pharmacol.* 130: 1-12. DECAILLOT, F.M., BEFORT, K., FILLIOL, D., YUE, S., WALKER, P. & KIEFFER, B.L. (2003) Opioid receptor random mutagenesis reveals a mechanism for G protein-coupled receptor activation. *Nat Struct Biol.* 10: 629-36. DESLAURIERS, B., PONCE, C., LOMBARD, C., LARGUIER, R., BONNAFOUS, J.C. & MARIE, J. (1999) N-glycosylation requirements for the AT1a angiotensin II receptor delivery to the plasma membrane. *Biochem J.* 339 (Pt 2): 397-405. DEVI, L.A. (2001) Heterodimerization of G-protein-coupled receptors: pharmacology, signaling and trafficking. *Trends Pharmacol Sci.* 22: 532-7. DOAN, T.N., ALI, M.S. & BERNSTEIN, K.E. (2001) Tyrosine kinase activation by the angiotensin II receptor in the absence of calcium signaling. *J Biol Chem.* 276: 20954-8. DOHLMAN, H.G., THORNER, J., CARON, M.G. & LEFKOWITZ, R.J. (1991) Model systems for the study of seven-transmembrane-segment receptors. *Annu Rev Biochem.* 60: 653-88. DOUGLAS, J.G., ROMERO, M. & HOPFER, U. 
(1990) Signaling mechanisms coupled to the angiotensin receptor of proximal tubular epithelium. *Kidney Int Suppl.* 30: S43-7. DUFFY, A.A., MARTIN, M.M. & ELTON, T.S. (2004) Transcriptional regulation of the AT1 receptor gene in immortalized human trophoblast cells. *Biochim Biophys Acta.* 1680: 158-70. DZAU, V.J. & GIBBONS, G.H. (1987) Autocrine-paracrine mechanisms of vascular myocytes in systemic hypertension. *Am J Cardiol.* 60: 99I-103I. EGAMI, K., MUROHARA, T., SHIMADA, T., SASAKI, K., SHINTANI, S., SUGAYA, T., ISHII, M., AKAGI, T., IKEDA, H., MATSUISHI, T. & IMAIZUMI, T. (2003) Role of host angiotensin II type 1 receptor in tumor angiogenesis and growth. *J Clin Invest.* 112: 67-75. EVEN-RAM, S., UZIELY, B., COHEN, P., GRISARU-GRANOVSKY, S., MAOZ, M., GINZBURG, Y., REICH, R., VLODAVSKY, I. & BAR-SHAVIT, R. (1998) Thrombin receptor overexpression in malignant and physiological invasion processes. *Nat Med.* 4: 909-14. FENG, Y.H., MIURA, S., HUSAIN, A. & KARNIK, S.S. (1998) Mechanism of constitutive activation of the AT1 receptor: influence of the size of the agonist switch binding residue Asn(111). *Biochemistry.* 37: 15791-8. FENG, Y.H. & KARNIK, S.S. (1999) Role of transmembrane helix IV in G-protein specificity of the angiotensin II type 1 receptor. *J Biol Chem.* 274: 35546-52. FERGUSON, S.S. (2001) Evolving concepts in G protein-coupled receptor endocytosis: the role in receptor desensitization and signaling. *Pharmacol Rev.* 53: 1-24. FITZSIMONS, C.P., MONCZOR, F., FERNANDEZ, N., SHAYO, C. & DAVIO, C. (2004) Mepyramine, a histamine H1 receptor inverse agonist, binds preferentially to a G protein-coupled form of the receptor and sequesters G protein. *J Biol Chem.* **279**: 34431-9. FLOWER, D.R. (1999) Modelling G-protein-coupled receptors for drug design. *Biochim Biophys Acta.* **1422**: 207-34. FRANKE, R.R., SAKMAR, T.P., GRAHAM, R.M. & KHORANA, H.G. (1992) Structure and function in rhodopsin. Studies of the interaction between the rhodopsin cytoplasmic domain and transducin. *J Biol Chem.* **267**: 14767-74. FRASER, I.D., CONG, M., KIM, J., ROLLINS, E.N., DAAKA, Y., LEFKOWITZ, R.J. & SCOTT, J.D. (2000) Assembly of an A kinase-anchoring protein-beta(2)-adrenergic receptor complex facilitates receptor phosphorylation and signaling. *Curr Biol.* **10**: 409-12. FREDRIKSSON, R., LAGERSTROM, M.C., LUNDIN, L.G. & SCHIOTH, H.B. (2003) The G-protein-coupled receptors in the human genome form five main families. Phylogenetic analysis, paralogon groups, and fingerprints. *Mol Pharmacol.* **63**: 1256-72. FUJITA, M., HAYASHI, I., YAMASHINA, S., FUKAMIZU, A., ITOMAN, M. & MAJIMA, M. (2005) Angiotensin type 1a receptor signaling-dependent induction of vascular endothelial growth factor in stroma is relevant to tumor-associated angiogenesis and tumor growth. *Carcinogenesis.* **26**: 271-9. FUKATA, M. & KAIBUCHI, K. (2001) Rho-family GTPases in cadherin-mediated cell-cell adhesion. *Nat Rev Mol Cell Biol.* **2**: 887-97. FUKATA, Y., AMANO, M. & KAIBUCHI, K. (2001) Rho-Rho-kinase pathway in smooth muscle contraction and cytoskeletal reorganization of non-muscle cells. *Trends Pharmacol Sci.* **22**: 32-9. FUKUHARA, M., GEARY, R.L., DIZ, D.I., GALLAGHER, P.E., WILSON, J.A., GLAZIER, S.S., DEAN, R.H. & FERRARIO, C.M. (2000) Angiotensin-converting enzyme expression in human carotid artery atherosclerosis. *Hypertension.* **35**: 353-9. GALES, C., KOWALSKI-CHAUVEL, A., DUFOUR, M.N., SEVA, C., MORODER, L., PRADAYROL, L., VAYSSE, N., FOURMY, D. & SILVENTE-POIROT, S.
(2000) Mutation of Asn-391 within the conserved NPXXY motif of the cholecystokinin B receptor abolishes Gq protein activation without affecting its association with the receptor. *J Biol Chem.* **275**: 17321-7. GAZI, L., NICKOLLS, S.A. & STRANGE, P.G. (2003) Functional coupling of the human dopamine D2 receptor with G alpha i1, G alpha i2, G alpha i3 and G alpha o G proteins: evidence for agonist regulation of G protein selectivity. *Br J Pharmacol.* **138**: 775-86. GENAZZANI, A.A., CARAFOLI, E. & GUERINI, D. (1999) Calcineurin controls inositol 1,4,5-trisphosphate type 1 receptor expression in neurons. *Proc Natl Acad Sci USA.* **96**: 5797-801. GETHER, U. & KOBILKA, B.K. (1998) G protein-coupled receptors. II. Mechanism of agonist activation. *J Biol Chem.* **273**: 17979-82. GETHER, U. (2000) Uncovering molecular mechanisms involved in activation of G protein-coupled receptors. *Endocr Rev.* **21**: 90-113. GILMAN, A.G. (1987) G proteins: transducers of receptor-generated signals. *Annu Rev Biochem.* **56**: 615-49. GLASS, M. & NORTHUP, J.K. (1999) Agonist selective regulation of G proteins by cannabinoid CB(1) and CB(2) receptors. *Mol Pharmacol.* **56**: 1362-9. GOHLA, A., OFFERMANNS, S., WILKIE, T.M. & SCHULTZ, G. (1999) Differential involvement of Galpha12 and Galpha13 in receptor-mediated stress fiber formation. *J Biol Chem.* **274**: 17901-7. GOHLA, A., SCHULTZ, G. & OFFERMANNS, S. (2000) Role for G(12)/G(13) in agonist-induced vascular smooth muscle cell contraction. *Circ Res.* **87**: 221-7. GOULDSON, P.R., KIDLEY, N.J., BYWATER, R.P., PSAROUDAKIS, G., BROOKS, H.D., DIAZ, C., SHIRE, D. & REYNOLDS, C.A. (2004) Toward the active conformations of rhodopsin and the beta2-adrenergic receptor. *Proteins.* **56**: 67-84. GRIENDLING, K.K., LASSEGUE, B. & ALEXANDER, R.W. (1996) Angiotensin receptors and their therapeutic implications. *Annu Rev Pharmacol Toxicol.* **36**: 281-306. GRIFFIN, S.A., BROWN, W.C., MACPHERSON, F., MCGRATH, J.C., WILSON, V.G., KORSGAARD, N., MULVANY, M.J. & LEVER, A.F. (1991) Angiotensin II causes vascular hypertrophy in part by a non-pressor mechanism. *Hypertension.* **17**: 626-35. GROBLEWSKI, T., MAIGRET, B., LARGUIER, R., LOMBARD, C., BONNAFOUS, J.C. & MARIE, J. (1997) Mutation of Asn111 in the third transmembrane domain of the AT1A angiotensin II receptor induces its constitutive activation. *J Biol Chem.* **272**: 1822-6. GSCHWIND, A., ZWICK, E., PRENZEL, N., LESERER, M. & ULLRICH, A. (2001) Cell communication networks: epidermal growth factor receptor transactivation as the paradigm for interreceptor signal transmission. *Oncogene.* **20**: 1594-600. GUDERMANN, T., SCHONEBERG, T. & SCHULTZ, G. (1997) Functional and structural complexity of signal transduction via G-protein-coupled receptors. *Annu Rev Neurosci.* **20**: 399-427. GUNTHER, S. (1984) Characterization of angiotensin II receptor subtypes in rat liver. *J Biol Chem.* **259**: 7622-9. GUO, D.F., FURUTA, H., MIZUKOSHI, M. & INAGAMI, T. (1994) The genomic organization of human angiotensin II type 1 receptor. *Biochem Biophys Res Commun.* **200**: 313-9. GUO, D.F. & INAGAMI, T. (1994) The genomic organization of the rat angiotensin II receptor AT1B. *Biochim Biophys Acta.* **1218**: 91-4. GUO, S., LOPEZ-ILASACA, M. & DZAU, V.J. (2005) Identification of Calcium-modulating Cyclophilin Ligand (CAML) as Transducer of Angiotensin II-mediated Nuclear Factor of Activated T Cells (NFAT) Activation. *J Biol Chem.* **280**: 12536-41. HACKENTHAL, E., AKTORIES, K. & JAKOBS, K.H. 
(1985) Pertussis toxin attenuates angiotensin II-induced vasoconstriction and inhibition of renin release. *Mol Cell Endocrinol.* **42**: 113-7. HAGAR, R.E. & EHRLICH, B.E. (2000) Regulation of the type III InsP3 receptor and its role in beta cell function. *Cell Mol Life Sci.* **57**: 1938-49. HALL, R.A., PREMONT, R.T., CHOW, C.W., BLITZER, J.T., PITCHER, J.A., CLAING, A., STOFFEL, R.H., BARAK, L.S., SHENOLIKAR, S., WEINMAN, E.J., GRINSTEIN, S. & LEFKOWITZ, R.J. (1998) The beta2-adrenergic receptor interacts with the Na+/H+-exchanger regulatory factor to control Na+/H+ exchange. *Nature.* 392: 626-30. HALL, R.A., PREMONT, R.T. & LEFKOWITZ, R.J. (1999) Heptahelical receptor signaling: beyond the G protein paradigm. *J Cell Biol.* 145: 927-32. HAN, M., GROESBEEK, M., SAKMAR, T.P. & SMITH, S.O. (1997) The C9 methyl group of retinal interacts with glycine-121 in rhodopsin. *Proc Natl Acad Sci U S A.* 94: 13442-7. HARVEY, J.A. (2003) Role of the serotonin 5-HT(2A) receptor in learning. *Learn Mem.* 10: 355-62. HASKELL-LUEVANO, C. & MONCK, E.K. (2001) Agouti-related protein functions as an inverse agonist at a constitutively active brain melanocortin-4 receptor. *Regul Pept.* 99: 1-7. HEBERT, T.E., MOFFETT, S., MORELLO, J.P., LOISEL, T.P., BICHET, D.G., BARRET, C. & BOUVIER, M. (1996) A peptide derived from a beta2-adrenergic receptor transmembrane domain inhibits both receptor dimerization and activation. *J Biol Chem.* 271: 16384-92. HEBERT, T.E. & BOUVIER, M. (1998) Structural and functional aspects of G protein-coupled receptor oligomerization. *Biochem Cell Biol.* 76: 1-11. HELLMICH, M.R., RUI, X.L., HELLMICH, H.L., FLEMING, R.Y., EVERS, B.M. & TOWNSEND, C.M., JR. (2000) Human colorectal cancers express a constitutively active cholecystokinin-B/gastrin receptor that stimulates cell growth. *J Biol Chem.* 275: 32122-8. HEXIMER, S.P., WATSON, N., LINDER, M.E., BLUMER, K.J. & HEPLER, J.R. (1997) RGS2/G0S8 is a selective inhibitor of Gqalpha function. *Proc Natl Acad Sci U S A.* 94: 14389-93. HOPKINSON, H.E., LATIF, M.L. & HILL, S.J. (2000) Non-competitive antagonism of beta(2)-agonist-mediated cyclic AMP accumulation by ICI 118551 in BC3H1 cells endogenously expressing constitutively active beta(2)-adrenoceptors. *Br J Pharmacol.* 131: 124-30. HUBBELL, W.L., ALTENBACH, C., HUBBELL, C.M. & KHORANA, H.G. (2003) Rhodopsin structure, dynamics, and activation: a perspective from crystallography, site-directed spin labeling, sulfhydryl reactivity, and disulfide cross-linking. *Adv Protein Chem.* 63: 243-90. HUNYADY, L., BAUKAL, A.J., BALLA, T. & CATT, K.J. (1994a) Independence of type I angiotensin II receptor endocytosis from G protein coupling and signal transduction. *J Biol Chem.* 269: 24798-804. HUNYADY, L., TIAN, Y., SANDBERG, K., BALLA, T. & CATT, K.J. (1994b) Divergent conformational requirements for angiotensin II receptor internalization and signaling. *Kidney Int.* 46: 1496-8. HUNYADY, L., BOR, M., BAUKAL, A.J., BALLA, T. & CATT, K.J. (1995) A conserved NPLFY sequence contributes to agonist binding and signal transduction but is not an internalization signal for the type 1 angiotensin II receptor. *J Biol Chem.* 270: 16602-9. HUNYADY, L. & TURU, G. (2004) The role of the AT(1) angiotensin receptor in cardiac hypertrophy: angiotensin II receptor or stretch sensor? *Trends Endocrinol Metab.* **15**: 405-8. IKEDA, Y., TAKEUCHI, K., KATO, T., TANIYAMA, Y., SATO, K., TAKAHASHI, N., SUGAWARA, A. & ITO, S. 
(1999) Transcriptional suppression of rat angiotensin AT1a receptor gene expression by interferon-gamma in vascular smooth muscle cells. *Biochem Biophys Res Commun.* **262**: 494-8. ISHII, I., IZUMI, T., TSUKAMOTO, H., UMEYAMA, H., UI, M. & SHIMIZU, T. (1997) Alanine exchanges of polar amino acids in the transmembrane domains of a platelet-activating factor receptor generate both constitutively active and inactive mutants. *J Biol Chem.* **272**: 7846-54. ITOH, Y., CAI, K. & KHORANA, H.G. (2001) Mapping of contact sites in complex formation between light-activated rhodopsin and transducin by covalent crosslinking: use of a chemically preactivated reagent. *Proc Natl Acad Sci USA.* **98**: 4883-7. IWANIJ, V. (1995) Canine kidney glucagon receptor: evidence for a structurally-different, tissue-specific variant of the glucagon receptor. *Mol Cell Endocrinol.* **115**: 21-8. JAVITCH, J.A., LI, X., KABACK, J. & KARLIN, A. (1994) A cysteine residue in the third membrane-spanning segment of the human D2 dopamine receptor is exposed in the binding-site crevice. *Proc Natl Acad Sci USA.* **91**: 10355-9. JAVITCH, J.A., FU, D., LIAPAKIS, G. & CHEN, J. (1997) Constitutive activation of the beta2 adrenergic receptor alters the orientation of its sixth membrane-spanning segment. *J Biol Chem.* **272**: 18546-9. JAYADEV, S., SMITH, R.D., JAGADEESH, G., BAUKAL, A.J., HUNYADY, L. & CATT, K.J. (1999) N-linked glycosylation is required for optimal AT1a angiotensin receptor expression in COS-7 cells. *Endocrinology.* **140**: 2010-7. JIN, S., CORNWALL, M.C. & OPRIAN, D.D. (2003) Opsin activation as a cause of congenital night blindness. *Nat Neurosci.* **6**: 731-5. JOSEPH, M.P., MAIGRET, B., BONNAFOUS, J.C., MARIE, J. & SCHERAGA, H.A. (1995) A computer modeling postulated mechanism for angiotensin II receptor activation. *J Protein Chem.* **14**: 381-98. KAGIYAMA, S., EGUCHI, S., FRANK, G.D., INAGAMI, T., ZHANG, Y.C. & PHILLIPS, M.I. (2002) Angiotensin II-induced cardiac hypertrophy and hypertension are attenuated by epidermal growth factor receptor antisense. *Circulation.* **106**: 909-12. KAI, H., ALEXANDER, R.W., USHIO-FUKAI, M., LYONS, P.R., AKERS, M. & GRIENDLING, K.K. (1998) G-Protein binding domains of the angiotensin II AT1A receptors mapped with synthetic peptides selected from the receptor sequence. *Biochem J.* **332** (Pt 3): 781-7. KALATSKAYA, I., SCHUSSLER, S., BLAUKAT, A., MULLER-ESTERL, W., JOCHUM, M., PROUD, D. & FAUSSNER, A. (2004) Mutation of tyrosine in the conserved NPXXY sequence leads to constitutive phosphorylation and internalization, but not signaling, of the human B2 bradykinin receptor. *J Biol Chem.* **279**: 31268-76. KAMBAYASHI, Y., BARDHAN, S., TAKAHASHI, K., TSUZUKI, S., INUI, H., HAMAKUBO, T. & INAGAMI, T. (1993) Molecular cloning of a novel angiotensin II receptor isoform involved in phosphotyrosine phosphatase inhibition. *J Biol Chem.* **268**: 24543-6. KARNIK, S.S., GOGONEA, C., PATIL, S., SAAD, Y. & TAKEZAKO, T. (2003) Activation of G-protein-coupled receptors: a common molecular mechanism. *Trends Endocrinol Metab.* **14**: 431-7. KENAKIN, T. (1997) Differences between natural and recombinant G protein-coupled receptor systems with varying receptor/G protein stoichiometry. *Trends Pharmacol Sci.* **18**: 456-64. KENAKIN, T. (2003a) Ligand-selective receptor conformations revisited: the promise and the problem. *Trends Pharmacol Sci.* **24**: 346-54. KENAKIN, T. (2003b) Predicting therapeutic value in the lead optimization phase of drug discovery.
*Nat Rev Drug Discov.* **2**: 429-38. KENAKIN, T. (2004a) Principles: receptor theory in pharmacology. *Trends Pharmacol Sci.* **25**: 186-92. KENAKIN, T. (2004b) Efficacy as a vector: the relative prevalence and paucity of inverse agonism. *Mol Pharmacol.* **65**: 2-11. KENNEDY, A.P., MANGUM, K.C., LINDEN, J. & WELLS, J.N. (1996) Covalent modification of transmembrane span III of the A1 adenosine receptor with an antagonist photoaffinity probe. *Mol Pharmacol.* **50**: 789-98. KJELSBERG, M.A., COTECCHIA, S., OSTROWSKI, J., CARON, M.G. & LEFKOWITZ, R.J. (1992) Constitutive activation of the alpha 1B-adrenergic receptor by all amino acid substitutions at a single site. Evidence for a region which constrains receptor activation. *J Biol Chem.* **267**: 1430-3. KLEIN-SEETHARAMAN, J., HWA, J., CAI, K., ALTENBACH, C., HUBBELL, W.L. & KHORANA, H.G. (2001) Probing the dark state tertiary structure in the cytoplasmic domain of rhodopsin: proximities between amino acids deduced from spontaneous disulfide bond formation between Cys316 and engineered cysteines in cytoplasmic loop 1. *Biochemistry.* **40**: 12472-8. KLETT, C., NOBILING, R., GIERSCHIK, P. & HACKENTHAL, E. (1993) Angiotensin II stimulates the synthesis of angiotensinogen in hepatocytes by inhibiting adenylyl cyclase activity and stabilizing angiotensinogen mRNA. *J Biol Chem.* **268**: 25095-107. KOBASHI, G., HATA, A., OHTA, K., YAMADA, H., KATO, E.H., MINAKAMI, H., FUJIMOTO, S. & KONDO, K. (2004) A1166C variant of angiotensin II type 1 receptor gene is associated with severe hypertension in pregnancy independently of T235 variant of angiotensinogen gene. *J Hum Genet.* **49**: 182-6. KOLAKOWSKI, L.F., JR. (1994) GCRDb: a G-protein-coupled receptor database. *Receptors Channels.* **2**: 1-7. KONIG, B., ARENDT, A., MCDOWELL, J.H., KAHLERT, M., HARGRAVE, P.A. & HOFMANN, K.P. (1989) Three cytoplasmic loops of rhodopsin interact with transducin. *Proc Natl Acad Sci U S A.* **86**: 6878-82. KOSKI, G., STREATY, R.A. & KLEE, W.A. (1982) Modulation of sodium-sensitive GTPase by partial opiate agonists. An explanation for the dual requirement for Na+ and GTP in inhibitory regulation of adenylate cyclase. *J Biol Chem.* **257**: 14035-40. KOZASA, T., JIANG, X., HART, M.J., STERNWEIS, P.M., SINGER, W.D., GILMAN, A.G., BOLLAG, G. & STERNWEIS, P.C. (1998) p115 RhoGEF, a GTPase activating protein for Galpha12 and Galpha13. *Science.* **280**: 2109-11. KREIENKAMP, H.J. (2002) Organisation of G-protein-coupled receptor signalling complexes by scaffolding proteins. *Curr Opin Pharmacol.* **2**: 581-6. KRISHNAMURTHI, K., VERBALIS, J.G., ZHENG, W., WU, Z., CLERCH, L.B. & SANDBERG, K. (1999) Estrogen regulates angiotensin AT1 receptor expression via cytosolic proteins that bind to the 5' leader sequence of the receptor mRNA. *Endocrinology.* **140**: 5435-8. KUZNETSOVA, T., STAESSEN, J.A., THIJS, L., KUNATH, C., OLSZANECKA, A., RYABIKOV, A., TIKHONOFF, V., STOLARZ, K., BIANCHI, G., CASIGLIA, E., FAGARD, R., BRAND-HERRMANN, S.M., KAWECKA-JASZCZ, K., MALYUTINA, S., NIKITIN, Y. & BRAND, E. (2004) Left ventricular mass in relation to genetic variation in angiotensin II receptors, renin system genes, and sodium excretion. *Circulation.* **110**: 2644-50. LACHANCE, M., ETHIER, N., WOLBRING, G., SCHNETKAMP, P.P. & HEBERT, T.E. (1999) Stable association of G proteins with beta 2AR is independent of the state of receptor activation. *Cell Signal.* **11**: 523-33. LANCTOT, P.M., LECLERC, P.C., ESCHER, E., LEDUC, R. & GUILLEMETTE, G.
(1999) Role of N-glycosylation in the expression and functional properties of human AT1 receptor. *Biochemistry.* **38**: 8621-7. LANDER, E.S., LINTON, L.M., BIRREN, B., NUSBAUM, C., ZODY, M.C., BALDWIN, J., DEVON, K., DEWAR, K., DOYLE, M., FITZHUGH, W., FUNKE, R., GAGE, D., HARRIS, K., HEAFORD, A., HOWLAND, J., KANN, L., LEHOCZKY, J., LEVINE, R., MCEWAN, P., MCKERNAN, K., MELDRIM, J., MESIROV, J.P., MIRANDA, C., MORRIS, W., NAYLOR, J., RAYMOND, C., ROSETTI, M., SANTOS, R., SHERIDAN, A., SOUGNEZ, C., STANGE-THOMANN, N., STOJANOVIC, N., SUBRAMANIAN, A., WYMAN, D., ROGERS, J., SULSTON, J., AINSCOUGH, R., BECK, S., BENTLEY, D., BURTON, J., CLEE, C., CARTER, N., COULSON, A., DEADMAN, R., DELOUKAS, P., DUNHAM, A., DUNHAM, I., DURBIN, R., FRENCH, L., GRAFHAM, D., GREGORY, S., HUBBARD, T., HUMPHRAY, S., HUNT, A., JONES, M., LLOYD, C., MCMURRAY, A., MATTHEWS, L., MERCER, S., MILNE, S., MULLIKIN, J.C., MUNGALL, A., PLUMB, R., ROSS, M., SHOWNKEEN, R., SIMS, S., WATERSTON, R.H., WILSON, R.K., HILLIER, L.W., MCPHERSON, J.D., MARRA, M.A., MARDIS, E.R., FULTON, L.A., CHINWALLA, A.T., PEPIN, K.H., GISH, W.R., CHISSOE, S.L., WENDL, M.C., DELEHAUNTY, K.D., MINER, T.L., DELEHAUNTY, A., KRAMER, J.B., COOK, L.L., FULTON, R.S., JOHNSON, D.L., MINX, P.J., CLIFTON, S.W., HAWKINS, T., BRANSCOMB, E., PREDKI, P., RICHARDSON, P., WENNING, S., SLEZAK, T., DOGGETT, N., CHENG, J.F., OLSEN, A., LUCAS, S., ELKIN, C., UBERBACHER, E., FRAZIER, M., *et al.* (2001) Initial sequencing and analysis of the human genome. *Nature.* **409**: 860-921. LAPORTE, S.A., SERVANT, G., RICHARD, D.E., ESCHER, E., GUILLEMETTE, G. & LEDUC, R. (1996) The tyrosine within the NPXnY motif of the human angiotensin II type 1 receptor is involved in mediating signal transduction but is not essential for internalization. *Mol Pharmacol.* **49**: 89-95. LAUFS, U. & LIAO, J.K. (2000) Targeting Rho in cardiovascular disease. *Circ Res.* **87**: 526-8. LAW, S.F. & REISINE, T. (1997) Changes in the association of G protein subunits with the cloned mouse delta opioid receptor on agonist stimulation. *J Pharmacol Exp Ther.* **281**: 1476-86. LE, M.T., VANDERHEYDEN, P.M., SZASZAK, M., HUNYADY, L. & VAUQUELIN, G. (2002) Angiotensin IV is a potent agonist for constitutive active human AT1 receptors. Distinct roles of the N- and C-terminal residues of angiotensin II during AT1 receptor activation. *J Biol Chem.* **277**: 23107-10. LE, M.T., VANDERHEYDEN, P.M., SZASZAK, M., HUNYADY, L., KERSEMANS, V. & VAUQUELIN, G. (2003) Peptide and nonpeptide antagonist interaction with constitutively active human AT1 receptors. *Biochem Pharmacol.* **65**: 1329-38. LEE, B., GAI, W. & LAYCHOCK, S.G. (2001) Proteasomal activation mediates down-regulation of inositol 1,4,5-trisphosphate receptor and calcium mobilization in rat pancreatic islets. *Endocrinology.* **142**: 1744-51. LEE, S.P., O'DOWD, B.F., NG, G.Y., VARGHESE, G., AKIL, H., MANSOUR, A., NGUYEN, T. & GEORGE, S.R. (2000) Inhibition of cell surface expression by mutant receptors demonstrates that D2 dopamine receptors exist as oligomers in the cell. *Mol Pharmacol.* **58**: 120-8. LEFF, P. (1995) The two-state model of receptor activation. *Trends Pharmacol Sci.* **16**: 89-97. LEFKOWITZ, R.J., COTECCHIA, S., SAMAMA, P. & COSTA, T. (1993) Constitutive activity of receptors coupled to guanine nucleotide regulatory proteins. *Trends Pharmacol Sci.* **14**: 303-7. LEURS, R., RODRIGUEZ PENA, M.S., BAKKER, R.A., ALEWIJNSE, A.E. & TIMMERMAN, H. (2000) Constitutive activity of G protein coupled receptors and drug action.
*Pharm Acta Helv.* **74**: 327-31. LEVER, A.F., HOLE, D.J., GILLIS, C.R., MCCALLUM, I.R., MCINNES, G.T., MACKINNON, P.L., MEREDITH, P.A., MURRAY, L.S., REID, J.L. & ROBERTSON, J.W. (1998) Do inhibitors of angiotensin-I-converting enzyme protect against risk of cancer? *Lancet.* **352**: 179-84. LIN, S.C., LIN, C.R., GUKOVSKY, I., LUSIS, A.J., SAWCHENKO, P.E. & ROSENFELD, M.G. (1993) Molecular basis of the little mouse phenotype and implications for cell type-specific growth. *Nature.* **364**: 208-13. LUTTRELL, D.K. & LUTTRELL, L.M. (2004) Not so strange bedfellows: G-protein-coupled receptors and Src family kinases. *Oncogene.* **23**: 7969-78. LUTTRELL, L.M. & LEFKOWITZ, R.J. (2002) The role of beta-arrestins in the termination and transduction of G-protein-coupled receptor signals. *J Cell Sci.* **115**: 455-65. MARIE, J., MAIGRET, B., JOSEPH, M.P., LARGUIER, R., NOUET, S., LOMBARD, C. & BONNAFOUS, J.C. (1994) Tyr292 in the seventh transmembrane domain of the AT1A angiotensin II receptor is essential for its coupling to phospholipase C. *J Biol Chem.* **269**: 20815-8. MARIE, J., KOCH, C., PRUNEAU, D., PAQUET, J.L., GROBLEWSKI, T., LARGUIER, R., LOMBARD, C., DESLAURIERS, B., MAIGRET, B. & BONNAFOUS, J.C. (1999) Constitutive activation of the human bradykinin B2 receptor induced by mutations in transmembrane helices III and VI. *Mol Pharmacol.* **55**: 92-101. MARINISSEN, M.J. & GUTKIND, J.S. (2001) G-protein-coupled receptors and signaling networks: emerging paradigms. *Trends Pharmacol Sci.* **22**: 368-76. MARRERO, M.B., SCHIEFFER, B., PAXTON, W.G., HEERDT, L., BERK, B.C., DELAFONTAINE, P. & BERNSTEIN, K.E. (1995) Direct stimulation of Jak/STAT pathway by the angiotensin II AT1 receptor. *Nature.* **375**: 247-50. MARTIN, N.P., WHALEN, E.J., ZAMAH, M.A., PIERCE, K.L. & LEFKOWITZ, R.J. (2004a) PKA-mediated phosphorylation of the beta1-adrenergic receptor promotes Gs/Gi switching. *Cell Signal.* **16**: 1397-403. MARTIN, S.S., BOUCARD, A.A., CLEMENT, M., ESCHER, E., LEDUC, R. & GUILLEMETTE, G. (2004b) Analysis of the third transmembrane domain of the human type 1 angiotensin II receptor by cysteine scanning mutagenesis. *J Biol Chem.* **279**: 51415-23. MCDONALD, P.H., CHOW, C.W., MILLER, W.E., LAPORTE, S.A., FIELD, M.E., LIN, F.T., DAVIS, R.I. & LEFKOWITZ, R.J. (2000) Beta-arrestin 2: a receptor-regulated MAPK scaffold for the activation of JNK3. *Science.* **290**: 1574-7. MCLATCHIE, L.M., FRASER, N.J., MAIN, M.J., WISE, A., BROWN, J., THOMPSON, N., SOLARI, R., LEE, M.G. & FOORD, S.M. (1998) RAMPs regulate the transport and ligand specificity of the calcitonin-receptor-like receptor. *Nature.* **393**: 333-9. MCWHINNEY, C.D., HUNT, R.A., CONRAD, K.M., DOSTAL, D.E. & BAKER, K.M. (1997) The type I angiotensin II receptor couples to Stat1 and Stat3 activation through Jak2 kinase in neonatal rat cardiac myocytes. *J Mol Cell Cardiol.* **29**: 2513-24. MHAOUTY-KODJA, S., BARAK, L.S., SCHEER, A., ABUIN, L., DIVIANI, D., CARON, M.G. & COTECCHIA, S. (1999) Constitutively active alpha-1b adrenergic receptor mutants display different phosphorylation and internalization features. *Mol Pharmacol.* **55**: 339-47. MILLER, J.A. & SCHOLEY, J.W. (2004) The impact of renin-angiotensin system polymorphisms on physiological and pathophysiological processes in humans. *Curr Opin Nephrol Hypertens.* **13**: 101-6. MILLIGAN, G., BOND, R.A. & LEE, M. (1995) Inverse agonism: pharmacological curiosity or potential therapeutic strategy? *Trends Pharmacol Sci.* **16**: 10-3. MILLIGAN, G. & BOND, R.A. 
(1997) Inverse agonism and the regulation of receptor number. *Trends Pharmacol Sci.* **18**: 468-74. MILLIGAN, G. & IJZERMAN, A.P. (2000) Stochastic multidimensional hypercubes and inverse agonism. *Trends Pharmacol Sci.* **21**: 362-3. MILLIGAN, G. (2003) Constitutive activity and inverse agonists of G protein-coupled receptors: a current perspective. *Mol Pharmacol.* **64**: 1271-6. MINAKAMI, R., JINNAI, N. & SUGIYAMA, H. (1997) Phosphorylation and calmodulin binding of the metabotropic glutamate receptor subtype 5 (mGluR5) are antagonistic in vitro. *J Biol Chem.* **272**: 20291-8. MISEREY-LENKEI, S., PARNOT, C., BARDIN, S., CORVOL, P. & CLAUSER, E. (2002) Constitutive internalization of constitutively active agiotensin II AT(1A) receptor mutants is blocked by inverse agonists. *J Biol Chem.* **277**: 5891-901. MISSIAEN, L., ROBBERECHT, W., VAN DEN BOSCH, L., CALLEWAERT, G., PARYS, J.B., WUYTACK, F., RAEYMAEKERS, L., NILIUS, B., EGGERMONT, J. & DE SMEDT, H. (2000) Abnormal intracellular ca(2+)homeostasis and disease. *Cell Calcium.* **28**: 1-21. MIURA, S., FENG, Y.H., HUSAIN, A. & KARNIK, S.S. (1999) Role of aromaticity of agonist switches of angiotensin II in the activation of the AT1 receptor. *J Biol Chem.* **274**: 7103-10. MIURA, S., ZHANG, J. & KARNIK, S.S. (2000) Angiotensin II type 1 receptor-function affected by mutations in cytoplasmic loop CD. *FEBS Lett.* **470**: 331-5. MIURA, S., SAKU, K. & KARNIK, S.S. (2003a) Molecular analysis of the structure and function of the angiotensin II type 1 receptor. *Hypertens Res.* **26**: 937-43. MIURA, S., ZHANG, J., BOROS, J. & KARNIK, S.S. (2003b) TM2-TM7 interaction in coupling movement of transmembrane helices to activation of the angiotensin II type-1 receptor. *J Biol Chem.* **278**: 3720-5. MIXON, M.B., LEE, E., COLEMAN, D.E., BERGHUIS, A.M., GILMAN, A.G. & SPRANG, S.R. (1995) Tertiary and quaternary structural changes in Gi alpha 1 induced by GTP hydrolysis. *Science.* **270**: 954-60. MOLKENTIN, J.D. & DORN, I.G., 2ND. (2001) Cytoplasmic signaling pathways that regulate cardiac hypertrophy. *Annu Rev Physiol.* **63**: 391-426. MONCZOR, F., FERNANDEZ, N., LEGNAZZI, B.L., RIVEIRO, M.E., BALDI, A., SHAYO, C. & DAVIO, C. (2003) Tiotidine, a histamine H2 receptor inverse agonist that binds with high affinity to an inactive G-protein-coupled form of the receptor. Experimental support for the cubic ternary complex model. *Mol Pharmacol.* **64**: 512-20. MORISSET, S., ROULEAU, A., LIGNEAU, X., GBAHOU, F., TARDIVEL-LACOMBE, J., STARK, H., SCHUNACK, W., GANELLIN, C.R., SCHWARTZ, J.C. & ARRANG, J.M. (2000) High constitutive activity of native H3 receptors regulates histamine neurons in brain. *Nature.* **408**: 860-4. MOZSOLITS, H., UNABIA, S., AHMAD, A., MORTON, C.J., THOMAS, W.G. & AGUILAR, M.I. (2002) Electrostatic and hydrophobic forces tether the proximal region of the angiotensin II receptor (AT1A) carboxyl terminus to anionic lipids. *Biochemistry.* **41**: 7830-40. MUKHOPADHYAY, S., MCINTOSH, H.H., HOUSTON, D.B. & HOWLETT, A.C. (2000) The CB(1) cannabinoid receptor juxtamembrane C-terminal peptide confers activation to specific G proteins in brain. *Mol Pharmacol.* **57**: 162-70. MUNDELL, S.J. & BENOVIC, J.L. (2000) Selective regulation of endogenous G protein-coupled receptors by arrestins in HEK293 cells. *J Biol Chem.* **275**: 12900-8. MURASAWA, S., MATSUBARA, H., URAKAMI, M. & INADA, M. (1993) Regulatory elements that mediate expression of the gene for the angiotensin II type 1a receptor for the rat. 
*J Biol Chem.* **268**: 26996-7003. MURPHY, T.J., ALEXANDER, R.W., GRIENDLING, K.K., RUNGE, M.S. & BERNSTEIN, K.E. (1991) Isolation of a cDNA encoding the vascular type-1 angiotensin II receptor. *Nature.* **351**: 233-6. NAKAJIMA, M., MUKOYAMA, M., PRATT, R.E., HORIUCHI, M. & DZAU, V.J. (1993) Cloning of cDNA and analysis of the gene for mouse angiotensin II type 2 receptor. *Biochem Biophys Res Commun.* **197**: 393-9. NARUMIYA, S. (1996) The small GTPase Rho: cellular functions and signal transduction. *J Biochem (Tokyo).* **120**: 215-28. NERI SERNERI, G.G., BODDI, M., MODESTI, P.A., COPPO, M., CECIONI, I., TOSCANO, T., PAPA, M.L., BANDINELLI, M., LISI, G.F. & CHIAVARELLI, M. (2004) Cardiac angiotensin II participates in coronary microvessel inflammation of unstable angina and strengthens the immunomediated component. *Circ Res.* **94**: 1630-7. NEWTON, A.C. (2001) Protein kinase C: structural and spatial regulation by phosphorylation, cofactors, and macromolecular interactions. *Chem Rev.* **101**: 2353-64. NG, G.Y., GEORGE, S.R., ZASTAWNY, R.L., CARON, M., BOUVIER, M., DENNIS, M. & O'DOWD, B.F. (1993) Human serotonin1B receptor expression in Sf9 cells: phosphorylation, palmitoylation, and adenylyl cyclase inhibition. *Biochemistry.* **32**: 11727-33. NG, K.K. & VANE, J.R. (1967) Conversion of angiotensin I to angiotensin II. *Nature.* **216**: 762-6. NIEDERNBERG, A., BLAUKAT, A., SCHONEBERG, T. & KOSTENIS, E. (2003) Regulated and constitutive activation of specific signalling pathways by the human S1P5 receptor. *Br J Pharmacol.* **138**: 481-93. NIJENHUIS, W.A., OOSTEROM, J. & ADAN, R.A. (2001) AgRP(83-132) acts as an inverse agonist on the human-melanocortin-4 receptor. *Mol Endocrinol.* **15**: 164-71. NIMCHINSKY, E.A., HOF, P.R., JANSSEN, W.G., MORRISON, J.H. & SCHMAUSS, C. (1997) Expression of dopamine D3 receptor dimers and tetramers in brain and in transfected cells. *J Biol Chem.* **272**: 29229-37. NODA, K., FENG, Y.H., LIU, X.P., SAAD, Y., HUSAIN, A. & KARNIK, S.S. (1996) The active state of the AT1 angiotensin receptor is generated by angiotensin II induction. *Biochemistry.* **35**: 16435-42. OAKLEY, R.H., LAPORTE, S.A., HOLT, J.A., CARON, M.G. & BARAK, L.S. (2000) Differential affinities of visual arrestin, beta arrestin1, and beta arrestin2 for G protein-coupled receptors delineate two major classes of receptors. *J Biol Chem.* **275**: 17201-10. OBERDORF, J., WEBSTER, J.M., ZHU, C.C., LUO, S.G. & WOJCIKIEWICZ, R.J. (1999) Down-regulation of types I, II and III inositol 1,4,5-trisphosphate receptors is mediated by the ubiquitin/proteasome pathway. *Biochem J.* **339** (Pt 2): 453-61. O'BRIEN, P.J., MOLINO, M., KAHN, M. & BRASS, L.F. (2001) Protease activated receptors: theme and variations. *Oncogene.* **20**: 1570-81. OHYAMA, K., YAMANO, Y., CHAKI, S., KONDO, T. & INAGAMI, T. (1992) Domains for G-protein coupling in angiotensin II receptor type I: studies by site-directed mutagenesis. *Biochem Biophys Res Commun.* **189**: 677-83. OHYAMA, K., YAMANO, Y., SANO, T., NAKAGOMI, Y., HAMAKUBO, T., MORISHIMA, I. & INAGAMI, T. (1995) Disulfide bridges in extracellular domains of angiotensin II receptor type IA. *Regul Pept.* **57**: 141-7. OLLMANN, M.M., WILSON, B.D., YANG, Y.K., KERNS, J.A., CHEN, Y., GANTZ, I. & BARSH, G.S. (1997) Antagonism of central melanocortin receptors in vitro and in vivo by agouti-related protein. *Science.* **278**: 135-8. ONO, K., MANNAMI, T., BABA, S., YASUI, N., OGIHARA, T. & IWAI, N. 
(2003) Lack of association between angiotensin II type 1 receptor gene polymorphism and hypertension in Japanese. *Hypertens Res.* **26**: 131-4. PALCZEWSKI, K., KUMASAKA, T., HORI, T., BEHNKE, C.A., MOTOSHIMA, H., FOX, B.A., LE TRONG, I., TELLER, D.C., OKADA, T., STENKAMP, R.E., YAMAMOTO, M. & MIYANO, M. (2000) Crystal structure of rhodopsin: A G protein-coupled receptor. *Science.* **289**: 739-45. PARMA, J., DUPREZ, L., VAN SANDE, J., COCHAUX, P., GERVY, C., MOCKEL, J., DUMONT, J. & VASSART, G. (1993) Somatic mutations in the thyrotropin receptor gene cause hyperfunctioning thyroid adenomas. *Nature.* **365**: 649-51. PARNOT, C., BARDIN, S., MISEREY-LENKEL, S., GUEDIN, D., CORVOL, P. & CLAUSER, E. (2000) Systematic identification of mutations that constitutively activate the angiotensin II type 1A receptor by screening a randomly mutated cDNA library with an original pharmacological bioassay. *Proc Natl Acad Sci U S A.* **97**: 7615-20. PARNOT, C., MISEREY-LENKEL, S., BARDIN, S., CORVOL, P. & CLAUSER, E. (2002) Lessons from constitutively active mutants of G protein-coupled receptors. *Trends Endocrinol Metab.* **13**: 336-43. PARSONS, S.J. & PARSONS, J.T. (2004) Src family kinases, key regulators of signal transduction. *Oncogene.* **23**: 7906-9. PATTERSON, R.L., BOEHNING, D. & SNYDER, S.H. (2004) Inositol 1,4,5-trisphosphate receptors as signal integrators. *Annu Rev Biochem.* **73**: 437-65. PAUWELS, P.J. & WURCH, T. (1998) Review: amino acid domains involved in constitutive activation of G-protein-coupled receptors. *Mol Neurobiol.* **17**: 109-35. PEREZ, D.M., HWA, J., GAIVIN, R., MATHUR, M., BROWN, F. & GRAHAM, R.M. (1996) Constitutive activation of a single effector pathway: evidence for multiple activation states of a G protein-coupled receptor. *Mol Pharmacol.* **49**: 112-22. PERTWEE, R.G. (2005) Inverse agonism and neutral antagonism at cannabinoid CB1 receptors. *Life Sci.* **76**: 1307-24. PIERCE, K.L., LUTTRELL, L.M. & LEFKOWITZ, R.J. (2001) New mechanisms in heptahelical receptor signaling to mitogen activated protein kinase cascades. *Oncogene.* **20**: 1532-9. PIERCE, K.L., PREMONT, R.T. & LEFKOWITZ, R.J. (2002) Seven-transmembrane receptors. *Nat Rev Mol Cell Biol.* **3**: 639-50. POBINER, B.F., NORTHUP, J.K., BAUER, P.H., FRASER, E.D. & GARRISON, J.C. (1991) Inhibitory GTP-binding regulatory protein Gi3 can couple angiotensin II receptors to inhibition of adenylyl cyclase in hepatocytes. *Mol Pharmacol.* **40**: 156-67. POZVEK, G., HILTON, J.M., QUIZA, M., HOUSSAMI, S. & SEXTON, P.M. (1997) Structure/function relationships of calcitonin analogues as agonists, antagonists, or inverse agonists in a constitutively activated receptor cell system. *Mol Pharmacol.* **51**: 658-65. PRATHER, P.L. (2004) Inverse agonists: tools to reveal ligand-specific conformations of G protein-coupled receptors. *Sci STKE.* **2004**: pe1. PUSL, T. & NATHANSON, M.H. (2004) The role of inositol 1,4,5-trisphosphate receptors in the regulation of bile secretion in health and disease. *Biochem Biophys Res Commun.* **322**: 1318-25. RAMSAY, L.E. & YEO, W.W. (1995) ACE inhibitors, angiotensin II antagonists and cough. The Losartan Cough Study Group. *J Hum Hypertens.* **9 Suppl 5**: S51-4. REBOIS, R.V., WARNER, D.R. & BASI, N.S. (1997) Does subunit dissociation necessarily accompany the activation of all heterotrimeric G proteins? *Cell Signal.* **9**: 141-51. REBOIS, R.V. & HEBERT, T.E. (2003) Protein complexes involved in heptahelical receptor-mediated signal transduction. 
*Receptors Channels.* **9**: 169-94. REN, X.R., REITER, E., AHN, S., KIM, J., CHEN, W. & LEFKOWITZ, R.J. (2005) Different G protein-coupled receptor kinases govern G protein and beta-arrestin-mediated signaling of V2 vasopressin receptor. *Proc Natl Acad Sci U S A.* **102**: 1448-53. RIBEIRO, C.M., REECE, J. & PUTNEY, J.W., JR. (1997) Role of the cytoskeleton in calcium signaling in NIH 3T3 cells. An intact cytoskeleton is required for agonist-induced [Ca2+]i signaling, but not for capacitative calcium entry. *J Biol Chem.* **272**: 26555-61. RICHARD, D.E., VOURET-CRAVIARI, V. & POUYSSEGUR, J. (2001) Angiogenesis and G-protein-coupled receptors: signals that bridge the gap. *Oncogene.* **20**: 1556-62. RIOS, C.D., JORDAN, B.A., GOMES, I. & DEVI, L.A. (2001) G-protein-coupled receptor dimerization: modulation of receptor function. *Pharmacol Ther.* **92**: 71-87. ROBINSON, P.R., COHEN, G.B., ZHUKOVSKY, E.A. & OPRIAN, D.D. (1992) Constitutively active mutants of rhodopsin. *Neuron.* **9**: 719-25. ROGINSKAYA, M., CONNELLY, S.M., KIM, K.S., PATEL, D. & DUMONT, M.E. (2004) Effects of mutations in the N terminal region of the yeast G protein alpha-subunit Gpa1p on signaling by pheromone receptors. *Mol Genet Genomics.* **271**: 237-48. ROKA, F., BRYDON, L., WALDHOER, M., STROSBERG, A.D., FREISSMUTH, M., JOCKERS, R. & NANOFF, C. (1999) Tight association of the human Mel(1a)-melatonin receptor and G(i): precoupling and constitutive activity. *Mol Pharmacol.* **56**: 1014-24. ROSENKILDE, M.M., KLEDAL, T.N., BRAUNER-OSBORNE, H. & SCHWARTZ, T.W. (1999) Agonists and inverse agonists for the herpesvirus 8-encoded constitutively active seven-transmembrane oncogene product, ORF-74. *J Biol Chem.* **274**: 956-61. ROSENKILDE, M.M., WALDHOER, M., LUTTICHAU, H.R. & SCHWARTZ, T.W. (2001) Virally encoded 7TM receptors. *Oncogene.* **20**: 1582-93. ROSENTHAL, W., ANTARAMIAN, A., GILBERT, S. & BIRNBAUMER, M. (1993) Nephrogenic diabetes insipidus. A V2 vasopressin receptor unable to stimulate adenyllyl cyclase. *J Biol Chem.* **268**: 13030-3. ROTH, J. (2002) Protein N-glycosylation along the secretory pathway: relationship to organelle topography and function, protein quality control, and cell interactions. *Chem Rev.* **102**: 285-303. ROULEAU, A., LIGNEAU, X., TARDIVEL-LACOMBE, J., MORISSET, S., GBAHOU, F., SCHWARTZ, J.C. & ARRANG, J.M. (2002) Histamine H3-receptor-mediated [35S]GTP gamma[S] binding: evidence for constitutive activity of the recombinant and native rat and human H3 receptors. *Br J Pharmacol.* **135**: 383-92. RUBATTU, S., DI ANGELANTONIO, E., STANZIONE, R., ZANDA, B., EVANGELISTA, A., PIRISI, A., DE PAOLIS, P., COTA, L., BRUNETTI, E. & VOLPE, M. (2004) Gene polymorphisms of the renin-angiotensin-aldosterone system and the risk of ischemic stroke: a role of the A1166C/AT1 gene variant. *J Hypertens.* **22**: 2129-34. RYAN, U.S., RYAN, J.W., WHITAKER, C. & CHIU, A. (1976) Localization of angiotensin converting enzyme (kininase II). II. Immunocytochemistry and immunofluorescence. *Tissue Cell.* **8**: 125-45. SACHSE, R., SHAO, X.J., RICO, A., FINCKH, U., ROLFS, A., REINCKE, M. & HENSEN, J. (1997) Absence of angiotensin II type 1 receptor gene mutations in human adrenal tumors. *Eur J Endocrinol.* **137**: 262-6. SADEE, W., WANG, D. & BILSKY, E.J. (2005) Basal opioid receptor activity, neutral antagonists, and therapeutic opportunities. *Life Sci.* **76**: 1427-37. SALAHPOUR, A., ANGERS, S., MERCIER, J.F., LAGACE, M., MARULLO, S. & BOUVIER, M. 
(2004) Homodimerization of the beta2-adrenergic receptor as a prerequisite for cell surface targeting. *J Biol Chem.* **279**: 33390-7. SAMAMA, P., COTECCHIA, S., COSTA, T. & LEFKOWITZ, R.J. (1993) A mutation-induced activated state of the beta 2-adrenergic receptor. Extending the ternary complex model. *J Biol Chem.* **268**: 4625-36. SANO, T., OHYAMA, K., YAMANO, Y., NAKAGOMI, Y., NAKAZAWA, S., KIKYO, M., SHIRAI, H., BLANK, J.S., EXTON, J.H. & INAGAMI, T. (1997) A domain for G protein coupling in carboxyl-terminal tail of rat angiotensin II receptor type 1A. *J Biol Chem.* **272**: 23631-6. SASAKI, K., YAMANO, Y., BARDHAN, S., IWAI, N., MURRAY, J.J., HASEGAWA, M., MATSUDA, Y. & INAGAMI, T. (1991) Cloning and expression of a complementary DNA encoding a bovine adrenal angiotensin II type-1 receptor. *Nature.* **351**: 230-3. SASAMURA, H., HEIN, L., KRIEGER, J.E., PRATT, R.E., KOBILKA, B.K. & DZAU, V.J. (1992) Cloning, characterization, and expression of two angiotensin receptor (AT-1) isoforms from the mouse genome. *Biochem Biophys Res Commun.* **185**: 253-9. SAYESKI, P.P., ALI, M.S., SEMENIUK, D.J., DOAN, T.N. & BERNSTEIN, K.E. (1998) Angiotensin II signal transduction pathways. *Regul Pept.* **78**: 19-29. SCHER, A. & COTECCHIA, S. (1997) Constitutively active G protein-coupled receptors: potential mechanisms of receptor activation. *J Recept Signal Transduct Res.* 17: 57-73. SCHMIDT, S., BEIGE, J., WALLA-FRIEDEL, M., MICHEL, M.C., SHARMA, A.M. & RITZ, E. (1997) A polymorphism in the gene for the angiotensin II type 1 receptor is not associated with hypertension. *J Hypertens.* 15: 1385-8. SCHULZ, A., BRUNS, K., HENKLEIN, P., KRAUSE, G., SCHUBERT, M., GUDERMANN, T., WRAY, V., SCHULTZ, G. & SCHONEBERG, T. (2000) Requirement of specific intrahelical interactions for stabilizing the inactive conformation of glycoprotein hormone receptors. *J Biol Chem.* 275: 37860-9. SEIFERT, R. & WENZEL-SEIFERT, K. (2002) Constitutive activity of G-protein-coupled receptors: cause of disease and common property of wild-type receptors. *Naunyn Schmiedebergs Arch Pharmacol.* 366: 381-416. SENOGLES, S.E., SPIEGEL, A.M., PADRELL, E., IYENGAR, R. & CARON, M.G. (1990) Specificity of receptor-G protein interactions. Discrimination of Gi subtypes by the D2 dopamine receptor in a reconstituted system. *J Biol Chem.* 265: 4507-14. SERVANT, G., DUDLEY, D.T., ESCHER, E. & GUILLEMETTE, G. (1994) The marked disparity between the sizes of angiotensin type 2 receptors from different tissues is related to different degrees of N-glycosylation. *Mol Pharmacol.* 45: 1112-8. SETA, K., NANAMORI, M., MODRALL, J.G., NEUBIG, R.R. & SADOSHIMA, J. (2002) AT1 receptor mutant lacking heterotrimeric G protein coupling activates the Src-Ras-ERK pathway without nuclear translocation of ERKs. *J Biol Chem.* 277: 9268-77. SETA, K. & SADOSHIMA, J. (2003) Phosphorylation of tyrosine 319 of the angiotensin II type 1 receptor mediates angiotensin II-induced trans-activation of the epidermal growth factor receptor. *J Biol Chem.* 278: 9019-26. SHAH, B.H. & CATT, K.J. (2003) A central role of EGF receptor transactivation in angiotensin II -induced cardiac hypertrophy. *Trends Pharmacol Sci.* 24: 239-44. SHAH, B.H. & CATT, K.J. (2004) Matrix metalloproteinase-dependent EGF receptor activation in hypertension and left ventricular hypertrophy. *Trends Endocrinol Metab.* 15: 241-3. SHEIKH, S.P., ZVYAGA, T.A., LICHTARGE, O., SAKMAR, T.P. & BOURNE, H.R. 
(1996) Rhodopsin activation blocked by metal-ion-binding sites linking transmembrane helices C and F. *Nature.* 383: 347-50. SHEIKH, S.P., VILARDARGA, J.P., BARANSKI, T.J., LICHTARGE, O., IIRI, T., MENG, E.C., NISSENSON, R.A. & BOURNE, H.R. (1999) Similar structures and shared switch mechanisms of the beta2-adrenoceptor and the parathyroid hormone receptor. Zn(II) bridges between helices III and VI block activation. *J Biol Chem.* 274: 17033-41. SHENKER, A., LAUE, L., KOSUGI, S., MERENDINO, J.J., JR., MINEGISHI, T. & CUTLER, G.B., JR. (1993) A constitutively activating mutation of the luteinizing hormone receptor in familial male precocious puberty. *Nature.* 365: 652-4. SHENKER, A. (2002) Activating mutations of the lutropin choriogonadotropin receptor in precocious puberty. *Receptors Channels.* 8: 3-18. SHIBATA, T., SUZUKI, C., OHNISHI, J., MURAKAMI, K. & MIYAZAKI, H. (1996) Identification of regions in the human angiotensin II receptor type 1 responsible for Gi and Gq coupling by mutagenesis study. *Biochem Biophys Res Commun.* **218**: 383-9. SHIRAI, H., TAKAHASHI, K., KATADA, T. & INAGAMI, T. (1995) Mapping of G protein coupling sites of the angiotensin II type 1 receptor. *Hypertension.* **25**: 726-30. SIMON, M.I., STRATHMANN, M.P. & GAUTAM, N. (1991) Diversity of G proteins in signal transduction. *Science.* **252**: 802-8. SINGER, S.J. & NICOLSON, G.L. (1972) The fluid mosaic model of the structure of cell membranes. *Science.* **175**: 720-31. SIPMA, H., DEELMAN, L., SMEDT, H.D., MISSIAEN, L., PARYS, J.B., VANLINGEN, S., HENNING, R.H. & CASTEELS, R. (1998) Agonist-induced down-regulation of type 1 and type 3 inositol 1,4,5-trisphosphate receptors in A7r5 and DDT1 MF-2 smooth muscle cells. *Cell Calcium.* **23**: 11-21. SMIT, M.J., LEURS, R., ALEWIJNSE, A.E., BLAUW, J., VAN NIEUW AMERONGEN, G.P., VAN DE VREDE, Y., ROOVERS, E. & TIMMERMAN, H. (1996) Inverse agonism of histamine H2 antagonist accounts for upregulation of spontaneously active histamine H2 receptors. *Proc Natl Acad Sci USA.* **93**: 6802-7. SMIT, M.J., TIMMERMAN, H., VERZIJL, D. & LEURS, R. (2000) Viral-encoded G-protein coupled receptors: new targets for drug research? *Pharm Acta Helv.* **74**: 299-304. SMIT, M.J., VINK, C., VERZIJL, D., CASAROSA, P., BRUGGEMAN, C.A. & LEURS, R. (2003) Virally encoded G protein-coupled receptors: targets for potentially innovative anti-viral drug development. *Curr Drug Targets.* **4**: 431-41. SMITH, N.J., CHAN, H.W., OSBORNE, J.E., THOMAS, W.G. & HANNAN, R.D. (2004) Hijacking epidermal growth factor receptors by angiotensin II: new possibilities for understanding and treating cardiac hypertrophy. *Cell Mol Life Sci.* **61**: 2695-703. SPALDING, T.A., BURSTEIN, E.S., HENDERSON, S.C., DUCOTE, K.R. & BRANN, M.R. (1998) Identification of a ligand-dependent switch within a muscarinic receptor. *J Biol Chem.* **273**: 21563-8. SPASSOVA, M.A., SOBOLOFF, J., HE, L.P., HEWAVITHARANA, T., XU, W., VENKATACHALAM, K., VAN ROSSUM, D.B., PATTERSON, R.L. & GILL, D.L. (2004) Calcium entry mediated by SOCs and TRP channels: variations and enigma. *Biochim Biophys Acta.* **1742**: 9-20. SPIEGEL, A.M. (1996) Defects in G protein-coupled signal transduction in human disease. *Annu Rev Physiol.* **58**: 143-70. SPIEGEL, A.M. & WEINSTEIN, L.S. (2004) Inherited diseases involving g proteins and g protein-coupled receptors. *Annu Rev Med.* **55**: 27-39. SRINIVASAN, S., LUBRANO-BERTHELIER, C., GOVAERTS, C., PICARD, F., SANTIAGO, P., CONKLIN, B.R. & VAISSE, C. 
(2004) Constitutive activity of the melanocortin-4 receptor is maintained by its N-terminal domain and plays a role in energy homeostasis in humans. *J Clin Invest.* **114**: 1158-64. STANTON, A. (2003) Therapeutic potential of renin inhibitors in the management of cardiovascular disorders. *Am J Cardiovasc Drugs.* 3: 389-94. STRADER, C.D., FONG, T.M., TOTA, M.R., UNDERWOOD, D. & DIXON, R.A. (1994) Structure and function of G protein-coupled receptors. *Annu Rev Biochem.* 63: 101-32. STRANGE, P.G. (2002) Mechanisms of inverse agonism at G-protein-coupled receptors. *Trends Pharmacol Sci.* 23: 89-95. STYERS, M.L., SALAZAR, G., LOVE, R., PEDEN, A.A., KOWALCZYK, A.P. & FAUNDEZ, V. (2004) The endo-lysosomal sorting machinery interacts with the intermediate filament cytoskeleton. *Mol Biol Cell.* 15: 5369-82. SUGIMOTO, K., KATSUYA, T., OHKUBO, T., HOZAWA, A., YAMAMOTO, K., MATSUO, A., RAKUGI, H., TSUJI, I., IMAI, Y. & OGIHARA, T. (2004) Association between angiotensin II type 1 receptor gene polymorphism and essential hypertension: the Ohasama Study. *Hypertens Res.* 27: 551-6. SUGIYAMA, T., MATSUDA, Y. & MIKOSHIBA, K. (2000) Inositol 1,4,5-trisphosphate receptor associated with focal contact cytoskeletal proteins. *FEBS Lett.* 466: 29-34. SWYNGHEDAUW, B. (1999) Molecular mechanisms of myocardial remodeling. *Physiol Rev.* 79: 215-62. SZOMBATHY, T., SZALAI, C., KATALIN, B., PALICZ, T., ROMICS, L. & CSASZAR, A. (1998) Association of angiotensin II type 1 receptor polymorphism with resistant essential hypertension. *Clin Chim Acta.* 269: 91-100. TAKEUCHI, H., OIKE, M., PATERSON, H.F., ALLEN, V., KANEMATSU, T., ITO, Y., ERNEUX, C., KATAN, M. & HIRATA, M. (2000) Inhibition of Ca(2+) signalling by p130, a phospholipase-C-related catalytically inactive protein: critical role of the p130 pleckstrin homology domain. *Biochem J.* 349: 357-68. TAUNTON, J. (2001) Actin filament nucleation by endosomes, lysosomes and secretory vesicles. *Curr Opin Cell Biol.* 13: 85-91. TESMER, J.J., BERMAN, D.M., GILMAN, A.G. & SPRANG, S.R. (1997) Structure of RGS4 bound to AlF4--activated G(i alpha1): stabilization of the transition state for GTP hydrolysis. *Cell.* 89: 251-61. THEMMEN, A.P. & VERHOEF-POST, M. (2002) LH receptor defects. *Semin Reprod Med.* 20: 199-204. THERIAULT, C., ROCHDI, M.D. & PARENT, J.L. (2004) Role of the Rab11-associated intracellular pool of receptors formed by constitutive endocytosis of the beta isoform of the thromboxane A2 receptor (TP beta). *Biochemistry.* 43: 5600-7. THOMAS, W.G., THEKKUMKARA, T.J., MOTEL, T.J. & BAKER, K.M. (1995) Stable expression of a truncated AT1A receptor in CHO-K1 cells. The carboxyl-terminal region directs agonist-induced internalization but not receptor signaling or desensitization. *J Biol Chem.* 270: 207-13. THOMAS, W.G., THEKKUMKARA, T.J. & BAKER, K.M. (1996) Molecular mechanisms of angiotensin II (AT1A) receptor endocytosis. *Clin Exp Pharmacol Physiol Suppl.* 3: S74-80. THOMAS, W.G., MOTEL, T.J., KULE, C.E., KAROOR, V. & BAKER, K.M. (1998) Phosphorylation of the angiotensin II (AT1A) receptor carboxyl terminus: a role in receptor endocytosis. *Mol Endocrinol.* **12**: 1513-24. THOMAS, W.G., QIAN, H., CHANG, C.S. & KARNIK, S. (2000) Agonist-induced phosphorylation of the angiotensin II (AT(1A)) receptor requires generation of a conformation that is distinct from the inositol phosphate-signaling state. *J Biol Chem.* **275**: 2893-900. THOMAS, W.G., BRANDENBURGER, Y., AUTELITANO, D.J., PHAM, T., QIAN, H. & HANNAN, R.D. 
(2002) Adenoviral-directed expression of the type 1A angiotensin receptor promotes cardiomyocyte hypertrophy via transactivation of the epidermal growth factor receptor. *Circ Res.* **90**: 135-42. THOMAS, W.G. & QIAN, H. (2003) Arresting angiotensin type 1 receptors. *Trends Endocrinol Metab.* **14**: 130-6. THOMAS, W.G., QIAN, H. & SMITH, N.J. (2004) When 6 is 9: 'uncoupled' AT1 receptors turn signalling on its head. *Cell Mol Life Sci.* **61**: 2687-94. TIAN, W.N., DUZIC, E., LANIER, S.M. & DETH, R.C. (1994) Determinants of alpha 2-adrenergic receptor activation of G proteins: evidence for a precoupled receptor/G protein state. *Mol Pharmacol.* **45**: 524-31. TIBERI, M. & CARON, M.G. (1994) High agonist-independent activity is a distinguishing feature of the dopamine D1B receptor subtype. *J Biol Chem.* **269**: 27925-31. TOUYZ, R.M. & BERRY, C. (2002) Recent advances in angiotensin II signaling. *Braz J Med Biol Res.* **35**: 1001-15. TU, J.C., XIAO, B., YUAN, J.P., LANAHAN, A.A., LEOFFERT, K., LI, M., LINDEN, D.J. & WORLEY, P.F. (1998) Homer binds a novel proline-rich motif and links group 1 metabotropic glutamate receptors with IP3 receptors. *Neuron.* **21**: 717-26. TUFRO-MCREDDIE, A., ROMANO, L.M., HARRIS, J.M., FERDER, L. & GOMEZ, R.A. (1995) Angiotensin II regulates nephrogenesis and renal vascular development. *Am J Physiol.* **269**: F110-5. UCHIYAMA, T., YOSHIKAWA, F., HISHIDA, A., FURUICHI, T. & MIKOSHIBA, K. (2002) A novel recombinant hyperaffinity inositol 1,4,5-trisphosphate (IP(3)) absorbent traps IP(3), resulting in specific inhibition of IP(3)-mediated calcium signaling. *J Biol Chem.* **277**: 8106-13. ULLOA-AGUIRRE, A., JANOVICK, J.A., BROTHERS, S.P. & CONN, P.M. (2004) Pharmacologic rescue of conformationally-defective proteins: implications for the treatment of human disease. *Traffic.* **5**: 821-37. UNGER, V.M., HARGRAVE, P.A., BALDWIN, J.M. & SCHERTLER, G.F. (1997) Arrangement of rhodopsin transmembrane alpha-helices. *Nature.* **389**: 203-6. USHIO-FUKAI, M., ALEXANDER, R.W., AKERS, M., YIN, Q., FUJIO, Y., WALSH, K. & GRIENDLING, K.K. (1999) Reactive oxygen species mediate the activation of Akt/protein kinase B by angiotensin II in vascular smooth muscle cells. *J Biol Chem.* **274**: 22699-704. VAN SANDE, J., SWILLENS, S., GERARD, C., ALLGEIER, A., MASSART, C., VASSART, G. & DUMONT, J.E. (1995) In Chinese hamster ovary K1 cells dog and human thyrotropin receptors activate both the cyclic AMP and the phosphatidylinositol 4,5-bisphosphate cascades in the presence of thyrotropin and the cyclic AMP cascade in its absence. *Eur J Biochem.* **229**: 338-43. VAUGHAN, D.E. (2000) AT(1) receptor blockade and atherosclerosis: hopeful insights into vascular protection. *Circulation.* **101**: 1496-7. VAUGHAN, M. (1998) Signaling by heterotrimeric G proteins minireview series. *J Biol Chem.* **273**: 667-8. VEREB, G., SZOLLOSI, J., MATKO, J., NAGY, P., FARKAS, T., VIGH, L., MATYUS, L., WALDMANN, T.A. & DAMJANOVICH, S. (2003) Dynamic, yet structured: The cell membrane three decades after the Singer-Nicolson model. *Proc Natl Acad Sci USA.* **100**: 8053-8. VOGT, S., GROSSE, R., SCHULTZ, G. & OFFERMANNS, S. (2003) Receptor-dependent RhoA activation in G12/G13-deficient cells: genetic evidence for an involvement of Gq/G11. *J Biol Chem.* **278**: 28743-9. WADE, S.M., LAN, K., MOORE, D.J. & NEUBIG, R.R. (2001) Inverse agonist activity at the alpha(2A)-adrenergic receptor. *Mol Pharmacol.* **59**: 532-42. WALKER, D.S., LY, S., LOCKWOOD, K.C. & BAYLIS, H.A. 
(2002) A direct interaction between IP(3) receptors and myosin II regulates IP(3) signaling in C. elegans. *Curr Biol.* **12**: 951-6. WANG, C., JAYADEV, S. & ESCOBEDO, J.A. (1995) Identification of a domain in the angiotensin II type 1 receptor determining Gq coupling by the use of receptor chimeras. *J Biol Chem.* **270**: 16677-82. WANG, D., SADEE, W. & QUILLAN, J.M. (1999) Calmodulin binding to G protein-coupling domain of opioid receptors. *J Biol Chem.* **274**: 22081-8. WEBER, K.T., SUN, Y. & CAMPBELL, S.E. (1995) Structural remodelling of the heart by fibrous tissue: role of circulating hormones and locally produced peptides. *Eur Heart J.* **16 Suppl** N: 12-8. WEI, H., AHN, S., SHENOY, S.K., KARNIK, S.S., HUNYADY, L., LUTTRELL, L.M. & LEFKOWITZ, R.J. (2003) Independent beta-arrestin 2 and G protein-mediated pathways for angiotensin II activation of extracellular signal-regulated kinases 1 and 2. *Proc Natl Acad Sci USA.* **100**: 10782-7. WEI, H., AHN, S., BARNES, W.G. & LEFKOWITZ, R.J. (2004) Stable interaction between beta-arrestin 2 and angiotensin type 1A receptor is required for beta-arrestin 2-mediated activation of extracellular signal-regulated kinases 1 and 2. *J Biol Chem.* **279**: 48255-61. WEISS, J.M., MORGAN, P.H., LUTZ, M.W. & KENAKIN, T.P. (1996) The cubic ternary complex receptor-occupancy model. I. model description. *J Theor Biol.* **178**: 151-67. WELSBY, P.J., CARR, I.C., WILKINSON, G. & MILLIGAN, G. (2002) Regulation of the avidity of ternary complexes containing the human 5-HT(1A) receptor by mutation of a receptor contact site on the interacting G protein alpha subunit. *Br J Pharmacol.* **137**: 345-52. WESS, J. (1998) Molecular basis of receptor/G-protein-coupling selectivity. *Pharmacol Ther.* **80**: 231-64. WESTPHAL, R.S. & SANDERS-BUSH, E. (1996) Differences in agonist-independent and -dependent 5-hydroxytryptamine2C receptor-mediated cell division. *Mol Pharmacol.* **49**: 474-80. WHITEBREAD, S., MELE, M., KAMBER, B. & DE GASPARO, M. (1989) Preliminary biochemical characterization of two angiotensin II receptor subtypes. *Biochem Biophys Res Commun.* 163: 284-91. WIELAND, K., BONGERS, G., YAMAMOTO, Y., HASHIMOTO, T., YAMATODANI, A., MENGE, W.M., TIMMERMAN, H., LOVENBERG, T.W. & LEURS, R. (2001) Constitutive activity of histamine h(3) receptors stably expressed in SK-N-MC cells: display of agonism and inverse agonism by H(3) antagonists. *J Pharmacol Exp Ther.* 299: 908-14. WILLARS, G.B., ROYALL, J.E., NAHORSKI, S.R., EL-GEHANI, F., EVEREST, H. & MCARDLE, C.A. (2001) Rapid down-regulation of the type I inositol 1,4,5-trisphosphate receptor and desensitization of gonadotropin-releasing hormone-mediated Ca2+ responses in alpha T3-1 gonadotropes. *J Biol Chem.* 276: 3123-9. WOJCIKIEWICZ, R.J. & NAHORSKI, S.R. (1991) Chronic muscarinic stimulation of SH-SY5Y neuroblastoma cells suppresses inositol 1,4,5-trisphosphate action. Parallel inhibition of inositol 1,4,5-trisphosphate-induced Ca2+ mobilization and inositol 1,4,5-trisphosphate binding. *J Biol Chem.* 266: 22234-41. WOJCIKIEWICZ, R.J., FURUICHI, T., NAKADE, S., MIKOSHIBA, K. & NAHORSKI, S.R. (1994) Muscarinic receptor activation down-regulates the type I inositol 1,4,5-trisphosphate receptor by accelerating its degradation. *J Biol Chem.* 269: 7963-9. WU, G., BENOVIC, J.L., HILDEBRANDT, J.D. & LANIER, S.M. (1998) Receptor docking sites for G-protein betagamma subunits. Implications for signal regulation. *J Biol Chem.* 273: 7197-200. 
XIAO, B., TU, J.C., PETRALIA, R.S., YUAN, J.P., DOAN, A., BREDER, C.D., RUGGIERO, A., LANAHAN, A.A., WENTHOLD, R.J. & WORLEY, P.F. (1998) Homer regulates the association of group 1 metabotropic glutamate receptors with multivalent complexes of homer-related, synaptic proteins. *Neuron.* 21: 707-16. XIE, Z., LEE, S.P., O'DOWD, B.F. & GEORGE, S.R. (1999) Serotonin 5-HT1B and 5-HT1D receptors form homodimers when expressed alone and heterodimers when co-expressed. *FEBS Lett.* 456: 63-7. YE, M.Q. & HEALY, D.P. (1992) Characterization of an angiotensin type-1 receptor partial cDNA from rat kidney: evidence for a novel AT1B receptor subtype. *Biochem Biophys Res Commun.* 185: 204-10. YOSHII, H., KURIYAMA, S., KAWATA, M., YOSHII, J., IKENAKA, Y., NOGUCHI, R., NAKATANI, T., TSUJINOUE, H. & FUKUI, H. (2001) The angiotensin-I-converting enzyme inhibitor perindopril suppresses tumor growth and angiogenesis: possible role of the vascular endothelial growth factor. *Clin Cancer Res.* 7: 1073-8. YU, B., GU, L. & SIMON, M.I. (2000) Inhibition of subsets of G protein-coupled receptors by empty mutants of G protein alpha subunits in g(o), G(11), and G(16). *J Biol Chem.* 275: 71-6. ZAMAH, A.M., DELAHUNTY, M., LUTTRELL, L.M. & LEFKOWITZ, R.J. (2002) Protein kinase A-mediated phosphorylation of the beta 2-adrenergic receptor regulates its coupling to Gs and Gi. Demonstration in a reconstituted system. *J Biol Chem.* 277: 31249-56. ZAWARYNSKI, P., TALLERICO, T., SEEMAN, P., LEE, S.P., O'DOWD, B.F. & GEORGE, S.R. (1998) Dopamine D2 receptor dimers in human and rat brain. *FEBS Lett.* 441: 383-6. ZHANG, W.B., NAVENOT, J.M., HARIBABU, B., TAMAMURA, H., HIRAMATU, K., OMAGARI, A., PEI, G., MANFREDI, J.P., FUJII, N., BROACH, J.R. & PEIPER, S.C. (2002) A point mutation that confers constitutive activity to CXCR4 reveals that T140 is an inverse agonist and that AMD3100 and ALX40-4C are weak partial agonists. *J Biol Chem.* 277: 24515-21. ZHU, C.C., FURUICHI, T., MIKOSHIBA, K. & WOJCIKIEWICZ, R.J. (1999) Inositol 1,4,5-trisphosphate receptor down-regulation is activated directly by inositol 1,4,5-trisphosphate binding. Studies with binding-defective mutant receptors. *J Biol Chem.* 274: 3476-84. ZHU, C.C. & WOJCIKIEWICZ, R.J. (2000) Ligand binding directly stimulates ubiquitination of the inositol 1,4,5-trisphosphate receptor. *Biochem J.* 348 Pt 3: 551-6. ZUSCIK, M.J., PORTER, J.E., GAIVIN, R. & PEREZ, D.M. (1998) Identification of a conserved switch residue responsible for selective constitutive activation of the beta2-adrenergic receptor. *J Biol Chem.* 273: 3401-7.
Nanoscale gold pillars strengthened through dislocation starvation

Julia R. Greer and William D. Nix
Department of Materials Science and Engineering, Stanford University, 416 Escondido Mall, Stanford, California 94305, USA
(Received 21 April 2006; published 12 June 2006)

It has been known for more than half a century that crystals can be made stronger by introducing defects into them, i.e., by strain-hardening. As the number of defects increases, their movement and multiplication are impeded, thus strengthening the material. In the present work we show hardening by dislocation starvation, a fundamentally different strengthening mechanism based on the elimination of defects from the crystal. We demonstrate that submicrometer sized gold crystals can be 50 times stronger than their bulk counterparts due to the elimination of defects from the crystal in the course of deformation.

DOI: 10.1103/PhysRevB.73.245410 PACS number(s): 62.25.+g, 81.07.Bc, 81.16.Rf, 81.70.Bt

Anyone who has ever repeatedly bent a copper wire knows that it gets progressively stronger as it becomes more deformed, through a phenomenon called strain-hardening. The strengths of cold-worked metals are known to be up to ten times greater than those of well-annealed crystals. Plasticity in metals occurs by the motion of dislocations, or line defects, which multiply in the course of plastic deformation. Impeding the motion of dislocations by introducing defects into crystals results in strengthening. Although these fundamental concepts are often assumed to be applicable to crystals of any dimensions, numerous recent studies have shown that conventional plasticity breaks down below a certain length scale, with smaller samples reported to be stronger than their bulk counterparts.\textsuperscript{1–6} Pure metals and some alloys exhibit strong size effects at the submicron scale.\textsuperscript{1–13} Size effects in indentation, torsion, and bending have been understood in terms of the nonuniformity of the deformation, which sets up strain gradients leading to hardening.\textsuperscript{7} Size effects are also found in thin films, where the strength scales inversely with film thickness; this strengthening is usually attributed to the confinement of dislocations by the substrate.\textsuperscript{8–10} Size effects are observed for pristine crystals as well.\textsuperscript{11,12} In the earliest stages of nanoindentation, for example, the stressed crystal volume is extremely small and can be dislocation-free, requiring very large stresses to nucleate new dislocations. In addition, classic experiments on initially dislocation-free metal whiskers indicated that whiskers with smaller diameters yielded at higher stresses.\textsuperscript{13} In typical whiskerlike deformation behavior, initial elastic loading leads to a very high stress, followed by a significant drop and continued plastic flow at low stresses. Finally, several molecular dynamics simulations\textsuperscript{14–16} and more recent experiments on small pillars\textsuperscript{17,18} all support the tenet that smaller is stronger. In spite of much progress in size-effects research, there is still no unified theory for plastic deformation at the submicron scale. We focus on size effects arising in unconstrained geometries, in the absence of strain gradients, and with nonzero initial dislocation densities. Gold nanopillars ranging in diameter from 200 nm to several micrometers were fabricated using focused ion beam (FIB) machining and a microlithography/electroplating technique.
These small pillars are found to plastically deform in uniaxial compression at stresses as high as 800 MPa, a value $\sim$50 times higher than for bulk gold. We believe that these high strengths are controlled by the process of hardening by dislocation starvation. In this mechanism, once the sample is small enough, the mobile dislocations have a higher probability of annihilating at a nearby free surface than of multiplying and being pinned by other dislocations. When the starvation conditions are met, plasticity is accommodated by the nucleation and motion of new dislocations rather than by the motion and interaction of existing dislocations, as is the case for bulk crystals. The primary pillar fabrication method utilizes FIB machining to etch patterns of interest into single-crystal gold. Following the approach developed by Uchic \textit{et al.}\textsuperscript{17} and used by Greer \textit{et al.},\textsuperscript{18} the present work extends this fabrication technique to much smaller nanopillars (Fig. 1). We have also developed a uniaxial testing technique for studying the mechanical properties of these tiny samples. Uniaxial compression tests on pillars of varying sizes and aspect ratios were conducted using the MTS Nanoindenter\textsuperscript{XP} with a flat punch diamond tip. The tip was custom-machined from a standard Berkovich indenter by etching away its apex in the FIB; the resulting projected area is an equilateral triangle with an inscribed-circle diameter of 9 $\mu$m. Unlike previously reported uniaxial compression studies, the loading mechanism here is nominally displacement-rate controlled rather than load-controlled.\textsuperscript{18} The “nominal displacement rate” refers to the variable in the software method specified by the user. We wrote this method to simulate a constant displacement rate, and therefore a nearly constant strain rate. The method utilizes a PID-based feedback loop to control the displacement rate throughout the data acquisition, enabling the instrument to quickly remove the imposed load during a discrete strain burst. Unfortunately, despite this approach, the Nanoindenter, which is inherently a load-controlled instrument, is not capable of responding to every slip event quickly enough. Therefore, the strain rate during the plastic portions of the stress-strain curve cannot be fully controlled. Load-displacement data were collected in the continuous stiffness measurement (CSM) mode of the instrument. Once the surface of the pillar has been detected, parameters such as load, harmonic contact stiffness, and compressive displacement of the top of the pillar are continuously measured and recorded. The load-displacement data obtained during the compression experiments are then converted to uniaxial stresses and strains under the assumption that the plastic volume is conserved throughout this mostly homogeneous deformation. Stress-strain curves of FIB pillars whose diameters range between 290 nm and 7450 nm, as well as the strength of bulk gold at 2% strain, are presented in Fig. 2(a). Uniaxial loading in the ⟨001⟩ direction, chosen for our experiments because it corresponds to a high-symmetry orientation, results in the activation of 12 different {111}/⟨011⟩ slip systems, with the pillar deforming uniformly around its diameter as it is compressed. In this orientation, despite the presence of the end constraints, the pillar remains centrally loaded and preserves its cylindrical shape throughout the deformation process.
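To make the stress-strain conversion concrete, here is a minimal sketch (our own illustration; the function, variable names, and example numbers are hypothetical, not the instrument software or measured data). Volume conservation, $AL = A_0 L_0$, gives a true stress $\sigma = PL/(A_0 L_0)$ and a true strain $\varepsilon = \ln(L_0/L)$ for a load $P$ and instantaneous length $L = L_0 - u$:

```python
import numpy as np

def true_stress_strain(P, u, L0, d0):
    """Convert load P (N) and compressive displacement u (m) to true
    stress/strain, assuming plastic volume conservation (A * L = A0 * L0).
    Illustrative only; L0 and d0 are the pillar's initial length and diameter."""
    A0 = np.pi * (d0 / 2) ** 2   # initial cross-sectional area
    L = L0 - u                   # instantaneous pillar length
    A = A0 * L0 / L              # area grows as the pillar shortens
    sigma = P / A                # true (Cauchy) stress
    eps = np.log(L0 / L)         # true compressive strain
    return sigma, eps

# Example: hypothetical 400 nm diameter, 1.2 um tall pillar at 0.1 mN load
sigma, eps = true_stress_strain(P=1e-4, u=60e-9, L0=1.2e-6, d0=400e-9)
print(f"stress = {sigma/1e6:.0f} MPa, strain = {eps:.3f}")
```

For these assumed inputs the sketch returns a stress of several hundred MPa at roughly 5% strain, i.e., values of the same magnitude as the curves in Fig. 2(a).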
Each curve represents a single test on a pillar of a specific diameter, measured at approximately $L/3$ below the pillar top. The smallest pillar reaches a compressive stress of 800 MPa at 10% strain. While in some cases the initial stages of deformation are not purely elastic due to the gradual onset of yielding, the initial loading slopes of the well-aligned tests give elastic moduli very close to the Young’s modulus of gold in the ⟨001⟩ direction, 43 GPa. The fully elastic unloading slopes closely match the expected value as well. Another important aspect of these stress-strain curves is the lack of the stage II work-hardening associated with the activation of multiple slip systems in the course of deformation of single crystals. In a typical single crystal, dislocations from different active slip planes interact with each other and form sessile dislocations, creating a large number of barriers to the movement of other dislocations and thereby requiring ever-higher stresses as the strain increases. A typical work-hardening slope is on the order of $\mu/20$, where $\mu$ is the elastic shear modulus. This behavior is clearly present in the compression of the largest pillar, whose diameter is a little over 7 $\mu$m, as shown in Fig. 3(a). In contrast, the stress-strain behavior observed here agrees more with stage I-type deformation, the “easy glide” section of the deformation curve of a single crystal in a low-symmetry orientation. Moreover, a representative stress-strain curve in this work is composed of a number of discrete slip events separated by elastic loading segments, while the overall stress level remains nearly constant as the strain increases, as shown in Fig. 3(b). This suggests that the hardening mechanism here is the opposite of conventional strain-hardening, with the elastic loading sections indicative of the absence of dislocations rather than of their multiplication. One of the major concerns with the FIB fabrication technique is the possibility of Ga$^+$ ion implantation into the sample, which could itself be the cause of the observed increase in strength. Several approaches were used to address this issue. First, Auger depth profiling analysis with subsequent surface layer removal was employed. The initial concentration of Ga was 1.7 at. % on top of the pillar and 0.8 at. % on its side. The “side” of the pillar here refers to the middle third of the specimen, since in that area the deformation is closest to being homogeneous. The conformal surface layer was removed by etching the rotating pillar in a low-energy Ar$^+$ plasma. Depth profiling and surface etching were repeated to assess the ever-decreasing concentration of Ga. The final etch step resulted in the total removal of 5 nm from the pillar surface, reducing the overall Ga concentration by $\sim 50\%$ and the gallium-to-gold ratio from 0.079 to 0.016. This significant change in the gallium-to-gold ratio indicates that the Ga ions were located near the surface rather than implanted deep into the sample. These cleaned pillars were subsequently tested in compression, which indicated that surface removal had little effect on strength. The full Auger analysis can be found in Ref. 19(a). To further investigate the possible effects of Ga$^+$ ion implantation, we developed an alternative fabrication technique based on lithography/electroplating.\textsuperscript{18} The electroplated pillars were annealed at 300 °C before testing to establish a coarse grain size.
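For scale, a back-of-the-envelope estimate (ours, assuming a shear modulus of $\mu \approx 27$ GPa for gold) of the stage II slope quoted above:

\[
\theta_{\mathrm{II}} \approx \frac{\mu}{20} \approx \frac{27\ \mathrm{GPa}}{20} \approx 1.4\ \mathrm{GPa},
\]

so a conventionally hardening crystal would gain roughly $0.10 \times 1.4\ \mathrm{GPa} \approx 140$ MPa of flow stress over 10% strain, whereas the small pillars instead flow at a nearly constant stress level between discrete bursts.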
While these pillars are polycrystalline, they contain only 2–3 grains extending across the pillar width. To complete the analysis of possible surface modification, some FIB-machined pillars were also annealed before testing; the combined results (Fig. 4) indicate that even if some Ga is present on the surface of the pillars, it is not a major contributing factor in the strength increase.

FIG. 2. (a) Stress-strain behavior of ⟨001⟩-oriented pillars: flow stresses increase significantly as the pillar diameter is reduced. (b) SEM image of a compressed pillar after deformation. Slip lines in multiple orientations are clearly present and indicate a homogeneous shape change.

FIG. 3. (a) Stage II work-hardening is clearly present in the stress vs strain curve for the largest pillar, whose diameter is 7.45 μm. (b) The lack of stage II work-hardening is evident in the stress vs strain curve for a small pillar whose diameter is 400 nm.

FIG. 4. Flow stress vs pillar diameter for all FIB, electroplated, annealed, and Ar plasma-treated pillars as compared to a range of theoretical strengths and the yield strength of bulk gold.

One possible explanation for these high strengths can be developed using the concept of dislocation starvation. In ordinary plasticity, dislocation motion leads to dislocation multiplication by double cross-slip, invariably leading to softening before strain hardening occurs through the elastic interaction of dislocations. Unlike in bulk samples, dislocations in submicron-sized crystals can travel only very small distances before annihilating at free surfaces, thereby reducing the overall dislocation multiplication rate. Gliding dislocations leave the crystal more rapidly than they multiply, decreasing the overall dislocation density. Such processes would lead to a dislocation-starved state requiring very high stresses to nucleate new dislocations. Our phenomenological model describes this hypothesis by calculating the evolving dislocation density in the course of deformation. All equations and relevant details of this model, as well as its predictions, which clearly show the difference in deformation mechanism between a relatively large and a relatively small pillar, are given in Ref. 19(b). These theoretical results also agree well with the experimental data shown in Fig. 3. The dislocation starvation model we give here focuses only on the starvation process and does not incorporate any possible dislocation nucleation events. The modeling simply shows that for crystals below a critical size the dislocation density should decline in the course of deformation and tend toward a dislocation-starved state. When such a state is reached, the stress would be expected to rise abruptly, subsequently leading to the nucleation of new dislocations at ever-higher stresses. This increase in the dislocation nucleation stresses may be attributed to the scarcity of sources in smaller pillars. An in-depth computational dislocation dynamics analysis is required to fully assess the dislocation behavior in a confined-geometry sample under the described conditions. Some promising attempts have already been made to create a model explaining the presence of a characteristic length scale in mechanical deformation.\textsuperscript{14–16} Transmission electron microscopy (TEM) observations of pillar cross sections before and after deformation were made in an effort to check one of the main predictions of the dislocation starvation theory. Due to the very small dimensions of the pillar specimen, conventional TEM preparation techniques could not be utilized.
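The flavor of such a density-evolution calculation can be conveyed with a toy model (our own sketch, not the model of Ref. 19(b)). Using the Orowan relation $d\gamma = \rho b\, d\bar{x}$ and assuming each mobile dislocation multiplies once per breeding distance $\ell$ traveled and annihilates after a mean free path set by the pillar diameter $D$, the density evolves as $d\rho/d\gamma = (1/b)(1/\ell - 1/D)$, declining with strain whenever $D < \ell$:

```python
import numpy as np

# Toy dislocation-starvation model (ours, not the model of Ref. 19(b)):
#   d(rho)/d(gamma) = (1/b) * (1/ell - 1/D)
# with multiplication over an assumed breeding distance ell and
# annihilation at the surface after a mean free path ~ pillar diameter D.
b = 0.29e-9      # Burgers vector of Au (m)
ell = 1.0e-6     # assumed breeding distance (m)
rho0 = 1.0e13    # assumed initial dislocation density (1/m^2)

for D in (7.45e-6, 0.4e-6):          # large vs small pillar diameter
    gamma = np.linspace(0.0, 0.1, 201)
    rho = rho0 + (1.0 / b) * (1.0 / ell - 1.0 / D) * gamma
    rho = np.clip(rho, 0.0, None)    # density cannot go negative
    print(f"D = {D*1e6:4.2f} um: rho(10% strain) = {rho[-1]:.2e} 1/m^2")
```

With these assumed parameters the 7.45 μm pillar accumulates density with strain, while the 400 nm pillar empties out well before 10% strain, mirroring the qualitative contrast in Fig. 3.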
The preparation of the electron-transparent samples involved the use of the Omniprobe lift-out capability of the focused ion beam. Since the pillar was exposed to the Ga\textsuperscript{+} ion beam throughout this sample preparation process, several attempts were made to clean the surface of the TEM samples in the precision ion polishing system (PIPS). The Gatan PIPS ion mill is equipped with two low-energy Ar\textsuperscript{+} plasma beams whose tilt positions can be adjusted to any value between 0° and 10°. While this method of surface preparation is usually very effective, it did not prove to be so in this case due to the slight misalignment between the pillar and the TEM grid. This misalignment leads to an error in the calculation of the plasma beam angles, resulting in either milling away a part of the pillar or not removing any damage. TEM images of a deformed pillar cross section along the [110] zone axis are shown in Fig. 5. The pillar was compressed in the [001] direction, which is also shown in the figure. In the TEM image, two dislocation lines which are clearly present for the [111] and [220] diffraction vectors disappear for the g=[002] case, consistent with the $\mathbf{g} \cdot \mathbf{b} = 0$ invisibility criterion. TEM images were taken at two different zone axes, and two sets of invisibility conditions were determined in order to calculate the Burgers vector of the featured dislocations. Based on this analysis, the Burgers vector of both dislocations visible in the TEM images is [1\bar{1}0], as shown in Fig. 5. As expected, this Burgers vector is perpendicular to the loading axis of the pillar, [001]. The transmission electron microscopy images of the deformed pillars are found to be devoid of any mobile dislocations and show only two dislocations whose Burgers vectors are perpendicular to the loading axis and thus not driven by the applied stress. This supports the argument that all of the mobile dislocations escaped from the crystal, leaving it with only these two immobile dislocations, which must have formed through a reaction of previously present mobile dislocations. TEM sample preparation inevitably leaves residual FIB damage on the specimen surface. For example, the surface layer on the 80 nm thick TEM foil is approximately 5 nm thick and manifests itself as a collection of small dislocation loops. The appearance of these loops in the TEM images for both the undeformed and deformed cases indicates that they are most likely due to FIB damage during sample preparation and are not produced by deformation. The existence of these dislocations on the surface of the TEM samples raises the question of whether they are also present on the original pillar surfaces, where they might hinder the motion of gliding dislocations as these leave the crystal. It should be noted, however, that in the course of TEM sample preparation the foil is subjected to higher Ga$^+$ ion doses than during the actual pillar fabrication, due to the nature of the incident ion beam angles. TEM sample preparation requires varying the angle between the sample and the ion beam, resulting in more surface damage than during the FIB pillar preparation, where the ion beam is strictly orthogonal to the pillar top and parallel to the sides of the pillar. Therefore, the electron-transparent samples would be expected to contain significantly more dislocation loops than the surface of a FIB-machined pillar. Nonetheless, the possible presence of some FIB damage on the surfaces of the test samples is a complication that requires further study.
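As an illustration of how the invisibility criterion pins down the Burgers vector, the short check below evaluates $\mathbf{g} \cdot \mathbf{b}$ for $\mathbf{b} = [1\bar{1}0]$. It is our own sketch, and the particular sign variants of the reflections are our choice: in a [110] zone axis every usable $\mathbf{g}$ must satisfy $\mathbf{g} \cdot [110] = 0$, so the reflections must be of the $\bar{1}11$, $\bar{2}20$, and $002$ type.

```python
import numpy as np

# Illustration (ours) of the g.b = 0 invisibility criterion used to
# identify the Burgers vector; overbars are written as minus signs.
b = np.array([1, -1, 0])                  # candidate Burgers vector [1-10]
reflections = {"(-111)": np.array([-1, 1, 1]),
               "(-220)": np.array([-2, 2, 0]),
               "(002)":  np.array([0, 0, 2])}

for name, g in reflections.items():
    assert g @ np.array([1, 1, 0]) == 0   # sanity check: g lies in the [110] zone
    gb = g @ b
    print(f"g = {name}: g.b = {gb:+d} -> {'invisible' if gb == 0 else 'visible'}")
```

The dislocation is predicted to be visible in the $\bar{1}11$- and $\bar{2}20$-type reflections and invisible for $\mathbf{g} = 002$, matching the reported observations.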
According to the dislocation starvation hypothesis described above, mobile dislocations are thought to escape from the crystal at the nearest free surface before multiplying and interacting with other dislocations. Such processes lead to a dislocation-starved state, and nucleation of new dislocations in the newly formed perfect crystal is required to accommodate further plastic deformation. When a perfect crystal is subjected to an applied stress, the activation energy required for the nucleation of dislocation loops is a strongly decreasing function of the ratio of the applied shear stress to the ideal shear strength, as shown by Xu and Argon.\textsuperscript{21} The stress levels attained for our smallest pillars correspond to $\sim 0.45 \tau_{\text{theoretical}}$, requiring a high energy of $\sim 13$ eV to homogeneously nucleate a dislocation loop. The theoretical shear strength values in this previous work, however, do not take into account the atomistic nature of nucleating a dislocation near a free surface, where the values of the theoretical strength are expected to be lower. Therefore, in the presence of free surfaces, the ratio of the applied stress to the theoretical strength will be higher, requiring lower, more realistic energies to activate nucleation. In order to explore the possibility of a nucleation-controlled deformation mechanism, it is useful to compare our experimental results to those obtained computationally. Several promising studies addressing the observed size effect at the nanoscale have recently been published. In the molecular dynamics work of Horstemeyer \textit{et al.},\textsuperscript{14} the embedded atom method (EAM) is employed to assess length-scale behavior during shear deformation of an fcc crystal. These atomistic simulations reveal that dislocation nucleation at free surfaces is the mechanism of yielding in pristine crystals. The following power-law dependence was reported by this study: $\tau_{\text{tss}}/\mu = 3.2 \times 10^{-5}(V/A_s)^{-0.38}$, where $\tau_{\text{tss}}$ is the resolved shear stress in the slip plane, $\mu$ is the shear modulus of Au, and $V$ and $A_s$ are the volume and the surface area, respectively. Here, the volume-to-surface ratio is normalized by a constant length in order to make the quantity on the right-hand side dimensionless. Deshpande \textit{et al.}\textsuperscript{15} performed dislocation dynamics simulations of a single crystal in compression and in tension. In this model, plastic flow was found to arise from the collective motion of discrete dislocations that nucleate from fixed Frank-Read sources, which are activated both by the applied stress and by the stress fields of nearby gliding dislocations. The authors also derived a power-law expression for the flow stress as a function of the lateral dimension of the sample, $W$: $\sigma_f = 67 \times 10^{-5}(W/W_0)^{-0.49}$, where the leading constant has units of MPa. Incipient plasticity was also investigated by Zuo \textit{et al.} in 2005.\textsuperscript{16} These authors also employed EAM to compute the atomic fluctuations leading to dislocation nucleation in a small computational cell of Ni$_3$Al subjected to homogeneous shear. It was found that atoms with relative displacements larger than $\frac{1}{2}$ of the Shockley partial vector formed “hot zones,” resulting in the creation of partial dislocation loops. Fitting our gold data to this model required the modification of two parameters, which are functions of the strain rate and the elastic properties of the deformed material.
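Comparisons of this kind reduce to fitting a power law $\sigma_f = A\,D^{n}$ to flow stress versus pillar diameter on log-log axes. A minimal sketch of such a fit follows (our own illustration; the data points are invented placeholders, not our measurements):

```python
import numpy as np

# Least-squares power-law fit sigma = A * D**n on log-log axes.
# The data points below are placeholders for illustration only.
D = np.array([290e-9, 400e-9, 880e-9, 2.0e-6, 7.45e-6])   # diameters (m)
sigma = np.array([800e6, 560e6, 320e6, 180e6, 90e6])      # flow stresses (Pa)

n, logA = np.polyfit(np.log(D), np.log(sigma), 1)
A = np.exp(logA)
print(f"exponent n = {n:.2f}")                 # negative: smaller is stronger
print(f"sigma(500 nm) = {A * (500e-9)**n / 1e6:.0f} MPa")
```

The negative exponent returned by the fit is the quantitative signature of the smaller-is-stronger trend that the cited models attempt to reproduce.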
The graphs comparing the predictions of these models\textsuperscript{14–16} with the present experiments are available online [Ref. 19(c)]. While the overall trends of these models compare favorably with the experimental data, the inevitable idealizations of computational studies and the large differences in sample sizes may be responsible for some of the differences in the predicted mechanical behavior. The authors gratefully acknowledge financial support of this project through an NSF-NIRT grant (CMS-0103257) and a Department of Energy grant (DE-FG03-89ER45387). We especially thank W. Oliver (MTS Corporation), M. Uchic (AFRL), as well as A. Marshall, D. Pickard, G. Feng, E. Perozziello, B. Jones, and J. Cheng (Stanford University) for their help with this work.
\begin{thebibliography}{21}
\bibitem{1} Y. Wei and J. W. Hutchinson, J. Mech. Phys. Solids \textbf{51}, 2037 (2003).
\bibitem{2} J. S. Stolken and A. G. Evans, Acta Mater. \textbf{46}, 5109 (1998).
\bibitem{3} W. D. Nix and H. Gao, J. Mech. Phys. Solids \textbf{46}, 411 (1998).
\bibitem{4} H. D. Espinosa and B. C. Prorok, J. Mater. Sci. \textbf{38}, 4125 (2004).
\bibitem{5} L. Nicola, E. Van der Giessen, and A. Needleman, J. Appl. Phys. \textbf{93}, 5920 (2003).
\bibitem{6} Q. Ma and D. R. Clarke, J. Mater. Res. \textbf{10}, 853 (1995).
\bibitem{7} See, for example, N. A. Stelmashenko, M. G. Walls, L. M. Brown, and Y. V. Millman, Acta Mater. \textbf{41}, 2855 (1993); Y. Huang, Z. Xue, H. Gao, W. D. Nix, and Z. C. Xia, J. Mater. Res. \textbf{15}, 1786 (2000).
\bibitem{8} E. Arzt, Acta Mater. \textbf{46}, 5611 (1998).
\bibitem{9} D. Y. Yu and F. Spaepen, J. Appl. Phys. \textbf{95}, 2991 (2003).
\bibitem{10} W. D. Nix, Metall. Trans. A \textbf{20A}, 2217 (1989).
\bibitem{11} E. Arzt, G. Dehm, P. Gumbsch, O. Kraft, and D. Weiss, Prog. Mater. Sci. \textbf{46}, 283 (2001).
\bibitem{12} W. W. Gerberich, J. C. Nelson, E. T. Lilleodden, P. Anderson, and P. J. T. Wyrobek, Acta Mater. \textbf{44}, 3585 (1996).
\bibitem{13} S. S. Brenner, J. Appl. Phys. \textbf{27}, 1484 (1956); S. S. Brenner, in \textit{Growth and Perfection of Crystals}, edited by R. Doremus, B. W. Roberts, and D. Turnbull (Wiley, New York, 1958), p. 157.
\bibitem{14} M. F. Horstemeyer, M. I. Baskes, and S. J. Plimpton, Acta Mater. \textbf{49}, 4363 (2001).
\bibitem{15} V. S. Deshpande, A. Needleman, and E. Van der Giessen, J. Mech. Phys. Solids (to be published).
\bibitem{16} L. Zuo, A. H. W. Ngan, and G. P. Zheng, Phys. Rev. Lett. \textbf{94}, 095501 (2005).
\bibitem{17} M. D. Uchic, D. M. Dimiduk, J. N. Florando, and W. D. Nix, Science \textbf{305}, 986 (2004).
\bibitem{18} J. R. Greer, W. C. Oliver, and W. D. Nix, Acta Mater. \textbf{53}, 1821 (2005); J. R. Greer, Ph.D. dissertation, Stanford University (2005).
\bibitem{19} See EPAPS Document No. E-PRBMDO-73-061624 for (a) the full Auger analysis, (b) the dislocation starvation mechanism, and (c) a comparison of computational studies with our experimental data. This document can be reached via a direct link in the online article’s HTML reference section or via the EPAPS homepage (http://www.aip.org/pubservs/epaps.html).
\bibitem{20} S. Ogata, J. Li, N. Hirosaki, Y. Shibutani, and S. Yip, Phys. Rev. B \textbf{70}, 104104 (2004).
\bibitem{21} G. Xu and A. S. Argon, Philos. Mag. Lett. \textbf{80}, 605 (2000).
\end{thebibliography}
Pioneer Natural Resources – When dividends flow around double digits. PIONEER NATURAL RESOURCES IS ONE OF THE BIGGEST US SHALE (FRACKING) COMPANIES. Of course, the underlying business is first and foremost dependent on the price of oil. Outside of the US, it is not a particularly well-known company. However, Pioneer Natural Resources (PXD) is the biggest oil producer in the Permian Basin in Texas and one of the top players in gas production there. The Basin is the only place where it operates. Focus at its best! PXD has one of the lowest production cost profiles in the industry and thus high margins. Due to this concentration, it is valued lower than its peers. But Texas is business-friendly, especially for energy companies. One could also buy one of the supermajors to play a higher oil or gas price. But PXD has a more interesting shareholder return approach. Instead of spending (wasting) billions on share buybacks at high stock prices, it pays a variable cash dividend. If oil climbs back above 100 USD, we are talking about a double-digit dividend yield. “Behind every stock is a company. Find out what it’s doing.” “All the math you need in the stock market you get in the fourth grade.” “If you’re prepared to invest in a company, then you ought to be able to explain why in simple language that a fifth grader could understand, and quickly enough so the fifth grader won’t get bored.” (Peter Lynch) “That’s been one of my mantras – focus and simplicity. Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.” (Steve Jobs) “I am suspicious of thick and overly detailed reports. What is good is usually also quite simple, so lengthy explanations should not be necessary.” (Marc Faber) “I don’t look to jump over 7-foot bars: I look around for 1-foot bars that I can step over.” (Warren Buffett) “Simplicity is the ultimate sophistication” (Leonardo da Vinci) Why do I write short reports and not long, extensive ones? Short answer: it is about simplicity, comprehension and your time in today’s information-overloaded world. Long answer: for me, the big goal (and challenge) is to be clear and on point with my investment theses. For every aspect I look at, the aim is to write down the main points cleanly on a single page. The research I do beforehand is not reduced, however; quite the opposite. My research process is extensive. I spend several hours condensing what is necessary into an easily comprehensible and digestible format for you. It is a service from me and also my philosophy to be spot on. Because I value your time! WOULDN'T IT BE GREAT TO OWN OIL STOCKS THAT PAY YOU HIGH DIVIDENDS – HIGHER OIL PRICES, HIGHER PAYOUTS? This could at least lower your energy bills somewhat. I found one company that not only excels as one of the best in its everyday operations, but has exactly this dividend strategy: the higher the oil price, the higher its cash inflow and the higher the dividends! Management adjusts the payout variably, quarter by quarter, to the most recent results. Up to 80% of free cash flow every quarter goes straight into your pocket (before taxes)! In comparison, most oil companies pay a smaller dividend and only adjust it slightly once a year. Big parts of their cash flows, or even the majority, are often wasted on share buybacks, and only after the stock price has already gone up.
The company I am presenting to you today, in my first research report for my Premium Members, is Pioneer Natural Resources. It is the biggest oil and a big gas producer in the Permian Basin of Texas. In my opinion, it is a good time to enter the game, as the price is in correction mode! You could probably already tell from my current and third-to-last Weeklies that I am expecting higher energy prices and a sector rotation in favor of energy. At a minimum, oil should stay at higher levels than in the past because of the sector's under-investment during the last decade. Supply increases should be very limited for the foreseeable future, as this sector is out of public favor and supply is slow-moving, should it ever be increased. Demand tends to be inelastic and will probably stay high, because oil is directly or indirectly everywhere in our lives. This is a very good setup for oil companies, not so much for consumers. After the relentless spending of prior cycles, many oil companies used the bear market since 2014 for cost-cutting and optimization measures. Cash flows and capital returns are at record highs now, thanks to a focus on the most profitable and promising projects instead of sheer volume at all costs, as in the past. In a "normal" up-cycle, oil producers would start investing in new production capacity and thereby increase oil supply. But not anymore. Shareholders have demanded strict capital discipline. Hence, not much more supply is to be expected. Most oil companies maximize their cash flows and pay most of them out instead. The OPEC+ cartel has a history of being able to decide the fate of the oil price. The 23 countries in this group account for about 45% of total world oil supply and should be able to influence the price of oil with any major decision. What worked in the past, however, doesn't seem to work anymore. Members of OPEC+ have in recent history had diverging views about adjustments to oil production capacity. I think they are not about to complain about higher prices... And don't forget: should worldwide geopolitical tensions escalate, you will have a hedge on this front in your portfolio, too. Just as a bonus. I hope you enjoy this new report. All the best, Alan Galecki Founder of Financial-Engineering.net Pioneer Natural Resources (PXD) is the biggest oil producer and one of the big gas producers of the second row, operating in the Permian Basin of Texas. Located in the western part of Texas, the Basin is one of the most important and most promising sources of energy. In the USA, around 75% of reserves are still not in production! In contrast, most heavyweights like Saudi Arabia, Kuwait, Iraq or Canada are already running at full steam with their production. And while the other, more mature oil fields in the US are rather stagnating, the Basin is growing. Most of the untapped reserves are located in the Permian. This is exactly the place where today’s investment idea has its operations and its full focus. Internationally, it is not as well known as the big supermajors Exxon Mobil, Chevron, BP or Shell. But PXD is no small fish, with a market cap of nearly 50 billion USD! Due to its size and strict focus on Permian Basin operations, PXD has the lowest production costs and thus attractive margins.
What I like about this company are three things:
- Its focus and operational excellence (lowest production costs)
- Its long-time founder-CEO, who came back from retirement to deliver best-in-class results and increase shareholder rewards
- Its capital allocation strategy

In contrast to many other energy companies, PXD is not wasting cash flows on share buybacks at high stock prices. Instead, it pays out 75–80% of its free cash flow as a variable quarterly dividend. This means the higher the price of oil, the higher the dividend is adjusted on a quarterly basis. You can immediately profit from higher oil and gas prices! Even at WTI oil prices of 60 USD per barrel (currently 78.74 USD), PXD would be able to pay a dividend with a yield of around 5% p.a. relative to the current stock price. PXD is already relatively cheap in comparison to its peers, even if the oil price does not rise soon. At current levels, you can collect a nice 9% dividend yield p.a. The upside potential of the stock should be a comfortable 30% at current oil prices, should its free cash flow multiple merely approach the valuations of the industry leaders. PXD has negligible debt relative to its cash flows, but also relative to its peers. Also, PXD sailed rather comfortably through the oil price shock of 2020, while many peers had to load up their balance sheets with debt to pay dividends or to service older debt. Should the oil price rise, you can expect a fat double-digit dividend yield on current prices already at around 100 USD per barrel. Bull markets in commodities are born from previous under-investment – this is precisely the setup we are currently in! And this is the main reason why I expect higher rather than lower oil prices. Read on for the whole story...

## Pioneer Natural Resources

| **ISIN / ticker** | US7237871071 / PXD US |
|-------------------------|-----------------------|
| **Headquarters** | Irving (Texas, USA) |
| **Share price (as of 23 September 2022)** | 208.99 USD |
| **Market cap / enterprise value** | 49.9 bn. USD / 52.7 bn. USD |
| **Average 3-month daily volume** | 2.4 million shares / ~578 million USD |

### Long-term chart

[Chart: Pioneer Natural Resources Company (PXD), close of 208.99 USD on 23 September 2022 (-17.66 USD, -7.79%); 571.9% price return over 17.68 years, an 11.4% CAGR. Source: TIKR.com]

Pioneer Natural Resources (PXD) is a pure oil and gas exploration and production (E&P) company. “Pure” means that it has no refining, chemical or marketing activities and no gas stations of its own, unlike the big oil supermajors Exxon Mobil, Chevron, BP, Total or Shell. All operations of PXD are concentrated on the Permian Basin in Western Texas (the eastern “Midland” part). The Permian Basin as a whole contributes roughly 40% of total US oil production and is said to be one of the most future-proof oil sources in the world due to its vast, still untapped reserves. The consultancy firm Rystad Energy says about the Basin: “We’ll stop using oil long before we run out of Permian inventory”. During 2022, production in the Permian already surpassed the highs of 2019 and is expected to rise further. Not so in the other, more mature and slower-moving oil fields: the Bakken (North Dakota), Eagle Ford (South Texas) or Woodford (Oklahoma). This strategic key location in Western Texas should be on the radar of everyone interested in oil investments.
Interestingly, as the graphic in the top right corner shows: unlike other big oil producers such as Saudi Arabia, Russia, Iraq, Kuwait or Canada, the USA still has roughly 75% of its reserves not in production. The other nations are, for the most part, already actively producing. And the US is already one of the top three producing countries (some sources even say first place)! Total world oil production currently is around 100 million barrels per day. The US produces around 11–12% of it. Due to its size, its focus on a single high-quality location and good management, PXD has the lowest production costs in the Basin and thus the highest margins. At current oil prices, it has a margin per barrel of ca. 30–35 USD. This is still insanely profitable, although oil prices have lost around a third from their high of 120 USD this year (WTI is the relevant benchmark for PXD). Even at 50 USD per barrel, PXD would be profitable. I don’t see such low prices as sustainable, however. PXD is selectively buying smaller companies to increase its already big resources, which should last for around 20 years at a minimum (see p. 15 of their Q2 presentation). Fortunately, PXD is led by one of its founders, who even came back out of retirement for a second term. The CEO, Scott Sheffield, was in charge from the merger in 1997 until 2016, when he retired. But he returned in 2019, luckily. His mission: to deliver top results and increase returns for shareholders! And he is good at capital allocation. The typical pattern in this sector is that energy companies have no cash at the bottom of the cycle and often post losses. Some companies even need to take on debt to survive. A few simply go bankrupt because they were too aggressive during the good times. More on debt on the next page. Many CEOs do not have the courage or the means to buy back stock when prices are low and returns are the highest. They wait and see. Instead, they pro-cyclically raise payouts, and especially stock buybacks, when cash flows rise again along with stock prices. Buying high and selling low, unfortunately, is common practice in the energy sector. I don’t like to see that, because with higher stock prices you can obviously buy back fewer shares for your money. CEO Sheffield said in the most recent Q2 earnings call: “We do run our net asset value on the company. We like to get a great return when we go into the market and buy a lot of the stock like we did. And if for some reason, we see big dips, whether it’s in oil or something else affecting the marketplace, then we’ll be more aggressive like we have.” I really like to see such forward-looking leaders who understand capital allocation. The management of PXD has committed to a progressive minimum dividend through the complete business cycle. This base dividend currently stands at 4.40 USD per year, or slightly above 2% p.a. It has been raised regularly since Sheffield came back. Even at 50 USD per barrel of oil, PXD would be able to pay this dividend. A big margin of safety! More interesting, of course, is the dividend’s variable component. In the picture above, we see that at levels of 60 USD per barrel, PXD would, at current share prices, throw off a nice 5% dividend yield. This is already way more than its big US peers Exxon Mobil and Chevron pay. Really exciting, however, is the prospect of higher oil prices. This is another advantage of the variable dividend model. It translates to a current dividend yield of 9.1% (19 USD / 209 USD) p.a. Not bad! You see above what will happen with higher oil prices.
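To make the yield arithmetic explicit, here is a minimal sketch in Python using only the figures quoted in this report (share price as of 23 September 2022, the 4.40 USD base dividend, the roughly 19 USD total payout, and the ~5% yield expected at 60 USD WTI). These are the report's snapshot numbers, not live market data.

```python
# Worked dividend-yield arithmetic from the report's own figures.
price = 208.99          # USD per share (23 September 2022)
base_dividend = 4.40    # USD per year, progressive minimum ("base") dividend
total_dividend = 19.0   # USD per year, base + variable at recent payout levels
yield_at_60_wti = 0.05  # ~5% total yield the report expects at 60 USD WTI

print(f"base dividend yield:   {base_dividend / price:.1%}")   # ~2.1%
print(f"current total yield:   {total_dividend / price:.1%}")  # ~9.1%
print(f"implied payout at 60$: {yield_at_60_wti * price:.2f} USD/share/yr")
```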
I think this is a very attractive and clearly communicated dividend strategy, while peers will likely ramp up their buybacks at higher prices. PXD does also buy back its own shares, but it does so rather opportunistically and only with up to 20% of free cash flow. During the first half of 2022, PXD bought back 2.5% of its stock. It still has around three billion USD of buyback authorization left (around 5% of outstanding shares at current prices). Because oil prices have come down in recent months, the next quarterly dividend will likely be lower than the last one. With the Q2 results, PXD announced a quarterly dividend of 8.57 USD in total. That would be an annualized yield of around 16%. You see what is possible. Currently, I would calculate with a yield of around 8–9% p.a. in my base case. The upside is for free! Because we are in an economic slowdown and also have rising interest rates, it is paramount to be very cautious with debt. You should pick the businesses with the best balance sheets. This is even more important for companies that produce commodities, as in the oil sector, where results fluctuate more. One of my favorite investing idols, Peter Lynch, once said: “A company without debt cannot go bankrupt.” I think you see the point. Less debt means less stress. If cash flows fall abruptly, a healthy balance sheet will save you from having to raise capital on the worst conditions. Dilution hurts investors the most. You dilute shareholders massively more at low stock prices. Taking on debt in a period of stress is also not favorable, because the cost of debt is higher. Suspended or cut dividends likely follow suit in such stress scenarios. Unlike many of its peers (e.g., Exxon), PXD made it through the 2020 trough without taking on excessive debt to fund its expenditures and base dividend. Currently, the balance sheet is in very good shape. Net debt sits at only three billion USD (less than a third of the free cash flow guided for 2022). It is even spread over several years and does not have to be paid back in one sum. Hence, we have stability on this front. Next, of course, is operating performance. Due to its low cost base and scale, PXD also has very high returns on capital. Please see the second slide above. Return on capital employed (what you get back for your investments) is expected to be around 30%. This is a very high figure, as a comparison even with other, lower-cost industries shows. One point I dislike is that PXD used its own stock to pay for two acquisitions in 2021. This was a time when the stock of PXD stood rather low. The share count grew by about 50%. Luckily, oil prices jumped. I hope they are buying back a lot of stock during the current correction. THE VALUATION OF COMMODITY STOCKS IS VERY TRICKY – TO SAY THE LEAST. We will try, nonetheless. The current market capitalization stands at around 50 billion USD with a PXD share price of around 209 USD. The enterprise value (market cap + net debt) sits at around 52.7 billion USD. With their Q2 results, management guided to a free cash flow for this year of around 9 billion USD. This is already after investments made back into the business. These are the rough figures. The problem is, oil prices were above 100 USD at the time of this guidance. We are 20% lower now. I would thus build in a higher margin of safety and favor being surprised to the upside instead of being disappointed for having been too optimistic. Thus, let's assume they will achieve a free cash flow of 8 billion USD this year.
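Before walking through the multiples, here is a quick sanity check of the valuation arithmetic discussed next, again using only the inputs stated in this report (an EV of ~52.7 bn USD, FCF scenarios of 8 and 9 bn USD, and a hypothetical re-rating to a 9x EV/FCF multiple).

```python
# Valuation arithmetic from the report's inputs; all scenario values are
# the report's own assumptions, not forecasts.
ev = 52.7e9                          # USD, enterprise value (market cap + net debt)

for fcf in (8e9, 9e9):               # conservative case vs. management guidance
    print(f"FCF {fcf / 1e9:.0f} bn USD: EV/FCF = {ev / fcf:.1f}x, "
          f"FCF yield = {fcf / ev:.1%}")

rerated_ev = 9 * 8e9                 # hypothetical 9x multiple on 8 bn USD FCF
print(f"upside to a 9x re-rating: {rerated_ev / ev - 1:.1%}")  # ~36%, as in the text
```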
With 8 billion USD of free cash flow, the EV/FCF multiple would be around 6.6x. This corresponds to a free cash flow yield of more than 15% (FCF/EV, simply the inverse of EV/FCF). This is already an attractive expected return. Should PXD reach its guidance of 9 billion USD of free cash flow on the back of higher oil prices, the multiple would be less than 6x. The FCF yield (your expected return) would be more than 16%. Peers are trading at multiples of 10.4x (Exxon), 12.2x (Chevron), 9.5x (Shell), 6.9x (BP, high debt) or 7.9x (TotalEnergies). PXD has not only the lowest valuation of this pack, but also very low debt and the highest dividend yield, at currently 9%. For me, PXD is operating in one of the safest and most promising locations with the best operating results. Hence, the valuation gap in comparison to its bigger peers is unjustified from my perspective. Should PXD merely reach an EV/FCF multiple of 9x on 8 billion USD of FCF, the upside potential would already be a juicy 36% (from an EV of 52.7 billion USD up to 72 billion USD). And this is without the dividend payments, which you would collect on top! Of course, everything in the end depends on where the price of oil goes. Above, I have included one more interesting chart, which shows the worldwide total rig count over more than 60 years. Rigs are the machines that drill for oil. I am sure you have either never heard about this or certainly not about the numbers presented here. In fact, we currently have one of the lowest active rig counts worldwide, ever! How can this be bearish for oil? In other words: it is a very promising setup for a years-long bull market in oil! This should be very supportive of the oil price, because you can only increase supply when you have the machinery in place to bring the black gold to the surface. That doesn't happen overnight. All in all, I think oil may continue its correction temporarily, but it will rise again. There is just not enough supply to push oil down sustainably. This will also be supportive for oil stocks in general and PXD in particular. My strategy would be to collect a nice 9% yield at current prices and wait. Should the price drop further, one could buy more. LIKE ALWAYS, LET'S HAVE A LOOK AT THE POTENTIAL RISKS. There are several of them. Some are real, others rather theoretical in nature. Real risks to operations are:
- The direction of the oil price itself, the biggest determinant of the stock price
- Worker and tool shortages that increase operating costs and squeeze margins
- Looming excess profit taxes (windfall taxes), which were implemented in Europe and are also being discussed in the US
- Supply increases (rather limited and not wanted by OPEC+)
- Demand decreases (though demand is inelastic)
- Political interventions
- PXD doesn't hedge oil prices at current levels; it fully participates in both directions. This can lead to bigger swings in the stock price and the dividend
- The ESG movement and green ideologies

I think, all in all, the risks are well known and typical for the business. But it is also clear that there is no way around oil, at the very least not in the short to medium term. Heads I win, tails I don't lose – this is how I see it. But it cannot be ruled out that the current correction will push the stock price down further.
Sources
- https://rrc.texas.gov/oil-and-gas/major-oil-and-gas-formations/permian-basin/
- https://rrc.texas.gov/media/kfnco5xt/permian-top-ten-2021.pdf
- https://www.bloomberg.com/graphics/2022-global-oil-permian-basin/?leadSource=uverify%20wall
- https://worldpopulationreview.com/country-rankings/oil-production-by-country
- https://www.eia.gov/outlooks/steo/report/global_oil.php
- https://www.enverus.com/permian-basin/
- https://investors.pxd.com/static-files/4f8d74c7-e1c1-45ca-8dbe-1b953e723ecc
- https://investors.pxd.com/static-files/f726c63a-75d5-4046-a468-14f6b07f2e41
- https://www.oasdom.com/what-are-oil-rigs/
- https://tradingeconomics.com/united-states/total-rigs
- https://app.tikr.com/stock/about?cid=295224&tid=2639311&ref=9ycfp3
- https://seekingalpha.com/filings/pdf/15590269

Disclaimer
Risk warning and terms of usage. Before using this report, please carefully study the following key points:
- My Site is a personal blog and not a regulated financial, tax or legal advisory service aimed at giving you specific advice with regard to buying, selling or holding financial instruments of any kind. You may use my Site to access my opinion and other information I provide. None of it constitutes financial, legal or tax advice. Where I make paid-for products available (as with this report), these are solely offered to contribute to the operating costs of my research.
- All material published on my Site is solely for information and entertainment purposes. It is not a replacement for financial, tax or legal advice from your personal, regulated financial or legal advisor. Neither is it a replacement for doing your own research and due diligence.
- My Site is provided for your general information and use only. It is not intended to cater to your particular requirements for financial, tax or legal advice. The author does not accept any liability for any loss suffered by any reader or user of the Site as a result of any decision. It is a condition of me allowing you access to my Site that you assume full responsibility for using the Site, and that you accept that I will not be liable for any action you take in reliance on information on the Site or procured through the Site.

All the contents and materials published on my website are part of a personal blog. Some of the investments discussed on this blog, or in separately published reports like this one, could be part of the personal portfolio of the author. However, the author is not responsible for publishing or updating detailed information about his private transactions. In any case, the author is not issuing any transaction recommendations, neither buy nor hold nor sell, in his blog posts or other published materials, including the paid-for content. All information contained on my website or in separately published reports like this one is for information purposes only and only contains the opinions and assumptions of the author, which in themselves can all be wrong. The reference to the analyzed companies on this blog is neither an offer nor a request to subscribe to shares of these companies. The provided opinions and assumptions are for information purposes only and are not to be considered recommendations for investments. Information from external sources is indicated appropriately. The author does not assume any responsibility or liability for the accuracy, correctness and completeness of that information, even though he deems it reliable.
Before making any use of the contents, you must extensively research and assess on your own whether the published information is suitable for your own purposes and compatible with your individual situation and goals. The presented opinions and assumptions are those of the author at the time of publication only and may not agree with information provided at a later time. From time to time, the author may publish updates to his past writings, but he is never responsible or liable for doing so. It is only a voluntary service. This blog and all its articles, research reports, media appearances and any related content serve purely to inform and to inspire readers to look out for new investment opportunities and to research them themselves – as a starting point for, or an addition to, an individual opinion formed as part of an investment research process. Readers are responsible for doing their own extensive research, using a variety of other sources, and should never base any investment decision solely on this website or the content published on it. Any valuations of stocks or companies given on this website and in its publications are highly subjective and theoretical results of studies of a range of possible outcomes, which do not have to play out this way at any time in the future. The author is not forecasting any outcomes or likely share prices. Past performance is no indicator of future performance and does not necessarily provide any indication of future results. The value of investments and their possible returns is not guaranteed and may increase as well as decrease, and may thus cause substantial losses for the investor. Potential investors or interested persons are strongly advised to consult their personal professional advisor for the evaluation of the investment risk and the investment strategy in order to determine the appropriateness of an investment based on their individual situation. The author is not paid, sponsored or otherwise compensated by any of the companies discussed in these reports. Using any of the information contained on this blog, financial-engineering.net, or in published material is at the reader's own risk. There is no advisory relationship between the reader/user and the author. It is prohibited to re-publish the content of this website without the express written permission of the author. © Alan Galecki, Financial-Engineering.net
Framework for Developing the Political Judgment of the Beltway Strategist

by Colonel Patrick R. Michaelis, United States Army

United States Army War College, Class of 2014

Project Adviser: Dr. Stephen J. Gerras, Department of Command, Leadership, and Management

DISTRIBUTION STATEMENT A: Approved for Public Release. Distribution is Unlimited.

This manuscript is submitted in partial fulfillment of the requirements of the Master of Strategic Studies Degree. The views expressed in this student academic research paper are those of the author and do not reflect the official policy or position of the Department of the Army, the Department of Defense, or the U.S. Government. The U.S. Army War College is accredited by the Commission on Higher Education of the Middle States Association of Colleges and Schools, 3624 Market Street, Philadelphia, PA 19104, (215) 662-5606. The Commission on Higher Education is an institutional accrediting agency recognized by the U.S. Secretary of Education and the Council for Higher Education Accreditation.

The purpose of this paper is to examine current theoretical models that help inform the Beltway strategist to “understand” the unique nature of the Beltway as an area of responsibility (AOR), and from those theoretical models to propose a “framework” that creates context in the mind of the Beltway strategist and a start point for developing political judgment and awareness. To the neophyte, the myriad of influences on decision making and strategy within the Beltway AOR seems an imponderable act to decipher. The policy, process (bureaucracy), politics, and personality (4-Ps) model, emphasizing a framework relationship between the 4-Ps; the twin forcing functions of time and interests; and the lens of strategy as a function of priorities, resources, and risk, gives the Beltway strategist, in any policy domain, a start point for contextual analysis. Independent of the framework, recommendations to the Army to develop political judgment and awareness focus on exposure and experience, earlier educational opportunities and broadening experiences, and a competitive and desirable selection process.

Subject terms: Military Strategy, Bureaucracy, Interagency
U.S. Army War College, Carlisle Barracks, Pennsylvania 17013

Framework for Developing the Political Judgment of the Beltway Strategist

Area of Responsibility: The geographical area associated with a combatant command within which a geographic combatant commander has authority to plan and conduct operations. Also called AOR.

—Joint Publication 1¹

The proverbial Washington, D.C. “Beltway” is an Area of Responsibility (AOR) within the Department of Defense (DoD) that is the purview of the Chairman of the Joint Chiefs of Staff (CJCS) and the Service Chiefs. As in the Geographic Combatant Commands (GCCs), strategists operating within the “Beltway AOR” who have developed a refined level of political judgment recognize that operating in this space is, as described by David Hackett Fischer, “A dense web of contingency, in which many people [make] choices within a structure of relationships.”² The purpose of this paper is to examine current theoretical models that help inform the Beltway strategist to “understand” the unique nature of the Beltway as an AOR, and from those theoretical models to propose a framework that creates context in the mind of the Beltway strategist and a start point for developing political judgment and awareness. After examining theory and proposing a framework, this paper will propose a series of recommendations to the United States Army for preparing military officers to operate successfully within the Beltway AOR.

The Web of Contingency

To the neophyte, the myriad of influences on decision making and strategy within the Beltway AOR seems an imponderable act to decipher. Interests and positions collide, the force of will—or the seeming lack of it—may reveal a hidden hand, and bureaucracies emerge to temper, slow, or accelerate the road to consensus. Consensus itself seems at times a bridge too far, and at others quickly coalesces around bigger, larger ideas. At the same time, the sometimes petty nature of man emerges, and personalities collide as the invisible handler whispers into the principal’s ear. The Beltway AOR is dominated by characteristics penned into the Constitution and by the collage of interests that have emerged to shape outcomes through influence in the form of placement and access. Within the AOR of the Beltway, “Laws are far and few indeed: skills are everything. The key skill [is] the ability to grasp what makes a situation unique.” In other words—to comprehend the complex.
This seeming barrage of interests is enough to freeze an observer. Unable to see the underlying structures in play, he becomes discouraged at best, manipulated at worst. This “complex web of contingency in which choices are made within the structure of relationships” describes the uncertain environment that defines the Beltway AOR. Creating shared understanding of these underlying structures and interests—comprehending the complexity—creates opportunities to see beyond the surface and move agendas forward, even in the face of morphing equities.

**The Beltway as a Unique Culture?**

The frustration with getting things done inside the Beltway is palpable. Moving from idea to execution to follow-through with any measure of effectiveness is an extraordinary feat. The question is why. Whole industries of academia now exist to educate and explain the complexity of politics and policy. Yet many times we look at the question idealistically rather than realistically. We talk in terms of strategic culture when examining strategic issues, but as a subset of U.S. strategic culture, would it be more realistic to talk in terms of the unique aspects of the Beltway’s culture to understand how to get things done? As a start point to defining the problem, we begin by wiping away the immediacy of the now to ask the broader questions: is there a Beltway culture, does it derive from the U.S. strategic culture, and if so, what are the characteristics of that culture? Thomas Mahnken asserts:

A nation’s strategic culture flows from its geography and resources, history and experience, and society and political structure. It represents an approach that a given state has found successful in the past. Although not immutable, it tends to evolve slowly.\(^5\)

Using this definition of a nation’s strategic culture, we can extrapolate that the U.S. strategic culture “flows” from a combination of a land mass surrounded by two oceans, giving us the perception of stand-off, and a rich resource base from which to exercise independence. The revolution for independence created civilian oversight, a fear of large standing armies, and a political structure, grounded in the Declaration of Independence and the Constitution, that is averse to centralization of power, creates a tension to balance human will, and creates the time and space for policies to be subjectively and objectively debated before a decision. From this explanation of U.S. strategic culture, can we derive the “decision-making culture” inside the proverbial Beltway: a culture that acknowledges the reality of separated institutions sharing power and the historical and geographical influence of being a nation independently resilient and comfortable with the time/space buffer created by two oceans? Best described by Graham Allison and Philip Zelikow in *Essence of Decision: Explaining the Cuban Missile Crisis*, the Beltway’s cultural character can be cast as the game of governance, in which the Beltway strategist is a player, within the complexity forged by the Declaration of Independence and the Constitution and reinforced by geography, history, and precedent:

The rules of the game, or the rules for choice, stem from the Constitution, statutes, court interpretations, executive orders, conventions, and even culture. Some rules are explicit, others implicit. Some rules are quite clear, others fuzzy. Some are very stable; others are ever changing. But the collection of rules, in effect, defines the game.
First, rules establish the positions, the paths by which individuals gain access to positions, the power of each position, and the action-channels. Second, rules constrict the range of governmental decisions and actions that are acceptable.\(^6\)

But the complexity and character of decision making inside the Beltway are much richer than a game. It is the collision of interests with massive implications for policy. Creating comprehension out of the chaos of complexity defines the ability to shape the Beltway operating environment. Are there lenses through which we can both understand and comprehend the complex nature and reality of the Beltway?

Simplifying the Complex--Understanding the Why

Models simplify the complex and provide comprehension in order to create an epiphany of understanding. There are inherent risks in the oversimplification of models, but they serve as a start point for creating clarity. The convergence of political science and organizational behavior studies begins to create in the mind of the Beltway strategist some understanding of the “web of contingency” that defines the Beltway AOR. In this section I will examine models that help create a root understanding of large, complex organizations.

Complex Adaptive Systems

In a working paper produced by the United States Army War College, Professor Andrew Hill takes a reductionist approach to bring focus to large, interconnected organizations. Complex Adaptive Systems (CAS) are “system[s] in which large networks of components with no central control and simple rules of operations give rise to complex collective behavior, sophisticated information processes, and adaptation via learning and evolution.”\textsuperscript{7} Recognizing the ubiquitous reality of CAS in different forms, you can begin to see the different component interactions affecting the outcome of the system: “. . . composition (size, diversity), structure (openness, network density, etc.), agent behavior (rules, adaptation, etc.), collective behavior, and equilibrium conditions.”\textsuperscript{8} By being able to reduce the complexity into “parts,” you begin to see connections to behaviors, and opportunities exploited when equilibrium becomes disrupted. A nuanced understanding of CAS leads to “understanding causal relationships in strategic systems and identifying means either to alter the conditions within the system to achieve a new equilibrium, or to maintain an existing equilibrium.”\textsuperscript{9} The August to September 2013 timeframe within U.S. geopolitics gives us a start point for understanding CAS. The verbal “red-line” policy established by President Obama in reaction to Syrian use of chemical weapons offers an example of structures in the form of the National Security Staff, the Department of Defense, the Department of State, and Congress, and agents in the form of the President, the Secretary of State, the CJCS, and members of Congress, reacting to an equilibrium that had been disrupted.

\textit{Governmental Politics}

In \textit{Essence of Decision: Explaining the Cuban Missile Crisis}, Zelikow and Allison present three logical models for understanding the decision-making apparatus of the Soviets and the Americans during the Cuban Missile Crisis.
As the authors move from model to model, the monolithic “they” of the Soviets and the “we” of the Americans begin to crumble, and the complex web of interests converging within the contingency of relationships emerges as a model with more nuance and granularity:

Outcomes are formed, and deformed, by the interaction of competing preferences . . . the Governmental Politics Model sees no unitary actor but rather many actors as players: players who focus not on a single strategic issue but on many diverse intranational problems as well; players who act in terms of no consistent set of strategic objectives but rather according to various conceptions of national, organizational, and personal goals; players who make government decisions not by a single, rational choice but by the pulling and hauling that is politics.\(^{10}\)

Implicit in Allison and Zelikow’s treatise is the acknowledged reality that politics and governance are inextricably connected. Drawing from the Beltway culture discussed earlier as a function of the constitutional separation of powers, the Governmental Politics Model has incredible synergy with the Complex Adaptive Systems model. Allison and Zelikow posit:

For those who participate in government, the terms of daily employment cannot be ignored. Within the framework of broad values and shared interests leaders have competitive, not identical operational objectives; priorities and perceptions are shaped by positions; problems are much more varied than straightforward strategic issues; management of piecemeal streams of decisions is more important than steady-state choices; making sure that government does what is decided is more difficult than selecting the preferred solution. Coalitions are formed to produce the desired actions. The coalitions may include relevant outsiders, legislators, and lobbyists for an interest group, or even foreign officials, as if they were some different species of domestic power broker.\(^{11}\)

You can feel the tensions between the structure, diversity, agent behavior, and equilibrium so eloquently described by Professor Hill, yet layered on top of the Complex Adaptive System is a thick influence of political realities in the form of interests and values. “Each [government] is a more or less complex arena for internal bargaining among the bureaucratic elements and political personalities who collectively comprise its working apparatus.”\(^{12}\) As explicit and implicit interests began to form around the reaction to Syria’s use of chemical weapons, coalitions coalesced around the need for action versus whether action was within a vital U.S. interest. The conversation settled very quickly on whether the use of force was legitimate or not.

**Punctuated Equilibrium**

There is an inertia of interests that surrounds policy. Precedent, statute, and regulation are relatively static, except at the margins. This occurs because of the routine coalescing of interests and the hardening of organizational structures that often emerge to support the precedent, statute, or regulation. Timing, as a function of environmental scanning and political awareness, rests on the recognition that beyond change at the margins, large-scale Beltway change does not occur unless there is a punctuation of the existing equilibrium.
More commonly known as “opportunity,” punctuated equilibrium is defined as an “evolution that is characterized by long periods of stability in the characteristics of an organism and short periods of rapid change during which new forms appear.”\(^{13}\) The formation of the Department of Homeland Security is a prime example of the punctuated equilibrium of 9/11 creating the disequilibrium, and therefore the political space, to reorganize government. Coupling punctuated equilibrium with the Governmental Politics Model and the Complex Adaptive Systems model, we can begin to see the value of opportunity or inactivity in relation to interests. In the face of the opportunity of punctuated equilibrium, “Those who oppose the decision, or oppose the action, maneuver to delay implementation, to limit implementation, to raise the issue again with a different face or in another channel.”\(^{14}\) Further, the keen Beltway strategist begins to understand the cost/benefit of action or inaction for future endeavors.

Practical Drift

Building upon Complex Adaptive Systems and the Governmental Politics Model, the structure, density, and agents within a CAS could be defined as the processes or bureaucracy of large, complex organizations. Professor Scott Snook of the Harvard Business School proposed a theory of Practical Drift, a phenomenon inherent in large organizations with multiple sub-units: “the slow steady uncoupling of practice from written procedure.” Using the shootdown of two UH-60 Black Hawks over northern Iraq by Air Force F-15s on patrol, Snook deconstructs the causal linkages from the formulation of policy for conducting operations over northern Iraq, through changes in leadership, precedent, and structures, that inevitably led to the shootdown of the Black Hawks. Most Complex Adaptive Systems, within a construct of the Governmental Politics Model, during periods of punctuated equilibrium have “constant demands for local efficiency dictat[ing] the path of the drift. Over time, incremental actions in accordance with the drift meet no resistance, are implicitly reinforced, and hence become institutionally accepted within each sub unit.” Practical Drift, when viewed through a Complex Adaptive Systems or Governmental Politics lens, could be renamed “organizational drift”: a natural erosion state in constant “disequilibrium,” in which policy and precedent are constantly redefining organizational norms and interpretations of policy. “Over time, the globally engineered, standardized organization is replaced by a series of locally adaptive subunit logics, each justifying their own version of ‘the rules.’”

Principal-Agent Theory

Inherent in any large organization is a degree of decentralization. That decentralization is designed to power down decision making and to compensate for the extraordinary realities of large Complex Adaptive Systems that are many times connected by interests only. As a theory of economics applied to organizational behavior, Principal-Agent theory provides a realist perspective on the interactions between a “principal” and an “agent.” Those same “agents” within Complex Adaptive Systems theory, or the “players” within the Governmental Politics Model, seek advantage for their own interests by exploiting asymmetry. The farther away from direction, and the more asymmetric the information between the principal and the agent, the greater the tendency to shape information and create moral hazard.
“A classic principal-agent problem occurs when information asymmetries make it hard for an employer to monitor the action of an employee, allowing the employee to act in a way that meets his or her needs, not those of the employer.”\(^{18}\) Further, in the arena of the Beltway, interest asymmetries form, in which vital interests in one camp may not be vital in another, creating tensions and opportunities. The recently produced Quadrennial Defense Review highlights the interest asymmetries between the Department of the Army and the DoD, as the Army’s interest in the size of the force conflicts with the DoD’s need to balance the budget.\(^{19}\) The moral hazard, or slippage, within agents occurs when they filter “information according to their own biases and deliberately distort the information to reinforce their views.”\(^{20}\) Decision making within the Beltway is fraught with examples of Principal-Agent theory in practice, as influencers of decision makers seek to “shape” the message internally, horizontally, and vertically in order to win the resourcing battle that is commonly referred to as “strategy.”

**Strategy Defined**

No true discussion of the decision-making space of the Beltway AOR would be complete without tying the complexity of the Governmental Politics Model to an attempt to understand the true nature of “strategy” as a *term* within the context of the Beltway. Though there is a range of definitions of both strategy and grand strategy, generally accepted are the ideas that *strategy* is “a concept for relating means to ends,”\(^{21}\) and *grand strategy* is “the capacity of the nation’s leaders to bring together all of the elements, both military and non-military, for the preservation and enhancement of the nation’s long term best interests.”\(^{22}\) Yet a more realist set of ideas within the Complex Adaptive System of the Beltway posits that *strategy* is more realistically just a function of *priorities*, *resources*, and *risk*.\(^{23}\) There are few if any contemporary examples of a theory of strategy along the lines of George Kennan’s *Long Telegram*, mainly because of the absence of a perceived existential threat.\(^{24}\) Instead, resources become the prime driver of defining what it is we can do. Strategy emerges when the complexity of the Beltway begins to influence its formulation and follow-through. The combination of the separation of powers with the checks and balances in the process reveals the reality that resources drive strategy, and that strategy is a reflection of priorities (which differ depending on where you sit) and risk. Risk becomes a nuanced understanding of what one can and cannot do, and from a political perspective is expressed in the preservation of options and in governance conducted through the lens of re-election. Pulling from a few theories focused on organizations, decision making, behavior, and strategy, we can begin to form a loose understanding of the Beltway AOR as an operating environment. Is there an environmental frame that can quickly allow the context behind strategy formulation within the Beltway AOR to emerge? This paper proposes there is a way to “frame” the Beltway AOR to assist the strategist in comprehending the complex and the shifting nature of the environment, and from that understanding to shape the context as a Beltway strategist. This is the basis for developing the political judgment and awareness necessary for success.
**Policy, Process, Politics, and Personality: The 4-Ps**

These four factors (policy, process, politics, and personality) influence every contextual issue within the Joint, Inter-agency, Inter-governmental and Multi-National (JIIM) environment of the Beltway AOR. Influenced by “interests” and “time,” the interconnected “Ps” interact through the tensions that lie within “strategy” as a function of “priorities,” “resources,” and “risk.” Using the preceding models to deepen the understanding underlying the 4-Ps, this framework gives the Beltway strategist a start point for understanding the interconnected realities that drive behaviors, alliances, and partnerships. Granularity is achieved when the strategist understands the relations of the different parts of the model to each other. The next portion of this paper will look at each component of the model.

Policy

Whether through statute, regulation, speech or article, policy defines the playing field. It outlines the authorities and responsibilities that become a start point in understanding equities within the Beltway. The legal interpretations of those authorities and responsibilities become part of the interplay within the 4-Ps. As we learned earlier, policy within Complex Adaptive Systems “drifts.” With new leaders or new precedents, the authorities and responsibilities can shift over time, whether under the opportunity of punctuated equilibrium or through the re-calibration of organizational norms as a function of addressing a Principal-Agent challenge. As in Syria, the red-line response by the President was perceived as a form of policy, to which the different elements of the government began to react in the late summer of 2013. The ways and means of implementing established policy lead us to “process.”

Process

As shorthand for bureaucracy, processes are the workflows, approval chains, and precedents based on previous action or tradition, which become opportunities or threats within the operating environment. Processes can be used to accelerate or decelerate an interest in direct relation to both politics and personality. As a Complex Adaptive System, the composition of agents and structures seeks to establish equilibrium. With constant inputs in the form of reactions to critical events, established precedents, guidance, and drift, two concepts from the discussion of theory become apparent in relation to process outputs:

- **Organizational Drift.** Large organizations develop their own culture and processes that feed internally to create what the outside observer would take to be a singularly static and monolithic set of behaviors that defy understanding. As a Complex Adaptive System, processes born from precedent, seeking to solve an internally relevant set of challenges, will drift over time, decoupling original design from current reality. Applying the Governmental Politics Model to a Complex Adaptive System, we can begin to dissect the why and the what behind organizational decisions.
- **Organizational Capacity.** There exists an equilibrium point within which an organization can effectively accomplish its mission in relation to time and resources. When priorities—written, verbal, or non-verbal—emerge that do not emphasize the need to create harmony, the rationalization of work translates into a prioritization that de-emphasizes the importance of unified effort and potentially creates internally focused outcomes.
The Diplomacy, Development, Defense Planning Guide (3D Planning Guide) functions as an example of a “process.” Interestingly, the formulation of the 3D approach to inter-agency planning seemed to be an offshoot of the relationship between two key principals, Secretary of Defense Robert Gates and Secretary of State Hillary Clinton. Yet drift started to emerge as these two key drivers of the process departed and departmental equities, in the form of capacity, no longer saw the importance of the process.

Politics

As a Beltway strategist, it is important to understand political positioning, the political mind, and the motivations behind the political mind. Balancing multiple stakeholders and internal and external constituencies, and keeping options open until absolutely necessary, are just the start point for understanding how politics, as a reality, impacts decision making. Clearly recognizing the reality of ideology as an influencing factor at the interface between the 4-Ps is critical to developing good political judgment as a strategist. Whether it is the realist, idealist, or constructivist approach to international relations, or the tax-or-spend agendas behind domestic politics, developing an understanding of the ideology behind politics, and its reverberation through the other three Ps, creates conditions conducive to political judgment and awareness. There are few instances within political psychology of a “pattern” associated with political personalities. As a link to the fourth “P,” personality, what becomes apparent in the literature is that “there is sufficient variation in situations, problems, and opportunities faced by leaders--both presidents and ordinary leaders--that no single cluster of public, consistent behaviors has much of an effect on performance.”26

Personality

Whether serving as a political appointee, government civilian, think tank personality, lobbyist, or military officer, the ability to create relationships counts. It is first among equals. It frames and shapes the Beltway discussion. The weight of interests and the ability to make persuasive arguments, as well as to create and leverage relationships, are as much a factor in understanding contextual issues as the other three Ps. Though Complex Adaptive Systems and the Governmental Politics Model look holistically at a system of decision making, it is the judgments and decisions of individuals that create the organizational drift, create the information asymmetries, or create opportunities in a period of punctuated equilibrium. No one really knows the political calculus behind President Obama’s decision to defer to Congress on the authorization of force in Syria, but it speaks to the power of personality in shaping the political landscape, in connection with policy, process, and politics.

**Time/Interests**

Surrounding the 4-Ps are the twin drivers of time—which impacts energy and attention—and interests. Linked to the 4-Ps, *time* and *interests* can determine organizational capacity (bandwidth) and focus. Hill asserts:

Timing is an art, and time is precious. Timing is the pace of surveillance and intervention in a system.
A leader must tread a fine line between avoiding the early abandonment of an effective intervention and avoiding the delayed abandonment of an ineffective intervention; or between concluding too soon and waiting too long to determine that a causal relationship exists.\(^{27}\)

Interest, in relation to politics and personality, further defines the playing field within the Beltway AOR. Interest can develop externally or internally, and "where you stand depends on where you sit . . . Knowledge of the organizational seat at the table yields significant clues about a likely stand."\(^{28}\) An understanding of time and true interest may explain both the tactics and the reasoning behind deferring the authorization for the use of force in Syria to Congress.

**Strategy as a Function of Priorities-Resources-Risk**

At the center of the connections of the 4-Ps is a diamond of concerns focused on strategy as a function of priorities, resources, and risk. Though there are arguments in many circles as to what comes first, strategy or resources, it is clear within the Beltway AOR that they are linked, and a concrete reality. They are therefore the prism through which the 4-Ps pass in connection with one another. Understanding the complexities of Beltway strategy formulation and the myriad actors involved--itself a Complex Adaptive System--arms the Beltway strategist with the understanding that "effecting a desired change in a complex, adaptive system is about probabilities, not certainties."29 Strategy as a function of priorities, resources, and risk shapes the equilibrium of a Complex Adaptive System, balances and reconciles organizational drift, and provides a point from which to shift as the Beltway strategist lays out options to principals.

**Increasing Granular Understanding**

Balancing and building "context" by understanding the inter-relationships between the 4-Ps can focus understanding of strategy and increase probabilities. Freedman emphasizes that:

A gifted strategist will be able to see the future possibilities inherent in the next moves, and think through successive stages. The ability to think ahead is therefore a valuable attribute in a strategist, but the starting point will still be the challenges of the present rather than the promise of the future.30

As we connect elements of the 4-P model together, we begin to see totality, and therefore linkages and granularity. Understanding the cost/benefit of action or inaction, comprehending political capital and the need to preserve options, and seeing the power differentials between opposing ideas or policy options all reveal themselves through thorough analysis. That same analytical approach to dissecting the 4-Ps allows the Beltway strategist to understand shifts in equities over time. One begins to see elements of the Melian dialogue, where "the strong do as they can and the weak suffer what they must,"31 while at the same time identifying the operating space within the Beltway AOR to shape the environment. But there is risk in making the complex comprehensible through a framework. Inevitably, a simplistic framework for understanding the Beltway AOR lends itself to a reality check. The fear is that models such as the 4-P model soothe: "(1) Our collective preference for simplicity; (2) our aversion to ambiguity and dissonance; (3) our deep-rooted need to believe we live in an orderly world; and (4) our seemingly incorrigible ignorance of the laws of chance."32
**Framework Conclusion**

Political judgment and awareness within the Beltway AOR means being comfortable with ambiguity and uncertainty. Through trial and error the Beltway strategist can develop the instinct to see the angles around issues and strategies on behalf of institutional interests. There is no doubt that experience counts when operating within the Beltway AOR, and the learning curve can be shortened if preparation and study persist. The 4-P model is one framework with which to build context and then operate within the unique space that is the Beltway. But it should not be considered the final answer. Institutionally, we can do more.

**Recommendations to the Army**

This paper has been about developing the political judgment to operate within the unique operating space defined by the Beltway AOR. Based on theories of organizational behavior, political science, and one officer's experience, the 4-P model is just one way to create context and foster political judgment as a strategist or senior leader. But are there recommendations to the Army, as an institution, that can better prepare future strategists and leaders not only to survive, but to dominate the complex environment of the Beltway AOR? As an extension of the JIIM construct, how can we better create political judgment and awareness within our Army's officer corps? Below are recommendations to the Army focused on the human aspects of selection, education, and experience as components of developing the political judgment to operate effectively in the Beltway AOR.

**Exposure and Experience to Build Political Judgment and Awareness**

Earlier career selection in the officer life-cycle for exposure and experience within the Beltway AOR will lead to cumulative long-term benefits for the Army, the GCCs, and the Joint Staff in being able to develop unified action. Earlier, consistent exposure to how the Beltway works is an attempt to create the political-judgment equivalent of Napoleon's *coup d'oeil* or Clausewitz's military genius. Awareness built through constant exposure and problem solving over time will naturally create a capability of value to the JIIM environment.

**Strategic-Level Educational Opportunities Should Come Earlier**

Consider the "when" in educating strategic leadership competencies unique to the Beltway AOR. Earlier exposure to the constructs taught at the United States Army War College will create the mental models and conditions for more nuanced understanding. Assignments need to focus on true broadening as opposed to development. Expose those selected to operate within the Beltway AOR to broadening assignments that force a re-framing of cognitive models. Rather than a developmental or skill-set assignment, exposure to different modes of thinking forces creative approaches to solving problems, directly improving the contribution of the Beltway strategist.

**Selection for Operating Within the Beltway AOR Should Be Competitive and Desired, but Balanced**

Creating nuanced political judgment requires earlier education in strategic-level concepts, coupled with consistent exposure and experience in the unique aspects of the Beltway AOR; meeting that requirement must attract the best and the brightest. A competitive selection process balanced against operational opportunities can create the desire in the force and also mitigate the risk of a negative perception.
Additionally, the functional area approach to assignments will allow the Beltway strategist to understand the needs of the Army and the Joint Force through continuous exposure to operational and tactical assignments.

**Conclusion**

We have seen how it happened: not a single event, or even a chain of events, but in a great web of contingency. . . To study an event . . . is to discover a dense web of contingency, in which many people made choices within a structure of relationships.\(^{33}\)

In David Hackett Fischer's seminal work *Washington's Crossing*, he describes the beginning of the political culture of the United States through the lens of Washington's crossing of the Delaware in 1776. The "web of contingency, in which many people made choices within a structure of relationships"\(^{34}\) accurately captures the operating environment that defines the contemporary Area of Responsibility (AOR) of the Washington D.C. "Beltway." Though "Beltway," as a term of reference, has many connotations, in the context of this paper it serves as shorthand for the interplay of the separation of powers and the collage of interests that have emerged to define what it means to create strategy and unified action within this distinctly American AOR. Though written from the perspective of the defense environment, the 4-P model--emphasizing a framework relationship between policy, process (bureaucracy), politics, and personality; the twin forcing functions of time and interests; and the lens of strategy as a function of priorities, resources, and risk--gives the Beltway strategist in any policy domain a start point for contextual analysis. Whether working in Defense, Diplomacy, or Development, the constant of the 4-Ps resonates across the totality of the Beltway. But it is only a framework: a start point for seeking clarity in a constantly shifting environment that recognizes complexity, seeks constant advantage, and values relationships and intellect. There is risk in the framework if it is not grounded in an understanding of organizational behavior, political science, strategic leadership, and strategic planning. The Army is in a position, as a service, to create a competitive advantage within itself by dedicating a portion of its education and human resourcing mechanisms to creating the *coup d'oeil*, or the "gift of being able to see at a glance the possibilities offered by the terrain,"\(^{35}\) within the force. The "terrain" in this case is the unique operating environment--the web of contingency--that defines the Washington D.C. Beltway.

**Endnotes**

1 U.S. Joint Chiefs of Staff, *Department of Defense Dictionary of Military and Associated Terms*, Joint Publication 1-02 (Washington, DC: U.S. Joint Chiefs of Staff, November 8, 2010), 24.

2 David Hackett Fischer, *Washington's Crossing* (New York: Oxford University Press, 2004), 364.

3 Lawrence Freedman, *Strategy, A History* (New York: Oxford University Press, 2013), 613.

4 Fischer, *Washington's Crossing*, 364.

5 Thomas G. Mahnken, *United States Strategic Culture* (Defense Threat Reduction Agency, Advanced Systems and Concepts Office, November 13, 2006), 1.

6 Graham Allison and Philip Zelikow, *Essence of Decision, Explaining the Cuban Missile Crisis*, 2nd ed. (New York: Longman, 1999), 302.

7 Andrew Hill, *An Introduction to Complex Adaptive Systems*, Working Paper (Carlisle Barracks, PA: U.S. Army War College, 2013), quoting Melanie Mitchell, *Complexity: A Guided Tour* (Oxford: Oxford University Press, 2011), 1.

8 Ibid., 2.

9 Ibid., 2-3.
10 Allison and Zelikow, *Essence of Decision*, 255.

11 Ibid., 258.

12 Ibid., 260.

13 Merriam-Webster Dictionary, "Punctuated Equilibrium," http://www.merriam-webster.com/dictionary/punctuated%20equilibrium (accessed March 20, 2014).

14 Allison and Zelikow, *Essence of Decision*, 304.

15 Scott Snook, *Friendly Fire* (Princeton, NJ: Princeton University Press, 2000), 194.

16 Ibid., 194-5.

17 Ibid., 197.

18 Daniel L. Byman, "Friends like These, Counterinsurgency and the War on Terrorism," *International Security* 31 (Fall 2006): 89.

19 Chuck Hagel, *Quadrennial Defense Review* (Washington, DC: U.S. Department of Defense, March 2014).

20 Byman, "Friends like These," 90.

21 Carl H. Builder, *The Masks of War: American Military Styles in Strategy and Analysis* (Baltimore, MD: Johns Hopkins University Press, 1989), 49.

22 Tami Davis Biddle, *NSPS AY 14 Directive, 20130905, Lesson 5, Grand Strategy Intro*, 36, quoting Paul Kennedy.

23 Interview with CSIS, non-attribution discussion, February 2014, February 17, 2009.

24 George Kennan, "The Long Telegram," http://www2.gwu.edu/~nsarchiv/coldwar/documents/episode-1/kennan.htm (accessed March 20, 2014).

25 3D Planning Guide, "Diplomacy, Development, Defense, Pre-decisional Working Draft," July 31, 2012, http://www.usaid.gov/documents/1866/diplomacy-development-defense-planning-guide (accessed March 20, 2014).

26 David G. Winter, "Personality and Political Behavior," in *Oxford Handbook of Political Psychology*, ed. David O. Sears, Leonie Huddy and Robert Jervis (New York: Oxford University Press, 2003), 119.

27 Hill, *An Introduction to Complex Adaptive Systems*, 17.

28 Allison and Zelikow, *Essence of Decision*, 207.

29 Hill, *An Introduction to Complex Adaptive Systems*, 16.

30 Freedman, *Strategy, A History*, 611.

31 Robert B. Strassler, *The Landmark Thucydides, A Comprehensive Guide to the Peloponnesian War* (New York: Simon and Schuster, 1996), 352.

32 Hill, *An Introduction to Complex Adaptive Systems*, 5.

33 Fischer, *Washington's Crossing*, 364.

34 Ibid.

35 Freedman, *Strategy, A History*, 613.
1. Name of Property

historic name: Bacon County School
other names/site number: Bacon County Elementary School

2. Location

street & number: 504 North Pierce Street
city, town: Alma ( ) vicinity of
county: Bacon  code: 005
state: Georgia  code: GA
zip code: 31510
( ) not for publication

3. Classification

Ownership of Property: ( ) private (X) public-local ( ) public-state ( ) public-federal

Category of Property: (X) building(s) ( ) district ( ) site ( ) structure ( ) object

Number of Resources within Property:

| Resource Type | Contributing | Noncontributing |
|---------------|--------------|-----------------|
| buildings | 1 | 0 |
| sites | 0 | 0 |
| structures | 0 | 0 |
| objects | 0 | 0 |
| total | 1 | 0 |

Contributing resources previously listed in the National Register: N/A
Name of previous listing: N/A
Name of related multiple property listing: N/A

4. State/Federal Agency Certification

As the designated authority under the National Historic Preservation Act of 1966, as amended, I hereby certify that this nomination meets the documentation standards for registering properties in the National Register of Historic Places and meets the procedural and professional requirements set forth in 36 CFR Part 60. In my opinion, the property meets the National Register criteria. ( ) See continuation sheet.

[Signature] Richard Coates, Date: 11-06-07
W. Ray Luce, Historic Preservation Division Director, Deputy State Historic Preservation Officer

In my opinion, the property ( ) meets ( ) does not meet the National Register criteria. ( ) See continuation sheet.
[Signature]  Date
State or Federal agency or bureau

5. National Park Service Certification

I, hereby, certify that this property is:
(✓) entered in the National Register  [Signature] Elsom H. Ball, Date: 12-26-07
( ) determined eligible for the National Register
( ) determined not eligible for the National Register
( ) removed from the National Register
( ) other, explain: ( ) see continuation sheet
[Signature] Keeper of the National Register  Date

6. Function or Use

Historic Functions: EDUCATION: school
Current Functions: WORK IN PROGRESS

7. Description

Architectural Classification: Late 19th and 20th Century Revivals: Colonial Revival

Materials:
- foundation: brick
- walls: brick
- roof: asphalt shingle
- other: N/A

Description of present and historic physical appearance:

The Bacon County School is a T-shaped, brick, Colonial Revival-style school building. The school is located in downtown Alma in Bacon County in south Georgia. It fronts Pierce Street (U.S. Highway 1). Construction began in 1933 and was completed the following year. The five-part plan of the school contains a central two-story main block with entry portico, one-story classroom wings, and one-story pavilions at each end. A large auditorium extends back from the central block, creating the stem of the "T."

The gable-roofed main block of this red brick building is two stories in height, containing three bays, with the larger, central bay including the slightly projecting entry porch (photographs 1 and 2). Concrete steps lead up to the three arched openings on the porch. These openings contain decorative concrete surrounds with keystone motifs. Entrance from the porch into the school is via three double doors with fanlights. Single window openings are located above and beside the entry (photograph 6). Brickwork in the form of stylized quoins and string courses adorns the main façade. A brick dentil cornice provides ornamental relief on the gable ends (photograph 5).
The hip-roofed auditorium has arched window openings on the north and south facades with original windows (photograph 12). West façade window openings have been bricked in or covered over (photograph 10). One-story, flat-roofed, symmetrical classroom wings flank the main block of the school (photographs 3-5). The wings are divided into three bays, each containing four window openings. Brick quoins divide each bay. The string course from the central portion of the school continues the length of each wing. A row of bricks placed vertically provides decorative relief below the windows. Some of the window and door openings have been infilled with brick or air-conditioning units. The existing windows are metal sash (photograph 5). At the end of the north wing, due to slope, the wing rests on a full basement (photograph 10). The hip-roofed pavilions contain original arched windows with concrete keystones. A brick apron with herringbone pattern under the arched windows provides further ornamentation to this façade (photograph 7). Entry to these pavilions is by way of an arched opening on the north and south facades (photograph 15). Three single window openings are located on either side of the entryways (photographs 9 and 14). The window openings on the north pavilion have been filled with brick (photograph 9). The west or rear facades of these pavilions have no window openings (photograph 10). The corners of the pavilion are detailed with brick quoins. The main block of the T-shaped building features offices on either side of the front doorway (photograph 16), an auditorium with stage and balcony (photographs 18 and 19), and two classrooms on the second floor (photographs 25 and 26). The front offices retain plaster walls, wood ceilings, original wood doors and moldings. Bathrooms and closet areas are located in the offices (photograph 16). Wood single-run stairs, located behind one of the offices, lead to the second floor classrooms. The stairwell contains a wood ceiling, handrail, and moldings (photograph 20). The upstairs classrooms appear to have retained most of their original finishes including wood doors, trim, ceilings and slate blackboards (photographs 25 and 26). The original materials and finishes including plaster walls, wood ceilings, floors, doors and door surrounds, wide moldings, bull's-eye medallions, and balcony with metal railing have been preserved in the auditorium. The original stage and dressing rooms are also intact (photographs 18 and 19). The classroom wings, with central corridor, contain a total of 16 classrooms, many with built-in shelves and closets. Most of the walls are covered with a fiberglass-type panel. The extent of plaster remaining under these panels is unknown. Original wood trim and blackboards have been removed from the first-floor classrooms. Flooring in the classrooms is linoleum tile (photographs 22-24). Restrooms, with tile floors and walls, are located at the end of the corridor. During a 1978 rehabilitation, the original boiler was removed from the basement and gas heating systems were installed in the corridors. Original restrooms were replaced or updated. Most of the windows were removed and replaced with metal windows or window air-conditioning units. Some of the floors were covered with linoleum and walls were covered or replaced with fiberglass-type panels. Suspended ceiling tiles were also installed in some areas. 
A complex of school buildings occupied the site, including a junior high building and a high school building constructed in the 1940s, a 1953 cafeteria, and two buildings constructed in 1969 for kindergarten and Head Start. The 1969 buildings remain to the northwest of the school, outside of the National Register boundary, and will become part of the new Bacon County Board of Education complex. The 1940s and 1953 buildings were demolished in 2003. The proposed National Register site includes only the 1934 school and the immediately surrounding area with a few trees and a paved parking area. The current owner, the Bacon County Commission, has plans to rehabilitate part of the school for use as a senior center. The school received a Georgia Heritage Grant in 2007 for this interior renovation.

8. Statement of Significance

Certifying official has considered the significance of this property in relation to other properties: ( ) nationally ( ) statewide (X) locally

Applicable National Register Criteria: (X) A ( ) B (X) C ( ) D
Criteria Considerations (Exceptions): (X) N/A ( ) A ( ) B ( ) C ( ) D ( ) E ( ) F ( ) G

Areas of Significance (enter categories from instructions): Architecture; Education
Period of Significance: 1933-1957
Significant Dates: 1933 – date of construction
Significant Person(s): N/A
Cultural Affiliation: N/A
Architect(s)/Builder(s): Douglas, Alexander – general contractor, 1933

Statement of significance (areas of significance):

Construction of the Bacon County School began in 1933 due to an increase in the school-age population and the merger of the town of Alma and Bacon County school systems. When it opened in 1934, the Bacon County School served grades one through eleven until the 1940s, when new junior high and high school buildings were built behind the school. A cafeteria was constructed in 1953. In 1957 and 1958, new junior high and high schools were built elsewhere in Alma, and the 1934 building remained in use as an elementary school. In 2003, the 1940s junior high and high school buildings and the cafeteria were demolished. From 1991 through 2003, the 1934 school was used for Head Start and pre-K through third grade. The 1934 elementary school building is the only historic building on the property today. This school was built as a result of school consolidation and is a good example of a Colonial Revival-style school in Georgia.

The Bacon County School is significant in the area of architecture as an excellent example of a small-town "Consolidated Public School" building with minimal Colonial Revival-style details, built during the period of school consolidation in Georgia. As documented in the historic context *Public Elementary and Secondary Schools in Georgia, 1868-1971*, the Consolidated Public School was a new type of school that was built throughout Georgia starting in the 1920s. It replaced smaller city schools and scattered rural schools. Floor plans in the shape of a letter – T, H, L, or U – were commonly used for consolidated schools. The massing of these schools was more elongated than that of urban schools, with classrooms oriented along double-loaded corridors. The buildings were generally one to two stories high; most were one story. Amenities included a large auditorium, offices, and bathrooms. The Bacon County School reflects this design with its corridors, large windows for light and air, and consolidation of various school functions. Stylistically, the Bacon County School reflects the Colonial Revival style.
Popular throughout Georgia from the 1890s to the early 1940s, the Colonial Revival style expressed a renewed interest in American colonial architecture based upon European precedent. The Bacon County School is a good example of a Colonial Revival-style design with its five-part Palladian plan in the form of a villa or Palladian-style dwelling. The school retains its original exterior finishes, massing, form, details, and pattern of openings.

The Bacon County School is significant in the area of education because it represents the efforts of the city of Alma and Bacon County to provide a modern school for the local white community during a period of statewide consolidation in which larger, modern schools replaced many one- and two-room schoolhouses. Changing transportation technology, especially the use of the automobile and school buses, made it possible to consolidate several schools into one larger and improved school building. A statewide effort in Georgia at consolidation began in 1919 with the passage of the Barrett-Rogers Law, which provided funds for consolidation. By 1928, the idea of consolidation proved so widespread in its appeal that increased state funding was needed to provide money to all of the schools that qualified, and by 1936 every county in Georgia had an accredited high school. Many schools evolved into campus-like settings with separate gymnasiums, auditoriums or cafeterias. The results of consolidation were better school buildings and uniform educational policies for most of Georgia's white school students. Bacon County's efforts reflect state and national trends toward greater involvement by state boards of education in local school systems.

National Register Criteria

A – The Bacon County School is significant in education because it represents the efforts of the county to provide a modern school for its white community during a period of statewide consolidation.

C – The Bacon County School is significant in architecture as a good example of a Colonial Revival-style consolidated public school building in Georgia.

Criteria Considerations (if applicable)

N/A

Period of significance (justification)

The period of significance begins with the date of construction in 1933 and ends in 1957, the end of the historic period, during which time the building was continuously used as a public school. The building remained in service until 2003.

Contributing/Noncontributing Resources (explanation, if necessary)

The nomination includes one contributing building, the main school building (1933-1934).

Developmental history/historic context (if appropriate)

Schools played an important part in the life of the town of Alma almost from its inception. Alma began as a stop on a logging railroad constructed by the Offerman and Western Railroad Company sometime after 1887. Farming and timbering provided the primary means of occupation in the area, and the rail line offered a means to ship these goods. A turpentine distilling company formed by C.W. Deen and A.M. McLaughlin began operations around 1899 near present-day 12th and Wayne streets in Alma. To take advantage of the many pine trees in the area, the Yaryan Company set up operations next to Deen and McLaughlin to load and ship pine tree stumps for gum extraction to a Brunswick company that made gunpowder. These two enterprises resulted in an increase in population and the need for housing for the many workers.
With an increasing population, Alma's first school opened c.1902 in two rooms of the Turner house, just south of the turpentine operation. As Alma grew, residents took steps for its incorporation in 1904. Many cotton farmers, tired of the rocky north Georgia soil, migrated to the Alma area around 1906 for better farming opportunities, and new businesses, including the Alma Trading Company, the Alma Mercantile Company and the Alma Gin and Milling Company, sprang up. The Alma Land and Improvement Company, formed by local citizens, bought 200 acres of farmland owned by Jack Rigdon for $7,000. The company also purchased 1,500 acres north of Alma, known as the Fending tract. The company surveyed, then platted the town, laying out several blocks north and south of the railroad tracks. Nearly 500 lots sold for $10 to $50 each over two days in February 1907.

On March 25, 1907, the Alma Land and Improvement Company executed a warranty deed to the town of Alma conveying a parcel of property bounded on the west by Baker Street, on the south by Seventh Street, and on the east and north by the Fending tract of the Alma Land and Improvement Company. The company built a two-story brick school building on the property and deeded it to the town. Called the Alma Polytechnic School, and later Alma High School, its first principal was F.A. Moss. This school was located at the corner of Baker and 7th streets and served the community for many years.

Bacon County, named after Augustus O. Bacon, a U.S. Senator, was created in 1914 out of Appling, Pierce and Ware counties. Alma became the county seat. Then in 1918, Alma was rechartered as a city. By 1930 the population had increased to 1,235. The city expanded west and north of the original planned limits of the Alma Land and Improvement Company. This expansion tied into the construction of the railroad depot west of town and the building of Highway 1. By 1930, the center of commercial activity was located at Pierce Street (U.S. Highway 1) and 12th Street. With growth, a new school was needed.

When the Board of Education of Bacon County met on November 7, 1933, the minutes reflect that "... bonds in the sum of $20,000.00 have been voted and validated by said Alma High School District for the purpose of erecting a new schoolhouse for the said district, and the Board of Trustees of said District and the above named authority, in the exercise of a sound discretion have decided to erect said new school building on part of a certain tract of land owned by Mrs. John Johnson, of said County, located in the City of Alma."

On November 13, 1933, a deed was filed from Mrs. John Johnson selling 7.82 acres to the Board of Education of Bacon County, Ga., and their successors for $159.11, with the boundaries being stated as: "... fronting Eight Street a distance of Three Hundred Fifty (350) feet, and running back and even width of Nine Hundred Eighty Four and One Tenth feet (984.1) and bounded as follows; On the South by Eighth Street, East by lands of Richard S. Altman and the original lot line, On the West by lands of Clifford Edgar and other lands of Grantor, and on the North by other lands of the Grantor, and agreed line being established between the Northwest and Northeast corner of the said above described tract as being a straight line between iron stakes placed at both of the above corners, ALSO, Lots No. One (1) Two (2) and Six (6) in Block No. 4A and lots No. One (1) and Six (6) in Block No. 5A, and including also all fractional lots East of the original land line of lot No.
281, and in the 5th District of said County, the same having been left unnumbered by Alma Land & Improvement Company, in their Map & Survey of said City, said City lots and unnumbered fractions containing Forty-three-One Hundreths acres." The Grantee, in addition to the Board of Education, is listed as the Bacon County Elementary School Property, located in the city of Alma, Ga.

In a Board of Education meeting on January 22, 1934, there is confirmation that "... Alma High School District is now engaged in the construction of a new school house ..." The name of the school architect is unknown, but the general contractor was Alexander W. Douglas. Velmar Benton and Howard Taylor laid the brick. The finished school building consisted of approximately 24,000 square feet of space, serving grades 1-11, with 16 classrooms. In times of overcrowding, the overflow of students attended classes in the McCoy home or in a tobacco warehouse.

As documented in the historic context *Public Elementary and Secondary Schools in Georgia, 1868-1971*, consolidated public schools were built in response to concerns about the state of education in Georgia in the early 20th century. A 1919 law provided increased funding for schools that had been consolidated. At the same time, an amendment to the state constitution permitted county school boards to issue bonds for new school construction. Changing means of transportation, including increased use of the automobile, made possible the consolidation of rural and urban schools. The establishment of high schools and the elimination of ungraded schools at this time were important changes that were reflected in this type of school. Once built, these school buildings became important as community centers for meetings and for cultural and recreational uses.

During the 1940s, the Bacon County Junior High School and Bacon County High School buildings were constructed on the school site, and the 1934 school building housed primary grades only. A cafeteria was constructed on the site in 1953. The Bacon County Junior High School and Bacon County High School buildings were vacated after new facilities were constructed elsewhere in Alma in 1957 and 1958. The vacated buildings were subsequently used to house Head Start, kindergarten, and other classrooms. In 1966, the Bacon County School System integrated on a voluntary basis, and the African-American Alma High School closed in 1968 when mandatory integration took place.

In 1978, a major interior rehabilitation of the Bacon County School took place, which removed asbestos materials and plaster and installed new restrooms, wall panels, suspended ceilings, air conditioning, heating and electrical systems. The architect for this rehabilitation was James Buckley. The school operated as an elementary school until 2003, when a new school was constructed. The junior high, high school and cafeteria buildings were demolished in 2003. In 1969, two additional buildings had been built on the site for kindergarten and Head Start; architect Percy Perkins was responsible for school changes and additions at that time. These buildings are located northwest of the 1934 school and are to remain as part of a new Bacon County Board of Education complex. Plans are to renovate the 1934 building for use as a senior center.

Several well-known people have attended the school. Best known is author Harry Crews, who is in the Georgia Writers Hall of Fame.
Born in Bacon County on June 7, 1935, Crews authored over 21 books and was a regular contributor to *Esquire* magazine and other publications. A movie is currently being developed about his life in Bacon County and the South.

This school is an example of a building that was constructed in response to the merger of the county and city public school systems. Initially, the school housed grades 1-11; then, as time and money allowed, more buildings were constructed. Over the years, the school became specialized as to the grades that were housed in it, until finally only the lower grades were left. Additionally, it came to house Head Start and pre-K classes, previously unavailable in the county. The school was the site of many community meetings, including the annual meetings of the Satilla Rural Electrical Administration. It also served as a religious center for many churches until they could afford larger buildings. The Bacon County Commission also held meetings here, as needed. Community and school plays were performed in the auditorium, providing entertainment for the community.

9. Major Bibliographic References

Aerial map. On file at Bacon County, Georgia, Board of Education. n.d.

Aycock, Billy. "Bacon County School." Historic Property Information Form, July 17, 2006. On file at the Historic Preservation Division, Department of Natural Resources, Atlanta, Georgia.

Bacon County, Georgia. Board of Commissioners Minutes, various years.

Bacon County, Georgia. Board of Education Minutes, various years.

Bacon County, Georgia. Superior Court. Deed Book 16, p. 401.

Baker, Bonnie. "A Short History of Alma and Bacon County." Alma-Bacon County, Georgia Historical Society, 1977.

Baker, Bonnie T. *The History of Alma and Bacon County.* 1984.

Benton, Wanzette. Alma, Georgia. Interview by Henry Kight, 1 June 2006.

Buckley, James. "Rehabilitation Plans, 1978." Bacon County, Georgia, Board of Education.

Crimmons, Timothy; Dickens, Roy; Preston, Howard. "Survey of the Historical and Archaeological Impact of Selected Model Cities and Community Development Projects, Alma, Georgia." Atlanta, 1975.

Johnson, Jones. Alma, Georgia. Telephone interview by Henry Kight, 22 November 2006.

Johnson, R.T. First Baptist Church, Alma, Georgia. Interview by Henry Kight, 4 June 2006.

Kight, Henry. Floorplan Drawings, 2006. On file at Save Our Schools, Inc., Bacon County, Georgia.

Kight, Henry. Southeast Georgia Regional Development Center, Waycross, Georgia. Interview by Michael V. Jacobs, 1 June 2006.

Perkins, Percy. "Bacon County Elementary School and Bacon County Board of Education. Bacon County Consolidated School Construction Plans, c. 1969." Percy H. Perkins Collection, Tube #6, Georgia Southern College Special Collections. Also on file at Bacon County, Georgia, Board of Education.

Ray & Associates. *Public Elementary and Secondary Schools in Georgia, 1868-1971.* Atlanta, GA: Georgia Department of Natural Resources, 2004.

Thomas, Ken. "Alma Depot." National Register Nomination, 1983. On file at the Historic Preservation Division, Department of Natural Resources, Atlanta, Georgia.

www.georgiaencyclopedia.org. "Harry Crews."
Previous documentation on file (NPS): (X) N/A
( ) preliminary determination of individual listing (36 CFR 67) has been requested
( ) preliminary determination of individual listing (36 CFR 67) has been issued; date issued:
( ) previously listed in the National Register
( ) previously determined eligible by the National Register
( ) designated a National Historic Landmark
( ) recorded by Historic American Buildings Survey #
( ) recorded by Historic American Engineering Record #

Primary location of additional data:
(X) State historic preservation office
( ) Other State Agency
( ) Federal agency
( ) Local government
( ) University
( ) Other, Specify Repository:

Georgia Historic Resources Survey Number (if assigned): N/A

10. Geographical Data

Acreage of Property: 3.44
UTM References: A) Zone 17, Easting 361158, Northing 3490722

Verbal Boundary Description: The boundary is indicated by a heavy black line on the attached map.

Boundary Justification: The boundary is the land immediately around the historic school building and is the current legal boundary. It does not include land on which other historic school buildings, since demolished, once stood, and it does not include extant non-historic school buildings.

11. Form Prepared By

State Historic Preservation Office
name/title: Lynn Speno, Survey and Register Specialist
organization: Historic Preservation Division, Georgia Department of Natural Resources
mailing address: 34 Peachtree Street, Suite 1600
city or town: Atlanta  state: Georgia  zip code: 30303-2316
telephone: (404) 656-2840
date: 11/6/2007
e-mail: email@example.com

Consulting Services/Technical Assistance (if applicable) ( ) not applicable
name/title: Billy Aycock
organization: Bacon County Commissioners
mailing address: P.O. Box 356
city or town: Alma  state: GA  zip code: 31510
telephone: 912-632-5214
e-mail: N/A
(X) property owner ( ) consultant ( ) regional development center preservation planner ( ) other:

Property Owner or Contact Information
name (property owner or contact person): Bacon County Commissioners
organization (if applicable): Bacon County Commissioners
mailing address: P.O. Box 356
city or town: Alma  state: GA  zip code: 31510
e-mail (optional): N/A

Name of Property: Bacon County School
City or Vicinity: Alma
County: Bacon
State: Georgia
Photographer: James R. Lockhart
Negative Filed: Georgia Department of Natural Resources
Date Photographed: April 2007

Description of Photograph(s): Number of photographs: 26

1. Main or east façade of the school; photographer facing northwest.
2. Main or east façade of the school; photographer facing west.
3. Main or east façade of the school; photographer facing west.
4. Main or east façade of the school; photographer facing west.
5. Main or east façade of the school; photographer facing west.
6. Main façade, central portion; photographer facing west.
7. East façade of northern end pavilion; photographer facing west.
8. Main façade and north end pavilion; photographer facing southwest.
9. Main façade and north end pavilion; photographer facing southwest.
10. West elevation of building; photographer facing east.
11. West elevation of building; photographer facing northeast.
12. South elevation of the auditorium; photographer facing north.
13. South and east elevations of the building; photographer facing northwest.
14. South and west elevations of the building; photographer facing northeast.
15. South elevation of the south end pavilion; photographer facing north.
16. Interior office; photographer facing west.
17. Corridor in central block; photographer facing northwest.
18. View in auditorium; photographer facing northeast.
19. View facing stage in auditorium; photographer facing west.
20. View up stairs to second floor; photographer facing north.
21. View in corridor of south wing; photographer facing north.
22. View in classroom; photographer facing southeast.
23. View into classrooms; photographer facing east.
24. View in classroom; photographer facing east.
25. View in upstairs classroom; photographer facing northwest.
26. View in upstairs classroom; photographer facing southwest.

[National Register Map/Plat Map: Bacon County Elementary School, Bacon County, Georgia. Survey plat drawn by Everett Tomberlin at a scale of 1" = 100', showing the 3.441-acre National Register boundary by bearings and distances, bounded by U.S. Highway 1 (right-of-way varies), West 8th Street (60' right-of-way), and land lots 281 and 282.]

[First Floor Plan: Bacon County Elementary School, Bacon County, Georgia. Not to scale; drawn by Henry Kight, with photograph numbers and directions of view keyed to the plan.]

[Second Floor Plan: Bacon County Elementary School, Bacon County, Georgia. Not to scale; drawn by Henry Kight, showing the balcony, stairs, and upstairs classrooms, with photograph numbers, directions of view, and room dimensions keyed to the plan.]
Effect of Bergenin on Human Gingival Fibroblast Response on Zirconia Implant Surfaces: An In Vitro Study

John Xiong 1, Catherine M. Miller 1,2* and Dileep Sharma 1,2,*

1 College of Medicine and Dentistry, James Cook University, Smithfield, QLD 4878, Australia; email@example.com (J.X.); firstname.lastname@example.org (C.M.M.)
2 School of Health Sciences, College of Health, Medicine and Wellbeing, The University of Newcastle, Ourimbah, NSW 2258, Australia
* Correspondence: email@example.com

Abstract: The poor quality of life associated with the loss of teeth can be improved by the placing of dental implants. However, successful implantation relies on integration with soft tissues; failure of this integration can result in peri-implant inflammatory disease that can lead to the loss of the implant. Pharmacological agents, such as antibiotics and antiseptics, can be used as adjunct therapies to facilitate osseointegration; however, they can have a detrimental effect on cells, and resistance is an issue. Alternative treatments are needed. Hence, this study aimed to examine the safety profile of bergenin (at 2.5 μM and 5 μM), a traditional medicine, towards human gingival fibroblasts cultured on acid-etched zirconia implant surfaces. Cellular responses were analysed using SEM, resazurin assay, and scratch wound healing assay. Qualitative assessment was conducted for morphology (day 1) and attachment (early and delayed), and quantitative evaluation for proliferation (days 1, 3, 5 and 7) and migration (0 h, 6 h and 24 h). Bergenin at 2.5 μM and 5 μM did not demonstrate a statistically significant effect on any of the cellular responses tested ($p > 0.05$). In conclusion, bergenin is non-cytotoxic at these concentrations and is potentially safe to be used as a local pharmacological agent for the management of peri-implant inflammatory diseases.

Keywords: zirconia; dental implant; fibroblasts; bergenin

1. Introduction

Dental implantology is a predictable, clinically proven method to restore edentulous areas to improve form, function and aesthetics [1]. Since their introduction, titanium and its alloys have been regarded as the gold standard implant material, with well-established clinical results due to their excellent biocompatibility and mechanical strength [2,3]. The principal disadvantage of titanium implants is their unaesthetic metallic appearance, which is often visible through the peri-implant mucosa, particularly in patients with a thin mucosal biotype [4,5]. Titanium has also been reported to induce potential immunological complications due to the release of titanium ions into the surrounding structures, resulting in implant failure [6]. To combat this, novel technologies have been explored. Zirconia has been recommended due to its tooth-like colour, low affinity for plaque, and outstanding mechanical and chemical properties [7–9]. Zirconia is a bioinert, non-resorbable metal oxide that offers a variety of potential advantages for use in implant dentistry, especially in the aesthetic zones [7,10,11].

The formation of an early and long-standing soft tissue barrier is imperative for both the initial healing and increasing the longevity of the implant restoration [12,13]. Human gingival fibroblasts are the soft tissue cells involved in forming the tight soft tissue adaptation against the implant neck.
Additionally, they are involved in the synthesis and maintenance of the extracellular matrix, which is responsible for facilitating tissue regeneration and repair during wound healing [14]. The first phase of soft tissue healing involves cellular attachment, which serves as a basis for subsequent cellular interactions, such as proliferation and spreading [15]. The tight soft tissue adaptation onto the implant neck is made up of a complex of epithelium and connective tissue cells; however, unlike natural teeth, it lacks a periodontal ligament. This makes the implant more prone to bacterial penetration and epithelial downgrowth, causing bone loss and, ultimately, the loss of the implant [16,17]. Hence, this soft tissue barrier is of paramount importance in ensuring a soft tissue seal that is essential for preventing microbial ingrowth and maintaining successful osseointegration.

The 2017 World Workshop Classification of Periodontal and Peri-implant Diseases and Conditions outlines the two disease processes associated with the peri-implant tissue: peri-implant mucositis and peri-implantitis [18]. Peri-implant mucositis is limited to the mucosa around the implant and is characterised by redness, swelling and inflammation of the peripheral soft tissue [17–19]. It is reversible with proper routine implant maintenance and good oral hygiene; however, if untreated, it will progress to peri-implantitis [17–19]. Peri-implantitis is an infectious inflammatory disease associated with the loss of surrounding peri-implant bone and tissue, and is one of the main causes of implant failure [19].

Various approaches have been developed for the clinical management of peri-implantitis. Physical methods of decontamination are routinely practiced; however, such strategies can damage implant surfaces and predispose them to bacterial colonisation [9,20]. Adjunct therapies using chemical and pharmacological agents have been employed to aid in the management of mucositis and peri-implantitis through the decontamination of the implant surface and may include antibiotics and antiseptics [21–23]. Several studies have demonstrated a negative impact on cellular proliferation of commonly used oral antiseptics (i.e., chlorhexidine, povidone-iodine and hydrogen peroxide) and of antibiotics commonly prescribed for the management of peri-implant infections [24–29]. These results, coupled with the rise of resistance to commonly used antiseptics and antibiotics, make it imperative that alternative treatments be identified.

Natural immunomodulatory agents, especially those derived from plants, have been used extensively in folk medicine as nootropics and adaptogens for centuries [30]. Recently, bergenin, which is derived from plants of the *Bergenia* genus, has been shown to have favourable biological activities that may be beneficial in wound healing [31–33]. Current research suggests that bergenin possesses anti-inflammatory properties through the reduction of cyclooxygenase-2 (COX-2) activity, the inhibition of pro-inflammatory cytokines such as interleukin-6 (IL-6) and IL-8, and the selective inhibition of COX-2 in vitro [32,34]. Additionally, bergenin has been reported to possess anti-microbial activities against *Candida albicans* and Herpes simplex virus [32]. In the context of its effect on bone, a study reported that bergenin may enhance osteoblastic bone regeneration [35].
Hence, bergenin could potentially be used as a novel therapy for managing peri-implant inflammatory diseases via local delivery; however, its effect on soft tissue cells remains unexplored. Based on the known actions of bergenin, we hypothesise that the presence of bergenin would have no negative effect on gingival fibroblasts cultured on zirconia surfaces. Hence, our study aims to explore the effect of bergenin on the attachment, proliferation and migration of HGF cells on zirconia implant surfaces.

2. Materials and Methods

All cellular assays were completed in accordance with the Minimum Information About a Cellular Assay (MIACA) guidelines [36]. The Modified Consolidated Standards of Reporting Trials guidelines checklist for preclinical in vitro studies on dental materials was used to report the findings [37]. No ethics approval was required for this in vitro study.

2.1. Sample Preparation

The yttria-tetragonal zirconia polycrystal (Y-TZP) discs were kindly provided by Dr. Elsa Dos Santos Antunes (James Cook University) and were produced via the sintering of 3 mol% yttria partially stabilised zirconia powder (30% monoclinic and 70% tetragonal) using the protocol described in Munro et al. (2020) [8]. Discs measuring 14 mm in diameter and 1 mm in thickness were utilised in this study, and surface modification was performed as previously reported [9]. Briefly, discs were modified by submersion in 40% hydrofluoric acid (Scharlab, Barcelona, Spain) for 1 h to create an acid-etched surface (AEY-TZP). Following this, the discs were rinsed with deionised water to remove any residue and neutralise any remaining acid, and were autoclaved prior to cell culture.

Complete Media Preparation

The complete media (CM) was made up of Dulbecco's modified Eagle medium (DMEM) supplemented with 10% foetal bovine serum (FBS) (Sigma-Aldrich, Sydney, Australia), penicillin/streptomycin (Sigma-Aldrich, Sydney, Australia) and L-glutamine (Sigma-Aldrich, Sydney, Australia). Bergenin (Sigma-Aldrich, Sydney, Australia) was titrated into the CM at concentrations of 0 μM (B0), 2.5 μM (B2.5) and 5 μM (B5). The complete media containing bergenin is referred to as BCM.

2.2. Cell Culture

Commercially sourced human gingival fibroblasts (HGF; ATCC® PCS-201-018™) were used in this study. HGF cells were grown in cell culture flasks with CM. The cells were incubated at 37 °C, 5% CO₂ and 90% humidity, with the media being changed every 3 days. Cells were checked under conventional microscopy (Nikon Eclipse TS100, Nikon Instruments, Tokyo, Japan) until 95% confluency and were then passaged. Cells from the 3rd to 7th passage were used. The cells were then detached with trypsin (Sigma-Aldrich, Sydney, Australia) and seeded onto the AEY-TZP discs or tissue culture plates with CM containing differing concentrations of bergenin for the relevant experiment.

2.2.1. Cell Morphology and Attachment

A qualitative analysis was performed to observe the cytoskeletal arrangements of the HGF cells on AEY-TZP discs using scanning electron microscopy (SEM) (Phenom™ G2 pro, Phenom-World BV, Eindhoven, The Netherlands). Initially, HGF (3 × 10⁵ per disc) were seeded onto the AEY-TZP discs and incubated in BCM (0 μM, 2.5 μM or 5 μM) for 24 h in triplicate. Cells were fixed with 3% glutaraldehyde in 0.1 M phosphate-buffered saline (PBS; Sigma-Aldrich, Sydney, Australia), rinsed with 0.1 M PBS twice, then dehydrated in an increasing ethanol series (25%, 50%, 75%, 95%, then 100%) for 5 min at each concentration.
Following this, each disc was dried in a 1:1 solution of hexamethyldisilazane (HMDS) and ethanol for 15 min, then in 100% HMDS for 5 min. Prior to metallising, samples were dried in a fume cupboard for 4 h. Gold sputtering was performed, and the samples were observed by SEM at 500×, 2000× and 5000× magnification. Cell morphology was assessed on micrographs taken in triplicate at randomly chosen areas of each experimental disc.

Cellular attachment was assessed at an early and a late timepoint. HGFs at a density of 3 × 10⁵ cells/well were seeded onto the AEY-TZP surfaces (n = 9) in a 24-well plate in BCM (0 μM, 2.5 μM or 5 μM). After culturing for 30 min (early) or 3 days (late), unattached cells were removed by rinsing three times with 1 mL of PBS. The attached cells were fixed with 4% formaldehyde at room temperature for 10 min. Following this, cells were permeabilised in 0.1% Triton X-100 (Sigma-Aldrich, Sydney, Australia) for 10 min and then blocked with 1% bovine serum albumin (Sigma, St. Louis, MO, USA) for 15 min to prevent non-specific binding. After rinsing with PBS (3 × 5 min each), the cells on the discs were stained with 2% Flash Phalloidin™ red solution (BioLegend, San Diego, CA, USA) according to the manufacturer's instructions. Fluorescence was visualised and imaged using an Olympus IX53 inverted epifluorescence microscope (Evident Australia Pty Ltd., Macquarie Park, Ryde, Australia).

2.2.2. Cell Proliferation

Human gingival fibroblast proliferation on the surface of the specimens was evaluated using the resazurin assay. The HGF cells were seeded onto the AEY-TZP surfaces (n = 9) as follows: on day 0, 400 μL of BCM (0 μM, 2.5 μM or 5 μM) containing 1 × 10⁵ cells/mL was placed onto each surface to seed the cells. Cellular proliferation was determined at days 1, 3, 5 and 7 using 10% *v/v* resazurin (Sigma-Aldrich, Sydney, Australia). The resazurin solution was added at each time point and incubated for 5 h. Media samples from each specimen were transferred to a 96-well plate in triplicate (3 wells of 100 µL each). The absorbance of resorufin (the product of reduction) at 570 and 600 nm was read using a microplate absorbance reader (iMark™ Microplate Absorbance Reader, Bio-Rad Laboratories, Hercules, CA, USA). The percentage of resazurin reduced was calculated using the values obtained for the control solution (TCP + resazurin solution without cells).
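The exact reduction formula is not spelled out above. The following minimal Python sketch implements the standard two-wavelength percent-reduction calculation from the resazurin (alamarBlue) technical literature, normalised against the cell-free control wells just described; the extinction coefficients are the commonly published values, and the function name and example absorbance readings are illustrative only, not taken from the study.

```python
# Percent reduction of resazurin: standard two-wavelength (570/600 nm) variant.
# Molar extinction coefficients (L mol^-1 cm^-1) for resazurin (oxidised) and
# resorufin (reduced) at the two read wavelengths -- standard published values.
E_OX_570, E_OX_600 = 80586.0, 117216.0
E_RED_570, E_RED_600 = 155677.0, 14652.0

def percent_reduced(a570_sample: float, a600_sample: float,
                    a570_control: float, a600_control: float) -> float:
    """Percentage of resazurin reduced to resorufin in a test well,
    relative to a cell-free (fully oxidised) control well."""
    numerator = E_OX_600 * a570_sample - E_OX_570 * a600_sample
    denominator = E_RED_570 * a600_control - E_RED_600 * a570_control
    return 100.0 * numerator / denominator

# Example: hypothetical readings for one AEY-TZP specimen and the TCP control.
print(round(percent_reduced(0.62, 0.38, 0.22, 0.85), 1))  # -> 32.6 (% reduced)
```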
2.2.3. Cell Migration

A scratch wound healing assay was used to assess migration. HGF cells (3 × 10⁵ cells/mL) were seeded onto three groups of AEY-TZP surfaces (n = 3) and TCP (n = 3) in a 24-well plate with 400 µL of CM. The cells were then incubated at 37 °C, 5% CO₂ and 90% humidity until confluent. Prior to the initial scratching, the CM was changed to BCM for the respective groups (0 µM, 2.5 µM or 5 µM). Using a 200 µL pipette tip, two scratches were made perpendicular to each other on the discs and TCP. Following this, detached cells were removed by washing each well thoroughly with PBS. The migration of cells into the scratched area was assessed at 0 h, 6 h and 24 h. Prior to imaging, samples were washed in 0.1% Triton X-100 for 10 min, then blocked with 1% bovine serum albumin (Sigma, St. Louis, MO, USA) for 15 min. Following that, surfaces were rinsed thrice with PBS for 5 min each time. The discs and wells were stained with 2% Flash Phalloidin™ red solution (BioLegend, San Diego, CA, USA) according to the manufacturer's instructions.

The scratched area was imaged using an Olympus IX53 inverted epifluorescence microscope (Olympus Australia Pty Ltd, Melbourne, Australia) at 10× magnification. The wound area was measured using the Fiji distribution of ImageJ (version 1.53f51, National Institutes of Health, Bethesda, MD, USA), and the healed area was calculated by comparing the wound area at a set time point to the initial scratch area, expressed as a percentage. The percentage calculation is shown below and was quantified using the measured scratched area (SA<sub>Measured</sub>) and the average initial scratch area (SA<sub>Initial</sub>). Measurements were taken in triplicate, and the average surface area was used for the calculations. For all groups, the initial (0 h) healed percentage was 0%.

\[ \text{Healed Percentage} (\%) = \frac{\text{SA}_{\text{Initial}} - \text{SA}_{\text{Measured}}}{\text{SA}_{\text{Initial}}} \times 100\% \]

2.3. Statistical Analysis

The software suite GraphPad Prism 9.2 (GraphPad Software, San Diego, CA, USA) was used for statistical analysis. Two-way ANOVA and post hoc Tukey tests were used to compare the control and the experimental groups, since the data did not show significant departure from normality. The results are expressed as mean ± standard deviation. A \( p < 0.05 \) was considered statistically significant.
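To make the healed-percentage formula in Section 2.2.3 concrete, here is a minimal Python sketch assuming the scratch areas have already been measured in Fiji/ImageJ and exported in mm²; the function name and the triplicate values are illustrative, not from the study's analysis pipeline.

```python
# Wound-closure ("healed percentage") calculation from Section 2.2.3.
from statistics import mean

def healed_percentage(sa_measured: float, sa_initial: float) -> float:
    """Healed % = (SA_initial - SA_measured) / SA_initial * 100."""
    return (sa_initial - sa_measured) / sa_initial * 100.0

# Triplicate area measurements (mm^2) for one disc at one time point are
# averaged before the calculation, as described in the text.
sa_initial = 0.5379                # average initial (0 h) scratch area reported for AEY-TZP
measured_6h = [0.41, 0.40, 0.43]   # hypothetical triplicate measurements at 6 h
print(round(healed_percentage(mean(measured_6h), sa_initial), 1))  # -> 23.2 (% healed)
```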
3. Results

3.1. Cellular Morphology

An SEM analysis of cellular morphology was performed to identify the characteristics of fibroblasts upon initial attachment. Following an incubation period of 24 h, cells were found to be adhering to both the TCP and AEY-TZP discs at all bergenin concentrations (Figure 1). SEM observation showed the definitive morphology of spindle-like long cytoplasmic elongations, a profile characteristic of HGFs.

Figure 1. SEM imaging of HGF morphology on AEY-TZP discs (a–i) (bergenin 0 μM (a–c), bergenin 2.5 μM (d–f), bergenin 5 μM (g–i)). Cells were fixed with 3% glutaraldehyde; discs were dried, then gold sputtered, and SEM imaging was performed on three randomly selected sites. These images are representative and portray the definitive morphology of spindle-like long cytoplasmic elongations or pseudopods (orange arrows) with narrow and flattened profiles. Scale bar = (a–c) 100 μm, (d–f) 20 μm, (g–i) 10 μm. SEM images were taken at 500×, 2000× and 5000× magnification after 24 h.

Cellular Attachment

The immunofluorescence imaging aimed to observe patterns of cellular attachment of HGF onto AEY-TZP discs in the presence of differing concentrations of bergenin. Both early and delayed attachment were observed and are shown in Figure 2. During early attachment, there was no observable difference in cellular morphology or cell numbers between the groups. However, in the delayed attachment phase, there was a noticeable increase in cell numbers in the presence of bergenin compared to untreated cells, independent of bergenin concentration. Additionally, the cells demonstrated a morphology similar to that identified in the SEM analysis, with random orientation. This indicated that bergenin had minimal to no negative effect on the attachment of HGF onto AEY-TZP discs.

Figure 2. Cellular attachment of HGF cells on AEY-TZP discs (10×). The results show an increase in cell numbers within the groups with CM containing bergenin. Cells were imaged on each sample using Flash Phalloidin™ red solution and epifluorescence microscopy. Representative images of the attachment are shown: early attachment (a–c) (bergenin 0 μM (a), bergenin 2.5 μM (b), bergenin 5 μM (c)); delayed attachment (d–f) (bergenin 0 μM (d), bergenin 2.5 μM (e), bergenin 5 μM (f)).

3.2. Cellular Proliferation

The resazurin assay was used to investigate the effects of bergenin on cellular proliferation. Results at 1, 3, 5 and 7 days of incubation are shown in Figure 3. Post hoc analyses demonstrated a statistically significant effect associated with time (F (1.561, 9.368) = 6.724, \( p < 0.05 \)) for both AEY-TZP discs and TCP, with cell numbers increasing over time. For TCP, there was a significant increase in cellular proliferation between day 1 and day 7 (\( p < 0.05 \)) for both the untreated cells and the cells treated with bergenin, with no significant difference between groups (Figure 3a). For cells grown on AEY-TZP discs, 2.5 μM bergenin-treated samples showed a decrease in proliferation from day 1 to day 5, whilst cells treated with 5 μM bergenin had a significant increase in proliferation from day 5 to day 7 (Figure 3b). However, there was no significant effect of differing bergenin concentrations over time (F (6, 18) = 1.368, \( p > 0.05 \)). This assay demonstrated that the tested concentrations of bergenin did not affect cellular proliferation at any time point sampled.

Figure 3. Cellular proliferation of HGF was observed by assessing the average percentage of resazurin reduction by cells cultured on TCP (a) and AEY-TZP (b) discs in wells containing 400 μL of CM and BCM. The assay used 10% \( v/v \) resazurin to determine cellular proliferation at days 1, 3, 5 and 7. Results are plotted as mean ± standard deviation. The results demonstrated that bergenin had minimal effect on the rate of cellular proliferation (\( p > 0.05 \)). * indicates a significant difference (\( p < 0.05 \)) between two groups.

3.3. Cellular Migration

The scratch wound healing assay aimed to observe any differences in healing when cells were cultured in the presence of bergenin following an initial scratch using a 200 μL pipette tip. Results are shown in Figure 4. The average initial scratch area (SA<sub>Initial</sub>) at 0 h was 0.5379 mm² for the AEY-TZP discs.

**Figure 4.** Cellular migration of HGF on AEY-TZP discs was observed by calculating the average healed percentage across time points. HGF cells were seeded onto AEY-TZP discs in wells containing 400 μL of CM and grown until confluent. Prior to the initial scratch, the BCM media was allocated into the wells. Following the scratch, 2% Flash Phalloidin™ red solution was used, and images were observed under epifluorescence microscopy. Representative images (10×) of the migration were chosen: bergenin 0 μM (a–c), bergenin 2.5 μM (d–f), bergenin 5 μM (g–i); 0 h (a,d,g), 6 h (b,e,h), 24 h (c,f,i). The images were analysed using ImageJ software, and the healed percentage was calculated and compared (j). The cells grown in 0 μM bergenin showed a significant increase in healed percentage at 6 h compared with 0 h but no significant increase from 0 h to 24 h. The presence of bergenin did not have a significant effect on healed percentage across the time points compared to 0 μM bergenin \((p > 0.05)\). ** indicates \(p < 0.01\); **** indicates \(p < 0.0001\).

At 6 h, all groups (B0, B2.5, B5) demonstrated a significant increase in healed percentage (24.8%, 41.9% and 32.7%, respectively; \(p < 0.05\)) compared to 0 h.
At 24 h, B2.5 (49.5%) and B5 (52%) showed a significant increase compared to 0 h; however, no group demonstrated a significant increase compared to 6 h. Comparing concentrations, at 6 h the healed percentage of B2.5 (41.9%) was significantly greater \((p < 0.05)\) than that of B0 (24.8%) on the AEY-TZP discs; no other significant differences between groups were observed at either timepoint (6 h, 24 h). The results demonstrate that, although a greater final healed percentage was noted at 24 h in the cells cultured with bergenin on the AEY-TZP discs, bergenin did not have a significant effect on the cellular migration of HGF cells.

4. Discussion

A biological seal of soft tissue around the transmucosal component of the dental implant is very important for long-term stability. The lack of a proper seal allows for the penetration of bacteria and the stimulation of an inflammatory response that can compromise the integrity of the implant. We hypothesised that the addition of a molecule that has anti-inflammatory as well as anti-microbial effects may enhance soft tissue healing and promote osseointegration. Our results provide insight into the effect of bergenin on the attachment of HGF on zirconia implant surfaces. The potential biological effects were assessed using SEM to assess morphology and attachment, a resazurin assay to assess subsequent proliferation, and a scratch wound healing assay to assess migration across the surface. The assays were performed to establish a safety profile of bergenin in the context of HGF cells and to identify its ability to promote healing. Our results demonstrate that bergenin-exposed cells exhibited normal morphological characteristics upon attachment and that exposure did not impede cellular proliferation, supporting our hypothesis that bergenin has no negative effects on these cells. It is important to note that successful wound healing is complicated and relies on many different physiological factors, such as inflammation, and thus cannot be determined by cellular activity alone. Adding bergenin, which has been shown to have anti-inflammatory actions in inhibiting pro-inflammatory cytokines, such as IL-6 and IL-8, as well as anti-microbial actions, may therefore foster an ideal environment for healing [32].

SEM analysis showed that HGFs exhibited similar morphology across the tested groups. They presented with a narrow, spindle-like shape with long cellular extensions, indicative of optimal migration and attachment capability [38]. The cells demonstrated morphology similar to that reported by Zizzari et al. (2013), who examined HGF attachment on machined and polished Y-TZP discs following 3 h, 24 h, 72 h and 7 days of incubation. They reported that at 24 h and beyond, both surfaces demonstrated HGFs with definitive morphology and long cytoplasmic elongations, consistent with what was observed on the acid-etched discs in the present study [39]. In the current study, acid-etching of the Y-TZP discs was performed as it is a common surface modification technique intended to enhance osseointegration [40].
Whilst surface roughness parameters were not recorded in this study, it has been noted across the literature that surface structure can affect cellular morphology and adhesion [41,42]. The current literature regarding the effect of zirconia surface roughness on adhesion is varied, but parallel grooves are known to promote elongated morphology and can orientate HGFs along the surface morphology [41,43]. Our study demonstrated that bergenin did not alter the initial cellular attachment of HGF cells or negatively affect their morphology. With the immunofluorescence imaging, a visually greater number of cells was noted in the bergenin-exposed groups compared to unexposed cells.

The resazurin assay for cellular proliferation revealed that bergenin also did not have a significant effect on the rate of cellular proliferation. To date, there is no literature exploring the effect of antiseptics and antibiotics on HGF proliferation in the context of zirconia surfaces; however, several studies demonstrate the negative impact on cellular proliferation of commonly used oral antiseptics (e.g., chlorhexidine (CHX), povidone-iodine and hydrogen peroxide) and of antibiotics commonly prescribed for the management of peri-implant infections [24–29]. Emmadi et al. (2008) noted the dose-dependent effect of oral antiseptics, with CHX (0.2%) being more cytotoxic than povidone-iodine (1%) [24]. Similarly, studies by Wilken et al. (2001) and Cline and Layman (1992) confirmed that direct exposure to 0.0025% to 0.12% CHX or 0.2% povidone-iodine was enough to inhibit growth [27,28]. The findings of the current study demonstrate that bergenin exposure does not affect the proliferation of HGF cells, suggesting a neutral effect on cell growth within the early stages of healing. Previous animal and clinical studies [44,45] showed that faster gap closure between soft and hard tissue around the abutment can facilitate healthy peri-mucosal tissue; therefore, an increase in HGF cell density could promote wound healing after insertion.

In the scratch wound healing assay on the AEY-TZP discs, both bergenin-treated groups showed a statistically significant increase in healed percentage at both time points relative to 0 h, whereas the untreated group reached significance only at 6 h. Whilst an overall greater increase in healed percentage compared to the control was noted at 24 h, it was not statistically significant. To explain the difference between the two data sets, further investigations into surface properties, material chemistry and surface characterisation are required. To the authors' best knowledge, how antiseptics or antibiotics commonly used for local drug delivery in the management of peri-implant infections affect the migration of HGF on zirconia surfaces remains unexplored. However, as noted previously, cellular proliferation was negatively affected by the cytotoxic nature of other common antiseptics, implying an inability of cells to migrate in their presence [24–26,29]. The current study was able to demonstrate an overall positive effect of bergenin on migration in the context of zirconia.
Although we have shown the non-cytotoxic nature of bergenin towards HGF in the context of zirconia surfaces, it is important to note that other materials with comparable qualities could also be considered suitable for implants. Polyetheretherketone (PEEK) has been shown to have mechanical qualities comparable to titanium but with a significant reduction in the ability of *Streptococcus oralis* to attach and form a biofilm [46]. Zirconia was not included in that study, but another study showed no significant difference between PEEK and zirconia in relation to biofilm-repelling activity [47]. They also showed that the cell viability of osteoblasts was comparable between PEEK and zirconia, so either material could be suitable as a dental implant material. Peng et al. (2021) did find surface roughness to be reduced in zirconia compared to PEEK, but this may be due to the choice of method for creating surface roughness [47]. Our studies have shown the surface roughness of zirconia to be equivalent to that of titanium when sandblasting is used [8].

To avoid the overuse of antibiotics and antiseptics with potentially toxic effects on cells, we have proposed that bergenin, as a novel molecule with anti-microbial and anti-inflammatory activities, be used as an adjunct therapy; however, it should be recognised that photodynamic therapy is a viable alternative. Photodynamic therapy has been shown to have antimicrobial efficacy and to remove biofilm to a similar degree as antibiotics, but its effect on cells in the surrounding tissue still needs to be evaluated [47,48]. Photodynamic therapy does generate reactive oxygen species that can lead to cellular destruction, so a direct comparison of photodynamic therapy and bergenin as adjunct therapies would be valuable. More importantly, each patient needs to be evaluated for treatment as an individual, and it is useful to have a variety of options so that treatment can be tailored to their specific needs.

The results of this study showed the non-cytotoxic nature of bergenin towards HGF in the context of zirconia surfaces. The morphology, attachment, proliferation and migration of bergenin-treated HGF cells were comparable to those of untreated cells. One key limitation of the present study was the use of a single cell type (HGF), as healing is a complex process involving a variety of cells and physiological mechanisms, such as inflammation. In addition, it is understood that surface modification can affect the cellular response of HGF; thus, identifying the surface characteristics could help establish a baseline and clarify the extent of the effect of surface modification on the cell response. The potential phase transformation of the zirconia material due to its exposure to the high concentration of acid used in the etching process also needs to be acknowledged and explored further.

5. Conclusions

Within the limitations of an *in vitro* study, exposure to bergenin at both 2.5 and 5 μM concentrations did not demonstrate a negative impact on the cellular characteristics and responses of human gingival fibroblasts, including morphology, attachment, proliferation and migration. Investigations of the proposed anti-inflammatory and anti-microbial activity in the context of the oral environment are essential before bergenin can be considered clinically for the management of peri-implant inflammatory conditions.

**Author Contributions:** Conceptualization, D.S.
and C.M.M.; methodology, J.X. and D.S.; formal analysis, J.X.; resources, D.S. and C.M.M.; data curation, J.X.; writing—original draft preparation, J.X.; writing—review and editing, J.X., D.S. and C.M.M.; visualization, J.X.; supervision, D.S. and C.M.M.; funding acquisition, D.S. and C.M.M. All authors have read and agreed to the published version of the manuscript. **Funding:** This project was partly funded by the Australian Dental Research Foundation’s Clark Family Research Award (2735-2020) and the James Cook University Honours research support fund. **Institutional Review Board Statement:** Not applicable. **Informed Consent Statement:** Not applicable. **Data Availability Statement:** The data presented in this study are available from the corresponding author upon reasonable request. **Acknowledgments:** The authors would like to thank Elsa Dos Santos Antunes, Mechanical Engineering, James Cook University, for providing the zirconia discs used in this study, and Phurpa Wangchuk, Lecturer in Biomedical Sciences and Molecular Biology, for his intellectual input in selecting the candidate molecule for *in vitro* evaluation. **Conflicts of Interest:** The authors declare no conflict of interest. **References** 1. Fillion, M.; Aubazac, D.; Bessadet, M.; Allegre, M.; Nicolas, E. The impact of implant treatment on oral health related quality of life in a private dental practice: A prospective cohort study. *Health Qual. Life Outcomes* **2013**, *11*, 197. [CrossRef] [PubMed] 2. Nicholson, J.W. Titanium Alloys for Dental Implants: A Review. *Prosthesis* **2020**, *2*, 100–116. [CrossRef] 3. Özcan, M.; Hämmerle, C. Titanium as a Reconstruction and Implant Material in Dentistry: Advantages and Pitfalls. *Materials* **2012**, *5*, 1528–1545. [CrossRef] 4. Ferrari, M.; Carrabba, M.; Vichi, A.; Goracci, C.; Cagidiaco, M.C. Influence of Abutment Color and Mucosal Thickness on Soft Tissue Color. *Int. J. Oral. Maxillofac. Implant.* **2017**, *32*, 393–399. [CrossRef] [PubMed] 5. Lops, D.; Stellini, E.; Sbricoli, L.; Cea, N.; Romeo, E.; Bressan, E. Influence of abutment material on peri-implant soft tissues in anterior areas with thin gingival biotype: A multicentric prospective study. *Clin. Oral. Implant. Res.* **2017**, *28*, 1263–1268. [CrossRef] 6. Kim, K.T.; Eo, M.Y.; Nguyen, T.T.H.; Kim, S.M. General review of titanium toxicity. *Int. J. Implant. Dent.* **2019**, *5*, 10. [CrossRef] 7. Grech, J.; Antunes, E. Zirconia in dental prosthetics: A literature review. *J. Mater. Res. Technol.* **2019**, *8*, 4956–4964. [CrossRef] 8. Munro, T.; Miller, C.M.; Antunes, E.; Sharma, D. Interactions of Osteoprogenitor Cells with a Novel Zirconia Implant Surface. *J. Funct. Biomater.* **2020**, *11*, 50. [CrossRef] 9. Tan, N.C.P.; Miller, C.M.; Antunes, E.; Sharma, D. Impact of physical decontamination methods on zirconia implant surface and subsequent bacterial adhesion: An in-vitro study. *Clin. Exp. Dent. Res.* **2022**, *8*, 313–321. [CrossRef] 10. Lacefield, W.R. Materials characteristics of uncoated/ceramic-coated implant materials. *Adv. Dent. Res.* **1999**, *13*, 21–26. [CrossRef] 11. Wenz, H.J.; Bartsch, J.; Wolfart, S.; Kern, M. Osseointegration and clinical success of zirconia dental implants: A systematic review. *Int. J. Prosthodont.* **2008**, *21*, 27–36. [PubMed] 12. Moon, I.S.; Berglundh, T.; Abrahamsson, I.; Linder, E.; Lindhe, J. The barrier between the keratinized mucosa and the dental implant. An experimental study in the dog. *J. Clin. Periodontol.* **1999**, *26*, 658–663.
[CrossRef] [PubMed] 13. Palaiologou, A.A.; Yukna, R.A.; Moses, R.; Lallier, T.E. Gingival, dermal, and periodontal ligament fibroblasts express different extracellular matrix receptors. *J. Periodontol.* **2001**, *72*, 798–807. [CrossRef] [PubMed] 14. Nhlapo, N.; Dzogbewu, T.C.; de Smidt, O. A systematic review on improving the biocompatibility of titanium implants using nanoparticles. *Manuf. Rev.* **2020**, *7*, 31. [CrossRef] 15. Nicolas, J.; Magli, S.; Rabbachin, L.; Sampaolesi, S.; Nicotra, F.; Russo, L. 3D Extracellular Matrix Mimics: Fundamental Concepts and Role of Materials Chemistry to Influence Stem Cell Fate. *Biomacromolecules* **2020**, *21*, 1968–1994. [CrossRef] [PubMed] 16. Tan, W.C.; Lang, N.P.; Schmidlin, K.; Zwahlen, M.; Pjetursson, B.E. The effect of different implant neck configurations on soft and hard tissue healing: A randomized-controlled clinical trial. *Clin. Oral. Implant. Res.* **2011**, *22*, 14–19. [CrossRef] 17. Zhao, B.; van der Mei, H.C.; Subbiahdoss, G.; de Vries, J.; Rustema-Abbing, M.; Kuijer, R.; Busscher, H.J.; Ren, Y. Soft tissue integration versus early biofilm formation on different dental implant materials. *Dent. Mater.* **2014**, *30*, 716–727. [CrossRef] 18. Caton, J.G.; Armitage, G.; Berglundh, T.; Chapple, I.L.C.; Jepsen, S.; Kornman, K.S.; Mealey, B.L.; Papapanou, P.N.; Sanz, M.; Tonetti, M.S. A new classification scheme for periodontal and peri-implant diseases and conditions—Introduction and key changes from the 1999 classification. *J. Clin. Periodontol.* **2018**, *45* (Suppl. S20), S1–S8. [CrossRef] 19. Berglundh, T.; Persson, L.; Klinge, B. A systematic review of the incidence of biological and technical complications in implant dentistry reported in prospective longitudinal studies of at least 5 years. *J. Clin. Periodontol.* **2002**, *29* (Suppl. S3), 197–212; discussion 232–233. [CrossRef] 20. Louropoulou, A.; Slot, D.E.; Van der Weijden, F. The effects of mechanical instruments on contaminated titanium dental implant surfaces: A systematic review. *Clin. Oral. Implant. Res.* **2014**, *25*, 1149–1160. [CrossRef] 21. Mahato, N.; Wu, X.; Wang, L. Management of peri-implantitis: A systematic review, 2010–2015. *Springerplus* **2016**, *5*, 105. [CrossRef] 22. Smeets, R.; Henningsen, A.; Jung, O.; Heiland, M.; Hammacher, C.; Stein, J.M. Definition, etiology, prevention and treatment of peri-implantitis—A review. *Head. Face Med.* **2014**, *10*, 34. [CrossRef] 23. Wang, W.C.; Lagoudis, M.; Yeh, C.W.; Paranhos, K.S. Management of peri-implantitis—A contemporary synopsis. *Singap. Dent. J.* **2017**, *38*, 8–16. [CrossRef] 24. Flemingson; Emmadi, P.; Ambalavanan, N.; Ramakrishnan, T.; Vijayalakshmi, R. Effect of three commercial mouth rinses on cultured human gingival fibroblast: An in vitro study. *Indian J. Dent. Res.* **2008**, *19*, 29–35. [CrossRef] 25. Wyganowska-Swiatkowska, M.; Kotwicka, M.; Urbaniak, P.; Nowak, A.; Skrzypczak-Jankun, E.; Jankun, J. Clinical implications of the growth-suppressive effects of chlorhexidine at low and high concentrations on human gingival fibroblasts and changes in morphology. *Int. J. Mol. Med.* **2016**, *37*, 1594–1600. [CrossRef] 26. Gutierrez-Venegas, G.; Guadarrama-Solis, A.; Munoz-Seca, C.; Arreguin-Cano, J.A. Hydrogen peroxide-induced apoptosis in human gingival fibroblasts. *Int. J. Clin. Exp. Pathol.* **2015**, *8*, 15563–15572. [PubMed] 27. Wilken, R.; Botha, S.J.; Grobler, A.; Germishuys, P.J.
In vitro cytotoxicity of chlorhexidine gluconate, benzydamine-HCl and povidone iodine mouthrinses on human gingival fibroblasts. *SADJ* **2001**, *56*, 455–460. [PubMed] 28. Cline, N.V.; Layman, D.L. The effects of chlorhexidine on the attachment and growth of cultured human periodontal cells. *J. Periodontol.* **1992**, *63*, 598–602. [CrossRef] [PubMed] 29. Lima, N.M.F.; Peruzzo, D.C.; Passador-Santos, F.; Saba-Chufji, E.; Martinez, E.F. In vitro evaluation of gingival fibroblasts proliferation and smear layer formation in pre-conditioned root surfaces. *RGO Rev. Gaúcha Odontol.* **2016**, *64*, 387–392. [CrossRef] 30. Adegbeye, O.; Field, M.A.; Kupz, A.; Pai, S.; Sharma, D.; Smout, M.J.; Wangchuk, P.; Wong, Y.; Loiseau, C. Natural-Product-Based Solutions for Tropical Infectious Diseases. *Clin. Microbiol. Rev.* **2021**, *34*, e0034820. [CrossRef] 31. Chen, M.; Ye, C.; Zhu, J.; Zhang, P.; Jiang, Y.; Lu, X.; Wu, H. Bergenin as a Novel Urate-Lowering Therapeutic Strategy for Hyperuricemia. *Front. Cell Dev. Biol.* **2020**, *8*, 703. [CrossRef] [PubMed] 32. Bajracharya, G.B. Diversity, pharmacology and synthesis of bergenin and its derivatives: Potential materials for therapeutic usages. *Fitoterapia* **2015**, *101*, 133–152. [CrossRef] [PubMed] 33. Liang, J.; Li, Y.; Liu, X.; Huang, Y.; Shen, Y.; Wang, J.; Liu, Z.; Zhao, Y. In vivo and in vitro antimalarial activity of bergenin. *Biomed. Rep.* **2014**, *2*, 260–264. [CrossRef] [PubMed] 34. Nunomura, R.C.S.; Oliveira, V.G.; Da Silva, S.L.; Nunomura, S.M. Characterization of bergenin in Endopleura uchi bark and its anti-inflammatory activity. *J. Braz. Chem. Soc.* **2009**, *20*, 1060–1064. [CrossRef] 35. Suh, K.S.; Chon, S.; Choi, E.M. Bergenin increases osteogenic differentiation and prevents methylglyoxal-induced cytotoxicity in MC3T3-E1 osteoblasts. *Cytotechnology* **2018**, *70*, 215–224. [CrossRef] 36. Sakurai, K.; Kurtz, A.; Stacey, G.; Sheldon, M.; Fujibuchi, W. First Proposal of Minimum Information About a Cellular Assay for Regenerative Medicine. *Stem Cells Transl. Med.* **2016**, *5*, 1345–1361. [CrossRef] 37. Faggion, C.M., Jr. Guidelines for reporting pre-clinical in vitro studies on dental materials. *J. Evid. Based Dent. Pract.* **2012**, *12*, 182–189. [CrossRef] 38. Xu, Z.; He, Y.; Zeng, X.; Zeng, X.; Huang, J.; Lin, X.; Chen, J. Enhanced Human Gingival Fibroblast Response and Reduced Porphyromonas gingivalis Adhesion with Titania Nanotubes. *Biomed. Res. Int.* **2020**, *2020*, 5651780. [CrossRef] 39. Zizzari, V.; Borelli, B.; De Colli, M.; Tumedei, M.; Di Iorio, D.; Zara, S.; Sorrentino, R.; Cataldi, A.; Gherlone, E.F.; Zarone, F.; et al. SEM evaluation of human gingival fibroblasts growth onto CAD/CAM zirconia and veneering ceramic for zirconia. *Ann. Stomatol.* **2013**, *4*, 244–249. 40. Liu, Y.; Rath, B.; Tingart, M.; Eschweiler, J. Role of implants surface modification in osseointegration: A systematic review. *J. Biomed. Mater. Res. A* **2020**, *108*, 470–484. [CrossRef] 41. Rausch, M.A.; Shokoohi-Tabrizi, H.; Wehner, C.; Pippenger, B.E.; Wagner, R.S.; Ulm, C.; Moritz, A.; Chen, J.; Andrukhov, O. Impact of Implant Surface Material and Microscale Roughness on the Initial Attachment and Proliferation of Primary Human Gingival Fibroblasts. *Biology* **2021**, *10*, 356. [CrossRef] [PubMed] 42. Rohr, N.; Zeller, B.; Matthisson, L.; Fischer, J. Surface structuring of zirconia to increase fibroblast viability. *Dent. Mater.* **2020**, *36*, 779–786. [CrossRef] [PubMed] 43. 
Pae, A.; Lee, H.; Kim, H.S.; Kwon, Y.D.; Woo, Y.H. Attachment and growth behaviour of human gingival fibroblasts on titanium and zirconia ceramic surfaces. *Biomed. Mater.* **2009**, *4*, 025005. [CrossRef] [PubMed] 44. Canullo, L.; Tallarico, M.; Penarrocha-Oltra, D.; Monje, A.; Wang, H.L.; Penarrocha-Diago, M. Implant Abutment Cleaning by Plasma of Argon: 5-Year Follow-Up of a Randomized Controlled Trial. *J. Periodontol.* **2016**, *87*, 434–442. [CrossRef] [PubMed] 45. García, B.; Camacho, F.; Penarrocha, D.; Tallarico, M.; Perez, S.; Canullo, L. Influence of plasma cleaning procedure on the interaction between soft tissue and abutments: A randomized controlled histologic study. *Clin. Oral. Implant. Res.* **2017**, *28*, 1269–1277. [CrossRef] 46. D’Ercole, S.; Cellini, L.; Pilato, S.; Di Lodovico, S.; Iezzi, G.; Piattelli, A.; Petrini, M. Material characterization and Streptococcus oralis adhesion on Polyetheretherketone (PEEK) and titanium surfaces used in implantology. *J. Mater. Sci. Mater. Med.* **2020**, *31*, 84. [CrossRef] 47. Peng, T.Y.; Lin, D.J.; Mine, Y.; Tasi, C.Y.; Li, P.J.; Shih, Y.H.; Chiu, K.C.; Wang, T.H.; Hsia, S.M.; Shieh, T.M. Biofilm Formation on the Surface of (Poly)Ether-Ether-Ketone and In Vitro Antimicrobial Efficacy of Photodynamic Therapy on Peri-Implant Mucositis. *Polymers* **2021**, *13*, 940. [CrossRef] 48. Azizi, B.; Budimir, A.; Bago, I.; Mehmeti, B.; Jakovljevic, S.; Kelmendi, J.; Stanko, A.P.; Gabric, D. Antimicrobial efficacy of photodynamic therapy and light-activated disinfection on contaminated zirconia implants: An in vitro study. *Photodiagnosis Photodyn. Ther.* **2018**, *21*, 328–333. [CrossRef] **Disclaimer/Publisher’s Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Integrating Behavioral Health and Primary Care: Consulting, Coordinating and Collaborating Among Professionals

Deborah J. Cohen, PhD, Melinda Davis, PhD, Bijal A. Balasubramanian, MBBS, PhD, Rose Gunn, MA, Jennifer Hall, MPH, Frank V. deGruy III, MD, MSFM, C. J. Peek, PhD, Larry A. Green, MD, Kurt C. Stange, MD, PhD, Carla Pallares, PhD, Sheldon Levy, PhD, MPH, David Pollack, MD, and Benjamin F. Miller, PsyD

Purpose: This paper sought to describe how clinicians from different backgrounds interact to deliver integrated behavioral and primary health care, and the contextual factors that shape such interactions.

Methods: This was a comparative case study in which a multidisciplinary team used an immersion-crystallization approach to analyze data from observations of practice operations, interviews with practice members, and implementation diaries. The observed practices were drawn from 2 studies: Advancing Care Together, a demonstration project of 11 practices located in Colorado; and the Integration Workforce Study, consisting of 8 practices located across the United States.

Results: Primary care and behavioral health clinicians used 3 interpersonal strategies to work together in integrated settings: consulting, coordinating, and collaborating (3Cs). Consulting occurred when clinicians sought advice, validated care plans, or corroborated perceptions of a patient’s needs with another professional. Coordinating involved 2 professionals working in parallel or in a back-and-forth fashion to achieve a common patient care goal, while delivering care separately. Collaborating involved 2 or more professionals interacting in real time to discuss a patient’s presenting symptoms, describe their views on treatment, and jointly develop a care plan. Collaborative behavior emerged when a patient’s care or situation was complex or novel. We identified contextual factors shaping use of the 3Cs, including: time to plan patient care, staffing, employing brief therapeutic approaches, proximity of clinical team members, and electronic health record documenting behavior.

Conclusion: Primary care and behavioral health clinicians, through their interactions, consult, coordinate, and collaborate with each other to solve patients’ problems. Organizations can create integrated care environments that support these collaborations, and health professions training programs should equip clinicians to execute all 3Cs routinely in practice. (J Am Board Fam Med 2015;28:S21–S31.)

Keywords: Behavioral Medicine; Communication; Delivery of Health Care, Integrated; Interdisciplinary Health Team

Compelling research evidence, health care reform initiatives, and clinician and patient needs are driving the integration of primary care and behavioral health services. Emotional, behavioral, and physical comorbidities are common and compound the risk for undesirable patient health outcomes.\textsuperscript{1–11} Regardless of implementation site, integration requires professionals of different backgrounds interacting to provide care, yet little research has focused on understanding the ways clinicians work together on an interpersonal level to deliver integrated care.
Given that patients suffer and health care costs increase when professionals are unable to interact to meet patients’ physical, emotional, and behavioral health needs, there is an urgency to understand how primary care and behavioral health clinicians work together.\textsuperscript{7,12,13}

We used the Institute of Medicine’s definition of primary care, which defines primary care as “the provision of integrated, accessible health care services by clinicians who are accountable for addressing a large majority of personal health care needs, developing a sustained partnership with patients, and practicing in the context of family and community.”\textsuperscript{14} A primary care clinician (PCP) refers to a person who delivers that care. Behavioral health refers to care that addresses emotional, behavioral, and substance use problems. Behavioral health clinicians (BHCs) include psychologists, psychiatrists, licensed clinical social workers, and master’s trained therapists.

Research on the phenomenon of interprofessional practice examines barriers and facilitators of how professionals work together.\textsuperscript{15–41} This research has identified critical ingredients that foster successful interprofessional practice, such as a willingness to communicate with other professionals,\textsuperscript{29–31,34} a willingness to bend traditions to solve problems,\textsuperscript{17,18,24} and shared goals, vision, and philosophy.\textsuperscript{16–18,35–41} Much of the research on interprofessional collaboration relies on conceptual work and self-report data (eg, interviews, surveys). Studying interprofessional interaction “in the wild” provides “higher quality, context-specific guidance to complement theoretical models”\textsuperscript{40} beyond what self-report data can offer. It also conveys a more nuanced understanding of the ways professionals actually interact in real-world practices, informing efforts to build effective integrated teams and enhancing education and training. This research was not tethered to any specific taxonomy or framework, but focused on actual observed interpersonal behaviors of individuals in diverse practices striving to integrate primary care and behavioral health. Our aim was to 1) identify how people work together during routine practice to meet patients’ needs, 2) describe these interactions, and 3) determine which contextual factors shape these professional interactions.

**Methods**

**Sample**

Nineteen U.S.-based primary care practices and community mental health centers participated in this study: 11 practices located in Colorado that participated in the Advancing Care Together program, and 8 practices located across the United States that participated in the Integration Workforce Study, which was designed to identify workforce needs for integrated care. For more details on the sample for this study, see Cohen et al,\textsuperscript{42} in this issue.

**Data Collection**

Data collection occurred between September 2011 and September 2014 and is described in detail elsewhere.\textsuperscript{43,44} Briefly, we conducted site visits at each practice, where we intensely observed a broad spectrum of clinical operations, both in and out of the examination room, and conducted 1-on-1, semistructured interviews with 2 to 17 practice members at each site. We spent more than 45 days in the field observing 160 patient visits: 98 with PCPs, 45 with BHCs, and 16 with patients who visited with both types of clinicians on the same day of service.
We conducted 90 interviews, providing approximately 54 hours of interview data, and we prepared more than 1070 pages of field notes to document site-visit observations.

**Data Management**

We prepared field notes from jottings after each day in the field. Interviews were audio recorded and professionally transcribed, and transcripts were reviewed for accuracy and completeness. All data were deidentified and entered into Atlas.ti (Version 7.0, Atlas.ti Scientific Software Development GmbH). The Institutional Review Boards at Oregon Health & Science University and the University of Colorado Denver approved the study protocol.

**Analysis**

We used a grounded theory approach to analyze data, following a 3-stage analysis process informed by the work of Miller and Crabtree\textsuperscript{45} and the immersion-crystallization approach described by Borkan.\textsuperscript{46} Grounded theory is an approach to analysis whereby researchers allow findings to emerge from data analysis rather than impose a priori theories or categories during the data-analysis process. Immersion-crystallization is a process whereby researchers saturate themselves in the data (immersion) to identify findings (crystallization). In the first immersion cycle, our multidisciplinary team read field notes and listened to interviews together to identify and tag segments of text relevant to BHCs and PCPs working together. Once we established a stable method for tagging text, we divided the remaining data, meeting regularly to review data and discuss findings. We then engaged in a second immersion-crystallization cycle, analyzing tagged data as a group to identify and empirically define the ways professionals interacted to deliver integrated care. After consulting, coordinating, and collaborating behaviors were defined, we examined instances when these interpersonal interactions occurred and where they were absent, comparing these instances to identify contextual factors that shaped the interactions. In our third immersion-crystallization cycle, we reviewed preliminary findings with a larger team of experts to refine the findings and make connections with the literature.

**Results**

The 19 practices varied in practice type, size and ownership, location, years in practice, and years integrating behavioral health and primary care.\textsuperscript{42} All practices had colocated behavioral health and primary care, although referral out for specialty services was common. Six organizations engaged in partnerships with another organization to bring BHCs and PCPs together; others hired the needed professionals. Practices also varied in the proximity and shared space of behavioral health and primary care,\textsuperscript{47} and in the strategies used to identify patient need and deliver integrated patient care.\textsuperscript{42} From this widely varied group of practices, we observed 3 modes of interaction between PCPs and BHCs: consulting, coordinating, and collaborating (3Cs). Below, we provide empirical examples to distinguish between these modes, describe each type of interaction, and identify contextual factors that shape these interactions.

**Consulting**

Consulting is defined as a care team member with specific professional expertise or experience seeking advice or input from another clinician with different professional expertise or experience in the context of providing patient care. Consulting typically began with 1 person contacting another, either virtually or in person.
The advice-seeker offered a brief description of relevant aspects of the patient’s case (eg, age, health conditions/illnesses, history of illness, medications) followed by a question. The consulted clinician might seek additional information before answering:

The PCP comes out of a patient room and asks the female obstetrician if there are any antiemetics that this patient can take—she’s on a number of psychiatric medications and is having uncontrollable nausea. The obstetrician is not sure; she wonders if the psychiatrist is available for a quick consult. The doctor says she’ll try to get the psychiatrist on the phone. The psychiatrist does not answer, so the doctor leaves a message with a quick summary of the problem—she needs to know about an antiemetic to use in a woman on antipsychotics who cannot keep her medications down because of nausea. The doctor asks for a return call. About 5 minutes after the initial call, the psychiatrist calls back regarding the patient question. They review the medication choices and decide to go with Phenergan (promethazine), because metoclopramide might have a negative interaction with 1 of the antipsychotics (Field Notes, Practice 2).

**Coordinating**

Coordinating involves 2 or more clinicians working in parallel or in a back-and-forth fashion to care for the same patient, delivering care toward the same goal, yet independently of the other clinician. Anyone on the care team could trigger the need for a PCP or BHC, and various strategies were employed (eg, phone, walkie-talkie, pager, walking) to find the needed person. In addition, coordinating primary and behavioral health care during the same day of service required careful management of time and of the flow of patients and clinicians.\textsuperscript{48} The example below demonstrates how coordinating occurred:

PCP1 comes over to the Medical Assistant (MA) station and asks the BHC to join a patient visit. The BHC agrees. The PCP motions toward the examination room. As they start to walk toward the examination room, PCP2 comes over to speak with the BHC. PCP2 explains to the BHC that she has a patient she saw a long time ago, and now the patient has returned to see her. The patient has a history of depression. PCP2 has tried the patient on multiple medications. The patient is not suicidal. PCP2 says the real issue seems to be anxiety. Could you introduce yourself, give her some information about self-care and relaxation? The BHC agrees. She tells the doctor she will see the patient after this visit (she is going to see a different patient with PCP1). She asks what examination room the patient is in, and PCP2 tells her the patient is in room 14. PCP2 leaves, and the BHC and PCP1 resume walking to the examination room and see the patient together… Afterward, the BHC finds the other patient she needs to see in examination room 14 (Field Notes, Practice 4).

Several key steps in coordinating primary and behavioral health care for patients were highlighted in this example, including: 1) locating the needed clinician, 2) rapidly briefing that clinician about a patient’s needs or having the coordinating clinician determine the patient’s need, 3) negotiating a time to meet with the patient, 4) meeting with the patient and identifying a treatment plan (not shown above), and 5) rapidly debriefing after the clinician met with the patient to share what was learned and to discuss next steps (not shown above).
We observed coordinating happening on the same day via a warm handoff between professionals in the same office, as well as through telemedicine exchanges. Briefing and debriefing, when clinicians inform each other of the steps to be taken to help the patient, were important steps in coordinating and could happen through any combination of verbal exchange, documentation in medical record notes, or a secure messaging system. In the case above, the PCP briefed the BHC by offering her assessment (ie, depressed patient, not suicidal, not responding to medication because her main problem is anxiety). The PCP also suggested treatments the BHC might offer the patient (ie, educational material, help with relaxation). Debriefing occurred after professionals met with the patient to discuss next steps, and involved BHCs rapidly offering an assessment, reporting information relevant to treatment decisions, and offering a treatment plan. Debriefing informed the next steps, including the actions of others on the care team.

**Collaborating**

Collaborating involves BHCs and PCPs working to jointly make sense of patients’ needs and, together, identifying a treatment plan to best address those needs. Sometimes the PCP and BHC accomplished this by talking together with the patient to discover those needs. Making sense of the case together, what Bloch refers to as the “dual optic,”\textsuperscript{49} distinguishes collaborating from coordinating. We observed clinicians collaborating when caring for patients with complex needs. In the case below, the patient had multiple concerns: trouble sleeping, crying for no reason, and drinking alcohol to sleep. The PCP and patient agreed to bring in the BHC:

The PCP finds the BHC and says she needs help. She describes the patient—trouble sleeping, depression … but he’s also drinking alcohol and has a history of drug use. His main complaint is that the sleep medications are not working. There’s also an alcohol smell, and he’s crying. The doctor leaves. The BHC reviews the chart notes. We go in the examination room and the BHC greets the patient and says that the doctor asked me to help a bit … The BHC says that it sounds like he’s suffering a lot. The man starts to cry. The man eventually says that the medications are not working and that he has to drink to knock himself out. He’s not getting any sleep and it is horrible. The BHC asks a series of social and diagnostic questions … **The BHC says that she wants to put her head together with the doctor to see what they can do to help the patient.** Do you think you’d be willing to come in and talk with a BHC since it has helped in the past? The patient says, yes, get me back together. The BHC finds the doctor. The BHC points out that much of the patient’s motivation is focused on sleeping better. When the patient comes back she might start to work with him on his sobriety. **They talk about medications the patient is on and how there are 2 prescriptions for antidepressants. Together, they identify a plan that includes the doctor prescribing a new sleeping medication that also has mood-stabilizing characteristics.** Later, she tells me the patient was positive when she went back in and seemed thankful (Field Notes, Practice 2; emphasis added).

Together, the PCP and BHC made sense of this patient’s situation and arrived at a treatment plan to address the patient’s problems with sleep and mood.
The next example shows that collaboration can also manifest between 2 BHCs: BHC1 asks how’s he doing, referring to a patient BHC2 just saw. BHC2 says he’s OK. He does not want to talk much about what is happening. They discuss if having 1 of their physician’s assistants leave was a trigger for this patient’s relapse—it happened around the same time and the patient’s wife thinks it was. BHC1 asks if the patient is interested in day treatment? BHC2 says, yes, I just called. BHC2 asks how long are they in day treatment? BHC1 says 2 to 3 months, and says he’ll need something after, too. They discuss how this patient does well in day treatment and then struggles when it ends—not having the order is hard on him. BHC2 comments that they are going to try to start early in working on that transition so the patient has some structure early and does not decompensate … and quickly go back to using and it is a quick spiral after that: using, dealing, reckless disregard for life/hopelessness. They look through the patient’s note and realize that after his incarceration he was eligible for residential treatment. They wonder if this is still possible. They will run this by the patient as an option (Field Notes, Practice 5). We most often observed collaboration occurring in situations in which patient care decision making was complex. **Contextual Factors Shaping the 3Cs** Factors affecting clinician-to-clinician interactions (ie, 3Cs) while providing integrated care include availability of structured and unstructured meetings to plan patient care (eg, preclinic huddles, complex care meetings), staffing patterns and employing brief approaches to therapy, location of clinicians in close proximity to each other, and electronic health record (EHR) documenting practices. These contextual factors are described in more detail in Table 1 and below. **Time to Plan Patient Care** Consulting, coordinating, and collaborating happened during structured meetings as well as during the more fluid flow of clinical care. The preclinic huddle, when clinicians and the larger care team gather before the first patient visit to review the schedule and to anticipate and plan for patients’ needs, is 1 example of a structured, routinized way to foster the 3Cs. Complex care meetings, formal meetings to identify how best to address the needs of the practice’s most complex patients, are another. The example below describes how a patient’s care was managed during a planned huddle. The PCP was supporting a nurse practitioner (NP) in the field who was scheduled to see a patient with a wound (and also diagnosed with borderline personality disorder). Before this excerpt, they discussed the wound culture and which antibiotics to prescribe: Table 1. 
Contextual Factors that Shape Coordinating, Collaborating and Consulting | Factors | Consulting: Advice Seeking/Giving | Coordinating: Separate, but Aligned Care Delivery | Collaborating: Shared Sense Making, Decision Making | |---------|----------------------------------|-------------------------------------------------|---------------------------------------------------| | Patient | Problem/situation definable | | Problem/situation is complicated | | | Identified as needing expertise of another provider | | Identified as needing professionals from different backgrounds to make sense of problem/treatment | | Clinician | Clinician with expertise to answer patient care question | Clinician with expertise carries out next steps/treatment | Clinicians work together to clarify patients’ needs | | | Clinicians from different disciplines work as a team, conduct care team huddles and meet to discuss clinical care, close proximity of team, flexible schedule/time for warm handoffs. | | | | Practice | Clinicians from different disciplines (often colocated) are rapidly and reliably accessible to answer questions | | | | System | Support for communication between separate behavioral and medical practices | Support for synchronizing (behavioral and medical) care over time | Support for shared learning about and with the patient. | | Problem | Discrete problem | Definable, discrete problem | Complex, hard-to-define problem that seems intractable to treatment and/or linked to medical or social problem | | | Little uncertainty | Moderate uncertainty or routine care need | Professionals need longer dialogue to clarify best strategy to deliver and engage patient in treatment | | | Information, when provided, allows advice seeker to act independently | Professional has expertise to address care need Quick discussion positions professionals to act in loosely connected way Engages patient in treatment | | The doctor explains this patient has borderline personality disorder. This means the patient will push people away, saying no you do not like me, and at the same time act like they better like him and take care of him. She says you should start every sentence with: “It must be very hard to . . .” Then she says to the NP: you know what medically needs to be managed, but you need to manage his emotions. It is hard. Borderlines push everyone’s buttons. The doctor says the struggle will be getting this patient to the wound center. He just needs to know you still care for him. The pharmacist offers to take a look at the order. She asks the NP to route it to her. She reviews and discusses it with the NP (Field Notes, Practice 5). The doctor helped shape how the NP viewed the patient (collaborating), acknowledging her emotional reactions to the patient and offering strategies for working with this patient (consulting). In addition, the pharmacist offered to review the patient’s medication order (consulting) to make sure everything was correct. During active patient care times, the ability to access other clinicians on the care team and having brief unstructured meetings facilitated consulting, coordinating, or collaborating. In the example below, the PCP found the BHC and interrupted her work to engage her help: The doctor knocks on the BHC’s door and says he needs her help. He has a patient in for acupuncture with bruises on her legs. She says her boyfriend pushed her down. He asked if she feels safe at home and she said no ... 
The doctor says she’s in a room on the third floor and asks if the BHC has time to see her. The BHC is looking at her schedule and says I will put her in at 2:00 PM. The doctor says I do not think she’ll come; she says she has a hard time talking about it. The BHC says, I bet. I will come up to see her. The doctor says thank you so much. The BHC says, give me a few minutes (Field Notes, Practice 1).

Two factors made this coordination possible. First, care team members knew where and how to find each other, as shown in the case above, and had reliable ways to reach a clinician (eg, instant messaging, pager, phone, walkie-talkie, physically walking to where she or he is) when they were not in sight. Second, access was enhanced by rules that allowed professionals to interrupt each other.

**Staffing and Brief-Targeted Therapy**

Several factors facilitated 3Cs behaviors: staffing appropriately to meet patients’ needs, flexible schedules that accommodate warm handoffs, a clear path for managing patients with longer-term behavioral health needs, BHCs delivering brief, problem-focused therapy (rather than traditional therapy), and colocation. In the example below, the BHC conducted 50-minute counseling sessions and was not located in the clinic where this patient was seen. PCPs gave patients a paper-based referral to see the BHC, and patients scheduled appointments at the front desk. These issues combined to make it difficult for PCPs to engage BHCs in real time, and consulting, coordinating, and collaborating behaviors were limited and happened only when crises arose. For more on staffing and scheduling see Davis et al, this issue.\textsuperscript{48}

Patient has anxiety … highly motivated, engaged, in college, lost funding … I ask the doctor how the appointment ended and he says he put on the patient’s blue sheet that he should make an appointment with a counselor and psychiatric nurse practitioner. He’s going to have some blood work done and then schedule an appointment to see the counselor next week. The patient was very open to talking to a counselor, and unfortunately he left the clinic today without seeing the BHC (Field Notes, Practice 7).

**Sharing Information and Space: Creating Closeness**

Close physical proximity of clinicians fostered consulting, coordinating, and collaborating, just as working at a distance (eg, on separate floors, in distant pods) inhibited these behaviors. Although documentation provided important information about prior patient assessments and treatments, the ability to communicate synchronously was critical to initiating the coordinating process, and this communication was fostered by close proximity of professionals. For more on this, see Gunn et al,\textsuperscript{47} in this issue.

**Discussion**

This article used direct observations from 19 practices striving for comprehensive primary care to discover how the integration of behavioral health care and primary care can be accomplished in diverse, real-world practices, in ways tailored to patient need and to practice/clinician situations. Our study builds on a continuing scientific effort to illuminate the details of how professionals work together in primary care by conducting basic observational research focused on integrating care.\textsuperscript{50} The observed patterns resolved into 3 distinct types: consulting, coordinating, and collaborating—the 3Cs of working together.
These 3 modes do not rank in terms of desirability, appropriateness, or quality—under certain circumstances any 1 of them may be the “best” mode of working together. The professionals we observed were all working in colocated environments, and interaction among professionals was made possible when partners had established modes of communication with one another (eg, mail, pager, email, telephone, video conferencing, or in person).\textsuperscript{51–56} With even the most basic means of communication, certain forms of consultation were possible. Coordination and collaboration emerged when access to one another was expanded to include close physical proximity,\textsuperscript{47} access via compatible schedules and workflows, explicit rules regarding interruptions and timing,\textsuperscript{48} and structures supporting communication and information sharing (ie, shared EHR, team huddles, complex case meetings). This finding may help organization leaders design and balance 1) space and infrastructure, 2) workflows and protocols, and 3) the process by which professionals are introduced to each other and trained together in collaborative practice.

Frameworks for clinician interaction with names sounding similar to the 3Cs appear in the literature, as shown in Table 2. Our work complements these conceptual models and definitions by offering a distillation of observations of real clinicians seeing patients, and by identifying what goes on between people in practice when working on specific clinical cases. Our observations were at the interpersonal level of professionals interacting in real-world practices, along with the features of organizational design that affected those interactions. Similar concepts are highlighted in other models, but with direct practice observations it may be possible to understand more effectively how and under which circumstances professionals will work together.

Historically, consultation and coordination have been the default modes of interaction, the goal of institutional or organizational arrangements that support only minimal communication between professionals beyond mere referral.\textsuperscript{57,58} Field observations clarify consulting and coordinating behaviors while clearly showing that the closer, more interdependent collaboration behaviors are not merely an incremental augmentation of consultation and coordination (ie, the BHC and PCP can still work from within their original or “native” perspectives, tools, language, know-how, and culture as they work on a task in front of them). In our observations, collaborating involved establishing a shared understanding regarding illness, health, care, and teamwork across disciplines, rather than separate clinicians doing separate things, even if consultative and coordinated. Elements of “good clinicianship” are comparable to the elements of “good musicianship” that unite “players” in common sensibilities beyond their “chosen instrument” by their shared appreciation of music and how to harmonize together.\textsuperscript{59} The challenge is to organize a teachable common culture of good clinicianship for PCPs and BHCs working together to deliver comprehensive, whole-person care. This implies good working relationships among clinicians, not just a set of techniques applied without connection to each other.
It requires mutual trust and a willingness of PCPs and BHCs to share care, and to share the connection to patients, which is so important to patients and to the providers who seek and are sustained by these relationships.

Although this observational study provides detailed insight into the ways BHCs and PCPs interact to deliver integrated care, it is not without limitations. We were able to identify with confidence 3 ways BHCs and PCPs interact; however, we are unable to link these interpersonal behaviors to practice performance, patient experience, or costs. Findings from this study can inform future research to study such associations and outcomes. In addition, evidence suggests that experienced partners improve their clinical skills by learning from each other; not only do they anticipate what their colleague would likely recommend or do, but they sometimes acquire the confidence and skill to do it themselves, or to do it with a less-intensive mode of working together.\textsuperscript{59,60} This suggests that whether partners are consulting, coordinating, or collaborating with each other may have a developmental component. However, additional research is needed to explore this relationship, given that it was outside the scope of this study.

**Conclusion**

PCPs and BHCs consult, coordinate, and collaborate with each other as they work together to deliver integrated care. These 3 modes of working together are not a hierarchy of sophistication or desirability; each is critically important in particular circumstances. Organizations can create integrated care environments that support the 3Cs, and health professions’ training programs should equip clinicians to execute all 3 routinely in practice. Ideally, this would happen in internships and residencies where professionals of different backgrounds can be trained together, and then be supported in their subsequent work in practice environments that reinforce working as a health care team.

The authors are grateful to the participating practices and their patients. The authors thank Leah Baruch, MD, for her assistance with data collection on the IWS study and David Cameron for his assistance with data analysis. The authors are also grateful for editing and publication assistance from Ms. LeNeva Spires.

**References**

1. Whooley MA. Depression and cardiovascular disease: Healing the broken-hearted. JAMA 2006;295:2874–81. 2. Piette JD, Wagner TH, Potter MB, Schillinger D. Health insurance status, cost-related medication underuse, and outcomes among diabetes patients in three systems of care. Med Care 2004;42:102–9. 3. Simon EP, Showers N, Blumenfield S, Holden G, Wu X. Delivery of home care services after discharge: What really happens. Health Soc Work 1995;20:5–14. 4. Katon WJ. Clinical and health services relationships between major depression, depressive symptoms, and general medical illness. Biol Psychiatry 2003;54:216–26. 5. Katon WJ. The Institute of Medicine “Chasm” report: Implications for depression collaborative care models. Gen Hosp Psychiatry 2003;25:222–9. 6. Katon W, Russo J. Somatic symptoms and depression. J Fam Pract 1989;29:65–9. 7. Petterson SM, Phillips RL Jr, Bazemore AW, Dodoo MS, Zhang X, Green LA. Why there must be room for mental health in the medical home. Am Fam Physician 2008;77(6):757. 8. Hertz JE, Anschutz CA. Relationships among perceived enactment of autonomy, self-care, and holistic health in community-dwelling older adults. J Holist Nurs 2002;20:166–86. 9. Ng TP, Niti M, Tan WC, Cao Z, Ong KC, Eng P.
Depressive symptoms and chronic obstructive pulmonary disease: effect on mortality, hospital readmission, symptom burden, functional status, and quality of life. Arch Intern Med 2007;167:60–7. 10. Loeppke R, Taitel M, Haufle V, Parry T, Kessler RC, Jinnett K. Health and productivity as a business strategy: a multiemployer study. J Occup Environ Med 2009;51:411–28. 11. Lin EH, Heckbert SR, Rutter CM, et al. Depression and increased mortality in diabetes: Unexpected causes of death. Ann Fam Med 2009;7:414–21. 12. Melek S, Norris D. Chronic conditions and comorbid psychological disorders. Seattle, WA: Milliman, 2008. 13. Petterson S, Miller BF, Payne-Murphy JC, Phillips RL. Mental health treatment in the primary care setting: Patterns and pathways. Fam Syst Health 2014;32:157–66. 14. Institute of Medicine. Part 3: The new definition and an explanation of terms. Defining primary care: An interim report. Washington, DC: The National Academies Press, 1994. 15. Apker J, Propp KM, Zabava Ford WS, Hofmeister N. Collaboration, credibility, compassion, and coordination: professional nurse communication skill sets in health care team interactions. J Prof Nurs 2006;22:180–9. 16. Alt-White AC, Charns M, Strayer R. Personal, organizational and managerial factors related to nurse–physician collaboration. Nurs Adm Q 1983;8:8–18. 17. Hartgerink JM, Cramm JM, Bakker TJ, van Eijsden RA, Mackenbach JP, Nieboer AP. The importance of relational coordination for integrated care delivery to older patients in the hospital. J Nurs Manag 2014;22:248–56. 18. Hartgerink JM, Cramm JM, Bakker TJ, van Eijsden AM, Mackenbach JP, Nieboer AP. The importance of multidisciplinary teamwork and team climate for relational coordination among teams delivering care to older patients. J Adv Nurs 2014;70:791–9. 19. Pfaff K, Baxter P, Jack S, Ploeg J. An integrative review of the factors influencing new graduate nurse engagement in interprofessional collaboration. J Adv Nurs 2014;70:4–20. 20. Pfaff KA, Baxter PE, Ploeg J, Jack SM. A mixed methods exploration of the team and organizational factors that may predict new graduate nurse engagement. 21. Gaboury I, Bujold M, Boon H, Moher D. Interprofessional collaboration within Canadian integrative healthcare clinics: Key components. Soc Sci Med 2009;69:707–15. 22. Gaboury I, Lapierre LM, Boon H, Moher D. Interprofessional collaboration within integrative healthcare clinics through the lens of the relationship-centered care model. J Interprof Care 2011;25:124–30. 23. Keshet Y, Popper-Giveon A. Integrative health care in Israel and traditional Arab herbal medicine: when health care interfaces with culture and politics. Med Anthropol Q 2013;27:368–84. 24. Hollenberg D. Uncharted ground: patterns of professional interaction among complementary/alternative and biomedical practitioners in integrative health care settings. Soc Sci Med 2006;62:731–44. 25. Farrell B, Pottie K, Woodend K, et al. Shifts in expectations: evaluating physicians’ perceptions as pharmacists become integrated into family practice. J Interprof Care 2010;24:80–9. 26. Maxwell L, Odukoya OK, Stone JA, Chui MA. Using a conflict conceptual framework to describe challenges to coordinated patient care from the physicians’ and pharmacists’ perspective. Res Social Adm Pharm 2014;10:824–36. 27. Rubio-Valera M, Jove AM, Hughes CM, Guillen-Sola M, Rovira M, Fernandez A. Factors affecting collaboration between general practitioners and community pharmacists: a qualitative study. BMC Health Serv Res 2012;12:188. 28. Eve JD.
Sustainable practice: how practice development frameworks can influence team work, team culture and philosophy of practice. J Nurs Manag 2004;12:124–30. 29. D’Amour D, Goulet L, Labadie JF, Martin-Rodriguez LS, Pineault R. A model and typology of collaboration between professionals in healthcare organizations. BMC Health Serv Res 2008;8:188. 30. D’Amour D, Ferrada-Videla M, San Martin Rodriguez L, Beaulieu MD. The conceptual basis for interprofessional collaboration: Core concepts and theoretical frameworks. J Interprof Care 2005;19(Suppl 1):116–31. 31. Gask L. Overt and covert barriers to the integration of primary and specialist mental health care. Soc Sci Med 2005;61:1785–94. 32. Knowles P. Collaborative communication between psychologists and primary care providers. J Clin Psychol Med Settings 2009;16:72–6. 33. Fredheim T, Danbolt LJ, Haavet OR, Kjonsberg K, Lien L. Collaboration between general practitioners and mental health care professionals: a qualitative study. Int J Ment Health Syst 2011;5(1):13. 34. San Martin-Rodriguez L, Beaulieu MD, D’Amour D, Ferrada-Videla M. The determinants of successful collaboration: a review of theoretical and empirical studies. J Interprof Care 2005;19(Suppl 1):132–47. 35. Dawson S. Interprofessional working: communication, collaboration… perspiration! Int J Palliat Nurs 2007;13(10):502–5. 36. Dow AW, DiazGranados D, Mazmanian PE, Retchin SM. Applying organizational science to health care: a framework for collaborative practice. Acad Med 2013;88:952–7. 37. Lorenz AD, Mauksch LB, Gawinski BA. Models of collaboration. Prim Care 1999;26:401–10. 38. Jones A, Jones D. Improving teamwork, trust and safety: an ethnographic study of an interprofessional initiative. J Interprof Care 2011;25:175–81. 39. Reeves S. Ideas for the development of the interprofessional field. J Interprof Care 2010;24:217–9. 40. Salas E, Cooke NJ, Rosen MA. On teams, teamwork, and team performance: discoveries and developments. Hum Factors 2008;50:540–7. 41. Casimiro L, Hall P. Barriers and enablers to interprofessional collaboration in health care: Research report. Champlain Region: Academic Health Council, 2011. 42. Cohen DJ, Balasubramanian BA, Davis M, et al. Understanding care integration from the ground up: five organizing constructs that shape integrated practices. J Am Board Fam Med 2015;28:S7–S20. 43. Davis M, Balasubramanian BA, Waller E, Miller BF, Green LA, Cohen DJ. Integrating behavioral and physical health care in the real world: early lessons from advancing care together. J Am Board Fam Med 2013;26:588–602. 44. Cohen DJ, Davis MM, Hall JD, Gilchrist EC, Miller BF. A guidebook of professional practices for behavioral health and primary care integration: Observations from exemplary sites. Rockville, MD: Agency for Healthcare Research and Quality, 2015. 45. Miller WL, Crabtree BF. The dance of interpretation. In: Crabtree BF, Miller WL, eds. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage Publications, 1999;127–43. 46. Borkan J. Immersion/crystallization. In: Crabtree BF, Miller WL, eds. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage Publications, 1999. 47. Gunn R, Davis M, Hall J, et al. Designing clinical space for the delivery of integrated behavioral health and primary care. J Am Board Fam Med 2015;28:S52–S62. 48. Davis M, et al. Clinician staffing, scheduling, and engagement strategies among primary care practices delivering integrated care. J Am Board Fam Med 2015;28:S32–S40. 49. Bloch DA. The dual optic: Researchers and therapists. Fam Syst Med 1989;7:115–9. 50. Stange KC, Zyzanski SJ, Jaén CR, et al.
Illuminating the ‘black box’. A description of 4454 patient visits to 138 family physicians. J Fam Pract 1998;46:377–89. 51. Miller BF, Petterson S, Brown Levey SM, Payne-Murphy JC, Moore M, Bazemore A. Primary care, behavioral health, provider colocation, and rurality. J Am Board Fam Med 2014;27:367–74. 52. Miller BF, Petterson S, Burke BT, Phillips RL Jr, Green LA. Proximity of providers: Colocating behavioral health and primary care and the prospects for an integrated workforce. Am Psychol 2014;69:443–51. 53. Williams J, Palmes G, Klinepeter K, Pulley A, Foy JM. Referral by pediatricians of children with behavioral health disorders. Clin Pediatr 2005;44:343–9. 54. Kolko DJ, Campo J, Kilbourne AM, Hart J, Sakolsky D, Wisniewski S. Collaborative care outcomes for pediatric behavioral health problems: A cluster randomized trial. Pediatrics 2014;133(4):e981–92. 55. Blount A, Bayona J. Toward a system of integrated primary care. Fam Syst Med 1994;12:171–82. 56. Blount A, DeGirolamo S, Mariani K. Training the collaborative care practitioners of the future. Fam Syst Health 2006;24:111–9. 57. Kessler R, Miller BF, Kelly M, et al. Mental health, substance abuse, and health behavior services in patient-centered medical homes. J Am Board Fam Med 2014;27:637–44. 58. Massa I, Miller BF, Kessler R. Collaboration between NCQA patient-centered medical homes and specialty behavioral health and medical services. Transl Behav Med 2012;1:1–5. 59. Peek CJ, Heinrich RL. Building a collaborative healthcare organization: From idea to invention to innovation. Fam Syst Med 1995;13(3–4):327–42. 60. Mitchell P, Wynia M, Golden R, et al. Core principles & values of effective team-based health care. Washington, DC: Institute of Medicine, 2012. 61. Blount A. Integrated primary care: Organizing the evidence. Fam Syst Health 2003;21:121–33. 62. Glouberman S, Zimmerman B. Complicated and complex systems: What would successful reform of medicare look like? Discussion paper no. 8. Commission on the Future of Health Care in Canada, 2002. 63. Collins C, Hewson DL, Munger R, Wade T. Evolving models of behavioral health integration in primary care. New York, NY: Milbank Memorial Fund, 2010. 64. Doherty WJ, McDaniel SH, Baird MA. Five levels of primary care/behavioral healthcare collaboration. Behav Healthc Tomorrow 1996;5:25–7. 65. A standard framework for levels of integrated healthcare. SAMHSA-HRSA Center for Integrated Health Solutions. National Council for Community Behavioral Healthcare, 2013. 66. Peek CJ and the National Integration Academy Council. Lexicon for behavioral health and primary care integration: Concepts and definitions developed by expert consensus. AHRQ Publication No. 13-IP001-EF. Rockville, MD: Agency for Healthcare Research and Quality, 2013. 67. Katon WJ, Lin EH, Von Korff M, et al. Collaborative care for patients with depression and chronic illnesses. N Engl J Med 2010;363:2611–20. 68. Unützer J, Katon W, Callahan CM, et al. Collaborative care management of late-life depression in the primary care setting: a randomized controlled trial. JAMA 2002;288:2836–45.
Efficient minimum spanning tree construction without Delaunay triangulation Hai Zhou\textsuperscript{a,*}, Narendra Shenoy\textsuperscript{b}, William Nicholls\textsuperscript{b} \textsuperscript{a} Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208, USA \textsuperscript{b} Advanced Technology Group, Synopsys, Inc., Mountain View, CA 94043, USA Received 7 April 2000 Communicated by F. Dehne Abstract Given $n$ points in a plane, a minimum spanning tree is a set of edges which connects all the points and has a minimum total length. A naive approach enumerates edges on all pairs of points and takes at least $\Omega(n^2)$ time. More efficient approaches find a minimum spanning tree only among edges in the Delaunay triangulation of the points. However, Delaunay triangulation is not well defined in rectilinear distance. In this paper, we first establish a framework for minimum spanning tree construction which is based on a general concept of spanning graphs. A spanning graph is any graph that is guaranteed to contain a minimum spanning tree; it need not be a Delaunay triangulation. Based on this framework, we then design an $O(n \log n)$ sweep-line algorithm to construct a rectilinear minimum spanning tree without using Delaunay triangulation. © 2002 Elsevier Science B.V. All rights reserved. Keywords: Minimal spanning tree; Graph algorithms; Wire routing; Computational geometry 1. Introduction Given $n$ points in a plane, a minimum spanning tree is a set of edges which connects all the points and has a minimum total length. Minimum spanning tree construction on an arbitrary graph is a well studied problem [1]. It also belongs to a more general class of greedy problems on combinatorial structures known as matroids [6]. The typical complexity of computing a minimum spanning tree in a graph $G(V, E)$ is $O(m \log n)$, where $n$ is the number of vertices and $m$ is the number of edges. So, given a sparse graph, a minimum spanning tree can be constructed efficiently. Clearly, a minimum spanning tree is contained in the complete graph on the $n$ points; however, enumerating its $\Omega(n^2)$ edges is expensive for large $n$. The first algorithm to speed up the minimum spanning tree computation came as a by-product of computational geometry research and was based on the fact that only the edges in the Delaunay triangulation of the points need to be examined [8]. But this only works in Euclidean distance ($L_2$). When Euclidean distance ($L_2$) is used, the Delaunay graph is defined as the dual of the Voronoi diagram [8]. The Delaunay graph is a triangulation when no more than three points are cocircular; otherwise it can be completed to a triangulation by adding more edges. Preparata and Shamos’ algorithm [8] for the Voronoi diagram uses a divide-and-conquer strategy and runs in $O(n \log n)$ time. Later, Fortune [2] designed a much simpler sweep-line algorithm with the same running time. His algorithm avoids the difficult merge step of the divide-and-conquer technique. However, when rectilinear distance ($L_1$) is used, the Voronoi diagram is not always well defined. Approaches in this metric must specify explicitly how such ambiguities are resolved [5,3]. As in the Euclidean case, the original algorithm [5] in this direction was a divide-and-conquer algorithm; motivated by Fortune [2], a sweep-line algorithm [3] appeared more recently. As we mentioned earlier, minimum spanning tree construction for Euclidean distance came only as a by-product of Delaunay triangulation.
Since Delaunay triangulation is not well defined in rectilinear distance, forcing the minimum spanning tree computation onto it encounters unnecessary difficulties. In fact, around the same time, Yao [11] observed that a minimum spanning tree can be constructed by considering a sufficient number of closest neighbors for each of the given points, and gave an algorithm which runs in $O(n^{2-1/8} \log^{1-1/8} n)$ time for the planar case. Guibas and Stolfi [4] further implemented the idea for rectilinear distance in the plane with a running time of $O(n \log n)$. Interestingly enough, their algorithm is also based on a divide-and-conquer strategy: it divides the point set into a left half and a right half, and recursively applies the algorithm to them. In this paper, we focus directly on the objective of constructing a minimum spanning tree. Keeping this in mind, we find that there is no need to take on the burden of constructing, or even defining, a Delaunay triangulation. What we actually need are sparse graphs which contain minimum spanning trees; we call such graphs spanning graphs. Although for Euclidean distance a Delaunay triangulation can be proved to be a spanning graph, a spanning graph need not be a Delaunay triangulation. This observation is invaluable for the rectilinear metric, where a Delaunay triangulation is not well defined. Based on this framework, and using the property that each point needs to be connected to only a few other points, we design a sweep-line algorithm to construct a spanning graph for rectilinear distance. After that, a minimum spanning tree can easily be computed on the spanning graph. With respect to the literature, our work makes two contributions. First, we establish a general framework of spanning graphs which includes both Delaunay and non-Delaunay approaches, and study the properties of spanning graphs in both rectilinear and Euclidean distances. Second, although the divide-and-conquer algorithm by Guibas and Stolfi [4] has the same asymptotic running time as our sweep-line algorithm, theirs is more complicated to implement, requires more storage ($O(n \log n)$ vs. $O(n)$), and has a larger hidden constant. The rest of the paper is organized as follows. In Section 2, we define spanning graphs and discuss their properties. In Section 3, we design an algorithm to construct rectilinear spanning graphs for a given set of points. Finally, Section 4 concludes the paper. 2. Spanning graph Given a set of $n$ points in a plane, a spanning tree is a set of edges that connects all the points and contains no cycles. When each edge is weighted using some distance metric of the incident points, the *metric minimum spanning tree* is a tree whose sum of edge weights is minimum. If the Euclidean distance ($L_2$) is used, it is called the *Euclidean minimum spanning tree*; if the rectilinear distance ($L_1$) is used, it is called the *rectilinear minimum spanning tree*. Since the minimum spanning tree problem on a weighted graph is well studied, the usual approach for a metric minimum spanning tree is to first define a weighted graph on the set of points and then to construct a spanning tree on it. Much as a connection graph is defined for maze search [12], we can define a spanning graph for the minimum spanning tree construction. **Definition 1.** Given a set of points $V$, an undirected graph $G = (V, E)$ is called a *spanning graph* if it contains a minimum spanning tree.
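Once a spanning graph is available, any textbook minimum spanning tree algorithm finishes the job. The sketch below is ours, not the paper's; the function name and the `(weight, u, v)` edge format are assumptions for illustration. It runs Kruskal's algorithm with a union-find structure over such an edge list.

```python
# Minimal Kruskal sketch: `edges` is a spanning graph over vertices
# 0..n-1, given as (weight, u, v) tuples; returns the MST edge list.
def kruskal(n, edges):
    parent = list(range(n))

    def find(x):
        # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):      # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # lightest edge crossing a cut: keep it
            parent[ru] = rv
            tree.append((u, v, w))
            if len(tree) == n - 1:     # tree is complete
                break
    return tree
```

For example, `kruskal(3, [(1.0, 0, 1), (2.0, 1, 2), (2.5, 0, 2)])` returns the two lighter edges; by Definition 1 the result is a correct minimum spanning tree whenever the input edge list is a spanning graph.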
Usually, for a given set of points, the minimum spanning tree may not be unique. Thus a spanning graph as defined above may not contain all minimum spanning trees. If we are only interested in one of these trees, no matter which one, the above definition is sufficient. Otherwise, we may need a stronger version as follows. **Definition 2.** Given a set of points $V$, an undirected graph $G = (V, E)$ is called a *strong spanning graph* if it contains all minimum spanning trees. Since we are interested in spanning graphs with as few edges as possible, we define the cardinality of a spanning graph as its number of edges. As we can see, a complete graph on a set of points contains all spanning trees, and thus is a spanning graph, even in the strong sense. This gives us an $O(n^2)$ upper bound on the cardinalities of both spanning graphs and strong spanning graphs. On the other hand, since a minimum spanning tree is by definition also a spanning graph, the minimum cardinality of spanning graphs is always $n - 1$ for a set of $n$ points. But the minimum cardinality of strong spanning graphs is more complicated and, as we will show next, depends on which metric is used. Minimum spanning tree algorithms usually use two properties to infer the inclusion and exclusion of edges in a minimum spanning tree. The first property is known as the cut property. It states that an edge of smallest weight crossing any partition of the vertex set into two parts belongs to a minimum spanning tree. The second property is known as the cycle property. It says that an edge with largest weight in any cycle in the graph can be safely deleted. Since the two properties concern the construction of a minimum spanning tree, they have natural analogues for strong spanning graphs. The strong cut property states that all lightest edges crossing any partition of the vertex set into two parts belong to a strong spanning graph. The strong cycle property says that the unique heaviest edge in any cycle in the graph does not belong to a strong spanning graph. Preparata and Shamos [8] proved the following lemma. **Lemma 1** (Lemma 6.2 in [8]). Let $S$ be a set of points in the plane, and let $\Delta(p)$ denote the set of points adjacent to $p \in S$ in the Delaunay triangulation of $S$. For any partition $\{S_1, S_2\}$ of $S$, if $\overline{qp}$ is the shortest segment between points of $S_1$ and points of $S_2$, then $q$ belongs to $\Delta(p)$. Combining the lemma with the strong cut property, we have the following theorem. **Theorem 1.** If Euclidean distance is used, the Delaunay triangulation of a set of points is always a strong spanning graph. ![Fig. 1. A set of points whose strong spanning graph must have $\Omega(n^2)$ edges.](image) Since a Delaunay graph has only a linear number of edges, the above theorem also shows that the minimum cardinality of a strong spanning graph is $O(n)$ for any set of $n$ points if Euclidean distance is used. By contrast, the rectilinear distance has no such property, as the following theorem shows. **Theorem 2.** If rectilinear distance is used, there is a set of $n$ points for which any strong spanning graph has $\Omega(n^2)$ edges. **Proof.** Consider a set of $n$ points as follows. Let $\lfloor n/2 \rfloor$ points fall on the segment $x + y = 1$ with $x \in [0, 1]$; all other points sit on the segment $x + y = -1$ with $x \in [-1, 0]$. This is illustrated in Fig. 1.
As we can see, if we partition the whole set into two subsets according to the two segments, all edges between the two subsets have the same length (each such edge has $L_1$ length exactly 2). According to the strong cut property, they all must be in a strong spanning graph. □ ### 3. Rectilinear spanning graph construction For the rest of the paper we use the notation $\|sp\|$ for the distance between $s$ and $p$ in the $L_1$ metric. Using the terminology given in [10], we define the uniqueness property as follows. **Definition 3.** Given a point $s$, a region $R$ has the uniqueness property with respect to $s$ if for every pair of points $p, q \in R$, $\|pq\| < \max(\|sp\|, \|sq\|)$. A partition of space into a finite set of disjoint regions is said to have the uniqueness property if each of its regions has the uniqueness property. Define the *octal partition* of the plane with respect to $s$ as the partition induced by the two rectilinear lines and the two 45-degree lines through $s$, as shown in Fig. 2(a). Here, each of the regions $R_1$ through $R_8$ includes only one of its two bounding half lines, as shown in Fig. 2(b). It can be shown that the octal partition has the uniqueness property. **Lemma 2.** Given a point $s$ in the plane, the octal partition with respect to $s$ has the uniqueness property. **Proof.** To show that a partition has the uniqueness property, we need to prove that each region of the partition has the uniqueness property. Since the regions $R_1$ through $R_8$ are similar to each other, we only give a proof for $R_1$. The points in $R_1$ can be characterized by the following inequalities $$x \geq x_s,$$ $$x - y < x_s - y_s.$$ Suppose we have two points $p$ and $q$ in $R_1$. Without loss of generality, we can assume $x_p \leq x_q$. If $y_p \leq y_q$, then we have $\|sq\| = \|sp\| + \|pq\| > \|pq\|$. Therefore we only need to consider the case when $y_p > y_q$. In this case, we have $$\|pq\| = |x_p - x_q| + |y_p - y_q| = x_q - x_p + y_p - y_q = (x_q - y_q) + y_p - x_p$$ $$< (x_s - y_s) + y_p - x_p \quad (\text{since } q \in R_1 \text{ gives } x_q - y_q < x_s - y_s)$$ $$\leq (x_s - y_s) + y_p - x_s \quad (\text{since } p \in R_1 \text{ gives } x_p \geq x_s)$$ $$= y_p - y_s \leq x_p - x_s + y_p - y_s = \|sp\|. \quad \square$$ Given two points $p, q$ in the same octal region of a point $s$, the uniqueness property says that $\|pq\| < \max(\|sp\|, \|sq\|)$. Consider the cycle on the points $s$, $p$, and $q$: by the cycle property, only the point with the minimum distance from $s$ needs to be connected to $s$. An interesting property of the octal partition is that the contour of equidistant points from $s$ forms a line segment in each region. In regions $R_1, R_2, R_5, R_6$, these segments are captured by an equation of the form $x + y = c$; in regions $R_3, R_4, R_7, R_8$, they are described by the form $x - y = c$, as shown in Fig. 3. Conceptually, we only need to consider edges from $s$ to the closest neighbor in each octant. We pose this problem in the reverse manner: given a point $s$, find all the candidate points to which it can possibly be the nearest neighbor in a specified octant. For the sake of simplicity, we consider only the case of $R_1$; the other octants are symmetric and the discussion extends to them easily. For the $R_1$ octant, we run a sweep-line algorithm over all points in non-decreasing order of $x + y$. During the sweep, we maintain an *active set* consisting of points whose nearest neighbors in $R_1$ are still to be discovered. When we process a point $p$, we find all points in the active set that have $p$ in their $R_1$ regions.
Suppose $s$ is one such point from the active set. Since we process points in non-decreasing order of $x + y$, we know that $p$ is the nearest point in $R_1$ for $s$. Therefore, we add the edge $sp$ and delete $s$ from the active set. After processing those active points, we also add $p$ to the active set. Each point will be added to and deleted from the active set at most once. The fundamental operation required in the sweep-line algorithm is: given a point $p$, find the subset of active points whose $R_1$ regions contain $p$. Based on the following observation, this is the subset of active points in the $R_5$ region of $p$. **Observation 1.** Given two points $p$ and $s$, point $p$ is in the $R_1$ region of $s$ if and only if $s$ is in the $R_5$ region of $p$. Since $R_5$ can be represented as a two-dimensional range $(-\infty, x_p] \times (x_p - y_p, +\infty)$ on $(x, x - y)$, a priority search tree [7] can be used to maintain the active point set. Since each insertion and deletion takes $O(\log n)$ time, and a query takes $O(\log n + k)$ time where $k$ is the number of objects within the range, the total time for the sweep is $O(n \log n)$. Since the other regions can be processed in a similar way as $R_1$, we get an algorithm running in $O(n \log n)$ time. The priority search tree relies on maintaining a balanced structure for its fast query time. This works well for static input sets; when the input set is dynamic, re-balancing the tree can be quite challenging. Fortunately, the active set has a structure we can exploit for an alternate representation. Since we delete a point from the active set as soon as we find a point in its $R_1$ region, no point in the active set can be in the $R_1$ region of another point in the set. **Lemma 3.** For any two points $p, q$ in the active set, we have $x_p \neq x_q$, and if $x_p < x_q$ then $x_p - y_p \leq x_q - y_q$. Based on this property, we can keep the active set ordered by increasing $x$; this implies a non-decreasing order on $x - y$. Given a point $s$, the points which have $s$ in their $R_1$ region must obey the following inequalities $$x \leq x_s,$$ $$x - y > x_s - y_s.$$ To find the subset of active points which have $s$ in their $R_1$ regions, we can first find the largest $x$ such that $x \leq x_s$, then proceed in decreasing order of $x$ as long as $x - y > x_s - y_s$. Since the ordering is kept on only one dimension, any binary search tree with $O(\log n)$ insertion, deletion, and query time will also give us an $O(n \log n)$ time algorithm. Binary search trees also need to be balanced. An alternative is to use skip-lists [9], which use randomization to avoid the problem of explicit balancing but provide $O(\log n)$ expected behavior. A careful study also shows that after the sweep process for $R_1$, there is no need to do the sweep for $R_5$, since all edges needed in that phase are either connected or implied. This is also based on Observation 1. Moreover, based on the information in $R_5$, we can further reduce the number of edge connections. As shown in Fig. 4, when the sweep step processes point $s$, we find a subset of active points which have $s$ in their $R_1$ regions. Without loss of generality, suppose $p$ and $q$ are two of them. Then $p$ and $q$ are in the $R_5$ region of $s$, which means $\|pq\| < \max(\|sp\|, \|sq\|)$. Therefore, we only need to connect $s$ with the nearest of these active points; a runnable sketch of this single-octant pass follows.
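To make the active-set mechanics concrete, here is a minimal Python sketch of one pass (octant $R_1$), assuming points are given as `(x, y)` tuples; the function name and the returned edge format are ours, not the authors', and a plain sorted list stands in for the balanced tree or skip-list discussed above.

```python
import bisect

def r1_sweep(points):
    """One sweep pass for octant R1; returns candidate edges (s, p, d)
    where p is the nearest R1 neighbour of active point s."""
    order = sorted(range(len(points)), key=lambda i: sum(points[i]))
    act, act_x = [], []                 # active set, kept sorted by x
    edges = []
    for p in order:                     # process by non-decreasing x + y
        xp, yp = points[p]
        # active points s with x_s <= x_p and x_s - y_s > x_p - y_p
        # have p in their R1 regions (Observation 1)
        j = bisect.bisect_right(act_x, xp) - 1
        hit = []
        while j >= 0:
            s = act[j]
            if points[s][0] - points[s][1] <= xp - yp:
                break                   # x - y is non-decreasing: stop scan
            hit.append(j)
            j -= 1
        if hit:
            # cycle property: connect p only with the nearest hit point
            d, s = min((abs(points[act[k]][0] - xp)
                        + abs(points[act[k]][1] - yp), act[k]) for k in hit)
            edges.append((s, p, d))
        for k in hit:                   # k runs in decreasing order
            del act[k]; del act_x[k]
        k = bisect.bisect(act_x, xp)    # insert p, keeping the x-order
        act.insert(k, p); act_x.insert(k, xp)
    return edges
```

List insertion and deletion are $O(n)$ in the worst case, so this sketch runs in $O(n^2)$ overall; substituting the balanced tree or skip-list of the text restores the $O(n \log n)$ bound.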
Since $R_1$ and $R_2$ have the same sweep sequence, we can process them together in one pass. Similarly, $R_3$ and $R_4$ can be processed together in another pass. Based on the above discussion, the pseudo-code of the algorithm is presented in Fig. 5.

Algorithm Rectilinear Spanning Graph
for (i = 0; i < 2; i++) {
    if (i == 0) sort points according to x + y;
    else sort points according to x - y;
    A[1] = A[2] = ∅;
    for each point p in the order {
        find points in A[1], A[2] such that p is in their
            R_{2i+1} and R_{2i+2} regions, respectively;
        connect p with the nearest point in each subset;
        delete the subsets from A[1], A[2], respectively;
        add p to A[1], A[2];
    }
}

Fig. 5. The rectilinear spanning graph algorithm.

The correctness of the algorithm is stated in the following theorem. **Theorem 3.** Given $n$ points in the plane, the rectilinear spanning graph algorithm constructs a spanning graph in $O(n \log n)$ time, and the number of edges in the graph is $O(n)$. **Proof.** The algorithm can be considered as deleting edges from the complete graph. As described, all edges that we delete are redundant by the cycle property. Thus, the output graph of the algorithm contains at least one rectilinear minimum spanning tree. In the algorithm, each given point will be inserted into and deleted from the active set at most once for each of the four regions $R_1$ through $R_4$. Each insertion or deletion takes $O(\log n)$ time, so the total time is bounded by $O(n \log n)$. The storage we need is only for the active sets, which is at most $O(n)$. □ 4. Conclusion In summary, we have characterized a broad class of spanning graphs which are more natural than the Delaunay triangulation for minimum spanning tree construction. We also described a new and much simpler approach to solving the minimum spanning tree problem for the rectilinear distance metric. The approach relies on the cycle property of spanning trees and the uniqueness property applied to an octal partition of the plane. References [1] T.H. Cormen, C.E. Leiserson, R.L. Rivest, Introduction to Algorithms, MIT Press, Cambridge, MA, 1989. [2] S. Fortune, A sweepline algorithm for Voronoi diagrams, Algorithmica 2 (1987) 153–174. [3] L.L. Deneen, G.M. Shute, C.D. Thomborson, An $O(n \log n)$ plane-sweep algorithm for $L_1$ and $L_\infty$ Delaunay triangulation, Algorithmica 6 (1991) 207–221. [4] L.J. Guibas, J. Stolfi, On computing all north-east nearest neighbors in the $L_1$ metric, Inform. Process. Lett. 17 (4) (1983) 219–223. [5] F.K. Hwang, An $O(n \log n)$ algorithm for rectilinear minimal spanning trees, J. ACM 26 (2) (1979) 177–182. [6] E.L. Lawler, Combinatorial Optimization: Networks and Matroids, Holt, Rinehart and Winston, New York, 1976. [7] E.M. McCreight, Priority search trees, SIAM J. Comput. 14 (2) (1985) 257–276. [8] F.P. Preparata, M.I. Shamos, Computational Geometry: An Introduction, Springer, Berlin, 1985. [9] W. Pugh, Skip lists: A probabilistic alternative to balanced trees, Comm. ACM 33 (6) (1990). [10] G. Robins, J.S. Salowe, Low-degree minimum spanning tree, Discrete Comput. Geom. 14 (1995) 151–165. [11] A.C.-C. Yao, On constructing minimum spanning trees in $k$-dimensional spaces and related problems, SIAM J. Comput. 11 (4) (1982) 721–736. [12] S.Q. Zheng, J.S. Lim, S.S. Iyengar, Finding obstacle-avoiding shortest paths using implicit connection graphs, IEEE Trans. Comput. Aided Des. 15 (1) (1996) 103–110.
A Rejection Technique for Sampling from Log-Concave Multivariate Distributions Josef Leydold University of Economics and Business Administration Department for Applied Statistics and Data Processing Augasse 2–6, A-1090 Vienna, Austria March 1998 Abstract Different universal methods (also called automatic or black-box methods) have been suggested for sampling from univariate log-concave distributions. The description of a suitable universal generator for multivariate distributions in arbitrary dimensions has not been published up to now. The new algorithm is based on the method of transformed density rejection. To construct a hat function for the rejection algorithm, the multivariate density is transformed by a proper transformation $T$ into a concave function (in the case of a log-concave density, $T(x) = \log(x)$). It is then possible to construct a dominating function by taking the minimum of several tangent hyperplanes which are transformed back by $T^{-1}$ into the original scale. The domains of the different pieces of the hat function are polyhedra in the multivariate case. Although this method can be shown to work, it is too slow and complicated in higher dimensions. In this paper we split $\mathbb{R}^n$ into simple cones. The hat function is constructed piecewise on each of the cones by tangent hyperplanes. The resulting function is not continuous any more and the rejection constant is bounded from below, but the setup and the generation remain quite fast in higher dimensions, e.g. $n = 8$. The paper describes in detail how this main idea can be used to construct the algorithm TDRMV() that generates random tuples from a multivariate log-concave distribution with a computable density. Although the developed algorithm is not a real black-box method, it is adjustable for a large class of log-concave densities.
CR Categories and Subject Descriptors: G.3 [Probability and Statistics]: Random number generation General Terms: Algorithms Additional Key Words and Phrases: Rejection method, multivariate log-concave distributions, universal method ## Contents
1 Introduction
2 The method
2.1 Transformed density rejection
2.2 Construction of a hat function
2.3 Simple cones
2.4 Triangulation
2.5 Problems
2.6 Log-concave densities
3 The algorithm
3.1 Setup
3.2 Sampling
4 Possible variants
4.1 Subset of $\mathbb{R}^n$ as domain
4.2 Density not differentiable
4.3 Indicator functions
4.4 Mode not in origin
4.5 Add mode as construction point
4.6 More construction points per cone
4.7 Squeezes
4.8 $T_c$-concave densities
5 Computational experience
5.1 A C-implementation
5.2 Basic version: unbounded domain, mode in origin
5.3 Rectangular domain
5.4 Quality
5.5 Some examples
5.6 Résumé
1 Introduction For the univariate case there is a large literature on generation methods for standard distributions (see e.g. [Dev86] and [Dag88]), and in recent years some papers have appeared on universal (or black-box) methods (see [Dev86, chapter VII], [GW92], [Ahr95], [Hör95a], [HD94] and [ES97]); these are algorithms that can generate random variates from a large family of distributions as long as some information (typically the mode and the density of the specific distribution) is available. For the generation of variates from bivariate and multivariate distributions, papers are rare. Only the generation of the multinormal and of the Wishart distribution is well known and discussed (see e.g. [Dev86] and [Dag88]). Several approaches to the problem of generating multivariate random tuples exist, but they have some disadvantages: - The multivariate extension of the ratio-of-uniforms method as in [SV87] or [WGS91]. This method can be reformulated as rejection from a small family of table-mountain shaped multivariate distributions. This point of view is not included in these two papers, but it is useful as it clarifies why the acceptance probability becomes poor for high correlation. This disadvantage of the method is already mentioned in [WGS91]. The practical problem of how to obtain the necessary multivariate rectangle enclosing the region of acceptance for the ratio-of-uniforms method is discussed in neither [SV87] nor [WGS91], and seems to be difficult for most distributions. - The conditional distribution method. It requires the knowledge of, and the ability to sample from, the marginal and the conditional distributions (see [Dev86, chapter XI.1.2]). - The decomposition and rejection method. A majorizing function (also called hat function) suggested for the multivariate rejection method is the product of the marginal densities (in [Dag88]). It is not at all clear how to obtain the necessary rejection constant $\alpha$. - Development of new classes of multivariate distributions which are easy to generate. It is only necessary (and possible) to specify the marginal distributions and the degree of dependence measured by some correlation coefficient (see the monograph [Joh87]). This idea seems attractive for most simulation practitioners interested in multivariate distributions, but it is of no help when variates must be generated from a distribution with a given density. - Recently Devroye [Dev97] has developed algorithms for ortho-unimodal densities. But this paper leaves the generation of log-concave distributions as an open problem.
- Sweep-plane methods for log-concave (and $T$-concave) distributions have recently been described in [Hör95b] for the bivariate case and in [LH98] for the multivariate case. These algorithms use the idea of transformed density rejection, which is presented in a first form in [Dev86, chapter VII.2.4] and with a different set-up in [GW92]. To our knowledge these two algorithms are the only universal algorithms in the literature for multivariate distributions with given densities. (In [Dev86, chapter XI.1.3] it is even stressed that no general inequalities for multivariate densities are available, a fact which makes it impossible to design black-box algorithms similar to those developed in [Dev86] for the univariate case.) Although the algorithm in [LH98] works, it is very slow, since the domain of the density $f$ is decomposed into polyhedra. This is due to the construction of the hat function, where we take the pointwise minimum of tangent hyperplanes. In this paper we again use transformed density rejection and the sweep-plane technique to derive a much more efficient algorithm. The main idea is to decompose the domain of the density into cones first and then compute tangent hyperplanes on these cones. The resulting hat function is not continuous any more and the rejection constant is bounded from below, but the setup as well as the sampling from the hat function is much faster than in the original algorithm. Section 2 explains the method and gives all necessary mathematical formulae. Section 3 provides all details of the algorithm. Section 4 discusses how to improve and extend the main idea of the algorithm (e.g. to $T$-concave distributions and bounded domains), and Section 5 reports the computational experience we have had with the new algorithm. 2 The method 2.1 Transformed density rejection Density. We are given a multivariate distribution with differentiable density function $$f: D \to [0, \infty), \quad D \subseteq \mathbb{R}^n, \quad \text{with mode } m.$$ (1) To simplify the development of our method we assume $D = \mathbb{R}^n$, $m = 0$ and $f \in C^1$. In §4 we extend the algorithm so that these requirements can be dropped. Transformation. To design a universal algorithm utilizing the rejection method, it is necessary to find an automatic way to construct a hat function for a given density. Transformed density rejection, introduced under a different name in [GW92] and generalized in [Hör95a], is based on the idea that the density $f$ is transformed by a monotone transformation $T$ (e.g. $T(x) = \log(x)$) in such a way that (see [Hör95a]): (T1) $\tilde{f}(x) = T(f(x))$ is concave (we then say “$f$ is $T$-concave”); (T2) $\lim_{x \to 0} T(x) = -\infty$; (T3) $T(x)$ is differentiable and $T'(x) > 0$, which implies that $T^{-1}$ exists; and (T4) the volume under the hat is finite. **Hat.** It is then easy to construct a hat $\tilde{h}(x)$ for $\tilde{f}(x)$ as the minimum of $N$ tangents. Since $\tilde{f}(x)$ is concave we clearly have $\tilde{f}(x) \leq \tilde{h}(x)$ for all $x \in \mathbb{R}^n$. Transforming $\tilde{h}(x)$ back into the original scale we get $h(x) = T^{-1}(\tilde{h}(x))$ as majorizing function or hat for $f$, i.e. with $f(x) \leq h(x)$. Figure 1 illustrates the situation for the univariate case by means of the normal distribution and the transformation $T(x) = \log(x)$. ![Figure 1: Hat function for the univariate normal density](image) The left-hand side shows the transformed density with three tangents; the right-hand side shows the density function with the resulting hat.
(The dashed lines are simple lower bounds for the density, called squeezes in random variate generation. Their use reduces the number of evaluations of $f$. Especially when the number of touching points is large and the evaluation of $f$ is slow, the acceleration gained by the squeezes can be enormous.) **Rejection.** The basic form of the multivariate rejection method is given by algorithm REJECTION(). **Algorithm 1** REJECTION() 1: **Set-up:** Construct a hat function $h(x)$. 2: Generate a random tuple $X = (X_1, \ldots, X_n)$ with density proportional to $h(X)$ and a uniform random number $U$. 3: If $U h(X) \leq f(X)$ return $X$, else go to 2. The main idea of this paper is to extend transformed density rejection as described in [Hör95a] to the multivariate case. 2.2 Construction of a hat function Tangents. Let $p_i$ be points in $D \subseteq \mathbb{R}^n$. In the multivariate case the tangents of the transformed density $\tilde{f}(\mathbf{x})$ at $p_i$ are the hyperplanes given by \[ \ell_i(\mathbf{x}) = \tilde{f}(p_i) + \langle \nabla \tilde{f}(p_i), (\mathbf{x} - p_i) \rangle \] (2) where $\langle \cdot, \cdot \rangle$ denotes the scalar product. Polyhedra. In [LH98] a hat function $h(\mathbf{x})$ is constructed as the pointwise minimum of these tangents. We have \[ h(\mathbf{x}) = \min_{i=1,\ldots,m} T^{-1}(\ell_i(\mathbf{x})) \] (3) The domains in which a particular tangent $\ell_i(\mathbf{x})$ determines the hat function are simple convex polyhedra $P_i$, which may be bounded or not (for details about convex polyhedra see [Grü67, Zie95]). Then a sweep-plane technique for generating random tuples in such a polyhedron with density proportional to $T^{-1}(\ell_i(\mathbf{x}))$ is derived. To avoid lots of indices we write $p$, $\ell(\mathbf{x})$ and $P$ without the index $i$ if there is no risk of confusion. A sweep-plane algorithm. Let \[ g = -\frac{\nabla \tilde{f}(p)}{\|\nabla \tilde{f}(p)\|} \] (4) if $\nabla \tilde{f}(p) \neq 0$; otherwise choose any $g$ with $\|g\| = 1$. ($\|\cdot\|$ denotes the 2-norm.) For a given point $\mathbf{x}$ let $x = \langle g, \mathbf{x} \rangle$. We denote the hyperplane perpendicular to $g$ through $\mathbf{x}$ by \[ F(\mathbf{x}) = F(x) = \{ y \in \mathbb{R}^n : \langle g, y \rangle = x \} \] (5) and its intersection with the polyhedron $P$ by $Q(\mathbf{x}) = Q(x) = P \cap F(x)$. ($F(\mathbf{x})$ depends on $x$ only; thus we write $F(x)$ if there is no risk of confusion.) $Q(x)$ is again a simple convex polyhedron. Now we can move this sweep-plane $F(x)$ through the domain $P$ by varying $x$. Figure 2 illustrates the situation. As can easily be seen from (2), (4) and (5), $T^{-1}(\ell(\mathbf{x}))$ is constant on $Q(x)$ for every $x$. Let \[ \alpha = \tilde{f}(p) - \langle \nabla \tilde{f}(p), p \rangle \quad \text{and} \quad \beta = \|\nabla \tilde{f}(p)\| \] (6) Then the hat function on $P$ is given by \[ h|_P(\mathbf{x}) = T^{-1}(\ell(\mathbf{x})) = T^{-1}(\alpha - \beta x), \] (7) where again $x = \langle g, \mathbf{x} \rangle$. For the marginal density function of the hat $h|_P$ along $g$ we find \[ h_g(x) = \int_{Q(x)} h|_P(y) \, dy = A(x) \cdot T^{-1}(\alpha - \beta x) \] (8) where integration is done over $Q(x)$, and $A(x)$ denotes the $(n-1)$-dimensional volume of $Q(x)$. It exists if and only if $Q(x)$ is bounded. To compute $A(x)$ let $\mathbf{v}_j$ denote the vertices of $P$ and $v_j = \langle g, \mathbf{v}_j \rangle$. Now assume that the polyhedron $P$ is simple.
Then let $t_1^{v_j}, \ldots, t_n^{v_j}$ be the $n$ nonzero vectors in the directions of the edges of $P$ originating at $\mathbf{v}_j$, i.e. for each $k$ and every $\mathbf{x} \in P$, $\langle t_k^{v_j}, \mathbf{x} \rangle \geq 0$. Then by modifying the method in [Law91] we find $$A(x) = \sum_{\substack{j \\ v_j \leq x}} a_j (x - v_j)^{n-1} = \sum_{k=0}^{n-1} b_k^{(x)} x^k \quad (9)$$ The coefficients are given by $$a_j = \frac{1}{(n-1)!} |\det(t_1^{v_j}, \ldots, t_n^{v_j})| \prod_{i=1}^{n} \langle g, t_i^{v_j} \rangle^{-1} \quad (10)$$ and $$b_k^{(x)} = \binom{n-1}{k} \sum_{\substack{j \\ v_j \leq x}} a_j (-v_j)^{n-1-k} \quad (11)$$ Notice that $b_k^{(x)} = b_k^{(v_{j-1})}$ for $x \in [v_{j-1}, v_j)$, and that equations (9) and (10) do not hold if $P$ is not simple. For details see [LH98]. Generation from $h_g$ is not easy in general. But for log-concave or $T_c$-concave (see §4.8) densities $f(\mathbf{x})$, $h_g$ again is log-concave ([Pré73]) or $T_c$-concave ([LH98]), respectively. **Generate random tuples.** For sampling from the “hat distribution” we first need the volume below the hat in all the polyhedra $P_i$ and in the domain $D$. We then choose one of these polyhedra randomly with probability proportional to these volumes. By means of a proper univariate random number we sample from the marginal distribution $h_g$ and get an intersection $Q(x)$ of $P$. Finally we sample from the uniform distribution on $Q(x)$. It can be shown (see [LH98]) that the algorithm works if 1. the polyhedra $P_i$ are simple (see above), 2. there exists a unique maximum of $\ell_i(\mathbf{x})$ in $P_i$ (then $\alpha - \beta x$ is decreasing and thus the volume below the hat is finite in unbounded polyhedra), and 3. $\ell_i(\mathbf{x})$ is non-constant on every edge of $P_i$ (otherwise $\langle g, t_i^{v_j} \rangle = 0$ for a vertex $\mathbf{v}_j$ and an edge $t_i$, and thus $a_j = \infty$ in (10)). **Adaptive rejection sampling.** It is very hard to find optimal points for constructing these tangents $\ell_i(\mathbf{x})$. Thus these points must be chosen by adaptive rejection sampling (see [GW92]). Adapted to our situation it works in the following way: we start with the $n + 1$ vertices of a regular simplex and add a new construction point whenever a point is rejected, until the maximum number $N$ of tangents is reached. The points of contact are thus chosen by a stochastic algorithm, and it is clear that the multivariate density of the distribution of the next point for a new tangent is proportional to $h(\mathbf{x}) - f(\mathbf{x})$. Hence with $N$ tending towards infinity the acceptance probability for a hat constructed in such a way converges to 1 with probability 1. It is not difficult to show that the expected volume below the hat is $1 + O(N^{-2/n})$. **Problems.** Using this method we run into several problems. - We have to recompute the polyhedra every time we add a point. - What must be done if the marginal distribution (8) does not exist in the initial (usually unbounded) polyhedra $P_i$, or if the volume below the hat is infinite ($Q_i(x)$ not bounded, $\alpha - \beta x$ not decreasing)? Moreover the polyhedra $P_i$ typically have many vertices. Therefore the algorithm is slow and hard to implement, because of the following effects. - The computation of the polyhedra (setup) is very expensive. - The marginal density (8) is expensive to compute. Since it is different for every polyhedron $P_i$ (and for every density function $f$), we have to use a slow black-box method (e.g. [GW92, Hör95a]) for sampling from the marginal distribution, even in the case of log-concave densities. - $Q(x)$ is not a simplex.
Thus we have to use the (slow) recursive sweep-plane algorithm as described in [LH98] for sampling from the uniform distribution over a (simple) polytope. 2.3 Simple cones A better idea is to choose the polyhedra as simple as possible in the first place, i.e. we choose cones. (We describe in §2.4 how to get such cones.) A *simple cone* $C$ (with its vertex in the origin) is an unbounded subset spanned by $n$ linearly independent vectors: $$t_1, \ldots, t_n \in S^{n-1}, \quad C = \{ \lambda_1 t_1 + \cdots + \lambda_n t_n : \lambda_i \geq 0 \}$$ (12) In contrast to the procedure described above, we now have to choose a proper point $p$ in this cone $C$ for constructing a tangent. On the whole cone the hat $h$ is then given by this tangent. The method itself remains the same. Obviously the hat function is not continuous any more (we first define a decomposition of the domain and then compute the hat function over the different parts; it cannot be made continuous by taking the pointwise minimum of the tangents, since then we could not compute the marginal density $h_g$ by equation (8)). Moreover we have to choose one touching point in each part. These disadvantages are negligible compared to the enormous speedup of the setup and of the generation of random tuples with respect to this hat function. **Marginal density.** The intersection $Q(x)$ of the sweep-plane $F(x)$ with the cone $C$ is bounded for all $x > 0$ if and only if $F(x)$ cuts each of the sets $\{\lambda t_i : \lambda > 0\}$, i.e. if and only if $\langle g, \lambda t_i \rangle = x$ for a $\lambda > 0$ by (5), and hence if and only if $$\langle g, t_i \rangle > 0 \quad \text{for all } i.$$ (13) For the volume $A(x)$ in (9) of the intersection $Q(x)$ we find $$A(x) = \begin{cases} a \cdot x^{n-1} & \text{for } x \geq 0 \\ 0 & \text{for } x < 0 \end{cases}$$ (14) where (again) $$a = \frac{1}{(n - 1)!} |\det(t_1, \ldots, t_n)| \prod_{i=1}^{n} \langle g, t_i \rangle^{-1}$$ (15) Notice that $A(x)$ does not exist if condition (13) is violated, whereas the right hand side of (14) is defined whenever $\langle g, t_i \rangle \neq 0$ for all $i$. If the marginal density exists, i.e. (13) holds, then by (8) and (6) it is given by $$h_g(x) = a \cdot x^{n-1} \cdot T^{-1}(\alpha - \beta x)$$ (16) **Volume.** The volume below the hat function in a cone $C$ is given by $$H_C = \int_0^\infty h_g(x) \, dx = \int_0^\infty a \cdot x^{n-1} \cdot T^{-1}(\alpha - \beta x) \, dx$$ (17) Notice that $g$, and thus $a$, $\alpha$ and $\beta$, depend on the choice of $p$. Choosing an arbitrary $p$ may result in a very large volume below the hat and thus in a very poor rejection constant. **Intersection of sweep-plane.** Notice that the intersection $Q(x)$ is always an $(n-1)$-simplex if condition (13) holds. Thus we can use the algorithm in [Dev86] for sampling from the uniform distribution on $Q(x)$. The vertices $\mathbf{v}_1, \ldots, \mathbf{v}_n$ of $Q(x)$ in $\mathbb{R}^n$ are given by $$\mathbf{v}_i = \frac{x}{\langle \mathbf{g}, \mathbf{t}_i \rangle} \mathbf{t}_i$$ \hspace{1cm} (18) Let $U_i$ ($i = 1, \ldots, n-1$) be iid uniform $[0, 1]$ random variates and set $U_0 = 0$, $U_n = 1$. We sort these variates such that $U_0 \leq U_1 \leq \cdots \leq U_n$. Then we get a random point in $Q(x)$ by (see [Dev86, theorems XI.2.5 and V.2.1]) $$\mathbf{X} = \sum_{i=1}^{n} (U_i - U_{i-1}) \mathbf{v}_i$$ \hspace{1cm} (19) **The choice of $\mathbf{p}$.** One of the main difficulties of the new approach is the choice of the touching point $\mathbf{p}$.
In contrast to the first approach, where the polyhedron is built around the touching point, we now have to find such a point so that (13) holds. Moreover, the volume below the hat function over the cone should be as small as possible. Searching for such a touching point in the whole cone $C$ or in the domain $D$ (the touching point need not be in $C$) with techniques for multidimensional minimization is not very practicable. Firstly, the evaluation of the volume $H_C$ in (17) for a given point $\mathbf{p}$ is expensive and its gradient with respect to $\mathbf{p}$ is not given. Secondly, the domain of $H_C$ is given by the set of points where (13) holds. Instead we suggest choosing a point in the center of $C$ as touching point for our hat. Let $\tilde{\mathbf{t}} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{t}_i$ be the barycenter of the spanning vectors. Let $a(s)$, $\alpha(s)$ and $\beta(s)$ denote the corresponding parameters in (16) for $\mathbf{p} = s \tilde{\mathbf{t}}$. Then we choose $\mathbf{p} = s \tilde{\mathbf{t}}$ by minimizing the function $$\eta: D_A \rightarrow \mathbb{R}, \quad s \mapsto \int_0^\infty a(s) x^{n-1} T^{-1}(\alpha(s) - \beta(s) x) \, dx$$ \hspace{1cm} (20) The domain $D_A$ of this function is given by all points where $\|\nabla \tilde{f}(s \tilde{\mathbf{t}})\| \neq 0$ and where $A(x)$ exists, i.e. where $\mathbf{g} = \mathbf{g}(s \tilde{\mathbf{t}})$ fulfills condition (13). It can easily be seen that $D_A$ is an open subset of $(0, \infty)$. To minimize $\eta$ we can use standard methods, e.g. Brent’s algorithm (see e.g. [FMM77]). The main problem is to find $D_A$. Although $\tilde{f}(\mathbf{x})$ is concave by assumption, it is possible for a particular cone $C$ that $D_A$ is a strict subset of $(0, \infty)$ or even the empty set. Moreover it might not be connected. In general only the following holds: Let $(a, b)$ be a component of $D_A \neq \emptyset$. If $f \in C^1$, i.e. the gradient of $f$ is continuous, then $$\lim_{s \searrow a} \eta(s) = \infty \quad \text{and} \quad \lim_{s \nearrow b} \eta(s) = \infty$$ \hspace{1cm} (21) Roughly speaking, $\eta$ is a U-shaped function on $(a, b)$. An essential part of the minimization is the initial bracketing of the minimum, i.e. finding three points $s_0 < s_1 < s_2$ in $(a, b)$ such that $\eta(s_1) < \eta(s_0)$ and $\eta(s_1) < \eta(s_2)$. This is necessary since the defining expression of $\eta$ in (20) is also defined for some $s \not\in D_A$ (e.g. $s < 0$). Using Brent’s algorithm without initial bracketing may (and occasionally does) result in, e.g., a negative $s$. Bracketing can be done by (1) searching for an $s_1 \in D_A$, and (2) using property (21) and moving towards $a$ and $b$, respectively, to find an $s_0$ and an $s_2$. (It is obvious that we only find a local minimum of $\eta$ by this procedure. But in all the distributions we have tested, there is just one local minimum, which therefore is the global one.) For the special case where $\langle g(s), t \rangle$ does not depend on $s$ (e.g. for all multivariate normal distributions), $D_A$ is either $(0, \infty)$ or the empty set. It is then possible to make considerations similar to those in [Hör95a, theorem 2.1] for the one-dimensional case. Adapted to the multivariate case it would state that for the optimal touching points $\mathbf{p}$, $f(\mathbf{p})$ is the same for every cone $C$. **Condition violated.** Notice that $D_A$ may even be the empty set, i.e. condition (13) may fail for all $s \in (0, \infty)$.
By the concavity of $\tilde{f}(\mathbf{x})$ we know that $\langle g, p \rangle > 0$ for every construction point $p$. Furthermore $\langle g, p \rangle$ is bounded from below on every compact subset of the domain $D$ of the density $f$. Therefore there always exists a partition into simple cones with proper touching points $p = s \tilde{t}$ which satisfy (13), i.e. the domains $D_A$ are not empty for all cones $C$. We can even have $D_A = (0, \infty)$. ### 2.4 Triangulation For this new approach we need a partition of $\mathbb{R}^n$ into simple cones. We get such a partition by triangulation of the unit sphere $S^{n-1}$. Each cone $C$ is then generated by a simplex $\Delta \subset S^{n-1}$ (a triangle in $S^2$, a tetrahedron in $S^3$, and so on): $$C = \{\lambda t : \lambda \geq 0, t \in \Delta\} \quad (22)$$ These simplices are uniquely determined by the vectors $t_1, \ldots, t_n$ in (12), i.e. their vertices. (They are the convex hull of these vertices in $S^{n-1}$.) It does not matter that these cones are closed sets: the intersection of such cones might not be empty, but it has measure zero. For computing $a$ in (15) we need the volumes of these simplices. To avoid $D_A$ being the empty set, some of the cones may have to be skinny. Furthermore, to get a good hat function, these simplices should have the same volume (if possible) and they should be “regular”, i.e. the distances from the center to the vertices should be equal (or similar). Thus the triangulation should have the following properties: (C1) Recursive construction. (C2) $|\det(t_1, \ldots, t_n)|$ is easy to compute for all simplices. (C3) The edges of a simplex have equal length. Although it is not possible to get such a triangulation for $n \geq 3$, we suggest an algorithm which fulfills (C1) and (C2) and which “nearly” satisfies (C3). **Initial cones.** We get the initial simplices as the convex hulls in $S^{n-1}$ of the vectors $$\delta_1 e_1, \ldots, \delta_n e_n \quad (23)$$ where $e_i$ denotes the $i$-th unit vector in $\mathbb{R}^n$ (i.e. the vector whose $i$-th component is 1 and all others are 0) and $\delta_i \in \{-1, 1\}$. As can easily be seen, the resulting partition of $\mathbb{R}^n$ is that of the arrangement of the hyperplanes $$x_i = 0, \quad i = 1, \ldots, n \quad (24)$$ Hence we have $2^n$ initial cones. **Barycentric subdivision of edges.** To get smaller cones we have to triangulate these simplices. Standard triangulations of simplices which are used, for example, in fixed-point computation (see e.g. [Tod76, Tod78]) are not appropriate for our purpose: the number of simplices increases too fast with each triangulation step. (In contrast to fixed-point calculations, we have to keep all simplices with all their parameters in the memory of the computer.) Instead we use a barycentric subdivision of edges: Let $t_1, \ldots, t_n$ be the vertices of a simplex $\Delta$. Then use the following algorithm (a small sketch in code follows). (1) Find the longest edge $(t_i, t_j)$. (2) Let $$t_{new} = \frac{t_i + t_j}{\|t_i + t_j\|}, \quad (25)$$ i.e. the barycenter of the edge projected onto the sphere. (3) Get two smaller simplices: replace vertex $t_i$ by $t_{new}$ for the first simplex and vertex $t_j$ by $t_{new}$ for the second one. We have $$|\det(t_1, \ldots, t_{new}, \ldots, t_n)| = \frac{1}{\|t_i + t_j\|} |\det(t_1, \ldots, t_n)| \quad (26)$$ After making $k$ such triangulation steps in all initial cones we have $2^{n+k}$ simplices. This triangulation is more flexible.
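To make the subdivision step concrete, here is a minimal sketch (ours, not the paper's code; vertices are plain coordinate tuples) of one barycentric edge subdivision as in steps (1)-(3):

```python
import math

def subdivide(simplex):
    """Split a spherical simplex (a list of n unit vectors, given as
    coordinate tuples) at the barycenter of its longest edge.
    Returns the two resulting smaller simplices."""
    n = len(simplex)
    # (1) find the longest edge (t_i, t_j)
    i, j = max(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda e: math.dist(simplex[e[0]], simplex[e[1]]))
    # (2) barycenter of the edge, projected back onto the sphere
    m = [u + v for u, v in zip(simplex[i], simplex[j])]
    norm = math.sqrt(sum(c * c for c in m))   # equals ||t_i + t_j||
    t_new = tuple(c / norm for c in m)
    # (3) replace t_i resp. t_j by t_new; by (26) each child simplex
    # has determinant |det(simplex)| / ||t_i + t_j||
    s1 = list(simplex); s1[i] = t_new
    s2 = list(simplex); s2[j] = t_new
    return s1, s2
```

Replacing the longest-edge search by the "oldest"-edge rule described below avoids the $\binom{n}{2}$ distance computations per split.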
Whenever we have a cone $C$ where $D_A$ is empty (or the algorithm does not find an $s \in D_A$), we can split $C$ and try again to find a proper touching point in both new cones. This can be done until we have found proper construction points for all cones of the partition (see the end of §2.3). In practice this procedure stops if too many cones are necessary (the computer runs out of memory). Notice that it is not a good idea to use barycentric subdivision of the whole simplex (instead of dividing the longest edge): that triangulation exhibits the inefficient behavior of creating long, skinny simplices (see the remark in [Tod76]).

**"Oldest" edge.** Finding the longest of the $\binom{n}{2}$ edges of a simplex is very expensive. An alternative approach is to use the "oldest" edge of a simplex. The idea is the following:

1. Enumerate the $2n$ vertices of the initial cones.
2. Whenever a new vertex is created by barycentric subdivision, it gets the next number.
3. Edges are indexed by the tuple $(i, j)$ of the numbers of the incident vertices, with $i < j$.
4. We choose the edge with the lowest index with respect to the lexicographic order (the "oldest" edge). This is just the pair of lowest indices among the vertices of the simplex.

As can easily be seen, the "oldest" edge is (one of) the longest edge(s) during the first $n - 1$ triangulation steps. Unfortunately this does not hold for all simplices in subsequent triangulation steps (but it is at least not the shortest one). Computational experience with several normal distributions in various dimensions $n$ has shown that this idea speeds up the triangulation enormously but has very little effect on the rejection constant.

**Setup.** The basic version of the setup algorithm is as follows (a sketch of the iteration in steps 3–5 is given below):

1. Create initial cones.
2. Triangulate.
3. Find touching points $\mathbf{p}$ if possible (and necessary).
4. Triangulate every cone without a proper touching point.
5. Goto 3 if cones without proper touching points exist; otherwise stop.
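A minimal sketch, in C, of the iteration in steps 3–5 under the assumption that cones are indexed and new cones are appended to the list; all names (`n_cones`, `has_point()`, `split_cone()`, `find_point()`) are illustrative stand-ins for the routines described in the text.

```c
/* Sketch of setup steps 3-5: split every cone without a proper touching
   point and retry.  All names are illustrative assumptions. */
#include <stdbool.h>

extern int  n_cones;             /* current number of cones               */
extern bool has_point(int c);    /* proper touching point found for c?    */
extern int  split_cone(int c);   /* split cone c, return new cone's index */
extern bool find_point(int c);   /* try to find p = s * t for cone c      */

void complete_setup(void)
{
    int c = 0;
    while (c < n_cones) {        /* newly appended cones are visited too */
        if (!has_point(c)) {
            int c2 = split_cone(c);   /* step 4: split cone c            */
            find_point(c);            /* step 3: retry in both parts     */
            find_point(c2);
            /* c is re-examined; if there is still no touching point it
               is split again.  The implementation additionally aborts
               when too many cones are created (out of memory). */
        } else {
            c++;
        }
    }
}
```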
### 2.5 Problems

Although this procedure works for our tested distributions, an adaptation might be necessary for a particular density function $f$.

(1) The search algorithm for a proper touching point in §2.3 can be improved; e.g. $D_A$ is either $(0, \infty)$ or the empty set if $f$ is a normal density.
(2) There is no criterion for how many triangulation steps are necessary or useful for an optimal rejection constant. Thus some tests with different numbers of triangulation steps should be made with the density $f$ (see also §5).
(3) It is possible to triangulate each cone with a "bad" touching point. But besides the case where no proper touching point can be found, some touching points may lead to an enormous volume below the hat function. This case should also be excluded, and the corresponding cones should be triangulated further. A simple solution to this problem is to provide an upper bound $H_{\text{max}}$ for the volumes $H_C$; each cone with $H_C > H_{\text{max}}$ has to be triangulated further. Such a bound can be found by some empirical tests with the given density $f$. Another way is to triangulate all initial cones first and then let $H_{\text{max}}$ be a multiple (e.g. 10) of the 90th percentile of the $H_C$ of all created cones.
(4) Problems might occur when the mode is on the boundary of the support $\text{supp} f = \{\mathbf{x} \in D : f(\mathbf{x}) \neq 0\} \subset \mathbb{R}^n$. (Then we set $\log(f(\mathbf{x})) = -\infty$ for all $\mathbf{x} \in D \setminus \text{supp} f$, so that $\log f : D \to \mathbb{R} \cup \{-\infty\}$ can still be seen as a concave function.) An example of such a situation is a density that is normal on a ball $B$ and vanishes outside of $B$. In such a case there may exist a cone $C$ whose center ray $\{\lambda \tilde{\mathbf{t}} : \lambda > 0\}$ does not intersect $\text{supp} f$, and the algorithm gets into trouble. If $C \cap \text{supp} f = \emptyset$ we can simply remove this cone; otherwise an expensive search for a proper touching point is necessary.

**Restrictions.** The above observations — besides the fact that no automatic adaptation is possible — are a drawback of the algorithm for its usage as a black-box algorithm. Nevertheless the algorithm is suitable for a large class of log-concave densities, and it is possible to include parameters in the code to adjust the algorithm to a given density easily. Of course some tests might be necessary. Besides, the algorithm never produces wrong random points; it simply does not work if no "good" touching points can be found for some cones $C$.

### 2.6 Log-concave densities

The transformation $T(x) = \log(x)$ satisfies (T1)–(T4). If $T(f(\mathbf{x})) = \log(f(\mathbf{x}))$ is concave, we say $f$ is *log-concave*. We have $T^{-1}(x) = \exp(x)$, and thus the marginal density function in (16) is that of a gamma distribution with parameters $n$ and $\beta$:
$$h_{\mathbf{g}}(x) = a\, x^{n-1} \exp(\alpha - \beta x) = a\, e^\alpha\, x^{n-1}\, e^{-\beta x} \quad (27)$$
The volume below the hat for log-concave densities in a cone $C$ is now given by
$$H_C = \int_0^\infty a\, x^{n-1} \exp(\alpha - \beta x)\, dx = a\, e^\alpha\, \beta^{-n}\, (n-1)! \quad (28)$$
To minimize this function it is best to use its logarithm:
$$D_A \to \mathbb{R}, \quad s \mapsto \log(a(s)) + \alpha(s) - n \log(\beta(s)) \quad (29)$$
For the standard normal distribution with density proportional to $f(\mathbf{x}) = \exp(-\sum x_i^2)$ we have $\tilde{f}(\mathbf{p}) = \log(f(s\tilde{\mathbf{t}})) = -\sum s^2 \tilde{t}_i^2 = -s^2$, where $\tilde{\mathbf{t}}$ is the center of the cone $C$, normalized so that $\sum \tilde{t}_i^2 = 1$. Thus by (6) we simply find $\alpha(s) = s^2$ and $\beta(s) = 2s$. Since $a(s)$ does not depend on $s$, (29) becomes $s^2 - n \log(s) + \text{constant}$, which is minimized at $s = \sqrt{n/2}$. But even for a normal distribution with an arbitrary covariance matrix this function becomes much more complicated.

## 3 The algorithm

The algorithm `TDRMV()` consists of two main parts: the construction of a hat function $h(\mathbf{x})$, and the generation of random tuples $\mathbf{X}$ with density proportional to this hat function. The first is done by the subroutine `SETUP()`, the second by the routine `SAMPLE()`.

**Algorithm 2** `TDRMV()` /* generate a random tuple for a given log-concave density */
**Input:** density $f$
/* Setup */
1: call `SETUP()`. /* construct a hat function $h(\mathbf{x})$ */
/* Generator */
2: repeat
3: $\mathbf{X} \leftarrow$ call `SAMPLE()`. /* generate a random tuple $\mathbf{X}$ with density prop. to $h(\mathbf{X})$ */
4: Generate a uniform random number $U$.
5: until $U \cdot h(\mathbf{X}) \leq f(\mathbf{X})$.
6: return $\mathbf{X}$.

To store $h(\mathbf{x})$ we need a list of all cones $C$. For each of these cones we need several data, which we store in the object `cone`. Notice that the variables $\mathbf{p}$, $\mathbf{g}$, $\alpha$, $\beta$, $a$ and $H_C$ depend on the choice of the touching point $\mathbf{p}$ and thus on $s$. Some of the parameters are only necessary for the setup.
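In C this object might be represented as follows; a minimal sketch in which the field names, the fixed dimension `N` and the linked-list pointer are illustrative assumptions (cf. the table below).

```c
/* Sketch of the cone object described in the following table.
   Field names and the fixed dimension N are illustrative assumptions. */
#define N 3                /* dimension n */

typedef struct cone {
    double t[N][N];        /* spanning vectors t_1,...,t_n          */
    double tbar[N];        /* center of cone, (1/n) * sum t_i       */
    double s;              /* location of the construction point    */
    double p[N];           /* construction point p = s * tbar       */
    double g[N];           /* sweep-plane direction, see (4)        */
    double alpha, beta;    /* marginal density parameters, see (6)  */
    double a;              /* coefficient, see (15)                 */
    double det;            /* det(t_1,...,t_n)                      */
    double Hc, Hcum;       /* volume below hat, see (17)/(28), and
                              cumulative volume for cone selection  */
    struct cone *next;     /* the hat is stored as a list of cones  */
} cone;
```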
**object 1 cone**

| PARAMETER | VARIABLE | DEFINITION |
|---|---|---|
| spanning vectors | $\mathbf{t}_1, \ldots, \mathbf{t}_n$ | |
| center of cone | $\tilde{\mathbf{t}}$ | $= \frac{1}{n} \sum \mathbf{t}_i$ |
| construction point | $\mathbf{p}$ | $= s \cdot \tilde{\mathbf{t}}$ |
| location of $\mathbf{p}$ | $s$ | |
| sweep plane | $\mathbf{g}$ | see (4) |
| marginal density | $\alpha, \beta$ | see (6) |
| coefficient | $a$ | see (15) |
| determinant of vectors | $\det$ | $= \det(\mathbf{t}_1, \ldots, \mathbf{t}_n)$ |
| volume below hat | $H_C, H_C^{cum}$ | see (17) and (28) |

*Remark.* To make the description of the algorithm more readable, some standard techniques are not given in detail.

### 3.1 Setup

The routine `SETUP()` consists of three parts: (H1) setup of the initial cones, (H2) triangulation of the initial cones, and (H3) calculation of the parameters. (H1) is simple (see §2.4); (H2) is done by subroutine `SPLIT()`. The main problem in (H3) is how to find the parameter $s$ (i.e. a proper construction point); this is done by subroutine `FIND()`. Minimizing (29) is very expensive: notice that for a given $s$ we have to compute all parameters that depend on $s$ before we can evaluate this function. Since it is not suitable to use the derivative of this function, a good choice for finding the minimum is Brent's algorithm (e.g. [FMM77]). To reduce the cost of finding a proper $s$, we do not minimize (29) for every cone. Instead we use the following procedure:

1. Make some triangulation steps as described in §2.4.
2. Compute $s$ for every cone $C$.
3. Continue with the triangulation. When a cone is split by barycentric subdivision of the corresponding simplex, both new cones inherit $s$ from the old simplex.

Our computational experience with various normal distributions shows that this reduces the setup costs enormously without raising the rejection constant too much. Using this procedure it might happen that $s$ does not give a proper touching point $\mathbf{p} = s\,\tilde{\mathbf{t}}$ (or that $H_C$ is too big; see the end of §2.4) after finishing all triangulation steps. Thus we have to check $s$ for every cone and continue with the triangulation in some cones if necessary.

### 3.2 Sampling

The subroutine `SAMPLE()` consists of four parts: (S1) select a cone $C$ (cf. the sketch below), (S2) generate a random variate with density proportional to the marginal density $h_{\mathbf{g}}$ (27), (S3) generate a uniform random tuple $\mathbf{U}$ on the standard simplex (i.e. $U_i \geq 0$ and $U_1 + \cdots + U_n = 1$), and (S4) compute the tuple on the intersection $Q(x)$ of the sweep plane with cone $C$. (S3) and (S4) are done by subroutine `SIMPLEX()`.

## 4 Possible variants

### 4.1 Subset of $\mathbb{R}^n$ as domain

Our experiments have shown that the basic algorithm even works for densities whose support $\text{supp} f = \{\mathbf{x} \in D : f(\mathbf{x}) \neq 0\}$ is a proper subset of $\mathbb{R}^n$. But since the hat $h(\mathbf{x})$ has support $\text{supp}\, h = \mathbb{R}^n$, the rejection constant might become very big.
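Returning to step (S1): a minimal sketch of cone selection by cumulative hat volumes, here with plain sequential search (the implementation uses a search table instead, see §5.1); the names are illustrative.

```c
/* Sketch of step (S1): select a cone with probability proportional to its
   volume below the hat, by sequential search over cumulative volumes.
   Hcum[c] holds the cumulative volume up to and including cone c. */
extern double uniform01(void);   /* U ~ U(0,1), assumed given */

int select_cone(const double Hcum[], int n_cones, double Htot)
{
    double U = Htot * uniform01();
    for (int c = 0; c < n_cones; c++)
        if (U < Hcum[c]) return c;   /* Hcum[c-1] <= U < Hcum[c] */
    return n_cones - 1;              /* guard against round-off */
}
```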
**Subroutine 3** `SETUP()` /* construct a hat function */
**Input:** level of triangulation for finding $s$, level of minimal triangulation
/* Initial cones */
1: for all tuples $(\delta_1, \ldots, \delta_n)$ with $\delta_i \in \{\pm 1\}$ do /* $2^n$ initial cones */
2: Append a new cone with $\delta_1 \mathbf{e}_1, \ldots, \delta_n \mathbf{e}_n$ as its spanning vectors to the list of cones.
/* Triangulation */
3: repeat
4: for all cones $C$ in list of cones do
5: call `SPLIT()` with $C$.
6: Update list of cones.
7: until level of triangulation for finding $s$ is reached
/* Find $s$ */
8: for all cones $C$ in list of cones do
9: call `FIND()` with $C$.
/* Continue triangulation */
10: repeat
11: for all cones $C$ in list of cones do
12: call `SPLIT()` with $C$.
13: Update list of cones.
14: until minimal level of triangulation is reached
/* Check $s$ */
15: repeat
16: for all cones $C$ in list of cones where $s$ is unknown do /* (13) violated */
17: call `SPLIT()` with $C$ and list of cones.
18: call `FIND()` with both new cones.
19: Update list of cones.
20: until no such cone was found
/* Compute all parameters */
21: for all cones $C$ in list of cones do
22: Compute all parameters of $C$.
/* Total volume below hat */
23: $H_{tot} \leftarrow 0$.
24: for all cones $C$ in list of cones do
25: $H_{tot} \leftarrow H_{tot} + H_C$.
26: $H_C^{cum} \leftarrow H_{tot}$. /* used for the $O(1)$ search algorithm */
/* End */
27: return list of cones, $H_{tot}$.

**Subroutine 4** `SPLIT()` /* split a given cone and update list of cones */
**Input:** cone $C$, list of cones
1: Find the lowest indices $i, j$ of all vectors of $C$.
2: Find the highest index $m$ of all vectors (of the triangulation).
3: $\mathbf{t}_{m+1} \leftarrow \frac{\mathbf{t}_i + \mathbf{t}_j}{\|\mathbf{t}_i + \mathbf{t}_j\|}$.
4: Append a new cone $C'$ to the list and copy the vectors and $s$ of $C$ into $C'$.
5: Replace $\mathbf{t}_i$ by $\mathbf{t}_{m+1}$ in $C$ and replace $\mathbf{t}_j$ by $\mathbf{t}_{m+1}$ in $C'$.
6: Replace $\det$ by $\frac{1}{\|\mathbf{t}_i + \mathbf{t}_j\|} \cdot \det$ in $C$ and $C'$.
7: return list of cones.

**Subroutine 5** `FIND()` /* find a proper touching point */
**Input:** cone $C$
/* Bracketing a minimum */
1: Search for an $s_1 \in D_A$. return failed if not successful.
2: Search for $s_0, s_2$ (use property (21)). return failed if not successful.
/* Find minimum */
3: Find $s$ using Brent's algorithm (use (29)). return failed if not successful.

**Subroutine 6** `SAMPLE()` /* generate a random tuple with density proportional to hat */
**Input:** $H_{tot}$, list of cones
/* Find cone */
1: Generate a uniformly $[0, H_{tot}]$ distributed random variate $U$.
2: Find $C$ such that $H^{cum}_{C_{pred}} \leq U < H^{cum}_C$. ($C_{pred}$ is the predecessor of $C$ in the list of cones.)
/* Find sweep plane */
3: Generate a gamma$(n, \beta)$ distributed random variate $G$.
/* Generate uniformly distributed point in $Q(G)$ and return tuple */
4: $\mathbf{X} \leftarrow$ call `SIMPLEX()` with $C$ and $G$.
5: return $\mathbf{X}$.

**Subroutine 7** `SIMPLEX()` /* generate a uniformly distributed tuple on a simplex */
**Input:** cone $C$, $x$ (location of the sweep plane)
/* Generate $n$ uniform spacings $U_i$ */
1: Generate iid uniform $[0, 1]$ random variates $U_i$, $i = 1, \ldots, n - 1$.
2: $U_n \leftarrow 1$.
3: Sort: $0 \leq U_1 \leq \ldots \leq U_{n-1} \leq U_n = 1$.
4: for $i = n, \ldots, 2$ do
5: $U_i \leftarrow U_i - U_{i-1}$.
/* Generate uniformly distributed point $\mathbf{X}$ in $Q(x)$ */
6: $\mathbf{X} \leftarrow \sum_{i=1}^{n} U_i \frac{x}{\langle \mathbf{g}, \mathbf{t}_i \rangle} \mathbf{t}_i$.
/* Return random tuple */
7: return $\mathbf{X}$.
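Subroutine 7 might be coded in C as follows; a minimal sketch in which the fixed dimension `N`, `uniform01()` and the argument layout are illustrative assumptions.

```c
/* Sketch of SIMPLEX() (steps 1-7): a uniform point on the intersection
   Q(x) of the sweep plane with cone C, via uniform spacings. */
#include <stdlib.h>

#define N 3                       /* dimension n, fixed for this sketch */
extern double uniform01(void);    /* U ~ U(0,1), assumed given */

static int cmp_dbl(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* t[i] = spanning vector t_{i+1}, gt[i] = <g, t_{i+1}>, x = sweep position */
void simplex_point(const double t[N][N], const double gt[N], double x,
                   double X[N])
{
    double U[N];
    for (int i = 0; i < N - 1; i++) U[i] = uniform01();   /* step 1 */
    qsort(U, N - 1, sizeof U[0], cmp_dbl);                /* step 3 */
    U[N-1] = 1.0;                                         /* step 2 */
    for (int i = N - 1; i >= 1; i--) U[i] -= U[i-1];      /* spacings sum to 1 */

    for (int k = 0; k < N; k++) X[k] = 0.0;
    for (int i = 0; i < N; i++)           /* X = sum U_i * x/<g,t_i> * t_i */
        for (int k = 0; k < N; k++)
            X[k] += U[i] * (x / gt[i]) * t[i][k];
}
```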
**Pyramids.** If the given domain $D$ is a proper subset of $\mathbb{R}^n$ (that is, we have constraints for $\text{supp} f$), the acceptance probability can be increased by restricting the domain of $h$ to the domain $D$. (The domain is the set of points where the density $f$ is defined; obviously $\text{supp} f \subseteq D$. Notice that we have to provide the domain $D$ to the algorithm, whereas the support of $f$ is not known.) Thus we replace (some) cones by pyramids. Notice that the base of such a pyramid must be perpendicular to the direction $\mathbf{g}$; hence we first have to choose a construction point $\mathbf{p}$ and then compute the height of the pyramid. The union of these pyramids (and of the remaining cones) must cover $D$. Whenever we get a random point outside the domain $D$ we reject it. It is clear that continued triangulation decreases the volume between $D$ and the enclosing set.

**Polytopes.** We only deal with the case where $D$ is an arbitrary polytope, given by a set of linear inequalities.

**Height of pyramid.** The height is the maximum of $\langle \mathbf{g}, \mathbf{x} \rangle$ on $C \cap D$. Because of our restriction to polytopes this is a linear programming problem. Using the spanning vectors $\mathbf{t}_1, \ldots, \mathbf{t}_n$ as a basis of $\mathbb{R}^n$, it can be solved by means of the simplex algorithm in at most $k$ pivot steps (for a simple polytope), where $k$ is the number of constraints for $D$.

**Marginal density and volume below hat.** The marginal distribution is a truncated gamma distribution with domain $[0, u]$, where $u$ is the height of the pyramid $C$. Instead of (28) and (29) we find for pyramids
$$H_C = \int_0^u a\, x^{n-1} \exp(\alpha - \beta x)\, dx = a\, e^\alpha\, \beta^{-n}\, \gamma(n, \beta u) \hspace{1cm} (30)$$
and
$$D_A \rightarrow \mathbb{R}, \quad s \mapsto \log(a(s)) + \alpha(s) - n \log(\beta(s)) + \log(\gamma(n, \beta(s)\, u(s))) \hspace{1cm} (31)$$
where $\gamma(n, x) = \int_0^x t^{n-1} e^{-t}\, dt$ is proportional to the incomplete gamma function and can be computed by means of formula (3.351) in [GR65]. Computing the height $u(s)$ is rather expensive, so it is recommended to use (29) instead of the exact function (31) for finding a touching point in a pyramid $C$. Computational experiments with the standard normal distribution have shown that the effect on the rejection constant is rather small (less than 5%).

### 4.2 Density not differentiable

For the construction of the hat function we need a tangent plane at every $\mathbf{x} \in D$. Differentiability of the density is not really required; it is sufficient to have a subroutine that returns the normal vector of a tangent hyperplane (which is "$\nabla f(\mathbf{x})$" if $f \in C^1$) for every $\mathbf{x}$. However, for densities $f$ that are not differentiable, the function in (29) might behave badly. Notice that $f$ must be continuous in the interior of $\text{supp} f$, since $\log \circ f$ is concave.

### 4.3 Indicator Functions

If $f(\mathbf{x}) = f_0$ is (a multiple of) the indicator function of a convex set, then we can choose an arbitrary point of the convex set as the mode (i.e. as the origin of our construction) and set $\mathbf{g} = \tilde{\mathbf{t}}$, the center of the cone $C$ (see (4) in §2.2). Notice that the marginal density in (16) now reduces to $h_{\mathbf{g}}(x) = a\, f_0\, x^{n-1}$; neither of the parameters $\alpha$ and $\beta$ depends on the choice of the touching point $\mathbf{p}$. Of course we have to provide a compact domain for the density. Using indicator functions we can generate uniformly distributed random variates on arbitrary convex sets.
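In this case the sweep-plane location can be sampled by plain inversion: the marginal is proportional to $x^{n-1}$ on $[0, u]$ (with $u$ the height of the pyramid), whose distribution function is $(x/u)^n$, so $x = u\, U^{1/n}$. A minimal sketch, with `uniform01()` assumed given:

```c
/* Sketch: sweep-plane location for an indicator density, by inversion.
   The marginal is proportional to x^(n-1) on [0, u], CDF (x/u)^n. */
#include <math.h>

extern double uniform01(void);   /* U ~ U(0,1), assumed given */

double radial_indicator(double u, int n)
{
    return u * pow(uniform01(), 1.0 / (double)n);
}
```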
### 4.4 Mode not in Origin

It is obvious that the method works when the mode $\mathbf{m}$ is an arbitrary point in $D = \mathbb{R}^n$. If the mode is unknown, we can use common numerical methods for finding the maximum of $f$, since $T(f(\mathbf{x}))$ is concave (see e.g. [Rao84]). Notice that the exact location of the mode is not really required; the algorithm even works if the center for the construction of the cones is not close to the mode. We then just get a hat with a worse rejection constant.

### 4.5 Add mode as construction point

Since we have only one construction point in each cone, the rejection constant is bounded from below; thus only a few steps of triangulating $S^{n-1}$ make sense. To get a better hat function we can use the mode $\mathbf{m}$ of $f$ as an additional construction point. The hat function is then the minimum of $f(\mathbf{m})$ and the original hat. The cone is split into two parts by a hyperplane $F(b)$ with different marginal densities, where $b$ is given by $T^{-1}(\alpha - \beta b) = f(\mathbf{m})$. The marginal density is then
$$h_{\mathbf{g}}(x) = a\, x^{n-1} \cdot \begin{cases} f(\mathbf{m}) & \text{for } x \leq b \\ T^{-1}(\alpha - \beta x) & \text{for } x > b \end{cases} \quad (32)$$
Notice that we use the same direction $\mathbf{g}$ for the sweep plane in both parts. We have to compute the volume below the hat for both parts; these are given by
$$H_C^1 = \frac{a\, f(\mathbf{m})\, b^n}{n} \quad \text{and} \quad H_C^2 = \int_b^\infty a\, x^{n-1}\, T^{-1}(\alpha - \beta x)\, dx \quad (33)$$

### 4.6 More construction points per cone

A way to improve the hat function is to use more than one (or two) construction points. But this method has some disadvantages, and we do not recommend it. The cones are divided into several pieces of a pyramid (see figure 3). The lower and upper bases of these pieces must be perpendicular to the corresponding direction $\mathbf{g}$. These vectors $\mathbf{g}$ are determined by the gradients of the transformed density at the construction points in the pieces; thus the $\mathbf{g}$ (may) differ, and hence the pieces must overlap. This increases the rejection constant. Moreover, it is not quite clear how to find such pieces. For the univariate case appropriate methods exist (e.g. [DH94]), but in the multivariate case these are not suitable. Adaptive rejection sampling (introduced in [GW92]), as used in [Hör95b, LH98], is not a really good choice either. The reason is simple: the cones are fixed and the construction points are always placed at the centers of these cones. Thus with adaptive rejection sampling we would select the new construction points according to a distribution given by the marginal density of $(h - f)|_C$, and this marginal density is not zero at the existing construction points.

### 4.7 Squeezes

We can make a very simple kind of squeeze: let $x_0 = 0 < x_1 < x_2 < \ldots < x_k$. Compute the minima of the transformed density on $Q(x_i)$ for all $i$; since $\tilde{f}$ is concave, these minima are attained at the vertices of the corresponding simplices. The squeeze $s_i(x)$ for $x_{i-1} \leq x \leq x_i$ is then given by (see the sketch below)
$$s_i(x) = T^{-1}\!\left( \frac{\tilde{f}(x_i) - \tilde{f}(x_{i-1})}{x_i - x_{i-1}}\, (x - x_{i-1}) + \tilde{f}(x_{i-1}) \right)$$
where $\tilde{f}(x_i)$ denotes the minimum of $\tilde{f}(\mathbf{x})$ on $Q(x_i)$. The setup of these squeezes is rather expensive and only useful if many random points of the same distribution must be generated.
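A minimal C sketch of evaluating this squeeze, assuming the log-concave case ($T^{-1} = \exp$); the argument layout (`xs[0] = 0`, precomputed minima `ft[]`) is illustrative.

```c
/* Sketch of the squeeze in Section 4.7: linear interpolation of the
   transformed density between the minima on Q(x_{i-1}) and Q(x_i),
   transformed back with T^{-1} = exp (log-concave case). */
#include <math.h>

/* xs[0..k] increasing with xs[0] = 0; ft[i] = min of the transformed
   density on Q(xs[i]) */
double squeeze(const double xs[], const double ft[], int k, double x)
{
    for (int i = 1; i <= k; i++) {
        if (x <= xs[i]) {
            double slope = (ft[i] - ft[i-1]) / (xs[i] - xs[i-1]);
            return exp(slope * (x - xs[i-1]) + ft[i-1]);
        }
    }
    return 0.0;   /* no squeeze beyond x_k */
}
```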
### 4.8 $T_c$-concave densities

A family $T_c$ of transformations that fulfill conditions (T1)–(T4) is introduced in [Hör95a]. Let $c \leq 0$. Then we set

| $c$ | domain | $T_c(x)$ | $T_c^{-1}(x)$ | $T'_c(x)$ |
|-----|--------|----------|---------------|-----------|
| $c = 0$ | $\mathbb{R}^+ \to \mathbb{R}$ | $\log(x)$ | $\exp(x)$ | $x^{-1}$ |
| $-\frac{1}{n} < c < 0$ | $\mathbb{R}^+ \to \mathbb{R}^-$ | $-x^c$ | $(-x)^{1/c}$ | $-c\, x^{c-1}$ |

It can easily be verified that condition (T4) (i.e. the volume below the hat is bounded) holds if and only if $-\frac{1}{n} < c \leq 0$. Moreover, for $c < 0$ we must have $T_c(h(\mathbf{x}))|_C < 0$; to ensure the negativity of the transformed hat we always have to choose the mode $\mathbf{m}$ as a construction point (see §4.5). In [Hör95a] it was shown that if a density $f$ is $T_c$-concave, then it is $T_{c_1}$-concave for all $c_1 \leq c$.

The case $c = 0$, $T_0(x) = \log(x)$, has already been described in §2.6. For the case $c < 0$ the marginal density function (16) is now given by
$$h_{\mathbf{g}}(x) = a\, x^{n-1} \cdot \begin{cases} f(\mathbf{m}) & \text{for } x \leq b \\ (\beta x - \alpha)^{\frac{1}{c}} & \text{for } x > b \end{cases} \quad (35)$$
where $b$ is given by $(\beta b - \alpha)^{\frac{1}{c}} = f(\mathbf{m})$. To our knowledge no special generator for this distribution is known. (The part for $x > b$ looks like a beta-prime distribution (see [JKB95]), but here $\alpha, \beta > 0$.) By assumption $(\beta x - \alpha) > 0$ for $x > b$ and $\frac{1}{c} < -n$; hence it can easily be seen that the marginal density is $T_c$-concave. Therefore we can use the universal generator from [Hör95a].

## 5 Computational Experience

### 5.1 A C implementation

A test version of the algorithm was written in C and is available via anonymous ftp [Ley98]. It can handle the following densities $f$:

- $f$ is log-concave but not constant on its support.
- The domain $D$ is either $\mathbb{R}^n$ or an arbitrary rectangle $[a_1, b_1] \times \ldots \times [a_n, b_n]$.
- The mode $\mathbf{m}$ is arbitrary. But if $D \neq \text{supp} f$, then $\mathbf{m}$ must be an interior point of $\text{supp} f$, not "too close" to the boundary of $\text{supp} f$.

We used two lists for storing the spanning vectors and the cones (with pointers into the list of vectors). For the setup we have to store the edges $(i, j)$ for computing the new vertices; this is done temporarily in a hash table, where the first index $i$ is used as the hash index.

The setup step is modified in the case of a rectangular domain. If the mode is near the boundary of $D$, we use the nearest point on the boundary (if possible a vertex) as the center for constructing the cones. If this point is on the boundary, we can easily eliminate all those initial cones that do not intersect $D$; if it is a vertex of the rectangle, only one initial cone remains.

For finding the mode of $f$ we used the pattern search method of Hooke and Jeeves [HJ61, Rao84] as implemented in [Kau63, BP66], since it can deal with both unbounded and bounded support of $f$ without explicit constraints. For finding the minimum of (29) we use Brent's algorithm as described in [FMM77]. The implementation contains some parameters to adjust these routines to a particular density $f$. For finding a cone $C$ in subroutine `SAMPLE()` we used an $O(1)$ search algorithm with a search table (binary search is slower). For generating the gamma distributed random number $G$ we used the algorithm of [AD82] for the case of an unbounded domain.
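Since the shape parameter $n$ is an integer here, a simple (though for large $n$ not the fastest) stand-in for the generator of [AD82] is the sum of $n$ exponential variates; a minimal sketch:

```c
/* Sketch: gamma(n, beta) variate for integer shape n, as a sum of n
   exponential variates (Erlang distribution).  Numerically sensible
   only for moderate n, since the product of uniforms may underflow. */
#include <math.h>

extern double uniform01(void);   /* U ~ U(0,1), assumed given */

double gamma_int(int n, double beta)
{
    double prod = 1.0;
    for (int i = 0; i < n; i++) prod *= uniform01();
    return -log(prod) / beta;    /* sum of n Exp(beta) variates */
}
```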
When $D$ is a rectangle, we used transformed density rejection [Hör95a] to generate from the truncated gamma distributions. Here it is only necessary to generate an optimal hat function for the truncated gamma distribution with shape parameter $n$ and scale parameter 1 on the domain $(0, u_{\text{max}})$, where $u_{\text{max}}$ is the maximal value of $\text{height} \cdot \beta$ over all cones. The optimal touching points for this gamma distribution are computed by means of the algorithm of [DH94].

The code was written for testing various variants of the algorithm and is not optimized for speed; the data shown in the tables below therefore give just an idea of the performance of the algorithm. We have tested the algorithm with various multivariate log-concave distributions in several dimensions. All tests were run on a PC with a P90 processor running Linux, using the GNU C compiler.

### 5.2 Basic version: unbounded domain, mode in origin

**Random points with density proportional to the hat function.** The time for the generation of random points below the hat turns out to be almost linear in the dimension $n$. Table 1 shows the average time for the generation of a single point. For comparison we give the time for generating $n$ normally distributed numbers by the Box–Muller method [BM58] (which gives a standard multinormal distributed point with density proportional to $f(\mathbf{x}) = \exp(-\sum_{i=1}^{n} x_i^2)$; a sketch of this method follows after Table 1). For computing the hat function we used only the initial cones for the standard multinormal density.

| $n$ | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| hat function | 14.6 | 17.1 | 21.3 | 24.9 | 30.2 | 34.6 | 41.5 | 45.7 | 55.6 |
| multinormal | 7.2 | 10.8 | 14.4 | 18.0 | 21.6 | 25.2 | 28.8 | 32.4 | 36.0 |

Table 1: average time for the generation of one random point (in µs)
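For reference, a minimal sketch of the Box–Muller method used in this comparison; `uniform01()` is assumed given.

```c
/* Sketch of the Box-Muller method [BM58]: two independent standard
   normal variates from two uniform variates. */
#include <math.h>

extern double uniform01(void);   /* U ~ U(0,1), assumed given */

void box_muller(double *z1, double *z2)
{
    double r   = sqrt(-2.0 * log(uniform01()));
    double phi = 6.28318530717958648 * uniform01();   /* 2*pi */
    *z1 = r * cos(phi);
    *z2 = r * sin(phi);
}
```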
**Random points for the given density.** The real time needed for the generation of a random point for a given log-concave density depends on the rejection constant and on the cost of evaluating the density. Table 2 shows the acceptance probabilities and the times needed for the generation of standard multinormal distributed points. Notice that these data do not include the time for setting up the hat function.

| $n$ | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| number of cones | $2^5$ | $2^8$ | $2^{11}$ | $2^{13}$ | $2^{14}$ | $2^{15}$ | $2^{16}$ | $2^{16}$ | $2^{16}$ |
| acceptance (%) | 73.3 | 71.3 | 67.9 | 60.9 | 49.5 | 40.7 | 33.4 | 19.6 | 10.6 |

Table 2: acceptance probability and average time for the generation of standard multinormal distributed points

**Setup.** When FIND() is called after the triangulation has been done, the time needed for the computation of the hat function depends linearly on the number of cones. (Thus FIND() is the most expensive part of SETUP().) Table 3 shows the situation for the multinormal distribution with density proportional to $f(\mathbf{x}) = \exp(-\sum_{i=1}^{4} i \cdot x_i^2)$, $n = 4$. It demonstrates the effects of continued barycentric subdivision of the "oldest" edge (see §2.4) on the number of cones, the acceptance probability, and the cost of generating a random point with density proportional to the hat function (i.e. without rejection) and proportional to the given density. Furthermore it shows the total time for the setup, i.e. for computing the parameters of the hat function (in ms), and the setup time per cone (in µs). (The increase in the time needed for generating points below the hat for large numbers of cones is due to memory access effects.)

| subdivisions | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|--------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| cones | $2^4$ | $2^5$ | $2^6$ | $2^7$ | $2^8$ | $2^9$ | $2^{10}$ | $2^{11}$ | $2^{12}$ | $2^{13}$ | $2^{14}$ |
| acceptance (%) | 26.2 | 34.1 | 41.5 | 48.1 | 55.3 | 60.1 | 64.1 | 66.6 | 68.5 | 69.7 | 70.5 |
| hat (µs) | 24.8 | 24.8 | 24.9 | 25.0 | 25.2 | 25.3 | 25.7 | 26.1 | 27.1 | 27.6 | 28.4 |
| density (µs) | 94.2 | 73.0 | 59.9 | 52.1 | 45.6 | 42.9 | 40.1 | 39.3 | 39.5 | 39.6 | 40.4 |
| setup (ms) | 11.2 | 22.6 | 47.2 | 92.2 | 182 | 366 | 744 | 1549 | 3120 | 6254 | 12520 |
| setup/cone (µs) | 700 | 706 | 738 | 720 | 713 | 714 | 727 | 756 | 762 | 763 | 764 |

Table 3: time for computing the hat function for the multinormal distribution ($f(\mathbf{x}) = \exp(-\sum_{i=1}^{4} i \cdot x_i^2)$, $n = 4$)

If we do not run FIND() for every cone of the triangulation but use the method described in §3.1, we can reduce the cost of constructing the hat function. Table 4 gives an idea of this reduction for the multinormal distribution with density proportional to $f(\mathbf{x}) = \exp(-\sum_{i=1}^{4} i \cdot x_i^2)$, $n = 4$; it shows the time for constructing the hat function subject to the number of cones for which FIND() is called.

| FIND() in subdivision | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|-----------------------|-----|-----|-----|-----|-----|-----|-----|
| cones (FIND()) | $2^4$ | $2^5$ | $2^6$ | $2^7$ | $2^8$ | $2^9$ | $2^{10}$ |
| acceptance (%) | 56.4 | 58.7 | 60.5 | 62.1 | 63.2 | 63.7 | 64.1 |
| setup (ms) | 66.2 | 76.8 | 100.2 | 141.6 | 224.7 | 393.6 | 744 |
| setup/cone (total) (µs) | 65 | 75 | 98 | 138 | 219 | 384 | 727 |

Table 4: time for computing the hat function for the multinormal distribution with "inherited" construction points ($f(\mathbf{x}) = \exp(-\sum_{i=1}^{4} i \cdot x_i^2)$, $n = 4$, $2^{10}$ cones)

According to Table 4 the acceptance probability is not very bad if we run FIND() only for the initial cones. But this is not true in general: the acceptance probability can become extremely poor if the level sets of the density are very "skinny". Table 5 demonstrates this effect for the density proportional to $f(\mathbf{x}) = \exp(-x_1^2 - 10^{-8} x_2^2)$, $n = 2$.

| FIND() in subdivision | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----------------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| acceptance (%) | 0.000166 | 0.000638 | 0.00253 | 0.0109 | 0.0403 | 0.161 | 0.635 | 2.42 | 7.94 | 14.82 |

Table 5: acceptance probability for the multinormal distribution with "inherited" construction points ($f(\mathbf{x}) = \exp(-x_1^2 - 10^{-8}x_2^2)$, $n = 2$, $2^{12}$ cones)
Finally, Table 6 demonstrates that the increase in the time for constructing the hat function with increasing dimension $n$ is mainly due to the increase in the number of cones. Notice that we start with $2^n$ initial cones; furthermore, we have to make $n - 1$ consecutive subdivisions to shorten every edge of a simplex that defines an initial cone. Thus the number of cones increases exponentially.

| $n$ | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| cones | $2^7$ | $2^8$ | $2^9$ | $2^{10}$ | $2^{11}$ | $2^{12}$ | $2^{13}$ | $2^{14}$ | $2^{15}$ |
| acceptance (%) | 73.6 | 70.7 | 60.1 | 45.6 | 31.2 | 22.3 | 14.8 | 9.33 | 5.77 |
| setup (ms) | 69.1 | 157.8 | 365.8 | 830.4 | 1899 | 4141 | 9566 | 20609 | 46547 |
| setup/cone (µs) | 540 | 616 | 714 | 811 | 927 | 1011 | 1170 | 1250 | 1421 |

Table 6: time for computing the hat function for the multinormal distribution ($f(\mathbf{x}) = \exp(-\sum_{i=1}^{n} i \cdot x_i^2)$, 5 subdivisions of the initial cones)

If the covariance matrix of the multinormal distribution is not diagonal and the ratio of the largest to the smallest eigenvalue is large, then we cannot use the initial cones only and have to make several subdivisions of the cones. Because of the above considerations the necessary number of cones explodes with increasing $n$; thus in this case the method cannot be used for large $n$. (Suppose we have to shorten every edge of each simplex: then we have $2^3 = 8$ cones for $n = 2$, but we need $2^{19} = 524\,288$ cones for $n = 10$.)

**Tests.** We ran a $\chi^2$-test with the density proportional to $\exp(-\sum_{i=1}^{3} i \cdot x_i^2)$, $n = 3$, to validate the implementation. For all other densities we compared the observed rate of acceptance with the expected acceptance probability.

**Comparison with algorithm [LH98].** The code for algorithm [LH98] is much longer (and thus contains more bugs). Its setup is much slower, and it needs 11750 µs to generate one multinormal distributed random point in dimension 4 (versus 38 µs in Table 2 for TDRMV()).

### 5.3 Rectangular domain

Normal densities restricted to an arbitrary rectangle show a performance similar to that of the corresponding unrestricted densities, except for the acceptance probability, which is worse since the domain of the hat $h$ is a superset of the domain of the density $f$.

### 5.4 Quality

The quality of non-uniform random number generators based on transformation techniques is an open problem even in the univariate case (see e.g. [Hör94] for a first approach); it depends on the underlying uniform random number generator. The situation is more serious in the multivariate case. Notice that this new algorithm requires more than $n + 2$ uniform random numbers for every random point. We cannot give an answer to this problem here, but it should be clear that e.g. RANDU (formerly part of IBM's Scientific Subroutine Package, and now famous for its devastating defect in three dimensions: consecutive triples $(x_i, x_{i+1}, x_{i+2})$ lie in just fifteen parallel planes; see e.g. [LW97]) may result in a generator of poor quality.

### 5.5 Some Examples

We have tested our algorithm in dimensions $n = 2$ to $n = 8$ with densities proportional to
$$f_1(\mathbf{x}) = \exp(-\textstyle\sum a_i x_i^2)$$
$$f_2(\mathbf{x}) = \exp(-\textstyle\sum a_i |x_i|)$$
$$f_3(\mathbf{x}) = \max(0, \textstyle\prod (1 - a_i x_i^2))$$
$$f_4(\mathbf{x}) = \max(0, \textstyle\prod (1 - |x_i|^{a_i}))$$
$$f_5(\mathbf{x}) = \max(0, \textstyle\sum (1 - a_i x_i^2))$$
where $a_i > 0$. The domain was $\mathbb{R}^n$ as well as various rectangles. We also used densities proportional to $f_i(U\mathbf{x} + \mathbf{b})$, where $U$ is an orthonormal transformation and $\mathbf{b}$ a vector, to test distributions with non-diagonal correlation matrix and arbitrary mode. (A sketch of how such a test density enters the code is given below.)
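As an illustration, $f_1$ coded as the pair of callbacks the hat construction needs (the transformed density and its gradient); the names, the fixed dimension `N` and the values of $a_i$ are illustrative assumptions.

```c
/* Sketch: the test density f_1(x) = exp(-sum a_i x_i^2) as the pair
   (log-density, gradient of the log-density).  N and a[] are illustrative. */
#define N 4
static const double a[N] = {1.0, 2.0, 3.0, 4.0};   /* a_i > 0 */

/* transformed density T(f_1(x)) = log f_1(x) = -sum a_i x_i^2 (concave) */
double logf1(const double x[N])
{
    double s = 0.0;
    for (int i = 0; i < N; i++) s += a[i] * x[i] * x[i];
    return -s;
}

/* gradient of the transformed density, used for the tangent planes */
void grad_logf1(const double x[N], double g[N])
{
    for (int i = 0; i < N; i++) g[i] = -2.0 * a[i] * x[i];
}
```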
The algorithm works well for densities $f_3$, $f_4$ and $f_5$, both with $D = \mathbb{R}^n$ and with $D$ a rectangle enclosing the support of $f_i$. Although some of these densities are not $C^1$, the FIND() routine works. Problems arise if the level sets of the density have "corners", i.e. if $\mathbf{g}$ is unstable when the touching point $\mathbf{p}$ is varied a little: then there are some cones (those containing these "corners") with huge volume $H_C$, and further triangulation is necessary. If the dimension is high ($n \gtrsim 5$), too many cones might be necessary. The optimization algorithm for finding the mode fails if we use a starting point outside the support of $f$. The code has some parameters for adjusting the algorithm to the given density; for example, it requires some testing to find the optimal number of cones and the optimal level of subdivisions for calling FIND().

### 5.6 Résumé

The presented algorithm is a suitable method for sampling from log-concave (and $T_c$-concave) distributions. It works well for all tested log-concave densities if the dimension is low ($n \lesssim 5$) or if the correlation is not too high. Restrictions of these densities to compact polytopes are possible. The setup time is small for small dimensions but increases exponentially in $n$. The speed of generating random points is quite high even for $n \geq 6$. Due to the large number of cones in high dimensions, the program requires a lot of computer memory (typically 2–10 MB). Although the developed algorithm is not a real black-box method, it is easily adjusted to a large class of log-concave densities. Examples for which the algorithm works are the multivariate normal distribution and the multivariate Student distribution (with transformation $T(x) = -x^c$) with arbitrary mean vector and covariance matrix, conditioned to an arbitrary compact polytope. However, for higher dimensions the ratio of the largest and smallest eigenvalues of the covariance matrix should not be "too big".

### Acknowledgments

The author wishes to express his appreciation for the help rendered by Jörg Lenneis, who gave many hints for the implementation of the algorithm. The author also thanks Gerhard Derflinger and Wolfgang Hörmann for helpful conversations and their interest in this work.

### References

[AD82] J. H. Ahrens and U. Dieter. Generating gamma variates by a modified rejection technique. *Comm. ACM*, 25(1):47–54, January 1982.
[Ahr95] J. H. Ahrens. A one-table method for sampling from continuous and discrete distributions. *Computing*, 54:127–146, 1995.
[BM58] G. E. P. Box and M. E. Muller. A note on the generation of random normal deviates. *Annals of Math. Stat.*, 29(2):610–611, 1958.
[BP66] M. Bell and M. C. Pike. Remark on algorithm 178. *Comm. ACM*, 9(9):684–686, September 1966.
[Dag88] J. Dagpunar. *Principles of Random Variate Generation*. Clarendon Press, Oxford, 1988.
[Dev86] L. Devroye. *Non-Uniform Random Variate Generation*. Springer-Verlag, New York, 1986.
[Dev97] L. Devroye. Random variate generation for multivariate densities. *ACM TOMACS*, 7(4):447–477, October 1997.
[DH94] G. Derflinger and W. Hörmann. The optimal selection of hat functions for rejection algorithms. Manuscript, private communication, 1994.
[ES97] M. Evans and T. Swartz. Random variable generation using concavity properties of transformed densities. *J. of Comp. and Graph. Stat.*, 1997. To appear.
[FMM77] G. E. Forsythe, M. A. Malcolm, and C. B. Moler. *Computer Methods for Mathematical Computations*. Prentice-Hall Series in Automatic Computation. Prentice-Hall, Englewood Cliffs, NJ, 1977.
[GR65] I. S. Gradshteyn and I. M. Ryzhik. *Table of Integrals, Series, and Products*. Academic Press, 1965.
[Grü67] B. Grünbaum. *Convex Polytopes*. Interscience, 1967.
[GW92] W. R. Gilks and P. Wild. Adaptive rejection sampling for Gibbs sampling. *Appl. Statistics*, 41:337–348, 1992.
[HD94] W. Hörmann and G. Derflinger. Universal generators for correlation induction. In R. Dutter and W. Grossmann, editors, *Compstat, Proceedings in Computational Statistics*, pages 52–57, Heidelberg, 1994. Physica-Verlag.
[HJ61] R. Hooke and T. A. Jeeves. "Direct search" solution of numerical and statistical problems. *J. ACM*, 8(2):212–229, 1961.
[Hör94] W. Hörmann. The quality of non-uniform random numbers. In H. Dyckhoff et al., editors, *Operations Research Proceedings 1993*, pages 329–335, Berlin, 1994. Springer-Verlag.
[Hör95a] W. Hörmann. A rejection technique for sampling from $T$-concave distributions. *ACM Trans. Math. Software*, 21(2):182–193, 1995.
[Hör95b] W. Hörmann. A universal generator for bivariate log-concave distributions. Preprint, 1995.
[JKB95] N. L. Johnson, S. Kotz, and N. Balakrishnan. *Continuous Univariate Distributions*, volume 2. Wiley-Interscience, New York, 2nd edition, 1995.
[Joh87] M. E. Johnson. *Multivariate Statistical Simulation*. John Wiley & Sons, New York, 1987.
[Kau63] A. F. Kaupe Jr. Algorithm 178: Direct search. *Comm. ACM*, 6(6):313–314, June 1963.
[Law91] J. Lawrence. Polytope volume computation. *Math. Comput.*, 57(195):259–271, 1991.
[Ley98] J. Leydold. *TDRMV — Generating multivariate log-concave densities with transformed density rejection*. Institut für Statistik, Wirtschaftsuniversität Wien, 1998. Code available at ftp://statistik.wu-wien.ac.at/src/tdrmv/.
[LH98] J. Leydold and W. Hörmann. A sweep plane algorithm for generating random tuples. *Math. Comp.*, 1998. To appear.
[LW97] H. Leeb and S. Wegenkittl. Inversive and linear congruential pseudorandom number generators in empirical tests. *ACM TOMACS*, 7(2):272–286, 1997.
[Pré73] A. Prékopa. On logarithmic concave measures and functions. *Acta Sci. Math. Hungarica*, 34:335–343, 1973.
[Rao84] S. S. Rao. *Optimization: Theory and Applications*. Wiley Eastern Ltd., New Delhi, 2nd edition, 1984.
[SV87] S. Stefănescu and I. Văduva. On computer generation of random vectors by transformations of uniformly distributed vectors. *Computing*, 39:141–153, 1987.
[Tod76] M. J. Todd. *The Computation of Fixed Points and Applications*, volume 124 of *Lecture Notes in Economics and Mathematical Systems*. Springer, Berlin, 1976.
[Tod78] M. J. Todd. Improving the convergence of fixed-point algorithms. *Mathematical Programming Study*, 7:151–169, 1978.
[WGS91] J. C. Wakefield, A. E. Gelfand, and A. F. M. Smith. Efficient generation of random variates via the ratio-of-uniforms method. *Statist. Comput.*, 1:129–133, 1991.
[Zie95] G. M. Ziegler. *Lectures on Polytopes*, volume 152 of *Graduate Texts in Mathematics*. Springer-Verlag, New York, 1995.
FARRAGUT HIGH SCHOOL Photos Courtesy of Bob McEachern Photographers Yingan Abudureheman Samantha Ackermann Erika Aguileta Mohammed Alzawami Jacob Allston Leah Alobrooks Justin Alton Daniel Amarin Ashlynn Amato Amy Anderson Emily Anderson Andrew Applegate William Alec Arnold Eloise Arp Maggie Atchley Bakhtiyar Adkhamjanov Alexander Au Morgan Ayward Nathan Baakko Huangyu Bask Courtney Baggs Robin Barrow Isaac Barnes Nicholas Barnes Jack Barnett Grace Bass Nicholas Baughman Joshua Beaver Inshira Bediako Samantha Beekman Dillon Biltmeyer Vanessa Binder Jacob Blasing Owen Blake Isaac Blanton Cameron Boothe Roxie Boride Caitlin Brouff Connor Brouff Carolina Brown Ashley Bradley Joseph Brackeney Robert Browster Josie Brooker Chris Brooks Elijah Brooks Mary Brooks Robert Browder III William Brown Brooke Buckner Will Buschley Jon Buell Hope Burkett Sam Burns Katelyn Butcher Alex Butler Helen Butler Spencer Byrd Gabriel Callaghan-Edgar Christian Callahan Katherine Campbell Olivia Campbell Andrew Pierce Campen Genesis Canedy-Pickin Lara Capps Morgan Carbaugh Zachary Caro Jazmin Carson Kyle Carter Giuliana Castillo Shendan Caughorn Nathaniel Chandler Sydney Chapman Benjamin Chestatham Kevin Chen Sydney Cherney Muri Chuong Brooke Christian Owen Clark Riley Clayton Amy Cloud Gavin Clower Grayson Clower Emma Cobb Kamtria Coffey Jacobi Cohen CONGRATULATIONS CLASS OF 2019 good to know SouthEast bank coming soon! BEARDEN | FARRAGUT | HARDIN VALLEY | LENOIR CITY | FOUNTAIN CITY 1.844.SE.BANKS (732.2657) | southeastbank.com Caps Off to you, Graduates! Congratulations! Cindy Doyle, Agent 248 N Peters Rd Ste 4 Conveniently located next to Nama’s at Cedar Bluff Rd. 865-690-6300 www.cindydoyle.com State Farm® Farragut High School Graduation Celebration "The Farragut High School PTSO 2019 Graduation Celebration Committee would like to thank our sponsors for their generous support and dedication to making our night a huge and safe success!" **Admiral** - Blaze Pizza - Chick-Fil-A Overlook & Turkey Creek - Dairy Queen - Dickey's Barbeque Pit - Farragut Press Enterprise - First Tennessee Foundation - Great Clips - Menchie's - Moa's Southwest Grill - Pure Magic Car Wash - State Farm Insurance - Josh Hemphill - The Boathouse - Walt Disney World - Zady's **Captain** - Buddy's BBQ - Christopher O'Rourke, DDS - DentalWorks Turkey Creek - Farragut High School PTSO - First Utility District - First Watch - Jim Hanchey, TN Rep District 14 - Knox County Commissioner - John Schoonmaker - Knoxville Pediatric Associates - McDonald's - NIT Clothing Exchange - Patterson's Home Appliance Center - Wendy's **Commander** - Altar'd State - Angel Nails - Backyard Burgers **Bailey Insurance** - BCI/Tennessee Division - Bob McEachern Photographers, Inc. - Breakout Games - Knoxville - Buffalo Wild Wings - Chesney Dentistry - Chipotle - Coldwell Banker Wallace & Wallace - Cuker's - Hollywood - Dr. Eric W. Himmeleich - El Mocador - Hicks Orthodontics - HouseWorks of Knoxville, LLC - Isagirni - Angie Brown - Isagirni - Cindi Allstuler - Isagirni - Juli Unreick - Jacobs Chiropractic - John H. Hildreth, CLU - Lakeside Tavern - Main Event Entertainment - Mary Kay - Outlier's Advantage - Planet Fitness - Precision Tune Auto Care - Print Edge - Publix Super Markets, Inc. - Pull 'n' Roll & Garness - Sam and Sherry Taylor - Southeast Oral Surgery - State Farm Insurance - Cindy Doyle - State Farm Insurance - Jeannett Rogers - Suntan City - Sunfruit Bake - Susan P. 
Horn (School Board) **TDS Telecom** - Tennessee Trash & Recycling - The Eye Group - The Skin Wellness Center - The Smile Doctors - UCOR - Van Lieven Communications, Inc. - Van Bakery - Yankee Candle Company **Family** - Alfredo's Italian Restaurant - Alumna Hall - Ballant/Campus Products - Bed, Bath, and Beyond - Carol & Cris Taylor - Carol & Stephen Elam - Cosco - Don Gallo - Dr. Karen Bowdle, DDS - Elite Nails - Empire Pizza - Jiffy Lube - Juno Jan - Kim & Rob Morris - LaParrilla Mexican Grill - Little Joe's Pizza - Mellow Mushroom - Midas Auto Service & Tires - Minns - Papa Johns - Papa Murphy's Pizza - Pellissippi State Community College - Rick Terry Jewelry Designs - Scrambled Jakes - Tennessee Smokies --- **Congratulations to the Farragut Class of 2019 from Batteries Plus Bulbs** Knoxville, TN 222 N Peters Rd 865.692.0002 6667 Clinton Highway 865-276-6006 Bearden 4927 Kingston Pike 865-314-8008 Alcoa 220 Hamilton Crossing Dr 865-983-1901 www.Batteriesplus.com --- **Congratulations Graduates** A Christian Physical Therapy clinic that tailors care to your wellness objectives. Max Potential Rehabilitation West: Farragut 11281 Farragut Parkway, Suite 104 Phone: (865) 392-6001 or (865) 392-6002 Office Hours: Mon-Fri 8am-5pm http://maxpotentialrehab.com | email@example.com --- **Call to Learn More** Assisted Living (865) 988-7373 Memory Care (865) 271-9966 Morning Pointe is Your Care Partner Morning Pointe Assisted Living & Memory Care facebook.com/morningpointe.com morningpointe.com CONGRATULATIONS CLASS OF 2019 Your beauty will shine bright into the future. 102 S Campbell Station Rd. Saah Salon Suite #26 | Farragut, TN 865-392-1014 www.katslashlounge.com PARKER Business Consulting & Accounting, PC. CPA’s SPECIALIZING IN- • Individual & Business Tax • Accounting Services • Business Valuation • Custom Financial Reports ROBERT N PARKER, CPA, MBA, ABV, CVA 10265 Kingston Pike, Suite A | Knoxville, TN 37922 865.470.2122 Congratulations to all the 2019 Graduates! Congratulations, GRADUATES! from Hicks Orthodontics BRACES & INVISALIGN for children, teens & adults Knoxville (865) 777-5700 Lenoir City (865) 816-6710 hicksoortho.com “Think Big! To do the impossible, you have to see the invincible. Never think something is out of reach!” Congratulations to the class of 2019! Josh Hemphill, Agent 11420 Kingston Pike | Knoxville, TN 37934 865-675-3999 firstname.lastname@example.org | www.fagentjosh.com Se habla Español Well Done Graduating Seniors! One-Stop Maintenance Shop • Tires • Oil Service • Filters • Brakes • Alignment • Shocks • Struts • Sodient Services • Transmissions Services • Tire Rotation Services and many more... Graduation Day Special $5.00 OFF OIL SERVICE hot valid with any other offer. On most vehicles. See shop for details. FARRAGUT 10730 Kingston Pike 966-0425 Lenoir City :: 986-6533 Maryville :: 983-0741 Alcoa :: 423-744-9283 Ask for NITROGEN! www.matlocktiresservice.com “The fireworks begin today. Each diploma is a lighted match. Each one of you is a fuse.” ~Edward Koch Time flies! NHC Farragut and Cavette Hill would like to Congratulate the Class of 2019 We wish you the best of luck and know you all have bright futures. 
NHC Place Farragut and Cavette Hill Assisted Living & Memory Care 121 Cavette Hill Lane • Knoxville, TN 37934 Call to schedule your tour today – 865.777.9000 • www.nhplacefarragut.com NHC Place Farragut Assisted Living is East Tennessee’s Premier Assisted Living Facility Bearden High School Grad Night 2019 A special THANK YOU is extended to all the Sponsors and Volunteers who contributed to Grad Night for the Class of 2019. Grad Night, an all-night party honoring the Class of 2019, was held at The Main Event following graduation on May 17. 240 graduates turned out for a fun and fun-filled night. Due to outstanding support of the community, we were able to provide activities that included, bowling, laser tag, a casino, video games, games, fabulous prizes and food all night long. The BHS PTSO would like to thank the many volunteers who helped in planning and supervising Grad Night. Additionally, a huge THANK YOU is extended to all the sponsors who contributed to the event, and prizes. Your generosity is greatly appreciated. A special thank you to the Bearden Foundation who generously donated to Grad Night. Please support those who support Bearden when dining out or when you need to make purchases! **Diamond ($4,000+)** - Bearden High School Foundation **Platinum ($1,000+)** - Altar’s State - Bayouger - Great Clips - Petco’s Chik & Chips - Pam May Auto Wash - Tropical Smoothie - Wild Wings Café **Gold ($500+)** - Bonfire Grill (Bearden Hill) - Buddy’s Bar-B-Q - Costa Coffee - Chick-fil-A - Dunkin Donuts - Firehouse Subs - Dr. Edwin Spencer - (Alumni/Chiropractic Clinic) - Dr. Christopher (Chuckie) **Silver ($250+)** - Cedar Bluff Racquet Club - Chick-Fil-A (Bearden) - Dr. James Piderberg - Fort Sanders Health and Fitness Center - Gatlinburg Country Club - Jiffy Lube - Mathisen - Smoothie King - Titanic Museum Attraction in Pigeon Forge - Tutu’s Dance Boutique - The Nail Shop **Bronze ($100+)** - Bearden Asphalt, Inc. - Breakaway Hair Studio - Butler and Bailey Grocery - Dollwood - Double Dogs - Drake’s - Diner 4 Life - Fairway’s & Green Golf Center - Dr. T.J. Fowler - Dr. David Harris (University Eye Specialists) **Additional (donations up to $99)** - Almond’s Pizza - Applebee’s - Azurey’s - Ball’s Company - Buffalo Salon & Spa - Big Kahuna Wings - Blaze Pizza - Blue - Bradley’s - Bracelette - Bruster’s Ice Cream - Buffalo Wild Wings - Buttermilk Sky Pie - Carvel Ice Cream House - Carrafa’s - Casa Don Gallo - Chestbars - Chaz’s - Cotton - Cutlers - Dick’s Sporting Goods - Doughnuts - Dynasty Nail Spa - Edify’s Health Shopper - Elder’s Ace Hardware - Essential Body Spa - Farmacia - First Watch - Fleming’s - Foxfire Mountain - Frank’s Barber Shop - Fratello’s - Gold’s Gym - Goodness To Go - Hot Rod Goodys - Honey Baked Ham - Jeff’s Pizza - Krazy Kreme - La La Nails - Lenny’s Sub Shop - Lazy Boy Spa - Maple Street Biscuit Company - Massage Envy - McBrady’s Deli - Moxie - Neon’s Deli - Orange Crush - P.F. Chang’s China Bistro - Ripley’s Aquarium of the Smokies - River Sports Outfitters, Inc. - Salada - Sonic Drive In - Studio-Village - Sunflower Farms - Texas Roadhouse - The Climbing Center (River Sports) - The Edge - The Soup Kitchen - Tennessee Smokies Baseball - The Vines - Val’s Boutique - Virginia Jane - Welch Witch - Women’s Basketball Hall of Fame - Zoo Knoxville Congratulations and Good Luck to the Bearden High School Class of 2019! Hard work pays off! Congratulations to the graduates! Call to register for summer exam prep sessions NOW and receive 5% off exam prep packages. 
Mention this ad! Offer expires June 1st! Huntington® Your Tutoring Solution A McGrath Family Business Call (865) 691-6688 or visit huntingtonhelps.com ACT/SAT exam prep • SAT Subject tests ASVAB • Subject tutoring • Study skills “If you want something you’ve never had, you must be willing to do something you’ve never done.” ~Thomas Jefferson “Twenty years from now you will be more disappointed by the things that you didn’t do than by the ones you did do. So throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover.” —Mark Twain Congrats to CAK’s Class of 2019! Learn how CAK is SET APART from other schools Call today to schedule your tour! Serving Age 3 to 12th Grade 865-813-4CAK • www.cakwarriors.com CONGRATULATIONS! CONCORD CHRISTIAN SCHOOL Class of 2019 I know the plans I have for you plans to give you Hope & a Future Jeremiah 29:11 “For I know the plans I have for you,” declares the Lord, “Plans to prosper you and not to harm you, plans to give you hope and a future.” —Jeremiah 29:11 GRACE CHRISTIAN ACADEMY Photos Courtesy of Grace Christian Academy GRACE CHRISTIAN ACADEMY Luke Kirby Stacy Roger Jackson Krauss Maddie Lawson Grant Ledford Savannah Lee Lauren Lewis Isabella Lobetti Trevor Luman Christian Lutheil Jordan Martin Will Maynard Morgan McNulken Gentry Mooneyouls Madie Melvagen Taylor Meiguard Eli Milligan Matthew Montgomery Eli Nordhom Dusty Oden Cassie Parks Nica Pohl Virginia Pinkle Brooke Pleimmon Grace Powers Bradley Rush Kaitlyn Ray Alissa Rhyne Billy Sams Eric Sharp Reagan Show Tyler Skinner Blake Summers Thomas Walker Konstad Warwick Justin West Abbi White Chandler Williams Chloe Windham Bailey Wyatt Sahi Zain HARDIN VALLEY ACADEMY Nathaniel Aberdeen Ramz Abu Shehadeh Calvin Ahrens Ahmed Akouluk Ian Alexander Tristan Alford Aya Alhajjalamreh Camilla Ali Philip Allen Stefney Allen Karlis Anastasakis Julia Anderson Nabiel Aggad Madison Paige Astbury Brayton Atkins Jackson Atteberry Cailyn Autrey Emily Aycock Sebastian Badillo Britlyn Bagwell Erin Baker Brady Ballow Edward Barajas Eric Banegas Brooks Barber Chelsea Barfa Lindsey Bartlett Caleb Bassett Timothy Bassett Malakay Becklman Rachel Becklman Joshua Beasley Sarah Bouchat Jessica Beverwyk Philip Bingham Mary Birge Joel Boscikovits Dylan Buhrne Natalie Butinger Taylor Bondy Drew Brooks Gabrielle Brown Hannah Brown Jalen Brown Jordan Brown Morgan Brown Richard Brown Hailey Bryant Patrick Buckner Sanya Budhwani Congratulations Class of 2019 Volunteer Pharmacy extends our heartfelt congratulations to you. Volunteer Pharmacy is your independent community pharmacy located in Hardin Valley. VOLUNTEER PHARMACY 2559 Willow Point Way, Knoxville, TN 37931 (865) 560-0135 • volunteerpharmacy.com Hours: Weekdays: 8AM–6:30PM Saturday: 10AM–3PM Sunday: Closed Congratulations Graduates Becky Massey TN Senator District 6 Jason Zachary State House of Representatives District 14 Nick McBride Knox County Register of Deeds John Schoonmaker Knox County Commissioner District 5 Scott Meyer Farragut Alderman Ward 1 Tony R. Aikens Mayor Lenoir City Gabriela Lopez Jordan Lyons Christine Maestri Mia McAllister Zane Mears Congratulations to the Hardin Valley Academy Class of 2019! “Sometimes you find out what you are supposed to be doing by doing the things you are not supposed to do.” ~ Oprah Winfrey, Howard University Congratulations GRADUATES! CLASS OF 2019! 
PELLISSIPPI STATE COMMUNITY COLLEGE Congratulations to the Class of 2019 “An investment in knowledge always pays the best interest.” —Benjamin Franklin Knoxville Catholic High School Photos Courtesy of Knoxville Catholic High School Hardin Valley Academy Knoxville Catholic High School Photos Courtesy of Knoxville Catholic High School KCHS Faculty & Staff Congratulations to this year’s LCUB scholarship recipients! We wish continued success! “We would like to congratulate the outstanding recipients of the LCUB scholarship for 2018. They each come highly recommended by their teachers and counselors. We believe they each have a rewarding future ahead in the engineering or computer science industry and we wish them the very best in their college endeavors.” M. Shannon Littleton General Manager “Graduation is only a concept. In real life, every day you graduate. Graduation is a process that goes on until the last day of your life. If you can grasp that, you’ll make a difference.” ~Arie Pencovici “You have brains in your head. You have feet in your shoes. You can steer yourself Any direction you choose. You’re on your own. And you know what you know. And YOU are the guy who’ll decide where to go.” —Dr. Seuss, “Oh, the Places You’ll Go!” Lenoir City High School Photos Courtesy of Bob McEachern Photographers. Olivia Abbott Preston Adams Kajee Akins Kaylee Allen Jade Allison Sonia Almaraz Kevin Alvarado Ramirez Sandra Ariani Ethan Anderson Amber Atkins Journa Awaltis Matthew Babin Rileigh Babour Tess Barnes Brenda Barrientos Cartera Ronald Barrientos Alixois Batton Alejandro Bedolla Jacob Bergoy Brittany Berube Brian Bibian Ashland Bingham Elijah Bivens Sienna Bobbitt Cameron Bordon John Bornhardt Ashley Boser Austin Bowden Caleb Bowen Katherine Boykin Hunter Brocien David Brocian Nancy Brewer Joshua Brown Maya Brown Tasha Brown Matthew Carter Julia Carnell Dakota Carter Grace Caughron Katheryn Childress Casey Childs Brayleneth Buckner Elijah Burr Jenna Butler Megan Caldwell Robert Cairo Heidi Coffman Ashton Compton Yasmin Constantino Sean Crider Ashley Crouch Samantha Daggett Cole Davis Henry Davis Katelyn Davis Sydney Davis James Dial Haley Donnell Brooke Duff Devin Duggan Joseph Duggan Karly Duggan Isaac Dutton Abigail Elliott Jesus Espinuza Hernandez Yulisa Espinuza Hernandez Honoring Our Graduates Wampler’s It’s Great Sausage Cades Cove Good Fixin’s Elm Hill Congratulations to all the upstanding men and women of the Class of 2019. We wish you much luck and continued success as you strive for the next goal. Keep up the great work! Brewster’s Congratulates and Wishes Future Success to all Lenoir City 2019 Graduates! 
BREWSTER'S Services Group, LLC. (865) 458-8887. 1042 Mulberry Street | Loudon, Tennessee 37774. Office Hours: Monday through Friday, 8:00am to 5:00pm

LENOIR CITY HIGH SCHOOL (continued): Anna Rees, Clara Revilla, Miguel Rivera Gonzalez, Carlos Rodriguez-Filnes, Adolfo Rodriguez, Dascar Ruiz Garcia, Rafael Ruiz, Caitlin Rusinek, Jonathan Salmons, Stoneman Sanchez, Dale Scarborough, Olivia Scarfo, Rachel Seifert, Jacob Schuster, Andrea Segovia, Destiny Selvidge, Gabriel Steinabum, Gabriel Shannon, Nash Sharpe, Omar Shuford-Mendoza, Madison Shutz, Faith Simmons, Dalton Smith, Graeza Smith, Maegen Smith, Trey Smudrick, Yoselin Solis, Jamison Sousa, Zackary Spears, Samantha Spencer, Ashlyn Spoon, Annamaria Stocksbury, Gabriel Street, Elise Strickland, Reagan Swain, Beth Sweeney, Tianna Turlay, Alexander Tyler, Andrew Thigpen, Alexander Thomas, Natalia Thomas, Emma Thompson, Jonathan Thompson, Quillen Thornton, Kaylee Townsend, Molley Van Valkenburg, Morgan Varner, Gemile Vasquez, Cody Vineyard, Hayden Vineyard, Kelley Wadrop, Brooke Walters, Drew Ward, Brody Waters, Grant Watson, Travis Watson, Taylor Wear, Nicholas Webster, Joseph West, Brandon Westfall, Gabriel Williams, Colton Wilson, Madelyn Woods, Logan Wright, Michael Wright, Mirah Wynrick, Eleanor Zinkowski.

PAIDEIA ACADEMY (photos courtesy of Paideia Academy)

"There is a good reason they call these ceremonies 'commencement exercises'. Graduation is not the end, it's the beginning." ~Orrin Hatch

Congratulations Class of 2019! We extend our gratitude to all our graduates and we are proud of you all. Lenoir City Board of Education: Dr. Jeanne Barker, Superintendent; Rick Chadwick, Chairman; Glenn McNish, Sr., Vice Chairman; Mitch Ledbetter, School Board Member; Bobby Johnson, School Board Member; Jim McCarroll, School Board Member

Monterey Mushrooms® would like to congratulate the Class of 2019. Good luck! 19748 TN-72, Loudon, TN 37774 • Open 24 hours • (865) 458-4611 • www.montereymushrooms.com

"Never be discouraged. Never look back. Give everything you've got and when you fall throughout life, fall forward." —Denzel Washington

farragutpress, 11863 Kingston Pike | Farragut, TN 37934. 865-675-6397 | www.farragutpress.com

Congratulations Class of 2019 from Concord Watch, Clock & Jewelry Center, 11130 Kingston Pike, Knoxville, TN 37934, (865) 288-7728. Specializing in the sales, restoration and repair of all types of watches, clocks, and fine jewelry. We take pride in providing our customers with the best value for their money without compromising quality or service. Home of USAWATCH.NET: design your watch online. PATEK PHILIPPE • ROLEX • BREITLING • OMEGA

WEBB SCHOOL OF KNOXVILLE (photos courtesy of Webb School of Knoxville): United States Naval Academy • Wake Forest University • Furman University • UCLA • Columbia College Chicago • RIT • Johns Hopkins University • George Washington University • Rhodes College • Dartmouth College • University of Toronto • University of Tennessee • Marist College • Virginia Tech • American University • Rensselaer Polytechnic Institute • Yale University • Belmont University • Embry-Riddle Aeronautical University • University of Vermont • Pepperdine University • University of Pennsylvania • Centre College • Northeastern University. All 110 graduates were extended 436 offers of admission and received $16+ million in scholarship offers. Congratulations Class of 2019!
Webb School of Knoxville

"God never said that the journey would be easy, but He did say that the arrival would be worthwhile." ~Max Lucado

2019 GRADUATION MEMORIES. Congratulations, Class of 2019! Your Future Starts Today, not Tomorrow!

Congrats 2019 Grad! Knoxville Insurance Group. Gigi Scull, Agency Owner. 220 South Peters Road | Knoxville TN 37923 | P 865.694.9788 | www.knoxvilletownecogroup.com

Dr. Steven Brock and Dr. Chase Nieri: Congratulations Class of 2019! DENTAL IMAGES, 1715 Downtown West Blvd, Knoxville, TN 37919, (865) 531-1715, www.mydentalimage.com

Graduation is not the end, it's the beginning. Congratulations! Dr. Susan Barnes, Cosmetic & Family Dentistry, 5424 S. Kingston Ave., Suite 4, (865) 845-8400, www.drsusanbarnes.com, www.facebook.com/susanbarnesdmd

Pattersons Home Appliance Center, EST. 1965. BECAUSE WE CARE. JENN-AIR® • Whirlpool® • MAYTAG® • KitchenAid® • AMANA®. JUST RIGHT. KNOXVILLE: 10640 Kingston Pike, (865) 694-4181. OAK RIDGE: 170 Oak Ridge Turnpike, (865) 483-8842. KINGSTON: Hwy. 70 Midtown, (865) 376-6233. ROCKWOOD OUTLET: 1090 N. Gateway, (865) 354-0061. CROSSVILLE: 2024 N. Main St., (931) 250-4349
Study of Chinese carbon emission trading market mechanism based on the game theory

Dai J.\textsuperscript{1,2*}, Yan J.\textsuperscript{1} and Gao H.\textsuperscript{3}
\textsuperscript{1}School of Business, Sichuan University, Chengdu 610041, China
\textsuperscript{2}Chengdu Qizhi Innovation Patent Agency, Chengdu 610096, China
\textsuperscript{3}Science and Technology on Vacuum Technology and Physics Laboratory, Lanzhou Institute of Physics, Lanzhou 730000, China

Received: 04/09/2021, Accepted: 21/12/2021, Available online: 23/12/2021
*To whom all correspondence should be addressed: e-mail: email@example.com
https://doi.org/10.30955/gnj.003942

Abstract

In order to provide suggestions for the establishment and development of China's carbon trading market mechanism, this paper solves and analyzes a three-party game model of the competent government departments, carbon emission enterprises and the third-party verification institution in the initial allocation of carbon emission rights, together with an alternating bargaining game model for the secondary carbon trading market. The results show that the competent government departments should improve review efficiency to reduce cost, for example by outsourcing the review work to universities, research institutes and other scientific research units, and should increase the punishment for collusion between carbon emission enterprises and the third-party verification institution. At the same time, the competent government departments should adopt regular regulatory policies to deal with collusion and reduce the sampling proportion so as to cut the cost of government review. The trading center should determine the transaction price directly from the relative strength of buyers and sellers, and should match qualified buyers and sellers directly in the secondary carbon trading market during bilateral open bidding.

Keywords: game theory; carbon emission rights; carbon trading market mechanism; collusion behavior; matchmaking trading

1. Introduction

According to the report on the state of the global climate released by the World Meteorological Organization, concentrations of the major greenhouse gases continued to rise in the first half of 2020, and the global average temperature was about 1.2°C above the pre-industrial level. The years 2015 to 2020 were the six warmest worldwide since meteorological records began (Tan et al., 2020; Gong and Zhou, 2019). To deal with climate change, the international community reached the Paris Climate Agreement in 2015, which set the goals of keeping the global temperature rise below 2°C, and striving for 1.5°C, relative to pre-industrial levels. Countries around the world then unanimously reached a consensus at the climate summit, moving the Paris Climate Agreement into its implementation stage (Fang et al., 2018; Jin et al., 2019; Woo et al., 2017). Every country has put forward its own carbon emission reduction targets according to its national conditions. Among them, the United States proposed a new target of reducing greenhouse gas emissions in 2030 by about 50%~52% compared with 2005 (Huang et al., 2020). Japan has proposed to reduce carbon emissions by about 46% in 2030 compared with 2013 levels, 20 percentage points higher than its previous commitment of a 26% reduction (Kanchinadham and Kalyanaraman, 2017).
Canada proposes to reduce greenhouse gas emissions by 40%~45% in 2030 compared with 2005 (Nath et al., 2015). Brazil has pledged to cut its carbon emissions by 50% in 2030 and to become carbon neutral by 2050, ten years ahead of its original schedule (Wang et al., 2017). Although China and the United States are the two largest carbon-emitting countries in the world, reducing greenhouse gas emissions is not a matter for any single country. It involves trade rules worldwide, and it has become a consensus in the climate arena that carbon emission trading must be carried out in cooperation around the world (Guo et al., 2018; Li et al., 2019; Weng and Xu, 2018; Liu and Cui, 2018). The most important element of the carbon emission trading market is the carbon price, for which no free-market pricing has yet formed (Li and Lei, 2018). Previously, the Obama administration estimated the carbon price at $42 per ton, while the Trump administration put it at only about $7. Since understandings of the carbon price still differ even within a single country, practical exploration is needed to reach a worldwide consensus on the carbon price, and concrete solutions to a series of problems such as carbon pricing and carbon tariffs must be developed in practice (Li et al., 2016; Wang and Wu, 2018; Yang et al., 2019). In 2020, China announced to the world its goals of striving to reach peak carbon emissions by 2030 and carbon neutrality by 2060 (Chen et al., 2015; Wang et al., 2019). Peak carbon dioxide emissions means that the annual carbon dioxide emissions of a region or industry reach their highest value in history, then pass through a plateau period and enter a continuous decline; it is the historical inflection point of carbon dioxide emissions from increase to decline (Zhu et al., 2020; Zheng et al., 2021). Carbon neutrality means that, within a given area, the carbon dioxide emitted directly and indirectly by human activities and the carbon dioxide absorbed through afforestation and other means cancel each other out, so that a net effect of nearly zero carbon dioxide emissions is realized (Shan et al., 2021; Zhu et al., 2020). China is the largest industrial nation in the world at present, emitting about 10 billion tons of carbon dioxide every year, twice as much as the United States and three times as much as the European Union. China is responsible for about a quarter of global carbon emissions at the present stage, which means that achieving the carbon peak and carbon neutrality will have the greatest impact on China's economic development (Han et al., 2017). Chinese policymakers need to take all factors into full consideration when formulating the carbon emission trading mechanism and to make substantial adjustments, so as to better reconcile national economic development with the natural environment. The official opening of the world's largest carbon trading market in China's Hubei Province in 2021 is bound to accelerate the development of international carbon trading standards (Pan et al., 2019; Xia and Tang, 2017). In order to meet the Paris Agreement's target of limiting the temperature rise to within 1.5°C, it is estimated that the proportion of non-fossil energy in China's primary energy consumption will reach about 25% to 28% in 2030.
It is expected that the share of new energy in primary energy will rise from the current 17% to about 80%, while the proportion of coal and oil will drop from the current 75% to about 10% by 2060 (Chen et al., 2019). This goal will surely lead to a low-carbon transition in China's traditional energy and manufacturing industries, which will be affected most strongly. The development of carbon emission trading market mechanisms, and especially of carbon finance, will help channel social capital into the low-carbon field. It is conducive to stimulating enterprises to develop low-carbon technology and to use low-carbon products, and it changes patterns of production and business, improving the market competitiveness of enterprises and providing impetus for cultivation and innovation in low-carbon economic development (Wang et al., 2018; Yu et al., 2018). The explosive growth of green industry is bound to give rise to green finance. At present, the carbon emission trading market launched at the end of June is limited to the spot market, but the rapid development of green industry will inevitably accelerate the birth of a green finance industry, and many financial derivatives built around carbon emissions can be expected in the future (Qu et al., 2018). China's carbon peak and carbon neutrality plan will certainly prompt big changes in the traditional industries of energy-rich provinces such as Inner Mongolia and Shanxi. The mining and power industries account for more than 40% and 30% of tax revenue in Shanxi and Inner Mongolia, respectively, and behind those taxes lie employment, public welfare and people's well-being (Zhang et al., 2017). The transition from fossil energy to clean energy will therefore have a large impact on these regions, which need to build on their own advantages to complete the transformation of the energy industry (Hu et al., 2017; Zhou et al., 2017). For example, Shanxi can focus on carbon capture technology and then use the captured carbon as a raw material to develop high-performance carbon materials, carbon fiber, graphite and other strategic emerging industries. In the past, an extensive development model emitted all of this carbon directly into the air; in the future it can be recovered and put to valuable use (Munnings et al., 2016). Inner Mongolia can likewise develop clean energy: rather than being restricted to carbon capture technology, it can use its own advantages and continuously enlarge its grasslands, forests and wetlands, which act as natural carbon dioxide absorbers. It can then trade the carbon sinks of forest, wetland and grassland, or sell them directly to compensate its energy-intensive industries (Tan and Wang, 2017). In this way every province can use its unique advantages to achieve the goals of carbon peaking and carbon neutrality, which also addresses the question, raised at the outset, of how to price natural resources. In this paper, we analyze the behavior of each participant in the initial allocation market for carbon emission rights and in the secondary carbon trading market in China by using game theory.
By analyzing and summarizing the possible behaviors of the three parties in the carbon trading market (the competent government departments, the carbon emission enterprises and the third-party verification institution), we clarify the influence that each party's strategy has on China's carbon trading market. The conclusions provide a reference for the formulation of China's carbon trading policy.

2. The game theory model

2.1. Game model and solution procedure of the initial allocation of carbon emission rights

Carbon emission trading is a market trading system involving the government, carbon emission enterprises and third-party service institutions, and its soundness is the decisive factor in achieving optimal resource allocation. The initial allocation of emission rights is a prerequisite for the normal operation of the emission rights trading market; it is also the key to total volume control and an important factor in maximizing overall economic benefits. A reasonable and practical initial allocation of emission rights is conducive to a rational allocation and economical use of resources, promotes technological innovation, and fosters a production pattern with low pollution emissions and high economic benefits. There are two methods for the initial allocation of carbon emission rights. One is free allocation, which includes the grandfathering mode, in which the number of carbon emission rights is determined as a proportion of an enterprise's historical emissions, and an output-based mode, in which carbon emission rights are determined by the current total output and the emissions per unit of output. The other initial allocation method is public auction. At present, China's initial allocation of emission rights mainly uses the free allocation mode, following the monitoring, reporting and verification (MRV) mechanism. Under this mode, emission enterprises can make up any shortfall in emission reduction through energy saving and emission reduction or through carbon emission rights trading in order to fulfill their emission reduction commitments, and no cost transfer between the government and the emission enterprises is involved. However, if the initial allocation of emission rights is excessive or insufficient, the constraining effect on emission reduction is lost, resulting in price fluctuations of carbon emission rights. According to China's overall deployment of carbon emission reduction in the early stage of the carbon trading system, the carbon emission verification task for quota allocation is jointly completed by the government and the third-party verification institution, with the former currently playing the leading role. As third-party verification institutions continue to develop and expand, they will become the leading force in the verification task, and the MRV mechanism will be widely applied in the free allocation of carbon emission rights. The social cost of verification will fall, and the competent government departments will also reduce their cost of carbon emission verification. In the context of the "whoever pollutes, treats" policy, carbon emission enterprises will inevitably strive for a larger carbon emission allocation quota to maximize their interests in the initial allocation of carbon emission rights. Enterprises driven by their own interests will inevitably have an incentive to misreport their information.
Therefore, the Stackelberg dynamic game model (Bernard et al., 2008; Manuel et al., 2016; Zhao et al., 2017) is introduced in this paper to study the mixed game behavior of government departments, carbon emission enterprises and the third-party verification institution under the MRV mechanism, providing guidance for the future improvement of China's carbon trading market mechanism. In the operation of the MRV mechanism, the competent government departments, the carbon emission enterprises and the third-party verification institution form a tripartite game in which each party's behavior influences the others. External independent third-party verification institutions are generally responsible for the measurement and verification of corporate carbon emission rights. These institutions must possess the relevant qualifications and meet capability requirements recognized by the government; currently, the body recognized by the Chinese government is the China National Accreditation Commission for Conformity Assessment. The third-party verification institutions can make full use of their own resources and technical advantages to reduce the government's cost of carbon emission verification, while directly avoiding the risk of administrative liability. However, the income of the third-party verification institution comes from the carbon emission reduction enterprises, so it has a motive to collude with them for additional income. It is therefore necessary for the competent government departments to recheck a certain proportion of the carbon emission enterprises. If collusion is found during the recheck, the third-party verification institution and the emission reduction enterprise are subject to administrative penalties.

The strategy of the competent government department is to recheck or not to recheck. If recheck is selected, the cost of a full review is assumed to be $C_0$, so rechecking a sampled proportion $r$ of enterprises costs $rC_0$ (as in Table 1); the cost is zero if no recheck is performed. If collusion between a carbon emission enterprise and the third-party verification institution is detected, administrative penalties of $C_1$ and $C_2$, respectively, are imposed on them. The strategy of the carbon emission enterprises and the third-party verification institution is collusion or no collusion. Under no collusion, the incomes of the carbon emission enterprise and the third-party verification institution are $E_1$ and $E_2$; under collusion, their returns increase by $\Delta E_1$ and $\Delta E_2$, respectively. Since none of the three players knows which strategy the others will adopt when choosing its own, the game is a mixed-strategy static game with incomplete information (Lin et al., 2019). Suppose the probability that the government departments adopt the recheck strategy is $p_1$ and the sampling ratio is $r$; the probability that the carbon emission enterprises collude with the third-party verification institution is $p_2$, and the probability of an enterprise being selected for recheck equals the sampling ratio $r$. Accordingly, the returns matrix of the tripartite game between the competent government department, the carbon emission enterprises and the third-party verification institution is shown in Table 1.
Table 1. Returns matrix of the tripartite game. Each cell lists the payoffs of (government, enterprise, verification institution); the first two columns arise under the recheck strategy, adopted with probability $p_1$.

| | Recheck, selected ($r$) | Recheck, not selected ($1-r$) | No recheck ($1-p_1$) |
|---------------------|-----------------|-----------------------|-------------------------|
| Collusion ($p_2$) | $(C_1+C_2-rC_0,\ E_1+\Delta E_1-C_1,\ E_2+\Delta E_2-C_2)$ | $(-rC_0,\ E_1+\Delta E_1,\ E_2+\Delta E_2)$ | $(0,\ E_1+\Delta E_1,\ E_2+\Delta E_2)$ |
| No collusion ($1-p_2$)| $(-rC_0,\ E_1,\ E_2)$ | $(-rC_0,\ E_1,\ E_2)$ | $(0,\ E_1,\ E_2)$ |

(1) The Nash equilibrium of the competent government department. The expected revenue of the competent government department can be expressed as:
\[ U_1 = p_1 p_2 r (C_1 + C_2 - rC_0) - p_1 p_2 (1-r)\, rC_0 - p_1 (1-p_2)\, r\,(rC_0) - p_1 (1-p_2)(1-r)\, rC_0 \] (1)
If the competent government department maintains the recheck strategy throughout, i.e. $p_1 = 1$, then
\[ U_1 = p_2 r (C_1 + C_2 - rC_0) - p_2 (1-r)\, rC_0 - (1-p_2)\, r\,(rC_0) - (1-p_2)(1-r)\, rC_0 = p_2 r (C_1 + C_2) - rC_0 \] (2)
The Nash equilibrium point is the critical point for selecting the recheck strategy: at this point the expected return is the same whether or not the recheck strategy is adopted. The first-order condition $\partial U_1 / \partial r = 0$ then yields
\[ p_2 = \frac{C_0}{C_1 + C_2} \] (3)
When the probability $p_2$ of collusion between the third-party verification institution and the carbon emission enterprises satisfies $p_2 > C_0/(C_1+C_2)$, the competent government departments tend to adopt the recheck strategy; when $p_2 < C_0/(C_1+C_2)$, they prefer not to recheck.

(2) The Nash equilibrium of the carbon emission enterprises. The expected revenue of the carbon emission enterprises under the collusion strategy is
\[ U_2 = p_1 p_2 r (E_1 + \Delta E_1 - C_1) + p_1 p_2 (1-r)(E_1 + \Delta E_1) + (1-p_1)\, p_2 (E_1 + \Delta E_1) \] (4)
The Nash equilibrium point of the carbon emission enterprises is the critical point for selecting the collusion strategy, at which the expected return is the same whether or not collusion is adopted. The first-order condition $\partial U_2 / \partial p_2 = 0$ gives
\[ \frac{\partial U_2}{\partial p_2} = p_1 r (E_1 + \Delta E_1 - C_1) + p_1 (1-r)(E_1 + \Delta E_1) + (1-p_1)(E_1 + \Delta E_1) = (E_1 + \Delta E_1) - p_1 r C_1 = 0 \] (5)
The equilibrium solution can be expressed as
\[ p_1 r = \frac{E_1 + \Delta E_1}{C_1} \] (6)
Therefore, the carbon emission enterprises prefer the collusion strategy when $p_1 r < (E_1 + \Delta E_1)/C_1$; on the contrary, when $p_1 r > (E_1 + \Delta E_1)/C_1$, the collusion strategy is unlikely to be used.

(3) The Nash equilibrium of the third-party verification institution. The expected revenue of the third-party verification institution under the collusion strategy is
\[ U_3 = p_1 p_2 r (E_2 + \Delta E_2 - C_2) + p_1 p_2 (1-r)(E_2 + \Delta E_2) + (1-p_1)\, p_2 (E_2 + \Delta E_2) \] (7)
The Nash equilibrium point of the third-party verification institution is the critical point for selecting the collusion strategy, at which the expected return is the same whether or not collusion is adopted. The first-order condition $\partial U_3 / \partial p_2 = 0$ gives
\[ \frac{\partial U_3}{\partial p_2} = p_1 r (E_2 + \Delta E_2 - C_2) + p_1 (1-r)(E_2 + \Delta E_2) + (1-p_1)(E_2 + \Delta E_2) = (E_2 + \Delta E_2) - p_1 r C_2 = 0 \] (8)
The equilibrium solution can be expressed as
\[ p_1 r = \frac{E_2 + \Delta E_2}{C_2} \] (9)
Therefore, the third-party verification institution prefers the collusion strategy when $p_1 r < (E_2 + \Delta E_2)/C_2$; on the contrary, when $p_1 r > (E_2 + \Delta E_2)/C_2$, the collusion strategy is unlikely to be used.
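To make the three equilibrium conditions above concrete, the following minimal Python sketch (illustrative only: the function name and all parameter values are ours, invented for the example rather than calibrated to any real market) evaluates the thresholds (3), (6) and (9) and prints the implied strategic tendencies.

```python
# Minimal sketch of the equilibrium thresholds (3), (6) and (9).
# All parameter values are invented for illustration only.

def tripartite_thresholds(C0, C1, C2, E1, dE1, E2, dE2):
    """Critical values of the tripartite game in Section 2.1."""
    p2_star = C0 / (C1 + C2)       # eq. (3): recheck pays off when p2 > p2_star
    pr_firm = (E1 + dE1) / C1      # eq. (6): enterprise colludes when p1*r < pr_firm
    pr_verifier = (E2 + dE2) / C2  # eq. (9): verifier colludes when p1*r < pr_verifier
    return p2_star, pr_firm, pr_verifier

if __name__ == "__main__":
    # Hypothetical inputs: full review cost C0, penalties C1 and C2,
    # baseline incomes E1 and E2, collusion gains dE1 and dE2.
    p2_star, pr_firm, pr_verifier = tripartite_thresholds(
        C0=2.0, C1=30.0, C2=20.0, E1=10.0, dE1=4.0, E2=5.0, dE2=2.0)
    print(f"Government rechecks when collusion probability p2 > {p2_star:.3f}")
    print(f"Enterprise deterred when review intensity p1*r > {pr_firm:.3f}")
    print(f"Verifier deterred when review intensity p1*r > {pr_verifier:.3f}")
```

With these example numbers both deterrence thresholds lie below one, so a feasible review intensity $p_1 r$ can rule out collusion; if the penalties were small relative to the collusion gains, the thresholds would exceed one and no sampling policy could deter collusion, which is consistent with the recommendation in Section 3.1 of fines that scale with the collusion revenue.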
2.2. Game model and solution procedure of the secondary market of carbon emission rights

The trading of carbon emission rights in the secondary market is analyzed using an alternating bargaining game model. The buyer and the seller conclude the deal after $n$ rounds of bargaining. The process is as follows. In the first round ($n = 1$), the buyer makes an offer; if the seller accepts, the game is over, and if the seller refuses, play moves to the next round. In the second round ($n = 2$), the seller makes an offer; if the buyer accepts, the game is over, and play moves to the third round ($n = 3$) if the buyer refuses. The cycle continues until the end of the $n$-th round. The discount factor (degree of patience) of buyers and sellers is determined here by the numbers on the two sides. Let the numbers of sellers and buyers be $s$ and $b$, respectively, and assume that $s$ and $b$ remain constant throughout the bidding process. The greater the proportion of sellers in the transaction, the more patiently the buyer can bid. The discount factor of the buyer is therefore $\delta_b = s/(s+b)$ and, similarly, that of the seller is $\delta_s = b/(s+b)$, where the parameters satisfy $0 < \delta_i < 1$, $i \in \{b, s\}$. The commodity offered by the seller is worth $c$ to the seller and $v$ to the buyer. The seller and the buyer bid $P_s$ and $P_b$ according to the value they believe the goods are worth, and each round's quotation satisfies $P \in [P_s, P_b]$. The seller's earnings are $P - P_s$ and the buyer's earnings are $P_b - P$; the total surplus $P_b - P_s$ is divided between the two sides. The allocation ratios are denoted $x_b$ and $1-x_b$, or $x_s$ and $1-x_s$, as the buyer and seller bid in turn. When $n = 2k$, the buyer's optimal bid is
$$P_{2k}(\delta_b, \delta_s, P_s, P_b) = \frac{(\delta_b \delta_s)^k - 1}{\delta_b \delta_s - 1}(1 - \delta_s)(P_b - P_s)$$ (10)
$$\chi_{2k}(b, s, P_s, P_b) = \frac{b^k s^{k+1} - s(b + s)^{2k}}{bs(b + s)^{2k-1} - (b + s)^{2k+1}}(P_b - P_s)$$ (11)
When $n = 2k+1$, the buyer's optimal bid is
$$P_{2k+1}(\delta_b, \delta_s, P_s, P_b) = \left[ \frac{(\delta_b \delta_s)^k - 1}{\delta_b \delta_s - 1}(1 - \delta_s) + (\delta_b \delta_s)^k \right](P_b - P_s)$$ (12)
$$\chi_{2k+1}(b, s, P_s, P_b) = \left[ \frac{b^k s^{k+1} - s(b + s)^{2k}}{bs(b + s)^{2k-1} - (b + s)^{2k+1}} + \frac{b^k s^k}{(b + s)^{2k}} \right](P_b - P_s)$$ (13)
It can be seen that, when the number of bargaining rounds is finite, the factors affecting the equilibrium price are the discount factors, the number of rounds, and the bids in the final stage of the game; a small numerical sketch of these formulas follows.
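As a numerical illustration (with invented bids and counts, not data from any exchange), the sketch below implements the discount-factor form of the buyer's share in (10) and (12) using the count-based factors $\delta_b = s/(s+b)$ and $\delta_s = b/(s+b)$; the count forms (11) and (13) then follow from the same code path. As the number of rounds grows, the value converges to the indefinite-horizon expression (14) derived next, so the sketch doubles as a numerical check of that limit.

```python
# Minimal sketch of the alternating-offer formulas (10) and (12).
# b buyers and s sellers set the discount factors; all numbers invented.

def buyer_share(n, b, s):
    """Buyer's equilibrium share of the surplus (Pb - Ps) after n rounds."""
    db = s / (s + b)               # buyer's discount factor delta_b
    ds = b / (s + b)               # seller's discount factor delta_s
    k = n // 2
    geometric = ((db * ds) ** k - 1) / (db * ds - 1)  # sum of k geometric terms
    share = geometric * (1 - ds)   # eq. (10), even n = 2k
    if n % 2 == 1:                 # eq. (12), odd n = 2k + 1, adds one more term
        share += (db * ds) ** k
    return share

if __name__ == "__main__":
    Pb, Ps, b, s = 60.0, 40.0, 8, 12       # hypothetical bids and counts
    db, ds = s / (s + b), b / (s + b)
    limit = (1 - ds) / (1 - db * ds)       # cf. eq. (14), the limit as k grows
    for n in (2, 3, 10, 11, 200):
        print(f"n = {n:3d}: buyer component = {buyer_share(n, b, s) * (Pb - Ps):.4f}")
    print(f"limit:     buyer component = {limit * (Pb - Ps):.4f}")
```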
In the process of carbon emission trading, the trading center stipulates that the buyer and seller shall determine the final transaction price through bargaining within a certain period of time. If the two sides fail to reach an agreement on the trading price through negotiation, they can choose to abandon the current round of trading and wait for the next round. The bargaining process of carbon emission trading is therefore a game of indefinite length, which means that the optimal bid of the buyer in the indefinite bargaining game is obtained as $k$ approaches infinity:
$$P_\infty = \frac{1 - \delta_s}{1 - \delta_b \delta_s}(P_b - P_s)$$ (14)
The transaction price of the two sides is then
$$P^* = P_b - P_\infty = \frac{\delta_s(1 - \delta_b)}{1 - \delta_b \delta_s}P_b + \frac{1 - \delta_s}{1 - \delta_b \delta_s}P_s$$ (15)
In the limit $\delta_b = \delta_s \to 1$, $P_\infty$ and $P^*$ satisfy $P_\infty = \frac{1}{2}(P_b - P_s)$ and $P^* = \frac{1}{2}(P_b + P_s)$. This indicates that the trading center's usual practice of matching the transaction price $P = \frac{1}{2}(P_b + P_s)$ in two-way open bidding presupposes that the discount factors of both buyer and seller equal 1, i.e., that both sides have unlimited patience and that the numbers of buyers and sellers tend to infinity. In reality, the numbers on both sides are limited: sellers are eager to sell carbon emission quotas before they expire, and buyers must buy quotas to fulfill their contracts. Therefore, the transaction price $P = \frac{1}{2}(P_b + P_s)$ cannot fully reflect the real transaction intentions of buyer and seller.
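The following sketch (again with invented numbers) computes the indefinite-horizon price (15) for several buyer/seller counts and checks the identity $P^* = P_b - P_\infty$ used above.

```python
# Minimal sketch of the indefinite-horizon settlement, eqs. (14)-(15).
# Discount factors come from the buyer/seller counts; values invented.

def settlement(Pb, Ps, b, s):
    db = s / (s + b)                              # buyer's discount factor
    ds = b / (s + b)                              # seller's discount factor
    P_inf = (1 - ds) / (1 - db * ds) * (Pb - Ps)  # eq. (14): buyer's surplus share
    P_star = (ds * (1 - db) * Pb + (1 - ds) * Ps) / (1 - db * ds)  # eq. (15)
    assert abs(P_star - (Pb - P_inf)) < 1e-9      # consistency: P* = Pb - P_inf
    return P_star

if __name__ == "__main__":
    Pb, Ps = 60.0, 40.0
    for b, s in [(8, 12), (12, 8), (100, 100)]:
        print(f"b = {b:3d}, s = {s:3d}: P* = {settlement(Pb, Ps, b, s):7.3f} "
              f"(midpoint would be {(Pb + Ps) / 2:.1f})")
```

Note that with this count-based parameterization $\delta_b + \delta_s = 1$, so the fully patient case $\delta_b = \delta_s = 1$ can never arise; this is one way to see the point above that the simple midpoint price does not reflect the actual bargaining positions of the two sides.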
3. Results and discussion

3.1. Analysis on the three-party game of initial allocation of carbon emission rights

From the equilibrium solution for the competent government departments, it can be seen that the critical collusion probability above which review pays off is positively correlated with the review cost and negatively correlated with the intensity of the administrative penalty imposed when the carbon emission enterprises and the third-party verification institution collude. The carbon emission enterprises and the third-party verification institution reason that the lower the review cost and the greater the administrative penalty, the greater the probability that the competent government departments adopt the review strategy, and hence the smaller the probability that they themselves adopt the collusion strategy. Conversely, the higher the review cost and the smaller the administrative penalty, the more likely they are to adopt the collusion strategy, because they expect the competent government departments to prefer the no-review strategy. In conclusion, the competent government departments should improve review efficiency so as to reduce cost. Specifically, the review work can be outsourced to universities, research institutes and other scientific research units, and at the same time the punishment for collusion between the carbon emission enterprises and the third-party verification institution should be increased.

From the equilibrium solutions for the carbon emission enterprises and the third-party verification institution, it can be seen that the greater the benefit brought by the collusion strategy of falsely reporting carbon emissions, and the smaller the penalty for collusion, the greater the product of the probability of the review strategy and the sampling proportion must be. If the competent government department always adopts the review strategy, namely $p_1 = 1$, the equilibrium sampling proportion is positively correlated with the additional benefit brought by conspiring to falsify carbon emissions and negatively correlated with the penalty for collusion. At present, free allocation of carbon emission rights can basically meet the needs of carbon emission enterprises; as the carbon trading system continues to develop, the collusion strategy will bring more additional revenue. Since the required review proportion is inversely proportional to the severity of punishment and directly proportional to the additional revenue from collusion, the greater the government's punishment for the collusion strategy, the smaller the proportion and cost of review sampling. Therefore, in addition to a fixed fine for the collusion strategy, the most effective countermeasure for the competent government departments is to impose an additional fine equal to several times the revenue obtained from collusion, which can largely prevent collusion and reduce the overall review cost.

3.2. Analysis on the game of the secondary market of carbon emission rights

In the process of bilateral open bidding, the time cost of negotiating the transaction price between buyer and seller is high. The trading center can determine the transaction price directly by combining the numbers of buyers and sellers and the quantity of carbon emission rights, rather than simply taking the arithmetic average of the bids as the transaction price. At the same time, the trading center can designate pairs that meet certain conditions to complete a compulsory transaction according to the relative strength of buyers and sellers; this eliminates the process of seeking confirmation from both sides. If the buyer and seller take turns bidding to determine the transaction price in the bilateral open bidding process, the equilibrium transaction price can be expressed as
\[ P^* = \frac{\delta_s (1 - \delta_b)}{1 - \delta_b \delta_s} P_b + \frac{1 - \delta_s}{1 - \delta_b \delta_s} P_s \] (16)

4. Conclusions

China's carbon emission trading market needs to focus its development and improvement on the following aspects. Firstly, China should vigorously support and build third-party verification institutions to realize the rapid development and specialization of the entire industry. It is necessary to strengthen professional training in the operational skills of carbon emission verification personnel, which will effectively reduce the verification costs of the carbon emission industry. Secondly, the competent government departments should establish a regular mechanism for reviewing the carbon emission reports submitted by carbon-emitting enterprises and strictly prevent third-party verification institutions from colluding with carbon-emitting enterprises. A strict punishment mechanism for discovered collusion should be established, with the punishment matched to the unlawful act. The competent government departments can establish long-term cooperation with universities, research institutes and other research institutions, outsourcing the review work to improve review efficiency. Thirdly, in order to improve transaction efficiency, the trading center should determine the transaction price directly from the relative strength of buyers and sellers, and should select qualified buyers and sellers to transact directly in the secondary carbon trading market during bilateral open bidding.
References

Bernard A., Haurie A., Vielle M. and Viguier L. (2008). A two-level dynamic game of carbon emission trading between Russia, China, and Annex B countries, *Journal of Economic Dynamics & Control*, **32**, 1830–1856.
Chen Y.F., Wang Y.Q., Cao Y.W., Zeng M., Zhu B. and Hou X.Z. (2019). Research on the business model of electric vehicle charging facility construction project participating in carbon trading market, *Materials Science and Engineering*, **612**, 042020.
Chen Y.H., Jiang P., Dong W.B. and Huang B.J. (2015). Analysis on the carbon trading approach in promoting sustainable buildings in China, *Renewable Energy*, **84**, 130–137.
Fang G.C., Tian L.X., Liu M.H., Fu M. and Sun M. (2018). How to optimize the development of carbon trading in China-Enlightenment from evolution rules of the EU carbon price, *Applied Energy*, **211**, 1039–1049.
Gong J. and Zhou J. (2019). The current situation and correlation analysis of the new energy industry in Chinese carbon trading market, *2019 IEEE Asia Power and Energy Engineering Conference*, 341–346.
Guo C.F., Li X.J. and Lan H.J. (2018). The equilibrium model of dual channel closed-loop supply chain network based on carbon trading and carbon tax, *International Journal of Internet Manufacturing and Services*, **5**(1), 1–21.
Han R., Yu B.Y., Tang B.J., Liao H. and Wei Y.M. (2017). Carbon emissions quotas in the Chinese road transport sector: A carbon trading perspective, *Energy Policy*, **106**, 298–309.
Hu Y.J., Li X.Y. and Tang B.J. (2017). Assessing the operational performance and maturity of the carbon trading pilot program: The case study of Beijing's carbon market, *Journal of Cleaner Production*, **16**, 1263–1274.
Huang Y.S., Hu J.J., Yang Y.Q., Yang L. and Liu S.J. (2020). A low-carbon generation expansion planning model considering carbon trading and green certificate transaction mechanisms, *Polish Journal of Environmental Studies*, **29**, 1169–1183.
Jin J.L., Zhou P., Li C.Y., Guo X.J. and Zhang M.M. (2019). Low-carbon power dispatch with wind power based on carbon trading mechanism, *Energy*, **170**, 250–260.
Kanchinadham S.B.K. and Kalyanaraman C. (2017). Carbon trading opportunities from tannery solid waste: a case study, *Clean Technologies and Environmental Policy*, **19**, 1247–1253.
Li H. and Lei M. (2018). The influencing factors of China carbon price: a study based on carbon trading market in Hubei province, *IOP Conference Series: Earth and Environmental Science*, **121**, 052073.
Li L.X., Ye F., Li Y.N. and Chang C.T. (2019). How will the Chinese Certified Emission Reduction scheme save cost for the national carbon trading system?, *Journal of Environmental Management*, **244**, 99–109.
Li Y., Fan J., Zhao D.T., Wu Y.R. and Li J. (2016). Tiered gasoline pricing: A personal carbon trading perspective, *Energy Policy*, **89**, 194–201.
Lin W., Jin X.L., Mu Y.F., Jia H.J., Yu X.D., Pu T.J. and Chen N.S. (2019). Game-theory based trading analysis between distribution network operator and multi-microgrids, *Energy Procedia*, **158**, 3387–3392.
Liu X.Y. and Cui Q.B. (2018). Value of performance baseline in voluntary carbon trading under uncertainty, *Energy*, **145**, 468–476.
Manuel G., Chávez Á. and Mpaid S. (2016). Cooperation and the carbon trading game: A system dynamics approach to the prisoner's dilemma, *International Journal of Game Theory and Technology*, **2**, 9–23.
Munnings C., Morgenstern R.D., Wang Z.M. and Liu X. (2016). Assessing the design of three carbon trading pilot programs in China, *Energy Policy*, **96**, 688–699.
Nath A.J., Lal R. and Das A.K. (2015). Managing woody bamboos for carbon farming and carbon trading, *Global Ecology and Conservation*, **3**, 654–663.
Pan Y.T., Zhang X.S., Wang Y. and Yan J.H. (2019). Application of blockchain in carbon trading, *Energy Procedia*, **158**, 4286–4291.
Qu K.P., Yu T., Huang L.N., Yang B. and Zhang X.S. (2018). Decentralized optimal multi-energy flow of large-scale integrated energy systems in a carbon trading market, *Energy*, **149**, 779–791.
Shan S., Genc S.Y., Kamran H.W. and Dinca G. (2021). Role of green technology innovation and renewable energy in carbon neutrality: A sustainable investigation from Turkey, *Journal of Environmental Management*, **294**, 113004.
Tan D., Gao S. and Komal B. (2020). Impact of carbon emission trading system participation and level of internal control on quality of carbon emission disclosures: insights from Chinese state-owned electricity companies, *Sustainability*, **12**, 1788.
Tan X.P. and Wang X.Y. (2017). The market performance of carbon trading in China: A theoretical framework of structure-conduct-performance, *Journal of Cleaner Production*, **159**, 410–424.
Wang M., Zhao L.D. and Hergy M. (2018). Modelling carbon trading and refrigerated logistics services within a fresh food supply chain under carbon cap-and-trade regulation, *International Journal of Production Research*, **56**(12), 4207–4225.
Wang M., Zhao L.D. and Hergy M. (2019). Joint replenishment and carbon trading in fresh food supply chains, *European Journal of Operational Research*, **277**, 561–573.
Wang Q. and Wu S.T. (2018). Carbon trading thickness and market efficiency in a socialist market economy, *Chinese Journal of Population Resources and Environment*, **16**(2), 109–119.
Wang Z.X., Zhao J. and Li M. (2017). Analysis and optimization of carbon trading mechanism for renewable energy application in buildings, *Renewable and Sustainable Energy Reviews*, **73**, 435–451.
Weng Q.Q. and Xu H. (2018). A review of China's carbon trading market, *Renewable and Sustainable Energy Reviews*, **91**, 613–619.
Woo C.K., Chen Y., Olson A., Moore J., Schlag N., Ong A. and Ho T. (2017). Electricity price behavior and carbon trading: New evidence from California, *Applied Energy*, **204**, 531–543.
Xia Y. and Tang Z.P. (2017). The impacts of emissions accounting methods on an imperfect competitive carbon trading market, *Energy*, **119**, 67–76.
Yang S.T., Chen Q., Yu J., Geng J., Du H.W., Jin Z.Q. and Zeng W. (2019). Distributed generation low-carbon trading strategy based on cooperative game nucleolus method, *2019 IEEE Innovative Smart Grid Technologies-Asia*, 1369–1374.
Yu Z.J., Geng Y., Dai H.C., Wu R., Liu Z.Q., Xu T. and Bleischwitz R. (2018). A general equilibrium analysis on the impacts of regional and sectoral emission allowance allocation at carbon trading market, *Journal of Cleaner Production*, **192**, 421–432.
Zhang D., Alhorr Y., Elsarrag E., Marafia A.H., Lettieri P. and Papageorgiou L.G. (2017). Fair design of CCS infrastructure for power plants in Qatar under carbon trading scheme, *International Journal of Greenhouse Gas Control*, **56**, 43–54.
Zhao T.Y., Choo F.H., Zhang L.Q. and Gu Y. (2017). Game theory based distributed energy trading for microgrids parks, *2017 Asian Conference: Energy, Power and Transportation Electrification*, 8168563.
Zheng T.L., Wang Z., Liu S.H., Bao X., Liu Z.C. and Ji M.X. (2021). The development trend and prospect of automobile energy-saving standard system under the goal of peak carbon dioxide emissions, *E3S Web of Conferences*, **271**, 02006.
Zhou J.P., Xiong S.Q., Zhou Y.C., Zou Z.J. and Ma X.M. (2017). Research on the development of green finance in Shenzhen to boost the carbon trading market, *IOP Conference Series: Earth and Environmental Science*, **81**, 012073.
Zhu B.Z., Zhang M.F., Huang L.Q., Wang P., Su B. and Wei Y.M. (2020). Exploring the effect of carbon trading mechanism on China's green development efficiency: A novel integrated approach, *Energy Economics*, **85**, 104601.
Zhu C.Z., Wang M. and Du W.B. (2020). Prediction on peak values of carbon dioxide emissions from the Chinese transportation industry based on the SVR model and scenario analysis, *Journal of Advanced Transportation*, 8848149.
The Village of Gowanda Board of Trustees Organization meeting was called to order by Mayor Heather McKeever at 6:45 p.m. at the Municipal Hall. **Motion 1-16.** Motion by Trustee Zimmermann, seconded by Trustee Sheibley to go into Executive Session regarding a personnel matter. Motion carried 5-0. **Motion 2-16.** Motion by Trustee Zimmermann, seconded by Trustee Sheibley to come out of Executive Session at 7:10 p.m. Motion carried 5-0. The pledge of allegiance was recited. Trustee Sheibley asked for a moment of silence in memory of Virginia Noecker who served as the Village Clerk for 20 years. Present: - Mayor Heather McKeever - Trustee Carol Sheibley - Trustee Aaron Markham - Trustee Paul Zimmermann - Trustee Wanda Koch Village Employees: - Village Clerk Kathy Mohawk, Public Works Superintendent Jason Opferbeck, Village Attorney Deb Chadsey, Account Clerk Kathleen Ellis, Disaster Coordinator Nick Crassi, Officer Kris Booth Media Present: - Phil Palen, Cable Channel - Rebecca Cuthbert, Observer Public Present: - Bob Tiller, Fire Chief Mark Hebner, Candy and Howard Parish, Karen Markham, Sal Dicembre, Pete Sisti, Andy Burr, Charity Sweda, Earl Farina, Joshua Markham, C. Hodak, John Walgus, Dan Mosier Village Clerk Mohawk presided over the swearing in of newly-elected Trustees Aaron Markham and Wanda Koch. Mayor McKeever read a letter from Public Works Superintendent Opferbeck: “In anticipation of appointment to the position of Superintendent of Public Works, I am respectfully submitting this letter as official resignation of my hired position of Public Works Superintendent for the Village of Gowanda effective today.” **Motion 3-16.** Motion by Trustee Zimmermann, seconded by Trustee Markham to accept the resignation of Superintendent of Public Works Opferbeck as presented. Motion carried 4-1. Trustee Sheibley abstained. She indicated she was abstaining until she had a chance to review the paperwork. 
Mayor McKeever presented the following resolution: "WHEREAS, the Board of Trustees for the Village of Gowanda has determined that it is the best interests of the Village to have coordinated and effective oversight and management of all aspects of the public works of the Village; and WHEREAS, the Board of Trustees of the Village of Gowanda desire to effectuate the goal of coordinated and effective oversight and management of all public works of the Village by the appointment of a Superintendent of Public Works pursuant to New York State Village Law Section 3-301(2), who shall have authority to manage and direct all aspects of the public works of the Village, including but not limited to, waterworks, sewer works and highway works; and WHEREAS, the Board of Trustees for the Village of Gowanda has determined that appointing a Superintendent of Public Works will provide opportunities to reduce the costs and expenses expended by the Village overall with respect to Village public works; and WHEREAS, the Board of Trustees for the Village of Gowanda has determined that no person qualified to perform the duties of Superintendent of Public Works as defined by the Village Board of Trustees currently resides in the Village of Gowanda; and WHEREAS, Mayor McKeever has determined to appoint Jason Opferbeck as the Superintendent of Public Works pursuant to New York State Village Law, Section 3-300, et seq.; and WHEREAS, such mayoral appointments are subject to the approval by the Board of Trustees of the Village of Gowanda, NOW, THEREFORE, the Board of Trustees of the Village of Gowanda, duly convened, does hereby: RESOLVE, that the residency requirement contained in New York State Village Law, Section 3-300 shall be expanded to allow appointment of such persons to the position of Superintendent of Public Works as reside in a county within which the Village of Gowanda is situated, as allowed by New York State Village Law Section 3-300(2)(a). RESOLVE, that Jason Opferbeck be and is appointed to the position of Superintendent of Public Works effective as provided in New York State Village Law Section 3-302. RESOLVE, that the terms and conditions of such appointment, attached to this resolution as Schedule A, shall be determinative of the requirements contained therein. RESOLVED, this Resolution shall take effect immediately."

**Motion 4-16.** Motion by Trustee Zimmermann, seconded by Trustee Koch to adopt the foregoing resolution as presented. Motion carried 4-1. Trustee Sheibley abstained. Both Trustees Markham and Koch indicated they would have liked time to review this resolution prior to this evening.
Mayor McKeever presented the following resolution: "WHEREAS, the Board of Trustees for the Village of Gowanda has determined that it is the best interests of the Village to have coordinated and effective oversight and management of all aspects of the public works of the Village; and WHEREAS, the Board of Trustees of the Village of Gowanda has determined that it must use its best efforts to control and reduce the costs of maintaining the public works of the Village through the use of inter-municipal agreements, contract negotiations with employees and management of the costs and expenses associated with operating water, sewer and highway systems; and WHEREAS, the Board of Trustees for the Village of Gowanda has determined that a re-organization of its public works departments should be explored to identify cost-saving opportunities and provide the most effective and efficient means of providing services to the residents of the Village of Gowanda. NOW, THEREFORE, the Board of Trustees of the Village of Gowanda, duly convened, does hereby: RESOLVE that the Superintendent of Public Works, with the assistance of designated members of the Board of Trustees, Village Attorney, and such other persons as may be appointed by the Mayor, shall investigate the re-organization of the various public works departments of the Village and, as may be determined, prepare a plan of re-organization to be presented to the Village Board of Trustees for their consideration and further action. RESOLVED, this Resolution shall take effect immediately."

**Motion 5-16.** Motion by Trustee Markham, seconded by Trustee Zimmermann to adopt the foregoing resolution as presented. Motion carried 4-1. Trustee Sheibley abstained.

Mayor McKeever presented her list of official appointments for 2016-2017. This list confirmed two-year appointments made in 2015 for the Clerk, Deputy Clerk, Treasurer and Deputy Treasurer.

| OFFICE | TERM | INCUMBENT | APPOINTEE |
|--------|------|-----------|-----------|
| Deputy Mayor | 1 year | Paul Zimmermann | Paul Zimmermann |
| Village Clerk/Deputy Treasurer | 1 year | Kathleen Mohawk | Kathleen Mohawk |
| Deputy Clerk | 1 year | Cynthia Schilling | Kathleen Ellis |
| Treasurer | 1 year | Cindy Lauer | Mark Adamchick |
| Affirmative Action Officer | 1 year | Kathleen Mohawk | Kathleen Mohawk |
| Animal Control Officer | To be incorporated into department through re-organization | | |
| Registrar | 1 year | Kathleen Mohawk | Kathleen Mohawk |
| Deputy Registrar | 1 year | Becky Kuhs | Becky Kuhs |
| Officer-in-Charge | 1 year | Steve Raiport | Steve Raiport |
| Building Inspector | 1 year | Gary Brecker | Larry McCormick, James Pierce, Gary Brecker |
| Historian | 1 year | Phil Palen | Phil Palen |
| Disaster Coordinator | 1 year | Nick Crassi | Nick Crassi |
| Village Engineer | 1 year | Mark Burr | Mark Burr |
| Village Attorney | 1 year | Deborah Chadsey | Deborah Chadsey |
| Superintendent of Public Works | 1 year | Jason Opferbeck | |

**Motion 6-16.** Motion by Trustee Zimmermann, seconded by Trustee Koch to accept the official appointments as presented by Mayor McKeever. Motion carried 4-1. Trustee Sheibley abstained on the last appointment, Superintendent of Public Works.
Mayor McKeever presented her committee assignments for 2016-2017:

- Audit Committee: Carol Sheibley, Wanda Koch
- Beautification, Parks and Trees: Aaron Markham
- Budget Officer: Heather McKeever
- Building Inspector and Ordinances: Wanda Koch
- Building and Sidewalk Maintenance: Carol Sheibley
- Cattaraugus Creek Basin Task Force: Heather McKeever, Paul Zimmermann
- Disaster Coordinator Liaison: Carol Sheibley, Aaron Markham
- Employee Negotiation: Heather McKeever, Paul Zimmermann
- Police Commissioner: Heather McKeever
- Fire Commissioner: Carol Sheibley, Paul Zimmermann
- Gowanda Central School: Heather McKeever
- Public Works Departments: Heather McKeever, Paul Zimmermann
- Recreation: Carol Sheibley, Aaron Markham
- Solid Waste & Recycling: Village Board
- Water and Waste Water Commission: Village Board
- Thatcher Brook Task Force: Village Board

**Motion 7-16.** Motion by Trustee Markham, seconded by Trustee Sheibley to approve the committee assignments as presented by Mayor McKeever. Motion carried 5-0.

Mayor McKeever presented the annual motions:

A) The regular meeting of the Board of Trustees shall be held on the second Tuesday of the month at 7:00 P.M.
B) That the Gowanda Office of Community Bank, NA and MBIA/CLASS be designated as depositories of Village funds for the ensuing year.
C) That the Village Clerk or Treasurer be authorized and directed to draw an order for the amount of the reasonable expenses of Village Officials and employees attending the regular monthly meetings of the Erie County Village Officials Association, the Cattaraugus County Village Officials Association, Southtowns Planning and Development, and Association of Erie Co. Governments.
D) That the Department Heads of the Police, Public Works, Recreation and Clerk's Office be authorized to approve their department payrolls within the structure of the budget.
E) That the Mayor of the Village be and hereby is authorized and empowered to execute such application and documents necessary to apply to the proper state agency regarding a youth recreation program for the Village of Gowanda.
F) That the policy of the Village of Gowanda shall be for all Department Heads to purchase whenever practical such items as Gasoline, Tires, Blacktop, Street Oil, Chlorine and any other such items as directed by the Board, from the New York State Office of Standards and Purchase on what is commonly known as "State Bid." The authorized purchasing agents for the Village of Gowanda are attached.
G) That the Treasurer be authorized by law to temporarily invest moneys not required for immediate expenditure in time open or day to day deposit accounts in financial institutions authorized by New York State Department of Audit and Control.
H) That any non-profit Village-oriented organization be allowed use of the Village parking lots for approved special events on a no fee basis. Requests must be filed at least thirty (30) days in advance with a certificate of insurance. The organization involved shall be responsible for all clean up as directed by the Public Works Department.
I) The official newspaper of the Village of Gowanda is hereby officially designated as the Evening Observer.
J) That the Village departments will make all purchases in accordance with the Village Purchasing Policy and Procedures as attached.
K) That all Village investments are made in accordance with the Village of Gowanda Investment Policy and Guidelines as attached.
L) The Village's Safety policy will be adhered to and remains in effect as per attached.
M) The Drug and Alcohol Policy will be obeyed and remains in effect as per attached.
N) The meetings will follow the Rules of Procedure for meetings.
O) The Village's Prohibition of Sexual Harassment Policy will be adhered to and remains in effect as per attached.
P) The Village's Information Technology policy will be adhered to and remains in effect as per attached.
Q) The Village's Workplace Violence Prevention Policy will be adhered to and remains in effect as per attached.

**Motion 9-16.** Motion by Trustee Zimmermann, seconded by Trustee Koch that the annual motions be approved as presented. Motion carried 5-0.

**Motion 10-16.** Motion by Trustee Sheibley, seconded by Trustee Zimmermann to approve the minutes of the March 8, 2016 Village Board meeting as presented. Motion carried 5-0.

**Motion 11-16.** Motion by Trustee Sheibley, seconded by Trustee Markham to approve Abstract #31 dated April 5, 2016 on all funds as follows:

| Fund | Amount |
|-----------------------|------------|
| General Fund | $27,370.88 |
| Water Fund | 18,891.38 |
| Sewer Fund | 41,629.67 |
| UDAG Fund | 29,200.00 |
| Flood Recovery | 1,080.00 |
| Joint Activity | 1,360.05 |
| **Total** | **$119,531.98** |

Village Clerk Mohawk advised that the voucher to Grainger for the utility pump in the water fund should be removed, as the Village received a credit for it. Trustee Sheibley also questioned the sewer voucher for $19,000; the amount approved was $3,568. Public Works Superintendent Opferbeck advised that the amount quoted was a per-day amount and that he had no idea how many days it would take to clean the digester. Motion carried 5-0.

**PROJECT UPDATES**

Public Works Superintendent Opferbeck reported that the bids for the Creekside Improvements are returnable April 7th. He also advised that another intern has been found for the asset management grant; Alan Nephew can begin in the middle of May. It is a $5,000 grant with no match. Mayor McKeever advised that a new meeting date is being scheduled for the final review. In conjunction with the Safe Routes to School project, Healthy Community Alliance is planning a Wellness Walk on Saturday, May 14th. Village Clerk Mohawk advised that the insurance certificate from the organization has not yet been received; the Village Board will review this before the public hearings next week.

**PUBLIC PARTICIPATION**

John Walgus, President of Hidi Hose Company, advised the company will be celebrating its centennial this year. He requested permission to proceed with a $1200 plaque commemorating same, which will be paid for by the fire company. They will ask to place the marker once it is received.

**Motion 12-16.** Motion by Trustee Zimmermann, seconded by Trustee Koch to allow the fire company to pursue the purchase of a centennial marker. Motion carried 5-0.

Mr. Walgus indicated the company wants to hold a dinner to raise funds for the marker. Anything over what is necessary for the plaque will be donated to Community Connections. Fire Chief Mark Hebner presented a quote for fire hoses from Eliza Co. He indicated it is necessary to replace the old hoses; the cost is $308.25 apiece. First Assistant Chief Nick Crassi reported that hose testing regulations have changed and the present hoses probably won't pass the tests. He indicated there is money in the budget for this purchase.

**Motion 13-16.** Motion by Trustee Zimmermann, seconded by Trustee Koch to approve the purchase of the 10 lengths of 100-foot hose at a cost of $308.25 each. Motion carried 5-0.
John Walgus asked about further patching work on Palmer Street. He indicated there are about 40 to 50 small holes in the road. The Village only patched a few holes and he is not happy with the job that was done.

Charity Sweda again asked that the drain in front of her house be fixed. She also indicated that the Village Board should make sure the Time Warner contract doesn't contain a clause barring the use of a third-party auditor.

**LEGAL**

Village Attorney Chadsey stated there is no restriction on the Village for auditing the supplier. Mayor McKeever stated that next Tuesday at 6:00 is the public hearing on the Time Warner franchise agreement and at 6:30 is the budget hearing. Pete Sisti suggested that the Village Board pursue a third-party auditor for the past three years since Time Warner was unwilling to negotiate.

**Motion 14-16.** Motion by Trustee Sheibley, seconded by Trustee Markham to set the public hearing for the Time Warner franchise agreement for Tuesday, April 12, 2016 at 6:00 p.m. Motion carried 5-0.

Village Attorney Chadsey reported that the Gowanda Fitness application to take an assignment of the center by the new owners has been worked out.

**Motion 15-16.** Motion by Trustee Zimmermann, seconded by Trustee Koch to allow the new owners of Gowanda Fitness to move forward with taking the loan by assignment. Motion carried 5-0.

There was discussion about the delinquent UDAG loan. Village Attorney Chadsey indicated she could run a search and send out information subpoenas.

**Motion 16-16.** Motion by Trustee Sheibley, seconded by Trustee Zimmermann to authorize Village Attorney Chadsey to run a search of the delinquent loan and send out information subpoenas, at a cost of up to $500. Motion carried 5-0.

Village Attorney Chadsey spoke about setting up a meeting between the Village Board and the GARC Board to update the new Board members. She asked if they could have a conference call to get up to speed. Village Attorney Chadsey briefly explained that the PRPs agreed to do the cleanup of the site and the Village would facilitate the purchase by GARC. The PRPs put up funds for operation and maintenance, administrative costs and beautification, but they now want to be gone. The Village needs to figure out the costs of operation and maintenance for the next 25 years to make sure the funding will be enough. John Walgus indicated the PRPs made an offer through Village Attorney Chadsey and the GARC Board members don't feel it is enough. Village Clerk Mohawk was asked to e-mail the Board members to get their availability over the next 10 days for a conference call with Village Attorney Chadsey.

**JOINT ACTIVITY**

Trustee Sheibley reported that the basketball backboard will be replaced.

**BUSINESS/BUILDING PERMITS**

Disaster Coordinator Crassi stated that the code enforcement office is important and needs to be kept close by. He indicated that Building Inspector Brecker has always been available when they needed him. Mayor McKeever wants the police officers to shadow Mr. Brecker for code enforcement. Both Mr. Crassi and Fire Chief Hebner feel the Village needs someone who is really knowledgeable about the codes.

Bob Tiller asked about the status of the Emborski property on Memorial Drive. Mayor McKeever indicated the Zoar Valley Clinic, the Savarino project on South Water Street, is looking to have its groundbreaking in mid-April.

**POLICE**

Trustee Sheibley advised she was planning to attend the Seneca Nation Meet and Greet on Wednesday.
Officer-in-Charge Raiport stated that with the resignation of Officer Campas and the death of Officer Hock, he would like to hire two additional competitive part-time officers.

**Motion 17-16.** Motion by Trustee Markham, seconded by Trustee Koch to hire Officer Earl Farina at competitive part-time status. Motion carried 5-0.

**Motion 18-16.** Motion by Trustee Markham, seconded by Trustee Koch to hire Officer John Bennett at competitive part-time status. Motion carried 5-0.

Officer-in-Charge Raiport also reported that two other officers passed their physical agility tests, making them eligible for competitive part-time or full-time status. He asked the Village Board to change their employment status to Competitive Part-time.

**Motion 19-16.** Motion by Trustee Zimmermann, seconded by Trustee Koch to change the status of Officer Josh Bartholomew to Competitive Part-time. Motion carried 5-0.

**Motion 20-16.** Motion by Trustee Sheibley, seconded by Trustee Koch to change the status of Officer Elwood Mohawk to Competitive Part-time. Motion carried 5-0.

On Tuesday, March 29, 2016, the Community Connections Group, along with Gowanda Central School, hosted a drug information forum. There was a good turnout of community members at the forum. A Community Workshop is scheduled for April 30th from 9 a.m. to 1 p.m. Several speakers will be on hand, along with Narcan training; the workshop will be very informative and will take a more hands-on approach, with displays and booths set up.

**FIRE**

Trustee Sheibley reported that new OSHA regulations have been instituted regarding sexual harassment and violence prevention. She suggested that since the Village already has those policies in place, it would be good if the fire company could use those policies in training.

**Motion 21-16.** Motion by Trustee Sheibley, seconded by Trustee Zimmermann to allow the fire department to use the current Village policies on sexual harassment and violence prevention for training purposes. Motion carried 5-0.

Fire Chief Mark Hebner advised that the street washing would take place on May 1st.

**DISASTER COORDINATOR**

Disaster Coordinator Crassi reported that Greenman Peterson has completed the final report. FEMA has officially received all the PW changes. They wanted more pictures and descriptions of the pictures. Mr. Crassi also indicated there may be more funds for this building from the 2009 flood.

Disaster Coordinator Crassi advised that the Town of Perrysburg asked Public Works Superintendent Opferbeck and Mayor McKeever to meet on the road closures for Indian Hill Road. John Walgus advised that the Seneca Nation has offered to supply the traffic control devices when the road closes.

Disaster Coordinator Crassi asked about the maintenance contract for the emergency generators and why it was never renewed. Mayor McKeever said the service work could be done in-house. Mr. Crassi advised the generators were put in through grants and he feels the testing should be done before an emergency happens. Public Works Superintendent Opferbeck said the public works department is following the same checklist that the generator service companies use. Trustee Sheibley said the contract was for 2 years at $5,000/year for 7 generators; the fire department amount is $540 for 2 years. She feels it should be revisited.

**PUBLIC WORKS**

Village Clerk Mohawk reported that the United States Department of the Interior Fish and Wildlife Service will be treating the Cattaraugus Creek for sea lamprey populations from May 3 through May 12.
The treatment may change the color of the water. She will put a notice on the website.

There was some discussion about the garbage bids that were received. Village Clerk Mohawk will supply copies to Trustees Koch and Markham to bring them up to speed.

There was discussion about the request from Highway Superintendent Denea to get paid for his unused vacation time. Trustee Zimmermann stated that per the terms of the contract, Mr. Denea should get paid; the Village doesn't have an option. Public Works Superintendent Opferbeck stated that he asked Mr. Denea last month to put the request in writing, which he did.

**Motion 22-16.** Motion by Trustee Zimmermann, seconded by Trustee Markham to pay Highway Superintendent Denea 72 hours of unused vacation time per the terms of the supervisory contract. Motion carried 5-0.

Mayor McKeever suggested meeting next week at 5:30, prior to the public hearing, to take action on some of the items, including two event applications that are insufficient and the proposal from Wendel.

**ADMINISTRATION**

**Motion 23-16.** Motion by Trustee Sheibley, seconded by Trustee Zimmermann to set a public hearing for the 2016-2017 Village budget on April 12, 2016 at 6:30 p.m. Motion carried 5-0.

There was discussion about the Houghton College training. Trustee Koch indicated she would attend. Mayor McKeever asked that Village Clerk Mohawk make sure the new code enforcement team is aware of the program as well.

Public Works Superintendent Opferbeck advised that he saw a copy of a letter today that was sent to a delinquent waste hauler. He asked if the Board should consider late notices for unpaid waste haulers as well.

Public Works Superintendent Opferbeck reported that the cemetery cleaning used to be done by prison work crews, but there are not enough crews now to do the work. Trustee Sheibley suggested that he call Brocton.

Village Clerk Mohawk presented the 2016-2017 water/sewer relevies.

**Motion 24-16.** Motion by Trustee Zimmermann, seconded by Trustee Sheibley to approve the 2016-2017 water/sewer relevies in the amount of $47,149.07 for Cattaraugus County and $18,090.03 for Erie County. Motion carried 5-0.

**ENVIRONMENT**

Phil Palen indicated that he would need compost, mulch and a backhoe driver for the upcoming tree planting. He advised that 13 trees will be purchased from Chestnut Ridge for $1,001. Mr. Palen indicated that funds will be left over from the tree budget and he would like the money to be put into a tree reserve for future use.

Mayor McKeever advised that she responded to the Farmer-Neighbor dinner invitation for April 13th.

**Motion 25-16.** Motion by Trustee Zimmermann, seconded by Trustee Koch to adjourn the Village Board meeting at 9:35 p.m. Motion carried 5-0.

The next Village of Gowanda board meeting is May 10, 2016 at 7:00 p.m.

Respectfully submitted,
Kathleen V. Mohawk
Village Clerk
Since early summer, a three-person team from LCA, Tip of the Mitt Watershed Council and Land Information Access Association has been visiting each of the 10 planning commissions with responsibilities for our Lake Charlevoix shoreline. Our three-person team arrives, makes a brief presentation on the need for and history of shoreline protection, and then asks questions. The result is dialogue about the actual experiences of our townships and cities as they adopt and enforce shoreline regulations that balance development and lake protection.

What are we up to? We are in the first steps of a process to develop a vision for Lake Charlevoix. The combined efforts of the various public and nonprofit organizations working to maintain the lake in a natural state have not been able to prevent rapid development of our shoreline. Our community response to the high water of 2019 and 2020 resulted in numerous shoreline hardening projects. As the water receded in 2021, a large amount of stone placed around the lake became apparent. We need to do better. The question is how?

From what we know and what we have experienced around the lake, environmental protection happens best through community consensus. We have a watershed management plan produced through a consensus process, the Lake Charlevoix Watershed Management Plan. The Plan is maintained on the Tip of the Mitt Watershed Council website, https://www.watershedcouncil.org/lake-charlevoix-watershed-management-plan.html. It contains a wealth of detail about conditions in the watershed, warnings about risks, and ideas for protection. It states as its goal that protecting the water quality of the watershed is the only way to protect the quality of Lake Charlevoix. But what does a protected Lake Charlevoix look like? We think it is time to develop such a vision.

Our experience teaches us that our local planning commissions are the places where community standards are developed, interpreted, and enforced. So, with a grant from the Charlevoix County Community Foundation, matched by our own resources, we engaged with Tip of the Mitt Watershed Council and Land Information Access Association in a listening exercise. By the end of October, we will have been to each of the 10 planning commissions, not to talk but to listen. By January, we will have processed our notes and impressions. We intend to return to the planning commissions to report the observations and suggestions which emerge from our listening. We will also be formulating the next step towards developing a consensus vision for a future Lake Charlevoix that continues to be a place of natural beauty and inspiration. Please stay tuned.

My connection to Charlevoix traces back to the late 19th century, when my great-grandfathers arrived here. They came to exploit the forests and the fish populations and moved on when those resources were exhausted. I find it ironic that we are now working to re-establish a balance that existed for hundreds of years before my ancestors overturned it in just a few decades.

Shoreline protection has been an important mission of the Lake Charlevoix Association for the past 50-plus years. A watershed management approach has been an important component of our mission for more than a decade. One result of this effort to establish a watershed-wide conservation strategy was the adoption of shoreline protection ordinances by all 10 of the jurisdictions sharing responsibility for some portion of the Lake Charlevoix shoreline. These ordinances were not passed based on aesthetic notions.
They were adopted because scientists had identified the critical role our natural shoreline plays in maintaining a healthy lake. An important feature of a healthy lake is the presence of a broad spectrum of wildlife, plants and animals. Our lake needs to be suitable for recreation, but we also want it to be home to a wide assortment of wildlife. Owners of waterfront lots have the privilege and responsibility to preserve and protect these natural shorelines.

I am one of those fortunate people who is privileged to own shoreline property. Just as we must manage our property so as not to negatively impact our neighbors to the side and back, so must we manage our property to avoid behaviors that negatively impact the lake. Our challenge is to find a way to manage development in balance with nature.

At LCA, we know that many permits were sought and granted in the crush caused by the convergence of record high water and EGLE's diminished capacity due to Covid-19. We also suspect that certain riparian owners with access to equipment have dumped stone along their shorelines without bothering with the permit process. The end result was a failure of our shoreline protection system to protect the lake. The damage has been done. Our only choice going forward is to revisit the biological foundations of shoreline protection and think about how we can do a better job of it. I repeat, we don't do shoreline protection because we like the looks of it. We do shoreline protection because it is essential to preserving the quality of the watershed.

Tom Darnton, LCA President

Mission: Protect the natural quality and beauty of Lake Charlevoix. Promote understanding and support for safe and shared lake use. Advocate sensible and sustainable practices for lake use and development.

LCA Board of Trustees
Tom Darnton | President
Dan Mishler | 2nd Vice President
Howard Warner | Treasurer
John Hoffman | Secretary
Kim Baker | Director
Joe Kimmell | Director
Peggy Smith | Director

www.lakecharlevoix.org
Like us on Facebook!

Keeping Lake Charlevoix Blue: An LCA Septic Study Update
Dan Mishler

[Photo: Logan and Sophie orient the crew and collect data on iPads]

Leaking septic tanks are probably the last things that come to mind when you look out over Lake Charlevoix the beautiful. However, they have been on the minds of the Lake Charlevoix Association, Tip of the Mitt Watershed Council (TOMWC), and Central Michigan University for more than a year. To address their concerns, a collaborative multi-year septic study was begun this summer on our lake.

Caroline Keson, Monitoring Programs Coordinator of TOMWC, and two of their summer interns have been busy collecting data from suspected hotspots along our shoreline. Water was sampled at 396 points along the shoreline from kayaks with a conductivity probe. High conductivity is a strong indicator of excess nutrients in the water, and positive results were found in 30 locations. Samples were collected at these hotspots and will be analyzed by CMU for nutrient types and concentration levels.

Moving water and weather conditions can obscure the source of nutrients along a shoreline. To that end, permission was sought from owners to sample on land to better locate the actual source of any nutrients. We are encouraged that 65 people gave us permission to sample their property. A land probe will be used as a first indicator, followed by collecting water samples from just below the ground surface.
These samples will be analyzed for levels of human enteric (gut) bacteria to determine if a leaky septic system is at fault.

As with several previous projects, the LCA has been able to make use of summer interns working with TOMWC. The upside of this approach is that we get enthusiastic, competent young people who can help us complete projects that are beyond the scope of volunteers, while expanding their expertise in their chosen field. Evan Joneson, a CLEAR Fellow with the watershed council, reported the following: "This summer, I was able to help Caroline Keson rewrite the methodology behind the council's septic sampling and monitoring program. It was a great experience for me to see what it was like to do research and outreach to determine best practices and see our preparation and work come to fruition. The fieldwork that followed was a great learning experience that taught me so much about the world of water quality conservation."

The Great Lakes News Collaborative reports estimates of 330,000 failing septic systems in the State of Michigan. These failing systems contaminate lakes, rivers, and ground water. The upside of this project is that it addresses the twin goals of protecting the high water quality of Lake Charlevoix and developing protocols for future studies of septic failures. As has been previously reported, this study is a multi-year project with funding from LCA and the Charlevoix County Community Foundation.

Protecting the Shoreline Naturally
Joel Van Roekel, with much assistance from Jennifer Buchanan

In early October a stretch of the East Jordan Tourist Park shoreline became the site of a much-needed restoration effort. A two-day program was led by the Tip of the Mitt Watershed Council (TOMWC) in collaboration with the Michigan Natural Shoreline Partnership. It brought 23 contractors and representatives from local and state government to learn the whys and hows of bioengineered shorelines. Jennifer Buchanan, Associate Director of TOMWC, said, "The workshop was designed as a continuing education training opportunity to help shoreline contractors learn more about the proper use of rock in designing resilient shorelines for high energy lakes."

The workshop emphasized the use of bioengineering as a method of shoreline restoration. These projects are designed to restore shoreline function using natural materials including native plants, coir (coconut fiber) logs, and fieldstone. When designed properly, these projects can withstand waves and ice push. According to Buchanan, "With time, the fieldstone will collect sand and organic material between the nooks and crannies and create more shoreline. This shoreline… will be fortified with plants and fieldstone that flex and yet stabilize, providing water quality and habitat benefits all the while."

Funding for the shoreline restoration came from the DNR Aquatic Habitat Grant Program. This project is the result of many hands working many hours over many years. The LCA is grateful that there are so many people who believe that Lake Charlevoix is "Ours to Protect."

[Photo: A woven coir blanket is draped over the sand slope; a coir log and large toe stones are placed in a trench to form the base of the slope. In the spring, the LCA will install native plants throughout the coir logs and the woven coir blanket. As these plants become established, they will "creep" toward the water's edge. Photo credits: Tip of the Mitt Watershed Council]

A leisurely cruise along our Lake Charlevoix shoreline can be a memorable way to spend your day.
Unique cottages, wonderfully creative landscapes, and a multitude of plants along the water's edge are there to enjoy. While your eyes are more likely to be drawn to the lakeside homes, tall trees and abundant boulders, for Lindsey Bona-Eggeman and her team from CAKE/CISMA, their eyes are on the plants. Lindsey is the Program Coordinator for the Charlevoix, Antrim, Kalkaska, and Emmet Cooperative Invasive Species Management Area. Their mission is to protect the natural resources, economy, and human health of Northern Lower Michigan. In addition to the four counties, they partner with more than 30 environmental groups and organizations for outreach, education, and restoration.

CAKE/CISMA plays an important role in the ongoing battle against aquatic invasive species on our inland lakes. While there are several aquatic invasives of concern, non-native Phragmites has been a problem on Lake Charlevoix for more than a decade. Many of the large stands of Phragmites have been removed or reduced, but it is a tenacious plant whose presence does not bode well for a shoreline. When a stand of Phragmites establishes itself, it can displace the diverse communities of native plants. It can also reduce quality habitat for wildlife, alter the natural shoreline, and develop into dense stands up to 15 feet in height. Phragmites reproduces through both seeds (up to 2,000 per plant) and rhizomes that can grow to more than 60 feet long and burrow six feet down. To learn more, visit the CAKE/CISMA website (cakecisma.org), which is highly informative and has a "Site Visit" request link.

The following is from a conversation with Bona-Eggeman last month:

**JVR:** Besides the great information on your website, is there anything else you would like people to know?

**LB-E:** CAKE can't tackle invasive species on our own. It takes all of us. We need individual property owners to take some initiative on their own properties. We can provide guidance and help with early detection, rapid response, and maintenance when we are alerted.

**JVR:** Why can't we just eradicate Phragmites?

**LB-E:** Eradication tends to focus on when something is first discovered. Once an invasion happens, we are often stuck with it. Often, we work to figure out how to reach a balance with an invasive so that it doesn't take over everything. We look to see how it is functioning in the ecosystem.

**JVR:** What should people do if they think they spot an invasive on their shoreline?

**LB-E:** Take some really good pictures. Send them to the Michigan Invasive Species Information Network (misin.msu.edu) or email me at firstname.lastname@example.org or click on "Request a Site Visit" on our web page.

**JVR:** What do we need to keep in mind in dealing with aquatic invasives?

**LB-E:** Weeds are notorious for being pesky and are adapted to press on and survive. It often takes two to three years to get ahead of the curve.

WHAT IS A LAKE? Beginnings. Adventure. Peace. Wonder. Joy. Anticipation. Renewal. Chores. Action. Friends. Reunion. Farewells. Ours to protect.

[Background photo credit: Mark Stanley]

The Cisco, or lake herring, has plied the waters of Lakes Michigan and Charlevoix for centuries. Its history is one of bounty, with 19,000,000 pounds harvested annually before 1940, as well as near extinction due to overfishing, the introduction of rainbow smelt and sea lamprey, and the explosion of alewives. The Cisco crashed so dramatically in the 1960s that six of eight major forms of deep-water Cisco are no longer found in Lake Michigan.
In a surprising turn of events, it appears that the Cisco are making a comeback in Lake Michigan and Lake Charlevoix. This population uptick has caught the interest of the Michigan DNR Fisheries Division as well as other agencies, universities, and environmental groups. Several studies are under way to better understand the Cisco's spawning locations, diet, and migration habits.

The Cisco have surprised researchers with an unusually rapid shift in their diet. Historically, Cisco ate zooplankton and small invertebrates, both abundant in Lake Michigan and Lake Charlevoix. The introduction of the Zebra and Quagga mussels changed all that. The Cisco were forced to forage for larger prey and, in an ironic twist of fate, now consume both the alewives that were once competition and the invasive Round Goby that followed the Zebra and Quagga mussels from the Caspian Sea.

To better understand the migration habits of the Lake Michigan/Charlevoix Cisco, scientists from the Michigan DNR and the US Geological Survey have begun a movement study using acoustic telemetry. Small transmitters are surgically implanted, and a grid of acoustic receivers records data from each individual whenever it swims within range of a receiver. Results of these studies will be reported in future issues of *The Lake Guardian*.

The resurgence of the Cisco has been noticed by sport fishermen, who have had great success year-round. Over the past seven years, more than 26,000 catches have been recorded in the Pine River Channel and off our coast in Lake Michigan, with thousands more landed in Lake Charlevoix. Surprisingly, similar levels of success are enjoyed by ice anglers. For reasons that are not completely understood, schools of Cisco are found from the Coast Guard station all the way to Boyne City throughout the winter season. A conversation with local guide Jim Chamberlain produced some surprising answers. He said, "There is not another Cisco ice fishery up north that competes in terms of quantity. Every fish we catch is a Michigan Master Angler fish. A 16-inch Cisco is considered a master angler fish and we consistently catch 22-, 24-, and 25-inch fish."

**PROJECT SUMMARY**

Cisco (lake herring) in Lake Charlevoix were implanted with acoustic tracking devices in January 2022. These fish are being actively tracked by scientists at the US Geological Survey and the Michigan DNR to better understand movement to and from Lake Charlevoix and to improve our understanding of the expanding population in northern Lake Michigan. Only a small number of fish were tagged, and keeping as many in the water as possible is important to the success of this project.

**What does a tagged cisco look like?**

- Bright green tag implanted below dorsal fin
- ID number and "Release and Report" printed on tag
- If caught, note ID number and call 203-231-5289
- If harvested, **please report** and retain tracking device (implanted in the body by the stomach)

*For more information, please call 203-231-5289*

The news these days is filled with so many reports of environmental disasters and threatened ecosystems. It is great to be able to share stories of resiliency and adaptability in environmentally trying times.

Tributes

In Honor of Robert Doher: Lake Charlevoix Cove Association
In Honor of James Ehinger: Mary Ann Ehinger
In Honor of David Harris: Rod Lemmer & Mary Foucard, Rosemary Hill, Dale & Mary Shaw, Jeffery & Katherine Slabaugh
In Honor of Patricia Hicks: Shirley Barton
In Honor of David & Linda Salisbury: John & Susan Levasseur
In Honor of M. Sue Shank: Sarah Clark
In Honor of Barbara White: Susan Campbell

Literature

Look for the LCA binders located at each of the libraries in our surrounding communities. You can view simple, practical, and water-friendly ways to protect Lake Charlevoix at the Boyne City, Charlevoix, and East Jordan libraries. Please note that we will be updating the information regularly and we would love to have a few good volunteers help with that task. Contact Peggy Smith @ email@example.com to learn more.

Scan the QR code for the water levels forecast.

News Bites

With Much Appreciation: We applaud retiring LCA Board Member Joel Van Roekel and sincerely appreciate his hard work and efforts. Thank you, Joel, for your many wonderful years of dedication!

Welcome: The LCA welcomes new Board Member and Treasurer, Howard Warner. We look forward to collaborating with you on many upcoming lake protection projects.

Thank You: Protecting our beautiful Lake Charlevoix would not be possible without the support of our members and volunteers. Thank you for backing our mission. Our member numbers are up, and engagement continues to grow through our social media presence. In addition, we've had a noticeable uptick in Tribute contributions. What an incredible way to pay homage to a favorite person in your life.

Year-End Donations: The LCA is a 501(c)(3) and always appreciates being included in your year-end giving. Your tax-deductible donation funds lake protection for Lake Charlevoix. You may donate on our website, https://www.lakecharlevoix.org/support-us.html, or with the enclosed envelope.

License Plates: If you would like an LCA license plate for the front of your vehicle, please contact us at firstname.lastname@example.org. $10 for local pick-up or $15 to ship.