On Chordal-k-Generalized Split Graphs

Andreas Brandstädt
Institut für Informatik, Universität Rostock, D-18051 Rostock, Germany

Raffaele Mosca
Dipartimento di Economia, Università degli Studi “G. D'Annunzio”, Pescara 65121, Italy

December 30, 2023

A graph G is a chordal-k-generalized split graph if G is chordal and there is a clique Q in G such that every connected component in G[V ∖ Q] has at most k vertices. Thus, chordal-1-generalized split graphs are exactly the split graphs. We characterize chordal-k-generalized split graphs by forbidden induced subgraphs. Moreover, we characterize a very special case of chordal-2-generalized split graphs for which the Efficient Domination problem is NP-complete.

Keywords: Chordal graphs; split graphs; partitions into a clique and connected components of bounded size; forbidden induced subgraphs; NP-completeness; polynomial time recognition.

§ INTRODUCTION

G=(V,E) is a split graph if V can be partitioned into a clique and an independent set. The famous class of split graphs was characterized by Földes and Hammer in <cit.> as the class of 2K_2-free chordal graphs, i.e., the class of (2K_2,C_4,C_5)-free graphs, and G is a split graph if and only if G and its complement graph G̅ are chordal.

There are various important kinds of generalizations of split graphs: G is unipolar if there is a clique Q in G such that G[V ∖ Q] is the disjoint union of cliques of G, i.e., G[V ∖ Q] is P_3-free. Clearly, not every unipolar graph is chordal. G is called a generalized split graph <cit.> if either G or G̅ is unipolar. Generalized split graphs are perfect, and Prömel and Steger <cit.> showed that almost all perfect graphs are generalized split graphs.

We consider a different kind of generalization of split graphs: A graph G is a chordal-k-generalized split graph if G is chordal and there is a clique Q in G such that every connected component in G[V ∖ Q] has at most k vertices; we call such a clique Q a k-good clique of G. Thus, chordal-1-generalized split graphs are exactly the split graphs. We characterize chordal-k-generalized split graphs by forbidden induced subgraphs.

An induced matching M ⊆ E is a set of edges whose pairwise distance in G is at least two. A hereditary induced matching (h.i.m.) is an induced subgraph in which every connected component has at most two vertices, i.e., its vertex set is the disjoint union of an independent vertex set S and the vertex set of an induced matching M in G. Thus, G is a chordal-2-generalized split graph if and only if G has a clique Q such that G[V ∖ Q] is a hereditary induced matching.
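Before turning to the general results, we note that split graphs themselves can be recognized in linear time from the degree sequence alone, by a classical criterion of Hammer and Simeone (not used in this paper; we recall it here only for illustration). A minimal Python sketch, under that assumption:

# Split-graph test via the Hammer-Simeone degree-sequence criterion:
# with degrees d_1 >= ... >= d_n and m = max{ i : d_i >= i-1 },
# G is split iff sum_{i<=m} d_i = m(m-1) + sum_{i>m} d_i.
def is_split(degrees):
    d = sorted(degrees, reverse=True)
    n = len(d)
    m = max(i for i in range(1, n + 1) if d[i - 1] >= i - 1)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])

# The 4-cycle C_4 is not split; the paw (a triangle plus a pendant vertex) is.
print(is_split([2, 2, 2, 2]))   # False (C_4)
print(is_split([3, 2, 2, 1]))   # True  (paw)

By the theorem of Földes and Hammer quoted above, for chordal inputs this test decides 2K_2-freeness as well.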
§ BASIC NOTIONS AND RESULTS

For a vertex v ∈ V, N(v)={u ∈ V: uv ∈ E} denotes its (open) neighborhood, and N[v]={v} ∪ N(v) denotes its closed neighborhood. A vertex v sees the vertices in N(v) and misses all the others. The non-neighborhood of a vertex v is N̅(v):=V ∖ N[v]. For U ⊆ V, N(U):= ⋃_{u ∈ U} N(u) ∖ U and N̅(U):=V ∖ (U ∪ N(U)).

For a set F of graphs, a graph G is called F-free if G contains no induced subgraph isomorphic to a member of F. In particular, we say that G is H-free if G is {H}-free. Let H_1+H_2 denote the disjoint union of graphs H_1 and H_2, and for k ≥ 2, let kH denote the disjoint union of k copies of H. For i ≥ 1, let P_i denote the chordless path with i vertices, and let K_i denote the complete graph with i vertices (clearly, P_2=K_2). For i ≥ 4, let C_i denote the chordless cycle with i vertices. A graph G is chordal if it is C_i-free for every i ≥ 4.

H is a butterfly (see Figure <ref>) if H has five vertices, say v_1,…,v_5, such that v_1,v_2,v_3 induce a K_3 and v_3,v_4,v_5 induce a K_3, and there is no other edge in H.

H is an extended butterfly (see Figure <ref>) if H has six vertices, say v_1,…,v_6, such that v_1,v_2,v_3 induce a K_3 and v_4,v_5,v_6 induce a K_3, and the only other edge in H is v_3v_4.

H is an extended co-P (see Figure <ref>) if H has six vertices, say v_1,…,v_6, such that v_1,…,v_5 induce a P_5 and v_6 is only adjacent to v_1 and v_2.

H is a chair if H has five vertices, say v_1,…,v_5, such that v_1,…,v_4 induce a P_4 and v_5 is only adjacent to v_3.

H is an extended chair (see Figure <ref>) if H has six vertices, say v_1,…,v_6, such that v_1,…,v_5 induce a chair as above and v_6 is only adjacent to v_1 and v_2.

H is a gem if H has five vertices, say v_1,…,v_5, such that v_1,…,v_4 induce a P_4 and v_5 is adjacent to v_1,v_2,v_3, and v_4.

H is a double-gem (see Figure <ref>) if H has six vertices, say v_1,…,v_6, such that v_1,…,v_5 induce a gem, say with P_4 (v_1,v_2,v_3,v_4), and v_6 is only adjacent to v_3 and v_4.

We say that for a vertex set X ⊆ V, a vertex v ∉ X has a join (resp., co-join) to X if X ⊆ N(v) (resp., X ⊆ N̅(v)). The join (resp., co-join) of v to X is denoted by v①X (resp., v⓪X); if v①X then v is universal for X. Correspondingly, for vertex sets X,Y ⊆ V with X ∩ Y = ∅, X①Y denotes x①Y for all x ∈ X, and X⓪Y denotes x⓪Y for all x ∈ X. A vertex x ∉ U contacts U if x has a neighbor in U. For vertex sets U,U' with U ∩ U' = ∅, U contacts U' if there is a vertex in U contacting U'.

As a first step for our results, we show:

If G=(V,E) is a chordal graph, Q is a clique in G and Z is a connected component in G[V ∖ Q] such that every vertex q ∈ Q has a neighbor in Z, then there is a universal vertex z ∈ Z for Q, i.e., z①Q.

Proof. We show inductively for Q={q_1,…,q_k} that there is a z ∈ Z with z①Q: If k=1 then trivially, there is such a vertex z ∈ Z, and if k=2, say Q={q_1,q_2}, then let x,y ∈ Z be such that q_1x ∈ E and q_2y ∈ E. If xq_2 ∉ E and yq_1 ∉ E then take a shortest path P_xy between x and y in Z; clearly, since G is chordal, there is a vertex in P_xy seeing both q_1 and q_2.

Now assume that the claim is true for the clique Q ∖ {q_k} with k-1 vertices, and let x ∈ Z be a universal vertex for Q ∖ {q_k}. If xq_k ∈ E, we are done. Thus assume that xq_k ∉ E, and let y ∈ Z with q_ky ∈ E. If xy ∈ E then, since G is chordal, clearly, y is adjacent to all q_i, 1 ≤ i ≤ k-1. Thus, assume that the distance between x and y in Z is larger than 1. Without loss of generality, let y be a neighbor of q_k in Z which is closest to x. Let P_xy be a shortest path between x and y in Z. Thus, q_k is nonadjacent to every internal vertex in P_xy and to x. Then, since G is chordal, y is adjacent to all q_i, 1 ≤ i ≤ k-1. Thus, Lemma <ref> is shown.

§ K-GOOD CLIQUES IN CHORDAL GRAPHS

§.§ 2K_2-free chordal graphs

To make this manuscript self-contained, let us repeat the characterization of split graphs (and a proof variant which can be generalized for k-good cliques).

A chordal graph G is 2K_2-free if and only if G is a split graph.

Proof. If G is a split graph then clearly, G is 2K_2-free chordal. For the converse direction, assume to the contrary that for every clique Q of G, there is a connected component, say Z_Q, of G[V ∖ Q] with at least two vertices; we call such components 2-nontrivial.
Since G is 2K_2-free, all other components of G[V ∖ Q] consist of isolated vertices. Let Q_1:={q ∈ Q: q has a neighbor in Z_Q}, and Q_2 := Q ∖ Q_1, i.e., Q_2⓪Z_Q. Thus, Q = Q_1 ∪ Q_2 is a partition of Q. Since G is 2K_2-free and Z_Q is 2-nontrivial, clearly, |Q_2| ≤ 1, and there is no connected component in G[V ∖ (Q_1 ∪ Z_Q)] with at least two vertices. Thus, G[V ∖ (Q_1 ∪ Z_Q)] is an independent vertex set.

Let Q be a clique of G whose 2-nontrivial component Z_Q of G[V ∖ Q] is smallest with respect to all cliques in G. Clearly, Z_Q is also the 2-nontrivial component of G[V ∖ Q_1], i.e., Z_{Q_1}=Z_Q. Thus, Q_1 is a clique in G with smallest 2-nontrivial component, and from now on, we can assume that every vertex in Q_1 has a neighbor in Z_{Q_1}. Thus, by Lemma <ref>, there is a universal vertex z ∈ Z_{Q_1} for Q_1, i.e., z①Q_1. This implies that for the clique Q':=Q_1 ∪ {z}, the 2-nontrivial component Z_{Q'} (if there is any for Q') is smaller than the one of Q_1, which is a contradiction. Thus, Theorem <ref> is shown.

§.§ (2P_3, 2K_3, P_3+K_3)-free chordal graphs

Clearly, a connected component with three vertices is either a P_3 or a K_3, and a graph G is (2P_3, 2K_3, P_3+K_3)-free chordal if and only if G is (2P_3, 2K_3, P_3+K_3, C_4, C_5, C_6, C_7)-free. In a very similar way as for Theorem <ref>, we show:

A chordal graph G is (2P_3, 2K_3, P_3+K_3)-free if and only if G is a chordal-2-generalized split graph.

Proof. If G is a chordal-2-generalized split graph then clearly, G is (2P_3, 2K_3, P_3+K_3)-free chordal. For the converse direction, assume to the contrary that for every clique Q of G, there is a connected component, say Z_Q, of G[V ∖ Q] with at least three vertices; we call such components 3-nontrivial. Let Q_1:={q ∈ Q: q has a neighbor in Z_Q}, and Q_2 := Q ∖ Q_1, i.e., Q_2⓪Z_Q. Thus, Q = Q_1 ∪ Q_2 is a partition of Q. Since G is (2P_3, 2K_3, P_3+K_3)-free and Z_Q is 3-nontrivial, clearly, |Q_2| ≤ 2, and there is no connected component in G[V ∖ (Q_1 ∪ Z_Q)] with at least three vertices. Thus, G[V ∖ (Q_1 ∪ Z_Q)] is a hereditary induced matching.

Let Q be a clique of G whose 3-nontrivial component Z_Q of G[V ∖ Q] is smallest with respect to all cliques in G. Clearly, Z_Q is also the 3-nontrivial component of G[V ∖ Q_1], i.e., Z_{Q_1}=Z_Q. Thus, Q_1 is a clique in G with smallest 3-nontrivial component, and from now on, we can assume that every vertex in Q_1 has a neighbor in Z_{Q_1}. Thus, by Lemma <ref>, there is a universal vertex z ∈ Z_{Q_1} for Q_1, i.e., z①Q_1. This implies that for the clique Q':=Q_1 ∪ {z}, the 3-nontrivial component Z_{Q'} (if there is any for Q') is smaller than the one of Q_1, which is a contradiction. Thus, Theorem <ref> is shown.

Clearly, not every (2P_3, 2K_3, P_3+K_3)-free graph, and even not every (2P_3, 2K_3, P_3+K_3, C_5, C_6, C_7)-free graph, has a 2-good clique, as the following example shows: Let v_1,v_2,v_3,v_4 induce a C_4 with edges v_iv_{i+1} (index arithmetic modulo 4), and let x_1,x_2,x_3 be private neighbors of v_1,v_2,v_3, respectively. Then every clique Q of this graph has at most two vertices, and for none of them is G[V ∖ Q] a hereditary induced matching.
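The non-existence claim in this example can be checked mechanically by enumerating all cliques; a small sketch assuming the networkx library (the vertex names match the example):

import networkx as nx

# C_4 on v1..v4 plus private neighbors x1, x2, x3 of v1, v2, v3.
G = nx.Graph([("v1","v2"), ("v2","v3"), ("v3","v4"), ("v4","v1"),
              ("v1","x1"), ("v2","x2"), ("v3","x3")])

def is_k_good(G, Q, k):
    # Does every component of G - Q have at most k vertices?
    H = G.subgraph(set(G) - set(Q))
    return all(len(c) <= k for c in nx.connected_components(H))

cliques = [()] + [tuple(c) for c in nx.enumerate_all_cliques(G)]
print(any(is_k_good(G, Q, 2) for Q in cliques))  # False: no 2-good clique
print(any(is_k_good(G, Q, 3) for Q in cliques))  # True, e.g. Q = {v1, v2}

The second check shows that, e.g., {v_1,v_2} is a 3-good clique; the obstruction is specifically to k=2, and since the graph contains C_4, the example also shows that chordality cannot be dropped.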
§.§ The general case of h-good cliques in chordal graphs

As usual, for a pair of connected graphs H_1, H_2 with disjoint vertex sets, let H_1+H_2 denote the disjoint union of H_1 and H_2. For any natural number h, let C_h denote the family of connected graphs with h vertices, and let A_h = {X + Y: X, Y ∈ C_h}. In a very similar way as for Theorems <ref> and <ref>, we can show:

For any natural number h, a chordal graph G is A_{h+1}-free if and only if there is a clique Q of G such that every connected component of G[V ∖ Q] has at most h vertices.

Proof. If G is a chordal graph with a clique Q of G such that every connected component of G[V ∖ Q] has at most h vertices, then clearly, G is A_{h+1}-free chordal. For the converse direction, assume to the contrary that for every clique Q of G, there is a connected component, say Z_Q, of G[V ∖ Q] with at least h+1 vertices; we call such components (h+1)-nontrivial. Let Q_1:={q ∈ Q: q has a neighbor in Z_Q}, and Q_2 := Q ∖ Q_1, i.e., Q_2⓪Z_Q. Thus, Q = Q_1 ∪ Q_2 is a partition of Q. Since G is A_{h+1}-free and Z_Q is (h+1)-nontrivial, clearly, |Q_2| ≤ h, and there is no connected component in G[V ∖ (Q_1 ∪ Z_Q)] with at least h+1 vertices. Thus, every connected component of G[V ∖ (Q_1 ∪ Z_Q)] has at most h vertices.

Let Q be a clique of G whose (h+1)-nontrivial component Z_Q of G[V ∖ Q] is smallest with respect to all cliques in G. Clearly, Z_Q is also the (h+1)-nontrivial component of G[V ∖ Q_1], i.e., Z_{Q_1}=Z_Q. Thus, Q_1 is a clique in G with smallest (h+1)-nontrivial component, and from now on, we can assume that every vertex in Q_1 has a neighbor in Z_{Q_1}. Thus, by Lemma <ref>, there is a universal vertex z ∈ Z_{Q_1} for Q_1, i.e., z①Q_1. This implies that for the clique Q':=Q_1 ∪ {z}, the (h+1)-nontrivial component Z_{Q'} (if there is any for Q') is smaller than the one of Q_1, which is a contradiction. Thus, Theorem <ref> is shown.

Concerning the recognition problem, Tyshkevich and Chernyak <cit.> showed that unipolar graphs can be recognized in O(n^3) time. This time bound was slightly improved in <cit.>, and in <cit.>, McDiarmid and Yolov give an O(n^2) time recognition algorithm for unipolar graphs and generalized split graphs. Clearly, for each fixed k, chordal-k-generalized split graphs can be recognized in O(nm) time, since chordal graphs have at most n maximal cliques, and for each of them, say Q, it can be checked in linear time whether the connected components of G[V ∖ Q] have at most k vertices.

§ CHARACTERIZING SPLIT-MATCHING-EXTENDED GRAPHS

G=(V,E) is a split-matching-extended graph if V can be partitioned into a clique Q, an independent vertex set S_Q and the vertex set of an induced matching M_Q in G such that S_Q⓪V(M_Q) and for every edge xy ∈ M_Q, at most one of x,y has neighbors in Q (note that S_Q ∪ V(M_Q) induces a hereditary induced matching in G). Clearly, split-matching-extended graphs are chordal. Thus, split-matching-extended graphs are a special case of chordal-2-generalized split graphs. Clearly, split-matching-extended graphs can be recognized in linear time, since for every edge xy ∈ M_Q, the degree of x or y is 1, and deleting all such vertices of degree 1 leads to a split graph (which can be recognized in linear time).
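This recognition idea can be sketched as follows (assuming networkx and reusing is_split from the earlier sketch; this illustrates the reduction and is not a verified implementation of all boundary cases):

import networkx as nx

def looks_split_matching_extended(G):
    # Delete all vertices of degree 1, then test whether the remainder
    # is a split graph via the degree-sequence criterion is_split above.
    H = G.subgraph([v for v in G if G.degree(v) != 1])
    if len(H) == 0:
        return True
    return is_split([d for _, d in H.degree()])

# Example: a triangle Q = {1,2,3}, an S_Q vertex 4 adjacent to 1, and a
# matching edge {5,6} with 5 adjacent to 2 (so vertex 6 has degree 1):
G = nx.Graph([(1,2), (2,3), (1,3), (1,4), (2,5), (5,6)])
print(looks_split_matching_extended(G))  # True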
Various algorithmic problems such as Hamiltonian Circuit and Minimum Domination are NP-complete for split graphs. Efficient Domination, however, is solvable in polynomial time (even in linear time <cit.>) for split graphs, while it is NP-complete for split-matching-extended graphs (see e.g. <cit.>). As mentioned in <cit.>, the graph G_H in the reduction of Exact Cover by 3-Sets (X3C) to Efficient Domination is (2P_3, K_3+P_3, 2K_3, butterfly, extended butterfly, extended co-P, extended chair, double-gem)-free chordal (and clearly, a special kind of unipolar graph). Clearly, 2P_3-freeness implies C_k-freeness for each k ≥ 8.

G is a split-matching-extended graph if and only if G is (C_4, C_5, C_6, C_7, 2P_3, K_3+P_3, 2K_3, butterfly, extended butterfly, extended co-P, extended chair, double-gem)-free.

Proof. Clearly, if G is a split-matching-extended graph then G is (C_4, C_5, C_6, C_7, 2P_3, K_3+P_3, 2K_3, butterfly, extended butterfly, extended co-P, extended chair, double-gem)-free.

For the other direction, assume that G is (C_4, C_5, C_6, C_7, 2P_3, K_3+P_3, 2K_3, butterfly, extended butterfly, extended co-P, extended chair, double-gem)-free. Then by Theorem <ref>, G has a 2-good clique Q such that G[V ∖ Q] is a hereditary induced matching, say with independent vertex set S_Q={s_1,…,s_k} and induced matching M_Q={x_1y_1,…,x_ℓy_ℓ}. Let Q be a maximum 2-good clique of G.

Claim <ref>. For every edge xy ∈ E with x,y ∈ N(Q), N_Q(x) ⊆ N_Q(y) or vice versa, and correspondingly for the non-neighborhoods, N̅_Q(y) ⊆ N̅_Q(x) or vice versa; in particular, x and y have a common neighbor and a common non-neighbor in Q.

Proof. The claim obviously holds since G is chordal and Q is maximum. ♢

Claim <ref>. For every edge xy ∈ E with x,y ∈ N(Q), x and y have exactly one common non-neighbor in Q.

Proof. Recall that by Claim <ref>, x and y have a common neighbor, say q_xy ∈ Q, and suppose to the contrary that x and y have two common non-neighbors, say q_1,q_2 ∈ Q, q_1 ≠ q_2. Then x,y,q_xy,q_1,q_2 induce a butterfly in G, which is a contradiction. Thus, Claim <ref> is shown. ♢

Claim <ref>. For every edge xy ∈ E with x,y ∈ N(Q), at least one of x and y has at least two non-neighbors in Q. Moreover, if N_Q(x) ⊆ N_Q(y) then y has exactly one non-neighbor in Q, and thus N_Q(x) ⊂ N_Q(y).

Proof. If both x and y had exactly one non-neighbor, say q_x,q_y ∈ Q with xq_x ∉ E and yq_y ∉ E, then by Claim <ref>, q_x=q_y, and now (Q ∖ {q_x}) ∪ {x,y} would be a larger clique in G, but Q is assumed to be a maximum clique in G. Thus, at least one of x,y has at least two non-neighbors in Q. By Claim <ref>, we have the corresponding inclusions of neighborhoods and non-neighborhoods in Q. If N_Q(x) ⊆ N_Q(y) and y had two non-neighbors in Q, then x would also have these non-neighbors, which contradicts Claim <ref>. ♢

Claim <ref>. G[N(Q)] is 2K_2-free.

Proof. Suppose to the contrary that G[N(Q)] contains a 2K_2, say on x,y,x',y' with xy ∈ E and x'y' ∈ E. Let z ∈ Q be a common neighbor of x,y, and let z' ∈ Q be a common neighbor of x',y'. If z=z' then x,y,x',y',z induce a butterfly. Thus, z ≠ z' for any common neighbors z of x,y and z' of x',y'. If x',y' miss z and x,y miss z' then x,y,x',y',z,z' induce an extended butterfly. If x',y' miss z but exactly one of x,y, say x, sees z', then x,z,x',y',z' induce a butterfly. If exactly one of x,y, say x, sees z', and exactly one of x',y', say x', sees z, then x,y,z,z',x',y' induce a double-gem. Thus, in any case we get a contradiction, and Claim <ref> is shown. ♢

Thus, for at most one of the edges x_iy_i ∈ M_Q, both x_i and y_i have a neighbor in Q; without loss of generality, assume that x_1 and y_1 have a neighbor in Q, and for all other edges x_jy_j ∈ M_Q, j ≥ 2, y_j⓪Q. By Claim <ref>, let without loss of generality N_Q(x_1) ⊆ N_Q(y_1), and let u ∈ Q be a common neighbor of x_1 and y_1. By Claim <ref>, y_1 has exactly one non-neighbor, say z, in Q (which also misses x_1). Since G is extended-co-P-free and butterfly-free, and since, by definition, z⓪{y_1,…,y_ℓ}, we have z⓪{x_1,…,x_ℓ}, and since G is extended-chair-free and butterfly-free, z has at most one neighbor in S_Q. Let Q':= (Q ∖ {z}) ∪ {y_1}; clearly, Q' is a clique in G.
Then, with respect to Q', G is a split-matching-extended graph, and thus, Theorem <ref> is shown.

Note added in proof. After writing this manuscript, we learnt that the results of this manuscript are actually not new; they follow from previous papers of Gagarin <cit.> and of Zverovich <cit.>. Sorry for that!

References

[BraMos2016] A. Brandstädt and R. Mosca, Weighted efficient domination for P_5-free and P_6-free graphs, extended abstract in: Proceedings of WG 2016, P. Heggernes, ed., LNCS 9941, pp. 38-49, 2016. Full version: SIAM J. Discrete Math. 30, 4 (2016) 2288-2303.
[BraMos2017] A. Brandstädt and R. Mosca, On efficient domination for some classes of H-free chordal graphs, CoRR arXiv:1701.03414, 2017.
[EkiHelStadeW2008] T. Ekim, P. Hell, J. Stacho, and D. de Werra, Polarity of chordal graphs, Discrete Applied Mathematics 156 (2008) 2469-2479.
[EscWan2014] E.M. Eschen and X. Wang, Algorithms for unipolar and generalized split graphs, Discrete Applied Mathematics 162 (2014) 195-201.
[FoeHam1977] S. Földes and P.L. Hammer, Split graphs, Congressus Numerantium 19 (1977) 311-315.
[Gagar1999] A.V. Gagarin, Chordal (1,b)-polar graphs (in Russian), Vestsi Nats. Akad. Navuk Belarusi Ser. Fiz.-Mat. Navuk 4 (1999) 115-118.
[McDYol2016] C. McDiarmid and N. Yolov, Recognition of unipolar and generalised split graphs, CoRR arXiv:1604.00922 (2016).
[ProSte1992] H.J. Prömel and A. Steger, Almost all Berge graphs are perfect, Combinatorics, Probability and Computing 1 (1992) 53-79.
[TysChe1985] R.I. Tyshkevich and A.A. Chernyak, Algorithms for the canonical decomposition of a graph and recognizing polarity, Izvestia Akademii Nauk BSSR, ser. Fiz.-Mat. Nauk 6 (1985) 16-23 (in Russian).
[Zvero2006] I.E. Zverovich, Satgraphs and independent domination. Part 1, Theoretical Computer Science 352 (2006) 47-56.
Trace of the Twisted Heisenberg Category

Can Ozan Oğuz
Department of Mathematics, University of Southern California, Los Angeles, CA
coguz@usc.edu

Michael Reeks
Department of Mathematics, University of Virginia, Charlottesville, VA
mar3nf@virginia.edu

December 30, 2023

We show that the trace decategorification, or zeroth Hochschild homology, of the twisted Heisenberg category defined by Cautis and Sussan is isomorphic to a quotient of W^-, a subalgebra of W_{1+∞} defined by Kac, Wang, and Yan. Our result is a twisted analogue of that by Cautis, Lauda, Licata, and Sussan relating W_{1+∞} and the trace decategorification of the Heisenberg category.

§ INTRODUCTION

Categorification is the process of enriching an algebraic object by increasing its categorical dimension by one, e.g. passing from a set to a category or from a 1-category to a 2-category. The original object can be recovered through the inverse process of decategorification. The most commonly used decategorification functor is the split Grothendieck group K_0, but it is natural to ask whether alternative decategorification functors may give additional insight into the categorified object. One such alternative, advocated in <cit.>, is the trace decategorification, which often encodes more information than K_0. The trace, or zeroth Hochschild homology, of a ℂ-linear additive category 𝒞 is the ℂ-vector space given by

Tr(𝒞) := (⊕_{x ∈ Ob(𝒞)} End_𝒞(x)) / span_ℂ{fg - gf},

where f and g run through all pairs of morphisms f:x → y and g:y → x with x,y ∈ Ob(𝒞). If a ℂ-linear category 𝒞 carries a monoidal structure, then span_ℂ{fg - gf} is an ideal, and Tr(𝒞) becomes an algebra, where multiplication in the trace is induced from the tensor product of 𝒞.

The trace has the advantage that it is, unlike K_0, invariant under passage to the Karoubi envelope, cf. <cit.>. Since passing to the Karoubi envelope often prevents one from working with diagrams, the trace seems to be a more suitable option to decategorify diagrammatic categories.

The traces of several interesting categories have been computed. In <cit.> and <cit.>, the trace of any categorified type ADE quantum group is shown to be isomorphic to a current algebra. In <cit.>, traces of quiver Hecke algebras are studied. In <cit.>, the trace of the Hecke category is shown to be a semidirect product of the Weyl group and a polynomial algebra. A unifying approach to Heisenberg categorifications was given in <cit.> via Frobenius algebras; in <cit.>, the degree zero part of the trace of these categories is computed.

The trace Tr(𝒞) is closely related to K_0(𝒞) through the Chern character map

h_𝒞: K_0(𝒞) → Tr(𝒞),

which sends the isomorphism class of an object to the class of its identity morphism in the trace. Interestingly, the map h_𝒞 is usually injective, but is often not surjective. Thus, the trace often contains additional structure which has no analogue in the Grothendieck group.

One interesting example in which h_𝒞 fails to be surjective is given by the Heisenberg category ℋ defined in <cit.>. It is a ℂ-linear additive monoidal category; therefore Tr(ℋ) carries an algebra structure.
There is an injective algebra homomorphism from the Heisenberg algebra 𝔥 to K_0(ℋ) (they are conjecturally isomorphic). In <cit.>, Tr(ℋ) is shown to be isomorphic to a quotient of W_{1+∞}, a filtered algebra which is important in conformal field theory. In particular, this quotient properly contains 𝔥 in filtration degree zero. Hence Tr(ℋ) likely contains more information than K_0(ℋ). This fits into a larger framework, studied in <cit.>, involving the elliptic Hall algebra.

We study a twisted version of Khovanov's Heisenberg category. The twisted Heisenberg algebra 𝔥_tw is a unital associative algebra generated by h_{m/2} for m ∈ 2ℤ+1, subject to the relations

[h_{n/2}, h_{m/2}] = (n/2) δ_{n,-m} for n,m ∈ 2ℤ+1.

In <cit.>, a twisted version of the Heisenberg category, denoted ℋ_tw, is introduced. It is also a ℂ-linear additive monoidal category, with an additional ℤ/2ℤ-grading. It is proved that K_0(ℋ_tw) contains 𝔥_tw (again, they are conjecturally isomorphic).

The goal of this paper is to study the trace Tr(ℋ_tw) and determine additional structure analogous to that in the untwisted version. We show that the even part of Tr(ℋ_tw) is isomorphic as an algebra to a quotient of a subalgebra of W_{1+∞} that we will denote by W^-. We give explicit descriptions of W_{1+∞} and W^- in Section <ref>. This confirms the expectation in <cit.> that there should be a relationship between ℋ_tw and one of two subalgebras of W_{1+∞} defined in <cit.>.

There is an algebra isomorphism

Tr(ℋ_tw)_0 → W^-/⟨ w_{0,0}, C-1 ⟩.

Even though the isomorphism between K_0(ℋ_tw) and the twisted Heisenberg algebra 𝔥_tw is still conjectural, we are able to completely characterize Tr(ℋ_tw)_0.

To prove Theorem <ref>, we first compute sets of algebra generators and relations for both W^- and Tr(ℋ_tw)_0, adapting arguments used in <cit.> to accommodate the new supercommutative elements arising from the twisting (cf. Section <ref>). We then study actions of each algebra on its canonical level one Fock space representation. These Fock space representations are isomorphic, and so induce a linear map Φ: Tr(ℋ_tw)_0 → W^-. We prove that Φ is an algebra homomorphism by studying the actions of both W^- and Tr(ℋ_tw)_0 on their Heisenberg subalgebras. Finally, we check that the actions of the generators are identified under Φ, and deduce that Φ is an algebra isomorphism.

An important tool in studying the connection between these algebras is the relationship between Tr(ℋ_tw) and the degenerate affine Hecke-Clifford algebra H^c_n of type A_{n-1}. The trace of H^c_n as a vector space was computed by the second author in <cit.>. The algebra Tr(ℋ_tw) admits a triangular decomposition, in which Tr(H^c_n) is identified with the upper (respectively lower) half. This identification simplifies some of the computations and the calculation of the graded rank of Tr(ℋ_tw).

The structure of the paper is as follows. In Section 2, we describe the W-algebra W^- of interest, describe its gradings and a set of generators, and study its Fock space representation. In Section 3, we describe trace decategorification in more detail and present the twisted Heisenberg category studied in <cit.>, as well as its gradings. We also identify a copy of the degenerate affine Hecke-Clifford algebra within the trace. In Section 4, we study a subalgebra of Tr(ℋ_tw) consisting of circular diagrams called bubbles, and describe how they interact with other elements of the trace. Section 5 contains a number of calculations of diagrammatic relations in the trace that are useful for computing a generating set of Tr(ℋ_tw). Finally, in Section 6, we describe a triangular decomposition of the trace, and then establish a generating set.
This allows us to prove the desired isomorphism by using the action of each algebra on its Fock space.

Acknowledgements. The authors thank Andrea Appel, Victor Kleen, Aaron Lauda, Joshua Sussan, and Weiqiang Wang for helpful discussions and advice concerning the paper; Maxwell Siegel for his notation suggestions; and Joshua Sussan for suggesting this project. The first author was partially supported by the NSF grant DMS-1255334. The second author was partially supported by a GRA fellowship in Weiqiang Wang's NSF grant and by a GAANN fellowship.

§ W-ALGEBRA

In this section, we recall the W-algebra we are interested in, its structure as a ℤ-graded and ℕ-filtered algebra, and one of its subalgebras – the twisted Heisenberg algebra – as well as their Fock space representations.

§.§ Twisted Heisenberg algebra 𝔥_tw

We recall the definition of the twisted Heisenberg algebra. The twisted Heisenberg algebra 𝔥_tw is a unital associative algebra generated by h_n for n ∈ ℤ+1/2, subject to the relation [h_n,h_m] = n δ_{n,-m}.

§.§ W-algebra W^-

Let 𝒟 denote the Lie algebra of differential operators on the circle. The central extension 𝒟̂ of 𝒟 is described in <cit.>. It is generated by C and by w_{k,l} = t^k D^l for k ∈ ℤ and l ∈ ℤ_{≥0}, where t is a variable over ℂ and D = t d/dt, subject to the relations that C and w_{0,0} are central, and

[t^r f(D), t^s g(D)] = t^{r+s}(f(D+s)g(D) - f(D)g(D+r)) + ψ(t^r f(D), t^s g(D)) C,

where

ψ(t^r f(D), t^s g(D)) = ∑_{-r ≤ j ≤ -1} f(j)g(j+r) if r = -s ≥ 0, and 0 if r+s ≠ 0,

for polynomials f,g (for r = -s < 0, ψ is determined by antisymmetry). The W-algebra W_{1+∞} is the universal enveloping algebra of 𝒟̂. It is shown in <cit.> that the trace of Khovanov's Heisenberg category is isomorphic to a quotient of W_{1+∞}. In this paper, we are interested in the universal enveloping algebra of a central extension of a Lie subalgebra of 𝒟 fixed by a degree preserving anti-involution. Define σ: 𝒟 → 𝒟 to be the anti-involution determined by

σ(t) = -t, σ(D) = -D, so that σ(t^k f(D)) = (-1)^k t^k f(-D-k).

This is a degree preserving anti-involution of 𝒟, and the Lie subalgebra fixed by -σ is

𝒟^- := {a ∈ 𝒟 | σ(a) = -a}.

Let 𝒟̂^- be the central extension of 𝒟^- whose 2-cocycle is the restriction of the 2-cocycle ψ given above. Therefore 𝒟̂^- is a Lie subalgebra of 𝒟̂. More explicitly, 𝒟̂^- is the Lie algebra on the vector space spanned by

{C} ∪ {t^{2k-1} g(D+(2k-1)/2) : g even} ∪ {t^{2k} f(D+k) : f odd},

where k ∈ ℤ and even and odd refer to even and odd polynomial functions. Its Lie bracket is given by Equation (<ref>). Denote by W^- the universal enveloping algebra of 𝒟̂^-. Our main result relates the trace of the twisted Heisenberg category to a quotient of W^-.

[The original figure here is a diagram of inclusions: 𝒟^- is the subalgebra of 𝒟 fixed by -σ; taking central extensions gives 𝒟̂^- ⊂ 𝒟̂, and taking enveloping algebras gives W^- ⊂ W_{1+∞}.]

Note that not all w_{k,ℓ} are contained in W^-. When k-ℓ is an even integer, w_{k,ℓ} ∉ W^-. Moreover, the difference k-ℓ being odd is not sufficient. For example, t^2 D = w_{2,1} ∉ W^-, since an element starting with t^2 should be followed by f(D+1) where f is an odd polynomial function. Hence t^2 D = w_{2,1} ∉ W^-, but t^2(D+1) = t^2 D + t^2 = w_{2,1} + w_{2,0} ∈ W^- (and, indeed, σ(t^2(D+1)) = t^2(-D-1) = -t^2(D+1)).
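The bracket (<ref>) and the cocycle ψ are easy to experiment with on a computer. The following minimal sketch (assuming SymPy; the encoding of single terms t^r f(D) as pairs (r, f) is our own device) checks, for example, that [w_{1,0}, w_{-1,0}] = C, and reproduces an identity used in the proof of Lemma <ref> below.

import sympy as sp

D = sp.symbols('D')

def psi(r, f, s, g):
    # 2-cocycle psi(t^r f(D), t^s g(D)); the case r = -s < 0 is handled
    # by antisymmetry, which the displayed formula leaves implicit.
    if r + s != 0:
        return sp.Integer(0)
    if r >= 0:
        return sum((f.subs(D, j) * g.subs(D, j + r) for j in range(-r, 0)),
                   sp.Integer(0))
    return -psi(s, g, r, f)

def bracket(a, b):
    # a = (r, f) represents t^r f(D); returns ({t-power: polynomial}, C-coeff).
    (r, f), (s, g) = a, b
    poly = sp.expand(f.subs(D, D + s) * g - f * g.subs(D, D + r))
    return {r + s: poly}, psi(r, f, s, g)

print(bracket((1, sp.Integer(1)), (-1, sp.Integer(1))))
# ({0: 0}, 1), i.e. [w_{1,0}, w_{-1,0}] = C
print(bracket((-2, D - 1), (1, sp.Integer(1))))
# ({-1: 1}, 0), i.e. [w_{-2,1} - w_{-2,0}, w_{1,0}] = w_{-1,0}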
§.§ Gradings on W^-

There is a natural ℤ_{≥0}-filtration of W^- called the differential filtration, with w_{k,ℓ} in degree ℓ; denote this filtration by |·|_dot. It is convenient to define an additional filtration: the difference filtration, where w_{k,ℓ} is in degree ℓ-k, denoted |·|_diff. That this is a filtration follows from the fact that W^- also carries a filtration with w_{k,ℓ} in degree k.

These filtrations are compatible, so we have a (ℤ×ℤ_{≥0})-filtration with an element f = t^j g(D-j/2) ∈ W^- in bidegree ≤ (|f|_diff, |f|_dot) = (deg(g)-j, deg(g)), where deg(g) is the polynomial degree of g(w) ∈ ℂ[w]. Define the following subalgebras of W^-:

W^{-,>} = ℂ⟨ t^j g(D-j/2) | deg(g)-j ≥ 1 ⟩; W^{-,<} = ℂ⟨ t^j g(D-j/2) | deg(g)-j ≤ -1 ⟩; W^{-,0} = ℂ⟨ g(D) | g odd ⟩.

Let W^{-,ω}[≤ r, ≤ k] denote the set of elements in difference degree ≤ r and differential degree ≤ k, with ω ∈ {>,<,0}. Denote by 𝒲^- the associated graded object with respect to this filtration. Hence 𝒲^- is (ℤ×ℤ_{≥0})-graded with |w_{k,ℓ}| = (ℓ-k, ℓ). For ω ∈ {>,<,0}, define a generating series for the graded dimension of (𝒲^-)^ω by

P_{(𝒲^-)^ω}(t,q) = ∑_{r∈ℤ} ∑_{k∈ℤ, k≥0} dim (𝒲^-)^ω[r,k] t^r q^k.

The graded dimensions of (𝒲^-)^> and (𝒲^-)^< are given by:

P_{(𝒲^-)^>} = ∏_{r≥0} ∏_{k>0} 1/(1-t^{2r+1} q^k); P_{(𝒲^-)^<} = ∏_{r≤0} ∏_{k>0} 1/(1-t^{2r-1} q^k).

Proof. The algebra W^- is generated by elements of the form t^j g(D-j/2), where deg(g)-j is odd. Hence (𝒲^-)^> is freely generated by elements w_{k,ℓ} with k-ℓ odd; such elements have bidegree (ℓ-k, ℓ). The proposition follows.

Let W^-_{r,s} denote the rank r, differential filtration s part of W^-. It is easy to see that the differential filtration zero part of W^-, namely ⋃_{r∈ℤ} W^-_{r,0}, is spanned as a vector space by {C} ∪ {t^{2n+1}}_{n∈ℤ}. As an algebra, we have

[t^{2n+1}, t^{2m+1}] = (2n+1) δ_{n+m+1,0} C.

Hence we have an isomorphism between the differential filtration zero part of W^- and the twisted Heisenberg algebra 𝔥_tw, given by

ϕ: 𝔥_tw → ⋃_{r∈ℤ} W^-_{r,0}, h_{(2n+1)/2} ↦ (1/√2) t^{2n+1},

where n ∈ ℤ.
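Using bracket from the previous sketch, one can confirm this commutation relation and the normalization 1/√2 for small indices (same assumptions as before; C is taken to act by 1):

# Check [t^(2n+1), t^(2m+1)] = (2n+1) * delta_{n+m+1,0} * C for small n, m.
# With h_{(2n+1)/2} := t^(2n+1)/sqrt(2), this gives the h_tw relation
# [h_a, h_b] = a * delta_{a,-b} with a = (2n+1)/2, once C acts by 1.
one = sp.Integer(1)
for n in range(-3, 3):
    for m in range(-3, 3):
        terms, c = bracket((2*n + 1, one), (2*m + 1, one))
        assert all(sp.expand(p) == 0 for p in terms.values())
        assert c == ((2*n + 1) if n + m + 1 == 0 else 0)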
§.§ Generators of the algebra W^-

The following lemma describes a generating set for W^- as an algebra.

The algebra W^-/⟨ w_{0,0}, C ⟩ is generated by w_{1,0}, w_{0,3}, and w_{±2,1} ± w_{±2,0}.

Proof. Let t^k g(D-k/2) be an arbitrary element of W^-. Without loss of generality, we may assume g is a monic monomial of the form g(w) = w^ℓ with ℓ-k odd, since lower terms in g are just monomials of this form with lower degree, and thus can be generated separately. Therefore, we have

t^k g(D-k/2) = ∑_{i=0}^{ℓ} binom{ℓ}{i} (-1)^{ℓ-i} (k/2)^{ℓ-i} t^k D^i.

The leading term of this element with respect to differential degree is t^k D^ℓ. We will generate the leading term first, and address lower terms afterwards. There are two cases, depending on the parities of k and ℓ.

First, suppose that k=2n is even and ℓ=2m+1 is odd (recall that k and ℓ must have opposite parity in W^-). Hence, we must generate w_{±2n,2m+1}. The following calculations are easy, using Formula (<ref>):

[w_{-2,1} - w_{-2,0}, w_{1,0}] = w_{-1,0},
[w_{1,0}, w_{0,3}] = -3(w_{1,2} + w_{1,1}) - w_{1,0},
[w_{1,2b}, w_{0,3}] = -3 w_{1,2b+2} + O(w_{1,2b+1}),

where O(ω) refers to terms with lower differential degree than ω. Hence, starting with w_{1,2} - w_{1,1}, we can use Equation (<ref>) above to generate w_{1,2b} for any b. Now we have:

[w_{±2a,1}, w_{1,0}] = w_{±2a+1,0},
[w_{2a+1,0}, w_{1,2} - w_{1,1}] = -(4a+2) w_{2a+2,1} - (2a+1)(2a+2) w_{2a+2,0}.

Thus, starting from w_{2,1} + w_{2,0}, we can generate w_{2a,1} for any a. Finally, we have:

[w_{-1,0}, w_{1,2b}] = ∑_{i=0}^{2b-1} binom{2b}{i} (-1)^{2b-i+1} w_{0,i} = w_{0,2b-1} + O(w_{0,2b-2}),
[w_{±2a,1}, w_{0,2b-1}] = -∑_{i=0}^{2b-2} binom{2b-1}{i} (±1)^{2b-i} 2^{2b-2-i} t^{±2a} D^{i+1} = w_{±2a,2b-1} + O(w_{±2a,2b-2}).

So, we can generate a polynomial with leading term w_{±2n,2m+1}.

Next, suppose that k = 2n+1 is odd and positive and ℓ = 2m is even. Using Formula (<ref>), we have:

[w_{2a+1,0}, w_{0,2b+1}] = t^{2a+1} ∑_{i=0}^{2b} binom{2b+1}{i} (2a+1)^{2b+1-i} D^i = w_{2a+1,2b} + O(w_{2a+1,2b-1}).

Now Equations (<ref>) and (<ref>) give that we can generate w_{2a+1,0} and w_{0,2b+1}. Hence we can generate a polynomial with leading term w_{2a+1,2b}.

Finally, assume that k = -(2n+1) is odd and ℓ = 2m is even. Using Formula (<ref>), we have:

[w_{-2a,1}, w_{1,0}] = w_{1-2a,0}.

By Equation (<ref>), we can therefore generate w_{-(2a+1),0} for any a. Next, note that:

[w_{-1,0}, w_{1,2b}] = -∑_{i=0}^{2b-1} binom{2b-1}{i} (-1)^{2b-1-i} D^i = w_{0,2b-1} + O(w_{0,2b-2}).

By Equation (<ref>), we can generate w_{0,2b+1} for any b. Finally, we have

[w_{-(2a+1),0}, w_{0,2b-1}] = t^{-(2a+1)} ∑_{i=0}^{2b-2} binom{2b-1}{i} (-1)^{2b-i} (2a+1)^{2b-1-i} D^i = w_{-(2a+1),2b-2} + O(w_{-(2a+1),2b-3}).

Thus, we can generate a polynomial with leading term w_{-(2n+1),2m}.

It remains to adjust the lower terms of these equations so that they match those in Equation (<ref>). But note that each equation used above to generate the leading term results in lower terms which lie in different filtrations of W^-. Therefore we can adjust the coefficients of lower terms by scaling individual equations above. Since there is no dependency between these equations, we can choose constant coefficients for the generators so that our generated polynomial has the correct lower terms.
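The first two displayed identities of this proof can be confirmed with the bracket sketch given earlier (same assumptions):

D = sp.symbols('D')
# [w_{-2,1} - w_{-2,0}, w_{1,0}] = w_{-1,0}:
terms, c = bracket((-2, D - 1), (1, sp.Integer(1)))
assert sp.expand(terms[-1] - 1) == 0 and c == 0
# [w_{1,0}, w_{0,3}] = -3 w_{1,2} - 3 w_{1,1} - w_{1,0}:
terms, c = bracket((1, sp.Integer(1)), (0, D**3))
assert sp.expand(terms[1] + 3*D**2 + 3*D + 1) == 0 and c == 0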
§.§ Fock space representation of W^-

The algebra W^- inherits a Fock space representation from W_{1+∞}. Let W^{-,≥} = W^{-,0} ⊕ W^{-,>}. For parameters c,d ∈ ℂ, let ℂ_{c,d} be a one-dimensional module for W^{-,≥} on which each w_{k,ℓ} with (k,ℓ) ≠ (0,0) acts as zero, C acts as c, and w_{0,0} acts as d. Let ℳ_{c,d} := Ind_{W^{-,≥}}^{W^-} ℂ_{c,d}. This induced module possesses the following properties:

<cit.> The W^--module ℳ_{c,d} has a unique irreducible quotient 𝒱_{c,d}, which is isomorphic as a vector space to ℂ[w_{-1,0}, w_{-2,0}, w_{-3,0}, …].

<cit.> The action of W^-/(C-1, w_{0,0}) is faithful on 𝒱_{1,0}.

This follows immediately from the argument in <cit.> for W_{1+∞}, because W^- is a subalgebra.

Proposition <ref> allows us to compute the action of the generators on 𝒱_{1,0}, which we record for convenience below. Let k be a positive integer. The generators of W^- act on 𝒱_{1,0} as follows:

[w_{1,0}, w_{-k,0}] = δ_{1,k},
[w_{-2,1} - w_{-2,0}, w_{-k,0}] = (k+2) w_{-(k+2),0},
[w_{2,1} + w_{2,0}, w_{-k,0}] = -(k+2) w_{2-k,0},
[w_{0,3}, w_{-k,0}] = 3k w_{-k,2} - 3k^2 w_{-k,1} + k^3 w_{-k,0}.

§ TWISTED HEISENBERG CATEGORY

We will now describe the main object of interest in the paper, the twisted Heisenberg category ℋ_tw. After defining the category, we recall the trace decategorification functor and some of its properties. We then describe some filtrations of Tr(ℋ_tw), identify a copy of the degenerate affine Hecke-Clifford algebra H^c_n, and describe the trace of H^c_n. Finally, we identify a set of distinguished elements in Tr(ℋ_tw) which generate the nonzero filtration degrees of the algebra.

§.§ Definition of ℋ_tw

The twisted Heisenberg category ℋ^t is defined in <cit.> as the Karoubi envelope of a ℂ-linear, ℤ/2ℤ-graded additive monoidal category whose morphisms are described diagrammatically. There is an injective algebra homomorphism from 𝔥_tw to the split Grothendieck group K_0(ℋ^t) of the twisted Heisenberg category. As in the untwisted case, this map is conjecturally surjective.

The object of our main interest is the trace decategorification, or zeroth Hochschild homology, of ℋ^t. It is shown in <cit.> that the trace of an additive category is isomorphic to the trace of its Karoubi envelope. Therefore, we can work with the non-idempotent-completed version of ℋ^t. We will denote it by ℋ_tw. Focusing our attention on ℋ_tw allows us to work with the diagrammatics introduced in <cit.>.

The category ℋ_tw is the ℂ-linear, ℤ/2ℤ-graded monoidal additive category whose objects are generated by P and Q. A generic object is a sequence of P's and Q's. The morphisms of ℋ_tw are generated by oriented planar diagrams up to boundary-fixing isotopies, with generators a hollow dot on an upward strand, a hollow dot on a downward strand, an upward crossing, two cups, and two caps. [In the original, the generators and the relations below are drawn as string diagrams; the figures did not survive extraction, so we summarize them in words.] The first two generators correspond to maps P → P{1} and Q → Q{1}, where {1} denotes the ℤ/2ℤ-grading shift; they have degree one, and the other five generators have degree zero. The identity morphisms of P and Q are indicated by an undecorated upward and downward pointing arrow, respectively.

These generators satisfy the following local relations (summarized from the original diagrams): upward crossings satisfy the symmetric group relations, i.e., the double crossing of two upward strands is the identity and the braid relation holds; the double crossing of an upward with a downward strand on one side is the identity, while on the other side the double crossing of a downward strand past an upward strand equals the identity minus two bubble correction terms, one undecorated and one carrying two hollow dots; the counterclockwise circle equals 1, the counterclockwise circle carrying a single hollow dot equals 0, and the left curl equals 0; hollow dots slide freely through crossings; a hollow dot slides through a cup or cap, acquiring a sign for one of the two orientations; two hollow dots on an upward strand equal the identity, while two hollow dots on a downward strand equal minus the identity; and interchanging the relative heights of hollow dots on different strands introduces a sign.

Also, if we let a solid dot on an upward strand denote the right curl, we get the following relations (again summarized from the original diagrams): a solid dot and a hollow dot on the same strand anticommute; solid dots on different strands commute, as do a solid dot and a hollow dot on different strands; and a solid dot slides through a crossing at the cost of two correction terms, the identity and, up to sign, the identity decorated with one hollow dot on each strand (these are the relations of the degenerate affine Hecke-Clifford algebra recalled in Section <ref> below).

If x and y are morphisms, the diagram corresponding to x ⊗ y is obtained by placing the diagram of y to the right of the diagram of x. Since the relative positions of the hollow dots are important, we will work with the convention that the hollow dots in the diagram of y are placed below the height of the hollow dots in the diagram of x.

§.§ Trace decategorification

In <cit.>, the trace or zeroth Hochschild homology of a ℂ-linear additive category 𝒞 is proposed as an alternative decategorification functor. Here we recall its definition and point out one subtlety occurring in our case due to the supercommutative nature of hollow dots and solid dots.

Let 𝒞 be a ℂ-linear additive category. Then its trace decategorification, denoted Tr(𝒞), is defined as follows:

Tr(𝒞) ≃ (⊕_{x ∈ Ob(𝒞)} End(x)) / ℐ,

where ℐ is the ideal generated by span_ℂ{fg - gf} for all f:x → y and g:y → x, x,y ∈ Ob(𝒞).
Note that here we quotient out by an ideal, so Tr(𝒞) has an algebra structure.

Trace decategorification has a nice diagrammatic interpretation, in which we consider our string diagrams to be drawn on an annulus instead of a plane. The annulus recaptures the trace relation fg = gf diagrammatically, since we can slide f or g around the annulus to change their composition order. [The original figure shows two boxes f and g stacked on a strand wrapping the annulus; sliding one box around the annulus interchanges their order.]

As described in Section <ref>, ℋ_tw has a ℤ/2ℤ-grading in which the hollow dots on upward and downward strands have degree one, and the other generating diagrams have degree zero. We also have supercommutativity relations (<ref>) and (<ref>) and supercyclicity relations (<ref>) and (<ref>). These relations have several interesting diagrammatic consequences.

Working with relation (<ref>), we have the following computation: an upward strand wrapping the annulus and carrying two hollow dots is equal, by sending one hollow dot around the annulus using the trace relation, to the same diagram with the dots interchanged, which by relation (<ref>) equals minus the original diagram. Therefore this diagram is equal to zero in the trace.

To demonstrate the subtlety with the supercyclicity relations (<ref>) and (<ref>), consider the following situation: an upward strand wrapping the annulus carries a single hollow dot; sliding the dot around the annulus returns it to its original position, but multiplies the diagram by -1. If we denote a hollow dot on an upward strand by f, then with the usual trace relation we would get f ∘ id = id ∘ f. However, in this case we gain an extra negative sign from the supercyclicity relations. So, we must replace the usual trace relation fg = gf with the supertrace relation fg = (-1)^{|f||g|} gf in the ideal ℐ, where |f|,|g| are the degrees of f and g with respect to the ℤ/2ℤ-grading.

This example can be generalized to show that the composition of an odd morphism with a cycle of odd length is zero in the supertrace, since such a diagram is equal to its negative once a hollow dot travels around the annulus and arrives at its original position.

We wish to restrict our study to the following subalgebra of the trace. The even trace of ℋ_tw is defined by

Tr(ℋ_tw)_0 ≃ (⊕_{x ∈ Ob(ℋ_tw)} End_0(x)) / ℐ_0,

where End_0(x) consists of even degree endomorphisms and ℐ_0 is its ideal generated by span_ℂ{fg - gf} for all f:x → y and g:y → x, x,y ∈ Ob(ℋ_tw).

This is the restriction of the trace to the even part (with respect to the ℤ/2ℤ-grading induced by deg(c_i) = 1). The odd part of the trace is not zero (it contains, e.g., a single hollow dot on an upward strand), but it is not interesting from a representation-theoretic viewpoint, as explained above. The example of trace functions on the finite Hecke-Clifford algebra in <cit.> demonstrates the importance of the even trace.

Wan and Wang study the space of trace functions on the finite Hecke-Clifford algebra ℋ_n: linear functions ϕ: ℋ_n → ℂ such that ϕ([h,h'])=0 for all h,h' ∈ ℋ_n, and ϕ(h)=0 for all h ∈ (ℋ_n)_1. This latter requirement encodes the information that odd elements act with zero trace on any ℤ_2-graded ℋ_n-module (because multiplication by an odd element results in a shift in degree). The space of such trace functions is clearly canonically isomorphic to the dual of the even cocenter, rather than of the full cocenter. The same observation holds for the trace of the affine Hecke-Clifford algebra, as studied in <cit.>.

We will see in Section 4 that the structure of Tr(ℋ_tw) is largely controlled by the even trace of the degenerate affine Hecke-Clifford algebra in type A; we therefore do not lose interesting representation-theoretic information by restricting to Tr(ℋ_tw)_0, and we greatly simplify our calculations by doing so. For instance, the ambiguity in the supercyclicity relations identified in Example 3.<ref> does not interfere with calculations in Tr(ℋ_tw)_0.

Since ℐ_0 is an ideal of ⊕_{x ∈ Ob(ℋ_tw)} End_0(x), the compositions fg and gf must be even morphisms, even though individually f and g may be odd morphisms. This situation is analogous to the even cocenter of the degenerate affine Hecke-Clifford algebra studied in <cit.>, where the Clifford generators c_i do not appear individually (as they are odd generators), but still have an impact on the cocenter via the relation c_i^2 = -1.
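The supertrace relation fg = (-1)^{|f||g|} gf is precisely the defining property of the supertrace of ℤ/2ℤ-graded matrices, which gives a quick numerical illustration (ours, not from the paper): for a block matrix M = [[A,B],[C,D]], with the even part block-diagonal and the odd part block-off-diagonal, set str(M) = tr(A) - tr(D).

import numpy as np

def supertrace(M, p):
    # str(M) = tr(A) - tr(D) for M = [[A, B], [C, D]] with blocks of size p.
    return np.trace(M[:p, :p]) - np.trace(M[p:, p:])

rng = np.random.default_rng(0)
p = 2
A = rng.normal(size=(2 * p, 2 * p))
B = rng.normal(size=(2 * p, 2 * p))
for M in (A, B):            # make A and B homogeneous of odd degree
    M[:p, :p] = 0
    M[p:, p:] = 0
# For two odd elements the supertrace relation reads str(AB) = -str(BA):
print(np.isclose(supertrace(A @ B, p), -supertrace(B @ A, p)))  # True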
This situation is analogous to the even cocenter of the degenerate affine Hecke-Clifford algebra studied in <cit.>, where Clifford generators c_i do no appear individually (as they are odd generators), but still have an impact on the cocenter via the relation c_i^2=-1.Diagrammatically, the above definition means that we will have an even number of hollow dots on our diagrams. In a diagram with 2n hollow dots, sliding one around the annulus from top to the bottom will multiply the diagram by (-1)^2n-1(-1)=1 where (-1)^2n-1 is a result of changing relative height with the remaining 2n-1 hollow dots using relation (<ref>) and (-1) is the result of sliding it through a clockwise cup using relation (<ref>). For the sake of clarity, when working with diagrams in the even trace we will not draw them on an annulus, but will instead draw them inside square brackets, e.g. [ [baseline=(current bounding box.center),scale=0.5] [->] (0,0) to (1,2); [->] (1,0) to (0,2); ]. This notation refers to the equivalence class of the diagram in . Our main theorem will relateand W^-. In particular, we will establish that the correspondence in Table 1 gives an isomorphism betweenand W^-. Recall that w_k,ℓ = t^ℓ D^k ∈ W^-.§.§ Degenerate affine Hecke-Clifford algebraWe recall the definition of the degenerate affine Hecke-Clifford algebra of type A_n-1, denoted _n, which was first studied in <cit.>. Let 𝒞ℓ_n be the Clifford algebra with generators c_1, …, c_n, subject to the relations:c_i^2 = -1 for1≤ i ≤ n, c_i c_j = - c_j c_i ifi≠j.The symmetric group S_n has a natural action on 𝒞ℓ_n by permuting the generators.Define the Sergeev algebra, or finite Hecke-Clifford algebra of type A_n-1, to be the semidirect product𝕊 := 𝒞ℓ_n ⋊ℂ S_ncorresponding to this action.The degenerate affine Hecke algebra, _n, is isomorphic as a vector space to 𝕊⊗ℂ[x_1, …, x_n]. It is an associative unital algebra over ℂ[u], where u is a formal parameter usually set to 1, generated by s_1,s_2,...,s_n-1, x_1,x_2,...,x_n, and c_1,c_2,...,c_n subject to relations making ℂ[x_1, …, x_n], Cℓ_V, and ℂ S_n subalgebras, along with the additional relations:x_i c_i= -c_i x_i,x_i c_j = c_j x_i(i≠ j) , σ c_i = c_σ(i)σ (1≤ i ≤ n, σ∈ S_n),x_i+1s_i-s_ix_i =u(1-c_i+1c_i), x_js_i =s_ix_j (j≠ i,i+1) . It also has a ℤ/2ℤ grading via (s_i)=(x_i)=0 and (c_i)=1. This algebra is also called the affine Sergeev algebra, and later on we will see that it controls the endomorphisms of up strands in ℋ_tw. §.§ Trace of the degenerate affine Hecke-Clifford algebra The second author computes the trace (or zeroth Hochschild homology) of the even part of _n as a vector space in <cit.>, where he gives an explicit description of a vector space basis for types A, B and D. Here we recall the result for type A.Let I be the standard root system of type A_n-1, and let W= S_n be the Weyl group. For a partition λ = (λ_1, λ_2, …, λ_k) ⊢ n, let J_λ be the unique minimal subset of I (up to conjugation by W) such that W_J_λ contains an element of cycle type λ. Let w_λ∈ W be the element (1,…, λ_1)(λ_1 +1, …, λ_1 + λ_2)… (n-λ_n +1, …, n). Then w_λ∈ W_J_λ. Let V be the standard representation of 𝔰𝔩_n, with basis {x_1, …, x_n}. Denote by V^2 the vector space with basis {x_1^2, …, x_n^2}.Finally, fix a basis {f_J_λ;i} of the vector space S((V^2)^W_J_λ)^N_W(W_J_λ), where S(U) denotes the symmetric algebra of the vector space U, and N_W(W_J_λ) denotes the normalizer of the parabolic subgroup W_J_λ in W. 
We have the following description of a basis for (_n)_0 in type A. <cit.> The set {w_λf_J_λ;i}_λ∈𝒪𝒫_n is a basis of (_n)_0, where 𝒪𝒫_n is the set of partitions of n with all odd parts. Let n=3. Then we have 𝒪𝒫_3={(1,1,1),(3)}. For λ=(1,1,1), we have w_λ = 1 and J_λ = ∅, since W_J_λ = {1}. Thus N_W(W_J_λ) = S_3. So, we choose a basis {f_J_λ;i} of the vector space S((V^2)^W_J)^N_W(W_J) = ℂ[x_1^2 , x_2^2, x_3^2]^S_3, i.e. the symmetric polynomials in the variables x_1^2, x_2^2, x_3^2. We can take {f_J_λ;i} = {s_ν}, the Schur polynomials in 3 variables. For λ=(3), we have w_λ=(123), a 3-cycle in S_3. Thus J_λ = I, W_J_λ=W and N_W(W_J_λ)=W. Therefore the f_J_λ;i form a basis of ℂ[x_1^2+x_2^2+x_3^2], the polynomials in the single variable x_1^2+x_2^2+x_3^2 (in this case, the N_W(W_J_λ)-invariance is superfluous). Therefore a basis of Tr(_3)_0 is given by { s_ν}∪{(123)(x_1^2+ x_2^2+ x_3^2)^n}_n∈ℕ, where {s_ν} are the Schur polynomials in 3 variables. Note that this basis does not contain any classes indexed by partitions with even parts. Correspondingly, we will see that degree zero diagrams in containing even cycles are zero.
§.§ Distinguished elements h_n
Define the elements: n(x_1^j_1⋯ x_n^j_n)(c_1^ϵ_1⋯ c_n^ϵ_n) := [[scale=0.8] [->] (3.2,0) .. controls (3.2,1.25) and (0,.25) .. (0,2)node[pos=0.85, shape=coordinate](X); [->] (0,0) .. controls (0,1) and (.8,.8) .. (.8,2); [->] (.8,0) .. controls (.8,1) and (1.6,.8) .. (1.6,2); [->] (2.4,0) .. controls (2.4,1) and (3.2,.8) .. (3.2,2);at (1.6,.35) …;at (2.4,1.65) …;(.05,1.6) circle (2pt); (.78,1.6) circle (2pt); (1.58,1.6) circle (2pt); (3.18,1.6) circle (2pt); (.35,1.2) circle (2.5pt); (.62,1.2) circle (2.5pt);(1.425,1.2) circle (2.5pt); (3.025,1.2) circle (2.5pt); at (-0.18,1.7) j_1;at (.55,1.7) j_2;at (1.35,1.7) j_3; at (3.45,1.7) j_n;at(.051,1.25) ϵ_1;at(.99,1.25) ϵ_2; at(1.76,1.25) ϵ_3;at(3.37,1.25) ϵ_n;],-n(x_1^j_1⋯ x_n^j_n)(c_1^ϵ_1⋯ c_n^ϵ_n) := [[scale=0.8] [<-] (3.2,0) .. controls (3.2,1.25) and (0,.25) .. (0,2)node[pos=0.85, shape=coordinate](X); [<-] (0,0) .. controls (0,1) and (.8,.8) .. (.8,2); [<-] (.8,0) .. controls (.8,1) and (1.6,.8) .. (1.6,2); [<-] (2.4,0) .. controls (2.4,1) and (3.2,.8) .. (3.2,2);at (1.6,.35) …;at (2.4,1.65) …;(.05,1.6) circle (2pt); (.78,1.6) circle (2pt); (1.58,1.6) circle (2pt); (3.18,1.6) circle (2pt); (.35,1.2) circle (2.5pt); (.62,1.2) circle (2.5pt);(1.425,1.2) circle (2.5pt); (3.025,1.2) circle (2.5pt); at (-0.18,1.7) j_1;at (.55,1.7) j_2;at (1.35,1.7) j_3; at (3.45,1.7) j_n;at(.051,1.25) ϵ_1;at(.99,1.25) ϵ_2; at(1.76,1.25) ϵ_3;at(3.37,1.25) ϵ_n;], where ϵ_i ∈{0,1}. In both of these elements, we consider the hollow dots to be descending in height from left to right, so that the dot labeled ϵ_1 is the highest. These elements are analogues of those denoted h_± n⊗ (x_1^j_1⋯ x_n^j_n) in <cit.>. Additionally, set n∑ x_i^j_i = ∑nx_i^j_i. For n≥ 1 and 1≤ i ≤ n-1, we have * ± nx_i=± nx_i+1±± i±(n-i). * ± n x_i c_j = - ± nx_i+1c_j+1. Part (1) is just <cit.>, except that our relation for sliding a solid dot through a crossing involves an extra term with hollow dots. But cycles with a single hollow dot are zero, since sending the hollow dot around the annulus gives us the same diagram with a negative sign. In the above calculations, our n-cycles split into smaller cycles, with a single hollow dot on at least one of them. The proof of part (2) depends on the relative position of i and j, but is a straightforward computation.
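Since the dot-sliding relation just used will recur throughout the paper, we record a minimal worked instance of it (a sketch using only the defining relations of _n recalled above, with u the formal parameter there):
\[
x_{i+1}s_i = s_i x_i + u(1-c_{i+1}c_i),
\qquad
x_{i+1}^2 s_i = s_i x_i^2 + u\bigl((1-c_{i+1}c_i)x_i + x_{i+1}(1-c_{i+1}c_i)\bigr),
\]
where the second identity follows from applying the first one twice. Thus a solid dot slides through a crossing at the cost of correction terms of strictly smaller polynomial degree, some of which carry the factor c_{i+1}c_i; these are exactly the extra terms with hollow dots referred to in the proof above.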
Let w∈ S_n, and define the elements: f_w; j_1, …, j_n; ϵ_1, …, ϵ_n=[very thick][->] (-.55,0) – (-.55,1.5); [very thick][->] (.55,0) – (.55,1.5); [fill=white!20,] (-.8,.4) rectangle (.8,.8);() at (0,.55) w;() at (0,1.25) ⋯;() at (0,.25) ⋯; (-.55,1.25) circle (2pt); (.55,1.25) circle (2pt); (-.55,.9) circle (2pt); (.55,.9) circle (2pt);(-.75,.9) node ϵ_1;(.8,.9) node ϵ_n;(-.75,1.25) node j_1;(.8,1.25) node j_n; and f_w; j_1, …, j_n; ϵ_1, …, ϵ_n=[very thick][<-] (-.55,0) – (-.55,1.5); [very thick][<-] (.55,0) – (.55,1.5); [fill=white!20,] (-.8,.4) rectangle (.8,.8);() at (0,.55) w;() at (0,1.25) ⋯;() at (0,.25) ⋯; (-.55,1.25) circle (2pt); (.55,1.25) circle (2pt); (-.55,.9) circle (2pt); (.55,.9) circle (2pt);(-.75,.9) node ϵ_1;(.8,.9) node ϵ_n;(-.75,1.25) node j_1;(.8,1.25) node j_n; . Let w∈ S_n and (n_1, …, n_r) be a composition of n. Then [f_± w; j_1, …, j_n; ϵ_1, …, ϵ_n] = ∑ d_n_1, …, n_rn_1p_n_1 c_n_1…n_r p_n_rc_n_r for constants d_n_1, …, n_r∈ℂ, polynomials p_n_i in i variables, and elements c_n_i consisting of at most i Clifford generators (e.g. c_n_3 = { c_1^ϵ_1 c_2^ϵ_2 c_3^ϵ_3 | ϵ_i ∈{0,1}}). We proceed by induction on ∑ϵ_i. The base case is ∑ϵ_i = 0; then [f_± w; j_1, …, j_n; ϵ_1, …, ϵ_n] = [f_± w; j_1, …, j_n] and we apply <cit.>. Now assume the statement is true for ∑ϵ_i = k for all k<m≤ n. Take (ϵ_1, …, ϵ_n) so that ∑ϵ_i = m. Choose g∈ S_n such that gwg^-1 = w_λ, where λ is the cycle type of w (so gwg^-1 = (s_1 … s_n_1 -1)… (s_n_1 + … + n_r-1… s_n_1 +… +n_r -1)). Let p= x_1^j_1… x_n^j_n and c= c_1^ϵ_1… c_n^ϵ_n. Then we have f_± w; j_1, …, j_n; ϵ_1, …, ϵ_n = pcw = (-1)^ϵ cpw, where ϵ = ∑_ϵ_i = 1 j_i. Thus conjugating by g gives that gpcwg^-1= (-1)^ϵ gcpwg^-1=(-1)^ϵ (g.c) gpwg^-1= (-1)^ϵ[ (g.c)(g.p)gwg^-1 + (g.c) p_L wg^-1], where p_L is a polynomial of degree less than j_1 + … + j_n. Note that gwg^-1 is a product of cycles, so the first term in the above expression has the correct form. In the second term, we have |{i | ϵ_g(i) =1}|≤ m (strict inequality can occur if g has fixed points). If |{i | ϵ_g(i) =1}| < m, we are done by induction, so assume that we have equality. Now repeat the process on the second term, choosing a g' ∈ S_n such that g'(wg^-1) (g')^-1 is a product of cycles, and conjugating (g.c) p_L wg^-1. Each application of this process results in one term in which the symmetric group element is a product of cycles (which has the desired form), and one term with the degree of the polynomial part strictly smaller and the degree of the Clifford part weakly smaller. If the degree of the Clifford part ever strictly decreases, we are done. If not, the conjugation will eventually reduce the degree of the polynomial part to 0, so we have an element of the form c'σ, with c' ∈ 𝒞ℓ_n and σ∈ S_n. Choose a g”∈ S_n such that g”σ (g”)^-1 is a product of cycles; then g”c'σ (g”)^-1 =(g”.c') g”σ (g”)^-1. This now has the desired form. Let w∈ S_n and (n_1, …, n_r) be a composition of n. Then [f_± w; j_1, …, j_n; ϵ_1, …, ϵ_n] = ∑ d_n_1, …, n_r± n_1x_1^ℓ_1 c_1^k_1…± n_rx_1^ℓ_r c_1^k_r, where d_n_1, …, n_r∈ℂ and ℓ_1, …, ℓ_r, k_1, …, k_r ∈ℕ. This follows immediately from the preceding lemmas. Proposition <ref> allows us to write any element in or as a linear combination of the elements n. We will therefore direct our attention to these elements in future computations.
§.§ Gradings in
The next lemma follows from diagrammatic computations in the next section. We record it here for convenience of terminology. The algebra is ℤ-filtered where (nx_1^2a) ≤ n for any a≥ 0. This is called the rank filtration. Denote by (resp.
) the subalgebra of generated by nx_1^2a, n≥ 1 (resp. n≤ -1). The algebra is ℤ^≥ 0-filtered where (n x_1^2a)≤ a for any a ≥ 0. Dots can slide through crossings modulo a correction term containing fewer dots. This is called the dot filtration, and corresponds to the differential filtration (given by (w_ℓ, k) = k) in W^-. These filtrations are compatible, so is (ℤ×ℤ^≥ 0)-filtered with n x_1^2a in bidegree (n, a). For ω∈{>,<,0} denote the associated graded object by . Define a generating series for the graded dimension of by P_(t,q) = ∑_r∈ℤ∑_k∈ℤ, k≥ 0 [r,k] t^r q^k. The following graded dimensions are easy calculations using Propositions <ref> and <ref>. They are not used in the proof of the main result, but we record them here for convenience. The graded dimensions of and are given by: P_^> = ∏_r≥ 0∏_k>01/1-t^2r+1 q^k; P_^< = ∏_r≤ 0∏_k>01/1-t^2r-1 q^k. Note that the rank grading and dot gradings are shifted by 1 for clockwise bubbles (so d_2 is in bidegree (1,2) and d_4 is in bidegree (1,3)). This is a consequence of the decomposition formula in Lemma <ref>.
§ BUBBLES
We investigate the endomorphisms of 1 in , known as bubbles. We prove that all bubbles can be written in terms of clockwise bubbles, and deduce formulas for moving bubbles past strands in the trace.
§.§ Definition and basic properties
Elements of _ℋ_tw(1) are ℂ-linear combinations of possibly intersecting or nested closed diagrams, which may have dots. We can always separate the nested pieces and resolve any crossings that occur between different closed diagrams using the defining relations, ending up with non-intersecting, non-nested closed oriented diagrams. Each one can be deformed into an oriented circle, possibly with dots, via an isotopy. A single closed, oriented, non-self-intersecting diagram is called a bubble. They are the building blocks of endomorphisms of the identity object in ℋ_tw. We define d̅_k,l:=[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(3.95,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt];at (4.2,1.8) l;at (4.2,2.2) k; and d_k,l:=[baseline=(current bounding box.center)] [<-] (3,2) arc (-180:180:5mm);(3.95,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt];at (4.2,1.8) l;at (4.2,2.2) k; for k,l∈ℤ_≥ 0. Given any closed diagram with any configuration of dots, it is possible to collect the hollow dots and the solid dots together, possibly after multiplying the diagram by -1, by using relation (<ref>). Solid dots move freely along caps and cups, and hollow dots may pick up a negative sign while moving along caps or cups, depending on the orientation. After regrouping, we may assume that the dots are placed on the middle of the right side of the diagram, as above. Moreover, using the left two equations in relation (<ref>), we can erase a pair of hollow dots, possibly by changing the sign of the diagram. Therefore the set {d_k,l,d̅_k,l|k∈ℤ_≥0, l∈{0,1}} is a spanning set for _ℋ_tw(1). From our defining relations, we have that d̅_0,0=[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);=1 and d̅_0,1=[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(4,2) circle [radius=2pt];=0. Further, we have the following.
We have that d̅_k,1=0 and d_k,1=0 for all non-negative integers k. An example computation shows that d̅_1,1=[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(3.95,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt];= -[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(3.95,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt]; =-[baseline=(current bounding box.center)][->] (3,2) arc (-180:180:5mm);(3.75,1.6) circle [radius=2pt];(3.95,1.8) circle [radius=2pt];=-d̅_1,1=0, where in the second equality the negative sign comes from relation (11), and the third equality comes from sliding the solid dot around. More generally, if we have k solid dots, where k is an even integer, then sliding the hollow dot around the circle and passing it through k solid dots multiplies the diagram by (-1)^k+1, so the diagram is zero. If k is an odd number, sliding a solid dot around the circle and passing it through a hollow dot catches a minus sign, so these diagrams are zero as well. These arguments do not depend on the orientation of the bubble, hence the result follows. From now on, we will assume that the second index in d̅_k,l and d_k,l is always zero. We will omit it from our notation and write d_k instead of d_k,0, and d̅_k instead of d̅_k,0. We have that d_2n+1=d̅_2n+1=0 for all non-negative integers n. Note that d̅_1=[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(4,2) circle [radius=2pt];= [baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(4,2) circle [radius=2pt];(3.95,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt];= -[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(4,2) circle [radius=2pt];(3.05,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt];= [baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(4,2) circle [radius=2pt];(3.95,2.2) circle [radius=2pt];(3.05,1.8) circle [radius=2pt];= [baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(4,2) circle [radius=2pt];(3.95,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt]; = -[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(4,2) circle [radius=2pt];(3.95,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt];= -[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(4,2) circle [radius=2pt];=0. The same argument works for any odd number of solid dots, and also for clockwise-oriented bubbles.
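Combining the spanning set {d_k,l,d̅_k,l | k∈ℤ_≥0, l∈{0,1}} from the previous subsection with the two lemmas above, we record the immediate consequence (in the notation introduced there):
\[
\operatorname{End}_{\mathcal{H}_{tw}}(\mathbb{1})
= \operatorname{span}_{\mathbb{C}}\{\, d_{2n},\ \bar{d}_{2n} \ :\ n \in \mathbb{Z}_{\geq 0} \,\},
\]
since all bubbles carrying a hollow dot and all bubbles with an odd number of solid dots vanish.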
We have that d̅_2n=∑_2a+2b=2n-2[baseline=(current bounding box.center)] [->] (0,0) arc (0:180:5mm);(0,0) arc (360:180:5mm); [<-] (0.1,0) arc (-180:0:5mm);(0.1,0) arc (180:0:5mm);(-0.2,0.4) circle[radius=2pt];(0.9,0.4) circle[radius=2pt];at (-0.04,0.6) 2a;at (1.05,0.6) 2b;=∑_2a+2b=2n-2d̅_2ad_2b for any n≥1, where the sum runs over non-negative integers a,b. For the n=1 case, we have the following computation: d̅_2=[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(3.95,2.2) circle [radius=2pt];(3.95,1.8) circle [radius=2pt];= [baseline=(current bounding box.center)] [->] (0,0) arc (10:180:5mm);(0,-0.18) arc (350:180:5mm);(0.1,0) arc (170:-170:5mm);(-0.02,-0.2) to (0.1,0.01);(-0.01,0.02) to (0.11,-0.2);(-0.1,0.23) circle[radius=2pt]; = [baseline=(current bounding box.center)] [->] (0,0) arc (10:180:5mm);(-1,-0.08) arc (180:350:5mm);(0.1,0) arc (170:-170:5mm);(-0.02,-0.2) to (0.1,0.01);(-0.01,0.02) to (0.11,-0.2);(0.2,0.23) circle[radius=2pt];+[baseline=(current bounding box.center)] [->] (0,0) arc (0:180:5mm);(0,0) arc (360:180:5mm); [<-] (0.1,0) arc (-180:0:5mm);(0.1,0) arc (180:0:5mm);+[baseline=(current bounding box.center)] [->] (0,0) arc (0:180:5mm);(0,0) arc (360:180:5mm); [<-] (0.1,0) arc (-180:0:5mm);(0.1,0) arc (180:0:5mm);(-0.04,-0.15) circle[radius=2pt];(0.13,0.1) circle[radius=2pt];=d_0, where the first diagram on the right-hand side is zero since it contains a left curl, the second term is d̅_0d_0=d_0, and the last term is zero by Lemma <ref>. For general n, we replace one of the solid dots with a right-twist curl and slide the remaining 2n-1 dots through the crossings using relations <ref> and <ref> repeatedly. This produces many resolution terms, consisting of sums of products of counterclockwise and clockwise bubbles, some with only solid dots and some with hollow dots as well. The terms with hollow dots are zero, and so are the terms with an odd number of solid dots. Also, the figure-eight shape contains a left twist curl, so it is zero as well, which proves the statement.
§.§ Algebraic independence of bubbles
A categorified Fock space action for ℋ_tw is described in <cit.>. ℋ_tw acts on the category 𝔖, whose objects are induction and restriction functors between ℤ/2ℤ-graded finite dimensional 𝕊_n-modules, for all n≥1. Morphisms of 𝔖 are natural transformations between the induction and restriction functors. Following Khovanov's approach from <cit.>, let 𝔖_n be the subcategory of 𝔖 whose objects start with induction or restriction functors from ℤ/2ℤ-graded finite dimensional 𝕊_n-modules. For every n∈ℤ_≥1, we have a functor F_n:ℋ_tw→𝔖_n sending P to _n^n+1 and sending Q to _n^n-1. Note that F_n sends _ℋ_tw(1) to the center of 𝕊_n, which is the same as the center of ℂ [S_n]. Explicit descriptions of the actions of a crossing, a cup and a cap are provided in <cit.>. We would like to study the action of clockwise bubbles to show their algebraic independence.
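For instance, unwinding the decomposition formula of the previous lemma for small n (using d̅_0=1 and d̅_2=d_0 from its proof) gives
\[
\bar{d}_2 = \bar{d}_0\, d_0 = d_0,
\qquad
\bar{d}_4 = \bar{d}_0\, d_2 + \bar{d}_2\, d_0 = d_2 + d_0^2,
\]
and, inductively, every d̅_2n is a polynomial in d_0, d_2, …, d_2n-2. This is why it suffices to establish the algebraic independence of the clockwise bubbles.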
Note that d_2k is obtained as the composition of a cup, k copies of 1x_1, and a cap.[baseline=(current bounding box.center)] [->] (3,2) arc (-180:180:5mm);(3.95,2.2) circle [radius=2pt];at (4.2,2.2) k;= [baseline=(current bounding box.center)] [->] (1,-3.5) arc (0:180:5mm);(1,-3.8) circle[radius=2pt]; [->] (1,-4) to (1,-3.5); [->] (0,-3.5) to (0,-4);at (0,-4.15) ⋮;at (1,-4.15) ⋮;(1,-4.8) circle[radius=2pt]; [->] (1,-5) to (1,-4.5); [->] (0,-4.5) to (0,-5); [->] (0,-5) arc (-180:0:5mm);[decorate,decoration=brace,amplitude=2pt,xshift=-4pt,yshift=0pt] (1.3,-3.5) – (1.3,-5) node [black,midway,xshift=15pt] k dots; Therefore to study the action of d_2k, we need to know the action of 1x_1 in addition to the actions of cups and caps. Now 1x_1 is defined as a combination of caps, cups and crossings:[baseline=(current bounding box.center),scale=1.15] [->] (0,0) to (0,1); (0,.5) circle[radius=2pt];= [baseline=(current bounding box.center),scale=0.75] [->] (0,0) to (0,.5); [<-] (0.5,0.5) arc (-180:0:3mm); [->] (0,.5) to (.5,1); [->] (.5,.5) to (0,1); [->] (1.1,1) to (1.1,.5); [->] (0,1) to (0,1.5); [<-] (1.1,1) arc (0:180:3mm); Using the explicit description of the Fock space representation of ℋ in <cit.>, we compute the required actions. These computations give that -1x_1 acts by sending 1 ↦ J_n+1=∑_i=1^n(1-c_n+1c_i)(i,n+1). This is the (n+1)-st even Jucys-Murphy element. Therefore 1x_1^2 acts by multiplication by J_n+1^2. This is analogous to the untwisted case where the same element acts as multiplication by a Jucys-Murphy element. Finally, the action of the bubble d_2k is given by multiplication by ∑_i=1^n(i↔ n+1)J_n+1^2k(n+1↔ i)-c_n(i↔ n+1)J_n+1^2k(n+1↔ i)c_1, where (i↔ n) denotes the cycle s_is_i+1⋯ s_n-1. Here we can apply the filtration argument on the number of disturbances of permutations as done in <cit.> to obtain the following. The elements {d_2k}_k≥0 are algebraically independent, i.e. there is an isomorphism _ℋ_tw(1)≅ℂ[d_0,d_2,d_4,...]. Therefore the bubbles are algebraically independent, and they form a copy of a polynomial ring in infinitely many variables.
§.§ Counter-clockwise bubble slide moves
In order to describe as a vector space, it would be convenient to have a standard form for our diagrams in the trace. In particular, we want to collect all the bubbles appearing in a diagram on the rightmost part of the diagram. In order to do so, we must describe how bubbles slide through upward and downward strands. Note that since we can work with local relations, the bubbles do not have to interact with solid dots or crossings; they can simply slide under a crossing or under a solid dot. All calculations in this section take place in the trace, though we omit the brackets in some situations for readability.
We have that [d̅_2n,1]=2∑_k=1^n [ [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.8,0.8)2k-1;] infor any positive integer n.The proof is a direct computation, given below:[baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);at (3,1.8) 2n; =[baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);at (2.5,1.8) 2n;= [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);(3.5,1) circle (2pt);at (2.7,1.8) 2n-1;at (3.6,1) ;+ [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.75,0.8) 2n-1;- [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.7,0.67) circle (1pt);(0.85,0.65) circle [radius=1pt];(0.98,0.57) circle (1pt);at (0.65,0.8) 2n-1; = [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);(3.5,1) circle (2pt);at (2.7,1.8) 2n-1;at (3.6,1) ;+ 2 [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.75,0.8) 2n-1; = [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);(3.5,1) circle (2pt);at (2.7,1.8) 2n-2;at (3.7,1) 2;+ [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.65,0.8) 2n-2;(1,0.75) circle (1pt);at (1.1,0.75) ;- [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.7,0.67) circle (1pt);(0.85,0.65) circle (1pt);(0.98,0.57) circle (1pt);at (0.65,0.8) 2n-2;(1,0.75) circle (1pt);at (1.1,0.75) ;+ 2 [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.75,0.8) 2n-1; = [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);(3.5,1) circle (2pt);at (2.7,1.8) 2n-2;at (3.7,1) 2;+ 2 [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.75,0.8) 2n-1; = [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);(3.5,1) circle (2pt);at (2.7,1.8) 2n-4;at (3.7,1) 4;+ 2 [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.75,0.8) 2n-3;+ 2 [baseline=(current bounding box.center),scale=1.60](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.75,0.8) 2n-1;Continuing to 
slide dots in the first term in this way, we obtain: [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);at (3,1.8) 2n;= [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3.5,1) circle (2pt);at (3.8,1) 2n;+ 2∑_k=1^n[baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.8,0.8) 2k-1; .We have that[ [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.75,0.8) 2n+1;] = ∑_a+b=n[ [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);(4,1) circle (2pt);at (3,1.8) 2a;at (4.3,1) 2b;]infor any non-negative integer n.This is an easy computation using induction on n. The base case is [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);= [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(1,0.27) circle (1pt);+ [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);- [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3.5,1) circle (2pt);(4,0.8) circle (2pt);= [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);where the first term after the first equality contains a left twist curl, and the last term is zero since a bubble with a hollow dot is zero.For the induction step, suppose the statement holds for n≥ 1. 
Then[baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.7,0.78) 2n+3;= [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.7,0.78) 2n+2;(1,0.27) circle (1pt);at (1.1,0.27) ;+ [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt); at (3,1.8) 2n+2;at (4.3,1) ;- [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt); (3.5,1) circle (2pt);(4,0.8) circle (2pt);at (3,1.8) 2n+2;at (4.3,1) ;= [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.7,0.78) 2n+2;(1,0.27) circle (1pt);at (1.1,0.27) ;+ [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt); at (3,1.8) 2n+2;at (4.3,1) ; = [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.7,0.78) 2n+1;(1,0.27) circle (1pt);at (1.1,0.27) 2;+ [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);(4,1) circle (2pt);at (3,1.8) 2n+1;at (4.3,1) ;- [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);(4,1) circle (2pt);(3.5,1) circle (2pt);(4,0.8) circle (2pt);at (3,1.8) 2n+1;at (4.3,1) ;+ [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt); at (3,1.8) 2n+2;at (4.3,1) ; = [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.7,0.78) 2n+1;(1,0.27) circle (1pt);at (1.1,0.27) 2;+ [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt); at (3,1.8) 2n+2;at (4.3,1) ; ,where on the second line, we know that counter-clockwise bubbles with odd number of hollow dots are zero by Proposition <ref>, and the terms with hollow dots are zero by Proposition <ref>.Now we can apply our induction hypothesis to the upper part of [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.7,0.78) 2n+1;(1,0.27) circle (1pt);at (1.1,0.27) 2; to get that [baseline=(current bounding box.center),scale=1.75](1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm); [->] (0.95,0.46) to [out=75, in=270] (1,1);(0.8,0.67) circle (1pt);at (0.7,0.78) 2n+3;= ∑_a+b=n[baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);(4,1) circle (2pt);at (3,1.8) 2a;at (4.3,1) 2b;(4,0.5) circle (2pt);at (4.2,0.5) 2;+ [baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt); at (3,1.8) 2n+2;at (4.3,1) ;= ∑_a+b=n+1[baseline=(current bounding box.center),scale=0.75] [->] (2.5,1) arc 
(-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);(4,1) circle (2pt);at (3,1.8) 2a;at (4.3,1) 2b; , as desired. Obtaining an explicit formula for sliding counter-clockwise bubbles is difficult since we express their commutators in terms of left twist curls with some dots on the curl, whose resolution terms still leave us with counter-clockwise bubbles on the left side of 1x_i^a. However, the situation is better with clockwise oriented bubbles.§.§ Clockwise bubble slide moves We can compute an explicit formula for clockwise bubble slides. We have[d_2n,1]=2[ [baseline=(current bounding box.center),scale=1.5] [->] (0,0) to (0,1);(0,0.75) circle (1pt);at (0.2,0.75) 2n;] +2∑_a+b=2n-1[ [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.02,0.75) circle (1pt);at (0.8,0.75) a;(1.45,0.5) circle (1pt);at (1.55,0.5) b;]infor all n≥ 0.This is a direct computation, given below:[baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);at (3,1.8) 2n;= [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);at (2.5,1.8) 2n;+2[baseline=(current bounding box.center),scale=0.75] [->] (0,0) to (0,2);(0,1.6) circle (2pt);at (0.34,1.6) 2n; = [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);at (2.7,1.8) 2n-1;(3.5, 1) circle (2pt);at (3.8,1) ;+ [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.8) circle (1pt);at (0.7,0.8) 2n-1;+ [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.8) circle (1pt);at (0.7,0.8) 2n-1;(1,0.9) circle (1pt);(1.12,0.67) circle (1pt);+2[baseline=(current bounding box.center),scale=0.75] [->] (0,0) to (0,2);(0,1.6) circle (2pt);at (0.34,1.6) 2n; = [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);at (2.7,1.8) 2n-1;(3.5, 1) circle (2pt);at (3.8,1) ;+2 [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.8) circle (1pt);at (0.7,0.8) 2n-1;+2[baseline=(current bounding box.center),scale=0.75] [->] (0,0) to (0,2);(0,1.6) circle (2pt);at (0.34,1.6) 2n; = [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);at (2.7,1.8) 2n-2;(3.5, 1) circle (2pt);at (3.8,1) 2;+ [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.8) circle (1pt);at (0.7,0.8) 2n-2;(1.24,0.72) circle (1pt);+ [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.8) circle (1pt);at (0.7,0.8) 2n-2;(1.24,0.72) circle (1pt);(1,0.9) circle (1pt);(1.12,0.67) circle (1pt);+ 2 [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.8) circle (1pt);at (0.7,0.8) 
2n-1;+2[baseline=(current bounding box.center),scale=0.75] [->] (0,0) to (0,2);(0,1.6) circle (2pt);at (0.34,1.6) 2n; = [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (3.3,0) to (3.3,2);(3,1.5) circle (2pt);at (2.7,1.8) 2n-2;(3.5, 1) circle (2pt);at (3.8,1) 2;+ 2[baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.8) circle (1pt);at (0.7,0.8) 2n-2;(1.24,0.72) circle (1pt);+ 2 [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.8) circle (1pt);at (0.7,0.8) 2n-1;+ 2[baseline=(current bounding box.center),scale=0.75] [->] (0,0) to (0,2);(0,1.6) circle (2pt);at (0.34,1.6) 2n;Continuing to slide dots in the first term in this way, we obtain:[baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (4,0) to (4,2);(3,1.5) circle (2pt);at (3,1.8) 2n;= [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2n;+2[baseline=(current bounding box.center),scale=0.75] [->] (0,0) to (0,2);(0,1.6) circle (2pt);at (0.34,1.6) 2n;+ 2∑_a+b=2n-1[baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.02,0.75) circle (1pt);at (0.8,0.75) a;(1.45,0.5) circle (1pt);at (1.55,0.5) b; .In particular, we can refine this statement to obtain the following recursive formula for computing [d_2n,1].We have [d_2n,1]=[d_2n-2,1]∘x_1^2 + 4[ [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.85) circle (1pt);at (1.1,0.85) 2;(1.45,0.5) circle (1pt);at (1.7,0.5) 2n-3;] -2[ [baseline=(current bounding box.center),scale=0.75] [<-] (0.5,1) arc (-180:180:5mm); [->] (0,0) to (0,2);(1.5,1) circle (2pt);at (1.8,1) 2n-2;]infor all n≥ 0.This lemma follows from the observation that [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.02,0.75) circle (1pt);at (0.8,0.75) a;(1.45,0.5) circle (1pt);at (1.65,0.5) 2k;= [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.02,0.75) circle (1pt);at (0.8,0.75) a+1;(1.45,0.5) circle (1pt);at (1.70,0.5) 2k-1;- [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2k-1;(2,1.4) circle (2pt);at (1.8,1.4) a;+ [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2k-1;(2,1.4) circle (2pt);at (1.8,1.4) a;(2,1.2) circle (2pt);(2.5,1.1) circle (2pt);= [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.02,0.75) circle (1pt);at (0.8,0.75) a+1;(1.45,0.5) circle (1pt);at (1.70,0.5) 2k-1; ,where the second term after the first equality is zero by Lemma <ref>, and the third term is zero by Lemma <ref>. Applying this result to the summands in the statement of Lemma <ref> yields the result. 
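As a quick consistency check (an observation we will not need later): specializing the bubble slide formula of Lemma <ref> above to n=0, the sum on the right-hand side is empty, and we obtain in the trace
\[
[\,d_0,\ h_1\,] = 2\,h_1,
\]
where h_1 denotes the class of a single upward strand. This matches the coefficient 2+4n at n=0 in the explicit formula proved next, and contrasts with the counter-clockwise bubble d̅_0=1, which is central.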
Finally, we obtain an explicit formula for computing [d_2n,1].We have[d_2n,1]=(2+4n)[ [baseline=(current bounding box.center),scale=1.5] [->] (0,0) to (0,1);(0,0.75) circle (1pt);at (0.2,0.75) 2n;] -∑_a+b=n-1(2+4a) [ [baseline=(current bounding box.center),scale=0.75] [<-] (0.5,1) arc (-180:180:5mm); [->] (0,0) to (0,2);(1.5,1) circle (2pt);(0,1.5) circle (2pt);at (0.3,1.5) 2a;at (1.8,1) 2b;]infor all n≥ 0.We claim that[baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.85) circle (1pt);at (1.1,0.85) 2;(1.45,0.5) circle (1pt);at (1.7,0.5) 2n-3;= [baseline=(current bounding box.center),scale=1.5] [->] (0,0) to (0,1);(0,0.75) circle (1pt);at (0.2,0.75) 2n;- ∑_a+b=n-1 a≠0[baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2b;(2,1.4) circle (2pt);at (1.7,1.4) 2a;for n≥2.We proceed via induction on n. The base case n=2 is a direct computation. Now suppose the formula holds for some n≥ 2. Then [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.85) circle (1pt);at (1.1,0.85) 2;(1.45,0.5) circle (1pt);at (1.8,0.5) 2n-3;= [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.85) circle (1pt);at (1.1,0.85) 3;(1.45,0.5) circle (1pt);at (1.8,0.5) 2n-4;- [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2n-4;(2,1.4) circle (2pt);at (1.7,1.4) 2;= [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.85) circle (1pt);at (1.1,0.85) 4;(1.45,0.5) circle (1pt);at (1.8,0.5) 2n-5;- [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2n-4;(2,1.4) circle (2pt);at (1.7,1.4) 2; = [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.02,0.75) circle (1pt);at (1.12,0.75) 2;(1,0.92) circle (1pt);at (1.1,0.92) 2;(1.45,0.5) circle (1pt);at (1.8,0.5) 2n-5;- [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2n-4;(2,1.4) circle (2pt);at (1.7,1.4) 2; . Now we can apply the induction hypothesis to the lower part of the first term in the last expression. 
This gives us: [baseline=(current bounding box.center),scale=1.5](1,0) to [out=90, in=85](1.05,0.5);(1.05,0.5) arc (-175:175:2mm); [->] (1.05,0.46) to [out=95, in=270] (1,1);(1.01,0.85) circle (1pt);at (1.1,0.85) 2;(1.45,0.5) circle (1pt);at (1.8,0.5) 2n-3;= ( [baseline=(current bounding box.center),scale=1.5] [->] (0,0) to (0,1);(0,0.65) circle (1pt);at (0.3,0.65) 2n-2;(0,0.85) circle (1pt);at (0.15,0.85) 2;- ∑_a+b=n-2 a≠0[baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2b;(2,1.4) circle (2pt);at (1.7,1.4) 2a;(2,1.7) circle (2pt);at (1.7,1.7) 2; )- [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2n-4;(2,1.4) circle (2pt);at (1.7,1.4) 2; = ( [baseline=(current bounding box.center),scale=1.5] [->] (0,0) to (0,1);(0,0.65) circle (1pt);at (0.2,0.65) 2n;- ∑_a+b=n-2 a≠0[baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2b;(2,1.4) circle (2pt);at (1.4,1.4) 2a+2; )- [baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2n-4;(2,1.4) circle (2pt);at (1.7,1.4) 2; = [baseline=(current bounding box.center),scale=1.5] [->] (0,0) to (0,1);(0,0.65) circle (1pt);at (0.2,0.65) 2n;- ∑_a+b=n-1 a≠0[baseline=(current bounding box.center),scale=0.75] [<-] (2.5,1) arc (-180:180:5mm); [->] (2,0) to (2,2);(3,1.5) circle (2pt);at (3,1.8) 2b;(2,1.4) circle (2pt);at (1.4,1.4) 2a; Applying this result to the recursive formula in Lemma <ref> proves the statement.Commutators of bubbles with downward strands are similar to those of bubbles with upward strands.We have[d_2n,-1]=-2[ [baseline=(current bounding box.center),scale=1.5] [<-] (0,0) to (0,1);(0,0.75) circle (1pt);at (0.2,0.75) 2n;] -2∑_a+b=2n-1[ [baseline=(current bounding box.center),scale=1.5] [<-] (1,0) to [out=90, in=-75](0.95,0.5);(0.95,0.5) arc (5:355:2mm);(0.95,0.46) to [out=75, in=270] (1,1);(1.00,0.75) circle (1pt);at (0.85,0.75) a;(0.55,0.5) circle (1pt);at (0.40,0.5) b;]infor all n≥ 0. This follows from a computation similar to those in the proofs of Lemmas <ref> and <ref>. Finally we have an explicit formula for commutators of clockwise oriented bubbles and a single downward strand. We have[d_2n,-1]=-(2+4n)[ [baseline=(current bounding box.center),scale=1.5] [<-] (0,0) to (0,1);(0,0.75) circle (1pt);at (0.2,0.75) 2n;] +∑_a+b=n-1(2+4a)[ [baseline=(current bounding box.center),scale=0.75] [<-] (0.5,1) arc (-180:180:5mm); [<-] (2.3,0) to (2.3,2);(1.5,1) circle (2pt);(2.3,1.5) circle (2pt);at (2.6,1.5) 2a;at (1.8,1) 2b;]infor n≥0. This follows from Lemma <ref>, using a similar argument as in the proof of Proposition <ref>.Note that in this formula, we are still left with clockwise bubbles on the left side of a downward strand, but with fewer dots on it. Hence the formula may be applied inductively in order to move all the bubbles to the rightmost part of the diagram.§ DIAGRAMMATIC LEMMAS This section contains some technical computations to derive relations between diagrams consisting of up and down strands. These relations allow us to find a generating set ofin Section 6.§.§ Differential degree zero part ofThe differential degree zero part ofconsists of elements {n}_n∈ℤ. First, we have the following basic fact. <cit.>We have2n≅ 0for any n∈ℤ. 
By Proposition <ref>, the proof in the Hecke-Clifford algebra applies here as well. The elements of satisfy the following relations. The following commutators are zero for all non-negative integers n,m: * [n,m]=0,* [-n,-m]=0,* [2n,-2n]=0. Parts (1) and (2) follow from the fact that similarly oriented strands can be split apart when they cross twice. Part (3) follows immediately from Proposition <ref>. To obtain a copy of the twisted Heisenberg algebra in the , we need to look at commutators between elements with odd numbers of oppositely oriented strands. We have, for any n,m ∈ℤ^≥ 0, [2n+1,-(2m+1)]=δ_n,m(-2(2n+1)). First note that <cit.> and <cit.> hold in our twisted case with a small modification, since all the arguments in their proofs use the fact that the resolution terms contain left twist curls, hence are zero. There are extra resolution terms with hollow dots due to relation (<ref>), but a diagram containing a left twist curl and two hollow dots is still zero. The only modification comes in the case m=n, where we get two copies of the counter-clockwise bubble instead of one, since two hollow dots on a counter-clockwise bubble end up canceling each other without changing the sign of the diagram. We immediately get that when m≠ n, our commutator is zero since we have no solid dots. Therefore we have h_2n+1h_-(2m+1) =[[baseline=(current bounding box.center),scale=0.8][->] (3.2,0) .. controls (3.2,1.25) and (0,.25) .. (0,2)node[pos=0.85, shape=coordinate](X); [->] (0,0) .. controls (0,1) and (.8,.8) .. (.8,2); [->] (.8,0) .. controls (.8,1) and (1.6,.8) .. (1.6,2); [->] (2.4,0) .. controls (2.4,1) and (3.2,.8) .. (3.2,2);at (1.6,.35) …;at (2.4,1.65) …; [baseline=(current bounding box.center),scale=0.8][<-] (3.2,0) .. controls (3.2,1.25) and (0,.25) .. (0,2)node[pos=0.85, shape=coordinate](X); [<-] (0,0) .. controls (0,1) and (.8,.8) .. (.8,2); [<-] (.8,0) .. controls (.8,1) and (1.6,.8) .. (1.6,2); [<-] (2.4,0) .. controls (2.4,1) and (3.2,.8) .. (3.2,2);at (1.6,.35) …;at (2.4,1.65) …;] = [[baseline=(current bounding box.center),scale=0.8](3,0) .. controls ++(0,1.25) and ++(0,-1.75) .. (0,1.5);(0,0) .. controls ++(0,1) and ++(0,-.7) .. (.6,1.5);(.6,0) .. controls ++(0,1) and ++(0,-.7) .. (1.2,1.5);(1.8,0) .. controls ++(0,1) and ++(0,-.7) .. (2.4,1.5);(2.4,0) .. controls ++(0,1) and ++(0,-.7) .. (3,1.5);at (1.2,.35) …;at (1.8,1.15) …;(6,0) .. controls ++(0,1.25) and ++(0,-1.75) .. (3.6,1.5) ;(3.6,0) .. controls ++(0,1) and ++(0,-.7) .. (4.2,1.5);(4.2,0) .. controls ++(0,1) and ++(0,-.7) .. (4.8,1.5);(5.4,0) .. controls ++(0,1) and ++(0,-.7) .. (6,1.5);at (4.8,-.65) …;at (5.4,1.85) …; [blue, dotted] (-0.4,0) – (6.4,0); [blue, dotted] (-0.4,1.5) – (6.4,1.5); [->](3,1.5) .. controls ++(0,0.5) and ++(0,-.5) .. (3.6,2.5); (3.6,1.5) .. controls ++(0,1) and ++(0,-1) .. (0,2.5);[->](2.4,1.5) .. controls ++(0,0.5) and ++(0,-.5) .. (3,2.5);[->](1.2,1.5) .. controls ++(0,0.5) and ++(0,-.5) .. (1.8,2.5);[->](.6,1.5) .. controls ++(0,0.5) and ++(0,-.5) .. (1.2,2.5);[->](0,1.5) .. controls ++(0,0.5) and ++(0,-.5) .. (.6,2.5); (4.2,1.5) – (4.2,2.5); (4.8,1.5) – (4.8,2.5); (6,1.5) – (6,2.5); (2.4,0) .. controls ++(0,-.5) and ++(0,.5) .. (3,-1);(1.8,0) .. controls ++(0,-.5) and ++(0,.5) .. (2.4,-1);(.6,0) .. controls ++(0,-.5) and ++(0,.5) .. (1.2,-1);(0,0) .. controls ++(0,-.5) and ++(0,.5) .. (.6,-1);(3,0) .. controls ++(0,-.5) and ++(0,.5) .. (3.6,-1); [,->] (3.6,0) .. controls ++(0,-1) and ++(0,+1) ..
(0,-1);[->] (4.2,0) – (4.2,-1);[->] (5.4,0) – (5.4,-1);[->] (6,0) – (6,-1);] =h_-(2m+1)h_2n+1-2d̅_0(2n+1). Hence [(2n+1),-(2m+1)]=δ_n,m(-2(2n+1)). Therefore the subset A={(2n+1)}_n∈ℤ of the filtration degree zero part of is isomorphic to the twisted Heisenberg algebra via ϕ: 𝔥_tw ∼⟶ A 2n+1/2 ↦1/2-(2n+1). In the W-algebra W^-, we have an isomorphic copy of the twisted Heisenberg algebra as well, given by B={w_2n+1,0}_n∈ℤ, with the isomorphism given by ψ: 𝔥_tw ∼⟶ B 2n+1/2 ↦1/√(2) w_2n+1,0. Therefore we have an isomorphism between the degree zero part of and the degree zero part of W^-: ψ∘ϕ^-1: A ∼⟶ B -(2n+1) ↦√(2) w_2n+1,0.
§.§ Nonzero differential degree part of
We have the following basic facts about diagrams in ^>, which we may copy from the corresponding facts in the trace of the affine Hecke-Clifford algebra because of the triangular decomposition of described in Proposition <ref>. <cit.> In Tr() for any m,n ∈ℤ, we have 2n+1x_1^2m+1 = 0, 2nx_1^2m =0. Hence any diagram containing an odd cycle with an odd number of dots or an even cycle with an even number of dots is zero. Therefore, the difference between the number of strands and the number of solid dots must be odd. This agrees with the fact that in the W-algebra W^-, l-k has to be an odd number for w_l,k. The generators of ^> satisfy the following relations. [[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (1,2); [->](1,0) to (0,2); [fill](0.25,1.5) circle[radius=3pt];,[baseline=(current bounding box.center),scale=0.75] [<-](0,0) to (0,2); ] =-4 [baseline=(current bounding box.center),scale=0.75] [->](0,0) to (0,2); and [[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (1,2); [->](1,0) to (0,2); [fill](0.25,1.5) circle[radius=3pt];,[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (0,2); ] =2 [baseline=(current bounding box.center),scale=0.75] [->] (2,0) .. controls (2,1.25) and (0,.25) .. (0,2); [->] (0,0) .. controls (0,1) and (.8,.8) .. (1,2); [->] (1,0) .. controls (1,1) and (1.8,.8) .. (2,2);[[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (1,2); [->](1,0) to (0,2); [fill](0.75,0.5) circle[radius=3pt];,[baseline=(current bounding box.center),scale=0.75] [<-](0,0) to (0,2); ] =0 and [[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (1,2); [->](1,0) to (0,2); [fill](0.25,1.5) circle[radius=3pt];,[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (0,2); ] =2 [baseline=(current bounding box.center),scale=0.75] [->] (2,0) .. controls (2,1.25) and (0,.25) .. (0,2); [->] (0,0) .. controls (0,1) and (.8,.8) .. (1,2); [->] (1,0) .. controls (1,1) and (1.8,.8) .. (2,2); [[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (0,2); [->](.5,0) to (.5,2);,[baseline=(current bounding box.center),scale=0.75] [<-](0,0) to (0,2); ] =-4 [baseline=(current bounding box.center),scale=0.75] [->](0,0) to (0,2); and [[baseline=(current bounding box.center),scale=0.75][->](0,0) to (0,2); [->](.5,0) to (.5,2);,[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (0,2); ] =0 For m,n∈ℤ with mn>0, we have * [2m x_1, 2n x_1] = 2(n-m) 2n+2mx_1.* [mc_1, nc_1] = -2nc_1. Part (1) is a slight modification of <cit.>. By Proposition <ref>, if at least one of the indices inside the commutator is odd, the commutator will be zero. Hence we will work with the case where both indices are even numbers. The modification we need in <cit.> comes from the fact that we have two resolution terms in our relations (<ref>) and (<ref>).
As a consequence of having an even number of strands in both of our elements, canceling the two empty dots in our resolution terms gives rise to the same sign as the other resolution term; hence we have a coefficient of two in front of our result. Part (2) follows easily from the proof of <cit.>, since moving an empty dot through a crossing is free in , and we get a negative sign from changing the relative heights of hollow dots. For n≥0, we have [± 2n(x_1 + … + x_2n),1] = ± 4n ±(2n+1). First note that we have: [[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (1,2); [->](1,0) to (0,2); [fill](0.25,1.5) circle[radius=3pt];[->](1.5,0) to (1.5,2); ] = [[baseline=(current bounding box.center),scale=0.75][->](0,0) to (1.5,2);[->](1.5,0) to (0,2);[fill](.25,1.63) circle[radius=3pt]; [->](0.75,0) to [out=135,in=225] (0.75,2); ] =[[baseline=(current bounding box.center),scale=0.75][->](0,0) to (1.5,2);[->](1.5,0) to (0,2);[fill](.65,1.2) circle[radius=3pt]; [->](0.75,0) to [out=135,in=225] (0.75,2); ]+[[baseline=(current bounding box.center),scale=0.75] [->] (2,0) .. controls (2,1.25) and (0,.25) .. (0,2); [->] (0,0) .. controls (0,1) and (.8,.8) .. (1,2); [->] (1,0) .. controls (1,1) and (1.8,.8) .. (2,2); ] - [[baseline=(current bounding box.center),scale=0.75] [->] (2,0) .. controls (2,1.25) and (0,.25) .. (0,2); [->] (0,0) .. controls (0,1) and (.8,.8) .. (1,2); [->] (1,0) .. controls (1,1) and (1.8,.8) .. (2,2);(.08,1.5) circle[radius=3pt];(.67, 1.2) circle[radius=3pt]; ]= [[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (1,2); [->](1,0) to (0,2); [fill](0.25,1.5) circle[radius=3pt];[->](-.5,0) to (-.5,2); ] + 2 [[baseline=(current bounding box.center),scale=0.75] [->] (2,0) .. controls (2,1.25) and (0,.25) .. (0,2); [->] (0,0) .. controls (0,1) and (.8,.8) .. (1,2); [->] (1,0) .. controls (1,1) and (1.8,.8) .. (2,2); ]. Hence [2x_1, 1] = 2 3. Next, moving the solid dot in 2x_2 around to the bottom of the crossing using the trace relation gives: [[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (1,2); [->](1,0) to (0,2); [fill](.75,.5) circle[radius=3pt];[->](1.5,0) to (1.5,2); ] = [[baseline=(current bounding box.center),scale=0.75][->](0,0) to (1.5,2);[->](1.5,0) to (0,2); [->](0.75,0) to [out=45,in=-45] (0.75,2);[fill](.9,.75) circle[radius=3pt]; ] =[[baseline=(current bounding box.center),scale=0.75][->](0,0) to (1.5,2);[->](1.5,0) to (0,2); [->](0.75,0) to [out=45,in=-45] (0.75,2);[fill](1.3,.25) circle[radius=3pt]; ]+[[baseline=(current bounding box.center),scale=0.75] [->] (2,0) .. controls (2,1.25) and (0,.25) .. (0,2); [->] (0,0) .. controls (0,1) and (.8,.8) .. (1,2); [->] (1,0) .. controls (1,1) and (1.8,.8) .. (2,2); ] - [[baseline=(current bounding box.center),scale=0.75] [->] (2,0) .. controls (2,1.25) and (0,.25) .. (0,2); [->] (0,0) .. controls (0,1) and (.8,.8) .. (1,2); [->] (1,0) .. controls (1,1) and (1.8,.8) .. (2,2);(1.77,.5) circle[radius=3pt];(1.05, .3) circle[radius=3pt]; ]= [[baseline=(current bounding box.center),scale=0.75] [->](0,0) to (1,2); [->](1,0) to (0,2); [fill](.75,.5) circle[radius=3pt];[->](-.5,0) to (-.5,2); ] +2[[baseline=(current bounding box.center),scale=0.75] [->] (2,0) .. controls (2,1.25) and (0,.25) .. (0,2); [->] (0,0) .. controls (0,1) and (.8,.8) .. (1,2); [->] (1,0) .. controls (1,1) and (1.8,.8) .. (2,2); ]. So, [2(x_1 + x_2), 1 ] = 43. Next, we claim that [2nx_2n, 1] = 2 2n+1 for any n. Indeed, we have: [[baseline=(current bounding box.center),scale=0.75] [->] (3.2,0) .. controls (3.2,1.25) and (0,.25) ..
(a chain of diagram equalities, omitted), where the last equality is obtained by pushing the crossings at the bottom of the diagrams without dots to the top. Indeed, diagrammatic calculations similar to the above give that $[\widetilde{2n}_{x_a},\widetilde{1}]=2\,\widetilde{2n+1}$ for any $1<a\le 2n$. Finally, note that the product $\widetilde{2n}_{x_1}\,\widetilde{1}$ is represented by the dotted $2n$-cycle placed next to a single upward strand (diagram omitted). The dot will slide over the top-leftmost crossing in the same manner as in Equation (<ref>), meaning the correction terms will cancel out. Hence, we have the desired result.

Let $m$ be an odd integer. We have
$$[\widetilde{2}_{x_1+x_2},\,\widetilde{m}]=4m\,\widetilde{m+2}.$$

We compute directly (diagrams omitted). Cancelling the empty dots in the last term results in a change in sign, and both of the latter diagrams are $(m+2)$-cycles, contributing $-2\,\widetilde{m+2}$. Sliding the solid dot in the first diagram all the way to the left results in $m$ total crossing resolutions, each of which yields a term of $-2\,\widetilde{m+2}$. Altogether,
$$\widetilde{m}\,\widetilde{2}_{x_1}=\widetilde{2}_{x_1}\,\widetilde{m}-2m\,\widetilde{m+2},$$
and hence $[\widetilde{2}_{x_1},\widetilde{m}]=2m\,\widetilde{m+2}$. A similar computation gives that $[\widetilde{2}_{x_2},\widetilde{m}]=2m\,\widetilde{m+2}$, giving the desired result.

We have
$$[\widetilde{2n}_{x_1+x_2+\cdots+x_{2n}},\,\widetilde{-(2m+1)}]=\begin{cases}-4(2m+1)\,\widetilde{2n-2m-1}&\text{if } n>m\ge 1,\\[2pt] 0&\text{if } n=m\ge 1,\\[2pt] -2(2m+1)\,\widetilde{2n-2m-1}&\text{if } 1\le n<m.\end{cases}$$

We follow the methods of <cit.>, substituting our new relations as necessary. As in that case, let $\beta_n=\widetilde{2n}_{x_1}$ and $\alpha_m=\widetilde{2m+1}_{x_1}$, and proceed by induction on $m$. When $m=1$, we can compute directly (diagrams omitted): resolving the crossing by relation (<ref>) produces two terms, and the trailing terms arising from relation (<ref>) have the same sign after cancelling the empty dots, and thus add together, giving a coefficient of $2$ on the second term. We claim that the diagram in the second term is $\widetilde{2n-1}$. Indeed, sliding the dot gives
$$d_{0,0}\,\widetilde{2n-1}+d_{0,1}\,\widetilde{2n-1}=\widetilde{2n-1}$$
by relations (<ref>) and (<ref>). Now, sliding the solid dot over the crossing on the right-hand side of Equation (<ref>) gives two further terms, where the trailing terms arising from relation (<ref>) again have the same sign after cancelling the empty dots, and thus add together. We can use the trace relation to slide the top cup in the second term to the bottom; after simplification, this term is therefore equal to $\widetilde{2n-1}$. The first term is equal to $\beta_n\alpha_{-1}$ as in <cit.>. Thus the two orderings of the product differ exactly by the term $4\,\widetilde{2n-1}$, as desired. The base case of the induction is proved. The induction step follows from examination of the Jacobi identity, exactly as in <cit.>, using our Lemma <ref> in place of <cit.>.

Let $n\in\mathbb{Z}$. We have
$$[\widetilde{1}_{x_1^2},\,\widetilde{2n-1}]=2\,\widetilde{2n}_{x_1+\cdots+x_{2n}}+2\,\widetilde{2n}_{x_2+\cdots+x_{2n-1}}.$$

This is a straightforward diagrammatic calculation similar to Lemmas <ref> and <ref> (diagrams omitted). Sliding the dots all the way to the right side of the diagram results in $2(2n-1)$ resolution terms. Each of these resolution terms contains a $2n$-cycle and a single solid dot: there are $2$ resolution terms containing a solid dot on the first strand, $2$ containing a solid dot on the last strand, and $4$ resolution terms with a dot on each other strand. All empty dots cancel in such a way that no resolution terms cancel with each other. The result follows.

The following lemmas will allow us to generate bubbles with arbitrary numbers of dots using just $\widetilde{\pm1}_{x_1^2}$. Below, $E_{a,b}$ denotes the figure-eight diagram with $a$ dots on its upper loop and $b$ dots on its lower loop, and $d_{2j}$ (resp. $\bar d_{2i}$) denotes the clockwise (resp. counterclockwise) bubble with the indicated number of dots.

We have
$$\sum_{a+b=2n-1}E_{a,b}=\sum_{i+j=n-1}(1+2j)\,\bar d_{2i}\,d_{2j}.$$

We compute:
$$\sum_{a+b=2n-1}E_{a,b}=E_{2n-1,0}+E_{2n-2,1}+E_{2n-3,2}+\cdots+E_{0,2n-1}=E_{2n-1,0}+2E_{2n-3,2}+2E_{2n-5,4}+\cdots+2E_{1,2n-2},$$
because we have
$$E_{2n-2i,\,2i-1}=E_{2n-2i-1,\,2i}.$$
Moreover, we can decompose these figure eights into a linear combination of products of two bubbles using dot slide relations (<ref>) and (<ref>) as follows:
$$E_{2n-2a-1,\,2a}=\sum_{\substack{i+j=n-1\\ j\ge a}}\bar d_{2i}\,d_{2j}.$$
Combining these results, we get that
$$\sum_{a+b=2n-1}E_{a,b}=\sum_{i+j=n-1}(1+2j)\,\bar d_{2i}\,d_{2j}.$$
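For concreteness, the first two instances of this identity (our own substitution into the formula above, not an additional computation) read:
$$\sum_{a+b=1}E_{a,b}=E_{1,0}+E_{0,1}=\bar d_0\,d_0,\qquad \sum_{a+b=3}E_{a,b}=E_{3,0}+E_{2,1}+E_{1,2}+E_{0,3}=\bar d_2\,d_0+3\,\bar d_0\,d_2.$$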
We have
$$[\widetilde{1}_{x_1^{2a}},\,\widetilde{-1}_{x_1^{2b}}]=-2\,\bar d_{2(a+b)}-\sum_{i+j=a+b-1}(2+4j)\,\bar d_{2i}\,d_{2j}\qquad\text{for } a,b\in\mathbb{Z}_{\ge0}.$$

We compute (diagrams omitted): resolving the crossings between the two oppositely oriented dotted strands produces, besides the reordered product, the figure-eight correction terms $-2\sum_{j=0}^{2a-1}E_{j,\,2(a+b)-1-j}$ and $-2\sum_{i=0}^{2b-1}E_{2a+i,\,2b-1-i}$, which combine into a single sum, together with a counterclockwise bubble:
$$\widetilde{1}_{x_1^{2a}}\,\widetilde{-1}_{x_1^{2b}}=\widetilde{-1}_{x_1^{2b}}\,\widetilde{1}_{x_1^{2a}}-2\,\bar d_{2(a+b)}-2\sum_{j=0}^{2(a+b)-1}E_{j,\,2(a+b)-1-j}.$$
Applying Lemma <ref> to the sum of figure eights, therefore
$$[\widetilde{1}_{x_1^{2a}},\,\widetilde{-1}_{x_1^{2b}}]=-2\,\bar d_{2(a+b)}-\sum_{i+j=a+b-1}(2+4j)\,\bar d_{2i}\,d_{2j}.$$

§ ALGEBRA ISOMORPHISM

In this section, we will study the structure of $\operatorname{Tr}(\mathcal{H}_{tw})$, first as a vector space and then as an algebra. We show that $\operatorname{Tr}(\mathcal{H}_{tw})$ has a triangular decomposition into two copies of the trace of the degenerate affine Hecke–Clifford algebra $A_n$ and a polynomial algebra. We then describe a generating set for $\operatorname{Tr}(\mathcal{H}_{tw})$, which allows us to define the algebra homomorphism to $W^-$. Finally, we prove that this homomorphism is an isomorphism.

§.§ Trace of $\mathcal{H}_{tw}$ as a vector space

Let $m,n\ge0$ and define $J_{m,n}$ to be the 2-sided ideal in $\operatorname{End}_{\mathcal{H}_{tw}}(P^mQ^n)$ generated by diagrams which contain at least one arc connecting a pair of upper points. There exists a split short exact sequence
$$0\to J_{m,n}\to\operatorname{End}_{\mathcal{H}_{tw}}(P^mQ^n)\to(A_m)^{\mathrm{op}}\otimes A_n\otimes\mathbb{C}[d_0,d_2,d_4,\dots]\to0.$$

In $\operatorname{End}_{\mathcal{H}_{tw}}(P^mQ^n)$, due to the middle diagram in relation (<ref>), we can assume our diagrams have no crossing between oppositely oriented strands. Taking the quotient $\operatorname{End}_{\mathcal{H}_{tw}}(P^mQ^n)/J_{m,n}$ kills diagrams with cups connecting two upper points, and those with caps connecting two lower points. Therefore we are left with diagrams, possibly with bubbles, which have no caps or cups and have crossings only among like-oriented strands.

Note that in the quotient $\operatorname{End}_{\mathcal{H}_{tw}}(P^mQ^n)/J_{m,n}$, the diagram in relation (<ref>) simplifies to the identity on a pair of oppositely oriented strands, and therefore we can move the bubbles to the rightmost part of our diagrams for free. This gives us a short exact sequence
$$0\to J_{m,n}\to\operatorname{End}_{\mathcal{H}_{tw}}(P^mQ^n)\to\operatorname{End}_{\mathcal{H}_{tw}}(P^m)\otimes\operatorname{End}_{\mathcal{H}_{tw}}(Q^n)\otimes\operatorname{End}_{\mathcal{H}_{tw}}(\mathbb{1})\to0.$$
By <cit.>, we have that $\operatorname{End}_{\mathcal{H}_{tw}}(P^m)$ is isomorphic to $(A_m)^{\mathrm{op}}$ and that $\operatorname{End}_{\mathcal{H}_{tw}}(Q^n)$ is isomorphic to $A_n$. By Proposition <ref>, it follows that $\operatorname{End}_{\mathcal{H}_{tw}}(\mathbb{1})$ is isomorphic to $\mathbb{C}[d_0,d_2,d_4,\dots]$.
Hence the result follows.

If $f,g\in A_n$ are such that $fg=1$, then $f,g\in\mathcal{C}\ell_n\rtimes\mathbb{C}[S_n]\subset A_n$.

There is an $\mathbb{N}$-filtration on $A_n$ given by $\deg(x_i)=1$ for $i\in\{1,\dots,n\}$, while the other generators have degree zero. Under this filtration, the degree zero part of $A_n$ is the semidirect product $\mathcal{C}\ell_n\rtimes\mathbb{C}[S_n]$. Therefore, in the associated graded object, we see that if $fg=1$, then $\deg(\operatorname{gr}(f)\operatorname{gr}(g))=\deg(\operatorname{gr}(f))+\deg(\operatorname{gr}(g))=\deg(1)=0$, hence $\operatorname{gr}(f)$ and $\operatorname{gr}(g)$ lie in the degree zero part. Therefore $f,g\in\mathcal{C}\ell_n\rtimes\mathbb{C}[S_n]$.

The indecomposable objects of $\mathcal{H}_{tw}$ are of the form $P^mQ^n$ for $m,n\in\mathbb{Z}_{\ge0}$.

First, note that if $QP$ appears in an object, that object can be decomposed into more components using the diagram in relation (<ref>). Hence all indecomposable objects must be of the form $P^mQ^n$. On the other hand, to see that every sequence of the form $P^mQ^n$ is an indecomposable object, we will show that any idempotent in $\operatorname{End}(P^mQ^n)$ has to be the identity.

Let $f,g$ be two maps as mentioned in Lemma <ref>. Note that $gf$ is an idempotent, since $(gf)(gf)=g(fg)f=gf$. Since we had the split short exact sequence
$$0\to J_{m,n}\to\operatorname{End}_{\mathcal{H}_{tw}}(P^mQ^n)\to\operatorname{End}(P^m)\otimes\operatorname{End}(Q^n)\otimes\operatorname{End}(\mathbb{1})\to0$$
in Lemma <ref>, we know that the maps $f$ and $g$ decompose into $(f_1,f_2)$ and $(g_1,g_2)$, where $f_1,g_1:P^m\to P^m$ and $f_2,g_2:Q^n\to Q^n$. Now $g_1f_1$ is the identity map in $\operatorname{End}(P^m)$, and by the above lemma $g_1,f_1\in\mathcal{C}\ell_m\rtimes\mathbb{C}[S_m]$. Similarly, $f_2,g_2\in\mathcal{C}\ell_n\rtimes\mathbb{C}[S_n]$. But in $\mathcal{C}\ell_m\rtimes\mathbb{C}[S_m]$, $g_1f_1=1$ implies that $f_1g_1=1$ as well. To see this, consider the diagrams corresponding to $g_1$ and $f_1$, which consist of a permutation and some hollow dots on top. After composing these diagrams, we can collect all the hollow dots at the top, since hollow dots can pass through crossings for free, possibly gaining a sign. Furthermore, each strand has an even number of hollow dots, since this composition is the identity map. So the hollow dots cancel with each other. This shows that the corresponding permutations of $f_1$ and $g_1$ are inverses of each other, and in particular they commute. Therefore $f_1g_1=1$. Similarly, $f_2g_2=1$. Thus we have that $fg=1$.

We have the triangular decomposition of $\operatorname{Tr}(\mathcal{H}_{tw})$:
$$\operatorname{Tr}(\mathcal{H}_{tw})\cong\bigoplus_{m,n\in\mathbb{Z}_{\ge0}}\big(\operatorname{Tr}(A_m)^{\mathrm{op}}\otimes\operatorname{Tr}(A_n)\otimes\mathbb{C}[d_0,d_2,d_4,\dots]\big).$$

As shown in <cit.>, to find $\operatorname{Tr}(\mathcal{H}_{tw})$, it is enough to consider the direct sum over indecomposable objects of endomorphism spaces of objects of $\mathcal{H}_{tw}$. Let $I=\operatorname{span}_{\mathbb{C}}\{fg-gf\}$, where $f:x\to y$ and $g:y\to x$ for $x,y$ objects of a $\mathbb{C}$-linear category. Therefore by Lemma <ref> we have
$$\operatorname{Tr}(\mathcal{H}_{tw})\cong\Big(\bigoplus_{m,n\in\mathbb{Z}_{\ge0}}\operatorname{End}_{\mathcal{H}_{tw}}(P^mQ^n)\Big)/I.$$
By Lemma <ref>, this gives us
$$\operatorname{Tr}(\mathcal{H}_{tw})\cong\Big(\bigoplus_{m,n\in\mathbb{Z}_{\ge0}}\big((A_m)^{\mathrm{op}}\otimes A_n\otimes\mathbb{C}[d_0,d_2,d_4,\dots]\big)\oplus J_{m,n}\Big)/I.$$
Recall that the ideal $J_{m,n}$ is generated by diagrams containing at least one cup connecting two upper points. Therefore, the diagrams in $J_{m,n}$ must also contain caps, since they are dealing with endomorphisms. Using the trace relation and the relations in $\mathcal{H}_{tw}$, we can express the elements of $J_{m,n}$ as direct sums of endomorphisms of $P^{m'}Q^{n'}$ for $m'\le m$ and $n'\le n$. Hence we have
$$\operatorname{Tr}(\mathcal{H}_{tw})\cong\bigoplus_{m,n\in\mathbb{Z}_{\ge0}}\big(\operatorname{Tr}(A_m)^{\mathrm{op}}\otimes\operatorname{Tr}(A_n)\otimes\mathbb{C}[d_0,d_2,d_4,\dots]\big)\cong\Big(\bigoplus_{m,n\in\mathbb{Z}_{\ge0}}\big(\operatorname{Tr}(A_m)^{\mathrm{op}}\otimes\operatorname{Tr}(A_n)\big)\Big)\otimes\mathbb{C}[d_0,d_2,d_4,\dots].$$

§.§ Generators of the algebra

The following gives a generating set for $\operatorname{Tr}(\mathcal{H}_{tw})$ as an algebra.

The algebra $\operatorname{Tr}(\mathcal{H}_{tw})$ is generated by $\widetilde{-1}$, $\widetilde{\pm2}_{x_1+x_2}$, and $d_0+d_2$.

First, Proposition <ref> implies that $\widetilde{1}$ and $(d_0+d_2)$ allow us to generate a differential degree two element $\widetilde{1}_{x_1^2}$; since all relations in $\mathcal{H}_{tw}$ are local, we can evaluate the commutator $[\widetilde{1}_{x_1^2},(d_0+d_2)]$ by moving the dot to the bottom of the upward strand and sliding the bubbles over the upper portion.
We can therefore apply Lemma <ref> repeatedly to show that $\operatorname{ad}(d_0+d_2)^n\,\widetilde{1}$ has a leading term of $\widetilde{1}_{x_1^{2n}}$.

By Lemma <ref>, the elements $\widetilde{-1}$ and $\widetilde{2}_{x_1+x_2}$ are sufficient to generate $\widetilde{2m+1}$ for all integers $m>0$. Then we can generate $\widetilde{2n}_{x_1+\cdots+x_{2n}}$ from $\widetilde{1}_{x_1^2}$ and $\widetilde{2m+1}$ by using Lemma <ref>.

By Lemma <ref>, $\widetilde{-1}$ and $\widetilde{2n}_{x_1+x_2+\cdots+x_{2n}}$ allow us to generate $\widetilde{2r+1}$ for all integers $r$. Proposition <ref> implies that all elements with nonzero rank degree can be written as a sum of elements of the form $\widetilde{\pm n}_{x_1^{\ell}c_1^{k}}$. By Propositions <ref> and <ref>, all elements of this form except for the ones generated in the preceding paragraphs are $0$ in $\operatorname{Tr}(\mathcal{H}_{tw})$, so we have generated all elements of nonzero rank degree.

Finally, Lemma <ref> allows us to generate $d_{2n}$, applying Lemma <ref> to split up the $d_{2n}$ terms.

§.§ The isomorphism

There is an obvious isomorphism of vector spaces between the Fock space representations of $\operatorname{Tr}(\mathcal{H}_{tw})$ and $W^-$:
$$\phi: V=\mathbb{C}[\widetilde{1},\widetilde{2},\dots]\to\mathbb{C}[w_{-1,0},w_{-2,0},\dots]=\mathcal{V}_{1,0}.$$
Recall that each algebra acts faithfully on its Fock space representation.

The map $\phi$ in Equation (<ref>) commutes with the action of the twisted Heisenberg subalgebras in $V$ and $\mathcal{V}_{1,0}$, i.e.:
$$\phi(\widetilde{r}\,v)=\sqrt{2}\,w_{-r,0}\,\phi(v).$$

The vector space realizations of $V$ and $\mathcal{V}_{1,0}$ in Equation (<ref>) imply that the action of $\widetilde{r}$ on $V$ is simply the adjoint action of $\widetilde{r}$ on the twisted Heisenberg subalgebra, and the action of $w_{-r,0}$ on $\phi(v)$ is the adjoint action of $w_{-r,0}$ on $(W^-)^-$. The Lemma follows from our computation of these twisted Heisenberg relations in Propositions <ref> and <ref>.

For any $v\in V$ we have $\phi((d_0+d_2)v)=-2w_{0,3}\,\phi(v)$.

Propositions <ref> and <ref> give that $w_{0,3}$ maps $w_{-1,0}$ to an element with leading term $w_{-1,2}$, and $(d_0+d_2)$ maps $\widetilde{1}$ to an element with leading term $\widetilde{1}_{x_1^2}$. Comparison of the actions of these terms on the twisted Heisenberg subalgebras on either side gives that their images in the endomorphisms of the Fock space are identical.

For any $v\in V$ we have $\phi(\widetilde{\pm2}_{x_1+x_2}\,v)=2\sqrt{2}\,(w_{\mp2,1}+w_{\mp2,0})\,\phi(v)$.

This follows from comparison of Lemma <ref> and Proposition <ref>.

Now extend $\phi$ to a map
$$\Phi:\operatorname{Tr}(\mathcal{H}_{tw})\longrightarrow W^-/\langle w_{0,0},\,C-1\rangle$$
by mapping
$$\widetilde{1}\mapsto\sqrt{2}\,w_{-1,0},\qquad \widetilde{\pm2}_{x_1+x_2}\mapsto2\sqrt{2}\,(w_{\mp2,1}+w_{\mp2,0}),\qquad d_2+d_0\mapsto-2w_{0,3},$$
and extending algebraically, i.e.
$$\Phi(a_1\cdots a_k)=\Phi(a_1)\cdots\Phi(a_k)$$
for generators $a_1,\dots,a_k$ of $\operatorname{Tr}(\mathcal{H}_{tw})$.

The map $\Phi$ above is well defined.

Suppose $A\in\operatorname{Tr}(\mathcal{H}_{tw})$ has two representations in terms of generators, $A=a_{i_1}\cdots a_{i_k}=a_{j_1}\cdots a_{j_\ell}$. Then $a_{i_1}\cdots a_{i_k}.V=a_{j_1}\cdots a_{j_\ell}.V$, so applying $\Phi$ gives $\Phi(a_{i_1}\cdots a_{i_k}).\mathcal{V}_{1,0}=\Phi(a_{j_1}\cdots a_{j_\ell}).\mathcal{V}_{1,0}$. Hence $\Phi(a_{i_1}\cdots a_{i_k})=\Phi(a_{j_1}\cdots a_{j_\ell})$ by the faithfulness of the Fock space representation for $W^-$.

The map $\Phi$ is an isomorphism of algebras.

We immediately have that $\Phi$ is surjective, because it maps generators to generators. Thus, it remains to show that $\Phi$ is injective. Let $A:=a_{i_1}\cdots a_{i_k}\in\operatorname{Tr}(\mathcal{H}_{tw})$ and assume that $\Phi(A).\mathcal{V}_{1,0}=0$. Then $\Phi(A)=0$ by the faithfulness of the representation. But then $\Phi(a_{i_1})\cdots\Phi(a_{i_k}).\mathcal{V}_{1,0}=0$. Then, by Lemmas <ref>, <ref>, and <ref>, we have $\Phi(a_{i_1})\cdots\Phi(a_{i_k}).\mathcal{V}_{1,0}=\phi(a_{i_1}\cdots a_{i_k}.V)=\phi(A.V)=0$. But $\phi$ is an isomorphism, so this implies that $A.V=0$. Hence $A=0$ by the faithfulness of the Fock space representation of $\operatorname{Tr}(\mathcal{H}_{tw})$.
http://arxiv.org/abs/1702.08108v3
{ "authors": [ "Can Ozan Oğuz", "Michael Reeks" ], "categories": [ "math.QA", "math.RT", "81R10, 20C08, 17B65, 18D10" ], "primary_category": "math.QA", "published": "20170226225607", "title": "Trace of the twisted Heisenberg Category" }
http://arxiv.org/abs/1702.08081v2
{ "authors": [ "Jiro Akahori", "Xiaoming Song", "Tai-Ho Wang" ], "categories": [ "q-fin.CP" ], "primary_category": "q-fin.CP", "published": "20170226203706", "title": "Probability density of lognormal fractional SABR model" }
Hajós-like theorem for signed graphs

Yingli Kang (Paderborn Institute for Advanced Studies in Computer Science and Engineering and Institute for Mathematics, Paderborn University, Warburger Str. 100, 33102 Paderborn, Germany; yingli@mail.upb.de; Fellow of the International Graduate School “Dynamic Intelligent Systems”)
==========================================

Artificial neural networks can be trained with relatively low-precision floating-point and fixed-point arithmetic, using between one and 16 bits. Previous works have focused on relatively wide-but-shallow, feed-forward networks. We introduce a quantization scheme that is compatible with training very deep neural networks. Quantizing the network activations in the middle of each batch-normalization module can greatly reduce the amount of memory and computational power needed, with little loss in accuracy.

§ INTRODUCTION

To improve accuracy, deeper and deeper neural networks are being trained, leading to seemingly ever-improving performance in computer vision, natural language processing, and speech recognition. Three techniques have been key to usefully increasing the depth of networks: rectified-linear units (ReLU) <cit.>, batch normalization <cit.>, and residual connections <cit.>. These techniques have fulfilled the promise of deep learning, allowing networks with a large number of relatively narrow layers to achieve high accuracy. The experiments that established the usefulness of these techniques were carried out using 32-bit precision floating-point arithmetic.

Network architectures are often compared in terms of the number of parameters in the network and the number of FLOPs per test evaluation. Memory requirements, either at training time or at test time, are rarely explicitly taken into account, even though they often seem to be a limiting factor in network design. ResNets double the number of feature planes per layer each time the linear size of the hidden layers is down-scaled by a factor of two, in order to keep the computational cost per layer constant. However, each down-sampling halves the number of hidden units, so most of the network's depth has to come after the input images have been scaled down by at least a factor of 16. Similarly, the placement of batch-normalization blocks in Inception-IV was limited by memory considerations <cit.>. Bandwidth is another reason to take memory into account. Finite data transfer speeds and cache sizes make it harder to make full use of all available processing power.

To improve efficiency—reducing memory and power requirements—networks have been trained with low-precision floating-point arithmetic, fixed-point arithmetic, and even binary arithmetic <cit.>. These experiments have been carried out for relatively shallow networks—generally between three and eight layers. We will focus on much deeper networks.
Well-designed deep networks are more efficient than wide, shallow networks, so we believe that reducing the level of precision is more challenging, but more useful when successful. With this in mind, we have tried to train modern network architectures using limited-precision arithmetic, focusing on the activations halfway through each batch-normalization module: after normalization, before the affine transform.

In Section <ref> we briefly explore the differences in memory consumption for neural networks at test and training time. In Section <ref> we discuss converting complicated floating-point multiplications into simpler integer addition operations. In Section <ref> we recap batch-normalization, splitting it into normalization and affine transform steps.

We focus our experiments on classes of models that are popular in computer vision. However, ideas from computer vision have informed network design in other areas, such as speech recognition <cit.> and neural machine translation <cit.>, and have been extended to three dimensions to process video <cit.>, so low-precision batch-normalized activations should be applicable more broadly. The networks we consider use batch-normalization after each learnt layer, before applying an activation function. We explain this in Section <ref>.

Attempts to use lower-precision arithmetic have often combined a range of levels of precision, depending on the location. In general, lower precision can be used when storing the network weights and activations during the forward pass, while higher precision is needed during accumulation steps, e.g. for matrix multiplication, backpropagation and gradient descent. Storing the activations accurately seems to be crucial. From a full-precision AlexNet/ImageNet top-5 accuracy baseline of 80.2%, binarizing only the weights reduces the accuracy only marginally, to 79.4%, but binarizing the activations as well reduces the accuracy to 69.2% <cit.>. That is still extremely impressive performance, given the replacement of floating-point arithmetic with much simpler 1-bit XNOR operations, but it clearly cannot be used as a drop-in replacement for existing network designs. Also, it seems unlikely that binary activations are compatible with residual connections. We consider a range of quantization schemes using between two and eight bits; see Section <ref>.

§.§ Test and training memory use

Neural network memory usage is very different during training and testing. At test time, hidden units only need to be stored long enough to calculate the values of any output nodes. For feed-forward networks such as VGG <cit.>, this means storing one hidden layer to calculate the next. For modular networks such as ResNets <cit.>, once the output of a module is calculated, all the internal state can be forgotten. In contrast, DenseNets <cit.> accumulate a relatively large state within the densely connected blocks. The peak memory used when calculating the forward pass for a single image may not seem huge, but it is significant compared to, say, the size of a typical mobile processor's cache.

At training time, much more memory is needed. Many of the activations generated during the forward pass must be stored to efficiently implement backpropagation. Additionally, training is generally carried out with batches of size 64–256, for reasons of computational efficiency and to benefit from batch-normalization.

Quantizing the batch-normalized activations has potential advantages both at training and at test time, reducing memory requirements and computational burden.
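To make the training/test gap concrete, here is a back-of-the-envelope sketch; the layer shapes and batch size are illustrative stand-ins, not any particular architecture from our experiments:

import numpy as np

# Rough activation-memory comparison for backpropagation storage.
layers = [(64, 32, 32), (128, 16, 16), (256, 8, 8)]   # (features, height, width)
batch = 128

def activation_bytes(bits):
    per_image = sum(c * h * w for c, h, w in layers)  # values kept for the backward pass
    return batch * per_image * bits / 8

for bits in (32, 4):
    print(f"{bits:2d}-bit activations: {activation_bytes(bits) / 2**20:.1f} MiB per batch")

At test time only one layer (or one module's state) need be alive at once, so the corresponding figure is smaller by roughly the batch size times the depth.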
§.§ Forward Propagation with fewer multiplications

In addition to saving memory, some low-precision representations also have the useful property of allowing general floating-point multiplication to be replaced with integer addition. In <cit.>, the authors take advantage of the finite range of the tanh activation function; they approximate the output using only 3 bits, with the form $\pm2^k$. Using one of the three bits to store the sign, the other two bits allow $k$ to have range four, i.e. $k\in\{-3,-2,-1,0\}$. They also use a similar trick during backpropagation, but using five bits. In addition to saving memory, the special form in which the activations are stored means that they can be multiplied by the network weights using just integer addition in the exponent, and a 1-bit XOR for the sign bit. This is computationally much simpler than regular floating-point multiplication.

Unlike tanh, rectified linear units have unbounded range, so quantization may be less reliable. Batch-normalization is a double-edged sword: it regularizes the typical range of the activations during the forward pass, but it increases the complexity of the backward pass. For these reasons, we consider a range of quantization schemes, including log-scale ones that allow multiplications to be replaced by addition in the exponent.

§.§ Batch-Normalization

We will briefly review the definition of batch-normalization and its effect on backpropagation. We will separate the normal batch-normalization layer into its two constituent parts: a feature-wise normalization $N$, and a feature-wise, learnt, affine transform $A_{a,b}$. Let $x$ denote a vector of features. During training, the vector is normalized to have mean zero and variance approximately one:
$$N(x)=\frac{x-\operatorname{mean}(x)}{\sqrt{\operatorname{Var}(x)+\varepsilon}}.$$
The affine transform is then applied:
$$BN(x)=A_{a,b}(N(x))=a\cdot N(x)+b.$$
During backpropagation, the gradient is adjusted using the formula ($\nabla x\equiv\partial\,\mathrm{cost}/\partial x$)
$$\nabla x=\frac{\nabla N-\operatorname{mean}(\nabla N)-N\cdot\operatorname{mean}(N\,\nabla N)}{\sqrt{\operatorname{Var}(x)+\varepsilon}}.$$
The normalized values $N(x)$ are good candidates for storing using low precision, for two reasons. Having been normalized, they are easier to store efficiently, as typically the bulk of the values lie within a limited range, say $[-6,6]$. Secondly, the $N(x)$ occur twice in (<ref>), so we need to store them in some form to apply the chain rule.
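The following numpy sketch (our own illustration, not code from the experiments) shows how a batch-normalization module can keep only a quantized copy of $N(x)$ for the backward pass; `quantize` stands for any of the formulae described below, such as the 4-bit log-scale one:

import numpy as np

def bn_forward(x, a, b, quantize, eps=1e-5):
    # x has shape (batch, features); a, b are per-feature parameters.
    var = x.var(axis=0)
    n = (x - x.mean(axis=0)) / np.sqrt(var + eps)
    nq = quantize(n)               # low-precision copy; n itself can be discarded
    return a * nq + b, (nq, var)

def bn_backward(grad_y, a, nq, var, eps=1e-5):
    grad_n = grad_y * a            # back through the affine transform
    # Equation (3), with the stored nq standing in for N(x).
    grad_x = (grad_n - grad_n.mean(axis=0)
              - nq * (nq * grad_n).mean(axis=0)) / np.sqrt(var + eps)
    grad_a = (grad_y * nq).sum(axis=0)
    grad_b = grad_y.sum(axis=0)
    return grad_x, grad_a, grad_b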
§.§ Cromulent networks

We will focus on networks that have a specific form. Many popular network architectures either have this form, or can be modified to have it with minor changes. Consider a network composed of the following types of modules:

– Learnt layers such as fully-connected (Linear) layers, or convolutional layers. For backpropagation, the input needs to be available, but the output does not. We can also consider the identity function to be a `trivial' learnt layer.

– Batch-Normalization: $BN_{a,b}(x)=A_{a,b}(N(x))$. $N(x)$ is needed for backpropagation through both the affine layer and the normalization layer. $N(x)$ is commonly recalculated from $x$ during the backward pass. We are proposing instead just to store a quantized version of $N(x)$.

– The rectified-linear (ReLU) activation function. Either the input or the output is needed for backpropagation.

– `Branch' nodes that have one input, and produce as output multiple identical copies of the input. Nothing needs to be stored for backpropagation; the incoming gradients are simply summed together elementwise.

– `Add' nodes, that take multiple inputs and add them together elementwise. The activations do not need to be stored for backpropagation.

– Average pooling. Nothing needs to be stored for backpropagation. Max-pooling is also allowed if the indices of the maximal inputs within each pooling region are stored.

We will say that the network is cromulent if any forward path through the network has the form
$$\cdots-\star-N-A_{a,b}-\mathrm{ReLU}-\mathrm{Learnt}-\star-N-A_{a,b}-\mathrm{ReLU}-\mathrm{Learnt}-\star-\cdots,$$
with $\star$ denoting either a direct connection, a branch node, an add node or a pooling node. Cromulent networks include VGG-like networks; pre-activated ResNets and WideResNets; and DenseNets, with or without bottlenecks and compression. Some networks, such as Inception V4, SqueezeNets, and FractalNets, are not cromulent, but can be turned into cromulent networks by converting them into pre-activated form, with BN-ReLU-Convolution blocks replacing Convolution-BN-ReLU blocks.

The motivation for our definition of cromulent is to ensure the networks permit a number of optimizations:

– At training time, to do backpropagation it is sufficient to store the $N(x)$, as you can easily recalculate the adjacent Affine-ReLU transformations. The recalculation is very lightweight, compared to the learnt layers and the $N$ part of batch-normalization.

– Combining the Affine-ReLU-Learnt layers into one, so that multiplications are replaced with additions; see Section <ref>.

– For VGG and DenseNets at test time, the calculation of $N(x)$ can be rolled into the preceding learnt layer (as sketched below), meaning the output of the learnt layer can be stored directly in compressed form, saving bandwidth and memory. This is especially useful for DenseNets, which accumulate a large state within each densely connected block.
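A sketch of this test-time folding, under the assumption that the preceding learnt layer is linear with one output per normalized feature:

import numpy as np

# Fold the normalization N into the preceding learnt layer:
# N(Wx + c) = (Wx + c - mean)/sqrt(var + eps) = (W/s)x + (c - mean)/s.
def fold_normalization(weight, bias, mean, var, eps=1e-5):
    s = np.sqrt(var + eps)                    # one scale per output feature
    return weight / s[:, None], (bias - mean) / s

# The folded layer's output is already N(x), so it can be written to
# memory directly in quantized form.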
§.§ Low precision representations

In Table <ref> we define eight quantization formulae; we write $L_b$ for the $b$-bit log-scale formulae. The first seven are similar to ones from <cit.>; however, we apply the formulae immediately after normalization, before the Affine-ReLU layers, so there are three key differences. Firstly, the input can be negative. We therefore use one bit to store the sign of the input—this seems to be more efficient overall, as it saves us from having to store additional information from the forward pass to apply equation (<ref>). Secondly, we can fix the output scale, rather than having to vary the output range according to the level within the network as in their paper. Thirdly, it allows a single normalization to be paired with different affine transformations, which is needed to implement DenseNets.

The first four approximations use a log-scale, using 2, 3, 4 and 5 bits of memory respectively. They are compatible with replacing the multiplications during the forward pass with additions. We also consider three uniform scales, using 4, 5, and 8 bits. The uniform scales provide a different route to more efficient computations: quantizing the network weights with a uniform scale during the forward pass would allow the floating-point multiplications to be replaced with, say, 8-bit integer multiplication.

§.§.§ Four-bit log-scale quantization

First look at $L_4$. Like all of the approximation formulae, $L_4$ has been constructed so that, in a weak sense, $L_4(x)\approx x$ for $x$ in a neighborhood of zero. Its range has size $16=2^4$, consisting of the values
$$\pm\tfrac18,\ \pm\tfrac14,\ \pm\tfrac12,\ \pm1,\ \pm2,\ \pm4,\ \pm8,\ \pm16.$$
In words, to calculate $L_4(x)$, we scale $x$ by a constant factor, take the absolute value to make it positive, take logs to the base 2, round down to the nearest integer, restrict the value to the range $[-3,4]$, invert the log operation by raising 2 to that power, and then multiply by $\operatorname{sign}(x)=\pm1$ to restore symmetry. The choice of $[-3,4]$ for the range of the exponent is a compromise between accuracy near zero and being able to represent values further away from one. If we put a standard Gaussian random variable $X\sim N(0,1)$ into $L_4$, then the mean value is preserved by symmetry, and the standard deviation is preserved thanks to the choice of the constant 1.36. Note that for a floating-point number $x$, calculating $\lfloor\log_2 x\rfloor$ is just a matter of inspecting the exponent.

We can look at $L_4$ as a form of dropout with respect to all but the most significant bit of $x$. Unlike normal dropout, it is applied in the same way during training and testing. To measure how destructive this form of dropout is, we can look at the correlation between $N(X)$ and $L_4(N(X))$. See Figure <ref> and Table <ref>. Even for a moderately heavy-tailed distribution, the output of $L_4$ is highly correlated with the input—by Chebyshev's inequality, at least 99.6% of the $N(x)$ must lie within the range of $L_4$. The variance of $L_4(N(x))$ will generally be close to one.

Replacing multiplications with additions in the exponent requires a little algebra to merge together the Affine-ReLU-Learnt triplet of layers. Let $N(x)_i$ denote the $i$-th element of the output of the normalization operation, let $a$ and $b$ denote the parameters of the affine transform, and let $w$ denote a network parameter. Then
$$\mathrm{ReLU}(BN(x)_i)\cdot w=\begin{cases}(aw)\,L_4(N(x)_i)+(bw)&\text{if } L_4(N(x)_i)>-b/a,\\ 0&\text{otherwise.}\end{cases}$$
We have replaced a number of floating-point multiplications with integer addition and floating-point addition. (Alternatively, this could be implemented with bit-shifting and fixed-point addition.) This is significant, as it would allow networks to be deployed to less powerful, more energy-efficient devices.

During training, we can save a substantial amount of memory by using $L_4(N(x))$ in place of $N(x)$ in equation (<ref>), and by recalculating the Affine-ReLU transformation as needed, rather than saving the result.
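As an illustration (our own sketch, following the verbal description above), $L_4$ and a Monte Carlo check of its behaviour on a standard Gaussian can be written as:

import numpy as np

def l4(x):
    # sign(x) * 2^k with k = floor(log2(1.36|x|)) clamped to [-3, 4].
    k = np.clip(np.floor(np.log2(np.maximum(np.abs(1.36 * x), 2.0**-10))), -3, 4)
    return np.sign(x) * 2.0**k

# For X ~ N(0,1), the correlation with the input and the standard
# deviation of the output should both come out close to 1.
x = np.random.randn(10**6)
print(np.corrcoef(x, l4(x))[0, 1], l4(x).std())

# With activations stored as (sign, k), multiplying by aw in equation (4)
# reduces to adding k to the exponent of aw, plus the additive term bw.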
§.§.§ Other representations

We have also defined a variety of other functions corresponding to low-precision representations. Using three bits, we can represent a subset of the range of $L_4$, specifically $\pm\tfrac12,\pm1,\pm2,\pm4$. Using two bits, we can represent the values $\pm\tfrac{1}{\sqrt2},\pm\sqrt2$. We can still perform the low-precision multiplication trick, by storing the weights multiplied by $\sqrt2$. With 5 bits, we can increase the precision by using a log-scale with base $\sqrt2$; the multiplication trick can be modified by storing $w$ and $w\sqrt2$.

To construct the uniform scales, we had to compromise between accuracy and width of support. The final representation in Table <ref> is an attempt to improve accuracy at the expense of simplicity. The set of output points is less concentrated around zero, allowing it to provide better accuracy further afield. The number 1.29 is chosen to give a range of approximately $[-6,6]$.

§ EXPERIMENTS

We have performed a range of experiments with the CIFAR-10 and ImageNet datasets.

§.§ Fully connected, permutation invariant CIFAR-10

To test low-precision approximations for fully-connected networks, we treated CIFAR-10's 3×32×32 images as unordered collections of 3072 features, without data augmentation, and preprocessed only by scaling to the interval $[-1,1]$. We constructed a range of two-layer, fully connected networks,
$$\mathrm{FC}(2^n)-\mathrm{BN}-\mathrm{ReLU}-\mathrm{FC}(2^n)-\mathrm{BN}-\mathrm{ReLU}-\mathrm{FC}(10)$$
with $n=0,1,\dots,10$, for each of the quantization schemes. We trained the networks for 100 epochs in batches of size 100, with learning rate $10^{-2}$ and Nesterov momentum of 0.9. See Figure <ref>.

The two- and three-bit approximation schemes seem to have a strong regularizing effect during training, with substantially worse recall of the training labels. On the test set the effect is much smaller, and even positive for networks of size 64 to 128, suggesting that some overfitting is being prevented. This suggests that very low precision storage could be useful as a computationally more efficient alternative to regular batch-normalization combined with dropout.
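A PyTorch-style sketch of one of these models (our own reconstruction; the experiments themselves used Torch, and the straight-through gradient for the quantizer is an assumption on our part, not taken from the text):

import torch
import torch.nn as nn

class L4(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        k = torch.floor(torch.log2((1.36 * x).abs().clamp(min=2.0**-10))).clamp(-3, 4)
        return torch.sign(x) * torch.pow(2.0, k)

    @staticmethod
    def backward(ctx, g):
        return g  # straight-through estimator (assumption)

class QuantizedBN(nn.Module):
    """Normalize, quantize N(x), then apply the learnt affine transform."""
    def __init__(self, features):
        super().__init__()
        self.norm = nn.BatchNorm1d(features, affine=False)
        self.a = nn.Parameter(torch.ones(features))
        self.b = nn.Parameter(torch.zeros(features))

    def forward(self, x):
        return self.a * L4.apply(self.norm(x)) + self.b

def fc_model(width):
    return nn.Sequential(
        nn.Linear(3072, width), QuantizedBN(width), nn.ReLU(),
        nn.Linear(width, width), QuantizedBN(width), nn.ReLU(),
        nn.Linear(width, 10))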
§.§ CIFAR-10

Using the fb.resnet.torch[<https://github.com/facebook/fb.resnet.torch>] package, we trained a variety of convolutional networks on CIFAR-10. We trained two VGG-like <cit.> networks with pairs of 3×3 convolutions, followed by 3×3 max-pooling with stride 2, and the number of features doubling each time a max-pooling halves the spatial size, i.e.
$$M(k)-M(2k)-M(4k)-(8k)C4\quad\text{with}\quad M(k):=[kC3-\mathrm{BN}-\mathrm{ReLU}-kC3-\mathrm{MP3/2}-\mathrm{BN}-\mathrm{ReLU}],$$
with $k=32$ and $k=64$. These are relatively shallow networks, but are included for the sake of comparison.

To experiment with deeper networks, we also trained ResNet-20 and ResNet-44 networks <cit.>, WideResNets <cit.> with depth 40 and widening factor 4, and DenseNets <cit.> with growth rate $k=12$ and depths 40 and 100. We used the default learning rate schedule for the VGG and ResNet networks: 164 epochs with learning rate annealing. For WideResNets and DenseNets, we used their customary 200 and 300 epoch learning rate schedules, respectively. All networks were trained with weight decay of $10^{-4}$, and data augmentation in the form of random crops after 4-pixel zero padding, and horizontal flips.

$L_2$ clearly inhibits proper training, at least with the default learning meta-parameters. $L_3$ and $L_4$ perform much better, especially when you consider the reduction in computational complexity of the operations involved. In this set of experiments, the uniform scale quantizations are broadly similar to the log-scale ones. Although not computationally simpler, the $O_4$ results show that substantial reductions can be made in memory usage with minimal loss of accuracy.

§.§ ImageNet

Again using fb.resnet.torch, we trained a number of ResNet models on ImageNet. Models were trained for 90 epochs with weight decay $10^{-4}$, except where noted. The ResNet-18 consists of 8 basic residual blocks; the ResNet-50 consists of 16 bottleneck residual blocks. We also trained a wide ResNet with widening factor 2 and depth 18; all layers after the initial convolution have double the width of a regular ResNet-18. Although the bodies of these networks are cromulent, they begin with the modules
$$64C7/2-\mathrm{BN}-\mathrm{ReLU}-\mathrm{MP3/2},$$
mapping the 3×224×224 input to size 64×56×56. To avoid changing the model definition, we decided to keep the initial BN operation unchanged, using low-precision BN for all subsequent BN layers. See Table <ref>.

Here we find that the uniform scale quantization can inhibit learning, while the other representations do better. Perhaps unsurprisingly, quantization affects the deep ResNet-50 more than the WideResNet-18. The failure of the uniform quantization methods is in stark contrast to the results for CIFAR-10, demonstrating that quantization methods cannot be assumed to be `mostly harmless' without extensive testing on challenging datasets.

The $O_4$ version of ResNet-18 has a top-5 error 0.65 percentage points higher than the 32-bit baseline. To see if performance was suffering due to the use of weight decay, we reduced the decay rate to $10^{-5}$ and trained for ten additional epochs: the top-1/top-5 errors improved to 31.15% / 11.13%, narrowing the gap to 0.37%. This suggests that the low-precision models need less regularization, but are capable of very similar levels of performance.

§.§ Mixed networks with fine-tuning

An alternative to training networks with reduced-precision arithmetic is to take a network trained using 32-bit precision, and then try to compress the network to reduce test-time complexity, for example by pruning small weights and quantizing the remaining weights <cit.>. With this in mind, we try substituting $L_4$-BN and $L_5$-BN for regular batch-normalization in pretrained DenseNet-121 and DenseNet-201 ImageNet networks.

We tried quantizing (i) all normalized activations, (ii) all but the first normalization, and (iii) all the normalizations in the second, third and fourth densely connected blocks (all but the first 14 BN operations). See Tables <ref> and <ref>. The early layers of the network, where the number of feature planes is small, seem to be most sensitive to reductions in accuracy.

We did the same with a ResNet-18. For this experiment we made the network fully cromulent: with reference to Section <ref>, we moved the max-pooling module to immediately after the first convolution. We tried quantizing all the normalization operations, and all but the first normalization. Again, except at the beginning of the network, quantizing the normalized activations preserves most of their functionality.
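A sketch of this substitution (our own illustration; it uses the stored running statistics, inference-style, and the helper names are ours rather than from any library):

import torch
import torch.nn as nn

class QuantizedBN2d(nn.Module):
    """Rebuild a trained BatchNorm2d so the normalized values are quantized
    before the affine transform; uses the stored running statistics."""
    def __init__(self, bn, quantize):
        super().__init__()
        self.register_buffer("mean", bn.running_mean.detach().clone())
        self.register_buffer("std", torch.sqrt(bn.running_var.detach() + bn.eps))
        self.weight = nn.Parameter(bn.weight.detach().clone())
        self.bias = nn.Parameter(bn.bias.detach().clone())
        self.quantize = quantize

    def forward(self, x):
        shape = (1, -1, 1, 1)
        n = (x - self.mean.view(shape)) / self.std.view(shape)
        return self.weight.view(shape) * self.quantize(n) + self.bias.view(shape)

def swap_bn(module, quantize, skip, count=None):
    """Replace BatchNorm2d children, leaving the first `skip` untouched."""
    count = [0] if count is None else count
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            count[0] += 1
            if count[0] > skip:
                setattr(module, name, QuantizedBN2d(child, quantize))
        else:
            swap_bn(child, quantize, skip, count)

For variant (ii), for instance, one would call swap_bn(model, quantizer, skip=1), leaving only the first, most sensitive normalization at full precision.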
Compared to entirely unquantized gradients, mean test errors increased very slightly: from 5.45% to 5.64% for N=12, and from 4.29% to 4.44% for N=32. § IMPLEMENTATION We have looked at the experimental properties of some low precision approximation schemes. The natural way to take advantage of this method is: – At training time, combine the Affine-ReLU-Convolution operations into one function/kernel, so that the memory-reads can be replaced with low precision versions. This could be particularly useful for DenseNets, as the convolutions (or bottleneck projections) tend to have much larger input than output. – At test time, for VGG and DenseNets, also roll the N part of batch-normalization into the Affine-ReLU-Convolution operation, and write the output directly in quantized form. Implementing this efficiently is a moderate engineering challenge—the same is true for all BLAS and convolutional operations. Particularly in the case of DenseNets, being able to read the input from memory in a compressed form is an advantage, as each convolutional layer takes many more activations as input than it produces as output. Storing the 4D hidden-layer tensors in the form height×width×batch×features, rather than the usual batch×features×height×width, could be helpful in terms of efficiently reading contiguous blocks from memory. § CONCLUSION Modern, efficient deep neural networks can be trained with substantially less memory, and nearly all of the multiplication operations can be replaced with additions, all with only very minor loss in accuracy. Moreover, this technique will allow larger networks to be trained in situations where memory is currently a limiting factor, such as 3D convolutional networks for video, and large language models. The first few layers of a network, where the number of feature planes is smaller, seem to be more sensitive to quantization. Mixing the number of bits used, with 5 bits in the lower layers and 3 or 4 bits in the higher layers, might be a good way to balance accuracy and memory footprint. Our experiments have been limited to ConvNets designed for use with floating-point arithmetic, using the existing training procedures without modification. Better results may be possible by designing new network architectures, or by fine-tuning the training procedures. Our definition of cromulent includes a wide range of networks. However, it is not meant to exhaustively identify every opportunity for low-precision batch-normalized activations. For example, in <cit.>, batch-normalization is added to an LSTM recurrent network. LSTM cells are built using sigmoid and tanh activations instead of ReLUs, but much of the internal state could still potentially be stored at reduced precision. To keep our analysis focused, we have not tried to optimize everything at the same time, focusing instead on just the activations. There are almost certainly additional opportunities to improve the computational performance of networks with low-precision batch-normalized activations: – ConvNets are often robust with respect to quantizing the network weights <cit.>. With the network weights restricted to just three values, a ResNet-18 achieved 15.8% validation error on ImageNet <cit.>.
An interesting problem for future work is merging mild weight quantization with low-precision batch-normalized activations whilst minimizing loss of accuracy. – If the activations and network weights are quantized, lower-precision arithmetic might be sufficient for summing the activation×weight terms for each convolutional layer. Some of these optimizations will be immediately useful, particularly the reduced memory overhead. To take full advantage of these techniques will require the development of new, low-power hardware.
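As an illustration of the kind of low-precision activation storage discussed above, the following minimal sketch implements nearest-neighbour quantization onto a hypothetical log-scale codebook; the codebook {0} ∪ {±2^e} and all names here are our own illustrative assumptions, not the exact sets used in the experiments above.

import numpy as np

def log_quantize(x, bits=4, min_exp=-6, max_exp=0):
    # Hypothetical log-scale codebook: {0} union {±2^e : min_exp <= e <= max_exp}.
    exps = np.arange(min_exp, max_exp + 1)
    levels = np.concatenate(([0.0], 2.0 ** exps, -(2.0 ** exps)))
    assert levels.size <= 2 ** bits, "codebook exceeds the bit budget"
    # nearest-neighbour mapping of each activation onto the codebook
    idx = np.argmin(np.abs(x[..., None] - levels), axis=-1)
    return levels[idx]

a = np.random.randn(4, 8)          # toy batch of normalized activations
print(log_quantize(a, bits=4))

Under such a codebook, multiplying a weight by a quantized activation reduces to a signed bit-shift, which is the source of the multiplication-to-addition savings mentioned in the conclusion.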
http://arxiv.org/abs/1702.08231v1
{ "authors": [ "Benjamin Graham" ], "categories": [ "cs.NE", "cs.CV" ], "primary_category": "cs.NE", "published": "20170227111054", "title": "Low-Precision Batch-Normalized Activations" }
jwahn@kaist.ac.kr Department of Physics, KAIST, Daejeon 305-701, Korea. Arbitrary rotation of a qubit can be performed with a three-pulse sequence, for example, ZYZ rotations. However, this requires precise control of the relative phase and timing between the pulses, making it technically challenging in optical implementations on a short time scale. Here we show that any ZYZ rotation can be implemented with a single laser pulse, that is, a chirped pulse with a temporal hole. The hole of this shaped pulse induces a non-adiabatic interaction in the middle of the adiabatic evolution of the chirped pulse, converting the central part of an otherwise simple Z-rotation to a Y-rotation, constructing ZYZ rotations. The result of our experiment performed with shaped femtosecond laser pulses and cold rubidium atoms shows strong agreement with the theory. 32.80.Qk, 42.50.Dv, 42.50.Ex Single-laser-pulse implementation of arbitrary ZYZ rotations of an atomic qubit Han-gyeol Lee, Yunheung Song, and Jaewook Ahn December 30, 2023 =============================================================================== § INTRODUCTION A qubit is the information stored in the quantum state of a two-level system, routinely used as the smallest unit of information processed in the quantum circuit model of quantum computation <cit.>. In order to construct a universal computational gate set, single-qubit rotations about at least two distinct rotational axes are required, as well as a two-qubit gate, e.g., the CNOT gate. Single-qubit rotation gates, such as the Hadamard and Pauli X, Y, and Z gates, have been implemented on numerous physical systems, including photons <cit.>, ions <cit.>, atoms <cit.>, molecules <cit.>, quantum dots <cit.>, and superconducting qubits <cit.>. A sequence of many single-qubit rotations can also be performed with a single arbitrary rotation gate, which simplifies the otherwise complex physical implementation of many distinct rotations in a unified fashion. An arbitrary rotation (of rotation angle ϕ and rotational axis n̂) can be constructed with a minimum of three rotations that correspond to the set of Euler angle rotations: for example, the three rotations in the best-known ZYZ-decomposition are given by ℛ_n̂(ϕ) = ℛ_ẑ(Φ_2)ℛ_ŷ(Θ)ℛ_ẑ(Φ_1), where ℛ represents a rotational transformation, and n̂ and ϕ are respectively given as functions of the three rotation angles Φ_1, Φ_2, and Θ <cit.>. In an optical implementation of two-level system dynamics, Z-rotations use either a time-evolution or a far-detuned excitation <cit.>, and X- or Y-rotations use a resonant area-pulse interaction; both of these, and their combinations, require precise control of the relative phase and timing among the constituent pulsed interactions. In this paper, we show that an arbitrary rotation can, alternatively, be performed with a single laser pulse, when the pulse is programmed to be a chirped pulse with a temporal hole. As discussed in the rest of the paper, a single laser pulse with the given pulse shape can implement ZYZ-decomposed rotations all at once, where the temporal hole in the middle of a chirped pulse induces a strong non-adiabatic evolution, which is a Y-rotation, amid an otherwise monotonic adiabatic evolution, a Z-rotation, due to the chirped pulse. The predicted behavior of the ZYZ-decomposition is experimentally verified with cold atomic qubits and as-programmed femtosecond laser pulses. § THEORETICAL ANALYSIS We consider the dynamics of a two-level atom, driven by a chirped laser pulse with a temporal hole.
The electric field of the pulse, where both the main pulse and the hole are assumed to be of Gaussian pulse shape, is given by E(t) = A_0(e^-t^2/τ^2-ke^-t^2/τ_h^2)cos(ω_0t+α t^2+φ), where A_0 is the amplitude, τ and τ_h are respectively the widths of the main pulse and the hole, k (0≤ k≤ 1) is the depth of the hole, α is the linear chirp parameter, and φ is the carrier phase (see Appendix A). The contribution of the carrier phase is a simple Z-rotation, i.e. ℛ_ẑ(φ), so we will first consider the φ=0 case. When the base vectors are defined by |g⟩ and |e⟩ (of respective energies -ħω_0/2 and ħω_0/2), the Hamiltonian in the adiabatic basis <cit.> (see Appendix B), after the rotating wave approximation, is given by H_A= ħ/2[[ λ_- -2iϑ̇; 2iϑ̇ λ_+ ]], where λ_± = ±√(Ω^2+Δ^2) are the eigenvalues, for the Rabi frequency Ω and the instantaneous detuning Δ=-2α t, and ϑ is the adiabatic mixing angle defined by 2ϑ = tan^-1(Ω/Δ) for 0 ≤ϑ≤π/2. However, with Eq. (<ref>), the phase of the state diverges at t→±∞, so we use an additional transformation 𝒯_Δ=exp(i∫_0^t T_Δ dt'/ħ) with T_Δ = ħ/2[[ -|Δ| 0; 0 |Δ| ]] to remove this rapidly oscillating phase. The resulting Hamiltonian that represents the dynamics of the adiabatic state in the “detuning” interaction picture is given by H_Δ= ħ/2[[ |Δ|-√(Δ^2+Ω^2) -2iϑ̇e^-i|Δ|/2; 2iϑ̇e^i|Δ|/2 √(Δ^2+Ω^2)-|Δ| ]], and the corresponding base vectors are |0(t)⟩_Δ and |1(t)⟩_Δ. Figure <ref> shows the behavior of the mixing angle ϑ, compared with the Rabi frequency Ω for various hole depths k (first column), and the corresponding Bloch vector evolution in the “detuning” interaction picture (second column) and in the “atomic” interaction picture (third column). The pulse without a hole in Fig. <ref>(a) shows slow change in ϑ and relatively large Ω, suggesting that the adiabatic condition, 2ϑ̇≪ |λ_+-λ_-|, is satisfied at all times. So, a pulse without a hole induces an adiabatic evolution, i.e., a Z-rotation in the adiabatic basis, as depicted in Fig. <ref>(d). On the other hand, the pulses with a hole in Figs. <ref>(b) and <ref>(c) exhibit abrupt change in ϑ near t=0. Therefore, the overall dynamics can be decomposed into sub-dynamics in three different time zones: t<-τ_h, -τ_h<t<τ_h, and t>τ_h, as shown in Figs. <ref>(e) and <ref>(f). In the central time zone (-τ_h<t<τ_h), the hole makes Ω small and rapid change in ϑ occurs. Since the Hamiltonian is dominated by the non-adiabatic coupling (the off-diagonal components), it is approximately given by H_Δ(t≈ 0) ≈ħ/2[[ 0 -2iϑ̇; 2iϑ̇ 0 ]], which corresponds to the Y-rotation with rotation angle Θ ≈ ∫_-τ_h^τ_h 2ϑ̇ dt = 2[ϑ(τ_h)-ϑ(-τ_h)]. In both side regions (t<-τ_h and t>τ_h), Z-rotations occur due to the adiabatic evolution of the chirped pulse. The rotation angles are respectively given by Φ_1 ≈ ∫_-∞^-τ_h[ |Δ(t)| - √(Δ^2(t)+Ω^2(t))] dt and Φ_2 ≈ ∫_τ_h^∞[ |Δ(t)| - √(Δ^2(t)+Ω^2(t))] dt, and, as a result, the total time-evolution, including the Z-rotation due to the carrier phase ℛ_ẑ(φ), is given by ℛ_ẑ(Φ_2)ℛ_ŷ(Θ)ℛ_ẑ(Φ_1+φ)= [[ e^-i (Φ_1+Φ_2+φ)/2cosΘ/2 -e^iφ/2sinΘ/2; e^-iφ/2sinΘ/2 e^i(φ+Φ_1+Φ_2)/2cosΘ/2 ]], which corresponds to an arbitrary ZYZ rotation with three parameters Φ_1+φ, Φ_2, and Θ that can be made fully independent.
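The closed form above is easy to check numerically. A minimal sketch (our own, adopting the convention ℛ_ẑ(α)=diag(e^-iα/2, e^iα/2) and the symmetric-envelope case Φ_1=Φ_2 discussed below) reproduces the composed matrix:

import numpy as np

def Rz(a):  # Z-rotation; the diag(e^{-ia/2}, e^{+ia/2}) convention is our assumption
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def Ry(t):  # Y-rotation by angle t
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

Phi, Theta, varphi = 0.8, 1.1, 0.3   # illustrative angles, with Phi_1 = Phi_2 = Phi
U = Rz(Phi) @ Ry(Theta) @ Rz(Phi + varphi)
M = np.array([[np.exp(-1j*(2*Phi+varphi)/2)*np.cos(Theta/2), -np.exp(1j*varphi/2)*np.sin(Theta/2)],
              [np.exp(-1j*varphi/2)*np.sin(Theta/2),          np.exp(1j*(varphi+2*Phi)/2)*np.cos(Theta/2)]])
print(np.allclose(U, M))  # True: the composition reproduces the closed form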
Although the ZYZ rotation in Eq. (<ref>) is derived for the adiabatic states in the “detuning” interaction picture, |ψ(t)⟩_Δ=𝒯_Δ |ψ(t)⟩_A, the result is also valid for the corresponding original atomic states in the “atomic” interaction picture, |ψ(t)⟩_ω_0= 𝒯_ω_0 |ψ(t)⟩ (see Appendix B for the definition), because of the simple relation between these two states at t=±∞. This relation is given by |ψ(t)⟩_Δ = 𝒯_Δ(t)R(ϑ(t))𝒯_ω_L(t)𝒯_ω_0^†(t)|ψ(t)⟩_ω_0, where 𝒯_ω_L and R(ϑ) are the transformation to the “field” interaction picture and the adiabatic transform matrix, respectively (see Appendix B for details). At extreme times, t=±∞, the overall transformation becomes simple, given by 𝒯_Δ(±∞)R(ϑ(±∞))𝒯_ω_L(±∞)𝒯_ω_0^†(±∞)=R(ϑ(±∞)), with R(ϑ(-∞)) = ([ 1 0; 0 1 ]) and R(ϑ(∞)) = ([ 0 -1; 1 0 ]). The base vectors in these two representations are identical (|0⟩_Δ = |g⟩_ω_0, |1⟩_Δ = |e⟩_ω_0) at t=-∞ and switched (|0⟩_Δ = -|e⟩_ω_0, |1⟩_Δ = |g⟩_ω_0) at t=∞. Therefore, the time evolution in Eq. (<ref>), the ZYZ rotations, defined in the {|0⟩_Δ, |1⟩_Δ} basis (the “detuning” interaction picture) can also be written as R^†(ϑ(∞))ℛ_ẑ(Φ_2)ℛ_ŷ(Θ)ℛ_ẑ(Φ_1+φ) in the {|g⟩_ω_0, |e⟩_ω_0} basis (the “atomic” interaction picture). The third column in Fig. <ref> shows the corresponding time-evolution in the “atomic” interaction picture. The net changes of the state vector between the initial and final states are the same as those in the second column (the “detuning” interaction picture). Otherwise complicated time-evolutions of the state vector, e.g., in the “atomic” interaction basis, can be easily decomposed into the ZYZ rotations in our “detuning” interaction picture. Figure <ref> demonstrates the arbitrary qubit rotations. The numerical calculation in Fig. <ref>(a) shows Bloch sphere points accessible by as-shaped pulses controlled with two parameters 𝒜 (the pulse area) and φ (the carrier phase). When the pulse envelope is symmetric as in Eq. (<ref>), Φ_1 equals Φ_2. In this case, and also when the qubit starts from the initial state given by |ψ_init⟩ = 1/√(2)(|0(-∞)⟩_Δ+|1(-∞)⟩_Δ) = 1/√(2)(|g⟩_ω_0+|e⟩_ω_0), any final position on the Bloch sphere is accessible, as shown in Fig. <ref>(a). Even without assuming such an initial state, full arbitrariness can be achieved with an additional degree of freedom in pulse shaping. When detuning δω is implemented by a time shift, δt = δω/(2α), of the main pulse, the electric field is given by E(t) = A_0[e^-(t-δ t)^2/τ^2(1-ke^-t^2/τ_h^2)]cos(ω_0t+α t^2), where the hole is fixed at t=0. As shown in Fig. <ref>(b), the full ranges, 2π for Φ_2 and π for Θ, are then completely spanned, ensuring the given ZYZ rotations to be arbitrary. We note that the equivalent transform-limited pulse area 𝒜 in Fig. <ref> is defined as the pulse area of a transform-limited (TL) pulse that has the same pulse energy as the shaped pulse, which is given by 𝒜 = μ/ħ∫_-∞^∞ dt E_0e^-t^2/τ_0^2 = 2μ/ħ√(τ_0√(π/2)∫_-∞^∞dt|E_shaped(t)|^2), where τ_0 is the pulse width of the TL pulse. With this definition, the pulse energies of the shaped pulse and the TL pulse are equal, i.e., ∫_-∞^∞ dt|E_0e^-t^2/τ_0^2cos(ω_0 t)|^2 = ∫_-∞^∞ dt|E_shaped(t)|^2. § EXPERIMENTAL VERIFICATION In order to verify the ZYZ rotations, we performed a proof-of-principle experiment with cold atomic qubits and as-programmed femtosecond laser pulses (see Fig. <ref>). The details of our laser experimental setup are described in our previous work <cit.>. Briefly, we used amplified optical pulses from a Ti:sapphire mode-locked laser.
Initial pulses were produced at a repetition rate of 1 kHz from the laser, wavelength-centered at the resonance wavelength 795 nm of the rubidium transition from 5S_1/2 to 5P_1/2. The spectral bandwidth was 2.5 THz in Gaussian width, equivalent to a pulse duration of 212 fs (FWHM) for a transform-limited (TL) Gaussian pulse. The pulses were then shaped with an acousto-optic pulse programming device (AOPDF, Dazzler from Fastlite) <cit.>. The two-level system was formed with the ground and excited states, |g⟩=5S_1/2 and |e⟩=5P_1/2, of atomic rubidium (^87Rb) and the atoms were held in a magneto-optical trap <cit.>. The inhomogeneity of the laser-atom interaction <cit.>, due to the spatial intensity profile of the laser, was minimized by making the atom cloud 2.3 times smaller than the laser beam. The size of the atom cloud was 250 μm (FWHM). The control experiment was conducted in three steps: initialization, qubit rotation, and detection. The atoms were first excited by a π/2-area pulse to initialize them in the superposition state |ψ_init⟩ defined in Eq. (<ref>). Then, the chirped pulse with a temporal hole rotated the state. Lastly, atoms in the excited state were detected through ionization, using a frequency-doubled split-off of an unshaped laser pulse and a micro-channel plate (MCP) detector. The laser pulses for the initialization and qubit rotation were programmed by the AOPDF. In the frequency domain, the combined field is given by E(ω) = E_init(ω)+E_rot(ω)e^iφ, where E_init(ω) is the π/2-area pulse, E_rot(ω) is the chirped pulse with a temporal hole, and φ is the relative phase between them. The total energy of these two pulses was up to 20 μJ and the energy of each pulse was pre-calibrated through cross-correlation measurements. The chirp parameter for the control pulse was fixed at α=8.15 rad/ps^2, which corresponds to a frequency chirp of 60,000 fs^2 in the spectral domain. Figure <ref> shows a comparison between experimental and theoretical results. When atoms, in the initial superposition state |ψ_init⟩ in Eq. (<ref>), undergo the rotation in Eq. (<ref>), the excited-state probability is given by P_e (Θ, φ, Φ_1) =|⟨ e|ℛ_ẑ(Φ_2)ℛ_ŷ(Θ)ℛ_ẑ (Φ_1+φ)|ψ_init⟩|^2 = 1/2[1 - sinΘcos(Φ_1+φ)]. The resulting P_e is an oscillatory function whose amplitude and phase are determined by Θ and Φ_1+φ. In Fig. <ref>(a), the measured probability is plotted as a function of the equivalent (peak) TL pulse-area 𝒜 and the carrier phase φ. The result strongly agrees with the calculation in Fig. <ref>(b), performed with the corresponding time-domain Schrödinger equation (TDSE). Each point in Figs. <ref>(a) and <ref>(b) corresponds to a distinct Bloch vector evolution. A few characteristic trajectories (in the “detuning” interaction picture) are shown in Figs. <ref>(c,d,⋯,h) (see the figure caption for more detail). Along the dashed lines in Figs. <ref>(a) and <ref>(b), data points are extracted and compared in Fig. <ref>(i), where the excited-state probabilities, P_e(φ|Θ,Φ_1), are plotted as a function of φ at fixed Θ and Φ_1. The change of the peak oscillation point in Fig. <ref>(i) is related to the E_0-dependence of Φ_1 as in Eq. (<ref>); Φ_1 is a monotonically decreasing function of E_0, so the peaks in Fig. <ref>(i) shift to the upper right corner as E_0 increases. Also, the change in the oscillation amplitude is related to the E_0-dependence of Θ.
As the electric-field amplitude E_0 increases, so does the rotation angle Θ of the Y-rotation; however, this holds only up to a certain maximum E_0, above which the dynamics involved with the hole gradually becomes adiabatic. Such behavior of Θ is clearly demonstrated in Fig. <ref>(i), where the oscillation amplitude given by sinΘ in Eq. (<ref>) reaches its maximum along the line marked ②, and decreases as E_0 increases further. Therefore, the expected behaviors of Φ_1 and Θ in Eq. (<ref>) are clearly observed in the experimental results. § CONCLUSION In summary, we proposed and demonstrated the use of hybrid adiabatic and non-adiabatic interaction for single-laser-pulse implementation of arbitrary qubit rotations. The chirped optical pulse with a temporal hole induced ZYZ-decomposed rotations of atomic qubits all at once, in which the temporal hole caused a non-adiabatic evolution amid an otherwise monotonic adiabatic evolution due to the chirped pulse. The proof-of-principle experimental verification of the given laser-atom interaction was performed with programmed femtosecond laser pulses and cold atoms. The result suggests that laser pulse-shape programming may be useful in quantum computation through concatenating gate operations in a quantum circuit. This research was supported by Samsung Science and Technology Foundation [SSTF-BA1301-12]. The authors thank Adam Massey and Chansuk Park for fruitful discussions. § CHIRPED PULSES IN FREQUENCY AND TIME DOMAINS A linearly chirped pulse is defined with a second-order phase in the spectral domain, which can be written as E_chirp(ω) = E_0/√(2)Δω exp[-(ω-ω_0)^2/Δω^2 -i c_2/2(ω-ω_0)^2], where a Gaussian pulse with amplitude E_0 and frequency chirp c_2 is assumed and the frequency is centered at the resonance ω_0 of the two-level system. Then, the time-domain electric field is given by E_chirp(t) = E_0√(τ_0/τ)e^-t^2/τ^2cos[(ω_0 +α t )t + φ], where φ =-tan^-1(2c_2/τ_0^2)/2 is the phase, τ_0 = 2/Δω the transform-limited (TL) pulse width, τ=√(τ_0^2+4c_2^2/τ_0^2) the chirped pulse width, and α = 2c_2/(τ_0^4+4c_2^2) the chirp parameter. § HAMILTONIAN TRANSFORMATION The dynamics of a two-level system interacting with a shaped chirped pulse is governed by the Hamiltonian H = [[ -ħω_0/2 μ E(t); μ E(t) ħω_0/2 ]], where the two base vectors are defined as |g⟩ and |e⟩. Being transformed to the “field” interaction picture (with respect to the instantaneous laser frequency ω_L(t)=ω_0+2α t), the Hamiltonian H becomes H_ω_L = ħ/2[[ -Δ(t) Ω(t); Ω(t) Δ(t) ]], after the rotating wave approximation, where Δ(t)=ω_0-ω_L(t)=-2α t is the instantaneous detuning and Ω(t) is the Rabi frequency. The transformation matrix from H to H_ω_L is given by 𝒯_ω_L=exp(i∫_0^t T_ω_L (t') dt'/ħ) with T_ω_L = ħ/2[[ -(ω_0 t+ α t^2) 0; 0 (ω_0 t+ α t^2) ]], where the base vectors in the “field” interaction picture are |g⟩_ω_L = 𝒯_ω_L|g⟩ and |e⟩_ω_L = 𝒯_ω_L|e⟩. Chirped pulses induce adiabatic evolution, which is a Z-rotation in the adiabatic basis.
The adiabatic base vectors are given by |0(t)⟩_A = cosϑ(t)|g⟩_ω_L - sinϑ(t)|e⟩_ω_L and |1(t)⟩_A = sinϑ(t)|g⟩_ω_L + cosϑ(t)|e⟩_ω_L, where the eigenvalues are ħ/2 λ_±(t) = ±ħ/2√(Ω^2(t)+Δ^2(t)) and the mixing angle ϑ(t) is ϑ(t) = 1/2 tan^-1[Ω(t)/Δ(t)] for 0 ≤ϑ(t)≤π/2. The state in the adiabatic basis is given by |ψ(t)⟩_A=R(ϑ(t))|ψ(t)⟩_ω_L, where |ψ(t)⟩_ω_L = 𝒯_ω_L|ψ(t)⟩ and R(ϑ(t)) is the adiabatic transform matrix defined as R(ϑ(t))=[[ cosϑ(t) -sinϑ(t); sinϑ(t) cosϑ(t) ]]. The Schrödinger equation is then given in the adiabatic basis {|0(t)⟩_A, |1(t)⟩_A} by iħ d/dt |ψ(t)⟩_A = ( R H_ω_L R^-1+iħṘ R^-1) |ψ(t)⟩_A, and the adiabatic Hamiltonian is H_A = ħ/2[[ λ_- -2iϑ̇; 2iϑ̇ λ_+ ]], where 2ϑ̇ in the off-diagonal term is the “non-adiabatic coupling” given by 2ϑ̇ = |Ω̇(t)Δ(t)-Ω(t)Δ̇(t)|/[Δ^2(t)+Ω^2(t)]. With the adiabatic Hamiltonian H_A, the phase of the state diverges at t→±∞, because of the detuning. To remove this phase before and after the pulse duration, we perform an additional transform 𝒯_Δ=exp(i∫_0^t T_Δ(t') dt'/ħ) with T_Δ = ħ/2[[ -|Δ(t)| 0; 0 |Δ(t)| ]]. The resulting Hamiltonian in this “detuning” interaction picture, also in Eq. (3), is given by H_Δ = ħ/2[[ -Δ_F(t) Ω_F(t); Ω_F^*(t) Δ_F(t) ]], where the modified detuning and Rabi frequency are Δ_F(t)= √(Δ^2(t)+Ω^2(t))-|Δ(t)| and Ω_F(t)=-2iϑ̇e^-i|Δ(t)|/2, and the base vectors are defined by |0(t)⟩_Δ = 𝒯_Δ|0(t)⟩_A and |1(t)⟩_Δ = 𝒯_Δ|1(t)⟩_A. On the other hand, the conventional “atomic” interaction picture uses the transformation given by 𝒯_ω_0=exp(i∫_0^t T_ω_0 (t') dt'/ħ) with T_ω_0 = ħ/2[[ -ω_0 0; 0 ω_0 ]] to remove the phase factor associated with the atomic energy splitting ω_0. In this representation (the “atomic” interaction picture), the base vectors are given by |g⟩_ω_0 = 𝒯_ω_0|g⟩ and |e⟩_ω_0 = 𝒯_ω_0|e⟩. NielsenChuang M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2000). photons P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn, Rev. Mod. Phys. 79, 135 (2007). ions H. Häffner, C. F. Roos, and R. Blatt, “Quantum computing with trapped ions,” Phys. Rep. 469, 155 (2008). atoms M. Saffman, “Quantum computing with atomic qubits and Rydberg interactions: progress and challenges,” J. Phys. B 49, 202001 (2016). molecules R. Hildner, D. Brinks, and N. F. van Hulst, “Femtosecond coherence and quantum control of single molecules at room temperature,” Nat. Phys. 7, 172 (2011). aatoms F. H. L. Koppens, C. Buizert, K. J. Tielrooij, I. T. Vink, K. C. Nowack, T. Meunier, L. P. Kouwenhoven, and L. M. K. Vandersypen, “Driven coherent oscillations of a single electron spin in a quantum dot,” Nature 442, 766 (2006). scqubits E. Lucero, M. Hofheinz, M. Ansmann, R. C. Bialczak, N. Katz, M. Neeley, A. D. O'Connell, H. Wang, A. N. Cleland, and J. M. Martinis, “High-fidelity gates in a single Josephson qubit,” Phys. Rev. Lett. 100, 247001 (2008). arbitrary The rotational axis and angle of a ZYZ-decomposed arbitrary rotation are respectively given by n̂ = ( sinΘ/2sinΦ_+, sinΘ/2cosΦ_-, cosΘ/2sinΦ_+) / sinϕ/2 and ϕ=2cos^-1 (cosΘ/2cosΦ_+), where Φ_±=(Φ_1±Φ_2)/2. LimSR2014 J. Lim, H. G. Lee, S. Lee, C. Y. Park, and J. Ahn, “Ultrafast Ramsey interferometry to implement cold atomic qubit gates,” Sci. Rep. 4, 5867 (2014). Weiss2016 Y. Wang, A. Kumar, T.-Y. Wu, and D. S. Weiss, “Single-qubit gates based on targeted phase shifts in a 3D neutral atom array,” Science 352, 1562 (2016). ShoreBook B. W. Shore, Manipulating Quantum Structures Using Laser Pulses (Cambridge University Press, 2011). AllenBook L. Allen and J. H.
Eberly, Optical Resonance and Two-Level Atoms (Dover, 1987).LeePRA2016 H. G. Lee, Y. Song, H. Kim, H. Jo, and J. Ahn, “Quantum dynamics of a two-state system induced by a chirped zero-area pulse,” Phys. Rev. A 93, 023423 (2016).SongPRA2016 Y. Song, H. G. Lee, H. Jo, and J. Ahn, “Selective excitation in a three-state system using a hybrid adiabatic-nonadiabatic interaction,” Phys. Rev. A 94, 023412 (2016).AOPDF P. Tournois, “Acousto-optic programmable dispersive filter for adaptive compensation of group delay time dispersion in laser systems,” Opt. Comm. 140, 245 (1997).LeeOL2015 H. G. Lee, H. Kim, and J. Ahn, “Ultrafast laser-driven Rabi oscillations of a Gaussian atom ensemble,” Opt. Lett. 40, 510 (2015).
http://arxiv.org/abs/1702.07833v2
{ "authors": [ "Han-gyeol Lee", "Yunheung Song", "Jaewook Ahn" ], "categories": [ "physics.atom-ph" ], "primary_category": "physics.atom-ph", "published": "20170225042902", "title": "Single-laser-pulse implementation of arbitrary ZYZ rotations of an atomic qubit" }
http://arxiv.org/abs/1702.08266v1
{ "authors": [ "M. N. Chernodub", "Shinya Gongyo" ], "categories": [ "hep-th", "cond-mat.other", "hep-ph" ], "primary_category": "hep-th", "published": "20170227131439", "title": "Effects of rotation and boundaries on chiral symmetry breaking of relativistic fermions" }
Rician MIMO Channel- and Jamming-Aware Decision Fusion D. Ciuonzo, Senior Member, IEEE, A. Aubry, Senior Member, IEEE, and V. Carotenuto, Member, IEEE Manuscript received 20th November 2015; revised 3rd August 2016; accepted 23rd February 2017. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Xavier Mestre. D. Ciuonzo was with University of Naples Federico II, DIETI, Via Claudio 21, 80125 Naples, Italy. He is now with Networking Measurement and Monitoring (NM-2) s.r.l., 80143 Naples, Italy. (e-mail: domenico.ciuonzo@ieee.org) A. Aubry and V. Carotenuto are with University of Naples Federico II, DIETI, Via Claudio 21, 80125 Naples, Italy. (e-mail: {augusto.aubry, vincenzo.carotenuto}@unina.it) ======================================================================== In this manuscript we study channel-aware decision fusion (DF) in a wireless sensor network (WSN) where: (i) the sensors transmit their decisions simultaneously for spectral efficiency purposes and the DF center (DFC) is equipped with multiple antennas; (ii) each sensor-DFC channel is described via a Rician model. As opposed to the existing literature, in order to account for stringent energy constraints in the WSN, only statistical channel information is assumed for the non-line-of-sight (scattered) fading terms. For such a scenario, sub-optimal fusion rules are developed in order to deal with the exponential complexity of the likelihood ratio test (LRT) and impractical (complete) system knowledge. Furthermore, the considered model is extended to the case of (partially unknown) jamming-originated interference. The obtained fusion rules are then modified with the use of the composite hypothesis testing framework and the generalized LRT. Coincidence and statistical equivalence among them are also investigated under some relevant simplified scenarios. Numerical results compare the proposed rules and highlight their jamming-suppression capability. Decision Fusion, Distributed Detection, Virtual MIMO, Wireless Sensor Networks. § INTRODUCTION §.§ Motivation and Related Literature Decision Fusion (DF) in a wireless sensor network (WSN) consists in transmitting local decisions about an observed phenomenon from sensors to a DF center (DFC) for a global decision, with the intent of surveillance and/or anomaly detection <cit.>. Typically, studies have focused on parallel access channels (PACs) with instantaneous <cit.> or statistical channel-state information (CSI) <cit.>, although some recent works have extended the analysis to the case of multiple access channels (MACs). Adoption of a MAC in WSNs is clearly attractive because of its increased spectral efficiency. Distributed detection over MACs was first studied in <cit.>, where perfect compensation of the fading coefficients is assumed for each sensor.
Non-coherent modulation and censoring over PACs and MACs were analyzed in <cit.> with emphasis on processing gain and combining loss. The same scenario was studied in <cit.>, focusing on the error exponents (obtained through the large deviation principle) and the design of energy-efficient modulations for Rayleigh and Rice fading. Optimality of the received-energy statistic in the Rayleigh fading scenario was demonstrated for diversity MACs with non-identical sensors in <cit.>. Efficient DF over MACs with knowledge of only the instantaneous channel gains and with the help of power-control and phase-shifting techniques was studied in <cit.>. Techniques borrowed from direct-sequence spread-spectrum systems were combined with on-off keying (OOK) modulation and censoring for DF in scenarios with statistical CSI <cit.>. DF over a (virtual) MIMO (this setup will be referred to as “MIMO-DF” hereinafter) was first proposed in <cit.>, with focus on power-allocation design based on instantaneous CSI, under the framework of J-divergence. Distributed detection with ultra-wideband sensors over MAC was then studied in <cit.>. The same model was adopted to study sensor fusion over MIMO channels with amplify-and-forward sensors in <cit.>. A recent theoretical study on data fusion with amplify-and-forward sensors, Rayleigh fading channel and a large array at the DFC has been presented in <cit.>. Design of several sub-optimal fusion rules for the MIMO-DF scenario was given in <cit.> in a setup with instantaneous CSI and Rayleigh fading, while the analysis was extended in <cit.> to a large array at the DFC, estimated CSI and inhomogeneous large-scale fading. In both cases binary phase-shift keying (BPSK) has been employed. It is worth noticing that in the MIMO-DF scenario the log-likelihood ratio (LLR) is not a viable solution, since it suffers from the exponential growth of the computational complexity with respect to (w.r.t.) the number of sensors and a strong requirement on system knowledge. However, frequently the final purpose of a WSN is anomaly detection (viz. the null hypothesis is much more frequent than the alternative hypothesis, denoting the “anomaly”). Such a problem arises in many application contexts, such as intrusion detection or monitoring of hazardous events. In this case, a wise choice of the modulation format is on-off keying (OOK), which ensures a nearly-optimal censoring policy (and thus significant energy savings) <cit.>. Additionally, though the channel between the sensors and the DFC may be accurately modelled as Rician, assuming instantaneous CSI (i.e., estimation of the scattered fading component) may be too energy-costly for an anomaly detection problem. This motivated the study of DF over Rician MAC channels (in the single-antenna DFC case) with only statistical CSI in <cit.>. We point out that statistical CSI is instead a reasonable assumption for a WSN and can be obtained through long-term training-based techniques (since the statistical parameters of the Rician model vary slowly with respect to the coherence time of the channel), mimicking the procedures proposed in <cit.>. The aforementioned problem may be further exacerbated by the presence of a (possibly distributed) jamming device in the WSN deployment area <cit.>. Such a problem is clearly relevant in non-friendly environments, such as the battlefield, where malicious devices (i.e., the jammers) are placed to hinder the operational requirements of the WSN.
Indeed, due to the jammer's hostile nature, unknown interference is superimposed on the useful received signal (containing the “informative” sensor contributions). Therefore, additional relevant parameters may be unknown at the DFC side. This precludes the development of sub-optimal (simplified) fusion rules based on the LLR, which assume complete specification of the pdfs under both hypotheses. To the best of our knowledge, the study of such a setup for MIMO-DF has not yet been addressed in the open literature. §.§ Main Results and Paper Organization The contributions of the present manuscript are summarized as follows. * We study decision fusion over MAC with Rician fading and multiple antennas at the DFC (as opposed to <cit.>). In the present study only the LOS component is assumed known at the DFC. Also, by adopting the same general assumptions as in <cit.>, the considered model also accounts for unequal long-term received powers from the sensors, through a common path loss and shadowing model; * We derive sub-optimal fusion rules dealing with exponential complexity and with required system knowledge in the considered scenario, namely, we derive (i) “ideal sensors” (IS) (following the same spirit as in <cit.>), (ii) “non-line of sight” (NLOS), (iii) “widely-linear” (mimicking <cit.>) and (iv) “improper Gaussian moment matching” (IGMM, based on second order characterization of the received vector under both hypotheses) rules; * Subsequently, we consider DF in the presence of an (either distributed or co-located) multi-antenna jamming device, whose communication channel is described by an analogous Rician model. The problem is tackled within a composite hypothesis testing framework and solved via the generalized likelihood-ratio test (GLRT) <cit.> and similar key simplifying assumptions as in the “no-jamming” scenario, thus leading to IS-GLRT, NLOS-GLRT and IGMM-GLRT rules, respectively; * Simulation studies (along with a detailed complexity analysis) are performed to compare the performance of the considered rules and verify the asymptotical equivalences (later proved in Secs. <ref> and <ref>) among them in some specific instances. Also, the performance trends as a function of the Rician parameters of the WSN and the jammer, the thermal noise and the number of receive antennas are investigated and discussed. The remainder of the manuscript is organized as follows: Sec. <ref> introduces the model; in Sec. <ref> we derive and study the fusion rules, while in Sec. <ref> we generalize the analysis to the case of a subspace interference; the obtained rules are compared in terms of computational complexity in Sec. <ref>; in Sec. <ref> we compare the presented rules through simulations; finally in Sec. <ref> we draw some conclusions; proofs and derivations are contained in a dedicated Appendix. Notation - Lower-case (resp. upper-case) bold letters denote vectors (resp. matrices), with a_n (resp. a_n,m) being the nth (resp. the (n,m)th) element of a (resp. A); upper-case calligraphic letters denote finite sets, with 𝒜^K representing the K-ary Cartesian power of 𝒜; O_N× K (resp. I_N) denotes the N× K (resp. N× N) null (resp. identity) matrix, with corresponding short-hand notation O_N for a square matrix; 0_N (resp. 1_N) denotes the null (resp. ones) vector of length N; a_n:m (resp. A_n:m) denotes the sub-vector of a (resp. the sub-matrix of A) obtained from selecting only the nth to mth elements of a (resp.
nth to mth rows/columns of A); 𝔼{·}, var{·}, (·)^T, (·)^†, (·)^-, ℜ(·), ℑ(·) and ‖·‖ denote expectation, variance, transpose, conjugate transpose, pseudo-inverse, real part, imaginary part and Euclidean norm operators, respectively; (·)_+ is used to indicate max{0,·}; diag(A) (resp. diag(a)) denotes the diagonal matrix extracted from A (resp. the diagonal matrix with main diagonal given by a); det(A) is used to denote the determinant of A; λ_min(A) denotes the minimum eigenvalue of the Hermitian matrix A; P_X^⊥ denotes the orthogonal projector onto the complement of the range space spanned by X; a (resp. A) denotes the augmented vector (resp. matrix) of a (resp. A), that is a≜[[ a^T a^† ]]^T (resp. A≜[[ A^T A^† ]]^T); P(·) and p(·) denote probability mass functions (pmf) and probability density functions (pdf), while P(·|·) and p(·|·) their corresponding conditional counterparts; Σ_x (resp. Σ̅_x) denotes the covariance (resp. the complementary covariance) matrix of the complex-valued random vector x; 𝒩_ℂ(μ,Σ) (resp. 𝒩_ℂ(μ,Σ,Σ̅)) denotes a proper (resp. an improper) complex normal distribution with mean vector μ and covariance matrix Σ (resp. covariance Σ and pseudo-covariance Σ̅), while 𝒩(μ,Σ) denotes the corresponding real-valued counterpart; finally, the symbols ∝ and ∼ mean “statistically equivalent to” and “distributed as”, respectively. § SYSTEM MODEL Hereinafter we will consider a decentralized binary hypothesis test, where K sensors are used to discern between the hypotheses in the set ℋ≜{ℋ_0,ℋ_1} (e.g. ℋ_0/ℋ_1 may represent the absence/presence of a specific target of interest). The kth sensor, k∈𝒦≜{1,2,…,K}, takes a binary local decision ξ_k∈ℋ about the observed phenomenon on the basis of its own measurements. Here we do not make any conditional (given ℋ_i∈ℋ) mutual independence assumption on ξ_k. Each decision ξ_k is mapped to a symbol x_k∈𝒳={0,+1} representing an OOK modulation: without loss of generality (w.l.o.g.) we assume that ξ_k=ℋ_i maps into x_k=i, i∈{0,1}. The quality of the WSN is characterized by the conditional joint pmfs P(x|ℋ_i). Also, we denote by P_D,k≜ P(x_k=1|ℋ_1) and P_F,k≜ P(x_k=1|ℋ_0) the probabilities of detection and false alarm of the kth sensor, respectively (here we make the assumption P_D,k≥ P_F,k, meaning that each sensor decision procedure leads to receiver operating characteristics above the chance line <cit.>). In some situations, aiming at improving clarity of exposition, we will use the short-hand notation (P_0,k,P_1,k)=(P_F,k,P_D,k) (and (P_0,P_1)=(P_F,P_D), in the simpler case of conditionally i.i.d. decisions). Sensors communicate with a DFC equipped with N receive antennas over a wireless flat-fading MAC in order to exploit diversity so as to mitigate small-scale fading; this setup determines a distributed (or virtual) MIMO channel <cit.>. Also, perfect synchronization[Multiple antennas at the DFC do not make these assumptions harder to verify w.r.t. a single-antenna MAC.], as in <cit.>, is assumed at the DFC. We denote by y_n the signal at the nth receive antenna of the DFC after matched filtering and sampling; by (√(d_k,k) h̅_n,k) the composite channel coefficient between the kth sensor and the nth receive antenna of the DFC; and by w_n the additive white Gaussian noise at the nth receive antenna of the DFC. The vector model at the DFC is: y= H̅ D^1/2 x+w, where y∈ℂ^N, x∈𝒳^K and w∼𝒩_ℂ(0_N,σ_w^2 I_N) are the received-signal vector, the transmitted-signal vector and the noise vector, respectively.
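Before detailing the channel matrices H̅ and D, we give a minimal Monte Carlo sketch of this observation model (it uses the Rician decomposition of H̅ introduced just below; the half-wavelength uniform-linear-array steering choice and all numerical values are our own illustrative assumptions):

import numpy as np
rng = np.random.default_rng(0)

N, K = 4, 8                         # receive antennas, sensors (illustrative)
sigma_w = 1.0                       # noise standard deviation
beta = rng.uniform(0.5, 2.0, K)     # large-scale fading (path loss / shadowing)
kappa = rng.uniform(0.0, 5.0, K)    # Rician factors
b = np.sqrt(kappa / (1.0 + kappa))
theta = rng.uniform(-np.pi / 2, np.pi / 2, K)

# steering vectors a(theta_k); a half-wavelength ULA is one common "far-field" choice
n = np.arange(N)[:, None]
A = np.exp(1j * np.pi * n * np.sin(theta)[None, :])          # N x K

H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
Hbar = A @ np.diag(b) + H @ np.diag(np.sqrt(1.0 - b**2))     # Rician channel matrix
x = rng.integers(0, 2, K)                                    # OOK local decisions
w = sigma_w * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
y = Hbar @ np.diag(np.sqrt(beta)) @ x + w                    # received vector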
Also, the matrices H̅∈ℂ^N× K and D∈ℂ^K× K model independent small-scale fading, geometric attenuation and log-normal shadowing. More specifically, D≜diag([ β_1 ⋯ β_K ]^T) is a (known) matrix with kth diagonal element d_k,k=β_k (β_k>0) accounting for path loss and shadow fading experienced by the kth sensor. On the other hand, the kth column of H̅ models the (small-scale) fading vector of the kth sensor as h̅_k=b_k a(θ_k)+√(1-b_k^2) h_k. Here a(·) denotes the steering vector (which depends on the angle-of-arrival θ_k, assumed known at the DFC[W.l.o.g. in this work we adopt a 1-D functional dependence for a(·) (obtained from a “far-field” assumption), albeit more complicated expressions could be considered as well.]) corresponding to the LOS component and h_k∼𝒩_ℂ(0_N,I_N) corresponds to the normalized NLOS (scattered) component. Finally, we denote b_k≜√(κ_k/(1+κ_k)), where κ_k represents the (known) usual Rician factor between the kth sensor and the DFC. The matrix H̅ can be expressed compactly in terms of the relevant matrices A(θ)≜[ a(θ_1)⋯ a(θ_K) ], H≜[ h_1 ⋯ h_K ] and R≜diag([ b_1 ⋯ b_K ]^T), respectively, as H̅≜A(θ) R+H (I_K-R^2)^1/2. Finally, we underline that the received scattered term from the kth sensor in Eq. (<ref>) is √((1-b_k^2) β_k) h_k∼𝒩_ℂ(0_N,ν_k I_N), where ν_k≜[β_k (1-b_k^2)], while its LOS term is μ_k≜[√(β_k) b_k a(θ_k)] and corresponds to the kth column of the matrix Ã(θ)≜(A(θ) R D^1/2), denoting the matrix of received LOS terms from the WSN. § FUSION RULES §.§ Optimum (LLR) Rule The optimal test <cit.> is formulated on the basis of the LLR Λ_opt≜ln[p(y|ℋ_1)/p(y|ℋ_0)], and decides in favour of ℋ_1 (resp. ℋ_0) when Λ_opt>γ (resp. Λ_opt≤γ), with γ denoting the threshold which the LLR is compared to[The threshold γ can be determined to ensure a fixed system false-alarm rate (Neyman-Pearson approach), or can be chosen to minimize the probability of error (Bayesian approach) <cit.>.]. After a few manipulations, the LLR can be expressed explicitly as Λ_opt=ln[∑_x∈𝒳^K P(x|ℋ_1)/[σ_e^2(x)]^N exp(-‖y-∑_k=1^Kμ_k x_k‖^2/σ_e^2(x)) / ∑_x∈𝒳^K P(x|ℋ_0)/[σ_e^2(x)]^N exp(-‖y-∑_k=1^Kμ_k x_k‖^2/σ_e^2(x))], where σ_e^2(x)≜(σ_w^2+∑_k=1^Kν_k x_k). The above result follows from y|ℋ_i being a Gaussian mixture random vector, since the pdf under each hypothesis can be obtained as p(y|ℋ_i)=∑_x p(y|x) P(x|ℋ_i) (the directed triple ℋ→x→y satisfies the Markov property). It is apparent that implementation of Eq. (<ref>) requires a computational complexity which grows exponentially with K (namely 𝒪(2^K), where 𝒪(·) stands for the usual Landau notation). Also, differently from <cit.> (where a BPSK modulation is employed), the computation here is complicated by the fact that each component of the mixture (under ℋ_i) has both different mean vectors and covariance (actually scaled identity) matrices. Therefore, sub-optimal fusion rules with reduced complexity are investigated in what follows. §.§ Ideal sensors (IS) rule The LLR in Eq. (<ref>) can be simplified under the assumption of perfect sensors <cit.>, i.e., P(x=1_K|ℋ_1)=P(x=0_K|ℋ_0)=1. In this case x∈{0_K,1_K} and Eq.
(<ref>) reduces to <cit.>: ln[[σ_e^2(1_K)]^-N exp(-‖y-∑_k=1^Kμ_k‖^2/σ_e^2(1_K)) / ([σ_w^2]^-N exp(-‖y‖^2/σ_w^2))] ∝ 2ℜ(μ̅^†y)+(ν̅/σ_w^2)‖y‖^2 ≜Λ_IS, where μ̅≜(1/K)∑_k=1^Kμ_k and ν̅≜(1/K)∑_k=1^Kν_k, and terms independent from y have been discarded (as they can be incorporated in a suitably modified threshold γ). It is worth noticing that the assumption of perfect local decisions is used only for system design purposes, and does not mean that the system is working under such ideal conditions; thus the rule is suboptimal. Also, we observe that IS rule in (<ref>) is formed by a weighted combination of a maximum ratio combiner (MRC, which actually is the statistic resulting from the IS assumption on the known part of the channel vector at the DFC <cit.>) and an energy detector (ED, i.e., the statistic arising from the IS assumption on the random part of the channel vector at the DFC <cit.>). Clearly, from Eq. (<ref>) it is apparent that IS rule does not require sensor performance (i.e., the pmf P(x|ℋ_i), i∈{0,1}) for its implementation. §.§ Non line-of-sight (NLOS) rule In this case we derive a sub-optimal rule arising from the simplifying assumption κ_k=0 (i.e., no sensor has a LOS path), thus leading to: ln[∑_x∈𝒳^K P(x|ℋ_1)/[σ̅_e^2(x)]^N exp(-‖y‖^2/σ̅_e^2(x)) / ∑_x∈𝒳^K P(x|ℋ_0)/[σ̅_e^2(x)]^N exp(-‖y‖^2/σ̅_e^2(x))], where σ̅_e^2(x)≜(σ_w^2+∑_k=1^Kβ_k x_k). We observe that in this case the LLR is a function of the sole sufficient statistic Λ_NL≜‖y‖^2, i.e., the energy of the received signal, which we retain as a simple statistic for our test[We recall that, as in the case of IS rule, the NLOS assumption is only exploited at the design stage for development of the simplified rule Λ_NL.]. There is a twofold motivation for this choice. First, it was shown in <cit.> that under identical β_k's and conditionally independent decisions, the LLR in Eq. (<ref>) is a monotone function of ‖y‖^2 (thus ‖y‖^2>γ is the uniformly most powerful test <cit.>). Secondly, by applying Gaussian moment matching to the simplified model in Eq. (<ref>), the same test would be obtained. Therefore, though we have no optimality claims for Λ_NL in this general case, we will consider NLOS rule as the decision statistic due to its simplicity (and no requirements on sensor performance). §.§ Widely-linear (WL) rules It can be shown that y|ℋ_i has the following statistical characterization up to the first two moments (the proof is given in Appendix): 𝔼{y|ℋ_i}=Ã(θ) ρ_i, Σ_y|ℋ_i=Ã(θ) Σ_x|ℋ_i Ã(θ)^†+σ_e,i^2 I_N, Σ̅_y|ℋ_i=Ã(θ) Σ_x|ℋ_i Ã(θ)^T, where ρ_i≜[ P_i,1 ⋯ P_i,K ]^T and σ_e,i^2≜[∑_k=1^Kν_k P_i,k+σ_w^2]. Therefore a convenient and effective approach consists in adopting a WL statistic <cit.>. The WL approach (i.e., based on the augmented vector y) is motivated by linear complexity and y|ℋ_i being an improper (cf. Eq. (<ref>)) complex-valued random vector, that is Σ̅_y|ℋ_i≠O_N. More specifically, the WL statistic is generically expressed as: Λ_WL≜z^†y, where the augmented vector z has to be designed according to a reasonable criterion. Then, Λ_WL is compared to a proper threshold γ to obtain the corresponding test. Clearly, several optimization metrics may be considered for obtaining z. The best choice (in a Neyman-Pearson sense) would be searching for the WL rule maximizing the global detection probability subject to a global false-alarm rate constraint, as proposed in <cit.> for a distributed detection problem.
Unfortunately, the optimized z presents the following drawbacks: (i) it is not in closed form, (ii) it requires a non-trivial optimization and (iii) it depends on the prescribed false-alarm constraint. Additionally, the problem under investigation is not a multivariate Gauss-Gauss test (i.e., y|ℋ_i∼𝒩_ℂ(μ_i,Σ_i)) but one discerning between complex Gaussian mixtures (cf. Eq. (<ref>)). This would further complicate the optimization problem tackled in <cit.>. Differently, in this paper we choose z as the maximizer of either the normal <cit.> or modified <cit.> deflection measures, denoted as D_0(z) and D_1(z) respectively, that is: z_WL,i≜arg max_z: ‖z‖^2=1 D_i(z), where D_i(z)≜(𝔼{Λ_WL|ℋ_1}-𝔼{Λ_WL|ℋ_0})^2/var{Λ_WL|ℋ_i}. Maximization of deflection measures is commonly used in the design of (widely) linear rules for DF, since z_WL,i always admits a closed form and also the literature has shown acceptable performance loss w.r.t. the LLR in analogous DF setups <cit.>. The vector z_WL,i, being the optimal solution to the optimization in Eq. (<ref>), is (a similar proof can be found in <cit.>): z_WL,i=Σ_y|ℋ_i^-1 Ã(θ) ρ_1,0/‖Σ_y|ℋ_i^-1 Ã(θ) ρ_1,0‖, where ρ_1,0≜(ρ_1-ρ_0) and Σ_y|ℋ_i is given by: Σ_y|ℋ_i=Ã(θ) Σ_x|ℋ_i Ã(θ)^†+σ_e,i^2 I_2N. The WL statistics are thus obtained by plugging Eq. (<ref>) into (<ref>). It is worth pointing out that, from inspection of Eq. (<ref>), WL rules only require knowledge of the vectors x|ℋ_i up to the second order. §.§ Improper Gaussian moment matching (IGMM) rule Differently, here we fully exploit the second order characterization provided in Eqs. (<ref>-<ref>). In fact, after fitting y|ℋ_i to an improper complex Gaussian, the following quadratic test can be obtained <cit.>: Λ_IGMM ≜-(y-𝔼{y|ℋ_1})^† Σ_y|ℋ_1^-1 (y-𝔼{y|ℋ_1})+(y-𝔼{y|ℋ_0})^† Σ_y|ℋ_0^-1 (y-𝔼{y|ℋ_0}), where 𝔼{y|ℋ_i}=Ã(θ) ρ_i and Σ_y|ℋ_i is given in Eq. (<ref>). IGMM rule presents the same (reduced) requirements on knowledge of sensor performance as the WL rules (cf. Eqs. (<ref>) and (<ref>)). Differently, we expect it to perform nearly optimally (i.e., close to the LLR) at low SNR, as in such a case both Gaussian mixtures are well-approximated by a single Gaussian pdf. §.§ Asymptotic equivalences In this sub-section, we will establish asymptotic equivalences among the proposed rules in the form of the following lemmas. These will be employed as useful tools to facilitate the understanding of the numerical comparisons shown in Sec. <ref>. As the sensors approach a NLOS condition (i.e., the Rician factors κ_k→0) IS and IGMM (recalling P_D,k≥ P_F,k) rules are statistically equivalent to the NLOS rule, i.e., they collapse to an energy detection test. The proof is obtained by substituting κ_k=0 in Eqs. (<ref>) and (<ref>), which respectively gives Λ_IS=‖y‖^2 (∑_k=1^Kβ_k/(Kσ_w^2)) and Λ_IGMM=2‖y‖^2 [∑_k=1^Kβ_k(P_D,k-P_F,k)] / [(∑_k=1^Kβ_k P_D,k+σ_w^2)(∑_k=1^Kβ_k P_F,k+σ_w^2)]. The latter result follows from 𝔼{y|ℋ_i}=0_2N and Σ_y|ℋ_i=(∑_k=1^Kβ_k P_i,k+σ_w^2) I_2N (since under the NLOS assumption R=O_K implies Ã(θ)=O_N× K and ν_k=β_k, respectively). Therefore it is apparent that IS and IGMM (assuming (P_D,k-P_F,k)≥0) rules become statistically equivalent to NLOS rule. The above lemma states that IS and IGMM rules are both statistically equivalent to NLOS rule when each sensor has only a purely scattered component.
Indeed, in such a case (as also supported intuitively), only a dependence on ‖y‖^2 is relevant in the design of a fusion rule for the binary hypothesis test under consideration (i.e., all the mentioned decision procedures collapse into the received energy test). Accordingly, IS rule, being based on a weighted MRC-ED combination (see Eq. (<ref>)), exploits only the non-coherent term in the NLOS case. Similarly, IGMM rule, being based on the second-order characterization of y|ℋ_i, simplifies, as in the NLOS scenario the two hypotheses manifest themselves with a sole change of variance in the received signal (i.e., no mean or covariance structure modification). However, in the case of conditionally i.i.d. decisions, a stronger result can be proved for IS and IGMM rules, as described by the following lemma. In the case of conditionally i.i.d. decisions, viz. P(x|ℋ_i)=∏_k=1^K P(x_k|ℋ_i) and (P_D,k,P_F,k)=(P_D,P_F) (recalling that P_D>P_F), and under a “weak-LOS assumption” (quantified as P_i(1-P_i) λ_min(Ã(θ) Ã(θ)^†)≪σ_e,i^2), IS and IGMM rules are approximately statistically equivalent. We begin by observing that, under the conditionally i.i.d. assumption, the covariance of y|ℋ_i simplifies to (since Σ_x|ℋ_i=P_i(1-P_i) I_K): Σ_y|ℋ_i=P_i(1-P_i) Ã(θ) Ã(θ)^†+σ_e,i^2 I_2N. Then, we express it in terms of the eigendecomposition (Ã(θ) Ã(θ)^†)=(U_M Λ_M U_M^†), that is Σ_y|ℋ_i=U_M [P_i(1-P_i)Λ_M+σ_e,i^2 I_2N] U_M^†. If P_i(1-P_i) λ_min(Ã(θ) Ã(θ)^†)≪σ_e,i^2 holds, we can safely approximate Σ_y|ℋ_i≈(U_M σ_e,i^2 U_M^†). We refer to this assumption as a “weak-LOS” one since, as all the κ_k's get low in R, all the eigenvalues in Λ_M get small while σ_e,i^2=(P_i ∑_k=1^Kν_k+σ_w^2) increases. Also, we notice that IGMM rule in Eq. (<ref>) is statistically equivalent to: y^† (Σ_y|ℋ_0^-1-Σ_y|ℋ_1^-1) y +2 y^†(Σ_y|ℋ_1^-1𝔼{y|ℋ_1}-Σ_y|ℋ_0^-1𝔼{y|ℋ_0}). Thus, by exploiting the aforementioned approximation in Eq. (<ref>), Λ_IGMM is shown to be approximately expressed as: ‖y‖^2 [1/σ_e,0^2-1/σ_e,1^2]+2 y^†[𝔼{y|ℋ_1}/σ_e,1^2-𝔼{y|ℋ_0}/σ_e,0^2]. Then, after a few manipulations (and exploiting the definitions of σ_e,i^2 and 𝔼{y|ℋ_i}, respectively), Eq. (<ref>) can be rewritten as: [2(P_D-P_F)/(σ_e,0^2 σ_e,1^2)] (∑_k=1^Kν_k ‖y‖^2+σ_w^2 y^†Ã(θ) 1_K) = [2K (P_D-P_F) σ_w^2/(σ_e,0^2 σ_e,1^2)] ((ν̅/σ_w^2)‖y‖^2+2ℜ(μ̅^†y)), which is apparently the IS rule (except for an irrelevant positive scalar, recalling P_D>P_F). This concludes the proof. We underline that Lem. <ref> does not include Lem. <ref>, since at relatively low Rician factors for all the sensors (λ_min(Ã(θ) Ã(θ)^†) gets low, whereas σ_e,i^2 increases), the data covariance matrix under ℋ_i will be approximately diagonal, while the difference of the mean terms 𝔼{y|ℋ_i} will not be negligible. In the latter case, IGMM will exhibit the same linear-quadratic dependence on the data as the IS rule. In other terms, it will reduce to a weighted MRC-ED combination (see (<ref>) and (<ref>), respectively). In this region, however, NLOS rule does not perform as well as those statistics, since its dependence is only on ‖y‖^2. Moreover, it is worth noticing that the weak-LOS assumption P_i(1-P_i) λ_min(Ã(θ) Ã(θ)^†)≪σ_e,i^2 is also likely to be satisfied in a low-SNR regime (i.e., high σ_w^2, the right-hand side increases) and for “good-quality” sensors (i.e., (P_D,P_F)→(1,0), the left-hand side decreases). Finally, we look at the extreme case given by the IS assumption. In this case, IS rule is statistically equivalent to the LLR (by construction, cf. Sec. <ref>).
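These equivalences lend themselves to quick numerical sanity checks. A minimal sketch (our own, with purely illustrative parameters) verifies that, for κ_k=0, Λ_IS of Eq. (<ref>) collapses onto a positive multiple of the NLOS energy statistic:

import numpy as np
rng = np.random.default_rng(1)

N, K = 4, 6
sigma_w2 = 1.0
beta = rng.uniform(0.5, 2.0, K)
kappa = np.zeros(K)                    # NLOS limit of the first lemma
b = np.sqrt(kappa / (1.0 + kappa))
nu = beta * (1.0 - b**2)               # here nu_k = beta_k
mu = np.zeros((N, K), dtype=complex)   # received LOS terms vanish since b_k = 0

y = rng.standard_normal(N) + 1j * rng.standard_normal(N)
mu_bar, nu_bar = mu.mean(axis=1), nu.mean()
Lambda_IS = 2 * np.real(mu_bar.conj() @ y) + (nu_bar / sigma_w2) * np.linalg.norm(y)**2
print(np.isclose(Lambda_IS, (nu_bar / sigma_w2) * np.linalg.norm(y)**2))  # True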
On the other hand, we are able to prove the following asymptotic equivalence properties among WL and IGMM rules, reported in the following lemma. Under the “IS assumption”, IGMM rule is statistically equivalent to IS rule (thus attaining optimum performance), while WL rules are statistically equivalent and are given by the sole “widely-linear” part of IS rule in Eq. (<ref>). We start by recalling the statistical equivalence of IGMM rule to Eq. (<ref>). Then, we observe that the IS assumption straightforwardly implies ρ_1=1_K (resp. ρ_0=0_K) and Σ_x|ℋ_i=O_K. Thus Eq. (<ref>) specializes into: y^† [(σ_e,1^2-σ_e,0^2)/(σ_e,0^2 σ_e,1^2)] y+(2/σ_e,1^2) y^†Ã(θ) 1_K = (2K/σ_e,1^2) [‖y‖^2 (ν̅/σ_e,0^2)+2ℜ{μ̅^†y}], which is related to IS rule via an irrelevant positive constant (we recall that, under the IS assumption, σ_e,1^2=(∑_k=1^Kν_k+σ_w^2) and σ_e,0^2=σ_w^2 hold). This proves the first part of the lemma. By similar reasoning, it can be shown that both WL rules in Eq. (<ref>), under the IS assumption, coincide with: y^† (Ã(θ) 1_K/‖Ã(θ) 1_K‖)=(√(2)/‖μ̅‖) ℜ{μ̅^†y}. It is apparent that the right-hand side of Eq. (<ref>) is proportional to the first contribution of IS rule in Eq. (<ref>), thus completing the proof. Therefore, when sensors are ideal, IGMM rule will be statistically equivalent to IS (viz. LLR) rule, as no covariance structure change happens when one of the two hypotheses is in force. Differently, WL rules, lacking a ‖y‖^2 dependence, do not reduce to a weighted MRC-ED combination. For this reason, we expect that when the WSN operates with “good-quality” sensors, WL rules will experience some performance loss with respect to IS and IGMM rules. § JAMMER (SUBSPACE) INTERFERENCE ENVIRONMENT In this section, we complicate the model in Eq. (<ref>) and assume the presence of jamming devices operating on the WSN-DFC communication channel. More specifically, we model the jamming signal as an r-dimensional vector, whose experienced channel follows the same Rician model as the WSN at the DFC, that is y_s=y+s_J, where s_J=(A_J(ϕ) R_J+H_J (I_r-R_J^2)^1/2) D_J^1/2ψ. In Eq. (<ref>) ψ∈ℂ^r represents the (unknown deterministic) jamming signal. Similarly to the WSN, A_J(ϕ)∈ℂ^N× r, H_J∈ℂ^N× r, R_J∈ℝ^r× r and D_J∈ℝ^r× r denote the (full-rank) steering matrix (whose ℓth column is given by a(ϕ_ℓ) and depends on the angle-of-arrival ϕ_ℓ), the normalized scattered matrix (whose ℓth column satisfies h_J,ℓ∼𝒩_ℂ(0_N,I_N) and is assumed mutually independent from the others), the diagonal matrix of the Rician factors (whose ℓth element is denoted as b_ℓ,J) and the large-scale diagonal fading matrix of the jammer (whose ℓth element is denoted as β_ℓ,J), respectively. It is worth noticing that Eq. (<ref>) accounts for interfering systems with either distributed (viz. R_J and D_J are both diagonal) or co-located (viz. R_J and D_J are both scaled identity) transmitting antennas in space <cit.>. It is apparent that the former case includes the case of multiple jammers. The considered interfering source can be classified as a “constant jammer”, according to the terminology[We underline that the term “constant” may be misleading, as the definition of <cit.> implies that the jammer continuously emits a radio signal (changing with time), which is unknown at the DFC.] proposed in <cit.>.
Though it represents the simplest type of jammer, it is here considered as a first step toward the development of fusion rules robust to “smarter” jammers. In this case, the received signal y_s is conditionally distributed as: y_s|ℋ_i∼∑_x∈𝒳^K P(x|ℋ_i) 𝒩_ℂ(μ_s(x,ζ),[σ_e^2(x)+σ_J^2] I_N), where μ_s(x,ζ)≜Ã(θ) x+A_J(ϕ) ζ, σ_J^2≜∑_ℓ=1^rν_ℓ,J |ψ_ℓ|^2, ν_ℓ,J≜β_ℓ,J (1-b_ℓ,J^2) and ζ≜(R_J D_J^1/2 ψ), respectively. Hereinafter we will make the reasonable assumption that the DFC can only learn A_J(ϕ), i.e., the DFC does not have knowledge of: (i) the diagonal matrix of the Rician factors R_J, (ii) the large-scale fading diagonal matrix D_J and (iii) the actual jamming (transmitted) signal ψ. The following sub-sections are thus devoted to the design of (sub-optimal) fusion rules in the presence of the aforementioned (unknown deterministic) interference parameters. §.§ Clairvoyant LRT and GLRT In what follows, we will employ in our comparison the clairvoyant LRT as a benchmark, which (unrealistically) assumes {ψ,D_J,R_J} as known and thus implements the statistic: Λ_c-opt≜ln[∑_x∈𝒳^K P(x|ℋ_1)/[σ_e^2(x)+σ_J^2]^N exp(-‖y_s-∑_k=1^Kμ_k x_k-A_J(ϕ) ζ‖^2/[σ_e^2(x)+σ_J^2]) / ∑_x∈𝒳^K P(x|ℋ_0)/[σ_e^2(x)+σ_J^2]^N exp(-‖y_s-∑_k=1^Kμ_k x_k-A_J(ϕ) ζ‖^2/[σ_e^2(x)+σ_J^2])]. Clearly, the LRT is uniformly most powerful <cit.> and thus no other fusion rule can expect to perform better. Unfortunately, the LRT cannot be implemented, as the jamming parameters are not known in practice. For this reason, hereinafter we will devise tests which tackle the arising composite hypothesis testing problem. A widespread test for the considered problem would be the GLRT <cit.>, requiring the maximization of the pdf under both hypotheses w.r.t. the (unknown) parameter set. The GLRT has been successfully applied to different application contexts, such as spectrum sensing <cit.>, allowing important design guidelines on system level performance (in terms of optimized sensing time) <cit.>. In our case, it is not difficult to show that optimization w.r.t. {ψ,D_J,R_J} is tantamount to maximizing both pdfs w.r.t. σ_J^2 and ζ as if they were (parametrically) independent. Therefore, this yields the statistic: Λ_GLR≜ln[max_ζ,σ_J^2 p(y_s|ℋ_1)/max_ζ,σ_J^2 p(y_s|ℋ_0)]. From inspection of Eq. (<ref>), it is apparent that the GLRT has no simple implementation for this problem, because of its exponential complexity (p(y_s|ℋ_i) is a Gaussian mixture with 2^K components) and the required non-linear optimizations. Thus, exact GLRT implementation appears infeasible from a practical point of view and will not be pursued in the following. Nonetheless, we will show that the “GLRT philosophy” of Eq. (<ref>) can be exploited jointly with the simplifying assumptions that lead to the sub-optimal statistics obtained in Sec. <ref> in order to devise computationally efficient and jamming-robust fusion rules. §.§ IS-GLRT rule The GLRT in Eq. (<ref>) can be simplified under the IS assumption, i.e., P(x=1_K|ℋ_1)=P(x=0_K|ℋ_0)=1. Indeed, based on these assumptions, it holds that: y_s|ℋ_0 ∼𝒩_ℂ(A_J(ϕ) ζ, [σ_w^2+σ_J^2] I_N) and y_s|ℋ_1 ∼𝒩_ℂ(Ã(θ) 1_K+A_J(ϕ) ζ, [σ_e^2(1_K)+σ_J^2] I_N). The ML estimates of ζ under ℋ_0 and ℋ_1 are obtained respectively as <cit.>: ζ̂_0≜A_J(ϕ)^- y_s and ζ̂_1≜A_J(ϕ)^- (y_s-Ã(θ) 1_K). Hence, the concentrated likelihoods are: p_is(y_s|ℋ_0,ζ̂_0,σ_J^2)= 1/{π[σ_w^2+σ_J^2]}^N exp[-‖r_0‖^2/(σ_w^2+σ_J^2)] and p_is(y_s|ℋ_1,ζ̂_1,σ_J^2)= 1/{π[σ_e^2(1_K)+σ_J^2]}^N exp[-‖r_1‖^2/(σ_e^2(1_K)+σ_J^2)], where r_0≜[P_A_J(ϕ)^⊥ y_s] and r_1≜[P_A_J(ϕ)^⊥(y_s-Ã(θ) 1_K)], respectively.
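The projection step above is straightforward to implement. A minimal sketch (with a random stand-in for A_J(ϕ) and for the aggregate LOS term Ã(θ)1_K, and illustrative dimensions) computes the residuals r_0 and r_1:

import numpy as np
rng = np.random.default_rng(2)

N, r = 6, 2
AJ = rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r))   # stand-in for A_J(phi)
P_perp = np.eye(N) - AJ @ np.linalg.pinv(AJ)     # orthogonal projector onto the complement of range(A_J)

y_s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
m1 = rng.standard_normal(N) + 1j * rng.standard_normal(N)             # stand-in for A~(theta) 1_K
r0 = P_perp @ y_s
r1 = P_perp @ (y_s - m1)
print(np.linalg.norm(AJ.conj().T @ r0))  # ~0: the residual lies outside the jamming subspace

The statistic Λ_NL-GLR of the NLOS-GLRT rule derived next is then simply ‖r0‖^2.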
Then the ML estimates[These are straightforwardly obtained by setting ∂ln p_is(y_s|ℋ_i,ζ̂_i,σ_J^2)/∂σ_J^2=0 and accounting for the constraint σ_J^2≥0.] of σ_J^2 under ℋ_0 and ℋ_1 are obtained as <cit.>:

σ̂_J,0^2 ≜ [‖r_0‖^2/N-σ_w^2]_+, σ̂_J,1^2 ≜ [‖r_1‖^2/N-σ_e^2(1_K)]_+

Then, we substitute Eqs. (<ref>) and (<ref>) into (<ref>) and (<ref>), respectively, thus obtaining:

p_is(y_s|ℋ_0,ζ̂_0,σ̂_J,0^2)= (1/{π[σ_w^2+σ̂_J,0^2]}^N) exp[-‖r_0‖^2/(σ_w^2+σ̂_J,0^2)]
p_is(y_s|ℋ_1,ζ̂_1,σ̂_J,1^2)= (1/{π[σ_e^2(1_K)+σ̂_J,1^2]}^N) exp[-‖r_1‖^2/(σ_e^2(1_K)+σ̂_J,1^2)]

Taking ln(·) of the concentrated likelihood ratio {p_is(y_s|ℋ_1,ζ̂_1,σ̂_J,1^2)/p_is(y_s|ℋ_0,ζ̂_0,σ̂_J,0^2)} provides the final expression:

Λ_IS-GLR≜ N ln[(σ_w^2+σ̂_J,0^2)/(σ_e^2(1_K)+σ̂_J,1^2)] - ‖r_1‖^2/(σ_e^2(1_K)+σ̂_J,1^2) + ‖r_0‖^2/(σ_w^2+σ̂_J,0^2)

The proposed rule, in analogy to Sec. <ref>, will be referred to as IS-GLRT in the following.

§.§ NLOS-GLRT rule

By contrast, here we start from the NLOS assumption (κ_k=0, k∈𝒦) on the conditional received signal pdf, which gives:

y_s|ℋ_i∼∑_x∈𝒳^K P(x|ℋ_i) 𝒩_ℂ(A_J(ϕ) ζ, [σ̅_e^2(x)+σ_J^2] I_N)

where σ̅_e^2(x)≜(σ_w^2+∑_k=1^K β_k x_k). Even under such a simplifying assumption, Eq. (<ref>) still has the form of a complex Gaussian mixture with 2^K distinct components, thus being intractable from a computational point of view. Thus, we further resort to Gaussian moment matching to fit the pdf of y_s|ℋ_i to a (proper) complex Gaussian pdf as follows:

𝔼{y_s|ℋ_i}=A_J(ϕ) ζ, Σ_y_s|ℋ_i=(σ_n,i^2+σ_J^2) I_N

where we have denoted σ_n,i^2≜(∑_k=1^K P_i,k β_k+σ_w^2). Therefore, moment matching yields:

y_s|ℋ_i∼𝒩_ℂ(A_J(ϕ) ζ, [σ_n,i^2+σ_J^2] I_N)

Now, in order to obtain a GLRT-like statistic, we would need to evaluate the ML estimates of {ζ,σ_J^2} under ℋ_i for the matched model in Eq. (<ref>). This is the case for the ML estimates of ζ under ℋ_0 and ℋ_1, which are both equal to ζ̂_0 (cf. Eq. (<ref>)). After substitution, the concentrated matched likelihood of y_s|ℋ_i is:

p_nl(y_s|ℋ_i;ζ̂,σ_J^2)=(1/{π[σ_n,i^2+σ_J^2]}^N) exp[-‖r_0‖^2/(σ_n,i^2+σ_J^2)]

where r_0 has the same meaning as for the IS-GLRT rule. After substitution, it is not difficult to prove that the "moment-matched" concentrated likelihood ratio p_nl(y_s|ℋ_1;ζ̂_0,σ_J^2)/p_nl(y_s|ℋ_0;ζ̂_0,σ_J^2) is an increasing function of ‖r_0‖^2, independently of the value of the (unknown) σ_J^2, whose estimation can thus be avoided (the proof can be obtained by taking the logarithm of (<ref>) and exploiting P_D,k≥ P_F,k). Therefore, the test deciding for ℋ_1 when Λ_NL-GLR>γ, where

Λ_NL-GLR≜‖P_A_J(ϕ)^⊥ y_s‖^2

is uniformly most powerful under the NLOS assumption and after moment matching. For this reason, the present test, denoted here as NLOS-GLRT (in analogy to Sec. <ref> and with a slight abuse of terminology, since estimation of σ_J^2 is not needed for the test implementation), will be employed in our comparison.

§.§ IGMM-GLRT rule

It can be readily shown that the characterization up to the second order in Eqs. (<ref>) and (<ref>) generalizes to:

𝔼{y_s|ℋ_i}=t_i+A_J(ϕ) ζ, Σ_y_s|ℋ_i=Σ_y|ℋ_i+σ_J^2 I_N, Σ̅_y_s|ℋ_i=Σ̅_y|ℋ_i

where we have denoted t_i≜𝔼{y|ℋ_i} (cf. Eq. (<ref>)). We first match the pdf of the vector y_s|ℋ_i to that of an improper complex Gaussian vector, that is:

y_s|ℋ_i∼𝒩_ℂ(t_i+A_J(ϕ) ζ,Σ_y_s|ℋ_i,Σ̅_y_s|ℋ_i)

It is easy to verify that Eq. (<ref>) is also equivalent to the following linear model:

y_s|ℋ_i=t_i+A_J(ϕ) ζ+w_i

where w_i∼𝒩_ℂ(0_N,Σ_y|ℋ_i+σ_J^2 I_N,Σ̅_y|ℋ_i) (i.e., a zero-mean non-circular complex Gaussian vector).
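Since the derivation that follows relies on the augmented ("widely linear") representation of such improper vectors, a small sketch may help fix the convention: the augmented vector stacks y and its conjugate, and its covariance collects Σ and Σ̅ in a 2N×2N block matrix. This is a standard construction; the code below is our own illustration of it.

```python
import numpy as np

def augment(y):
    """Augmented vector [y; conj(y)] used in widely-linear processing."""
    return np.concatenate([y, np.conj(y)])

def augmented_covariance(Sigma, Sigma_bar):
    """Covariance of the augmented vector:
       [[ Sigma,            Sigma_bar ],
        [ conj(Sigma_bar),  conj(Sigma) ]]"""
    top = np.hstack([Sigma, Sigma_bar])
    bot = np.hstack([np.conj(Sigma_bar), np.conj(Sigma)])
    return np.vstack([top, bot])
```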
Therefore, when the hypothesis ℋ_i is in force, we define y_s,i≜(y_s-t_i) and exploit the SVD of A_J(ϕ)=(U_J Λ_J V_J^†), thus obtaining

y_s,i=U_J Λ_J V_J^† ζ+w_i ,

where Λ_J=[Λ_r; O_(N-r)× r] and Λ_r∈ℂ^r× r denotes the (diagonal) sub-matrix of singular values (the rank of the interference being equal to r). We then define ζ^'≜(Λ_r V_J^† ζ)∈ℂ^r and notice that ζ and ζ^' are in one-to-one correspondence. Therefore, after a left-multiplication by U_J^† (i.e., a unitary matrix, which does not entail loss of information), Eq. (<ref>) is rewritten as follows:

s_i=S ζ^'+n_i, S≜[I_r; O_(N-r)× r]

where s_i≜(U_J^† y_s,i)∈ℂ^N and n_i∼𝒩_ℂ(0_N, U_J^† Σ_y|ℋ_i U_J+σ_J^2 I_N, U_J^† Σ̅_y|ℋ_i U_J^*). Then, we can define the following augmented model:

s_i=S̅ ζ^'+n_i, S̅ ≜ [[I_r O_r; O_(N-r)× r O_(N-r)× r; O_r I_r; O_(N-r)× r O_(N-r)× r]]

where n_i∼𝒩_ℂ(0_2N,R_A,i), and we have defined R_A,i≜(Σ_A,i+σ_J^2 I_2N), Σ_A,i≜U̅_J^† Σ_y|ℋ_i U̅_J and U̅_J≜[U_J O_N; O_N U_J^*]. Hence, the (matched) pdf of the augmented vector s_i is given by <cit.>:

p_igmm(s_i;ζ^',σ_J^2|ℋ_i)= (1/(π^N det(R_A,i)^1/2)) exp[-(1/2)(s_i-S̅ ζ^')^† R_A,i^-1 (s_i-S̅ ζ^')]

In order to obtain the IGMM-GLRT rule, we need the ML estimates of {ζ^',σ_J^2}. First, the ML estimate of ζ^' from Eq. (<ref>) is readily given by ζ̂_i^'≜(S̅^† R_A,i^-1 S̅)^-1 S̅^† R_A,i^-1 s_i. After substitution, the concentrated log-likelihood is:

ln p_igmm(s_i;ζ̂_i^',σ_J^2|ℋ_i)=-N lnπ-(1/2)ln det(R_A,i)-(1/2)s_i^† [R_A,i^-1-R_A,i^-1 S̅ (S̅^† R_A,i^-1 S̅)^-1 S̅^† R_A,i^-1] s_i

We now observe that S̅ is related to a conveniently defined matrix T via a permutation matrix Γ, as shown in Eq. (<ref>) at the top of the next page. Based on the aforementioned definition, Eq. (<ref>) is rewritten as:

ln p_igmm(s_i;ζ̂_i^',σ_J^2|ℋ_i)=-N lnπ-(1/2)ln det(R_A,i)-(1/2)m_i^† [R_p,i^-1-R_p,i^-1 T (T^† R_p,i^-1 T)^-1 T^† R_p,i^-1] m_i

where m_i≜(Γ^† s_i) and R_p,i≜(Γ^† R_A,i^-1 Γ)^-1=(Γ^† R_A,i Γ) (since every permutation matrix is unitary, i.e., (Γ^†Γ)=(ΓΓ^†)=I_2N). It can be recognized in the second line of Eq. (<ref>) that the matrix in square brackets has the block structure (obtained by exploiting the simplified structure of T)

[O_2r O_2r×2(N-r); O_2(N-r)×2r R_c,i^-1]

where R_c,i^-1∈ℂ^2(N-r)×2(N-r) is the Schur complement of the block (R_p,i^-1)_1:2r of the matrix R_p,i^-1 and can be identified from R_p,i, partitioned as

R_p,i=[E_i F_i; F_i^† R_c,i]

with E_i∈ℂ^2r×2r and F_i∈ℂ^2r×2(N-r), respectively. Accordingly, the third term in Eq. (<ref>) is equivalently written as -(1/2) m_c,i^† R_c,i^-1 m_c,i, where m_c,i≜(m_i)_2r+1:2N. Furthermore, it is also apparent that R_c,i is of the form R_c,i=(Σ_c,i+σ_J^2 I_2(N-r)), where Σ_c,i≜(Γ^†Σ_A,iΓ)_(2r+1:2N). Therefore R_c,i^-1 has the eigenvalue decomposition R_c,i^-1=U_c,i [Λ_c,i+σ_J^2 I_2(N-r)]^-1 U_c,i^†. Consequently, Eq. (<ref>) can be expressed as

ln[p_igmm(s̆_i;ζ̂^',σ_J^2)]= -N lnπ-(1/2)∑_n=1^2N ln[λ_A,i,n+σ_J^2] -(1/2)∑_ℓ=1^2(N-r) |v_i,ℓ|^2/(λ_c,i,ℓ+σ_J^2) ,

where λ_A,i,n and λ_c,i,ℓ are the eigenvalues of Σ_A,i and Σ_c,i, respectively. Also, in Eq. (<ref>) we have denoted by v_i,ℓ the ℓth element of v_i≜(U_c,i^† m_c,i). We also remark that, because of the definition of Σ_A,i, the eigenvalues λ_A,i,n are equal to those of Σ_y|ℋ_i. Eq. (<ref>) can now be easily differentiated w.r.t. σ_J^2 and set to zero in order to find the stationary points.
This is achieved via the solution of the polynomial equation:

∑_n=1^2N 1/(λ_A,i,n+σ_J^2)=∑_ℓ=1^2(N-r) |v_i,ℓ|^2/(λ_c,i,ℓ+σ_J^2)^2

Clearly, given the set of stationary points (to which we must add the boundary solution σ̂_J,i^2=0), denoted σ̂_J,i^2(s), the argument corresponding to the maximum of the likelihood in Eq. (<ref>) is chosen as the actual σ̂_J,i^2, that is σ̂_J,i^2≜ arg max_σ̂_J^2(s)≥0 ln[p(s̆_i;ζ̂^',σ̂_J^2(s))]. This choice is well defined, since the objective function ln[p_igmm(s̆_i;ζ̂^',σ_J^2)]→-∞ as σ_J^2 tends to +∞. Finally, the IGMM-GLR statistic is evaluated as

Λ_IGMM-GLR≜-(1/2)∑_n=1^2N ln[(λ_A,1,n+σ̂_J,1^2)/(λ_A,0,n+σ̂_J,0^2)]-(1/2){∑_ℓ=1^2(N-r) |v_1,ℓ|^2/(λ_c,1,ℓ+σ̂_J,1^2)-∑_ℓ=1^2(N-r) |v_0,ℓ|^2/(λ_c,0,ℓ+σ̂_J,0^2)} .

The procedure for the evaluation of the IGMM-GLR statistic is summarized in Alg. <ref>.

§.§ Asymptotic equivalences in the presence of a jammer

Hereinafter, we turn our attention to the asymptotic equivalence properties of the fusion rules dealing with jammer presence, mirroring Sec. <ref>. We first observe that, in the presence of jammer interference, it is not difficult to show that a statement similar to that in Lem. <ref> does not hold, since there is a different design criterion between NLOS-GLRT and IS/IGMM-GLRT. Indeed, the former is obtained by exploiting a monotonic concentrated LLR (under the NLOS assumption, after Gaussian moment matching and implicit estimation of ζ); these assumptions allow avoiding the estimation of σ_J^2. Therefore, the NLOS-GLRT cannot be interpreted as a GLRT-like procedure in a strict sense, since it implicitly estimates only ζ. On the other hand, the IGMM-GLRT and IS-GLRT rules are both built on an estimate σ̂_J^2. Therefore, we cannot expect the three rules to have identical performance in a NLOS setting, as opposed to the "interference-free" scenario. However, an intuitive argument on their NLOS behaviour can be drawn by analyzing the forms of IS-GLR (cf. Eq. (<ref>)) and IGMM-GLR (cf. Eq. (<ref>)) under the aforementioned assumption. Indeed, assuming that the Rician factors κ_k→0 produces (after lengthy manipulations):

Λ = N ln(σ_a^2/σ_b^2)-‖r_0‖^2/σ_b^2+‖r_0‖^2/σ_a^2   if ‖r_0‖^2/N<σ_a^2
Λ = N ln(‖r_0‖^2/(Nσ_b^2))-‖r_0‖^2/σ_b^2+N   if σ_a^2≤‖r_0‖^2/N<σ_b^2
Λ = 0   if ‖r_0‖^2/N≥σ_b^2

where σ_a^2<σ_b^2 and their expressions are σ_a^2=σ_n,0^2=∑_k=1^K β_k P_F,k+σ_w^2 (resp. σ_b^2=σ_n,1^2=∑_k=1^K β_k P_D,k+σ_w^2) for IGMM-GLR and σ_a^2=σ_w^2 (resp. σ_b^2=σ̅_e^2(1_K)=∑_k=1^K β_k+σ_w^2) for IS-GLR, respectively. By looking at Eq. (<ref>), it is apparent that both statistics are increasing functions of ‖r_0‖^2 (i.e., the energy of the received signal y_s after projecting out the LOS part of the jammer interference) within [0,σ_b^2]. Therefore, the higher σ_b^2, the more safely the statistic in Eq. (<ref>) can be approximated by an increasing function of ‖r_0‖^2. Additionally, every statistic that is an increasing function of ‖r_0‖^2 will achieve the same performance as the NLOS-GLRT (we recall that this test is constructed by simply comparing ‖r_0‖^2 to a suitable threshold, cf. Eq. (<ref>)). This test is obtained without explicitly estimating σ_J^2, by claiming uniform most powerfulness (after moment matching) of the statistic ‖r_0‖^2. The use of this test allows one to avoid a performance loss attributed to the fact that, under a NLOS assumption, we are testing (after moment matching)

(σ_n,0^2+σ_J^2) under ℋ_0
(σ_n,1^2+σ_J^2) under ℋ_1

with σ_J^2 being unknown.
Clearly, if σ_J^2 must be estimated and satisfies σ_J^2≥(σ_n,1^2-σ_n,0^2), discrimination between the two hypotheses is not achievable. Indeed, the uncertainty interval of σ_J^2 (i.e., [0,+∞)) produces overlapping intervals for the overall variance under both hypotheses (i.e., [σ_n,0^2,+∞) and [σ_n,1^2,+∞), respectively) and therefore, when the aforementioned condition is satisfied, the correct hypothesis cannot be declared on the basis of a simple variance estimation. Additionally, since σ_b^2 is higher for IS-GLR than for IGMM-GLR (as P_D,k≤1, k∈𝒦), we can expect IS-GLRT to perform better than IGMM-GLRT in a NLOS WSN situation, especially when σ_J^2 becomes large (which is the case either for a jammer emitting a high-power signal or for one experiencing a mostly NLOS channel condition).

Finally, we show that an analogous form of Lem. <ref> holds for IS-GLRT and IGMM-GLRT in a setup with an operating jammer, as stated hereinafter.

Lemma. Under the "IS" assumption, the IGMM-GLRT rule is statistically equivalent to the IS-GLRT rule (and thus attains exact GLRT performance).

Proof. Clearly, under the IS assumption, IS-GLRT is statistically equivalent to the exact GLRT in Eq. (<ref>), by construction. Then, we need only show that IGMM-GLRT is statistically equivalent to IS-GLRT. Indeed, under the IS assumption, 𝔼{x|ℋ_1}=1_K, 𝔼{x|ℋ_0}=0_K and Σ_x|ℋ_i=O_K hold, respectively. Therefore, the second-order characterization needed for IGMM-GLRT in Eqs. (<ref>) and (<ref>) reduces to:

𝔼{y_s|ℋ_i}=μ_i+A_J(ϕ) ζ, Σ_y_s|ℋ_i=(σ_e,i^2+σ_J^2) I_N, Σ̅_y_s|ℋ_i=O_N

where the equalities σ_e,1^2=σ_e^2(1_K), σ_e,0^2=σ_w^2, μ_1=A(θ) 1_K and μ_0=0_N hold, respectively. It is apparent that the simplified characterization in Eqs. (<ref>) coincides with that in Eq. (<ref>). Since both rules are obtained with a GLRT-like approach, this proves their statistical equivalence.

Thus, when sensors are ideal, the IGMM-GLRT rule will be statistically equivalent to the IS-GLRT (viz. GLRT) rule, as there is no covariance structure change between the two hypotheses. On the other hand, we expect that when the WSN operates with "good-quality" sensors, NLOS-GLRT will experience some performance loss with respect to the IS-GLRT and IGMM-GLRT rules, since it does not exploit the LOS part of the sensors' channel vectors.

§ COMPLEXITY ANALYSIS

In Tab. <ref> we compare the computational complexity of the proposed rules, where 𝒪(·) indicates the usual Landau notation (i.e., the order of complexity). The results underline the computations required whenever a new y is received (assuming static parameters are pre-computed and stored in a suitable memory). First, as previously remarked, it is apparent that the optimum rule (i.e., the LLR) is infeasible, especially when K is very large. In contrast, all the proposed rules have polynomial complexity w.r.t. both K and N (as well as r, when jammer-robust rules are considered). The computational complexity of the IS rule is mainly given by the computation of the scalar product and energy needed to evaluate Eq. (<ref>), while the dominant term in the case of IS-GLRT is represented by the evaluation of the energies of r_0 and r_1, respectively (recall that the orthogonal projector of the interference can be written as P_A_J(ϕ)^⊥=U_J,⊥ U_J,⊥^†, where U_J,⊥ collects the last (N-r) columns of the singular vector matrix U_J). Similar considerations as for the IS rule hold for the NLOS rule (which simply requires ‖y‖^2), whereas the NLOS-GLRT, similarly to the IS-GLRT, first requires a projection operation, that is, the evaluation of P_A_J(ϕ)^⊥ y_s.
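To make the last remark concrete, the projection can be applied without ever forming the N×N projector explicitly; the sketch below (our own illustration) uses the equivalent identity P_A_J(ϕ)^⊥ y = y - U_r (U_r^† y), with U_r collecting the first r left singular vectors of A_J(ϕ):

```python
import numpy as np

def project_out_interference(A_J, y):
    """Compute P_perp @ y without forming the N x N projector explicitly."""
    U, _, _ = np.linalg.svd(A_J, full_matrices=False)  # U: N x r left singular vectors
    return y - U @ (U.conj().T @ y)                    # O(N r) per received vector
```

Once the SVD of the (static) steering matrix is cached, the factored form costs O(Nr) per received vector instead of the O(N^2) of a full matrix-vector product.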
Furthermore, a linear dependence on N, as for the IS and NLOS rules, holds for the WL rules (see Eq. (<ref>)). In contrast, the IGMM rule is based on the computation of a quadratic form of y, which leads to 𝒪(N^2) complexity. A higher complexity is also required by IGMM-GLRT, whose dominant terms are given by: (i) the computation of v_i (see the definition provided in Sec. <ref>) and (ii) the solution of a polynomial equation of order p_ord≜2N+4(N-r)-1. The solution is known to have a complexity 𝒪(p_ord^4 τ^2) (e.g., following the Sturm approach <cit.>), where τ is a parameter related to the bit resolution of the maximum value among the known coefficients.

§ SIMULATION RESULTS

§.§ Setup description and measures of performance

We consider sensors deployed in a 2-D circular area around the DFC (placed at the origin, with Cartesian coordinates denoted as (x_dfc,y_dfc)) with radius r_max=1000 m. Sensors are located uniformly at random (in Cartesian coordinates, denoted as (p_x,k,p_y,k), k∈𝒦) and we assume that no sensor is closer to the DFC than r_min=100 m. The large-scale fading is modelled via β_k=ξ_k(r_min/r_k)^L, where ξ_k is a log-normal random variable, i.e., 10log_10(ξ_k)∼𝒩(μ_P,σ_P^2), where μ_P and σ_P are the mean and standard deviation in dBm, respectively. Moreover, r_k denotes the distance between the kth sensor and the DFC, and L represents the path-loss exponent (for our simulations, we choose L=2). In the following, we assume (μ_P,σ_P)=(15,2) for the WSN. Additionally, we suppose that the DFC is equipped with a half-wavelength-spaced uniform linear array and that the kth sensor is seen at the DFC as a point-like source, that is

a(θ_k)=[1 e^jπcos(θ_k) ⋯ e^jπ(N-1)cos(θ_k)]^T

where clearly θ_k=arccos[x_dfc-p_x,k/y_dfc-p_y,k]. A similar procedure is employed for the generation of the jammer parameters, with reference to the case of a jamming device distributed in angular space. The sole difference is in the choice (μ_P,σ_P)=(25,2), reflecting a non-negligible jammer power received by the DFC. Also, the Rician factors of the sensors κ_k, k∈𝒦, are uniformly generated within [κ_min,κ_max]. This interval is varied in order to generate three typical scenarios corresponding to a WSN with "LOS", "Intermediate" and "NLOS" channel situations, so as to comprehensively test the proposed fusion rules. More specifically, we will consider Rician factors generated randomly as: (i) [κ_min,κ_max]=[10,20] (dB) (LOS scenario), (ii) [κ_min,κ_max]=[-10,10] (dB) (Intermediate scenario) and (iii) [κ_min,κ_max]=[-20,-10] (dB) (NLOS scenario). Similar reasoning is applied to the generation of the Rician factors for the jammer, where two different scenarios are also considered: (a) [κ_min,κ_max]=[10,20] (dB) (LOS jammer) and (b) [κ_min,κ_max]=[-10,10] (dB) (weak-LOS jammer).

The three generated WSN examples are shown in Fig. <ref>, where the corresponding angles-of-arrival (θ_k, k∈𝒦) and the averaged total received and LOS powers per antenna ((β_k,b_k^2β_k), k∈𝒦) are shown for the case of K=14 sensors. Also, in each of the subfigures, we illustrate the corresponding DOAs (ϕ_ℓ, ℓ=1,…,r) and the averaged total received and LOS powers per antenna ((β_ℓ,J,b_ℓ,J^2β_ℓ,J), ℓ=1,…,r) of a jammer distributed in the angular space with r=2, whose Rician factors are generated according to scenarios (a) (LOS jammer scenario) and (b) (weak-LOS jammer scenario), respectively. Finally, for simplicity we assume conditionally i.i.d. decisions, that is P(x|ℋ_i)=∏_k=1^K P(x_k|ℋ_i) with (P_1,P_0)=(P_D,P_F)=(0.5,0.05).
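The geometry and channel statistics just described are easy to reproduce; the sketch below (with our own seed, and with the simplification of drawing the angles θ_k directly rather than from Cartesian positions) generates sensor distances, the large-scale gains β_k and the ULA steering vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 14, 6
r_min, r_max, L = 100.0, 1000.0, 2           # meters, path-loss exponent
mu_P, sigma_P = 15.0, 2.0                    # log-normal shadowing parameters (dBm)

# Uniform sensor placement in the annulus r_min <= r_k <= r_max around the DFC
r_k  = rng.uniform(r_min, r_max, K)
th_k = rng.uniform(0, 2 * np.pi, K)          # angles seen at the DFC (simplified)

# beta_k = xi_k (r_min / r_k)^L with 10 log10(xi_k) ~ N(mu_P, sigma_P^2)
xi_k   = 10 ** (rng.normal(mu_P, sigma_P, K) / 10)
beta_k = xi_k * (r_min / r_k) ** L

# Half-wavelength ULA steering vectors a(theta_k), stacked as columns
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.cos(th_k)))
```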
Under this conditionally i.i.d. assumption, ρ_i=P_i 1_K and Σ_x|ℋ_i=P_i(1-P_i) I_K hold, respectively. The performance of the proposed rules is analyzed in terms of the system probabilities of false alarm and detection, defined respectively as

P_F_0≜ P(Λ>γ|ℋ_0), P_D_0≜ P(Λ>γ|ℋ_1),

with Λ representing the statistic associated with the generic fusion rule and γ the corresponding threshold.

§.§ Fusion Rules Comparison

P_D_0 vs. noise level σ_w^2 (no interference): First, the scenario with no jammer is addressed. In Figs. <ref>, <ref> and <ref>, we show P_D_0 vs. σ_w^2 under the constraint P_F_0=0.01 for the "LOS", "Intermediate" and "NLOS" setups, respectively (K=14 sensors and N∈{2,6} antennas at the DFC). Clearly, the LLR performs best among all the considered rules. Secondly, the WL rules are very close to the LLR in the "LOS" setup (indeed, in the conditionally i.i.d. case, at high SNR and in a LOS condition, it approximately holds that z_WL,i∝(A(θ) A(θ)^†)^-1 A(θ) 1_K; that is, both WL rules approximate, through a right-pseudoinverse operation, a counting rule, which is optimal in this specific scenario), with increasing performance loss in the "Intermediate" and "NLOS" setups, respectively. Such a trend is in agreement with Lem. <ref>, which states that when the NLOS assumption is verified, the optimum statistic should possess a dependence on ‖y‖^2, which is not the case for the WL rules. Also, the IGMM, IS and NLOS rules have a performance behaviour in line with the asymptotic equivalences shown in Sec. <ref>. Clearly, in the NLOS setup the performances of the IGMM, IS and NLOS rules (almost) coincide. On the other hand, in the LOS scenario, the IS and IGMM rules are very close (the "weak-LOS" assumption is almost satisfied), while the NLOS rule experiences a certain performance loss. Finally, we underline that the benefit of an increased number of antennas is only experienced by the LLR, WL and IGMM rules. In contrast, the NLOS and IS rules benefit from a larger DFC array only in the case of low SNR or of a NLOS setup. This can be attributed to the fact that only in these conditions is there no significant (pseudo-)covariance structure change between the two hypotheses (see (<ref>) and (<ref>)). Hence the NLOS and IS rules, not exploiting (at least) a second-order characterization of y|ℋ_i, are not able to benefit from an increase in N in the remaining cases.

P_D_0 vs. noise level σ_w^2 (interference): A similar scenario is addressed in Figs. <ref>, <ref> and <ref>, where we show P_D_0 vs. σ_w^2 under the constraint P_F_0=0.01 for the "LOS", "Intermediate" and "NLOS" setups, respectively (K=14, both jammer scenarios considered, and N=6 antennas at the DFC). For the sake of completeness, the performance of the clairvoyant LRT is also reported (cf. Eq. (<ref>)). We first notice that IS-GLRT, NLOS-GLRT and IGMM-GLRT outperform the IS, NLOS and IGMM rules (whose performances are obtained by ignoring the presence of the jamming signal), respectively, unless there is significant receive noise σ_w^2 (i.e., low SNR); such a trend is more apparent when moving to a WSN-DFC channel which experiences a LOS scenario (cf. Fig. <ref>). In such a case, jammer interference suppression may come at the expense of the (partial) cancellation of some of the sensors' contributions. In particular, in a LOS scenario and at low SNR, jammer interference suppression may not be beneficial, as the scenario is noise-dominated.
On the other hand, in a LOS scenario and at high SNR, the problem becomes interference-dominated; therefore an effective jammer signal suppression significantly improves performance, even at the expense of the (partial) elimination of some sensors' contributions. The sole exception to these considerations is represented by the IGMM-GLRT in a NLOS WSN scenario (cf. Fig. <ref>), whose performance is observed to be worse than that of its interference-unaware counterpart (i.e., the IGMM rule) over the whole σ_w^2 range considered. Such evidence can be attributed to the overlapping of the unknown parameter supports under the two hypotheses, due to σ_J^2 (cf. Sec. <ref>), which prevents satisfactory performance from being achieved.

P_D_0 vs. number of antennas N (interference): The benefits of an increasing number of antennas on the jammer suppression capabilities of the designed rules are illustrated in Figs. <ref>, <ref>, and <ref>, respectively. More specifically, we show P_D_0 vs. N under the constraint P_F_0=0.01 and σ_w^2=0 dBm for the "LOS", "Intermediate" and "NLOS" setups, respectively. First of all, we notice that P_D_0 for all the "interference-aware" fusion rules increases with N. Furthermore, the gain with respect to their corresponding "interference-unaware" counterparts improves as well. This is true in the case of IS-GLRT and NLOS-GLRT for all the scenarios considered, since the considered noise level σ_w^2 implies a moderate SNR and the jamming-suppression capabilities improve with higher N. Again, the only exception is given by the IGMM-GLRT in the NLOS case, given the overlapping of the unknown parameter supports under both hypotheses due to σ_J^2. Looking at the specific example, in the LOS WSN scenario the IGMM-GLRT has the best trend with the number of antennas, as a significant pseudo-covariance structure change between the hypotheses is implied in such a scenario (therefore a second-order characterization of y|ℋ_i is beneficial). In contrast, in the "Intermediate" and "NLOS" WSN setups, IS-GLRT and NLOS-GLRT represent the best alternatives, with IS-GLRT slightly outperforming NLOS-GLRT.

§ CONCLUSIONS

In this paper we studied channel-aware DF in a WSN with interfering sensors whose channels are modelled as Rician and whose NLOS components are not known at the DFC (i.e., they are not estimated), focusing on anomaly detection problems. We developed five sub-optimal fusion rules (i.e., the IS, NLOS, the two WL, and the IGMM rules) in order to deal with the exponential complexity of the LRT. For the present setup, the following performance trends have been observed:

* In a WSN with a LOS setup, the WL rules represent the best (and most convenient) alternative to the LLR, whereas the same rules suffer from severe performance degradation in a NLOS setup. On the other hand, the NLOS rule is mainly appealing in a NLOS setup, as is the IS rule, which also achieves satisfactory performance in a weak-LOS condition. Indeed, in the latter case they are both able to exploit an increase in the number of receive antennas, as well as in the case of low SNR. Finally, the IGMM rule, exploiting a second-order characterization of the received vector under both hypotheses, has the most appealing performance when considering all three scenarios.

Subsequently, we considered a scenario with a (possibly distributed) "Rician" jamming interference and tackled the resulting composite hypothesis testing problem within the GLRT framework.
More specifically, we developed sub-optimal GLRT-like decision rules which extend the IS, NLOS and IGMM rules to the case of subspace interference. With reference to these rules, the following trends have been observed:

* All the considered "interference-aware" rules (IS-GLRT, NLOS-GLRT and IGMM-GLRT) significantly outperform their "interference-unaware" counterparts in the case of a moderate-to-high SNR level and a non-negligible LOS condition, as in such a case system performance is interference-dominated and thus interference suppression leads to a remarkable gain. Also, it has been shown that all these rules benefit from an increase in N through enhanced interference suppression, with the sole exception of the IGMM-GLRT in a NLOS case (due to a lack of identifiability). Numerical evidence has also underlined the appeal of IGMM-GLRT and IS-GLRT in LOS and Intermediate/NLOS setups, respectively.

Finally, the asymptotic equivalences established among all these rules in both interference-free and interference-prone scenarios were confirmed by simulations. Future research tracks will concern the theoretical performance analysis of the proposed rules and the design of advanced fusion schemes robust to smarter jammers.

In this appendix we provide the second-order characterization of y|ℋ_i. The mean vector 𝔼{y|ℋ_i} is evaluated as:

𝔼{y|ℋ_i}=𝔼{H̅D^1/2x+w|ℋ_i}= 𝔼{H̅} D^1/2 𝔼{x|ℋ_i}=A(θ) ρ_i

where we have exploited 𝔼{w}=0_N and the statistical independence between fading coefficients and sensor decisions, respectively. Finally, in Eq. (<ref>) we have recalled the definitions of the matrix A(θ), whose kth column equals μ_k=b_k √(β_k) a(θ_k), and of ρ_i=[P_i,1 ⋯ P_i,K]^T. In turn, the covariance matrix is expressed as:

Σ_y|ℋ_i= 𝔼{(y-A(θ) ρ_i) (y-A(θ) ρ_i)^† |ℋ_i}= A(θ) 𝔼{(x-ρ_i)(x-ρ_i)^T |ℋ_i} A(θ)^† + 𝔼{(H B_s x)(H B_s x)^† |ℋ_i}+𝔼{ww^†}= A(θ) Σ_x|ℋ_i A(θ)^†+ 𝔼{(∑_k=1^K h_k √(ν_k) x_k)(∑_ℓ=1^K h_ℓ^† √(ν_ℓ) x_ℓ)|ℋ_i} +σ_w^2 I_N

where B_s≜(I_K-R^2)^1/2 D^1/2 and we recall ν_k=(1-b_k^2)β_k. The second term in Eq. (<ref>) can be simplified as

𝔼{(∑_k=1^K h_k √(ν_k) x_k)(∑_ℓ=1^K h_ℓ^† √(ν_ℓ) x_ℓ)|ℋ_i} = ∑_k=1^K 𝔼{h_k h_k^†} ν_k 𝔼{x_k^2|ℋ_i}=∑_k=1^K ν_k P_i,k I_N

which follows from the mutual independence of the vectors h_k, k∈𝒦. Then, substituting Eq. (<ref>) back into Eq. (<ref>) gives:

Σ_y|ℋ_i=A(θ) Σ_x|ℋ_i A(θ)^†+σ_e,i^2 I_N

where σ_e,i^2≜[∑_k=1^K ν_k P_i,k+σ_w^2]. Analogously, we can evaluate the pseudo-covariance of y|ℋ_i as

Σ̅_y|ℋ_i=𝔼{(y-A(θ) ρ_i) (y-A(θ) ρ_i)^T |ℋ_i}=A(θ) 𝔼{(x-ρ_i)(x-ρ_i)^T |ℋ_i} A(θ)^T + 𝔼{(H B_s x)(H B_s x)^T |ℋ_i}=A(θ) Σ_x|ℋ_i A(θ)^T+ 𝔼{(∑_k=1^K h_k √(ν_k) x_k)(∑_ℓ=1^K h_ℓ^T √(ν_ℓ) x_ℓ)|ℋ_i}

since 𝔼{w w^T}=O_N (i.e., the noise is assumed circular). Also, it can be shown that the second term in Eq. (<ref>) is a null matrix,

𝔼{(∑_k=1^K h_k √(ν_k) x_k)(∑_ℓ=1^K h_ℓ^T √(ν_ℓ) x_ℓ)|ℋ_i} = ∑_k=1^K 𝔼{h_k h_k^T} ν_k 𝔼{x_k^2|ℋ_i}=O_N

since the NLOS fading vector h_k is assumed circular (i.e., 𝔼{h_k h_k^T}=O_N). Therefore the final expression for the pseudo-covariance is:

Σ̅_y|ℋ_i=A(θ) Σ_x|ℋ_i A(θ)^T

In general, Eq. (<ref>) is not a null matrix, thus motivating augmented-form processing.
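As a quick numerical sanity check of the derivation above, the covariance expression Σ_y|ℋ_i=A(θ) Σ_x|ℋ_i A(θ)^†+σ_e,i^2 I_N can be verified by Monte Carlo simulation; the sketch below uses purely illustrative parameters and conditionally i.i.d. decisions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, T = 4, 6, 100_000
sigma_w2, P_i = 0.1, 0.5                       # noise power, P(x_k = 1 | H_i)

b    = rng.uniform(0.2, 0.9, K)                # Rician factors b_k
beta = rng.uniform(0.5, 1.5, K)                # large-scale gains beta_k
nu   = (1 - b**2) * beta                       # nu_k = (1 - b_k^2) beta_k
theta = rng.uniform(0, np.pi, K)
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.cos(theta))) * (b * np.sqrt(beta))

x = (rng.random((T, K)) < P_i).astype(float)   # i.i.d. sensor decisions
H = (rng.standard_normal((T, N, K)) + 1j * rng.standard_normal((T, N, K))) / np.sqrt(2)
w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((T, N)) + 1j * rng.standard_normal((T, N)))

# y = A(theta) x + sum_k h_k sqrt(nu_k) x_k + w
y = x @ A.T + np.einsum('tnk,k,tk->tn', H, np.sqrt(nu), x) + w

y0 = y - y.mean(axis=0)
Sigma_mc = y0.T @ y0.conj() / T                # sample covariance E{(y - m)(y - m)^H}
Sigma_th = P_i * (1 - P_i) * (A @ A.conj().T) + (P_i * nu.sum() + sigma_w2) * np.eye(N)
print(np.abs(Sigma_mc - Sigma_th).max())       # small, up to Monte Carlo error
```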
http://arxiv.org/abs/1702.07915v1
{ "authors": [ "D. Ciuonzo", "A. Aubry", "V. Carotenuto" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170225161129", "title": "Rician MIMO Channel- and Jamming-Aware Decision Fusion" }
§ INTRODUCTION

If there is one thing the astro- and/or particle physics communities agree on, it is that one of the major questions of contemporary science is about the true nature of Dark Matter (DM). This substance makes up more than 80% of the full matter content of the Universe, but up to now we have not yet figured out what is behind it. While we are somewhat certain of several aspects of DM (its energy density, its rough distribution today, and the fact that it was crucial for cosmic structure formation), there are other things we do not know precisely (its identity, its production mechanism, its momentum distribution after production). While there are strong hints towards DM being made out of new elementary particles, our partial lack of knowledge suggests that we should question old beliefs and possibly re-evaluate certain statements made in the literature.

In the literature, the "standard" candidate for DM is regarded to be a Weakly Interacting Massive Particle (WIMP). However, while WIMPs do have a lot of theory motivation behind them, we cannot ignore the fact that we have not yet seen an unambiguous signal. On the contrary, on-going experiments push our limits further and further, such that, given the lack of a detection, we must start to seriously consider alternative possibilities.

Taking a step back, we should ask which non-WIMP particle ticks all the boxes for DM. One natural candidate is a sterile neutrino N. This particle with mass m_N is electrically neutral and interacts even more feebly than an "active" neutrino ν_α (with flavours α = e, μ, τ), namely by vertices suppressed by the small active-sterile mixing angle θ_α. Given that sterile neutrinos can be rather massive, they would certainly be able to act as DM, provided that they are produced in the early Universe in the right amount and with a suitable momentum spectrum <cit.>.

Before we start, let us remark that the most complete source of information and references in existence at the moment is the White Paper on keV sterile neutrino Dark Matter <cit.>.

§ NON-THERMAL DM PRODUCTION AND HOW TO PRODUCE KEV STERILE NEUTRINOS

Studying the literature, one finds a very common statement about sterile neutrino DM which is, however, only correct in a tiny fraction of all cases. The prejudice is that sterile neutrino DM would be warm. The origin of this statement is in part historic, but when looking at the details it is often just not the case. Formally, a thermal distribution would amount to the following momentum spectrum,

thermal spectrum: f_th(p,T) = 1/(e^√(p^2 + m^2)/T ± 1)   (+ for fermions, - for bosons),

where p is the DM momentum, m is its mass, and T is its temperature. A thermal distribution is essentially featureless and is characterised well by a single scale (e.g. its average momentum ⟨p⟩ or its temperature T), cf. the black curve in the left Fig. <ref>. Given that, one would speak of cold, warm, or hot DM if T≪m, T≈m, T≫m, respectively.

However, in nearly all cases, sterile neutrino DM features a non-thermal momentum spectrum. Such a spectrum can have all kinds of structures, and the only restrictions are those which force it to be physical,

non-thermal spectrum: f_nth(p) > 0 ∀ p, ∫ dp p^2 f_nth(p) < ∞,

where the latter requirement arises from the necessity for the DM density to be finite. Note that, contrary to the thermal distribution from Eq. (<ref>), no temperature-like quantity can be defined for a non-thermal distribution.
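As a simple worked illustration of Eq. (<ref>), the sketch below computes the mean momentum of a massless thermal fermion spectrum by numerical integration; the familiar result ⟨p⟩ ≈ 3.15 T makes explicit how a single scale characterises the whole distribution.

```python
import numpy as np
from scipy.integrate import quad

def f_thermal(p, T, m=0.0, fermion=True):
    """Thermal occupation of Eq. (1): 1 / (exp(sqrt(p^2 + m^2) / T) +/- 1)."""
    s = 1.0 if fermion else -1.0
    return 1.0 / (np.exp(np.sqrt(p**2 + m**2) / T) + s)

def mean_momentum(T, m=0.0):
    num, _ = quad(lambda p: p**3 * f_thermal(p, T, m), 0, 50 * T)
    den, _ = quad(lambda p: p**2 * f_thermal(p, T, m), 0, 50 * T)
    return num / den

print(mean_momentum(T=1.0))   # ~3.151 for massless fermions
```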
Schematically, a non-thermal distribution can look very similar to a thermal one (see, e.g., the blue and red curves in Fig. <ref>), or it can be vastly different and, e.g., feature several momentum scales, like the green curve in Fig. <ref>. In any case, a single number such as the temperature is insufficient to characterise such a distribution. This is what makes it non-trivial to decide whether or not a given DM momentum distribution is allowed or disfavoured by cosmic structure formation.

For the case of keV sterile neutrinos, several production mechanisms are discussed, out of which all but the very last one exhibit non-thermal spectra:

* non-resonant production: First studied by Langacker <cit.>, linked to DM by Dodelson and Widrow <cit.>. Detailed computations are of a newer date <cit.>. This mechanism is nowadays known to be excluded by observations.

* resonant production: First studied by Enqvist and collaborators <cit.>, linked to DM by Shi and Fuller <cit.>. Detailed computations are of a newer date <cit.>. This mechanism is allowed (but constrained) by observations.

* (scalar) decay production: Several studies are available (see, e.g., Refs. <cit.> for different cases of parent particles). This mechanism is in good agreement with observations.

* thermal overproduction with subsequent entropy dilution: Discussed for keV sterile neutrinos in <cit.>. Non-trivial to bring into agreement with Big Bang Nucleosynthesis.

Out of these, the second and third mechanisms are discussed most frequently, which is why we will put our main focus on those two here.

§ PRODUCTION OF KEV STERILE NEUTRINOS BY ACTIVE-STERILE MIXING

The most natural approach is to produce sterile neutrinos by their admixture to the active-neutrino sector <cit.>, which exploits the fact that any reaction producing active neutrinos can also produce steriles – only with a much smaller probability – as long as some active-sterile mixing angles are non-zero. This could gradually produce enough sterile neutrinos to explain the observed amount of DM, but the resulting spectrum would feature too many particles with rather large momenta, making this setting similar to a hot DM case and thus excluded by data.

However, this could change if a lepton number asymmetry is present in the early Universe, a suitable value of which can resonantly enhance the transitions from active to sterile neutrinos <cit.>. While the origin of such an asymmetry has to be explained first, its effects have been computed. Basically, if the asymmetry is large enough, it produces very many sterile neutrinos at a very specific combination of momentum and temperature. Thus, in the momentum distribution function, a sharp peak appears on top of a continuous spectrum, clearly comprising a non-thermal distribution. If this peak is located at a comparatively small momentum, i.e., if a big fraction of the sterile neutrinos are not very fast, the resulting spectrum should be "cooler" (i.e., smaller momenta are more dominant) than the one resulting from non-resonant production.

It is non-trivial to obtain statements from cosmic structure formation about non-thermal spectra, cf. the left Fig. <ref>, so new methods have to be developed to do so. Confronting resonant production with data, it turns out that the "coldest" region, which would be closest to the cold DM case, is located in a region of the parameter space that is by far excluded by the X-ray bound (i.e., from not observing the photons stemming from sterile neutrino DM decay, N →νγ).
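To make the steepness of this bound concrete: the radiative decay width scales as sin^2(2θ) m_N^5. The sketch below uses the prefactor commonly quoted in the literature, which should be regarded as an assumed external input here rather than something derived in this text.

```python
def gamma_xray(sin2_2theta, m_keV):
    """Radiative decay rate Gamma(N -> nu gamma) in 1/s.

    Standard literature scaling (prefactor assumed, not derived here):
    Gamma ~ 1.38e-29 s^-1 * (sin^2(2 theta) / 1e-7) * (m_N / keV)^5
    """
    return 1.38e-29 * (sin2_2theta / 1e-7) * m_keV**5

# Doubling the mass at fixed mixing increases the rate by a factor of 32:
print(gamma_xray(1e-10, 7.0) / gamma_xray(1e-10, 3.5))   # -> 32.0
```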
All parameter combinations that are left can be investigated to see whether they are compatible with cosmic structure formation. This has been done in Ref. <cit.>, using a modified version of the so-called extended Press-Schechter approach. Two methods have been applied to the spectra resulting from resonant production: halo counting (i.e., whether a given DM distribution produces at least as many small haloes as there are observed dwarf galaxies) and the Lyman-α method (i.e., whether the distribution of the intergalactic medium coincides with the DM distribution expected from the spectrum under consideration). The result was that, contrary to some statements in the literature which did not take into account bounds from structure formation, resonant production is already pushed by current data. Thus, depending on how aggressively the Lyman-α data is interpreted, resonant production may even be completely excluded. However, it is in any case correct to say that structure formation does yield a strong constraint on this production mechanism, cf. Fig. <ref>.

§ PRODUCTION OF KEV STERILE NEUTRINOS BY PARTICLE DECAYS

Production by particle decays is an alternative way to generate keV sterile neutrino DM, by first producing a parent particle (e.g., a singlet scalar S) which then decays into sterile neutrinos (e.g., S → N N). Typically, the decisive parameters are the Higgs portal coupling λ, which drives the production of the scalar from Standard Model particles, and the Yukawa coupling y, which drives the decay. Depending on whether the scalar itself equilibrates <cit.> or not <cit.>, different (non-thermal) spectra are possible, some resembling e.g. the green curve plotted in the left Fig. <ref>.

The currently most advanced analysis of such cases has been presented in the last Ref. <cit.>, which also confronted the resulting cases with cosmic structure formation. A resulting snapshot of the allowed parameter space, for the case of a scalar mass of m_S = 500 GeV, is depicted in the right Fig. <ref>. Here, the right (left) part of the plot corresponds to the region of large (small) λ or, in other words, to where the scalar freezes out (freezes in) before decaying. In this plot, the lines of correct abundance (regions of sizable but too low abundance) are indicated by the solid red/purple/blue lines (bands) for different sterile neutrino masses. In both cases, the red parts are forbidden by the Lyman-α data. Other bounds, such as the Tremaine-Gunn or overclosure bounds, are also displayed, along with indirect collider bounds from unitarity and from the W-boson mass correction (the latter two of which, however, only play a role in the most minimal setting possible).

A detailed analysis confirms the picture given. Decay production is, in general, less constrained by (and thus in better agreement with) cosmic structure formation, as can also be seen from Fig. <ref>. This statement remains true even if, after DM production by decays is completed, non-resonant production produces a late-time correction to the spectrum <cit.>. Even more interestingly, on top of being the production mechanism with the better agreement with data, decay production can lead to very involved spectra with two or partially even three different characteristic momentum scales. This feature can possibly be used to address the (in)famous small-scale problems of cosmic structure formation, i.e., the missing satellite, too-big-to-fail, and cusp-core problems. In order to derive bounds (e.g. from the Lyman-α forest), the last Ref.
<cit.> developed a completely new method based on the computation of the so-called squared transfer function T^2(k), which encodes information on which structures (i.e., of which size 2π/k) in space are suppressed compared to the cold DM case. This method consists of checking whether the part of the transfer function above the half-mode T^2 = 1/2 is allowed by the Lyman-α data or not, and it turns out to be extremely robust. Indeed, the green lines shown in the right Fig. <ref> arise from an alternative bound related to structure formation, namely the abundance of highly redshifted (i.e., distant) galaxies. This bound derives from completely different physics but, while it is slightly less stringent, it basically tracks the boundary between the red and purple regions, and thus strongly supports the results obtained in the last Ref. <cit.>.

§ CONCLUSIONS AND OUTLOOK

We end by looking at the current situation of keV sterile neutrino DM. In Fig. <ref>, we have collected many current limits. First of all, the most stringent limits exist on the active-sterile mixing (collectively labeled θ here, because not all the limits may apply to all generations of fermions). Clearly, for large masses, the X-ray bound (green) is the strongest, due to the rate of the N →νγ decay depending on the mass m_N to the fifth power. For smaller masses, though, the superior bound is the one stemming from not overproducing the DM by non-resonant production (yellow). Other bounds, such as those from reionisation (blue), dark radiation (red dotted), or the lifetime (black dashed), confirm the stronger bounds but cannot compete with them. On the other side, cosmic structure formation is the strongest driving force to constrain m_N. There exists a general lower bound on the sterile neutrino mass (Tremaine-Gunn bound, gray rectangle), stemming from the mere fact that sterile neutrinos are fermions, applied to the cores of galaxies. When taking into account information about the production mechanism, this bound can be strongly improved. Using the Lyman-α bound or the requirement of producing at least as many small haloes as we observe dwarf galaxies, we can in fact obtain very stringent constraints on the different production mechanisms. As can be seen from the plot, resonant production is in fact rather pushed by current constraints, for the reasons explained in Sec. <ref>, while scalar decay production, Sec. <ref>, seems in better shape. Note that, in both cases, the spectra closest to the cold DM limit have been chosen, which is a rather conservative approach.

The ways forward in this field should be clear from the plot. First and foremost, obviously, if our data on cosmic structures and, related to that, our understanding of cosmic structure formation in general improve, this may enable us to discriminate between different early-Universe production mechanisms. Of course, another direction that should be pushed for is to continue hunting for the smoking-gun X-ray signature stemming from DM decay. While the hopes were high for the Hitomi satellite, it was lost in 2016 through a chain of unfortunate events. Other searches such as NuSTAR (pink) can improve the limits – however, due to many unknowns involved, one must be careful not to overstate these advances (as, e.g., done in the first version of the NuSTAR paper <cit.>, which incorrectly stated that it would be closing in on the entire parameter space of sterile neutrino DM, while in reality it only does so for resonant production).
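To illustrate the half-mode criterion underlying the structure-formation method mentioned above, the sketch below extracts the scale at which a given squared transfer function crosses T^2 = 1/2; the adopted fitting form and its parameters are purely illustrative stand-ins for an actual model spectrum.

```python
import numpy as np

def T2_wdm_like(k, alpha=0.05, mu=1.12):
    """Illustrative squared transfer function: T^2(k) = (1 + (alpha k)^(2 mu))^(-10/mu)."""
    return (1.0 + (alpha * k) ** (2 * mu)) ** (-10.0 / mu)

def half_mode_scale(T2, k_grid):
    """Smallest grid k with T^2(k) <= 1/2 (structures below ~2*pi/k are suppressed)."""
    vals = T2(k_grid)
    return k_grid[np.argmax(vals <= 0.5)]

k = np.logspace(-2, 3, 4000)   # wavenumber grid, illustrative range
print(half_mode_scale(T2_wdm_like, k))
```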
Finally, there are also ground-based experiments trying to constrain active-sterile mixing. While several attempts such as KATRIN/TRISTAN <cit.>, ECHo <cit.>, or DyNO <cit.> are on their way, they unfortunately do not seem to be able to compete with astrophysical/cosmological limits.

Nevertheless, this field clearly offers both sufficient parameter space and the means to probe it – plus a well-motivated non-thermal DM candidate which may in particular strongly impact cosmic structure formation. We can be curious what the future holds for this field.

§ ACKNOWLEDGEMENTS

I would like to thank all my collaborators on keV sterile neutrinos, in particular Viviana Niro, Aurel Schneider, and Max Totzauer. Furthermore, I acknowledge partial support by the Micron Technology Foundation, Inc., and by the European Union through the Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements No. 690575 (InvisiblesPlus RISE) and No. 674896 (Elusives ITN). Of course, I also want to thank the organisers of NOW 2016 for a great meeting in an excellent environment. Finally, I guess that all participants of NOW 2016 may agree with me that we have learned a lot, in particular in what concerns tarantism.

[Merle:2013gea] A. Merle, Int. J. Mod. Phys. D 22 (2013) 1330020 [arXiv:1302.2625 [hep-ph]].
[Adhikari:2016bei] M. Drewes et al., JCAP 1701 (2017) 025 [arXiv:1602.04816 [hep-ph]].
[Menci:2017nsr] N. Menci, A. Merle, M. Totzauer, A. Schneider, A. Grazian, M. Castellano and N. G. Sanchez, Astrophys. J. 836 (2017) 61 [arXiv:1701.01339 [astro-ph.CO]].
[Langacker:1989sv] P. Langacker, UPR-0401T.
[Dodelson:1993je] S. Dodelson and L. M. Widrow, Phys. Rev. Lett. 72 (1994) 17 [hep-ph/9303287].
[Abazajian:2001nj] K. Abazajian, G. M. Fuller and M. Patel, Phys. Rev. D 64 (2001) 023501 [astro-ph/0101524].
[Merle:2015vzu] A. Merle, A. Schneider and M. Totzauer, JCAP 1604 (2016) 003 [arXiv:1512.05369 [hep-ph]].
[Enqvist:1990ek] K. Enqvist, K. Kainulainen and J. Maalampi, Phys. Lett. B 249 (1990) 531.
[Shi:1998km] X. D. Shi and G. M. Fuller, Phys. Rev. Lett. 82 (1999) 2832 [astro-ph/9810076].
[Ghiglieri:2015jua] J. Ghiglieri and M. Laine, JHEP 1511 (2015) 171 [arXiv:1506.06752 [hep-ph]].
[dec-WIMP] A. Kusenko, Phys. Rev. Lett. 97 (2006) 241301 [hep-ph/0609081]; A. Kusenko and K. Petraki, Phys. Rev. D 77 (2008) 065014 [arXiv:0711.4646 [hep-ph]].
[dec-FIMP] A. Merle, V. Niro and D. Schmidt, JCAP 1403 (2014) 028 [arXiv:1306.3996 [hep-ph]]; A. Merle and M. Totzauer, JCAP 1506 (2015) 011 [arXiv:1502.01011 [hep-ph]].
[Konig:2016dzg] J. König, A. Merle and M. Totzauer, JCAP 1611 (2016) 038 [arXiv:1609.01289 [hep-ph]].
[dilution] F. Bezrukov, H. Hettmansperger and M. Lindner, Phys. Rev. D 81 (2010) 085032 [arXiv:0912.4415 [hep-ph]]; M. Nemevsek, G. Senjanovic and Y. Zhang, JCAP 1207 (2012) 006 [arXiv:1205.0844 [hep-ph]]; S. F. King and A. Merle, JCAP 1208 (2012) 016 [arXiv:1205.0551 [hep-ph]].
[Schneider:2016uqi] A. Schneider, JCAP 1604 (2016) 059 [arXiv:1601.07553 [astro-ph.CO]].
[Perez:2016tcq] K. Perez, K. C. Y. Ng, J. F. Beacom, C. Hersh, S. Horiuchi and R. Krivonos, arXiv:1609.00667 [astro-ph.HE].
[Mertens:2014nha] S. Mertens et al., JCAP 1502 (2015) 020 [arXiv:1409.0920 [physics.ins-det]].
[ECHo] L. Gastaldo et al., J. Low Temp. Phys. 176 (2014) 876.
[Lasserre:2016eot] T. Lasserre, K. Altenmueller, M. Cribier, A. Merle, S. Mertens and M. Vivier, arXiv:1609.04671 [hep-ex].
http://arxiv.org/abs/1702.08430v1
{ "authors": [ "Alexander Merle" ], "categories": [ "hep-ph", "astro-ph.CO" ], "primary_category": "hep-ph", "published": "20170227184727", "title": "keV sterile neutrino Dark Matter" }
SDN as Active Measurement Infrastructure
Erik Rye, US Naval Academy, rye@usna.edu
Robert Beverly, Naval Postgraduate School, rbeverly@nps.edu
=================================================================

Two dimensional gas chromatography (GC×GC) plays a central role in the elucidation of complex samples. The automation of the identification of peak areas is of prime interest to obtain a fast and repeatable analysis of GC×GC chromatograms. To determine the concentration of compounds or pseudo-compounds, templates of blobs are defined and superimposed on a reference chromatogram. The templates then need to be modified when different chromatograms are recorded. In this study, we present a chromatogram and template alignment method based on peak registration called BARCHAN. Peaks are identified using a robust mathematical morphology tool. The alignment is performed by a probabilistic estimation of a rigid transformation along the first dimension, and a non-rigid transformation in the second dimension, taking into account noise, outliers and missing peaks in a fully automated way. Resulting aligned chromatograms and masks are presented on two datasets. The proposed algorithm proves to be fast and reliable. It significantly reduces the time to results for GC×GC analysis.

Published in Journal of Chromatography A (J. Chrom. A.), 2017, 1484, pages 65–72, Virtual Special Issue RIVA 2016 (40th International Symposium on Capillary Chromatography and 13th GCxGC Symposium) <http://dx.doi.org/10.1016/j.chroma.2017.01.003>

Keywords: Comprehensive two-dimensional gas chromatography; GC×GC; Data alignment; Peak registration; Automation; Chemometrics

§ INTRODUCTION

First introduced in 1991 by Phillips et al. <cit.>, comprehensive two-dimensional gas chromatography (GC×GC) has become in the past decade a highly popular and powerful analytical technique for the characterization of many complex samples such as food derivatives, fragrances, essential oils or petrochemical products <cit.>. In the field of the oil industry, GC×GC gives an unprecedented level of information <cit.> thanks to the use of two complementary separations combining different selectivities. It is very useful in the understanding of catalytic reactions or in the design of refining process units <cit.>.

From an instrumental point of view, much progress has been made since the early nineties on both hardware and modulation systems <cit.>. Many modulator configurations are described in the literature or are nowadays sold by manufacturers. With the use of leak-free column unions, many of these systems have become robust and easy to use, without cryogenic fluid handling, while providing high resolution. Within a series of several consecutive injections, almost no significant shifts in retention times are observed, and repeatability of experiments is nowadays a minor problem. However, reproducibility of results for detailed group-type analysis on complex mixtures is still a great challenge due to column aging, trimming or slight differences in column features. This results in shifts in retention times that can affect the proper quantification of a single compound, a group of isomers or pseudo-compounds. Experimental retention time locking (RTL) procedures have been proposed to counterbalance shifts in retention times, but these procedures must be repeated regularly <cit.>. On the way to routine analysis for GC×GC, data treatment has therefore become the preferred option to reduce the time to results <cit.>.
A common way of treating data is to quantify compounds according to their number of carbon atoms and their chemical families by dividing the 2D chromatographic space into contiguous regions that are associated with a group of isomers. This treatment benefits from the group-type structure of the chromatograms and from the roof-tile effect for a set of positional isomers. For example, for a classical diesel fuel, up to 300 or 400 regions (often referred to as blobs) may be defined. Due to the lack of robustness in retention times, this step often requires human input and is highly time-consuming when moving from one instrument to another or when columns degrade. Several hours may be necessary to correctly recalibrate a template of a few hundred blobs on a known sample. This operator-dependent step causes variability in quantitative results which is detrimental to reproducibility. To that end, 2D-chromatogram alignment methods, consisting of modifying a recently acquired chromatogram to match a reference one, have been quite an active research area.

In this paper, we propose a new algorithm called BARCHAN[The name is inspired by wind-produced crescent-shaped sand dunes (barkhan or barchan) reminiscent of 2D chromatogram shapes.] which aims at aligning chromatograms. It relies on a first peak selection step and then considers the alignment of the two point sets as a probability density estimation problem. This algorithm does not require the placement of anchor points by the user.

§ MATERIAL AND METHODS

§.§ Datasets and methods

The straight-run gas-oil sample named GO-SR which is used in this study was provided by IFP Energies nouvelles and was analyzed on different experimental set-ups. Its boiling point distribution ranges from 180 to 400 °C. Dataset 1 was built by considering two chromatograms obtained on two different experimental set-ups in the same operating conditions with cryogenic modulation. These experiments were carried out with an Agilent 7890A chromatograph (Santa Clara, California, USA) equipped with a split/splitless injector, an LN2 two-stage four-jet cryogenic modulation system from LECO (Saint-Joseph, Michigan, USA) and an FID. The two evaluated column sets were composed of a first apolar 1D HP-PONA column (20 m × 0.2 mm, 0.5 µm, J&W, Folsom, USA) and a mid-polar BPX-50 2D column (1 m × 0.1 mm, 0.1 µm, SGE, Milton Keynes, United Kingdom) connected together with Siltite microunions from SGE. Experiments were run with a constant flow rate of 1 mL/min, a temperature program from 60 °C (0.5 min) to 350 °C at 2 °C/min, a +30 °C offset for the hot jets and an 8 s modulation period. 0.5 µL of neat sample was injected with a 1/100 split ratio.

Dataset 2 includes a reference chromatogram obtained in the previous conditions and a chromatogram of the same sample obtained with a microfluidic modulation system. These data were obtained on an Agilent 7890B chromatograph equipped with a split/splitless injector, a Griffith-type <cit.> modulation system supplied by the Research Institute for Chromatography (Kortrijk, Belgium) and an FID. The modulation system consists of two Agilent CFT plates (a purged three-way and a two-way splitter) connected to an accumulation capillary. Separation was performed on a DB-1 (20 m × 0.1 mm, 0.4 µm) 1D column and a DB-17HT (10 m × 0.32 mm, 0.15 µm, J&W) 2D column.
The modulation period was set to 10 s whereas the oven programming and injection conditions were similar to the ones previously described.

§.§ Software

BARCHAN is implemented in C and Matlab. The in-house platform INDIGO runs it through a user-friendly interface while the proprietary 2DChrom® software creates template masks (.idn files) and 2D images from data.

§.§ Calculations

The quality of the alignments obtained with BARCHAN was evaluated in two different ways. The correlation coefficient CC <cit.> as well as the Structural Similarity index SSIM <cit.> between the reference chromatogram and the other one were computed. They directly match global image intensities, without feature analysis. Calculation details for CC and SSIM are provided in the supplementary material. These results were obtained on a restricted area of interest defined by the user. A second indicator of the quality of the alignment is the match quality between the adjusted template and a fully manually registered template. In practice, this featural similarity index is assessed by comparing quantitative results obtained on chemical families with the template mask and with the optimized mask.

§ THEORY

§.§ Related works

We may distinguish two classes of alignment methods: the ones that are directly performed on the full chromatographic signal, and the others which require a prior peak selection step. In the first class, the works of <cit.> and <cit.> look for shifts minimizing a correlation score between signals. In <cit.>, an affine transformation is assumed between the two chromatograms to register. The recent work of de Boer <cit.> looks for a warping function parametrized with splines that transforms the chromatogram to be registered into a chromatogram aligned with the reference. Low-degree polynomial alignment is proposed in <cit.>. Full image registration <cit.> is however limited for applications in GC×GC because of the variability in chromatograms: the positions of peaks in the two chromatograms could be similar, but this is not the case for their intensities. Therefore, the majority of alignment methods choose to first extract peaks in the reference and target chromatograms to register only the informative parts of the chromatographic images.

Thus, among approaches dedicated to chromatogram alignment, the work of <cit.> (focused on quantitative analysis) deduces local peak displacements by correlation computations in slightly shifted blocks surrounding peaks. Variations of peak patterns in different experimental conditions (e.g. temperature) are studied in <cit.>, exhibiting satisfactory results for estimating an affine transformation. Similarly, <cit.> also models rigid transformations for LC×LC (2D liquid chromatography) template alignment. However, these hypotheses appear to be too restrictive in a general setting. Therefore <cit.> extended the space of possible deformations by looking for a warping function that transforms signals. Correlation Optimized Warping (COW) is judged effective by <cit.>, which compares three different registration approaches, including target peak alignment (TPA) and semi-parametric time warping (STW), for one specific analysis. However, COW is still not satisfactory when incomplete separation and co-elution problems exist, as pointed out by <cit.>. Instead, the latter uses bilinear peak alignment in addition to COW to correct for progressive within-run retention time shifts on the second chromatographic dimension.
In <cit.>, the alignment is performed after embedding the chromatogram surfaces into a three-dimensional cylinder, and the parametrization of the transform employs polynomials. The DIstance and Spectrum Correlation Optimization (DISCO) alignment method of <cit.>, extended in <cit.>, uses an elaborate peak selection procedure followed by interpolation to perform the alignment. The approach from <cit.> also performs peak alignment via correlation score minimization using dynamic programming, comparing favorably to DISCO. Finally, the work of <cit.> performs an assessment of different alignment methods together with a new one. Their method requires a manual placement of matching peak pairs; the registration is then performed differently on each axis: linear deformations along one dimension, and a neighbor-based interpolation in a Voronoi diagram defined using the alignment anchor points for the other dimension. The linear constraint is relevant because displacements along one dimension are independent of the elution times along the other. The requirement of user-defined alignment points is robust to large variations between the reference and target chromatograms, at the expense of time-consuming marker placement.

§.§ BARCHAN methodology

A schematic view of the principles of BARCHAN is depicted in the flowchart from Figure <ref>. First, the chromatogram of the sample to analyze and the reference 2D image are loaded as image files. Then, the user is provided with a brush to surround, in a user interface, the area of interest on both the reference and the new 2D chromatograms (see Figure <ref>). Peaks are extracted in those areas (Section <ref>). Only one centroid per local maximum is retained for the point set registration in order to reduce computation times and to prevent bias for large peaks. Datasets are then assimilated to centroids of a Gaussian Mixture Model (GMM) and a weighted noise model is added. Advantage is taken from recent progress in point set registration, using a probabilistic and variational approach <cit.>. This choice is motivated by the fact that a complex transformation must be modeled while remaining robust to noise and outliers. In this context, GMMs are particularly efficient at reconstructing missing data, which is especially convenient when selected peaks in one point cloud are not included in the other one. Finally, model parameters are optimized to yield registered results. Two types of results are produced:

* if a template mask for the reference chromatogram exists, the transformation of the template points leading to a registered template mask is computed.

* an aligned chromatogram may also be produced by computing the transformation of a grid defined as the coordinates of every pixel in an image, and interpolating the target image values at the transformed coordinates (a minimal sketch of this interpolation step is given below).

Details on the calculations for every step are provided in the next paragraphs.

§.§ Feature point extraction

Despite the good behavior of the employed registration algorithm regarding noise and outliers, it is desirable to extract the most resembling point sets. Therefore, preliminary enhancement <cit.> proves useful. Inherent to the experimental procedure, fragments of the stationary phase are frequently lost by the column, resulting in the presence of hyperbolic lines in the chromatogram. Their differentiation from the real peaks is difficult to automate because of possible overlaps with the chromatogram peaks of interest.
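As a brief aside before continuing with peak extraction: the second type of output listed above (a fully aligned chromatogram) amounts to resampling the target image at transformed pixel coordinates. A minimal SciPy sketch, assuming an estimated mapping transform_points from pixel coordinates to their registered positions (a hypothetical helper standing in for the optimized transform), could read:

```python
import numpy as np
from scipy.interpolate import griddata

def warp_chromatogram(image, transform_points):
    """Resample `image` so that it is aligned with the reference: every pixel
    coordinate is pushed through the estimated transform, and the intensities
    are interpolated at the transformed positions."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    warped_pts = transform_points(pts)      # estimated (rigid + non-rigid) map
    return griddata(warped_pts, image.ravel(), (xx, yy),
                    method='linear', fill_value=0.0)
```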
§.§ Feature point extraction

Despite the good behavior of the employed registration algorithm regarding noise and outliers, it is desirable to extract the most resembling point sets. Therefore, a preliminary enhancement <cit.> proves useful. Inherent to the experimental procedure, fragments of the stationary phase are frequently shed by the column, resulting in the presence of hyperbolic lines in the chromatogram. Their differentiation from the real peaks is difficult to automate because of possible overlaps with the chromatogram peaks of interest. Therefore, in our treatments, a rough area of interest is delimited by an operator, taking approximately ten seconds. Rather than using second or third derivatives of the chromatogram <cit.>, which require non-trivial parameters to set, we employ the approach of <cit.> and extract the h-maxima of the chromatograms. Simply put, all local maxima having a height greater than a scalar h are extracted. Starting from an input signal f from ℝ^d to ℝ, the positions of the h-maxima may be obtained via a morphological opening by reconstruction, noted γ^rec(f, f-h). More specifically, this operation is defined as the supremum of all geodesic dilations of f-h by unit balls in f. More details are provided in <cit.> and a scheme is displayed in Figure <ref>.

§.§ Data alignment model

To guarantee results where the first point set is similar to the registered point set, while being robust to noise and outliers, we choose to employ a probabilistic approach. Supposing that the first point set X follows a normal distribution, the Coherent Point Drift method <cit.> seeks to estimate the probability density that explains the data X as a weighted sum of Gaussians initially centered at the second point set Y. We introduce our notations as follows. The first point set of size N×2, corresponding to the coordinates of the N peaks extracted in the target chromatogram, is denoted X={X_1, …, X_N}. The second point set Y={Y_1, …, Y_M} of size M×2 corresponds to the peak coordinates in the reference chromatogram and is assimilated to the centroids of a GMM. Each component X_i is a vector composed of two coordinates denoted X^(1)_i and X^(2)_i. The vector X^(i) denotes the i-th row of the matrix X. Adding a weighted noise model to the GMM probability density function leads to:

p(X_n) = w/N + ∑_m=1^M (1-w)/(2 M π σ^2) exp( -‖X_n - T(Y_m)‖^2/(2σ^2) ),

where the first term accounts for uniform noise weighted by the parameter w fixed between 0 and 1, σ is a variance parameter to estimate, and T is the point cloud transform to estimate. In this work, motivated by the failure of global rigid transformation attempts on our data, we model two different transforms across the two dimensions. We assume that a rigid displacement occurs along the y-axis (second, very short column), similarly to <cit.>, while non-rigid transformations are allowed on the x-axis (first, normal-length column). The underlying assumption is a relative anisotropy of the data: two separate pixels in the vertical direction are distant by a much smaller time interval than those aligned horizontally. The x-axis is thus potentially subject to more important nonlinear distortions. Thus, we model the transformation T of the point cloud Y as:

T(Y^(1)) = sY^(1) + t,  T(Y^(2)) = Y^(2) + G W,

where s and t are real numbers, respectively a scale and a translation parameter to estimate, and W is a vector of length M of non-rigid displacements to estimate. The matrix G ∈ ℝ^(M×M) is a symmetric matrix defined element-wise by: G_ij = exp( -‖Y_i - Y_j‖/(2β) ), where β is a positive scalar. The minimization of the negative log-likelihood leads to the minimization of: E_1(σ, W, s, t) = -∑_n=1^N log p(X_n). A regularization of the weights W, enforcing the motion to be smooth, is necessary for the non-rigid registration, resulting in the following variational problem:

min_{σ, W, s, t} E = E_1(σ, W, s, t) + (λ/2) tr(W^⊤ G W),

where tr denotes the trace operator of a matrix. The estimation of the parameters w, β and λ is discussed in generic terms in <cit.> and <cit.>. BARCHAN inherits a similar strategy, within the proposed combined rigid/non-rigid registration procedure.
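In Python-like terms, the model above could be sketched as follows. This is our own illustrative code, not the authors' implementation; the formulas follow the equations as restored above, and which image axis carries the rigid component is a convention of the data layout:

```python
import numpy as np

def kernel_matrix(Y, beta):
    """Smoothness kernel G with G_ij = exp(-||Y_i - Y_j|| / (2 * beta))."""
    diff = Y[:, None, :] - Y[None, :, :]
    return np.exp(-np.linalg.norm(diff, axis=2) / (2.0 * beta))

def apply_T(Y, s, t, G, W):
    """Transform T: scale/translation on one coordinate, non-rigid
    displacement G @ W on the other."""
    out = np.asarray(Y, dtype=float).copy()
    out[:, 0] = s * Y[:, 0] + t      # rigid part:     T(Y^(1)) = s Y^(1) + t
    out[:, 1] = Y[:, 1] + G @ W      # non-rigid part: T(Y^(2)) = Y^(2) + G W
    return out

def objective(X, Y, s, t, G, W, sigma2, w, lam):
    """E = E_1 + (lam / 2) * tr(W^T G W), where E_1 is the negative
    log-likelihood of the GMM-plus-uniform-noise mixture."""
    N, M = len(X), len(Y)
    TY = apply_T(Y, s, t, G, W)
    d2 = ((X[:, None, :] - TY[None, :, :]) ** 2).sum(axis=2)   # (N, M)
    p = w / N + ((1 - w) / (2 * M * np.pi * sigma2)) * np.exp(-d2 / (2 * sigma2)).sum(axis=1)
    return -np.log(p).sum() + 0.5 * lam * W @ (G @ W)
```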
The parameter w ∈ [0, 1], related to the noise level, is first determined by visual inspection among ten regularly-spaced values. Albeit found to be the most influential parameter, it is kept constant in all our experiments, since our chromatograms share about the same signal-to-noise ratio. For other data types, multiple figures illustrating different registrations, with varying amounts of noise and outliers and an appropriate choice of w, are presented in <cit.>. The determination of the other parameters β and λ is also discussed in <cit.>. We have set them to β=2 and λ=2, as by default in <cit.>. Slight changes did not affect the registration results significantly.

§.§ Optimization

We employ the Expectation-Maximization (EM) algorithm <cit.> that alternates between:

* the E step: we compute the probability P of correspondence for every couple of points.

* the M step: we estimate the parameters σ, s, t, and W.

To that end, we compute the partial derivatives of E with respect to σ, s, t, and W and set them to zero, leading to an estimate of every parameter. Details are provided in the supplementary material, as well as the final algorithm itself.
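For concreteness, here is a compact sketch (ours, simplified) of the two steps for the mixture above; the closed-form updates of s, t and W are omitted and would be obtained in the same way, by zeroing the corresponding partial derivatives:

```python
import numpy as np

def e_step(X, TY, sigma2, w):
    """E step: posterior probability P[m, n] that point X_n corresponds to
    the Gaussian centred at T(Y_m), under the GMM-plus-uniform-noise model."""
    N, M = len(X), len(TY)
    d2 = ((TY[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)    # (M, N)
    num = np.exp(-d2 / (2.0 * sigma2))
    # Constant contributed by the uniform-noise component of weight w.
    c = 2.0 * np.pi * sigma2 * w * M / ((1.0 - w) * N)
    return num / (num.sum(axis=0, keepdims=True) + c)

def m_step_sigma2(X, TY, P):
    """Part of the M step: closed-form update of sigma^2 (dimension 2)."""
    d2 = ((TY[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return (P * d2).sum() / (2.0 * P.sum())
```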
§ RESULTS AND DISCUSSION

The areas of interest for both datasets 1 and 2 were defined so that every compound present in the sample is taken into account while limiting the number of peaks due to column bleeding (Figure <ref>). The detected peaks appear as small blue dots on both chromatograms, whereas the selected areas are colored in green and delimited with a purple line. Peaks were extracted with a height parameter h from Section <ref> equal to 120 and 60 for datasets 1 and 2, respectively. Three types of transformations were evaluated: rigid transformations on both the x- and the y-axis, non-rigid transformations on both axes, and the BARCHAN transformation (non-rigid on the x-axis, rigid on the y-axis). They are compared with the algorithm of <cit.>. Significant changes in the scores, especially for the CC index, suggest a better alignment of the two chromatograms for dataset 1 with BARCHAN. However, small variations in these global indices demonstrate the need for a closer inspection of the results.

Figure <ref> shows the optimization results for the three tested transformations on dataset 1 by means of scatterplots <cit.>. Blue circles correspond to extracted peaks from the reference chromatogram, whereas red crosses represent the extracted and transformed peaks of the new chromatogram. These images show that a fully rigid transformation (Figure <ref>, top-right) does not allow a good match between the reference chromatogram and the new one. A better agreement is obtained with the BARCHAN algorithm and with the fully non-rigid transformation. However, when looking into details in some specific areas of the 2D chromatogram where the number of extracted peaks differs strongly between the reference and the new image (see red boxes at the bottom of Figure <ref>), the BARCHAN algorithm outperforms the fully non-rigid approach. The interest of BARCHAN over the fully non-rigid approach is also shown on the transformation of template masks (see supplementary material). Whereas the BARCHAN algorithm leads to a coherent transformation of the template mask, including for blobs in the upper right part of the chromatogram which are extrapolated, the fully non-rigid deformation is not relevant. To illustrate the changes modeled by BARCHAN on the chromatograms, the reference and the new chromatograms from dataset 1 are displayed in Figure <ref>, as well as the resulting aligned chromatogram.

The featural efficiency of the chromatogram alignment was evaluated from a more informative quantitative point of view on dataset 1. Three different ways of integrating the newly acquired chromatogram with 2DChrom® were tested: 1) a hundred-percent manual adjustment (MA) procedure, during which the user has moved every point of the reference template mask to make it match the new data, with only simple local or global translation tools; 2) the sole application of the BARCHAN alignment algorithm on the raw data; and 3) the combination of BARCHAN with light manual editing. The modified mask, after transformation with BARCHAN following the flowchart in Figure <ref>, is displayed in Figure <ref> for both datasets 1 and 2, together with the reference template mask on the reference analysis. Concerning dataset 1, it is clearly visible that the new analysis differs from the reference analysis despite the use of the same chromatographic conditions: the new data are slightly shifted to the left, and 2D retention times are higher, mainly due to lower elution temperatures from the 1D column. Realignment of the template mask however looks satisfactory, with an overall good match between the readjusted mask and the analysis. The same conclusions can be drawn for dataset 2, even if the changes between the reference chromatogram and the new one are huge, as these data were not obtained with the same type of modulation system. This tends to show that the algorithm is robust and is able to handle large deviations between reference and new data.

Results for the quantification of chemical families are reported in Table <ref>. These are compared with reference data previously obtained on the GO-SR sample during an intra-laboratory reproducibility study on two different chromatographs with two different users, so as to take into account both instrumental and user variability. The BARCHAN transformation leads to coherent quantitative results for every chemical family except for normal and iso-paraffins (n-C_nH_2n+2 and i-C_nH_2n+2 respectively) and, to a lesser extent, naphthenes (C_nH_2n). The quantification of n-paraffins is underestimated while that of iso-paraffins is overestimated, because of slight misalignments of the template mask, as depicted at the bottom of Figure <ref> and in Figure <ref>. Indeed, some blobs identified in the reference mask as n-paraffins or naphthenes are only a few modulation periods wide, as they correspond to single compounds. Small deviations in the alignment procedure impact the accurate quantification of these blobs. An additional manual fitting is therefore required to satisfactorily correct the transformed integration mask for these specific compounds. It consists of manually moving the points of these small blobs to make them perfectly match the measured individual peaks. The movements are generally smaller than one or two pixels on the first dimension and minor on the second dimension. This overall procedure is typically applied to 20 to 40 blobs for a classical gas-oil template mask and requires a few minutes.
When looking at the data analysis time required to correctly apply a sophisticated template mask to a new chromatogram, the complexity of the GO-SR sample and of the complex mask with its 280 blobs implies several hours of work for an experienced user with a non-automated procedure. With an anchor-point based approach, at the very least 50 similar points in both chromatograms would need to be defined, resulting in a processing time approaching one hour. In contrast, the processing time for dataset 1 was about two minutes, including the peak selection step. Nevertheless, depending on the complexity of the samples, their range of differences, and the quality of the chromatographic acquisition, the resulting masks may still require light post-processing modifications. In this case, we verified that defining typically five anchor points in an interactive registration post-processing step was enough to get a result as good as a fully manually operated one. The time saving is therefore still significant compared to manual procedures.

§ CONCLUSION

We present in this paper a 2D-chromatogram and template alignment method named BARCHAN. It is based on three key ingredients: 1) a peak registration step performed on both the reference and the target 2D chromatograms; 2) two different types of transforms: a non-rigid one on the first chromatographic dimension and a rigid one on the second; 3) the use of the probabilistic Coherent Point Drift motion estimation strategy, which is proven to be robust to noise and outliers. It results in an overall procedure that is an order of magnitude faster than the competing user-interactive alignment algorithms, with an accuracy as good as manual registration while guaranteeing a better reproducibility. This fast procedure may be of great interest when changing configurations or when translating template masks to other analyses (GC×GC–MS analyses for example). Finally, the feature point selection may benefit from the Bayesian peak tracking recently proposed in <cit.>.

§ ACKNOWLEDGMENTS

The authors would like to thank Dr de Boer for his help.

[Liu and Phillips(1991)]Liu_Z_1991_j-chromatogr-sci_comprehensive_tdgcuoctmi authorZ. Liu, authorJ.B. Phillips, titleComprehensive Two-Dimensional Gas Chromatography using an On-Column Thermal Modulator Interface, journalJ. Chromatogr. Sci. volume29 (number6) (year1991) pages227–231, doi10.1093/chromsci/29.6.227.[Adahchour et al.(2008)Adahchour, Beens, and Brinkman]Adahchour_M_2008_j-chrom-a_recent_dactdgc authorM. Adahchour, authorJ. Beens, authorU.A.T. Brinkman, titleRecent developments in the application of comprehensive two-dimensional gas chromatography, journalJ. Chrom. A volume1186 (number1-2) (year2008) pages67–108 doi10.1016/j.chroma.2008.01.002.[Meinert and Meierhenrich(2012)]Meinert_C_2012_j-angew-chem-int-ed_new_dssctdgc authorC. Meinert, authorU.J. Meierhenrich, titleA New Dimension in Separation Science: Comprehensive Two-Dimensional Gas Chromatography, journalAngew. Chem. Int. Ed. volume51 (number42) (year2012) pages10460–10470,doi10.1002/anie.201200842.[Seeley(2012)]Seeley_J_2012_j-chrom-a_recent_afcmgc authorJ.V. Seeley, titleRecent advances in flow-controlled multidimensional gas chromatography, journalJ. Chrom. A volume1255 (year2012) pages24–37,doi10.1016/j.chroma.2012.01.027.[Cortes et al.(2009)Cortes, Winniford, Luong, and Pursch]Cortes_H_2009_j-sep-sci_comprehensive_tdgcr authorH.J. Cortes, authorB. Winniford, authorJ. Luong, authorM.
Pursch, titleComprehensive two dimensional gas chromatography review, journalJ. Sep. Sci. volume32 (number5-6) (year2009) pages883–904, doi10.1002/jssc.200800654.[Vendeuvre et al.(2005)Vendeuvre, Ruiz-Guerrero, Bertoncini, Duval, Thiébaut, and Hennion]Vendeuvre_C_2005_j-chrom-a_characterization_mdctdgcgcgcpapvsamd authorC. Vendeuvre, authorR. Ruiz-Guerrero, authorF. Bertoncini, authorL. Duval, authorD. Thiébaut, authorM.-C. Hennion, titleCharacterisation of middle-distillates by comprehensive two-dimensional gas chromatography (GC × GC): A powerful alternative for performing various standard analysis of middle-distillates, journalJ. Chrom. A volume1086 (number1-2) (year2005) pages21–28,doiDOI: 10.1016/j.chroma.2005.05.106.[Bertoncini et al.(2013)Bertoncini, Courtiade-Tholance, and Thiébaut]Bertoncini_F_2013_book_gas_c2dgcpirs editorF. Bertoncini, editorM. Courtiade-Tholance, editorD. Thiébaut (Eds.), titleGas chromatography and 2D-gas chromatography for petroleum industry. The race for selectivity, publisherÉditions Technip, year2013.[Nizio et al.(2012)Nizio, McGinitie, and Harynuk]Nizio_K_2012_j-chrom-a_comprehensive_msap authorK.D. Nizio, authorT.M. McGinitie, authorJ.J. Harynuk, titleComprehensive multidimensional separations for the analysis of petroleum, journalJ. Chrom. A volume1255 (year2012) pages12–23,doi10.1016/j.chroma.2012.01.078.[Edwards et al.(2011)Edwards, Mostafa, and Górecki]Edwards_M_2011_j-anal-bioanal-chem_modulation_ctdgc20yi authorM. Edwards, authorA. Mostafa, authorT. Górecki, titleModulation in comprehensive two-dimensional gas chromatography: 20 years of innovation, journalAnal. Bioanal. Chem. volume401 (number8) (year2011) pages2335–2349,doi10.1007/s00216-011-5100-6.[Mommers et al.(2011)Mommers, Knooren, Mengerink, Wilbers, Vreuls, and van der Wal]Mommers_J_2011_j-chrom-a_retention_tlpctdgc authorJ. Mommers, authorJ. Knooren, authorY. Mengerink, authorA. Wilbers, authorR. Vreuls, authorS. van der Wal, titleRetention time locking procedure for comprehensive two-dimensional gas chromatography, journalJ. Chrom. A volume1218 (number21) (year2011) pages3159–3165,doi10.1016/j.chroma.2010.08.065.[Vendeuvre et al.(2007)Vendeuvre, Ruiz-Guerrero, Bertoncini, Duval, and Thiébaut]Vendeuvre_C_2007_j-ogst_comprehensive_tdgcdcpp authorC. Vendeuvre, authorR. Ruiz-Guerrero, authorF. Bertoncini, authorL. Duval, authorD. Thiébaut, titleComprehensive Two-Dimensional Gas Chromatography for Detailed Characterisation of Petroleum Products, journalOil Gas Sci. Tech. volume62 (number1) (year2007) pages43–55, doi10.2516/ogst:2007004.[Murray(2012)]Murray_J_2012_j-chrom-a_qualitative_qactdgc authorJ.A. Murray, titleQualitative and quantitative approaches in comprehensive two-dimensional gas chromatography, journalJ. Chrom. A volume1261 (year2012) pages58–68,doi10.1016/j.chroma.2012.05.012.[Reichenbach et al.(2012)Reichenbach, Tian, Cordero, and Tao]Reichenbach_S_2012_j-chrom-a_features_ntcsactdc authorS.E. Reichenbach, authorX. Tian, authorC. Cordero, authorQ. Tao, titleFeatures for non-targeted cross-sample analysis with comprehensive two-dimensional chromatography, journalJ. Chrom. A volume1226 (year2012) pages140–148,doi10.1016/j.chroma.2011.07.046.[Zeng et al.(2014)Zeng, Li, Hugel, Xu, and Marriott]Zeng_Z_2014_j-trac-trends-anal-chem_interpretation_ctdgcduac authorZ. Zeng, authorJ. Li, authorH.M. Hugel, authorG. Xu, authorP.J. Marriott, titleInterpretation of comprehensive two-dimensional gas chromatography data using advanced chemometrics, journalTrends Anal. Chem. 
volume53 (year2014) pages150–166,doi10.1016/j.trac.2013.08.009.[Griffith et al.(2012)Griffith, Winniford, Sun, Edam, and Luong]Griffith_J_2012_j-chrom-a_reversed-flow_dfmctdgc authorJ.F. Griffith, authorW.L. Winniford, authorK. Sun, authorR. Edam, authorJ.C. Luong, titleA reversed-flow differential flow modulator for comprehensive two-dimensional gas chromatography, journalJ. Chrom. A volume1226 (year2012) pages116–123, doi10.1016/j.chroma.2011.11.036.[de Boer and Lankelma(2014)]DeBoer_W_2014_j-chrom-a_two-dimensional_spac authorW.P.H. de Boer, authorJ. Lankelma, titleTwo-dimensional semi-parametric alignment of chromatograms, journalJ. Chrom. A volume1345 (year2014) pages193–199, doi10.1016/j.chroma.2014.04.034.[Wang et al.(2004)Wang, Bovik, Sheikh, and Simoncelli]Wang_Z_2004_j-ieee-tip_image_qaevss authorZ. Wang, authorA.C. Bovik, authorH.R. Sheikh, authorE.P. Simoncelli, titleImage Quality Assessment: From Error Visibility to Structural Similarity, journalIEEE Trans. Image Process. volume13 (number4) (year2004) pages600–612, doi10.1109/TIP.2003.819861.[van Mispelaar et al.(2003)van Mispelaar, Tas, Smilde, Schoenmakers, and van Asten]VanMispelaar_V_2003_j-chrom-a_quantitative_atcctdgc authorV.G. van Mispelaar, authorA.C. Tas, authorA.K. Smilde, authorP.J. Schoenmakers, authorA.C. van Asten, titleQuantitative analysis of target components by comprehensive two-dimensional gas chromatography, journalJ. Chrom. A volume1019 (number1-2) (year2003) pages15–29,doi10.1016/j.chroma.2003.08.101.[Pierce et al.(2005)Pierce, Wood, Wright, and Synovec]Pierce_K_2005_j-anal-chem_comprehensive_tdrtaaecactdsd authorK.M. Pierce, authorL.F. Wood, authorB.W. Wright, authorR.E. Synovec, titleA Comprehensive Two-Dimensional Retention Time Alignment Algorithm To Enhance Chemometric Analysis of Comprehensive Two-Dimensional Separation Data, journalAnal. Chem. volume77 (number23) (year2005) pages7735–7743,doi10.1021/ac0511142.[Hollingsworth et al.(2006)Hollingsworth, Reichenbach, Tao, and Visvanathan]Hollingsworth_B_2006_j-chrom-a_comparative_vctdgc authorB.V. Hollingsworth, authorS.E. Reichenbach, authorQ. Tao, authorA. Visvanathan, titleComparative visualization for comprehensive two-dimensional gas chromatography, journalJ. Chrom. A volume1105 (number1–2) (year2006) pages51–58,doi10.1016/j.chroma.2005.11.074.[Reichenbach et al.(2015)Reichenbach, Rempe, Tao, Bressanello, Liberto, Bicchi, Balducci, and Cordero]Reichenbach_S_2015_j-anal-chem_alignment_ctdgcdscd authorS.E. Reichenbach, authorD.W. Rempe, authorQ. Tao, authorD. Bressanello, authorE. Liberto, authorC. Bicchi, authorS. Balducci, authorC. Cordero, titleAlignment for Comprehensive Two-Dimensional Gas Chromatography with Dual Secondary Columns and Detectors, journalAnal. Chem. volume87 (number19) (year2015) pages10056–10063, doi10.1021/acs.analchem.5b02718.[Zitová and Flusser(2003)]Zitova_B_2003_j-image-vis-comput_image_rms authorB. Zitová, authorJ. Flusser, titleImage registration methods: a survey, journalImage Vis. Comput. volume21 (number11) (year2003) pages977–1000,doi10.1016/S0262-8856(03)00137-9.[van Mispelaar et al.(2005)van Mispelaar, Smilde, de Noord, Blomberg, and Schoenmakers]VanMispelaar_V_2005_j-chrom-a_classification_hscoudsctdgcmt authorV.G. van Mispelaar, authorA.K. Smilde, authorO.E. de Noord, authorJ. Blomberg, authorP.J. Schoenmakers, titleClassification of highly similar crude oils using data sets from comprehensive two-dimensional gas chromatography and multivariate techniques, journalJ. Chrom. 
A volume1096 (number1-2) (year2005) pages156–164,doi10.1016/j.chroma.2005.09.063.[Ni et al.(2005)Ni, Reichenbach, Visvanathan, TerMaat, and Ledford]Ni_M_2005_j-chrom-a_peak_pvrctdgca authorM. Ni, authorS.E. Reichenbach, authorA. Visvanathan, authorJ. TerMaat, authorE.B. Ledford, Jr., titlePeak pattern variations related to comprehensive two-dimensional gas chromatography acquisition, journalJ. Chrom. A volume1086 (number1–2) (year2005) pages165–170,doi10.1016/j.chroma.2005.06.033.[Reichenbach et al.(2009)Reichenbach, Carr, Stoll, and Tao]Reichenbach_S_2009_j-chrom-a_smart_tppmctdlc authorS.E. Reichenbach, authorP.W. Carr, authorD.R. Stoll, authorQ. Tao, titleSmart Templates for peak pattern matching with comprehensive two-dimensional liquid chromatography, journalJ. Chrom. A volume1216 (number16) (year2009) pages3458–3466,doi10.1016/j.chroma.2008.09.058.[Zhang et al.(2008)Zhang, Huang, Regnier, and Zhang]Zhang_D_2008_j-anal-chem_two-dimensional_cowaagcgcmsd authorD. Zhang, authorX. Huang, authorF.E. Regnier, authorM. Zhang, titleTwo-Dimensional Correlation Optimized Warping Algorithm for Aligning GC×GC–MS Data, journalAnal. Chem. volume80 (number8) (year2008) pages2664–2671,doi10.1021/ac7024317.[van Nederkassel et al.(2006)van Nederkassel, Daszykowski, Eilers, and Vander Heyden]VanNederkassel_A_2006_j-chrom-a_comparison_taca authorA.M. van Nederkassel, authorM. Daszykowski, authorP.H.C. Eilers, authorY. Vander Heyden, titleA comparison of three algorithms for chromatograms alignment, journalJ. Chrom. A volume1118 (number2) (year2006) pages199–210,doiDOI: 10.1016/j.chroma.2006.03.114.[Parastar et al.(2012)Parastar, Jalali-Heravi, and Tauler]Parastar_H_2012_j-chem-int-lab-syst_comprehensive_tdgcgcgcrtscmubpacosmcr authorH. Parastar, authorM. Jalali-Heravi, authorR. Tauler, titleComprehensive two-dimensional gas chromatography (GC–GC) retention time shift correction and modeling using bilinear peak alignment, correlation optimized shifting and multivariate curve resolution, journalChemometr. Intell. Lab. Syst. volume117 (year2012) pages80–91, doi10.1016/j.chemolab.2012.02.003.[Weusten et al.(2012)Weusten, Derks, Mommers, and van der Wal]Weusten_J_2012_j-anal-chim-acta_alignment_csgcgcmsfucm authorJ.J.A.M. Weusten, authorE.P.P.A. Derks, authorJ.H.M. Mommers, authorS. van der Wal, titleAlignment and clustering strategies for GC×GC–MS features using a cylindrical mapping, journalAnal. Chim. Acta volume726 (number0) (year2012) pages9–21,doi10.1016/j.aca.2012.03.009.[Wang et al.(2010)Wang, Fang, Heim, Bogdanov, Pugh, Libardoni, and Zhang]Wang_B_2010_j-anal-chem_disco_dscoatdgctofmsbm authorB. Wang, authorA. Fang, authorJ. Heim, authorB. Bogdanov, authorS. Pugh, authorM. Libardoni, authorX. Zhang, titleDISCO: Distance and Spectrum Correlation Optimization Alignment for Two-Dimensional Gas Chromatography Time-of-Flight Mass Spectrometry-Based Metabolomics, journalAnal. Chem. volume82 (number12) (year2010) pages5069–5081,doi10.1021/ac100064b.[Wang et al.(2012)Wang, Fang, Shi, Kim, and Zhang]Wang_B_2012_incoll_disco2_cpaatdgctofms authorB. Wang, authorA. Fang, authorX. Shi, authorS.H. Kim, authorX. Zhang, titleDISCO2: A Comprehensive Peak Alignment Algorithm for Two-Dimensional Gas Chromatography Time-of-Flight Mass Spectrometry, in: editorD.-S. Huang, editorY. Gan, editorP. Premaratne, editorK. Han (Eds.), booktitleBio-Inspired Computing and Applications, vol. volume6840 of seriesLect. Notes Comput. 
Sci., publisherSpringer,pages486–491, doi10.1007/978-3-642-24553-4_64, year2012.[Kim et al.(2011)Kim, Koo, Fang, and Zhang]Kim_S_2011_j-bmc-bioinformatics_smith-waterman_pactdgcms authorS. Kim, authorI. Koo, authorA. Fang, authorX. Zhang, titleSmith-Waterman peak alignment for comprehensive two-dimensional gas chromatography-mass spectrometry, journalBMC Bioinformatics volume12 (number1) (year2011) pages235–245,doi10.1186/1471-2105-12-235.[Gros et al.(2012)Gros, Nabi, Dimitriou-Christidis, Rutler, and Arey]Gros_J_2012_j-anal-chem_robust_aatdc authorJ. Gros, authorD. Nabi, authorP. Dimitriou-Christidis, authorR. Rutler, authorJ.S. Arey, titleRobust Algorithm for Aligning Two-Dimensional Chromatograms, journalAnal. Chem. (year2012) pages9033–9040, doi10.1021/ac301367s.[Myronenko and Song(2010)]Myronenko_A_2010_j-ieee-tpami_point_srcpd authorA. Myronenko, authorX. Song, titlePoint Set Registration: Coherent Point Drift, journalIEEE Trans. Pattern Anal. Mach. Intell. volume32 (number12) (year2010) pages2262–2275,doi10.1109/tpami.2010.46.[Ning et al.(2014)Ning, Selesnick, and Duval]Ning_X_2014_j-chemometr-intell-lab-syst_chromatogram_bedusbeads authorX. Ning, authorI.W. Selesnick, authorL. Duval, titleChromatogram baseline estimation and denoising using sparsity (BEADS), journalChemometr. Intell. Lab. Syst. volume139 (year2014) pages156–167, doi10.1016/j.chemolab.2014.09.014.[Samanipour et al.(2015)Samanipour, Dimitriou-Christidis, Gros, Grange, and Arey]Samanipour_S_2015_j-chrom-a_analyte_qctdgcambcpdmeers authorS. Samanipour, authorP. Dimitriou-Christidis, authorJ. Gros, authorA. Grange, authorJ.S. Arey, titleAnalyte quantification with comprehensive two-dimensional gas chromatography: Assessment of methods for baseline correction, peak delineation, and matrix effect elimination for real samples, journalJ. Chrom. A volume1375 (year2015) pages123–139, doi10.1016/j.chroma.2014.11.049.[Fredriksson et al.(2009)Fredriksson, Petersson, Axelsson, and Bylund]Fredriksson_M_2009_j-sep-sci_automatic_pfmlcmsdgsdf authorM.J. Fredriksson, authorP. Petersson, authorB.-O. Axelsson, authorD. Bylund, titleAn automatic peak finding method for LC-MS data using Gaussian second derivative filtering, journalJ. Sep. Sci. volume32 (number22) (year2009) pages3906–3918, doi10.1002/jssc.200900395.[Yuille and Grzywacz(1989)]Yuille_A_1989_j-ijcv_mathematical_amct authorA.L. Yuille, authorN.M. Grzywacz, titleA mathematical analysis of the motion coherence theory, journalInt. J. Comp. Vis. volume3 (number2) (year1989) pages155–175, doi10.1007/bf00126430.[Dempster et al.(1977)Dempster, Laird, and Rubin]Dempster_A_1977_j-r-stat-soc-b-stat-methodol_maximum_lidema authorA.P. Dempster, authorN.M. Laird, authorD.B. Rubin, titleMaximum Likelihood from Incomplete Data via the EM Algorithm, journalJ. R. Stat. Soc. Ser. B Stat. Methodol. volume39 (number1) (year1977) pages1–38.[Anscombe(1973)]Anscombe_F_1973_j-american-statistician_graphs_sa authorF. J. Anscombe, titleGraphs in Statistical Analysis, journalAm. Stat. volume27 (year1973) pages17–21.[Barcaru et al.(2016)Barcaru, Derks, and Vivó-Truyols]Barcaru_A_2016_j-anal-chim-acta_bayesian_ptnpamgcgcc authorA. Barcaru, authorE. Derks, authorG. Vivó-Truyols, titleBayesian peak tracking: a novel probabilistic approach to match GCxGC chromatograms, journalAnal. Chim. Acta volume940 (year2016) pages46–55, doi10.1016/j.aca.2016.09.001.
http://arxiv.org/abs/1702.07942v1
{ "authors": [ "Camille Couprie", "Laurent Duval", "Maxime Moreaud", "Sophie Hénon", "Mélinda Tebib", "Vincent Souchon" ], "categories": [ "cs.CV", "physics.data-an" ], "primary_category": "cs.CV", "published": "20170225195939", "title": "BARCHAN: Blob Alignment for Robust CHromatographic ANalysis" }
Cutoff for Ramanujan graphs via degree inflation

Jonathan Hermon
Faculty of mathematics and computer science, Weizmann Institute of Science, Rehovot, Israel. E-mail: jonathah@weizmann.ac.il.
==================================================================================================================================

Recently Lubetzky and Peres showed that simple random walks on a sequence of d-regular Ramanujan graphs G_n=(V_n,E_n) of increasing sizes exhibit cutoff in total variation around the diameter lower bound (d/(d-2)) log_d-1 |V_n|. We provide a different argument under the assumption that for some r(n) ≫ 1 the maximal number of simple cycles in a ball of radius r(n) in G_n is uniformly bounded in n.

*Keywords: Cutoff, Ramanujan graphs, degree inflation.

§ INTRODUCTION

Generically, we denote the stationary distribution of an ergodic Markov chain (X_t)_t ≥ 0 by π, its state space by Ω and its transition matrix by P. We denote by ℙ_x^t (resp. ℙ_x) the distribution of X_t (resp. of (X_t)_t ≥ 0), given that the initial state is x. The total variation distance of two distributions on Ω is ‖μ - ν‖_TV := (1/2) ∑_y |μ(y) - ν(y)|. The total variation ϵ-mixing time is t_mix(ϵ) := inf{t : max_x ‖ℙ_x^t - π‖_TV ≤ ϵ}. Next, consider a sequence of chains, ((Ω_n,P_n,π_n))_n ∈ ℕ, each with its mixing time t_mix^(n)(·). We say that the sequence exhibits a cutoff if the following sharp transition in its convergence to stationarity occurs:

∀ ϵ ∈ (0,1/2], lim_n →∞ t_mix^(n)(ϵ)/t_mix^(n)(1-ϵ) = 1.

A family of d-regular graphs G_n with d ≥ 3 is called an expander family if the second largest eigenvalues of the corresponding adjacency matrices are uniformly bounded away from d. Lubotzky, Phillips, and Sarnak <cit.> defined a connected finite d-regular graph G with d ≥ 3 to be Ramanujan if the eigenvalues of the transition matrix of simple random walk (SRW) on G all lie in {± 1} ∪ [-ρ_d, ρ_d], where ρ_d := 2√(d-1)/d is the spectral radius of SRW on the infinite d-regular tree 𝕋_d. Lubotzky, Phillips, and Sarnak <cit.>, Margulis <cit.> and Morgenstern <cit.> constructed d-regular Ramanujan graphs for all d of the form d=p^m+1, where p is a prime number. Recently, Marcus, Spielman and Srivastava <cit.> proved the existence of bipartite d-regular Ramanujan graphs for all d ≥ 3. In light of the Alon-Boppana bound <cit.>, Ramanujan graphs are “optimal expanders”, as they have asymptotically the largest spectral gap.

Let G_n=(V_n,E_n) be a sequence of finite connected d_n-regular graphs. Let P_n be the transition matrix of SRW on G_n. Denote the eigenvalues of P_n by 1 = λ_1(n) > λ_2(n) ≥ ⋯ ≥ λ_|V_n|(n) ≥ -1. We say that the sequence is asymptotically Ramanujan if |V_n| →∞ and

max{|λ_i(n)| : |λ_i(n)| ≠ 1} ≤ ρ_d_n^(1-o(1)).

We say that the sequence is asymptotically one-sided Ramanujan if |V_n| →∞, λ_2(n) ≤ ρ_d_n^(1-o(1)) and lim inf_n →∞ min{λ_i(n) : λ_i(n) ≠ -1} > -1.

Friedman <cit.> showed that a sequence of d-regular random graphs of increasing sizes is w.h.p. asymptotically Ramanujan. Our definition of asymptotically Ramanujan graphs is not the standard one. The more standard definition is that max{|λ_i(n)| : |λ_i(n)| ≠ 1} ≤ ρ_d_n + o(1).
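As a quick illustration (ours, not from the paper), the Ramanujan property is a simple spectral check; for instance the complete graph K_4 passes it, since the SRW eigenvalues other than 1 all equal -1/3 ∈ [-ρ_3, ρ_3]:

```python
import numpy as np

def is_ramanujan(adj):
    """Check whether a finite connected d-regular graph is Ramanujan.

    adj: symmetric 0/1 numpy adjacency matrix.  The SRW transition matrix
    is adj / d; the graph is Ramanujan if every eigenvalue other than +-1
    lies in [-rho_d, rho_d], where rho_d = 2 sqrt(d-1) / d.
    """
    d = int(adj[0].sum())
    eig = np.linalg.eigvalsh(adj / d)
    rho = 2.0 * np.sqrt(d - 1) / d
    bulk = [x for x in eig if abs(abs(x) - 1.0) > 1e-9]
    return all(abs(x) <= rho + 1e-9 for x in bulk)

K4 = np.ones((4, 4)) - np.eye(4)   # 3-regular complete graph
print(is_ramanujan(K4))            # True
```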
It is elementary to show that for every n-vertex d-regular graph, the 1-ϵ total variation mixing time for the SRW is at least t_d,ϵ,n := (d/(d-2)) log_d-1 n - C√(n|log ϵ|/d), for some constant C>0.[This can be derived from the fact that C can be chosen so that the distance of the walk at time t_d,ϵ,n from its starting point is at least ⌊log_d-1(n/4)⌋ with probability at most ϵ/2 (together with the fact that a ball of radius ⌊log_d-1(n/4)⌋ contains at most n/2 vertices).] The following precise formulation of this fact is due to Lubetzky and Peres <cit.>.

Let G=(V,E) be an n-vertex d-regular graph with d ≥ 3. Let c_d := 2√(d(d-1))/(d-2)^(3/2) and let Φ^-1 be the inverse function of the CDF of the standard Normal distribution. Then SRW on G satisfies

∀ ϵ ∈ (0,1), t_mix(1-ϵ-o(1)) ≥ (d/(d-2)) log_d-1 n + c_d Φ^-1(ϵ) √(log_d-1 n).

Recently, Lubetzky and Peres <cit.> showed that simple random walks on a sequence of non-bipartite d_n-regular Ramanujan graphs G_n=(V_n,E_n) of increasing sizes exhibit cutoff around the diameter lower bound (d_n/(d_n-2)) log_d_n-1 |V_n|. In this work we present an alternative argument and prove the same result under the following assumption:

Assumption 1: There exists a diverging sequence r_n such that the maximal number of simple cycles in a ball of radius r_n in G_n is uniformly bounded in n.

Let G_n=(V_n,E_n) be a sequence of non-bipartite, finite, connected, d_n-regular asymptotically one-sided Ramanujan graphs. (i) If d_n=d for all n and Assumption 1 holds, then the corresponding sequence of simple random walks exhibits cutoff around time (d/(d-2)) log_d-1 |V_n|. (ii) If d_n diverges and log d_n = o(log_d_n |V_n|), then the corresponding sequence of simple random walks exhibits cutoff around time log_d_n |V_n|.

If there is no cutoff, then cutoff must fail on some subsequence (n_k) such that either lim_k →∞ d_n_k = ∞ or d_n_k = d for all k, for some fixed d ≥ 3. Thus there is no loss of generality in assuming that either lim_n →∞ d_n = ∞ or d_n = d for all n. Assumption 1 is rather mild, as it is quite difficult to construct a family of asymptotically one-sided Ramanujan graphs violating this assumption. In particular, it is satisfied w.h.p. by a sequence of random d-regular graphs of increasing sizes <cit.>. It follows from <cit.> that if G_n is a sequence of d-regular transitive asymptotically Ramanujan graphs of increasing sizes, then lim_n →∞ girth(G_n) = ∞, where for a graph G, girth(G) denotes its girth[The girth of a graph G is the length of the shortest cycle in G.] (and so Assumption 1 holds).

The argument of Lubetzky and Peres <cit.> does not require Assumption 1 (nor the assumption log d_n = o(log_d_n |V_n|)). They studied the Jordan decomposition of the transition matrix of the non-backtracking walk[This is a random walk on the directed edges of the graph, with transition matrix P_NB((x,y),(z,w)) = 1_{z=y, w ≠ x}/(deg(y)-1).] and used it to derive cutoff for the non-backtracking walk, which for a regular graph implies cutoff also for the SRW. In this note we study the SRW by looking at it only when it crosses distance k from its previous position, for some large k.

§.§ Organization of this note

In §<ref>, as a warm up, we present an extremely simple and short proof for the occurrence of cutoff for SRW on a sequence of asymptotically Ramanujan graphs of diverging degree. In §<ref> we present some machinery for bounding mixing times using hitting times. We then apply this machinery to prove Part (ii) of Theorem <ref>. In §<ref> we give an overview of the proof of Part (i) of Theorem <ref>.
In §<ref> we prove two auxiliary results. Finally, in §<ref> we conclude the proof of Theorem <ref>.

§ A WARM UP

It turns out that for a sequence of asymptotically Ramanujan graphs of diverging degree the trivial diameter lower bound (of Lemma <ref>) is matched by the trivial spectral-gap upper bound on the L_2 mixing time obtained via the Poincaré inequality. As a warm up and motivation for what comes, we now prove the following theorem.

Let G_n=(V_n,E_n) be a sequence of non-bipartite, finite, connected, d_n-regular asymptotically Ramanujan graphs with d_n →∞. Then the corresponding sequence of simple random walks exhibits cutoff around time log_d_n |V_n|.

Note that in Part (ii) of Theorem <ref> the graphs are assumed to be only asymptotically one-sided Ramanujan. Before proving Theorem <ref> we need a few basic definitions and facts. Let

λ_* := max{|a| : a ≠ 1, a is an eigenvalue of P} and t_rel := 1/(1-λ_*).

The L_2 distance of ℙ_x^t from π is defined as ‖ℙ_x^t - π‖_2,π^2 = ∑_y π(y) (P^t(x,y)/π(y))^2 - 1. By Jensen's and the Poincaré inequalities, for all t and x we have that

4‖ℙ_x^t - π‖_TV^2 ≤ ‖ℙ_x^t - π‖_2,π^2 ≤ λ_*^2t ‖ℙ_x^0 - π‖_2,π^2 ≤ λ_*^2t/π(x).

Hence for SRW on an n-vertex regular graph we have for all t and x that

4‖ℙ_x^t - π‖_TV^2 ≤ n λ_*^2t ⟹ t_mix(ϵ) ≤ (1/2) log_1/λ_*(n ϵ^-2).

Proof of Theorem <ref>: By assumption, λ_* = ρ_d_n^(1-o(1)) = d_n^(-(1-o(1))/2). Thus (1/2) log_1/λ_* |V_n| = (1+o(1)) log_d_n |V_n|. The proof is concluded by combining (<ref>) with Lemma <ref>.

§ REPLACING THE POINCARÉ INEQUALITY BY ITS HITTING TIME ANALOG

In the proof of Theorem <ref> we exploit the general connection between mixing times and escape times from small sets, established in <cit.> (Corollary 3.1, eq. (3.2)): There exists some absolute constant C>0 such that for every reversible chain (with a finite state space),

∀ α, ϵ ∈ (0,1), t_mix(ϵ + α) ≤ hit_1-α(ϵ) + C log(1/α),

where hit_1-α(ϵ) := inf{t : max_{x,A : π(A) ≤ α} ℙ_x[T_A^c > t] ≤ ϵ} and T_B := inf{t : X_t ∈ B} is the hitting time of the set B. In the proof of Theorem <ref> we replace the naive L_2 bound used in the proof of Theorem <ref> by its hitting time counterpart: Under reversibility, for all A ⊊ Ω, a ∈ A and t ≥ 0,

π_A(a)(ℙ_a[T_A^c > t])^2 ≤ ∑_{b ∈ A} π_A(b)(ℙ_b[T_A^c > t])^2 = ‖P_A^t 1_A‖_2,A^2 ≤ [λ(A)]^2t,

where π_A is π conditioned on A, P_A is the restriction of the transition matrix P to A (this is the transition matrix of the chain which is “killed” upon escaping A), ‖f‖_2,A^2 := ∑_{b ∈ A} π_A(b) f^2(b) for f ∈ ℝ^A, and λ(A) is the largest eigenvalue of P_A. The following proposition relates λ(A) to λ_2, the second largest eigenvalue of P.

For every reversible Markov chain and any set A, λ(A) ≤ λ_2 + π(A).

Similarly to (<ref>), by (<ref>)-(<ref>) we have for every reversible chain on a finite state space with λ_2 < 1/2 and every α ∈ (0, λ_2] that

hit_1-α(√α) ≤ (1/2) |log_1/(2λ_2)(min_v π(v))|,  t_mix(2√α) ≤ (1/2) |log_1/(2λ_2)(min_v π(v))| + C log(1/α).

We are now in a position to give a short proof for Part (ii) of Theorem <ref>. Let G_n=(V_n,E_n) be a sequence of non-bipartite, finite, connected, d_n-regular asymptotically one-sided Ramanujan graphs. Assume that d_n diverges and log d_n = o(log_d_n |V_n|). Let α = α_n = d_n^(-1/2) = o(1). Let λ_2 = λ_2(n) be the second largest eigenvalue of the transition matrix of SRW on G_n. By our assumptions, 2λ_2 = d_n^(-1/2+o(1)), and so by (<ref>) we have that

t_mix(2√α) ≤ (1/2) log_1/(2λ_2) |V_n| + C log(1/α) = (1+o(1)) log_d_n |V_n|.

The proof is concluded using Lemma <ref>.
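The warm-up bound is easy to test numerically; the following fragment (ours) compares the worst-case total variation distance of SRW with the bound 4‖ℙ_x^t - π‖_TV^2 ≤ n λ_*^2t, assuming a connected, non-bipartite regular graph so that P is symmetric and π is uniform:

```python
import numpy as np

def tv_against_l2_bound(P, t_max):
    """Worst-case TV distance of SRW vs. the bound (sqrt(n)/2) * lambda_*^t.

    P: n x n doubly stochastic SRW transition matrix of a connected,
    non-bipartite regular graph (hence symmetric, with uniform pi).
    """
    n = len(P)
    pi = np.full(n, 1.0 / n)
    eig = np.linalg.eigvalsh(P)
    lam = max(abs(x) for x in eig if abs(x) < 1.0 - 1e-9)   # lambda_*
    Pt = np.eye(n)
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
        print(t, tv, 0.5 * np.sqrt(n) * lam ** t)           # TV <= bound
```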
§ DEGREE INFLATION

The simple proof of Part (ii) of Theorem <ref> motivates looking at the following graph. Given a graph G=(V,E), we define G(k)=(V,E(k)) via

E(k) := {{u,v} : dist_G(u,v) = k, u,v ∈ V},

where dist_G(u,v) denotes the graph distance of u and v w.r.t. G. Denote the transition matrix of SRW on G(k) by K.

Consider SRW on G, (X_t)_t=0^∞. Let T_0 := 0 and inductively set T_i+1 := inf{t ≥ T_i : dist_G(X_t, X_T_i) = k}. Consider the chain 𝐘 := (Y_i)_i=0^∞ defined via Y_i := X_T_i for all i, and denote its transition matrix by W.

It is possible that G(k) := (V,E(k)) is not connected. This could be rectified, say, by connecting every vertex to its entire k-neighborhood. However, below we only use the fact that the SRW on G(k) is reversible w.r.t. π_G(k)(x) := deg_G(k)(x)/(2|E(k)|).

Let G=(V,E) be a d-regular finite Ramanujan graph. Assume that Assumption 1 holds. Let r=r_n be as in Assumption 1. Fix some k=k_n such that 1 ≪ k ≪ √r. Let K, W and T_i be as in Definitions <ref> and <ref>. By Assumption 1, for every x,y ∈ V of distance k from one another, 1 ≤ K(x,y) d(d-1)^(k-1) ≤ C_1(d). In Lemma <ref> we show that for such x,y also 1 ≤ W(x,y) d(d-1)^(k-1) ≤ C_2(d). In fact, Assumption 1 could have been replaced by the assumption that max{W(x,y), K(x,y)} ≤ (d-1)^(-k(1-o(1))) and that T_1 is concentrated around dk/(d-2) (uniformly for all initial states).

§.§ An overview of the proof of Part (i) of Theorem <ref>

Let G, k and r be as above. Intuitively, if either the SRW on G(k) or the chain 𝐘 (from Definitions <ref> and <ref>) exhibits an abrupt convergence to stationarity around time t=t_n, then also the SRW on G should exhibit an abrupt convergence to stationarity around time t · (d/(d-2))k. The term (d/(d-2))k comes from the fact that (by Assumption 1) the expected time it takes the walk on G to reach distance k from its current position is (d/(d-2))k(1+o(1)). While the chain 𝐘 is more directly related to the SRW on G, it is harder to analyze it directly, since it need not be reversible and a priori it is not clear that its stationary distribution is close to the uniform distribution. Instead, we analyze the walk on G(k) and use it to learn about 𝐘, and then in turn about the walk on G.

In light of Part (ii) of Theorem <ref> (which has already been proven), a natural strategy for proving Part (i) of Theorem <ref> is to show that λ_2(K) = ρ_D^(1-o(1)) = (d-1)^(-(1-o(1))k/2), where D is the maximal degree in G(k), K is the transition matrix of SRW on G(k) and λ_2(K) is its second largest eigenvalue. Unfortunately, we do not know how to show this (see the first paragraph of §<ref>). Instead, we obtain such an estimate for λ_K(A), the largest eigenvalue of K_A, the restriction of K to A, for any “small” set A. By small we mean that its stationary probability is at most α := (d-1)^(-3k^2). Indeed, the key to the proof of Part (i) of Theorem <ref> is to show that λ_K(A) ≤ (d-1)^(-(1-o(1))k/2) for every small set A. Using (<ref>) we get for the walk on G(k) that ℙ_a[T_A^c > (1+o(1))(1/k) log_d-1 |V|] = (d-1)^(-(1-o(1))k/2). We then show that the same holds for 𝐘 (this is obvious when 2k < girth(G); the general case is derived using the fact that, as mentioned in Remark <ref>, cW(x,y) ≤ K(x,y) ≤ CW(x,y) for all x,y). Finally, using an obvious coupling between 𝐘 and the SRW on G, after multiplying by (d/(d-2))k(1+o(1)), the last bound is transformed into a bound on hit_1-α(o(1)) for SRW on G (for some o(1) terms).
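The auxiliary graph G(k) is straightforward to construct; the following fragment (ours, for illustration) builds it by a truncated breadth-first search from every vertex:

```python
from collections import deque

def distance_k_graph(adj, k):
    """Build G(k): u and v are adjacent iff dist_G(u, v) = k.

    adj: dict mapping every vertex to the set of its neighbours in G.
    Returns the analogous adjacency dict of G(k).
    """
    def sphere(v):
        # BFS from v, truncated at depth k; return the distance-k sphere.
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            if dist[u] == k:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return {u for u, d in dist.items() if d == k}

    return {v: sphere(v) for v in adj}
```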
§ AUXILIARY RESULTS

In order to control λ_K(A) (for small A), apart from Proposition <ref> we need the following comparison result. While there are similar comparison techniques for the spectral gap, we are not aware of a comparison technique which allows one to argue that λ_2 (the second largest eigenvalue of the transition matrix) of one chain is close to 0 (say, that λ_2 = o(1)) if that of another chain is close to 0.

Let P^(1) and P^(2) be two transition matrices on the same finite state space Ω, reversible w.r.t. π^(1) and π^(2), respectively. Assume that P^(1)(x,y) ≤ C_1 P^(2)(x,y) and 1/C_2 ≤ π^(1)(x)/π^(2)(x) ≤ C_2 for all x,y. Let A ⊊ Ω and let λ_P^(i)(A) be the largest eigenvalue of P_A^(i), the restriction of P^(i) to A (i=1,2). Then

λ_P^(1)(A) ≤ C_1 C_2^2 λ_P^(2)(A).

Proof: Denote ⟨f,g⟩_π_A^(i) := ∑_{x ∈ A} π_A^(i)(x) g(x) f(x). By the Perron-Frobenius Theorem,

λ_P^(1)(A) = max_{f ∈ ℝ_+^A, f ≠ 0} ⟨P_A^(1)f, f⟩_π_A^(1)/⟨f,f⟩_π_A^(1) ≤ C_1 C_2^2 max_{f ∈ ℝ_+^A, f ≠ 0} ⟨P_A^(2)f, f⟩_π_A^(2)/⟨f,f⟩_π_A^(2) = C_1 C_2^2 λ_P^(2)(A).

Before proving Theorem <ref> we need one more lemma. Let G=(V,E) be a d-regular graph (d ≥ 3). Let v ∈ V. For i,k ∈ ℕ let D_i := {u ∈ V : dist_G(u,v) = i}, B_i := ∪_{j=0}^i D_j (the ball of radius i around v) and t(B_k) := |{{x,y} ∈ E : y ∈ B_k-1, x ∈ B_k}| - |B_k|. For any s ≥ 0 there exist some constant C(s,d) > 0 and k_s such that if k ≥ k_s, t(B_k) ≤ s and D_k ≠ ∅, then

1/(d(d-1)^(k-1)) ≤ min_{u ∈ D_k} P_v[T_D_k = T_u] ≤ max_{u ∈ D_k} P_v[T_D_k = T_u] ≤ C(s,d)/(d(d-1)^(k-1)).

Let u ∈ D_k. We first prove that P_v[T_D_k = T_u] ≥ 1/(d(d-1)^(k-1)). This follows from a standard argument involving the covering tree of G. A non-backtracking path of length ℓ is a sequence of vertices (v_0,v_1,…,v_ℓ) such that {v_i,v_i-1} ∈ E and v_i+2 ≠ v_i for all i. Let 𝒫_ℓ be the collection of all non-backtracking paths of length ℓ starting from v. Let 𝕋_d be the (infinite) d-regular tree. We may label the ℓ-th level of 𝕋_d by the set 𝒫_ℓ (in a bijective manner) such that the children of (v,v_1,…,v_ℓ) are {(v,v_1,…,v_ℓ,v') : (v,v_1,…,v_ℓ,v') ∈ 𝒫_ℓ+1}. For γ = (v,v_1,…,v_ℓ) let ϕ(γ) := v_ℓ. Note that if (S_n)_n=0^∞ is a SRW on 𝕋_d (labeled as above) started from (v) (which is the root), then (ϕ(S_n))_n=0^∞ is a SRW on G started from v. Denote the law of (S_n)_n=0^∞ by ℙ_v. Fix some γ := (v,v_1,…,v_k) ∈ 𝒫_k such that v_k = u. Finally, observe that

P_v[T_D_k = T_u] ≥ ℙ_v[T_𝒫_k = T_γ] = 1/|𝒫_k| = 1/(d(d-1)^(k-1)).

We now prove that P_v[T_D_k = T_u] ≤ C(s,d)/(d(d-1)^(k-1)). We prove this by induction on s. The base case t(B_k) = 0 is trivial (it holds with C(1,d)=1). Now consider the case that t(B_k) = s > 0. Let z ∈ D_k be such that P_v[T_D_k = T_z] = max_{u ∈ D_k} P_v[T_D_k = T_u]. For an edge e := {x,y} ∈ E, let G_e := (V, E ∖ {e}) be the graph obtained by deleting e from G. Let H_e := (V_e, E_e) be the graph obtained from G_e by connecting x (resp. y) to the root of a d-ary tree[The root of a d-ary tree is of degree d-1.] 𝒯_x (resp. 𝒯_y). Denote the law of SRW on H_e by P^(e). Let D_i^(e) := {u ∈ V_e : dist_H_e(u,v) = i} and B_k^(e) := ∪_{i=0}^k D_i^(e). We now show that there is some constant K(s,d) and an edge e = {x,y} ∈ E belonging to some cycle in B_k, such that x ∈ B_k, y ∈ B_k-1 and

P_v[T_D_k = T_z] ≤ K(s,d) P_v^(e)[T_D_k^(e) = T_z].

Once this is established, invoking the induction hypothesis concludes the induction step. Consider an arbitrary cycle in B_k with at most one vertex in D_k. Let x be the vertex of the cycle which maximizes P_x[T_D_k = T_z]. Let e = {x,y} and e' = {x,y'} be the two edges of the cycle which are incident to x. Without loss of generality, let e be the one through which x is less likely to be reached.
More precisely, assume that P_v[X_T_x-1 = y, T_x ≤ T_D_k] ≤ P_v[X_T_x-1 = y', T_x ≤ T_D_k]. Also, by the choice of x we have that P_x[T_D_k = T_z] ≥ P_y[T_D_k = T_z]. Note that if x ∈ D_k and x ≠ z, then P_v[T_D_k = T_z] = P_v^(e)[T_D_k^(e) = T_z]. If x = z, then by (<ref>), P_v[T_D_k = T_z] ≤ 2 P_v^(e)[T_D_k^(e) = T_z]. Now consider the case that x ∉ D_k. Denote T_x,y := min{T_x, T_y} and T_x^+ := inf{t > 0 : X_t = x}. Observe that

P_v[T_D_k = T_z < T_x,y] = P_v^(e)[T_D_k^(e) = T_z < T_x,y].

Thus, in order to conclude the proof of (<ref>), it remains only to show that

P_v[T_D_k = T_z > T_x,y] ≤ C̃(s,d) P_v^(e)[T_D_k^(e) = T_z > T_x,y].

By (<ref>) we have that P_v[T_x < min{T_D_k, T_y}] ≥ P_v[T_y < T_x < T_D_k] ≥ (1/d) P_v[T_y < T_D_k]. Thus P_v[T_x < T_D_k] ≥ (2/d) P_v[T_y < T_D_k]. By (<ref>) we get that

P_v[T_x < T_D_k = T_z] = P_v[T_x < T_D_k] P_x[T_D_k = T_z] ≥ (2/d) P_v[T_y < T_D_k] P_y[T_D_k = T_z] = (2/d) P_v[T_y < T_D_k = T_z].

Hence, there exists some constant M(s,d) such that

P_v[T_D_k = T_z > T_x,y] ≤ P_v[T_D_k = T_z > T_x] + P_v[T_D_k = T_z > T_y] ≤ (1 + d/2) P_v[T_D_k = T_z > T_x] ≤ (d+2) P_v[T_D_k = T_z, T_x < min{T_D_k, T_y}] ≤ M(s,d) P_v[T_x < min{T_D_k, T_y}] P_x[T_D_k = T_z, min{T_x^+, T_y} > T_D_k] ≤ M(s,d) P_v^(e)[T_x < min{T_D_k^(e), T_y}] P_x^(e)[T_D_k^(e) = T_z < T_x^+] ≤ M(s,d) P_v^(e)[T_D_k^(e) = T_z > T_x,y],

where in the second inequality we have used the fact that P_x[min{T_x^+, T_y} > T_D_k] ≥ c(s,d) for some constant c(s,d) > 0[This could be proved by induction on s.] and that, by the choice of x (namely, by (<ref>)), we have that P_y[T_D_k = T_z | T_x > T_D_k] ≤ P_x[T_D_k = T_z] = P_x[T_D_k = T_z | T_x^+ > T_D_k], and so

P_x[T_D_k = T_z | min{T_x^+, T_y} > T_D_k] ≥ P_x[T_D_k = T_z | T_x^+ > T_D_k] = P_x[T_D_k = T_z].

We leave the missing details as an exercise. Finally, combining (<ref>) and (<ref>) yields (<ref>).

§ PROOF OF THEOREM <REF>

Part (ii) was proven in §<ref>. Let G_n=(V_n,E_n) be a sequence of non-bipartite, finite, connected, d-regular asymptotically one-sided Ramanujan graphs satisfying Assumption 1. Let r_n →∞ be as in Assumption 1. Pick some k = k_n →∞ such that k_n^2 = o(r_n). From this point on we often suppress the dependence on n from our notation. Denote the transition matrix of SRW on G (resp. G(k)) by P (resp. K) and its stationary distribution by π (resp. π_G(k)). Let A be an arbitrary set such that π(A) ≤ α = α_n := d^(-3k^2). Denote Q := P^(k+2k^2). Before proceeding with the proof, we explain the choice of k+2k^2 in the definition of Q. In order to obtain an upper bound on λ_K(A), we shall apply Proposition <ref> with P^t (for some t) and K in the roles of P^(2) and P^(1) (respectively) from Proposition <ref>. The obtained estimate is useful only when t ≥ ck^2. Heuristically, this is related to the fact that a SRW on a d-regular tree is much more likely to be at time t at some given vertex of distance O(√t) from its starting point than at some other given vertex at distance ≫ √t from its starting point (and we want k = O(√t)).

Recall that ρ_d := 2√(d-1)/d. Let λ_2 and λ_2' be the second largest eigenvalues of P and Q, respectively. Since λ_2 = ρ_d^(1-o(1)), by decreasing k if necessary, we may assume that λ_2 ≤ ρ_d^(1 - 1/(3k^2 log d)). By Proposition <ref> (using the notation from there) and our choice of α,

λ_Q(A) ≤ λ_2' + α = λ_2^(k+2k^2) + α ≤ C_1 ρ_d^(k+2k^2).

Let (S_t)_t=0^∞ be SRW on 𝕋_d, the infinite d-regular tree rooted at o. Denote its transition kernel by P_𝕋_d. Denote the i-th level of 𝕋_d by ℒ_i. Let S̃_t be the level S_t belongs to. Let v ∈ ℒ_k. Let T_0^+ := inf{t > 0 : S̃_t = 0}.
Then by Lemma <ref> (second inequality),

|ℒ_k| P_𝕋_d^(k+2k^2)(o,v) = ℙ_0[S̃_(k+2k^2) = k] ≥ ℙ_0[S̃_(k+2k^2) = k, T_0^+ > k+2k^2] ≥ c_0 k^-2 2^(k+2k^2) (d-1)^(k^2+k-1) d^(-(k+2k^2)+1) ≥ c_1 k^-2 (d-1)^(k/2) ρ_d^(2k^2+k).

Let x,y be a pair of adjacent vertices in G(k). It is standard that P^t(x,y) ≥ P_𝕋_d^t(o,v) for all t (where v is as above), and so by (<ref>),

Q(x,y) = P^(k+2k^2)(x,y) ≥ P_𝕋_d^(k+2k^2)(o,v) ≥ (d-1)^(-(1+o(1))k/2) ρ_d^(2k^2+k) =: C_k.

By Proposition <ref> (borrowing the notation from there), in conjunction with (<ref>), (<ref>) and Assumption 1 (which implies that there exists some constant C_0 = C_0(d) > 0 such that L := max_x deg_G(k)(x)/min_y deg_G(k)(y) ≤ C_0, and that if x,y are of distance k in G then K(x,y) ≤ C_0(d-1)^-k), we have that

λ_K(A) ≤ λ_Q(A) C_0^3 (d-1)^-k/C_k = (d-1)^(-(1-o(1))k/2).

Denote the probability w.r.t. SRW on G(k) by ℙ. By (<ref>) we have, uniformly for all t, that

max_{(a,A) : a ∈ A, π(A) ≤ α} ℙ_a[T_A^c > t] ≤ √(C_0 α |V|) (d-1)^(-(1-o(1))tk/2) = √(α |V|) (d-1)^(-(1-o(1))tk/2),

where we have used the fact that max_{x ∈ V} π_G(k)(x)/π(x) ≤ C_0, where C_0 is as above.

Consider SRW on G, (X_t)_t=0^∞. Let T_0 := 0 and inductively, T_i+1 := inf{t ≥ T_i : dist_G(X_t, X_T_i) = k}. As in Definition <ref>, consider the chain 𝐘 = (Y_i)_i=0^∞, where Y_i := X_T_i for all i. Let W be its transition matrix. By Assumption 1 and Lemma <ref>, there exists some constant C = C(d) such that for all x,y ∈ V of distance k from one another (in G), 1/C ≤ W(x,y)/K(x,y) ≤ C. Denote the probability w.r.t. 𝐘 by 𝐏. Then by (<ref>) and (<ref>),

max_{(a,A) : a ∈ A, π(A) ≤ α} 𝐏_a[T_A^c > t] ≤ C^t max_{(a,A) : a ∈ A, π(A) ≤ α} ℙ_a[T_A^c > t] ≤ √(α |V|) (d-1)^(-(1-o(1))tk/2),

uniformly for all t. Denote the distribution of SRW on G by P. Observe that for all s,t ≥ 0,

max_{(a,A) : a ∈ A, π(A) ≤ α} P_a[T_A^c > t+s] ≤ max_{(a,A) : a ∈ A, π(A) ≤ α} 𝐏_a[T_A^c > τ(t)] + max_{a ∈ V} P_a[T_τ(t) > t+s], where τ(t) := ⌈(d-2)t/(dk)⌉.

To conclude the proof (using (<ref>) in conjunction with Lemma <ref>), we now show that (for some o(1) terms) substituting above t = ⌈(1+o(1)) (d/(d-2)) log_d-1 |V|⌉ and s = t/√k + t^(2/3) (the value 2/3 in the exponent can be replaced by any number in (1/2,1)) yields max_{(a,A) : a ∈ A, π(A) ≤ α} P_a[T_A^c > t+s] = o(1). By (<ref>) it suffices to show that for this choice of s and t we have that max_{a ∈ V} P_a[T_τ(t) > t+s] = o(1).

Fix s and t as above. We say that time j is good if X_j has d-1 neighbors at greater distance from X_T_i(j), where i(j) is the index for which j ∈ [T_i(j), T_i(j)+1). Let U_i := |{t ∈ [T_i, T_i+1) : t is not good}| and U := ∑_{i=0}^{τ(t)} U_i. By Assumption 1 we have that max_v P_v[U_0 > ℓ] ≤ C' e^-cℓ for all ℓ, for some constants c, C' > 0 (this is left as an exercise). By the Markov property, it follows that max_v P_v[U > t/√k] = o(1).

Consider a coupling of the SRW on G, (X_j)_j=0^∞, with the SRW on 𝕋_d started from its root o, (S_j)_j=0^∞, in which, if j is the ℓ-th good time, then dist_G(X_j+1, X_T_i(j)) < dist_G(X_j, X_T_i(j)) iff dist_𝕋_d(S_ℓ+1, o) < dist_𝕋_d(S_ℓ, o) (unless S_ℓ = o, but there is no harm in neglecting this possibility, as the number of returns to o has a Geometric distribution). Using this coupling we get that for all a ∈ V,

P_a[T_τ(t) > t+s] ≤ P_a[U > t/√k] + max_{0 ≤ j ≤ ⌈t/√k⌉} ℙ_o[S_(t+s-j) ∈ ∪_{i=0}^{τ(t)+j} ℒ_i] = o(1).

To see that max_{0 ≤ j ≤ ⌈t/√k⌉} ℙ_o[S_(t+s-j) ∈ ∪_{i=0}^{τ(t)+j} ℒ_i] = o(1), use the fact that the distance of S_(t+s-j) from o is concentrated around ((d-2)/d)(t+s-j) within a window whose length is of order √t (c.f. <cit.>, (2.2)-(2.3), pg. 9), and that by our choice of s we have that ((d-2)/d)(t+s-j) - (τ(t)+j) ≫ √t for all 0 ≤ j ≤ ⌈t/√k⌉.
Let M be the number of paths of length k+2k^2 in ℤ, starting from 0, which end at k and do not return to 0. Then M ≥ c_0 2^(k+2k^2)/k^2.

Let (Z_i)_i=0^∞ be a SRW on ℤ. Let T_0^+ := inf{t > 0 : Z_t = 0}. Then

ℙ_0[Z_(k+2k^2) = k, T_0^+ > k+2k^2] ≥ ℙ_0[T_0^+ > k+2k^2 ≥ T_k] min_{0 ≤ i ≤ k^2} ℙ_k[T_0 > 2i, Z_2i = k] ≥ c_0 k^-2,

where we have used the fact that ℙ_0[T_0^+ > k+2k^2 ≥ T_k] ≥ c_1 ℙ_0[T_0^+ > T_k] = c_1/(2k) and that ℙ_k[T_0 > 2i, Z_2i = k] ≥ ℙ_k[T_{0,2k} > 2i] ℙ_k[Z_2i = k | T_{0,2k} > 2i] ≥ c_2/(2k) for all i ≤ k^2.

§ ACKNOWLEDGEMENTS

The author is grateful to Nathanaël Berestycki, Gady Kozma, Eyal Lubetzky, Yuval Peres, Justin Salez, Allan Sly and Perla Sousi for useful discussions.
http://arxiv.org/abs/1702.08034v3
{ "authors": [ "Jonathan Hermon" ], "categories": [ "math.PR", "math.CO", "60B10, 60J10, 05C12, 05C81" ], "primary_category": "math.PR", "published": "20170226140102", "title": "Cutoff for Ramanujan graphs via degree inflation" }
Synchronization Problems in Automata without Non-trivial Cycles

Andrew Ryzhikov^1, 2
^1 Université Grenoble Alpes, Laboratoire G-SCOP, 38031 Grenoble, France
^2 United Institute of Informatics Problems of NASB, 220012 Minsk, Belarus
ryzhikov.andrew@gmail.com
==========================================================================================================================================================================================

We study the computational complexity of various problems related to synchronization of weakly acyclic automata, a subclass of widely studied aperiodic automata. We provide upper and lower bounds on the length of a shortest word synchronizing a weakly acyclic automaton or, more generally, a subset of its states, and show that the problem of approximating this length is hard. We investigate the complexity of finding a synchronizing set of states of maximum size. We also show inapproximability of the problem of computing the rank of a subset of states in a binary weakly acyclic automaton and prove that several problems related to recognizing a synchronizing subset of states in such automata are NP-complete.

Keywords: Chordal graphs; split graphs; partitions into a clique and connected components of bounded size; forbidden induced subgraphs; -completeness; polynomial time recognition.

§ INTRODUCTION

The concept of synchronization is widely studied in automata theory and has a lot of different applications in such areas as manufacturing, coding theory, biocomputing, semigroup theory and many others <cit.>. Let A = (Q, Σ, δ) be a complete deterministic finite automaton (which we simply call an automaton in this paper), where Q is a set of states, Σ is a finite alphabet and δ: Q × Σ → Q is a transition function. Note that our definition of an automaton does not include initial and accepting states. The function δ can be naturally extended to a mapping Q × Σ^* → Q, which we also denote by δ, in the following way: for x ∈ Σ and a ∈ Σ^* we recursively set δ(q, xa) = δ(δ(q, x), a). An automaton is called synchronizing if there exists a word that maps all its states to a fixed state. Such a word is called a synchronizing word. A state q ∈ Q is called a sink state if all letters from Σ map q to itself.

In this paper, synchronization of weakly acyclic automata is studied. A simple cycle in an automaton A = (Q, Σ, δ) is a sequence q_1, …, q_k of its states such that all the states in the sequence are distinct and there exist letters x_1, …, x_k ∈ Σ such that δ(q_i, x_i) = q_i+1 for 1 ≤ i ≤ k-1 and δ(q_k, x_k) = q_1. A simple cycle is a self-loop if it consists of only one state. An automaton is called weakly acyclic if all its simple cycles are self-loops. In other words, an automaton is weakly acyclic if and only if there exists an ordering q_1, q_2, …, q_n of its states such that if δ(q_i, x) = q_j for some letter x ∈ Σ, then i ≤ j (such an ordering is called a topological sort). Since a topological sort can be found in polynomial time <cit.>, this class can be recognized in polynomial time. Weakly acyclic automata are called acyclic in <cit.> and partially ordered in <cit.>, where in particular the class of languages recognized by such automata is characterized.

Weakly acyclic automata arise naturally in synchronizing automata theory. Section <ref> of this paper shows several examples of existing proofs where weakly acyclic automata appear implicitly in complexity reductions. Surprisingly, most of the computational problems that are hard for general automata remain very hard in this class despite its very simple structure. Thus, investigation of weakly acyclic automata provides good lower bounds on the complexity of many problems for general automata.
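The recognition procedure mentioned above is easy to make concrete. The following sketch (ours, with illustrative names) checks weak acyclicity by running Kahn's topological-sort algorithm on the digraph of non-self-loop transitions:

```python
def is_weakly_acyclic(states, delta):
    """Check whether a complete DFA is weakly acyclic.

    states: iterable of states; delta: dict mapping (state, letter) -> state.
    The automaton is weakly acyclic iff the digraph of transitions that are
    not self-loops is acyclic, i.e. admits a topological sort.
    """
    succ = {q: set() for q in states}
    for (q, _), p in delta.items():
        if p != q:
            succ[q].add(p)
    indeg = {q: 0 for q in succ}
    for q in succ:
        for p in succ[q]:
            indeg[p] += 1
    # Kahn's algorithm: repeatedly remove states of in-degree zero.
    stack = [q for q in indeg if indeg[q] == 0]
    removed = 0
    while stack:
        q = stack.pop()
        removed += 1
        for p in succ[q]:
            indeg[p] -= 1
            if indeg[p] == 0:
                stack.append(p)
    return removed == len(indeg)
```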
An automaton is called aperiodic if for any word w ∈ Σ^* and any state q ∈ Q there exists k such that δ(q, w^k) = δ(q, w^k+1), where w^k is the word obtained by k concatenations of w <cit.>. Obviously, weakly acyclic automata form a proper subclass of aperiodic automata, thus all hardness results hold for the class of aperiodic automata.

The concept of synchronization is often used as an abstraction of returning control over an automaton when there is no a priori information about its current state, but the structure of the automaton is known. If the automaton is synchronizing, we can apply a synchronizing word to it, and thus it will transit to a known state. If we want to perform the same operation when the current state is known to belong to some subset of states of the automaton, we come to the definition of a synchronizing set. A set S ⊆ Q of states of an automaton A is called synchronizing if there exists a word w ∈ Σ^* and a state q ∈ Q such that the word w maps each state s ∈ S to the state q. The word w is said to synchronize the set S. It follows from the definition that an automaton is synchronizing if and only if the set Q of all its states is synchronizing. Consider the problem Sync Set of deciding whether a given set S of states of an automaton A is synchronizing.

Sync Set
Input: An automaton A and a subset S of its states;
Output: Yes if S is a synchronizing set, No otherwise.

The Sync Set problem is PSPACE-complete <cit.>, even for binary strongly connected automata <cit.> (an automaton is called binary if its alphabet has size two, and strongly connected if any state can be mapped to any other state by some word). In <cit.> it is shown that the Sync Set problem is solvable in polynomial time for orientable automata if the cyclic order respected by the automaton is provided in the input. This problem is also solvable in polynomial time for monotonic automata <cit.>. The problem of deciding whether the whole set of states of an automaton is synchronizing is also solvable in polynomial time <cit.>.

One of the most important questions in synchronizing automata theory is the famous Černý conjecture stating that any n-state synchronizing automaton has a synchronizing word of length at most (n - 1)^2. The conjecture is proved for various special cases, including orientable, Eulerian, aperiodic and other automata (see <cit.> for references), but is still open in general. For more than 30 years, the best upper bound was (n^3 - n)/6, obtained in <cit.>. Recently, a small improvement on this bound has been reported in <cit.>: the new bound is still cubic in n but improves the coefficient 1/6 at n^3 by 4/46875.

While there is a simple cubic bound on the length of a synchronizing word for the whole automaton, there exist examples of automata where the length of a shortest word synchronizing a subset of states is exponential in the number of states <cit.>. For orientable n-state automata, a tight upper bound of (n - 1)^2 is known <cit.>, and this bound is also asymptotically tight for monotonic automata <cit.>. On the other hand, a trivial upper bound of 2^n - n - 1 on the length of a shortest word synchronizing a subset of states in an n-state automaton is known <cit.>.
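For contrast with the hardness of Sync Set, the polynomial-time check that the whole automaton is synchronizing can be sketched as follows (our illustrative code), using the classical criterion that every pair of states must be mergeable by some word:

```python
from collections import defaultdict, deque
from itertools import combinations

def is_synchronizing(states, alphabet, delta):
    """Polynomial-time synchronizability test via the pair criterion.

    delta: dict mapping (state, letter) -> state.  The automaton is
    synchronizing iff every unordered pair of states can be mapped to a
    single state by some word.
    """
    # Pair digraph {p, q} --a--> {delta(p, a), delta(q, a)}, stored backwards.
    pre = defaultdict(set)
    for p, q in combinations(states, 2):
        for a in alphabet:
            image = frozenset({delta[(p, a)], delta[(q, a)]})
            pre[image].add(frozenset({p, q}))
    # Backward BFS from the singletons marks every mergeable pair.
    mergeable = set()
    queue = deque(frozenset({q}) for q in states)
    while queue:
        s = queue.popleft()
        for t in pre[s]:
            if t not in mergeable:
                mergeable.add(t)
                queue.append(t)
    return all(frozenset({p, q}) in mergeable
               for p, q in combinations(states, 2))
```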
In <cit.> Cardoso considers the length of a shortest word synchronizing a subset of states in a synchronizing automaton.We assume that the reader is familiar with the notions of an NP-complete problem (refer to the book by Sipser <cit.>), an approximation algorithm and a gap-preserving reduction (for reference, see the book by Vazirani <cit.>). Given an automaton A, the rank of a word w with respect to A is the number |{δ(s, w) | s ∈ Q}|, i.e., the size of the image of Q under the mapping defined in A by w. More generally, the rank of a word w with respect to a subset S of states of A is the number |{δ(s, w) | s ∈ S}|. The rank of an automaton (resp. of a subset of states) is the minimum among the ranks of all words w ∈Σ^* with respect to the automaton (resp. to the subset of states).In this paper we provide various results concerning computational complexity and approximability of the problems related to subset synchronization in weakly acyclic automata. In Section <ref> we prove some lower and upper bounds on the length of a shortest word synchronizing a weakly acyclic automaton or, more generally, a subset of its states. In Section <ref> we investigate the computational complexity of finding such words. In Section <ref> we study inapproximability of the problem of finding a synchronizing subset of states of maximum size. In Section <ref> we give strong inapproximability results for computing the rank of a subset of states in binary weakly acyclic automata. In Section <ref> we show that several other problems related to recognizing a synchronizing set in a weakly acyclic automaton are hard.A preliminary conference version of this paper was published in <cit.>.§ BOUNDS ON THE LENGTH OF SHORTEST SYNCHRONIZING WORDS Each synchronizing weakly acyclic automaton is a 0-automaton (i.e., an automaton with exactly one sink state), which gives an upper bound n(n - 1)/2 on the length of a shortest synchronizing word <cit.>. The same bound can be deduced from the fact that each weakly acyclic automaton is aperiodic <cit.>. However, for weakly acyclic automata a more accurate result can be obtained, showing that weakly acyclic automata of rank r behave in a way similar to monotonic automata of rank r (see <cit.>). Let A = (Q, Σ, δ) be an n-state weakly acyclic automaton such that there exists a word of rank r with respect to A. Then there exists a word of length at most n - r and rank at most r with respect to A. Observe that the rank of a weakly acyclic automaton is equal to the number of sink states in it. The conditions of the theorem imply that A has at most r sink states. Consider the sets S_0, S_1, …, S_t constructed in the following way: S_0 = Q, and S_i = {δ(q, x_i) | q ∈ S_{i - 1}} for 1 ≤ i ≤ t, where p_i is the non-sink state in S_{i - 1} with the smallest index in the topological sort and x_i is a letter mapping p_i to some other state. Since A has at most r sink states, the word w = x_1 … x_t exists for any t ≤ n - r and has rank at most r with respect to A.The following simple example shows that the bound is tight. Consider an automaton A = (Q, Σ, δ) with states q_1, …, q_n. Let each letter except some letter x map each state to itself. For the letter x define the transition function δ(q_i, x) = q_{i + 1} for 1 ≤ i ≤ n - r and δ(q_i, x) = q_i for n - r + 1 ≤ i ≤ n. Obviously, A has rank r and shortest words of rank r with respect to A have length n - r. Let S be a synchronizing set of states of size k in a weakly acyclic n-state automaton A = (Q, Σ, δ).
Then the length of a shortest word synchronizing S is at most k(2n - k - 1)/2. Consider a topological sort q_1, …, q_n of the set Q. Let q_s be a state such that all states in S can be mapped to it by a shortest word w = x_1 … x_t. We can assume that the images of all words x_1 … x_j, j ≤ t, are pairwise distinct, otherwise some letter in this word can be removed. Then a letter x_j maps at least one state of the set {δ(q, x_1 … x_{j - 1}) | q ∈ S } to some other state. Thus the maximum total number of letters in w sending all states in S to q_s is at most (n - k) + (n - k + 1) + … + (n - 1) = k(2n - k - 1)/2, since the application of each letter of w increases the sum of the indices of the reached states by at least one.Consider a binary automaton A = (Q, {0, 1}, δ) with n states q_1, …, q_{k - 1}, s_1, …, s_ℓ, t, where ℓ = n - k. Define δ(q_i, 0) = q_i for 1 ≤ i ≤ k - 1, δ(q_i, 1) = q_{i + 1} for 1 ≤ i ≤ k - 2, and δ(q_{k - 1}, 1) = s_1. Define also δ(s_i, 0) = s_{i + 1} and δ(s_i, 1) = t for 1 ≤ i ≤ ℓ - 1. Define both transitions for s_ℓ and t as self-loops. Set S = {q_1, …, q_{k - 1}, s_ℓ}. The shortest word synchronizing S is (10^{ℓ - 1})^{k - 1}, of length (k - 1)(n - k). The automaton in this example is binary weakly acyclic, and even has rank 2. Figure <ref> gives the idea of the described construction. As was noted by an anonymous reviewer, for an alphabet of size n - 2, a better lower bound of (k - 1)(2n - k - 2)/2 can be shown as follows. Let Q = {-1, 0, 1, …, n - 2}, Σ = {a_1, …, a_{n - 2}}, and define δ(k, a_i) = k if k > i; k - 1 if k = i; -1 if 0 < k < i; and k if k ∈{-1, 0}. If k < n and S = {0, n - 2, n - 3, …, n - k}, then it is easy to see that the shortest word synchronizing S has length (n - k) + (n - k + 1) + … + (n - 2) = (k - 1)(2n - k - 2)/2. For each n and k, this is less than the upper bound of Proposition <ref> by n - 1 only.§ COMPLEXITY OF FINDING SHORTEST SYNCHRONIZING WORDS Now we proceed to the computational complexity of some problems related to finding a shortest synchronizing word for an automaton. Consider first the following problem.  Shortest Sync Word  Input: A synchronizing automaton A;  Output: The length of a shortest synchronizing word for A. First, we note that the automaton showing inapproximability of Shortest Sync Word in the construction of Berlinkov <cit.> is weakly acyclic. For any γ > 0, the Shortest Sync Word problem for n-state weakly acyclic automata with alphabet of size at most n^{1 + γ} cannot be approximated in polynomial time within a factor of d log n for any d < c_sc unless P = NP, where c_sc is some constant.In Berlinkov's reduction to the binary case, the automaton is no longer weakly acyclic. However, the binary automaton showing NP-hardness of Shortest Sync Word in Eppstein's construction <cit.> is weakly acyclic. Shortest Sync Word is NP-hard for binary weakly acyclic automata. Consider now the following more general problem.  Shortest Set Sync Word  Input: An automaton A and a synchronizing subset S of its states;  Output: The length of a shortest word synchronizing S. It follows from Proposition <ref> that the decision version of this problem (asking whether there exists a word of length at most k synchronizing S) is in NP for weakly acyclic automata, so it is reasonable to investigate its approximability.
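As an aside, the first lower-bound example above is small enough to verify mechanically. The sketch below (state names and the dictionary encoding are ours) builds the automaton for n = 8 and k = 4 and checks that the word (10^{ℓ - 1})^{k - 1} indeed maps S to a single state:

def image(delta, S, w):
    # Image of the set S under the word w (letters are one-character strings).
    for x in w:
        S = {delta[(q, x)] for q in S}
    return S

def chain_example(n, k):
    # States q_1..q_{k-1} (a 1-chain), s_1..s_l (a 0-chain), and the trap t.
    l = n - k
    delta = {}
    for i in range(1, k):
        delta[(f"q{i}", "0")] = f"q{i}"
        delta[(f"q{i}", "1")] = f"q{i+1}" if i < k - 1 else "s1"
    for i in range(1, l):
        delta[(f"s{i}", "0")] = f"s{i+1}"
        delta[(f"s{i}", "1")] = "t"
    for q in (f"s{l}", "t"):        # s_l and t are sink states
        delta[(q, "0")] = delta[(q, "1")] = q
    S = {f"q{i}" for i in range(1, k)} | {f"s{l}"}
    return delta, S

n, k = 8, 4
delta, S = chain_example(n, k)
w = ("1" + "0" * (n - k - 1)) * (k - 1)   # the word (1 0^{l-1})^{k-1}
assert len(w) == (k - 1) * (n - k) and image(delta, S, w) == {f"s{n-k}"}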
The Shortest Set Sync Word problem for n-state binary weakly acyclic automata cannot be approximated in polynomial time within a factor of O(n^{1/2 - ϵ}) for any ϵ > 0 unless P = NP.To prove this theorem, we construct a gap-preserving reduction from the Shortest Sync Word problem in p-state binary automata, which cannot be approximated in polynomial time within a factor of O(p^{1 - ϵ}) for any ϵ > 0 unless P = NP <cit.>. Let a binary automaton A = (Q, {0, 1}, δ) be the input of Shortest Sync Word. Let Q = {q_1, …, q_p}. Construct a binary automaton A' = (Q', {0, 1}, δ') with the set of states Q' ={q_i^(j)| 1 ≤ i ≤ p, 1 ≤ j ≤ p + 1}. Define δ'(q_i^(j), x) = q_k^(j + 1) for 1 ≤ i ≤ p, 1 ≤ j ≤ p, x ∈{0, 1}, where k is such that q_k = δ(q_i, x). Define δ'(q_i^(p + 1), x) = q_i^(p + 1) for 1 ≤ i ≤ p and x ∈{0, 1}. Take S' = {q_i^(1)| 1 ≤ i ≤ p }. Observe that any word synchronizing S' in A' is a synchronizing word for A because of the definition of δ'. In the other direction, we note that a shortest synchronizing word for a p-state automaton in the construction of Gawrychowski and Straszak <cit.> has length at most p. Hence, a shortest synchronizing word for A also synchronizes S' in A'. Thus, the length of a shortest synchronizing word for A is equal to the length of a shortest word synchronizing S' in A', and we get a gap-preserving reduction with gap O(p^{1 - ϵ}) = O(n^{1/2 - ϵ}), as A' has O(p^2) states. Finally, it is easy to see that A' is binary weakly acyclic.§ FINDING A SYNCHRONIZING SET OF MAXIMUM SIZE One possible approach to measure and reduce initial state uncertainty in an automaton is to find a subset of states of maximum size where the uncertainty can be resolved, i.e., to find a synchronizing set of maximum size. This is captured by the following problem.  Max Sync Set  Input: An automaton A;  Output: A synchronizing set of states of maximum size in A. Türker and Yenigün <cit.> study a variation of this problem, which is to find a set of states of maximum size that can be mapped by some word to a subset of a given set of states in a given monotonic automaton. They reduce the N-Queens Puzzle problem <cit.> to this problem to prove its NP-hardness. However, their proof is unclear, since the input has size O(log N), and the output size is polynomial in N. Also, the N-Queens Puzzle problem is solvable in polynomial time <cit.>.First we investigate the PSPACE-completeness of the decision version of theMax Sync Set problem, which we shall denote as Max Sync Set-D. Its formulation is the following: given an automaton A and a number c, decide whether there is a synchronizing set of states of cardinality at least c in A. The Max Sync Set-D problem is PSPACE-complete for binary automata. The Sync Set problem is in PSPACE <cit.>. Thus, the Max Sync Set-D problem is also in PSPACE, as we can sequentially check whether each subset of states is synchronizing and compare the size of a maximum synchronizing set to c. To prove that the Max Sync Set-D problem is PSPACE-hard for binary automata, we shall reduce the PSPACE-complete Sync Set problem for binary automata to it <cit.>. Let an automaton A and a subset S of its states be an input to Sync Set. Let n be the number of states of A. Construct a new automaton A' by initially taking a copy of A. For each state s ∈ S, add n + 1 new states to A' and define all the transitions from these new states to map to s, regardless of the input letter. Define the set S' to be the union of all new states and take c = |S'| = (n + 1)|S|.
Let S_1 be a maximum synchronizing set in A' not containing at least one new state q. As S_1 is maximum, it also does not contain any of the other n new states that are mapped to the same state as q (otherwise q could be added to S_1, since all these new states have the same image after reading any letter). Thus, the size of S_1 is at most n + (n + 1)|S| - (n + 1) < (n + 1)|S| = c. Hence, each synchronizing set of size at least c in A' contains S'. The set S is synchronizing in A if and only if S' is synchronizing in A', as each word w synchronizing S in A corresponds to a word xw synchronizing S' in A', where x is an arbitrary letter. Thus, A' has a synchronizing set of size at least c if and only if S is synchronizing in A. Now we proceed to inapproximability results for the Max Sync Set problem in several classes of automata. We shall need some results from graph theory. An independent set I in a graph G is a set of its vertices such that no two vertices in I share an edge. The size of a maximum independent set in G is denoted α(G). The Independent Set problem is defined as follows.  Independent Set  Input: A graph G;  Output: An independent set of maximum size in G. Zuckerman <cit.> has proved that, unless P = NP, there is no polynomial-time O(p^{1 - ε})-approximation algorithm for the Independent Set problem for any ε > 0, where p is the number of vertices in G. The problem Max Sync Set for weakly acyclic n-state automata over an alphabet of cardinality O(n) cannot be approximated in polynomial time within a factor of O(n^{1 - ε}) for any ε > 0 unless P = NP. We shall prove this theorem by constructing a gap-preserving reduction from the Independent Set problem. Given a graph G = (V, E), V = {v_1, v_2, …, v_p}, we construct an automaton A = (Q, Σ, δ) as follows. For each v_i ∈ V, we construct two states s_i, t_i in Q. We also add a state f to Q. Thus, |Q| = 2p + 1. The alphabet Σ consists of letters ṽ_1, …, ṽ_p corresponding to the vertices of G. The transition function δ is defined in the following way. For each 1 ≤ i ≤ p, the state s_i is mapped to f by the letter ṽ_i. For each v_iv_j ∈ E the state s_i is mapped to t_i by the letter ṽ_j, and the state s_j is mapped to t_j by the letter ṽ_i. All yet undefined transitions map a state to itself. Let I be a maximum independent set in G. Then the set S = {s_i | v_i ∈ I}∪{f} is a synchronizing set of maximum cardinality (of size α(G) + 1) in the automaton A = (Q, Σ, δ). Let w be a word obtained by concatenating the letters corresponding to I in arbitrary order. Then w synchronizes the set S = {s_i | v_i ∈ I }∪{f} of states of cardinality |I| + 1. Thus, A has a synchronizing set of size at least α(G) + 1. In the other direction, let w be a word synchronizing a set of states S' of maximum size in A. We can assume that after reading w all the states in S' are mapped to f, as all the sets of states that are mapped to any other state have cardinality at most two. Then by construction there are no edges in G between any pair of vertices in I' = {v_i | s_i ∈ S'}, so I' is an independent set of size |S'| - 1 in G. Thus the maximum size of a synchronizing set in A is equal to α(G) + 1. Thus we have a gap-preserving reduction from the Independent Set problem to the Max Sync Set problem with a gap Θ(p^{1 - ε}) for any ε > 0. It is easy to see that n = Θ(p) and A is weakly acyclic, which concludes the proof of the theorem. Next we move to a slightly weaker inapproximability result for binary automata. The problem Max Sync Set for binary n-state automata cannot be approximated in polynomial time within a factor of O(n^{1/2 - ε}) for any ε > 0 unless P = NP.
Again, we construct a gap-preserving reduction from the Independent Set problem extending the proof of Theorem <ref>. Given a graph G = (V, E), V = {v_1, v_2, …, v_p}, we construct an automaton A = (Q, Σ, δ) in the following way. Let Σ = {0, 1}. First we construct the main gadget A_main having a synchronizing set of states of size α(G). For each vertex v_i ∈ V, 1 ≤ i ≤ p, we construct a set of new states L_i = V_i ∪ U_i in Q, where V_i = {v^(i)_j : 1 ≤ j ≤ p}, U_i = {u^(i)_j : 1 ≤ j ≤ p}. We call L_i the ith layer of A_main. We also add a state f to Q. For each i, 1 ≤ i ≤ p, the transition function δ imitates taking the vertices v_1, v_2, …, v_p into an independent set one by one and is defined as: δ(v^(i)_j, 0) = u^(i)_j if i = j, and v^(i + 1)_j otherwise; δ(v^(i)_j, 1) = u^(i)_j if v_iv_j ∈ E, and v^(i + 1)_j otherwise. Here all v^(p + 1)_j, 1 ≤ j ≤ p, coincide with f. For each state u^(i)_j, the transitions for both letters 0 and 1 lead to the originating state (i.e., they are self-loops). We also add a p-state cycle A_cycle attached to f. It is a set of p states c_1, …, c_p, mapping c_i to c_{i + 1} and c_p to c_1 regardless of the input symbol. Finally, we set c_1 to coincide with f. Thus we get the automaton A_1. Figure <ref> presents an example of A_1 for a graph with three vertices v_1, v_2, v_3 and one edge v_2v_3. The main property of A_1 is claimed by the following lemma. The size of a maximum synchronizing set of states from the first layer in A_1 equals α(G). Let I be a maximum independent set in G. Consider a word w of length p such that its ith letter is equal to 0 if v_i ∉ I and to 1 if v_i ∈ I. By the construction of A_1, this word synchronizes the set {v^(1)_j | v_j ∈ I}. Conversely, a synchronizing set of at least three states from the first layer can be mapped only to some vertex of A_cycle, and the corresponding set of vertices in G is an independent set. Some layer in the described construction can contain a synchronizing subset of size larger than the maximum synchronizing subset of the first layer. To avoid that, we modify A_1 by repeating each state (with all transitions) of the first layer p times. More formally, we replace each pair of states v^(1)_j, u^(1)_j with p different pairs of states such that in each pair all the transitions repeat the transitions between v^(1)_j, u^(1)_j and all the other states of the automaton. We denote the automaton thus constructed as A. The following lemma claims that the described procedure of constructing A from G is a gap-preserving reduction from the Independent Set problem in graphs to the Max Sync Set problem in binary automata. If α(G) > 1, then the maximum size of a synchronizing set in A is equal to pα(G) + 1. Note that due to the construction of A_cycle, each synchronizing set of A is either a subset of a single layer of A together with a state in A_cycle, or a subset of a set {v^(i)_j | 2 ≤ i ≤ℓ}∪{u^(ℓ)_j} for some ℓ and j, together with the p new states that replaced v^(1)_j. Consider the first case. If some maximum synchronizing set S contains a state from the ith layer of A and i > 1, then its size is at most p + 1. A maximum synchronizing set containing some states from the first layer of A consists of pα(G) states from this layer (according to Lemma <ref>) and some state of A_cycle, so this set has size pα(G) + 1 ≥ 2p + 1. In the second case, the maximum size of a synchronizing set is at most p + (p - 1) + 1 = 2p < pα(G) + 1.
It is easy to see that the constructed reduction is gap-preserving with a gap Θ(p^{1 - ε}) = Θ(n^{1/2 - ε}), where n is the number of states in A, as n = Θ(p^2). Thus the Max Sync Set problem for n-state binary automata cannot be approximated in polynomial time within a factor of O(n^{1/2 - ε}) for any ε > 0 unless P = NP, which concludes the proof of the theorem. Theorem <ref> can also be proved by using Theorem <ref> and a slight modification of the technique used in <cit.> for decreasing the size of the alphabet. However, in this case the resulting automaton is far from being weakly acyclic, while the automaton in the proof of Theorem <ref> has only one cycle. The next theorem shows how to modify our technique to prove an inapproximability bound for Max Sync Set in binary weakly acyclic automata. The Max Sync Set problem for binary weakly acyclic n-state automata cannot be approximated in polynomial time within a factor of O(n^{1/3 - ε}) for any ε > 0 unless P = NP. We modify the construction of the automaton A_main from Theorem <ref> in the following way. We repeat each state (with all transitions) of the first layer p^2 times in the same way as it is done in the proof of Theorem <ref>. Thus we get a weakly acyclic automaton A_wa with n = Θ(p^3) states, where p is the number of vertices in the graph G. Furthermore, similar to Lemma <ref>, the size of the maximum synchronizing set of states in A_wa is between p^2α(G) and p^2α(G) + p(p - 1) + 1, because some of the states from the layers other than the first can also be mapped to f. Both of the values are of order Θ(p^2α(G)), thus we have a gap-preserving reduction providing the inapproximability within a factor of O(p^{1 - ε}) = O(n^{1/3 - ε}) for any ε > 0, where n is the number of states in A_wa.We finish by noting that for two classes of automata the Max Sync Set problem is solvable in polynomial time. The problem Max Sync Set can be solved in polynomial time for unary automata. Consider the digraph G induced by the states and transitions of a unary automaton A. By definition, each vertex of G has outdegree 1. Thus, the set of the vertices of G can be partitioned into directed cycles and a set of vertices not belonging to any cycle, but lying on a directed path leading to some cycle. Let n be the number of states in A. It is easy to see that after performing n transitions, each state of A is mapped into a state in some cycle, and all further transitions will not map any two different states to the same state. Thus, it is enough to perform n transitions and select the state s to which the maximum number of states is mapped.A more interesting case is covered by the following proposition. An automaton A = (Q, Σ, δ) is called Eulerian if there exists k such that for each state q ∈ Q there are exactly k pairs (q', a), q' ∈ Q, a ∈Σ, such that δ(q', a) = q. The problem Max Sync Set can be solved in polynomial time for Eulerian automata. According to Theorem 2.1 in <cit.> (see also <cit.> for the discussion of the Eulerian case), each word of minimum rank with respect to an Eulerian automaton synchronizes the sets S_1, S_2, …, S_r forming a partition of the states of the automaton into inclusion-maximal synchronizing sets. Moreover, according to this theorem all inclusion-maximal synchronizing sets in an Eulerian automaton are of the same size, thus each inclusion-maximal synchronizing set has maximum cardinality. A word of minimum rank with respect to an automaton can be found in polynomial time <cit.>, which concludes the proof.
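Before moving to the rank problem, note that the vertex-letter gadget at the heart of this section's first inapproximability theorem is compact enough to state in a few lines of code. The sketch below builds that automaton from a graph (the naming of states and the dictionary encoding are our own; `edges` is assumed to be a set of index pairs):

def independent_set_gadget(p, edges):
    # Letters correspond to vertices; reading letter v_i sends s_i to f
    # and every neighbour s_j to its trap state t_j.
    states = [f"s{i}" for i in range(1, p + 1)] + \
             [f"t{i}" for i in range(1, p + 1)] + ["f"]
    alphabet = [f"v{i}" for i in range(1, p + 1)]
    delta = {(q, x): q for q in states for x in alphabet}  # default: self-loops
    for i in range(1, p + 1):
        delta[(f"s{i}", f"v{i}")] = "f"
    for i, j in edges:
        delta[(f"s{i}", f"v{j}")] = f"t{i}"
        delta[(f"s{j}", f"v{i}")] = f"t{j}"
    return states, alphabet, delta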
§ COMPUTING THE RANK OF A SUBSET OF STATES Assume that we know that the current state of the automaton A belongs to a subset S of its states. Even if it is not possible to synchronize S, it can be reasonable to minimize the size of the set of possible states of A, reducing the uncertainty of the current state as much as possible. One way to do it is to map S to a set S' of smaller size by applying some word to A. Recall that the size of the smallest such set S' is called the rank of S. Consider the following problem of finding the rank of a subset of states in a given automaton.  Set Rank  Input: An automaton A and a set S of its states;  Output: The rank of S in A.The rank of an automaton, that is, the rank of the set of its states, can be computed in polynomial time <cit.>. However, since the automaton in the proof of PSPACE-completeness of Sync Set in <cit.> has rank 2 (and thus each subset of states in this automaton has rank either 1 or 2), it follows immediately that there is no polynomial c-approximation algorithm for the Set Rank problem for any c < 2 unless P = PSPACE. It also follows that checking whether the rank of a subset of states equals the rank of the whole automaton is PSPACE-complete. For monotonic weakly acyclic automata, this problem is hard to approximate within a factor of 9/8 - ϵ for any ϵ > 0 <cit.>. For general weakly acyclic automata it is possible to get much stronger bounds, as the results of this section show.We shall need the Chromatic Number problem. A proper coloring of a graph G = (V, E) is a coloring of the set V in such a way that no two adjacent vertices have the same color. The chromatic number of G, denoted χ(G), is the minimum number of colors in a proper coloring of G. Recall that a set of vertices in a graph is called independent if no two vertices in this set are adjacent. A proper coloring of a graph can also be considered as a partition of the set of its vertices into independent sets.  Chromatic Number  Input: A graph G;  Output: The chromatic number of G. This problem cannot be approximated within a factor of O(p^{1 - ϵ}) for any ϵ > 0 unless P = NP, where p is the number of vertices in G <cit.>. The Set Rank problem for n-state weakly acyclic automata with alphabet of size O(√(n)) cannot be approximated within a factor of O(n^{1/2 - ϵ}) for any ϵ > 0 unless P = NP. We shall prove this theorem by constructing a gap-preserving reduction from the Chromatic Number problem, extending the technique in the proof of Theorem <ref>. Given a graph G = (V, E), V = {v_1, v_2, …, v_p}, we construct an automaton A = (Q, Σ, δ) as follows. The alphabet Σ consists of letters ṽ_1, …, ṽ_p corresponding to the vertices of G, together with a switching letter ν. We use p identical synchronizing gadgets T^(k), 1 ≤ k ≤ p, such that each gadget synchronizes a subset of states corresponding to an independent set in G. Gadget T^(k) consists of a set {s^(k)_i, t^(k)_i | 1 ≤ i ≤ p}∪{f^(k)} of states. The transition function δ is defined as follows. For each gadget T^(k), for each 1 ≤ i ≤ p, the state s^(k)_i is mapped to f^(k) by the letter ṽ_i. For each v_iv_j ∈ E the state s^(k)_i is mapped to t^(k)_i by the letter ṽ_j, and the state s^(k)_j is mapped to t^(k)_j by the letter ṽ_i. All yet undefined transitions corresponding to the letters ṽ_1, …, ṽ_p map a state to itself. It remains to define the transitions corresponding to ν. For each 1 ≤ k ≤ p - 1, ν maps t^(k)_i and s^(k)_i to s^(k + 1)_i, and f^(k) to itself.
Finally, ν acts on all states in T^(p) as a self-loop. Define S = {s^(1)_i | 1 ≤ i ≤ p}. We shall prove that the rank of S is equal to the chromatic number of G. Consider a proper coloring of G with the minimum number of colors and let I_1 ∪…∪ I_χ(G) be the partition of G into independent sets defined by this coloring. For each I_j, consider a word w_j obtained by concatenating the letters corresponding to the vertices in I_j in some order. Consider now the word w_1 ν w_2 ν…ν w_χ(G). This word maps the set S to the set {f^(i)| 1 ≤ i ≤χ(G)}, which proves that the rank of S is at most χ(G). In the other direction, note that after each reading of ν all states except f^(k), 1 ≤ k ≤ p - 1, are mapped to the next synchronizing gadget (except the last gadget T^(p), which is mapped to itself). By definition of δ, only a subset of states corresponding to an independent set of vertices can be mapped to some particular f^(k), and the image of S after reading any word is a subset of the states in some gadget together with some of the states f^(k), 1 ≤ k ≤ p. Hence, the rank of S is at least χ(G). Thus we have a gap-preserving reduction from the Chromatic Number problem to the Set Rank problem with gap Θ(p^{1 - ε}) for any ε > 0. It is easy to see that n = Θ(p^2), A is weakly acyclic and its alphabet has size O(√(n)), which finishes the proof of the theorem. Using the classical technique of reducing the alphabet size (see <cit.>), O(n^{1/3 - ϵ}) inapproximability can be proved for binary automata. To prove the same bound for binary weakly acyclic automata, we have to refine the technique of the proof of the previous theorem. The Set Rank problem for n-state binary weakly acyclic automata cannot be approximated within a factor of O(n^{1/3 - ϵ}) for any ϵ > 0 unless P = NP.To prove this theorem we construct a gap-preserving reduction from the Chromatic Number problem, extending the proof of the previous theorem. Given a graph G = (V, E), V = {v_1, v_2, …, v_p}, we construct an automaton A = (Q, {0, 1}, δ). In our reduction we use two kinds of gadgets: p synchronizing gadgets T^(k), 1 ≤ k ≤ p, and p waiting gadgets R^(k), 1 ≤ k ≤ p. Gadget T^(k) consists of a set {v_{i,j}^(k) | 1 ≤ i, j ≤ p} of states, together with a state f^(k), and R^(k), 1 ≤ k ≤ p, consists of the set {u_{i,j}^(k) | 1 ≤ i, j ≤ p}. For each i, j, k, 1 ≤ i, j, k ≤ p, the transition function δ is defined as: δ(v^(k)_{i,j}, 0) = u^(k)_{i,j} if i = j, and v^(k)_{i + 1,j} otherwise; δ(v^(k)_{i,j}, 1) = u^(k)_{i,j} if v_iv_j ∈ E, and v^(k)_{i + 1,j} otherwise. Here all v^(k)_{p + 1,j}, 1 ≤ j ≤ p, coincide with f^(k). We set δ(u^(k)_{i,j}, x) = u^(k)_{i + 1,j} for x ∈{0, 1}, 1 ≤ i ≤ p - 1, 1 ≤ k ≤ p - 1, 1 ≤ j ≤ p, and δ(u^(k)_{p,j}, x) = v^(k + 1)_{1,j} for 1 ≤ j ≤ p, 1 ≤ k ≤ p - 1, x ∈{0, 1}. The states u^(p)_{i,j} are sink states: both letters 0 and 1 act on them as self-loops. Finally, we set S = {v^(1)_{1,j} | 1 ≤ j ≤ p}. Figure <ref> gives an idea of the described construction. The idea of the presented construction is essentially a combination of the ideas in the proofs of Theorems <ref> and <ref>, so we provide only a sketch of the proof. A synchronizing gadget T^(k) synchronizes a set S^(k)⊆ S of states corresponding to some independent set in G. All the states corresponding to the vertices adjacent to vertices corresponding to S^(k) are mapped to the corresponding waiting gadget R^(k), and get to the next synchronizing gadget T^(k + 1) only after the states of S^(k) are synchronized (and thus mapped to f^(k)).
Hence, the minimum size of a partition of V into independent sets is equal to the rank of S.The number of states in A is O(p^3). Thus, we get O(n^{1/3 - ϵ}) inapproximability. § SUBSET SYNCHRONIZATION In this section, we obtain complexity results for several problems related to subset synchronization in weakly acyclic automata. We adapt Eppstein's construction from <cit.>, which is a powerful and flexible tool for such proofs. We shall need the following NP-complete SAT problem <cit.>.  SAT  Input: A set X of n boolean variables and a set C of m clauses;  Output: Yes if there exists an assignment of values to the variables in X such that all clauses in C are satisfied, No otherwise. The Sync Set problem in binary weakly acyclic automata is NP-complete. Because of the polynomial upper bound on the length of a shortest word synchronizing a subset of states proved in Proposition <ref>, we can use such a word as a certificate. Thus, the problem is in NP. We reduce from SAT. Given X and C, we construct an automaton A = (Q, {0, 1}, δ). For each clause c_j, we construct n + 1 states y^(j)_i, 1 ≤ i ≤ n + 1, in Q. We also introduce a state f ∈ Q. The transitions from y^(j)_i correspond to the occurrence of x_i in c_j in the following way: for 1 ≤ i ≤ n, 1 ≤ j ≤ m, δ(y^(j)_i, a) = f if the assignment x_i = a, a ∈{0, 1}, satisfies c_j, and δ(y^(j)_i, a) = y^(j)_{i + 1} otherwise. The transition function δ also maps y^(j)_{n + 1}, 1 ≤ j ≤ m, and f to themselves for both letters 0 and 1. Let S = {y^(j)_1 | 1 ≤ j ≤ m}. The word w = a_1 a_2 … a_n synchronizes S if a_i is the value of x_i in an assignment satisfying C, and vice versa. Thus, the set is synchronizing if and only if all clauses in C can be satisfied by some assignment of binary values to the variables in X. By identifying the states y^(j)_{n + 1} for 1 ≤ j ≤ m and adding f to S it is also possible to prove that the problem of checking whether the rank of a subset of states equals the rank of an automaton is coNP-complete for binary weakly acyclic automata (cf. the remarks in the beginning of Section <ref>).The proof of Theorem <ref> can be used to prove the hardness of a special case of the following problem, which is PSPACE-complete in general <cit.> and NP-complete for weakly acyclic monotonic automata over a three-letter alphabet <cit.>.  Finite Automata Intersection  Input: Automata A_1, …, A_k (with initial and accepting states);  Output: Yes if there is a word which is accepted by all automata, No otherwise. Finite Automata Intersection is NP-complete when all automata in the input are binary weakly acyclic. Observe first that if there exists a word which is accepted by all automata, then a shortest such word w has length at most linear in the total number of states in all automata. Indeed, for each automaton consider a topological sort of the set of its states. Each letter of w maps at least one state in some automaton to some other state, which has a larger index in the topological sort of the set of states of this automaton. Thus, the considered problem is in NP. For the hardness proof, we use the same construction as in Theorem <ref>. Provided X and C, define A in the same way as in Theorem <ref>. Define A_j = (Q_j, {0, 1}, δ_j) as follows. Take Q_j = {y^(j)_i, 1 ≤ i ≤ n + 1 }∪{f} and δ_j to be the restriction of δ to the set Q_j. Set y^(j)_1 to be the initial state and f to be the only accepting state of A_j. Then there exists a word accepted by the automata A_1, …, A_m if and only if all clauses in C are satisfiable by some assignment.
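The clause-gadget automaton used in the last two proofs is equally easy to realize; the following sketch (our own encoding: clauses as sets of signed literals, states as pairs) builds the reduction and sanity-checks it on the satisfiable formula (x_1 or x_2) and (not x_1 or x_2):

def sat_to_sync_set(n, clauses):
    # State (j, i) plays the role of y^(j)_i; reading letter a at position i
    # simulates setting x_i = a.
    delta = {("f", "0"): "f", ("f", "1"): "f"}
    for j, clause in enumerate(clauses):
        for i in range(1, n + 1):
            for a in ("0", "1"):
                satisfied = (i, a == "1") in clause
                delta[((j, i), a)] = "f" if satisfied else (j, i + 1)
        for a in ("0", "1"):          # y^(j)_{n+1} is a self-loop
            delta[((j, n + 1), a)] = (j, n + 1)
    return delta, {(j, 1) for j in range(len(clauses))}

delta, S = sat_to_sync_set(2, [{(1, True), (2, True)}, {(1, False), (2, True)}])
image = set(S)
for a in "01":                        # the assignment x_1 = 0, x_2 = 1
    image = {delta[(q, a)] for q in image}
assert image == {"f"}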
To obtain the next results, we shall need a modified construction of the automaton from the proof of Theorem <ref>, as well as some new definitions. A partial automaton is a triple (Q, Σ, δ), where Q and Σ are the same as in the definition of a finite deterministic automaton, and δ is a partial transition function (i.e., a transition function which may be undefined for some argument values). Given an instance of the SAT problem, construct a partial automaton A_base = (Q, {0, 1}, δ) as follows. We introduce a state f ∈ Q. For each clause c_j, we construct n + 1 states y^(j)_i, 1 ≤ i ≤ n + 1, in Q. For each c_j, construct also states z^(j)_i for h_j + 1 ≤ i ≤ n + 1, where h_j is the smallest index of a variable occurring in c_j. The transitions from y^(j)_i correspond to the occurrence of x_i in c_j in the following way: for 1 ≤ i ≤ n, δ(y^(j)_i, a) = z^(j)_{i + 1} if the assignment x_i = a, a ∈{0, 1}, satisfies c_j, and δ(y^(j)_i, a) = y^(j)_{i + 1} otherwise. For a ∈{0, 1}, we set δ(z^(j)_i, a) = z^(j)_{i + 1} for h_j + 1 ≤ i ≤ n, 1 ≤ j ≤ m. The transition function δ also maps z^(j)_{n + 1}, 1 ≤ j ≤ m, and f to f for both letters 0 and 1.A word w is said to carefully synchronize a partial automaton A if it maps all its states to the same state q, and each mapping corresponding to a prefix of w is defined for each state. The automaton A is then called carefully synchronizing. We use A_base to prove the hardness of the following problem.  Careful Synchronization  Input: A partial automaton A;  Output: Yes if A is carefully synchronizing, No otherwise. For binary automata, Careful Synchronization is PSPACE-complete <cit.>. For monotonic automata over a four-letter alphabet it is NP-hard. We call a partial automaton aperiodic if for any word w ∈Σ^* and any state q ∈ Q there exists k such that either δ(q, w^k) is undefined, or δ(q, w^k) = δ(q, w^{k + 1}). Careful Synchronization is NP-hard for aperiodic partial automata over a three-letter alphabet. We reduce from SAT. Given X and C, we first construct A_base. Then we add an additional letter r to the alphabet of A_base and introduce m new states s^(1), …, s^(m). For 1 ≤ j ≤ m and all i such that the corresponding states exist, we define δ(s^(j), r) = y^(j)_1, δ(y^(j)_i, r) = y^(j)_1, δ(z^(j)_i, r) = y^(j)_1, and δ(f, r) = f. All other transitions are left undefined. Let us call the constructed automaton A. The automaton A is carefully synchronizing if and only if all clauses in C can be satisfied by some assignment of binary values to the variables in X. Moreover, the word w = r w_1 w_2 … w_n 0 is carefully synchronizing if w_i is the value of x_i in such an assignment. Indeed, note that the first letter of w is necessarily r, as it is the only letter defined for all the states. Moreover, each word starting with r maps Q to a subset of {y^(j)_i, z^(j)_i | 1 ≤ i ≤ n + 1, 1 ≤ j ≤ m}∪{f}. The only way for a word to map all states to f is to map them first to the set {z^(j)_{n + 1} | 1 ≤ j ≤ m}∪{f}, because there are no transitions defined from any y^(j)_{n + 1}, except the transitions defined by r. But this exactly means that there exists an assignment satisfying C. The constructed automaton is aperiodic, because each cycle which is not a self-loop contains exactly one letter r. The complexity of the following problem can be obtained from Theorem <ref>.  Positive Matrix  Input: A set M_1, …, M_k of n × n binary matrices;  Output: Yes if there exists a sequence M_{i_1}×…× M_{i_t} of multiplications (possibly with repetitions) providing a matrix with all elements equal to 1, No otherwise.
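The two notions just introduced are straightforward to make concrete. A minimal sketch (the dictionary encoding of partial automata is our own assumption): apply_carefully mirrors the definition of careful synchronization, and transition_matrix shows how the letters of a partial automaton become the 0/1 matrices of the Positive Matrix problem, with an all-zero row marking an undefined transition:

import numpy as np

def apply_carefully(delta, S, w):
    # Image of S under w, or None as soon as a transition is undefined.
    image = set(S)
    for x in w:
        if any((q, x) not in delta for q in image):
            return None
        image = {delta[(q, x)] for q in image}
    return image

def transition_matrix(states, delta, x):
    # Entry (i, j) is 1 iff delta(states[i], x) = states[j]; multiplying such
    # matrices composes letters, which is how carefully synchronizing words
    # translate into the matrix products considered in the next proof.
    idx = {q: i for i, q in enumerate(states)}
    M = np.zeros((len(states), len(states)), dtype=int)
    for q in states:
        if (q, x) in delta:
            M[idx[q], idx[delta[(q, x)]]] = 1
    return M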
Positive Matrix is NP-hard for two upper-triangular and two lower-triangular matrices. The proof uses the idea from <cit.>. Consider the three transition matrices corresponding to the letters of the automaton constructed in the proof of Theorem <ref>. Add the matrix corresponding to a letter mapping the state f to all states and undefined for all other states. Any sequence of matrices resulting in a matrix with only positive elements must contain the new matrix, and before that there must be a sequence of matrices corresponding to a word carefully synchronizing the automaton from the proof of Theorem <ref>. Thus we get a reduction from Careful Synchronization for aperiodic partial automata over a three-letter alphabet to Positive Matrix. It is easy to see that the reduction uses two upper-triangular and two lower-triangular matrices. Finally, we show the hardness of the following problem (PSPACE-complete in general <cit.>).  Subset Reachability  Input: An automaton A = (Q, Σ, δ) and a subset S of its states;  Output: Yes if there exists a word w such that {δ(q, w) | q ∈ Q} = S, No otherwise. Subset Reachability is NP-complete for weakly acyclic automata. Consider a topological sort of Q. Let w be a shortest word mapping Q to some reachable set of states. Then each letter of w maps at least one state to a state with a larger index in the topological sort. Thus w has length O(|Q|^2), since the maximum total number of such mappings is (|Q| - 1) + (|Q| - 2) + … + 1 + 0. Thus, the considered problem is in NP. For the NP-hardness proof, we again reduce from SAT. Given an instance of SAT, construct A_base first. Next, add a transition δ(y^(j)_{n + 1}, a) = f for 1 ≤ j ≤ m, a ∈{0, 1}, resulting in a deterministic automaton A. Similar to the proof of Theorem <ref>, C is satisfiable if and only if the set {z^(j)_{n + 1} | 1 ≤ j ≤ m}∪{f} is reachable in A. § CONCLUSIONS AND OPEN PROBLEMS As shown in this paper, weakly acyclic automata serve as an example of a small class of automata where most of the synchronization problems are still hard. More precisely, switching from general automata to weakly acyclic ones usually turns a PSPACE-complete problem into an NP-complete one.Some problems for weakly acyclic automata are still open. One of them is to study the approximability of the Shortest Sync Word problem: there is a drastic gap between the known inapproximability results and the O(n)-approximation algorithm for general automata. Another natural problem is to study the complexity of the Max Sync Set and Set Rank problems in strongly connected automata. The technique used by Vorel for proving PSPACE-completeness of the Sync Set problem in strongly connected automata seems to fail here.§.§.§ Acknowledgments We would like to thank Peter J. Cameron for introducing us to the notion of synchronizing automata, and Vojtěch Vorel, Yury Kartynnik, Vladimir Gusev and Ilia Fridman for very useful discussions. We also thank Mikhail V. Volkov and anonymous reviewers for their great contribution to the improvement of the paper.
http://arxiv.org/abs/1702.08144v2
{ "authors": [ "Andrew Ryzhikov" ], "categories": [ "cs.FL", "cs.CC", "68Q17", "F.1.1; F.1.3; F.2.2" ], "primary_category": "cs.FL", "published": "20170227044203", "title": "Synchronization Problems in Automata without Non-trivial Cycles" }
Boosted Generative Models Aditya Grover, Stefano Ermon Computer Science Department Stanford University December 30, 2023 ====================================================================================== We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our meta-algorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models. Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data. We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of our black-box boosting algorithms on density estimation, classification, and sample generation on benchmark datasets for a wide range of generative models.§ INTRODUCTION A variety of deep generative models have shown promising results on tasks spanning computer vision, speech recognition, natural language processing, and imitation learning <cit.>. These parametric models differ from each other in the forms of tractable inference they support, their learning algorithms, and their objectives. Despite significant progress, existing generative models cannot fit complex distributions with a sufficiently high degree of accuracy, limiting their applicability and leaving room for improvement.In this paper, we propose a technique for ensembling (imperfect) generative models to improve their overall performance. Our meta-algorithm is inspired by boosting, a technique used in supervised learning to combine weak classifiers (e.g., decision stumps or trees), which individually might not perform well on a given classification task, into a more powerful ensemble. The boosting algorithm learns a new classifier that corrects for the mistakes made so far by training on a reweighted version of the original dataset, and repeats this procedure recursively. Under some conditions on the weak classifiers' effectiveness, this procedure can drive the (training) error to zero <cit.>. Boosting can also be thought of as a feature learning algorithm, where at each round a new feature is learned by training a classifier on a reweighted version of the original dataset. In practice, algorithms based on boosting perform extremely well in machine learning competitions <cit.>.We show that a similar procedure can be applied to generative models. Given an initial generative model that provides an imperfect fit to the data distribution, we construct a second model to correct for the error, and repeat recursively. The second model is also a generative one, which is trained on a reweighted version of the original training set. Our meta-algorithm is general and can construct ensembles of any existing generative model that permits (approximate) likelihood evaluation such as fully-observed belief networks, sum-product networks, and variational autoencoders. Interestingly, our method can also leverage powerful discriminative models. Specifically, we train a binary classifier to distinguish true data samples from “fake” ones generated by the current model and provide a principled way to include this discriminator in the ensemble. A prior attempt at boosting density estimation proposed a sum-of-experts formulation <cit.>.
This approach is similar to supervised boosting: at every round of boosting, we derive a reweighted additive estimate of the boosted model density.In contrast, our proposed framework uses multiplicative boosting, which multiplies the ensemble model densities and can be interpreted as a product-of-experts formulation.We provide a holistic theoretical and algorithmic framework for multiplicative boosting, contrasting it with competing additive approaches. Unlike prior use cases of product-of-experts formulations, our approach is black-box, and we empirically test the proposed algorithms on several generative models, from simple ones such as mixture models to expressive parametric models such as sum-product networks and variational autoencoders.Overall, this paper makes the following contributions: * We provide theoretical conditions for additive and multiplicative boosting under which incorporating a new model is guaranteed to improve the ensemble fit.* We design and analyze a flexible meta-algorithmic boosting framework for including both generative and discriminative models in the ensemble.* We demonstrate the empirical effectiveness of our algorithms for density estimation, generative classification, and sample generation on several benchmark datasets.§ UNSUPERVISED BOOSTING Supervised boosting provides an algorithmic formalization of the hypothesis that a sequence of weak learners can create a single strong learner <cit.>.Here, we propose a framework that extends boosting to unsupervised settings for learning generative models. For ease of presentation, all distributions are with respect to any arbitrary 𝐱∈ℝ^d, unless otherwise specified. We use upper-case symbols to denote probability distributions and assume they all admit absolutely continuous densities (denoted by the corresponding lower-case notation) on a reference measure d𝐱. Our analysis naturally extends to discrete distributions, which we skip for brevity.Formally, we consider the following maximum likelihood estimation (MLE) setting. Given some data points X={𝐱_i ∈ℝ^d}_i=1^m sampled i.i.d. from an unknown distribution P, we provide a model class 𝒬 parameterizing the distributions that can be represented by the generative model and minimize the Kullback-Leibler (KL) divergence with respect to the true distribution:min_Q ∈𝒬 D_KL(P ‖ Q). In practice, we only observe samples from P and hence maximize the log-likelihood of the observed data X. Selecting the model class for maximum likelihood learning is non-trivial; MLE w.r.t. a small class can be far from P, whereas a large class poses the risk of overfitting in the absence of sufficient data, or even underfitting due to the difficulty of optimizing the non-convex objectives that frequently arise from the use of latent variable models, neural networks, etc. The boosting intuition is to greedily increase model capacity by learning a sequence of weak intermediate models {h_t ∈ℋ_t}_t=0^T that can correct for mistakes made by previous models in the ensemble. Here, ℋ_t is a predefined model class (such as 𝒬) for h_t. We defer the algorithms pertaining to the learning of such intermediate models to the next section, and first discuss two mechanisms for deriving the final estimate q_T from the individual density estimates at each round, {h_t}_t=0^T. §.§ Additive boosting In additive boosting, the final density estimate is an arithmetic average of the intermediate models:q_T = ∑_t=0^T α_t · h_t where 0 ≤α_t≤ 1 denote the weights assigned to the intermediate models.
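One practical advantage of the additive form is worth noting before we continue: the ensemble stays normalized, so both density evaluation and exact ancestral sampling are immediate. A minimal sketch (the pdf/sample interface of the intermediate models is an assumption of ours):

import numpy as np

def additive_pdf(models, weights, x):
    # q(x) = sum_t alpha_t * h_t(x), with the weights summing to one.
    return sum(a * h.pdf(x) for a, h in zip(weights, models))

def additive_sample(models, weights, rng=None):
    # Ancestral sampling: pick component t with probability alpha_t, then sample h_t.
    rng = rng or np.random.default_rng()
    t = rng.choice(len(models), p=weights)
    return models[t].sample()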
The weights are re-normalized at every round to sum to 1, which gives us a valid probability density estimate. Starting with a base model h_0, we can express the density estimate after a round of boosting recursively as:q_t = (1-α̂_t) · q_{t-1} + α̂_t · h_t where α̂_t denotes the normalized weight for h_t at round t. We now derive conditions on the intermediate models that guarantee “progress” in every round of boosting. Let δ^t_KL(h_t, α̂_t) = D_KL(P ‖ Q_{t-1}) - D_KL(P ‖ Q_t) denote the reduction in KL-divergence at the t^th round of additive boosting. The following conditions hold: * Sufficient: If 𝔼_P [log (h_t/q_{t-1})] ≥ 0, then δ^t_KL(h_t, α̂_t) ≥ 0 for all α̂_t ∈ [0,1]. * Necessary: If ∃α̂_t ∈ (0, 1] such that δ^t_KL(h_t, α̂_t) ≥ 0, then 𝔼_P [h_t/q_{t-1}] ≥ 1. In Appendix <ref>. The sufficient and necessary conditions require that, in terms of density ratios, the expected log-likelihood and likelihood, respectively, of the current intermediate model h_t be at least as large as those of the combined previous model q_{t-1}.Next, we consider an alternative formulation of multiplicative boosting for improving the model fit to an arbitrary data distribution. §.§ Multiplicative boosting In multiplicative boosting, we factorize the final density estimate as a geometric average of T+1 intermediate models {h_t}_t=0^T, each assigned an exponentiated weight α_t:q_T = ∏_t=0^T h_t^α_t/Z_T where the partition function Z_T = ∫∏_t=0^T h_t^α_t d𝐱. Recursively, we can specify the density estimate as:q̃_t = h_t^α_t·q̃_{t-1} where q̃_t is the unnormalized estimate at round t. The base model h_0 is learned using MLE. The conditions on the intermediate models for reducing KL-divergence at every round are stated below. Let δ^t_KL(h_t, α_t) = D_KL(P ‖ Q_{t-1}) - D_KL(P ‖ Q_t) denote the reduction in KL-divergence at the t^th round of multiplicative boosting. The following conditions hold: * Sufficient: If 𝔼_P [log h_t] ≥log𝔼_{Q_{t-1}}[h_t], then δ^t_KL(h_t, α_t) ≥ 0 for all α_t ∈ [0, 1]. * Necessary: If ∃α_t ∈ (0, 1] such that δ^t_KL(h_t, α_t) ≥ 0, then 𝔼_P [log h_t] ≥𝔼_{Q_{t-1}}[log h_t]. In Appendix <ref>. In contrast to additive boosting, the conditions above compare expectations under the true distribution with expectations under the model distribution in the previous round, Q_{t-1}. The equality in the conditions holds for α_t=0, which corresponds to the trivial case where the current intermediate model is ignored in Eq. (<ref>). For other valid α_t, the non-degenerate version of the sufficient inequality guarantees progress towards the true data distribution. Note that the intermediate models increase the overall capacity of the ensemble at every round. As we shall demonstrate later, we find models fit using multiplicative boosting to outperform their additive counterparts empirically, suggesting the conditions in Theorem <ref> are easier to fulfill in practice. From the necessary condition, we see that a “good" intermediate model h_t assigns a better-or-equal log-likelihood under the true distribution as opposed to the model distribution Q_{t-1}. This condition suggests two learning algorithms for intermediate models, which we discuss next.§ BOOSTED GENERATIVE MODELS In this section, we design and analyze meta-algorithms for multiplicative boosting of generative models.
Given any base model which permits (approximate) likelihood evaluation, we provide a mechanism for boosting this model using an ensemble of generative and/or discriminative models.§.§ Generative boosting Supervised boosting algorithms such as AdaBoost typically involve a reweighting procedure for training weak learners <cit.>. We can similarly train an ensemble of generative models for unsupervised boosting, where every subsequent model performs MLE w.r.t. a reweighted data distribution D_t:max_h_t𝔼_D_t[log h_t] where d_t ∝(p/q_{t-1})^{β_t} and β_t ∈ [0, 1] is the reweighting coefficient at round t. Note that these coefficients are in general different from the model weights α_t that appear in Eq. (<ref>).If we can maximize the objective in Eq. (<ref>) optimally, then δ_KL^t(h_t, α_t) ≥ 0 for any β_t ∈ [0, 1], with the equality holding for β_t=0. In Appendix <ref>. While the objective in Eq. (<ref>) can be hard to optimize in practice, the target distribution becomes easier to approximate as we reduce the reweighting coefficient. For the extreme case of β_t=0, the reweighted data distribution is simply uniform. There is no free lunch, however, since a low β_t results in a slower reduction in KL-divergence, leading to a computational-statistical trade-off. The pseudocode for the corresponding boosting meta-algorithm, referred to as GenBGM, is given in Algorithm <ref>. In practice, we only observe samples from the true data distribution, and hence approximate p by the empirical data distribution, which is defined to be uniform over the dataset X. At every subsequent round, GenBGM learns an intermediate model that maximizes the log-likelihood of data sampled from a reweighted data distribution.§.§ Discriminative boosting A base generative model can be boosted using a discriminative approach as well. Here, the intermediate model is specified as the density ratio obtained from a binary classifier. Consider the following setup: we observe an equal number of samples drawn i.i.d. from the true data distribution (w.l.o.g. assigned the label y=+1) and the model distribution in the previous round Q_{t-1} (label y=0).Let f: ℝ^+→ℝ be any convex, lower semi-continuous function satisfying f(1) = 0. The f-divergence between P and Q is defined as D_f(P ‖ Q) = ∫ q · f(p/q) d𝐱. Notable examples include the Kullback-Leibler (KL) divergence, Hellinger distance, and the Jensen-Shannon (JS) divergence, among many others. The binary classifier in discriminative boosting maximizes a variational lower bound on any f-divergence at round t:D_f(P ‖ Q_{t-1}) ≥sup_{r_t∈ℛ_t} (𝔼_P[r_t] - 𝔼_{Q_{t-1}}[f^⋆(r_t)]), where f^⋆ denotes the Fenchel conjugate of f and r_t: ℝ^d → dom_{f^⋆} parameterizes the classifier. Under mild conditions on f <cit.>, the lower bound in Eq. (<ref>) is tight if r_t^⋆ = f'(p/q_{t-1}). Hence, a solution to Eq. (<ref>) can be used to estimate density ratios. The density ratios naturally fit into the multiplicative boosting framework and provide a justification for the use of objectives of the form Eq. (<ref>) for learning intermediate models, as formalized in the proposition below.For any given f-divergence, let r_t^⋆ denote the optimal solution to Eq. (<ref>) in the t^th round of boosting. Then, the model density at the end of the boosting round matches the true density if we set α_t=1 and h_t = [f']^{-1}(r_t^⋆), where [f']^{-1} denotes the inverse of the derivative of f. In Appendix <ref>. The pseudocode for the corresponding meta-algorithm, DiscBGM, is given in Algorithm <ref>.
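Since the two algorithms are only referenced above, a minimal sketch of one round of each may be helpful. Function names are ours, fit_weighted_mle and sample_q_prev are assumed black boxes, and we use scikit-learn's logistic regression as a stand-in classifier for the cross-entropy special case discussed next (for logistic regression, log(c/(1-c)) is exactly the classifier's logit):

import numpy as np
from sklearn.linear_model import LogisticRegression

def genbgm_round(X, log_q_prev, fit_weighted_mle, beta):
    # Reweight the empirical data: d_t(x) is proportional to (p(x)/q_{t-1}(x))^beta;
    # with p empirical and uniform over X, this is q_{t-1}(x)^(-beta).
    log_w = -beta * log_q_prev(X)          # log_q_prev returns per-example log-densities
    w = np.exp(log_w - log_w.max())        # stabilize before normalizing
    return fit_weighted_mle(X, w / w.sum())

def discbgm_round(X_real, sample_q_prev, alpha):
    # Train a classifier on real (y=1) vs. model (y=0) samples; the round
    # contributes alpha * log h_t = alpha * log(c/(1-c)) to the running
    # unnormalized log-density.
    X_fake = sample_q_prev(len(X_real))
    X = np.vstack([X_real, X_fake])
    y = np.concatenate([np.ones(len(X_real)), np.zeros(len(X_fake))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return lambda Z: alpha * clf.decision_function(Z)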
At every round, we train a binary classifier to optimize the objective in Eq. (<ref>) for a chosen f-divergence. As a special case, the negative of the cross-entropy loss commonly used for binary classification is also a lower bound on an f-divergence. While Algorithm <ref> is applicable for any f-divergence, we will focus on cross-entropy henceforth to streamline the discussion.Consider the (negative) cross-entropy objective maximized by a binary classifier:sup_{c_t∈𝒞_t}𝔼_P[log c_t] + 𝔼_{Q_{t-1}}[log(1-c_t)].If a binary classifier c_t trained to optimize Eq. (<ref>) is Bayes optimal, then the model density after round t matches the true density if we set α_t=1 and h_t = c_t/(1-c_t). In Appendix <ref>. In practice, a classifier with limited capacity trained on a finite dataset will not generally be Bayes optimal. The above corollary, however, suggests that a good classifier can provide a `direction of improvement', in a similar spirit to gradient boosting for supervised learning <cit.>. Additionally, if the intermediate model distribution h_t obtained using the above corollary satisfies the conditions in Theorem <ref>, it is guaranteed to improve the fit. The weights α_t∈ [0,1] can be interpreted as our confidence in the classification estimates, akin to the step size used in gradient descent.While in practice we heuristically assign weights to the intermediate models, the greedy optimum value of these weights at every round is a critical point for δ^t_KL (defined in Theorem <ref>). For example, in the extreme case where c_t is uninformative, i.e., c_t ≡ 0.5, then δ^t_KL(h_t, α_t)=0 for all α_t∈ [0,1]. If c_t is Bayes optimal, then δ^t_KL attains a maximum when α_t=1 (Corollary <ref>). §.§ Hybrid boosting Intermediate models need not be exclusively generators or discriminators; we can design a boosting ensemble with any combination of generators and discriminators. If an intermediate model is chosen to be a generator, we learn a generative model using MLE after appropriately reweighting the data points. If a discriminator is used to implicitly specify an intermediate model, we set up a binary classification problem. §.§ Regularization In practice, we want boosted generative models (BGM) to generalize to data outside the training set X. Regularization in BGMs is imposed primarily in two ways. First, every intermediate model can be independently regularized by incorporating explicit terms in the learning objective, early stopping based on validation error, heuristics such as dropout, etc.Moreover, restricting the number of rounds of boosting is another effective mechanism for regularizing BGMs. Fewer rounds of boosting are required if the intermediate models are sufficiently expressive.§ EMPIRICAL EVALUATION Our experiments are designed to demonstrate the superiority of the proposed boosting meta-algorithms on a wide variety of generative models and tasks.A reference implementation of the boosting meta-algorithms is available at . Additional implementation details for the experiments below are given in Appendix <ref>.§.§ Multiplicative vs. additive boosting A common pitfall with learning parametric generative models is model misspecification with respect to the true underlying data distribution. For a quantitative and qualitative understanding of the behavior of additive and multiplicative boosting, we begin by considering a synthetic setting for density estimation on a mixture of Gaussians. Density estimation on synthetic dataset.
The true data distribution is an equi-weighted mixture of four Gaussians centered symmetrically around the origin, each having an identity covariance matrix. The contours of the underlying density are shown in Figure <ref>. We observe 1,000 training samples drawn independently from the data distribution (shown as black dots in Figure <ref>), and the task is to learn this distribution. The test set contains 1,000 samples from the same distribution. We repeat the process 10 times for statistical significance.As a base (misspecified) model, we fit a mixture of two Gaussians to the data; the contours for an example instance are shown in Figure <ref>. We compare multiplicative and additive boosting, each run for T=2 rounds. For additive boosting (Add), we extend the algorithm proposed by Rosset and Segal <cit.>, setting α̂_0 to unity and doing a line search over α̂_1, α̂_2 ∈ [0, 1].For Add and GenBGM, the intermediate models are mixtures of two Gaussians as well.The classifiers for DiscBGM are multi-layer perceptrons with two hidden layers of 100 units each and ReLU activations, trained to maximize f-divergences corresponding to the negative cross-entropy (NCE) and Hellinger distance (HD) using the Adam optimizer <cit.>. The test negative log-likelihood (NLL) estimates are listed in Table <ref>. Qualitatively, the contour plots for the estimated densities after every boosting round on a sample instance are shown in Figure <ref>. Multiplicative boosting algorithms outperform additive boosting in correcting for model misspecification. GenBGM initially leans towards maximizing coverage, whereas both versions of DiscBGM are relatively more conservative in assigning high densities to data points away from the modes.Heuristic model weighting strategies. The multiplicative boosting algorithms require as hyperparameters the number of rounds of boosting and the weights assigned to the intermediate models. For any practical setting, these hyperparameters are specific to the dataset and task under consideration and should be set based on cross-validation. While automatically setting model weights is an important direction for future work, we propose some heuristic weighting strategies.Specifically, the unity heuristic assigns a weight of 1 to every model in the ensemble, the uniform heuristic assigns a weight of 1/(T+1) to every model, and the decay heuristic assigns a weight of 1/2^t to the t^th model in the ensemble. In Figure <ref>, we observe that the performance of the algorithms is sensitive to the weighting strategies. In particular, DiscBGM produces worse estimates as T increases for the “uniform" (red) strategy.The performance of GenBGM also degrades slightly with increasing T for the “unity” (green) strategy.Notably, the “decay” (cyan) strategy achieves stable performance for both algorithms. Intuitively, this heuristic follows the rationale of reducing the step size in gradient-based stochastic optimization algorithms, and we expect this strategy to work better even in other settings. However, this strategy could potentially result in slower convergence as opposed to the unity strategy.Density estimation on benchmark datasets. We now evaluate the performance of additive and multiplicative boosting for density estimation on real-world benchmark datasets <cit.>. We consider two generative model families: mixture of Bernoullis (MoB) and sum-product networks <cit.>.
While our results for multiplicative boosting with sum-product networks (SPN) are competitive with the state-of-the-art, the goal of these experiments is to perform a robust comparison of boosting algorithms as well as to demonstrate their applicability to various model families. We set T=2 rounds for additive boosting and GenBGM. Since DiscBGM requires samples from the model density at every round, we set T=1 to ensure computational fairness, such that the samples can be obtained efficiently from the base model, sidestepping running expensive Markov chains. Model weights are chosen based on cross-validation.

The results on density estimation are reported in Table <ref>. Since multiplicative boosting estimates are unnormalized, we use importance sampling to estimate the partition function. When the base model is MoB, the Add model underperforms and is often worse than even the baseline model for the best performing validated non-zero model weights. GenBGM consistently outperforms Add and improves over the baseline model in most cases (4/6 datasets). DiscBGM performs the best and convincingly outperforms the baseline, Add, and GenBGM on all datasets. For results on SPNs, the boosted models all outperform the baseline. GenBGM again edges out Add models (4/6 datasets), whereas DiscBGM models outperform all other models on all datasets. These results demonstrate the usefulness of boosting expressive model families, especially the DiscBGM approach, which performs the best, while GenBGM is preferable to Add.

§.§ Applications of generative models

Classification. Here, we evaluate the performance of boosting algorithms for classification. Since the datasets above do not have any explicit labels, we choose one of the dimensions to be the label (say y). Letting 𝐱_y̅ denote the remaining dimensions, we can obtain a prediction for y as p(y=1|𝐱_y̅) = p(y=1, 𝐱_y̅)/(p(y=1, 𝐱_y̅) + p(y=0, 𝐱_y̅)), which is efficient to compute even for unnormalized models. We repeat the above procedure for all the variables, predicting one variable at a time using the values assigned to the remaining variables. The results are reported in Table <ref>. When the base model is a MoB, we observe that the Add approach can often be worse than the base model, whereas GenBGM performs slightly better than the baseline (4/6 datasets). The DiscBGM approach consistently performs well, and is only outperformed by GenBGM on two datasets for MoB. When SPNs are used instead, both Add and GenBGM improve upon the baseline model, while DiscBGM again is the best performing model on all but one dataset.

Sample generation. We compare boosting algorithms based on their ability to generate image samples for the binarized MNIST dataset of handwritten digits <cit.>. We use variational autoencoders (VAE) as the base model <cit.>. While any sufficiently expressive VAE can generate impressive examples, we design the experiment to control for model complexity, approximated as the number of learnable parameters. Ancestral samples obtained by the baseline VAE model are shown in Figure <ref>. We use the evidence lower bound (ELBO) as a proxy for approximately evaluating the marginal log-likelihood during learning. The conventional approach to improving the performance of a latent variable model is to increase its representational capacity by adding hidden layers (Base + depth) or increasing the number of hidden units in the existing layers (Base + width). These lead to a marginal improvement in sample quality, as seen in Figure <ref> and Figure <ref>.
In contrast, boosting makes steady improvements in sample quality. We start with a VAE with far fewer parameters and generate samples using a hybrid boosting GenDiscBGM sequence VAE→CNN→VAE (Figure <ref>). The discriminator used is a convolutional neural network (CNN) <cit.> trained to maximize the negative cross-entropy. We then generate samples using independent Markov chain Monte Carlo (MCMC) runs. The boosted sequences generate sharper samples than all baselines in spite of having similar model capacity.

§ DISCUSSION AND RELATED WORK

In this work, we revisited boosting, a class of meta-algorithms developed in response to a seminal question: Can a set of weak learners create a single strong learner? Boosting has offered interesting theoretical insights into the fundamental limits of supervised learning and led to the development of algorithms that work well in practice <cit.>. Our work provides a foundational framework for unsupervised boosting with connections to prior work discussed below.

Sum-of-experts. rosset2002boosting proposed an algorithm for density estimation using Bayesian networks similar to gradient boosting. These models are normalized and easy to sample, but are generally outperformed by multiplicative formulations when correcting for model misspecification, as we show in this work. Similar additive approaches have been used for improving approximate posteriors for specific algorithms for variational inference <cit.> and generative adversarial networks <cit.>. For variations of additive ensembling for unsupervised settings, we refer to the survey by bourel2012aggregating.

Product-of-experts. Our multiplicative boosting formulation can be interpreted as a product-of-experts approach, which was initially proposed for feature learning in energy-based models such as Boltzmann machines. For example, the hidden units in a restricted Boltzmann machine can be interpreted as weak learners performing MLE. If the number of weak learners is fixed, they can be efficiently updated in parallel, but there is a risk of learning redundant features <cit.>. Weak learners can also be added incrementally based on the learner's ability to distinguish observed data and model-generated data <cit.>. tu2007learning generalized the latter to boost arbitrary probabilistic models; their algorithm is a special case of DiscBGM with all α's set to 1 and the discriminator itself a boosted classifier. DiscBGM additionally accounts for imperfections in learning classifiers through flexible model weights. Further, it can include any classifier trained to maximize any f-divergence. Related techniques such as noise-contrastive estimation, ratio matching, and score matching can be cast as minimization of Bregman divergences, akin to DiscBGM with unit model weights <cit.>. A non-parametric algorithm similar to GenBGM was proposed by di2004boosting, in which an ensemble of weighted kernel density estimates is learned to approximate the data distribution. In contrast, our framework allows for both parametric and non-parametric learners and uses a different scheme for reweighting data points than the one proposed in the above work.

Unsupervised-as-supervised learning. The use of density ratios learned by a binary classifier for estimation was first proposed by friedman2001elements and has been subsequently applied elsewhere, notably for parameter estimation using noise-contrastive estimation <cit.> and sample generation in generative adversarial networks (GAN) <cit.>.
While GANs consist of a discriminator distinguishing real data from model-generated data, similar to DiscBGM for a suitable f-divergence, they differ in the learning objective for the generator <cit.>. The generator of a GAN performs an adversarial minimization of the same objective the discriminator maximizes, whereas DiscBGM uses the likelihood estimate of the base generator (learned using MLE) and the density ratios derived from the discriminator(s) to estimate the model density for the ensemble.

Limitations and future work. In the multiplicative boosting framework, the model density needs to be specified only up to a normalization constant at any given round of boosting. Additionally, while many applications of generative modeling such as feature learning and classification can sidestep computing the partition function, if needed it can be estimated using techniques such as Annealed Importance Sampling <cit.>. Similarly, Markov chain Monte Carlo methods can be used to generate samples. The lack of implicit normalization can, however, be limiting for applications requiring fast log-likelihood evaluation and sampling. In order to sidestep this issue, a promising direction for future work is to consider boosting of normalizing flow models <cit.>. These models specify an invertible multiplicative transformation from one distribution to another using the change-of-variables formula, such that the resulting distribution is self-normalized and efficient ancestral sampling is possible. The GenBGM algorithm can be adapted to normalizing flow models whereby every transformation is interpreted as a weak learner. The parameters for every transformation can be trained greedily after suitable reweighting, resulting in a self-normalized boosted generative model.

§ CONCLUSION

We presented a general-purpose framework for boosting generative models by explicit factorization of the model likelihood as a product of simpler intermediate model densities. These intermediate models are learned greedily using discriminative or generative approaches, gradually increasing the overall model's capacity. We demonstrated the effectiveness of these models over baseline models and additive boosting for the tasks of density estimation, classification, and sample generation. Extensions to semi-supervised learning <cit.> and structured prediction <cit.> are exciting directions for future work.

§ ACKNOWLEDGEMENTS

We are thankful to Neal Jean, Daniel Levy, and Russell Stewart for helpful critique. This research was supported by a Microsoft Research PhD fellowship in machine learning for the first author, NSF grants #1651565, #1522054, #1733686, a Future of Life Institute grant, and Intel.

§ APPENDICES

§ PROOFS OF THEORETICAL RESULTS

§.§ Theorem <ref>

The reduction in KL-divergence can be simplified as: δ^t_KL(h_t, α̂_t) = 𝔼_P[log p/q_t-1] - 𝔼_P[log p/q_t] = 𝔼_P[log q_t/q_t-1] = 𝔼_P[log[ (1-α̂_t) + α̂_t h_t/q_t-1]].

We first derive the sufficient condition by lower bounding δ^t_KL(h_t, α̂_t):
δ^t_KL(h_t, α̂_t) = 𝔼_P[log[ (1-α̂_t) + α̂_t h_t/q_t-1]] ≥ 𝔼_P[(1-α̂_t) log 1 + α̂_t log h_t/q_t-1] (arithmetic mean ≥ geometric mean) = α̂_t 𝔼_P[log h_t/q_t-1] (linearity of expectation).
If the lower bound is non-negative, then so is δ^t_KL(h_t, α̂_t).
Hence: 𝔼_P[log h_t/q_t-1] ≥ 0, which is the stated sufficient condition.

For the necessary condition to hold, we know that:
0 ≤ δ^t_KL(h_t, α̂_t) = 𝔼_P[log[ (1-α̂_t) + α̂_t h_t/q_t-1]] ≤ log 𝔼_P[ (1-α̂_t) + α̂_t h_t/q_t-1] (Jensen's inequality) = log[ (1-α̂_t) + α̂_t 𝔼_P[h_t/q_t-1]] (linearity of expectation).
Exponentiating both sides, we get: (1-α̂_t) + α̂_t 𝔼_P[h_t/q_t-1] ≥ 1, i.e., 𝔼_P[h_t/q_t-1] ≥ 1, which is the stated necessary condition.

§.§ Theorem <ref>

We first derive the sufficient condition.
δ^t_KL(h_t, α_t) = ∫ p log q_t d𝐱 - ∫ p log q_t-1 d𝐱 = ∫ p log h_t^α_t· q_t-1/Z_t - ∫ p log q_t-1 (using Eq. (<ref>)) = α_t · 𝔼_P[log h_t] - log 𝔼_Q_t-1[h_t^α_t] ≥ α_t · 𝔼_P[log h_t] - log 𝔼_Q_t-1[h_t]^α_t (Jensen's inequality) = α_t · [𝔼_P[log h_t] - log 𝔼_Q_t-1[h_t]] ≥ 0 (by assumption).
Note that if α_t=1, the sufficient condition is also necessary.

For the necessary condition to hold, we know that:
0 ≤ δ^t_KL(h_t, α_t) = α_t · 𝔼_P[log h_t] - log 𝔼_Q_t-1[h_t^α_t] ≤ α_t · 𝔼_P[log h_t] - 𝔼_Q_t-1[log h_t^α_t] (Jensen's inequality) = α_t · [𝔼_P[log h_t] - 𝔼_Q_t-1[log h_t]] (linearity of expectation) ≤ 𝔼_P[log h_t] - 𝔼_Q_t-1[log h_t] (since α_t > 0).

§.§ Proposition <ref>

By assumption, we can optimize Eq. (<ref>) to get: h_t ∝ (p/q_t-1)^β_t.
Substituting for h_t in the multiplicative boosting formulation in Eq. (<ref>):
q_t ∝ q_t-1· h_t ∝ q_t-1·(p/q_t-1)^β_t = p^β_t· q_t-1^1-β_t/Z_q_t, where the partition function Z_q_t = ∫ p^β_t· q_t-1^1-β_t.

In order to prove the inequality, we first obtain a lower bound on the log-partition function, Z_q_t. For any given point, we have: p^β_t· q_t-1^1-β_t ≤ β_t p + (1-β_t) q_t-1 (arithmetic mean ≥ geometric mean). Integrating over all points in the domain, we get: log Z_q_t ≤ log[β_t Z_p + (1-β_t) Z_q_t-1] = 0, where we have used the fact that p and q_t-1 are normalized densities. Now, consider the following quantity:
D_KL(P ‖ Q_t) = 𝔼_P[log p/q_t] = 𝔼_P[log p/(p^β_t· q_t-1^1-β_t/Z_q_t)] = (1-β_t) 𝔼_P[log p/q_t-1] + log Z_q_t ≤ (1-β_t) 𝔼_P[log p/q_t-1] (using Eq. (<ref>)) ≤ 𝔼_P[log p/q_t-1] (since β_t ≥ 0) = D_KL(P ‖ Q_t-1).

§.§ Proposition <ref>

By the f-optimality assumption, we know that: r_t = f'(p/q_t-1). Hence, h_t = p/q_t-1. From Eq. (<ref>), we get: q_t = q_t-1· h_t^α_t = p, finishing the proof.

§.§ Corollary <ref>

Let u_t denote the joint distribution over (𝐱, y) at round t. We will prove a slightly more general result where we have m positive training examples sampled from p and k negative training examples sampled from q_t-1.[In the statement of Corollary <ref>, the classes are assumed to be balanced for simplicity, i.e., m=k.] Hence, we can express the conditional and prior densities as: p = u(𝐱 | y=1), q_t-1 = u(𝐱 | y=0), u(y=1) = m/(m+k), u(y=0) = k/(m+k).
The Bayes optimal density c_t can be expressed as: c_t = u(y=1 | 𝐱) = u(𝐱 | y=1) u(y=1) / u(𝐱). Similarly, we have: 1-c_t = u(𝐱 | y=0) u(y=0) / u(𝐱).
From Eqs. (<ref>-<ref>, <ref>-<ref>), we have: h_t = γ· c_t/(1-c_t) = p/q_t-1, where γ = k/m. Finally, from Eq. (<ref>), we get: q_t = q_t-1· h_t^α_t = p, finishing the proof.

In Corollary <ref> below, we present an additional theoretical result that derives the optimal model weight α_t for an adversarial Bayes optimal classifier.

§.§ Corollary <ref> [to Corollary <ref>]

Define an adversarial Bayes optimal classifier c_t' as one that assigns the density c_t' = 1 - c_t, where c_t is the Bayes optimal classifier. For an adversarial Bayes optimal classifier c_t', δ^t_KL attains a maximum of zero when α_t=0. For an adversarial Bayes optimal classifier, c_t' = u(𝐱 | y=0) u(y=0) / u(𝐱) and 1-c_t' = u(𝐱 | y=1) u(y=1) / u(𝐱). From Eqs.
(<ref>-<ref>, <ref>-<ref>), we have: h_t = γ· c_t'/(1-c_t') = q_t-1/p.
Substituting the above intermediate model in Eq. (<ref>),
δ^t_KL(h_t, α_t) = α_t · 𝔼_P[log q_t-1/p] - log 𝔼_Q_t-1[q_t-1/p]^α_t ≤ α_t · 𝔼_P[log q_t-1/p] - 𝔼_Q_t-1[α_t · log q_t-1/p] (Jensen's inequality) = α_t · [𝔼_P[log q_t-1/p] - 𝔼_Q_t-1[log q_t-1/p]] (linearity of expectation) = -α_t [D_KL(P ∥ Q_t-1) + D_KL(Q_t-1 ∥ P)] ≤ 0 (D_KL is non-negative).
By inspection, the equality holds when α_t=0, finishing the proof.

§ ADDITIONAL IMPLEMENTATION DETAILS

§.§ Density estimation on synthetic dataset

Model weights. For DiscBGM, all model weights α's are set to unity. For GenBGM, the model weights α's are set uniformly to 1/(T+1) and the reweighting coefficients β's are set to unity.

§.§ Density estimation on benchmark datasets

Generator learning procedure details. We use the default open-source implementations of mixture of Bernoullis (MoB) and sum-product networks (SPN) for the baseline models.

Discriminator learning procedure details. The discriminator considered for these experiments is a multilayer perceptron with two hidden layers consisting of 100 units each and ReLU activations, learned using the Adam optimizer <cit.> with a learning rate of 1e-4. Training runs for 100 epochs with a mini-batch size of 100, and finally the model checkpoint with the best validation error during training is selected to specify the intermediate model to be added to the ensemble.

Model weights. Model weights for the multiplicative boosting algorithms, GenBGM and DiscBGM, are set based on the best validation-set performance of the heuristic weighting strategies. The partition function is estimated using importance sampling with the baseline model (MoB or SPN) as a proposal and a sample size of 1,000,000.

§.§ Sample generation

VAE architecture and learning procedure details. Only the last layer in every VAE is stochastic; the rest are deterministic. The inference network specifying the posterior contains the same architecture for the hidden layer as the generative network. The prior over the latent variables is standard Gaussian, the hidden layer activations are ReLU, and learning is done using Adam <cit.> with a learning rate of 10^-3 and mini-batches of size 100.

CNN architecture and learning procedure details. The CNN contains two convolutional layers and a single fully connected layer with 1024 units. Convolution layers have kernel size 5× 5, and 32 and 64 output channels, respectively. We apply ReLUs and 2× 2 max pooling after each convolution. The net is randomly initialized prior to training, and learning is done using the Adam <cit.> optimizer with a learning rate of 10^-3 and mini-batches of size 100.

Sampling procedure for BGM sequences. Samples from the GenDiscBGM are drawn from a Markov chain run using the Metropolis-Hastings algorithm with a discrete, uniformly random proposal and the BGM distribution as the stationary distribution for the chain. Every sample in Figure <ref> (d) is drawn from an independent Markov chain with a burn-in period of 100,000 samples and a different start seed state.
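A minimal sketch of this sampling procedure — Metropolis-Hastings with a uniformly random proposal over binary vectors, targeting an unnormalized boosted density — could look as follows. The log-density here is a hypothetical stand-in for the BGM's, and the burn-in is shortened for illustration; since the uniform proposal is symmetric, the acceptance ratio reduces to the ratio of target densities.

```python
# Sketch of Metropolis-Hastings sampling from an unnormalized BGM density
# over binary vectors, with a discrete uniformly random (symmetric) proposal.
import numpy as np

D = 784  # e.g., binarized MNIST dimensionality

def log_q(x):
    # Hypothetical stand-in for the boosted model's unnormalized log density.
    return -np.sum((x - 0.5) ** 2)

def mh_sample(n_burnin=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=D)            # random start seed state
    lp = log_q(x)
    for _ in range(n_burnin):
        prop = rng.integers(0, 2, size=D)     # uniform proposal over {0,1}^D
        lp_prop = log_q(prop)
        # Symmetric proposal: accept with prob min(1, q(prop)/q(x)).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
    return x

sample = mh_sample(n_burnin=1_000)  # shortened burn-in for illustration
print(sample[:10])
```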
http://arxiv.org/abs/1702.08484v2
{ "authors": [ "Aditya Grover", "Stefano Ermon" ], "categories": [ "cs.LG", "cs.AI", "stat.ML" ], "primary_category": "cs.LG", "published": "20170227192840", "title": "Boosted Generative Models" }
IFT-UAM/CSIC-17-014

R+α R^n Inflation in higher-dimensional Space-times

Santiago Pajón Otero^1, Francisco G. Pedro^1, and Clemens Wieck^1

^1Departamento de Física Teórica and Instituto de Física Teórica UAM/CSIC, Universidad Autónoma de Madrid, Cantoblanco, 28049 Madrid, Spain

We generalise Starobinsky's model of inflation to space-times with D>4 dimensions, where D-4 dimensions are compactified on a suitable manifold. The D-dimensional action features Einstein-Hilbert gravity, a higher-order curvature term, a cosmological constant, and potential contributions from fluxes in the compact dimensions. The existence of a stable flat direction in the four-dimensional EFT implies that the power of space-time curvature, n, and the rank of the compact space fluxes, p, are constrained via n=p=D/2. Whenever these constraints are satisfied, a consistent single-field inflation model can be built into this setup, where the inflaton field is the same as in the four-dimensional Starobinsky model. The resulting predictions for the CMB observables are nearly indistinguishable from those of the latter.

§ INTRODUCTION

As one of the first examples of single-field slow-roll inflation, Starobinsky proposed a model of extended gravity with f(R) = R + α R^2 that leads to a scalar field theory with an exponentially flat potential <cit.>. By means of a Legendre-Weyl transformation the non-trivial gravity action can be recast into the form of Einstein-Hilbert gravity with a minimally coupled scalar field, ϕ, whose scalar potential takes the form V = 1/8α(1-e^-√(2/3)ϕ)^2. This model of inflation is, over three decades after its proposal, compatible with the latest observational constraints <cit.>.

The aim of this work is to study possible generalisations of the underlying f(R) theory in D>4 dimensions; it is based on <cit.>. Recently, there has been further research in this direction <cit.>, which shares some of the conclusions of the present work, without addressing the important aspect of moduli stabilisation. Whenever higher-dimensional theories are compactified, deformation modes of the internal manifold enter the four-dimensional effective field theory (EFT) as additional scalar fields. Usually those fields must be stabilised in a suitable way so as not to cause a variety of problems. We study the interplay between inflation from f(R) gravity in higher-dimensional space-times and moduli stabilisation using a simple toy model. We show that, without ingredients other than the gravitational action and a cosmological constant, the potential is generically unstable along the direction of the volume of the compact space. Following the original idea of Freund and Rubin <cit.>, we demonstrate that non-vanishing two-form flux on the compact space can lead to sufficiently stable minima with a Minkowski or de Sitter space-time in four dimensions. However, we show that there are no stable inflationary trajectories ending in those minima. While for large values of the scalar field ϕ the potential features a plateau – as in the original Starobinsky model – this plateau is always unstable in the direction of the volume modulus. Finally, we propose a solution to this problem using a more general p-form flux background on the compact space. This allows us to separate moduli stabilisation from the inflationary dynamics.

This work fits well in line with previous studies of plateau inflation in higher-dimensional theories.
For example, the authors of <cit.> use a similarly simple toy model of moduli stabilisation to investigate its interplay with inflationary theories. Moreover, the past decade has seen substantial progress in string theory implementations of plateau inflation models, also including the study of moduli stability, cf. <cit.>.

The remainder of this paper is organised as follows. In Section 2 we first give the ansatz for the D-dimensional f(R) theory. Second, we give the resulting four-dimensional action for the two involved scalar fields in the Einstein frame, after compactification on a sphere. Third, we demonstrate that two-form flux cannot sufficiently stabilise the volume modulus during inflation. Finally, we solve this problem by introducing p-form flux on the sphere and discuss the ensuing observational footprint of the model. In Section 3 we conclude, and compile the details regarding the main result, which is the four-dimensional action in Einstein frame, in Appendix A.

§ STAROBINSKY'S MODEL IN D DIMENSIONS

The starting point of our discussion is a generalisation of Starobinsky's model in D space-time dimensions. Following <cit.>, the D-dimensional action features an Einstein-Hilbert term, a higher-order curvature term, a cosmological constant, and the kinetic term of a (p-1)-form gauge potential. In total, we have S = M^D-2/2∫d^DX √(-g)(R + αR^n - 2 M^2Λ - |F_p|^2), where |F_p|^2 = g^M_1N_1⋯ g^M_p N_p F_M_1… M_p F_N_1… N_p, and n and Λ are treated as free parameters. Moreover, M denotes the D-dimensional Planck mass. In the following we are interested in the four-dimensional effective field theory (EFT) after compactification of D-4 dimensions on a sphere.[We choose a sphere because it is a simple example manifold with positive Euler number and a single volume modulus.] F_p only has non-vanishing components in the compact space to satisfy Lorentz invariance in the EFT, so for D = 4, n = 2, and Λ = 0, (<ref>) reduces to the standard Starobinsky action.

§.§ Einstein frame and compactification

The action above is written in a D-dimensional Jordan frame. To extract the physical predictions of the EFT after compactification of D-4 dimensions, an Einstein frame description is particularly useful. The strategy to obtain the desired four-dimensional action is as follows. First, by introducing an auxiliary scalar field A we can remove the term proportional to R^n in (<ref>). Second, using a conformal transformation of the D-dimensional metric, we can transform the result to the D-dimensional Einstein frame. Subsequently we compactify the D-4 internal dimensions on a sphere. Finally, as the result is again given in a four-dimensional Jordan frame, we perform another conformal transformation to obtain the four-dimensional Einstein frame action of the EFT. The details of this procedure can be found in Appendix <ref>. Here we merely state the final result,
S= 1/2∫d^4x √(- g){R - 1/2(D-4) (D-2) ∂_μlnσ∂^μlnσ - D-1/D-2∂_μln A ∂^μln A + 2 σ^2-D - σ^4-D A^D/2-D[ (n-1) α( A - 1/n α)^n/n-1 + 2 Λ] + V_flux},
where g and R now denote the respective four-dimensional quantities. This action is given in terms of four-dimensional natural units, i.e., we have set the four-dimensional Planck mass to unity, M_p = 1. Also, compared to (<ref>) in the appendix we have dropped the hats for convenience. The four-dimensional EFT thus contains Einstein gravity and two dynamical scalar fields. Here A is the would-be inflaton field, analogous to the one in Starobinsky's model, and σ is the radial modulus of the compact sphere.
The canonically normalised variables, ϕ and Σ, can be deduced from (<ref>) and are defined by σ = e^√(2 / (D-4)(D-2)) Σ, A = e^√(D-2/D-1) ϕ.

The scalar potential V(σ,A) features contributions from the D-dimensional higher-order curvature term, from the integrated curvature of the compact space, and from compact space fluxes. It reads
V = 1/2 σ^4-D A^D/2-D[(n-1) α( A-1/n α)^n/n-1 + 2 Λ] - σ^2-D + V_flux,
where V_flux is the potential generated by the non-vanishing integral over |F_p|^2 on the sphere. It is generally a function of both A and σ; its form is given below.

If (<ref>) is to have a plateau at large values of A, as is typical of four-dimensional Starobinsky inflation, the dimensionality of space-time must be related to the power of the Ricci scalar as follows, D = 2n. We stress that the violation of this condition does not exclude the existence of a flat patch of the potential where inflation can take place. However, in the remainder of the paper we consider setups that feature an infinite plateau in the A direction, for which (<ref>) is a necessary (but not sufficient) condition. As argued below, the stability of this plateau places non-trivial constraints on the functional form of V_flux.

Before we analyse in detail the interplay between the flux stabilisation of the volume modulus and the existence of a stable and flat inflationary trajectory, let us note that, by taking the limit D → 4 and n → 2 while setting V_flux=Λ=0, one recovers the standard four-dimensional Starobinsky potential lim_D → 4 V |_n=2, Λ = 0 = 1/8α( 1-A )^2, in terms of the non-canonical variable A.

§.§ Volume stabilisation with two-form fluxes

One crucial observation following from the result (<ref>) is that, if V_flux = 0, the theory always has a runaway direction towards σ→ 0. To arrange for a (meta-)stable minimum of the volume modulus, we can turn on fluxes in the compact space which contribute to the four-dimensional scalar potential. Like in the original Freund-Rubin compactification <cit.> (see also <cit.> for a relation to string theory), we may try to employ two-form field strengths. In that case the last term of our starting action (<ref>) reads
S ⊃ - M^D-2/2∫d^DX √(-g) g^M N g^P Q F_MP F_NQ,
which, upon dimensional reduction, gives rise to V_flux = 1/2 f^2 σ^-D A^-D-4/D-2 in the four-dimensional Einstein frame. The integer flux constant f is defined in (<ref>). Thus, the full scalar potential in this case, assuming D=2n, reads
V(σ,A) = 1/2 σ^2(2-n)[ (n-1) α( 1 - A^-1/n α)^n/n-1 + 2 A^-n/n-1 Λ] + 1/2 f^2 σ^-2n A^-n-2/n-1 - σ^2(1-n).
We may now study whether this potential has a sufficiently stable minimum with vanishing (four-dimensional) cosmological constant and a stable inflationary trajectory.

A vacuum with the desired properties seems to exist in a limited region of parameter space for any value of n. For example, with n=3 one finds, after solving ∂_σ V = ∂_A V = V = 0,
σ_0^4 = f^4 + f^2 λ/2, A_0 = f^4 - f^2 λ/24 α, Λ = - f^4 + f^2 λ + 96 α/72 α (f^2+ λ),
with λ = √(f^4 - 48 α). Thus, the existence of a post-inflationary vacuum implies the parameter constraint f^4 > 48 α.

Inflation, however, seems challenging to realise. One can check that for any n, the only potentially viable inflationary trajectory in the potential (<ref>) is along the coordinate A <cit.>. We can evaluate the potential for large values of A as follows,
V_lim = lim_A →∞ V = 1/2 σ^2-2n[σ^2(n-1) α (n α)^n/1-n - 2].
The result does not depend on A, so the potential develops a plateau as in the original setup of Starobinsky.
However, the plateau is always unstable in the direction of the modulus σ. In fact, V_lim has a single local extremum at σ_c^2 = 2 (n α)^n/n-1/α(n-2), which does feature a positive value for the scalar potential on the plateau, V_plat = V_lim(σ_c) = 2^1-n/α(n-2)^n-2 n^-n, but a mass for σ that is always negative for n>2, ∂_σ^2 V_lim(σ_c) = -2^2-n α^n (n-1) (n-2)^n (n α)^n^2/1-n. This leads us to exclude the possibility of Starobinsky inflation in D > 4 dimensions in cases where the radial modulus of the compact dimensions is stabilised by two-form flux. This is ultimately due to the fact that, as a result of the dimensional reduction and conformal transformation, the flux term in (<ref>) depends inversely on A. Hence, for large A the crucial stabilising term is eliminated. In the following, we show how this problem can be avoided in a more general flux background.

§.§ Volume stabilisation with p-form fluxes

In order to disentangle the problem of moduli stabilisation from the potential of the would-be inflaton field, one can consider the more general case of stabilisation via p-form fluxes with p>2. The corresponding term in the original action is then
S ⊃ - M^D-2/2∫d^DX √(-g) g^M_1N_1...g^M_p N_p F_M_1...M_p F_N_1...N_p.
After noticing that the source of difficulties in the two-form case is the A dependence in (<ref>), we consider flux terms that are invariant under the Legendre-Weyl transformation that recasts the D-dimensional action into the Einstein frame.[For details cf. Appendix <ref>.] This implies a link between the rank of the p-form and the dimensionality of space-time, D=2p. This degree of flux is only possible if D≥8, since we must have p ∈ℕ and p ≤ D-4.

As shown in Appendix <ref>, upon dimensional reduction (<ref>) gives rise to the following term in the four-dimensional Einstein-frame action,
V_flux = 1/2 f^2 σ^-2D+4 = 1/2 f^2 σ^4-4n,
where the last equality follows from imposing D=2n. As advertised, this stabilising term is independent of A. The full scalar potential then reads
V(σ,A) = 1/2 σ^2(2-n)[ (n-1) α( 1 - A^-1/n α)^n/n-1 + 2 A^-n/n-1 Λ] + 1/2 f^2 σ^4-4n - σ^2(1-n).
Again we find a stable Minkowski vacuum for any n, given by σ_0 = (f^2 n/2)^1/2n-2, A_0 = f^2/f^2 - 2^n α, Λ = n-1/n (f^2 n/2 - 2^n-1 α n)^1/1-n.

As in Section <ref> we may look for the possibility of a plateau at large values of A. Indeed, one finds in the A →∞ limit
V = 1/2 f^2 σ^4-4n - σ^2-2n + (n-1)(n α)^1/1-n/2 n σ^4-2n + 𝒪(A^-1).
In this regime the volume modulus actually develops a local minimum at σ_c, defined by f^2 = σ_c^2n-2(1 + 2-n/2n (n α)^1/1-n σ_c^2), which implies that the height of the plateau at large A is given by V_plat = 1/4 σ_c^2-2n(-2 + (n α)^1/1-n σ_c^2).

This situation is different from the one with two-form fluxes in Section <ref>. The plateau is actually stable in a certain parameter regime, since the mass of σ can be positive and large compared to the inflationary energy scale. In particular, one finds for the mass of the canonically normalised modulus at σ_c, m^2_Σ = σ_c^2-2n( 2n-2/n-2 - (n α)^1/1-n σ_c^2).

Requiring the inflationary dynamics to be described by a single-field system, i.e., imposing that σ can be integrated out consistently, leads to the following two constraints on the parameters of the model, m^2_Σ>0 and m^2_Σ/V_plat ≫ 1. The latter constraint comes from the requirement that the dynamics of σ be negligible during the inflationary epoch. These constraints imply a tuning of the parameters such that 2 < σ_c^2 (nα)^1/1-n < (2n-12/5)/(n-2). With n>3, as has to be the case in our setup, one finds (2n-12/5)/(n-2)<4.
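The Minkowski solution quoted above can be checked numerically. The sketch below (not from the paper; the values n=4, i.e., D=8, α=1, f=10 are purely illustrative) evaluates the p-form potential and its finite-difference gradients at (σ_0, A_0) with the stated Λ; all three printed numbers should vanish up to numerical precision.

```python
# Numerical sanity check of the Minkowski vacuum for the p-form case:
# V, dV/dsigma and dV/dA should all vanish at (sigma_0, A_0).
import numpy as np

n, alpha, f = 4, 1.0, 10.0  # illustrative values with D = 2n = 8

def V(sigma, A, Lam):
    bracket = ((n - 1) * alpha * ((1 - 1 / A) / (n * alpha)) ** (n / (n - 1))
               + 2 * A ** (-n / (n - 1)) * Lam)
    return (0.5 * sigma ** (2 * (2 - n)) * bracket
            + 0.5 * f ** 2 * sigma ** (4 - 4 * n)    # p-form flux term
            - sigma ** (2 * (1 - n)))                # compact-space curvature

# Stated solution of dV/dsigma = dV/dA = V = 0.
sigma0 = (f ** 2 * n / 2) ** (1 / (2 * n - 2))
A0 = f ** 2 / (f ** 2 - 2 ** n * alpha)
Lam = (n - 1) / n * (f ** 2 * n / 2 - 2 ** (n - 1) * alpha * n) ** (1 / (1 - n))

eps = 1e-6
dV_dsigma = (V(sigma0 + eps, A0, Lam) - V(sigma0 - eps, A0, Lam)) / (2 * eps)
dV_dA = (V(sigma0, A0 + eps, Lam) - V(sigma0, A0 - eps, Lam)) / (2 * eps)
print(V(sigma0, A0, Lam), dV_dsigma, dV_dA)  # all ~ 0
```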
Moreover, note that (<ref>), (<ref>), and (<ref>) imply that in the desired parameter regime σ_0 ≈ σ_c. This means that the back-reaction of the inflationary energy density on the expectation value of the volume modulus is negligible.

Validity of the four-dimensional EFT. In order to evaluate the validity of the four-dimensional EFT one must compare the energy scales in the problem to the Kaluza-Klein (KK) scale of the compactification. Let us expand σ_c^2 (nα)^1/1-n ≡ 2+δ, where δ≪1. One can then show that V_plat = 1/4 σ_c^2-2n δ and m^2_Σ = σ_c^2-2n 2/n-2 + 𝒪(δ). For the four-dimensional description to be valid both energy scales must be below the KK scale, V_plat ≪ m^2_Σ ≪ M_KK^2, which is given by M_KK ≃ 1/σ.

Since one can tune δ≪1, it automatically follows that the Hubble scale during inflation is parametrically below the KK scale. The situation of m_Σ is more subtle, since for D = 2n ≥ 8 one finds m_Σ^2/M_KK^2 ≃ 2/(n-2) ≲ 1. We therefore conclude that the mass of the volume mode is below, but very close to, the KK scale. We note that by tuning the dimensionality of space-time the ratio can be made smaller, but a hierarchical separation is hard to achieve. This renders the moduli stabilisation physics discussed above vulnerable to corrections coming from higher-dimensional physics.

Inflationary footprint. If (<ref>) is fulfilled, we can describe inflation in terms of a single-field Lagrangian with the scalar potential V(A) ≈ V(σ_0, A) to very high accuracy. We can then determine the observational footprint of the model as follows. The inflationary potential in terms of the canonical variable ϕ, defined in (<ref>), reads
V_inf = 𝒞_1 + 𝒞_2 e^-n/n-1 κϕ + 𝒞_3 (1-e^-κϕ)^n/n-1,
where κ ≡ √(2n-2/2n-1) and the 𝒞_i can be read off from (<ref>) after setting σ = σ_0. Notice that the value of κ plays a pivotal role in the determination of the observables for this class of potentials. For interesting cases one finds κ|_D=8 = √(6/7), κ|_D=10 = 2√(2)/3.

As mentioned above, the single-field regime of this setup can be reached whenever the conditions (<ref>) are imposed. The closer the parameter choice is to saturating the lower bound on the left-hand side of (<ref>), the more robust the mass hierarchy between the volume modulus and the inflaton becomes. Furthermore, the correct normalisation of the scalar perturbations requires that at horizon exit V ∼ 10^-10 in Planck units. Hence, the closer the parameter choice is to saturating the lower bound, the smaller the radius of the compact space σ_0, and consequently the smaller the required values of the parameters f and α. For illustration, Figure <ref> depicts one correctly normalised example with a large mass hierarchy.

As far as CMB observables are concerned, even in D > 4 dimensions one recovers values similar to the well-known ones for Starobinsky-type potentials, namely n_s ≈ 1-2/N_e, r ≈ 9/(κ^2 N_e^2), where N_e denotes the number of e-folds of expansion. Therefore, while these models lie at the centre of the Planck 1-σ region <cit.>, they are essentially indistinguishable among themselves and also from the four-dimensional Starobinsky model with κ=√(2/3).

§ DISCUSSION

In this paper we have explored the relation between an R+α R^n gravitational theory in a D-dimensional space-time and the occurrence of inflation in four dimensions. This work constitutes an obvious extension of the Starobinsky model of inflation.
Using the example of a sphere with a single volume modulus, we have found that the stabilisation and dynamics of the extra-dimensional manifold is closely connected to inflation and that disentangling the two requires judicious choices of the model parameters. This situation is analogous to well-known results in string inflation, where the interplay between inflation and moduli stabilisation has been extensively studied over the last decade.

The stand-out feature of the original four-dimensional Starobinsky proposal, apart from the fact that after 30 years it is nowhere near being excluded by CMB data, is the existence of an infinite plateau at large field values. In our D-dimensional case, demanding the scalar potential in the Einstein frame to have a similar plateau constrains the form of the initial gravitational action to f(R)=R+α R^D/2. Requiring stability of the compact space during inflation further constrains the form of the action, determining the extra degrees of freedom that can be present in the UV limit. More concretely, it excludes stabilisation of the compact space with a two-form field strength, as in Freund-Rubin compactifications. Instead, one may stabilise the volume via p-forms, where the rank p is related to the dimensionality of space-time, p=D/2. This last constraint combined with four-dimensional Lorentz invariance forces us to consider spaces of even dimensionality with D≥8. Once all these conditions can be met, it is possible to tune the microscopic parameters – such as the amount of flux, the D-dimensional cosmological constant, and the strength of the R^n term – to generate viable models of single-field inflation, compatible with the latest observational constraints, that exit into a viable post-inflationary minimum.

§ ACKNOWLEDGMENTS

This work is partially supported by the grants FPA2012-32828 from the MINECO, the ERC Advanced Grant SPLE under contract ERC-2012-ADG-20120216-320421 and the grant SEV-2012-0249 of the “Centro de Excelencia Severo Ochoa" Programme.

§ APPENDIX

§.§ Derivation of the four-dimensional Einstein frame action

Here we perform the series of transformations that take the action from the D-dimensional f(R) frame of (<ref>) to its four-dimensional Einstein frame form, cf. (<ref>). As usual, the Einstein frame is defined as the frame in which the gravitational action takes the Einstein-Hilbert form, the Jordan frame is the one in which the Ricci scalar appears multiplied by a function of a scalar field, and the f(R) frame is the one in which the gravitational part of the action is expressed as a (non-linear) function of the Ricci scalar. In this paper, in a slight abuse of nomenclature, we refer to the Jordan and f(R) frames indiscriminately. Let us first focus on the pure gravity part of the action in order to write it in the Einstein-Hilbert form in D dimensions.

We decompose (<ref>) into S = S_grav + S_matt, where
S_grav = M^D-2/2∫d^DX √(-g)( R + αR^n),
and
S_matt = M^D-2/2∫d^DX √(-g)( - 2 M^2 Λ - g^M_1 N_1...g^M_p N_p F_M_1...M_p F_N_1...N_p).
The D-dimensional cosmological constant Λ is dimensionless and the field strength p-forms have mass dimension one.

Let us introduce an auxiliary field χ with mass dimension 2, and write the action as <cit.>
S = M^D-2/2∫d^DX √(-g)[ f(χ) + ∂ f(χ)/∂χ( R - χ) ].
Note that, at this level, χ is a genuine auxiliary field because its action has no time derivatives. The equation of motion for χ following from this action is ∂^2 f(χ)/∂χ^2 ( R - χ) = 0 ⇒ R = χ, where we used that ∂^2 f(χ)/∂χ^2 = n (n-1) αχ^n-2 ≠ 0.
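This elimination of the auxiliary field is a Legendre-type transformation, and the closed form of Z(A) quoted in the next step can be verified symbolically. A minimal sketch with sympy, for the representative choice n=3 (general n works analogously), is:

```python
# Symbolic check that eliminating the auxiliary field chi from
# f(chi) = chi + alpha chi^n reproduces Z(A) = (n-1) alpha ((A-1)/(n alpha))^(n/(n-1)).
import sympy as sp

chi, alpha = sp.symbols('chi alpha', positive=True)
n = 3
f = chi + alpha * chi ** n
A = sp.diff(f, chi)                  # A = f'(chi) = 1 + n alpha chi^(n-1)
Z = sp.simplify(chi * A - f)         # Legendre-type elimination of chi
Z_claimed = (n - 1) * alpha * ((A - 1) / (n * alpha)) ** sp.Rational(n, n - 1)
print(sp.simplify(Z - Z_claimed))    # prints 0
```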
This implies that the action (<ref>) is trivially equivalent to (<ref>). After defining the dimensionless field A via A ≡ ∂ f(χ)/∂χ = 1 + n αχ^n-1, we can express the action in (<ref>) as
S = M^D-2/2∫d^DX √(-g)( A R - Z(A) ),
where Z(A) ≡ χ(A) A - f(χ(A)) = (n-1) αχ^n(A) = (n-1) α( A - 1/n α)^n/n-1.
It is useful to write all dimensionful parameters in terms of the D-dimensional Planck mass M, so we rescale α → M^2-2n α, Z(A) → M^2 Z(A). From this point onwards α (like Λ) is dimensionless. So far, the total action is thus
S = M^D-2/2∫d^DX √(-g)( A R - M^2 Z(A) - 2 M^2 Λ - g^M_1 N_1… g^M_n N_n F_M_1 … M_n F_N_1… N_n).

In order to transform this to the D-dimensional Einstein frame, we perform a conformal transformation and write the action in terms of the metric g̃ defined by g_M N = Ω^-2 g̃_MN, g^M N = Ω^2 g̃^MN, √(-g_(D)) = Ω^-D√(-g̃_(D)). One can show that under a Weyl rescaling R transforms as <cit.>
R = Ω^2[ R̃ + 2(D-1)g̃^M N∇̃_M∇̃_N lnΩ - (D-1)(D-2)g̃^M N(∂_M lnΩ)(∂_N lnΩ)],
where quantities with a tilde are understood with respect to g̃_MN. In order for g̃ to be the D-dimensional Einstein frame metric, we must have Ω = A^1/D-2. Then the action in the D-dimensional Einstein frame reads
S = M^D-2/2∫d^DX √(-g̃)[R̃ + 2 D-1/D-2 g̃^M N∇̃_M∇̃_N ln A - M^2 A^D/2-D Z(A) - D-1/D-2 g̃^M N∂_M ln A ∂_N ln A - 2 M^2 A^D/2-DΛ - A^2 p-D/D-2 g̃^M_1 N_1…g̃^M_p… N_p F_M_1… M_p F_N_1… N_p].
In what follows we ignore the total derivative ∇̃^2 ln A.

Space-time in our framework is described by a D-dimensional manifold ℳ_D equipped with the metric g̃_M N. This D-dimensional manifold can be factorised into a four-dimensional manifold ℳ_4 and a (D-4)-dimensional manifold ℳ_D-4, ℳ_D = ℳ_4×ℳ_D-4, such that the metric g̃_M N can be written in a block-diagonal form,
d s^2 = g̃_M N d X^M d X^N = g_μν d x^μ d x^ν + σ^2 g_m n d y^m d y^n,
with M,N = {0,…, D-1}; μ,ν = {0,…, 3}, and m,n = {4,…, D-1}. With this notation we choose the metric g_mn to have unit volume, such that the physical volume of the compact space is determined by the dimensionless scalar field σ. Moreover, we choose the compact space to be a sphere. In what follows, we assume that σ=σ(x) and A=A(x), i.e., the scalars have constant profiles in the compact space. We further assume, in line with Freund-Rubin stabilisation, that the p-form fluxes are non-vanishing in the compact space but vanishing in the external dimensions, thereby preserving four-dimensional Lorentz invariance. This translates into
ℒ ⊃ -M^D-2/2 A^2 p-D/D-2 g̃^M_1 N_1…g̃^M_p… N_p F_M_1… M_p F_N_1 … N_p = -M^D-2/2 A^2 p-D/D-2 σ^-2 p g^m_1 n_1… g^m_p… n_p F_m_1… m_p F_n_1… n_p.
Given the block-diagonal form in (<ref>) one may factorise the determinant, √(-g̃_(D)) = σ^D-4√(-g_(4))√(g_(D-4)), where g_(4) and g_(D-4) are the determinants of g_μν and g_mn, respectively. Then the D-dimensional Ricci scalar decomposes as follows,
R̃_(D) = R_(4) + R_(D-4)/σ^2 - 2(D-4)g^μν∇_μ∇_νσ/σ - [ (D-4)^2-(D-4)]g^μν∂_μσ∂_νσ/σ^2,
where R_(4) is the curvature of the four-dimensional metric g_μν, R_(D-4) is the curvature of the (D-4)-dimensional metric g_m n, and ∇_μ is the four-dimensional covariant derivative with respect to g_μν. Under this decomposition the D-dimensional Einstein-Hilbert action becomes
S = M^D-2/2∫d^4 x d^D-4 y √(-g_(4))√(g_(D-4))σ^D-4{ R_(4) + R_(D-4)/σ^2 - 2(D-4) g^μν∇_μ∇_νσ/σ - [ (D-4)^2-(D-4)]g^μν∂_μσ∂_νσ/σ^2}.
One can, at this point, perform the integral over the (D-4)-dimensional internal space. We remember that ∫d^D-4 y √(g_(D-4)) R_(D-4) = χ M^6-D = 2 M^6-D, for the Euler characteristic of the compact sphere.
Moreover, the volume of the compact space is given by 𝒱 = σ^D-4∫d^D-4y√(g_(D-4)) = M^4-Dσ^D-4, where we used that ∫d^D-4y √(g_(D-4)) = M^4-D. Using (<ref>), (<ref>), and (<ref>) we can now explicitly perform the integration. For later convenience we multiply and divide by the vacuum expectation value of the field, σ_0^D-4. This step is necessary to find the relation between the D-dimensional Planck mass and the four-dimensional Planck mass. The action in the four-dimensional Jordan frame then becomes
S = M^2 σ_0^D-4/2∫d^4x √(-g_(4))(σ/σ_0)^D-4{ R_(4) + M^2 σ^-2 + (D-4)(D-5)σ^-2 g^μν∂_μσ∂_νσ - D-1/D-2 g^μν∂_μln A ∂_νln A - M^2 A^D/2-D[ Z(A) + 2 Λ] - M^2 σ^-2p A^2p-D/D-2 f^2}.
We have used partial integration on the ∇^2 σ term in (<ref>) and defined the dimensionless flux constant f via ∫d^D-4y √(g_(D-4)) g^m_1n_1… g^m_p n_p F_m_1… m_p F_n_1… n_p ≡ M^2-D f^2.

A further conformal transformation is necessary to yield the four-dimensional Einstein frame, so we define the new metric ĝ via g_μν = Ω^-2ĝ_μν, g^μν = Ω^2ĝ^μν, √(-g_(4)) = Ω^-4√(-ĝ_(4)). R transforms under this conformal transformation as follows,
R = Ω^2( R̂ + 6ĝ^μν∇̂_μ∇̂_ν lnΩ - 6ĝ^μν∂_μlnΩ∂_νlnΩ).
Imposing that ĝ is the four-dimensional Einstein frame metric fixes Ω = (σ/σ_0)^D-4/2, which implies that the action takes the following form,
S = M^2/2σ_0^D-4∫d^4x √(-ĝ_(4)){ R̂_(4) + M^2 σ_0^D-4σ^2-D + 3(D-4)ĝ^μν∇̂_μ∇̂_ν lnσ - 3/2(D-4)^2 ĝ^μν∂_μlnσ∂_νlnσ + (D-4)(D-5) ĝ^μν∂_μlnσ∂_νlnσ - D-1/D-2ĝ^μν∂_μln A ∂_νln A - M^2 σ^4-D/σ_0^4-D A^D/2-D[ Z(A) + 2 Λ] - M^2 σ^-D A^-D-4/D-2σ_0^D-4 f^2}.
The last term in the first line is a total derivative and can be neglected. Defining the four-dimensional Planck mass in terms of M and 𝒱 as follows, M_p^2 ≡ M^2 σ_0^D-4 = M^2 𝒱_0, allows us to write the action in its most useful form,
S = M_p^2/2∫d^4x √(-ĝ_(4)){ R̂_(4) - 1/2(D-4) (D-2) ĝ^μν∂_μlnσ∂_νlnσ - D-1/D-2ĝ^μν∂_μln A ∂_νln A + M_p^2σ^2-D - M_p^2 σ^4-D A^D/2-D[ (n-1) α( A - 1/n α)^n/n-1 + 2 Λ] - M_p^2 σ^4-2p-D A^2p-D/D-2 f^2},
where we have used (<ref>). This is the result given in Section <ref>, where we set M_p = 1 and omit the hats and indices on g and R for clarity.

Starobinsky:1980te A. A. Starobinsky, “A New Type of Isotropic Cosmological Models Without Singularity,” Phys. Lett. 91B, 99 (1980). doi:10.1016/0370-2693(80)90670-X
Ade:2015lrj P. A. R. Ade et al. [Planck Collaboration], “Planck 2015 results. XX. Constraints on inflation,” Astron. Astrophys. 594, A20 (2016) doi:10.1051/0004-6361/201525898 [arXiv:1502.02114 [astro-ph.CO]].
SantiagoThesis S. Pajòn-Otero, “Starobinsky Inflation in D > 4 Dimensions”, UAM MSc Thesis, August 2016.
Ketov:2017aau S. V. Ketov and H. Nakada, “Inflation from (R+γ R^n-2Λ) Gravity in Higher Dimensions,” arXiv:1701.08239 [hep-th].
Freund:1980xh P. G. O. Freund and M. A. Rubin, “Dynamics of Dimensional Reduction,” Phys. Lett. 97B, 233 (1980). doi:10.1016/0370-2693(80)90590-0
Burgess:2016ygs C. P. Burgess, J. J. H. Enns, P. Hayman and S. P. Patil, “Goldilocks Models of Higher-Dimensional Inflation (including modulus stabilization),” JCAP 1608, no. 08, 045 (2016) doi:10.1088/1475-7516/2016/08/045 [arXiv:1605.03297 [gr-qc]].
Conlon:2005jm J. P. Conlon and F. Quevedo, “Kahler moduli inflation,” JHEP 0601, 146 (2006) doi:10.1088/1126-6708/2006/01/146 [hep-th/0509012].
Cicoli:2008gp M. Cicoli, C. P. Burgess and F. Quevedo, “Fibre Inflation: Observable Gravity Waves from IIB String Compactifications,” JCAP 0903, 013 (2009) doi:10.1088/1475-7516/2009/03/013 [arXiv:0808.0691 [hep-th]].
Cicoli:2011ct M. Cicoli, F. G. Pedro and G.
Tasinato, “Poly-instanton Inflation,” JCAP1112, 022 (2011) doi:10.1088/1475-7516/2011/12/022 [arXiv:1110.6182 [hep-th]]. Broy:2015zbaB. J. Broy, D. Ciupke, F. G. Pedro and A. Westphal, “Starobinsky-Type Inflation from α'-Corrections,” JCAP1601, 001 (2016) doi:10.1088/1475-7516/2016/01/001 [arXiv:1509.00024 [hep-th]]. Burgess:2016owbC. P. Burgess, M. Cicoli, S. de Alwis and F. Quevedo, “Robust Inflation from Fibrous Strings,” JCAP1605, no. 05, 032 (2016) doi:10.1088/1475-7516/2016/05/032 [arXiv:1603.06789 [hep-th]]. Cicoli:2016chbM. Cicoli, D. Ciupke, S. de Alwis and F. Muia, “α' Inflation: moduli stabilisation and observable tensors from higher derivatives,” JHEP1609, 026 (2016) doi:10.1007/JHEP09(2016)026 [arXiv:1607.01395 [hep-th]]. Broy:2014xwaB. J. Broy, F. G. Pedro and A. Westphal, “Disentangling the f(R) - Duality,” JCAP1503, no. 03, 029 (2015) doi:10.1088/1475-7516/2015/03/029 [arXiv:1411.6010 [hep-th]]. Douglas:2006esM. R. Douglas and S. Kachru, “Flux compactification,” Rev. Mod. Phys.79, 733 (2007) doi:10.1103/RevModPhys.79.733 [hep-th/0610102]. Denef:2007pqF. Denef, M. R. Douglas and S. Kachru, “Physics of String Flux Compactifications,” Ann. Rev. Nucl. Part. Sci.57, 119 (2007) doi:10.1146/annurev.nucl.57.090506.123042 [hep-th/0701050]. DeFelice:2010ajA. De Felice and S. Tsujikawa, “f(R) theories,” Living Rev. Rel.13, 3 (2010) doi:10.12942/lrr-2010-3 [arXiv:1002.4928 [gr-qc]]. Fujii:2003paY. Fujii and K. Maeda, “The scalar-tensor theory of gravitation,” Cambridge University Press, 2003.
http://arxiv.org/abs/1702.08311v1
{ "authors": [ "Santiago Pajón Otero", "Francisco G. Pedro", "Clemens Wieck" ], "categories": [ "hep-th", "astro-ph.CO", "hep-ph" ], "primary_category": "hep-th", "published": "20170227150417", "title": "$R+αR^n$ Inflation in higher-dimensional Space-times" }
§ INTRODUCTION

Reinforcement learning (RL) is a powerful learning paradigm for sequential decision making <cit.>. An RL agent interacts with the environment by repeatedly observing the current state, taking an action according to a certain policy, receiving a reward signal and transitioning to a next state. A policy specifies which action to take given the current state. Policy evaluation estimates a value function that predicts the expected cumulative reward the agent would receive by following a fixed policy starting at a certain state. In addition to quantifying long-term values of states, which can be of interest on its own, value functions also provide important information for the agent to optimize its policy. For example, policy-iteration algorithms iterate between policy-evaluation steps and policy-improvement steps, until a (near-)optimal policy is found <cit.>. Therefore, estimating the value function efficiently and accurately is essential in RL.

There has been substantial work on policy evaluation, with temporal-difference (TD) methods being perhaps the most popular. These methods use the Bellman equation to bootstrap the estimation process. Different cost functions are formulated to exploit this idea, leading to different policy evaluation algorithms; see <cit.> for a comprehensive survey. In this paper, we study policy evaluation by minimizing the mean squared projected Bellman error (MSPBE) with linear approximation of the value function. We focus on the batch setting where a fixed, finite dataset is given. This fixed-data setting is not only important in itself <cit.>, but also an important component in other RL methods such as experience replay <cit.>.

The finite-data regime makes it possible to solve policy evaluation more efficiently with recently developed fast optimization methods based on stochastic variance reduction, such as SVRG <cit.> and SAGA <cit.>. For minimizing strongly convex functions with a finite-sum structure, such methods enjoy the same low computational cost per iteration as the classical stochastic gradient method, but also achieve fast, linear convergence rates (i.e., exponential decay of the optimality gap in the objective). However, they cannot be applied directly to minimize the MSPBE, whose objective does not have the finite-sum structure. In this paper, we overcome this obstacle by transforming the empirical MSPBE problem to an equivalent convex-concave saddle-point problem that possesses the desired finite-sum structure.

In the saddle-point problem, we consider the model parameters as the primal variables, which are coupled with the dual variables through a bilinear term. Moreover, without an ℓ_2-regularization on the model parameters, the objective is only strongly concave in the dual variables, but not strongly convex in the primal variables. We propose a primal-dual batch gradient method, as well as two stochastic variance-reduction methods based on SVRG and SAGA, respectively. Surprisingly, we show that when the coupling matrix is full rank, these algorithms achieve linear convergence in both the primal and dual spaces, despite the lack of strong convexity of the objective in the primal variables. Our results also extend to off-policy learning and TD with eligibility traces <cit.>.

We note that <cit.> have extended both SVRG and SAGA to solve convex-concave saddle-point problems with linear-convergence guarantees.
The main differences between our results and theirs are:

* Linear convergence in <cit.> relies on the assumption that the objective is strongly convex in the primal variables and strongly concave in the dual. Our results show, somewhat surprisingly, that only one of them is necessary if the primal-dual coupling is bilinear and the coupling matrix is full rank. In fact, we are not aware of similar previous results even for the primal-dual batch gradient method, which we show in this paper.

* Even if a strongly convex regularization on the primal variables is introduced to the MSPBE objective, the algorithms in <cit.> cannot be applied efficiently. Their algorithms require that the proximal mappings of the strongly convex and concave regularization functions be computed efficiently. In our saddle-point formulation, the strong concavity in the dual variables comes from a quadratic function defined by the feature covariance matrix, which cannot be inverted efficiently and makes the proximal mapping costly to compute. Instead, our algorithms only use its (stochastic) gradients and hence are much more efficient.

We compare various gradient-based algorithms on Random MDP and Mountain Car data sets. The experiments demonstrate the effectiveness of our proposed methods.

§ PRELIMINARIES

We consider a Markov Decision Process (MDP) <cit.> described by (𝒮,𝒜,𝒫_ss'^a,ℛ,γ), where 𝒮 is the set of states, 𝒜 the set of actions, 𝒫_ss'^a the transition probability from state s to state s' after taking action a, ℛ(s,a) the reward received after taking action a in state s, and γ∈ [0,1) a discount factor. The goal of an agent is to find an action-selection policy π, so that the long-term reward under this policy is maximized. For ease of exposition, we assume 𝒮 is finite, but none of our results relies on this assumption.

A key step in many algorithms in RL is to estimate the value function of a given policy π, defined as V^π(s) ≜ 𝔼[ ∑_t=0^∞γ^t ℛ(s_t,a_t) | s_0 = s, π ]. Let V^π denote a vector constructed by stacking the values of V^π(1),…, V^π(|S|) on top of each other. Then V^π is the unique fixed point of the Bellman operator T^π: V^π = T^π V^π ≜ R^π + γ P^π V^π, where R^π is the expected reward vector under policy π, defined elementwise as R^π(s) = 𝔼_π(a|s)ℛ(s,a); and P^π is the transition matrix induced by applying the policy π, defined entrywise as P^π(s,s') = 𝔼_π(a|s)𝒫^a_ss'.

§.§ Mean squared projected Bellman error (MSPBE)

One approach to scale up when the state space 𝒮 is large or infinite is to use a linear approximation for V^π. Formally, we use a feature map ϕ: 𝒮→ℝ^d and approximate the value function by V^π(s) = ϕ(s)^T θ, where θ∈ℝ^d is the model parameter to be estimated. Here, we want to find θ that minimizes the mean squared projected Bellman error, or MSPBE: MSPBE(θ) ≜ 1/2 ‖V^π - Π T^πV^π‖_Ξ^2, where Ξ is a diagonal matrix with diagonal elements being the stationary distribution over 𝒮 induced by the policy π, and Π is the weighted projection matrix onto the linear space spanned by ϕ(1),…, ϕ(|S|), that is, Π = Φ (Φ^T ΞΦ)^-1Φ^T Ξ, where Φ ≜ [ϕ^T(1),…,ϕ^T(|S|)] is the matrix obtained by stacking the feature vectors row by row. Substituting (<ref>) and (<ref>) into (<ref>), we obtain <cit.>
MSPBE(θ) = 1/2 ‖Φ^T Ξ (V^π - T^πV^π)‖^2_(Φ^T ΞΦ)^-1.
We can further rewrite the above expression for MSPBE as a standard weighted least-squares problem:
MSPBE(θ) = 1/2 ‖A θ - b‖_C^-1^2,
with properly defined A, b and C, described as follows.
Suppose the MDP under policy π settles at its stationary distribution and generates an infinite transition sequence {(s_t, a_t,r_t,s_t+1)}_t=1^∞, where s_t is the current state, a_t is the action, r_t is the reward, and s_t+1 is the next state. Then with the definitions ϕ_t ≜ ϕ(s_t) and ϕ_t' ≜ ϕ(s_t+1), we have
A = 𝔼[ ϕ_t (ϕ_t - γϕ_t')^T ], b = 𝔼[ ϕ_t r_t ], C = 𝔼[ϕ_t ϕ_t^T ],
where 𝔼[·] is taken with respect to the stationary distribution. Many TD solutions converge to a minimizer of MSPBE in the limit <cit.>.

§.§ Empirical MSPBE

In practice, the quantities in (<ref>) are often unknown, and we only have access to a finite dataset with n transitions 𝒟 = {(s_t, a_t,r_t,s_t+1)}_t=1^n. By replacing the unknown statistics with their finite-sample estimates, we obtain the Empirical MSPBE, or EM-MSPBE. Specifically, let
A ≜ 1/n∑_t=1^n A_t, b ≜ 1/n∑_t=1^n b_t, C ≜ 1/n∑_t=1^n C_t,
where for t=1,…,n, A_t ≜ ϕ_t(ϕ_t-γϕ'_t)^T, b_t ≜ r_tϕ_t, C_t ≜ ϕ_tϕ_t^T. EM-MSPBE with an optional ℓ_2-regularization is given by:
EM-MSPBE(θ) = 1/2 ‖Aθ - b‖_C^-1^2 + ρ/2 ‖θ‖^2,
where ρ≥ 0 is a regularization factor. Observe that (<ref>) is a (regularized) weighted least-squares problem. Assuming C is invertible, its optimal solution is
θ^⋆ = (A^⊤C^-1A + ρ I)^-1A^⊤C^-1b.
Computing θ^⋆ directly requires O(nd^2) operations to form the matrices A, b and C, and then O(d^3) operations to complete the calculation. This method, known as least-squares temporal difference or LSTD <cit.>, can be very expensive when n and d are large. One can also skip forming the matrices explicitly and compute θ^⋆ using n recursive rank-one updates <cit.>. Since each rank-one update costs O(d^2), the total cost is O(nd^2).

In the sequel, we develop efficient algorithms to minimize EM-MSPBE by using stochastic variance reduction methods, which sample one (ϕ_t, ϕ_t') per update without pre-computing A, b and C. These algorithms not only maintain a low O(d) per-iteration computation cost, but also attain fast linear convergence rates with a log(1/ϵ) dependence on the desired accuracy ϵ.

§ SADDLE-POINT FORMULATION OF EM-MSPBE

Our algorithms (in Section <ref>) are based on the stochastic variance reduction techniques developed for minimizing a finite sum of convex functions, more specifically, SVRG <cit.> and SAGA <cit.>. They deal with problems of the form
min_x∈^d { f(x) ≜ 1/n∑_i=1^n f_i(x) },
where each f_i is convex. We immediately notice that the EM-MSPBE in (<ref>) cannot be put into such a form, even though the matrices A, b and C have the finite-sum structure given in (<ref>). Thus, extending variance reduction techniques to EM-MSPBE minimization is not straightforward. Nevertheless, we will show that minimizing the EM-MSPBE is equivalent to solving a convex-concave saddle-point problem which actually possesses the desired finite-sum structure.
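For reference, the LSTD baseline just described takes only a few lines. The following numpy sketch (random features and rewards are hypothetical stand-ins for a real batch of transitions) forms the empirical A, b, C and solves for θ^⋆ in closed form.

```python
# Sketch of LSTD: build empirical A, b, C from n transitions and solve
# theta* = (A^T C^-1 A + rho I)^-1 A^T C^-1 b  (O(nd^2 + d^3) total).
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma, rho = 1000, 5, 0.95, 0.01

# Hypothetical batch: features of current/next states and rewards.
Phi = rng.normal(size=(n, d))        # rows are phi_t
Phi_next = rng.normal(size=(n, d))   # rows are phi'_t
r = rng.normal(size=n)               # rewards r_t

A_hat = (Phi.T @ (Phi - gamma * Phi_next)) / n   # (1/n) sum phi_t (phi_t - g phi'_t)^T
b_hat = (Phi.T @ r) / n                          # (1/n) sum r_t phi_t
C_hat = (Phi.T @ Phi) / n                        # (1/n) sum phi_t phi_t^T

AtCinv = A_hat.T @ np.linalg.inv(C_hat)
theta_star = np.linalg.solve(AtCinv @ A_hat + rho * np.eye(d), AtCinv @ b_hat)
print(theta_star)
```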
To proceed, we resort to the machinery of conjugate functions <cit.>. For a function f: R^d →R, its conjugate function f^⋆: R^d →R is defined as f^⋆(y) ≜ sup_x (y^T x - f(x)). Note that the conjugate function of 1/2 ‖x‖_C^2 is 1/2 ‖y‖_C^-1^2, i.e., 1/2 ‖y‖_C^-1^2 = max_x (y^T x - 1/2 ‖x‖_C^2). With this relation, we can rewrite the EM-MSPBE in (<ref>) as
max_w ( w^T(b-Aθ) - 1/2 ‖w‖_C^2 ) + ρ/2 ‖θ‖^2,
so that minimizing the EM-MSPBE is equivalent to solving
min_θ∈R^d max_w ∈R^d {ℒ(θ, w) = 1/n∑_t=1^n ℒ_t (θ, w) },
where the Lagrangian, defined as
ℒ(θ, w) ≜ ρ/2 ‖θ‖^2 - w^T Aθ - ( 1/2 ‖w‖_C^2 - w^T b),
may be decomposed using (<ref>), with
ℒ_t(θ, w) ≜ ρ/2 ‖θ‖^2 - w^T A_t θ - (1/2 ‖w‖_C_t^2 - w^T b_t).
Therefore, minimizing the EM-MSPBE is equivalent to solving the saddle-point problem (<ref>), which is convex in the primal variable θ and concave in the dual variable w. Moreover, it has a finite-sum structure similar to (<ref>).

<cit.> and <cit.> independently showed that the GTD2 algorithm <cit.> is indeed a stochastic gradient method for solving the saddle-point problem (<ref>), although they obtained the saddle-point formulation with different derivations. More recently, <cit.> used the conjugate function approach to obtain saddle-point formulations for a more general class of problems and derived primal-dual stochastic gradient algorithms for solving them. However, these algorithms have sublinear convergence rates, which leaves much room for improvement when applied to problems with finite datasets. Recently, <cit.> developed SVRG methods for a general finite-sum composition optimization that achieve a linear convergence rate. Different from our methods, their stochastic gradients are biased and they have worse dependency on the condition numbers (κ^3 and κ^4).

The fast linear convergence of our algorithms presented in Sections <ref> and <ref> requires the following assumption: A has full rank, C is strictly positive definite, and the feature vector ϕ_t is uniformly bounded. Under mild regularity conditions <cit.>, we have that A and C converge in probability to A and C defined in (<ref>), respectively. Thus, if the true statistics A is non-singular and C is positive definite, and we have enough training samples, these assumptions are usually satisfied. They have been widely used in previous works on gradient-based algorithms <cit.>.

A direct consequence of Assumption <ref> is that θ^⋆ in (<ref>) is the unique minimizer of the EM-MSPBE in (<ref>), even without any strongly convex regularization on θ (i.e., even if ρ=0). However, if ρ=0, then the Lagrangian ℒ(θ,w) is only strongly concave in w, but not strongly convex in θ. In this case, we will show that non-singularity of the coupling matrix A can “pass” an implicit strong convexity on to θ, which is exploited by our algorithms to obtain linear convergence in both the primal and dual spaces.

§ A PRIMAL-DUAL BATCH GRADIENT METHOD

Before diving into the stochastic variance reduction algorithms, we first present Algorithm <ref>, which is a primal-dual batch gradient (PDBG) algorithm for solving the saddle-point problem (<ref>). In Step 2, the vector B(θ,w) is obtained by stacking the primal and negative dual gradients:
B(θ,w) ≜ [∇_θ L(θ,w); -∇_w L(θ,w) ] = [ ρθ - A^T w; Aθ - b + C w ].
Some notation is needed in order to characterize the convergence rate of Algorithm <ref>.
Some notation is needed in order to characterize the convergence rate of Algorithm <ref>. For any symmetric and positive definite matrix S, let λ_max(S) and λ_min(S) denote its maximum and minimum eigenvalues respectively, and define its condition number to be κ(S) ≜ λ_max(S)/λ_min(S). We also define L_ρ and μ_ρ for any ρ≥0: L_ρ ≜ λ_max(ρ I + A^T C^-1A), μ_ρ ≜ λ_min(ρ I + A^T C^-1A). By Assumption <ref>, we have L_ρ ≥ μ_ρ > 0. The following theorem is proved in Appendix <ref>.

Suppose Assumption <ref> holds and let (θ_⋆, w_⋆) be the (unique) solution of (<ref>). If the step sizes are chosen as σ_θ = 1/(9 L_ρκ(C)) and σ_w = 8/(9λ_max(C)), then the number of iterations of Algorithm <ref> to achieve ‖θ-θ_⋆‖^2 + ‖w - w_⋆‖^2 ≤ ϵ^2 is upper bounded by O( κ(ρ I + A^T C^-1A) · κ(C) · log(1/ϵ) ).

We assigned specific values to the step sizes σ_θ and σ_w for clarity. In general, we can use similar step sizes while keeping their ratio roughly constant as σ_w/σ_θ ≈ 8 L_ρ/λ_min(C); see Appendices <ref> and <ref> for more details. In practice, one can use a parameter search on a small subset of data to find reasonable step sizes. It is an interesting open problem how to automatically select and adjust step sizes.

Note that the linear rate is determined by two parts: (i) the strongly convex regularization parameter ρ, and (ii) the positive definiteness of A^T C^-1A. The second part can be interpreted as transferring the strong concavity in the dual variables via the full-rank bilinear coupling matrix A. For this reason, even if the saddle-point problem (<ref>) has only strong concavity in the dual variables (when ρ=0), the algorithm still enjoys a linear convergence rate. Moreover, even if ρ>0, it would be inefficient to solve problem (<ref>) using primal-dual algorithms based on proximal mappings of the strongly convex and concave terms <cit.>. The reason is that, in (<ref>), the strong concavity of the Lagrangian with respect to the dual lies in the quadratic function (1/2)‖w‖_C^2, whose proximal mapping cannot be computed efficiently. In contrast, the PDBG algorithm only needs its gradients.

If we pre-compute and store A, b and C, which costs O(nd^2) operations, then computing the gradient operator B(θ,w) in (<ref>) during each iteration of PDBG costs O(d^2) operations. Alternatively, if we do not want to store these d×d matrices (especially if d is large), then we can compute B(θ,w) as finite sums on the fly. More specifically, B(θ,w) = 1/n∑_t=1^n B_t(θ, w), where for each t=1,…,n, B_t(θ,w) = [ ρθ - A_t^T w; A_tθ - b_t + C_t w ]. Since A_t, b_t and C_t are all rank-one matrices, as given in (<ref>), computing each B_t(θ,w) only requires O(d) operations. Therefore, computing B(θ,w) costs O(nd) operations as it averages B_t(θ,w) over n samples.

§ STOCHASTIC VARIANCE REDUCTION METHODS

If we replace B(θ,w) in Algorithm <ref> (line 2) by the stochastic gradient B_t(θ,w) in (<ref>), then we recover the GTD2 algorithm of <cit.>, applied to a fixed dataset, possibly with multiple passes. It has a low per-iteration cost but a slow, sublinear convergence rate. In this section, we provide two stochastic variance reduction methods and show that they achieve fast linear convergence.

§.§ SVRG for policy evaluation

Algorithm <ref> is adapted from the stochastic variance reduced gradient (SVRG) method <cit.>. It uses two layers of loops and maintains two sets of parameters (θ̃,w̃) and (θ,w). In the outer loop, the algorithm computes a full gradient B(θ̃,w̃) using (θ̃, w̃), which takes O(nd) operations.
Afterwards, the algorithm executes the inner loop, which randomly samples an index t_j and updates (θ,w) using the variance-reduced stochastic gradient: B_t_j(θ,w,θ̃,w̃) = B_t_j(θ,w) + B(θ̃,w̃) - B_t_j(θ̃,w̃). Here, B_t_j(θ,w) contains the stochastic gradients at (θ,w) computed using the random sample with index t_j, and B(θ̃,w̃) - B_t_j(θ̃,w̃) is a term used to reduce the variance in B_t_j(θ,w) while keeping B_t_j(θ,w,θ̃,w̃) an unbiased estimate of B(θ,w). Since B(θ̃,w̃) is computed once during each iteration of the outer loop with cost O(nd) (as explained at the end of Section <ref>), and each of the N iterations of the inner loop costs O(d) operations, the total computational cost of each outer loop is O(nd+Nd). We will present the overall complexity analysis of Algorithm <ref> in Section <ref>.

§.§ SAGA for policy evaluation

The second stochastic variance reduction method for policy evaluation is adapted from SAGA <cit.>; see Algorithm <ref>. It uses a single loop, and maintains a single set of parameters (θ,w). Algorithm <ref> starts by first computing the component gradients g_t = B_t(θ,w) at the initial point, and also forms their average B = (1/n)∑_t=1^n g_t. At each iteration, the algorithm randomly picks an index t_m ∈ {1,…,n} and computes the stochastic gradient h_t_m = B_t_m(θ,w). Then, it updates (θ,w) using the variance-reduced stochastic gradient B + h_t_m - g_t_m, where g_t_m is the previously computed stochastic gradient using the t_m-th sample (associated with certain past values of θ and w). Afterwards, it updates the batch gradient estimate B as B + (1/n)(h_t_m - g_t_m) and replaces g_t_m with h_t_m.

As Algorithm <ref> proceeds, different vectors g_t are computed using different values of θ and w (depending on when the index t was sampled). So in general we need to store all vectors g_t, for t=1,…,n, to facilitate the individual updates, which would cost an additional O(nd) storage. However, by exploiting the rank-one structure in (<ref>), we only need to store three scalars (ϕ_t-γϕ_t')^Tθ, (ϕ_t-γϕ_t')^T w, and ϕ_t^T w, and form g_t_m on the fly using O(d) computation. Overall, each iteration of SAGA costs O(d) operations.
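The updates above can be sketched as follows, exploiting the rank-one structure of A_t, b_t and C_t so that each per-sample gradient costs O(d). Only the SVRG epoch is shown in full; this is our illustrative reading of Algorithm <ref>, with all function and variable names our own, not a verbatim transcription.

```python
import numpy as np

def B_t(theta, w, phi_t, phi_next_t, r_t, gamma, rho):
    """Per-sample stacked gradient B_t(theta, w) in O(d) time."""
    u = phi_t - gamma * phi_next_t                 # phi_t - gamma * phi'_t
    g_theta = rho * theta - u * (phi_t @ w)        # rho*theta - A_t^T w
    g_w = phi_t * ((u @ theta) - r_t + (phi_t @ w))  # A_t theta - b_t + C_t w
    return g_theta, g_w

def svrg_epoch(theta, w, data, gamma, rho, sig_th, sig_w, N, rng):
    """One outer iteration of SVRG for policy evaluation (sketch)."""
    phi, phi_next, r = data                        # (n,d), (n,d), (n,)
    n = len(r)
    th_tl, w_tl = theta.copy(), w.copy()           # snapshot (theta~, w~)
    G_th, G_w = np.zeros_like(theta), np.zeros_like(w)
    for t in range(n):                             # full gradient, O(nd)
        gt, gw = B_t(th_tl, w_tl, phi[t], phi_next[t], r[t], gamma, rho)
        G_th += gt / n
        G_w += gw / n
    for _ in range(N):                             # inner loop, O(d) per step
        t = rng.integers(n)
        gt, gw = B_t(theta, w, phi[t], phi_next[t], r[t], gamma, rho)
        st, sw = B_t(th_tl, w_tl, phi[t], phi_next[t], r[t], gamma, rho)
        theta -= sig_th * (gt - st + G_th)         # variance-reduced primal step
        w -= sig_w * (gw - sw + G_w)               # variance-reduced dual step
    return theta, w
```

SAGA replaces the snapshot terms (st, sw) with the stored past gradients g_t_m and maintains the running average B incrementally, as described above.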
§.§ Theoretical analyses of SVRG and SAGA

In order to study the convergence properties of SVRG and SAGA for policy evaluation, we introduce a smoothness parameter L_G based on the stochastic gradients B_t(θ,w). Let β = σ_w/σ_θ be the ratio between the dual and primal step sizes, and define a pair of weighted Euclidean norms Ω(θ,w) ≜ (‖θ‖^2 + β^-1‖w‖^2)^1/2, Ω^*(θ,w) ≜ (‖θ‖^2 + β‖w‖^2)^1/2. Note that Ω(·,·) upper bounds the error in optimizing θ: Ω(θ-θ_⋆, w-w_⋆) ≥ ‖θ-θ_⋆‖. Therefore, any bound on Ω(θ-θ_⋆, w-w_⋆) applies automatically to ‖θ-θ_⋆‖. Next, we define the parameter L_G through its square: L_G^2 ≜ sup_θ_1,w_1,θ_2,w_2 (1/n)∑_t=1^n Ω^*(B_t(θ_1,w_1)-B_t(θ_2,w_2))^2 / Ω(θ_1-θ_2, w_1-w_2)^2. This definition is similar to the smoothness constant L̅ used in <cit.>, except that we used the step-size ratio β rather than the strong convexity and concavity parameters of the Lagrangian to define Ω and Ω^*.[Since our saddle-point problem is not necessarily strongly convex in θ (when ρ=0), we could not define Ω and Ω^* in the same way as <cit.>.] Substituting the definition of B_t(θ,w) in (<ref>), we have L_G^2 = ‖(1/n)∑_t=1^n G_t^T G_t‖, G_t ≜ [ ρ I, -√(β) A_t^T; √(β) A_t, β C_t ].

With the above definitions, we characterize the convergence of Ω(θ_m-θ_⋆, w_m-w_⋆), where (θ_⋆, w_⋆) is the solution of (<ref>), and (θ_m,w_m) is the output of the algorithms after the m-th iteration. For SVRG, it is the m-th outer iteration in Algorithm <ref>. The following two theorems are proved in Appendices <ref> and <ref>, respectively.

Suppose Assumption <ref> holds. If we choose σ_θ = μ_ρ/(48κ(C) L_G^2), σ_w = (8L_ρ/λ_min(C))σ_θ, N = 51 κ^2(C) L_G^2/μ^2_ρ, where L_ρ and μ_ρ are defined in (<ref>) and (<ref>), then 𝔼[Ω(θ_m-θ_⋆, w_m-w_⋆)^2] ≤ (4/5)^m Ω(θ_0-θ_⋆, w_0-w_⋆)^2. The overall computational cost for reaching 𝔼[Ω(θ_m-θ_⋆, w_m-w_⋆)] ≤ ϵ is upper bounded by O( (n + κ(C) L_G^2/λ_min^2(ρ I + A^TC^-1A)) d log(1/ϵ) ).

Suppose Assumption <ref> holds. If we choose σ_θ = μ_ρ/(3(8κ^2(C)L_G^2 + n μ_ρ^2)) and σ_w = (8L_ρ/λ_min(C))σ_θ in Algorithm <ref>, then 𝔼[Ω(θ_m-θ_⋆, w_m-w_⋆)^2] ≤ 2(1-τ)^m Ω(θ_0-θ_⋆, w_0-w_⋆)^2, where the contraction factor τ (not to be confused with the regularization parameter ρ) satisfies τ ≥ μ_ρ^2/(9(8κ^2(C)L_G^2 + nμ_ρ^2)). The total cost to achieve 𝔼[Ω(θ_m-θ_⋆, w_m-w_⋆)] ≤ ϵ has the same bound as in (<ref>).

Similar to our PDBG results in (<ref>), both the SVRG and SAGA algorithms for policy evaluation enjoy linear convergence even if there is no strong convexity in the saddle-point problem (<ref>) (i.e., when ρ = 0). This is mainly due to the positive definiteness of A^T C^-1A when C is positive definite and A is full-rank. In contrast, the linear convergence of SVRG and SAGA in <cit.> requires the Lagrangian to be both strongly convex in θ and strongly concave in w. Moreover, in the policy evaluation problem, the strong concavity with respect to the dual variable w comes from the weighted quadratic norm (1/2)‖w‖_C^2, which does not admit an efficient proximal mapping as required by the proximal versions of SVRG and SAGA in <cit.>. Our algorithms only require computing the stochastic gradients of this function, which is easy to do due to its finite-sum structure.

<cit.> also proposed accelerated variants of SVRG and SAGA using the "catalyst" framework of <cit.>. Such extensions can be done similarly for the three algorithms presented in this paper, and we omit the details due to space limits.

§ COMPARISON OF DIFFERENT ALGORITHMS

This section compares the computational complexities of several representative policy-evaluation algorithms that minimize the EM-MSPBE, as summarized in Table <ref>. The upper part of the table lists algorithms whose complexity is linear in the feature dimension d, including the two new algorithms presented in the previous section. We can also apply GTD2 to a finite dataset with samples drawn uniformly at random with replacement. It costs O(d) per iteration, but has a sublinear convergence rate in ϵ. In practice, one may choose ϵ = Ω(1/n) for generalization reasons (see, e.g., <cit.>), leading to an O(κ'nd) overall complexity for GTD2, where κ' is a condition number related to the algorithm. However, as verified by our experiments, the bounds in the table show that our SVRG/SAGA-based algorithms are much faster, as their effective condition numbers vanish when n becomes large. TDC has a similar complexity to GTD2.

In the table, we list two different implementations of PDBG. PDBG-(I) computes the gradients by averaging the stochastic gradients over the entire dataset at each iteration, which costs O(nd) operations; see the discussion at the end of Section <ref>.
PDBG-(II) first pre-computes the matrices A, b and C using O(nd^2) operations, then computes the batch gradient at each iteration with O(d^2) operations. If d is very large (e.g., when d ≫ n), then PDBG-(I) has an advantage over PDBG-(II). The lower part of the table also includes LSTD, which has O(nd^2) complexity if rank-one updates are used.

SVRG and SAGA are more efficient than the other algorithms when either d or n is very large. In particular, they have a lower complexity than LSTD when d > (1 + κ(C)κ_G^2/n) log(1/ϵ). This condition is easy to satisfy when n is very large. On the other hand, the SVRG and SAGA algorithms are more efficient than PDBG-(I) if n is large, say n > κ(C)κ_G^2/(κ(C)κ - 1), where κ and κ_G are described in the caption of Table <ref>.

There are other algorithms whose complexity scales linearly with n and d, including iLSTD <cit.>, TDC <cit.>, fLSTD-SA <cit.>, and the more recent algorithms of <cit.> and <cit.>. However, their convergence is slow: the number of iterations required to reach a desired accuracy ϵ grows as 1/ϵ or worse. The CTD algorithm <cit.> uses a similar idea to SVRG to reduce the variance in TD updates. This algorithm is shown to have a similar linear convergence rate in an online setting where the data stream is generated by a Markov process with finite states and exponential mixing. The method solves for a fixed-point solution by stochastic approximation. As a result, it can be non-convergent in off-policy learning, while our algorithms remain stable (cf. Section <ref>).

§ EXTENSIONS

Objectives other than the MSPBE can be used for policy evaluation. We choose the MSPBE in this paper because it has both theoretical and empirical advantages over other objective functions. Nevertheless, we believe that the saddle-point formulation in Section <ref> and the algorithms in Sections <ref> and <ref> could be generalized to accelerate the optimization of other similar objective functions, such as the mean squared Bellman error (MSBE) and the norm of the expected TD update (NEU) <cit.>. We refer the reader to the survey by <cit.> for a detailed account of these objective functions. In this section, we briefly describe two extensions of the algorithms developed earlier.

§.§ Off-policy learning

In some cases, we may want to estimate the value function of a policy π from a set of data 𝒟 generated by a different "behavior" policy π_b. This is called off-policy learning <cit.>. In the off-policy case, samples are generated from the distribution induced by the behavior policy π_b, not the target policy π. While such a mismatch often causes stochastic-approximation-based methods to diverge <cit.>, our gradient-based algorithms remain convergent with the same (fast) convergence rate.

Consider the RL framework outlined in Section <ref>. For each state-action pair (s_t,a_t) such that π_b(a_t|s_t) > 0, we define the importance ratio ρ_t ≜ π(a_t|s_t)/π_b(a_t|s_t). The EM-MSPBE for off-policy learning has the same expression as in (<ref>) except that A_t, b_t and C_t are modified by the weight factor ρ_t, as listed in Table <ref>; see also <cit.> for a related discussion. Algorithms <ref>–<ref> remain the same for the off-policy case after A_t, b_t and C_t are modified correspondingly; a sketch of the weighted per-sample gradient is given below.
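The following fragment sketches the off-policy per-sample gradient. The exact placement of the weight ρ_t on A_t, b_t and C_t is specified in Table <ref>, which is not reproduced in the text; as a labeled assumption, we weight the TD terms A_t and b_t here and leave C_t unweighted.

```python
def off_policy_B_t(theta, w, phi_t, phi_next_t, r_t, p_pi, p_b, gamma, rho):
    """Off-policy per-sample stacked gradient (sketch).

    p_pi, p_b: pi(a_t|s_t) and pi_b(a_t|s_t), with p_b > 0.
    ASSUMPTION: the importance ratio rho_t scales A_t and b_t only;
    the true weighting convention is defined by Table <ref> in the paper.
    """
    rho_t = p_pi / p_b                       # importance ratio
    u = phi_t - gamma * phi_next_t
    g_theta = rho * theta - rho_t * u * (phi_t @ w)
    g_w = rho_t * phi_t * ((u @ theta) - r_t) + phi_t * (phi_t @ w)
    return g_theta, g_w
```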
§.§ Learning with eligibility traces

Eligibility traces are a useful technique to trade off bias and variance in TD learning <cit.>. When they are used, we can pre-compute z_t in Table <ref> before running our new algorithms; a sketch of this pre-computation follows below. Note that the EM-MSPBE with eligibility traces has the same form as (<ref>), with A_t, b_t and C_t defined differently according to the last row of Table <ref>. At the m-th step of the learning process, the algorithm randomly samples z_t_m, ϕ_t_m, ϕ_t_m' and r_t_m from the fixed dataset and computes the corresponding stochastic gradients, where the index t_m is uniformly distributed over {1,…,n} and the indices are independent for different values of m. Algorithms <ref>–<ref> immediately work for this case, enjoying a similar linear convergence rate and a computational complexity linear in n and d. We need an additional O(nd) operations to pre-compute z_t recursively and an additional O(nd) storage for z_t. However, this does not change the order of the total complexity for SVRG/SAGA.
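A minimal sketch of the trace pre-computation. The exact definition of z_t lives in Table <ref>, which is not reproduced here; the standard accumulating-trace recursion used below is therefore an assumption, chosen for illustration.

```python
import numpy as np

def precompute_traces(phi, lam, gamma):
    """Pre-compute eligibility traces over one trajectory.

    ASSUMPTION: z_t = gamma * lam * z_{t-1} + phi_t (accumulating traces);
    the paper's own definition is in Table <ref>.  Costs O(nd) time and
    O(nd) storage, as noted above.
    """
    z = np.zeros_like(phi)
    z[0] = phi[0]
    for t in range(1, len(phi)):
        z[t] = gamma * lam * z[t - 1] + phi[t]
    return z
```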
§ EXPERIMENTS

In this section, we compare the following algorithms on two benchmark problems: (i) PDBG (Algorithm <ref>); (ii) GTD2 with samples drawn randomly with replacement from a dataset; (iii) TD: the fLSTD-SA algorithm of <cit.>; (iv) SVRG (Algorithm <ref>); and (v) SAGA (Algorithm <ref>). Note that when ρ > 0, the TD solution and the EM-MSPBE minimizer differ, so we do not include TD in that case. For step-size tuning, σ_θ is chosen from {10^-1,10^-2,…,10^-6} × 1/(L_ρκ(C)) and σ_w is chosen from {1,10^-1,10^-2} × 1/λ_max(C). We only report the results of each algorithm with the best-tuned step sizes; for SVRG we choose N=2n.

In the first task, we consider a randomly generated MDP with 400 states and 10 actions <cit.>. The transition probabilities are defined as P(s'|a,s) ∝ p_ss'^a + 10^-5, where p_ss'^a ∼ U[0,1]. The data-generating policy and start distribution were generated in a similar way. Each state is represented by a 201-dimensional feature vector, where 200 of the features were sampled from a uniform distribution, and the last feature was a constant one. We chose γ=0.95. Fig. <ref> shows the performance of the various algorithms for n=20000. First, notice that the stochastic variance reduction methods converge much faster than the others. In fact, our proposed methods achieve linear convergence. Second, as we increase ρ, the performance of PDBG, SVRG and SAGA improves significantly due to better conditioning, as predicted by our theoretical results.

Next, we test these algorithms on Mountain Car <cit.>. To collect the dataset, we first ran Sarsa with d=300 CMAC features to obtain a good policy. Then, we ran this policy to collect the trajectories that comprise the dataset. Figs. <ref> and <ref> show that our proposed stochastic variance reduction methods dominate the other first-order methods. Moreover, with better conditioning (through a larger ρ), PDBG, SVRG and SAGA achieve a faster convergence rate. Finally, as we increase the sample size n, SVRG and SAGA converge faster. This simulation verifies our theoretical finding in Table <ref> that SVRG/SAGA need fewer epochs for large n.

§ CONCLUSIONS

In this paper, we reformulated the EM-MSPBE minimization problem in policy evaluation as an empirical saddle-point problem, and developed and analyzed a batch gradient method and two first-order stochastic variance reduction methods to solve the problem. An important result we obtained is that even when the reformulated saddle-point problem lacks strong convexity in the primal variables and has only strong concavity in the dual variables, the proposed algorithms are still able to achieve a linear convergence rate. We are not aware of any similar results for primal-dual batch gradient methods or stochastic variance reduction methods. Furthermore, we showed that when both the feature dimension d and the number of samples n are large, the developed stochastic variance reduction methods are more efficient than any other gradient-based methods which are convergent in off-policy settings.

This work leads to several interesting directions for future research. First, we believe it is important to extend the stochastic variance reduction methods to nonlinear approximation paradigms <cit.>, especially with deep neural networks. Moreover, it remains an important open problem how to apply stochastic variance reduction techniques to policy optimization.
http://arxiv.org/abs/1702.07944v2
{ "authors": [ "Simon S. Du", "Jianshu Chen", "Lihong Li", "Lin Xiao", "Dengyong Zhou" ], "categories": [ "cs.LG", "cs.AI", "cs.SY", "math.OC", "stat.ML" ], "primary_category": "cs.LG", "published": "20170225201555", "title": "Stochastic Variance Reduction Methods for Policy Evaluation" }
Institute for Physical Problems, Baku State University, Az–1148 Baku, Azerbaijan
Department of Physics, Doǧuş University, Acibadem-Kadiköy, 34722 Istanbul, Turkey
Department of Physics, Kocaeli University, 41380 Izmit, Turkey

The charged axial-vector J^P=1^+ tetraquarks Z_q=[cq][b̅q̅] and Z_s=[cs][b̅s̅] with open charm-bottom contents are studied in the diquark-antidiquark model. The masses and meson-current couplings of these states are calculated by employing the QCD two-point sum rule approach, where the quark, gluon and mixed condensates up to dimension eight are taken into account. These parameters of the tetraquark states Z_q and Z_s are used to analyze the vertices Z_qB_cρ and Z_sB_cϕ and to determine the strong couplings g_Z_qB_cρ and g_Z_sB_cϕ. For these purposes, the QCD light-cone sum rule method and its soft-meson approximation are utilized. The couplings g_Z_qB_cρ and g_Z_sB_cϕ extracted from this analysis are applied to evaluate the widths of the strong decays Z_q → B_cρ and Z_s → B_cϕ, which are essential results of the present investigation. Our predictions for the masses of the Z_q and Z_s states are confronted with similar results available in the literature.

Open charm-bottom axial-vector tetraquarks and their properties
H. Sundu
December 30, 2023
===============================================================

§ INTRODUCTION

Charmonium-like states, discovered during the last years mainly in exclusive B-meson decays as resonances in the relevant mass distributions, have become interesting objects for both experimental and theoretical studies in high energy physics. Conventional hadrons, composed of two or three quarks and investigated in rather great detail, constitute the main part of the known particles. At the same time, the theory of the strong interactions, Quantum Chromodynamics, does not contain principles excluding the existence of multiquark states. The tetraquark and pentaquark states, composed of four and five valence quarks, respectively, and hybrids built of quarks and gluons are among the most promising candidates to occupy the vacant shelves in multiquark spectroscopy. Due to the joint efforts of experimentalists and theorists, considerable progress in understanding the quark-gluon structure of the multiquark (exotic) states and explaining their properties was achieved, but the remaining questions are more numerous than the answered ones (for the latest reviews, see Refs. <cit.>).

The main source of problems, which complicates the study of the charmonium-like tetraquarks, is the existence of the conventional charmonium states in the energy ranges of the explored decay processes. Charmonia generate difficulties in the interpretation of experimental results, because the pure cc̅ states may emerge as resonances in the mass distributions of the processes, or generate background effects due to states dynamically connected with cc̅ levels. Only after eliminating the effects of the charmonium states in the formation of the experimental data can observed resonances be considered as real exotic particles. The well-known X(3872) state is the best sample to illustrate the existing problems. It was discovered as a very narrow resonance in the B meson decay B→ KX → KJ/ψρ→ KJ/ψπ^+π^- by the Belle Collaboration <cit.>, and was later confirmed in the CDF, D0 and BaBar experiments (see Refs. <cit.>). Its other production mechanisms, running through the decay chains B→ KX → KJ/ψω→ KJ/ψπ^+π^-π^0, B→ KX → KJ/ψγ and B→ KX → Kψ(2S)γ, were also experimentally measured and comprehensively studied <cit.>.
The gathered information poses severe restrictions on theoretical models claiming to describe the behavior of the X(3872) state. Attempts were made to explain the collected data by treating X(3872) as the excited conventional charmonium χ_c1(2^3P_1) <cit.>, or as a state formed due to dynamical coupled-channel effects <cit.>. It was considered in the context of four-quark compounds, both as the DD̅^⋆ molecule or its admixtures with charmonium states <cit.>, and as a diquark-antidiquark state <cit.>.

But the existence of tetraquarks that do not contain c̅c or b̅b pairs is also possible, because the fundamental laws of QCD do not forbid the production of such resonances in hadronic processes. These particles may appear in exclusive reactions as open charm (i.e., as states containing c or c̅ quarks) and open bottom resonances. The D_s0^⋆(2317) and D_s1(2460) mesons, discovered by the BaBar and CLEO collaborations <cit.>, are now being considered as candidates for open charm tetraquark states. The X(5568) resonance remains a unique candidate for an open bottom tetraquark, which is also a particle containing four different quarks. Unfortunately, the experimental situation formed around X(5568) remains unclear. Indeed, the evidence for X(5568) was first announced by the D0 Collaboration in Ref. <cit.>. Later it was seen again by D0 in the B_s^0 meson's semileptonic decays <cit.>. Nevertheless, the LHCb and CMS collaborations could not see the same resonance in the analysis of their experimental data <cit.>. Theoretical investigations aiming to explain the nature of X(5568) and calculate its parameters also lead to contradictory conclusions. Predictions obtained in some of these works are in nice agreement with the results of the D0 Collaboration, while in others even the existence of the X(5568) state is in doubt. Detailed discussions of these and related questions of X(5568) physics can be found in the original works (see Ref. <cit.> and references therein).

The open charm-bottom tetraquarks belong to another type of exotic states. They have already attracted the interest of physicists, even though until now they have not been observed experimentally. The original investigations of these particles started more than two decades ago, and, therefore, considerable theoretical information on their expected properties is available in the literature. For example, open charm-bottom type tetraquarks with the contents {Qq}{Q^'q}, {Qs}{Q^'s} and molecule structures were considered in Refs. <cit.> and <cit.>, respectively. In these papers the masses of these hypothetical states were calculated in the context of the QCD two-point sum rule approach, using in the operator product expansion (OPE) operators up to dimension six. In the framework of the diquark-antidiquark model the open charm-bottom states were analyzed in Ref. <cit.>. In order to extract the masses of these states, the authors again utilized the QCD sum rule method and interpolating currents of different color structure. Other aspects of these tetraquark systems can be found in Refs. <cit.>.

In a previous article <cit.> we explored the charged scalar J^P=0^+ tetraquark states Z_q=[cq][b̅q̅] and Z_s=[cs][b̅s̅] in the context of the diquark-antidiquark model, and calculated their masses and the widths of some of their decay channels.
In the present work we extend our investigations by including in the analysis the axial-vector J^P=1^+ Z_q=[cq][b̅q̅] and Z_s=[cs][b̅s̅] open charm-bottom tetraquarks, and their kinematically allowed decay modes. We start from the calculation of their masses and meson-current couplings. For these purposes, we employ the QCD two-point sum rule method, which was invented to calculate the parameters of conventional hadrons <cit.>, but was soon applied to the analysis of exotic states as well (see Refs. <cit.>). The parameters of the open charm-bottom tetraquarks obtained within this method are used to explore the strong vertices Z_qB_cρ and Z_sB_cϕ, and to calculate the corresponding couplings g_Z_qB_cρ and g_Z_sB_cϕ. These couplings are required to evaluate the widths of the Z_q→ B_cρ and Z_s→ B_cϕ decays. To this end, we apply the QCD light-cone sum rule method and the soft-meson approximation proposed in Refs. <cit.>. For the analysis of the strong vertices of tetraquarks the method was examined for the first time in Ref. <cit.>, and afterwards successfully used to investigate the decay channels of some tetraquark states (see Refs. <cit.>).

The present work is organized in the following manner. In Sec. <ref> we calculate the masses and meson-current couplings of the axial-vector tetraquarks with open charm-bottom contents. Section <ref> is devoted to the computation of the strong couplings g_Z_qB_cρ and g_Z_sB_cϕ. In this section we also calculate the widths of the decays Z_q→ B_cρ and Z_s→ B_cϕ. In Sec. <ref> we examine our results as part of general tetraquark physics and compare them with the predictions of Ref. <cit.>, where the masses of the axial-vector open charm-bottom tetraquarks were found. It also contains our concluding remarks.

§ MASSES AND MESON-CURRENT COUPLINGS

In order to find the masses and meson-current couplings of the diquark-antidiquark type axial-vector states Z_q and Z_s, we use two-point QCD sum rules. Below, the explicit expressions for the Z_q state are written down. Their generalization to embrace the Z_s tetraquark is straightforward.

The two-point sum rule can be extracted from analysis of the correlation function Π_μν(p)=i∫ d^4x e^ipx⟨ 0|𝒯{J_μ(x)J_ν^†(0)}|0⟩, where J_μ is the interpolating current of the Z_q state. The scalar and axial-vector open charm-bottom diquark-antidiquark states can be modeled using different types of interpolating currents <cit.>. Thus, the interpolating currents can be either symmetric or antisymmetric in the color indices. In our previous work we chose the symmetric interpolating current to find the masses and decay widths of the scalar open charm-bottom tetraquarks <cit.>. In the present work, to consider the axial-vector tetraquark states Z_q and Z_s, we again use interpolating currents that are symmetric in the color indices. Such an axial-vector current has the following form J_μ=q_a^TCγ_5 c_b( q_aγ_μ C b_b^T+q_bγ_μ C b_a^T), and is symmetric under exchange of the color indices a↔ b. Here by C we denote the charge conjugation matrix.

To derive the QCD sum rules for the mass and meson-current coupling we follow the standard prescriptions of the sum rule method and express the correlation function Π_μν(p) in terms of the physical parameters of the Z_q state, which results in Π_μν^Phys(p). On the other side, the same function should be obtained in terms of the quark-gluon degrees of freedom, Π_μν^QCD(p). We start from the function Π_μν^Phys(p) and compute it by assuming that the tetraquarks under consideration are the ground states in the relevant hadronic channels.
After saturating the correlation function with a complete set of Z_q states and performing in Eq. (<ref>) the integration over x, we get the required expression for Π_μν^Phys(p): Π_μν^Phys(p)=⟨ 0|J_μ|Z_q(p)⟩⟨ Z_q(p)|J_ν^†|0⟩/(m_Z^2-p^2)+..., where m_Z is the mass of the Z_q state, and the dots indicate contributions coming from higher resonances and continuum states. We introduce the meson-current coupling f_Z by means of the equality ⟨ 0|J_μ|Z_q(p)⟩ =f_Z m_Z ε_μ, where ε_μ is the polarization vector of the axial-vector tetraquark. In terms of m_Z and f_Z the correlation function takes the simple form Π_μν^Phys(p)=m_Z^2f_Z^2/(m_Z^2-p^2) ( -g_μν+p_μp_ν/m_Z^2) +… Having applied the Borel transformation to the function Π_μν^Phys(p) we get ℬ_p^2Π_μν^Phys(p^2)=m_Z^2f_Z^2e^-m_Z^2/M^2( -g_μν+ p_μp_ν/m_Z^2) +…

In order to obtain the function Π_μν^QCD(p) we substitute the interpolating current given by Eq. (<ref>) into Eq. (<ref>), and employ the light and heavy quark propagators in the calculations. For Π_μν^QCD(p), as a result, we get: Π_μν^QCD(p)=i∫ d^4xe^ipx{Tr[ γ_μS_b^b^'b(-x)γ_νS_q^a^'a(-x)] Tr[ γ_5S_q^aa^'(x)γ_5S_c^bb^'(x)] +Tr[ γ_μS_b^a^'b(-x)γ_νS_q^b^'a(-x)] Tr[ γ_5S_q^aa^'(x)γ_5S_c^bb^'(x)] +Tr[ γ_μS_b^b^'a(-x)γ_νS_q^a^'b(-x)] Tr[ γ_5S_q^aa^'(x)γ_5S_c^bb^'(x)] +Tr[ γ_μS_b^a^'a(-x)γ_νS_q^b^'b(-x)] Tr[ γ_5S_q^aa^'(x)γ_5S_c^bb^'(x)] }, where S_q(b)^ab(x)=CS_q(b)^Tab(x)C, with S_q(x) and S_b(x) being the q- and b-quark propagators, respectively.

We proceed by including in the analysis the well-known expressions for the light and heavy quark propagators. For our aims it is convenient to use the x-space expression of the light quark propagator, S_q^ab(x)=iδ_ab x̸/(2π^2x^4)-δ_ab m_q/(4π^2x^2)-δ_ab⟨qq⟩/12 +iδ_ab x̸ m_q⟨qq⟩/48 -δ_ab x^2/192 ⟨qg_sσ Gq⟩ +iδ_ab x^2 x̸ m_q/1152 ⟨qg_sσ Gq⟩ -ig_sG_ab^αβ/(32π^2x^2)[x̸σ_αβ+σ_αβx̸] -iδ_ab x^2 x̸ g_s^2⟨qq⟩^2/7776 -δ_ab x^4⟨qq⟩⟨ g_s^2G^2⟩/27648+… For the heavy Q=b, c quarks we utilize the propagator S_Q^ab(x) given in the momentum space in Ref. <cit.>: S_Q^ab(x)=i∫d^4k/(2π)^4 e^-ikx{δ_ab(k̸+m_Q)/(k^2-m_Q^2) -g_sG_ab^αβ/4 [σ_αβ(k̸+m_Q)+(k̸+m_Q)σ_αβ]/(k^2-m_Q^2)^2 +g_s^2G^2/12 δ_ab m_Q (k^2+m_Q k̸)/(k^2-m_Q^2)^4+g_s^3G^3/48 δ_ab(k̸+m_Q)/(k^2-m_Q^2)^6 ×[k̸(k^2-3m_Q^2)+2m_Q(2k^2-m_Q^2)](k̸+m_Q)+… }.

In the expressions above G_ab^αβ=G_A^αβt_ab^A, G^2=G_αβ^AG_αβ^A, G^3=f^ABCG_μν^AG_νδ^BG_δμ^C, where a, b=1,2,3 are color indices and A,B,C=1,2…8. Here t^A=λ^A/2, where λ^A are the Gell-Mann matrices, and the gluon field strength tensor is fixed at x=0, i.e., G_αβ^A≡ G_αβ^A(0). The QCD sum rules can be derived after fixing the Lorentz structures in both the physical and theoretical expressions of the correlation function and equating the corresponding invariant functions. In the case of axial-vector particles the Lorentz structures in these expressions are the ones ∼ g_μν and ∼ p_μp_ν. Because the structures ∼ p_μp_ν are contaminated by the scalar states with the same quark contents, we choose ∼ g_μν and the invariant function Π^QCD(p^2) corresponding to this structure. Then on the theoretical side of the sum rule there is only one invariant function Π^QCD(p^2), which can be represented as the dispersion integral Π^QCD(p^2)=∫_ℳ^2^∞ρ^QCD(s)/(s-p^2) ds+..., where the lower limit of the integral ℳ in the case under consideration is equal to ℳ=m_b+m_c. When considering the Z_s state it should be replaced by ℳ=m_b+m_c+2m_s. In Eq. (<ref>), ρ^QCD(s) is the spectral density, calculated as the imaginary part of the correlation function.
It is an important component of the sum rule calculations. Because the technical tools necessary for the derivation of ρ^QCD(s) in the case of tetraquark states are well known and explained in a clear form in Refs. <cit.>, here we avoid providing the details of the relevant manipulations, and also refrain from presenting explicit expressions for ρ^QCD(s). We want to emphasize only that the spectral density is computed by taking into account vacuum condensates up to dimension eight, and includes the effects of the quark ⟨qq⟩, gluon ⟨α_sG^2/π⟩, ⟨ g_s^3G^3⟩ and mixed ⟨qg_sσ Gq⟩ condensates, and also terms with their products.

Applying the Borel transformation in the variable p^2 to the invariant function Π^QCD(p^2), equating the obtained expression with ℬ_p^2Π^Phys(p), and subtracting the contribution of the higher resonances and continuum states, one finds the required sum rule. Then the sum rule for the mass of the Z_q state reads m_Z^2=∫_ℳ^2^s_0 ds ρ^QCD(s) s e^-s/M^2/∫_ℳ^2^s_0 ds ρ^QCD(s) e^-s/M^2. The meson-current coupling f_Z can be extracted from the sum rule: f_Z^2m_Z^2e^-m_Z^2/M^2=∫_ℳ^2^s_0 ds ρ^QCD(s) e^-s/M^2. In Eqs. (<ref>) and (<ref>), s_0 denotes the threshold parameter that separates the ground state's contribution from the contributions arising due to higher resonances and continuum.

The sum rules contain parameters that are necessary for the numerical computations; their values are collected in Table <ref>. The quark and gluon condensates are well known; therefore we utilize their standard values. Table <ref> also contains the B_c, ρ and ϕ meson masses (see Ref. <cit.>) and decay constants, which will serve as input parameters when computing the strong couplings and decay widths. It is worth noting that for f_ρ, f_ϕ and f_B_c we use the sum rule estimates from Refs. <cit.>.

The sum rules in Eqs. (<ref>) and (<ref>) also contain two parameters, s_0 and M^2, the choices of which are decisive for extracting reliable estimates of the quantities in question. The continuum threshold s_0 determines a boundary that separates the ground state contribution from the ones due to excited resonances and continuum. It depends on the energy of the first excited state corresponding to the ground state hadron. The continuum threshold s_0 can also be found from analysis of the pole-to-total contribution ratio. The analysis done in the case of the tetraquark Z_q allows us to fix the working interval for s_0 as 59 GeV^2 ≤ s_0 ≤ 60 GeV^2.

The Borel parameter M^2 also has to satisfy well-known requirements. Namely, the convergence of the OPE and the dominance of the perturbative contribution over the nonperturbative one fix a lower bound for the allowed values of M^2. The upper limit of the Borel parameter is determined so as to achieve the largest possible pole contribution to the sum rule. These constraints lead to the following working window for M^2: 8.2 GeV^2 ≤ M^2 ≤ 8.4 GeV^2.

In Figs. <ref> and <ref> we graphically demonstrate some stages in the extraction of the working regions for these parameters. Thus, in Fig. <ref> the perturbative and nonperturbative contributions to the sum rule in the chosen regions for s_0 and M^2 are depicted. The convergence of the OPE can be seen by inspecting Fig. <ref>, where the effects of the operators of different dimensions are plotted. By varying the parameters s_0 and M^2 within their working ranges we find that the pole contribution to the mass sum rule amounts to ∼ 65% of the result.
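Although the explicit ρ^QCD(s) is not reproduced here, the numerical procedure behind the mass sum rule above is simple to sketch. The following Python fragment evaluates the Borel-transformed ratio for a user-supplied spectral density; the density rho_toy is a pure placeholder (not the actual ρ^QCD(s)), the quark masses are approximate input values, and scipy is assumed available.

```python
import numpy as np
from scipy.integrate import quad

def borel_mass(rho_qcd, s_min, s0, M2):
    """m = sqrt(I1/I0) with I_k = int_{s_min}^{s0} ds rho(s) s^k exp(-s/M^2)."""
    I0, _ = quad(lambda s: rho_qcd(s) * np.exp(-s / M2), s_min, s0)
    I1, _ = quad(lambda s: rho_qcd(s) * s * np.exp(-s / M2), s_min, s0)
    return np.sqrt(I1 / I0)

# toy usage: scan the Borel window with an illustrative density
mb, mc = 4.18, 1.27                    # quark masses in GeV (approximate)
s_min = (mb + mc) ** 2                 # lower integration limit M = m_b + m_c
rho_toy = lambda s: (s - s_min) ** 2   # placeholder, NOT the real rho^QCD(s)
for M2 in (8.2, 8.3, 8.4):
    print(M2, borel_mass(rho_toy, s_min, 59.0, M2))
```

The stability of the extracted mass against variations of M^2 inside the window, and its sensitivity to s_0, can be probed exactly in this way.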
The final results for the mass and meson-current coupling of the Z_q state are drawn in Fig. <ref> and collected in Table <ref>. As is seen from Fig. <ref>, the quantities extracted from the sum rules demonstrate a mild dependence on M^2, whereas the effects of s_0 on them are sizable. The uncertainties generated by the parameters s_0 and M^2 are the main sources of errors, which are an inherent part of sum rule computations and can reach up to 30% of the whole result.

The mass and meson-current coupling of the Z_s state can be obtained from similar calculations, the difference being only in the terms ∼ m_s kept in the spectral density, whereas in the Z_q calculations we set m_q=0. These modifications, and also the replacement ℳ⇒ m_b+m_c+2m_s in the integrals, result in a shift of the working ranges of the parameters s_0 and M^2 towards slightly larger values, which now read 60 GeV^2 ≤ s_0 ≤ 61 GeV^2, 8.4 GeV^2 ≤ M^2 ≤ 8.6 GeV^2. Predictions for m_Z_s and f_Z_s obtained using s_0 and M^2 from Eq. (<ref>) are also written down in Table <ref>.

§ Z_q→ B_cρ AND Z_s→ B_cϕ DECAYS

In this section we investigate the strong decays of the exotic axial-vector Z_q(s) states and calculate the widths of their main decay modes, which, in accordance with the results of Sec. <ref>, are kinematically allowed. One can see that the quantum numbers, quark content and mass of the Z_q tetraquark make the process Z_q→ B_cρ its preferred decay mode. The Z_s state may decay to B_c and ϕ mesons. It is worth noting that, due to the ρ-ω and ω-ϕ mixing, the processes Z_q→ B_cω and Z_s→ B_cω are also among their kinematically allowed decay channels. But because, for example, the ϕ and ω mesons are almost pure ss and (uu+dd)/√(2) states, the Z_s→ B_cω process is inessential provided the mass of Z_s allows its decay to a ϕ meson: alternative channels with ω may play an important role in the exploration of tetraquark states containing an s̅s pair if their masses are not large enough to create a ϕ meson.

We are going to carry out the required analysis and write down all the expressions necessary to find the width of the Z_q→ B_cρ decay. After rather trivial replacements in the corresponding formulas and input parameters, the same calculations can easily be repeated for the Z_s→ B_cϕ decay.

As a first step we have to compute the coupling g_Z_qB_cρ, which describes the strong interaction in the vertex Z_qB_cρ and can be extracted from the QCD sum rule. To this end, we explore the correlation function Π_μ(p,q)=i∫ d^4xe^ipx⟨ρ(q)|𝒯{J^B_c(x)J_μ^†(0)}|0⟩, where J^B_c(x) is the interpolating current of the B_c meson. It is defined in the form J^B_c(x)=ib_l(x)γ_5c_l(x). The correlation function in Eq. (<ref>) is introduced in a form which implies the usage of the light-cone sum rule method. Indeed, Π_μ(p,q) will be computed by employing the QCD sum rule on the light cone together with the technique of the soft-meson approximation.

In terms of the physical parameters of the involved particles and the coupling g_Z_qB_cρ, the function Π_μ(p,q) has a simple form and generates the phenomenological side of the sum rule. Namely, Π_μ^Phys(p,q)= ⟨ 0|J^B_c|B_c(p)⟩/(p^2-m_B_c^2) × ⟨ B_c(p)ρ(q)|Z_q(p^')⟩⟨ Z_q(p^')|J_μ^†|0⟩/(p^'2-m_Z^2)+…, where p, q and p^'=p+q are the momenta of the B_c, ρ and Z_q particles, respectively.
The term presented above is the contribution of the ground state; the dots stand for the effects of the higher resonances and continuum states. We introduce the B_c meson matrix element ⟨ 0|J^B_c|B_c(p)⟩ = f_B_cm_B_c^2/(m_b+m_c), where m_B_c and f_B_c are the mass and decay constant of the B_c meson, and also the matrix element corresponding to the vertex ⟨ B_c(p)ρ(q)|Z_q(p^')⟩ = g_Z_qB_cρ[ (q·ε^')(p^'·ε^∗) -(q· p^')(ε^∗·ε^')]. Then the ground state term in the correlation function can easily be found as: Π_μ^Phys(p,q)= f_B_cf_Zm_Zm_B_c^2g_Z_qB_cρ/[(p^'2-m_Z^2)(p^2-m_B_c^2)(m_b+m_c)] ×( (m_Z^2-m_B_c^2)/2 ε_μ^∗-p^'·ε^∗ q_μ) +…

Strong vertices of a tetraquark with two conventional mesons differ from vertices containing only ordinary mesons. The reason is very simple: the tetraquark Z_q is a state composed of four valence quarks; therefore the expansion of the non-local correlation function Π_μ(p,q) leads to an expression which, instead of the distribution amplitudes of the ρ meson, depends on its local matrix elements (the same arguments are, of course, valid for Z_s as well). Then the conservation of four-momentum at the vertex Z_qB_cρ sets q equal to zero. In other words, within the light-cone sum rule method the momentum of the ρ meson should be equal to zero in our case. In vertices of ordinary hadrons the four-momenta of all involved particles can take nonzero values. The soft-meson approximation corresponds to the situation when q=0. Calculations of the same strong couplings within the full light-cone sum rule method and in the soft-meson approximation demonstrated that the difference between the results extracted using these two approaches is numerically small (for a detailed discussion, see Ref. <cit.>).

In the soft limit p=p^', the only term that survives in Eq. (<ref>) is the one ∼ε_μ^∗. The invariant function Π^Phys(p^2) corresponding to this structure depends on the variable p^2 and is given as Π^Phys(p^2)= f_B_cf_Zm_Zm_B_c^2g_Z_qB_cρ/[2(p^2-m^2)^2(m_b+m_c)] ×(m_Z^2-m_B_c^2) +…, where m^2=(m_Z^2+m_B_c^2)/2. In the soft-meson approximation we additionally apply the operator (1-M^2d/dM^2) M^2e^m^2/M^2 to both sides of the sum rule. This last operation is required to remove all unsuppressed contributions existing in the physical side of the sum rule in the soft-meson limit (see Ref. <cit.>).

The second component of the sum rule, i.e., the QCD expression for the correlation function Π_μ^QCD(p,q), is calculated by employing the quark propagators and is shown below: Π_μ^QCD(p,q)=-i∫ d^4xe^ipx{[ γ_5S_c^ib(x)γ_5 S_b^bi(-x)γ_μ]_αβ⟨ρ(q)|q_α^aq_β^a|0⟩ +[ γ_5S_c^ib(x)γ_5S_b^ai(-x)γ_μ]_αβ⟨ρ(q)|q_α^aq_β^b|0⟩}, with α and β being the spinor indices.

We continue our calculations by employing the expansion q_α^aq_β^b→1/4 Γ_βα^j( q^aΓ^jq^b), where Γ^j=1, γ_5, γ_μ, iγ_5γ_μ, σ_μν/√(2) is the full set of Dirac matrices, and carry out the color summation. Prescriptions for performing the summation over color indices, as well as the procedures used to calculate the resulting integrals and extract the imaginary part of the correlation function Π_μ^QCD(p,q), have been presented many times in our previous works, Refs. <cit.>.
Therefore, here we skip further details and provide the ρ meson local matrix elements that contribute to the spectral density in the soft limit, as well as the final formulas for the spectral density ρ_c(s). The analysis demonstrates that in the soft limit only the matrix elements ⟨ 0|qγ_μq|ρ^0(p)⟩ =1/√(2) f_ρm_ρε_μ and ⟨ 0|qgG_μνγ_νγ_5q|ρ^0(p)⟩ =1/√(2) f_ρm_ρ^3ζ_4ρε_μ enter the computations, where q denotes one of the u or d quarks. The matrix elements depend on the ρ meson mass m_ρ and decay constant f_ρ. The twist-4 matrix element in Eq. (<ref>) also contains, as a factor, the parameter ζ_4ρ. Its numerical value was extracted at the scale μ=1 GeV from the sum rule calculations in Ref. <cit.> and equals ζ_4ρ=0.07± 0.03.

The final expression for the spectral density has the form ρ_c(s)=f_ρm_ρ/(24√(2)) [F^pert.(s)+F^n.-pert.(s)]. Here F^pert.(s) is the perturbative contribution to ρ_c(s), F^pert.(s) =1/(π^2s^2) [ s^2+s( m_b^2+6m_bm_c+m_c^2) -2(m_b^2-m_c^2)^2] √((s+m_b^2-m_c^2)^2-4m_b^2s), whereas by F^n.-pert.(s) we denote its nonperturbative component. The function F^n.-pert.(s) is the sum of the terms F^n.-pert.(s)=F_G^n.-pert.(s)+⟨α_sG^2/π⟩∫_0^1f_g_s^2G^2(z,s)dz+⟨ g_s^3G^3⟩∫_0^1f_g_s^3G^3(z,s)dz +⟨α_sG^2/π⟩^2∫_0^1f_(g_s^2G^2)^2(z,s)dz. Here F_G^n.-pert.(s) appears from the integration of the perturbative component of one heavy quark propagator with the term ∼ G from the other one. It can be expressed using the matrix element given by Eq. (<ref>) and has a rather simple form F_G^n.-pert.(s)=3m_ρ^2ζ_4ρ/(2π^2s) √((s+m_b^2-m_c^2)^2-4m_b^2s).

The nonperturbative factors in front of the integrals and the subscripts of the functions clearly indicate the origin of the remaining terms. In fact, the functions f_g_s^2G^2 and f_g_s^3G^3 are due to products of the ∼ g_s^2G^2 and ∼ g_s^3G^3 terms with the perturbative component of the other propagator, whereas f_(g_s^2G^2)^2 comes from integrals obtained using the ∼ g_s^2G^2 components of the b and c quark propagators. These terms are the four-, six- and eight-dimensional nonperturbative contributions to the spectral density ρ_c(s), respectively. Their explicit forms are presented below:

f_g_s^2G^2(z,s)=1/(12z^2(z-1)^2){ 54(1-z)z^2δ(s-Φ)+[ 8m_b^2(z-1)^3+z^2( 27s(1-z)-8m_b^2z) +2m_bm_c( 4+15z+12z^2)] δ^(1)(s-Φ)-4s[ m_b^2(1-z)^3+m_bm_cz(1-z)-m_c^2z^3] δ^(2)(s-Φ)},

f_g_s^3G^3(z,s)=1/(15· 2^6z^5(z-1)^5){ -12z^2(z-1)^2[ 3m_b^2(z-1)^5+3m_bm_c((1-z)^5+z^5)+z( -3m_c^2z^4 +s( 1-8z+25z^2-40z^3+33z^4-11z^5))] δ^(2)(s-Φ)+2z(z-1)[ m_b^2(z-1)^5( 7m_b^2-4m_bm_c-9sz(2z-1)) +2m_bm_cz^2( 2m_c^2z^3-9s(z-1)^2(1-3z+3z^2)) +z^3( -7m_c^4z^2+9sm_c^2z^2(1-3z+2z^2)+2s^2(z-1)^3(2-7z+7z^2))] δ^(3)(s-Φ)+[ -2m_b^5m_c(z-1)^5+7m_b^4sz(z-1)^6-4m_b^3m_csz^2(z-1)^5 -6m_b^2s^2z^3(z-1)^6+2m_bm_cz^4( -3s^2(z-1)^4+m_c^4z+2m_c^2sz(z-1)^2) +s(z-1)z^5( s^2(z-1)^4-7m_c^4z+6m_c^2sz(z-1)^2)] δ^(4)(s-Φ)},

f_(g_s^2G^2)^2(z,s)=m_bm_c/(54z^2(z-1)^2){ 2[ m_bm_c-s(1-3z+3z^2)] δ^(4)(s-Φ)+s[m_bm_c+s(1-z)z]δ^(5)(s-Φ)},

where δ^(n)(s-Φ)=d^n/ds^n δ(s-Φ), with Φ defined as Φ =[m_b^2(1-z)+m_c^2z]/(z(1-z)).

The final sum rule used to evaluate the strong coupling reads g_Z_qB_cρ=2(m_b+m_c)/[f_B_cf_Zm_Zm_B_c^2(m_Z^2-m_B_c^2)] ( 1-M^2 d/dM^2) × M^2∫_(m_b+m_c)^2^s_0 ds e^(m^2-s)/M^2ρ_c(s). To calculate the width of the decay Z_q→ B_cρ we use the expression Γ( Z_q→ B_cρ) =g_Z_qB_cρ^2m_ρ^2/(24π) λ( m_Z, m_B_c,m_ρ) ×[ 3+2λ^2( m_Z, m_B_c,m_ρ)/m_ρ^2], where λ(a, b, c)=√(a^4+b^4+c^4-2( a^2b^2+a^2c^2+b^2c^2))/(2a).
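For readers who wish to reproduce the kinematics, the width formula above is straightforward to evaluate numerically. The following sketch implements λ(a,b,c) and Γ(Z→ B_cV) exactly as quoted, for a generic vector meson V; all masses (in GeV) and the coupling g are supplied by the user, so any numbers passed in are illustrative inputs rather than results of this paper.

```python
import numpy as np

def lam(a, b, c):
    """Kinematic function lambda(a, b, c) from the text (GeV units)."""
    return np.sqrt(a**4 + b**4 + c**4
                   - 2 * (a**2 * b**2 + a**2 * c**2 + b**2 * c**2)) / (2 * a)

def width_Z_to_BcV(g, mZ, mBc, mV):
    """Two-body width Gamma(Z -> B_c V) using the formula quoted above."""
    k = lam(mZ, mBc, mV)  # final-state momentum factor
    return g**2 * mV**2 / (24 * np.pi) * k * (3 + 2 * k**2 / mV**2)
```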
Parameters necessary for the numerical calculation of the strong coupling g_Z_qB_cρ and Γ( Z_q→ B_cρ) are listed in Table <ref>. The investigation, carried out in accordance with the standard requirements of sum rule calculations, allows us to determine the ranges for s_0 and M^2. For example, the pole contribution to the sum rule amounts to ∼ 48-60 % of the total result, as is seen from Fig. <ref>. The other constraints, i.e., the convergence of the OPE and the prevalence of the perturbative contribution, have been checked as well. Summing up the performed analysis, we fix the interval for the continuum threshold s_0 as in the mass calculations (see Eq. (<ref>)), whereas for the Borel parameter we obtain 8 GeV^2 ≤ M^2 ≤ 9 GeV^2, which is wider than the corresponding window in the mass sum rule.

In Fig. <ref> we provide our final results and depict the strong coupling g_Z_qB_cρ as a function of the Borel parameter (at fixed s_0) and as a function of the continuum threshold (at fixed M^2). The dependence of the strong coupling on these parameters has a traditional form, and the systematic errors of the calculations are within reasonable limits.

The decay Z_s→ B_cϕ can be considered in an analogous manner: one only needs to write down in the relevant expressions the parameters of the ϕ meson. Thus, the matrix elements of the ϕ meson that take part in forming the spectral density are ⟨ 0|sγ_μs|ϕ(p)⟩ = f_ϕm_ϕε_μ, ⟨ 0|sgG_μνγ_νγ_5s|ϕ(p)⟩ = f_ϕm_ϕ^3ζ_4ϕε_μ, where the twist-4 parameter ζ_4ϕ=0.00± 0.02 was estimated and found compatible with zero in Ref. <cit.>. In the calculation of the coupling g_Z_sB_cϕ the working regions for the Borel parameter and continuum threshold are fixed in the form: 60 GeV^2 ≤ s_0 ≤ 61 GeV^2, 8.2 GeV^2 ≤ M^2 ≤ 9.2 GeV^2. Our results for the strong couplings and the widths of the decay modes studied in this work are collected in Table <ref>.

§ DISCUSSION AND CONCLUDING REMARKS

In the present work we have calculated the parameters of the open charm-bottom axial-vector tetraquark states Z_q and Z_s within the QCD sum rule method. Their masses and meson-current couplings have been obtained using the two-point sum rule method. In these calculations we have used, for both Z_q and Z_s, interpolating currents symmetric in the color indices, assuming that they are the ground states in the corresponding tetraquark multiplets. Indeed, one can anticipate that Z_q and Z_s are the axial-vector components of the 1S diquark-antidiquark [cq][b̅q̅] and [cs][b̅s̅] multiplets, respectively. During the last years some progress was achieved in the investigation of the [cq][c̅q̅] and [cs][c̅s̅] multiplets, and in the classification of the observed hidden-charm tetraquarks as their possible members (see Refs. <cit.>). Thus, within the "type-II" model elaborated in these works, the authors not only identified the multiplet levels with discovered tetraquarks, but also estimated the masses of states which had not yet been observed. This model is founded on certain assumptions about the nature of the inter-quark and inter-diquark interactions, and considers spin-spin interactions within diquarks as the decisive source of the splitting inside a multiplet.

The information useful for our purposes is accumulated in the axial-vector sector of these multiplets. The axial-vector J^PC=1^++ particle in the ground-state [cq][c̅q̅] multiplet was identified with the well-known X(3872) resonance. A similar analysis carried out for the multiplet of [cs][c̅s̅] states demonstrated that its J^PC=1^++ level may be considered as X(4140).
The mass difference of the axial-vector resonances belonging to the "q" and "s" hidden-charm multiplets is X(4140)-X(3872)≈ 270 MeV. In the present work we have evaluated the masses of the axial-vector states from the [cq][b̅q̅] and [cs][b̅s̅] multiplets. The mass shift between these multiplets, m_Z_s-m_Z_q≈ 240 MeV, is in nice agreement with Eq. (<ref>).

Another question to be addressed here is connected with the masses of the excited states, which in sum rule calculations determine the continuum threshold s_0. We have found that for the [cq][b̅q̅] and [cs][b̅s̅] multiplets the sum rule calculations fix the lower bounds of the parameter s_0 as s_0=59 GeV^2 and s_0=60 GeV^2, respectively. This means that the sum rule has placed a first excited state at the position √(s)_0. In order to estimate the gap between the excited and ground states we invoke √(s)_0 and the central values of the Z_q and Z_s masses. Then it is not difficult to see that for the [cq][b̅q̅] type tetraquarks it equals √(s)_0 GeV-7.06 GeV≈ 0.62 GeV, whereas for the [cs][b̅s̅] one gets √(s)_0 GeV-7.30 GeV≈ 0.45 GeV.

The masses of the 1S and 2S states with J^PC=1^+- from the [cq][c̅q̅^'] multiplet were calculated by means of the two-point sum rule method in Ref. <cit.>. The ground-state level 1S was identified with the resonance Z_c(3900), whereas the resonance Z(4430) was included in the multiplet of the excited 2S states. If this assignment is correct, then the experimental data provide the mass difference between the ground and first radially excited states, which is equal to 530 MeV. The results of the calculations led to the predictions M_Z_c(3900)=3.91^+21_-17 GeV and M_Z_c(4430)=4.51^+17_-09 GeV, and to a mass difference of ∼ 600 MeV.

The 1S and 2S multiplets of [cs][c̅s̅] tetraquarks were explored in the context of the "type-II" model in Ref. <cit.>. For the axial-vector levels with J^PC=1^++, named there X states, the 2S-1S gap is 4600 MeV-4140 MeV=460 MeV, and for the particles X^(1) and X^(2) with the quantum numbers J^PC=1^+- one gets 4600 MeV-4140 MeV=460 MeV and 4700 MeV-4274 MeV=426 MeV, respectively. Comparison of these results with the ones given by Eqs. (<ref>) and (<ref>) can be considered as a confirmation of the self-consistent character of the performed analysis.

In the framework of the QCD two-point sum rule approach the masses of the open charm-bottom diquark-antidiquark states were previously calculated in Ref. <cit.>. For the masses of the axial-vector tetraquarks Z_q and Z_s the authors found m_Z_q=7.10± 0.09± 0.06± 0.01 GeV and m_Z_s=7.11± 0.08± 0.05± 0.03 GeV. These predictions were extracted by using the parameter s_0=(55± 2) GeV^2 in the calculations of m_Z_q and m_Z_s, and M^2=(7.9-8.2) GeV^2 and M^2=(6.7-7.9) GeV^2 for the "q" and "s" states, respectively. It is seen that the mass differences m_Z_s-m_Z_q≈ 10 MeV and √(s)_0-m_Z_q≈√(s)_0-m_Z_s≈ 180 MeV can neither be included in the "q"-"s" mass-hierarchy scheme of the ground state tetraquarks nor accepted as giving the correct mass shift between the 1S and 2S multiplets. Our results for m_Z_q and m_Z_s, if differences in the chosen windows for the parameters s_0 and M^2 are ignored, may within the theoretical errors be considered as being in agreement with the predictions of Ref. <cit.>. But in our case the central value of m_Z_s allows the decay process Z_s → B_cϕ, whereas for m_Z_s from Eq. (<ref>) it remains among the kinematically forbidden channels.

We have also calculated the widths of the Z_q→ B_cρ and Z_s→ B_cϕ decays, which are new results of this work.
The obtained predictions for Γ(Z_q→ B_cρ) and Γ(Z_s→ B_cϕ) show that Z_q may be considered a narrow resonance, whereas Z_s belongs to the class of wide tetraquark states. The investigation of the open charm-bottom axial-vector tetraquarks performed in the present work within the diquark-antidiquark picture has led to quite interesting predictions. Theoretical explorations of the other members of the [cq][b̅q̅] and [cs][b̅s̅] tetraquark multiplets, as well as their experimental studies, may shed light on the nature of multi-quark hadrons.

§ ACKNOWLEDGEMENTS

The work of K. A. was financed by TUBITAK under the grant No. 115F183.

[Chen:2016qju] H. X. Chen, W. Chen, X. Liu and S. L. Zhu, Phys. Rept. 639, 1 (2016).
[Chen:2016spr] H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu, arXiv:1609.08928 [hep-ph].
[Esposito:2014rxa] A. Esposito, A. L. Guerrieri, F. Piccinini, A. Pilloni and A. D. Polosa, Int. J. Mod. Phys. A 30, 1530002 (2015).
[Meyer:2015eta] C. A. Meyer and E. S. Swanson, Prog. Part. Nucl. Phys. 82, 21 (2015).
[Belle:2003] S.-K. Choi et al. [Belle Collaboration], Phys. Rev. Lett. 91, 262001 (2003).
[CDF:2004] D. Acosta et al. [CDF II Collaboration], Phys. Rev. Lett. 93, 072001 (2004).
[D0:2004] V. M. Abazov et al. [D0 Collaboration], Phys. Rev. Lett. 93, 162002 (2004).
[BaBar:2005] B. Aubert et al. [BaBar Collaboration], Phys. Rev. D 71, 071103 (2005).
[Abe:2005ix] K. Abe et al. [Belle Collaboration], BELLE-CONF-0540, hep-ex/0505037.
[Aubert:2008ae] B. Aubert et al. [BaBar Collaboration], Phys. Rev. Lett. 102, 132001 (2009).
[Barnes:2005pb] T. Barnes, S. Godfrey and E. S. Swanson, Phys. Rev. D 72, 054026 (2005).
[Danilkin:2010cc] I. V. Danilkin and Y. A. Simonov, Phys. Rev. Lett. 105, 102002 (2010).
[Close:2003sg] F. E. Close and P. R. Page,
[Tornqvist:2004qy] N. A. Tornqvist, Phys. Lett. B 590, 209 (2004).
[Zanetti:2011ju] C. M. Zanetti, M. Nielsen and R. D. Matheus, Phys. Lett. B 702, 359 (2011).
[Guo:2014taa] F. K. Guo, C. Hanhart, Y. S. Kalashnikova, U. G. Meißner and A. V. Nefediev, Phys. Lett. B 742, 394 (2015).
[Maiani:2004vq] L. Maiani, F. Piccinini, A. D. Polosa and V. Riquer, Phys. Rev. D 71, 014028 (2005).
[Maiani:2007vr] L. Maiani, A. D. Polosa and V. Riquer, Phys. Rev. Lett. 99, 182003 (2007).
[Navarra:2006nd] F. S. Navarra and M. Nielsen, Phys. Lett. B 639, 272 (2006).
[Dubnicka:2010kz] S. Dubnicka, A. Z. Dubnickova, M. A. Ivanov and J. G. Korner, Phys. Rev. D 81, 114007 (2010).
[Wang:2013vex] Z. G. Wang and T. Huang, Phys. Rev. D 89, 054019 (2014).
[Aubert:2003fg] B. Aubert et al. [BaBar Collaboration], Phys. Rev. Lett. 90, 242001 (2003).
[Besson:2003cp] D. Besson et al. [CLEO Collaboration], Phys. Rev. D 68, 032002 (2003); Erratum: [Phys. Rev. D 75, 119908 (2007)].
[D0:2016mwd] V. M. Abazov et al. [D0 Collaboration], Phys. Rev. Lett. 117, 022003 (2016).
[D0] The D0 Collaboration, D0 Note 6488-CONF (2016).
[Aaij:2016iev] R. Aaij et al. [LHCb Collaboration], Phys. Rev. Lett. 117, 152003 (2016).
[CMS:2016] The CMS Collaboration, CMS PAS BPH-16-002 (2016).
[Zhang:2009vs] J. R. Zhang and M. Q. Huang, Phys. Rev. D 80, 056004 (2009).
[Zhang:2009em] J. R. Zhang and M. Q. Huang, Commun. Theor. Phys. 54, 1075 (2010).
[Chen:2013aba] W. Chen, T. G. Steele and S. L. Zhu, Phys. Rev. D 89, 054037 (2014).
[Zouzou:1986qh] S. Zouzou, B. Silvestre-Brac, C. Gignoux and J. M. Richard, Z. Phys. C 30, 457 (1986).
[SilvestreBrac:1993ry] B. Silvestre-Brac and C. Semay, Z. Phys. C 59, 457 (1993).
[Ebert:2007rn] D. Ebert, R. N. Faustov, V. O. Galkin and W. Lucha, Phys. Rev. D 76, 114015 (2007).
[Sun:2012sy] Z. F. Sun, X. Liu, M. Nielsen and S. L. Zhu, Phys. Rev. D 85, 094008 (2012).
[Albuquerque:2012rq] R. M. Albuquerque, X. Liu and M. Nielsen, Phys. Lett. B 718, 492 (2012).
[Agaev:2016dsg] S. S. Agaev, K. Azizi and H. Sundu, Phys. Rev. D 95, 034008 (2017).
[Shifman:1979] M. A. Shifman, A. I. Vainshtein and V. I. Zhakharov, Nucl. Phys. B 147, 385 (1979).
[Braun:1985ah] V. M. Braun and A. V. Kolesnichenko, Phys. Lett. B 175, 485 (1986).
[Braun:1988kv] V. M. Braun and Y. M. Shabelski, Sov. J. Nucl. Phys. 50, 306 (1989) [Yad. Fiz. 50, 493 (1989)].
[Balitsky:1982ps] I. I. Balitsky, D. Diakonov and A. V. Yung, Phys. Lett. B 112, 71 (1982); Z. Phys. C 33, 265 (1986).
[Reinders:1985] J. Govaerts, L. J. Reinders, H. R. Rubinstein and J. Weyers, Nucl. Phys. B 258, 215 (1985); J. Govaerts, L. J. Reinders and J. Weyers, Nucl. Phys. B 262, 575 (1985).
[Braun:1989] I. I. Balitsky, V. M. Braun and A. V. Kolesnichenko, Nucl. Phys. B 312, 509 (1989).
[Ioffe:1983ju] B. L. Ioffe and A. V. Smilga, Nucl. Phys. B 232, 109 (1984).
[Braun:1995] V. M. Belyaev, V. M. Braun, A. Khodjamirian and R. Rückl, Phys. Rev. D 51, 6177 (1995).
[Agaev:2016dev] S. S. Agaev, K. Azizi and H. Sundu, Phys. Rev. D 93, 074002 (2016).
[Agaev:2016ijz] S. S. Agaev, K. Azizi and H. Sundu, Phys. Rev. D 93, 114007 (2016).
[Agaev:2016lkl] S. S. Agaev, K. Azizi and H. Sundu, Phys. Rev. D 93, 094006 (2016).
[Agaev:2016urs] S. S. Agaev, K. Azizi and H. Sundu, Eur. Phys. J. Plus 131, 351 (2016).
[Reinders:1984sr] L. J. Reinders, H. Rubinstein and S. Yazaki, Phys. Rept. 127, 1 (1985).
[Agaev:2016mjb] S. S. Agaev, K. Azizi and H. Sundu, Phys. Rev. D 93, 074024 (2016).
[Olive:2016xmw] C. Patrignani, Chin. Phys. C 40, 100001 (2016).
[Ball:2007zt] P. Ball, V. M. Braun and A. Lenz, JHEP 0708, 090 (2007).
[Baker:2013mwa] M. J. Baker, J. Bordes, C. A. Dominguez, J. Penarrocha and K. Schilcher, JHEP 1407, 032 (2014).
[Maiani:2014] L. Maiani, F. Piccinini, A. D. Polosa and V. Riquer, Phys. Rev. D 89, 114010 (2014).
[Maiani:2016wlq] L. Maiani, A. D. Polosa and V. Riquer, Phys. Rev. D 94, 054026 (2016).
[Wang:2014vha] Z. G. Wang, Commun. Theor. Phys. 63, 325 (2015).
Solvent fluctuations around solvophobic, solvophilic and patchy nanostructures and the accompanying solvent mediated interactions

Andrew J. Archer
Department of Mathematical Sciences, Loughborough University, Loughborough, LE11 3TU, UK
H. H. Wills Physics Laboratory, University of Bristol, Bristol, BS8 1TL, UK
Department of Mathematical Sciences, Loughborough University, Loughborough, LE11 3TU, UK

December 30, 2023

Using classical density functional theory (DFT) we calculate the density profile ρ(r⃗) and local compressibility χ(r⃗) of a simple liquid solvent in which a pair of blocks with (microscopic) rectangular cross-section are immersed. We consider blocks that are solvophobic, solvophilic and also ones that have both solvophobic and solvophilic patches. Large values of χ(r⃗) correspond to regions in space where the liquid density is fluctuating most strongly. We seek to elucidate how enhanced density fluctuations correlate with the solvent mediated force between the blocks, as the distance between the blocks and the chemical potential of the liquid reservoir vary. For sufficiently solvophobic blocks, at small block separations and small deviations from bulk gas-liquid coexistence, we observe a strongly attractive (near constant) force, stemming from capillary evaporation to form a low density gas-like intrusion between the blocks. The accompanying χ(r⃗) exhibits structure which reflects the incipient gas-liquid interfaces that develop. We argue that our model system provides a means to understanding the basic physics of solvent mediated interactions between nanostructures, and between objects such as proteins in water, that possess hydrophobic and hydrophilic patches.

§ INTRODUCTION

Understanding the properties of water near hydrophobic surfaces continues to attract attention across several different disciplines,<cit.> ranging from the design of self-cleaning materials <cit.> to biological self-assembly and protein interactions.<cit.> Likewise, understanding the (water mediated) interactions between hydrophobic and hydrophilic entities is important in many areas of physical chemistry and chemical physics. In a recent article, Kanduč et al.<cit.> survey the field and describe informatively how the behaviour of soft matter at the nano-scale depends crucially on surface properties, and outline the key role played by water mediated interactions in many technological and biological processes. These include colloid science, where altering the surface chemistry can change enormously the effective interactions, e.g. those preventing aggregation, and biological matter, where effective membrane-membrane interactions can be important in biological processes. In attempting to ascertain the nature of effective interactions, it is crucial to know whether a certain substrate, or entity, is hydrophilic or hydrophobic. For a macroscopic (planar) substrate the degree of hydrophobicity is measured by Young's contact angle θ. A strongly hydrophobic surface, such as a self-assembled monolayer (SAM), paraffin or hydrocarbon, can have θ > 120^∘, while a strongly hydrophilic surface can often correspond to complete wetting, i.e. θ=0, meaning a water drop spreads across the whole surface.
However, in the majority of systems encountered in the physical chemistry of colloids, in nanoscience and in situations pertinent to biological systems, the entities immersed in water do not have a macroscopic surface area. Thus it is important to ask to what extent ideas borrowed from a macroscopic (capillarity) description, which simply balance bulk (volume) and surface (area) contributions to the total grand potential but which make specific predictions for the effective interaction between two immersed macroscopic hydrophobic entities, remain valid at the nanoscale. For example, Huang et al. <cit.> consider the phenomenon of capillary evaporation of SPC water between two hydrophobic oblate (ellipsoidal) plates. These authors discuss the validity of the macroscopic formula for the plate separation at which evaporation occurs and the form of the solvent mediated force between the plates. More recently, Jabes et al. <cit.> investigate the solvent-induced interactions for SPC/E water between curved hydrophobes; they consider the influence of different types of confining geometry and conclude that macroscopic thermodynamic (capillarity) arguments work surprisingly well, even at length scales corresponding to a few molecular (water) diameters. The survey article Ref. kanduc2016water emphasises the usefulness of capillarity ideas for analysing water mediated forces between two entities that have different adsorbing strengths. Such observations raise the general physics question of how well one should expect capillarity arguments to work for nanoscale entities immersed in an arbitrary solvent. Are these observations specific to water? This seems most unlikely. In this paper we argue that insight into fundamental aspects of solvent mediated interactions, particularly those pertaining to solvophobes, is best gained by considering the effective, solvent mediated interactions between nanostructures immersed in a simple Lennard-Jones (LJ) liquid. By focusing on a model liquid with much simpler intermolecular forces than those in water, one can investigate more easily and more systematically the underlying physics, e.g. the length scales relevant for phenomena such as capillary evaporation and how these determine the effective interactions. A second, closely related, aspect of our present study is concerned with the strength and range of density fluctuations in water close to hydrophobic substrates. It is now accepted that for water near a macroscopic strongly hydrophobic substrate the local number density of the water is reduced below that in bulk for the first one or two adsorbed molecular layers. Accompanying this reduction in local density, there is growing evidence for a substantial increase in fluctuations in the local number density; these increase for increasing water contact angle. An illuminating review <cit.> surveys the field up to 2011, describing earlier work on density fluctuations from the groups of Garde, Hummer, and Chandler. The basic idea of Garde and co-workers is that a large value of some appropriately defined local compressibility reflects the strength of density fluctuations in the neighbourhood of the substrate and should provide a quantitative measure of the degree of hydrophobicity of the hydrophobic entity.<cit.> The idea is appealing. However, even for a macroscopic planar substrate, there are problems in deciding upon the appropriate measure. Once again, this issue is not specific to water.
If strong fluctuations occur at hydrophobic surfaces one should also expect these to occur at solvophobic surfaces, for similar values of the chemical potential deviation from bulk coexistence. In other words, pronounced fluctuations cannot be specific to water near hydrophobic substrates. This argument was outlined recently.<cit.> Evans and Stewart <cit.> discuss the merits of various different quantities that measure surface fluctuations. They argue that the compressibility χ(r⃗), defined as the derivative of the equilibrium density ρ(r⃗) with respect to the chemical potential μ at fixed temperature T,

χ(r⃗) ≡ (∂ρ(r⃗)/∂μ)_T,

provides the most natural and useful measure for quantifying the local fluctuations in an inhomogeneous liquid. This quantity was introduced much earlier,<cit.> in studies of wetting/drying and confined fluids, and is, of course, calculated in the grand canonical ensemble. The usual isothermal compressibility <cit.> is κ_T = χ_b/ρ_b^2, where χ_b ≡ (∂ρ_b/∂μ)_T is the bulk value of the compressibility; recall that χ_b→∞ on approaching the bulk fluid critical point. Note that χ(r⃗) can be expressed<cit.> as the correlator

χ(r⃗) = β [ ⟨ N ρ̂(r⃗) ⟩ - ⟨ N ⟩⟨ρ̂(r⃗)⟩ ],

where β=(k_BT)^-1, ρ̂(r⃗) is the particle density operator, N = ∫ρ̂(r⃗) dr⃗ is the number of particles and ⟨⋯⟩ denotes a grand canonical average. Thus ⟨ρ̂(r⃗)⟩ = ρ(r⃗) and ⟨ N ⟩ is the average number of particles. Clearly χ(r⃗) correlates the local number density at r⃗ with the total number of particles in the system. The measures of χ(r⃗) introduced by other authors<cit.> are designed for molecular dynamics computations, which are performed in the canonical ensemble rather than in the grand canonical ensemble. The latter is more appropriate for adsorption studies. Using DFT, Evans and Stewart <cit.> calculated χ(z) defined by Eq. (<ref>) for LJ liquids near planar substrates, with the wall at z=0. They investigated substrates which ranged from neutral (θ≈90^∘) to very solvophobic (θ≈170^∘) and found that this quantity is enhanced over bulk, exhibiting a peak for z within one or two atomic diameters of the substrate. The height of the peak increased significantly as θ increased and the substrate became more solvophobic. In subsequent investigations, using Grand Canonical Monte Carlo (GCMC) for SPC/E water <cit.> and GCMC plus DFT for a LJ liquid <cit.> at model solvophobic substrates, it was observed that the maximum in χ(z) increases rapidly as the strength of the wall-fluid attraction is reduced, thereby increasing θ towards 180^∘, i.e. towards complete drying. For different choices of wall-fluid potentials the drying transition is continuous (critical) and the thickness of the intruding gas-like layer, as well as the maximum in χ(z), diverge as cosθ→ -1. <cit.> These observations pertain to the liquid at coexistence, where μ = μ_coex^+. Much is made in the literature concerning the depleted local density and accompanying enhanced surface fluctuations of water at hydrophobic surfaces as arising from the particular properties of water, namely the hydrogen-bonding and the open tetrahedrally coordinated liquid structure, which is said to be disrupted by the presence of large enough hydrophobic objects. However, following from Refs. evans2015local, evans2015quantifying, we show here that much of this phenomenology is also observed when a simple LJ like solvent that is near to bulk gas-liquid phase coexistence is in contact with solvophobic objects.
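The correlator form of χ(r⃗) above also indicates how this quantity can be estimated directly in a grand canonical simulation: over the GCMC samples one accumulates the particle number N alongside a binned density histogram and takes the covariance. The following is a minimal sketch of such an estimator; the samples iterable standing in for an actual GCMC run is hypothetical, and in practice very long runs are required because χ is a small difference of large averages.

import numpy as np

def chi_profile(samples, bins, bin_volume, beta):
    """Estimate chi(z) = beta*( <N*rho(z)> - <N><rho(z)> ) from grand
    canonical samples; `samples` yields one array of particle
    z-coordinates per configuration (hypothetical GCMC output)."""
    nbins = len(bins) - 1
    s_N, m = 0.0, 0
    s_rho = np.zeros(nbins)
    s_Nrho = np.zeros(nbins)
    for z in samples:
        N = z.size
        rho = np.histogram(z, bins=bins)[0] / bin_volume  # instantaneous binned density
        s_N += N
        s_rho += rho
        s_Nrho += N * rho
        m += 1
    return beta * (s_Nrho / m - (s_N / m) * (s_rho / m))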
The particular entities we consider are i) planar surfaces of infinite area and ii) long blocks with a finite rectangular cross-section. For these objects to be solvophobic, we treat them as being composed of particles to which the solvent particles are attracted weakly, compared to the strength of the attraction between solvent particles themselves. The contact angle of the solvent liquid at the planar solvophobic substrate considered here is θ≈144^∘. We also consider the behaviour at solvophilic objects, for which the contact angle at the corresponding planar substrate is θ≈44^∘. The simple LJ like solvent we consider consists of particles with a hard-sphere pair interaction plus an additional attractive tail potential that decays ∼ r^-6, where r is the distance between the solvent particles. We use classical density functional theory (DFT),<cit.> treating the hard core interactions using the White-Bear version of fundamental measure theory (FMT),<cit.> together with a mean-field treatment of the attractive interactions, to calculate the solvent density profile ρ(r⃗) and local compressibility χ(r⃗). An advantage of using DFT is that having calculated ρ(r⃗), one then has access to all thermodynamic quantities including the various interfacial tensions. Calculating the grand potential as a function of the distance between the blocks yields the effective solvent mediated potential; minus the derivative of this quantity is the solvent mediated force between the blocks. When both blocks are solvophobic and the liquid is at a state point near to bulk gas-liquid phase coexistence, we find that the solvent mediated force between these is strongly attractive at short distances due to the formation of a gas-like intrusion. Proximity to coexistence can be quantified by the difference Δμ=μ-μ_coex, where μ_coex is the value at bulk gas-liquid coexistence. For a slit pore consisting of two parallel surfaces of infinite extent that are sufficiently solvophobic, θ>90^∘, a first order transition, namely capillary evaporation, occurs as Δμ→0, corresponding to the stabilisation of the incipient gas phase in the slit of finite width.<cit.> The formation of the gas-like intrusion between the blocks that we consider here occurs at smaller Δμ. This is not a genuine first order surface phase transition, owing to the finite size of the blocks. However, this phenomenon is intimately related to the capillary evaporation that occurs between parallel planar surfaces with both dimensions infinite. It turns into the genuine capillary evaporation phase transition as the height of our blocks is increased to ∞. Note that some authors in the water community, e.g. Refs. berne2009dewetting, huang2003dewetting and Remsing et al.,<cit.> refer to this phenomenon as “dewetting”, but given that this term is also used to describe a film of liquid on a single planar surface breaking up to form droplets, a network pattern or other structures,<cit.> we prefer to use the more accurate term, capillary evaporation. The important matter of nomenclature was emphasised in a Faraday Discussion on hydrophobic and structured surfaces; see Refs. luzar2010wetting, evans2010wetting. We also present results for the local compressibility χ(r⃗) in the vicinity of the blocks. Maxima in χ(r⃗) correspond to points in space where the density fluctuations are the greatest. We find that the formation of the gas-like intrusion between the hydrophobic blocks is associated with a local value of χ(r⃗) that is much greater than the bulk value.
However, we find that the solvent density fluctuations are not necessarily greatest at the points in space that one might initially expect. For example, when there is a gas-like intrusion, the value of χ(r⃗) is larger at the entrance to the gap between the blocks, rather than in the centre of the gap. We are not the first to use DFT to study liquids near corners and between surfaces. Bryk et al. <cit.> calculated the solvent mediated (depletion) potential between a hard-sphere colloidal particle, immersed in a solvent of smaller hard-spheres, and planar substrates or geometrically structured substrates, including a right-angled wedge. They found that in the wedge geometry there is a strong attraction of the colloid to inner corners, but there is a free energy barrier repelling the colloid from an outer corner (edge) of a wedge. Hopkins et al.<cit.> studied the solvent mediated interaction between a spherical (soft-core) particle, several times larger than the (soft-core) solvent particles, and a planar interface. They showed that when the binary solvent surrounding the large particle (colloid) is near to liquid-liquid phase coexistence, thick (wetting) films rich in the minority solvent species can form around it and on the interface. This has a profound effect on the solvent mediated potential, making it strongly attractive. A similar effect, due to proximity to liquid-liquid phase separation, was found for the solvent mediated potential between pairs of spherical colloidal particles. <cit.> Analogous effects arising from proximity to gas-liquid phase coexistence, i.e. when Δμ is small, were found in a very recent study. <cit.> Such investigations, studying the influence of proximity to bulk phase coexistence on the solvent mediated potential between pairs of spherical particles, provide insight regarding what one might expect in the cases studied here, namely pairs of hydrophobic, hydrophilic and patchy blocks. The strong attractive forces between solvophobic objects, decreased local density and enhanced fluctuations close to the substrate all occur when the liquid is near to bulk gas-liquid phase coexistence, i.e. when Δμ is small. Note that liquid water at ambient conditions is near to saturation. For water at ambient conditions βΔμ∼10^-3. This dimensionless quantity provides a natural measure of over-saturation, indicating where our results might be appropriate to water and to other solvents. The other key ingredient in determining the physics of effective interactions is the liquid-gas surface tension γ_lg, which is especially large for water. More precisely, it is the ratio γ_lg/(Δμρ_l), where ρ_l is the density of the coexisting liquid, that sets the length scale for the capillary evaporation of any liquid; see Eq. (<ref>) below. The length scale in water is, of course, especially important. The influential article by Lum et al. <cit.> underestimates this. Subsequent articles <cit.> and the informative piece <cit.> by Cerdeiriña et al. point to the fact that for water under ambient conditions the characteristic length for capillary evaporation is L_c ∼ 1.5 μm. The latter authors analyse why this length scale is so long and conclude this is due primarily to the large value of γ_lg of water at room temperature.
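It is instructive to put numbers to this ratio for ambient water. A minimal sketch follows, taking the standard textbook values for the surface tension and number density of liquid water at room temperature together with βΔμ ∼ 10^-3; these inputs are illustrative assumptions, not values computed in this work.

import numpy as np

kB, T = 1.380649e-23, 300.0        # J/K, K
gamma_lg = 0.072                   # N/m, water at ambient temperature (assumed)
rho_l = 3.33e28                    # m^-3, i.e. ~55.5 mol/L (assumed)
beta_dmu = 1.0e-3                  # dimensionless over-saturation
dmu = beta_dmu * kB * T            # J

# Characteristic capillary evaporation length in the cos(theta) -> -1 limit
L_c = 2.0 * gamma_lg / (dmu * rho_l)
print(f"L_c = {L_c * 1e6:.2f} micrometres")   # of order 1 micrometre

The result, of order a micrometre, is consistent with the L_c ∼ 1.5 μm quoted above and makes plain why the large γ_lg of water at room temperature is the controlling factor.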
The paper is arranged as follows: In Sec. <ref> we define the model solvent and the DFT used to describe it. Results for the fluid at a single planar substrate (wall) and between two identical walls are discussed in Sec. <ref>. Then, in Sec. <ref>, we build a model for the two rectangular blocks and analyse the density profiles and local compressibility around the pair of blocks, comparing with results for the planar substrates. We examine the effect of changing the distance between the two blocks; this enables us to determine effective solvent mediated interactions between the blocks. These interactions differ enormously between an identical pair of solvophobic and a pair of solvophilic blocks. We also consider the case of i) a solvophobic and a solvophilic block and ii) blocks made from up to three patches that can be either solvophobic or solvophilic. The density and local compressibility profiles exhibit a rich structure in these cases and the resulting effective interactions exhibit considerable variety. We conclude in Sec. <ref> with a discussion of our results.

§ MODEL SOLVENT

DFT<cit.> introduces the thermodynamic grand potential functional Ω[ρ] as a functional of the fluid one-body density profile, ρ(r⃗). The profile which minimises Ω[ρ] is the equilibrium profile and for this profile the functional is equal to the grand potential for the system. For a fluid of particles interacting via a hard-sphere pair potential plus an additional attractive pair potential v(r), the grand potential functional can be approximated as follows:<cit.>

Ω[ρ] = F_id[ρ] + F_ex^HS[ρ] + (1/2)∬ ρ(r⃗) ρ(r⃗') v(|r⃗ - r⃗'|) dr⃗ dr⃗' + ∫ ρ(r⃗)(ϕ(r⃗) - μ) dr⃗,

where F_id[ρ] = k_B T ∫ ρ(r⃗)( ln[Λ^3 ρ(r⃗)] - 1 ) dr⃗ is the ideal-gas contribution to the free energy, with Boltzmann's constant k_B, temperature T and thermal de Broglie wavelength Λ. F_ex^HS[ρ] = ∫ Φ({n_α}) dr⃗ is the hard-sphere contribution to the excess free energy, which we treat using the White-Bear version of FMT,<cit.> i.e. the free energy density Φ is a function of the weighted densities {n_α}. ϕ(r⃗) is the external potential and μ is the chemical potential. The attractive interaction between the particles is assumed to be given by a simple interaction potential, incorporating London dispersion forces,

v(r) = -4ε(σ/r)^6 for r ≥ σ, and v(r) = 0 for r < σ,

where σ is the hard-sphere diameter and ε > 0 is the attraction strength. In Fig. <ref> we display the bulk fluid phase diagram, showing the gas-liquid coexistence curve (binodal) and spinodal calculated from Eq. (<ref>). Bulk gas-liquid phase separation occurs when T < T_c, where the critical temperature k_B T_c = 1.509ε and the critical density ρ_cσ^3 = 0.249. The results presented in the remainder of the paper are calculated along the isotherm with T=0.8 T_c. We approach bulk gas-liquid coexistence from the liquid side, varying the chemical potential to determine the bulk liquid density. At coexistence, the chemical potential μ = μ_coex takes the same value for both liquid and gas phases. We define Δμ = μ - μ_coex, which gives a measure of how far a given bulk state is from coexistence. In Fig. <ref> we display the bulk liquid density as a function of βΔμ, for T = 0.8 T_c. In addition to calculating density profiles and thermodynamic properties of the system, we also calculate the local (position dependent) compressibility in Eq. (<ref>). In order to calculate this quantity, we use the finite difference approximation:

χ(r⃗) = [ρ(r⃗; μ+δμ) - ρ(r⃗; μ-δμ)]/(2δμ),

with βδμ=10^-4. The bulk value of the compressibility χ_b≡(∂ρ_b/∂μ)_T, as a function of the chemical potential, is also shown in Fig. <ref>, for T=0.8 T_c. We see that as the chemical potential is increased away from the value at coexistence, the bulk density increases (solid line) and χ_b decreases (dashed line).
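Numerically, evaluating this finite difference simply means converging the DFT minimisation twice, at μ±δμ, and differencing the resulting profiles. The following is a minimal sketch; solve_dft is a hypothetical stand-in for whatever routine returns the converged equilibrium profile ρ(r⃗; μ) on a grid.

import numpy as np

def local_compressibility(solve_dft, mu, beta, beta_dmu=1.0e-4):
    """Central difference chi(r) = [rho(r; mu+dmu) - rho(r; mu-dmu)]/(2 dmu).

    solve_dft(mu) -> ndarray holding the converged density profile (assumed)."""
    dmu = beta_dmu / beta
    rho_plus = solve_dft(mu + dmu)
    rho_minus = solve_dft(mu - dmu)
    return (rho_plus - rho_minus) / (2.0 * dmu)

Because βδμ=10^-4 is so small, both profiles must be converged to a tolerance well below δμ·χ; otherwise iteration noise rather than the physical difference dominates the result.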
§ LIQUID AT PLANAR WALLS

Before presenting results for the liquid solvent around various different rectangular blocks, we describe its behaviour in the presence of a single planar wall and confined between two parallel planar walls. This is a prerequisite for understanding the behaviour around the blocks.

§.§ Single hard wall with an attractive tail

Initially, we treat the wall as being made of a different species of particle having a uniform density distribution and interacting with the fluid via a pair potential of the same form as the potential between the fluid particles, i.e. a hard-sphere potential together with the attractive pair potential

v^h_wf(r) = -4ε^h_wf(σ/r)^6 for r ≥ σ, and v^h_wf(r) = 0 for r < σ.

This is the same as the potential in Eq. (<ref>), but with ε replaced by the wall-fluid attraction strength parameter ε^h_wf>0. Thus, the external one-body potential due to a substrate made of particles having uniform density ρ_w, occupying the half space z<0 (i.e. the wall surface is located at z = 0), is

ϕ(r⃗) ≡ ϕ(z) = ρ_w ∫_{z'<0} dr⃗' v^h_wf(|r⃗ - r⃗'|),

for z≥σ/2, and ϕ(z)=∞ for z<σ/2. From this we obtain

ϕ(z) = -(2π/3) ε^h_wf ρ_w σ^3 (σ/z)^3 for z ≥ σ/2, and ϕ(z) = ∞ for z < σ/2,

where z is the perpendicular distance from the surface of the wall. Henceforth, for simplicity, we set ρ_wσ^3=1. In Fig. <ref> we display the fluid density profiles and the local compressibility for the hard-sphere fluid (ε = 0, equivalent to T→∞) against a planar hard wall (ε^h_wf = 0). This is useful for comparing with the later results, in order to assess the influence of the attractive interactions. We see that for low values of the bulk fluid density ρ_b, the density profile has little structure, as does χ(z). Increasing the bulk fluid density, we observe oscillations developing near to the wall arising from packing. The local compressibility χ(z) also develops significant oscillations near the wall. For higher values of ρ_b we see that the contact value χ(σ/2^+) is significantly larger than the bulk value. We also note that it is possible for the local compressibility χ(z) to be negative, while of course the bulk value χ_b must be positive. This is because for larger values of ρ_b, the local density values at the minima of the oscillations are much smaller than in bulk, reflecting the fact that layering of the fluid at the wall becomes more pronounced. We turn now to the case ε>0 and consider the temperature T = 0.8 T_c, where bulk gas-liquid phase separation occurs. We set the wall attraction to be βε^h_wf = 0.13, which is rather weak, corresponding to a solvophobic substrate with contact angle θ ≈ 144^∘ at this temperature. The contact angle θ is calculated using Young's formula

γ_wg = γ_wl + γ_gl cosθ,

where γ_wg, γ_wl and γ_gl are the wall-gas, wall-liquid and gas-liquid surface tensions, respectively. These interfacial tensions are each calculated separately via DFT in the usual manner (see e.g. Ref. stewart2012phase and references therein).
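With the three tensions computed independently, Young's formula yields θ in one line; the only care needed is when (γ_wg - γ_wl)/γ_gl falls outside [-1, 1], which signals complete wetting or drying rather than a numerical error. A minimal sketch, with purely illustrative tension values (chosen here to give roughly the quoted angle, not taken from our calculations):

import numpy as np

def contact_angle(gamma_wg, gamma_wl, gamma_gl):
    """Young's equation: gamma_wg = gamma_wl + gamma_gl*cos(theta), in degrees."""
    c = (gamma_wg - gamma_wl) / gamma_gl
    if c >= 1.0:
        return 0.0          # complete wetting
    if c <= -1.0:
        return 180.0        # complete drying
    return np.degrees(np.arccos(c))

# Illustrative reduced-unit tensions only, giving theta ~ 144 degrees:
print(contact_angle(gamma_wg=0.35, gamma_wl=0.75, gamma_gl=0.494))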
Fig. <ref> shows liquid density profiles and the local compressibility (both divided by their respective bulk values) on the isotherm T = 0.8 T_c. At this temperature the bulk density of the liquid at coexistence with the gas is ρ_bσ^3 ≈ 0.587. For larger values of βΔμ, away from coexistence, the density profiles exhibit oscillations at the wall, similar to the density profile for pure hard-spheres against the hard wall (Fig. <ref>). As coexistence is approached, the oscillations in the density profiles are slightly eroded, although for this value of βε^h_wf = 0.13, the changes in the density profile are not particularly striking. This can also be seen from the inset in Fig. <ref>, which displays the adsorption

Γ = ∫_0^∞ dz (ρ(z)-ρ_b).

Note that Γ is negative and remains finite as βΔμ→0. However, as can be seen from the lower panel of Fig. <ref>, where we display the corresponding local compressibility profiles χ(z), there is a significant increase in the local compressibility in the layers adjacent to the wall as coexistence is approached, βΔμ→0^+. We note also that the compressibility has oscillations whose maxima match those in the density profiles.

§.§ Single soft Lennard-Jones wall

The wall, Eq. (<ref>), considered in the previous subsection leads to the fluid density profile and local compressibility having a very sharp (and discontinuous) first peak at z=σ/2, particular to this wall potential. The contact density ρ(σ/2) is related to the bulk pressure via a sum rule [see e.g. Eq. (68a) in Ref. henderson1992], which is satisfied by the present DFT. For general wall potentials there is no explicit formula for ρ(σ/2). However, it is clear from the relation emerging from the sum rule that this quantity must be very large for a potential such as (<ref>).<cit.> Real molecular fluids interact with substrates via continuous (softer) potentials. Thus, we now consider a planar wall composed of particles that interact with the fluid particles via the LJ pair potential

v_wf(r) = 4ε_wf[ (σ/r)^12 - (σ/r)^6 ],

where ε_wf>0 is the coefficient determining the strength of the wall attraction. Thus, using Eq. (<ref>) with v^h_wf replaced by v_wf, for z>0 and ϕ(z)=∞ for z ≤ 0, we have

ϕ(z) = 4πε_wf ρ_w σ^3 ( σ^9/(45 z^9) - σ^3/(6 z^3) ) for z > 0, and ϕ(z) = ∞ for z ≤ 0,

where z is the perpendicular distance from the wall. The contact angle calculated using Eq. (<ref>) for the liquid against this soft wall for T=0.8 T_c is shown in Fig. <ref>. When we set the wall attraction to be βε_wf = 0.3, the contact angle is θ≈ 144^∘, which is the same contact angle that the fluid has against the hard wall with an attractive tail potential (<ref>), with βε^h_wf = 0.13 – as treated in Fig. <ref>. Note that in Fig. <ref> the drying transition, where θ→ 180^∘, occurs at βε_wf=0.0344 and is continuous (critical). The numerical result from DFT for this value agrees precisely with the analytical prediction from the binding potential treatment for the same model potentials treated in the sharp-kink approximation.<cit.> The latter predicts a continuous drying transition when βε_wf(ρ_wσ^3) = βε(ρ_gσ^3), where ρ_g is the density of the coexisting gas at the given temperature. What is striking about this result is that it also applies for the wall potential in Eq. (<ref>), i.e. critical drying occurs at the same value βε^h_wf=βε_wf=0.0344. This is a consequence of both potentials having the same asymptotic decay as z→∞. However, for the potential in Eq. (<ref>) wetting, θ=0, occurs at a much smaller value of βε^h_wf. Thus, the overall behaviour of (1+cosθ) vs wall strength is sensitive to the precise form of the wall potential.
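Both wall potentials are straightforward to tabulate, which is useful when setting up the external potential on a grid and for verifying the common z^-3 tail responsible for the two walls sharing the same critical drying point. A minimal sketch in reduced units (σ=1, ρ_wσ^3=1, energies in units of k_BT, so the eps arguments are the βε values quoted above):

import numpy as np

def phi_hard_tail(z, eps_wf_h, rho_w=1.0, sigma=1.0):
    """Hard wall with attractive tail: infinite for z < sigma/2."""
    z = np.asarray(z, dtype=float)
    phi = -(2.0 * np.pi / 3.0) * eps_wf_h * rho_w * sigma**3 * (sigma / z) ** 3
    return np.where(z < 0.5 * sigma, np.inf, phi)

def phi_soft_93(z, eps_wf, rho_w=1.0, sigma=1.0):
    """Soft 9-3 wall from integrating the LJ wall-fluid pair potential."""
    z = np.asarray(z, dtype=float)
    phi = 4.0 * np.pi * eps_wf * rho_w * sigma**3 * (
        sigma**9 / (45.0 * z**9) - sigma**3 / (6.0 * z**3))
    return np.where(z <= 0.0, np.inf, phi)

# Both decay as -(2*pi/3)*eps*rho_w*sigma^6/z^3 for z -> infinity, which is
# why critical drying occurs at the same value of beta*eps for the two walls.
z = np.linspace(1.0, 10.0, 5)
print(phi_hard_tail(z, 0.13), phi_soft_93(z, 0.3))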
Fig. <ref> shows liquid density profiles and the local compressibility (both divided by their respective bulk values) on the isotherm T = 0.8 T_c. For large values of βΔμ the density profiles exhibit oscillations at the wall similar to the density profiles for the walls in Figs. <ref> and <ref>. However, as coexistence is approached the degree to which the oscillations in the density profiles near the wall are eroded is greater than for the case of the wall (<ref>) and a region of depleted density appears at the wall. Note that for this value of ε_wf, the low density film close to the wall remains finite in thickness right up to coexistence, βΔμ→0, since the wall-liquid interface is only partially dry: θ<180^∘. This can also be seen from the inset which shows the adsorption (<ref>). Although Γ is somewhat larger in magnitude than for the wall potential (<ref>), displayed in the inset to Fig. <ref>, it remains finite at coexistence. In the lower panel of Fig. <ref> we display the corresponding local compressibility profiles χ(z) in the vicinity of this solvophobic surface. We observe that in the first few adsorbed layers the local compressibility increases significantly, i.e. the range over which χ(z)/χ_b is significantly greater than unity increases as βΔμ→0. Moreover the maximum near z/σ=2, corresponding to the second particle layer, grows rapidly as βΔμ decreases.

§.§ Two planar walls

We now consider briefly a pair of planar walls, where the distance between the walls is L. The external potential is

ϕ_2w(z) = ϕ(z) + ϕ(L-z),

where ϕ(z) is given by the soft wall Eq. (<ref>). Capillary evaporation from this planar slit can occur as βΔμ→0, whereby the liquid between the two solvophobic planar walls evaporates as coexistence is approached. The value of L at which this occurs can be estimated from the Kelvin equation:<cit.>

L^* ≈ -2γ_lg cosθ / [Δμ(ρ_l-ρ_g)],

where L^* ≡ L-2σ is defined as roughly the distance between the maxima of the density profile corresponding to the first adsorbed layer at each wall; L^* is the effective distance between the walls. γ_lg is the gas-liquid interfacial tension, θ is the single planar wall contact angle, and ρ_g and ρ_l are the coexisting gas and liquid densities, respectively. Eq. (<ref>) is appropriate to a partial drying situation.<cit.> Fig. <ref> shows the capillary evaporation phase transition line, comparing the prediction from the Kelvin equation (<ref>) with that calculated from DFT. This is the line in the (Δμ,L) plane where the gas-filled slit and the liquid-filled slit are at thermodynamic coexistence, i.e. these states have the same grand potential, temperature, and chemical potential. The inset in Fig. <ref> shows the density profiles of coexisting gas and liquid states for L=6σ. As we expect, the Kelvin equation is accurate for large L, but is less reliable for small L. Nevertheless for values down to L≈4σ and βΔμ=0.53, where the critical point occurs in DFT, the Kelvin equation prediction remains fairly good. This may come as a surprise to some readers, given that the equation is based on macroscopic thermodynamics. Note that Eq. (<ref>) does not account for a capillary critical point.<cit.> We have also investigated the solvent mediated potential between two planar walls, i.e. the excess grand potential arising from confinement. The derivative of this quantity with respect to L jumps at capillary evaporation. We return to this later.
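It is instructive to evaluate the Kelvin equation for parameter values representative of this isotherm. In the sketch below the reduced surface tension βγ_lgσ^2 = 0.5 and the coexisting gas density are illustrative assumptions (of the right order for a LJ-like liquid at T = 0.8 T_c) rather than the computed values:

import numpy as np

def kelvin_L_star(beta_gamma_lg, cos_theta, beta_dmu, rho_l, rho_g):
    """Kelvin estimate L*/sigma = -2*gamma_lg*cos(theta)/[dmu*(rho_l-rho_g)],
    with lengths in units of sigma and energies in units of kB*T."""
    return -2.0 * beta_gamma_lg * cos_theta / (beta_dmu * (rho_l - rho_g))

beta_gamma, cos_t = 0.5, np.cos(np.radians(144.0))   # assumed inputs
rho_l, rho_g = 0.587, 0.03                           # rho_g assumed
for beta_dmu in (0.5, 0.2, 0.05):
    print(beta_dmu, kelvin_L_star(beta_gamma, cos_t, beta_dmu, rho_l, rho_g))

With these inputs L* ≈ 3σ at βΔμ = 0.5, in reasonable accord with the location of the DFT capillary critical point quoted above, and L* grows like 1/βΔμ on approach to coexistence.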
§ TWO RECTANGULAR BLOCKS

In this section we describe the properties of the liquid around two rectangular cross-section beams of length a – the two “blocks” illustrated in Fig. <ref>. We assume that the blocks are long, i.e. we take the limit a→∞. The distance between the closest faces of the blocks is x_G and we set the size of the cross-section of the two blocks to be b×c, where b=8σ and c=3σ. We locate the origin of our Cartesian coordinate system midway between the two blocks. The external potential due to the two blocks is defined in a manner analogous to that used above for the planar wall potential [cf. Eq. (<ref>)]; i.e. the potential due to the two blocks is

ϕ(r⃗) = ρ_w ∫_D dr⃗' v_wf(|r⃗ - r⃗'|),

where D is the region of space occupied by the blocks. The parameter ε_wf characterises the strength of the attraction between the blocks and the fluid. When ε_wf is small, the blocks are solvophobic, but for larger values of ε_wf they are solvophilic. Later we consider blocks having some sections that are solvophobic and others that are solvophilic: these are the so-called “patchy” blocks. Note that the region D is where the fluid is completely excluded, with ϕ(r⃗)=∞, and is made of two volumes, each with cross-sectional area b×c=8σ×3σ. However, the effective exclusion cross-sectional area of each block is ≈ b^*×c^* = 10σ×5σ, which includes an exclusion zone of width σ around each of the blocks.
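In the limit a→∞ the integral over D reduces to a two-dimensional quadrature: the integration of v_wf along the block axis can be performed analytically, using ∫dz (s²+z²)^{-n} = s^{1-2n} √π Γ(n-1/2)/Γ(n), leaving an in-plane kernel with s^{-11} and s^{-5} terms. The following minimal sketch evaluates ϕ at a single point in this way; it is an illustration of the dimensional reduction, not necessarily the scheme used for the actual calculations.

import numpy as np

def kernel_2d(s, eps_wf, sigma=1.0):
    """LJ wall-fluid pair potential integrated along the infinite block axis:
    int dz 4*eps*[(sigma/r)^12 - (sigma/r)^6], with r^2 = s^2 + z^2."""
    return 4.0 * eps_wf * (63.0 * np.pi / 256.0 * sigma**12 / s**11
                           - 3.0 * np.pi / 8.0 * sigma**6 / s**5)

def phi_block(x, y, corners, eps_wf, rho_w=1.0, n=200):
    """phi(x, y) = rho_w * integral of the 2d kernel over the cross-section."""
    (x0, x1), (y0, y1) = corners
    dx, dy = (x1 - x0) / n, (y1 - y0) / n
    xs = x0 + (np.arange(n) + 0.5) * dx      # midpoint rule in x
    ys = y0 + (np.arange(n) + 0.5) * dy      # midpoint rule in y
    X, Y = np.meshgrid(xs, ys)
    s = np.hypot(X - x, Y - y)
    return rho_w * np.sum(kernel_2d(s, eps_wf)) * dx * dy

# Example: potential one sigma outside the face of a 3-sigma x 8-sigma block
print(phi_block(-1.0, 0.0, corners=((0.0, 3.0), (-4.0, 4.0)), eps_wf=0.3))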
§.§ Two solvophobic blocks

§.§.§ Blocks at fixed separation x_G

The results we present first are for a pair of blocks with soft solvophobic surfaces with βε_wf=0.3, at the temperature T = 0.8 T_c. Recall that for the single soft planar wall this value of ε_wf corresponds to a contact angle θ≈ 144^∘ and that for the pair of planar walls the capillary evaporation critical point is at βΔμ=βΔμ_cc=0.53 – see Fig. <ref>. In Figs. <ref> and <ref> we display density profiles and the local compressibility χ(r⃗), for various βΔμ and fixed x_G=5σ. The density profiles in Fig. <ref> show that as coexistence is approached, i.e. as βΔμ→ 0, the density in the space between the pair of blocks becomes very small, i.e. gas-like. This is somewhat analogous to the capillary evaporation observed between two infinite planar walls – see Fig. <ref>. For larger values of βΔμ, away from the value where bulk gas-liquid coexistence occurs, we see oscillations in the density profile arising from the packing of the liquid particles around the blocks. We also note that the density is higher near the corners of the blocks. The local compressibility χ(r⃗) provides a measure of the strength of the local fluctuations within the fluid, and so large values of this quantity reveal regions in space where the local density fluctuations are greatest. In Fig. <ref>, we see that for βΔμ = 0.4, well away from bulk coexistence, the local compressibility is largest around the surface of the two blocks, decreasing in an oscillatory manner as the distance from the blocks increases. When the chemical potential deviation is smaller, βΔμ = 0.22, the local compressibility in the vicinity of the outside of the blocks is similar to the case for the larger value of βΔμ = 0.4. However, in the region between the two blocks, we see that the local compressibility is significantly larger, indicating strong fluctuations in this region. For βΔμ = 0.22, we see from Fig. <ref> that the average density in the gap between the blocks is intermediate between the bulk gas and liquid coexisting densities and so we expect that typical microstates of the system include both gas-like and liquid-like average densities in the gap. The fluctuations of the system between these two typical states are what lead to the high values of the local compressibility. Approaching even closer to the bulk coexistence point leads to the gas being stabilised in the gap between the blocks – see the density profiles for βΔμ = 0.01 in Fig. <ref>. For this value of βΔμ we see from Fig. <ref> that the region where the local compressibility is largest is not in the gap between the blocks, but is instead at the entrance to this region, where there is an `interface' between the bulk liquid and the gas-like intrusion between the blocks. It is the fluctuations in this interface that lead to the maxima in the local compressibility χ(x,y). We now present results for x_G = 7σ, i.e. with the gap between the blocks being slightly larger. In order to display in more detail the properties of the density profiles and local compressibility around the pair of blocks, we plot these along the three different paths P1, P2 and P3, illustrated in Fig. <ref>. The density and compressibility profiles are, of course, symmetrical around the mid-line through the gap between the blocks, so we display profiles around the right hand block only. From Fig. <ref> we see that paths P1 and P2 are along the lines of symmetry and path P3 is along the horizontal side of the block.
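Extracting such cuts from the computed two-dimensional fields is simply a matter of indexing the (x, y) grid along the symmetry lines. A minimal sketch follows; the arrays and grid here are placeholders for the actual DFT output.

import numpy as np

def path_profiles(field, x, y, b=8.0):
    """Sample a 2d field (indexed [iy, ix]) along the three paths:
    P1: y = 0, P2: x = 0, P3: y = b/2 (the horizontal side of the block)."""
    iy0 = np.argmin(np.abs(y))
    ix0 = np.argmin(np.abs(x))
    iy3 = np.argmin(np.abs(y - b / 2.0))
    return field[iy0, :], field[:, ix0], field[iy3, :]

# Placeholder field standing in for a computed chi(x, y):
x = np.linspace(-15.0, 15.0, 301)
y = np.linspace(-15.0, 15.0, 301)
chi = np.zeros((y.size, x.size))
P1, P2, P3 = path_profiles(chi, x, y)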
In Fig. <ref> we display results along paths P1 and P3. On both paths, both the density and the local compressibility are, of course, zero within the block. Focussing first along the portion of path P1 not in the gap between the blocks, we see that the profiles for varying βΔμ are very similar to those displayed in Fig. <ref> for the planar LJ wall: as βΔμ is decreased, the density in the vicinity of the wall decreases and the maxima in χ(r⃗) near the wall increase. Comparing with the density profiles along the parallel path P3, along the horizontal side of the blocks, we see that away from the gap between the blocks the local density is slightly higher than along path P1 (this is the influence of the corner), but both the density and compressibility follow the same trend as along path P1. Moving on to examine the behaviour in the gap between the blocks, in Fig. <ref> we see that on decreasing βΔμ, along path P1 the density decreases and at βΔμ≲ 0.04 there is a discontinuous change in the density profile. The density profiles for βΔμ = 0.03 and 0.01 are almost identical and correspond to a dilute `gas' state. The strong fluctuations connected to the onset of this transition result in very large values of χ(x,0) for βΔμ = 0.05 and 0.04. χ(x,0) exhibits a discontinuous change in the gap between the blocks at the value of βΔμ where the density profile jumps. Moreover, along the portion of path P3 along the end of the gap between the blocks, we also observe a large jump in the density profile as coexistence is approached. Along path P3 the local compressibility also jumps. Unlike on path P1, where in the gap χ(x,0) takes small gas-like values for βΔμ=0.03 and 0.01, on path P3 χ(0,± b/2)/χ_b is very large for these values of βΔμ, reflecting the occurrence of gas-liquid interfacial fluctuations. All of this is reminiscent of the capillary evaporation observed for two planar solvophobic walls. However, the transition occurs at a smaller value of βΔμ due to the finite dimensions of the blocks. Specifically, the transition occurs at βΔμ≲ 0.04, whereas for the planar slit with L=7σ evaporation occurs at βΔμ=0.21; see Fig. <ref>. In Fig. <ref> we display density profiles and the local compressibility along path P2 (see Fig. <ref>), which starts from the origin (the mid point between the blocks) and goes along the positive y-axis. For small βΔμ, i.e. βΔμ = 0.03 and 0.01, we see that the density is gas-like in the gap between the blocks, increasing to the bulk liquid value outside the gap, y ≳ 8σ. The density profile changes discontinuously at βΔμ≲ 0.04 and for larger values the density is liquid-like throughout path P2. For smaller values of the chemical potential, βΔμ≲0.04, there is a local maximum in the local compressibility along this path, and the location of the maximum occurs roughly where the density profile ρ(0,y)/ρ_b = 0.5. Thus, as the chemical potential is varied, the local compressibility maximum splits and shifts along the y-axis in the gap between the blocks. Recall that along the y-axis the system is symmetric around the origin; therefore for small βΔμ there is a peak in χ(r⃗) at each of the entrances to the gap, i.e. for y ≈± 5σ [cf. Fig. <ref>].

§.§.§ Varying the separation between the blocks

In Fig. <ref> we show how the mid-point density ρ(0,0) ≡ ρ_0 varies as the distance between the two blocks x_G is changed. The figure also shows how the local compressibility at the origin χ(0,0) ≡ χ_0 varies with x_G. For βΔμ=0.1, 0.2 and 0.3 there is a discontinuous change in the density. The magnitude of the `jump' gets larger as βΔμ approaches zero. Note that if the density, or more precisely the adsorption, jumps from one finite value to another at a particular value of x_G, then so must the local compressibility. This is a signature of the first order transition which occurs in the present mean-field DFT treatment. For βΔμ≳ 0.4 the density varies smoothly with x_G. In the lower panel of Fig. <ref> we observe a peak in χ_0 when the mid-point density crosses ρ_0/ρ_b = 0.5. The height of this peak appears to be maximal at βΔμ≈ 0.4, the value at which the transition in ρ_0 appears to change from discontinuous to continuous. In other words, capillary evaporation still manifests itself as a first order transition, with its accompanying critical point, in our mean-field treatment of `evaporation' between two blocks of finite cross-sectional area. Bearing in mind the effectively one-dimensional nature (b and c finite but a →∞) of the capillary-evaporation-like transition we observe in the fluid between the blocks, we expect the divergence in χ_0 to be rounded, in reality, by finite size effects. Likewise, we expect the jump in ρ_0 to be rounded in reality. In Fig. <ref> we display a plot of the excess grand potential per unit length, W(x_G)≡(Ω(x_G)-Ω_∞)/a, as a function of x_G. Ω_∞≡Ω(x_G→∞) is the value of the grand potential when the two blocks are far apart. W(x_G) is the solvent mediated interaction potential per unit length between the two blocks. Since W(x_G) becomes increasingly negative as x_G decreases, the solvent mediated interaction between the pair of solvophobic blocks is attractive. For smaller βΔμ, i.e. for states nearer to coexistence, the solvent mediated potential W(x_G) is longer ranged; the gas intrusion between the blocks lowers the free energy out to larger separations. Close inspection of Fig. <ref> shows that there are actually two solution branches to the grand potential. For βΔμ≳ 0.4 there is only a single smooth branch (not shown). When there are two branches, the one at large x_G corresponds to the case when the density between the blocks is liquid-like and the other, at smaller x_G, to when there is a gas-like intrusion. Where the branches meet corresponds to the value of x_G where the evaporation transition occurs for a given βΔμ. The solvent mediated force between the blocks jumps at the transition.
Note that the potential W(x_G) in Fig. <ref> for finite size blocks (i.e. finite b) is very different from the corresponding potential between two infinite planar walls (i.e. b→∞). For example, from Fig. <ref> we see that when βΔμ=0.05 the two branches in W(x_G) meet at x_G≈8σ. In contrast, for the infinite walls at the same βΔμ, the two branches meet at x_G≈21σ. In the same manner used to derive the Kelvin equation (<ref>), we can use macroscopic thermodynamics to obtain a simple estimate for W(x_G). The grand potential of the system with no blocks present is Ω_0 = -p_lV, where p_l is the pressure of the bulk liquid and V is the volume of the system. The grand potential of the system with one block present in the liquid is

Ω_1 = -p_l(V-ab^*c^*) + 2(ac^*+ab^*)γ_wl + 4aE_l,

where a, b^*, c^* are the effective dimensions of the block, as illustrated in Fig. <ref>. Note that b^*c^*>bc is the effective cross-sectional area of the block, which includes the fluid exclusion region around the blocks, as discussed below Eq. (<ref>). Thus (V-ab^*c^*) is the volume occupied by the liquid. Recall that we assume the block is long, i.e. a →∞. 2(ac^*+ab^*) is the surface area of the block in contact with the liquid and γ_wl is the planar wall-liquid interfacial tension. E_l is a free energy per unit length, so that the final term in Eq. (<ref>) is the line-tension-like contribution to the grand potential arising from the four edges of the block (corners on the cross-section in Fig. <ref>) in contact with the liquid. Similarly, we can estimate the grand potential when there are two blocks present. If the pair of blocks are close enough together (see e.g. the density profile for βΔμ = 0.01 in Fig. <ref>) there is a portion of `gas' phase between the blocks, so the grand potential is

Ω_2 = -p_l(V-2ab^*c^*-ab^*x_G^*) - p_g ab^*x_G^* + (4ac^* + 2ab^*)γ_wl + 2ab^*γ_wg + 2ax_G^*γ_gl + 4aE_l + 4aE_gl,

where p_g is the pressure of the gas at the same chemical potential as the (bulk) liquid. x_G^* is the effective thickness of the `gas' region between the blocks and, as when implementing the Kelvin equation (<ref>), we set x_G^* = x_G - 2σ. γ_wg is the planar wall-gas interfacial tension, γ_gl is the planar gas-liquid interfacial tension and E_gl is the free energy per unit length contribution, i.e. the final term in Eq. (<ref>) is due to the inner edges of the blocks connecting to a gas-liquid interface. Hence, from Eqs. (<ref>), (<ref>) and (<ref>) the solvent mediated potential, W(x_G^*) = (Ω_2 - 2Ω_1 + Ω_0)/a, is given by

W(x_G^*) ≈ E + 2b^*γ_lg cosθ + [ 2γ_gl + b^*(ρ_l - ρ_g)Δμ ] x_G^*,

where E = 4(E_gl - E_l). We have used the standard Taylor expansion of the pressures around the value at gas-liquid bulk coexistence, p_coex, to give p_l - p_g ≈ (ρ_l - ρ_g)Δμ, where ρ_l and ρ_g are the coexisting bulk liquid and gas densities, respectively. Eq. (<ref>) predicts that the solvent mediated potential is linear in the distance between the blocks x_G^*, and thus the force -∂W/∂x_G^* = -2γ_gl - b^*(ρ_l - ρ_g)Δμ is constant when there is a gas-like state between the blocks. The result from Eq. (<ref>), with E=0, is displayed as the thin dotted lines in Fig. <ref> for the two extreme cases, βΔμ = 0.05 and 0.2. One can see that the gradient of W(x_G) predicted by Eq. (<ref>) agrees very well with the DFT results. However, each line is shifted vertically relative to the DFT curve. This is probably the consequence of having neglected the unknown contribution from the edges, E. The difference between the result from Eq. (<ref>) and the DFT implies that |E| < 0.5 k_B T/σ. Note that the force -∂W/∂x_G^* does not depend on E, nor on cosθ. That the macroscopic thermodynamic result in Eq. (<ref>) agrees rather well with the microscopic DFT results might, once again, come as a surprise to some readers, bearing in mind the microscopic cross-sectional size of the blocks and that the distance between these is only a few solvent particle diameters. The validity of Eq. (<ref>) is partly due to the fact that the correlation length in the intruding gas state is rather short, but this kind of agreement between results of microscopic DFT and simple macroscopic thermodynamic estimates has been observed previously for related problems; see e.g. Refs. archer2003solvent, archer2005solvent, hopkins2009solvent, malijevsky2015bridging. Note that the condition W(x_G^*) = 0 in Eq. (<ref>) yields

x_G^* = -2γ_lg cosθ / [ (ρ_l-ρ_g)Δμ + 2γ_lg/b^* ]

for the separation at which capillary evaporation occurs for identical blocks, i.e. the `gas' is thermodynamically stable relative to the liquid for smaller separations. This is a particular case of the formula introduced by Lum and Luzar.<cit.> In the limit b^*→∞, the solvent mediated force per unit area is constant, equal to (ρ_l - ρ_g)Δμ, in the `gas'. The same result is valid for Δμ→0, in the condensed `liquid' in the case of capillary condensation.<cit.>
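The linear potential, the constant force and the finite-size shift of the evaporation separation are easy to explore numerically. A minimal sketch in reduced units, reusing the illustrative parameter values assumed for the Kelvin estimate above (again assumptions, not the computed tensions):

import numpy as np

def W_macro(xg_star, b_star, bgamma, cos_t, bdmu, drho, E=0.0):
    """Macroscopic estimate: beta*W = E + 2 b* bgamma cos(theta)
    + [2 bgamma + b* drho bdmu] x_G*, per unit length, lengths in sigma."""
    return E + 2.0 * b_star * bgamma * cos_t + (2.0 * bgamma + b_star * drho * bdmu) * xg_star

def xg_evap(b_star, bgamma, cos_t, bdmu, drho):
    """Separation where W = 0: the gas-like intrusion is stable for smaller gaps."""
    return -2.0 * bgamma * cos_t / (drho * bdmu + 2.0 * bgamma / b_star)

bgamma, cos_t, drho = 0.5, np.cos(np.radians(144.0)), 0.557   # assumed inputs
for b_star in (10.0, 100.0, np.inf):
    print(b_star, xg_evap(b_star, bgamma, cos_t, 0.05, drho))
# Constant solvent mediated force per unit length on the gas branch:
print(-(2.0 * bgamma + 10.0 * drho * 0.05))

With these inputs the blocks (b^* = 10σ) at βΔμ = 0.05 lose their gas-like intrusion at x_G^* ≈ 6σ, while b^*→∞ recovers the much larger Kelvin value, illustrating why evaporation between the finite blocks occurs at far smaller separations than in the planar slit.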
§.§ Two solvophilic blocks

So far we have discussed the properties of an identical pair of solvophobic blocks. Now we increase the parameter ε_wf so that the surface of the blocks attracts the liquid more strongly, i.e. the surfaces of the blocks become solvophilic. We set βε_wf = 1, for which the planar wall has the contact angle θ = 43.7^∘; see Fig. <ref>. The density profiles for blocks of the same dimensions (not displayed) are, for all values of βΔμ, qualitatively similar to the profile corresponding to βΔμ = 0.4 in Fig. <ref>, but with higher densities at the surface of the blocks and larger amplitude oscillations in the density profile around the blocks. The same is true for the compressibility. The key difference between a pair of solvophobic blocks and a pair of solvophilic blocks is that there is no capillary evaporation of the liquid in the gap between the solvophilic blocks as βΔμ→0. This has profound consequences for the solvent mediated potential. Fig. <ref> shows the solvent mediated potential W(x_G) between the solvophilic blocks. We see pronounced oscillations as the distance between the blocks is decreased. Also, since W(x_G) decreases (albeit in damped oscillatory fashion) as x_G is increased, the effective interaction potential between a pair of solvophilic blocks is repulsive. Note that W(x_G) is almost independent of βΔμ in this particular case. The results in Fig. <ref> are quite similar to those obtained for two planar walls with the same βε_wf (thin dotted black line). Note that for planar walls the asymptotic decay, L→∞, of the excess grand potential per unit area W(L) is known <cit.> for various choices of the fluid-fluid and wall-fluid potentials. For our present choice [Eqs. (<ref>) and (<ref>)], with βε_wf=1, theory predicts βW(L) ∼ 0.934 L^-2 as L →∞, i.e. the solvent mediated force per unit area -(∂W/∂L)_{T,μ} is repulsive and decays ∼ L^-3. We are not able to investigate the asymptotics numerically for blocks.
§.§ One solvophobic and one solvophilic block and patchy blocks

The two previous subsections discuss the solvent mediated interactions W(x_G) between pairs of blocks that are identical. We now present results for W(x_G) for the case when one of the blocks is solvophobic and the other is solvophilic. We also consider various pairs of blocks having a mixture of solvophobic and solvophilic patches. We split each block into a maximum of three segments. The DFT results for the solvent mediated potentials between the various blocks are shown in Fig. <ref>, with the inset giving a sketch of the arrangement of the patches: dotted regions are solvophobic and diagonally striped regions are solvophilic. In all cases in Fig. <ref>, we notice that there is a local minimum of W(x_G) occurring when x_G≈2σ. This is the distance at which the two exclusion zones around the blocks meet, so that for x_G less than this value the fluid density between the blocks is almost zero. In general, the range of the solvent mediated interaction decreases as βΔμ is increased. Note that having blocks with only one solvophobic segment causes the solvent mediated potential W(x_G) to become attractive. Nevertheless, W(x_G) retains the oscillatory behaviour of the purely solvophilic blocks observed in Fig. <ref>. Furthermore, the oscillations in the potential are enhanced when the solvophilic patches are together on the ends of the blocks – see Fig. <ref>(d). In Fig. <ref> we display a series of density profiles and local compressibility profiles corresponding to all the cases displayed in Fig. <ref>. We observe that whenever two solvophobic segments are opposite one another, a gas-like region forms between the blocks provided these are sufficiently close (as they are in Fig. <ref>), and this leads to large values of the local compressibility χ(r⃗) in these regions. It is particularly instructive to compare the results in Figs. <ref>(e) and <ref>(e), corresponding to two solvophobic patches facing each other at both ends of the blocks, with the corresponding ones for identical uniform solvophobic blocks, Figs. <ref>, <ref> and <ref>. For βΔμ=0.05, the solvent mediated potential in Fig. <ref>(e) has a form close to that in Fig. <ref>. The separation, x_G≈ 5σ, at which capillary evaporation occurs is smaller for the patchy case than for the uniform case, x_G≈ 8σ. However, in both cases one finds a linear solvent mediated potential at smaller separations with constant gradient; the magnitude of the force is similar in both cases. Such behaviour is consistent with the reduced area of (facing) solvophobic regions. Recall that for two identical blocks Eq. (<ref>) implies that the force does not depend on cosθ.

§.§ Blocks shifted vertically

The results presented in the previous subsections are for the case when the centres of the blocks are at y=0 and only the distance between the closest faces x_G is varied. Now we fix x_G=5σ and move one of the blocks vertically along the y-axis [cf. inset of Fig. <ref>]. The vertical distance from the x-axis is defined as y_S (in the previous subsections y_S=0). The solvent mediated potential W(y_S) for a pair of solvophobic blocks and a pair of patchy blocks (divided into two segments: half solvophobic and half solvophilic, aligned evenly) is shown in Fig. <ref>. In both cases we see that W(y_S) is attractive, with a minimum at y_S=0. This indicates that the preferred position (lower grand potential) is when the pair of blocks are aligned, with y_S=0. We also see that for a given chemical potential the range and depth of the potential are greater for a pair of fully solvophobic blocks [Fig. <ref>(a)] than for a pair of two-segment blocks aligned evenly [Fig. <ref>(b)].
This is as one would expect, since the amount of solvophobic area on each block is greater in the former case (a). For a pair of solvophobic blocks, we showed in Fig. <ref> that the solvent mediated potential W(x_G) varies approximately linearly with x_G, on the `gas branch' arising for smaller values of x_G. However, we see from Fig. <ref>(a) that for fixed x_G the solvent mediated potential is not a linear function of y_S. For the pair of two-segment blocks aligned evenly (Fig. <ref>(b)) we do not see any oscillations in the solvent mediated potential as y_S is varied – recall that there are oscillations as x_G is varied – see Fig. <ref>(b). Close inspection of Fig. <ref> shows that within the present mean-field DFT approach there are actually two solution branches to the grand potential for both types of blocks. The branch for large y_S corresponds to a liquid-like density between the blocks while the other branch, at smaller y_S, corresponds to the density between the blocks being gas-like. Consistent with our earlier discussion, the evaporation transition occurs at the value of y_S where the two branches meet and the solvent mediated force between the blocks jumps at this point. The value of y_S at which this transition occurs varies with βΔμ. Note that it is straightforward to derive a formula for W(y_S) analogous to that in Eq. (<ref>), making the same assumptions. However, the assumption that the gas-liquid interface meets the blocks at the corners is no longer necessarily true and the resulting formula gives poor agreement with the DFT.

§.§ Blocks at an angle

So far we have considered pairs of blocks with their faces aligned parallel to each other. We now consider a pair of identical solvophobic blocks with the second block rotated by an angle α with respect to the centre of the first, i.e. α is the angle between the orientation vectors of the two blocks. In Fig. <ref> we plot the density and compressibility profiles as the angle α is varied whilst keeping the distance between the centres of the blocks fixed, x_C = 8σ (note that x_C ≠ x_G). The temperature T=0.8 T_c and chemical potential βΔμ = 0.05 are also fixed. We present results for a range of angles; by symmetry we only need to consider the range 0^∘≤α≤90^∘. Fig. <ref> (top) shows that as α is increased for fixed x_C=8σ, the gas-like region between the blocks remains. The area of one of the interfaces between the gas-like region and the bulk liquid increases, while the other decreases. Additionally, we see that the volume of the gas-filled region between the blocks decreases as α is increased from zero, since the blocks become closer to each other. Note also that for the larger values of α, the gas-liquid interface does not connect to the corners of the blocks, which must be taken into account if generalising Eq. (<ref>) to derive an approximation for W as a function of α. From the corresponding compressibility profiles in Fig. <ref> (bottom) we see that χ(r⃗) is largest in the gas-liquid interfaces, as previously. Also, the peak value of the compressibility increases as α is increased from zero, attaining its maximum value when α≈45^∘. Increasing α further leads to a drop in the peak value of the compressibility. In Fig. <ref> we plot the solvent mediated potential for two solvophobic blocks as a function of α for fixed distance between the centres of the blocks, x_C = 8σ, corresponding to the profiles in Fig. <ref>. We see that the minimum of the solvent mediated potential occurs when α=90^∘ for fixed x_C = 8σ.
This is because, as the angle is varied, the blocks become closer to each other as α→ 90^∘ (see Fig. <ref>) and this leads to the excess grand free energy being lower. However, if we rotate the solvophobic blocks and also move the centres of the blocks such that the closest distance between the two blocks x_G is always constant, we find the minimum of the grand potential is when α = 0^∘ (not shown). In this case, it is because rotating to α = 90^∘ results in a smaller area of the block surfaces being opposite one another than when α = 0^∘. Generically, the attractive well in the solvent mediated potential between the blocks becomes deeper (i.e. stronger attraction) as βΔμ→ 0. In order to analyse further the solvent mediated potential between the solvophobic blocks, we fix the relative orientation between the blocks at α=45^∘ and vary the separation between the blocks x_G, which is the distance between the closest points on the pair of blocks. W(x_G) is shown in Fig. <ref> for fixed temperature T=0.8 T_c and wall attraction βε_wf = 0.3. In the inset we sketch the relative orientations of the two blocks. Thus, x_G is the distance from the left-most corner of the right hand block to the near face of the left hand block. From Fig. <ref>, we see that the solvent mediated potential between the pair of solvophobic blocks with fixed α=45^∘ is qualitatively similar to that for α=0^∘, see Fig. <ref>. For small βΔμ, i.e. for states nearer to coexistence, the solvent mediated potential W(x_G) is longer ranged (although not as long-ranged as when the faces are parallel, α=0^∘, shown in Fig. <ref>) and also has two solution branches to the grand potential. The branch for large x_G corresponds to the liquid-like density between the blocks and the other, at smaller x_G, is when the density between the blocks is gas-like. Once again the evaporation transition occurs at the value of x_G where the two branches cross and the solvent mediated force jumps at this value of x_G for the given βΔμ.

§ CONCLUDING REMARKS

Using classical DFT we have calculated the liquid density profile and the local compressibility around pairs of solvophobic, solvophilic and patchy blocks immersed in a simple LJ like solvent. We have also calculated an important thermodynamic quantity, namely the solvent mediated interaction potential between the blocks, W(x_G). When both blocks are solvophobic, the potential W(x_G) is an almost linear function at small separations x_G, is strongly attractive and is very sensitive to the value of βΔμ; see Fig. <ref>. In this regime, treating the system using macroscopic thermodynamics, i.e. using Eq. (<ref>), turns out to be a rather good approximation for W(x_G). Although this may seem surprising, given that the blocks we consider have the microscopic cross-sectional area ≈ 10σ× 5σ, it is in keeping with recent simulation studies<cit.> of water induced interactions between hydrophobes. In contrast, when both blocks are solvophilic, the potential W(x_G) is oscillatory but overall repulsive and exhibits only a weak dependence on βΔμ; see Fig. <ref>. When the blocks are patchy, the nature of the solvent mediated potential is complex. However, we find that if solvophobic patches are present, are sufficiently large and near to one another (facing each other on the opposing blocks), then their contribution to the effective potential dominates (see Fig. <ref>).
Then the potential W(x_G) is still strongly attractive and is nearly linear in x_G for small βΔμ, particularly if the solvophobic patches are on the ends of the blocks [see Fig. <ref>(e)]. From Fig. <ref> we see that for fixed x_G there is a minimum in W as a function of the vertical displacement y_S, when the solvophobic patches on the blocks are aligned.

For a pair of identical solvophobic blocks, the solvent mediated potential per unit length of the blocks is ≈ -5k_BT when the blocks are close to contact (see Fig. <ref>). Thus, if we assume that the blocks are actually finite in length, with length a=10σ (i.e. finite blocks of size 10σ× 10σ× 5σ), then when the blocks are close to contact we have W(x_G ≲σ)≈ -50k_BT, or about -120 kJ mol^-1 at ambient temperature. This is the same order of magnitude as the solvent mediated potentials between a hydrophobic (polymeric) solute of a similar size and a hydrophobic SAM surface measured in computer simulations employing a realistic model of water – see Fig. 6(c) in Ref. jamadagni2011hydrophobicity and also Ref. jamadagni2009surface. Moreover, it is important to note that when the SAM surface is strongly hydrophobic, the solvent mediated potentials in Ref. jamadagni2011hydrophobicity display a portion that is almost linear. Hydrophobic interactions also play a role in determining the structure of proteins: simulations suggest that capillary evaporation between hydrophobic patches can lead to strong forces between protein surfaces.<cit.> Given these observations, we expect that the results described here for a simple LJ like liquid incorporate the essential physics of a realistic model of a water solvent.

In the vicinity of a single solvophobic surface the solvent density is lower, when βΔμ is sufficiently small. However, the thickness of the depleted layer is only one or two particle diameters – see Fig. <ref>, corresponding to θ≈ 144^∘. This is consistent with the x-ray studies of water at a water-OTS (octadecyl-trichlorosilane) surface reported in Ref. mezger2006high and with simulation results for SPC/E water at non-polar substrates.<cit.> When two solvophobic surfaces become sufficiently close, a gas-like region forms between the blocks. The extent of this can be large, see e.g. Fig. <ref>, and the density profile passing from the gas inside to the liquid outside of the blocks closely resembles the free gas-liquid interfacial profile – see Fig. <ref>. Moreover, the local compressibility is large in the neighbourhood of this interface, indicating that it is a region with large density fluctuations. Given that this interface is pinned to the corners of the blocks – see Fig. <ref> – we do not expect significant “capillary wave” broadening of the profile beyond the present mean-field DFT, as one would normally expect at a macroscopic free interface.

As the separation between solvophobic blocks is increased, there is a jump in the solvent mediated force when the blocks reach a particular distance, x_G=x_J, where the state minimising the grand potential changes from one with a gas-like density between the blocks to one where this is liquid-like. Within DFT the potential W(x_G) has two branches and there is a discontinuity in the gradient at x_J – see e.g. Fig. <ref>. We do not display the metastable portions of the branches of W(x_G); these do not extend very far from the crossing point, indicating that the height of the nucleation barrier is small. This is due to the small size of the blocks and the small values of x_G.
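As a brief aside on the magnitudes quoted above: the conversion from -50 k_BT per pair of finite blocks to roughly -120 kJ mol^-1 is simple arithmetic. The short Python sketch below (ours, not part of the original analysis) reproduces it, taking "ambient temperature" to mean 298 K, which is our assumption rather than a value stated in the text.

    # Check of the quoted conversion: -50 k_B T per block pair -> kJ/mol.
    # Assumes "ambient temperature" = 298 K (our choice; not stated in the text).
    k_B = 1.380649e-23   # Boltzmann constant, J/K (CODATA)
    N_A = 6.02214076e23  # Avogadro constant, 1/mol
    T = 298.0            # K

    W_pair = -50.0 * k_B * T       # J per pair of blocks
    W_molar = W_pair * N_A / 1e3   # kJ/mol
    print(f"W = {W_molar:.0f} kJ/mol")  # prints W = -124 kJ/mol, i.e. about -120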
For hydrophobic surfaces with greater surface area and at a greater distance apart, the free energy barrier should be larger; for a recent discussion of nucleation pathways to capillary evaporation in water see Ref. remsing2015pathways.

We have also studied the local compressibility χ(r⃗) in the liquid between and surrounding pairs of blocks of differing nature. The local compressibility exhibits pronounced peaks; these indicate where the local density fluctuations are large. These fluctuations are maximal close to the incipient gas-liquid interface – see for example the central plot in Fig. <ref>, which is for βΔμ=0.22, and also Fig. <ref> (b) and (e) for βΔμ = 0.01. Fig. <ref> displays how for angled blocks χ(r⃗) depends on alignment and the confining geometry. When pronounced fluctuations, in conjunction with a depleted surface density, are observed in simulations of water at hydrophobic interfaces, this phenomenon is often ascribed to the disruption of the water hydrogen bonding network. Given that we observe similar behaviour for a simple LJ like liquid close to solvophobic substrates, we argue that this phenomenon is by no means specific to water. Rather it is due (i) to the weak bonding between the fluid and the (solvophobic) surface and (ii) to the system being close to bulk gas-liquid phase coexistence, i.e. a small value of βΔμ. Thus, since the LJ like fluid considered here is representative of a broad class of simple liquids, we expect strong attraction between solvophobic surfaces, enhanced density fluctuations near such surfaces and other features of hydrophobicity to manifest themselves whenever the solvent, whatever its type, is near to bulk gas-liquid phase coexistence. There are obvious advantages, both in simulation and theory, in performing detailed investigations for simple model liquids, especially when tackling subtle questions of surface phase transitions such as critical drying.<cit.>

§ ACKNOWLEDGEMENTS

We benefitted from useful discussions about this work with Chris Chalmers and Nigel Wilding. BC acknowledges the support of EPSRC and the work of RE was supported by a Leverhulme Emeritus Fellowship: EM-2016-031.

[chandler2005interfaces] D. Chandler, Nature 437, 640 (2005).
[berne2009dewetting] B. J. Berne, J. D. Weeks, and R. Zhou, Annu. Rev. Phys. Chem. 60, 85 (2009).
[ueda2013emerging] E. Ueda and P. A. Levkin, Adv. Mater. 25, 1234 (2013).
[aytug2015monolithic] T. Aytug, A. R. Lupini, G. E. Jellison, P. C. Joshi, I. H. Ivanov, T. Liu, P. Wang, R. Menon, R. M. Trejo, E. Lara-Curzio, S. R. Hunter, J. T. Simpson, M. P. Paranthaman, and D. K. Christen, J. Mater. Chem. C 3, 5440 (2015).
[ball2008water] P. Ball, Chem. Rev. 108, 74 (2008).
[kanduc2016water] M. Kanduč, A. Schlaich, E. Schneck, and R. R. Netz, Langmuir 32, 8767 (2016).
[huang2003dewetting] X. Huang, C. J. Margulis, and B. J. Berne, Proc. Natl. Acad. Sci. U.S.A. 100, 11953 (2003).
[jabes2016universal] B. S. Jabes, D. Bratko, and A. Luzar, J. Phys. Chem. Lett. 7, 3158 (2016).
[jamadagni2011hydrophobicity] S. N. Jamadagni, R. Godawat, and S. Garde, Annu. Rev. Chem. Biomol. Eng. 2, 147 (2011).
[evans2015local] R. Evans and M. C. Stewart, J. Phys.: Condens. Matter 27, 194111 (2015).
[evans2015quantifying] R. Evans and N. B. Wilding, Phys. Rev. Lett. 115, 016103 (2015).
[tarazona1982long] P. Tarazona and R. Evans, Mol. Phys. 47, 1033 (1982).
[evans1989wetting] R. Evans and A. O. Parry, J. Phys.: Condens. Matter 1, 7207 (1989).
[stewart2012phase] M. C. Stewart and R. Evans, Phys. Rev. E 86, 031601 (2012).
[hansen2013theory] J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids: With Applications to Soft Matter, 4th ed. (Academic Press, 2013).
[acharya2010mapping] H. Acharya, S. Vembanur, S. N. Jamadagni, and S. Garde, Faraday Disc. 146, 353 (2010).
[willard2014molecular] A. P. Willard and D. Chandler, J. Chem. Phys. 141, 18C519 (2014).
[evans2016critical] R. Evans, M. C. Stewart, and N. B. Wilding, Phys. Rev. Lett. 117, 176102 (2016).
[evans1979nature] R. Evans, Adv. Phys. 28, 143 (1979).
[evans1992density] R. Evans, in Fundamentals of Inhomogeneous Fluids, edited by D. Henderson (Marcel Dekker, New York, 1992), Chap. 3, pp. 85–176.
[roth2002fundamental] R. Roth, R. Evans, A. Lang, and G. Kahl, J. Phys.: Condens. Matter 14, 12063 (2002).
[roth2010fundamental] R. Roth, J. Phys.: Condens. Matter 22, 063102 (2010).
[tarazona1987phase] P. Tarazona, U. M. B. Marconi, and R. Evans, Mol. Phys. 60, 573 (1987).
[evans1990fluids] R. Evans, J. Phys.: Condens. Matter 2, 8989 (1990).
[gelb1999phase] L. D. Gelb, K. E. Gubbins, R. Radhakrishnan, and M. Sliwinska-Bartkowiak, Rep. Prog. Phys. 62, 1573 (1999).
[remsing2015pathways] R. C. Remsing, E. Xi, S. Vembanur, S. Sharma, P. G. Debenedetti, S. Garde, and A. J. Patel, Proc. Natl. Acad. Sci. U.S.A. 112, 8181 (2015).
[reiter1992dewetting] G. Reiter, Phys. Rev. Lett. 68, 75 (1992).
[seemann2001dewetting] R. Seemann, S. Herminghaus, and K. Jacobs, Phys. Rev. Lett. 86, 5534 (2001).
[thiele2003open] U. Thiele, Eur. Phys. J. E 12, 409 (2003).
[thiele2010thin] U. Thiele, J. Phys.: Condens. Matter 22, 084019 (2010).
[archer2010dynamical] A. J. Archer, M. J. Robbins, and U. Thiele, Phys. Rev. E 81, 021602 (2010).
[luzar2010wetting] A. Luzar, Faraday Discuss. 146, 290 (2010).
[evans2010wetting] R. Evans, Faraday Discuss. 146, 297 (2010).
[bryk2003depletion] P. Bryk, R. Roth, M. Schoen, and S. Dietrich, Europhys. Lett. 63, 233 (2003).
[hopkins2009solvent] P. Hopkins, A. J. Archer, and R. Evans, J. Chem. Phys. 131, 124704 (2009).
[archer2003solvent] A. J. Archer and R. Evans, J. Chem. Phys. 118, 9726 (2003).
[archer2005solvent] A. J. Archer, R. Evans, R. Roth, and M. Oettel, J. Chem. Phys. 122, 084513 (2005).
[malijevsky2015bridging] A. Malijevský and A. O. Parry, Phys. Rev. E 92, 022407 (2015).
[lum1999hydrophobicity] K. Lum, D. Chandler, and J. D. Weeks, J. Phys. Chem. B 103, 4570 (1999).
[evans2004nonanalytic] R. Evans, J. R. Henderson, and R. Roth, J. Chem. Phys. 121, 12074 (2004).
[cerdeirina2011evaporation] C. A. Cerdeiriña, P. G. Debenedetti, P. J. Rossky, and N. Giovambattista, J. Phys. Chem. Lett. 2, 1000 (2011).
[henderson1992] J. R. Henderson, in Fundamentals of Inhomogeneous Fluids, edited by D. Henderson (Marcel Dekker, New York, 1992), Chap. 2, pp. 23–84.
[stewart2005critical] M. C. Stewart and R. Evans, J. Phys.: Condens. Matter 17, S3499 (2005).
[evans1987phase] R. Evans and U. M. B. Marconi, J. Chem. Phys. 86, 7138 (1987).
[lum1997pathway] K. Lum and A. Luzar, Phys. Rev. E 56, R6283 (1997).
[attard1991interaction] P. Attard, D. Bérard, C. Ursenbach, and G. Patey, Phys. Rev. A 44, 8224 (1991).
[maciolek2004solvation] A. Maciołek, A. Drzewiński, and P. Bryk, J. Chem. Phys. 120, 1921 (2004).
[jamadagni2009surface] S. N. Jamadagni, R. Godawat, and S. Garde, Langmuir 25, 13092 (2009).
[liu2005observation] P. Liu, X. H. Huang, R. H. Zhou, and B. J. Berne, Nature 437, 159 (2005).
[mezger2006high] M. Mezger, H. Reichert, S. Schöder, J. Okasinski, H. Schröder, H. Dosch, D. Palms, J. Ralston, and V. Honkimäki, Proc. Natl. Acad. Sci. U.S.A. 103, 18401 (2006); see also M. Mezger, H. Reichert, B. M. Ocko, H. Daillant, and H. Dosch, Phys. Rev. Lett. 107, 249801 (2011) and S. Chattopadhyay, A. Uysal, B. Stripe, Y.-G. Ha, J. Tobin, E. A. Karapetrova, and P. Dutta, Phys. Rev. Lett. 107, 249802 (2011).
[janecek2007interfacial] J. Janecek and R. R. Netz, Langmuir 23, 8417 (2007).
http://arxiv.org/abs/1702.08278v1
{ "authors": [ "Blesson Chacko", "Robert Evans", "Andrew J. Archer" ], "categories": [ "cond-mat.soft", "cond-mat.stat-mech" ], "primary_category": "cond-mat.soft", "published": "20170227134036", "title": "Solvent fluctuations around solvophobic, solvophilic and patchy nanostructures and the accompanying solvent mediated interactions" }
Why T2K should run in dominant neutrino mode to discover CP violation?

Monojit Ghosh
Department of Physics, Tokyo Metropolitan University, Hachioji, Tokyo 192-0397, Japan, monojit@tmu.ac.jp

Why T2K should run in dominant neutrino mode to discover CP violation?
Monojit Ghosh
Received 5 February 2016 / Accepted 6 December 2016

The first hint of the leptonic CP phase δ_CP=-90^∘ has already come from the long-baseline neutrino oscillation experiment T2K. This hint is derived from the neutrino data of T2K, and currently the experiment is running in the antineutrino mode. In this work we ask what the proportion of neutrino and antineutrino running of the T2K experiment should be in order to discover CP violation in the leptonic sector.

§ INTRODUCTION

Neutrino oscillation in the standard three flavour framework is described by six parameters: three mixing angles, i.e., θ_12, θ_13, θ_23, two mass squared differences, i.e., Δ m^2_21, Δ m^2_31, and the phase δ_CP. Among these six parameters, at the moment the unknowns are: (i) the neutrino mass hierarchy, i.e., normal or inverted (NH: Δ m^2_31 > 0 or IH: Δ m^2_31 < 0), (ii) the octant of the mixing angle θ_23, i.e., lower or higher (LO: θ_23 < 45^∘ or HO: θ_23 > 45^∘), and (iii) the leptonic CP phase δ_CP. The first hint of CP violation in the leptonic sector is believed to come from the currently running long-baseline experiment T2K <cit.> in Japan, which has already indicated a mild preference for δ_CP=-90^∘. This hint has come from the neutrino data of T2K <cit.>, and currently the experiment is running in the antineutrino mode. In this work we ask what the proportion of neutrino and antineutrino running should be in order to extract the best sensitivity from T2K regarding the discovery of leptonic CP violation.

The capability of T2K to determine the phase δ_CP is limited by parameter degeneracies <cit.>, which are (i) the hierarchy-δ_CP degeneracy and (ii) the octant-δ_CP degeneracy. It has already been shown that the hierarchy-δ_CP degeneracy is the same for neutrinos and antineutrinos, but the octant-δ_CP degeneracy behaves differently for neutrinos and antineutrinos. Thus a combination of neutrino and antineutrino runs can resolve the octant-δ_CP degeneracy but not the hierarchy-δ_CP degeneracy. In this work we will show that for T2K, if the parameter space is free from the octant degeneracy, then the best CP sensitivity comes from the pure neutrino run of T2K. On the other hand, antineutrinos are required in the parameter space where the octant degeneracy is present. To overcome this problem we also study the possibility of adding data from other experiments, namely the ongoing accelerator-based long-baseline experiment NOνA <cit.> at Fermilab and the proposed atmospheric neutrino experiment ICAL@INO <cit.> in India, to show that the maximum CP sensitivity of T2K comes from the dominant neutrino run.

§ RESULTS AND DISCUSSIONS

For the simulation of the T2K experiment we consider a total exposure of 8 × 10^21 protons on target (pot). We have divided this exposure into different proportions of neutrino and antineutrino running, in units of 10^21 pot. In Fig. <ref> we plot the CP violation discovery potential of T2K for NH (Δ m^2_31 = 0.0024 eV^2) and LO (θ_23=39^∘). From the figure we see that when the octant is known (left panel), the best sensitivity comes from the pure neutrino run, i.e., the 8+0 configuration. But when the octant is unknown (right panel), 8+0 gives the worst sensitivity. As the proportion of antineutrinos increases, the CP sensitivity improves.
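Before continuing with the results, it may help to see schematically how such a CP-violation discovery metric is assembled. The toy Python sketch below (entirely ours; the appearance probability and all normalisations are crude placeholders, whereas the figures in this work come from a full event-level simulation) compares the event rates for a "true" δ_CP = -90^∘ against the closest CP-conserving hypothesis (δ_CP = 0 or 180^∘) for a given split of the pot between the neutrino and antineutrino modes.

    import numpy as np

    def p_app(delta_cp, antinu=False):
        # Placeholder nu_mu -> nu_e appearance probability near the T2K peak:
        # a leading term plus a CP-odd interference term that flips sign
        # between neutrinos and antineutrinos. Coefficients are illustrative.
        sign = -1.0 if antinu else 1.0
        return 0.05 + sign * 0.015 * np.sin(delta_cp)

    def cpv_chi2(f_nu, delta_true=-0.5 * np.pi, total_pot=8.0, norm=100.0):
        # Discovery metric: distance of the "data" from the closest
        # CP-conserving point, summed over nu and nubar samples.
        chi2_values = []
        for delta_test in (0.0, np.pi):
            chi2 = 0.0
            for antinu, pot in ((False, f_nu * total_pot),
                                (True, (1.0 - f_nu) * total_pot)):
                n_true = norm * pot * p_app(delta_true, antinu)
                n_test = norm * pot * p_app(delta_test, antinu)
                if n_test > 0.0:
                    chi2 += (n_true - n_test) ** 2 / n_test
            chi2_values.append(chi2)
        return min(chi2_values)

    for f_nu in (1.0, 5.0 / 8.0, 0.5):   # the 8+0, 5+3 and 4+4 splits
        print(f"nu fraction {f_nu:.3f}: Delta chi^2 = {cpv_chi2(f_nu):.2f}")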
In Fig. <ref>, the maximum sensitivity is obtained for 5+3, and a further increase of antineutrinos decreases the sensitivity. This is because for 5+3 the wrong-octant solution is completely removed, and a further addition of antineutrinos reduces the statistics; hence the sensitivity decreases. In Fig. <ref> we plot the same quantity but for all four combinations of hierarchy and octant, assuming the octant is unknown. IH corresponds to Δ m^2_31 = -0.0024 eV^2 and HO corresponds to θ_23=51^∘. From this figure we see that apart from -90^∘-NH-LO and +90^∘-IH-HO, the 8+0 configuration of T2K gives the best CP sensitivity. Thus, to get a handle on these two situations, in Fig. <ref> we plot the same as in Fig. <ref> but for the combination T2K+NOνA+ICAL. For NOνA we assume three years of running in both neutrino and antineutrino modes, and for ICAL we consider a 50 kt iron calorimeter detector running for 10 years. From the figure we see that when NOνA and ICAL are combined with the T2K data, the best CP sensitivity comes from the 7+1 configuration of T2K. For further details see Ref. <cit.>, on which this work is based.

§ ACKNOWLEDGEMENTS

This work is partly supported by the Grant-in-Aid for Scientific Research of the Ministry of Education, Science and Culture, Japan, under Grant No. 25105009.

[Abe:2015awa] K. Abe et al. [T2K Collaboration], Phys. Rev. D 91, no. 7, 072010 (2015) [arXiv:1502.01550 [hep-ex]].
[Ghosh:2015ena] M. Ghosh, P. Ghoshal, S. Goswami, N. Nath and S. K. Raut, Phys. Rev. D 93, no. 1, 013013 (2016) [arXiv:1504.06283 [hep-ph]].
[Adamson:2016xxw] P. Adamson et al. [NOvA Collaboration], Phys. Rev. D 93, no. 5, 051104 (2016) [arXiv:1601.05037 [hep-ex]].
[Ahmed:2015jtv] S. Ahmed et al. [ICAL Collaboration], arXiv:1505.07380 [physics.ins-det].
[Ghosh:2015tan] M. Ghosh, Phys. Rev. D 93, no. 7, 073003 (2016) [arXiv:1512.02226 [hep-ph]].
http://arxiv.org/abs/1702.07885v1
{ "authors": [ "Monojit Ghosh" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170225125138", "title": "Why T2K should run in dominant neutrino mode to discover CP violation ?" }
This is the second part of our study of the Inertial Manifolds for 1D systems of reaction-diffusion-advection equations initiated in <cit.>, and it is devoted to the case of periodic boundary conditions. It is shown that, in contrast to the case of Dirichlet or Neumann boundary conditions considered in the first part, Inertial Manifolds may not exist in the case of systems endowed with periodic boundary conditions. However, as also shown, inertial manifolds still exist in the case of scalar reaction-diffusion-advection equations. Thus, the existence or non-existence of inertial manifolds for this class of dissipative systems strongly depends on the choice of boundary conditions. This work is partially supported by the grants 14-41-00044 and 14-21-00025 of RSF as well as by the grants 14-01-00346 and 15-01-03587 of RFBR.

§ INTRODUCTION

This paper can be considered as a continuation of our study of the Inertial Manifolds (IMs) for the so-called 1D reaction-diffusion-advection (RDA) systems of the form

∂_t u - ∂^2_x u + u + f(u)∂_x u + g(u) = 0, x∈(-π,π),

initiated in <cit.>, although it can be read independently. Here u=u(t,x)=(u_1,⋯,u_m) is an unknown vector-valued function and f and g are given nonlinearities which are assumed to belong to the space C_0^∞. It is well known that the existence of an IM usually requires strong extra assumptions on the dissipative system considered. For instance, for the abstract parabolic equation in a Hilbert space H:

∂_t u + Au = F(u),

where A: D(A)→ H is a linear positive self-adjoint operator with compact inverse and F: H^β→ H, H^β:=D(A^β/2), is a nonlinear globally Lipschitz operator, the spectral gap conditions read

(λ_N+1-λ_N)/(λ_N+1^β/2+λ_N^β/2) > L.

Here N is the dimension of the IM, {λ_n}_n=1^∞ are the eigenvalues of A enumerated in non-decreasing order, 0≤β<2 and L is the Lipschitz constant of the nonlinearity F, see FST,mik,28,rom-man,Zel2 for more details. If this condition is satisfied, then there exists a Lipschitz (C^1+ε-smooth for some small positive ε, and normally hyperbolic if F is smooth enough) invariant manifold of dimension N in H with the exponential tracking property. Thus, restricting equation (<ref>) to this manifold, we get a system of ODEs describing the limit dynamics generated by (<ref>) - the so-called inertial form (IF) of this equation. It is also known that, at least on the level of the abstract equation (<ref>), the spectral gap conditions (<ref>) are sharp and the IM may not exist if they are violated. Moreover, in this case the associated dynamics may also be infinite-dimensional despite the fact that the global attractor exists and has finite box-counting dimension, see EKZ,sell-counter,rom-3,Zel2 for the details. In the case of RDA equations (<ref>), A:=-∂^2_x+1 endowed with the proper boundary conditions, H:=L^2(-π,π), and the nonlinearity F(u):=f(u)∂_x u+g(u) maps H^1(-π,π) to H (and can be made globally Lipschitz after the proper cut-off procedure), so β=1, λ_n∼ n^2 and the spectral gap conditions read

(λ_N+1-λ_N)/(λ_N^1/2+λ_N+1^1/2) ∼ C((N+1)^2-N^2)/(N+(N+1)) = C > L,

where C is independent of N. Thus, the nonlinearity in the RDA system (<ref>) is in a sense critical from the point of view of the IM theory, and the spectral gap conditions are satisfied only in the case where the Lipschitz constant L of the nonlinearity is small enough (no matter what N is). For this reason, the existence or non-existence of IMs for RDA equations with arbitrarily large nonlinearities was a long-standing open problem.
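It is instructive to see this criticality numerically. The following short Python sketch (ours, for illustration only) evaluates the normalised gap appearing above for λ_n = n^2 + 1 and confirms that it stays bounded, so that enlarging N does not help:

    # The normalised spectral gap for lambda_n = n^2 + 1 stays bounded in N,
    # so the gap condition can only hold for small Lipschitz constants L.
    for N in (1, 10, 100, 1000):
        lam_lo, lam_hi = N**2 + 1, (N + 1)**2 + 1
        gap = (lam_hi - lam_lo) / (lam_lo**0.5 + lam_hi**0.5)
        print(f"N = {N:4d}: gap = {gap:.4f}")   # approaches 1 as N grows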
We also remind the reader that, in the scalar case m=1, there is the so-called Romanov theory which allows us to construct the IF with Lipschitz continuous nonlinearities without using IMs, see rom-th,rom-th1 (and also Kuk,Zel2), but this result is essentially weaker than what we may get from IMs: namely, this IF works on the attractor only, does not possess any normal hyperbolicity/exponential tracking property, and the smoothness of the associated vector field is restricted to C^0,1 (Lipschitz continuity only). Note that the regularity of the reduced IF equations is a crucial point here, since C^α-smooth IFs with α<1 can be constructed based only on the Mané projection theorem and the fact that the box-counting dimension of the attractor is finite, see EKZ,28, and this "reduction" works even in examples where the dynamics on the attractor is clearly not finite-dimensional, see <cit.> for the details. On the other hand, as conjectured, such Lipschitz continuous IFs may be natural extensions of the concept of the IM to dissipative systems which do not satisfy the spectral gap conditions and do not possess IMs. However, to the best of our knowledge, up to this moment there was only one candidate where the Romanov theory works and the existence of the IM was unknown, and this is exactly the class of scalar 1D RDA equations. Thus, one of the motivations of our study is to clarify, at least for this model example, whether the existence of Lipschitz continuous IFs is caused by the existence of IMs, or whether the Romanov theory is indeed a step beyond the IMs.

Another source of interest is related to the fact that Burgers or coupled Burgers equations (which are particular cases of RDA equations) are often considered as simplified models for the Navier-Stokes equations and turbulence, so clarifying the situation with RDA equations may bring some light to the main open problem of the IM theory, namely, the existence or non-existence of IMs for the Navier-Stokes equations.

As follows from our investigation, the existence of IMs for 1D RDA equations strongly depends on the type of boundary conditions chosen. We have considered three types of BC: Dirichlet, Neumann and periodic ones. As shown in the first part of our study <cit.>, in the case of Dirichlet boundary conditions the problem can be settled by transforming our equation into a new one for which the spectral gap conditions will be satisfied, using the trick with the non-local change of variables u=a(t,x)w, where a is a properly chosen matrix depending on w. Indeed, the new independent variable w solves

∂_t w - ∂^2_x w = {a^-1(2∂_x a - f(aw)a)∂_x w} + {a^-1[∂^2_x a - ∂_t a - a - f(aw)∂_x a]w - a^-1 g(aw)} := ℱ_1(w)+ℱ_2(w).

This guesses the choice of the matrix a. The naive one would be to fix it as a solution of the following ODE:

d/dx a = 1/2 f(aw)a = 1/2 f(u)a, a|_x=-π = Id.

Then the operator ℱ_1(w) disappears, but the second one will still consume smoothness (ℱ_2: H^1→ H) due to the presence of ∂^2_x a and ∂_t a, and we will achieve nothing. However, a bit more clever choice

d/dx a = 1/2 f(P_K(aw))a = 1/2 f(P_K u)a, a|_x=-π = Id,

where P_K is the orthoprojector onto the first K eigenvectors of A:=-∂^2_x+1, actually solves the problem. Indeed, in this case a depends on w (or u) through the smoothifying operator P_K w, so ∂^2_x a and ∂_t a will not consume smoothness and the operator ℱ_2 will map H^1(-π,π) to H^1(-π,π). On the other hand, the map

ℱ_1(w) := a^-1(f(P_K(aw)) - f(aw))a ∂_x w

can be made small (as an operator from H^1(-π,π) to H) by fixing K large enough, see <cit.> for the details.
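For the reader who prefers coordinates, here is a minimal numerical sketch of this construction in the scalar case m = 1, where the ODE for a integrates to an exponential; the nonlinearity f and the sample state u below are placeholders of ours. Note that the resulting a is in general not 2π-periodic, which is precisely the obstruction addressed below for periodic boundary conditions.

    import numpy as np

    def P_K(v, K):
        # L^2-orthogonal projection onto the Fourier modes |n| <= K.
        c = np.fft.rfft(v)
        c[K + 1:] = 0.0
        return np.fft.irfft(c, n=len(v))

    f = np.tanh                              # placeholder scalar nonlinearity
    x = np.linspace(-np.pi, np.pi, 512, endpoint=False)
    u = np.sin(x) + 0.3 * np.cos(3 * x)      # placeholder state
    dx = x[1] - x[0]

    phi = 0.5 * f(P_K(u, K=8))
    a = np.exp(np.cumsum(phi) * dx)          # crude quadrature of (1/2) int f(P_K u)
    print(a[0], a[-1])                       # a(-pi) ~ 1, but a(pi) != 1 in general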
Finally, as shown in <cit.>, the map u→ w is a diffeomorphism in a neighbourhood of the global attractor if K is large enough, and the transformed equation (<ref>) satisfies (after the proper cut-off procedure) the spectral gap conditions and possesses the IM. This gives the positive answer to the question about the existence of IMs for 1D RDA systems endowed with Dirichlet boundary conditions.

The case of Neumann boundary conditions is more delicate due to the fact that the transform u=aw does not preserve these boundary conditions and, as a result, we would get nonlinear and non-local boundary conditions for the transformed equation (<ref>). Since nothing is known about IMs for such types of BC even in the simplest cases, making this transform does not look like a good idea. Fortunately, there is an alternative way to handle this problem, namely, to reduce the Neumann BC to the Dirichlet ones by differentiating the equations in x. Indeed, let v=∂_x u. Then the functions (u,v) solve

∂_t u - ∂^2_x u + u + f(u)v + g(u) = 0, ∂_x u|_x=-π = ∂_x u|_x=π = 0,
∂_t v - ∂^2_x v + v + f(u)∂_x v + f'(u)v^2 + g'(u)v = 0, v|_x=-π = v|_x=π = 0.

Since the first equation does not contain first derivatives in x, it is enough to transform the second component, v=a(t,x)w, and this component has Dirichlet boundary conditions, so the above-mentioned problem with boundary conditions is overcome, see <cit.> for the details. Thus, in the case of Neumann boundary conditions, the initial RDA system can be embedded into a larger RDA system which possesses an IM, so the answer to the question about the existence of IMs in this case is also positive.

This paper is devoted to the most complicated case of periodic boundary conditions. As we will see, there is a principal difference between the scalar (m=1) and vector (m>1) cases. In the scalar case, it is possible to modify the equation for a as follows:

d/dx a = 1/2[f(P_K(aw)) - ⟨f(P_K(aw))⟩]a = 1/2[f(P_K u) - ⟨f(P_K u)⟩]a, a|_x=-π = Id,

where ⟨U⟩ := 1/(2π) ∫_-π^π U(x) dx is the spatial mean of the function U. This extra term makes the function a 2π-periodic in space (so the associated transform preserves the periodic boundary conditions). On the other hand, it leads to the extra term

ℱ_3(w) := ⟨f(P_K(aw))⟩ ∂_x w

in the right-hand side of the transformed equations (<ref>), which is not small and still consumes smoothness (ℱ_3 maps H^1(-π,π) to H only). Nevertheless, as shown below, this term does not destroy the construction of the IM since it has a very special structure. Thus, in the scalar case with periodic boundary conditions, the answer to the question about the existence of IMs is also positive; namely, the following theorem can be considered as one of the two main results of this paper.

Let the nonlinearities f,g∈ C_0^∞(ℝ). Then the RDA equation

∂_t u - ∂^2_x u + u + f(u)∂_x u + g(u) = 0, u|_x=-π = u|_x=π, ∂_x u|_x=-π = ∂_x u|_x=π

possesses an IM (after the proper cut-off procedure).

Note that the proof of this theorem differs essentially from the one given in the first part of our study for the case of Dirichlet boundary conditions. In particular, we have to use a special cut-off procedure similar to the one developed in <cit.> (see also kwean,KZ1,Zel2) for the so-called spatial averaging method, as well as the graph transform and invariant cones instead of the Perron method. The extra term u is added only in order to have dissipativity and the global attractor in the periodic case as well, and it is not essential for IMs. We now turn to the vector case m>1 with periodic boundary conditions.
In contrast to all previously mentioned cases, the answer to the question about the existence of IMs here is negative. Namely, the following theorem can be considered as the second main result of the present paper.

There exist m>1 and nonlinearities f∈ C_0^∞(ℝ^m,ℒ(ℝ^m,ℝ^m)) and g∈ C_0^∞(ℝ^m,ℝ^m) such that the associated RDA system (<ref>) with periodic boundary conditions does not possess any finite-dimensional IM containing the global attractor. Moreover, the associated limit dynamics on the attractor is infinite-dimensional and, in particular, contains limit cycles with supra-exponential rate of attraction.

As in the case of abstract parabolic equations, see EKZ,Zel2, the proof of this result is based on a proper counterexample to the Floquet theory for linear equations with time-periodic coefficients. Such counterexamples are well known and can be relatively easily constructed in the class of abstract parabolic equations, see <cit.> for the details. However, to the best of our knowledge, finding such counterexamples in the class of parabolic PDEs and local differential operators was also a long-standing open problem. In the present paper, we give a solution of this problem. Namely, we have found smooth space-time periodic matrix-valued functions f(t,x) and g(t,x) in ℒ(ℝ^m,ℝ^m) such that the period map U associated with the linear RDA system

∂_t u - ∂^2_x u + u + f(t,x)∂_x u + g(t,x)u = 0

is a Volterra-type operator whose spectrum coincides with {0}. As a result, all solutions of problem (<ref>) decay faster than exponentially as t→∞ (actually, the decay rate is like e^-κ t^3 for some positive κ). Note also that, as shown in <cit.>, the IM may not exist even in the scalar case of an RDA equation with periodic boundary conditions if we allow the nonlinearities to contain non-local terms like periodic Hilbert operators.

The paper is organized as follows. We first study the scalar case. Section <ref> is devoted to the properties of solutions of (<ref>) and the associated diffeomorphisms of the phase space. In Section <ref> we deduce the transformed equations and verify the basic properties of the transformed nonlinearities which are crucial for the inertial manifold theory, and the construction of the IM for the scalar case is given in Section <ref>, based on a special cut-off procedure in the spirit of <cit.> and invariant cones. The example of a system of eight RDA equations with periodic boundary conditions which does not possess any finite-dimensional inertial manifold is given in Section <ref>. Finally, some generalizations of the obtained results are considered in Section <ref>.

§ SCALAR CASE: AN AUXILIARY DIFFEOMORPHISM

In this section, we study the nonlinear transformation u(t,x)=a(t,x)w(t,x) mentioned in the introduction. Namely, we define the map W: H^1_per(-π,π)→ H^1_per(-π,π) via the expression

W(u)(x) := [a(u)(x)]^-1 u(x),

where the function a(x)=a(u)(x) solves the equation

d/dx a = 1/2[f(P_K u) - ⟨f(P_K u)⟩]a, a|_x=-π = 1,

where K is large enough and P_K is the orthoprojector onto the first 2K+1 Fourier modes. We recall that in our scalar space-periodic case, the eigenvalues of the operator A=-∂^2_x+1 are λ_0=1 and λ_2n-1=λ_2n := n^2+1 for n>0 (for n>0 these eigenvalues have multiplicity two, with the corresponding eigenfunctions sin(nx) and cos(nx)). Thus, the orthonormal basis of eigenfunctions coincides with the basis for the classical Fourier series and, in particular, the orthoprojector P_K has the form:

(P_K u)(x) = a_0/2 + ∑_n=1^K a_n cos(nx) + b_n sin(nx), a_n := 1/π ∫_-π^π u(x)cos(nx) dx, b_n := 1/π ∫_-π^π u(x)sin(nx) dx.
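As a consistency check of the displayed formulas (ours, purely illustrative), one can compute the coefficients a_n, b_n by direct quadrature and resum the truncated series; the sup-norm error then measures the Fourier tail that P_K discards:

    import numpy as np

    def trapezoid(y, x):
        # Simple trapezoidal rule (avoids depending on a particular NumPy version).
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    x = np.linspace(-np.pi, np.pi, 2001)     # includes both endpoints
    u = np.exp(np.sin(x))                    # a smooth 2*pi-periodic sample
    K = 5

    PKu = 0.5 * (trapezoid(u, x) / np.pi) * np.ones_like(x)   # the a_0/2 term
    for n in range(1, K + 1):
        a_n = trapezoid(u * np.cos(n * x), x) / np.pi
        b_n = trapezoid(u * np.sin(n * x), x) / np.pi
        PKu += a_n * np.cos(n * x) + b_n * np.sin(n * x)

    print(f"K = {K}: sup-norm of (1 - P_K)u = {np.max(np.abs(u - PKu)):.2e}")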
The basic properties of the maps a(u) and W(u) are collected in the following lemmas.

For any u∈ H^1_per(-π,π) and any K there exists a unique solution a=a(u)∈ C^∞(ℝ) of problem (<ref>). This solution is space-periodic with period 2π and the following estimate holds:

‖a‖_W^1,∞ + ‖a^-1‖_W^1,∞ ≤ C,

where the constant C is independent of K and u. Moreover, the maps u→ a(u) and u→ a^-1(u) are C^∞-differentiable as maps from H^1_per(-π,π) to W^1,∞(-π,π), and the norms of their Frechet derivatives are bounded by constants which are independent of u and K. In particular, the following global Lipschitz continuity holds:

‖a(u_1) - a(u_2)‖_W^1,∞ + ‖a^-1(u_1) - a^-1(u_2)‖_W^1,∞ ≤ C‖u_1 - u_2‖_H^1,

where the constant C is independent of K and u.

Since equation (<ref>) is linear, we can solve it explicitly and obtain

a(u)(x) = e^{1/2 ∫_-π^x [f((P_K u)(s)) - ⟨f(P_K u)⟩] ds}.

All assertions of the lemma follow from this explicit formula and the fact that f∈ C_0^∞(ℝ). Indeed, the Frechet derivative a'(u)θ, θ∈ H^1_per(-π,π), satisfies

a'(u)θ = 1/2 a(u) ∫_-π^x [f'(P_K u)P_K θ - ⟨f'(P_K u)P_K θ⟩] ds,

and we see that

‖a'(u)θ‖_W^1,∞ ≤ C‖a(u)‖_W^1,∞ ‖P_K θ‖_L^∞ ≤ C‖θ‖_H^1.

This gives the desired uniform Lipschitz continuity. The higher Frechet derivatives may be estimated analogously and the lemma is proved.

The map u→ W(u) is C^∞-smooth as a map from H^1_per(-π,π) to itself and the norms of its Frechet derivatives are uniformly bounded with respect to K (but depend on ‖u‖_H^1). Moreover, the following estimate holds:

C^-1‖u‖_H^1 ≤ ‖W(u)‖_H^1 ≤ C‖u‖_H^1,

where the constant C>1 is independent of K and u.

Indeed, these assertions are immediate corollaries of Lemma <ref>.

Note that a(u) is a smoothifying operator in the sense that a(u)∈ C^∞(-π,π) if u∈ H^1(-π,π). However, the smoothifying norms of this operator will depend on K. Actually, the H^2-norm will still be uniform with respect to K since

d^2a/dx^2 = 1/4(f(P_K u) - ⟨f(P_K u)⟩)^2 a + 1/2 a f'(P_K u) d/dx P_K u

and u∈ H^1_per(-π,π), but the third derivative will contain the term d^2/dx^2 P_K u which is not uniform with respect to K. Moreover, as we see from this formula, the H^2-norm of a is not globally bounded with respect to u (due to the presence of the linearly growing term d/dx P_K u). That is the reason why we use the W^1,∞-norm in Lemma <ref>.

Recall that we want to verify that the map W is a diffeomorphism, so we need to study the inverse map U: w→ u. To this end, we need to find the function a if the function w is known. Obviously, this function (if it exists) should satisfy the equation

d/dx a = 1/2(f(P_K(aw)) - ⟨f(P_K(aw))⟩)a, a|_x=-π = 1.

The rest of this section is devoted to the study of equation (<ref>). We start with the solvability problem.

For any w∈ H^1_per(-π,π) and any K (including K=∞, which corresponds to P_K=Id), there exists at least one solution a∈ C^∞_per(-π,π) of equation (<ref>). Moreover,

‖a‖_W^1,∞ + ‖a^-1‖_W^1,∞ ≤ C,

where the constant C is independent of K and w.

It is convenient to make the change of variable a=e^y and replace equation (<ref>) by the following one:

dy/dx = 1/2(f(P_K(e^y w)) - ⟨f(P_K(e^y w))⟩), y|_x=-π = 0,

or, in the equivalent integral form,

y(x) = 1/2 ∫_-π^x (f(P_K(e^{y(s)}w(s))) - ⟨f(P_K(e^y w))⟩) ds := I(y)(x).

Note that the condition y(π)=0 is then automatically satisfied, so the function a is automatically 2π-periodic in x. Moreover, as is not difficult to see, the operator I is continuous and compact as an operator from C[-π,π] to itself and is globally bounded:

‖I(y)‖_C[-π,π] ≤ C,

where C is independent of y and K.
Thus, I maps the closed R-ball in the space C[-π,π] to itself if R≥ C and, thanks to the Schauder fixed point theorem, equation (<ref>) possesses at least one solution y∈ C[-π,π] belonging to this ball. All other properties stated in the lemma are immediate corollaries of equation (<ref>), and the lemma is proved.

Our next task is to verify the uniqueness of the solution a and its smooth dependence on the function w. To this end, we start with the following linear problem, which corresponds to the linearization of (<ref>) with K=∞:

dξ/dx(x) = φ(x)ξ(x) - ⟨φξ⟩ + h(x), ξ|_x=-π = 0.

Let φ,h∈ L^1(-π,π) and ⟨h⟩=0. Then problem (<ref>) possesses a unique solution ξ∈ W^1,1_per(-π,π), and this solution is given by the following expression:

ξ(x) = ∫_-π^x e^{∫_s^x φ(χ)dχ}(-D + h(s)) ds,

where

D = ⟨φξ⟩ = (∫_-π^π h(s)e^{∫_s^π φ(χ)dχ} ds) / (∫_-π^π e^{∫_s^π φ(χ)dχ} ds).

Moreover, the following estimate holds:

‖ξ‖_W^1,1 ≤ C‖h‖_L^1,

where C=C(‖φ‖_L^1) is independent of h, and, therefore, the linear solution operator

Υ = Υ_φ : L^1(-π,π) → W^1,1(-π,π), Υ_φ h := ξ,

is well-defined.

Indeed, denoting D := ⟨φξ⟩ and solving the linear ODE with the initial data ξ|_x=-π=0, we get (<ref>). The explicit value of D is then computed assuming that ξ(π)=0 and inserting this value into the left-hand side of (<ref>). Estimate (<ref>) is then an immediate corollary of (<ref>), and the lemma is proved.

Let, in addition, φ,h∈ L^s(-π,π) for some 1≤ s≤∞. Then

‖Υ_φ‖_ℒ(L^s,W^1,s) ≤ C,

where the constant C depends on the L^s-norm of φ.

Indeed, this estimate is also an immediate corollary of (<ref>).

We are now ready to study the case K<∞. Namely, let us consider the following equation:

dξ/dx = φ(x)(P_K(ψξ))(x) - ⟨φ P_K(ψξ)⟩ + h(x), ξ|_x=-π = 0.
The remaining statements of the lemma are immediate corollaries of this estimate and the lemma is proved.We are finally ready to establish the analogue of Lemma <ref> for the map w→ a(w) defined as a solution of equation (<ref>).For any R>0 there exists K_0=K_0(R) such that for any w∈ H^1_per(-π,π), w_H^1≤ R and every K>K_0, equation (<ref>) possesses a unique solution a=a(w)∈ C^∞_per(-π,π). Moreover, the map w→ a(w) is C^∞-differentiable as the map fromB(R,0,H^1):={w∈ H^1_per(-π,π), w_H^1<R}to W^1,∞_per(-π,π) and the norms of its Frechet derivatives depend on R, but are independent of the value of the parameter K>K_0. The existence of the solution a is verified in Lemma <ref>. Let us verify the uniqueness. Instead of working with equation (<ref>), we will work with the equivalent equation (<ref>). Indeed, let y_1 and y_2 be two solutions of this equation which correspond to the same w∈ B(R,0,H^1) and let y̅:=y_1-y_2. Then this function solves d/dxy̅=φ_y_1,y_2(x)P_K(ψ_y_1,y_2y̅)-φ_y_1,y_2P_K(ψ_y_1,y_2y̅), whereφ_y_1,y_2:=1/2∫_0^1f'(P_K(se^y_1w+(1-s)e^y_2w)) ds,ψ_y_1,y_2:=w∫_0^1e^sy_1+(1-s)y_2 ds.Since f∈ C^∞_0() and y_i are uniformly bounded in W^1,∞, we have φ_y_1,y_2_L^2≤ C,ψ_y_1,y_2_H^1≤ Cw_H^1≤ CR where the constant C is independent of K. Thus, according to Lemma <ref>, y̅=0 is a unique solution of (<ref>) if K>K_0(R) and the uniqueness is proved. Let us now estimate the norm of the Frechet derivative of the map w→ a(w) (the differentiability can be verified in a standard way and we left its proof to the reader). Let w∈ B(R,0,H^1), θ∈ H^1_per(-π,π) and ξ:=y'(w)θ. Then, this function solves d/dxξ=1/2f'(P_K(e^yw))P_K(e^ywξ)-f'(P_K(e^yw))P_K(e^ywξ)+ + 1/2f'(P_K(e^yw))P_K(e^yθ)-f'(P_K(e^yw))P_K(e^yθ), ξ|_x=-π=0. This equation has the form of equation (<ref>) withφ:=1/2f'(P_K(e^yw), ψ:=e^yw, h=1/2f'(P_K(e^yw))P_K(e^yθ)-f'(P_K(e^yw))P_K(e^yθ).Moreover, the functions φ and ψ satisfy exactly the same bounds as in (<ref>) and, consequently, according to Lemma <ref>,ξ_W^1,∞=Υ_φ,ψ^Kh_W^1,∞≤ Ch_L^∞if K>K_0(R). It remains to note thath_L^∞≤ CP_K(e^yθ)_L^∞≤ Ce^yθ_H^1≤ Cθ_H^1,where C is independent of K. This gives the following estimate y'(w)_ℒ(H^1,W^1,∞)≤ C, where C is independent of K and the desired uniform bound for the first Frechet derivative is obtained. Higher derivatives can be estimated analogously and the lemma is proved.We combine the obtained results in the following theorem.For any R>0 there exists K_0=K_0(R) such that the map W: H^1(-π,π)_per→ H^1_per(-π,π) is C^∞- diffeomorphism between B(R,0,H^1) and W(B(R,0,H^1)) ⊂ H^1 if K> K_0(R). Moreover, the norms ofW, U:=W^-1and their derivatives are independent of K and the following embeddings hold: B(C^-1R,0,H^1_per)⊂ W(B(R,0,H^1_per))⊂ B(CR,0,H^1_per) for some constant C>1 which is independent of K and R.Indeed, embeddings follow from inequalities (<ref>) and the remaining properties are actually proved in Corollary <ref> and Lemma <ref>.§ SCALAR CASE: THE TRANSFORMED EQUATION The aim of this section is to make the change w=W(u) of the independent variable u and study the properties of the nonlinearities involved in the transformed equation. Recall that the transform W(u) is a diffeomorphism ona large ball B(R,0,H^1_per) only (where R depends on the parameter K), so we need to do this transform not in the whole phase space Φ:=H^1_per(-π,π), but only on the absorbing ball of the corresponding solution semigroup. 
By this reason, we start our exposition with a theorem which guarantees the well-posedness and dissipativity of the solution semigroup (although in this section we need this result for the scalar equation only, we state below the theorem for the vector case as well). Namely, let us consider the following RDA system with periodic boundary conditions: u -∂^2_x u+u+ f(u)∂_x u+g(u)= 0, x∈(-π, π), u|_t=0=u_0∈Φ, where u(x,t)=(u_1,⋯,u_m) is an unknown vector function, f and g are given nonlinear smooth functions with finite support.Let the above assumptions hold. Then for any u_0 ∈ H^1_per(-π,π) there exists a unique solution of equation (<ref>)u ∈ C([0,T], H^1_per(-π,π))∩ L^2([0,T], H^2(-π,π)), T>0,satisfying u|_t=0 = u_0 and, therefore, the solution semigroup S(t) is well-defined in the phasespace Φ via S(t): Φ→Φ, S(t)u_0:=u(t). Moreover the following estimates hold for any solution u(t) of problem (<ref>) 1. Dissipativity: u(t)_Φ≤Ce^-γ tu_0_Φ +C, where γ, C are some positive constants; 2. Smoothing property: u(t)_H^2≤ t^-1/2Q(u(0)_Φ) + C_*, where the monotone function Q and positive constant C_* are independent of t>0. We give below only the schematic derivation of the stated estimates lefting the standard details to the reader. Step 1. L^2-estimate. Multiplying equation (<ref>) by u, integrating over x and using that both f and g have finite support, we get 1/2d/dtu^2_L^2+ u^2_L^2+u^2_L^2≤ C_1 u_L^2+C_2 and after applying the Gronwall inequality, we arrive at u(t)^2_L^2+∫_t^t+1 u(s)^2_L^2 ds≤ Cu(0)^2_L^2e^-δ t+C_* for some positive C_*, δ and C which are independent of t and u. Step 2. H^1-estimate. Multiplying equation (<ref>) by - u, integrating by parts and using again the fact that f and g have finite supports, we get 1/2d/dt u^2_L^2+ u^2_L^2+ u^2_L^2≤ ≤ C u_L^2+C_1 u_L^2 u_L^2≤1/2 u^2_L^2+C( u^2_L^2+1) Applying the Gronwall inequality to this relation and using (<ref>) for estimating the right-hand side, we arrive at u(t)^2_H^1+∫_t^t+1u(s)^2_H^2 ds≤ Cu(0)^2_H^1e^-δ t+C_* which gives the desired dissipative estimate in H^1. Step 3. Smoothing property. Multiplying equation (<ref>) by ∂_x^4 u, integrating by parts, using again that f and g have finite supports and the interpolation inequality v_L^∞^2≤ Cv_L^2 v_L^2, we get 1/2d/dt∂^2_x u^2_L^2+∂_x^3u^2_L^2+∂^2_x u^2_L^2≤ ≤ C∂^3_xu_L^2(∂^2_x u_L^2+∂_x u^2_L^∞+∂_x u_L^2)≤ C∂^3_xu_L^2( u_L^2+1)( u_L^2+1)≤ ≤1/2∂^3_x u^2_L^2+C( u^2_L^2+1)( u^2_L^2+1). Applying the Gronwall inequality to this relation and using(<ref>), we arrive at the dissipative estimate in H^2: u(t)_H^2^2+∫_t^t+1u(s)^2_H^3 ds≤ Q(u(0)_H^2)e^-γ t+C_*. for some monotone increasing function Q and positive constants γ and C_* which are independent of t. Finally, to obtain the smoothing property, we assume that t≤1, multiply inequality (<ref>) by t and apply the Gronwall inequality with respect to the function Y(t):=t u(t)^2_L^2. This gives estimate (<ref>) for t≤1. The estimate for t≥1 can be obtained combining estimate (<ref>) with (<ref>) for t≤1. Step 4. Uniqueness and Lipschitz continuity. Let u_1 and u_2 be two solutions of equation (<ref>) and let u̅=u_1-u_2. Then, this function solves u̅-u̅+u̅+[f(u_1) u_1-f(u_2) u_2]+[g(u_1)-g(u_2)]=0. Using the fact that H^1 is an algebra together with estimate (<ref>), we getf(u_1) u_1-f(u_2) u_2_L^2≤ Cu_1-u_2_H^1,g(u_1)-g(u_2)_L^2≤ Cu_1-u_2_L^2,where the constant C depends on the H^1-norms of u_1(0) and u_2(0). 
Multiplying now equation (<ref>) by -u̅+u̅ and using these estimates, we end up after the standard transformations with the following inequality:d/dtu̅^2_H^1+u̅^2_H^2≤C̃u̅^2_H^1and, therefore, u_1(t)-u_2(t)_H^1^2≤ e^C̃ tu_1(0)-u_2(0)_H^1^2, where the constant C̃ depends on the H^1-norms of u_1 and u_2, but is independent of t. Thus, the uniqueness is verified. The existence of a solution can be proved using e.g., the Galerkin approximations, see bv1,tem and the theorem is proved.The proved theorem guarantees the existence of a global attractor for the solution semigroup S(t). For the convenience of the reader, we recall the definition of the global attractor and state the corresponding result, see bv1,tem for more details. A set 𝒜 to be called a global attractor for the solution semigroup S(t) generated by equation (<ref>) if it satisfies the following properties: 1. The set 𝒜 iscompactin Φ:=H^1_per(-π,π); 2. The set 𝒜 is invariant with respect to thesemigroup S(t), i.e., S(t)𝒜 = 𝒜, t≥0; 3. The set 𝒜 isattracting, i.e., for any bounded set B⊂Φ and any neighbourhood 𝒪 of the attractor 𝒜 there exists time T = T(B,𝒪) such that S(t)B ⊂𝒪(𝒜), for allt≥ T. The next result is a standard corollary of the proved Theorem <ref> Under the assumptions of Theorem <ref> the solution semigroup S(t) of (<ref>) possesses a global attractor 𝒜 in the phase space Φ=H^1_per(-π,π). Moreover this attractor is a bounded set in H^2_per(-π,π). We recall that, as a rule, the nonlinearities f and g do not have finite support, but satisfy some dissipativity and growth restrictions which allow to establish the dissipativity of the corresponding solution semigroup and the existence of a global attractor, say, in the phase space Φ, see e.g., <cit.> for the case of coupled Burgers equations. After that, since we are interested in the long time behavior of solutions only, we cut off the nonlinearities outside of some neighbourhood of the attractor making them C_0^∞ on the one hand and without changing the global attractor on the other hand. In the present paper, we assume from the very beginning that the cut off procedure is already done and verify the existence of the global attractor for equation (<ref>) just for completeness of the exposition. Let us fix the radius R_0 in such a way that 𝒜⊂ B(R_0/2,0,Φ) and introduce the set ℬ:=∪_t≥0S(t)B(R_0,0,Φ). Then, this set is bounded according to Theorem <ref> and is invariant with respect to the semigroup S(t): 𝒜⊂ B(R_0/2,0,Φ)⊂ B(R_0,0,Φ)⊂ℬ⊂ B(R̅,0,Φ),S(t)ℬ⊂ℬ. Thus, we are not interested in the solutions starting outside of the set ℬ and need to transform our equation on a set ℬ only. From now on, we return to the scalar case m=1 and apply the transform w=W(u) defined in the previous section. Recall that this transform depends on the parameter K. Moreover, according to (<ref>), W(ℬ)⊂ B(CR̅,0,Φ) and, for all K>K_0=K_0(R̅), the inverse map U=U(w) is well-defined and smooth on B(2CR̅,0,Φ) (see Theorem <ref>. Then the transformed equation on W(ℬ) reads w - ∂^2_x w+w +f(P_K(aw))∂_x w= F_1(w)+ F_2(w), where F_1(w) = (f(P_K(aw)) - f(aw)) ∂_x w and F_2(w) = a^-1[∂_x^2 a -a - f(aw)∂_x a]w - a^-1g(aw). To obtain these formulas we just put u(t,x):=a(t,x)w(t,x) in equation (<ref>), see also (<ref>). However, in order to complete the transform, we need to express the function a as well as a, a and ∂_t a in terms of the new variable w. Indeed, the map w→ a(w) is defined as a solution of equation (<ref>) (see Lemma <ref>). 
The derivative ∂_x a can then be found from equation (<ref>):

(∂_x a)(w) = 1/2(f(P_K(a(w)w)) - ⟨f(P_K(a(w)w))⟩)a(w).

Differentiating this equation in x and using it for evaluating ∂_x a in the differentiated equation, we get

(∂^2_x a)(w) = 1/4(f(P_K(a(w)w)) - ⟨f(P_K(a(w)w))⟩)^2 a(w) + 1/2 a(w)f'(P_K(a(w)w)) d/dx(P_K(a(w)w)),

and since P_K is a smoothifying operator, the terms ∂_x a and ∂_x^2 a can be expressed in a smooth way in terms of the map w→ a(w). In particular, they are well defined on the ball B(2CR̅,0,Φ) if K>K_0 and, due to the presence of the derivative d/dx P_K(a(w)w), the norms of these operators and their Frechet derivatives depend on K.

The term containing ∂_t a is a bit more delicate since a is local in time and we need to use the chain rule in order to find an expression for it. To do this, we first express the value ∂_t P_K u from equation (<ref>):

∂_t P_K u = ∂^2_x P_K(a(w)w) - P_K(a(w)w) - P_K(f(a(w)w)∂_x(a(w)w)) - P_K g(a(w)w),

and we see that the right-hand side is smoothly expressed in terms of the map w→ a(w). Therefore, the operator w→(∂_t P_K u)(w) is well-defined and smooth on the ball B(2CR̅,0,Φ). Differentiating then the explicit formula (<ref>) in time, we get

((∂_t a)(w))(x) = a(w)(x)·1/2 ∫_-π^x [f'(P_K(a(w)(s)w(s)))(∂_t P_K u)(w)(s) - ⟨f'(P_K(a(w)w))(∂_t P_K u)(w)⟩] ds,

and this shows that the map w→(∂_t a)(w) is also well-defined and smooth on B(2CR̅,0,Φ). Thus, we have proved the following result.

Under the above assumptions, the map F_2(w) is well-defined and smooth as a map from B(2CR̅,0,Φ) to Φ for all K≥ K_0. In particular,

‖F_2‖_C^1(B(2CR̅,0,Φ),Φ) ≤ C_K,

where the constant C_K depends on K≥ K_0.

We now turn to the nonlinearity F_1. Obviously, it is well-defined and smooth as a map from B(2CR̅,0,Φ) to L^2_per(-π,π). Moreover, this map is small if K is large, and this property is crucial for us.

Under the above assumptions, the map F_1(w) defined by (<ref>) is well-defined as a map from B(2CR̅,0,Φ) to L^2_per(-π,π) and the following estimate holds:

‖F_1‖_C^1(B(2CR̅,0,Φ),L^2_per) ≤ CK^-1/2,

where the constant C is independent of K≥ K_0.

Indeed, this estimate can be obtained arguing exactly as in (<ref>), see also <cit.> for more details.

The considered terms F_1(w) and F_2(w) in equation (<ref>) are similar to the case of Dirichlet boundary conditions considered in <cit.>. However, the extra term

F_3(w) := ⟨f(P_K(aw))⟩∂_x w

is specific to the case of periodic boundary conditions and is essentially different. Indeed, as before, we obviously have the smoothness of this term and the estimate

‖F_3‖_C^1(B(2CR̅,0,Φ),L^2_per) ≤ C,

where the constant C is independent of K≥ K_0; but this constant is not small as K→∞, so we cannot treat this term as a perturbation.

We finally note that the nonlinearities F_i are defined not on the whole space Φ, but only on the large ball B(2CR̅,0,Φ), which is not convenient for constructing the inertial manifolds. To overcome this problem, we introduce a smooth cut-off function θ∈ C^∞_0(ℝ) such that

θ(z) ≡ 1 for |z| ≤ (CR̅)^2 and θ(z) ≡ 0 for |z| ≥ (2CR̅)^2,

and the modified operators

ℱ_i(w) := θ(‖w‖^2_H^1)F_i(w), i=1,2,3.

Then the operators ℱ_i are defined and smooth already in the whole phase space Φ, coincide with F_i on the ball B(CR̅,0,Φ) and vanish outside of the ball B(2CR̅,0,Φ). Moreover, the operator ℱ_3(w) = Θ(w)∂_x w, where

Θ(w) := θ(‖w‖^2_H^1)⟨f(P_K(a(w)w))⟩,

is a smooth map from Φ to ℝ which also vanishes outside of B(2CR̅,0,Φ). Thus, the transformed equation now reads

∂_t w - ∂^2_x w + w + Θ(w)∂_x w = ℱ_1(w) + ℱ_2(w).
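The scalar quantity Θ(w) is straightforward to evaluate in practice. The sketch below is ours and purely illustrative: it obtains a(w) by iterating the fixed-point form of the a-equation on a grid (the step cut-off, the choice f = tanh, the sample state and the threshold all being our placeholders for the smooth θ and the constants above) and then takes the spatial mean.

    import numpy as np

    def P_K(v, K):
        c = np.fft.rfft(v)
        c[K + 1:] = 0.0
        return np.fft.irfft(c, n=len(v))

    f = np.tanh
    x = np.linspace(-np.pi, np.pi, 512, endpoint=False)
    dx = x[1] - x[0]
    w = 0.5 * np.sin(2 * x)                  # placeholder state
    K = 8

    a = np.ones_like(x)
    for _ in range(20):                      # fixed-point iteration for a(w)
        fval = f(P_K(a * w, K))
        # cumulative quadrature of (1/2) int [f - <f>]; mean removal makes a periodic
        a = np.exp(np.cumsum(fval - fval.mean()) * 0.5 * dx)

    h1_sq = np.mean(w ** 2) + np.mean(np.gradient(w, dx) ** 2)  # crude H^1 norm squared
    cutoff = 1.0 if h1_sq <= 10.0 else 0.0   # crude step stand-in for the smooth theta
    Theta = cutoff * np.mean(f(P_K(a * w, K)))
    print(f"Theta(w) = {Theta:.4f}")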
Moreover, the transformed equation coincides with (<ref>) on the ball B(CR̅,0,Φ) and, consequently, the diffeomorphism W: u→ w maps the solutions of the initial equation (<ref>) from some neighbourhood of the attractor 𝒜 into the solutions of (<ref>) belonging to some neighbourhood of W(𝒜). In particular, the set W(𝒜) is an attractor for equation (<ref>) (maybe a local one, since we do not control the behavior of solutions of (<ref>) outside of the ball B(CR̅,0,Φ), where some new limit trajectories may a priori appear). Thus, from now on we forget about the initial equation (<ref>) and will work with the transformed equation (<ref>) only. For the convenience of the reader, we collect the verified properties of the maps ℱ_i in the next theorem.

The operators ℱ_1, ℱ_2 and Θ belong to C^∞(Φ,L^2_per), C^∞(Φ,Φ) and C^∞(Φ,ℝ) respectively and vanish outside of the big ball B(2CR̅,0,Φ). Moreover, the following estimates hold:

‖ℱ_1‖_C^1(Φ,L^2_per) ≤ CK^-1/2, ‖ℱ_2‖_C^1(Φ,Φ) ≤ C_K, ‖Θ‖_C^1(Φ,ℝ) ≤ C,

where the constant C is independent of K and the constant C_K may depend on K.

We see that, in contrast to the case of Dirichlet boundary conditions considered in <cit.>, in the periodic case the transform W does not allow us to make the nonlinearity which contains spatial derivatives small, but makes it small only up to the term Θ(w)∂_x w. Although this term has a very simple structure, it prevents us from using the standard Perron method of constructing the inertial manifolds and makes the situation essentially more complicated.

§ SCALAR CASE: EXISTENCE OF AN INERTIAL MANIFOLD

In this section, we will construct the inertial manifold for the transformed equation (<ref>). To be more precise, in contrast to the case of Dirichlet boundary conditions, we do not know how to construct the inertial manifold directly for equation (<ref>) and need to introduce one more cut-off function. We first note that, arguing exactly as in Theorems <ref> and <ref>, we may prove that equation (<ref>) is uniquely solvable for every w(0)∈Φ and that the corresponding solution w(t) satisfies all of the estimates derived in Theorem <ref>. This in turn means that the solution semigroup S_tr(t): Φ→Φ is well-defined, dissipative and possesses a global attractor 𝒜_tr ⊂ H^2_per(-π,π). Moreover, according to the analogue of the H^2-dissipative estimate (<ref>), the set

ℬ_H^2 := ∪_t∈ℝ_+ S_tr(t)B(r,0,H^2_per)

is an invariant set, bounded in H^2_per, which contains the global attractor 𝒜_tr:

S_tr(t)ℬ_H^2 ⊂ ℬ_H^2, 𝒜_tr ⊂ ℬ_H^2, ‖ℬ_H^2‖_H^2_per ≤ R/2,

where r is large enough and R>r is some number depending only on r. We also recall that, by the construction of the transformed equation (<ref>), W(𝒜)⊂𝒜_tr (where 𝒜 is the attractor of the initial equation (<ref>)) and S_tr(t) = W∘ S(t)∘ W^-1 in a neighbourhood of the set W(𝒜). Thus, the dynamics generated by equation (<ref>) outside of the ball B(R,0,H^2_per) is inessential, and we may change it there in order to simplify the construction of the inertial manifold. To this end, we introduce one more cut-off function ϕ∈ C^∞(ℝ) which is monotone decreasing and such that

ϕ(z) ≡ 0 for z ≤ R^2, ϕ(z) ≡ -1/2 for z ≥ (2R)^2,

and one more nonlinear operator

T(w) := ϕ(‖(∂_x^2-1)P_N w‖^2_L^2_per)(∂_x^2-1)P_N w,

where the number N will actually coincide with the dimension of the manifold and will be fixed below. The key properties of this map are collected in the next lemma.
Moreover, its Frechet derivative T'(w) is globally bounded as a map from Φ to ℒ(Φ,Φ) and satisfies the following inequalities:

(T'(w)ξ, (∂_x^2-1)P_Nξ) ≤ 0 for all w∈Φ,

and

(T'(w)ξ, (∂_x^2-1)P_Nξ) = -1/2‖P_Nξ‖^2_H^2 for all w∈Φ such that ‖(∂_x^2-1)P_N w‖^2_L^2_per≥ (2R)^2.

Indeed, the Frechet derivative of T reads

T'(w)ξ=ϕ(‖(∂_x^2-1)P_N w‖^2_L^2_per)(∂_x^2-1)P_Nξ + 2ϕ'(‖(∂_x^2-1)P_N w‖^2_L^2_per)((∂_x^2-1)P_N w,(∂_x^2-1)P_Nξ)(∂_x^2-1)P_N w.

Using the fact that ϕ'(z)=0 for z>4R^2, we see that the derivative T'(w) is uniformly bounded as a map from Φ to ℒ(Φ,Φ) and, in particular, the map w→ T(w) is globally Lipschitz as a map from Φ to Φ. Moreover, since ϕ(z)≤0 and ϕ'(z)≤0 for all z∈ℝ, we have

(T'(w)ξ, (∂_x^2-1)P_Nξ)= 2ϕ'(‖(∂_x^2-1)P_N w‖^2_L^2)((∂_x^2-1)P_N w, (∂_x^2-1)P_Nξ)^2+ ϕ(‖(∂_x^2-1)P_N w‖^2_L^2)‖(∂_x^2-1)P_Nξ‖^2_L^2≤0.

In the case ‖(∂_x^2-1)P_N w‖^2_L^2≥ 4R^2, by definition T(w) = -1/2(∂_x^2-1)P_N w and consequently

(T'(w)ξ,(∂_x^2-1)P_Nξ) = -1/2‖(∂_x^2-1)P_Nξ‖^2_L^2,

and the lemma is proved.

Thus, we arrive at the following final equation for the inertial manifold to be constructed:

∂_t w- ∂^2_x w+w+Θ(w)∂_x w = T(w)+ℱ_1(w)+ℱ_2(w).

Note that this equation can be interpreted as a particular case of an abstract semilinear parabolic equation

∂_t w+Aw=ℱ(w)

in the Hilbert space Φ:=H^1_per(-π,π), where A:=1-∂_x^2 (a self-adjoint positive operator in Φ with compact inverse) and

ℱ(w):=ℱ_1(w)+ℱ_2(w)+T(w)-Θ(w)∂_x w.

Indeed, as follows from Theorem <ref> and Lemma <ref>, the nonlinearity ℱ is globally Lipschitz continuous as a map from Φ to L^2_per(-π,π)=D(A^{-1/2}). This, in particular, implies that this equation is also globally well-posed in Φ, generates a dissipative semigroup S̅(t):Φ→Φ, and the corresponding solution w(t) satisfies all of the estimates stated in Theorem <ref>. Moreover, due to Theorem <ref> and the obvious fact that Q_N T(w)=0, the Q_N-component of the nonlinearity ℱ is globally bounded:

‖Q_Nℱ(w)‖_L^2_per≤ C,

where the constant C is independent of N and w. This property gives the control of the Q_N-component of the solution w which is crucial for what follows.

Let the nonlinearity ℱ satisfy (<ref>). Then, for any κ∈(0,1), there exists a constant R_κ>0 (independent of N) such that, for any solution w(t) of equation (<ref>) with w(0)∈ H^{2-κ}_per(-π,π), the following estimate holds:

‖Q_N w(t)‖_H^{2-κ}_per≤ (‖Q_N w(0)‖_H^{2-κ}_per-R_κ)_+ e^{-α t}+R_κ,

where z_+:=max{z,0} and the positive constant α is independent of κ, t, N and w.

Indeed, according to the variation of constants formula, Q_N w(t) satisfies

Q_N w(t)=Q_N w(0)e^{-At}+∫_0^t e^{-A(t-s)}Q_Nℱ(w(s)) ds.

Taking the H^{2-κ}_per-norm of both sides of this equality and using that

‖e^{-A(t-s)}‖_ℒ(L^2_per,H^{2-κ}_per)≤ Ce^{-α(t-s)}(t-s)^{-1+κ/2}

for some positive C and α, we end up with the following estimate:

‖Q_N w(t)‖_H^{2-κ}_per≤‖Q_N w(0)‖_H^{2-κ}_per e^{-α t}+C∫_0^t e^{-α(t-s)}(t-s)^{-1+κ/2}‖Q_Nℱ(w(s))‖_L^2_per ds≤‖Q_N w(0)‖_H^{2-κ}_per e^{-α t}+C_1∫_0^t e^{-α(t-s)}/(t-s)^{1-κ/2} ds,

and the assertion of the lemma is a straightforward corollary of this estimate. Thus, the lemma is proved.

The proved lemma shows that the sets

B_κ:={w∈ H^{2-κ}_per : ‖Q_N w‖_H^{2-κ}_per≤ R_κ}

are invariant with respect to the semigroup S̅(t): S̅(t)B_κ⊂ B_κ.

The auxiliary operator T(w) has been introduced in <cit.> in order to construct inertial manifolds for reaction-diffusion equations in higher dimensions using the so-called spatial averaging method. On the one hand, since T(w)=-1/2(∂_x^2-1)P_N w if ‖P_N w‖_H^2_per is large, this term, roughly speaking, shifts the first N eigenvalues and makes the spectral gap large enough to treat the nonlinearity.
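The sign properties of T'(w) established in the lemma above can be checked numerically in the Fourier coordinates used later in the proof of the main theorem (modes e_0,…,e_{2N}, on which ∂_x^2-1 acts as multiplication by -λ_n). The sketch below is our own; in particular, the concrete C^1 profile chosen for ϕ is only a stand-in for the C^∞ cut-off, and R=1 is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, R = 8, 1.0
# lam_0 = 1, lam_{2n-1} = lam_{2n} = n^2 + 1; (d_xx - 1) acts as multiplication by -lam
lam = np.array([1.0] + [n * n + 1.0 for n in range(1, N + 1) for _ in range(2)])

def phi(z):   # C^1 monotone decreasing stand-in: 0 for z <= R^2, -1/2 for z >= 4R^2
    u = np.clip((z - R**2) / (3 * R**2), 0.0, 1.0)
    return -0.5 * (3 * u**2 - 2 * u**3)

def dphi(z):
    u = np.clip((z - R**2) / (3 * R**2), 0.0, 1.0)
    return -0.5 * (6 * u - 6 * u**2) / (3 * R**2)

for _ in range(10000):
    w, xi = rng.normal(size=(2, 2 * N + 1)) * rng.uniform(0.1, 10)
    z = np.sum((lam * w) ** 2)               # ||(d_xx - 1) P_N w||^2
    inner = np.dot(lam * w, lam * xi)        # ((d_xx - 1) w, (d_xx - 1) xi)
    # (T'(w) xi, (d_xx - 1) P_N xi) in coefficient form:
    q = phi(z) * np.sum((lam * xi) ** 2) + 2 * dphi(z) * inner**2
    assert q <= 1e-12                        # first inequality of the lemma
    if z >= 4 * R**2:                        # saturated regime of the lemma
        assert np.isclose(q, -0.5 * np.sum((lam * xi) ** 2))
```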
So, this trick actually allows us to check the cone property in the case where ‖P_N w‖_H^2_per≤ 2R. On the other hand, together with the control (<ref>), this gives us the control of the H^{2-κ}-norm in the estimates related to the cone property, see also KZ1,Zel2 and the proof of Theorem <ref> below. We also mention that, by the construction of the nonlinearity T, equation (<ref>) coincides with (<ref>) in a neighbourhood of the attractor 𝒜_tr.

We are now ready to verify the existence of the inertial manifold for problem (<ref>). For the convenience of the reader, we first recall the definition of an inertial manifold and the result which guarantees its existence.

A set ℳ⊂Φ is called an inertial manifold for problem (<ref>) if it satisfies the following properties:

1. ℳ is strictly invariant under the action of the semigroup S̅(t), i.e., S̅(t)ℳ = ℳ;

2. ℳ is a Lipschitz submanifold of Φ which can be presented as a graph of a Lipschitz continuous function M:P_NΦ→ Q_NΦ for some N∈ℕ, i.e.,

ℳ = { w_+ + M(w_+), w_+ ∈ P_NΦ} and ‖M(w^1_+) - M(w_+^2)‖_Φ≤ L_M‖w_+^1 - w_+^2‖_Φ

for some constant L_M;

3. ℳ possesses an exponential tracking property, i.e., for any solution w(t), t≥0, of problem (<ref>) there exists a solution w̃(t), t∈ℝ, belonging to ℳ for all t∈ℝ such that

‖w(t) - w̃(t)‖_Φ≤ C e^{-γ t}‖w(0) - w̃(0)‖_Φ

for some positive constants C and γ.

The proof of the existence of an inertial manifold will be based on the invariant cone property and the graph transform method, see FST,mal-par,rom-man,Zel2 for more details. To introduce the invariant cone property convenient for our purposes, we define the quadratic form

V(ξ) = ‖Q_Nξ‖^2_Φ - ‖P_Nξ‖^2_Φ, ‖z‖_Φ^2:=(Az,z)= ‖z‖^2_L^2_per+‖∂_x z‖^2_L^2_per,

and the corresponding cone in the phase space Φ:

𝒦^+ = {ξ∈Φ : V(ξ)≤ 0 }.

We say that equation (<ref>) possesses a strong cone property in the differential form if there exist a positive constant μ and a bounded function α: Φ→ℝ satisfying 0<α_- ≤α(w)≤α_+ <∞ such that, for any solution w(t)∈Φ, t∈[0,T], of equation (<ref>) and any solution ξ(t) of the corresponding equation in variations

∂_tξ +Aξ =ℱ'(w(t))ξ,

the following inequality holds:

d/dt V(ξ) + α(w)V(ξ) ≤ -μ‖ξ‖^2_H^2_per.

If inequality (<ref>) holds not for all trajectories w(t), but only for the ones belonging to some invariant set, we will say that the strong cone property is satisfied on this set.

The next theorem gives the conditions which guarantee the existence of the inertial manifold for the abstract equation (<ref>), and which we need to verify in our case of equation (<ref>).

Let the nonlinearity ℱ be globally Lipschitz continuous as a map from Φ to L^2_per=D(A^{-1/2}) and let the number N be chosen in such a way that Q_Nℱ is globally bounded on Φ and the strong cone property in the differential form is satisfied on the invariant set B_κ defined by (<ref>) for some κ∈(0,1]. Then (<ref>) possesses a (2N+1)-dimensional Lipschitz inertial manifold in the space Φ. Moreover, if the nonlinearity ℱ is of class C^{1+β}(Φ,L^2_per) for some β>0, then the inertial manifold is of class C^{1+ε} for some ε=ε(β,N)>0.

It is a well-known result, see e.g. mal-par,Zel2, that the validity of the strong cone property in the differential form leads to the existence of a (2N+1)-dimensional inertial manifold.

The following theorem can be considered as one of the main results of this chapter.

Under the above assumptions, for infinitely many values of N∈ℕ equation (<ref>) possesses a (2N+1)-dimensional inertial manifold.
Moreover, these inertial manifolds are C^{1+ε}-smooth for some small positive ε=ε(N)>0.

According to Theorem <ref>, we only need to verify the validity of the strong cone condition on the invariant set B_κ for some κ>0; the rest of the assumptions of this theorem have already been verified above. We fix κ=1/4 and write out the equation of variations which corresponds to equation (<ref>):

∂_tξ+Aξ = -Θ(w)∂_xξ-(Θ'(w),ξ)_H^1∂_x w+ℱ_1'(w)ξ+ℱ_2'(w)ξ+T'(w)ξ,

where w(t) is a solution of equation (<ref>) belonging to B_{1/4}. Multiplying this equation by AQ_Nξ - AP_Nξ and denoting α̅:=(λ_{2N+1}+λ_{2N})/2, we get

1/2 d/dt V(ξ) + α̅V(ξ) =((α̅-A)ξ, AQ_Nξ) - ((α̅-A)ξ, AP_Nξ) - Θ(w)(∂_xξ,AQ_Nξ - AP_Nξ)-(Θ'(w),ξ)_H^1(∂_x w,AQ_Nξ - AP_Nξ)+(ℱ'_1(w)ξ,AQ_Nξ - AP_Nξ)+ (ℱ'_2(w)ξ, AQ_Nξ -AP_Nξ)- (T'(w)ξ, AP_Nξ).

Let us estimate every term in the right-hand side separately. Integrating by parts in the third term, we see that

Θ(w)(∂_xξ,AQ_Nξ - AP_Nξ) =0.

Due to estimate (<ref>) on the nonlinearity ℱ_1 we have

(ℱ'_1(w)ξ, AQ_Nξ - AP_Nξ) ≤ CK^{-1/2}‖ξ‖_H^1_per‖ξ‖_H^2_per,

and estimate (<ref>) on ℱ_2 gives us

(ℱ'_2(w)ξ, AQ_Nξ -AP_Nξ) ≤ C_K‖ξ‖^2_H^1_per.

In the next estimates, we will use the notations

e_{2n}:=cos(nx), n∈{0}∪ℕ, e_{2n-1}:=sin(nx), n∈ℕ; λ_0=1, λ_{2n}=λ_{2n-1}:=n^2+1, n∈ℕ,

and the expansions

ξ:=∑_{n=0}^∞ ξ_n e_n, P_Nξ=∑_{n=0}^{2N}ξ_n e_n, Q_Nξ:=∑_{n=2N+1}^∞ ξ_n e_n.

Then, we estimate the linear terms as follows:

((α̅-A)ξ, -AP_Nξ)=∑_{n=0}^{2N}(λ_n^2 - α̅λ_n)ξ_n^2 = 1/2∑_{n=0}^{2N}(λ_n - α̅)λ_nξ_n^2 +1/4∑_{n=0}^{2N}(λ_n^{3/4} - α̅/λ_n^{1/4})λ_n^{5/4}ξ_n^2 +1/4∑_{n=0}^{2N}(1-α̅/λ_n)λ_n^2ξ_n^2 ≤ 1/2(λ_{2N} -α̅)‖P_Nξ‖_H^1^2 + 1/4(λ_{2N}^{3/4} - α̅/λ_{2N}^{1/4})‖P_Nξ‖_H^{5/4}^2+1/4(1 - α̅/λ_{2N})‖P_Nξ‖_H^2^2,

and

((α̅-A)ξ, AQ_Nξ) = ∑_{n=2N+1}^∞(α̅λ_n - λ_n^2)ξ_n^2 ≤ 1/2(α̅- λ_{2N+1})‖Q_Nξ‖_H^1^2 +1/4(α̅/λ^{1/4}_{2N+1} - λ_{2N+1}^{3/4})‖Q_Nξ‖^2_H^{5/4}+ 1/4(α̅/λ_{2N+1} - 1)‖Q_Nξ‖^2_H^2.

We recall that α̅= (λ_{2N+1} + λ_{2N})/2; consequently,

((α̅-A)ξ, -AP_Nξ) + ((α̅-A)ξ, AQ_Nξ) ≤ -(λ_{2N+1} - λ_{2N})/4 ‖ξ‖^2_H^1- (λ_{2N+1}^{3/4} - λ_{2N}^{3/4})/8 ‖ξ‖^2_H^{5/4} - (λ_{2N+1} - λ_{2N})/(8λ_{2N+1}) ‖ξ‖^2_H^2 = -(λ_{2N+1} - λ_{2N})/4 ‖ξ‖^2_H^1- (λ_{2N+1}^{3/4} - λ_{2N}^{3/4})/8 ‖ξ‖^2_H^{5/4} - (λ_{2N+1} - λ_{2N})/(16λ_{2N+1}) ‖ξ‖^2_H^2-μ‖ξ‖^2_H^2,

where we set μ:=(λ_{2N+1} - λ_{2N})/(16λ_{2N+1}). Inserting the obtained estimates into the right-hand side of (<ref>) and using that

CK^{-1/2}‖ξ‖_H^1_per‖ξ‖_H^2_per≤ (λ_{2N+1}-λ_{2N})/8 ‖ξ‖^2_H^1_per+2C^2K^{-1}/(λ_{2N+1} - λ_{2N}) ‖ξ‖^2_H^2_per,

we arrive at

1/2 d/dt V(ξ) + α̅V(ξ)+μ‖ξ‖^2_H^2_per≤ -(Θ'(w),ξ)_H^1(∂_x w,AQ_Nξ - AP_Nξ)- (T'(w)ξ, AP_Nξ)- ((λ_{2N+1}-λ_{2N})/8-C_K)‖ξ‖^2_H^1_per- (λ_{2N+1}^{3/4} - λ_{2N}^{3/4})/8 ‖ξ‖^2_H^{5/4}_per- ((λ_{2N+1} - λ_{2N})/(16λ_{2N+1})-2C^2K^{-1}/(λ_{2N+1} - λ_{2N}))‖ξ‖^2_H^2_per.

Let us now estimate the first term in the right-hand side of (<ref>). To this end, we fix an arbitrary t≥0 and consider two cases: 1) ‖P_N w(t)‖_H^2_per≤2R and 2) ‖P_N w(t)‖_H^2_per>2R, where the constant R is the same as in (<ref>). In the first case, using also that w∈ B_{1/4}, we conclude that

‖w‖^2_H^{7/4}_per≤‖P_N w‖^2_H^2_per + ‖Q_N w‖^2_H^{7/4}_per≤ 2R + R_{1/4}:=C̅.

Therefore, using also that Θ'(w) is globally bounded in H^1_per,

|(Θ'(w),ξ)_H^1(∂_x w,AQ_Nξ - AP_Nξ)|≤ C‖w‖_H^{7/4}_per‖ξ‖_H^1_per‖ξ‖_H^{5/4}_per≤C̃‖ξ‖^2_H^{5/4}_per.

As follows from Lemma <ref>, the additional term containing T'(w) does not make any difference since

(T'(w)ξ, -AP_Nξ) ≤ 0.

Therefore, in the first case inequality (<ref>) reads

1/2 d/dt V(ξ) + α̅V(ξ)+μ‖ξ‖^2_H^2_per≤ -((λ_{2N+1}-λ_{2N})/8-C_K)‖ξ‖^2_H^1_per- ((λ_{2N+1}^{3/4} - λ_{2N}^{3/4})/8 -C̃)‖ξ‖^2_H^{5/4}_per-((λ_{2N+1} - λ_{2N})/(16λ_{2N+1})-2C^2K^{-1}/(λ_{2N+1} - λ_{2N}))‖ξ‖^2_H^2_per.

We now recall that the eigenvalues are λ_{2N}=N^2+1 and λ_{2N+1}=(N+1)^2+1.
Therefore, for N>0, λ_{2N+1}-λ_{2N}=2N+1 and

(λ_{2N+1}-λ_{2N})^2/λ_{2N+1}=(2N+1)^2/((N+1)^2+1)≥1.

Thus, if we fix the parameter K≥ K_0 in such a way that

C^2K^{-1}≤ 1/64,

the last term in the right-hand side will be non-positive. It is crucial for us that we may fix K in such a way that this property holds for all N simultaneously. Obviously, the first two terms in the right-hand side of (<ref>) will also be non-positive if N is large enough. Thus, in the case ‖P_N w‖_H^2_per≤ 2R, we may take

α(w):=α̅=(λ_{2N+1}+λ_{2N})/2

and the strong cone condition will be satisfied.

Let us now consider the second case, where ‖P_N w(t)‖_H^2_per>2R. In this case, the auxiliary map T is really helpful. Indeed, according to Lemma <ref>,

(T'(w)ξ,-AP_Nξ)≤ -1/2‖AP_Nξ‖_L^2_per^2≤ -λ_{2N}/2 ‖P_Nξ‖^2_H^1_per=-λ_{2N}/4 ‖P_Nξ‖^2_H^1_per+ λ_{2N}/4 (‖Q_Nξ‖^2_H^1_per-‖P_Nξ‖^2_H^1_per)- λ_{2N}/4 ‖Q_Nξ‖^2_H^1_per=λ_{2N}/4 V(ξ)-λ_{2N}/4 ‖ξ‖^2_H^1_per.

Using now that the norm ‖Θ'(w)‖_H^1_per vanishes if ‖w‖_H^1_per is large and, consequently, ‖Θ'(w)‖_H^1_per‖w‖_H^1_per≤C̅_1, we estimate the first term in the right-hand side of (<ref>) as follows:

|(Θ'(w),ξ)_H^1(∂_x w,AQ_Nξ-AP_Nξ)|≤‖Θ'(w)‖_H^1_per‖w‖_H^1_per‖ξ‖_H^1_per‖ξ‖_H^2_per≤C̅_1‖ξ‖_H^1_per‖ξ‖_H^2_per≤ 8C̅_1^2λ_{2N+1}/(λ_{2N+1}-λ_{2N}) ‖ξ‖^2_H^1_per+ (λ_{2N+1}-λ_{2N})/(32λ_{2N+1}) ‖ξ‖^2_H^2_per.

Thus, the analogue of (<ref>) for the second case reads

1/2 d/dt V(ξ) + (α̅-λ_{2N}/4)V(ξ)+μ‖ξ‖^2_H^2_per≤ -((λ_{2N+1}-λ_{2N})/8-C_K)‖ξ‖^2_H^1_per- (λ_{2N}/4 -8C̅_1^2λ_{2N+1}/(λ_{2N+1}-λ_{2N}))‖ξ‖^2_H^1_per-((λ_{2N+1} - λ_{2N})/(32λ_{2N+1})-2C^2K^{-1}/(λ_{2N+1} - λ_{2N}))‖ξ‖^2_H^2_per.

We see that the third term in the right-hand side is non-positive if the parameter K satisfies exactly the same assumption (<ref>) as in the first case (in particular, it can be fixed independently of N). Moreover, since λ_{2N}∼ N^2 and λ_{2N+1}-λ_{2N}∼ 2N+1, the first and second terms are also non-positive if N is large enough. Thus, we are able to fix the parameters K and N in such a way that the right-hand sides of both inequalities (<ref>) and (<ref>) are non-positive. Let us now introduce the function

α(w):= (λ_{2N+1}+λ_{2N})/2 if ‖P_N w‖_H^2_per≤ 2R, and α(w):= (λ_{2N+1}+λ_{2N})/2-λ_{2N}/4 if ‖P_N w‖_H^2_per>2R.

Then, we have proved that, for all sufficiently large N, the strong cone inequality

1/2 d/dt V(ξ(t))+α(w(t))V(ξ(t))≤-μ‖ξ(t)‖^2_H^2_per

is satisfied, and the theorem is proved.

§ VECTOR CASE: A COUNTEREXAMPLE

In this section, we show that, in contrast to the scalar case considered above, a (Lipschitz continuous) IM may not exist in the case of a system of RDA equations (<ref>) (i.e., if m>1). Analogously to <cit.>, our counterexample is built upon a counterexample to the Floquet theory for linear RDA equations with time-periodic coefficients. Namely, we consider the following system of linear RDA equations:

∂_t u-∂_x^2 u+f(t,x)∂_x u+g(t,x)u=0

endowed with periodic boundary conditions. We assume that u=(v(t,x),u(t,x)), where the unknown functions as well as the given 2T-periodic in time functions f and g are complex valued, so we consider a system of two coupled complex-valued RDA equations. Of course, separating the real and imaginary parts of the functions v and u, we may rewrite it as a system of four real-valued RDA equations with respect to u=(v_Re,v_Im,u_Re,u_Im), but preserving the complex structure is more convenient for our purposes. The main idea is to construct the functions f and g in such a way that all solutions u(t) decay faster than exponentially as t→∞.
If such functions are constructed, the standard trick of producing the space-time periodic functions f and g as particular solutions of some extra nonlinear RDA system will give us a super-exponentially attracting limit cycle inside the global attractor (see e.g. <cit.>), which clearly contradicts the existence of an IM for the full system.

We first recall that, at least for smooth functions f and g, equation (<ref>) is well-posed in the phase space Φ (this can be established analogously to Theorem <ref>) and generates a dissipative dynamical process {U(t,τ), t≥τ, τ∈ℝ} in the phase space Φ via

U(t,τ)u_τ:= u(t), U(t,s)=U(t,τ)∘ U(τ,s), t≥τ≥ s,

where the function u(t) solves (<ref>) with the initial data u|_{t=τ}= u_τ∈Φ. In particular, since the functions f and g are 2T-periodic in time, the long-time behavior of solutions of (<ref>) is completely determined by the iterations of the period map P:=U(2T,0). Since, due to the smoothing property, the linear operator P is compact, its spectrum consists of {0} as an essential spectrum and an at most countable number of non-zero eigenvalues of finite multiplicity. It is well-known that any eigenvalue μ≠0 of this operator generates the so-called Floquet-Bloch solutions of (<ref>) of the form

u_{μ,n}(t):=t^{n-1}e^{ν t}Q_{n-1}(t),

where ν:=1/(2T) ln μ, n≥1 does not exceed the algebraic multiplicity of the eigenvalue μ, and Q_n(t) are 2T-periodic Φ-valued functions. It is also known that, at least on the level of abstract parabolic equations in a Hilbert space, the linear combinations of Floquet-Bloch solutions need not be dense in the space of all solutions. Moreover, the point spectrum of the operator P may be empty, which means that σ(P)={0}, see <cit.> for more details. According to the Gelfand spectral radius formula, this is the case when all solutions of problem (<ref>) decay faster than exponentially as t→∞, and this is exactly the case in which we are interested. The next theorem gives the desired example of functions f and g such that (<ref>) is satisfied. To the best of our knowledge, similar examples have previously been known only for abstract parabolic equations (with non-local nonlinearities), but not for systems of second-order parabolic PDEs.

Let us consider the system of convective reaction-diffusion equations with periodic boundary conditions

∂_t w =∂^2_x w + f(w)∂_x w+g(w), x∈(0,2π), w|_{t=0}=w_0,

where w=(w^1(t,x), ..., w^n(t,x)) is an unknown vector-valued function and f and g are some nonlinear functions. Our goal is to find such functions f and g for which there exist two trajectories w_1 and w_2 such that:

1. w_1 and w_2 belong to the attractor;

2. ‖w_1(t) - w_2(t)‖_L^2≤ C e^{-γ t^3}‖w_1(0) - w_2(0)‖_L^2, t≥ 0,

for some positive constants C and γ.

Thus the main result of this paper is the following theorem.

There exist nonlinear functions f and g such that the system (<ref>) of 8 convective reaction-diffusion equations with periodic boundary conditions does not possess a Lipschitz continuous inertial manifold and/or a Lipschitz continuous inertial form.

The proof of this result is based on the ideas of <cit.>. Let us consider the auxiliary problem

∂_t w =∂_x^2 w + f(t,x)∂_x w + g(t,x)w,

where w=(v(t,x), u(t,x)) is an unknown vector-valued function and f(t,x), g(t,x) are given time-periodic functions with period 2T. Denote by U(t,s), s∈ℝ, t≥ s, the solution operator generated by this equation:

U(t,s)w(s):= w(t).
Then the following theorem is valid.

For every sufficiently large T there exist smooth functions f(t,x)∈ℒ(ℂ^2,ℂ^2) and g(t,x)∈ℂ^2 which are 2T-periodic in time and 2π-periodic in space such that all solutions of equation (<ref>) decay faster than exponentially as t→∞. Namely, the Poincaré map P:=U(2T,0) associated with equation (<ref>) satisfies the following properties:

P ([ 1; 0 ])e_n = ([ μ_n; 0 ])e_{n+1}, P([ 0; 1 ])e_n = ([ 0; ν_{n-1} ])e_{n-1}, n∈ℤ,

where μ_n and ν_n are some multipliers such that |μ_n|, |ν_n| ≤ e^{-Kn^2} for some K>0. Moreover, all solutions of (<ref>) decay super-exponentially:

‖u(t)‖_L^2≤ C e^{-γ t^3}‖u(0)‖_L^2,

where the positive constants C and γ are independent of u(0)∈ L^2(-π,π).

Let now e_n:=e^{inx}, n∈ℤ, be the eigenvectors of the operator -∂_x^2 acting in the space of complex-valued functions. Obviously, the corresponding eigenvalues are λ_n=n^2. The following simple formulas are, however, crucial for the construction of our counterexample:

e_{n+1}=e^{ix}e_n, (∂_x^2 +2i∂_x-1)e_n=-λ_{n+1}e_n, n∈ℤ.

Keeping in mind that our equation has two components u=(v,u), we introduce the following base vectors in [L^2_per(-π,π)]^2:

e_n^v:=(1,0)^T e_n, e_n^u:=(0,1)^T e_n.

Then the vectors {e_n^v,e_n^u}_{n∈ℤ} form an orthogonal base in the space [L^2_per(-π,π;ℂ)]^2. Moreover, these are the eigenvectors of the unperturbed problem (<ref>) (with f=g=0) corresponding to the eigenvalue λ_n=n^2 and, for every n≠0, the corresponding eigenspace is spanned by {e_{±n}^v,e^u_{±n}} and therefore has complex dimension 4 (and real dimension 8). We intend to construct the functions f and g in such a way that the corresponding period map has the following properties:

P e_n^v=μ_n e_{n+1}^v, P e_n^u=ν_n e_{n-1}^u, n∈ℤ,

where -μ_n and -ν_n are some positive multipliers such that

|μ_n|+|ν_n| ≤ e^{-KT n^2}

for some K>0 independent of n. Indeed, assume that such an example is constructed. Then, clearly, the point spectrum of P is empty and, moreover, the following estimate holds:

‖P^N e_n^v‖_L^2≤ e^{-KT∑_{k=n}^{n+N}k^2} = e^{-KT/6 (N+1)(2N^2+6Nn+6n^2+N)}≤ Ce^{-γ N^3},

for some γ>0 which is independent of N and n (here we have implicitly used the positivity of the quadratic form 2N^2+6Nn+6n^2). Arguing analogously, we have

‖P^N e_n^u‖_L^2≤ e^{-KT∑_{k=n-N}^{n}k^2}≤ Ce^{-γ N^3}.

These estimates, together with (<ref>), imply that

‖P^N‖_ℒ(L^2,L^2)≤ Ce^{-γ N^3}.

Thus, estimate (<ref>) is verified, and we only need to construct the functions f and g for which the period map P of equation (<ref>) satisfies (<ref>). Roughly speaking, similarly to <cit.>, we initially take the unperturbed equation

∂_t u=∂_x^2 u

and split the time interval [0,2T] into two parts, [0,T] and [T,2T]. On the first interval, we shift the spectrum of the v-component by adding the term 2i∂_x v-v (after this shift the vectors e_n^v and e_{n+1}^u lie in the eigenspace which corresponds to the eigenvalue λ_{n+1}) and switch on the "rotation" in the plane spanned by {e_n^v,e_{n+1}^u} by adding the proper anti-symmetric term. This suggests the following form of the perturbed equation on the first half-period:

∂_t v=∂_x^2 v+(2i∂_x v-v)-ε e^{-ix}u, ∂_t u=∂_x^2 u+ε e^{ix}v, t∈[0,T].

The parameter ε>0 should be chosen in such a way that the half-period map U(T,0) rotates the direction of e_n^v into the direction of e_{n+1}^u and vice versa. On the second half-period, we need no shift and just add the "rotation" terms:

∂_t v=∂_x^2 v-ε u, ∂_t u=∂_x^2 u+ε v, t∈[T,2T],

where we again choose ε>0 in such a way that the half-period map U(2T,T) rotates the direction of e^v_n into the direction of e_n^u and vice versa.
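Two computational ingredients of this construction are easy to verify symbolically: the action of the shifted operator ∂_x^2+2i∂_x-1 on e_n, and the closed form of the exponent sum in the super-exponential estimate. A short sympy check (ours, not part of the paper):

```python
import sympy as sp

x = sp.symbols('x', real=True)
n, N = sp.symbols('n N', integer=True, nonnegative=True)

# shifted operator annihilates e_n up to the eigenvalue -(n+1)^2 = -lam_{n+1}:
e_n = sp.exp(sp.I * n * x)
shifted = sp.diff(e_n, x, 2) + 2 * sp.I * sp.diff(e_n, x) - e_n
assert sp.simplify(shifted + (n + 1)**2 * e_n) == 0

# closed form of the exponent sum_{k=n}^{n+N} k^2 used in the decay estimate:
k = sp.symbols('k', integer=True)
s = sp.summation(k**2, (k, n, n + N))
closed = sp.Rational(1, 6) * (N + 1) * (2 * N**2 + 6 * N * n + 6 * n**2 + N)
assert sp.simplify(s - closed) == 0
```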
Then, as is not difficult to see, the composition P=U(2T,T)∘ U(T,0) satisfies relations (<ref>), and the required estimates for μ_n and ν_n hold as well. Thus, the above arguments allow us to construct the desired counterexample in the class of piecewise constant (in time) periodic functions f and g. However, in order to build the counterexample to inertial manifolds, we need the functions f and g to be smooth, so we need to "smoothify" our construction by introducing properly chosen cut-off functions. Namely, let us fix an auxiliary 2T-periodic function y(t) satisfying the following assumptions:

1. y(t) is odd and y(T-t) = y(t) for all t;
2. y(t) has a maximum point at t=T/2 and y(T/2) = 1;
3. y''(t)≤ 0 for 0<t<T and y'(t)> 0 for 0<t<T/2.

One possible choice of y(t) is sin(π t/T). In addition, we introduce a pair of smooth non-negative cut-off functions θ_1 and θ_2:

θ_1(y) = 0 for y ≤ 1/4, θ_1(y)=1 for y ≥ 1/2; θ_2(y) = 0 for y ≤ 0, θ_2(y)=1 for y ≥ 1/4.

Now we are ready to introduce the desired equations:

∂_t v =∂_x^2 v + (2i∂_x v - v)θ_2(y) - ε e^{-ix}uθ_1(y) - ε uθ_1(-y),
∂_t u =∂_x^2 u + ε e^{ix}vθ_1(y) + ε vθ_1(-y),

where ε is a small parameter which will be chosen later in such a way that on the first half-period

U(T,0)e_n^v = K_n^+ e_{n+1}^u and U(T,0)e_n^u = C_n^+ e_{n-1}^v,

and on the second half-period

U(2T,T)e_{n+1}^u =K_n^- e_{n+1}^v and U(2T,T)e_{n-1}^v =C_n^- e_{n-1}^u,

for some contraction factors K^+_n, C^+_n, K^-_n, C^-_n. We claim that the proposed equations satisfy all the assumptions of the theorem. Indeed, let us first consider equations (<ref>) on the half-period [0,T], where, due to the specific form of the cut-off functions θ_1(y), θ_2(y) and the time-periodic function y(t), they take the form

∂_t v =∂_x^2 v + (2i∂_x v - v)θ_2(y) - ε e^{-ix}uθ_1(y), ∂_t u =∂_x^2 u + ε e^{ix}vθ_1(y).

We fix T_0 such that y(T_0)=1/4. Then on the intervals [0,T_0] and [T-T_0,T] equations (<ref>) become decoupled:

∂_t v =∂_x^2 v + (2i∂_x v - v)θ_2(y), ∂_t u =∂_x^2 u.

Writing these equations in Fourier coordinates, we obtain

d/dt v_n = -(n^2 + (2n+1)θ_2(y))v_n, d/dt u_n = -n^2 u_n.

Therefore,

U(T_0,0)e_n^v =e^{-T_0 n^2 - (2n+1)∫_0^{T_0}θ_2(y(t))dt}e_n^v, U(T_0,0)e_n^u =e^{-T_0 n^2}e_n^u,

and

U(T,T-T_0)e_n^v =e^{-T_0 n^2 - (2n+1)∫^T_{T-T_0}θ_2(y(t))dt}e_n^v, U(T,T-T_0)e_n^u =e^{-T_0 n^2}e_n^u.

Let us turn to the map U(T-T_0,T_0). The specific choice of the cut-off functions allows us to rewrite equation (<ref>) on this interval in the form

∂_t v =∂_x^2 v + 2i∂_x v - v - ε e^{-ix}uθ_1(y), ∂_t u =∂_x^2 u + ε e^{ix}vθ_1(y).

Since e_n = e^{inx} and consequently e_{n+1} = e^{ix}e_n, after writing down our equations in Fourier modes, the equation for v_n is coupled with the equation for u_{n+1}:

d/dt v_n = -(n+1)^2 v_n - ε u_{n+1}θ_1(y), d/dt u_{n+1} = -(n+1)^2 u_{n+1} + ε v_nθ_1(y).

To study these equations we introduce the polar coordinates v_n + iu_{n+1} = R_n e^{iϕ_n}, which leads to two separate equations for the radial and angular coordinates:

d/dt R_n = -(n+1)^2 R_n, d/dtϕ_n = εθ_1(y(t)).

Fixing

ε:=π/(2∫_{T_0}^{T-T_0}θ_1(y(t))dt),

we see that U(T-T_0,T_0) restricted to span{e_n^v, e_{n+1}^u} is a composition of the rotation by the angle π/2 and a proper contraction; more precisely:

U(T-T_0,T_0)e_n^v =e^{-(T-2T_0)(n+1)^2}e_{n+1}^u, U(T-T_0,T_0)e_{n+1}^u =-e^{-(T-2T_0)(n+1)^2}e_n^v.

Taking the composition of the maps U(T_0,0), U(T-T_0,T_0) and U(T,T-T_0), we have

U(T,0)e_n^v =e^{-T_0 n^2 - (2n+1)∫_0^{T_0}θ_2(y(t))dt}e^{-(T-2T_0)(n+1)^2}e^{-T_0(n+1)^2}e_{n+1}^u,

and

U(T,0)e_{n+1}^u =-e^{-T_0(n+1)^2}e^{-(T-2T_0)(n+1)^2}e^{-T_0 n^2 - (2n+1)∫_{T-T_0}^Tθ_2(y(t))dt}e_n^v.
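The quarter-turn mechanism on [T_0,T-T_0] can be reproduced numerically. The sketch below is our own illustration: the piecewise-linear θ_1 and the values T=10, n=2 are arbitrary stand-ins for the smooth cut-off and the parameters of the text. It integrates the coupled (v_n,u_{n+1}) system with RK4 and shows that e_n^v is carried to the e_{n+1}^u direction with contraction factor close to e^{-(T-2T_0)(n+1)^2}.

```python
import numpy as np

T, n = 10.0, 2                                     # hypothetical half-period and mode index
y = lambda t: np.sin(np.pi * t / T)
theta1 = lambda s: np.clip((s - 0.25) / 0.25, 0.0, 1.0)   # stand-in with the stated plateaus

T0 = T / np.pi * np.arcsin(0.25)                   # y(T0) = 1/4
tt = np.linspace(T0, T - T0, 40001)
eps = np.pi / (2.0 * np.trapz(theta1(y(tt)), tt))  # the choice of eps made in the text

def rhs(t, z):                                     # z = (v_n, u_{n+1}) on [T0, T - T0]
    v, u = z
    mu = (n + 1) ** 2
    s = eps * theta1(y(t))
    return np.array([-mu * v - s * u, -mu * u + s * v])

z = np.array([1.0, 0.0])                           # start along e_n^v
h = tt[1] - tt[0]
for t in tt[:-1]:                                  # classical RK4
    k1 = rhs(t, z); k2 = rhs(t + h / 2, z + h / 2 * k1)
    k3 = rhs(t + h / 2, z + h / 2 * k2); k4 = rhs(t + h, z + h * k3)
    z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

c = np.exp(-(T - 2 * T0) * (n + 1) ** 2)           # predicted contraction factor
print(z / c)   # expect approximately [0, 1]: a quarter turn from e_n^v to e_{n+1}^u
```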
It remains to consider equations (<ref>) on the half-period [T,2T]:

∂_t v =∂_x^2 v - ε uθ_1(-y), ∂_t u =∂_x^2 u + ε vθ_1(-y);

the situation here is more or less similar to the case of the interval [0,T]. Indeed, due to the specific form of the cut-off function θ_1(y) and the periodic function y(t), on the time intervals [T,T+T_0] and [2T-T_0,2T] the equations are decoupled:

∂_t v =∂_x^2 v, ∂_t u =∂_x^2 u.

Therefore,

U(T+T_0,T)e_n^v =e^{-T_0 n^2}e_n^v, U(T+T_0,T)e_n^u =e^{-T_0 n^2}e_n^u,

and

U(2T,2T-T_0)e_n^v =e^{-T_0 n^2}e_n^v, U(2T,2T-T_0)e_n^u =e^{-T_0 n^2}e_n^u.

Equations (<ref>) on the interval [T+T_0,2T-T_0] have the form

∂_t v =∂_x^2 v - ε uθ_1(-y), ∂_t u =∂_x^2 u + ε vθ_1(-y),

and we see that in Fourier coordinates v_n is coupled with u_n, in contrast to the case of the interval [T_0,T-T_0], where v_n was coupled with u_{n+1}. Namely,

d/dt v_n = -n^2 v_n - ε u_nθ_1(-y), d/dt u_n = -n^2 u_n + ε v_nθ_1(-y).

As before, we introduce the polar coordinates v_n + iu_n = r_n e^{iψ_n} and obtain the following equations for the radial coordinate r_n and the angular coordinate ψ_n:

d/dt r_n = -n^2 r_n, d/dtψ_n =εθ_1(-y).

Substituting ε from (<ref>) and using the symmetry of y(t), we see that the phase ψ_n changes by π/2 on the interval [T+T_0,2T-T_0]. Thus the map U(2T-T_0,T+T_0) restricted to span{e_n^v,e_n^u} is a composition of a rotation and a contraction:

U(2T-T_0,T+T_0)e_n^v =e^{-(T-2T_0)n^2}e_n^u, and U(2T-T_0,T+T_0)e_n^u =-e^{-(T-2T_0)n^2}e_n^v.

Therefore the composition of the maps U(T+T_0,T), U(2T-T_0,T+T_0) and U(2T,2T-T_0) gives us

U(2T,T)e_n^v =e^{-T_0 n^2}e^{-(T-2T_0)n^2}e^{-T_0 n^2}e_n^u and U(2T,T)e_n^u =-e^{-T_0 n^2}e^{-(T-2T_0)n^2}e^{-T_0 n^2}e_n^v.

Formulas (<ref>), (<ref>), (<ref>) and (<ref>) guarantee that the Poincaré map P = U(2T,T)∘ U(T,0) satisfies properties (<ref>) with

μ_n = -e^{-2T(n+1)^2 - (2n+1)∫_0^{T_0}(θ_2(y(t))-1)dt} and ν_n = -e^{-2Tn^2 -(2n+1)T - (2n+1)∫_0^{T_0}(θ_2(y(t))-1)dt}.

Thus, the theorem is proved.

It follows from the explicit form of the functions f(t,x) and g(t,x) that they can be written in the form

g(t,x)=g̅(y(t),e^{ix}), f(t,x)=f̅(y(t),e^{ix})

for some C^∞-functions f̅ and g̅. Moreover, fixing y(t)=sin(π t/T), we may achieve that

g(t,x)=g̅(sin(π t/T),cos x,sin x), f(t,x)=f̅(sin(π t/T),cos x,sin x).

Moreover, as also follows from the construction, the functions f̅ and g̅ are linear with respect to sin x and cos x.

We turn now to the nonlinear case. For the reader's convenience, we start by discussing some known facts on Lipschitz manifolds, finite-dimensional reduction and attractors, see rom-th,rom-th1,28,Zel2 for more details.

A set ℳ is a Lipschitz submanifold of dimension N of a Hilbert space Φ if it can be presented locally as a graph of a Lipschitz continuous function. In other words, for any u_0∈ℳ, there exist ε=ε(u_0)>0, an open neighborhood 𝒱_{u_0} of u_0 in Φ, a projector 𝒫_{u_0}∈ℒ(Φ,Φ) of rank N and a Lipschitz continuous map M_{u_0}:𝒫_{u_0}Φ→(1-𝒫_{u_0})Φ such that

ℳ∩𝒱_{u_0}={u_++M_{u_0}(u_+), u_+∈ B(ε,𝒫_{u_0}u_0,𝒫_{u_0}Φ)}.

In particular, this means that

‖u-v‖_Φ≤ L_{u_0}‖𝒫_{u_0}(u-v)‖_Φ

for all u,v∈𝒱_{u_0} and some constant L_{u_0} which is independent of u and v. Note that there is an alternative definition of a Lipschitz manifold which is also widely used in the literature.
Namely, ℳ is a Lipschitz manifold in Φ of dimension N if for every u_0∈ℳ there exist a neighborhood 𝒱_{u_0} of u_0 in Φ, a number ε=ε(u_0)>0 and a bi-Lipschitz homeomorphism

M: B(ε,0,ℝ^N)→ℳ∩𝒱_{u_0}.

As elementary examples show, these two definitions are not equivalent (actually, the second one is weaker than the first one), so the choice of the proper definition becomes important. Our choice of the first definition is motivated by the following two reasons: 1) it naturally generalizes the concept of a submanifold from the smooth to the Lipschitz case and, to the best of our knowledge, all known constructions of inertial manifolds automatically give the structure (<ref>); 2) we do not know whether or not the key statement about the finite-dimensionality of the dynamics on an attractor embedded into a finite-dimensional Lipschitz manifold holds without the assumption (<ref>), see below.

Let now 𝒜⊂Φ be an attractor of the dissipative semigroup S(t) generated by an abstract semilinear parabolic equation (<ref>).

We say that the dynamics generated by S(t) on the attractor possesses a Lipschitz continuous inertial form if the following conditions are satisfied:

1) There exist N>0 and an injective Lipschitz map I:𝒜→ℝ^N such that I^{-1}:𝒜̅:=I(𝒜)→𝒜 is also Lipschitz continuous.

2) There exists a Lipschitz continuous vector field G on 𝒜̅⊂ℝ^N such that the projected semigroup S̅(t):=I∘ S(t)∘ I^{-1} on 𝒜̅ is the solution semigroup of the following ODEs:

d/dt U=G(U), U|_{t=0}=I(u_0), u_0∈𝒜.

This system of ODEs is then referred to as an inertial form associated with (<ref>).

We give below only several known facts on such inertial forms which are crucial for our purposes; more details can be found in rom-th,rom-th1.

Under the above assumptions, the Lipschitz continuous inertial form exists if and only if the semigroup S(t) restricted to the global attractor 𝒜 can be extended for negative times to a Lipschitz continuous group {S(t), t∈ℝ}. Moreover, the spectral projector P_N can then be used as the map I for sufficiently large N.

Indeed, in one direction the statement is obvious, since Lipschitz vector fields in ℝ^N generate Lipschitz continuous solution groups. The opposite direction is a bit more delicate and requires some effort, see <cit.>.

Under the above assumptions, the Lipschitz continuous inertial form exists if and only if there exists a finite-dimensional Lipschitz submanifold (not necessarily invariant) containing the global attractor 𝒜.

This statement is proved in <cit.>[Theorem 1.5] (actually, the existence of an inertial form is verified there under the extra assumption that the manifold is C^1-smooth, but this fact is used only in order to obtain estimate (<ref>), which in our case is incorporated into the definition of a Lipschitz submanifold, see Remark <ref>).

We are now ready to state and prove the second main result of the paper, on the non-existence of IMs for systems of RDAs.

There exists an example of the RDA system (<ref>) with the number of equations m=8 and nonlinearities f and g satisfying the assumptions of Theorem <ref> such that the associated global attractor is not a subset of any finite-dimensional Lipschitz continuous submanifold of the phase space Φ. In particular, this equation does not possess an inertial manifold.

Our strategy is the following: to verify the non-existence, we will find two trajectories u_1(t) and u_2(t) belonging to the attractor 𝒜 such that

‖u_1(t)-u_2(t)‖_Φ≤ Ce^{-γ t^3}.
The existence of such trajectories does not allow us to extend the solution semigroup S(t) on the attractor to a Lipschitz continuous group and, thanks to Proposition <ref>, the associated Lipschitz inertial form does not exist. Then, applying Proposition <ref>, we see that the embedding of the attractor into any Lipschitz submanifold is also impossible. Thus, it only remains to find the trajectories satisfying (<ref>).

We construct the desired example based on the counterexample given in Theorem <ref>, using (<ref>) and interpreting the functions y(t)=sin(π t/T), y_1(x)=e^{ix} as particular solutions of extra RDA equations. However, to fulfill the other assumptions, we need to modify equations (<ref>) slightly. Namely, let us introduce a cut-off function ϕ(ξ) such that ϕ(ξ) = 1 for |ξ|≤ 1/4 and ϕ(ξ) = 0 for |ξ|≥ 1/2 and consider the RDA system

∂_t u=∂_x^2 u+ϕ(|u|^2)(f(t,x)∂_x u+g(t,x)u)+(1-ϕ(|u|^2))(u-u|u|^2),

where f and g are exactly the same as in Theorem <ref>. Then, on the one hand, this system remains linear near the origin u=0, so u=0 is a super-exponentially attracting equilibrium. On the other hand, the presence of the nonlinearity of Ginzburg-Landau type makes the system dissipative and produces extra equilibria filling the sphere |u|=1.

Let P:=U(2T,0):Φ→Φ be the period map generated by the nonlinear equation (<ref>). Then, since (<ref>) is time-periodic, U(2nT,0)=P^n and the dynamics of (<ref>) is determined by the discrete semigroup S_per(n):=P^n generated by the iterations of the map P. In particular, as is not difficult to see arguing as in Theorem <ref>, this semigroup possesses a global attractor 𝒜_per which is a compact connected set in the phase space Φ. Obviously, the attractor contains all equilibria:

{0}∪{u∈ℝ^4 : |u|=1}⊂𝒜_per.

Furthermore, since 0 is locally asymptotically stable and the attractor is connected, there exists a non-trivial complete bounded trajectory u_2(t), t∈ℝ, such that u_2(t)→0 as t→∞. Finally, since (<ref>) coincides with (<ref>) in a neighbourhood of zero, from Theorem <ref> we conclude that

‖u_1(t)-u_2(t)‖_Φ≤ Ce^{-γ t^3}, u_1(t)≡ 0

(here we have implicitly used the smoothing property in order to obtain the attraction in the norm of Φ).

We are now ready to embed system (<ref>) into a larger autonomous and spatially homogeneous system of RDA equations. To this end, we note that the functions y(t,x):=e^{π it/T} and z(t,x):=e^{ix} solve the semilinear heat equations

∂_t y=∂_x^2 y+π i/T y+y(1-|y|^2), ∂_t z=∂_x^2 z+z(2-|z|^2),

respectively, so we may introduce the extended system

∂_t y=∂_x^2 y+π i/T y+y(1-|y|^2),
∂_t z=∂_x^2 z+z(2-|z|^2),
∂_t u=∂_x^2 u+ϕ(|u|^2)(f(y,z,z̄)∂_x u+g(y,z,z̄)u)+(1-ϕ(|u|^2))(u-u|u|^2).

The number of equations in this system is 2+2+4=8. Then, as is not difficult to see, the system (<ref>) of RDA equations is dissipative and possesses a global attractor 𝒜 in the phase space Φ. On the other hand, by construction, the trajectories U_1(t):=(e^{π it/T},e^{ix},u_1(t)) and U_2(t):=(e^{π it/T},e^{ix},u_2(t)), t∈ℝ, solve these equations. Moreover, since these are complete bounded trajectories, they belong to the global attractor 𝒜:

U_1(t), U_2(t)∈𝒜, t∈ℝ.

Finally,

‖U_1(t)-U_2(t)‖_Φ=‖u_1(t)-u_2(t)‖_Φ≤ Ce^{-γ t^3},

so the Lipschitz extension of S(t) on the attractor for negative times does not exist, and the attractor 𝒜 is not a subset of any Lipschitz finite-dimensional submanifold of Φ.
It only remains to note that, although the constructed nonlinearities formally do not satisfy the assumptions of Theorem <ref> since they do not have finite supports, this can be easily corrected by cutting off the nonlinearities outside of a large ball. Thus, the theorem is proved.

The obtained counterexample excludes embeddings of the global attractor into Lipschitz submanifolds, but does not forbid the existence of log-Lipschitz inertial forms and related embeddings into log-Lipschitz manifolds, which are of great current interest, see <cit.> and references therein. However, the constructed counterexample to the Floquet theory is the key point of the proof of the non-existence of such forms given in <cit.> for the case of abstract parabolic equations, so we expect that the analogous counterexample can be extended in a straightforward way to the case of RDA equations. Since the construction given in <cit.> is rather technical, we decided not to present it here.

§ CONCLUDING REMARKS

In this concluding section, we briefly discuss possible generalizations of the obtained results. We start with particular cases of systems where the IM still exists.

§.§ Vector case and existence of IMs

As the counterexample constructed in Theorem <ref> shows, we cannot expect the existence of IMs under general assumptions on the nonlinearity f. However, this a priori does not exclude the existence of IMs if the matrix f has some specific structure. In particular, this is the case if the matrix f(u) has a diagonal structure with only one non-zero entry on the diagonal:

f(u)=diag(f_1(u),⋯,f_m(u)) and f_i(u)=ψ(u)δ_{ij} for some j∈{1,⋯,m}.

It is worth emphasizing that, in contrast to the previous section, all functions are real-valued here.

Let the assumptions of Theorem <ref> hold and let, in addition, the nonlinearity f satisfy (<ref>) and (<ref>). Then problem (<ref>) possesses an IM in the phase space Φ.

Indeed, in this case we need to transform only one component of u=(u_1,⋯,u_m) via u_j(t,x)=a(t,x)w_j(t,x), and we obtain a scalar equation for the factor a which can be solved exactly as in Section <ref>. So, the IM can be constructed exactly as in the case of the scalar equation considered above.

Analogously to the case of Dirichlet or Neumann boundary conditions, this simple observation allows us to treat the case of a scalar quasilinear equation

∂_t u=∂_x^2 u+f(u,∂_x u)

with periodic boundary conditions. Indeed, differentiating this equation in x and denoting v=∂_x u, we end up with the system of RDA equations

∂_t u=∂_x^2 u+f(u,v), ∂_t v=∂_x^2 v+f'_u(u,v)v+f'_v(u,v)∂_x v,

which satisfies the assumptions of Proposition <ref> and, therefore, possesses an IM.

Note that in the constructed counterexample only two components of the matrix f(u) are non-zero, and this is enough to destroy the existence of IMs, so the assumptions of Proposition <ref> are in a sense sharp. Note also that this nonlinearity would satisfy assumptions (<ref>) and (<ref>) if we allowed the components of u to be complex-valued, so the assumption that u is real-valued is crucial for the validity of Proposition <ref>.

§.§ Mixed Dirichlet-Neumann boundary conditions

As shown in the first part of this work (see <cit.>), an IM exists for systems of RDAs in the case of Dirichlet boundary conditions as well as for Neumann boundary conditions. Surprisingly, this may no longer be the case if some components of the vector u=(u_1,⋯,u_m) are endowed with Dirichlet and the rest with Neumann boundary conditions.
To see this, we start with the counterexample constructed in Theorem <ref> for periodic boundary conditions, which we write here as

∂_t U=∂_x^2 U+f(U)∂_x U+g(U),

and introduce the functions

U_alt(t,x):= U(t,x)- U(t,2π-x), U_sym(t,x):= U(t,x)+ U(t,2π-x).

Then, obviously, the functions U_alt and U_sym satisfy the Dirichlet and Neumann boundary conditions respectively, and

U(t)=1/2(U_alt(t)+U_sym(t)).

On the other hand,

∂_t[U(t,2π-x)]=∂_x^2[U(t,2π-x)]-f(U(t,2π-x))∂_x[U(t,2π-x)]+g(U(t,2π-x)).

Taking the sum and the difference of this equation and equation (<ref>), we end up with the following equations:

∂_t U_sym=∂_x^2 U_sym+1/2(f((U_alt+U_sym)/2)∂_x(U_alt+U_sym)-f((U_sym-U_alt)/2)∂_x(U_sym-U_alt))+g((U_alt+U_sym)/2)+g((U_sym-U_alt)/2)

and

∂_t U_alt=∂_x^2 U_alt+1/2(f((U_alt+U_sym)/2)∂_x(U_alt+U_sym)+f((U_sym-U_alt)/2)∂_x(U_sym-U_alt))+g((U_alt+U_sym)/2)-g((U_sym-U_alt)/2).

The obtained system is a system of 16 RDA equations, the first 8 of which are endowed with Neumann boundary conditions and the remaining 8 with Dirichlet boundary conditions. Since the attractor of this system contains the attractor of system (<ref>), it also does not possess an inertial manifold. This example confirms once more that the existence or non-existence of IMs for systems of RDA equations strongly depends on the choice of boundary conditions.
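The boundary-condition bookkeeping behind this reduction is elementary and can be confirmed symbolically. A small sympy check (ours; U below is a generic trigonometric polynomial standing in for a smooth 2π-periodic solution):

```python
import sympy as sp

x = sp.symbols('x', real=True)
# generic smooth 2*pi-periodic function: a trigonometric polynomial with arbitrary coefficients
coeffs = [(sp.Rational(1, k + 1), sp.Rational(2, k + 2)) for k in range(4)]
U = sum(a * sp.cos((k + 1) * x) + b * sp.sin((k + 1) * x)
        for k, (a, b) in enumerate(coeffs)) + sp.Rational(1, 3)

U_alt = U - U.subs(x, 2 * sp.pi - x)
U_sym = U + U.subs(x, 2 * sp.pi - x)

# Dirichlet for the antisymmetric part, Neumann for the symmetric part, on [0, 2*pi]:
assert sp.simplify(U_alt.subs(x, 0)) == 0 and sp.simplify(U_alt.subs(x, 2 * sp.pi)) == 0
dU_sym = sp.diff(U_sym, x)
assert sp.simplify(dU_sym.subs(x, 0)) == 0 and sp.simplify(dU_sym.subs(x, 2 * sp.pi)) == 0
# and the original solution is recovered as the average:
assert sp.simplify(U - (U_alt + U_sym) / 2) == 0
```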
http://arxiv.org/abs/1702.08559v1
{ "authors": [ "Anna Kostianko", "Sergey Zelik" ], "categories": [ "math.AP", "35B40, 35B45" ], "primary_category": "math.AP", "published": "20170227221613", "title": "Inertial manifolds for 1D reaction-diffusion-advection systems. Part II: periodic boundary conditions" }
Lattice Coding and Decoding for Multiple-Antenna Ergodic Fading Channels Ahmed Hindy, Student member, IEEE and Aria Nosratinia, Fellow, IEEE The authors are with the department of Electrical Engineering, University of Texas at Dallas, Email: ahmed.hindy@utdallas.edu and aria@utdallas.edu This work was supported in part by the grant 1546969 from the National Science Foundation.

December 30, 2023
=========================================================================================================================================================================================================================================================================================================================

For ergodic fading, a lattice coding and decoding strategy is proposed and its performance is analyzed for the single-input single-output (SISO) and multiple-input multiple-output (MIMO) point-to-point channel as well as the multiple-access channel (MAC), with channel state information available only at the receiver (CSIR). At the decoder a novel strategy is proposed consisting of a time-varying equalization matrix followed by decision regions that depend only on channel statistics, not individual realizations. Our encoder has a similar structure to that of Erez and Zamir. For the SISO channel, the gap to capacity is bounded by a constant under a wide range of fading distributions. For the MIMO channel under Rayleigh fading, the rate achieved is within a gap to capacity that does not depend on the signal-to-noise ratio (SNR), and diminishes with the number of receive antennas. The analysis is extended to the K-user MAC, where similar results hold. Achieving a small gap to capacity while limiting the use of CSIR to the equalizer highlights the scope for efficient decoder implementations, since decision regions are fixed, i.e., independent of channel realizations.

Ergodic capacity, ergodic fading, lattice codes, MIMO, multiple-access channel.

§ INTRODUCTION

In practical applications, structured codes are favored due to computational complexity issues; lattice codes are an important class of structured codes that has gained special interest in the last few decades. An early attempt to characterize the performance of lattice codes in the additive white Gaussian noise (AWGN) channel was made by de Buda <cit.>; a result that was later corrected by Linder et al. <cit.>. Subsequently, Loeliger <cit.> showed the achievability of 1/2 log(SNR) with lattice coding and decoding. Urbanke and Rimoldi <cit.> showed the achievability of 1/2 log(1+SNR) with maximum-likelihood decoding. Erez and Zamir <cit.> demonstrated that lattice coding and decoding achieve the capacity of the AWGN channel using a method involving common randomness via a dither variable and minimum mean-square error (MMSE) scaling at the receiver. Subsequently, Erez et al. <cit.> proved the existence of lattices with good properties that achieve the performance promised in <cit.>. El Gamal et al. <cit.> showed that lattice codes achieve the capacity of the AWGN MIMO channel, as well as the optimal diversity-multiplexing tradeoff under quasi-static fading. Prasad and Varanasi <cit.> developed lattice-based methods to approach the diversity of the MIMO channel with low complexity.
Dayal and Varanasi <cit.> developed diversity-optimal codes for Rayleigh fading channels using finite-constellation integer lattices and maximum-likelihood decoding. Zhan et al. <cit.> introduced integer-forcing linear receivers as an efficient decoding approach that exploits the linearity of lattice codebooks. Ordentlich and Erez <cit.> showed that in conjunction with precoding, integer-forcing can operate within a constant gap to the MIMO channel capacity.Going beyond the point-to-point channel, Song and Devroye <cit.> investigated the performance of lattice codes in the Gaussian relay channel. Nazer and Gastpar <cit.> introduced the compute-and-forward relaying strategy based on the decoding of integer combinations of interfering lattice codewords from multiple transmitters. Compute-and-forward was also an inspiration for the development of integer-forcing <cit.>. Özgür and Diggavi <cit.> showed that lattice codes can operate within a constant gap to the capacity of Gaussian relay networks.Ordentlich et al. <cit.> proposed lattice-based schemes that operate within a constant gap to the sum capacity of the K-user MAC, and the sum capacity of a class of K-user symmetric Gaussian interference channels.On the other hand, a brief outline of related results on ergodic capacity is as follows. The ergodic capacity of the Gaussian fading channel was established by McEliece and Stark <cit.>. The capacity of the ergodic MIMO channel was established by Telatar <cit.> and Foschini and Gans <cit.>.The capacity region of the ergodic MIMO MAC was found by Shamai and Wyner <cit.>. The interested reader is also referred to the surveys on fading channels by Biglieri et al. <cit.> and Goldsmith et al. <cit.>. For the most part, lattice coding results so far have addressed channel coefficients that are either constant or quasi-static. Vituri <cit.> studied the performance of lattice codes with unbounded power constraint under regular fading channels.Recently, Luzzi and Vehkalahti <cit.> showed that a class of lattices belonging to a family of division algebra codes achieve rates within a constant gap to the ergodic capacity at all SNR, where the gap depends on the algebraic properties of the code as well as the antenna configuration. Unfortunately, the constant gap in <cit.> can be shown to be quite large at many useful antenna configurations, in addition to requiring substantial transmit power to guarantee any positive rate. Liu and Ling <cit.> showed that polar lattices achieve the capacity of the i.i.d. SISO fading channel. Campello et al. <cit.> also proved that algebraic lattices achieve the ergodic capacity of the SISO fading channel.In this paper we propose a lattice coding and decoding strategy and analyze its performance for a variety of MIMO ergodic channels, showing that the gap to capacity is small at both high and low SNR. The fading processes in this paper are finite-variance stationary and ergodic. First, we present a lattice coding scheme for the MIMO point-to-point channel under isotropic fading, whose main components include the class of nested lattice codes proposed in <cit.> in conjunction with a time-varying MMSE matrix at the receiver. The proposed decision regions are spherical and depend only on the channel distribution, and hence the decision regions remain unchanged throughout subsequent codeword transmissions. 
[Although the decision regions are designed independently of the channel realizations, the received signal is multiplied by an MMSE matrix prior to decoding, and hence channel knowledge at the receiver remains necessary for the results in this paper.] The relation of the proposed decoder to Euclidean lattice decoding is also discussed. The rates achieved are within a constant gap to the ergodic capacity for a broad class of fading distributions. Under Rayleigh fading, a bound on the gap to capacity is explicitly characterized, which vanishes as the number of receive antennas grows. Similar results are also derived for the fading K-user MIMO MAC.

The proposed scheme provides useful insights on the implementation of MIMO systems under ergodic fading. First, the results reveal that structured codes can achieve rates within a small gap to capacity. Moreover, channel-independent decision regions approach optimality when the number of receive antennas is large. Furthermore, for the special case of SISO channels the gap to capacity is characterized for all SNR values and over a wide range of fading distributions. Unlike <cit.>, the proposed scheme achieves positive rates at low SNR, where the gap to capacity vanishes. At moderate and high SNR, the gap to capacity is bounded by a constant that is independent of SNR and only depends on the fading distribution. In the SISO channel under Rayleigh fading, the gap is a diminishing fraction of the capacity as the SNR increases.[Earlier versions of the SISO and MIMO point-to-point results of this paper appeared in <cit.>; these results are improved in the current paper in addition to producing extensions to MIMO MAC.]

Throughout the paper we use the following notation. Boldface uppercase and lowercase letters denote matrices and column vectors, respectively. The sets of real and complex numbers are denoted ℝ and ℂ, respectively. A^T, A^H denote the transpose and Hermitian transpose of matrix A, respectively. a_i denotes element i of a. A≽B indicates that A-B is positive semi-definite. det(A) and tr(A) denote the determinant and trace of A, respectively. ℙ(·) and 𝔼[·] denote the probability and expectation operators, respectively. ℬ_n(q) is an n-dimensional sphere of radius q, and the volume of an arbitrary shape 𝒜 is Vol(𝒜). All logarithms are in base 2.

§ OVERVIEW OF LATTICE CODING

A lattice Λ is a discrete subgroup of ℝ^n which is closed under reflection and real addition. The fundamental Voronoi region 𝒱 of the lattice Λ is defined by

𝒱 = {s: argmin_{λ∈Λ}‖s-λ‖ = 0}.

The second moment per dimension of Λ is defined as

σ_Λ^2 = 1/(n Vol(𝒱)) ∫_𝒱 ‖s‖^2 ds,

and the normalized second moment G(Λ) of Λ is

G(Λ) = σ_Λ^2/Vol(𝒱)^{2/n},

where G(Λ) > 1/(2π e) for any lattice in ℝ^n. Every s∈ℝ^n can be uniquely written as s=λ+e, where λ∈Λ and e∈𝒱. The quantizer is then defined by

Q_Λ(s)=λ, if s∈λ+𝒱.

Define the modulo-Λ operation corresponding to 𝒱 as follows:

[s]modΛ ≜ s- Q_Λ(s).

The modΛ operation also satisfies

[s + t]modΛ = [s + [t]modΛ]modΛ for all s,t∈ℝ^n.

The lattice Λ is said to be nested in Λ_1 if Λ⊆Λ_1. We employ the class of nested lattice codes proposed in <cit.>. The transmitter constructs a codebook 𝒞 = Λ_1∩𝒱, whose rate is given by

R=1/n log(Vol(𝒱)/Vol(𝒱_1)).

The coarse lattice Λ has an arbitrary second moment σ_Λ^2 and is good for covering and quantization, and the fine lattice Λ_1 is good for AWGN coding, where both are construction-A lattices <cit.>.
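The mod-Λ algebra above is easy to exercise on a toy pair of nested lattices. The Python sketch below is our own illustration; it uses the scaled integer lattices Λ=qℤ^n ⊆ Λ_1=ℤ^n rather than the construction-A lattices of the paper, but the same identities hold.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 4, 5                                 # toy dimensions; coarse lattice q*Z^n, fine lattice Z^n

Q_coarse = lambda s: q * np.round(s / q)    # Q_Lambda for Lambda = q*Z^n
mod_coarse = lambda s: s - Q_coarse(s)      # [s] mod Lambda, lands in the cube Voronoi region

# distributive property [s + t] mod = [s + [t] mod] mod:
s, t = rng.uniform(-20, 20, size=(2, n))
assert np.allclose(mod_coarse(s + t), mod_coarse(s + mod_coarse(t)))

# dithering: x = [t - d] mod Lambda stays in the Voronoi region whatever the codeword t is
d = rng.uniform(-q / 2, q / 2, size=n)      # dither uniform over the Voronoi region
t_point = rng.integers(-2, 3, size=n).astype(float)   # a fine-lattice codeword
x = mod_coarse(t_point - d)
assert np.all(np.abs(x) <= q / 2 + 1e-12)
```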
The existence of such lattices has been proven in <cit.>. A lattice Λ is good for covering if

lim_{n→∞} 1/n log(Vol(ℬ_n(R_c))/Vol(ℬ_n(R_f)))=0,

where the covering radius R_c is the radius of the smallest sphere spanning 𝒱 and R_f is the radius of the sphere whose volume is equal to Vol(𝒱). In other words, for a good nested lattice code with second moment σ_Λ^2, the Voronoi region 𝒱 approaches a sphere of radius √(nσ_Λ^2). A lattice Λ is good for quantization if

lim_{n→∞} G(Λ) = 1/(2π e).

A key ingredient of the lattice coding scheme proposed in <cit.> is the use of common randomness (dither) d in conjunction with the lattice code at the transmitter. d is also known at the receiver, and is drawn uniformly over 𝒱.

<cit.> If t∈𝒞 is independent of d, then x is uniformly distributed over 𝒱 and independent of the lattice point t.

<cit.> An optimal lattice quantizer with second moment σ_Λ^2 is white, and the autocorrelation of its dither d_opt is given by

𝔼[d_opt d_opt^T] = σ_Λ^2 I_n.

Note that the optimal lattice quantizer is the lattice quantizer with the minimum G(Λ). Since the proposed class of lattices is good for quantization, the autocorrelation of d approaches that of d_opt as n increases. For a more comprehensive review of lattice codes see <cit.>.

§ POINT-TO-POINT CHANNEL

§.§ MIMO channel

Consider a MIMO point-to-point channel with N_t transmit antennas and N_r receive antennas. The received signal at time instant i is given by

y_i=H_i x_i+w_i,

where H_i is an N_r×N_t matrix denoting the channel coefficients at time i. The channel is zero-mean with strict-sense stationary and ergodic time-varying gain. Moreover, H is isotropically distributed, i.e., p(H)=p(HU) for any unitary matrix U independent of H. The receiver has instantaneous channel knowledge, whereas the transmitter only knows the channel distribution. x_i∈ℝ^{N_t} is the transmitted vector at time i, where the codeword x≜[x_1^T, x_2^T, …, x_n^T]^T is transmitted throughout n channel uses and satisfies 𝔼[‖x‖^2]≤ nP. The noise w∈ℝ^{N_r n}, defined by w^T≜[w_1^T, w_2^T, …, w_n^T], is a zero-mean i.i.d. Gaussian noise vector with covariance I_{N_r n}, and is independent of the channel realizations. For convenience, we define the SNR per transmit antenna to be ρ≜ P/N_t.

For the ergodic fading MIMO channel with isotropic fading, any rate R satisfying

R < -1/2 log det(𝔼[(I_{N_t}+ρ H^T H)^{-1}])

is achievable using lattice coding and decoding.

Encoding: Nested lattice codes are used, where Λ⊆Λ_1. The transmitter emits a lattice point t∈Λ_1 that is dithered with d, which is drawn uniformly over 𝒱. Λ has second moment ρ and is good for covering and quantization, and Λ_1 is good for AWGN coding, where both are construction-A lattices <cit.>. The dithered codeword is then as follows:

x=[t - d]modΛ = t-d+λ,

where λ = - Q_Λ(t-d) ∈Λ from (<ref>). The coarse lattice Λ⊂ℝ^{N_t n} has second moment ρ. The codeword is composed of n vectors x_i, each of length N_t, as shown in (<ref>), which are transmitted throughout the n channel uses.

Decoding: The received signal can be expressed in the form y=H_s x+w, where H_s is a block-diagonal matrix whose diagonal block i is H_i. The received signal y is multiplied by a matrix B_s∈ℝ^{N_r n×N_t n} and the dither is removed as follows:

y' ≜ B_s^T y+d = t+(B_s^T H_s - I_{N_t n})x+B_s^T w+λ = t+λ+z,

where

z ≜ (B_s^T H_s - I_{N_t n})x+B_s^T w,

and z is independent of t, according to Lemma <ref>. The matrix B_s that minimizes 𝔼[‖z‖^2] is then a block-diagonal matrix whose diagonal block i is the N_r×N_t MMSE matrix at time i given by

B_i = ρ(I_{N_r}+ρ H_i H_i^T)^{-1}H_i.
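Before deriving the equivalent noise, the MMSE choice of B_i can be sanity-checked numerically: the per-symbol error covariance it induces is ρ(I_{N_t}+ρH_i^T H_i)^{-1}, which is exactly what the matrix-inversion-lemma step below produces. A short numpy check (ours; the Gaussian H is just a stand-in realization):

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, rho = 2, 3, 1.5          # toy sizes and SNR, chosen arbitrarily for the check

H = rng.normal(size=(Nr, Nt))    # one channel realization
B = rho * np.linalg.inv(np.eye(Nr) + rho * H @ H.T) @ H   # per-symbol MMSE matrix above

# z = (B^T H - I) x + B^T w with E[x x^T] = rho*I and E[w w^T] = I, so
# cov(z) = rho (B^T H - I)(B^T H - I)^T + B^T B, which should equal rho (I + rho H^T H)^{-1}:
M = B.T @ H - np.eye(Nt)
cov_z = rho * M @ M.T + B.T @ B
assert np.allclose(cov_z, rho * np.linalg.inv(np.eye(Nt) + rho * H.T @ H))

# and, by the matrix inversion lemma, B^T H - I = -(I + rho H^T H)^{-1}:
assert np.allclose(M, -np.linalg.inv(np.eye(Nt) + rho * H.T @ H))
```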
From (<ref>) and (<ref>), the equivalent noise at time i, i.e., z_i∈ℝ^{N_t}, is expressed as

z_i = (ρ H_i^T(I_{N_r}+ρ H_i H_i^T)^{-1}H_i - I_{N_t})x_i + ρ H_i^T(I_{N_r}+ρ H_i H_i^T)^{-1}w_i = -(I_{N_t}+ρ H_i^T H_i)^{-1}x_i + ρ H_i^T(I_{N_r}+ρ H_i H_i^T)^{-1}w_i,

where (<ref>) holds from the matrix inversion lemma, and x≜[x_1^T, …, x_n^T]^T. Naturally, the distribution of z_i conditioned on H_i (which is known at the receiver) varies across time. For reasons that will become clear later, we need to get rid of this variation. Hence, we ignore the instantaneous channel knowledge, i.e., the receiver considers H_i a random matrix after equalization. The following lemma elaborates some geometric properties of z in the N_t n-dimensional space.

Let Ω be a sphere defined by

Ω ≜ {z∈ℝ^{N_t n}: ‖z‖^2≤ (1+ϵ) tr(Σ)}, where Σ ≜ ρ 𝔼[(I_{N_t n} + ρ H_s^T H_s)^{-1}].

Then, for any ϵ>0 and γ>0, there exists n_{γ,ϵ} such that for all n>n_{γ,ϵ}, ℙ(z∉Ω) < γ.

See Appendix <ref>.

We apply a version of the ambiguity decoder proposed in <cit.>, defined by the spherical decision region Ω in (<ref>). [Ω satisfies the condition in <cit.> of being a bounded measurable region of ℝ^{N_t n}, from (<ref>).] The decoder chooses t̂∈Λ_1 if the received point falls inside the decision region of the lattice point t̂, but not in the decision region of any other lattice point.

Error Probability: As shown in <cit.>, on averaging over the set of all good construction-A fine lattices 𝕃 of rate R, the probability of error can be bounded by

1/|𝕃| ∑_{Λ_1∈𝕃} P_e < ℙ(z∉Ω) + (1+δ)Vol(Ω)/Vol(𝒱_1) = ℙ(z∉Ω) + (1+δ) 2^{nR} Vol(Ω)/Vol(𝒱),

for any δ>0, where (<ref>) follows from (<ref>). This is a union bound involving two events: the event that the noise vector is outside the decision region, i.e., z∉Ω, and the event that the post-equalized point is in the intersection of two decision regions, i.e., {y' ∈ {t_1+Ω}∩{t_2+Ω}}, where t_1, t_2∈Λ_1 are two distinct lattice points. Owing to Lemma <ref>, the probability of the first event vanishes with n. Consequently, the error probability can be bounded by

1/|𝕃| ∑_{Λ_1∈𝕃} P_e < γ + (1+δ) 2^{nR} Vol(Ω)/Vol(𝒱),

for any γ,δ>0. For convenience define 𝒢 ≜ ρΣ^{-1}. Since, by isotropy, Σ is a scaled identity matrix, the volume of Ω can be written as

Vol(Ω) = (1+ϵ)^{N_t n/2} Vol(ℬ_{N_t n}(√(N_t nρ))) det(𝒢^{-1/2}).

The second term in (<ref>) is then bounded by

(1+δ) 2^{nR} (1+ϵ)^{N_t n/2} Vol(ℬ_{N_t n}(√(N_t nρ))) det(𝒢^{-1/2})/Vol(𝒱) = (1+δ) 2^{-N_t n(-1/(N_t n) log(Vol(ℬ_{N_t n}(√(N_t nρ)))/Vol(𝒱)) + ξ)},

where

ξ ≜ -1/2 log(1+ϵ) - 1/(2N_t n) log det(𝒢^{-1}) - 1/N_t R = -1/2 log(1+ϵ) - 1/(2N_t) log det(𝔼[(I_{N_t}+ρ H^T H)^{-1}]) - 1/N_t R.

From (<ref>), since the lattice Λ is good for covering, the first term of the exponent in (<ref>) vanishes. From (<ref>), whenever ξ is a positive constant we have P_e→0 as n→∞, where ξ is positive as long as

R < -1/2 log det(𝔼[(I_{N_t}+ρ H^T H)^{-1}]) - N_t/2 log(1+ϵ) - ϵ',

where ϵ,ϵ' are positive constants that can be made arbitrarily small by increasing n. From (<ref>), the outcome of the decoding process in the event of successful decoding is t'=t+λ, where the transformation of t by λ∈Λ does not involve any loss of information. Hence, on applying the modulo-Λ operation on t',

[t']modΛ = [t+λ]modΛ = t,

where the second equality follows from (<ref>) since λ∈Λ. Since the probability of error in (<ref>) is averaged over the set of lattices in 𝕃, there exists at least one lattice that achieves the same (or smaller) error probability. [The error analysis adopted in this work (which stems from <cit.>) is based on existence arguments over the ensemble of construction-A lattices, i.e., the proof shows that at least one realization of the lattice ensemble achieves the average error performance. However, there is no guarantee that all members of the ensemble perform similarly.] Following in the footsteps of <cit.>, the existence of a sequence of covering-good coarse lattices with second moment ρ that are nested in Λ_1 can be shown. The final step required to conclude the proof is extending the result to Euclidean lattice decoding, which is provided in the following lemma.

The error probability of the Euclidean lattice decoder given by [The Euclidean decoder in (<ref>) does not involve the channel realizations, unlike that in <cit.>.]

t̂ = [argmin_{t∈Λ_1} ‖y' - t‖^2] modΛ

is upper bounded by that of the ambiguity decoder in (<ref>).
However, no guarantee that all members of the ensemble would perform similarly.]Following in the footsteps of <cit.>, the existence of a sequence of covering-good coarse lattices with second moment  that are nested in _1 can be shown. The final step required to conclude the proof is extending the result to Euclidean lattice decoding, which is provided in the following lemma.The error probability of the Euclidean lattice decoder given by [The Euclidean decoder in (<ref>) does not involve the channel realizations, unlike that in <cit.>.]=[ argmin_∈_1 ||'- '||^2]mod is upper-bounded by that of the ambiguity decoder in (<ref>). Details of the proof of Lemma <ref> is provided in Appendix <ref>, whose outline is as follows. For the cases where the ambiguity decoder declares a valid output (' lies exclusively within one decision sphere), both the Euclidean lattice decoder and the ambiguity decoder with spherical regions would be identical, since a sphere is defined by the Euclidean metric. However, for the cases where the ambiguity decoder fails to declare an output (ambiguity or atypical received sequence), the Euclidean lattice decoder still yields a valid output, and hence is guaranteed to achieve the same (or better) error performance, compared to the ambiguity decoder. This concludes the proof of Theorem <ref>.The results can be extended to complex-valued channels with isotropic fading using a similar technique to that in <cit.>. The proof is omitted for brevity. For the ergodic fading MIMO channel with complex-valued channelsthat are known at the receiver, any rate R satisfying R < - log( [ (_N_t + ^H )^-1] )is achievable using lattice coding and decoding. We compare the achievable rate in (<ref>) with the ergodic capacity, given by <cit.>C = [ log ( _N_t + ^H)].The gapbetween the rate of the lattice scheme (<ref>) and the ergodic capacity in (<ref>) for the N_t × N_r ergodic fading MIMO channel is upper bounded by * N_r ≥ N_t and ≥ 1: For any channel for which all elements of [ ( ^H )^-1]< ∞ < log( ( _N_t +[ ^H] )[( ^H )^-1] ). * N_r > N_t and ≥ 1: When  is i.i.d. complex Gaussian with zero mean and unit variance, < N_tlog( 1+N_t+1/N_r-N_t). * N_t=1 and and < 1/ [ ||||^2 ]: When [ ||||^4 ] < ∞, < 1.45[ ||||^4 ]^2.See Appendix <ref>. The expression in (<ref>) for the Rayleigh fading case is depicted in Fig. <ref> for a number of antenna configurations. The gap-to-capacity vanishes with N_r for any ≥ 1.This result has two crucial implications. First, under certain antenna configurations, lattice codes approximate the capacity at finite SNR. Moreover, channel-independent decision regions approach optimality for large N_r. The results are also compared with that of the class of division algebra lattices proposed in <cit.>, whose gap-to-capacity is both larger and insensitive to N_r. For the square MIMO channel with N_t=N_r=2, the throughput of the proposed lattice scheme is plotted in Fig. <ref> and compared with that of <cit.>. The gap to capacity is also plotted, which show that for the proposed scheme the gap also saturates when N_t=N_r.Division algebra codes in <cit.> guarantee non-zero rates only above a per-antenna SNR threshold that is no less than 21 N_t -1 when N_t < N_r and [^H ] = _N_t, e.g., an SNR threshold of 10 dB for a 1× 2 channel. Our results guarantee positive rates at all SNR; for the single-input multiple-output (SIMO) channel at low SNR the proposed scheme has a gap on the order of ^2. 
Since at ≪ 1 we have C ≈[ ||h̃||^2 ] log e, the proposed scheme can be said to asymptotically achieve capacity at low SNR. Our results also show that the gap diminishes to zero as the number of receive antennas grows under Rayleigh fading. §.§ SISO channel For the case where each node is equipped with a single antenna, we find tighter bounds on the gap to capacity for a wider range of fading distributions. Without loss of generality let [|h̃|^2] = 1. The gap to capacity in the single-antenna case is given by = [ log( 1 +|h̃|^2 )] + log ( [ 1/1+ |h̃|^2]). In the following, we compute bounds on the gap for a wide range of fading distributions, at both high and low SNR values. When N_t=N_r=1, the gap to capacity is upper bounded as follows. * <1: For any fading distribution where [ |h̃|^4 ]< ∞, < 1.45[ |h̃|^4 ]^2. * ≥ 1: For any fading distribution where [ 1/|h̃|^2]< ∞, < 1 + log( [ 1/|h̃|^2] ). * ≥ 1: Under Nakagami-m fading with m>1, < 1 + log ( 1 + 1/m-1 ). * ≥ 1: Under Rayleigh fading, < 0.48 + log ( log (1+ )). See Appendix <ref>. Although the gap depends on the SNR under Rayleigh fading, it is a vanishing fraction of the capacity as the SNR increases, i.e., lim_→∞/C=0. Simulations are provided to give a better view of Corollary <ref>. First, the rate achieved under Nakagami-m fading with m=2 and the corresponding gap to capacity are plotted in Fig. <ref>. The performance is compared with that of the division algebra lattices from <cit.>. Similar results are also provided under Rayleigh fading in Fig. <ref>. A closely related problem appears in <cit.>, where lattice coding and decoding were studied under quasi-static fading MIMO channels with CSIR, and a realization of the class of construction-A lattices in conjunction with channel-matching decision regions (ellipsoidally shaped) was proposed. Unfortunately, this result by itself does not apply to ergodic fading because the application of the Minkowski-Hlawka Theorem <cit.>, on which the existence results of <cit.> depend, only guarantees the existence of a lattice for each channel realization, and is silent about the existence of a universal single lattice that is suitable for all channel realizations. This universality issue is the key challenge for showing results in the case of ergodic fading. [In <cit.> we attempted to show that for decoders employing channel-matching decision regions the gap to capacity vanishes; however, it was subsequently observed that <cit.> has not demonstrated the universality of the required codebooks.] The essence of the proposed lattice scheme in this section is approximating the ergodic fading channel (subsequent to MMSE equalization) with a non-fading additive-noise channel with lower SNR ' ≜α, where α≤ 1. The distribution of the (equivalent) additive noise term, , in the approximate model depends on the fading distribution but not on the realization, which allows fixed decision regions for all fading realizations. The SNR penalty factor α incurred from this approximation for the special case of N_t=N_r=1 is given by α = [ |h̃|^2/ |h̃|^2 + 1]/ [ 1/ |h̃|^2 + 1 ]. As shown in the gap analysis throughout the paper, the loss caused by this approximation is small under most settings. § MULTIPLE-ACCESS CHANNEL §.§ MIMO MAC Consider a K-user MIMO MAC with N_r receive antennas and N_t_k antennas at transmitter k. The received signal at time i is given by ^*_i= ^*_1,i^*_1,i + ^*_2,i^*_2,i + … + ^*_K,i^*_K,i + _i, where _1^*,…,_K^* are stationary and ergodic processes with zero-mean and complex-valued coefficients.
The noise is circularly-symmetric complex Gaussian with zero mean and unit variance, and user k has a total power constraint N_t_k^*_k. An achievable strategy for the K-user MIMO MAC is independent encoding for each antenna, i.e., user k demultiplexes its data into N_t_k data streams, and encodes each stream independently and transmits it through one of its antennas. The channel can then be analyzed as a SIMO MAC with L ≜∑_k=1^K N_t_k virtual users. The received signal is then given by _i= _1,ix̃_1,i + _2,ix̃_2,i + … + _L,ix̃_L,i + _i, where _ν(k)+1,i, …, _ν(k) + N_t_k,i denote the N_t_k column vectors of ^*_k,i, and ν(k) ≜∑_j=1^k-1 N_t_j. The virtual user ℓ in (<ref>) has power constraint _ℓ, such that _ν(k)+1 + … + _ν(k)+N_t_k = N_t_k^*_k , k=1,2,…,K. The MAC achievable scheme largely depends on the point-to-point lattice coding scheme proposed earlier, in conjunction with successive decoding. For the L-user SIMO MAC, there are L! distinct decoding orders, and the rate region is the convex hull of the L! corner points. We define the one-to-one function π(ℓ) ∈{ 1,2,…,L } that specifies a given decoding order. For example, π(1)=2 means that the codeword of user two is the first codeword to be decoded. For the L-user SIMO MAC with ergodic fading and complex-valued channel coefficients, lattice coding and decoding achieve the following rate region R_MAC≜ 𝐶𝑜( ⋃_π{ ( R_1,…,R_L): R_π(ℓ)≤ - log( [ 1/ 1 + _π(ℓ)_π(ℓ)^H_π(ℓ)^-1_π(ℓ)] )}), where _π(ℓ)≜_N_r +∑_j=ℓ+1^L _π(j)_π(j)_π(j)^H , and 𝐶𝑜(·) represents the convex hull of its argument, and the union is over all permutations (π(1),…,π(L) ). For ease of exposition we first assume the received signal is real-valued in the form _i = ∑_ℓ=1^L _ℓ,i_ℓ,i +_i. Encoding: The transmitted lattice codewords are given by _ℓ=[_ℓ - _ℓ]mod^(ℓ), ℓ=1,2,…,L, where each lattice point _ℓ is drawn from _1^(ℓ)⊇^(ℓ), and the dithers _ℓ are independent and uniform over ^(ℓ). The second moment of ^(ℓ) is _ℓ. Note that since transmitters have different rates and power constraints, each transmitter uses a different nested pair of lattices. The independence of the dithers across different users is necessary to guarantee that the L transmitted codewords are independent of each other. Decoding: The receiver uses time-varying MMSE equalization and successive cancellation over L stages, where in the first stage _π(1) is decoded in the presence of _π(2), …, _π(L) as noise, and then _π(1),i_π(1),i is subtracted from _i for i=1, …, n. Generally, in stage ℓ, the receiver decodes _π(ℓ) from _π(ℓ), where _π(ℓ),i≜_i - ∑_j=1^ℓ-1_π(j),i_π(j),i. Note that at stage ℓ the codewords _π(1), …, _π(ℓ-1) have been canceled out in previous stages, whereas _π(ℓ+1), …, _π(L) are treated as noise. The MMSE vector at time i, _π(ℓ),i, is given by _π(ℓ),i = _π(ℓ) (_N_r +∑_j=ℓ^L _π(j)_π(j),i_π(j),i^T ) ^-1_π(ℓ),i, and the equalized signal at time i is expressed as follows y'_π(ℓ),i= _π(ℓ),i^T _π(ℓ),i + d_π(ℓ),i = t_π(ℓ),i + λ_π(ℓ),i +z_π(ℓ),i, where _π(ℓ)∈^(π(ℓ)), and z_π(ℓ),i =( _π(ℓ),i^T _π(ℓ),i - 1) x_π(ℓ),i +∑_j=ℓ+1^L_π(ℓ),i^T _π(j),i x_π(j),i + _π(ℓ),i^T _i. Similar to the point-to-point step, we ignore the instantaneous channel state information subsequent to the MMSE equalization step.
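As a concrete illustration of the MMSE-SIC chain just described, the corner-point rates R_π(ℓ) = -log2 E[1/(1 + P_π(ℓ) h_π(ℓ)^H Σ_π(ℓ)^{-1} h_π(ℓ))] can be estimated by Monte Carlo. The sketch below is our own illustration (the name mac_corner_rates is ours); it assumes an i.i.d. Rayleigh SIMO MAC, with users listed in decoding order:

import numpy as np

def mac_corner_rates(powers, nr, trials=20000, seed=1):
    # powers[0] is decoded first (sees everyone else as noise), powers[-1] last.
    rng = np.random.default_rng(seed)
    L = len(powers)
    acc = np.zeros(L)
    for _ in range(trials):
        h = (rng.standard_normal((L, nr)) + 1j * rng.standard_normal((L, nr))) / np.sqrt(2)
        for l in range(L):
            Sigma = np.eye(nr, dtype=complex)
            for j in range(l + 1, L):        # users not yet decoded act as noise
                Sigma += powers[j] * np.outer(h[j], h[j].conj())
            sinr = powers[l] * (h[l].conj() @ np.linalg.solve(Sigma, h[l])).real
            acc[l] += 1.0 / (1.0 + sinr)
    return -np.log2(acc / trials)

print(mac_corner_rates([4.0, 4.0], nr=2))    # one corner of the two-user rate region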
In order to decode _π(ℓ) at stage ℓ, we apply an ambiguity decoder defined by a spherical decision region ^(π(ℓ))≜{ ∈^n :  ||||^2≤ (1+ϵ) n_π(ℓ) [1/1 + _π(ℓ)_π(ℓ)^T _π(ℓ)^-1_π(ℓ)]_n }.where ϵ is an arbitrary positive constant.Error Probability: For an arbitrary decoding stage ℓ, the probability of error is bounded by1/||∑_ _e^(π(ℓ))< (_π(ℓ)∉^(π(ℓ)))+ (1+ δ) 2^n Ř_π(ℓ) (^(π(ℓ)))/(^(π(ℓ))), for some δ>0. Following in the footsteps of the proof of Lemma <ref>, it can be shown that (_π(ℓ)∉^(π(ℓ))) < γ, where γ vanishes with n; the proof is therefore omitted for brevity.From (<ref>),( ^(π(ℓ))) =(1+ ϵ)^n/2( _n (√(n _π(ℓ))) ) ( [1/1 + _π(ℓ)_π(ℓ)^T _π(ℓ)^-1_π(ℓ)] )^n/2 .The second term in (<ref>) is then bounded by(1+ δ) 2^ -n( - 1/nlog( (_n (√( n _π(ℓ))))/(^(π(ℓ)))) + ξ ) ,where ξ= -1/2log( [ 1/ 1 + _π(ℓ)_π(ℓ)^T_π(ℓ)^-1_π(ℓ)] )-Ř_π(ℓ) - 1/2log(1+ ϵ).The first term of the exponent in (<ref>) vanishes since ^(π(ℓ)) is covering-good. Then, the error probability vanishes whenŘ_π(ℓ) < -1/2log( [ 1/ 1 + _π(ℓ)_π(ℓ)^T_π(ℓ)^-1_π(ℓ)] )for all ℓ∈{1,2,…,L }. The achievable rate region can then be extended to complex-valued channels, such thatR_π(ℓ) < - log( [ 1/ 1 + _π(ℓ)_π(ℓ)^H_π(ℓ)^-1_π(ℓ)] ) , ℓ=1,..,L.This set of rates represents one corner point of the rate region. The whole rate region is characterized by the convex hull of the L! corner points that represent all possible decoding orders, as shown in (<ref>). This concludes the proof of Theorem <ref>. Returning to the MIMO MAC model in (<ref>), it is straightforward that the rate achieved by user k would then be R^*_k = ∑_j=1^N_t_k R_ν(k)+j,where R_j are the rates given in (<ref>). Now we compare R_sum≜∑_k=1^K R^*_k with the sum capacity of the MIMO MAC model in (<ref>). We focus our comparison on the case where the channel matrices have i.i.d. complex Gaussian entries and all users have the same number of transmit antennas as well as power budgets, i.e., N_t_k=N_t, ^*_k = for all k ∈{1,2,…,K}. The optimal input covariance is then a scaled identity matrix <cit.> and the sum capacity is given by <cit.>C_sum =[ log( _N_r + ∑_k=1^K _k^* _k^*H)].For the K-user fading MIMO MAC in (<ref>), when ^*_k is i.i.d. complex Gaussian and N_r > K N_t, the gap between the sum rate of the lattice scheme and the sum capacity at ≥ 1 is upper bounded by< ∑_ℓ=1^N_t Klog( 1 + ℓ + 1 / N_r - ℓ).See Appendix <ref>.Similar to the point-to-point MIMO, the gap to capacity vanishes at finite SNR as N_r grows, i.e., → 0 as N_r →∞. This suggests that decision regions which only depend on the channel statistics approach optimality for a fading MAC with large values of N_r.The expression in (<ref>) is plotted in Fig. <ref> for K=2, as well as for the K-user SIMO MAC.§.§ SISO MACFor the two-user case with N_r = N_t =1, the rate region in (<ref>) can be expressed by[Unlike the two-user MAC capacity region, the sum rate does not necessarily have a unit slope.]R_1 < - γ_1 ,R_2 < - γ_2 , (γ_4 - γ_2) R_1 + (γ_3 - γ_1) R_2 <(γ_1 γ_2 - γ_3 γ_4),whereγ_1 = log ([ 1/1 + _1 |h̃_1|^2 ] ) ,    γ_2 = log ([ 1/1 + _2 |h̃_2|^2 ] ),γ_3 = log ([ 1/1 + _1 |h̃_1|^2/1 + _2 |h̃_2|^2]), γ_4 = log ([ 1/1 + _2 |h̃_2|^2/1 + _1 |h̃_1|^2]) .For the case where all nodes are equipped with a single antenna, we characterize the gap to sum capacity of the two-user MAC for a wider range of distributions and over all SNR values. 
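Before turning to the gap analysis, the region above can be visualized through its corner points, one per decoding order. The following sketch (ours; it assumes unit-power Rayleigh fading, so the power gains |h̃_k|^2 are exponential) estimates both corners by plain Monte Carlo:

import numpy as np

rng = np.random.default_rng(2)
P1, P2, n = 2.0, 2.0, 200000
g1, g2 = rng.exponential(1.0, n), rng.exponential(1.0, n)

# Corner A: user 1 decoded first (user 2 treated as noise), then user 2 interference-free.
s = 1 + P1 * g1 + P2 * g2
R1_A = -np.log2(np.mean((1 + P2 * g2) / s))
R2_A = -np.log2(np.mean(1 / (1 + P2 * g2)))
# Corner B: the opposite decoding order.
R1_B = -np.log2(np.mean(1 / (1 + P1 * g1)))
R2_B = -np.log2(np.mean((1 + P1 * g1) / s))

print((R1_A, R2_A), (R1_B, R2_B))  # by symmetry the two corners have equal sum rate here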
For ease of exposition we assume h̃_1 and h̃_2 are identically distributed with [|h̃_1|^2] =[|h̃_2|^2] =1.is then given by≜ [ log( 1 +|h̃_1|^2 +|h̃_2|^2)] + log( [1 +|h̃_1|^2 / 1 +|h̃_1|^2 +|h̃_2|^2 ] [ 1/1 +|h̃_1|^2] ). The gap to capacity of the two-user MAC given in (<ref>) is upper-bounded as follows * < 1/2: For any fading distribution where [ |h̃_1|^4 ] < ∞,< 1.45(1 + 2 [ |h̃_1|^4 ] )^2. * ≥1/2: For any fading distribution where [ 1/|h̃_1|^2]< ∞,< 2 + log( [ 1/|h̃_1|^2] ). * ≥1/2: Under Nakagami-m fading with m>1,< 2 + log ( 1 + 1/m-1 ). * ≥1/2: Under Rayleigh fading,< 1.48 + log ( log (1+ )).See Appendix <ref>. In Fig. <ref>, the sum rate of the two-user MAC lattice scheme is compared with the sum capacity under Nakagami-m fading with m=2, as well as under i.i.d. Rayleigh fading. It can be shown that the gap to sum capacity is small in both cases. Moreover, we plot the rate region under Rayleigh fading at =-6 dB in Fig. <ref>.The rate region is shown to be close to the capacity region, indicating the efficient performance of the lattice scheme at low SNR as well. § CONCLUSION This paper presents a lattice coding and decoding strategy and analyzes its performance for a variety of ergodic fading channels. For the MIMO point-to-point channel, therates achieved are within a constant gap to capacity for a large class of fading distributions. Under Rayleigh fading, the gap to capacity for the MIMO point-to-point and the K-user MIMO MAC vanishes as the number of receive antennas increases, even at finite SNR. The proposed decision regions are independent of the instantaneous channel realizations and only depend on the channel statistics. This both simplifies analysis and points to simplification in future decoder implementations. For the special case of single-antenna nodes, the gap to capacity is shown to be a constant for a wider range of fading distributions that include Nakagami fading. Moreover, at low SNR the gap to capacity is shown to be a diminishing fraction of the achievable rate. Similar results are also derived for the K-user MAC. Simulation results are provided that illuminate the performance of the proposed schemes.§ PROOF OF LEMMA <REF> The aim of Lemma <ref> is showing that  lies with high probability within the sphere . However, computing the distribution of  is challenging since it depends on that of , as shown in (<ref>), where the distribution of  is not known at arbitrary block length, and no fixed distribution is imposed for . The outline of the proof is as follows. First,we replace the original noise sequence with a noisier sequence whose statistics are known. Then, we use the weak law of large numbers to show that the noisier sequence is confined with high probability within , which implies that the original noise  is also confined within . We decompose the noisein (<ref>) in the form = _s+ √()_s, where both _s, _s are block-diagonal matrices with diagonal blocks _i ≜ - ( _N_t + _i^T _i )^-1 and _i ≜√()_i^T ( _N_r + _i _i^T )^-1, respectively, such that _s _s^T + _s _s^T = ( _N_t n + _s^T _s )^-1.Since _i is a stationary and ergodic process, _i and _i are also stationary and ergodic. Denote the eigenvalues of the random matrix ^T (arranged in ascending order) by σ_H,1^2, …, σ_H,N_t^2. Then its eigenvalue decomposition is ^T ≜^T, whereis a unitary matrix andis a diagonal matrix whose unordered entries are σ_H,1^2, …, σ_H,N_t^2. 
Owing to the isotropy of the distribution of , ^T =(_N_t+ )^-2^T is unitarily invariant, i.e., (^T) =(^T ^T) for any unitary matrix  independent of . As a resultis independent of  <cit.>. Hence,[ ^T ] =[ ( _N_t + ^T )^-2]=[(_N_t + )^-2^T] =_|[ _ [ (_N_t + )^-2 ] ^T]=_|[ σ_A^2 _N_t^T]=σ_A^2 _N_t,where σ_A^2 ≜_j [ _σ_,j [ 1/(1+ σ_,j^2)^2 ] ]. Similarly, it can be shown that [ ^T ] =σ_B^2 _N_t, where σ_B^2 ≜_j [ _σ_,j[ σ_,j^2/(1+ σ_,j^2)^2] ]. For convenience define σ_z^2 ≜σ_A^2+σ_B^2. Next, wecompute the autocorrelation ofas follows_z≜[^T] = [ _s _x_s^T ] + [ _s_s^T ], where _x≜[ ^T ]. Unfortunately, _x is not known for all n, yet it approaches _N_t n for large n, according to Lemma <ref>. Hence one can rewrite _z = σ_x^2 [ _s _s^T+_s_s^T ] _σ_x^2σ_z^2 _N_t n+ [ _s (_x - σ_x^2 _N_t n) _s^T ] +(- σ_x^2) [ _s_s^T ] _≻ 0, whereσ_x^2 ≜λ_min(_x) - δ, and λ_min(_x) is the minimum eigenvalue of _x. Note that ≥σ_x^2, from the definition in (<ref>). As a result the second term in (<ref>) is positive-definite, and _z≻σ_x^2σ_z^2 _N_t n. This implies that_z^-1≺1/σ_x^2 σ_z^2_N_t n . To make noise calculations more tractable, we introduce a related noise variable that modifies the second term of  as follows^* = _s+ _s (√() + √(1/N_t n R_c^2 - ) ^* ),where ^* is i.i.d. Gaussian with zero mean and unit variance, and R_c is the covering radius of . We now wish to bound the probability that ^* is outside a sphere of radius √((1+ϵ) N_t n σ_x^2 σ_z^2). First, we rewrite ||^*||^2 =^T _s^T _s+ 1/N_t n R_c^2 ^T _s _s^T+ 2 √(1/N_t n R_c^2) ^T _s^T _s .Then, we bound each term separately using the weak law of large numbers.The third term satisfies[The third term in (<ref>) is a sum of zero mean uncorrelated random variables to which the law of large numbers applies <cit.>.] ( 2 √(1/N_t n R_c^2) ^T _s^T _s> N_t n ϵ_3 ) < γ_3. Addressing the second term in (<ref>),[Note that μ_i ≜_i^T _i^T _i _i is also a stationary and ergodic process that obeys the law of large numbers.]( 1/N_t n R_c^2^T _s^T _s> σ_B^2 R_c^2 + N_t n ϵ_2 ) = ( 1/N_t n R_c^2tr ( _s ^T _s^T) > σ_B^2 R_c^2 + N_t n ϵ_2 ) <γ_2. Now, we bound the first term in (<ref>). Given that _s^T _s is a block-diagonal matrix with [ _s^T _s ] = σ_A^2 _N_t n, and that _x→_N_t n as n →∞, it can be shown using <cit.> that 1/||||^2^T _s^T _s →σ_A^2 as n →∞. More precisely,( ^T _s^T _s>σ_A^2 R_c^2 + N_t n ϵ_1 ) < ( ^T _s^T _s >σ_A^2 ||||^2 + N_t n ϵ_1 ) < γ_1,where ||||^2 < R_c^2, and ϵ_1, ϵ_2,ϵ_3 and γ_1,γ_2,γ_3 can be made arbitrarily small by increasing n. Using a union bound, ( ||^*||^2 > (1+ϵ_4)R_c^2 σ_z^2) < γ,where ϵ_4 ≜(ϵ_1+ϵ_2+ϵ_3)/R_c^2 σ_z^2 and γ≜γ_1 + γ_2 + γ_3. For large n, 1/N_t nR_c^2 ≤ (1+ϵ_6) for covering-good lattices and ≤ (1+ϵ_7) σ_x^2 according to Lemma <ref>. Let ϵ_5 ≜ (1+ϵ_6)(1+ϵ_7)-1, then for any ϵ such that ϵ≤ (1+ϵ_4)(1+ϵ_5) -1,( ^*T_z^* > (1+ϵ)N_t n ) < ( ||^*||^2 > (1+ϵ)N_t n σ_z^2)= ( ^*T( [ (_N_t n + _s^T _s)^-1] )^-1^*> (1+ϵ)N_t n ) < γ,where (<ref>) holds from (<ref>) and (<ref>) holds since [ (_N_t n + _s^T _s)^-1] = σ_z^2 _N_t n, according to (<ref>). The final step is to show that ||^* - || → 0 as n →∞, where ^*-= √(1/N_t n R_c^2 - ) _s ^*. From the structure of _s, the norm of each of its rows is less than N_t, and hence the variance of each of the elements of _s ^* is no more than N_t. Since lim_n →∞1/N_t n R_c^2 = for a covering-good lattice, it can be shown using Chebyshev's inequality that the elements of √(1/N_t n R_c^2 - ) _s ^* vanish and |_j^* - _j| → 0 as n →∞ for all j ∈{ 1, …, N_t n }. This concludes the proof of Lemma <ref>. 
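As a sanity check on the sphere radius used in the lemma, for N_t=N_r=1 the per-symbol second moment of the equivalent noise after MMSE equalization can be verified numerically. The short sketch below (our illustration; real-valued signals for simplicity) confirms that E[z^2] = E[1/(1+P h^2)], the quantity that fixes the radius of the decision sphere:

import numpy as np

rng = np.random.default_rng(3)
P, n = 4.0, 500000
h = rng.standard_normal(n)            # fading coefficients (real-valued for simplicity)
x = rng.standard_normal(n)            # unit-power input
w = rng.standard_normal(n)            # unit-variance channel noise
g = h ** 2
z = -(1 / (1 + P * g)) * x + np.sqrt(P) * h / (1 + P * g) * w   # equivalent noise
print(np.mean(z ** 2), np.mean(1 / (1 + P * g)))                # the two values agree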
§ PROOF OF LEMMA <REF>Denote by  the event that the post-processed received point ' falls exclusively within one decision sphere, defined in (<ref>), where the probability of occurrence of  is _≜ 1-γ_s. Using the law of total probability, the probability of error (in general) is given by_e = _e|_ + _e|^c_^c First we analyze the ambiguity decoder with spherical decision regions (denoted by superscript ^(SD)). From the definition of ambiguity decoding, _e|^c^(SD)=1. Hence,_e^(SD) = η' (1-γ_s) + γ_s,where _e|^(SD)≜η'. Now we analyze the Euclidean lattice decoder (denoted by superscript ^(LD)). Since a sphere is defined by the Euclidean metric, the outcomes of the spherical decoder and the Euclidean lattice decoder conditioned on the event  are identical, and hence yield the same error probability, i.e., _e|^(LD) = _e|^(SD) = η'. However, from (<ref>), the Euclidean lattice decoder declares a valid output even under the event ^c. Hence,_e|^c^(LD)≜η”≤ 1. Thereby,_e^(LD) = η' (1-γ_s) + η”γ_s ≤_e^(SD). § PROOF OF COROLLARY <REF> For an i.i.d. complex Gaussian M × N matrix  whose elements have zero mean, unit variance and M > N, then [ (^H )^-1] = 1/M-N _N. See <cit.>.§.§ Case 1: N_r ≥ N_t and the elements of [ (^H )^-1] < ∞=C-R =[ log ( _N_t + ^H) ] +log( [ (_N_t + ^H )^-1] )(a)≤ log( _N_t +[ ^H] )+log( [ (_N_t + ^H )^-1] )(b)< log( _N_t +[ ^H] )+log( [ ( ^H )^-1] ) =log(( 1/_N_t +[ ^H] ) [ (^H )^-1 ] )(c)≤ log(( _N_t +[ ^H] ) [ (^H )^-1 ] ),where (a), (b) follow since log () is a concave and non-decreasing function over the set of all positive definite matrices <cit.>. (c) follows since ≥ 1.§.§ Case 2: N_r > N_t and the elements of  are i.i.d. complex Gaussian with zero mean and unit variance(d)< log(( _N_t +[ ^H] ) [ (^H )^-1 ] )(e)=log((1+N_r)1/N_r-N_t_N_t)= N_t log ( 1 + N_t+1/N_r - N_t ),where (d),(e) follow from Case 1 and Lemma <ref>, respectively. §.§ Case 3: N_t=1 and < 1/ [ ||||^2 ] =  C-R= [ log( 1 +||||^2 )] + log ( [ 1/1 +||||^2]) (f)≤ log ( 1 +[||||^2]) +log ([ 1/1 +||||^2 ]) (g)≤log e [||||^2]+ log e[ -||||^2 /1 +||||^2 ] =log e[||||^2 -||||^2 /1 +||||^2]=log e[ ||||^4/1 +||||^2]^2 < 1.45[ ||||^4 ]^2,where (f) is due to Jensen's inequality and (g) utilizes lnx≤ x-1. § PROOF OF COROLLARY <REF> The results in Case 1 and Case 2 are straightforward from Corollary <ref>. The proofs are therefore omitted.§.§ Case 3: ≥ 1, Nakagami-m fading with m>1The Nakagami-m distribution with m>1 satisfies the condition [ 1/|h̃|^2] < ∞. For a Nakagami-m variable with unit power, i.e., [|h̃|^2 ]=1,[ 1/|h̃|^2] is computed as follows[ 1/|h̃|^2]=2m^m/Γ(m) ∫_0^∞1/x^2 x^2m-1 e^-mx^2 dx=2m^m/Γ(m) 1/2 m^m-1 ∫_0^∞ y^m-2 e^-y dy= mΓ(m-1)/Γ(m)=mΓ(m-1)/(m-1)Γ(m-1) = 1 + 1/m-1,where Γ(·) denotes the gamma function. Substituting in (<ref>), < 1+ log (1+ 1/m-1 ). §.§ Case 4: ≥ 1, Rayleigh fadingFor any z>0, the exponential integral function defined by E̅_1(z)= ∫_z^∞e^-t/t dt is upper bounded byE̅_1(z) < 1/log ee^-z log(1+1/z).See <cit.>. Under Rayleigh fading, |h̃|^2 is exponentially distributed with unit power. Hence, = [ log( 1 +|h̃|^2 ) ] +log ( [ 1/1 +|h̃|^2 ]) (a)≤ log ( 1 +[|h̃|^2]) +log ( [ 1/1 +|h̃|^2]) (b)≤1+ log ( [ 1/|h̃|^2 + 1/ ]) ≤1+ log (∫_0^∞1/x+1/ e^-x dx)= 1+ log( e^1/ ∫_1/^∞1/y e^-y dy) = 1+ log ( e^1/E̅_1(1/)) (c)<1+ log ( 1/log elog ( 1 +))< 0.48 + log ( log ( 1 +)) ,where (a) follows from Jensen's inequality. (b) holds from the condition ≥ 1 and (c) follows from Lemma <ref>. § PROOF OF COROLLARY <REF> For any two independent i.i.d. 
Gaussian matrices ∈^r × m,∈^r × q where r ≥ q+1 whose elements have zero mean and unit variance,^H ( c _r + ^H )^-1≻1/c A̅^H A̅,where the elements of A̅∈^(r-q) × m are i.i.d. Gaussian with zero-mean and unit variance, and c is a positive constant.Using the eigenvalue decomposition of ( c _r + ^H)^-1,^H ( c _r + ^H )^-1=^H ^H =^H ,where the columns ofare the eigenvectors of ^H.The corresponding eigenvalues of ^H are then in the form σ_1^2,…,σ_q^2,0,…,0. Hence, q of the diagonal entries of  are in the form 1/(c+σ_j^2), whereas the remaining r-q entries are 1/c. Since  is unitary, then ≜^H is i.i.d. Gaussian, similar to  <cit.>. One can rewrite (<ref>) as follows^H =∑_j=1^r-q1/c _j _j^H + ∑_j=r-q+1^r1/c+σ_j^2 _j _j^H≻ ∑_j=1^r-q1/c _j _j^H=1/c ^H ,where _j is the conjugate transposition of row j in , and the columns of the matrixare _j for j ∈{1,…,r-q }. The generalized inequality in (<ref>) follows since X + Y≽X for any two positive semidefinite matrices X,Y. Let _π(k)≜_N_r + ∑_l=k+1^K _π(l)_π(l)^H, where π(·) is an arbitrary permutation as described in Section <ref>. We first bound the sum capacity in (<ref>) (from above) as followsC_sum ≜[ log( _N_r + ∑_k=1^K _k _k^H )]=∑_k=1^K [ log( _N_t + _π(k)^H _π(k)^-1_π(k)) ] (a)≤∑_k=1^K [ log( _N_t + _π(k)^H _π(k))] ≤ ∑_k=1^K log( _N_t +[ _π(k)^H_π(k)] ) =Klog( ( 1 +N_r ) _N_t)= N_t K log ( 1 +N_r ) (b)≤N_t K ( log +log ( 1 + N_r ) ),where (a) follows since interference cannot increase capacity, and (b) follows since ≥ 1. Now, we bound (from below) R_sum. Since the sum of the rate expressions in both (<ref>) and (<ref>) are equal, we bound each of the N_t K terms in (<ref>), where the power is allocated uniformly over each virtual user, given by  as followsR_π(ℓ) = - log( [ 1/ 1 + _π(ℓ)^H(_N_r + ∑_j=ℓ+1^L_π(j)_π(j)^H )^-1_π(ℓ)] ) =- log( [ 1/ 1 + _π(ℓ)^H(1/_N_r +_π(ℓ)_π(ℓ)^H )^-1_π(ℓ)] )(c)≥- log( [ 1/ 1 + _π(ℓ)^H _π(ℓ)] ) >- log( [ 1/_π(ℓ)^H _π(ℓ)] )= log- log( [ 1/_π(ℓ)^H _π(ℓ)] )(d)= log+log( N_r - (K-ℓ+1) ),where _π(ℓ)≜ [_π(ℓ+1) , …, _π(L)]. (c) follows from Lemma <ref> where _π(ℓ)∈^N_r-L+ℓ is an i.i.d. Gaussian distributed vector whose elements have unit variance, and (d) follows from Lemma <ref>.Hence, from (<ref>),(<ref>) the gap is bounded as follows≜C_sum - R_sum <∑_ℓ=1^N_t K( log ( 1 + N_r )-log( N_r - (N_t K-ℓ+1) ) )=∑_ℓ=1^N_t Klog(1 + N_r / N_r - (N_t K-ℓ+1)) =∑_ℓ=1^N_t Klog(1 + N_r / N_r - ℓ)=∑_ℓ=1^N_t Klog( 1 + ℓ +1 / N_r - ℓ).§ PROOF OF COROLLARY <REF> §.§ Case 1: < 1/2 = [ log( 1 +|h̃_1|^2 +|h̃_2|^2)] +log( [1 +|h̃_1|^2 / 1 +|h̃_1|^2 +|h̃_2|^2 ] [ 1/ 1 +|h̃_1|^2] )≤ log( 1 +[|h̃_1|^2] +[|h̃_2|^2]) + log( [ 1/ 1 +|h̃_1|^2 ] ) +log( [1 +|h̃_1|^2 / 1 +|h̃_1|^2 +|h̃_2|^2 ]) < log e ([|h̃_1|^2] +[|h̃_2|^2] + [-|h̃_2|^2 / 1 +|h̃_1|^2 +|h̃_2|^2] + [ -|h̃_1|^2/ 1 +|h̃_1|^2 ]) <log e ([|h̃_2|^2 -|h̃_2|^2 / 1 +|h̃_1|^2 +|h̃_2|^2] +[|h̃_1|^2 -|h̃_1|^2 / 1 +|h̃_1|^2]) =log e ([ ^2 |h̃_1|^2 |h̃_2|^2 + ^2 |h̃_2|^4/1 +|h̃_1|^2 +|h̃_2|^2] + [ ^2 |h̃_1|^4/ 1 +|h̃_1|^2])≤ log e ( [^2 |h̃_1|^2 |h̃_2|^2 + ^2 |h̃_2|^4] + [ ^2 |h̃_1|^4]) =log e ( 1 + [|h̃_1|^4] +[|h̃_2|^4])^2 < 1.45 ( 1 + 2 [|h̃_1|^4])^2.§.§ Case 2: ≥1/2 and [ 1/|h̃|^2] < ∞ =[ log( 1 +|h̃_1|^2 +|h̃_2|^2)] + log( [1 +|h̃_1|^2/ 1 +|h̃_1|^2 +|h̃_2|^2] [ 1/ 1 +|h̃_1|^2 ] )(a)≤ log( 1 +[|h̃_1|^2] +[|h̃_2|^2])] +log( [1 +|h̃_1|^2/ 1 +|h̃_1|^2 +|h̃_2|^2 ] [ 1/ 1 +|h̃_1|^2 ] )<log( ( 1 + 2) [ 1/ 1 +|h̃_1|^2 ] ) (b)≤2 + log ( [ 1/|h̃_1|^2 + 1/ ]) < 2 + log ([ 1/|h̃_1|^2]),where (a) follows from Jensen's inequality and (b) follows since ≥1/2. 
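Before specializing to particular distributions, Case 2 can be sanity-checked numerically. The sketch below (ours) draws Nakagami-m power gains as unit-mean Gamma(m, 1/m) variables with m=3, for which E[1/|h̃_1|^2] = 1 + 1/(m-1) = 1.5 is finite (this value is derived in Case 3 below), and compares the exact gap of the two-user expression with the Case 2 bound:

import numpy as np

rng = np.random.default_rng(4)
m, n = 3.0, 500000
g1 = rng.gamma(m, 1.0 / m, n)    # |h|^2 under Nakagami-m fading with unit power
g2 = rng.gamma(m, 1.0 / m, n)
for P in [0.5, 2.0, 8.0, 32.0]:
    s = 1 + P * g1 + P * g2
    gap = (np.mean(np.log2(s))
           + np.log2(np.mean((1 + P * g1) / s) * np.mean(1 / (1 + P * g1))))
    print(P, gap, 2 + np.log2(np.mean(1 / g1)))   # measured gap vs Case 2 bound (~2.58 bits)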
§.§ Case 3: ≥1/2, Nakagami-m fading with m>1 Since the Nakagami-m distribution with m>1 belongs to the class of distributions in Case 2, we have <2 + log ([ 1/|h̃_1|^2]) (c)= 2 + log (1+ 1/m-1 ), where (c) follows from the proof of Case 3 in Appendix <ref>. §.§ Case 4: ≥1/2, Rayleigh fading (d)≤2 + log ( [ 1/|h̃_1|^2 + 1/ ]) (e)<1.48 + log ( log ( 1 +)), where (d) follows from Case 2 and (e) follows from Case 4 in Appendix <ref>.
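Finally, the matrix identity E[(H^H H)^{-1}] = I_N/(M-N) for an i.i.d. complex Gaussian M × N matrix H with M > N, which the proofs above rely on, is itself easy to test. The following short sketch (ours) verifies it by averaging inverse Gram matrices:

import numpy as np

rng = np.random.default_rng(5)
M, N, trials = 6, 2, 20000
acc = np.zeros((N, N), dtype=complex)
for _ in range(trials):
    H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    acc += np.linalg.inv(H.conj().T @ H)
print(np.round(acc / trials, 3))   # ≈ I_2 / (M - N) = 0.25 * I_2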
http://arxiv.org/abs/1702.08099v1
{ "authors": [ "Ahmed Hindy", "Aria Nosratinia" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170226222854", "title": "Lattice Coding and Decoding for Multiple-Antenna Ergodic Fading Channels" }
Given a pseudoword over suitable pseudovarieties, we associate to it a labeled linear order determined by the factorizations of the pseudoword. We show that, in the case of the pseudovariety of aperiodic finite semigroups, the pseudoword can be recovered from the labeled linear order.
December 30, 2023
=================
§ INTRODUCTION Since the publication of Eilenberg's textbook <cit.>, a large body of finite semigroup theory is in fact the theory of pseudovarieties of semigroups. Besides its own mathematical interest, it draws motivation from the connections with computer science through Eilenberg's correspondence between pseudovarieties of semigroups and varieties of regular languages. As pseudovarieties are classes of finite semigroups, only in very special cases do they contain most general members on a given finite set of generators, that is, relatively free semigroups, namely semigroups on n generators in the pseudovariety such that every other member of the pseudovariety on n generators is their homomorphic image. To obtain relatively free structures, one needs to step away from finiteness into the more general framework of profinite semigroups, and indeed such a tool has been shown to lead to useful insights and has found many applications <cit.>. As topological semigroups, relatively free profinite semigroups S over a finite alphabet A are generated by A, which means that elements of S are arbitrarily well approximated by words in the letters of A.
Thus, the elements of S may be considered a sort of generalization of words on the alphabet A, which are sometimes called pseudowords.Of course, S may satisfy nontrivial identities, which means that different words may represent the same element of S, although in the most interesting examples of pseudovarieties, this is not the case.Now, words on the alphabet A may be naturally viewed as A-labeled finite linear orders, a perspective that underlies many fruitful connections with finite model theory <cit.>. For some pseudovarieties, such as R, of all finite R-trivial semigroups, and DA, of all finite semigroups in which the idempotents are the only regular elements, representations of the corresponding finitely generated relatively free profinite semigroups by labeled linear orders have been obtained and significantly applied <cit.>. The purpose of this paper is to investigate such a linear nature of pseudowords for pseudovarieties with suitable properties. Our main motivation is to understand pseudowords over the pseudovariety A, of all finite aperiodic semigroups.The key properties of the pseudovariety A that play a role in this paper are of a combinatorial nature: the corresponding variety of languages is closed under concatenation and the cancelability of first and last letters. The first of these properties entails a very useful feature of the corresponding finitely generated relatively free profinite semigroups, namely equidivisibility, which means that different factorizations of the same pseudoword have a common refinement. This condition already forces a linear quasi-order on the factorizations of a given pseudoword, and this is the starting point for the whole paper. The cancelability condition leads to special types of factorizations, which we call step points, to which a letter is naturally associated. The corresponding linear order has interesting order and topological properties, such as being compact for the interval topology. The step points are the isolated points and there are only countably many of them. All other points are called stationary and, in contrast, there may be uncountably many of them. Perhaps somewhat surprisingly, there is no correlation between the number of stationary points and how low pseudowords fall in the J-order.Our main result is that the linear order of factorizations with alphabet-labeled step points provides a faithful representation of pseudowords over A. We also obtain a characterization of the partially labeled linear orders that appear in this way, albeit in terms of properties involving finite aperiodic semigroups. A natural goal for future work consists in looking for a characterization of the image of the representation which is independent of such semigroups, as has been done in the case of the pseudovarieties R and DA <cit.>.While this paper was being written, Gool and Steinberg developed a different approach on the pseudowords over A, applying Stone duality and model theory to view them as elementary equivalence classes of labeled linear orders <cit.>. They worked specially with saturated models. In our paper, the models that appear in the image of the representation are not saturated in general.We also mention the articles <cit.> and <cit.>, where labeled linear orders were assigned only to a special class of pseudowords, the ω-terms, and were used to solve the word problem for ω-terms in several pseudovarieties, either for the first time, or with new proofs, as in the case of A, treated in <cit.>.The paper is organized as follows. 
After a section of preliminaries, Section <ref> introduces the key notion of equidivisible semigroup in the context of relatively free profinite semigroups, with an emphasis on pseudovarieties closed under concatenation. Several results of the paper apply to all such pseudovarieties, but at a certain point our hypothesis restricts to A. In the next four sections, we develop more on the tools and the language necessary for the main results. In Section <ref>, we give our faithful representation of pseudowords over A as labeled linear orders. The following three sections relate to the proof of this representation (the first two of them having independent interest). This is followed by a study of the effect of the multiplication in the image of the representation, and by a characterization of the image. The paper closes with Section <ref> where, among other things, it is shown that the ordered set of the real numbers can be embedded in the ordered set of the stationary points of a pseudoword over a finitely cancelable pseudovariety containing LSl. This is done via a connection with symbolic dynamics.§ PRELIMINARIESWe assume some familiarity with pseudovarieties of semigroups and relatively free profinite semigroups <cit.>. For the reader's convenience, some notation and terminology is presented here. The following is a list of some of the pseudovarieties we will be working with: * I: all trivial semigroups;* S: all finite semigroups;* A: all finite aperiodic semigroups;* N: all finite nilpotent semigroups;* D: all finite semigroups in which the idempotents are right zeros;* LSl: all finite local semilattices.In the whole paper, A denotes a finite alphabet. Let V be a pseudovariety of semigroups. The free pro-V semigroup generated by A is denoted AV. Its elements are pseudowords over V. When V≠ I, as the associatedgenerating mapping A→ AV is injective, one considers A to be contained in AV. If φ A→ S is a generating mapping of a pro-V semigroup, then we denote by φ_ Vthe unique continuous homomorphism AV→ S extending φ.If V contains N, then the subsemigroup of AV generated by A is isomorphic to A^+ and its elements are the isolated points of AV, in view of which A^+ is considered to be contained in AV, and the elements of A^+ and AV∖ A^+ are respectively called the finite and infinite pseudowords over V.By a topological semigroup, we mean a semigroup endowed with a topology that makes the semigroup multiplication continuous. Unlike some authors, we require that a compact space be Hausdorff. By a compact semigroup, we mean a compact topological semigroup. See <cit.>.We denote by S^I the monoid obtained from the semigroup S by adjoining to S an element denoted by 1 which acts as the identity. Every semigroup homomorphism φ S→ T is extended to a semigroup homomorphism S^I→ T^I, also denoted φ, such that φ(1)=1. If S is a topological semigroup, then S^I is viewed as a topological monoid whose topology is the sum of the topological spaces S and {1}, whence 1 is an isolated point of S^I.We use the standard notation for Green's relations and its quasi-orders on a semigroup S. Hence, s≤_ Rt, s≤_ Lt and s≤_ Jt respectively mean s∈ tS^I, s∈ S^It and s∈ S^ItS^I, R, L, Jare the associated equivalence relations, 𝒟=ℛ∨ℒ, and ℋ=ℛ∩ℒ.A semigroup S has unambiguous ≤_ L-order if, for every x,y,z∈ S,x≤_ Ly and x≤_ Lz implies y≤_ Lz or z≤_ Ly. 
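To make such order-theoretic conditions concrete, they can be tested by brute force on small semigroups. The following self-contained Python sketch is our own illustration (the names nf, mult and leq_L are ad hoc); it realizes the six-element free band on {a,b}, which reappears as an example below, and verifies that it has unambiguous ≤_L-order:

# Free band on {a,b}: six elements; a word is determined by its content
# together with its first and its last letter.
def nf(w):
    if len(set(w)) <= 1:
        return w[:1]                      # '', 'a' or 'b'
    f, l = w[0], w[-1]
    return f + l if f != l else f + ({'a', 'b'} - {f}).pop() + f

def mult(x, y):
    return nf(x + y)

S = ['a', 'b', 'ab', 'ba', 'aba', 'bab']
SI = S + ['']                             # adjoin the empty word as an identity

def leq_L(x, y):                          # x <=_L y  iff  x lies in S^I y
    return any(mult(z, y) == x for z in SI)

unambiguous = all(
    leq_L(y, z) or leq_L(z, y)
    for x in S for y in S for z in S
    if leq_L(x, y) and leq_L(x, z)
)
print(unambiguous)   # True: the free band on {a,b} has unambiguous <=_L order

The dual ≤_R condition can be checked in exactly the same way.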
One also has the dual notion of unambiguous ≤_ R-order.An unambiguous semigroup is a semigroup with unambiguous ≤_ R-order and unambiguous ≤_ L-order.The next proposition is an important tool to show one of our main results. Let A be a finite alphabet. Let u,v∈ AA. Then u=v if and only if φ_ A(u)=φ_ A(v), for every mapping φ from A onto an unambiguous finite aperiodic semigroup.The “only if” direction of the statement is immediate. To establish the “if” direction, it suffices to show that it is the inverse limit of A-generated unambiguous finite aperiodic semigroups.It is well known that every A-generated finite aperiodic semigroup is a homomorphic image of an unambiguous A-generated finite aperiodic semigroup, namely its Birget-Rhodes expansion (also called iterated Rhodes expansion), cut down to the set of generators A <cit.>. Since pairs of distinct points of AA may be separated by continuous homomorphisms into finite aperiodic semigroups, the result follows.§ EQUIDIVISIBILITYAND PSEUDOVARIETIES CLOSED UNDER CONCATENATIONA language L⊆ A^+ is said to be V-recognizable if there is a homomorphism φ:A^+→ S into a semigroup S from V such that L=φ^-1φ(L). We say that a pseudovariety V of semigroups is closed under concatenation if, for every finite alphabet A, whenever L and K are V-recognizable languages of A^+, the set LK is also a V-recognizable language of A^+.The following conditions are equivalent for a pseudovariety V of semigroups. *V is closed under concatenation;*A V= V;*V contains N and the multiplication in AV is an open mapping for every finite alphabet A.The equivalence <ref>⇔<ref> in Theorem <ref> is from <cit.>. The difficult part of the theorem is the equivalence <ref>⇔<ref>, which is a particular case of a more general result established by Chaubard, Pin and Straubing <cit.>. The latter, in turn, extends an earlier result of Straubing <cit.>, establishing that a nontrivial pseudovariety V of monoids satisfies A V= V if and only if, for every finite alphabet A, whenever L and K are V-recognizable languages of A^∗, the set LK is also a V-recognizable language of A^∗. In the case of semigroups, the absence in Theorem <ref> of reference to the pseudovariety I of trivial semigroups is not surprising if we take into account that A^+ is I-recognizable but not A^+ A^+, where we view these languages as languages of A^+. Schützenberger <cit.> proved that a language over a finite alphabet is A-recognizable if and only if it is star-free, in the sense that it can be obtained from finite languages by using only finite Boolean operations and concatenation. In particular, it follows that A is closed under concatenation. As important classes of examples of pseudovarieties closed under concatenation that include A, one has the complexity pseudovarieties C_n (cf. <cit.>) and every pseudovariety H formed by the finite semigroups whose subgroups belong to the pseudovariety of groups H.Combined with Theorem <ref>, the next lemma, which will be quite useful in the sequel, provides yet another characterization of the pseudovarieties closed under concatenation. A weaker version of the direct implication was proved in <cit.>. Let S be a topological semigroup whose topology is defined by ametric. The following conditions are equivalent: *The multiplication in S is an open mapping;*For every u,v∈ S, if (w_n)_n is a sequence of elements of S converging to uv, then there are sequences (u_n)_n and (v_n)_n of elements of S^I such that w_n=u_nv_n, lim u_n=u, and lim v_n=v. 
Consider a metric d inducing the topology of S. We denote by B(t,ε) the open ball in S with center t and radius ε.<ref>⇒<ref>: Let k be a positive integer. Since themultiplication is an open mapping, the set B(u,1/k) B(v,1/k) is an open neighbourhoodof uv. Hence there is p_k such that w_n∈ B(u,1/k) B(v,1/k) if n≥ p_k. Let n_k be the strictly increasing sequence recursively defined by n_1=p_1 andn_k=max{n_k-1+1,p_k} whenever k>1. Then there are sequences (u_n)_n and (v_n)_n satisfying the following conditions: if n_k≤ n<n_k+1 then u_n∈ B(u,1/k), v_n∈ B(v,1/k), and w_n=u_nv_n; and if n<n_1 then u_n=1 and v_n=w_n. The pair of sequences (u_n)_n and (v_n)_n satisfies condition <ref>.<ref>⇒<ref>: We want to prove that B(s,ε)B(t,ε) is open, for every s,t∈ S and ε>0. Let (w_n)_n be a sequence of elements of S converging to an element of B(s,ε)B(t,ε). Let u∈ B(s,ε) and v∈ B(t,ε) be such that lim w_n=uv. Take sequences (u_n)_n and (v_n)_n as in the statement of condition <ref>. There is N such that d(u_n,u)<ε-d(u,s) for all n≥ N. Then d(u_n,s)≤ d(u_n,u)+d(u,s)<ε for all n≥ N. Similarly, d(v_n,t)<ε for all sufficiently large n. Therefore, since w_n=u_nv_n, we have w_n∈ B(s,ε)B(t,ε) for all sufficiently large n, which proves that B(s,ε)B(t,ε) is open. A semigroup S is said to be equidivisible <cit.> if, for every equality of the form xy=uv, with x,y,u,v∈ S, there exists t∈ S^I such that, either xt=u and y=tv, or x=ut and ty=v. Clearly, free semigroups and groups are equidivisible. Moreover, all completely simple semigroups are equidivisible. Actually, a semigroup S is completely simple if and only if, for every x,y,u,v∈ S such that xy=uv, there are t,s∈ S such that xt=u, y=tv, x=us and sy=v <cit.>. Note that every equidivisible semigroup is unambiguous. The converse is not true: for instance, free bands are unambiguous, which follows easily from the solution of the word problem for free bands (see, for instance <cit.>) but not equidivisible for more than one free generator since, if a,b are two distinct free generators in a free band then, for x=a, y=b, u=v=ab, we have xy=uv, yet y>_ℒv and x>_ℛu. More generally, it is shown in <cit.> that, if V is a pseudovariety of semigroups such that V=RB V, where RB denotes the pseudovariety of finite rectangular bands, then AV is unambiguous, for every finite alphabet A.Let us say that a pseudovariety of semigroups V is equidivisible if AV is equidivisible, for every finite alphabet A. The following result was established by the first two authors <cit.>, wheredenotes the Mal'cev product, LI the pseudovariety of allfinite locally trivial semigroups, and CS the pseudovariety of all finite completely simple semigroups. A pseudovariety of semigroups V is equidivisible if and only ifV=LI V or V⊆CS. In particular, every pseudovariety closed under concatenation is equidivisible. Many of our results below are formulated not in terms of pseudovarieties but more abstractly for free profinite semigroups with suitable properties, which are satisfied for free profinite semigroups over pseudovarieties that are closed under concatenation or, sometimes, more generally, equidivisible.§ THE QUASI-ORDER OF 2-FACTORIZATIONSBy a quasi-order on a set we mean a reflexive transitive relation. In case the relation is additionally anti-symmetric, the quasi-order is called a partial order. A quasi-ordering (X,≤), in the sense of a set X with a quasi-order ≤, is said to be total, or linear if x≤ y or y≤ x, for every x,y∈ X. §.§ Definition and propertiesLet S be a semigroup. 
A 2-factorization of s∈ S is a pair (u,v) of elements of S^I such that s=uv. We denote the set of 2-factorizations of s by 𝔉(s). We introduce in 𝔉(s) a relation ≤ defined by (u,v)≤(u',v') if there exists t∈S^I such that ut=u' and v=tv', in which case we say that t is a transition from (u,v) to (u',v'). The relation ≤ is a quasi-order. Concerning transitivity, we have more precisely that if t is a transition from (u,v) to (u',v') and t' is a transition from (u',v') to (u”,v”), then tt' is a transition from (u,v) to (u”,v”).Given a quasi-order ≤ on a set P, we denote by ∼ the equivalence relation on P induced by ≤ and we write p<q if p≤ q but not p∼ q. Denote by ≺ the relation on P such that p≺ q if and only if q is a successor of p (equivalently, p is a predecessor of q), that is, p≺ q if and only if p<q and p≤ r≤ q ⇒ (r∼ p∨ r∼ q).For every element s of a semigroup S, the quotient set 𝔉(s)/∼ is denoted 𝔏(s). We denote the quotient mapping 𝔉(s)→𝔏(s) by χ. The partial order on 𝔏(s) induced by the quasi-order ≤ on 𝔉(s) is also denoted by ≤. For p,q∈𝔏(s), we also write p≺ q if p is a predecessor of q. Sometimes we will also consider the unions 𝔉(S)=⋃_s∈ S𝔉(s) and 𝔏(S)=⋃_s∈ S𝔏(s).The following result is immediate.A semigroup S is equidivisible if and only if 𝔏(s) is linearly ordered for every s∈ S. The previous lemma is the departing point motivating this paper. For a good supporting reference on the theory of linear orderings, see <cit.>.We proceed to extract from topological assumptions on S some consequences on the quasi-order of 2-factorizations. In what follows, 𝔉(s) is viewed as a topological subspace of S^I× S^I. If S is a compact semigroup, then, for every s∈ S, the quasi-order ≤ on 𝔉(s) is a closed subset of 𝔉(s)×𝔉(s).Suppose (p_i,q_i)_i∈ I is a convergent net of elements of 𝔉(s)×𝔉(s) with limit (p,q) and such that p_i≤ q_i for every i∈ I. Then, for each i∈ I, there is t_i∈ S^I making a transition from p_i to q_i. Since S^I is compact, the net (t_i)_i∈ I has a subnet converging to some t∈ S^I. Then, by continuity of multiplication on S^I, one deduces that indeed p≤ q, with t being a transition from p to q.We shall denote the open intervals of a quasi-ordered set P by]←,p[={r∈ P: r<p}, ]p,→[={r∈ P: p<r}, ]p,q[=]p,→[∩]←,q[,for every p,q∈ P. Considering the relation ≤, we also have the intervals of the form ]←,p]={r∈ P: r≤ p}, [p,→[={r∈ P: p≤ r}, and so on. Recall that the order topology of a linearly ordered set P is the topology with subbase the sets of the form ]←,p[ and ]p,→[. In particular, we consider the order topology on 𝔏(s).Let S be a compact equidivisible semigroup. For every s∈ S, the mapping χ𝔉(s)→𝔏(s) is continuous.It is sufficient to show that the sets of both forms χ^-1(]←,q[) and χ^-1(]q,→[) are open. By duality, we are actually reduced to show that χ^-1(]q,→[) is open. Since 𝔏(s) is linearly ordered by Lemma <ref>, the complement of this last set is χ^-1(]←,q]), which we therefore want to show to be closed. Consider a net (r_i)_i∈ I of elements of χ^-1(]←,q]), converging to some r∈𝔉(s). Let q̂∈χ^-1(q). Then r_i≤q̂ for every i∈ I. It follows from Lemma <ref> that r≤q̂, that is, r∈χ^-1(]←,q]), showing that χ^-1(]←,q]) is closed. Let S be a compact equidivisible semigroup. Then, for every s∈ S, the order topology of 𝔏(s) is compact. 
Moreover, if the space S is metrizable, then the space 𝔏(s) is also metrizable and the set of isolated points of 𝔏(s) is countable. Since S is compact and 𝔉(s) is the preimage in S^I× S^I, under multiplication, of the closed set {s}, it is a closed subset of a compact space, whence compact. Since 𝔏(s) is clearly Hausdorff, it follows from Proposition <ref> that 𝔏(s) is compact. Suppose that S is metrizable. Then 𝔉(s) is metrizable, being a subspace of a product of two metrizable spaces. Since the continuous image of a compact metric space in a Hausdorff space is metrizable (<cit.>), it also follows from Proposition <ref> that 𝔏(s) is metrizable. As a compact metrizable space, 𝔏(s) has a dense countable subset. Since isolated points belong to every dense subset, they form a countable set. Recall that a linearly ordered set L is said to be complete if every subset of L which is bounded above has a least upper bound (i.e., a supremum) or, equivalently, if every subset of L which is bounded below has a greatest lower bound (i.e., an infimum) <cit.>. Suppose S is a compact equidivisible semigroup. Then the linearly ordered set 𝔏(s) is complete. Let X be a subset of 𝔏(s). Consider the subset U of 𝔏(s) of lower bounds of X and assume that it is nonempty. As S is equidivisible, we know by Lemma <ref> that the quasi-order ≤ on the set χ^-1(U) is linear, whence in particular this set is directed. Therefore, we may consider, in the product space S^I× S^I, the net (p)_p∈χ^-1(U). By compactness, this net has a subnet (p_i)_i∈ I converging to some element q of 𝔉(s). We claim that χ(q)=max U. If r∈χ^-1(X), then we have p≤ r for all p∈χ^-1(U), and so q≤ r, by Lemma <ref>, showing that q∈χ^-1(U). On the other hand, if p∈χ^-1(U), then, by the definition of subnet, there exists i_0∈ I such that p≤ p_i for all i≥ i_0. Hence we have p≤ q, again by Lemma <ref>. This proves the claim that χ(q)=max U, and so χ(q) is the infimum of X. §.§ The category of transitions A directed graph with vertex set V and edge set E, which are assumed to be disjoint, is given by mappings α,ω:E→ V assigning to each edge s its source α(s) and its target ω(s). A semigroupoid is a directed graph, with a nonempty set of edges, endowed with a partial associative binary operation on the set of its edges such that if s and t are edges, then st is defined if and only if ω(s)=α(t), in which case α(st)=α(s) and ω(st)=ω(t). Semigroupoids can be viewed as generalizations of semigroups, which in turn can be viewed as one-vertex semigroupoids. In particular, Green's relations generalize straightforwardly to Green's relations between the edges in semigroupoids. For instance, in a semigroupoid S, s≤_ Jt means that the edge t is a factor of the edge s and s Jt means that s≤_ Jt and t≤_ Js. A subsemigroupoid of the semigroupoid S is a subgraph T of S, with a nonempty set of edges, such that s,t∈ T implies st∈ T whenever ω(s)=α(t). Also, an ideal of a semigroupoid S is a subsemigroupoid I of S such that for every t∈ I and every s∈ S, ω(s)=α(t) implies st∈ I, and ω(t)=α(s) implies ts∈ I. A category is a semigroupoid such that, for each vertex v, there is a loop 1_v at v satisfying 1_vs=s and t1_v=t for every edge s starting in v and every edge t ending in v. This coincides with the notion of small category from Category Theory, except that we compose in the opposite direction.
In doing so, we are following a common convention in Semigroup Theory, see for example <cit.>.If the sets of edges and vertices of a semigroupoid are both endowed with compact topologies, for which the semigroupoid operation and the mappings α and ω are continuous, then the semigroupoid is said to be compact.Let S be an arbitrary semigroup. To each s∈ S, we associate a category T(s), the category of transitions for s, as follows: * the set of vertices of T(s) is 𝔉(s);* we have an edge (u,v,t,x,y) from (u,v) to (x,y), which we may denote (u,v)(x,y), if t is a transition from (u,v) to (x,y) (thus implying (u,v)≤ (x,y)); we say that t is the label of the edge;* multiplication of consecutive edges is done by multiplying their labels, that is, the product of (u_1,v_1)(u_2,v_2) an (u_2,v_2)(u_3,v_3) is (u_1,v_1)(u_3,v_3). Note that the sets of vertices of the strongly connected components of the category T(s) are precisely the ∼-classes of 𝔉(s).The category of transitions for S, denoted T(S), is the coproduct category ⋃_s∈ S T(s). We denote by Λ the faithful functor T(S)→ S^I mapping each edge (u,v,t,x,y) to t. We say that Λ is the labeling functor associated to T(S). We remark that if S is a compact semigroup, then T(S) is a compact category, with the vertex and edge sets respectively endowed with the subspace topology of (S^I)^2 and of (S^I)^5. Note that Λ is continuous. Suppose that in 𝔏(s) we have p≤ q. An element t∈ S^I will be called a transition from p to q if t is a transition from an element of p to an element of q, in which case we use the notation pq.For future reference, it is convenient to register the following remark, concerning the relationship between T(u) and T(uv).Let u,v be elements of a semigroup S. If (α,β)(γ,δ) is an edge of T(u), then (α,β v)(γ,δ v) is an edge of T(uv). This remark is applied in the proof of the following lemma, which in turn will later be used in the proof of Theorem <ref>.Let S be an equidivisible semigroup. Consider two edges σ and τ of T(S) with the same target and such that α(σ)<α(τ). Then the label of σ is a suffix of the label of τ.Let σ be the edge (α,β)(ε,φ) and τ be the edge (γ,δ)(ε,φ), with (α,β)<(γ,δ). The following equalities hold: ε=α t=γ z, β=tφ, and δ=zφ. From the equality α t=γ z and by equidivisibility, we deduce that if z is not a suffix of t, then there exists s∈ S such that γ s=α and st=z, that is, we have an edge (γ,z)(α,t) in T(ε). By Remark <ref>, there is an edge (γ,δ)(α,β) in T(S), which contradicts the hypothesis that (α,β)<(γ,δ). Hence z is a suffix of t. § THE MINIMUM IDEAL SEMIGROUPOID AND THE J-CLASS ASSOCIATED TO A ∼-CLASS In a strongly connected compact semigroupoid C, there is an underlying minimum ideal semigroupoid K(C) which may be defined as follows. Consider any vertex v of C and the local semigroup C_v of C at v, that is, the semigroup formed by the loops at v. Then C_v is a compact semigroup, and therefore it has a minimum ideal K_v. Let K(C) be the subsemigroupoid of C with the same set of vertices of C and whose edges are those edges as C which admit some (and therefore every) element of K_v as a factor. The next lemma is folklore.If C is a strongly connected compact semigroupoid, then K(C) is a closed ideal of C whose definition does not depend on the choice of v. Moreover, the edges in K(C) are J-equivalent, more precisely they are J-below every edge of C. Let (u,v)∈𝔉(S). An element z∈ S^I stabilizes (u,v) if z labels a loop of T(S) at (u,v). 
Note that the set M_(u,v) of stabilizers of (u,v) is a monoid and that M_(u,v) is the isomorphic image, under the labeling functor Λ, of the local monoid of T(S) at (u,v).Assume S is a compact semigroup. For p∈𝔏(S), let T_p be the strongly connected component of T(S) whose vertices are the elements of p. We denote by K_p the minimum ideal semigroupoid K( T_p). Since ΛT(S)→ S^I is a (continuous) functor, where S^I is viewed as the set of edges of single vertex semigroupoid, in view of Lemma <ref> the set of labels of edges in K_p is contained in a single J-class of S^I, which we denote J_p. For every (u,v)∈ p, the minimum ideal of M_(u,v), which we denoteI_(u,v), is the image under Λ of the minimum ideal of the local monoid of T(S) at (u,v), whence I_(u,v)⊆ J_p. Note that J_p is regular, since I_(u,v) is itself regular. The set J_p can also be characterized as the set of J-minimum transitions from p to itself, as seen in the next lemma. Let S be a compact semigroup, and let p∈𝔏(S). Then t is a transition from p to p if and only if t is a factor of the elements of J_p.Let (u,v) (x,y) be a transition between elements of p. Since (u,v)∼ (x,y), there is a transition (x,y) (u,v). The loop (u,v) (u,v) is a factor of every element ε in the minimum ideal of the local monoid at (u,v). Therefore, ts is a factor of Λ (ε)∈ J_p.Conversely, suppose that t is factor of the elements of J_p. Then there is a loop (u,v) (u,v) in 𝒦_p such that z=xty for some x,y∈ S^I. In 𝒯(S) we have the following path: (u,v)(ux,tyv)(uxt,yv)(u,v). Therefore, (ux,tyv)(uxt,yv) is an edge of 𝒯_p. We next give some results that further highlight the role of idempotent stabilizers of 2-factorizations of elements of S, specially those idempotents in a J-class of the form J_p.Recall that a semigroup is stable if 𝒥∩≤_ℒ=ℒ and 𝒥∩≤_ℛ=ℛ. In particular, any compact semigroup is stable, see for instance <cit.>.Let S be a stable unambiguous semigroup. Let e,f be idempotents stabilizing an element (u,v) of 𝔉(S). If e J f then e=f.The hypothesis gives ue=u=uf and ev=v=fv. Since S is unambiguous, from ue=uf we get e≤_ Lf or f≤_ Le. By stability, as e Jf, it follows that e Lf. Dually, from ev=fv we get e Rf. Hence e=f.Let S be a compact unambiguous semigroup, p∈𝔏(S) and (u,v),(x,y)∈ p. The edge (u,v) (x,y) of T(S) belongs to 𝒦_p if and only if t∈ J_p.The “only if” part holds by definition of J_p. Conversely, suppose that t∈ J_p. Denote by ε the edge (u,v) (x,y). As J_p is regular, there is an idempotent e∈ J_p such that t=et. Since t is a prefix of v, we have v=ev, thus we may consider the edges (u,v) (ue,v), (ue,v) (ue,v) and (ue,v) (x,y), respectively denoted by α, β and γ. Observe that ε=αβγ, and so it suffices to show that the loop β belongs to 𝒦_p. The ideal 𝒦_p contains the minimum ideal of the local monoid of T(S) at (ue,v). The latter contains an idempotent, of the form (ue,v)(ue,v) for some f∈ J_p. But f=e by Lemma <ref> and therefore ε∈𝒦_p. Note that in the next lemma one does not assume that S is unambiguous. Let S be a compact semigroup, and let (u,v)∈𝔉(S). Let e be an idempotent stabilizing (u,v). If f is an idempotent J-equivalent to e, then f stabilizes an element of the ∼-class p of (u,v). Moreover, if e labels a loop of 𝒦_p, then f also labels a loop of 𝒦_p.If f Je, then there are in the J-class of e some elements s,t such that sts=s, tst=t, st=e, ts=f. We have the four edges in T(S) which are depicted in Figure <ref>. 
In particular, f stabilizes a vertex ∼-equivalent to (u,v). Denote by ε, σ, ϕ, τ the edges in Figure <ref> labeled by e, s, f, t, respectively. Since s=es, t=te, f=ts, we have σ=εσ, τ=τε and ϕ=τσ. Therefore, if ε belongs to the ideal 𝒦_p, then all edges in Figure <ref> belong to 𝒦_p, and so f labels a loop of 𝒦_p. Let S be a compact semigroup, and let p∈𝔏(S). Every idempotent of J_p labels a loop of 𝒦_p. For e∈ J_p, denote by p_e the nonempty set of elements of p stabilized by e. Let S be a compact unambiguous semigroup. Let p∈𝔏(S). Then J_p is the set of labels of edges of K_p. Moreover, if s∈ J_p and e and f are idempotents such that e Rs Lf, then s labels an edge from p_e to p_f. Moreover, there is a bijection p_e→ p_f, given by μ_s(u,v)=(us,tv), where t is the unique t∈ J_p such that st=e and ts=f. Let s be an element of J_p and let e and f be idempotents such that e Rs Lf. Then there exists (a unique) t∈ J_p such that st=e and ts=f, for which we have e Lt Rf. Let (u,v)∈ p_e. Note that such a pair (u,v) exists by Corollary <ref>. Therefore, we are in the same situation as in the proof of Lemma <ref>, with the four edges depicted in Figure <ref> belonging to 𝒦_p by Corollary <ref>. If there is another edge (u,v)(x,y) in K_p with (x,y)∈ p_f, then x=us and v=sy, thus y=fy=tsy=tv. Hence, there is for each vertex in p_e exactly one edge labeled s into a vertex of p_f. This defines the function μ_s p_e→ p_f such that μ_s(u,v)=(us,tv). Finally, note that μ_s and μ_t are mutually inverse. We finish this section with a couple of observations concerning aperiodic semigroups, starting with the next lemma. Let S be a compact aperiodic semigroup. Let p∈𝔏(S). If (u,v),(x,y) are elements of p stabilized by the same idempotent e of J_p, then (u,v)=(x,y). Since (u,v)∼(x,y), there are t,z∈ S^I such that there are edges (u,v)(x,y) and (x,y)(u,v) in the category T(S). Then we also have edges as in the following picture: [Figure: vertices (u,v) and (x,y), with an edge from (u,v) to (x,y) labeled ete, an edge from (x,y) to (u,v) labeled eze, and a loop labeled e at each of the two vertices.] By Lemma <ref> and stability of S, we conclude that ete and eze are H-equivalent to e, thus, by aperiodicity, we get ete=eze=e. By the definition of the category T(S), we deduce that u=ue=x and v=ey=y. In the following result, we have a case in which the idempotents of J_p parameterize the elements of p. Let S be a compact and unambiguous aperiodic semigroup. Let p∈𝔏(S). Then there is a bijection between the ∼-class p and the set of idempotents in J_p, sending each (u,v) to the unique idempotent e∈ J_p that stabilizes (u,v). Let (u,v)∈ p. There are in J_p idempotents that stabilize (u,v), as J_p contains the minimum ideal of the monoid of stabilizers of (u,v). If e,f are idempotents of J_p stabilizing (u,v), then e=f by Lemma <ref>. Hence, we can consider the function ε p→ J_p sending (u,v) to the unique idempotent of J_p stabilizing (u,v). The function ε is injective by Lemma <ref>, and it is surjective by Corollary <ref>. § FINITELY CANCELABLE SEMIGROUPS Consider a compact semigroup S generated by a closed set A. Recall that, in the context of topological semigroups, that means that every element of S is arbitrarily close to products of elements of A. Note that, since A is closed, we have S=S^IA=AS^I. Indeed, every element of S is the limit of a net of the form (w_ia_i)_i∈ I, where the a_i∈ A and the w_i are perhaps empty products of elements of A.
By compactness, we may assume that the nets (w_i)_i∈ I and (a_i)_i∈ I converge in S^I, say to w and a, respectively. Since A is closed, we conclude that a∈ A, which shows that S⊆ S^IA.

Say that S is right finitely cancelable with respect to A when, for every a,b∈ A and u,v∈ S^I, the equality ua=vb implies a=b and u=v. This implies A∩ SA=A∩ AS=∅. Say that S is right finitely cancelable if it is right finitely cancelable with respect to some closed generating subset A.

It turns out that the set A is uniquely determined by S, as shown next.

Let S be a compact semigroup generated by closed subsets A and B such that A∩ SA=B∩ SB=∅. Then we have A=B. In particular, if S is right finitely cancelable with respect to A and to B, then A=B.

Let a∈ A. Since S=S^IB=S^IA, we have a=sb for some s∈ S^I and b∈ B, and b=tc for some t∈ S^I and c∈ A. We obtain the factorization a=stc. Since A∩ SA=∅, we must have s=t=1, and so a=b∈ B, showing that A⊆ B. By symmetry, we have B⊆ A.

Say that a pseudovariety of semigroups is right finitely cancelable if AV is right finitely cancelable with respect to A, for every finite alphabet A.

A pseudovariety of semigroups V is right finitely cancelable if and only if V= D∗ V.

It is observed in <cit.> that V is right finitely cancelable if and only if, for every finite alphabet A, and for every V-recognizable language L of A^+ and a∈ A, the language La is also V-recognizable. In <cit.> one finds a proof that this is equivalent to V= D∗ V.

The above definitions have obvious duals which are obtained by replacing right by left. Note that a semigroup pseudovariety V is right finitely cancelable if and only if the pseudovariety V^op of semigroups of V with reversed multiplications is left finitely cancelable. We say that a compact semigroup is finitely cancelable (with respect to A) if it is simultaneously right and left finitely cancelable (with respect to A). Similarly, a pseudovariety of semigroups is finitely cancelable if it is simultaneously right and left finitely cancelable. If V is a semigroup pseudovariety containing some nontrivial monoid and such that V= V∗ D, then V is finitely cancelable (cf. <cit.> and <cit.>).

The following proposition is <cit.>.

If V is an equidivisible pseudovariety of semigroups not contained in CS, then V is finitely cancelable.

The next lemma is the first of a series of results in which the hypothesis of a semigroup being finitely cancelable enables us to get further insight into the quasi-order of 2-factorizations.

Suppose S is a compact semigroup, finitely cancelable with respect to A. Let u,v∈ S^I and a∈ A. If the ∼-class of at least one of (ua,v) and (u,av) is not a singleton, then (u,av)∼(ua,v).

By duality, it suffices to consider the case where the ∼-class of p=(ua,v) is not a singleton. Let q be in p/∼ with p≠ q. As p≤ q, we may consider a transition p →^x q. Then we have q=(uax,y) for some y∈ S^I such that v=xy. Since q≤ p, there is t such that ua=uaxt and y=txy. Because p≠ q, we must have t≠ 1, whence we may take b∈ A and z∈ S^I such that t=zb. Because S is finitely cancelable with respect to A, from ua=uaxt=uaxzb we get a=b and u=(ua)(xz). On the other hand, we have (xz)(av)=x(za)v=xtxy=xy=v, which shows that (ua,v)∼(u,av).

We now turn our attention to profinite semigroups.

Suppose S is a profinite semigroup generated by a closed subset A. Let s∈ S and let p,q∈𝔉(s) with p<q. Then there are x,y∈ S^I and a∈ A such that p≤(x,ay)<(xa,y)≤ q.

Let p=(u,v) and q=(u',v').
Since (u,v)<(u',v'), there exists t∈ S such that u'=ut and v=tv', and the system

utX = u,  Xtv' = v'

has no solution X∈ S. By a standard compactness argument which can be found in the proof of <cit.>, there is some continuous onto homomorphism φ_0: S→ R, with R finite, which may be naturally extended to an onto continuous homomorphism φ: S^I→ R^I, and such that the following system (<ref>) has no solution X∈ R:

φ(u)φ(t)X = φ(u),  Xφ(t)φ(v') = φ(v').

Let (t_n)_n be a net of elements of the (discrete) subsemigroup of S generated by A such that (t_n)_n converges to t and such that φ(t_n)=φ(t) for all n. Write t_n=a_n,0a_n,1⋯ a_n,k_n, with the a_n,i∈ A. Then the following inequalities hold for i=0,…,k_n:

(φ(ua_n,0⋯ a_n,i-1),φ(a_n,i⋯ a_n,k_nv')) ≤ (φ(ua_n,0⋯ a_n,i),φ(a_n,i+1⋯ a_n,k_nv')).

Since ≤ is a transitive relation and the non-existence of a solution to (<ref>) guarantees that the following strict inequality holds

(φ(u),φ(a_n,0⋯ a_n,k_nv')) < (φ(ua_n,0⋯ a_n,k_n),φ(v')),

we deduce that there is i=i_n such that the inequality (<ref>) is also strict. As A is closed and S is compact, by taking subnets we may assume that the net (a_n,i_n)_n converges to some a∈ A, that φ(a_n,i_n)=φ(a) for every n, and that each of the nets t'_n=a_n,0⋯ a_n,i_n-1 and t''_n=a_n,i_n+1⋯ a_n,k_n converges to some t',t''∈ S^I, respectively (in particular, this yields t=t'at''). Then the strict inequality in (<ref>), with i=i_n, yields (φ(ut'),φ(at''v'))<(φ(ut'a),φ(t''v')), which implies that

p=(u,t'at''v') ≤(ut',at''v') <(ut'a,t''v') ≤(ut'at'',v')=q.

Thus, it suffices to choose x=ut' and y=t''v' to obtain the inequalities of the statement of the proposition.

We close this subsection with a result regarding the existence of a successor in the quasi-ordered set of 2-factorizations.

Suppose S is a profinite semigroup, finitely cancelable with respect to A. Let s∈ S, let p,q∈𝔉(s) and suppose that p<q.

*Consider the unique u,v,a such that u,v∈ S^I, a∈ A and p=(u,av). If p≺ q, then we have p=(u,av)≺(ua,v)=q. Moreover, the ∼-classes of p and q are singletons.

*Conversely, if p=(u,av) and q=(ua,v), where u,v∈ S^I and a∈ A, then we have p≺ q.

<ref> Notice that u,v,a really exist and are unique. Indeed, take p=(u,w), with u,w∈ S^I. One has w≠ 1, because p<q, and so w=av for some a∈ A and v∈ S^I, which are unique because S is finitely cancelable with respect to A. By Proposition <ref>, there are u',v'∈ S^I and a'∈ A such that p≤ (u',a'v')<(u'a',v')≤ q. Since p≺ q, we must have p∼ (u',a'v')<(u'a',v')∼ q. It then follows from Lemma <ref> that the ∼-classes of (u',a'v') and (u'a',v') are singletons, thus p=(u',a'v') and q=(u'a',v'). By the uniqueness of u,v and a, we then have q=(ua,v).

<ref> Assume there exists r=(x,y) with p<r<q. There are z,t such that x=ut, av=ty, ua=xz and y=zv. If t=1, then r=p, while if z=1, then r=q, hence both t and z are different from 1. Since av=ty, from the fact that S is finitely cancelable with respect to A, it follows that there is t' such that t=at' and v=t'y. Similarly, there is z' such that z=z'a and u=xz'. Therefore, u=xz'=utz'=ua·t'z' and v=t'y=t'zv=t'z'·av. This shows that p∼ q, in contradiction with p<q. Hence p≺ q.

§ STEP POINTS AND STATIONARY POINTS

In this section, we continue gathering important properties of the linear orders induced by pseudowords. We identify two types of elements in such orders, that we call step points and stationary points. Let us start by introducing these notions.

Let P be a partially ordered set.
We call step points the points of P that admit either a successor or a predecessor, or are the minimum or the maximum of P, if they exist. All other points are said to be stationary. The set of step points of P will be denoted by Step(P), and the set of stationary points of P will be denoted by Stat(P).

For an element s of a semigroup S, we also say that p∈𝔉(s) is a step point (respectively, a stationary point) if χ(p) is a step point (respectively, a stationary point) of 𝔏(s). In this section we further develop the results obtained in Section <ref>, using the notions of step point and stationary point. If S is profinite and finitely cancelable, then the ∼-class p of a step point (u,v) of 𝔉(S) is a singleton (cf. Proposition <ref><ref>), for which reason, in that case, we feel free to make the abuse of notation p=(u,v).

As a preparation for the following example, recall that in a compact semigroup S, if s∈ S, then s^ω denotes the unique idempotent in the closed subsemigroup of S generated by s. Later on, we shall also make use of the notation s^ω+1 for s^ω s, and s^ω-1 for the inverse of s^ω+1 in the maximal subgroup containing s^ω+1.

Consider the pseudoword a^ω of the free pro-aperiodic semigroup {a}A. Then 𝔉(a^ω) has only one stationary point, namely (a^ω,a^ω). The set 𝔉(a^ω) is linearly ordered, whence isomorphic to 𝔏(a^ω), with order type ω+1+ω^*. More precisely, its elements are ordered as follows:

(1,a^ω)<(a,a^ω)<(a^2,a^ω)<⋯<(a^ω,a^ω)<⋯<(a^ω,a^2)<(a^ω,a)<(a^ω,1).

Example <ref> should be compared with the following one.

Consider the pseudoword a^ω of the free profinite semigroup AS. Like in Example <ref>, 𝔏(a^ω) has order type ω+1+ω^*, and its sole stationary point is p=(a^ω,a^ω)/∼:

(1,a^ω)<(a,a^ω-1)<(a^2,a^ω-2)<⋯<p<⋯<(a^ω-2,a^2)<(a^ω-1,a)<(a^ω,1).

But (a^ω,a^ω)/∼ has infinitely many elements, namely, the pairs of the form (g,g^ω-1), where g is an element in the maximal subgroup containing a^ω.

Examples <ref> and <ref> fit in the following definition.

[Clustered sets] We say that the linearly ordered set P is clustered if the following conditions hold:

*P has a minimum min P and a maximum max P;
*for every q∈ P, if q=min P or q has a predecessor, then q has a successor or q=max P;
*for every q∈ P, if q=max P or q has a successor, then q has a predecessor or q=min P;
*for every p,q∈ P, if ]p,q[ is nonempty, then there is a step point in the interval ]p,q[.

Property <ref> translates into saying that the set of step points of P is dense with respect to the order topology of P.

Let S be a profinite semigroup which is finitely cancelable, and let s∈ S. Then 𝔏(s) is clustered.

Property <ref> in Definition <ref> holds trivially, with min 𝔏(s)=(1,s) and max 𝔏(s)=(s,1).

Let us show <ref>. Take s∈ S and p,q∈𝔉(s) with p≺ q. We have to show that either q has a successor, or q=max 𝔏(s). Let A be the generating set with respect to which S is finitely cancelable. By Proposition <ref>, there are u,v∈ S^I and a∈ A with p=(u,av) and q=(ua,v). If v=1, then q=max 𝔏(s), so that we may assume v≠1. Let v=bw with b∈ A, and let r=(uab,w). Clearly, we have q≤ r. We claim that q≺ r. Indeed, if q=r, then ua=uab and bw=w. Therefore, a=b, u=ua, and av=aaw=aw=v, showing that p=q, in contradiction with the hypothesis. Hence, we must have q<r, since otherwise we would obtain q∼ r and q≠ r, which entails p∼ q by Lemma <ref>. Finally, by Proposition <ref><ref> applied to q and r, we get q≺ r. This establishes <ref>, and <ref> holds dually.

Finally, let us prove <ref>.
If p is a step point, then ]p,q[≠∅ implies that the successor of p belongs to ]p,q[. Hence, it suffices to consider the case where p is stationary. Let p̂∈χ^-1(p) and q̂∈χ^-1(q). By Proposition <ref>, there are u,v∈ S^I and a∈ A such that p̂≤(u,av)<(ua,v)≤q̂ in 𝔉(s). By Proposition <ref><ref>, we have (u,av)≺(ua,v). Therefore, (u,av) and (ua,v) are step points. In particular, we have (u,av)∈]p,q[∩ Step(𝔏(s)).

In the next result, we characterize the stationary points as the vertices with a nontrivial local monoid in the category of transitions.

Let S be a profinite semigroup which is finitely cancelable. Let (u,v)∈𝔉(S). Then (u,v) is a stationary point if and only if it is stabilized by some element of S.

Suppose that (u,v) is stationary. Let A be the set with respect to which S is finitely cancelable. Since v≠ 1, we may take a factorization v=aw with a∈ A and w∈ S^I. Clearly, (u,aw)≤(ua,w) holds, and so (u,aw)∼(ua,w) by Proposition <ref><ref>. Hence, there is an edge (ua,w) →^t (u,aw), for some t∈ S^I. This implies that at∈ S stabilizes (u,v).

Conversely, suppose there is z∈ S stabilizing (u,v). There is some factorization of the form z=at, for some a∈ A and t∈ S^I. The following equalities and inequalities

(u,v)=(u,atv)≤(ua,tv)≤(uat,v)=(u,v)

show that (u,v)=(u,atv)∼(ua,tv). It follows from Proposition <ref><ref> that (u,v) has no successor, since otherwise this successor would be (ua,tv), in contradiction with (ua,tv)≤(u,v). Since it is not the maximum of 𝔉(s) (because v=zv≠ 1), it must be a stationary point.

Proposition <ref> is applied in the proof of the next result.

Let S be an equidivisible profinite semigroup which is finitely cancelable. Then, in T(S), any two coterminal edges between elements of distinct strongly connected components are equal. In other words, if (α,β) →^t (γ,δ) and (α,β) →^s (γ,δ) are edges of T(S) such that (α,β)<(γ,δ), then t=s.

The hypothesis translates into the following equalities: γ=α t=α s and β=tδ=sδ. Throughout the proof, let w=αβ, so that all the pairs involved belong to 𝔉(w). We first assume that at least one of the points (α,β) and (γ,δ) is a step point. By symmetry, we may as well assume that (α,β) is a step point. From the equality α t=α s, by equidivisibility and without loss of generality, we may assume that there is some z∈ S^I such that α z=α and t=zs. Then, we have zβ=zsδ=tδ=β. This shows that (α,β) →^z (α,β) is a loop of T(w). Since (α,β) is assumed to be a step point, applying Proposition <ref> we obtain z=1, thus t=s.

It remains to consider the case where both (α,β) and (γ,δ) are stationary. By Theorem <ref>, there is a step point (x,y)∈𝔉(w) such that (α,β)<(x,y)<(γ,δ). By the preceding paragraph, there are unique edges in T(w) from (α,β) to (x,y) and from (x,y) to (γ,δ). Let r_1 and r_2 be the respective labels. To prove the proposition, it suffices to show that τ=r_1r_2 whenever (α,β) →^τ (γ,δ) is an edge of T(w).

Note that (x,r_2) and (α,τ) are elements of 𝔉(γ). Since β=τδ and y=r_2δ, if (x,r_2)≤(α,τ) (in 𝔉(γ)), then (x,y)≤(α,β) (in 𝔉(w)) by Remark <ref>, which gives a contradiction. Hence, by equidivisibility, there is in T(γ) an edge (α,τ) →^{r_0} (x,r_2). In particular τ=r_0r_2. Remark <ref> guarantees the existence of the edge (α,β) →^{r_0} (x,y). But we defined, in the previous paragraph, r_1 as the label of the unique edge from (α,β) to (x,y). Therefore r_0=r_1, and so we have the equality τ=r_1r_2, which we have seen to be sufficient to conclude the proof.
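For a concrete instance of this uniqueness property, consider again the pseudoword a^ω of {a}A (cf. Example <ref>): for m<n, the step points (a^m,a^ω) and (a^n,a^ω) lie in distinct strongly connected components of T(a^ω), and the only edge from the former to the latter is (a^m,a^ω) →^{a^n-m} (a^n,a^ω), since a^m t=a^n forces t=a^n-m, the semigroup {a}A being finitely cancelable with respect to {a}.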
Let S be an equidivisible profinite semigroup which is finitely cancelable, let w∈ S, and let p_1,p_2∈𝔏(w). If p_1<p_2 then the set of transitions from p_1 to p_2 is contained in a J-class of S.

Let x_1,x_1'∈ p_1 and x_2,x_2'∈ p_2, and consider edges x_1 →^t x_2 and x_1' →^{t'} x_2'. Then we have edges x_i →^{s_i} x_i' and x_i' →^{r_i} x_i, and also x_1 →^{s_1t'r_2} x_2 and x_1' →^{r_1ts_2} x_2'. By Proposition <ref>, we must have t=s_1t'r_2 and t'=r_1ts_2, whence t J t'.

We remark that there is a large class of pseudovarieties whose corresponding finitely generated relatively free profinite semigroups satisfy the hypotheses of Proposition <ref>.

Let V be a pseudovariety of semigroups such that V=LI ⓜ V. Then every semigroup of the form AV, with A a finite alphabet, satisfies all conditions in Proposition <ref>: they are profinite, equidivisible, and finitely cancelable (cf. Theorem <ref> and Proposition <ref>, where the latter may be applied because V=LI ⓜ V implies V⊇LI and thus V⊈CS).

Proposition <ref> is used in the proof of the following lemma, establishing a sufficient condition for equality between stationary points.

Let S be an equidivisible profinite semigroup which is finitely cancelable. Let p and q be stationary points of 𝔏(S). If there is a transition p →^t q such that t lies J-above both J_p and J_q then p=q.

We have p≤ q, so, arguing by contradiction, suppose that p<q. Consider an edge (u,v) →^t (x,y) of T(S) with (u,v)∈ p and (x,y)∈ q. Let e∈ J_p and f∈ J_q be idempotents respectively stabilizing (u,v) and (x,y). Then we have a transition (u,v) →^{etf} (x,y). By Proposition <ref>, it follows that etf=t, whence, since S is stable, and by the hypothesis that t lies J-above both e and f, we have e R t L f. Therefore, there exists s∈ J_p=J_q such that ts=e and st=f. Now, the assumption that we have a transition (u,v) →^t (x,y) means that the equalities ut=x and v=ty hold. Combining with the equalities ue=u and fy=y, we deduce that xs=uts=ue=u and sv=sty=fy=y. Hence (x,y)≤(u,v), which contradicts the assumption p<q. To avoid the contradiction, we must have p=q.

The following is an example obtained by application of Lemma <ref>.

Let V be a pseudovariety containing LSl, and let A be a finite alphabet. It is shown in <cit.> (see also <cit.>), using Zorn's Lemma and a standard compactness argument, that AV contains regular elements that are J-maximal among the elements of AV∖ A^+. We next verify that if V is closed under concatenation and u is a J-maximal regular element of AV, then the order type of 𝔏(u) is ω+1+ω^*. Indeed, let p,q be two stationary points of 𝔏(u) such that p≤ q, and let e∈ J_p, f∈ J_q. From the maximality assumption on u, we deduce that e J u J f. Hence p=q by Lemma <ref>, showing that 𝔏(u) has only one stationary point.

Restricting our attention to the case of profinite semigroups that are free relatively to pseudovarieties closed under concatenation, we obtain stronger results about step and stationary points. The next lemma is a provisional result with this flavor, which is improved later on, namely in Proposition <ref>.

Let A be a finite alphabet, and let V be a pseudovariety closed under concatenation. Let w∈ AV. If (u,v) is a stationary point of 𝔉(w), then there exists a strictly increasing sequence (u_n,v_n)_n of points of 𝔉(w) such that lim u_n=u and lim v_n=v.

Since A is finite, AV is metrizable, and so 𝔏(w) is metrizable by Corollary <ref>. Let p=(u,v)/∼. By Theorem <ref>, there is a strictly increasing sequence (x_n,y_n)_n of step points converging in 𝔏(w) to p. As S is compact, taking subsequences we may assume that (x_n,y_n)_n converges in 𝔉(w) to some (x,y)∈𝔉(w).
By Proposition <ref>, we know that (x,y)∈ p. Then there are edges (x,y) →^s (u,v) and (u,v) →^t (x,y) in T(w). Note that lim x_n=x=xst. Hence, by Theorem <ref> and Lemma <ref>, for every n there is a factorization x_n=x_n's_nt_n with lim x_n'=x, lim s_n=s, and lim t_n=t. Then, we have lim(x_n's_n,t_ny_n)=(u,v) in 𝔉(w), thus lim(x_n's_n,t_ny_n)/∼=p in 𝔏(w) by Proposition <ref>. And since (x_n's_n,t_ny_n)≤(x_n,y_n)<(x,y), we conclude that the sequence ((x_n's_n,t_ny_n)/∼)_n has p as a supremum but not as a maximum, enabling us to extract from the sequence (x_n's_n,t_ny_n)_n a strictly increasing subsequence. Since this subsequence also converges to (u,v) in 𝔉(w), this completes the proof.

Let A be a finite alphabet, and let V be a pseudovariety closed under concatenation. Let w∈ AV and let K be a nonempty clopen subset of 𝔉(w). Then K has a minimum and a maximum, and both of them are step points.

As K is closed, whence compact, we know from Proposition <ref> that its projection under χ is also compact, whence closed. Let p=inf χ(K) and q=sup χ(K). Since χ(K) is closed, we have p=min χ(K) and q=max χ(K). Hence, there exist p̂∈ K∩χ^-1(p) and q̂∈ K∩χ^-1(q). Although these points are perhaps not unique, all choices are ∼-equivalent. Moreover, for every other point r∈ K, from χ(r)∈[p,q] we deduce that p̂≤ r≤q̂.

It remains to show that p̂ and q̂ are step points. Suppose on the contrary that p̂ is a stationary point. By Lemma <ref>, there is a strictly increasing sequence (p̂_n)_n in 𝔉(w) converging to p̂. Since K is open, it must contain points of the form p̂_n. As p̂_n<p̂, this contradicts the fact that p̂ is a minimum of K. Hence, p̂ is a step point and, similarly, so is q̂.

Lemma <ref> was used to prove Lemma <ref>. We next use Lemma <ref> to show that the sequence in Lemma <ref> may be formed only by step points.

Let A be a finite alphabet, and let V be a pseudovariety closed under concatenation. Let w∈ AV. If (u,v) is a stationary point of 𝔉(w), then there exists a strictly increasing sequence (u_n,v_n)_n of step points of 𝔉(w) such that lim u_n=u and lim v_n=v.

Since 𝔉(w) is a zero-dimensional space and the set of step points of 𝔉(w) is linearly ordered by ≤, it suffices to show that, for every clopen subset K⊆𝔉(w) containing p=(u,v)/∼ and every interval [q,p], with q<p, there is some step point in the intersection K∩[q,p]. Because Step(𝔏(w)) is topologically dense in 𝔏(w) (cf. Theorem <ref>), we are reduced to the case where q is a step point. Let r be a step point such that r>p, for instance r=(w,1). The closed intervals in 𝔏(w) whose extremities are step points are clopen, hence, by Proposition <ref>, the interval [q,r] in 𝔉(w) is clopen. Therefore, the subset L=K∩[q,r] is also clopen. Note that L is nonempty, as p∈ L. Lemma <ref> guarantees that L has a minimum s which is a step point. It then follows from p∈ L that s∈ K∩[q,p], thereby completing the proof of the lemma.

§ CLUSTER WORDS

In this section, we use the knowledge obtained about the labeled linear orders induced by pseudowords over A to obtain a representation theorem: every pseudoword over A may be represented by a (partially) labeled linear order having specific properties, which we introduce now.

By a partially labeled ordered set, we mean a pair (P,f) such that P is an ordered set and f is a function (the labeling) with domain contained in P.
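For instance, the three-element chain p_1<p_2<p_3, together with the function f of domain {p_1,p_3} given by f(p_1)=a and f(p_3)=b, is a partially labeled ordered set in which the middle point carries no label.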
An isomorphism between partially labeled ordered sets (P,f) and (Q,g) is an isomorphism φ: P→ Q of ordered sets such that dom g=φ(dom f) and f(p)=g∘φ(p) for every p∈ dom f.

Consider a profinite equidivisible semigroup S which is finitely cancelable with respect to A. We may then consider the mappings first and last from S^I to A⊎{1} such that first(1)=last(1)=1 and, for s∈ S, the images first(s) and last(s) are respectively the unique prefix and the unique suffix of s in A. For s∈ S, denote by 𝔏_c(s) the partially labeled linearly ordered set (𝔏(s),ℓ) defined by the mapping ℓ: Step(𝔏(s))→ A⊎{1} such that ℓ(u,v)=first(v), for (u,v)∈ Step(𝔏(s)). Note that ℓ(p)=1 if and only if p=(s,1). Recall from Theorem <ref> that 𝔏(s) is clustered. By a cluster word over A we mean a partially labeled linearly ordered set (L,ℓ) such that L is clustered and ℓ is a function Step(L)→ A⊎{1}.

For the pseudoword w=(a^ω b)^ω of {a,b}A, the cluster word 𝔏_c(w) is described in Diagram (<ref>): it consists of ω consecutive copies of the block aa⋯∙⋯aab (each block of order type ω+1+ω^*, with all its step points labeled a except the last one, labeled b), followed by a single point ∙, followed by ω^* further copies of the same block. Each of the small bullets ∙ represents a stationary point q such that a^ω∈ J_q, and the bigger bullet ∙ represents the unique stationary point r such that (a^ω b a^ω)^ω∈ J_r. Note that, in particular, the order type of 𝔏(w) is (ω+1+ω^*)ω+1+(ω+1+ω^*)ω^*.

We next introduce, in Definition <ref>, a notion of algebraic recognition of cluster words inspired by the definition of automata recognizing words indexed by linear orders, introduced in <cit.>. A notable similarity resides in the role played by cofinal sets, whose definition we next recall. In a linearly ordered set L, a subset X of L is left cofinal at p if X∩]q,p[≠∅ for every q<p, right cofinal at p if X∩]p,q[≠∅ for every q>p, and cofinal if it is right or left cofinal at p.

Let φ: A→ S be a generating mapping of a semigroup S, and let s∈ S. We say that the cluster word (L,ℓ) over A is recognized by the pair (φ,s) if there is a mapping g: Step(L)→𝔉(s) satisfying:

* g(min L)=(1,s);
* g(max L)=(s,1);
* if p_1≺ p_2 in L, then g(p_1) →^{φ(ℓ(p_1))} g(p_2) is an edge of T(s);
* if p is a stationary point of L then, for every q∈𝔉(s), the set g^-1(q) is left cofinal at p if and only if it is right cofinal at p.

If such conditions are satisfied, then we say that (L,ℓ) is g-recognized by the pair (φ,s). We also say that (L,ℓ) is recognized by the pair (φ,s) if it is g-recognized, for some g. Finally, when (L,ℓ) is g-recognized by (φ,s), we define F_g to be the function from Stat(L) to the power set 𝒫(𝔉(s)) such that

F_g(p)={q∈𝔉(s):g^-1(q) is left cofinal at p}={q∈𝔉(s):g^-1(q) is right cofinal at p},

for every stationary point p of L (cf. <ref>).

If f: (L',ℓ')→(L,ℓ) is an isomorphism of cluster words over A and if (L,ℓ) is g-recognized by (φ,s), then (L',ℓ') is (g∘ f)-recognized by (φ,s). Hence, the property of (L,ℓ) being recognized by (φ,s) is invariant under isomorphism of cluster words.
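To see Definition <ref> at work in the simplest situation, suppose that w=ab∈ A^+ and s=φ(a)φ(b). Then 𝔏(w) consists of the three step points (1,ab)<(a,b)<(ab,1), and the mapping g given by g(1,ab)=(1,s), g(a,b)=(φ(a),φ(b)) and g(ab,1)=(s,1) g-recognizes 𝔏_c(ab): conditions <ref> and <ref> hold by the definition of g, condition <ref> amounts to the edges (1,s) →^{φ(a)} (φ(a),φ(b)) and (φ(a),φ(b)) →^{φ(b)} (s,1) of T(s), and condition <ref> holds vacuously, as there are no stationary points.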
The next lemma, concerning the function F_g, will be applied in Section <ref>.

Consider a cluster word (L,ℓ) over A, g-recognized by the pair (φ,s), where φ: A→ S is a generating mapping of a finite semigroup S and s∈ S. Suppose that the order topology of L is metrizable. Let p be a stationary point of L. Then the following properties hold:

* if P is a subset of 𝔉(s) such that F_g^-1(P) is cofinal at p, then P⊆ F_g(p);
* there is an open interval I containing p such that F_g(q)⊆ F_g(p) for every stationary point q in I.

<ref> Without loss of generality, suppose that F_g^-1(P) is left cofinal at p. Then, taking into account that L is metrizable, there is a strictly increasing sequence (p_n)_n of stationary points of L converging to p such that F_g(p_n)=P for all n. Consider a metric d on L inducing the order topology of L. For each k≥ 1, let ]q_k,p[ be an open interval of L such that d(q_k,p)<1/k. Then, there is n_k such that p_n_k∈]q_k,p[. Let (s_1,s_2)∈ P. By the definition of F_g, we know that g^-1(s_1,s_2) is left cofinal at p_n_k. Therefore, there is in the interval ]q_k,p_n_k[ a step point belonging to g^-1(s_1,s_2). Since ]q_k,p_n_k[⊆]q_k,p[, this proves that g^-1(s_1,s_2) is left cofinal at p, thus (s_1,s_2)∈ F_g(p). We have therefore proved that the inclusion P⊆ F_g(p) holds.

<ref> Let 𝒫 be the set of subsets P of 𝔉(s) such that F_g^-1(P) is not cofinal at p. For each P∈𝒫, there is an open interval I_P containing p such that F_g^-1(P)∩ I_P=∅. Because S is a finite semigroup, the set 𝒫 is finite, and so I=⋂_P∈𝒫 I_P is an open interval of L containing p. Let q be a stationary point in I. Let R=F_g(q). Suppose R∈𝒫. Then q∈ I_R. But we also have q∈ F_g^-1(R), which leads to F_g^-1(R)∩ I_R≠∅, a contradiction with the definition of I_R. This shows that R∉𝒫. We then deduce from part <ref> of the lemma that R⊆ F_g(p).

The main results in this section are about cluster words defined by elements of AA, but in the next proposition we embrace without additional effort all pseudovarieties closed under concatenation.

Consider a pseudovariety V closed under concatenation. Given w∈ AV, and a generating mapping φ: A→ S of a semigroup S of V, let s=φ_V(w). Consider the mapping g_w,φ: Step(𝔏(w))→𝔉(s) such that g_w,φ(u,v)=(φ_V(u),φ_V(v)) for every step point (u,v) of 𝔏(w). Then the cluster word 𝔏_c(w) is g_w,φ-recognized by (φ,s).

The conditions <ref>-<ref> in Definition <ref> for g_w,φ-recognition by (φ,s) are clearly satisfied, and <ref> follows directly from Proposition <ref><ref>. Let us verify condition <ref>. Let (x,y)∈𝔉(s) be such that g_w,φ^-1(x,y) is left cofinal at p. Then, there is a strictly increasing sequence (u_n,v_n)_n≥1 of step points belonging to g_w,φ^-1(x,y) converging in 𝔉(w) to an element (u,v) of p. Since (φ_V(u_n),φ_V(v_n))=(x,y) for every n≥1, we also have (φ_V(u),φ_V(v))=(x,y), by continuity of φ_V. By the dual of Proposition <ref>, there is a strictly decreasing sequence (u'_n,v'_n)_n≥1 of step points converging to (u,v) in 𝔉(w). In particular, by continuity of φ_V, for all sufficiently large n, we have (φ_V(u_n'),φ_V(v_n'))=(x,y), and so g_w,φ^-1(x,y) is right cofinal at p. Dually, if g_w,φ^-1(x,y) is right cofinal at p then g_w,φ^-1(x,y) is left cofinal at p. This establishes condition <ref>.

In the case of unambiguous aperiodic semigroups, we have a converse for Proposition <ref>, as stated in the next theorem.

Let w∈ AA, and consider a generating mapping φ: A→ S of a finite aperiodic unambiguous semigroup S. Then φ_A(w)=s if and only if 𝔏_c(w) is recognized by (φ,s).

We defer the proof of Theorem <ref> to Section <ref> (but note that the direct implication in Theorem <ref> is an immediate application of Proposition <ref>). Meanwhile, we use it to prove the following main result.

Let u,v∈ AA. Then u=v if and only if the cluster words 𝔏_c(u) and 𝔏_c(v) are isomorphic.

The isomorphism of cluster words is clearly necessary to have u=v. Conversely, suppose 𝔏_c(u) and 𝔏_c(v) are isomorphic. Let φ: A→ S be a generating mapping of a finite unambiguous aperiodic semigroup S. Take s=φ_A(u). Then 𝔏_c(u) is recognized by (φ,s), according to the direct implication in Theorem <ref>.
But then 𝔏_c(v) is also recognized by (φ,s) (cf. Remark <ref>). Hence, we have s=φ_A(v) by the converse implication in Theorem <ref>. By Proposition <ref>, this establishes u=v.

§ STABILIZERS

Given a semigroup S and s∈ S, we say that an element x of S^I stabilizes s on the right if sx=s; the set Stab_r(s) of all such x constitutes a submonoid of S^I and is called the right stabilizer of s. One defines dually the elements that stabilize s on the left, which form a submonoid Stab_l(s) of S^I, called the left stabilizer of s.

An application of the following result will be required in the sequel.

Let S be an equidivisible profinite semigroup which is finitely cancelable. Let u∈ S. Let g be an element of the monoid Stab_r(u) or of the monoid Stab_l(u). If g is regular within that monoid, then g=g^2.

In Theorem <ref>, the hypothesis that S is finitely cancelable is not superfluous: if G is a group, then the semigroup G^0 obtained from G by adjoining a zero is equidivisible, and G^0 is the left and right stabilizer of zero.

We should mention that we do not know of any examples of equidivisible profinite semigroups that are finitely cancelable other than free pro-V semigroups, where V is a pseudovariety with suitable closure properties. For such semigroups, Theorem <ref> follows from more general results in <cit.>. Nevertheless, since our results apply to all equidivisible profinite semigroups that are finitely cancelable, we present a proof of Theorem <ref> which may be of independent interest. As a first step we have the following simple statement.

Let x and y be D-equivalent elements of a stable semigroup S. If yx=x then y=y^2.

Since yx(=x) and y are in the same D-class and S is stable, we must have y R x. Therefore, y=xu for some u∈ S^I, so y^2=yxu=xu=y.

We proceed with an auxiliary lemma.

Let S be an equidivisible profinite semigroup which is finitely cancelable. Let g,w∈ S be such that gw=w. If (x,y)∈𝔉(w) is a step point satisfying (g^ω,w)≤(x,y), then gx=x.

Note that (g^ω,w)≤(x,y) implies x≤_R g^ω, thus x=g^ω x, a fact that we shall use along the proof. By equidivisibility, (x,y) and (gx,y) are comparable in 𝔉(w). Suppose first that (gx,y)≤(x,y). Then there is t∈ S^I such that gxt=x and ty=y. It suffices to show that t=1. The equality gxt=x entails x=g^nxt^n for every n≥1, thus x=g^ω xt^ω=xt^ω. And since ty=y, we conclude that t^ω stabilizes (x,y) in T(S). Because (x,y) is a step point, Proposition <ref> implies that t=1 (otherwise t^ω would be an element of S stabilizing the step point (x,y)).

Suppose next that (x,y)≤(gx,y). Then (g^ω-1x,y)≤(g^ω x,y)=(x,y) (cf. Remark <ref>) and, by the preceding case, we deduce that g^ω-1x=x. With a left multiplication by g on both sides of the latter equality, we obtain x=g^ω x=gx, as desired.

It suffices to consider the case where g is an element of Stab_l(u), as the other case is dual. We first establish the theorem when g is a group element, that is, g=g^ω+1. Let R be the set of elements (α,β) of 𝔉(u) such that α R g. Note that R is nonempty, indeed (g,u)∈ R. The set R is closed, whence compact, and so by continuity of χ, the image χ(R) is also compact, whence closed. Therefore, by completeness of 𝔏(u) (cf. Proposition <ref>), the closed set χ(R) has a maximum p=(x,y)/∼. Let us observe that, since two ∼-equivalent elements must have R-equivalent first components, the inclusion χ^-1(p)⊆ R holds. Moreover, since g=g^ω+1, we have (g^ω,u)∈ R, thus (g^ω,u)≤(x,y) by definition of p.

Suppose first that p is a step point. Then gx=x by Lemma <ref>.
As x R g, we conclude that g=g^2 by Lemma <ref>.

If p is stationary, then, by Theorem <ref>, there is a net (x_i,y_i)_i∈ I of step points converging in 𝔏(u) to p and such that p<(x_i,y_i) for all i∈ I. By compactness, taking a subnet, we may assume that (x_i,y_i)_i∈ I converges in 𝔉(u) to some element (x',y') of p. As (g^ω,u)<(x_i,y_i), we deduce from Lemma <ref> that gx_i=x_i for every i∈ I. Taking limits, it follows that gx'=x'. Since (x',y')∈ p, we have x R x', whence g R x', and we again deduce that g=g^2 by Lemma <ref>. We have thus concluded the proof for the case where g is a group element of S.

Let us now suppose that g is regular within Stab_l(u). Then there is h∈ Stab_l(u) such that g=ghg and h=hgh. Since hu=gu, by equidivisibility we know that h and g are ℛ-comparable in S. We actually have h R g in S, by stability of S. Let z∈ S^I be such that h=gz. Then g=g^2zg, and therefore g R g^2. This shows that g is a group element of S, and so, as we are in the case already proved, we get g=g^2.

A semigroupoid S is trivial if, for any two vertices p,q∈ S, there is at most one edge p→ q.

Let S be an equidivisible profinite semigroup which is finitely cancelable. For every p∈𝔏(S), the ideal K_p is a trivial category.

Let (u,v) →^s (x,y) and (u,v) →^t (x,y) be edges of 𝒦_p. The proof amounts to showing that s=t. There is an edge (x,y) →^z (u,v) in 𝒦_p. In particular, sz and (sz)^2 label loops in 𝒦_p at vertex (u,v). Hence, sz and (sz)^2 belong to J_p, and so sz is a group element of S stabilizing u on the right. We then deduce from Theorem <ref> that sz is an idempotent of J_p, denoted by e, which stabilizes (u,v). Similarly, tz is an idempotent of J_p stabilizing (u,v). It follows from Lemma <ref> that sz=tz=e. Symmetrically, we have zs=zt=f, with f an idempotent of J_p. As s,t,z belong to J_p, this shows that s=t.

For a compact semigroup S and p∈𝔏(S), let U_p be the union of the maximal subgroups of J_p. The next proposition, whose proof relies on Theorem <ref>, should be compared with Proposition <ref>. We show that U_p parameterizes in a natural way the class p, without assuming aperiodicity, but assuming equidivisibility and finite cancelability.

Let S be an equidivisible profinite semigroup which is finitely cancelable. Let p∈𝔏(S) be a stationary point. Then we have a bijection ν_p: U_p→ p, defined as follows:

* for each idempotent e∈ J_p, fix an element (u_e,v_e) of p stabilized by e;
* to each g in the maximal subgroup H_e of S containing e, associate the element ν_p(g)=(u_eg,g^ω-1v_e) of p.

We first verify that the function ν_p is well defined. Corollary <ref> guarantees that every idempotent e∈ J_p stabilizes some (u_e,v_e)∈ p. If g∈ H_e, then (u_eg,g^ω-1v_e) →^{g^ω-1} (u_e,v_e) and (u_e,v_e) →^g (u_eg,g^ω-1v_e) are edges of T(S), whence (u_eg,g^ω-1v_e)∼(u_e,v_e). It follows that ν_p is indeed well defined.

We next show that ν_p is injective. If g∈ U_p then g^ω is an idempotent of J_p stabilizing ν_p(g)=(u_g^ωg,g^ω-1v_g^ω). Hence, by Lemma <ref>, if ν_p(g)=ν_p(h), then g^ω=h^ω, thus g,h belong to the same maximal subgroup and u_g^ωg=u_g^ωh. The latter is equivalent to u_g^ω=u_g^ωhg^ω-1, and so, by Theorem <ref>, we have hg^ω-1=g^ω. This means that g=h, thereby showing that ν_p is injective.

It remains to show that ν_p is surjective. Let (u,v) be an arbitrary element of p and let e be an idempotent in J_p stabilizing (u,v). Since p is the set of vertices of a strongly connected component of T(S), there is some edge (u_e,v_e) →^t (u,v), and whence also an edge (u_e,v_e) →^{ete} (u,v). By Lemma <ref>, it follows that ete∈ H_e. Hence, we must have ν_p(ete)=(u,v).
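Proposition <ref> makes the description in Example <ref> transparent. For the stationary point p=(a^ω,a^ω)/∼ of 𝔏(a^ω) in the free profinite semigroup AS, one may check that J_p is the J-class of a^ω, which reduces to the maximal subgroup H containing a^ω, so that U_p=H. Choosing (u_e,v_e)=(a^ω,a^ω) for the unique idempotent e=a^ω of J_p, the bijection ν_p sends g∈ H to (a^ω g,g^ω-1a^ω)=(g,g^ω-1), which is precisely the parameterization of the ∼-class p given in that example.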
Note that, under the conditions of Proposition <ref>, the set p_e of elements of p stabilized by e is precisely ν_p(H_e).

§ A CHARACTERIZATION OF THE J-CLASS ASSOCIATED TO A ∼-CLASS

Let S be an equidivisible compact semigroup, w∈ S and p∈𝔏(w). We define a subset L_p of S^I, depending only on p, as follows. Take an arbitrary strictly increasing sequence p_1<p_2<⋯ converging to p in 𝔏(w) (if such a sequence does not exist, for instance if p has a predecessor, then take L_p=∅). For each m≥1 and n>m, let t_m,n be a transition from p_m to p_n. For fixed m≥1, let t_m be an accumulation point of the sequence (t_m,n)_n>m. If t is an accumulation point of the sequence (t_m)_m then t∈ L_p, and every element of L_p is obtained by this process, the sequence p_1<p_2<⋯ being allowed to change. Dually, taking strictly decreasing sequences converging to p, we define a subset R_p of S^I associated with p.

Let S be an equidivisible compact semigroup. For every p∈𝔏(S), the sets L_p and R_p are contained in J_p.

Let w∈ S be such that p∈𝔏(w), and suppose that (p_n)_n is a strictly increasing sequence of elements of 𝔏(w) converging to p. For each m≥1 and n>m, let t_m,n be a transition from p_m to p_n, and let t_m be an accumulation point of the sequence (t_m,n)_n. Finally, let t be an accumulation point of the sequence (t_m)_m. The proof that L_p is contained in J_p is concluded once we show that t∈ J_p.

We first claim that, for every fixed m≥1, t_m is a transition from p_m to p. For each n>m, choose (u_n,v_n)∈ p_m and (x_n,y_n)∈ p_n such that u_nt_m,n=x_n and v_n=t_m,ny_n. By compactness, the sequence (t_m,n,u_n,v_n,y_n)_n has some subnet (t_m,n_k,u_n_k,v_n_k,y_n_k)_n_k such that t_m=lim_k t_m,n_k and the nets (u_n_k)_k, (v_n_k)_k, and (y_n_k)_k converge, respectively to some u, v, and y. Since (u_n_k,t_m,n_ky_n_k)=(u_n_k,v_n_k)∈ p_m and (u_n_kt_m,n_k,y_n_k)=(x_n_k,y_n_k)∈ p_n_k, it follows from Proposition <ref> that (u,v)=(u,t_my)∈ p_m and (ut_m,y)∈ p, which proves the claim.

Since (p_m)_m converges to p and t is an accumulation point of the sequence (t_m)_m, it follows again from Proposition <ref> that t is a transition from p to p.

Choose z∈ J_p such that there is a loop (ut_m,y) →^z (ut_m,y). Since (u,v)<(ut_m,y) and (u,v) →^{t_m} (ut_m,y) is an edge of T(S), Lemma <ref> yields that z is a suffix of t_m. This holds for every m≥1, whence z is a suffix of t. As we have already shown that t is a transition from p to p, we deduce that t∈ J_p by Lemma <ref>. Hence we have L_p⊆ J_p and dually R_p⊆ J_p.

Let S be an equidivisible profinite semigroup which is finitely cancelable. Let w∈ S. Suppose that (u_n,v_n)_n is a strictly increasing sequence in 𝔉(w) converging to a stationary point (u,v). For each pair m<n, let t_m,n∈ S be a transition (u_m,v_m)→(u_n,v_n). Then, for each m, the sequence (t_m,n)_n converges to the unique transition from (u_m,v_m) to (u,v), which we denote by t_m. Moreover, the sequence (t_m)_m converges to the label of the only loop at the vertex (u,v) in the trivial category K_p, where p=(u,v)/∼.

Every accumulation point of the sequence (t_m,n)_n labels an edge from (u_m,v_m) to (u,v). But there is only one such edge, by Proposition <ref>. Since we are dealing with a compact space, this implies that (t_m,n)_n converges to some element t_m. To conclude the proof, it suffices to show that every accumulation point t of the sequence (t_m)_m labels the same loop at vertex (u,v). By the definition of L_p, we have t∈ L_p, whence t∈ J_p by Theorem <ref>.
Moreover, in T(w) the sequence of edges (u_m,v_m) →^{t_m} (u,v) admits the loop (u,v) →^t (u,v) as an accumulation point. Since t∈ J_p, this loop belongs to K_p by Corollary <ref>. By Corollary <ref>, there is only one loop of K_p at (u,v). Therefore, every accumulation point of (t_m)_m is the label of that loop.

In some cases, Theorem <ref> can be strengthened, as seen next.

Let A be a finite alphabet, and let V be a pseudovariety closed under concatenation. Then L_p=J_p=R_p for every stationary point p∈𝔏(AV).

According to Theorem <ref>, it remains to prove that J_p is contained in L_p and in R_p. By symmetry, it suffices to prove that J_p⊆ L_p. Fix w∈ AV with p∈𝔏(w).

Let τ be an element of J_p. Then by Proposition <ref> there are elements (u,v) and (u',v') of p such that (u,v) →^τ (u',v') is an edge of K_p. According to Proposition <ref>, there are strictly increasing sequences (q_n)_n and (q'_n)_n of step points of 𝔉(w) converging to (u,v) and (u',v'), respectively. We may define recursively a strictly ascending sequence of step points p_n as follows:

* p_1=q_1;
* if n>1 is even, then p_n is the smallest term of the sequence (q'_n)_n belonging to ]p_n-1,p[;
* if n>1 is odd, then p_n is the smallest term of the sequence (q_n)_n belonging to ]p_n-1,p[.

For each m≥1 and each n≥ m let t_m,n be the unique transition from p_2m-1 to p_2n. Let t_m be an accumulation point of the sequence (t_m,n)_n≥ m. Since (p_2n)_n converges to (u',v'), the pseudoword t_m labels an edge from p_2m-1 to (u',v'). Let t be an accumulation point of the sequence (t_m)_m. Since (p_2m-1)_m converges to (u,v), the pseudoword t labels an edge from (u,v) to (u',v'). By the definition of L_p, we have t∈ L_p. Therefore t∈ J_p by Theorem <ref>. By Corollary <ref>, the category K_p is trivial, whence t=τ, thus proving that τ∈ L_p.

§ PROOF OF THEOREM <REF>

Throughout this section, when not explicitly stated, we consider w to be an element of AA, where A is a finite alphabet, and φ: A→ S to be a generating mapping of a finite aperiodic semigroup S. We also take s=φ_A(w). Consider a mapping g: Step(𝔏(w))→𝔉(s). From here on, we assume that 𝔏_c(w) is g-recognized by the pair (φ,s). In particular, all properties of Definition <ref> are fulfilled when (L,ℓ)=𝔏_c(w).

[g-projection] Let x and y be two step points of 𝔏(w) such that x≤ y. By Propositions <ref> (in case x<y) and <ref> (in case x=y), there is a unique edge in T(w) from x to y. Let t be its label, which is 1 if x=y. If g(x) →^{φ_A(t)} g(y) is an edge of T(s), then we say that the edge x →^t y is g-projected (to g(x) →^{φ_A(t)} g(y)). The unique edge from x to y will sometimes be denoted simply by x→y, without reference to the label.

Let x, y and z be step points such that x≤ y≤ z. If x→y and y→z are g-projected, then x→z is g-projected.

Given two step points x and y, write x≺≺ y if x≤ y and the interval [x,y] is finite.

If x≺≺ y, then x→y is g-projected.

Let ≈ be the equivalence relation on Step(𝔏(w)) generated by ≺≺, that is, x≈ y if and only if x≺≺ y or y≺≺ x. The ≈-class of x is denoted [x]_≈. Note that w∈ A^+ if and only if (1,w)≈(w,1). Let O_w be a subset of Step(𝔏(w)) such that each ≈-class contains exactly one element of O_w, with the additional restriction that if w∉ A^+ then we have O_w∩[(1,w)]_≈={(1,w)} and O_w∩[(w,1)]_≈={(w,1)}.

[Bridges] A bridge in 𝔏(w), with respect to the mapping g, is a nonempty open interval I of 𝔏(w) such that, for every pair of step points x,y of I, with x<y, the edge x→y is g-projected. A special bridge in 𝔏(w) is a bridge of the form [X,Y[, with X,Y∈ O_w such that X<Y.
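For instance, if w∈ A^+, then any two step points x<y of 𝔏(w) satisfy x≺≺ y, so that, by Remark <ref>, every edge x→y is g-projected; hence every nonempty open interval of 𝔏(w), and in particular 𝔏(w) itself, is a bridge. This observation is used in the proof of Corollary <ref> below.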
Notice that every nonempty interval contained in a bridge is also a bridge, and that special bridges are clopen intervals.

Let F be a nonempty family of special bridges of 𝔏(w). If ⋃F is a closed interval, then ⋃F is a special bridge.

Before proving Lemma <ref>, we remark that the hypothesis that the union ⋃F is closed cannot be removed. Indeed, consider a case where we have a strictly increasing sequence (q_n)_n of stationary points converging to a stationary point q (cf. Example <ref>). Between q_n and q_n+1 pick an element p_n of O_w. Then the union of the special bridges [min 𝔏(w),p_n[ is [min 𝔏(w),q[, which is not a special bridge.

Since ⋃F is a compact set having an open cover by the elements of F, we have ⋃F=⋃F' for some finite subfamily F' of F. Then, for some n≥1, we may assume that there are elements X_1<X_2<⋯<X_n and Y_1<Y_2<⋯<Y_n of O_w such that

F'={[X_k,Y_k[:1≤ k≤ n}.

Consider the set

Z={X_k:1≤ k≤ n}∪{Y_k:1≤ k≤ n},

and let Z_1<Z_2<⋯<Z_m be the elements of Z. Notice that m≥2. For each k∈{1,…,m-1}, denote by I_k the interval [Z_k,Z_k+1[. In case Z_k=X_j for some j, we have Z_k+1≤ Y_j, and so I_k is a special bridge contained in the special bridge [X_j,Y_j[. If Z_k=Y_j for some j, then, since Z_k<Z_m, we must have Z_k∈⋃F, whence Z_k∈[X_i,Y_i[ for some i. As Z_k<Y_i, we have Z_k+1≤ Y_i, and so I_k is a special bridge contained in the special bridge [X_i,Y_i[. Hence, the set

F''={I_k:1≤ k≤ m-1}

is a family of special bridges.

Let U_k=⋃_j=1^k I_j. Note that U_k=[Z_1,Z_k+1[. We prove by induction on k∈{1,…,m-1} that U_k is a special bridge. The initial step is trivial. Suppose that U_k is a special bridge, for some k∈{1,…,m-2}. To prove that U_k+1 is a special bridge, consider two step points x,y in U_k+1 with x<y. We have to show that the edge x→y is g-projected. Using induction, it suffices to consider the case where x∈ U_k and y∈ I_k+1. Let Z'_k+1 be the predecessor of the step point Z_k+1 in 𝔏(w). Since U_k=[Z_1,Z_k+1[ and I_k+1=[Z_k+1,Z_k+2[, the edges x→Z'_k+1 and Z_k+1→y are g-projected, and the same is true obviously for the edge Z'_k+1→Z_k+1; hence, by Remark <ref>, x→y is g-projected. This proves that U_k+1 is a special bridge, concluding the induction.

The result now follows because U_m-1=⋃F''=⋃F'=⋃F.

In the proof of the next lemma and in the sequel, we use the notation λ(x)=inf[x]_≈ and ρ(x)=sup[x]_≈, for a step point x. Note that λ(x) is stationary unless (1,w)≺≺ x, and ρ(x) is stationary unless x≺≺(w,1). Also, when r∈ Step(𝔏(w)), the unique element of O_w∩[r]_≈ is denoted O_w(r).

Let X,Y∈ O_w be such that X<Y. Suppose that for every stationary point p in [X,Y[ there is a special bridge containing p. Then [X,Y[ is a special bridge.

Let P=[X,Y[∩ Stat(𝔏(w)). By hypothesis, for each p∈ P there is a special bridge I_p containing p. Note that I_p∩[X,Y[ is also a special bridge containing p. Therefore, we may as well assume that I_p⊆[X,Y[. Let U=⋃_p∈ P I_p. Then U⊆[X,Y[.

We claim that U=[X,Y[. As we trivially have P⊆ U, we are reduced to showing that, for every z∈[X,Y[∩ Step(𝔏(w)), we have z∈ U.

If ρ(X)<z<λ(Y) holds, then the stationary points ρ(z) and λ(z) belong to [X,Y[. Either z∈]λ(z),O_w(z)[ or z∈[O_w(z),ρ(z)[. As ]λ(z),O_w(z)[⊆ I_λ(z) and [O_w(z),ρ(z)[⊆ I_ρ(z), it follows that z∈ U.

Suppose that ρ(X)≥ z. Then z∈[X,ρ(X)[, thus z∈ I_ρ(X). Similarly, if λ(Y)≤ z then z∈ I_λ(Y). In both cases z∈ U.

We proved that U=[X,Y[. By Lemma <ref>, the interval [X,Y[ is a special bridge.

Let p_1 and p_2 be two elements of 𝔏(w) such that p_1≤ p_2.
If p_1<p_2, then, as written in Corollary <ref>, the set of transitions from p_1 to p_2 is contained in a J-class J_p_1,p_2. In case p_1=p_2, let J_p_1,p_2=J_p_1. By a J-minimum transition from p_1 to p_2, we mean a transition from p_1 to p_2 belonging to J_p_1,p_2; this terminology is useful to unify both cases p_1<p_2 and p_1=p_2 in some of our arguments.

Let p_1,p_2∈𝔏(w), with p_1≤ p_2. For every (x_1,y_1)∈ p_1 and (x_2,y_2)∈ p_2, the intersection I of J_p_1,p_2 with the set of labels of edges from (x_1,y_1) to (x_2,y_2) is a singleton. If t is the unique element of I, and if e_i is the unique idempotent of J_p_i that stabilizes (x_i,y_i) (cf. Lemma <ref>), then t=e_1te_2.

If p_1<p_2, then the lemma follows straightforwardly from Proposition <ref>. If p_1=p_2=p, then J_p_1,p_2=J_p. Since edges in 𝒯_p labeled by elements of J_p are edges of K_p by Corollary <ref>, and since K_p is trivial by Corollary <ref>, we have also in this case that there is only one edge from (x_1,y_1) to (x_2,y_2) with label in J_p_1,p_2. Let e_i and t be as in the statement of the lemma. Because e_i stabilizes (x_i,y_i), we may consider the edge (x_1,y_1) →^{e_1te_2} (x_2,y_2), and so t J e_1te_2. By the already proved uniqueness, we must have t=e_1te_2.

[J-bridge] A pair of elements (p_1,p_2) of Stat(𝔏(w)) is a J-bridge with respect to g, if

* p_1≤ p_2;
* F_g(p_1)∩ F_g(p_2)≠∅;
* the elements of φ_A(J_p_1) are J-equivalent to the elements of φ_A(J_p_2);
* if τ is a J-minimum transition from an element of p_1 to an element of p_2, then φ_A(τ) is J-equivalent to the elements of φ_A(J_p_1).

Note that (p,p) is a J-bridge, whenever p is a stationary point. In the proof of the following proposition, the hypothesis that S is an unambiguous aperiodic semigroup is essential.

Suppose that S is unambiguous and let (p_1,p_2) be a J-bridge with respect to g. Suppose there are step points z_1 and z_2 such that [z_1,p_1[ and ]p_2,z_2] are bridges with respect to g. Let (s_1,s_2)∈ F_g(p_1)∩ F_g(p_2). Then there are step points x_1∈[z_1,p_1[ and x_2∈]p_2,z_2] such that x_1→x_2 is g-projected to an idempotent loop of T(s) at the vertex (s_1,s_2).

Let I_1=[z_1,p_1[ and I_2=]p_2,z_2]. By the definition of F_g(p_i), there are strictly monotone sequences (r_i,m)_m≥1 (increasing if i=1, decreasing if i=2) of step points in I_i∩ g^-1(s_1,s_2), converging in 𝔉(w) to elements r_i of p_i (i=1,2). Denote by e_i the unique idempotent of J_p_i that stabilizes r_i.

For each n≥ m, let t_1,m,n be the label of the unique edge from r_1,m to r_1,n, and let t_2,m,n be the label of the unique edge from r_2,n to r_2,m. By Corollary <ref> and its dual, for each i∈{1,2} the sequence (t_i,m,n)_n converges to an element t_i,m, and in turn the sequence (t_i,m)_m converges to e_i. Therefore, by continuity of φ_A, for each i∈{1,2} we may take m_i≥1 and n_i≥ m_i for which we have

φ_A(e_i)=φ_A(t_i,m_i)=φ_A(t_i,m_i,n_i).

Our hypothesis yields that the edge r_1,m_1→r_1,n_1 is g-projected to the edge (s_1,s_2) →^{φ_A(e_1)} (s_1,s_2). In particular, (s_1,s_2) is stabilized by φ_A(e_1). Similarly, (s_1,s_2) is stabilized by φ_A(e_2). Therefore, φ_A(e_1)=φ_A(e_2), by Lemma <ref>, since the finite semigroup S is unambiguous.

By Lemma <ref>, the unique J-minimum transition τ from r_1 to r_2 is such that τ=e_1τ e_2. By the definition of J-bridge, we have φ_A(e_1) J φ_A(τ). On the other hand, τ≤_R e_1 and τ≤_L e_2. Hence φ_A(e_1) H φ_A(τ) because φ_A(e_1)=φ_A(e_2) and S is stable. Therefore,

φ_A(e_1)=φ_A(τ)=φ_A(e_2),

since S is aperiodic.
The unique edge (cf. Proposition <ref>) from r_1,m_1 to r_2,m_2, with label ζ, is factorized by the edges r_1,m_1 →^{t_1,m_1} r_1, r_1 →^τ r_2 and r_2 →^{t_2,m_2} r_2,m_2. Therefore ζ=t_1,m_1τ t_2,m_2. From (<ref>) and (<ref>) we get φ_A(ζ)=φ_A(e_1). Therefore, r_1,m_1→r_2,m_2 is g-projected to the idempotent loop (s_1,s_2) →^{φ_A(e_1)} (s_1,s_2). Since r_i,m_i is a step point in I_i, this concludes the proof.

We next show a pair of lemmas where the function g does not appear, but which will be used later in this section in the proof of Theorem <ref>.

Consider a pseudovariety V closed under concatenation. Let w be an element of AV. Consider a generating mapping φ: A→ S of a semigroup in V. For every stationary point p of 𝔏(w), there is a clopen interval I containing p such that, if t is a transition between elements q,r of I with q≤ r, then φ_V(t) is a factor of the elements of φ_V(J_p).

Let (u,v)∈ p. By Proposition <ref> and its dual, there is in 𝔉(w) an increasing sequence (p_n)_n of step points converging to (u,v) and a decreasing sequence (p'_n)_n of step points converging to (u,v). Denote by τ_n the unique edge from p_n to (u,v), and by τ'_n the unique edge from (u,v) to p'_n (uniqueness of these edges is from Proposition <ref>). Let ε be a loop of K_p at vertex (u,v). Then τ_nετ'_n converges to a loop ε̃ of T(w) at vertex (u,v). Since ε is a factor of ε̃, we must have Λ(ε̃)∈ J_p (thus we have ε̃=ε by Corollary <ref>, although we shall not use that fact). Therefore, since φ_V and the labeling mapping Λ are continuous, there is N such that φ_V(Λ(τ_nετ'_n))∈φ_V(J_p) for all n≥ N. Consider the clopen interval I=[p_N,p'_N]. Let q and r be elements of I such that q≤ r, and let t be a transition from q to r. Then t is a factor of a transition from p_N to p'_N. But Λ(τ_Nετ'_N) is the unique transition from p_N to p'_N, by Proposition <ref>. Hence, φ_V(t) is a factor of the elements of φ_V(J_p).

Let (q_n)_n be a sequence of stationary points of 𝔏(w) converging to q. For each n, let u_n be an element of J_q_n. Suppose that (u_n)_n converges to u. Then u is J-above J_q.

The sequence of edges (q_n →^{u_n} q_n)_n of T(w) converges to the loop q →^u q. Since u labels a transition from q to itself, u is J-above J_q.

In Lemma <ref> it is not true in general that u∈ J_q. In Example <ref>, the stationary point r such that (a^ω ba^ω)^ω belongs to J_r is the limit of a sequence (q_n)_n of stationary points such that a^ω∈ J_q_n, but a^ω∉ J_r.

[Mapping Γ] Let J_S be the set of regular J-classes of S, and denote by Υ the mapping Stat(𝔏(w))→ J_S sending a stationary point p to the J-class containing φ_A(J_p). Denote by Γ the mapping Υ× F_g: Stat(𝔏(w))→ J_S×Im(F_g) sending p to (Υ(p),F_g(p)).

In the next lemma, we continue to assume that 𝔏_c(w) is g-recognized by (φ,s).

Suppose that the semigroup S is unambiguous. Let X,Y be elements of O_w such that X<Y. Suppose that the restriction of Γ to Stat(𝔏(w))∩[X,Y[ is constant. Then [X,Y[ is a special bridge.

Let p∈ Stat(𝔏(w))∩[X,Y[. By Lemma <ref>, it suffices to prove that there is a special bridge containing p. Let I be a clopen interval containing p satisfying the properties described in Lemma <ref>, and let [x_0,y_0] be the clopen interval I∩[X,Y[, which is indeed nonempty as it contains p. Note that x_0 and y_0 are step points. Let X'=O_w(x_0) and Y'=O_w(y_0). By the definition of O_w, x_0≥ X implies X'≥ X, and y_0≤ Y implies Y'≤ Y, thus [X',Y'[⊆[X,Y[. Therefore, we have p∈[X',Y'[ and [ρ(X'),λ(Y')]⊆ I∩[X,Y[.

Let x,y be step points in [X',Y'[ such that x<y. We want to show that x→y is g-projected.
If x≺≺ y, then we apply Remark <ref>, so we suppose that is not the case, which implies ρ(x)≤λ(y). Note that [ρ(x),λ(y)]⊆[ρ(X'),λ(Y')]. If t is a J-minimum transition from ρ(x) to λ(y) then t≤_J J_ρ(x) and t≤_J J_λ(y) by Lemma <ref>, thus φ_A(t)≤_JΥ(ρ(x)) and φ_A(t)≤_JΥ(λ(y)). Hence φ_A(t)≤_JΥ(p), because the restriction of Υ to Stat(𝔏(w))∩[X,Y[ is constant. By the choice of I (cf. Lemma <ref>), this implies φ_A(t)∈Υ(p). Therefore, since the restriction of Γ to Stat(𝔏(w))∩[X,Y[ is constant, we have shown that the pair (ρ(x),λ(y)) is a J-bridge.

By Remark <ref>, [x,ρ(x)[ and ]λ(y),y] are bridges, whence it follows from Proposition <ref> that there are step points x'∈[x,ρ(x)[ and y'∈]λ(y),y] such that x'→y' is g-projected. Therefore x→y is g-projected by Remarks <ref> and <ref>. This concludes the proof that [X',Y'[ is a special bridge containing p.

Suppose that the finite aperiodic semigroup S is unambiguous. Then every stationary point p of 𝔏(w) is contained in some special bridge.

Endow J_S×Im(F_g) with the following partial order: (J_1,P_1)≤(J_2,P_2) if and only if J_2<_J J_1 or (J_1=J_2 and P_1⊆ P_2). For each p∈ Stat(𝔏(w)), let

G(p)={Γ(q) : q∈ Stat(𝔏(w)) and Γ(q)<Γ(p)}.

We shall prove by induction on the cardinality of G(p) that p is contained in some special bridge.

We start with some preliminary remarks. Let I=[α,β] be a clopen interval containing p satisfying the properties described in Lemmas <ref> and <ref><ref>. Let X=O_w(α) and Y=O_w(β). Then the following holds:

Stat(𝔏(w))∩ I= Stat(𝔏(w))∩[X,Y[.

Let q∈ Stat(𝔏(w))∩[X,Y[. Then we have Υ(p)≤_JΥ(q) by the choice of I (cf. Lemma <ref>). It also follows from the choice of I (cf. Lemma <ref><ref>) that F_g(q)⊆ F_g(p). Hence we have

Γ(q)≤Γ(p)

for all q∈ Stat(𝔏(w))∩[X,Y[.

Initial step. Suppose that G(p)=∅. It follows from (<ref>) that Γ(q)=Γ(p) for every q∈ Stat(𝔏(w))∩[X,Y[. Therefore, [X,Y[ is a special bridge, by Lemma <ref>.

Inductive step. Suppose that for every stationary point q such that |G(q)|<|G(p)|, there is a special bridge containing q. Consider the sets

M={q∈ Stat(𝔏(w))∩[X,Y[:Γ(q)<Γ(p)},
N={q∈ Stat(𝔏(w))∩[X,Y[:Γ(q)=Γ(p)}.

Note that M∪ N= Stat(𝔏(w))∩[X,Y[, by (<ref>). Note also that, by the induction hypothesis, every element of M is contained in some special bridge, as Γ(q)<Γ(p) implies G(q)⊊ G(p) with Γ(q)∈ G(p)∖ G(q). Our goal is to apply Lemma <ref> to the interval [X,Y[. So, it remains to show that every element of N is contained in some special bridge. For that purpose, we prove the following lemma.

The set N is closed.

Consider a sequence (q_n)_n of elements of N converging to q. Since [X,Y[ and Stat(𝔏(w)) are closed (the latter in view of Proposition <ref>), we only need to prove that Γ(q)=Γ(p).

By Lemma <ref><ref>, we have F_g(q_n)⊆ F_g(q) for sufficiently large n. By the definition of N, we have F_g(q_n)=F_g(p) for all n, whence F_g(p)⊆ F_g(q). On the other hand, since q∈[X,Y[ and thus q∈ I, by the choice of I (cf. Lemma <ref><ref>) we have F_g(q)⊆ F_g(p). This concludes the proof of the equality F_g(q)=F_g(p).

By Lemma <ref>, we have Υ(q)≤_JΥ(q_n) for some sufficiently large n. By the definition of N, we have Υ(p)=Υ(q_n) for all n, whence Υ(q)≤_JΥ(p). On the other hand, since q∈[X,Y[ we have Υ(p)≤_JΥ(q) by (<ref>). Therefore we have Γ(p)=Γ(q).

Let us proceed with the proof of Proposition <ref>. Suppose that q∈ N. Let I' be a clopen interval containing q, and contained in [X,Y[, satisfying the properties described in Lemmas <ref> and <ref><ref>.
In a similar way as we have built X,Y from I, we may build from I' elements X',Y'∈ O_w such that q∈[X',Y'[ and Stat(𝔏(w))∩[X',Y'[⊆ I'.

Let x and y be two step points of [X',Y'[ such that x<y. We want to prove that the edge x→y is g-projected. By Remark <ref>, we may as well assume that x and y are not ≈-equivalent.

Suppose N∩[x,y]=∅. Then every element of the nonempty intersection Stat(𝔏(w))∩[x,y[ belongs to M and is therefore contained in a special bridge, as observed earlier. Hence [O_w(x),O_w(y)[ is a special bridge by Lemma <ref>. It follows that the edge x→y is g-projected (cf. Remarks <ref> and <ref>).

Suppose that N∩[x,y]≠∅. Then N∩[x,y] has a minimum r_x and a maximum r_y, because N∩[x,y] is closed by Lemma <ref>.

Let t be a J-minimum transition from r_x to r_y. Then we have t≤_J J_r_x by Lemma <ref>, thus φ_A(t)≤_JΥ(r_x). On the other hand, r_x,r_y∈ I', and so, by the choice of I' (cf. Lemma <ref>), we know that φ_A(t)≥_JΥ(q). Since by the definition of N, the equalities Υ(r_x)=Υ(r_y)=Υ(q) hold, we obtain φ_A(t)∈Υ(q). Also by the definition of N, we conclude that F_g(r_x)=F_g(r_y). This shows that (r_x,r_y) is a J-bridge.

We claim that [x,r_x[ is a bridge. Let x_1 and x_2 be step points such that x<x_1≤ x_2<r_x. We want to show that x_1→x_2 is g-projected, for which we may as well assume that x_1 and x_2 are not ≈-equivalent. Recall that M∪ N= Stat(𝔏(w))∩[X,Y[ by (<ref>), and so, by the definition of r_x, the set Stat(𝔏(w))∩[O_w(x_1),O_w(x_2)[ is contained in M. But we already observed that every element of M is contained in some special bridge. Hence, [O_w(x_1),O_w(x_2)[ is a special bridge by Lemma <ref>, and so x_1→x_2 is g-projected (cf. Remarks <ref> and <ref>), thus proving the claim. Similarly, ]r_y,y] is a bridge.

By Proposition <ref>, there are step points x_0 and y_0 satisfying x<x_0<r_x and r_y<y_0<y such that x_0→y_0 is g-projected. Again by the definition of r_x and by (<ref>), every element of Stat(𝔏(w))∩[O_w(x),O_w(x_0)[ belongs to M, and so [O_w(x),O_w(x_0)[ is a special bridge by Lemma <ref>. Hence, the edge x→x_0 is g-projected. Similarly, y_0→y is g-projected. It follows that x→y is g-projected by Remark <ref>.

We showed in all cases that x→y is g-projected. Hence, [X',Y'[ is a special bridge containing q.

We have established that every element of Stat(𝔏(w))∩[X,Y[ is contained in some special bridge. From Lemma <ref>, we then deduce that [X,Y[ is a special bridge containing p. This concludes the inductive step in our proof, thereby showing that the proposition holds.

Let w be an element of AA. Let φ: A→ S be a generating mapping of a finite aperiodic unambiguous semigroup S. Suppose that 𝔏_c(w) is g-recognized by (φ,s). Then 𝔏(w) is a bridge.

Clearly, if w∈ A^+ then 𝔏(w) is a bridge by Remark <ref>. Suppose that w∈ AA∖ A^+. By Proposition <ref>, for each stationary point p of 𝔏(w), there is a special bridge [X_p,Y_p[ containing it. The union of the nonempty family ([X_p,Y_p[)_p∈ Stat(𝔏(w)) is 𝔏(w)∖{(w,1)}, therefore 𝔏(w) is a bridge, in view of Lemma <ref>.

The direct implication in Theorem <ref> follows directly from Proposition <ref>. Conversely, suppose that 𝔏_c(w) is g-recognized by (φ,s), for some g. Then (1,w)→(w,1) is g-projected to g(1,w)→g(w,1) by Theorem <ref>. But g(1,w)=(1,s) and g(w,1)=(s,1).
It follows that s=φ_A(w).

§ THE EFFECT OF MULTIPLICATION ON THE QUASI-ORDER

In this section, under suitable conditions, we relate 𝔉(uv), on the one hand, with 𝔉(u) and 𝔉(v), on the other. The quasi-orders considered in this section are all total, as they stem from the quasi-order of 2-factorizations of equidivisible semigroups. We want to compare different intervals of quasi-ordered sets of 2-factorizations. This leads us to introduce the following definitions.

Let (P,≤) and (Q,≤) be two quasi-ordered sets, and let φ be a function from P to Q. Recall that φ is monotone if p≤q implies φ(p)≤φ(q), for every p,q∈P. Suppose moreover that the quasi-order on P is total. Then we say that φ is a quasi-isomorphism if φ is a surjective monotone mapping such that, for all p,q∈P, we have p<q ⟹ φ(p)<φ(q). Because the quasi-order on P is total, we have φ(p)<φ(q) ⟹ p<q and φ(p)∼φ(q) ⟹ p∼q, for all p,q∈P. Therefore, φ induces the isomorphism of linearly ordered sets φ̃: P/∼ → Q/∼ sending p/∼ to φ(p)/∼. In particular, the quasi-order on Q is also total.

Let I_u be an interval of 𝔏(u) and I_v an interval of 𝔏(v), for some u,v in a compact semigroup S. A mapping θ: I_u→I_v is said to be J-preserving if J_p=J_θ(p) for every p∈I_u.

Consider an equidivisible profinite semigroup S which is finitely cancelable, and take w∈S. Let (u,v)∈𝔉(w). For p=(u,v)/∼, let e be the unique idempotent of J_p stabilizing (u,v). Then the following mappings are quasi-isomorphisms of intervals of the respective totally quasi-ordered sets 𝔉(w), 𝔉(u), and 𝔉(v):

λ_(u,v): [(1,u),(u,e)] → [(1,w),(u,v)], (x,y) ↦ (x,yv),
ρ_(u,v): [(e,v),(v,1)] → [(u,v),(w,1)], (x,y) ↦ (ux,y).

Moreover, the induced isomorphisms λ̃_(u,v) and ρ̃_(u,v) are J-preserving.

Before proceeding with the proof of Proposition <ref>, let us recall that the uniqueness of e mentioned in its statement is guaranteed by Lemma <ref>. The next technical lemma will be used in the proof of Proposition <ref>.

Let S be an equidivisible compact semigroup, and let x,y,z∈S. If xy=xyz, then there exists some idempotent e∈S^I such that yz^ω=ey and xe=x. Dually, if yz=xyz, then there exists some idempotent e∈S^I such that x^ω y=ye and ez=z.

We deal only with the case xy=xyz, as the other case is dual. Since S is equidivisible, the pairs (x,y) and (x,yz) are comparable elements of the quasi-ordered set 𝔉(xy). If (x,yz)≤(x,y), then there exists t∈S^I such that x=xt and yz=ty, and so yz^k=t^k y for every k≥1, whence yz^ω=t^ω y and we choose e=t^ω. Otherwise, (x,y)<(x,yz), and so there exists u∈S such that x=xu and y=uyz, whence x=xu^ω and y=u^ω yz^ω; since yz^ω=y=u^ω y, we may choose e=u^ω.

By symmetry, it suffices to consider the mapping λ=λ_(u,v). By Remark <ref>, and since ev=v, the mapping λ indeed takes its values in the interval [(1,w),(u,v)] and it is monotone. Let (x,z)∈𝔉(w) be such that (x,z)≤(u,v). Then there is t∈S^I such that xt=u and z=tv. And, since u=ue, we deduce that (x,te)(u,e) is an edge of 𝒯(u), whence (x,te) belongs to the interval [(1,u),(u,e)]. As λ(x,te)=(x,tev)=(x,z), we conclude that λ is surjective.

To prove that λ is a quasi-isomorphism, it remains to show that if (x_1,y_1), (x_2,y_2) are elements of [(1,u),(u,e)], then

(x_1,y_1)<(x_2,y_2) ⟹ (x_1,y_1v)<(x_2,y_2v).

Reasoning by reductio ad absurdum, suppose that the implication fails, that is, that (x_1,y_1)<(x_2,y_2) and (x_2,y_2v)≤(x_1,y_1v). We may then consider s,t∈S^I such that x_1t=x_2, y_1=ty_2, x_2s=x_1, and y_2v=sy_1v.
The latter equality can be written as y_2v=st·y_2v, and applying the second case of Lemma <ref> to it, we conclude that there exists an idempotent f∈S^I such that (st)^ω y_2=y_2f and fv=v. The calculations

x_2·(st)^{ω-1}s = x_1(ts)^ω = x_1 and (st)^{ω-1}s·y_1 = (st)^ω y_2 = y_2f

show that x_2·y_2f=x_1y_1=u and

(x_2,y_2f)≤(x_1,y_1)

in [(1,u),(u,e)]. Since (x_1,y_1)<(x_2,y_2), we reach a contradiction provided we prove that y_2f=y_2. Recall that v=fv, and note that u=x_2y_2f implies u=uf. Hence, as (u,v)(u,v) belongs to the ideal 𝒦_p, we conclude that (u,v)(u,v) also belongs to 𝒦_p. From Corollary <ref>, we get efe=e. On the other hand, since (x_2,y_2f)≤(u,e) (cf. (<ref>)) and (x_2,y_2)≤(u,e), we have y_2fe=y_2f and y_2e=y_2. Hence, y_2f = y_2fe = y_2efe = y_2e = y_2, and so we reach the desired contradiction. The contradiction was originated by the assumption that the implication (<ref>) fails in the interval [(1,u),(u,e)]. Hence, the implication holds, which concludes the proof that λ is a quasi-isomorphism.

It remains to show that, for (x,y)∈[(1,u),(u,e)], we have J_q=J_λ̃(q), where q=(x,y)/∼. Let ε∈S^I be an idempotent. Observe that if ε stabilizes (x,y) then it also stabilizes (x,yv), which shows that J_q lies J-above J_λ̃(q). Conversely, suppose that ε stabilizes (x,yv). As εyv=yv, it follows from Lemma <ref> that there is some idempotent f∈S^I with

εy=yf and fv=v.

We claim that (x,yf)≤(u,e). Suppose on the contrary that

(u,e)<(x,yf).

Then e≤_L yf, and so in particular we have e=ef. Similarly, from (x,y)≤(u,e) we get y=ye. Therefore, yf=yef=ye=y, yielding a contradiction between (<ref>) and (<ref>). This shows the claim that (x,yf)≤(u,e). We are now assured that (x,yf) belongs to the domain of λ. From v=fv, we get λ(x,y)=(x,yv)=λ(x,yf). We have already proved that λ is a quasi-isomorphism, and so we conclude that (x,y)∼(x,yf). On the other hand, ε clearly stabilizes (x,yf)=(x,εy). Hence, J_λ̃(q) lies J-above J_q, which proves that the two J-classes coincide.

Notice that in Proposition <ref>, in the special case where (u,v) is a step point, we have e=1 and so the domains of λ_(u,v) and ρ_(u,v) are, respectively, 𝔉(u) and 𝔉(v).

Suppose that in Proposition <ref> we have e≠1. Let q be the stationary point (e,e)/∼ of 𝔏(e). Since e stabilizes (e,e) and J_q is J-above e, we have e∈J_q. Therefore, applying Proposition <ref>, we may consider the quasi-isomorphisms λ_(e,v): [(1,e),(e,e)]→[(1,v),(e,v)] and ρ_(u,e): [(e,e),(e,1)]→[(u,e),(u,1)]. The diagram in Figure <ref> may facilitate the understanding of the applications of Proposition <ref> in the case in which (u,v) is a stationary point. The arrows indicate quasi-isomorphisms between various intervals of the quasi-ordered sets 𝔉(w), 𝔉(u), 𝔉(v), and 𝔉(e). Those quasi-isomorphisms induce isomorphisms between the corresponding intervals of the linearly ordered sets 𝔏(w), 𝔏(u), 𝔏(v), and 𝔏(e). The picture is perhaps clearer if interpreted in this context, in which case, the points (u,v), (u,e), (e,v), and (e,e) should be replaced by their respective ∼-classes.

Let S be an equidivisible profinite semigroup which is finitely cancelable, and let w∈S. We endow every ordered subset Q of 𝔏(w) with the following labeling: for a step point p=(u,v) of 𝔏(w) belonging to Q, let (p)=(v) (and so if Q=𝔏(w) then the labeling on step points is the one defining the cluster word 𝔏_c(w)); for a stationary point p of 𝔏(w) belonging to Q, let (p)=J_p.
The resulting labeled ordered set is denoted Q_. In the next result, P_+Q_ denotes the labeled ordered set with underlying ordered set P+Q and labeling whose restriction to P and Q is respectively the labeling of P_ and Q_. The symbol ≅ stands for isomorphism of labeled ordered sets.

Let S be an equidivisible profinite semigroup which is finitely cancelable. Take w∈S. Let u,v∈S be such that w=uv. If e is the unique idempotent of J_(u,v)/∼ stabilizing (u,v), then

𝔏(w)_ ≅ [(1,u)/∼,(u,e)/∼[_ + [(e,v)/∼,(v,1)/∼]_.

In particular, if (u,v) is a step point, then

𝔏(w)_ ≅ (𝔏(u)∖{(u,1)})_ + 𝔏(v)_.

Consider the quasi-isomorphism λ_(u,v) and respective isomorphism λ̃_(u,v) as in Proposition <ref>. Then, the pair (x,y)∈𝔉(u) is a step point of the interval [(1,u),(u,e)[ if and only if its image λ_(u,v)(x,y)=(x,yv) is a step point of [(1,w),(u,v)[, and (y)=(yv). This fact, together with λ̃_(u,v) being J-preserving, enables us to conclude that

[(1,u)/∼,(u,e)/∼[_ ≅ [(1,w)/∼,(u,v)/∼[_.

Similarly, we obtain

[(e,v)/∼,(v,1)/∼]_ ≅ [(u,v)/∼,(w,1)/∼]_.

Since we clearly have

𝔏(w)_ = [(1,w)/∼,(u,v)/∼[_ + [(u,v)/∼,(w,1)/∼]_,

this concludes the proof.

§ THE IMAGE OF THE REPRESENTATION w ↦ 𝔏_c(w) IN THE APERIODIC CASE

Consider a cluster word (L,ℓ) over A. Let φ: A→S be a generating mapping of a semigroup S. Let s∈S and g: (L)→𝔉(s) be such that (L,ℓ) is g-recognized by (φ,s). We say that g is a (φ,s)-recognizer of (L,ℓ).

Let φ: A→S be a generating mapping of a finite semigroup S, and let π: S→T be an onto homomorphism of semigroups. Suppose that (L,ℓ) is recognized by (φ,s). Then (L,ℓ) is recognized by (π∘φ,π(s)).

Let g: (L)→𝔉(s) be a (φ,s)-recognizer of (L,ℓ). For each p∈(L), let g(p)=(u_p,v_p). Consider the mapping h: (L)→𝔉(π(s)) defined by h(p)=(π(u_p),π(v_p)). We claim that (L,ℓ) is h-recognized by (π∘φ,π(s)). The conditions <ref>-<ref> in Definition <ref> for h-recognition by (π∘φ,π(s)) are clearly satisfied. It remains to show that condition <ref> holds. Let p be a stationary point of (L,ℓ). Take an element q of 𝔉(π(s)) such that h^-1(q) is left cofinal at p. Consider the set

X={(u,v)∈𝔉(s) : (π(u),π(v))=q}.

Then we have h^-1(q)=g^-1(X)=⋃_x∈X g^-1(x). Since h^-1(q) is left cofinal at p and X is finite, there is at least one element x_0 of X such that g^-1(x_0) is left cofinal at p. But then g^-1(x_0) is also right cofinal at p, because (L,ℓ) is g-recognized by (φ,s). Therefore, h^-1(q) is right cofinal at p. Symmetrically, if h^-1(q) is right cofinal at p, then it is left cofinal at p. This concludes our proof.

For a cluster word (L,ℓ) over A, if p and q are step points of L such that p≤q, then ([p,q],ℓ) is the cluster word obtained from (L,ℓ) by restricting ℓ to [p,q[, and letting ℓ(q)=1.

We wish to study cluster words (L,ℓ) satisfying the following conditions:

* For every finite aperiodic unambiguous A-generated semigroup S, and every generating mapping φ: A→S, there is a unique s∈S such that (L,ℓ) is recognized by (φ,s).
* If p and q are step points of L such that p<q, then ([p,q],ℓ) satisfies <ref>.
* Consider an arbitrary finite aperiodic unambiguous A-generated semigroup S and a generating mapping φ: A→S. Let s be such that (L,ℓ) is recognized by (φ,s). Take a (φ,s)-recognizer g. Suppose p and q are step points such that p<q. If t∈S is such that (φ,t) recognizes ([p,q],ℓ), then g(p)g(q) is an edge of T(s).

In the setting of condition <ref>, there is only one such (φ,s)-recognizer, assuming that <ref> and <ref> also hold.
Indeed, if g is a (φ,s)-recognizer, and p is a step point of L, and if s_1 and s_2 are (the unique) elements of S such that ([min L,p],ℓ) and ([p,max L],ℓ) are respectively recognized by (φ,s_1) and (φ,s_2), then g(min L) g(p) and g(p) g(max L) are edges of T(s), and so g(p)=(s_1,s_2).

Finally we consider a fourth condition, assuming <ref>-<ref> hold:

* For every step point p of L, there is a finite aperiodic unambiguous semigroup S and a generating mapping φ: A→S such that, for the unique (φ,s)-recognizer g of (L,ℓ), there are no elements of S that stabilize g(p) in T(S).

A cluster word satisfying conditions <ref>-<ref> is called a worthy cluster word.

A cluster word (L,ℓ) over A is isomorphic to a cluster word of the form 𝔏_c(w), w∈AA, if and only if it is a worthy cluster word.

Let w∈AA. By Theorem <ref>, the cluster word 𝔏_c(w) satisfies condition <ref>. Take two step points p and q of 𝔏_c(w) such that p<q. Let t∈AA be the unique transition from p to q. Applying twice Proposition <ref>, we conclude that ([p,q],ℓ) is isomorphic with 𝔏_c(t). Therefore, by Theorem <ref>, ([p,q],ℓ) satisfies condition <ref>. By the previous paragraph and by Theorem <ref>, the cluster word 𝔏_c(w) satisfies condition <ref>.

Suppose condition <ref> does not hold for 𝔏_c(w). Then, there is a step point p such that, for every finite aperiodic unambiguous A-generated semigroup S and every generating mapping φ: A→S, the vertex g(p) of 𝒯(s) is stabilized by some element of S. Let p=(u,v). We then have g(p)=(φ_A(u),φ_A(v)) (cf. Theorem <ref>). By a standard compactness argument, this implies that (u,v) is stabilized by some element of AA. In view of Proposition <ref>, this is impossible since p is a step point.

Conversely, suppose that (L,ℓ) is a worthy cluster word over A. Let (π_i)_i∈I be an inverse system of continuous homomorphisms π_i: AA→S_i onto finite aperiodic unambiguous A-generated semigroups, with connecting homomorphisms π_j,i: S_j→S_i, such that AA=lim_i∈I S_i. According to condition <ref>, for each i∈I, we may consider the unique element s_i of S_i such that (L,ℓ) is recognized by (π_i,s_i). Applying Lemma <ref>, we then conclude that π_j,i(s_j)=s_i, whenever i,j∈I are such that i≤j. Hence, we may consider the unique element w_L of AA such that

π_i(w_L)=s_i,

for every i∈I. Consider the mapping λ: (L)→(𝔏(w_L)) defined by

λ(p)=(w_[min L,p],w_[p,max L]).

Note that condition <ref> ensures that w_[min L,p] and w_[p,max L] are well defined. We claim that λ is an isomorphism between the cluster words (L,ℓ) and 𝔏_c(w_L). In the process of proving this we show that (w_[min L,p],w_[p,max L]) is indeed a step point of 𝔉(w_L) (and thus of 𝔏(w_L)).

We begin by observing that formula (<ref>) generalizes to every generating mapping φ: A→S of a finite aperiodic unambiguous A-generated semigroup. Indeed, take s∈S such that (L,ℓ) is recognized by (φ,s). There is some i∈I for which there is an onto homomorphism ρ: S_i→S satisfying φ_A=ρ∘π_i. By Lemma <ref>, we know that ρ(s_i)=s. Therefore, we have

φ_A(w_L)=s.

For such a pair (φ,s), let g_φ: (L)→𝔉(s) be the unique (φ,s)-recognizer of (L,ℓ).
If p is a step point of L, then applying formula (<ref>) to [min L,p] and to [p,max L], and taking into account Remark <ref>, we conclude that

g_φ(p)=(φ_A(w_[min L,p]),φ_A(w_[p,max L])).

In particular, we have

φ_A(w_[min L,p]w_[p,max L])=s=φ_A(w_L).

Because φ was arbitrarily chosen among generating mappings of finite aperiodic unambiguous A-generated semigroups, this shows that the pair λ(p)=(w_[min L,p],w_[p,max L]) indeed belongs to 𝔉(w_L).

Consider step points q and r of L such that q≺r. Let a=ℓ(q). Take a generating mapping φ: A→S of a finite aperiodic unambiguous semigroup S. By the definition of (φ,s)-recognizer, we can consider in T(s) the edge g_φ(q)g_φ(r). In view of formula (<ref>), applied to q and r, we then have

φ_A(w_[min L,q]a)=φ_A(w_[min L,r]) and φ_A(w_[q,max L])=φ_A(aw_[r,max L]).

Since φ was arbitrarily chosen among generating mappings of finite aperiodic unambiguous A-generated semigroups, we conclude that

w_[min L,q]a=w_[min L,r] and w_[q,max L]=aw_[r,max L],

that is, λ(q)λ(r) is an edge of T(w_L). By Proposition <ref>, we either have λ(q)∼λ(r) or λ(q)≺λ(r). If λ(q)∼λ(r), then there is z∈a(AA)^I such that λ(q)λ(q) is an edge of T(w_L). Therefore, in view of formula (<ref>), we conclude that φ(z) labels a loop of T(s) rooted at g_φ(q). This contradicts the assumption that condition <ref> holds. We then conclude that, for step points q,r of L, we have

q≺r ⟹ λ(q)≺λ(r).

We also showed that ℓ(λ(q))=ℓ(q), thus establishing that the mapping λ: (L)→(𝔏(w_L)) has a well-defined codomain and that it preserves labels. Notice that formula (<ref>) can now be seen as follows: for the (φ,s)-recognizer g_w_L,φ: (𝔏(w_L))→𝔉(s) of 𝔏_c(w_L), as in Proposition <ref>, we have g_φ(p)=g_w_L,φ(λ(p)), for every step point p of L.

Let q and r be step points such that q<r. Suppose that λ(q)≥λ(r). Then, for the pair (φ,s) considered so far, and in view of (<ref>), we may consider in T(s) an edge g_φ(r)g_φ(q) labeled by some t∈S^I. On the other hand, according to condition <ref>, there is in T(s) an edge g_φ(q)g_φ(r) labeled by some z∈S. It follows that there is a loop in T(s) at g_φ(q) labeled by zt∈S. This contradicts <ref>. Hence, we have λ(q)<λ(r).

It remains to show that λ is onto. Let (u,v) be a step point of 𝔏(w_L). Consider the set

X={q∈(L) : λ(q)≤(u,v)}.

Notice that X is nonempty: indeed, one clearly has min L∈X. We claim that p=sup X is a step point. Suppose not. Let φ: A→S be a generating mapping of a finite aperiodic unambiguous A-generated semigroup. Since g_φ is finite and {q∈(L) : q>p} is right cofinal at p, there is (s_1,s_2)∈g_φ such that R={q∈(L) : q>p and g_φ(q)=(s_1,s_2)} is right cofinal at p. In particular, g_φ^-1(s_1,s_2) is right cofinal at p. Taking into account condition <ref> in Definition <ref>, we know that g_φ^-1(s_1,s_2) is also left cofinal at p. Therefore, there is a step point q such that q<p and g_φ(q)=(s_1,s_2). Since p=sup X, there is a step point q' such that q<q'<p and λ(q')≤(u,v). We have already shown that λ is injective and respects the order, so we actually have λ(q)<(u,v).

Let r be an element of the nonempty set R. Since r>p, we have (u,v)<λ(r). Let t_1 and t_2 be (the unique) transitions from λ(q) to (u,v) and from (u,v) to λ(r), respectively. Then, in T(s), we have the following edges

g_w_L,φ(λ(q))g_w_L,φ(u,v)g_w_L,φ(λ(r)).

But we have

g_w_L,φ(λ(q))=g_φ(q)=(s_1,s_2)=g_φ(r)=g_w_L,φ(λ(r)).

Hence, we can multiply the second edge in (<ref>) with the first edge, obtaining a loop at

g_w_L,φ(u,v)=(φ_A(u),φ_A(v))

labeled by φ_A(t_2t_1)∈S, leading to a contradiction, since 𝔏_c(w_L) satisfies <ref>.
This establishes the claim that p is a step point, thus p∈X. Suppose that λ(p)<(u,v). Let p' be the step point such that p≺p'. Then, applying (<ref>), we get λ(p)≺λ(p'). Since λ(p)<(u,v), we obtain λ(p')≤(u,v), and so p'∈X. But then p'≤sup X=p, a contradiction with p<p'. As p∈X, to avoid the contradiction, we must have λ(p)=(u,v). This concludes the proof that λ is onto.

It would also be interesting to characterize the worthy clustered linear orders that arise as the images 𝔏_c(w) of ω-words w. We leave this as an open problem.

§ ON THE CARDINALITY OF THE SET OF STATIONARY POINTS

Let V be an equidivisible pseudovariety of semigroups not contained in CS. Then V is finitely cancelable (cf. Proposition <ref>), and so, by Theorem <ref>, for a finite alphabet A and for w∈AV, the set (𝔏(w)) of step points is the set of isolated points of 𝔏(w), with respect to the order topology. Therefore, (𝔏(w)) is at most countable by Corollary <ref>. The aim of this section is to show that when A has at least two elements, there are elements w in AV for which the set (𝔏(w)) of stationary points has cardinal 2^ℵ_0. This will be done using some tools originating from symbolic dynamics, following an approach that has been successfully used in recent years to elucidate structural aspects of relatively free profinite semigroups <cit.>.

§.§ Subshifts

Consider a finite alphabet A, and endow A^ℤ with the product topology, where A is endowed with the discrete topology. The shift map of A^ℤ is the homeomorphism σ: A^ℤ→A^ℤ defined by σ((x_i)_i∈ℤ)=(x_i+1)_i∈ℤ. A symbolic dynamical system of A^ℤ, also called subshift of A^ℤ, is a nonempty closed subset X of A^ℤ such that σ(X)=X. The books <cit.> are good references on symbolic dynamical systems.

We say that a subset L of a semigroup S is

* factorial if it is closed under taking factors;
* prolongable if, for every s∈L, there are t,u∈S such that ts,su∈L;
* irreducible if, for all s,t∈L, there is u∈S such that sut∈L.

If X is a subshift of A^ℤ, then L(X) denotes the language of the words of A^+ of the form x_kx_k+1…x_k+n, where k∈ℤ, n≥0 and (x_i)_i∈ℤ∈X. The set L(X) is a factorial and prolongable language of A^+, and in fact all nonempty factorial and prolongable languages of A^+ are of this form; moreover, Y⊆X if and only if L(Y)⊆L(X), whenever X and Y are subshifts of A^ℤ <cit.>. Finally, X is said to be irreducible if L(X) is an irreducible subset of A^+.

If X is a subshift of A^ℤ then the sequence (1/n log_2|L(X)∩A^n|)_n converges to its infimum, which is called the entropy of X and denoted h(X) <cit.>. Note that X⊆Y implies h(X)≤h(Y), whenever X and Y are subshifts. If X is a subshift of A^ℤ then h(X)≤log_2|A|=h(A^ℤ). Moreover, from the fact that (1/n log_2|L(X)∩A^n|)_n converges to its infimum one easily deduces that the subshift X of A^ℤ satisfies h(X)=log_2|A| if and only if X=A^ℤ (this is a special case of <cit.>).

§.§ A special J-class

Consider a subshift X of A^ℤ, and suppose that V is a pseudovariety containing LSl. Let M_V(X) be the set of pseudowords w∈AV such that all finite factors of w belong to L(X). The set M_V(X) is a factorial subset of AV. Because, as it is well known, the languages of the form A^∗uA^∗, with u∈A^+, are LSl-recognizable, the hypothesis that V contains LSl ensures that M_V(X) is a closed subset of AV (cf. <cit.>).

Let X be an irreducible subshift of A^ℤ. Consider a pseudovariety V containing LSl.
For every u,v∈M_V(X) there is w∈AV, depending only on the finite suffixes of u and on the finite prefixes of v, such that uwv∈M_V(X).

If V is a pseudovariety containing D and its dual, then every infinite element of AV has a unique prefix (suffix) in A^+ with length n, for every n≥1. Let s_n be the suffix of length n of u and let p_n be the prefix of length n of v. Since X is irreducible, for each n∈ℕ, there is w_n∈L(X) such that s_nw_np_n∈L(X). Let w be an accumulation point of (w_n)_n. Then w has the desired property.

[<cit.>] Let S be a compact semigroup and let X⊆S. Then X is a closed, factorial, irreducible subset of S if and only if X consists of all factors of some regular element of S.

By Lemma <ref> and Proposition <ref>, if X is an irreducible subshift, then there is a unique regular J-class J_V(X) such that the elements of M_V(X) are the factors of elements of J_V(X). It also follows from Proposition <ref> that J_V(X)≤_J J_V(Y) if and only if M_V(Y)⊆M_V(X). Since we clearly have M_V(Y)⊆M_V(X) if and only if L(Y)⊆L(X), we conclude that

Y⊆X ⟺ J_V(X)≤_J J_V(Y).

If X=A^ℤ then M_V(X)=AV, and so J_V(X) is the minimum ideal of AV.

§.§ Uncountable <_R-chains and uncountable sets of stationary points

We use the standard notation ⌈α⌉ for the least integer greater than or equal to the real number α.

There is a family (𝒮_β)_β∈]1,+∞[ of symbolic dynamical systems, parameterized by the set of real numbers greater than one, such that:

* S_β is an irreducible subshift of {0,…,⌈β⌉-1}^ℤ;
* h(S_β)=log_2β;
* for every α,β∈]1,+∞[, we have α<β if and only if S_α⊊S_β.

A concrete family of symbolic dynamical systems satisfying the conditions of Theorem <ref> is the family of β-shifts. A comprehensive exposition about this family can be found in <cit.> and <cit.>. That these subshifts are irreducible follows from them being coded <cit.> — a subshift X of A^ℤ is coded if there is a prefix code Y contained in A^+ such that L(X) is the set of factors of elements of Y^+. The entropy of β-shifts was computed in <cit.>. The fact that this class fits into Property <ref> of Theorem <ref> appears at the beginning of <cit.> (actually, only the implication α≤β ⇒ S_α⊆S_β is explicit there, but from Property <ref> one gets S_α⊊S_β ⇒ α<β).

As usual, the notation <_J stands for the irreflexive relation originated by ≤_J, and similarly for <_R and <_L. From Theorem <ref> and equivalence (<ref>) in Remark <ref> one immediately deduces the existence of a <_J-chain in AV formed by 2^ℵ_0 regular elements, whenever V contains LSl and A has at least two letters. The next theorem gives a refinement of this, as it shows in particular the existence in AV of a <_R-chain formed by 2^ℵ_0 regular elements. We remark that in <cit.> an example is given of a <_R-chain of 2^ℵ_0 non-regular elements in ALSl, when |A|>1.

Let V be a finitely cancelable pseudovariety of semigroups containing LSl and let (𝒮_β)_β∈]1,+∞[ be a family of subshifts as in Theorem <ref>. Fix an integer n>1 and let A be the alphabet {0,…,n-1}.
There is a family (w^(β))_β∈]1,n] of pseudowords of AV satisfying the following conditions:

* w^(β)∈J_V(S_β)⊆AV, for every β∈]1,n];
* α<β ⇔ w^(β)<_R w^(α), for every α,β∈]1,n];
* w^(n) is an element of the minimum ideal of AV;
* there is a subnet of (w^(β))_β∈]1,n[ converging to w^(n), where ]1,n[ is endowed with the usual order;
* for each β∈]1,n], there are v^(β), f^(β) such that (w^(β),v^(β)) is a stationary point of 𝔉(w^(n)), and, for q_β=(w^(β),v^(β))/∼, the pseudoword f^(β) is an idempotent in J_q_β stabilizing (w^(β),v^(β)) and satisfying f^(β) L w^(β);
* we have α<β ⇒ q_α<q_β, and if moreover AV is equidivisible, then the equivalence α<β ⇔ q_α<q_β holds, for every α,β∈]1,n].

The proof of Theorem <ref> will be done in several steps. But first we highlight the following corollary, which is our main motivation for the theorem.

Let V be a finitely cancelable pseudovariety of semigroups containing LSl and let A be a finite alphabet with at least two elements. Then there are pseudowords w in the minimum ideal of AV such that (𝔏(w)) has 2^ℵ_0 elements.

Note also that Theorem <ref> gives an example of a pseudoword in the minimum ideal of AV whose set of stationary points contains a subset with the same order type as the set of real numbers. In contrast, the following example exhibits a pseudoword also in the minimum ideal of AV with only one stationary point.

Let u_1,u_2,u_3,… be an enumeration of the elements of A^+, and let V be a pseudovariety containing LSl such that AV is equidivisible. For each k≥1, consider in AV an accumulation point v_k of the sequence (u_ku_k+1⋯u_n-1u_n)_n≥k and an accumulation point w_k of the sequence (u_nu_n-1⋯u_k+1u_k)_n≥k. As every element of A^+ is a factor of v_k and w_k, we know that v_k and w_k belong to the minimum ideal K_A of AV. Therefore, if p and q are respectively the first and last stationary point of 𝔏(v_1w_1), then J_p=J_q=K_A by Theorem <ref>, and so p=q by Lemma <ref>.

§.§ About the proof of Theorem <ref>

Let S be a compact semigroup and I an ordered set. Suppose that F=(F_i)_i∈I is a nonempty family of compact subsets of S. Denote by R_F the set of partial functions f from I to ⋃F such that f(i)∈F_i for all i∈ f, and such that i≤j ⇒ f(i)≤_R f(j) whenever i,j∈ f. We endow R_F with the partial order ≤ defined by

f≤g ⟺ (f=g or f⊊g).

The ordered set R_F has a maximal element.

Let C be a chain of elements of R_F. We want to show that C has an upper bound in R_F. For each f∈R_F, let f' be an element of ∏_i∈I F_i whose restriction to f equals f. Since ∏_i∈I F_i is compact, the net (f')_f∈C has a subnet converging to some φ∈∏_i∈I F_i. For achieving our goal, we may as well assume that (f')_f∈C converges. Let us fix an element f_0 of C, and take i,j∈ f_0 such that i≤j. For all f∈C such that f_0≤f, one has f(i)≤_R f(j). As the net (f')_f∈C, f_0≤f converges to φ and ≤_R is a closed relation, we deduce that φ(i)≤_R φ(j). Moreover, since F_i is closed, we also have φ(k)∈F_k for all k∈ f_0. As f_0 was chosen arbitrarily from C, we conclude that the restriction of φ to ⋃_f∈C f belongs to R_F and is an upper bound for C. Hence, by Zorn's Lemma, R_F has a maximal element.

For the relation ⊇, let C be a nonempty chain of irreducible subshifts of A^ℤ. Consider a pseudovariety of semigroups V that contains LSl. Let 𝒥 be the family of J-classes (J_V(X))_X∈C. Then there is an element of R_𝒥 with domain C.

By Lemma <ref>, we know there is in R_𝒥 a maximal element f. We claim that f=C. Suppose this is false. Let Z∈C∖ f.
Supposing that I={X∈ f : X⊆Z} is nonempty, let u be an accumulation point of the net (f(X))_X∈(I,⊆); in case I=∅, we let u be any element of J_V(Z). Since X⊆Z implies M_V(X)⊆M_V(Z), we have f(X)∈M_V(Z) for all X∈I. And since M_V(Z) is closed, we conclude that u∈M_V(Z). Moreover, fixed X∈I, then, as f∈R_𝒥, we have f(Y)≤_R f(X) for all Y∈I such that X⊆Y, whence

X∈I ⟹ u≤_R f(X).

Let v∈J_V(Z). By the irreducibility of M_V(Z), there is w∈AV such that uwv∈M_V(Z). Since v is a factor of uwv and v is a J-minimum element of M_V(Z), we have uwv∈J_V(Z). As f∈R_𝒥, every two elements in the image of f are R-comparable, and so the elements in the image of f have all the same set P of finite prefixes. By Lemma <ref>, for each X∈ f such that Z⊆X, there is a pseudoword w', depending only on uwv and P, such that uwvw'f(X)∈M_V(X). More precisely, we have uwvw'f(X)∈J_V(X), as f(X)∈J_V(X). The partial function

f': X∈ f∪{Z} ↦ f(X) if X⊊Z; uwv if X=Z; uwvw'f(X) if Z⊊X,

belongs to R_𝒥 (cf. implication (<ref>)) and f⊊f'. This contradicts the fact that f is a maximal element of R_𝒥. The absurdity comes from the hypothesis C∖ f≠∅.

We recall the concept of entropy of a pseudoword, first introduced in <cit.>, and applied there in the study of relatively free profinite semigroups. Some further applications were given in <cit.>. Let V be a pseudovariety containing LSl and A an alphabet with at least two letters. For w∈AV, let q_w(n) denote the number of factors of length n of w. If w is an infinite pseudoword then the sequence 1/n log_2 q_w(n) converges to its infimum, which is denoted by h(w) and called the entropy of w.[This is the definition used in <cit.>. In <cit.> the entropy of w is defined as 1/n log_|A| q_w(n), which equals h(w)log_|A|2 for h(w) as defined here.] This definition extends to finite words, by letting h(w)=0 when w is finite. If X is a subshift, then h(X)=h(w) for every pseudoword w whose set of finite factors is equal to L(X). For instance, if X is irreducible then h(X)=h(w) when w∈J_V(X). Note that h(w)∈[0,log_2|A|], for all w∈AV. Moreover, we have the following fact from <cit.>.

Let w∈AV. Then h(w)=log_2|A| if and only if w belongs to the minimum ideal of AV.

In particular, the entropy of pseudowords of AV is not continuous, since every finite word has entropy zero and the set of finite words is dense. However, it is upper semi-continuous, as proved next.

Let V be a pseudovariety containing LSl. If (w_n)_n is a sequence of elements of AV converging to w then lim sup h(w_n)≤h(w).

Since lim sup h(w_n) is the greatest accumulation point of the sequence (h(w_n))_n, the proof is reduced to the case where (h(w_n))_n converges. Since lim w_n=w and V contains LSl, for each k there is p_k such that for all n≥p_k the pseudowords w_n and w have the same factors of length k. Let (n_k)_k be the sequence recursively defined by n_1=p_1 and n_k+1=max{n_k,p_k+1}. Given ε>0, consider the set K={k : h(w_n_k)≥h(w)+ε}. For every k∈K, one has

1/k log_2 q_w(k) = 1/k log_2 q_w_n_k(k) ≥ h(w_n_k) ≥ h(w)+ε.

As lim 1/k log_2 q_w(k)=h(w), if K is infinite then (<ref>) leads to the contradiction h(w)≥h(w)+ε. Hence K is finite, and so lim h(w_n_k)≤h(w)+ε. Since ε is arbitrary and (h(w_n))_n converges, we conclude that lim h(w_n)≤h(w).

We now have all the tools to achieve the proof of Theorem <ref>. Let C be the chain (𝒮_β)_β∈]1,n[, ordered by ⊇.
Applying Proposition <ref> to the family of J-classes (J_V(X))_X∈C, we conclude that there is a function f: C→AV such that f(S_β)∈J_V(S_β) and

S_β⊇S_α ⟹ f(S_β)≤_R f(S_α), ∀α,β∈]1,n[.

On the other hand, if f(S_β)≤_R f(S_α), then S_β⊇S_α by equivalence (<ref>) in Remark <ref>. Therefore, we actually have

S_β⊋S_α ⟺ f(S_β)<_R f(S_α), ∀α,β∈]1,n[.

For each β∈]1,n[, let w^(β)=f(S_β). By the given characterization of (S_β)_β∈]1,+∞[ (cf. Theorem <ref>), we know that S_β⊋S_α if and only if α<β, whence (<ref>) translates to

α<β ⟺ w^(β)<_R w^(α), ∀α,β∈]1,n[.

Let (α_k)_k be an increasing sequence of elements of the open interval ]1,n[ such that lim α_k=n. Thus, we have lim h(w^(α_k))=lim log_2 α_k=log_2 n. Hence, if w^(n) is an accumulation point of the sequence (w^(α_k))_k, then h(w^(n))=log_2 n by Lemma <ref> and Remark <ref>. By Proposition <ref>, the pseudoword w^(n) then belongs to the minimum ideal of AV. Since ≤_R is a closed relation, we have w^(n)≤_R w^(β) for all β∈]1,n[. Then, taking into account (<ref>), we conclude that the net (w^(β))_β∈]1,n] satisfies conditions <ref>-<ref> in Theorem <ref>.

As w^(n)≤_R w^(β), there is u^(β) with w^(n)=w^(β)u^(β). Since w^(β) is regular, there is an idempotent f^(β) in the L-class of w^(β). Take v^(β)=f^(β)u^(β). Then (w^(β),v^(β)) is an element of 𝔉(w^(n)) stabilized by f^(β). By Proposition <ref>, this implies that q_β=(w^(β),v^(β))/∼ is a stationary point. The elements of J_q_β are factors of w^(β), and so by the minimality of J_q_β we have f^(β)∈J_q_β. Hence, condition <ref> in Theorem <ref> holds.

Let α,β∈]1,n]. Since q_α≤q_β ⇒ w^(β)≤_R w^(α), we deduce from (<ref>) that q_α≤q_β ⇒ α≤β. Thus, if α<β then we cannot have q_β≤q_α, and so assuming AV is equidivisible, we get q_α<q_β, thereby establishing condition <ref> in Theorem <ref>.

§ ACKNOWLEDGMENTS

The work of the first, third, and fourth authors was partly supported by the Pessoa French-Portuguese project “Separation in automata theory: algebraic, logical, and combinatorial aspects”. The work of the first three authors was also partially supported respectively by CMUP (UID/MAT/00144/2013), CMUC (UID/MAT/00324/2013), and CMAT (UID/MAT/00013/2013), which are funded by FCT (Portugal) with national (MCTES) and European structural funds (FEDER), under the partnership agreement PT2020. The work of the fourth author was partly supported by ANR 2010 BLAN 0202 01 FREC and by the DeLTA project ANR-16-CE40-0007.

J. Almeida, Finite semigroups and universal algebra, World Scientific, Singapore, 1995. English translation.
J. Almeida, Profinite groups associated with weakly primitive substitutions, Fundamentalnaya i Prikladnaya Matematika (Fundamental and Applied Mathematics) 11 (2005), 13–48. In Russian; English version in J. Math. Sciences 144, no. 2 (2007), 3881–3903.
J. Almeida, Profinite semigroups and applications, Structural theory of automata, semigroups and universal algebra (New York) (V. B. Kudryavtsev and I. G. Rosenberg, eds.), Springer, 2005, pp. 1–45.
J. Almeida and A. Costa, Infinite-vertex free profinite semigroupoids and symbolic dynamics, J. Pure Appl. Algebra 213 (2009), 605–631.
J. Almeida and A. Costa, Presentations of Schützenberger groups of minimal subshifts, Israel J. Math. 196 (2013), no. 1, 1–31.
J. Almeida and A. Costa, Equidivisible pseudovarieties of semigroups, Publ. Math. Debrecen (2016), to appear, arXiv:1603.00330.
J. Almeida and O.
Klíma, Representations of relatively free profinite semigroups, irreducibility, and order primitivity, Tech. report, Univ. Masaryk and Porto, 2015, arXiv:1509.01389.
J. Almeida and M. V. Volkov, Profinite methods in finite semigroup theory, Proceedings of International Conference “Logic and applications” honoring Yu. L. Ershov on his 60-th birthday anniversary and of International Conference on mathematical logic, honoring A. I. Mal'tsev on his 90-th birthday anniversary and 275-th anniversary of the Russian Academy of Sciences (Novosibirsk, Russia) (S. S. Goncharov, ed.), 2002, pp. 3–28.
J. Almeida and M. V. Volkov, Subword complexity of profinite words and subgroups of free profinite semigroups, Int. J. Algebra Comput. 16 (2006), 221–258.
J. Almeida and P. Weil, Free profinite R-trivial monoids, Int. J. Algebra Comput. 7 (1997), 625–671.
A. Bertrand-Mathis, Développement en base θ; répartition modulo un de la suite (xθ^n)_n≥0; langages codés et θ-shift, Bull. Soc. Math. France 114 (1986), no. 3, 271–323.
J.-C. Birget, Iteration of expansions—unambiguous semigroups, J. Pure Appl. Algebra 34 (1984), no. 1, 1–55.
V. Bruyère and O. Carton, Automata on linear orderings, J. Comput. System Sci. 73 (2007), no. 1, 1–24.
J. H. Carruth, J. A. Hildebrant, and R. J. Koch, The theory of topological semigroups, Pure and Applied Mathematics, no. 75, Marcel Dekker, New York, 1983.
L. Chaubard, J.-E. Pin, and H. Straubing, Actions, wreath products of 𝒞-varieties and concatenation product, Theor. Comp. Sci. 356 (2006), 73–89.
A. Costa, Semigrupos profinitos e dinâmica simbólica, Ph.D. thesis, Univ. Porto, 2007.
A. Costa and B. Steinberg, Profinite groups associated to sofic shifts are free, Proc. London Math. Soc. 102 (2011), 341–369.
J. C. Costa, Free profinite locally idempotent and locally commutative semigroups, J. Pure Appl. Algebra 163 (2001), 19–47.
S. Eilenberg, Automata, languages and machines, vol. B, Academic Press, New York, 1976.
C. Frougny, Number representation and finite automata, Topics in symbolic dynamics and applications (Temuco, 1997), London Math. Soc. Lecture Note Ser., vol. 279, Cambridge Univ. Press, Cambridge, 2000, pp. 207–228.
S. J. van Gool and B. Steinberg, Pro-aperiodic monoids via saturated models, Tech. report, 2016, arXiv:1609.07736.
P.-A. Grillet, Semigroups, An introduction to the structure theory, Monographs and Textbooks in Pure and Applied Mathematics, vol. 193, Marcel Dekker Inc., New York, 1995.
M. Huschenbett and M. Kufleitner, Ehrenfeucht-Fraïssé games on omega-terms, LIPIcs. Leibniz Int. Proc. Inform., vol. 25, STACS, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2014, pp. 374–385.
S. Ito and Y. Takahashi, Markov subshifts and realization of β-expansions, J. Math. Soc. Japan 26 (1974), 33–55.
B. Kitchens, One-sided, two-sided and countable state Markov shifts, Springer, Berlin, 1998.
M. Kufleitner and J. P. Wächter, The word problem for omega-terms over the Trotter-Weil hierarchy (extended abstract), Computer science—theory and applications, Lecture Notes in Comput. Sci., vol. 9691, Springer, Cham, 2016, pp. 237–250.
G. Lallement, Semigroups and combinatorial applications, Wiley-Interscience, J.
Wiley & Sons, Inc., New York, 1979.
D. Lind and B. Marcus, An introduction to symbolic dynamics and coding, Cambridge University Press, Cambridge, 1995.
M. Lothaire, Algebraic combinatorics on words, Cambridge University Press, Cambridge, UK, 2002.
J. D. McKnight, Jr. and A. J. Storey, Equidivisible semigroups, J. Algebra 12 (1969), 24–48.
A. Moura, Representations of the free profinite object over DA, Int. J. Algebra Comput. 21 (2011), 675–701.
W. Parry, On the β-expansions of real numbers, Acta Math. Acad. Sci. Hungar. 11 (1960), 401–416.
J.-E. Pin, Profinite methods in automata theory, 26th International Symposium on Theoretical Aspects of Computer Science (STACS 2009), Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), 2009, pp. 31–50.
J.-E. Pin and P. Weil, The wreath product principle for ordered semigroups, Comm. Algebra 30 (2002), no. 12, 5677–5713.
A. Rényi, Representations for real numbers and their ergodic properties, Acta Math. Acad. Sci. Hungar. 8 (1957), 477–493.
J. Rhodes and B. Steinberg, Profinite semigroups, varieties, expansions and the structure of relatively free profinite semigroups, Int. J. Algebra Comput. 11 (2002), 627–672.
J. Rhodes and B. Steinberg, The q-theory of finite semigroups, Springer Monographs in Mathematics, Springer, 2009.
J. G. Rosenstein, Linear orderings, Academic Press, New York, 1982.
M. P. Schützenberger, On finite monoids having only trivial subgroups, Inform. and Control 8 (1965), 190–194.
H. Straubing, Aperiodic homomorphisms and the concatenation product of recognizable sets, J. Pure Appl. Algebra 15 (1979), 319–327.
W. Thomas, Languages, automata, and logic, Handbook of formal languages. Beyond words (G. Rozenberg and A. Salomaa, eds.), vol. 3, Springer, Berlin, 1997, pp. 389–455.
B. Tilson, Categories as algebra: an essential ingredient in the theory of monoids, J. Pure Appl. Algebra 48 (1987), 83–198.
P. Weil, Profinite methods in semigroup theory, Int. J. Algebra Comput. 12 (2002), 137–178.
S. Willard, General topology, Addison-Wesley, Reading, Mass., 1970.
Multiphase Porous Electrode Theory

Martin Z. Bazant [Corresponding author: bazant@mit.edu]
Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA

December 30, 2023
=============================================================================================================================================================================================================

Porous electrode theory, pioneered by John Newman and collaborators, provides a useful macroscopic description of battery cycling behavior, rooted in microscopic physical models rather than empirical circuit approximations. The theory relies on a separation of length scales to describe transport in the electrode coupled to intercalation within small active material particles. Typically, the active materials are described as solid solution particles with transport and surface reactions driven by concentration fields, and the thermodynamics are incorporated through fitting of the open circuit potential. This approach has fundamental limitations, however, and does not apply to phase-separating materials, for which the voltage is an emergent property of inhomogeneous concentration profiles, even in equilibrium. Here, we present a general theoretical framework for “multiphase porous electrode theory” implemented in an open-source software package called “MPET”, based on electrochemical nonequilibrium thermodynamics. Cahn-Hilliard-type phase field models are used to describe the solid active materials with suitably generalized models of interfacial reaction kinetics. Classical concentrated solution theory is implemented for the electrolyte phase, and Newman's porous electrode theory is recovered in the limit of solid-solution active materials with Butler-Volmer kinetics. More general, quantum-mechanical models of Faradaic reactions are also included, such as Marcus-Hush-Chidsey kinetics for electron transfer at metal electrodes, extended for concentrated solutions. The full equations and numerical algorithms are described, and a variety of example calculations are presented to illustrate the novel features of the software compared to existing battery models.

§ INTRODUCTION

Lithium-based batteries have growing importance in global society <cit.> as a result of increased prevalence of portable electronic devices <cit.>, and their enabling role in the transition toward renewable energy sources <cit.>. For example, lithium batteries can help mitigate intermittency of renewable energy sources such as solar power, and lithium battery powered electric vehicles are facilitating movement away from liquid fossil fuels for transportation. Each of these growing areas demands high performance batteries, with requirements specific to the particular needs of the application driving specialized battery design for sub-markets. Thus, it is critical that battery models be based on the underlying physics, enabling them to greatly facilitate cell design to take best advantage of the existing battery technologies.

Lithium-ion batteries are generally constructed using two porous electrodes and a porous separator between them. The porous electrodes consist of various interpenetrating phases including electrolyte, active material, binder, and conductive additive. A schematic is shown in Figure <ref>. In a charged state, most of the lithium in the cell is contained in the active material within the negative electrode.
During discharge, the lithium undergoes transport to the surface of the active material, electrochemical reaction to move from the active material to the electrolyte, transport through the electrolyte to the positive electrode, and reaction and transport to move into the active material of the positive electrode <cit.>. Physical models must capture each of these behaviors accurately. Complicating the situation further, the microstructure of the interpenetrating porous media within the electrodes can have a strong effect on the cell behavior <cit.> as a result of inhomogeneities over a length scale smaller than the electrode but many times the size of a primary active particle <cit.>. The active materials themselves also often have highly non-trivial behavior including poor connectivity with each other and the conductive matrix <cit.> as well as complex material properties leading to deformations and accompanying stresses <cit.> and phase separation <cit.> during the intercalation process. Coupling these behaviors in multi-particle electrode environments leads to further complexities <cit.>.Because capturing all the relevant physical processes from the atomistic length scale to the cell pack level within a single simulation is computationally intractable, various approaches have been developed to simulate aspects of battery behavior <cit.>. In particular, porous electrode theory, pioneered by John Newman and co-workers over the past fifty years, has proven highly successful in describing the practical scale of individual cells <cit.>. This approach is based on volume averaging over a region of the porous electrode large enough to treat it as overlapping, homogeneous, continuous phases to describe the behavior of the electrons in the conductive matrix and the ions in the electrolyte. The behavior of the active material is treated by defining representative particles and placing them within the simulation domain as volumetric source/sink terms for ions and electrons according to the actual volume fraction of active material in the electrode. In this way, details on the length scale of transport within small active material particles can be consistently coupled to volume averaged transport over much larger length scales. Ref. <cit.> provides an excellent overview of the fundamentals of the theory.As a result of volume averaging, heterogeneities over intermediate length scales are lost, causing inaccuracies in predictions. Efforts to capture these heterogeneities in simulations have had success in characterizing the consequences of the volume averaging procedure and providing more accurate alternatives, including refinements to microstructural parameters such as tortuosity in the volume averaged approach <cit.>. Nevertheless, simulations including complete microstructure information are much more computationally expensive than volume averaged approaches, so the simpler approach retains value in situations requiring faster model calculation or development.Porous electrode theory has been developed and tested for decades for a variety of battery materials, but strictly speaking, it can only describe solid-solution active materials, whose thermodynamics are uniquely defined by fitting the open circuit voltage versus state of charge, or average composition. Active materials with more complex thermodynamics, resulting in multiple stable phases of different equilibrium concentrations, cannot be described, except by certain empirical modifications. 
Phase separating materials, such as lithium iron phosphate and graphite, can be accommodated by introducing artificial phase boundaries, such as shrinking cores <cit.> or shrinking annuli <cit.>, respectively, but this approach masks the true thermodynamic behavior.Instead, the open circuit voltage of a battery is an emergent property of multiphase materials, which reflects phase separation in single particles <cit.> and porous electrodes <cit.>. It can only be predicted by modeling the free energy functional, rather than the voltage directly, and consistently defining electrochemical activities, overpotentials, and reaction rates using variational nonequilibrium thermodynamics <cit.>. This is the approach of “multiphase porous electrode theory” presented below. In principle, such models are required to predict multiphase battery performance over a wide range of temperatures and currents <cit.>, as well as degradation related to mechanical stresses <cit.> and side reactions that depend on the local surface concentration profile <cit.>.Regardless of the thermodynamic model, volume averaged simulations using porous electrode theory are carried out in a number of ways. Newman's code uses a finite difference method via the BAND subroutine, and it is freely available <cit.> and commonly used. It has been developed and tested for decades, but it requires analytical derivative information about model equations to form the Jacobian, which makes modifications to the code less straightforward. The free energy approach we take here naturally describes both phase separating and solid solution materials using the same mathematical framework in a more user-friendly implementation. Popular commercial software packages such as COMSOL <cit.> have also been used to implement versions of porous electrode theory <cit.>, usually using the finite element method. This has the advantage of being quick to set up, but it can involve costly software licenses. More importantly, the closed source means that detailed inspection of the software is impossible for the purpose of verifying, modifying, and improving the numerical methods for the particular problems investigated. A number of authors have also written custom versions of porous electrode software <cit.>, for example using a manually implemented finite volume method and some general differential algebraic equation (DAE) solver for time advancement. This approach provides significant flexibility, but it is not common to share the code to facilitate use and inspection by a broader community, although Torchio et al. recently published their MATLAB implementation using the finite volume method with a variable time stepper <cit.>. We take a similar approach in this work. More comprehensive reviews of commonly used simulation approaches can be found in refs. <cit.>.Here, we present the equations and algorithms for a finite-volume based simulation software package, which implements multiphase porous electrode theory (“MPET”). The code is freely available <cit.>, and it is developed in a modular way to facilitate modification and re-use. It is based only on open-source software and is written in Python, a modern, high-level language commonly used in the scientific computing community. Computationally expensive aspects of the code are all done using standard and freely available, open source numerical libraries.
This takes advantage of Python's ease of use for the model definitions while retaining the fast and vetted computation of libraries written in lower level languages like C and Fortran. The paper is organized as follows. In Section <ref>, we begin by presenting the full mathematical framework of MPET, based on the original formulation of Ferguson and Bazant <cit.>, with several modifications. First, we incorporate the standard description of transport in concentrated electrolytes based on Stefan-Maxwell coupled fluxes and chemical diffusivities <cit.>. Second, we capture the continued development by our group of phase field models for the active solid materials <cit.>, which have increasingly been validated by direct experimental observations of phase separation dynamics <cit.>. Third, we provide alternatives to the empirical Butler-Volmer model of Faradaic reaction kinetics <cit.>, based on the quantum mechanical theory of electron transfer pioneered by Marcus <cit.> and extended here for concentrated solutions <cit.>, motivated by recent battery experiments <cit.>. Fourth, the porous electrode model is modified to allow for different network connections between active particles, as well as half-cells with Li-foil counter electrodes and full two-electrode cells. In Section <ref>, the equations are made dimensionless, and numerical methods to solve them are presented in Section <ref>, along with the overall software structure. In Section <ref>, a variety of example simulations are presented to highlight the novel features of MPET compared to previous models, and the paper concludes with an outlook for future developments in Section <ref>.

§ MODEL

As discussed above, the basic structure of the model involves volume averaging over a region larger than the particles of active material. Because the details of dynamics within the active material particles can strongly affect model predictions, they are simulated at their small length scale and treated as a source term for equations defined over the larger, electrode length scales. Thus, the model can be broken down into a number of scales. For the overall cell, we specify either the cell current density or voltage input profiles along with any series resistance. The unspecified current or voltage is an output of the simulation. At the electrode scale, we solve electrolyte transport equations, potential losses in the electron-conducting matrix, and potential drop between particles. At the particle scale, we simulate how the particles react with the electrolyte and the internal concentration dynamics. We will assume uniform (though not necessarily constant) temperature in the model derivation. The software currently only supports uniform and constant room temperature simulations, but we retain temperature factors here for generality. We will use the convention in referring to electrodes that the electrode which is negative/positive at open circuit and charged conditions is referred to as the anode/cathode.

§.§ Electrode Scale Equations

§.§.§ Electrolyte Model

The general form of the electrolyte model equations arises from statements of conservation of species and conservation of charge within the electrolyte phase of a quasi-neutral porous medium <cit.>. We consider electrolytes of salts defined by ∑_iν_iM_i^z_i, with species M_i having valence z_i, and ν_i the number of ions M_i in solution from dissolving one molecule of the neutral salt. For example, for CuCl_2, ν_+ = 1, M_+ = Cu, z_+ = 2, ν_- = 2, M_- = Cl, and z_- = -1.
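As a minimal illustration of this bookkeeping, the following Python sketch encodes the stoichiometry of a binary salt and checks electroneutrality of the formula unit, ν_+z_+ + ν_-z_- = 0. The class and field names are ours, for illustration only, and are not part of MPET's actual interface.

from dataclasses import dataclass

@dataclass
class BinarySalt:
    """Stoichiometry of a binary salt, sum_i nu_i M_i^(z_i)."""
    nu_plus: int    # cations per formula unit, nu_+
    nu_minus: int   # anions per formula unit, nu_-
    z_plus: int     # cation valence, z_+
    z_minus: int    # anion valence, z_- (negative)

    def __post_init__(self):
        # A dissolvable neutral salt must carry no net charge.
        if self.nu_plus * self.z_plus + self.nu_minus * self.z_minus != 0:
            raise ValueError("salt formula unit is not electroneutral")

    def neutral_salt_concentration(self, c_plus):
        # c = c_+/nu_+, which equals c_-/nu_- under quasi-neutrality.
        return c_plus / self.nu_plus

# CuCl2 from the example above:
cucl2 = BinarySalt(nu_plus=1, nu_minus=2, z_plus=2, z_minus=-1)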
With electrolyte species flux, 𝐅_ℓ,i, defined per area of porous medium, conservation of species requires

∂(ϵ c_ℓ,i)/∂t = -∇·𝐅_ℓ,i + R_V,i

where c_ℓ,i is the concentration of species i in the electrolyte, ϵ is the electrolyte volume fraction (porosity), and R_V,i describes a volume-averaged reaction rate which is the result of interfacial electrochemical reactions with the active materials in which ions are added to or removed from the electrolyte.

In general electrolyte transport models for porous media, the macroscopic charge density may be nonzero and varies in response to current imbalances. Diffuse electrolyte charge screens internal charged surfaces of porous membrane materials <cit.> or conducting porous electrodes <cit.> and provides additional pathways for ion transport by electromigration (“surface conduction”) and electro-osmotic flows. In addition to Faradaic reactions, porous electrodes can also undergo capacitive charging by purely electrostatic forces, as in electric double layer capacitors and capacitive deionization systems <cit.>, or hybrid pseudo-capacitors <cit.>. In batteries, however, such effects are usually neglected <cit.>, since the focus is on electrochemical, rather than electrostatic, energy storage, using highly concentrated electrolytes.

In these electrolyte models, we will assume quasi-neutrality, i.e. there is no net charge in the electrolyte over the simulated length scales <cit.>. Charge conservation and quasi-neutrality together require

∂(ϵ ρ_e)/∂t ≈ 0 = -∇·𝐢_ℓ + ∑_i z_ieR_V,i,

where e is the elementary charge, the charge density ρ_e = ∑_i z_iec_ℓ,i, and 𝐢_ℓ is the current density in the electrolyte, related to a sum of ionic fluxes,

𝐢_ℓ = ∑_i z_ie𝐅_ℓ,i.

We will relate fluxes to both concentrations and electrostatic potentials in the electrolyte, so with constitutive flux relationships, Eqs. <ref> and <ref> fully specify the system for both the set of concentrations and the electrostatic potential field. However, it is convenient to use quasi-neutrality to eliminate one of the species conservation equations using (with arbitrary n)

z_nc_ℓ,n = -∑_i≠n z_ic_ℓ,i,

which allows us to neglect Eq. <ref> for one species and post-calculate the missing concentration profile. For example, for the case of a binary electrolyte of cations, +, and anions, -, we define the neutral salt concentration, c_ℓ = c_ℓ,+/ν_+ = c_ℓ,-/ν_-, and instead simulate one of

∂(ϵ c_ℓ)/∂t = (1/ν_i)(-∇·𝐅_ℓ,i + R_V,i)

and

0 = -∇·𝐢_ℓ + (z_+eR_V,+ + z_-eR_V,-).

In the case of Li-ion batteries, for which we will assume a binary electrolyte in which only the Li^+ ions react, R_V,- = 0, so it is particularly convenient to simulate the anion species conservation equation.

The boundary conditions for a porous electrode simulation relate the fluxes of the simulated species and current at the anode and cathode current collectors and depend on the simulation. At the current collectors of porous electrodes, 𝐧·𝐅_ℓ,i = 0 and 𝐧·𝐢_ℓ = 0. However, if a foil (e.g. Li metal electrode) is used for one electrode, the boundary condition at that side is replaced by 𝐧·𝐢_ℓ = i_cell, where i_cell is the macroscopic current density of the cell, Eq. <ref>.
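To make the volume-averaged balance concrete, here is a schematic 1D finite-volume form of the anion conservation equation with the zero-flux boundary conditions just described. This is a sketch only: the helper name, the uniform grid, and the time-independent porosity are our assumptions rather than MPET's actual internals, and the face fluxes would come from a constitutive model such as those developed below.

import numpy as np

def anion_rhs(c, F_face, dx, eps):
    """Semi-discrete finite-volume form of d(eps*c)/dt = -dF/dx + R_V
    for the anion (R_V,- = 0) on a uniform 1D grid of len(c) cells.
    F_face is a numpy array of the len(c)+1 face fluxes, per area of
    porous medium; n.F = 0 is imposed at both current collectors."""
    F = F_face.copy()
    F[0] = 0.0   # no anion flux at one current collector
    F[-1] = 0.0  # and at the other (porous-electrode case)
    # Conservative update of each cell average; eps constant in time.
    return -(F[1:] - F[:-1]) / (eps * dx)  # = dc/dt in each cell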
The species flux boundary conditions depend on which species is eliminated from the set of species conservation equations, but for the case of anion conservation in a Li-ion battery, 𝐧·𝐅_ℓ,- = 0 at all current collectors (neglecting side reactions).

The transport in the electrolyte can be simulated using either simple dilute Nernst-Planck equations or a concentrated solution model based on Stefan-Maxwell coupled fluxes. In both cases, we neglect convection and assume binary electrolytes. Both formulations relate a species flux, 𝐅_ℓ,i, to gradients in electrochemical potentials, μ_ℓ,i. The electrochemical potential of a given ion generally has both chemical and electrostatic contributions, and these can be separated in a number of ways. For example, it can be separated using an “inner” or Galvani potential <cit.>,

μ_ℓ,i = k_BT ln(a_ℓ,i) + μ_ℓ,i^Θ + z_i e ϕ_ℓ = k_BT ln(c̃_ℓ,i) + μ_ℓ,i^ex

where k_B is the Boltzmann constant, T is the absolute temperature, a_ℓ,i is the activity of species i, μ_ℓ,i^Θ is its reference chemical potential, and ϕ_ℓ is the (Galvani) electrostatic potential. The final expression serves as the definition of the excess chemical potential, μ_ℓ,i^ex, and c̃_ℓ,i is the concentration, c_ℓ,i, scaled to some suitable reference. The excess chemical potential contains all of the entropic and enthalpic contributions to the free energy of a concentrated solution beyond that of a dilute solution of a neutral species, such as short-ranged forces, long-ranged electrostatics, and excluded volume effects for finite-sized ions <cit.>. An alternative approach common in battery modeling <cit.> for separating these contributions is to define the electrostatic potential in the electrolyte as that measured with a suitable reference electrode at the position of interest in the electrolyte, with respect to another reference at a fixed position in the solution. Unlike Eq. <ref>, this has the advantage of being defined entirely by measurable quantities rather than the activities of individual ions in solution. For example, in a lithium-ion battery, this is typically done with a Li/Li^+ reference electrode. We will refer to this potential as ϕ_ℓ^r, and we note that it actually measures (a combination of) full electrochemical potentials of ions in solution. For the Li/Li^+ example,

eϕ_ℓ^r = μ_ℓ,Li^+ + const = k_BT ln(a_ℓ,Li^+) + eϕ_ℓ + μ_ℓ,Li^+^Θ + const,

where the constant is a property of the reference electrode.

Quasi-electrostatic potentials, ϕ_ℓ^q, can also be used in electrolyte models <cit.>; they are defined by the excess chemical potential of a particular ion in solution, n,

z_n e ϕ_ℓ^q = μ_ℓ,n^ex = z_n e ϕ_ℓ + k_BT ln(γ_ℓ,n) + μ_ℓ,n^Θ

where γ_ℓ,i = a_ℓ,i/c̃_ℓ,i is the activity coefficient of species i.

We also note here that the electric potential field is determined in electrolytes either (1) by an assumption of quasi-neutrality, i.e. ∑_i z_i c_ℓ,i = 0 everywhere, or (2) by solving the Poisson equation, ∇·(ε∇ϕ_ℓ) = -ρ_e, with permittivity ε, which could depend on concentration or electric field <cit.> or capture non-local ion-ion correlations as a differential operator <cit.>. Throughout this work, we assume quasi-neutrality but make a few comments about the alternative here. Use of the Poisson equation enables physical boundary conditions on the electric potential, such as specified surface charge densities at interfaces, and the resulting electric potential is the potential of mean force acting on a test charge in the solution.
It captures double layers of diffuse charge at interfaces, with thickness characterized by the Debye length, outside of which the net charge approaches zero. The assumption of quasi-neutrality leads to a different potential field which cannot capture the effects of electric double layers at interfaces. Of note, the form of the chemical potential which most easily accommodates models with both the quasi-neutrality assumption and the Poisson equation is that based on the Galvani potential. Of course, this can lead to further confusion, because the potential field obtained by assuming quasi-neutrality need not satisfy the Poisson equation with zero charge density and generally will not; rather, quasi-neutrality in this case is the result of the “outer” solution to a singular perturbation in which the Debye length approaches zero. Thus, we could further distinguish between Galvani potentials obtained via each of those methods, but we refrain from complicating the notation here, as the quasi-neutral Galvani potential closely resembles the Poisson-satisfying Galvani potential in the large majority of systems with dimensions much larger than the Debye length.

(Semi-)Dilute Electrolyte

In the simpler model, we neglect couplings between species fluxes, such that the flux of a given ionic species is a function only of gradients in its own electrochemical potential,

𝐅_ℓ,i = -M_ℓ,i c_ℓ,i ∇μ_ℓ,i

where M_ℓ,i is the species' mobility. Using the Galvani potential form in Eq. <ref>,

𝐅_ℓ,i = -(D_ℓ,i/T̃)(c_ℓ,i ∇(T̃ ln(a_ℓ,i)) + z_i c_ℓ,i ∇ϕ̃_ℓ),

where we have used the Einstein relation between the diffusivity in free solution, D_ℓ,i, and the mobility, D_ℓ,i = M_ℓ,i k_BT, non-dimensionalized the potential by the thermal voltage at some reference temperature, ϕ̃_ℓ = eϕ_ℓ/k_BT_ref, and defined a non-dimensional temperature, T̃ = T/T_ref. With D_ℓ,chem,i = D_ℓ,i(1 + ∂ln γ_ℓ,i/∂ln c_ℓ,i) and uniform temperature,

𝐅_ℓ,i = -(D_ℓ,chem,i ∇c_ℓ,i + (D_ℓ,i/T̃) z_i c_ℓ,i ∇ϕ̃_ℓ).

In a porous medium we use effective transport properties, adjusted by the tortuosity of the electrolyte phase, τ. In addition, we define the flux per area of porous medium rather than per area of electrolyte, which requires a prefactor of the electrolyte volume fraction, ϵ:

𝐅_ℓ,i = -(ϵ/τ)(D_ℓ,chem,i ∇c_ℓ,i + (D_ℓ,i/T̃) z_i c_ℓ,i ∇ϕ̃_ℓ).

The tortuosity is often described as a function of the porosity, ϵ, commonly by employing the Bruggeman relation, τ = ϵ^a <cit.>. The value of a is often set to -0.5 but can be adjusted <cit.> to account for experimentally <cit.> or theoretically <cit.> observed departures from the original derivation. Eq. <ref> with Eqs. <ref>, <ref> and <ref> defines the (semi-)dilute electrolyte model. Although the electrolyte transport model developed in the following section is more reasonable for battery models, we present and retain the (semi-)dilute model here for a number of reasons. Retaining it facilitates comparisons between the models, and the dilute model is easier to extend to electrolytes with more components, even if doing so loses the information related to the extra transport parameters associated with Stefan-Maxwell transport theories. In addition, as mentioned above, it is straightforward to connect the Galvani potential used here to extensions using the Poisson equation to investigate behaviors at interfaces.
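To illustrate the resulting flux expression, the following standalone Python sketch evaluates the porous-medium (semi-)dilute flux above on a uniform 1D grid, assuming uniform temperature (T̃ = 1) and a Bruggeman tortuosity τ = ϵ^a; harmonic means are used for the face concentrations in the migration term, as in the numerical methods described later. This is not MPET's discretization, and the function name and arguments are hypothetical.

import numpy as np

def dilute_flux(c, phi, dx, D, D_chem, z, eps, a=-0.5, T_tilde=1.0):
    """Face fluxes F = -(eps/tau)*(D_chem*grad(c) + (D/T)*z*c*grad(phi)).

    c, phi are cell-centered arrays; phi is scaled by the thermal voltage."""
    tau = eps**a
    grad_c = np.diff(c) / dx
    grad_phi = np.diff(phi) / dx
    c_face = 2.0 * c[:-1] * c[1:] / (c[:-1] + c[1:])  # harmonic mean at faces
    return -(eps / tau) * (D_chem * grad_c + (D / T_tilde) * z * c_face * grad_phi)

# Example: a decaying salt profile with a small potential gradient
F = dilute_flux(c=np.linspace(1.0, 0.5, 11), phi=np.linspace(0.0, -1.0, 11),
                dx=0.1, D=1.0, D_chem=1.0, z=1, eps=0.4)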
Stefan-Maxwell Concentrated Electrolyte

The formulation above assumes that gradients in the electrochemical potential of species i lead to fluxes only of species i. However, the framework can be generalized by assuming that the flux of a given species is related to gradients in the electrochemical potentials of every species in the system,

𝐅_ℓ,i = -∑_j U_ij ∇μ_ℓ,j,

where U_ij are the direct (i = j) and indirect (i ≠ j) transport coefficients <cit.>. This formulation has been used to describe relatively concentrated electrolytes and is the most commonly used model in battery simulation <cit.>. Noting that not all electrochemical potentials are independent (from the Gibbs-Duhem relationship), the above can be reorganized into the more commonly written form in terms of species velocities, 𝐯_i = 𝐅_ℓ,i/c_ℓ,i <cit.>,

c_ℓ,i ∇μ_ℓ,i = ∑_j K_ij(𝐯_j - 𝐯_i) = k_BT ∑_j (c_ℓ,i c_ℓ,j/(c_T 𝒟_ij))(𝐯_j - 𝐯_i)

with c_T = ∑_i c_ℓ,i and K_ij = K_ji.

For a binary electrolyte in a porous medium with cations, anions, and solvent (denoted as species 0), assuming uniform temperature, assuming that the solvent concentration varies only negligibly with salt concentration, and again neglecting convection,

𝐅_ℓ,+ = -(ν_+ ϵ/τ) D_ℓ ∇c_ℓ + t_+^0 𝐢_ℓ/(z_+ e)
𝐅_ℓ,- = -(ν_- ϵ/τ) D_ℓ ∇c_ℓ + t_-^0 𝐢_ℓ/(z_- e)

where, defining γ_ℓ,±^ν = γ_ℓ,+^ν_+ γ_ℓ,-^ν_- with ν = ν_+ + ν_-,

D_ℓ = 𝒟 (c_T/c_ℓ,0)(1 + ∂ln γ_ℓ,±/∂ln c_ℓ),
𝒟 = 𝒟_0+ 𝒟_0- (z_+ - z_-)/(z_+ 𝒟_0+ - z_- 𝒟_0-),
t_+^0 = 1 - t_-^0 = z_+ 𝒟_0+/(z_+ 𝒟_0+ - z_- 𝒟_0-).

The current density is given by

𝐢_ℓ = -(ϵ σ_ℓ/(T̃ τ))(∇ϕ_ℓ^r + (ν k_BT/e)(s_+/(n ν_+) + t_+^0/(z_+ ν_+) - s_0 c_ℓ/(n c_ℓ,0))(1 + ∂ln γ_ℓ,±/∂ln c_ℓ)∇ln c_ℓ)

with

1/σ_ℓ = -(k_BT_ref/(c_T z_+ z_- e^2))(1/𝒟_+- + c_ℓ,0 t_-^0/(c_ℓ,+ 𝒟_0-)).

The values of s_i are specified by the choice of reference electrode with reaction

s_- M_-^z_- + s_+ M_+^z_+ + s_0 M_0 ⇌ n e^-.

For lithium-ion batteries, the typical choice for the reference electrode defining ϕ_ℓ^r is Li/Li^+, so s_+ = -1 and s_- = s_0 = 0. Thus, Eqs. <ref> and <ref> with Eqs. <ref> and <ref> define the electrolyte model when using the Stefan-Maxwell concentrated solution theory with a binary electrolyte.
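The binary-electrolyte property relations above reduce to a few lines of code. The sketch below (a hypothetical helper, not MPET's parameter handling) computes 𝒟 and t_+^0 from the species-solvent Stefan-Maxwell diffusivities; with the dilute-limit diffusivities quoted in the examples section it recovers 𝒟 ≈ 3.0×10^-10 m²/s (equal to D_ℓ in the dilute limit) and t_+^0 ≈ 0.38.

def binary_transport(D0_plus, D0_minus, z_plus, z_minus):
    """Ambipolar diffusivity and cation transference number for a binary salt."""
    denom = z_plus * D0_plus - z_minus * D0_minus
    D_amb = D0_plus * D0_minus * (z_plus - z_minus) / denom
    t_plus0 = z_plus * D0_plus / denom
    return D_amb, t_plus0

D_amb, t_plus0 = binary_transport(2.42e-10, 3.95e-10, z_plus=1, z_minus=-1)
# D_amb ~ 3.0e-10 m^2/s, t_plus0 ~ 0.38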
§.§.§ Solid phase electronic model

The solid phase of a porous electrode is composed of several length scales and percolating phases. The active material stores the reduced species (e.g. lithium), conductive additive improves the electronic wiring, and binder is added to keep all the components connected. Lithium transport occurs within individual particles, and electrons must reach the surfaces of those particles by traveling over the length of the electrode. The equations here approximately capture the variation in electric potential in the matrix of conductive material, which may be assumed to be identical to that of (well connected) active material; otherwise, additional relations can be added to describe losses between the conductive matrix and the active materials. Over the length scale of the electrode, we describe conservation of charge as in the electrolyte,

0 = -∇·𝐢_s - ∑_i z_i e R_V,i

or

0 = -∇·𝐢_s - (z_+ e R_V,+ + z_- e R_V,-)

for a binary electrolyte in which cations and/or anions may undergo electrochemical reactions, and where 𝐢_s is the current density in the solid phase. The sign difference compared to Eq. <ref> comes from the observation that charge entering the liquid phase must be leaving the solid phase. The current density in the solid phase is given by assuming a bulk Ohm's law,

𝐢_s = -((1-ϵ)/τ) σ_s ∇ϕ_s

where σ_s is the conductivity of the electronically conductive matrix, ϕ_s is its electrostatic potential, and we have assumed that the volume fraction of the conductive phase is given by the space not occupied by the electrolyte. The boundary conditions for this come from observing that no electronic current can flow into the separator, 𝐧·𝐢_s = 0, and that the potential at the current collector is specified by the operating voltage on the system in the macroscopic equations, ϕ_c or ϕ_a (c stands for cathode and a for anode). The Bruggeman relation can again be used to estimate the tortuosity in terms of the volume fraction of the conductive matrix.

To avoid the computational and practical difficulties of simulating full microstructures, we describe the behavior of particle interactions with a small number of representative particles, both along the length of the electrode and also in parallel with each other in terms of electrolyte access, to capture the effects of particle size distributions. The particles at the same electrode position (interacting in common with the local electrolyte) can be wired electronically in parallel or in series. Parallel wiring describes each particle having direct access, via a single resistance, to a conductive network. Series wiring might describe a comb structure in which some particles (perhaps at the edge of a secondary particle) are connected to the conductive backbone, while electrons must pass through poorly conducting particles to reach particles without good contact to the backbone, similar in concept to the hierarchical model of Dargaville and Farrell <cit.>. We demonstrate use of the parallel case with a distribution of contact resistances in ref. <cit.>. In the second case of series wiring, we implement a simplified version of the approach developed by Stephenson et al. <cit.> by imposing a finite conductance between particles in series, indexed by k, within a simulation volume, j,

G_j,k(ϕ_j,k - ϕ_j,k+1) = I_j,k

where G_j,k is the conductance and I_j,k is the current between particles k and k+1. From charge conservation,

I_j,k - I_j,k+1 = ∫_S_k+1 j_j,k+1 dA,

where j_j,k+1 is the intercalation rate into particle k+1.

§.§ Single Particle Equations

A single particle interacts with the electrolyte via an electrochemical reaction, leading to intercalation of neutral species into the solid phase. However the electrochemistry is modeled (see Section <ref>), the reaction serves as a source/removal of species into/from the particle. We will describe several different solid models here. Generally, we begin by postulating a free energy functional describing the important physics of the particle,

G = ∫_V_p g dV + ∫_A_S γ_S dA

where G is the total system free energy, V_p is the particle volume, g is the free energy density, A_S is the particle surface area, and γ_S is the surface energy. Typically, we will separate the free energy density into homogeneous, g_h, and non-homogeneous, g_nh, contributions,

g = g_h + g_nh + …

where the remaining terms could describe the stress state of the system <cit.> or other energetic contributions <cit.>. Following van der Waals <cit.> and Cahn and Hilliard <cit.>, we use a simple gradient penalty term to describe the non-homogeneous free energy,

g_nh = (1/2)(1/c_s,ref^2) ∇c_i · κ ∇c_i

where κ is a gradient penalty tensor (assumed to be isotropic here, such that κ = κ𝟏, with 𝟏 the second-order identity tensor) related to the interfacial energy between phases, c_i is the concentration of species i within a single particle, and c_s,ref is a suitable concentration scale for the insertion species in the active material. The diffusional chemical potential can then be obtained from a variational derivative of the free energy,

μ_i = δG/δc_i = ∂g/∂c_i - ∇·(∂g/∂∇c_i).
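For a 1D concentration profile, the variational derivative above reduces to the homogeneous term minus the gradient-penalty Laplacian. The sketch below (illustrative and non-dimensional; zero-gradient boundaries stand in for the natural boundary condition discussed later, and all names are hypothetical) evaluates μ on a uniform grid given a callable for ∂g_h/∂c.

import numpy as np

def diffusional_mu(c, dx, dgh_dc, kappa):
    """mu = dg_h/dc - kappa * laplacian(c), all quantities non-dimensional."""
    c_pad = np.pad(c, 1, mode="edge")  # zero-gradient (symmetry) boundaries
    lap = (c_pad[2:] - 2.0 * c_pad[1:-1] + c_pad[:-2]) / dx**2
    return dgh_dc(c) - kappa * lap

# Example with an ideal-solution homogeneous free energy, g_h' = ln(c/(1-c)):
c = np.linspace(0.2, 0.8, 50)
mu = diffusional_mu(c, dx=0.02, dgh_dc=lambda c: np.log(c / (1.0 - c)), kappa=1e-3)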
§.§.§ Electrochemical Reactions The electrochemical reaction can be described by a number of different models, such as the empirical Butler-Volmer equation <cit.> or quantum-mechanical models based on Marcus kinetics <cit.>, which must be consistently generalized for concentrated solutions in nonequilibrium thermodynamics <cit.>. We describe here electrochemical reactions of the formS_1 = ∑_i s_i,OO_i^z_i,O + ne^- →∑_j s_j,RR_j^z_j,R = S_2,and we will describe the reactions as a function of the activation overpotential,neη = μ_R - (μ_O + nμ_e) = Δμ_rxn = Δ G_rxn,where μ_R = ∑_js_j,Rμ_j is the electrochemical potential of the reduced state, μ_O = ∑_i s_i,Oμ_i is the electrochemical potential of the oxidized state, and μ_e is the electrochemical potential of the electrons, which we relate here to the electric potential measured in the conductive matrix, μ_e = -eϕ_s. Δμ_rxn and Δ G_rxn indicate the total free energy change of the reaction. In the case of Li^+ insertion and the Stefan-Maxwell concentrated electrolyte model using Li/Li^+ as a reference electrode, μ_O = eϕ_ℓ^r. We adopt the convention here that a positive electrochemical reaction current corresponds to the net rate of reduction. The net reduction current, i, can be related to the intercalation flux, j_i, for a given species by the reaction stoichiometry. For example, for lithium intercalation, j_i = i/e. Extension to multiple reactions simply involves describing each j_i as a sum over the relevant reactions.It is worth noting that the reaction models currently implemented and discussed below follow the trend in battery modeling to neglect the impact of diffuse charge within double layers on the reaction kinetics. Accounting for this involves using a model of the double layer to adjust the surface concentrations from those outside the double layer to those at the distance of closest approach to the electrode surface (the Stern layer), which actually drive the reaction, as well as accounting for the local electric field driving electron transfer. These changes constitute the Frumkin correction <cit.> to the reaction rate model, and have been recently reviewed <cit.>. Frumkin-corrected Butler-Volmer reaction models have been applied to various electrochemical techniques including steady constant current <cit.>, voltage steps <cit.>, current steps <cit.>, and linear sweep voltammetry <cit.>, as well as nano <cit.> and porous <cit.> electrodes, with clear indication of departure from models neglecting double layers, especially at low salt concentrations with thick double layers (“Gouy-Chapman limit” <cit.>). To be used consistently with the models developed here, Frumkin reaction kinetics would need to be extended to concentrated electrolyte solutions, including models of individual ionic activities within the double layers, although Frumkin effects are reduced for very thin double layers at high salt concentration (“Helmholtz limit” <cit.>). On the other hand, Frumkin effects that dominate in dilute solutions could be important for practical battery operation at high rates, where severe electrolyte depletion can occur and limit the achievable power density. 
Butler-Volmer kinetics

Butler-Volmer reaction kinetics are described by an exponential dependence on the activation overpotential, and the net reduction current can be written as

i = i_0(exp(-α e η_eff/k_BT) - exp((1-α) e η_eff/k_BT))

where α is a symmetry coefficient and i_0 is the exchange current density, the rate of reaction in the forward and reverse directions when the reaction is in equilibrium. Depending on the system considered, the exchange current density could be modeled as constant <cit.> or as a function of species concentrations or activities. Introduced in ref. <cit.>, the effective overpotential, η_eff, accounts for any film resistance, R_film, via

η_eff = η + i R_film,

which makes the current expression implicit in i when R_film > 0. Bazant and co-workers proposed a form for i_0 based on the reacting species' activities and a transition state activity coefficient, γ_‡ <cit.>, derived by assuming thermally activated transitions in an excess chemical potential energy surface, with an electric field across the reaction coordinate contributing to the transition state <cit.>,

i_0 = k_0 n e (a_O a_e^n)^(1-α) a_R^α / γ_‡

where k_0 is a rate constant, a_O = ∏_i a_i^s_i,O, a_e is the activity of the electrons (taken to be unity here), and a_R = ∏_j a_j^s_j,R. The transition state activity coefficient is a postulate about the structure and characteristics of the transition state. For example, for lithium intercalation into LiFePO_4, Bai et al. originally proposed γ_‡ = (1 - c/c_max)^-1, where c_max is the maximum concentration of lithium within the solid, to indicate that the transition state excludes one site. Of note, although the above expression is defined in terms of the activities of individual ions within the electrolyte, these quantities are difficult to measure directly. Because we have not implemented models to estimate ion activities, the software currently assumes a_Li^+ = c̃_ℓ for both electrolyte models.

It is also common to use an exchange current density based solely on species concentrations <cit.>,

i_0 = k_0 c̃_ℓ^(1-α) c̃_i^α (1 - ξ_i c̃_i)^α,

where c̃_ℓ = c_ℓ/c_ℓ,ref, c̃_i = c_i/c_s,ref, and ξ_i = c_s,ref/c_i,max, and the ref subscripts indicate suitable concentration scales. For the case of lithium insertion, we will choose c_s,ref = c_i,max, so ξ_i = 1.
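A minimal standalone sketch of this rate expression follows (not MPET's reaction class). It assumes the concentration-based exchange current density above with ξ_i = 1 and zero film resistance, so η_eff = η and the current is explicit; with R_film > 0 the equation would need to be solved implicitly for i. Overpotentials are in units of the thermal voltage.

import numpy as np

def butler_volmer(eta, c_lyte, c_solid, k0=1.0, alpha=0.5):
    """Net reduction current i = i0*(exp(-alpha*eta) - exp((1-alpha)*eta))."""
    i0 = k0 * c_lyte**(1.0 - alpha) * c_solid**alpha * (1.0 - c_solid)**alpha
    return i0 * (np.exp(-alpha * eta) - np.exp((1.0 - alpha) * eta))

# Net current vanishes at zero overpotential and is positive (net reduction)
# for negative overpotential:
assert abs(butler_volmer(0.0, 1.0, 0.5)) < 1e-12
assert butler_volmer(-1.0, 1.0, 0.5) > 0.0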
Marcus-Hush-Chidsey kinetics

Marcus-Hush electron transfer kinetics describe an electron transfer event in terms of a reaction coordinate corresponding to the collective rearrangement of species involved in the transition state <cit.>. The electron transfer event can occur when it is energetically equivalent for the electron to occupy the donor or the acceptor species, and the fluctuations which lead to this are related to dielectric rearrangement of the molecules and charges around the donor-acceptor pair. The Franck-Condon condition is satisfied because the electron transfer event is much faster than the rearrangements of the nearby molecules contributing to the energy of the reacting species. Following the overview of Fedorov and Kornyshev <cit.>, the reductive and oxidative currents at an electrode can be calculated as integrals over the energy levels of the electrons in the electron-conducting phase (the electrode),

i_red = e k^0 c_O ∫_-∞^∞ ρ(z) n_e(z) W_red dz
i_ox = e k^0 c_R ∫_-∞^∞ ρ(z)(1 - n_e(z)) W_ox dz

where k^0 is a rate prefactor, c_O = ∏_i c_i^s_i,O and c_R = ∏_j c_j^s_j,R, z represents the energy level of electrons in the electrode, ρ(z) is the density of states, and n_e(z) is the Fermi function. W_red/ox are the transition probabilities of the elementary reduction/oxidation electron transfer processes.

Marcus theory considers the collective motion and reorganization of molecules near the electron transfer event, causing higher or lower energy contributions to the initial or final states of the reaction event via the electrostatic interactions between nearby polarization and the electron donor/acceptor. By treating this collective motion as approximately parabolic around the lowest energy configurations, the following can be derived for the transition probabilities,

W_red = k_w exp(-w_O/k_BT) exp(-(λ + eη_f)^2/(4λ k_BT))
W_ox = k_w exp(-w_R/k_BT) exp(-(λ - eη_f)^2/(4λ k_BT)),

where k_w is a prefactor with explicit dependence on various factors, including the overlap integrals for the wave functions of the various reaction elements <cit.>. We assume reaction symmetry via the equality of the reductive and oxidative prefactors and treat k_w as a lumped constant here. w_R/O are energies describing the probabilities of the species arriving and orienting at the reaction site. λ is the reorganization energy, related to the force constants or the curvature of the reaction parabolas, which we take to be the same for forward and reverse reactions, i.e. symmetric Marcus theory; it is defined as the energy required to perturb the system to the stable configuration of the product without allowing electron transfer. The driving force is the reaction change in excess free energy, ΔG_rxn^ex,

e η_f = ΔG_rxn^ex = e η + k_BT ln(c̃_O/c̃_R)

where we have assumed single-electron transfer, as supported by Marcus theory for elementary reaction events <cit.>. The tildes indicate non-dimensional concentrations, c̃_O = ∏_i c̃_i^s_i,O and c̃_R = ∏_j c̃_j^s_j,R, with each concentration scaled to its reference (c_ℓ,ref for ions in the electrolyte and c_s,ref for intercalated species in the active material). Thus, η_f is also the departure of the electrode potential from the formal potential. Interestingly, for non-electrode reactions in solution, this theory predicts a maximum in reaction rate at driving forces given by ±λ, and experimental validation of this so-called “inverted” region of decreasing reaction rate with increasing driving force (i.e. negative differential reaction resistance <cit.>) paved the way for Marcus's Nobel Prize <cit.>.

Assuming w_R/O are independent of the electron energy level (not necessarily true <cit.>), neglecting variation in the density of states, modifying the driving force to account for the energy levels of electrons along the Fermi distribution, and lumping constants into k_M, we can arrive at the Marcus-Hush-Chidsey (MHC) electron transfer reaction model <cit.>,

i_red/ox = k_M c_O/R exp(-w_O/R/k_BT) ∫_-∞^∞ exp(-(z - λ ∓ eη_f)^2/(4λ k_BT)) dz/(1 + exp(z/k_BT)),

where z is related to the energy level of the electronic states in the metal, and the integration is over the Fermi distribution. Reduction/oxidation correspond to the top/bottom signs.

Curiously, Marcus-style kinetics were not used to describe electron transfer reactions in batteries until very recently <cit.>, possibly in part because of the complexity of the expressions involving improper integrals, which makes their computational evaluation cumbersome. But recently, various approaches have been developed to facilitate the evaluation of the MHC expression, including an asymmetric variant with different values of λ for the initial and final states, referred to as “asymmetric Marcus-Hush” (AMH) kinetics <cit.>. We will focus exclusively on the symmetric variant here.
As a result, MHC kinetics are now as easy to simulate as Butler-Volmer kinetics, so we include them in this simulation software to facilitate comparison in porous electrode modeling and data fitting. For reviews of (asymmetric) MHC kinetics and applications to experimental data, see refs. <cit.>.

We make one final assumption, following ref. <cit.>, before arriving at the form we will use. The w_R/O functions are related to the probabilities of the system arriving at the state described by the parabolic minima in the theory. As in the derivation of the Butler-Volmer expression by Bazant <cit.>, we postulate that this could capture effects of concentrated solutions beyond the simple probabilistic occupation related to the concentration prefactors above. In other words, they are related to the excess chemical potential of the state corresponding to the parabolic minima. Assuming a symmetric approach to the reacting state from the forward and reverse directions,

w_O = w_R = μ_M,‡^ex or exp(-w_R/O/k_BT) ∝ 1/γ_M,‡.

Thus, we arrive at our expressions for MHC kinetics,

i = i_M(c̃_O k_red - c̃_R k_ox)

with

k_red/ox = ∫_-∞^∞ exp(-(z - λ ∓ eη_f)^2/(4λ k_BT)) dz/(1 + exp(z/k_BT))

and

i_M = k_M/γ_M,‡.

Although the derivation of the Butler-Volmer equation <cit.> and the approach followed above both lead to some prefactor related to the excess chemical potential of the transition state, we suggest that the two terms do not capture the same physical phenomena and thus may differ in their functional forms. In this approach to the microscopic Marcus theory, there is some separation between the energetic contributions accounted for in the microscopic reorganization and the “approach” contributions lumped into γ_M,‡. The Butler-Volmer derivation does not make this distinction, suggesting that the associated contributions in the two theories need not be equal. Finally, following Zeng et al. <cit.>, we replace Eq. <ref> with

k_red/ox ≈ (√(πλ̃)/(1 + exp(±η̃_f))) erfc((λ̃ - √(1 + √(λ̃) + η̃_f^2))/(2√(λ̃)))

where tildes indicate scaling by the thermal energy or voltage, k_BT or k_BT/e, erfc(z) = 1 - erf(z) is the complementary error function, and reduction/oxidation corresponds to the top/bottom sign.
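Since this closed form involves only elementary functions and erfc, the MHC rates can be evaluated directly, as in the sketch below (a standalone illustration of the approximation of Zeng et al., with λ̃ and η̃_f scaled by the thermal energy/voltage; hypothetical names, not MPET's implementation).

import numpy as np
from scipy.special import erfc

def mhc_rates(eta_f, lmbda):
    """Approximate (k_red, k_ox); top/bottom sign = reduction/oxidation."""
    arg = (lmbda - np.sqrt(1.0 + np.sqrt(lmbda) + eta_f**2)) / (2.0 * np.sqrt(lmbda))
    pref = np.sqrt(np.pi * lmbda) * erfc(arg)
    return pref / (1.0 + np.exp(+eta_f)), pref / (1.0 + np.exp(-eta_f))

k_red, k_ox = mhc_rates(eta_f=-1.0, lmbda=10.0)
# eta_f < 0 favors reduction: k_red > k_ox, so i = i_M*(c_O*k_red - c_R*k_ox) > 0
# for equal scaled concentrations.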
§.§.§ Species Conservation

Conservation of the intercalant within the solid particles should be specialized to describe the particular physics of the material being studied; several options are described below. We will simulate the individual particles by describing neutral species transport within them and their mass exchange with the electrolyte via the electrochemical reactions. This assumes that electron mobility within the active materials is much larger than that of the inserted species <cit.>.

Homogeneous

When transport within the solid particles is fast, it can be computationally beneficial to approximate each particle by an average concentration, c̄_s <cit.>. Then, given a reacting surface area, A_p, and volume, V_p, the dynamics of intercalant i can be described simply by the average intercalation rate from the electrochemical reaction, j_p,i,

∂c̄_i/∂t = (A_p/V_p) j_p,i.

Allen-Cahn Reaction

For Allen-Cahn reaction particles, the reaction occurs as a volumetric source term. This arises when particles are assumed to be either homogeneous or depth-averaged in the direction normal to the reacting surface, as Bazant and co-workers have employed in models of LiFePO_4 <cit.>. Then, neglecting transport, the local rate of change of the concentration at particle location 𝐫 is

∂c_i/∂t = (A_p/V_p) j_p,i(𝐫).

Cahn-Hilliard Reaction

In this model, transport of the intercalant within the particles is described by species conservation,

∂c_i/∂t = -∇·𝐅_i.

The flux can be modeled using linear irreversible thermodynamics <cit.>, which postulates that fluxes arise from gradients in the diffusional chemical potential,

𝐅_i = -(D_i/T̃) c_i ∇μ̃_i,

where we have again used the Einstein relation. The diffusional chemical potential, μ̃_i, is scaled to the thermal energy at some reference temperature, k_BT_ref, and can be calculated from Eq. <ref>. Following Bazant <cit.>, D_i can be rewritten in terms of the tracer diffusivity in the dilute limit, D_0,i, the activity coefficient of species i, γ_i = a_i/c̃_i, and the activity coefficient of the diffusion transition state, γ_‡,i^d,

D_i = D_0,i γ_i/γ_‡,i^d.

Assuming simple diffusion on a lattice, in which the diffusion transition state has enthalpic contributions similar to those of the diffusing species in lattice sites but excludes two adjacent sites, γ_i/γ_‡,i^d = (1 - c_i/c_i,max), giving

𝐅_i = -(D_0,i/T̃) c_i (1 - c_i/c_i,max) ∇μ̃_i,

although other effects could be accounted for in γ_‡,i^d, including stresses in the transition state <cit.>. At the particle surface, the flux is given by the electrochemical reaction, 𝐧·𝐅_i = -j_p,i, where 𝐧 is a unit normal vector pointing from the active material to the electrolyte. The natural boundary condition <cit.> imposes a constraint on the concentration at the surface as a function of the surface energy, γ_S,

𝐧·(∂g/∂∇c_i) = 𝐧·κ∇c_i = ∂γ_S/∂c_i.

Solid Solution

If the free energy can be described as a function of only the concentration (disregarding the effect of gradients and other contributions), then Eq. <ref> can be rewritten as

𝐅_i = -(D_i/T̃) c_i (∂μ̃_i/∂c_i) ∇c_i = -(D_chem,i/T̃) ∇c_i

where D_chem,i = D_i c_i ∂μ̃_i/∂c_i. Here, it is sufficient to prescribe the flux at the surface, as in the Cahn-Hilliard reaction model, 𝐧·𝐅_i = -j_p,i.

§.§ Coupling Equations

The general approach we take to simulate the two coupled phases is to simulate, within the same physical and simulated space as the electrolyte, a representative sample of particles, which are duplicated to the appropriate filling fraction of solids within the electrode. Instead of simulating discrete particles, an alternative approach could involve directly simulating the distribution of particles at a given state using a population balance model <cit.>. This may improve accuracy at fixed computational cost, especially for simple particle models like the homogeneous approximation. However, for the more complicated particle models, which explore only a tiny fraction of their state space, the current approach may be more efficient.

For example, to simulate an electrode with a very narrow particle size distribution, we simulate one particle at each electrode position. For wide distributions, we use multiple particles sampled randomly from a given input distribution, and at each location multiple simulated particles interact with the same electrolyte (see the sketch below). The representative simulated particles at each electrode position are scaled to occupy the specified volume fraction of active material per electrode volume.
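As a concrete illustration of this sampling (a hypothetical helper, not MPET's setup code), the sketch below draws a small set of particle radii per simulated electrode volume from a log-normal distribution, converting the linear-scale mean and standard deviation to the parameters of the underlying normal distribution.

import numpy as np

def sample_particles(n_volumes, n_per_volume, mean_size, std_size, seed=0):
    """Log-normal radii with the given linear-scale mean and std deviation."""
    rng = np.random.default_rng(seed)
    var = std_size**2
    sigma2 = np.log(1.0 + var / mean_size**2)  # underlying normal variance
    mu = np.log(mean_size) - 0.5 * sigma2      # underlying normal mean
    return rng.lognormal(mu, np.sqrt(sigma2), size=(n_volumes, n_per_volume))

# e.g. 10 electrode volumes, 3 particles each, 1 um mean radius, 0.2 um spread
radii = sample_particles(10, 3, mean_size=1e-6, std_size=0.2e-6)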
We allow the simulated particle size distribution to vary as a function of position, as this allows us to sample a broader distribution within the full electrode, but it loses accuracy in situations with significant transport losses in the bulk electrolyte or electron-conducting phases. When setting up simulations, the primary consideration is to have a good representation of the full particle size distribution within the macroscopic length scales of interest, which may be the full electrode at low currents, or small regions with strong electrolyte depletion or electronic losses at high currents.

The two phases are coupled via the electrochemical reactions. The volumetric reaction rate of species i, R_V,i, is related to the electrochemical reactions and the flux of that species out of the solid particles. This can be computed either by integrating the reaction rate over the particle surfaces or by applying the divergence theorem to relate that integral to the particles' average rate of filling. For particle p at a given position in the electrode,

∂c̄_p,i/∂t = (1/V_p) ∫_A_p j_p,i dA.

Thus, the volumetric source term can be cast as a sum over all the particles at a given position,

R_V,i = -(1-ϵ) P_L ∑_p (V_p/V_u) ∂c̄_p,i/∂t

where P_L is the loading percent of active material in the solid phase, V_u = ∑_p V_p, the subscript p indicates properties of particle p, and the summation is over the particles at the location where R_V,i is evaluated.

§.§ Macroscopic Equations

The overall current density per electrode area, i_cell, is defined as the integral of the net charge consumed by the reactions in the electrodes per unit cross-sectional area,

i_cell = ∑_i ∫_L_a z_i e R_V,i dL = -∑_i ∫_L_c z_i e R_V,i dL

where L_a and L_c are the lengths of the anode and cathode. We also find it useful to define the current in terms of a C-rate, which is determined by the electrode with the smaller capacity. For the case of an electrode with a single intercalating species with maximum concentration c_max,k in electrode k, the capacity of the cell per area, Q_A, is given by the capacity per area of the limiting electrode,

Q_A = min_k∈{a,c} { e L_k (1-ϵ_k) P_L,k c_max,k }.

With that, we can define a C-rate current,

i_cell,C = (i_cell/Q_A) τ_hr

where τ_hr is a conversion factor to ensure units of hr^-1.

The overall utilization (state of charge) of electrode k for species i, u_k,i, is defined in terms of the average filling fraction of all the particles in the electrode,

u_k,i = (1/L_k) ∫_L_k ∑_p (V_p/V_u) ξ_i (c̄_p,i/c_s,ref) dL,

where k indicates either the anode or cathode and the summation is evaluated as a function of position, given the selection of particles in the simulated electrode at that position. From above, ξ_i = c_s,ref/c_i,max, so the product ξ_i c̄_p,i/c_s,ref is the average filling fraction of particle p.

The overall cell voltage, Δϕ_cell = ϕ_c - ϕ_a, is defined as the difference in the electric potential at the cathode and anode current collectors. The datum of the potential is arbitrary, and we set ϕ_c = 0 in the simulations. We account for series resistance by defining an applied voltage, Δϕ_appl, via

Δϕ_cell = Δϕ_appl - i_cell R_ser,

where R_ser is the area specific resistance of the cell.
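A short sketch of the capacity and C-rate definitions above follows (not MPET code; the numerical inputs are illustrative, with electrode dimensions loosely following the full-cell example later in the paper and the site densities purely hypothetical).

e = 1.602176634e-19  # elementary charge, C

def capacity_per_area(L, eps, P_L, c_max):
    """Q_A = e*L*(1-eps)*P_L*c_max for one electrode, in C/m^2 (c_max in 1/m^3)."""
    return e * L * (1.0 - eps) * P_L * c_max

Q_a = capacity_per_area(100e-6, 0.15, 0.9, 1.8e28)  # anode (hypothetical c_max)
Q_c = capacity_per_area(150e-6, 0.20, 0.7, 1.5e28)  # cathode (~25 M sites)
Q_A = min(Q_a, Q_c)   # the limiting electrode sets the cell capacity
i_1C = Q_A / 3600.0   # current density (A/m^2) for a 1C discharge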
§ NON-DIMENSIONAL EQUATIONS

All times in the simulation are scaled to a reference time scale, t_ref. We choose a representative diffusive time scale within the cell, t_ref = L_ℓ,ref^2/D_ℓ,ref, where L_ℓ,ref is a characteristic length within the cell (we will use the cathode length) and D_ℓ,ref is a suitably chosen scale for the electrolyte diffusion coefficient.

§.§ Electrode Scale Equations

§.§.§ Electrolyte Model

Defining a reference electrolyte concentration, c_ℓ,ref (e.g. c_ℓ,ref/N_A = 1 M, where N_A is the Avogadro constant),

∂(ϵc̃_ℓ)/∂t̃ = (1/ν_i)(-∇̃·𝐅̃_ℓ,i + R̃_V,i)
0 = -∇̃·𝐢̃_ℓ + ∑_i z_i R̃_V,i
𝐢̃_ℓ = ∑_i z_i 𝐅̃_ℓ,i

with ∇̃ = L_ℓ,ref∇ and tildes indicating non-dimensionalization by the following scales:

F_ℓ,ref = c_ℓ,ref L_ℓ,ref/t_ref, R_V,ref = c_ℓ,ref/t_ref, i_ℓ,ref = e F_ℓ,ref.

The (semi-)dilute fluxes are non-dimensionalized as

𝐅̃_ℓ,i = -(ϵ/τ)(D̃_ℓ,chem,i ∇̃c̃_ℓ,i + (D̃_ℓ,i/T̃) z_i c̃_ℓ,i ∇̃ϕ̃_ℓ)

with the diffusive scale being that chosen to define t_ref above. The binary concentrated solution theory can be non-dimensionalized as

𝐅̃_ℓ,+ = -(ν_+ ϵ/τ) D̃_ℓ ∇̃c̃_ℓ + t_+^0 𝐢̃_ℓ/z_+
𝐅̃_ℓ,- = -(ν_- ϵ/τ) D̃_ℓ ∇̃c̃_ℓ + t_-^0 𝐢̃_ℓ/z_-
𝐢̃_ℓ = -(ϵ σ̃_ℓ/(T̃τ))(∇̃ϕ̃_ℓ^r + νT̃(s_+/(nν_+) + t_+^0/(z_+ν_+) - s_0 c̃_ℓ/(n c̃_ℓ,0))(1 + ∂lnγ_±/∂lnc̃_ℓ)∇̃lnc̃_ℓ)

with conductivity scale

σ_ref = e^2 c_ℓ,ref D_ℓ,ref/(k_BT_ref).

§.§.§ Solid phase electronic model

The solid phase current density and conductivity are scaled to i_ℓ,ref and σ_ref respectively, such that

0 = -∇̃·𝐢̃_s - (z_+ R̃_V,+ + z_- R̃_V,-)

and

𝐢̃_s = -((1-ϵ)/τ) σ̃_s ∇̃ϕ̃_s.

§.§ Single Particle Equations

In the solid particles, we will use the same time scale as in the electrode scale equations but different length and concentration scales,

F_s,ref = L_s,ref c_s,ref/t_ref

where L_s,ref and c_s,ref are scales relevant to the particle. We will use L_s,ref = L_p, where L_p is a characteristic length scale of the active material particle, which may vary by particle. For the case of lithium insertion electrodes with a single intercalating species with maximum concentration c_max, we choose the solid reference concentration to be the maximum filling, c_s,ref = c_max.

§.§.§ Electrochemical Reactions

The various forms of the overpotential are all scaled to the thermal voltage, k_BT_ref/e, and the rate prefactors and current density are both scaled to

i_s,ref = e F_s,ref.

The film resistance is scaled to R_film,ref = k_BT_ref/(e i_s,ref), so

η̃_eff = η̃ + ĩR̃_film

and the Butler-Volmer expression is

ĩ = ĩ_0(exp(-αη̃_eff/T̃) - exp((1-α)η̃_eff/T̃))

while the MHC expression is

ĩ = ĩ_M(c̃_O k̃_red - c̃_R k̃_ox).

§.§.§ Species Conservation

We scale the rate of intercalation of each species in particle p, j_p,i, to the active material reference flux, F_s,ref, such that, for homogeneous particles,

∂c̃_i/∂t̃ = δ_p,L j̃_p,i

where c̃_i = c_i/c_s,ref and

δ_p,L = A_p L_p/V_p.

For Allen-Cahn reaction particles,

∂c̃_i/∂t̃ = δ_p,L j̃_p,i(𝐫̃)

where 𝐫̃ is non-dimensionalized by L_p. For Cahn-Hilliard reaction particles,

∂c̃_i/∂t̃ = -∇̃·𝐅̃_i.

The non-dimensional flux is given by

𝐅̃_s,i = -(D̃_i/T̃) c̃_i ∇̃μ̃_i

where the diffusivity is scaled to

D_s,ref = L_s,ref^2/t_ref.

For the case of diffusion on a lattice with a simple excluded-site model for the transition state, Eq. <ref> becomes

𝐅̃_i = -(D̃_0,i/T̃) c̃_i (1 - ξ_i c̃_i) ∇̃μ̃_i.

The natural boundary condition is scaled using

κ_ref = k_BT_ref L_s,ref^2 c_s,ref, γ_S,ref = κ_ref/L_s,ref

such that

𝐧·∇̃c̃_i = (1/κ̃) ∂γ̃_S/∂c̃_s

at the particle surface, and the flux boundary condition is simply

𝐧·𝐅̃_i = -j̃_p,i.

For solid solution particles,

𝐅̃_i = -(D̃_s,chem,i/T̃) ∇̃c̃_s,i

with the same flux boundary condition as the Cahn-Hilliard reaction particles.

§.§ Coupling Equations

Here, we retain the same scales as above, such that

∂c̃_p,i/∂t̃ = δ_p,L ∫_Ã_p j̃_p,i dÃ

and

R̃_V,i = -β(1-ϵ)P_L ∑_p (Ṽ_p/Ṽ_u) ∂c̃_p,i/∂t̃

with β = c_s,ref/c_ℓ,ref, where c̃_p,i is the scaled average concentration of particle p.
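For convenience, the sketch below collects the reference scales defined in this section into one hypothetical helper (illustrative only, not MPET's parameter handling).

kB = 1.380649e-23    # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C

def reference_scales(L_ref, D_ref, c_ref, T_ref=298.0):
    """Reference scales from a chosen length, diffusivity, and concentration."""
    t_ref = L_ref**2 / D_ref           # diffusive time scale
    F_ref = c_ref * L_ref / t_ref      # flux scale
    return {"t_ref": t_ref,
            "F_ref": F_ref,
            "i_ref": e * F_ref,                                # current density
            "sigma_ref": e**2 * c_ref * D_ref / (kB * T_ref)}  # conductivity

# e.g. a 50 um cathode, D_ref = 3e-10 m^2/s, 1 M reference concentration:
scales = reference_scales(50e-6, 3e-10, 6.022e26)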
§.§ Macroscopic Equations

Keeping the same scales as in the electrode model,

ĩ_cell = ∑_i ∫_L̃_a z_i R̃_V,i dL̃ = -∑_i ∫_L̃_c z_i R̃_V,i dL̃

and

ũ_k,i = (1/L̃_k) ∫_L̃_k ∑_p Ṽ_p ξ_i c̃_p,i dL̃,

where k indicates either the anode or cathode, L̃_k = L_k/L_ℓ,ref, and Ṽ_p = V_p/V_u. The series resistance can be scaled by the electrode model scales, R_ser,ref = k_BT_ref/(e i_ℓ,ref), and the potentials can all be scaled by the thermal voltage at the reference temperature, k_BT_ref/e, so

Δϕ̃_cell = Δϕ̃_appl - ĩ_cell R̃_ser.

§ MODEL IMPLEMENTATION

To solve the system of PDEs in the model, we take the general approach of discretizing each equation in space using some variant of the finite volume method to obtain a system of differential algebraic equations (DAEs), then stepping in time using a variable-order adaptive time stepper. We discretize in space using finite volume methods both for their robustness to steep gradients and for their mass conservation to within numerical accuracy <cit.>. We use the IDAS time stepper of the SUNDIALS integration suite <cit.> to solve the resulting DAEs with a backward differentiation formula approach. The model and simulation are defined within the DAE Tools software package <cit.>, which provides a modeling language environment within Python similar to that of more specialized modeling languages like gProms <cit.> or Modelica <cit.>. This allows use of the full depth of the general purpose Python environment while adding helpful concepts from modeling languages, such as the logical separation of model definition from simulation setup and equation-oriented model definition.

For example, to define a model, the user writes the spatially discretized equations in a familiar form within a model class and defines simulation particulars such as parameter values and initial conditions elsewhere. The DAE Tools software handles the formation of all underlying system matrices and interactions with other numerical libraries involved in the actual time advancement. As an extra convenience, it wraps IDAS with the ADOL-C automatic differentiation library <cit.> to form the analytical-accuracy Jacobian matrix, greatly facilitating the solution of the non-linear systems of equations involved in the implicit time stepping. This eases additions and modifications to the model, which can be made without user input of analytical derivatives or reliance on numerical approximation of the Jacobian. This approach enables MPET developers to work entirely within a Python environment while using the fast and vetted lower-level numerical libraries for computationally expensive or analytically tedious aspects of the simulation.

§.§ Software Organization and Structure

To make MPET flexible enough to simulate arbitrary active material models in the context of either porous electrodes or single particles to study their dynamics directly, we chose to develop the software to logically and structurally isolate the material models from the overall cell model using an object oriented framework, see Figure <ref>. Objects containing local information about active material models exchange only the required information with the cell model, reducing assumptions built into each section of the software and leading to a more modular, extensible structure. Parameters are specified within input files using a standard config-file syntax for ease of scripting multiple simulations. These input files are split between information used by the overall system (e.g.
specified current profile and system dimensions) and information defining the properties of the simulated active materials (e.g. material thermodynamics, transport, and reaction properties). This enables a user to reference standard material property files while editing only a system input file related to cell design. These inputs are then non-dimensionalized for the simulation, as described in Section <ref>, and passed to the system model. Depending on the simulation to be carried out (e.g. perfect electrolyte bath, half-cell, full-cell, with or without particle size variability), the system model creates as many instances of the appropriate active material model as necessary for the representative active material particles and establishes communication between them so that only key information is exchanged. The active material particles get access only to local concentrations and potentials in the bulk phases near the particular active material particle, and the bulk surrounding phases only need access to the total integrated reaction rate of each particle. This isolation makes it relatively straightforward to extend MPET's capabilities by adding new material models or modifying existing ones to add relevant physics such as stresses and strains <cit.> or hierarchical structures <cit.>. For example, to add and simulate a new model, a user must define the non-dimensional equations as a model class, specify the initial conditions via the simulation class, and update the handling of parameters to read and non-dimensionalize any new inputs. Material models can easily use distinct discretizations and/or geometries, such as the homogeneous or CHR particles. We have also already included the two-repeating-layers model developed and applied to single graphite particles <cit.>. Other variations, such as 2D particles of arbitrary geometry discretized using the finite volume <cit.> or finite element method, would also be straightforward, especially with the DAE Tools interface to the deal.ii finite element library <cit.>. The structure also makes it straightforward to implement multiple particle types within each electrode region. Finally, it makes it easier to modify specific parts of any given model with confidence that side effects will be minimized through the isolation of the logical parts.

In order to facilitate good reproducibility of scientific computations, MPET defaults to storing each simulation output within time-stamped directories, along with both the input files and a snapshot of the source code which ran that simulation. Users can disable this feature to use their own system, or take advantage of it by keeping a log matching the time-stamped directories with notes about each simulation. Simulation outputs are stored by default in a binary format which is readable using common scientific computing software including Python (with SciPy), R, and MATLAB. MPET includes a script to perform some basic plotting of this output and also a script to convert the output to comma-separated value (CSV) text files, which can be nearly universally interpreted.

§.§ General Options

As discussed above, the software's structure makes it flexible enough for a range of possible simulation options, and the key options are described here. Either the current or the voltage can be imposed as piecewise-constant functions with arbitrary numbers of steps (simulated with fast but continuous steps, similar to the method proposed in ref. <cit.>).
With minor changes to the code, completely arbitrary functions are possible. Currents are imposed relative to a 1C (dis)charge based on the simulated battery's limiting electrode, and optional cut-off voltages terminate the simulation if the system voltage reaches them. Because it is sometimes convenient to continue an old simulation, this can be done by specifying the location of a stored output. Discretization is specified in terms of the number of finite volumes in each region of the electrode. If the anode is set to have zero volumes, the simulation uses a flat lithium metal counter-electrode with specified Butler-Volmer reaction kinetics. If only a single cathode volume is simulated, any simulated particles are placed within a perfect electrolyte bath without transport limitations, enabling single- or multi-particle studies that neglect electrolyte and bulk electrode losses. Within each discretized porous electrode volume, the chosen number of particles is simulated, drawn from a specified log-normal size distribution. Conductivity losses in the bulk electron-conducting phase or between individual particles within each volume of the simulated electrodes are optionally neglected or simulated as described in Section <ref>. Electrode lengths, electrolyte volume fractions, loading percents (volume fraction of active material within the electron-conducting phase), and Bruggeman exponents describe the geometric characteristics of effective transport within the porous electrode. The electrolyte can be specified either as a dilute model or using the full Stefan-Maxwell concentrated solution theory as described in Section <ref>. Arbitrary functions can be specified for the electrolyte transport properties by defining them as Python functions.

Active material particles are specified by the basic model for their transport processes (e.g. Fickian diffusion, homogeneous, Cahn-Hilliard reaction), the thermodynamic function describing them (also specified as arbitrary Python functions), and the remaining properties associated with the particular model. The specified transport model determines the type of material model (defining the discretization method and equations solved) used in the simulations. When applicable, the transport flux prefactor can be specified as an arbitrary function. Electrochemical reactions are defined as Butler-Volmer, Marcus-Hush-Chidsey, or (experimental) Marcus kinetics as formulated by Bazant <cit.>, and a reaction film resistance can be added to any of these. The Butler-Volmer exchange current density is taken as one of a few built-in options, as outlined in Section <ref>, but can be specified as an arbitrary function within the particle model source code. To facilitate the stability of some models, we provide the option to replace the most extreme regions of the logarithmic terms in the ideal-solution parts of thermodynamic models with linear extrapolations. Finally, to ensure that materials phase separate when they naturally would based on minor fluctuations, we include the option to add Langevin noise to the rate of change of concentration within each solid finite volume <cit.>.

§.§ Numerical Methods

For each of the methods detailed below, we will describe the discretization in non-dimensional form using the scales presented in Section <ref>. We will also drop subscripts indicating species and location (electrolyte or active material particle) in favor of discretization subscripts, for clarity.
§.§.§ Electrolyte

The electrolyte equations are discretized using a typical 1D finite volume scheme with cell centers <cit.>. They are non-dimensionalized using the same scales as the electrode scale equations, Section <ref>. For example, for species conservation in cell j with width Δx̃, and neglecting subscripts for electrolyte/species identifiers for clarity,

∂c̃_j/∂t̃ = (F̃_j-1/2 - F̃_j+1/2)/Δx̃

where the subscript j represents an average quantity within the discretized volume, and the F̃_j±1/2 represent the components of the flux normal to the faces on the right/left of cell j in the positive x direction, where x runs along the length of the cell between the current collectors. The flux is approximated using a two-point finite difference formula based on the adjacent cell centers. Where required (e.g. for electromigration terms), face values of field variables such as concentration are approximated using harmonic means, which greatly enhances stability in regions of strong electrolyte depletion. The harmonic mean also better represents variation in transport prefactors <cit.>.
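A minimal standalone version of this scheme (not the DAE Tools implementation; the names and the lumped source term are illustrative) computes the time derivative of each cell-averaged concentration from two-point fluxes with harmonic-mean face concentrations and no-flux boundaries:

import numpy as np

def dc_dt(c, phi, dx, D_chem, D, z, R_V=0.0):
    """d(c_j)/dt = (F_{j-1/2} - F_{j+1/2})/dx + R_V, non-dimensional."""
    c_face = 2.0 * c[:-1] * c[1:] / (c[:-1] + c[1:])  # harmonic mean at faces
    F = -(D_chem * np.diff(c) + D * z * c_face * np.diff(phi)) / dx
    F = np.concatenate(([0.0], F, [0.0]))             # no-flux at boundaries
    return -np.diff(F) / dx + R_V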
§.§.§ Active Material

The active material can be defined as spherical, cylindrical, or a rectangular-grid approximation of the platelet-like particles common in LiFePO_4 <cit.>. Particles on a rectangular grid are discretized like the electrolyte, and the ratio of reacting area to volume is calculated as a function of the length and thickness, h_p, of the particle, assuming reactions occur only on the top and bottom surfaces <cit.>. For spheres and cylinders, the ratio of particle volume to reacting area is specified fully by the particle radius. For cylinders, we use the particle thickness, h_p, for clarity in this section. The spherical and cylindrical particles are discretized using a variant of finite volumes taken directly from Zeng et al. <cit.>. They are non-dimensionalized using the same scales as in the single particle equations, Section <ref>. For the cylindrical particles, we follow the same method as Zeng and Bazant but modify it for the cylindrical geometry by changing the calculation of the volumes and areas of each sub-shell. That is, for both geometries, the system of equations can be represented as

𝐌𝐕∂𝐜̃/∂t̃ = 𝐛,

where 𝐜̃ is a vector of concentrations at positions going from the center of the particle to its surface with N sub-volumes, 𝐌 is the tridiagonal matrix

𝐌 = [ 3/4 1/8 0 0 ⋯ 0 0 0 0; 1/4 3/4 1/8 0 ⋯ 0 0 0 0; 0 1/8 3/4 1/8 ⋯ 0 0 0 0; ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋮; 0 0 0 0 ⋯ 1/8 3/4 1/8 0; 0 0 0 0 ⋯ 0 1/8 3/4 1/4; 0 0 0 0 ⋯ 0 0 1/8 3/4 ],

and 𝐕 is a diagonal matrix defined by the volumes of the shells scaled to L_p^3. Indexing the volumes with j = 1…N and using a uniformly spaced radial vector, r̃_j, with N points going from the particle center to its surface, for a sphere 𝐕 has components

Ṽ_jj^sphere = 4π(Δr̃^3/24), j = 1; 4π(r̃_j^2Δr̃ + Δr̃^3/12), j = 2…N-1; 4π(r̃_j^3/3 - (r̃_j - Δr̃/2)^3/3), j = N,

and for a cylinder,

Ṽ_jj^cylinder = πh̃_pΔr̃^2/4, j = 1; 2πh̃_pr̃_jΔr̃, j = 2…N-1; πh̃_p(r̃_jΔr̃ - Δr̃^2/4), j = N.

The right hand side, 𝐛, is defined in terms of the radial components of the fluxes evaluated at the interfaces between the shells, F̃_j±1/2, and the areas of those interfaces, Ã_j±1/2. The fluxes and areas are scaled to F_s,ref and L_p^2 respectively. For each geometry,

b_j = Ã_j-1/2F̃_j-1/2 - Ã_j+1/2F̃_j+1/2.

The shell areas differ for the two geometries; scaled to L_p^2,

Ã_1-1/2^cylinder = Ã_1-1/2^sphere = 0,
Ã_j+1/2^sphere = 4π(r̃_j + Δr̃/2)^2, j = 1…N-1; 4πr̃_j^2, j = N,
Ã_j+1/2^cylinder = 2πh̃_p(r̃_j + Δr̃/2), j = 1…N-1; 2πh̃_pr̃_j, j = N.

The flux is calculated using the two-point centered finite difference approximation with the concentrations in the adjacent cells. Unlike Zeng et al. <cit.>, we use harmonic means for face-value approximations of field variables, as in the electrolyte. This discretization scheme for the cylinders and spheres has the advantage of increased accuracy of simulated surface concentrations <cit.> while retaining mass conservation to within numerical precision. It is slightly less robust than a more typical cell-centered finite volume scheme in certain situations, as the coupling between adjacent volumes via the mass matrix can cause minor oscillations during the formation of very steep concentration gradients, but this typically only presents problems when beginning with very high currents from nearly full or empty particles. These oscillations could be eliminated by using a flux limiter, commonly used in higher-order discretization schemes and especially for hyperbolic problems <cit.>. This approach also has the advantage, much like typical finite volume schemes, of naturally facilitating the application of flux boundary conditions, as applied to the active material at the particle center (from symmetry) and surface (from reaction).

When applied to CHR particles, to evaluate the diffusional chemical potential at grid points we must calculate the Laplacian of the concentration, and we follow Zeng and Bazant <cit.>. For interior points, j = 2…N-1,

∇̃^2c̃_j^sphere = (2/r̃_j)∂c̃/∂r̃|_r̃_j + ∂^2c̃/∂r̃^2|_r̃_j ≈ (2/r̃_j)(c̃_j+1 - c̃_j-1)/(2Δr̃) + (c̃_j-1 - 2c̃_j + c̃_j+1)/Δr̃^2,
∇̃^2c̃_j^cylinder = (1/r̃_j)∂c̃/∂r̃|_r̃_j + ∂^2c̃/∂r̃^2|_r̃_j ≈ (1/r̃_j)(c̃_j+1 - c̃_j-1)/(2Δr̃) + (c̃_j-1 - 2c̃_j + c̃_j+1)/Δr̃^2.

For the center of the particle at j = 1, isotropy gives

∇̃^2c̃_1^sphere = 3 ∂^2c̃/∂r̃^2|_r̃_1 ≈ 3(2c̃_2 - 2c̃_1)/Δr̃^2,
∇̃^2c̃_1^cylinder = 2 ∂^2c̃/∂r̃^2|_r̃_1 ≈ 2(2c̃_2 - 2c̃_1)/Δr̃^2.

At the surface, we use a ghost point at j = N+1 and impose the concentration gradient within the Laplacian of the concentration,

𝐧·∇̃c̃_N = (1/κ̃)∂γ̃_S/∂c̃_s ≈ (c̃_N+1 - c̃_N-1)/(2Δr̃),

giving c̃_N+1 ≈ (2Δr̃/κ̃)∂γ̃_S/∂c̃_s + c̃_N-1, such that

∇̃^2c̃_N^sphere ≈ (2/r̃_N)(1/κ̃)∂γ̃_S/∂c̃_s + (2c̃_N-1 - 2c̃_N + (2Δr̃/κ̃)∂γ̃_S/∂c̃_s)/Δr̃^2,
∇̃^2c̃_N^cylinder ≈ (1/r̃_N)(1/κ̃)∂γ̃_S/∂c̃_s + (2c̃_N-1 - 2c̃_N + (2Δr̃/κ̃)∂γ̃_S/∂c̃_s)/Δr̃^2.

§.§ DAE Consistent Initial Conditions

Because the discretized equations are coupled DAEs, we must begin the system from consistent initial conditions <cit.>. Although various approaches for initializing this type of model have been developed <cit.>, we found it relatively robust to begin from a known equilibrium state and quickly ramp the applied current or voltage to the desired values. Although this contributes to the stiffness of the system, the higher-order, adaptive backward differentiation formula (BDF) time stepper we use mitigates the cost, obviating the need for more sophisticated methods.

To run simulations with specified current, we begin at a zero-current state, where we can easily calculate the equilibrium potentials in each of the phases (because we also begin with uniform concentration profiles). Similarly, for simulations with specified applied voltages, we begin from the calculated equilibrium applied voltage before ramping to the desired voltage.
In addition, to make the initialization more robust, we offset the diffusional chemical potentials in the solid phases throughout the simulation by their initial values, such that the initial equilibrium potentials everywhere in the simulation are zero. We found this to substantially facilitate initialization compared to leaving the diffusional chemical potentials at their physical values, which can leave differences in potentials across the system on the order of a hundred thermal volts (e.g. a 3 V material against a lithium metal or graphite anode sits at ∼100 thermal volts, k_BT/e, at room temperature). Because these potentials appear within exponentials of rate expressions, offsetting them to begin the simulation at zero substantially facilitates the numerical determination of consistent initial conditions and has no impact on the simulation output once the reverse offset is post-applied. Finally, when performing a simulation continuation, we take the final state of the previous output as the initial guess for the new initial state.

§ EXAMPLES

As a detailed examination of all the applications and model variations of the software is beyond the scope of the present work, we focus here on a few examples highlighting some of the key distinguishing capabilities of MPET.

§.§ Solid solutions and phase separation in porous electrodes

We begin with a comparison of solid solution and phase separating models within porous electrode simulations, which highlights the ability of the software to compare two different approaches to modeling the same material, variations of both of which are commonly used <cit.>. A complete comparison of the different models is beyond the scope of this work, so we present only a simplified example as a demonstration of the model behavior. To keep the system as simple as possible, we choose to investigate a fictitious material in which lithium intercalates and which has a diffusional chemical potential described by a regular solution,

μ_RS = k_BT ln(c̃/(1-c̃)) + Ω(1 - 2c̃) - (κ/c_max)∇^2c̃ + μ^Θ

where c̃ = c/c_max is the filling fraction, c_max is the maximum concentration of lithium in the active material, and μ^Θ is arbitrarily set to -2 eV relative to Li/Li^+. The regular solution parameter, Ω, is set to 3k_BT_ref, where T_ref = 298 K is the absolute temperature at which the simulation is carried out. We examine an electrode with particles of radius R = 1 μm, and we choose the interfacial gradient penalty κ = 1.16×10^-7 J/m, such that the phase interface width is several times the finite volume grid size of 20 nm. The maximum concentration in the active material, c_max = 25 M, is chosen arbitrarily.

To describe the thermodynamics of the solid solution material, we impose a diffusional chemical potential associated with the stable equilibrium path defined by Eq. <ref>: a piece-wise continuous function equal to μ_RS outside the miscibility gap and to μ^Θ within it (the red dashed curve in Figure <ref>). Note that this could differ from a particular equilibrium filling/emptying path, because a particle described by Eq. <ref> can enter metastable regions, leading to equilibrium hysteresis <cit.>, which a solid solution model cannot capture without direction-dependent thermodynamic models.
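For reference, the sketch below evaluates this regular-solution diffusional chemical potential on a 1D grid, in units of k_BT_ref and with the parameter values quoted above (illustrative only; μ^Θ is omitted here, consistent with the offset applied during initialization, and the function name is hypothetical).

import numpy as np

kB, T_ref = 1.380649e-23, 298.0  # J/K, K
N_A = 6.02214076e23

def mu_RS(c_fill, dx, Omega_kT=3.0, kappa=1.16e-7, c_max=25e3 * N_A):
    """mu/kT = ln(c/(1-c)) + Omega*(1-2c) - (kappa/c_max)*lap(c)/(kB*T_ref).

    c_fill: filling fraction on a grid of spacing dx (m); c_max in 1/m^3."""
    c_pad = np.pad(c_fill, 1, mode="edge")
    lap = (c_pad[2:] - 2.0 * c_pad[1:-1] + c_pad[:-2]) / dx**2
    mu = np.log(c_fill / (1.0 - c_fill)) + Omega_kT * (1.0 - 2.0 * c_fill)
    return mu - (kappa / c_max) * lap / (kB * T_ref)

mu = mu_RS(np.linspace(0.1, 0.9, 50), dx=20e-9)  # 20 nm grid as in the text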
In order to minimize conflating factors from the reaction kinetics, we model both phase separating and solid solution particles using the symmetric (α = 0.5) Butler-Volmer model with a constant exchange current density, i_0 = 1 A/m^2, leading to minimal reaction losses in either system. We consider the case in which intercalation occurs at the surface and phase separation occurs within the bulk, so the particles can be described by the Cahn-Hilliard reaction model <cit.>. The transport of solid solution material can be described using Fickian diffusion. For the solid solution, we use a constant D_chem = 1×10^-16 m^2/s, and for the phase separating case, we use Eq. <ref>, which approximately matches the concentration-independence of the chemical diffusivity in the solid solution regimes <cit.>. We adjust D_0 until we obtain a similar dependence of available battery capacity on current (Fig. <ref> (b)), where capacity is defined as the electrode filling fraction when the voltage reaches a 1.5 V cutoff. We find D_0 = 8×10^-16 m^2/s, reflecting the larger transport prefactor required to obtain similar fluxes in phase separating systems because of the smaller gradients that can arise in their end member phases compared to solid solution particles (Fig. <ref>).

We use a cell geometry with thin electrodes to minimize electrolyte limitations and assume no electron limitations in the porous electrode or reaction resistance on the lithium foil counter electrode. The porous separator and cathode are 15 μm and 20 μm with porosities of 0.8 and 0.2, respectively. The loading fraction in the cathode is taken to be 0.7 with the Bruggeman exponent a = -0.5. We model the electrolyte using concentrated solution theory as described in Section <ref>, using parameters as determined by Valøen and Reimers <cit.> but replacing the conductivity with that measured by Bernardi and Go <cit.>.

Despite very different models for the transport in the solid phase, it is interesting to note that some macroscopic characteristics of the modeled cell are quite similar. For example, a representative cell voltage polarization curve and the dependence of cell capacity on current show similar behavior (Figure <ref>), indicating that macroscopic cell data alone would not differentiate between these models. We find in a companion paper that this observation holds for similar models even in the case of non-negligible reaction resistances that are dominated by film resistance <cit.>. Of note, in the companion paper we used Eq. <ref> with a constant prefactor, i.e., D_0,i γ_i c_i / γ_‡,i^d = const, which we found here to be unable to produce a dependence of capacity on current similar to the solid solution with constant chemical diffusivity. Despite the similarity in Figure <ref>, the models give very different predictions of the behavior at the particle level, shown in Figure <ref>, which presents both a snapshot of a representative concentration profile for each simulation and a representative profile of the evolution of surface concentration in one of the particles. Because the surface concentration also affects other model behavior, including the reaction rate, we show in Figure <ref> that when both particles are simulated using the concentration-dependent exchange current density in Eq. <ref>, they diverge much more significantly in their predictions.
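As an aside, the capacity metric used above is straightforward to extract from simulation output; a sketch (array names are hypothetical, not MPET output fields):

```python
import numpy as np

def capacity_at_cutoff(filling_fraction, voltage, v_cutoff=1.5):
    """Return the electrode filling fraction when the cell voltage first
    drops below `v_cutoff`, i.e. the capacity definition used above.

    `filling_fraction` and `voltage` are time-ordered arrays from a
    constant-current discharge.
    """
    below = np.flatnonzero(voltage < v_cutoff)
    if below.size == 0:
        return filling_fraction[-1]  # cutoff never reached
    i = below[0]
    # linear interpolation between the two samples bracketing the cutoff
    f = (v_cutoff - voltage[i - 1]) / (voltage[i] - voltage[i - 1])
    return filling_fraction[i - 1] + f * (filling_fraction[i] - filling_fraction[i - 1])
```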
A consistent coupling to stresses and strains would further differentiate the models, as the concentration profile throughout the particle directly couples to the stress profile <cit.>.

§.§ Concentrated and dilute solutions with phase separating materials

Here, to focus on the electrolyte, we consider the same (phase separating) electrode materials in dilute and concentrated electrolyte solutions. Dargaville and Farrell first considered CHR and ACR phase field particles in porous electrodes using Stefan-Maxwell electrolytes <cit.>, and we briefly compare the dilute approach previously used by Bazant and co-workers <cit.> with the concentrated model here. We use the same phase separating material as that studied in Section <ref>, with constant exchange current density, but simulate it in a battery with a longer electrode of 200 μm. The concentrated electrolyte model is identical to that used in Section <ref>, and the dilute model is analogous but replaces the fit function of the electrolyte conductivity with the linear dependence predicted by dilute solution theory <cit.>, neglects the concentration dependence of the electrolyte transport coefficients, and uses D_ℓ,+ = 2.42×10^-10 m^2/s and D_ℓ,- = 3.95×10^-10 m^2/s, which corresponds to the concentrated solution model with a typical value of D_ℓ = 3×10^-10 m^2/s and t_+^0 = 0.38. In this situation, the electrolyte transport is limiting, so we expect substantial differences between the two models, which we see in Figure <ref>. Because the electrolyte transport is much more limiting than the solid phase transport, the results are qualitatively similar to those of Valøen and Reimers, despite their use of a solid solution active material model instead of the phase separating model here <cit.>. As in Figure <ref> (a), they found that the concentrated model leads to a more gradual decrease in voltage over the cell discharge but a larger overall capacity. This highlights the importance of using the more consistent concentrated solution theory approach in situations with substantial electrolyte polarization.

§.§ Full Cell LiFePO_4/C

To demonstrate the versatility of the software, we present here the result of a battery simulation with two porous electrodes, using a very different solid material model for each one. In the cathode, we use an Allen-Cahn reaction model of LiFePO_4 (LFP) which was developed by Bazant and co-workers <cit.> and used to explain in situ measurements of mosaic phase transformations in porous electrodes <cit.>. In the anode, we simulate graphite using a 2-layer Cahn-Hilliard reaction model <cit.> as used to capture the experimental intercalation of a single crystal of graphite <cit.>. Although we have found a variation of this model to better describe the behavior of porous graphite electrodes <cit.>, we use it here to highlight the ease of implementing and using very different active material models within the same simulation using MPET.

We use the same concentrated electrolyte model as in the previous two sections. The lengths of the anode, separator, and cathode are 100 μm, 20 μm, and 150 μm. The porosities are 0.15, 0.4, and 0.2, and the loading fractions of the anode and cathode are 0.9 and 0.7. Because the solid phase models are described in detail in the previous work, we only briefly introduce them here.
The cathode particles are ACR particles described by a regular solution with an effective stress term, designed to approximately capture the tilting of the single-particle voltage plateau resulting from stresses <cit.>,

μ_LFP = k_BT ln(c̃/(1-c̃)) + Ω(1-2c̃) + (B/c_max,LFP)(c̃ - c̄) - (κ_LFP/c_max,LFP)∇²c̃ + μ^Θ_LFP

where Ω = 4.51 k_BT, the stress coefficient B = 0.19 GPa, c_max,LFP = 23 M, c̄ is the average filling fraction in the particle, κ_LFP = 5×10^-10 J/m, and μ^Θ_LFP = -3.4 eV with respect to Li/Li^+. The anode particles are described by two-layer CHR transport with a two-parameter regular solution model including inter-layer repulsion terms to capture the staged structure found in intercalated graphite <cit.>. The flux within each layer is given by Eq. <ref>, and the diffusional chemical potential of each is given by

μ_G,i = k_BT ln(c_i/(1-c_i)) + Ω_a(1-2c_i) - (2κ_G/c_max,G)∇²c_i + Ω_b c_j + Ω_c(1-2c_i) c_j(1-c_j) + μ^Θ_G

where j ≠ i, i ∈ {1,2}, and c_i represents the filling fraction of layer i. The parameters for graphite come from refs. <cit.> and are Ω_a = 3.4 k_BT, Ω_b = 1.4 k_BT, Ω_c = 20 k_BT, c_max,G = 28.2 M, κ_G = 4×10^-7 J/m, and μ^Θ_G = -0.12 eV with respect to Li/Li^+. In both materials, the reactions are governed by the symmetric (α = 0.5) Butler-Volmer model, Eq. <ref>, using Eq. <ref> for the exchange current density. For LFP, we set k_0 = 0.16 A/m^2 and use γ_‡ = 1/(1-c̃) as proposed by Bai et al. <cit.>. For graphite, we set k_0 = 10 A/m^2 and use γ_‡,i = 1/(c_i(1-c_i)) as we initially used in Guo et al. <cit.> and reexamined recently to suggest alternate models for practical battery simulation <cit.>.

We present overall cell polarization curves for a selection of currents in Figure <ref>. The thick electrodes and low porosity make the cell experience strong electrolyte polarization at the moderate simulated currents, causing most of the losses in the cell. The cell capacity is limited by the cathode, evidenced by the graphite-caused shift in the C/10 voltage profile occurring at a cathode filling fraction larger than 0.5. In addition, we see erratic profiles at all simulated rates. At the lowest rates, this is primarily related to the mosaic transformation of the LFP particles along the length of the electrode, with a single particle at a time typically receiving almost all of the current in the electrode, as seen experimentally <cit.> and explained theoretically <cit.>. This behavior can be seen in Figure <ref> (b), in which the cathode particles show sharp filling processes related only to their position in the electrode, because they do not have the size variation used in Ferguson and Bazant <cit.>. In contrast, the anode particles behave more like the CHR particles studied in the sections above, with more gradual filling processes governed more weakly by their position in the electrode. At higher rates, the cathode still demonstrates a mosaic transformation, but the voltage curves smooth out because other losses dominate. The voltage spikes result from the form of the reaction resistance causing short sections of low resistance. This rate expression helped capture single particle experiments <cit.> but is unable to capture experiments on a full porous electrode, where we also used a simplified version of the graphite model <cit.>.

§.§ Electrochemical reactions

Battery models are typically constructed assuming Butler-Volmer reaction kinetics <cit.>, although recent evidence suggests Marcus kinetics may better describe some electrochemical reactions <cit.>.
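The two rate expressions compared in the remainder of this section can be sketched compactly. The MHC rate below uses a published uniformly valid closed-form approximation (in thermal units, with η and λ scaled by k_BT); we believe this matches the approximation cited in the text, but the exact form and sign conventions used by MPET should be taken from the code itself, so treat both functions as illustrative assumptions.

```python
import numpy as np
from scipy.special import erfc

def k_mhc(eta, lam):
    """One-direction MHC rate, closed-form approximation (dimensionless
    eta and lam, scaled by kB*T); an assumption based on the published
    uniformly valid formula."""
    a = 1.0 + np.sqrt(lam)
    return (np.sqrt(np.pi * lam) / (1.0 + np.exp(-eta))
            * erfc((lam - np.sqrt(a + eta**2)) / (2.0 * np.sqrt(lam))))

def i_net_mhc(eta, lam=18.0):
    """Net MHC current (anodic-positive convention), normalized so that
    the exchange value k_mhc(0, lam) scales out."""
    return (k_mhc(eta, lam) - k_mhc(-eta, lam)) / k_mhc(0.0, lam)

def i_net_bv(eta, alpha=0.5):
    """Symmetric Butler-Volmer, same normalization and sign convention."""
    return np.exp((1.0 - alpha) * eta) - np.exp(-alpha * eta)
```

Plotting log|i| against η for the two functions reproduces the qualitative behavior discussed below: a straight Tafel line for Butler-Volmer and downward curvature with eventual saturation for MHC.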
One possible reason for the use of the Butler-Volmer form is that the two models do not deviate until moderate to large reaction driving forces, depending on the reaction <cit.>. A second is that the expression for calculating Marcus-style kinetics at an electrode involves an improper integral <cit.>, which is cumbersome to evaluate, especially within porous electrode simulations in which reaction rates are evaluated many times over the course of a simulation. Nevertheless, this Marcus-Hush-Chidsey (MHC) reaction model has been approximated with a mathematically simple and accurate expression <cit.>, enabling its practical use in porous electrode battery models, which we demonstrate here. The primary effect of MHC kinetics is a downward curvature of the Tafel plot instead of the linear Tafel slope associated with Butler-Volmer reaction kinetics.

In order to accurately capture electrochemical data, the Butler-Volmer expression is commonly modified with a film resistance, as in Eq. <ref>, to capture a series resistance associated with any film at the surface. Curiously, this modified expression can be interpreted as introducing curvature to the Tafel plot that looks similar to MHC kinetics over a range of driving forces. For example, in Figure <ref>, we compare Butler-Volmer with and without film resistance (BV-film and BV, respectively) with the MHC expression, all using the same exchange current density and at fixed concentrations. In the figure, we use constant reaction rate prefactors, i_0 = i_M = 1 A/m^2, and R_film = 0.001 Ω m^2, a value similar to that used to match electrochemical data using porous electrode modeling <cit.>. The MHC kinetics use the same exchange current density and a reorganization energy, λ = 18 k_BT, between that approximated for LiFePO_4 <cit.> and values used in other interfacial reactions <cit.>. The limiting behaviors of the film resistance and MHC models differ substantially, as the film model approaches a linear resistor at high driving forces, whereas the MHC current saturates at a constant value. However, over a relatively broad range of driving forces, the two predict similar currents when neglecting concentration effects, which suggests that some variant of the MHC model may be able to capture some electrochemical data using a microscopically derived model. We note that the concentration dependence of the MHC model differs from that of the Butler-Volmer model with film resistance, so comparisons are not completely straightforward. Nevertheless, use of the MHC expression has the advantage that it makes clear connections to surface properties <cit.> which could be engineered, suggesting possible improvement paths for materials with slow kinetics.

To illustrate the application of MHC kinetics in a porous electrode, we simulate a current-pulse discharge of the idealized phase separating material studied in Section <ref> using both the Butler-Volmer and MHC models, as well as the Butler-Volmer model with a film resistance. As mentioned above, the concentration dependence of the MHC model is non-trivial, as the departure from the formal potential, η_f, is offset from the activation overpotential. In addition, the prefactor, i_M = k_M/γ_M,‡, may also have concentration dependence related to concentrated solution effects captured in γ_M,‡. In order to study the two models with the least impact from their different concentration dependences, we choose to modify i_M here to match the exchange current densities of the Butler-Volmer and MHC models.
This could be interpreted as choosing a particular model for γ_M,‡, but we refrain from assigning physical meaning to the choice used here and make the selection to enable the simplest comparison between the two models.

First, we plot the exchange current density of the MHC model using constant i_M in Figure <ref> (a) and note that its dependence on the concentration of the reduced species (intercalated lithium) is approximately a square root; the same holds for c_O. Thus, in order to approximately specify an arbitrary exchange current density for the MHC model, we can simply divide a chosen function by √(c_R c_O) and scale the magnitude accordingly. To connect to our previous work, we choose to compare with the (symmetric) Butler-Volmer form based on species activities, Eq. <ref> with Eq. <ref>. Thus, for the MHC model, we choose

i_M = k_M n (a_O a_e^n)^(1-α) a_R^α / (γ_‡ √(c_R c_O)),

with n = 1 and α = 0.5, which very nearly matches the concentration and activity dependence of the exchange current densities of the two models, leaving the primary difference in the dependence on the activation overpotential, η. We use γ_‡ = 1/(1-c_R), following previous work on LFP <cit.> and supported by recent experiments <cit.>. This gives an exchange current density with broken symmetry around half filling, as depicted in Figure <ref> (b).

We choose a reaction reorganization energy for the MHC model of λ = 18 k_BT, as in Figure <ref>. In order to minimize the effect of solid phase transport losses, we modify the phase separating particles from Section <ref> to have D_0 = 1×10^-14 m^2/s. We simulate a half-cell using the concentrated Stefan-Maxwell electrolyte model and a lithium foil counter-electrode with fast kinetics. Although fast lithium foil kinetics may be a poor assumption because of the small relative surface area, under constant current conditions it would only add a current-dependent, constant additional voltage drop to both simulated systems. The simulated cell has a separator of length 20 μm with porosity 0.8 and a thin cathode of length 25 μm with porosity 0.2 to minimize electrolyte transport losses and focus on the effect of the reaction kinetics. The cathode loading fraction, as above, is set to 0.7, and we simulate uniform particles of radius 1 μm with k_0 = 1 A/m^2 and k_M chosen such that the magnitudes of the exchange current densities match. We adjust the value of the film resistance from that used in Figure <ref> to R_film = 0.02 Ω m^2 to achieve a closer match between the MHC and Butler-Volmer-with-film models while using the concentration-dependent exchange current densities neglected in Figure <ref>. This value is higher than is commonly used and closer to that used in the introduction of the film resistance to battery modeling by Doyle et al. <cit.>.

We expose the cell to the current profile and corresponding state of charge profile in Figure <ref> (b), and the output voltage profiles are shown in Figure <ref> (a). Although the cell experiences moderate electrolyte polarization at the highest currents, those losses are similar for both models and do not explain the differences in the overall output voltage. The initial current pulse at 2 C brings all the electrode particles to a state of charge within the miscibility gap, so the equilibrium voltage is given by the value of -μ^Θ/e = 2 V, to which we see the cell relax after the pulse.
We can also see the overlap of the BV and MHC profiles in the initial region of this pulse, where the surface concentrations are changing most substantially, confirming the match of both the magnitude and concentration dependence of the two reaction models in this lower-overpotential range, where they should overlap. However, once the particles begin to phase separate at approximately 7.5 min, the surface concentrations rise sharply and the exchange current density decreases (Figure <ref> (b)). This causes increased reaction resistance in both models and an associated increase in required driving force, but the MHC model requires a larger increase in driving force, causing the initial departure of the models near 7.5 min. When the higher current pulses begin, we see further departure of the two reaction models, with the MHC model showing substantially higher reaction losses in Figure <ref> (a) than the Butler-Volmer model. The Butler-Volmer model with the film resistance leads to behavior between the two, predicting more resistance than the other models at low rates and intermediate resistance at higher rates, indicating that the model results depart enough to be clearly distinguished under these conditions. Still, the effects of the film resistance and MHC kinetics are qualitatively similar, suggesting that some of the system behavior which has been attributed to reaction film resistance may instead be caused by fundamental departures from Butler-Volmer reaction kinetics.

In the limit eη ≪ λ, the Butler-Volmer and MHC models predict identical results. The value of λ we used here is between a value calculated for LFP <cit.> and values commonly found in calculations or fits to data for other systems <cit.>. Using this intermediate value and an MHC prefactor adjusted to give a concentration dependence more similar to the Butler-Volmer expressions, we can see that the two models give noticeably different predictions of the simulated macroscopic system output. This further highlights the importance of continuing the study of the two models in practical battery models, and the MPET software can help facilitate this.

§ SUMMARY AND CONCLUSIONS

Volume averaged porous electrode simulations provide insightful and industrially relevant ways to characterize battery behavior. Despite shortcomings associated with information loss from the volume averaging process, the simplicity of the approach and the associated speed of model development, implementation, and simulation run time motivate its continued use. In this work, we have presented an open-source software package called MPET (Multiphase Porous Electrode Theory), which builds on foundations laid by John Newman and many others by describing the active materials with variational nonequilibrium thermodynamics <cit.> applied to porous electrodes <cit.>. Despite the prevalence of this modeling approach, few open source options are available for simulating the model, particularly ones that are easy to modify with new thermodynamic models based on the powerful phase field formalism <cit.>, adapted for electrochemical systems <cit.>. With MPET, we aimed to address this gap by providing a software platform implementing nonequilibrium thermodynamics of porous electrodes with open source code, written with a modular design to encourage use, modifications, and improvements.

Through a variety of examples, we have demonstrated some of the key features of the MPET software.
First, we compared solid solution and phase field approaches to modeling active materials and demonstrated that the models can give similar macroscopic outputs in some situations but deviate at the particle scale, leading to different predictions when the surface concentrations strongly affect reaction behavior. Second, we reexamined the comparison of Stefan-Maxwell concentrated solution theory and dilute solution models of electrolyte transport in the context of electrodes made of simple phase separating particles. Under strong electrolyte limitation, the difference in the model predictions is similar to that found in electrodes with solid solution particles. Third, to highlight the ability of the software to describe unique and distinct solid models, we implemented a full cell simulation using models of graphite and iron phosphate presented in previous works. Finally, we considered the effect of changing the reaction model from the typically used Butler-Volmer to Marcus-Hush-Chidsey (MHC) reaction kinetics, based on microscopic theories of electron transfer. We demonstrated that, for some reasonably typical parameter values, the MHC reaction kinetics can look similar to Butler-Volmer reactions with a film resistance and can lead to substantial differences from the Butler-Volmer model in predicted battery behavior at high rates.

Natural extensions of this work involve implementing some of the features that other software options have and which researchers have found helpful in explaining data or better describing real systems. For example, thermal effects can substantially affect cell behavior <cit.>, and exploration of their coupling with Marcus kinetics would be interesting. More complete thermal descriptions rely on temperature profiles over more than an individual cell layer <cit.>, so a spatially uniform but time-varying temperature would be a natural starting point. Addition of material models properly coupling the stresses to the concentration profile would also be an opportunity to study the effects of electro-mechanical models with phase separation <cit.> in porous electrodes. Other capabilities could also be added, such as simulating electrochemical impedance outputs, or any of the many other additions that have been made to the original model implemented by Doyle et al. <cit.>. For capacitive energy storage and desalination, the electrolyte model could also be extended to allow for diffuse charge in the electrode/electrolyte interface <cit.> or the double layers of charged porous separators <cit.>, which activates additional mechanisms for ion transport by surface conduction and electro-osmotic flow <cit.> that are neglected in traditional battery models.

In summary, MPET provides some new capabilities for battery simulation focused on recent developments in the modeling of active materials based on nonequilibrium thermodynamics. It can also serve as a starting point for other researchers beginning to work in the area to make their own modifications and investigations. By highlighting its capabilities, we have shown the value of a flexible simulation package to expand on existing porous electrode theory, and we have begun to examine the impact of those developments.

§ ACKNOWLEDGMENTS

The research was supported by the Samsung-MIT Program for Materials Design in Energy Applications, and in part by the D3BATT program of the Toyota Research Institute. We thank E. Khoo and K. Conforti for proofreading the manuscript.
http://arxiv.org/abs/1702.08432v1
{ "authors": [ "Raymond B. Smith", "Martin Z. Bazant" ], "categories": [ "physics.chem-ph" ], "primary_category": "physics.chem-ph", "published": "20170227185149", "title": "Multiphase Porous Electrode Theory" }
Department of Astronomy, Boston University, 725 Commonwealth Avenue, Boston, MA 02215, USA
1: Visiting Graduate Student, UC San Diego
ctheisse@bu.edu

We present the results of an investigation into the occurrence and properties (stellar age and mass trends) of low-mass field stars exhibiting extreme mid-infrared (MIR) excesses (L_IR / L_∗ ≳ 0.01). Stars for the analysis were initially selected from the Motion Verified Red Stars (MoVeRS) catalog of photometric stars with SDSS, 2MASS, and WISE photometry and significant proper motions. We identify stars exhibiting extreme MIR excesses, selected based on an empirical relationship for main sequence W1-W3 colors. For a small subset of the sample, we show, using spectroscopic tracers of stellar age (Hα and Li I) and luminosity class, that the parent sample is likely comprised of field dwarfs (≳ 1 Gyr). We also develop the Low-mass Kinematics (LoKi) galactic model to estimate the completeness of the extreme MIR excess sample. Using Galactic height as a proxy for stellar age, the completeness corrected analysis indicates a distinct age dependence for field stars exhibiting extreme MIR excesses. We also find a trend with stellar mass (using r-z color as a proxy). Our findings are consistent with the detected extreme MIR excesses originating from dust created in a short-lived collisional cascade (≲ 100,000 years) during a giant impact between two large planetesimals or terrestrial planets. These stars with extreme MIR excesses also provide support for planetary collisions being the dominant mechanism behind the observed Kepler dichotomy (the need for more than a single mode, typically two, to explain the variety of planetary system architectures Kepler has observed), rather than different formation mechanisms.

§ INTRODUCTION

The ability to study circumstellar environments around stars has greatly improved over the past decade, due in part to new technologies that provide higher sensitivity and greater resolution at infrared (IR) and radio wavelengths. Examples of facilities that have contributed to this advance include, but are not limited to, the Spitzer Space Telescope <cit.>, the Atacama Large Millimeter Array (ALMA), the Wide-field Infrared Survey Explorer <cit.>, and the Herschel Space Observatory <cit.>. In recent years, observations at these facilities have led to the discovery of stars exhibiting large amounts of excess mid-IR (MIR) flux (L_IR / L_∗ ≳ 10^-2), termed “extreme debris disks” <cit.> or “extreme IR excesses” <cit.>. Typically found around stars with ages between 10–130 Myr <cit.>, these systems are believed to have hosted collisions between terrestrial planets or large planetesimals <cit.>. The majority of stars exhibiting extreme MIR excesses have been found with ages coinciding with the final stages of terrestrial planet formation <cit.>. However, until recently, there was one known system that did not fall into this category: BD +20 307, a ∼1 Gyr old spectroscopic binary composed of two late F-type stars <cit.> exhibiting a significant MIR excess <cit.>. An in-depth study of the disk mineralogy for BD +20 307 found that the best explanation for the observed large MIR excess and low level of crystallinity was a giant impact between two large terrestrial bodies, similar to the Moon-forming event in our solar system <cit.>.
However, such collisions are expected to occur much earlier during planetary system formation (as stated above), and the lifetime of the observable collisional cascade is expected to be short <cit.>. It is also possible that the close binary nature of BD +20 307 may have played a role in this late-time collision. The potential for impacts between terrestrial bodies on timescales ≳ 1 Gyr is particularly important for low-mass stars (M_∗ ≲ 0.8 M_⊙), which are known to host multiple terrestrial planets <cit.>, all orbiting close to their host stars due to the proximity of the snow-line <cit.>. Low-mass stars make ideal laboratories for studying the occurrence of extreme MIR excesses and for investigating the hypothesis of planetary collisions as their origin. In addition to the observational evidence suggesting an abundance of close-in terrestrial planets surrounding them, low-mass stars are ubiquitous, making up more than 70% of the stellar population <cit.>.

Until recently, all of the aforementioned observed extreme MIR excesses had been found around solar-type (FGK-spectral type) stars. However, no explanation has been put forward to explain the dearth of low-mass stars exhibiting similar extreme MIR excesses. In particular, the relative frequency of low-mass stars to solar-type stars should make it more likely to find extreme MIR excesses around low-mass stars, barring any observational limitations. Simulations of planet formation around Sun-like stars indicate that impacts are quite common during the period of terrestrial planet formation <cit.>. <cit.> noted that highly energetic giant impacts (similar to the Moon-forming event) occur far more rarely than smaller collisions, but are a necessity to build a system analogous to our present-day solar system. One interesting finding by <cit.> is that by removing giant planets from their dynamical simulations, giant impacts can occur much later in the system's evolution (100 Myr to a few Gyr versus 10–100 Myr). This may have strong implications for planetary systems around low-mass stars, which do not typically form giant planets <cit.>. Efforts are currently underway to extend these models to low-mass stars; however, initial circumstellar disk conditions are not as well constrained observationally at the bottom of the main sequence.

A number of studies have undertaken searches for low-mass stars exhibiting signs of disks and/or (M)IR excesses <cit.>. <cit.> provided a theoretical framework for why primordial disks around low-mass stars could persist on longer timescales than those around higher-mass stars, in spite of most observational evidence suggesting primordial disks are dispersed around low-mass stars in less than 100 Myr. For a low-mass star (M0), the timescales for dust removal by Poynting-Robertson drag and grain-grain collisions are ∼10 times longer and 40% longer than for a higher-mass star (G0), respectively <cit.>. Primordial disks around low-mass stars have been observed to be longer-lived than those around higher-mass stars <cit.>, potentially due to the longer timescales for Poynting-Robertson drag to remove grains from these systems relative to higher-mass systems <cit.>. However, there is currently little to no observational data to support primordial disks around low-mass stars surviving past 10s of Myr, hinting that primordial disks around low-mass stars follow an evolution similar to that of primordial disks around solar-mass stars.
A search for low-mass stars exhibiting extreme MIR excesses was conducted by <cit.> (hereafter TW14). Their initial sample was pulled from the Sloan Digital Sky Survey <cit.> Data Release 7 <cit.> spectroscopic sample of M dwarfs <cit.> (hereafter W11). TW14 discovered 168 low-mass field stars exhibiting large amounts of excess MIR flux, and estimated a collision rate of ∼130 collisions per star over its main sequence lifetime. This rate is significantly higher than the rate estimated by <cit.> for A–G type stars (0.2 impacts per star). The TW14 result suggests that collisions may be more common around low-mass stars, possibly due to a longer timescale over which collisions can act, coupled with the extremely long main sequence lifetimes of low-mass stars <cit.>, and/or the higher density of planets with small semi-major axes. One limitation of the TW14 study was the use of the SDSS DR7 spectroscopic sample, which was not produced in a systematic way, making estimates of completeness difficult. To further investigate the mechanism responsible for creating these observed extreme MIR excesses, a larger sample must be gathered, and methods to estimate the completeness of the sample must be developed.

Although many large spectroscopic samples exist for low-mass stars, such as the SDSS spectroscopic M dwarf sample <cit.>, the Large Sky Area Multi-Object Fibre Spectroscopic Telescope <cit.> Data Release 1 <cit.> M dwarf catalog <cit.>, and the Palomar/Michigan State University (PMSU) Nearby Star Spectroscopic Survey <cit.>, these samples are dwarfed by the millions of photometric data products for low-mass stars that are currently available. Unfortunately, many photometric objects share similar colors and point-source-like morphologies with low-mass stars (e.g., giants, quasars, distant luminous galaxies). One way of distinguishing dwarf stars from other similarly colored objects is through the use of proper motions. Distant objects will show little to no tangential motion on the sky, while nearby stars will show significant, measurable motion with respect to background stars. The largest catalog of low-mass stars with proper motions to date is the Motion Verified Red Stars catalog <cit.>. MoVeRS was created using data from SDSS, the Two Micron All-Sky Survey <cit.>, and the Wide-field Infrared Survey Explorer <cit.>. The Late-Type Extension to MoVeRS was recently released with additional very-low-mass objects later than M5 <cit.>. The MoVeRS catalog enables the search for extreme MIR excesses in a larger capacity than was previously available.

This paper performs a thorough investigation of the mass-, spatial-, and age-dependence of extreme MIR excesses around low-mass field stars. In Section <ref>, we describe the sample from which the stars are drawn. Section <ref> briefly discusses the methods used in estimating stellar parameters (Section <ref>) and distances (Section <ref>), describes how we curate the sample of stars (Section <ref>), account for interstellar extinction (Section <ref>), distinguish extreme MIR excess candidates (Section <ref>), investigate the fidelity of the WISE measurements (Section <ref>), and obtain spectroscopic observations for youth (Section <ref>), and discusses the inherent biases in the sample (Section <ref>). Section <ref> provides details about the Galactic model, which we use to estimate the completeness of the sample, and discusses the completeness corrected results. In Section <ref> we investigate the non-significant MIR excess sample for trends with stellar age.
In Section <ref>, we summarize the conclusions and provide a discussion of our results. Details regarding the methods for estimating stellar parameters, including the Markov Chain Monte Carlo (MCMC) method for estimating T_eff and log g, and the methods for estimating stellar size are found in Appendix <ref>. Details for building and using the Low-mass Kinematics (LoKi) galactic model to estimate the level of completeness are discussed in Appendix <ref>.

§ DATA: THE MOVERS CATALOG

The occurrence rate for low-mass stars exhibiting extreme IR excesses was shown to be extremely low by TW14 (∼0.4%). To build a larger sample of candidate stars with extreme IR excesses, a massive input catalog of bona fide low-mass stars is required. Although photometric catalogs exist for large numbers of low-mass stars <cit.>, proper motions are a way to definitively separate dwarf stars from giants and extragalactic objects of similar photometric colors. <cit.> created the MoVeRS catalog, a photometric catalog of low-mass stars extracted from the SDSS, 2MASS, and WISE datasets, and selected based on their significant proper motions. The MoVeRS catalog contains 8,735,004 stars, 8,534,902 of which have cross-matches in the WISE AllWISE catalog. Along with proper motions computed in <cit.>, the current version of the MoVeRS catalog contains photometry from SDSS, 2MASS, and WISE, where available, for each star.

To build the MoVeRS catalog, <cit.> initially selected stars based on their SDSS, 2MASS, and WISE colors, tracing the stellar locus for stars with 16 < r < 22 and r-z ⩾ 0.5. Stars were then selected based on a number of quality flags and proximity to neighboring objects. Proper motions for the remaining objects were computed using astrometric information from SDSS, 2MASS, and WISE, which spans a ∼12-year time baseline. The precision of the catalog is estimated to be ∼10 mas yr^-1. Only stars with significant proper motions (μ_tot ⩾ 2σ_μ_tot) were included in the final catalog, increasing the likelihood that the catalog contains nearby stars as opposed to other astrophysical objects.

To illustrate the effectiveness of removing giants using proper motions, we consider a giant star at the edge of the photometric selection criteria used for MoVeRS (r = 16). A giant star would be approximately 1000 times more luminous than its dwarf counterpart, putting a giant approximately 30 times farther away than a dwarf for a given magnitude. The median photometric distance for stars in the MoVeRS sample is 200 pc, putting a giant star at 6 kpc. The minimum required proper motion within MoVeRS is approximately 20 mas yr^-1. For a giant at a distance of 6 kpc, this translates to a tangential velocity of 570 km s^-1. Gaia Data Release 1 <cit.> Figure 6 shows that red giants with such high tangential velocities (hypervelocity stars) are a negligible fraction of the entire population, and are likely to be unbound from the Galaxy. If we assume a similar proper motion distribution between giant stars and QSOs (both essentially non-moving on the sky) for motions measured with WISE+SDSS+2MASS, we can use Figure 3 from <cit.> to make a statistical estimate of the contamination rate of giants. The average time-baseline of 12 years translates to a combined proper motion uncertainty of 10 mas yr^-1 for a non-moving population. This gives a point-source with a proper motion of 20 mas yr^-1 a 4.5% chance of being a giant.
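The tangential velocity quoted above follows from the standard relation v_t = 4.74 μ d (μ in arcsec yr^-1, d in pc, v_t in km s^-1, where 4.74 converts AU yr^-1 to km s^-1); a quick check using the values from the text:

```python
mu = 20e-3       # proper motion floor, arcsec/yr (20 mas/yr)
d_giant = 6000   # pc: a giant ~30x farther than the 200 pc median dwarf
v_t = 4.74 * mu * d_giant
print(f"v_t = {v_t:.0f} km/s")  # ~569 km/s, the ~570 km/s quoted above
```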
Combining this probability with the relative fraction of all point sources that are giants (versus dwarfs) at the blue limit of the MoVeRS sample <cit.> gives a likelihood of less than 0.1% that a source with a proper motion of 20 mas yr^-1 is an interloping giant. The vast majority of MoVeRS stars have proper motions that exceed 20 mas yr^-1, making the likelihood of contamination by giants significantly smaller than this. More information about the construction and properties of the MoVeRS catalog can be found in <cit.>. The Late-Type Extension to MoVeRS was recently released and contains stars with spectral types later than M5 <cit.>.

Photometry from WISE, taken in four MIR bands (W1, W2, W3, and W4, with effective wavelengths at 3.4, 4.6, 12, and 22 μm, respectively), is particularly crucial for finding extreme MIR excesses around K and M dwarfs because dust orbiting within the snow-line, where terrestrial planets form, is warm (∼300 K), with its thermal emission peaking in the MIR. The W3 band also samples the 10 μm silicate feature prominent in the types of disks expected to produce these extreme MIR excesses. The sensitivity of WISE, particularly the W3 band <cit.>, allows these extreme MIR excesses to be detected at much higher precision than previous all-sky MIR observatories (e.g., the Infrared Astronomical Satellite and Akari).

§ METHODS

§.§ Estimating Stellar Parameters

An important step in identifying and quantifying the significance of a MIR excess is measuring the deviation between the expected photospheric MIR values and the measured photometric values, which requires an estimate of the fundamental stellar parameters (e.g., T_eff). Additionally, estimates for stellar temperature (T_eff) and size (R_∗) put constraints on dust temperature and orbital distance <cit.>. Photospheric models for low-mass stars are limited in their ability to replicate the myriad complex molecules found in low-mass stellar atmospheres due to the low temperature environments <cit.>. Furthermore, the onset of potential clouds forming in the coolest stars provides further complications for modeling <cit.>. However, these models are good at producing the overall expected spectral energy distributions (SEDs), and are effective for constraining many of the fundamental stellar parameters.

TW14 estimated stellar parameters using a grid of BT-Settl models based on the PHOENIX code <cit.>, which take into account many molecular opacities and cloud models. TW14 compared synthetic photometry and spectra from models to data from SDSS, 2MASS, and WISE to estimate goodness-of-fit. Due to the lack of spectra for the MoVeRS sample, we only considered synthetic photometry in deriving the goodness-of-fit. This process involved fitting synthetic photometry, derived using relative spectral response curves for SDSS <cit.>, 2MASS <cit.>, and WISE <cit.>, to actual measurements from each photometric survey. TW14 derived stellar parameters by computing reduced-χ² values over the entire range of models, a method which is computationally intractable for the large number of stars in the MoVeRS catalog. To reduce the parameter space, we employed a Markov chain Monte Carlo (MCMC) technique to sample and build posterior probability distributions for each of the stars, used to estimate best-fit parameters and uncertainties (using the 50^th percentile value, and the 16^th and 84^th percentile values, respectively). Details of the MCMC method are described in Appendix <ref>.
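For concreteness, the posterior summary described above amounts to taking percentiles of the flattened chains; a sketch (the mock chain is purely illustrative):

```python
import numpy as np

def summarize_posterior(samples):
    """Best-fit value and 1-sigma-like uncertainties from an MCMC chain:
    the 50th percentile as the estimate and the 16th/84th percentiles as
    the bounds.  `samples` is a flattened posterior chain for one
    parameter (e.g., T_eff)."""
    p16, p50, p84 = np.percentile(samples, [16, 50, 84])
    return p50, p50 - p16, p84 - p50  # value, -err, +err

# e.g., a mock T_eff chain
chain = np.random.default_rng(0).normal(3200.0, 60.0, size=20000)
teff, lo, hi = summarize_posterior(chain)
print(f"T_eff = {teff:.0f} -{lo:.0f}/+{hi:.0f} K")
```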
Using this process, we estimated T_eff and log g values for all 8.7 million sources in the MoVeRS catalog. We used the T_eff values to derive a color-T_eff relationship, also found in Appendix <ref>. With distance estimates, the scaling values derived from this fitting procedure were used to estimate stellar size (R_∗) and a radius-color relation (also found in Appendix <ref>). The new MoVeRS catalog (MoVeRS 2.0), with the estimated stellar parameters, is available through SDSS CasJobs[<http://skyserver.sdss.org/casjobs/>] and VizieR[<http://vizier.u-strasbg.fr/>].

§.§ Estimating Distances: Photometric Parallax

Distances to stars are important for estimating luminosities, radii, and many other stellar and kinematic parameters (see TW14 for details). For stars with resolved disks, distances can be used to convert angular sizes into absolute sizes. For unresolved disks, stellar sizes can give approximate orbital distances for circumstellar dust, and approximate dust masses. Few parallax measurements have been made for M dwarfs, relative to higher-mass stars, due to their intrinsic faintness. The two largest astrometry databases, the General Catalog of Trigonometric Stellar Parallaxes, Fourth Edition <cit.> and the Hipparcos catalog <cit.>, are both severely incomplete for M dwarfs and brown dwarfs <cit.>. Although large parallax databases are incomplete for low-mass stars, two nearby stellar samples now have many parallax measurements: the REsearch Consortium On Nearby Stars <cit.> and MEarth <cit.>. The RECONS sample includes parallaxes for over 1400 M dwarfs within 25 pc <cit.>, and the MEarth sample includes over 1500 M dwarfs within 33 pc <cit.>. There is very little overlap between the two samples, since the RECONS survey began operating in the southern hemisphere, while MEarth started as a survey in the northern hemisphere, only recently adding telescopes in the southern hemisphere <cit.>. Additionally, a few studies have measured trigonometric parallaxes for sub-stellar objects <cit.>, but these studies are limited by small numbers. Unfortunately, none of these trigonometric parallax surveys have data in SDSS passbands, which makes deriving a photometric relationship impossible without introducing additional errors from color transformations.

The most commonly used photometric parallax relationship for low-mass stars with SDSS colors comes from <cit.> (hereafter B10). These relationships are derived from 86 low-mass stars with trigonometric parallax measurements from various sources (B10). The average uncertainty in these relationships is ∼0.4 mag in absolute r-band magnitude (M_r), due in part to luminosity differences between stars of different metallicities <cit.> and magnetic activity <cit.>. This uncertainty in absolute magnitude corresponds to distance uncertainties of ∼20%. Efforts are underway to obtain SDSS magnitudes for many of the low-mass stars with parallax measurements in the samples listed above (C. Theissen et al., 2017, in preparation); however, to date, such measurements do not exist. We therefore chose to use the B10 r-z relationship to estimate distances for the entire MoVeRS sample. Using these distances, we also estimated stellar radii for the MoVeRS sample (see Appendix <ref>). The new MoVeRS 2.0 catalog also includes our distance estimates.

§.§ Sample Selection for Stars with MIR Excesses

To compile a clean set of stars for our analysis, we used a number of selection criteria, most of which have been adapted from TW14.
We applied the following selection criteria to the MoVeRS sample:

* We selected stars that did not have a WISE extended source flag (ext_flg = 0). This requirement ensured a point-source morphology through all WISE bands. This cut left 8,483,499 stars.
* We selected stars that did not have a contamination or confusion flag in W1, W2, or W3 (cc_flg = 0 in each of those bands). This ensured clean photometry for those bands. This cut left 7,899,559 stars.
* We selected stars with at least a signal-to-noise ratio (S/N) of 3 in W1, W2, and W3 (WxSNR ⩾ 3 for x = 1, 2, 3). This cut left 185,121 stars.
* We kept only the highest fidelity stars, retaining relatively bright stars satisfying Equation (12) of <cit.>. This cut ensures stars have high-precision proper motion measurements and fall within the regime confirmed with independent checks against other proper motion catalogs. This cut left 145,526 stars.
* Lastly, to minimize source confusion and reduce contamination due to dust extinction, we removed stars close to the Galactic plane (|b| < 20^∘) and in the Orion region (-30^∘ < b < 0^∘ and 190^∘ < l < 215^∘). This cut left 126,976 stars.

§.§.§ WISE Sensitivity Limits

To directly address one of the limitations of the TW14 study, we constructed a uniform sample of stars. We broadly categorized the stars into three groups: 1) stars which are close enough that WISE can significantly detect their photospheres at 12 μm; 2) stars that are far enough away that their photospheres are undetectable at 12 μm, but for which an extreme MIR excess (on the order of those found in TW14) is significantly detectable by WISE; and 3) stars which are too far away to be detectable by WISE, even if they have an extreme MIR excess. We were only interested in stars that have measurable detections in W3. Below, we discuss the methods for building the “full” sample, stars that meet criterion (2), and the “clean” sample, stars that meet criterion (1), which is a subset of the “full” sample. We first discuss selecting stars exhibiting excess MIR flux (Section <ref>), and will apply further criteria to select stars with extreme MIR excesses (L_IR / L_∗ ⩾ 0.01) in Section <ref>.

The W3 5σ point-source sensitivity limit is estimated to be 730 μJy (also the approximate 95% completeness limit; hereafter referred to as the W3 flux limit), based on external checks with Spitzer COSMOS data[<http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_3a.html>], which translates to an in-band flux of ∼1.89 × 10^-13 ergs s^-1 cm^-2. Using the sample of 126,976 stars, we computed the expected photospheric W3 flux for each star by scaling the best-fit stellar model to the measured z-band flux. This gave us a measure of the expected W3 flux from the stellar photosphere for each star. The map of expected W3 stellar flux for a given r-z color and r-band magnitude is shown in Figure <ref>. Figure <ref> shows that a constant expected W3 flux is approximately linear in this color-magnitude space. To quantify the relationship between r, r-z, and expected W3 flux, we started at r = 16 and binned each 0.1 mag along the r-band axis, and binned each slice in 0.1 mag r-z bins. We identified the r, r-z value where the expected W3 flux dropped below 1.89 × 10^-13 ergs s^-1 cm^-2 (the W3 flux limit). We repeated this process between 16 ⩽ r ⩽ 22, and then fit a line to the r, r-z values.
Our linear fit is shown as a red dashed line in Figure <ref>, and is given by

r = 13.40 + 1.38(r-z).

Every star brighter than this limit should fall within the W3 flux limit, regardless of whether the star has a 12 μm excess or not. This gives us a very uniform sample, free from a W3 sensitivity bias. Stars equal to or brighter than Equation (<ref>) will be referred to as the “clean” sample, which consists of 6,129 stars.

Many of the stars in the TW14 sample had extremely large W3 excesses above the expected photospheric values, with the majority of observed 12 μm fluxes being 10 times greater than the expected photospheric values. Considering that we were looking for similarly large excesses, the volume of space over which we might get a true W3 detection can be increased. To illustrate this point, Figure <ref> shows the expected r, r-z limit at which stars with 12 μm excesses 10 times their photospheric values would equal the W3 detection limit (dash-dotted line). However, to increase the detections (source counts) of stars with MIR excesses, we must also consider the larger sample of stars that reside outside the W3 bias-free limit, where a MIR excess could be detected (at larger distances, and hence larger volumes). This is illustrated in Figure <ref>, where we plot the estimated distance limits corresponding to different r, r-z values.

The WISE sensitivity limits are highly dependent on the source position on the sky, due to different depths of coverage and zodiacal foreground emission. Therefore, many of the stars fainter than the imposed limit can yield true detections, but stricter criteria must be implemented in their selection. Sensitivity maps for the WISE bands have been created using a profile-fit photometry noise model[<http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_3a.html>]. These sensitivity maps have been checked using 2MASS stars with spectral types earlier than F7 to estimate the sensitivity of the W3 band at different positions over the entire sky. The external comparison against 2MASS has shown that the W3 sensitivity map may slightly underestimate the sensitivity of the AllWISE catalog[<http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_3a.html>], but it provides a consistent model against which we can examine the measured W3 fluxes for significance as a function of stellar position on the sky. To select the highest-fidelity stars outside the limits of the clean sample, we required that each source have a W3 flux ⩾ the W3 flux limit for its position on the sky according to the noise model sensitivity map. This sample, termed the “full” sample, consists of the clean sample and an additional 19,354 stars, for a total count of 25,483 stars.

§.§.§ Visual Inspection

To retain the highest quality detections, we performed visual inspection for each of the stars. The W3 band is especially susceptible to background and nearby contaminants due to its large point-spread-function (PSF; ∼6.5″). Visual inspection removed stars superimposed on top of galaxies or blended with other nearby stars, which could cause the elevated MIR fluxes. Visual inspection also removed stars close to nearby bright objects that could produce additional MIR flux, or stars in areas of high IR cirrus. During visual inspection, we viewed SDSS and WISE archival images to ensure that the candidate objects were real MIR detections, a process similar to the procedure in TW14.
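Before visual inspection, the two photometric samples described above can be assembled with simple vectorized cuts; a sketch (array names are hypothetical):

```python
import numpy as np

def select_samples(r, z, f_w3, f_w3_limit_map):
    """Assemble the 'clean' and 'full' samples described above.

    `r`, `z`: extinction-corrected SDSS magnitudes (arrays);
    `f_w3`: measured W3 fluxes; `f_w3_limit_map`: the position-dependent
    W3 flux limit from the WISE noise-model sensitivity map, evaluated
    at each star's sky position.
    """
    # Clean sample: photosphere detectable at the W3 flux limit (Eq. above).
    clean = r <= 13.40 + 1.38 * (r - z)
    # Full sample: clean stars plus fainter stars whose measured W3 flux
    # exceeds the local sensitivity limit.
    full = clean | (f_w3 >= f_w3_limit_map)
    return clean, full
```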
Stars were assigned a quality flag, with quality = 1 indicating a star free from any contaminants and of the highest visual quality, and quality = 2 indicating that the 12 μm source is good but may be affected by nearby or background contamination, slightly offset between other WISE bands, or of low contrast in W3. After visual inspection, we were left with 20,502 stars in the full sample, and 5,786 stars in the clean sample. The breakdown of the samples and quality flags is shown in Table <ref>. This provides a clean sample from which to select stars with excess MIR flux (Section <ref>) and account for interstellar extinction (Section <ref>).

Table: Visual Inspection Quality

Sample         Quality Flag    Number of Stars
Full Sample    2               18281
Full Sample    1                2221
Clean Sample   2                4849
Clean Sample   1                 937

§.§.§ Accounting for Interstellar Extinction

Due to the distances to the stars in the sample (≳ 100 pc), interstellar extinction may affect the photometry. Since dust grains along a line-of-sight in the interstellar medium both dim and redden an object's SED, interstellar extinction increases the likelihood of a false MIR excess detection. For wavelengths longer than ∼5 μm, extinction effects should be negligible, with the exception of the 10 μm silicate feature <cit.>. Although we expect extinction to minimally affect the SED fits for the sources in our sample, due to the requirement that stars reside at relatively high Galactic latitudes (|b| > 20^∘), extinction must still be evaluated, especially since the W3 band samples the 10 μm silicate feature.

Directly measuring extinction for a star is most accurately done with an optical spectrum that samples the “knee” of the extinction curve, and a comparison to an un-extincted template of the same spectral type <cit.>. However, because optical spectra are unavailable for the vast majority of the MoVeRS sample, we employed a broader approach. SDSS provides estimates for the relative extinction, A_λ/A_V (the ratio of extinction in a given bandpass to extinction in the V band), for each star and each band in the photometric catalog. These extinction values were estimated along the line-of-sight using the <cit.> dust maps, created using galactic extinction measurements from the Cosmic Background Explorer <cit.> and the Infrared Astronomical Satellite <cit.>. These maps estimate the total extinction along a line-of-sight out of the Galaxy, and may therefore overestimate the actual extinction values for stars closer than 1–2 kpc. Extinction effects may also occur due to circumstellar material, as expected for the MIR excess candidates. However, the probability that an optically thick disk is seen directly edge-on is small assuming inclinations are random <cit.>, although edge-on has the highest probability (∼3.5% chance to view within ±2^∘ of edge-on). Therefore, we may assume the disk to be optically thin at visible wavelengths (similar to <cit.>).

To estimate the extinction in the sample, we used the SDSS extinction estimates for the riz-bands (A_r, A_i, and A_z). The extinction values for the clean and full samples are shown in Figure <ref>. The vast majority of the samples have small extinction values (< 0.1 mag), with median values for A_r, A_i, and A_z of 0.08, 0.06, and 0.04 for the full sample, and 0.09, 0.07, and 0.05 for the clean sample, respectively. Therefore, we do not expect extinction to affect the majority of our model fits from Appendix <ref>.
Furthermore, extinction tends to move stars parallel to our initial selection criteria (see Figure <ref>), and should minimally bias our selected sample (Section <ref>). For our full and clean samples, we corrected for extinction using the SDSS estimates for A_r, A_i, and A_z, and the relative extinction values (A_λ/A_V) for SDSS bandpasses from <cit.> Table 6, to compute A_V values. We then applied corrections to the rizJHK_s bandpasses using relative extinction measurements from the Asiago Database <cit.> and R_V = 3.1. Further details of this method can be found in <cit.>.

<cit.> found that the relative extinction at 10 μm due to the Galactic ISM extinction curve can be as large as the relative extinction in the K-band. <cit.> used 1,052,793 main sequence stars from SDSS DR8 <cit.> with |b| > 10^∘ to measure the dust extinction curve relative to the r-band for the first three WISE bands. <cit.> derived A_λ/A_K_s = 0.60, 0.33, and 0.87 for W1, W2, and W3, respectively. Another study by <cit.>, using GK-type giants from the SDSS Apache Point Observatory Galaxy Evolution Experiment <cit.> spectroscopic survey, found that the MIR relative extinction values were extremely sensitive to the NIR extinction, commonly expressed as a power law, A_λ ∝ λ^-α. This power law also corresponds to the relative extinction between the J- and K_s-bands, i.e., A_J / A_K_s = (λ_J / λ_K_s)^-α. <cit.> measured α = 1.65 using a small number of stars; however, <cit.> measured a slightly larger value of α = 1.79. The value of α corresponding to the measurements from <cit.> is 1.25, significantly less steep than other studies. <cit.> studied the universality of the NIR extinction law using color excess ratios of APOGEE M and K giants, and found that the extinction law shows very little variation across different environments. We chose to adopt the relative extinction values from <cit.>, whose measurement of α is consistent with other measurements from the diffuse ISM <cit.>, to correct for extinction in each WISE passband. Using the extinction-corrected photometry, we reran the full and clean samples through the stellar parameters pipeline (Section <ref>) to obtain new estimates for T_eff and R_∗. For the remainder of this study we use the unreddened photometry.

§.§ Determining Infrared Excesses

TW14 explored two different methods to determine which stars showed high levels of excess IR flux over the expected photospheric values (“extreme” MIR excesses will be evaluated in Section <ref>). The first method, and the method ultimately used by TW14, is a modified version of the empirical calibrations from <cit.>, using main sequence stars to determine the expected WISE colors as a function of r-z color (denoted as σ^'). Figure <ref> shows the r-z versus W1-W3 distribution for the full and clean samples, along with the empirical calibration of TW14. Figure <ref> shows the residual distribution with the TW14 empirical calibration (red line; Figure <ref>) subtracted. Although it is common to define stars with disks, in a binary fashion, as only those with highly significant deviations from the expected photospheric values, we acknowledge that the distribution is continuous, and many of the stars with non-significant deviations may have true detections but smaller disk masses or dust that is becoming optically thin. Although we used the more classical binary description of stars with an excess versus stars without an excess, we will address this continuous distribution in Section <ref>.
§.§ Determining Infrared Excesses

TW14 explored two different methods to determine which stars showed high levels of excess IR flux over the expected photospheric values (“extreme" MIR excesses will be evaluated in Section <ref>). The first method, and the method ultimately used by TW14, is a modified version of the empirical calibrations from <cit.>, which uses main sequence stars to determine the expected WISE colors as a function of r-z color; the significance of a star's deviation from this calibration is denoted σ^'. Figure <ref> shows the r-z versus W1-W3 distribution for the full and clean samples, along with the empirical calibration of TW14. Figure <ref> shows the residual distribution with the TW14 empirical calibration (red line; Figure <ref>) subtracted. Although it is common to define stars with disks, in a binary fashion, as only those with highly significant deviations from the expected photospheric values, we acknowledge that the distribution is continuous, and many of the stars with non-significant deviations may have true detections but smaller disk masses or dust that is becoming optically thin. Although we used the more classical binary description of stars with an excess versus stars without an excess, we will address this continuous distribution in Section <ref>.

Rather than making a blanket cut on stars with σ^'⩾ 5, as was done in TW14, we used the distributions from Figure <ref> to evaluate the false-positive probabilities of the candidates. To obtain stars with a 99% probability of hosting a true MIR excess, we define the probability threshold (assuming normal distributions),P_FP(MIR Excess) × N_sample < 0.01,where P_FP(MIR Excess) is the probability that the MIR excess is a false positive, and N_sample is the number of sources within the given sample. For the full sample, this requires P_FP(MIR Excess) < 4.88×10^-7, and for the clean sample P_FP(MIR Excess) < 1.73×10^-6. Converting these false-positive probabilities into σ^' values for each sample, we define stars with true MIR excesses to have σ^' > 3.48 for the full sample (4.90σ), and σ^' > 2.53 for the clean sample (4.64σ); both limits are shown in Figure <ref> (red dotted line), and candidates that meet these thresholds are marked as red points in Figure <ref>. Figure <ref> indicates that the TW14 calibration appears to be shifted to slightly redder WISE colors than the bulk of the stellar population. The peak of the distribution is shifted negative of zero, which suggests that either the TW14 relationship needs to be recalibrated, or that some other effect, such as metallicity, is shifting the distribution. Recently, WISE bands have been shown to be sensitive to the metal content of stars, with metal-poor stars showing redder W1-W2 colors <cit.>. Although this analysis was only completed for late-K and early-M dwarfs, it is reasonable to expect that a similar metallicity trend holds for lower-mass stars. No metallicity relationship has been shown to exist for the W1-W3 color; however, if the primary metallicity-sensitive band is W1, then we might expect metallicity to have a small effect on the W1-W3 color.

The second method takes the difference between the measured flux and the expected flux (estimated from a stellar photospheric model), weighted by the measurement uncertainty. This value is commonly abbreviated as χ_12 = (F_12 μm,measured - F_12 μm,model)/σ_F_12 μm,measured. Using stellar parameters and scaling values from the MCMC method (Section <ref>), we computed the expected 12 μm flux densities for stars in both the full and clean samples. Next, we converted W3 magnitudes to flux densities using the WISE all-sky explanatory supplement[<http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4_4h.html>] (further details can be found in TW14). Figure <ref> shows the distribution of χ_12 values for the full and clean samples. The majority of both samples are well represented by normal distributions with similar widths, although the full sample is shifted to slightly higher χ_12 values due to a distance bias, which will be discussed in Section <ref>. <cit.> showed that the empirical method outlined above was able to detect the disk around AU Mic at 22 μm, while methods involving SED fitting were unable to significantly detect the same disk using observational data at similar wavelengths <cit.>, presumably indicating that σ^' is a stronger discriminator of MIR excess significance. Although the SED fitting is important for estimating parameters that will allow us to constrain disk parameters, we chose to select excess sources based solely on their σ^' significance, similar to TW14. Selecting stars with MIR excesses using the aforementioned criteria produced 609 stars in the full sample, and two stars in the clean sample.
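The conversion from the false-positive requirement to a Gaussian significance threshold can be sketched as follows; this is a minimal illustration using the sample sizes quoted above (translating the resulting sigma level into a σ^' cut additionally requires the fitted width of each σ^' distribution).

```python
from scipy.stats import norm

# Require P_FP * N_sample < 0.01 and express the implied one-sided
# tail probability as a Gaussian sigma level.
for name, n_sample in [("full", 20502), ("clean", 5786)]:
    p_fp = 0.01 / n_sample          # per-star false-positive probability
    n_sigma = norm.isf(p_fp)        # one-sided Gaussian equivalent
    print(f"{name}: P_FP < {p_fp:.3e}  ->  {n_sigma:.2f} sigma")

# Prints ~4.88e-07 -> 4.90 sigma (full) and ~1.73e-06 -> 4.64 sigma
# (clean), matching the thresholds quoted in the text.
```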
The cumulative false-positive probabilities for our selected stars are 0.0386% (∼0.24 stars) for the full sample, and 8.699×10^-6% (≪ 1 star) for the clean sample. We used more stringent criteria in the selection of stars exhibiting MIR excesses than those used in TW14. Additionally, the parent population of stars for this sample (MoVeRS) is different from the parent population of TW14 (W11). To quantify this, the MoVeRS sample contains 15,262 stars from the W11 catalog (∼22%). Of the 15,262 matches in MoVeRS, 57 (of 168) are from the TW14 study of stars with MIR excesses (∼34%). Based on the selection criteria above, only 9 (of the 57) stars with MIR excesses would meet the new criteria (∼16%). These values will be considered when comparing our results to those from TW14 in Section <ref>. Additionally, 181 of the MIR excess candidates in the full sample, and one of the MIR excess candidates in the clean sample, have W4 detections with S/N > 2. We will consider these W4 detections when we fit for fractional IR luminosities (Section <ref>).

§.§.§ Extreme MIR Excesses

Extreme MIR excesses arising from planetary collisions are expected to produce large amounts of dust, and hence large fractional IR luminosities (L_IR / L_∗≳ 10^-2). The primary focus of this study is these extreme MIR excesses; however, this requires knowledge of the total IR flux of the dust grains. For stars that have both W3 and W4 detections, we can fit a simple blackbody to the excess MIR flux, similar to what was done in TW14. We acknowledge that the disks we are interested in observing should emit strong silicate features <cit.>, which would make W3 a poor indicator of the underlying blackbody continuum of the dust. However, with no ability to discern the blackbody continuum from the silicate emission (e.g., a MIR spectrum), we use the approximation that W3 is dominated by the continuum radiation. Using the extreme MIR excess candidates that had a W4 detection with S/N > 2, we fit a combined model comprised of the best-fit photospheric model found in Section <ref> and a simple blackbody function. To determine the best-fit blackbody function, we used a least-squares minimization, fitting for T_dust and the multiplicative scaling factor of the blackbody. For the least-squares fit, we used the best-fit photosphere model, and fit the dust blackbody function to the W3 and W4 measurements, weighted by the measurement uncertainties. An example fit from this process is shown in Figure <ref>. For stars without a W4 detection, we assume the peak SED flux is at W3, giving an estimate of T_dust≈ 317.4 K (TW14). To compute L_IR / L_∗, we integrated the best-fit photospheric model to estimate L_∗; for L_IR, we subtracted the stellar model from the combined fit (stellar model plus best-fit blackbody) and integrated the residual flux, then took the ratio of the two values <cit.>. Keeping only the stars with L_IR / L_∗⩾ 10^-2, we were left with 584 stars in the full sample and two stars in the clean sample, removing only 25 stars from the full sample and none from the clean sample. This is likely due to the fact that our initial selection criteria required significant MIR excesses. We will address “non-significant" MIR excesses in the following section, and again in the discussion (Section <ref>).
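A minimal sketch of the dust-blackbody fit described above might look like the following; the excess flux densities, uncertainties, and initial guesses are illustrative placeholders, and in practice the blackbody is fit on top of the best-fit photosphere model.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants

def planck_nu(nu, t_dust):
    """Blackbody spectral radiance B_nu(T) in cgs."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (KB * t_dust))

def excess_model(nu, t_dust, scale):
    # `scale` absorbs the solid angle of the emitting dust.
    return scale * planck_nu(nu, t_dust)

# Placeholder W3/W4 excess flux densities (measured minus photosphere),
# in erg s^-1 cm^-2 Hz^-1, at the band effective wavelengths.
nu = C / np.array([11.56e-4, 22.09e-4])     # wavelengths in cm
f_excess = np.array([1.1e-26, 2.0e-26])
f_err = np.array([0.2e-26, 0.5e-26])

popt, _ = curve_fit(excess_model, nu, f_excess, sigma=f_err,
                    p0=[300.0, 1e-18], absolute_sigma=True)
print(f"T_dust = {popt[0]:.0f} K")
```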
§.§.§ Non-significant MIR Excesses

In studies of disks that are inferred from their MIR excesses, it is common to only select stars with significant excesses, i.e., those that deviate strongly from the expected photospheric value. However, the distribution of stars with and without excesses is continuous, with no sharp boundary between what is and is not considered an excess. Many of the stars that are not included in the bona fide sample of stars with MIR excesses are indeed stars with excess MIR emission above their photospheric values. For example, the region between the 2σ value and our cutoff limit (1.09 < σ^' < 3.48; Figure <ref>) contains many stars with real excesses and may trace the end of a collisional cascade, where the dust is becoming optically thin. The problem is that we cannot confidently identify individual stars with excesses in this range, since some of the stars in the 1.09 < σ^' < 3.48 range are interlopers from the stellar distribution of σ^'. Instead, we can statistically examine this population.

Using the σ^' distributions (Figure <ref>), we explored the number of excesses that exist within the non-significant excess region. We fit normal distributions to the core of the σ^' distributions to minimize effects from the long tail of excess sources (blue line; Figure <ref>). Next, we subtracted the best-fit normal distribution (scaled from the normalized distribution to the true distribution), interpolated at the mid-point of each bin, from the distribution of σ^' values. The residual histograms are shown in Figure <ref>. The scatter within the 1σ range (and to a lesser extent the 2σ range) can be considered noise, since the distribution is not perfectly normally distributed. However, the bumps at σ^' values greater than 2σ can be considered real, since there is no corresponding scatter at similar negative σ^' values about the mean. These bumps represent real sources harboring MIR excesses. To quantify the number of potentially missing stars with MIR excesses, we integrated the region between the 2σ limit (light gray region; σ^' = 1.09 for the full sample and σ^' = 0.58 for the clean sample; Figure <ref>) and the significance cutoff we imposed (red dotted line; σ^' = 3.48 for the full sample and σ^' = 2.53 for the clean sample; Figure <ref>). We estimate that ∼1400 stars are excluded from the full sample and ∼90 stars from the clean sample. However, this assumes that all missing stars are hosts to “extreme MIR excesses." We computed fractional IR luminosities using the same method as in the preceding section, finding that 5.6% of the non-excess stars in the full sample and 0.5% of the non-excess stars in the clean sample hosted extreme MIR excesses. This translates into ∼80 and ∼1 star(s) missing from the full and clean samples, respectively. Although we cannot definitively say which stars within this region actually harbor a true MIR excess, it is important to consider this missing population in the context of the frequency of low-mass field stars exhibiting MIR excesses. If we consider the clean sample (as the full sample has a number of inherent biases, which we will account for in Section <ref>), then, accounting for the missing sources, we estimate the fraction of stars exhibiting a MIR excess to be ∼0.05%. We will discuss this statistic further in Section <ref>.
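The statistical accounting described above can be sketched as follows, using synthetic stand-in data (a Gaussian core with an injected excess tail) in place of the real σ^' values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Synthetic sigma' values: a Gaussian core plus an injected excess tail.
sigma_prime = np.concatenate([rng.normal(0.0, 0.55, 20000),
                              rng.uniform(1.5, 6.0, 300)])

counts, edges = np.histogram(sigma_prime, bins=120)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

# Fit only the core to avoid the excess tail, then subtract the model.
mu, sd = norm.fit(sigma_prime[np.abs(sigma_prime) < 1.0])
model = sigma_prime.size * width * norm.pdf(centers, mu, sd)

lo, hi = mu + 2.0 * sd, 3.48   # 2-sigma point to the significance cutoff
window = (centers > lo) & (centers < hi)
print(f"~{np.sum(counts[window] - model[window]):.0f} hidden excesses")
```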
§.§ Fidelity of Excesses: Cross-match to Spitzer

To examine the validity of the extreme MIR excess detections, we cross-matched the candidates with the Spitzer Enhanced Imaging Products catalog (which includes both IRAC and MIPS observations). We found ten candidates with Spitzer photometry matched to within 6″. A search through the literature indicated that none of the Spitzer data for these sources have been published previously. Figure <ref> shows the SEDs for these ten matching stars, demonstrating that the Spitzer photometry is consistent with the WISE photometry (for both W3 and W4 detections). All of these stars appear to have true MIR excesses. We are therefore confident that the detected MIR excesses are true excesses originating from their host stars. However, younger populations of stars are expected to exhibit MIR excesses; therefore, we must test for youth where possible in the samples.

§.§ Spectroscopic Tracers of Youth

One strength of the TW14 sample over the MoVeRS sample is the availability of optical SDSS spectra for each star. This ensured that all objects were low-mass stars and made an investigation of youth possible. TW14 used age diagnostics such as Hα emission to determine that the stars in their sample were older field stars and not young, pre-main sequence stars, the latter of which we expect to host circumstellar disks (and therefore MIR excesses). To examine possible age diagnostics and confirm our selection of low-mass dwarfs for the sample, we identified ten SDSS spectroscopic targets within the extreme MIR excess sample, and received time on the Discovery Channel Telescope (DCT) to obtain optical spectra for 15 additional extreme MIR excess candidates. Unfortunately, none of the spectroscopic subsample overlapped with the stars with Spitzer data (Section <ref>). TW14 used two age-dependent spectroscopic diagnostics: Hα <cit.> and Li i <cit.>. Hα emission (in addition to other Balmer transitions) is a strong indicator of accretion, resulting in large equivalent width (EW) measurements[As is convention in studies of low-mass stars, positive EW measurements indicate emission.] <cit.> and broad lines <cit.>. Stars exhibiting Hα due to accretion are also young (< 10 Myr), and typically found in young associations rather than the field. For older populations of stars (≫ 100 Myr), Hα emission (and other Balmer transitions) is also tied to “magnetic activity," as strong magnetic fields lead to chromospheric heating <cit.>. <cit.> demonstrated that the lifetime of magnetic activity (as traced through Hα emission) is mass-dependent in the M dwarf regime. For the highest-mass M dwarfs, the lifetime of magnetic activity is 500 Myr–1 Gyr, increasing to >8 Gyr for the lowest-mass M dwarfs. This makes Hα emission a moderate age diagnostic for field stars when coupled with stellar mass or spectral type. A lack of detectable Hα emission in the earliest-type stars in our sample would indicate a relatively old (> 1–2 Gyr) field population. We used the same regions as TW14 to measure the EW of Hα, and to determine the stars for which an EW measurement could or could not be made.
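An EW measurement of the sort used here can be sketched as follows; the band and pseudo-continuum windows in the usage example are illustrative choices, not necessarily the exact regions adopted by TW14.

```python
import numpy as np

def equivalent_width(wave, flux, band, blue_cont, red_cont):
    """EW from a linear pseudo-continuum; the sign is flipped so that
    emission is positive, per the convention noted in the text."""
    b = (wave >= blue_cont[0]) & (wave <= blue_cont[1])
    r = (wave >= red_cont[0]) & (wave <= red_cont[1])
    xb, yb = wave[b].mean(), flux[b].mean()
    xr, yr = wave[r].mean(), flux[r].mean()
    in_band = (wave >= band[0]) & (wave <= band[1])
    cont = yb + (yr - yb) / (xr - xb) * (wave[in_band] - xb)
    return -np.trapz(1.0 - flux[in_band] / cont, wave[in_band])

# Usage with arrays `wave` (Angstroms) and `flux`; the windows below are
# illustrative Halpha choices:
# ew = equivalent_width(wave, flux, (6557.6, 6571.6),
#                       (6500.0, 6550.0), (6575.0, 6625.0))
```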
Lithium absorption is more strongly correlated with youth than Hα emission, but it is also mass dependent. Modeling results by <cit.> demonstrated that the initial lithium abundance will deplete by a factor of 10 in 10 Myr for a 0.7 M_⊙ star (∼M0), while a star with a mass of 0.08 M_⊙ (∼M8) will take ∼100 Myr to deplete by the same factor. This makes Li i absorption a strong discriminator of youth. Due to the difficulty in measuring the EW of Li i (primarily caused by the strong TiO features around Li i and typically low S/N), we applied a comparative technique using SDSS template spectra <cit.>, similar to what was done by <cit.>. The template spectra from <cit.> were built from composites of SDSS field star spectra. Therefore, they should indicate the baseline shape of the spectrum near the Li i feature for low-mass field stars devoid of Li i absorption. A comparison between our spectra and the <cit.> template spectra provides a means to detect Li i absorption without making a direct measurement of the EW. Further details of the method are described in TW14.

We discovered that ten of the extreme MIR excess candidates had been previously observed through one of the SDSS spectroscopic programs and had spectra available. Nine of these stars were included in TW14 because they were part of the SDSS DR7 spectroscopic sample of M dwarfs <cit.>, and one of the stars was observed after the <cit.> sample was compiled. All ten of these stars are classified as M dwarfs, confirming our selection of low-temperature dwarfs. The radial velocity (RV) corrected SDSS spectra are shown in Figure <ref>. Only one of these stars (an M7) showed significant Hα emission. The average activity lifetime of an M7 star is ∼8 Gyr <cit.>. None of these stars had detectable amounts of lithium. Our Li i analysis therefore sets a lower age limit of > 100 Myr. The lack of Hα emission for stars earlier than M7 indicates a typical minimum age of ∼1 Gyr for the sample <cit.>, indicative of an older field population.

To further assess the age of the sample of extreme MIR excess candidates, we obtained optical spectra with the DeVeny Spectrograph on the 4.3-m DCT for an additional 15 candidates with high-significance MIR excesses (σ^' > 10), shown in Figure <ref>. The spectra cover the range ∼5600 Å–9000 Å at a resolution of λ/Δλ≈ 2850 (2.5 pixel). The candidates were selected based on their locations in the sky, and should represent a relatively unbiased subsample of the full sample. Spectra were reduced using a modified version of the pyDIS Python package <cit.>, originally designed for use with the APO 3.5-m Dual Imaging Spectrograph (DIS). Stars were assigned spectral types using the PyHammer[<https://github.com/BU-hammerTeam/PyHammer>] Python package <cit.>. Although this is a small portion of the total sample, we expect a similar age distribution for the parent population. The spectroscopic observations collected indicate that the DCT sample is also made up of low-temperature stars, further confirming our sample selection. One of the stars (SDSS objID 1237668734684955989; 2MASS J18351414+4026520) has peculiar features. The TiO bands found at 7053 Å are consistent with a cool star, and some other features are consistent with a carbon dwarf <cit.>, while still others match neither classification. This object motivates further investigation to determine its true nature. From the full spectroscopic sample of 25 stars, we estimate a contamination rate of 4% for our entire sample due to objects that are not typical low-mass stars. We observed that only three of the stars for which we have DCT spectra, all within the fully convective regime (≳M4), showed signs of Hα emission. Additionally, none of the stars had detectable amounts of Li i. This lack of Li i absorption is consistent with the stars having ages ≫ 100 Myr, as estimated from the SDSS spectra. Considering the stars without Hα emission, this indicates the average age of the population is ≳1 Gyr <cit.>, again consistent with the findings from the SDSS spectra.
Based on the age limits from the two spectroscopic subsamples, we concluded (as did TW14) that the orbiting dust (inferred from the MIR excesses) was not primordial in nature, since primordial disks are expected to be dispersed on timescales much shorter than the presumed ages of these stars.

Spectroscopic Parameters

SDSS DR8+ objID        R.A. (H:M:S)  Decl. (D:M:S)  RV ± 7 (km s^-1)  Spectral Type  Hα EW (Å)^a   Telescope  ⟨L⟩
1237665369038782628    10:17:40.54   +28:58:51.62   +39.5             M1             ...           SDSS       21.34
1237651250974556408    15:47:54.70   +52:48:57.52   -32.5             M1             ...           SDSS       13.77
1237657071156723794    01:27:51.44   +00:16:33.17   +6.2              M2             ...           SDSS       21.98
1237655692480151822    15:16:10.43   -01:42:37.24   -48.4             M2             ...           SDSS       16.95
1237671125374861409    09:32:04.26   +14:08:26.51   +39.0             M3             ...           SDSS       92.45
1237662619722449089    15:38:25.49   +32:28:44.59   -10.0             M4             ...           SDSS       36.11
1237667254011101278    11:30:25.02   +29:14:16.37   +25.6             M5             ...           SDSS       59.30
1237659161736315205    15:48:31.45   +42:53:07.14   -21.1             M6             ...           SDSS       179.04
1237665128545911020    12:42:03.86   +34:55:37.74   -45.7             M7             ...           SDSS       240.58
1237661068171346281    09:31:07.08   +10:06:07.25   +16.2             M7             10.3 ± 0.9    SDSS       327.29
1237668331488084142    14:12:46.44   +15:01:52.55   -42.1             M0             ...           DCT        -1.97
1237651250974556408    15:47:54.70   +52:48:57.52   -8.4              M2             ...           DCT        17.59
1237655749395022353    18:04:45.57   +46:36:57.79   -51.4             M2             ...           DCT        41.55
1237672026249167591    22:41:17.31   +33:40:21.14   -43.6             M2             ...           DCT        22.31
1237664852033142893    14:15:55.43   +32:54:33.84   +25.1             M3             ...           DCT        11.19
1237662500006461639    16:01:09.94   +36:35:30.07   +5.2              M3             ...           DCT        38.02
1237655747779363146    17:45:18.61   +57:53:59.65   +4.3              M3             ...           DCT        28.02
1237668734684955989    18:35:14.13   +40:26:51.95   +93.0             Pec^b          ...           DCT        ...
1237671941420483289    19:06:24.80   +64:36:19.88   -56.3             M4             ...           DCT        40.04
1237656241159012941    21:58:10.54   +11:42:01.70   -122.0            M4             ...           DCT        30.99
1237659330309456141    15:35:00.41   +48:53:42.51   -111.1            M5             ...           DCT        51.76
1237655465932292383    16:17:07.09   +45:52:14.97   -70.0             M5             ...           DCT        86.02
1237652943699509565    22:00:46.74   +12:44:01.96   -32.4             M5             6.3 ± 0.5     DCT        76.04
1237652937790915940    20:53:41.55   +08:35:14.57   -26.7             M6             3.5 ± 0.9     DCT        241.09
1237678920195637464    22:35:47.06   +11:42:15.67   -43.5             M7             15.8 ± 1.8    DCT        103.27

^a Positive EW measurements indicate emission. Inconclusive measurements are not listed.
^b This object shows peculiar spectral features. The TiO bands at ∼7050 Å are indicative of a low-mass star; however, the numerous bumps in the spectrum may indicate a carbon dwarf.

§.§.§ Spectroscopic Estimates of Luminosity Classes

We also estimated the contamination rate of giants in our subsample of the MoVeRS catalog using the collected spectra. A thorough investigation into separating M-type stars by luminosity class was undertaken by <cit.>, using a modified method similar to that of <cit.> for Kepler target stars. The spectroscopic features <cit.> used for determining luminosity classes included: 1) the CaH2 (6814–6846 Å) and CaH3 (6960–6990 Å) indices <cit.>; 2) the Na i doublet <cit.>; 3) the Ca ii triplet <cit.>; 4) the mix of atomic lines (Ba ii, Fe i, Mn i, and Ti i) at 6470–6530 Å <cit.>; and 5) the K i (7669–7705 Å) and Na i lines identified in <cit.>. The Ca ii triplet falls within a region prone to fringing at the red end of the DCT spectra; therefore, we omitted measuring this feature. Most of the spectroscopic features above change with both surface gravity and temperature; therefore, we compared the above spectroscopic indices against the TiO5 index <cit.>, which is sensitive to both metallicity and temperature <cit.>, but relatively insensitive to surface gravity <cit.>.
All other aforementioned features were measured from the available SDSS and DCT spectra, following the same prescription outlined in <cit.>. Table <ref> contains the information for the continuum region(s) and band region used to measure EWs and spectral indices.

Spectroscopic Indices

Index Name                Band (Å)     Continuum (Å)
Na i (a)^a                5868–5918    6345–6355
Ba ii/Fe i/Mn i/Ti i^a    6470–6530    6410–6420
CaH2^b                    6814–6846    7042–7046
CaH3^b                    6960–6990    7042–7046
TiO5^b                    7126–7135    7042–7046
K i^a                     7669–7705    7677–7691, 7802–7825
Na i (b)^a                8172–8197    8170–8173, 8232–8235

^a Measured as an EW. Linear interpolation is done through the continuum ranges to estimate the continuum.
^b Measured as a band index by calculating the mean flux within each wavelength range, and taking the ratio of the band mean flux to the continuum mean flux.

To determine the expected EWs and spectral indices for low-mass dwarfs, we measured the same features for 38,722 stars from the <cit.> spectroscopic sample of M dwarfs with good photometry (goodphot = 1) and good proper motions (goodpm = 1). Although some small amount of giant contamination is expected within this sample, it is estimated to be less than 2%, and requiring good proper motions should further minimize giant contamination. We also obtained optical spectra for 154 giant stars from <cit.>, <cit.>, <cit.>, and SDSS. All giant spectra were resampled to the same resolution as our sample spectra prior to measuring the spectroscopic indices, to remove any potential bias. To estimate the likelihood that each star in our sample is either a dwarf or a giant, we built 2-D probability distributions for both the dwarf and giant comparison samples for each spectroscopic tracer, using Gaussian kernel density estimation with Silverman's rule <cit.>, as shown in Figure <ref>. The likelihood that source i is a dwarf given spectroscopic index j is estimated by the log-likelihood ratio,L_i,j = log_10(P_dwarf/P_giant).The likelihood over all indices that a source is a dwarf versus a giant is⟨ L_i⟩ = ∑_j w_j L_i,j/∑_j w_j ,where w_j is a weighting factor for spectroscopic index j. <cit.> found that setting the weights to unity (allowing all spectroscopic tracers to be equally weighted) did not significantly alter the results. We chose to weight all the measured spectroscopic indices equally, simplifying Equation (<ref>) to ⟨ L_i⟩ = ∑_j L_i,j. Each source was then assigned to the category of dwarf star (⟨ L_i⟩ > 2), giant star (⟨ L_i⟩ < -2), or undetermined (-2 < ⟨ L_i⟩ < 2), the ±2 dex thresholds corresponding to 99% confidence that one training set is more likely than the other to host the source. All but one of our sources have a high probability of being dwarfs rather than giants. The earliest-type star in our sample has an inconclusive classification, primarily because the spectroscopic indices of the two training sets begin to converge for the earliest-type stars (largest values of TiO5). Given this object's measured proper motion in multiple catalogs, it is most likely a dwarf star. The inclusion of this object in Gaia DR1 indicates that both a higher-precision proper motion measurement and a trigonometric distance are forthcoming, which will definitively determine the luminosity class of this object. We did not attempt to ascribe a luminosity class to our peculiar object, due to the multiple dissimilarities between its spectrum and both of our training sets. Based on the above analysis, we do not change our estimated contamination rate of ∼4%.
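The classification scheme can be sketched as follows; the training arrays are placeholders for the measured (TiO5, index) pairs of the dwarf and giant comparison samples.

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_likelihood_ratio(dwarf_train, giant_train, target):
    """L_ij = log10(P_dwarf / P_giant) from 2-D KDEs in the
    (TiO5, index) plane; Silverman's rule sets the bandwidth.
    Training arrays have shape (2, N); `target` has shape (2, 1)."""
    p_dwarf = gaussian_kde(dwarf_train, bw_method="silverman")(target)
    p_giant = gaussian_kde(giant_train, bw_method="silverman")(target)
    return np.log10(p_dwarf / p_giant)

def classify(L_total):
    """Apply the +/-2 dex (99% confidence) decision thresholds."""
    if L_total > 2:
        return "dwarf"
    if L_total < -2:
        return "giant"
    return "undetermined"

# With equal weights (w_j = 1), <L_i> reduces to a sum over indices j:
# L_total = sum(log_likelihood_ratio(d[j], g[j], t[j]) for j in indices)
```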
§.§ Disk Properties

We can further explore the properties of our extreme MIR excess systems by making some basic assumptions about the disk properties. Dust temperatures allow us to estimate both the orbital distance of the dust and the minimum dust mass. Using the dust grain temperature estimates (Section <ref>), we calculated the minimum orbital distance of the dust, assuming the dust grains are in thermal equilibrium with the host star, given by,D_min = 1/2(T_∗/T_gr)^2 R_∗,where T_∗ and T_gr are the stellar effective temperature and dust grain temperature, respectively, and R_∗ is the stellar radius. Assuming a simple geometry for the orbiting dust, a dust mass (M_d) can be estimated. Similar to TW14, we assumed the dust is in a thin shell orbiting at a distance D_min from the host star, with a particulate radius a and density ρ_s, and a cross section equal to the physical cross section of a spherical grain. We take ⟨ a ⟩ = 0.5 μm and ρ_s = 2.5 g cm^-3, similar to TW14. The dust mass is then given by,M_d ⩾16/3πL_IR/L_∗ρ_s ⟨ a ⟩ D_min^2.Further details regarding this process can be found in TW14. The orbital distances and dust masses for the extreme MIR excess candidates are shown in Figure <ref>. The majority of stars harbor dust within 1 AU, with the peak of the distribution at a few tenths of an AU, within the snow line for low-mass stars <cit.>. For the majority of our sample, which has only W3 measurements, the dust temperature was assumed to be 317.4 K, which predetermined the estimated orbital distance of the dust to be within the snow line. A colder disk (< 317.4 K) would need to be even more massive to produce a similar flux level at W3, making it more likely that we are observing a less massive, hotter disk. Our dust mass estimates are comparable to those found in TW14, with a median value of 10^-5 M_Moon. Obtaining MIR spectra of these stars with the next generation of telescopes will help to further characterize these dust populations (e.g., constrain mineralogy).
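A worked sketch of the two estimates above, in cgs units with illustrative stellar inputs (not values for any particular candidate):

```python
import numpy as np

R_SUN = 6.957e10    # cm
AU = 1.496e13       # cm
M_MOON = 7.35e25    # g

t_star, t_gr = 3100.0, 317.4      # K; t_gr is the W3-only assumption
r_star = 0.35 * R_SUN             # cm
f_ir = 1.5e-2                     # L_IR / L_*
rho_s, a_gr = 2.5, 0.5e-4         # g cm^-3, and 0.5 micron in cm

# Thermal-equilibrium orbital distance and minimum dust mass.
d_min = 0.5 * (t_star / t_gr) ** 2 * r_star
m_dust = (16.0 / 3.0) * np.pi * f_ir * rho_s * a_gr * d_min ** 2

print(f"D_min = {d_min / AU:.2f} AU")
print(f"M_d >= {m_dust / M_MOON:.2e} M_Moon")
```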
§.§ The Extreme MIR Excess Sample

The general characteristics of our sample of stars with extreme MIR excesses are similar to those of TW14. We show the r-z color distribution, distance distribution, and Galactic spatial distribution of the sources in Figure <ref>. The r-z color distribution peaks at r-z ≈ 2, equivalent to spectral type dM4, which corresponds to the peak of the initial mass distribution <cit.>. The distance distribution peaks at approximately 200 pc, consistent with other low-mass stellar samples from SDSS <cit.>. The candidates are fairly spread out within the SDSS footprint. To test for clumping of objects, we ran a friends-of-friends algorithm to search for spatial groupings within 10 pc of one another (see TW14 for further details). We found 10 pairs of stars within 10 pc of each other, with no other groupings larger than two stars. We tested each pair for similar 2-D kinematics (i.e., moving together through the Galaxy) using Equation (6) from <cit.>, given by:( Δμ_α/σ_Δμ_α) + ( Δμ_δ/σ_Δμ_δ) ⩽ 2,where Δμ_α and Δμ_δ are the differences between the two proper motion components for each pair, and their uncertainties are the quadrature sums of the individual proper motion uncertainties. The smallest value of this metric among the pairs was 5, indicating that none of the pairs shares similar 2-D kinematics. These pairs are therefore more likely chance alignments than actual physical groupings.

The catalog of candidates is available through the online journal, and the column descriptions are listed in Table <ref>.

Extreme MIR Excess Candidates Catalog Schema

No.   Column Description                      Units
1     SDSS Object ID                          ...
2     SDSS R.A.                               deg
3     SDSS Decl.                              deg
4     SDSS u-band PSF mag.                    mag
5     SDSS u-band PSF mag. error              mag
6     SDSS u-band extinction                  mag
7     SDSS u-band unreddened PSF mag.         mag
8     SDSS g-band PSF mag.                    mag
9     SDSS g-band PSF mag. error              mag
10    SDSS g-band extinction                  mag
11    SDSS g-band unreddened PSF mag.         mag
12    SDSS r-band PSF mag.                    mag
13    SDSS r-band PSF mag. error              mag
14    SDSS r-band extinction                  mag
15    SDSS r-band unreddened PSF mag.         mag
16    SDSS i-band PSF mag.                    mag
17    SDSS i-band PSF mag. error              mag
18    SDSS i-band extinction                  mag
19    SDSS i-band unreddened PSF mag.         mag
20    SDSS z-band PSF mag.                    mag
21    SDSS z-band PSF mag. error              mag
22    SDSS z-band extinction                  mag
23    SDSS z-band unreddened PSF mag.         mag
24    2MASS J-band PSF mag.                   mag
25    2MASS J-band PSF corr. mag. unc.        mag
26    2MASS J-band PSF total mag. unc.        mag
27    2MASS J-band SNR                        ...
28    2MASS J-band χ^2_ν goodness-of-fit      ...
29    2MASS J-band extinction                 mag
30    2MASS J-band unreddened PSF mag.        mag
31    2MASS H-band PSF mag.                   mag
32    2MASS H-band PSF corr. mag. unc.        mag
33    2MASS H-band PSF total mag. unc.        mag
34    2MASS H-band SNR                        ...
35    2MASS H-band χ^2_ν goodness-of-fit      ...
36    2MASS H-band extinction                 mag
37    2MASS H-band unreddened PSF mag.        mag
38    2MASS K_s-band PSF mag.                 mag
39    2MASS K_s-band PSF corr. mag. unc.      mag
40    2MASS K_s-band PSF total mag. unc.      mag
41    2MASS K_s-band SNR                      ...
42    2MASS K_s-band χ^2_ν goodness-of-fit    ...
43    2MASS K_s-band extinction               mag
44    2MASS K_s-band unreddened PSF mag.      mag
45    2MASS photometric quality flag          ...
46    2MASS read flag                         ...
47    2MASS blend flag                        ...
48    2MASS contamination & confusion flag    ...
49    2MASS extended source flag              ...
50    WISE W1-band PSF mag.                   mag
51    WISE W1-band PSF mag. unc.              mag
52    WISE W1-band SNR                        ...
53    WISE W1-band χ^2_ν goodness-of-fit      ...
54    WISE W1-band extinction                 mag
55    WISE W1-band unreddened PSF mag.        mag
56    WISE W2-band PSF mag.                   mag
57    WISE W2-band PSF mag. unc.              mag
58    WISE W2-band SNR                        ...
59    WISE W2-band χ^2_ν goodness-of-fit      ...
60    WISE W2-band extinction                 mag
61    WISE W2-band unreddened PSF mag.        mag
62    WISE W3-band PSF mag.                   mag
63    WISE W3-band PSF mag. unc.              mag
64    WISE W3-band SNR                        ...
65    WISE W3-band χ^2_ν goodness-of-fit      ...
66    WISE W3-band extinction                 mag
67    WISE W3-band unreddened PSF mag.        mag
68    WISE W4-band PSF mag.                   mag
69    WISE W4-band PSF mag. unc.              mag
70    WISE W4-band SNR                        ...
71    WISE W4-band χ^2_ν goodness-of-fit      ...
72    WISE W4-band extinction                 mag
73    WISE W4-band unreddened PSF mag.        mag
74    WISE contamination & confusion flag     ...
75    WISE extended source flag               ...
76    WISE variability flag                   ...
77    WISE photometric quality flag           ...
78    Spitzer IRAC Ch1 PSF flux density       μJy
79    Spitzer IRAC Ch1 PSF flux density unc.  μJy
80    Spitzer IRAC Ch2 PSF flux density       μJy
81    Spitzer IRAC Ch2 PSF flux density unc.  μJy
82    Spitzer IRAC Ch3 PSF flux density       μJy
83    Spitzer IRAC Ch3 PSF flux density unc.  μJy
84    Spitzer IRAC Ch4 PSF flux density       μJy
85    Spitzer IRAC Ch4 PSF flux density unc.  μJy
86    Spitzer MIPS Ch1 PSF flux density       μJy
87    Spitzer MIPS Ch1 PSF flux density unc.  μJy
88    Proper motion in R.A. (μ_αcosδ)         mas yr^-1
89    Proper motion in Decl.                  mas yr^-1
90    Total error in R.A. proper motion       mas yr^-1
91    Total error in Decl. proper motion      mas yr^-1
92    Full Sample Flag                        ...
93    Clean Sample Flag                       ...
94    Visual Quality Flag                     ...
95    Photometric distance                    pc
96    Distance from the Galactic plane        pc
97    σ^' ^a                                  ...
98    T_eff estimate                          K
99    Upper T_eff limit                       K
100   Lower T_eff limit                       K
101   Log g estimate                          dex
102   Upper Log g limit                       dex
103   Lower Log g limit                       dex
104   χ_12 ^a                                 ...
105   χ_22 ^a                                 ...
106   L_IR / L_∗                              ...
107   D_min                                   AU
108   M_d                                     M_Moon
109   T_gr                                    K
110   σ_T_gr                                  K

^a Defined in Section <ref>.

§.§ Distance and Color (Temperature) Bias

Because SDSS is a magnitude-limited survey, our selection of stars suffers a distance bias that depends on stellar effective temperature. For each stellar temperature range, there is a minimum and maximum distance over which a dwarf star can be observed, set by the saturation and faintness limits of SDSS, respectively. To explore where this bias occurs, we examined the flux ratios (F_12 μm,measured / F_12 μm,model) as a function of r-z color and distance (Figure <ref>). Figure <ref> also shows the distance corresponding to the W3 flux limit (730 μJy; see Section <ref>). For the full sample, the spread in distances is typically larger than the distance at which the photospheric flux level would still be detectable at the W3 flux limit (dashed line). This makes many of the stars in the full sample undetectable (at this flux limit) unless they have a MIR excess (assuming no line-of-sight dependence in the sensitivity). Figure <ref> further illustrates that we can only detect the bluest stars in W3 if they have an extreme MIR excess, since their distances are too large to detect their photospheres at the W3 flux limit. This is true for some of the redder sources as well, but for many of these we do have the ability to observe their photospheres at 12 μm. Due to the spread of distances beyond the W3 flux-limit distance in the full sample, there is a bias for which we must account. The case is different for the clean sample, where the distance spread for all r-z colors lies closer than the distance corresponding to the W3 flux limit. The clean sample should therefore be free from an upper distance-limit bias, unlike the full sample, but may suffer from a lower distance-limit bias due to saturation. The clean sample also does not cover the same r-z color range (a proxy for stellar temperature and mass) as the full sample, restricting its use to mid- to late-spectral-type low-mass stars. The distance bias will be accounted for using a Galactic model.
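The flux-limit distance underlying this comparison follows directly from the inverse-square law; a minimal sketch with placeholder inputs:

```python
import numpy as np

F_LIMIT = 730.0   # W3 flux limit, microJy (see above)

def w3_limit_distance(f12_model_ujy, dist_pc):
    """Distance at which the photosphere drops to F_LIMIT (flux ~ d^-2)."""
    return dist_pc * np.sqrt(f12_model_ujy / F_LIMIT)

# A star whose model photosphere gives 2500 uJy at 80 pc stays detectable
# in W3 out to ~148 pc; beyond that, only an excess makes it detectable.
print(f"{w3_limit_distance(2500.0, 80.0):.0f} pc")
```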
§ LOKI GALACTIC MODEL: ESTIMATING STELLAR COUNTS AND PROPER MOTIONS FOR COMPLETENESS

A major limitation of the extreme MIR excess study completed by TW14 was its non-uniform sample and the lack of a method to estimate completeness. To estimate the completeness of the current sample, we used a Galactic model to estimate how many stars were missing from the sample (e.g., within a local volume or along a line-of-sight). Galactic models have been used to simulate stellar densities <cit.>, kinematics <cit.>, or both <cit.>. Galactic models are typically comprised of three main components: the thin disk (cold component), the thick disk (warm component), and the halo. Each component is individually modeled in terms of its mixing fraction and kinematics. We created a model, dubbed the Low-mass Kinematics (LoKi) Galactic model[<https://github.com/ctheissen/LoKi>], to estimate the total number of stars we would expect to observe within a given volume, and their respective kinematics. The model incorporates a luminosity function <cit.> to select stars in proportion to their abundance in the Galaxy, in addition to simulating their positions and kinematics. We ran 100 realizations of the model over the entire simulated volume, and kept only stars with significant proper motions (dependent on stellar color and line-of-sight; see Appendix <ref>) that would have been included in the MoVeRS sample. The methods involved in building and using LoKi are described in detail in Appendix <ref>.

§.§ Extreme MIR Excess Fractions

Using the larger photometric sample from MoVeRS and the LoKi Galactic model, we were able to extend the findings of TW14. Using LoKi, we explored the occurrence of extreme MIR excesses as a function of color (a proxy for stellar mass) and Galactic height (a proxy for stellar age). This was done by simulating the total number of stars expected to be observed within the volume observed by SDSS. These simulations provide stellar counts and Galactic height distributions, which we used to investigate the occurrence of extreme MIR excesses in low-mass stars. TW14 compared the stars with MIR excesses to the entire W11 catalog to calculate the fraction of stars exhibiting an extreme MIR excess (∼0.4% of field M dwarfs), i.e., the “extreme MIR excess fraction" (the ratio of the number of stars exhibiting an extreme MIR excess to the total number of stars). Using the same parent population selection criteria as TW14 (i.e., using all 390,006 stars with J ⩽ 17), we calculated a global extreme MIR excess fraction from the MoVeRS sample of ∼0.1%. However, because MoVeRS is not a volume-complete catalog, these fractions are likely overestimates and need to be corrected using a Galactic model. In addition, as described in Section <ref>, we exclude a number of potentially real extreme MIR excesses. Without the ability to determine which of those stars harbor true excesses, as they fall within the statistical scatter of the parent population, the results in this section should be taken as lower limits. We used the LoKi Galactic model to simulate the number of stars expected in the observed footprint (see Appendix <ref> for details), and their distribution in the Galaxy. Using the model, we computed volume-complete fractions, i.e., we estimated the denominator: the number of stars for which we should have been able to detect an extreme MIR excess. We computed the global extreme MIR excess fraction from the model stellar counts using the mean value of the stellar counts across all 100 simulations, estimating an extreme MIR excess fraction of ∼0.02%. The model-complete MIR excess fraction is an order of magnitude smaller than that found by TW14, but still orders of magnitude larger than the extreme MIR excess fraction estimated for A–G type stars by <cit.>. We will discuss this further in Section <ref>. Galactic height is strongly correlated with stellar age for ensembles of stars. This is due to the fact that stars are born close to the Galactic plane and, over time, are dynamically heated away from the plane <cit.>. This method of assigning ages to ensembles of stars based on absolute distance from the Galactic plane is commonly referred to as “Galactic stratigraphy" <cit.>. TW14 identified a weak trend of decreasing MIR excess fraction with increasing stellar age. However, their sample was small and incomplete.
To further investigate the findings of TW14, we computed MIR excess fractions using the stars with extreme MIR excesses (584 stars in the full sample and two stars in the clean sample, Section <ref>; the numerator), and model stellar counts (the denominator) over the same volume as the SDSS observations, and with proper motions detectable by MoVeRS (dependent on stellar color and line-of-sight; see Appendix <ref>). Figure <ref> shows the model-corrected extreme MIR excess fractions as a function of absolute distance from the Galactic plane (|Z|). Each bin has two points corresponding to the 1st and 99th percentile values across all model runs, with error bars representing the largest and smallest binomial errors between the two percentiles. The fact that much of the sample is not at low Galactic latitudes should result in very few young stars. The estimated ages from Section <ref>, and the results from TW14, suggest that the vast majority of stars within SDSS at high Galactic latitudes are members of the field population (≫100 Myr). Figure <ref> shows a declining trend with Galactic height, with the majority of stars with extreme MIR excesses found within 100 pc of the Galactic plane. To assess the statistical significance of this trend, we performed a least-squares linear fit (of the form y = mx + b) to the average fraction in each bin, weighted by the average binomial uncertainty, finding a slope of m = (-6.836 ± 1.468)×10^-7 pc^-1. This indicates that younger field populations are more likely to have extreme MIR excesses, and that stars are less likely to host extreme MIR excesses as they age <cit.>. This also indicates that there is some typical age after which the mechanism responsible for creating an extreme MIR excess ceases to act. TW14 did not attempt to examine a stellar mass dependence of the MIR excess fractions. However, with the larger sample of extreme MIR excess candidates and the Galactic model, we were able to examine the MIR excess fractions as a function of r-z color (a proxy for stellar mass). Figure <ref> shows the fraction of stars exhibiting an extreme MIR excess as a function of r-z color. Again we fit a linear function to the trend, and found a slope of m = (1.486 ± 0.424) × 10^-4 mag^-1, indicating an upward trend. There is a slight distance (and hence age) bias in Figure <ref>, as bluer stars tend to be at greater distances (and hence older) than redder stars. This effect is due to SDSS observing primarily out of the plane of the Galaxy, which makes distance strongly correlated with vertical distance from the Galactic plane <cit.>. Furthermore, the vertical distribution of stars about the Galactic plane is strongly correlated with stellar age <cit.>, with older stellar populations found farther from the Galactic plane on average. The upward trend with redder colors is therefore consistent with Figure <ref>, as younger stellar populations tend to have larger extreme MIR excess fractions.
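The weighted linear fits quoted above can be sketched as follows; the binned fractions and binomial errors are placeholders, not the measured values.

```python
import numpy as np

# Placeholder binned excess fractions vs. Galactic height, with errors.
z = np.array([50.0, 150.0, 250.0, 350.0])          # bin centers, pc
frac = np.array([3.0e-4, 2.2e-4, 1.4e-4, 0.9e-4])
err = np.array([0.5e-4, 0.4e-4, 0.4e-4, 0.3e-4])   # binomial errors

# np.polyfit weights points by w (proportional to 1/sigma).
(m, b), cov = np.polyfit(z, frac, deg=1, w=1.0 / err, cov=True)
print(f"m = ({m:.3e} +/- {np.sqrt(cov[0, 0]):.3e}) pc^-1")
```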
To minimize selection effects and explore the interplay among extreme MIR excess fractions, stellar age, and stellar mass, we examined extreme MIR excess fractions as a function of absolute distance from the Galactic plane, binned in three r-z color regimes (Figure <ref>). The first bin (0.5 ⩽ r-z < 2) potentially suffers from selection effects due to the inherently large distances to these objects, dictated by the saturation limit of SDSS (see Figure <ref>), which places the majority of observed stars farther from the Galactic plane (76% with |Z| > 200 pc). Although the model attempts to recover some fraction of these stars, we implemented the same magnitude and proper motion cuts on the model sample; therefore both the model and our sample will suffer from a similar selection effect. The intermediate-mass stars within the sample (2 ⩽ r-z < 3.5) show a slight trend with |Z|, and these bins are likely to be relatively free from the selection effects affecting the other mass bins. The lowest-mass bin (3.5 ⩽ r-z < 5) has very few sources and likely does not sample a large enough volume to detect MIR excesses if excesses occur at similar rates across all stellar masses. The measured best-fit slopes for the three color bins, from bluest to reddest, are m = (-4.254 ± 0.788) × 10^-7 pc^-1, m = (-2.683 ± 1.389) × 10^-6 pc^-1, and m = (-3.358 ± 16.809) × 10^-6 pc^-1.

§ NON-SIGNIFICANT MIR EXCESSES REVISITED: A FURTHER INVESTIGATION INTO TIMESCALES

The strong trend of decreasing extreme MIR excess fraction with Galactic height indicates a trend with stellar age, and motivates further investigation. To explore whether the overall distribution of non-significant excess sources changes as a function of age, we examined the σ^' distribution as a function of |Z| for the full and clean samples, using stars with 2 ⩽ r-z < 3.5 to minimize selection effects due to distance. Figure <ref> shows how the distribution of σ^' changes as a function of |Z|. To assess whether there is a significant difference between the distributions in the full and clean samples, we investigated the skew of each sample distribution. The underlying hypothesis is that all samples come from a nearly Gaussian parent distribution, with the stars with excesses skewing that parent population to more positive σ^' values. To statistically assess the skew of each distribution, we took 100,000 bootstrap samples of each distribution and measured the skew of each resulting distribution. We report the mean values, along with the 68% (16th and 84th percentiles) and 95% (2.5th and 97.5th percentiles) confidence intervals, in Table <ref>. The full sample shows a trend towards more excess sources (larger skewness) at greater distances from the Galactic plane. This is most probably due to the fact that at larger distances we are more sensitive to stars with excesses. The clean sample should be devoid of selection effects associated with distance, at the expense of a smaller spread in Galactic height. In Figure <ref> we see a decrease in the number of high-σ^' sources (MIR excess sources) at greater Galactic heights, which is also illustrated by the decreasing skew in Table <ref>, although the observed decrease is a tentative result. The decrease in skewness would be consistent with age evolution in all of the stars with MIR excesses, not only the stars exhibiting extreme MIR excesses.

Sample Skewness

Sample   |Z| Range    Skewness^a
Full     30–60 pc     0.72 +0.08 (0.15) / -0.08 (0.16)
Full     60–90 pc     0.83 +0.07 (0.13) / -0.07 (0.14)
Full     90–120 pc    1.07 +0.07 (0.14) / -0.07 (0.14)
Clean    30–60 pc     0.39 +0.09 (0.17) / -0.09 (0.20)
Clean    60–90 pc     0.34 +0.07 (0.13) / -0.07 (0.14)
Clean    90–110 pc    0.18 +0.08 (0.16) / -0.08 (0.17)

^a Confidence intervals correspond to 68% confidence, with 95% confidence in parentheses.
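The bootstrap procedure can be sketched as follows, with a synthetic stand-in for the σ^' values (the text uses 100,000 resamples; fewer are drawn here for speed).

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
# Synthetic stand-in: Gaussian core plus a positive tail of excesses.
sigma_prime = np.concatenate([rng.normal(0.0, 1.0, 3000),
                              rng.exponential(1.0, 200)])

n_boot = 10_000
boots = np.array([skew(rng.choice(sigma_prime, sigma_prime.size))
                  for _ in range(n_boot)])
print(f"skew = {boots.mean():.2f}",
      f"68% CI {np.percentile(boots, [16, 84]).round(2)}",
      f"95% CI {np.percentile(boots, [2.5, 97.5]).round(2)}")
```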
§ CONCLUSIONS AND DISCUSSION

The large sample of low-mass stars contained within the MoVeRS catalog has allowed us to compile the largest sample to date of low-mass field stars exhibiting large MIR excesses (584 stars). We examined the dependence of MIR excess occurrence on stellar mass (using r-z color as a proxy) and stellar age (using Galactic height as a proxy). The sample is divided into a “full" sample (584 stars), consisting of stars with high-fidelity, high-significance MIR excess detections, and a “clean" sample (two stars), which also contains high-fidelity, high-significance stars with excesses, but is magnitude (volume) limited. To build the samples, we implemented cuts to ensure relatively bright sources with high-S/N WISE observations. These stars were then visually inspected to reduce contaminants (e.g., crowded fields). The final samples, including both stars with and without excesses, were made up of 20,502 stars (full sample; 584 stars with extreme MIR excesses) and 5,786 stars (clean sample; two stars with extreme MIR excesses). Stars with extreme MIR excesses were selected using modified empirical criteria from TW14. A cross-match to the Spitzer Enhanced Imaging Products catalog identified ten stars and verified their WISE MIR excesses. The full sample covers the range 0.5 ⩽ r-z < 5, covering all spectral subtypes within the M dwarf regime (0.1 M_⊙≲ M_∗≲ 0.7 M_⊙). The clean sample is biased towards later-spectral-type stars (2 ⩽ r-z < 5; 0.1 M_⊙≲ M_∗≲ 0.35 M_⊙), and was chosen to minimize biases due to distance/magnitude and WISE sensitivity.

Spectroscopic observations of 25 stars in the sample, taken by SDSS and with the DCT, support the hypothesis that the sample is made up of field stars and confirm the selection of M dwarfs, although one star has characteristics similar to a carbon dwarf, indicating a contamination rate of ∼4%. Many carbon stars are known to show evidence for circumstellar material <cit.>, potentially making us more likely to select them in this study, and indicating that the contamination rate for the MoVeRS catalog itself is likely much less than 4%. For the remainder of the stars with spectra, the vast majority lack Hα emission, consistent with an inactive, older (≫100 Myr) field population. Furthermore, none of the stars have measurable Li i absorption, as would be expected for stars with ages < 100 Myr. Since the magnetic activity lifetimes of lower-mass stars are one to several Gyr, and none of the stars had detectable Li i absorption, the parent population likely has an average age > 1 Gyr. The samples and their derived quantities are available in the electronic format of this manuscript.

Our primary finding is a strong correlation between the fraction of field stars exhibiting an extreme MIR excess and absolute distance from the Galactic plane. Although the bins with higher-mass stars suffer selection effects and are biased towards stars farther from the Galactic plane (due to the brightness of these stars and the saturation limits of SDSS), and the lowest-mass stars are biased towards small distances, and therefore small volumes, we find a significant decreasing trend in the fraction of stars with MIR excesses at larger Galactic heights, specifically for the intermediate-mass stars, which are largely unbiased. These data strongly support an age dependence for the presence of extreme MIR excesses. We also find that the MIR excess fraction correlates with r-z color, indicating a possible dependence on stellar mass. Giant collisions between large planetesimals or terrestrial planets are expected to create a collisional cascade that may last for ∼100,000 years <cit.>.
If we assume a typical stellar age for the sample of 1 Gyr, and a timescale over which a MIR excess can be detected of 0.1 Myr, then only 0.01% of the sample should show a detectable excess at any given time, which corresponds to ∼0.5 stars for the clean sample, roughly consistent with our findings. This assumes a volume-complete sample, and that the mechanism creating MIR excesses can act at any time during the lifetime of the star. Limiting the timescale over which the mechanism can act (to less than 1 Gyr), or increasing the lifetime of the collisional products, would increase the predicted number of stars observed to have an extreme MIR excess. Although we are unable to pin down a distinct timescale over which a collision may occur, our findings are consistent with a short lifetime for the collisional cascade to create enough dust for a significant MIR detection. Additionally, multiple collisions can extend the lifetime of the collisional products past 100,000 years.
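The arithmetic of this argument is compact enough to verify directly:

```python
# Expected number of stars caught in the detectable phase: N * t_det/t_age.
t_detect = 1.0e5        # yr, visibility of the collisional cascade
t_age = 1.0e9           # yr, typical stellar age assumed above
n_clean = 5786          # clean-sample size

fraction = t_detect / t_age
print(f"{fraction:.2%} of the sample -> ~{n_clean * fraction:.1f} stars")
# 0.01% of 5,786 stars -> ~0.6 stars, consistent with the ~0.5 quoted above.
```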
Using the clean sample, which is relatively unbiased and complete, we reinvestigated the collision rate found in TW14. The estimated fraction of stars undergoing collisions is (3.5 ± 1.7) × 10^-4, an order of magnitude smaller than the TW14 value. However, when we consider the different selection criteria for the parent population (34%, from Section <ref>), and the more stringent criteria applied for a star to be included in the extreme MIR excess sample (16%, from Section <ref>), we find the TW14 fraction of 0.4% is reduced to 0.02%, consistent with this study. This fraction is still two orders of magnitude larger than the number estimated by <cit.> for A–G spectral type stars. Our updated fraction gives a collision rate of ∼9 impacts per star up to its current age. This value is consistent with the findings of TW14 that planetary collisions occur more frequently around low-mass stars. Investigating the continuous distribution of stars with excess MIR flux, rather than simply the high-significance sample, we estimate there are potentially 80 stars with actual extreme MIR excesses excluded from our full sample, and one star excluded from the clean sample. Non-extreme MIR excesses may represent a more evolved state of the aforementioned collisional disks, at the end of the lifetime of a collisional cascade where the disk is becoming optically thin, or perhaps smaller collisions. The addition of these stars would imply that the estimated fraction of stars undergoing collisions is underestimated by a factor of ∼4, indicating that collisions may be even more frequent in low-mass stellar systems.

Planetary collisions have also been put forth to explain a dichotomy found in the Kepler data. Kepler has found a wealth of planetary systems around low-mass stars, both singly-transiting and multi-transiting systems. Numerous studies have used ensemble statistics to reproduce the Kepler multi-planet observations with success <cit.>. However, as noted by <cit.>, the best-fitting models under-predict the number of observed singly-transiting systems by a factor of ∼2. <cit.> postulate that a second population of systems with higher inclination dispersions and/or lower multiplicities may explain the dearth of singly-transiting systems. This proposed dual population has become known as the “Kepler dichotomy." Recently, <cit.> simulated planetary systems with a range of mutual inclinations and multiplicities to replicate the Kepler results for the M dwarf population. <cit.> found that a high multiplicity (N ≈ 7 planets per star) with a typical mutual inclination of 2^∘ could produce a planetary population in good agreement with the Kepler multi-planet yield, both with and without invoking a range of eccentricities. <cit.> accounted for the dearth of singly-transiting systems by invoking a second population of planetary systems, either with a single planet, or with 2–3 planets and a large scatter in mutual inclination (4^∘–9^∘). The best mixture between these two populations was found to be ∼50%. <cit.> discuss two possible explanations for the Kepler dichotomy: initial formation conditions and dynamical disruption. In the former scenario, <cit.> posit that, for the case of solar-mass stars, the formation, migration, or scattering of a giant planet could suppress planet formation. This is similar to the Grand Tack model <cit.>, which was put forward to explain the anomalously low mass of Mars in our own solar system. However, the lack of massive planets found orbiting most low-mass stars makes this an unlikely scenario. <cit.> used N-body simulations of late-stage planet formation to attempt to reproduce the Kepler observations, and found that two separate disk surface mass densities could reproduce the dichotomy. However, it is unclear whether two distinct surface density profiles are observationally motivated. Dynamical disruption as an explanation for the Kepler dichotomy has also been explored through the use of models. Simulations of tightly packed planetary systems <cit.> have shown that coplanar, high-multiplicity planetary systems are metastable, and are disrupted on Gyr timescales. Furthermore, in systems that experience dynamical instability, the most likely outcome is two planets colliding once they are excited onto crossing orbits <cit.>. Such collisions would likely result in massive amounts of orbiting dust, and potentially planets scattered to higher inclinations. Combined with the findings of <cit.>, that suppression of giant planets can extend the timescale over which collisions can occur to Gyr, late-occurring giant impacts are a plausible explanation for the Kepler dichotomy. Our observed extreme MIR excesses support the hypothesis that the Kepler dichotomy arises from late-occurring (> 1 Gyr) giant impacts due to dynamical disruption. Planetary collisions between orbiting planets with small semi-major axes would produce the massive dust populations inferred from these extreme MIR excesses. The high frequency of these impacts (relative to higher-mass stars) has strong implications for the habitability of terrestrial planets around low-mass stars. This analysis motivates the search for similar extreme MIR excesses in higher- and lower-mass stellar populations. The upcoming Transiting Exoplanet Survey Satellite <cit.> will be instrumental in testing the evolution-versus-formation hypothesis for the Kepler dichotomy through a larger sample of low-mass stars than Kepler observed. TESS, and to a lesser extent the Kepler two-wheel mission (K2), will sample larger distributions in Galactic height and rotation period (both tracers of stellar age) to further constrain the timescale over which planetary collisions occur. Additionally, the upcoming James Webb Space Telescope <cit.> will allow us to constrain the mineralogy of the disks detected with WISE, which can distinguish disks formed through violent collisions from disks made of differentiated bodies, such as asteroids.
The authors would like to thank the anonymous referee for extremely helpful comments and suggestions which greatly improved the manuscript. The authors would like to thank Adam Burgasser, Aurora Kesseli, Daniella Bardalez Gagliuffi, Julie Skinner, Saurav Dhital, Dylan Morgan, and Sebastian Pineda for their helpful discussions. C.A.T. would like to acknowledge the Ford Foundation for his financial support. A.A.W. acknowledges funding from NSF grants AST-1109273 and AST-1255568. A.A.W. and C.A.T. further acknowledge the support of the Research Corporation for Science Advancement's Cottrell Scholarship. This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. NNX16AF47G issued through the Astrophysics Data Analysis Program. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication also makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. These results made use of Lowell Observatory's Discovery Channel Telescope. Lowell operates the DCT in partnership with Boston University, Northern Arizona University, the University of Maryland, and the University of Toledo. Partial support of the DCT was provided by Discovery Communications.
The authors are also pleased to acknowledge that much of the computational work reported on in this paper was performed on the Shared Computing Cluster which is administered by Boston University's Research Computing Services (<www.bu.edu/tech/support/research/>). This research made use of Astropy, a community-developed core Python package for Astronomy <cit.>. Plots in this publication were made using Matplotlib <cit.>.

§ ESTIMATING STELLAR PARAMETERS

§.§ Markov Chain Monte Carlo Method for Stellar Parameters

We calculated the parameters of the orbiting dust (D_dust and M_dust) using our estimates of the fundamental stellar parameters (T_eff and R_∗). We estimated stellar parameters using the BT-Settl models with solar abundances from <cit.>, and mixing lengths calibrated on 2-D/3-D radiative hydrodynamic simulations <cit.>. These models span temperatures ranging between 1200 K and 7000 K in steps of 100 K or 50 K, dependent on surface gravity, and log g values between 2.5 and 5.5 in steps of 0.5 dex, with metallicities and alpha abundances set to solar values. Using a previous version of the CIFIST models, <cit.> found that the deviation between temperatures based on model comparisons to optical spectra and those derived empirically was 57 K. Producing the best model fits to stellar data requires probing the parameter space to fit for T_eff, [M/H], log g, α-abundance, and the normalizing factor in the form of the square of the ratio of the stellar radius over the distance (i.e., F_λ∝ L_λ/d^2). To reduce the parameter space for fitting models to the millions of stars in the MoVeRS sample, a few basic assumptions were made that should not overly bias our results. Metallicity was set to solar abundances, removing this parameter from the search space. To further reduce the complexity of the algorithm, the normalization factor was removed from the parameter space by scaling the model fluxes to the measured z-band values (a similar process was used in TW14 using the K_s-band), leaving only two parameters for which to solve (T_eff and log g). We used the emcee package <cit.>, a Python implementation of the <cit.> affine invariant sampler, to explore the remaining stellar parameter space. Since the BT-Settl models are not continuous across the parameter space, we interpolated between grid points using a nearest-neighbor method for model selection. For each step in the MCMC, the log-likelihood is given as,

lnℒ(Θ|X, σ) = -1/2∑_n=1^N[ (Θ_n - X_n)^2/σ_n^2 + ln (2 πσ_n^2)],

where Θ is a vector of length N containing the model predicted, scaled fluxes for a given set of stellar parameters (T_eff and log g), X is a vector containing the observed fluxes, σ is a vector containing the measurement errors for the observed fluxes, and the length N pertains to the number of bands in which data were available. We chose uniform priors across the parameter space, and assumed all the parameters were normally distributed. Instead of collecting the entire posterior probability distributions for each of the stars, we calculated the 16th, 50th, and 84th percentiles of the distributions for both T_eff and log g. We plot the 50th percentile values as a function of r-z color in Figure <ref>. The T_eff estimates follow the expected trend with r-z color. The width of the distribution is likely due to different metallicity classes <cit.>.
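To make the fitting procedure concrete, below is a minimal Python sketch of the likelihood and sampler setup using emcee. The flux model, number of bands, and synthetic data are illustrative stand-ins rather than the actual BT-Settl grid lookup; only the form of the log-likelihood, the prior bounds, and the percentile summary follow the text above.

```python
import numpy as np
import emcee

rng = np.random.default_rng(42)
N_BANDS = 7  # assumed number of photometric bands with data

def model_fluxes(teff, logg):
    # Hypothetical smooth stand-in for the nearest-neighbor BT-Settl grid,
    # already scaled to the measured z-band flux.
    bands = np.arange(N_BANDS)
    return (teff / 3000.0) ** (1.0 + 0.1 * bands) * (1.0 + 0.01 * logg)

def log_likelihood(theta, x, sigma):
    # The Gaussian log-likelihood given in the equation above.
    return -0.5 * np.sum((theta - x) ** 2 / sigma ** 2 + np.log(2.0 * np.pi * sigma ** 2))

def log_posterior(params, x, sigma):
    teff, logg = params
    # Uniform priors over the BT-Settl grid limits quoted in the text.
    if not (1200.0 <= teff <= 7000.0 and 2.5 <= logg <= 5.5):
        return -np.inf
    return log_likelihood(model_fluxes(teff, logg), x, sigma)

# Synthetic "observed" star, then a short MCMC run.
sigma = 0.02 * model_fluxes(3100.0, 5.0)
x_obs = model_fluxes(3100.0, 5.0) + rng.normal(0.0, sigma)
p0 = np.column_stack([rng.uniform(2500.0, 4000.0, 32), rng.uniform(4.0, 5.5, 32)])
sampler = emcee.EnsembleSampler(32, 2, log_posterior, args=(x_obs, sigma))
sampler.run_mcmc(p0, 2000)

# Summarize each posterior by its 16th, 50th, and 84th percentiles, as in the text.
flat = sampler.get_chain(discard=500, flat=True)
print(np.percentile(flat, [16, 50, 84], axis=0))
```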
Using an F test, we compared different order polynomial relationships, and found that the best fit to the observed trend between T_eff and r-z was a 6^th-order polynomial,

T_eff = a+bX+cX^2+dX^3+eX^4+fX^5+gX^6,

where the coefficients are listed in Table <ref>. We find good agreement between our relationship and <cit.>, except at the extremes, where the <cit.> fits are not well constrained.

§.§ Estimating Stellar Radii

Stellar radii can be inferred using distance estimates (Section <ref>), and the scaling factor of the best-fit model to the measured photometry (see Section <ref>). Figure <ref> shows the estimated stellar radii as a function of r-z color. We again fit a polynomial relationship between R_∗ and r-z color and find that a 6^th-order polynomial provides the best fit (using an F test). Our polynomial relationship is shown in Figure <ref> and described by an equation similar to Equation (<ref>), with coefficients listed in Table <ref>. The scatter we find for the reddest objects is likely an artifact of extrapolating the B10 relationships past their valid data range. The relationship between effective temperature and stellar radii using our polynomial equations is shown in Figure <ref>. The relationship follows similar trends to the relationships of <cit.> and <cit.>. The upturn in radii at cooler temperatures is an artifact of the B10 photometric parallax relationship, which is not well-constrained for the reddest stars.

Table: Polynomial Relationship Coefficients

Y          X    a        b         c        d         e         f         g           σ      χ^2_ν  Range
T_eff (K)  r-z  6691.90  -6000.26  5135.52  -2513.18  679.434   -94.2185  5.18804     47.41  1.39   0.5 ⩽ r-z ⩽ 4.84
R_∗ (R_⊙)  r-z  0.41895  1.3345    -1.9848  1.1474    -0.34214  0.052184  -0.0032136  0.027  0.022  0.9 ⩽ r-z ⩽ 4.30

§ LOW-MASS KINEMATICS (LOKI) GALACTIC MODEL

§.§ Stellar Density Profile

We implemented a galactic model framework similar to that used in <cit.>. In the model, the stellar density for each galactic component is given in terms of standard galactic coordinates. For the thin (cold component) and thick (warm component) disks, the stellar density profiles are given by,

ρ_thin(R,Z) = ρ(R_0, 0)exp(-|Z|/H_thin) ×exp(-|R - R_0|/L_thin),
ρ_thick(R,Z) = ρ(R_0, 0)exp(-|Z|/H_thick) ×exp(-|R - R_0|/L_thick),

where H is the scale height above and below the plane, and L is the scale length within the plane. The halo stellar density is expressed as a bi-axial power-law ellipsoid,

ρ_halo(R,Z) = ρ(R_0, 0) (R_0/√(R^2 + (Z/q)^2))^r_halo,

where q is the halo flattening parameter, and r_halo is the halo density gradient. In each of the above formulas, R is the Galactic radius, R_0 is the Sun's distance from the Galactic center (8.5 kpc), and Z is the Galactic height. To obtain the total stellar density at a specific radius and height in the Galaxy, all three density profiles weighted by the fraction of all stars in each component are summed,

ρ(R,Z) = f_thin·ρ_thin(R, Z) + f_thick·ρ_thick(R, Z) + f_halo·ρ_halo(R, Z),

with f_thin + f_thick + f_halo = 1. The local stellar density scaled to the Galactic plane, ρ(R_0=8.5 kpc, Z=0 pc), was obtained by integrating the bias-corrected, single-star luminosity function (LF) from B10 for low-mass stars from SDSS.
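As a concrete illustration of how these components combine, the sketch below implements the density profile in Python using the parameter values adopted in the table that follows; the normalization ρ(R_0, 0) is left as an input, to be set from the integrated LF.

```python
import numpy as np

R0 = 8500.0                        # Sun's Galactocentric distance (pc)
H_THIN, L_THIN = 300.0, 3100.0     # thin-disk scale height and length (pc)
H_THICK, L_THICK = 2100.0, 3700.0  # thick-disk scale height and length (pc)
F_THICK, F_HALO = 0.04, 0.0025     # component fractions in the solar neighborhood
F_THIN = 1.0 - F_THICK - F_HALO
R_HALO, Q = 2.77, 0.64             # halo density gradient and flattening

def stellar_density(R, Z, rho0=1.0):
    """Total density at Galactic radius R and height Z (pc), relative to rho0 = rho(R0, 0)."""
    thin = np.exp(-np.abs(Z) / H_THIN) * np.exp(-np.abs(R - R0) / L_THIN)
    thick = np.exp(-np.abs(Z) / H_THICK) * np.exp(-np.abs(R - R0) / L_THICK)
    halo = (R0 / np.sqrt(R**2 + (Z / Q) ** 2)) ** R_HALO
    return rho0 * (F_THIN * thin + F_THICK * thick + F_HALO * halo)

# The density falls to ~6% of the local value 1 kpc above the plane at the solar circle.
print(stellar_density(R0, 0.0), stellar_density(R0, 1000.0))
```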
Table <ref> contains the adopted disk parameters for the model.

Table: Galactic Model Parameters

Component   Parameter    Description           Value
            f_thin       Fraction^a            1 - f_thick - f_halo
Thin disk   H_thin       Scale height          300 pc
            L_thin       Scale length          3100 pc
            f_thick      Fraction^a            0.04
Thick disk  H_thick      Scale height          2100 pc
            L_thick      Scale length          3700 pc
            f_halo       Fraction^a            0.0025
Halo        r_halo       Density gradient      2.77
            q (=c/a)^b   Flattening parameter  0.64

The parameters were measured using M dwarfs for the disk (bias corrected values; B10) and MS turn-off stars for the halo <cit.> in the SDSS footprint. ^a Evaluated in the solar neighborhood. ^b Assuming a bi-axial ellipsoid with axes a and c.

§.§ Stellar Densities and Distance Ranges

Perhaps the most fundamental parameter required in the model is the local stellar density. Many studies have measured the local stellar density ρ(R_0,0), scaled to the Galactic plane <cit.>. Stellar number densities are commonly estimated through luminosity functions <cit.>. We used the low-mass LF from B10 since the MoVeRS catalog (and hence, the sample) is built from the same photometric criteria used to create the B10 LF. However, as stated above, the B10 photometric parallax relationships extend to absolute magnitudes fainter than the B10 LF; therefore, care must be taken in obtaining stellar densities for the reddest stars. The B10 LFs are given for both M_r and M_J. M_J is a commonly used metric for the LF; however, the B10 photometric parallax relationships map SDSS colors to M_r. <cit.> gives a relationship between M_r and M_J, which extends two magnitudes fainter in M_r than the B10 M_r LF. The <cit.> relationship also reaches to M_J ≈ 12, two magnitudes deeper than the B10 M_J LF (M_J ≲ 10). Using the M_J LF from <cit.>, which begins where the B10 M_J LF ends, fainter M_r magnitudes were mapped to M_J magnitudes (using this relationship), and stellar densities were estimated past the limits of the B10 LFs. The stellar densities are shown in Table <ref>. The distance ranges are dictated by both the SDSS saturation limits and the maximum distance at which we would find an extreme MIR excess. For the lower distance limits, we binned the MoVeRS sample in 0.5 magnitude bins in r-z color and used the minimum distance value in each bin for the lower limit. The upper distance limit corresponds to the maximum MIR excess value above the photospheric value, since we can see an extremely large excess out to a farther distance than a smaller MIR excess. Figure <ref> shows the distribution of MIR excess values above the photosphere, and we found that 95% of the excesses had values up to 12 times the photospheric value. Using Equation (<ref>) scaled ∼2.7 magnitudes fainter (12 times greater than the expected photospheric flux), we derived new distance limits using the B10 photometric parallax relationships. The distance limits are shown in Table <ref>. Since the B10 M_r photometric parallax relationship did not go as red in r-z as the sample, we used the <cit.> 5 Gyr relationship between 4 < r-z ⩽ 5. The <cit.> model photometric parallax relationship is consistent with other photometric parallax relationships <cit.> to the reddest r-z extent that it can be compared to empirical data (see B10 Figure 9).
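To illustrate how the excess extends the upper distance limits: a source 12 times brighter than its photosphere is 2.5 log_10(12) ≈ 2.7 mag brighter, pushing the limiting distance out by a factor of 10^(2.7/5) ≈ 3.5. The limiting magnitude in the sketch below is an illustrative assumption (the actual limits combine the B10 relationships with the survey sensitivity); only the scaling factor is the point.

```python
import numpy as np

def photometric_distance(m, M_abs):
    """Distance (pc) from apparent magnitude m and absolute magnitude M_abs."""
    return 10.0 ** ((m - M_abs + 5.0) / 5.0)

m_lim = 22.0                    # assumed survey limiting magnitude (illustrative)
M_r = 11.18                     # M_r at r-z = 2.0, from the table below
excess = 2.5 * np.log10(12.0)   # ~2.7 mag for a 12x photospheric excess
d0 = photometric_distance(m_lim, M_r)
d1 = photometric_distance(m_lim + excess, M_r)
print(d0, d1, d1 / d0)          # the ratio is ~3.5, independent of m_lim and M_r
```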
Table: Galactic Model Input Ranges

r-z           M_r               ρ(R_0, 0) (stars pc^-3)  Distance (pc)
[0.5, 1.0)    [6.52, 8.01)      [0.00287, 0.00289]       [390, 1100]
[1.0, 1.5)    [8.01, 9.59)      [0.00257, 0.00259]       [215, 780]
[1.5, 2.0)    [9.59, 11.18)     [0.00677, 0.00680]       [90, 520]
[2.0, 2.5)    [11.18, 12.74)    [0.01005, 0.01010]       [60, 345]
[2.5, 3.0)    [12.74, 14.19)    [0.00657, 0.00660]       [35, 240]
[3.0, 3.5)    [14.19, 15.46)    [0.00489, 0.00493]       [15, 165]
[3.5, 4.0)    [15.46, 16.50)    [0.00461, 0.00464]       [10, 125]
[4.0, 4.5)^a  [16.50, 17.50)^b  [0.00143, 0.00146]       [10, 105]
[4.5, 5.0)^a  [17.50, 18.50)^b  [0.00086, 0.00089]       [10, 105]

^a This color range falls outside the limits of the B10 M_r(r-z) relationship. ^b Values estimated from the 5 Gyr isochrone from <cit.>.

§.§ Stellar Kinematics

Stellar kinematics are much more difficult to constrain than stellar densities, in part due to the difficulty in obtaining 3-dimensional kinematics of stars. Many studies have measured the mean velocities of stars as a function of Galactic height, and the velocity dispersions for the thin (cold component) and thick (warm component) disks, along with the halo <cit.>. An in-depth prescription of the kinematical model we used can be found in D10. Here we summarize the model, and explain some of the important differences in our specific model. For a given stellar population, the average stellar kinematics can be represented in Galactic cylindrical coordinates by the following equations:

⟨ V_r(Z)⟩ = 0,
⟨ V_θ(Z)⟩ = V_circ - V_a - f(Z),
⟨ V_z(Z)⟩ = 0,

where V_r, V_θ, and V_z are the velocities in the radial, circular, and perpendicular directions, respectively. V_circ is the circular velocity, taken as 240 km s^-1 <cit.>. The V_a term is due to interactions that stars undergo over their lifetimes, which cause circular orbits to become more eccentric and more inclined to the Galactic plane. These interactions cause the velocity component along the direction of Galactic rotation to lag the local standard of rest (LSR) for older stellar populations, a phenomenon known as asymmetric drift. V_a is approximately equal to 10 km s^-1 for low-mass stars in SDSS (D10). The last term for V_θ is a polynomial relationship between the average velocity and Galactic height, given by f(Z) = a|Z| - b|Z|^2 km s^-1, where a = 0.013 km s^-1 pc^-1 and b = 1.56 × 10^-5 km s^-1 pc^-2 (taken from D10). This last term accounts for a mixture of thin and thick disk stars, with the ratio highly dependent on Galactic height. For the velocity dispersions, we chose to explore different functional forms rather than a power law as was used in D10, which gives zero dispersion at the Galactic plane. Using results from the kinematic study of <cit.>, we found that velocity dispersions grew approximately linearly with Galactic heights up to ∼1 kpc in all three velocity components for both thin and thick disk stars. The <cit.> sample is an adequate representation of the candidate stars since they all fall within this Galactic height limit. The linear fits to the velocity dispersions take the form,

σ(Z) = k + n|Z|,

where the values of k and n are defined in Table <ref>. For halo stars, we used velocity dispersion values from <cit.>, using the dispersion relations taken at the Galactic plane (Z=0 pc). These velocity distributions can then be sampled to obtain expected galactic cylindrical V_R, V_θ, and V_Z velocity distributions for samples of stars at any location in the Galaxy.
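A minimal sketch of how these distributions are sampled for the thin disk, using the mean-velocity relation above and the linear dispersion fits (the k and n values are the thin-disk entries from the table below):

```python
import numpy as np

rng = np.random.default_rng(0)

V_CIRC, V_A = 240.0, 10.0   # circular velocity and asymmetric drift (km/s)
A, B = 0.013, 1.56e-5       # f(Z) coefficients, taken from D10
K = {"R": 22.43, "T": 13.92, "Z": 10.85}  # thin-disk k (km/s)
N = {"R": 0.04, "T": 0.03, "Z": 0.03}     # thin-disk n (km/s/pc)

def sample_thin_disk_velocities(Z, size=1):
    """Draw (V_R, V_theta, V_Z) in km/s for thin-disk stars at Galactic height Z (pc)."""
    f_Z = A * np.abs(Z) - B * np.abs(Z) ** 2
    mean = {"R": 0.0, "T": V_CIRC - V_A - f_Z, "Z": 0.0}
    return {c: rng.normal(mean[c], K[c] + N[c] * np.abs(Z), size) for c in "RTZ"}

print(sample_thin_disk_velocities(300.0, size=3))
```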
These V_R, V_θ, and V_Z velocities can be transformed into UVW velocities, which can then be transformed into proper motions and radial velocities using the methods of <cit.>.

Table: Galactic Kinematics

Component     Velocity Component  k (km s^-1)  n (km s^-1 pc^-1)
Thin disk^a   V_R                 22.43        0.04
              V_θ                 13.92        0.03
              V_Z                 10.85        0.03
Thick disk^a  V_R                 64.04        0.07
              V_θ                 39.41        0.09
              V_Z                 44.76        0.02
Halo^b        V_R                 135          ...
              V_θ                 85           ...
              V_Z                 85           ...

^a The parameters were measured using M dwarfs from <cit.> for the thin and thick disk components. ^b Halo components were taken from <cit.>, using the values for the bins closest to the Galactic plane.

§.§ Model Comparisons: SDSS Source Counts

To assess the validity of the model, we compared stellar counts from the model against counts from SDSS for all objects with colors similar to those expected for low-mass stars. Specifically, we obtained source counts for 1^∘× 1^∘ size bins within the entire SDSS footprint, and required the following criteria (taken from ):

* Objects were primary sources within the PhotoObjAll table (mode = 1),
* Objects had point-source-like morphologies within the PhotoObjAll table (type = 6),
* i < 22,
* z < 21.2,
* r-i ⩾ 0.3,
* i-z ⩾ 0.2, and
* 16 < r < 22.

To compare SDSS source counts to the model, we integrated the B10 M_r LF to get a total stellar density. Next, we integrated the model in 1^∘× 1^∘ size bins out to a distance of 2 kpc, the estimated depth of the B10 M_r LF. A comparison between the stellar counts and SDSS source counts is shown in Figure <ref>. The model has better than 90% agreement with SDSS at high Galactic latitudes. The model produces more stars in regions at the edges of the SDSS stripes, where we expect SDSS to be incomplete. Close to the Galactic plane, SDSS has a much higher number of sources. This is most likely due to bluer sources that are reddened and pulled into the color selection criteria from the higher extinction environment. Considering that the input parameters for the model are based on SDSS data, it is not surprising that the model and SDSS source counts agree to such a high degree. Further comparisons must be made with independent observations to verify the model.

§.§ Model Comparison: RECONS Sample

The REsearch Consortium On Nearby Stars <cit.> has been compiling a sample of the low-mass stars within ∼25 pc in the southern hemisphere. The current realization of the RECONS sample was published by <cit.>, and contains 1748 systems with an M dwarf primary and with parallax measurements (trigonometric or photometric). These stars all have significant proper motions (μ⩾ 180 mas yr^-1), a criterion imposed to remove possible giant stars. The completeness of this sample is unknown, but extrapolating results from the 5 pc sample, <cit.> estimate their 25 pc sample to be between 48%–77% volume complete. We chose to simulate a 3600 deg^2 patch of sky away from the Galactic plane (0^∘⩽α⩽ 60^∘ and -60^∘⩽δ⩽ 0^∘). Since the RECONS sample has parallax measurements with a variety of precisions, we applied a 20% normal uncertainty to the simulated stars and kept stars within 25 pc. We ran 1000 realizations of the model over the volume listed above using the full density computed from integrating the B10 single-star r-band LF. Our results compared to the RECONS sample are shown in Figure <ref>. Both the model distributions of distances and proper motions follow the observed distributions up to the survey limits.
Using the model to estimate the incompleteness within the volume probed, we estimate the RECONS sample to be 74% complete using the 95^th percentile values. The proper motion distribution indicates that the majority of missing stars have small proper motions.

§.§ Model Comparison: SUPERBLINK Sample

The SUPERBLINK survey <cit.> is a proper-motion- and magnitude-limited survey. For the comparison, we used the bright M dwarf sub-catalog <cit.>. This catalog has a magnitude limit of J < 10 and a proper motion limit of μ > 40 mas yr^-1. The completeness for stars in the northern hemisphere is estimated to be ≈ 90%. To properly simulate this sample, we were required to simulate the magnitude limits in the form of distance limits, and distance uncertainties. The J < 10 limit was implemented using the J-band LF from B10, and calculating the distance for each M_J bin using a limiting magnitude of J = 10. We integrated out to a distance of 200 pc, although 80% of the stars in the <cit.> sample have distances ⩽ 75 pc. This larger simulated maximum distance was chosen because distances were convolved with uncertainties prior to implementing a distance cut of 65 pc (comparing only to the stars with d ⩽ 65 pc). The quoted distance uncertainty in the photometric parallax relationship used in <cit.> is between 20%–50%. To determine the best uncertainty to fold into the distances, we ran small batches of simulations using different normally distributed uncertainties (between 20%–50%), and compared their distance distributions to SUPERBLINK. We found that 30% uncertainty gave the expected trends in the distance distributions. Again, we simulated a 3600 deg^2 patch of sky away from the Galactic plane (160^∘⩽α⩽ 220^∘ and 0^∘⩽δ⩽ 60^∘) and ran 1000 realizations. Figure <ref> shows the SUPERBLINK distributions and the model results, along with the 5^th and 95^th percentile confidence intervals. We can again estimate a level of completeness using the 95^th percentile values; however, caution should be taken as the uncertainties folded into the simulations may be different from the actual uncertainties within the SUPERBLINK survey. The estimated completeness level for the simulated volume is 65%, with the majority of missing stars at smaller proper motions below the survey limit. As is shown in Figure <ref>, the completeness of SUPERBLINK should be extremely high for the largest proper motion stars. However, towards the proper motion limit of SUPERBLINK, the completeness drops off. This is to be expected as smaller proper motions are more difficult to measure to high precision. Some of this incompleteness may be accounted for if measurement uncertainty tends to scatter stars towards higher proper motions. However, there still appears to be a large population of nearby stars with small proper motions that has gone relatively undetected due to the requirement of larger proper motions (similar to the comparison with the RECONS sample). The complete SUPERBLINK sample (without the J < 10 criterion) will likely resolve much of this incompleteness when some of the fainter stars with smaller proper motions are added to the sample. The Gaia <cit.> collaboration recently made Data Release 1 <cit.>, which has a proper motion precision of ∼1 mas yr^-1 for non-Hipparcos Tycho-2 stars <cit.>. However, the final data release for Gaia is expected to have a precision better than 0.1 mas yr^-1. Gaia should detect all of the nearby (⩽ 60 pc), earliest-type M dwarfs, and lower-mass objects at closer distances.
However, Gaia will not be able to detect the lowest-mass M dwarfs out to the distances SDSS, 2MASS, and WISE were able to observe them, due to its relatively blue filter <cit.>. The Gaia completeness for low-mass dwarfs has been investigated using the LaTE-MoVeRS sample <cit.> and Gaia Data Release 1. <cit.> found that Gaia was ∼70% complete for low-mass dwarfs with i < 20, and less than 30% complete for dwarfs with i ⩾ 20. Although Gaia will not be able to probe the entire volume that the MoVeRS sample covers, it will allow us to validate the model across the entire proper motion range and with much smaller simulated distance uncertainties for nearby (≲ 30 pc) stars. Gaia will be especially critical in uncovering the potential population of nearby stars with small proper motions that have been primarily ignored, and resolving the true completeness of the SUPERBLINK sample.

§.§ Simulating a Galactic Volume within the SDSS Footprint

To properly estimate the level of completeness, we need to simulate the complete volume (α, δ, and d) from which the sample was extracted. However, due to the time-delay-integrate nature of SDSS, getting the exact outline of the imaging footprint in α and δ coordinates is extremely complicated. To further complicate matters, some fields observed by SDSS fail processing by the photometric pipeline. This is primarily due to large or bright objects within the frame causing the photometric pipeline to time-out <cit.>. To quantify the number of bad fields within the SDSS footprint, we retrieved all the field IDs and the number of extracted objects within each field from the Field table using CasJobs[<http://skyserver.sdss.org/casjobs/>]. Of the 938,046 fields in SDSS, 6,239 fields contain zero objects (∼0.67%). The vast majority of bad fields (4,271) are found in stripes within the Galactic plane (|b| < 20^∘), which we excluded from the sample. Therefore, bad fields were not a concern for the simulated SDSS volume. Rather than try to simulate the entire SDSS footprint, we chose to simulate large areas within the footprint. Figure <ref> shows the fields imaged by SDSS and the selected areas within that footprint. The stripe nature of SDSS is clearly shown, with darker regions indicating heavier coverage. The regions we chose are listed in Table <ref>, with larger regions divided into smaller subregions for computational ease and parallelization.

Table: Model Simulated Regions

Region ID  α Range (deg.)  δ Range (deg.)
1          [0, 28]         [-6, 10]
2          [0, 28]         [10, 26]
3          [130, 182]      [0, 20]
4          [130, 182]      [20, 40]
5          [130, 182]      [40, 58]
6          [182, 235]      [0, 20]
7          [182, 235]      [20, 40]
8          [182, 235]      [40, 58]
9          [330, 360]      [-6, 10]
10         [330, 360]      [10, 26]

§.§ Sampling with the Model to Estimate Completeness

The level of completeness was estimated by simulating stars in the regions defined in the previous section. This was done for all stars within the volume, and separately in the absolute magnitude bins defined in Table <ref>. The following steps were completed for all simulated regions:

* For parallelization, different r-z color ranges (a proxy for stellar mass ranges) were simulated individually. For each r-z color range in Table <ref>, we used the B10 color-magnitude relations to obtain the range of absolute magnitudes (M_r).

* Since the color ranges were continuous, but the B10 M_r LF is given in discrete bins, we chose to interpolate the M_r LF. Using the single-star LF from B10, we interpolated the M_r LF over the r-z color range from the previous step.
The B10 LFs are given as median values with asymmetric uncertainties. All three values (median and asymmetric uncertainties) were used to provide a range of possible stellar number densities for the model. Three interpolations were done: one for the median M_r value, one for the upper M_r limit, and one for the lower M_r limit. This step is illustrated in Figure <ref>.

* A random LF value was drawn for a given M_r value. To do this, the absolute magnitude range (from above) was divided into 10,000 evenly spaced bins. For each bin, a random LF value was drawn from a triangular probability distribution defined by the median value at the apex, and the lower limit and upper limit values as the first and third vertex, respectively. The median, upper limit, and lower limit values were taken from the interpolated LF at the center of each absolute magnitude bin. An example of this step is shown in Figure <ref>.

* The LF values from the previous step were then integrated over the absolute magnitude range (from step <ref>) to produce the local stellar density scaled to the plane, ρ(R_0,0).

* Using the stellar density from the previous step, we integrated the density profile, Equation (<ref>), along the LOS in 1 pc deep, discrete pyramidal “cells”. Each cell along the LOS was parameterized by the α and δ range, and the distance range (defined in Table <ref>). Multiplying the volume of the cell by the average stellar density within the cell gave us the total number of stars within each cell. Summing all the cells gave us the total number of stars along the LOS.

* The next step was distributing stars randomly within the given volume. For the relatively small angular ranges, we assumed that the α and δ positions for the stars were uniformly random within the range. Distances are more complicated, as the distribution of distances is dependent on the LOS through the Galaxy. To build a representative distribution of distances along the given LOS, we used the number of stars in each cell, and the distance to the center of each pyramidal cell from the previous step. This distribution was transformed into an inverse cumulative distribution function, which was sampled from in the following step <cit.> (see the sketch following this list).

* Stars were then distributed in a three-dimensional space within the defined volume using the rejection method <cit.>. This generated uniformly random α and δ coordinates, and distances randomly chosen through inverse sampling of the distribution created in the previous step.

* The three-dimensional α, δ, and distances were converted to Galactic cylindrical coordinates (R, θ, Z).

* Each star was then given V_R, V_θ, and V_Z velocities dependent on the average V_R, V_θ, and V_Z and corresponding dispersion found at each star's Galactic height, based on Equation (<ref>). These velocities were subsequently converted into UVW velocities.

* UVW velocities were converted into proper motion components and radial velocities following the inverse of the methods described in <cit.>. We disregard the radial velocities as they are not required for the completeness estimates.

* Lastly, a variable proper motion cut was made based on the minimum proper motion within the MoVeRS sample for the volume and color range simulated. This ensured the simulations only included stars which had distances and tangential motions that would have been detected for the MoVeRS sample.
* The previous steps were repeated 100 times to build distributions of counts to estimate the random uncertainty in the model.

The LoKi Galactic model is available to the community through GitHub[<https://github.com/ctheissen/LoKi>].
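As flagged in the list above, the distance sampling reduces to inverse-CDF sampling over the per-cell star counts. A minimal sketch (the line-of-sight counts here are a toy example):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_los_distances(cell_d, cell_counts, n_stars):
    """Inverse-CDF sampling of line-of-sight distances from per-cell star counts."""
    cdf = np.cumsum(cell_counts, dtype=float)
    cdf /= cdf[-1]
    return np.interp(rng.uniform(size=n_stars), cdf, cell_d)

# Toy LOS: 1-pc-deep cells from 10 to 500 pc; counts = volume element times a falloff.
d = np.arange(10.0, 500.0)
counts = d**2 * np.exp(-d / 150.0)
print(sample_los_distances(d, counts, 5))
```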
{ "authors": [ "Christopher Theissen", "Andrew West" ], "categories": [ "astro-ph.SR", "astro-ph.EP", "astro-ph.GA" ], "primary_category": "astro-ph.SR", "published": "20170227190008", "title": "Collisions of Terrestrial Worlds: The Occurrence of Extreme Mid-Infrared Excesses around Low-Mass Field Stars" }
Exploring the climate of Proxima B with the Met Office Unified Model

Ian A. Boutle1,2, Nathan J. Mayne2, Benjamin Drummond2, James Manners1,2, Jayesh Goyal2, F. Hugo Lambert3, David M. Acreman2, Paul D. Earnshaw1

1 Met Office, FitzRoy Road, Exeter, EX1 3PB, UK (i.boutle@exeter.ac.uk)
2 Physics and Astronomy, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, EX4 4QL, UK
3 Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, EX4 4QF, UK

December 30, 2023
=============================================================================================================================

We present results of simulations of the climate of the newly discovered planet Proxima Centauri B, performed using the Met Office Unified Model (UM). We examine the responses of both an `Earth-like' atmosphere and a simplified nitrogen and trace carbon dioxide atmosphere to the radiation likely received by Proxima Centauri B. Additionally, we explore the effects of orbital eccentricity on the planetary conditions using a range of eccentricities guided by the observational constraints. Overall, our results are in agreement with previous studies in suggesting Proxima Centauri B may well have surface temperatures conducive to the presence of liquid water. Moreover, we have expanded the parameter regime over which the planet may support liquid water to higher values of eccentricity (≳ 0.1) and lower incident fluxes (881.7 W m^-2) than previous work. This increased parameter space arises because of the low sensitivity of the planet to changes in stellar flux, a consequence of the stellar spectrum and orbital configuration. However, we also find interesting differences from previous simulations, such as cooler mean surface temperatures for the tidally-locked case. Finally, we have produced high resolution planetary emission and reflectance spectra, and highlight signatures of gases vital to the evolution of complex life on Earth (oxygen, ozone and carbon dioxide).

§ INTRODUCTION

Motivated by the grander question of whether life on Earth is unique, since the detection of the first exoplanet <cit.>, efforts have become increasingly focused on `habitable' planets. In order to partly side-step our ignorance of the possibly vast range of biological solutions to life, and exploit our mature understanding of our own planet's climate, we define the term `habitable' in a very Earth-centric way. Informed by the `habitable zone' first defined by <cit.>, we have searched for planets where liquid water, so fundamental to Earth-life, might be present on the planetary surface. The presence of liquid water is likely to depend on a large number of parameters, such as the initial water budget, atmospheric composition, flux received from the host star, etc. Of course, the more `Earth-like' the exoplanet, the more confidence we have in applying adaptations of our theories and models developed to study Earth itself, and thereby in predicting parameters such as surface temperature and precipitation. Surveys such as the Terra Hunting Experiment <cit.> aim to discover targets similar in both mass and orbital configuration to Earth, but also orbiting stars similar to our Sun, for which follow-up characterisation measurements might be possible.

In the meantime, observational limitations have driven the search for `Earth-like' planets to lower-mass, smaller-radius stars, i.e. M-Dwarfs <cit.>.
Such stars are much cooler and fainter than our own Sun, so potentially habitable planets must exist in much tighter orbits. Recent, ground-breaking detections have been made of potentially “Earth-like” planets in orbit around M-Dwarfs (e.g. Gliese 581g <cit.>, Kepler 186f <cit.>, the Trappist 1 system <cit.>). In fact, such a planet, called Proxima Centauri B (hereafter, ProC B), has been discovered orbiting our nearest neighbour Proxima Centauri, potentially in its `habitable zone' <cit.>.

The announcement of this discovery was coordinated with a comprehensive modelling effort, exploring the possible effects of the stellar activity on the planet over its evolution and its budget of volatile species <cit.>, and a full global circulation model (GCM) of the climate <cit.>. Of course, lessons from solar system planets <cit.> and our own Earth's climate <cit.> have taught us that the complexity of GCMs can lead to model dependency in the results. This can often be due to subtle differences in the numerics, various schemes (i.e. radiative transfer, chemistry, clouds etc.) or boundary and initial conditions. <cit.> provides an excellent resource, using 1D models and limiting concepts with which to aid conceptual understanding of the habitability of ProC B, highly complementary to results from more complex 3D models such as <cit.> and this work.

In this work we apply a GCM of commensurate pedigree and sophistication to that used by <cit.> to ProC B, explore differences due to the model, and extend the exploration to a wider observationally plausible parameter space. The GCM used is the Met Office Unified Model (or UM), which has been successfully used to study Earth's climate for decades. We have adapted this model, introducing flexibility to enable us to model a wider range of planets. Our efforts have focused on gas giant planets <cit.>, motivated by observational constraints, but have included simple terrestrial Earth-like planets <cit.>.

The structure of the paper is as follows: in Section <ref> we detail the model used, and the parameters adopted. For this work we focus on two cases: an Earth-like atmosphere, chosen to explore how an idealised Earth climate would behave under the irradiation conditions of ProC B, and a simple atmosphere consisting of nitrogen with trace CO_2, for a cleaner comparison with the work of <cit.>. In Section <ref> we discuss the output from our simulations, and compare them to the results of <cit.>, revealing a slightly cooler day-side of the planet in the tidally-locked case (likely driven by differences in the treatment of clouds, convection, boundary layer mixing, and vertical resolution), and a warmer mean surface temperature for the 3:2 spin-orbit resonance configuration, particularly when adopting an eccentricity of 0.3 <cit.>. Our simulations suggest that the mean surface temperatures move above the freezing point of water for eccentricities of around 0.1 and greater. Section <ref> presents reflection (shortwave) spectra, emission (longwave) spectra and reflection/emission as a function of time and orbital phase angle derived from our simulations. Our results show many similar trends to the results of <cit.>, with several important differences.
In particular, our model is capable of a higher spectral resolution, allowing us to highlight the spectral signature of the gases key to the evolution of complex life on Earth (ozone, oxygen, carbon dioxide). Finally, in Section <ref> we conclude that the agreement between our simulations and those of <cit.> further confirms the potential for ProC B to be habitable. However, the discrepancies mean further inter-comparison of detailed models is required, and must always be combined with the insight provided by 1D, simplified approaches such as <cit.>.

§ MODEL SETUP

The basis of the model simulations presented here is the Global Atmosphere (GA) 7.0 <cit.> configuration of the Met Office Unified Model. This configuration will form the basis of the Met Office contribution to the next Intergovernmental Panel on Climate Change (IPCC) report, and will replace the current GA6.0 <cit.> configuration for operational numerical weather prediction in 2017. It therefore represents one of the most sophisticated and accurate models of Earth's atmosphere, and with minimal changes can be adapted for any Earth-like atmosphere. The model solves the full, deep-atmosphere, non-hydrostatic, Navier-Stokes equations using a semi-implicit, semi-Lagrangian approach. It contains a full suite of physical parametrizations to model sub-grid scale turbulence (including non-local turbulent transport), convection (based on a mass-flux approach), H_2O cloud and precipitation formation (with separate prognostic treatment of ice and liquid phases) and radiative transfer. Full details of the model dynamics and physics can be found in <cit.> and <cit.>. The simulations presented have a horizontal resolution of 2.5^∘ longitude by 2^∘ latitude, with 38 vertical levels between the surface and model-top (at 40 km), quadratically stretched to give enhanced resolution near the surface. We adopt a timestep of 1200 s.

To adapt the model for simulations of ProC B, we modify the planetary parameters to those listed in Table <ref>. The orbital parameters are taken from <cit.>, and we note that our values for the stellar irradiance and rotation rate (Ω) differ from those used by <cit.>. In particular, our value for the stellar irradiance (881.7 W m^-2), based on the best estimates for the stellar flux of Proxima Centauri and semi-major axis of ProC B, is considerably lower than the 956 W m^-2 used in <cit.>. The planetary parameters (radius and g) are taken from <cit.>. The SOCRATES[https://code.metoffice.gov.uk/trac/socrates] radiative transfer scheme is used with a configuration based on the Earth's atmosphere <cit.>. Incoming stellar radiation is treated in 6 “shortwave” bands (0.2-10 μm), and thermal emission from the planet in 9 “longwave” bands (3.3 μm – 10 mm), applying a correlated-k technique. Absorption by water vapour and trace gases, Rayleigh scattering, and absorption and scattering by liquid and ice clouds are included. The clouds themselves are water-based, and modelled using the PC2 scheme, which is detailed in <cit.>. Adaptations are made to represent the particular stellar spectrum of Proxima Centauri. A comparison of the top-of-atmosphere spectral flux for Earth and ProC B is shown in Figure <ref>. The Proxima Centauri stellar spectrum is from BT-Settl <cit.> with T_eff=3000 K, g=1000 m s^-2 and metallicity=0.3 dex, based on <cit.>. Correlated-k absorption coefficients are recalculated using this stellar spectrum to weight wavelengths within the shortwave bands, and to cover the wider range of temperatures expected on ProC B.
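The re-weighting amounts to averaging the stellar spectrum within each band before the correlated-k coefficients are generated. The sketch below shows only this band-integration step; the band edges and the Planck-like spectral shape are illustrative assumptions (only the 0.2–10 μm span of the 6 shortwave bands is taken from the text), and the real correlated-k generation is considerably more involved.

```python
import numpy as np

def band_integrated_flux(wl, flux, edges):
    """Integrate a spectrum (per unit wavelength) within each band defined by edges."""
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (wl >= lo) & (wl < hi)
        out.append(np.trapz(flux[mask], wl[mask]))
    return np.array(out)

# Illustrative 6-band split of the 0.2-10 micron shortwave region (edges assumed).
edges = np.array([0.2, 0.32, 0.69, 1.19, 2.38, 5.0, 10.0])
wl = np.linspace(0.2, 10.0, 5000)                   # wavelength (microns)
flux = wl**-5 / np.expm1(14387.8 / (wl * 3000.0))   # Planck-like 3000 K shape
print(band_integrated_flux(wl, flux, edges))
```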
For simplicity we ignore the effects of atmospheric aerosols in all simulations, although tests with a simple representation of aerosol absorption and scattering did not lead to a significant difference in results. For the spectra and phase curves presented in Section <ref> we additionally run short GCM simulations with high resolution spectral files containing 260 shortwave and 300 longwave bands that have been similarly adapted for ProC B from the original GA7 reference configurations.

Similar to <cit.>, we use a flat, homogeneous surface at our inner boundary, but for simplicity choose a single-layer `slab' model based on <cit.>. The heat capacity of 10^7 J K^-1 m^-2 is representative of a sea-surface with a 2.4 m mixed layer, although as all simulations are run to equilibrium, this choice does not affect the mean temperature, only the variability (capturing the diurnal cycle). We consider that simulations have reached equilibrium when the top-of-atmosphere is in radiative balance, the hydrological cycle (surface precipitation minus evaporation) is in balance and the stratospheric temperature is no longer evolving. We find that equilibrium is typically reached within 30 orbits, and show most diagnostics averaged over orbits 80-90 (sampled every model timestep). We retain water-like properties of the surface (even below 0^∘C), allowing the roughness length to vary with windspeed, typically between 10^-5 and 10^-3 m. The emissivity of the surface is fixed at 0.985 and the albedo varies with stellar zenith angle, ranging from 0.05 at low zenith angles to 0.5 at very high zenith angles.

All simulations have an atmosphere with a mean surface pressure of 10^5 Pa, and we investigate two different atmospheric compositions, the relevant parameters for which are given in Table <ref>. These represent a nitrogen-dominated atmosphere with trace amounts of CO_2, similar to that investigated by <cit.>, and a more Earth-like atmospheric composition with significant oxygen and trace amounts of other radiatively important gases. Our motivation here is to explore the possible climate and observable differences that would exist on a planet that did support complex life <cit.>. The values for the gases are taken from present day Earth, and are globally uniform with the exception of ozone, for which we apply an Earth-like distribution, with highest values in the equatorial stratosphere, decreasing towards the poles and with much lower values in the troposphere. Whether an ozone layer could form and survive on ProC B is highly uncertain. Ozone formation requires radiation with wavelengths of 0.16-0.24μm, which we expect to be in much shorter supply for ProC B, compared with Earth; see Figure <ref> and <cit.>. <cit.> also discuss that the likelihood of stellar flares destroying the ozone layer is quite high, and without it the chances of habitability are significantly reduced due to the large stellar fluxes at very short wavelengths (<0.175μm) received by ProC B. Essentially, in this work, our main aim is to investigate the response of an “Earth-like” atmosphere to the irradiation conditions (different spectrum and stellar flux patterns) characteristic of the ProC B system, so we refrain from removing individual gases which may actually be required for the planet to be habitable, or are potentially produced by an interaction of life with the atmosphere, such as ozone and methane <cit.>. A 3D model fully consistent with the chemical composition is beyond the scope of the present work.
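The slab surface can be thought of as a single heat reservoir integrated forward in time by the net surface flux. The following is a minimal sketch under that assumption (the flux values are illustrative, and the UM's implicit surface coupling and zenith-angle-dependent albedo are simplified away); the heat capacity, emissivity, and timestep match the values above.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
C_SLAB = 1.0e7           # slab heat capacity (J K^-1 m^-2)
EMISS = 0.985            # surface emissivity

def step_surface(T, sw_down, lw_down, sensible, latent, albedo, dt=1200.0):
    """One explicit step of the slab energy budget: dT/dt = net flux / heat capacity."""
    net = (1.0 - albedo) * sw_down + lw_down - EMISS * SIGMA * T**4 - sensible - latent
    return T + dt * net / C_SLAB

# Spin up towards equilibrium with fixed (illustrative) forcing.
T = 280.0
for _ in range(2000):
    T = step_surface(T, sw_down=400.0, lw_down=300.0, sensible=10.0, latent=30.0, albedo=0.1)
print(T)  # approaches the radiative-equilibrium temperature (~325 K here)
```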
§ RESULTS

In this section we discuss results from our simulations in two orbital configurations: firstly, the assumption of a tidally-locked planet, and then a 3:2 spin-orbit resonance, both possible for such a planet as ProC B <cit.>.

§.§ Tidally-locked case

We first consider the tidally-locked orbit with zero eccentricity. Figure <ref> shows the surface temperature from our simulations. It is colder than the simulations of <cit.>, with a maximum temperature on the day-side of 290 K (10 K colder), and a minimum temperature in the cold-traps on the night-side of 150 K (50 K colder, informed by their figures). There are several reasons for these differences, which we will explore.

Firstly, we adopt a stellar radiation at the top of the atmosphere which is 70 W m^-2 lower than <cit.>, as discussed in Section <ref>, and so will inevitably be colder. We have tested our model with an incoming stellar flux consistent with that used by <cit.>, and find that it increases the mean surface temperature by 5 K across the planet (slightly less on the day-side and up to 10 K in the cold-traps on the night-side) which, critically, is still cooler than that found by <cit.>. This increase is approximately two-thirds of that found for Earth <cit.>, demonstrating that the sensitivity of planetary temperatures to changes in the stellar flux received by ProC B is quite low, meaning it potentially remains habitable over a larger range of orbital radii than e.g. Earth. This is likely to be due to a combination of the tidal locking and stellar spectrum. For example, changes in low cloud and ice amounts that contribute to a strong shortwave feedback on Earth are ineffective in this configuration, as low clouds and ice are found largely on the night-side of the planet.

On the day-side, cloud cover could be a contributing factor in keeping the surface cooler in our simulations. As shown in Figure <ref>, the day-side of the planet is completely covered in cloud, due to the strong stellar heating driving convection and cloud formation. This makes the albedo of the day-side quite high (≈ 0.35), reflecting a significant fraction of the incoming radiation back to space, similar to simulations presented by <cit.>. Furthermore, the radiative heating of the thick cloud layer which forms is very high (>10 K day^-1, Fig. <ref>). It is possible that the combination of these two effects is greater in our model (driven by differences in our cloud and convection schemes, discussed later in this section), simply resulting in less radiation reaching the planet surface, and therefore a cooler surface temperature. However, the cooler day-side temperature may actually be linked to the temperature on the night-side, via the mechanisms described in <cit.>. They argue that the free tropospheric temperature should be horizontally uniform, due to the global-scale Walker circulation that exists on a tidally-locked planet <cit.>, and efficient redistribution of heat by the equatorial superrotating jet <cit.>. Figure <ref> shows this to be true in our simulations, and the weak temperature gradient <cit.> effectively implies that the temperature of the entire planet is controlled by the efficiency with which emission of longwave radiation to space can cool the night-side of the planet. Therefore, the fact that our night-side is so cold implies a very efficient night-side cooling mechanism, which in turn suppresses the day-side temperatures.

The temperature on the night-side is cold due to the almost complete absence of cloud and very little water vapour.
This allows the surface to continually radiate heat back to space, and cool dramatically. The only mechanism to balance this heat loss is transport from the day-side of the planet at higher levels within the atmosphere, followed by subsidence (where a layer of air descends and heats under compression) or sub-grid mixing to transport the heat down to the surface. Figure <ref> shows profiles of temperature from the day- and night-side of the planet, demonstrating that the cooling is confined to the lowest 3 km of the atmosphere, with the most extreme cooling (30 K) in the lowest 1 km. We speculate that it is this near-surface cooling which differs between our model and that of <cit.>, as our temperature at 500 m altitude appears very similar to their surface temperature (not shown).

There are several possible reasons for the surface temperature differences between our model and that of <cit.>. Firstly, the water-vapour profile could play a role. The night-side is dry because its only real source of water vapour is transport from the day-side, but this transport typically happens at high levels within the atmosphere, where the air is very dry due to the efficiency with which the deep convection precipitates water. This is likely to be a key uncertainty and potential reason for differences between simulations, as the convective parametrizations are very different – a simple adjustment scheme <cit.> used in <cit.> versus a mass-flux based transport scheme <cit.> used here. Secondly, model resolution and the parametrization of turbulent mixing in the stable atmosphere are hugely important. How much sub-grid mixing atmospheric models should apply in stable regions is still a topic of research in the GCM community <cit.>, with many GCMs often applying more mixing than observations or theory would suggest. The UM uses a minimal amount of mixing in stable regions, which results in very little transport of heat down to the surface by sub-grid processes, and relies on the subsidence resolved on the model grid to warm the surface, which is also very weak in our lowest model level (20 m above the surface). Tests with increased mixing can produce a 20 K increase in surface temperature, and also significantly alter the positions of the cold-traps.

The absence of cloud is another possible reason for surface temperature differences; results presented in <cit.> showed uniform low-level cloud cover on the night-side of a tidally-locked planet, which could help to insulate the surface and keep it warm. However, what cloud there is on the night-side of our model has such low water content that it is optically very thin and has almost no effect on the radiation budget. In Figure <ref>, we show the same cloud cover field, but in the bottom panel we show cloud as any grid-box with condensed water, whereas in the top panel we only consider a grid-box to be cloudy if that cloud is radiatively important (e.g. it would be visible to the human eye). This is done by filtering all cloud with an optical depth <0.01 from the diagnostic. This shows that whilst the cloud cover can appear quite extensive on the night-side, the cloud is actually radiatively unimportant.
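The filtering described above is a simple mask on the cloud diagnostic; a minimal sketch, with the 0.01 threshold taken from the text and the example values illustrative:

```python
import numpy as np

def radiatively_important_cloud(cloud_frac, optical_depth, tau_min=0.01):
    """Zero out cloud whose optical depth falls below tau_min."""
    return np.where(optical_depth >= tau_min, cloud_frac, 0.0)

# Toy grid-boxes: optically thin night-side cloud is removed by the filter.
frac = np.array([0.9, 0.6, 0.4])
tau = np.array([5.0, 0.2, 0.004])
print(radiatively_important_cloud(frac, tau))  # -> [0.9 0.6 0. ]
```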
Finally, our model is lacking a representation of condensible CO_2, which could be an important contributor to the radiative balance of the night-side, either through locally increased CO_2 vapour concentrations on the night-side, or through the presence of CO_2 clouds. However, for the concentrations of CO_2 considered here, condensation would occur at ≈ 125 K near the surface, and therefore condensation of CO_2 would appear unlikely, even in the cold-traps. We note that our surface temperature on the night-side appears very similar to the dry case of <cit.>, and that our night-side surface temperature appears to match the very cold results given by the simple model of <cit.> better than their GCM results, which kept the surface warmer.

The temperature and water-vapour profiles shown in Figure <ref> appear in good agreement with <cit.>. Figure <ref> shows that there is significant shortwave heating in the stratosphere, a result of shortwave absorption by CO_2 in our model, which is happening longward of 2μm. This is a feature of Proxima Centauri's spectrum (Fig. <ref>), and would not happen on solar system planets due to the much lower flux at this wavelength from the Sun. The heating is balanced by longwave cooling from the CO_2 and water vapour, and transport of heat to the night-side of the planet. Figure <ref> shows that heat transport is the dominant mechanism of heat-loss from the day-side throughout the atmosphere, and this heat is transported to the night-side, where it is the only heat source and is balanced by longwave cooling.

The differences due to atmospheric composition are generally quite small within the troposphere. The Earth-like composition has a similar surface temperature on the day-side, and a slightly warmer surface temperature on the night-side, particularly in the cold traps. Consistent with <cit.>, this difference is primarily driven by additional heat on the day-side of the planet being transported to the night-side, effectively stabilising the temperature of the day-side and increasing the temperature of the night-side. Most other fields are very similar and not shown for brevity. There are, however, significant differences in the stratosphere (Fig. <ref>). The stratosphere is warmer, and this is predominantly driven by the ozone layer. However, the warming created is much less than on Earth, because there is very little radiative flux in the region which ozone absorbs (0.2-0.32μm). The stratosphere is also wetter, and this is a direct consequence of water vapour production by methane oxidation in this configuration. This is achieved via a simple parametrization <cit.>, common in many GCMs, that increases stratospheric water vapour in proportion to the assumed methane mixing ratio and the observed balance between water vapour and methane in Earth's stratosphere.

Figure <ref> shows the surface precipitation rate, with intense precipitation at, and slightly down-wind of, the sub-stellar point on the day-side of the planet, decreasing in intensity radially from this point. The most intense precipitation comes from deep convection above this point, with the depth of the convection gradually reducing with radial distance, through the congestus regime (i.e. convection that terminates in the mid-troposphere) and ultimately shallow convection near the edge of the cloud layer. This can be seen quite clearly in Figure <ref>, with cloud height transitioning from low+medium+high, to low+medium, to low at increasing distances from the sub-stellar point.
In many ways the transition is similar to the transition from shallow to deep convection in the trade regions of Earth. High cloud detrained into the anvils of convection is advected downstream by the equatorial jet, giving rise to a distinct asymmetry in the high cloud cover. The phase of the precipitation switches to snow in a ring around the edge of the day-side where the temperature drops below freezing, although it is interesting to note that the dominant phase of the precipitation is still snow even for surface temperatures above freezing. This is due to a combination of the time taken for the precipitation (which forms in the ice phase) to melt at temperatures above freezing, and the fact that near-surface winds are predominantly orientated radially inwards, which advects the snow into warmer regions.

One interesting difference from tropical circulation on Earth is that the strong radiative heating of both the clear sky and cloud tops effectively stabilises the upper atmosphere. This keeps the majority of the convection quite low within the atmosphere, and only allows the most intense events to reach the tropopause level. The surface precipitation is therefore approximately 50% convective, with the remainder being large-scale precipitation coming from the extensive high-level cloud and driven by large-scale ascent on the day-side of the planet. This ascent is driven by convergence, similar to that shown in Figure <ref>, occurring throughout the lowest few kilometres of the atmosphere. This results in the near-zero latent heating profile below 4 km in Figure <ref>, an average over intermittent convective events, which generate strong heating, and persistent rain falling from the high-level cloud and evaporating, which cools the air (in contrast to Earth).

Figure <ref> also shows the surface evaporation rate, and demonstrates that the moisture source for the heaviest precipitation is not local. The surface moisture flux is very low at the sub-stellar point, and highest in a ring surrounding this. This inflow region to the deep convection is where the surface winds are strongest, driving a strong surface latent heat flux. The near-surface flow moistens and carries this water vapour into the central sub-stellar point, before being forced upwards in the deep convection and precipitating out. Combined with Figure <ref>, we can infer that most of the hydrological cycle on a planet like this occurs in the region where liquid water is present at the surface, i.e. the circulation does not rely strongly on evaporation from regions where the surface is likely to be frozen. Neither does the circulation transport large amounts of water vapour into these regions, and so this configuration could be stable for long periods if the return flow of water into the warm region (via glaciers or sub-surface oceans) can match the weak atmospheric transport out of this region.

§.§ 3:2 resonance

We now consider the possibility of asynchronous rotation in a 3:2 spin-orbit resonance.
In this case we model an atmosphere dominated by nitrogen, as in Section <ref>, and do not consider an Earth-like composition, as the differences between the two were found to be small for the tidally-locked case. Figure <ref> shows the results from a circular orbit, and unlike <cit.> we find that the mean surface temperature is above 0^∘C in a narrow equatorial band, with seasonal maximum temperatures above freezing extending to 35^∘ in latitude north and south of the equator. There are several possible explanations for this. Firstly, the greenhouse effect may be stronger, implying that more water vapour is retained in the atmosphere in our simulations (as we know that CO_2 concentrations are similar). Secondly, the meridional heat transport may be weaker in our simulations, as it appears (by comparison to their figures) that our polar regions may be colder. Finally, the lack of an interactive ice-albedo at our surface may be important here. To test this, we set the surface albedo to 0.27 everywhere, to be representative of an ice-covered surface, based on the mean spectral albedo of the surface ice/snow cover calculated by <cit.>. We find that in this state, the mean surface temperature does fall below 0^∘C everywhere (not shown), although seasonal maxima above freezing are still retained. Therefore, although the mean temperature of the planet is higher in our simulations, it is still likely that this configuration would fall into a snowball state.

The chance of a planet existing in a resonant orbit with zero eccentricity is small <cit.>, yet if ProC B is the only planet in the system, the eccentricity excited by α Centauri alone is likely to be ≈ 0.1 <cit.>. Current observations cannot exclude a further planet(s) orbiting exterior to ProC B, and are consistent with an eccentricity as large as 0.35 <cit.>, with the most likely estimate 0.25 <cit.>. Therefore we have run a range of simulations assuming a 3:2 resonant orbit but with eccentricities varying from zero to 0.3, and focus discussion on the most eccentric case. With increasing eccentricity, the region where the mean surface temperature is above freezing becomes concentrated in two increasingly large patches, corresponding to the side of the planet which is facing the star at periastron on each orbit. Therefore, permanent liquid water could exist at the planet surface, and the potential for the planet to fall into a snowball state is greatly reduced.

In an eccentric orbit the stellar heating is concentrated in two hot-spots on opposite sides of the planet <cit.>, leading to large regions of the surface which are warmer than their surroundings. Figure <ref> shows how the incoming top-of-atmosphere shortwave radiation varies between orbits for the circular and eccentric configurations. The increase in radiation as the eccentric orbit approaches periastron is much greater than the decrease in radiation as it approaches apoastron, resulting in a significant increase in the mean stellar flux over large regions of the planet. This, combined with the fact that the total equatorial radiation is increased by 40 W m^-2 (the global mean is closer to 30 W m^-2), keeps the hot-spots well above freezing, with mean temperatures above 280 K and seasonal maxima of 295 K. The global mean temperature, however, only rises by 4 K, which for an effective increase in stellar flux of 120 W m^-2 (30 multiplied by the surface-area to disc ratio of 4) implies an even lower sensitivity of this orbital state to changes in the stellar flux than in the tidally-locked case.
We find that the sensitivity to stellar flux changes for a 3:2 resonance in a circular orbit is approximately equal to the tidally-locked case (5 K of warming for 70 W m^-2 of additional flux), implying that the lower sensitivity is due to the eccentricity of the orbit – the high cloud cover formed over the hot-spot regions (Fig. <ref>) increases the reflected shortwave radiation, increasing the planetary albedo. This shows, similar to <cit.>, that the mean flux approximation is poor for this eccentric orbit.

To test the possibility of the planet falling into a snowball state in this orbital configuration, we again set the surface albedo to 0.27 everywhere, to represent a snow/ice covered surface. This represents the most extreme scenario possible, as it would imply that any liquid water at the surface has managed to freeze during the night, which only lasts 12 Earth days. Figure <ref> shows that even in this case, the mean surface temperature remains above zero (and in fact the minimum only just reaches freezing), implying that the chance of persistent ice formation in these regions is small and the planet is unlikely to snowball. Additional tests with intermediate values of orbital eccentricity allow us to estimate that an eccentricity of ≈ 0.1 would be required to maintain liquid water at the surface and prevent this configuration falling into a snowball state. The intermediate eccentricity simulations display many features of the most eccentric orbit presented here, e.g. the formation of hot-spot regions, though their strength increases with increasing eccentricity.

GCM studies of resonant orbits with eccentricity appear to be rare, and therefore we document here what this climate might look like in the most eccentric case. In many respects, it appears similar to the tidally-locked case presented in Section <ref>, except now with two hot-spots on opposite sides of the planet and a much reduced planetary area in which water would be frozen. There are no significant cold-traps, the polar regions being the coldest areas, with surface temperatures just above 200 K, not too dissimilar from Earth (surface temperatures in Antarctica are typically 210 K). As this simulation contains no ocean circulation, we speculate that if this were included, it could transport more heat away from the hot-spots and further warm the cold regions.

Figures <ref> and <ref> show that the hot-spots are dominated by deep convective cloud with heavy precipitation. The upper-level circulation appears to be dominated by a zonal jet covering most of the planet; similar to the tidally-locked case, this acts to advect the convective anvils downstream into the cold regions of the planet, almost completely encircling the planet, and maintains a horizontally uniform temperature in the free troposphere (not shown). Most of the planet is covered in low- and mid-level cloud, apart from sub-tropical regions downstream of each hot-spot where there is only low-level cloud, similar to persistent stratocumulus decks on Earth. These cloud decks form in the regions of large-scale subsidence which compensates for the large-scale ascent occurring in the hot-spot regions. The cloud is thick enough to precipitate around the entire equatorial belt, and reaches 60° north/south at the central longitude of the hot-spots. Similar to the tidally-locked case, the heaviest precipitation is downstream of the peak stellar irradiation, due to the strong zonal winds.
The winds also shift polewards at this location, which creates two limbs of enhanced precipitation stretching polewards downstream of the hot-spot. The low-level flow is generally equatorwards (Fig. <ref>), and is strongest in the warm regions, leaving weak winds in the colder subtropical regions, ideal for the formation of non-precipitating low cloud.

Figure <ref> also shows the surface evaporation rate, which unlike the tidally-locked case is not strongly confined to a region surrounding the deepest convection, but instead appears quite local to the precipitation. It is predominantly on the upstream side of the heaviest precipitation, because the lower cloud cover in this region allows more stellar radiation to reach the surface. The hydrological cycle of each hot-spot appears reasonably self-contained, with the possibility that there will only be limited exchanges of water between the opposing sides of the planet. Similar to the tidally-locked case, the hydrological cycle is also largely confined within regions where surface liquid water is present, suggesting that this configuration could be stable for long periods.

Whilst there are many similarities in the resultant climate between the tidally-locked case and the hot-spots of the 3:2 eccentric case, the mechanisms which create them do show some differences. The 3:2 case is more similar to Earth in its heating profiles (Fig. <ref>), with latent heating due to convection dominating over shortwave heating throughout the lower-to-mid troposphere. The magnitude of the shortwave heating is much lower than in the tidally-locked case, although still significantly higher than on Earth <cit.>, and it does not act to stabilise the upper troposphere and suppress the convection. The resulting climate therefore has deep convection mixing throughout the troposphere during the day, and the majority of the precipitation in this simulation is convective. It therefore presents a somewhat intermediate solution between the tidally-locked case and Earth. The fact that the whole planet is irradiated at some point during the orbit means there is no stratospheric transport of heat from day to night, with the stratospheric heating and cooling being entirely radiative in nature.

§ SPECTRA AND PHASE CURVES

We have produced individual, high-resolution reflection (shortwave) and emission (longwave) spectra, as well as reflection/emission as a function of orbital phase angle within wavelength `bins'. <cit.> include a full discussion of the possibility of detecting this planet with current and upcoming instrumentation, so we do not repeat this here. Instead, we present our simulated emissions and discuss their key features and differences from <cit.>.

We obtain the top-of-atmosphere flux directly from our GCM simulation, as done by <cit.>, where the wavelength resolution is defined by the radiative transfer calculation in the GCM. The radiative transfer calculation used in our GCM, and that of <cit.>, adopts a correlated-k approach, effectively dividing the radiation calculation into bands. The bands themselves, and the details of the correlated-k calculation, are essentially set to optimise the speed of the calculation while preserving an accurate heating rate <cit.>.
Therefore, spectral resolution is sacrificed. To mitigate this effect, we run the model for a single orbit with a much greater number of bands (260 shortwave and 300 longwave), which greatly increases the spectral resolution. The top-of-atmosphere flux is output every two hours, or approximately 144 times per orbit for ProC B. From these simulations we compute the emission and reflection spectra, as well as the reflection/emission as a function of time and orbital phase. The outgoing flux at the top of the model atmosphere is translated to the flux seen by a distant observer by taking the component of the radiance (assumed to be isotropic) in the direction of the observer and then summing over the solid angle subtended by each grid point over the planetary disc.

Figure <ref> shows the reflection (shortwave) and emission (longwave) planet-star flux ratio for the tidally-locked case with an Earth-like atmosphere. The contrast between planet and star (F_p/F_s) is shown as a function of wavelength (in μm) for a range of orbital phases and inclinations (i). We follow the approach of <cit.>, where an inclination i = 90° represents the case where the observer is oriented perpendicular to the orbital axis. Additionally, an example of the “clear-sky” emission is shown (dashed line), ignoring the radiative effect of the cloud in the simulated observable. These figures are comparable to their counterparts in <cit.> (Figures 8 and 12). We have separated the long and shortwave flux <cit.>, and adopted a radius of 1.1 R_⊕. The remaining differences will then be caused by the direct differences in the top-of-atmosphere fluxes obtained in our simulations, and by the resolution of our emission calculation.

In the shortwave, our spectrum generally compares well with that of <cit.>, showing similar trends and features, particularly the absorption features from water and CO_2. However, we find an overall larger F_p/F_s ratio (e.g. by a factor of 2 between 2.0 and 2.5 μm), which is likely to be the result of subtle differences in the quantity and distribution of clouds, which have a significant influence on the shortwave reflection, as shown by the clear-sky spectrum, also shown in Figure <ref>. Our inclusion of the full complement of Earth's trace gases along with increased resolution reveals more spectral features, especially at short wavelengths, but the overall shape and magnitude of the contrasts compare well to <cit.>. Our inclusion of oxygen leads to the absorption feature at 0.76 μm, and of an ozone layer to the absorption at ultra-violet wavelengths. As discussed previously, the presence of an ozone layer in the atmosphere of ProC B is very uncertain. However, our aim here is to explore how a truly Earth-like atmosphere would respond to the irradiation received by ProC B. Comparing the shortwave contrast from our outputs at ϕ = 180° with (solid line) and without (dashed line) clouds, we can see that the cloud acts to increase the shortwave contrast due to scattering and slightly `mute' the absorption features.

Figure <ref> also shows a reasonable agreement of the overall shape and magnitude of our longwave flux ratio with that of <cit.>.
As for the shortwave case, our increased resolution reveals additional features in the spectrum. The main difference here is the absorption feature at 9.6 μm due to the presence of ozone, which is not included in the model of <cit.>.

Figure <ref> shows the emission as a function of orbital phase angle for the Earth-like, tidally-locked simulation, at three inclinations (i = 30°, 60° and 90°) for the shortwave and at only one inclination (i = 60°) for the longwave (due to the invariance of the longwave phase curve with inclination). We do not adjust the radius (and therefore the total planetary flux) with inclination using R_p ∝ (M_min/sin i)^0.27 as in <cit.>, since to be fully consistent this would require running all climate simulations with an adjusted radius. Instead the phase curves represent an identical planet observed from different angles. As with Figure <ref>, the “clear-sky” contribution is also shown as a dashed line.

We generally find very good agreement with the results of <cit.>. A notable difference in the reflectance phase curves is the much reduced flux ratio in the 0.28–0.30 μm region, due to absorption by ozone, which is not included in the model of <cit.>; our predicted flux contrast is consequently two orders of magnitude smaller in this band. In addition, we find the largest flux ratio to be in the 1.20–1.30 μm band, in contrast to <cit.>, who find the 0.75–0.78 μm band to possess the largest contrast (disregarding the previously discussed 0.28–0.30 μm band). We find the planetary flux in the 0.75–0.78 μm band is depressed by the oxygen absorption line at 0.76 μm, which is not included in the model of <cit.>. Our longwave emission phase curves also show very similar trends to <cit.>. However, we find that the flux contrast in the bands 7.46–8.00 μm and 10.9–11.9 μm is a factor of a few lower in our model, likely due to the presence of additional absorption by trace gases (CH_4 and N_2O at 7.46–8.00 μm) and the cooler surface temperature.

In Figure <ref> we also show the clear-sky flux, ignoring the radiative effects of clouds, to highlight the important role that clouds have on the magnitude of the reflectance phase curves. The high albedo of clouds increases the planet-star flux ratio by an order of magnitude. On the other hand, clouds have a much more subtle direct impact on the longwave emission spectrum; though of course the temperature of the atmosphere/surface has been influenced by the presence of clouds, and so they have an important indirect effect on the longwave emission through the temperature.

Figure <ref> shows the reflection and emission spectra for the nitrogen-dominated atmosphere in the eccentric (e = 0.3), 3:2 resonance orbit. The shortwave spectrum is very similar to the tidally-locked Earth-like case, though note the lack of ozone absorption at very short wavelengths, ozone and other trace species not being included in this model. The longwave spectrum is very insensitive to orbital phase and inclination, due to the horizontally uniform nature of this atmosphere, as opposed to the tidally-locked model. The spectrum is generally quite featureless except for the absorption feature due to CO_2 around 15 μm.
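Before turning to the phase curves, the disc-integration step described at the start of this section can be sketched in a few lines. This is an illustrative reconstruction under the isotropic-radiance assumption, not the actual model code; the grid variables and units are assumptions:

import numpy as np

def disc_integrated_flux(intensity, lat, lon, area, obs_vec, distance):
    """Flux [W m^-2] at a distant observer from top-of-atmosphere radiance.

    intensity : outgoing radiance per grid cell [W m^-2 sr^-1], assumed isotropic
    lat, lon  : grid-cell centres [rad]
    area      : grid-cell surface areas [m^2]
    obs_vec   : unit vector from planet centre towards the observer
    distance  : observer distance [m]
    """
    # Outward unit normal of each surface element
    nx = np.cos(lat) * np.cos(lon)
    ny = np.cos(lat) * np.sin(lon)
    nz = np.sin(lat)
    mu = nx * obs_vec[0] + ny * obs_vec[1] + nz * obs_vec[2]
    mu = np.clip(mu, 0.0, None)          # only the hemisphere facing the observer contributes
    # mu * area / distance^2 is the solid angle each element subtends at the observer
    return np.sum(intensity * mu * area) / distance**2

The μ-weighting implements the isotropic-radiance assumption; a direction-dependent (anisotropically scattering) radiance field would require replacing the single intensity value per cell with an angular distribution.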
Figure <ref> shows the emission as a function of orbital phase for the 3:2 spin-orbit, nitrogen-dominated model. The full repeating pattern should contain two complete orbits, but due to the symmetry of the planetary climate, each orbit produces a very similar phase curve, and therefore we only present a single orbit for clarity. For the shortwave reflection, we find broadly similar results to those of the tidally-locked Earth-like case. However, in this case the phase curve is now strongly affected by the longitudinal position of the observer and the eccentricity of the orbit. It is almost symmetric when viewed from periastron or apoastron, and any deviations from this are due to the atmospheric variability of the planet. For example, the peak flux contrast when viewed from periastron is at ∼120°, and is due to reflection from the high, convectively generated cloud above the hot-spot, which was recently heated at periastron and has just appeared into view. There is no corresponding peak at ∼240° because, although we are again seeing a hot-spot, this one has not been heated since periastron on the previous orbit, and so the convective cloud has decayed significantly. When viewed from the side, the planetary phase curves display a strong asymmetry. The most striking asymmetry is due to our choice to present the phase curve with a linear time axis (as a distant observer would see) rather than a linear phase angle axis, and is created by the apparent speed-up and slow-down of the orbit near periastron and apoastron. However, the phase curve would still be asymmetric if plotted against a linear phase angle axis, due to the variation in stellar radiation received. For example, the peak clear-sky flux is at ∼150°, and occurs because the peak radiation is received at periastron (∼102°), after which the radiation available for reflection is decreasing whilst the illuminated visible area of the planet is increasing. The total flux is then offset slightly towards 180° from this, because there is a delay in the formation of the high, convectively generated clouds above the hot-spot after peak irradiation. The fact that features in the phase curves depend on the hydrological cycle, and on the formation and evolution of water clouds, hints at an exciting opportunity to constrain these processes given sensitive enough observations.

The longwave phase curves show much less variation in the flux with orbital phase compared with the tidally-locked model. This is despite the fact that this model contains two hot-spots due to the eccentricity (e = 0.3) of the orbit, which are visible as the slight increase in contrast at ∼150°. In fact, Figure <ref> shows that the small variations in the flux due to these hot-spots are damped out further by the radiative effects of clouds, appearing as a reduction in the contrast over the hot-spot regions.

Overall, our simulated observations show many consistencies with those of <cit.>; however, we also find a few important differences. Firstly, our calculations were performed at much higher spectral resolution, allowing us to pick out specific absorption and emission features, in particular those associated with the gases vital to complex life on Earth: oxygen, ozone and CO_2. Secondly, we find that the shortwave phase curves show significant asymmetry in the 3:2 resonance model.

§ CONCLUSIONS

This paper has introduced the use of the Met Office Unified Model for Earth-like exoplanets.
Using this GCM, we have been able to independently confirm the results presented in <cit.>, namely that ProC B is likely to be habitable for a range of orbital states and atmospheric compositions. Given the differences between the models, both in numerics (the fluid equations being solved and how this is done) and physical parametrizations (the processes included and the level of complexity of the schemes), the level of agreement between the models is somewhat remarkable. Having this level of agreement from multiple GCMs is an important factor in the credibility of any results which are produced with a GCM, especially for cases such as ProC B where the observational constraints are (currently) very limited. As with many phenomena on Earth for which observations are limited, an alternative strategy to constrain the climate simulations could be more detailed modelling of specific parts of the climate system. For example, high-resolution convection-resolving simulations of the sub-stellar point of the tidally-locked case would help to constrain the amount of cloud, precipitation, and export of moisture to the night side of the planet.

We have additionally shown in this paper that the range of orbital states for which ProC B may be habitable is larger than that proposed by <cit.>. By use of a different, weaker, stellar flux, we have shown that the planet remains comfortably within a habitable orbit despite the known uncertainty in the luminosity of Proxima Centauri. This is a consequence of a particularly low sensitivity of planetary temperatures to changes in the stellar flux received by ProC B. The inclusion of eccentricity in a 3:2 resonant orbital state was also shown to increase the habitability. This result could hold true for any planet near the outer (cold) limit of its habitable zone – including eccentricity in an orbit has the potential to increase the size of the habitable zone. Two factors combine to produce this effect. Firstly, the mean stellar flux will always be higher for an eccentric orbit, and secondly, for orbits in resonant states, a large increase in the stellar flux is received by fixed regions on the planet surface, which become permanent `hot-spots'. The circulation on planets in this configuration is not too dissimilar from the tidally-locked case, with the number of hot-spots being set by the spin-orbit resonance. A planet in an eccentric orbit with a 2:1 resonance would have a single hot-spot and be very similar to the tidally-locked case. Whether eccentric orbits would reduce the habitable zone for planets near the inner (hot) limit is a matter for further study, and is likely to depend on other planetary factors, such as the tidal heating discussed by <cit.>, and on climate feedbacks such as the cloud feedbacks discussed by <cit.>.

There are obviously several ingredients missing from our analysis. We have neglected the presence of any land-surface, as we have no information on what this may look like, but in considering the surface to be water covered we have additionally neglected any transport of heat by the oceans. <cit.> have recently investigated tidally-locked exoplanets with ocean circulation, demonstrating that the ocean acts to transport heat away from the sub-stellar point. If this were included in our simulations here, it is likely that the region where surface temperatures are above freezing would be increased in both the tidally-locked and 3:2 resonance cases, further reducing the potential for the 3:2 case to fall into a snowball regime.
There is also the possibility that the location of continents and surface orography relative to the regions where surface temperatures are above freezing may significantly affect the planet's climate and habitability. We have also considered some more exotic chemical species (oxygen, ozone, methane, etc.) in our analysis, but have only specified them as globally invariant values. The reason for this is their importance in the evolution of complex life on Earth. The next logical step here is to couple a chemistry scheme to the atmospheric simulations, allowing these species to be formed, destroyed and transported by the atmospheric dynamics; this would allow a much better estimation of the likely atmospheric composition.

We have generated, from our simulations, the emission from the planet as a function of wavelength at high spectral resolution, allowing us to highlight signatures of several key (for complex life on Earth) gaseous species in the spectrum (oxygen, ozone and carbon dioxide). We have also generated emission as a function of orbital phase angle from our simulations and find results largely consistent with <cit.>. Overall our findings are similar, though our results have a higher spectral resolution, and show the importance of the observer longitude for the appearance of phase curves when the planet has an eccentric orbit.

I.B., J.M. and P.E. acknowledge the support of a Met Office Academic Partnership secondment. B.D. thanks the University of Exeter for support through a Ph.D. studentship. N.J.M. and J.G.'s contributions were in part funded by a Leverhulme Trust Research Project Grant, and in part by a University of Exeter College of Engineering, Mathematics and Physical Sciences studentship. We acknowledge use of the MONSooN system, a collaborative facility supplied under the Joint Weather and Climate Research Programme, a strategic partnership between the Met Office and the Natural Environment Research Council. This work also used the University of Exeter Supercomputer, a DiRAC Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS and the University of Exeter. We thank an anonymous reviewer for their thorough and insightful comments on the paper, which greatly improved the manuscript.
{ "authors": [ "Ian A. Boutle", "Nathan J. Mayne", "Benjamin Drummond", "James Manners", "Jayesh Goyal", "F. Hugo Lambert", "David M. Acreman", "Paul D. Earnshaw" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170227190005", "title": "Exploring the climate of Proxima B with the Met Office Unified Model" }
We present spectral and timing analyses of simultaneous X-ray and UV observations of the VY Scl system MV Lyr taken by XMM-Newton, containing the longest continuous X-ray+UV light curve and the highest signal-to-noise X-ray (EPIC) spectrum of this system to date. The RGS spectrum displays emission lines plus continuum, confirming that modelling approaches should be based on thermal plasma models. We test the sandwiched model, motivated by fast variability, which predicts a geometrically thick corona surrounding an inner geometrically thin disc. The EPIC spectra are consistent with either a cooling flow model or a 2-T collisional plasma plus Fe emission lines in which the hotter component may be partially absorbed; these would originate in a central corona or a partially obscured boundary layer, respectively. The cooling flow model yields a lower mass accretion rate than expected during the bright state, suggesting an evaporated plasma with a low density, thus consistent with a corona. Timing analysis confirms the presence of a dominant break frequency around log(f/Hz) = -3 in the X-ray power density spectrum (PDS), as in the optical PDS. The complex soft/hard X-ray light curve behaviour is consistent with a region close to the white dwarf where the hot component is generated, while the soft component can be connected to an extended region. We find another break frequency around log(f/Hz) = -3.4 that is also detected by Kepler. We compared flares at different wavelengths and found that the peaks are simultaneous, but the rise to maximum is delayed in X-rays with respect to the UV.

accretion, accretion discs - turbulence - stars: individual: MV Lyr - novae, cataclysmic variables

§ INTRODUCTION

A great variety of objects such as cataclysmic variables (CVs), symbiotic systems, X-ray binaries or active galactic nuclei are powered by a common physical process: accretion. In binaries the process is based on mass loss from a companion star. The transported gas falls towards the central compact object, and in the absence of a strong magnetic field, an accretion disc forms. The central accretor can either be a white dwarf in the case of CVs and symbiotic systems, or a neutron star or a black hole in the case of X-ray binaries (see e.g. <cit.> for a review).

The family of CVs is divided into several subclasses based on characteristic variability patterns. The most common are the dwarf novae, showing quasiregular outbursts with durations of several days and appearing on time scales of 10 - 100 days (see <cit.> for a review). VY Scl systems spend most of their life time in a high state, while they sporadically transition into a transient low state. This alternating behaviour can be explained by corresponding changes in the accretion rate. The high state in VY Scl systems is stable for a relatively long time, while in a dwarf nova it is only temporary (outbursts). The shorter durations of high states in dwarf novae can be explained by the mass accretion rate being unstable in the framework of the disc instability model (see <cit.> for a review), while in VY Scl systems the mass accretion rate remains above the critical limit required for stability, explaining the longer duration of high states.
The low state events are generated by a sudden drop in the mass transfer from the secondary, or even a total stop of mass transfer <cit.>.

Observations of accreting systems of various kinds suggest that the basic characteristic of accretion is fast stochastic variability (a.k.a. flickering), see, e.g., <cit.>. The remarkable observational similarities suggest that the same physical mechanism is responsible for flickering in all accretion systems (e.g. <cit.>). Flickering has three basic observational characteristics: 1) a linear correlation between the variability amplitude and the log-normally distributed flux (the so-called rms-flux relation), observed in a variety of accreting systems such as X-ray binaries or active galactic nuclei <cit.>, CVs <cit.> and symbiotic systems <cit.>; 2) time lags, where flares reach their maxima slightly earlier in the blue than in the red <cit.>; and 3) red noise in power density spectra (PDS). The shape of such a PDS can be a simple power law (see e.g. <cit.>), a broken power law (two power-law components) with a single break frequency in between (see e.g. <cit.>), or a multicomponent PDS with several characteristic break frequencies (see e.g. <cit.>).

For sufficiently detailed studies of the PDS, high-cadence, long and continuous light curves are needed. An ideal opportunity is offered by the Kepler mission, which allowed us to discover a multicomponent PDS in the optical waveband of the two CVs V1504 Cyg <cit.> and MV Lyr <cit.>. The former case shows two or three characteristic break frequencies, while for the latter, four components were found. It is believed that the accretion flow from the companion star toward the white dwarf surface is structured, and every single structure has its own flow characteristics, behaving differently. Therefore, every characteristic break frequency in a PDS can be a footprint of a separate accretion structure.

Different simulation techniques/models intended for identifying the sources of flickering (related to the accretion structure) have been developed so far. A cellular-automaton model[A cellular automaton model consists of a regular grid of cells. For each cell, a set of adjacent cells is defined relative to the specified cell. An initial configuration is defined by assigning a state to each cell. Step by step, a new configuration is calculated, according to some rule that determines the new state of each cell in terms of the current state of the cell and the states of adjacent cells.] was proposed by <cit.>, where light fluctuations are produced by occasional flare-like events and a subsequent avalanche flow in the accretion disc atmosphere (see <cit.> for the original idea). Another cellular-automaton model was developed by <cit.>. It is based on changing the collection of magnetic flux tubes anchored in the disc, transporting angular momentum and driving accretion inhomogeneities. <cit.> developed a statistical model to simulate flickering based on the simple idea of angular momentum transport between two adjacent concentric rings in the accretion disc via discrete turbulent bodies. The method is based on a geometrically thin disc with a ratio of H/r < 0.01 (H is the disc scale height and r is the distance from the centre). Another way of reproducing the PDS was proposed by <cit.>.
They derived an analytical expression for the fluctuating accretion rate in the disc, where fluctuations are generated at each radius on the local viscous time scale, and the overall variability of the accretion rate in the innermost disc region is the product of all the fluctuations produced at all radii. Practically all of the mentioned models support the basic idea of variations in the accretion rate being produced at different disc radii <cit.>, i.e. the mass accretion variability generated at the outer radii propagates inwards and influences the variability characteristics of the inner regions. Such a process explains the aforementioned linear rms-flux relation and log-normal flux distribution.

<cit.> applied the <cit.> method to fit the highest PDS break frequency, log(f/Hz) = -3.01 ± 0.06, detected in the optical PDS of the nova-like system MV Lyr <cit.>. The discs in CVs are believed to be geometrically thin and optically thick. During quiescence the mass accretion rate is low, resulting in a truncated disc (see <cit.> for a review), i.e. a hole forms around the white dwarf because inefficient cooling[Cooling in an optically thin hot plasma in this case is based mainly on free-free transitions, and is therefore dependent on the square of the particle number density. The plasma energy is transported toward the disc by electron conduction, and a low concentration of electrons reduces the cooling. The underlying disc matter thus heats up and evaporates, which increases the local particle concentration until an equilibrium is reached.] generates evaporation of the matter <cit.>. Instead of the geometrically thin disc, a central, geometrically thick (H/r no longer small compared to 1), optically thin corona forms due to the evaporation. <cit.> determined a ratio H/r > 0.1 with a radius of 0.81^+0.20_-0.14 × 10^10 cm and an α parameter of 0.6 - 1.0 <cit.>. The author interpreted such a disc as an expanded, optically thin hot corona surrounding a geometrically thin standard disc, i.e. the so-called sandwiched model.

<cit.> applied a modelling method <cit.> to the study of MV Lyr. They find the geometrically thin disc to be responsible for the second-lowest break frequency, while the lowest one can be generated by somehow enhanced activity of the outer disc rim. From their modelling, they also find a solution where part of the central disc can be responsible for the highest break frequency fitted by <cit.>. Although the derived disc radius of 10^10 cm and α = 0.8 - 1.0 are in agreement with <cit.>, the ratio H/r is completely different, i.e. H/r < 0.1 in <cit.> and H/r > 0.1 in <cit.>.

The explanation of the optical multicomponent PDS of the nova-like MV Lyr promises the most complex picture of the accretion process in a CV so far. However, the character of the highest break frequency is still not perfectly clear because of the two contradictory geometry conditions from numerical modelling, although the corona interpretation is very plausible. There is only one way to resolve this puzzle, i.e. a direct X-ray observation, because a hot, optically thin corona radiates in X-rays. If the corona interpretation is correct, the highest optical break frequency, log(f/Hz) = -3.01 ± 0.06, must be detected in the X-ray PDS as well.

While MV Lyr has been observed in X-rays before, none of the observations were sufficient for the required timing analysis. During the ROSAT all-sky survey, MV Lyr was observed three times for 722 s (in Oct. 1990), 20250 s (in Nov.
1992), and 2218 s (in May 1996) with the position-sensitive proportional counter (PSPC), the first two observations coinciding with high optical state periods and the last occurring during a low state. The PSPC 0.1-2.4 keV count rates were 0.079 ± 0.011, 0.069 ± 0.002 and < 0.0008 <cit.>, corresponding (after re-scaling the assumed source distance from 320 pc to 500 pc) to luminosities of 4.87 × 10^31 erg s^-1, 3.87 × 10^31 erg s^-1, and < 1.20 × 10^30 erg s^-1, respectively. In contrast to our XMM-Newton observation, the PSPC bandpass does not cover the hard range above 2.4 keV, which contains a lot of critical information.

MV Lyr was also observed by Swift during both the high state (Obs ID 91443, lasting for 14240 s) and the low state (Obs ID 32042, 3282 s). <cit.> concentrated on the former and found that the spectrum can be well fitted by a multi-temperature plasma emission model (CEVMKL in xspec). In order to test the hypothesis of the presence of any scattering effect of X-rays from a wind or an extended component, these authors found that adding a power-law component increased the quality of the fit. In particular, they derived only a lower limit to the plasma temperature (kT > 21 keV), Γ ≃ 0.82, α ≃ 1.6 and a hydrogen column density of 0.13 × 10^22 cm^-2. The 0.2-10 keV flux of 5.4 × 10^-12 erg s^-1 cm^-2 corresponds to a luminosity of 1.7 × 10^32 erg s^-1.

Using these data, <cit.> derived a 2σ upper limit to the soft black-body component (if any) associated with the boundary layer, finding kT < 6.6 eV. Taking the accretion rates derived from the optical and UV bands, standard disc models predict an optically thick boundary layer with a temperature (13 - 33 eV) larger than that derived from the Swift data and a luminosity (> 10^34 erg s^-1) much higher than that inferred by using the Swift data. Moreover, the X-ray luminosity of the source in the 0.1 - 50 keV band is ≃ 3.2 × 10^32 erg s^-1, so that its ratio to the disc luminosity (calculated from the UV and optical observations) is in the range 0.01 - 0.001, thus suggesting that the MV Lyr system in high state is characterized by an optically thin boundary layer associated with a low-efficiency accretion mode (an ADAF-like flow) and/or an X-ray corona on the inner disc and close to the WD.

<cit.> studied all Swift XRT spectra, thus also the observation taken during the low state as well as the same data studied by <cit.>. They found that a good fit is obtained (see their Table 5) with a two-component thermal plasma model, but also with a thermal plasma to which a power law is added. In the high state, MV Lyr is characterized by a 0.3 - 10 keV luminosity of 1.7 × 10^32 erg s^-1 (consistent with <cit.>), which reduces to 3.6 × 10^31 erg s^-1 in the low state.

In this paper we present timing and spectral analyses of our recent XMM-Newton observation of MV Lyr. The data are described in Section <ref>, the EPIC spectrum is analysed in Section <ref>, and the timing analysis is presented in Section <ref>. We discuss and summarize our results in Sections <ref> and <ref>, respectively.

§ OBSERVATIONS

Since the existing ROSAT and Swift X-ray observations either only cover a limited energy range or are too short for the sensitivity of the timing analyses needed to put the Kepler data into context, we requested a 50-ks XMM-Newton observation. On 2015 September 6, this observation was realised during the more common high state. A 64.6-ks observation was taken under ObsID 0761040101, yielding 61.6, 63.3 and 63.5 ks for the PN, MOS and RGS instruments, respectively. The Optical Monitor (OM) took 13 exposures of 2883 s duration each in Image+Fast mode with the UVW1 filter inserted.
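As a side note to the luminosities quoted in the introduction, the distance re-scaling applied to the PSPC values is a simple inverse-square operation; a minimal sketch (the flux value is a placeholder, not one of the measurements quoted above):

import numpy as np

PC_CM = 3.086e18                          # one parsec in cm

def luminosity(flux_cgs, d_pc):
    """Isotropic luminosity [erg/s] from an observed flux [erg/s/cm^2]."""
    return 4.0 * np.pi * (d_pc * PC_CM)**2 * flux_cgs

L_320 = luminosity(1.0e-12, 320.0)        # hypothetical flux at the old distance
L_500 = L_320 * (500.0 / 320.0)**2        # distance enters quadratically (factor ~2.44)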
The data were downloaded from the XMM-Newton Science Archive (XSA), and we obtained science products with the Science Analysis System (SAS), version 14.0. We used the tool xmmextractor to re-generate calibrated event files, from which in turn light curves were extracted for all instruments. Optimised extraction regions for the EPIC detectors were calculated with the tool eregionanalyse, while for the Reflection Grating Spectrometer (RGS) and the optical monitor (OM), standard extraction regions were used by xmmextractor. We use the MOS light curves as a comparison, while for the timing analysis we use only the PN and OM light curves. The PN and OM light curves are shown in Fig. <ref>, and the Julian date of the XMM-Newton observation is placed in the long-term context of the AAVSO light curve in Fig. <ref>. For the spectral analysis we use both the MOS and PN spectra, while the RGS spectra are only used for consistency checks.

§ SPECTRAL ANALYSIS

We have extracted X-ray spectra from the RGS and the EPIC cameras using the Science Analysis System (SAS) version 14.0. An RGS spectrum in flux units was obtained using rgsproc, resulting in a merged spectrum from RGS1 and RGS2, making best use of all redundancies, especially filling chip gaps and bad pixels. The combined RGS spectrum is shown in Fig. <ref>. Prominent line transitions are marked with vertical lines and labels of the respective ions on top at the respective wavelengths, which can be used as a check to see whether any of them are present. Clearly, the RGS spectrum is not sufficiently well exposed for a quantitative analysis, but it allows the conclusion that the spectrum consists of a weak continuum below ∼ 17 Å (above 0.7 keV) with superimposed emission lines from oxygen, iron, and neon. Other lines, e.g., N vi, may be present but cannot be detected above the noise. The lines are too weak to derive any meaningful line ratios, and, e.g., density diagnostics with He-like triplets cannot be applied. But we use the RGS as a consistency check, as the high spectral resolution can potentially rule out spectral models, e.g., if they do not reproduce the emission lines that only the RGS can resolve. For quantitative analysis, we therefore need to rely on the low-resolution spectra from the three EPIC detectors PN and MOS1/2 over the energy range 0.1-10 keV. We merged the MOS1 and MOS2 spectra with the SAS tool epicspeccombine, while we use the PN spectrum separately in a simultaneous fit. We use xspec to test spectral models against the combined MOS1+2 and PN spectra via simultaneous χ^2 minimization fitting. We take photoelectric absorption within the neutral interstellar medium plus potentially circumstellar material into account with the tbabs model (Tübingen absorption; <cit.>). The only parameter of tbabs is the neutral hydrogen column density N_H. As an independent estimate of the amount of interstellar absorption in the direction of MV Lyr we use the HEASARC NH tool[http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl]. Two values are computed, from Galactic H i measurements in the Leiden/Argentine/Bonn (LAB) Survey and by <cit.>, resulting in 5.35 × 10^20 cm^-2 and 6.17 × 10^20 cm^-2, respectively. During the fit to the X-ray spectra, we keep the parameter N_H variable, starting with the interstellar value of 6 × 10^20 cm^-2, in the middle between the above values.
If there are any measurable additional sources of circumstellar absorption, a higher value will result, while a lower value may be an indicator of using the wrong model.

In pursuit of finding evidence for either coronal or boundary layer emission, we test three spectral models. All three models are different realizations of the APEC model <cit.>, which basically assumes a collisional plasma in equilibrium, with collisional ionizations and excitations being balanced by radiative recombinations and de-excitations. This way a spectrum consisting of a bremsstrahlung continuum (from free electrons) plus emission lines (from bound-bound transitions) is produced. The key parameter is the electron temperature T, which is converted to a Maxwellian velocity distribution of electrons and ions. A low-density plasma is assumed, with an arbitrary density of log(n_e) = 1 (in cgs units), and no photons are re-absorbed (the plasma is thus optically thin). The intensity depends on the emission measure, basically the product of the volume and the electron and hydrogen densities, V n_e n_H. For details we refer to <cit.>. An isothermal plasma is unlikely in nature; more likely is a broad distribution of temperatures. This can be approximated by a multi-temperature model or by integration over a continuous temperature distribution. A comparison of these two approaches is presented by <cit.>. The APEC model assumes the solar abundances of <cit.>, which can be scaled by a factor to determine an overall metallicity, while the VAPEC model allows modification of individual abundances. We used the VAPEC model, iterating the Fe abundance where needed. We first tested a single-temperature model (1-T VAPEC) but were not able to reproduce the observation; the plasma is thus measurably not isothermal. We then defined a second temperature component (2-T VAPEC) with a variable Fe abundance for each component, and show the result in the top panel of Fig. <ref>, together with the combined MOS1+2 and PN spectra. The model for each temperature component is shown with dotted lines (for clarity only for MOS1+2). This model already reproduces the data fairly well, with a value of χ^2_red = 1.2 (at 3076 degrees of freedom). The Fe xxv lines at 6.7 keV can only be reproduced if the Fe abundance of the hot component is increased. Meanwhile, the abundance of the cooler component is solar, although with large errors. This means that the hot and cool emission could come from different regions with different metallicity. We added a Gaussian component at 6.4 keV with zero line width, thus only broadened by the instrument, to determine whether there is any significant flux produced by Fe i. While some flux results, there is not really a line-like feature. The inset of Fig. <ref> shows the PN spectrum around the Fe lines with the 2-T VAPEC model (blue line) and the additional emission in the Gaussian (light blue shadings). The vertical dotted lines mark the energies of the four transitions listed on the left. Two nearby Fe xxv lines appear to be resolved, with a red-shifted and a blue-shifted component in the PN spectrum, but the model demonstrates that the energy resolution is not sufficient for such a conclusion.

The model parameters are given in the top part of Table <ref>. The values of the emission measure, log(VEM), indicate that the hotter component dominates by a factor of 10 over the cooler component.
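A minimal PyXspec sketch of such a fit setup is given below; the file names, energy range and starting values are placeholders, and this is not necessarily how the fits reported here were run:

# Requires HEASoft/PyXspec; file names are placeholders.
from xspec import AllData, Model, Fit

AllData("1:1 pn_src.pha 2:2 mos12_combined.pha")   # simultaneous PN + merged MOS fit
AllData.ignore("**-0.1 10.0-**")                   # restrict to the 0.1-10 keV band

# Absorbed two-temperature collisional plasma; the partial-absorber variant
# discussed below would instead use e.g. "tbabs*(vapec + pcfabs*vapec)"
m = Model("tbabs*(vapec + vapec)")
m.TBabs.nH = 0.06                                  # starting value, units of 1e22 cm^-2
Fit.statMethod = "chi"
Fit.perform()
print(Fit.statistic, Fit.dof)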
Both temperatures are typical of coronal plasma, and the 2-T VAPEC model can thus be considered representative of a coronal plasma with a large range of temperatures.

Closer inspection of the high-energy tail of the 2-T VAPEC model in Fig. <ref> (and the inset) suggests that the model systematically underpredicts the continuum in the energy range ∼ 3-7.3 keV. This may be an indicator that the high-temperature component is highly absorbed. This could occur if the high-temperature component originates from the boundary layer, thus deeply embedded at the bottom of the accretion disc. To test this scenario, we modify the 2-T VAPEC model by introducing an additional absorber that acts only on the hot component, while the main absorber acts on the entire emission. Since absorption within the accretion disc may not be uniform, we chose a partial absorber. The best-fit parameters are listed in the middle part of Table <ref>, while the spectral model is compared with the data in the bottom part of Fig. <ref>. The lower-temperature component is identical in the two models, while the high-temperature component yields a lower value in the partial absorber model. The Fe abundance does not need to be modified from solar for this model, indicating that the abundance effect in the 2-T VAPEC model may not be real. The overall value of N_H is closer to the interstellar value in the partial absorber model than in the 2-T model, where it is slightly higher. If one ignores the uncertainties in the interstellar value of N_H, one might argue that the partial absorber model is more realistic, but the difference is marginal.

Finally, we test the cooling flow model that <cit.> have applied to CV spectra after noticing that their grating spectra seem consistent with a multitemperature thermal plasma with a relatively flat emission measure distribution. The flatness of this distribution can indicate an isobaric cooling flow, which assumes that the gas releases all of its energy in the form of optically thin radiation as it cools in a steady-state flow. The optically thin radiation can be modelled with the APEC model (see above), and the xspec model mkcflow represents the flow as an interpolation between a minimum and a maximum APEC temperature. The mkcflow model also has a flavour in which abundances can be modified individually, but only for the full flow. The normalization parameter directly gives the total mass flow rate.

The mkcflow model was originally developed for galaxy clusters, for which a parameter is needed to redshift the model. In addition to redshifting, the parameter z is used to obtain the flux from the model via distance/z. Owing to this dual purpose of z, the computation of the mass flow rate from the normalization fails for z = 0, and for our Galactic source, which requires no redshift, we fix z at a small but non-zero value, z = 5 × 10^-8. This is equivalent to a Doppler shift of 0.015 km s^-1, which is negligible.

The best-fit mkcflow model is illustrated in Fig. <ref>, and the parameters are listed in the bottom part of Table <ref>. The inset of Fig. <ref> shows that the Fe lines are underestimated, even when leaving the Fe abundance free to vary. For the 2-T VAPEC model, a better reproduction of the Fe lines could be achieved because we could modify the abundances of the two components separately. The Fe abundance of the low-temperature component will be driven by the Fe L-shell lines around 1 keV, while the Fe abundance of the high-temperature component will be driven by the K-shell lines at 6.7 keV.
Since the Fe L-shell lines are much stronger, we believe the Fe abundance in the mkcflow model is driven by the L-shell lines, leading to an overall lower Fe abundance and thus an underprediction of the K-shell lines. The mkcflow model is also a poor representation of the energy range 0.7-1 keV; however, the overall value of χ^2 is sufficiently small that this model is not necessarily unacceptable compared to the other two models.

Although the RGS spectra are not well-enough exposed for quantitative analyses, we can compare the results from the EPIC cameras with the RGS spectrum, and in Fig. <ref> we show the RGS1 and RGS2 count spectra in comparison with the 2-T VAPEC and mkcflow spectral models. We have not re-fitted nor re-normalized the models, but only converted them to equivalent count spectra by folding them through the spectral responses of the RGS. The main plot shows good agreement of the continuum (a result of the quality of the cross-calibration), while the inset shows how well the models agree with selected emission lines. The strongest line, O viii (18.97 Å, bottom right inset), is well reproduced by both models, slightly better by the mkcflow model. The O vii triplet (21.6/21.8/22.1 Å, top right inset) is not at all reproduced, suggesting that both models may be missing some cooler plasma. The O vii forbidden line (22.1 Å) seems not to be detected, while the intercombination line (21.8 Å) might be present. If this is real, the cooler plasma would have a density above ∼ 10^10 cm^-3, higher than typical coronal plasma, and could thus come from the boundary layer.

In conclusion, the best fit to the spectra has been obtained with a cool, optically thin plasma model plus a partially absorbed hotter component. This model can be interpreted as a coronal component with a temperature of ∼ 1 keV (1.2 × 10^7 K) and a hotter emission component of ∼ 6 keV (7 × 10^7 K) from the boundary layer. In this picture the boundary layer is hidden behind semi-transparent (∼ 30% transparency) plasma with a hydrogen column density of N_H ∼ 4 × 10^22 cm^-2, while the absorption of the coronal plasma is consistent with the interstellar medium. In terms of emission measure, the boundary layer contributes 90%. However, the other models also lead to acceptable fits, also allowing the possibility that we are either seeing a multi-temperature coronal plasma or a cooling accretion flow, although the cooling flow model requires a somewhat higher N_H than interstellar.

§ TIMING ANALYSIS

§.§ Power density spectra

The goal of the timing analysis is to search for the characteristic break frequencies L_i that have been detected in the optical PDS determined from the long-term light curve observed with Kepler <cit.>. For the timing analysis and PDS calculation we applied the Lomb-Scargle algorithm <cit.>. This method is particularly suitable for the OM data because it can handle gaps in the light curve. The high- and low-frequency limits within which the PDS can be studied are determined by the light curve characteristics. The high-frequency end is usually limited by the white noise or by the PDS power rising toward the Nyquist frequency, while the low-frequency limit is usually determined by the duration of the light curve. The periodograms derived from the individual instruments are depicted in log-log scale in Fig. <ref>. The white noise starts to be noticeable around a frequency of log(f/Hz) = -2.4 for the X-ray data, but at higher frequencies in the case of the UV data. Therefore, we adopted the frequency of log(f/Hz) = -2.4 as an upper limit for the subsequent analysis.
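A minimal sketch of this periodogram computation, combined with the subsample averaging and log-log binning described below, might look as follows; the segment and bin counts are illustrative, and astropy is used here for convenience rather than being the tooling behind the published numbers:

import numpy as np
from astropy.timeseries import LombScargle

def mean_binned_pds(time, rate, n_sub=6, n_bins=25, logf_max=-2.4):
    """Pool Lomb-Scargle periodograms of light-curve subsamples in log-log
    space and bin them; the bin error is the standard deviation of all
    pooled points in the bin (empty bins return NaN)."""
    logf, logp = [], []
    for idx in np.array_split(np.arange(time.size), n_sub):
        f, p = LombScargle(time[idx], rate[idx]).autopower()
        keep = np.log10(f) < logf_max            # cut above the white-noise limit
        logf.append(np.log10(f[keep]))
        logp.append(np.log10(p[keep]))
    logf, logp = np.concatenate(logf), np.concatenate(logp)
    edges = np.linspace(logf.min(), logf.max(), n_bins + 1)
    which = np.clip(np.digitize(logf, edges) - 1, 0, n_bins - 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([logp[which == b].mean() for b in range(n_bins)])
    errs = np.array([logp[which == b].std() for b in range(n_bins)])
    return centres, means, errs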
Fig. <ref> reveals an apparent break frequency around L_1 in all instruments. Furthermore, the presence of the orbital frequency can clearly be seen in all PDSs except the UV one. While there is no obvious indication of other L_i signals in the X-rays, the OM data show some increased power around L_3 and L_4.

For a more detailed study we calculated the PDSs as we have done for the dwarf nova RU Peg <cit.>. The light curve is divided into several subsamples, and the corresponding periodograms of every subsample are averaged in log-log scale in order to get the PDS <cit.>. The main motivation of this method is to average out random features in the individual periodograms, so that only real, intrinsic PDS features remain. An important basic rule is that the more subsamples are used, the lower the scatter in the PDS, while on the other hand, the studied frequency interval becomes narrower with shorter subsamples, as the low-frequency limit then increases. We subsequently binned the PDS into equally spaced bins in order to reduce the scatter even more, and the mean values with their errors were fitted with a model. As the bin error estimate we used the standard deviation, because it describes the intrinsic PDS scatter. All PDS points within a bin were used for the standard deviation calculation. We based the selection of the binning frequency interval and the number of light curve subsamples on χ^2_red and on visual inspection of the binned PDS details. It was clear from the beginning that the binned PDS has a multicomponent shape. Therefore, for the fitting we used a multicomponent broken power law consisting of 4 linear functions with three break frequencies. Finally, we divided the OM data into 5 and the PN data (as the most relevant for the timing analysis) into 6 light curve subsamples, with a higher binning resolution in the OM data (because of its considerably larger count rate), in order to get a solution with χ^2_red closer to 1. For the fitting, we used the Gnuplot[http://www.gnuplot.info/] software, yielding best-fit values of the break frequencies with standard errors calculated from the variance-covariance matrix.

It is worth noting that we are comparing quantitatively very different data, i.e. Kepler PDSs averaged from 5 one-day light curve segments vs. 5/6 segments of XMM-Newton data with much shorter duration. Therefore, our investigation of the XMM-Newton observation is not adequate for detailed studies of low frequencies, and thus we concentrate mainly on the break frequencies L_1 and L_2.

The resulting best cases, with the PDS, binned data and fitted broken power laws, are depicted in Fig. <ref>. The fitted PDS parameters with standard errors are summarized in Table <ref>. The break frequency L_1 can clearly be seen in both the PN and OM PDSs. All fitted values of the searched-for break frequency (Table <ref>) agree well with the observed L_1 frequency (Table <ref>) within the errors. Furthermore, a plateau is seen between L_1 and L_2 in both PDSs. For the OM data, the power decrease toward lower frequencies starts at a break consistent with the Kepler L_2 value. For the PN PDS this break is slightly higher, but all values (PN, OM, Kepler L_2) agree within the errors, albeit PN and L_2 agree only marginally (the error extrema are equal). An additional power increase and a possible break close to the Kepler L_3 value is seen in the OM data, although this is still speculative, with no indication in the PN PDS. To investigate the low-frequency part of the PDS, another PDS calculation with a smaller number of light curve subsamples is needed. However, this yields a much more scattered PDS with larger error bars.
A lower frequency binning resolution would reduce the error bars, but also increases the scatter in the PDS, which is counterproductive. In general, this case appeared not to be adequate for a quantitative study and fitting.

In a final approach we concentrate on the high-frequency part by increasing the number of light curve subsamples. This excludes low frequencies from the PDS, but details in the remaining PDS should be clearer with lower scatter, allowing shorter/finer frequency bins also for the PN PDS. Fig. <ref> shows a 10-subsample case where the presence of a break around L_2 is clearer in the PN PDS. The values of χ^2_red are higher (2.0 in the OM PDS) or much higher (10.4 in the PN PDS) than 1 because of the smaller error bars, a natural consequence of the larger number of subsamples used for the PDS calculation reducing the scatter.

Based on the results from the spectral analysis (Fig. <ref>), we created two light curves for a hard and a soft energy band in order to investigate whether the different radiation sources may show different variability patterns. The energy interval for the soft light curve extraction was chosen to roughly agree with the lower-temperature plasma component (150 - 1500 eV), while the hard light curve was extracted from 1500 - 10000 eV. Non-averaged and non-binned PDSs are depicted in Fig. <ref>. Interestingly, the orbital modulation can only be seen in the soft band, while it is absent in the hard X-rays. Averaged and binned PDSs with fits are depicted in Fig. <ref>. For a direct comparison we keep the same binning resolution in both cases, despite the fact that only one fit yields an acceptable χ^2_red. Both bands show a clear break frequency coincident with the Kepler L_2 value, while the break is more pronounced in the hard band. The different PDS behaviour of the two bands between L_1 and L_2 is probably the reason for the slightly higher value obtained from the full PN light curve, i.e. the integrated PDS is slightly deformed by the asymmetry in both bands, while the break agrees well with L_2. The fitted PDS parameter values are given in Table <ref>.

Finally we can conclude that, except for the dominant break around the frequency L_1, the OM PDS shows an obvious L_2 and a possible L_3 break, also detected in the Kepler data, while the PN PDS only shows a break consistent with the L_2 value.

§.§ PDS simulations

While the presence of the L_1 break frequency in the XMM-Newton data is unambiguous, the second break at L_2 requires some more justification. For this purpose we simulated artificial light curves following the method of <cit.>. The principle is to draw two Gaussian-distributed random numbers following the input PDS, and use them as the real and imaginary parts of the Fourier coefficient. This is done for every frequency, and the required time series (synthetic light curve) is obtained by an inverse Fourier transform. As the input PDS we used the PDS parameters from the fits to the real PN data, and the artificial light curves had the same duration and sampling as the original PN data. We applied the same periodogram calculation, light curve subsample division and binning procedure as for the observations.

In Fig. <ref> we show various simulations compared to the PN binned PDS from the top panel of Fig. <ref>. We used three PDS models: a two-component model using two red noises before and after the break frequency at log(f/Hz) = -2.96; a three-component model, the same as the two-component case but with the lowest red noise added; and finally a four-component model as shown by the fit in the top panel of Fig. <ref>.
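The simulation recipe just described (two Gaussian deviates per frequency as the real and imaginary Fourier coefficients, scaled by the model PDS, followed by an inverse transform) can be sketched as follows; the broken power-law slopes below are placeholders, not our fitted values:

import numpy as np

def simulate_lc(pds_model, n, dt, seed=None):
    """Timmer & Koenig (1995): two Gaussian deviates per frequency as the
    real and imaginary parts of the Fourier coefficients, scaled by
    sqrt(P(f)/2), then inverse-transformed into a time series."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, dt)[1:]                 # positive frequencies
    amp = np.sqrt(0.5 * pds_model(f))
    coeff = amp * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
    if n % 2 == 0:
        coeff[-1] = coeff[-1].real                 # Nyquist coefficient must be real
    return np.fft.irfft(np.concatenate(([0.0], coeff)), n)

# Illustrative broken power law; the break mimics log(f/Hz) = -2.96,
# but the slopes are placeholders rather than the fitted values
def broken_pl(f, fb=10**-2.96, a_low=-1.0, a_high=-2.5):
    return np.where(f < fb, (f / fb)**a_low, (f / fb)**a_high)

lc = simulate_lc(broken_pl, n=2**16, dt=10.0, seed=1)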
Therefore, only the latter, four-component model comprises the break frequency at log(f/Hz) = -3.28 (the L_2 equivalent). The left column of Fig. <ref> shows the mean PDS calculated from 10000 simulations, with the standard deviations taken as the assumed errors. Apparently the four-component model is the best, although not perfect, i.e. the low-frequency part and the power depression below the L_2 break are slightly higher than in the original fit used as input for the simulations. This suggests that the random process is important, and that not every simulated PDS from a light curve divided into 6 subsamples yields the L_2 break frequency. In the right column of Fig. <ref> we display examples of simulated PDSs close to the original fit. Apparently, the power depression below the L_2 break frequency can be the result of a random process without the L_2 break frequency inherently rooted in the PDS. However, these selected cases in the two- and three-component models emerge with lower probability than in the four-component case. Therefore, we can conclude that it is not certain that the break frequency L_2 in the observed PN PDS is a real feature, but it is more probable that it is real than that it is just the result of a random process.

The presented PDS simulations are ideal for deriving the PDS parameter uncertainties directly from the randomness of the red noise. However, the lower break frequency is not present in all simulated PDSs, the fitting procedure did not converge properly in the majority of cases, and direct adjustment of the initial parameter estimates was required. Therefore, when repeating the simulation process 10000 times, the uncertainty derivation can only be realised for the highest break frequency. We performed simple broken power law fits to the binned PDSs for frequencies higher than log(f/Hz) = -3.2. A Gaussian fit to the resulting histogram yields a mean value of -3.00 and a 1-σ parameter of 0.12 as the break frequency error. This uncertainty value is very close to the standard error of 0.14 from Table <ref>. This suggests that the standard errors derived by the Gnuplot software are good error estimates.

§.§ Time delays

The UV and X-ray light curves shown in Fig. <ref> seem to be correlated, and in this section we search for time lags between UV and X-rays using a cross-correlation function (CCF) as described in Section 2 of <cit.> (a simplified sketch of the calculation is given below). Because of the non-continuous nature of the OM light curve, we calculated a separate CCF for each of the 13 continuous OM light curves (left panel of Fig. <ref>). Subsequently we calculated a mean CCF, which is depicted as a shaded area (the error of the mean is used as an estimate of the error in the CCF) in the right panel of Fig. <ref>.

The mean CCF has a maximum around zero time shift but is clearly asymmetric, indicating that not all variations are simultaneous or constantly shifted[A constant time lag would result in a symmetric shape but with a shift of the peak.]. The same indication of positive time lags (asymmetry or shift of the peak) is seen in almost all individual CCFs except 3 cases, where the indicated time lag is reversed (marked as thick lines in the left panel of Fig. <ref>). We fitted the mean CCF with a single Lorentzian, yielding a poor fit (Fig. <ref>, right panel, thin line), while a sum of two Lorentzians yielded an acceptable solution (Fig. <ref>, right panel, thick line). The exact values (time lags) of the Lorentzian centres depend on the time lag interval over which we performed the fitting.
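The per-segment CCF computation and the two-Lorentzian fit to the mean CCF can be sketched as follows (simplified to evenly sampled, gap-free segments; the sign convention and all names are our illustrative choices):

import numpy as np
from scipy.optimize import curve_fit

def ccf(a, b, max_lag):
    # Normalised cross-correlation for integer lags -max_lag..max_lag
    # (in samples); with this convention a positive lag means that b
    # lags behind a.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    out = np.empty(len(lags), dtype=float)
    for i, l in enumerate(lags):
        if l >= 0:
            out[i] = np.mean(a[:len(a) - l] * b[l:])
        else:
            out[i] = np.mean(a[-l:] * b[:l])
    return lags, out

def two_lorentzians(t, a1, t1, w1, a2, t2, w2):
    # Sum of two Lorentzians; the centres t1 and t2 are the fitted lags.
    return a1 / (1 + ((t - t1) / w1)**2) + a2 / (1 + ((t - t2) / w2)**2)

# One CCF per continuous OM/PN segment pair, then the mean over the
# 13 segments; finally, e.g.:
# popt, pcov = curve_fit(two_lorentzians, lag_s, mean_ccf, sigma=ccf_err, p0=guess)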
Some time lags are summarized in Table <ref>[For time lags larger than 400 s the CCF loses the Lorentzian shape and the wings become constant.]. Clearly it is not straightforward to derive the time lag from the CCF, but it appears that part of the signal is (almost) simultaneous and part of the X-ray signal lags behind the UV. Therefore, we performed a more detailed frequency-dependent time lag and coherence analysis following the method described by <cit.>. The two compared light curves must have the same sampling and number of data points. Therefore, we resampled the OM light curves by linear interpolation in order to synchronise them with the PN data[The difference is just a few seconds, which is (almost) negligible when comparing the original and the resampled light curves.], and we filled the gaps with a linear function between the two extreme data points of consecutive OM observations. The coherence and time lags with 3-σ uncertainties are shown in Fig. <ref>. Clearly, there is a coherence peak around the L_1 break frequency, where the positive time lag, in spite of large error bars, confirms the conclusion from the CCF analysis. Also the time delay values are in good agreement with the ones derived from the two-Lorentzian CCF fitting (t_2 value in Table <ref>).

In Fig. <ref>, we visualize the behaviour of the PN lag. We selected some examples of the UV rising before the PN. In order to get comparable light curves, we rescaled the selected light curve segments[Except for the flare around 58 ks (Fig. <ref> and bottom right panel of Fig. <ref>), because there is a very strong peak in PN; we therefore chose the second highest peak.], transforming the count rates to get minima at 0 and maxima at 1. The results suggest that the whole flare is not offset toward earlier times; only the rising branch shows this characteristic, while the decreasing parts of the OM and PN flares are (almost) simultaneous. Therefore, the time delay derived from Figs. <ref> and <ref> is not the true time delay between the peaks, but represents a time delay of the rising part only.

In order to study the asymmetry in more detail we analysed the time delay between the soft and hard X-ray bands. All corresponding frequency-dependent time lags are visualized in Fig. <ref>. It is clear that the soft X-ray emission lags behind the hard emission, while the soft emission lags behind the UV emission for frequencies between L_1 and L_2, with the time delay increasing towards L_2. The delay between hard X-rays and UV is not as clear, because the data are noisier and many values intersect the zero time lag within the error bars. It is possible that those data are (almost) simultaneous, but the PN lagging behind the OM is still possible, mainly around the L_1 break frequency.

The situation is depicted in Fig. <ref> with two representative examples from Fig. <ref>. It is clear how the hard and soft PN count rates rise almost simultaneously, while the hard band declines earlier. Therefore, the mentioned asymmetry/lag between OM and PN is caused mainly by the soft X-ray band. Hard X-rays and UV can thus be more synchronized, as suggested by Fig. <ref>. For a global study of the mentioned profile features, we analysed the flares in the same way as <cit.>, i.e. we averaged several flares in order to get a mean profile (the procedure is sketched below).
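The flare-superposition procedure can be sketched as follows (the window half-width matches the peak definition given in the next paragraph; the per-flare baseline subtraction and all names are our own simplifying choices):

import numpy as np

def mean_flare_profile(rate, half_width=10):
    # A flare peak is a local maximum that dominates a window of half_width
    # points on each side; each extracted profile is divided by its
    # integrated area so that flares of very different brightness become
    # comparable before averaging.
    profiles = []
    for i in range(half_width, len(rate) - half_width):
        win = rate[i - half_width : i + half_width + 1]
        if rate[i] == win.max() and rate[i] > rate[i - 1] and rate[i] > rate[i + 1]:
            prof = win - win.min()      # simple per-flare baseline (our choice)
            area = prof.sum()
            if area > 0:
                profiles.append(prof / area)
    return np.array(profiles).mean(axis=0), len(profiles)

# mean_pn, n_flares = mean_flare_profile(pn_rate_50s)  # pn_rate_50s: 50 s bins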
Because of low photon statistics, only the 50 s binning was adequate for this flare-averaging analysis[Finer binning does not show smooth flares for the X-ray light curve, and a very thin central spike is formed by the averaging instead of a flare.]. The peaks of the flares were defined as the local maxima with 10 data points before and 10 after the maximum in the PN light curve, yielding 55 flares. The results are depicted in Fig. <ref>, where the flares were divided by the underlying integrated area in order to make them comparable[Mainly because of the OM flares, which have a much larger amplitude and count rate than the X-ray ones.]. It is clear from the plots that the OM is rising before the X-rays, while the soft X-rays lag behind the hard band. The simultaneous rise of hard and soft X-rays suggested by Fig. <ref> is therefore not confirmed. The opposite seems to be present, but all lags are in agreement with the frequency-dependent analysis in Figs. <ref> and <ref>. Finally, it seems that all peaks are simultaneous. Despite the fact that finer binning is not adequate for this analysis because of the low number of counts, we checked for this simultaneity and it persisted at 30 s binning. Even finer binnings yield a small deviation, but this is not reliable because the average X-ray flare then shows just a very narrow central spike[This is the result of the flare profile in the original light curve, where the flares are just an accumulation of individual photon spikes.].

§ RMS-FLUX RELATION

After the PDS study, we tested whether the XMM-Newton data satisfy another typical feature of flickering in accreting systems, i.e. the rms-flux relation. For this purpose we used the OM light curve binned into 10 s bins, but for the PN case we needed larger bins of 50 s because of the considerably lower count rate. The resulting rms-flux relation is shown in Fig. <ref>. The typical linear trend is confirmed by a linear fit to the data.

§ DISCUSSION

We present an analysis of XMM-Newton observations of the nova-like system MV Lyr in order to test the conclusion of <cit.> that the highest break frequency L_1 detected in optical Kepler data () is generated by the central hot, optically thin and geometrically thick corona. Such a structure radiates in X-rays; therefore the presence of the break frequency log(L_1) = -3.01 ± 0.06 Hz is expected in X-rays.

§.§ Timing analysis

The presence of the searched break frequency around the L_1 value can clearly be recognized in all forms of PDS we derived from the X-ray and UV data. The derived value is log(f) = -2.96 ± 0.14 Hz. Furthermore, we found significant indications of the presence of the second optical break frequency L_2 found in Kepler data. This break frequency, with a value of log(f) = -3.41 ± 0.10 Hz, is seen in the UV data from the OM instrument of XMM-Newton. The X-ray data from the PN detector yield an only marginally consistent value of log(f) = -3.28 ± 0.07 Hz, while division of the PN light curve into soft and hard bands yields a value of log(f) = -3.40 ± 0.09 Hz (from the hard band, with a smaller error and better χ^2_red). This break frequency is more pronounced in the hard band.

Finally, we can mention a possible presence of L_3 in the OM data, while it is not found in the PN data. This would suggest a purely optical origin, which is in agreement with the disc as the source (). But because of the low frequency resolution near the lower PDS limit, this conclusion still rests on weak grounds.
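As a side note to the timing discussion, the rms-flux computation of the RMS-FLUX RELATION section above reduces to a few lines (the segment length is an illustrative choice, and the subtraction of the measurement-noise contribution from the rms is omitted):

import numpy as np

def rms_flux(rate, pts_per_seg=20):
    # Split the binned light curve into consecutive segments and return the
    # mean count rate and the rms of each segment; flickering produces a
    # linear rms-flux relation.
    n_seg = len(rate) // pts_per_seg
    segs = np.reshape(rate[:n_seg * pts_per_seg], (n_seg, pts_per_seg))
    return segs.mean(axis=1), segs.std(axis=1, ddof=1)

# mean_flux, rms = rms_flux(om_rate_10s)       # OM light curve in 10 s bins
# slope, intercept = np.polyfit(mean_flux, rms, 1)   # the linear trend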
§.§ Time delays

If the variability characterized by the break frequency L_1 is generated by the central corona, the signal must be seen first in X-rays and later reprocessed by the geometrically thin disc (). A time lag in which the reprocessed optical/UV radiation trails the X-rays is therefore expected. If we assume that such irradiation generates the reprocessed signal out to a maximal distance from the source equal to the outermost boundary of the disc, the light travel time is the expected maximal time lag. The typical radii of discs in CVs are ∼ 10^10 cm, corresponding to a travel time of 0.3 s. With the precision of our XMM-Newton data we are not able to measure such a small time delay between the OM and PN data. Therefore, any discussion of a reprocessing time lag is unfortunately not possible.

However, we found some other delays in the XMM-Newton light curves. The first is the X-rays lagging behind the UV signal between the L_1 and L_2 frequencies, with a specific characteristic: the X-ray data lag behind the UV on the rising branch of a flare, while the decline is simultaneous. Such a profile is not new in binary light curves. It was observed by <cit.> in the X-ray binary Cyg X-1, where the hard band lagged behind the soft on the rising part of a flare, while the decline was simultaneous.

Furthermore, we showed that the soft X-ray component lags behind the hard component, yielding an even stronger lag of the soft X-rays behind the UV signal. The lag of hard X-rays behind the UV is not as significant as in the other band combinations, but still possible. Finally, we did not find any credible evidence of a lag of entire flares, i.e. the peaks seem to be simultaneous, or at least any lags are smaller than the bin size. Because of the uncertain hard X-ray - UV behaviour, we examine three scenarios, i.e. 1) the entire X-ray emission lagging behind the UV, 2) only the soft X-rays lagging behind simultaneous UV and hard X-rays (neglecting the small difference), and 3) the hard X-rays lagging behind the UV, with the soft X-rays lagging behind the hard X-rays; in all three scenarios the peaks are simultaneous.

How can the first scenario be explained? Let us assume that a turbulent element is formed somewhere in the disc. It needs some time to propagate towards the disc surface and to start to evaporate. Once the evaporation is feeding the corona, the locally enhanced plasma density generates enhanced X-ray emission. Once the turbulent element in the disc dissipates, it stops feeding the corona, and the response is an immediate decrease of the X-ray emission, simultaneous with the turbulent eddy dissipation and the local UV radiation. Therefore, the rise delay can be attributed to the penetration of the turbulent eddy towards the evaporation conditions. Such penetration can also be understood as an expansion which brings the eddy closer to the disc surface. The idea of an expanding and cooling sphere, the so-called fireball model, was proposed by <cit.> and used by <cit.> to explain a typical colour delay in the flickering of CVs. This mechanism does not agree with the interpretation by <cit.> of L_1, namely that the variability is generated by the corona, and it also has problems explaining the detected lag of the soft behind the hard X-rays.

The second scenario is the simultaneous hard X-rays and UV. Two different regions would be responsible for these three radiation bands. The similar behaviour of UV and hard X-rays suggests a region where the hard X-rays are generated by free-free transitions of the hot ionised plasma and the UV is the reprocessing of X-rays by the disc or synchrotron radiation of the same plasma.
The delayed soft X-rays should then be generated in a different region of the corona, after the flare-generating inhomogeneity/turbulent eddy propagates to different temperature conditions. Finally, the third scenario can be a combination of the two previous ones, i.e. an expanding eddy evaporating at the disc-corona transition, with the evaporated matter propagating into a lower temperature region, while the X-rays are reprocessed into UV during the overall process, explaining the common decline.

§.§ Spectral analysis

The XMM-Newton spectra are well fitted with a two-temperature VAPEC or a mkcflow model. Both fits are statistically equivalent, suggesting that our interpretation must include a cooler and a hotter source, with several points requiring further discussion. For the VAPEC model, the hotter component would have a higher Fe abundance, which would not be necessary with a partial absorber applied only to the hot component.

The first remark is based on a partial absorber that may be associated with the boundary layer between the disc and the white dwarf (Fig. <ref>). The soft component may be interpreted as coming from the corona. In such a case the detected fast variability can originate in the boundary layer instead of the corona, because the former is the source of 90% of the emission. At first view, this would disagree with the original assumption, based on the <cit.> modelling, that the observed L_1 break frequency is generated in the corona.

However, <cit.> mention the boundary layer as the source of the cooling flow radiation[Note that the mkcflow model is more of an empirical description of boundary layer emission than a rigorous model and might not be suitable for all boundary layers of accretion discs.]. If this localisation is a necessary condition for the cooling flow, a modification is suggested, i.e. the X-rays are generated neither in the corona nor in the disc-white dwarf boundary layer, but in the boundary layer between the corona and the white dwarf (Fig. <ref>). Such a boundary layer is expected because of the different tangential velocities of the corona and the central star. In this case the fast variability can still be generated in the corona by accretion inhomogeneities, summed all the way down to the white dwarf surface, where they modulate the X-ray radiation of the boundary layer. Actually, this is the principle of the multiplicative accretion process, where the inhomogeneities are generated at different radii and transported inwards, which modulates the final mass accretion rate (, , ).

The second remark is based on the mass accretion rate derived from the mkcflow model, of the order of 10^-12 M_⊙ y^-1. Such a value is acceptable for a dwarf nova during quiescence () but hardly probable for a nova-like in the high state (see Fig. <ref>). Furthermore, if the hot component is associated with the boundary layer, it should have a temperature of the order of 10^6 - 10^7 K. <cit.> studied MV Lyr in its high state using UV data and derived a boundary layer temperature of 10^5 K, which is strongly inconsistent with our finding of 10^7 K.

However, in a quiescent dwarf nova the inner disc is evaporating and forming a geometrically thick corona (). All the matter is accreted via this structure, and a UV - X-ray delay is observed. <cit.> proposed a sandwiched model for MV Lyr, where a geometrically thick corona surrounds a geometrically thin disc up to a certain radius, but the geometrically thin disc reaches down to the white dwarf surface.
In such a case, only a small fraction of the matter is evaporated and accreted onto the central compact object via the corona. This idea is in agreement with our non-detection of a UV - X-ray delay and with the very low mass accretion rate of the order of 10^-12 M_⊙ y^-1. In such a case the disc-white dwarf boundary layer can have the temperature of 10^5 K derived by <cit.>, but the corona-white dwarf boundary layer is much hotter, with the temperature found in this paper. The latter is in agreement with the hot boundary layer merged with an accretion-dominated flow (equivalent to the corona in our paper) proposed by <cit.>.

Therefore, both the cool and hot X-ray sources may either be generated in the corona, where the hotter component extends closer/down to the white dwarf surface and would be partially absorbed there, or the hotter component is associated with the corona-white dwarf boundary layer. In both solutions the mass accretion fluctuations generating the L_1 break frequency may be localised in the corona, as suggested by <cit.>. Unfortunately, we cannot distinguish between the cases with and without the partial absorber, as both models reproduce the XMM-Newton data equally well.

In order to compare our observation with the Swift data we used WebPIMMS[The WebPIMMS tool is available at https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl] and adopted the (very) simple black-body model of <cit.>. We estimated the 0.3-10 keV band luminosities of all three Swift observations to be 5.87 × 10^31 erg s^-1, 4.65 × 10^31 erg s^-1, and < 1.45 × 10^30 erg s^-1, the Swift observed luminosity in the high state being at least a factor of ≃ 8.3 larger than that obtained by using the XMM-Newton data alone and the 2T VAPEC model (see Table <ref>).

The results from the spectral analysis of the Swift data by <cit.> and <cit.> are inconsistent with the results presented here. In particular, the models that fit the Swift data are not consistent with the high-state XMM-Newton spectrum. Since the Swift spectra were acquired in a much shorter exposure time than the XMM-Newton ones, and because of the lower sensitivity, the signal to noise is considerably lower. Especially at high energies, the Swift spectra pose little constraint, and we have some doubt that the source characteristics really changed between the high-state Swift spectrum and the new XMM-Newton observation. The low signal to noise could introduce random features that are not accounted for in the response, which can lead to inconsistent results when assuming the average response. Note that for Swift, only 'canned' response files are used, while for XMM-Newton, a customized response file is produced based on the most recent calibration, the location of the source on the CCD detector, etc. We thus doubt that the source characteristics were really different at the times of the observations, and give more confidence to the results derived from the EPIC spectra.

§.§ Comparison with past interpretations

Part of the X-ray flux is probably generated in a wind from the system (, ). This may seem like a scenario contradicting the one presented in this paper, but in fact it is not. Our corona interpretation is based on a model where inefficient cooling evaporates the corona, following <cit.>. <cit.> used archival spectra and fitted them with a combination of a cooling flow model and an additional black body component. The latter was not consistent with any optically thick boundary layer (suggested by ). The authors deduced that the boundary layer must be hot with inefficient cooling, and an evaporated corona is then expected. However, following <cit.>, the evaporation yields two solutions, i.e.
the material is partially accreted onto the white dwarf via a corona, but partially lost via a wind. Therefore, the existence of the corona is directly connected to a wind. Furthermore, X-ray observations of UX UMa (), an eclipsing VY Scl system, show an eclipsed hard (kT ∼ 5 keV) component and an uneclipsed soft component. In order to produce eclipses, the hard component must thus be generated close enough to the white dwarf, while the soft component is generated far enough from the system to escape eclipses. Moreover, the authors identified the boundary layer as the source of the hard X-rays.

MV Lyr is a low inclination system without eclipses. If the UV is radiated by the disc, the non-detection of orbital modulation seen in Fig. <ref> is natural. If the hard X-rays are generated in a corona or a corona-white dwarf boundary layer, the secondary is not big enough to generate any modulation, but an extended corona in the form of a wind can reach far enough to be partially occulted. This explains well the presence/absence of orbital modulation of the hard/soft X-rays in Fig. <ref>.

<cit.> folded the X-ray light curve of MV Lyr on the orbital period. They did this for the whole energy interval and for two different intervals for the soft (0.3 - 1.0 keV) and hard (2.5 - 7.0 keV) light curves. The authors did not detect significant changes in the light curve shape given the statistical errors of the low count rates, and concluded that there is no significant energy dependence of the orbital variation. Our modulation finding in Figs. <ref> and <ref> suggests the opposite, which is highlighted by our phased and binned soft and hard[Any possible out-of-error patterns in the hard PN data can be caused by the division of the light curve into two bands, which does not exactly separate the spectral components. Therefore, some residual soft X-ray counts are still present in the hard light curve.] PN data in Fig. <ref>, using the orbital period of 3.19 h from <cit.>.

Finally, <cit.> pointed out a dilemma in the radiation from nova-like systems, i.e. their optical and UV accretion rates and luminosities resemble those of dwarf novae in outburst, while the accretion rates and luminosities resulting from X-ray analysis during the same brightness state resemble those of quiescent dwarf novae. The sandwiched model offers a connection between these antagonistic aspects (Fig. <ref>), i.e. the coexistence of the standard disc with an optically thick boundary layer (resembling dwarf novae in outburst, proposed by ) and a corona with an optically thin hot boundary layer (resembling quiescent dwarf novae, proposed by ).

§.§ Proposed model

Finally, the complex model of the multicomponent PDS shape in MV Lyr that appears most plausible to us is illustrated in Fig. <ref>. <cit.> proposed that the highest break frequency L_1 detected in Kepler data is generated by a geometrically thick corona, while <cit.> found agreement between L_3 and the whole geometrically thin disc (the closer to the white dwarf, the more flares are produced), with enhanced activity of the outer disc rim causing the break frequency L_4. However, despite the fact that the model by <cit.> is based on a geometrically thin disc, they found the same solution with disc parameters consistent with the geometrically thick disc solution by <cit.> for the frequency L_1. <cit.> used a statistical method developed by <cit.> based on various simplifications, while <cit.> used a proper physical method developed by <cit.>.
Therefore, the geometrically thick nature of the disc solution is more relevant. Unless this is a coincidence, it yields an important suggestion, namely that the flare statistics of a geometrically thin disc are similar to those of a geometrically thick disc, i.e. the thin disc behaves in a similar way to the corona, and the angular momentum redistribution/gradient and turbulence dimension scales (see <cit.> for details) are similar.

The only problematic break frequency is L_2. The models of <cit.> yield L_3 and L_4 with α = 0.1 - 0.4 and an outer disc radius of 0.5 and 0.9 times the primary Roche lobe. L_1 was simulated with a small outer disc radius of 10^10 cm and α = 0.9, as found by <cit.>. But there is a so far ignored solution for L_2 in Fig. 4 of <cit.> as well, i.e. an outer disc radius equal to the coronal radius and α = 0.3 - 0.4. These values of α are consistent with the solutions for L_3 and L_4. In RU Peg the low X-ray break frequency was detected also in the UV (observed by ) and is interpreted as fluctuations of the inner disc transported to the boundary layer or corona (), i.e. every fluctuation of the mass transfer at the inner disc modulates the UV light and the X-rays, because the inner disc is feeding the corona or boundary layer, and every inhomogeneity causes an inhomogeneous evaporation or feeding of the boundary layer. A similar mechanism can apply to MV Lyr, i.e. the inhomogeneous mass accretion rate in the geometrically thin disc below the corona modulates the UV and causes inhomogeneous evaporation and subsequent X-ray radiation. The necessary condition is the detection of L_2 in the UV and X-rays, which seems to be fulfilled, and this speculation would agree with the detected X-ray lag behind the UV (the eddy propagation mechanism described in Section <ref>). But inner disc fluctuations in the form of turbulence were already accounted for in modelling the L_3 frequency. Therefore, the inner disc variability should be somehow enhanced to generate an additional component L_2. Perhaps the evaporated inhomogeneity first radiates in X-rays, which are subsequently reprocessed by the geometrically thin disc, causing the enhanced optical variability of the inner disc below the corona. Perhaps the overall X-ray vs. UV behaviour is a combination of two scenarios: 1) X-ray variability generated by accretion fluctuations in the corona, subsequently reprocessed into the optical by the disc; 2) inner thin disc fluctuations radiating in the UV and evaporating to the corona, where they radiate in X-rays, generating an additional reprocessed optical signal. Therefore, two processes can be at work, leading to the detected complicated band behaviour.

Moreover, a solution where the geometrically thin disc below the corona has enhanced activity because of the disc-corona interaction is attractive as well. Different densities and different physical conditions of the disc and corona can yield different tangential velocities of the two, and such a velocity gradient is a necessary condition for the Kelvin-Helmholtz instability. Therefore, such a transition region can be highly turbulent. A similar interpretation was proposed by <cit.> in the case of V1504 Cyg and V344 Lyr, where a break frequency in the optical PDS is seen during outburst, but absent during quiescence. The disc extends down to the white dwarf surface in the former case, while it is absent in the latter.
The disc-corona interaction during the outburst is proposed to be more turbulent, generating the additional PDS component.

§ SUMMARY

In this paper we analyse new X-ray and UV data of the nova-like system MV Lyr taken by XMM-Newton. The results can be summarized as follows.

(i) The X-ray spectra are well fitted with a two-temperature VAPEC or a mkcflow model. The VAPEC model would require a different Fe abundance for each temperature component.

(ii) A partial absorber model, applied only to the hotter component, tests the possibility that the hotter component is generated in the boundary layer, while the cooler component originates from the inner geometrically thick corona. In this case the Fe abundance would be the same for both components, giving strong preference to the partial absorber model.

(iii) The derived mass accretion rate and the independent boundary layer (between disc and white dwarf) temperature measurement by <cit.> suggest that the hotter component is probably generated in the inner hot corona or in a corona-white dwarf boundary layer. This supports the sandwiched model proposed by <cit.>.

(iv) We confirm the presence of the expected break frequency (log(f) = -3.06 ± 0.02 Hz) around the optical signal at log(f) = -3.01 ± 0.06 Hz detected in Kepler data (, ). A UV equivalent from the XMM-Newton OM data at log(f) = -3.08 ± 0.03 Hz is detected as well.

(v) The second optical break frequency from Kepler data, of log(f) = -3.39 ± 0.04 Hz, is present in the OM (log(f) = -3.34 ± 0.04 Hz) and PN data (log(f) = -3.40 ± 0.09 Hz), and it is more pronounced in the harder PN band after decomposition of the X-ray light curve into soft and hard bands based on the two spectral components.

(vi) The soft X-rays lag behind both the UV and the hard X-rays. The latter two are (almost) simultaneous. Without this decomposition, the entire X-ray flares lag behind the UV, but with probably simultaneous peaks, similar to what was observed by <cit.> in Cyg X-1. The lags are most pronounced during the rising branch of the flares, with an almost simultaneous decline.

(vii) We confirm the sandwiched model suggested by <cit.>, i.e. a central hot, geometrically thick, optically thin disc surrounds a geometrically thin and optically thick disc up to a distance of approximately 10^10 cm from the central white dwarf. The fastest variability, around log(f) = -3 Hz, is generated by accretion fluctuations in the corona and radiated as X-rays. The optical signal is generated by the reprocessing of these X-rays by the geometrically thin disc.

(viii) The sandwiched model explains the contradictory resemblance of VY Scl systems to dwarf novae, i.e. the corona-white dwarf boundary layer resembles dwarf novae in quiescence, while the underlying geometrically thin disc-white dwarf boundary layer resembles dwarf novae in outburst.

(ix) A complex model is summarized based on the additional work of <cit.>, where the other three break frequencies detected in Kepler data come from the outer disc rim, the whole accretion disc, and possibly (just a speculation) from the inner disc region below the corona.

§ ACKNOWLEDGEMENT

AD was supported by the Slovak grant VEGA 1/0335/16 and by the ERDF - Research and Development Operational Programme under the project "University Scientific Park Campus MTF STU - CAMBO" ITMS: 26220220179. AAN acknowledges the support by the INFN project TAsP. Furthermore, we acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research.
We also thank Koji Mukai and Polina Zemko for advice and help in spectral modelling, and the anonymous reviewer for valuable comments.
http://arxiv.org/abs/1702.08313v1
{ "authors": [ "A. Dobrotka", "J. -U. Ness", "S. Mineshige", "A. A. Nucita" ], "categories": [ "astro-ph.SR", "astro-ph.HE" ], "primary_category": "astro-ph.SR", "published": "20170227150917", "title": "XMM-Newton observation of MV Lyr and the sandwiched model confirmation" }
Skin cancer is one of the major types of cancer, with an increasing incidence over the past decades. Accurately diagnosing skin lesions to discriminate between benign and malignant skin lesions is crucial to ensure appropriate patient treatment. While there are many computerised methods for skin lesion classification, convolutional neural networks (CNNs) have been shown to be superior to classical methods. In this work, we propose a fully automatic computerised method for skin lesion classification which employs optimised deep features from a number of well-established CNNs and from different abstraction levels. We use three pre-trained deep models, namely AlexNet, VGG16 and ResNet-18, as deep feature generators. The extracted features are then used to train support vector machine classifiers. In a final stage, the classifier outputs are fused to obtain a classification. Evaluated on the 150 validation images from the ISIC 2017 classification challenge, the proposed method is shown to achieve very good classification performance, yielding an area under the receiver operating characteristic curve of 83.83% for melanoma classification and of 97.55% for seborrheic keratosis classification.

Medical imaging, skin cancer, melanoma classification, dermoscopy, deep learning, network fusion.

§ INTRODUCTION

Skin cancer is one of the most common cancer types worldwide <cit.>. As an example, skin cancer is the most common cancer type in the United States and it is estimated that one in five Americans will develop skin cancer in their lifetime. Among different types of skin cancers, malignant melanoma (the deadliest type) is responsible for 10,000 deaths annually just in the United States <cit.>. However, if detected early it can be cured through a simple excision, while diagnosis at later stages is associated with a greater risk of death - the estimated 5-year survival rate is over 95% for early stage diagnosis, but below 20% for late stage detection <cit.>.

There are a number of non-invasive tools that can assist dermatologists in diagnosis, such as macroscopic images which are acquired by standard cameras or mobile phones <cit.>. However, these images usually suffer from poor quality and resolution. Significantly better image quality is provided by dermoscopic devices, which have become an important non-invasive tool for the detection of melanoma and other pigmented skin lesions. Dermoscopy supports better differentiation between different lesion types based on their appearance and morphological features <cit.>.

Visual inspection of dermoscopic images is a challenging task that relies on a dermatologist's experience. Despite the definition of commonly employed diagnostic schemes such as the ABCD rule <cit.> or the 7-point checklist <cit.>, due to the difficulty and subjectivity of human interpretation as well as the variety of lesions and confounding factors encountered in practice (see Fig. <ref> for some examples of common artefacts encountered in dermoscopic images), computerised analysis of dermoscopic images has become an important research area to support diagnosis <cit.>.
Conventional computer-aided methods for dermoscopic lesion classification typically involve three main stages: segmenting the lesion area, extracting hand-crafted image features from the lesion and its border, and classification <cit.>. In addition, often extensive pre-processing is involved to improve image contrast, perform white balancing, apply colour normalisation or calibration, or remove image artefacts such as hairs or bubbles <cit.>. With the advent of deep convolutional neural networks (CNNs), and considering their excellent performance for natural image classification, there is a growing trend to utilise them for medical image analysis, including skin lesion classification <cit.>. Likewise, in this paper, we exploit the power of deep neural networks for skin lesion classification. Using CNNs which are pre-trained on a large dataset of natural images as optimised feature extractors for skin lesion images can potentially overcome the drawbacks of conventional approaches and can also deal with small task-specific training datasets. A number of works <cit.> have tried to extract deep features from skin lesion images and then train a classical classifier. However, these studies are limited to specific pre-trained network architectures and to specific layers for extracting deep features. Moreover, each of them employed only a single pre-trained network: in <cit.>, a single pre-trained AlexNet was used, while <cit.> employed a single pre-trained VGG16, and <cit.> utilised a single pre-trained Inception-v3 <cit.> network.

In this work, we hypothesise that using different pre-trained models, extracting features from different layers and ensemble learning can lead to classification performance competitive with specialised state-of-the-art algorithms. In our approach, we utilise three deep models, namely AlexNet <cit.>, VGG16 <cit.> and ResNet-18 <cit.>, which are pre-trained on ImageNet <cit.>, as optimised feature extractors, and support vector machines, trained using a subset of images from the ISIC archive[https://www.isic-archive.com/#!/topWithHeader/wideContentTop/main], as classifiers. In a final stage, we fuse the SVM outputs to achieve optimal discrimination between the three lesion classes (malignant melanoma, seborrheic keratosis and benign nevi).

§ MATERIALS AND METHODS

§.§ Dataset

We use the training, validation and test images of the ISIC 2016 competition <cit.> as well as the training set of the ISIC 2017 competition[https://challenge.kitware.com/#phase/5840f53ccad3a51cc66c8dab] for training the classifiers. In total, 2037 colour dermoscopic skin images are used, which include 411 malignant melanoma (MM), 254 seborrheic keratosis (SK) and 1372 benign nevus (BN) images. The images are of various sizes (from 1022×767 to 6748×4499 pixels), photographic angles and lighting conditions, and contain different artefacts such as the ones shown in Fig. <ref>. A separate set of 150 skin images is provided as a validation set. It is these validation images that we use to evaluate the results of our proposed method.

§.§ Pre-processing

A generic flowchart of our proposed approach is shown in Fig. <ref>. In our approach, we try to keep the pre-processing steps minimal to ensure better generalisation ability when tested on other dermoscopic skin lesion datasets. We thus only apply three standard pre-processing steps which are generally used for transfer learning.
First, we normalise the images by subtracting the mean RGB value of the ImageNet dataset, as suggested in <cit.>, since the pre-trained networks were originally trained on those images. Next, the images are resized using bicubic interpolation to be fed to the networks (227×227 pixels for AlexNet and 224×224 pixels for VGG16 and ResNet-18). Finally, we augment the training set by rotating the images by 0, 90, 180 and 270 degrees and then further applying horizontal flipping. This augmentation leads to an increase of the training data by a factor of eight.

§.§ Deep Learning Models

Our deep feature extractor uses three pre-trained networks. In particular, we use AlexNet <cit.>, a variation of VGGNet named VGG16 <cit.>, and a variation of ResNet named ResNet-18 <cit.> as optimised feature extractors. These models have shown excellent classification performance for natural image classification in the ImageNet Large Scale Visual Recognition Challenges <cit.> and various other tasks. We choose the shallowest variations of VGGNet and ResNet to prevent overfitting, since the number of training images in our study is limited. We explore extracting features from different layers of the pre-trained models to see how they affect the classification results. The features are mainly extracted from the last fully connected (FC) layers of the pre-trained AlexNet and the pre-trained VGG16. We use the first and second fully connected layers (referred to as FC6 and FC7, with dimensionality 4096) and the concept detector layer (referred to as FC8, with dimensionality 1000). For ResNet-18, since it has only one FC layer, we also extract features from the last convolutional layer of the pre-trained model.

§.§ Classification and Fusion

The above features, along with the corresponding labels (i.e., skin lesion type), are then used to train multi-class non-linear support vector machine (SVM) classifiers. We train different classifiers for each network and then, to fuse the results, average the class scores to obtain the final classification result. To evaluate the classification results, we map the SVM scores to probabilities using logistic regression <cit.>. Since the classifiers are trained for a multi-class problem with three classes, we combine the scores to yield results for the two binary classification problems defined in the ISIC 2017 challenge, which are malignant melanoma vs. all and seborrheic keratosis vs. all classifications.

§ RESULTS

As mentioned above, evaluation is performed based on the 150 validation images provided by the ISIC 2017 challenge. The validation set comprises 30 malignant melanoma, 42 seborrheic keratosis and 78 benign nevus images. For evaluation, we employ the suggested performance measure of the area under the receiver operating characteristic curve (AUC). The raw images are resized to 227×227 pixels for AlexNet and to 224×224 pixels for VGG16 and ResNet-18. For each individual network and also for each fusion scheme, the results are derived by taking the average of the outputs over 5 iterations. The obtained classification results are shown in Table <ref> for all single networks and for all fused models. Fig. <ref> shows the receiver operating characteristic (ROC) curve of our best performing approach (i.e., fusion of all networks) while Fig.
<ref> shows examples of skin lesion images that are incorrectly classified by this approach.

§ DISCUSSION

The main contribution of this study is to propose a hybrid approach for skin lesion classification based on deep feature fusion, training multiple SVM classifiers and combining the probabilities for fusion, in order to achieve high classification performance. From the classification results in Table <ref>, we can infer a number of observations. First of all, for all approaches, even for the worst performing approach, the classification results are far better than pure chance (i.e. an AUC of 50%), which confirms that the concept of transfer learning can be successfully applied to skin lesion classification. Besides this, for all single networks, fusing the features from different abstraction levels leads to better classification performance compared to extracting features from a single FC layer.

Features extracted from AlexNet lead to the best performance of a single network approach. This could potentially be related to the network depth: since our training dataset is not very big, using a shallower network may lead to better results.

The single network approaches are however outclassed by our proposed method of employing multiple CNNs and fusing their SVM classification outputs. The obtained results demonstrate that significantly better classification performance can be achieved.

While our proposed method is shown to give very good performance on what is one of the most challenging public skin lesion datasets, there are some limitations that can be addressed in future work. First, the number of pre-trained networks that we have studied so far is limited. Extending the model to incorporate more advanced pre-trained models such as DenseNets <cit.> could lead to further improved classification performance. Second, extending the training data is expected to lead to better results for each individual network as well as for their combinations. Finally, resizing the images to very small patches might remove some useful information from the lesions. Although in a number of studies bigger training patches were used (e.g. 339×339 in <cit.> or 448×448 in <cit.>), these are still significantly smaller compared to the captured image sizes. Cropping the images or using segmentation masks to guide the resizing could be a potential solution for dealing with this.

§ CONCLUSIONS

In this paper, we have proposed a fully automatic method for skin lesion classification. In particular, we have demonstrated that pre-trained deep learning models, trained for natural image classification, can also be exploited for dermoscopic image classification. Moreover, fusing the deep features from various layers of a single network or from various pre-trained CNNs is shown to lead to better classification performance. Overall, very good classification results have been demonstrated on the challenging images of the ISIC 2017 competition, while in future work fusing more deep features from further CNNs can potentially lead to even better predictive models.

Oliveira2018 R. B. Oliveira, J. P. Papa, A. S. Pereira, and J. M. R. S. Tavares, “Computational methods for pigmented skin lesion classification in images: review and future trends,” Neural Computing and Applications, vol. 29, no. 3, pp. 613–636, 2018. Rogers2015 H. W. Rogers, M. A. Weinstock, S. R. Feldman, and B. M. Coldiron, “Incidence estimate of nonmelanoma skin cancer (keratinocyte carcinomas) in the U.S. population, 2012,” JAMA Dermatology, vol. 151, no.
10, pp. 1081–1086, 2015. Esteva2017 A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, 2017. SBSWP93a K. Steiner, M. Binder, M. Schemper, K. Wolff, and H. Pehamberger, “Statistical evaluation of epiluminescence dermoscopy criteria for melanocytic pigmented lesions,” Journal of the American Academy of Dermatology, vol. 29, no. 4, pp. 581–588, 1993. SRCPAHBNLB94a W. Stolz, A. Riemann, A. B. Cognetta, L. Pillet, W. Abmayr, D. Holzel, P. Bilek, F. Nachbar, M. Landthaler, and O. Braun-Falco, “ABCD rule of dermatoscopy: a new practical method for early recognition of malignant melanoma,” European Journal of Dermatology, vol. 4, no. 7, pp. 521–527, 1994. AFCDSD98a G. Argenziano, G. Fabbrocini, P. Carli, V. De Giorgi, E. Sammarco, and M. Delfino, “Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions. comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis.,” Archives of Dermatology, vol. 134, no. 12, pp. 1536–1570, 1998. FSZGCPD98a M. G. Fleming, C. Steger, J. Zhang, J. Gao, A. B. Cognetta, I. Pollak, and C. R. Dyer, “Techniques for a structural analysis of dermatoscopic imagery,” Computerized Medical Imaging and Graphics, vol. 22, no. 5, pp. 375–389, 1998. CKUIASM07a M. E. Celebi, H. Kingravi, B. Uddin, H. Iyatomi, A. Aslandogan, W. V. Stoecker, and R. H. Moss, “A methodological approach to the classification of dermoscopy images,” Computerized Medical Imaging and Graphics, vol. 31, no. 6, pp. 362–373, 2007. Barata2015 C. Barata, M. E. Celebi, and J. S. Marques, “Improving dermoscopy image classification using color constancy,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 3, pp. 1146–1152, 2015. Lopez2017 A. R. Lopez, X. Giro-i Nieto, J. Burdick, and O. Marques, “Skin lesion classification from dermoscopic images using deep learning techniques,” in 13th IASTED International Conference on Biomedical Engineering, 2017, pp. 49–54. Kawahara2016 J. Kawahara, A. BenTaieb, and G. Hamarneh, “Deep features to classify skin lesions,” in 13th International Symposium on Biomedical Imaging, 2016, pp. 1397–1400. Codella2015 N. Codella, J. Cai, M. Abedini, R. Garnavi, A. Halpern, and J. R. Smith, “Deep learning, sparse coding, and SVM for melanoma recognition in dermoscopy images,” in International Workshop on Machine Learning in Medical Imaging, 2015, pp. 118–126. Mirunalini2017 P. Mirunalini, A. Chandrabose, V. Gokul, and S. M. Jaisakthi, “Deep learning for skin lesion classification,” arXiv preprint arXiv:1703.04364, 2017. szegedy2016rethinking C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826. Krizhevsky2012 A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105. Simonyan2014 K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014. He2016 K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778. Deng2009 J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. 
Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255. gutman2016skin D. Gutman, N. C. Codella, E. Celebi, B. Helba, M. Marchetti, N. Mishra, and A. Halpern, “Skin lesion analysis toward melanoma detection: A challenge at the International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the International Skin Imaging Collaboration (ISIC),” arXiv preprint arXiv:1605.01397, 2016. Russakovsky2015 O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, and M. Bernstein, “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015. Platt1999 J. Platt, “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” Advances in Large Margin Classifiers, vol. 10, no. 3, pp. 61–74, 1999. huang2017densely G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2017. DeVries2017 T. DeVries and D. Ramachandram, “Skin lesion classification using deep multi-scale convolutional neural networks,” arXiv preprint arXiv:1703.01402, 2017.
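To make the pipeline of the MATERIALS AND METHODS section easier to reproduce, a condensed end-to-end sketch is appended below (PyTorch and scikit-learn are our tooling choices, not necessarily those used by the authors; the layer indices follow torchvision's AlexNet definition, the eight-fold rotation/flip augmentation is omitted for brevity, and train_imgs, train_labels and test_imgs are assumed to be prepared beforehand):

import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

# Mean subtraction (ImageNet RGB mean) and bicubic resizing; applied per
# image to build the (N, 3, 227, 227) input batches.
preprocess = transforms.Compose([
    transforms.Resize((227, 227),
                      interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[1.0, 1.0, 1.0]),
])

net = models.alexnet(pretrained=True).eval()   # or weights="IMAGENET1K_V1"

# Grab FC6/FC7/FC8 activations with forward hooks; in torchvision's AlexNet
# these are classifier[1], classifier[4] and classifier[6].
feats = {}
def hook(name):
    def fn(module, inp, out):
        feats[name] = out.detach().flatten(1)
    return fn
for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    net.classifier[idx].register_forward_hook(hook(name))

def extract(batch):
    # batch: float tensor of shape (N, 3, 227, 227) after preprocessing
    with torch.no_grad():
        net(batch)
    return torch.cat([feats["fc6"], feats["fc7"], feats["fc8"]], dim=1).numpy()

# One non-linear SVM per network.
svm_alexnet = SVC(kernel="rbf", probability=True)
svm_alexnet.fit(extract(train_imgs), train_labels)    # labels: MM / SK / BN

# Repeating the above for VGG16 and ResNet-18 gives an ensemble of
# (classifier, feature-extractor) pairs; fusion averages the per-class
# probabilities, and the melanoma-vs-all score is the melanoma column:
# probs = [clf.predict_proba(fx(test_imgs)) for clf, fx in ensemble]
# p_fused = np.mean(probs, axis=0)

With probability=True, scikit-learn internally fits a logistic (Platt-style) mapping from SVM scores to probabilities, which plays the role of the score-to-probability calibration described in the Classification and Fusion subsection.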
http://arxiv.org/abs/1702.08434v2
{ "authors": [ "Amirreza Mahbod", "Gerald Schaefer", "Chunliang Wang", "Rupert Ecker", "Isabella Ellinger" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170227185441", "title": "Skin Lesion Classification Using Hybrid Deep Neural Networks" }
nicolas.bourbeau-hebert.1@ulaval.ca Centre d'optique, photonique et laser, Université Laval, Québec, QC, G1V 0A6, Canada
Centre d'optique, photonique et laser, Université Laval, Québec, QC, G1V 0A6, Canada
Centre d'optique, photonique et laser, Université Laval, Québec, QC, G1V 0A6, Canada
Centre d'optique, photonique et laser, Université Laval, Québec, QC, G1V 0A6, Canada
Laser Physics and Photonics Devices Laboratory, Future Industries Institute, and School of Engineering, University of South Australia, Mawson Lakes, SA 5095, Australia
Laser Physics and Photonics Devices Laboratory, Future Industries Institute, and School of Engineering, University of South Australia, Mawson Lakes, SA 5095, Australia
Laser Physics and Photonics Devices Laboratory, Future Industries Institute, and School of Engineering, University of South Australia, Mawson Lakes, SA 5095, Australia

We present a dual-comb spectrometer based on two passively mode-locked waveguide lasers integrated in a single Er-doped ZBLAN chip. This original design yields two free-running frequency combs having a high level of mutual stability. We developed in parallel a self-correction algorithm that compensates residual relative fluctuations and yields mode-resolved spectra without the help of any reference laser or control system. Fluctuations are extracted directly from the interferograms using the concept of ambiguity function, which leads to a significant simplification of the instrument that will greatly ease its widespread adoption. Comparison with a correction algorithm relying on a single-frequency laser indicates discrepancies of only 50 attoseconds on optical timings. The capacities of this instrument are finally demonstrated with the acquisition of a high-resolution molecular spectrum covering 20 nm. This new chip-based multi-laser platform is ideal for the development of high-repetition-rate, compact and fieldable comb spectrometers in the near- and mid-infrared.

Self-corrected chip-based dual-comb spectrometer
David G. Lancaster
December 30, 2023

§ INTRODUCTION

The development of advanced spectrometers leads to new insights into science <cit.> and enables improvements in production environments through industrial process control <cit.>. Spectrometer development took a substantial step forward with the emergence of frequency combs; their broad and regularly spaced modal structure makes them excellent sources to achieve active spectroscopy with unrivalled frequency precision <cit.>. However, this precision can only be captured if the comb modes are spectrally resolved. Dual-comb spectroscopy <cit.> is one of the few techniques able to resolve a complete set of dense comb modes. It maps the optical information to the more accessible radio-frequency (RF) domain using mutually coherent combs having slightly detuned repetition rates. Their coherence is typically ensured by phase locking both combs together or to external references <cit.>, by using a post-correction algorithm based on auxiliary lasers <cit.>, or by using an adaptive sampling scheme <cit.>. However, all these approaches rely on external signals and additional hardware, which adds a significant layer of complexity to the dual-comb instrument.

Some laser designs have recently been proposed to generate two slightly detuned combs from the same cavity in order to force a certain level of mutual coherence enabled by the rejection of common-mode noise.
Most are based on non-reciprocal cavities that induce a repetition rate difference <cit.>. The generation of two combs with different central wavelengths was also reported <cit.>, but this avenue requires an additional step to broaden the lasers and obtain enough spectral overlap. However, having two pulse trains sharing the same gain and mode-locking media, which are both highly nonlinear, is worrisome as it could introduce delay-dependent distortions in interferograms (IGMs). Indeed, a pair of pulses overlapped in a nonlinear medium could be significantly different from another pair interacting separately with the medium. In fact, the long-known colliding-pulse laser <cit.> exploits this effect to shorten the duration of its pulses. Dual-comb generation using two cavities integrated on a single platform avoids this concern and has been demonstrated with few-mode semiconductor combs <cit.>. Even those common-mode designs have difficulty yielding combs with sufficient relative stability to allow coherent averaging of data <cit.>. Thus, additional hardware and signals are still needed to track and compensate for residual drifts. An interesting idea was recently suggested to extract those drifts directly from the IGMs using predictive filtering <cit.>. Since it comes down to tracking the time-domain signal using a model made from the sum of all comb modes, the effectiveness of this approach still has to be demonstrated for cases with several thousand modes and where signal is only available momentarily in bursts near zero path difference (ZPD).

In this paper, we present a standalone and free-running dual-comb spectrometer based on two passively mode-locked waveguide lasers (WGLs) <cit.> integrated in a single glass chip. This mutually stable system makes it possible to fully resolve the comb modes after using a new algorithm that corrects residual relative fluctuations estimated directly from the IGMs. Thus, no single-frequency lasers, external signals or control electronics are required to retrieve the mutual coherence, which tremendously simplifies the instrument. The design we use for this demonstration is also original and consists of two ultrafast-laser-inscribed waveguides <cit.> in a chip of Er-doped ZBLAN, forming two mechanically coupled, but optically independent, laser cavities. Lasers are mode-locked using two distant areas of the same saturable absorber mirror (SAM). This design avoids nonlinear coupling between combs while maximizing their mutual stability. We use the instrument to collect a 20-nm-wide absorption spectrum of the 2ν_3 band of hydrogen cyanide (H13C14N). The high quality of the spectral data (acquired in 71 ms) is validated by fitting Voigt lines that return parameters in close agreement with published values.

§ INSTRUMENT DESIGN

WGLs are remarkably well adapted to support dual-comb spectrometers. Indeed, several waveguides are typically available on a chip, they offer a much lower cavity dispersion than fibre lasers, thanks to the short propagation through glass, which facilitates mode-locking, and their small size is compatible with the market's demand for small-footprint instruments. Furthermore, the transparency of ZBLAN from visible to mid-infrared allows for a broad range of emission wavelengths to be supported <cit.>. Finally, rare-earth-doped glasses have proven to be excellent candidates for the generation of low-noise frequency combs of metrological quality.
WGLs thus seem to be an obvious choice for the centrepiece of a dual-comb platform.Figure <ref> shows a schematic of the dual-comb spectrometer, whose design is inspired by the single-cavity mode-locked WGL presented in <cit.>. It revolves around a 13-mm-long ZBLAN glass chip containing several laser-inscribed waveguides <cit.> with diameters ranging from 30 to 55 µm, which all support single-transverse-mode operation. The glass is doped with 0.5 mol% Er3+, acting as the active ion, 2 mol% Yb3+, which enhances pump absorption <cit.>, and 5 mol% Ce3+, which reduces excited-state absorption in Er3+ <cit.>.Two laser diodes (LDs) (Thorlabs BL976-PAG900), each capable of producing around 900 mW of single-transverse-mode power at 976 nm, are used to pump the chip. They go through separate isolators (ISOs) (Lightcomm HPMIIT-976-0-622-C-1) and the end of the output fibres are stripped, brought in contact along their side, and sandwiched between two glass slides with glue. The fibres are therefore held in place with a distance of 125 µm between cores and with the end facets lying in the same plane, which is just sticking out of the sandwich. The output plane is imaged onto the chip with a pair of lenses (L1 and L2) arranged in an afocal configuration to couple the pump beams into a pair of waveguides separated by 600 µm (centre-centre). The lenses are chosen so that the ratio of the focal lengths best matches the required magnification set by the distance between waveguides and that between fibres (4.8 in this case). A software-assisted optimization of distances between components is performed for the chosen lenses in order to maximize coupling. Two parallel waveguides having diameters of 45 and 50 µm are selected since we observe that they yield the best efficiencies as a result of a good balance between mode matching and pump confinement. The waveguides' large area ensures a low in-glass intensity, which increases the threshold for undesirable nonlinear effects <cit.>.An input coupler (IC), which also acts as an output coupler (OC), is butted against the left side of the chip in order to let the pump in (T_976 > 95%) and to partially reflect the signal (R_1550 = 95%). On the other side, a pair of anti-reflective coated lenses (L3 and L4) arranged in an afocal configuration is used to image the waveguide modes onto a SAM (Batop SAM-1550-15-12ps) with a magnification of 0.16. This size reduction increases the fluence on the SAM, and thus its saturation, which permits the passive mode-locking of the lasers. A polarization beam splitter (PBS) is placed between lenses L3 and L4 to allow a single linear polarization. Both cavities make use of the same components, which ensures maximum mutual stability.The resulting mode-locked frequency combs exit their respective cavity at the OC and travel back towards the fibres to be collected. They are separated from the counter-propagating pumps with wavelength-division multiplexers (WDMs) (Lightcomm HYB-B-S-9815-0-001), which also include a stage of isolation for the signal wavelength. This conveniently gives two fibre-coupled frequency comb outputs that can be mixed in a 50/50 fibre coupler to perform dual-comb spectroscopy.Each cavity generates∼ 2 mW of comb power, of which around 10% is successfully coupled in the fibres. This is due to the alignment being optimized for the pump wavelength, thus the efficiency could be improved with an achromatic imaging system. 
Nevertheless, this level of power is more than sufficient for laboratory-based spectroscopy.Figure <ref>a) shows the power spectrum of each comb, as measured with an optical spectrum analyzer (Anritsu MS9470A). Their 3 dB bandwidth (Δλ_3dB) spans approximately 9 nm around 1555 nm and they show excellent spectral overlap. A zoomed view reveals spectral modulation that is identified as parasitic reflections taking place on the left surface of the OC and on the right surface of the chip. Even though anti-reflective coatings are deposited on those surfaces, the weak echoes are re-amplified through the chip and come out with non-negligible power. This issue can be solved with an angled chip <cit.> and a wedged OC. The repetition rate (f_r) of each comb is 822.4 MHz and their repetition rate difference (Δ f_r) is 10.5 kHz. This yields a beat spectrum fully contained within a single comb alias. Its central frequency is adjustable by varying one of the pump diodes' power. As for Δ f_r, it is mostly determined by the slight optical path differences through lenses and, potentially, through waveguides. Indeed, their diameters differ and this affects their effective refractive indices. Tuning Δ f_r is possible by slightly adjusting the alignment of optical components. Figure <ref>b) shows an averaged IGM obtained with a sequence of IGMs self-corrected using the algorithm presented in the next section. Small pulses on either side of the ZPD burst correspond to the parasitic reflections mentioned earlier.The mutual stability of the dual-comb platform is evaluated using the beat note between two comb modes, one from each comb, measured through an intermediate continuous-wave (CW) laser. Figure <ref> shows the beat note computed from a 71 ms measurement (grey), which corresponds to the digitizer's memory depth at 1 GS/s, along with beat notes computed from three different sections of duration 1/Δ f_r ∼ 95 µs belonging to the longer measurement (red, green, blue). Coloured traces are nearly transform-limited since their width (∼ 12.9 kHz) approaches the bandwidth of a rectangular window (1.2 Δ f_r = 12.6 kHz). This means that the dual-comb platform is stable to better than Δ f_r on a 1/Δ f_r timescale, which consists of a key enabler for the self-correction algorithm presented below. However, the beat note's central frequency oscillates on a slower timescale and turns into the wider grey trace (>10Δ f_r) after 71 ms integration. This is mostly due to vibrations that slightly change the coupling of the pumps into the waveguides as well as the alignment of intra-cavity components. § SELF-CORRECTION Although nothing forces the combs to settle individually at specific frequencies, the platform presented here is designed to provide them with mutual stability. Therefore, the frequency difference between pairs of comb modes is more stable than their absolute frequencies. This is exactly what is required for mode-resolved dual-comb spectroscopy since the measured beat spectrum is a new RF comb with modes sitting at those differential frequencies. In order to reach a specified spectral resolution, the stability constraints on the RF comb need to be more severe than those on the optical combs by a factor equal to the compression ratio between the optical and RF domains (f_r / Δ f_r).We can define the RF comb with only two parameters: its spectral offset and its spectral spacing. 
Mathematically, the RF modes are found at frequencies f_n = f_c + n Δ f_r, where f_c is the frequency of the mode closest to the carrier frequency (the spectrum's centre of mass) and n is the mode index. It is judicious to define the comb around f_c since this reduces the extent of n, which acts as a lever on Δ f_r, and thus increases the tolerance on the knowledge of this parameter.Of course, f_n is a time-dependent quantity since residual fluctuations δ f_c(t) and δΔ f_r(t) remain despite our instrument design. The modes' frequencies can thus be described at all times with f_n(t) =[f_c + δ f_c(t)] + n [Δ f_r + δΔ f_r(t)] When measuring dual-comb IGMs generated with free-running combs, it is required that we estimate and compensate those fluctuations. This allows reaching the spectral resolution made available by the optical combs and it opens the door to coherent averaging <cit.> by yielding mode-resolved spectra. We show here that it is possible to extract the residual fluctuations directly from the IGMs by making use of the cross-ambiguity function <cit.>, a tool initially developed for radar applications. This tool is closely related to the cross-correlation, but besides revealing the time delay τ between two similar waveforms, it also reveals their frequency offset f_o. More specifically, the cross-ambiguity function gives a measure of the similarity of two waveforms, A_1(t) and A_2(t), as a function of τ and f_o. It is given by χ_1,2(τ, f_o ) = ∫_-∞^∞ A_1(t) A_2^*(t+τ)exp(2π i f_o t)dt where ^* denotes complex conjugation. In the presence of chirp, an uncompensated frequency shift can affect the apparent delay between waveforms as retrieved by the more familiar cross-correlation method. Hence, it is important to retrieve those two parameters simultaneously from the point of maximum similarity on an ambiguity map, that is where |χ_1,2(τ, f_o)| is maximum.For a given dual-comb IGM stream, we can compute χ_1,k(τ,δ f_c ) between the first and k^th ZPD bursts, where f_o takes the form of a frequency offset δ f_c relative to the first burst's f_c in that specific context. The values τ_k and δ f_c,k at the maximum of each calculated ambiguity map reveal the instantaneous fluctuations sampled at each ZPD time. Indeed, time delays τ_k translate into fluctuations δΔ f_r(t), while δ f_c,k are samples from δ f_c(t). Figure <ref> shows an ambiguity map generated from measured IGMs for the case k=100. Only ZPD bursts and their slightly overlapping adjacent echoes are used for the calculation. The latter are responsible for the weak modulation on the ambiguity map.Initially, the uncorrected spectrum is completely smeared, as shown by the green trace in Fig. <ref> computed from a 71-ms-long IGM stream. This highlights the fact that, in the original spectrum, RF modes are wider than their nominal spacing. We first compensate spectral shifting on the RF comb using a correction based on the values δ f_c,k. They are used to estimate the continuous phase signal δϕ_c(t) = 2π∫δ f_c(t) dt required to perform a phase correction <cit.>. This corrects the fluctuations of the mode at f_c (n=0), but leaves spectral stretching around that point uncompensated, as depicted by the red higher-index modes in Fig. <ref>. Then, we use the values τ_k to construct the continuous phase signal δΔϕ_r(t) = 2π∫δΔ f_r(t) dt associated with spectral stretching. This phase signal is used to resample the IGMs on a grid where the delay between pairs of optical pulses is linearly increasing (constant Δ f_r). 
This yields the blue spectrum in Fig. <ref>, which shows transform-limited modes having a width determined by the von Hann window that was used to compute all aforementioned spectra (2/(71×10^-3) = 28 Hz). The improvement between the green and blue spectra indicates that this algorithm allows accounting for fluctuations greater than the RF mode spacing. The extracted δϕ_c(t) is shown in blue in Fig. <ref>a) and the extracted δΔϕ_r(t), normalized by 2π f_r to obtain time deviations from a linear delay grid, is shown in Fig. <ref>c). A detailed explanation of the algorithm is given in Appendix A.To verify the exactitude of the extracted signal δϕ_c(t), we compare it with an independent measurement of this quantity that we refer to as δϕ_c^*(t). It was obtained from the beat note between two comb modes, one from each comb, measured through an intermediate CW laser. This corresponds to the approach that is routinely taken to post-correct dual-comb IGMs <cit.>. Since the pair of optical modes that is selected by the CW laser creates a beat note at a RF frequency f_CW different from f_c, the phase signal corresponding to this beat note is adjusted with the term δΔϕ_r(t) scaled by (f_c-f_CW)/Δ f_r, the number of modes separating f_CW from f_c. This yields the measured signal δϕ_c^*(t) shown in red in Fig. <ref>a), which should give the same information as the extracted signal δϕ_c(t). The difference between δϕ_c(t) and δϕ_c^*(t) is given in Fig. <ref>b) and shows white residuals up to Δ f_r/2, the Nyquist frequency of the sampled fluctuations. The standard deviation of the residuals is 0.06 rad, which corresponds to around one hundredth of a cycle, or 50 attoseconds at 1550 nm.It is important to note that the algorithm presented here can only compensate relative fluctuations that are slower than Δ f_r/2 since they are effectively sampled by each ZPD burst. Anything above this frequency is aliased during sampling and contaminates the correction signals estimated in the 0 to Δ f_r/2 band. Thus, a high Δ f_r and a low level of relative frequency noise, especially in the band above Δ f_r/2, are desirable to achieve the best results. However, Δ f_r must always be smaller than f_r^2/(2 Δν), where Δν is the optical combs' overlap bandwidth, in order to correctly map the optical information to a single comb alias. See Appendix A for the algorithm's tolerance to relative frequency noise faster than Δ f_r/2. Moreover, we must emphasize that the self-correction algorithm simply permits retrieving the mutual coherence between comb modes from the IGMs themselves, which yields an equidistant, but arbitrary, frequency axis. Therefore, calibration against frequency standards or known spectral features is still required if an absolute frequency axis is needed. § SPECTROSCOPY OF HCN We use the spectrometer to measure the transmission spectrum of the 2ν_3 band of H13C14N by relying solely on the self-correction algorithm presented above. We mix the two frequency combs in a 50/50 coupler and send one output through a free-space gas cell (Wavelength References HCN-13-100). The 50-mm-long cell has a nominal pressure of 100 ± 10 Torr and is at room temperature (22 ± 1^∘C). The optical arrangement is such that light does three passes in the cell. The transmitted light is sent to an amplified detector (Thorlabs PDB460C-AC) while the second coupler's output goes straight to an identical detector that provides a reference measurement (see Fig. <ref>). 
This reference measurement is especially important to calibrate the spectral modulation present on the generated combs. Both signals are simultaneously acquired with an oscilloscope (Rigol DS6104) operating at 1 GS/s.Figure <ref> shows a transmission spectrum acquired in 71 ms that covers up to 20 nm of spectral width. This allows to observe 24 absorption lines belonging to the P branch of H13C14N with a spectral point spacing of f_r = 822.4 MHz. The absolute offset of the frequency axis is retrieved by using one of the spectral features' known centre and its scale is determined by using the measured value for f_r, as in <cit.>. We overlay the result of a fit composed of 24 Voigt lines for which the Gaussian width (Doppler broadening) is determined by calculation from <cit.> for a temperature of 22 ^∘C (FWHM ∼ 450 MHz). The Lorentzian width (pressure broadening), centre and depth of each line are left as free parameters. We use the same approach as the one described in <cit.> to fit the data and suppress the slowly varying background. The dominant structure left in the fit residuals is due to weak hot-band transitions. They represent the biggest source of systematic errors for the retrieved parameters since they often extend over lines of interest. As a final proof that the correction algorithm presented in this paper yields quality spectroscopic data, we compare Lorentzian half widths obtained from the fit to values derived from broadening coefficients reported in <cit.>. Note that reference data is not available for all lines. We calculate reference widths from reported broadening coefficients (in MHz/Torr) using a cell pressure of 92.84 Torr, which lies within the manufacturer's tolerance. This pressure value yields minimum deviations between measured and reference widths and is in good agreement with the value of 92.5 ± 0.8 Torr estimated from a different experiment using the same gas cell <cit.>. The measured and reference widths along with their deviations are gathered in Table <ref>. The measurement uncertainties correspond to the 2σ confidence intervals returned by the fit. The excellent agreement between the two value sets confirms the reliability of the spectrometer and of its correction algorithm. If the correction had left any significant fluctuations uncompensated, the spectrum would have appeared smeared, and the lines would have been broadened. § CONCLUSION We have designed and demonstrated the use of a new kind of dual-comb spectrometer based on passively mode-locked on-chip WGLs. This platform improves the mutual stability of dual-comb systems by coupling the cavities mechanically and thermally. Combined with a new correction algorithm that allows to extract and compensate residual fluctuations, this free-running system can perform mode-resolved spectroscopy without using any external information. This self-correction approach to dual-comb interferometry can be used with any pair of combs having sufficient mutual coherence on a 1/Δ f_r timescale.This compact instrument and its self-correction approach represent two important steps towards the widespread adoption of dual-comb spectroscopy. The design could even be miniaturized down to a monolithic device with a SAM directly mounted on the end-face of the chip. Single- and dual-comb versions of this device could reach multi-GHz repetition rates and compete against microresonator-based frequency combs <cit.>. 
Moreover, the broad transparency of the ZBLAN chip makes the platform easily adaptable to the mid-infrared, a key enabler for useful spectroscopy applications.§ FUNDINGNatural Sciences and Engineering Research Council of Canada (NSERC); Fonds de Recherche du Québec - Nature et Technologies (FRQNT).§ ACKNOWLEDGMENTSThe authors thank Sarah Scholten from Andre Luiten's group for lending the gas cell. 99 cossel2012broadband K. C. Cossel, D. N. Gresh, L. C. Sinclair, T. Coffey, L. V. Skripnikov, A. N. Petrov, N. S. Mosyagin, A. V. Titov, R. W. Field, E. R. Meyer et al., Broadband velocity modulation spectroscopy of HfF+: Towards a measurement of the electron electric dipole moment, Chem. Phys. Lett. 546, 1–11 (2012).li2008laser C.-H. Li, A. J. Benedick, P. Fendel, A. G. Glenday, F. X. Kärtner, D. F. Phillips, D. Sasselov, A. Szentgyorgyi, and R. L. Walsworth, A laser frequency comb that enables radial velocity measurements with a precision of 1 cm s-1, Nature 452(7187), 610–612 (2008).truong2015accurate G.-W. Truong, J. Anstie, E. May, T. Stace, and A. Luiten, Accurate lineshape spectroscopy and the boltzmann constant, Nat. Commun. 6 (2015).berntsson2002quantitative O. Berntsson, L.-G. Danielsson, B. Lagerholm, and S. Folestad, Quantitative in-line monitoring of powder blending by near infrared reflection spectroscopy, Powder Technol. 123(2), 185–193 (2002).funke2003techniques H. H. Funke, B. L. Grissom, C. E. McGrew, and M. W. Raynor, Techniques for the measurement of trace moisture in high-purity electronic specialty gases, Rev. Sci. Instrum. 74(9), 3909–3933 (2003).maslowski2016surpassing P. Maslowski, K. F. Lee, A. C. Johansson, A. Khodabakhsh, G. Kowzan, L. Rutkowski, A. A. Mills, C. Mohr, J. Jiang, M. E. Fermann et al., Surpassing the path-limited resolution of fourier-transform spectrometry with frequency combs, Phys. Rev. A 93(2), 021802 (2016).stowe2008direct M. C. Stowe, M. J. Thorpe, A. Pe'er, J. Ye, J. E. Stalnaker, V. Gerginov, and S. A. Diddams, Direct frequency comb spectroscopy, Adv. At. Mol. Opt. Phy. 55, 1–60 (2008).foltynowicz2013cavity A. Foltynowicz, P. Masłowski, A. J. Fleisher, B. J. Bjork, and J. Ye, Cavity-enhanced optical frequency comb spectroscopy in the mid-infrared application to trace detection of hydrogen peroxide, Applied Physics B 110(2), 163–175 (2013).marian2005direct A. Marian, M. C. Stowe, D. Felinto, and J. Ye, Direct frequency comb measurements of absolute optical frequencies and population transfer dynamics, Phys. Rev. Lett. 95(2), 023001 (2005).holzwarth2000optical R. Holzwarth, T. Udem, T. W. Hänsch, J. Knight, W. Wadsworth, and P. S. J. Russell, Optical frequency synthesizer for precision spectroscopy, Phys. Rev. Lett. 85(11), 2264 (2000).coddington2016dual I. Coddington, N. Newbury, and W. Swann, Dual-comb spectroscopy, Optica 3(4), 414–426 (2016).coddington2008coherent I. Coddington, W. C. Swann, and N. R. Newbury, Coherent multiheterodyne spectroscopy using stabilized optical frequency combs, Phys. Rev. Lett. 100(1), 013902 (2008).baumann2011spectroscopy E. Baumann, F. Giorgetta, W. Swann, A. Zolot, I. Coddington, and N. Newbury, Spectroscopy of the methane ν 3 band with an accurate midinfrared coherent dual-comb spectrometer, Phys. Rev. A 84(6), 062513 (2011).deschenes2010optical J.-D. Deschênes, P. Giaccari, and J. Genest, Optical referencing technique with cw lasers as intermediate oscillators for continuous full delay range frequency comb interferometry, Opt. Express 18(22), 23358–23370 (2010).roy2012continuous J. Roy, J.-D. Deschênes, S. 
Potvin, and J. Genest, Continuous real-time correction and averaging for frequency comb interferometry, Opt. Express 20(20), 21932–21939 (2012).ideguchi2014adaptive T. Ideguchi, A. Poisson, G. Guelachvili, N. Picqué, and T. W. Hänsch, Adaptive real-time dual-comb spectroscopy, Nat. Commun. 5 (2014).ideguchi2016kerr T. Ideguchi, T. Nakamura, Y. Kobayashi, and K. Goda, Kerr-lens mode-locked bidirectional dual-comb ring laser for broadband dual-comb spectroscopy, Optica 3(7), 748–753 (2016).mehravar2016real S. Mehravar, R. Norwood, N. Peyghambarian, and K. Kieu, Real-time dual-comb spectroscopy with a free-running bidirectionally mode-locked fiber laser, Appl. Phys. Lett. 108(23), 231104 (2016).gong2014polarization Z. Gong, X. Zhao, G. Hu, J. Liu, and Z. Zheng, Polarization multiplexed, dual-frequency ultrashort pulse generation by a birefringent mode-locked fiber laser, in CLEO: Science and Innovations, (Optical Society of America, 2014), pp. JTh2A–20.zhao2016picometer X. Zhao, G. Hu, B. Zhao, C. Li, Y. Pan, Y. Liu, T. Yasui, and Z. Zheng, Picometer-resolution dual-comb spectroscopy with a free-running fiber laser, Opt. Express 24(19), 21833–21845 (2016).chang2015dual M. Chang, H. Liang, K. Su, and Y. Chen, Dual-comb self-mode-locked monolithic yb: Kgw laser with orthogonal polarizations, Opt. Express 23(8), 10111–10116 (2015).fork1981generation R. Fork, B. Greene, and C. V. Shank, Generation of optical pulses shorter than 0.1 psec by colliding pulse mode locking, Appl. Phys. Lett. 38(9), 671–672 (1981).link2015dual S. M. Link, A. Klenner, M. Mangold, C. A. Zaugg, M. Golling, B. W. Tilma, and U. Keller, Dual-comb modelocked laser, Opt. Express 23(5), 5521–5531 (2015).link2016dual S. M. Link, A. Klenner, and U. Keller, Dual-comb modelocked lasers: semiconductor saturable absorber mirror decouples noise stabilization, Opt. Express 24(3), 1889–1902 (2016).villares2015chip G. Villares, J. Wolf, D. Kazakov, M. J. Süess, A. Hugi, M. Beck, and J. Faist, On-chip dual-comb based on quantum cascade laser frequency combs, Appl. Phys. Lett. 107(25), 251104 (2015).rosch2016chip M. Rösch, G. Scalari, G. Villares, L. Bosco, M. Beck, and J. Faist, On-chip, self-detected terahertz dual-comb source, Appl. Phys. Lett. 108(17), 171104 (2016).burghoff2016computational D. Burghoff, Y. Yang, and Q. Hu, Computational multiheterodyne spectroscopy, Science Advances 2(11) (2016).champak C. Khurmi, N. B. Hébert, W. Q. Zhang, S. A. V., G. Chen, J. Genest, T. M. Monro, and D. G. Lancaster, Ultrafast pulse generation in a mode-locked erbium chip waveguide laser, Opt. Express 24(24), 27177–27183 (2016).schlager2003passively J. B. Schlager, B. E. Callicoatt, R. P. Mirin, N. A. Sanford, D. J. Jones, and J. Ye, Passively mode-locked glass waveguide laser with 14-fs timing jitter, Opt. Lett. 28(23), 2411–2413 (2003).beecher2010320 S. Beecher, R. Thomson, N. Psaila, Z. Sun, T. Hasan, A. Rozhin, A. Ferrari, and A. Kar, 320 fs pulse generation from an ultrafast laser inscribed waveguide laser mode-locked by a nanotube saturable absorber, Appl. Phys. Lett. 97(11), 111114 (2010).thoen2000erbium E. Thoen, E. Koontz, D. Jones, F. Kartner, E. Ippen, and L. Kolodziejski, Erbium-ytterbium waveguide laser mode-locked with a semiconductor saturable absorber mirror, IEEE Photon. Technol. Lett. 12(2), 149–151 (2000).gross2013femtosecond S. Gross, D. G. Lancaster, H. Ebendorff-Heidepriem, T. M. Monro, A. Fuerbach, and M. J. Withford, Femtosecond laser induced structural changes in fluorozirconate glass, Opt. Mater. 
Express 3(5), 574–583 (2013).minoshima2001photonic K. Minoshima, A. M. Kowalevicz, I. Hartl, E. P. Ippen, and J. G. Fujimoto, Photonic device fabrication in glass by use of nonlinear materials processing with a femtosecond laser oscillator, Optics Letters 26(19), 1516–1518 (2001).davis1996writing K. M. Davis, K. Miura, N. Sugimoto, and K. Hirao, Writing waveguides in glass with a femtosecond laser, Opt. Lett. 21(21), 1729–1731 (1996).choudhury2014ultrafast D. Choudhury, J. R. Macdonald, and A. K. Kar, Ultrafast laser inscription: perspectives on future integrated applications, Laser Photon. Rev. 8(6), 827–846 (2014).smart1991cw R. Smart, D. Hanna, A. Tropper, S. Davey, S. Carter, and D. Szebesta, Cw room temperature upconversion lasing at blue, green and red wavelengths in infrared-pumped Pr3+-doped fluoride fibre, Electron. Lett. 27(14), 1307–1309 (1991).palmer2013high G. Palmer, S. Gross, A. Fuerbach, D. G. Lancaster, and M. J. Withford, High slope efficiency and high refractive index change in direct-written Yb-doped waveguide lasers with depressed claddings, Opt. Express 21(14), 17413–17420 (2013).lancasterer3+ D. G. Lancaster, Y. Li, Y. Duan, S. Gross, M. W. Withford, and T. M. Monro, Er3+ active Yb3+ Ce3+ co-doped fluorozirconate guided-wave chip lasers, IEEE Photon. Technol. Lett. 28(21), 2315–2318 (2016).lancaster2015holmium D. G. Lancaster, V. J. Stevens, V. Michaud-Belleau, S. Gross, A. Fuerbach, and T. M. Monro, Holmium-doped 2.1 μm waveguide chip laser with an output power > 1 W, Opt. Express 23(25), 32664–32670 (2015).lancaster2013efficient D. G. Lancaster, S. Gross, H. Ebendorff-Heidepriem, M. J. Withford, T. M. Monro, and S. D. Jackson, Efficient 2.9 μm fluorozirconate glass waveguide chip laser, Opt. Lett. 38(14), 2588–2591 (2013).nagamatsu2004influence K. Nagamatsu, S. Nagaoka, M. Higashihata, N. Vasa, Z. Meng, S. Buddhudu, T. Okada, Y. Kubota, N. Nishimura, and T. Teshima, Influence of Yb3+and Ce3+ codoping on fluorescence characteristics of Er3+-doped fluoride glass under 980 nm excitation, Opt. Mater. 27(2), 337–342 (2004).woodward2014probability P. M. Woodward, Probability and Information Theory with Applications to Radar (Permagon, 1953), 2nd ed.1612.00055 G.-W. Truong, E. M. Waxman, K. C. Cossel, E. Baumann, A. Klose, F. R. Giorgetta, W. C. Swann, N. R. Newbury, and I. Coddington, Accurate frequency referencing for fieldable dual-comb spectroscopy, Opt. Express 24(26), 30495–30504 (2016).demtroder2013laser W. Demtröder, Laser spectroscopy: basic concepts and instrumentation (Springer Science & Business Media, 2013).hebert2015quantitative N. B. Hébert, S. K. Scholten, R. T. White, J. Genest, A. N. Luiten, and J. D. Anstie, A quantitative mode-resolved frequency comb spectrometer, Opt. Express 23(11), 13991–14001 (2015).swann2005line W. C. Swann and S. L. Gilbert, Line centers, pressure shift, and pressure broadening of 1530-1560 nm hydrogen cyanide wavelength calibration lines, J. Opt. Soc. Am. B 22(8), 1749–1756 (2005).del2007optical P. Del'Haye, A. Schliesser, O. Arcizet, T. Wilken, R. Holzwarth, and T. Kippenberg, Optical frequency comb generation from a monolithic microresonator, Nature 450(7173), 1214–1217 (2007).kippenberg2011microresonator T. J. Kippenberg, R. Holzwarth, and S. Diddams, Microresonator-based optical frequency combs, Science 332(6029), 555–559 (2011).suh2016microresonator M.-G. Suh, Q.-F. Yang, K. Y. Yang, X. Yi, and K. J. 
Vahala, Microresonator soliton dual-comb spectroscopy, Science 354(6312), 600–603 (2016).§ APPENDIX A: DETAILED SELF-CORRECTION ALGORITHM The algorithm aims to correct both degrees of freedom on the RF comb: its spectral spacing and its spectral offset. This is done by extracting the values τ_k and δ f_c,k for each k^th ZPD burst using the cross-ambiguity function and by deriving the continuous phase signals δϕ_c(t) and δΔϕ_r(t) in order to perform a correction as the one described in <cit.>. We start by shifting the spectrum to DC with a phase ramp having the slope of the first IGM's carrier frequency. This slope is evaluated with a linear fit to the phase ramp in the first ZPD burst.We then interpolate the values δ f_c,k, which are measured at ZPD times deduced from the values τ_k, in order to obtain δ f_c(t) for all times. In other words, we simply interpolate the value pairs (τ_k,δ f_c,k) using a spline. We then integrate δ f_c(t) to retrieve the associated phase signal δϕ_c,1(t) and use it to apply a first phase correction on the IGM stream. This operation corrects most of spectral shifting and starts to reveal the comb's modal structure. Although they can be distinguished, the modes still occupy a significant fraction of the mode spacing. At this point, the spectrum's centre of mass is aligned with DC because of the spectral shift that was initially applied. The mode closest to DC is the mode n=0, which was initially at frequency f_c.Since this first correction signal was obtained by integrating interpolated frequency data, it did not necessarily force the right set of phase values at ZPD times. Therefore, we can refine the phase correction further by extracting the residual phase excursions in the IGM stream. To do so, we cross-correlate the first ZPD burst with the rest of the IGM stream, which is safe now that most δ f_c(t) is compensated, and extract each burst's residual phase offset ϕ_k. As long as the first correction was seeded with adequately sampled fluctuations, the ϕ_k values now have sufficiently small jumps (ϕ_k - ϕ_k-1<π) so that they can be unwrapped. However, excess noise can be found on the ϕ_k values in the case where the dual-comb system exhibits relative frequency noise faster than Δ f_r/2, which cannot be accounted for during the first correction. Therefore, this algorithm can tolerate a certain amount of relative frequency noise in the band above Δ f_r/2 as long as it translates into noise on the ϕ_k values that satisfies ϕ_k - ϕ_k-1<π.The value pairs (τ_k, ϕ_k) are unwrapped and interpolated to create a second phase signal δϕ_c,2(t), which we use for a second phase correction that fully corrects the mode n=0 to a transform-limited peak at DC. The sum δϕ_c,1(t) + δϕ_c,2(t) = δϕ_c(t) represents the complete signal that would have been required to perform a one-off correction from the start. This step completes the correction of spectral shifting, but leaves spectral stretching uncompensated. Note that the red trace in Fig. 5 corresponds to the spectrum corrected incrementally with both δϕ_c,1(t) and δϕ_c,2(t) or, equivalently, directly with δϕ_c(t). The blue curve in Fig. 6a) corresponds to the complete signal δϕ_c(t).Next, we define a phase vector that represents the evolution of the repetition rate difference. We set the phase to 0 at the first ZPD time and increment it by 2π at successive ZPD times. This is justified by the fact that the arrival of ZPD bursts is periodic and each burst indicates the start of a new IGM. 
We interpolate the value pairs (τ_k, 2π(k-1)) for all times and remove the linear trend on the resulting signal, which yields the continuous phase fluctuations δΔϕ_r(t). This data can finally be used to construct a resampling grid for the IGM stream where the delay between pairs of optical pulses is linearly increasing (constant Δ f_r). This resampling correction compensates spectral stretching around the mode n=0 at DC.
http://arxiv.org/abs/1702.08344v1
{ "authors": [ "Nicolas Bourbeau Hébert", "Jérôme Genest", "Jean-Daniel Deschênes", "Hugo Bergeron", "George Y. Chen", "Champak Khurmi", "David G. Lancaster" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20170227160210", "title": "Self-corrected chip-based dual-comb spectrometer" }
=1 1–LastPage Feb. 28, 2017Mar. 29, 2018 automata,positioning matrixconditions ()enumi conditionsJE[2][] -.3exO. Carton]Olivier Carton IRIF, CNRS and Université Paris-Diderot olivier.carton@irif.frD. Perrin]Dominique Perrin Laboratoire d'informatique Gaspard-Monge, Université de Marne-la-Vallée dominique.perrin@esiee.frJ.-É. Pin]Jean-Éric Pin IRIF, CNRS and Université Paris-Diderot jean-eric.pin@irif.fr The third author is partially funded from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 670624). The first and third authors are partially funded by the DeLTA project (ANR-16-CE40-0007) Formal languages and automata theory, Regular languages, Algebraic language theory Difference hierarchies were originally introduced by Hausdorff and they play an important role in descriptive set theory. In this survey paper, we study difference hierarchies of regular languages. The first sections describe standard techniques on difference hierarchies, mostly due to Hausdorff. We illustrate these techniques by giving decidability results on the difference hierarchies based on shuffle ideals, strongly cyclic regular languages and the polynomial closure of group languages. A survey on difference hierarchies of regular languages [ December 30, 2023 ======================================================= Dedicated to the memory of Zoltán Ésik.§ INTRODUCTION Consider a set E and a setof subsets of E containing the empty set. The general pattern of a difference hierarchy is better explained in a picture. Saturn's rings-style Figure <ref> represents a decreasing sequenceX_1 ⊇ X_2⊇ X_3 ⊇ X_4⊇ X_5of elements of . The grey part of the picture corresponds to the set (X_1 - X_2) + (X_3 - X_4) + X_5, a typical element of the fifth level of the difference hierarchy defined by . Similarly, the n-th level of the difference hierarchy defined byis obtained by considering length-n decreasing nested sequences of sets.Difference hierarchies were originally introduced by Hausdorff <cit.>. They play an important role in descriptive set theory <cit.> and also yield a hierarchy on complexity classes known as the Boolean hierarchy <cit.>, <cit.>, <cit.>, <cit.>. Difference hierarchies were also used in the study of ω-regular languages <cit.>.The aim of this paper is to survey difference hierarchies of regular languages. Decidability questions for difference hierarchies over regular languages were studied in <cit.> and more recently by Glasser, Schmitz and Selivanov in <cit.>. The latter article is the reference paper on this topic and contains an extensive bibliography, to which we refer the interested reader. However, paper <cit.> focuses on forbidden patterns in automata, a rather different perspective than ours.We first present some general results on difference hierarchies and their connection with closure operators. The results on approximation of Section <ref>, first presented in <cit.>, lead in some cases to convenient algorithms to compute chain hierarchies.Next we turn to algebraic methods. Indeed, a great deal of results on regular languages are obtained through an algebraic approach. Typically, combinatorial properties of regular languages — being star-free, piecewise testable, locally testable, etc. — translate directly to algebraic properties of the syntactic monoid of the language (see <cit.> for a survey). It is therefore natural to expect a similar algebraic approach when dealing with difference hierarchies. 
However, things are not that simple. First, one needs to work with ordered monoids, which are more appropriate for classes of regular languages not closed under complement. Secondly, although Proposition <ref> yields a purely algebraic characterization of the difference hierarchy, it does not lead to decidability results, except for some special cases. Two such cases are presented at the end of the paper. The first one studies the difference hierarchy of the polynomial closure of a lattice of regular languages. The main result, Corollary <ref>, which appears to be new, states that the difference hierarchy induced by the polynomial of group languages is decidable. The second case, taken from <cit.>, deals with cyclic and strongly cyclic regular languages.Our paper is organised as follows. Prerequities are presented in Section <ref>. Section <ref> covers the results of Hausdorff on difference hierarchies and Section <ref> is a brief summary on closure operators. The results on approximation form the core of Section <ref>. Decidability questions on regular languages are introduced in Section <ref>. Section <ref> on chains is inspired by results of descriptive set theory. Two results that are not addressed in <cit.> are presented in Sections <ref> and <ref>. The final Section<ref> opens up some perspectives.§ PREREQUISITES In this section, we briefly recall the following notions: upper sets, ordered monoids,stamps and syntactic objects.Let E be a preordered set. An upper set of E is a subset U of E such that the conditions x ∈ U and x ≤ y imply y ∈ U. An ordered monoid is a monoid M equipped with a partial order ≤ compatible with the product on M: for all x, y, z ∈ M, if x ≤ y then zx ≤ zy and xz ≤ yz.A stamp is a surjective monoid morphism φ:A^* → M from a finitely generated free monoid A^* onto a finite monoid M. If M is an ordered monoid, φ is called an ordered stamp. The restricted direct product of two stamps φ_1:A^* → M_1 and φ_2:A^* → M_2 is the stamp φ with domain A^* defined by φ(a) = (φ_1(a), φ_2(a)). The image of φ is an [ordered] submonoid of the [ordered] monoid M_1 × M_2. < g r a p h i c s >Stamps and ordered stamps are used to recognise languages. A language L of A^* is recognised by a stamp φ: A^* → M if there exists a subset P of M such that L = φ^-1(P). It is recognised by an ordered stamp φ: A^* → M if there exists an upper set U of M such that L = φ^-1(U).The syntactic preorder of a language was first introduced by Schützenberger in <cit.>. Let L be a language of A^*. The syntactic preorder of L is the relation ≤_L defined on A^* by u ≤_L v if and only if, for every x, y ∈ A^*,xuy ∈ Lxvy ∈ L.The associated equivalence relation ∼_L, defined by u ∼_L v if u ≤_L v and v≤_L u, is the syntactic congruence of L and the quotient monoid M(L) = A^*/∼_L is the syntactic monoid of L. The natural morphism η: A^* → A^*/∼_L is the syntactic stamp of L. The syntactic image of L is the set P = η(L).The syntactic order ≤_P is defined on M(L) as follows: u ≤_P v if and only if for all x,y ∈ M(L),xuy ∈ Pxvy ∈ PThe partial order ≤_P is stable and the resulting ordered monoid (M(L), ≤_P) is called the syntactic ordered monoid of L. Note that P is now an upper setof (M(L), ≤_P) and η becomes an ordered stamp, called the syntactic orderedstamp of L.§ DIFFERENCE HIERARCHIES Let E be a set. In this article, a lattice is simply a collection of subsets of E containing ∅ and E and closed under taking finite unions and finite intersections. A lattice closed under complement is a Boolean algebra. 
Throughout this paper, we adopt Hausdorff's convention to denote unionadditively, set difference by a minus sign and intersection as a product. We alsosometimes denote L^c the complement of a subset L of a set E. Letbe a set of subsets of E containing the empty set. We set _0() = {∅} and, for each integer n≥ 1, we let _n() denote the class of all sets of the formX = X_1 - X_2 + …± X_nwhere the sets X_i are inand satisfy X_1 ⊇ X_2⊇ X_3 ⊇…⊇ X_n. By convention, the expression on the right hand side of (<ref>) should be evaluated from left to right, but given the conditions on the X_i's, it can also be evaluated as(X_1-X_2) + (X_3-X_4) + (X_5-X_6) + …Since the empty set belongs to , one has _n() ⊆_n+1() for all n ≥ 0 and the classes _n() define a hierarchy within the Boolean closure of . Moreover, the following result, due to Hausdorff <cit.>, holds: Letbe a lattice of subsets of E. The union of the classes _n() for n≥ 0 is the Boolean closure of .Let () = ∪_n≥ 1_n(). By construction, every element of _n() is a Boolean combination of members ofand thus () is contained in the Boolean closure of . Moreover _1() = and thus ⊆(). It is therefore enough to prove that () is closed under complement and finite intersection. If X = X_1-X_2+…± X_n, one hasE-X = E-X_1+X_2- …∓ X_nand thus X∈() implies E-X∈(). Thus () is closed under complement.Let X = X_1 - X_2 + …± X_n and Y = Y_1 - Y_2 + …± Y_m be two elements of (). LetZ= Z_1 - Z_2 + …± Z_n+m-1with Z_k= ∑_i+j = k+1 i and j not both even X_iY_j Therefore Z_1= X_1Y_1, Z_2= X_1Y_2 + X_2Y_1, Z_3= X_1Y_3 + X_3Y_1, Z_4= X_1Y_4 + X_2Y_3 + X_3Y_2 + X_4Y_1,= Z_n+m-1 = X_nY_mif m and n are not both even ∅ otherwiseWe claim that Z = XY. To prove the claim, consider for each set X = X_1 - X_2 + …± X_n associated with the decreasing sequence X_1, …, X_n of subsets of E, the function μ_X defined on E byμ_X(x) = max {i≥ 1| x∈ X_i}with the convention that μ_X(x) = 0 if x ∈ E - X_1. Then x∈ X if and only if μ_X(x) is odd. We now evaluate μ_Z(x) as a function of i = μ_X(x) and j = μ_Y(x). We first observe that if k ≥ i + j, then x ∉ Z_k. Next, if i and j are not both even, then x ∈ X_iY_j and X_iY_j ⊆ Z_i+j-1, whence μ_Z(x) = i + j -1. Finally, if i and j are both even, then x ∉ Z_i+j-1 and thus μ_Z(x) is either equal to 0 or to i+j-2. Summarizing the different cases, we observe that μ_X(x) and μ_Y(x) are both odd if and only if μ_Z(x) is odd, which proves the claim. It follows that () is closed under intersection. An equivalent definition of _n() was given by Hausdorff <cit.>. Let XY denote the symmetric difference of two subsets X and Y of E. For every n ≥ 0, _n() = {X_1X_2 … X_n | X_i∈}.Indeed, if X = X_1 - X_2 + …± X_n with X_1 ⊇ X_2⊇ X_3 ⊇…⊇ X_n, then X= X_1X_2 … X_n. In the opposite direction, if X = X_1X_2 … X_n, then X = Z_1 - Z_2 + …± Z_n where Z_k = ∑_i_1, …, i_k distincts X_i_1… X_i_k. § CLOSURE OPERATORS We review in this section the definition and the basic properties of closure operators.Let E be a set. A map X →X from (E) to itself is a closure operator if it is extensive, idempotent and isotone, that is, if the following properties hold for all X, Y⊆ E: * X⊆X (extensive) * X = X (idempotent) * X⊆ Y implies X⊆Y (isotone)A set F⊆ E is closed if F = F. If F is closed, and if X⊆ F, then X⊆F = F. It follows that X is the least closed set containing X. This justifies the terminology “closure”. Actually, closure operators can be characterised by their closed sets. 
A set of closed subsets for some closure operator on E is closed under (possibly infinite) intersection. Moreover, any set of subsets of E closed under (possibly infinite) intersection is the set of closed sets for some closure operator.Let X→X be a closure operator and let (F_i)_i∈ I be a family of closed subsets of E. Since a closure is isotone, ⋂_i∈ IF_i⊆F_i = F_i. It follows that ⋂_i∈ IF_i⊆⋂_i∈ IF_i and thus ⋂_i∈ IF_i is closed.Given a setof subsets of E closed under intersection, denote by X the intersection of all elements ofcontaining X. Then the map X→X is a closure operator for whichis the set of closed sets. In particular, X ∩ Y⊆X∩Y, but the inclusion may be strict. The trivial closure is the application defined byX = ∅ if X = ∅ E otherwiseFor this closure, the only closed sets are the empty set and E. If E is a topological space, the closure in the topological sense is a closure operator. The convex hull is a closure operator. However, it is not induced by any topology, since the union of two convex sets is not necessarily convex. The intersection of two closure operators X →X and X →X is the function X →X defined by X = X∩X. The intersection of two closure operators is a closure operator.Let X be the intersection of X and X. First, since X⊆X and X⊆X, one has X⊆X = X∩X. In particular, X⊆X. Secondly, since X∩X⊆X, X∩X⊆X = X. Similarly, X∩X⊆X. It follows thatX = X∩X∩X∩X⊆X∩X = Xand hence X = X. Finally, if X ⊆ Y, then X⊆Y and X⊆Y, and therefore X⊆Y.Let us conclude this section by giving a few examples of closure operators occurring in the theory of formal languages.Iteration. The map L → L^* is a closure operator. Similarly, the map L → L^+, where L^+ denotes the subsemigroup generated by L, is a closure operator.Shuffle ideal.The shuffle product (or simply shuffle) of two languages L_1 and L_2 over A is the languageL_1L_2 = { w ∈ A^* | w = u_1v_1 … u_nv_nfor some words u_1, …, u_n, v_1, …, v_n of A^* such that u_1 … u_n ∈ L_1 and v_1 … v_n ∈ L_2} .The shuffle product defines a commutative and associative operation over the set of languages over A. Given a language L, the language LA^* is called the shuffle ideal generated by L and it is easy to see that the map L → LA^* is a closure operator.This closure operator can be extended to infinite words in two ways: the finite and infinite shuffle ideals generated by an ω-language X are respectively:XA^*= { y_0x_1y_1 … x_ny_nx | y_0, …, y_n ∈ A^* andx_1… x_n x ∈ X } XA^ω = { y_0x_1y_1x_2 …| y_0, …, y_n, …∈ A^* andx_1x_2…∈ X }The maps X → XA^* and X → XA^ω are both closure operators.Ultimate closure. The ultimate closure of a language X of infinite words is defined by:(X) = { ux | u∈ A^*andvx ∈ Xfor somev ∈ A^*}The map X →(X) is a closure operator. § APPROXIMATION In this section, we consider a setof closed sets of E containing the empty set. It follows that the corresponding closure operator satisfies the condition ∅ = ∅. We first define the notion of an approximation of a set by a chain of closed sets. Then the existence of a best approximation will be established. In this section, L is a subset of E. A chain F_1 ⊇ F_2 ⊇…⊇ F_n of closed sets is an n-approximation of L if the following inclusions hold for all k suchthat 2k + 1 ≤ n:F_1 - F_2 ⊆ F_1 - F_2 + F_3 - F_4 ⊆…⊆ F_1 - F_2 + … + F_2k-1 - F_2k⊆… ⊆ L⊆…⊆ F_1 - F_2 + F_3 - … + F_2k+1⊆…⊆ F_1 - F_2 + F_3 ⊆ F_1 There is a natural order among the n-approximations of a given set L. 
An n-approximation F_1 ⊇ F_2 ⊇…⊇ F_n of L is said to be better than an n-approximation F'_1 ⊇ F'_2 ⊇…⊇ F'_n if, for all k such that 2k+1 ≤ n,F_1 - F_2 + F_3 - … + F_2k+1 ⊆ F'_1 - F'_2 + F'_3 - … + F'_2k+1and F'_1 - F'_2 + … + F'_2k-1 - F'_2k ⊆ F_1 - F_2 + … + F_2k-1 - F_2kWe will need the following elementary lemma: Let X, Y and Z be subsets of E. *The conditions X-Y ⊆ Z and X-Z⊆ Y are equivalent. *The conditions Z ⊆ X+Y and X^c ∩ Z⊆ Y are equivalent. *If Y ⊆ X and X-Y ⊆ Z, then X-Z = Y-Z and X + Z = Y + Z. The description of the best approximation of L requires the introduction of two auxiliary functions. For every subset X of E, setf(X) = X-Land g(X) = X ∩ LThe key properties of these functions are formulated in the following lemma. The following properties hold for all subsets X and Y of E: *X - f(X) ⊆ L and L ⊆ X + g(X^c), *if X ⊇ Y ⊇ L, then f(X) ⊇ f(Y) and X -f(X) ⊆ Y - f(Y) ⊆ L, *if X ⊆ Y ⊆ L, then g(X) ⊆ g(Y) and L ⊆ Y + g(Y^c) ⊆ X + g(X^c). Let X and Y be subsets of E.(<ref>) follows from a simple computation: X - f(X)= X - X-L⊆ X - (X-L) = X∩ L ⊆ L X + g(X^c)= X + X^c ∩ L⊇ (X ∩ L) + (X^c ∩ L) = L.(<ref>) Suppose that X ⊇ Y ⊇ L. Then X - L ⊇ Y - L and thus X - L⊇Y - L, that is, f(X) ⊇ f(Y). Furthermore, X - Y ⊆ X - L ⊆X - L = f(X). Applying part (<ref>) of Lemma <ref> with Z = f(X), one gets X - f(X) = Y - f(X), whence X - f(X)⊆ Y - f(Y) since f(X) ⊇ f(Y) by the first part of (<ref>).(<ref>) Suppose that X ⊆ Y ⊆ L. Then X ∩ L⊆Y ∩ L and thus g(X) ⊆ g(Y). Furthermore, Y - X = X^c ∩ Y ⊆ X^c ∩ L ⊆X^c ∩ L = g(X^c). Applying part (<ref>) of Lemma <ref> with Z = g(X^c), one gets Y + g(X^c)= X + g(X^c), whence Y + g(Y^c) ⊆ X + g(X^c) since g(Y^c) ⊆ g(X^c) by the first part of (<ref>). Let F_1 ⊇ F_2 ⊇…⊇ F_n be an n-approximation of L and, for 1 ≤ k ≤ n, let S_k = F_1 - F_2 + …± F_k. Then, for 1 ≤ k ≤ n,{ f(S_k)= f(F_k)if k is odd g(S_k^c)= g(F_k)if k is even. If k=1, then S_1=F_1 and the result is trivial. Suppose that k> 1. If k is odd, S_k-1⊆ L and thus S_k - L = (S_k-1 + F_k) - L = F_k - L. It follows that f(S_k) = f(F_k). If k is even, L ⊆ S_k-1 and thus S_k^c ∩ L = (S_k-1^c + F_k) ∩ L = F_k ∩ L. Therefore g(S_k^c) = g(F_k).Define a sequence (L_n)_n≥ 0 of subsets of E by L_0= E and, for all n ≥ 0,L_n+1 = f(L_n)if n is oddg(L_n)if n is evenThe next theorem expresses the fact that the sequence (L_n)_n≥ 0 is the best approximation of L as a Boolean combination of closed sets. In particular, if L_n = ∅ for some n > 0, then L ∈_n-1(). Let L be a subset of E. For every n > 0, the sequence (L_k)_1 ≤ k≤ n is the best n-approximation of L.We first show that the sequence (L_k)_1 ≤ k≤ n is an n-approximation of L. First, every L_k is closed by construction. We show that L_k+1⊆ L_k by induction on k. This is true for k=0 since L_0 = E. Now, if k is even, L_k+1 = L_k ∩ L⊆L_k = L_k and if k is odd, L_k+1 = L_k - L⊆L_k = L_k.Set, for k > 0, S_k = L_1 - L_2 + …± L_k. By part (<ref>) of Lemma <ref>, the relations L_2k-1 - L_2k = L_2k-1 - f(L_2k-1) ⊆ L hold for every k> 0, and similarly, L_2k - L_2k+1 = L_2k - g(L_2k) ⊆ L^c. It follows that S_2k⊆ L. Furthermore S_2k+1^c = (L_0 -L_1) + (L_2 -L_3) + … + (L_2k - L_2k+1) ⊆ L^c and thus L⊆ S_2k+1.We now show that the sequence (L_k)_1 ≤ k≤ n is the best approximation of L. Let (L'_k)_1 ≤ k ≤ n be another n-approximation of L. Set, for k > 0, S'_k = L'_1 - L'_2 + …± L'_k. Then, by definition, L ⊆ L'_1 and thusS_1 = L_1 = L⊆L'_1 = L'_1 = S'_1.Let k > 0. Suppose by induction that S_2k-1⊆ S'_2k-1. 
We show successively that S_2k⊆ S'_2k and S_2k+1⊆ S'_2k+1.By definition of an approximation, S'_2k = S'_2k-1 - L'_2k⊆ L, and thus S'_2k-1 - L⊆ L'_2k by part (<ref>) of Lemma <ref>. It follows that f(S'_2k-1) = S'_2k-1 - L⊆L'_2k = L'_2k. Now, since S'_2k-1⊇ S_2k-1⊇ L, one can apply part (<ref>) of Lemma <ref> to getS'_2k = S'_2k-1 - L'_2k⊆S'_2k-1 - f(S'_2k-1)⊆S_2k-1 - f(S_2k-1).Moreover since f(S_2k-1) = f(L_2k-1) = L_2k by Lemma <ref>, one gets S'_2k⊆ S_2k-1 - f(S_2k-1) = S_2k-1 - L_2k = S_2k.Similarly, L ⊆ S'_2k+1 = S'_2k + L'_2k+1 and hence (S'_2k)^c ∩ L⊆ L'_2k+1 by part (<ref>) of Lemma <ref>. It follows that g((S'_2k)^c) = (S'_2k)^c ∩ L⊆L'_2k+1 = L'_2k+1. Now, since S'_2k⊆ S_2k⊆ L, one can apply part (<ref>) of Lemma <ref> to getS_2k + g(S_2k^c) ⊆ S'_2k + g((S'_2k)^c) ⊆ S'_2k + L'_2k+1 = S'_2k+1.Moreover since the equalities g(S_2k^c) = g(L_2k) = L_2k+1 hold by Lemma <ref>, one getsS_2k+1 = S_2k + L_2k+1 = S_2k + g(S_2k^c) ⊆ S'_2k+1.which concludes the proof. Whenis a set of subsets of E closed under arbitrary intersection, Theorem <ref> provides a characterization of the classes _n(). Let L be a subset of E and letbe a set of subsets of E closed under (possibly infinite) intersection and containing the empty set. Let (L_k)_1 ≤ k ≤ n be the best n-approximation of L with respect to . Then L ∈_n-1() if and only if L_n = ∅ and in this caseL = L_1 - L_2 + …± L_n-1 If L ∈_n-1(), then L = F_1 - F_2 + …± F_n-1 with F_1, …, F_n-1∈. Let F_n = ∅. Then the sequence (F_k)_1≤ k ≤ n is an n-approximation of L. Since (L_k)_1 ≤ k ≤ n is the best n-approximation of L, one has L =L_1 - L_2 + …± L_n-1. Thus, with the notation of Lemma <ref>,{ f(L_n-1)= f(L) = ∅ if n-1 is odd g(L_n-1)= g(L^c) = ∅ if n-1 is even.Therefore, L_n = ∅ by (<ref>).Conversely, suppose that L_n = ∅. If n = 2k, then(L_1 - L_2) + … + (L_2k-1 - L_2k) ⊆ L ⊆ (L_1 - L_2) + … + (L_2k-3 - L_2k-2) + L_2k-1If n = 2k + 1, then (L_1 - L_2) + … + (L_2k-1 - L_2k) ⊆ L ⊆ (L_1 - L_2) + … + (L_2k-1 - L_2k) + L_2k+1In both cases, one gets L = L_1 - L_2 + …± L_n-1 and thus L ∈_n-1(). Let us illustrate this corollary by a concrete example. Let A = {a, b, c} and letbe the lattice of shuffle ideals. If L is the language {1, a, b, c, ab, bc, abc}, a straightforward computation gives L_0= A^* L_1= g(L_0) = A^*(L_0 ∩ L) = A^*L = A^* L_2= f(L_1) = A^*(L_1 - L) = A^* {aa, ac, ba, bb, ca, cb, cc} = A^* - {1, a, b, c, ab, bc} L_3= g(L_2) = A^*(L_2 ∩ L) = A^*abc L_4= f(L_3) = A^*(L_3 - L) = A^*((A^*abc) - abc)= A^* {aabc, abac, abca, babc, abbc, abcb, cabc, acbc, abcc } L_5= g(L_4) = A^*(L_4 ∩ L) = ∅It follows that L = L_1 - L_2 + L_3 - L_4 and L ∈_4(), but L ∉_3(). It is also possible to use the approximation algorithm for a setof subsets ofE closed under (possibly infinite) union and containing the set E. In this case,the set ^c = {L^c | L ∈}is closed under (possibly infinite) intersection and contains the empty set. Consequently, the approximation algorithm can be applied to ^c but it describes the difference hierarchy _n(^c). To recover the difference hierarchy _n(), the following algorithm can be used. First compute the best ^c-approximation of even length of L and the best ^c-approximation of odd length of L^c, sayL= L_1^c - L_2^c + …± L_n^c L^c= F_1^c - F_2^c + …± F_m^cwith n even, m odd, L_i, F_i ∈ and L_n and F_m possibly empty to fill the parity requirements. 
Now L admits the following -decompositions, whereL_1 and F_1 are possibly empty (and consequently deleted):L= L_n - L_n-1 + …± L_1= F_m - F_m-1 + …± F_1It remains to take the shortest of the two expressions to get the best -approximation of L.§ DECIDABILITY QUESTIONS ON REGULAR LANGUAGES Given a lattice of regular languages , four decidability questions arise: Is the membership problem fordecidable? Is the membership problem for () decidable? For a given positive integer n, is the membership problem for _n() decidable? Is the hierarchy _n() decidable? In other words, given a regular language L, Question <ref> asks to decide whether L ∈, Question <ref> whether L ∈() and Question <ref> whether L ∈_n(). Question <ref> asks whether one can one effectively compute the smallest n such that L ∈_n(), if it exists. Note that if Questions <ref> and <ref> are decidable, then so is Question <ref>. Indeed, given a language L, one first decides whether L belongs to () by Question <ref>. If the answer is positive, this ensures that L belongs to _n() for some n and Question <ref> allows one to find the smallest such n.If the latticeis finite, it is easy to solve the four questions in a positive way. In some cases, a simple application of Corollary <ref> suffices to solve Question <ref> immediately. One just needs to find the appropriateclosure operator and to provide algorithms to compute the functions f(X) and g(X)defined by (<ref>). Letbe the lattice generated by the languages of the form B^*, where B ⊆ A. Then bothand () are finite. It is known that a regular language belongs toif and only if its syntactic ordered monoid is idempotent and commutative and satisfies the inequation 1 ≤ x for all x <cit.>. It belongs to () if and only if its syntactic monoid is idempotent and commutative. Finally, one can define a closure operator by setting L = B^*, where B is the set of letters occurring in some word of L. For instance, let L = ({a,b,c}^* - {b,c}^*) + ({a,b}^* - a^*) + 1. This language belongs to () and its minimal automaton is represented below: < g r a p h i c s >Applying the approximation algorithm of Section <ref>, one gets L_0 = {a,b,c}^*, L_1 = {b,c}^*, L_2 = b^* and L_3 = ∅ and thus L = {a,b,c}^* - {b,c}^* + b^* is the best 3-approximation of L.If the lattice is infinite, our four questions become usually much harder, but can still be solved in some particular cases. We will discuss this in Sections <ref> and <ref>, but first present a powerful tool introduced in <cit.>, chains in ordered monoids.§ CHAINS AND DIFFERENCE HIERARCHIES Chains can be defined on any ordered set. We first give their definition, then establish a connection with difference hierarchies. Let (E, ≤) be a partially ordered set and let X be a subset of E. A chain of E is a strictly increasing sequencex_0 < x_1 < … < x_m-1 of elements of E. It is called an X-chain if x_0 is in X and the x_i's are alternatively elements of X and of its complement X^c. The integer m is called the length of the chain. We let m(X) denote the maximal length of an X-chain. There is a subtle connection between chains and difference hierarchies of regular languages. Let M be a finite ordered monoid and let φ : A^* → M be a surjective monoid morphism. Let= {φ^-1(U) |U is an upper set of M}By definition, every language ofis recognised by the ordered monoid M. If there exists a subset P of M such that L= φ^-1(P) and m(P) ≤ n, then L belongs to _n(). Before starting the proof, let us clarify a delicate point. 
The condition L= φ^-1(P) means that L is recognised by the monoid M. It does not mean that L is recognised by the ordered monoid M, a property which would require P to be an upper set. For each s ∈ M, let m(P, s) be the maximal length of a P-chain ending with s. Finally, let, for each k > 0,U_k = {s ∈ M | m(P, s) ≥ k }We claim that U_k is an upper set of M. Indeed, if s ∈ U_k, there exists a P-chain x_0 < x_1 < … < x_r-1 = s of length r ≥ k. Let t be an element of M such that s ≤ t. If s and t are not simultaneously in P, then x_0 < x_1 < … < x_r-1 < t is a P-chain of length r+1 ≥ k. Otherwise, x_0 < x_1 < … < x_r-2 < t is a P-chain of length r ≥ k. Thus m(P, t) ≥ k, and t ∈ U_k, proving the claim.We now show thatP = U_1 - U_2 + U_3 - U_4 …± U_nFirst observe that s ∈ P if and only if m(P, s) is odd. Since m(P) ≤ n, one has m(P, s) ≤ n for every s ∈ M and thus U_n+1 = ∅. Formula (<ref>) follows, since for each r ≥ 0, {s ∈ M | m(P, s) = r } = U_r - U_r+1.Let, for 1 ≤ i≤ n, L_i = φ^-1(U_i). Since U_i is an upper set, each L_i belongs to . Moreover, one gets from (<ref>) the formulaL = L_1 - L_2 + L_3 …± L_nwhich shows that L ∈_n(). We now establish a partial converse to Proposition <ref>. A lattice of regular languages is a setof regular languages of A^* containing ∅ and A^* and closed under finite union and finite intersection. Letbe a lattice of regular languages. If a language L belongs to _n(), then there exist an ordered stamp η : A^* → M and a subset P of M satisfying the following conditions: *η is a restricted product of syntactic ordered stamps of members of , *L= η^-1(P), *m(P) ≤ n. If L ∈_n(), thenL = L_1 - L_2 + L_3… ± L_nwith L_1 ⊇ L_2 ⊇…⊇ L_n and L_i ∈. Let η_i: A^* → (M_i, ≤_i) be the syntactic morphism of L_i and let P_i = η_i(L_i). Then each P_i is an upper set of M_i and L_i = η_i^-1(P_i). Let η : A^* → M be the restricted product of the stamps η_i. Condition (<ref>) is satisfied by construction.Observe that if η(u) = (s_1, …, s_n) is an element of M, the condition s_i+1∈ P_i+1 is equivalent to u ∈ L_i+1, and since L_i+1 is a subset of L_i, this condition also implies u∈ L_i and s_i ∈ P_i. Consequently, for each element s = (s_1, …, s_n) of M, there exists a unique k ∈{0, …, n} such thats_1 ∈ P_1, …, s_k ∈ P_k, s_k+1∉ P_k+1, …, s_n ∉ P_nThis unique k is called the cut of s. SettingP = {s ∈ M |the cut of s is odd}one gets, with the convention L_n+1 = ∅ for n odd, η^-1(P) = ⋃_k odd((L_1 ∩…∩ L_k) - L_k+1) = ⋃_k odd (L_k - L_k+1) = Lwhich proves (<ref>).Let now x_0 < x_1 < … < x_m-1 be a P-chain. Let, for 0 ≤ i ≤ m-1, x_i = (s_i,1, …, s_i,n) and let k_i be the cut of x_i. We claim that k_i+1 > k_i. Indeed, since x_i < x_i+1, s_i, k_i≤_i s_i+1, k_i and since P_i is an upper set, s_i, k_i∈ P_i implies s_i+1, k_i∈ P_i+1, which proves that k_i+1≥ k_i. But since x_i and x_i+1 are not simultaneously in P, their cuts must be different, which proves the claim. Since x_0 ∈ P, the cut of x_0 is odd, and in particular, non-zero. It follows that 0 < k_0 < k_1 < … < k_m-1 and since the cuts are numbers between 0 and n, m ≤ n, which proves (<ref>). It is tempting to try to improve Proposition <ref> by taking for M the syntactic morphism of L and for φ the syntactic morphism of L. However, Example <ref> ruins this hope. Indeed, let F = {1, a, b, c, ab, bc, abc} be the set of factors of the word abc. Then the syntactic monoid of L can be defined as the set F ∪{0} equipped with the product defined byxy = xyif x, y and xy are all in F 0 otherwiseNow the syntactic image of L is equal to F. 
It follows that M - F = {0} and thus, whatever order is taken on M, the length of a chain is bounded by 3. Nevertheless, if the lattice under consideration is the lattice of shuffle ideals, then L does not belong to _3().

Therefore, if L is a regular language, the maximal length of an L-chain cannot in general be computed in the syntactic monoid of L. It follows that decidability questions on _n(), as presented in Section <ref> below, cannot in general be solved just by inspecting the syntactic monoid. An exceptional case where the syntactic monoid suffices is presented in the next section.

§ THE DIFFERENCE HIERARCHY OF THE POLYNOMIAL CLOSURE OF A LATTICE

A language L of A^* is a marked product of the languages L_0, L_1, …, L_n if

L = L_0 a_1 L_1 … a_n L_n

for some letters a_1, …, a_n of A. Given a set of languages, the polynomial closure of this set is the set of languages that are finite unions of marked products of languages of the set. We also consider the Boolean closure of the polynomial closure and, finally, the co-polynomial closure, that is, the set of complements of languages in the polynomial closure. In this section, we are interested in the difference hierarchy induced by the polynomial closure. We consider several examples.

§.§ Shuffle ideals

If the base lattice is {∅, A^*}, then its polynomial closure is exactly the set of shuffle ideals considered in Examples <ref> and <ref> and its Boolean closure is the class of piecewise testable languages. The following easy result was mentioned in <cit.>. A language is a shuffle ideal if and only if its syntactic ordered monoid M satisfies the inequation 1 ≤ x for all x ∈ M. The syntactic characterization of piecewise testable languages follows from a much deeper result of Simon <cit.>. A language is piecewise testable if and only if its syntactic monoid is 𝒥-trivial.

Note that the closed sets of the closure operator X → XA^* of Example <ref> are exactly the shuffle ideals. It follows that for the lattice of shuffle ideals, the four questions mentioned earlier have a positive answer. More precisely, the decidability of the membership problem for the lattice and for its Boolean closure follows from Proposition <ref> and Theorem <ref>, respectively. The decidability of Question <ref> (and hence of Question <ref>) follows from the approximation algorithm. See Example <ref>.

§.§ Group languages

Recall that a group language is a language whose syntactic monoid is a group, or, equivalently, is recognized by a finite deterministic automaton in which each letter defines a permutation of the set of states. According to the definition of a polynomial closure, a polynomial of group languages is a finite union of languages of the form L_0 a_1 L_1 … a_k L_k where a_1, …, a_k are letters and L_0, …, L_k are group languages.

Let d_G be the metric on A^* defined as follows:

r_G(u,v) = min{ |M| | M is a finite group that separates u and v }
d_G(u,v) = 2^-r_G(u,v)

It is known that d_G defines the so-called pro-group topology on A^*. It is also known that the closure of a regular language for d_G is again regular and can be effectively computed. This result was actually proved in two steps: it was first reduced to a group-theoretic conjecture in <cit.> and this conjecture became a theorem in <cit.>.

Consider now the set of group languages on A^* and its polynomial closure, as well as the co-polynomial closure, that is, the set of complements of languages of the polynomial closure. The following characterization of the co-polynomial closure was given in <cit.>. Let L be a regular language and let M be its syntactic ordered monoid. The following conditions are equivalent: * L belongs to the co-polynomial closure of the group languages, * L is closed in the pro-group topology on A^*, * for all x ∈ M, x^ω ≤ 1.

Theorem <ref> shows that the co-polynomial closure, and hence the polynomial closure, is decidable.
The corresponding result for the Boolean closure has a long story, related in detail in <cit.>, where several other characterizations can be found. Let L be a regular language and let M be its syntactic monoid. The following conditions are equivalent: * L belongs to the Boolean closure of the polynomial closure of the group languages, * the submonoid generated by the idempotents of M is 𝒥-trivial, * for all idempotents e, f of M, the condition efe = e implies ef = e = fe.

We now study the difference hierarchy based on the co-polynomial closure. Consider the set of closed subsets for the pro-group topology. For each n ≥ 0, a regular language belongs to the n-th level of the difference hierarchy over the closed sets if and only if it belongs to the n-th level of the difference hierarchy over the co-polynomial closure.

Theorem <ref> shows that the co-polynomial closure is a subset of the closed sets. It follows that any language in level n of the difference hierarchy over the co-polynomial closure belongs to level n of the hierarchy over the closed sets.

Let now L be a regular language in level n of the hierarchy over the closed sets and let (L_k)_1 ≤ k ≤ n be the best n-approximation of L with respect to the closed sets. Corollary <ref> shows that L belongs to this level if and only if L_n+1 = ∅. Moreover, in this case L = L_1 - L_2 + … ± L_n. According to the algorithm described at the end of Section <ref>, the best n-approximation of L is obtained by alternating the two operations

f(X) = Cl(X - L) and g(X) = Cl(X ∩ L),

where Cl denotes the topological closure. Now, as we have seen, the closure of a regular language for d_G is regular. It follows that if X is regular, then both f(X) and g(X) are regular and closed. By Theorem <ref>, they both belong to the co-polynomial closure. It follows that each L_k belongs to the co-polynomial closure and thus L belongs to level n of its difference hierarchy. This leads to the following corollary:

The difference hierarchy over the co-polynomial closure of the group languages is decidable.

Let L be a regular language. Theorem <ref> shows that one can effectively decide whether L belongs to the co-polynomial closure. If this is the case, it remains to find the minimal n such that L belongs to level n of the difference hierarchy. But Proposition <ref> shows that L belongs to this level if and only if it belongs to the corresponding level over the closed sets. Moreover, since the closure of a regular language can be effectively computed, the best n-approximation of L with respect to the closed sets can be effectively computed. Now, Corollary <ref> gives an algorithm to decide whether L belongs to level n.

§ CYCLIC AND STRONGLY CYCLIC REGULAR LANGUAGES

Cyclic and strongly cyclic regular languages are two classes of regular languages related to symbolic dynamics and first studied in <cit.>. It was shown in <cit.> that an appropriate notion of chains suffices to characterise the difference hierarchy based on the class of strongly cyclic regular languages. This contrasts with Section <ref>, in which the general results on chains did not lead to a full characterization of difference hierarchies.

Let 𝒜 = (Q, A, ·) be a finite (possibly incomplete) deterministic automaton. A word u stabilises a subset P of Q if Pu = P. Given a subset P of Q, let (P) be the set of all words that stabilise P. The language (𝒜) that stabilises 𝒜 is by definition the set of all words which stabilise at least one nonempty subset of Q. A language is strongly cyclic if it stabilises some finite deterministic automaton.

If 𝒜 is the automaton represented in Figure <ref>, then ({1}) = (b + aa)^*, ({2}) = (ab^*a)^*, ({1, 2}) = a^* and (𝒜) = (b + aa)^* + (ab^*a)^* + a^*.

One can show that the set of strongly cyclic languages of A^* forms a lattice of languages but is not closed under quotients. For instance, as shown in Example <ref>, the language L = (b + aa)^* + (ab^*a)^* + a^* is strongly cyclic, but Corollary <ref> will show that its quotient b^-1L = (b + aa)^* is not strongly cyclic, since aa ∈ (b + aa)^* but a ∉ (b + aa)^*.

We will also need the following characterization <cit.>: Let 𝒜 = (Q,A,E) be a deterministic automaton. A word u belongs to (𝒜) if and only if there is some state q of 𝒜 such that for every integer n, the transition q · u^n exists.
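To illustrate the last proposition, here is a small Python sketch (ours, not part of the original survey) that checks membership in the stabiliser language of the two-state automaton of the example above; the partial transition function delta is read off from the stated stabilisers, and the state names are ours.

  # The partial automaton of the example: states {1,2}, where a swaps
  # 1 and 2, and b fixes 1 but is undefined at 2.  For this automaton
  # Stab({1}) = (b+aa)*, Stab({2}) = (ab*a)* and Stab({1,2}) = a*.
  delta = {(1, 'a'): 2, (2, 'a'): 1, (1, 'b'): 1}

  def run(q, u):
      for c in u:
          if (q, c) not in delta:
              return None
          q = delta[(q, c)]
      return q

  def stabilises(P, u):
      image = {run(q, u) for q in P}
      return None not in image and image == set(P)

  def in_stab_language(u, states=(1, 2)):
      # the proposition: u lies in the stabiliser language iff some
      # state q admits q.u^n for every n; following u repeatedly from
      # q, the orbit either dies on an undefined transition or closes
      # up into a cycle, in which case every power of u is defined
      for q in states:
          seen, p = set(), q
          while p is not None and p not in seen:
              seen.add(p)
              p = run(p, u)
          if p is not None:
              return True
      return False

  assert stabilises({1}, 'aa') and stabilises({2}, 'abba')
  assert in_stab_language('aba') and not in_stab_language('ab')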
Strongly cyclic languages admit the following syntactic characterization <cit.>. As usual, s^ω denotes the idempotent power of s, which exists and is unique in any finite monoid. Let L be a non-full regular language. The following conditions are equivalent: * L is strongly cyclic, * there is a morphism φ from A^* onto a finite monoid M with zero such that L = φ^-1({s ∈ M | s^ω ≠ 0 }), * the syntactic monoid M of L has a zero and the syntactic image of L is the set of all elements s ∈ M such that s^ω ≠ 0.

Proposition <ref> leads to a simple syntactic characterization of strongly cyclic languages. Recall that a language of A^* is nondense if there exists a word u ∈ A^* such that L ∩ A^*uA^* = ∅. Let L be a regular language, let M be its syntactic monoid and let P be its syntactic image. Then L is strongly cyclic if and only if it satisfies the following conditions, for all u, x, v ∈ M: (S) * ux^ω v ∈ P implies x^ω ∈ P, * x^ω ∈ P if and only if x ∈ P. Furthermore, if these conditions are satisfied and if L is not the full language, then L is nondense.

Let L be a strongly cyclic language, let M be its syntactic monoid and let P be its syntactic image. If L is the full language, then the conditions <ref> and <ref> are trivially satisfied. If L is not the full language, then Proposition <ref> shows that M has a zero and that P = {s ∈ M | s^ω ≠ 0 }. Observing that x^ω = (x^ω)^ω, one gets

x ∈ P ⟺ x^ω ≠ 0 ⟺ (x^ω)^ω ≠ 0 ⟺ x^ω ∈ P

which proves <ref>. Similarly, one gets

ux^ω v ∈ P ⟹ (ux^ω v)^ω ≠ 0 ⟹ x^ω ≠ 0 ⟹ x ∈ P

which proves <ref>.

Conversely, suppose that L satisfies <ref> and <ref>. If L is full, then L is strongly cyclic. Otherwise, let z ∉ P. Then z^ω ∉ P by <ref> and uz^ω v ∉ P for all u, v ∈ M by <ref>. This means that z^ω is a zero of M and that 0 ∉ P. By Proposition <ref>, it remains to prove that x ∈ P if and only if x^ω ≠ 0. First, if x ∈ P, then x^ω ∈ P by <ref> and since 0 ∉ P, one has x^ω ≠ 0. Conversely, if x^ω ≠ 0, then ux^ω v ∈ P for some u, v ∈ M, since x^ω is not equivalent to 0 in the syntactic congruence of P. It follows that x^ω ∈ P by <ref> and x ∈ P by <ref>.

We turn now to cyclic languages. A subset of a monoid is said to be cyclic if it is closed under conjugation, power and root. That is, a subset P of a monoid M is cyclic if it satisfies the following conditions, for all u, v ∈ M and n > 0: (C) * u^n ∈ P if and only if u ∈ P, * uv ∈ P if and only if vu ∈ P. This definition applies in particular to the case of a language of A^*. If A = {a, b}, the language b^* and its complement A^*aA^* are cyclic.

One can show that regular cyclic languages are closed under inverses of morphisms and under Boolean operations but not under quotients. For instance, the language L = {abc, bca, cab} is cyclic, but its quotient a^-1L = {bc} is not cyclic. Thus regular cyclic languages do not form a variety of languages. However, they admit the following straightforward characterization in terms of monoids. Let L be a regular language of A^*, let φ be a surjective morphism from A^* to a finite monoid M recognising L and let P = φ(L). Then L is cyclic if and only if P is cyclic.

Every strongly cyclic language is cyclic. Let L be a strongly cyclic language, let M be its syntactic monoid and let P be its syntactic image. By Proposition <ref>, P satisfies <ref> and <ref>. It suffices now to prove that it satisfies <ref>. The sequence of implications

xy ∈ P ⟹ (xy)^ω ∈ P ⟹ (xy)^ω (xy)^ω ∈ P ⟹ (xy)^ω-1 xy (xy)^ω-1 xy ∈ P ⟹ ((xy)^ω-1 x)(yx)^ω y ∈ P ⟹ (yx)^ω ∈ P ⟹ yx ∈ P,

where the first and last implications use <ref> and the second-to-last uses <ref>, shows that xy ∈ P implies yx ∈ P and the opposite implication follows by symmetry.
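Conditions <ref> and <ref> are easy to test mechanically once a recognising monoid is given by its multiplication. The following Python sketch (our own illustration, with a toy two-element monoid) computes the idempotent power s^ω by iteration and checks both conditions.

  def omega(s, mul):
      # idempotent power s^omega: run through s, s^2, s^3, ... in the
      # finite monoid until an idempotent is reached; this terminates
      # because the powers of s are eventually periodic and the cycle
      # of powers contains exactly one idempotent
      t = s
      while mul(t, t) != t:
          t = mul(t, s)
      return t

  def satisfies_S(M, P, mul):
      # (S1) u x^w v in P implies x^w in P; (S2) x^w in P iff x in P
      for x in M:
          xo = omega(x, mul)
          if (xo in P) != (x in P):
              return False
          if xo not in P and any(mul(mul(u, xo), v) in P for u in M for v in M):
              return False
      return True

  # toy check: the syntactic monoid {1, 0} of b* over {a, b}, with
  # P = {1}; the language b* stabilises a one-state automaton with a
  # single b-loop, so it is strongly cyclic and the test succeeds
  print(satisfies_S([0, 1], {1}, lambda s, t: s * t))   # True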
Another result is worth mentioning: for any regular cyclic language, there is a least strongly cyclic language containing it <cit.>. Let L be a regular cyclic language of A^*, let η : A^* → M be its syntactic stamp and let P = η(L). Then M has a zero and the language

L̄ = η^-1({s | s^ω ≠ 0}) if 0 ∉ P, and L̄ = A^* otherwise,

is the least strongly cyclic language containing L.

If 0 ∉ P, then the language L̄ is strongly cyclic by Proposition <ref>. Moreover, since L is cyclic, P is cyclic by Proposition <ref>. It follows that if s ∈ P, then s^ω ∈ P and in particular s^ω ≠ 0. Consequently, L̄ contains L. It remains to prove that L̄ is the least strongly cyclic language containing L. Let X be a strongly cyclic language containing L and let u be a word of L̄. Let 𝒜 = (Q,A,E) be a deterministic automaton such that X = (𝒜). Setting s = η(u), one has s^ω ≠ 0 by definition of L̄. Consequently, s^n ≠ 0 for every integer n and there are two words x_n and y_n such that x_n u^n y_n belongs to L. By Proposition <ref>, there is a state q_n of 𝒜 such that the transition q_n · x_n u^n y_n is defined. The transition (q_n · x_n) · u^n is thus defined for every n; since Q is finite, some state serves infinitely many n and hence all n. By Proposition <ref> again, the word u belongs to X. Thus L̄ ⊆ X as required.

Suppose now that 0 ∈ P and let z be a word of L such that η(z) = 0. Let X be a strongly cyclic language containing L. If X is not full, then X is nondense by Proposition <ref> and there exists a word u ∈ A^* such that A^*uA^* ∩ X = ∅. Since X contains L, one also gets A^*uA^* ∩ L = ∅ and in particular zu ∉ L. But this yields a contradiction, since η(zu) = η(z)η(u) = 0 ∈ P and thus zu ∈ η^-1(P) = L. Thus the only strongly cyclic language containing L is A^*.

Given a finite monoid M, Green's preorder relation ≤_𝒥, defined on M by

s ≤_𝒥 t if and only if s ∈ MtM, or equivalently, if there exist u, v ∈ M such that s = utv,

is a preorder on M. The associated equivalence relation 𝒥 is defined by s 𝒥 t if s ≤_𝒥 t and t ≤_𝒥 s, or equivalently, if MsM = MtM.

Let L be a regular cyclic language of A^*, let η : A^* → M be its syntactic stamp and let P = η(L). Then L is strongly cyclic if and only if for all idempotents e, f of M, the conditions e ∈ P and e ≤_𝒥 f imply f ∈ P.

Suppose that L is strongly cyclic and let e, f be two idempotents of M such that e ∈ P and e ≤_𝒥 f. Let u, v ∈ M be such that e = ufv. Since f^ω = f, one gets uf^ω v ∈ P and thus f ∈ P by Condition <ref> of Proposition <ref>.

In the opposite direction, suppose that for all idempotents e, f of M, the conditions e ∈ P and e ≤_𝒥 f imply f ∈ P. Since L is cyclic, it satisfies <ref> and hence <ref>. We claim that it also satisfies <ref>. Indeed, ux^ω v ∈ P implies (ux^ω v)^ω ∈ P by <ref>. Furthermore, since (ux^ω v)^ω ≤_𝒥 x^ω, one also has x^ω ∈ P, and finally x ∈ P by <ref>, which proves the claim.

The precise connection between cyclic and strongly cyclic languages was given in <cit.>. A regular language is cyclic if and only if it is a Boolean combination of regular strongly cyclic languages.

Theorem <ref> motivates a detailed study of the difference hierarchy of the class of strongly cyclic languages. This study relies on a careful analysis of the chains on the set of idempotents of a finite monoid, pre-ordered by the relation ≤_𝒥. A P-chain of idempotents is a sequence (e_0, e_1, …, e_m-1) of idempotents of M such that e_0 ≤_𝒥 e_1 ≤_𝒥 … ≤_𝒥 e_m-1, e_0 ∈ P and, for 0 < i < m, e_i ∈ P if and only if e_i-1 ∉ P. The integer m is the length of the P-chain of idempotents. We let ℓ(M, P) denote the maximal length of a P-chain of idempotents.
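The quantities just introduced are finite computations. As an illustration (ours, not from the original survey), the following Python sketch generates a transformation monoid from two generators, computes Green's preorder ≤_𝒥 by brute force, and finds the maximal length of a P-chain of idempotents. The generators a and b are the two transformations of the worked example at the end of this section, and the chain of length 3 found below is the chain (aba, b, 1) mentioned there. The recursion assumes, as is the case for cyclic P, that 𝒥-equivalent idempotents have the same membership in P.

  # transformations of {1,...,8}, with 0 marking an undefined image;
  # words act on the right, so mul(s, t) applies s first, then t
  a = (3, 4, 5, 2, 3, 8, 2, 6)
  b = (7, 0, 8, 4, 4, 0, 7, 8)

  def mul(s, t):
      return tuple(0 if s[q] == 0 else t[s[q] - 1] for q in range(8))

  def generate(gens):
      M = {tuple(range(1, 9))}            # start from the identity
      frontier = set(M)
      while frontier:
          frontier = {mul(s, g) for s in frontier for g in gens} - M
          M |= frontier
      return sorted(M)

  M = generate([a, b])
  assert len(M) == 9

  def leq_J(s, t):   # s <=_J t  iff  s = u t v for some u, v in M
      return any(mul(mul(u, t), v) == s for u in M for v in M)

  idem = [e for e in M if mul(e, e) == e]

  def chain(e, P):
      # longest P-chain of idempotents ending at e (0 if none exists)
      best = [1] if e in P else []
      for f in idem:
          if f != e and leq_J(f, e) and not leq_J(e, f) and (f in P) != (e in P):
              c = chain(f, P)
              if c:
                  best.append(c + 1)
      return max(best, default=0)

  one, a2 = tuple(range(1, 9)), mul(a, a)
  P = {one, a, a2, mul(mul(a, b), a), mul(a2, b)}   # {1, a, a^2, aba, a^2b}
  print(max(chain(e, P) for e in idem))             # 3, realised by (aba, b, 1)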
We consider in particular the case where φ : A^* → M is a stamp recognising a regular language L of A^* and P = φ(L). The next theorem shows that in this case, ℓ(M, P) does not depend on the choice of the stamp recognising L, but only depends on L. Let L be a regular language. Let φ : A^* → M and ψ : A^* → N be two stamps recognising L. If P = φ(L) and Q = ψ(L), then ℓ(M,P) = ℓ(N,Q).

It is sufficient to prove the result when φ is the syntactic stamp of L. Since the morphism ψ is surjective, M is a quotient of N and there is a surjective morphism π : N → M such that π ∘ ψ = φ. It follows that

π(Q) = P and π^-1(P) = Q.

We show that to any Q-chain of idempotents in N, one can associate a P-chain of idempotents of the same length in M, and vice-versa.

Let (e_0, …, e_m-1) be a Q-chain of idempotents in N and let f_i = π(e_i) for 0 ≤ i ≤ m-1. Since every monoid morphism preserves ≤_𝒥, the relations (<ref>) show that (f_0, …, f_m-1) is a P-chain of idempotents in M.

Let now (f_0, …, f_m-1) be a P-chain of idempotents in M. Since f_i-1 ≤_𝒥 f_i, there exist for 1 ≤ i ≤ m-1 elements u_i, v_i of M such that u_i f_i v_i = f_i-1. Let us choose an idempotent e_m-1 such that π(e_m-1) = f_m-1 and some elements s_i and t_i of N such that π(s_i) = u_i and π(t_i) = v_i. We now define a sequence of idempotents (e_0, …, e_m-1) of N by setting

e_m-2 = (s_m-1 e_m-1 t_m-1)^ω, e_m-3 = (s_m-2 e_m-2 t_m-2)^ω, …, e_0 = (s_1 e_1 t_1)^ω

By construction, e_0 ≤_𝒥 e_1 ≤_𝒥 … ≤_𝒥 e_m-1 and a straightforward induction shows that π(e_i) = f_i for 0 ≤ i ≤ m-1. Moreover the equalities (<ref>) show that e_i ∈ Q if and only if f_i ∈ P. It follows that (e_0, …, e_m-1) is a Q-chain of idempotents of N and thus ℓ(M,P) = ℓ(N,Q).

Since the integers ℓ(M, P) only depend on L and not on the choice of the recognising monoid, let us define ℓ(L) as ℓ(M, P) where M [P] is the syntactic monoid [image] of L. Note that by Corollary <ref>, a cyclic language L is strongly cyclic if and only if ℓ(L) = 1. This is a special case of the following stronger result <cit.>. Let L be a regular cyclic language. Then L ∈_n() if and only if ℓ(L) ≤ n.

We first prove the following lemma, which states that the function ℓ is subadditive with respect to the symmetric difference △. If X and Y are regular languages, then ℓ(X △ Y) ≤ ℓ(X) + ℓ(Y).

Suppose that the languages X and Y are respectively recognised by the stamps φ : A^* → M and ψ : A^* → N. Let P and Q be the images of X and Y in M and N, so that X = φ^-1(P) and Y = ψ^-1(Q). The language X △ Y is recognised by the restricted product of the stamps φ and ψ, say γ : A^* → R, and the image of X △ Y in R is

T = R ∩ (P × (N - Q) + (M - P) × Q).

Let ((e_0,f_0), …, (e_m-1, f_m-1)) be a T-chain of idempotents in R. Let us consider the set I (resp. J) of integers i for which exactly one of the idempotents e_i-1 or e_i (resp. f_i-1 or f_i) belongs to P (resp. Q). Formally, we define the sets of integers I and J to be

I = { 1 ≤ i ≤ m-1 | e_i-1 ∈ P ⟺ e_i ∉ P }
J = { 1 ≤ i ≤ m-1 | f_i-1 ∈ Q ⟺ f_i ∉ Q }

Since the sequence ((e_0,f_0), …, (e_m-1, f_m-1)) is a T-chain in R, one has e_0 ≤_𝒥 … ≤_𝒥 e_m-1 and f_0 ≤_𝒥 … ≤_𝒥 f_m-1. Moreover, every integer i between 1 and m-1 belongs to exactly one of the sets I or J. Otherwise, the idempotents (e_i-1, f_i-1) and (e_i, f_i) of R would be either both in T or both out of T. Let I = {i_1, …, i_p} and J = {j_1, …, j_q} with i_1 < … < i_p and j_1 < … < j_q. Then p + q = m - 1. Since (e_0,f_0) ∈ T, the conditions e_0 ∈ P and f_0 ∉ Q are equivalent. By symmetry, suppose that e_0 ∈ P. Then f_0 ∉ Q and thus f_j_1 ∈ Q.
Furthermore, the definitions of I and J give

e_0 ∈ P, e_1 ∈ P, …, e_i_1-1 ∈ P, e_i_1 ∉ P, …, e_i_2-1 ∉ P, e_i_2 ∈ P, …
f_0 ∉ Q, f_1 ∉ Q, …, f_j_1-1 ∉ Q, f_j_1 ∈ Q, …, f_j_2-1 ∈ Q, f_j_2 ∉ Q, …

Then the sequence (e_0, e_i_1, …, e_i_p) is a P-chain of idempotents in M and (f_j_1, …, f_j_q) is a Q-chain of idempotents in N. Therefore p + 1 ≤ ℓ(X), q ≤ ℓ(Y) and m = p + 1 + q ≤ ℓ(X) + ℓ(Y). Thus ℓ(X △ Y) ≤ ℓ(X) + ℓ(Y).

We can now complete the proof of Theorem <ref>. Let η : A^* → M be the syntactic stamp of L and let P = η(L). Let also E(M) be the set of idempotents of M. If L ∈_n(), then L = L_1 △ … △ L_n for some strongly cyclic languages L_i. By Corollary <ref>, one has ℓ(L_i) = 1 for 1 ≤ i ≤ n and thus ℓ(L) ≤ n by Lemma <ref>.

Suppose now that ℓ(L) ≤ n. For each idempotent e of M, let ℓ(e) denote the maximal length of a P-chain of idempotents ending with e. Then ℓ(e) ≤ ℓ(L) by definition. For each i > 0, let

P_i = {s ∈ M | ℓ(s^ω) ≥ i} and L_i = η^-1(P_i)

Let e, f ∈ E(M). Since every idempotent e satisfies e^ω = e, the conditions e ∈ P_i and e ≤_𝒥 f imply f ∈ P_i. It follows by Corollary <ref> that the languages L_i are strongly cyclic. We claim that

P = P_1 - P_2 + P_3 - P_4 … ± P_n

First observe that since L is cyclic, an element s of M belongs to P if and only if s^ω belongs to P. Moreover, s^ω ∈ P if and only if ℓ(s^ω) is odd. Since ℓ(L) ≤ n, one has ℓ(s^ω) ≤ n for every s ∈ M and thus P_n+1 = ∅. Formula (<ref>) follows, since for each r ≥ 0, {s ∈ M | ℓ(s^ω) = r } = P_r - P_r+1. Moreover, one gets from (<ref>) the formula

L = L_1 - L_2 + L_3 … ± L_n

which completes the proof of the theorem.

Theorem <ref> can be used to give another proof of Theorem <ref>. To get this result, we must prove that any cyclic language belongs to the class _n() for some integer n. By Theorem <ref>, it suffices to prove that the length of the P-chains of idempotents in a monoid recognising L is bounded. This is a consequence of the following proposition <cit.>. Let L be a regular cyclic language. Let φ : A^* → M be a stamp recognising L and let P = φ(L). Then the length of any P-chain of idempotents is bounded by the 𝒥-depth of M.

Let (e_0, …, e_n-1) be a P-chain of idempotents in M. Then by definition e_0 ≤_𝒥 … ≤_𝒥 e_n-1. Moreover, if e_i-1 𝒥 e_i, then by <cit.>, the idempotents e_i-1 and e_i are conjugate. That is, there exist two elements x and y of M such that xy = e_i-1 and yx = e_i. Since L is cyclic, P is also cyclic by Proposition <ref> and <ref> implies that e_i-1 ∈ P if and only if e_i ∈ P, which contradicts the definition of a P-chain of idempotents. It follows that the sequence (e_0, …, e_n-1) is a strict <_𝒥-chain and hence its length is bounded by the 𝒥-depth of M.

Let L be the cyclic language (b+aa)^* + (ab^*a)^* + a^* - b^* + 1. Its syntactic monoid is the monoid with zero presented by the relations bb = b, a^3 = a, baa = a^2b, a^2ba = ba, bab = 0. Its transition table is reproduced below (states 1–8, with 0 marking an undefined transition; idempotents are starred). The syntactic image of L is P = {1, a, a^2, aba, a^2b} and (aba, b, 1) is a maximal P-chain of idempotents.

           1 2 3 4 5 6 7 8
  *1       1 2 3 4 5 6 7 8
  a        3 4 5 2 3 8 2 6
  *b       7 0 8 4 4 0 7 8
  *a^2     5 2 3 4 5 6 4 8
  ab       8 4 4 0 8 8 0 0
  ba       2 0 6 2 2 0 2 6
  *a^2b    4 0 8 4 4 0 4 8
  *aba     6 2 2 0 6 6 0 0
  *bab     0 0 0 0 0 0 0 0

[figure omitted: the egg-box diagram of the 𝒥-class structure of M, with classes 1; b; a^2, a; a^2b, ba; ab, aba; bab]

§ CONCLUSION

Difference hierarchies of regular languages form an appealing measure of complexity. They can be studied from the viewpoint of descriptive set theory and automata theory <cit.> or from an algebraic perspective, as presented in this paper.
It would be interesting to compare these two approaches. The results proposed by Glasser, Schmitz and Selivanov <cit.>, together with our new result on group languages, give hope that more decidability results might be obtained in the near future. In particular, the recent progress on concatenation hierarchies <cit.> might lead to new decidability results for the difference hierarchies induced by the lower levels of the Straubing-Thérien hierarchy.

Let us conclude with an open problem: Does there exist a lattice of regular languages and an integer n such that the membership problems for the lattice and for its Boolean closure () are decidable, but the membership problem for _n() is undecidable? If the answer to Question <ref> is positive, a more precise question can be raised: For each integer n, does there exist a lattice of regular languages such that the membership problems for the lattice, for () and for _n() are decidable, but the membership problem for _n+1() is undecidable?

§ ACKNOWLEDGMENTS

The authors would like to thank the anonymous referees, whose suggestions strongly improved the quality of this paper.
http://arxiv.org/abs/1702.08023v6
{ "authors": [ "Olivier Carton", "Dominique Perrin", "Jean-Éric Pin" ], "categories": [ "cs.FL", "68Q70, 68Q45, 20M35" ], "primary_category": "cs.FL", "published": "20170226122908", "title": "A survey on difference hierarchies of regular languages" }
The hyperbolic dodecahedral space of Weber and Seifert has a natural non-positively curved cubulation obtained by subdividing the dodecahedron into cubes. We show that the hyperbolic dodecahedral space has a 6–sheeted irregular cover with the property that the canonical hypersurfaces made up of the mid-cubes give a very short hierarchy. Moreover, we describe a 60–sheeted cover in which the associated cubulation is special. We also describe the natural cubulation and covers of the spherical dodecahedral space (aka Poincaré homology sphere). 57N10, 57M20, 57N35Unravelling the Dodecahedral Spaces Jonathan Spreer and Stephan Tillmann December 30, 2023 ======================================== § INTRODUCTION A cubing of a 3–manifold M is a decomposition of M into Euclidean cubes identified along their faces by Euclidean isometries. This gives M a singular Euclidean metric, with the singular set contained in the union of all edges. The cubing is non-positively curved if the dihedral angle along each edge in M is at least 2π and each vertex satisfies Gromov's link condition: The link of each vertex is a triangulated sphere in which each 1–cycle consists of at least 3 edges, and if a 1–cycle consists of exactly 3 edges, then it bounds a unique triangle. In this case, we say that M has an NPC cubing.The universal cover of an NPC cubed 3–manifold is CAT(0). Aitchison, Matsumoto and Rubinstein <cit.> showed by a direct construction that if each edge in an NPC cubed 3–manifold has even degree, then the manifold is virtually Haken. Moreover, Aitchison and Rubinstein <cit.> showed that if each edge degree in such a cubing is a multiple of four, then the manifold is virtually fibred. A cube contains three canonical squares (or 2-dimensional cubes), each of which is parallel to two sides of the cube and cuts the cube into equal parts. These are called mid-cubes. The collection of all mid-cubes gives an immersed surface in the cubed 3–manifold M, called the canonical (immersed) surface. If the cubing is NPC, then each connected component of this immersed surface is π_1–injective. If one could show that one of these surface subgroups is separable in π_1(M), then a well-known argument due to Scott <cit.> shows that there is a finite cover of M containing an embedded π_1–injective surface, and hence M is virtually Haken. In the case where the cube complex is special (see <ref>), a canonical completion and retraction construction due to Haglund and Wise <cit.> shows that these surface subgroups are indeed separable because the surfaces are convex. Whence a 3–manifold with a special NPC cubing is virtually Haken.The missing piece is thus to show that an NPC cubed 3–manifold has a finite cover such that the lifted cubing is special. This is achieved in the case where the fundamental group of the 3–manifold is hyperbolic by the following cornerstone in Agol's proof of Waldhausen's Virtual Haken Conjecture from 1968:Let G be a hyperbolic group which acts properly and cocompactly on a CAT(0) cube complex X. Then G has a finite index subgroup F so that X/F is a special cube complex. In general, it is known through work of Bergeron and Wise <cit.> that if M is a closed hyperbolic 3–manifold, then π_1(M)is isomorphic to the fundamental group of an NPC cube complex. However, the dimension of this cube complex may be arbitrarily large and it may not be a manifold. 
Agol's theorem provides a finite cover that is a special cube complex, and the π_1–injective surfaces of Kahn and Markovic <cit.> are quasi-convex and hence have separable fundamental group. Thus, the above outline completes a sketch of the proof that M is virtually Haken. An embedding theorem of Haglund and Wise <cit.> and Agol's virtual fibring criterion <cit.> then imply that M is also virtually fibred.

Weber and Seifert <cit.> described two closed 3–manifolds that are obtained by taking a regular dodecahedron in a space of constant curvature and identifying opposite sides by isometries. One is hyperbolic and known as the Weber-Seifert dodecahedral space and the other is spherical and known as the Poincaré homology sphere. Moreover, antipodal identification on the boundary of the dodecahedron yields a third closed 3-manifold which naturally fits into this family: the real projective space. The dodecahedron has a natural decomposition into 20 cubes, which is an NPC cubing in the case of the Weber-Seifert dodecahedral space. The main result of this note can be stated as follows.

The hyperbolic dodecahedral space of Weber and Seifert admits a cover of degree 60 in which the lifted natural cubulation is special. In addition, we exhibit a 6–sheeted cover of the Weber-Seifert space in which the canonical immersed surface consists of six embedded surface components and thus gives a very short hierarchy of the space. The special cover from Theorem <ref> is the smallest regular cover of the Weber-Seifert space that is also a cover of this 6–sheeted cover. Moreover, it is the smallest regular cover of the Weber-Seifert space that is also a cover of the 5-sheeted cover with positive first Betti number described by Hempel <cit.>.

We conclude this introduction by giving an outline of this note. The dodecahedral spaces are described in Section <ref>. Covers of the hyperbolic dodecahedral space are described in Section <ref>, and all covers of the spherical dodecahedral space and the real projective space in Section <ref>.

Acknowledgements: Research of the first author was supported by the Einstein Foundation (project “Einstein Visiting Fellow Santos”). Research of the second author was supported in part under the Australian Research Council's Discovery funding scheme (project number DP160104502). The authors thank Schloss Dagstuhl Leibniz-Zentrum für Informatik and the organisers of Seminar 17072, where this work was completed. The authors thank Daniel Groves and Alan Reid for their encouragement to write up these results, and the anonymous referee for some insightful questions and comments which triggered us to find a special cover.

§ CUBE COMPLEXES, INJECTIVE SURFACES AND HIERARCHIES

A cube complex is a space obtained by gluing Euclidean cubes of edge length one along subcubes. A cube complex is CAT(0) if it is CAT(0) as a metric space, and it is non-positively curved (NPC) if its universal cover is CAT(0). Gromov observed that a cube complex is NPC if and only if the link of each vertex is a flag complex.

We identify each n–cube as a copy of [-1/2, 1/2]^n. A mid-cube in [-1/2, 1/2]^n is the intersection with a coordinate plane x_k=0. If X is a cube complex, then a new cube complex Y is formed by taking one (n-1)–cube for each mid-cube of X and identifying these (n-1)–cubes along faces according to the intersections of faces of the corresponding n–cubes. The connected components of Y are the hyperplanes of X, and each hyperplane H comes with a canonical immersion H → X. The image of the immersion is termed an immersed hyperplane in X.
If X is CAT(0), then each hyperplane is totally geodesic and hence embedded.The NPC cube complex X is special if* Each immersed hyperplane embeds in X (and hence the term “immersed" will henceforth be omitted).* Each hyperplane is 2–sided.* No hyperplane self-osculates.* No two hyperplanes inter-osculate. The prohibited pathologies are shown in Figure <ref> and are explained now. An edge in X is dual to a mid-cube if it intersects the midcube. We say that the edge of X is dual to the hyperplane H if it intersects its image in X. The hyperplane dual to edge a is unique and denoted H(a). Suppose the immersed hyperplane is embedded. It is 2–sided if one can consistently orient all dual edges so that all edges on opposite sides of a square have the same direction. Using this direction on the edges, H self-osculates if it is dual to two distinct edges with the same initial or terminal vertex. Hyperplanes H_1 and H_2 inter-osculate if they cross and they have dual edges that share a vertex but do not lie in a common square.The situation is particularly nice in the case where the NPC cube complex X is homeomorphic to a 3–manifold. Work of Aitchison and Rubinstein (see 3 in <cit.>) shows that each immersed hyperplane is mapped π_1–injectively into X. Hence if one hyperplane is embedded and 2–sided, then X is a Haken 3–manifold.Moreover, if each hyperplane embeds and is 2–sided, then one obtains a hierarchy for X. This is well-known and implicit in <cit.>. One may first cut along a maximal union of pairwise disjoint hypersurfaces to obtain a manifold X_1 (possibly disconnected) with incompressible boundary. Then each of the remaining hypersurfaces gives a properly embedded surface in X_1 that is incompressible and boundary incompressible. This process iterates until one has cut open X along all the mid-cubes, and hence it terminates with a collection of balls. In particular, if Y consists of three pairwise disjoint (not necessarily connected) surfaces, each of which is embedded and 2–sided, then one has a very short hierarchy.§ THE DODECAHEDRAL SPACESThe main topic of this paper is a study of low-degree covers of the hyperbolic dodecahedral space. However, we also take the opportunity to extend this study to the spherical dodecahedral space in the hope that this will be a useful reference. When the sides are viewed combinatorially, there is a third dodecahedral space which naturally fits into this family and again gives a spherical space form: the real projective space. The combinatorics of these spaces is described in this section. §.§ The Weber-Seifert Dodecahedral space The Weber-Seifert Dodecahedral spaceis obtained by gluing the opposite faces of a dodecahedron with a 3 π / 5-twist. This yields a decomposition 𝒟_ of the space into one vertex, six edges, six pentagons, and one cell (see Figure <ref> on the left). The dodecahedron can be decomposed into 20 cubes by a) placing a vertex at the centre of each edge, face, and the dodecahedron, and b) placing each cube around one of the 20 vertices of the dodecahedron with the other seven vertices in the centres of the three adjacent edges, three adjacent pentagons, and the center of the dodecahedron. Observe that identification of opposite faces of the original dodecahedron with a 3 π / 5-twist yields a 14-vertex, 54-edge, 60 square, 20-cube decomposition 𝒟̂_ of(see Figure <ref> on the right). Observe that every edge of 𝒟̂_ occurs in ≥ 4 cubes, and each vertex satisfies the link condition. 
We therefore have an NPC cubing. The mid-cubes form pentagons parallel to the faces of the dodecahedron, and under the face pairings glue up to give a 2–sided immersed surface of genus four. We wish to construct a cover in which the canonical surface splits into embedded components – which neither self-osculate with themselves, nor inter-osculate with other surface components.

§.§ The Poincaré homology sphere

The Poincaré homology sphere is obtained from the dodecahedron by gluing opposite faces by a π / 5-twist. This results in a decomposition 𝒟_ of it into one vertex, ten edges, six pentagons, and one cell (see Figure <ref> on the left). Again, we can decompose 𝒟_ into 20 cubes. Note, however, that in this case some of the cube-edges only have degree three (the ones coming from the edges of the original dodecahedron). This is to be expected since the space supports a spherical geometry.

§.§ Real projective space

Identifying opposite faces of the dodecahedron by a twist of π results in identifying antipodal points of a 3-ball (see Figure <ref> on the right). Hence, the result is a decomposition 𝒟_ℝP^3 of ℝℙ^3 into ten vertices, 15 edges, six faces, and one cell. As in the above cases, this decomposition can be decomposed into 20 cubes, with some of the cube-edges being of degree two.

§ COVERS OF THE WEBER-SEIFERT SPACE

In order to obtain a complete list of all small covers of the Weber-Seifert space, we need a list of all low index subgroups of π_1 () in a presentation compatible with 𝒟_ and its cube decomposition 𝒟̂_.

The complex 𝒟_ has six pentagons u, v, w, x, y, and z. These correspond to antipodal pairs of pentagons in the original dodecahedron, see Figure <ref> on the left. Passing to the dual decomposition, these six pentagons correspond to loops which naturally generate π_1 (). The six edges of 𝒟_ – call them e_1, …, e_6; in Figure <ref> they are drawn with distinct decorations – each give rise to a relator in this presentation of the fundamental group in the following way: fix an edge, say e_1, and start at a pentagon containing e_1, say u. We start at the pentagon labelled u with the back of an arrow ⊗ – the outside in Figure <ref> on the left. We traverse the dodecahedron, resurface on the other pentagon labelled u with an arrowhead ⊙ (the innermost pentagon in Figure <ref>). We then continue with the unique pentagon adjacent to the center pentagon along edge e_1. In this case this is v, labelled with the tail of an arrow; we traverse the dodecahedron, resurface at (v,⊙), and continue with (w,⊙), which we follow through the dodecahedron in reverse direction, and so on. After five such traversals we end up at the outer face where we started. The relator is now given by the labels of the pentagons we encountered, taking into account their orientation (arrowhead or tail). In this case the relator is r(e_1) = uvw^-1y^-1z.

Altogether we are left with

π_1 () = ⟨ u,v,w,x,y,z | uxy^-1v^-1w, uyz^-1w^-1x, uzv^-1x^-1y, uvw^-1y^-1z, uwx^-1z^-1v, vxzwy ⟩.

Using this particular representation of the fundamental group of the Weber-Seifert dodecahedral space we compute subgroups of π_1 () of index k (k < 10) via the low-index subgroup functions of <cit.> and <cit.>, and use their structure to obtain explicit descriptions of their coset actions (using the coset action function of <cit.>) which, in turn, can be transformed into a gluing table of k copies of the dodecahedron (or 20k copies of the cube).
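The last step of this pipeline is mechanical and can be sketched in a few lines of Python (the paper's actual computations use GAP; the function below and the toy permutation action are ours for illustration only).

  def gluing_table(coset_action):
      # coset_action maps each generator g in {u,...,z} to a dict
      # i -> i.g describing its permutation action on the k cosets,
      # i.e. on the k dodecahedra of the cover
      table = {}
      for g, perm in coset_action.items():
          for i, j in perm.items():
              # the (g, tail)-face of dodecahedron i is glued to the
              # (g, head)-face of dodecahedron j
              table[(i, g, 'tail')] = (j, g, 'head')
              table[(j, g, 'head')] = (i, g, 'tail')
      return table

  # hypothetical toy action of two of the six generators on three
  # cosets, for illustration only (the Weber-Seifert space in fact
  # admits no covers of degree three)
  action = {'u': {1: 2, 2: 3, 3: 1}, 'v': {1: 1, 2: 3, 3: 2}}
  print(gluing_table(action)[(1, 'u', 'tail')])   # (2, 'u', 'head')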
Given such a particular decomposition, we can track how the canonical surface evolves and whether it splits into embedded components. We provide a GAP script for download from <cit.>. The script takes a list of subgroups as input (presented each by a list of generators from π_1 ()) and computes an array of data associated to the corresponding covers of 𝒟_. The script comes with a sample input file containing all subgroups of π_1 () of index less than ten. The subgroups are presented in a form compatible with the definition of π () discussed above.§.§ Covers of degree up to fiveA computer search reveals that there are no covers of degrees 2, 3, and 4, and 38 covers of degree 5. Their homology groups are listed in Table <ref>. For none of them, the canonical surface splits into embedded components. Moreover, in all but one case it does not even split into multiple immersed components, with the exception being the 5-sheeted cover with positive first Betti number described by Hempel <cit.>, where it splits into five immersed components.§.§ Covers of degree sixThere are 61 covers of degree six, for 60 of which the canonical surface does not split into multiple connected components (see Table <ref> below for their first homology groups, obtained usingfunction<cit.>). However, the single remaining example leads to an irregular cover 𝒞 with deck transformation group isomorphic to A_5, for which the canonical surface splits into six embedded components. The cover is thus a Haken cover (although this fact also follows from the first integral homology group of 𝒞 which is isomorphic to ℤ^5 ⊕ℤ_2^2 ⊕ℤ_5^3, see also Table <ref>), and the canonical surface defines a very short hierarchy.The subgroup is generated by[u, v^-1w^-1, w^-1x^-1, x^-1y^-1, y^-1z^-1,; z^-1v^-1, vuy^-1,v^2z^-1, vwy^-1,vxv^-1 ] and the complex is given by gluing six copies 1, 2, … , 6 of the dodecahedron with the orbits for the six faces as shown in Figure <ref> on the left (the orientation of the orbit is given as in Figure <ref> on the left). The dual graph of 𝒞 (with one vertex for each dodecahedron, one edge for each gluing along a pentagon, and one colour per face class in the base 𝒟_) is given in Figure <ref> on the right.The six surfaces consist of 60 mid-cubes each. All surfaces can be decomposed into 12 “pentagonal disks” of five quadrilaterals each, which are parallel to one of the pentagonal faces of the complex, but slightly pushed into one of the adjacent dodecahedra. The six surfaces are given by their pentagonal disks and listed below. Since all of their vertices (which are intersections of the edges of the dodecahedra) must have degree 5, each surface must have 12 such vertices, 12 pentagonal disks, and 30 edges of pentagonal disks, and thus is of Euler characteristic -6. Moreover, since the Weber-Seifert space is orientable and the surface is 2–sided, it must be orientable of genus 4. Every pentagonal disk is denoted by the corresponding pentagonal face it is parallel to, and the index of the dodecahedron it is contained in. The labelling follows Figure <ref>. 
[S_1 =⟨ (z,⊙)_1, (y,⊗)_1, (v,⊙)_2, (y,⊙)_2, (w,⊙)_3, (w,⊗)_3,;(x,⊗)_4, (z,⊗)_4, (v,⊗)_5, (u,⊗)_5, (u,⊙)_6, (x,⊙)_6⟩ ] [S_2 =⟨ (w,⊗)_1, (x,⊙)_1, (u,⊗)_2, (y,⊗)_2, (u,⊙)_3, (v,⊙)_3,;(w,⊙)_4, (y,⊙)_4, (z,⊗)_5, (z,⊙)_5, (x,⊗)_6, (v,⊗)_6⟩ ] [S_3 =⟨ (w,⊙)_1, (v,⊗)_1, (w,⊗)_2, (z,⊗)_2, (x,⊗)_3, (u,⊗)_3,;(z,⊙)_4, (u,⊙)_4, (x,⊙)_5, (v,⊙)_5, (y,⊗)_6, (y,⊙)_6⟩ ] [S_4 =⟨ (x,⊗)_1, (y,⊙)_1, (w,⊙)_2, (u,⊙)_2, (x,⊙)_3, (z,⊙)_3,;(v,⊙)_4, (v,⊗)_4, (w,⊗)_5, (y,⊗)_5, (z,⊗)_6, (u,⊗)_6⟩ ] [S_5 =⟨ (u,⊗)_1, (u,⊙)_1, (v,⊗)_2, (z,⊙)_2, (z,⊗)_3, (y,⊙)_3,;(x,⊙)_4, (y,⊗)_4, (x,⊗)_5,(w,⊙)]_5, (w,⊗)_6, (v,⊙)_6⟩ ] [S_6 =⟨ (v,⊙)_1, (z,⊗)_1, (x,⊗)_2, (x,⊙)_2, (v,⊗)_3, (y,⊗)_3,;(u,⊗)_4, (w,⊗)_4, (y,⊙)_5, (u,⊙)_5, (w,⊙)_6, (z,⊙)_6⟩ ] Note that the 12 pentagonal disks of every surface component intersect each dodecahedron exactly twice. (A priori, given a 6-fold cover of 𝒟_ with 6 embedded surface components, such an even distribution is not clear: an embedded surface can intersect a dodecahedron in up to three pentagonal disks.) Moreover, every surface component can be endowed with an orientation, such that all of its dual edges point towards the centre of a dodecahedron. Hence, all surface components must be self-osculating through the centre points of some dodecahedron. The fact that there must be some self-osculating surface components in 𝒞 can also be deduced from the fact that the cover features self-identifications (i.e., loops in the face pairing graph). To see this, assume w.l.o.g. that the top and the bottom of a dodecahedron are identified. Then, for instance, pentagonal disk P_1 (which must be part of some surface component) intersecting the innermost pentagon in edge ⟨ v_1, v_2⟩ must also intersect the dodecahedron in pentagon P_2, and the corresponding surface component must self-osculate (see Figure <ref>).§.§ A special cover of degree 60The (non-trivial) normal cores of the subgroups of index up to 6 are all of index either 60 or 360 in π_1 ().For the computation of normal cores we use thefunction<cit.>. One of the index 60 subgroups is the index 12 normal core of Hempel's cover mentioned in Section <ref>. This is equal to the index 10 normal core of π_1 (𝒞) from Section <ref>, and we now show that it producesa special cover 𝒮 ofof degree 60. The deck transformation group is the alternating group A_5 and the abelian invariant of the cover is ℤ^41⊕ℤ_2^12.The generators of π_1 (𝒮) are [u v^-1 w^-1 ,u w^-1 x^-1 ,u x^-1 y^-1 ,u y^-1 z^-1 ,u z^-1 v^-1 , u^-1 v z; u^-1 w v , u^-1 x w , u^-1 y x , u^-1 z y ,v u y^-1 u^-1 , v u^-1 w; v w y^-1 ,v x v^-1 u^-1 , v x^-1 z ,v y^-1 x^-1 ,v^-1 u z^-1 ,v^-1 u^-1 x u; v^-1 x y ,v^-1 y^-1 v u ,w u z^-1 u^-1 , w u^-1 x , w v^-1 x v , w x z^-1;w y w^-1 u^-1 , w z w^-1 v ,w z^-1 y^-1 ,w^-1 u^-1 y u ,w^-1 v z v^-1 ,w^-1 x w v^-1;w^-1 z^-1 w u ,x u v^-1 u^-1 , x v w v^-1 , x w^-1 y w ,x z x^-1 u^-1 ,x^-1 u^-1 z u;x^-1 v^-1 x u , x^-1 w x v ,x^-1 y x w^-1 ,y u w^-1 u^-1 ,y v y^-1 u^-1 , y w x w^-1; y x^-1 z x ,y^-1 u^-1 v u , y^-1 v^-1 z^-1 v ,y^-1 w^-1 y u , y^-1 x y w ,z u x^-1 u^-1;z^-1 u^-1 w u ,u^5 , u^2 v^-1 w^-1 u^-1 ,u v u y^-1 u^-2 ,u v w^-1 x^-2 ,u v x v^-1 u^-2; u v y v^-2 ,u w u z^-1 u^-2 ,u w x^-1 y^-2 , u w z w^-2 ,u x u v^-1 u^-2 ,u x y^-1 z^-2;u y u w^-1 u^-2 , u^-2 v z u ,u^-2 v^-1 w^-1 u^-2 , u^-1 v^-1 u^-1 y^-1 u^-2 , u^-1 v^-1 x^-1 v^2 .] 
In order to see that 𝒮 is in fact a special cover, we must establish a number of observations on embedded surface components in covers of 𝒟̂_.In the following paragraphs we always assume that we are given a finite cover ℬ of 𝒟_ together with its canonical immersed surface defined by the lift of 𝒟̂_ in ℬ. Whenever we refer to faces of the decomposition of ℬ into dodecahedra, we explicitly say so. Otherwise we refer to the faces of the lift of the natural cubulation in ℬ. We start with a simple definition. A dodecahedral vertex is said to be near a component S of the canonical immersed surface of ℬ if it is the endpoint of an edge of the cubulation dual to S. An embedded component S of the canonical immersed surface of ℬ self-osculates if and only if at least one of the following two situations occurs.* There exists a dodecahedron containing more than one pentagonal disk of S.* The number of dodecahedral vertices near S is strictly smaller than its number of pentagonal disks. First note that S is 2–sided and can be transversely oriented such that one side always points towards the centres of the dodecahedra it intersects. From this it is apparent that if one of a) or b) occurs, then the surface component must self-osculate.Assume that a) does not hold; that is, all dodecahedra contain at most one pentagonal disk of S. Hence, no self-osculation can occur through the centre of a dodecahedron. Since every surface component S is made out of pentagonal disks, with five of such disks meeting in every vertex, S has as many pentagonal disks as it has pentagonal vertices. Moreover, every such pentagonal vertex of S must be near exactly one dodecahedral vertex of ℬ. Hence, the number of dodecahedral vertices that S is near to is bounded above by its number of pentagonal disks. Equality therefore occurs if and only if S is not near any dodecahedral vertex twice. Hence, if b) does not hold, no self-osculation can occur through a vertex of a dodecahedron.It remains to prove that if S self-osculates, then it must self-osculate through a centre point of a dodecahedron or through a vertex of a dodecahedron. The only other possibilities are that it self-osculates through either the midpoint of a dodecahedral edge or through the centre point of a dodecahedral face.First assume that the surface self-osculates through the midpoint of a dodecahedral edge e. Then either the surface has two disjoint pentagonal disks both parallel to e and hence also self-osculates through the two dodecahedral endpoints of e; or the surface has two disjoint pentagonal disks both intersecting e, in which case there exists a pair of pentagonal disks in the same dodecahedron – and the surface self-osculates through the centre of that dodecahedron.Next assume the surface self-osculates through the centre point of a dodecahedral face f. Then either the surface has two disjoint pentagonal disks both parallel to f and hence also self-osculates through the five dodecahedral vertices of f; or the surface has two disjoint pentagonal disks both intersecting f, in which case there exists a pair of pentagonal disks in the same dodecahedron and the surface self-osculates through the centre of that dodecahedron. 
A pair of intersecting, embedded, and non-self-osculating components S and T of the canonical immersed surface of ℬ inter-osculates if and only if at least one of the following two situations occurs.* Some dodecahedron contains pentagonal disks of both S and T which are disjoint.* The number of all dodecahedral vertices near S or T minus the number of all pairs of intersecting pentagonal disks is strictly smaller than the number of all pentagonal disks in S or T. We first need to establish the following three claims.Claim 1: If S and T inter-osculate, then they inter-osculate through the centre of a dodecahedron or a vertex of a dodecahedron.This follows from the arguments presented in the second part of the proof of Lemma <ref> since inter-osculation locally behaves exactly like self-osculation.Claim 2: Every pentagonal disk of S intersects T in at most one pentagonal disk and vice versa.A pentagonal disk can intersect another pentagonal disk in five different ways. Every form of multiple intersection causes either S or T to self-osculate or even self-intersect. Claim 3: A dodecahedral vertex near an intersection of S and T cannot be near any other pentagonal disk of S or T, other than the ones close to the intersection.Assume otherwise, then this causes either S or T to self-osculate or even self-intersect.We now return to the proof of the main statement. If a) is satisfied, then the surface pair inter-osculates through the centre of the dodecahedron (see also the proof of Lemma <ref>). If b) is satisfied, then by Claim 2 and Claim 3, both S and T must be near a dodecahedral vertex away from their intersections and thus S and T inter-osculate.For the converse assume that neither a) nor b) holds. By Claim 1, it suffices to show that S and T do notinter-osculate through the centre of a dodecahedron or a vertex of a dodecahedron.We first show that S and T do not inter-osculate through the centre of a dodecahedron. If at most one of S or T meets a dodecahedron, then this is true for its centre. Hence assume that both S and T meet a dodecahedron in pentagonal discs. By Claim 2 the dodecahedron contains exactly one pentagonal disc from each surface. These intesect since a) is assumed false. The only dual edges to S (resp. T) with a vertex at the centre of the cube run from the centre of the pentagonal face of the dodecahedron dual to S (resp. T) to the centre of the dodecahedron. But these two edges lie in the boundary of a square in the dodecahedron since the pentagonal discs intersect and hence the pentagonal faces are adjacent. Hence S and T do not inter-osculate through the centre of a dodecahedron.We next show that S and T do not inter-osculate through the vertex of a dodecahedron. The negation of b) is that the number of all dodecahedral vertices near S or T minus the number of all pairs of intersecting pentagonal disks equals the number of all pentagonal disks of S and T. Suppose a dodecahedral vertex is the endpoint of dual edges to squares in S and T. If the dual edges are contained in the same dodecahedron then they are in the boundary of a common square. Hence assume they are contained in different dodecahedra. Then the equality forces at least one of the dual edges to be in the boundary of a cube intersected by both S and T. But then at least one of the surfaces self-osculates.Due to Lemmata <ref> and <ref>, checking for self-osculating embedded surface components is a straightforward task. 
Furthermore, as long as surface components are embedded and non-self-osculating, checking for inter-osculation of a surface pair is simple as well.In the cover 𝒮 we have: * the canonical immersed surface splits into 60 embedded components,* every surface component of 𝒮 is made up of 12 pentagonal disks (and thus is orientable of genus 4, see the description of the canonical surface components of 𝒞 in Section <ref> for details), * every surface component distributes its 12 pentagonal disks over 12 distinct dodecahedra, * every surface component is near 12 dodecahedral vertices, and* every pair of intersecting surface components intersects in exactly three pentagonal disks (and hence in exactly three dodecahedra), and for each such pair both surface components combined are near exactly 21 dodecahedral vertices. These properties of 𝒮 can be checked using thescript available from <cit.>. From them, and from Lemmata <ref> and <ref> it follows that 𝒮 is a special cover. The gluing orbits for 𝒮 of the face classes from 𝒟_, as well as all 60 surface components are listed in Appendix <ref>.§.§ Covers of higher degree An exhaustive enumeration of all subgroups up to index 9 reveals a total of 490 covers, but no further examples of covers where the canonical surface splits into embedded components (and in particular no further special covers). There are, however, 20 examples of degree 8 covers where the canonical surface splits into two immersed connected components (all with first homology group ℤ⊕ℤ_2^3⊕ℤ_3^2⊕ℤ_5^3). Moreover, there are 10 examples of degree 9 covers, where the canonical surface splits into two components, one of which is embedded (all with first homology group ℤ⊕ℤ_3⊕ℤ_4⊕ℤ_5^3⊕ℤ_7). All of them are Haken, as can be seen by their first integral homology groups.In an attempt to obtain further special covers we execute a non-exhaustive, heuristic search for higher degree covers. This is necessary since complete enumeration of subgroups quickly becomes infeasible for subgroups of index larger than 9. This more targeted search is done in essentially two distinct ways.In the first approach we compute normal cores of all irregular covers of degrees 7, 8, and 9 from the enumeration of subgroups of π_1 () of index at most 9 described above. This is motivated by the fact that the index 60 normal core of π_1 (𝒞) yields a special cover. The normal cores have indices 168, 504, 1344, 2520, 20160, and 181440. Of the ones with index at most 2520, we construct the corresponding cover. Very often, the covers associated to these normal cores exhibit a single (immersed) surface component. However, the normal cores of the 10 subgroups corresponding to the covers of degree 9 with two surface components yield (regular) covers where the canonical immersed surface splits into nine embedded components. All of these covers are of degree 504 with deck transformation group PSL(2,8). Each of the surface components has 672 pentagons. Accordingly, each of them must be (orientable) of genus 169. All nine surface components necessarily self-osculate (they are embedded and contain more pentagonal disks than there are dodecahedra in the cover). The first homology group of all of these covers is given byℤ^8⊕ℤ_2^10⊕ℤ_3 ⊕ℤ_4^9⊕ℤ_5^17⊕ℤ_7 ⊕ℤ_8^6⊕ℤ_9^7⊕ℤ_17^28⊕ℤ_27^7⊕ℤ_29^9⊕ℤ_83^18 .In addition, there are 120 subgroups with a core of order 1,344, and factor group isomorphic to a semi-direct product of ℤ_2^3 and PSL(3,2). For 40 of them the corresponding (regular) cover splits into 8 immersed components. 
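The genus computations above follow from the same Euler characteristic count used earlier: a closed orientable surface component built from p pentagonal disks, five around each vertex, has V = p, E = 5p/2 and F = p, hence χ = -p/2 and genus 1 + p/4. A one-line Python sanity check (ours):

  def genus_from_pentagons(p):
      # chi = p - 5p/2 + p = -p/2, so genus = (2 - chi)/2 = 1 + p/4
      assert p % 4 == 0
      return 1 + p // 4

  print(genus_from_pentagons(12), genus_from_pentagons(672))   # 4 169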
These include the covers of degree 8 where the canonical immersed surface splits into two immersed components.In the second approach we analyse low degree covers of 𝒞 from Section <ref>. This is motivated by the fact that, in such covers, the canonical surface necessarily consists of embedded components. There are 127 2-fold covers of 𝒞, 64 of which are fix-point free (i.e., they do not identify two pentagons of the same dodecahedron – a necessary condition for a cover to be special, see the end of Section <ref>). For 40 of them the canonical surface still only splits into six embedded components. For the remaining 24, the surface splits into 7 components. For more details, see Table <ref>.The 127 2-fold covers of 𝒞 altogether have 43,905 2-fold covers. Amongst these 24-fold covers of 𝒟_, 16,192 are fix-point free. They admit 6 to 14 surface components with a single exception where the surface splits into 24 components. This cover is denoted by ℰ. Details on the number of covers and surface components can be found in Table <ref>.We have for the generators of the subgroup corresponding to the cover ℰ [u^-2, uvz, uv^-1w^-1, uwv, uw^-1x^-1, uxw, ux^-1y^-1,; uyx, uy^-1z^-1, uzy, uz^-1v^-1, vux,vu^-1w,vwy^-1,; vxz,vx^-1z, vy^-1x^-1, z^-1uy^-1,z^-1u^-1x^-1, z^-1vy^-1v^-1,z^-1wx,; z^-1w^-1zv^-1, z^-1yv^-2, z^-1y^-1w, z^-2wv^-1, wuy,wv^-1xv .] Surface components in ℰ are small (12 pentagonal disks per surface, as also observed in the degree 60 special cover 𝒮, see Section <ref>). This motivates an extended search for a degree 48 special cover by looking at degree 2 covers of ℰ. However, amongst the 131,071 fix-point free covers of degree 2, no special cover exists. More precisely, there are 120,205 covers with 24 surface components, 10,200 with 25 surface components, 240 with 26 and 27 surface components each, 162 with 28, and 24 with 33 surface components. For most of them, most surface components self-osculate.§ POINCARÉ HOMOLOGY SPHERE AND PROJECTIVE SPACE The Poincaré homology sphere has as fundamental group the binary icosahedral group of order 120, which is isomorphic to SL(2,5). From its subdivision given by the dodecahedron, we can deduce a presentation with six generators dual to the six pentagons of the subdivision, and one relator dual to each of the 10 edges: [ π_1 (𝒟_) = ⟨ u,v,w,x,y,z|uxz,uyv,uzw,uvx,uwy,; xy^-1z, yz^-1v, zv^-1w, vw^-1x,wx^-1y ⟩ . ] SL(2,5) has 76 subgroups falling into 12 conjugacy classes forming the subgroup lattice shown in Figure <ref> on the left hand side. For the corresponding hierarchy of covers together with the topological types of the covering 3-manifolds see Figure <ref> on the right hand side. By construction, the universal cover of 𝒟_ is the 120-cell, which is dual to the simplicial 600-cell. In particular, the dual cell decomposition of any of the 12 covers is a (semi-simplicial) triangulation. The dual of 𝒟_ itself is isomorphic to the minimal five-tetrahedron triangulation of the Poincaré homology sphere.Most of the topological types are determined by the isomorphism type of the subgroups. The only two non-trivial cases are the lens spaces L (5,1) and L (10,1). For the former, we passed to the dual triangulation of the cover of degree 24 using the GAP-package simpcomp <cit.>, and then fed the result to the 3-manifold software Regina <cit.> to determine the topological type of the cover to be L (5,1). The latter is then determined by the observation that there is no 2-to-1-cover of L (10,3) to L (5,1). 
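The group-theoretic starting point of this section can be checked independently of GAP. The Python sketch below (ours) generates SL(2,5) from the two elementary matrices over GF(5), confirms its order 120, and exhibits -I as its unique involution, as expected for the binary icosahedral group.

  def mat_mul(s, t, p=5):
      # product of 2x2 matrices over GF(p), stored as nested tuples
      (a, b), (c, d) = s
      (e, f), (g, h) = t
      return (((a*e + b*g) % p, (a*f + b*h) % p),
              ((c*e + d*g) % p, (c*f + d*h) % p))

  def generate(gens):
      identity = ((1, 0), (0, 1))
      G, frontier = {identity}, {identity}
      while frontier:
          frontier = {mat_mul(x, g) for x in frontier for g in gens} - G
          G |= frontier
      return G

  G = generate([((1, 1), (0, 1)), ((1, 0), (1, 1))])   # elementary matrices
  print(len(G))                                        # 120
  involutions = [x for x in G
                 if mat_mul(x, x) == ((1, 0), (0, 1)) and x != ((1, 0), (0, 1))]
  print(involutions)                                   # [((4, 0), (0, 4))], i.e. -I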
Regarding the canonical immersed surface, the situation is quite straightforward. Since all edges of 𝒟_, or of any of its covers, are of degree three, the canonical surface is a surface decomposed into pentagonal disks with three such disks meeting in each vertex. Consequently, all surface components must be 2-spheres isomorphic to the dodecahedron, thus have 12 pentagons, and the number of connected components of the canonical surface must coincide with the degree of the cover. Moreover, each surface component runs parallel to the 2-skeleton of a single dodecahedron, and the surface components are embedded if and only if there are no self-intersections of dodecahedra.

In more detail, the relevant properties of all covers are listed in Table <ref>.

The case of the projective space is rather simple. The only proper cover (of degree >1) is the universal cover of degree 2. Since the edges of 𝒟_ℝP^3 are all of degree two, the canonical surface of 𝒟_ℝP^3 has six embedded sphere components, each consisting of two pentagons glued along their boundary and surrounding one of the six pentagonal faces. Consequently, the universal cover is a 3-sphere decomposed into two balls along a dodecahedron with the canonical surface splitting into 12 sphere components.

Agol-RFRS Ian Agol: Criteria for virtual fibering. J. Topol. 1 (2008), no. 2, 269–284.
Agol Ian Agol: The virtual Haken conjecture. With an appendix by Agol, Daniel Groves, and Jason Manning. Doc. Math. 18 (2013), 1045–1087.
AMR I. R. Aitchison, S. Matsumoto and H. Rubinstein: Immersed Surfaces in Cubed Manifolds. Asian J. Math. 1 (1997), 85–95.
AM-Bull1999 Iain R. Aitchison and J. Hyam Rubinstein: Polyhedral metrics and 3-manifolds which are virtual bundles. Bull. London Math. Soc. 31 (1999), no. 1, 90–96.
AR1999 Iain R. Aitchison and J. Hyam Rubinstein: Combinatorial Dehn surgery on cubed and Haken 3-manifolds. Proceedings of the Kirbyfest (Berkeley, CA, 1998), 1–21, Geom. Topol. Monogr., 2, Geom. Topol. Publ., Coventry, 1999.
BW Nicolas Bergeron and Daniel T. Wise: A boundary criterion for cubulation. Amer. J. Math. 134 (2012), no. 3, 843–859.
Magma Wieb Bosma, John Cannon, and Catherine Playoust: The Magma algebra system. I. The user language. J. Symbolic Comput. 24 (1997), 235–265.
Regina Benjamin A. Burton, Ryan Budney, William Pettersson, et al.: Regina: Software for low-dimensional topology, http://regina.sourceforge.net/, 1999–2016.
simpcomp Felix Effenberger and Jonathan Spreer: simpcomp – a GAP toolkit for simplicial complexes, version 2.1.6. https://github.com/simpcomp-team/simpcomp/, 2016.
GAP GAP – Groups, Algorithms, and Programming, version 4.8.7. http://www.gap-system.org/, 2017.
HW Frédéric Haglund and Daniel T. Wise: Special cube complexes. Geom. Funct. Anal. 17 (2008), no. 5, 1551–1620.
He John Hempel: Orientation reversing involutions and the first Betti number for finite coverings of 3-manifolds. Invent. Math. 67 (1982), no. 1, 133–142.
KM Jeremy Kahn and Vladimir Markovic: Immersing almost geodesic surfaces in a closed hyperbolic three manifold. Ann. of Math. (2) 175 (2012), no. 3, 1127–1190.
Scott Peter Scott: Subgroups of surface groups are almost geometric. J. London Math. Soc. (2) 17 (1978), no. 3, 555–565.
ST Jonathan Spreer and Stephan Tillmann: Ancillary files to Unravelling the Dodecahedral Spaces.
https://arxiv.org/src/1702.08080/anc, 2017.WS1933 Constantin Weber and Herbert Seifert: Die beiden Dodekaederräume, Mathematische Zeitschrift (1933) 37 (1): 237–253.Wise Daniel Wise:From riches to raags: 3-manifolds, right-angled Artin groups, and cubical geometry. CBMS Regional Conference Series in Mathematics, 117. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 2012.Discrete Geometry group, Mathematical Institute, Freie Universität Berlin,Arnimallee 2, 14195 Berlin, Germanyjonathan.spreer@fu-berlin.de —–School of Mathematics and Statistics F07,The University of Sydney,NSW 2006 Australiastephan.tillmann@sydney.edu.au § THE SPECIAL COVER 𝒮The special cover 𝒮 from Section <ref> is of degree 60 with deck transformation group A_5 and abelian invariant ℤ^41⊕ℤ_2^12. The subgroup is generated by [u v^-1 w^-1 ,u w^-1 x^-1 ,u x^-1 y^-1 ,u y^-1 z^-1 ,u z^-1 v^-1 , u^-1 v z; u^-1 w v , u^-1 x w , u^-1 y x , u^-1 z y ,v u y^-1 u^-1 , v u^-1 w; v w y^-1 ,v x v^-1 u^-1 , v x^-1 z ,v y^-1 x^-1 ,v^-1 u z^-1 ,v^-1 u^-1 x u; v^-1 x y ,v^-1 y^-1 v u ,w u z^-1 u^-1 , w u^-1 x , w v^-1 x v , w x z^-1;w y w^-1 u^-1 , w z w^-1 v ,w z^-1 y^-1 ,w^-1 u^-1 y u ,w^-1 v z v^-1 ,w^-1 x w v^-1;w^-1 z^-1 w u ,x u v^-1 u^-1 , x v w v^-1 , x w^-1 y w ,x z x^-1 u^-1 ,x^-1 u^-1 z u;x^-1 v^-1 x u , x^-1 w x v ,x^-1 y x w^-1 ,y u w^-1 u^-1 ,y v y^-1 u^-1 , y w x w^-1; y x^-1 z x ,y^-1 u^-1 v u , y^-1 v^-1 z^-1 v ,y^-1 w^-1 y u , y^-1 x y w ,z u x^-1 u^-1;z^-1 u^-1 w u ,u^5 , u^2 v^-1 w^-1 u^-1 ,u v u y^-1 u^-2 ,u v w^-1 x^-2 ,u v x v^-1 u^-2; u v y v^-2 ,u w u z^-1 u^-2 ,u w x^-1 y^-2 , u w z w^-2 ,u x u v^-1 u^-2 ,u x y^-1 z^-2;u y u w^-1 u^-2 , u^-2 v z u ,u^-2 v^-1 w^-1 u^-2 , u^-1 v^-1 u^-1 y^-1 u^-2 , u^-1 v^-1 x^-1 v^2 .] The gluing orbits of face classes from 𝒟_ are given by face orbit u ( 1, 2,14,20, 3)( 4,18,47,24, 7)( 5,12,17,46,23)( 6,19,48,25, 9)( 8,15,49,21,11)(10,16,50,22,13) (26,54,43,37,27)(28,42,52,36,29)(30,33,39,38,51)(31,44,53,40,32)(34,55,45,41,35)(56,59,60,58,57) v ( 1, 4,26,30, 5)( 2,15,51,32, 6)( 3,13,28,54,21)( 7,29,56,33,11)( 8,27,57,31,12)( 9,10,18,49,23) (14,46,40,35,16)(17,38,58,34,19)(20,25,41,42,47)(22,45,59,43,24)(36,55,44,39,37)(48,53,60,52,50) w ( 1, 6,34,36, 7)( 2,16,52,37, 8)( 3, 5,31,55,22)( 4,10,35,58,27)( 9,32,57,29,13)(11,12,19,50,24) (14,47,43,39,17)(15,18,42,60,38)(20,21,33,44,48)(23,30,56,45,25)(26,28,41,40,51)(46,49,54,59,53) x ( 1, 8,38,40, 9)( 2,17,53,41,10)( 3, 7,27,51,23)( 4,15,46,25,13)( 5,11,37,58,32)( 6,12,39,60,35) (14,48,45,28,18)(16,19,44,59,42)(20,22,29,26,49)(21,24,36,57,30)(31,33,43,52,34)(47,50,55,56,54) y ( 1,10,42,43,11)( 2,18,54,33,12)( 3, 9,35,52,24)( 4,28,59,39, 8)( 5, 6,16,47,21)( 7,13,41,60,37) (14,49,30,31,19)(15,26,56,44,17)(20,23,32,34,50)(22,25,40,58,36)(27,29,45,53,38)(46,51,57,55,48) z ( 1,12,44,45,13)( 2,19,55,29, 4)( 3,11,39,53,25)( 5,33,59,41, 9)( 6,31,56,28,10) ( 7, 8,17,48,22) (14,50,36,27,15)(16,34,57,26,18)(20,24,37,38,46)(21,43,60,40,23)(30,54,42,35,32)(47,52,58,51,49)The 60 surfaces are given by their pentagonal disks. Every pentagonal disk is denoted by the corresponding pentagonal face in the lift of 𝒟_ it is parallel to, and the index of the dodecahedron it is contained in. The labelling follows Figure 2. 
[ S_1=⟨ (u,⊙)_60, (x,⊙)_42, (w,⊙)_39, (v,⊙)_41, (z,⊙)_43, (y,⊙)_53,; (y,⊗)_54, (w,⊗)_28, (x,⊗)_44, (v,⊗)_33, (z,⊗)_45, (u,⊗)_56⟩ ][S_2 =⟨(x,⊗)_60,(w,⊙)_52,(u,⊗)_41,(y,⊙)_58,(v,⊗)_42,(z,⊗)_40,; (z,⊙)_16,(u,⊙)_34, (w,⊗)_9,(y,⊗)_10,(v,⊙)_32, (x,⊙)_6⟩ ][ S_3=⟨ (y,⊗)_60, (x,⊙)_40, (u,⊗)_39, (z,⊙)_58, (w,⊗)_53, (v,⊗)_37,; (v,⊙)_46, (u,⊙)_51,(x,⊗)_8, (z,⊗)_17, (w,⊙)_27, (y,⊙)_15⟩ ][ S_4=⟨ (z,⊗)_60, (y,⊙)_37, (u,⊗)_42, (v,⊙)_58, (x,⊗)_43, (w,⊗)_35,; (w,⊙)_24, (u,⊙)_36, (v,⊗)_47, (y,⊗)_16, (x,⊙)_34, (z,⊙)_50⟩ ][ S_5=⟨ (v,⊗)_60, (z,⊙)_35, (u,⊗)_53, (w,⊙)_58, (y,⊗)_41, (x,⊗)_38,;(x,⊙)_9, (u,⊙)_32, (w,⊗)_25, (z,⊗)_46, (y,⊙)_51, (v,⊙)_23⟩ ][S_6 =⟨(w,⊗)_60,(v,⊙)_38,(u,⊗)_43,(x,⊙)_58,(z,⊗)_39,(y,⊗)_52,;(y,⊙)_8,(u,⊙)_27,(v,⊗)_24,(x,⊗)_11,(z,⊙)_36, (w,⊙)_7⟩ ][ S_7=⟨ (v,⊙)_60, (x,⊙)_52, (y,⊙)_39, (w,⊗)_42, (u,⊙)_37, (z,⊗)_59,; (z,⊙)_24, (y,⊗)_47, (w,⊙)_11, (x,⊗)_33, (u,⊗)_54, (v,⊗)_21⟩ ][ S_8=⟨ (w,⊙)_60, (y,⊙)_40, (z,⊙)_42, (x,⊗)_53, (u,⊙)_35, (v,⊗)_59,;(v,⊙)_9, (z,⊗)_25, (x,⊙)_10, (y,⊗)_28, (u,⊗)_45, (w,⊗)_13⟩ ][ S_9=⟨ (x,⊙)_60, (z,⊙)_37, (v,⊙)_53, (y,⊗)_43, (u,⊙)_38, (w,⊗)_59,;(w,⊙)_8, (v,⊗)_11, (y,⊙)_17, (z,⊗)_44, (u,⊗)_33, (x,⊗)_12⟩ ][S_10=⟨ (y,⊙)_60, (v,⊙)_35, (w,⊙)_43, (z,⊗)_41, (u,⊙)_52, (x,⊗)_59,; (x,⊙)_16, (w,⊗)_10, (z,⊙)_47, (v,⊗)_54, (u,⊗)_28, (y,⊗)_18⟩ ][S_11=⟨ (z,⊙)_60, (w,⊙)_38, (x,⊙)_41, (v,⊗)_39, (u,⊙)_40, (y,⊗)_59,; (y,⊙)_46, (x,⊗)_17, (v,⊙)_25, (w,⊗)_45, (u,⊗)_44, (z,⊗)_48⟩ ][S_12=⟨ (u,⊗)_60, (v,⊗)_52, (w,⊗)_40, (x,⊗)_37, (y,⊗)_35, (z,⊗)_38,; (z,⊙)_34, (w,⊙)_36, (v,⊙)_51, (x,⊙)_32, (y,⊙)_27, (u,⊙)_57⟩ ][S_13=⟨ (u,⊙)_59, (x,⊙)_54, (w,⊙)_44, (v,⊙)_28, (z,⊙)_33, (y,⊙)_45,; (y,⊗)_30, (w,⊗)_26, (x,⊗)_55, (v,⊗)_31, (z,⊗)_29, (u,⊗)_57⟩ ][ S_14 =⟨(v,⊙)_59,(x,⊙)_43,(y,⊙)_44,(w,⊗)_54,(u,⊙)_39,(z,⊗)_56,; (z,⊙)_11,(y,⊗)_21,(w,⊙)_12,(x,⊗)_31,(u,⊗)_30, (v,⊗)_5⟩ ][ S_15 =⟨(w,⊙)_59,(y,⊙)_41,(z,⊙)_54,(x,⊗)_45,(u,⊙)_42,(v,⊗)_56,; (v,⊙)_10,(z,⊗)_13,(x,⊙)_18,(y,⊗)_26,(u,⊗)_29, (w,⊗)_4⟩ ][S_16=⟨ (x,⊙)_59, (z,⊙)_39, (v,⊙)_45, (y,⊗)_33, (u,⊙)_53, (w,⊗)_56,; (w,⊙)_17, (v,⊗)_12, (y,⊙)_48, (z,⊗)_55, (u,⊗)_31, (x,⊗)_19⟩ ][S_17=⟨ (y,⊙)_59, (v,⊙)_42, (w,⊙)_33, (z,⊗)_28, (u,⊙)_43, (x,⊗)_56,; (x,⊙)_47, (w,⊗)_18, (z,⊙)_21, (v,⊗)_30, (u,⊗)_26, (y,⊗)_49⟩ ][S_18=⟨ (z,⊙)_59, (w,⊙)_53, (x,⊙)_28, (v,⊗)_44, (u,⊙)_41, (y,⊗)_56,; (y,⊙)_25, (x,⊗)_48, (v,⊙)_13, (w,⊗)_29, (u,⊗)_55, (z,⊗)_22⟩ ][S_19=⟨ (u,⊗)_59, (v,⊗)_43, (w,⊗)_41, (x,⊗)_39, (y,⊗)_42, (z,⊗)_53,; (z,⊙)_52, (w,⊙)_37, (v,⊙)_40, (x,⊙)_35, (y,⊙)_38, (u,⊙)_58⟩ ][ S_20 =⟨(x,⊗)_58,(w,⊙)_34,(u,⊗)_40,(y,⊙)_57,(v,⊗)_35,(z,⊗)_51,;(z,⊙)_6,(u,⊙)_31,(w,⊗)_23, (y,⊗)_9,(v,⊙)_30, (x,⊙)_5⟩ ][ S_21 =⟨(y,⊗)_58,(x,⊙)_51,(u,⊗)_37,(z,⊙)_57,(w,⊗)_38,(v,⊗)_36,; (v,⊙)_15,(u,⊙)_26, (x,⊗)_7, (z,⊗)_8,(w,⊙)_29, (y,⊙)_4⟩ ][S_22=⟨ (z,⊗)_58, (y,⊙)_36, (u,⊗)_35, (v,⊙)_57, (x,⊗)_52, (w,⊗)_32,; (w,⊙)_50, (u,⊙)_55, (v,⊗)_16,(y,⊗)_6, (x,⊙)_31, (z,⊙)_19⟩ ][S_23=⟨ (v,⊗)_58, (z,⊙)_32, (u,⊗)_38, (w,⊙)_57, (y,⊗)_40, (x,⊗)_27,; (x,⊙)_23, (u,⊙)_30, (w,⊗)_46, (z,⊗)_15, (y,⊙)_26, (v,⊙)_49⟩ ][S_24=⟨ (w,⊗)_58, (v,⊙)_27, (u,⊗)_52, (x,⊙)_57, (z,⊗)_37, (y,⊗)_34,;(y,⊙)_7, (u,⊙)_29, (v,⊗)_50, (x,⊗)_24, (z,⊙)_55, (w,⊙)_22⟩ ][S_25=⟨ (u,⊗)_58, (v,⊗)_34, (w,⊗)_51, (x,⊗)_36, (y,⊗)_32, (z,⊗)_27,; (z,⊙)_31, (w,⊙)_55, (v,⊙)_26, (x,⊙)_30, (y,⊙)_29, (u,⊙)_56⟩ ][S_26=⟨ (x,⊗)_57, (w,⊙)_31, (u,⊗)_51, (y,⊙)_56, (v,⊗)_32, (z,⊗)_26,;(z,⊙)_5, (u,⊙)_33, (w,⊗)_49, (y,⊗)_23, (v,⊙)_54, (x,⊙)_21⟩ ][S_27=⟨ (y,⊗)_57, (x,⊙)_26, (u,⊗)_36, (z,⊙)_56, (w,⊗)_27, (v,⊗)_55,;(v,⊙)_4, (u,⊙)_28, (x,⊗)_22,(z,⊗)_7, (w,⊙)_45, (y,⊙)_13⟩ ][S_28=⟨ (z,⊗)_57, (y,⊙)_55, (u,⊗)_32, (v,⊙)_56, (x,⊗)_34, (w,⊗)_30,; (w,⊙)_19, (u,⊙)_44,(v,⊗)_6,(y,⊗)_5, (x,⊙)_33, (z,⊙)_12⟩ 
][S_29=⟨ (v,⊗)_57, (z,⊙)_30, (u,⊗)_27, (w,⊙)_56, (y,⊗)_51, (x,⊗)_29,; (x,⊙)_49, (u,⊙)_54, (w,⊗)_15,(z,⊗)_4, (y,⊙)_28, (v,⊙)_18⟩ ][S_30=⟨ (w,⊗)_57, (v,⊙)_29, (u,⊗)_34, (x,⊙)_56, (z,⊗)_36, (y,⊗)_31,; (y,⊙)_22, (u,⊙)_45, (v,⊗)_19, (x,⊗)_50, (z,⊙)_44, (w,⊙)_48⟩ ][ S_31 =⟨(y,⊗)_55,(x,⊙)_29,(u,⊗)_50,(z,⊙)_45,(w,⊗)_36,(v,⊗)_48,;(v,⊙)_7,(u,⊙)_13,(x,⊗)_20,(z,⊗)_24,(w,⊙)_25, (y,⊙)_3⟩ ][S_32=⟨ (w,⊗)_55, (v,⊙)_22, (u,⊗)_19, (x,⊙)_45, (z,⊗)_50, (y,⊗)_44,; (y,⊙)_20, (u,⊙)_25, (v,⊗)_17, (x,⊗)_14, (z,⊙)_53, (w,⊙)_46⟩ ][ S_33 =⟨(v,⊙)_55,(x,⊙)_44,(y,⊙)_50,(w,⊗)_31,(u,⊙)_48,(z,⊗)_34,; (z,⊙)_17,(y,⊗)_12,(w,⊙)_14,(x,⊗)_16, (u,⊗)_6, (v,⊗)_2⟩ ][S_34=⟨ (x,⊙)_55, (z,⊙)_48, (v,⊙)_36, (y,⊗)_19, (u,⊙)_22, (w,⊗)_34,; (w,⊙)_20, (v,⊗)_14, (y,⊙)_24, (z,⊗)_52, (u,⊗)_16, (x,⊗)_47⟩ ][S_35=⟨ (x,⊗)_54, (w,⊙)_21, (u,⊗)_18, (y,⊙)_43, (v,⊗)_49, (z,⊗)_42,; (z,⊙)_20, (u,⊙)_24, (w,⊗)_16, (y,⊗)_14, (v,⊙)_52, (x,⊙)_50⟩ ][ S_36 =⟨(z,⊗)_54,(y,⊙)_33,(u,⊗)_49,(v,⊙)_43,(x,⊗)_30,(w,⊗)_47,;(w,⊙)_5,(u,⊙)_11,(v,⊗)_23,(y,⊗)_20,(x,⊙)_24, (z,⊙)_3⟩ ][ S_37 =⟨(w,⊙)_54,(y,⊙)_42,(z,⊙)_49,(x,⊗)_28,(u,⊙)_47,(v,⊗)_26,; (v,⊙)_16,(z,⊗)_10,(x,⊙)_14,(y,⊗)_15, (u,⊗)_4, (w,⊗)_2⟩ ][S_38=⟨ (y,⊙)_54, (v,⊙)_47, (w,⊙)_30, (z,⊗)_18, (u,⊙)_21, (x,⊗)_26,; (x,⊙)_20, (w,⊗)_14, (z,⊙)_23, (v,⊗)_51, (u,⊗)_15, (y,⊗)_46⟩ ][S_39=⟨ (y,⊗)_53, (x,⊙)_25, (u,⊗)_17, (z,⊙)_40, (w,⊗)_48, (v,⊗)_38,; (v,⊙)_20, (u,⊙)_23, (x,⊗)_15, (z,⊗)_14, (w,⊙)_51, (y,⊙)_49⟩ ][ S_40 =⟨(v,⊗)_53,(z,⊙)_41,(u,⊗)_48,(w,⊙)_40,(y,⊗)_45,(x,⊗)_46,; (x,⊙)_13, (u,⊙)_9,(w,⊗)_22,(z,⊗)_20,(y,⊙)_23, (v,⊙)_3⟩ ][ S_41 =⟨(x,⊙)_53,(z,⊙)_38,(v,⊙)_48,(y,⊗)_39,(u,⊙)_46,(w,⊗)_44,; (w,⊙)_15, (v,⊗)_8,(y,⊙)_14,(z,⊗)_19,(u,⊗)_12, (x,⊗)_2⟩ ][ S_42 =⟨(w,⊗)_52,(v,⊙)_37,(u,⊗)_47,(x,⊙)_36,(z,⊗)_43,(y,⊗)_50,; (y,⊙)_11, (u,⊙)_7,(v,⊗)_20,(x,⊗)_21,(z,⊙)_22, (w,⊙)_3⟩ ][ S_43 =⟨(y,⊙)_52,(v,⊙)_34,(w,⊙)_47,(z,⊗)_35,(u,⊙)_50,(x,⊗)_42,; (x,⊙)_19, (w,⊗)_6,(z,⊙)_14,(v,⊗)_18,(u,⊗)_10, (y,⊗)_2⟩ ][ S_44 =⟨(x,⊗)_51,(w,⊙)_32,(u,⊗)_46,(y,⊙)_30,(v,⊗)_40,(z,⊗)_49,;(z,⊙)_9, (u,⊙)_5,(w,⊗)_20,(y,⊗)_25,(v,⊙)_21, (x,⊙)_3⟩ ][ S_45 =⟨(z,⊙)_51,(w,⊙)_26,(x,⊙)_46,(v,⊗)_27,(u,⊙)_49,(y,⊗)_38,; (y,⊙)_18, (x,⊗)_4,(v,⊙)_14,(w,⊗)_17, (u,⊗)_8, (z,⊗)_2⟩ ][S_46=⟨ (w,⊗)_50, (v,⊙)_24, (u,⊗)_14, (x,⊙)_22, (z,⊗)_47, (y,⊗)_48,; (y,⊙)_21,(u,⊙)_3, (v,⊗)_46, (x,⊗)_49, (z,⊙)_25, (w,⊙)_23⟩ ][S_47=⟨ (v,⊙)_50, (x,⊙)_48, (y,⊙)_47, (w,⊗)_19, (u,⊙)_20, (z,⊗)_16,; (z,⊙)_46, (y,⊗)_17, (w,⊙)_49, (x,⊗)_18,(u,⊗)_2, (v,⊗)_15⟩ ][ S_48 =⟨(v,⊗)_45,(z,⊙)_28,(u,⊗)_22,(w,⊙)_41,(y,⊗)_29,(x,⊗)_25,;(x,⊙)_4,(u,⊙)_10, (w,⊗)_7, (z,⊗)_3, (y,⊙)_9, (v,⊙)_1⟩ ][ S_49 =⟨(v,⊙)_44,(x,⊙)_39,(y,⊙)_19,(w,⊗)_33,(u,⊙)_17,(z,⊗)_31,;(z,⊙)_8,(y,⊗)_11, (w,⊙)_2, (x,⊗)_6, (u,⊗)_5, (v,⊗)_1⟩ ][ S_50 =⟨(w,⊗)_43,(v,⊙)_39,(u,⊗)_21,(x,⊙)_37,(z,⊗)_33,(y,⊗)_24,; (y,⊙)_12, (u,⊙)_8, (v,⊗)_3, (x,⊗)_5, (z,⊙)_7, (w,⊙)_1⟩ ][ S_51 =⟨(w,⊙)_42,(y,⊙)_35,(z,⊙)_18,(x,⊗)_41,(u,⊙)_16,(v,⊗)_28,;(v,⊙)_6, (z,⊗)_9, (x,⊙)_2, (y,⊗)_4,(u,⊗)_13, (w,⊗)_1⟩ ][ S_52 =⟨(v,⊗)_41,(z,⊙)_10,(u,⊗)_25,(w,⊙)_35,(y,⊗)_13,(x,⊗)_40,;(x,⊙)_1, (u,⊙)_6, (w,⊗)_3,(z,⊗)_23,(y,⊙)_32, (v,⊙)_5⟩ ][ S_53 =⟨(w,⊗)_39,(v,⊙)_17,(u,⊗)_11,(x,⊙)_38,(z,⊗)_12,(y,⊗)_37,;(y,⊙)_2,(u,⊙)_15, (v,⊗)_7, (x,⊗)_1,(z,⊙)_27, (w,⊙)_4⟩ ][S_54=⟨ (w,⊗)_37,(v,⊙)_8, (u,⊗)_24, (x,⊙)_27, (z,⊗)_11, (y,⊗)_36,;(y,⊙)_1,(u,⊙)_4, (v,⊗)_22,(x,⊗)_3, (z,⊙)_29, (w,⊙)_13⟩ ][S_55=⟨ (x,⊗)_35, (w,⊙)_16,(u,⊗)_9, (y,⊙)_34, (v,⊗)_10, (z,⊗)_32,;(z,⊙)_2, (u,⊙)_19,(w,⊗)_5,(y,⊗)_1, (v,⊙)_31, (x,⊙)_12⟩ ][ S_56 =⟨(v,⊙)_33,(x,⊙)_11,(y,⊙)_31,(w,⊗)_21,(u,⊙)_12,(z,⊗)_30,;(z,⊙)_1, (y,⊗)_3, (w,⊙)_6,(x,⊗)_32,(u,⊗)_23, (v,⊗)_9⟩ ][ S_57 =⟨(v,⊗)_29,(z,⊙)_26, (u,⊗)_7,(w,⊙)_28,(y,⊗)_27,(x,⊗)_13,; (x,⊙)_15,(u,⊙)_18, (w,⊗)_8, 
(z,⊗)_1,(y,⊙)_10, (v,⊙)_2⟩ ][S_58=⟨ (v,⊗)_25, (z,⊙)_13, (u,⊗)_20,(w,⊙)_9, (y,⊗)_22, (x,⊗)_23,;(x,⊙)_7,(u,⊙)_1, (w,⊗)_24, (z,⊗)_21,(y,⊙)_5, (v,⊙)_11⟩ ][ S_59 =⟨(v,⊙)_19,(x,⊙)_17,(y,⊙)_16,(w,⊗)_12,(u,⊙)_14, (z,⊗)_6,; (z,⊙)_15, (y,⊗)_8,(w,⊙)_18,(x,⊗)_10, (u,⊗)_1, (v,⊗)_4⟩ ][S_60=⟨ (v,⊗)_13,(z,⊙)_4,(u,⊗)_3, (w,⊙)_10,(y,⊗)_7,(x,⊗)_9,;(x,⊙)_8,(u,⊙)_2, (w,⊗)_11,(z,⊗)_5,(y,⊙)_6, (v,⊙)_12⟩ ]
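The listing above lends itself to a mechanical sanity check: the sixty surfaces should partition the 6 × 2 × 60 = 720 pentagonal disk labels, twelve per surface. A sketch (assuming the tables have been parsed into a hypothetical list `surfaces` of label sets):

    # Sketch: consistency check of the surface listing above.  We assume it
    # has been parsed into `surfaces`: a list of 60 sets of labels
    # (face, mark, index), e.g. S_1 -> {("u", "odot", 60), ("x", "odot", 42), ...}.
    def check_surface_partition(surfaces):
        assert len(surfaces) == 60                  # one sphere per dodecahedron
        assert all(len(s) == 12 for s in surfaces)  # 12 pentagonal disks each
        labels = [lab for s in surfaces for lab in s]
        universe = {(f, m, i) for f in "uvwxyz"
                    for m in ("odot", "otimes") for i in range(1, 61)}
        # the 60 surfaces partition the 6 * 2 * 60 = 720 pentagonal disks
        assert len(labels) == len(set(labels)) == 720 and set(labels) == universe
        return True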
http://arxiv.org/abs/1702.08080v2
{ "authors": [ "Jonathan Spreer", "Stephan Tillmann" ], "categories": [ "math.GT", "57N10, 57M20, 57N35" ], "primary_category": "math.GT", "published": "20170226203033", "title": "Unravelling the Dodecahedral Spaces" }
Irreducible Convex Paving for Decomposition of Multi-dimensional Martingale Transport Plans

Hadrien De March (hadrien.de-march@polytechnique.org) and Nizar Touzi (nizar.touzi@polytechnique.edu)
CMAP, École Polytechnique, France
The authors gratefully acknowledge the financial support of the ERC 321111 Rofirm, and the Chairs Financial Risks (Risk Foundation, sponsored by Société Générale) and Finance and Sustainable Development (IEF sponsored by EDF and CA).
Accepted ???. Received ???; in original form December 30, 2023

Martingale transport plans on the line are known from Beiglböck & Juillet <cit.> to have an irreducible decomposition on an (at most) countable union of intervals. We provide an extension of this decomposition for martingale transport plans in ^d, d≥ 1. Our decomposition is a partition of ^d consisting of a possibly uncountable family of relatively open convex components, with the required measurability so that the disintegration is well-defined. We justify the relevance of our decomposition by proving the existence of a martingale transport plan filling these components. We also deduce from this decomposition a characterization of the structure of polar sets with respect to all martingale transport plans.

Key words. Martingale optimal transport, irreducible decomposition, polar sets.

§ INTRODUCTION

The problem of martingale optimal transport was introduced as the dual of the problem of robust (model-free) superhedging of exotic derivatives in financial mathematics, see Beiglböck, Henry-Labordère & Penkner <cit.> in discrete time, and Galichon, Henry-Labordère & Touzi <cit.> in continuous time. The robust superhedging problem was introduced by Hobson <cit.>, and addressed specific examples of exotic derivatives by means of corresponding solutions of the Skorohod embedding problem, see <cit.>, and the survey <cit.>. Given two probability measures μ,ν on ^d, with finite first order moment, martingale optimal transport differs from standard optimal transport in that the set of all coupling probability measures (μ,ν) on the product space is reduced to the subset (μ,ν) restricted by the martingale condition. We recall from Strassen <cit.> that (μ,ν)≠∅ if and only if μ≼ν in the convex order, i.e. μ(f)≤ν(f) for all convex functions f.
Notice that the inequality μ(f)≤ν(f) is a direct consequence of the Jensen inequality, the reverse implication follows from the Hahn-Banach theorem.This paper focuses on the critical observation by Beiglböck & Juillet <cit.> that, in the one-dimensional setting d=1, any such martingale interpolating probability measurehas a canonical decomposition =∑_k≥ 0_k, where _k∈(μ_k,ν_k) and μ_k is the restriction of μ to the so-called irreducible components I_k, and ν_k := ∫_x∈ I_k(dx,·), supported in J_k, k≥ 0, is independent of the choice of _k. Here, (I_k)_k≥ 1 are open intervals, I_0:=∖(∪_k≥ 1 I_k), and J_k is an augmentation of I_k by the inclusion of either one of the endpoints of I_k, depending on whether they are charged by the distribution _k. Remarkably, the irreducible components (I_k,J_k)_k≥ 0 are independent of the choice of ∈(μ,ν). To understand this decomposition, notice that convex functions in one dimension are generated by the family f_x_0(x):=|x-x_0|,x_0∈, x_0∈. Then, in terms of the potential functions U^μ(x_0):=μ(f_x_0), and U^ν(x_0):=ν(f_x_0), x_0∈, we have μ≼ν if and only if U^μ≤ U^ν and μ,ν have same mean. Then, at any contact points x_0, of the potential functions, U^μ(x_0)=U^ν(x_0), we have equality in the underlying Jensen equality, which means that the singularity x_0 of the underlying function f_x_0 is not seen by the measure. In other words, the point x_0 acts as a barrier for the mass transfer in the sense that martingale transport maps do not cross the barrier x_0. Such contact points are precisely the endpoints of the intervals I_k, k≥ 1.The decomposition in irreducible components plays a crucial role for the quasi-sure formulation introduced by Beiglböck, Nutz, and Touzi <cit.>, and represents an important difference between martingale transport and standard transport. Indeed, while the martingale transport problem is affected by the quasi-sure formulation, the standard optimal transport problem is not changed. We also refer to Ekren & Soner <cit.> for further functional analytic aspects of this duality.Our objective in this paper is to extend the last decomposition to an arbitrary d-dimensional setting, d≥ 1. The main difficulty is that convex functions do not have anymore such a simple generating family. Therefore, all of our analysis is based on the set of convex functions. A first extension of the last decomposition to the multi-dimensional case was achieved by Ghoussoub, Kim & Lim <cit.>. Motivated by the martingale monotonicity principle of Beiglböck & Juillet <cit.> (see also Zaev <cit.> for higher dimension and general linear constraints), their strategy is to find a monotone set Γ⊂^d×^d, where the robust superhedging holds with equality, as a support of the optimal martingale transport in (μ,ν). Denoting Γ_x:={y:(x,y)∈Γ}, this naturally induces the relation xx' if x∈ (Γ_x'), which is then completed to an equivalence relation ∼. The corresponding equivalence classes define their notion of irreducible components.Our subsequent results differ from <cit.> from two perspectives. First, unlike <cit.>, our decomposition is universal in the sense that it is not relative to any particular martingale measure in (μ,ν) (see example <ref>). Second, our construction of the irreducible convex paving allows to prove the required measurability property, thus justifying completely the existence of a disintegration of martingale plans. Finally, during the final stage of writing the present paper, we learned about the parallel work by Jan Obłój and Pietro Siorpaes <cit.>. 
Although the results are close, our approach is different from theirs. We are grateful to them for pointing to us the notions of "convex face" and "Wijsmann topology" and the relative references, which allowed us to streamline our presentation. In an earlier version of this work we used instead a topology that we called the compacted Hausdorff distance, defined as the topology generated by the countable restrictions of the space to the closed balls centered in the origin with integer radii; the two are in our case the same topologies, as the Wijsman topology is locally equivalent to the Hausdorff topology in a locally compact set. We also owe Jan and Pietro special thanks for their useful remarks and comments on a first draft of this paper privately exchanged with them.The paper is organized as follows. Section <ref> contains the main results of the paper, namely our decomposition in irreducible convex paving, and shows the identity with the Beiglböck & Juillet <cit.> notion in the one-dimensional setting. Section <ref> collects the main technical ingredients needed for the statement of our main results, and gives the structure of polar sets. In particular, we introduce the new notions of relative face and tangent convex functions, together with the required topology on the set of such functions.The remaining sections contain the proofs of these results. In particular, the measurability of our irreducible convex paving is proved in Section <ref>. Notation We denote by :=∪{-∞,∞} the completed real line, and similarly denote _+:=_+∪{∞}. We fix an integer d≥ 1. For x∈^d and r≥ 0, we denote B_r(x) the closed ball for the Euclidean distance, centered in x with radius r. We denote for simplicity B_r := B_r(0). If x∈, and A⊂, where (, d) is a metric space, (x,A):=inf_a∈ A d(x,a). In all this paper, ^d is endowed with the Euclidean distance.If V is a topological affine space and A⊂ V is a subset of V, A is the interior of A, A is the closure of A, A is the smallest affine subspace of V containing A, A is the convex hull of A, (A):=( A), and A is the relative interior of A, which is the interior of A in the topology of A induced by the topology of V. We also denote by ∂ A:= A∖ A the relative boundary of A, and by _A the Lebesgue measure of A.The setof all closed subsets of ^d is a Polish space when endowed with the Wijsman topology[The Wijsman topology on the collection of all closed subsets of a metric space (, d) is the weak topology generated by {(x,·):x∈}.] (see Beer <cit.>). As ^d is separable, it follows from a theorem of Hess <cit.> that a function F:^d⟶ is Borel measurable with respect to the Wijsman topology if and only if its associated multifunction is Borel measurable, i.e. *̱ F^-(V):={x∈^d:F(x)∩ V≠∅}  V⊂^d. * The subset ⊂ of all the convex closed subsets of ^d is closed infor the Wijsman topology, and therefore inherits its Polish structure. Clearly,is isomorphic to := { K : K∈} (with reciprocal isomorphism cl). We shall identify these two isomorphic sets in the rest of this text, when there is no possible confusion.We denote Ω:=^d×^d and define the two canonical maps *̱X :(x,y)∈Ω⟼ x∈^dY :(x,y)∈Ω⟼ y∈^d.* For φ,ψ:^d⟶, and h:^d⟶^d, we denote*̱φ⊕ψ := φ(X)+ψ(Y),h^⊗ := h(X)·(Y-X), * with the convention ∞-∞ = ∞.For a Polish space , we denote by () the collection of Borel subsets of , and () the set of all probability measures on (,()). For ∈(), we denote by _ the collection of all -null sets,the smallest closed support of , and := the smallest convex closed support of . 
For a measurable function f:⟶, we denote f:={|f|<∞}, and we use again the convention ∞-∞ = ∞ to define its integral, and denote*̱[f]:=^[f] = ∫_ f d = ∫_ f(x) (dx)∈().* Letbe another Polish space, and ∈(). The corresponding conditional kernel[The usual definition of a kernel requires that the map x↦_x[B] is Borel measurable for all Borel set B∈(^d). In this paper, we only require this map to be analytically measurable.]_x is defined μ-a.e. by:(dx,dy) = μ(dx)⊗_x(dy), where μ:=∘ X^-1.We denote by ^0(,) the set of Borel measurable maps fromto . We denote for simplicity ^0():=^0(,) and ^0_+():=^0(,_+). Letbe a σ-algebra of , we denote by ^(,) the set of -measurable maps fromto . For a measure m on , we denote ^1(,m):={f∈^0():m[|f|]<∞}. We also denote simply ^1(m):=^1(,m) and ^1_+(m):=^1_+(_+,m).We denote bythe collection of all finite convex functions f:^d⟶. We denote by ∂ f(x) the corresponding subgradient at any point x∈^d. We also introduce the collection of all measurable selections in the subgradient, which is nonempty by Lemma <ref>,∂ f:={p∈Ł^0(^d,^d): p(x)∈∂ f(x) for all x∈^d}.We finally denote f_∞ := lim inf_n→∞f_n, for any sequence (f_n)_n≥ 1 of real numbers, or of real-valued functions.§ MAIN RESULTS Throughout this paper, we consider two probability measures μ and ν on ^d with finite first order moment, and μ≼ν in the convex order, i.e. ν(f)≥μ(f) for all f∈. Using the convention ∞-∞=∞, we may define (ν-μ)(f)∈[0,∞] for all f∈. We denote by (μ,ν) the collection of all probability measures on ^d×^d with marginals ∘ X^-1=μ and ∘ Y^-1=ν. Notice that (μ,ν)≠∅ by Strassen <cit.>.An (μ,ν)-polar set is an element of ∩_∈(μ,ν)_. A property is said to hold (μ,ν)-quasi surely (abbreviated as q.s.) if it holds on the complement of an (μ,ν)-polar set. §.§ The irreducible convex paving The next first result shows the existence of a maximum support martingale transport plan, i.e. a martingale interpolating measurewhose disintegration _x has a maximum convex hull of supports among all measures in (μ,ν). There exists ∈(μ,ν) such that ∈(μ,ν),_X⊂ _X,μ-Furthermore _X is μ-a.s. unique, and we may choose this kernel so that(i)x⟼ _x is analytically measurable[Analytically measurable means measurable with respect to the smallest σ-algebra containing the analytic sets. All Borel sets are analytic and all analytic sets are universally measurable, i.e. measurable with respect to all Borel measures (see Proposition 7.41 and Corollary 7.42.1 in <cit.>).]^d⟶,(ii)x∈ I(x):=_x, for all x∈^d, and {I(x),x∈^d} is a partition of ^d. This Theorem will be proved in Subsection <ref>. The (μ-a.s. unique) set valued map I(X) paves ^d by its image by (ii) of Theorem <ref>. By (<ref>), this paving is stable by all ∈(μ,ν):Y∈ I(X), (μ,ν)- Finally, the measurability of the map I in the Polish spaceallows to see it as a random variable, which allows to condition probabilistic events to X∈ I, even when these components are all μ-negligible when considered apart from the others. Under the conditions of Theorem <ref>, we call such I(X) the irreducible convex paving associated to (μ,ν).Now we provide an important counterexample proving that for some (μ,ν) in dimension larger than 1, particular couplings in (μ,ν) may define different pavings. In ^2, we introduce x_0:=(0,0), x_1:=(1,0), y_0:=x_0, y_-1:=(0,-1), y_1:=(0,1), and y_2:=(2,0). Then we set μ := 1/2(δ_x_0+δ_x_1) and ν := 1/8(4δ_y_0+δ_y_-1+δ_y_1+2δ_y_2). 
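Since μ and ν here are finitely supported, the condition (μ,ν) ≠ ∅ — equivalently μ ≼ ν, by Strassen's theorem recalled in the Introduction — can be verified mechanically as the feasibility of a linear program: a martingale coupling is a nonnegative matrix with prescribed marginals and barycentre. A minimal sketch (variable names are ours, using scipy):

    # Sketch: check that M(mu, nu) is nonempty for the measures just defined,
    # by testing feasibility of the martingale-coupling linear program
    # (Strassen: nonempty iff mu <= nu in convex order).
    import numpy as np
    from scipy.optimize import linprog

    xs = np.array([[0., 0.], [1., 0.]])                       # support of mu
    ys = np.array([[0., 0.], [0., -1.], [0., 1.], [2., 0.]])  # support of nu
    mu = np.array([0.5, 0.5])
    nu = np.array([4., 1., 1., 2.]) / 8.

    m, n = len(xs), len(ys)       # unknowns: pi[i, j] >= 0, flattened row-wise
    A_eq, b_eq = [], []
    for i in range(m):            # X-marginal: sum_j pi[i, j] = mu[i]
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.
        A_eq.append(row); b_eq.append(mu[i])
    for j in range(n):            # Y-marginal: sum_i pi[i, j] = nu[j]
        row = np.zeros(m * n); row[j::n] = 1.
        A_eq.append(row); b_eq.append(nu[j])
    for i in range(m):            # martingale: sum_j pi[i, j] * (y_j - x_i) = 0
        for k in range(2):
            row = np.zeros(m * n); row[i * n:(i + 1) * n] = ys[:, k] - xs[i, k]
            A_eq.append(row); b_eq.append(0.)

    res = linprog(np.zeros(m * n), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=[(0., None)] * (m * n), method="highs")
    print("M(mu, nu) nonempty:", res.success)    # True
    print(res.x.reshape(m, n))                   # one martingale coupling

The LP is feasible; explicit couplings are exhibited next.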
We can show easily that (μ,ν) is the nonempty convex hull of _1 and _2 where_1:=1/8(4δ_x_0,y_0+2δ_x_1,y_2+δ_x_1,y_1+δ_x_1,y_-1)and_2:=1/8(2δ_x_0,y_0+δ_x_0,y_1+δ_x_0,y_-1+2δ_x_1,y_0+2δ_x_1,y_2) (i)The Ghoussoub-Kim-Lim <cit.> (GKL, hereafter) irreducible convex paving. Let c_1 = 1_{X=Y}, c_2 = 1-c_1 = 1_{X≠ Y}, and notice that _i is the unique optimal martingale transport plan for c_i, i=1,2. Then, it follows that the corresponding _i-irreducible convex paving according to the definition of <cit.> are given by[C__1(x_0)={x_0}, C__1(x_1)= {y_1,y_-1,y_2},; C__2(x_0)= {y_1,y_-1}, C__2(x_1)= {y_0,y_2}. ]Figure <ref> shows the extreme probabilities _1 and _2, and their associated irreducible convex pavings map C__1 and C__2.(ii)Our irreducible convex paving. The irreducible components are given by *̱I(x_0)= (y_1,y_-1)I(x_1)= (y_1,y_-1,y_2).* To see this, we use the characterization of Proposition <ref>. Indeed, as (μ,ν) = (_1,_2), for any ∈(μ,ν), ≪:=_1+_2/2, and supp _x⊂ conv( supp _x) for x=x_0,x_1. Then I(x) =conv( supp _x) for x=x_0,x_1 (i.e. μ-a.s.) by Proposition <ref>. In the one dimensional case, a convex paving which is invariant with respect to some ∈(μ, ν) is automatically invariant with respect to all ∈(μ, ν). Given a particular coupling ∈(μ, ν), the finest convex paving which is -invariant roughly corresponds to the GKL convex paving constructed in <cit.>. Then Example <ref> shows that this does not hold any more in dimension greater than two.Furthermore, in dimension one the "restriction" ν_I:=∫_I(dx,·) does not depend on the choice of the coupling ∈(μ,ν). Once again Example <ref> shows that it does not hold in higher dimension. Conditions guaranteeing that this property still holds in higher dimension will be investigated in <cit.>.§.§ Behavior on the boundary of the components For a probability measureon a topological space, and a Borel subset A, |_A:=[·∩ A] denotes its restriction to A. We may choose ∈(μ,ν) in Theorem <ref> so that for all ∈(μ,ν) and y∈^d,*̱ μ[ _X[{y}]>0]≤μ[ _X[{y}]>0], _X|_∂ I(X)⊂ _X|_∂ I(X), μ-* (i) The set-valued maps J(X):= I(X)∪{y∈^d:ν[y]>0,_X[{y}]>0}, and J̅(X):=I(X)∪ _X |_∂ I(X) are unique μ-a.s, and Y∈J̅(X), (μ,ν)-q.s. (ii) We may chose the kernel _X so that the map J̅ is convex valued, I⊂J⊂J̅⊂ I, and both J and J̅ are constant on I(x), for all x∈^d. The proof is reported in Subsection <ref>. §.§ Structure of polar sets Here we state the structure of polar sets that is a direct consequence, and will be made more precise by Theorem <ref>. A Borel set N∈(Ω) is (μ,ν)-polar if and only ifN ⊂{X∈ N_μ}∪{Y∈ N_ν}∪{Y∉ J(X)}, for some (N_μ,N_ν)∈_μ_ν and a set valued map J such that J⊂ J⊂J̅, the map J is constant on I(x) for all x∈^d, I(X)⊂(J(X)∖ N_ν'), μ-a.s. for all N_ν'∈_ν, and Y∈ J(X), (μ,ν)-q.s.§.§ The one-dimensional setting In the one-dimensional case, the decomposition in irreducible components and the structure of (μ,ν)-polar sets were introduced in Beiglböck & Juillet <cit.> and Beiglböck, Nutz & Touzi <cit.>, respectively. Let us see how the results of this paper reduce to the known concepts in the one dimensional case. First, in the one-dimensional setting, I(x) consists of open intervals (at most countable number) or single points. Following <cit.> Proposition 2.3, we denote the full dimensioncomponents (I_k)_k≥ 1. We also have J=J̅ (see Proposition <ref> below) therefore, Theorem <ref> is equivalent to Theorem 3.2 in <cit.>. Similar to (I_k)_k≥ 1, we introduce the corresponding sequence (J_k)_k≥ 1, as defined in <cit.>. 
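For a worked one-dimensional illustration of these components (the measures are our own choice): take μ = δ_0 and ν = ½(δ_{−1} + δ_1). In terms of the potential functions of the Introduction,

\[ U^\mu(x) = |x|, \qquad U^\nu(x) = \tfrac{1}{2}\big(|x+1| + |x-1|\big) = \max(|x|,1), \]

so that U^μ = U^ν precisely on {|x| ≥ 1} and U^μ < U^ν on (−1,1): the contact points ±1 act as barriers, and the decomposition reduces to the single full-dimension component I_1 = (−1,1), with J_1 = [−1,1] since ν charges both endpoints.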
Similar to <cit.>, we denote by μ_k the restriction of μ to I_k, and ν_k:=∫_x∈ I_k[dx,·] is independent of the choice of ∈(μ,ν). We define the Beiglböck & Juillet (BJ)-irreducible components *̱(I^BJ,J^BJ):x↦ (I_k,J_k) x∈ I_k, k≥ 1,({x},{x}) x∉∪_k I_k. * Let d=1. Then I=I^BJ, and J̅=J = J^BJ, μ-a.s. By Proposition <ref> (i)-(ii), we may find ∈(μ,ν) such that _X= I(X), and _X|_∂ I(X) = J̅∖ I(X), μ-a.s. Notice that as J̅∖ I(^d) only consists of a countable set of points, we have J = J̅. By Theorem 3.2 in <cit.>, we have Y∈ J^BJ(X), (μ,ν)-q.s. Therefore, Y∈ J^BJ(X), -a.s. and we have J̅(X)⊂ J^BJ(X), μ-a.s.On the other hand, let k≥ 1. By the fact that u_ν-u_μ>0 on I_k, together with the fact that J_k∖ I_k is constituted with atoms of ν, for any N_ν∈_ν, J_k⊂(J_k∖ N_ν). As μ = ν outside of the components,J^BJ(X)⊂(J^BJ(X)∖ N_ν),μ- Then by Theorem 3.2 in <cit.>, as {Y∉J̅(X)} is polar, we may find N_ν∈_ν such that J^BJ(X)∖ N_ν⊂J̅(X), μ-a.s. The convex hull of this inclusion, together with (<ref>) gives the remaining inclusion J^BJ(X)⊂J̅(X), μ-a.s.The equality I(X)=I^BJ(X), μ-a.s. follows from the relative interior taken on the previous equality. § PRELIMINARIES The proof of these results needs some preparation involving convex analysis tools. §.§ Relative face of a set For a subset A⊂^d and a∈^d, we introduce the face of A relative to a (also denoted a-relative face of A): _a A:={y∈ A: (a-(y-a),y+(y-a)) ⊂ A, for some >0}.Figure <ref> illustrates examples of relative faces of a square S, relative to some points.For later use, we list some properties whose proofs are reported in Section <ref>. [_aA is equal to the only relative interior of face of A containing a, where we extend the notion of face to non-convex sets. A face F of A is a nonempty subset of A such that for all [a,b]⊂ A, with (a,b)∩ F≠∅, we have [a,b]⊂ F. It is discussed in Hiriart-Urruty-Lemaréchal <cit.> as an extension of Proposition 2.3.7 that when A is convex, the relative interior of the faces of A form a partition of A, see also Theorem 18.2 in Rockafellar <cit.>. ] (i) For A,A'⊂^d, we have _a(A∩ A') = _a(A)∩_a(A'), and _a A ⊂_a A' whenever A⊂ A'. Moreover, _a A ≠∅ iff a∈_a A iff a∈ A. (ii) For a convex A, _a A=A ≠∅ iff a∈ A. Moreover, _a A is convex relatively open, A∖_a A is convex, and if x_0∈ A∖_a A and y_0∈ A, then [x_0,y_0)⊂ A∖_a A. Furthermore, if a∈ A, then (_a A) =(A) if and only if a∈ A. In this case, we have _a A = A =A = _aA.§.§ Tangent Convex functions Recall the notation (<ref>), and denote for all θ:Ω→: *̱_xθ := _xθ(x,·).* For θ_1,θ_2:Ω⟶, we say that θ_1=θ_2, , if *̱_Xθ_1 = _Xθ_2,θ_1(X,·)=θ_2(X,·)  _Xθ_1, μ-*The crucial ingredient for our main result is the following. A measurable function θ:Ω→_+ is a tangent convex function if *̱ θ(x,·)   θ(x,x)=0,  x∈^d.* We denote by Θ the set of tangent convex functions, and we define*̱Θ_μ := {θ∈Ł^0(Ω,_+):θ = θ', , θ≥θ',  θ'∈Θ}.* In order to introduce our main example of such functions, let *̱_pf(x,y) := f(y)-f(x)-p^⊗(x,y)≥ 0 ,f∈,  p∈∂ f. * Then, ():={_p f:f∈,p∈∂ f}⊂Θ⊂Θ_μ. The second inclusion is strict. Indeed, let d=1, and consider the convex function f:=∞1_(-∞,0). Then θ' := f(Y-X)∈Θ. Now let θ = θ'+√(|Y-X|). Notice that since _Xθ' = _Xθ = {X}, we have θ' = θ,for any measure μ, and θ≥θ'. Therefore θ∈Θ_μ. However, for all x∈^d, θ(x,·) is not convex, and therefore θ∉Θ.In higher dimension we may even have X∈ θ(X,·), and θ(X,·) is not convex. Indeed, for d=2, let f:(y_1,y_2)⟼∞(1_{|y_1|> 1} + 1_{|y_2|> 1}), so that θ := f(Y-X)∈Θ. Let x_0:=(1,0) and θ := θ' + 1_{Y = X+x_0}. 
Then, θ = θ',for any measure μ, and θ≥θ'. Therefore θ∈Θ_μ. However, θ∉Θ as θ(x,·) is not convex for all x∈^d.(i) Let θ∈Θ_μ, then _Xθ=_Xθ(X,·)⊂θ(X,·), μ-a.s.(ii) Let θ_1,θ_2∈Θ_μ, then _X(θ_1+θ_2) = _Xθ_1∩_Xθ_2, μ-a.s.(iii)Θ_μ is a convex cone.(i) It follows immediately from the fact that θ(X,·) is convex and finite on _Xθ, μ-a.s. by definition of Θ_μ. Then _Xθ⊂_Xθ(X,·). On the other hand, as θ(X,·)⊂ θ(X,·), the monotony of _x gives the other inclusion: _Xθ(X,·)⊂_Xθ. (ii) As θ_1,θ_2≥ 0, (θ_1+θ_2) =θ_1∩θ_2. Then, for x∈^d, (θ_1(x,·)+θ_2(x,·)) ⊂ θ_1(x,·)∩ θ_2(x,·). By Proposition <ref> (i),*̱_x(θ_1+θ_2) ⊂_xθ_1∩_xθ_2,x∈^d.* As for the reverse inclusion, notice that (i) implies that _Xθ_1∩_Xθ_2⊂θ_1(X,·)∩θ_2(X,·) = (θ_1(X,·)+θ_2(X,·))⊂ (θ_1(X,·)+θ_2(X,·)), μ-a.s. Observe that _xθ_1∩_xθ_2 is convex, relatively open, and contains x. Then, *̱_Xθ_1∩_Xθ_2 = _X(_Xθ_1∩_Xθ_2)⊂ _X( (θ_1(X,·)+θ_2(X,·)))= _X(θ_1+θ_2)  μ-*(iii) Given (ii), this follows from direct verification.A sequence (θ_n)_n≥ 1⊂Ł^0(Ω) convergesto some θ∈Ł^0(Ω) if *̱_X(θ_∞) =_Xθθ_n(X,·) ⟶θ(X,·),  _Xθ, μ-* Notice that the -limit isunique. In particular, if θ_n converges to θ, , it converges as well to θ_∞. Let (θ_n)_n≥ 1⊂Θ_μ, and θ:Ω⟶_+, such that θ_nn→∞⟶θ, ,(i)_Xθ⊂lim inf_n→∞_Xθ_n, μ-a.s.(ii) If θ_n' = θ_n, , and θ_n'≥θ_n, then θ_n'n→∞⟶θ, ;(iii)θ_∞∈Θ_μ.(i) Let x∈^d, such that θ_n(x,·) converges on _xθ to θ(x,·). Let y∈_xθ, let y'∈_xθ such that y' = x-ϵ(y-x), for some ϵ>0. As θ_n(x,y)n→∞⟶θ(x,y), and θ_n(x,y')n→∞⟶θ(x,y'), then for n large enough, both are finite, and y∈_xθ_n. y∈lim inf_n→∞_xθ_n, and _xθ⊂lim inf_n→∞_xθ_n. The inclusion is true for μ-a.e. x∈^d, which gives the result. (ii) By (i), we have _Xθ⊂lim inf_n→∞_Xθ_n = lim inf_n→∞_Xθ_n', μ-a.s. As θ_n≤θ_n', _Xθ'_∞⊂_Xθ_∞⊂lim inf_n→∞_Xθ_n, μ-a.s. We denote N_μ∈_μ, the set on which θ_n(X,·) does not converge to θ(X,·) on _Xθ(X,·). For x∉ N_μ, for y∈_xθ, θ_n(x,y)=θ_n'(x,y), for n large enough, and θ_n'(x,y)n→∞⟶θ(x,y)<∞. Then _Xθ = _Xθ'_∞, and θ_n'(X,·) converges to θ(X,·), on _Xθ, μ-a.s. We proved that θ_n'n→∞⟶θ, .(iii) has its proof reported in Subsection <ref> due to its length and technicality.The next result shows the relevance of this notion of convergence for our setting. Let (θ_n)_n≥ 1⊂Θ_μ. Then, we may find a sequence θ_n∈(θ_k,k≥ n), and θ_∞∈Θ_μ such that θ_n ⟶θ_∞,as n→∞. The proof is reported in Subsection <ref>. (i) A subset ⊂Θ_μ is -Fatou closed if θ_∞∈ for all (θ_n)_n≥ 1⊂ converging(in particular, Θ_μ is -Fatou closed by Proposition <ref> (iii)). (ii) The -Fatou closure of a subset A⊂Θ_μ is the smallest -Fatou closed set containing A: A:=⋂{⊂Θ_μ:  A⊂,  and   }. We next introduce for a≥ 0 the set _a:={f∈:(ν-μ)(f)≤ a}, and(μ,ν):=a≥ 0⋃ _a,where _a:=(_a),   (_a):={_p f: f∈_a,p∈∂ f}. (μ,ν) is a convex cone.We first prove that (μ,ν) is a cone. We consider λ,a>0, as we have λ_a=_λ a, and as convex combinations and inferior limit commute with the multiplication by λ, we have λ_a=_λ a. Then (μ,ν) =cone(_1), and therefore it is a cone.We next prove that _a is convex for all a≥ 0, which induces the required convexity of (μ,ν) by the non-decrease of the family {_a,a≥ 0}. Fix 0≤λ≤ 1, a≥ 0, θ_0∈_a, and denote (θ_0):={θ∈_a:λθ_0+(1-λ)θ∈_a}. In order to complete the proof, we now verify that (θ_0)⊃(_a) and is -Fatou closed, so that (θ_0)=_a. To see that (θ_0) is Fatou-closed, let (θ_n)_n≥ 1⊂(θ_0), converging . By definition of (θ_0), we have λθ_0+(1-λ)θ_n∈_a for all n. Then, λθ_0+(1-λ)θ_n⟶lim inf_n→∞λθ_0+(1-λ)θ_n, , and therefore λθ_0+(1-λ)θ_∞∈_a, which shows that θ_∞∈(θ_0). 
We finally verify that (θ_0)⊃(_a). First, for θ_0∈(_a), this inclusion follows directly from the convexity of (_a), implying that (θ_0)=_a in this case. For general θ_0∈_a, the last equality implies that (_a)⊂(θ_0), thus completing the proof. Notice that even though (_a)⊂Θ, the functions in (μ,ν) may not be in Θ as they may not be convex in y on (_xθ)^c for some x∈^d (see Example <ref>). The following result shows that some convexity is still preserved. For all θ∈(μ,ν), we may find N_μ∈_μ such that for x_1,x_2∉ N_μ, y_1,y_2∈^d, and λ∈[0,1] with := λ y_1 + (1-λ)y_2∈_x_1θ∩_x_2θ, we have:*̱λθ(x_1,y_1)+(1-λ)θ(x_1,y_2)-θ(x_1,) = λθ(x_2,y_1)+(1-λ)θ(x_2,y_2) -θ(x_2,)≥ 0.* The proof of this claim is reported in Subsection <ref>. We observe that the statement also holds true for a finite number of points y_1,...,y_k.[ This is not a direct consequence of Proposition <ref>, as the barycentrehas to be in _x_1θ∩_x_2θ. ] §.§ Extended integral We now introduce the extended (ν-μ)-integral:*̱ν⊖μ[θ]:=inf{a≥ 0 :θ∈_a}θ∈(μ,ν).* (i)[θ]≤ν⊖μ[θ]<∞ for all θ∈(μ,ν) and ∈(μ,ν).(ii)ν⊖μ[_p f] = (ν-μ)[f] for f∈∩Ł^1(ν) and p∈∂ f.(iii)ν⊖μ is homogeneous and convex.(i) For a>ν⊖μ[θ], set S^a:={F∈Θ_μ:[F]≤ a  ∈(μ,ν)}. Notice that S^a is -Fatou closed by Fatou's lemma, and contains (_a), as for f∈∩^1(ν) and p∈∂ f, [T_pf]= (ν-μ)[f] for all ∈(μ,ν). Then S^a contains _a as well, which contains θ. Hence, θ∈ S^a and [θ]≤ a for all ∈(μ,ν). The required result follows from the arbitrariness of a>ν⊖μ[θ]. (ii) Let ∈(μ,ν). For p∈∂ f, notice that T_p f∈(_a)⊂_a for some a=(ν-μ)[f], and therefore (ν-μ)[f] ≥ν⊖μ[T_p f]. Then, the result follows from the inequality (ν-μ)[f] = [T_p f] ≤ν⊖μ[T_p f]. (iii) Similarly to the proof of Proposition <ref>, we have λ_a=_λ a, for allλ,a>0. Then with the definition of ν⊖μ we have easily the homogeneity.To see that the convexity holds, let 0<λ<1, and θ,θ'∈(μ,ν) with a>ν⊖μ[θ], a'>ν⊖μ[θ'], for some a,a'>0. By homogeneity and convexity of _1, λθ +(1-λ)θ'∈_λ a + (1-λ)a', so that ν⊖μ[λθ +(1-λ)θ']≤λ a + (1-λ)a'. The required convexity property now follows from arbitrariness of a>ν⊖μ[θ] and a'>ν⊖μ[θ'].The following compacteness result plays a crucial role. Let (θ_n)_n≥ 1⊂(μ,ν) be such that sup_n≥ 1 ν⊖μ(θ_n)<∞. Then we can find a sequence θ_n∈(θ_k,k≥ n) such that *̱θ_∞∈(μ,ν), θ_n ⟶θ_∞,  ,  ν⊖μ(θ_∞) ≤lim inf_n→∞ν⊖μ(θ_n). *By possibly passing to a subsequence, we may assume that lim_n→∞(ν⊖μ)(θ_n) exists. The boundedness of ν⊖μ(θ_n) ensures that this limit is finite. We next introduce the sequence θ_n of Proposition <ref>. Then θ_n ⟶θ_∞, μ⊗ pw, and therefore θ_∞∈(μ,ν), because of the convergence θ_n ⟶θ_∞, . As (ν⊖μ)(θ_n)≤sup_k≥ n(ν⊖μ)(θ_k) by Proposition <ref> (iii), we have ∞>lim_n→∞(ν⊖μ)(θ_n)=lim_n→∞sup_k≥ n(ν⊖μ)(θ_k)≥lim sup_n→∞(ν⊖μ)(θ_n). Set l:= lim sup_n→∞ ν⊖μ(θ_n). For ϵ >0, we consider n_0∈ such that sup_k≥ n_0ν⊖μ(θ_k)≤ l+ϵ. Then for k≥ n_0, θ_k∈_l+2ϵ(μ,ν), and therefore θ_∞ = lim inf_k≥ n_0θ_k∈_l+2ϵ(μ,ν), implying ν⊖μ(θ)≤ l+2ϵ⟶ l, as ϵ→ 0. Finally, lim inf_n→∞(ν⊖μ)(θ_n)≥ν⊖μ(θ_∞).§.§ The dual irreducible convex paving Our final ingredient is the following measurement of subsets K⊂^d:G(K) := (K) + g_K(K)  g_K(dx) := e^-1/2|x|^2/(2π)^1/2 K λ_K(dx),Notice that 0 ≤ G≤ d+1 and, for any convex subsets C_1⊂ C_2 of ^d, we have G(C_1) = G(C_2)C_1 = C_2       C_1 =C_2.For θ∈_+^0(Ω),A∈(^d), we introduce the following map from ^d to the setof all relatively open convex subsets of ^d:K_θ,A(x):=_x(θ(x,·)∖ A)=_X(θ+∞ 1_^d A), for all x∈^d. 
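For intuition on the measurement G, here is a one-dimensional worked instance (the sets are our own choice):

\[ G\big((0,1)\big) = 1 + \int_0^1 \frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx \approx 1.3413, \qquad G\big((0,2)\big) = 1 + \int_0^2 \frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx \approx 1.4772. \]

Both values lie in [0, d+1] = [0, 2], and G strictly increases along the inclusion (0,1) ⊂ (0,2), illustrating (<ref>): two nested convex sets can have the same G-value only if they have the same closure.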
We recall that a function is universally measurable if it is measurable with respect to every complete probability measure that measures all Borel subsets. For θ∈_+^0(Ω) and A∈(^d), we have: (i)θ(X,·):^d⟼, _Xθ:^d⟼, and K_θ,A:^d⟼ are universally measurable; (ii)G:⟶ is Borel measurable; (iii) if A∈_ν, and θ∈(μ,ν), then up to a modification on a μ-null set, K_θ,A(^d)⊂ is a partition of ^d with x∈ K_θ,A(x) for all x∈^d. The proof is reported in Subsections <ref> for (iii), <ref> for (ii), and <ref> for (i). The following property is a key-ingredient for our dual decomposition in irreducible convex paving.For all (θ,N_ν)∈(μ,ν)×_ν, we have the inclusion Y∈ K_θ,N_ν(X), (μ,ν)-q.s.For an arbitrary ∈(μ,ν), we have by Proposition <ref> that [θ]<∞. Then, [θ∖(^d× N_ν)] = 1 i.e. [Y∈ D_X] = 1 where D_x := (θ(x,·)∖ N_ν). By the martingale property of , we deduce that*̱X= ^[Y1_Y∈ D_X|X] = (1-Λ)E_K + Λ E_D,   μ-* Where Λ := _X[Y∈ D_X∖K_θ,N_ν(X)], E_D := ^P_X[Y|Y∈ D_X∖K_θ,N_ν(X)], E_K := ^_X[Y|Y∈K_θ,N_ν(X)], and _X is the conditional kernel to X of . We have E_K∈_XD_X⊂ D_X and E_D∈ D_X∖_XD_X because of the convexity of D_X∖_XD_X given by Proposition <ref> (ii) (D_X is convex). The lemma also gives that if Λ≠ 0, then ^[Y|X]=Λ E_D + (1-Λ) E_K∈ D_X∖K_θ,N_ν(X). This implies that*̱{Λ≠ 0} ⊂ {^[Y|X]∈ D_X∖K_θ,N_ν(X)} ⊂ {^[Y|X]∉K_θ,N_ν(X)}⊂ {^[Y|X]≠ X}.* Then [Λ≠ 0] = 0, and therefore [Y∈ D_X∖K_θ,N_ν(X)] = 0. Since [Y∈ D_X] = 1, this shows that [Y∈K_θ,N_ν(X)] = 1. In view of Proposition <ref> and Lemma <ref> (iii), we introduce the following optimization problem which will generate our irreducible convex paving decomposition: (θ,N_ν)∈(μ,ν)_νinfμ[G(K_θ,N_ν)]. The following result gives another possible definition for the irreducible paving. (i) We may find a μ-a.s. unique universally measurable minimizer K:=K_θ,N_ν:^d→ of (<ref>), for some (θ,N_ν)∈(μ,ν)_ν;(ii) for all θ∈(μ,ν) and N_ν∈_ν, we have K(X)⊂ K_θ,N_ν(X), μ-a.s; (iii) we have the equality K(X) = I(X), μ-a.s. In item (i), the measurability of I is induced by Lemma <ref> (i). Existence and uniqueness, together with (ii), are proved in Subsection <ref>. finally, the proof of (iii) is reported in Subsection <ref>, and is a consequence of Theorem <ref> below. Proposition <ref> provides a characterization of the irreducible convex paving by means of an optimality criterion on ((μ,ν),_ν). We illustrate how to get the components from optimization Problem (<ref>) in the case of Example <ref>. A (μ,ν) function minimizing this problem (with N_ν:=∅∈_ν) is θ:=lim inf_n →∞_p_nf_n, where f_n := n f, p_n:=n p for some p∈∂ f, andf(x):=(x,(y_1,y_-1))+(x,(y_1,y_2))+(x,(y_2,y_-1)).One can easily check that μ[f] = ν[f] for any n≥ 1: f,f_n∈_0. These functions separate I(x_0), I(x_1) and (I(x_0)∪ I(x_1))^c.Notice that in this example, we may as well take θ := 0, and N_ν := {y_-1,y_0,y_1,y_2}^c, which minimizes the optimization problem as well. §.§ Structure of polar sets Let θ∈(μ,ν), we denote the set valued map J_θ(X):= θ(X,·)∩J̅(X), where J̅ is introduced in Proposition <ref>. Let θ∈(μ,ν), up to a modification on a μ-null set, we have Y∈ J_θ(X), (μ,ν)-  J⊂ J_θ⊂J̅,   J_θ  I(x),    x∈^d. These claims are a consequence of Proposition <ref> together with Lemma <ref>.Our second main result shows the importance of these set-valued maps: A Borel set N∈(Ω) is (μ,ν)-polar if and only ifN ⊂{X∈ N_μ}∪{Y∈ N_ν}∪{Y∉ J_θ(X)},for some (N_μ,N_ν)∈_μ_ν and θ∈(μ,ν).The proof is reported in Section <ref>. 
This Theorem is an extension of the one-dimensional characterization of polar sets given by Theorem 3.2 in <cit.>, indeed in dimension one J = J_θ = J̅ by Proposition <ref>, together with the inclusion in Remark <ref>.We conclude this section by reporting a duality result which will be used for the proof of Theorem <ref>. We emphasize that the primal objective of the accompanying paper De March <cit.> is to push further this duality result so as to be suitable for the robust superhedging problem in financial mathematics. Let c:^d×^d⟶_+, and consider the martingale optimal transport problem:_μ,ν(c) := sup_∈(μ,ν)[c].Notice from Proposition <ref> (i) that _μ,ν(θ) ≤ν⊖μ(θ) for all θ∈. We denote by _μ,ν^mod(c) the collection of all (φ,ψ,h,θ) in Ł^1_+(μ)Ł^1_+(ν)Ł^0(^d,^d)(μ,ν) such that *̱_μ,ν(θ) = ν⊖μ(θ), φ⊕ψ+h^⊗+θ ≥c,  {Y∈ K_θ,{ψ = ∞}(X)}. * The last inequality is an instance of the so-called robust superhedging property. The dual problem is defined by: *̱_μ,ν^mod(c):= (φ,ψ,h,θ)∈_μ,ν^mod(c)inf μ[φ]+ν[ψ]+ν⊖μ(θ). * Notice that for any measurable function c:Ω⟶_+, any ∈(μ,ν), and any (φ,ψ,h,θ)∈_μ,ν^mod(c), we have [c]≤μ[φ]+ν[ψ]+[θ]≤μ[φ]+ν[ψ]+_μ,ν(θ), as a consequence of the above robust superhedging inequality, together with the fact that Y∈ K_θ,{ψ = ∞}(X), (μ,ν)-q.s. by Proposition <ref> This provides the weak duality:_μ,ν(c) ≤ ^mod_μ,ν(c). The following result states that the strong duality holds for upper semianalytic functions. We recall that a function f:^d→ is upper semianalytic if {f≥ a} is an analytic set for any a∈. In particular, a Borel function is upper semianalytic. Let c:Ω→_+ be upper semianalytic. Then we have (i)_μ,ν(c) = _μ,ν^mod(c); (ii) If in addition _μ,ν(c)<∞, then existence holds for the dual problem _μ,ν^mod(c).By allowing h to be infinite in some directions, orthogonal to K_θ,{ψ = ∞}(X), together with the convention ∞-∞ = ∞, we may reformulate the robust superhedging inequality in the dual set as φ⊕ψ+h^⊗+θ≥ c pointwise.§.§ One-dimensional tangent convex functions For an interval J⊂, we denote (K) the set of convex functions on K.Let d=1, then*̱(μ,ν) = {∑_k1_{X∈ I_k}_p_k f_k: f_k∈(J_k), p_k∈∂ f_k, ∑_k(ν_k-μ_k)[f_k]<∞},*(μ,ν)-q.s. Furthermore, for all such θ∈(μ,ν) and its corresponding (f_k)_k, we haveν⊖μ(θ) = ∑_k(ν_k-μ_k)[f_k].As all functions we consider are null on the diagonal, equality on ∪_kI_k J_k implies (μ,ν)-q.s. equality by Theorem 3.2 in <cit.>. Letbe the set on the right hand side.Step 1: We first show ⊂, for a≥ 0, we denote _a:={θ∈:∑_k(ν_k-μ_k)[f_k]≤ a}. Notice that _a contains (_a) modulo (μ,ν)-q.s. equality. We intend to prove that _a is -Fatou closed, so as to conclude that _a⊂_a, and therefore (μ,ν)⊂ by the arbitrariness of a≥ 0. Let θ_n=∑_k1_{X∈ I_k}_p_k^n f^n_k∈_a converging . By Proposition <ref>, θ_n⟶θ:=θ_∞, . For k≥ 1, let x_k∈ I_k be such that θ_n(x_k,·)⟶θ(x_k,·) on _x_kθ, and set f_k:=θ(x_k,·). By Proposition 5.5 in <cit.>, f_k is convex on I_k, finite on J_k, and we may find p_k∈∂ f_k such that for x∈ I_k, θ(x,·)=_p_kf_k(x,·). Hence, θ=∑_k1_{X∈ I_k}_p_k f_k, and ∑_k(ν_k-μ_k)[f_k]≤ a by Fatou's Lemma, implying that θ∈_a, as required. Step 2: To prove the reverse inclusion ⊃, let θ=∑_k1_{X∈ I_k}_p_k f_k∈. Let f_k^ϵ be a convex function defined by f_k^ϵ:=f_k on J_k^ϵ = J_k∩{x∈J_k:(x, J^c_k)≥ϵ}, and f_k^ϵ affine on ∖ J_k^ϵ. Set ϵ_n:=n^-1, f̅_n = ∑_k=1^nf_k^ϵ_n, and define the corresponding subgradient in ∂f̅_n:*̱p̅_n:=p_k+∇(f̅_n-f^_n_k)    J_k^_n, k≥ 1,p̅_n := ∇f̅_n    ∖(∪_k J_k^_n).* We have (ν-μ)[f̅_n]=∑_k=1^n(ν_k-μ_k)[f_k^ϵ_n]≤∑_k(ν_k-μ_k)[f_k]<∞. 
By definition, we see that _p̅_nf̅_n converges to θ pointwise on ∪_k(I_k)^2 and to θ_*(x,y):=lim inf_→ yθ(x,) on ∪_kI_k× I_k where, using the convention ∞-∞ =∞, θ':=θ-θ_*≥ 0, and θ'=0 on ∪_k(I_k)^2. For k≥ 1, set Δ^l_k:=θ'(x_k,l_k), and Δ^r_k:=θ'(x_k,l_k) where I_k = (l_k,r_k), and we fix some x_k∈ I_k. For positive ϵ<r_k-l_k/2, and M≥ 0, consider the piecewise affine function g_k^ϵ,M with break points l_k+ϵ and r_k-ϵ, and: g_k^ϵ,M(l_k) = M∧Δ_k^l,  g_k^ϵ,M(r_k) = M∧Δ_k^r,   g_k^ϵ,M(l_k+ϵ) = 0,  g_k^ϵ,M(r_k-ϵ) = 0.Notice that g_k^ϵ,M is convex, and converges pointwise to g_k^M:=M∧θ'(l_k+r_k/2,·) on J_k, as ϵ→ 0, with*̱(ν_k-μ_k)(g_k^M)= ν_k[{l_k}](M∧Δ_k^l)+ν_k[r_k](M∧Δ_k^r)≤(ν_k-μ_k)[f_k]-(ν_k-μ_k)[(f_k)_*] ≤ (ν_k-μ_k)[f_k],* where (f_k)_* is the lower semi-continuous envelop of f_k. Then by the dominated convergence theorem, we may find positive ϵ_k^n,M<r_k-l_k/2n such that*̱ (ν_k-μ_k)(g_k^ϵ_k^n,M,M) ≤ (ν_k-μ_k)[f_k] +2^-k/n. * Now let g̅_n = ∑_k=1^ng_k^ϵ_k^n,n,n, and p̅_n'∈∂g̅_n. Notice that _p̅_n'g_n⟶θ' pointwise on ∪_kI_k× J_k, furthermore, (ν-μ)(g̅_n)≤∑_k(ν_k-μ_k)[f_k]+1/n≤∑_k(ν_k-μ_k)[f_k]+1<∞.Then we have θ_n:=_p̅_nf̅_n+_p̅_n'g̅_n converges to θ pointwise on ∪_kI_k× J_k, and therefore (μ,ν)-q.s. by Theorem 3.2 in <cit.>. Since (ν-μ)(f̅_n+g̅_n) is bounded, we see that (θ_n)_n≥ 1⊂(_a) for some a≥ 0. Notice that θ_n may fail to converge . However, we may use Proposition <ref> to get a sequence θ_n∈(θ_k,k≥ n), and θ_∞∈Θ_μ such that θ_n ⟶θ_∞,as n→∞, and satisfies the same (μ,ν)-q.s. convergence properties than θ_n. Then θ_∞∈(μ,ν), and θ_∞ = θ, (μ,ν)-q.s. § THE IRREDUCIBLE CONVEX PAVING§.§ Existence and uniquenessProof of Theorem <ref> (i) The measurability follows from Lemma <ref>. We first prove the existence of a minimizer for the problem (<ref>). Let m denote the infimum in (<ref>), and consider a minimizing sequence (θ_n,N_ν^n)_ n∈⊂(μ,ν)_ν with μ[G(K_θ_n,N^n_ν)]≤ m+1/n. By possibly normalizing the functions θ_n, we may assume that ν⊖μ(θ_n)≤ 1. Set*̱ θ := ∑_n≥ 1 2^-nθ_n and N_ν :=∪_n≥ 1 N_ν^n∈_ν. * Notice that θ is well-defined as the pointwise limit of a sequence of the nonnegative functions θ_N:=∑_n≤ N2^-nθ_n. Since ν⊖μ[θ_N]≤∑_n≥ 12^-n<∞ by convexity of ν⊖μ, θ_N⟶θ, pointwise, and θ∈(μ,ν) by Lemma <ref>, since any convex extraction of (θ_n)_n≥ 1 still converges to θ. Since θ^-1_n({∞})⊂θ^-1({∞}), it follows from the definition of N_ν thatm+1/n≥μ[G(K_θ_n,N^n_ν)] ≥ μ[G(K_θ,N_ν)], hence μ[G(K_θ,N_ν)]= m as θ∈(μ,ν), N_ν∈_ν.(ii) For an arbitrary (θ,N_ν)∈(μ,ν)×_ν, we define θ̅:=θ+θ∈(μ,ν) and N̅_ν:=N_ν∪ N_ν, so that K_θ̅,N̅_ν⊂ K_θ,N_ν. By the non-negativity of θ and θ, we have m≤μ[G(K_θ̅,N̅_ν)]≤μ[G(K_θ,N_ν)] = m. Then G(K_θ̅,N̅_ν)=G(K_θ,N_ν), μ-a.s. By (<ref>), we see that, μ-a.s. K_θ̅,N̅_ν=K_θ,N_ν and K_θ̅,N̅_ν=K_θ,N_ν=I. This shows that I⊂ K_θ,N_ν, μ-a.s.§.§ Partition of the space in convex components This section is dedicated to the proof of Lemma <ref> (iii), which is an immediate consequence of Proposition <ref> (ii). Let θ∈(μ,ν), and A∈(^d). We may find N_μ∈_μ such that: (i) for all x_1,x_2∉ N_μ with K_θ,A(x_1)∩ K_θ,A(x_2)≠∅, we have K_θ,A(x_1)=K_θ,A(x_2);(ii) if A∈_ν, then x∈K_θ,A(x) for x∉ N_μ, and up to a modification of K_θ,A on N_μ, K_θ,A(^d) is a partition of ^d such that x∈ K_θ,A(x) for all x∈^d.(i) Let N_μ be the μ-null set given by Proposition <ref> for θ. For x_1,x_2∉ N_μ, we suppose that we may find ∈ K_θ,A(x_1)∩ K_θ,A(x_2). Consider y∈ K_θ,A(x_1). As K_θ,A(x_1) is open in its affine span, y':=+ ϵ/1-ϵ(-y)∈ K_θ,A(x_1) for 0<ϵ<1 small enough. 
Then = ϵ y+(1-ϵ)y', and by Proposition <ref>, we getϵθ(x_1,y)+(1-ϵ)θ(x_1,y')-θ(x_1,)=ϵθ(x_2,y)+(1-ϵ)θ(x_2,y')-θ(x_2,)By convexity of _x_iθ, K_θ,A(x_i)⊂_x_iθ⊂θ(x_i,·). Then θ(x_1,y'), θ(x_1,), θ(x_2,y'), and θ(x_2,) are finite and*̱θ(x_1,y)<∞θ(x_2,y)<∞.* Therefore K_θ,A(x_1)∩θ(x_1,·)⊂θ(x_2,·). We have obviously K_θ,A(x_2)∩θ(x_2,·)⊂θ(x_2,·) as well. Subtracting A, we get( K_θ,A(x_1)∩θ(x_1,·)∖ A)∪( K_θ,A(x_2)∩θ(x_2,·)∖ A)⊂θ(x_2,·)∖ A.Taking the convex hull and using the fact that the relative face of a set is included in itself, we see that (K_θ,A(x_1)∪ K_θ,A(x_2))⊂(θ(x_2,·)∖ A). Notice that, as K_θ,A(x_2) is defined as the x_2-relative face of some set, either x_2∈ K_θ,A(x) or K_θ,A(x)=∅ by the properties of _x_2. The second case is excluded as we assumed that K_θ,A(x_1)∩ K_θ,A(x_2)≠∅. Therefore, as K_θ,A(x_1) and K_θ,A(x_2) are convex sets intersecting in relative interior points and x_2∈ K_θ,A(x_2), it follows from Lemma <ref> that x_2∈ (K_θ,A(x_1)∪ K_θ,A(x_2)). Then by Proposition <ref> (ii),*̱_x_2(K_θ,A(x_1)∪ K_θ,A(x_2)) =(K_θ,A(x_1)∪ K_θ,A(x_2)) = (K_θ,A(x_1)∪ K_θ,A(x_2)).* Then, we have (K_θ,A(x_1)∪ K_θ,A(x_2))⊂_x_2(θ(x_2,·)∖ A) = K_θ,A(x_2), as _x_2 is increasing. Therefore K_θ,A(x_1)⊂ K_θ,A(x_2) and by symmetry between x_1 and x_2, K_θ,A(x_1)= K_θ,A(x_2). (ii) We suppose that A∈_ν. First, notice that, as K_θ,A(X) is defined as the X-relative face of some set, either x∈ K_θ,A(x) or K_θ,A(x)=∅ for x∈^d by the properties of _x. Consider ∈(μ,ν). By Proposition <ref>, [Y∈ K_θ,A(X)]=1. As (_X)⊂ K_θ,A(X), μ-a.s., K_θ,A(X) is non-empty, which implies that x∈ K_θ,A(x). Hence, {X∈ K_θ,A(X)} holds outside the set N_μ^0:={(_X)⊄ I(X)}∈_μ. Then we just need to have this property to replace N_μ by N_μ∪ N_μ^0∈_μ.Finally, to get a partition of ^d, we just need to redefine K_θ,A on N_μ. If x∈x'∉ N_μ⋃K_θ,A(x') then by definition of N_μ, the set K_θ,A(x') is independent of the choice of x'∉ N_μ such that x∈ K_θ,A(x'): indeed, if x_1',x_2'∉ N_μ satisfy x∈ K_θ,A(x_1')∩ K_θ,A(x_2'), then in particular K_θ,A(x_1')∩ K_θ,A(x_2') is non-empty, and therefore K_θ,A(x_1')= K_θ,A(x_2') by (i). We set K_θ,A(x):=K_θ,A(x'). Otherwise, if x∉x'∉ N_μ⋃K_θ,A(x'), we set K_θ,A(x):={x} which is trivially convex and relatively open. With this definition, K_θ,A(^d) is a partition of ^d. § PROOF OF THE DUALITY For simplicity, we denote (ξ):=μ[φ]+ν[ψ]+ν⊖μ(θ), for ξ:=(φ,ψ,h,θ)∈^mod_μ,ν(c). §.§ Existence of a dual optimizerLet c,c_n:Ω⟶_+, and ξ_n∈^mod_μ,ν(c_n), n∈, be such thatc_n ⟶ c,  (ξ_n)⟶_μ,ν(c)<∞  n→∞.Then there exists ξ∈^mod_μ,ν(c) such that (ξ_n)⟶(ξ) as n→∞.Denote ξ_n := (φ_n,ψ_n, h_n,θ_n), and observe that the convergence of (ξ_n) implies that the sequence (μ(φ_n),ν(ψ_n),ν⊖μ(θ_n))_n is bounded, by the non-negativity of φ_n,ψ_n and ν⊖μ(θ_n). We also recall the robust superhedging inequalityφ_n⊕ψ_n + h_n^⊗ + θ_n ≥ c_n, on {Y∈ K_θ_n,{ψ_n = ∞}(X)}, n≥ 1. Step 1. By Komlòs Lemma together with Lemma <ref>, we may find a sequence (φ_n,ψ_n,θ_n)∈{(φ_k,ψ_k,θ_k),k≥ n} such that[ φ_n⟶φ:=φ_∞, μ-ψ_n⟶ψ:=ψ_∞, ν-; θ_n⟶θ:=θ_∞∈(μ,ν), μ⊗ pw. ]Set φ:=∞ and ψ:=∞ on the corresponding non-convergence sets, and observe that μ[φ]+ν[ψ]<∞, by the Fatou Lemma, and therefore N_μ:={φ=∞}∈_μ and N_ν:={ψ=∞}∈_ν. 
We denote by (h_n,c_n) the same convex extractions from {(h_k,c_k),k≥ n}, so that the sequence ξ_n:=(φ_n,ψ_n,h_n,θ_n) inherits from (<ref>) the robust superhedging property, as for θ_1,θ_2∈(μ,ν), ψ_1,ψ_2∈_+^1(^d), and 0<λ<1, we have K_λθ_1+(1-λ)θ_2,{λψ_1+(1-λ)ψ_2=∞}⊂ K_θ_1,{ψ_1=∞}∩ K_θ_2,{ψ_2=∞}:φ_n⊕ψ_n+θ_n+ h_n^⊗≥c_n≥ 0, pointwise onK_θ_n,{ψ_n = ∞}(X).Step 2. Next, notice that l_n:=(h_n^⊗)^-:=max(-h_n^⊗,0)∈Θ for all n∈. By the convergence Proposition <ref>, we may find convex combinations l_n:=∑_k≥ nλ_k^nl_k⟶ l:= l_∞, μ⊗ pw. Updating the definition of φ by setting φ:=∞ on the zero μ-measure set on which the last convergence does not hold on (∂^x l)^c, it follows from (<ref>), and the fact that K_θ̅,{ψ=∞}⊂lim inf_n→∞ K_θ_n,{ψ_n = ∞}, thatl=l_∞≤lim inf_n ∑_k≥ nλ_k^n(φ_k⊕ψ_k+θ_k)≤φ⊕ψ+θ̅, {Y∈ K_θ̅,{ψ=∞}(X)}. where θ̅:=lim inf_n∑_k≥ nλ_k^nθ_k∈(μ,ν). As {φ = ∞}∈_μ, by possibly enlarging N_μ, we assume without loss of generality that {φ = ∞}⊂ N_μ, we see that l⊃ (N_μ^c× N_ν^c) ∩θ̅∩{Y∈ K_θ̅,{ψ=∞}(X)}, and thereforeK_θ̅,{ψ=∞}(X)⊂_X l'⊂l'(X,·),  μStep 3. Let h_n:=∑_k≥ nλ_k^n h_k. Then b_n:=h^⊗_n+l_n =∑_k≥ nλ_k^n (h_k^⊗)^+ defines a non-negative sequence in Θ. By Proposition <ref>, we may find a sequence b_n=:h_n^⊗+l_n∈(b_k,k≥ n) such that b_n⟶ b:=b_∞, μ⊗ pw, where b takes values in [0,∞]. b_n(X,·)⟶ b(X,·) pointwise on _X b, μ-a.s. Combining with (<ref>), this shows that*̱h_n^⊗(X,·)⟶(b-l)(X,·)_X b∩ K_θ̅,{ψ=∞}(X),  μ-* (b-l)(X,·)=lim inf_nh_n^⊗(X,·), pointwise on K_θ̅,{ψ=∞}(X) (where l is a limit of l_n), μ-a.s. Clearly, on the last convergence set, (b-l)(X,·)>-∞ on K_θ̅,{ψ=∞}(X), and we now argue that (b-l)(X,·)<∞ on K_θ̅,{ψ=∞}(X), therefore K_θ̅,{ψ=∞}(X)⊂_X b, so that we deduce from the structure of h_n^⊗ that the last convergence holds also on K_θ̅,{ψ=∞}(X):h_n^⊗(X,·)⟶(b-l)(X,·)=:h^⊗(X,·)K_θ̅,{ψ=∞}(X),  μ- Indeed, let x be an arbitrary point of the last convergence set, and consider an arbitrary y∈ K_θ̅,{ψ=∞}(x). By the definition of K_θ̅,{ψ=∞}, we have x∈ K_θ̅,{ψ=∞}(x), and we may therefore find y'∈ K_θ̅,{ψ=∞}(x) with x=py+(1-p)y' for some p∈(0,1). Then, p h_n^⊗(x,y)+(1-p)h_n^⊗(x,y')=0. Sending n→∞, by concavity of the lim inf, this provides p(b-l)(x,y)+(1-p)(b-l)(x,y')≤0, so that (b-l)(x,y')>-∞ implies that (b-l)(x,y)<∞. Step 4. Notice that by dual reflexivity of finite dimensional vector spaces, (<ref>) defines a unique h(X) in the vector space K_θ̅,{ψ=∞}(X)-X, such that (b-l)(X,·)=h^⊗(X,·) on K_θ̅,{ψ=∞}(X). At this point, we have proceeded to a finite number of convex combinations which induce a final convex combination with coefficients (λ̅_n^k)_k≥ n≥ 1. Denote ξ̅_n:=∑_k≥ nλ̅_n^kξ_k, and set θ:=θ̅_∞. Then, applying this convex combination to the robust superhedging inequality (<ref>), we obtain by sending n→∞ that (φ⊕ψ+h^⊗+θ)(X,·)≥ c(X,·) on K_θ̅,{ψ=∞}(X), μ-a.s. and φ⊕ψ+h^⊗+θ=∞ on the complement μ null-set. As θ is the liminf of a convex extraction of (θ_n), we have θ≥θ_∞=θ̅, and therefore K_θ,{ψ=∞}⊂ K_θ̅,{ψ=∞}. This shows that the limit point ξ:=(φ,ψ,h,θ) satisfies the pointwise robust superhedging inequalityφ⊕ψ+θ+ h^⊗≥ c, on {Y∈ K_θ,{ψ=∞}(X)}. Step 5. By Fatou's Lemma and Lemma <ref>, we have μ[φ]+ν[ψ]+ν⊖μ[θ]≤lim inf_n μ[φ_n]+ν[ψ_n]+ν⊖μ[θ_n]=_μ,ν(c).By (<ref>), we have μ[φ]+ν[ψ]+[θ]≥[c] for all ∈(μ,ν). Then, μ[φ]+ν[ψ]+_μ,ν[θ]≥_μ,ν[c]. By Proposition <ref> (i), we have _μ,ν[θ]≤ν⊖μ[θ], and therefore _μ,ν[c]≤μ[φ]+ν[ψ]+_μ,ν[θ] ≤μ[φ]+ν[ψ]+ν⊖μ[θ]≤_μ,ν(c),by (<ref>). 
Then we have (ξ)=μ[φ]+ν[ψ]+ν⊖μ[θ] = _μ,ν(c) and _μ,ν[θ]=ν⊖μ[θ], so that ξ∈^mod_μ,ν(c).§.§ Duality result We first prove the duality in the lattice _b of bounded upper semicontinuous fonctions Ω⟶_+. This is a classical result using the Hahn-Banach Theorem, the proof is reported for completeness. Let f∈_b, then _μ,ν(f) = ^mod_μ,ν(f) We have _μ,ν(f) ≤^mod_μ,ν(f) by weak duality (<ref>), let us now show the converse inequality _μ,ν(f)≥^mod_μ,ν(f). By standard approximation technique, it suffices to prove the result for bounded continuous f. We denote by _l(^d) the set of continuous mappings ^d→ with linear growth at infinity, and by _b(^d,^d) the set of continuous bounded mappings ^d⟶^d. Define(f) :={ (φ̅,ψ̅,h̅)∈_l(^d)_l(^d)_b(^d,^d): φ̅⊕ψ̅+h̅^⊗≥ f },and the associated _μ,ν(f) :=inf_(φ̅,ψ̅,h̅)∈(f)μ(φ̅)+ν(ψ̅). By Theorem 2.1 in Zaev <cit.>, and Lemma <ref> below, we have*̱_μ,ν(f)= _μ,ν(f) = inf_(φ̅,ψ̅,h̅)∈(f)μ(φ̅)+ν(ψ̅) ≥ ^mod_μ,ν(f),* which provides the required result. Proof of Theorem <ref> The existence of a dual optimizer follows from a direct application of the compactness Lemma <ref> to a minimizing sequence of robust superhedging strategies.As for the extension of duality result of Lemma <ref> to non-negative upper semi-analytic functions, we shall use the capacitability theorem of Choquet, similar to <cit.> and <cit.>. Let [0,∞]^Ω denote the set of all nonnegative functions Ω→ [0,∞], and _+ the sublattice of upper semianalytic functions. Note that _b is stable by infimum.Recall that a _b-capacity is a monotone map 𝐂:[0,∞]^Ω⟶ [0,∞], sequentially continuous upwards on [0,∞]^Ω, and sequentially continuous downwards on _b. The Choquet capacitability theorem states that a _b-capacity 𝐂 extends to _+ by:*̱ C(f) = sup{ C(g): g∈_b  g≤ f}f∈_+.* In order to prove the required result, it suffices to verify that _μ,ν and ^mod_μ,ν are _b-capacities.As (μ,ν) is weakly compact, it follows from similar argument as in Prosposition 1.21, and Proposition 1.26 in Kellerer <cit.> that _μ,ν is a _b-capacity. We next verify that ^mod_μ,ν is a _b-capacity. Indeed, the upwards continuity is inherited from _μ,ν together with the compactness lemma <ref>, and the downwards continuity follows from the downwards continuity of _μ,ν together with the duality result on _b of Lemma <ref>. Let c:Ω→_+, and (φ̅,ψ̅,h̅)∈(c). Then, we may find ξ∈^mod_μ,ν(c) such that (ξ) = μ[φ̅]+ν[ψ̅].Let us consider (φ̅,ψ̅,h̅)∈(c). Then φ̅⊕ψ̅+h̅^⊗≥ c ≥ 0, and thereforeψ̅(y)≥ f(y) := x∈^dsup-φ̅(x)-h̅(x)·(y-x).Clearly, f is convex, and f(x)≥ -φ̅(x) by taking value x=y in the supremum. Hence ψ̅-f≥ 0 and φ̅+f≥ 0, implying in particular that f is finite on ^d.As φ̅ and ψ̅ have linear growth at infinity, f is in ^1(ν)∩^1(μ). We have f∈_a for a=ν[f]-μ[f]≥ 0. Then we consider p∈∂ f and denote θ:=_pf. θ∈(_a)⊂(μ,ν). Then denoting φ:=φ̅+f, ψ:=ψ̅-f, and h:=h̅+p, we have ξ:=(φ,ψ, h,θ)∈_μ,ν^mod(c) andμ[φ̅]+ν[ψ̅] = μ[φ]+ν[ψ] + (ν-μ)[f]=μ[φ]+ν[ψ] + ν⊖μ[θ] = (ξ). § POLAR SETS AND MAXIMUM SUPPORT MARTINGALE PLAN§.§ Boundary of the dual paving Consider the optimization problems: (θ,N_ν)∈(μ,ν)_νinfμ[G(R_θ,N_ν)],  R_θ,N_ν:=(θ(X,·)∩∂K(X)∩ N_ν^c),and for all y∈^d we consider(θ,N_ν)∈(μ,ν)_νinf μ[y∈∂K(X)∩θ(X,·)∩ N_ν^c ]. These problems are well defined by the following measurability result, whose proof is reported in Subsection <ref>. Let F:^d⟶, γ-measurable. Then we may find N_γ∈_γ such that 1_Y∈ F(X)1_X∉ N_γ is Borel measurable, and if X∈ F(X) convex, γ-a.s., then 1_Y∈∂ F(X)1_X∉ N_γ is Borel measurable as well. 
By the same argument as that of the proof of existence and uniqueness in Proposition <ref>, we see that the problem (<ref>) (resp. (<ref>) for y∈^d) has an optimizer (θ^*,N^*_ν)∈(μ,ν)×_ν (resp. (θ^*_y,N^*_ν,y)∈(μ,ν)×_ν). Furthermore, D:=R_θ^*,N_ν^* (resp. D_y(x):={y} if y∈∂K(x)∩θ^*_y(x,·)∩ N_ν,y^*, and ∅ otherwise, for x∈^d) does not depend on the choice of (θ^*,N^*_ν) (resp. θ_y^*) up to a μ-negligible modification. We define K̅:=D∪K, and K_θ(X) := θ(X,·)∩K̅(X) for θ∈(μ,ν). Notice that if y∈^d is not an atom of ν, we may choose N_ν,y containing y, which means that Problem (<ref>) is non-trivial only if y is an atom of ν. We denote by atom(ν) the (at most countable) set of atoms of ν, and define the mapping K:= (∪_y∈ atom(ν)D_y)∪K.

Let θ∈(μ,ν). Up to a modification on a μ-null set, we have
(i) K̅ is convex valued; moreover Y∈K̅(X) and Y∈ K_θ(X), (μ,ν)-q.s.
(ii) K⊂K⊂ K_θ⊂K̅⊂K,
(iii) K, K_θ, and K̅ are constant on K(x), for all x∈^d.

(i) For x∈^d, K̅(x)=D(x)∪K(x). Let y_1,y_2∈K̅(x), λ∈ (0,1), and set y:=λ y_1+(1-λ)y_2. If y_1,y_2∈K(x), or y_1,y_2∈ D(x), we get y∈K̅(x) by convexity of K(x), or of D(x). Now, up to switching the indices, we may assume that y_1∈K(x) and y_2∈ D(x)∖K(x). As D(x)∖K(x)⊂∂K(x), y∈K(x), as λ>0. Then y∈K̅(x). Hence, K̅ is convex valued. Since θ^*(X,·)∖ N_ν^*∩K∖K⊂ R_θ^*,N_ν^*, we have θ^*(X,·)∖ N_ν^*∩K⊂ R_θ^*,N_ν^*∪K = K̅. Then, as Y∈θ^*(X,·)∖ N_ν^* and Y∈K(X), we get Y∈K̅(X), (μ,ν)-q.s. Let θ∈(μ,ν); then Y∈θ(X,·), (μ,ν)-q.s. Finally, we get Y∈θ(X,·)∩K̅(X)=K_θ(X), (μ,ν)-q.s.

(ii) As R_θ,N_ν(X)⊂∂K(X) = K(X), K̅⊂K. By definition, K_θ⊂K̅, and K⊂K. For y∈ atom(ν) and θ_0∈(μ,ν), by minimality, D_y(X)⊂θ_0(X,·)∩∂K(X), μ-a.s. Applying (<ref>) for θ_0 = θ, we get D_y⊂θ(X,·), and for θ_0 = θ^*, D_y(X)⊂K̅(X), μ-a.s. Taking the countable union: K⊂ K_θ, μ-a.s. (this is the only inclusion that is not pointwise). We then modify K on the corresponding μ-null set so that this inclusion holds pointwise.

(iii) For θ_0∈(μ,ν), let N_μ∈_μ be the set from Proposition <ref>. Let x∈ N_μ^c, y∈∂K(x), and y':= (x+y)/2∈K(x). Then, for any other x'∈K(x)∩ N_μ^c, 1/2θ_0(x,y)-θ_0(x,y') = 1/2θ_0(x',x)+1/2θ_0(x',y)-θ_0(x',y'); in particular, y∈θ(x,·) if and only if y∈θ(x',·). Applying this result to θ, θ^*, and θ^*_y for all y∈ atom(ν), we get N_μ such that for any x∈^d, K̅, K_θ, and K are constant on K(x)∩ N_μ^c. To get this pointwise, we redefine these mappings to this constant value on K(x)∩ N_μ, or to K(x) if K(x)∩ N_μ^c = ∅. The previous properties are preserved.
Then by (<ref>), 1_N =0 on ({φ =∞}^c{ψ = ∞}^c)∩{Y∈θ(X,·)∩ K_θ,{ψ=∞}(X)}, and therefore by (<ref>), N⊂{X∈ N_μ}∪{Y∈ N_ν}∪{Y∉ K_θ(X)}.§.§ The maximal support probability In order to prove the existence of a maximum support martingale transport plan, we introduce the maximization problem.M:=∈(μ,ν)supμ[G(_X)]. where we rely on the following measurability result whose proof is reported in Subsection <ref>. For ∈(Ω), the map _X is analytically measurable, and the map (_X|_∂K(X)) is μ-measurable. Now we prove a first Lemma about the existence of a maximal support probability. There exists ∈(μ,ν) such that for all ∈(μ,ν) we have the inclusion _X⊂ _X, μ-a.s.We proceed in two steps: Step 1: We first prove existence for the problem <ref>. Let (^n)_n≥ 1⊂(μ,ν) be a maximizing sequence. Then the measure := ∑_n≥ 12^-n ^n∈(μ,ν), and satisfies ^n_X⊂_X for all n≥ 1. Consequently μ[G(_X^n_X)]≤μ[G(_X)], and therefore M=μ[G(_X)]. Step 2: We next prove that _X⊂_X, μ-a.s. for all ∈(μ,ν). Indeed, the measure :=+/2∈(μ,ν) satisfies M≥μ[G (_X)]≥μ[G (_X)] = M, implying that G(_X)=G(_X), μ-a.s. The required result now follows from the inclusion _X⊂_X.Proof of Proposition <ref> (iii) Let ∈(μ,ν) from Lemma <ref>, if we denote S(X):=_X, then we have (_X)⊂ S(X), μ-a.s. Then {Y∉ S(X)} is (μ,ν)-polar. By Lemma <ref>, {Y∉ S(X)}∪{X∉ N_μ'} is Borel for some N_μ'∈_μ. By Theorem <ref>, we see that {Y∉ S(X)}⊂{Y∉ S(X)}∪{X∉ N_μ'}⊂{X∈ N_μ}∪{Y∈ N_ν}∪{Y∉ K_θ(X)}, and therefore*̱{Y∈ S(X)} ⊃ {X∉ N_μ}∩{Y∈ K_θ(X)∖ N_ν},* for some N_μ∈_μ, N_ν∈_ν, and θ∈(μ,ν). The last inclusion implies that K_θ(X)∖ N_ν⊂ S(X), μ-a.s. However, by Proposition <ref> (ii), K(X)⊂(θ(X,·)∖ N_ν), μ-a.s. Then, since S(X) is closed and convex, we see that K(X)⊂ S(X). To obtain the reverse inclusion, we recall from Proposition <ref> (i) that {Y∈K(X)}, (μ,ν)-q.s. In particular [Y∈K(X)]=1, implying that S(X) ⊂K(X), μ-a.s. as K(X) is closed convex. Finally, recall that by definition I:= S and therefore K(X) =I(X), μ-a.s.We may choose ∈(μ,ν) in Theorem <ref> so that for all ∈(μ,ν) and y∈^d, μ[ _X[{y}]>0]≤μ[ _X[{y}]>0],   _X|_∂ I(X)⊂ _X|_∂ I(X), μ- In this case the set-valued maps J(X):= I(X)∪{y∈^d:ν[y]>0_X[{y}]>0}, and J̅(X):=I(X)∪ _X |_∂ I(X) are unique μ-a.s. Furthermore J(X)=K(X), J̅(X)=K̅(X), and J_θ(X) = K_θ(X), μ-a.s. for all θ∈(μ,ν).Step 1: By the same argument as in the proof of Lemma <ref>, we may find '∈(μ,ν) such that M':= ∈(μ,ν)supμ[G((_X|_∂K(X)))]=μ[G((_X'|_∂K(X)))]. We also have similarly that (_X|_∂K(X))⊂(_X'|_∂K(X)), μ-a.s. for all ∈(μ,ν). Then we prove similarly that S'(X):=(_X'|_∂K(X))=D(X), μ-a.s., where recall that D is the optimizer for (<ref>). Indeed, by the previous step, we have (_X|_∂K(X))⊂ S'(X), μ-a.s. Then {Y∉ S'(X)∪K(X)} is (μ,ν)-polar. By Theorem <ref>, we see that {Y∉ S'(X)∪K(X)}⊂{X∈ N_μ}∪{Y∈ N_ν}∪{Y∉ K_θ(X)∪K(X)}, or equivalently,{Y∈ S'(X)∪K(X)} ⊃ {X∉ N_μ}∩{Y∈ K_θ(X)∖ N_ν}, for some N_μ∈_μ, N_ν∈_ν, and θ∈(μ,ν). Similar to the previous analysis, we have K_θ(X)∖ N_ν∖K(X)⊂ S'(X), μ-a.s. Then, since S'(X) is closed and convex, we see that D(X)⊂ S'(X). To obtain the reverse inclusion, we recall from Proposition <ref> that {Y∈K̅(X)}, (μ,ν)-q.s. In particular '[Y∈K(X)∪ D(X)]=1, implying that S'(X) ⊂ D(X), μ-a.s. By Proposition <ref> (iii), we have J̅(X) = (I∪ S')(X)=(K∪ D)(X) = K̅(X), μ-a.s.Finally, +'/2 is optimal for both problems (<ref>), and (<ref>). By definition, the equality J_θ(X) = K_θ(X), μ-a.s. for θ∈(μ,ν) immediately follows. 
Step 2: Let y∈ atom(ν). If y is an atom of γ_1∈(^d) and of γ_2∈(^d), then y is an atom of λγ_1+(1-λ)γ_2 for all 0<λ<1. By the same argument as in Step 1, we may find ^y∈(μ,ν) such that M_y:= sup_∈(μ,ν) μ[_X[{y}∩K(X)]>0] = μ[_X^y[{y}∩K(X)]>0]. We denote S_y(X):=_X^y|_K(X)∩{y}. Recall that D_y is the notation for the optimizer of problem (<ref>). We consider the set N:={Y∉(K(X)∖{y})∪ S_y(X)}. N is polar, as Y∈K(X) q.s. and by the definition of S_y. Then N⊂{X∈ N_μ}∪{Y∈ N_ν}∪{Y∉ K_θ(X)}, or equivalently, {Y∈(K(X)∖{y})∪ S_y(X)} ⊃ {X∉ N_μ}∩{Y∈ K_θ(X)∖ N_ν}, for some N_μ∈_μ, N_ν∈_ν, and θ∈(μ,ν). Then D_y(X)⊂ K_θ(X)∖ N_ν⊂(K(X)∖{y})∪ S_y(X), μ-a.s. Finally, D_y(X)⊂ S_y(X), μ-a.s. On the other hand, S_y⊂ D_y, μ-a.s., since if _X^y[{y}]>0, we have θ(X,y)<∞, μ-a.s. at the corresponding points. Hence, D_y(X) = S_y(X), μ-a.s. Now, if we sum up the countably many optimizers for y∈ atom(ν) with the previous optimizers, then the probability we get is an optimizer for (<ref>), (<ref>), and (<ref>), for all y∈^d (the optimum is 0 if y is not an atom of ν). Furthermore, the μ-a.e. equality of the maps S_y and D_y for these countably many y∈ atom(ν) is preserved by this countable union; then, together with Proposition <ref> (iii), we get J = K, μ-a.s.

As a preparation to prove the main Theorem <ref>, we need the following lemma, which will be proved in Subsection <ref>. Let F:^d⟶ be a γ-measurable function for some γ∈(^d), such that x∈ F(x) for all x∈^d, and {F(x):x∈^d} is a partition of ^d. Then, up to a modification on a γ-null set, F can be chosen in addition to be analytically measurable.

Proof of Theorem <ref> Existence holds by Lemma <ref> above, (i) is a consequence of Lemma <ref>, and (ii) directly stems from Lemma <ref> (iii) together with Proposition <ref> (iii). Now we need to deal with the measurability issue. Lemma <ref> allows us to modify _X to get (ii) while preserving its analytic measurability; we denote by I its modification. However, we need to modify _X to get the result. As _X is analytically measurable by Lemma <ref>, the modification set N_μ:={_X≠ I(X)}∈_μ is analytically measurable. Then we may redefine _X on N_μ so as to preserve a kernel for . By the same arguments as in the proof of Lemma <ref> (ii), the measure-valued map κ_X:= g_I(X) is a kernel thanks to the analytic measurability of I (recall the definition of g_K given by (<ref>)). Furthermore, κ_X = I(X) pointwise by definition. Then a suitable kernel modification, from which the result follows, is given by _X':= 1_{X∈ N_μ}κ_X + 1_{X∉ N_μ}_X.

Proof of Proposition <ref> The existence and the uniqueness are given by Lemma <ref>, and the other properties follow from the identity between the J maps and the K maps, also given by the lemma, together with Proposition <ref>.

Proof of Theorem <ref> We simply apply Lemma <ref> to replace K_θ by J_θ in Proposition <ref>.

§ MEASURABILITY OF THE IRREDUCIBLE COMPONENTS

§.§ Measurability of G

Proof of Lemma <ref> (ii) As ^d is locally compact, the Wijsman topology is locally equivalent to the Hausdorff topology[The Hausdorff distance on the collection of all compact subsets of a compact metric space (, d) is defined by d_H(K_1,K_2)=sup_x∈|(x,K_1)-(x,K_2)|, for compact subsets K_1,K_2⊂.], i.e., as n→∞, K_n⟶ K for the Wijsman topology if and only if K_n∩ B_M⟶ K∩ B_M for the Hausdorff topology, for all M≥ 0. We first prove that K⟼ K is a lower semi-continuous map →. Let (K_n)_n≥ 1⊂ with dimension d_n≤ d'≤ d converge to K. We consider A_n:= K_n. As the A_n are affine spaces, each of them may be identified with a (d+1)-tuple of points.
Observe that the convergence of K_n allows us to choose this (d+1)-tuple to be bounded. Then, up to taking a subsequence, we may suppose that A_n converges to an affine subspace A of dimension at most d'. By continuity of the inclusion under the Wijsman topology, K⊂ A and K≤ A ≤ d'.

We next prove that the mapping K↦ g_K(K) is continuous on { K = d'} for 0≤ d'≤ d, which implies the required measurability. Let (K_n)_n≥ 1⊂ be a sequence with constant dimension d', converging to a d'-dimensional subset K in . Define A_n:= K_n and A:= K. A_n converges to A, since any accumulation point A' of A_n satisfies K⊂ A' and has the same dimension as A, implying that A'=A. Now we consider the map ϕ_n:A_n→ A, x↦ proj_A(x). For all M>0, it follows from the compactness of the closed ball B_M that ϕ_n converges uniformly to the identity on B_M as n→∞. Then ϕ_n(K_n)∩ B_M⟶ K∩ B_M as n→∞, and therefore _A[ϕ_n(K_n∩ B_M)∖ K]+_A[K∖ϕ_n(K_n)∩ B_M]⟶ 0. As the Gaussian density is bounded, we also have g_A[ϕ_n(K_n∩ B_M)]⟶ g_A[K∩ B_M]. We next compare g_A[ϕ_n(K_n∩ B_M)] to g_K_n(K_n∩ B_M). As (ϕ_n) is a sequence of linear functions that converges uniformly to the identity, we may assume that ϕ_n is a ^1-diffeomorphism. Furthermore, its constant Jacobian J_n converges to 1 as n→∞. Then ∫_K_n∩ B_M e^-|ϕ_n(x)|^2/2/(2π)^d'/2 _K_n(dx) = ∫_ϕ_n(K_n∩ B_M) e^-|y|^2/2 J_n^-1/(2π)^d'/2 _A(dy) = J_n^-1 g_A[ϕ_n(K_n∩ B_M)]. As the Gaussian distribution function is 1-Lipschitz, we have |∫_K_n∩ B_M e^-|ϕ_n(x)|^2/2/(2π)^d'/2 _K_n(dx) - g_K_n(K_n∩ B_M)| ≤ _K_n[K_n∩ B_M] |ϕ_n-Id_A|_∞, where |·|_∞ is taken on K_n∩ B_M. Now, for arbitrary ϵ>0, by choosing M sufficiently large so that g_V[V∖ B_M]≤ϵ for any d'-dimensional subspace V, we have | g_K_n[K_n]- g_K[K]| ≤ | g_K_n[K_n∩ B_M]- g_A[K∩ B_M]|+2ϵ ≤ | g_K_n[K_n∩ B_M]-∫_K_n∩ B_M C exp(-|ϕ_n(x)|^2/2)_K_n(dx)|+|J_n^-1 g_A[ϕ_n(K_n∩ B_M)]- g_A[K∩ B_M]|+2ϵ ≤ 4ϵ for n sufficiently large, by the previously proved convergence. Hence G_d':=G|_^-1{d'} is continuous, implying that G :K⟼∑_d'=0^d 1_^-1{d'}(K) G_d'(K) is Borel-measurable.

§.§ Further measurability of set-valued maps

This subsection is dedicated to the proof of Lemmas <ref> (i), <ref>, and <ref>. In preparation for the proofs, we start by giving some lemmas on the measurability of set-valued maps. Let 𝒯 be a σ-algebra on ^d. In practice we will always consider either the σ-algebra of Borel sets, the σ-algebra of analytically measurable sets, or the σ-algebra of universally measurable sets. Let (F_n)_n≥ 1 be a sequence of 𝒯-measurable set-valued maps on ^d. Then ∪_n≥ 1F_n and ∩_n≥ 1F_n are 𝒯-measurable. The measurability of the union is a consequence of Propositions 2.3 and 2.6 in Himmelberg <cit.>. The measurability of the intersection follows from the fact that ^d is σ-compact, together with Corollary 4.2 in <cit.>.

Let F be a 𝒯-measurable set-valued map on ^d. Then F, F, and _XF are 𝒯-measurable. The measurability of F is a direct application of Theorem 9.1 in <cit.>. We next verify that F is measurable. Since the values of F are closed, we deduce from Theorem 4.1 in Wagner <cit.> that we may find a measurable x⟼ y(x) such that y(x)∈ F(x) if F(x)≠∅, for all x∈^d. Then we may write F(x) =∪_q∈(y(x)+q(F(x)-y(x))) for all x∈^d. The measurability follows from Lemma <ref>, together with the first step of the present proof. We finally justify that _XF is measurable. We may assume that F takes convex values.
By convexity, we may reduce the definition of _x to a sequential form: *̱_x F(x) = ∪_n≥ 1{y∈^d , y+1/n(y-x)∈ F(x)andx-1/n(y-x)∈ F(x)}= ∪_n≥ 1[{y∈^d , y+1/n(y-x)∈ F(x)}∩{y∈^d , x-1/n(y-x)∈ F(x)}]= ∪_n≥ 1[(1/n+1x+n/n+1F(x)) ∩(-(n+1)x - nF(x))], * so that the required measurability follows from Lemma <ref>.We denotethe set of finite sequences of positive integers, and Σ the set of infinite sequences of positive integers. Let s∈, and σ∈Σ. We shall denote s<σ whenever s is a prefix of σ.Let (F_s)_s∈ be a family of universally measurable functions ^d⟶ with convex image. Then the mapping (∪_σ∈Σ∩_s<σF_s) is universally measurable.Letthe collection of universally measurable maps from ^d towith convex image. For an arbitrary γ∈(^d), and F:^d⟶, we introduce the map*̱γ G^*[F]:=inf_F⊂ F'∈γ G[F'],γ G[F']:= γ[G(F'(X))]  F'∈.* Clearly, γ G and γ G^* are non-decreasing, and it follows from the dominated convergence theorem that γ G, and thus γ G^*, are upward continuous. Step 1: In this step we follow closely the line of argument in the proof of Proposition 7.42 of Bertsekas and Shreve <cit.>. Set F:=(∪_σ∈Σ∩_s<σF_s), and let (F̅_n)_n a minimizing sequence for γ G^*[F]. Notice that F⊂F̅ := ∩_n≥ 1F̅_n∈, by Lemma <ref>. Then F̅ is a minimizer of γ G^*[F].For s,s'∈ S, we denote s≤ s' if they have the same length |s|=|s'|, and s_i≤ s_i' for 1≤ i≤ |s|. For s∈ S, let*̱R(s):= ∪_s'≤ s∪_σ>s'∩_s”<σF_s”K(s):= ∪_s'≤ s∩_j=1^|s'|F_s_1',...,s_j'.* Notice that K(s) is universally measurable, by Lemmas <ref> and <ref>, and R(s)⊂ K(s), ∪_s_1≥ 1R(s_1) = F,  ∪_s_k≥ 1R(s_1,...,s_k) = R(s_1,...,s_k-1).By the upwards continuity of γ G^*, we may find for all ϵ>0 a sequence σ^ϵ∈Σ s.t.*̱γ G^*[F]≤γ G^*[R(σ^ϵ_1)]+2^-1ϵ,γ G^*[R(σ_k-1)]≤γ G^*[R(σ_k)]+2^-kϵ,* for all k≥ 1, with the notation σ^_k:=(σ_1^ϵ,…,σ_k^). Recall that the minimizer F and K(s) are infor all s∈. We then define the sequence K_k^ϵ:=F∩ K(σ^ϵ_k)∈, k≥ 1, and we observe that (K_k^ϵ)_k≥ 1   F^ϵ:=∩_k≥ 1K_k^ϵ⊂ F,  γ G[K_k^ϵ]≥γ G^*[F]-ϵ = γ G[F]-ϵ,by the fact that R(σ^ϵ_k)⊂ K_k^ϵ. We shall prove in Step 2 that, for an arbitrary α>0, we may find =(α)≤α such that (<ref>) implies that γ G[F^ϵ]≥inf_k≥ 1γ G[K_k^ϵ] - α≥γ G[F]-ϵ-α. Now let α=α_n:=n^-1, _n:=ϵ(α_n), and notice that F:= ∪_n≥ 1F^ϵ_n∈, with F^ϵ_n⊂F⊂ F ⊂F, for all n≥ 1. Then, it follows from (<ref>) that γ G[F]= γ G[F], and therefore F= F =F, γ -a.s. In particular, F is γ-measurable, and we conclude that F∈ by the arbirariness of γ∈(^d).Step 2: It remains to prove that, for an arbitrary α>0, we may find =(α)≤α such that (<ref>) implies (<ref>). This is the point where we have to deviate from the argument of <cit.> because γ G is not downwards continuous, as the dimension can jump down. Set A_n:={G(F(X))-F(X)≤ 1/n}, and notice that ∩_n≥ 1A_n=∅. Let n_0≥ 1 such that γ[A_n_0]≤1/2α/d+1, and set ϵ := 1/21/n_0α/d+1>0. Then, it follows from (<ref>) thatγ[inf_n G(K^ϵ_n)-F≤ 0] ≤ γ[inf_n G(K^ϵ_n)-G(F)≤ n_0^-1] + γ[G(F)-F≤ -n_0^-1]≤ n_0(γ[G(F)]-γ[inf_n G(K^ϵ_n)] )+γ[A_n_0]= n_0(γ[G(F)]-inf_n γ[G(K^ϵ_n)] )+γ[A_n_0]≤ n_0ϵ+1/2α/d+1 = α/d+1, where we used the Markov inequality and the monotone convergence theorem. Then:*̱γ[inf_n G(K^ϵ_n)-G(F^ϵ)] ≤ γ[_{inf_n G(K^ϵ_n)-F≤ 0 }(inf_n G(K^ϵ_n)-G(F^ϵ))+ _{inf_n G(K^ϵ_n)-F>0 }(inf_n G(K^ϵ_n)-G(F^ϵ))]≤ γ[(d+1)_{inf_n G(K^ϵ_n)-F≤ 0 }+ _{inf_n G(K^ϵ_n)-F>0 }(inf_n G(K^ϵ_n)-G(F^ϵ))].* We finally note that inf_n G(K^ϵ_n)-G(F^ϵ)=0 on {inf_n G(K^ϵ_n)-F>0 }. 
Then (<ref>) follows by substituting the estimate in (<ref>).Proof of Lemma <ref> (i) We consider the mappings θ:Ω→_+ such that θ = ∑_k=1^n λ_k 1_C^1_k C^2_k where n∈, the λ_k are non-negative numbers, and the C^1_k,C^2_k are closed convex subsets of ^d. We denote the collection of all these mappings . Notice thatfor the pointwise limit topology contains all ^0_+(Ω). Then for any θ∈^0_+(Ω), we may find a family (θ_s)_s∈⊂, such that θ = inf_σ∈Σsup_s<σθ_s. For θ∈_+^0(Ω), and n≥ 0, we denote F_θ :x⟼θ(x,·), and F_θ,n :x⟼θ(x,·)^-1([0,n]). Notice that F_θ = ∪_n≥ 1F_θ,n. Notice as well that F_θ,n is Borel measurable for θ∈, and n≥ 0, as it takes values in a finite set, from a finite number of measurable sets. Let θ∈_+^0(Ω), we consider the associated family (θ_s)_s∈⊂, such that θ = inf_σ∈Σsup_s<σθ_s. Notice that F_θ,n = (∪_σ∈Σ∩_s<σF_θ_s,n) is universally measurable by Lemma <ref>, thus implying the universal measurability of F_θ = θ(X,·) by Lemma <ref>.In order to justify the measurability of _Xθ, we now define*̱F_θ^0 := F_θF_θ^k := (θ(X,·)∩ _XF_θ^k-1), k ≥ 1.* Note that F_θ^k = ∪_n≥ 1(∪_σ∈Σ∩_s<σF_θ_s,n∩ _xF_θ^k-1). Then, as F_θ^0 is universally measurable, we deduce that (F_θ^k)_k≥ 1 are universally measurable, by Lemmas <ref> and <ref>.As _Xθ is convex and relatively open, the required measurability follows from the claim:*̱F_θ^d = _Xθ.* To prove this identity, we start by observing that F_θ^k(x) ⊃_xθ. Since the dimension cannot decrease more than d times, we have _x F^d_θ(x) =F^d_θ(x) and*̱F^d+1_θ(x) = (θ(x,·)∩ _xF_θ^d(x))= (θ(x,·)∩_x F_θ^d-1(x)) = F_θ^d(x).* i.e. (F^d+1_θ)_k is constant for k≥ d. Consequently, *̱_x(θ(x,·)∩ _xF_θ^d(x))=F_θ^d(x)≥ (θ(x,·)∩ _xF_θ^d(x)). * As (θ(x,·)∩ _xF_θ^d(x))≥_x(θ(x,·)∩ _xF_θ^d(x)), we have equality of the dimension of (θ(x,·)∩ _xF_θ^d(x)) with its _x. Then it follows from Proposition <ref> (ii) that x∈ (θ(x,·)∩ _xF_θ^d(x)), and therefore:*̱F_θ^d(x) = (θ(x,·)∩ _xF_θ^d(x))=(θ(x,·)∩ _xF_θ^d(x))= _x(θ(x,·)∩ _xF_θ^d(x)) ⊂ _xθ.* Hence F_θ^d(x)= _xθ.Finally, K_θ,A = _X(θ+∞ 1_^d A) is universally measurable by the universal measurability of _X. Proof of Lemma <ref> We may find (F_n)_n≥ 1, Borel-measurable with finite image, converging γ-a.s. to F. We denote N_γ∈_γ, the set on which this convergence does not hold. For ϵ>0, we denote F_k^ϵ(X) := {y∈^d:dist(y,F_k(X))≤ϵ}, so that *̱ F(x) = ∩_i≥1lim inf_n→∞F_n^1/i(x),x∉ N_γ. * Then, as 1_Y∈ F(X)1_X∉ N_γ = inf_i≥1lim inf_n→∞1_Y∈ F_n^1/i(X)1_X∉ N_γ, the Borel-measurability of this function follows from the Borel-measurability of each 1_Y∈ F_n^1/i(X).Now we suppose that X∈ F(X) convex, γ-a.s. Up to redefining N_γ, we may suppose that this property holds on N_γ^c, then ∂ F(x) = ∩_n≥ 1F(x)∖(x+n/n+1(F(x)-x)), for x∉ N_γ. We denote a := 1_Y∈ F(X)1_X∉ N_γ. The result follows from the identity 1_Y∈∂ F(X)1_X∉ N_γ = a - sup_n≥ 1a(X,X+n/n+1(Y-X)). Proof of Lemma <ref> Let _:={K=(x_1,…,x_n): n∈,(x_i)_i≤ n⊂^d}. Then _x= ∪_N≥ 1∩{K∈_:_x∩ B_N⊂ K}=∪_N≥ 1∩_K∈_F_K^N(x),where F_K^N(x) := K if _x[B_N∩ K] =_x[B_N], and F_K^N(x) :=^d otherwise. As for any K∈_ and N≥ 1, the map _X[B_N∩ K] -_X[B_N] is analytically measurable, then F_K^N is analytically measurable. The required measurability result follows from lemma <ref>.Now, in order to get the measurability of (_X|_∂ I(X)), we have in the same way*̱(_X|_∂ I(X)) = ∪_n≥ 1∩_K∈_F_K'^N(x),* where F_K'^N(x) := K if _x[∂ I(x)∩ B_N∩ K] =_x[∂ I(x)∩ B_N], and F_K'^N(x) :=^d otherwise. 
As _X[∂ I(X)∩ B_N∩ K] = _X[1_Y∈∂ I(X)1_X∉ N_μ1_Y∉ B_N∩ K], μ-a.s., where N_μ∈_μ is taken from Lemma <ref>, _X[∂ I(X)∩ B_N∩ K] is μ-measurable, as equal μ-a.s. to a Borel function. Then similarly, _X[∂ I(X)∩ B_N∩ K] -_X[∂ I(X)∩ B_N] is μ-measurable, and therefore (_X|_∂ I(X)) is μ-measurable. Proof of Lemma <ref> By γ-measurability of F, we may find a Borel function F_B:^d⟶ such that F = F_B, γ-a.s. Let a Borel N_γ∈_γ such that F = F_B on N_γ^c. By the fact thatis Polish, we may find a sequence (F_n)_n≥ 1 of Borel functions with finite image converging pointwise towards F_B when n⟶∞. We will give an explicit expression for F_n that will be useful later in the proof. Let (K_n)_n≥ 1⊂ a dense sequence,F_n(x):= _K∈(K_i)_i≤ ndist(F_B(x),K), Where dist is the distance onthat makes it Polish, and we chose the K with the smallest index in case of equality.We fix n≥ 1, let K∈ F_n(N_γ^c), the image of F_n outside of N_γ, and A_K:= F_n^-1({K}). We will modify the image of F_n so that it is the same for all x'∈ F_B(x)= F(x), for all x∈ N_γ^c∩ A_K. Then we consider the set A'_K:= ∪_x∈ N_γ^c∩ A_K F_B(x), we now prove that this set in analytic. By Theorem 4.2 (b) in <cit.>, Gr F_B:={Y∈ F_B(X)} is a Borel set. Let λ >0, we define the affine deformation f_λ:Ω⟶Ω by f_λ(X,Y) := (X,X+λ(Y-X)). By the fact that for k≥ 1, f_1-1/k(Gr F_B) is Borel together with the fact that x∈ f_B(x) for x∉ N_γ, we have{Y∈ F_B(X)}∩{X∉ N_γ} = ∪_k≥ 1f_1-1/k(Gr F_B)∩{X∉ N_γ}.Therefore, {Y∈ F_B(X)}∩{X∉ N_γ} is Borel, and so is {Y∈ F_B(X)}∩{X∈ N_γ^c∩ A_K}. Finally,A'_K = Y({Y∈ F_B(X)}∩{X∈ N_γ^c∩ A_K}),therefore, A'_K is the projection of a Borel set, which is one of the definitions of an analytic set (see Proposition 7.41 in <cit.>). Now we define a suitable modification of F_n by F_n'(x) := K for all x∈ A'_K, we do this redefinition for all K∈ F_B(N_γ^c). Notice that thanks to the definition (<ref>) and the fact that F_B(x) = F_B(x') if x,x'∉ N_γ and x'∈ F_B(x) = F(x), we have the inclusion A'_K⊂ A_K∪ N_γ. Then the redefinitions of F_n only hold outside of N_γ, furthermore for different K_1, K_2∈ F_n(N_γ^c), A'_K_1∩ A'_K_2 = ∅ as the value of F_n(x) only depends on the value of F_B(x) by (<ref>). Notice thatN_γ' := (∪_K∈ F_n(N_γ^c)A'_K)^c =(∪_x∉ N_γF_B(x))^c⊂ N_γ is analytically measurable, as the complement of an analytic set, and does not depend on n. For x∈ N_γ', we define F_n'(x):= {x}. Notice that F_n' is analytically measurable as the modification of a Borel Function on analytically measurable sets.Now we prove that F_n' converges pointwise when n⟶∞. For x∈ N_γ', F_n'(x) is constant equal to {x}, if x∉ N_γ', by (<ref>)x∈∪_x∉ N_γF_B(x), and therefore F_n'(x) = F_B(x')=F(x') for some x∈ N_γ^c, for all n≥ 1. Then as F_n'(x') converges to F(x'), F_n'(x) converges to F(x). Let F' be the pointwise limit of F_n'. the maps F_n' are analytically measurable, and therefore, so does F'. For all n≥ 1, F_n' = F_n, γ-a.e. and therefore F' = F_B = F, γ-a.e. Finally, F'(N_γ^c) = F(N_γ^c), and ∪ F(N_γ^c) = (N_γ')^c. By property of F, F'(N_γ^c) is a partition of (N_γ')^c such that x∈ F'(x) for all x∉ N_γ'. On N_γ', this property is trivial as F'(x) = {x} for all x∈ N_γ'. § PROPERTIES OF TANGENT CONVEX FUNCTIONS§.§ x-invariance of the y-convexity We first report a convex analysis lemma.Let f:^d→ be convex finite on some convex open subset U⊂^d. We denote f_*:^d→ the lower-semicontinuous envelop of f on U, then *̱ f_*(y)=lim_ϵ↘ 0f(ϵ x+(1-ϵ)y),(x,y)∈ U U. * f_* is the lower semi-continuous envelop of f on U, i.e. 
the lower semi-continuous envelope of f':=f+∞1_U^c. Notice that f' is convex as a map ^d⟶∪{∞}. Then, by Proposition 1.2.5 in Chapter IV of <cit.>, we get the result, as f=f' on U.

Proof of Proposition <ref> The result is obvious in (_1), as the affine part depending on x vanishes; we may use N_ν = ∅. Now we denote by T the set of mappings in Θ_μ for which the result of the proposition holds. Then we have (_1)⊂ T.

We prove that T is -Fatou closed. Let (θ_n)_n be a sequence in T converging to θ∈Θ_μ. For n≥ 1, we denote by N_μ^n the set in _μ from the proposition applied to θ_n, and let N_μ^0∈_μ be the set corresponding to the convergence of θ_n to θ. We denote N_μ:=∪_n N_μ^n∈_μ. Let x_1,x_2∉ N_μ, and let ȳ∈_x_1θ∩_x_2θ. Let y_1,y_2∈_x_1θ be such that we have the convex combination ȳ = λ y_1 + (1-λ) y_2 with 0≤λ≤ 1. Then, for i=1,2, θ_n(x_1,y_i)⟶θ(x_1,y_i) and θ_n(x_1,ȳ)⟶θ(x_1,ȳ) as n→∞. Using the fact that θ_n∈ T for all n, we have Δ_n:=λθ_n(x_i,y_1)+(1-λ)θ_n(x_i,y_2)-θ_n(x_i,ȳ)≥0, i=1,2. Taking the limit n→∞ gives that θ_∞(x_2,y_i)<∞ and y_i∈θ_∞(x_2,·). ȳ is interior to _x_1θ; then, for any y∈_x_1θ, y':=ȳ+ ϵ/(1-ϵ)(ȳ-y)∈_x_1θ for 0<ϵ<1 small enough. Then ȳ = ϵ y+(1-ϵ)y'. As we may choose any y∈_x_1θ, we have _x_1θ⊂θ_∞(x_2,·). Then we have _x_2(_x_1θ∪_x_2θ)⊂_x_2 (θ_∞(x_2,·)) = _x_2θ. By Lemma <ref>, as _x_1θ∩_x_2θ≠∅, (_x_1θ∪_x_2θ)= (_x_1θ∪_x_2θ). In particular, (_x_1θ∪_x_2θ) is relatively open and contains x_2, and therefore _x_2(_x_1θ∪_x_2θ) = (_x_1θ∪_x_2θ). Finally, by (<ref>), _x_1θ⊂_x_2θ. As there is a symmetry between x_1 and x_2, we have _x_1θ = _x_2θ. Then we may go to the limit in equation (<ref>): Δ_∞:=λθ(x_i,y_1)+(1-λ)θ(x_i,y_2)-θ(x_i,ȳ)≥0, i=1,2.

Now let y_1,y_2∈^d be such that we have the convex combination ȳ = λ y_1 + (1-λ) y_2 with 0≤λ≤ 1. We have three cases to study.

Case 1: y_i∉_x_1θ for some i=1,2. Then, as the average ȳ of the y_i is in _x_1θ, by Proposition <ref> (ii) we may find i'=1,2 such that y_i'∉ θ(x_1,·), thus implying that θ(x_1,y_i')=∞. Then λθ(x_1,y_1)+(1-λ) θ(x_1,y_2)-θ(x_1,ȳ) = ∞≥ 0. As _x_1θ = _x_2θ, we may apply the same reasoning to x_2, and we get λθ(x_2,y_1)+(1-λ) θ(x_2,y_2)-θ(x_2,ȳ) = ∞≥ 0. We get the result.

Case 2: y_1,y_2∈_x_1θ. This case is (<ref>).

Case 3: y_1,y_2∈_x_1θ. The problem arises here if some y_i is in the boundary ∂_x_1θ. Let x∉ N_μ; we denote the lower semi-continuous envelope of θ(x,·) in _xθ by θ_*(x,y) := lim_ϵ↘ 0θ(x,ϵ x+(1-ϵ)y) for y∈_xθ, where the last equality follows from Lemma <ref> together with the fact that θ(x,·) is convex on _xθ. Let y∈_x_1θ; for 1≥ϵ>0, y^ϵ :=ϵ x_1 + (1-ϵ )y∈_x_1θ. By (<ref>), (1-ϵ )θ_n(x_1,y)-θ_n(x_1,y^ϵ) = (1-ϵ )θ_n(x_2,y)-θ_n(x_2,y^ϵ). Taking the lim inf, we have (1-ϵ )θ(x_1,y)-θ(x_1,y^ϵ) = (1-ϵ )θ(x_2,y)-θ(x_2,y^ϵ). Now taking ϵ↘ 0, we have θ(x_1,y)-θ_*(x_1,y) = θ(x_2,y)-θ_*(x_2,y). Then the jump of θ(x,·) at y is independent of x=x_1 or x_2. Now, for 1≥ϵ>0, by (<ref>), λθ(x_1,y_1^ϵ)+(1-λ)θ(x_1,y_2^ϵ)-θ(x_1,ȳ^ϵ)= λθ(x_2,y_1^ϵ)+(1-λ)θ(x_2,y_2^ϵ)-θ(x_2,ȳ^ϵ)≥ 0. By going to the limit ϵ↘ 0, we get λθ_*(x_1,y_1)+(1-λ)θ_*(x_1,y_2)-θ_*(x_1,ȳ)= λθ_*(x_2,y_1)+(1-λ)θ_*(x_2,y_2)-θ_*(x_2,ȳ)≥0. As the (nonnegative) jumps do not depend on x= x_1 or x_2, we finally get λθ(x_1,y_1)+(1-λ)θ(x_1,y_2)-θ(x_1,ȳ)= λθ(x_2,y_1)+(1-λ)θ(x_2,y_2)-θ(x_2,ȳ)≥0.

Finally, T is -Fatou closed and convex, and (_1)⊂ T. As the result is clearly invariant when the function is multiplied by a scalar, the result is proved on (μ,ν).

§.§ Compactness

Proof of Proposition <ref> We first prove the result for θ=(θ_n)_n≥ 1⊂Θ. Denote (θ) := {θ'∈Θ^: θ_n'∈(θ_k,k≥ n), n∈}.
Consider the minimization problem:m := inf_θ'∈(θ)μ[G(_Xθ'_∞)], where the measurability of G(_Xθ'_∞) follows from Lemma <ref>. Step 1: We first prove the existence of a minimizer. Let (θ'^k)_k∈∈(θ)^ be a minimizing sequence, and define the sequence θ∈(θ) by:*̱ θ_n := (1-2^-n)^-1∑_k=1^n 2^-kθ'^k_n, n≥ 1.* Then, (θ_∞)⊂⋂_k≥ 1(θ'^k_∞) by the non-negativity of θ', and we have the inclusion {θ_nn→∞⟶∞}⊂{θ_n'^kn→∞⟶∞  k≥ 1}. Consequently,*̱ _xθ_∞ ⊂ (⋂_k≥ 1θ'^k_∞(x,·)) ⊂ ⋂_k≥ 1_xθ'^k_∞   x∈^d.* Since (θ'^k)_k is a minimizing sequence, and θ∈(θ), this implies that μ[G(_Xθ_∞)]= m. Step 2: We next prove that we may find a sequence (y_i)_i≥ 1⊂^0(^d,^d) such that y_i(X)∈(_Xθ_∞) (y_i(X))_i≥ 1  _Xθ_∞,  μ- Indeed, it follows from Lemmas <ref>, and <ref> that the map x↦(_xθ_∞) is universally measurable, and therefore Borel-measurable up to a modification on a μ-null set. Since its values are closed and nonempty, we deduce from the implication (ii)(ix) in Theorem 4.2 of the survey on measurable selection <cit.> the existence of a sequence (y_i)_i≥ 1 satisfying (<ref>). Step 3: Let m(dx,dy):=μ(dx)⊗∑_i≥ 02^-iδ_{y_i(x)}(dy). By the Komlòs lemma (in the form of Lemma A1.1 in <cit.>, similar to the one used in the proof of Proposition 5.2 in <cit.>), we may find θ∈(θ) such that θ_n⟶θ_∞∈Ł^0(Ω), m-a.s. Clearly, _xθ_∞⊂_xθ_∞, and therefore μ[G(_Xθ_∞)]≤μ[G(_xθ_∞)], for all x∈^d. This shows that G(_Xθ_∞) = G(_Xθ_∞),  μ- so that θ is also a solution of the minimization problem (<ref>). Moreover, it follows from (<ref>) that*̱ _Xθ_∞= _Xθ_∞, _Xθ_∞= _Xθ_∞,  μ-a.s.*Step 4: Notice that the values taken by θ_∞ are only fixed on an m-full measure set. By the convexity of elements of Θ in the y-variable, _Xθ_n has a nonempty interior in (_Xθ_∞). Then as μ-a.s., θ_n(X,·) is convex, the following definition extends θ_∞ to Ω:*̱θ_∞(x,y):=sup{ a· y + b: (a,b)∈^d×, a· y_n(x)+b≤θ_∞(x,y_n(x))  n≥ 0}.* This extension coincides with θ_∞, in (x,y_n(x)) for μ-a.e. x∈^d, and all n≥ 1 such that y_n(x)∉∂_Xθ_k for some k≥ 1 such that _xθ_n has a nonempty interior in (_xθ_∞). As for k large enough, ∂_Xθ_k is Lebesgue negligible in (_xθ_∞), the remaining y_n(x) are still dense in (_xθ_∞). Then, for μ-a.e. x∈^d, θ_n(x,·) converges to θ_∞(x,·) on a dense subset of (_xθ_∞). We shall prove in Step 6 below thatθ_∞(X,·)  (_Xθ_∞), μ- Then, by Theorem <ref>, θ_n(X,·)⟶θ_∞(X,·) pointwise on (_Xθ_∞)∖∂θ_∞(X,·), μ-a.s. Since _Xθ_∞ = _Xθ_∞, and θ converges to θ_∞ on _Xθ_∞, μ-a.s., θ converges to θ_∞∈Θ, . Step 5: Finally for general (θ_n)_n≥ 1⊂Θ_μ, we consider θ_n', equal to θ_n, , such that θ_n'≤θ_n, for n≥ 1, from the definition of Θ_μ. Then we may find λ_n^k, coefficients such that θ_n':=∑_k≥ nλ_n^kθ_k'∈(θ') convergesto θ_∞∈Θ. We denote θ_n:=∑_k≥ nλ_n^kθ_k∈(θ), θ_n = θ_n', , and θ_n ≥θ_n'. By Proposition <ref> (iii), θ converges to θ_∞, . The Proposition is proved. Step 6: In order to prove (<ref>), suppose to the contrary that there is a set A such that μ[A]>0 and θ_∞(x,·) has an empty interior in (_xθ_∞) for all x∈ A. Then, by the density of the sequence (y_n(x))_n≥ 1 stated in (<ref>), we may find for all x∈ A an index i(x)≥ 0 such that y(x):=y_i(x)(x)∈ _xθ_∞,θ_∞(x,y(x))=∞. Moreover, since i(x) takes values in , we may reduce to the case where i(x) is a constant integer, by possibly shrinking the set A, thus guaranteeing that y is measurable. Define the measurable function on Ω:θ^0_n(x,y):=(y,L^n_x)L^n_x:={y∈^d:θ_n(x,y)<θ_n(x,y(x))}. Since L^n_x is convex, and contains x for n sufficiently large by (<ref>), we see that θ^0_n   θ^0_n(x,y)≤ |x-y|,   (x,y)∈Ω. 
In particular, this shows that θ^0_n∈Θ. By Komlòs Lemma, we may find*̱ θ^0_n:=∑_k≥ nλ^n_k θ^0_k∈(θ^0)    θ^0_n⟶θ^0_∞,  m-* for some non-negative coefficients (λ^n_k,k≥ n)_n≥ 1 with ∑_k≥ nλ^n_k=1. By convenient extension of this limit, we may assume that θ^0_∞∈Θ. We claim thatθ^0_∞>0  H_x:={h(x)·(y-y(x))>0},  h(x)∈^d.We defer the proof of this claim to Step 7 below and we continue in view of the required contradiction. By definition of θ^0_n together with (<ref>), we compute that*̱θ^1_n(x,y) := ∑_k≥ nλ^n_kθ_k(x,y) ≥ ∑_k≥ nλ^n_kθ_k(x,y(x)) _{θ^0_n>0}≥ ∑_k≥ nλ^n_kθ_k(x,y(x)) θ^0_k(x,y)/|x-y|≥ θ^0_n(x,y)/|x-y|inf_k≥ nθ_k(x,y(x)).* By (<ref>) and (<ref>), this shows that the sequence θ^1∈(θ) satisfies*̱θ^1_n(x,·) ⟶∞,H_x,     x∈ A.* We finally consider the sequence θ^1:=1/2(θ+θ^1)∈(θ). Clearly, θ^1_∞(X,·)⊂θ_∞(X,·), and it follows from the last property of θ^1 that θ^1_∞(x,·)⊂ H_x^c∩θ_∞(x,·) for all x∈ A. Notice that y(x) lies on the boundary of the half space H_x and, by (<ref>), y(x)∈_xθ_∞. Then G(_xθ^1_∞)< G(_xθ_∞) for all x∈ A and, since μ[A]>0, we deduce that μ[G(_Xθ^1_∞)]<μ[G(_Xθ_∞)], contradicting the optimality of θ, by (<ref>), for the minimization problem (<ref>). Step 7: It remains to justify (<ref>). Since θ_n(x,·) is convex, it follows from the Hahn-Banach separation theorem that:*̱θ_n(x,·) ≥θ_n(x,y(x))  H^n_x := {y∈^d: h^n(x)·(y-y(x))>0},* for some h^n(x)∈^d, so that it follows from (<ref>) that L^n_x⊂ (H^n_x)^c, and*̱θ^0_n(x,y)≥(y,(H^n_x)^c)=[(y-y(x))· h^n(x)]^+.* Denote g_x:=g__xθ_∞ the Gaussian kernel restricted to the affine span of _xθ_∞, and B_r(x_0) the corresponding ball with radius r, centered at some point x_0. By (<ref>), we may find r^x so that B^x_r:=B_r(y(x))⊂ _xθ_∞ for all r≤ r^x, and*̱∫_B_r^xθ^0_n(x,y)g_x(y)dy ≥ ∫_B_r^x[(y-y(x))· h^n(x)]^+g_x(y)dy≥ min_B_r^xg_x∫_B_r(0) (y· e_1)^+ dy=:b^r_x >0,* where e_1 is an arbitrary unit vector of the affine span of _xθ_∞. Then we have the inequality ∫_B_r^xθ^0_n(x,y)g_x(y)dy≥ b^r_x, and since θ^0_n has linear growth in y by (<ref>), it follows from the dominated convergence theorem that ∫_B^r_xθ^0_∞(x,y)g(dy)≥ b^r_x>0, and therefore θ^0_∞(x,y^r_x)>0 for some y^r_x∈ B^r_x. From the arbitrariness of r∈(0,r_x), We deduce (<ref>) as a consequence of the convexity of θ^0(x,·). Proof of Proposition <ref> (iii)We need to prove the existence of someθ'∈Θθ_∞= θ',  ,    θ_∞≥θ'. For simplicity, we denote θ:=θ_∞. Let *̱ F^1 := θ(X,·),   F^k :=(θ(X,·)∩ _XF^k-1), k ≥ 2,     F := ∪_n≥ 1(F^n∖_X F^n)∪_Xθ. * Fix some sequence _n↘ 0, and denote θ_*:=lim inf_n→∞θ(X,_n X+(1-_n)Y), and θ':= [∞1_Y∉ F(X)+1_Y∈_Xθθ_*]1_X∉ N_μ,where N_μ∈_μ is chosen such that 1_Y∈ F^k(X)1_X∉ N_μ are Borel measurable for all k from Lemma <ref>, and θ(x,·) (resp. θ_n(x,·)) is convex finite on _xθ (resp. _xθ_n), for x∉ N_μ. Consequently, θ' is measurable. In the following steps, we verify that θ' satisfies (<ref>). Step 1: We prove that θ'∈Θ. Indeed, θ'∈^0_+(Ω), and θ'(X,X) = 0. Now we prove that θ'(x,·) is convex for all x∈^d. For x∈ N_μ, θ'(x,·)=0. For x∉ N_μ, θ(x,·) is convex finite on _xθ, then by the fact that _xθ is a convex relatively open set containing x, it follows from Lemma <ref> that θ_*(x,·)=lim_n→∞θ(x,_n x+(1-_n)·) is the lower semi-continuous envelop of θ(x,·) on _xθ. We now prove the convexity of θ'(x,·) on all ^d. We denote F(x) := F(x)∖_xθ so that ^d = F(x)^c∪F(x) ∪_xθ. Now, let y_1,y_2∈^d, and λ∈(0,1). If y_1∈ F(x)^c, the convexity inequality is verified as θ'(x,y_1)=∞. Moreover, θ'(x,·) is constant on F(x), and convex on _xθ. 
We shall prove in Steps 4 and 5 below thatF(x)  _xF(x) = _xθ. In view of Proposition <ref> (ii), this implies that the sets F(x) and _xθ are convex. Then we only need to consider the case when y_1∈F(x), and y_2∈_xθ. By Proposition <ref> (ii) again, we have [y_1,y_2)⊂F(x), and therefore λ y_1 + (1-λ)y_2∈F(x), and θ'(x,λ y_1 + (1-λ)y_2) = 0, which guarantees the convexity inequality. Step 2: We next prove that θ=θ', . By the second claim in (<ref>), it follows that θ_*(X,·) is convex finite on _Xθ, μ-a.s. Then as a consequence of Proposition <ref> (ii), we have _Xθ' = _X(∞1_Y∉ F(X))∩_X(θ_*1_Y∈_Xθ), μ-a.s. The first term in this intersection is _XF(X) = _Xθ. The second contains _Xθ, as it is the _X of a function which is finite on _Xθ, which is convex relatively open, containing X. Finally, we proved that _Xθ=_Xθ', μ-a.s. Then θ'(X,·) is equal to θ_*(X,·) on _Xθ, and therefore, equal to θ(X,·), μ-a.s. We proved that θ=θ', . Step 3: We finally prove that θ'≤θ pointwise. We shall prove in Step 6 below that θ(X,·) ⊂F. Then, ∞1_Y∉ F(X)1_X∉ N_μ≤θ, and it remains to prove that*̱θ(x,y)≥θ_*(x,y)y∈_xθ,  x∉ N_μ.* To see this, let x∉ N_μ. By definition of N_μ, θ_n(x,·)⟶θ(x,·) on _xθ. Notice that θ(x,·) is convex on _xθ, and therefore as a consequence of Lemma <ref>, *̱θ_*(x,y) = lim_ϵ↘ 0θ(x,ϵ x + (1-ϵ)y),y∈_xθ. * Then y^ϵ := (1-ϵ)y + ϵ x∈_xθ_n, for ∈(0,1], and n sufficiently large by (i) of this Proposition, and therefore (1-ϵ)θ_n(x,y)-θ_n(x,y_ϵ)≥ (1-ϵ)θ_n'(x,y)-θ_n'(x,y_ϵ)≥ 0, for θ_n'∈Θ such that θ_n'=θ_n, , and θ_n≥θ_n'. Taking the lim inf as n→∞, we get (1-ϵ)θ(x,y)-θ(x,y_ϵ)≥ 0, and finally θ(x,y)≥lim_ϵ↘ 0θ(x,ϵ x + (1-ϵ)y) = θ'(x,y), by sending ϵ↘ 0. Step 4: (First claim in (<ref>)) Let x_0∈^d, let us prove that F(x_0) is convex. Indeed, let x,y∈ F(x_0), and 0<λ<1. Since _xθ is convex, and F^n(x_0)∖_X F^n(x_0) is convex by Proposition <ref> (ii), we only examine the following non-obvious cases:∙ Suppose x∈ F^n(x_0)∖_x_0 F^n(x_0), and y∈ F^p(x_0)∖_x_0 F^p(x_0), with n<p. Then as F^p(x_0)∖_x_0 F^p(x_0)⊂_x_0 F^n(x_0), we have λ x + (1-λ)y∈ F^n(x_0)∖_x_0 F^n(x_0) by Proposition <ref> (ii).∙ Suppose x∈ F^n(x_0)∖_x_0 F^n(x_0), and y∈_x_0θ, then as _x_0θ⊂_x_0 F^n(x_0), this case is handled similar to previous case. Step 5: (Second claim in (<ref>)). We have _Xθ⊂ F(X), and therefore _Xθ⊂_XF(X). Now we prove by induction on k≥ 1 that _XF(X)⊂∪_n≥ k(F^n∖_X F^n)∪_Xθ. The inclusion is trivially true for k=1. Let k≥ 1, we suppose that the inclusions holds for k, hence _XF(X)⊂∪_n≥ k(F^n∖_X F^n)∪_Xθ. As ∪_n≥ k(F^n∖_X F^n)∪_Xθ⊂ F^k. Applying _X gives *̱_XF(X) ⊂ _X[∪_n≥ k(F^n∖_X F^n)∪_Xθ]= _X[F^k∩∪_n≥ k(F^n∖_X F^n)∪_Xθ]= _X F^k∩_X[∪_n≥ k(F^n∖_X F^n)∪_Xθ]⊂ _X F^k∩∪_n≥ k(F^n∖_X F^n)∪_Xθ⊂ ∪_n≥ k+1(F^n∖_X F^n)∪_Xθ. * Then the result is proved for all k. In particular we apply it for k=d+1. Recall from the proof of Lemma <ref> that for n≥ d+1, F^n is stationary at the value _Xθ. Then ∪_n≥ d+1(F^n∖_X F^n)=∅, and _XF(X)⊂_X_Xθ = _Xθ. The result is proved. Step 6: We finally prove (<ref>). Indeed, θ(X,·)⊂ F^1 by definition. Then *̱θ(X,·) ⊂ F^1∖ F^1∪(∪_2≤ k≤ d+1(θ(X,·)∩ _XF^k-1)∖ F^k)∪ F^d+1⊂ F^1∖ F^1∪(∪_k≥ 2(θ(X,·)∩ _XF^k-1)∖ F^k)∪ _Xθ=∪_k≥ 1F^k∖ F^k∪_Xθ =F. *§ SOME CONVEX ANALYSIS RESULTS As a preparation, we first report a result on the union of intersecting relative interiors of convex subsets which was used in the proof of Proposition <ref>. We shall use the following characterization of the relative interior of a convex subset K of ^d: K= {x∈^d : x-ϵ(x'-x)∈ Kϵ>0,x'∈ K}= {x∈^d : x∈ (x',x_0],x_0∈ K,x'∈ K }. 
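As a quick sanity check of the first characterization of the relative interior above (a toy example of ours, not from the paper), consider a segment embedded in the plane, where the interior is empty but the relative interior is not:

```latex
% Toy example: K = [0,1] \times \{0\} \subset \mathbb{R}^2.
% The interior of K is empty, but \mathrm{ri}\,K = (0,1) \times \{0\}.
% Check the point x = (1/2, 0): for any x' = (t, 0) \in K,
%   x - \epsilon(x' - x) = \bigl(\tfrac{1}{2} - \epsilon(t - \tfrac{1}{2}),\, 0\bigr),
% which stays in K for all 0 < \epsilon \le 1, since
% |\epsilon(t - \tfrac{1}{2})| \le \tfrac{1}{2}; hence x \in \mathrm{ri}\,K.
% Check the endpoint x = (1, 0): taking x' = (0, 0) gives
%   x - \epsilon(x' - x) = (1 + \epsilon,\, 0) \notin K
% for every \epsilon > 0, so x \notin \mathrm{ri}\,K, as expected.
```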
We start by proving the required properties of the notion of relative face.

Proof of Proposition <ref> (i) The proofs of the first properties raise no difficulties and are left as an exercise for the reader. We only prove that _a A=A ≠∅ if and only if a∈ A. First assume that _a A=A≠∅. The non-emptiness implies that a∈ A, and therefore a∈_aA =A. Now suppose that a∈ A. Then, for x∈ A, [x,a-ϵ (x-a)]⊂ A for some ϵ>0, and therefore A⊂_a A. On the other hand, by (<ref>), A = {x∈^d : x∈ (x',x_0], x_0∈ A, x'∈ A}. Taking x_0 := a∈ A, we get the remaining inclusion _a A⊂ A.

(ii) We now assume that A is convex.

Step 1: We first prove that _aA is convex. Let x,y∈_aA and λ∈[0,1]. We consider ϵ >0 such that (a-ϵ(x-a),x+ϵ(x-a))⊂ A and (a-ϵ(y-a),y+ϵ(y-a))⊂ A. Then, writing z=λ x + (1-λ)y, we have (a-ϵ(z-a),z+ϵ(z-a))⊂ A by convexity of A, because a,x,y∈ A.

Step 2: In order to prove that _aA is relatively open, we consider x,y∈_aA, and we verify that (x-ϵ(y-x),y+ϵ(y-x))⊂_aA for some ϵ>0. Consider the two alternatives.

Case 1: a,x,y lie on a line. If a=x=y, then the required result is obvious. Otherwise, (a-ϵ(x-a),x+ϵ(x-a))∪(a-ϵ(y-a),y+ϵ(y-a))⊂_a A. This union is open in the line, and x and y are interior to it, so we may find ϵ'>0 such that (x-ϵ'(y-x),y+ϵ'(y-x))⊂_aA.

Case 2: a,x,y do not lie on a line. Let ϵ>0 be such that (a-2ϵ(x-a),x+2ϵ(x-a))⊂ A and (a-2ϵ(y-a),y+2ϵ(y-a))⊂ A. Then x+ϵ(x-a)∈_a A and a-ϵ(y-a)∈_a A. Taking λ := ϵ/(1+2ϵ), we compute λ (a-ϵ(y-a))+(1-λ)(x+ϵ(x-a)) = (1-λ)(1+ϵ)x-λϵ y = x + λϵ (x-y). Then x + λϵ (x-y)∈_a A and, symmetrically, y + λϵ (y-x)∈_a A by convexity of _aA. Still by convexity, (x-ϵ'(y-x),y+ϵ'(y-x))⊂_aA for ϵ':=ϵ^2/(1+2ϵ)>0.

Step 3: Now we prove that A∖_a A is convex, and that if x_0∈ A∖_a A and y_0∈ A, then [x_0,y_0)⊂ A∖_a A. We prove these two results by induction on the dimension d of the space. If d = 0, the results are trivial. Now suppose that the result is proved for any d'<d, and let us prove it for dimension d.

Case 1: a∈ A. This case is trivial, as _aA =A, and A⊂ A = _aA because of the convexity of A. Finally, A∖_aA = ∅, which makes it trivial.

Case 2: a∉ A. Then a∈∂ A, and there exists a supporting hyperplane H to A at a, because of the convexity of A. We write the corresponding half-space containing A as E: c· x ≤ b, with c∈^d and b∈. As x∈_aA implies that [a-ϵ(x-a),x+ϵ(x-a)]⊂ A for some ϵ >0, we have (a-ϵ(x-a))· c ≤ b and (x+ϵ(x-a))· c ≤ b. Using that a∈ H, and thus a· c = b, these inequalities are equivalent to -ϵ(x-a)· c≤ 0 and (1+ϵ)(x-a)· c ≤ 0. We finally have (x-a)· c = 0, i.e., x∈ H. We have proved that _aA⊂ H. Now, using (i) together with the fact that _aA⊂ H, a∈ H, and H is affine, we have _a(A∩ H) = _a A∩_a H = _a A∩ H = _aA. We may now apply the induction hypothesis to A∩ H, because dim H = d-1 and A∩ H⊂ H is convex. We obtain that A∩ H ∖_a(A∩ H) is convex, and that if x_0∈ A∩ H∖_a (A∩ H), y_0∈ A∩ H, and λ∈ (0,1], then λ x_0 + (1-λ)y_0∈ A∖_a (A∩ H).

First, A∖_aA = (A∖ H) ∪((A∩ H) ∖_aA); let us show that this set is convex. The two sets in the union are convex (A∖ H = A∩ (E∖ H)), so we need to show that a non-trivial convex combination of elements coming from both sets is still in the union. We consider x∈ A∖ H, y∈ A∩ H ∖_aA, and λ >0, and show that z:= λ x + (1-λ)y∈ (A∖ H) ∪ (A∩ H ∖_aA). As x,y∈ A (_a A⊂ A because A is closed), z∈ A by convexity of A. We now prove that z∉ H: z· c = λ x· c + (1-λ)y· c = λ x· c + (1-λ)b < λ b+ (1-λ)b = b. Then z is in the open half-space: z∈ E∖ H.
Finally z∈ A∖ H and A∖_aA is convex.Let us now prove the second part: we consider x_0∈ A∖_a A, y_0∈_a A and λ∈ (0,1] and write z_0 := λ x_0 + (1-λ)y_0. Case 2.1:x_0,y_0∈ H. We apply the induction hypothesis. Case 2.2:x_0,y_0∈ A∖ H. Impossible because _aA⊂ H and _aA⊂ H = H. y_0∈ H. Case 2.3:x_0 ∈ A∖ H and y_0∈ H. Then by the same computation than in Step 1,z_0∈ A∖ H⊂ A∖_aA.Step 4: Now we prove that if a∈ A, then (_a A) =(A) if and only if a∈ A, and that in this case, we have _a A = A =A = _aA. We first assume that a∈ A. As by the convexity of A, A = A, _a A = A, and therefore _a A =A. Finally, taking the dimension, we have (_a A) =(A). In this case we proved as well that _a A = A =A = _aA, the last equality coming from the fact that A=_aA as a∈ A.Now we assume that a∉ A. Then a∈∂ A, and _a A⊂∂ A. Taking the dimension (in the local sense this time), and by the fact that ∂ A=∂ A< A, we have (_a A) <(A) (as _a A is convex, the two notions of dimension coincide).Let K_1,K_2⊂^d be convex with K_1∩ K_2≠∅. Then ( K_1∪ K_2) =(K_1∪ K_2).We fix y∈ K_1∩ K_2.Let x∈( K_1∪ K_2), we may write x=λ x_1+(1-λ)x_2, with x_1∈ K_1, x_2∈ K_2, and 0≤λ≤ 1. If λ is 0 or 1, we have trivially that x∈ (K_1∩ K_2). Let us now treat the case 0<λ<1. Then for x'∈(K_1∪ K_2), we may write x'=λ' x_1'+(1-λ')x_2', with x_1'∈ K_1, x_2'∈ K_2, and 0≤λ'≤ 1. We will use y as a center as it is in both the sets. For all the variables, we add a bar on it when we subtract y, for example := x-y. The geometric problem is the same when translated with y,-ϵ('-)= λ(_1-ϵ(λ'/λ_1'- _1)) + (1-λ)(_2-ϵ(1-λ'/1-λ_2'-_2)).However, as _1 and _1' are in K_1-y, as 0 is an interior point, ϵ(λ'/λ_1'- _1)∈ K_1-y for ϵ small enough. Then as _1 is interior to K_1-y as well, _1-ϵ(λ'/λ_1'- _1)∈ K_1-y as well. By the same reasoning, _2-ϵ(1-λ'/1-λ_2'-_2)∈ K_2-y. Finally, by (<ref>), for ϵ small enough, x-ϵ(x'-x)∈(K_1∪ K_2). By (<ref>), x∈ (K_1∪ K_2).Now let x∈ (K_1∪ K_2). We use again y as an origin with the notation := x-y. Asis interior, we may find ϵ >0 such that (1+ϵ)∈(K_1∪ K_2). We may write (1+ϵ)=λ_1+(1-λ)_2, with _1∈ K_1-y, _2∈ K_2-y, and 0≤λ≤ 1. Then = λ1/1+ϵ_1+(1-λ)1/1+ϵ_2. By (<ref>), 1/1+ϵ_1∈ K_1, and 1/1+ϵ_2∈ K_2. ∈( (K_1-y)∪ (K_2-y)), and therefore x∈( K_1∪ K_2).Now we use the measurable selection theory to establish the non-emptiness of ∂ f. For all f∈, we have ∂ f≠∅.By the fact that f is continuous, we may write ∂ f(x) = ∩_n≥ 1F_n(x) for all x∈^d, with F_n(x):={p∈^d:f(y_n)-f(x)≥ p· (y_n-x)} where (y_n)_n≥ 1⊂^d is some fixed dense sequence. All F_n are measurable by the continuity of (x,p)⟼ f(y_n)-f(x)-p· (y_n-x) together with Theorem 6.4 in <cit.>. Therefore the mapping x⟼∂ f(x) is measurable by Lemma <ref>. Moreover, the fact that this mapping is closed nonempty-valued is a well-known property of the subgradient of finite convex functions in finite dimension. Then the result holds by Theorem 4.1 in <cit.>. We conclude this section with the following result which has been used in our proof of Proposition <ref>. We believe that this is a standard convex analysis result, but we could not find precise references. For this reason, we report the proof for completeness. Let f_n,f:^d→ be convex functions with f ≠∅. Then f_n ⟶ f pointwise on ^d∖∂ f if and only if f_n ⟶ f pointwise on some dense subset A⊂^d∖∂ f.We prove the non-trivial implication "if". We first prove the convergence on f. f_n converges to f on a dense set. The reasoning will consist in proving that the f_n are Lipschitz, it will give a uniform convergence and then a pointwise convergence. 
First we consider a compact convex K⊂f with nonempty interior. We may find N∈ℕ and x_1,…,x_N∈ A∩(f ∖ K) such that K⊂ (x_1,…,x_N). We use the pointwise convergence on A to get that, for n large enough, f_n(x)≤ M for x∈(x_1,…,x_N) and some M>0 (take M=max_1≤ k≤ N f(x_k)+1). Then we prove that f_n is bounded from below on K. We consider a∈ A∩ K and δ_0:=sup_x∈ K|x-a|. For n large enough, f_n(a)≥ m for some m (take for example m=f(a)-1). We write δ_1 := min_(x,y)∈ K×∂(x_1,…,x_N)|x-y| and δ_2:= sup_x,y∈(x_1,…,x_N)|x-y|. Now, for x∈ K, we consider the half-line x+_+(a-x); it cuts ∂(x_1,…,x_N) at exactly one point y∈∂(x_1,…,x_N). Then a∈[x,y], and therefore a= |a-y|/|x-y| x+|a-x|/|x-y| y. By convexity, f_n(a)≤|a-y|/|x-y| f_n(x)+|a-x|/|x-y| f_n(y). Then f_n(x) ≥ -|a-x|/|a-y| M + |x-y|/|a-y| m ≥ -δ_0/δ_1 M + δ_2/δ_1 m. Finally, writing m_0:=-δ_0/δ_1 M + δ_2/δ_1 m, we have M≥ f_n≥ m_0 on K.

This allows us to prove that f_n is (M-m_0)/δ_1-Lipschitz on K. We consider x∈ K, a unit direction u, and f_n'∈∂ f_n(x). For a unique λ > 0, y:=x+λ u∈∂(x_1,…,x_N). As u is a unit vector, λ = |y-x|≥δ_1. By convexity, f_n(y)≥ f_n(x)+f_n'(x)· (y-x). Then M-m_0≥δ_1 |f_n'· u| (applying the same reasoning to -u), and finally |f_n'· u|≤(M-m_0)/δ_1. As this bound does not depend on u, |f_n'|≤(M-m_0)/δ_1 for any such subgradient. For n large enough, the f_n are uniformly Lipschitz on K, and so is f. The convergence is then uniform on K, and in particular pointwise on K. As this is true for any such K, the convergence is pointwise on f.

Now let us consider x∈ ( f)^c. The set (x,f)∖ f has a nonempty interior, because (x, f)>0 and f≠∅. As A is dense, we may consider a∈ A∩(x,f)∖ f. By definition of (x,f), we may find y∈f such that a=λ y + (1-λ) x. We have λ <1 because a∉ f. If λ = 0, then a=x, and f_n(x)=f_n(a)⟶∞ as n→∞. Otherwise, by the convexity inequality, f_n(a)≤λ f_n(y) + (1-λ) f_n(x). Then, as f_n(a)⟶∞ and f_n(y)⟶ f(y)<∞ as n→∞, we have f_n(x)⟶∞.

§ ACKNOWLEDGEMENTS

We are grateful to the two anonymous referees, whose fruitful remarks and comments contributed to deeply enhance this paper.
http://arxiv.org/abs/1702.08298v2
{ "authors": [ "Hadrien De March", "Nizar Touzi" ], "categories": [ "math.PR", "60G42, 49N05" ], "primary_category": "math.PR", "published": "20170227142358", "title": "Irreducible convex paving for decomposition of multi-dimensional martingale transport plans" }
Efficient coordinate-wise leading eigenvector computation

Jialei Wang (University of Chicago, 5801 S Ellis Ave, Chicago, IL 60637; equal contribution), Weiran Wang (Toyota Technological Institute at Chicago, 6045 S Kenwood Ave, Chicago, IL 60637; equal contribution), Dan Garber (Toyota Technological Institute at Chicago), Nathan Srebro (Toyota Technological Institute at Chicago; nati@ttic.edu)

Keywords: eigenvalue problem, power method, shift-and-invert, coordinate descent

We develop and analyze efficient "coordinate-wise" methods for finding the leading eigenvector, where each step involves only a vector-vector product. We establish global convergence with overall runtime guarantees that are at least as good as Lanczos's method and dominate it for a slowly decaying spectrum. Our methods are based on combining a shift-and-invert approach with coordinate-wise algorithms for linear regression.

§ INTRODUCTION

Extracting the top eigenvalues/eigenvectors of a large symmetric matrix is a fundamental step in various machine learning algorithms.
One prominent example of this problem is principal component analysis (PCA), in which we extract the top eigenvectors of the data covariance matrix, and there has been continuous effort in developing efficient stochastic/randomized algorithms for large-scale PCA (e.g., <cit.>). The more general eigenvalue problem for large matrices without the covariance structure is relatively less studied. The method of choice for this problem has been the power method, or the faster but often less known Lanczos algorithm <cit.>, which are based on iteratively computing matrix-vector multiplications with the input matrix until the component of the vector that lies in the trailing eigenspace vanishes. However, for very large-scale and dense matrices, even computing a single matrix-vector product is expensive. An alternative is to consider much cheaper vector-vector products: instead of updating all the entries of the vector on each iteration by a full matrix-vector product, we consider the possibility of updating only one coordinate, by computing the inner product of a single row of the matrix with the vector. Such operations do not even require storing the entire matrix in memory. Intuitively, this may result in an overall significant speedup, in certain likely scenarios, since certain coordinates in the matrix-vector product are more valuable than others for making local progress towards converging to the leading eigenvector. Indeed, this is precisely the rationale behind coordinate-descent methods, which have been extensively studied and are widely applied to convex optimization problems; see <cit.> for a comprehensive survey. Thus, given the structure of the eigenvalue problem, which is extremely suitable for coordinate-wise updates, and the celebrated success of coordinate-descent methods for convex optimization, a natural question is whether such updates can be applied to eigenvector computation with provable global convergence guarantees, despite the inherent non-convexity of the problem.

Recently, <cit.> have proposed two such methods, Coordinate-wise Power Method (CPM) and Symmetric Greedy Coordinate Descent (SGCD). Both methods update on each iteration only k entries of the vector, for some fixed k. CPM updates on each iteration the k coordinates that would change the most under one step of the power method, while SGCD applies a greedy heuristic for choosing the coordinates to be updated. The authors show that CPM enjoys a global convergence rate similar to that of the classical power iterations algorithm, provided that k, the number of coordinates to be updated on each iteration, is sufficiently large (or equivalently, the "noise" outside the k selected coordinates is sufficiently small). In principle, this might force k to be as large as the dimension d, and indeed in their experiments they set k to grow linearly with d, which is overall not significantly faster than standard power iterations, and does not truly capture the concept of coordinate updates proved useful for convex problems. The second algorithm proposed in <cit.>, SGCD, is shown to converge locally with a linear rate already for k=1 (i.e., only a single coordinate is updated on each iteration); however, this result assumes that the method is initialized with a vector that is already sufficiently close (in a non-trivial way) to the leading eigenvector.
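To make the cost contrast behind these methods concrete, the following sketch (ours; the single-coordinate rule is a schematic illustration of the cost model only, not the exact CPM or SGCD update) compares one full power-method step, a dense matrix-vector product costing O(d^2), with one coordinate step that reads a single row of the matrix and costs O(d):

```python
import numpy as np

def power_step(A, x):
    """One full power-method step: a dense matrix-vector product, O(d^2)."""
    y = A @ x
    return y / np.linalg.norm(y)

def coordinate_step(A, x, i):
    """Schematic single-coordinate update: only row i of A is read, O(d).

    Illustrative only: the coordinate-selection rules and update formulas
    of CPM/SGCD differ in their details.
    """
    x = x.copy()
    x[i] = A[i, :] @ x          # the i-th entry of the product A x
    return x / np.linalg.norm(x)
```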
The dependence of CPM and SGCD on the inverse relative eigengap[Defined as σ_1(M)/(σ_1(M) - σ_2(M)) for a positive semidefinite matrix M, where σ_i(M) is the i-th largest eigenvalue of M.] of the input matrix is similar to that of the power iterations algorithm, i.e., a linear dependence, which in principle is suboptimal, since a square-root dependence can be obtained by methods such as the Lanczos algorithm. We present globally-convergent coordinate-wise algorithms for the leading eigenvector problem which resolve the abovementioned concerns and significantly improve over previous algorithms. Our algorithms update only a single entry at each step and enjoy linear convergence. Furthermore, for a particular variant, the convergence rate depends only on the square root of the inverse relative eigengap, yielding a total runtime that dominates that of the standard power method and competes with that of Lanczos's method.

In Section <ref>, we discuss the basis of our algorithm, the shift-and-invert power method <cit.>, which transforms the eigenvalue problem into a series of convex least squares problems. In Section <ref>, we show the least squares problems can be solved using efficient coordinate descent methods that are well-studied for convex problems. This allows us to make use of principled coordinate selection rules for each update, all of which have established convergence guarantees. We provide a summary of the time complexities of different globally-convergent methods in Table <ref>. In particular, in cases where either the spectrum of the input matrix or the magnitude of its diagonal entries is slowly decaying, our methods can yield provable and significant improvements over previous methods. For example, for a spiked covariance model whose eigenvalues are ρ_1 > ρ_1 - Δ = ρ_2 = ρ_3 = …, our algorithm (SI-GSL) can have a runtime independent of the eigengap Δ, while the runtime of Lanczos's method depends on 1/√(Δ). We also verify this intuition empirically via numerical experiments.

Notations We use boldface uppercase letters (e.g., A) to denote matrices, and boldface lowercase letters (e.g., x) to denote vectors. For a positive definite matrix M, the vector norm ‖·‖_M is defined as ‖x‖_M = √(x^⊤ M x) = ‖M^{1/2} x‖ for any x. We use A[ij] to denote the element in row i and column j of the matrix A, and x[i] to denote the i-th element of the vector x, unless stated otherwise. Additionally, A[i:] and A[:j] denote the i-th row and j-th column of the matrix A, respectively.

Problem formulation We consider the task of extracting the top eigenvector of a symmetric positive definite matrix A ∈ 𝕊^d (extensions to other settings are discussed in Section <ref>). Let the complete set of eigenvalues of A be ρ_1 ≥ρ_2 ≥…≥ρ_d ≥ 0, with corresponding eigenvectors v_1, …, v_d, which form an orthonormal basis of ℝ^d. Without loss of generality, we assume ρ_1 ≤ 1 (which can always be obtained by rescaling the matrix). Furthermore, we assume the existence of a positive eigenvalue gap Δ:= ρ_1 - ρ_2 > 0, so that the top eigenvector is unique.
§ SHIFT-AND-INVERT POWER METHOD

In this section, we introduce the shift-and-invert approach to the eigenvalue problem and review its analysis, which will be the basis for our algorithms. The most popular iterative algorithm for the leading eigenvalue problem is the power method, which iteratively performs the following matrix-vector multiplication and normalization steps

w_t ← A v_{t-1}, v_t ← w_t/‖w_t‖, for t = 1, ….

It can be shown that the iterates become increasingly aligned with the eigenvector corresponding to the largest eigenvalue in magnitude, and the number of iterations needed to achieve ϵ-suboptimality in alignment is O( (ρ_1/Δ) log( 1/(|v_0^⊤ v_1| ϵ) ) ) <cit.>[We can always guarantee that |v_0^⊤ v_1| = Ω(1/√(d)) by taking v_0 to be a random unit vector.]. We see that the computational complexity depends linearly on 1/Δ, and thus the power method converges slowly if the gap is small. Shift-and-invert <cit.> can be viewed as a pre-conditioning approach which improves the dependence of the time complexity on the eigenvalue gap. The main idea behind this approach is that, instead of running the power method on A directly, we can equivalently run the power method on the matrix (λI - A)^{-1}, where λ > ρ_1 is a shifting parameter. Observe that (λI - A)^{-1} has exactly the same set of eigenvectors as A, and its eigenvalues are β_1 ≥ β_2 ≥ … ≥ β_d > 0, where β_i = 1/(λ - ρ_i). If we have access to a λ that is slightly larger than ρ_1, and in particular if λ - ρ_1 = O(1) · Δ, then the inverse relative eigengap of (λI - A)^{-1} is β_1/(β_1 - β_2) = O(1), which means that the power method, applied to this shifted and inverted matrix, will converge to the top eigenvector in only a poly-logarithmic number of iterations, in particular, without linear dependence on 1/Δ. In the shift-and-invert power method, the matrix-vector multiplications have the form w_t ← (λI - A)^{-1} v_{t-1}, which is equivalent to solving the convex least squares problem

w_t ← argmin_w (1/2) w^⊤ (λI - A) w - v_{t-1}^⊤ w.

Solving such least squares problems exactly could itself be costly if d is large. Fortunately, the power method with approximate matrix-vector multiplications still converges, provided that the error in each step is controlled; analysis of the inexact power method together with applications to PCA and CCA can be found in <cit.>.
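A minimal sketch of the resulting scheme is given below: the outer loop is the shift-and-invert power iteration, and the inner solver is plain cyclic coordinate descent on the quadratic above. This is our illustration only; the shift λ is assumed given (finding it is itself part of the full algorithm), and the principled selection rules discussed in the next section are replaced here by a cyclic rule for brevity.

import numpy as np

def cd_least_squares(B, b, w0, n_inner):
    """Approximately minimize (1/2) w^T B w - b^T w by cyclic coordinate
    descent, where B = lam*I - A is positive definite. Each update costs
    one row inner product B[i, :] @ w, i.e., a vector-vector product."""
    w = w0.copy()
    for it in range(n_inner):
        i = it % len(b)
        w[i] -= (B[i] @ w - b[i]) / B[i, i]   # exact minimization over w[i]
    return w

def shift_invert_power(A, lam, n_outer=30, n_inner=None):
    d = A.shape[0]
    B = lam * np.eye(d) - A
    n_inner = n_inner or 20 * d
    v = np.random.default_rng(2).standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(n_outer):
        w = cd_least_squares(B, v, v, n_inner)  # inexact (lam*I - A)^{-1} v
        v = w / np.linalg.norm(w)
    return v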
http://arxiv.org/abs/1702.07834v1
{ "authors": [ "Jialei Wang", "Weiran Wang", "Dan Garber", "Nathan Srebro" ], "categories": [ "cs.NA", "cs.LG", "stat.ML" ], "primary_category": "cs.NA", "published": "20170225051125", "title": "Efficient coordinate-wise leading eigenvector computation" }
Tensor Balancing on Statistical Manifold

Mahito Sugiyama National Institute of Informatics JST, PRESTO
Hiroyuki Nakahara RIKEN Brain Science Institute
Koji Tsuda The University of Tokyo RIKEN AIP; NIMS

December 30, 2023
=====================================================================================================================================================================================================================

We solve tensor balancing, rescaling an Nth order nonnegative tensor by multiplying N tensors of order N - 1 so that every fiber sums to one. This generalizes a fundamental process of matrix balancing used to compare matrices in a wide range of applications from biology to economics. We present an efficient balancing algorithm with quadratic convergence using Newton's method and show in numerical experiments that the proposed algorithm is several orders of magnitude faster than existing ones. To theoretically prove the correctness of the algorithm, we model tensors as probability distributions in a statistical manifold and realize tensor balancing as projection onto a submanifold. The key to our algorithm is that the gradient of the manifold, used as a Jacobian matrix in Newton's method, can be analytically obtained using the Möbius inversion formula, the essence of combinatorial mathematics. Our model is not limited to tensor balancing, but has wide applicability as it includes various statistical and machine learning models such as weighted DAGs and Boltzmann machines.

§ INTRODUCTION

Matrix balancing is the problem of rescaling a given square nonnegative matrix A ∈^n × n_≥ 0 to a doubly stochastic matrix RAS, where every row and column sums to one, by multiplying two diagonal matrices R and S. This is a fundamental process for analyzing and comparing matrices in a wide range of applications, including input-output analysis in economics, called the RAS approach <cit.>, seat assignments in elections <cit.>, Hi-C data analysis <cit.>, the Sudoku puzzle <cit.>, and the optimal transportation problem <cit.>. An excellent review of this theory and its applications is given by <cit.>.

The standard matrix balancing algorithm is the Sinkhorn-Knopp algorithm <cit.>, a special case of Bregman's balancing method <cit.> that iterates rescaling of each row and column until convergence. The algorithm is widely used in the above applications due to its simple implementation and theoretically guaranteed convergence. However, the algorithm converges only linearly <cit.>, which is prohibitively slow for recently emerging large and sparse matrices. Although <cit.> and <cit.> tried to achieve faster convergence by approximating each step of Newton's method, the exact Newton's method with quadratic convergence has not been intensively studied yet.

Another open problem is tensor balancing, which is a generalization of balancing from matrices to higher-order multidimensional arrays, or tensors. The task is to rescale an Nth order nonnegative tensor to a multistochastic tensor, in which every fiber sums to one, by multiplying N tensors of order N - 1. There are some results about mathematical properties of multistochastic tensors <cit.>.
Until now, however, there has been no tensor balancing algorithm with guaranteed convergence that transforms a given tensor into a multistochastic tensor. Here we show that Newton's method with quadratic convergence can be applied to tensor balancing while avoiding solving a linear system on the full tensor. Our strategy is to realize matrix and tensor balancing as projection onto a dually flat Riemannian submanifold (Figure <ref>), which is a statistical manifold and known to be the essential structure for probability distributions in information geometry <cit.>. Using a partially ordered outcome space, we generalize the log-linear model <cit.> used to model the higher-order combinations of binary variables <cit.>, which allows us to model tensors as probability distributions in the statistical manifold. The remarkable property of our model is that the gradient of the manifold can be analytically computed using the Möbius inversion formula <cit.>, the heart of combinatorial mathematics <cit.>, which enables us to directly obtain the Jacobian matrix in Newton's method. Moreover, we show that (n - 1)^N entries for the size n^N of a tensor are invariant with respect to one of the two coordinate systems of the statistical manifold. Thus the number of equations in Newton's method is O(n^{N - 1}).

The remainder of this paper is organized as follows: We begin with a low-level description of our matrix balancing algorithm in Section <ref> and demonstrate its efficiency in numerical experiments in Section <ref>. To guarantee the correctness of the algorithm and extend it to tensor balancing, we provide theoretical analysis in Section <ref>. In Section <ref>, we introduce a generalized log-linear model associated with a partial order structured outcome space, followed by introducing the dually flat Riemannian structure in Section <ref>. In Section <ref>, we show how to use Newton's method to compute projection of a probability distribution onto a submanifold. Finally, we formulate the matrix and tensor balancing problem in Section <ref> and summarize our contributions in Section <ref>.

§ THE MATRIX BALANCING ALGORITHM

Given a nonnegative square matrix A = (a_ij) ∈^n × n_≥ 0, the task of matrix balancing is to find r⃗, s⃗ ∈^n that satisfy

(RAS)1⃗ = 1⃗, (RAS)^T1⃗ = 1⃗,

where R = diag(r⃗) and S = diag(s⃗). The balanced matrix A' = RAS is called doubly stochastic, in which each entry a'_ij = a_ij r_i s_j and all the rows and columns sum to one. The most popular algorithm is the Sinkhorn-Knopp algorithm, which repeats updating r⃗ and s⃗ as r⃗ = 1/(As⃗) and s⃗ = 1/(A^T r⃗). Hereafter we write [n] = {1, 2, …, n}.

In our algorithm, instead of directly updating r⃗ and s⃗, we update two parameters θ and η defined as

log p_ij = ∑_{i' ≤ i} ∑_{j' ≤ j} θ_{i'j'}, η_ij = ∑_{i' ≥ i} ∑_{j' ≥ j} p_{i'j'}

for each i, j ∈ [n], where we normalize entries as p_ij = a_ij / ∑_ij a_ij so that ∑_ij p_ij = 1. We assume for simplicity that each entry is strictly larger than zero. The assumption will be removed in Section <ref>. The key to our approach is that we update θ_ij^(t) with i = 1 or j = 1 by Newton's method at each iteration t = 1, 2, … while fixing θ_ij with i, j ≠ 1, so that η_ij^(t) satisfies the following condition (Figure <ref>):

η_i1^(t) = (n - i + 1)/n, η_1j^(t) = (n - j + 1)/n.

Note that the rows and columns sum not to 1 but to 1/n due to the normalization.
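As a concrete illustration of the (θ, η) coordinates, the following minimal numpy sketch (our illustration, not the reference implementation linked below) computes θ and η for a strictly positive matrix; note that with 0-based indices the balanced condition η_i1 = (n - i + 1)/n reads eta[i, 0] = (n - i)/n.

import numpy as np

def theta_eta(p):
    """theta and eta of a strictly positive matrix p with sum(p) = 1:
    log p is the 2-D cumulative sum of theta, so theta is recovered by
    second-order differencing, and eta is a reverse 2-D cumulative sum."""
    L = np.log(p)
    theta = L.copy()
    theta[1:, :] -= L[:-1, :]
    theta[:, 1:] -= L[:, :-1]
    theta[1:, 1:] += L[:-1, :-1]
    eta = p[::-1, ::-1].cumsum(axis=0).cumsum(axis=1)[::-1, ::-1]
    return theta, eta

n = 4
A = np.ones((n, n)) / n          # doubly stochastic example
p = A / A.sum()
theta, eta = theta_eta(p)
print(eta[:, 0])                 # [1, (n-1)/n, ..., 1/n]: balanced
print(eta[0, :])                 # same along the first row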
The update formula is described as

[ θ_11^(t + 1); θ_12^(t + 1); ⋮; θ_1n^(t + 1); θ_21^(t + 1); ⋮; θ_n1^(t + 1) ] = [ θ_11^(t); θ_12^(t); ⋮; θ_1n^(t); θ_21^(t); ⋮; θ_n1^(t) ] - J^-1 [ η_11^(t) - (n - 1 + 1)/n; η_12^(t) - (n - 2 + 1)/n; ⋮; η_1n^(t) - (n - n + 1)/n; η_21^(t) - (n - 2 + 1)/n; ⋮; η_n1^(t) - (n - n + 1)/n ],

where J is the Jacobian matrix given as

J_(ij)(i'j') = ∂η^(t)_ij/∂θ^(t)_i'j' = η_max{i, i'}max{j, j'} - n^2 η_ij η_i'j',

which is derived from our theoretical result in Theorem <ref>. Since J is a (2n - 1) × (2n - 1) matrix, the time complexity of each update is O(n^3), which is needed to compute the inverse of J. After updating to θ_ij^(t + 1), we can compute p_ij^(t + 1) and η_ij^(t + 1) by Equation (<ref>). Since this update does not ensure the condition ∑_ij p_ij^(t + 1) = 1, we again update θ_11^(t + 1) as

θ_11^(t + 1) = θ_11^(t + 1) - log ∑_ij p_ij^(t + 1)

and recompute p_ij^(t + 1) and η_ij^(t + 1) for each i, j ∈ [n]. By iterating the above update process in Equation (<ref>) until convergence, A = (a_ij) with a_ij = n p_ij becomes doubly stochastic.

§ NUMERICAL EXPERIMENTS

We evaluate the efficiency of our algorithm compared to the two prominent balancing methods, the standard Sinkhorn-Knopp algorithm <cit.> and the state-of-the-art algorithm BNEWT <cit.>, which uses Newton's method-like iterations with conjugate gradients. All experiments were conducted on Amazon Linux AMI release 2016.09 with a single core of 2.3 GHz Intel Xeon CPU E5-2686 v4 and 256 GB of memory. All methods were implemented in C++ with the Eigen library and compiled with gcc 4.8.3[An implementation of algorithms for matrices and third order tensors is available at: <https://github.com/mahito-sugiyama/newton-balancing>]. We have carefully implemented BNEWT by directly translating the MATLAB code provided in <cit.> into C++ with the Eigen library for fair comparison, and used the default parameters. We measured the residual of a matrix A' = (a'_ij) by the squared norm ‖(A'1⃗ - 1⃗, A'^T1⃗ - 1⃗)‖_2, where each entry a'_ij is obtained as n p_ij in our algorithm, and ran each of the three algorithms until the residual fell below the tolerance threshold 10^-6.

Hessenberg Matrix. The first set of experiments used a Hessenberg matrix, which has been a standard benchmark for matrix balancing <cit.>. Each entry of an n × n Hessenberg matrix H_n = (h_ij) is given as h_ij = 0 if j < i - 1 and h_ij = 1 otherwise. We varied the size n from 10 to 5,000, and measured running time (in seconds) and the number of iterations of each method. Results are plotted in Figure <ref>. Our balancing algorithm with Newton's method (plotted in blue in the figures) is clearly the fastest: It is three to five orders of magnitude faster than the standard Sinkhorn-Knopp algorithm (plotted in red). Although the BNEWT algorithm (plotted in green) is competitive if n is small, it suddenly fails to converge whenever n ≥ 200, which is consistent with results in the original paper <cit.>, where there is no result for the setting n ≥ 200 on the same matrix. Moreover, our method converges in around 10 to 20 steps, which is about three and seven orders of magnitude smaller than BNEWT and Sinkhorn-Knopp, respectively, at n = 100. To see the behavior of the rate of convergence in detail, we plot the convergence graph in Figure <ref> for n = 20, where we observe the slow convergence rate of the Sinkhorn-Knopp algorithm and the unstable convergence of the BNEWT algorithm, which contrasts with our quick convergence.

Trefethen Matrix.
Next, we collected a set of Trefethen matrices from a collection website[<http://www.cise.ufl.edu/research/sparse/matrices/>], which are nonnegative matrices with primes on the diagonal. Results are plotted in Figure <ref>, where we observe the same trend as before: Our algorithm is the fastest, about four orders of magnitude faster than the Sinkhorn-Knopp algorithm. Note that larger matrices with n > 300 do not have total support, which is the necessary condition for matrix balancing <cit.>, while the BNEWT algorithm fails to converge if n = 200 or n = 300.

§ THEORETICAL ANALYSIS

In the following, we provide theoretical support to our algorithm by formulating the problem as a projection within a statistical manifold, in which a matrix corresponds to an element, that is, a probability distribution, in the manifold. We show that a balanced matrix forms a submanifold and matrix balancing is projection of a given distribution onto the submanifold, where the Jacobian matrix in Equation (<ref>) is derived from the gradient of the manifold.

§.§ Formulation

We introduce our log-linear probabilistic model, where the outcome space is a partially ordered set, or a poset <cit.>. We prepare basic notations and the key mathematical tool for posets, the Möbius inversion formula, followed by formulating the log-linear model.

§.§.§ Möbius Inversion

A poset (S, ≤), the set of elements S and a partial order ≤ on S, is a fundamental structured space in computer science. A partial order “≤” is a relation between elements in S that satisfies the following three properties: For all x, y, z ∈ S, (1) x ≤ x (reflexivity), (2) x ≤ y, y ≤ x ⇒ x = y (antisymmetry), and (3) x ≤ y, y ≤ z ⇒ x ≤ z (transitivity). In what follows, S is always finite and includes the least element (bottom) ⊥ ∈ S; that is, ⊥ ≤ x for all x ∈ S. We denote S ∖ {⊥} by S^+. <cit.> introduced the Möbius inversion formula on posets by generalizing the inclusion-exclusion principle. Let ζ: S × S → {0, 1} be the zeta function defined as

ζ(s, x) = 1 if s ≤ x, and 0 otherwise.

The Möbius function μ: S × S → ℤ satisfies ζμ = I, and is inductively defined for all x, y with x ≤ y as

μ(x, y) = 1 if x = y; μ(x, y) = -∑_{x ≤ s < y} μ(x, s) if x < y; and μ(x, y) = 0 otherwise.

From the definition, it follows that

∑_{s ∈ S} ζ(s, y)μ(x, s) = ∑_{x ≤ s ≤ y} μ(x, s) = δ_xy, ∑_{s ∈ S} ζ(x, s)μ(s, y) = ∑_{x ≤ s ≤ y} μ(s, y) = δ_xy,

with the Kronecker delta δ such that δ_xy = 1 if x = y and δ_xy = 0 otherwise. Then for any functions f, g, and h with the domain S such that

g(x) = ∑_{s ∈ S} ζ(s, x) f(s) = ∑_{s ≤ x} f(s), h(x) = ∑_{s ∈ S} ζ(x, s) f(s) = ∑_{s ≥ x} f(s),

f is uniquely recovered with the Möbius function:

f(x) = ∑_{s ∈ S} μ(s, x) g(s), f(x) = ∑_{s ∈ S} μ(x, s) h(s).

This is called the Möbius inversion formula and is at the heart of enumerative combinatorics <cit.>.

§.§.§ Log-Linear Model on Posets

We consider a probability vector p on (S, ≤) that gives a discrete probability distribution with the outcome space S.
A probability vector is treated as a mapping p: S → (0, 1) such that ∑_{x ∈ S} p(x) = 1, where every entry p(x) is assumed to be strictly larger than zero. Using the zeta and the Möbius functions, let us introduce two mappings θ: S → ℝ and η: S → ℝ as

θ(x) = ∑_{s ∈ S} μ(s, x) log p(s), η(x) = ∑_{s ∈ S} ζ(x, s) p(s) = ∑_{s ≥ x} p(s).

From the Möbius inversion formula, we have

log p(x) = ∑_{s ∈ S} ζ(s, x) θ(s) = ∑_{s ≤ x} θ(s), p(x) = ∑_{s ∈ S} μ(x, s) η(s).

They are generalizations of the log-linear model <cit.> that gives the probability p(x⃗) of an n-dimensional binary vector x⃗ = (x^1, …, x^n) ∈ {0, 1}^n as

log p(x⃗) = ∑_i θ^i x^i + ∑_{i < j} θ^ij x^i x^j + ∑_{i < j < k} θ^ijk x^i x^j x^k + … + θ^{1…n} x^1 x^2 … x^n - ψ,

where θ⃗ = (θ^1, …, θ^{12…n}) is a parameter vector, ψ is a normalizer, and η⃗ = (η^1, …, η^{12…n}) represents the expectation of variable combinations such that

η^i = E[x^i] = Pr(x^i = 1), η^ij = E[x^i x^j] = Pr(x^i = x^j = 1), i < j, …, η^{1…n} = E[x^1 … x^n] = Pr(x^1 = … = x^n = 1).

They coincide with Equations (<ref>) and (<ref>) when we let S = 2^V with V = {1, 2, …, n}, each x ∈ S be the set of indices of “1” of x⃗, and the order ≤ be the inclusion relationship, that is, x ≤ y if and only if x ⊆ y. <cit.> have pointed out that θ⃗ can be computed from p using the inclusion-exclusion principle in the log-linear model. We exploit this combinatorial property of the log-linear model using the Möbius inversion formula on posets and extend the log-linear model from the power set 2^V to any kind of poset (S, ≤). <cit.> studied a relevant log-linear model, but the relationship with the Möbius inversion formula has not been analyzed yet.

§.§ Dually Flat Riemannian Manifold

We theoretically analyze our log-linear model introduced in Equations (<ref>), (<ref>) and show that they form dual coordinate systems on a dually flat manifold, which has been mainly studied in the area of information geometry <cit.>. Moreover, we show that the Riemannian metric and connection of our model can be analytically computed in closed forms. In the following, we denote by ξ the function θ or η and by ∇ the gradient operator with respect to S^+ = S ∖ {⊥}, i.e., (∇ f(ξ))(x) = ∂f/∂ξ(x) for x ∈ S^+, and denote by 𝒮 the set of probability distributions specified by probability vectors, which forms a statistical manifold. We use uppercase letters P, Q, R, … for points (distributions) in 𝒮 and their lowercase letters p, q, r, … for the corresponding probability vectors treated as mappings. We write θ_P and η_P if they are connected with p by Equations (<ref>) and (<ref>), respectively, and abbreviate subscripts if there is no ambiguity.

§.§.§ Dually Flat Structure

We show that 𝒮 has the dually flat Riemannian structure induced by the two functions θ and η in Equations (<ref>) and (<ref>). We define ψ(θ) as

ψ(θ) = -θ(⊥) = -log p(⊥),

which corresponds to the normalizer of p. It is a convex function since we have

ψ(θ) = log ∑_{x ∈ S} exp( ∑_{⊥ < s ≤ x} θ(s) )

from log p(x) = ∑_{⊥ < s ≤ x} θ(s) - ψ(θ). We apply the Legendre transformation to ψ(θ), given as

ϕ(η) = max_θ' (θ'η - ψ(θ')), θ'η = ∑_{x ∈ S^+} θ'(x) η(x).

Then ϕ(η) coincides with the negative entropy:

ϕ(η) = ∑_{x ∈ S} p(x) log p(x).

From Equation (<ref>), we have θ'η = ∑_{x ∈ S^+} ( ∑_{⊥ < s ≤ x} μ(s, x) log p'(s) ) ( ∑_{s ≥ x} p(s) ) = ∑_{x ∈ S^+} p(x)( log p'(x) - log p'(⊥) ). Thus it holds that θ'η - ψ(θ') = ∑_{x ∈ S} p(x) log p'(x). Hence it is maximized with p(x) = p'(x). Since they are connected with each other by the Legendre transformation, they form a dual coordinate system ∇ψ(θ) and ∇ϕ(η) of 𝒮 <cit.>, which coincides with θ and η as follows:

∇ψ(θ) = η, ∇ϕ(η) = θ.
They can be directly derived from our definitions (Equations (<ref>) and (<ref>)) as

∂ψ(θ)/∂θ(x) = ∑_{y ≥ x} exp(∑_{⊥ < s ≤ y} θ(s)) / ∑_{y ∈ S} exp(∑_{⊥ < s ≤ y} θ(s)) = ∑_{s ≥ x} p(s) = η(x),
∂ϕ(η)/∂η(x) = ∂/∂η(x) (θη - ψ(θ)) = θ(x).

Moreover, we can confirm the orthogonality of θ and η as

E[ ∂/∂θ(x) log p(s) · ∂/∂η(y) log p(s) ] = ∑_{s ∈ S} [ p(s) ( ∂/∂θ(x) ∑_{u ∈ S} ζ(u, s)θ(u) ) ( ∂/∂η(y) log( ∑_{u ∈ S} μ(s, u)η(u) ) ) ] = ∑_{s ∈ S} [ p(s) (ζ(x, s) - η(x)) μ(s, y)/p(s) ] = ∑_{s ∈ S} ζ(x, s)μ(s, y) = δ_xy.

The last equation holds from Equation (<ref>); hence the Möbius inversion directly leads to the orthogonality. The Bregman divergence is known to be the canonical divergence <cit.> to measure the difference between two distributions P and Q on a dually flat manifold, which is defined as

D[P, Q] = ψ(θ_P) + ϕ(η_Q) - θ_P η_Q.

In our case, since we have ϕ(η_Q) = ∑_{x ∈ S} q(x) log q(x) and θ_P η_Q - ψ(θ_P) = ∑_{x ∈ S} q(x) log p(x) from Theorem <ref> and Equation (<ref>), it is given as

D[P, Q] = ∑_{x ∈ S} q(x) log( q(x)/p(x) ),

which coincides with the Kullback–Leibler divergence (KL divergence) from Q to P: D[P, Q] = D_KL[Q, P].

§.§.§ Riemannian Structure

Next we analyze the Riemannian structure on 𝒮 and show that the Möbius inversion formula enables us to compute the Riemannian metric of 𝒮. The manifold (𝒮, g(ξ)) is a Riemannian manifold with the Riemannian metric g(ξ) such that for all x, y ∈ S^+

g_xy(ξ) = ∑_{s ∈ S} ζ(x, s)ζ(y, s)p(s) - η(x)η(y) if ξ = θ, and g_xy(ξ) = ∑_{s ∈ S} μ(s, x)μ(s, y) p(s)^-1 if ξ = η.

Since the Riemannian metric is defined as

g(θ) = ∇∇ψ(θ), g(η) = ∇∇ϕ(η),

when ξ = θ we have

g_xy(θ) = ∂^2 ψ(θ)/∂θ(x)∂θ(y) = ∂η(y)/∂θ(x) = ∂/∂θ(x) ∑_{s ∈ S} ζ(y, s) exp( ∑_{⊥ < u ≤ s} θ(u) - ψ(θ) ) = ∑_{s ∈ S} ζ(x, s)ζ(y, s)p(s) - η(x)η(y).

When ξ = η, it follows that

g_xy(η) = ∂^2 ϕ(η)/∂η(x)∂η(y) = ∂θ(y)/∂η(x) = ∂/∂η(x) ∑_{s ≤ y} μ(s, y) log p(s) = ∂/∂η(x) ∑_{s ≤ y} μ(s, y) log( ∑_{u ≥ s} μ(s, u) η(u) ) = ∑_{s ∈ S} μ(s, x)μ(s, y) / ∑_{u ≥ s} μ(s, u)η(u) = ∑_{s ∈ S} μ(s, x)μ(s, y) p(s)^-1.

Since g(ξ) coincides with the Fisher information matrix,

E[ ∂/∂θ(x) log p(s) · ∂/∂θ(y) log p(s) ] = g_xy(θ), E[ ∂/∂η(x) log p(s) · ∂/∂η(y) log p(s) ] = g_xy(η).

Then the Riemannian (Levi-Civita) connection Γ(ξ) with respect to ξ, which is defined as

Γ_xyz(ξ) = (1/2)( ∂g_yz(ξ)/∂ξ(x) + ∂g_xz(ξ)/∂ξ(y) - ∂g_xy(ξ)/∂ξ(z) )

for all x, y, z ∈ S^+, can be analytically obtained. The Riemannian connection Γ(ξ) on the manifold (𝒮, g(ξ)) is given in the following for all x, y, z ∈ S^+:

Γ_xyz(ξ) = (1/2) ∑_{s ∈ S} (ζ(x, s) - η(x))(ζ(y, s) - η(y))(ζ(z, s) - η(z)) p(s) if ξ = θ, and Γ_xyz(ξ) = -(1/2) ∑_{s ∈ S} μ(s, x)μ(s, y)μ(s, z) p(s)^-2 if ξ = η.

We have for all x, y, z ∈ S,

∂g_yz(θ)/∂θ(x) = ∂/∂θ(x) ∑_{s ∈ S} ζ(y, s)ζ(z, s)p(s) - ∂/∂θ(x) η(y)η(z),

where, using ∂p(s)/∂θ(x) = (ζ(x, s) - η(x))p(s),

∂/∂θ(x) ∑_{s ∈ S} ζ(y, s)ζ(z, s)p(s) = ∑_{s ∈ S} ζ(x, s)ζ(y, s)ζ(z, s)p(s) - η(x) ∑_{s ∈ S} ζ(y, s)ζ(z, s)p(s)

and

∂/∂θ(x) η(y)η(z) = (∂η(y)/∂θ(x))η(z) + (∂η(z)/∂θ(x))η(y) = η(z) ∑_{s ∈ S} ζ(x, s)ζ(y, s)p(s) + η(y) ∑_{s ∈ S} ζ(x, s)ζ(z, s)p(s) - 2η(x)η(y)η(z).

It follows that

∂g_yz(θ)/∂θ(x) = ∑_{s ∈ S} (ζ(x, s) - η(x))(ζ(y, s) - η(y))(ζ(z, s) - η(z)) p(s).
On the other hand,

∂g_yz(η)/∂η(x) = ∂/∂η(x) ∑_{s ∈ S} μ(s, y)μ(s, z)p(s)^-1 = ∂/∂η(x) ∑_{s ∈ S} μ(s, y)μ(s, z) ( ∑_{u ≥ s} μ(s, u) η(u) )^-1 = -∑_{s ∈ S} μ(s, x)μ(s, y)μ(s, z) ( ∑_{u ≥ s} μ(s, u) η(u) )^-2 = -∑_{s ∈ S} μ(s, x)μ(s, y)μ(s, z) p(s)^-2.

Therefore, from the definition of Γ(ξ), it follows that

Γ_xyz(θ) = (1/2)( ∂g_yz(θ)/∂θ(x) + ∂g_xz(θ)/∂θ(y) - ∂g_xy(θ)/∂θ(z) ) = (1/2) ∑_{s ∈ S} (ζ(x, s) - η(x))(ζ(y, s) - η(y))(ζ(z, s) - η(z)) p(s),
Γ_xyz(η) = (1/2)( ∂g_yz(η)/∂η(x) + ∂g_xz(η)/∂η(y) - ∂g_xy(η)/∂η(z) ) = -(1/2) ∑_{s ∈ S} μ(s, x)μ(s, y)μ(s, z) p(s)^-2.

§.§ The Projection Algorithm

Projection of a distribution onto a submanifold is essential; several machine learning algorithms are known to be formulated as projection of a distribution empirically estimated from data onto a submanifold that is specified by the target model <cit.>. Here we define projection of distributions on posets and show that Newton's method can be applied to perform projection, as the Jacobian matrix can be analytically computed.

§.§.§ Definition

Let 𝒮(β) be a submanifold of 𝒮 such that

𝒮(β) = { P ∈ 𝒮 | θ_P(x) = β(x) for all x ∈ dom(β) },

specified by a function β with dom(β) ⊆ S^+. Projection of P ∈ 𝒮 onto 𝒮(β), called m-projection, which is defined as the distribution P_β ∈ 𝒮(β) such that

θ_P_β(x) = β(x) if x ∈ dom(β), η_P_β(x) = η_P(x) if x ∈ S^+ ∖ dom(β),

is the minimizer of the KL divergence from P to 𝒮(β):

P_β = argmin_{Q ∈ 𝒮(β)} D_KL[P, Q].

The dually flat structure with the coordinate systems θ and η guarantees that the projected distribution P_β always exists and is unique <cit.>. Moreover, the Pythagorean theorem holds in the dually flat manifold, that is, for any Q ∈ 𝒮(β) we have

D_KL[P, Q] = D_KL[P, P_β] + D_KL[P_β, Q].

We can switch η and θ in the submanifold 𝒮(β) by changing D_KL[P, Q] to D_KL[Q, P], where the projected distribution P_β of P is given as

θ_P_β(x) = θ_P(x) if x ∈ S^+ ∖ dom(β), η_P_β(x) = β(x) if x ∈ dom(β).

This projection is called e-projection.

[Boltzmann machine] Given a Boltzmann machine represented as an undirected graph G = (V, E) with a vertex set V and an edge set E ⊆ {{i, j} | i, j ∈ V}. The set of probability distributions that can be modeled by a Boltzmann machine G coincides with the submanifold

𝒮_B = { P ∈ 𝒮 | θ_P(x) = 0 if |x| > 2 or x ∉ E },

with S = 2^V. Let P̂ be an empirical distribution estimated from a given dataset. The learned model is the m-projection of the empirical distribution P̂ onto 𝒮_B, where the resulting distribution P_β is given as

θ_P_β(x) = 0 if |x| > 2 or x ∉ E, η_P_β(x) = η_P̂(x) if |x| = 1 or x ∈ E.

§.§.§ Computation

Here we show how to compute projection of a given probability distribution. We show that Newton's method can be used to efficiently compute the projected distribution P_β by iteratively updating P_β^(0) = P as P_β^(0), P_β^(1), P_β^(2), … until converging to P_β. Let us start with the m-projection, initializing P_β^(0) = P. In each iteration t, we update θ_P_β^(t)(x) for all x ∈ dom(β) while fixing η_P_β^(t)(x) = η_P(x) for all x ∈ S^+ ∖ dom(β), which is possible from the orthogonality of θ and η. Using Newton's method, η_P_β^(t + 1)(x) should satisfy

(θ_P_β^(t)(x) - β(x)) + ∑_{y ∈ dom(β)} J_xy (η_P_β^(t + 1)(y) - η_P_β^(t)(y)) = 0

for every x ∈ dom(β), where J_xy is an entry of the |dom(β)| × |dom(β)| Jacobian matrix J and given as

J_xy = ∂θ_P_β^(t)(x)/∂η_P_β^(t)(y) = ∑_{s ∈ S} μ(s, x)μ(s, y) p_β^(t)(s)^-1

from Theorem <ref>. Therefore, we have the update formula for all x ∈ dom(β) as

η_P_β^(t + 1)(x) = η_P_β^(t)(x) - ∑_{y ∈ dom(β)} J^-1_xy (θ_P_β^(t)(y) - β(y)).

In e-projection, we update η_P_β^(t)(x) for x ∈ dom(β) while fixing θ_P_β^(t)(x) = θ_P(x) for all x ∈ S^+ ∖ dom(β).
To ensure η_P_β^(t)(⊥) = 1, we add ⊥ to dom(β) and set β(⊥) = 1. We update θ_P_β^(t)(x) at each step t as

θ_P_β^(t + 1)(x) = θ_P_β^(t)(x) - ∑_{y ∈ dom(β)} J'^-1_xy (η_P_β^(t)(y) - β(y)),
J'_xy = ∂η_P_β^(t)(x)/∂θ_P_β^(t)(y) = ∑_{s ∈ S} ζ(x, s)ζ(y, s)p_β^(t)(s) - |S| η_P_β^(t)(x) η_P_β^(t)(y).

In this case, we also need to update θ_P_β^(t)(⊥), as it is not guaranteed to be fixed. Let us define

p'^(t + 1)_β(x) = p_β^(t)(x) ∏_{s ∈ dom(β)} ( exp(θ_P_β^(t + 1)(s)) / exp(θ_P_β^(t)(s)) )^{ζ(s, x)}.

Since we have

p_β^(t + 1)(x) = ( exp(θ_P_β^(t + 1)(⊥)) / exp(θ_P_β^(t)(⊥)) ) p'^(t + 1)_β(x),

it follows that

θ_P_β^(t + 1)(⊥) - θ_P_β^(t)(⊥) = -log( exp(θ_P_β^(t)(⊥)) + ∑_{x ∈ S^+} p'^(t + 1)_β(x) ).

The time complexity of each iteration is O(|dom(β)|^3), which is required to compute the inverse of the Jacobian matrix. Global convergence of the projection algorithm is always guaranteed by the convexity of the submanifold 𝒮(β) defined in Equation (<ref>). Since 𝒮(β) is always convex with respect to the θ- and η-coordinates, it is straightforward to see that our e-projection is an instance of the Bregman algorithm onto a convex region, which is well known to always converge to the global solution <cit.>.

§ BALANCING MATRICES AND TENSORS

Now we are ready to solve the problem of matrix and tensor balancing as projection on a dually flat manifold.

§.§ Matrix Balancing

Recall that the task of matrix balancing is to find r⃗, s⃗ ∈^n that satisfy (RAS)1⃗ = 1⃗ and (RAS)^T1⃗ = 1⃗ with R = diag(r⃗) and S = diag(s⃗) for a given nonnegative square matrix A = (a_ij) ∈^n × n_≥ 0. Let us define S as

S = { (i, j) | i, j ∈ [n] and a_ij ≠ 0 },

where we remove zero entries from the outcome space S as our formulation cannot treat zero probabilities, and give each probability as p((i, j)) = a_ij / ∑_ij a_ij. The partial order ≤ of S is naturally introduced as

x = (i, j) ≤ y = (k, l) ⇔ i ≤ k and j ≤ l,

resulting in ⊥ = (1, 1). In addition, we define ι⃗_k, m for each k ∈ [n] and m ∈ {1, 2} such that

ι⃗_k, m = min{ x = (i_1, i_2) ∈ S | i_m = k },

where the minimum is with respect to the order ≤. If ι⃗_k, m does not exist, we just remove the entire kth row if m = 1 or the kth column if m = 2 from A. Then we switch rows and columns of A so that the condition

ι⃗_1, m ≤ ι⃗_2, m ≤ … ≤ ι⃗_n, m

is satisfied for each m ∈ {1, 2}, which is possible for any matrix. Since we have

η(ι⃗_k, m) - η(ι⃗_k + 1, m) = ∑_{j = 1}^n p((k, j)) if m = 1, and ∑_{i = 1}^n p((i, k)) if m = 2,

if the condition (<ref>) is satisfied, the probability distribution is balanced if for all k ∈ [n] and m ∈ {1, 2}

η(ι⃗_k, m) = (n - k + 1)/n.

Therefore, we obtain the following result. Matrix balancing as e-projection: Given a matrix A ∈^n × n with its normalized probability distribution P ∈ 𝒮 such that p((i, j)) = a_ij / ∑_ij a_ij. Define the poset (S, ≤) by Equations (<ref>) and (<ref>) and let 𝒮(β) be the submanifold of 𝒮 such that

𝒮(β) = { P ∈ 𝒮 | η_P(x) = β(x) for all x ∈ dom(β) },

where the function β is given as

dom(β) = { ι⃗_k, m ∈ S | k ∈ [n], m ∈ {1, 2} }, β(ι⃗_k, m) = (n - k + 1)/n.

Matrix balancing is the e-projection of P onto the submanifold 𝒮(β), that is, the balanced matrix (RAS)/n is the distribution P_β such that

θ_P_β(x) = θ_P(x) if x ∈ S^+ ∖ dom(β), η_P_β(x) = β(x) if x ∈ dom(β),

which is unique and always exists in 𝒮, thanks to its dually flat structure. Moreover, the two balancing vectors r⃗ and s⃗ are given by

exp( ∑_{k = 1}^i θ_P_β(ι⃗_k, m) - θ_P(ι⃗_k, m) ) = r_i if m = 1, and s_i if m = 2,

for every i ∈ [n], with the rescaling r⃗ ← n r⃗ / ∑_ij a_ij. ▪

§.§ Tensor Balancing

Next, we generalize our approach from matrices to tensors.
For an Nth order tensor A = (a_{i_1 i_2 … i_N}) ∈ ℝ^{n_1 × n_2 × … × n_N} and a vector b⃗ ∈ ℝ^{n_m}, the m-mode product of A and b⃗ is defined as

(A ×_m b⃗)_{i_1 … i_{m - 1} i_{m + 1} … i_N} = ∑_{i_m = 1}^{n_m} a_{i_1 i_2 … i_N} b_{i_m}.

We define tensor balancing as follows: Given a tensor A ∈ ℝ^{n_1 × n_2 × … × n_N} with n_1 = … = n_N = n, find N tensors R^1, R^2, …, R^N of order N - 1 such that

A' ×_m 1⃗ = 1⃗ (∈ ℝ^{n_1 × … × n_{m - 1} × n_{m + 1} × … × n_N})

for all m ∈ [N], i.e., ∑_{i_m = 1}^n a'_{i_1 i_2 … i_N} = 1, where each entry a'_{i_1 i_2 … i_N} of the balanced tensor A' is given as

a'_{i_1 i_2 … i_N} = a_{i_1 i_2 … i_N} ∏_{m ∈ [N]} R^m_{i_1 … i_{m - 1} i_{m + 1} … i_N}.

A tensor A' that satisfies Equation (<ref>) is called multistochastic <cit.>. Note that this is exactly the same as the matrix balancing problem if N = 2. It is straightforward to extend matrix balancing to tensor balancing as e-projection onto a submanifold. Given a tensor A ∈ ℝ^{n_1 × n_2 × … × n_N} with its normalized probability distribution P such that

p(x) = a_{i_1 i_2 … i_N} / ∑_{j_1 j_2 … j_N} a_{j_1 j_2 … j_N}

for all x = (i_1, i_2, …, i_N). The objective is to obtain P_β such that ∑_{i_m = 1}^n p_β((i_1, …, i_N)) = 1/n^{N - 1} for all m ∈ [N] and i_1, …, i_N ∈ [n]. In the same way as matrix balancing, we define S as

S = { (i_1, i_2, …, i_N) ∈ [n]^N | a_{i_1 i_2 … i_N} ≠ 0 },

removing zero entries, and the partial order ≤ as

x = (i_1 … i_N) ≤ y = (j_1 … j_N) ⇔ i_m ≤ j_m for all m ∈ [N].

In addition, we introduce ι⃗_k, m as

ι⃗_k, m = min{ x = (i_1, i_2, …, i_N) ∈ S | i_m = k }

and require the condition in Equation (<ref>). Tensor balancing as e-projection: Given a tensor A ∈ ℝ^{n_1 × n_2 × … × n_N} with its normalized probability distribution P ∈ 𝒮 given in Equation (<ref>). The submanifold 𝒮(β) of multistochastic tensors is given as

𝒮(β) = { P ∈ 𝒮 | η_P(x) = β(x) for all x ∈ dom(β) },

where the domain of the function β is given as

dom(β) = { ι⃗_k, m | k ∈ [n], m ∈ [N] }

and each value is described using the zeta function as

β(ι⃗_k, m) = ∑_{l ∈ [n]} ζ(ι⃗_k, m, ι⃗_l, m) (1/n^{N - 1}).

Tensor balancing is the e-projection of P onto the submanifold 𝒮(β), that is, the multistochastic tensor is the distribution P_β such that

θ_P_β(x) = θ_P(x) if x ∈ S^+ ∖ dom(β), η_P_β(x) = β(x) if x ∈ dom(β),

which is unique and always exists in 𝒮, thanks to its dually flat structure. Moreover, each balancing tensor R^m is

R^m_{i_1 … i_{m - 1} i_{m + 1} … i_N} = exp( ∑_{m' ≠ m} ∑_{k = 1}^{i_{m'}} θ_P_β(ι⃗_k, m') - θ_P(ι⃗_k, m') )

for every m ∈ [N], with the rescaling R^1 ← n^{N - 1} R^1 / ∑_{j_1 … j_N} a_{j_1 … j_N} to recover a multistochastic tensor. ▪

Our result means that the e-projection algorithm based on Newton's method proposed in Section <ref> converges to the unique balanced tensor whenever 𝒮(β) ≠ ∅ holds.

§ CONCLUSION

In this paper, we have solved the open problem of tensor balancing and presented an efficient balancing algorithm using Newton's method. Our algorithm converges quadratically, while the popular Sinkhorn-Knopp algorithm converges only linearly. We have examined the efficiency of our algorithm in numerical experiments on matrix balancing and showed that the proposed algorithm is several orders of magnitude faster than the existing approaches. We have analyzed the theory behind the algorithm, and proved that balancing is e-projection in a special type of statistical manifold, in particular, a dually flat Riemannian manifold studied in information geometry.
Our key finding is that the gradient of the manifold, equivalent to the Riemannian metric or the Fisher information matrix, can be analytically obtained using the Möbius inversion formula. Our information geometric formulation can model several machine learning applications such as statistical analysis on a DAG structure. Thus, we can perform efficient learning as projection using information of the gradient of manifolds by reformulating such models, which we will study in future work.

§ ACKNOWLEDGEMENTS

The authors sincerely thank Marco Cuturi for his valuable comments. This work was supported by JSPS KAKENHI Grant Numbers JP16K16115, JP16H02870 (MS), JP26120732 and JP16H06570 (HN). The research of K.T. was supported by JST CREST JPMJCR1502, RIKEN PostK, KAKENHI Nanostructure and KAKENHI JP15H05711.

[Agresti(2012)]Agresti12 A. Agresti. Categorical data analysis. Wiley, 3rd edition, 2012.
[Ahmed et al.(2003)Ahmed, De Loera, and Hemmecke]Ahmed03 M. Ahmed, J. De Loera, and R. Hemmecke. Polyhedral Cones of Magic Cubes and Squares, volume 25 of Algorithms and Combinatorics, pages 25–41. Springer, 2003.
[Akartunalı and Knight(2016)]Akartunali16 K. Akartunalı and P. A. Knight. Network models and biproportional rounding for fair seat allocations in the UK elections. Annals of Operations Research, pages 1–19, 2016.
[Amari(2001)]Amari01 S. Amari. Information geometry on hierarchy of probability distributions. IEEE Transactions on Information Theory, 47(5):1701–1711, 2001.
[Amari(2009)]Amari09 S. Amari. Information geometry and its applications: Convex function and dually flat manifold. In F. Nielsen, editor, Emerging Trends in Visual Computing: LIX Fall Colloquium, ETVC 2008, Revised Invited Papers, pages 75–102. Springer, 2009.
[Amari(2014)]Amari14 S. Amari. Information geometry of positive measures and positive-definite matrices: Decomposable dually flat structure. Entropy, 16(4):2131–2145, 2014.
[Amari(2016)]Amari16 S. Amari. Information Geometry and Its Applications. Springer, 2016.
[Balinski(2008)]Balinski08 M. Balinski. Fair majority voting (or how to eliminate gerrymandering). American Mathematical Monthly, 115(2):97–113, 2008.
[Censor and Lent(1981)]Censor81 Y. Censor and A. Lent. An iterative row-action method for interval convex programming. Journal of Optimization Theory and Applications, 34(3):321–353, 1981.
[Chang et al.(2016)Chang, Paksoy, and Zhang]Chang16 H. Chang, V. E. Paksoy, and F. Zhang. Polytopes of stochastic tensors. Annals of Functional Analysis, 7(3):386–393, 2016.
[Cui et al.(2014)Cui, Li, and Ng]Cui14 L.-B. Cui, W. Li, and M. K. Ng. Birkhoff–von Neumann theorem for multistochastic tensors. SIAM Journal on Matrix Analysis and Applications, 35(3):956–973, 2014.
[Cuturi(2013)]Cuturi13 M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26, pages 2292–2300, 2013.
[Frogner et al.(2015)Frogner, Zhang, Mobahi, Araya, and Poggio]Frogner15 C. Frogner, C. Zhang, H. Mobahi, M. Araya, and T. A. Poggio. Learning with a Wasserstein loss. In Advances in Neural Information Processing Systems 28, pages 2053–2061, 2015.
[Ganmor et al.(2011)Ganmor, Segev, and Schneidman]Ganmor11 E. Ganmor, R. Segev, and E. Schneidman. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences, 108(23):9679–9684, 2011.
[Gierz et al.(2003)Gierz, Hofmann, Keimel, Lawson, Mislove, and Scott]Gierz03 G. Gierz, K. H. Hofmann, K.
Keimel, J. D. Lawson, M. Mislove, and D. S. Scott. Continuous Lattices and Domains. Cambridge University Press, 2003.
[Idel(2016)]Idel16 M. Idel. A review of matrix scaling and Sinkhorn's normal form for matrices and positive maps. arXiv:1609.06349, 2016.
[Ito(1993)]Ito93 K. Ito, editor. Encyclopedic Dictionary of Mathematics. The MIT Press, 2nd edition, 1993.
[Knight(2008)]Knight08 P. A. Knight. The Sinkhorn–Knopp algorithm: Convergence and applications. SIAM Journal on Matrix Analysis and Applications, 30(1):261–275, 2008.
[Knight and Ruiz(2013)]Knight13 P. A. Knight and D. Ruiz. A fast algorithm for matrix balancing. IMA Journal of Numerical Analysis, 33(3):1029–1047, 2013.
[Lahr and de Mesnard(2004)]Lahr04 M. Lahr and L. de Mesnard. Biproportional techniques in input-output analysis: Table updating and structural analysis. Economic Systems Research, 16(2):115–134, 2004.
[Lamond and Stewart(1981)]Lamond81 B. Lamond and N. F. Stewart. Bregman's balancing method. Transportation Research Part B: Methodological, 15(4):239–248, 1981.
[Livne and Golub(2004)]Livne04 O. E. Livne and G. H. Golub. Scaling by binormalization. Numerical Algorithms, 35(1):97–120, 2004.
[Marshall and Olkin(1968)]Marshall68 A. W. Marshall and I. Olkin. Scaling of matrices to achieve specified row and column sums. Numerische Mathematik, 12(1):83–90, 1968.
[Miller and Blair(2009)]Miller09 R. E. Miller and P. D. Blair. Input-Output Analysis: Foundations and Extensions. Cambridge University Press, 2nd edition, 2009.
[Moon et al.(2009)Moon, Gunther, and Kupin]Moon09 T. K. Moon, J. H. Gunther, and J. J. Kupin. Sinkhorn solves sudoku. IEEE Transactions on Information Theory, 55(4):1741–1746, 2009.
[Nakahara and Amari(2002)]Nakahara02 H. Nakahara and S. Amari. Information-geometric measure for neural spikes. Neural Computation, 14(10):2269–2316, 2002.
[Nakahara et al.(2003)Nakahara, Nishimura, Inoue, Hori, and Amari]Nakahara03 H. Nakahara, S. Nishimura, M. Inoue, G. Hori, and S. Amari. Gene interaction in DNA microarray data is decomposed by information geometric measure. Bioinformatics, 19(9):1124–1131, 2003.
[Nakahara et al.(2006)Nakahara, Amari, and Richmond]Nakahara06 H. Nakahara, S. Amari, and B. J. Richmond. A comparison of descriptive models of a single spike train by information-geometric measure. Neural Computation, 18(3):545–568, 2006.
[Parikh(1979)]Parikh79 A. Parikh. Forecasts of input-output matrices using the R.A.S. method. The Review of Economics and Statistics, 61(3):477–481, 1979.
[Parlett and Landis(1982)]Parlett82 B. N. Parlett and T. L. Landis. Methods for scaling to doubly stochastic form. Linear Algebra and its Applications, 48:53–79, 1982.
[Rao et al.(2014)Rao, Huntley, Durand, Stamenova, Bochkov, Robinson, Sanborn, Machol, Omer, Lander, and Aiden]Rao14 S. S. P. Rao, M. H. Huntley, N. C. Durand, E. K. Stamenova, I. D. Bochkov, J. T. Robinson, A. L. Sanborn, I. Machol, A. D. Omer, E. S. Lander, and E. L. Aiden. A 3D map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell, 159(7):1665–1680, 2014.
[Rota(1964)]Rota64 G.-C. Rota. On the foundations of combinatorial theory I: Theory of Möbius functions. Z. Wahrscheinlichkeitstheorie, 2:340–368, 1964.
[Sinkhorn(1964)]Sinkhorn64 R. Sinkhorn. A relationship between arbitrary positive matrices and doubly stochastic matrices. The Annals of Mathematical Statistics, 35(2):876–879, 1964.
[Sinkhorn and Knopp(1967)]Sinkhorn67 R. Sinkhorn and P. Knopp.
Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21(2):343–348, 1967.
[Solomon et al.(2015)Solomon, de Goes, Peyré, Cuturi, Butscher, Nguyen, Du, and Guibas]Solomon15 J. Solomon, F. de Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics, 34(4):66:1–66:11, 2015.
[Soules(1991)]Soules91 G. W. Soules. The rate of convergence of Sinkhorn balancing. Linear Algebra and its Applications, 150:3–40, 1991.
[Sugiyama et al.(2016)Sugiyama, Nakahara, and Tsuda]Sugiyama2016ISIT M. Sugiyama, H. Nakahara, and K. Tsuda. Information decomposition on structured space. In 2016 IEEE International Symposium on Information Theory, pages 575–579, July 2016.
[Wu and Michor(2016)]Wu16 H.-J. Wu and F. Michor. A computational strategy to adjust for copy number in tumor Hi-C data. Bioinformatics, 32(24):3695–3701, 2016.
http://arxiv.org/abs/1702.08142v3
{ "authors": [ "Mahito Sugiyama", "Hiroyuki Nakahara", "Koji Tsuda" ], "categories": [ "stat.ME", "cs.IT", "cs.NA", "math.IT", "stat.ML" ], "primary_category": "stat.ME", "published": "20170227043006", "title": "Tensor Balancing on Statistical Manifold" }
Corresponding Author: hzhlj@ustc.edu.cn
Department of Chemical Physics & Hefei National Laboratory for Physical Sciences at Microscales, iChEM, University of Science and Technology of China, Hefei, Anhui 230026, China

We present a promising mode coupling theory study for the relaxation and glassy dynamics of a system of strongly interacting self-propelled particles, wherein the self-propulsion force is described by Ornstein-Uhlenbeck colored noise and thermal noises are included. Our starting point is an effective Smoluchowski equation governing the distribution function of particle positions, from which we derive a memory function equation for the time dependence of density fluctuations in nonequilibrium steady states. With the basic assumption of the absence of macroscopic currents and the standard mode coupling approximation, we can obtain expressions for the irreducible memory function and other relevant dynamic terms, wherein the nonequilibrium character of the active system is manifested through an averaged diffusion coefficient D̅ and a nontrivial structural function S_2(q), with q the magnitude of the wave vector 𝐪. D̅ and S_2(q) enter the frequency term and the vertex term of the memory function, and thus influence both the short-time and the long-time dynamics of the system. With these equations obtained, we study the glassy dynamics of this thermal self-propelled particle system by investigating the Debye-Waller factor f_q and the relaxation time τ_α as functions of the persistence time τ_p of self-propulsion, the single-particle effective temperature T_eff, as well as the number density ρ. Consequently, we find that the critical density ρ_c for given τ_p shifts to larger values with increasing magnitude of the propulsion force or effective temperature, in good accordance with previously reported simulation works. In addition, the theory enables us to study the critical effective temperature T_eff^c for fixed ρ as well as its dependence on τ_p. We find that T_eff^c increases with τ_p and, in the limit τ_p → 0, approaches the value for a simple passive Brownian system, as expected. Our theory also recovers the results for passive systems and can be easily extended to more complex systems such as active-passive mixtures.

Mode Coupling Theory for Nonequilibrium Glassy Dynamics of Thermal Self-Propelled Particles

Mengkai Feng, Zhonghuai Hou

December 30, 2023
=============================================================================================================

§ INTRODUCTION

The collective behaviors of systems containing active (self-propelled) particles have gained extensive attention in recent years due to their great importance both from a fundamental physics perspective and for understanding many biological systems <cit.>. A wealth of new nonequilibrium phenomena have been reported, such as active swarming, large-scale vortex formation<cit.>, phase separation<cit.>, etc., both experimentally and theoretically. Recently, a new trend in this field has been the glassy dynamics and the glass transition in dense assemblies of self-propelled particles and their comparison to the corresponding phenomena in equilibrium systems<cit.>. Experiments demonstrated that active fluids may show dynamic features such as jamming and dynamic arrest that are very similar to those observed in glassy materials.
For instance, migrating cells exhibited glassy dynamics, such as dynamic heterogeneity as the cell density increases<cit.>, an amorphous solidification process was found in the collective motion of a cellular monolayer<cit.>, and glassy behaviors could even be found for ant aggregates on large scales<cit.>, to list just a few. Computer simulations also demonstrated that a nonequilibrium glass transition or dynamic arrest behavior does occur in dense suspensions of self-propelled particles. So far, mainly two types of self-propelled particle systems have been studied. One is the rotation-diffusional active Brownian (RAB) particle system, where each particle is subjected to a self-propulsion force with constant amplitude v_0 but a randomly changing direction evolving via rotational diffusion with diffusion coefficient D_r. Ni et al. <cit.> studied the glassy behavior of this RAB system of hard-sphere particles, finding that the critical density for the glass transition shifts to larger density as the active force increases, thus pushing the glass transition point to the limit of random packing. The other one is the so-called active Ornstein-Uhlenbeck (AOU) particle system, wherein the self-propulsion force is realized by a colored noise described by the OU process. In contrast to the RAB model, thermal noise is ignored in the AOU system, such that the system can never reach the equilibrium state determined by the canonical distribution. For this athermal system, an effective temperature T_eff can be introduced to quantify the strength of the self-propulsion force, and a persistence time τ_p controls the duration of the persistent self-propelled motion. Berthier and co-workers<cit.> have performed detailed studies of the structural and glassy dynamics of this AOU model in two and three dimensions. Similarly, the glass transition shifts to larger densities compared to the equilibrium one when the magnitude or persistence time of the self-propulsion force increases. Besides the studies of self-propelled particles, using molecular dynamics simulations of a model glass former, Mandal et al.<cit.> showed that the incorporation of activity or self-propulsion can induce cage breaking and fluidization, resulting in a disappearance of the glassy phase beyond a critical force. Related to the glassy dynamics, it was shown that particle activity can shift the freezing density to larger values<cit.> and, particularly, hydrodynamic interactions can further enhance this effect<cit.>. Besides the experimental and simulation studies mentioned above, on the theoretical side important progress has also been achieved in recent years<cit.>. Starting from a generalized Langevin equation with colored non-thermal noise, Berthier and Kurchan<cit.> predicted that dynamic arrest can occur in systems that are far from equilibrium, showing that the non-equilibrium glass transition moves to lower temperature with increasing activity and to higher temperature with increasing dissipation in spin glasses. Farage and Brader attempted to extend the mode coupling theory to the limiting case of the RAB model<cit.>, wherein activity leads to a higher effective particle diffusivity. They then started from the effective Smoluchowski equation governing the many-particle distribution function and obtained a memory function equation for the equilibrium intermediate scattering function, showing that self-propulsion could shift the glass transition to larger density<cit.>.
In a recent work, Nandi proposed a phenomenological extension of random first order transition theory to study the glass transition of the RAB model, showing that more active systems are stronger glass formers <cit.>. Very recently, Szamel et al.<cit.> presented an elegant theoretical modeling of the structure and glassy dynamics of the athermal AOU system. In their approach, they first integrated out the self-propulsion and then used the projection operator method and a mode-coupling-like approximation to derive an approximate equation of motion for the collective intermediate scattering functions, defined upon the nonequilibrium steady state distribution. In particular, this work highlighted the importance of the steady state correlations of particle velocities, which play a crucial role in understanding the relaxation dynamics of the system. Nevertheless, an extension of this framework to the more general case with thermal noise included is not yet available. In the present work, we have developed an alternative mode coupling theory for the nonequilibrium glassy dynamics of a general system of active particles, wherein the self-propulsion force is described by an OU process and, in addition, thermal noise is included. To be specific, we refer to this active OU particle system with thermal noise as the AOU-T system, where 'T' stands for thermal noise. Our starting point is an effective Smoluchowski equation (SE) obtained via the Fox approximation method, which was recently adopted by Farage et al. to study the effective interactions among RAB particles<cit.>. This effective SE allows us to derive a memory function equation for the nonequilibrium steady state collective intermediate scattering function F_q(t), as well as that for the self-intermediate scattering function F_q^s(t). With the basic assumptions of the absence of macroscopic currents and the standard mode coupling approximation, we are able to obtain expressions for the irreducible memory functions and other relevant variables. In particular, we find that the dynamics is governed by an averaged diffusion coefficient D̅ and a nontrivial steady state structure function S_2(q), both depending on the effective temperature T_eff, the persistence time τ_p, as well as the number density ρ. With D̅, S_2(q) and the nonequilibrium structure factor S(q) as inputs, we can calculate F_q(t) and F_q^s(t) for different parameter settings and investigate the glass transition behaviors. We calculate the critical density ρ_c for the glass transition as a function of the effective temperature T_eff, the magnitude of the propulsion force v_0 and the persistence time τ_p, by investigating the Debye-Waller factor f_q as well as the relaxation time τ_α. Consequently, ρ_c shifts to larger values with increasing propulsion force v_0 or effective temperature T_eff, in good accordance with previous simulation works. The theory also enables us to study the critical temperature T_eff^c for fixed number density ρ, as well as its dependence on τ_p. The remainder of the paper is organized as follows. In Section II, we present descriptions of the AOU-T model and our theory, the latter being the main result of the present work. Section III presents the numerical results predicted by our theory, and conclusions are given in Section IV.

§ MODEL AND THEORY

§.§ AOU-T Model for self-propelled particles

We consider a system of N interacting, self-propelled particles in a volume V.
The particles move in a viscous medium with single-particle friction coefficient γ, and hydrodynamic interactions are neglected. As mentioned in the introduction, the self-propulsion force is given by a colored noise described by an OU process, and thermal noises are included. The equations of motion for these AOU-T particles are thus given by

𝐫̇_i(t) = γ^-1[𝐅_i(t) + 𝐟_i(t)] + ξ_i(t),

where 𝐫_i denotes the position vector of particle i, the force 𝐅_i = -∑_{j ≠ i} ∇_i u(r_ij) originates from the interactions with pair-potential u(r_ij), 𝐟_i is the self-propulsion force, and ξ_i(t) is the thermal noise with zero mean and variance

⟨ξ_i(t)ξ_j(t')⟩ = 2D_t 1 δ_ij δ(t - t'),

where D_t = k_BT/γ with k_B the Boltzmann constant and T the ambient temperature, and 1 denotes the unit tensor. The equations of motion for the self-propulsion force 𝐟_i are given by

𝐟̇_i(t) = -τ_p^-1 𝐟_i(t) + η_i(t),

where τ_p is the persistence time of self-propulsion and η_i(t) is a Gaussian white noise with zero mean and variance D_f,

⟨η_i(t)η_j(t')⟩ = 2D_f 1 δ_ij δ(t - t').

Accordingly, the correlation function of the force 𝐟_i reads

⟨𝐟_i(t)𝐟_j(t')⟩ = D_f τ_p e^{-|t' - t|/τ_p} 1 δ_ij.

For an isolated particle, the mean square displacement can be obtained as

⟨δr^2(t)⟩ = 6(D_f τ_p^2/γ^2)[t + τ_p(e^{-t/τ_p} - 1)] + 6D_t t.

In the short time limit t ≪ τ_p, the particle's motion is diffusive with ⟨δr^2(t)⟩ = 6D_t t, which is at variance with the AOU system, wherein the term 6D_t t is absent and the motion is ballistic, ⟨δr^2(t)⟩ = 3D_f τ_p γ^-2 t^2, for t ≪ τ_p. At long times, the motion is also diffusive but with ⟨δr^2(t)⟩ = 6(D_t + D_f τ_p^2/γ^2)t, implying that the long-time diffusion coefficient is given by

D_0 = D_t + D_f τ_p^2/γ^2.

This allows us to introduce a single-particle effective temperature

T_eff = T + D_f τ_p^2/(k_B γ) = (D_0/D_t)T.

In the limit of vanishing τ_p → 0, ⟨𝐟_i(t)𝐟_j(t')⟩ → 2D_f τ_p^2 1 δ(t - t')δ_ij and the system becomes equivalent to a Brownian system at the effective temperature T_eff with diffusion coefficient D_0. For the AOU model, where the thermal noise is absent, the effective temperature is simply given by T_eff = D_f τ_p^2/(k_B γ)<cit.>. We note here that the AOU-T model can be mapped onto the RAB model at a coarse-grained level<cit.>. The equation of motion for the RAB particles is 𝐫̇_i = v_0 𝐩_i + γ^-1 𝐅_i + ξ_i, where v_0 denotes the magnitude of the propulsion force and 𝐩_i is the unit vector of the direction of particle i. 𝐩_i changes randomly with time via rotational diffusion, 𝐩̇_i = ζ_i × 𝐩_i, where ζ_i is also a Gaussian white noise with zero mean and correlation ⟨ζ_i(t)ζ_j(t')⟩ = 2D_r 1 δ_ij δ(t - t'), where D_r denotes the rotational diffusion coefficient. Recently, it was shown that 𝐩_i can be approximated by a colored noise with persistence time τ_p = (2D_r)^-1 if one averages over the angular degrees of freedom, i.e.,

⟨𝐩_i(t)𝐩_j(t')⟩ ≃ (1/3) e^{-2D_r|t - t'|} 1 δ_ij.

Comparing Eq.(<ref>) with (<ref>), we see that v_0 𝐩_i has the same correlation property as 𝐟_i under the mapping τ_p → (2D_r)^-1 and D_f τ_p → v_0^2/3. In the studies of the AOU model, the authors usually used T_eff as one of the independent parameters together with the persistence time τ_p and number density ρ = N/V. In the simulation works on the RAB model, however, the authors often used the magnitude of the propulsion force v_0 as an independent parameter. Note that in the simulation work performed by Ran Ni et al.<cit.>, they adopted the Stokes-Einstein relation to set the rotational diffusion coefficient D_r = 3D_t/σ^2, where σ is the particle diameter.
In the dimensionless version, by setting D_t = 1, γ = 1 and σ = 1, this means that D_r is fixed and the persistence time is τ_p = (2D_r)^-1 = 1/6. In more general cases, the coupling of D_t and D_r may not hold, and one can thus set τ_p as a free independent parameter. In the present work, we will set v_0 = √(3D_fτ_p) or T_eff, τ_p, and ρ as independent parameters if not otherwise specified. For simplicity, we consider a one-component purely repulsive Lennard-Jones (LJ) system of self-propelled particles. The pair potential is given by

u(r) = 4ε[(σ/r)^12 - (σ/r)^6] + ε for r ≤ 2^{1/6}σ, and u(r) = 0 for r > 2^{1/6}σ,

where ε is the strength of the potential. Here we set ε = 1 k_BT, where k_BT = D_t γ is the unit of energy. Moreover, σ^2γ/k_BT is the unit of time. The number density ρ is set to be large enough such that phase separation dynamics is not relevant and we mainly focus on the glassy dynamics. Simulations are performed in a cubic box with L = 10 and periodic boundary conditions.

§.§ Effective Smoluchowski Equation

The AOU-T model described in Eq.(<ref>) is non-Markovian due to the colored noise term 𝐟_i. Consequently, it is not possible to derive an exact Fokker-Planck equation (FPE) for the time evolution of the probability distribution function Ψ(𝐫^N, t), which gives the probability that the system is at a specific configuration 𝐫^N = (𝐫_1, 𝐫_2, …, 𝐫_N) at time t. Nevertheless, one may obtain an approximate FPE for such a colored noise system by applying the method introduced by Fox<cit.>, where a perturbative expansion in powers of the correlation time is partially resummed using functional calculus. The resulting FPE thus implicitly defines a Markovian process, and is shown to be rather accurate for short correlation times of the colored noise. Very recently, T. Farage and co-workers applied such a method to the RAB model and obtained an effective FPE for Ψ(𝐫^N, t), and analyzed the effective interaction among active particles in the low density limit. Since the FPE only involves the distribution in the configuration space 𝐫^N, it is also known as the Smoluchowski equation (SE). Here we use the same method to obtain the approximate SE of the AOU-T model. The procedure is similar to that in Ref.<cit.>, noting that the AOU-T model can be mapped to the RAB model as discussed in the last subsection. For self-consistency, the main steps with necessary illustrations are given in Appendix A. Finally, we can obtain the effective SE as

∂Ψ(𝐫^N, t)/∂t = Ω̂ Ψ(𝐫^N, t),

where Ω̂ denotes the effective Smoluchowski operator given by

Ω̂ = ∑_{j=1}^N ∇_j · D_j(𝐫^N)[∇_j - β𝐅_j^eff(𝐫^N)].

Herein, D_j(𝐫^N) is a configuration-dependent instantaneous diffusion coefficient of particle j, which is given by

D_j(𝐫^N) = D_t + (D_f τ_p^2/γ^2)/(1 - τ_p β D_t ∇_j·𝐅_j),

with β = (k_BT)^-1. 𝐅_j^eff(𝐫^N) defines an instantaneous effective force acting on particle j, which also depends on the configuration, given by

𝐅_j^eff(𝐫^N) = (D_t/D_j(𝐫^N))[𝐅_j(𝐫^N) - (βD_t)^-1 ∇_j D_j(𝐫^N)].

Note that for passive particles, one has D_f τ_p = 0 such that D_j(𝐫^N) = D_t and 𝐅_j^eff(𝐫^N) = 𝐅_j(𝐫^N), as expected. In the limit τ_p → 0, corresponding to a white noise 𝐟_i, we have D_j(𝐫^N) = D_t + D_f τ_p^2/γ^2 = D_0 and 𝐅_j^eff = (D_t/D_0)𝐅_j. In this latter case, the effective Smoluchowski operator is given by<cit.>

Ω̂_{τ_p→0} = D_0 ∑_{j=1}^N ∇_j·(∇_j - β(D_t/D_0)𝐅_j) = D_0 ∑_{j=1}^N ∇_j·(∇_j - β_eff 𝐅_j),

where β_eff = (k_BT_eff)^-1, and the system reduces to N interacting Brownian particles at the effective temperature T_eff.
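For reference, a minimal Euler-Maruyama sketch of the AOU-T equations of motion with the purely repulsive LJ (WCA) potential defined above is given below. This is our illustration only, with arbitrary parameter values, a small system, and an O(N^2) force loop for brevity; steady-state inputs such as D̅, S(q) and S_2(q) would be accumulated from configurations sampled after a burn-in period.

import numpy as np

rng = np.random.default_rng(0)
N, L = 108, 6.0                        # rho = N / L^3 = 0.5 (illustrative)
tau_p, D_f, dt = 0.1, 100.0, 1e-4      # reduced units: gamma = D_t = 1
r = rng.uniform(0.0, L, size=(N, 3))   # positions
f = np.zeros((N, 3))                   # self-propulsion forces

def wca_forces(r):
    """Pairwise purely repulsive LJ (WCA) forces with minimum-image
    periodic boundaries; sigma = epsilon = 1, cutoff 2^(1/6)."""
    d = r[:, None, :] - r[None, :, :]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)
    mask = r2 < 2.0 ** (1.0 / 3.0)     # squared cutoff (2^(1/6))^2
    inv6 = np.where(mask, r2 ** -3, 0.0)
    w = 24.0 * (2.0 * inv6 ** 2 - inv6) / np.where(mask, r2, 1.0)
    return (w[:, :, None] * d).sum(axis=1)

for step in range(50000):              # burn-in toward the steady state
    F = wca_forces(r)
    r += (F + f) * dt + np.sqrt(2.0 * dt) * rng.standard_normal((N, 3))
    f += -f / tau_p * dt + np.sqrt(2.0 * D_f * dt) * rng.standard_normal((N, 3))
    r %= L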
We assume that the system reaches a nonequilibrium steady state (NESS) P_s(𝐫^N) in the long-time limit, which satisfies Ω̂P_s(𝐫^N)=-∑_i∇_i·𝐉_i^s=0 where the steady-state current 𝐉_i^s is given by 𝐉_i^s=-D_i(𝐫^N)[∇_i-β𝐅_i^eff(𝐫^N)]P_s(𝐫^N). For a passive system, P_s(𝐫^N) is given by the canonical equilibrium distribution P_s^eq(𝐫^N)=exp(-β U(𝐫^N))/Z, where U(𝐫^N)=1/2∑_j≠ iu(r_ij) is the system potential and Z is the partition function. For the active system studied in the present work, however, the explicit form of P_s(𝐫^N) is hard to obtain. Nevertheless, in the limit τ_p→0, P_s^τ_p→0(𝐫^N)∼exp(-β_effU(𝐫^N)) satisfies Ω̂_τ_p→0P_s^τ_p→0(𝐫^N)=0, indicating that the system can then be described by an effective equilibrium distribution at the effective temperature T_eff. For later purposes, it is convenient to introduce the adjoint of the Smoluchowski operator, Ω̂^†=∑_j=1^N(∇_j+β𝐅_j^eff)D_j(𝐫^N)·∇_j, which satisfies ∫ d𝐫^Nf^*(Ω̂g)=∫ d𝐫^N(Ω̂^†f)^*g for any functions f(𝐫^N) and g(𝐫^N). For the collective dynamics of the system, one can then define the collective intermediate scattering function as<cit.> F_q(t)=1/N⟨ρ_𝐪^*(e^Ω̂^†tρ_𝐪)⟩ =1/N⟨ρ_-𝐪(e^Ω̂^†tρ_𝐪)⟩ where ρ_𝐪=∑_j=1^Ne^-i𝐪·𝐫_j is the Fourier transform with wave vector 𝐪 of the density variable ρ(𝐫)=∑_j=1^Nδ(𝐫-𝐫_j) and q=|𝐪|. In particular, one must emphasize that the brackets ⟨⋯⟩ in Eq.(<ref>) denote the ensemble average over the NESS distribution P_s(𝐫^N), rather than over the equilibrium one P_s^eq(𝐫^N). For t=0, F_q(t) is related to the nonequilibrium static structure factor, F_q(0)=1/N⟨ρ_-𝐪ρ_𝐪⟩ =S(q), where again ⟨⋯⟩ denotes averaging over the NESS. For the nonequilibrium system studied here, S(q) cannot be calculated by analytical methods such as the Ornstein-Zernike (OZ) equation and must be obtained by direct simulation. Note that F_q(t) can also be written as F_q(t)=1/N⟨ρ_-𝐪e^Ω̂tρ_𝐪⟩, wherein the operator Ω̂ acts on all the functions to its right, including P_s(𝐫^N), while in Eq.(<ref>) the adjoint operator Ω̂^† acts only on ρ_𝐪. We also consider a closely related function, the self-intermediate scattering function F_q^s(t) =⟨ρ_-q^sρ_q^s(t)⟩ =⟨ e^-i𝐪·(𝐫_s(t)-𝐫_s(0))⟩ =1/N∑_j=1^N⟨ e^-i𝐪·(𝐫_j(t)-𝐫_j(0))⟩ =1/N∑_j=1^N⟨ρ_-𝐪^je^Ω̂tρ_𝐪^j⟩, where ρ_q^s is the Fourier transform of the microscopic tagged-particle (tracer) density, ρ_q^s=e^-i𝐪·𝐫_s. §.§ Memory Function Equations In this subsection, we derive a formal expression for the collective (and self-) intermediate scattering functions, Eqs. (<ref>) and (<ref>), in terms of the so-called irreducible memory function. This is most easily done in the Laplace domain, and the details are given in Appendix B. The resulting equation for the time evolution of F_q(t) is ∂/∂ tF_q(t)+ω_qF_q(t)+∫_0^tduM̃^irr(q,t-u)∂/∂ uF_q(u)=0 where ω_q=-⟨ρ_𝐪^*(Ω̂^†ρ_𝐪)⟩⟨ρ_𝐪^*ρ_𝐪⟩^-1=q^2∑_j⟨ D_j(𝐫^N)⟩/NS(q)=q^2D̅/S(q) is the frequency term, with D_j(𝐫^N) given by Eq.(<ref>) and D̅=N^-1∑_j⟨ D_j(𝐫^N)⟩ denoting an averaged single-particle diffusion coefficient in the NESS. The irreducible memory function M̃^irr(q,t) is given by M̃^irr(q,t)=ρD̅/16π^3∫ d𝐤[(q̂·𝐤)C_2(𝐪;𝐤)+(q̂·𝐩)C_2(𝐪;𝐩)]^2F_k(t)F_p(t), with 𝐩=𝐪-𝐤, p=|𝐩|. Herein, a pseudo-correlation function C_2(𝐪;𝐤) is introduced, defined as C_2(𝐪;𝐤)=ρ^-1[1-D_0/D̅S_2(p)/S(p)S^-1(k)] where S_2(k)=1/D_0N⟨∑_i,jD_j(𝐫^N)e^-i𝐤·𝐫_j+i𝐤·𝐫_i⟩ denotes a static structure function involving the coupling between the instantaneous diffusion coefficient D_j(𝐫^N) and the density fluctuations e^-i𝐤·(𝐫_j-𝐫_i).
Eqs. (<ref>) to (<ref>) constitute the main theoretical results of the present paper. The equation for F_q(t), (<ref>), has the same form as that for an equilibrium colloidal system<cit.>. However, Eqs.(<ref>) to (<ref>) contain important new features that are specific to the AOU-T system. The frequency ω_q depends on the parameter D̅, which denotes an averaged effective diffusion coefficient of a particle in the NESS. The irreducible memory function, Eq.(<ref>), has a form similar to that for passive colloidal systems, except that a new pseudo-direct correlation function C_2(𝐪;𝐤) appears in place of the usual direct correlation function c(k)=ρ^-1[1-S^-1(k)]. The definition of C_2(𝐪;𝐤) involves another function, S_2(k), defined by Eq.(<ref>), which resembles the structure factor S(k) but with D_j(𝐫^N) included. Since D_j is a configuration-dependent function, it cannot be drawn out of the summation ∑_i,j in Eq.(<ref>). Interestingly, for a homogeneous passive system, D_j=D_t=D_0, hence ω_q=q^2D_tS^-1(q), S_2(k)=N^-1⟨∑_i,je^-i𝐤·𝐫_j+i𝐤·𝐫_i⟩≡ S(k), and C_2(𝐪;𝐤) simply reduces to c(k). In this case, Eq. (<ref>) becomes M̃^irr(q,t)=ρ D_t/16π^3∫ d𝐤[(q̂·𝐤)c(k)+(q̂·𝐩)c(p)]^2F_k(t)F_p(t), which is exactly the memory function of a passive colloidal system<cit.>. Note that in the limit τ_p→0, we have D_j=D_0, such that S_2(k)=S(k) and C_2(𝐪;𝐤)=c(k) also hold. In this case, ω_q=q^2D_0S^-1(q), and the equations describe the dynamics of an equivalent Brownian system with effective diffusion coefficient D_0, as described above. In general, D_j depends on the particle positions and thus cannot be drawn out of the summation in S_2(k), Eq.(<ref>). Interestingly, if we approximately replace D_j by its ensemble-average value ⟨ D_j⟩ in the summation, we obtain S_2(k)≃1/D_0N⟨∑_i,j⟨ D_j⟩ e^-i𝐤·𝐫_j+i𝐤·𝐫_i⟩ =D̅/D_0N⟨∑_i,je^-i𝐤·𝐫_j+i𝐤·𝐫_i⟩ =D̅/D_0S(k), where in the second equality we used the fact that ⟨ D_j⟩ =D̅ for a homogeneous system. We then find C_2(𝐪;𝐤)=ρ^-1[1-D_0/D̅S_2(p)/S(p)S^-1(k)]≃ c(k), and the memory function Eq.(<ref>) reduces to that of a passive system with effective diffusion coefficient D̅. Note that this approximation holds if the coupling between D_j and the density fluctuation e^-i𝐤·(𝐫_j-𝐫_l) is weak, or if the fluctuations of D_j are very small. In later sections, we will show by simulations that S_2(k)≃(D̅/D_0)S(k) is a very good approximation when k is large, whereas for small k the ratio S_2(k)D_0/(D̅S(k)) does show apparent structure. In the next section, we apply the above theoretical results to study the glassy behavior of the one-component LJ active system described by the AOU-T model. In dimensionless units, γ=D_t=k_BT=1, and we choose D_f and τ_p together with the number density ρ as adjustable parameters. As discussed in the model section, the effective temperature is then given by T_eff=1+D_fτ_p^2, while the amplitude of the active force is quantified by v_0=√(3D_fτ_p). To compare our results with the simulation works of Ni and others, we will choose v_0 and τ_p as independent variables together with ρ. Nevertheless, we will also study the behavior of the system by choosing T_eff and τ_p as independent free parameters, since this has been a common choice in recent studies<cit.>. To begin, we run the system until it reaches the steady state, from which we obtain the parameter D̅ and the function S_2(k), with which the memory function Eq. (<ref>) can be numerically calculated.
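For concreteness, the static inputs D̅, S(k) and S_2(k) can be estimated from configuration snapshots along the following lines. The sketch below is a minimal version: the jittered-lattice "snapshot" is only a stand-in for configurations sampled in the NESS, and for the repulsive LJ (WCA) pair potential introduced above one has ∇_j·𝐅_j=-∑_i∇^2u(r_ij) with ∇^2u(r)=(24ε/r^2)[22(σ/r)^12-5(σ/r)^6] inside the cutoff:

```python
import numpy as np

# Sketch: estimators for D_j(r^N), Dbar, S(k) and S_2(k) from one snapshot.
# D_t, D_f, tau_p, gamma, sigma, eps follow the text (reduced units).
D_t = gamma = sigma = eps = 1.0
D_f, tau_p = 3.0, 0.167
beta = 1.0 / (D_t * gamma)               # k_B T = D_t * gamma = 1
D0 = D_t + D_f * tau_p**2 / gamma**2
rc = 2.0 ** (1.0 / 6.0) * sigma

def div_force(pos, L):
    """div_j F_j = -sum_i lap u(r_ij) for the truncated repulsive LJ potential."""
    out = np.zeros(len(pos))
    for j in range(len(pos)):
        d = pos - pos[j]
        d -= L * np.round(d / L)         # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        m = (r > 1e-12) & (r <= rc)
        sr6 = (sigma / r[m]) ** 6
        out[j] = -np.sum(24.0 * eps / r[m] ** 2 * (22.0 * sr6**2 - 5.0 * sr6))
    return out

def structure_factors(pos, L, kvec):
    Dj = D_t + (D_f * tau_p**2 / gamma**2) / (1.0 - tau_p * beta * D_t * div_force(pos, L))
    phase = np.exp(-1j * pos @ kvec)
    N = len(pos)
    S = np.abs(phase.sum()) ** 2 / N
    S2 = np.real((Dj * phase).sum() * np.conj(phase).sum()) / (D0 * N)
    return S, S2, Dj.mean()

# stand-in snapshot: jittered cubic lattice at density rho = 1 in an L = 10 box
L, n = 10.0, 10
grid = (np.indices((n, n, n)).reshape(3, -1).T + 0.5) * (L / n)
pos = grid + 0.05 * np.random.default_rng(1).normal(size=grid.shape)
k = 2 * np.pi / L * np.array([12.0, 0.0, 0.0])   # |k| beyond the main peak
S, S2, Dbar = structure_factors(pos, L, k)
print(Dbar, S2 / S, Dbar / D0)   # checks S_2(k) ~ (Dbar/D0) S(k) at large k
```

In practice these estimators would be averaged over many NESS snapshots and over wavevector directions of equal modulus.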
We can then investigate the time dependence of F_q(t) to address the glass transition. For the self-intermediate scattering function F_q^s(t), the memory function equation reads (see Appendix <ref>) ∂/∂ tF_q^s(t)+ω_q^sF_q^s(t)+∫_0^tduM_s^irr(q,t-u)∂/∂ uF_q^s(u)=0 where ω_q^s=q^2D̅ and M̃_s^irr(q;t)=ρD̅/(2π)^3∫ d^3𝐤[(𝐤·𝐪̂)c(k)+(𝐩·𝐪̂)1/ρ(1-D_0S_2(k)/D̅S(k))]^2F_k(t)F_p^s(t). If S_2(k)≃(D̅/D_0)S(k), the second term in the bracket can be neglected, and the equation reduces to its equilibrium version. It is instructive at this point to compare our theoretical results with those in the literature. As mentioned in the introduction, Farage and Brader<cit.> attempted to develop an MCT for the RAB model in the limit τ_p→0. In that case, the effective Smoluchowski operator is given by Eq.(<ref>). Starting from this effective operator, they obtained a memory function for the collective scattering function F_q^eq(t), defined, however, with respect to the equilibrium distribution. In our work, the effective Smoluchowski operator is extended to finite (small) τ_p and, importantly, the scattering function is now defined with respect to the NESS, which is more relevant for the active system, as pointed out by Szamel<cit.>. The extension to finite τ_p, together with the use of nonequilibrium correlation functions, makes a comparison with simulation results feasible. Of course, for a nonequilibrium MCT, some static functions, such as D̅ and S_2(k) in the present work, must be obtained from simulations, which is currently unavoidable. On the other hand, Szamel et al.<cit.> have recently made important progress in the theoretical modeling of active-particle systems. They focused mainly on the athermal AOU model, which is applicable to large colloidal particles for which thermal noise can be ignored compared to the self-propulsion. Their treatment followed a route quite different from the present work: they performed a projection onto the local steady state defined by the self-propulsion force 𝐟_i. Assuming vanishing currents in the local steady state and using the mode-coupling approximation, they obtained an effective Smoluchowski operator, which is time dependent, and the memory function for the nonequilibrium scattering function F_q(t). Importantly, their theory involves a function ω_||(q) which highlights the role of velocity correlations. In particular, this theory reproduced the nontrivial non-monotonic dependence of the relaxation time τ_α on τ_p at fixed T_eff that was observed in their simulations of a standard LJ system, although it apparently overestimated τ_α in the τ_p→0 limit. In the present work, we consider the AOU-T model, where thermal noise is taken into account. We have not tried to extend Szamel's method to this thermal situation, which might be hard to realize; rather, we have adopted a different scheme. Given that Fox's method is applicable, the effective Smoluchowski operator of Eq.(<ref>) provides the starting point of the derivation. This approach involves a kind of coarse-graining over time, wherein the effects of the colored noise are replaced by an effective white noise with configuration-dependent amplitude. As shown in our theory above, the dynamics is then mainly determined by the effective diffusion coefficient D̅ and the static structure function S_2(k), both of which involve the instantaneous diffusion coefficient D_j(𝐫^N).
Interestingly, although our method is quite different from that of Szamel, we note that ω_||(q)τ_p in their work plays the same role as D̅ in ours. We also note that in a recent paper, Marconi et al.<cit.> studied the velocity correlations in the AOU model, finding, under a mean-field approximation, an expression very similar to D_j(𝐫^N). § NUMERICAL RESULTS AND DISCUSSIONS §.§ Static Properties As discussed above, to solve for F_q(t) we must obtain the parameter D̅ and the pseudo-structure factor S_2(k) in the NESS via direct numerical simulations. In Fig.<ref>(a), the dependence of D̅ on the effective temperature T_eff is presented for different fixed values of τ_p and ρ. As can be seen, D̅ increases monotonically with T_eff, which is reasonable since D̅ denotes a kind of averaged diffusion coefficient, which should be larger at higher temperature. If T_eff is fixed, D̅ decreases with increasing τ_p, and the variation of D̅ with T_eff becomes less sharp, i.e., (∂D̅/∂ T_eff) decreases. These qualitative behaviors are robust against changes of the number density ρ, although the value of D̅ decreases slightly with increasing ρ for given values of τ_p and T_eff. In Fig.<ref>(b), we also plot D̅ as a function of the persistence time τ_p for different values of T_eff and ρ. In this case, D̅ decreases with τ_p, approaching D_0=k_BT_eff/γ as τ_p→0 and approaching D_t at large τ_p. Moreover, at the lower density D̅ takes a slightly larger value, as in (a). In Fig.<ref>(a), the nonequilibrium static structure factors S(k) obtained from direct simulations are shown for different particle activities v_0 at fixed ρ=1.12 and τ_p=0.167. The value of ρ is chosen such that the system is close to the glass transition, and that of τ_p is consistent with the setting in the simulation work of Ni<cit.>. S(k) does not change much with v_0, except that the main peak decreases slightly and shifts slightly to the right with increasing v_0. The decrease of the main peak indicates that the structure becomes looser with increasing active force. The other peaks, at larger values of k, show little variation with v_0. These observations are in qualitative agreement with the simulation results obtained by Ni. Since τ_p is fixed, the effective temperature T_eff∼1+D_fτ_p^2 varies in the same way as v_0, so that Fig.<ref>(a) also shows the change of S(k) with T_eff. In Fig.<ref>(b), S(k) is presented for different τ_p at fixed T_eff. In this case, the main peak increases appreciably with increasing τ_p and also shifts slightly to smaller k. Since T_eff∼1+v_0^2τ_p/3, increasing τ_p at fixed T_eff corresponds to decreasing v_0, so this observation is consistent with Fig.<ref>(a). The second and third peaks also show visible differences as τ_p is varied, in that they become higher and move to smaller k with increasing τ_p. As discussed in the last section, an important new feature of our theory is the function S_2(k), which couples the instantaneous diffusion coefficient D_j and the density fluctuations. It is therefore instructive to investigate what S_2(k) looks like. In Fig.<ref>, we plot S_2(k) for the same parameter settings as in Fig.<ref>. As shown in Fig.<ref>(a), the particle activity (or effective temperature) drastically influences S_2(k), with the main peak decreasing considerably with increasing v_0 or T_eff.
Compared to Fig.<ref>(a), the value of S_2(k) is much smaller than S(k), reflecting the fact that D_j is generally less than D_0. The behavior of S_2(k) for fixed T_eff but varying τ_p is shown in Fig.<ref>(b). In this latter case, the main peak decreases slightly with increasing τ_p and seems to saturate for large τ_p, which is at variance with the observations in Fig.<ref>(b). The apparent discrepancies between S_2(k) and S(k) indicate that our theory may show interesting new features. Another new feature of our theory is the pseudo-correlation function C_2(𝐪;𝐤), which plays a role similar to c(k) in the irreducible memory function M^irr(q,t). As discussed above, C_2(𝐪;𝐤) reduces to c(k) if S_2(p)D_0/(D̅S(p))≃1. In Fig.<ref>, the dependence of S_2(k)D_0/(D̅S(k)) on k is presented for fixed ρ and varying T_eff and τ_p. Interestingly, we find that it is approximately one for k larger than 2π/σ, which is approximately the position of the main peak of S(k). Nevertheless, for small values of k, S_2(k)D_0/(D̅S(k)) shows some structure. Specifically, it becomes much less than one for small τ_p at fixed T_eff. This feature may enhance the irreducible memory function M_s^irr(q,t) of Eq.(<ref>) as τ_p→0 at fixed T_eff, which would increase τ_α if other effects were not taken into account. Note, however, that D̅ increases with decreasing τ_p at constant T_eff, as shown in Fig.<ref>, so that through this effect τ_α would decrease with decreasing τ_p. Therefore, the relaxation time τ_α might show re-entrant behavior in the small-τ_p region, similar to that reported for the AOU model<cit.>. §.§ Intermediate Scattering Function With the static properties obtained above, in particular D̅ and S_2(k), we are ready to investigate the behavior of the intermediate scattering function F_q(t) by numerically solving the memory function equations (<ref>) and (<ref>). In Fig.<ref>(a), the normalized scattering functions ϕ_q(t)=F_q(t)/S(q) are shown for different values of v_0 (or T_eff) and number density ρ, where we have chosen q=7.5, which is around the first peak of S(q). Results for two densities, ρ=1.05 and 1.10, are plotted, with τ_p fixed at 0.167. For the higher density, ρ=1.10, F_q(t) reaches a plateau in the long-time limit for v_0=0 (or T_eff=1), indicating that the system is in the glassy state. For the nonzero values of v_0 shown in the figure, F_q(t) eventually relaxes to zero at large t, indicating that the system is in a liquid state, and the relaxation time decreases appreciably with increasing v_0. Therefore, activity pushes the glass transition to higher number density, consistent with the simulation results for the RAB model and other related models. For the smaller density, ρ=1.05, the system is in the liquid state already for v_0=0, and the relaxation of F_q(t) also becomes faster with increasing v_0 or T_eff. The behavior of the self-scattering function F_q^s(t) is similar, as shown in Fig.<ref>(b). While for v_0=0 the tracer particle is trapped and F_q^s(t) reaches a nonzero value for t→∞, it relaxes to zero for v_0=10 and 20, with the relaxation time τ_α decreasing appreciably with increasing v_0. The limiting value f_q=lim_t→∞ϕ_q(t) at the plateau defines the so-called Debye-Waller factor. A nonzero value of f_q indicates that the system is in the glassy state.
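In practice, Eqs. (<ref>) and (<ref>) are integrated on a wavevector grid; the time discretization is already visible, however, in a one-mode caricature in which the kernel is replaced by the schematic form M(t)=λϕ(t)^2, with a single coupling λ standing in for the full vertex (an illustrative assumption, not the full theory):

```python
import numpy as np

# Schematic one-mode caricature of the memory equation
#   dphi/dt + omega*phi + int_0^t M(t-u) dphi/du du = 0,   M(t) = lam*phi(t)^2,
# i.e. the standard F2 toy kernel, used here only to illustrate how a plateau
# (glass) emerges as the coupling lam crosses lam_c = 4.
def solve_schematic(lam, omega=1.0, dt=1e-3, nsteps=10000):
    phi = np.empty(nsteps + 1)
    dphi = np.empty(nsteps)            # increments phi[m+1] - phi[m]
    phi[0] = 1.0
    for n in range(nsteps):
        M = lam * phi[:n + 1] ** 2     # kernel values M(m*dt), m = 0..n
        conv = np.dot(M[n:0:-1], dphi[:n]) if n else 0.0
        phi[n + 1] = phi[n] + dt * (-omega * phi[n] - conv)
        dphi[n] = phi[n + 1] - phi[n]
    return phi

for lam in (3.0, 4.5):
    print(lam, solve_schematic(lam)[-1])  # decays (liquid) for lam<4; plateau ~2/3 for lam=4.5
```

Crossing λ_c turns the decaying solution into one with a persistent plateau, the schematic analogue of crossing ρ_c (or T_eff^c) in the full wavevector-resolved equations.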
With increasing ρ, f_q may change from zero to an appreciable nonzero value, and the density ρ_c at which this happens corresponds to the glass transition point. One may also fix ρ and vary T_eff; then f_q may become nonzero for T_eff below some critical value T_eff^c, which defines a critical temperature for the glass transition. According to the MCT equation (<ref>), the Debye-Waller factor f_q obeys f_q=m_q(∞)/(1+m_q(∞)) where m_q(∞)=ρ/16π^3q^2∫ d^3𝐤[(𝐪̂·𝐤)C_2(𝐪;𝐤)+(𝐪̂·𝐩)C_2(𝐪;𝐩)]^2S(k)S(q)S(p)f_kf_p. This equation can be solved numerically and self-consistently to obtain f_q for given control parameters v_0 (or T_eff), τ_p and ρ. In Fig.<ref>(a), the dependence of f_q on the number density ρ is presented for different v_0 (or T_eff) at given τ_p=0.001. Clearly, f_q changes abruptly from zero to a large nonzero value at a critical density ρ_c, indicated for example by the vertical dashed line for v_0=0 at about ρ_c≃1.064. For given τ_p, the curve shifts to larger values of ρ with increasing v_0, indicating that the glass transition is pushed to higher ρ for larger particle activity, consistent with previously reported simulation results. The results for τ_p=0.05 and 0.167 are shown in Fig.<ref>(b) and (c), respectively. They are similar to those in (a), with the values of ρ_c shifting to somewhat larger values for larger τ_p. In Fig.<ref>(b), it is shown that the relaxation time τ_α increases as the system approaches the glass transition and diverges at the transition point. Therefore, one may also study the glass transition by investigating the behavior of τ_α as a function of ρ. The results are depicted in Fig.<ref> for the same parameter settings as in Fig.<ref>. Obviously, τ_α increases rapidly with ρ for fixed values of v_0 (T_eff) and τ_p, and it diverges at some critical value ρ_c. For a very small τ_p=0.001, changing v_0 does not much affect the values of τ_α, as shown in Fig.<ref>(a). The influence becomes more considerable when τ_p gets larger, as demonstrated in <ref>(b) and (c), and the value of ρ_c also shifts to larger values, consistent with Fig.<ref>. Of course, the value of ρ_c obtained from f_q and from τ_α should agree within reasonable numerical uncertainty. In Fig.<ref>(a) and (b), the dependence of ρ_c, obtained from both f_q and τ_α, on v_0 and T_eff is presented for different given values of τ_p. Clearly, ρ_c increases with both v_0 and T_eff, as expected. Interestingly, ρ_c shows a nearly linear dependence on v_0^2, with a slope that increases with τ_p. This linear dependence was also observed in the simulation work of Ni. We also note that ρ_c increases with τ_p at fixed v_0, whereas it decreases with τ_p at fixed T_eff, according to the data presented in Fig.<ref>(a) and (b). This is shown more clearly in Fig.<ref>(c), where we also plot ρ_c as a function of τ_p for different T_eff. For comparison, the dashed line gives the value ρ_c^B for the corresponding passive Brownian system with T=T_eff, obtained by setting D_t=D_0 and zero self-propulsion force, 𝐟_i=0, in Eq.(<ref>). Clearly, ρ_c approaches ρ_c^B in the limit τ_p→0, as expected. For equilibrium systems, the glass transition is often studied in terms of the critical temperature T_c, below which the system enters the glassy state. In the present work, we may study the nonequilibrium glass transition in the same spirit by calculating the critical effective temperature T_eff^c at fixed number density ρ.
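As an aside on the numerics: the self-consistent equation for f_q quoted above is solved by fixed-point iteration. Its structure is most transparent in the same schematic one-mode reduction used earlier, with m(∞)=λf^2 standing in for the full wavevector integral (again an illustrative assumption): iterating f←m/(1+m) from f=1 converges to the largest root, which jumps discontinuously at λ_c=4, the analogue of the abrupt jump of f_q at ρ_c seen in Fig.<ref>.

```python
import numpy as np

# Fixed-point iteration for the Debye-Waller factor in the schematic one-mode
# model, m(inf) = lam * f^2. Starting from f = 1, the iteration converges to
# the largest root f = [1 + sqrt(1 - 4/lam)]/2 for lam > 4 and to f = 0 below.
def debye_waller(lam, tol=1e-12, maxit=100000):
    f = 1.0
    for _ in range(maxit):
        m = lam * f * f
        f_new = m / (1.0 + m)
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f

for lam in np.arange(3.6, 4.45, 0.1):
    print(round(lam, 2), debye_waller(lam))   # discontinuous jump near lam = 4
```

In the full calculation the same iteration runs on a k-grid, with m_q(∞) evaluated from S(k) and C_2 at each step; convergence slows critically near ρ_c, just as it does here near λ_c.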
In Fig.<ref>(a) to (c), the results for τ_α are presented as functions of T_eff for different τ_p and ρ. For fixed values of τ_p and ρ, τ_α decreases monotonically with T_eff. Below a critical value of T_eff, corresponding to T_eff^c, τ_α diverges, indicating the occurrence of the glass transition. In Fig.<ref>(d), the dependence of T_eff^c on τ_p for different ρ is shown. One can see that T_eff^c increases monotonically with τ_p and approaches a constant value in the limit τ_p→0. This constant corresponds to the value for a passive Brownian system with T_c=T_eff^c. We also note that T_eff^c increases with the number density ρ, indicating that a denser system enters the glassy state at a higher critical temperature, as expected. § CONCLUSIONS In summary, we have developed a promising mode coupling theory to study the nonequilibrium glassy dynamics of a general model system of self-propelled particles. The self-propulsion force is given by a colored noise described by an OU process, and thermal noise from the environment is also included. Our work contains two main parts. Using the Fox approximation for Langevin systems with colored noise, an approximate Smoluchowski equation is obtained, governing the time evolution of the distribution function of the particles' positions. This effective SE is expected to be accurate for not too large persistence time τ_p of the propulsion force, and it thus serves as a promising starting point to study the system's relaxation and glassy dynamics. The SE involves a configuration-dependent instantaneous diffusion function D_j(𝐫^N), which is related to the divergence of the force acting on particle j. With this SE, we are able to derive memory function equations for the time-dependent behavior of the collective and self-intermediate scattering functions F_q(t) and F_q^s(t) in the nonequilibrium steady state. Applying the basic assumption that macroscopic currents vanish in the steady state and using the standard mode-coupling approximation, we have obtained explicit expressions for the irreducible memory functions as well as the frequency terms. In particular, we find that the dynamics is mainly determined by an effective diffusion coefficient D̅, which is the ensemble average of D_j(𝐫^N) in the nonequilibrium steady state, and a pseudo steady-state structure factor S_2(k), which involves the coupling between D_j(𝐫^N) and the density fluctuations. D̅ enters the frequency term and thus governs the short-time dynamics, whereas both quantities enter the vertex of the memory function and influence the long-time dynamics. By direct simulations, we find that D̅ increases with the single-particle effective temperature T_eff as well as with the magnitude v_0 of the propulsion force, while it decreases with τ_p at fixed T_eff or v_0. For relatively large values of k, the structure function S_2(k) simply decouples into the product of D̅/D_0 and S(k), with S(k) the nonequilibrium static structure factor and D_0 the single-particle diffusion coefficient in the limit τ_p→0, whereas it shows apparent deviations from (D̅/D_0)S(k) at small k. Our theory makes it feasible to investigate the glassy dynamics of the system by studying the long-time behavior of F_q(t) or F_q^s(t), choosing the persistence time τ_p, the effective temperature T_eff, and the number density ρ as free parameters.
We find that the critical density ρ_c for the glass transition shifts to larger values with increasing T_eff or v_0 at fixed τ_p, in good qualitative accordance with simulation results for active Brownian particles and related systems. In addition, we have investigated how the critical density ρ_c changes with τ_p at fixed T_eff, finding that ρ_c decreases monotonically with τ_p and approaches the value for the corresponding passive Brownian system in the limit τ_p→0, as expected. We have also calculated the critical temperature T_eff^c for the glass transition at fixed density, finding that it increases monotonically with τ_p and also approaches the Brownian limit for τ_p→0. In future work, we would like to extend the present method to more complex systems, such as mixtures of self-propelled particles with different driving forces, or active-passive mixtures<cit.>. As mentioned in the main text, the relaxation time τ_α may show a nontrivial dependence on the persistence time τ_p, which would also be an interesting topic to address for systems with both propulsion forces and thermal noise. In addition, our results demonstrate that only in the limit τ_p→0 does the glass transition point ρ_c or T_eff^c approach that of a Brownian system, indicating that the 'collective' effective temperature governing the nonequilibrium glass transition is different from the single-particle one<cit.>, which may deserve more detailed study. In short, we believe that our work presents a useful theoretical framework to study the nonequilibrium dynamics of dense active-particle systems at the microscopic level, which could find many applications in future work. This work is supported by the National Basic Research Program of China (Grant No. 2013CB834606), by the National Science Foundation of China (Grant Nos. 21673212, 21521001, 21473165, 21403204), by the Ministry of Science and Technology of China (Grant No. 2016YFA0400904), and by the Fundamental Research Funds for the Central Universities (Grant Nos. WK2060030018, 2030020028, 2340000074).

References

[1] T. Vicsek and A. Zafeiris, Phys. Rep. 517, 71 (2012).
[2] M. C. Marchetti and J. F. Joanny, Rev. Mod. Phys. 85, 1147 (2013).
[3] C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Rev. Mod. Phys. 88, 045006 (2016).
[4] V. Schaller, C. Weber, C. Semmrich, E. Frey, and A. R. Bausch, Nature 467, 73 (2010).
[5] Y. Sumino, K. H. Nagai, Y. Shitaka, D. Tanaka, K. Yoshikawa, H. Chate, and K. Oiwa, Nature 483, 448 (2012).
[6] J. Schwarz-Linek, C. Valeriani, A. Cacciuto, M. E. Cates, D. Marenduzzo, A. N. Morozov, and W. C. K. Poon, PNAS 109, 4052 (2012).
[7] Y. Fily and M. C. Marchetti, Phys. Rev. Lett. 108, 235702 (2012).
[8] I. Buttinoni, J. Bialké, F. Kümmel, H. Löwen, C. Bechinger, and T. Speck, Phys. Rev. Lett. 110, 238301 (2013).
[9] R. Wittkowski, A. Tiribocchi, J. Stenhammar, R. J. Allen, D. Marenduzzo, and M. E. Cates, Nat. Commun. 5, 4351 (2014).
[10] S. K. Das, S. A. Egorov, B. Trefz, P. Virnau, and K. Binder, Phys. Rev. Lett. 112, 198301 (2014).
[11] L. Berthier and J. Kurchan, Nat. Phys. 9, 310 (2013).
[12] R. Ni, M. A. C. Stuart, and M. Dijkstra, Nat. Commun. 4, 1 (2013).
[13] L. Berthier, Nat. Phys. 9, 310 (2013).
[14] S. Mandal, S. Lang, M. Gross, M. Oettel, D. Raabe, T. Franosch, and F. Varnik, Nat. Commun. 5, 1 (2014).
[15] L. Berthier, Phys. Rev. Lett. 112, 220602 (2014).
[16] A. Sharma, R. Wittmann, and J. M. Brader, Phys. Rev. E 95, 012115 (2017).
[17] T. E. Angelini, E. Hannezo, X. Trepat, M. Marquez, J. J. Fredberg, and D. A. Weitz, PNAS 108, 4714 (2011).
[18] V. Krakoviack, Phys. Rev. Lett. 94, 065703 (2005).
[19] G. M. Whitesides and B. Grzybowski, Science 295, 2418 (2002).
[20] B. ten Hagen, S. van Teeffelen, and H. Löwen, J. Phys.: Condens. Matter 23, 194119 (2011).
[21] R. Ni, M. A. C. Stuart, M. Dijkstra, and P. G. Bolhuis, Soft Matter 10, 6609 (2014).
[22] D. Levis and L. Berthier, Phys. Rev. E 89, 062301 (2014).
[23] E. Flenner, G. Szamel, and L. Berthier, arXiv:1606.00641 (2016).
[24] J. Bialké, T. Speck, and H. Löwen, Phys. Rev. Lett. 108, 168301 (2012).
[25] S. Li, H. Jiang, and Z. Hou, Soft Matter 11, 5712 (2015).
[26] H. Ding, M. Feng, H. Jiang, and Z. Hou, arXiv:1506.02754 (2015).
[27] T. F. F. Farage and J. M. Brader, arXiv preprint (2014).
[28] T. F. Farage, P. Krinninger, and J. M. Brader, Phys. Rev. E 91, 042310 (2015).
[29] S. K. Nandi, arXiv:1605.06073 (2016).
[30] G. Szamel, E. Flenner, and L. Berthier, Phys. Rev. E 91, 062304 (2015).
[31] G. Szamel, Phys. Rev. E 90, 012111 (2014).
[32] R. F. Fox, Phys. Rev. A 33, 467 (1986).
[33] G. Nägele, Phys. Rep. 272, 215 (1996).
[34] B. Cichocki and W. Hess, Physica A 141, 475 (1987).
[35] K. Kawasaki, Physica A 208, 35 (1994).
[36] G. Szamel, Phys. Rev. E 93, 012603 (2016).
[37] C. Maggi, U. M. B. Marconi, N. Gnan, and R. Di Leonardo, Sci. Rep. 5 (2015).
[38] D. Levis and L. Berthier, EPL 111, 60006 (2015).
[39] K. Kawasaki, Physica A 215, 61 (1995).
[40] J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids (Elsevier, 1990).
[41] W. Götze, Complex Dynamics of Glass-Forming Liquids: A Mode-Coupling Theory, Vol. 143 (OUP Oxford, 2008).

§ DERIVATION OF THE SMOLUCHOWSKI EQUATION Generally, for a Langevin equation (LE) with colored noise, one can obtain the FPE within the Fox approximation<cit.>. For illustration, consider a simple one-dimensional overdamped LE ẋ(t)=G(x)+χ(t) where G(x) denotes the external or internal force and χ(t) is a stochastic noise with correlation ⟨χ(t)χ(s)⟩ =C(t-s). Define the probability distribution function P(y,t)=∫ D[χ]P[χ]δ(y-x(t)) where ∫ D[χ] denotes integration over the noise paths χ(t) and P[χ] is the distribution functional of χ, which is assumed to be Gaussian. One can then obtain the FPE governing the evolution of P(y,t) as follows <cit.>: ∂/∂ tP(y,t)=-∂/∂ y[G(y)P(y,t)]+∂^2/∂ y^2{∫_0^tds'C(t-s')∫ D[χ]P[χ]e^∫_s'^tdsG'(x(s))δ(y-x(t))}. Note that if χ(t) is a white noise, C(t-s)=D_0δ(t-s), the second term is just D_0∂^2/∂ y^2[∫ D[χ]P[χ]δ(y-x(t))]≡ D_0∂^2/∂ y^2P(y,t), which recovers the standard FPE.
For a colored noise with C(t-s)=D/τ_0exp(-|t-s|/τ_0) one obtains the approximate FPE ∂/∂ tP(y,t)=-∂/∂ y[G(y)P(y,t)]+D∂^2/∂ y^2[1/(1-τ_0G'(y))P(y,t)] by assuming ∫_s'^tdsG'(x(s))≈ G'(x(t))(t-s'). For a general multi-variable case, dx_i(t)/dt=G_i({x_i})+χ_i(t) with ⟨χ_i(t)χ_j(s)⟩ =C_ij(t-s), i,j=1,2,⋯,N, the FPE for the distribution function P(𝐲,t)=∫ D[χ]P[χ]δ(𝐲-𝐱(t)) reads ∂/∂ tP(𝐲,t) = -∑_i∂_i[G_i(𝐲)P(𝐲,t)]+∑_ij∂_i{∑_l∫_0^tds'C_il(t-s')∂_j∫ D[χ]P[χ]exp[∫_s'^tds∂/∂ x_lG_j(𝐱(s))δ_jl]δ(𝐲-𝐱(t))} = -∑_i∂_i[G_i(𝐲)P(𝐲,t)]+∑_ij∂_i{∫_0^tds'C_ij(t-s')∂_j∫ D[χ]P[χ]exp[∫_s'^tds∂_jG_j(𝐱(s))]δ(𝐲-𝐱(t))}. Then, if C_ij(t-s)=δ_ijC(t-s)=δ_ijD/τ_0exp(-|t-s|/τ_0), using the approximation mentioned above we obtain ∂/∂ tP(𝐲,t)=-∑_i∂_i[G_i(𝐲)P(𝐲,t)]+∑_iD∂_i^2{[1/(1-τ_0∂_iG_i(𝐲))]P(𝐲,t)}. For our system described by Eq.(<ref>), we have correspondingly x→𝐫^N, G(x)→γ^-1𝐅(𝐫^N)=β D_t𝐅(𝐫^N), and γ^-1𝐟_i(t)→χ_i(t). According to Eq.(<ref>), the variable D in Eq. (<ref>) is D_fτ_p^2/γ^2 and τ_0 is τ_p. Note that the thermal white-noise term ξ_i(t) in Eq.(<ref>) contributes a normal diffusion term to the FPE. Thus we finally obtain ∂/∂ tΨ(𝐫^N,t) = -∑_i∇_i·[β D_t𝐅_i(𝐫^N)-D_t∇_i]Ψ(𝐫^N,t)+∑_i∇_i^2[D_fτ_p^2/γ^2/1-τ_p·β D_t∇_i·𝐅_i(𝐫^N)]Ψ(𝐫^N,t) = +∑_i∇_i·{ D_t+[D_fτ_p^2/γ^2/1-τ_p·β D_t∇_i·𝐅_i(𝐫^N)]}∇_iΨ(𝐫^N,t)-∑_i∇_i·{β D_t𝐅_i(𝐫^N)-∇_i[D_fτ_p^2/γ^2/1-τ_p·β D_t∇_i·𝐅_i(𝐫^N)]}Ψ(𝐫^N,t). Writing D_i(𝐫^N)=D_t+[D_fτ_p^2/γ^2/1-τ_p·β D_t∇_i·𝐅_i(𝐫^N)] and 𝐅_i^eff(𝐫^N) =D_t/D_i(𝐫^N){𝐅_i(𝐫^N)-1/β D_t∇_i[D_fτ_p^2/γ^2/1-τ_p·β D_t∇_i·𝐅_i(𝐫^N)]} =D_t/D_i(𝐫^N)[𝐅_i(𝐫^N)-1/β D_t∇_iD_i(𝐫^N)] for the effective force, we finally have ∂/∂ tΨ(𝐫^N,t)=∑_i∇_i· D_i(𝐫^N)[∇_i-β𝐅_i^eff(𝐫^N)]Ψ(𝐫^N,t), which corresponds exactly to Eqs. (<ref>) to (<ref>) in the main text. § DERIVATION OF THE GENERALIZED LANGEVIN EQUATION §.§ Memory Function Equation Here we present the derivation of the memory function equations, namely Eqs. (<ref>) to (<ref>) in the main text, for the scattering function F_q(t)=1/N⟨ρ_𝐪^*e^Ω̂tρ_𝐪⟩. This is most easily done in the Laplace domain, even for the complicated Smoluchowski operator Ω̂ of Eq.(<ref>), which contains the instantaneous effective diffusion coefficient D_j(𝐫^N) given by Eq.(<ref>). The main steps are similar to the derivation of the MCT equations for passive colloidal systems, following the Mori-Zwanzig projection operator procedure. We start from the Laplace transform of the scattering function, F̃(q,z)=ℒ𝒯[F_q(t)]=⟨ A_-𝐪1/z-Ω̂A_𝐪⟩, where ℒ𝒯 stands for the Laplace transform and A_𝐪=ρ_𝐪/√(N). One can define a projection operator onto the density, 𝒫(⋯)=|A_𝐪⟩⟨ A_-𝐪A_𝐪⟩^-1⟨ A_-𝐪(⋯)⟩, which has the properties 𝒫A_𝐪=A_𝐪 and 𝒫𝒫=𝒫, i.e., 𝒫^n=𝒫. Accordingly, we define 𝒬=ℐ-𝒫, which satisfies 𝒬A_𝐪=0, 𝒫𝒬=0, and 𝒬^n=𝒬. Then, for the operator [z-Ω̂]^-1, one has the identity 1/z-Ω̂=1/z-Ω̂𝒬+1/z-Ω̂𝒬Ω̂𝒫1/z-Ω̂, a Dyson decomposition, which can easily be checked by right-multiplying both sides by z-Ω̂. Here the operator Ω̂ acts on all the functions to its right.
In the Laplace domain, the time derivative reads ℒ𝒯[∂_tF(q,t)](z) = zF̃(q,z)-F(q,t=0)=⟨ A_-𝐪Ω̂1/z-Ω̂A_𝐪⟩ =⟨ A_-𝐪Ω̂𝒫1/z-Ω̂A_𝐪⟩ +⟨ A_-𝐪Ω̂𝒬1/z-Ω̂A_𝐪⟩. Using the definition of 𝒫, the first term is ⟨ A_-𝐪Ω̂𝒫1/z-Ω̂A_𝐪⟩=⟨ A_-𝐪Ω̂A_𝐪⟩⟨ A_-𝐪A_𝐪⟩^-1⟨ A_-𝐪1/z-Ω̂A_𝐪⟩ =⟨ A_-𝐪Ω̂A_𝐪⟩⟨ A_-𝐪A_𝐪⟩^-1F̃(q,z), while the second term is, using the identity (<ref>), ⟨ A_-𝐪Ω̂𝒬1/z-Ω̂A_𝐪⟩ =⟨ A_-𝐪Ω̂𝒬[1/z-Ω̂𝒬+1/z-Ω̂𝒬Ω̂𝒫1/z-Ω̂]A_𝐪⟩. Note that 𝒬A_𝐪=0, hence (z-Ω̂𝒬)^-1A_𝐪=z^-1A_𝐪 and the first contribution vanishes, ⟨ A_-𝐪Ω̂𝒬A_𝐪⟩/z=0, so that ⟨ A_-𝐪Ω̂𝒬1/z-Ω̂A_𝐪⟩=⟨ A_-𝐪Ω̂𝒬1/z-Ω̂𝒬Ω̂𝒫1/z-Ω̂A_𝐪⟩ =⟨ A_-𝐪Ω̂𝒬1/z-Ω̂𝒬Ω̂A_𝐪⟩⟨ A_-𝐪A_𝐪⟩^-1⟨ A_-𝐪1/z-Ω̂A_𝐪⟩ =⟨ A_-𝐪Ω̂𝒬1/z-𝒬Ω̂𝒬𝒬Ω̂A_𝐪⟩⟨ A_-𝐪A_𝐪⟩^-1F̃(q,z), where we used the definition of 𝒫 in the second equality and the fact that 𝒬𝒬=𝒬 in the third. We introduce ω_q=-⟨ A_-𝐪Ω̂A_𝐪⟩⟨ A_-𝐪A_𝐪⟩^-1, which is Eq.(<ref>), and define M̃(q,z)=-⟨ A_-𝐪Ω̂𝒬1/z-𝒬Ω̂𝒬𝒬Ω̂A_𝐪⟩⟨ A_-𝐪Ω̂A_𝐪⟩^-1. Then the time evolution of F(q,t) in the Laplace domain reads ℒ𝒯[∂_tF(q,t)](z)=zF̃(q,z)-F(q,t=0)=-ω_q[1-M̃(q,z)]F̃(q,z), and therefore F̃(q,z)=F(q,t=0)/z+ω_q[1-M̃(q,z)]. For colloidal systems there is the so-called irreducibility issue <cit.>. Following the procedure of <cit.>, one introduces an irreducible memory function M̃^irr(q,z), related to M̃(q,z) by M̃(q,z)=M̃^irr(q,z)[1+M̃^irr(q,z)]^-1, with M̃^irr(q,z)=-⟨ A_-𝐪Ω̂𝒬1/z-𝒬Ω̂^irr𝒬𝒬Ω̂A_𝐪⟩⟨ A_-𝐪Ω̂A_𝐪⟩^-1. Herein, Ω̂^irr denotes an irreducible Smoluchowski operator whose detailed form is not needed within the MCT approximation below. Consequently, this leads to F̃(q,z)=F(q,t=0)/z+ω_q/1+M̃^irr(q,z), corresponding in the time domain to ∂/∂ tF_q(t)+ω_qF_q(t)+∫_0^tduM^irr(q,t-u)∂/∂ uF_q(u)=0, which is exactly Eq.(<ref>) in the main text. Note that the above derivation is quite general and does not depend on the explicit form of the operator Ω̂, whereas the expressions for ω_q and M^irr do, of course, depend on Ω̂. §.§ Frequency ω_q We now substitute A_𝐪=ρ_𝐪/√(N)=∑_je^-i𝐪·𝐫_j/√(N) to calculate ω_q. Note that ⟨ A_-𝐪Ω̂A_𝐪⟩=∫ d𝐫^NA_-𝐪∑_j∇_j· D_j[∇_j-β𝐅_j^eff]A_𝐪P_s(𝐫^N) =∫ d𝐫^NA_-𝐪∑_j∇_j· D_j{∇_j[A_𝐪P_s(𝐫^N)]-β𝐅_j^effA_𝐪P_s(𝐫^N)}, where, as mentioned before, the operator Ω̂ acts on all the functions to its right, including the steady-state distribution function P_s(𝐫^N). In the steady state, the sum of the divergences of all the currents 𝐉_j^s, given by Eq.(<ref>), is zero, Ω̂P_s(𝐫^N)=-∑_j∇_j·𝐉_j^s=0. To proceed, and as many authors have done, we make the stronger assumption that each current vanishes, 𝐉_j^s=-D_j(𝐫^N)[∇_j-β𝐅_j^eff(𝐫^N)]P_s(𝐫^N)=0. Therefore, ∇_jP_s(𝐫^N)=β𝐅_j^eff(𝐫^N)P_s(𝐫^N), which is the counterpart of the Yvon theorem<cit.> in this nonequilibrium system. Using this result, one has ⟨ A_-𝐪Ω̂A_𝐪⟩=∫ d𝐫^NA_-𝐪∑_j∇_j·[D_j(∇_jA_𝐪)P_s(𝐫^N)] =-∑_j∫ d𝐫^N[(∇_jA_-𝐪)·(∇_jA_𝐪)]D_jP_s(𝐫^N) =-N^-1∑_j∫ d𝐫^Nq^2D_jP_s(𝐫^N) =-q^2∑_j⟨ D_j⟩/N=-q^2D̅, where the second equality results from partial integration and we have used ∇_jA_𝐪=-i𝐪exp(-i𝐪·𝐫_j)/√(N) in the third. Here ⟨ D_j⟩ =∫ d𝐫^ND_j(𝐫^N)P_s(𝐫^N) denotes the averaged instantaneous diffusion function of particle j and D̅=N^-1∑_j⟨ D_j⟩. Therefore, the effective frequency ω_q reads ω_q=-⟨ A_-𝐪Ω̂A_𝐪⟩/⟨ A_-𝐪A_𝐪⟩=q^2D̅/S(q), which is Eq.(<ref>) in the main text.
§.§ Memory Function M^irr(q,t) In the time domain, the irreducible memory function M^irr(q,t) is given by M̃^irr(q,t)=-⟨ A_-𝐪Ω̂𝒬e^𝒬Ω̂^irr𝒬t𝒬Ω̂A_𝐪⟩⟨ A_-𝐪Ω̂A_𝐪⟩^-1. Using the adjoint operator Ω̂^†, the numerator is ⟨ A_-𝐪Ω̂𝒬e^𝒬Ω̂^irr𝒬t𝒬Ω̂A_𝐪⟩=⟨ A_-𝐪Ω̂𝒬e^𝒬Ω̂^irr𝒬t𝒬(Ω̂^†A_𝐪)⟩ =⟨(Ω̂^†A_-𝐪)𝒬e^𝒬Ω̂^irr𝒬t𝒬(Ω̂^†A_𝐪)⟩ =⟨(𝒬Ω̂^†A_-𝐪)𝒬e^𝒬Ω̂^irr𝒬t𝒬(𝒬Ω̂^†A_𝐪)⟩ =⟨ R_𝐪^*𝒬e^𝒬Ω̂^irr𝒬t𝒬R_𝐪⟩, where 𝒬𝒬=𝒬 is used in the third equality and we have introduced R_𝐪=𝒬(Ω̂^†A_𝐪)=(Ω̂^†A_𝐪)-𝒫(Ω̂^†A_𝐪) = (Ω̂^†A_𝐪)-⟨ A_-𝐪(Ω̂^†A_𝐪)⟩/⟨ A_-𝐪A_𝐪⟩A_𝐪 = (Ω̂^†A_𝐪)-⟨ A_-𝐪Ω̂A_𝐪⟩/⟨ A_-𝐪A_𝐪⟩A_𝐪 = (Ω̂^†A_𝐪)+ω_qA_𝐪, which is a type of "random force". It is at this step that one introduces the mode-coupling approximation: the memory function is assumed to be dominated by the projection onto the coupled density modes <cit.>. One defines a second-order projection operator 𝒫_2≡1/2∑_𝐤,𝐩|A_𝐩A_𝐤⟩⟨ A_𝐩^*A_𝐤^*A_𝐩A_𝐤⟩^-1⟨ A_𝐩^*A_𝐤^*| and makes the approximation ⟨ R_𝐪^*e^𝒬Ω̂^irr𝒬tR_𝐪⟩ ≈ ⟨ R_𝐪^*𝒫_2e^𝒬Ω̂^irr𝒬t𝒫_2R_𝐪⟩ =1/4∑_𝐤,𝐩∑_𝐤',𝐩'⟨ R_𝐪^*A_𝐩A_𝐤⟩⟨ A_𝐩^*A_𝐤^*A_𝐩A_𝐤⟩^-1 ×⟨ A_𝐩'^*A_𝐤'^*R_𝐪⟩⟨ A_𝐩'^*A_𝐤'^*A_𝐩'A_𝐤'⟩^-1 ×⟨(A_𝐩A_𝐤)^*e^𝒬Ω̂^irr𝒬tA_𝐩'A_𝐤'⟩≈ 1/4∑_𝐤,𝐩∑_𝐤',𝐩'⟨ρ_𝐩'^*ρ_𝐤'^*R_𝐪⟩/NS(k')S(p')⟨ R_𝐪^*ρ_𝐩ρ_𝐤⟩/NS(k)S(p) ×1/N^2[δ_𝐩𝐩'δ_𝐤𝐤'⟨ρ_𝐩^*e^Ω̂tρ_𝐩'⟩⟨ρ_𝐤^*e^Ω̂tρ_𝐤'⟩+δ_𝐩𝐤'δ_𝐤𝐩'⟨ρ_𝐩^*e^Ω̂tρ_𝐤'⟩⟨ρ_𝐤^*e^Ω̂tρ_𝐩'⟩] =1/2∑_𝐤,𝐩|⟨ρ_𝐩^*ρ_𝐤^*R_𝐪⟩|^2/[N^2S(k)S(p)]^2⟨ρ_𝐩^*e^Ω̂tρ_𝐩⟩⟨ρ_𝐤^*e^Ω̂tρ_𝐤⟩. Here we factorize the static and dynamic four-point correlation functions into products of two-point functions, ⟨ρ_𝐩^*ρ_𝐤^*ρ_𝐩'ρ_𝐤'⟩ ≈ ⟨ρ_𝐩^*ρ_𝐤'⟩⟨ρ_𝐤^*ρ_𝐩'⟩ +⟨ρ_𝐩^*ρ_𝐩'⟩⟨ρ_𝐤^*ρ_𝐤'⟩ =δ_𝐩,𝐤'δ_𝐤,𝐩'N^2S(p)S(k)+δ_𝐩,𝐩'δ_𝐤,𝐤'N^2S(p)S(k), and simultaneously replace the projected operator 𝒬Ω̂^irr𝒬 by the full Smoluchowski operator Ω̂ in the propagator governing the time evolution of the correlation functions <cit.>: ⟨(A_𝐩A_𝐤)^*e^𝒬Ω̂^irr𝒬tA_𝐩'A_𝐤'⟩≈δ_𝐩𝐩'δ_𝐤𝐤'⟨ρ_𝐩^*e^Ω̂tρ_𝐩'⟩⟨ρ_𝐤^*e^Ω̂tρ_𝐤'⟩ +δ_𝐩𝐤'δ_𝐤𝐩'⟨ρ_𝐩^*e^Ω̂tρ_𝐤'⟩⟨ρ_𝐤^*e^Ω̂tρ_𝐩'⟩. We now need to calculate ⟨ρ_𝐩^*ρ_𝐤^*R_𝐪⟩, which is given by ⟨ρ_𝐩^*ρ_𝐤^*R_𝐪⟩ =1/√(N)[⟨(ρ_𝐩ρ_𝐤)^*(Ω̂^†ρ_𝐪)⟩ +ω_q⟨(ρ_𝐩ρ_𝐤)^*ρ_𝐪⟩]. The first term in the bracket is ⟨(ρ_𝐩ρ_𝐤)^*(Ω̂^†ρ_𝐪)⟩=⟨(ρ_𝐩ρ_𝐤)^*Ω̂ρ_𝐪⟩ =∑_j⟨(-i𝐩e^i𝐩·𝐫_jρ_𝐤-i𝐤e^i𝐤·𝐫_jρ_𝐩)· D_j(-i𝐪e^-i𝐪·𝐫_j)⟩ = -𝐪·𝐩⟨∑_j,lD_je^-i(𝐪-𝐩)·𝐫_j+i𝐤·𝐫_l⟩ -𝐪·𝐤⟨∑_j,lD_je^-i(𝐪-𝐤)·𝐫_j+i𝐩·𝐫_l⟩ = -δ_𝐪,𝐤+𝐩[(𝐪·𝐩)⟨∑_j,lD_je^-i𝐤·(𝐫_j-𝐫_l)⟩ +(𝐪·𝐤)⟨∑_j,lD_je^-i𝐩·(𝐫_j-𝐫_l)⟩] = -ND_0δ_𝐪,𝐤+𝐩[(𝐪·𝐩)S_2(k)+(𝐪·𝐤)S_2(p)], where the second equality is simply a result of partial integration and the third results from translational invariance. For brevity, we have introduced in the fourth equality the function S_2(k)=1/ND_0⟨∑_j,lD_je^-i𝐤·(𝐫_j-𝐫_l)⟩, in accordance with Eq.(<ref>) in the main text. If D_j is a constant, e.g. D_j=D_0 in the τ_p→0 limit, it can be drawn out of the bracket, such that S_2(k)=N^-1⟨∑_j,le^-i𝐤·(𝐫_j-𝐫_l)⟩ =S(k), which is exactly the static structure factor. Nevertheless, in the present case D_j depends on the instantaneous configuration 𝐫^N, so S_2(k) may show features different from S(k).
It is interesting to note that for large k, D_j effectively decouples from the Fourier components exp(-i𝐤·𝐫_l), and S_2(k) can be approximated by S_2(k)≃D̅/ND_0⟨∑_j,le^-i𝐤·(𝐫_j-𝐫_l)⟩ =D̅/D_0S(k), as shown in Fig.<ref> in the main text. The second term in Eq.(<ref>) can be calculated using the so-called convolution approximation, which is assumed to remain appropriate in the nonequilibrium situation<cit.>, ⟨ρ_𝐩^*ρ_𝐤^*ρ_𝐪⟩≈δ_𝐤+𝐩,𝐪NS(q)S(p)S(k). Therefore, we obtain ⟨ρ_𝐩^*ρ_𝐤^*R_𝐪⟩= -√(N)D_0δ_𝐪,𝐤+𝐩[𝐪·𝐩S_2(k)+𝐪·𝐤S_2(p)-q^2D̅/D_0S(p)S(k)]. Substituting this into Eq.(<ref>), we obtain ⟨ R_𝐪^*e^𝒬Ω̂^irr𝒬tR_𝐪⟩ ≈1/2∑_𝐤,𝐩|⟨ρ_𝐩^*ρ_𝐤^*R_𝐪⟩|^2/[N^2S(k)S(p)]^2⟨ρ_𝐩^*e^Ω̂tρ_𝐩⟩⟨ρ_𝐤^*e^Ω̂tρ_𝐤⟩ =1/2N∑_𝐤,𝐩|V_𝐪(𝐤,𝐩)|^2⟨ρ_𝐩^*e^Ω̂tρ_𝐩⟩⟨ρ_𝐤^*e^Ω̂tρ_𝐤⟩, where the vertex function V_𝐪(𝐤,𝐩) is defined as V_𝐪(𝐤,𝐩) =√(N)⟨ρ_𝐩^*ρ_𝐤^*R_𝐪⟩[N^2S(k)S(p)]^-1 = -δ_𝐤+𝐩,𝐪D_0/N{(𝐪·𝐤)S_2(p)/S(p)S(k)+(𝐪·𝐩)S_2(k)/S(p)S(k)-q^2D̅/D_0} = -δ_𝐤+𝐩,𝐪D̅/N{(𝐪·𝐤)[D_0S_2(p)/D̅S(p)S(k)-1]+(𝐪·𝐩)[D_0S_2(k)/D̅S(p)S(k)-1]}. Now it is instructive to define a pseudo "direct correlation function" as C_2(𝐪;𝐤)=δ_𝐤+𝐩,𝐪/ρ[1-D_0S_2(p)/D̅S(p)S(k)]≡1/ρ[1-D_0S_2(|𝐪-𝐤|)/D̅S(|𝐪-𝐤|)S(k)]. Note that if Eq.(<ref>) becomes an equality, namely S_2(k)=D̅S(k)/D_0, then C_2(𝐪;𝐤)=1/ρ[1-1/S(k)]=c(k), which has the same form as the conventional direct correlation function. With this notation, the vertex can be written as V_𝐪(𝐤,𝐩)=ρD̅/N[(𝐪·𝐤)C_2(𝐪;𝐤)+(𝐪·𝐩)C_2(𝐪;𝐩)] and ⟨ R_𝐪^*e^𝒬Ω̂^irr𝒬tR_𝐪⟩=1/2N∑_𝐤,𝐩|V_𝐪(𝐤,𝐩)|^2⟨ρ_𝐩^*e^Ω̂tρ_𝐩⟩⟨ρ_𝐤^*e^Ω̂tρ_𝐤⟩≈1/2∑_𝐤ρ^2D̅^2/N[(𝐪·𝐤)C_2(𝐪;𝐤)+(𝐪·𝐩)C_2(𝐪;𝐩)]^2F_k(t)F_p(t). Consequently, we get the irreducible memory function M̃^irr(q;t)≈-⟨ R_𝐪^*e^𝒬Ω̂^irr𝒬tR_𝐪⟩⟨ A_-𝐪Ω̂A_𝐪⟩^-1 =ρ^2D̅/2q^2N∑_𝐤[(𝐪·𝐤)C_2(𝐪;𝐤)+(𝐪·𝐩)C_2(𝐪;𝐩)]^2F_k(t)F_p(t). Changing to an integration via ∑_𝐤→(2π)^-3V∫ d^3𝐤, one has M̃^irr(q;t)=ρD̅/16π^3∫ d^3𝐤[(𝐪̂·𝐤)C_2(𝐪;𝐤)+(𝐪̂·𝐩)C_2(𝐪;𝐩)]^2F_k(t)F_p(t), where 𝐪̂=𝐪/q is the unit vector in the direction of 𝐪. This is Eq. (<ref>) in the main text. §.§ Tagged Particle Dynamics We now consider the tagged-particle dynamics. The relevant variable is A_𝐪=ρ_𝐪^s=e^-i𝐪·𝐫_s (the subscript or superscript 's' stands for the single particle), and the self-scattering function reads F_q^s(t)=⟨ρ_-𝐪^se^Ω̂tρ_𝐪^s⟩ with F_q^s(0)=1. The derivation of the memory function equation for F_q^s(t) is similar to that for F_q(t), except that some of the intermediate results differ. Briefly, we have M_s^irr(q,t)=-⟨ R_𝐪^s*e^𝒬Ω̂^irr𝒬tR_𝐪^s⟩⟨ρ_-𝐪^sΩ̂ρ_𝐪^s⟩^-1 where R_𝐪^s=𝒬(Ω̂^†ρ_𝐪^s)=(Ω̂^†ρ_𝐪^s)-𝒫(Ω̂^†ρ_𝐪^s) = (Ω̂^†ρ_𝐪^s)-⟨ρ_-𝐪^s(Ω̂^†ρ_𝐪^s)⟩/⟨ρ_-𝐪^sρ_𝐪^s⟩ρ_𝐪^s = (Ω̂^†ρ_𝐪^s)+q^2D̅ρ_𝐪^s. In the third equality we have used the result ⟨ρ_-𝐪^sΩ̂ρ_𝐪^s⟩ =-q^2⟨ D_s⟩=-q^2D̅.
To calculate ⟨ R_𝐪^s*e^𝒬Ω̂^irr𝒬tR_𝐪^s⟩, one projects R_𝐪^s onto the products of single and collective modes ρ_𝐤ρ_𝐩^s with the projection operator 𝒫_2^s=∑_𝐤,𝐩|ρ_𝐤ρ_𝐩^s⟩⟨(ρ_𝐤ρ_𝐩^s)^*(ρ_𝐤ρ_𝐩^s)⟩^-1⟨(ρ_𝐤ρ_𝐩^s)^*|. Then ⟨ R_-𝐪^se^𝒬Ω̂^irr𝒬tR_𝐪^s⟩ ≈ ⟨ R_-𝐪^s𝒫_2^se^𝒬Ω̂^irr𝒬t𝒫_2^sR_𝐪^s⟩ =∑_𝐤,𝐩∑_𝐤',𝐩'⟨ R_-𝐪^s(ρ_𝐤ρ_𝐩^s)⟩⟨(ρ_𝐤ρ_𝐩^s)^*(ρ_𝐤ρ_𝐩^s)⟩^-1 ×⟨(ρ_𝐤'ρ_𝐩'^s)^*R_𝐪^s⟩⟨(ρ_𝐤'ρ_𝐩'^s)^*(ρ_𝐤'ρ_𝐩'^s)⟩^-1⟨(ρ_𝐤ρ_𝐩^s)^*e^𝒬Ω̂^irr𝒬t(ρ_𝐤'ρ_𝐩'^s)⟩≃ ∑_𝐤,𝐩∑_𝐤',𝐩'⟨ R_-𝐪^s(ρ_𝐤ρ_𝐩^s)⟩/NS(k)⟨(ρ_𝐤'ρ_𝐩'^s)^*R_𝐪^s⟩/NS(k')δ_𝐤𝐤'δ_𝐩𝐩'⟨ρ_𝐤^*e^Ω̂tρ_𝐤'⟩⟨ρ_𝐩^s*e^Ω̂tρ_𝐩'^s⟩ =∑_𝐤,𝐩|V_𝐪^s(𝐤,𝐩)|^2⟨ρ_-𝐤e^Ω̂tρ_𝐤⟩⟨ρ_-𝐩^se^Ω̂tρ_𝐩^s⟩, where V_𝐪^s(𝐤,𝐩)=⟨(ρ_𝐤ρ_𝐩^s)^*R_𝐪^s⟩/NS(k). Now we need to calculate ⟨(ρ_𝐤ρ_𝐩^s)^*R_𝐪^s⟩, which is given by ⟨(ρ_𝐤ρ_𝐩^s)^*R_𝐪^s⟩ =⟨(ρ_𝐤ρ_𝐩^s)^*(Ω̂^†ρ_𝐪^s)⟩ +q^2D̅⟨(ρ_𝐤ρ_𝐩^s)^*ρ_𝐪^s⟩. The second term is ⟨(ρ_𝐤ρ_𝐩^s)^*ρ_𝐪^s⟩=⟨ρ_-𝐤ρ_𝐪-𝐩^s⟩ =δ_𝐪,𝐤+𝐩ρ c(k)S(k)=δ_𝐪,𝐤+𝐩[S(k)-1], where the δ symbol results from translational invariance and we used the result ⟨ρ_𝐤^*ρ_𝐤^s⟩ =ρ c(k)S(k)=S(k)-1<cit.>. The first term is ⟨(ρ_𝐤ρ_𝐩^s)^*(Ω̂^†ρ_𝐪^s)⟩=⟨(ρ_𝐤ρ_𝐩^s)^*Ω̂ρ_𝐪^s⟩ =∑_j=1^N⟨(-i𝐤(∑_l≠ sδ_jle^i𝐤·𝐫_l)e^i𝐩·𝐫_s-δ_jsi𝐩ρ_-𝐤e^i𝐩·𝐫_s)D_j·(-δ_jsi𝐪e^-i𝐪·𝐫_s)⟩ = -⟨(𝐩·𝐪)D_sρ_-𝐤e^i(𝐩-𝐪)·𝐫_s⟩ = -δ_𝐪,𝐤+𝐩(𝐩·𝐪)(D_0S_2(k)-D̅), where we used partial integration and the Yvon theorem in the second equality. Combining these results, ⟨(ρ_𝐤ρ_𝐩^s)^*R_𝐪^s⟩= -δ_𝐪,𝐤+𝐩[𝐩·𝐪(D_0S_2(k)-D̅)]+q^2D̅δ_𝐪,𝐤+𝐩(S(k)-1) =δ_𝐪,𝐤+𝐩D̅[𝐤·𝐪(S(k)-1)+𝐩·𝐪(S(k)-D_0S_2(k)/D̅)], so that V_𝐪^s(𝐤,𝐩) =⟨(ρ_𝐤ρ_𝐩^s)^*R_𝐪^s⟩/NS(k)=δ_𝐪,𝐤+𝐩D̅/N[𝐤·𝐪(1-1/S(k))+𝐩·𝐪(1-D_0S_2(k)/D̅S(k))] =δ_𝐪,𝐤+𝐩ρD̅/N[𝐤·𝐪c(k)+𝐩·𝐪1/ρ(1-D_0S_2(k)/D̅S(k))]. Notice that if S_2(k)=D̅S(k)/D_0, then V_𝐪^s(𝐤,𝐩)=δ_𝐪,𝐤+𝐩ρD̅/N[𝐤·𝐪c(k)], which is the equilibrium result. Next, ⟨ R_𝐪^s*e^𝒬Ω̂^irr𝒬tR_𝐪^s⟩=∑_𝐤,𝐩|V_𝐪^s(𝐤,𝐩)|^2⟨ρ_-𝐤e^Ω̂tρ_𝐤⟩⟨ρ_-𝐩^se^Ω̂tρ_𝐩^s⟩≈∑_𝐤ρ^2D̅^2/N[𝐤·𝐪c(k)+𝐩·𝐪1/ρ(1-D_0S_2(k)/D̅S(k))]^2F_k(t)F_p^s(t) and M̃_s^irr(q;t) = -⟨ R_𝐪^s*e^𝒬Ω̂^irr𝒬tR_𝐪^s⟩⟨ρ_-𝐪^sΩ̂ρ_𝐪^s⟩^-1≈ ρ^2D̅/q^2N∑_𝐤[𝐤·𝐪c(k)+𝐩·𝐪1/ρ(1-D_0S_2(k)/D̅S(k))]^2F_k(t)F_p^s(t) =ρD̅/(2π)^3∫ d^3𝐤[𝐤·𝐪̂c(k)+𝐩·𝐪̂1/ρ(1-D_0S_2(k)/D̅S(k))]^2F_k(t)F_p^s(t). Finally, the memory function equation for the self-scattering function F_q^s(t) is given by ∂/∂ tF_q^s(t)+ω_q^sF_q^s(t)+∫_0^tduM_s^irr(q,t-u)∂/∂ uF_q^s(u)=0, where ω_q^s=-⟨ρ_-𝐪^s(Ω̂^†ρ_𝐪^s)⟩/⟨ρ_-𝐪^sρ_𝐪^s⟩=q^2D̅.
CERN-TH-2017-045

On the Octonionic Self-Duality Equations of 3-Brane Instantons

Emmanuel Floratos^a,b,c and George K. Leontaris^c,d

^a Institute of Nuclear Physics, NRCS Demokritos, Athens, Greece ^b Department of Physics, University of Athens, Athens, Greece ^c Theory Department, CERN, CH-1211, Geneva 23, Switzerland ^d Physics Department, Theory Division, Ioannina University, GR-45110 Ioannina, Greece

We study the octonionic self-duality equations for p=3-branes in the light-cone gauge and we construct explicitly instanton solutions for spherical and toroidal topologies in various flat spacetime dimensions (D=5+1, 7+1, 8+1, 9+1), extending previous results for p=2 membranes. Assuming factorization of time, we reduce the self-duality equations to integrable systems and we determine explicit solutions, periodic in Euclidean time, in terms of elliptic functions. These solutions describe 4d associative and non-associative calibrations in D=7,8 dimensions. It turns out that for spherical topology the calibration is non-compact, while for toroidal topology it is compact. We discuss possible applications of our results to the problem of 3-brane topology change and its implications for a non-perturbative definition of 3-brane interactions. § INTRODUCTION The tremendous progress in the understanding of perturbative superstring theory and of its duality symmetries in various spacetime backgrounds <cit.> has led to a well-substantiated proposal for M-theory, the unifying theory of all superstring theories <cit.>-<cit.>. The new objects contained in M-theory, which arise as solitonic gravitational back-reactions of various D-branes, are the M2- and M5-branes. These objects were expected naturally from eleven-dimensional (11d) supergravity and, in this framework, they are considered as fundamental as the strings are for the various 10d supergravities. The basic obstacle to understanding these objects as fundamental lies in the absence of a coupling constant and the consequent problem of defining their self-interactions. An interesting proposal to define the self-interactions of the branes is to use their Euclidean instantons to interpolate between vacua (asymptotic states) with different numbers of branes <cit.>. The simplest ones could be instantons interpolating between states of one and two branes. The study of the simplest such instantons is already a difficult problem, but one hopes that finding explicit solutions and trying to understand their moduli space would be an interesting approach. The classification of all such instantons and the geometry of their moduli space is probably beyond present-day capabilities. Almost three decades ago, the problem of determining the corresponding self-duality equations for membranes was solved in the case of a 2+1-dimensional embedding spacetime in ref. <cit.> and in the case of 4+1 in ref. <cit.>. In the latter, using results from the study of the Nahm equations <cit.> for Yang-Mills (YM) monopoles, it was shown that the self-duality equations form an integrable model, and its Lax pair and conserved quantities were determined. An important work by Ward <cit.> clarified the situation further, reducing the self-duality equations to the Laplace equation in 3d flat space for the time function of the surface.
This function is a level-set function, as it is called in Morse theory, and the problem of determining topology-changing membrane instantons is reduced to the search for non-trivial (multi-valued) time functions. Recent progress in this direction has been made in the works <cit.>. The classification of self-duality (SD) equations in various dimensions has shown that the possible SD equations are determined by the existence of cross products of vectors, which in turn is related to the four division algebras <cit.>. Apart from the trivial cases of branes of dimension p embedded in p+1 space dimensions, the interesting cases concern the p=2 brane (membrane) and the p=3 brane. In references <cit.> and <cit.>, the study of membrane instanton equations in higher dimensions has provided some insight into the difficulty of the problem, and certain explicit solutions have been obtained (see also references <cit.>-<cit.>). In the present work we extend the study to the case of 3-branes in d=8 spatial dimensions, using the octonionic cross product of three 8-dimensional vectors. We obtain a convenient form of the SD equations as a set of four complex equations, which enables us to reduce the SD equations to six dimensions. The SD equations imply the Euclidean second-order equations for the 3-branes and automatically satisfy the Gauss law of the volume-preserving diffeomorphism symmetry of the theory. In the case of external fluxes in the theory, such as the one coming from pp-wave supersymmetric gravitational backgrounds, it is possible to redefine the time, reduce the SD equations to the case of a flat background, and find explicit solutions for the 3-sphere and the 3-torus. In the case of the sphere, the instantons are periodic in Euclidean time, going from a finite radius to infinity and back in finite time, while in the case of the 3-torus the periodic solution interpolates between finite radii. The layout of this paper is as follows. In section 2 we review the Hamiltonian formalism and the equations of motion (EoM) for 3-branes in the light-cone gauge and in flat spacetime, as well as the corresponding Gauss law. We discuss the symmetry in this gauge, which is the group of volume-preserving diffeomorphisms of the 3-brane and which makes it possible to describe the 3-brane as an incompressible fluid. In sections 3 and 4, we derive the first-order self-duality equations in eight dimensions and their various lower-dimensional reductions, and we present them in a very useful and suggestive complex form in four-dimensional complex space. We show that these equations imply the second-order equations and the Gauss law in Euclidean-signature spacetime. In section 5, assuming factorization of time, we solve analytically the 3-brane self-duality equations for the cases of spherical (S^3) and toroidal (T^3) topology of the brane. Finally, in section 6 we present our conclusions and discuss the application of our methods to the issue of topology change of 3-branes, a problem which is relevant to cosmological brane models. § S^3 AND T^3 BRANES IN THE LIGHT-CONE GAUGE In this section we briefly present the Hamiltonian system in terms of the Nambu 3-brackets. For the S^3 brane in 9+1 Minkowski spacetime, the Hamiltonian in the light-cone gauge is H= T_3/2∫ dΩ_3 (Ẋ_i^2+1/3!{X_i, X_j, X_k}^2), where the indices run over the 1,2,…,8 transverse dimensions.
The corresponding EoM are Ẍ_i = 1/2{{X_i, X_j, X_k}, X_j, X_k}, and the Gauss law takes the form {Ẋ_i, X_i}_ξ_α,ξ_β=0, α, β=θ_1,2,3, α≠β. The volume element is dΩ_3 =sinθ_2 sin^2θ_3 dθ_1dθ_2dθ_3, and the Nambu 3-bracket for S^3 is defined as {X_i, X_j, X_k} = 1/sinθ_2sin^2θ_3 ϵ^αβγ∂_αX_i ∂_βX_j ∂_γX_k. For S^3 there are four basic functions, namely the polar coordinates of the unit four-vector, e_1=cosθ_1sinθ_2sinθ_3, e_2=sinθ_1sinθ_2sinθ_3, e_3=cosθ_2sinθ_3, e_4=cosθ_3. They satisfy the SO(4) relations {e_a,e_b,e_c} = ϵ^abcde_d, where the indices take the values 1,2,3,4. These relations are instrumental for the factorization of the time from the internal coordinates of the 3-brane. It is possible to write down explicitly the infinite-dimensional algebra of the volume-preserving diffeomorphisms of S^3, {Y_α, Y_β, Y_γ} = f^δ_αβγY_δ, using as a basis the hyper-spherical harmonics Y_α(Ω) = Y_nlm(θ_1,θ_2, θ_3), α =(nlm), m=-l,…, l, l=0,1,2,…, n-1. The structure constants f^δ_αβγ can be expressed in terms of the 6-j symbols of SU(2), since SO(4) ≃ SU(2)× SU(2). When the brane has the toroidal topology T^3, the global symmetries are U(1)× U(1)× U(1) and the three translations along the cycles of the torus. The basis of functions on T^3 is taken to be e_n⃗=e^in⃗·ξ⃗, ξ⃗∈ [0, 2π]^3, n⃗ =(n_1,n_2,n_3)∈ℤ^3. This basis defines an infinite-dimensional volume-preserving diffeomorphism group of the torus through the Nambu-Lie 3-algebra { e_n⃗_1, e_n⃗_2, e_n⃗_3}= -i det (n⃗_1,n⃗_2,n⃗_3 )e_n⃗_1+n⃗_2+n⃗_3. It is important to notice that for all A∈ SL_3(ℤ) the above algebra remains invariant under the transformations n⃗→ An⃗. Thus, for this 3-dimensional brane there is an important discrete symmetry, namely the 3-dimensional modular group SL_3(ℤ), which could have implications for the quantum mechanical spectrum of this object. The EoM for the 3-torus are formally the same as for S^3, but it is much more convenient to complexify the eight transverse coordinates into four complex ones, as we shall see in the next section. § SELF-DUALITY AND OCTONIONS It is known that p-fold, n-dimensional vector cross products are defined only in the following cases: (i) any p=1,2,3,… with n=p+1; (ii) p=2, n=7; (iii) p=3, n=7; (iv) p=3, n=8. In the above cases, for any set of p vectors v∈ℝ^n, their p-fold cross product, which we denote by Π_p(v_1,v_2,…, v_p), is a linear map which satisfies Π_p(v_1,v_2,…, v_p)· v_k= 0, k=1,2,3,…, p, while its norm satisfies the important property |Π_p(v_1,v_2,…, v_p)|^2 =det(v_i· v_j). In the present work we will focus on the last case, p=3, n=8, of (<ref>), since the other cases can be obtained by appropriate reductions to lower dimensions. Now we proceed to derive the light-cone self-duality equations for 3-branes living in eight transverse flat dimensions. We note in passing that it is possible, after toroidal compactification, to obtain interesting equations for charged 3-branes in lower dimensions. It is easy to observe that the potential energy term of the Hamiltonian (<ref>) can be rewritten in terms of the determinant of the induced metric, det(∂_αX_i∂_βX_i) =1/3!∑_i,j,k=1^8 {X_i,X_j,X_k}^2. The cross product of three 8-dimensional vectors of case (iv) in (<ref>) is explicitly given as <cit.> Π_i(v_1,v_2, v_3):= ϕ_ijklv_1^jv_2^kv_3^l, i,j,k,l=1,2,⋯, 8, where the definition of ϕ_ijkl and the conventions used in the paper are given in the appendix.
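Both 3-bracket algebras quoted above, the SO(4) relations on S^3 and the toroidal 3-algebra, are straightforward to verify symbolically. The following sketch does so with sympy; note that the overall sign of the S^3 relation depends on the orientation chosen for the coordinates (θ_1,θ_2,θ_3), so the first check only confirms proportionality to e_4:

```python
import sympy as sp

# S^3 check: bracket with measure 1/(sin(t2) sin^2(t3)), applied to e1, e2, e3.
t1, t2, t3 = sp.symbols('theta1 theta2 theta3', real=True)

def bracket_S3(f, g, h):
    J = sp.Matrix([[sp.diff(F, v) for v in (t1, t2, t3)] for F in (f, g, h)])
    return sp.simplify(J.det() / (sp.sin(t2) * sp.sin(t3) ** 2))

e1 = sp.cos(t1) * sp.sin(t2) * sp.sin(t3)
e2 = sp.sin(t1) * sp.sin(t2) * sp.sin(t3)
e3 = sp.cos(t2) * sp.sin(t3)
e4 = sp.cos(t3)
print(sp.simplify(bracket_S3(e1, e2, e3) / e4))  # a constant +-1: {e1,e2,e3} is proportional to e4

# T^3 check: {e_n1, e_n2, e_n3} = -i det(n1,n2,n3) e_{n1+n2+n3}.
x1, x2, x3 = sp.symbols('xi1 xi2 xi3', real=True)

def bracket_T3(f, g, h):
    return sp.Matrix([[sp.diff(F, v) for v in (x1, x2, x3)] for F in (f, g, h)]).det()

def e_n(n):
    return sp.exp(sp.I * (n[0] * x1 + n[1] * x2 + n[2] * x3))

n1, n2, n3 = (1, 0, 2), (0, 1, 1), (2, 1, 0)
lhs = bracket_T3(e_n(n1), e_n(n2), e_n(n3))
rhs = -sp.I * sp.Matrix([n1, n2, n3]).det() * e_n([a + b + c for a, b, c in zip(n1, n2, n3)])
print(sp.simplify(lhs - rhs))                    # 0: the toroidal 3-algebra holds
```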
Setting v_α^i=∂_αX_i, where α=1,2,3 and i=1,2,…, 8, we find that the potential energy of the 3-brane is the norm squared of the above-defined cross product of its tangent vectors (see eq. (<ref>)). Another way to see this is to use directly the identity (<ref>) given in the appendix for the tensor ϕ_ijkl. It is obvious now that the Hamiltonian (<ref>) of the 3-brane can be written as H= T_3/2∫ dΩ_3 (Ẋ_i^2+1/3!{X_i, X_j, X_k}^2) ≡ T_3/2∫ dΩ_3 (Ẋ_k-i Π_k)(Ẋ_k+i Π_k), where i,j,k=1,2,… , 8 and Π_k ≡Π_k(∂_1X,∂_2 X, ∂_3X) . For vacuum configurations, H=0, and in Euclidean time we find the self-duality equations Ẋ_i =±1/3!ϕ_ijkl{X_j, X_k, X_l} · § THE 3-BRANE INSTANTONS IN VARIOUS DIMENSIONS There is a natural generalization of the self-duality equations in 8 dimensions in terms of the symbol ϕ_ijkl, Ẋ_i = 1/3!ϕ_ijkl{X_j, X_k, X_l}, where now the indices i,j,k,l run from 1 to 8. For convenience, we introduce the shorthand notation X_ijk≡{X_i, X_j, X_k}, where the indices run from 1 to 8. We know, however, that the same equation exists in seven dimensions, that is, for i,j,k,l=1,2,…, 7, because of the existence of the cross product with p=3, n=7 given in section 3. Then, employing the definition of ϕ_ijkl given in the appendix, we find that eq. (<ref>) implies the following eight non-linear (first-order) differential equations: Ẋ_1 =X_823+X_865+X_847+X_735+X_762+X_524+X_346 Ẋ_2 =X_183+X_684+X_857+X_743+X_167+X_635+X_154 Ẋ_3 =X_812+X_485+X_867+X_472+X_751+X_265+X_416 Ẋ_4 =X_862+X_835+X_871+X_567+X_732+X_512+X_136 Ẋ_5 =X_816+X_845+X_872+X_647+X_371+X_623+X_146 Ẋ_6 =X_851+X_824+X_873+X_457+X_172+X_325+X_314 Ẋ_7 =X_814+X_836+X_825+X_546+X_531+X_432+X_612 Ẋ_8 =X_321+X_615+X_642+X_534+X_174+X_376+X_275 · We notice that the self-duality equations of 3-branes in seven dimensions mentioned before are simply obtained from the above system by eliminating the last equation and the terms containing the index 8 on the right-hand side of the remaining seven equations. Next, we proceed to the complexification of the general system (<ref>) by choosing a specific pairing of the 8 real coordinates, as follows: Z_1= X_1+i X_4, Z_2= X_2+i X_5, Z_3= X_3+i X_6, Z_4= X_8+i X_7 · After some elaborate manipulation of the original system of equations, we arrive at the following simple form: Ż_1 = -1/2{Z_1,Z_k,Z̅_k}+ {Z̅_2,Z̅_3,Z̅_4} Ż_2 = -1/2{Z_2,Z_k,Z̅_k}+ {Z̅_3,Z̅_4,Z̅_1} Ż_3 = -1/2{Z_3,Z_k,Z̅_k}+ {Z̅_4,Z̅_1,Z̅_2} Ż_4 = -1/2{Z_4,Z_k,Z̅_k}+ {Z̅_1,Z̅_2,Z̅_3} We observe that the equations (<ref>-<ref>) have an SU(4) symmetry acting on the complex vector (Z_1,Z_2,Z_3,Z_4). It is possible, from the four complex equations, to obtain various interesting reductions to lower dimensions. For example, if we demand that the Z_k, k=1,2,3,4, are real (i.e., X_4=X_5=X_6=X_7=0), or that the Z_k are all pure imaginary (i.e., X_1=X_2=X_3=X_8=0), then we obtain the self-duality equations of 3-branes in four real dimensions, Ẏ_i=1/3!ϵ_ijkl{Y_j, Y_k, Y_l }, where the Y_k represent the non-zero coordinates of the Z_k for each of the above two cases. A more detailed study of toroidal compactifications and double dimensional reduction to lower-dimensional branes, as well as their relation to extended continuous Toda systems, will be discussed in a forthcoming work <cit.>. § EXPLICIT SOLUTIONS FOR 3-BRANE INSTANTONS IN EIGHT DIMENSIONS We now study the solutions of the four complex non-linear differential equations (<ref>-<ref>). This system can be considerably simplified if we seek solutions with time factorization Z_a= ζ_a(t) f_a(σ_1,σ_2,σ_3) .
In the subsequent analysis, we work out the cases of spherical and toroidal topology. §.§ Spherical Topology For spherical topology of the closed 3-brane, S^3, we choose the functions f_a to be f_a=e_a, a=1,2,3,4, where the e_a are the polar coordinates of the unit four-vector (<ref>) in R^4. In this factorization, the first term on the right-hand side of the complex equations (<ref>-<ref>) is proportional to {e_a,e_k,e_k}, which is identically zero. The simplified equations now read ζ̇_1 = ζ̅_2ζ̅_3ζ̅_4 ζ̇_2 = ζ̅_3ζ̅_4ζ̅_1 ζ̇_3 = ζ̅_4ζ̅_1ζ̅_2 ζ̇_4 = ζ̅_1ζ̅_2ζ̅_3 · We notice that the equations are invariant under a global U(1) symmetry ζ_k→ e^iq_kζ_k with ∑_k=1^4 q_k=0. In the following, we consider the conjugate equations and form the four combinations Q_k= i/2(ζ̅_k ζ̇_k - ζ̅̇_k ζ_k) = i/2( ζ̅_1ζ̅_2ζ̅_3ζ̅_4-ζ_1ζ_2ζ_3ζ_4), for k=1,2,3,4. Assuming the polar form ζ_k = r_k(t) e^iϕ_k(t), we find r^2_kϕ̇_k = Q, where Q is a conserved quantity: Q=-r_1r_2r_3r_4sinϕ, ϕ=∑_i=1^4ϕ_i · Furthermore, subtraction for two different values of k gives three new constants of motion, |ζ_k|^2-|ζ_l|^2 =c_kl, which are the analogue of Euler's equations for the rigid body. Substituting (<ref>) in the sums ζ̅_k ζ̇_k + ζ̅̇_k ζ_k, we observe dr_k^2/dt=2 r_1r_2r_3r_4 cosϕ · For r_1^2=s, using the conservation laws we write this equation as follows: ṡ=√(s(s-a)(s-b)(s-c)-Q^2) · This equation can be solved using standard elliptic functions. Depending on the initial conditions, we divide the solutions into two classes: those with Q=0, where there is no rotation, and those with Q≠0, where we have rotating instantons. In figure <ref> we plot the `effective potential' (i.e. the square of the right-hand side of the differential equation) for Q=0 and Q≠0. In the first case (left plot) we show two admissible cases. The upper curve stands for four distinct real roots (radii), and the second curve corresponds to a double root, i.e., when two radii are equal. Curves below this one do not cross the real axis and correspond to complex radii, which are not admissible. In the second plot, we fix the value of Q=Q_0 and draw three characteristic cases. First, we consider the case Q=0 and the possibility of static solutions, ṡ=0. If one of the radii is zero (s=a, s=b, s=c or s=0), then we obtain static solutions living in the other six dimensions. In general, we obtain a solution in terms of the elliptic integral of the first kind, F(ϕ|m). In particular, the positivity of the quantity under the square root requires s≥ max(a,b,c). Assuming a>b>c>0 while taking s=a at the initial time t=t_0, integration of (<ref>) gives t-t_0 = 2/√(c (a-b))( i K(b (a-c)/c(a-b)) - F(sin ^-1(√((a-b) (s-c)/(b-c)(a-s)))|a (c-b)/c (a-b) )) · In order to find s(t) we need the inverse of the elliptic integral of the first kind, F(ϕ|k)=∫_0^sinϕ ds/√((1-ks^2)(1-s^2)), which is given by the Jacobi amplitude: t= F(ϕ|k) → ϕ= F^-1(t, k) = am(t,k). Furthermore, in order to absorb unimportant factors, we redefine time t = 2/√(c (a-b)) t' and make the definitions k = a (c-b)/c(a-b), σ = √((a-b) (c-s)/(c-b) (a-s)) = sin(am(t,k)) (also written in the literature as sn(t,k)), and finally solve (<ref>) for s(t). In figure <ref> (left side) we plot time as a function of s=r^2 for a=1 and three sets of b,c values. In all cases, the radius goes to infinity at finite time.
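Although the system admits the closed-form elliptic solution above, it is also easy to integrate numerically and to check the stated conservation laws. Below is a short sketch (our own illustration, assuming NumPy/SciPy, with arbitrarily chosen complex initial data) that integrates ζ̇_1 = ζ̄_2ζ̄_3ζ̄_4 and its cyclic companions and monitors Q and one of the Euler-like constants c_kl; both drifts should be at the level of the solver tolerance.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    zb = np.conj(z)
    return np.array([zb[1]*zb[2]*zb[3], zb[2]*zb[3]*zb[0],
                     zb[3]*zb[0]*zb[1], zb[0]*zb[1]*zb[2]])

# Illustrative initial data; complex dtype so solve_ivp integrates in C^4.
z0 = np.array([1.0+0.2j, 0.8-0.1j, 0.6+0.3j, 0.5-0.2j])
sol = solve_ivp(rhs, (0.0, 1.0), z0, rtol=1e-10, atol=1e-12)

z = sol.y
Q = -np.imag(np.prod(z, axis=0))          # Q = -r1 r2 r3 r4 sin(phi)
c12 = np.abs(z[0])**2 - np.abs(z[1])**2   # one of the constants c_kl
print("drift in Q:   ", np.ptp(Q))        # ~ solver tolerance
print("drift in c12: ", np.ptp(c12))      # ~ solver tolerance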
For the sake of simplicity, we exemplify the above for the case of a double root b=c<a, i.e., when two of the radii are equal. Then ∫_a^s du/(√(u (u-a)) (u-b)) = 2/√(b (a-b)) ( π/2 - tan^-1(√(s (a-b)/(b (s-a)))) ) · If we define x=b/a <1, a new time a t/π→ t, and q(x,t)= x tan^2(1/2 π(1-t √((1-x) x))), we obtain ŝ(t) ≡ s(t)/a = q(x,t)/(x-1+q(x,t)) · This takes its minimum value ŝ(0)=1 and grows rapidly to infinity in finite time, at the zeros of the denominator. A few representative cases are plotted in the right side of figure <ref>. The case Q≠0 can be separated into two classes: first, pure uniform rotational motion with time-independent radii (Euler tops), and second, when we have both rotation and bounce motion. In the second class we consider three different cases according to the initial conditions for the radii and the value of Q. Without loss of generality we can order the initial values of the radii in decreasing magnitude, that is r_1>r_2>r_3>r_4, which implies a>b>c>0. The solution can be written in terms of the elliptic integral of the first kind, once we use the four roots e_1, e_2, e_3, e_4, which are algebraic functions of a,b,c and Q. From the geometry of the potential plot, we see that when Q=0, e_1=0, e_2=c, e_3=b, e_4=a. When Q≠0 but smaller than a critical value, Q_c, the root e_1 is negative, e_2 and e_4 increase while e_3 decreases. When Q=Q_c, two roots become equal, e_2=e_3, and the potential has a local maximum V(Q_c)=0. §.§ Toroidal Topology For the case of toroidal topology, we define Z_a = ζ_a(t) e^i n⃗_a·ξ⃗ · The four vectors n⃗_a∈ℤ^3 are not completely arbitrary. When we substitute (<ref>) into the system of the four complex equations (<ref>-<ref>) we find that the above Ansatz provides a solution only if n⃗_1+ n⃗_2+ n⃗_3+ n⃗_4 = 0⃗ · Then we get ζ̇_1 = +i n ζ̅_2ζ̅_3ζ̅_4 ζ̇_2 = -i n ζ̅_3ζ̅_4ζ̅_1 ζ̇_3 = +i n ζ̅_4ζ̅_1ζ̅_2 ζ̇_4 = -i n ζ̅_1ζ̅_2ζ̅_3 , where n stands for the determinant n= det (n⃗_j, n⃗_k, n⃗_l) . From these and their complex conjugates we readily get d/dt |ζ_1|^2= d/dt |ζ_3|^2 = +i n (ζ̅_1ζ̅_2ζ̅_3 ζ̅_4-ζ_1ζ_2 ζ_3 ζ_4), d/dt |ζ_2|^2= d/dt |ζ_4|^2 = -i n (ζ̅_1ζ̅_2ζ̅_3 ζ̅_4-ζ_1ζ_2 ζ_3 ζ_4) · Subtraction and addition of the appropriate equations gives |ζ_1|^2-|ζ_3|^2 =c_13, |ζ_2|^2-|ζ_4|^2 =c_24, |ζ_1|^2+|ζ_2|^2 =c_12, |ζ_3|^2+|ζ_4|^2 =c_34 · We define the currents Q_a=-i/2( ζ̇_aζ̅_a-ζ̅̇_a ζ_a), and find that Q_1=Q_3 =Q≡ n/2(J̅+J), Q_2=Q_4=-Q=-n/2(J̅+J), where J= ζ_1ζ_2 ζ_3 ζ_4 and J̅ is its complex conjugate. We observe now that dJ̅/dt = -dJ/dt, which implies d Q/dt= n/2 d/dt(J̅+J)=0, so that the charge Q is conserved. Writing ζ_k = r_k e^iϕ_k, we find that Q= n r_1r_2r_3r_4cosϕ. Furthermore, using the redefinitions Q'=Q/n and t'= 2 n t, while taking r_1^2=s, we obtain the differential equation ds/dt = √(s(a-s)(s-b) (c-s)-Q^2), with a=c_12, b=c_13, c=c_13+c_34. This is the same type of equation as in the case of spherical topology, but notice that the r_k are bounded, from equations (<ref>-<ref>). We note that with the same method it is also possible to solve the four-dimensional 3-brane self-duality equations presented before, where we shall end up with the real system in the place of equations (<ref>-<ref>). In this case, the system of equations is the elegant integrable system of refs <cit.>. The solution for the case of toroidal topology is expressed, for both cases Q=0 and Q≠0, by the same elliptic integral as in the spherical topology, but from the conservation laws we find the interesting result that the motion is oscillatory and bounded, and so the 4-dimensional manifold is compact.
This is contrary to the case of spherical topology, where we always have non-compact world-volumes. Since these Euclidean world-volumes are calibrations of the (8+1)-dimensional embedding spacetime, our solution is among the first examples of compact non-associative (octonionic) calibrations. Indeed, ordering the four radii in decreasing order, r_k+1<r_k, inspection of the equations (<ref>-<ref>) shows that the constants a,b,c must satisfy b<a<c with a,c positive, while the variable s is bounded between a and b, where now b<s<a<c. In figure <ref>, the potential is plotted for three characteristic values of Q. The time evolution of the four radii is also shown in figure <ref>. As already emphasized, all four radii are bounded and interpolate between a minimum and a maximum value. § CONCLUSIONS AND OPEN QUESTIONS The present work provides a method for solving the self-duality equations for 3-branes in higher dimensions. The factorization of time exploits the finite sub-algebras of the volume-preserving diffeomorphisms and reduces the SD equations to well-known integrable systems with explicit solutions in terms of the standard elliptic functions. The new result is that the 3-sphere instanton interpolates between flat space-time at infinity and a finite-radius 3-sphere. This is similar to the Einstein-Rosen wormhole solutions of General Relativity. The minimization of the Euclidean 4-volume in 8 dimensions, which is the origin of the SD equations, classifies their solutions as calibrations of the background geometry. For the sphere this calibration is non-compact, while for the 3-torus the calibration is compact <cit.>. It is possible to solve with the same method SD equations in maximally supersymmetric geometry backgrounds, such as pp-waves with fluxes in 8+1 spacetime dimensions, by redefining the time variable <cit.>, a trick which also has a physical implication: the interpretation of the time as the renormalization group scale and of the SD equations as flow equations between various geometries [We thank Costas Bachas for this observation.]. It would be interesting to apply ideas similar to those of ref. <cit.> to transform the nonlinear three-brane SD equations into a linear Laplace equation problem in 8+1 spacetime dimensions, in order to study possible topology changes <cit.>. Another interesting direction is to abandon the factorization of time and try to solve the three-brane SD equations in 8+1 dimensions requiring axial symmetry, proceeding in analogy with the study of real-time BPS states of 3-branes. Acknowledgments. The authors would like to thank the CERN theory division for their kind hospitality and the stimulating atmosphere during which the main part of this work was realized. EGF would also like to thank the theory department of the ENS in Paris for their kind hospitality and stimulating atmosphere, during which the last parts of this paper were finished. § THE OCTONIONIC STRUCTURE CONSTANTS The structure constants of the octonionic multiplication, Ψ_ijk, and its dual, ϕ_ijkl, which measures the non-associativity of the octonions, are given as follows (for a textbook see, for example, <cit.>). The non-vanishing components are Ψ_ijk = 1 for (i,j,k) = (1,2,3), (2,4,6), (3,6,7), (4,3,5), (5,7,2), (6,5,1), (7,1,4), and ϕ_ijkl = 1 for (i,j,k,l) = (4,5,6,7), (3,7,5,1), (5,2,1,4), (6,1,7,2), (1,3,4,6), (7,4,2,3), (2,6,3,5); both symbols are totally antisymmetric, so even permutations of the above index combinations give +1, odd permutations give -1, and all remaining components vanish. Below, we provide the simplest identities between these tensors used in our computations; by summing over more pairs of indices one may obtain additional ones.
When a pair of indices is contracted, a useful multiplication rule connecting the two symbols is Ψ_ijkΨ_lmk=δ_ilδ_jm-δ_imδ_jl+ϕ_ijlm . The corresponding identity for the ϕ_ijkl symbols is written as follows <cit.>: ϕ_abclϕ_ijkl = (δ_aiδ_bj-δ_ajδ_bi)δ_ck+ (δ_biδ_cj-δ_bjδ_ci)δ_ak+ (δ_ciδ_aj-δ_cjδ_ai)δ_bk+ ϕ_abij δ_ck+ ϕ_bcij δ_ak+ ϕ_caij δ_bk+ ϕ_abjk δ_ci+ ϕ_bcjk δ_ai+ ϕ_cajk δ_bi+ ϕ_abki δ_cj+ ϕ_bcki δ_aj+ ϕ_caki δ_bj ·

References

[Polchinski] J. Polchinski, String Theory, Vols. 1 and 2, Cambridge Monographs on Mathematical Physics.
[Duff:1987bx] M. J. Duff, P. S. Howe, T. Inami and K. S. Stelle, Phys. Lett. B191 (1987) 70. doi:10.1016/0370-2693(87)91323-2
[Kugo:1982bn] T. Kugo and P. K. Townsend, Nucl. Phys. B221 (1983) 357. doi:10.1016/0550-3213(83)90584-9
[Witten:1995ex] E. Witten, Nucl. Phys. B443 (1995) 85. doi:10.1016/0550-3213(95)00158-O [hep-th/9503124]
[Duff:1991pea] M. J. Duff and J. X. Lu, Phys. Lett. B273 (1991) 409. doi:10.1016/0370-2693(91)90290-7
[Cvetic:2002si] M. Cvetic, H. Lu and C. N. Pope, Nucl. Phys. B644 (2002) 65. doi:10.1016/S0550-3213(02)00792-7 [hep-th/0203229]
[Duff:1996aw] M. J. Duff, Int. J. Mod. Phys. A11 (1996) 5623 [Subnucl. Ser. 34 (1997) 324] [Nucl. Phys. Proc. Suppl. 52 (1997) no. 1-2, 314]. doi:10.1142/S0217751X96002583 [hep-th/9608117]
[Banks:1996vh] T. Banks, W. Fischler, S. H. Shenker and L. Susskind, Phys. Rev. D55 (1997) 5112. doi:10.1103/PhysRevD.55.5112 [hep-th/9610043]
[Taylor:2001vb] W. Taylor, Rev. Mod. Phys. 73 (2001) 419. doi:10.1103/RevModPhys.73.419 [hep-th/0101126]
[Gibbons:1993sv] G. W. Gibbons and P. K. Townsend, Phys. Rev. Lett. 71 (1993) 3754. doi:10.1103/PhysRevLett.71.3754 [hep-th/9307049]
[Biran:1987ae] B. Biran, E. G. F. Floratos and G. K. Savvidy, Phys. Lett. B198 (1987) 329. doi:10.1016/0370-2693(87)90673-3
[Floratos:1989hf] E. G. Floratos and G. K. Leontaris, Phys. Lett. B223 (1989) 153. doi:10.1016/0370-2693(89)90232-3
[Nahm:1979yw] W. Nahm, Phys. Lett. 90B (1980) 413. doi:10.1016/0370-2693(80)90961-2
[Ward] R. S. Ward, Phys. Lett. B234 (1990) 81.
[Kovacs:2015xha] S. Kovacs, Y. Sato and H. Shimada, JHEP 1602 (2016) 050. doi:10.1007/JHEP02(2016)050 [arXiv:1508.03367 [hep-th]]
[Berenstein:2015pxa] D. Berenstein, E. Dzienkowski and R. Lashof-Regas, JHEP 1508 (2015) 134. doi:10.1007/JHEP08(2015)134 [arXiv:1506.01722 [hep-th]]
[Dundarer:1983fe] R. Dundarer, F. Gursey and H. C. Tze, J. Math. Phys. 25 (1984) 1496. doi:10.1063/1.526321
[Floratos:2002ga] E. G. Floratos and G. K. Leontaris, Phys. Lett. B545 (2002) 190. doi:10.1016/S0370-2693(02)02550-9 [hep-th/0208151]
[Floratos:1997bi] E. G. Floratos, G. K. Leontaris, A. P. Polychronakos and R. Tzani, Phys. Lett. B421 (1998) 125. doi:10.1016/S0370-2693(97)01574-8 [hep-th/9711044]
[Sfetsos:2001ku] K. Sfetsos, Nucl. Phys. B629 (2002) 417. doi:10.1016/S0550-3213(02)00132-3 [hep-th/0112117]
[Floratos:1998ba] E. G. Floratos and A. Kehagias, Phys. Lett. B427 (1998) 283. doi:10.1016/S0370-2693(98)00340-2 [hep-th/9802107]
[Floratos:1997ky] E. G. Floratos and G. K. Leontaris, Nucl. Phys. B512 (1998) 445. doi:10.1016/S0550-3213(97)00775-X [hep-th/9710064]
[Yamazaki:2008gg] M. Yamazaki, Phys. Lett. B670 (2008) 215. doi:10.1016/j.physletb.2008.11.001 [arXiv:0809.1650 [hep-th]]
[Corrigan:1982th] E. Corrigan, C. Devchand, D. B. Fairlie and J. Nuyts, Nucl. Phys. B214 (1983) 452. doi:10.1016/0550-3213(83)90244-4
[Ivanov2006] R. Ivanov, Hamiltonian formulation and integrability of a complex symmetric nonlinear system, Phys. Lett. A350 (2006) 232-235.
[Floratos:2017] E. G. Floratos and G. K. Leontaris, in preparation.
[Joyce] Joyce, Dominic, Compact manifolds with special holonomy, Oxford University Press, 2000.
Joyce, Dominic, The exceptional holonomy groups and calibrated geometry, math/0406011, lectures given at a conference in Gokova, Turkey, May 2004.
[Bachas:2000dx] C. Bachas, J. Hoppe and B. Pioline, JHEP 0107 (2001) 041. doi:10.1088/1126-6708/2001/07/041 [hep-th/0007067]
[Conway] John H. Conway and Derek A. Smith, On Quaternions and Octonions, CRC Press, Taylor and Francis Group, LLC, 2003.
{ "authors": [ "Emmanuel Floratos", "George K. Leontaris" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170226175935", "title": "On the Octonionic Self Duality equations of 3-brane Instantons" }
We re-examine the observational evidence for large-scale (4 Mpc) galactic conformity in the local Universe, as presented in <cit.>. We show that a number of methodological features of their analysis act to produce a misleadingly high amplitude of the conformity signal. These include a weighting in favour of central galaxies in very high-density regions, the likely misclassification of satellite galaxies as centrals in the same high-density regions, and the use of medians to characterize bimodal distributions. We show that the large-scale conformity signal in Kauffmann et al. clearly originates from a very small number of central galaxies in the vicinity of just a few very massive clusters, whose effect is strongly amplified by the methodological issues that we have identified. Some of these centrals are likely misclassified satellites, but some may be genuine centrals showing a real conformity effect. Regardless, this analysis suggests that conformity on 4 Mpc scales is best viewed as a relatively short-range effect (at the virial radius) associated with these very large neighbouring haloes, rather than a very long-range effect (at tens of virial radii) associated with the relatively low-mass haloes that host the nominal central galaxies in the analysis. A mock catalogue constructed from a recent semi-analytic model shows very similar conformity effects to the data when analysed in the same way, suggesting that there is no need to introduce new physical processes to explain galactic conformity on 4 Mpc scales. galaxies: evolution – galaxies: haloes – galaxies: statistics § INTRODUCTION Galaxies are known to populate a broadly bimodal distribution in their star-formation rates <cit.>. On the one hand, there is a population of star-forming galaxies in which the star-formation rate closely follows the stellar mass, producing a so-called Main Sequence in which the specific star-formation rate (sSFR) has only a weak variation with stellar mass, and a dispersion of only about a factor of 2 <cit.>. On the other hand, there is also a population of galaxies in which the rate of star-formation is suppressed by one or two orders of magnitude relative to the Main Sequence. Such galaxies are evolving passively. We will refer to these two populations, split by sSFR, as star-forming and passive respectively. Understanding the process(es) by which galaxies transition from the star-forming to the passive population, a transition that is often called quenching, is a major goal for the study of galaxy evolution. It is clear that both the mass and the environment of a galaxy play a role. For instance, the fraction f_Q of galaxies that are quenched in the local SDSS sample is a separable function in terms of the stellar mass of the galaxies and of the local density of galaxies around them <cit.>. Peng et al. coined the phrases mass-quenching and environment-quenching to describe these two drivers of quenching. Most galaxies in high-density environments are satellite galaxies, i.e. galaxies orbiting within the dark matter halo of another, more massive galaxy, called the central galaxy. Environment-quenching is dominated by the quenching of satellite galaxies <cit.>.
A satellite-quenching efficiency ϵ_sat, defined as the excess probability that a satellite is quenched, relative to if it were a central of the same stellar mass, is strikingly independent of its stellar mass <cit.>, which is why f_Q appears separable in stellar mass and density. One difficulty in moving towards a physical understanding of quenching is in identifying which mass and which environment are really driving it. In the ΛCDM paradigm, galaxies form and evolve at the bottom of the potential wells of collapsed dark matter haloes. For central galaxies, there is a tight correlation between the stellar mass, the dark matter halo mass, and even the mass of the central supermassive black hole, and all three of these have been claimed as the driver of the highly mass-dependent mass-quenching process (e.g. <cit.>). Similarly with environment, many mechanisms have been proposed for the quenching of star-formation in satellites, including ram-pressure stripping, tidal stripping of gas, the disruption of the fuel supply (strangulation), and the effect of close encounters between galaxies (harassment) (e.g. <cit.>). However, for about a decade there have been indications that the processes that quench centrals and satellites may be closely linked. In particular, <cit.> found that, at a fixed halo mass, passive centrals tend to have passive satellites, and star-forming centrals tend to have star-forming satellites. They named this correlation galactic conformity. It might be thought that conformity would arise if both the quenching of centrals and satellites were independently affected by the halo mass, since clearly higher mass haloes would be more likely to contain both quenched centrals and quenched satellites. However, if the samples of satellites and centrals are studied at a fixed halo mass, or with samples that are carefully matched in halo mass, as in <cit.>, then the conformity signal from such independent effects should disappear. The persistence of conformity in halo-mass-matched samples is a clear indication that the evolution of star formation within galaxies is influenced by properties beyond halo mass. This has been discussed in detail by <cit.>, who analysed the conformity signal in SDSS groups and matched no less than five parameters, namely the halo mass, the normalized group-centric distance, the local density, the stellar mass of the central, and the stellar mass of the satellite. They showed that, even after matching these five parameters, there was still a strong conformity signal in the sense that the ϵ_sat for satellites around quenched centrals was 2.5 times higher than for satellites around star-forming centrals. The existence of conformity between centrals and their satellites requires either that the quenching of satellites is to some degree consequent on the quenching of the central (or vice versa), or that the quenching of both is being driven, across the halo, by another parameter which was not matched in the analysis (see <cit.> for discussion). One possibility is effects that are linked to the assembly history of the halo. <cit.> found in the low-redshift Universe that at fixed halo mass, the bias of galaxy groups decreases as the SFR of the central galaxy increases, while <cit.> had earlier found within the Millennium simulation that at fixed halo mass, haloes that formed at earlier times also tend to be more biased (i.e. strongly clustered) than haloes that formed later, so that
haloes that formed earlier might be expected to have older stellar populations. A surprising development was the work of <cit.> (hereafter K13), who presented observational evidence for a strong conformity signal extending out to very large distances. In particular, they showed evidence (see their fig. 2) that, around centrals with stellar masses 10^10 < M_*< 10^10.5, a strong conformity signal extends out to 4 Mpc, i.e. of order ten times beyond the virial radii of the haloes that host these relatively low-mass central galaxies. Indeed, to first order, there is little variation in the strength of conformity with distance for these centrals. As well as the scale, the amplitude of the effect was also surprising: at distances of 3 Mpc from low-sSFR centrals, a suppression by a factor of 2 was seen in the sSFR distribution of neighbouring galaxies. Taken at face value, this suggests that distinct haloes with no direct physical relation somehow share a common evolutionary path. This could arise from large-scale causal effects operating on super-halo scales (e.g. from AGN feedback, see <cit.>), going against a commonly held assumption that the properties of the halo completely govern the properties of the galaxies therein, and indicating that a major effect is missing from our current understanding of galaxy formation and evolution. Alternatively, it could arise from the fact that parameters which could be producing conformity within a single halo, such as the assembly history of haloes, or halo concentration, will be correlated on scales of 10 Mpc <cit.>. However, studies arguing that large-scale conformity arises, via biasing, i.e. from the spatial correlation of one-halo effects, were not able to account for the strength of the effect presented in K13, although the last of the cited studies claimed that the two were qualitatively similar. In semi-analytic models, which should in principle include the relevant baryonic processes within haloes, the predicted strength of this signal is an order of magnitude weaker than observed (e.g. fig. 9 of K13). Given the important implications of their results, we have examined the methodology and observational evidence that were presented in K13. Our goal is to assess the extent to which the K13 result can be considered as evidence for the existence of strong large-scale conformity on scales of 4 Mpc, and to try to identify in more detail the precise origin(s) of this strong signal. While the primary focus is on conformity, some of the methodological points will have wider interest. This paper is organized as follows: In Section <ref>, we describe the observational and simulated data used in this work. In Section <ref>, we present a detailed examination of the K13 methodology and results, and highlight some features which are cause for concern. In Section <ref>, we illustrate the effects that the highlighted features had on the final conformity result. In Section <ref>, we compare the observational results to those obtained from semi-analytic models. In Section <ref>, we discuss the more general implications that our findings have on the existence of large-scale conformity. Finally in Section <ref> we summarize our conclusions. We use a ΛCDM cosmology with Ω_Λ= 0.7, Ω_M = 0.3, and H_0 = 70 km s^-1 Mpc^-1. We use the dimensionless unit dex to denote the anti-logarithm in base 10. That is to say, a multiplicative difference by a factor of 10^n in linear space is equal to an additive difference of n dex in logarithmic space.
Throughout this work, 1-σ statistical uncertainties are estimated via 100 iterations of bootstrap resampling. § INPUT DATA AND REPRODUCTION OF THE K13 RESULT §.§ Observational data In order to replicate the results from K13, we follow as closely as possible their sample selection. We use the galaxy sample presented in the New York University Value-Added Galaxy Catalogue[http://cosmo.nyu.edu/blanton/vagc/] <cit.>, which was constructed with data from Data Release 7 of the Sloan Digital Sky Survey <cit.>. Estimates of stellar masses and star-formation rates are an updated version of those derived in <cit.>[http://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/]. From this catalogue, we select galaxies which were primary spectroscopic targets, which have redshifts within 0.017 < z < 0.03, and which have stellar masses above 10^9.25. These cuts result in a mass-complete sample of 13,928 galaxies. §.§ Reproduction of K13 fig. 2 Using the selected SDSS sample, we first try to reproduce the key result presented in K13, i.e. their fig. 2, following as closely as possible their own analysis. We first identify a set of centrals, defined as any galaxy with stellar mass M_*,i > 5×10^9 which has no other galaxy that is more massive than M_*,i/2 within a projected distance of R_proj = 500 kpc and within a velocity difference of cΔ z = 500 km s^-1. This selection was referred to as the isolation criterion by K13. Then, for each central, the projected distances are calculated to all other nearby galaxies (henceforth referred to as neighbours) that lie within R_proj = 4 Mpc and cΔ z = ± 500 km s^-1 of the central. We note that the velocity criterion for neighbours was not stated in K13, but we adopt ± 500 km s^-1 for consistency with the isolation criterion. While K13 presented results for three different mass ranges of centrals, we will focus only on centrals in the middle range 10^10-10^10.5, for which the claimed conformity effect in K13 is strongest. Furthermore, in K13, the star-formation activity in the centrals was characterized not only by their sSFR (for both the fibre-aperture and a total estimate), but also by estimates of their HI gas fractions and HI-deficiencies that were derived from combinations of various observational parameters. Again, for the sake of brevity, we will focus only on the (total) sSFR. We are confident that the general conclusions presented in the current paper do not depend on these choices. The set of centrals in our chosen mass range is then divided into quartiles by their sSFRs. For the centrals in each sSFR quartile, the distribution of sSFRs in the neighbour galaxies is then calculated as a function of the projected distance from the central, and represented using the median sSFR (as in K13). Neighbours are selected without regard to their stellar mass beyond the initial M_* > 10^9.25 selection. Fig. <ref> shows the result of this replication of the analysis in K13. It agrees very well with their analysis, and in particular with the bottom-right panel of fig. 2 in their work, which is the most directly comparable of their plots. A pronounced conformity-like correlation is seen, in that the neighbours of the centrals in the lowest sSFR-quartile (red line) have suppressed sSFRs relative to the neighbours of centrals in the other quartiles, extending all of the way out to 4 Mpc. Indeed, there is a general correlation between the average sSFR of the neighbours and the centrals over all four of the quartiles of central sSFR.
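For concreteness, the selection just described can be written down compactly. The sketch below is our own schematic Python implementation (not the original code of K13): the flat-sky comoving conversion and the input array names (ra, dec, z, mstar) are simplifying assumptions, and survey edges, fibre collisions and the like are ignored.

import numpy as np
from scipy.spatial import cKDTree

C_KMS = 2.998e5  # speed of light in km/s

def comoving_xy(ra_deg, dec_deg, z, H0=70.0):
    # Small-angle transverse comoving coordinates in Mpc (adequate at z ~ 0.02)
    d_c = C_KMS * z / H0                      # low-z comoving distance
    x = np.deg2rad(ra_deg) * np.cos(np.deg2rad(dec_deg)) * d_c
    y = np.deg2rad(dec_deg) * d_c
    return np.column_stack([x, y])

def isolation_centrals(xy, z, mstar, r_iso=0.5, dv=500.0, m_min=5e9):
    # Mask of "centrals": M_* > m_min and no companion more massive than
    # M_*/2 within r_iso Mpc projected and |c dz| < dv km/s.
    tree = cKDTree(xy)
    central = mstar > m_min
    for i in np.flatnonzero(central):
        for j in tree.query_ball_point(xy[i], r_iso):
            if j != i and abs(z[j]-z[i])*C_KMS < dv and mstar[j] > 0.5*mstar[i]:
                central[i] = False
                break
    return central

def neighbour_pairs(xy, z, central, r_max=4.0, dv=500.0):
    # (central index, neighbour index, projected distance/Mpc) triples
    tree = cKDTree(xy)
    return [(i, j, float(np.hypot(*(xy[j]-xy[i]))))
            for i in np.flatnonzero(central)
            for j in tree.query_ball_point(xy[i], r_max)
            if j != i and abs(z[j]-z[i])*C_KMS < dv]

Note that, as in the text, the neighbour search applies no mass restriction beyond the overall sample cut, so the same pair list can then be binned by distance and by central sSFR quartile.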
We have checked that this signal is also present for centrals with masses down to 3×10^9, and is also weakly present for those with masses above 10^11, but that it is strongest for centrals between 10^10 - 10^10.5, as in K13. While our results closely resemble those of K13, the depression of the red line relative to the others in our plot is somewhat weaker (overall by about 0.2 dex) than in K13. This difference can be due to a number of details. For instance, the neighbour star-formation activity in K13 is characterized by the sSFR evaluated within the SDSS fibre. On the other hand, we consistently use only the total sSFR for both centrals and neighbours in our work. These small differences aside, we do find that, by following the K13 methodology, the 4 Mpc neighbours of low-sSFR centrals do indeed have significantly lower sSFRs than average. Throughout this work, we will refer to this strong, long-range correlation between low-sSFR centrals and low-sSFR neighbours as the signal. The authors of K13 interpreted this signal as evidence in favour of the existence of conformity extending far beyond the haloes of the centrals in question, which would have virial radii R_vir∼ 250 kpc (as estimated in K13). §.§ Group catalogue Although we will basically follow K13's identification of centrals using their isolation criterion, we will also make reference to the group membership of SDSS galaxies at some points in the discussion. We do so by making use of the Yang et al. SDSS DR7 group catalogue[http://gax.shao.ac.cn/data/Group.html], the construction of which is described in <cit.>. The primary use of the group catalogue will be to identify which, if any, of the galaxies that were identified by K13 as centrals could in fact be satellites. Following the central selection criteria in <cit.>, we rank the members of a given group in mass, and also in their angular position relative to the median position of the group. We then define a central as being a galaxy within the top 10 per cent of its group in both mass and centrality. In the case where more than one galaxy satisfies these criteria, the most massive of these is assigned as the central if it is at least twice as massive as the second most massive; otherwise, the group is not assigned a central. §.§ Mock data In order to compare the results from observations against those predicted by galaxy formation models, we also use the semi-analytic model (SAM) of <cit.>[http://galformod.mpa-garching.mpg.de/public/LGalaxies/] (hereafter H15). This is the most recent major release of the so-called Munich models and was implemented on the Millennium dark matter simulations scaled to a Planck year-1 cosmology <cit.>. Specifically, the cosmological parameters adopted are: σ_8 = 0.829, H_0 = 67.3 km s^-1 Mpc^-1, Ω_Λ = 0.685, Ω_M = 0.315, Ω_b = 0.0487 (f_b = 0.155) and n = 0.96. We use the galaxy catalogue based on the Millennium simulation since it has a larger volume (meaning better statistics for satellite galaxies), and H15 showed that its properties converge with those in the catalogue based on the higher-resolution Millennium-II down to our low-mass limit (M_*=10^9.25). Both the Millennium and Millennium-II simulations trace 2160^3 (∼10 billion) particles from z = 127 to the present day. The Millennium was carried out in a box of original side 500 h^-1 Mpc = 685 Mpc.
After rescaling to the Planck cosmology, the box size becomes 714 Mpc, implying a particle mass of 1.43×10^9. From the z = 0 snapshot of the output, we select galaxies with masses above 10^9.25, for the sake of comparison with the data. This cut results in a sample of 3,369,062 galaxies, i.e. about 240 times larger than the SDSS sample described above. We convert the full 6-dimensional position-velocity data into 3-dimensional observed coordinates (x, y, redshift) by converting the position and velocity along one Cartesian direction into redshift, omitting the velocities along the other two directions. The sSFR distribution of galaxies is unfortunately not identical to the observed distribution, having a long tail towards low sSFR values, and with many galaxies having exactly zero sSFR. In order to try to match the mock data with observations, the galaxies with log(sSFR yr^-1) ≤ -12 have been assigned a random Gaussian value centered at log(sSFR) = -0.3 log(M_*) - 8.6 and with dispersion 0.5 dex. The consequence of this adjustment will be discussed further in Section <ref>. § METHODOLOGICAL ASPECTS OF THE K13 ANALYSIS In this Section we will examine a number of different aspects of the K13 analysis, highlighting those that are likely to have a significant and deleterious effect on the results. In the following Section <ref>, we will then modify the analysis to produce new versions of Fig. <ref> and show that the long-range conformity signal is likely to be much smaller than indicated in K13, or even absent altogether within the statistical uncertainties. §.§ Biases due to density-weighting In investigating conformity, we are trying to understand the physical drivers of galaxy evolution, using the star-formation state, e.g. the sSFR, as a probe of the physical conditions locally around each galaxy. If these local physical conditions are somehow correlated over very large scales, then we would see a correlation between the star-formation states of galaxies on similarly large scales. Within the scale of a single halo, it makes sense to correlate the sSFR of the central with those of the satellites. When investigating conformity on larger scales, the problem is no longer confined exclusively to the satellites of a given central; the general neighbours of a central can be satellites in the same halo, or other centrals, or satellites in other nearby haloes. Not least, in K13, the neighbours can have much higher, as well as much lower, stellar masses than the central in question. However, in what follows, we will adhere to the practice in K13, and consider the correlation between the sSFRs of centrals and neighbours. What should then be done to test the large-scale conformity hypothesis, i.e. that there is a correlation between the sSFRs of centrals and neighbour galaxies on scales extending well beyond individual haloes? Two possible approaches would be as follows: * Regard each central-neighbour pair as an independent test of the hypothesis that the sSFRs of centrals and neighbours are correlated. A sample of neighbours is constructed by collecting galaxies at a given distance around each and every central in a given sSFR bin. It is then tested to see whether the resulting sSFR distribution of the neighbour sample varies with the central sSFR and/or with the distance-to-central.
This was the approach taken in K13.* Alternatively, one could evaluate the average neighbour sSFR-distance relation for each central, and then average these over all centrals in a given sSFR bin to examine how this relation varies with central sSFR. This approach would be hard to calculate given the discrete nature of galaxies. For instance, for many centrals there would be no neighbour within a given projected distance bin. Although the difference between the two example methods may appear superficial, they differ significantly in the weighting of each central in the sample, which may cause differences in the measured conformity signal <cit.>. The first approach above implicitly assigns equal weight to every central-neighbour pair, and thereby weights each central by the number of neighbours associated with it (i.e. the richness of the environment). In effect, this method preferentially represents the physical processes around rich centrals. In the second approach, all the centrals are weighted equally.For large-scale conformity (at least in the form which we have stated), we wish to learn about the physical processes reflected by the state of the centrals star-formation, and not necessarily by their richness. So, while a physical effect which is probed by many neighbours may be seen with more statistical confidence than one which is probed by just a few, it should not, in our view, be treated with more weight. Therefore, we would favour a method which attaches equal weight to every central, rather than one which weights each central by its number of neighbours (as done in K13).While analysis approach (ii) avoids weighting centrals by the number of neighbours, it would be difficult to implement in practice. An alternative, which we will implement in this paper (see Section <ref> below), is to employ approach (i), but to down-weight each central-neighbour pair by the number of neighbours N_neigh that that central has, i.e. that are found within 4 Mpc of the central. In effect, this treats neighbours as probes of the physical processes around centrals, and treats the effects around every central with equal importance, regardless of how many neighbours are influenced by them.As shown in the bottom panels of Fig. <ref> (the details of which will be discussed shortly), the centrals in the sample display a huge range in the number of neighbours, N_neigh. While most have just a few neighbours (the modal value of N_neigh is 2), centrals in the richest environments can have up to ∼230 neighbours, and are therefore vastly over-weighted relative to most other centrals. Correspondingly, a galaxy in a rich region can be a neighbour to up to 14 centrals, further weighting the effects of dense regions of the Universe in the analysis, and highlighting the difficulty of interpreting conformity on 4 Mpc scales in terms of the effect of a single central galaxy. The maximum number of neighbours (as defined here) that are associated with the richest centrals (N_neigh∼230) is strikingly high, given that the centrals under consideration are expected to inhabit relatively low-mass (∼10^11.5-10^12) dark matter haloes. 
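An explicit sketch of the down-weighting proposed above is given below (our own illustration with hypothetical array names: cen_id, r_proj and lssfr are assumed to be aligned arrays over the central-neighbour pairs, with cen_id a dense integer label of each pair's central). Each pair receives weight 1/N_neigh of its central, so that every central contributes equal total weight, and the neighbour log(sSFR) distribution is summarized with a weighted median in bins of projected distance.

import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cw = np.cumsum(w)
    return v[np.searchsorted(cw, 0.5 * cw[-1])]

def neighbour_profile(cen_id, r_proj, lssfr, r_edges):
    # Weighted-median neighbour log(sSFR) versus projected distance
    n_neigh = np.bincount(cen_id)        # neighbours per central (out to 4 Mpc)
    w = 1.0 / n_neigh[cen_id]            # equal total weight for every central
    med = np.full(len(r_edges) - 1, np.nan)
    for k in range(len(r_edges) - 1):
        m = (r_proj >= r_edges[k]) & (r_proj < r_edges[k + 1])
        if m.any():
            med[k] = weighted_median(lssfr[m], w[m])
    return med

Setting w = 1 everywhere recovers the K13-style pair weighting, which makes a direct comparison between the two schemes straightforward.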
The maximum number of 14 centrals per neighbour also indicates a remarkably high density of centrals within the roughly (9 Mpc)^3 cylindrical volume for these high-density environments.The over-weighting of rich environments in K13 is of particular concern given that we would expect (a) that galaxies in high-density regions will preferentially be passive, and (b) any difficulties in isolating true centrals will likely also be more severe in rich environments. We therefore turn to examine these two questions.In the lower panels of Fig. <ref>, we plot the N_neigh (calculated out to 4 Mpc) and the median neighbour sSFR for each central, splitting the centrals into the four quartiles of central sSFR. The horizontal dashed lines separate centrals with N_neigh>70 (plotted in red) from those in less rich environments (plotted in black). The histograms at the top of Fig. <ref> then show the distribution of the sSFRs of the neighbours, again differentiating between the neighbours of centrals with N_neigh≤70 (shown as the black histograms) and the whole sample, including the richest environments (red histograms). The difference between the red and black histograms therefore isolates the sSFR distribution of the neighbours of centrals with N_neigh> 70 (plotted as the red points in the lower panels).Fig. <ref> illustrates several important points. First, it shows that the neighbours of low-sSFR centrals tend themselves to have low-sSFR: the leftmost red histogram peaks at much lower sSFR than the other red histograms. This is the conformity signal seen in Fig. <ref>. However, it is clear that this is driven by the neighbours of the centrals in the highest density regions (N_neigh> 70), rather than by the existence of a strong correlation between a typical central and its 4 Mpc neighbours. Because of the over-weighting of rich centrals, the small number of red points in the lower panels have a disproportionate effect on the histograms in the upper panels. If we ignore all the neighbours of rich centrals (noting that only ∼ 3 per cent of centrals in this mass range have more than 70 neighbours), then the neighbour sSFR distributions (shown by the black histograms) are remarkably insensitive to the sSFR of the central, i.e. from left to right in the Figure. It is clear that the rich centrals with N_neigh> 70 are themselves concentrated in the first column of the Figure, and are therefore primarily low-sSFR centrals. Because the neighbours in these high-density regions are also of lower sSFR than typical neighbours, it is these high-density neighbours that are almost entirely responsible for the low-sSFR peak in the 0-25^th percentile quartile, and therefore for the strong conformity signal. As shown in Fig. <ref> (and as we will show also in Fig. <ref>), this small fraction of centrals indeed drives most of the strong, large-scale correlation in Fig. <ref>. In the remainder of this Section, we explore the origin of this large-scale correlation, and how a small fraction of centrals can come to produce a dominant effect on the overall neighbour distribution. Despite the fact that most of the apparent correlation is only driven by a small number of rich centrals, this could nevertheless potentially be an interesting result. These centrals tend to have low sSFRs, and out to a few Mpc, they also tend to have low-sSFR neighbours. This large correlation scale extends well beyond the virial radii of even the most massive dark matter haloes, let alone those which host 10^10-10^10.5 centrals. 
At face value, this is indeed suggestive of the presence of physical processes operating well beyond the scale of individual haloes. However, it is instructive to examine where these high-density environments actually occur in the Universe. Fig. <ref> shows that most of the rich centrals (highlighted in red on the Figure) are strongly clustered around just a handful of the very largest galaxy clusters that are present in the SDSS sample. They are not the centrals at the centres of these structures, but instead cluster on their outskirts. This fact already changes the significance of the 4 Mpc scales for a conformity signal. In the regions of these very largest haloes, environmental-quenching effects will be expected to lower the sSFRs of satellite galaxies over very large regions. For instance, the Coma cluster at [R.A., Dec.] = [195^∘, 28^∘] has a virial radius of about 2 Mpc <cit.>. Even with a simplistic assumption that satellite quenching extends only out to the virial radius, any galaxy (whether a true central or a satellite of Coma) that is located at the virial radius of Coma would see the passive satellites of Coma extending out to 4 Mpc away, i.e. to a virial diameter away. The large scale of the sSFR correlation between this set of galaxies does not therefore correspond to greatly super-halo scales, but rather to the span of the haloes of these extremely large clusters. By introducing an artificial conformity signal to their halo model, <cit.> have also cautioned that physical effects within single large haloes can easily produce a conformity-like correlation on scales of several Mpc. Fig. <ref> also emphasizes the very small volume of the Universe that contains these richest centrals. We have argued above that the K13 methodology biases the conformity signal by linearly weighting centrals by the richness of their environments. If we considered a volume-averaging approach to conformity, which could be justified given the use of centrals to probe physical conditions, then we would have a further bias towards rich environments, i.e. an N^2 bias. In Section <ref>, we will show the dramatic effect of excising very small volumes from the sample. §.§ Purity of the central selection Central galaxies play an important role in conformity, as they are expected to reflect the physical processes near the centre of the dark matter halo. However, the dark matter distribution and potential are difficult to determine in practice, so selecting a complete and pure sample of centrals from an observational catalogue is not a trivial task. As described in Section <ref>, K13 used an isolation criterion to identify centrals, i.e. all other galaxies within 500 projected kpc should have less than half the stellar mass of the central. It should be noted that misidentification of centrals and satellites would not introduce fake conformity signals if all haloes had the same quenched fraction in the satellites. However, if there is a variation of ϵ_sat across the sample, i.e. a variation in the quenched fraction of the satellites, then misidentification of the central will more likely lead to a falsely identified quenched central in those groups in which the quenched fraction is high, and therefore to a spurious conformity signal. An alternative to using an isolation criterion is to try to identify groups in the galaxy catalogue. A number of group-finding algorithms exist for this purpose.
Assuming that these groups really trace the dark matter distribution, one can then try to identify the galaxies at the minima of the potential wells, i.e. the true centrals. It is then slightly worrying to find in Fig. <ref> that many of the centrals identified by the isolation criterion are found in and around large clusters, especially considering that the true centrals of these clusters are expected to be significantly more massive than the 10^10 < M_* < 10^10.5 range under consideration. The selection criterion of comparing the masses of galaxies with their near (500 kpc) neighbours allows satellites in large clusters to be identified as centrals, as long as they are sufficiently more massive than their near neighbours. K13 justified the use of this selection method by applying it to mock catalogues from the semi-analytic model of <cit.>. The contamination of the central sample from satellites varies as a function of mass, but was at most 30 per cent. However, given the existence of the biasing towards high-density regions discussed in the previous sub-section, even a few satellite contaminants from large clusters could result in a disproportionately large contamination in the results. We therefore re-examined the effectiveness of the K13 central selection method in the context of environment richness and sSFR. In order to do this, we compare the isolation criterion of K13 with central-satellite classifications made with reference both to the Yang et al. SDSS group catalogue and to the H15 semi-analytic model. For the latter, the true central-satellite identities of all galaxies are of course known. In both cases we compute the impurity, defined as the fraction of satellites present in the isolation-selected sample of centrals. Fig. <ref> shows this impurity as a function of N_neigh for centrals in the four quartiles of central sSFR. Fig. <ref> shows the number and impurity of the isolation-selected centrals in the H15 SAM as a bivariate function of N_neigh and the median log(sSFR) of the neighbours, restricting attention to the SAM to exploit the much larger number of groups for the bivariate analysis. We find that for identified centrals within 10^10-10^10.5, the global contamination fraction is indeed low (we find about 10 per cent). However, in regions of high number density (i.e. near large clusters), which also generally have low sSFR, we find that up to two-thirds of selected centrals are in fact satellites (see Figs <ref> and <ref>). Given that the volume of the isolation criterion (500 kpc radius, ± 500 km s^-1) is relatively small compared to the dimensions of the largest clusters (of order 2 Mpc radius, and σ_v∼1000 km s^-1), it is perhaps unsurprising that, when applied on the outskirts of these clusters, the isolation criterion is misclassifying satellite galaxies as centrals. As previously illustrated, these clusters have high number density, and have environmentally-driven quenching that extends over several Mpc. These are therefore precisely the regions where the density-bias discussed in the previous sub-section will have the greatest effect on the results.
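The impurity measurement itself reduces to a simple binned fraction. The sketch below is our own illustration: n_neigh and is_true_satellite are hypothetical arrays over the isolation-selected centrals, the latter taken from the group catalogue or from the SAM's known memberships.

import numpy as np

def impurity_vs_richness(n_neigh, is_true_satellite, bins):
    # Fraction of isolation-selected "centrals" that are really satellites,
    # in bins of N_neigh
    idx = np.digitize(n_neigh, bins) - 1
    imp = np.full(len(bins) - 1, np.nan)
    for k in range(len(bins) - 1):
        sel = idx == k
        if sel.any():
            imp[k] = is_true_satellite[sel].mean()
    return imp

# e.g. impurity_vs_richness(nn, sat_flag, bins=np.array([0, 10, 30, 70, 250]))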
Any small mistake in identifying centrals and satellites will produce a spurious conformity signal (since the galaxies will have low sSFRs), which will then be strongly amplified by the density-weighting to have a disproportionate effect on the final result. We conclude that out of the very small fraction of centrals (3 per cent) with N_neigh > 70, which are likely to be responsible for most of the observed conformity signal, more than half are probably satellite contaminants. It is, however, worth noting that not all of these problematic rich centrals are likely to be contaminants. Some are likely to be real centrals at the centres of their respective haloes, which reside just beyond the extent of much larger clusters. The key point is that their numerous neighbours then correspond to the members of these large clusters. If they are indeed true centrals, then the sSFR correlation seen between them and the members of these large clusters would be indicative of a real conformity effect, albeit one affecting only a very small fraction of centrals. The implications of this will be discussed further in Section <ref>. §.§ Statistical summaries of the sSFR distributions Both the splitting of the centrals and the analysis of neighbour properties were based, in K13, on the sSFR distributions of the centrals and neighbours respectively. K13 split centrals into quartiles, and summarized the sSFR of neighbours by using the median. While a normal distribution is completely characterized by its mean and variance, the distribution of sSFRs is clearly not Gaussian, and the consequences of the choice of summary statistic are worth considering. The median, as used by K13, has a number of attractive features. In a unimodal distribution, the median has the benefit of being insensitive to outliers in the wings of the distribution. This is especially important in the case of passive galaxies, for which the estimation of individual sSFR has significantly higher uncertainty than for Main Sequence galaxies, with uncertainties of order 1 dex <cit.>. The effects of these uncertainties in the individual sSFR are relatively limited with the use of the median. However, the sSFR distribution of galaxies is known to be bimodal, because of the effects of quenching. This is clearly seen in the histograms of neighbour sSFR in Fig. <ref>, where the two modes (corresponding to the star-forming and the passive population) are of comparable strength. If one mode is dominant and has a small dispersion, a small change in the relative numbers of galaxies in each component will have little effect on the median. But, if both components are narrow and of equal strength, then a small shift in their relative size can produce a very large change in the median, as the median jumps from one mode to the other. This is illustrated in Fig. <ref>, where we simply plot the sSFR of the galaxies in our complete sample as a function of their stellar mass. The black and red curves respectively show the mean and median log(sSFR) calculated in a running bin of width 0.2 dex in mass. The running mean varies smoothly with mass. The running median, however, varies much more slowly at high and low masses, where the distribution is dominated by one or other of the modes, but varies much faster than the mean in the region where the distribution is evenly divided between the two components.
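This behaviour is easily reproduced with a toy model. The snippet below is our own illustration; the mode positions and widths are arbitrary but mimic the star-forming and passive log(sSFR) peaks. It draws from a two-Gaussian mixture and shows that the mean shifts linearly with the passive fraction f_q, whereas the median jumps from one mode to the other as f_q crosses 0.5.

import numpy as np

rng = np.random.default_rng(0)
mu_sf, mu_q, sig = -10.0, -12.0, 0.3     # illustrative mode positions (dex)
n = 100_000
for f_q in (0.30, 0.45, 0.50, 0.55, 0.70):
    n_q = int(f_q * n)
    x = np.concatenate([rng.normal(mu_sf, sig, n - n_q),
                        rng.normal(mu_q, sig, n_q)])
    print(f"f_q={f_q:.2f}  mean={x.mean():6.2f}  median={np.median(x):6.2f}")
# Between f_q = 0.45 and 0.55 the mean moves by only ~0.2 dex,
# while the median jumps by ~1.2 dex between the two modes.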
In essence, the choice of median to characterize a strongly bimodal distribution with roughly equal components has the effect of amplifying changes relative to what would be seen in the mean. In the specific case considered here, it is clear from Fig. <ref> that we are in the regime where both peaks of the neighbour sSFR distribution are of comparable strength. Indeed, it is clear in the top panels of Fig. <ref> that the methodological bias in favour of the highest density centrals has tipped the balance between the two modes. The low-sSFR component goes from being slightly sub-dominant (in the three right-hand histograms) to slightly dominant (in the leftmost panel in Fig. <ref>). The choice of median to characterize the sSFR distribution therefore unfortunately further amplifies the effect of the density-bias. A further possible effect arises from the splitting of the centrals into quartiles of sSFR within their mass range 10^10 < M_* < 10^10.5. While the sSFR-mass correlation in both the star-forming and passive modes is not a strong function of mass, the relative strength of the two modes changes with mass as a result of the mass-dependence of mass-quenching. As shown in Fig. <ref>, the relative size of the two modes shifts dramatically in precisely the galaxy mass range where (for the centrals) the K13 conformity signal is strongest. This means that the mass distribution, within the 10^10 < M_* < 10^10.5 range, of the centrals in the different sSFR quartiles will be different, and therefore quite possibly also the host halo mass distributions. This has two implications. Ideally, a much more stringent matching of the masses of the centrals would be required to remove the possibility of conformity-like signals arising from straightforward correlations with halo mass (see <cit.> for further discussion). An additional consequence is that, within the 500 kpc radius of the isolation criterion for the identification of centrals, the mass distribution of the neighbours will be different from the general neighbour population, and furthermore different for the different quartiles of central sSFR. This causes the pronounced upturn in the sSFR of neighbours within 500 kpc in Fig. <ref>. For this reason, the region within 500 kpc must be clearly differentiated from the larger scales, as indicated on Fig. <ref> and later Figures in the paper. A further point is that characterizing the star formation activity of the centrals using sSFR-quartile bins means that centrals in the same quartile (in particular, the 25-50^th and 50-75^th percentiles) may be somewhat heterogeneous in their star formation state, i.e. whether they are star-forming or passive. That is to say, star-forming and passive galaxies will be classified as having similar star-formation activity because they lie in the same sSFR-quartile. At high masses (above 10^11), the problem is reversed, in the sense that all centrals are passive, and so the different quartiles in the sSFR distribution contain a rather homogeneous set of passive centrals. This is likely the cause of the reduction of the conformity signal for these more massive centrals in fig. 3 of K13.
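The amplification can be quantified with a toy calculation (Python; the mode parameters are again assumptions chosen only to mimic a bimodal log(sSFR) distribution): shifting a few per cent of the weight from one mode to the other moves the median by several times the corresponding change in the mean.

```python
import numpy as np

# Two narrow modes of comparable strength; shift their relative weight
# slightly and compare the response of the mean and the median.
rng = np.random.default_rng(2)
mu_sf, mu_passive, sigma, n = -10.0, -11.6, 0.3, 200000

def mean_and_median(f_passive):
    passive = rng.random(n) < f_passive
    x = np.where(passive, rng.normal(mu_passive, sigma, n),
                          rng.normal(mu_sf, sigma, n))
    return x.mean(), np.median(x)

m0, med0 = mean_and_median(0.48)
m1, med1 = mean_and_median(0.52)
print("change in mean:   %.3f dex" % (m1 - m0))      # ~ -0.06 dex
print("change in median: %.3f dex" % (med1 - med0))  # several times larger
```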
§.§ Summary of the methodological aspects of K13

To summarize the previous discussion, we have identified at least three aspects of the methodology adopted in K13 which may have produced biases or amplifications of conformity-like signals in their analysis, thereby producing misleading results.

* Bias due to density-weighting: By giving equal weight to every central-neighbour pair, the methodology drastically over-represents central galaxies in high-density regions, i.e. those in the neighbourhoods of the largest clusters. In doing so, it allows conformity signals to appear on the spatial scales of these largest clusters, rather than on the scales of the relatively low-mass haloes that are nominally being probed with these quite low-mass centrals.

* Central selection: The isolation criterion, despite making no reference to group catalogues, performs quite well overall in identifying central galaxies. However, the contamination from satellites increases markedly in high-density regions, where the fraction of passive satellites is also high. This can produce a spurious conformity signal that is then amplified by the density-weighting discussed above.

* Representation of sSFR distributions: While the choice of using the median to represent the distribution of neighbour sSFRs has some benefits, it has the unfortunate result of amplifying the apparent strength of the conformity signal, whether real or produced by the above effects, when the two bimodal components are of comparable amplitude. The choice of using sSFR percentiles to represent the star-formation activity of the centrals also groups together heterogeneous subsets of centrals, and produces a clear bias in the region within the 500 kpc radius used in the isolation criterion for selecting central galaxies.

In the following Section, we demonstrate that the combination of these different methodological aspects grossly amplifies the sSFR correlations that are present in a very specific subset of the data, and may therefore be misleading.

§ RE-ANALYSIS UNDER MODIFIED METHODOLOGY

In this Section, we explore the degree to which the effects discussed in Section <ref> actually affect the data. In order to do so, we apply simple modifications to the K13 analysis which specifically address these issues, either by adjusting the given methodology, or by making different methodological choices. We emphasize that while these ad hoc modifications do, we believe, give a fairer representation of the data, our purpose is to illustrate the compounded effect of the various biases on the K13 results, and not to make a serious attempt to quantify galactic conformity within this framework. We defer this to a later paper. We also note that some recent studies of large-scale conformity also examine the effects of these methodological choices <cit.>. The consistency between their findings and this work will be briefly discussed in Section <ref>. In the following, we discuss possible ways to counteract each of these effects. We then apply them to the data, exploring the extent to which the apparent conformity signal persists as one or more of them are applied. The fact that centrals are weighted in proportion to their richness results in a strong bias towards large clusters. In addressing this density-weighting bias, we apply the following measures (separately or together).

* Remove the richest centrals.
By removing all centrals which have more neighbours than a somewhat arbitrary limit of N_neigh = 70, we can remove the subset of centrals which most strongly bias the result. Following from the discussion in Section <ref>, this action also removes the subset of centrals for which the contamination fraction from satellites is highest. This cut removes just 68 (3 per cent) of the centrals within the relevant mass range 10^10 < M_* < 10^10.5. We note that about half of these excised galaxies (31 out of 68) were in fact classified as satellites in the group catalogue.

* Remove centrals near to the most massive cluster. As an alternative to (i), we simply remove centrals which are near to the single largest cluster in the SDSS data, which is the Coma cluster. We do so by excluding all centrals that are located within a cylinder of R_proj = 4 Mpc and cΔ z = ± 1000 km s^-1 that is centred on Coma. The centre of Coma is defined here as the median R.A., dec., and z of the members of Coma, where the group memberships are defined according to the Yang et al. group catalogue. It should be noted that this does not remove the central of Coma itself, as it lies well above the central mass range under consideration (10^10 - 10^10.5). This cut removes only 18 (1 per cent) of the centrals within the relevant mass range, i.e. it excludes far fewer centrals than (i). Of the 18 excluded centrals, 11 were classified as satellites in the group catalogue, suggesting that there are some genuine centrals existing close to these largest clusters. Unsurprisingly, all 18 of these centrals have more than 70 neighbours, i.e. they are all also removed by operation (i).

* De-weight rich environments. Apart from excising centrals on the basis of richness, we can also simply down-weight each central-neighbour pair by the total number of neighbours within 4 Mpc of the central, producing a result in which all centrals are equally weighted (see discussion in Section <ref>). In computing median sSFRs with the down-weighted samples, we simply compute the 50^th percentile point in weight.

In addressing the specific issue of impurity of the central sample, i.e. the contamination from galaxies that are actually satellites, the straightforward solution is to:

* Remove all suspected satellites from the central sample. We make use of the Yang et al. group catalogue to identify groups and their centrals in the SDSS data (as was described in Section <ref>). We then remove all satellites (i.e. non-centrals) from the sample of centrals that was selected by the K13 isolation criterion. Since the group finder has demonstrably good performance on large haloes (M_vir≳ 10^12 h^-1; ), it is well-suited to identify potential contaminants in high-density regions where, as we have seen, they have the greatest impact on the analysis. A total of 168 galaxies (7 per cent) are removed from the central sample in this way.

To address the issues with the use of the median, we simply:

* Use the mean (of the log) instead of the median. Note that we can make use of both the non-weighted and down-weighted samples to compute these means.

In Fig. <ref>, we show the effect of applying these various modifications to the K13 analysis, either independently or in combination with each other. The upper left panel reproduces the original K13 analysis from Fig. <ref> of this paper. Subsequent rows downwards in Fig. <ref> show the effect of down-weighting pairs by N_neigh (i.e. operation (iii) above), of computing non-weighted means (i.e.
(v) above), and finally of computing down-weighted means (both (iii) and (v) together). The second column in Fig. <ref> shows the effect of removing suspected satellites from the central sample (i.e. (iv) above). The next column shows the effect of instead simply removing all centrals with N_neigh > 70 (i.e. (i) above), while the rightmost column shows the result of instead removing those 18 centrals lying in a cylinder centred on the Coma cluster (i.e. (ii) above). It is clear from Fig. <ref> that all of the methodological modifications described above have, as one would expect, the effect of decreasing the amplitude and/or spatial scale of the conformity signal. Detailed comparisons of the panels in Fig. <ref> are also consistent with our previous discussion in Section <ref>. We first discuss the effects of individual methodological modifications on the conformity signal, in comparison with the original K13 methodology. The difference between using the median and the mean is primarily to reduce the amplitude independently of scale, as would be expected. Interestingly, removing the relatively large number (7 per cent) of likely satellite contaminants also mostly affects the amplitude and not the scale. In both cases, the depression of the sSFR of the neighbours of the lowest-sSFR centrals (red line) relative to the others is reduced approximately by a factor of 2 (in the average of the logarithm). More importantly, Fig. <ref> illustrates the disproportionate effect of the density-weighting of centrals on the original K13 result. By weighting each central-neighbour pair by (N_neigh)^-1 (i.e. the 2^nd and the 4^th row), the conformity signal beyond 1.5 Mpc completely disappears, while the remaining sub-1.5 Mpc signal is substantially weakened. In the case of the weighted median, the depression of the red line relative to the blue line at 1 Mpc is ∼ 0.2 dex, while the same signal in the weighted mean is ∼ 0.1 dex. In both cases, the amplitude of the remaining signal is comparable to the bootstrap uncertainties, and is therefore difficult to distinguish from noise. That is to say, when we treat the effects around centrals with equal weight, regardless of their local density, there is already very little evidence for the existence of large-scale conformity. By taking all of these methodological modifications together simultaneously (i.e. in the 4^th row, 2^nd column of Fig. <ref>), we measure a conformity signal which we believe to be more robust and less biased. In order to place an upper limit on this remaining conformity signal, we compare the depression of the red line relative to the blue line in that panel. We find that the signal is at most 0.08 dex (at 1.25 Mpc); this is comparable to the respective bootstrap uncertainties at this radius, which are ∼ 0.05 dex. The data therefore does not support the existence of a conformity signal at any scale beyond that of the virial radius. In Section <ref>, we further discuss the interpretations of this null result in the context of the limited cosmological volume of this data set. Out of the three methodological issues that we have addressed, the implicit bias towards high-density regions produces the most dramatic amplification of the conformity signal. In fact, the simple operation of removing the 18 centrals in the 4 Mpc cylinder around the Coma cluster is already enough to essentially remove the large-scale signal beyond 2 Mpc.
This emphasizes the fact that the large-scale conformity in the overall SDSS sample that is seen in K13 is mostly associated with the very small number of the very largest haloes, rather than with super-halo scale effects associated with the relatively low-mass haloes that host the nominal set of centrals used in the analysis.The rightmost column of Fig. <ref> serves to highlight the dramatic effect of density-weighting from a few centrals which are around the very largest cluster, and shows that most of the large-scale conformity seen in K13 is in fact driven by the very largest haloes. However, as the removal of density-weighting (i.e. 2^nd and 4^th row of Fig. <ref>) demonstrates, the remaining sub-2 Mpc signal is also driven mostly by density-weighting, presumably from centrals around clusters which are smaller, but more common, than Coma.Fig. <ref> illustrates this point more clearly. The leftmost panel of Fig. <ref> is analogous to Fig. <ref>, while the other panels, from left to right, show the cumulative effect of removing centrals around relatively large haloes of progressively smaller sizes, from the one system with virial diameter of ∼ 4 Mpc (Coma), down to those with virial diameters of only 2 Mpc. At each step of the cuts, the original conformity signal at the corresponding virial diameter is completely eliminated, while the signal at shorter ranges is substantially reduced. After removing centrals around systems with R_vir>1 Mpc, there remains only a very weak conformity signal. Comparing the sSFR of neighbours of relatively low-sSFR centrals (red and black lines) with those of relatively high-sSFR centrals (green and blue lines), one finds a much weaker systematic depression of ∼ 0.2 dex out to separations of ∼ 2 Mpc. This further illustrates the fact that, due to the effects of density-weighting, the large-scale conformity presented in K13 is primarily driven by effects on the virial scales of the largest clusters.These results still allow that, for a very small number of centrals around the richest clusters, there may be a residual real conformity effect on scales just beyond the virial radius. Such an effect could arise from large-scale spatial correlations of halo accretion rates, as illustrated by <cit.>. Their analysis of dark matter simulations showed that, due to large-scale tidal interactions, haloes residing in high-density environments tend to have lower dark matter accretion rates. Such a spatial correlation spans over several Mpc. Since the SFRs of galaxies and the accretion rates of their host haloes are expected to be positively correlated, this halo accretion conformity could be a driver of some degree of large-scale galactic conformity.Alternatively, it could be indicative of other physical processes acting beyond the halo, such as energetic feedback from active galactic nuclei residing within the larger cluster, which may eject hot gas somewhat beyond the virial radius, and thereby suppress star-formation on super-halo scales (see ). In such cases, the conformity effect, even if real, would clearly have been driven by the larger system, and should not be thought of as having been driven by the relatively low-mass centrals under consideration. The corresponding length scale of the conformity signal should therefore be compared with the size of the larger system, and not with that of the smaller halo. 
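Operationally, the progressive excision described above is a simple cylinder cut around clusters above a shrinking virial-radius threshold. A minimal sketch, with assumed array layouts (Python):

```python
import numpy as np

def excise_near_clusters(centrals, clusters, r_vir_min,
                         r_proj_max=4.0, dv_max=1000.0):
    """Drop centrals lying in a cylinder (R_proj < r_proj_max Mpc,
    |c*dz| < dv_max km/s) around any cluster with R_vir > r_vir_min Mpc.

    centrals : (N, 3) array of projected x, y [Mpc] and velocity [km/s]
    clusters : (M, 4) array of cluster x, y, velocity, and R_vir
    Returns a boolean mask of the centrals to keep.
    """
    keep = np.ones(len(centrals), dtype=bool)
    for cx, cy, cv, rvir in clusters:
        if rvir <= r_vir_min:
            continue
        dx = centrals[:, 0] - cx
        dy = centrals[:, 1] - cy
        dv = centrals[:, 2] - cv
        in_cylinder = (np.hypot(dx, dy) < r_proj_max) & (np.abs(dv) < dv_max)
        keep &= ~in_cylinder
    return keep

# Progressively lower the threshold, as in the panels described above:
# for r_min in [2.0, 1.5, 1.0]:
#     keep = excise_near_clusters(centrals, clusters, r_min)
```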
§ COMPARISON WITH THE H15 SEMI-ANALYTIC MODELAlthough the existence of conformity is well-established between centrals and satellites of the same halo <cit.>, one does also expect some degree of sSFR correlation on the scale of several Mpc as a result of the fact that halo properties will be correlated on large scales. An example is the halo assembly history, or concentration <cit.>.In order to determine how much correlation is expected from known and predictable physical processes, one can apply an identical analysis to mock catalogues generated from semi-analytic models of galaxy evolution, within which the semi-analytical prescriptions of baryonic physics govern the evolution of simulated galaxies within their respective dark matter haloes. A significant feature in K13 was the fact that, while it was claimed that strong, long-range, conformity-like correlations existed in the data, similar signals were only very weakly present in the parallel analysis of the <cit.> SAM. This indicates that there is a physical process (or processes) operating in Nature that is not included in the SAM. This possibility was further discussed in <cit.>.We have demonstrated in this paper that most of the signal is driven by sSFR correlations in high-mass haloes via the density-weighting effect, further amplified by a number of other aspects of the analysis. Even so, prescriptions of environmental quenching are present in SAMs, and so such an effect should be present to a comparable degree if the mock catalogue is analysed in the same way.In order to explore this, we apply the same suite of methodologies that we described in Section <ref> to the mock catalogue from the H15 SAM. The results are presented in Fig. <ref>, the different panels of which are directly analogous to those in Fig. <ref>, with the exception of the rightmost column. Since there is no direct counterpart to the Coma cluster in the SAM, we instead apply an analogous cut by removing centrals near to haloes with M_vir>10^14.4 (i.e. R_vir>2 Mpc), by applying the same 4 Mpc cylinder cut as described in Section <ref>. Note that the error-bars are very much smaller in Fig. <ref> than in Fig. <ref> because of the 240-fold increase in the number of objects in the mock.Before examining these results, some key differences between the SAM and the SDSS data should be noted. First, the Main Sequence in the SAM is more sharply peaked than in the data, and this is the cause of the systematic vertical (sSFR) offset between the results for the medians in the top two rows of Fig. <ref> and <ref>. The offset is much less pronounced for the mean sSFR in the lower two rows. Second, unlike in the real data, most of the galaxies in the passive population in the SAM have exactly zero SFR. In our treatment of the SAM mock catalogue, galaxies with sSFRs below a threshold of 10^-12 yr^-1 are assigned sSFRs that are randomly drawn from a Gaussian distribution centered on ∼ 10^-11.6 yr^-1 with a dispersion of 0.5 dex (see Section <ref>), so that the sSFR-mass distribution in the SAM approximately matches that of the observations. Because of this scrambling of sSFRs for low-sSFR centrals, we treat centrals in the lowest two quartiles as a single set with a single combined neighbour sample. As a consequence of this, we plot on Fig. <ref> only a single set of points representing the two lowest quartiles combined, which may of course dilute the signal that would have been obtained if the lowest quartile could have been studied in isolation. 
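The re-assignment of sSFRs for the quenched SAM population can be sketched as follows (Python; the threshold, centre, and dispersion are the values quoted above, while the implementation details are our own):

```python
import numpy as np

def scramble_quenched_ssfr(ssfr, threshold=1e-12, mu_log=-11.6,
                           sigma_dex=0.5, rng=None):
    """Replace (s)SFRs below `threshold` [1/yr] -- including the SAM's exact
    zeros -- with values drawn, in the log, from a Gaussian centred on
    log(sSFR) = -11.6 with 0.5 dex dispersion, so that the mock sSFR-mass
    plane approximately resembles the observed one."""
    rng = rng or np.random.default_rng()
    out = np.array(ssfr, dtype=float)
    low = out < threshold
    out[low] = 10.0 ** rng.normal(mu_log, sigma_dex, low.sum())
    return out
```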
It should also be noted that the lowest two sSFR-quartiles in Fig. <ref> and <ref> are artificially similar for the same reason. These small systematic differences aside, the results from the SAM in Fig. <ref> are strikingly similar to the observational results shown in Fig. <ref>. With only the prescriptions of known physics in the model, the application of the K13 methodology nevertheless yields the appearance of large-scale sSFR correlations that are similar to those observed in the SDSS data. The similarity between the results from the real observational data and from the simulated data, in terms of both the amplitude and the range of the sSFR correlation, suggests that there is no need (at least from this analysis) to add new physics to our view of galaxy evolution. We note that we have also analysed the original <cit.> SAM mock catalogue used in K13 in the same way. We find qualitatively similar results to those in Fig. <ref>, and therefore cannot account for the apparent absence of a similar signal in K13's treatment of this same mock catalogue. However, we note that the change in the conformity signal is not identical between the two sets of data for all of the modifications. In particular, the percentage of centrals removed from the mock sample at each cut is systematically higher than in the SDSS data, and the effect of removing centrals in rich environments (i.e. operations (i) and (ii) as described in Section <ref>) produces a less dramatic reduction of the signal in the mock data compared with the SDSS data. We also note that, even when accounting for all of the methodological issues, there exists a weak conformity signal in the mock that is not seen in the data. Since inter-halo interactions are not present in the SAM, this signal must be due to the spatial correlation of halo properties, such as those mentioned in the beginning of this Section. These differences are not entirely surprising, considering that the SDSS volume is approximately 200 times smaller than the simulation volume of the H15 SAM. In order to understand to what extent these differences are simply due to the limited volume of the SDSS sample, we select, from the mock, 125 independent sub-volumes which have similar spatial dimensions to our SDSS sample, and examine the variation in the sSFR distribution of the neighbours. Fig. <ref> shows the result of this analysis. The top row shows σ_Real.^2, the variance of the median neighbour sSFR across the 125 independent realizations, and reflects the total variance in an SDSS-like volume. The middle row shows ⟨σ_Boot.^2⟩, the median value of the bootstrap variances, where the median averages over all of the realizations. In this analysis, ⟨σ_Boot.^2⟩ therefore reflects the estimate of Poisson uncertainties due to the sample size. Finally, the bottom row shows the square root of the ratio of these two quantities, and effectively indicates how well the bootstrap error-bars (i.e. in Fig. <ref>, <ref>, and <ref>) reflect the total variance. The Poisson variance of neighbour sSFRs, i.e. the variance estimated from bootstrap resampling, is dominant at small radii, corresponding to the dominance of uncertainties due to the low number of neighbours per radial bin. At larger radii, cosmic variance, i.e. the realization-to-realization variation between volume-limited samples, increases in relative importance, especially for the low-sSFR centrals and neighbours.
This is due to the fact that, as we have shown, the neighbour sSFR distribution is influenced strongly by the presence of a small number of large nearby structures. The bootstrap resampling of the neighbours does not capture the small-number statistics of these rich clusters. As a result, the bootstrap error-bars in Fig. <ref>, <ref>, and <ref> underestimate the true uncertainty in the sSFR distribution of neighbours by at least a factor of 2. This should be borne in mind when comparing the observational data with the mock catalogue.Therefore, we suspect that the offset between the results from the SDSS data and from the mock data is not statistically significant. The similarity of the results under the various methodological modifications confirms our assertion that the bulk of the conformity signal is indeed driven by a known (and accounted-for) correlation which has been amplified by the density-weighting effect.Since the bootstrap uncertainties for the current SDSS sample underestimate the true uncertainty, the apparent absence of a conformity signal after the methodological modifications (i.e. in the 4^th row, 2^nd column of Fig. <ref>) does not rule out the existence of a weak signal at the level seen in the mock.§ DISCUSSION The main conclusion from our analysis is that a number of methodological issues can substantially amplify the strength of sSFR correlations. Some of these are obvious, such as the use of median statistics for a bimodal distribution, and some are more subtle, including the effective density-weighting of the pair-counting scheme in K13. There is also the issue of central-satellite misclassification, although we stress that removing all satellites (as identified in the group catalogue) does not completely eliminate the 4 Mpc conformity signal in our analysis. It is then clear from our analysis that the 4 Mpc scale conformity signal is actually being driven (via these amplification effects) by a very small number of centrals that live on the outskirts of the largest clusters in the Universe. The fact that their sSFRs are correlated with those of the large number of galaxies in the clusters appears to be a real effect, albeit greatly boosted in K13 by the methodological aspects discussed in the current paper. In the final stages of manuscript preparation of this paper, <cit.> posted a pre-print in which they reproduced the K13 result (i.e. Fig. <ref>) by using the same methodology as K13. They also identify probable satellite contaminants (which they refer to as non-pure centrals) in the sample of centrals by using their group catalogue, and found that the large-scale conformity signal is effectively eliminated when probable contaminants are excluded. Through private correspondence, we found that among the centrals which they classify as non-pure, some fall under our category of rich (N_neigh > 70) centrals. This intersecting subsample makes up ∼ 20% of their non-pure sample, and ∼ 25% of our rich sample. The same reduction in the conformity signal (in their fig. 5) can be achieved by only removing this intersecting subset of rich non-pure centrals, while the removal of non-pure centrals with fewer than 70 neighbours has essentially no impact on the conformity signal. 
This is consistent with our identification of the origin of the K13 conformity signal, namely that the strong effect is primarily driven by centrals in high-density regions, and is further amplified by satellite contaminants.Similar methodological issues have also been addressed in analyses of other data sets, with varying results. <cit.> investigated conformity in a sample of PRIMUS galaxies at intermediate redshift. The authors did this by splitting centrals into passive and star-forming populations, and quantified the star-formation of neighbour galaxies using their star-forming fraction. The authors explicitly tested the impact of the different weighting schemes of centrals, and found that the conformity signal, as quantified by the star-forming fraction, was insensitive to the choice of weighting scheme. However, we note that under the density-weighted scheme, the statistical significance of their measured large-scale conformity signal is far smaller than that in K13. It is therefore unclear whether this insensitivity was due to the fact that the star-forming fraction is a more robust marker of conformity, or simply that the conformity signal is intrinsically weaker in that sample.The effects of satellite contamination in the selection of centrals were discussed in <cit.>, where the authors investigated conformity in the Illustris simulation. They found that while satellite contamination in the central sample does indeed amplify the observed conformity signal, a weak large-scale conformity signal out to ∼ 3 Mpc can be detected in the simulation even after the satellite contaminants are removed.In both cases, a large-scale conformity signal is detected out to ∼ 3 Mpc even when accounting for the highlighted methodological issues. That is to say, while the methodological issues greatly amplify the conformity signal in high-density regions, there may be a weaker, true, large-scale conformity signal in the Universe. Since this work, following K13, investigates conformity using the full sSFR distribution, and not just the star-forming fraction, it is difficult to compare the strengths of the underlying conformity signals between these works. However, qualitatively, this correlation appears to be present in the SAM mock catalogues, and could have a number of origins, including effects like assembly history bias or other environment-based effects that do not involve direct super-halo interactions (since these are not in the SAM).The clear identification in this paper that the effect is being driven by low-sSFR centrals (some of which are probably real centrals, although some are likely misidentified satellites) in the close vicinity of very massive clusters (which is clear from Fig. <ref>, <ref>, <ref>, and <ref>) emphasizes the difficulty of correctly interpreting the scale of the conformity signal. Rather than a long-range effect, operating at about 10 virial radii from the small haloes hosting the set of centrals, it should better be thought of as a short-range effect, operating at about one virial radius from the very large haloes that are hosting the neighbours of those centrals. In a more general sense, this also highlights the importance of matching the centrals and neighbours in conformity studies, as discussed at length in <cit.>. Although the centrals with high and low sSFR are reasonably well-matched in stellar mass in K13, and likely therefore also in their own halo mass, it is clear (from Fig. <ref>) that they do not inhabit the same range of Mpc-scale environments. 
In particular, it is clear from Fig. <ref> that the set of low-sSFR centrals inhabit a much broader range of environments than the set of high-sSFR centrals, and that the (relatively few) centrals with the most neighbours (N_neigh > 70 within 4 Mpc) are predominantly of low sSFR. This is the origin of the conformity signal: it seems much more plausible that the signal comes from the very special environment where these few richest centrals lie, rather than from the centrals themselves. § SUMMARY AND CONCLUSIONS This paper has re-examined the observational evidence in the SDSS for galactic conformity effects at large scales, as presented in <cit.>. For simplicity, we focused on the analysis of the set of centrals of intermediate stellar mass 10^10< M_* < 10^10.5, where the conformity signal in K13 is strongest. Likewise, we considered only the simple (total) sSFR as the indicator of star-formation activity. We first identify three features of the K13 analysis methodology that we have shown to artificially introduce or amplify a conformity signal:* The K13 analysis is implicitly weighted towards those central galaxies which have large numbers of neighbours. Since these centrals have both generally low sSFR and have low-sSFR neighbours, this produces a positive conformity signal. The preferential weighting of these centrals boosts (in proportion to their number of neighbours) their contribution to the overall conformity signal in the sample. * Some centrals selected by the K13 isolation criterion are likely to be misclassified satellite galaxies. This can produce a spurious conformity signal if the rate of misclassification is correlated with the overall passive fraction of the satellites, which it appears to be. Since the probability of misclassification also appears to increase with the number of neighbours, the weighting of the sample in favour of centrals with many neighbours further exacerbates this problem.* In addition, the use of the median to describe the sSFR distribution of the neighbour galaxies further amplifies the size of the conformity signal. Since the neighbour galaxies have a bimodal distribution of sSFR, with roughly equal strengths of the two components, a small shift in the relative numbers of high- and low-sSFR neighbours results in a large change in the median, about twice the change in the mean (of the logarithm). We then re-analyse the SDSS data with various combinations of small but significant modifications to the analysis methodology based on these three issues. The combination of these modifications dramatically reduces the large-scale conformity signal to the level that it can no longer be detected with the available data.Removing the weighting in favour of centrals with many neighbours is already sufficient to vastly reduce the conformity signal, to the extent that the amplitude of the remaining signal is comparable to the size of the estimated uncertainties.Even without removing the implicit density-weighting, the signal beyond 2 Mpc essentially disappears if the 18 centrals within 4 Mpc of the Coma cluster are removed. More than half of these rich centrals are likely to be misclassified satellites, but some may well be real centrals. These centrals are preferentially of low sSFR, and the large number of neighbours are also preferentially of low sSFR, thereby producing a conformity signal. 
This signal is only present for a very small number of rich centrals in the vicinity of a few large clusters, but it came to produce the large overall effect in the K13 results via the density-weighting that is implicit in their method. This result emphasizes the difficulty of correctly interpreting the scale of the conformity signal. While a 4 Mpc-scale correlation may appear to be an extremely long-range effect when compared with the virial sizes of the relatively low-mass centrals under consideration, we have illustrated that it is an effect that arises within approximately one virial radius of the largest haloes. Indeed, we show that progressively removing centrals in the vicinity of large clusters systematically reduces the spatial extent of the large-scale conformity signal. The large-scale conformity effect seen in K13 should therefore better be thought of as a short-range effect, associated with the environmental quenching effects of neighbours around the larger haloes, rather than a very long-range effect driven by the smaller haloes. Finally, we also analyse the mock catalogue from the <cit.> semi-analytic model in exactly the same way as the SDSS data. Both the effects of the methodological issues, and also the overall levels of conformity seen at each step, are very similar in the real and mock data, suggesting little need for the inclusion of any new physical processes in the models in order to address large-scale conformity. Because the signal is dominated by the centrals that are located in the neighbourhood of a handful of the richest clusters, the actual uncertainties are substantially larger than those estimated by the bootstrap resampling. This should be borne in mind when comparing results from the real data with those from the mock catalogue.

§ ACKNOWLEDGEMENTS

We thank Joanna Woo for kindly providing the original compilation of the SDSS catalogues, and also Christian Knobel and Aseem Paranjape for previous discussions on galactic conformity. This work has been supported by the Swiss National Science Foundation. BMBH (ORCID 0000-0002-1392-489X) acknowledges support from an ETH Zwicky Prize Fellowship.
Increasing Peer Pressure on any Connected Graph Leads to Consensus

JS (E-mail: js2118@math.rutgers.edu)
Department of Mathematics, Rutgers University, Piscataway, NJ 08854

CG (E-mail: griffinch@ieee.org)
Mathematics Department, United States Naval Academy, Annapolis, MD 21666

AS (E-mail: asquicciarini@ist.psu.edu)
College of Info. Sci. and Tech., Penn State University, University Park, PA 16802

Sarah Rajtmajer (E-mail: sarah.rajtmajer@qs-2.com)
Quantitative Scientific Solutions, Arlington, VA 22203

December 30, 2023 - preprint
=============================================================

In this paper, we study a model of opinion dynamics in a social network in the presence of increasing interpersonal influence, i.e., increasing peer pressure. Each agent in the social network has a distinct social stress function given by a weighted sum of internal and external behavioral pressures. We assume a weighted average update rule and prove conditions under which a connected group of agents converges to a fixed opinion distribution, and conditions under which the group reaches consensus. We show that the update rule is a gradient descent and explain its transient and asymptotic convergence properties. Through simulation, we study the rate of convergence on a scale-free network and then validate the assumption of increasing peer pressure in a simple empirical model.

PACS: 89.65.-s, 02.10.Ox, 02.50.Le

§ INTRODUCTION

Beginning with DeGroot <cit.>, opinion models have been studied extensively (see e.g., <cit.>). In these models, opinion is a dynamic state variable whose evolution in some compact subset of ℝ^n is governed by an autonomous dynamical system. Using this formalism, opinion models have been unified with flocking models (see e.g., <cit.>) in <cit.>. Most recent work on opinion dynamics (and their unification with flocking models) considers the interaction of agents on a graph structure <cit.>. When considered on a lattice, these models share characteristics with continuous variants of Ising models <cit.>. Recent work <cit.> considers the evolution of opinion on a social network in which agents are resistant to change because of an innate belief. In particular, <cit.> use a variant of the model in <cit.> and study this problem from a game-theoretic perspective by considering the price of anarchy of the opinion formation process on a connected graph. The existence of innate beliefs, which are hidden but affect (publicly) presented opinion, is supported in recent empirical work by Stephens-Davidowitz et al. <cit.>. While the work in <cit.> introduces the concept of the stubborn agent, it does not consider the effect of situationally variant peer-pressure on agents' opinions, though statically weighted user connections are considered. Peer pressure in social networks is well-documented. Adoption of trends <cit.>, purchasing behaviors <cit.>, beliefs and cultural norms <cit.>, privacy behaviors <cit.>, bullying <cit.>, and health behaviors <cit.> have all been linked to peer influence. In this paper, we consider the problem of opinion dynamics on a social network of agents with innate beliefs, in which peer-pressure is a dynamically changing quantity, independent of the opinions themselves. This has the mathematical effect of transforming the formerly autonomous dynamical system into a non-autonomous dynamical system. Our notion of persuasion and peer-pressure affecting these dynamics is related to the psychology literature on belief formation and social influence.
In particular, we draw inspiration from studies on periodicity in human behavior and from social influence theories <cit.>. We follow Friedkin's foundational theory that strong ties are more likely to affect users' opinions and result in persuasion or social influence. Underpinning our model is also the notion of mimicking. Brewer and more recently Van Bareen <cit.> suggest that mimicking is used when individuals feel out of a group and therefore will alter their behavior (to a point <cit.>) to be more socially accepted. The resulting model also accounts for agents with relatively varying resistance to changing their innate beliefs. We use a recent result from functional analysis on the composition of (distinct) contraction mappings along with the Sherman-Morrison formula to show:

* Under increasing peer-pressure, the dynamical system converges.

* If peer-pressure increases in an unbounded way, consensus emerges as a weighted average of the innate beliefs of the individuals.

* The opinion update process converges to a gradient descent, with a linear convergence rate.

* The hypothesis of increasing peer-pressure can be supported with a live data set.

Work herein is complementary to (e.g.) <cit.> in that we consider a dynamic (increasing) peer-pressure coefficient with variable weights on initial belief. Additionally, we analyze the convergence rate of the dynamical system to the fixed point, while <cit.> focus on the model from a game-theoretic perspective. The remainder of this paper is organized as follows: In Section <ref> we present the basic model. In Section <ref> we prove convergence of the model and show that increasing peer pressure leads to consensus on any connected graph. We discuss the convergence rate in Section <ref> by showing that the dynamical system is, effectively, a gradient descent. We briefly relate our work to the cost of anarchy work from <cit.> in Section <ref>. In Section <ref>, we validate the hypothesis of increasing peer pressure by fitting our model to a live data set. Conclusions and future directions are presented in Section <ref>.

§ BASIC NOTIONS AND MODEL

§ PROBLEM STATEMENT AND MODEL We model a network of agents, representing individuals in a social network in which each user communicates with her friends/associates, but not necessarily the entire network. Assume that the agents' network is represented by a simple graph G = (V,E) where the vertices V are agents and the edges E are the social connections (communications) between them. It is clear that disconnected sections of the graph evolve independently, so we assume that G is connected. For the remainder of the paper, let V = {1, 2, …, n}, so E is a subset of the two-element subsets of V. The state of Agent i at time k is a continuous value x_i^(k)∈ [0,1] that represents her disclosed opinion on a bivalent topic (e.g., “I support gun control” or “I like classical music”). Each agent has a constant preference x^+_i∈ [0,1] representing her inherent position on the topic. This may differ from the opinion disclosed to the public. The value x^+_i represents inherent agent bias. Further, Agent i is assigned a non-negative vertex weight s_i and positive edge weights w_ij for (i, j) ∈ E. The weight s_i, termed stubbornness <cit.>, models the tendency of Agent i to maintain her (private) position x^+_i in public. The edge weights w_ij represent friendship affinity. The set of all disclosed opinions is denoted by the vector 𝐱^(k) while the set of constant private preferences is 𝐱^+.
For the remainder of this paper, we refer to publicly disclosed opinions simply as opinions. Agent i's state is updated by minimizing its social stress:

J_i(x_i^(k),𝐱^(k-1), k) = s_i(x_i^(k) - x_i^+)^2 + ρ^(k)∑_j = 1^n w_ij(x_i^(k) - x_j^(k-1))^2

Here ρ^(k) is the peer-pressure coefficient. In the sequel, we assume ρ^(k) is an increasing function of k. As noted in <cit.>, under these assumptions, the first order necessary conditions are sufficient for minimizing J_i(x_i^(k),𝐱^(k-1), k). The optimal state for Agent i at time k is then:

x_i^(k) = (s_i x_i^+ + ρ^(k)∑_j = 1^n w_ij x_j^(k-1))/(s_i + ρ^(k) d_i),

where d_i = ∑_j = 1^n w_ij is the weighted degree of vertex i. The implied update rule generalizes the DeGroot model variation found in <cit.> and the model in <cit.> by including the stubbornness coefficient and an increasing peer-pressure term. Let 𝐀 be the n × n weighted adjacency matrix of G. In addition, let 𝐃 be the n × n matrix with d_i on the diagonal and let 𝐒 be the n × n matrix with s_i on the diagonal. Using these terms, the recurrence in Eq. (<ref>) can be written as:

𝐱^(k) = (𝐒 + ρ^(k)𝐃)^-1(𝐒𝐱^+ + ρ^(k)𝐀𝐱^(k-1))

We say that the agents converge to consensus 𝐱̅ if for every ϵ > 0 there is some N so that for all n > N, ‖𝐱̅ - 𝐱^(n)‖ < ϵ. This represents meaningful compromise on the issue under consideration.

§ CONVERGENCE

In this section, we consider the update rule in Eq. (<ref>) as a sequence of contraction mappings, each with its own fixed point. We then show that all these fixed points converge to a weighted average. The result rests on a variation of the contraction mapping theorem from <cit.>. If 𝐋 = 𝐃 - 𝐀 is the weighted graph Laplacian of the connected graph G, then 𝐋 has an eigenvalue 0 with multiplicity 1 and a corresponding eigenvector 1, where 1 is the vector of all 1's. For any ρ^(k) > 0, 𝐒 + ρ^(k)𝐋 is invertible. By definition, the graph Laplacian is a positive semidefinite symmetric matrix. In addition, since G is connected, the only eigenvector with eigenvalue 0 is the vector of all 1s, written 1. Since 𝐒 is symmetric and s_i ≥ 0, 𝐒 + ρ^(k)𝐋 is positive semidefinite as well. Choose 𝐱∈ℝ^n such that 𝐱^T(𝐒 + ρ^(k)𝐋)𝐱 = 0. Then 𝐱^T(𝐒 + ρ^(k)𝐋)𝐱 = 𝐱^T𝐒𝐱 + ρ^(k)𝐱^T𝐋𝐱. Since 𝐒 and 𝐋 are positive semidefinite and ρ^(k) > 0, this implies that 𝐱^T𝐒𝐱 = 𝐱^T𝐋𝐱 = 0. Since 𝐋 is symmetric, by the spectral theorem it has an orthonormal basis of eigenvectors { b_1, …, b_n} with associated eigenvalues {λ_1, …, λ_n}. Because 𝐋 is positive semidefinite, λ_i ≥ 0, so that 𝐱^T𝐋𝐱 = ∑_i = 1^n λ_i (𝐱^T b_i)^2. Because 𝐱^T𝐋𝐱 = 0, it follows that 𝐱^T b_i = 0 whenever λ_i ≠ 0. It follows that 𝐱 is an eigenvector of 𝐋 with eigenvalue 0; that is, 𝐱 = c 1 for some constant c, and therefore 𝐱^T𝐒𝐱 = c^2 ∑_i=1^n s_i. Since s_i ≥ 0 and not all s_i are zero, we must have c = 0, so 𝐱 = 0. It follows that 𝐒 + ρ^(k)𝐋 is positive definite, and therefore invertible. Define:

F_k(𝐱) = (𝐒 + ρ^(k)𝐃)^-1(𝐒𝐱^+ + ρ^(k)𝐀𝐱)

and let:

G_k = F_k ∘ F_k-1 ∘ ⋯ ∘ F_1

Then 𝐱^(k) = F_k(𝐱^(k-1)) and 𝐱^(k) = G_k(𝐱^(0)). That is, iterating these F_k captures the evolution of 𝐱^(k). We show that for each k, F_k is a contraction and therefore has a fixed point by the Banach Fixed Point Theorem <cit.>. We use this result in the proof of Theorem <ref>. For all k, F_k is a contraction map with fixed point given by 𝐱̅^(k) = (𝐒 + ρ^(k)𝐋)^-1𝐒𝐱^+. Let 𝐁 be the (n+1) × (n+1) matrix given by adding a row and column to ρ^(k)(𝐒 + ρ^(k)𝐃)^-1𝐀 as follows:

𝐁 = [[ ρ^(k)(𝐒 + ρ^(k)𝐃)^-1𝐀   (𝐒 + ρ^(k)𝐃)^-1𝐒1 ; 0   1 ]]

The rows of 𝐁 sum to 1. To see this, replace x^+ and x_i^(k-1) in Eq. (<ref>) with 1.
Thus 𝐁 is a stochastic matrix for a Markov process with a single absorbing state. Since G is connected and not all s_i are equal to 0, a path exists from each state to the absorbing state; thus from any starting state, convergence to the absorbing state is guaranteed. This means that lim_i →∞(ρ^(k)(𝐒 + ρ^(k)𝐃)^-1𝐀)^i = 0, so ρ^(k)(𝐒 + ρ^(k)𝐃)^-1𝐀 is a convergent matrix. Equivalently, if ‖·‖ denotes the matrix operator norm, then ‖ρ^(k)(𝐒 + ρ^(k)𝐃)^-1𝐀‖ < 1. Therefore for any 𝐱, 𝐲∈ [0,1]^n:

‖ F_k(𝐱) - F_k(𝐲) ‖ = ‖ (𝐒 + ρ^(k)𝐃)^-1ρ^(k)𝐀(𝐱 - 𝐲) ‖ ≤ ‖ (𝐒 + ρ^(k)𝐃)^-1ρ^(k)𝐀‖ ‖𝐱 - 𝐲‖

That is, F_k is a contraction map on a compact set, so by the Banach fixed-point theorem, it has a unique fixed point 𝐱̅^(k). Since 𝐱̅^(k) = F_k(𝐱̅^(k)), rearranging the terms yields

(𝐒 + ρ^(k)𝐃)𝐱̅^(k) - ρ^(k)𝐀𝐱̅^(k) = (𝐒 + ρ^(k)𝐋)𝐱̅^(k) = 𝐒𝐱^+.

Therefore:

𝐱̅^(k) = (𝐒 + ρ^(k)𝐋)^-1𝐒𝐱^+.

This completes the proof. The following lemma will allow us to consider the matrices (𝐒 + ρ^(k)𝐋)^-1 for k ∈{1,2,…} in GL_n(ℝ) (the Lie group of invertible n × n real matrices) as perturbations. This enables effective approximations of asymptotic behaviors. Let { b_1, …, b_n} be an orthonormal basis of ℝ^n. Also let 𝐌: ℝ^n→ℝ^n be an invertible symmetric linear transformation (invertible square matrix) and { u_1, …, u_n} be a set of unit vectors such that for a small constant δ, 𝐌^-1 b_1 = λ b_1 + O(δ) u_1 and 𝐌^-1 b_j = O(δ) u_j for j ≠ 1. If ‖ v‖ = 1 and s ∈ℝ, then, provided (𝐌 + s v v^T) is invertible, there exists a set of unit vectors { u_1', …, u_n'} such that (𝐌 + s v v^T)^-1 b_1 = [λ/(1 + sλ ( v^T b_1)^2)] b_1 + O(δ) u_1' and (𝐌 + s v v^T)^-1 b_j = O(δ) u_j' for j ≠ 1. Before proceeding to the proof of this result, based on the Sherman-Morrison formula, we note that we will establish an instance of the necessary conditions of this lemma in Theorem <ref>. Thus the lemma is not vacuous. Since { b_1, …, b_n} is an orthonormal basis, v = ∑_i=1^n a_i b_i where a_i = v^T b_i. This means that 𝐌^-1 v = ∑_i=1^n a_i𝐌^-1( b_i) = λ a_1 b_1 + O(δ)∑_i=1^n a_i u_i. By Cauchy-Schwarz, |a_i| ≤ ‖ v‖‖ b_i‖ = 1, so by the triangle inequality, ‖∑_i=1^n a_i u_i‖ ≤ n. Then letting u = (1/n)∑_i=1^n a_i u_i, we have that 𝐌^-1 v = λ a_1 b_1 + O(δ) u, where ‖ u‖ ≤ 1. By the Sherman-Morrison formula,

(𝐌 + s v v^T)^-1 = 𝐌^-1 - (s𝐌^-1 v v^T𝐌^-1)/(1 + s v^T𝐌^-1 v).

Using this, and choosing each u_i' to be an appropriate rescaling of the O(δ) terms, yields:

(𝐌 + s v v^T)^-1( b_1) = 𝐌^-1 b_1 - (s𝐌^-1 v v^T𝐌^-1 b_1)/(1 + s v^T𝐌^-1 v)
= λ b_1 + O(δ) u_1 - [(s v^T(λ b_1 + O(δ) u_1))/(1 + s v^T(λ a_1 b_1 + O(δ) u))] (λ a_1 b_1 + O(δ) u)
= λ b_1 + O(δ) u_1 - [(sλ a_1 + O(δ))/(1 + sλ a_1^2 + O(δ))] (λ a_1 b_1 + O(δ) u)
= [λ/(1 + sλ ( v^T b_1)^2)] b_1 + O(δ) u_1'

Furthermore, for j ≠ 1:

(𝐌 + s v v^T)^-1( b_j) = 𝐌^-1 b_j - (s𝐌^-1 v v^T𝐌^-1 b_j)/(1 + s v^T𝐌^-1 v)
= O(δ) u_j - [(s v^T(O(δ) u_j))/(1 + s v^T(λ a_1 b_1 + O(δ) u))] (λ a_1 b_1 + O(δ) u)
= O(δ) u_j'

This completes the proof. The results stated give insight into the motion of the fixed points as ρ^(k) increases. We now show that the fixed points given by Eq. (<ref>) converge to the average of the agents' initial preferences, weighted by the stubbornness of each agent. We then use that result to prove that the dynamics converge to this point when ρ^(k)→∞. If lim_k →∞ρ^(k) = ∞, then:

lim_k →∞𝐱̅^(k) = (∑_i = 1^n s_i x_i^+/∑_i = 1^n s_i) 1.

Since G is a graph, the Laplacian 𝐋 is a positive semidefinite symmetric matrix, and therefore has an orthonormal basis of eigenvectors { b_1, …, b_n} with real eigenvalues {λ_1, …, λ_n}.
Since G is connected, exactly one eigenvalue is zero, λ_1 = 0, and the associated unit eigenvector is b_1 = (1/√(n)) 1. Since every vector is an eigenvector of the identity matrix I, { b_1, …, b_n} are an orthonormal basis of eigenvectors for I + ρ^(k)𝐋 with eigenvalues {1, 1 + ρ^(k)λ_2, …, 1 + ρ^(k)λ_n}. But then (I + ρ^(k)𝐋)^-1 has the same basis of eigenvectors, with eigenvalues {1, 1/(1 + ρ^(k)λ_2), …, 1/(1 + ρ^(k)λ_n)}. As ρ^(k)→∞, 1/(1 + ρ^(k)λ_j) → 0 for each j ≠ 1. In particular, for any δ > 0, for sufficiently large ρ^(k), I + ρ^(k)𝐋 satisfies the conditions of Lemma <ref> with λ = 1. Let I + ρ^(k)𝐋 = 𝐌_0. Then, for each l up to n, let 𝐌_l = 𝐌_l-1 + (s_l - 1) e_l e_l^T, where e_l is the lth vector of the standard basis. Since e_l e_l^T is the zero matrix with a one in the lth place on the diagonal, ∑_l=1^n (s_l - 1) e_l e_l^T = 𝐒 - I and therefore 𝐌_n = (I + ρ^(k)𝐋) + ∑_l=1^n (s_l - 1) e_l e_l^T = 𝐒 + ρ^(k)𝐋. By iterating Lemma <ref> with s = s_l - 1 and v = e_l, we have that for each l there is a λ_l such that 𝐌_l^-1 b_1 = λ_l b_1 + O(δ) u_1^(l) and 𝐌_l^-1 b_j = O(δ) u_j^(l) for j ≠ 1. Since e_l^T b_1 = 1/√(n), Lemma <ref> gives the recurrence:

λ_l = λ_l-1/(1 + λ_l-1 (s_l - 1)/n)

Solving this recurrence with λ_0 = 1 yields λ_l = n/(n + ∑_k=1^l (s_k - 1)), so that:

𝐌_l^-1 b_1 = [n/(n + ∑_k=1^l (s_k - 1))] b_1 + O(δ) u_1^(l)

Since ∑_k=1^n (s_k - 1) = tr(𝐒) - n, it is clear that 𝐌_n^-1 b_1 = [n/tr(𝐒)] b_1 + O(δ) u_1^(n). Therefore, writing u = ∑_i=1^n ( b_i^T𝐒𝐱^+) u_i^(n):

𝐱̅^(k) = 𝐌_n^-1𝐒𝐱^+ = [n/tr(𝐒)] ( b_1^T𝐒𝐱^+) b_1 + O(δ) u = (1^T𝐒𝐱^+/tr(𝐒)) 1 + O(δ) u = (∑_i=1^n s_i x_i^+/∑_i=1^n s_i) 1 + O(δ) u

Since δ→ 0 as ρ^(k)→∞, if lim_k →∞ρ^(k) = ∞, then:

lim_k →∞𝐱̅^(k) = (∑_i=1^n s_i x_i^+/∑_i=1^n s_i) 1.

This completes the proof. Since peer pressure increases in each step, no single F_k is sufficient to model the process of convergence. We use the following result from <cit.>. Let {f_n} be a sequence of analytic contractions in a domain D with f_n(D) ⊆ E ⊆ D_0 ⊆ D for all n. Then F_n = f_n ∘ f_n-1 ∘ ⋯ ∘ f_1 converges uniformly in D_0 and locally uniformly in D to a constant function F(z) = c ∈ E. Furthermore, the fixed points of f_n converge to the constant c. The following corollary is now immediate from Lemmas <ref> and <ref>: From Eq. (<ref>), let G_k = F_k ∘ G_k-1 = F_k ∘ F_k-1 ∘ … ∘ F_1 for each k ≥ 0. Then G = lim_k →∞ G_k is a constant function and (functional) convergence is uniform. We now have the following theorem, which follows immediately from Corollary <ref> and Theorem <ref>: If ρ^(k)→∞, then:

lim_k →∞𝐱^(k) = (∑_i = 1^n s_i x_i^+/∑_i = 1^n s_i) 1.

This means that in the case of increasing and unbounded peer pressure, all the agents' opinions always converge to consensus. In addition, the value of this consensus is the average of their preferences weighted by their stubbornness. This holds irrespective of the weighting of the edges in the network, so long as the network is connected. We illustrate opinion consensus on a simple graph with 15 vertices in Fig. <ref>. The vertices are organized into three connected cliques. Each clique was initialized with a distinct range of opinions in [0,1]. Initial stubbornness was set randomly and is shown by relative vertex size. The opinion trajectories for this example are shown in Fig. <ref>. In the case of increasing but bounded peer pressure, we have:

lim_k →∞ρ^(k) ≤ρ^*

Further, this limit always exists by monotone convergence. Intuitively, this means the influence of others is limited, and that personal preferences will always slightly skew the opinions of others.
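These convergence results are easy to check numerically. The following minimal sketch (Python/numpy; the random graph, weights, and peer-pressure schedules are arbitrary illustrative choices) iterates the update rule of Eq. (<ref>), first with ρ^(k) = k, comparing the terminal state to the stubbornness-weighted average, and then with a bounded schedule ρ^(k)→ρ^*, comparing to the fixed distribution (𝐒 + ρ^*𝐋)^-1𝐒𝐱^+ established in the next theorem:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 15
A = np.triu(rng.random((n, n)) < 0.3, 1)
A = (A | A.T).astype(float)        # symmetric 0/1 adjacency matrix
A[0, 1:] = A[1:, 0] = 1.0          # attach every vertex to vertex 0 so G is connected
np.fill_diagonal(A, 0.0)
D = np.diag(A.sum(axis=1))         # weighted degree matrix
L = D - A                          # graph Laplacian
s = rng.uniform(0.1, 2.0, n)       # stubbornness coefficients
S = np.diag(s)
x_plus = rng.random(n)             # innate preferences x^+

# Unbounded, increasing peer pressure (rho_k = k): opinions approach the
# stubbornness-weighted average of the innate preferences.
x = x_plus.copy()
for k in range(1, 5001):
    rho = float(k)
    x = np.linalg.solve(S + rho * D, S @ x_plus + rho * (A @ x))
consensus = s @ x_plus / s.sum()
print("deviation from weighted average:", np.abs(x - consensus).max())

# Bounded, increasing peer pressure (rho_k -> rho*): opinions approach the
# fixed distribution (S + rho* L)^{-1} S x^+, not in general a consensus.
rho_star = 5.0
y = x_plus.copy()
for k in range(1, 200):
    rho = rho_star * (1.0 - 0.5 ** k)
    y = np.linalg.solve(S + rho * D, S @ x_plus + rho * (A @ y))
print("deviation from fixed distribution:",
      np.abs(y - np.linalg.solve(S + rho_star * L, S @ x_plus)).max())
```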
Again, this is consistent with social influence theories on bounded peer pressure and trade-offs with comfort level <cit.>. Suppose ρ^(k) is increasing and bounded with:

lim_k →∞ρ^(k) = ρ^*.

Then lim_k →∞𝐱^(k) = (𝐒 + ρ^*𝐋)^-1𝐒𝐱^+. Since ρ^(k) is increasing and bounded, it converges to a finite number ρ^* by monotone convergence. From Lemma <ref>, 𝐒 + ρ^*𝐋 is defined and invertible. Since matrix inversion is continuous in GL_n(ℝ), by Theorem <ref>:

lim_k →∞𝐱^(k) = lim_k →∞𝐱̅^(k) = lim_k →∞(𝐒 + ρ^(k)𝐋)^-1𝐒𝐱^+ = (𝐒 + lim_k →∞ρ^(k)𝐋)^-1𝐒𝐱^+ = (𝐒 + ρ^*𝐋)^-1𝐒𝐱^+

The above theorem tells us that if peer pressure is increasing and bounded, the agents' opinions converge to a fixed distribution, which may not be a consensus, but is easily computable from the initial preferences. In this case, the shape of the network is important for determining the limit distribution, as the edge weights factor into the Laplacian. This result is similar to the convergence point given in <cit.>, where stubbornness coefficients are not present and peer pressure is constant.

§ CONVERGENCE RATE

We analyze the convergence rate of the algorithm and obtain a secondary result on efficiency. Define the utility of these convergent points to be the sum of the stress of the agents when the state 𝐱 is constant. Formally:

U^(k)(𝐱) = ∑_i J_i(x_i, 𝐱, k) = ∑_i = 1^n s_i(x_i - x_i^+)^2 + ρ^(k)∑_i,j w_ij(x_i - x_j)^2 = (𝐱 - 𝐱^+)^T𝐒(𝐱 - 𝐱^+) + 2ρ^(k)𝐱^T𝐋𝐱 = 𝐱^T(𝐒 + 2ρ^(k)𝐋)𝐱 - 2𝐱^T𝐒𝐱^+ + (𝐱^+)^T𝐒𝐱^+

Define the limiting utility U(𝐱) as:

U(𝐱) = lim_k →∞ (1/ρ^(k)) U^(k)(𝐱)

The following lemma is immediately clear from the construction of the functions J_i, the fact that each U^(k) is a strictly convex function, and the fact that U is the limit of these strictly convex functions: The global utility function U(𝐱) is convex. Furthermore, the fact that (i) U^(k) is smooth on its entire domain and (ii) U^(k)(𝐱)/ρ^(k) converges uniformly to U(𝐱), implies that U(𝐱) is differentiable and that its derivative can be computed as the limit of the derivatives of U^(k)(𝐱)/ρ^(k). Using the global utility function, we can analyze the convergence rate of the update rule. From Eq. (<ref>), we can compute:

Δ x_i^(k-1) = x_i^(k) - x_i^(k-1) = [s_i(x_i^+ - x_i^(k-1)) + ρ^(k)∑_j=1^n w_ij(x_j^(k-1) - x_i^(k-1))]/[s_i + ρ^(k)∑_j=1^n w_ij]

Let:

α_i^(k) = 1/(s_i + ρ^(k)∑_j=1^n w_ij)

and define 𝐇^(k) = (1/2) diag(α_1^(k),…,α_n^(k)). Computing the gradient of U^(k) yields:

Δ𝐱^(k-1) = -𝐇^(k)∇ U^(k)(𝐱^(k-1))

We conclude the update rule, Eq. (<ref>), can be written:

𝐱^(k) = 𝐱^(k-1) - 𝐇^(k)∇ U^(k)(𝐱^(k-1))

Necessarily, 𝐇^(k) is always positive definite and therefore -𝐇^(k)∇ U^(k)(𝐱^(k-1)) is always a descent direction for U^(k). Moreover, (∇ U^(k))^T∇ U > 0 and consequently -𝐇^(k)∇ U^(k)(𝐱^(k-1)) is a descent direction for U(𝐱). Thus, the update rule is a descent algorithm, which explains the initial fast convergence toward the average (see Fig. <ref>). When the descent direction converges to a Newton step, a descent algorithm can be shown to converge superlinearly <cit.>. However, these steps do not converge to Newton steps. As ρ^(k) grows large, α_i^(k)→ 0 and U^(k)/ρ^(k)→ U, and consequently for large k:

(1/ρ^(k)) 𝐇^(k)∇ U^(k)(𝐱^(k-1)) ≈ϵ∇ U(𝐱^(k-1))

for ϵ∼ 1/ρ^(k). Thus, the update rule approaches a simple gradient descent. We show that a consequence of this is a linear convergence rate. Let:

𝐱^* = (∑_i=1^n s_i x_i^+/∑_i=1^n s_i) 1

and define:

𝐲^(k) = 𝐱^(k) - 𝐱^*.

From Eq. (<ref>) we compute:

‖𝐱^(k+1) - 𝐱^*‖/‖𝐱^(k) - 𝐱^*‖ = ‖𝐲^(k) - 𝐇^(k+1)∇ U^(k+1)(𝐱^(k))‖/‖𝐲^(k-1) - 𝐇^(k)∇ U^(k)(𝐱^(k-1))‖

Assuming ρ^(k)→∞ as k →∞, and expanding the gradient using Eq.
lim_k→∞ ‖𝐱^(k+1) - 𝐱^*‖/‖𝐱^(k) - 𝐱^*‖
= lim_k→∞ ‖𝐲^(k) - 𝐇^(k+1)([𝐒 + 2ρ^(k+1)𝐋]𝐱^(k) - 2𝐒𝐱^+)‖ / ‖𝐲^(k-1) - 𝐇^(k)([𝐒 + 2ρ^(k)𝐋]𝐱^(k-1) - 2𝐒𝐱^+)‖
= lim_k→∞ ‖𝐲^(k)/ρ^(k) - 𝐇^(k+1)([𝐒/ρ^(k) + 2(ρ^(k+1)/ρ^(k))𝐋]𝐱^(k) - 2𝐒𝐱^+/ρ^(k))‖ / ‖𝐲^(k-1)/ρ^(k) - 𝐇^(k)([𝐒/ρ^(k) + 2𝐋]𝐱^(k-1) - 2𝐒𝐱^+/ρ^(k))‖
= lim_k→∞ [2(ρ^(k+1)/ρ^(k)) ‖𝐇^(k+1)𝐋𝐱^(k)‖] / [2‖𝐇^(k)𝐋𝐱^(k-1)‖],

where the second step multiplies the numerator and denominator by 1/ρ^(k). As ρ^(k)→∞, we see that:

𝐇^(k) → (1/(2ρ^(k))) 𝐃^-1,

where 𝐃 is the diagonal weighted degree matrix. Then:

lim_k→∞ [2(ρ^(k+1)/ρ^(k)) ‖𝐇^(k+1)𝐋𝐱^(k)‖] / [2‖𝐇^(k)𝐋𝐱^(k-1)‖] = lim_k→∞ (ρ^(k+1)/ρ^(k)) [(1/ρ^(k+1))‖𝐃^-1𝐋𝐱^(k)‖] / [(1/ρ^(k))‖𝐃^-1𝐋𝐱^(k-1)‖] = 1

Thus we have shown:

The convergence rate of the update rule given in Eq. (<ref>) is linear. In particular:

lim_k→∞ ‖𝐱^(k+1) - 𝐱^*‖/‖𝐱^(k) - 𝐱^*‖ = 1.

We illustrate the slow convergence on a larger example with 500 vertices organized into a scale-free graph generated using the Barabási-Albert <cit.> graph construction algorithm. The graph and snapshots of opinion evolution are shown in Fig. <ref>. We show the opinion trajectories for the 500-vertex scale-free network in Fig. <ref>(a) and illustrate Eq. (<ref>) in Fig. <ref>(b). Notice that the ratio ‖𝐱^(k+1) - 𝐱^*‖/‖𝐱^(k) - 𝐱^*‖ approaches 1 as expected (a minimal numerical check of this behavior is sketched below).

§ COST OF ANARCHY

<cit.> observe that simultaneous minimization of Eq. (<ref>) is a game-theoretic problem and compare the total social utility of a centralized solution to that of a decentralized solution (Nash equilibrium); i.e., they compute a price of anarchy <cit.>. To analyze the price of anarchy of this system, we cannot use the utility function in Eq. (<ref>), as U^(k)(lim_k→∞ 𝐱^(k)) → 0 when ρ^(k)→∞. Instead, we use a total utility function U_T(𝐱) = lim_k→∞ U^(k)(𝐱) to compute the cost of anarchy:

The convergent point lim_k→∞ 𝐱^(k) minimizes the total utility if and only if lim_k→∞ ρ^(k) = ∞.

If ρ^(k) converges to a finite number ρ^*, then the total utility is

U_T(𝐱) = 𝐱^T(𝐒 + 2ρ^*𝐋)𝐱 - 2𝐱^T𝐒𝐱^+ + (𝐱^+)^T𝐒𝐱^+

Note that this is identical to the work in <cit.>, except with edge weights multiplied by ρ^*. We note that lim_k→∞ 𝐱^(k) is the Nash equilibrium used in <cit.>. From the work in <cit.> we may conclude that the convergent point is not optimal for finite ρ^*.

If lim_k→∞ ρ^(k) = ∞ and 𝐱 ≠ c1 for any constant c, then 𝐱^T𝐋𝐱 > 0, so U^(k)(𝐱) grows without bound. However, for any k, we have U^(k)(c1) = ∑_i=1^n s_i(c - x_i^+)^2, so U_T(c1) = ∑_i=1^n s_i(c - x_i^+)^2. By the first-order necessary conditions of optimality, (∑_i=1^n s_i x_i^+ / ∑_i=1^n s_i) 1 minimizes U_T(𝐱), and thus lim_k→∞ 𝐱^(k) is optimal.

This gives the following trivial corollary, which is consistent with the work in <cit.>:

The cost of anarchy is 1 if and only if lim_k→∞ ρ^(k) = ∞.

§ EMPIRICAL ANALYSIS

The hypothesis of increasing peer pressure in social settings underlies this work. We attempt to (in)validate the hypothesis that peer pressure does increase in real-world systems by using data from the well-known Social Evolution Experiment <cit.>. The experiment tracked the everyday life of approximately 80 students in an undergraduate dormitory over 6 months using mobile phones and surveys, in order to mine spatio-temporal behavioral patterns and the co-evolution of individual behaviors and social network structure. The dataset includes proximity, location, and call logs, collected through a mobile application. Also included are sociometric survey data on relationships, political opinions, recent smoking behavior, attitudes towards exercise and fitness, attitudes towards diet, attitudes towards academic performance, current confidence and anxiety level, and musical tastes.
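Before describing the derived network, we pause for the small numerical check of the linear rate promised above. The sketch below evaluates the convergent points 𝐱^(k) = (𝐒 + ρ^(k)𝐋)^-1𝐒𝐱^+ under a linearly growing ρ^(k) = k (the ring graph, stubbornness values, and preferences are illustrative assumptions) and prints the error ratio ‖𝐱^(k+1) - 𝐱^*‖/‖𝐱^(k) - 𝐱^*‖, which creeps toward 1 as in Fig. <ref>(b).

import numpy as np

rng = np.random.default_rng(0)
n = 10
W = np.zeros((n, n))
for i in range(n):                        # illustrative weighted ring graph
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
s = rng.uniform(0.5, 2.0, n)              # assumed stubbornness coefficients
x_plus = rng.uniform(0.0, 1.0, n)         # assumed innate preferences
x_star = (s @ x_plus / s.sum()) * np.ones(n)

prev_err = None
for k in range(1, 4001):
    x_k = np.linalg.solve(np.diag(s) + k * L, np.diag(s) @ x_plus)
    err = np.linalg.norm(x_k - x_star)
    if prev_err is not None and k % 1000 == 0:
        print(k, err / prev_err)          # the ratio tends to 1: linear rate
    prev_err = err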
The derived social network graph (shown in Fig. <ref>) represents each student as a node; an edge is present between two nodes if either student noted any level of interaction during the surveys. Edge weights were derived from the level of interaction recorded between the students in the surveys, as well as the number of surveys in which the interaction appeared. We note that this graph is not scale-free, unlike typical social networks. This may be a result of the size of the network, collection bias, or may simply be representative of this particular social network. As a consequence, it is dense.

Political opinion was modeled on a [0,1] scale, with lower numbers representing Republican preferences and higher numbers representing Democratic preferences. Individual scores were assigned based on reported political party, preferred candidate, and likelihood of voting (prior to the election), as well as who they voted for and their approval rating of Barack Obama (after the election). Appendix <ref> contains the code used to set these preferences. Each month's survey was examined individually to put together a monthly timeline of each person's political views. The results of the first survey were used as a proxy for their inherent personal preference, prior to peer influence.

Finally, individual stubbornness (lack of susceptibility to peer pressure) was approximated using the reported interest in politics on the first survey administered, as well as the stated likelihood of voting. These survey questions were independent of those used in determining political preferences. Appendix <ref> contains the code used to set stubbornness.

Given a list of ρ^(k) values, students' preferences were simulated by aligning each iteration of play to one day in the survey period. The simulated preferences were compared to the surveyed preferences at each month, and the distances between the vectors were summed to obtain a single score for each list of ρ^(k). This function of ρ^(k) was minimized with fminsearch in Matlab, and the best-fit peer pressure values were found to be increasing, with a best-fit line ρ^(k) = 1.06k - 11.96 and r^2 = 0.9886 (see Fig. <ref>).

The inferred increase in peer pressure is consistent with the underlying hypothesis of the paper, under the assumption that the process of repeated opinion averaging with stubbornness is a valid model of human behavior. We discuss this further in Section <ref>.

§ CONCLUSION

In this paper we study an opinion formation model in the presence of increasing peer pressure. As in earlier work, we consider agents whose opinions are affected by unchanging innate beliefs. In this paper, the relative strength of these innate beliefs may vary from agent to agent. We show that in the case of unbounded peer pressure, opinion consensus to a weighted average of innate beliefs is ensured. We also consider the case when peer pressure is increasing but bounded. Simulation suggests a numerically slow convergence, which is explained by showing that the system dynamics converge to gradient descent applied to a certain convex function. Using this observation, we show that convergence is linear. We evaluate our hypothesis that peer pressure increases in real-world closed systems by fitting our model to a live dataset. We note that the assumption of a non-constant (and increasing) peer-pressure coefficient can help mitigate the fast initial convergence of this class of models. It is rare in the real world to see dramatic opinion shifts over extremely short time scales.
Such dramatic shifts are consistent with a gradient descent. However, by varying peer pressure, the gradient descent can be controlled, leading to more consistency with real-world phenomena, as illustrated.

In future work, the limitation that the network is undirected and symmetric should be removed to account for asymmetric social influence. In addition, the network is assumed to remain static during the convergence process, with connections independent of the agents' opinions. Sufficiently different opinions could cause enough stress between agents so as to cause them to reduce influence or even sever the tie between them. A dynamic network model as in <cit.> could accommodate this kind of network update. Finally, it would be interesting to study corresponding control problems, in which we are given an 𝐱̅, the desired convergence point, and we can control a subset of agents' reported values (x_i^(k)), stubbornness (s_i), or initial values (x_i^+) to determine conditions under which opinion steering is possible. This problem becomes more interesting if the other agents attempt to determine whether certain agents are intentionally attempting to manipulate the opinion value 𝐱^(k). Of equal interest is the transient control problem in which 𝐱^(k) is steered through a set X ⊂ ℝ^n under the assumption that external factors will prevent convergence in the long run.

§ ACKNOWLEDGEMENT

All authors were supported in part by the Army Research Office, under Grant W911NF-13-1-0271. A portion of CG's work was supported by the National Science Foundation under grant number CMMI-1463482. A portion of AS's work was supported by the National Science Foundation under grant number 1453080.

§ INITIAL CONDITION CODE

The Matlab code below sets the initial preferences (𝐱^+) in this experiment. InferPrefs.m

§ STUBBORNNESS SETTING CODE

The Matlab code below sets the stubbornness coefficients (𝐬) in this experiment. PoliticalStubbornness.m
http://arxiv.org/abs/1702.07912v2
{ "authors": [ "Justin Semonsen", "Christopher Griffin", "Anna Squicciarini", "Sarah Rajtmajer" ], "categories": [ "cs.SI", "cs.DM", "physics.soc-ph" ], "primary_category": "cs.SI", "published": "20170225160354", "title": "Increasing Peer Pressure on any Connected Graph Leads to Consensus" }
http://arxiv.org/abs/1702.08347v1
{ "authors": [ "Vittorio Basso" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170227160509", "title": "Basics of the magnetocaloric effect" }
A Unified Model for Repeating and Non-repeating Fast Radio Bursts

Manjari Bagchi
The Institute of Mathematical Sciences (IMSc-HBNI), 4th Cross Road, CIT Campus, Taramani, Chennai 600113, India
manjari@imsc.res.in

The model that fast radio bursts are caused by plunges of asteroids onto neutron stars can explain both repetitive and non-repetitive bursts. If a neutron star passes through an asteroid belt around another star, there would be a series of bursts caused by a series of asteroid impacts. Moreover, the neutron star would cross the same belt repetitively if it is in a binary with the star hosting the asteroid belt, leading to a repeated series of bursts. I explore the properties of neutron star binaries which could lead to the only known repetitive fast radio burst so far (FRB121102). In this model, the next two epochs of bursts are expected around 27-February-2017 and 18-December-2017. On the other hand, if the asteroid belt is located around the neutron star itself, then a chance fall of an asteroid from that belt onto the neutron star would lead to a non-repetitive burst. Even a neutron star grazing an asteroid belt can lead to a non-repetitive burst caused by just one asteroid plunge during the grazing. This is possible even when the neutron star is in a binary with the asteroid-hosting star, if the belt and the neutron star orbit are non-coplanar.

§ INTRODUCTION

Since the first discovery a decade ago by <cit.>, Fast Radio Bursts (FRBs) have given rise to a plethora of models. Some of those are catastrophic in origin, like mergers of two neutron stars <cit.>, mergers of two white dwarfs <cit.>, mergers of two black holes when at least one is charged <cit.>, collapses of supra-massive neutron stars into black holes <cit.>, etc., while some others are non-catastrophic, like the magnetospheric activity of neutron stars <cit.>, collisions between neutron stars and asteroids <cit.>, flares from stars in the Galaxy <cit.>, etc.

The repetitive nature of FRB121102 <cit.> has ruled out the catastrophic models, unless this event is of an origin distinct from that of all other FRBs. Localization of this event to within a dwarf galaxy at a redshift of z = 0.19273(8) has excluded models involving a Galactic origin as well <cit.>.

The fall of an asteroid of a mass of a few 10^18 gm can cause an FRB <cit.>. When an asteroid comes close to a magnetized neutron star, a very large electric field is induced parallel to the magnetic field of the neutron star. This induced electric field detaches electrons from the surface of the deformed asteroid and accelerates these electrons to ultra-relativistic energies. Curvature radiation from these ultra-relativistic electrons moving along the magnetic field lines produces FRBs. This model has the potential to explain a diverse population of FRBs depending on the nature of the impact. If there is an asteroid belt around a neutron star, then a chance fall of an asteroid from that belt onto the neutron star would lead to a single burst. <cit.> argued that one neutron star is likely to face only one such asteroid impact, i.e. a non-repeating burst from a particular source. Later, <cit.> demonstrated that a neutron star traveling through an asteroid belt would face several asteroid impacts, resulting in a series of bursts. Although <cit.> mainly concentrated on the phenomenon of isolated neutron stars moving in a galaxy and passing through asteroid belts around other stars, they also mentioned that if the neutron star comes too close to the asteroid-hosting star, then it might get captured by the other star, forming a binary.
In this case, the neutron star would cross the asteroid belt repetitively, leading to repeating series of bursts, and this might be the case for FRB121102. In the present letter, I extend this model. Note that it is not essential that the neutron star binary form only via the capture process; it is also possible that the asteroid belt was created during the evolution of the binary. The neutron star would cross the asteroid belt around its stellar companion (twice in each orbital revolution) if the radius of the asteroid belt is smaller than the apastron distance of the neutron star and the orbit of the neutron star is eccentric (at least mildly), as no crossing is possible in the case of two concentric circular orbits. When both of the above conditions are satisfied, there would be more than one plunge of asteroids onto the neutron star, giving rise to a series of FRBs when the neutron star is inside the asteroid belt, followed by a quiescent period (due to the absence of such plunges) when the neutron star is out of the belt; and the whole process would repeat due to the orbital motion of the neutron star. I elaborate this special situation in section <ref>.

Sections <ref> and <ref> demonstrate the application of the model for the case of FRB121102, including possible system parameters which would agree with observations. Finally, in section <ref>, I generalize the model and discuss how non-repetitive FRBs can also be explained within this model.

§ ASTEROID INFALLS ONTO A NEUTRON STAR PASSING THROUGH AN ASTEROID BELT

The rate of infalls of asteroids onto a neutron star passing through an asteroid belt, i.e. the rate of FRBs, can be written as <cit.>:

ℛ ∼ 1.25 × 10^-10 R_ns,6 (M_ns/1.4 M_⊙) v_ns,7^-1 N_a (η_1/0.2)^-1 (η_2/0.2)^-1 (R_belt/2 AU)^-3  h^-1

where R_ns,6 is the radius of the neutron star in units of 10^6 cm, v_ns,7 is the speed of the neutron star in units of 10^7 cm s^-1, N_a is the total number of asteroids in the belt, η_1 R_belt and η_2 R_belt are the thickness and width of the belt, R_belt is the inner radius of the belt, and η_1 and η_2 are two fractional numbers. R_ns,6 = 1 is the standard value for the radius of neutron stars. <cit.> used v_ns,7 = 2 as the speed of isolated neutron stars moving in their host galaxies. One needs to replace this value with the orbital speed of the neutron star in order to estimate the rate of asteroid infalls during the passage of the neutron star through an asteroid belt around its binary companion. The orbital speed of the neutron star can be written as:

v_b,ns(f) = √(G (M_ns + M_com)/(a_ns (1-e^2))) [1 + 2e cos f + e^2]^1/2,

where G is the gravitational constant, f is the true anomaly of the neutron star, and a_ns and e are the semi-major axis and the eccentricity of the orbit of the neutron star, respectively. M_ns and M_com are the masses of the neutron star and the companion. a_ns is related to P_b as a_ns = [M_com/(M_ns + M_com)] [(P_b/2π)^2 G (M_ns + M_com)]^1/3. Thus, to estimate ℛ, one needs to know various parameters like a_ns (or P_b), e, f, M_ns, and M_com. The standard value of M_ns is 1.4 M_⊙. Moreover, the path length of the neutron star within the asteroid belt can be estimated if the location of the belt in the orbit can be determined.
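To make these relations concrete, the following sketch evaluates the two equations above numerically; the companion mass, eccentricity, and true anomalies are sample values taken from the demonstration later in this letter. Note that the speeds quoted in the tables below are arc-averaged, so the pointwise values printed here only bracket them.

import numpy as np

G, MSUN, DAY = 6.674e-8, 1.989e33, 86400.0       # cgs units

def orbit(P_b_days, e, f_deg, M_ns=1.4, M_com=0.6):
    """Relative and NS semi-major axes (km) and the NS orbital speed
    (km/s) at true anomaly f, following the two equations above."""
    M_tot = (M_ns + M_com) * MSUN
    a_R = (G * M_tot * (P_b_days * DAY / (2.0 * np.pi)) ** 2) ** (1.0 / 3.0)
    a_ns = (M_com / (M_ns + M_com)) * a_R
    f = np.radians(f_deg)
    v = (np.sqrt(G * M_tot / (a_ns * (1.0 - e ** 2)))
         * np.sqrt(1.0 + 2.0 * e * np.cos(f) + e ** 2))
    return a_R / 1e5, a_ns / 1e5, v / 1e5        # cm -> km

# Sample case: P_b = 468 d, e = 0.5, M_com = 0.6 M_sun; f at belt entry/exit
print(orbit(468.0, 0.5, 227.6))  # a_R ~ 2.2e8 km, a_ns ~ 6.7e7 km, v ~ 55 km/s
print(orbit(468.0, 0.5, 244.7))  # same axes, v ~ 66 km/s (arc average ~ 60 km/s)

We now turn to the path length itself.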
This path length is the arc-length of the orbit inside the belt:

s = ∫_f_1^f_2 √(r_ns^2 + (dr_ns/df)^2) df,

where f_1 and f_2 are the true anomalies of the neutron star when it enters and exits the belt, and r_ns is the magnitude of the radius vector of the neutron star in its orbit, defined as:

r_ns(f) = a_ns (1-e^2)/(1 + e cos f).

(A numerical sketch of this integral appears at the end of this section.) In the next section, I fit reported detections and non-detections of bursts from the direction of FRB121102 to extract a value of P_b, obtain realistic values of ℛ for a wide range of other parameters, and estimate values of the path length.

§ APPLICATION OF THE MODEL FOR THE CASE OF FRB121102

Epochs of detections and non-detections of bursts from the direction of FRB121102 are noticeable in the compilation of the published results in Table <ref>. In the present model, the epochs of non-detections (09-December-2015 to 01-February-2016 as mentioned in <cit.>, and 01-May-2016 to 27-May-2016 as mentioned in <cit.>) can be explained as the neutron star being in the orbital phases out of the asteroid belt, while the epochs of detections are the times when the neutron star was inside the belt. The neutron star would pass through one of the turning points of the orbit between two such epochs of detections, i.e. it would exit the belt, pass through a turning point, and enter the belt again (at another location).

Table <ref> shows that the mid-point of the last epoch of detection was at a separation of 295 days from that of the previous epoch of detection, which was again separated from the epoch of detection preceding it by 175 days. This fact suggests that the orbital period of the neutron star is around 470 days (295+175 days), and that it takes around 295 days for the neutron star to travel from one mid-location in the belt to another through a turning point and around 175 days to return to the first location through the other turning point.

By changing the above intervals slightly (1 day each)[mid-points of epochs of detections can be slightly different from the times the neutron star comes to the middle of the belt, as it is not necessary that the asteroids would start plunging as soon as the neutron star enters the belt], I can fit all the epochs shown in Table <ref>. I call the mid-point of the first epoch of detection position-1, which is MJD 56233 and can be considered a location well inside the belt. I get subsequent times for the neutron star to be inside the belt as: position-a = position-1 + 174 days = MJD 56407, position-b = position-a + 294 days = MJD 56701, position-c = position-b + 174 days = MJD 56875, position-2 = position-c + 294 days = MJD 57169, position-3 = position-2 + 174 days = MJD 57343, position-4 = position-3 + 294 days = MJD 57637. These are the positions when the neutron star was inside the belt, experiencing asteroid infalls that triggered series of FRBs. Note that these positions are very close to the mid-points of each epoch. This proximity supports the validity of the present model. None of these locations falls in the epochs of reported non-detections. This fit also allows me to assume P_b = 468 days.

Following the same logic, the next two epochs when the neutron star will pass through the asteroid belt are position-x1 = position-4 + 174 days = MJD 57811 (27-February-2017) and position-x2 = position-x1 + 294 days = MJD 58105 (18-December-2017). Bursts near those times are expected.

Presently there is no reported data for position-a, position-b, and position-c, although the present model expects bursts during those epochs.
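As promised above, here is a numerical sketch of the arc-length integral, evaluated with the trapezoidal rule; the sample values (a_ns ≃ 6.7 × 10^7 km, e = 0.5, and the case-A anomaly range) are taken from the demonstration that follows.

import numpy as np

def arc_length(a_ns_km, e, f1_deg, f2_deg, n=200001):
    """Numerically integrate the arc-length of the orbit between the
    true anomalies f1 and f2 (trapezoidal rule)."""
    f = np.linspace(np.radians(f1_deg), np.radians(f2_deg), n)
    p = a_ns_km * (1.0 - e ** 2)              # semi-latus rectum of the orbit
    r = p / (1.0 + e * np.cos(f))             # r_ns(f)
    drdf = p * e * np.sin(f) / (1.0 + e * np.cos(f)) ** 2
    g = np.sqrt(r ** 2 + drdf ** 2)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(f))

print(arc_length(6.7e7, 0.5, 227.6, 244.7))   # ~2.4e7 km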
Regarding these epochs, it would be interesting if the observing team clarified whether any sensitive radio telescope was pointed in this direction during those times. However, observed data lacking any noticeable burst would not rule out the present model. If the observed data were contaminated by RFI, then it would be difficult to identify bursts. The bursts can even be of very low luminosity. The luminosity of bursts in this model is given as L ≃ 2.63 × 10^40 m_18^8/9 if standard values for other parameters are used (Table 2 of <cit.>), where m_18 is the mass of the asteroids in units of 10^18 gm. The luminosity would decrease by a factor of 7.7 if the mass of the asteroid were smaller by one order of magnitude, and by a factor of 59.9 if the mass of the asteroid were smaller by two orders of magnitude. So it is possible to have a series of very faint bursts if all of the asteroids that fell onto the neutron star during a particular passage are of low mass. The last possibility is the true absence of bursts. This can be the case if the asteroid belt, whose constituents are also orbiting around the companion of the neutron star, has some local voids and the neutron star crossed the belt through those voids. At present, it is not possible to favor one scenario over the others - more published observations aimed at testing these possibilities are essential.

In the next section, I estimate ℛ and s for different test binaries by varying e and keeping P_b fixed at the value of 468 days. Neutron stars can exist in such wide binaries - two binary pulsars with similar values of P_b are PSR B1800-27, with P_b = 407 days, e = 0.0005, M_c,med = 0.17 M_⊙ <cit.>, and PSR J0214+5222, with P_b = 512 days, e = 0.005, M_c,med = 0.48 M_⊙ <cit.>. Note that, for such wide-period binaries, the general relativistic effects (over a few orbits as discussed here) can be ignored.

Table 1: All bursts detected so far from the direction of FRB121102 <cit.>.

Calendar date              | MJD            | no. of bursts | ref    | interpretation
2012-11-02                 | 56233          | 1             | <cit.> | inside the belt (position-1 is 56233)
2012-12-09                 | 56270          | 0             | <cit.> | out of the belt
2015-05-09                 |                | 0             | <cit.> | out of the belt
2015-05-17                 | 57159          | 2             | <cit.> | inside the belt
2015-06-02                 | 57175          | 8             | <cit.> | still inside the belt (mid-point is MJD 57167; position-2 is 57169)
2015-11-13 to 2015-11-19   | 57339 to 57345 | 5             | <cit.> | again inside the belt (mid-point is MJD 57342; position-3 is 57343)
2015-12-09 to 2016-02-01   | 57365 to 57419 | 0             | <cit.> | out of the belt
2016-04-27 to 2016-07-27   |                | 0             | <cit.> | out of the belt
2016-08-23 to 2016-09-20   | 57623 to 57651 | 13 (9+4)      | <cit.> | inside the belt (mid-point is MJD 57637; position-4 is 57637)

§ DEMONSTRATION WITH FRB121102

Values of f at the different positions are needed to calculate v_b,ns (hence ℛ) and s. One can estimate f by solving Kepler's equations if, in addition to P_b and e, the time of the periastron passage is also known. I proceed with logical guesses for the periastron passage time, as discussed below.

As the interval (174 days) between position-1-position-a, position-b-position-c, and position-2-position-3 is shorter than that between position-a-position-b, position-c-position-2, and position-3-position-4 (294 days), it is natural to think that the neutron star passed through the periastron during the intervals of the first group. But the reverse is also possible if the asteroid belt cuts the orbit very close to the apastron. I call the first scenario `case-A' and the second scenario `case-B'.
In the next two subsections, I explore `case-A' and `case-B' in turn.

§.§ Case-A

I choose an arbitrary time, MJD 56320, between position-1 and position-a as the time of the periastron passage. I solve Kepler's equations for two very different values of e, one moderately high (0.5) and the other sufficiently low (0.001). I find that for e = 0.5, position-1, position-b, position-2, and position-4 are at f = 235.4°, and position-a, position-c, and position-3 are at f = 124.7°. For e = 0.001, position-1, position-b, position-2, and position-4 are at f = 292.97°, and position-a, position-c, and position-3 are at f = 67.10°. It is obvious that the values of f at the different positions would change if the time of the periastron passage were different; e.g., if the periastron passage was on MJD 56300 (still between position-1 and position-a), position-1, position-b, position-2, and position-4 would have f = 249.3°, and position-a, position-c, and position-3 would have f = 135.5° for e = 0.5. For e = 0.001, position-1, position-b, position-2, and position-4 would have f = 308.4°, and position-a, position-c, and position-3 would have f = 82.5°.

§.§ Case-B

Now I assume that one periastron passage of the neutron star was between position-a and position-b. For the purpose of demonstration, I choose MJD 56554 as the time of the periastron passage. For e = 0.5, position-1, position-b, position-2, and position-4 are at f = 152.31°, and position-a, position-c, and position-3 are at f = 207.73°. For e = 0.001, position-1, position-b, position-2, and position-4 are at f = 113.18°, and position-a, position-c, and position-3 are at f = 246.89°.

§.§ Case-A and Case-B together

Using Eqn. <ref>, I calculate v_ns(f_i) for each value of f = f_i (in intervals of 0.1°) when the neutron star is inside the belt, and then compute the weighted average as v_ns,avg = ∑_i w_i v_ns(f_i) / ∑_i w_i, where w_i = (1 - e)^2/(1 + e cos f_i)^2 is a weight factor corresponding to the relative duration the neutron star spends at a particular value of f_i <cit.>. This v_ns,avg is used while calculating ℛ for the different cases. I use the standard value for the number density of asteroids in the belt, i.e. N_a / (η_1 η_2 R_belt^3) = 1.5625 × 10^11 AU^-3 <cit.>. The resulting values of ℛ are consistent with the observed rate of ∼ 3 h^-1 for the wide range of parameters I chose (e, M_com, and the time of the periastron passage). This fact again supports the validity of the present model.

Table <ref> shows that the longest burst period was around position-4, during MJD 57623 to MJD 57651 (28 days), so the neutron star was inside the belt for at least this time-span. I estimate arc-lengths around position-4 for different choices of M_com, both for case-A and case-B. Table <ref> shows that the arc-length, i.e., the minimum extent of the belt along the path of the neutron star, varies between 6.7 and 37 million kilometers (0.04-0.25 AU).

Now I demonstrate the orbital geometry and the location of the neutron star in the orbit for sample cases in Fig. <ref>. In the left panel, locations on the orbit where the neutron star crosses the asteroid belt for the periastron passage on MJD 56320 (blue squares, case-A) and on MJD 56554 (brown diamonds, case-B) are shown for e = 0.5. The right panel shows the variation of f with time for case-A. The green lines (curved) are for e = 0.5, while the purple lines (almost straight) are for e = 0.001. Filled squares denote the epochs (and locations in the orbit) when bursts were detected.
Unfilled squares are the expected epochs of bursts in the past (position-a, position-b, and position-c; see section <ref>). The asterisks ("∗") mark the next two epochs (MJD 57811 and MJD 58105) when the neutron star will be inside the belt.

Table 2: Calculation of arc-lengths of the orbit around position-4, the average orbital speed of the neutron star in those arcs, and the rate of asteroid plunges for some sample cases. Canonical values for the mass and the radius of the neutron star have been used, i.e. M_ns = 1.4 M_⊙ and R_ns = 10 km.

periastron passage (MJD) | e     | true anomalies at MJD 57623 and MJD 57651 (f_1, f_2) | M_com (M_⊙) | a_R (km)  | a_ns (km) | s (km)    | v_b,ns,avg (km s^-1) | ℛ (h^-1)
56320 (case-A)           | 0.5   | (227.6°, 244.7°)                                     | 0.2         | 2.1 × 10^8 | 2.6 × 10^7 | 9.2 × 10^6 | 86.7                | 3.2
                         |       |                                                      | 0.6         | 2.2 × 10^8 | 6.7 × 10^7 | 2.4 × 10^7 | 60.3                | 4.5
                         |       |                                                      | 1.0         | 2.4 × 10^8 | 9.8 × 10^7 | 3.5 × 10^7 | 54.4                | 5.0
                         | 0.001 | (282.2°, 303.8°)                                     | 0.2         | 2.1 × 10^8 | 2.6 × 10^7 | 9.7 × 10^6 | 90.8                | 3.0
                         |       |                                                      | 0.6         | 2.2 × 10^8 | 6.7 × 10^7 | 2.5 × 10^7 | 63.1                | 4.3
                         |       |                                                      | 1.0         | 2.4 × 10^8 | 9.8 × 10^7 | 3.7 × 10^7 | 56.9                | 4.8
56554 (case-B)           | 0.5   | (147.0°, 157.3°)                                     | 0.2         | 2.1 × 10^8 | 2.6 × 10^7 | 6.7 × 10^6 | 63.3                | 4.3
                         |       |                                                      | 0.6         | 2.2 × 10^8 | 6.7 × 10^7 | 1.7 × 10^7 | 44.0                | 6.2
                         |       |                                                      | 1.0         | 2.4 × 10^8 | 9.8 × 10^7 | 2.6 × 10^7 | 39.7                | 6.9
                         | 0.001 | (102.4°, 123.9°)                                     | 0.2         | 2.1 × 10^8 | 2.6 × 10^7 | 9.7 × 10^6 | 90.68               | 3.0
                         |       |                                                      | 0.6         | 2.2 × 10^8 | 6.7 × 10^7 | 2.5 × 10^7 | 63.1                | 4.3
                         |       |                                                      | 1.0         | 2.4 × 10^8 | 9.8 × 10^7 | 3.7 × 10^7 | 56.9                | 4.8

§ DISCUSSIONS

Detection of bursts close to the predicted epochs, i.e. around 27-February-2017 and 18-December-2017, would be a stronger support of the present model. Ruling out via non-detection would better be done only after a number of successive failures, as I have already argued for the absence of bursts.

A future discovery of an FRB with only one active epoch of several bursts can be explained either by a very wide binary in which the neutron star has crossed the asteroid belt around its companion only once (after such burst searches were initiated) or by the passage of an isolated neutron star through an asteroid belt around another star. Non-repeating FRBs can occur under different circumstances, e.g., (i) the asteroid belt is around the neutron star itself and a chance fall of an asteroid from that belt onto the neutron star leads to a single burst, (ii) a neutron star grazes an asteroid belt around another star, or (iii) a binary neutron star grazes an asteroid belt around its companion (possible if the orbit of the neutron star and the asteroid belt are non-coplanar) and the orbit is so wide that only one such grazing event has occurred so far. Scenario (i) seems to be the most likely, as scenarios (ii) and (iii) are very restrictive - the neutron star must not pass through the middle of the belt; it must graze the belt so that it suffers only one impact during each grazing. Under scenario (i), the rate (10^3 - 10^4 sky^-1 day^-1) calculated by <cit.> holds valid. However, as the other two scenarios cannot be ruled out, we note that the first FRB (the Lorimer burst) occurred on MJD 52146, which was almost 10 years ago (5631 days ago if I choose the current date as 2017-01-24); and it is possible for a neutron star to have an orbital period larger than 5631 days - as an example, the orbital period of PSR J2032+4127 is 8578 days and the eccentricity is 0.93 <cit.>.

Thus, a diverse variety of FRBs can be explained with this model without changing the basic physics behind the generation of bursts, only by considering different configurations of the asteroid belt.
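As a reproducibility aside, the sketch below shows the kind of computation behind Table 2: a Newton-iteration solver for Kepler's equation that returns the true anomaly at a given epoch, and the dwell-time-weighted mean speed described in the previous section. The periastron epoch, period, and eccentricity in the example are the case-A sample values; speed_of_f stands for any function returning v_b,ns(f), such as the orbital-speed relation sketched earlier.

import numpy as np

def true_anomaly(t_mjd, t_peri_mjd, P_b_days, e, iters=60):
    """True anomaly (deg) at time t via Newton iteration on Kepler's equation."""
    M = 2.0 * np.pi * (((t_mjd - t_peri_mjd) / P_b_days) % 1.0)  # mean anomaly
    E = M
    for _ in range(iters):
        E = E - (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    f = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                         np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return np.degrees(f) % 360.0

def mean_speed(f1_deg, f2_deg, e, speed_of_f, n=20000):
    """Average of v over [f1, f2], weighted by the relative dwell time
    w_i = (1 - e)^2 / (1 + e cos f_i)^2."""
    f = np.linspace(f1_deg, f2_deg, n)
    w = (1.0 - e) ** 2 / (1.0 + e * np.cos(np.radians(f))) ** 2
    return np.sum(w * speed_of_f(f)) / np.sum(w)

# Case-A sample values: periastron at MJD 56320, P_b = 468 d, e = 0.5
print(true_anomaly(57623.0, 56320.0, 468.0, 0.5))   # ~227.6 deg
print(true_anomaly(57651.0, 56320.0, 468.0, 0.5))   # ~244.7 deg

The two printed values reproduce the case-A true anomalies quoted in Table 2.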
One of the non-repetitive FRBs, FRB 131104 has been recently associated with a bright gamma-ray transient <cit.>. Although soft-gamma ray emission is possible after the asteroid-neutron star impact <cit.>, the gamma-ray flux F_γ =Ė_ G / 4 π d_L^2 for d_L = 3.5 Gpc[http://www.astronomy.swin.edu.au/pulsar/frbcat/] is too low (∼ 10^-16  erg  s^-1 cm^-2) if the values of the parameters for the neutron star and the asteroids are as usual <cit.>. However, because of the uncertainties in both of the claimed association and the estimation of d_L (mainly due to the uncertainties in the models of the dispersion measure for both of the interstellar medium and the intergalactic medium), it is not yet possible to exclude the present model being the cause of this FRB. This model will remain valid even in the case of a future detection of a low dispersion measure FRB, i.e. an FRB in the Galaxy, as a binary neutron star with an asteroid belt around the companion or a neutron star having an asteroid belt around itself can very well exist in the Galaxy.The author thanks the anonymous reviewers for useful comments which improved the manuscript. [Bagchi, Lorimer, & Wolfe(2013)]blw13 Bagchi, M., Lorimer, D. R., Wolfe, S., 2013, MNRAS, 432, 1303. [Chatterjee et al.(2017)]clw17 Chatterjee, S., Law, C. J., Wharton, R. S., et al., 2017, Nature, 541, 58.[Dai et al.(2016)]dwwh16 Dai, Z. G., Wang, J. S., Wu, X. F., Huang, Y. F., 2016, ApJ, 829, 27.[DeLaunay et al.(2016)]dfm16 DeLaunay, J. J., Fox, D. B., Murase, K., et al., 2016, ApJ, 832L, 1. [Falcke & Rezzolla(2014)]fr14 Falcke, H. & Rezzolla, L., 2014, A & A, 562A, 137.[Geng & Huang(2015)]gh15 Geng, J. J. and Huang, Y. F., 2015, ApJ, 809, 24.[Johnston et al.(1995)]jml95 Johnston, S., Manchester, R. N., Lyne, A. G., Kaspi, V. M., and D' Amico, N.,1995, A & A, 293, 795.[Kashiyama, Ioka, & Mészáros(2013)]kim13 Kashiyama, K., Ioka, K., and Mészáros, P., 2013, ApJ, 776L, 39.[Katz(2016)]katz16 Katz, J. I., 2016, ApJ, 826, 226.[Loeb et al.(2014)]lsm14 Loeb, A., Shvartzvald, Y., Maoz, D., 2014, MNRAS, 439L, 46. [Lorimer et al.(2007)]lorimer07 Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J., Crawford, F., 2007, Science, 318, 777.[Lyne et al.(2015)]lsk15 Lyne, A. G., Stappers, B. W., Keith, M. J., Ray, P. S., Kerr, M., Camilo, F., Johnson, T. J., 2015, MNRAS, 451, 581.[Marcote et al.(2017)]mph17 Marcote, B., Paragi, Z., Hessels, J. w. T., et al., 2017, ApJ, 834, L8.[Pen & Connor(2015)]pl15 Pen, U. L., Connor, L., 2015, ApJ, 807, 179.[Popov & Postnov(2013)]pp13 Popov, S. B., Postnov, K. A., arXiv:1307.4924.[Scholz et al.(2016)]Scsh16 Scholz, P., Spitler, L. G., Hessels, J. W. T., et al., 2016, ApJ, 833, 177.[Spitler et al.(2016)]Spsh16 Spitler, L. G., Scholz, P., Hessels, J. W. T., et al., 2016, Nature, 531, 202.[Stovall et al.(2014)]slr14 Stovall, K., Lynch, R. S., Ransom, S. M., et al., 2014, ApJ, 791, 67.[Tendulkar et al.(2017)]tbc17 Tendulkar, S., Bassa, C. G., Cordes, J. M., et al., 2017, ApJ, 834, L7.[Totani(2013)]tot13 Totani, T., 2013, PASJ, 65L 12. [Wang et al.(2016)]wyw16 Wang, J.S., Yang, Y.P., Wu, X. F., Dai, Z. G, Wang, F. Y., 2016, ApJ, 822L, 7.[Zhang(2016)]zhang16 Zhang, B., 2016, ApJ, 827L, 31.
http://arxiv.org/abs/1702.08876v3
{ "authors": [ "Manjari Bagchi" ], "categories": [ "astro-ph.HE", "astro-ph.SR" ], "primary_category": "astro-ph.HE", "published": "20170227044522", "title": "A Unified Model for Repeating and Non-repeating Fast Radio Bursts" }
CHAOS: A Parallelization Scheme for Training Convolutional Neural Networks on Intel Xeon Phi

André Viebke · Suejb Memeti · Sabri Pllana · Ajith Abraham

A. Viebke, S. Memeti, S. Pllana
Linnaeus University, Department of Computer Science, 351 95 Växjö, Sweden
A. Viebke: av22cj@student.lnu.se
S. Memeti: suejb.memeti@lnu.se
S. Pllana: sabri.pllana@lnu.se

A. Abraham
Machine Intelligence Research Labs (MIR Labs), 1, 3rd Street NW, P.O. Box 2259, Auburn, Washington 98071, USA
ajith.abraham@ieee.org

Deep learning is an important component of big-data analytic tools and intelligent applications, such as self-driving cars, computer vision, speech recognition, or precision medicine. However, the training process is computationally intensive, and often requires a large amount of time if performed sequentially. Modern parallel computing systems provide the capability to reduce the required training time of deep neural networks. In this paper, we present our parallelization scheme for training convolutional neural networks (CNN) named Controlled Hogwild with Arbitrary Order of Synchronization (CHAOS). Major features of CHAOS include the support for thread and vector parallelism, non-instant updates of weight parameters during back-propagation without a significant delay, and implicit synchronization in arbitrary order. CHAOS is tailored for parallel computing systems that are accelerated with the Intel Xeon Phi. We evaluate our parallelization approach empirically using measurement techniques and performance modeling for various numbers of threads and CNN architectures. Experimental results for the MNIST dataset of handwritten digits using the total number of threads on the Xeon Phi show speedups of up to 103× compared to the execution on one thread of the Xeon Phi, 14× compared to the sequential execution on Intel Xeon E5, and 58× compared to the sequential execution on Intel Core i5.

§ INTRODUCTION

Traditionally, engineers developed applications by specifying computer instructions that determined the application behavior. Nowadays, engineers focus on developing and implementing sophisticated deep learning models that can learn to solve complex problems. Moreover, deep learning algorithms <cit.> can learn from their own experience rather than that of the engineer. Many private and public organizations are collecting huge amounts of data that may contain useful information from which valuable knowledge may be derived. With the pervasiveness of the Internet of Things, the amount of available data is getting much larger <cit.>. Deep learning is a useful tool for analyzing and learning from massive amounts of data (also known as Big Data) that may be unlabeled and unstructured <cit.>. Deep learning algorithms can be found in many modern applications <cit.>, such as voice recognition, face recognition, autonomous cars, classification of liver diseases and breast cancer, computer vision, and social media.

A Convolutional Neural Network (CNN) is a variant of a Deep Neural Network (DNN) <cit.>. Inspired by the visual cortex of animals, CNNs are applied to state-of-the-art applications, including computer vision and speech recognition <cit.>.
However, supervised training of CNNs is computationally demanding and time consuming, and in many cases several weeks are required to complete a training session. Often applications are tested with different parameters, and each test requires a full session of training. Multi-core processors <cit.> and in particular many-core <cit.> processing architectures, such as the NVIDIA Graphical Processing Unit (GPU) <cit.> or the Intel Xeon Phi <cit.> co-processor, provide processing capabilities that may be used to significantly speed up the training of CNNs. While existing research <cit.> has addressed extensively the training of CNNs using GPUs, so far not much attention has been given to the Intel Xeon Phi co-processor. Besides the performance capabilities, the Xeon Phi deserves our attention because of programmability <cit.> and portability <cit.>.

In this paper, we present our parallelization scheme for training convolutional neural networks, named Controlled Hogwild with Arbitrary Order of Synchronization (CHAOS). CHAOS is tailored for the Intel Xeon Phi co-processor and exploits both thread- and SIMD-level parallelism. The thread-level parallelism is used to distribute the work across the available threads, whereas SIMD parallelism is used to compute the partial derivatives and weight gradients in the convolutional layers. Empirical evaluation of CHAOS is performed on an Intel Xeon Phi 7120 co-processor. For experimentation, we use various numbers of threads, different CNN architectures, and the MNIST dataset of handwritten digits <cit.>. Experimental evaluation results show that using the total number of available threads on the Intel Xeon Phi we can achieve speedups of up to 103× compared to the execution on one thread of the Xeon Phi, 14× compared to the sequential execution on Intel Xeon E5, and 58× compared to the sequential execution on Intel Core i5. The error rates of the parallel execution are comparable to those of the sequential one. Furthermore, we use performance prediction to study the performance behavior of our parallel solution for training CNNs for numbers of cores that go beyond the generation of the Intel Xeon Phi used in this paper.

The main contributions of this paper include:
* design and implementation of the CHAOS parallelization scheme for training CNNs on the Intel Xeon Phi,
* performance modeling of our parallel solution for training CNNs on the Intel Xeon Phi,
* measurement-based empirical evaluation of the CHAOS parallelization scheme,
* model-based performance evaluation for future architectures of the Intel Xeon Phi.

The rest of the paper is organized as follows. We discuss the related work in Section <ref>. Section <ref> provides background information on CNNs and the Intel Xeon Phi many-core architecture. Section <ref> discusses the design and implementation aspects of our parallelization scheme. The experimental evaluation of our approach is presented in Section <ref>. We summarize the paper in Section <ref>.

§ RELATED WORK

In comparison to the related work that targets GPUs, the work related to machine learning on the Intel Xeon Phi is sparse. In this section, we describe machine learning approaches that target the Intel Xeon Phi co-processor, and thereafter we discuss CNN solutions for GPUs and contrast them to our CHAOS implementation.

§.§ Machine Learning targeting Intel Xeon Phi

In this section, we discuss existing work for Support Vector Machines (SVMs), Restricted Boltzmann Machines (RBMs), sparse auto-encoders, and the Brain-State-in-a-Box (BSB) model.

You et al.
<cit.> present a library for parallel Support Vector Machines, MIC-SVM, which facilitates the use of SVMs on many- and multi-core architectures, including the Intel Xeon Phi. Experiments performed on several well-known datasets showed up to 84× speedup on the Intel Xeon Phi compared to the sequential execution of LIBSVM <cit.>. In comparison to their work, we target deep learning.

Jin et al. <cit.> perform the training of sparse auto-encoders and restricted Boltzmann machines on the Intel Xeon Phi 5110p. The authors reported a speedup factor of 7 - 10× compared to the Xeon E5620 CPU, and more than 300× compared to the un-optimized version executed on one thread of the co-processor. Their work targets unsupervised deep learning of restricted Boltzmann machines and sparse auto-encoders, whereas we target supervised deep learning of CNNs.

The performance gain on the Intel Xeon Phi 7110p for a model called Brain-State-in-a-Box (BSB), used for text recognition, is studied by Ahmed et al. in <cit.>. The authors report about a two-fold speedup for the co-processor compared to a CPU with 16 cores when parallelizing the algorithm. While both approaches target the Intel Xeon Phi, our work addresses the training of CNNs on the MNIST dataset.

§.§ Related Work Targeting CNNs

In this section, we discuss CNN solutions for GPUs in the context of computer vision (image classification). Work related to the MNIST <cit.> dataset is of most interest; NORB <cit.> and CIFAR-10 <cit.> are also considered. Additionally, work done in speech recognition and document processing is briefly addressed. We conclude this section by contrasting the presented related work with our CHAOS parallelization scheme.

The work presented by Cireşan et al. <cit.> targets a CNN implementation raising the bar for the CIFAR-10 (19.51% error rate), NORB (2.53% error rate), and MNIST (0.35% error rate) datasets. The training was performed on GPUs (Nvidia GTX 480 and GTX 580), where the authors managed to decrease the training time severely - up to 60× compared to sequential execution on a CPU - and decrease the error rates to an, at the time, state-of-the-art accuracy level.

Later, Cireşan et al. <cit.> presented their multi-column deep neural network for the classification of traffic signs. The results show that the model performed almost human-like (humans' error rate is about 0.20%) on the MNIST dataset, achieving a best error rate of 0.23%. The authors trained the network on a GPU.

Vrtanoski et al. <cit.> use OpenCL for the parallelization of the back-propagation algorithm for pattern recognition. They showed a significant cost reduction; a maximum speedup of 25.8× was achieved on an ATI 5870 GPU compared to a Xeon W3530 CPU when training the model on the MNIST dataset.

The ImageNet challenge aims to evaluate algorithms for large-scale object detection and image classification based on the ImageNet dataset. Krizhevsky et al. <cit.> joined the challenge and reduced the error rate on the test set to 15.3% from the second-best 26.2% using a CNN with 5 convolutional layers. For the experiments, two GPUs (Nvidia GTX 580) were used, communicating only in certain layers. The training lasted for 5 to 6 days. In a later challenge, ILSVRC 2014, a team from Google entered the competition with GoogleNet, a 22-layer deep CNN, and won the classification challenge with a 6.67% error rate. The training was carried out on CPUs. The authors state that the network could be trained on GPUs within a week, illuminating the limited amount of memory as one of the major concerns <cit.>.
Yadan et al. <cit.> used multiple GPUs to train CNNs on the ImageNet dataset using both data- and model-parallelism, i.e. either the input space is divided into mini-batches where each GPU trains on its own batch (data parallelism), or the GPUs train on one sample together (model parallelism). There is no direct comparison with the training time on a CPU; however, using 4 GPUs (Nvidia Titan) and model- and data-parallelism, the network was trained in 4.8 days.

Song et al. <cit.> constructed a CNN to recognize face expressions and developed a smart-phone app in which the user can capture a picture and send it to a server hosting the network. The network predicts a face expression and sends the result back to the user. With the help of GPUs (Nvidia Titan), the network was trained in a couple of hours on the ImageNet dataset.

Scherer et al. <cit.> accelerated large-scale neural networks with parallel GPUs. Experiments with the NORB dataset on an Nvidia GTX 285 GPU showed a maximal speedup of 115× compared to a CPU implementation (Core i7 940). After training the network for 360 epochs, an error rate of 8.6% was achieved.

Cireşan et al. <cit.> combined multiple CNNs to classify German traffic signs and achieved a 99.15% recognition rate (0.85% error rate). The training was performed using an Intel Core i7 and 4 GPUs (2 x GTX 480 and 2 x GTX 580).

More recently, Abadi et al. <cit.> presented TensorFlow, a system for expressing and executing machine learning algorithms, including the training of deep neural network models.

Researchers have also found CNNs successful for speech tasks. Large-vocabulary continuous speech recognition deals with the translation of continuous speech for languages with large vocabularies. Sainath et al. <cit.> investigated the advantages of CNNs performing speech recognition tasks and compared the results with previous DNN approaches. Results indicated a 12-14% relative improvement of word error rates compared to a DNN trained on GPUs.

Chellapilla et al. <cit.> investigated GPUs (Nvidia Geforce 7800 Ultra) for document processing on the MNIST dataset and achieved a 4.11× speedup compared to the sequential execution on an Intel Pentium 4 CPU running at a 2.5 GHz clock frequency.

In contrast to CHAOS, these studies target the training of CNNs using GPUs, whereas our approach addresses the training of CNNs on the MNIST dataset using the Intel Xeon Phi co-processor. While there are several review papers (such as <cit.>) and on-line articles (such as <cit.>) that compare existing frameworks for the parallelization of training CNN architectures, we focus on a detailed analysis of our proposed parallelization approach using measurement techniques and performance modeling. We compare the performance improvement achieved with the CHAOS parallelization scheme to the sequential version executed on the Intel Xeon Phi, the Intel Xeon E5, and the Intel Core i5 processor.

§ BACKGROUND

In this section, we first provide some background information related to neural networks, focusing on convolutional neural networks; thereafter, we provide some information about the architecture of the Intel Xeon Phi.

§.§ Neural Networks

A Convolutional Neural Network is a variant of a Deep Neural Network, which introduces two additional layer types: convolutional layers and pooling layers. The mammal visual processing system is hierarchical (deep) in nature. Higher-level features are abstractions of lower-level ones. For example, to understand speech, waveforms are translated through several layers until reaching a linguistic level.
A similar analogy can be drawn for images, where edges and corners are lower-level abstractions translated into more spatial patterns on higher levels. Moreover, it is also known that the animal cortex consists of both simple and complex cells firing on certain visual inputs in their receptive fields. Simple cells detect edge-like patterns, whereas complex cells are locally invariant, spanning larger receptive fields. These are the very fundamental properties of the animal brain inspiring DNNs and CNNs. In this section, we first describe DNNs and forward- and back-propagation; thereafter, we introduce CNNs.

§.§.§ Deep Neural Networks

The architecture of a DNN consists of multiple layers of neurons. Neurons are connected to each other through edges (weights). The network can simply be thought of as a weighted graph; a directed acyclic graph represents a feed-forward network. The depth and breadth of the network differ, as may the layer types. Regardless of the depth, a network has at least one input and one output layer. A neuron has a set of incoming weights, which have corresponding outgoing edges attached to neurons in the previous layer. Also, a bias term is used at each layer as an intercept term. The goal of the learning process is to adjust the network weights and find a global minimum by reducing the overall error, i.e. the deviation between the predicted and the desired outcome over all the samples. The resulting weight parameters can thereafter be used to make predictions on unseen inputs <cit.>.

§.§.§ Forward Propagation

DNNs can make predictions by forward propagating an input through the network. Forward propagation proceeds by performing calculations at each layer until reaching the output layer, which contains a vector representing the prediction. For example, in image classification problems, the output layer contains the prediction score that indicates the likelihood that an image belongs to a category <cit.>.

The forward propagation starts from a given input layer; then, at each layer, the activation of a neuron is computed using the equation y^l_i = σ(x^l_i) + I^l_i, where y^l_i is the output value of neuron i at layer l, x^l_i is the input value of the same neuron, and σ (sigmoid) is the activation function. I^l_i is used for the input layer, where there is no previous layer. The goal of the activation function is to return a normalized value (sigmoid returns values in [0,1]; tanh is used in cases where the desired return values are in [-1,1]). The input x^l_i can be calculated as x^l_i = ∑_j(w^l_ji y^l-1_j), where w^l_ji denotes the weight between neuron i in the current layer l and neuron j in the previous layer, and y^l-1_j denotes the output of the jth neuron at the previous layer. This process is repeated until reaching the output layer. At the output layer, it is common to apply a softmax function, or similar, to squash the output vector and hence derive the prediction.

§.§.§ Back-Propagation

Back-propagation is the process of propagating errors, i.e. the loss calculated as the deviation between the predicted and the desired output, backward in the network by adjusting the weights at each layer. The error and the partial derivatives δ^l_i are calculated at the output layer based on the predicted values from forward propagation and the labeled value (the correct value). At each layer, the relative error of each neuron is calculated and the weight parameters are updated based on how much the neuron participated in the faulty prediction. The equation ∂E/∂y^l_i = ∑_j w^l_ij ∂E/∂x^l+1_j denotes that the partial derivative of neuron i at the current layer l is the sum of the derivatives of the connected neurons at the next layer multiplied with the weights, assuming w^l denotes the weights between the maps. Additionally, a decay is commonly used to control the impact of the updates, which is omitted in the above calculations. More concretely, the algorithm can be thought of as updating the layer's weights based on "how much it was responsible for the errors in the output" <cit.>.
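To tie the forward- and back-propagation formulas of this section together, here is a minimal single-layer sketch. It is purely illustrative (NumPy, sigmoid activation, plain gradient descent on a squared loss); the implementation evaluated in this paper is written in C++.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(w, y_prev):
    """Forward step for one fully connected layer: x_i = sum_j w_ji * y_prev_j."""
    x = w.T @ y_prev
    return x, sigmoid(x)

def backward(w, y_prev, x, dE_dy, lr=0.01):
    """Back-propagate the partial derivatives and update the layer weights."""
    s = sigmoid(x)
    dE_dx = dE_dy * s * (1.0 - s)          # chain rule through the activation
    dE_dy_prev = w @ dE_dx                 # dE/dy_prev_j = sum_i w_ji * dE/dx_i
    w -= lr * np.outer(y_prev, dE_dx)      # weight gradient descent step
    return dE_dy_prev

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, (8, 4))           # toy layer: 8 inputs -> 4 neurons
y_prev = rng.uniform(0.0, 1.0, 8)
x, y = forward(w, y_prev)
backward(w, y_prev, x, dE_dy=y - np.ones(4))   # toy target vector of ones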
§.§.§ Convolutional Neural Networks

A Convolutional Neural Network is a multi-layer model constructed to learn various levels of representations, where higher-level representations are described based on the lower-level ones <cit.>. It is a variant of a deep neural network that introduces two new layer types: convolutional and pooling layers.

The convolutional layer consists of several feature maps where neurons in each map connect to a grid of neurons in maps in the previous layer through overlapping kernels. The kernels are tiled to cover the whole input space. The approach is inspired by the receptive fields of the mammal visual cortex. All neurons of a map extract the same features from a map in the previous layer, as they share the same set of weights.

Pooling layers intervene between convolutional layers and have been shown to lead to faster convergence. Each neuron in a pooling layer outputs the (maximum/average) value of a partition of neurons in the previous layer, and hence only activates if the underlying grid contains the sought feature. Besides lowering the computational load, this also enables position invariance and down-samples the input by a factor relative to the kernel size <cit.>.

Figure <ref> shows LeNet-5, which is an example of a Convolutional Neural Network. Each layer of convolution and pooling (a specific method of sub-sampling used in LeNet) comprises several feature maps. Neurons in a feature map cover different sub-fields of the neurons from the previous layer. All neurons in a map share the same weight parameters; therefore, they extract the same features from different parts of the input from the previous layers.

CNNs are commonly constructed similarly to LeNet-5, beginning with an input layer, followed by several convolutional/pooling combinations, and ending with a fully connected layer and an output layer <cit.>. Recent networks are much deeper and/or wider; for instance, GoogleNet <cit.> consists of 22 layers.

Various implementations target Convolutional Neural Networks, such as EbLearn at New York University and Caffe at Berkeley. As a basis for our work we selected a project developed by Cireşan <cit.>. This implementation targets the MNIST dataset of handwritten digits, and has the possibility to dynamically configure the definition of the layers, the activation function, and the connection types using a configuration file.

§.§ Parallel Systems accelerated with Intel®Xeon Phi™

Figure <ref> depicts an overview of the Intel Xeon Phi (codenamed Knights Corner) architecture. It is a many-core shared-memory co-processor, which runs a lightweight Linux operating system that offers the possibility to communicate with it over ssh. The Xeon Phi offers two programming models:
* offload - parts of the application running on the host are offloaded to the co-processor;
* native - the code is compiled specifically for running natively on the co-processor. The code and all the required libraries should be transferred to the device.

In this paper, we focus on the native mode.
The Intel Xeon Phi (type 7120P, used in this paper) comprises 61 x86 cores; each core runs at 1.2 GHz base frequency, and at up to 1.3 GHz max turbo frequency <cit.>. Each core can switch between four hardware threads in a round-robin manner, which amounts to a total of 244 threads per co-processor. Theoretically, the co-processor can deliver up to one teraFLOP/s of double-precision performance, or two teraFLOP/s of single-precision performance. Each core has its own L1 (32KB) and L2 (512KB) cache. The L2 cache is kept fully coherent by a global distributed tag directory (TD). The cores are connected through a bidirectional ring bus interconnect, which forms a unified shared L2 cache of 30.5MB. In addition to the cores, there are 16 memory channels that in theory offer a maximum memory bandwidth of 352GB/s. The GDDR memory controllers provide a direct interface to the GDDR5 memory, and the PCIe Client Logic provides a direct interface to the PCIe bus.

Efficient usage of the available vector processing units of the Intel Xeon Phi is essential to fully utilize the performance of the co-processor <cit.>. Through the 512-bit wide SIMD registers, it can perform 16 (16-wide × 32-bit) single-precision or 8 (8-wide × 64-bit) double-precision operations per cycle.

The performance capabilities of the Intel Xeon Phi are discussed and investigated empirically by different researchers within several application domains <cit.>.

§ OUR PARALLELIZATION SCHEME FOR TRAINING CONVOLUTIONAL NEURAL NETWORKS ON INTEL XEON PHI

The parallelism can be divided either data-wise, i.e. threads process several inputs concurrently, or model-wise, i.e. several threads share the computational burden of one input. Whether one approach is advantageous over the other mainly depends on the synchronization overhead of the weight vectors and how well it scales with the number of processing units.

In this section, we first discuss the design aspects of our parallelization scheme for training convolutional neural networks. Thereafter, we discuss the implementation aspects that allow full utilization of the Intel Xeon Phi co-processor.

§.§ Design Aspects

On-line stochastic gradient descent has the advantage of instant updates of the weights for each sample. However, the sequential nature of the algorithm yields impediments as the number of multi- and many-core platforms is growing. We consider different existing parallelization strategies for stochastic gradient descent:

Strategy A: Hybrid - uses both data- and model-parallelism, such that data parallelism is applied in convolutional layers, and model parallelism is applied in fully connected layers <cit.>.

Strategy B: Averaged Stochastic Gradient - divides the input into batches and feeds each batch to a node. This strategy proceeds as follows: (1) initialize the weights of the learner by randomization; (2) split the training data into n equal chunks and send them to the learners; (3) each learner processes the data and calculates the weight gradients for its batch; (4) send the calculated gradients back to the master; (5) the master computes and updates the new weights; and (6) the master sends the new weights to the nodes and a new iteration begins <cit.>. The convergence speed is slightly worse than for the sequential approach, but the training time is heavily reduced.

Strategy C: Delayed Stochastic Gradient - suggests updating the weight parameters in a round-robin fashion by the workers.
One solution is to split the samples by the number of threads and let each thread work on its own distinct chunk of samples, sharing only a common weight vector. Threads are only allowed to update the weight vector in a round-robin fashion, and hence each update will be delayed <cit.>.

Strategy D: HogWild! - is a stochastic gradient descent without locks. The approach is applicable to sparse optimization problems (thread/core updates do not conflict much) <cit.>.

In this paper, we introduce Controlled Hogwild with Arbitrary Order of Synchronization (CHAOS), a parallelization scheme that can exploit both thread- and SIMD-level parallelism available on the Intel Xeon Phi. CHAOS is a data-parallel controlled version of HogWild! with delayed updates, which combines parts of strategies A-D. The key aspects of CHAOS are:

* Thread parallelism - The overview of our parallelization scheme is depicted in Figure <ref>. Initially, as many network instances as there are available threads are created; the instances share the weight parameters, whereas some variables are private to each thread to support the concurrent processing of images. After the initialization of the CNNs and images is done, the process of training starts. The major steps of an epoch are: Training, Validation and Testing. In the first step, Training, each worker picks an image, forward propagates it through the network, calculates the error, and back-propagates the partial derivatives, adjusting the weight parameters. Since each worker picks a new image from the set, other workers do not have to wait for significantly slower workers. After Training, each worker participates in Validation and Testing, evaluating the prediction accuracy of the network by predicting images in the validation and test set, respectively. The adoption of data parallelism was inspired by Krizhevsky <cit.>, who promotes data parallelism for convolutional layers as they are computationally intensive.

* Controlled HogWild - During the back-propagation, the shared weights are updated after each layer's computations (a technique inspired by <cit.>), whereas the local weight parameters are updated instantly (a technique inspired by <cit.>); this means that the gradients are calculated locally first and then shared with the other workers. However, the update of the global gradients can be performed at any time, which means that there is no need to wait for other workers to finish their updates. This technique, which we refer to as non-instant updates of weight parameters without significant delay, allows us to avoid unnecessary cache line invalidation and memory writes.

* Arbitrary Order of Synchronization - There is no need for explicit synchronization, because all workers share the weight parameters. However, an implicit synchronization is performed in an arbitrary order, because writes are controlled by a first-come-first-served schedule and reads are performed on demand.

The main goal of CHAOS is to minimize the time spent in the convolutional layers, which can be done through data parallelism, adapting the knowledge presented in strategy A. In strategy B, the synchronization is performed in order to average the workers' gradient calculations. Since work is distributed, computations are performed on stale parameters. The strategy can be applied in distributed and non-distributed settings. The division of work over several distributed workers was adopted in CHAOS. In strategy C, the updates are postponed in a round-robin fashion where each thread gets to update when it is its turn.
The difference compared to strategy B is that the instances train on the same set of weights and no averaging is performed. The advantage is that all instances train on the same weights; the disadvantage is that the updates of the weight parameters are delayed and hence performed on stale data. Training on shared weights and delaying the updates are adopted in CHAOS. Strategy D presents a lock-free approach to updating the weight parameters: updates are performed instantly, without any locks. Our updates are not instant; however, after computing the gradients there is nothing prohibiting a worker from contributing to the shared weights. This notion of instant updates inspired CHAOS.

§.§ Implementation Aspects

The main goal is to utilize the many cores of the Intel Xeon Phi co-processor efficiently in order to lower the training time (execution time) of the selected CNN algorithm, while at the same time maintaining a low deviation in the error rates, especially on the test set. Moreover, the quality of the implementation is verified using the errors and error rates on the validation and test set.

In the sequential version, only minor modifications of the original version were made. Mainly, we added a Reporter class to serialize execution results. The instrumentation should not add any time penalties in practice. However, if such penalties occur in the sequential version, they are likely to imply corresponding penalties in the parallel version; therefore they should not impact the results.

The main goal of the parallel version is to lower the execution time of the sequential implementation and to scale well with the number of processing units on the co-processor. To facilitate this, it is essential to fully consider the characteristics of the underlying hardware. From the results of the sequential execution we found the hotspots of the application to be predominantly the convolutional layers. The time spent in forward- and back-propagation together is about 94% of the total time of all layers (up to 99% for the larger network), as depicted in Table <ref>.

In our proposed strategy, a set of N network instances is created and assigned to T threads. We assume T == N, i.e. one thread per network instance. T threads are spawned, each responsible for its own instance. The overview of the algorithm is shown in Fig. <ref>. In Fig. <ref> the training, testing and back-propagation phases are shown in detail. In training (see Fig. <ref>), a worker picks an image, forward propagates it, determines the loss and back-propagates the partial derivatives (deltas) through the network; this process is carried out simultaneously by all workers, each worker processing one image. Each worker participating in testing (see Fig. <ref>) picks an image, forward propagates it and then collects the errors and error rates. The results are accumulated over all threads. Perhaps the most interesting part is the back-propagation (see Fig. <ref>). The shared weights are used when propagating the deltas; however, before updating the weight gradients, the pointers are set to the local weights. Thereafter the algorithm proceeds by updating the local weights first. When a worker has contributions for the global weights, it can update them in a controlled manner, avoiding data races. Updates immediately affect other workers in their training process.
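The following is a minimal sketch of this local-then-shared update pattern, assuming one private gradient buffer per worker and per-element atomic updates; the names are hypothetical, and the actual implementation may organize the publication step differently.

#include <cstddef>
#include <vector>

// After back-propagating one layer, a worker first accumulates gradients in
// its private buffer (instant local update) and can then publish them to the
// shared weights at any time, without waiting for the other workers.
void publish_layer_gradients(std::vector<float>& shared_w,
                             std::vector<float>& local_grad,
                             float eta) {
    for (std::size_t j = 0; j < shared_w.size(); ++j) {
        // Controlled update: atomic per weight, so no global lock is held
        // and no worker blocks for a long period.
        #pragma omp atomic
        shared_w[j] -= eta * local_grad[j];
        local_grad[j] = 0.0f;  // reset the private accumulator
    }
}

Because the publication may happen at any point after a layer's gradients are ready, the workers synchronize implicitly and in arbitrary order, as described above.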
Hence the update is delayed slightly, to decrease the invalidation of cache lines, yet it is almost instant, and workers do not have to wait for a long period before contributing their knowledge.

To see why the delays are important, consider the following scenario. When several network instances are trained concurrently, they share the same weight vectors; the other variables are thread-private. The major consideration lies in the weight updates. Let W^j_l be the j-th weight on the l-th layer. In the current implementation, a weight is updated several times, since neurons in a map (on the same layer) share the same weights and the kernel is shifted over the neurons. Further assume that several threads work on the same weight W^j_l at some point in time. Even if other threads only read the weights, their local copies, as stored in the Level 2 cache, will be invalidated, and a re-fetch is required to assert their integrity. This happens because cache lines are shared between cores. The approach of slightly delaying the updates and forcing one thread to update atomically leads to fewer invalidations. Still, a major disadvantage is that the shared weights do not offer any data locality (the data cannot be retained completely in the Level 2 cache for a longer period).

To further decrease the time spent in the convolutional layers, loops were vectorized to exploit the vector processing unit of the co-processor. Data was allocated using _mm_malloc() with 64-byte alignment, improving the efficiency of memory requests. The vectorization was achieved by adding #pragma omp simd instructions and by explicitly informing the compiler of the memory alignment using __assume_aligned(). Some unnecessary overhead is added through the lack of data alignment of the deltas and weights. The computations of the partial derivatives and weight gradients in the convolutional layers are performed in a SIMD way, which allows efficient utilization of the 512-bit wide vector processing units of the Intel Xeon Phi. An extract from the vectorization report (see Listing <ref>) for the updates of the partial derivatives in the convolutional layer shows an estimated potential speedup of 3.98× compared to the scalar loop.

Listing <ref>: An extract from the vectorization report for the partial derivative updates in the convolutional layer.
remark #15475: — begin vector loop cost summary —
remark #15476: scalar loop cost: 30
remark #15477: vector loop cost: 7.500
remark #15478: estimated potential speedup: 3.980
remark #15479: lightweight vector operations: 6
remark #15480: medium-overhead vector operations: 1
remark #15481: heavy-overhead vector operations: 1
remark #15488: — end vector loop cost summary —

Further algorithmic optimizations were performed, for example: (1) the images are loaded into pre-allocated memory instead of allocating new memory when an image is requested; (2) hardware pre-fetching was applied to mitigate the shortcomings of the in-order execution scheme; pre-fetching loads data into the L2 cache to make it available for future computations; (3) letting workers pick images, instead of assigning images to workers, allows for a smaller overhead at the end of a work-sharing construct; (4) the number of locks is minimized as far as possible; (5) we made most of the variables thread-private to achieve data locality.

The training phase was distributed through thread parallelism, dividing the input space over the available workers.
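A sketch of this vectorization pattern is shown below; it combines the alignment hint and the SIMD pragma mentioned above, with hypothetical function and array names rather than the actual kernel of the implementation.

#include <immintrin.h>  // _mm_malloc / _mm_free

// Inner update loop written so that the compiler can vectorize it.
void update_deltas(float* deltas, const float* weights,
                   const float* next_deltas, int n) {
    __assume_aligned(deltas, 64);      // Intel compiler alignment hints
    __assume_aligned(weights, 64);
    __assume_aligned(next_deltas, 64);
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        deltas[i] += weights[i] * next_deltas[i];
}

// Buffers are allocated with 64-byte alignment, as described in the text:
//   float* deltas = static_cast<float*>(_mm_malloc(n * sizeof(float), 64));
//   ...
//   _mm_free(deltas);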
CHAOS uses the vector processing units to improve performance and tries to retain local variables in the local cache as far as possible. The delayed updates decrease the invalidation of cache lines. Since the weight parameters are shared among threads, data can possibly be fetched from another core's cache instead of from main memory, reducing the wait times. Also, the memory was aligned to 64 bytes, and unnecessary system calls were removed from the parallel work.

§ EVALUATION

In this section, we first describe the experimentation environment used for the evaluation of our CHAOS parallelization scheme. Thereafter, we describe the development of a performance model for CHAOS. Finally, we discuss the obtained results with respect to scalability, speedup, and prediction accuracy.

§.§ Experimental Setup

In this study, OpenMP was selected to facilitate the utilization of the thread- and SIMD-parallelism available in the Intel Xeon Phi co-processor. The C++ programming language is used for the algorithm implementation. The Intel Compiler 15.0.0 was used for the native compilation of the application for the co-processor, with the O3 optimization level.

System Configuration - To evaluate our approach we use an Intel Xeon Phi accelerator that comprises 61 cores running at 1.2 GHz. For the evaluation, 1, 15, 30, 60, 120, 180, 240, and 244 threads of the co-processor were used. Each thread was responsible for one network instance. For comparison, we use two general-purpose CPUs: the Intel Xeon E5-2695v2, which runs at a 2.4 GHz clock frequency, and the Intel Core i5 661, which runs at a 3.33 GHz clock frequency.

Data Set - To evaluate our approach, the MNIST <cit.> dataset of handwritten digits is used. In total the MNIST dataset comprises 70000 images, 60000 of which are used for training/validation and the rest for testing.

CNN Architectures - Three different CNN architectures were used for the evaluation: small, medium and large. The small and medium architectures were trained for 70 epochs, and the large one for 15 epochs, using a starting decay (eta) of 0.001 and a factor of 0.9. The small and medium networks consist of seven layers in total (one input layer, two convolutional layers, two max-pooling layers, one fully connected layer and the output layer). The difference between these two networks lies in the number of feature maps per layer and the number of neurons per map. For example, the first convolutional layer of the small network has five feature maps and 3380 neurons, whereas the first convolutional layer of the medium network has 20 feature maps and 13520 neurons. The large network differs from the small and the medium network in the number of layers as well: in total, there are nine layers, namely one input layer, three convolutional layers, three max-pooling layers, one fully connected layer and the output layer. Detailed information (including the number and the size of the feature maps, the neurons, the size of the kernels and the weights) about the considered architectures is listed in Table <ref>.

To address the variability in performance measurements we have repeated the execution of each parallel configuration three times.

§.§ Performance Model

A performance model <cit.> enables us to reason about the behavior of an implementation in future execution contexts. Our performance model for the CHAOS implementation can predict the performance for numbers of threads that go beyond the number of hardware threads supported by the Intel Xeon Phi model that we used for the evaluation.
Additionally, it can predict the performance of different CNN architectures with various numbers of images and epochs. The goal is to construct a parametrized model with the parameters ep, i, it and p, where ep stands for the number of epochs, i indicates the number of images in the training/validation set, it stands for the number of images in the test set, and p is the number of processing units. Table <ref> lists the full set of variables used in our performance model, some of which are hardware-dependent and some of which are independent of the underlying hardware. Each variable is either measured, calculated, a constant, or a parameter of the model. Listing <ref> shows the formula used for our performance prediction model. The total execution time (T) is the sum of the computation time (T_comp) and the time for memory operations (T_mem). T depends on several factors, including the speed and number of processing units, communication costs (such as network latency), and memory contention. T_comp is the sum of the sequential work, training, validation, and testing. Most interesting are the contentions causing wait times, including memory latencies and synchronization overhead. T_mem adds the memory and synchronization overheads. The contention is measured through an experimental approach, by executing a small script on the co-processor for different thread counts, weights and layers. We define T_mem(ep, i, p) = MemoryContention · ep · i / p, where MemoryContention is the memory contention measured when p threads compete for the I/O weights concurrently. Table <ref> depicts the measured and predicted memory contentions for the Intel Xeon Phi.

Our performance prediction model is not concerned with any practical measurements except for T_mem. Along with the CPI and the OperationFactor, it is possible to derive the (theoretical) number of instructions per cycle that each thread can perform. We use a different Prep for each CNN architecture (10^9, 10^10 and 10^11 for the small, medium and large architecture, respectively). The OperationFactor is adjusted to closely match the measured value for 15 threads and to mitigate the approximations made for the instruction counts in the first place, while at the same time accounting for vectorization. When one hardware thread is present per core, one instruction per cycle can be assumed. For 4 threads per core, only 0.5 instructions per cycle can be assumed, which means that each thread gets to execute two instructions every fourth cycle (a CPI of 2); hence we use the CPI factor to control the best theoretical number of instructions a thread can retire. The speed s is defined in Table <ref>. FProp and BProp are placeholders for the actual numbers of operations.

§.§ Results

In this section, we analyze the collected data with regard to the execution time and speedup for varying numbers of threads and CNN architectures. The errors and error rates (incorrect predictions) are used to validate our implementation. Furthermore, we discuss the deviation in the number of incorrectly predicted images.

The execution time is the total time the algorithm executes, excluding the time required to initialize the network instances and images (for both the sequential and the parallel version). The speedup is measured as the ratio between two execution times, with the sequential execution times on the Intel Xeon E5, Intel Core i5, and Xeon Phi as the base.
The error rate is the fraction of images the network was unable to predict, and the error is the cumulated loss from the loss function.

In the figures and tables in this section, we use the following notations: Par refers to the parallel version, Seq to the sequential version, and T denotes threads; e.g., Phi Par. 1 T is the parallel version with one thread on the Xeon Phi.

Result 1: The CHAOS parallelization scheme scales gracefully to large numbers of threads.

Figure <ref> depicts the total execution time of the parallel version of the implementation running on the Xeon Phi and of the sequential version running on the Xeon E5 CPU. We vary the number of threads on the Xeon Phi between 1, 15, 30, 60, 120, 180, 240, and 244, and the CNN architectures between small, medium and large. We elide the results of Xeon E5 Seq. and Phi Par. 1 T from the figure for simplicity and clarity. The large CNN architecture requires 31.1 hours to complete sequentially on the Xeon E5 CPU, whereas using one thread on the Xeon Phi requires 295.5 hours. By increasing the number of threads to 15, 30, and 60, the execution time decreases to 19.7, 9.9, and 5.0 hours, respectively. Using the total number of threads (that is, 244) on the Xeon Phi, the training may be completed in only 2.9 hours. We observe a promising scalability when increasing the number of threads. Similar results are observed for the small and medium architectures.

It should be considered that the selected CNN architectures were trained for different numbers of epochs, and that larger networks tend to produce better predictions (lower error rates). A fairer comparison is to compare the execution times until a specific error rate on the test set is reached. In Fig. <ref> the total execution times for the different CNN architectures and thread counts on the Xeon Phi are shown. We have set the stop criterion as an error rate ≤ 1.54%, which is the ending error rate on the test set for the small architecture. The large network executes for a longer period even though it converges in fewer epochs, and the medium network needs less time to reach an equal (or better) ending error rate than the small and large networks. Note that several other factors impact training, including the starting decay, the factor by which the decay is decreased, the dataset, the loss function, the preparation of images, and the initial weight values. Therefore, several combinations of parameters need to be tested before finding a balance. In this study, we focus on the number of epochs as the stop criterion and draw conclusions from this, considering the deviation of the errors and error rates.

Result 2: The total execution time is strongly influenced by the forward-propagation and back-propagation in the network. The convolutional layers are the most computationally expensive.

Table <ref> depicts the time spent per layer for the large CNN architecture. The results were gathered as the total time spent by all network instances on all layers together. Dividing the total time by the number of network instances and then by the number of epochs yields the number of seconds spent on each layer per network instance and epoch. A lower time spent on each layer per epoch and instance indicates a speedup. We observe that the large architecture spends almost all the time in the convolutional layers and almost no time in the other layers. For Phi Par. 240 T, about 88% is spent in the back-propagation of the convolutional layers and about 10% in forward propagation.
We have observed similar results for the small and medium CNN architectures; however, we elide these results for space. We have observed that the more threads are involved in training, the larger the percentage of the total time each thread spends in the back-propagation of the convolutional layers, and the less time in the others. Overall, the time spent per thread at each layer decreases when increasing the number of threads. Therefore, there is an interesting relationship between the layer times and the speedup of the algorithm. Table <ref> presents the speedup relative to Phi Par. 1 T for the different architectures on the convolutional layers. The times are collected by each network instance (through instrumentation of the forward- and back-propagate functions) and averaged over the number of network instances and epochs. As can be seen, in almost all cases there is an increase in speedup when increasing the network size; more importantly, the speedup does not decrease. Perhaps the most interesting phenomenon is that the speedup per layer has an almost direct relationship to the speedup of the algorithm, especially when compared to the back-propagation part. This emphasizes the importance of reducing the time spent in the convolutional layers.

Result 3: Using the CHAOS parallel implementation for training CNNs on the Intel Xeon Phi, we achieved speedups of up to 103×, 14×, and 58× compared to the single-thread performance on the Intel Xeon Phi, Intel Xeon E5 CPU, and Intel Core i5, respectively.

Figures <ref> and <ref> emphasize the facts shown in Fig. <ref> in terms of speedup. Figure <ref> depicts the speedup compared to the sequential execution on the Xeon E5 (Xeon E5 Seq.) for various numbers of threads and CNN architectures. As can be seen, adding more threads results in a speedup increase in all cases. Using 240 threads on the Xeon Phi yields a 13.26× speedup for the small CNN architecture. Utilizing the last core of the Xeon Phi, which is used by the OS, shows an even higher speedup (14.07×). We observe that doubling the number of threads from 15 to 30, and from 30 to 60, almost doubles the speedup (2.03, 4.03, and 7.78). Increasing the number of threads further results in a significant speedup, but the trend of doubling breaks.

Figure <ref> shows the speedup compared to the execution with one thread on the Xeon Phi (Phi Par. 1 T) while varying the number of threads and the CNN architectures. We observe that the speedup is close to linear for up to 60 threads for all CNN architectures. Increasing the number of threads further results in a significant speedup. Moreover, it can be seen that when keeping the number of threads fixed and increasing the architecture size, the speedup increases by a small factor as well, except for 244 threads. It seems that larger architectures are beneficial; however, it could also be the case that Phi Par. 1 T executes relatively slower than Xeon E5 Seq. for larger architectures than for smaller ones.

Figure <ref> shows the speedup compared to the sequential version executed on the Intel Core i5 (Core i5 Seq.) while varying the number of threads and the CNN architectures. We observe that using 15 threads we gain a 10× speedup. Doubling the number of threads to 30, and then to 60, results in a close-to-double speedup increase (19.8 and 38.3). With 120 threads (that is, two threads per core) the trend of doubling the speedup breaks (55.6×). Increasing the number of threads per core to three and four results in a modest speedup increase (62× and 65.3×).
Result 4: The image classification accuracy of the parallel implementation using CHAOS is comparable to that of the sequential one. The deviation in error and the number of incorrectly predicted images are not abundant.

We validate the implementation by comparing the errors and error rates for each epoch and configuration. Figure <ref> depicts the ending errors for the three considered CNN architectures for both the validation and the test set. The black dashed line delineates the base line (that is, a ratio of 1). Values below the line are considered better, whereas those above the line are worse than for Xeon E5 Seq. As the base line we use the Xeon E5; however, identical results are derived when executing the sequential version on any platform. As can be seen in Fig. <ref>, the largest difference is encountered by Phi Par. 244 T, about 22 units (0.05%) worse than the base line. On the contrary, Phi Par. 15 T has an error 9 units lower than the base line for the large test set. The validation sets are rather stable, whereas the test set fluctuates more heavily. Although the deviations in error should be treated with care, they are not abundant in this case. Please note that the diagram has a high zoom factor; hence the differences are magnified.

Table <ref> lists the number of incorrectly classified images for each CNN architecture. For each architecture, the total (Tot) number of images and the difference (Diff) compared to the optimal numbers of Xeon E5 Seq. are shown. Negative values indicate that the ending error rate was better than optimal (fewer images were incorrectly predicted), whereas positive values indicate that more images than for Xeon E5 Seq. were incorrectly predicted. For each column in the table, the best and worst values are annotated with underlined and bold fonts, respectively. No obvious pattern can be found; however, increasing the number of threads does not lead to worse predictions in general. Phi Par. 180 T stands out, as it was 17 images better than Xeon E5 Seq. for the small architecture on the validation set. Phi Par. 15 T also performs worst on the small architecture on the validation set. The overall worst performance is achieved by Phi Par. 120 T on the test set for the small CNN architecture. Please note that the total number of images is 60,000 in the validation set and 10,000 in the test set. Overall, the number of incorrectly predicted images and the deviation from the base line are not abundant.

Result 5: The predicted execution times obtained from the performance model match the measured execution times well.

Figures <ref>, <ref>, and <ref> depict the predicted and measured execution times for the small, medium and large CNN architecture. For the small network (see Fig. <ref>), the predictions are close to the measured values, with a slight deviation at the end. The prediction model seems to over-estimate the execution time by a small factor. For the medium architecture (see Fig. <ref>) the predictions follow the measured values closely, although the execution time is slightly underestimated. At 120 threads, the measured and predicted values start to deviate; the agreement is recovered at 240 threads. The large architecture yields results similar to those of the medium one. As can be seen, the measured values are slightly higher than the predictions; however, the predictions follow the measured values. For 120 threads there is a deviation, which is recovered at 240 threads.
Also, the predictions increase between 120 and 180, and between 180 and 240 threads, whereas the actual execution time decreases. This is most probably due to the CPI factor that is added when 3 or more threads are present on the same core. We use the expression x = (m - p) / p to calculate the deviation of the predictions of our model for all considered architectures, where m is the measured and p the predicted value. The average deviations over all measured thread counts are as follows: 14.57% for the small CNN, 14.76% for the medium, and 15.36% for the large CNN.

Result 6: Predictions of the execution time for numbers of threads that go beyond the 240 hardware threads of the model of Intel Xeon Phi used in this paper show that CHAOS scales well up to several thousands of threads.

We used the prediction model to predict the execution times for 480, 960, 1920, and 3840 threads for the different CNN architectures, using the same parameters. The results in Table <ref> show that if 3,840 threads were available, the small network should take about 4.6 minutes to train, the medium 14.5 minutes and the large 36.8 minutes. The predictions for the large CNN architecture are not as well aligned for larger thread counts as those for the small and medium ones. Additionally, we evaluated the execution time for varying image counts and epochs, for 240 and 480 threads, for the small CNN architecture. As can be seen in Table <ref>, doubling the number of images or epochs approximately doubles the execution time. However, doubling the number of threads does not reduce the execution time by half.

§ SUMMARY AND FUTURE WORK

Deep learning is important for many modern applications, such as voice recognition, face recognition, autonomous cars, precision medicine, and computer vision. We have presented CHAOS, a parallelization scheme to speed up the training process of Convolutional Neural Networks. CHAOS can exploit both the thread- and SIMD-parallelism of the Intel Xeon Phi co-processor. Moreover, we have described our performance prediction model, which we use to evaluate our parallelization solution and to infer the performance on future architectures of the Intel Xeon Phi. Major observations include:

* the CHAOS parallel implementation scales well with the increase of the number of threads;
* the convolutional layers are the most computationally expensive part of the CNN training effort; for instance, for 240 threads, 88% of the time is spent on the back-propagation of the convolutional layers;
* using CHAOS for training CNNs on the Intel Xeon Phi we achieved up to 103×, 14×, and 58× speedup compared to the single-thread performance on the Intel Xeon Phi, Intel Xeon E5 CPU, and Intel Core i5, respectively;
* the image classification accuracy of the CHAOS parallel implementation is comparable to that of the sequential one;
* the predicted execution times obtained from our performance model match the measured execution times well;
* the results of the performance model indicate that CHAOS scales well beyond the 240 hardware threads of the Intel Xeon Phi that is used in this paper for experimentation.

Future work will extend CHAOS to enable the use of all cores of the host CPUs and the co-processor(s).

References

tensorflow2015whitepaper Abadi, M., et al.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015). <http://tensorflow.org/>.
Software available from tensorflow.orgahmed2014accelerating Ahmed, K., Qiu, Q., Malani, P., Tamhankar, M.: Accelerating pattern matching in neuromorphic text recognition system using intel xeon phi coprocessor. In: Neural Networks (IJCNN), 2014 International Joint Conference on, pp. 4272–4279. IEEE (2014)ng2011ufldl Andrew, N., Jiquan, N., Chuan Yu, F., Yifan, M., Caroline, S.: Ufldl tutorial on neural networks. Ufldl Tutorial on Neural Networks(2011)bahrampour2015comparative Bahrampour, S., Ramakrishnan, N., Schott, L., Shah, M.: Comparative study of deep learning software frameworks. arXiv preprint arXiv:1511.06435(2015)benkner11 Benkner, S., Pllana, S., Traff, J., Tsigas, P., Dolinsky, U., Augonnet, C., Bachmayer, B., Kessler, C., Moloney, D., Osipov, V.: PEPPHER: Efficient and Productive Usage of Hybrid Computing Systems. Micro, IEEE 31(5), 28–41 (2011)chang2011libsvm Chang, C.C., Lin, C.J.: Libsvm: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST) 2(3), 27 (2011)chellapilla2006high Chellapilla, K., Puri, S., Simard, P.: High performance convolutional neural networks for document processing. In: Tenth International Workshop on Frontiers in Handwriting Recognition. Suvisoft (2006)chrysos2012intel Chrysos, G.: Intel® Xeon Phi™ Coprocessor-the Architecture. Intel Whitepaper(2012). <https://software.intel.com/en-us/articles/intel-xeon-phi-coprocessor-codename-knights-corner>dciresan Cireşan, D.: Simple C/C++ code for training and testing MLPs and CNNs. <http://people.idsia.ch/~ciresan/data/net.zip> (2017). [Online; accessed 14-February-2017]cirecsan2011committee Cireşan, D., Meier, U., Masci, J., Schmidhuber, J.: A committee of neural networks for traffic sign classification. In: Neural Networks (IJCNN), The 2011 International Joint Conference on, pp. 1918–1921. IEEE (2011)cirecsan2012multi Cireşan, D., Meier, U., Masci, J., Schmidhuber, J.: Multi-column deep neural network for traffic sign classification. Neural Networks 32, 333–338 (2012)cirecsan2011high Cireşan, D.C., Meier, U., Masci, J., Gambardella, L.M., Schmidhuber, J.: High-performance neural networks for visual object classification. arXiv preprint arXiv:1102.0183(2011)de2012parallelization De Grazia, M.D.F., Stoianov, I., Zorzi, M.: Parallelization of deep networks. In: ESANN (2012)110_deeplearning_lenet DeepLearning: Convolutional Neural Networks (LeNet) - DeepLearning 0.1 documentation. <http://deeplearning.net/tutorial/lenet.html> (2016). [Online; accessed 17-March-2016]deng2014deep Deng, L., Yu, D.: Deep learning: Methods and applications. Foundations and Trends in Signal Processing 7(3–4), 197–387 (2014)dokulil13 Dokulil, J., Bajrovic, E., Benkner, S., Pllana, S., Sandrieser, M., Bachmayer, B.: High-level support for hybrid parallel execution of c++ applications targeting intel® xeon phi™ coprocessors. Procedia Computer Science 18(0), 2508 – 2511 (2013). http://dx.doi.org/10.1016/j.procs.2013.05.430. 2013 International Conference on Computational Sciencefox2014 Fox, G.C., Jha, S., Qiu, J., Luckow, A.: Towards an understanding of facets and exemplars of big data applications. In: Proceedings of the 20 Years of Beowulf Workshop on Honor of Thomas Sterling's 65th Birthday, Beowulf '14, pp. 7–16. ACM, New York, NY, USA (2015). 10.1145/2737909.2737912. <http://doi.acm.org/10.1145/2737909.2737912>agibansky Gibansky, A.: Fully connected neural network algorithms. <http://andrew.gibiansky.com/blog/machine-learning/fully-connected-neural-networks/> (2016). 
[Online; accessed 21-March-2016]hadsell2009learning Hadsell, R., Sermanet, P., Ben, J., Erkan, A., Scoffier, M., Kavukcuoglu, K., Muller, U., LeCun, Y.: Learning long-range vision for autonomous off-road driving. Journal of Field Robotics 26(2), 120–144 (2009)Hsu2014 Hsu, C.H.: Editorial. Future Generation Computer Systems 36(Complete), 16–18 (2014). 10.1016/j.future.2014.02.003jiang2016 Jiang, P., Winkley, J., Zhao, C., Munnoch, R., Min, G., Yang, L.T.: An intelligent information forwarder for healthcare big data systems with distributed wearable sensors. IEEE Systems Journal 10(3), 1147–1159 (2016). 10.1109/JSYST.2014.2308324JinWGYH14 Jin, L., Wang, Z., Gu, R., Yuan, C., Huang, Y.: Training large scale deep neural networks on the intel xeon phi many-core coprocessor. In: IPDPS Workshops, pp. 1622–1630. IEEE Computer Society (2014)KesslerDTNRDBTP12 Kessler, C.W., Dastgeer, U., Thibault, S., Namyst, R., Richards, A., Dolinsky, U., Benkner, S., Träff, J.L., Pllana, S.: Programmability and performance portability aspects of heterogeneous multi-/manycore systems. In: 2012 Design, Automation Test in Europe Conference Exhibition (DATE), pp. 1403–1408. IEEE, Dresden, Germany (2012). 10.1109/DATE.2012.6176582krizhevsky2014one Krizhevsky, A.: One weird trick for parallelizing convolutional neural networks. CoRR abs/1404.5997 (2014)krizhevsky2009learning Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images (2009)krizhevsky2012imagenet Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp. 1097–1105 (2012)lecun2015nature LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015). <http://dx.doi.org/10.1038/nature14539>lecun1998gradient LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998)lecun2010mnist LeCun, Y., Cortes, C.: Mnist handwritten digit database. AT&T Labs [Online]. Available: http://yann. lecun. com/exdb/mnist (2010)lecun2004learning LeCun, Y., Huang, F.J., Bottou, L.: Learning methods for generic object recognition with invariance to pose and lighting. In: Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, vol. 2, pp. II–97. IEEE (2004)leung2013investigating Leung, K.C., Eyers, D., Tang, X., Mills, S., Huang, Z.: Investigating large-scale feature matching using the intel® xeon phi™ coprocessor. In: Image and Vision Computing New Zealand (IVCNZ), 2013 28th International Conference of, pp. 148–153. IEEE (2013)lu2013optimizing Lu, M., Zhang, L., Huynh, H.P., Ong, Z., Liang, Y., He, B., Goh, R.S.M., Huynh, R.: Optimizing the mapreduce framework on intel xeon phi coprocessor. In: Big Data, 2013 IEEE International Conference on, pp. 125–130. IEEE (2013)CPE4037 Memeti, S., Pllana, S.: Combinatorial optimization of dna sequence analysis on heterogeneous systems. Concurrency and Computation: Practice and Experience(2016). 10.1002/cpe.4037. <http://dx.doi.org/10.1002/cpe.4037>DNAxphi Memeti, S., Pllana, S.: A machine learning approach for accelerating dna sequence analysis. The International Journal of High Performance Computing Applications (2016). 10.1177/1094342016654214. <http://dx.doi.org/10.1177/1094342016654214>murphy2016review Murphy, J.: Deep Learning Frameworks: A Survey of TensorFlow, Torch, Theano, Caffe, Neon, and the IBM Machine Learning Stack. 
<https://www.microway.com/hpc-tech-tips/deep-learning-frameworks-survey-tensorflow-torch-theano-caffe-neon-ibm-machine-learning-stack/> (2016). [Online; accessed 17-February-2017]najafabadi2015 Najafabadi, M.M., Villanustre, F., Khoshgoftaar, T.M., Seliya, N., Wald, R., Muharemagic, E.: Deep learning applications and challenges in big data analytics. Journal of Big Data 2(1), 1 (2015). 10.1186/s40537-014-0007-7. <http://dx.doi.org/10.1186/s40537-014-0007-7>nvidia_gpu NVIDIA: WhatisGPU-AcceleratedComputing? <http://www.nvidia.com/object/what-is-gpu-computing.html> (2016). [Online; accessed 14-November-2016]pllana09 Pllana, S., Benkner, S., Mehofer, E., Natvig, L., Xhafa, F.: Towards an intelligent environment for programming multi-core computing systems. In: Euro-Par 2008 Workshops - Parallel Processing, Lecture Notes in Computer Science, vol. 5415, pp. 141–151. Springer Berlin Heidelberg (2009)perfmod Pllana, S., Benkner, S., Xhafa, F., Barolli, L.: Hybrid performance modeling and prediction of large-scale computing systems. In: 2008 International Conference on Complex, Intelligent and Software Intensive Systems, pp. 132–138 (2008). 10.1109/CISIS.2008.20recht2011hogwild Recht, B., Re, C., Wright, S., Niu, F.: Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In: Advances in Neural Information Processing Systems, pp. 693–701 (2011)sainath2015deep Sainath, T.N., Kingsbury, B., Saon, G., Soltau, H., Mohamed, A.r., Dahl, G., Ramabhadran, B.: Deep convolutional neural networks for large-scale speech tasks. Neural Networks 64, 39–48 (2015)scherer2010accelerating Scherer, D., Schulz, H., Behnke, S.: Accelerating large-scale convolutional neural networks with parallel graphics multiprocessors. In: Artificial Neural Networks–ICANN 2010, pp. 82–91. Springer (2010)schmidhuber2015deep Schmidhuber, J.: Deep learning in neural networks: An overview. Neural Networks 61, 85–117 (2015)sharma2014 Sharma, S., Tim, U.S., Wong, J., Gadia, S., Sharma, S.: A brief review on leading big data models. Data Science Journal 13, 138–157 (2014). 10.2481/dsj.14-041shi2016benchmarking Shi, S., Wang, Q., Xu, P., Chu, X.: Benchmarking state-of-the-art deep learning software tools. arXiv preprint arXiv:1608.07249(2016)song2014deep Song, I., Kim, H.J., Jeon, P.B.: Deep learning for real-time robust facial expression recognition on a smartphone. In: Consumer Electronics (ICCE), 2014 IEEE International Conference on, pp. 564–567. IEEE (2014)strawn2016 Strawn, G.: Data scientist. IT Professional 18(3), 55–57 (2016). doi.ieeecomputersociety.org/10.1109/MITP.2016.41szegedy2015going Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)tabik2017snapshot Tabik, S., Peralta, D., Herrera, A.H.P.F.: A snapshot of image pre-processing for convolutional neural networks: case study of mnist. International Journal of Computational Intelligence Systems 10, 555––568 (2017)taigman2014deepface Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: Deepface: Closing the gap to human-level performance in face verification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708 (2014)teodoro2013comparative Teodoro, G., Kurc, T., Kong, J., Cooper, L., Saltz, J.: Comparative performance analysis of intel xeon phi, gpu, and cpu. 
arXiv preprint arXiv:1311.0378(2013)TianSPGKMCP13 Tian, X., Saito, H., Preis, S., Garcia, E.N., Kozhukhov, S., Masten, M., Cherkasov, A.G., Panchenko, N.: Practical SIMD Vectorization Techniques for Intel® Xeon Phi Coprocessors. In: IPDPS Workshops, pp. 1149–1158. IEEE (2013)vrtanoski2012pattern Vrtanoski, J., Stojanovski, T.D.: Pattern recognition with opencl heterogeneous platform. In: Telecommunications Forum (TELFOR), 2012 20th, pp. 701–704. IEEE (2012)siri Washburn, A.: Siri Will Soon Understand You a Whole Lot Better | Wired. <http://www.wired.com/2014/06/siri_ai/> (2014). [Online; accessed 17-March-2016]Williams:2009 Williams, S., Waterman, A., Patterson, D.: Roofline: An insightful visual performance model for multicore architectures. Commun. ACM 52(4), 65–76 (2009). 10.1145/1498765.1498785. <http://doi.acm.org/10.1145/1498765.1498785>wu2014deep Wu, K., Chen, X., Ding, M.: Deep learning based classification of focal liver lesions with contrast-enhanced ultrasound. Optik-International Journal for Light and Electron Optics 125(15), 4057–4063 (2014)yadan2013multi Yadan, O., Adams, K., Taigman, Y., Ranzato, M.: Multi-gpu training of convnets. arXiv preprint arXiv:1312.5853 9 (2013)you2014mic You, Y., Song, S.L., Fu, H., Marquez, A., Dehnavi, M.M., Barker, K., Cameron, K.W., Randles, A.P., Yang, G.: Mic-svm: Designing a highly efficient support vector machine for advanced modern multi-core and many-core architectures. In: Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp. 809–818. IEEE (2014)zhang2012 Zhang, L., Wang, L., Wang, X., Liu, K., Abraham, A.: Research of Neural Network Classifier Based on FCM and PSO for Breast Cancer Classification, pp. 647–654. Springer Berlin Heidelberg, Berlin, Heidelberg (2012)ZinkevichSL09 Zinkevich, M., Smola, A.J., Langford, J.: Slow learners are fast. In: Y. Bengio, D. Schuurmans, J.D. Lafferty, C.K.I. Williams, A. Culotta (eds.) NIPS, pp. 2331–2339. Curran Associates, Inc. (2009)
§ INTRODUCTION

Hadronic showers produced by the interaction of hadrons in the absorber part of a sampling calorimeter like the CALICE SDHCAL prototype <cit.> often contain several track segments associated to charged particles. Some of these particles cross several active layers before being stopped or reacting inelastically, and others could even escape the calorimeter. High-granularity calorimeters provide an excellent tool to reconstruct such track segments. These segments could be used to monitor the active layers of the calorimeters in situ. They allow a better understanding of the response of the calorimeter and consequently a better estimation of the hadronic energy. In addition, they help in the comparison of the different hadronic shower models used in the simulation <cit.>. The capability of the calorimeter to reconstruct such tracks depends on its granularity. The higher the granularity, the lower the confusion between such track segments and the remaining parts of the hadronic shower. Still, even with a highly granular calorimeter, it is difficult to separate the track segments from the highly dense environment present in showers produced by high-energy hadrons. To achieve such a separation efficiently in the SDHCAL, we propose to use the Hough Transform (HT) method <cit.>, which was developed more than half a century ago to find tracks in a noisy environment.

In this paper we present the results obtained by applying the HT method to extract track segments in hadronic showers collected during the exposure of the SDHCAL prototype to hadron beams at the H2 CERN SPS beam line in 2012, and we show how these track segments can be used to study the response of the active medium of the calorimeter in situ and how they can help to better understand the hadronic shower structure in order to improve the estimation of its energy.

The paper is organized as follows: sect. 2 introduces the HT method and explains how the method is applied in the case of a dense environment such as that encountered in hadronic showers. In sect. 3 the use of the HT track segments as a tool to monitor the efficiency and pad multiplicity of the active layers of the hadronic calorimeter is presented. Their exploitation to improve the hadronic energy resolution and to separate electromagnetic from hadronic showers is also discussed. In sect. 4, we show how the method is applied to estimate the number of track segments, their length and their direction with respect to the incoming hadron, both in the data collected by the SDHCAL prototype and in events simulated using a few models. A comparison among these models and the data is presented.

§ HOUGH TRANSFORM TRACKING IN THE SDHCAL

§.§ Hough Transform method

The Hough Transform is a simple and reliable method that allows aligned points to be recognised among scattered points and the parameters of the straight line joining them to be reconstructed. The scope of this method is larger than what is presented here, but all the variants of this method are based on the same principle. To search for points located on a straight line in a plane (say (z,x)), the Cartesian coordinates of each point present in the plane are used to define a curve in the associated polar plane (θ, ρ) <cit.>:

ρ = z cosθ + x sinθ

Under this transformation, aligned points have their curves intersecting at one node (θ^0, ρ^0) of the polar plane.
The node's coordinates in the polar plane determine the angle of the straight line with respect to the ordinate axis of the (z,x) plane and the distance of the straight line to its origin point, as can be seen in figure <ref>. Curves associated to the other points also intersect with those of the aligned ones. In scenarios with a low density of points outside the track segments, these intersections scarcely coincide with (θ^0, ρ^0). Therefore, to find aligned points one should look for nodes in the (θ, ρ) plane. The number of intersecting curves in one node is an essential parameter to estimate the number of aligned points. In a dense environment the same method can be applied. However, one should take into account here the possibility that the points contributing to one node do not all necessarily belong to a given track.

The method described above cannot be applied as such to the hits (fired pads) left by particles in a pixellated detector. The spatial resolution of the hit positions in the detector and, more importantly, the Multiple Coulomb Scattering (MCS) of the associated charged particles require the method to be adapted accordingly. This can be achieved by discretising the (θ, ρ) plane into a 2D histogram. For this histogram, the bins are incremented each time they are crossed by a curve associated to one point in the (z,x) plane. The size of the bins reflects the expected resolution associated to the hits belonging to one particle and can be optimized to help finding not only straight tracks but also slightly curved ones. In this way the nodes of intersecting curves are replaced by bins whose content depends on the number of aligned hits. The Hough Transform technique was proposed and successfully used for muon tracking in high energy physics <cit.>. Here we propose to use it in calorimetry.

§.§ Hough Transform in a hadronic shower

The SDHCAL is made of 48 active layers interleaved with absorber plates, 2 cm thick each, made of stainless steel. Each active layer is segmented into 96 × 96 pickup pads of 1 cm × 1 cm. Two adjacent pads are separated by 0.406 mm. Hits left by charged particles in one layer are represented by fired pads. Each of the pads is read out thanks to an independent electronic channel providing a semi-digital information. Indeed, depending on the amount of charge induced in a pad, one, two or three thresholds can be crossed. The threshold values are fixed by software. They are chosen to indicate roughly whether the pad is crossed by one, a few or many charged particles, in order to improve the hadronic shower energy reconstruction, as explained in ref. <cit.>. A pad is fired if at least the lowest of the three thresholds is crossed. A small number of pads are fired by the passage of a single charged particle in an active layer. The average value of this number is called the pad multiplicity.

Hadronic showers are generally characterized by a dense core (the electromagnetic part) located in the centre and a less dense part (the hadronic one) in the periphery. To use the Hough Transform to find tracks within hadronic showers, one should avoid using hits located in the dense core, since one can artificially build many tracks from the numerous hits of the core. This can be achieved by keeping only hits that have a small number of neighbours in the same layer.

To apply the Hough Transform to the hadronic showers collected in the SDHCAL prototype, a system of coordinates is needed.
In this paper the z axis is chosen parallel to the beam, and x and y are the horizontal and vertical axes parallel to the prototype layers. In order to reduce the computing time and improve the efficiency, the Hough Transform method is not applied to the hits themselves but to the clusters of hits resulting from aggregating topologically neighbouring hits in the (x,y) plane of one active layer. These clusters are built recursively. The first cluster is built starting from the first hit of the collection of hits found in a given layer. Adjacent hits sharing an edge or a corner with this first hit are looked for and are added to the cluster. Hits adjacent to any of these hits are again added. The procedure is applied until no new adjacent hit is found. The hits belonging to this cluster are tagged and withdrawn from the hit list. The same procedure is applied to a new hit chosen among the remaining ones of the plane, and it is repeated until all the hits are gathered into clusters. The coordinates of a resulting cluster are those of the geometrical barycentre of its hits. Of all the clusters, only those with less than 5 hits are kept, since clusters belonging to track segments are expected to have at most 4 hits in the (x,y) plane. To eliminate the contribution of the shower's dense part, clusters are rejected if they have more than 2 neighbouring clusters in an area of 10 cm × 10 cm around them, or if one cluster with more than 5 hits is found in this area.

To look for 3D track segments, our method is based on applying the Hough Transform to the (z,x) coordinates of all the remaining clusters in order to find the candidate segments in the (z,x) plane. A second Hough Transform iteration is then applied to the (z,y) coordinates of the hits associated to each of the candidate segments found in the first step. This allows the elimination of hits accidentally aligned with the track segment's hits in one plane while reconstructing 3D track segments and finding their straight-line parameters, without going through the technically complicated 3D application of the Hough Transform, as explained in the following.

The clusters are used to fill a 2D histogram with 100 bins in the range 0 < θ < π and 150 bins in the range 0 < ρ < 150 cm. From the (z,x) coordinates of each cluster, the ρ value for each bin of θ is computed using eq. <ref> and an entry is made in the histogram. After filling the histogram, bins with more than 6 entries are considered. The choice of 6 entries is a compromise between finding tracks long enough to perform efficiency studies and, at the same time, keeping tracks associated to low-energy particles affected by multiple scattering as well as those produced by particles undergoing inelastic interactions. Depending on the histogram binning, the neighbouring bins of the one where the curves intersect may also be crossed by more than six of these curves and are thus also selected. To find the right bin, only the most populated among the adjacent bins is kept. To eliminate the scenario of clusters accidentally aligned in the 2D (z,x) plane but not in the 3D (z,x,y) space, the (z,y) coordinates of the clusters belonging to one selected bin are used to fill a second histogram (θ', ρ') with the same binning as the one used for the (θ, ρ) histogram. If any of the (θ', ρ') bins is found with more than 6 entries, then among all the previous clusters only those which contribute to this bin are kept.
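The accumulator step just described can be illustrated with the following simplified sketch (hypothetical names, not the actual SDHCAL code); the selection of the most populated bin among adjacent ones and the second (θ', ρ') pass are applied on top of it.

#include <cmath>
#include <vector>

struct Cluster { double z, x; };  // barycentre coordinates of a cluster

// Fill the discretized (theta, rho) accumulator and return, for every bin
// with more than 6 entries, the indices of the contributing clusters.
std::vector<std::vector<int>> houghCandidates(const std::vector<Cluster>& clusters) {
    const int nTheta = 100, nRho = 150;
    const double pi = 3.14159265358979323846, rhoMax = 150.0;  // cm
    std::vector<std::vector<int>> acc(nTheta * nRho);
    for (int ic = 0; ic < (int)clusters.size(); ++ic)
        for (int it = 0; it < nTheta; ++it) {
            double theta = (it + 0.5) * pi / nTheta;
            double rho = clusters[ic].z * std::cos(theta)
                       + clusters[ic].x * std::sin(theta);
            int ir = (int)(rho / rhoMax * nRho);
            if (ir >= 0 && ir < nRho)
                acc[it * nRho + ir].push_back(ic);  // one entry per theta bin
        }
    std::vector<std::vector<int>> candidates;
    for (const auto& bin : acc)
        if (bin.size() > 6) candidates.push_back(bin);  // track candidates
    return candidates;
}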
The clusters found in this way are then used to build a track segment whose parameters are determined from the four values θ, ρ, θ', ρ'. It is worth mentioning here that using the (z,y) plane followed by the (z,x) plane gives similar results. Finally, to eliminate tracks made of topologically uncorrelated clusters, each of the clusters associated to one selected bin is compared to the other clusters of the same bin. The cluster is kept if at least two other clusters from the track are found in the 3 adjacent layers located before and after that of the considered cluster. This requirement is fulfilled in 99% of the cases for genuine tracks in the SDHCAL.

§.§ Hough-Transform tracks within hadronic showers in the SDHCAL

The Hough Transform method described in the previous section is first applied to cosmic- and beam-muon events collected in the SDHCAL prototype during its exposure to particle beams at the CERN SPS in 2012. Most of the hits belonging to these events are found to be well selected, while noisy hits outside the particle path are not, as is shown in figure <ref>.

The Hough Transform method is then applied to events containing showers produced by negative pions. The auto-trigger mode used in operating the SDHCAL prototype leads to data containing all kinds of events. Electrons as well as cosmic and beam muons contaminating the hadronic events were rejected using topology-based criteria, as described in <cit.>. The noise contribution was also analyzed and found to be of the order of one hit per event, which has a negligible effect on the present study. The energies of the collected hadronic showers cover a range going from 10 to 80 GeV. Many of the track segments of the hadronic showers are associated with low-energy particles that undergo multiple Coulomb scattering but are still able to cross a few active layers. To select such tracks, one can either select Hough Transform histogram bins with a low number of clusters or increase the θ bin size of the histogram. The last option is technically simpler to apply and was therefore chosen. Although the algorithm used here is not optimized to reduce the CPU time, it is found that an average time of 0.16 ± 0.08 s is needed to analyze one hadronic shower of 80 GeV. As an illustration of the method, two event displays of hadronic showers at 30 and 80 GeV are shown in figure <ref> with the hits selected by the Hough Transform.

§ USE OF HOUGH TRANSFORM TRACKS IN THE SDHCAL

§.§ Calibration purposes and PFA applications

The tracks one can extract from hadronic showers play an important role in checking the behaviour of the active layers in situ, by studying the efficiency and multiplicity of the detector. To achieve such a study, high-quality Hough Transform segments are selected. Indeed, the aligned clusters are fitted to straight lines in both the (z,x) and (z,y) planes using a least-χ^2 technique. The variance (σ^2) used in the χ^2 definition here is the transverse cluster size (number of pads in the cluster along x or y) divided by 12. Only tracks with χ^2/NdF < 1 are chosen, with NdF = N_l - 2, where N_l is the number of layers containing the clusters used to fit a straight line with 2 unknown parameters. To study one layer, only the clusters belonging to the other layers are kept. The intercept of the straight line associated with the segment in the studied layer is determined. The efficiency of this layer is then estimated by looking for clusters within a 2 cm radius around the intercept.
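This per-layer check can be sketched as follows (a simplified illustration with hypothetical names):

#include <cmath>
#include <vector>

struct LayerCluster { double x, y; int nHits; };

// Returns true if the studied layer is efficient for the given track
// intercept (xTrack, yTrack); the multiplicity is taken from the closest
// cluster found within the 2 cm search radius.
bool layerEfficient(const std::vector<LayerCluster>& layerClusters,
                    double xTrack, double yTrack, int& multiplicity) {
    double best = 2.0;  // cm, search radius around the intercept
    bool found = false;
    multiplicity = 0;
    for (const auto& c : layerClusters) {
        double d = std::hypot(c.x - xTrack, c.y - yTrack);
        if (d <= best) { best = d; multiplicity = c.nHits; found = true; }
    }
    return found;
}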
To study one layer, only clusters belonging to the other layers are kept. The intercept of the straight line associated with the segment in the studied layer is determined. The efficiency of this layer is then estimated by looking for clusters within a 2 cm radius around the intercept. If at least one cluster is found, the multiplicity is then estimated by counting the number of hits associated to the closest cluster. The average efficiency and multiplicity per layer, obtained from the events collected in a 40 GeV pion run and from simulated pion events of the same energy, are shown in figure <ref>. These results are generally consistent with what was observed in ref. <cit.>, where those two variables were estimated with beam muons. The slight difference, systematically positive (at the level of a few percent), compared with the results obtained with beam muons is related to the fact that, contrary to the latter, the track segments in the hadronic shower are not necessarily perpendicular to the SDHCAL layers. This angular difference results in slightly higher efficiency and pad multiplicity with respect to those observed in cosmic muons <cit.>. In addition to the use of track segments in hadronic showers to check the behaviour of the SDHCAL active layers, they could be useful in Particle Flow Algorithm (PFA) techniques <cit.>. Indeed, a by-product of this method is the possibility to tag the track segment associated to the incoming charged particle in the calorimeter, as can be seen in figure <ref>. This is an interesting feature that could be exploited in PFA techniques, where the contribution of charged hadrons is to be separated from that of neutral ones. If a tracker is placed in front of the calorimeter, the connection between the track segment in the calorimeter and that of the tracker leads to a better estimation of the charged-hadron energy by using its momentum, which is often more precisely measured in the tracker. Track segments could also be used to separate nearby hadronic showers by connecting clusters produced by hadronic interactions of secondary charged particles to the main one, as can be seen in figure <ref>. A successful association increases the probability to attach the clusters to the right particle, thus reducing the confusion between charged and neutral hadrons and preventing a possible energy double-counting. However, to quantify the real contribution of such track segments to improving the PFA performance, a detailed study is needed.

§.§ Hadronic shower energy reconstruction

Another potential advantage of the track segments is to improve the energy reconstruction. In the SDHCAL energy reconstruction method, each of the thresholds is given a different weight <cit.> to account for the number of tracks crossing one pad. The reconstructed energy is estimated as

E_reco = α N_1 + β N_2 + γ N_3

where N_1 corresponds to the number of hits which are above the first threshold and below the second, N_2 denotes the number of hits which are above both the first and the second but below the third threshold, and N_3 the number of hits that are above the third threshold. α, β and γ are parameterized as quadratic functions of the total number of hits (N_hit = N_1+N_2+N_3).
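Written out, the standard estimator reads as follows; the quadratic coefficients of α, β and γ are placeholders standing for the values obtained from the χ^2 optimization of the cited reference.

```python
def reconstructed_energy(n1, n2, n3, pa, pb, pc):
    """E_reco = alpha*N1 + beta*N2 + gamma*N3, with alpha, beta, gamma
    quadratic in the total number of hits N_hit = N1 + N2 + N3.

    pa, pb, pc: (c0, c1, c2) quadratic coefficients of alpha, beta, gamma."""
    n_hit = n1 + n2 + n3
    weight = lambda p: p[0] + p[1] * n_hit + p[2] * n_hit ** 2
    return weight(pa) * n1 + weight(pb) * n2 + weight(pc) * n3
```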
Tracks of low energy that stop inside the calorimeter may have hits passing the second or the third threshold, especially those located at the end of the segment[The high ionisation dE/dx at the end of a track produces more charge, so hits of the higher thresholds are often observed there.]. These hits of a single track segment may bias the energy estimation based on this method. Therefore, giving the same weight to all the hits belonging to these track segments could improve the energy reconstruction. To check this assumption, the same energy reconstruction procedure is applied to the hits other than those selected by the HT method, and a constant weight is assigned to the latter, as follows:

E^HT_reco = α' N'_1 + β' N'_2 + γ' N'_3 + c N_HT

where N_HT is the number of hits belonging to track segments selected by the HT method. N'_1, N'_2 and N'_3 are respectively N_1, N_2 and N_3 after subtracting the hits belonging to track segments. α', β' and γ' are new quadratic functions of the total number of hits (N_hit = N'_1+N'_2+N'_3+N_HT). A χ^2-like optimization procedure similar to the one described in ref. <cit.> is then performed to determine the nine parameters associated to the α', β' and γ' functions, as well as the parameter c. The evolution of α', β' and γ' as a function of N_hit, together with the constant c, is presented in figure <ref>. The energy of the hadronic events collected in the H2 test beam in 2012 is then estimated using the new formula of eq. <ref>. To estimate the energy resolution, the same recipe as in <cit.> is applied. First, a Gaussian is fitted over the full range of the distribution. Then, a second Gaussian is fitted only in the range of ±1.5σ around the mean value of the first fit. The σ of the second fit is used as the energy resolution R(E^HT_reco), and the new mean value as the reconstructed energy E^HT_reco. The relative energy resolution is thus given by the ratio R(E^HT_reco)/E^HT_reco.
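The two-step Gaussian fit used above to extract the reconstructed energy and the resolution can be sketched as follows; a simple unbinned moment fit stands in for the actual fitting procedure, so this is illustrative only.

```python
import numpy as np
from scipy.stats import norm

def energy_and_resolution(e_reco_samples):
    """First fit a Gaussian over the full distribution, then refit within
    +/- 1.5 sigma of the first mean; return (E_reco, R, R/E_reco)."""
    mu0, sig0 = norm.fit(e_reco_samples)                 # full-range fit
    sel = np.abs(e_reco_samples - mu0) < 1.5 * sig0      # +/- 1.5 sigma window
    mu, sig = norm.fit(e_reco_samples[sel])              # restricted fit
    return mu, sig, sig / mu
```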
Results are then compared with those obtained in ref. <cit.>. The reconstructed energy obtained using the two methods as a function of the beam energy is shown in figure <ref> (top). The relative difference of the two is also shown in figure <ref> (bottom). Good linearity is obtained with both methods. Figure <ref> shows the energy resolution obtained with the two methods, as well as their relative difference. At energies higher than 40 GeV, where the second and third thresholds play an important role as explained in ref. <cit.>, assigning the same weight to hits of track segments independently of their threshold improves the energy resolution by a few percent, though it makes the linearity slightly worse. In addition, the higher the energy, the larger the number of track segments produced in hadronic showers, as will be shown in section <ref>, which explains why the improvement is enhanced with the energy. Finally, the second and third threshold hits of the track segments represent on average about 5 ‰ of the total number of hits, and their contribution to the energy reconstruction based on eq. <ref> is a few times higher than the one they have when using eq. <ref>, as can be seen in figure <ref>; this could explain the relative improvement of a few percent observed when applying the new method. Statistical and systematic uncertainties are included in the results shown in the previous figures. The sources of systematic uncertainty included in this study are the same as those detailed in ref. <cit.>. These sources are the ones related to the method used to estimate the energy resolution, the hadronic event selection criteria, the effect of the beam intensity and the uncertainty on the beam energy. At low energy, the systematic uncertainties related to the measurement of the resolution and to the event selection are of the same order and dominate, while at high energy, the one due to the beam intensity correction represents about half of the total uncertainty. The uncertainty on the beam energy was found to be negligible in all cases.

§.§ Electromagnetic and hadronic shower separation

The same HT track reconstruction is applied to events collected with electron beams in the 10 to 50 GeV energy range, using a special filter during the 2012 SDHCAL beam test as explained in ref. <cit.>. The absence of such tracks in the case of electrons, as expected for electromagnetic showers, compared with pions as shown in figure <ref>, demonstrates the low probability of the HT method to introduce fake tracks. It is worth noting here that this difference in the number of track segments between electromagnetic and hadronic showers can also be used to discriminate the two species for particle identification purposes[A future paper will be dedicated to electron and hadron separation in the SDHCAL.]. Requiring at least one track segment indeed rejects 99% (97%) of electromagnetic showers while keeping more than 95% (99%) of the hadronic showers at 10 (50) GeV, respectively.

§ TRACKS IN HADRONIC SHOWER MODELS

The track segments produced in showers collected in the SDHCAL prototype can be used as a tool to compare the different hadronic shower models used in the simulation. The number of tracks and their characteristics are related to those of charged particles (pions, kaons and protons) produced in the hadronic shower with an energy sufficient to cross a few absorber layers. In the absence of high-granularity calorimeters, such variables could not easily be used to tune the phenomenological models. Studying these segments in the SDHCAL thus constitutes an unbiased tool to compare the different models with one another on the one hand, and with the data on the other hand. Events with different energies produced by pion interactions in the SDHCAL prototype were simulated using a few hadronic shower models within version 9.6p01 of the GEANT4 framework <cit.>. A digitizer <cit.> transforms the energy deposited by the particles crossing the active volumes into charges and induced signal in the neighbouring pads. In the case of a single charged particle crossing one pad of an active layer, the digitizer's parameters are tuned to reproduce the efficiency and the pad multiplicity observed with beam muons. Cosmic muons are also used for this purpose, to correctly simulate the track segments produced in hadronic showers at large angles with respect to the incoming hadron. The parameters are also optimized to reproduce the response of the GRPC to the passage of several charged particles in one pad by taking into account the charge-screening effect. To do so, only the electron data are used, in order to avoid biases when comparing the data with the simulation of the hadronic shower models. The same set of parameters is then used to simulate pion showers. Three phenomenological models are studied. The tracks obtained using the HT in simulated events with these three models are compared to each other and to data for different energies. The distributions of the total number of reconstructed tracks within 10, 40 and 70 GeV hadronic showers are shown in figure <ref>. The track length could be an interesting variable to compare simulation models with data. It is defined as the distance between the most upstream and the most downstream of the clusters belonging to a given HT-selected track segment. Figure <ref> shows the track length distribution of the data and the simulated events for the three energies. Another feature that may help to discriminate between the different hadronic models is the angular distribution of the track segments with respect to the incoming hadron.
To determine the angle ψ of the track segments with respect to the incoming hadron, the direction of the latter was determined using a linear fit of the barycentre coordinates of the clusters in the first ten layers. The angular distribution of the track segments found with the HT method for the same three energies is shown in figure <ref>. Finally, figure <ref> shows the average number of track segments (left), their average length (middle) and their average angle (right) with respect to the incoming pion, as a function of the beam energy, obtained in simulated events using each of the three models and in data events. The three models seem to reproduce fairly well the number of reconstructed track segments observed in data, with one model providing a better description at high energy but a worse one at 10 GeV. In the same way, one model features slightly longer track segments with respect to the other models when compared with the data at high energy. All three models fail to adequately describe the angular distribution of the track segments. These results are in agreement with those found in the study of track segments in the CALICE AHCAL prototype <cit.>, although here one model seems to describe the number of segments slightly better than another, while in ref. <cit.> the opposite is observed. This difference could be explained by the fact that version 9.4p02 is used in ref. <cit.>, while here we use a more recent version (9.6p01) of GEANT4. The use of different physics-list versions here and in ref. <cit.> should not in principle impact the comparison, since the difference between the two versions is essentially related to the treatment of neutrons, which is not relevant in this study. In the present comparison only statistical uncertainties are included. Some sources of systematic uncertainty, such as the ones related to the minimum number of clusters required to apply the HT selection and to the (θ, ρ) histogram binning, are studied and found to be negligible. Although a more detailed systematics study is needed, the low noise of the SDHCAL <cit.>[Only one noise hit is expected in a physics event in the SDHCAL prototype. This is to be compared with an average number of 200 (1500) hits in a 10 (80) GeV pion shower, respectively.] and the fact that the track efficiency per layer in data is well reproduced in the simulation <cit.> suggest that the contribution of the other systematic uncertainties will not modify the conclusion of this study. The impact of the difference in the track multiplicity, which varies slightly from one layer to another in data while it is almost constant in the simulation, is absorbed since clusters rather than hits are used here to build the track segments in a low-density environment.

§ CONCLUSION

The Hough Transform is a simple and powerful method for finding track segments within a noisy environment. A new technique using this method in hadronic showers is developed and successfully applied to events collected during the exposure of the CALICE SDHCAL to hadron beams. The advantages of using track segments obtained with this technique to calibrate the hadronic calorimeter in situ are shown. A slight improvement in the energy reconstruction is also obtained by giving the same weight to the hits belonging to track segments irrespective of their threshold. The same technique is also applied to simulated hadronic showers.
Comparison with data helps to better characterize the different hadronic shower models used in the simulation. One model seems to be slightly closer to the SDHCAL data than the other two. The extension of the method to hadronic showers in the presence of a magnetic field should complete this work and allow the technique to be used in highly granular calorimeters in future experiments.

§ ACKNOWLEDGEMENTS

We would like to thank the CERN-SPS staff for their availability and precious help during the beam test period. We would like to acknowledge the important support provided by the F.R.S.-FNRS, FWO (Belgium), CNRS and ANR (France), SEIDI and CPAN (Spain). This work was also supported by the Bundesministerium für Bildung und Forschung (BMBF), Germany; by the Deutsche Forschungsgemeinschaft (DFG), Germany; by the Helmholtz-Gemeinschaft (HGF), Germany; by the Alexander von Humboldt Stiftung (AvH), Germany; by the Korea-EU cooperation programme of the National Research Foundation of Korea, Grant Agreement 2014K1A3A7A03075053; and by the National Research Foundation of Korea.

6
Prototype G. Beaulieu et al., Conception and construction of a technological prototype of a high-granularity digital hadronic calorimeter, JINST 10 (2015) P10039; e-print: arXiv:1506.05316.
AHCALSegment CALICE Collaboration, Track segments in hadronic showers in a highly granular scintillator-steel hadron calorimeter, JINST 8 (2013) P09001.
HT-paper-1 P.V.C. Hough, Method and means for recognizing complex patterns, United States Patent no. 3,069,654 (18 December 1962).
HT-paper-2 R. O. Duda and P. E. Hart, Use of the Hough Transformation to detect lines and curves in pictures, Comm. ACM 15 (1972) 11-15.
HTT_1 I. Laktineh, Brick finding efficiency in muonic decay tau neutrino events, LYCEN-RI-2002-07.
HTT_2 L. Manhaes De Andrade Filho and J. Seixas, Combining Hough Transform and optimal filtering for efficient cosmic ray detection with a hadronic calorimeter, XII Advanced Computing and Analysis Techniques in Physics Research, PoS(ACAT08)095.
sdhcal-paper CALICE Collaboration, First results of the CALICE SDHCAL technological prototype, JINST 11 (2016) P04001; e-print: arXiv:1602.02276.
PFA_1 J. C. Brient and H. Videau, The calorimetry at the future e+e- linear collider, in Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001), ed. N. Graf, Snowmass, Colorado (30 Jun - 21 Jul 2001), pp. E3047; arXiv:hep-ex/0202004.
PFA_2 V. L. Morgunov, Calorimetry design with energy-flow concept (imaging detector for high-energy physics), prepared for the 10th International Conference on Calorimetry in High Energy Physics (CALOR 2002), Pasadena, California (25-30 Mar 2002).
PFA_3 M. A. Thomson, Particle flow calorimetry and the PandoraPFA algorithm, Nucl. Instrum. Meth. A611 (2009) 25.
g4-status A. Ribon et al., Status of GEANT4 hadronic physics for the simulation of LHC experiments at the start of the LHC physics program, CERN-LCGAPP-2010-02, CERN, Geneva (May 2010).
digitizer CALICE Collaboration, Resistive plate chamber digitization in a hadronic shower environment, JINST 11 (2016) P06014; arXiv:1604.04550.
http://arxiv.org/abs/1702.08082v2
{ "authors": [ "The CALICE Collaboration" ], "categories": [ "physics.ins-det", "hep-ex" ], "primary_category": "physics.ins-det", "published": "20170226204453", "title": "Tracking within Hadronic Showers in the CALICE SDHCAL prototype using a Hough Transform Technique" }
Hajós-like theorem for signed graphs Yingli KangPaderborn Institute for Advanced Studies in Computer Science and Engineering and Institute for Mathematics, Paderborn University, Warburger Str. 100, 33102 Paderborn, Germany; yingli@mail.upb.de; Fellow of the International Graduate School “Dynamic Intelligent Systems”.==========================================================================================================================================================================================================================================================================================================

The paper designs five graph operations and proves that every signed graph with chromatic number q can be obtained from all-positive complete graphs (K_q,+) by repeatedly applying these operations. This result gives a signed version of the Hajós theorem, emphasizing that all-positive complete graphs play the same role in the class of signed graphs as complete graphs play in the class of unsigned graphs.

§ INTRODUCTION

We consider a graph to be finite and simple, i.e., with no loops or multiple edges. Let G be a graph and σ: E(G)→{1,-1} be a mapping. The pair (G,σ) is called a signed graph. We say that G is the underlying graph of (G,σ) and σ is a signature of G. The sign of an edge e is the value of σ(e), and the sign product sp(H) of a subgraph H is defined as sp(H)=∏_e∈ E(H)σ(e). An edge is positive if it has positive sign; otherwise, the edge is negative. A signature σ is all-positive (resp., all-negative) if it has positive sign (resp., negative sign) on each edge. A graph G together with an all-positive signature is denoted by (G,+) and, similarly, (G,-) denotes a signed graph where the signature is all-negative. Throughout the paper, to be distinguished from “a signed graph” and from “a multigraph”, “a graph” is always understood as an unsigned simple graph.

Let (G,σ) be a signed graph. For v ∈ V(G), denote by E(v) the set of edges incident to v. A switching at a vertex v defines a signed graph (G,σ') with σ'(e) = -σ(e) if e ∈ E(v), and σ'(e) = σ(e) if e ∈ E(G)∖ E(v). Two signed graphs (G,σ) and (G,σ^*) are switch-equivalent (briefly, equivalent) if they can be obtained from each other by a sequence of switchings. We also say that σ and σ^* are equivalent signatures of G. A signed graph (G,σ) is balanced if each circuit contains an even number of negative edges; otherwise, (G,σ) is unbalanced. A signed graph (G,σ) is antibalanced if each circuit contains an even number of positive edges. It is well known (see e.g. <cit.>) that (G,σ) is balanced if and only if σ is switch-equivalent to an all-positive signature, and (G,σ) is antibalanced if and only if σ is switch-equivalent to an all-negative signature.

In the 1980s, Zaslavsky <cit.> initiated the study of vertex colorings of signed graphs. The natural constraints for a coloring c of a signed graph (G,σ) are that (1) c(v) ≠σ(e) c(w) for each edge e=vw, and (2) the colors can be inverted under switching, i.e., equivalent signed graphs have the same chromatic number. In order to guarantee these properties of a coloring, Zaslavsky <cit.> used 2k+1 “signed colors” from the color set {-k, …, 0, …, k} and studied the interplay between colorings and zero-free colorings through the chromatic polynomial. Recently, Máčajová, Raspaud and Škoviera <cit.> modified this approach. For n = 2k+1 let M_n = {0, ± 1, …, ± k}, and for n = 2k let M_n = {± 1, …, ± k}. A mapping c from V(G) to M_n is a signed n-coloring of (G,σ) if c(v) ≠σ(e) c(w) for each edge e=vw. They defined χ_±((G,σ)) to be the smallest number n such that (G,σ) has a signed n-coloring, and called it the signed chromatic number.
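As a small illustration of these definitions, the sketch below checks whether an assignment is a signed n-coloring; representing the signature as a dictionary from edge tuples to ±1 is our own choice of data structure, not one from the paper.

```python
def color_set(n):
    """M_n = {0, +-1, ..., +-k} for n = 2k+1 and {+-1, ..., +-k} for n = 2k."""
    k = n // 2
    return {c for c in range(-k, k + 1) if c != 0 or n % 2 == 1}

def is_signed_n_coloring(c, sigma, n):
    """c: vertex -> color; sigma: (v, w) edge tuple -> +1 or -1.
    Valid iff every color lies in M_n and c(v) != sigma(e) * c(w)."""
    M = color_set(n)
    return (all(cv in M for cv in c.values())
            and all(c[v] != s * c[w] for (v, w), s in sigma.items()))
```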
A distinct version of vertex colorings of signed graphs, defined by homomorphisms of signed graphs, was proposed in <cit.>. In <cit.>, the authors studied circular colorings of signed graphs. The related integer k-coloring of a signed graph (G,σ) is defined as follows. Let ℤ_k denote the cyclic group of integers modulo k, and let the inverse of an element x be denoted by -x. A function c : V(G) →ℤ_k is a k-coloring of (G,σ) if c(v) ≠σ(e) c(w) for each edge e=vw. Clearly, such colorings satisfy the constraints (1) and (2) of a vertex coloring of signed graphs. The chromatic number χ((G,σ)) of a signed graph (G,σ) is the smallest k such that (G,σ) has a k-coloring. As shown in <cit.>, two equivalent signed graphs have the same chromatic number. In this paper, we follow this version of vertex colorings of signed graphs. Many questions concerning the colorings of a signed graph have been discussed. In <cit.> and <cit.>, the signed chromatic number χ_± of signed graphs is studied. The chromatic spectrum and the signed chromatic spectrum of signed graphs are given in <cit.>. A few classical results concerning the choice number of graphs are generalized to signed graphs in <cit.>. This paper addresses an analogue of a well-known theorem of Hajós for signed graphs.

In 1961, Hajós proved a result on the chromatic number of graphs, which is one of the classical results in the field of graph colorings. This result has several equivalent formulations, one of which can be stated as the following two theorems. The class of all graphs that are not q-colorable is closed under the following three operations: * Add vertices or edges; * Identify two nonadjacent vertices; * Let G_1 and G_2 be two vertex-disjoint graphs with a_1b_1∈ E(G_1) and a_2b_2∈ E(G_2). Make a graph G from G_1∪ G_2 by removing a_1b_1 and a_2b_2, identifying a_1 with a_2, and adding a new edge between b_1 and b_2 (see Figure <ref>). Operation (3) is known as the Hajós construction in the literature. Every non-q-colorable graph can be obtained by Operations (1)-(3) from the complete graph K_q+1.

The Hajós theorem has been generalized in several different ways, by considering more general colorings than vertex k-colorings of graphs. Analogues of the Hajós theorem have been proposed for list-colorings <cit.>, for weighted colorings <cit.>, and for group colorings <cit.>. However, all of these extensions are still restricted to unsigned graphs. In this paper, we analogously establish a result on the chromatic number χ of signed graphs, which generalizes the result of Hajós to signed graphs. Hence, this result is a signed version of the Hajós theorem and is called the Hajós-like theorem of signed graphs (briefly, the Hajós-like theorem). To prove this theorem, we consider signed multigraphs rather than signed simple graphs. Indeed, for vertex colorings of signed multigraphs, it suffices to consider signed bi-graphs, a subclass of signed multigraphs in which no two edges of the same sign lie between the same pair of vertices. Clearly, signed bi-graphs contain signed simple graphs as a particular subclass. Hence, the Hajós-like theorem holds for signed bi-graphs and, in particular, for signed graphs. Moreover, the theorem shows that, for the class of signed bi-graphs, the complete graphs together with an all-positive signature play the same role as they play for the class of unsigned graphs.
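The ℤ_k-coloring just adopted, and the switching operation it must respect, can be checked in the same illustrative representation (signature as a dictionary over edge tuples, an assumption of ours):

```python
def is_zk_coloring(c, sigma, k):
    """c: vertex -> Z_k. Valid iff c(v) != sigma(e) * c(w) (mod k)
    for every edge e = vw; -x is the additive inverse in Z_k."""
    return all(c[v] % k != (s * c[w]) % k for (v, w), s in sigma.items())

def switch(sigma, v):
    """Switching at vertex v: flip the sign of every edge incident to v."""
    return {e: (-s if v in e else s) for e, s in sigma.items()}
```

One can check on small examples that if c is a k-coloring of (G,σ), then negating c(v) modulo k yields a k-coloring of (G, switch(σ, v)), reflecting the switch-invariance of the chromatic number.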
The rest of the paper is organized as follows. In Section <ref>, we design five operations on signed bi-graphs and show that the class of non-q-colorable signed bi-graphs is closed under these operations for any given positive integer q. Moreover, we establish some lemmas necessary for the proof of the Hajós-like theorem. In Section <ref>, we present the proof of the Hajós-like theorem.

§ GRAPH OPERATIONS ON SIGNED BI-GRAPHS
§.§ Signed bi-graphs

A bi-graph is a multigraph having no loops and having at most two edges between any two distinct vertices. Let G be a bi-graph and let u and v be two distinct vertices of G. Denote by E(u,v) the set of edges connecting u to v, and let m(u,v)=|E(u,v)|. Clearly, 0 ≤ m(u,v)≤ 2. A bi-graph G is bi-complete if m(x,y)=2 for any two distinct x,y ∈ V(G), and is just-complete if m(x,y)=1 for any two distinct x,y ∈ V(G). A signed bi-graph (G,σ) is a bi-graph G together with a signature σ of G such that any two multiple edges have distinct signs. A bi-complete signed bi-graph of order n is denoted by (K_n,±). It is not hard to see that χ((K_n,±))=2n-2 and χ((K_n,+))=n. The concepts of k-coloring, chromatic number and switching of signed graphs extend naturally to signed bi-graphs, working in the same way, and the related notations are inherited.

Let (G,σ) be a signed multigraph. Between each pair of vertices, remove all but one of the multiple edges of the same sign. We thereby obtain a signed bi-graph (G',σ'). We can see that c is a k-coloring of (G,σ) if and only if c' is a k-coloring of (G',σ'), where c' is the restriction of c to (G',σ'). Therefore, for the vertex colorings of signed multigraphs, it suffices to consider signed bi-graphs.

§.§ Graph operations

Let k be a nonnegative integer. A signed bi-graph is k-thin if it is a bi-complete signed bi-graph minus at most k pairwise vertex-disjoint edges. Clearly, if a signed bi-graph is 0-thin, then it is bi-complete. The class of all signed bi-graphs that are not q-colorable is closed under the following operations: * Add vertices or signed edges. * Identify two nonadjacent vertices. * Let (G_1,σ_1) and (G_2,σ_2) be two vertex-disjoint signed bi-graphs. Let v be a vertex of G_1 and e be a positive edge of G_2 with ends x and y. Make a graph (G,σ) from (G_1,σ_1) and (G_2,σ_2) by splitting v into two new vertices v_1 and v_2, removing e, and identifying v_1 with x and v_2 with y (see Figure <ref>). * Switch at a vertex. * When q is even, remove a vertex that has at most q/2 neighbors; when q is odd, remove a negative edge whose ends are connected by no other edges, identify these two ends, and add signed edges so that the resulting signed bi-graph is (q-3)/2-thin.

Since Operations (sb1), (sb2), (sb4) neither create loops nor decrease the chromatic number, it follows that the class of non-q-colorable signed bi-graphs is closed under these operations. For Operation (sb3), suppose to the contrary that (G,σ) is q-colorable. Let c be a q-coloring of (G,σ). Denote by x' and y' the vertices of G obtained from x and y, respectively. If c(x')=c(y'), then the restriction of c to G_1, where v is assigned the same color as x' and y', gives a q-coloring of (G_1,σ_1), contradicting the fact that (G_1,σ_1) is not q-colorable. Hence, we may assume that c(x')≠ c(y'). Note that e is a positive edge of (G_2,σ_2).
Thus the restriction of c to G_2 gives a q-coloring of (G_2,σ_2), contradicting the fact that (G_2,σ_2) is not q-colorable. Therefore, the statement holds for Operation (sb3). It remains to verify the theorem for Operation (sb5). For q even, suppose to the contrary that the removal of a vertex u from a non-q-colorable signed bi-graph (G,σ) yields q-colorability. Let ϕ be a q-coloring of (G,σ)-u using colors from a set S, where S={0, ±1, …, ±(q/2-1), q/2}. Notice that each neighbor of u makes at most two colors unavailable for u. Since u has at most q/2 neighbors, S still has a color available for u. Hence, we can extend ϕ to a q-coloring of (G,σ), a contradiction.

For the case that q is odd, let (H,σ_H) be obtained from a non-q-colorable signed bi-graph (H',σ_H') by applying this operation to a negative edge e'. Let z be the vertex resulting from the two ends of e', say x' and y'. Suppose to the contrary that (H,σ_H) is q-colorable. Let ψ be a q-coloring of (H,σ_H) using colors from the set {0, ±1, …, ±(q-1)/2}. If ψ(z)≠ 0, then by assigning x' and y' the color ψ(z), we complete a q-coloring of (H',σ_H'), a contradiction. Hence, we may assume that ψ(z)=0. For 0≤ i≤ (q-1)/2, let V_i={v∈ V(H) : |ψ(v)|=i}. Clearly, each V_i is an antibalanced set and, in particular, V_0 is an independent set. Since (H,σ_H) is (q-3)/2-thin, we can deduce that there exists p∈{1,…,(q-1)/2} such that |V_p|=1. By exchanging the colors between V_0 and V_p and then assigning x' and y' the same color as z, we obtain a q-coloring of (H',σ_H') from ψ, a contradiction.

§.§ Useful lemmas

Operation (sb3) can be extended to the following one, which works for signed bi-graphs. * Let (G_1,σ_1) and (G_2,σ_2) be two vertex-disjoint signed bi-graphs. For each i∈{1,2}, let e_i be an edge of G_i with ends x_i and y_i. Make a graph (G,σ) from G_1∪ G_2 by removing e_1 and e_2, identifying x_1 with x_2, and adding a new edge e between y_1 and y_2 with σ(e)=σ_1(e_1)σ_2(e_2). Operation (sb3') is a combination of Operations (sb3) and (sb4). We use the notations of the statement of Operation (sb3'). First assume that at least one of e_1 and e_2 is a positive edge. Without loss of generality, say e_1 is positive. We apply Operation (sb3) to (G_1,σ_1) and (G_2,σ_2), where e_1 is removed, x_2 is split into two new vertices x_2' and x_2” with y_2 as the neighbor of x_2' and all other neighbors of x_2 as the neighbors of x_2”, and then x_2' is identified with y_1 and x_2” is identified with x_1. The resulting signed bi-graph is exactly (G,σ), and we are done. Hence, we may next assume that both e_1 and e_2 are negative edges. Switch at x_1 in (G_1,σ_1) and at x_2 in (G_2,σ_2). Since e_1 and e_2 are positive in the resulting signed bi-graphs, we may apply Operation (sb3) as above, obtaining a signed bi-graph which yields (G,σ) by switching again at x_1 and x_2.

A just-complete signed bi-graph is antibalanced if and only if the sign product on each triangle is -1, and is balanced if and only if the sign product on each triangle is 1. For the first statement, since a just-complete signed bi-graph (G,σ) is exactly a complete signed graph, G is antibalanced if and only if the sign product on each circuit of length k is (-1)^k. Hence, the proof of the necessity is trivial. Let us proceed to the sufficiency, which will be proved by induction on k. Clearly, the statement holds for k=3 by the assumption of the lemma. Assume that k≥ 4. Let C be a circuit of length k.
Take any chord e of C, which divides C into two circuits C_1 and C_2. For i∈{1,2}, let k_i denote the length of C_i. Thus, k=k_1+k_2-2. By applying the induction hypothesis, we have sp(C_i)=(-1)^k_i. It follows that sp(C)=sp(C_1)sp(C_2)=(-1)^k, so the statement also holds. The second statement can be argued in the same way as the first one. We only have to pay attention to the equivalence between (G,σ) being balanced and the sign product on each circuit being 1.

A signed bi-graph of order 3r is ▿-complete if it is (K_3r,±) minus r pairwise vertex-disjoint all-positive triangles. Clearly, a ▿-complete signed bi-graph is complete. The ▿-complete signed bi-graph of order 3r can be obtained from (K_2r+1,+) by Operations (sb1)-(sb5). Take r+1 copies of (K_2r+1,+), say (H_i,+) with vertex set {v_i^0,…,v_i^2r} for 0≤ i≤ r. For each j∈{1,…,r}, switch at v_0^j, then apply Operation (sb3') to H_0 and H_j so that v_0^j v_0^{j+r} and v_j^0 v_j^{2j} are removed and v_0^j is identified with v_j^0, and finally identify v_0^j with v_0^{j+r}. Denote the resulting signed bi-graph by (G,σ). By Theorem <ref>, since (K_2r+1,+) is not 2r-colorable, (G,σ) is not 2r-colorable either. Note that v_0^0 has precisely r neighbors in G. We can therefore apply Operation (sb5) to v_0^0, i.e., we remove v_0^0 from (G,σ). In the resulting signed bi-graph, for each 1≤ k≤ 2r, since v_1^k,…,v_r^k are pairwise nonadjacent, we can apply Operation (sb2) to identify them into one vertex. Denote by (H,σ_H) the resulting signed bi-graph. We can see that (H,σ_H) is of order 3r and moreover, for 1≤ j≤ r, the signed bi-graph induced by {v_0^j, v_1^{2j}, v_1^{2j-1}} is an unbalanced triangle. It follows that, by adding signed edges and switching if needed, we obtain the ▿-complete signed bi-graph of order 3r from (H,σ_H).

(K_r,±) can be obtained from (K_2r-2,+) by Operations (sb1)-(sb5). Let (G,σ) be a copy of (K_2r-2,+) with vertices v_1,…,v_2r-2. Clearly, (G,σ) is not (2r-3)-colorable. Switch at v_1 and apply Operation (sb5) to v_1v_2 so that each of v_3v_4, v_5v_6, …, v_2r-5v_2r-4 has no multiple edges. For each i∈{2,3,⋯,r-2}, switch at v_2i and apply Operation (sb5) to v_2i-1v_2i so that no new signed edges are added. The resulting signed bi-graph is exactly (K_r,±).

§ HAJÓS-LIKE THEOREM

We will need the following definitions for the proof of the Hajós-like theorem. Let (G,σ) be a signed bi-graph. An antibalanced set is a set of vertices that induces an antibalanced graph. Let c be a k-coloring of (G,σ). The set of all vertices v with the same value of |c(v)| is called a partite set of (G,σ). Let U and V be two partite sets. They are completely adjacent if m(u,v)≥ 1 for any u∈ U and v∈ V, bi-completely adjacent if m(u,v)=2 for any u∈ U and v∈ V, and just-completely adjacent if m(u,v)=1 for any u∈ U and v∈ V.

Let (G,σ) be a signed bi-graph. A sequence (x,y,z) of three vertices of G is a triple if there exist three integers a,b,c satisfying the following three conditions: * a,b,c∈{1,-1}, * ab=c, * a∉{σ(e) : e∈ E(x,y)}, b∉{σ(e) : e∈ E(x,z)}, and c∈{σ(e) : e∈ E(y,z)}. The sequence (a,b,c) is called a code of (x,y,z). Note that a triple may have more than one code.

(Hajós-like theorem) Every signed bi-graph with chromatic number q can be obtained from (K_q,+) by Operations (sb1)-(sb5). Let (G,σ) be a counterexample with minimum |V(G)| and, subject to that, with maximum |E(G)|. We first claim that (G,σ) is complete. Suppose to the contrary that G has two non-adjacent vertices x and y.
Let (G_1,σ_1) and (G_2,σ_2) be obtained from a copy of (G,σ) by identifying x with y into a new vertex v and by adding a positive edge e between x and y, respectively. Since (G,σ) has chromatic number q, it follows from Theorem <ref> that both (G_1,σ_1) and (G_2,σ_2) have chromatic number at least q. Note that (K_i,+) can be obtained from (K_j,+) by Operation (sb1) whenever i>j. Thus, by the minimality of |V(G)|, the graph (G_1,σ_1) can be obtained from (K_q,+) by Operations (sb1)-(sb5), and by the maximality of |E(G)|, so can (G_2,σ_2). We next show that (G,σ) can be obtained from (G_1,σ_1) and (G_2,σ_2) by Operations (sb2) and (sb3), which contradicts the fact that (G,σ) is a counterexample. This contradiction completes the proof of the claim. Apply Operation (sb3) to (G_1,σ_1) and (G_2,σ_2) so that e is removed and v is split into x and y. In the resulting graph, identify each pair of vertices that correspond to the same vertex of G except x and y; we thereby obtain exactly (G,σ).

We next claim that (G,σ) has no triples. The proof of this claim is analogous to the one above. Suppose to the contrary that (G,σ) has a triple, say (x,y,z). Let (a,b,c) be a code of (x,y,z). Take two copies of (G,σ). Add an edge e_1 with sign a into one copy between x and y, obtaining (G',σ'). Add an edge e_2 with sign b into the other copy between x and z, obtaining (G”,σ”). Clearly, both (G',σ') and (G”,σ”) have chromatic number at least q. By the maximality of |E(G)|, they can be obtained by Operations (sb1)-(sb5) from (K_q,+). To complete the proof of the claim, it remains to show that (G,σ) can be obtained from (G',σ') and (G”,σ”) by Operations (sb1)-(sb5). Note that Operation (sb3') is a combination of Operations (sb3) and (sb4) by Lemma <ref>. Apply Operation (sb3') to (G',σ') and (G”,σ”) so that e_1 and e_2 are removed, x' is identified with x”, and an edge e is added between y' and z”. We have σ(e)=σ(e_1)σ(e_2)=ab=c, and c∈{σ(f) : f∈ E(y,z)}. By applying Operation (sb2) to each pair of vertices that are copies of the same vertex of G except x, we obtain (G,σ).

We proceed with the proof of the theorem by distinguishing two cases according to the parity of q.

Case 1: q is odd. Since χ((G,σ))=q, the vertex set V(G) can be divided into k partite sets V_1,…,V_k, where k=(q+1)/2, so that V_1 is an independent set and all others are antibalanced sets but not independent. It follows that |V_i|≥ 2 for all i∈{2,…,k}. By the first claim, |V_1|=1, the graphs induced by V_2,…,V_k are just-complete and, moreover, every two partite sets are completely adjacent.

Subcase 1.1: every two partite sets are bi-completely adjacent. Take the vertex in V_1 and two arbitrary vertices from each of V_2,…,V_k. Let (H,σ_H) be the signed bi-graph induced by these vertices. Clearly, |V(H)|=q. By the first claim and the assumption, we can see that (H,σ_H) is a bi-complete signed bi-graph minus disjoint edges. Hence, (H,σ_H) can be obtained from (K_q,+) by switching at vertices and adding signed edges. Therefore, (G,σ) can be obtained from (K_q,+) by Operations (sb1) and (sb4), a contradiction.

Subcase 1.2: there exist two partite sets V_j and V_l that are not bi-completely adjacent. If V_j and V_l are not just-completely adjacent, then there always exist three vertices x,y,z, w.l.o.g. say x∈ V_j and y,z∈ V_l, such that m(x,y)=1 and m(x,z)=2. Note that m(y,z)=1. Thus, (y,x,z) is a triple of (G,σ), contradicting the second claim. Hence, we may assume that V_j and V_l are just-completely adjacent.
Recall that both V_j and V_l induce just-complete signed bi-graphs. Thus, V_j∪ V_l induces a just-complete signed bi-graph as well, say (Q,σ_Q). Again by the second claim, every triangle in (Q,σ_Q) has sign product -1. Thus, (Q,σ_Q) is antibalanced by Lemma <ref>. It follows that 1∈{j,l}, since otherwise the division of V(G) obtained from {V_1,…,V_k} by merging V_j with V_l yields χ((G,σ))≤ q-2, a contradiction. W.l.o.g., let j=1. We next show that every other pair of partite sets is bi-completely adjacent. Suppose to the contrary that there exist another two partite sets, say V_s and V_t, that are not bi-completely adjacent. By the same argument as above, 1∈{s,t}. We may assume that s=1. Let u_1,u_l,u_t be a vertex of V_1,V_l,V_t, respectively, such that m(u_1,u_l),m(u_1,u_t)≤ 1. Since V_l and V_t are bi-completely adjacent, we have m(u_l,u_t)=2. It follows that (u_1,u_l,u_t) is a triple of (G,σ), contradicting the second claim.

Recall that V_j∪ V_l is an antibalanced set. It follows that, except for V_j and V_l, every other partite set contains at least 3 vertices, since otherwise, say |V_r|=2 with r∉{j,l}, the division of V(G) obtained from {V_1,…,V_k} by merging V_j with V_l and splitting V_r into two independent sets yields χ((G,σ))≤ q-1, a contradiction. Take a vertex from V_j, two vertices from V_l and three vertices from each of the remaining partite sets. Denote by (H,σ_H) the signed bi-graph induced by these vertices. Clearly, |V(H)|=3(q-1)/2. By the multiplicity of the edges in (G,σ), we can see that (H,σ_H) is a ▿-complete signed bi-graph. By Lemma <ref>, (H,σ_H) can be obtained from (K_q,+) by Operations (sb1)-(sb5) and therefore, so can (G,σ), a contradiction.

Case 2: q is even. Since χ((G,σ))=q, the vertex set V(G) can be divided into k partite sets V_1,…,V_k, where k=(q+2)/2, so that at least two of them are independent sets, say V_1 and V_2.

Subcase 2.1: every two partite sets are bi-completely adjacent. Take a vertex from each partite set. Clearly, these vertices induce (K_{(q+2)/2},±). Hence, (G,σ) can be obtained from (K_{(q+2)/2},±) by Operation (sb1). By Lemma <ref>, (K_{(q+2)/2},±) can be obtained from (K_q,+) by Operations (sb1)-(sb5) and therefore, so can (G,σ), a contradiction.

Subcase 2.2: there exist two partite sets V_j and V_l that are not bi-completely adjacent. By a similar argument as in Subcase 1.2, we can deduce that {j,l}={1,2}, that every other pair of partite sets is bi-completely adjacent, and that |V_3|,…,|V_k|≥ 2. It follows from the first claim that V_1∪ V_2 induces two vertices with a negative edge between them. Take a vertex from each of V_1 and V_2, and two vertices from each of the remaining partite sets. We can see that the signed bi-graph induced by these vertices can be obtained from (K_q,+) by adding signed edges and switching at vertices. Therefore, (G,σ) can be obtained from (K_q,+) by Operations (sb1)-(sb5), a contradiction.

Every signed graph with chromatic number q can be obtained from (K_q,+) by Operations (sb1)-(sb5).

99
An_2010 X. An, B. Wu, Hajós-like theorem for group coloring, Discrete Math. Algorithm. Appl. 02(03) (2010) 433-436.
Araujo_2013 J. Araujo, C. L. Sales, A Hajós-like theorem for weighted coloring, J. Braz. Comput. Soc. 19 (2013) 275-278.
Gravier_1996 S. Gravier, A Hajós-like theorem for list coloring, Discrete Math. 152 (1996) 299-302.
Hajos_1961 G. Hajós, Über eine Konstruktion nicht n-färbbarer Graphen, Wiss. Z. Martin-Luther-Univ. Math.-Natur. Reihe 10 (1961) 116-117.
Steffen_2015 L. Jin, Y. Kang, E. Steffen, Choosability in signed planar graphs, European J. Combin. 52 (2016) 234-243.
KS_2015 Y. Kang, E. Steffen, Circular coloring of signed graphs, to appear in J. Graph Theory, arXiv:1509.04488.
Yingli_2015_00614 Y. Kang, E. Steffen, The chromatic spectrum of signed graphs, Discrete Math. 339 (2016) 2660-2663.
Raspaud_2014 E. Máčajová, A. Raspaud, M. Škoviera, The chromatic number of a signed graph, Electron. J. Combin. 23 (2016) #P1.14.
Edita_Sopena_2014 R. Naserasr, E. Rollová, É. Sopena, Homomorphisms of signed graphs, J. Graph Theory (2016), doi:10.1002/jgt.
Raspaud_2011 A. Raspaud, X. Zhu, Circular flow on signed graphs, J. Comb. Theory Ser. B 101 (2011) 464-479.
Stiebitz_2015 T. Schweser, M. Stiebitz, Degree choosable signed graphs, (2015) arXiv:1507.04569.
Zaslavsky_1982 T. Zaslavsky, Signed graph coloring, Discrete Math. 39 (1982) 215-228.
Zaslavsky_1984 T. Zaslavsky, How colorful the signed graph?, Discrete Math. 52 (1984) 279-284.
Zaslavsky_2013 T. Zaslavsky, Signed graphs and geometry, (2013) arXiv:1303.2770.
http://arxiv.org/abs/1702.08232v1
{ "authors": [ "Yingli Kang" ], "categories": [ "math.CO", "05C15, 05C22" ], "primary_category": "math.CO", "published": "20170227111200", "title": "Hajós-like theorem for signed graphs" }
Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA.

Compressive sampling has become a widely used approach to construct polynomial chaos surrogates when the number of available simulation samples is limited. Originally, these expensive simulation samples would be obtained at random locations in the parameter space. It was later shown that the choice of sample locations can significantly impact the accuracy of the resulting surrogates. This motivated new sampling strategies or design-of-experiment approaches, such as coherence-optimal sampling, which aim at improving the coherence property. In this paper, we propose a sampling strategy that can identify near-optimal sample locations that lead to an improvement in the local-coherence property and also an enhancement of the cross-correlation properties of measurement matrices. We provide theoretical motivations for the proposed sampling strategy, along with several numerical examples showing that our near-optimal sampling strategy produces substantially more accurate results compared to other sampling strategies.

A near-optimal sampling strategy for sparse recovery of polynomial chaos expansions
Hadi Meidani
December 30, 2023
=====================================================================================

§ INTRODUCTION

In order to facilitate stochastic computation in the analysis and design of complex systems, analytical surrogates that approximate and replace full-scale simulation models have been increasingly studied. One of the most widely adopted surrogates is the polynomial chaos expansion (PCE), which approximates the quantity of interest (QoI) by a spectral representation using polynomial functions of random parameters <cit.>. In estimating these spectral surrogates, non-intrusive stochastic techniques, based on either spectral projection or linear regression, are widely used, especially because they do not require modifying deterministic solvers or legacy codes, which is an otherwise cumbersome task <cit.>. These non-intrusive techniques are still the subject of ongoing research, as the number of samples required for accurate surrogate estimation grows rapidly with the number of random parameters, even when efficient techniques such as sparse grids are used <cit.>. More recently, researchers have developed techniques, based on compressive sampling (CS), that are particularly advantageous when surrogate expansions are expected to be sparse, i.e., when the QoI can be accurately represented with a few polynomial chaos (PC) basis functions.

Compressive sampling was first introduced in the field of signal processing to recover sparse signals using a number of samples significantly smaller than the conventionally used Shannon-Nyquist sampling rate <cit.>. Motivated by the fact that the solution of many high dimensional problems of interest, such as high dimensional PDEs, can be represented by sparse, or at least approximately sparse, PCEs, CS was proposed in <cit.> to estimate PC coefficients in underdetermined cases.
As CS theorems suggest, the success of sparse estimation of PCE depends not only upon the sparsity of the solution of the stochastic system, but also on the coherence property of the Vandermonde-like measurement matrix, formed by evaluations of orthogonal polynomials at sample locations <cit.>, as will be elaborated later. Several efforts have been made in order to improve these two conditions for successful recovery. For instance, in <cit.>, for Hermite expansions with Gaussian input variables, the original inputs are rotated such that a few of the new coordinates, i.e., linear combinations of the original inputs, have a significant impact on the QoI, thereby increasing the sparsity of the solution and, in turn, the accuracy of recovery. The second condition, i.e., the coherence of the measurement matrix, can be poor especially when trial expansions are high-order and/or high-dimensional. To remedy this, the iterative approaches in <cit.> can be used to optimally include only the "important" basis functions into the trial expansion and its associated measurement matrix. Focusing on this second condition, another class of methods has proposed sampling strategies that produce less coherent measurement matrices <cit.>. Among these approaches, the sampling strategy proposed in <cit.> was designed to be optimal in achieving the lowest local-coherence. In this work, we introduce a near-optimal sampling strategy by further improving on the local-coherence-based sampling of <cit.> and filtering sample locations based on the cross-correlation properties of the resulting measurement matrix. Specifically, we establish quantitative measures that capture these cross-correlation properties between measurement matrix columns, and use these measures as the criteria for near-optimal identification of sample locations. It will be demonstrated that a sampling strategy that seeks to optimize these measures leads to CS results that on average outperform those of all other CS sampling strategies.

This paper is organized as follows. Section <ref> presents general concepts in compressive sampling and its theoretical background. In Section <ref>, we introduce our sampling algorithm along with relevant theoretical support. Finally, Section <ref> includes numerical examples and discussions of the advantages of the proposed approach.

§ SETUP AND BACKGROUND
§.§ Polynomial chaos expansion

Let I_Ξ⊆ℝ^d be a tensor-product domain that is the support of Ξ, where Ξ=(Ξ_1, ..., Ξ_d) is the vector of independent random variables, i.e. Ξ_i ∈ I_Ξ_i and I_Ξ = ×_i=1^d I_Ξ_i. Also, let ρ_i: I_Ξ_i→ℝ^+ be the probability measure for variable Ξ_i and let ρ(Ξ)=∏_i=1^dρ_i(Ξ_i). Given this setting, the set of univariate orthonormal polynomials, {ψ_α,i}_α∈ℕ_0, satisfies

∫_I_Ξ_iψ_α,i(ξ_i) ψ_β,i(ξ_i) ρ_i(ξ_i) dξ_i = δ_αβ, α,β∈ℕ_0,

where ℕ_0 = ℕ∪{0} and δ_αβ is the Kronecker delta. Therefore, the density function of Ξ_i, ρ_i(Ξ_i), determines the type of polynomial. For example, Gaussian and uniform probability distributions enforce Hermite and Legendre polynomials, respectively. The d-dimensional orthonormal polynomials are then derived from the multiplication of one-dimensional polynomials in all dimensions. For example,

ψ_α(ξ) = ψ_α_1,1(ξ_1) ψ_α_2,2(ξ_2) ⋯
ψ_α_d,d(ξ_d), α=(α_1, α_2, ..., α_d).

Consequently, we have

∫_I_Ξψ_α(ξ) ψ_β(ξ) ρ(ξ) dξ = δ_αβ, α,β∈ℕ_0^d.

Using this construction, any square-integrable function u(Ξ): I_Ξ→ℝ can be represented as

u(Ξ) = ∑_α∈ℕ_0^d c_αψ_α(Ξ),

where {ψ_α}_α∈ℕ_0^d is the set of orthonormal basis functions satisfying Equation (<ref>). However, for computation's sake, u(Ξ) is approximated by a finite-order truncation of the PC expansion given by

u_k(Ξ) := ∑_α∈Λ_d,k c_αψ_α(Ξ),

where k is the total order of the polynomial expansion and Λ_d,k is the set of multi-indices defined as

Λ_d,k := {α∈ℕ_0^d : ‖α‖_1 ≤ k }.

The cardinality of Λ_d,k, i.e. the number of expansion terms, here denoted by K, is a function of d and k according to

K := |Λ_d,k| = (k+d)!/(k! d!).

Given this setting, u_k(Ξ) approximates u(Ξ) in a proper sense and is referred to as the k-th degree PC approximation of u(Ξ).
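For concreteness, the sketch below generates the total-degree index set Λ_d,k and evaluates the corresponding tensorized orthonormal Legendre basis (uniform inputs on [-1,1]^d) at a set of points, which is exactly how the measurement matrices discussed below are assembled; all names are illustrative and the brute-force enumeration is adequate only for modest d and k.

```python
import numpy as np
from itertools import product
from math import comb
from numpy.polynomial.legendre import legval

def total_degree_indices(d, k):
    """All alpha in N_0^d with ||alpha||_1 <= k; |set| = (k+d)!/(k!d!)."""
    indices = [a for a in product(range(k + 1), repeat=d) if sum(a) <= k]
    assert len(indices) == comb(k + d, d)
    return indices

def legendre_basis_matrix(xi, indices):
    """Psi[i, j] = psi_{alpha^j}(xi^(i)) for tensorized orthonormal
    Legendre polynomials on [-1, 1]^d (normalization sqrt(2n + 1))."""
    M, d = xi.shape
    Psi = np.ones((M, len(indices)))
    for j, alpha in enumerate(indices):
        for dim, n in enumerate(alpha):
            cn = np.zeros(n + 1); cn[-1] = 1.0      # select degree n
            Psi[:, j] *= np.sqrt(2 * n + 1) * legval(xi[:, dim], cn)
    return Psi
```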
Each of the K coefficients involved in the definition of u_k can be exactly calculated by projecting u(Ξ) onto the associated basis function:

c_α^j = 1/γ_α^j ∫_I_Ξ u(ξ) ψ_α^j(ξ) ρ(ξ) dξ, α^j ∈Λ_d,k = {α^1, ⋯, α^K }.

Numerical approaches such as quadrature rules or sparse grids are usually used to approximate the above integral, as its exact evaluation might be very cumbersome or not even possible <cit.>. However, as the dimensionality of the problem increases, the number of samples required by these numerical techniques for an accurate integral evaluation increases exponentially. Supplying such a large sample size is prohibitively costly, especially if these samples are drawn from expensive high-fidelity simulations or costly experiments. As a result, efforts have been made to develop adaptive approaches that reduce the number of required samples by placing fewer samples on dimensions that do not impact the QoI significantly <cit.>.

Another widely used approach to estimate PC coefficients is linear regression. In order to use linear regression, the number of samples must be equal to or greater than the number of PC expansion terms. The well-accepted oversampling rate is around 1.5 to 3 times the number of unknown coefficients. Recently, a quasi-optimal sampling approach was introduced in <cit.> that results in accurate regression with 𝒪(1) oversampling. As briefly mentioned in Section <ref>, when the QoI is sparse with respect to the PC basis functions, compressive sampling approaches have proved effective in estimating expansion coefficients using a number of samples that is significantly smaller than the number of PC terms. This section continues with a review of the basics of sparse PC recovery using compressive sampling.

§.§ Stochastic collocation using compressive sampling

Compressive sampling was first introduced in the field of signal processing, where conventionally the number of samples required to recover a signal was determined by the Shannon-Nyquist sampling rate <cit.>. Compressive sampling allows for successful signal recovery using significantly fewer samples when the signal of interest is sparse. Therefore, it has been extensively applied in cases where the number of available samples is limited <cit.>. Due to this very advantage, compressive sampling has recently gained substantial attention in UQ, more specifically in stochastic collocation, where a sample set is used to build an analytical surrogate model (typically in the form of a PC expansion) <cit.>. In what follows, a brief formal background on the use of compressive sampling for PCE estimation is provided.

The objective of PCE estimation is to calculate the vector of unknown expansion coefficients c = (c_α^1, ..., c_α^K)^T in Equation (<ref>), given M samples of the QoI, denoted by the data vector u = (u(ξ^(1)), ..., u(ξ^(M)))^T. These sampled outputs are evaluated at the M realizations, {ξ^(i)}_i=1^M, of the model input Ξ. Requiring u_k(Ξ) to approximate u(Ξ) results in the following system of equations,

Ψ c = u,

where Ψ is the measurement matrix, constructed according to Ψ = [ψ_ij], ψ_ij = ψ_α^j(ξ^(i)), 1⩽ i⩽ M, 1⩽ j⩽ K. We are interested in the underdetermined case where M⩽ K. Under this condition, there may exist infinitely many solutions for c. The compressive sampling approach can be readily used to find the sparsest solution, by formulating the sparse recovery problem as

min_c ‖c‖_0 subject to Ψ c = u,

where ‖·‖_0 indicates the ℓ_0-norm, i.e. the number of non-zero terms. However, since the ℓ_0-norm is non-convex and discontinuous, the above problem is NP-hard. Therefore, ℓ_0 minimization is usually replaced by its convex relaxation, where the ℓ_1-norm of the solution c is minimized instead, i.e.

min_c ‖c‖_1 subject to Ψ c = u.

ℓ_1 minimization is the closest convex problem to ℓ_0 minimization. It has been shown that when Ψ is sufficiently incoherent, as will be explained later, and c is sufficiently sparse, the solution of ℓ_0 minimization is unique and equal to the solution of ℓ_1 minimization <cit.>. The minimization in (<ref>) is named basis pursuit <cit.>, and it can be solved using linear programming. If the measurements are known to be noisy, the problem can be reformulated as

min_c ‖c‖_1 subject to ‖Ψ c - u‖_2 < ϵ,

which is known as the basis pursuit denoising problem. ϵ controls the accuracy tolerance and can be prescribed if the distribution of the measurement noise is known. For example, if the measurement noise follows a normal distribution with zero mean and standard deviation σ_n, then ϵ is naturally set to √(Mσ_n^2). When, instead of measurement samples, simulation samples are used in the PC estimation and a distribution for the numerical or modeling error is absent, one may use cross-validation to determine the tolerance parameter ϵ. This section continues with a brief review of theorems developed on the recoverability of compressive sampling and efforts heretofore made to improve the accuracy of PC estimation using compressive sampling.

§.§ PC recoverability using compressive sampling

Whether the desired solution can be recovered from compressive sampling or not mainly depends on the properties of the measurement matrix Ψ. One of the properties shown to be significantly relevant is the Restricted Isometry Property (RIP), or the s-restricted isometry constant, of the measurement matrix <cit.>. Formally speaking, the s-restricted isometry constant of a matrix Ψ∈ℝ^M × K is defined to be the smallest δ_s∈ (0,1) such that

(1-δ_s) ‖c‖_2^2 ⩽ ‖Ψ^s^* c‖_2^2 ⩽ (1+δ_s) ‖c‖_2^2,

for every column-submatrix Ψ^s^*∈ℝ^M × s^*, s^*⩽ s, of Ψ and every vector c ∈ℝ^s^*. Thus defined, a small RIP constant for a measurement matrix ensures that, for any given sparse signal, the energy of the measured signal is not very different from the energy of the signal itself; hence the appeal of a small RIP constant. The following theorem provides an upper bound on the RIP constant under which the optimization problem (<ref>) leads to accurate recovery. Let Ψ∈ℝ^M× K have RIP constant δ_2s such that δ_2s < √(2)-1.
For a given c̅ and a noisy measurement y=Ψc̅+ e with ‖e‖_2⩽ϵ, let c be the solution of

min ‖c‖_1 subject to ‖Ψ c - y‖_2 ⩽ ϵ.

Then the reconstruction error satisfies

‖c-c̅‖_2 ⩽ C_1 ‖c̅-c̅^*‖_1/√(s) + C_2 ϵ,

where C_1 and C_2 depend only on δ_2s and c̅^* is the vector c̅ with all but its s largest-magnitude entries set to zero. If c̅ is s-sparse and the measurements are noiseless, then the recovery is exact.

Calculating the RIP constant of a given matrix is an NP-complete problem <cit.>. The following theorem gives a probabilistic upper bound on the RIP constant for bounded orthonormal systems. Let {ψ_n}_1⩽ n ⩽ K be a bounded orthonormal system of functions on 𝒟, where 𝒟 is endowed with a probability measure ν. Specifically, we have

∫_𝒟ψ_n(ξ) ψ_m(ξ) ν(ξ) dξ = δ_mn, 1⩽ m, n ⩽ K,

and the following uniform bound,

‖ψ_n‖_∞ = sup_ξ∈𝒟 |ψ_n(ξ)| ⩽ L, for all 1⩽ n ⩽ K.

Consequently, an orthonormal polynomial system {ψ_α}_α∈Λ_d,k defined in (<ref>) is a bounded orthonormal system if it is uniformly bounded by

sup_α∈Λ_d,k ‖ψ_α‖_∞ = sup_α∈Λ_d,k sup_ξ∈ I_Ξ |ψ_α(ξ)| ⩽ L,

for some constant L≥1. Hereinafter, let us refer to the bound L as the local-coherence. Let {ψ_n}_1⩽ n ⩽ K be a bounded orthonormal system satisfying Equations (<ref>) and (<ref>). Also let Ψ∈ℝ^M × K be a measurement matrix with entries {ψ_ij=ψ_j(ξ^(i))}_1⩽ i⩽ M, 1⩽ j⩽ K, where ξ^(1), ..., ξ^(M) are random samples drawn from measure ν. Assuming that

M ⩾ C δ^-2 L^2 s log^3(s) log(K),

then with probability at least 1-K^{-βlog^3(s)} the RIP constant δ_s of 1/√(M) Ψ satisfies δ_s ⩽δ. Here C, β > 0 are universal constants.

It is worthwhile to highlight the differences between these two theorems. Theorem <ref> is deterministic, while Theorem <ref> is probabilistic. Theorem <ref> provides a deterministic guarantee that is universal, in the sense that once a specific RIP property is satisfied for samples of a class of measurement matrices, then the corresponding error bounds hold for all kinds of target expansions, and recovery is always exact for cases with noiseless measurements and an s-sparse target. On the other hand, when the condition of Theorem <ref> is satisfied, i.e., a sufficient number of samples are provided, the recovery accuracy is achieved only with high probability, and not with probability 1.

§ NEAR-OPTIMAL SAMPLING STRATEGY

Suppose that a limited sampling budget allows for M samples to be drawn. Our objective, then, is to optimally select M locations in the parameter space at which samples of the QoI are computationally or experimentally evaluated. Let us pose a slightly different, but closely related, question: out of a sufficiently large pool of M_p candidate sample locations, how can one identify the M "optimal" locations? The sufficiently large number M_p can be determined such that an accurate regression for the given stochastic system is achieved. In what follows, we first discuss the sampling strategies that aim at improving the local-coherence. This will then be followed by discussions about other relevant properties, in particular those of the measurement matrix, that have implications on the accuracy and stability of sparse recovery, and how a `near-optimal' sampling strategy can be designed to improve these relevant properties.

§.§ Sampling strategies focusing on local-coherence

As is evident from the theoretical results of Section <ref>, specifically Theorem <ref>, the local-coherence has an implication on recovery accuracy. This has motivated researchers to develop new sampling strategies such that this property is improved.
The conventional sampling approach, namely the standard (random) sampling approach, involves drawing random realizations from uniform and normal distributions for Legendre and Hermite PC expansions, respectively. In <cit.>, it was recommended to use a preconditioned Legendre measurement matrix, where samples are taken from the Chebyshev distribution, instead of a uniform distribution, to improve accuracy. With respect to this new sampling distribution, new orthogonal basis functions were calculated by scaling or weighting the Legendre polynomials. It was shown in <cit.> that when the system dimensionality d is larger than the polynomial order k, the Chebyshev preconditioning approach results in a polynomial system with a larger local-coherence, L, and subsequently in less accurate results compared with those from standard sampling. An alternative sampling strategy was proposed in <cit.>, where the Legendre polynomial system was first turned into a discrete orthogonal system using Gauss quadrature points, from which samples are drawn randomly for sparse recovery. However, it was shown that the new discretized orthogonal system improves the bound L only when k⩾d, and for k<d the recovery accuracy is lower than that from standard sampling. In <cit.>, a coherence-optimal, or, to be compliant with the terminology in this paper, a local-coherence-optimal sampling approach was introduced, where instead of directly sampling from the probability measure, ρ(ξ), samples are drawn from a different "optimal" probability measure, ρ_o(ξ), constructed according to

ρ_o(ξ)= C^2 ρ(ξ) B^2(ξ),

where C is a normalizing constant and

B(ξ):= max_α∈Λ_d,k |ψ_α(ξ)|.

Corresponding to this probability measure, the weight function used to retrieve the orthogonality of the basis functions should be w(ξ)=1/B(ξ). Accordingly, the ℓ_1-minimizations in (<ref>) and (<ref>) were adjusted to include the weight functions. The following weighted ℓ_1-minimizations were then used for sparse recovery

min_c ‖c‖_1 subject to WΨc=Wu,

min_c ‖c‖_1 subject to ‖WΨc-Wu‖_2 < ϵ,

where W is the M×M diagonal weight matrix, with W(i,i)=w(ξ^(i)) for i=1,⋯,M. It was also proved in <cit.> that this sampling approach results in the lowest local-coherence among all sampling approaches, and can thus outperform standard sampling in both cases, i.e., k<d and k⩾d. In what follows, we discuss how this strategy can be further improved by considering other relevant properties with implications for recovery accuracy.

§.§ Sampling strategies focusing on measurement matrix properties

It is obvious that to recover all s-sparse vectors c̅ from the observation vector u=Ψc̅ using the ℓ_0 minimization problem in (<ref>), we must have Ψc̅≠Ψc' for any pair of distinct s-sparse vectors c', c̅. In other words, we want c'≠c̅ to lead to distinct observation vectors. For this to happen, the measurement matrix should not satisfy Ψ(c̅-c')=0 for any vector c̅-c' that has 2s or fewer nonzero terms. This means that the null space of Ψ should contain no vector with 2s or fewer nonzero terms. In compressive sampling, this property is mostly characterized using the spark <cit.>. The spark of a matrix is the smallest number of its columns that are linearly dependent. Using the definition of spark, it is then straightforward to guarantee that the solution of the ℓ_0 minimization given in (<ref>) exactly recovers the s-sparse c̅ if spark(Ψ)>2s <cit.>. Moreover, it can be shown that in this case the ℓ_1 minimization in (<ref>) also recovers the exact solution, c̅ <cit.>.
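As a concrete illustration of the coherence-optimal sampling rule, the sketch below draws samples from ρ_o(ξ) for a one-dimensional Legendre system by rejection sampling and forms the diagonal weight matrix W. The rejection proposal, the bound sup B² = 2k+1 (attained at ξ=±1 for the normalized Legendre polynomials √(2n+1)P_n), and all parameter values are our own illustrative choices, not code from <cit.>.

```python
# Minimal sketch: rejection sampling from rho_o(xi) = C^2 rho(xi) B(xi)^2
# for orthonormal Legendre polynomials on [-1,1] (uniform base measure).
import numpy as np
from numpy.polynomial import legendre as L

def psi_matrix(xi, k):
    """Orthonormal Legendre basis psi_n = sqrt(2n+1) P_n, n = 0..k."""
    V = L.legvander(xi, k)                      # columns are P_0..P_k
    return V * np.sqrt(2 * np.arange(k + 1) + 1)

def coherence_optimal_samples(M, k, rng):
    out = []
    bound = 2 * k + 1                           # sup_xi B(xi)^2, at xi = +-1
    while len(out) < M:
        xi = rng.uniform(-1.0, 1.0)             # proposal: the base measure rho
        B2 = np.max(psi_matrix(np.array([xi]), k) ** 2)
        if rng.uniform() < B2 / bound:          # accept with prob. B^2 / sup B^2
            out.append(xi)
    xi = np.array(out)
    B = np.sqrt(np.max(psi_matrix(xi, k) ** 2, axis=1))
    W = np.diag(1.0 / B)                        # weights for  W Psi c = W u
    return xi, W

rng = np.random.default_rng(1)
xi, W = coherence_optimal_samples(50, 20, rng)
Psi = psi_matrix(xi, 20)
```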
Clearly, a measurement matrix with a larger spark allows exact recovery using compressive sampling for a larger set of target signals (solution vectors). Therefore, it is desirable to maximize the spark of measurement matrices. However, computing the spark of a matrix is an NP-hard problem <cit.>. As an alternative, one can analyze recovery guarantees using alternative properties which are easier to compute. One such property is the mutual-coherence, which for a given matrix Ψ∈ℝ^M×K is defined as the maximum absolute normalized inner product, i.e. cross-correlation, between its columns <cit.>. Let ψ_1, ψ_2, ..., ψ_K∈ℝ^M be the columns of the matrix Ψ. The mutual-coherence of the matrix Ψ, denoted by μ(Ψ), is then given by

μ(Ψ):= max_1⩽i,j⩽K, i≠j |ψ_j^T ψ_i| / (‖ψ_j‖_2 ‖ψ_i‖_2).

The following simple proposition explains how spark and mutual-coherence are related.

Proposition. For any matrix Ψ∈ℝ^M×K the following holds:

spark(Ψ) ⩾ 1+1/μ(Ψ).

Mutual-coherence is an indicator of the worst interdependence, i.e. the maximum cross-correlation, between the columns of a matrix. It is zero for an orthogonal matrix and is strictly positive when M<K. Proposition <ref> makes it obvious that a small value of the mutual-coherence is desired. It may be concluded that the measurement matrix Ψ should be designed in a way that its mutual-coherence, i.e. its maximum cross-correlation, is minimized. However, it has been observed that minimizing the maximum cross-correlation does not necessarily improve the recovery accuracy of compressive sampling <cit.>. This is because Equation (<ref>) only provides a lower bound on the spark. Therefore, minimizing mutual-coherence considers only the worst-case scenario and fails to account for other possibilities for improving compressive sampling performance <cit.>. Our objective in this work is to design a measurement matrix Ψ which leads to better compressive sampling performance on average, and not merely in the worst-case scenario. To this end, we need to determine which property of the measurement matrix should be optimized. This property should ideally be one that directly and sufficiently controls the accuracy of compressive sampling. A widely-accepted property is the RIP constant, as suggested by Theorem <ref>. As previously mentioned, calculating the RIP constant of a given matrix is an NP-complete problem. However, it is known that the eigenvalues of each column-submatrix Ψ^s^*∈ℝ^M×s^*, s^*⩽s, can be bounded by the RIP constant δ_s:

1-δ_s ⩽ λ_min(Ψ^s^*TΨ^s^*) ⩽ λ_max(Ψ^s^*TΨ^s^*) ⩽ 1+δ_s.

Therefore, one may suggest minimizing the condition number of all column-submatrices with s or fewer columns. The challenge, however, is that the sparsity level of the solution, s, is not known in advance. Moreover, calculating such a combinatorial measure is not a trivial task and can be computationally impossible <cit.>. As a result, establishing a single matrix property that sufficiently guarantees the accuracy of the compressive sampling method and is easily computable still remains an open challenge <cit.>. To sidestep this challenge, efforts have focused on identifying properties or measures that are relatively better than the maximum cross-correlation, or mutual-coherence. In <cit.>, for the first time, it was suggested that a t-averaged mutual-coherence be minimized. Denoted by μ_t(Ψ), the t-averaged mutual-coherence of a measurement matrix Ψ is defined as the average of the cross-correlations larger than a threshold t, i.e.

μ_t(Ψ)= ∑_1⩽i,j⩽K, i≠j 1(|g_ij|⩾t) |g_ij| / ∑_1⩽i,j⩽K, i≠j 1(|g_ij|⩾t),

where 1(·) is the indicator function, g_ij is the ij-th component of the Gram matrix

G= Ψ̃^TΨ̃,

and Ψ̃ is the column-normalized version of Ψ. It has been shown that recovery accuracy can be significantly improved if, instead of a random measurement matrix, the measurement matrix is optimized based on μ_t(Ψ). However, it was later shown in <cit.> that a μ_t(Ψ)-optimized measurement matrix is not robust when the target signals are contaminated by some noise and, as such, are not exactly sparse. In order to improve this robustness, efforts have focused on optimizing measurement matrices by considering all the cross-correlations, and not only the ones larger than a threshold. Specifically, these efforts seek to minimize the distance between the Gram matrix, G, and the corresponding identity matrix. With ‖·‖_F denoting the Frobenius norm, it has been shown that the measurement matrix determined by solving the following minimization problem

min_Ψ∈ℝ^M×K ‖I_K - Ψ̃^TΨ̃‖_F^2,

can lead to a signal recovery that is both more accurate and more robust compared with that using the μ_t(Ψ)-optimized matrix <cit.>. In what follows, we propose our near-optimal sampling strategy, which considers measures of maximum and average cross-correlation of measurement matrices, and optimizes sample locations accordingly.

§.§ Near-optimal sampling strategy

To achieve better robustness, as argued earlier, we aim to select sample locations that collectively minimize a measure of average cross-correlation given by

γ(Ψ):= (1/N)‖I_K - Ψ̃^TΨ̃‖_F^2,

where N:=K×(K-1) is the total number of column pairs. For the sake of brevity, we refer to γ(Ψ) as the average cross-correlation, even though, precisely speaking, it is the average of the squares of the cross-correlations. It should be noted that minimizing this average cross-correlation alone will not necessarily result in the smallest maximum cross-correlation; the mutual-coherence could still be undesirably large and the sparse recovery significantly inaccurate. As a remedy, we seek to minimize both the average cross-correlation γ(Ψ) and the mutual-coherence μ(Ψ) simultaneously, in a multi-objective problem. To solve the optimization problem in (<ref>) for the PC measurement matrix, one option is to adopt the solution approaches proposed for (<ref>). These are mainly iterative approaches based on the gradient descent method <cit.> or the bound-optimization method <cit.>. However, these algorithms are applicable to measurement matrices that are less restricted than the PC measurement matrix, in that their components are not constrained to be evaluations of specific orthogonal basis functions at certain sample locations, and as such can be freely optimized. Therefore, a greedy algorithm is the natural option for solving (<ref>). Our greedy algorithm for the near-optimal sampling strategy begins by populating a large pool of candidate sample locations in the multidimensional parameter space, and seeks to find the near-optimal M locations and the corresponding M×K measurement matrix. We select the first sample location randomly from the candidates pool, and this constitutes the first row in the measurement matrix. At each step of the algorithm, a new sample location, or matrix row, is added. This is done by searching through the candidates pool for the best sample location. In this two-objective optimization problem, we select the "best" sample location as the one with the smallest normalized distance with respect to the utopia point <cit.>.
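Both objectives entering this two-objective problem are cheap to evaluate from the Gram matrix. A minimal sketch (ours, not code from the paper) follows; it exploits the fact that after column normalization the diagonal of G is exactly one, so ‖I_K − G‖_F² is the sum of the squared off-diagonal entries.

```python
# Compute the mutual-coherence mu(Psi) and the average cross-correlation
# gamma(Psi) = ||I - G||_F^2 / (K (K-1)) from the column-normalized Gram matrix.
import numpy as np

def cross_correlation_measures(Psi):
    K = Psi.shape[1]
    Pt = Psi / np.linalg.norm(Psi, axis=0)      # column-normalized Psi~
    G = Pt.T @ Pt                               # Gram matrix
    off = G - np.eye(K)                         # off-diagonal cross-correlations
    mu = np.max(np.abs(off))                    # mutual-coherence
    gamma = np.sum(off ** 2) / (K * (K - 1))    # average squared cross-correlation
    return mu, gamma
```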
The utopia point is an abstract point in our two-dimensional objective space, whose first and second coordinates are, respectively, the smallest attainable maximum cross-correlation and the smallest attainable average cross-correlation. Note that this abstract point is typically not among the Pareto optimal solutions of the two-objective optimization problem, and is therefore not attainable as an optimal solution. This is why we select from the pool of sample locations the location that is closest to the utopia point, i.e. the sample location whose corresponding average cross-correlation and maximum cross-correlation have the smallest normalized distance to the coordinates of the utopia point, as is elaborated in the following pseudo-code. Let Ψ^pool denote the M_p×K measurement matrix associated with the large pool of M_p candidate locations, and Ψ^opt[M] the near-optimal M×K row-submatrix of the "pool" matrix Ψ^pool. This submatrix is identified through incremental row-concatenation, as shown in Algorithm <ref>. In this pseudo-code, Ψ^opt[i], of size i×K, denotes the near-optimal submatrix at the i-th step of the algorithm, and Ψ^pool_(j) represents the j-th row of the "pool" matrix. Also, at each iteration, μ'_j and γ'_j are the maximum cross-correlation and average cross-correlation, respectively, recorded for candidate location j. As discussed in Section <ref>, it has been shown that local-coherence-optimal sampling improves the accuracy of PC coefficient recovery over standard sampling. To exploit this advantage, we use the local-coherence-optimal sampling strategy to generate the large pool of candidate sample locations. In order to retain orthogonality, we just need to substitute Ψ^pool with WΨ^pool in Algorithm <ref>, where W is the diagonal weight matrix defined earlier in Section <ref>. Compared to standard sampling and local-coherence-optimal sampling, our sampling strategy incurs an extra computational cost due to the additional row selections from the candidates pool in the greedy algorithm. However, this additional cost is typically negligible with respect to the cost of sample collection, especially when the benefit of improved accuracy is also considered. We refer to the proposed sampling approach as the near-optimal sampling strategy. The reason we do not use the term `optimal' is two-fold: First, even though studies in signal processing have shown significant improvement in recovery accuracy by minimizing (<ref>), this orthogonality property does not by itself establish a sufficient criterion for the recovery accuracy of compressive sampling (a measure that is both sufficient and tractable has not been identified in the literature). Second, our approach can be sensitive to (i) the random choice of candidate locations and (ii) the random choice of the initial location (the first row in the submatrix), and is therefore not fully deterministic, and as such not optimal. In the next section, we provide numerical examples to demonstrate the advantages of our proposed sampling strategy.

§ NUMERICAL EXAMPLES

To demonstrate the advantage of the near-optimal sampling strategy over other sampling approaches, four target functions are considered in this section: (i) a low-dimensional high-order polynomial function, (ii) a high-dimensional low-order polynomial function, (iii) a six-dimensional generalized Rosenbrock function, and (iv) the solution to a stochastic diffusion problem. This allows for a comprehensive comparison of the sampling strategies for a variety of models with different combinations of dimension and order.
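Before presenting the results, a compact sketch of the greedy selection loop of Algorithm <ref> is given below for illustration. Since the text does not fix how the distance to the utopia point is normalized, the per-objective scaling used here (by the spread over the candidates) is our own assumption; cross_correlation_measures() is the helper sketched earlier.

```python
# Greedy, utopia-distance row selection from a candidate pool matrix.
# Note: this naive version rescans the whole pool at every step, i.e.
# O(M * M_p) measure evaluations; in practice the pool may be subsampled.
import numpy as np

def near_optimal_rows(Psi_pool, M, rng):
    Mp = Psi_pool.shape[0]
    chosen = [rng.integers(Mp)]                    # random initial location
    for _ in range(1, M):
        rest = [j for j in range(Mp) if j not in chosen]
        mus, gammas = [], []
        for j in rest:                             # objectives of augmented matrix
            mu_j, gamma_j = cross_correlation_measures(Psi_pool[chosen + [j], :])
            mus.append(mu_j)
            gammas.append(gamma_j)
        mus, gammas = np.array(mus), np.array(gammas)
        utopia = np.array([mus.min(), gammas.min()])
        scale = np.array([np.ptp(mus) or 1.0, np.ptp(gammas) or 1.0])  # assumed scaling
        dist = np.hypot((mus - utopia[0]) / scale[0], (gammas - utopia[1]) / scale[1])
        chosen.append(rest[int(np.argmin(dist))])
    return Psi_pool[chosen, :], chosen
```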
The first three target functions are chosen to be exactly s-sparse (or sparse, in short), whereas the last one is approximately s-sparse (or compressible, in short). In all examples, the optimization problems (<ref>) and (<ref>), corresponding to a noise-less measurement setting, were formed and solved using the SPGL1 package <cit.>. Also, coherence-optimal samples were generated using the `coh_opt' package developed by the authors of <cit.>. For the sake of brevity, we only report the results for Legendre polynomial expansions in this paper, but note that similar improvements in recovery accuracy were observed when near-optimal sampling was applied to Hermite polynomial expansions. In all these examples, the proposed near-optimal sampling approach is compared with (i) standard sampling and (ii) local-coherence-optimal sampling, or, in short, `coherence-optimal' sampling.

§.§ Low-dimensional high-order sparse PCE

Let us consider the target function, u(Ξ), to be a sparse 20th-order Legendre polynomial expansion in a two-dimensional random space with uniform density on [-1,1]^2, manufactured according to

u(Ξ)= ∑_i=1^5 Ξ_1^2i Ξ_2^2i.

Using limited samples from this exact function, the objective is to recover the sparse coefficient vector. In selecting the sample locations to form the measurement matrix and data vector, the proposed near-optimal sampling approach is compared against the random sampling strategies, namely (i) standard sampling and (ii) coherence-optimal sampling. The candidates pool in the near-optimal sampling approach includes 100,000 coherence-optimal samples. For all three approaches, we report the performance results obtained from 100 independent runs. This is to capture the variability induced by the small sample size in the standard and coherence-optimal sampling approaches. In the near-optimal approach, these independent runs allow us to account for (i) the sensitivity of the performance with respect to the choice of initial sample location, (ii) the variability induced by the finite size of the candidate pool, and (iii) the variability induced by the fact that our cross-correlation measures do not necessarily guarantee CS recovery accuracy. Figure <ref> shows the median, and 1st and 3rd quartiles, of the relative ℓ_2 error and Figure <ref> shows the mean of the relative ℓ_2 error. The relative ℓ_2 error is calculated as ‖c - c̅‖_2/‖c̅‖_2, where c̅ is the exact coefficient vector and c is the solution of the ℓ_1 minimization in (<ref>). As is apparent in Figures <ref> and <ref>, coherence-optimal sampling results in a smaller error compared to standard sampling, as the samples are drawn from a distribution with a smaller bound L defined in (<ref>). However, our proposed sampling outperforms the coherence-optimal one. To explain this, Figures <ref> and <ref> show the median, and 1st and 3rd quartiles, of the mutual-coherence and average cross-correlation of the measurement matrix, respectively. It can be seen that near-optimal sampling beats the other two approaches in both measures, leading to the higher observed accuracy in the PCE estimation.
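For readers wishing to reproduce this type of test, the sketch below builds the total-degree-20 orthonormal Legendre basis in d=2 (giving the K=231 columns used here) and evaluates the manufactured target; sample locations and the ℓ_1 solver can then be swapped in as desired. This is a hedged illustration assuming the total-degree index set, not the authors' code; psi_matrix() is the helper defined earlier.

```python
# Measurement-matrix construction for a d = 2, total-degree-20 Legendre basis.
import numpy as np
from itertools import product

d, k = 2, 20
# Multi-indices with |alpha| <= k; for d = 2, k = 20 this gives K = 231.
alphas = [a for a in product(range(k + 1), repeat=d) if sum(a) <= k]

def measurement_matrix(Xi):
    """Xi has shape (M, d); returns the M x K matrix of tensor-product
    orthonormal Legendre polynomials."""
    P = np.stack([psi_matrix(Xi[:, j], k) for j in range(d)])   # (d, M, k+1)
    return np.column_stack([np.prod([P[j][:, a[j]] for j in range(d)], axis=0)
                            for a in alphas])

def target(Xi):                       # u(Xi) = sum_{i=1}^{5} Xi1^(2i) Xi2^(2i)
    return sum(Xi[:, 0] ** (2 * i) * Xi[:, 1] ** (2 * i) for i in range(1, 6))

rng = np.random.default_rng(2)
Xi = rng.uniform(-1, 1, size=(80, d))            # standard (uniform) sampling
Psi, u = measurement_matrix(Xi), target(Xi)
```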
§.§ High-dimensional low-order sparse PCE

As a contrasting example, let us consider the target function, u(Ξ), to be a sparse second-order Legendre polynomial expansion in a 20-dimensional random space with uniform density on [-1,1]^20, manufactured according to

u(Ξ)= ∑_i=1^19 Ξ_i Ξ_i+1.

To compare the performance of near-optimal sampling with standard and coherence-optimal sampling on this target function, the numerical results were obtained under a setting similar to that of the previous example: 100 independent runs were used for all three approaches, and 100,000 coherence-optimal samples constituted the initial sample pool in the near-optimal approach of Algorithm <ref>. Figure <ref> shows the median, and 1st and 3rd quartiles, of the relative ℓ_2 error. Figure <ref> shows the mean of the relative ℓ_2 error. Figures <ref> and <ref> show the improvement in the median, and 1st and 3rd quartiles, of the mutual-coherence and average cross-correlation, respectively, when near-optimal sampling is used, which has translated into the observed improvement in recovery accuracy. It should be highlighted that standard sampling is mostly suitable for high-dimensional and low-order cases such as this example <cit.>. It should also be noted that in the examples of this section and Section <ref> the measurement matrices have the same number of columns (K=231); however, as can be seen in Figures <ref> and <ref>, the high dimensionality of this example has resulted in smaller mutual-coherence and average cross-correlation for the various sample sizes. Consequently, we expect recently developed sampling strategies not to outperform standard sampling in these high-dimensional low-order problem cases <cit.>. However, as can be seen in Figures <ref> and <ref>, the near-optimal sampling approach does outperform both standard and coherence-optimal sampling. Although this improvement in terms of the mean or median of the relative error may not seem significant, it should be noted that this is a rather "extreme" case, where not only is the response high-dimensional, but its order is also very low. This could be thought of as a lower bound for the improvement offered by the near-optimal sampling approach. That is, the fact that the near-optimal sampling strategy still shows improvement, even if insignificant, for such extreme cases where standard sampling is supposed to work well can be considered numerical evidence that it will most likely outperform other sampling strategies in all other cases. As we already discussed in Section <ref>, in order to have exact recovery we need the spark of the measurement matrix to be larger than two times the number of non-zero coefficients. At small sample sizes none of the sampling approaches meets this requirement. Therefore, as can be seen in Figures <ref> and <ref>, at low sample sizes all three sampling strategies recover poor approximations. As the number of samples increases, the chance of meeting this requirement also increases; hence the variability in recovery accuracy also increases. In the near-optimal sampling approach we select sample locations such that the orthogonality of the measurement matrix is improved, thereby enhancing the probability of achieving a larger spark and consequently exact recovery.
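The spark requirement invoked here can be checked directly, though only for very small matrices, since computing the spark is NP-hard. The brute-force sketch below (an illustration, not part of the paper's workflow) enumerates column subsets and returns the size of the smallest linearly dependent one.

```python
# Brute-force spark: smallest number of linearly dependent columns.
# Only viable for tiny matrices; illustrates the condition spark(Psi) > 2s.
import numpy as np
from itertools import combinations

def spark(Psi, tol=1e-10):
    M, K = Psi.shape
    for r in range(1, M + 2):            # any M+1 columns are always dependent
        for cols in combinations(range(K), r):
            if np.linalg.matrix_rank(Psi[:, cols], tol=tol) < r:
                return r                 # smallest dependent subset found
    return K + 1                         # full column rank (only possible if K <= M)

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
print("spark:", spark(A))                # generically M + 1 = 5 here
```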
To further clarify this, we demonstrate the advantage of near-optimal sampling by comparing another performance measure, namely the success rate. The success rate, here, is defined as the ratio of trials, out of the total 100 trials, that result in a relative error smaller than 10^-7. Figure <ref> shows that near-optimal sampling consistently results in a higher success rate, and is thus expected to produce accurate recoveries more frequently.

§.§ Generalized Rosenbrock function

Compared to the previous two examples with "extreme" dimension and order combinations, the third target function is chosen to have a "moderate" combination of dimension and order. Specifically, let us consider the following 6-dimensional generalized Rosenbrock function with random inputs following a uniform density on [-1,1]^6,

u(Ξ)= ∑_i=1^5 100(Ξ_i+1-Ξ_i^2)^2+(1-Ξ_i)^2.

The objective is to recover the corresponding sparse Legendre polynomial expansion. A similar setting is used for the comparison of the three sampling strategies: 100 independent runs for all sampling approaches, together with a candidate pool of 100,000 coherence-optimal samples for the near-optimal sampling of Algorithm <ref>. Figure <ref> demonstrates the improvements in the median, and 1st and 3rd quartiles, of the relative ℓ_2 error when near-optimal sampling is used. Figure <ref> demonstrates the improvements in terms of the mean of the relative ℓ_2 error. Figures <ref> and <ref> show that near-optimal sampling results in smaller mutual-coherence and average cross-correlation, in terms of their median, and 1st and 3rd quartiles. Similar to the previous example, we note that at small sample sizes all approaches result in equally inaccurate recoveries, as all approaches fail to achieve a measurement matrix with a sufficiently large spark, i.e. spark(Ψ)>2s. A larger sample size results in a larger lower bound for the spark of the measurement matrix and a higher probability of achieving the critical spark value. Therefore, we observe more variability in recovery accuracy as the sample size increases. Figure <ref> shows that using near-optimal sampling leads to a significant improvement in the success rate, i.e. the ratio of trials with relative errors smaller than 10^-7. This is a direct result of improving the orthogonality of the measurement matrix, i.e. enhancing the chance of achieving the critical spark value, in near-optimal sampling.

§.§ Stochastic diffusion problem

In this example we consider a stochastic diffusion problem in a one-dimensional physical domain, given by

-∂/∂x(a(x,Ξ) ∂u/∂x(x,Ξ))=2, x ∈ (0,1),
u(0,Ξ)=0, u(1,Ξ)=0, Ξ∈[-1,1]^10.

We assume that the diffusion coefficient, a(x,Ξ), takes the following analytical form

a(x,Ξ)=1+ ∑_k=1^10 (1/(k^2π^2)) cos(2πkx) Ξ_k.

We consider Ξ to be uniformly distributed on [-1,1]^10 and take the solution of the diffusion problem at u(0.5,Ξ) to be the quantity of interest. We use a 3rd-order Legendre polynomial expansion to approximate the QoI and employ the three sampling approaches to estimate the coefficients of the expansion. A setting similar to the previous examples is used here: 100 independent runs for all sampling approaches, together with a candidate pool of 100,000 coherence-optimal samples for the near-optimal sampling of Algorithm <ref>.
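For completeness, here is a sketch of one way to evaluate the QoI samples for this example: a conservative finite-difference solve of the diffusion equation for a single realization of Ξ. The grid resolution is our own choice; the paper does not specify its deterministic solver.

```python
# Finite-difference solve of -(a u')' = 2 on (0,1), u(0) = u(1) = 0,
# with a(x, Xi) = 1 + sum_k cos(2 pi k x) Xi_k / (k^2 pi^2).
import numpy as np

def diffusivity(x, xi):
    k = np.arange(1, len(xi) + 1)
    return 1.0 + np.cos(2 * np.pi * np.outer(x, k)) @ (xi / (k ** 2 * np.pi ** 2))

def solve_diffusion(xi, n=200):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    a_half = diffusivity(0.5 * (x[:-1] + x[1:]), xi)   # a at cell midpoints
    # Tridiagonal system for the interior unknowns u_1 .. u_{n-1}.
    main = (a_half[:-1] + a_half[1:]) / h ** 2
    off = -a_half[1:-1] / h ** 2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, 2.0 * np.ones(n - 1))
    return x, u

rng = np.random.default_rng(4)
xi = rng.uniform(-1, 1, size=10)
x, u = solve_diffusion(xi)
qoi = u[len(x) // 2]                                   # u(0.5, Xi), since x[100] = 0.5
```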
For this example, we define the relative error to be ‖u - u̅‖_2/‖u̅‖_2, where u̅ is the vector of the exact solutions calculated at 1000 new random samples and u contains the approximate solutions calculated by evaluating the PC expansion at the same 1000 samples. Figure <ref> demonstrates the improvement in the median, and 1st and 3rd quartiles, of the relative ℓ_2 error when near-optimal sampling is used. Figure <ref> shows the improvement in the mean of the relative ℓ_2 error. The mutual-coherence and average cross-correlation are also compared in Figures <ref> and <ref>, respectively, in terms of their medians, and 1st and 3rd quartiles. To compare the success rates, since the target expansion is not exactly sparse, we consider a looser definition of successful recovery. This is justified by noting that the error levels in Figure <ref> are relatively larger than those in Figures <ref> and <ref>. Accordingly, we define a recovery to be successful when its relative ℓ_2 error is smaller than 10^-4, and show the resulting success rates in Figure <ref>. These results show that success rates can be improved significantly by using near-optimal sampling.

§ CONCLUSION

In this paper, we presented a new sampling strategy, or design-of-experiment technique, for selecting sample locations for the sparse estimation of PCEs using compressive sampling. The sample locations are selected such that (i) the local-coherence property is improved, and (ii) the resulting measurement matrix has the smallest mutual-coherence and the smallest average cross-correlation between its columns. It was discussed how the latter two measures have implications for recovery accuracy, and numerical results were presented to support this. A greedy algorithm was introduced that selects a prescribed number of near-optimal locations out of a large pool of candidate locations. The resulting measurement matrix is claimed to be near-optimal, rather than optimal, because the two aforementioned cross-correlation measures may not be the only measures that control the recovery accuracy, and therefore minimizing them does not guarantee optimal performance. Another reason why our algorithm can be sub-optimal is that the greedy algorithm is inherently non-deterministic, and as such is susceptible to variability induced by the random choice of location candidates and of the initial (first) sample location in the algorithm. Four numerical examples with various combinations of dimensionality and order demonstrated the advantages of our proposed sampling strategy over other sampling strategies, in terms of accuracy and robustness.
Clustering in particle chains - summation techniques for the periodic Green's function

Yarden Mazor (yardenm2@mail.tau.ac.il), Yakir Hadad, Ben Z. Steinberg
School of Electrical Engineering, Tel Aviv University, Ramat-Aviv, Tel-Aviv 69978, Israel

1D lattice summations of the 3D Green's function are needed in many applications such as photonic crystals, antenna arrays, and so on. Such summations are usually divided into two cases, depending on the location of the observer: off the summation axis, or on the summation axis. Here, as a service to the community, we present and summarize the summation formulas for both cases. On the summation axis, we use polylogarithmic functions to express the summation, and away from the summation axis we use Poisson summation (equivalent to expanding the field into cylindrical harmonics). This text is not meant to be a comprehensive overview of the literature on this topic. We have included several references to selected works that incorporate parts of this overview, or other related methods. If someone feels that we have missed or did not credit their work in related matters, please do not hesitate to approach us, and we will gladly revise the bibliography.

§ INTRODUCTION

The most general form of a cluster in a particle chain is shown in Fig. <ref>. The theoretical "construction" process of such a structure may be visualised as taking N basic chains composed of particles with polarizability α_n (n=1,...,N) and inter-particle distance D, and placing them in some general configuration, so that all the chains have collinear chain axes (in this case visualised by the parallel dashed gray lines). If we assume the particles are small enough with respect to the inter-particle distance and the wavelength, we may use the Discrete Dipole Approximation. In addition, we assume that if a small particle with polarizability α is subject to an electric field whose local value in the absence of the particle is E^L, it will respond by forming an electric dipole moment p=αE^L. We mark the dipoles induced in each particle by p_nm, where n enumerates the particles inside the cluster (n=[1,...,N]) and m is the cluster index. It is worth noting, before we continue, that particular cases of the formulation presented in this summary can be found in many works: <cit.> treated simple cases of 2 parallel chains, and <cit.> dealt with general longitudinal particle clusters. Cases involving various summation techniques in 2D can be found in <cit.>,<cit.>,<cit.>.

§ FORMULATION

The local electric field value for each of the particles may be written in the most general form as

E^L=∑_n=1^N ∑_m=-∞^∞ G(r_0,r_nm) p_nm,

where G is the dyadic Green's function in free space, r_0 is the location of the particle under examination, and r_nm is the location of the p_nm dipole. Since the structure is periodic with period D, we may express each dipole along the chain as

p_nm=p_n0 e^iβmD.

Since the system is periodic, we may treat only the particles that reside in cluster 0 (noted in darker blue in Fig. <ref>).
If we mark the particle we are examining in cluster 0 as the n'-th particle, then taking the summation in eq1 and multiplying from the left by α_n'^-1α_n' results in the equation

α_n'^-1 p_n'0= ∑_n=1^N ∑_m=-∞^∞ G(r_0,r_nm) p_n0 e^iβmD,

or, in a more intuitive form,

α_n'^-1 p_n'0= ∑_n=1^N (∑_m=-∞^∞ G(r_0,r_nm) e^iβmD) p_n0.

A more accurate way to write this system of equations is by separating n=n' from the rest of the possible values of n, which gives the form

(∑_m=-∞, m≠0^∞ G̅(r_0,r_n'm) e^iβmD - α_n'^-1) p_n'0 + ∑_n=1, n≠n'^N (∑_m=-∞^∞ G̅(r_0,r_nm) e^iβmD) p_n0=0,

where G̅=(6πϵ_0/k^3)G. Each value of n' selects a certain particle to be examined, and in fact defines a certain Base Chain we are treating. The different Base Chains from which we construct a more general Clustered Chain are noted in Fig. <ref> on the right side as C1,C2,C3,C4,C5. The last equation defines a 3N×3N matrix equation, where each value selected for n' defines rows 3n'-2, 3n'-1, 3n' of it. The matrix representing the entire system may also be described as an N×N block matrix 𝔐, where each block is 3×3 and defines the interaction between one base chain and another. Using the block-matrix notation, the equation may be written as 𝔐·p=0, or

[ M_11 M_12 ⋯ M_1N; M_21 ⋱ ⋮; ⋮ ⋮; M_N1 ⋯ ⋯ M_NN ][ p_10; p_20; ⋮; p_N0 ] = 0.

The value of M_n',q' depends mostly on the geometrical positioning of the base chain q' in relation to the base chain of reference n'. This positioning is defined by two parameters: the longitudinal shift d and the transverse shift d_0. These parameters are illustrated in Fig. <ref> for C2 and C4. A non-trivial solution for this system exists only if det(𝔐)=0.

§ THE DIAGONAL TERMS OF 𝔐

The diagonal terms arise from the left brackets in eq5 and are well known from work done on simple (non-clustered) particle chains. Therefore the diagonal block M_n'n' may be written as

M_n'n'= [ T 0 0; 0 T 0; 0 0 L ],

where

T = (3/2)[ (1/kD) f_1(kD,βD) + (i/(kD)^2) f_2(kD,βD) - (1/(kD)^3) f_3(kD,βD) ] - α̅_n'^-1,
L = 3[ -(i/(kD)^2) f_2(kD,βD) + (1/(kD)^3) f_3(kD,βD) ] - α̅_n'^-1,

where

f_s(x,y)=Li_s[e^i(x+y)]+Li_s[e^i(x-y)].

This block represents the interaction of a base chain with itself.

§ TERMS OF 𝔐 WHICH REPRESENT d_0=0

These terms represent the interaction of two chains that are completely collinear with each other (i.e., they share a common chain axis). In Fig. <ref> the base chains that possess this property are C2 and C3. This case is "isolated" in Fig. <ref>. The M_n',q' term corresponding to this case has the form

M_n',q'= (6π/k^3) ∑_m A(mD+d) e^iβmD,

where

A(z)=(e^ikz/4πz)[k^2 A_1+(1/z^2-ik/z) A_2]

is the dyadic Green's function simplified for the case of particles all residing on the Z-axis, with the diagonal dyads A_1=diag(1,1,0), A_2=diag(-1,-1,2). In order to give a simple solution to the summation given in eq11 we are required to assume that the given inter-particle distance d is a rational fraction of the chain period D. This requires that the distances d,D are both integer multiples of some basic distance δ, meaning

D=Lδ, d=ℓδ.

Under this assumption the summation in eq11 may be rewritten as

M_n',q'=h_1[ℓ]A_1-ih_2[ℓ]A_2+h_3[ℓ]A_2,

where

h_s[ℓ]=(3/2)(1/(kδ)^s) ∑_m e^(ikδ|mL+ℓ|+iβδmL)/|mL+ℓ|^s.

§.§ Evaluation of h_s[ℓ]

For 1⩽ℓ⩽L-1, h_s[ℓ] can be re-written as

h_s[ℓ] = (3/2)(e^-iβδℓ/(kδ)^s) ∑_m=0^∞ [e^i(k+β)δ]^(mL+ℓ)/(mL+ℓ)^s + (3/2)(e^-iβδℓ/(kδ)^s) ∑_m=1^∞ [e^i(k-β)δ]^(mL-ℓ)/(mL-ℓ)^s.

We concentrate on the first sum above.
It has the form

σ=∑_m=0^∞ (e^ix)^(mL+ℓ)/(mL+ℓ)^s.

This sum can be re-written as

σ=∑_n=1^∞ ((e^ix)^n/n^s) a_n(ℓ),

where a_n(ℓ) is an auxiliary periodic sequence of period L, satisfying

a_n(ℓ)=0 for n=1,…,L, n≠ℓ, and a_n(ℓ)=1 for n=ℓ.

Clearly, the periodicity of the sequence implies the recurrence relation a_(n+L)=a_n, whose characteristic polynomial p(λ)=λ^L-1 has L distinct roots,

λ^L-1=0 ⇒ {λ_r}_r=0^L-1=e^i2πr/L,

hence the infinite sequence a_n(ℓ), ∀n, can be generated by the finite sum

a_n(ℓ)=∑_r=0^L-1 C_r(ℓ) λ_r^n=∑_r=0^L-1 C_r(ℓ) e^i2πrn/L.

The coefficients C_r(ℓ) can be determined by using the L initial conditions of the recurrence relation [given by eq17] in eq19. The result is the L×L Vandermonde matrix equation a(ℓ)=ΛC(ℓ). Here a(ℓ) is a vector of L entries whose elements are given by eq15, C(ℓ) is the vector of unknown coefficients, and Λ is a Vandermonde matrix whose (n,r) entry is Λ_nr=e^i2πrn/L. Due to its specific structure, Λ is also a unitary transformation from the Euclidean basis a_n(ℓ) (indexed by ℓ) to a Fourier basis. Its inverse is its adjoint (after normalization). Hence

C_r(ℓ) = L^-1 e^-i2πrℓ/L ⇒ a_n(ℓ) = (1/L)∑_r=0^L-1 e^i2πr(n-ℓ)/L, ∀n.

Substituting this result into eq16 and exchanging the order of summation, we can express σ as a finite sum of polylogarithms,

σ=L^-1 ∑_r=0^L-1 e^-i2πrℓ/L Li_s(e^(ix+i2πr/L)).

Likewise, we may repeat essentially the same procedure for the second sum in eq14 (note the lower summation bound; shift the index by 1, and at the end change r↦L-r'). The final result for h_s[ℓ], 1⩽ℓ⩽L-1, is

h_s[ℓ] = (3/(2L))(e^-iβδℓ/(kδ)^s) ∑_r=0^L-1 e^-i2πrℓ/L f_s(kδ, βδ+2πr/L),

and the f_s are given in eq9b. Though developed for the case 1⩽ℓ⩽L-1, the expression given in eq22 is in fact valid for all values of ℓ that satisfy 1⩽|ℓ|⩽L-1. As a last remark for this section, we mention that even though these formulas were developed here from "scratch", one can find many similarities to relations from signal processing where conversions of sampling rate take place.

§ TERMS OF 𝔐 WHICH REPRESENT d_0≠0

This case corresponds to interactions between chains as shown in Fig. <ref>. It presents a different challenge, since the transverse shift between the base chains couples different polarizations, such as the X and Z polarizations. In order to properly treat this type of interaction we write the corresponding term M_n',q' as

M_n',q'= [ M_n',q'^xx M_n',q'^xy M_n',q'^xz; M_n',q'^yx M_n',q'^yy M_n',q'^yz; M_n',q'^zx M_n',q'^zy M_n',q'^zz ].

Since we are discussing only 2D planar clustering of particles (the XZ plane), M_n',q'^xy=M_n',q'^yx=M_n',q'^yz=M_n',q'^zy=0. In addition, this block has to be symmetric due to the symmetry of Green's dyad, and therefore M_n',q'^xz=M_n',q'^zx. This enables us to write the block as

M_n',q'= [ M_n',q'^xx 0 M_n',q'^xz; 0 M_n',q'^yy 0; M_n',q'^xz 0 M_n',q'^zz ].

In the following analysis we will use

R_m=√(d_0^2+(mD-d)^2), r_m=(d_0,0,mD).

The main tool we are going to use to evaluate the terms in the M_n',q' block is the Poisson summation formula. This allows us to "convert" the algebraic summations into summations over a series of Hankel functions.
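Before moving on, the on-axis closed form eq22 is easy to verify numerically. The sketch below compares it against direct partial summation of the defining series for s=3, where convergence is absolute; mpmath supplies the polylogarithm on the unit circle, and all parameter values are arbitrary test inputs of our own choosing.

```python
# Numerical check of the polylogarithm closed form eq22 against a direct
# partial sum of the defining series (s = 3, absolutely convergent).
import mpmath as mp

def f_s(s, x, y):                      # f_s(x, y) = Li_s(e^{i(x+y)}) + Li_s(e^{i(x-y)})
    return mp.polylog(s, mp.exp(1j * (x + y))) + mp.polylog(s, mp.exp(1j * (x - y)))

def h_closed(s, ell, Lp, kdelta, bdelta):
    tot = mp.mpc(0)
    for r in range(Lp):
        tot += mp.exp(-2j * mp.pi * r * ell / Lp) * f_s(s, kdelta, bdelta + 2 * mp.pi * r / Lp)
    return 1.5 / Lp * mp.exp(-1j * bdelta * ell) / kdelta ** s * tot

def h_direct(s, ell, Lp, kdelta, bdelta, mmax=4000):
    tot = mp.mpc(0)
    for m in range(-mmax, mmax + 1):
        n = m * Lp + ell
        if n != 0:
            tot += mp.exp(1j * kdelta * abs(n) + 1j * bdelta * m * Lp) / abs(n) ** s
    return 1.5 / kdelta ** s * tot

print(h_closed(3, 2, 5, 0.7, 0.3))     # the two results should agree
print(h_direct(3, 2, 5, 0.7, 0.3))     # to roughly eight digits here
```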
Since for large arguments the Hankel functions exhibit exponential-like decay, this allows us to sum a finite number of elements of the series and still obtain a very good approximation for the terms in the matrix.

§.§ Evaluation of M_n',q'^xx

The x component of E^L that the red chain creates at the origin is

E_x^L(r=(0,0,0))=∑_m=-∞^∞ G_xx(0,r_m) p_0x e^iβmD,

where p_0x e^iβmD are the X components of the dipole moments induced on the m-th particle in the red chain. Substituting G_xx we may obtain an explicit form for M_n',q'^xx:

M_n',q'^xx=(3/2)∑_m=-∞^∞ e^ikR_m [1/kR_m - (kd_0)^2/(kR_m)^3 - 1/(kR_m)^3 + i/(kR_m)^2 + 3(kd_0)^2/(kR_m)^5 - 3i(kd_0)^2/(kR_m)^4] e^iβmD.

For easier evaluation this may also be written as

M_n',q'^xx=(3/2)∑_m=-∞^∞ [e^ikR_m/kR_m + ∂^2/∂(kd_0)^2 (e^ikR_m/kR_m)] e^iβmD.

We start by evaluating the summation over the left term in eq29. Substituting R_m we get

(3/2)∑_m=-∞^∞ e^ikR_m/kR_m = (3/2)∑_m=-∞^∞ e^(ik√(d_0^2+(mD-d)^2))/(k√(d_0^2+(mD-d)^2)).

To evaluate this sum we use the Poisson summation formula

∑_m=-∞^∞ f(m)=∑_n=-∞^∞ ∫_-∞^∞ f(x') e^-2πinx' dx'.

Substituting our expressions into eq31 we obtain

(3/2)∑_m=-∞^∞ (e^(ik√(d_0^2+(mD-d)^2))/(k√(d_0^2+(mD-d)^2))) e^iβmD = (3/2)∑_n=-∞^∞ ∫_-∞^∞ (e^(ik√(d_0^2+(x'D-d)^2))/(k√(d_0^2+(x'D-d)^2))) e^iβx'D e^-2πinx' dx'.

Evaluation of such an integral is possible using integral tables:

∫_-∞^∞ (e^(ik√(d_0^2+(x'D-d)^2))/(k√(d_0^2+(x'D-d)^2))) e^iβx'D e^-2πinx' dx' = -(e^(i(β/k-2πn/kD)kd)/kD)(π/i) H_0^(1)[kd_0√(1-(2πn/kD-β/k)^2)].

We may define normalized parameters for easier notation,

d̅=kd, d̅_0=kd_0, D̅=kD, β̃_n=2πn/D̅-β/k,

and finally obtain

(3/2)∑_m=-∞^∞ e^ikR_m/kR_m = -(3/2)∑_n=-∞^∞ (e^-iβ̃_n d̅/D̅)(π/i) H_0^(1)[d̅_0√(1-β̃_n^2)].

We may treat the summation over the right term in eq29 using the same method; applying the needed derivatives we obtain

(3/2)∑_m=-∞^∞ ∂^2/∂(kd_0)^2 (e^ikR_m/kR_m) = (3/2)∑_n=-∞^∞ (e^-iβ̃_n d̅/D̅)(π/i)(1/2)(1-β̃_n^2){H_0^(1)[d̅_0√(1-β̃_n^2)] - H_2^(1)[d̅_0√(1-β̃_n^2)]}.

Adding eq35 and eq36 we obtain the final expression

M_n',q'^xx=(3/2)∑_n=-∞^∞ (e^-iβ̃_n d̅/D̅)(π/i){-(1/2)(β̃_n^2+1)H_0^(1)[d̅_0√(1-β̃_n^2)] + (1/2)(β̃_n^2-1)H_2^(1)[d̅_0√(1-β̃_n^2)]},

where H_n^(1) are Hankel functions of the first kind of order n. We will repeat a very similar process for the other M terms in the block.

§.§ Evaluation of M_n',q'^yy

The evaluation of M_n',q'^yy is somewhat less tedious, since the y polarization is not coupled with the other polarizations. The y component of E^L that the red chain creates at the origin due to the y components of the dipoles in the red chain is

E_y^L(r=(0,0,0))=∑_m=-∞^∞ G_yy(0,r_m) p_0y e^iβmD,

where p_0y e^iβmD are the y components of the dipole moments induced on the m-th particle in the red chain. Substituting G_yy we may obtain an explicit form for M_n',q'^yy:

M_n',q'^yy=(3/2)∑_m=-∞^∞ e^ikR_m [1/kR_m + i/(kR_m)^2 - 1/(kR_m)^3] e^iβmD.

This may also be written as

M_n',q'^yy=(3/2)∑_m=-∞^∞ [e^ikR_m/kR_m + (1/d̅_0)∂/∂d̅_0 (e^ikR_m/kR_m)] e^iβmD.

Summation over the left term gives the result presented in eq35. After performing the derivatives, summation over the right term gives

(3/2)∑_m=-∞^∞ (1/d̅_0)∂/∂d̅_0 (e^ikR_m/kR_m) = (3/2)∑_n=-∞^∞ (e^-iβ̃_n d̅/D̅)(π/i)(√(1-β̃_n^2)/d̅_0) H_1^(1)[d̅_0√(1-β̃_n^2)].

Adding eq35 and eq41,

M_n',q'^yy=(3/2)∑_n=-∞^∞ (e^-iβ̃_n d̅/D̅)(π/i){-H_0^(1)[d̅_0√(1-β̃_n^2)] + (√(1-β̃_n^2)/d̅_0) H_1^(1)[d̅_0√(1-β̃_n^2)]}.

§.§ Evaluation of M_n',q'^zz

The z component of E^L that the red chain creates at the origin due to the z components of the dipoles in the red chain is

E_z^L(r=(0,0,0))=∑_m=-∞^∞ G_zz(0,r_m) p_0z e^iβmD,

where p_0z e^iβmD are the z components of the dipole moments induced on the m-th particle in the red chain.
Substituting G_zz we may obtain an explicit form for M_n',q'^zz:

M_n',q'^zz=(3/2)∑_m=-∞^∞ e^ikR_m [1/kR_m - (d̅-mD̅)^2/(kR_m)^3 - 1/(kR_m)^3 + i/(kR_m)^2 + 3(d̅-mD̅)^2/(kR_m)^5 - 3i(d̅-mD̅)^2/(kR_m)^4] e^iβmD.

This may also be written as

M_n',q'^zz=(3/2)∑_m=-∞^∞ [e^ikR_m/kR_m + ∂^2/∂(d̅-mD̅)^2 (e^ikR_m/kR_m)] e^iβmD.

Again, summation over the left term is given by eq35. Using Fourier transform properties, we may write an expression for the summation over the right term,

(3/2)∑_m=-∞^∞ ∂^2/∂(d̅-mD̅)^2 (e^ikR_m/kR_m) e^iβmD = (3/2)∑_n=-∞^∞ (e^-iβ̃_n d̅/D̅)(π/i) β̃_n^2 H_0^(1)[d̅_0√(1-β̃_n^2)].

Substituting the results into eq45 we get

M_n',q'^zz=(3/2)∑_n=-∞^∞ (e^-iβ̃_n d̅/D̅)(π/i)(β̃_n^2-1) H_0^(1)[d̅_0√(1-β̃_n^2)].

§.§ Evaluation of M_n',q'^xz

The z component of E^L that the red chain creates at the origin due to the x components of the dipoles in the red chain is

E_z^L(r=(0,0,0))=∑_m=-∞^∞ G_xz(0,r_m) p_0x e^iβmD,

where p_0x e^iβmD are the x components of the dipole moments induced on the m-th particle in the red chain. Substituting G_xz we may obtain an explicit form for M_n',q'^xz:

M_n',q'^xz=(3/2)∑_m=-∞^∞ e^ikR_m [-d̅_0(d̅-mD̅)/(kR_m)^3 - 3id̅_0(d̅-mD̅)/(kR_m)^4 + 3d̅_0(d̅-mD̅)/(kR_m)^5] e^iβmD.

eq49 can also be presented as

M_n',q'^xz=(3/2)∑_m=-∞^∞ [∂/∂d̅_0 ∂/∂(d̅-mD̅) (e^ikR_m/kR_m)] e^iβmD.

Using previously obtained results we get

M_n',q'^xz=(3/2)∑_n=-∞^∞ (e^-iβ̃_n d̅/D̅)(π/i)(iβ̃_n√(1-β̃_n^2)) H_1^(1)[d̅_0√(1-β̃_n^2)].

§.§ How many Hankel function terms should we sum?

The Hankel function summations we have obtained are composed of terms of the general form

A_n=C Q(β̃_n) H_ν^(1)[d̅_0√(1-β̃_n^2)],

where Q may be a polynomial of order up to 2 or a combination of a polynomial with the square-root function (still, the maximal power of β̃_n in Q is 2), and C is some complex constant. For the cases examined in Sections <ref> through <ref>, the "worst-case scenario" for the decay of the terms in the Hankel function series is when Q is a polynomial of order 2. In this case the series will have terms such as

A_n ∝ β̃_n^2 H_ν^(1)[d̅_0√(1-β̃_n^2)],

or, more explicitly,

A_n ∝ (2πn/D̅-β/k)^2 H_ν^(1)[d̅_0√(1-(2πn/D̅-β/k)^2)].

We may define the criterion for the number of terms to use in the summation as the n_0 such that the tail of the summation ∑_n=n_0^∞ A_n is significantly smaller than a threshold constant C. For large values of n we may use the large-argument approximation for the Hankel function,

H_ν^(1)(z) ∼ √(2/πz) e^i(z-νπ/2-π/4).

If we substitute this approximation into the required sum we obtain

|∑_n=n_0^∞ A_n| ⩽ ∑_n=n_0^∞ |A_n| ∼ ∑_n=n_0^∞ (π/D̅)(n^3/2/√(2πd_0/D)) e^-2πd_0n/D < C.

From eq56 it is easy to see that the decay rate of the summation is strongly dependent on the ratio d_0/D. Evaluating the given summation may prove challenging, but if we do not mind a more strict criterion we may evaluate the needed n_0 from the following:

∑_n=n_0^∞ (π/D̅)(n^3/2/√(2πd_0/D)) e^-2πd_0n/D < ∑_n=n_0^∞ (π/D̅)(n^2/√(2πd_0/D)) e^-2πd_0n/D < C,

and the required value of n_0 may be extracted from the relation

(π/D̅)(1/√(2πd_0/D)) (e^-2πd_0(n_0+1)/D/(e^-2πd_0/D-1)^3)[n_0^2 e^-4πd_0/D + (-2n_0^2-2n_0+1)e^-2πd_0/D + (n_0+1)^2] < C.

§ EXAMPLE - EXAMINATION AND VERIFICATION OF THE MAGNETIC MODEL FOR PARTICLE RINGS

In <cit.> a magnetic polarizability model for particle rings is developed. Each ring is composed of N spherical particles of radius a, positioned on the circumference of a ring of radius R with a uniform angular shift between them. The electric dipoles induced on each particle in this case are in a direction tangent to the ring.
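As a brief computational aside before working through the example: the truncation criterion of eq56 is straightforward to apply numerically. The sketch below finds the smallest n_0 for which the large-argument tail bound drops below a tolerance C, by direct summation of the bound terms rather than via the closed form eq58; tolerance and parameter values are illustrative choices of ours.

```python
# Find the smallest n0 whose tail bound (eq56 with the large-argument
# Hankel approximation) falls below the tolerance C.
import numpy as np

def terms_needed(d0_over_D, Dbar, C=1e-8, nmax=10000):
    n = np.arange(1, nmax + 1)
    t = (np.pi / Dbar) * n ** 1.5 / np.sqrt(2 * np.pi * d0_over_D) \
        * np.exp(-2 * np.pi * d0_over_D * n)
    tails = np.cumsum(t[::-1])[::-1]       # tails[i] = sum of t_n for n >= i+1
    idx = int(np.argmax(tails < C))
    assert tails[idx] < C, "increase nmax"
    return int(n[idx])

print(terms_needed(0.25, 1.0))             # decay is controlled by d0/D
```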
An example of the setup for such a particle ring with 6 particles, which we will use here, is given in Fig. <ref>. When exposed to a y-polarized magnetic field H_0ŷ, an equivalent magnetic dipole moment is induced on the particle ring, satisfying the relation α_m H_0=m_0, where the inverse magnetic polarizability is given by

(4π/k^3)α_m^-1=(8/(3N(kR)^2))[(6πϵ_0/k^3)α_p^-1] - (1/(4N(kR)^5))∑_m=1^N-1 (e^(2ikR sin(mπ/N))/sin^3(mπ/N)) Q_m,

where

Q_m={3-3(kR)^2+(1+4(kR)^2)cos(2mπ/N) - kR(kR cos(4mπ/N) + i[5sin(mπ/N)+sin(3mπ/N)])},

and α_p is the electric polarizability of the particles composing the ring. The dispersion curve for a chain of such rings may be calculated in two ways. The first is by taking the equivalent magnetic polarizability of such a ring and assuming the chain is a simple chain composed of magnetic particles that have the given polarizability. This way is very simple and straightforward. The second is by taking a particle-by-particle model of such a ring, using the formulation developed previously here. Of course, the second method contains many more degrees of freedom and the dispersion curve will most certainly contain many branches, but we expect one of these branches to be consistent with the model in <cit.>; moreover, the nullspace vector for that branch will exhibit tangent vectors, which represent the tangent dipoles. The chain setup is demonstrated in Fig. <ref>. Dispersion curves for such a chain of rings are given in Fig. <ref>. It is very clear that the two methods provide dispersion curves that overlap. Calculating the nullspace vector for a selected solution from the dispersion curves shows dipole vectors tangent to the particle ring, which proves that this curve represents a magnetic mode propagating along the chain. The parameters used for the examined chain: D=λ_p/20, R=λ_p/200, a=λ_p/800.

§ REFERENCES

[AluParallel] A. Alu, P. A. Belov and N. Engheta, "Coupling and guided propagation along parallel chains of plasmonic nanoparticles", New Journal of Physics, Volume 13 (2011).
[LongCh] Y. Mazor and B. Z. Steinberg, "Longitudinal chirality, enhanced nonreciprocity, and nanoscale planar one-way plasmonic guiding", Phys. Rev. B 86(4), 045120 (2012).
[TretyakovBook] S. Tretyakov, Analytical Modeling in Applied Electromagnetics, Artech House (2003).
[Cappolino_PRE_2011] S. Steshenko, F. Capolino, P. Alitalo and S. Tretyakov, Phys. Rev. E 84, 016607 (2011).
[Vitaly] D. Van Orden and V. Lomakin, "Rapidly Convergent Representations for Periodic Green's Functions of a Linear Array in Layered Media," IEEE Transactions on Antennas and Propagation 60(2), 870-879 (2012).
[EnghetaRings] A. Alu and N. Engheta, "Dynamical theory of artificial optical magnetism produced by rings of plasmonic nanoparticles", Phys. Rev. B 78(8), 085112 (2008).
A single camera three-dimensional digital image correlation system for the study of adiabatic shear bands

T. G. White, J. R. W. Patten, K. Wan, A. D. Pullen, D. J. Chapman and D. E. Eakins
Institute of Shock Physics, Imperial College London, London, UK
December 30, 2023

We describe the capability of a high resolution three-dimensional digital image correlation (DIC) system specifically designed for high strain-rate experiments. Utilising open-source camera calibration and two-dimensional digital image correlation tools within the MATLAB framework, a single camera 3-D DIC system with sub-micron displacement resolution is demonstrated. The system has a displacement accuracy of up to 200 times the optical spatial resolution, matching that achievable with commercial systems. The surface strain calculations are benchmarked against commercially available software before being deployed on quasi-static tests showcasing the ability to detect both in- and out-of-plane motion. Finally, a high strain-rate (1.2×10^3 s^-1) test was performed on a top-hat sample compressed in a split-Hopkinson pressure bar in order to highlight the inherent camera synchronisation and the ability to resolve the adiabatic shear band phenomenon.

§ INTRODUCTION

Full-field deformation measurements are an important component in the analysis of material behaviour when subjected to both quasi-static and high-rate loading. Unlike traditional methods such as load cells, which provide global information, or strain gauges, which yield information averaged over a limited portion of the sample, optical techniques are a non-contact method able to provide point-wise information over a large area of the sample. This becomes particularly important when investigating inhomogeneous behaviour or systems in hostile environments that necessitate a non-contact measurement. Developed in the 1980s <cit.>, Digital Image Correlation (DIC) is a non-contact optical method capable of measuring the full-field strain distribution over a sample surface. The relatively lenient requirements necessary to perform DIC have led to widespread use and popularity beyond competing optical techniques such as Moiré interferometry <cit.> or holography <cit.>. Two-dimensional (2-D) DIC utilises a single camera to measure in-plane sample deformation through cross-correlation of sequential greyscale images and is capable of obtaining displacement vectors with up to 0.01 pixel precision <cit.>. Since its conception DIC has become an essential technique in experimental/solid mechanics, as well as having uses in material characterization, architecture, aerospace and biology <cit.>. Two-dimensional DIC is limited to measuring the in-plane deformation of planar samples, hence requiring the camera to be located normal to the target surface. However, sample deformation is rarely limited to in-plane motion, and more general target geometries require three-dimensional (3-D) position and displacement measurements. In this work this is achieved through 3-D DIC, that is, stereoscopic imaging of the sample using two calibrated cameras. Combining knowledge of the relative camera angles with standard 2-D DIC reveals the out-of-plane deformation <cit.>.
Many commercial systems for 3-D DIC now exist (VIC-3D <cit.>, ARAMIS <cit.>, DaVIS <cit.>); however, both the hardware and software can be prohibitively expensive for smaller research groups. In addition, these closed-source codes prevent modification or improvement by the user and can lead to uncertainty over the precise details of the algorithms implemented. Performing 3-D DIC at high strain-rate introduces further complications and expense, as this typically necessitates the use of two synchronized high-speed cameras. In such systems the two greatest costs are the high frame-rate cameras and the bespoke software required for stereo camera calibration and digital image correlation, while the electronic synchronization of the two cameras provides the largest source of temporal error. To this end, the development of a single camera 3-D DIC system, which eliminates issues surrounding high-rate camera synchronization and offers improved stability, has received considerable attention. However, previous attempts place stringent requirements on target geometry <cit.>, require precise camera and optical set-ups <cit.>, or are analyzed with commercial software <cit.>. Here we aim to demonstrate a low-cost 3-D DIC system capable of achieving high-rate, high spatial resolution, three-dimensional deformation measurements. In the following study we develop a single camera 3-D DIC system which alleviates synchronization errors and is based on freely available open-source codes. The camera calibration was carried out using the Caltech Calibration Toolbox for MATLAB <cit.>, while the 2-D DIC is achieved using Ncorr, developed at the Georgia Institute of Technology <cit.>. The Ncorr system has previously demonstrated excellent agreement with commercial software <cit.>. The aim of this work is to assess the applicability of using a single camera together with open-source software to achieve three-dimensional, high-rate, high spatial and displacement resolution DIC, taking advantage of the improved temporal synchronisation and stability associated with single camera systems and the flexibility and transparency associated with using open-source codes, at a fraction of the cost of a dual-camera commercial system. This system has been tailored for a field of view of a few millimeters, as this allows the study of adiabatic shear bands (ASBs), a widely encountered phenomenon in engineering alloys subjected to high strain-rate loading <cit.>. They form on the sub-millimeter scale and are of particular interest to the aerospace industry, where shear bands often occur when a hard object impacts the rotating fan blades at high speed; the velocity of impact leads to low thermal conduction and an increase in temperature, thermal softening and ultimately catastrophic failure from brittle-like fracture <cit.>. The study of ASBs is one example of a situation that requires high-rate 3-D DIC; however, the approach discussed here is broadly applicable, and can be tailored to different applications by simple exchange of camera lens and optical components. The need for 3-D DIC in the study of ASBs is not immediately obvious, as they are typically a 2D phenomenon when viewed on the surface of the sample. However, with a 2-D system the camera must be placed orthogonal to the sample surface, which is assumed to move only in-plane; this constraint is relaxed in the 3-D setup as the surface normal is explicitly measured. A 3-D system thus allows measurements to be taken despite sample motion (e.g.
tilt, bulging, necking) and when non-planar sample geometries are used. Furthermore, ASBs are by definition a localization phenomenon and therefore mark the material deformation in a non-homogeneous manner, with regions of large plastic deformation <cit.>. Measurement of the out-of-plane displacement allows the full finite-strain Lagrange strain tensor to be calculated <cit.>. Retaining all terms in the Lagrange strain tensor ensures the large finite strains present in ASBs are appropriately described.

§ EXPERIMENTAL SET-UP

The general approach uses reflective optics to image a sample surface from two viewing angles onto a single camera. The set-up consists of two planar 25 mm square mirrors along with a single knife-edge right-angled prism mirror; these were used to create two views of the target on a single CCD, see Figure <ref>. Both the prism mirror and side mirrors are separately mounted such that they can be moved independently to adjust the image on the camera. As with most commercial set-ups, the system is relatively insensitive to the exact angles used, due to the robustness of the camera calibration method; the angles do not need to be known accurately a priori. However, the angle subtended by the two mirrors and the sample (marked θ in Figure <ref>) should remain between 20°-30° to achieve a good balance between out-of-plane precision and image correlation. A larger angle allows the out-of-plane motion to be determined with greater accuracy, while conversely making the image correlation between the two images more difficult due to increased image disparity <cit.>. The distances and angles used in this work are given in Figure <ref>, as are images showing a typical field of view as focus is shifted from the knife-edge to the target. Future work could involve extending the ideas presented here to more than two images, and hence viewing angles, helping to constrain the problem further and improve accuracy. In order to demonstrate the ability to measure the full-field displacement of both in- and out-of-plane motion from a single camera, the system was set up to observe a flat speckled surface. The speckle pattern was applied using an Iwata HP-B Plus airbrush with a 200 μm nozzle, using Daler Rowney FW Acrylic Artist's Ink, Black 028. This ink was found to adhere best to the sample surface. The pressure was varied to alter the size of the droplets; a pressure of 1 bar was found to give an optimal speckle. In this case an optimal speckle is defined as being without large featureless areas, which reduce the achievable spatial resolution, and without fine marks below the optical resolution of the set-up. A nozzle-to-sample distance of 23 cm was used. A good review of optimal speckle patterns for DIC can be found in reference <cit.>. A typical speckle pattern can be seen in the insert of Figure <ref>. This target was mounted on a Thorlabs PT3-Z8 translation stage providing XYZ motion, whose motorised Z825B actuators provide up to 25 mm travel with a minimum resolution of 29 nm. A camera (either a digital SLR or high-speed camera) was attached to an Infinity K2 DistaMax long-distance microscope lens (standard configuration, with 4× magnification provided by two NTX tube amplifiers) to allow ultra-high resolution images to be taken. The entire experiment was performed on a vibrationally-damped optical table to isolate the set-up from any external vibrations.
The high frame-rate Phantom is capable of taking 6600 frames per second utilising all 800x600 pixels, and up to 50000 frames per second at a reduced pixel resolution of 240x184. Table <ref> lists the parameters of the three different set-ups evaluated in this work.

§ RECONSTRUCTING THE 3-D WORLD

The majority of this work revolves around the reconstruction of the 3-D world from a pair of stereoscopic images. The general method is outlined in Figure <ref>, with steps 1-3 representing calibration of the system, steps 4-5 digital image correlation between pairs of images, and step 6 combining these results with the calibration to produce a 3-D surface and displacement map, which are then both used to produce a surface strain map. This work utilises the pinhole camera model for the projective mapping from 3-D world coordinates, denoted 𝐏_𝐰=[X_w Y_w Z_w 1], to the two-dimensional camera coordinates, denoted 𝐂_1=[u_1 v_1 1]. Mathematically this is expressed as the matrix multiplication

𝐂_1= [ u_1; v_1; 1 ] = 𝐊_1·[ 𝐑_1 | 𝐓_1 ]·𝐏_𝐰,

where 𝐊_1 is the intrinsic matrix of the camera, given by

𝐊_1= [ f_x s x_0; 0 f_y y_0; 0 0 1 ].

The intrinsic matrix contains the coordinates of the principal point of the camera, (x_0, y_0), the horizontal and vertical focal lengths of the camera (f_x, f_y), given in units of pixel dimensions, and s=α_c f_x, which allows for non-rectangular pixels through the angle α_c. The terms 𝐑_1 and 𝐓_1 together define the 3×4 extrinsic matrix of camera 1 and relate the position of the camera to a particular world coordinate frame, through a rotation followed by a rigid body translation. For a stereo system, i.e. a system where two cameras are used to resolve a single scene, an equivalent equation can be written for the second camera,

𝐂_2= 𝐊_2·[ 𝐑_2 | 𝐓_2 ]·𝐏_𝐰.

Throughout this work the reference frame of camera 1 is used as the world reference frame; hence, 𝐑_1=𝐈, 𝐓_1=0, 𝐑_2=𝐑 and 𝐓_2=𝐓. Therefore equations (<ref>) and (<ref>) together represent a system of four simultaneous equations. Hence, with knowledge of the position of a single point on each camera, (u_1,v_1,u_2,v_2), the equations can be solved to obtain the position of that point in the world reference frame, 𝐏_𝐰. The identification of equivalent points within each image is achieved through standard 2-D DIC. If the imaging system contains distortion then the reconstructed 3-D world, represented by the points 𝐏_𝐰, will also suffer from distortion. However, the camera calibration toolbox is naturally able to identify distortion in the imaging system through the chequerboard calibration method <cit.>. Radial distortion is parameterized through the two parameters κ_1 and κ_2, while tangential distortion is included through τ_1 and τ_2. The distorted world coordinates, 𝐏_𝐰, can then be related to the undistorted coordinates, 𝐏'_𝐰, through Equations (<ref>) and (<ref>), where x=X'_w/Z'_w and y=Y'_w/Z'_w are the reduced undistorted coordinates, r^2=x^2+y^2, and 𝐓_𝐝 is the translational component of distortion:

[ X_w/Z_w; Y_w/Z_w ] = [ x; y ] (1+κ_1 r^2+κ_2 r^4) + 𝐓_𝐝,

𝐓_𝐝=[ 2τ_1 xy + τ_2(r^2+2x^2); 2τ_2 xy + τ_1(r^2+2y^2) ].

Steps 1-3 represent the process of camera resectioning, that is, the technique by which both the intrinsic (𝐊 - focal length, pixel properties and distortion) and the extrinsic (𝐓 - translational and 𝐑 - rotational displacements) parameters of two or more cameras can be found.
It is a crucial component of any stereoscopic imaging technique and this work utilises the Caltech Calibration Toolbox <cit.> as well as the inbuilt Stereo Calibration function in MATLAB <cit.>. Calibration in both methods is achieved using multiple images of a known, and well defined, two-dimensional pattern. Typically an asymmetrical chequerboard is used. Both methods used here utilise a non-linear minimisation on the re-projected chequerboard to find the unknown coefficients through a steepest gradient descent method <cit.>. The multiple chequerboard images enable the 3-D space to be mapped onto the 2-D images. The re-projection errors, i.e. the offset between a chequerboard corner and the re-projection of the corner using the pin-hole camera formalism, are typically found to be sub-pixel in all but the most distorted systems. The chequerboard calibration technique contains a single free parameter, the chequerboard width, which sets the scale of the image.

Steps 4-5 of the 3-D reconstruction are based on identifying the pixel positions (𝐂 - u_1,v_1,u_2,v_2) of equivalent points in the set of two stereoscopic images; this is achieved through standard 2-D DIC techniques. Ncorr is an open-source MATLAB program for 2-D DIC which runs entirely inside the MATLAB environment; this enables easy compatibility with the camera calibration methods discussed above, but additionally utilises C++ and MEX to improve efficiency <cit.>. Ncorr uses the inverse compositional method which is fast, robust and accurate compared to more traditional Newton-Raphson techniques. During the 2-D DIC the image is divided into circular subsets which are used to correlate spatial regions between the two images; the larger this subset, the greater the precision with which displacement or position can be found, but at the cost of spatial resolution: features smaller than the subset size cannot be resolved. A circular subset of radius 21 pixels was found to work well for the current implementation, being the smallest subset possible without producing noisy artefacts in the data, and has been used in the displacement, low rate and high rate tests. The comparison with the DaVIS software was performed with a subset size of 31 pixels.

Step 6 involves reconstructing the 3-D world from the 2-D information. By using the known camera parameters (𝐑_1,𝐓_1,𝐊_1,𝐑_2,𝐓_2,𝐊_2) and equivalent pixel locations (𝐂_1,𝐂_2) to solve the resultant simultaneous equations given by (<ref>) and (<ref>), the world pixel locations, 𝐏_𝐖, are obtained. In order to reconstruct a 3-D scene at a single time the 2-D DIC needs to be run between the left and right stereoscopic images only. However, obtaining 3-D displacements is more involved, as it is not possible to perform 2-D DIC between the stereoscopic images at time t_1 and separately at time t_0 and subtract the results, because image correspondence is lost, i.e. pixels at two differing times may not be looking at the same point in the 3-D space. Instead 2-D DIC must be performed between the images as shown by the lines in the dashed box in Figure <ref>. This enables the positions of equivalent pixels in each image at two differing times to be obtained and solves the sub-image correspondence problem, allowing the software to ascertain which parts of each image correspond to the same part in another <cit.>. This allows for subtraction of the world coordinates, providing information on the 3-D displacement.
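As an illustration of step 6, a minimal linear-triangulation sketch (in Python) is given below. It assumes the calibrated intrinsics 𝐊_1, 𝐊_2 and extrinsics 𝐑, 𝐓 described above (camera 1 taken as the world frame) together with matched pixel coordinates from the 2-D DIC; the function and variable names are our own, and the direct-linear-transform solver stands in for, rather than reproduces, the routines used in the toolboxes.

import numpy as np

def triangulate(K1, K2, R, T, uv1, uv2):
    # Linear (DLT) triangulation of DIC-matched pixel coordinates.
    # uv1, uv2 : (N, 2) arrays of (u, v) positions in cameras 1 and 2.
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # camera 1: K1 [I | 0]
    P2 = K2 @ np.hstack([R, T.reshape(3, 1)])           # camera 2: K2 [R | T]
    X = np.empty((len(uv1), 3))
    for i, ((u1, v1), (u2, v2)) in enumerate(zip(uv1, uv2)):
        # Each view contributes two rows of the homogeneous system A x = 0.
        A = np.stack([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                      u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X[i] = Vt[-1, :3] / Vt[-1, 3]                   # de-homogenise P_w
    return X

Triangulating the correspondences at t_0 and at t_1 separately, as indicated by the dashed box of the pipeline figure, and subtracting the two reconstructions then yields the full 3-D displacement field.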
A graphical user interface was developed in MATLAB which interfaced with the camera calibration toolbox and the Ncorr DIC system, allowing the entire process to be carried out from within a single program.

§ STRAIN CALCULATION

Step 7 involves calculation of the strain map from the surface and displacement maps. Calculation of the full-field strains is performed over a local volume element, centred on each reconstructed point and containing those points which lie within a predefined radius. Judicious choice of this radius is required to optimise the balance between accuracy and smoothness. The displacements are rotated into a local coordinate system with the z axis locally normal to the surface. The displacement gradient tensor, 𝐇=[ ∂ u_x/∂ x ∂ u_x/∂ y ∂ u_x/∂ z; ∂ u_y/∂ x ∂ u_y/∂ y ∂ u_y/∂ z; ∂ u_z/∂ x ∂ u_z/∂ y ∂ u_z/∂ z ] can then be calculated through linear regression. The deformation gradient tensor (𝐅=𝐇+𝐈), Cauchy strain tensor (𝐂=𝐅^T𝐅) and Lagrange strain tensor (𝐄=1/2(𝐂-𝐈)) are defined in the usual way. It should be noted that the derivatives with respect to the surface normal direction, here defined as z, cannot be obtained from measurements of the surface displacement without further assumptions, which leads to only the E_xx, E_yy, E_xy and E_yx components of the Lagrange strain tensor being defined, with the in-plane local directions x and y chosen in advance.

§ DISPLACEMENT ACCURACY

An initial demonstration of the accuracy of any DIC system is to replicate known linear motions both in- and out-of-plane. For this purpose we drove a flat target with the speckle pattern shown in the inset of Fig. <ref> through distances of up to 160 μm in the x-, y- and z-directions, with the z-direction corresponding to out-of-plane displacement. Using Thorlabs Z825BV actuators the speckle pattern was driven to a specified distance, always moving in the direction of positive travel to avoid backlash. At each distance the sample was held stationary and an image taken. The same speckle pattern was used for each of the tests. Figure <ref> shows the displacements measured with the high resolution Canon set-up and the read-out from the linear actuators; the results show exceptional agreement between the calculated and actual displacements with an RMS error of 0.5 μm. In order to test the performance of the reconstruction algorithm during high rate experiments the Canon EOS 600D was replaced with a Phantom V7.3 high speed camera using just 240×184 pixels of the CCD. At this resolution the camera is capable of running at 50000 frames per second; the results are shown in Figure <ref> and demonstrate an RMS error of 1.6 μm. The RMS error was found to be similar, i.e. within 20%, in each of the orthogonal directions. However, it is not clear if the reduction in precision can be attributed to the reduced optical resolution, the applicability of the speckle pattern at this resolution or instability introduced through the cooling fan in the Phantom V7.3. To test for sources of vibration, the displacement between subsequent images of a stationary target was measured and found to be 1.6 μm when taken 1 s apart and 1.5 μm when taken 0.2 ms apart, suggesting some component of the resolution is due to vibrational motion. This becomes more pronounced when running the Phantom with the full CCD area (800×600 pixels) at 6600 frames per second. In this case the displacement between subsequent images of a stationary target was found to be 1.5 μm when taken 1 s apart and 0.5 μm when taken 0.15 ms apart.
This discrepancy suggests a source of vibration in the measurements with the Phantom V7.3. While these measurements do not preclude the possibility that the source of error is fluctuations in the target environment, the absence of such motion with the Canon is highly suggestive that the cause is internal to the Phantom. Indeed, the Phantom V7.3 uses a continuous cooling fan while newer models (e.g. v2511) contain a `Quiet Fans' mode which turns off the fans for a short period specifically for vibration-sensitive applications. It should be noted that for optimal performance the idealised speckle pattern should be modified to match the field of view and resolution <cit.>; however, this work utilised the same pattern in each case, and therefore it may be possible to improve these results further. However, the achieved accuracy is consistently around 200 times better than the optical resolution, comparable to other DIC systems <cit.>, and as such we would expect this to give marginal improvements.

§ COMPARISON TO DAVIS SOFTWARE

In order to demonstrate the applicability of the software we have performed a comparison of our system to a commercially available code, in this case the DaVIS software by LaVision <cit.>. A comparison test was performed on a short length of cold-formed galvanised steel C-section with the dimensions given in Figure <ref>. A two-camera system compatible with the DaVIS software was used and calibrated using the respective techniques: in the case of the DaVIS software this was a 3D calibration board, while our system was calibrated using the chequerboard technique. The same images were fed into both codes to remove ambiguity around illumination, speckle quality, camera resolution or lens quality. Speckle was applied using standard black spray paint over a white base coating to give maximum contrast, see Figure <ref>. Compression was applied to the sample at a constant velocity using an Instron SATEC with a 600 kN loadcell, at 10 mm/min. The results of the compression test are given in Figure <ref>. Figure <ref> shows the displacement of the sample along the dashed-dotted line in Figure <ref>; results are shown for times corresponding to 1 minute snapshots from the beginning of the compression. Due to the lack of a fiducial, the results from the two codes have been shifted horizontally to find the closest agreement, and were found to match to within 200 μm. Comparison of the local strain calculations between the two systems was carried out by calculating the maximum normal surface strain across the same lineout. The maximum normal surface strain was calculated from the eigenvalues of the Lagrange strain tensor. All points within a 1 mm radius sphere are used to calculate each strain value. Results are shown in Fig. <ref>. The two techniques agree to within 0.2% strain. A projection of the maximum normal surface strain onto the image of the deformed sample is shown in Figure <ref> for clarity.

§ LOW STRAIN RATE TESTS

In order to demonstrate the performance of the system a quasi-static compression test and a quasi-static tension test were performed. The first of these involved investigating the shear strain induced in millimeter-sized top-hat specimens when subjected to a compressive load. This test utilised the Canon EOS at full resolution and is principally designed to demonstrate the high resolution aspect of the system. A flat-faced top-hat shaped sample with dimensions given in Figure <ref> was used.
The samples were made from near-alpha titanium alloy IMI834, where high rate material behaviour has important relevance with regard to aero-engine discs, and in particular shear-band formation <cit.>. The characteristic shape and size of the sample leads to a small region of high shear strain localised in the region where the top-hat leg intersects the body <cit.>. A compressive force was applied to the top of the sample while the legs of the sample were held still against an anvil. Figure <ref>a-d shows an overlay of the calculated y-displacement (that is, displacement in the vertical direction) plotted against global strain, where global strain is defined as ΔL/L with ΔL the sample compression and L the initial length. Figure <ref>e shows a spatial average of the shear strain across the high shear region. Here, the shear strain is defined as E_xy, where 𝐄 is the Lagrange strain tensor defined above and x and y correspond to the local, in-plane directions shown in Figure <ref>a. As the global compressive strain is increased a shear band can be seen to develop; the width of this shear band, approximately 400 μm, necessitates a high resolution DIC system, which in this case is achieved through the use of the K2 lens. Towards the end of the test the shear strain increases further to a maximum observed shear strain of 0.12 and ultimately leads to sample failure.

In order to test the 3-D aspect of the DIC system a quasi-static tension test was used. A dog-bone shaped sample of low-carbon steel was chosen due to its tendency to neck under tension; a schematic of the sample is given in Figure <ref>. The sample was 33.81 mm long with a depth of 5.92 mm and width of 2.96 mm. The DIC system was set up to image the thinner 2.96 mm face; this was chosen to maximise the out-of-plane motion during the test. In order to have a field of view that encompasses the full length of the sample, the additional 4× magnification used throughout this work was removed. In this case it was found that the surface roughness of the sample was such that a speckle pattern did not need to be applied. Tension was applied to the sample at a constant velocity using an Instron 5584, with a 100 kN loadcell, at 1 mm/min. Images of the resulting necking are shown in Figure <ref>; the out-of-plane motion averaged over the highlighted area is given in the inset. Each measurement contains less than 50 μm displacement variation across the sample width.

§ HIGH STRAIN RATE TESTS

The experimental set-up discussed above has been designed to offer a field of view of a few millimeters with a spatial resolution on the order of a few microns. The system was specifically designed to study the spatial and temporal characteristics of adiabatic shear bands, a phenomenon typically associated with high strain-rate deformation and attributed to poor heat dissipation, thermal softening and ultimately material failure. Understanding this mechanism for failure is of the utmost importance in many industrial applications <cit.>. To further demonstrate the ease-of-use of the system we used a Phantom 2511 high-rate camera capable of recording 120 000 frames per second (8.3 μs inter-frame time) at a resolution of 240×896 pixels to record a top-hat sample under high strain-rate compression. The high-rate loading was performed on a split-Hopkinson pressure bar <cit.> utilising 6 mm diameter maraging steel bars held in position with frictionless air bearings.
A 40 cm long striker capable of supplying a 140 μs loading pulse is used to compress millimeter-sized metallic samples at strain rates ranging from 10^2 to 10^4 s^-1. A schematic of the set-up is shown in Figure <ref>. As before, a flat-faced top-hat shaped sample of IMI834 with dimensions given in Figure <ref> was used. A striker velocity of 4.8±0.1 m/s, measured optically, was selected to produce an average global strain rate of 1.2×10^3 s^-1. The 3-D DIC system was used to measure the surface deformation, and in particular the shear strain. Figure <ref>a-c shows the sample with an overlay of the calculated x-displacement (that is, displacement in the compressive direction) while Figure <ref>d shows a spatial average of the shear strain across the high shear region. As the global compressive strain is increased a region of high shear can be seen to develop. In Figure <ref> the deformation of the target causes the speckle pattern to become distorted to such a degree that the image correlation technique is no longer able to identify the correct regions, causing spatial anomalies in the displacement map. The width of the shear band, approximately 250 μm, is less than that observed in the low strain-rate tests. This is expected as the shear band characteristic width is a combination of thermal diffusivity and loading time <cit.>; in the high rate test heat is localised close to the region of maximum shear. As before, towards the end of the test the shear strain increases further to a maximum observed value of 0.16 and ultimately leads to sample failure. The widths of the shear bands found in this work are larger than those found previously in Ti alloy samples <cit.> and those predicted by crystal plasticity modelling <cit.>, which are typically in the range 10-50 μm; however, the top-hat setup is sensitive to geometric effects, in particular the overlap of the leg with the body of the top-hat <cit.>. Future work will investigate how the shear-band width varies with parameters such as strain-rate, geometry and microstructure.

§ CONCLUSIONS

Experimental mechanics is constantly investigating the performance of new materials under load as attempts are made to improve cost, efficiency and performance. A crucial metric is obtaining the stress-strain data from such tests, and the far-field measurements typically obtained are supplemented with surface strain measurements. This data is then fed into constitutive material models to aid future development, and can often involve performing tests on non-uniform samples or unique sample geometries. A full 3-D DIC displacement map allows a complete and direct comparison with finite element analysis, allowing greater confidence in obtaining elastoplastic parameters such as strength or strain hardening exponents <cit.>.
A method for obtaining the full-field displacement of both in- and out-of-plane motion from a single high frame-rate camera is demonstrated. Using a standard digital SLR (Canon EOS) and a high frame rate camera (Phantom V7.3) together with an Infinity K2 DistaMax long-distance microscope lens enabled high resolution images to be taken. To achieve this we utilised two open-source codes. The Ncorr two-dimensional DIC code is used to perform the correlations between a pair of 2-D images while the MATLAB camera calibration or Caltech camera calibration toolbox is used to calibrate the system through the chequerboard method. The single camera system provides improved temporal synchronisation and spatial stability while the open-source software provides flexibility and transparency without resorting to expensive commercial software or necessitating multiple high speed cameras. Tests were performed using a high resolution Canon EOS with 5184×3456 pixels as well as with the Phantom high speed camera at full (800×600) and reduced (240×184) pixel number. In each case the maximum obtainable accuracy in the DIC displacement tests was found to be around 200-300 times better than the optical resolution of the set-up, in line with previous work <cit.>. The software was then compared to the commercially available DaVIS code using the same image sets. This removed any ambiguity around illumination, speckle quality, camera resolution or lens quality. The code reproduced the displacement to within 200 μm and the maximum normal surface strain to within 0.2% strain. The performance of the system was then demonstrated in two quasi-static strain tests. In the first test a millimeter-sized top-hat sample was compressed to induce large shear strains in the region between the legs and body of the top-hat sample. The high resolution of the DIC system enabled a 400 μm band of high shear strain to be identified and quantified before failure. The second quasi-static test utilised a dog-bone sample in a tension test to observe necking in the Z direction. Large displacements were observed and the necking was measured to within 50 μm uncertainty. Finally, a high strain-rate test on the same top-hat samples as those compressed quasi-statically was carried out on a split-Hopkinson pressure bar at a global strain rate of 1.2×10^3 s^-1. Images captured at 120 000 frames per second showed a thinner region of high shear approximately 250 μm wide, suggesting adiabatic conditions were reached in the high rate tests. Future work will utilise the single camera system and codes to further investigate the physics surrounding dynamic events on the sub-millimeter scale such as grains, shear bands and localised surface strains.

§ ACKNOWLEDGEMENTS

This research was supported by EPSRC grant Heterogeneous Mechanics in Hexagonal Alloys across Length and Time Scales (EP/K034332/1). In addition, the authors are very grateful to David Rugg and Rolls-Royce plc for provision of Ti material.

chu T. Chu, W. Ranson, M. Sutton, Exp. Mech. 25(3), (1985) 232-244
sutton M. A. Sutton, M. Cheng, W. H. Peters, Y. J. Chao, S. R. McNeill, Image Vision Comput. 4(3), (1986) 143-150
interferometry D. Post, Exp. Mech. 23(2), (1983) 203-210
holography T. D. Dudderar, H. J. Gorman, Exp. Mech. 13(4), (1973) 145-149
suttonbook H. Schreier, J. J. Orteu, M. A. Sutton, Digital Image Correlation for Shape and Deformation Measurements (Springer, Verlag, 2009)
luo P. Luo, Y. Chao, M. Sutton, W. Peters, Exp. Mech. 33(2), (1993) 123-132
vic3d VIC-3D system (Correlated Solutions, Columbia, South Carolina)
aramis ARAMIS system (GOM Company, Braunschweig, Germany)
davis DaVis system (LaVision, Goettingen, Germany)
singlecamera2 C. J. Tay, C. Quan, Y. H. Huang, Y. Fu, Opt. Commun. 251, (2005) 23-36
singlecamera3 C. Quan, C. J. Tay, W. Sun, X. He, Appl. Opt. 47, (2008) 583-593
singlecamera4 W. Sun, E. Dong, X. He, Proc. SPIE 6723, (2007) 37230
Prentice H. J. Prentice, W. G. Proud, AIP Conf. Proc. 845 (2006) 1275-1278
singlecamera1 M. Pankow, B. Justusson, A. M. Waas, Applied Optics 49(17), (2010) 3418-3427
CameraCal J. Y. Bouguet, Camera Calibration Toolbox for Matlab (2013). Available at: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
Ncorr1 J. Blaber, A. Antoniou, Ncorr: Open-source 2D Digital Image Correlation Matlab Software (2014). Available at: http://www.ncorr.com
Ncorr3 J. Blaber, B. Adair, A. Antoniou, Exp. Mech. 55 (2015) 1105
Ncorr2 R. Harilal, M. Ramji, Adaptation of Open Source 2D DIC Software Ncorr for Solid Mechanics Applications, 9th International Symposium on Advanced Science and Technology in Experimental Mechanics (2014)
bdodd B. Dodd and Y. Bai, Adiabatic Shear Localization (Elsevier, 2012)
mechconmed L. E. Malvern, Introduction to the Mechanics of a Continuous Medium (New Jersey: Prentice-Hall, 1969)
aerospace I. Balasundara, T. Raghua, B. P. Kashyapb, Prog. Nat. Sci. 23(6) (2013) 598-607
speckle T. A. Berfield, J. K. Patel, R. G. Shimmin, P. V. Braun, J. Lambros, N. R. Sottos, Exp. Mech. 47, (2007) 51-62
matlab MATLAB version 8.3.0.532 (R2014a). Natick, Massachusetts: The MathWorks Inc., 2014
PrenticeThesis H. J. Prentice, Development of Stereoscopic Speckle Photography Techniques for Studies of Dynamic Plate Deformation (Magdalene College, University of Cambridge, 2006)
shpb1 H. Kolsky, Proc. Phys. Soc. London, B62, (1949) 676
shpb2 B. A. Gama, S. L. Lopatnikov and J. W. Gillespie, Appl. Mech. Rev. 57(4), (2004) 223-250
shpb3 A. Mohammadhosseini, S. H. Masood, D. Fraser and M. Jahedi, Adv. Manuf. (3), (2015) 232
tophat1 J. Peirs, P. Verleysen, J. Degrieck, and F. Coghe, International Journal of Impact Engineering 37(6), (2010) 703-714
tophat2 R. Clos, U. Schreppel, and P. Veit, Journal de Physique, 110(4), (2003) 111-116
tophat3 X. Teng, T. Wierzbicki, and H. Couque, Mechanics of Materials, 39(2), (2007) 107-125
stevesmith S. W. Smith, The Scientist and Engineer's Guide to Digital Signal Processing (California Technical Pub, 1st edition, 1997)
3ddicfea G. Li, F. Xu, G. Sun, Q. Li, Int. J. Adv. Manuf. Technol. 74 (2014) 893-905
zhen Z. Zhang, D. E. Eakins, F. P. E. Dunne, International Journal of Plasticity 79 (2016) 196-216
bandwidth1 S. Liao, J. Duffy, Journal of the Mechanics and Physics of Solids 46(11) (1998) 2201-2231
bandwidth2 F. Coghe, L. Rabet, L. Kestens, Journal De Physique. IV 134 (2006) 845-850
http://arxiv.org/abs/1702.08263v2
{ "authors": [ "T. G. White", "J. R. W. Patten", "K. -H. Wan", "A. D. Pullen", "D. J. Chapman", "D. E. Eakins" ], "categories": [ "physics.ins-det" ], "primary_category": "physics.ins-det", "published": "20170227130738", "title": "A single camera three-dimensional digital image correlation system for the study of adiabatic shear bands" }
Short-baseline electron antineutrino disappearance study by using neutrino sources from the ^13C + ^9Be reaction
Jae Won Shin, Myung-Ki Cheoun, Toshitaka Kajino, Takehito Hayakawa
December 30, 2023
================================================================================

§ INTRODUCTION

Over the past few decades, a considerable number of studies have been performed on neutrino oscillation, with great success in measuring the neutrino mixing angles. However, some neutrino oscillation experiments revealed more or less disagreement with the three-flavor neutrino model; these are termed neutrino anomalies, as reported in LSND <cit.>, MiniBooNE <cit.>, reactor experiments <cit.> and gallium experiments <cit.>. One of the approaches for explaining the neutrino anomalies is to presume the existence of a hypothetical fourth neutrino, called the sterile neutrino, because the sterile neutrino does not interact with other particles except through mixing with the active neutrinos.

The possibility of the existence of the sterile neutrino, triggered by these anomalies, is now widely discussed in the fields of particle physics, nuclear physics, and astrophysics including cosmology. Most neutrino experiments are classified into long-baseline accelerator and short-baseline reactor neutrino experiments. The former neutrino source comes from high-energy proton accelerators; this source produces high-energy and high-flux neutrinos from pion or kaon decay at rest or in flight and utilizes long-distance detectors from the accelerator, except for the MiniBooNE-like experiments. In contrast, the latter relies on neutrinos from nuclear reactors, which enables the use of detectors at relatively short distances from the source, but requires pinning down the ambiguity in the neutrino spectrum stemming from the vast numbers of nuclear fissions in the reactor <cit.>. Here, it should be noted that recent results from the IceCube neutrino telescope, which limit the mixing of the sterile neutrino with the muon neutrino, do not constrain the neutrino mixing angle relevant to the reactor anomaly <cit.>.

Recently many interesting studies of the existence of sterile neutrinos (ν_s) have been proposed with antineutrino sources from radioactive isotopes or an accelerator-based Isotope Decay-At-Rest (IsoDAR) concept <cit.>. They utilize neutrino sources different from the two mentioned above. For instance, KamLAND (CeLAND) <cit.> and Borexino (Short distance neutrino Oscillations with BoreXino (SOX)) <cit.> plan to perform experiments using approximately 100 kCi of ^144Ce-^144Pr radioisotopes, which can generate antineutrinos with energies of up to 3 MeV. As another type of source, electron antineutrinos (ν̅_e) produced by ^8Li using an accelerator-based IsoDAR concept were proposed <cit.>. The ν̅_e from ^8Li have higher energies than those from ^144Ce-^144Pr, and thus can be used for the study of the antineutrino spectrum distortion in the energy region of 5 MeV < E_ν̅ < 7 MeV, where some distortions or anomalies are reported by the reactor antineutrino experiments (Daya Bay <cit.>, Double Chooz <cit.> and RENO <cit.>). On the other hand, there are many ion beam facilities such as FAIR at GSI, FRIB at MSU, HRIBF at ORNL, ISAC at TRIUMF, ISOLDE at CERN, RIBF at RIKEN, SPES at INFN, and SPIRAL2 at GANIL. They have been designed or are being developed with features (e.g. use of high-intensity high-power primary beams, use of large-aperture high-field superconducting magnets, etc.) suitable for a variety of scientific purposes <cit.>.
The ion beams can provide us with new opportunities in neutrino physics, especially for the production of artificial neutrino sources. By using the ion beams with compact neutrino detectors such as DANSS <cit.>, NEUTRINO4 <cit.>, NUCIFER <cit.>, PANDA <cit.>, PROSPECT <cit.>, and STEREO <cit.>, which have been planned to measure reactor neutrinos at a distance of several meters, we can test the existence of sterile neutrinos, in particular on the 1 eV mass scale. In this work, to study the possibility of the sterile neutrino, we propose a new neutrino production method with a ^13C beam and a ^9Be target. Unstable isotopes such as ^8Li and ^12B can be produced through the ^13C + ^9Be reaction and decay subsequently. Thus, we can obtain new neutrino sources (ν̅_^13C + ^9Be) from the possible beta decay processes of the unstable isotopes produced in the ^13C + ^9Be reaction. In a sense, the production mechanism is similar to that of reactor neutrinos. In this case, however, the neutrino energy spectrum is much more easily identified than that of reactor neutrinos, because the number of isotopes is limited and the geometry is simple. The production of secondary isotopes from the ^13C + ^9Be reaction is calculated using the GEANT4 particle transport Monte Carlo code <cit.>, with three different nucleus-nucleus (AA) reaction models. Different isotope yields are obtained using these models, but the resulting neutrino fluxes are shown to have a striking similarity. This unique feature gives a chance for neutrino oscillation studies through shape analysis, regardless of the theoretical AA model considered. The expected neutrino flux and event rates including the sterile neutrino contribution are discussed in this work.

The outline of the paper is as follows. In section <ref>, the proposed experimental setup, the neutrino detection and the simulation tools including the nuclear reaction models are described. In section <ref>, the results for the expected neutrino flux and event rates are presented, and neutrino disappearance features and possible reaction rate changes due to the sterile neutrino are discussed, with possible inherent errors, in section <ref>. A summary is given in section <ref>.

§ METHODS

§.§ Proposed experimental setup

We propose an electron antineutrino source using an accelerator-based IsoDAR concept with a ^13C beam and a ^9Be target. A 75 MeV/u ^13C beam with a current of 300 pμA, namely 293 kW of beam power, is considered in this work. There are many ion beam facilities which have been designed and are being developed with advanced features (e.g. use of high-intensity high-power primary beams). For example, a beam power of 400 kW is targeted at FRIB (MSU). In this regard, the present proposal is a feasible experimental plan. Figure <ref> (a) shows the geometrical setup for the neutrino production. The ^9Be target is modeled as a cylinder with a radius of 5 cm and a thickness of 10 cm. Neutrinos are obtained from unstable isotopes produced through the ^13C + ^9Be reaction. The ^9Be target is surrounded by a D_2O layer of 5 cm thickness for cooling. In addition, tungsten, boron carbide, and stainless steel layers surround the ^9Be target and the D_2O layer for effective secondary neutron generation and shielding of γ-rays. The electron antineutrino source proposed in this work can be useful for neutrino disappearance studies investigating the existence of a fourth neutrino, the sterile neutrino.
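As a quick consistency check of the quoted beam parameters (this check is ours, not part of the original design study), the 293 kW figure follows directly from the stated particle current and energy per nucleon; the short Python snippet below uses only standard physical constants.

e = 1.602176634e-19            # elementary charge [C]
I_particle = 300e-6            # particle current [A], i.e. 300 pmuA
ions_per_s = I_particle / e    # ~1.9e15 13C ions per second
energy_per_ion_eV = 75e6 * 13  # 75 MeV/u x 13 nucleons
power_W = ions_per_s * energy_per_ion_eV * e
print(f"{ions_per_s:.2e} ions/s, beam power = {power_W/1e3:.1f} kW")
# -> 1.87e+15 ions/s, beam power = 292.5 kW, consistent with the ~293 kW quoted.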
Figure <ref> (b) shows the proposed short-baseline experiment setup consisting of the neutrino production targetry and the detection sector. Lead blocks of 1 m thickness surround the neutrino production targetry to shield the radiation from the targetry. In this work, we consider three ring-shaped detectors with a height of 1 m and a thickness of 1 m.

§.§ Electron antineutrino detection

For electron antineutrino detection, an inverse beta decay (IBD) reaction, ν̅_e + p → e^+ + n, is considered in this work. This IBD reaction offers two signals in neutrino detection: a prompt signal due to the annihilation of a positron and a delayed signal of a 2.2 MeV γ-ray following neutron capture. The characteristics of these two distinct signals enable efficient rejection of possible backgrounds. The energy dependent cross section of the IBD reaction can be expressed by <cit.> σ_ν̅_e(E_ν̅_e) ≈ p_e E_e E_ν̅_e^(-0.07056+0.02018 ln E_ν̅_e - 0.001953 ln^3 E_ν̅_e)× 10^-43 [cm^2], where p_e, E_e, and E_ν̅_e are the positron momentum, the total energy of the positron, and the energy of the ν̅_e in MeV, respectively. Note that the mass difference between m_n and m_p can be described as E_e = E_ν̅_e - Δ, where Δ = m_n - m_p ≈ 1.293 MeV. This cross section agrees within a few per-mille with the fully calculated results including the radiative corrections and the final-state interactions in IBD.

§.§ Simulation tool and nucleus-nucleus reaction models

We have performed GEANT4 <cit.> simulations for estimating the production yields of isotopes by considering the bombardment of 75 MeV/u ^13C beams on the ^9Be target as shown in figure <ref>. In these calculations, we use the G4ComponentGGNuclNuclXsc class for the calculation of the overall cross sections of AA reactions and the G4ionIonisation class for ionization processes. The G4ComponentGGNuclNuclXsc class provides total, inelastic, and elastic cross sections for the AA reactions using the Glauber model with Gribov corrections <cit.>; this class is valid for all incident energies above 100 keV. The G4ionIonisation class is used for the calculation of the ionization processes, where the effective charge approach <cit.> and the ICRU 73 <cit.> table for the stopping power are used. An important point in the calculation of heavy ion collision reactions is the choice of hadronic models. To discuss the relative isotope production yields caused by different AA reactions, three different hadronic models, G4BinaryLightIonReaction <cit.>, G4QMDReaction <cit.> and G4INCLXXInterface <cit.>, are used. They are described in detail in the Physics Reference Manual <cit.> on the web <cit.>. Here, we list some key features of them for the readers' convenience. We distinguish our model calculations by referring to G4BinaryLightIonReaction, G4INCLXXInterface, and G4QMDReaction simply as "G4BIC", "G4INCL" and "G4QMD", respectively.

G4BIC : The G4BinaryLightIonReaction class is an extension of G4BinaryCascade for light ion reactions. It is a data-driven Intra-Nuclear Cascade model based on a detailed 3-D model of the nucleus and binary scattering between reaction participants and nucleons within the nuclear model. Participant particles are either a primary particle, including nucleons in a projectile nucleus, or particles generated or scattered in the course of the cascade. Each participating particle is seen as a Gaussian wave packet and the total wave function is assumed to be a direct product of the wave functions of the participating particles without antisymmetrization.
The equations of motion have the same structure as the classical Hamilton equations, where the Hamiltonian is calculated from a simple time-independent optical potential. The nucleon distribution follows a Woods-Saxon model for heavy nuclei (A>16) and a harmonic-oscillator shell model for light nuclei (A<17). Only participant particles are propagated in the nucleus, and participant-participant interactions are not taken into account in the model. The cascade terminates when the mean kinetic energy of the scattered particles (participants within the nucleus) has dropped below a threshold (15 MeV). After the cascade termination, the properties of the residual excited system and the final nuclei are evaluated. Then, the residual participants and the nucleus in that state are treated by pre-equilibrium decay. For the statistical description of particle emission from excited nuclei, the G4ExcitationHandler class is used. For light ion reactions, projectile excitations are determined from the binary collision participants using the statistical approach towards excitation energy calculation in an adiabatic abrasion process. Given this excitation energy, the projectile fragment is then treated by the G4ExcitationHandler.

G4QMD : G4QMDReaction is based on a quantum extension of classical molecular dynamics (QMD). It is widely used to simulate AA reactions for many-body processes, in particular the formation of complex fragments. QMD is similar to Binary Cascade in treating each participant particle as a Gaussian wave packet and assuming the total wave function to be the direct product of those of the participants. Compared with Binary Cascade, however, QMD has some different characteristics, such as the definition of a participant particle, the potential term in the Hamiltonian, and participant-participant interactions. Participant particles in QMD comprise all nucleons in the target and projectile nuclei. The potential terms of the Hamiltonian in QMD are calculated from the relations among all particles in the system, where the potential includes a Skyrme type interaction, a Coulomb interaction and a symmetry term. Because there is no distinction between participant particles and others in QMD, participant-participant interactions are naturally included. There are many different types of QMD models, but G4QMDReaction is based on JAERI Quantum Molecular Dynamics (JQMD) <cit.>. The self-generating potential field is used in G4QMD, and the potential field and the field parameters of G4QMD are also based on JQMD with Lorentz scalar modifications. In G4QMDReaction, the reaction processes are also described in two steps. First, as a dynamical process, the direct reactions, non-equilibrium reactions, and dynamical formation of highly excited fragments are calculated in the manner of QMD. Second, as a statistical process, evaporation and fission decays are performed for the excited nuclei produced in the first step. As the excitation model, GEM (generalized evaporation model) <cit.> is used.

G4INCL : This class is used for reactions induced by nucleons, pions and light ions on any nucleus using the INCL++ model, which is a version of the Liège intranuclear-cascade model (INCL) <cit.> fully re-written in C++. For light ion induced reactions, the projectile is described as a collection of independent nucleons with Gaussian momentum and position distributions, which use the realistic standard deviation of the projectile ion for the position distribution.
The momenta and positions of the nucleons inside a target nucleus are determined by modeling the nucleus as a free Fermi gas in a static potential well with a realistic density. The reaction is described as an avalanche of binary nucleon-nucleon collisions, which can lead to the emission of energetic particles and to the formation of an excited thermalised nucleus as a remnant. Particles in the model are labeled either as participants (projectile particles and particles that have undergone a collision with a projectile) or spectators (target particles that have not undergone any collision). Collisions between spectator particles are neglected. The projectile (light ion) globally follows a classical Coulomb trajectory until one of its nucleons impinges on a spherical calculation volume around the target nucleus, which is large enough that nuclear interactions outside it can be safely neglected. The nucleons entering the calculation sphere move globally (with the beam velocity) until one of them interacts with a target nucleon. The nucleon-nucleon (NN) interaction is then computed with the individual momenta, subject to the Pauli blocking restriction. Nucleons crossing the calculation sphere without any NN interaction are also combined into the "projectile spectator" at the end of the cascade. The cascade stops when the remnant nucleus shows signs of thermalization; a rather unique aspect of INCL is the self-consistent determination of the cascade stopping time. The projectile spectator nucleus is kinematically defined by its nucleon content and its excitation energy, obtained by an empirical particle-hole model, and the de-excitation of the projectile fragments is then described by the G4ExcitationHandler class.

The many different isotopes produced via the ^13C + ^9Be reaction can emit neutrinos with various energies. The energy distributions of the neutrinos are calculated using the "G4RadioactiveDecay" <cit.> class based on the Evaluated Nuclear Structure Data File (ENSDF) <cit.>.

§ RESULTS

§.§ Production isotope yields in the ^9Be target

The numbers of unstable isotopes accumulated inside the ^9Be target, obtained using G4BIC, G4INCL, and G4QMD, are plotted in figure <ref> (a). The figure shows that electron antineutrinos from the 75 MeV/u ^13C + ^9Be reaction are generated through the β decay of ^6He, ^8Li, ^9Li, ^12Be, ^12B, and ^13B, where the summations of their fractions are 99%, 96%, and 97% for G4BIC, G4INCL, and G4QMD, respectively. Because the half-lives of these isotopes are shorter than a few seconds, the electron antineutrinos are predominantly produced from the ^9Be target during the ^13C beam irradiation. The isotopes other than ^6He emit electron antineutrinos with a maximum flux in the energy range of 6 MeV < E_ν̅_e < 7 MeV, as shown in figure <ref>, and their fractions are shown in figure <ref> (b). Besides the low energy neutrinos from ^6He, the dominant contribution comes from ^8Li and a sub-dominant contribution from ^12B. The summations of their fractions calculated using G4BIC, G4INCL, and G4QMD are 0.025, 0.021, and 0.03 per incident ^13C ion, respectively. This result indicates very similar shapes of the neutrino spectra regardless of the AA reaction model considered in this work. In the ^9Be target, ^10Be and ^14C are produced in addition to the isotopes in figure <ref> (a).
Due to the very long half-lives of ^10Be (1.5 × 10^6 y) and ^14C (5.7 × 10^3 y), their contributions to neutrino production are marginal, and thus they are neglected in this work.

§.§ Neutrino energy spectra from the ^9Be target

To see the expected energy spectrum of all electron antineutrinos from the ^9Be target, we first obtain the energy spectra of the main isotopes, ^6He, ^8Li, ^9Li, ^12Be, ^12B, and ^13B. The radioactive decay data of these isotopes are tabulated in Table <ref>, and their neutrino energy distributions are plotted in figure <ref>. The spectrum of the electron antineutrinos from ^6He has a peak at E_ν̅_e = 2 MeV and a width of 3.5 MeV. The other spectra have similar distributions. The expected electron antineutrino spectra for the ^9Be target due to the ^13C + ^9Be reaction are plotted in figure <ref>, where 75 MeV/u ^13C beams with a 300 pμA current are assumed. Two distinct broad peaks around E_ν̅_e = 2 MeV and 7 MeV are seen in figure <ref>, regardless of the nuclear reaction model predictions. The lower energy peak predominantly comes from ^6He and the higher energy peak originates from ^8Li, ^9Li, ^12Be, ^12B, and ^13B, as shown in figure <ref>. It should be noted that the spectral shapes in the energy region of E_ν̅_e > 4 MeV turn out to be almost identical after multiplying the fluxes from G4BIC, G4INCL, and G4QMD by 1.2, 1.46, and 1, respectively. This characteristic enables us to perform a model-independent shape analysis.

§.§ Electron antineutrinos produced in the neutrino production targetry

Electron antineutrinos can in practice be produced in all components of the production targetry around the ^9Be target in figure <ref>. Figure <ref> shows the calculated results, where the dotted line and the solid line represent the summation over all components and the contribution of the ^9Be target alone in figure <ref>, respectively. For the decay modes, we fully consider the decays of the isotopes with half-lives shorter than 10 yr. Because all of the results from the three different reaction models mentioned in section <ref> provide similar spectra, we only plot the result using G4BIC in figure <ref>. The contribution of the neutrinos from the ^9Be target is dominant in the energy region of E_ν̅_e > 3 MeV, but the other components are dominant for E_ν̅_e < 1 MeV. In particular, in the energy region of E_ν̅_e > 4 MeV, the contribution from the ^9Be target is 99% of the total neutrinos. If we consider a neutrino energy cut of E_ν̅_e = 4 MeV for electron antineutrino detection, we can effectively remove background neutrino signals from the other components. This means that the neutrino source is concentrated in the small volume of the ^9Be target. This feature allows point-like source experiments.

§ SHORT-BASELINE ELECTRON ANTINEUTRINO DISAPPEARANCE STUDIES AND STERILE NEUTRINO

To check the existence of the fourth neutrino, we compare the event rates under the electron antineutrino survival probabilities in the standard 3-flavor neutrino model (P_3) and the 3+1 model (P_3+1). For the calculation of P_3, we use the equation given by <cit.> P_3 = 1 - sin^2 2 θ_13 S_23 - c^4_13 sin^2 2 θ_12 S_12, where S_23 = sin^2(Δ m^2_32 L / 4 E) and S_12 = sin^2(Δ m^2_21 L / 4 E). L and E denote the source-to-detector distance and the neutrino energy, respectively. The neutrino oscillation parameters in Eq. (<ref>) are taken from a global fit <cit.>. The electron-antineutrino survival probability in the 3+1 model can be written as P_3+1 = 1 - 4|U_e4|^2(1-|U_e4|^2)sin^2( Δ m^2_41 L/4E). The oscillation parameters in Eq.
(<ref>) are taken from the best-fit points of the 3+1 model for the combined short-baseline (SBL) and IceCube data sets <cit.>. Ratios of the expected total event rate for the P_3+1 model to that for the P_3 model (R_3+1-to-R_3) are calculated using the G4BIC, G4INCL, and G4QMD models (see figure <ref>). Here we assume a neutrino cut-off energy of 4 MeV. Even for the case without the energy cut, we obtained almost the same results as those from the calculations with the cut. All of the results using the three models are almost identical regardless of L. The R_3+1-to-R_3 ratios show oscillation, and the minimum and maximum values of R_3+1-to-R_3 are 0.91 at L = 6 m and 0.97 at L = 13 m, respectively. If there are no sterile neutrinos, the ratio should be 1. The spectral shapes of the measured neutrinos would also give a chance to study the existence of a fourth neutrino.[The visible energy (E_vis) of the prompt signal due to a positron (e^+) is strongly correlated with the energy of the ν̅_e (E_ν̅_e), E_ν̅_e ≃ E_vis + 0.78 MeV, which means that ν̅_e energy spectra can be reconstructed using E_vis.] To see the effect of possible sterile neutrinos using a shape analysis like those of the Daya Bay <cit.>, Double Chooz <cit.>, and RENO <cit.> experiments, the event rates are calculated for different L values using the ring detectors shown in figure <ref> (b). For the reconstruction of the ν̅_e energy spectra, a liquid scintillator detector based on the PROSPECT-type detector <cit.> is considered. In the present oscillation analysis, we assumed a statistical error of 1.5%, a systematic error of 2%, an energy resolution of 4.5%/√(E/MeV), a position resolution of 15 cm, and an IBD cross section error of 0.5% <cit.>. We use the cross section obtained using Eq. (<ref>). The R_3+1-to-R_3 ratios at L = 6 m and L = 13 m are plotted in figure <ref>. It should also be noted that the results in figure <ref> are almost identical independently of the hadronic models used in this work. In the energy region of 4 MeV < E_ν̅_e < 9 MeV, corresponding to the region of 3 MeV < E_vis < 8 MeV, the R_3+1/R_3 ratio at L = 6 m rapidly decreases to 0.9 as E_ν̅_e increases. At energies of E_ν̅_e > 9 MeV, however, the ratio increases as E_ν̅_e increases. The comparison of the results in figures <ref> (a) and (b) in the energy region of E_ν̅_e > 5 MeV can give a meaningful signal for the existence of hypothetical neutrinos. These characteristics are unique features of the present work due to the compact ν̅_e source. In addition, the comparison among the event rates for different L (e.g. near and far detectors of reactor neutrino experiments) is also possible in this work. The ratios of the event rate at L = 13 m to that at L = 21 m with the P_3+1 model are plotted in figure <ref> (a). In the figure, the maximum and minimum values are 1.11 at E_ν̅_e = 9.5 MeV and 0.94 at E_ν̅_e = 13.5 MeV, respectively. If we can measure an approximately 17% deviation from the expected events, we can find a clue to the problem of whether the P_3+1 model is the most appropriate scenario. Figure <ref> (b) shows the results for R_L = 21 m/R_L = 6 m, which gives a different feature compared to that of figure <ref> (a). The shapes in figures <ref>(a) and <ref>(b) can give effective chances to search for the existence of ν_s and to test the 3+1 sterile neutrino scenario.
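For orientation, a minimal Python sketch of the event-rate ingredients used in this analysis is given below: the IBD cross-section parameterisation of Eq. (<ref>) and the survival probabilities P_3 and P_3+1. The oscillation parameters are left as arguments, to be filled with the global-fit and SBL+IceCube best-fit values cited above; the factor 1.267 converts Δm^2 [eV^2] L [m] / E [MeV] into the oscillation phase, and the function names are our own illustration.

import numpy as np

m_e, DELTA = 0.511, 1.293                    # MeV

def sigma_IBD(E_nu):
    # IBD cross section [cm^2], E_nu in MeV, per the parameterisation above.
    E_e = E_nu - DELTA                        # positron total energy
    p_e = np.sqrt(E_e**2 - m_e**2)            # positron momentum
    lnE = np.log(E_nu)
    return p_e * E_e * E_nu**(-0.07056 + 0.02018*lnE - 0.001953*lnE**3) * 1e-43

def P3(E, L, s2_2th13, s2_2th12, c4_13, dm2_32, dm2_21):
    # Standard 3-flavour nu_e-bar survival probability.
    S23 = np.sin(1.267 * dm2_32 * L / E)**2
    S12 = np.sin(1.267 * dm2_21 * L / E)**2
    return 1.0 - s2_2th13*S23 - c4_13*s2_2th12*S12

def P31(E, L, Ue4_sq, dm2_41):
    # 3+1 survival probability with sterile mixing |U_e4|^2.
    return 1.0 - 4.0*Ue4_sq*(1.0 - Ue4_sq)*np.sin(1.267 * dm2_41 * L / E)**2

The R_3+1-to-R_3 ratio at a given L is then the ratio of the source spectrum folded with sigma_IBD and P_3+1 to the same folding with P_3, integrated above the 4 MeV cut.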
§ SUMMARY AND CONCLUSION

In this work, to investigate the existence of the sterile neutrino, we proposed an electron antineutrino source using a ^13C-beam-based IsoDAR concept for short-baseline electron antineutrino disappearance studies. The neutrino source is obtained through the β^- decays of unstable isotopes which are generated from the ^13C + ^9Be reaction. The main isotopes for neutrino production are ^8Li, ^9Li, ^12Be, ^12B, and ^13B. They have similar half-lives and β^- decay Q values, and thus a neutrino energy spectrum with a single broad peak is expected. The production yields of these isotopes are calculated using three different nucleus-nucleus (AA) reaction models. Even though different yields of the isotopes are obtained from the models, the neutrino spectra are almost identical. This unique feature gives a realistic chance for neutrino oscillation studies through shape analysis, regardless of the theoretical AA models considered. The R_3+1-to-R_3 ratios at L = 6 m and L = 13 m show distinguishable features of the event rates, and thus can give a meaningful signal for the existence of the hypothetical ν_s. Also, complementary comparison studies among different distances L become feasible. The expected deviation between the maximum and minimum values is approximately 17%, and thus it can give an effective answer as to whether the P_3+1 model is the most appropriate model for the sterile neutrino.

§ ACKNOWLEDGMENTS

The work of J. W. Shin is supported by the National Research Foundation of Korea (Grant No. NRF-2015R1C1A1A01054083), and the work of M.-K. Cheoun is supported by the National Research Foundation of Korea (Grant No. NRF-2014R1A2A2A05003548 and NRF-2015K2A9A1A06046598).
http://arxiv.org/abs/1702.08036v1
{ "authors": [ "Jae Won Shin", "Myung-Ki Cheoun", "Toshitaka Kajino", "Takehito Hayakawa" ], "categories": [ "physics.ins-det", "astro-ph.IM", "nucl-ex" ], "primary_category": "physics.ins-det", "published": "20170226141130", "title": "Short-baseline electron antineutrino disappearance study by using neutrino sources from $^{13}$C + $^{9}$Be reaction" }
Probabilistic 2DCCA with EM

Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, IRAN
Tel.: +98 31 33919063, Fax: +98 31 33912451
safayani@cc.iut.ac.ir (Corresponding Author)
hashem.ahmadi@ec.iut.ac.ir
h.afraei@ec.iut.ac.ir
mirzaei@cc.iut.ac.ir

An EM Based Probabilistic Two-Dimensional CCA with Application to Face Recognition
Mehran Safayani, Seyed Hashem Ahmadi, Homayun Afrabandpey, Abdolreza Mirzaei
Received: date / Accepted: date
===================================================================================

Recently, two-dimensional canonical correlation analysis (2DCCA) has been successfully applied for image feature extraction. The method, instead of concatenating the columns of the images into one-dimensional vectors, directly works with two-dimensional image matrices. Although 2DCCA works well in different recognition tasks, it lacks a probabilistic interpretation. In this paper, we present a probabilistic framework for 2DCCA called probabilistic 2DCCA (P2DCCA) and an iterative EM based algorithm for optimizing the parameters. Experimental results on synthetic and real data demonstrate superior performance in loading factor estimation for P2DCCA compared to 2DCCA. For real data, three subsets of the AR face database and also the UMIST face database confirm the robustness of the proposed algorithm in face recognition tasks with different illumination conditions, facial expressions, poses and occlusions.

§ INTRODUCTION

Although many real-world applications encounter high dimensional data, the most informative part of the data can often be modeled in a low dimensional space. Moreover, processing high-dimensional data is a time-consuming process and requires substantial resources. To tackle these problems, feature extraction has been used as a tool for finding a compact and meaningful data representation.

For single-mode source data, several subspace learning methods have been developed to learn more semantically descriptive subspaces. Examples of these methods are principal component analysis (PCA) <cit.> and linear discriminant analysis (LDA). However, for observations from two sources that share some mutual information, canonical correlation analysis (CCA) <cit.> is a very popular approach for dimensionality reduction. CCA seeks a lower-dimensional space where two sets of variables are maximally correlated after projection onto it. This technique is widely used in different fields of pattern recognition, computer vision, bioinformatics, etc. <cit.>. In CCA-based methods, it is necessary to vectorize 2D image matrices. Vectorization has three main drawbacks: (I) breaking the spatial structure of image data, which may cause the loss of potentially useful structural information among columns/rows <cit.>, (II) leading to a high-dimensional vector space and the small sample size problem, which in turn makes it difficult to calculate the covariance matrices <cit.>, and (III) causing the covariance matrices to be very large, which in turn makes the eigen-decomposition of such large matrices very time-consuming.

To overcome these drawbacks, in 2007 two-dimensional CCA (2DCCA) was introduced by Lee and Choi <cit.>, which computes CCA directions based on 2D image matrices. The proposed 2DCCA overcomes the curse of dimensionality and significantly reduces the computational cost by directly working with 2D images instead of reshaping them into 1D vectors.
In <cit.>, higher recognition accuracies were reported using 2DCCA compared to CCA on two face databases, and the time complexity was improved. However, an associated probabilistic model for the observed data was notably absent from these feature extraction methods. A probabilistic feature extraction algorithm is intuitively appealing for many reasons <cit.>. To bridge the gap, in 1999, Tipping and Bishop proposed probabilistic PCA <cit.> based on a latent variable model known as factor analysis (FA) <cit.>. The proposed PPCA was then used as a framework for many other new formulations of PCA <cit.>. Also, several probabilistic models have been proposed for LDA <cit.>. In 2005, Bach and Jordan <cit.> proposed a probabilistic interpretation of CCA and estimated the parameters of their model using both maximum likelihood and expectation maximization. Recently, much inspiring research has been conducted in the 1D CCA domain, including kernel based, semiparametric and nonparametric methods <cit.>, but in the 2D CCA domain, we feel that more work is required. To bridge the gap, a probabilistic model of 2DCCA was introduced by Safayani et al. in 2011 <cit.>. They showed that the maximum likelihood estimation of the parameters leads to the two-dimensional canonical correlation directions. However, they did not propose an EM based solution for their model. EM does not require the explicit eigen-decomposition of covariance matrices. Moreover, using EM it is possible to handle models with incomplete data, such as mixture models where the cluster labels are the missing values <cit.>.

In this paper, we present a probabilistic interpretation of 2DCCA, referred to as P2DCCA, together with an EM based solution to estimate the parameters of the model. The proposed model can handle the small sample size problem effectively.

The rest of the paper is organized as follows: Section <ref> briefly reviews some related algorithms such as CCA, PCCA and 2DCCA, which are necessary to understand how the proposed algorithms work. The proposed P2DCCA model is introduced in Section <ref>. In Section <ref>, some experiments on synthetic data and several face databases are given to evaluate the performance of the proposed algorithm; finally, the paper is concluded in Section <ref>.

§ BASIC NOTIONS AND RESULTS

§.§ Canonical Correlation Analysis (CCA)

Imagine that we are given two sets of random vectors t_1 and t_2, where t_1,n∈R^D_1 and t_2,n∈R^D_2 for n∈{1,2,...,N} are realizations of the corresponding random vectors, respectively. CCA seeks transformation vectors w_1∈R^D_1 and w_2∈R^D_2 such that the correlation between w_1^Tt_1 and w_2^Tt_2 is maximized. The correlation between w_1^Tt_1 and w_2^Tt_2 can be formulated as ρ = cov(w_1^Tt_1,w_2^Tt_2)/√(var(w_1^Tt_1)var(w_2^Tt_2))=w_1^TΣ_12w_2/√((w_1^TΣ_11w_1)(w_2^TΣ_22w_2)), where Σ_ij=1/N∑_n=1^N(t_i,n-μ_i)(t_j,n-μ_j)^T for i,j ∈{1,2} is the cross-covariance matrix of t_1 and t_2 and μ_i=1/N∑_n=1^Nt_i,n for i ∈{1,2} denotes the mean vector of t_i. Then, the objective function for CCA can be written as: _ w_1,w_2 w_1^TΣ_12w_2 s.t. w_1^TΣ_11w_1=1, w_2^TΣ_22w_2=1. Optimizing such a constrained maximization problem with respect to w_1 and w_2 leads to the following generalized eigenvalue problem: [0 Σ_12; Σ_21 0 ][ w_1; w_2 ] =λ[ Σ_11 0; 0 Σ_22 ][ w_1; w_2 ]. By solving equation (<ref>), the w_1 and w_2 that maximize the correlation between the projected data can be found.
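A compact numerical reading of this formulation is sketched below (in Python), solving the generalized eigenvalue problem of equation (<ref>) directly; the small ridge term, which we add to keep Σ_11 and Σ_22 invertible for small sample sizes, and the function name are our own choices rather than part of the original algorithm.

import numpy as np
from scipy.linalg import eigh

def cca(T1, T2, d, reg=1e-6):
    # T1: (N, D1), T2: (N, D2) row-wise samples; returns the first d
    # canonical direction pairs as columns of W1 and W2.
    X1, X2 = T1 - T1.mean(0), T2 - T2.mean(0)
    N, D1 = X1.shape
    D2 = X2.shape[1]
    S11 = X1.T @ X1 / N + reg*np.eye(D1)
    S22 = X2.T @ X2 / N + reg*np.eye(D2)
    S12 = X1.T @ X2 / N
    A = np.block([[np.zeros((D1, D1)), S12], [S12.T, np.zeros((D2, D2))]])
    B = np.block([[S11, np.zeros((D1, D2))], [np.zeros((D2, D1)), S22]])
    vals, vecs = eigh(A, B)                    # generalized symmetric problem
    W = vecs[:, np.argsort(vals)[::-1][:d]]    # largest correlations first
    return W[:D1], W[D1:]                      # W1, W2 (up to normalisation)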
§.§ Probabilistic CCA

The generative model introduced by Bach and Jordan for CCA is as follows: t_i=W_iz+μ_i+ϵ_i, i∈{1,2}. In this model, W_i∈^D_i× d, i∈{1,2}, are linear projections that map two sets of high dimensional observed random vectors t_i∈^D_i, i∈{1,2}, to a set of lower dimensional latent vectors z∈^d. Here μ_i is the mean vector for t_i and ϵ_i is the error term, which is assumed to follow a multivariate Gaussian distribution with zero mean and covariance matrix Ψ_i. Bach and Jordan proved that the maximum likelihood estimation of the parameters of this model leads to the canonical directions. The maximum likelihood estimates of the projection matrices are given by: W_i=Σ_iiU_idM_i, i∈{1,2}, where Σ_ii is the sample covariance matrix, U_id are the first d canonical directions and M_i∈^d× d, i∈{1,2}, are arbitrary matrices such that M_1M_2^T = P_d, where P_d is the diagonal matrix of the first d canonical correlations. Figure <ref> shows a graphical representation of the model.

§.§ Two-Dimensional CCA

Two-dimensional CCA (2DCCA) was proposed to tackle the problem of vectorizing data in CCA. For each random matrix T_1 and T_2, 2DCCA introduces left transforms u_i and right transforms v_i, i∈{1,2}. After the projection, the data take the form u_i^TT_iv_i. 2DCCA finds these left and right transforms so as to maximize the correlation between the projected data. Therefore, the objective function of 2DCCA can be formulated as: _ u_1,u_2,v_1,v_2 cov(u_1^TT_1v_1,u_2^TT_2v_2) s.t. var(u_1^TT_1v_1)=1, var(u_2^TT_2v_2)=1. u_1 and u_2 can be obtained by solving the generalized eigenvalue problem (<ref>) with fixed v_1 and v_2: [0 Σ_12^r; Σ_21^r 0 ][ u_1; u_2 ]=λ[ Σ_11^r 0; 0 Σ_22^r ][ u_1; u_2 ]. In a similar way, given u_1 and u_2, the right transforms v_1 and v_2 can be found by solving [0 Σ_12^l; Σ_21^l 0 ][ v_1; v_2 ]=λ[ Σ_11^l 0; 0 Σ_22^l ][ v_1; v_2 ], where Σ_ij^r, i,j∈{1,2}, is the cross-covariance matrix between T_i and T_j and Σ_ii^r, i∈{1,2}, is the auto-covariance matrix of T_i, defined as follows, respectively: Σ_ij^r=1/N∑_n=1^N(T_i,n-μ_i)v_iv_j^T(T_j,n-μ_j)^T, i,j∈{1,2}, Σ_ij^l=1/N∑_n=1^N(T_i,n-μ_i)^Tu_iu_j^T(T_j,n-μ_j), i,j∈{1,2}, where T_i,n for n∈{1,...,N} is the realization of the random matrix T_i and μ_i=1/N∑_n=1^NT_i,n is the corresponding matrix of mean values. The left transforms (u_1 and u_2) and right transforms (v_1 and v_2) are obtained by iteratively solving equations (<ref>) and (<ref>) until convergence. The eigenvectors associated with the d_1 largest eigenvalues in (<ref>) determine the left transform matrices U_1 and U_2, and the eigenvectors associated with the d_2 largest eigenvalues in (<ref>) determine the right transform matrices V_1 and V_2. Using these transform matrices it is possible to project data from a high dimensional space to a new lower dimensional feature space.

§ PROBABILISTIC TWO DIMENSIONAL CCA (P2DCCA)

In this section, we propose probabilistic two-dimensional CCA and an EM-based solution for finding the parameters of the model. In our model, the observed data are modeled as two-dimensional matrices as follows: T_i=U_iZV_i^T+μ_i+Ξ_i, i∈{1,2}, where T_i∈^m_i× n_i for i∈{1,2} are observed matrices and Z∈^m× n is the latent matrix. U_i∈^m_i× m, V_i∈^n_i× n are projection matrices, μ_i is the mean matrix of the observed data and Ξ_i is the residual matrix.
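To make the generative model concrete, the short sketch below samples synthetic paired observations from it; the dimensions, noise scale and random loadings are arbitrary illustrations (this is also one way synthetic data for evaluating the algorithm can be produced).

import numpy as np

rng = np.random.default_rng(0)
m1, n1, m2, n2, m, n, N = 20, 18, 16, 14, 4, 3, 500     # illustrative sizes

U1, U2 = rng.standard_normal((m1, m)), rng.standard_normal((m2, m))
V1, V2 = rng.standard_normal((n1, n)), rng.standard_normal((n2, n))
mu1, mu2 = rng.standard_normal((m1, n1)), rng.standard_normal((m2, n2))

T1 = np.empty((N, m1, n1)); T2 = np.empty((N, m2, n2))
for k in range(N):
    Z = rng.standard_normal((m, n))                      # shared latent matrix
    # T_i = U_i Z V_i^T + mu_i + Xi_i with i.i.d. Gaussian residuals Xi_i
    T1[k] = U1 @ Z @ V1.T + mu1 + 0.1*rng.standard_normal((m1, n1))
    T2[k] = U2 @ Z @ V2.T + mu2 + 0.1*rng.standard_normal((m2, n2))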
Based on this definition, the parameters of the model are {U_i,V_i}_i=1^2 and the parameters of the distribution of Ξ_i. Let D_i={T_i,n}_n=1^N, where i∈{1,2}, be a set containing N observed data matrices and {Z_n}_n=1^N be the corresponding latent variable set. Then the complete data would be (T_1,n,T_2,n,Z_n) and the log likelihood of the complete data can be written as L_c=∑_n=1^Nlog p(T_1,n,T_2,n,Z_n). To estimate the parameters, first we must calculate the expectation of the log-likelihood and then take the derivative of the expected log-likelihood with respect to each parameter. Unfortunately, there is no closed-form solution for computing the projection matrices {U_i,V_i}_i=1^2 simultaneously. Inspired by <cit.>, a decoupled probabilistic model is employed to obtain the projection matrices separately using an alternating optimization procedure. In such a model, we first assume that the value of one set of projection matrices, e.g. the right projections {V_i}_i=1^2, is known. Then the observations are projected to the corresponding latent spaces. The projection procedure is a probabilistic one that is introduced in Section <ref>. By doing this, the left probabilistic model is defined as T_i^l=U_iZ^l+μ_i^l+Ξ_i^l, i=1,2, where Z^l is the left model latent matrix, μ_i^l is the mean matrix of the left projected observations and Ξ_i^l is the noise source for the left probabilistic model, where the columns of the noise matrix follow a normal distribution with zero mean and covariance matrix Ψ_i^l. With this definition, the parameter set for the left probabilistic model is Θ^l={U_i,Ψ_i^l}_i=1^2, which can be estimated using the expectation maximization procedure. The estimation procedure is explained later in this section. In a similar manner, and parallel to the left probabilistic model, for the right probabilistic model we assume that the left projection matrices, i.e. {U_i}_i=1^2, are known. Then the observations are projected over the corresponding latent spaces; hence the right probabilistic model is defined as T_i^r=V_iZ^r+μ_i^r+Ξ_i^r, i=1,2. Similar to the left probabilistic model, Z^r, μ_i^r and Ξ_i^r are defined for the right model, where in this model the noise source has an N(0,Ψ_i^r) distribution. The parameter set for the right probabilistic model is Θ^r={V_i,Ψ_i^r}_i=1^2. Using these definitions, the decoupled predictive density p(T_1,T_2,Z) can be defined as p(T_1,T_2,Z)∝ p(T_1^l,T_2^l,Z^l)p(T_1^r,T_2^r,Z^r). Now we can rewrite the log likelihood of equation (<ref>) as L_c=∑_n=1^Nlog(p(T_1,n^l,T_2,n^l,Z_n^l)p(T_1,n^r,T_2,n^r,Z_n^r)) =∑_n=1^Nlogp(T_1,n^l,T_2,n^l,Z_n^l)+∑_n=1^Nlogp(T_1,n^r,T_2,n^r,Z_n^r). To apply the EM algorithm to the decoupled probabilistic model, in the E-step the expectation of the log likelihood is computed separately for the left probabilistic model and the right probabilistic model. Then each expected log likelihood is maximized with respect to its parameters. In the following subsections we describe how to optimize the left and right probabilistic models, respectively.

§.§ Optimizing the left probabilistic model

Let t_i,j^l be the j^th column vector of T_i^l∈^m_i× n. By assuming the columns of T_i^l to be independent of each other, the distribution of T_i^l is defined as p(T_i^l)=∏_j=1^np(t_i,j^l), i ∈{1,2}. We also consider z_j^l∈^m× 1 as the j^th column vector of Z^l, which has the normal distribution N(0,I), and in the same way μ_i,j^l is the j^th column vector of μ_i^l.
Based on equation (<ref>) and the distributions considered for Z^l and Ξ_i^l, it can be concluded that p(t_i,j^l)∼N(μ_i,j^l,U_iU_i^T+Ψ_i^l), i=1,2. Suppose τ_n,j^l= [(t_1,n,j^l)^T(t_2,n,j^l)^T]^T∈^(m_1+m_2)× 1, U=[U_1^TU_2^T]^T∈^(m_1+m_2)× m, m_j^l=[(μ_1,j^l)^T(μ_2,j^l)^T]^T∈^(m_1+m_2)× 1 and Ψ^l = ( [ Ψ_1^l 0; 0 Ψ_2^l ]) for the left probabilistic model, where t_i,n,j^l refers to the j^th column vector of the n^th image in the i^th observation set, i∈{1,2}. Therefore, the distribution p(τ_j^l) can be obtained as follows: p(τ_j^l)∼N(m_j^l,Σ^l), j∈[1,n], where Σ^l=UU^T+Ψ^l and we assume Σ^l > 0. Based on (<ref>) we can write: p(T_1,n^l,T_2,n^l,Z_n^l)=∏_j=1^np(τ_n,j^l,z_n,j^l)=∏_j=1^np(τ_n,j^l|z_n,j^l)p(z_n,j^l). To apply the EM algorithm to the decoupled probabilistic model, for each of the probabilistic models the expectation of the log likelihood function, E(L_c^l), is calculated in the E-step (the details are given in the Appendix), and then the maximization step (M-step) is done by maximizing E(L_c^l) with respect to U and Ψ^l. By doing so, the values of the parameters are estimated as U_t+1=Σ^l(Ψ_t^l)^-1U_t(M_t^l)^-1[(M_t^l)^-1+(M_t^l)^-1U_t^T(Ψ_t^l)^-1Σ^l(Ψ_t^l)^-1U_t(M_t^l)^-1]^-1, Ψ_t+1^l=( [ (Σ^l-Σ^l(Ψ_t^l)^-1U_t(M_t^l)^-1U_t+1^T)_11 0; 0 (Σ^l-Σ^l(Ψ_t^l)^-1U_t(M_t^l)^-1U_t+1^T)_22 ] ), where M^l=I+U^T(Ψ^l)^-1U, A_t denotes the value of parameter A in iteration t, and Σ^l is the sample covariance matrix of the observed data for the left probabilistic model, i.e. Σ^l=1/N∑_n=1^N[(T_1,n^l-μ_1^l)^T (T_2,n^l-μ_2^l)^T]^T[(T_1,n^l-μ_1^l)^T (T_2,n^l-μ_2^l)^T].

§.§ Optimizing the right probabilistic model

In a manner similar to the optimization of the left probabilistic model we have: p(T_i^r)=∏_j=1^mp(t_i,j^r), i ∈{1,2}, where t_i,j^r is the j^th column vector of T_i^r∈^n_i× m. Then p(t_i,j^r) is computed as: p(t_i,j^r)∼N(μ_i,j^r,V_iV_i^T+Ψ_i^r), i∈{1,2}. Let τ_n,j^r= [(t_1,n,j^r)^T(t_2,n,j^r)^T]^T∈^(n_1+n_2)× 1, V=[V_1^TV_2^T]^T∈^(n_1+n_2)× n, m_j^r=[(μ_1,j^r)^T(μ_2,j^r)^T]^T∈^(n_1+n_2)× 1 and Ψ^r = ( [ Ψ_1^r 0; 0 Ψ_2^r ]), where t_i,n,j^r refers to the j^th column vector of the n^th image in the i^th observation set. Then p(τ_j^r) and p(T_1,n^r,T_2,n^r,Z_n^r) are obtained as follows: p(τ_j^r)∼N(m_j^r,Σ^r), j∈[1,m], p(T_1,n^r,T_2,n^r,Z_n^r)=∏_j=1^mp(τ_n,j^r,z_n,j^r)=∏_j=1^mp(τ_n,j^r|z_n,j^r)p(z_n,j^r), where Σ^r=VV^T+Ψ^r>0. Given the details in Appendix A, after computing E(L_c^r) in the E-step, the parameters V and Ψ^r are computed by maximizing the likelihood in the M-step. So, we have: V_t+1=Σ^r(Ψ_t^r)^-1V_t(M_t^r)^-1[(M_t^r)^-1+(M_t^r)^-1V_t^T(Ψ_t^r)^-1Σ^r(Ψ_t^r)^-1V_t(M_t^r)^-1]^-1, Ψ_t+1^r=( [ (Σ^r-Σ^r(Ψ_t^r)^-1V_t(M_t^r)^-1V_t+1^T)_11 0; 0 (Σ^r-Σ^r(Ψ_t^r)^-1V_t(M_t^r)^-1V_t+1^T)_22 ] ), where M^r=I+V^T(Ψ^r)^-1V and Σ^r is computed as follows: Σ^r=1/N∑_n=1^N[(T_1,n^r-μ_1^r)^T (T_2,n^r-μ_2^r)^T]^T[(T_1,n^r-μ_1^r)^T (T_2,n^r-μ_2^r)^T].

§.§ Probabilistic projection and dimension reduction

We can project the observation matrices into the latent space using the standard projection matrices, i.e., {U_1,U_2,V_1,V_2}. However, as described in <cit.>, it is more natural to use probabilistic projections. In this regard, we represent each projected observation matrix, T_i, by the mean of the distribution of the corresponding latent space, i.e., E(Z|T_i). For the left model, it can be shown that E(Z^l|T_1,T_2)=(M^r)^-1[ [ V_1^T V_2^T; ] ][ [ Ψ_1 0; 0 Ψ_2 ] ]^-1[ [ T_1-μ_1; T_2-μ_2 ] ]. E(Z^l|T_1) and E(Z^l|T_2) are obtained by marginalizing (<ref>) over T_2 and T_1, respectively.
So we have E(Z^l|T_i)=(M^r)^-1V_i^T(Ψ_i^r)^-1(T_i-μ_i), i∈{1,2}. Similarly, for the right model we have E(Z^r|T_i)=(M^l)^-1U_i^T(Ψ_i^l)^-1(T_i-μ_i), i∈{1,2}. The procedure for dimension reduction is given by sequential projection in the left and right models as E(Z|T_i)=(M^l)^-1U_i^T(Ψ_i^l)^-1(T_i-μ_i)((M^r)^-1V_i^T(Ψ_i^r)^-1)^T. The P2DCCA algorithm is summarized in Figure <ref>. The proposed P2DCCA model has the benefit of being extensible to other methods, such as mixtures of P2DCCA, Bayesian P2DCCA, and robust P2DCCA.

§ EXPERIMENTAL RESULTS

We evaluated our algorithm on both synthetic and real data. In the synthetic data part, we verified our implementation of the P2DCCA algorithm on simple synthetic data and randomly generated projection matrices. We also compared our algorithm with the 2DCCA method in estimating the projection matrices. For the real-data evaluation, the proposed P2DCCA method was used for face recognition on two well-known face image databases (AR <cit.> and UMIST <cit.>). The AR database is divided into three subsets for evaluating the performance of the system with regard to different illumination, expression and occlusion conditions. The UMIST database is used to obtain the performance in dealing with pose variation.

§.§ Experiments on synthetic data

In this section, we aim to verify our implementation of the proposed method in the simplest possible scenario. So we generate some synthetic data and projection matrices. Then we estimate the projection matrices using our method and compare them with the true ones. We know that the P2DCCA estimates are determined only up to rotation; hence, to simplify the comparison, we assume Z to be of dimension 1 × 1. We set the dimensions of T_1 and T_2 to 5 × 5. Then we generate 1000 samples of Z from a normal distribution with zero mean and unit variance; we also randomly generate the elements of U_i∈ ^5, V_i∈ ^5, i∈{1,2}, using a uniform distribution on the [0, 1] interval and consider them as the ground truth projection matrices; then T_i is obtained using (<ref>). In this equation, each element of the residual matrices is sampled from a Gaussian distribution with zero mean and variance σ_i^2. Having the synthetic data, we run the P2DCCA algorithm as described in Figure <ref> and calculate U_i and V_i. We also run 2DCCA and obtain the corresponding projection matrices. Then we compare the matrices obtained by these two algorithms with the ground truth projection matrices. To cancel the scale factors, we divide each transform by its norm before the comparison. The Euclidean distance is utilized to compare the normalized transforms. Figure <ref> shows the results. It is obvious that in the worst case the distance value becomes one and in the best case it is zero. As depicted in this figure, the P2DCCA estimates of U_i and V_i for i∈{1,2} are much closer to the ground truth than those obtained by 2DCCA. In this experiment we used 1000 generated samples and set σ_i = 0.1. To examine the effect of these choices, we repeat our experiment with different numbers of samples and different noise variances. Figure <ref> demonstrates the results. As can be observed from this figure, the P2DCCA estimates are in all cases much closer to the ground truth than those of the 2DCCA method.

§.§ Experiments on the AR database

The AR face database contains over 4,000 color face images, including frontal views of faces with different facial expressions, illumination conditions and occlusions. For most individuals, there are two sessions of images which were taken in two different time periods. Each session contains 13 images.
In our experiments, we used the first session, because some individuals do not have the second session of images. We collected 1310 face images of 131 people (72 male and 59 female). For each person, there are 10 different face images in our collected images: three with different illumination conditions; three with different expressions; three with occlusions; and the remaining images are those with neutral expression and no occlusion, which are known as the reference images in our experiments. To examine the performance of the proposed methods under different conditions, we partitioned the collected images into three subsets known as AR-1, AR-2 and AR-3. For each individual, AR-1 contains four images, three of which are images with different lighting conditions and the remaining one is the reference image. AR-2 is used to test the performance of the algorithms when there exists expression variation. AR-2 involves four images per individual: three images have different expressions and the last one is the reference image. AR-3 is prepared to test the performance in the presence of occlusion. Again, this subset contains four images per individual, where three images were taken with glasses and the last one is the reference image. Figure <ref> shows exemplary face images of a man and a woman in AR-1, AR-2 and AR-3, respectively. Images are gray-scaled, resized and then normalized to 50 × 50 pixels.

We compared the performance of the proposed method with a range of different supervised and unsupervised dimensionality reduction algorithms and different versions of them, including PCA, LDA, CCA, PPCA, PCCA, 2DPCA, 2DLDA and 2DCCA. Both PCA and LDA based methods work with one set of data. Also, LDA based methods are supervised while PCA and CCA based algorithms are unsupervised. The task here is to investigate how well different algorithms can relate face images with varying illumination conditions, expressions and occlusion, in correspondence to the reference face images. The CCA, PCA, LDA, PPCA, PCCA, 2DCCA, 2DPCA, 2DLDA and P2DCCA are used to extract features from facial images and then a 1-NN classifier is employed for classification. Note that, based on the output type of each algorithm (vector or matrix), for 2DCCA, 2DPCA, 2DLDA and P2DCCA the Frobenius distance is used to calculate the distance between two feature matrices, while for CCA, PCA, LDA, PPCA and PCCA the common Euclidean distance measure is adopted. Furthermore, it should be noted here that, since PCCA suffers from the small sample size problem, implementing it using the formulas introduced in <cit.> caused the covariance matrices to be singular. To solve the problem, we did dimension reduction using PCA before implementing the algorithm. To evaluate the recognition accuracy, we used three-fold cross-validation. As is evident, CCA based algorithms need two sets of images for training, where in this paper the training sets are called the Left training set and the Right training set. To form the training sets, e.g., for AR-1, the neutral images (images with no illumination variation) are considered as the Left training set, while to form the Right training set, one of the three images of each individual with different illumination conditions is selected randomly. The other two images are considered as test images. This procedure is repeated three times, where each time a different image among the three images is selected for the Right training set, while the neutral images are always used to form the Left training set.
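For concreteness, the 1-NN matching rule with the Frobenius distance mentioned above can be sketched as follows (our own illustration, not the authors' code; array shapes are assumptions), with the full test procedure described next:

```python
# Sketch (ours): 1-NN classification with the Frobenius distance,
# as used for the matrix-valued features of the 2D methods.
import numpy as np

def predict_1nn(test_feat, train_feats, train_labels):
    """test_feat: (d, d); train_feats: list of (d, d); train_labels: list."""
    dists = [np.linalg.norm(test_feat - F, ord='fro') for F in train_feats]
    return train_labels[int(np.argmin(dists))]
```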
To test the performance of each algorithm, all images of both the Right and Left training sets are projected onto the new feature spaces using their corresponding transforms. Also, each of the test images is projected onto both feature spaces, so we have two projections for every test image. Then we calculate the distance between each of the two projected test images and the projected training images. The label of the training image whose projection is nearest to either of the two test image projections determines the final class of the test image. This procedure iterates until we find the final class for all images in the test set. These final classes are compared to the real classes of the images and the recognition accuracy of each algorithm is calculated. Finally, the average recognition rate of the three rounds of experiments is recorded as the final recognition accuracy. Since PCA and LDA based algorithms work with one set of data, to have a fair comparison we used two images for training and the other two images as the test data, where the neutral images are always in the training data together with one of the other images with different illuminations in each iteration. Again the process is repeated three times and the final accuracy is the average of the three runs.

Figure <ref> shows the test process for P2DCCA. The training and test procedures for the AR-2 and AR-3 subsets are similar to those for AR-1, and Table <ref> through Table <ref> report the recognition accuracy of the evaluated algorithms for the experiments conducted on AR-1, AR-2 and AR-3, respectively. In these tables, d is the dimension of the reduced feature space. Note that the output of the two-dimensional algorithms (2DCCA, 2DPCA, 2DLDA, P2DCCA and MP2DCCA) is of matrix type with dimension d×d, while for the CCA, PCA, LDA, PPCA and PCCA methods the output is a vector of dimension d. In these results we see that P2DCCA achieves the best performance among all tested methods. We see about a 10% improvement in the recognition rate for P2DCCA over 2DCCA in AR-1 and AR-3, and about a 3% improvement in AR-2. Figure <ref> shows how the log-likelihoods of the left probabilistic model and the right probabilistic model of P2DCCA improve with each iteration. As can be seen in the figure, both the left and right models converge. It should be noted that it is very common in algorithms optimizing row and column projections that only one iteration of the iterative algorithm is performed <cit.>. Therefore, in all the experiments we use one iteration of the algorithm, i.e., T_max=1. This significantly reduces the computational cost of the algorithm. We tried more iterations and got no significant improvement in the recognition rate. Figure <ref> shows the results for 1 to 5 iterations for AR-1, AR-2 and AR-3. These results support the idea of choosing T_max=1.

§.§ Experiments on the UMIST database

The UMIST face database, also known as the Sheffield face database <cit.>, consists of 564 images of 20 subjects. The subjects have different races, sexes, and appearances. For each subject, there are images with different poses, from profile to frontal view. The images have 256 grey levels with a resolution of 220×220 pixels. In our experiment, 360 images with 18 samples per subject are used to examine the performance of the different algorithms when the face orientation varies significantly. Figure <ref> shows 18 images of one subject. We select the frontal image as well as seven other randomly selected images for the training set and the remaining images for the test set.
In the training phase of the CCA based methods, the frontal image is always selected as the Left training image and one of the seven other images as the Right training image. A 1-NN classifier is used for classification. This procedure is repeated twenty times, and the average recognition rates of the algorithms are reported. Table <ref> shows the recognition accuracy of the evaluated algorithms for the experiments conducted on UMIST. In this test, while P2DCCA achieved slightly better performance than 2DCCA, it is not the best. In fact, LDA achieved the best performance. In the AR test, LDA had 2 images per class for training, but here it has 8 images per class, which led to the best performance for this supervised method. Ignoring LDA, we see that the performance of P2DCCA is higher than that of the other methods. Since the UMIST face dataset contains 20 subjects to be discriminated, the number of LDA features is limited to 19. To be able to compare the results of LDA with those of the other algorithms, we show the results for d=5, d=10 and d=15 for LDA and larger values of d for the other algorithms.

§.§ Evaluation of the Experimental Results

The above experiments showed that the accuracy of P2DCCA is consistently better than that of the other CCA based methods, i.e. CCA, PCCA and 2DCCA. But a question still remains: are these differences statistically significant? In this section we answer this question by evaluating the experimental results using the independent-samples T-test (or independent t-test, for short). In this section and also the next section, we only consider the CCA based algorithms, including CCA, PCCA, 2DCCA and P2DCCA, since the goal of this paper is to compare the functionality of the newly proposed CCA based method with that of the other CCA based algorithms. The desired significance level is 0.05 and the null hypothesis is that there is no significant difference between the recognition rates of P2DCCA and those of CCA, PCCA and 2DCCA, respectively. We reject the null hypothesis whenever the resulting p-value is lower than 0.05, in which case the result can be considered statistically significant. It is necessary to note that, to run the t-test on each dataset, for each algorithm we considered the highest recognition rate. Table <ref> shows the p-values of the test. As can be seen from this table, P2DCCA significantly outperforms the other algorithms and the null hypothesis is rejected in all cases.

§.§ Computational Complexity

This section compares the computational cost of the algorithms. To compare the time complexity of the algorithms, we consider input images of size m × m whose dimension we want to reduce to d × d in the case of the two-dimensional algorithms. For CCA and PCCA, vectorization causes the input data to have dimension m^2× 1 and the output to have dimension d × 1. However, for simplicity, d is considered to be equal to m, i.e. d=m, in our analysis. Table <ref> shows the computational complexity of the algorithms. In this table, N is the number of random samples in the dataset. It should be noted that there are two types of iteration in the corresponding methods; one is the iteration necessary for the convergence of the EM part of the algorithm and the other is the iteration for alternating the optimization procedure between the left and right models. We denote the former by t and the latter by r in the table. However, r=1 in our experiments.

§ CONCLUSION

This paper proposed a probabilistic model for two dimensional CCA, termed P2DCCA, together with an EM-based solution to estimate the parameters of the model.
Experimental results demonstrated the functionality of the proposed method. The proposed P2DCCA has many advantages over 2DCCA, the most significant of which is its ability to be extended to a mixture of P2DCCA models. It may also be possible to develop a probabilistic Bayesian model for P2DCCA and gain the benefits of a Bayesian treatment. These are left as future work.

§ APPENDIX

As mentioned in Section <ref>, each column of the latent matrix Z has the distribution N(0,I). Furthermore, based on (<ref>) and (<ref>) we can write: p(τ_n,j^l|z_n,j^l)∼N(Uz_n,j^l+m_j^l,Ψ^l), p(τ_n,j^r|z_n,j^r)∼N(Vz_n,j^r+m_j^r,Ψ^r). Now from (<ref>) and (<ref>) and the distribution of the columns of Z, we have p(T_1,n^l,T_2,n^l,Z_n^l)=∏_j=1^n[(2π)^-m_1+m_2/2|Ψ^l|^-1/2 exp(-1/2(τ_n,j^l-Uz_n,j^l-m_j^l)^T(Ψ^l)^-1(τ_n,j^l-Uz_n,j^l-m_j^l)) (2π)^-m/2exp(-1/2(z_n,j^l)^T(z_n,j^l))], p(T_1,n^r,T_2,n^r,Z_n^r)=∏_j=1^m[(2π)^-n_1+n_2/2|Ψ^r|^-1/2 exp(-1/2(τ_n,j^r-Vz_n,j^r-m_j^r)^T(Ψ^r)^-1(τ_n,j^r-Vz_n,j^r-m_j^r)) (2π)^-n/2exp(-1/2(z_n,j^r)^T(z_n,j^r))], where |A| denotes the determinant of the matrix A. In the E-step, the expectation of the log likelihood for each of the probabilistic models is calculated as: E(L_c^l)=∑_n=1^N∑_j=1^n[-m_1+m_2/2log(2π)-1/2log|Ψ^l| -1/2tr{(Ψ^l)^-1(τ_n,j^l)(τ_n,j^l)^T}+tr{(E[(z_n,j^l)]-m_j^l)^TU^T(Ψ^l)^-1τ_n,j^l} -1/2tr{(E[z_n,j^l]-m_j^l)^TU^T(Ψ^l)^-1U(E[z_n,j^l]-m_j^l)}-m/2log(2π) -1/2tr{E[(z_n,j^l)(z_n,j^l)^T]}], E(L_c^r)=∑_n=1^N∑_j=1^m[-n_1+n_2/2log(2π)-1/2log|Ψ^r| -1/2tr{(Ψ^r)^-1(τ_n,j^r)(τ_n,j^r)^T}+tr{(E[(z_n,j^r)]-m_j^r)^TV^T(Ψ^r)^-1τ_n,j^r} -1/2tr{(E[z_n,j^r]-m_j^r)^TV^T(Ψ^r)^-1V(E[z_n,j^r]-m_j^r)}-n/2log(2π) -1/2tr{E[(z_n,j^r)(z_n,j^r)^T]}], with E[z_n,j^l]=(M^l)^-1U^T(Ψ^l)^-1(τ_n,j^l-m_j^l), E[(z_n,j^l)(z_n,j^l)^T]=(M^l)^-1+E[(z_n,j^l)]E[(z_n,j^l)]^T, E[z_n,j^r]=(M^r)^-1V^T(Ψ^r)^-1(τ_n,j^r-m_j^r), E[(z_n,j^r)(z_n,j^r)^T]=(M^r)^-1+E[(z_n,j^r)]E[(z_n,j^r)]^T. Having obtained the formulas for E(L_c^l) and E(L_c^r), the M-step is performed by differentiating each expected log-likelihood.
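For illustration, the E-step moments above translate directly into code. The following sketch (ours, with hypothetical argument names) computes E[z_n,j^l] and E[(z_n,j^l)(z_n,j^l)^T] for one column of the left model:

```python
# Sketch (ours): left-model E-step sufficient statistics,
# E[z] = M^{-1} U^T Psi^{-1} (tau - m)  and  E[z z^T] = M^{-1} + E[z]E[z]^T,
# with U the stacked [U_1; U_2] and Psi block-diagonal.
import numpy as np

def e_step_left(tau, m_loc, U, Psi):
    """tau, m_loc: (m1+m2,) vectors; U: (m1+m2, m); Psi: (m1+m2, m1+m2)."""
    Pinv = np.linalg.inv(Psi)
    M = np.eye(U.shape[1]) + U.T @ Pinv @ U      # M^l = I + U^T Psi^{-1} U
    Ez = np.linalg.solve(M, U.T @ Pinv @ (tau - m_loc))
    Ezz = np.linalg.inv(M) + np.outer(Ez, Ez)
    return Ez, Ezz
```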
http://arxiv.org/abs/1702.07884v1
{ "authors": [ "Mehran Safayani", "Seyed Hashem Ahmadi", "Homayun Afrabandpey", "Abdolreza Mirzaei" ], "categories": [ "cs.CV", "cs.LG", "stat.ML" ], "primary_category": "cs.CV", "published": "20170225125035", "title": "An EM Based Probabilistic Two-Dimensional CCA with Application to Face Recognition" }
amdphy@gmail.com Department of Physics, Indian Institute of Engineering Science and Technology, Shibpur, Howrah 711103, West Bengal, India shnkghosh122@gmail.com Department of Physics, Indian Institute of Engineering Science and Technology, Shibpur, Howrah 711103, West Bengal, India swapan.d11@gmail.com Department of Optometry, NSHM College of Management and Technology, 124 B.L. Saha Road, Kolkata 700053, West Bengal, India rahaman@associates.iucaa.in Department of Mathematics, Jadavpur University, Kolkata 700032, West Bengal, India bkguhaphys@gmail.com Department of Physics, Indian Institute of Engineering Science and Technology, Shibpur, Howrah 711103, West Bengal, India saibal@associates.iucaa.in Department of Physics, Government College of Engineering and Ceramic Technology, Kolkata 700010, West Bengal, India

We propose a unique stellar model under f(R,𝒯) gravity by using the conjecture of Mazur-Mottola [P. Mazur and E. Mottola, Report number: LA-UR-01-5067; P. Mazur and E. Mottola, Proc. Natl. Acad. Sci. USA 101, 9545 (2004).], which is known as a gravastar and is a viable alternative to the black hole as available in the literature. This gravastar is described by three different regions, viz., (I) the interior core region, (II) the intermediate thin shell, and (III) the exterior spherical region. The pressure within the interior region is equal to the negative of the constant matter density, which provides a repulsive force over the thin spherical shell. This thin shell is assumed to be formed by a fluid of ultrarelativistic plasma, and the pressure, which is directly proportional to the matter-energy density according to Zel'dovich's conjecture of stiff fluid [Y.B. Zel'dovich, Mon. Not. R. Astron. Soc. 160, 1 (1972).], counterbalances the repulsive force exerted by the interior core region. The exterior spherical region is completely vacuum and assumed to be de Sitter spacetime, which can be described by the Schwarzschild solution. Under this specification we find a set of exact and singularity-free solutions of the gravastar which present several other physically valid features within the framework of alternative gravity. 95.30.Sf, 04.70.Bw, 04.20.Jb

Gravastars in f(R,𝒯) gravity Saibal Ray December 30, 2023 =============================================================

§ INTRODUCTION

Mazur and Mottola <cit.> first proposed a model considering the gravitationally vacuum star (gravastar) as an alternative to the final state of gravitational collapse, i.e., the black hole. They generated a new type of solution by extending the idea of Bose-Einstein condensation to the construction of the gravastar as a cold, dark, and compact object with an interior de Sitter condensate phase. The scenario of this gravastar can be envisaged as follows: the interior is surrounded by a thin shell of ultrarelativistic matter, whereas the exterior region is completely vacuum and hence the Schwarzschild spacetime at the outside can be considered to fit the system. The shell is assumed to be very thin, with a finite width in the range r_1 < r < r_2, where r_1 ≡ D and r_2 ≡ D+ϵ are the interior and exterior radii of the gravastar under consideration, respectively. Therefore, we can represent the entire gravastar system by three specific segments based on the equation of state (EOS) as follows: (I) Interior (0 ≤ r < r_1): p = -ρ, (II) Shell (r_1 ≤ r ≤ r_2): p = +ρ, and (III) Exterior (r_2 < r): p = ρ =0. We note that, related to the gravastar, there are a lot of works available in the literature based on different mathematical as well as physical issues.
However, these works have mainly been carried out by several authors in the framework of Einstein's general relativity <cit.>. Though it is well known that Einstein's general relativity is a unique tool for uncovering many hidden mysteries of Nature, the observational evidence for the accelerating universe, along with the existence of dark matter, has posed a theoretical challenge to this theory <cit.>. Therefore, several alternative theories have been proposed successively, amongst which f(R) gravity, f(𝕋) gravity, and f(R,𝒯) gravity have received the most attention. In the present project our motivation is to study the gravastar under one of the alternative gravity theories, namely f(R,𝒯) gravity <cit.>, and to observe different physical features of the object - their nontriviality as well as triviality. Indeed, our previously performed successful works on the initial phases of compact stars under alternative gravity <cit.> motivate us to apply the alternative formalism to the case of the gravastar, a viable alternative to the ultimate stellar phase of a black hole.

It has been argued that, among all other modified gravity theories, the f(R,𝒯) theory of gravity can be considered a useful formulation, being based on a nonminimal curvature-matter coupling. In the f(R,𝒯) theory of gravity <cit.> the gravitational Lagrangian of the standard Einstein-Hilbert action is defined by an arbitrary function of the Ricci scalar R and the trace of the energy-momentum tensor 𝒯. One can note that such a dependence on 𝒯 may come from the presence of an imperfect fluid or from the consideration of quantum effects. Applications of the f(R,𝒯) gravity theory to different cosmological realms <cit.> can be noted in the literature. Among several astrophysical applications it is worth mentioning Refs. <cit.>. In their work <cit.>, Sharif et al. have studied the stability of a collapsing spherical body of an isotropic fluid distribution, considering a nonstatic spherically symmetric line element. A perturbation scheme has been used to find the collapse equation, and the condition on the adiabatic index has been constructed for the Newtonian and post-Newtonian eras for addressing the instability problem by Noureen et al. <cit.>, whereas in another work <cit.> Noureen et al. have investigated the range of instability under the f(R,𝒯) theory for an anisotropic background constrained by zero expansion. Also, by applying a perturbation scheme to the f(R,𝒯) field equations, the evolution of a spherical star has been studied by Noureen et al. <cit.>. Zubair et al. <cit.> have analyzed the dynamics of gravitating sources with axial symmetry under f(R,𝒯) gravity. Some other relevant studies of the f(R,𝒯) theory of gravity, under different physical motivations, can be found in the following works <cit.>. Yousaf et al. <cit.> have explored the evolutionary behaviors of compact objects in the framework of the f(R,𝒯) gravity theory with the help of structure scalars, whereas they <cit.> have investigated the irregularity factors for a self-gravitating spherical star evolving in the presence of an imperfect fluid.

The outline of the present study is therefore as follows: In Sec. II the basic mathematical formalism of the f(R,𝒯) theory is provided as the background of the study. Thereafter, in Sec. III, we discuss the field equations and their solutions in f(R,𝒯) gravity, considering the interior spacetime, exterior spacetime, and thin shell cases of the gravastar, with their respective solutions.
We provide the junction conditions, which are essential in connecting the three regions of the gravastar, in Sec. IV. Several physical properties of the model, viz. the proper length, energy content, entropy and equation of state, are discussed in Sec. V. Some concluding remarks are provided in Sec. VI.

§ BASIC MATHEMATICAL FORMALISM OF THE F(R,𝒯) THEORY

The action of the f(R,𝒯) theory <cit.> reads 𝕊=1/16π∫ d^4xf(R,𝒯)√(-g)+∫ d^4xℒ_m√(-g), where f(R,𝒯) is a function of the Ricci scalar R and the trace of the energy-momentum tensor 𝒯, ℒ_m being the matter Lagrangian density, and g is the determinant of the metric g_μν. Throughout the paper we assume the geometrical units G=c=1. Varying the action (<ref>) with respect to the metric g_μν, one can obtain the following field equations of f(R,𝒯) gravity: f_R (R,𝒯) R_μν - 1/2 f(R,𝒯) g_μν + (g_μν - ∇_μ∇_ν) f_R (R,𝒯)= 8π T_μν - f_𝒯(R,𝒯) T_μν - f_𝒯(R,𝒯)Θ_μν, where f_R (R,𝒯)= ∂ f(R,𝒯)/∂ R, f_𝒯(R,𝒯)=∂ f(R,𝒯)/∂𝒯, ≡∂_μ(√(-g) g^μν∂_ν)/√(-g), R_μν is the Ricci tensor, ∇_μ denotes the covariant derivative with respect to the symmetric connection associated with g_μν, Θ_μν= g^αβδ T_αβ/δ g^μν, and the stress-energy tensor is defined as T_μν=g_μνℒ_m-2∂ℒ_m/∂ g^μν. The covariant divergence of (<ref>) reads <cit.> ∇^μT_μν = f_𝒯(R,𝒯)/8π -f_𝒯(R,𝒯)[(T_μν+Θ_μν)∇^μln f_𝒯(R,𝒯)+∇^μΘ_μν-(1/2)g_μν∇^μ𝒯]. It is evident from Eq. (<ref>) that the energy-momentum tensor is not conserved in the f(R,𝒯) theory of gravity, unlike in the general relativistic case. In the present paper we assume the energy-momentum tensor to be that of a perfect fluid, i.e., T_μν=(ρ+p)u_μ u_ν-pg_μν, with u^μu_μ = 1 and u^μ∇_ν u_μ=0. Besides these conditions we also have ℒ_m=-p and Θ_μν=-2T_μν-pg_μν. Following the proposition of Harko et al. <cit.>, we take the functional form of f(R,𝒯) as f(R,𝒯)=R+2χ𝒯, with χ being a constant. One can note that this form has been extensively used to obtain many cosmological solutions in f(R,𝒯) gravity <cit.>. By substituting the above form of f(R,𝒯) in (<ref>), we get <cit.> G_μν=8π T_μν+χ𝒯 g_μν+2χ(T_μν+pg_μν), where G_μν is the Einstein tensor. One can easily get back the result of general relativity just by setting χ=0 in the above Eq. (<ref>). Moreover, for f(R,𝒯)=R+2χ𝒯, Eq. (<ref>) yields ∇^μT_μν=-2χ/(8π+2χ)[∇^μ(pg_μν)+1/2g_μν∇^μ𝒯]. Curiously, by substituting χ=0 in Eq. (<ref>) one can verify that the energy-momentum tensor is conserved, as in the case of general relativity.

§ THE FIELD EQUATIONS AND THEIR SOLUTIONS IN F(R,𝒯) GRAVITY

For the spherically symmetric metric ds^2=e^ν(r)dt^2-e^λ(r)dr^2-r^2(dθ^2+sin^2θ dϕ^2), one can find the nonzero components of the Einstein tensor as G_0^0=e^-λ/r^2(-1+e^λ+λ' r), G_1^1=e^-λ/r^2(-1+e^λ-ν' r), G_2^2=G_3^3=e^-λ/4r[2(λ'-ν')-(2ν”+ν'^2-ν'λ')r], where primes stand for derivatives with respect to the radial coordinate r. Substituting Eqs. (4), (<ref>), (<ref>), and (<ref>) in Eq. (<ref>) one can get -1+e^λ+λ'r=Π(r)[8πρ+χ(3ρ-p)], -1+e^λ-ν'r=Π(r)[-8π p+χ(ρ-3p)], [r/2(λ'-ν')-(2ν”+ν'^2-ν'λ')r^2/4]=Π(r)[-8π p+χ(ρ-3p)], with Π(r)≡ r^2/e^-λ. Now, from the equation for the nonconservation of the energy-momentum tensor in the f(R,𝒯) theory (<ref>) one can obtain dp/dr+ν'/2(ρ+p)+χ/2(4π+χ)(p'-ρ')=0. If we consider the quantity m as the gravitational mass within the sphere of radius r, then from Eq. (11) we can write e^-λ=1-2m/r-χ(ρ-p/3)r^2. Again from Eqs.
(12), (14), and (15) one can get the equation of hydrostatic equilibrium in the f(R,𝒯) theory as p'=-(ρ+p)[ 4π pr-χ(ρ-3p)r/2]+1/2r[2m/r +χ(ρ-p/3)r^2]/[1-2m/r -χ(ρ-p/3)r^2][1+χ(1-dρ/dp)/2(4π+χ)], considering the fact that the energy density ρ depends on the pressure p, i.e. ρ=ρ(p). Also, by letting χ=0 the standard form of the Tolman-Oppenheimer-Volkoff (TOV) equation can be retrieved, as applicable in the case of the general theory of relativity.

§.§ Interior spacetime

Following the proposition of Mazur-Mottola <cit.>, let us assume the equation of state (EOS) for the interior region as p=-ρ. The above EOS is a special form of p=ωρ with the EOS parameter ω=-1, and it is known as the dark energy equation of state. Again, using the above EOS and Eq. (<ref>), one can obtain ρ = ρ_0 (constant), and the pressure turns out to be p=-ρ_0. Now, using Eqs. (<ref>) and (<ref>) one gets the metric potential λ as e^-λ = 1-4(2π+χ)ρ_0r^2/3+A/r, where A is an integration constant which is set to zero as the solution is regular at the center (r=0). Hence we have e^-λ = 1-4(2π+χ)ρ_0r^2/3. Again, using Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) one can get the following relation between the metric potentials ν and λ: e^ν=Be^-λ, where B is an integration constant. Here the spacetime metric is free from any central singularity. Also, the gravitational mass M(D) can be found to be M(D)= ∫_0^r_1=D 4π r^2ρ_0 dr=4/3π D^3ρ_0.

§.§ Shell

Let us consider that the shell consists of an ultrarelativistic fluid obeying the EOS p=ρ. Zel'dovich <cit.> conceived the idea of this fluid in connection with a cold baryonic universe, and it is known as the stiff fluid. In the present context this may come from thermal excitations with negligible chemical potential or from a conserved number density of gravitational quanta at zero temperature <cit.>. This type of fluid has been extensively used by several authors to study various cosmological <cit.> as well as astrophysical <cit.> phenomena. One can note that within the nonvacuum region, i.e., the shell, it is very difficult to find a solution of the field equations. However, it is possible to obtain an analytical solution within the framework of the thin shell limit, i.e., 0< e^-λ≪1. Physically this means that when two spacetimes join together at a place (in our case the vacuum interior and the Schwarzschild exterior), the intermediate region must be a thin shell (see Ref. <cit.>). Now, in the thin shell, as r→ 0, any parameter which is a function of r is, in general, ≪1. Under this approximation, along with the above EOS as well as Eqs. (<ref>), (<ref>) and (<ref>), one can find the following equations: de^-λ/dr=2/r, (3/2r+ν'/4)de^-λ/dr=1/r^2. Integrating Eq. (<ref>) we get e^-λ= 2ln r + C, where C is an integration constant and the range of r is D≤ r≤D+ϵ. Under the condition ϵ≪1, we get C≪1 as e^-λ≪1. Also, from Eqs. (<ref>) and (<ref>) one can get e^ν=Fr^-4, where F is an integration constant. Also, Eq. (<ref>), along with the EOS p=ρ, yields p=ρ=Hr^4, H being a constant. As ρ∝ r^4, we can infer that the ultrarelativistic fluid within the shell is denser at the outer boundary than at the inner boundary.

§.§ Exterior spacetime

The exterior region, obeying the EOS p=ρ=0, can be defined by the well-known static exterior Schwarzschild solution, which is given by ds^2=(1-2M/r)dt^2-(1-2M/r)^-1dr^2 -r^2(dθ^2+sin^2θ dϕ^2), where M is the total mass of the gravitating system.
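As a quick consistency check (our own sketch, not part of the paper; the symbol names are illustrative choices), one can verify with SymPy that the interior solution above satisfies the tt-component of the field equations:

```python
# Sketch (ours): verify that e^{-lambda} = 1 - 4(2*pi+chi)*rho0*r^2/3,
# with the interior EOS p = -rho = -rho0, satisfies
#   -1 + e^lambda + lambda' r = r^2 e^lambda [8*pi*rho + chi*(3*rho - p)].
import sympy as sp

r, chi, rho0 = sp.symbols('r chi rho_0', positive=True)
rho, p = rho0, -rho0                       # interior EOS: p = -rho
elam = 1 / (1 - sp.Rational(4, 3)*(2*sp.pi + chi)*rho0*r**2)   # e^{lambda}
lam = sp.log(elam)

lhs = -1 + elam + sp.diff(lam, r)*r
rhs = r**2*elam*(8*sp.pi*rho + chi*(3*rho - p))
print(sp.simplify(lhs - rhs))              # prints 0, confirming the solution
```

§ JUNCTION CONDITION

It is already mentioned that the gravastar consists of three regions, i.e., the interior region (I), the shell (II), and the exterior region (III).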
The interior region (I) is connected with the exterior region at the junction interface, i.e., at the shell. According to the Darmois-Israel formalism <cit.>, there should be smooth matching between regions I and III of the gravastar. The metric coefficients are continuous at the junction surface (Σ), i.e., at r=D, though their derivatives may not be continuous. However, one can determine the surface stress-energy tensor S_ij by using the above-mentioned formalism. Now, the intrinsic surface stress-energy tensor S_ij is given by the Lanczos equation <cit.> as S^i_j=-1/8π(κ^i_j-δ^i_jκ^k_k), where κ_ij=K^+_ij-K^-_ij provides the discontinuity in the second fundamental forms or extrinsic curvatures. Here the signs “+” and “-” correspond to the interior and the exterior regions, respectively. Now, the second fundamental forms <cit.> associated with the two sides of the shell are given by K_ij^±=-n_ν^±[∂^2x_ν/∂ξ^i∂ξ^j+Γ_αβ^ν∂ x^α/∂ξ^i∂ x^β/∂ξ^j]|_Σ, where ξ^i are the intrinsic coordinates on the shell and n_ν^± are the unit normals to the surface Σ; for the spherically symmetric static metric ds^2=f(r)dt^2-dr^2/f(r)-r^2(dθ^2+sin^2θ dϕ^2), n_ν^± can be written as n_ν^±=±|g^αβ∂ f/∂ x^α∂ f/∂ x^β|^-1/2∂ f/∂ x^ν, with n^μn_μ=1. Using the Lanczos equation we can get the surface stress-energy tensor as S_ij=diag[σ, -υ, -υ, -υ], where σ is the surface energy density and υ is the surface pressure. The surface energy density (σ) and the surface pressure (υ) can be respectively expressed as σ=-1/4π D[√(f)]^+_-, υ=-σ/2+1/16π[f^'/√(f)]^+_-. So, by using the above two equations we obtain σ=-1/4π D[ √(1-2M/D)-√(1-4(2π+χ)ρ_0D^2/3)], υ=1/8π D[(1-M/D)/√(1-2M/D)- { 1-8(2π+χ)ρ_0D^2/3}/√(1-4(2π+χ)ρ_0D^2/3)]. Also, the mass of the thin shell can be written as m_s=4π D^2σ=D[ √(1-4(2π+χ)ρ_0D^2/3)-√(1-2M/D)]. Here M is the total mass of the gravastar, and it can be expressed in the following form: M=2(2π+χ)ρ_0D^3/3+ m_s √(1-4(2π+χ)ρ_0D^2/3)-m_s^2/2D.

§ PHYSICAL FEATURES OF THE MODEL

§.§ Proper length of the shell

Let us consider that the stiff fluid shell is situated at the surface r=D, defining the phase boundary of region I. The proper thickness of the shell is assumed to be very small, i.e., ϵ≪1. Thus region III starts from the interface at r=D+ϵ. So, the proper thickness between the two interfaces, i.e., of the shell, is determined as ℓ= ∫_D^D+ϵ√(e^λ)dr=∫_D^D+ϵdr/√(2ln r + C). Integrating the above equation one can get ℓ=[ -(-π/2e^C)^1/2 erf{√(ln(1/r^2)-C)/√(2)}]_D^D+ϵ.

§.§ Energy content

In the interior region we consider the EOS in the form p=-ρ, which indicates a negative energy region confirming the repulsive nature of the interior region. However, the energy within the shell turns out to be ℰ=∫_D^D+ϵ4πρ r^2dr=∫_D^D+ϵ4π H r^6dr= 4π H/7[( D+ϵ)^7-D^7 ]. Taking into account the thin shell approximation, one may write the energy ℰ up to first order in ϵ (≪1) as ℰ≈ 4πϵ H D^6. The above relation indicates that the energy of the shell is directly proportional to ϵ, i.e., the thickness of the shell.

§.§ Entropy

According to the prescription of Mazur and Mottola <cit.>, in the interior region I the entropy density is zero, which is consistent with a single condensate state. However, within the shell the entropy is given by S=∫_D^D+ϵ4π r^2s(r)√(e^λ)dr, where s(r) is the entropy density for local temperature T(r) and may be written as <cit.> s(r)=α^2k_B^2T(r)/4πħ^2= α(k_B/ħ)√(p/2 π), α being a dimensionless constant. We note that in the present work we assume the geometrical units, i.e., G=c=1, and also the Planckian units k_B=ħ=1.
So, the entropy density within the shell turns out to be s(r)=α√(p/2π). Therefore, Eq. (<ref>) can be written as S =(8π H)^1/2α∫_D^D+ϵr^4/√(2ln r + C)dr. Integrating the above equation we get S=(8π H)^1/2α[ -(-π/10e^5C)^1/2 erf{√(5[ln(1/r^2) -C])/√(2)}]_D^D+ϵ.

§.§ Equation of state

The EOS at r=D can, as usual, be expressed in the following form: υ=ω(D)σ. Hence, by virtue of Eqs. (<ref>) and (<ref>), the equation of state parameter can explicitly be written as ω(D)=[(1-M/D)/√(1-2M/D)- { 1-8(2π+χ)ρ_0D^2/3}/√(1-4(2π+χ)ρ_0D^2/3)]/2[√(1-4(2π+χ)ρ_0D^2/3)-√(1-2M/D)]. For ω(D) to be real, it is required that 2M/D < 1 as well as 4(2π+χ)ρ_0D^2/3 < 1. Moreover, if one expands the square-root terms in the numerator and the denominator of the expressions of Eq. (<ref>) under the conditions M/D≪ 1 and 4(2π+χ)ρ_0D^2/3≪ 1 in a binomial series and retains the terms up to first order, then one can get ω(D)≈3/2[3M/2(2π+χ)ρ_0D^3-1]. Now, if one examines the above expression for ω(D), two possibilities emerge: either ω(D) is positive if M/D^3>2(2π+χ)ρ_0/3, or ω(D) is negative if M/D^3<2(2π+χ)ρ_0/3.

§ CONCLUSION

In the present work we have proposed a unique stellar model under f(R,𝒯) gravity, as originally conjectured by Mazur-Mottola <cit.> in the framework of general relativity. The stellar model, which they termed a gravastar, may be considered a viable alternative to the black hole. To fulfill the criteria of a gravastar, they described the spherically symmetric stellar system by three different regions: the interior core region, the intermediate thin shell, and the exterior spherical region, with a specific EOS for each region. Under this type of specification we have found a set of exact and singularity-free solutions of the gravitationally collapsing system which present several interesting properties that are physically viable within the framework of alternative gravity of the form f(R,𝒯). In studying the above-mentioned structural form of a gravastar, we have noted several salient aspects of the solution set, as described below:

(1) Pressure-density profile: The pressure-density relationship (p=ρ) of the ultrarelativistic fluid in the shell is shown with respect to the radial coordinate r in Fig. 1, and it maintains a constant variation throughout the shell.

(2) Proper length: The proper length ℓ of the shell, as plotted with respect to the thickness of the shell ϵ (in Fig. 2), shows a gradually increasing profile.

(3) Energy content: The energy of the shell is directly proportional to the thickness of the shell ϵ (in Fig. 3).

(4) Entropy: The entropy S within the shell has been plotted with respect to the thickness of the shell ϵ (in Fig. 4). This plot shows the physically valid feature that the entropy gradually increases with the thickness of the shell ϵ, thus suggesting a maximum value on the surface of the gravastar.

(5) Equation of state: For ω(D) to be real, it is required that 2M/D < 1 as well as 4(2π+χ)ρ_0D^2/3 < 1.
Moreover, under the conditions M/D≪ 1 and 4(2π+χ)ρ_0D^2/3≪ 1, upon expansion of the expressions for ω(D) in a binomial series and retaining the terms up to first order, two possibilities emerge: either ω(D) is positive if M/D^3>2(2π+χ)ρ_0/3, or ω(D) is negative if M/D^3<2(2π+χ)ρ_0/3.

Besides these important general features, we have an overall observation regarding the model in f(R,𝒯) gravity, which is as follows: unlike in Einstein's general relativity, there is an extra term involving χ in the present model which has a definite role and makes the fundamental difference between the expressions in the two theories; as such, the vanishing of this coupling constant χ provides a limiting case for recovering the results of general relativity (e.g., see Ref. <cit.>). This aspect can be verified through a comparative case study between the present work and that of Ghosh et al. <cit.> under a 4-dimensional background. In this sense, f(R,𝒯) gravity generates more generalized solutions for the gravastar than general relativity.

One final comment: as a possible astrophysical implication of our results and a test to detect gravastars under f(R,𝒯) gravity, one may study their gravitational lensing effects, as suggested by several authors, both solely for gravastars <cit.> and for f(R,𝒯) gravity <cit.>. According to the methodology of Kubo and Sakai, one may adopt a spherical thin-shell model of a gravastar developed by Visser and Wiltshire <cit.>, which connects an interior de Sitter geometry and an exterior Schwarzschild geometry. Now, assuming that its surface is optically transparent, they calculate the image of a companion which rotates around the gravastar and find that some characteristic images appear, depending on whether the gravastar possesses unstable circular orbits of photons (Model 1) or not (Model 2). For Model 2, Kubo and Sakai calculate the total luminosity change, which is called the microlensing effect, where the maximal luminosity could be considerably larger than that of a black hole with the same mass. In the future, if one studies similar effects under f(R,𝒯) gravity, then one can compare the effects of modified gravity on the above-mentioned tests with the results based on the general theory of relativity.

§ ACKNOWLEDGMENTS

F. R. and S. R. are thankful for the support from the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India, and for providing the Visiting Associateship under which a part of this work was carried out. S. R. also thanks the authorities of the Institute of Mathematical Sciences (IMSc), Chennai, India for providing the working facilities and hospitality under the Associateship scheme. F. R. is thankful to DST-SERB (EMR/2016/000193), Govt. of India for providing financial support. Special thanks are due to Debabrata Deb for valuable suggestions which became fruitful in the preparation of the manuscript. We are very grateful to the anonymous referee for several useful suggestions which have enabled us to revise the manuscript substantially.

Mazur2001 P. Mazur and E. Mottola, Report number: LA-UR-01-5067. Mazur2004 P. Mazur and E. Mottola, Proc. Natl. Acad. Sci. USA 101, 9545 (2004). Visser2004 M. Visser and D. L. Wiltshire, Classical Quantum Gravity 21, 1135 (2004). Cattoen2005 C. Cattoen, T. Faber, and M. Visser, Classical Quantum Gravity 22, 4189 (2005). Carter2005 B. M. N. Carter, Classical Quantum Gravity 22, 4551 (2005). Bilic2006 N. Bilić, G. B. Tupper, and R. D. Viollier, J. Cosmol. Astropart. Phys. 02 (2006) 013. Lobo2006 F. S. N.
Lobo, Classical Quantum Gravity 23, 1525 (2006). DeBenedictis2006 A. DeBenedictis, D. Horvat, S. Ilijić, S. Kloster, and K. S. Viswanathan, Classical Quantum Gravity 23, 2303 (2006). Lobo2007 F. S. N. Lobo and A. V. B. Arellano, Classical Quantum Gravity 24, 1069 (2007). Horvat2007 D. Horvat and S. Ilijić, Classical Quantum Gravity 24, 5637 (2007). Cecilia2007 C. B. M. H. Chirenti and L. Rezzolla, Classical Quantum Gravity 24, 4191 (2007). Rocha2008 P. Rocha, R. Chan, M. F. A. da Silva, and A. Wang, J. Cosmol. Astropart. Phys. 11 (2008) 010. Horvat2008 D. Horvat, S. Ilijić, and A. Marunovic, Classical Quantum Gravity 26, 025003 (2009). Nandi2009 K. K. Nandi, Y. Z. Zhang, R. G. Cai, and A. Panchenko, Phys. Rev. D 79, 024011 (2009). Turimov2009 B. V. Turimov, B. J. Ahmedov, and A. A. Abdujabbarov, Mod. Phys. Lett. A 24, 733 (2009). Usmani2011 A. A. Usmani, F. Rahaman, S. Ray, K. K. Nandi, P. K. F. Kuhfittig, Sk. A. Rakib, and Z. Hasan, Phys. Lett. B 701, 388 (2011). Lobo2013 F. S. N. Lobo and R. Garattini, J. High Energy Phys. 12 (2013) 065. Bhar2014 P. Bhar, Astrophys. Space Sci. 354, 2109 (2014). Rahaman2015 F. Rahaman, S. Chakraborty, S. Ray, A. A. Usmani, and S. Islam, Int. J. Theor. Phys. 54, 50 (2015). Ri1998 A. G. Riess et al., Astron. J. 116, 1009 (1998). Perl1999 S. Perlmutter et al., Astrophys. J. 517, 565 (1999). Bern2000 P. de Bernardis et al., Nature 404, 955 (2000). Hanany2000 S. Hanany et al., Astrophys. J. 545, L5 (2000). Peebles2003 P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. 75, 559 (2003). Paddy2003 T. Padmanabhan, Phys. Rep. 380, 235 (2003). clifton2012 T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis, Phys. Rep. 513, 1 (2012). harko2011 T. Harko, F. S. N. Lobo, S. Nojiri, and S. D. Odintsov, Phys. Rev. D 84, 024020 (2011). Das2015 A. Das, F. Rahaman, B. K. Guha, and S. Ray, Astrophys. Space Sci. 358, 36 (2015). Das2016 A. Das, F. Rahaman, B. K. Guha, and S. Ray, Eur. Phys. J. C 76, 654 (2016). moraes2014b P. H. R. S. Moraes, Astrophys. Space Sci. 352, 273 (2014). moraes2015a P. H. R. S. Moraes, Eur. Phys. J. C 75, 168 (2015). moraes2015b P. H. R. S. Moraes, Int. J. Theor. Phys. 55, 1307 (2016). singh2014 C. P. Singh and P. Kumar, Eur. Phys. J. C 74, 3070 (2014). rudra2015 P. Rudra, Eur. Phys. J. Plus 130, 66 (2015). baffou2015 E. H. Baffou, A. V. Kpadonou, M. E. Rodrigues, M. J. S. Houndjo, and J. Tossa, Astrophys. Space Sci. 356, 173 (2015). shabani2013 H. Shabani and M. Farhoudi, Phys. Rev. D 88, 044048 (2013). shabani2014 H. Shabani and M. Farhoudi, Phys. Rev. D 90, 044031 (2014). sharif2014b M. Sharif and M. Zubair, Astrophys. Space Sci. 349, 457 (2014). reddy2013b D. R. K. Reddy and R. S. Kumar, Astrophys. Space Sci. 344, 253 (2013). kumar2015 P. Kumar and C. P. Singh, Astrophys. Space Sci. 357, 120 (2015). shamir2015 M. F. Shamir, Eur. Phys. J. C 75, 354 (2015). Fayaz2016 V. Fayaz, H. Hossienkhani, Z. Zarei, and N. Azimi, Eur. Phys. J. Plus 131, 22 (2016). sharif2014 M. Sharif and Z. Yousaf, Astrophys. Space Sci. 354, 2113 (2014). noureen2015 I. Noureen and M. Zubair, Astrophys. Space Sci. 356, 103 (2015). noureen2015b I. Noureen and M. Zubair, Eur. Phys. J. C 75, 62 (2015). noureen2015c I. Noureen, M. Zubair, A. A. Bhatti, and G. Abbas, Eur. Phys. J. C 75, 323 (2015). zubair2015a M. Zubair and I. Noureen, Eur. Phys. J. C 75, 265 (2015). zubair2015b M. Zubair, G. Abbas, and I. Noureen, Astrophys. Space Sci. 361, 8 (2016). Ahmed2016 A. Alhamzawi and R. Alhamzawi, Int. J. Mod. Phys. D 25, 1650020 (2016). Moraes2016 P. H. R. S. Moraes, J. D. V. Arbañil, and M. Malheiro, J. Cosmol. Astropart. Phys.
06 (2016) 005. Yousaf2016a Z. Yousaf, K. Bamba, and M. Z. H. Bhatti, Phys. Rev. D 93, 064059 (2016). Yousaf2016b Z. Yousaf, K. Bamba, and M. Z. H. Bhatti, Phys. Rev. D 93, 124048 (2016). barrientos2014 O. J. Barrientos and G. F. Rubilar, Phys. Rev. D 90, 028501 (2014). singh2015 V. Singh and C. P. Singh, Astrophys. Space Sci. 356, 153 (2015). zeldovich1972 Y. B. Zel'dovich, Mon. Not. R. Astron. Soc. 160, 1 (1972). carr1975 B. J. Carr, Astrophys. J. 201, 1 (1975). Madsen1992 M. S. Madsen, J. P. Mimoso, J. A. Butcher, and G. F. R. Ellis, Phys. Rev. D 46, 1399 (1992). wesson1978 P. S. Wesson, J. Math. Phys. (N.Y.) 19, 2283 (1978). braje2002 T. M. Braje and R. W. Romani, Astrophys. J. 580, 1043 (2002). linares2004 L. P. Linares, M. Malheiro, and S. Ray, Int. J. Mod. Phys. D 13, 1355 (2004). Israel1966 W. Israel, Nuovo Cimento 44, 1 (1966); 48, 463(E) (1967). Darmois1927 G. Darmois, "Mémorial des sciences mathématiques XXV", Fascicule XXV (Gauthier-Villars, Paris, France, 1927), chap. V. lanczos1924 K. Lanczos, Ann. Phys. (Berlin) 379, 518 (1924). sen1924 N. Sen, Ann. Phys. (Berlin) 378, 365 (1924). perry1992 G. P. Perry and R. B. Mann, Gen. Relativ. Gravit. 24, 305 (1992). lake1996 P. Musgrave and K. Lake, Classical Quantum Gravity 13, 1885 (1996). rahaman2006 F. Rahaman, M. Kalam, and S. Chakraborty, Gen. Relativ. Gravit. 38, 1687 (2006). rahaman2009 F. Rahaman, M. Kalam, and K. A. Rahman, Acta Phys. Pol. B 40, 1575 (2009). usmani2010 A. A. Usmani, Z. Hasan, F. Rahaman, Sk. A. Rakib, S. Ray, and P. K. F. Kuhfittig, Gen. Relativ. Gravit. 42, 2901 (2010). rahaman2010 F. Rahaman, K. A. Rahman, Sk. A. Rakib, and P. K. F. Kuhfittig, Int. J. Theor. Phys. 49, 2364 (2010). dias2010 G. A. S. Dias and J. P. S. Lemos, Phys. Rev. D 82, 084023 (2010). rahaman2011 F. Rahaman, P. K. F. Kuhfittig, M. Kalam, A. A. Usmani, and S. Ray, Classical Quantum Gravity 28, 155021 (2011). Ghosh2017 S. Ghosh, F. Rahaman, B. K. Guha, and S. Ray, Phys. Lett. B 767, 380 (2017). Kubo2016 T. Kubo and N. Sakai, Phys. Rev. D 93, 084051 (2016).
http://arxiv.org/abs/1702.08873v2
{ "authors": [ "Amit Das", "Shounak Ghosh", "B. K. Guha", "Swapan Das", "Farook Rahaman", "Saibal Ray" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170226155424", "title": "Gravastars in $f(R,\\mathcal{T})$ gravity" }
This is the authors' extended version of the paper published at: CODASPY'17, March 22 - 24, 2017, Scottsdale, AZ, USA, 978-1-4503-4523-1/17/03, http://dx.doi.org/10.1145/3029806.3029837

"If You Can't Beat them, Join them": A Usability Approach to Interdependent Privacy in Cloud Apps Hamza Harkous EPFL, Switzerland hamza.harkous@epfl.ch Karl Aberer EPFL, Switzerland karl.aberer@epfl.ch =============================================================

Cloud storage services, like Dropbox and Google Drive, have growing ecosystems of 3rd party apps that are designed to work with users' cloud files. Such apps often request full access to users' files, including files shared with collaborators. Hence, whenever a user grants access to a new vendor, she is inflicting a privacy loss on herself and on her collaborators too. Based on analyzing a real dataset of 183 Google Drive users and 131 third party apps, we discover that collaborators inflict a privacy loss which is at least 39% higher than what users themselves cause. We take a step toward minimizing this loss by introducing the concept of History-based decisions. Simply put, users are informed at decision time about the vendors which have been previously granted access to their data. Thus, they can reduce their privacy loss by not installing apps from new vendors whenever possible. Next, we realize this concept by introducing a new privacy indicator, which can be integrated within the cloud apps' authorization interface. Via a web experiment with 141 participants recruited from CrowdFlower, we show that our privacy indicator can significantly increase the user's likelihood of choosing the app that minimizes her privacy loss. Finally, we explore the network effect of History-based decisions via a simulation on top of large collaboration networks. We demonstrate that adopting such a decision-making process is capable of reducing the growth of users' privacy loss by 70% in a Google Drive-based network and by 40% in an author collaboration network. This is despite the fact that we neither assume that users cooperate nor that they exhibit altruistic behavior. To our knowledge, our work is the first to provide quantifiable evidence of the privacy risk that collaborators pose in cloud apps. We are also the first to mitigate this problem via a usable privacy approach.
§ INTRODUCTION

The Rise of Cloud Apps The popularity of consumer cloud storage providers (CSPs) has soared over the previous decade. Dropbox, Google Drive, and OneDrive have each amassed hundreds of millions of users. In order to further appeal to their users, the CSPs have been transitioning from being pure service providers to becoming app ecosystems. Hence, they now offer APIs for developers to import and process users' files stored in the cloud. Consider, for example, a web app called PandaDoc, which allows creating, editing, and signing documents online. When a user uses PandaDoc from her laptop browser, she can import files stored in her Google Drive instead of her hard drive. Such a pattern is increasingly prevalent with the growing number of 3rd Party Cloud apps (or 3PC apps) that are tightly integrated with cloud storage services. Dropbox alone claims that hundreds of thousands of apps have been integrated with its platform. Even in the enterprise setting, 3rd party cloud apps are on the rise. This is first because companies are officially adopting the likes of Dropbox Business, OneDrive for Business, and Google Drive for Work. Second, it is due to employees utilizing their personal cloud accounts to share company files (a.k.a. Shadow IT). Various reports from cloud application security providers state that organizations use from 10 to 20 times more cloud apps than their IT department thinks <cit.>.

Risks in 3rd Party Cloud Apps However, in our previous work, we have shown that 76% of the 3rd party Google Drive apps featured on the Google Chrome Store request full access to users' Google Drive data <cit.>. Around 64% of these apps are over-privileged: they require more permissions than are needed for them to function. Accordingly, users are now faced with a new kind of privacy adversary: the 3rd party app vendors. With every app authorization decision that users make, they are trusting a new vendor with their data and increasing the potential attack surface. Elastica, the cloud application security provider, estimates that the average financial impact on a company as a result of a cloud-storage data breach is $13.85M, including remediation costs <cit.>. In 2015, the data breach at Anthem, a US insurance company, reportedly cost more than $100M, with 80M unencrypted health records leaked. This was a result of an exfiltration exploit leveraging a popular public cloud storage application <cit.>. Even on the personal level, the risk extends from breaches exposing financial information and health records to unnoticeable, continuous profiling based on stored files.

Exposure through Collaboration An additional intricacy is that when users grant access to a 3rd party cloud app, they are not only sharing their personal data but also others' data. This is because cloud storage providers are inherently collaborative platforms where users share and cooperate on common files. Hence, protecting these files is not solely in the hands of the user. Skyhigh Networks, another provider of cloud security software, reports that 37.2% of documents (across 23 million users) are shared with at least one other user. In organizations, documents are shared, on average, with accounts from 849 external domains <cit.>. Moreover, around 23% of cloud documents were found by Elastica to be "broadly shared", which means that they are shared (a) among all employees, (b) with external partners and clients, or (c) with the public <cit.>.
Interestingly, 12% of those documents contained compliance-related or confidential data. This further highlights what has been termed the interdependent privacy problem <cit.>, where the decisions of friends can affect the user's privacy and vice-versa. This concept was initially proposed in the context of third-party social networking apps, such as Facebook. However, while 1.92% of Facebook apps request friends' personal information, the issue is much more pronounced in 3rd party cloud apps, where all apps accessing one's files get access to the shared part too. Moreover, unlike Facebook apps, due to the collaborative nature of cloud apps, the CSPs do not provide an option for users to control whether their collaborators' apps can get access to data they own.

Research Questions. So far, the main approach to reducing the risk of 3PC apps has focused on discovering over-privileged apps and deterring users from installing them <cit.>. Even then, many users would still install such apps as they prioritize short-term utility over long-term risk aversion, or due to the absence of alternatives. Furthermore, that approach relies on experts manually inspecting each app and on applying a plethora of machine learning algorithms to visualize the various risks for users. These issues could present a hurdle to a wide-scale deployment by CSPs. In this work, we address the wider problem of minimizing the risk of all 3PC apps, regardless of whether they are over-privileged or least-privileged. We are further driven by the rationale that users will inevitably continue to install apps to obtain various services. Hence, instead of stopping them, we aim to lead them to select apps from vendors in a way that minimizes their privacy risk. We achieve this by leading users to take what we term History-based decisions. Such decisions account for the vendors who previously obtained access to the user's data, whether directly (with her consent) or via her collaborators. Our strategy consists of introducing privacy indicators to the current permissions interfaces that help users minimize the number of vendors with access to their data. Our "usable privacy" approach is guided via a data-driven study and is evaluated via a data-driven simulation. In essence, we tackle the following research questions:

* From a practical perspective, are the collaborators' decisions significant enough to be accounted for in users' app adoption decisions?
* Do users already account for entities with access to their data? If not, to what extent can the usage of privacy indicators lead to users taking History-based decisions?
* How significant is the effect of adopting these privacy indicators in the case of large networks of users and teams?

Contributions. Towards addressing these questions, we make the following contributions:

* In Section <ref>, we analyze a real-world dataset of Google Drive users, and we show that the median privacy loss that collaborators cause by installing apps can be much higher than that inflicted by the user's own app adoption decisions (39% higher with 5% of shared files and 523% higher with 60% of shared files). To our knowledge, this is the first usage of a real-world dataset to give a concrete evaluation of interdependent privacy in any ecosystem.
* Driven by the significant impact of collaborators, we design new privacy indicators for helping users mitigate the privacy risk via History-based decisions (cf. Section <ref>).
We assess these indicators via a web experiment with 141 users. We show that they significantly increase the likelihood that users choose the option with minimal privacy loss, even if not all of these users are motivated by privacy per se. To the best of our knowledge, this is also the first work to investigate a usable privacy approach to mitigating the problem of interdependent privacy. The few studies on this problem have mainly approached it from a theoretical perspective, such as developing game-theoretic or economic models <cit.>, or from a behavioral perspective, such as studying the factors affecting real users' monetary valuation of others' privacy <cit.>.
* We explore the potential of History-based decisions by performing a simulation on two large user networks. We show that the network effects of our approach result in curtailing the growth of privacy loss by 70% in a synthetic Google Drive-based collaboration network and by 40% in a real author collaboration network. We also simulate the effect of such decisions in a teams' network. We demonstrate that teams can reduce the privacy loss by up to 45% by solely accounting for team members' decisions (cf. Section <ref>).

§ MODELS AND PRELIMINARIES

§.§ System Model

There are four main entities that interact in the third-party cloud app system:

* a user u who uses the app to achieve a certain service;
* a cloud storage provider (CSP) hosting the user's data;
* a data subject to whom the files belong and whose privacy is being considered. We further define two levels of data subject granularity:
  * individual-level granularity: i.e., the user herself is interested in guarding her own data privacy;
  * team-level granularity: i.e., a group of users is interested in guarding the privacy of team-owned data (e.g., using an enterprise version of cloud storage services);
* a vendor v that is responsible for programming and managing a 3rd Party Cloud app (or, shortly, a cloud app or a 3PC app). These vendors register their apps with the CSPs. The apps themselves are hosted on any website the vendors choose (not hosted by the CSP itself).

Each user has access to a set F_u of files stored at the CSP. A subset of these files is owned exclusively by the data subject, while the other subset is composed of files that are each shared with at least one other collaborator. We denote the set of all collaborators of user u by C(u). For simplicity, we will assume throughout this work that the files of all data subjects, as well as the collaborators for each file, are all fixed from a reference step t=0. Using the CSP's API, the vendor v can get access, at step t ∈ ℕ, to the subject's data upon user authorization, which consists of u accepting a list of permissions. We will alternatively refer to this as app installation, and we will assume that exactly one app is installed at each step t. Permissions are named differently across various providers, but, in general, we can categorize them into three categories:

* per-file access: where the user has to authorize the vendor for each file access individually. This is typically done via a file picker provided by the CSP itself.
* full access: where the vendor gets access to all of the user's data. In the interface, this is worded, for instance, as "View the files in your Google Drive" or "access to the files and folders in your Dropbox".
* per-type access: where the vendor gets access to all files of a specific type. For example, Dropbox words it as "access to images in your Dropbox".
Some platforms, like Google Drive, do not provide app developers with such fine-grained options. The authorization can also give v access to files shared with the collaborators of u. Similarly, collaborators of u can install apps that expose files shared with u to new vendors. We denote the set of files of u accessible by vendor v at step t as F_u,v(t). (Although we do not consider file deletion in this work, we note that, in the worst case, the vendor can still have access to copies of files it saved before the user deleted them.)

§.§ User Model

A user is further assumed to be self-interested, caring only about optimizing the privacy of the data subject (a.k.a. a privacy egoist), and non-cooperative, i.e., she does not coordinate her decisions with others. We do not assume that the risks of installing each app are known to the users or calculated a priori. In fact, unlike other 3rd party app ecosystems, the risk of each cloud app cannot be automatically estimated based on techniques such as taint tracking <cit.> or code analysis <cit.>, because the main app functionality is typically implemented on the server side (which cannot be accessed by external entities). Such assumptions constitute the worst case in the scenarios we consider, and further privacy optimizations can be obtained by relaxing them. We also assume that the mental model of privacy-concerned users matches the possible permission granularities they are given. Accordingly, privacy-concerned users can have one of the following privacy-goal granularities (per-file access already achieves the least privilege possible):

* per-type privacy goal: where users aim to optimize their privacy independently for different file types. For example, in an ecosystem like Dropbox, where per-type access is an option, users might follow the separation-of-concerns principle. Hence, they might install photo-related apps from a set of vendors that is different from the set authorized for document processing.
* all-files privacy goal: where users aim to reduce the privacy risk for their entire set of files. This can be the case in ecosystems which do not offer the option of per-type access, like Google Drive. It can also be the case that a user of Dropbox has this goal in mind despite being presented with finer-grained app permissions.

§.§ Threat Model

We consider the 3rd party app vendors as the adversary (and not the CSP). The privacy indicator we introduce is best implemented by the CSP, which already has access to the users' and collaborators' data. Alternatively, this can be a feature within Cloud Access Security Brokers (e.g., SkyHigh Networks, Netskope, etc.), which are already trusted by thousands of enterprises to protect their cloud data against other 3rd parties. Moreover, we consider the protection against over-privileged apps as an orthogonal problem, which we have addressed in <cit.>. We rather focus on the interdependent privacy problem, which covers all vendors with full access and is an issue in least-privileged apps too.

§.§ Privacy Loss Metrics

In order to quantify the privacy loss that a user incurs with time, we now introduce the Vendors File Coverage (VFC) metric. Consider a user u and a set V of vendors at a certain time step. For notational simplicity, we will omit the time step henceforth. VFC_u(V) is computed as the summation of the fractions of files shared with each of these vendors:

VFC_u(V) = ∑_{v ∈ V} |F_{u,v}| / |F_u|

Intuitively, VFC_u(V) increases as vendors in V get access to more files of u. It has the range [0, |V|].
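To make the metric concrete, here is a minimal sketch of how VFC could be computed; the data layout (file IDs as Python sets, a dict from vendors to accessible files) is a hypothetical stand-in, not the paper's actual code.

```python
# Minimal VFC sketch (hypothetical data layout).
# user_files: set of all file IDs belonging to user u.
# vendor_files: dict mapping each vendor to the set of u's file IDs it can access.

def vfc(user_files, vendor_files):
    """Vendors File Coverage: sum over vendors of the fraction of u's files each holds."""
    return sum(len(files & user_files) / len(user_files)
               for files in vendor_files.values())

# Example: one vendor with full access, one with half of the files.
user_files = {"f1", "f2", "f3", "f4"}
vendor_files = {"pandadoc.com": {"f1", "f2", "f3", "f4"},
                "iloveimg.com": {"f1", "f2"}}
print(vfc(user_files, vendor_files))  # 1.0 + 0.5 = 1.5
```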
Note that we do not normalize VFC_u(V) by |V|, as multiple vendors with access to all the user's files induce a higher privacy loss than one vendor with such access. If we consider the set V_u of vendors explicitly authorized by user u, we can define the Self-Vendors File Coverage as:

Self-VFC_u = VFC_u(V_u)

Similarly, if we consider the set V_C(u) of vendors authorized by the collaborators C(u) of u, we can define the Collaborators-Vendors File Coverage as:

Collaborators-VFC_u = VFC_u(V_C(u))

Finally, the Aggregate VFC for a user u is that due to all vendors authorized by u or her collaborators:

Aggregate-VFC_u = VFC_u(V_u ∪ V_C(u))

Throughout this work, we will use the terms privacy loss and VFC interchangeably. As will become evident in Section <ref>, this metric choice allows relaying a message that is simple enough for users to grasp, yet powerful enough to capture a significant part of the privacy loss. Obviously, one could resort to a deeper inspection of content or metadata sensitivity (as in <cit.>) had the purpose been finding the best privacy model in general. However, for instigating a behavioral change, telling users that a company has 30% of their files is more concrete than a black-box description informing them that the calculated loss is 30%, and it constitutes less information overload than presenting them with detailed loss metrics.

§ COLLABORATORS' IMPACT

At this point, we are in a position to handle the first research question on the extent of collaborators' contribution to a user's privacy loss. Hence, we want to test the following hypothesis:

H1: The collaborators' app adoption decisions have a significant impact on the user's privacy loss.

If this hypothesis is valid in practice, it provides a strong motivation for designing privacy notices that aid users in accounting for their collaborators' decisions, which is what we will study in Section <ref>. Towards that, we will be dissecting the privacy loss, quantified by VFC, that users incur in a realistic 3rd party cloud apps dataset.

§.§ The Case of Google Drive

To study the problem in a realistic context, we take Google Drive as a case study in this work, given that it has one of the most popular 3rd party ecosystems. Nevertheless, the insights gained from our work are applicable to other cloud platforms as well. The main (content-related) Google Drive permissions that 3PC apps' vendors can request are presented in Table <ref>, along with the Google-provided description for each. This short description is also presented to the user when installing an app (see Figure <ref> for an example app). The user can click on the info button (i) next to each permission to read additional explanations in a popup. The user has to accept all permissions in order to utilize the app. These apps can be found on the Google Chrome Web Store (and other Google stores), where users can rate and review them. In this work, we will focus on content-related permissions. Hence, as discussed in Section <ref>, we differentiate between two levels of access: (1) full access, which includes the drive_readonly and drive permissions, and (2) per-file access, which includes the drive_file permission. Google Drive does not offer the per-type permissions option.

§.§ Dataset

One of the main challenges when studying the privacy loss in 3rd party cloud apps is the absence of public datasets with realistic file distributions, collaborator distributions, sharing patterns, 3rd party app installations, etc.
We benefit in this section from a dataset that we collected in a previous work via the PrivySeal service (https://privyseal.epfl.ch) <cit.>, and we build our analysis on it in order to evaluate the VFC of users in a realistic context. PrivySeal is a web service for assisting users in avoiding Google Drive apps that are over-privileged. It deters users by showing them the Far-reaching Insights that such apps can glean from their data (their topics of interest, collaboration and activity patterns, etc.). There are currently over 1500 registered users in PrivySeal, and we refer the interested reader to our previous work for more details <cit.>. The dataset, henceforth referred to as the PrivySeal Dataset, was anonymized and contained metadata-only information. It included a subset of the files' metadata of 183 PrivySeal users, in addition to the Google Drive apps installed by those users prior to authorizing PrivySeal's app (the drive_apps_readonly permission was requested by PrivySeal). Each user had a minimum of N_files_min=10 files in total and at least P_min_shared=5% of files that are shared. The dataset specifically contained:

* the list of user IDs (anonymized via a one-way hash function);
* the IDs of files in each user's Google Drive;
* the list of anonymized collaborators' IDs for each file ID;
* the list of apps with full access installed by each user;
* the vendor of each app.

In total, the number of users in addition to collaborators was 3422. Overall, these users had installed 131 distinct Google Drive apps from 99 distinct vendors. Figure <ref> characterizes the PrivySeal Dataset. In particular, it displays 4 distributions in this dataset, which realistically model the system under study:

* number of files per user, which follows a skewed distribution with a median of 67 files;
* sharing pattern, i.e., the percentage of shared files out of all user files, which also follows a skewed distribution with a median around 18%;
* number of collaborators across all user files (a.k.a. the degree of the user node in the collaboration network), where 75% of the users had fewer than 23 collaborators;
* number of vendors authorized per user, which also follows a skewed distribution with a median of 1 vendor per user.

§.§ Results

We computed the Self-VFC, the Collaborators-VFC, and the Aggregate-VFC (as defined in Section <ref>) for users in the PrivySeal Dataset. (To avoid double counting, we considered the vendors authorized by both the user and her collaborators in computing the Self-VFC but not in computing the Collaborators-VFC.) As we did not have the actual number of apps for each collaborator of users in the dataset, we assigned to these collaborators a set of apps from a random user of the dataset. We show in Figure <ref> how these metrics evolve as we gradually consider populations that collaborate more frequently. With P_min_shared=5%, we had a median of 1.39 for the Collaborators-VFC, which was 39% higher than the median of 1.00 for the Self-VFC. The significance of the median difference is evidenced by the non-overlapping box-plot notches. This difference became much larger when we considered users that share more files: we had a 100% median difference at P_min_shared=10% and a 523% median difference at P_min_shared=60%. Such results indicate that:

* The collaborators' app adoption decisions contribute a core component to the user's privacy loss, thus confirming our hypothesis H1.
* The higher the number of collaborators is, the higher the magnitude of loss these collaborators can potentially inflict.
Both conclusions motivate the need for taking collaborators' decisions into account when designing privacy indicators for cloud apps, which is what we embark on next.

§ USER STUDY

Up till now, we have confirmed that, if users want to minimize their privacy loss, they are better off not ignoring the app installation decisions of collaborators. In this section, we tackle the next research question, where we investigate the potential of privacy indicators in leading users to minimize their exposure to 3PC app vendors. We first present our design methodology for the privacy indicators, and we follow that with a web experiment that investigates the efficacy of these indicators in realistic scenarios.

§.§ History-based Privacy Indicators

We call our proposed privacy indicators "History-based Insights" (HB Insights) as they allow users to account for the previous decisions taken by them or by their collaborators. We continue to consider Google Drive as a case study, and we show this indicator in the context of Google Drive apps' permissions in Figure <ref>. Compared to the current interface provided by Google (Figure <ref>), we added a new part to highlight the percentage of user files readily accessible by the vendor (computed based on VFC_u({v}) for each vendor v). As we prove in Appendix <ref>, selecting the vendor that already has the largest percentage of user files is the optimal strategy to minimize the privacy loss in our context. We denote this strategy as "History-based decisions". Following the best practices in privacy indicators' design <cit.>, our indicator was multilayered, with both textual and visual components. The wording of the main textual part was brief and general enough to hold for both the data percentage exposed by friends and that exposed by the user. We used a percentage value rather than a qualitative measure to facilitate making comparisons among apps based on this value. The visual part showed the percentage as a progress bar with a neutral violet color. The bottom textual part was added in a smaller font to provide further explanation for those interested. We used the term company in our interface instead of vendor, as it is more commonly understood by the general audience.

§.§ Methodology

In order to evaluate the new permissions interface, we performed an online web experiment (rather than a lab study), as we were mainly motivated by obtaining a large sample of users that is also geographically and culturally diverse. The hypothesis we wanted to test is:

H2: Introducing the new privacy indicator significantly increases the probability that users take History-based decisions.

In addition, the study allowed us to build a realistic user decision model based on the choices taken by participants in different conditions. We will utilize this model in Section <ref> to simulate the app choices in a large user network and to study the effect on the overall VFC in the network. We structured our study to have (1) an Introductory Survey, (2) a series of App Installation Tasks, and (3) a Concluding Survey.

User Recruitment. We recruited users via CrowdFlower's crowdsourcing platform. In our study, we restricted participation, via the platform's filtering system, to the highest quality contributors (Performance Level 3). We also geographically targeted countries where English is a main language, as our interface was only in English.
In order to further guarantee quality responses, each user was rewarded a small amount of $0.5 for merely completing the study and an additional amount of $1.25 that was manually awarded as a bonus to those who did not enter irrelevant text in the free-text fields.

Instructions. Participants were first presented with introductory instructions that explained the context of the study (cloud storage services and 3rd party apps that can be connected to them). They were asked to continue only if they had good familiarity with cloud storage services (Google Drive, Dropbox, etc.). We did not explicitly require that participants have experience with 3rd party cloud apps. However, we educated them about such apps throughout the instructions, particularly showing them two examples of 3rd party apps in action (PandaDoc for signing documents and iLoveIMG for cropping photos). These apps were displayed via animated GIFs that play automatically and do not rely on the user clicking. We used limited deception by neither mentioning the focus of the study on participants' privacy nor giving hints about selecting apps based on the installation history. The advertised purpose was to check how people make decisions when they install 3rd party apps.

Introductory Survey. After checking the instructions, users were presented with an introductory survey, where they first entered general demographic information. This survey was also front-loaded with questions about cloud storage services (several of which required free-text input) in order to discourage users who had not used these services from continuing to the actual study.

§.§ Study Overview

Next, users could proceed to the study page. We used a split-plot design in the study. Participants were randomly assigned to one of two groups:

* Baseline Group: where the permissions interface used is the one currently provided by Google Drive (Figure <ref>).
* History-based Group (HB): where the HB Insights permissions interface (Figure <ref>) is used.

In each group, the study consisted of 3 modules, which cover the main conditions that can occur when users desire to install a cloud app. On a high level, the modules investigate the following questions:

* Module 1: are users likely to select apps from the same vendor they installed from before?
* Module 2: are users likely to select apps from vendors that their collaborators have used before?
* Module 3: do users consider the differences in the access levels obtained by the vendors whose apps their collaborators installed?

In all modules, whenever the user was asked to choose an app, she was presented with a list of 12 apps (Figure <ref> shows an example app). Only two of these apps were relevant to the task purpose, and they were placed on top of the list (randomly positioned as first or second). With this setup, we wanted to mimic the realistic setting of app browsing while not squandering the user's effort on finding apps. All apps had the same full-access permissions too (namely the drive permission). Unlike in the Chrome Store, we removed elements such as ratings, user reviews, and screenshots and kept a minimal interface. This was done to reduce distractions from factors outside the study; we refer the reader to the work of Kelly et al. <cit.>, who investigated the effects of those elements on users' decisions for Android apps. In order to account for fatigue and learning effects, modules 1, 2, and 3 were presented to users in a random order.
We piloted our experimental setup in two stages: with colleagues and with online users from the CrowdFlower community itself. For reviewing the online pilot testers' work, we embedded a Javascript code for session recording in our study's web page, which allowed us to view the user's mouse and keyboard actions on our side.

Demographics. We had 157 users who completed the study. Based on manually reviewing the users' inputs, we removed 16 users who entered irrelevant free text in the survey. We thus report the results of 141 users, split 72/69 across the two groups. In Table <ref>, we describe the participants' demographics based on the introductory survey. Of these participants, 66.4% were males and 33.6% were females. They were between 18 and 62 years old, with a median of 31. Moreover, 42.3% of the participants had worked or studied in IT before. Participants were mostly from India (37%), USA (35%), Britain (7%), Germany (7%), and Canada (7%). CrowdFlower presents the users with an optional satisfaction survey after completing the study, and 49 users took this survey. On average, the study received 4.2/5 for instructions clarity, 3.8/5 for questions' fairness, 3.8/5 for ease of job, and 3.6/5 for pay sufficiency (before the bonus was awarded). This indicates that participants' behavior was not notably affected by a lack of time to complete the task or by the task design in general.

§.§ Study Details and Results

We now move to the detailed description of the modules and the results obtained. These modules are summarized in Figure <ref>, to which we refer henceforth. We also show sample screenshots from the online study in Figure <ref>. The results are also presented in Table <ref>.

Module 1 (Self-History Scenario) tests whether the user is more likely to select an app from the same vendor she has just installed from before. In step (a), the user is made aware that she installed an app from a specific vendor v (Figure <ref>). In step (b), she is asked to install an app that satisfies the given purpose (Figure <ref>) among a list of apps (users were informed that this is a role-playing study, and no apps were actually installed). Two of the listed apps were relevant, and one of them was from vendor v itself. Despite the participants being informed one step earlier that they installed an app from the vendor thetimetube.com, that did not make a difference in the Baseline case: half of the users still chose the app from the new vendor nitrosafe.org (cf. Table <ref>). In the absence of the traditional signals that users follow for deciding on apps (reviews, ratings, permissions), participants apparently made decisions that cancelled out, making the two apps equally favored across participants. The vast majority of users were not approaching the installation from the angle of keeping their data with fewer stakeholders. Based on their provided justifications, they rather looked for other cues, such as selecting the app that, in their opinion, has a more comprehensive description, a more professional logo, a better sounding name, or a more trustable URL. Still, 12 users explicitly mentioned in their text input that they chose an app because it is from the same vendor they dealt with earlier. Even then, none of them alluded to a privacy motivation behind the choice. These 12 participants mainly provided cross-app compatibility, interface familiarity, and satisfaction with the previous vendor as justifications.
For example, one participant wrote: "I favoured Malware Scanner due to the fact that the name thetimetube.com was in the last app installed, and I tend to install apps from the same company due to cross-app compatibility usually found in apps by the same company." Interestingly, two users justified their installation of the app from the new vendor (nitrosafe.org) by writing that they had just installed an app from the same company before. This indicates that, even when users try to account for previous decisions, they might find it difficult to remember the previous app vendors. Given that our study had a short time span separating the current from the previous installation, we expect that such mistakes would be even more common in real scenarios, where app installation instances are separated by longer time spans.

The HB group witnessed a much larger proportion of users who favored the option with less privacy loss: 72.2% of the participants selected the app from thetimetube.com (the vendor which already has access). The difference of 22.8% compared to the Baseline group is statistically significant (Fisher's exact test, p=0.005). Many of the participants who chose the app from thetimetube.com reported that they were motivated by the 100% access that the app already has. We counted around 40 such users (57% of the HB group). Some of them went further and explicitly mentioned that their selection was motivated by giving data to fewer data owners (more privacy). For example, one user wrote: "This company has access to all my files, so I would choose them as I don't want to have 2 companies with full access to my files." In a nutshell, we were able to verify our hypothesis in this scenario: the new privacy indicator leads users to more frequently choose the app from a vendor they already authorized. Furthermore, we discovered that the HB Insights interface indirectly made users think about various positive effects brought by using apps from the same vendor. This eventually led them to make more privacy-preserving decisions.
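Significance results like the p=0.005 above come from Fisher's exact test on the 2x2 table of group versus choice. A minimal sketch follows; the counts below are purely illustrative stand-ins, since the study reports percentages rather than these exact numbers.

```python
# Illustrative Fisher's exact test on a hypothetical 2x2 contingency table:
# rows = groups, columns = (chose already-authorized vendor, chose new vendor).
from scipy.stats import fisher_exact

hb_counts = [52, 20]        # hypothetical HB group counts (~72% vs ~28%)
baseline_counts = [35, 36]  # hypothetical Baseline group counts (~half/half)

odds_ratio, p_value = fisher_exact([hb_counts, baseline_counts])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```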
Module 2.1 (Collaborator's App Scenario) tests the likelihood that the participant selects the same app that her collaborator had used. In step (a), the participant is made aware that she had shared all her photos with a friend f (Figure <ref>). For more familiarity, we also added a picture for each of the two fictitious friends throughout the study. In step (b), the user is made aware that her friend f has installed an app a_0 (Figure <ref>) from vendor v. She is asked to type the name of the app's vendor (the paste option was disabled in the input field to further ensure the participant is aware of the vendor). In step (c), the user is asked to install an app with a certain purpose (similar to Figure <ref>). One of the two matching apps is app a_0. Similar to the previous module, the Baseline group witnessed an almost even split between Online Player, installed previously by the friend, and Enjoy Music Player, from a new vendor (cf. Table <ref>). We also noticed that 20 participants in this group justified their decision by mentioning that their friend has used the app. Still, none of them alluded to privacy reasons in their justifications. Instead, the two most prevalent motivations were (1) considering the friend's use of the app as a recommendation or (2) achieving compatibility with their friend's app, which facilitates data sharing within the app itself. Quoting one user: "This is the same app my friend is using so it should be quite compatible for us to both share."

In addition to having a significant 35.6% difference in the case of the HB group, we noticed that 32 users mentioned the existing data access as a reason for choosing the app Online Player. Also, 26 users referred to the fact that the friend has installed this app before (including those who mentioned both of the previous reasons). Unlike the Baseline group's justifications though, where the friend's recommendation and the app's compatibility prevailed, the privacy issue was explicitly brought up by at least 10 users. One participant put it as follows: "Thanks to John, they have already access to 70% of my data. Sharing the last 30% isn't as bad as sharing 100% of my data with driveplayer.com."

Module 2.2 (Collaborator's Vendor Scenario). We proceed in steps (d) and (e) as in the previous scenario's steps (b) and (c), with the difference that a new app from v is included among the options in step (e) instead of the exact same app a_0. One interesting insight from this scenario is that the line between the company and the app is blurred in the minds of several users, who used the two entities interchangeably. In fact, 3 users in one group and 7 participants in the other justified their choices by mentioning that their friend installed the same app before, which was not the case. For example, one user wrote: "this app already has access to my files, and I don't want to install any new app."

Module 3 (Multiple Collaborators Scenario). Given collaborators f_more and f_less, where the user shares much more data with f_more, this scenario checks the likelihood of the participant authorizing an app that f_more has installed. In step (a), the participant is made aware that f_more has access to more data than f_less (Figure <ref>). In steps (b) and (c), the participant is made familiar with the apps each of the friends installed (similar to Figure <ref>). In step (d), the user is asked to select an app with a specific purpose. The two friends' apps are the only ones matching, and the choice is to be made between them (similar to Figure <ref>).

In the Baseline group, we had 44.4% of the participants choosing the app installed by f_more. Still, this percentage is relatively close to an equal split between the two apps. Out of this percentage, 13 users justified their choice by mentioning that they were encouraged to follow the choice of friend f_more. Even though they did not mention privacy, the larger number of files shared with f_more was often used as a justification. For example, one participant wrote: "This is the app that John already uses, and he has access to all of my files. The PDF Mergy app is used by Lisa, but she only has access to part of my files."

In the HB group, around 82.6% chose the app previously installed by the friend f_more, which is significantly more than in the Baseline case (Fisher's exact test, p<0.001). Looking at the justifications, around 37 users explicitly mentioned the higher access level that this app already possesses as a reason for their choice. Privacy was additionally mentioned by 8 of these users. Quoting one of them: "PDF Mergy already has access to 70% of my files. Using PDF Files Merger would unnecessarily increase third party app access to my files." However, we still had 2 users who went for the app with less existing access, with one of them saying he favors the app that had accessed only 30% of his files before installation.
What was interesting, though, is that almost all users who mentioned friends were actually making a comparison between the two friends' existing access levels, regardless of their final choice.

§.§ Concluding Survey

At the end of the user study, users were presented with a final set of questions. We asked them whether they would like to be notified when a friend installs an app that gets access to their shared files. Around 92% of users in one group and 90% in the other agreed. We further asked the participants whether they are fine with a collaborator being notified when they install applications that access files shared with that collaborator. The percentage of people who agreed dropped to 75% in one group and 78% in the other. The relatively small difference between the answers to these two questions highlights that only a minority of users is unwilling to make the trade-off of contributing to the overall system. Such users can be given the option to not use privacy indicators based on their friends' decisions.

Next, users were asked the following question: "Assume you have installed an application called YouMusic from a company called Musicana and gave it access to all your files on Google Drive. Now you are considering installing an application called YouVideo from the same company. How do you think that this application will affect your privacy?" Only 11% of each group replied "negatively". The vast majority in both groups either perceived the avoidance of a new vendor as a positive outcome or considered that the privacy loss would remain the same. Interestingly, the users in the Baseline group showed similar reasoning in justifying their choices as the HB group, although the latter were primed about these aspects via the privacy indicators. This indicates that the privacy indicators actually match the first intuition of a large fraction of users.

§.§ Discussion and Limitations

Overall, we found that, in the three modules, participants in the HB group were significantly more likely to install the app with less privacy loss (the app from the vendor with the largest share of the user's files) than those in the Baseline group. Despite showing the efficacy of History-based Insights, our study still has its limitations. In order to get a large, diverse sample size, we resorted to a web experiment based on role-playing with hypothetical data. It would be interesting to see how such results extrapolate to the case where users' own data is in question. Moreover, in our design, we abstracted away several factors (e.g., ratings and reviews), which have been previously studied in similar ecosystems <cit.>, in order to focus on one factor. Had they been included, these factors might have diluted the effect of the privacy indicator. Still, we conjecture that, although the absolute values of our findings might not strictly apply, the differences between the two groups will still be practically significant.

Additionally, in this paper, we have investigated only one type of history-based privacy indicator. Evidently, such indicators can be integrated at different stages of the app installation process. For example, they can be part of the recommendation strategy for suggesting alternative apps. They can also be included in the apps' search interface. Apps can also be labelled as "privacy preserving" in the web store based on this metric. It is also possible to show the privacy indicator only when the vendor has existing access to the user's data. This might serve to reduce the habituation effect and the information overload.
The best choice among these deployment scenarios needs further investigation. Furthermore, it is important to note that, although our experimental interface mentions the collaborator's name in the explanation under the progress bar, this does not have to be the case in actual deployments. We hypothesize that removing the name will not have a significant impact on the results, as it was not highlighted in the interface. This allows the CSP to relay such information to the users without exposing sensitive data about particular collaborators. The CSP can resort to more sophisticated anonymization methodologies, such as showing a non-exact percentage that can be mapped to multiple collaborators. Exploring the impact of these techniques is left for future work. Moreover, we note that this anonymization might not be needed at all in enterprise settings, where apps installed by team members are supposed to be visible to the administrators. As we show in Section <ref>, a significant reduction in privacy loss can be achieved without even accounting for decisions by users external to the team. Finally, the privacy indicator in our study has addressed two granularity levels: full and per-file access. However, the same indicator can be extrapolated to the case of per-type access. For example, the interface can say: "The app's company already has access to 70% of your photos" (instead of files).

§ LARGE NETWORKS' SIMULATIONS

In the previous section, we showed the significant change that our privacy indicator can effect by encouraging users to make History-based decisions. We now tackle the next research question, where we investigate the impact of adopting such privacy indicators on the privacy risk in realistic scenarios with large user networks. As we are not in the position of the CSP to study an actual implementation of the HB Insights interface over time, we will perform a simulation of potential users' installation behavior. We will base this on both the crowdsourced decision model inferred from the user study and on new collaboration networks that we construct.

§.§ Simulation Data

Collaboration Networks. For the purposes of this simulation, we constructed the following three networks:

* Inflated Google Drive Network: We used the standard degree-driven approach for network topology generation to construct a larger Google Drive network based on the one in the PrivySeal Dataset of Section <ref> <cit.>. Based on an input user degree distribution from that dataset, we used the Configuration Model as described by Newman <cit.> and implemented by the library NetworkX <cit.> for inflating the graph (see the sketch after this list). This model generates a random pseudograph (a graph with parallel edges and self-loops) by randomly assigning edges to match an input degree sequence. We removed the self-loops and parallel edges a posteriori from the generated graph. In the end, we had a collaboration graph with 18,000 users and 138,440 edges. This graph is, by construction, a connected graph, with an average node degree of 15.
* Paper Collaboration Network: In an effort to have a realistic, large collaboration network without resorting to graph inflation, we relied on the Microsoft Academic Graph, which consists of records of scientific papers along with the authors and their affiliations <cit.>. We used a snapshot of 50,000 papers, and we constructed the collaboration graph based on it. We ended up with 41,000 collaborators and 199,980 edges.
The graph itself is not connected but rather consists of around 1700 connected components. The average node degree is 4. Our rationale is that this graph captures a realistic scenario of users collaborating on authoring documents, which is, in fact, an activity achieved via cloud services nowadays. Hence, it is fit for showing the efficacy of our privacy indicators.
* Team Collaboration Network: We used the same academic graph in order to construct a network of teams. A team is defined as a frequently collaborating group of people. Motivated by research around community detection <cit.>, we use Strongly Connected Components (SCCs) in order to label teams in our graph. We ended up with 16,400 users split over 1700 teams. Unlike the previous two networks, where the users themselves are the data subjects (whose privacy is to be optimized), members of each team in this network consider their team as the data subject.
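As referenced in the first item above, here is a minimal sketch of the graph-inflation step with NetworkX's configuration model; the degree sequence below is a random stand-in for the one extracted from the PrivySeal Dataset.

```python
# Sketch of inflating a collaboration graph via the Configuration Model (NetworkX).
import random
import networkx as nx

random.seed(42)
# Stand-in degree sequence; the paper draws this from the PrivySeal degree distribution.
degree_sequence = [random.choice([2, 5, 10, 15, 23, 40]) for _ in range(18000)]
if sum(degree_sequence) % 2:        # the configuration model requires an even degree sum
    degree_sequence[0] += 1

G = nx.configuration_model(degree_sequence)        # random pseudograph matching the degrees
G = nx.Graph(G)                                    # collapse parallel edges
G.remove_edges_from(list(nx.selfloop_edges(G)))    # drop self-loops, as done in the paper
print(G.number_of_nodes(), G.number_of_edges())
```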
Sharing and Installation Patterns. In order to closely model the user characteristics in Google Drive, we assigned to each user in the collaboration networks a file sharing distribution and a number of apps corresponding to a user with a matching degree in the PrivySeal Dataset.

Apps. As we wanted to perform the simulation with a much larger number of users than we had in the dataset described in Section <ref>, we also needed a larger collection of apps. Given that the Google Chrome Store has only around 500 apps that are tagged with the Works with Google Drive tag, we decided to also include all Google Chrome Apps in the dataset (even those that do not have this tag). As far as the simulation is concerned, this step is justified since the only realistic information that we will rely on is the distribution of vendors per app. It is fair then to assume that this distribution does not differ significantly between the general category and the Google Drive category. Hence, we augmented the PrivySeal Dataset via apps from the Google Chrome Store to arrive at 1000 apps. In addition to each app's installation count and vendor name, we also collected the set of Related Apps that the store displays for each app. This is because, in our simulation, we will assume that users have the choice between the app itself and one of its related apps. Again, this is a fair assumption, as these related apps mostly deliver a functionality close to that of the app itself, and we will only rely on them to model the alternatives at each simulation step.

User Decision Models. For the purpose of this simulation, we define 3 user decision models:

* Fully Aware Model: the user always makes the decision that minimizes the privacy loss of the data subject, taking into account all previous installation decisions by her and by her collaborators.
* Experimental History-based Model: the user takes decisions similar to those of a random user of the HB experimental group. Specifically, we model those users as taking a history-based decision with probability q and making a random app choice with probability 1-q. We set q based on the number of users who mentioned the app's existing access in writing as a reason for their choice in each module of Section <ref>. Based on Module 1's responses, we set q=0.57 when the user encounters a vendor she previously authorized. Based on Module 2, we set q=0.70 whenever the user is presented with one vendor previously authorized by a single collaborator. Based on Module 3, we set q=0.67 for the cases where the user is presented with multiple vendors previously authorized by her collaborators. In all of these cases, the user will select the vendor with the minimal resulting Aggregate-VFC_u with probability q.
* Experimental Baseline Model: the user takes decisions similar to those of a random user of the Baseline experimental group. As users in practice are rarely informed of what their friends have installed before, we do not integrate this knowledge into the model. Hence, we only account for the case of Module 1, where the user's previous decisions are concerned. Based on the fraction of users who mentioned the app's existing access as a motivation for their choice, we set the probability of taking a history-based decision in this model to q=0.18.

In the special case of the team collaboration network, users who take history-based decisions account for their own decisions and the decisions of their team members only. We do not consider that users account for decisions taken by members of other teams. This is to demonstrate the potential of the privacy indicators under strict conditions.

§.§ Simulation Details

We now move to the description of the simulation itself, which is detailed in Algorithm <ref>. We had three simulation groups, named after the three decision models: the Fully Aware group, the Experimental History-based group, and the Experimental Baseline group. The simulation was run until the average number of apps installed by users reached 30 apps (comparatively, mobile users accessed 26.7 smartphone apps on average per month in the fourth quarter of 2014 <cit.>). On a high level, at each simulation step, the following actions are performed:

* A user is selected from the collaboration network via weighted random sampling based on the assigned app installation frequencies (line <ref>). This accounts for the diversity of users' installation frequencies. An app a_0 is selected from the simulation apps' dataset via weighted random sampling based on the actual app installation counts in the Google Chrome Store (line <ref>). That way, popular apps are installed more frequently (as is the case in practice).
* A user decision is simulated. The user is assumed to be choosing the app a_0 or one of its related apps. This choice is made depending on the user's decision model, as explained previously (a compressed sketch follows this list).
* Finally, the average Aggregate-VFC is computed based on all users' Aggregate-VFC_u.
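To make the per-step logic concrete, the sketch below compresses one simulation step into runnable form; all data structures are toy stand-ins for the real networks and app data of this section, and the single q value simplifies the per-case q values of the Experimental History-based model.

```python
# One simulation step (toy stand-ins; not the paper's actual simulation code).
import random

random.seed(7)
users = ["u1", "u2", "u3"]
install_freq = [5, 1, 2]                 # per-user installation frequencies
apps = ["a", "b", "c"]
popularity = [100, 10, 30]               # Chrome Store installation counts
related = {"a": ["b"], "b": ["c"], "c": ["a"]}
vendor_of = {"a": "v1", "b": "v2", "c": "v1"}
coverage = {("u1", "v1"): 0.7}           # fraction of a user's files a vendor already holds

def step(q=0.70):                        # q taken from the Experimental History-based model
    user = random.choices(users, weights=install_freq)[0]  # frequent installers picked more often
    app0 = random.choices(apps, weights=popularity)[0]     # popular apps picked more often
    options = [app0] + related[app0]
    if random.random() < q:
        # History-based decision: prefer the vendor that already covers the most files,
        # i.e., the choice minimizing the increase in the user's Aggregate-VFC.
        return user, max(options, key=lambda a: coverage.get((user, vendor_of[a]), 0.0))
    return user, random.choice(options)                    # otherwise a random pick

print(step())
```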
§.§ Simulation Results

To demonstrate the simulation results, we show three types of figures per collaboration network. On a high level, in Figures <ref>, <ref>, and <ref>, we show how the privacy loss (quantified using the average Aggregate-VFC) in each group evolves as users install more apps. In Figures <ref>, <ref>, and <ref>, we show the ratios of the privacy loss in the Fully Aware and Experimental History-based groups with respect to the Experimental Baseline group. Finally, Figures <ref>, <ref>, and <ref> show the actual events contributing to the privacy loss growth, where we can specifically check the fraction of apps coming from new vendors, those coming from vendors previously authorized by the user, and those from vendors previously authorized by collaborators.

§.§.§ Results for Individuals' Networks

Based on these metrics, we start by analyzing the results for the individuals' networks, where we observe the following:

Curtailed growth of privacy loss. From Figures <ref> and <ref>, we notice that the growth of the privacy loss is visibly curtailed in the Fully Aware and Experimental History-based groups compared to the Experimental Baseline group. This significant divergence demonstrates the efficacy of our privacy indicators.

Impact of the network effect. Looking into the ratios in Figures <ref> and <ref>, we see that the privacy loss in the Experimental History-based group has dropped by 41% in the inflated network and by 28% in the authors-based network (both with respect to the baseline). In the Fully Aware group, where users always optimize their privacy, the privacy loss has dropped by 70% in the inflated network and by 40% in the authors-based network. This higher impact in the case of the inflated network is due to the fact that it is a connected graph, unlike the authors-based network, which is composed of smaller connected components. Nevertheless, we can state that, although our privacy indicators have a larger effect on highly connected networks, they are still significantly effective in less connected networks, like the authors-based dataset.

Importance of accounting for collaborators' decisions. To dive further into the events that lead to the observed privacy loss patterns, we look into Figures <ref> and <ref>. First, we observe that users in the Experimental Baseline group are mainly installing new apps from vendors that had no previous access to their data. This is reflected in the almost linear increase of privacy loss in Figures <ref> and <ref>. Second, we observe that, in the case of the inflated network, users have been frequently installing apps from vendors with existing access through their collaborators. In fact, as apparent in Figure <ref>, this event outnumbers the event of installing from a new vendor. Third, the number of installations from collaborators' vendors is also significant in the case of the authors-based dataset. While it does not outnumber the installations from new vendors (due to the low graph connectivity), this is still enough to lead to the 28% and 40% decreases in the privacy loss in the Experimental History-based and the Fully Aware groups, respectively. Finally, we note that, although the users encounter vendors authorized by their collaborators more frequently than vendors authorized by themselves, the latter event still significantly impacts the results. This is because users still incur an incremental privacy loss with vendors authorized by their collaborators, while this loss is zero with vendors they have previously authorized. Accordingly, the obtained optimizations are a result of users accounting for both their own and others' decisions.

§.§.§ Results for the Teams' Network

We now discuss the results for the case of the collaboration network where users work in teams and aim to protect the privacy of the team's data. We observe the following, based on Figure <ref>:

Inherent usage of similar apps. From Figure <ref>, it is clear that the dominant event is that of users installing apps which have been authorized by other team members before. This holds even in the case of the Experimental Baseline group, which was not the case in the individuals' networks. We justify that by the fact that we selected apps at each simulation step to match their realistic installation frequencies. In practice, apps' installation counts follow a long-tail distribution, and users tend to mostly install a limited set of apps. That is why team members will naturally tend to install a set of similar apps.

Curtailed growth of privacy loss. Still, we observe that the trend of slower growth of privacy loss also applies in the case of teams (Figure <ref>). As we also observe in Figure <ref>, the privacy loss has decreased by 23% for the Experimental History-based group and by 45% for the Fully Aware group, both with respect to the baseline group.
This implies that there is ample room for privacy optimization in teams too.

Effect due to internal collaborators. We finally observe that the privacy loss decrease was achieved via decisions taken by each team's members independently, without relying on other teams' decisions. This highlights the fact that privacy indicators can still be effective even when users do not account for the decisions of users outside their team. Obviously, taking the external members' decisions into account can lead to further optimizations.

In sum, our simulations provide further evidence of the efficacy of using History-based privacy indicators in a large network of collaborators. It is worth noting too that, although users in our study followed the Experimental History-based decision model, we believe that, in an actual deployment of such indicators, the behavior will move closer to the Fully Aware model. This is because users are more protective when their personal data is at risk than when they are put in a role-playing scenario about fictitious data. Moreover, users in our study were exposed to this indicator for the first time. When users are educated more about this feature, they might be more likely to take advantage of it.

§ RELATED WORK

§.§ Interdependent Privacy

The problem of interdependent privacy has been tackled before in the context of social apps. The main approaches were high-level game-theoretic or economic modeling. In <cit.>, the authors introduced the concept of interdependent privacy and modeled its impact via a game-theoretic (2-player, 1-app) model. The work by Pu and Grossklags <cit.> presented a more elaborate economic model that additionally accounts for the interplay among various social network parameters. They showed that app rankings do not accurately reflect the level of interdependent privacy harm an app can cause and that even rational users who consider their friends' well-being might adopt apps with invasive privacy practices. Evidently, these results do not apply in the cloud apps case, where all apps have the potential to inflict interdependent privacy harm. A later work by Pu and Grossklags <cit.> used a conjoint study approach to quantify the monetary value which individuals associate with their friends' personal data. They found that individuals place a significantly higher value on their own personal information than on their friends' personal information. This further supports our assumption of self-interested users in this work. The same authors also built on a user survey in <cit.> to assess the factors affecting users' own privacy concerns as well as friends' privacy concerns in the context of social app adoption. In particular, they found evidence of a negative association between past privacy invasion experiences and the trust in 3rd party apps' handling of their own data. They also found partial support for a positive effect of privacy knowledge on concerns for users' own privacy and their friends' privacy. Other works have also investigated the issue of interdependent privacy in the context of location privacy <cit.> and genomic privacy <cit.>. In this work, we focus on quantifying the interdependence of privacy in the context of cloud apps before addressing it from a usable privacy perspective, thus bridging the gap between the theoretical studies and the end-user needs.

§.§ Apps Privacy Indicators

Our previous work <cit.> was the first to study the privacy of 3rd party cloud apps and to expose that almost two thirds of those apps are over-privileged.
In that work, we introduced a novel privacy indicator for deterring users from installing over-privileged apps by showing them the Far-reaching Insights that apps can needlessly infer from their data (top topics, faces, or locations of interest). In the context of Android apps, Kelly et al. showed that, by adding a set of privacy facts about an app, users will be more likely to choose apps with fewer permissions <cit.>. Harbach et al. tackled the same problem but presented users with random examples from their data (pictures, contacts, etc.) <cit.>. Almuhimedi et al. showed the effectiveness of privacy nudges, which regularly alert users about sensitive data collected by their apps, in encouraging users to review and adjust their permissions <cit.>. All these works, however, tackle the problem of over-privileged apps and try to lead the user into either avoiding them or adjusting their permissions whenever possible. Our current work helps users improve their privacy by reducing the number of vendors with access to their data, even if the functionality delivered by the vendor abides by the least-privilege principle. Hence, it complements these approaches and can be deployed alongside any of them.

§ CONCLUSION

The findings in this work are the first to concretely delineate the various aspects of interdependent privacy in 3PC apps. One of the major outcomes is that a user's collaborators can be much more detrimental to her privacy than her own decisions. Consequently, accounting for collaborators' decisions should be a key component of future privacy indicators in 3rd party cloud apps. We have shown the impact of History-based Insights as a privacy enhancing technology in this context, especially since, based on our user study, users are unlikely to account for previous decisions on their own. Our privacy indicators would optimally be implemented by the CSPs themselves, as they control the authorization interface and the application stores. The indicators can also be realized by third party privacy providers with access to users' data. Our approach can also be easily mapped to other ecosystems. In the mobile apps scenario, it can enable a user to reduce the number of vendors with access to her contacts. It can also be extended to the case where the goal is protection against 4th parties (e.g., ad providers and data brokers). There, the user can account for data previously held by a 4th party with which the app vendor cooperates. Finally, due to their usability and effectiveness, we envision History-based Insights as an important technique within the movement from static privacy indicators towards dynamic privacy assistants that lead users to data-driven privacy decisions.

§ ACKNOWLEDGMENTS

We would like to thank Deniz Taneli and Nicolas Hubacher for their help in exploratory work that led to this paper. We also thank Rameez Rahman for the helpful discussions and the anonymous reviewers for their valuable feedback. The research leading to these results has received funding from the EU in the context of the project CloudSpaces: Open Service Platform for the Next Generation of Personal Clouds (FP7-317555).

§ PROOF OF OPTIMAL USER STRATEGY

In this section, we complement Section <ref> by providing a proof of the optimal user strategy for minimizing the privacy risk, given our assumptions. We follow the notation introduced in Section <ref>. Let us consider that each 3PC app vendor has a probability p of exposing users' data.
As we do not assume that users are provided with a per-vendor risk estimation utility, we set this probability to be the same for all vendors. In general, at a time t, a user u would have exposed her data to a set V of vendors, such that each vendor v has access to a fraction f_u,v(t) = |F_u,v(t)|/|F_u| of the files. Without loss of generality, we will consider henceforth that the user has an all-files privacy goal (cf. Section <ref>). However, the same reasoning applies in the case of a per-type privacy goal; in that case, we simply replace "files" by "files of a specific type" (e.g., photos, documents). We will also be assuming that the users themselves are the data subjects (we consider individual-level subjects). For a vendor v, we quantify the user's privacy risk magnitude as p · f_u,v(t), the fraction of user files possessed by the vendor multiplied by the probability that the vendor exposes the user's files. This vendor could have obtained access due to app installations by the user herself or by her collaborators. A user's privacy risk magnitude at time t can thus be defined as the sum of the risk magnitudes across the vendors in V: R_u(t) = ∑_v∈ V p · f_u,v(t). When a user installs an app from a vendor v̂ at time t+1, the vendor gets access to the whole set of the user's files. Hence, the risk magnitude is increased by p · (1 − f_u,v̂(t)). Given that p is constant, the risk magnitude can be minimized by choosing v̂ = argmax_v f_u,v(t), i.e., the vendor whose current coverage of the user's files is largest. Hence, the optimal, greedy strategy to minimize the risk is to select the vendor that already has the largest fraction of the user's files, thus minimizing p · (1 − f_u,v̂(t)). We call this strategy: "History-based decisions".
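To make the derivation concrete, here is a minimal Python sketch of the greedy rule just derived. The data layout and all names (files_by_vendor, history_based_choice) are our own illustrative choices under the assumptions above, not constructs defined in the paper.

```python
# Illustrative sketch of the "History-based decisions" strategy derived above.
# The data layout and names are ours; the paper defines only the math.

def risk_magnitude(files_by_vendor, total_files, p):
    """R_u(t): sum over vendors of p * f_{u,v}(t)."""
    return sum(p * len(f) / total_files for f in files_by_vendor.values())

def history_based_choice(candidates, files_by_vendor, total_files):
    """Greedy choice: pick the candidate vendor already holding the largest
    fraction of the user's files, minimizing the added risk p * (1 - f)."""
    return max(candidates, key=lambda v: len(files_by_vendor.get(v, ())) / total_files)

# Example: vendor "B" already holds 60 of 100 files, so it is the safer choice.
held = {"A": set(range(10)), "B": set(range(60))}
assert history_based_choice(["A", "B"], held, 100) == "B"
```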
http://arxiv.org/abs/1702.08234v2
{ "authors": [ "Hamza Harkous", "Karl Aberer" ], "categories": [ "cs.CR", "cs.HC" ], "primary_category": "cs.CR", "published": "20170227111521", "title": "\"If You Can't Beat them, Join them\": A Usability Approach to Interdependent Privacy in Cloud Apps" }
Semi-quantum communication: Protocols for key agreement, controlled secure direct communication and dialogue Chitra Shukla^a,email: shukla.chitra@i.mbox.nagoya-u.ac.jp, Kishore Thapliyal^b,email: tkishore36@yahoo.com, Anirban Pathak^b,email: anirban.pathak@jiit.ac.in ^aGraduate School of Information Science, Nagoya University, Furo-cho 1, Chikusa-ku, Nagoya, 464-8601, Japan ^bJaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307, India December 30, 2023 ================================================================================================================================================================================================================ Semi-quantum protocols that allow some of the users to remain classical are proposed for a large class of problems associated with secure communication and secure multiparty computation. Specifically, for the first time, semi-quantum protocols are proposed for key agreement, controlled deterministic secure communication and dialogue, and it is shown that the semi-quantum protocols for controlled deterministic secure communication and dialogue can be reduced to semi-quantum protocols for e-commerce and private comparison (socialist millionaire problem), respectively. Complementing the earlier proposed semi-quantum schemes for key distribution, secret sharing and deterministic secure communication, the set of schemes proposed here and the subsequent discussions establish that almost every secure communication and computation task that can be performed using fully quantum protocols can also be performed in a semi-quantum manner. Further, this work addresses a fundamental question in the context of a large number of problems: how much quantumness (how many quantum parties) is required to perform a specific secure communication task? Some of the proposed schemes are completely orthogonal-state-based, and thus fundamentally different from the existing semi-quantum schemes that are conjugate-coding-based. The security, efficiency and applicability of the proposed schemes are discussed with appropriate importance. Keywords: Semi-quantum protocol, quantum communication, key agreement, quantum dialogue, deterministic secure quantum communication, secure direct quantum communication. § INTRODUCTION Since Bennett and Brassard's pioneering proposal of an unconditionally secure quantum key distribution (QKD) scheme based on conjugate coding <cit.>, various facets of secure communication have been explored using quantum resources. On the one hand, a large number of conjugate-coding-based (BB84-type) schemes <cit.> have been proposed for various tasks including QKD <cit.>, quantum key agreement (QKA) <cit.>, quantum secure direct communication (QSDC) <cit.>, deterministic secure quantum communication (DSQC) <cit.>, quantum e-commerce <cit.>, quantum dialogue <cit.>, etc.; on the other hand, serious attempts have been made to answer two extremely important foundational questions: (1) Is conjugate coding necessary for secure quantum communication? (2) How much quantumness is needed for achieving unconditional security?
Alternatively, are all the users involved in a secure communication scheme required to be quantum, in the sense of having the capacity to perform quantum measurements, prepare quantum states in more than one mutually unbiased basis (MUB), and/or store quantum information? Efforts to answer the first question have led to a set of orthogonal-state-based schemes <cit.>, where security is obtained without using our inability to simultaneously measure a quantum state using two or more MUBs. These orthogonal-state-based schemes <cit.> have strongly established that any cryptographic task that can be performed using a conjugate-coding-based scheme can also be performed using an orthogonal-state-based scheme. Similarly, efforts to answer the second question have led to a few semi-quantum schemes for secure communication which use a lesser amount of quantum resources than that required by their fully quantum counterparts (protocols for the same task with all the participants having the power to use quantum resources). Protocols for a variety of quantum communication tasks have been proposed under the semi-quantum regime; for example, semi-quantum key distribution (SQKD) <cit.>, semi-quantum information splitting (SQIS) <cit.>, semi-quantum secret sharing (SQSS) <cit.>, semi-quantum secure direct communication (SQSDC) <cit.>, semi-quantum private comparison <cit.>, and authenticated semi-quantum direct communication <cit.>. The majority of these semi-quantum schemes are two-party schemes, but a set of multi-party schemes involving more than one classical Bob have also been proposed <cit.>. In some of these multi-party semi-quantum schemes (especially for multiparty SQKD) it has been assumed that there exists a completely untrusted server/center Charlie, who is a quantum user, and either all <cit.> or some <cit.> of the other users are classical. Further, some serious attempts have been made to provide security proofs for semi-quantum protocols <cit.>. However, to the best of our knowledge, until now no semi-quantum protocol has been proposed for a set of cryptographic tasks, e.g., (i) semi-quantum key agreement (SQKA), (ii) controlled deterministic secure semi-quantum communication (CDSSQC), (iii) semi-quantum dialogue (SQD). These tasks are extremely important in their own right as well as for the fact that a scheme of CDSSQC can easily be reduced to a scheme of semi-quantum e-commerce in analogy with Ref. <cit.>, where it is shown that a controlled-DSQC scheme can be used for designing a scheme for quantum online shopping. Further, a scheme for online shopping will be of much more practical relevance if the end users (especially buyers) do not require quantum resources and consequently can be considered classical users. In brief, a semi-quantum scheme for e-commerce is expected to be of much use. It is also known that a Ba An type scheme for QD <cit.> can be reduced to a scheme of QPC <cit.>, which can then be used to solve the socialist millionaire problem <cit.>; a scheme of QKA can be generalized to the multiparty case and used to provide semi-quantum schemes for sealed-bid auctions <cit.>; and, in a similar manner, a CDSSQC scheme can be used to yield a scheme for semi-quantum binary voting in analogy with <cit.>.
The fact that no semi-quantum scheme exists for SQD, SQKA and CDSSQC, together with their wide applicability to e-commerce, voting, private comparison and other cryptographic tasks, has motivated us to design new protocols for SQD, SQKA and CDSSQC and to critically analyze their security and efficiency. To do so, we have designed two new protocols for CDSSQC and one protocol each for SQD and SQKA. These new protocols provide a kind of completeness to the set of available semi-quantum schemes and allow us to safely say that any secure communication task that can be performed using a fully quantum scheme can also be performed with a semi-quantum scheme. Such a reduction of quantum resources is extremely important, as quantum resources are costly and it is not expected that all the end users would possess quantum devices. Before we proceed further, it would be apt to note that in the existing semi-quantum schemes different powers have been attributed to the classical party (let us call him Bob for the convenience of the discussion, though in practice we often name a classical user Alice, too). Traditionally, it is assumed that a classical Bob does not have a quantum memory and can only perform a restricted set of classical operations over a quantum channel. Specifically, Bob can prepare new qubits only in the classical basis (i.e., the Z basis or {|0⟩,|1⟩} basis). In other words, he is not allowed to prepare |±⟩ or other quantum states that can be viewed as a superposition of the |0⟩ and |1⟩ states. On receipt of a qubit, Bob can either resend (reflect) it (independently of the basis used to prepare the initial state) without causing any disturbance, or measure it only in the classical basis. He can also reorder the sequence of qubits received by him by sending the qubits through different delay lines. In fact, the first ever semi-quantum scheme for key distribution was proposed by Boyer et al. in 2007 <cit.>. In this pioneering work, the user with restricted power was referred to as classical Bob, and in that work and in most of the subsequent works (<cit.> and references therein) it was assumed that Bob has access to a segment of the quantum channel starting from Alice's lab and going back to her lab via Bob's lab; as before, the classical party Bob can either leave the qubit passing through the channel undisturbed or perform a measurement in the computational basis, which can be followed by a fresh preparation of the qubit in the computational basis. This was followed by a semi-quantum scheme of key distribution <cit.>, where the classical party can either choose not to disturb the qubit or measure it in the computational basis, and, instead of preparing fresh qubits in the computational basis, he may reorder the undisturbed qubits to ensure unconditional security. Later, these schemes were modified to a set of SQKD schemes with less than four quantum states, where Alice requires quantum registers in some of the protocols <cit.>, but not in all of them. In what follows, we attribute the same power to Bob in the schemes proposed in this paper. In the schemes proposed below, the senders are always classical, and they are referred to as classical Bob/Alice. This is consistent with the nomenclature used in most of the recent works (<cit.> and references therein). However, in the literature several other restrictions have been put on classical Bob. For example, in Ref.
<cit.>, an SQKD scheme was proposed where Bob was not allowed to perform any measurement, and he was referred to as a "limited classical Bob". It was argued that a limited classical Bob can circumvent some attacks related to the measurement device <cit.>. However, the scheme proposed in <cit.> was not measurement-device independent. Similarly, in <cit.> a server was delegated the task of performing measurements and applying one of the two Pauli operations, while the classical user's role was restricted to randomly sending the received qubits to the server for the random application of an operation or a measurement. Such a classical user was referred to as a "nearly classical Bob". The remaining part of the paper is organized as follows. In Sec. <ref>, a protocol for SQKA between a quantum and a classical user is proposed. Two CDSSQC schemes, with a classical sender and with the receiver and controller both possessing quantum powers, are proposed in Sec. <ref>. In Sec. <ref>, an SQD scheme between a classical and a quantum party is designed. The security of the proposed schemes against various possible attacks is discussed in the respective sections. The qubit efficiency of the proposed schemes is calculated in Sec. <ref> before concluding the work in Sec. <ref>. § PROTOCOL FOR SEMI-QUANTUM KEY AGREEMENT In analogy with the weaker notion of quantum key agreement, i.e., that both parties take part in the preparation of the final shared key, most of the SQKD protocols may be categorized as SQKA schemes. Here, we rather focus on the stronger notion of key agreement, which corresponds to schemes where each party contributes equally to the final shared key, and none of the parties can manipulate or know the final key prior to the remaining parties (for details, see <cit.> and references therein). We will also show that the proposed SQKA scheme can be reduced to a scheme for SQKD, the first of its kind, in which a sender can send a quantum key to the receiver in an unconditionally secure and deterministic manner. In this section, we propose a two-party semi-quantum key agreement protocol, where Alice has all quantum powers, but Bob is classical, i.e., he is restricted to the following operations: (1) measuring and preparing qubits only in the computational basis {|0⟩,|1⟩} (also called the classical basis), and (2) simply reflecting the qubits without disturbance. The working of the proposed protocol is illustrated through the schematic diagram shown in Fig. <ref>. The following are the main steps in the protocol. § PROTOCOL 1 Step1.1: Preparation of the quantum channel: Alice prepares n+m=N Bell states, i.e., |ψ^+⟩^⊗ N, where |ψ^+⟩=|00⟩+|11⟩/√(2). Out of the total N Bell states, n will be used for key generation, while the remaining m will be used as decoy qubits for eavesdropping checking. Subsequently, she prepares two ordered sequences: one of all the first qubits and one of all the second qubits of the initial Bell states. She keeps the first sequence with herself as home qubits (H) and sends the second sequence to Bob as travel qubits (T), as shown by the arrow in Step1.1 of Fig. <ref>. Using a quantum random number generator (QRNG), Alice also prepares her raw key of n bits, K_A={K_A^1,K_A^2⋯ K_A^i⋯ K_A^n}, where K_A^i is the i^th bit of K_A and K_A^i∈{0,1}, as shown in Step1.1 of Fig. <ref>.
Step1.2: Bob's encoding: Bob also prepares his raw key K_B={K_B^1,K_B^2,⋯,K_B^i,⋯,K_B^n} of n bits by using an RNG[Bob being classical, we refer to his random number generator as a traditional pseudo-random number generator, instead of a true random number generator, which is required to be quantum. This aspect of random key generation on Bob's side is not explicitly discussed in the existing works on semi-quantum protocols. However, it is not a serious issue: on the one hand, extremely good quality pseudo-random numbers can be generated classically (thus, the use of a classical RNG will be sufficient for the present purpose); on the other hand, QRNGs are now commercially available and are not considered a costly quantum resource, so if one just allows an otherwise classical user to have a QRNG, the modified scheme could still be considered semi-quantum, as Bob would still lack the power of performing quantum measurements and/or storing quantum information. In fact, measurement in the classical basis would be sufficient for the generation of true random numbers if Bob could create a |±⟩ state using a beam splitter.], which is independent of Alice's QRNG, where K_B^i is the i^th bit of K_B with K_B^i∈{0,1}. After receiving all the qubits from Alice, Bob randomly chooses one of the two operations, either to measure or to reflect, as shown in Step1.1 of Fig. <ref>. Specifically, he measures n qubits (chosen randomly) in the computational basis, while reflecting the remaining m qubits to be used later for eavesdropping checking. He forms a string of his measurement outcomes as r_B={r_B^1,r_B^2,⋯,r_B^i,⋯,r_B^n}, where r_B^i is the measurement outcome of the i^th qubit chosen to be measured by Bob in the computational basis, and therefore r_B^i∈{0,1}. Then, to encode his raw key K_B, he performs a bitwise XOR operation, i.e., r_B⊕ K_B, and prepares the corresponding qubits in the computational basis. Finally, he inserts the encoded n qubits back into the string of reflected m qubits and sends the resultant sequence r^b back to Alice, but only after applying a permutation operator Π_N, as shown by the first arrow in Step1.2 of Fig. <ref>. These qubits will further be used to make the final shared key K_f. Step1.3: Announcements and eavesdropping checking: After receiving an authenticated acknowledgment of the receipt of all the qubits from Alice, Bob announces the permutation operator Π_m corresponding to the qubits reflected by him. Though this reveals which qubits have been measured and which have been reflected by Bob, Eve or Alice cannot gain any advantage, due to the lack of information regarding the permutation operator Π_n. Further, to detect eavesdropping, Alice first measures the reflected qubits in the Bell basis by combining them with the respective partner home qubits. If she finds the measurement result to be the |ψ^+⟩ state, then they confirm that no eavesdropping has happened, because the initial state was prepared as |ψ^+⟩, and they move on to the next step; otherwise, they discard the protocol and start from the beginning. Step1.4: Extraction of the final shared key K_f by Bob: After ensuring that there is no eavesdropping, Alice announces her secret key K_A publicly, as shown by the first arrow in Step1.4 of Fig. <ref>, and Bob uses that information to prepare his final shared key K_f=K_A⊕ K_B, as he knows K_B. Subsequently, he also reveals the permutation operator Π_n. Here, it is important to note that Alice announces her secret key K_A only after the receipt of all the encoded qubits from Bob.
So Bob cannot make any further changes as per his wish. Step1.5: Extraction of the final shared key K_f by Alice: Once Alice has reordered Bob's encoded qubits, she measures both the home (H) and travel (T) qubits in the computational basis. She obtains a string of n bits r_A corresponding to the measurement outcomes of the home qubits, and she can use the measurement outcomes of the travel qubits to learn Bob's encoded raw key, as the initial Bell states are publicly known. For the specific choice of initial Bell state here, i.e., |ψ^+⟩, the relation r_A=r_B holds. Therefore, the final shared key would be K_f = K_A⊕ K_B=(r_A⊕ K_A)⊕(r_B⊕ K_B). Hence, the final shared key K_f is shared between Alice and Bob. In Eq. (<ref>), Eve may know Alice's secret key K_A, as this was announced through a classical channel. She is also aware of r_A=r_B due to public knowledge of the initial choice of Bell state. However, this does not affect the secrecy of the final shared key K_f, which is prepared as K_A⊕ K_B, because Eve does not know anything about Bob's secret key K_B or the value of r_A (or r_B); the classical bookkeeping behind this equation is illustrated in the sketch below. Further, it should be noted here that the computational basis measurement is not the only choice for Alice; rather, she can extract Bob's encoding by performing a Bell measurement. We will skip that discussion here, as the same has been discussed in the following section for the semi-quantum dialogue protocol. If we assume that Alice does not intend to send her raw key (i.e., she does not announce her raw key) in the proposed SQKA protocol, while otherwise following it faithfully, then it reduces to a deterministic SQKD protocol, in analogy with the ping-pong protocol <cit.> designed to perform a quantum direct communication task, which was also shown to share a quantum key in a deterministic manner.
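As a sanity check on the classical bookkeeping behind Eq. (<ref>), the following Python sketch first samples computational-basis outcomes of |ψ^+⟩ (verifying r_A=r_B) and then replays the XOR steps of Protocol 1. All variable names are our own, and the quantum transmission, decoy checking and permutation are elided.

```python
# Sanity check of the classical bookkeeping behind Protocol 1 (illustrative only;
# transmission, decoy qubits and the permutation Pi_N are elided).
import numpy as np

rng = np.random.default_rng(0)

# Computational-basis outcomes on |psi+> = (|00> + |11>)/sqrt(2) satisfy r_A = r_B.
psi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
outcomes = rng.choice(4, size=8, p=np.abs(psi_plus) ** 2)
r_A, r_B = outcomes // 2, outcomes % 2        # first- and second-qubit bits
assert np.array_equal(r_A, r_B)

K_A = rng.integers(0, 2, size=8)              # Alice's raw key (announced later)
K_B = rng.integers(0, 2, size=8)              # Bob's raw key (never announced)
encoded = r_B ^ K_B                           # Bob's freshly prepared qubits |r_B xor K_B>
K_f_alice = K_A ^ (encoded ^ r_A)             # Alice decodes K_B, then forms K_A xor K_B
K_f_bob = K_A ^ K_B                           # Bob forms the same key from the announced K_A
assert np.array_equal(K_f_alice, K_f_bob)
```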
§.§ Possible attack strategies and security * Eve's attack: As mentioned beforehand, Eve's ignorance regarding the final shared key solely depends on whether Alice receives Bob's raw key in a secure manner. In other words, although Eve is aware of the initial state and Alice's raw key, she still requires Bob's key to obtain the final shared key. In what follows, we discuss some attacks she may attempt in order to extract this information. The easiest technique Eve may incorporate is a CNOT attack (as described and attempted in Refs. <cit.>). To be specific, she may prepare enough ancilla qubits (each initially in |0⟩) to perform a CNOT with each travel qubit as control and an ancilla as target during the Alice-to-Bob communication. This way, the compound state of Alice's, Bob's and Eve's qubits, prior to Bob's measurement, becomes |ψ_ABE⟩=|000⟩+|111⟩/√(2). Bob then returns some of the qubits, performing a single-qubit measurement in the computational basis on his qubit (B) for the rest. The reduced state of Alice's and Eve's qubits corresponding to Bob's measurement may be written as ρ_AE=|00⟩⟨00|+|11⟩⟨11|/2, while the three-qubit state remains unchanged for the reflected qubits. Suppose Bob prepares a fresh qubit |ξ_B⟩=|r_B⊕ K_B⟩ and returns the string of encoded (in other words, measured) and reflected qubits to Alice without applying a permutation operator. Subsequently, Eve again performs a CNOT operation during the Bob-to-Alice communication, with control on the travel qubits and target on the ancilla qubits. It is straightforward to check that in the case of the reflected qubits the state reduces to |ψ_ABE⟩=(|00⟩+|11⟩/√(2))⊗|0⟩, whereas for the encoded qubits it may be written as ρ'_ABE=1/2(|0⟩⟨0|⊗|K_B⟩⟨ K_B|+|1⟩⟨1|⊗|K_B⊕1⟩⟨ K_B⊕1|)⊗|K_B⟩⟨ K_B|, from which it may be observed that Eve will always obtain Bob's secret key. However, this problem is circumvented by the permutation operator applied by Bob (in Step1.2) on the string of encoded and reflected qubits; a bit-level caricature of this attack is given below. As Eve's CNOT attack strategy is foiled by the use of a permutation operator, she may attempt other attack strategies. Suppose she performs an intercept-and-resend attack. Specifically, she can prepare an equal number of Bell states as Alice and send all the second qubits to Bob, keeping Alice's original sequence with herself. Bob follows the protocol, encodes n qubits randomly, and sends them to Alice; they are again intercepted by Eve. Subsequently, Eve performs a Bell measurement on all the Bell pairs (which she had initially prepared), and she may come to know which n qubits were measured by Bob. Quantitatively, she can gain this knowledge 75% of the time, as any Bell measurement outcome other than the original state indicates a measurement performed by Bob. Depending upon the Bell measurement outcomes, Bob's encoding can also be revealed, as |ψ^-⟩ and |ϕ^±⟩ will correspond to Bob's 0 and 1 in the computational basis, respectively (see Section <ref> and Table <ref> for more detail). Subsequently, she performs a measurement in the computational basis on the qubits sent by Alice corresponding to each qubit Bob has measured. Finally, she sends the new string of qubits (comprising freshly prepared qubits and Alice's original qubits) to Alice, which will never fail the eavesdropping checking, and once Alice announces her key, Eve can obtain at least 75% of the shared key. It is important to note here that 25% of the keys of Alice and Bob will also not match in this case. This may be attributed to the disturbance caused by eavesdropping, which leaves a signature; such a signature is a characteristic of quantum communication and can be exploited to achieve security against this kind of attack. Specifically, Alice and Bob may choose to verify a small part of the shared key to check for this kind of attempt. As we have already incorporated a permutation-of-particles scheme (performed by Bob) for security against the CNOT attack, it becomes relevant to examine the feasibility of this attack in that case as well. Bob discloses the permutation operator for the decoy qubits only after an authenticated acknowledgment by Alice. Therefore, Eve fails to obtain the encoded bit values prior to Bob's disclosure: although they are encoded in the computational basis, she does not know the partner Bell pairs due to the randomization. She requires these Bell pairs to decode the information, as the measurement outcome of the partner Bell particle acts as a key for decoding Bob's information. Further, Bob announces the correct order of particles only when fewer than a threshold number of errors are found during eavesdropping checking. Indeed, most of the attacks by an eavesdropper can be circumvented if the classical Bob is given the power to permute the string of qubits with him, i.e., Bob can secure his raw key (information) in the proposed SQKA scheme by permuting the particles before sending them to Alice. There are some other attacks (see <cit.> for details) which do not affect the security of the proposed protocol, like the disturbance attack, the denial-of-service attack, and the impersonation attack (which becomes void after incorporating an authentication protocol).
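The essence of the CNOT attack can be caricatured at the bit level, since after Bob's measurement the relevant correlations are classical. The sketch below (our own variable names, under the no-permutation assumption) shows why Eve's ancilla ends up holding exactly K_B.

```python
# Bit-level caricature of the CNOT attack when Bob does NOT permute (names ours).
import secrets

K_B = secrets.randbits(1)   # Bob's secret key bit
r_B = secrets.randbits(1)   # Bob's computational-basis outcome on the travel qubit
eve = 0                     # Eve's ancilla, prepared in |0>
eve ^= r_B                  # first CNOT on the Alice -> Bob leg copies the travel bit
fresh = r_B ^ K_B           # Bob's freshly prepared qubit |r_B xor K_B>
eve ^= fresh                # second CNOT on the Bob -> Alice leg
assert eve == K_B           # Eve's ancilla ends in |K_B>: the key leaks
```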
* Alice's attack: In the eavesdropping checking, at the end of the round trip of the Bell pairs, Bob announces the positions of the reflected qubits in each of the Bell pairs. The remaining string (i.e., the encoded string after measurement) is in the computational basis, so Alice can know Bob's encoding before she announces her own. In other words, she could control the final key completely by announcing her raw key accordingly. However, this is not desired in a genuine key agreement scheme. This possible attack by Alice is circumvented by the use of the permutation operator discussed under the previous attack. As Bob reveals the permutation he applied on the freshly prepared qubits (on which his raw key is encoded) only after Alice announces her raw key, she cannot extract his raw key earlier, due to the lack of knowledge of the pair particles corresponding to each initially prepared Bell state. Hence, only after Alice's announcement of her raw key does she come to know Bob's raw key, with his cooperation. To avoid this attack, we may also require that both Alice and Bob share the hash values of their raw keys during their communication; then, if Alice wishes to change her raw key later, the protocol is aborted, as the hash value of her modified raw key will not match that of the original raw key. * Bob's attack: As mentioned under Alice's attack, Bob announces the permutation operator only after receiving her raw key. One should notice here that the permuted string Bob has sent and the corresponding string of Alice are in the computational basis. Further, Bob knows each bit value in Alice's string, as those are nothing but Bob's corresponding measurement outcomes in Step1.2. Once Bob knows Alice's raw key, he may control the final key entirely by disclosing a new permutation operator that suits his choice of shared key. Therefore, it becomes important to incorporate the hash function: if Bob has already shared the hash value of his key, he cannot change his raw key during the announcement of the permutation operator. § CONTROLLED DIRECT SECURE SEMI-QUANTUM COMMUNICATION If we observe the SQKA protocol proposed here, it can be stated that Bob sends his raw key by a DSSQC scheme and Alice announces her raw key. The security of the final key depends on the security of Bob's raw key. Hence, a semi-quantum counterpart of a direct communication scheme can be designed. However, to avoid designing various schemes for the same task, we rather propose a controlled version of the direct communication scheme and discuss the feasibility of realizing it, which directly implies the possibility of a direct communication scheme. Here, we propose two controlled direct secure semi-quantum communication protocols. Note that in the proposed CDSSQC schemes only Alice is considered a classical party, while Bob and Charlie possess quantum powers. §.§ Protocol 2: Controlled direct secure semi-quantum communication The working of this scheme is as follows. Step2.1: Preparation of the shared quantum channel: Charlie prepares n+m=N copies of a three-qubit entangled state |ψ⟩_GHZ-like=|ψ_1⟩|a⟩+|ψ_2⟩|b⟩/√(2), where |ψ_i⟩∈{|ψ^+⟩,|ψ^-⟩,|ϕ^+⟩,|ϕ^-⟩} with |ψ^±⟩=|00⟩±|11⟩/√(2) and |ϕ^±⟩=|01⟩±|10⟩/√(2), and ⟨ a|b⟩=δ_a,b (a numerical sketch of one admissible choice of this state is given after this protocol).
The classical user Alice will encode her n-bit message on the n copies, while the remaining m copies will be used as decoy qubits to check for an eavesdropping attempt. Subsequently, Charlie prepares three sequences of all the first, second and third qubits of the entangled states. Finally, he sends the first and second sequences to Alice and Bob, respectively. They can check the correlations in a few of the shared quantum states to detect an intercept-and-resend attack, i.e., Charlie measures his qubits in the {|a⟩,|b⟩} basis, while Alice and Bob measure theirs in the computational basis. However, such an eavesdropping test would fail to provide security against a measure-and-resend attack; security against such an attack is discussed later. In addition, Charlie and Bob, both being capable of performing quantum operations, may perform the BB84 subroutine (cf. <cit.> and references therein) to ensure a secure transmission of the qubits belonging to the quantum channel. This would provide additional security against intercept-resend attacks on the Charlie-Bob quantum channel. Step2.2: Alice's encoding: Alice has an n-bit message M={M_A^1,M_A^2⋯ M_A^i⋯ M_A^n}. To encode this message, Alice measures n qubits (chosen randomly) in the computational basis to obtain measurement outcomes r_A={r_A^1,r_A^2⋯ r_A^i⋯ r_A^n}, and prepares a new string of qubits in the {|0⟩,|1⟩} basis corresponding to the bit values M_A^i⊕ r_A^i. Finally, she reinserts all these qubits back into the original sequence and sends it to Bob, but only after permuting the string. It is important that she leaves enough qubits undisturbed so that those qubits may be employed as decoy qubits. Step2.3: Announcements and eavesdropping checking: After receiving an authenticated acknowledgement of the receipt of all the qubits from Bob, Alice announces which qubits have been encoded and which have been left as decoy qubits. She also discloses the permutation operator applied only to the decoy qubits. Further, to detect eavesdropping, Bob first measures the pairs of decoy qubits from Alice's and Bob's sequences in the Bell basis and, with the help of Charlie's corresponding measurement outcome (which reveals the initial Bell state Alice and Bob were sharing), he can calculate the error rate. If a sufficiently low error rate is found, they proceed to the next step; otherwise, they start afresh. Step2.4: Decoding the message: To decode the message, Bob can perform a measurement in the computational basis on all the remaining qubits from both the sequences received from Charlie and Alice. Subsequently, Alice also discloses her permutation on the message-encoded (or freshly prepared) qubits in her string. However, Bob cannot decode Alice's secret message yet, as he remains unaware of the Bell state he was sharing with Alice until Charlie announces his measurement outcome. Step2.5: Charlie's announcement: Finally, Charlie announces his measurement outcome in the {|a⟩,|b⟩} basis, using which Bob can decode Alice's message.
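For concreteness, one admissible instance of the channel state of Eq. (<ref>) can be constructed numerically. The particular choice |ψ_1⟩=|ψ^+⟩, |ψ_2⟩=|ψ^-⟩, |a⟩=|0⟩, |b⟩=|1⟩ below is ours; the definition permits several others.

```python
# One admissible instance of the GHZ-like channel state:
# (|psi+>|0> + |psi->|1>)/sqrt(2). The choice of |psi_1>, |psi_2>, |a>, |b> is ours.
import numpy as np

zero, one = np.eye(2)                       # |0>, |1>
psi = lambda s: (np.kron(zero, zero) + s * np.kron(one, one)) / np.sqrt(2)
state = (np.kron(psi(+1), zero) + np.kron(psi(-1), one)) / np.sqrt(2)
assert np.isclose(np.linalg.norm(state), 1.0)

# Charlie's {|a>, |b>} measurement outcome tells Bob which Bell pair he shares
# with Alice: projecting the third qubit onto |0> leaves |psi+>, onto |1> |psi->.
proj0 = state.reshape(4, 2)[:, 0] * np.sqrt(2)
assert np.allclose(proj0, psi(+1))
```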
§.§ Protocol 3: Controlled direct secure semi-quantum communication based on cryptographic switch This controlled communication scheme is based on the quantum cryptographic switch scheme proposed in the past <cit.>, which has been shown to be useful in almost all controlled communication schemes <cit.>. Step3.1: Preparation of the shared quantum channel: Charlie prepares n+m=N copies of one of the Bell states, out of which n Bell pairs will be used for sending the message and the rest as decoy qubits. Subsequently, Charlie prepares two sequences of all the first and all the second qubits of the entangled states. He also applies a permutation operator to the second sequence. Finally, he sends the first and second sequences to Alice and Bob, respectively. Both Alice and Bob may check the correlations in a few of the shared Bell states to detect an eavesdropping attempt, as was done in Step2.1. Similarly, Charlie and Bob, both being capable of performing quantum operations, may also perform the BB84 subroutine (cf. <cit.> and references therein). Step3.2: Same as Step2.2 of Protocol 2. Step3.3: Announcements and eavesdropping checking: After receiving an authenticated acknowledgment of the receipt of all the qubits from Bob, Alice announces which qubits have been encoded and which have been left as decoy qubits. She also discloses the permutation operator corresponding to the decoy qubits only. Then Charlie announces the correct positions of the partner pairs of the decoy Bell states in Bob's sequence. To detect eavesdropping, Bob measures the pairs of decoy qubits from Alice's and Bob's sequences in the Bell basis to calculate the error rate. If a sufficiently low error rate is found, they proceed to the next step; otherwise, they start afresh. Step3.4: Decoding the message: To decode the message, Bob can perform a measurement in the computational basis on all the remaining qubits from both the sequences received from Charlie and Alice. Meanwhile, Alice discloses her permutation operator, enabling Bob to decode her message. However, he cannot decode Alice's secret message yet, as he is unaware of the permutation operator Charlie has applied. Step3.5: Charlie's announcement: Finally, Charlie sends the information regarding the permutation operator to Bob, using which Bob can decode Alice's message. It is important to note that two of the three parties involved in the CDSSQC protocols are considered quantum here. The possibility of minimizing the number of parties required to have quantum resources will be investigated in the near future. In the recent past, it has been established that the controlled counterparts of secure direct communication schemes <cit.> can provide solutions to a handful of real-life problems. For instance, schemes of quantum voting <cit.> and e-commerce <cit.> are obtained by modifying schemes for controlled DSQC. Here, we present the first semi-quantum e-commerce scheme, in which Alice (the buyer) is classical, while Bob (the merchant) and Charlie (the online store) possess quantum resources. Both Alice and Bob are registered users of the online store Charlie. When Alice wishes to buy an item from Bob, she sends a request to Charlie, who prepares a tripartite state (as in Eq. (<ref>) of Protocol 2) to be shared with Alice and Bob. Alice encodes the information regarding her merchandise for Bob as described in Step2.2. The merchant can decode Alice's order in Step2.5, but will deliver the order only after receiving an acknowledgment from Charlie. Here, it is important to note that in some of the recent schemes, Charlie can obtain information about Alice's order and/or change it, which is not desired in a genuine e-commerce scheme <cit.>. The semi-quantum e-commerce scheme modified from the proposed CDSSQC scheme is free from such an attack, as Alice applies a permutation operator and discloses it only after a successful transmission of all the travel particles. In a similar manner, another quantum e-commerce scheme may be obtained using the CDSSQC scheme presented as Protocol 3, where the online store prepares only Bell states.
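Since every scheme in this paper leans on the permutation-of-particles defense, the bookkeeping it requires can be sketched in a few lines (the function names are ours): the sender scrambles the outgoing string with a secret permutation, and the receiver can reorder only once that permutation is disclosed.

```python
# Minimal sketch of the permutation-of-particles defense used throughout (names ours).
import random

def permute(seq, pi):
    """Apply permutation pi: position pos receives element seq[pi[pos]]."""
    return [seq[i] for i in pi]

def inverse(pi):
    """Inverse permutation, so that permute(permute(s, pi), inverse(pi)) == s."""
    inv = [0] * len(pi)
    for pos, i in enumerate(pi):
        inv[i] = pos
    return inv

qubits = list(range(10))                     # stand-ins for the travel qubits
pi = random.sample(range(10), 10)            # secret permutation Pi_N
sent = permute(qubits, pi)                   # an eavesdropper sees only scrambled order
assert permute(sent, inverse(pi)) == qubits  # receiver reorders once Pi_N is disclosed
```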
§.§ Possible attack strategies and security Most of the attacks on the proposed CDSSQC schemes may be circumvented in the same manner as for the SQKA scheme in Section <ref>. Here, we only mention the additional attack strategies that may be adopted by Eve. As discussed in the security analysis of the SQKA scheme, the easiest technique for Eve would be a CNOT attack. Specifically, she may entangle her ancilla qubits with the travel qubits sent from Charlie to Alice and later disentangle them during the Alice-to-Bob communication. She would succeed in leaving no traces with this attack while obtaining all of Alice's information (see Section <ref> for details). However, Alice may circumvent this attack just by applying a permutation operator on all the qubits before sending them to Bob. Eve may also choose to perform an intercept-and-resend attack. Specifically, she can prepare as many single qubits in the computational basis as Charlie has sent to Alice and send all these qubits to Alice, keeping Charlie's original sequence with herself. When Alice, Bob, and Charlie check the correlations in Step 1, they will detect a string uncorrelated with Alice's, corresponding to this attack. Alternatively, Eve can measure the intercepted qubits in the computational basis and prepare corresponding single qubits to resend to Alice. In this case, Eve will not be detected during the correlation checking. However, Alice transmits the encoded qubits to Bob only after permutation, due to which Eve fails to decode her message, and consequently Eve will be detected at Bob's port during eavesdropping checking. As mentioned beforehand, both these attacks and the set of remaining attacks may be circumvented by the permutation operator applied by Alice. § SEMI-QUANTUM DIALOGUE In this section, we propose a two-party protocol for SQD, where Alice has all quantum powers and Bob is classical. The following are the main steps of the protocol. Step4.1: Alice's preparation of the quantum channel and encoding on it: Alice prepares n+m=N initial Bell states, i.e., |ψ^+⟩^⊗ N, where |ψ^+⟩=|00⟩+|11⟩/√(2). She prepares two ordered sequences from the initial Bell states: all the first qubits as home (H) qubits and all the second qubits as travel (T) qubits. She keeps the home (H) qubits with herself and sends the string of travel qubits to the classical Bob. The initial Bell states and the encoding schemes are publicly known; let us say that U_A and U_B are the encoding operations of Alice and Bob, respectively. Step4.2: Bob's eavesdropping checking: Bob informs Alice about the reception of the travel sequence via a classical channel. Here, they can perform an eavesdropping-checking strategy as discussed above for the CDSSQC schemes in Section <ref>: the joint measurement outcomes in the computational basis by Alice and Bob on the Bell pairs should be correlated. If they find an error rate higher than a threshold value, they abort the protocol and start from the beginning. Step4.3: Bob's encoding: Bob measures n qubits (chosen randomly) in the computational basis and records all the measurement outcomes in a string of bits r_B. Then he prepares a string of his message in binary as M_B. Finally, he prepares fresh qubits in the computational basis for each bit value r_B⊕ M_B and reinserts them in the original sequence.
Then he sends the encoded and decoy qubits back to Alice after performing a permutation on them. Here, it is important to note that the encoding operation used by Bob can be thought of as equivalent to the Pauli operations {I,X}, but performed classically by Bob, i.e., remaining within his classical domain. Step4.4: Alice's eavesdropping checking: After receiving an authenticated acknowledgement of the receipt of all the qubits from Alice, Bob announces the positions of the decoy qubits along with the corresponding permutation operator. Alice then measures all the decoy qubits in the Bell basis, and any measurement outcome other than that of the initially prepared Bell state corresponds to an eavesdropping attempt. Step4.5: Alice's encoding and measurement: For a sufficiently low error rate in eavesdropping checking, Bob also discloses the permutation operator applied on the freshly prepared message qubits, and Alice proceeds to encode her secret on the qubits received from Bob by applying the Pauli operations {I,X}. Finally, she measures the partner pairs in the Bell basis and announces her measurement outcomes. From the measurement outcomes, Alice and Bob can extract Bob's and Alice's messages, respectively (a bit-level sketch of one round is given at the end of this section). Here, it should be noted that the Bell measurement performed in Step4.5 is not necessary; a two-qubit measurement performed in the computational basis will also work in the above-mentioned case. In the recent past, it has been established that a scheme for quantum dialogue can be modified to provide a solution for quantum private comparison, which can be viewed as a special form of the socialist millionaire problem <cit.>. In the semi-quantum private comparison (SQPC) task <cit.>, two classical users wish to compare their secrets of n bits with the help of an untrusted third party possessing quantum resources. Before performing the SQPC scheme, the untrusted third party (Alice here) prepares a large number of copies of the Bell states, using which both Bob_1 and Bob_2 prepare two shared, unconditionally secure, symmetric strings in analogy with the schemes described in <cit.>. They use one symmetric string as a semi-quantum key, while the other decides the positions of the qubits they will choose to reflect or to measure. Specifically, both classical users decide to measure the i-th qubit received during the SQPC scheme if the i-th bit in the shared string is 1, and reflect it otherwise. Using this approach, both classical users prepare fresh qubits using the encoding operation defined in Step4.3, with the only difference that this time the transmitted information is encrypted by the shared key. Once Alice receives all the qubits, she measures all of them in the Bell basis. Both classical users then disclose the string they had originally shared, using which Alice announces the measurement outcomes corresponding to the reflected qubits. The Bob_i can subsequently compute the error rate from the measurement outcomes, and if the error rate is below the threshold, Alice publicly announces the 1-bit result of whether both users had the same amount of assets or not (see <cit.> for detail). Thus, we establish that a slight modification of Protocol 4 may lead to a new scheme for SQPC. This is interesting, as to the best of our knowledge there exist until now only two proposals for SQPC <cit.>.
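Before turning to security, here is the bit-level sketch of one round of Protocol 4 referred to above. All names are ours, and the transmission, decoy qubits and permutations are elided; it confirms that Alice's announced outcomes let each party decode the other's bit.

```python
# One round of Protocol 4 at the bit level (illustrative; decoys/permutation elided).
import secrets

M_A = secrets.randbits(1)        # Alice's message bit
M_B = secrets.randbits(1)        # Bob's message bit
r_B = secrets.randbits(1)        # Bob's outcome on the travel qubit
r_A = r_B                        # home-qubit outcome is correlated for |psi+>

travel = r_B ^ M_B               # Bob's fresh qubit |r_B xor M_B>
travel ^= M_A                    # Alice's Pauli encoding: X if M_A = 1, else I
# Alice announces r_A and the travel outcome; both parties decode:
assert travel ^ r_B ^ M_B == M_A # Bob recovers Alice's bit
assert travel ^ r_A ^ M_A == M_B # Alice recovers Bob's bit
```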
§.§ Possible attack strategies and security Most of the attacks on the proposed SQKA and CDSSQC schemes will also be valid on the SQD scheme. Further, as mentioned beforehand, most of these attacks will be circumvented due to the permutation operator Bob has applied. Here, it is worth mentioning that the permutation operator is not the only way to circumvent these attacks; a prior shared key would also ensure the security of the protocol. A similar strategy of using a prior shared key has been observed in a few protocols in the past <cit.>. We would like to emphasize, however, that employing a key for security is beyond the domain of direct communication; therefore, we have preferred permutation of particles over a key in all the proposed schemes. Further, it has been shown in the past by some of the present authors <cit.> that if the information regarding the initial state is not public knowledge but is sent using a QSDC/DSQC protocol, then an inherent possible attack on QD schemes, i.e., the information leakage attack, can be circumvented. § EFFICIENCY ANALYSIS The performance of a quantum communication protocol can be characterized using the qubit efficiency <cit.>, η=c/(q+b), where c is the number of message bits transmitted using q qubits and b bits of classical communication. Note that the qubits involved in eavesdropping checking as decoy qubits are counted while calculating the qubit efficiency, but the classical communication associated with it is not. Before computing the qubit efficiency of the four protocols proposed here, we note that in all the protocols the classical senders send n bits of secret, while in Protocol 4 (the protocol for SQD, where both the classical and the quantum user transmit information), the quantum user Alice is also able to send the same amount of information to the classical Bob. In all these cases, the classical sender encodes the n-bit secret information using n qubits. However, to ensure the secure transmission of those n qubits, another 3n qubits are utilized (i.e., m=3n). This is so because, to ensure secure communication of n qubits, an equal number of decoy qubits is required to be inserted randomly <cit.>; the error rate calculated on the decoy qubits decides whether to proceed with the protocol or discard it. In a semi-quantum scheme, a classical user cannot produce decoy qubits, so to securely transmit n bits of classical information, he/she must receive 2n qubits from the quantum user, and the quantum user must send these 2n qubits along with another 2n qubits, which serve as decoy qubits for the quantum-user-to-classical-user transmission. Thus, the quantum user needs to prepare and send 4n qubits for the classical user to be able to send n bits of classical communication. The details of the number of qubits used both for sending the message (q_c) and for checking an eavesdropping attempt (d) in all the protocols proposed here are explicitly mentioned in Table <ref>; in the last column of the table, the computed qubit efficiencies are listed. From Table <ref>, one can easily observe that the qubit efficiency of the three-party schemes (Protocols 2 and 3) is less than that of the two-party schemes (Protocols 1 and 4). This can be attributed to the nature of the three-party schemes, where one party supervises the one-way semi-quantum communication between the two remaining parties, which increases the resource requirements for ensuring the control power. Among the controlled semi-quantum schemes, the qubit efficiency computed for Protocol 3 is higher than that of Protocol 2, as the controller chooses to prepare a bipartite entangled state instead of the tripartite entangled state used in Protocol 2. This fact is consistent with some of our recent observations <cit.>.
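A one-line helper for the efficiency figures quoted in this section; note that the per-protocol counts c, q and b come from Table <ref>, which is not reproduced here, so the split used in the example below is only one illustration consistent with the quoted 10%.

```python
# Qubit efficiency eta = c / (q + b); the per-protocol counts live in Table 1.
def qubit_efficiency(c: float, q: float, b: float) -> float:
    return c / (q + b)

# The SQKA protocol is quoted at 10%, i.e. q + b = 10n for c = n bits;
# the split below (q = 8n, b = 2n) is purely illustrative.
n = 1
print(f"{qubit_efficiency(n, 8 * n, 2 * n):.0%}")   # -> 10%
```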
Among the two-party protocols, Protocol 4 has a higher qubit efficiency than Protocol 1, as the quantum communication involved in this case is two-way. Further, one may compare the calculated qubit efficiencies with those of the set of protocols designed for the same tasks with all the parties possessing quantum resources. Such a comparison reveals that the requirement of unconditional security leads to a decrease in the qubit efficiency for the schemes that are performed with one or more classical user(s) (for example, the qubit efficiency of a QKA scheme was 14.29%, which is greater than the 10% qubit efficiency obtained here for the SQKA protocol). § CONCLUSION A set of schemes is proposed for various quantum communication tasks involving one or more user(s) possessing restricted quantum resources. To be specific, a protocol for key agreement between a classical and a quantum party is proposed, in which both parties contribute equally to determining the final key and no one can control that key. To the best of our knowledge, this is the first attempt to design a key agreement scheme between classical and quantum parties. We have also proposed two novel schemes for controlled communication from a classical sender to a quantum receiver. It is important to note here that the proposed schemes are not only the first of their kind, i.e., semi-quantum in nature, but are also shown to be useful in designing semi-quantum e-commerce schemes that can provide unconditional security to a classical buyer. The presented semi-quantum e-commerce schemes are relevant, as the buyer is supposed to possess minimal quantum resources, while the online store (the controller) and the merchant may be equipped with quantum resources. This kind of semi-quantum scheme can be used as a solution to a real-life problem, since in daily life the end users are expected to be classical. The present work is also the first attempt at designing semi-quantum schemes having direct real-life applications. Further, the first unconditionally secure scheme for dialogue between a classical and a quantum user is proposed here. It has also been shown that the proposed SQD scheme can be modified to obtain a solution for private comparison, or the socialist millionaire problem. The security of the proposed schemes against various possible attack strategies is also established. The possibility of realizing semi-quantum schemes for these tasks establishes that the applicability of the idea of semi-quantum schemes is not restricted to key distribution and direct communication between a classical and a quantum party, or between two classical parties (with the help of a third quantum party). Our present work shows that almost all the secure communication tasks that can be performed by two quantum parties can also be performed in a semi-quantum manner, at the cost of an increased requirement of the quantum resources to be used by the quantum party. To establish this point, the qubit efficiencies of the proposed schemes are computed, which are evidently lower than the efficiencies of similar schemes with all the parties possessing quantum resources.
With the recent development of experimental facilities and a set of recent experimental realizations of some quantum cryptographic schemes, we hope that the present results will be experimentally realized in the near future and that these schemes or their variants will be used in devices designed for daily-life applications. Acknowledgment: CS thanks the Japan Society for the Promotion of Science (JSPS), Grant-in-Aid for JSPS Fellows no. 15F15015. KT and AP thank the Defense Research & Development Organization (DRDO), India, for the support provided through project number ERIP/ER/1403163/M/01/1603. bb84Bennett, C. H., Brassard, G.: Quantum cryptography: Public key distribution and coin tossing. In Proceedings of the IEEE International Conference on Computers, Systems, and Signal Processing, Bangalore, India, 175-179 (1984) bookPathak, A.: Elements of Quantum Computation and Quantum Communication. CRC Press, Boca Raton, USA (2013) ekertEkert, A. K.: Quantum cryptography based on Bell's theorem. Phys. Rev. Lett. 67, 661 (1991) b92Bennett, C. H.: Quantum cryptography using any two nonorthogonal states. Phys. Rev. Lett. 68, 3121 (1992) QKA_ourShukla, C., Alam, N., Pathak, A.: Protocols of quantum key agreement solely using Bell states and Bell measurement. Quantum Inf. Process. 13, 2391 (2014) ping-pongBoström, K., Felbinger, T.: Deterministic secure direct communication using entanglement. Phys. Rev. Lett. 89, 187902 (2002) AninditaBanerjee, A., Pathak, A.: Maximally efficient protocols for direct secure quantum communication. Phys. Lett. A 376, 2944 (2012) dsqc-1Shukla, C., Banerjee, A., Pathak, A.: Improved protocols of secure quantum communication using W states. Int. J. Theor. Phys. 52, 1914 (2013) reviewLong, G.-l., Deng, F.-g., Wang, C., Li, X.-h., Wen, K., Wang, W.-y.: Quantum secure direct communication and deterministic secure quantum communication. Front. Phys. China 2, 251 (2007) online-shopHuang, W., Yang, Y.-H., Jia, H.-Y.: Cryptanalysis and improvement of a quantum communication-based online shopping mechanism. Quantum Inf. Process. 14, 2211-2225 (2015) ba-anAn, N. B.: Quantum dialogue. Phys. Lett. A 328, 6 (2004) baan_newAn, N. B.: Secure dialogue without a prior key distribution. J. Kor. Phys. Soc. 47, 562 (2005) ManMan, Z. X., Zhang, Z. J., Li, Y.: Quantum dialogue revisited. Chin. Phys. Lett. 22, 22 (2005) shi-auxilaryShi, G.-F.: Bidirectional quantum secure communication scheme based on Bell states and auxiliary particles. Opt. Commun. 283, 5275 (2010) NaseriNaseri, M.: An efficient protocol for quantum secure dialogue with authentication by using single photons. Int. J. Quantum Info. 9, 1677 (2011) qdShukla, C., Kothari, V., Banerjee, A., Pathak, A.: On the group-theoretic structure of a class of quantum dialogue protocols. Phys. Lett. A 377, 518 (2013) xiaXia, Y., Fu, C.-B., Zhang, S., Hong, S.-K., Yeon, K.-H., Um, C.-I.: Quantum dialogue by using the GHZ state. J. Kor. Phys. Soc. 48, 24 (2006) dong-wDong, L., Xiu, X.-M., Gao, Y.-J., Chi, F.: Quantum dialogue protocol using a class of three-photon W states. Commun. Theor. Phys. 52, 853 (2009) gao-swappingGao, G.: Two quantum dialogue protocols without information leakage. Opt. Commun. 283, 2288 (2010) referee2Zhou, N. R., Hua, T. X., Wu, G. T., He, C. S., Zhang, Y.: Single-photon secure quantum dialogue protocol without information leakage. Int. J. Theor. Phys. 53, 3829 (2014) QD-qutritZhang, L.-L., Zhan, Y.-B.: Quantum dialogue by using the two-qutrit entangled states. Mod. Phys. Lett.
B 23, 2993 (2009) com-QD-qutGao, G.: Information leakage in quantum dialogue by using the two-qutrit entangled states. Mod. Phys. Lett. B 28, 1450094 (2014) CV-QDYu, Z. B., Gong, L. H., Zhu, Q. B., Cheng, S., Zhou, N. R.: Efficient three-party quantum dialogue protocol based on the continuous variable GHZ states. Int. J. Theor. Phys. 55, 3147 (2016) probAuthQDHwang, T., Luo, Y.-P.: Probabilistic authenticated quantum dialogue. Quantum Inf. Process. 14, 4631 (2015) QSDD1Zheng, C., Long, G. F.: Quantum secure direct dialogue using Einstein-Podolsky-Rosen pairs. Sci. China Phys. Mech. Astron. 57, 1238 (2014) QSDD2Ye, T.-Y.: Quantum secure direct dialogue over collective noise channels based on logical Bell states. Quantum Inf. Process. 14, 1487 (2015) QD-EnSwapWang, H., Zhang, Y. Q., Liu, X. F., Hu, Y. P.: Efficient quantum dialogue using entangled states and entanglement swapping without information leakage. Quantum Inf. Process. 15, 2593-2603 (2016) quantum_telephon1Wen, X., Liu, Y., Zhou, N.: Secure quantum telephone. Opt. Commun. 275, 278 (2007) Y_Sun_improve_telephoneSun, Y., Wen, Q.-Y., Gao, F., Zhu, F.-C.: Improving the security of secure quantum telephone against an attack with fake particles and local operations. Opt. Commun. 282, 2278 (2009) sakshi-panigrahi-eplJain, S., Muralidharan, S., Panigrahi, P. K.: Secure quantum conversation through non-destructive discrimination of highly entangled multipartite states. Eur. Phys. Lett. 87, 60008 (2009) N09Noh, T. G.: Counterfactual quantum cryptography. Phys. Rev. Lett. 103, 230501 (2009) vaidman-goldenbergGoldenberg, L., Vaidman, L.: Quantum cryptography based on orthogonal states. Phys. Rev. Lett. 75, 1239 (1995) cs-thesisShukla, C.: Design and analysis of quantum communication protocols. Ph.D. thesis, Jaypee Institute of Information Technology, Sector-62, Noida, India, 1-166 (2015) QPC_KishoreThapliyal, K., Sharma, R. D., Pathak, A.: Orthogonal-state-based and semi-quantum protocols for quantum private comparison in noisy environment. arXiv:1608.00101 (2016) Talmor2007bBoyer, M., Kenigsberg, D., Mor, T.: Quantum key distribution with classical Bob. Phys. Rev. Lett. 99, 140501 (2007) talmor2Boyer, M., Gelles, R., Kenigsberg, D., Mor, T.: Semiquantum key distribution. Phys. Rev. A 79, 032341 (2009) Zou_less_than_4_quantum_statesZou, X., Qiu, D., Li, L., Wu, L., Li, L.: Semiquantum-key distribution using less than four quantum states. Phys. Rev. A 79, 052312 (2009) Zou2015qinpZou, X., Qiu, D., Zhang, S., Mateus, P.: Semiquantum key distribution without invoking the classical party's measurement capability. Quantum Inf. Process. 14, 2981-2996 (2015) nearly-c-BobLi, Q., Chan, W.-H., Zhang, S.: Semiquantum key distribution with secure delegated quantum computation. Sci. Rep. 6, 19898 (2016) Semi-AQKDYu, K.-F., Yang, C.-W., Liao, C.-H., Hwang, T.: Authenticated semi-quantum key distribution protocol using Bell states. Quantum Inf. Process. 13, 1457-1465 (2014) MediatedSQKDKrawec, W. O.: Mediated semiquantum key distribution. Phys. Rev. A 91, 032323 (2015) multiuserSemiQKDZhang, X.-Z., Gong, W.-G., Tan, Y.-G., Ren, Z.-Z., Guo, X.-T.: Quantum key distribution series network protocol with m-classical Bobs. Chin. Phys. B 18, 2143-2148 (2009) QKD-limited-c-BobSun, Z.-W., Du, R.-G., Long, D.-Y.: Quantum key distribution with limited classical Bob. Int. J. Quant. Inf. 11, 1350005 (2013) Kraweck-phd-thesis-2015Krawec, W. O.: Semi-quantum key distribution: protocols, security analysis, and new models.
PhD thesis submitted at Stevens Institute of Technology, New Jersey, USA (2015) Krawec-3-qinpKrawec, W. O.: Restricted attacks on semi-quantum key distribution protocols. Quantum Inf. Process. 13, 2417-2436 (2014) QKD_with_classical_AliceLu, H., Cai, Q.-Y.: Quantum key distribution with classical Alice. Int. J. Quantum Inf. 6, 1195-1202 (2008) Li16_without_classical_channelLi, C. M., Yu, K. F., Kao, S. H. et al.: Authenticated semi-quantum key distributions without classical channel. Quantum Inf. Process. 15, 2881 (2016) tamor3Boyer, M., Mor, T.: Comment on "Semiquantum-key distribution using less than four quantum states". Phys. Rev. A 83, 046301 (2011) goutampal-eavesdroppingMaitra, A., Goutam, P.: Eavesdropping in semiquantum key distribution protocol. Info. Processing Letters 113, 418-422 (2013) nie-sqis1Nie, Y.-y., Li, Y.-h., Wang, Z.-s.: Semi-quantum information splitting using GHZ-type states. Quantum Inf. Process. 12, 437-448 (2013) sqss-2010Qin, L., Chan, W. H., Long, D.-Y.: Semiquantum secret sharing using entangled states. Phys. Rev. A 82, 022303 (2010) LI-sqssLi, L., Qiu, D., Mateus, P.: Quantum secret sharing with classical Bobs. Journal of Physics A: Mathematical and Theoretical 46, 045304 (2013) attack_on_sqisLin, J., Yang, C.-W., Tsai, C.-W., Hwang, T.: Intercept-resend attacks on semi-quantum secret sharing and the improvements. Int. J. Theor. Phys. 52, 156-162 (2013) semi-AQSDCLuo, Y.-P., Hwang, T.: Authenticated semi-quantum direct communication protocols using Bell states. Quantum Inf. Process. 15, 947-958 (2016) 3-step-QSDCZou, X.-F., Qiu, D.-W.: Three-step semiquantum secure direct communication protocol. Physics, Mechanics & Astronomy 57, 1696-1702 (2014) SQPCChou, W.-H., Hwang, T., Gu, J.: Semi-quantum private comparison protocol under an almost-dishonest third party. arXiv:1607.07961v2 (2016) Krwawec-2-2015-security-proofKrawec, W. O.: Security proof of a semi-quantum key distribution protocol. In Information Theory (ISIT), IEEE International Symposium, 686-690 (2015) 1Security-SQKDMiyadera, T.: Relation between information and disturbance in quantum key distribution protocol with classical Alice. Int. J. Quant. Inf. 9, 1427-1435 (2011) 2Security-SQKDKrawec, W. O.: Security proof of a semi-quantum key distribution protocol. DOI: 10.1109/ISIT.2015.7282542, IEEE (2015) 3Security-SQKDKrawec, W. O.: Security of a semi-quantum protocol where reflections contribute to the secret key. Quantum Inf. Process. 15, 2067-2090 (2016) security_zhang1Zhang, W., Qiu, D., Zou, X., Mateus, P.: A single-state semi-quantum key distribution protocol and its security proof. arXiv preprint arXiv:1612.03087 (2016) security_zhang2Zhang, W., Qiu, D.: Security of a single-state semi-quantum key distribution protocol. arXiv preprint arXiv:1612.03170 (2016) e-commerceChou, Y.-H., Lin, F.-J., Zeng, G.-J.: An efficient novel online shopping mechanism based on quantum communication. Electron. Commer. Res. 14, 349-367 (2014) Our-auctionSharma, R. D., Thapliyal, K., Pathak, A.: Quantum sealed-bid auction using a modified scheme for multiparty circular quantum key agreement. arXiv:1612.08844v1 (2016) Voting1Thapliyal, K., Sharma, R. D., Pathak, A.: Protocols for quantum binary voting. Int. J. Quantum Info. 15, 1750007 (2017) Voting2Sharma, R. D., De, A.: Quantum voting using single qubits. Indian Journal of Science and Technology 9, 98637 (2016) Implementation_attack1Fung, C.-H. F., Qi, B., Tamaki, K., Lo, H.-K.: Phase-remapping attack in practical quantum-key-distribution systems. Phys. Rev.
A 75, 032314 (2007)Implementation_attack2Zhao, Y., Fung, C.-H. F., Qi, B., Chen, C., Lo, H.-K.: Quantum hacking: Experimental demonstration of time-shift attack against practical quantum-key-distribution systems. Phys. Rev. A 78, 042333 (2008)Implementation_attack3Lydersen, L., Wiechers, C., Wittmann, C., Elser, D., Skaar, J., Makarov, V.: Hacking commercial quantum cryptography systems by tailored bright illumination. Nat. Photonics 4, 686 (2010)talmor2007Boyer, M., Dan, K., Tal, M.: Quantum key distribution with classical Bob. In Quantum, Nano, and Micro Tech., ICQNM'07. First International Conference, IEEE, 10-10 (2007)QCBanerjee, A., Thapliyal, K., Shukla, C., Pathak, A.: Quantum conference. arxiv:1702.00389v1 (2017)AQDBanerjee, A., Shukla, C., Thapliyal, K., Pathak, A., Panigrahi, P. K.: Asymmetric quantum dialogue in noisy environment. Quantum Inf. Process. doi:10.1007/s11128-016-1508-4 (2016)decoySharma, R. D., Thapliyal, K., Pathak, A., Pan, A. K., De, A.: Which verification qubits perform best for secure communication in noisy channel? Quantum Inf. Process. 15, 1703-1718 (2016)switchSrinatha, N., Omkar, S., Srikanth, R., Banerjee, S., Pathak, A.: The quantum cryptographic switch. Quantum Inf. Process. 13, 59 (2014)crypt-switchThapliyal, K., Pathak, A.: Applications of quantum cryptographic switch: Various tasks related to controlled quantum communication can be performed using Bell states and permutation of particles. Quantum Inf. Process. 14, 2599 (2015)cdsqcPathak, A.: Efficient protocols for unidirectional and bidirectional controlled deterministic secure quantum communication: Different alternative approaches. Quantum Inf. Process. 14, 2195 (2015)referee1Yu, Z. B., Gong, L. H., Wen, R. H.: Novel multiparty controlled bidirectional quantum secure direct communication based on continuous-variable states. Int. J. Theor. Phys. 55, 1447 (2016)effCabello, A.: Quantum key distribution in the Holevo limit. Phys. Rev. Lett. 85, 5635 (2000)nielsenNielsen, M. A., Chuang, I. L.: Quantum Computation and Quantum Informatiom. Cambridge University Press, New Delhi (2008)
http://arxiv.org/abs/1702.07861v1
{ "authors": [ "Chitra Shukla", "Kishore Thapliyal", "Anirban Pathak" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170225092618", "title": "Semi-quantum communication: Protocols for key agreement, controlled secure direct communication and dialogue" }
fancy [C](, 0) node[text width = , right]easter egg; (0.5, -3) node[text width = ] ; (0.5, 0.1) node[text width=] ;(0.5, -0.5) node[text width=] ; (0, -13.15) node[right, text width=0.5] ;(, -13.1) node[left] ; [very thick, color=] (0.0, -5.75) – (0.99, -5.75);(0.12, -6.25) node[left] ;(0.53, -6) node[below, text width=0.8, text justified] We compute the energy diffusion constant D, Lyapunov time τ_l and butterfly velocity v_b in an inhomogeneous chain of coupled Majorana Sachdev-Ye-Kitaev (SYK) models in the large N and strong coupling limit.We findD≤ v_b^2 τ_l from a combination of analytical and numerical approaches.Our example necessitates the sharpening of postulated transport bounds based onquantum chaos.; §0pt [remember picture,overlay] (1, 0) node[right]; [color=] (0,-0.35) rectangle (0.7, 0.35); (0.35, 0) node;*§0pt15pt15pt [very thick, color=] (0.0, -5.75) – (0.99, -5.75); § INTRODUCTION A few years ago it was noted that many experimentally realized “strange metals" seem to be characterized by Drude “transport time"of order τ_* ∼ħ/k_BT <cit.>. As this time scale was proposed to be the “fastest possible" time scale governing quantum dynamics <cit.>,it was conjectured by Hartnoll <cit.> that these strongly interacting strange metals remained metallic due to a fundamental bound on diffusion:D≳ v^2 τ_*.In strongly interacting systems without quasiparticles, many-body quantum chaos provides a natural velocity v and time scale τ_* for such a diffusion bound <cit.>. In some large-N models, it is natural to define a Lyapunov time τ_L and butterfly velocity v_b as <cit.>⟨ V(x,t) W(0,0)V(x,t) W(0,0)⟩_T∼ 1- 1/Nexp[t/τ_l - x/τ_lv_b],for rather general Hermitian operators V and W. τ_l is an analogue of the Lyapunov time from classical chaos:it describes the rate at which quantum coherence is lost. Similarly, v_b is called the butterfly velocity:it governs the speed of chaos propagation,and is somewhat analogous to a state-dependent Lieb-Robinson velocity <cit.>. Since we now have a velocity and time scale which can be defined in any strange metal,<cit.> noted that D was naturally related to v_b^2τ_l in some simple holographic settings. A natural question to ask is whether Hartnoll's conjecture becomes D≳ v_b^2 τ_l. Originally (<ref>) was observed to hold for charge diffusion constant <cit.>, with theory-dependent O(1) prefactors, but there are now multiple known counterexamples <cit.>. It is more compelling that (<ref>) hold for the energy diffusion constant:as argued in <cit.>,v_b characterizes the loss of quantum coherence, a process related to quantum “phase relaxation" which should also characterize energy fluctuations and diffusion. Furthermore, additional evidence for an energy diffusion bound of the form (<ref>) has arisen in holographic models in the low temperature limit <cit.>, and models of Fermi surfaces coupled to gauge fields <cit.>:at weak coupling, <cit.> have also proposed a relation betweendiffusion and chaos.Much of the recent literature focuses on “homogeneous" models of disorder:these can crudely be thought of as models where momentum is not a conserved quantity (hence, by Noether's Theorem, microscopic translation symmetry has been broken),yet the effective equations governing transport remain spatially homogeneous. However, transport coefficients can be sensitive to how translation symmetry has been broken <cit.>, so it is important to test the robustness of any transport bound in inhomogeneous models. 
Such a test was performed for charge diffusion in a family of holographic models <cit.>, and the inequality in (<ref>) was found to be reversed. In this paper, we test (<ref>) for the energy diffusion constant in an inhomogeneous system:an inhomogeneous analogue of a Sachdev-Ye-Kitaev (SYK) chain of Majorana fermions. We will describe this solvable model of a “strange metal" without quasiparticles in more detail in Section <ref>. Our main result is that in this model the energy diffusion constant is upper bounded by chaos:D ≤ v_b^2 τ_l.We will prove this in the limit where the inhomogeneity is parametrically slowly varying, and provide examples where the ratio D/v_b^2 τ_l is arbitrarily small, in Section <ref>.Thus, we do not expect (<ref>) to be generically true in disordered strange metals: holding as a strict inequality up to a finite O(1) prefactor which may be theory-dependent[For example, <cit.> found D≈0.42v_b^2τ_l in their model, in a clean theory. In the SYK chain that we study, the numerical prefactor is 1, and in SYK-like holographic models the constant ranges from 0.5 to 1 <cit.>. Examples where the prefactor can be arbitrarily small can be found in <cit.>.] but robust to disorder. Furthermore, in a generic, nearly translation invariant 1+1 dimensional field theory without a global U(1) symmetry,the natural diffusion constant is the diffusion of energy. As noted in <cit.>, this diffusion constant will be parametrically large due to the fact that translations are only weakly broken <cit.>. Hence, we now have examples of strange metals where D is either much larger or much smaller than v_b^2τ_l. It is still an interesting open question whether transport properties of strange metals are related to quantum chaos. Any such relation may be restricted to particular diffusion constants and/or models.For example, it may be the case that diffusion constants of translation invariant field theories are related to chaos <cit.>. We hope that variational methods, related to those developed in <cit.> for hydrodynamic and holographic models, may be useful in providing rigorous bounds on transport and chaos in disordered strange metals.For the remainder of the paper, we set ħ = k_B = 1. § THE INHOMOGENEOUS SYK CHAIN The SYK model is a strongly interacting large-N model in 0+1 spacetime dimensions.It was introduced a long time ago as a model of disordered quantum magnets <cit.>;it was revived more recently <cit.> due to its possible connection to AdS_2 holography <cit.>, which is a toy model for quantum gravity <cit.>. 
Although it has since been shown that the SYK model does not admit a simple holographic dual <cit.>,it does share many fascinating properties of holographic theories, including being “maximally chaotic" <cit.>.The model that we introduce is a generalization of the SYK chain model developed in <cit.>.[See <cit.> for another way of adding a spatial dimension to the SYK model.]Consider a one-dimensional lattice of L sites, where on each lattice site x there exist N Majorana fermions χ_i,x (i=1,…, N), obeying the standard commutation rules {χ_i,x,χ_j,y} = _ij_xy.The Hamiltonian of these fermions is H = ∑_x=1^L(∑_i<j<k<l J_ijkl,xχ_i,xχ_j,xχ_k,xχ_l,x +∑_i<j,k<l J_ijkl,x^'χ_i,xχ_j,xχ_k,x+1χ_l,x+1),where the couplings {J_ijkl,x} and {J'_ijkl,x} are all assumed to be independent Gaussian random variables drawn from a distribution with zero mean and following variance: J_ijkl,x J_i^' j^' k^' l^',x = 3!/N^3J_0,x^2 _ii^'_jj^'_kk^'_ll^'_xy,J^'_ijkl,x J^'_i^' j^' k^' l^',x = 1/N^3J_1,x^2 _ii^'_jj^'_kk^'_ll^'_xy,The important (and only) difference comparing to <cit.> is that we do not assume the variances J_1,x^2 and J_0,x^2 take the same value for each x.We are interested in the thermodynamic limit L→∞. One can show that the replica-diagonal[Off-diagonal sectors in replica space do not contribute at the orders in 1/N that we study.] partition function, at inverse temperature β =1/T, can be written as a path integral over two bilocal fields: Z = ∫DGDΣ exp[-NS_eff[G,Σ]]with the Euclidean time action S_= ∑_x=1^L{ -logPf (∂_τ -Σ_x)+ 1/2∫_0^βd^2τ[ Σ_xG_x-J_0,x^2/4 G_x ^4 - J_1,x^2/4 G_x ^2G_x+1^2] }The “Green's function” {G_x(τ_1,τ_2) } and “self energy” {Σ_x(τ_1,τ_2) } are functions of two time variables. The product Σ_x G_x in above formula is abbreviation for Σ_x(τ_1,τ_2) G_x(τ_1,τ_2); G_x^4 and G_x^2 G_x+1^2 are similar products.J_0,x^2 and J_1,x^2 show up to exactly quadratic order in (<ref>) because the random couplings J_ijkl,x and J_ijkl,x^' were Gaussian random variables.As the manipulations in this section are essentially identical to <cit.>, we only present the few steps where they differ in an important way (due to the absence of translational invariance on average).It is convenient to rewrite the interaction term (G^4-term) into the following form:∑_x ( J_0,x^2 G_x^4 + J^2_1,x G_x^2 G_x+1^2 )= ∑_x {( J_0,x^2+J_1,x^2+ J_1,x-1^2/2) G_x^4 + 1/2 G_x^2 [ J_1,x ^2 (G_x+1^2-G_x^2) + J_1,x-1^2 ( G_x-1^2- G_x^2) ] }If one chooses J_0,x and J_1,x such that for each x,J^2 ≡ J_0,x^2 + J_1,x^2 + J_1,x-1^2/2is a constant independent of x,then the effective on-site coupling is easily seen to be x-independent. The saddle point equations of S_eff become G_x^-1(τ_1,τ_2)= Σ_x(τ_1,τ_2) - ^' (τ_1-τ_2),Σ_x(τ_1,τ_2)=J_0,x^2 G_x(τ_1,τ_2)^3 + J_1,x^2 G_x+1(τ_1,τ_2)^2 + J_1,x-1^2 G_x-1(τ_1,τ_2)^2/2G_x(τ_1,τ_2),and they admit an x-independent approximate solution:G_x^s(τ_1,τ_2) = G^s(τ_1-τ_2), with G^s( τ)= b^Δ(β J /sinτ/β)^-2Δ, 0 ≤τ < βb= 1/(1/2-Δ) tan (Δ),Δ=1/4,which becomes exact at β J →∞ (conformal) limit. The system also has a uniform specific heat per site c ≈0.396/β J.Thus, as in <cit.>,this saddle point is identical to the 0+1-dimensional SYK model of <cit.> at coupling constant J. 
If the choice (<ref>) is not made, then the saddle point equations do not admit a homogeneous solution, and it is unclear what the effective theory is.In the strong coupling limit, N≫β J ≫ 1, and long wavelength limit, the physics of interest to usis governed by the fluctuations induced by reparametrization modes f_x ∈(S^1),which act as G(τ_1,τ_2) → (f^'_x(τ_1)f^'_x(τ_2))^1/4G(f_x(τ_1),f_x(τ_2)). To quadratic order of the infinitesimal fluctuations, and leading order in 1/β J expansion, the effective action for the fluctuations has a simple form in Fourier space:defining f_x(τ)=τ+ ϵ_x(τ), ϵ_n= ∫_0^βdτe^2i n τ/βϵ(τ), we findS_ = 1/256∑_xy∑_n ϵ_n,x |n| (n^2-1) ( α|n|/β J_xy + C_xy) ϵ_-n,y,where all x-dependence is contained in the tridiagonal matrixC_xy = 1/3J^2([ ⋱-J_1,x-1^2 0 0;-J_1,x-1^2 J_1,x-1^2 + J_1,x^2-J_1,x^2 0; 0-J_1,x^2 J_1,x^2 + J_1,x+1^2-J_1,x+1^2; 0 0-J_1,x+1^2 ⋱ ]),and α = √(2)α_k≈ 12.7is a constant determined by numerics <cit.>. A few more steps of this derivation are contained in Appendix <ref>.The long wavelength limit alluded to earlier is the regime when the eigenvalues of C_xy are not larger than 1/β J, which (as we will see) do exist even for the disordered matrix. The derivation of this effective action is identical to <cit.>:in this previous work, C_xy was translation invariant and so (<ref>) was written in momentum space, where the matrix C_xy becomes diagonal.By writing C in the formC_xy = D^𝖳_xzΛ_zw D_wy= 1/3J^2([ ⋱-1 0 0; 0 1-1 0; 0 0 1 - 1; 0 0 0 ⋱ ])^𝖳([ ⋱ 0 0 0; 0 J_1,x^2 0 0; 0 0 J_1,x+1^2 0; 0 0 0 ⋱ ]) ([ ⋱-1 0 0; 0 1-1 0; 0 0 1 - 1; 0 0 0 ⋱ ])we immediately recognize that it is positive definite and can be interpreted as the first-order finite-difference discretized version of the differential operatorC_xy∼1/3J^2(-d/dxJ_1(x)^2 d/dx)_discretized.The interpretation of C_xy as an approximate differential operator becomes exact when J_1,x^2 varies slowly.Letting 𝔼 denote spatial averages over x, suppose that𝔼[J_1,x^2 J_1,y^2] ∼ f(|x-y|/M)with M a (large) integer, and f(x) a non-zero function for O(1) argument.To leading order in 1/M,the low-lying spectrum of the discrete operator C_xy will be identical to the continuum differential operator (<ref>).In our model, we will take J_1,x to be an arbitrary function of x,simply constrained to 0≤ J^2_1,x≤ J^2 (otherwise J_0,x^2 as defined in (<ref>) would be negative).The properties of the matrix C_xy will then depend on the inhomogeneity that we encode through x-dependent J_1,x.§ DIFFUSION AND CHAOSFrom the effective action (<ref>), we are able to extract the thermal response functions. The procedure is identical to <cit.> and a diffusion pole is found in the energy density (T^tt) two-point function:⟨ T^tt_x,n T^tt_y,-n⟩_T∼( |ω_n| _xy + 2π J/α C_xy)^-1≡( |ω_n| _xy + C̃_xy)^-1 . Upon proper analytic continuation to real time, we interpret (<ref>) as having diffusive poles (on the negative imaginary axis) whenever iω is an eigenvalue of C̃_xy.C̃_xy is analogous to a tight-binding-model hopping matrix. If C̃_xy commutes with a discrete translation operator, then we expect plane wave eigenstates, the lowest-lying of which will have an eigenvalue ∼ L^-2 in a chain of length L.If C̃_xy is random, thenstrictly speaking all eigenstates of C̃_xy at fixed ω are localized in the continuum. 
However, because C̃_xy is analogous to a discretized differential operator (<ref>) which has an exact delocalized zero mode, the low-lying spectrum of C̃_xy will look diffusive on length scales L.In other words, the localization length grows faster than ω^-1/2 <cit.>, and the smallest nontrivial eigenvalue scales as L^-2 in this case as well. Hence, the diffusion constant D is finite.In fact, the lowest-lying non-trivial eigenvectors u_x of C̃_xy are well approximated by plane waves: u_x ∼e^iqx,q=± 2/L, which can be verified numerically.See Appendix <ref> for more comments on this equation. The eigenvalue of such a u_x will be D q^2, with D an effective diffusion constant. In the large M limit, we may compute D by solving the following differential equation as q→ 0: -d/dx(D(x)du/dx) = Dq^2 u.The constant D is computed in Appendix <ref>: 1/D = 𝔼[1/D(x)].This equation has a straightforward physical interpretation. Because the specific heat in our model is x-independent to leading order <cit.>, D is proportional to the thermal conductivity. One can approximate our inhomogeneous chain by joining together homogeneous SYK chains of length L^'≪ M. Within each of these chains, the thermal conductivity is proportional to the diffusion constant D(x). Joining together these segments of length L^', we find a resistor network:hence, the thermal resistivity spatially averages. This leadsto (<ref>). In Appendix <ref>, we argue that (<ref>) is applicable even beyond the large M limit, under some assumptions which work relatively well in practice numerically, so long as finite size effects are small. Now we study the butterfly velocity, defined by out-of-time-ordered correlation functions of spatially separated operators.In order to extract v_b, we study the (properly regularized) connected out-of-time-ordered correlation function.One finds <cit.>, in the region β≪ t ≲βlog N that1/N^2∑_i,j⟨χ_i,x(t), χ_j,y(0) χ_i,x(t), χ_j,0(0)⟩_T,connected∼e^2/β t(2/β + C̃_xy)^-1.Comparing to (<ref>), we observe that in this model, as in the usual SYK model, τ_l = β/2.This matrix inverse is the discrete analogue of the Green's function2/β G(x;y) - d/dx(D(x)dG(x;y)/dx) = (x-y).In the long range disorder limit, when at each point xM ≫√(2/β D(x)),the solution of this equation is exponentially decaying <cit.>:G(x;y) ∼e^-|x-y|/v_bτ_lwith1/v_b = √(2/β)𝔼[1/√(D(x))].This equation can be derived by noting that, away from the points x=y,we can change coordinates from x to s, defined by D(x)∂_x ≡∂_s. One then finds2/β D(s) G = d^2G/ds^2,and when D(s) varies slowly, one can straightforwardly writeG=exp[-∫ds √(2 D(s)/β)] = exp[-∫dx √(2/D(x)β)]. Hence, we find that D and v_b are not equal. Using the Cauchy-Schwarz inequality it is straightforward to concludev_b^2 τ_l = 𝔼[1/√(D(x))]^-2≥𝔼[1/D(x)]^-1 = D.The physics at play here is essentially the same as in holography, where charge diffusion was shown to obey a similar inequality, for the same reasons <cit.>. We have numerically computed D and v_b in SYK chains of finite length L with periodic boundary conditions. D is found by averaging the two smallest non-vanishing eigenvales of C_xy. v_b^-1 is found by computing the typical value of -log G(x;y)/|x-y| for |x-y|∼ L/2.[We must also normalize the value of v_b^-1 found numerically by a factor very close to unity, to account for finite size effects.This factor depends only on L.] 
So long as D(x) >D_min>0,we find that D agrees with the “resistor chain" prediction (<ref>) for any M, so long as L/M≳ 20, to within about 0.1% residual error (which is possibly a numerical finite size effect). Indeed, the derivation of (<ref>) in Appendix <ref> does not rely on the assumption that J_1 is slowly varying, so this is not surprising.As shown in Figure <ref>, we see that while D agrees very well with the “hydrodynamic" theory, v_b^2 agrees withthe hydrodynamic theory at large M, while partly approaching D/τ_l from above as M → 1.This behavior is not surprising: as M becomes shorter, the four-point function (<ref>) begins to“self-average" over the inhomogeneity in a manner analogous to diffusion. Nevertheless, D<v_b^2τ_l holds as a strict inequality in the inhomogeneous systems that we have studied numerically,even when M=1 (no correlations among D(x). Our violation of the relation D=v_b^2τ_l is not limited to the regime of long range disorder. So far, the discrepancies between D and v_b^2τ_l are only on the order of a few percent in our numerical data. Yet (<ref>) implies that there is no possible upper bound on diffusion due to quantum chaos. Defining a natural probability measure on D(x) asp(X)dX ≡𝔼[Θ(X+dX-D(x)) Θ(D(x)-X)],we estimate that ifp(X → 0) ∼ X^a,with-1/2<a≤ 0,then v_b>0 but D=0.[Since our analytic calculation of v_b requires (<ref>), and disorder is correlated over M sites, we estimate that in a chain of length L the minimal value of D_0 scalesas D_min∼ (M/L)^1/(1+a). Requiring that D_minM^2 ≫ 2 /β requires that M^3+2a≫ L, up to some dimensionless constant.Hence, we conclude that the inhomogeneous SYK chain with D=0 but v_b finite only strictly exists in a somewhat subtle thermodynamic limit with M and L taken to ∞ simultaneously, making sure to obey M^3+2a≫ L.] We have looked for this parametric breakdown of the relationship between D and v_b^2 in smaller chains with M=1. As shown in Figure <ref>, we see qualitative agreement (but quantitative disagreement) with our hydrodynamic predictions for D and v_b^2τ_l (accounting for finite size effects). As a≲ 0, we observe that the ratio D/v_b^2τ_l becomes dependent on the length of the chain, and decreases for the longer chain.This provides evidence that even when M=1, this inhomogeneous SYK chain may have D=0 but v_b>0 in the thermodynamic limit. Finally, let us comment on the low temperature limit β→∞ (while, of course, taking N→∞ as well such that β J ≪ N).At small enough temperature, keeping the inhomogeneity fixed, (<ref>) will break down. Figure <ref>suggests that the large M limit is not required to obtain substantial deviations from D=v_b^2τ_l. A more interesting subtlety that arises at very low temperature is the difference between periodic inhomogeneity and random inhomogeneity.For periodic inhomogeneity, one can diagonalize the matrix C_xy as a periodic tight-binding hopping matrix in a larger unit cell, and at low enough temperatures, the solution to (<ref>) can be well-approximated by considering only the physics of the lowest band.In this regime,one will recover D=v_b^2τ_l. As the period of the periodic inhomogeneity grows longer, the temperature above which D=v_b^2τ_l decreases. We expect that for random inhomogeneity (where the eigenstates do not form bands, but are in fact localized) one finds D<v_b^2τ_l at all finite temperatures,and provide some numerical evidence for this in Figure <ref>. 
§ OUTLOOKWe have presented a modification of the SYK chain model of <cit.>, in which there is an upper bound on the diffusion constant:D ≤ v_b^2τ_l. As we pointed out in the introduction, this suggests that there is no (simple) generic bound relating transport and quantum chaos in all strange metals.One might ask whether our violation of (<ref>) could be found in a “homogeneous" model that does not rely on explicit translation symmetry breaking in the low energy effective description. In the SYK chains that we have studied, this is easy to accomplish at leading order in 1/N, because there is no difference betwen averages over annealed disorder vs. quenched disorder. Hence, consider the partition functionZ = ∫DJ_1,x^2 ℙ[J_1,x^2]Z_SYK[J_1,x^2],with Z_SYK the partition function defined in (<ref>), and ℙ[J_1,x^2] a translation invariant function where J_1,x^2 have support on some finite domain. We may think of ℙ[J_1,x^2]as either a partition function for the “slow"dynamical variablesJ_1,x^2,or as accounting for certain correlated non-Gaussian fluctuations in the random variables J_ijkl,x and J_ijkl,x^' of the microscopic Hamiltonian (<ref>).[In this latter case, J_ijkl,x andJ^'_ijkl,x are no longer completely independent random variables.The reason forthisis that after integrating over ℙ[J_1,x] on a single site, we generically find ∫dJ_1 ℙ(J_1) exp[- N^3 ∑ J^'2_ijkl / 2J_1^2] ∏_ijklℱ(J^'_ijkl).Since the joint probability distribution of the J^'_ijkl does not factorize, we conclude that these random variables are no longer independent. Hence, integrating over fluctuations in ℙ[J_1] restores translation invariance, but necessarily introduces non-trivial correlations between the J_ijkl,x and J^'_ijkl,x. ]So long as D≤ v_b^2τ_l for each choice of J_1,x^2, by linearity, this inequality will remain true even in the homogeneous model (<ref>) after averaging over the ensemble ℙ[J_1,x^2] of random couplings.tocsectionAcknowledgements § ACKNOWLEDGEMENTS We thank Mike Blake for helpful comments on a draft of this paper. AL was supported by the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF4302.YG and XLQ are supported by the National Science Foundation through the grant No. DMR-1151786, and by the David & Lucile Packard Foundation.§ DERIVATION OF EFFECTIVE ACTIONIt is convenient to expand about the saddle using renormalized variables g_x,σ_x defined byG_x(τ_1,τ_2) =G^s(τ_1,τ_2)+ |G^s(τ_1,τ_2)|^-1g_x(τ_1,τ_2), Σ_x(τ_1,τ_2) =Σ^s(τ_1,τ_2) + |G^s(τ_1,τ_2)| σ_x(τ_1,τ_2),where we have rescaled the fluctuation fields g_x,σ_x by prefactors |G^s|^-1 and |G^s|. It should be noticed that although the saddle point is uniform in space and translation invariant in time, the fluctuation fields have generic space-time dependence. Now we expand the effective action to second order in the fluctuation fields g, σ, which leads toS_[g,σ] ≈ S_eff^s -1/4∫d^4τ∑_xσ_x(τ_1,τ_2)G^s(τ_13)· |G^s(τ_34)|· G^s(τ_42)· |G^s(τ_21)|σ_x(τ_3,τ_4) +∫d^2τ(∑_x1/2σ_x(τ_1,τ_2)g_x(τ_1,τ_2) -3 J^2/4∑_x,yg_x(τ_1,τ_2)S_xyg_y(τ_1,τ_2) ).The spatial kernel S_xy is a tight-binding hopping matrixS_xy= _xy+ 1/3J^2[ (- J_1,x^2 -J_1,x-1^2) _xy + _x,y-1 J_1,x^2 +_x,y+1 J_1,x-1^2]=_xy - C_xy, ,with C_xy defined in (<ref>). Next we integrate out σ_x and obtain a quadratic action for g_x alone. 
We defineK̃ as the (symmetrized) four-point function kernel of the SYK model:K̃(τ_1,τ_2;τ_3,τ_4)= 3 J^2 G^s(τ_13) ·|G^s(τ_34)| · G^s(τ_42)·|G^s(τ_21)|.We have defined τ_ij≡τ_i - τ_j.The effective action of g_x is thereforeS_[g] =S_eff^s + 3 J^2/4∫d^4τ∑_x,yg_x(τ_1,τ_2)[K̃^-1(τ_1,τ_2;τ_3,τ_4)_xy-S_xy(τ_13)(τ_24)]g_y(τ_3,τ_4),The leading contribution comes from the soft modes which can be identified as induced by reparametrization f_x ∈(S^1). They can be interpreted as energy fluctuations. We can linearized these modes and write them in the fourier space: f_x(τ)=τ+ ϵ_x(τ), ϵ_n= ∫_0^βdτe^2i n τ/βϵ(τ). For such modes, we know from <cit.> that the kernel obeys(K̃^-1-1) ϵ_n,x =α |n|/β Jϵ_n,xFollowing <cit.>, we now find the following effective action for the linearized modes given in (<ref>). The additional numerical prefactors in (<ref>), including |n|(n^2-1), come from properly normalizing the eigenvector ϵ_n,x.§ EIGENVALUES OF INHOMOGENEOUS DIFFUSION MATRIXThe following argument is reminiscent of <cit.>.Let us define the matrixQ_xy = _xye^iqx,and consider the limit q→ 0. We postulate that the lowest lying eigenvalues of C̃_xy are proportional to D_effq^2: C̃_xy u_y = D_effq^2 u_x.Multiplying on both sides by Q we obtainQ_xz (D^𝖳SD Q^-1)_zw(Qu)_wy = D_effq^2 (Qu)_x.We now look for a series solution to this equation of the formQu = u_0 + iq u_1 - q^2 u_2 + ⋯. Such a series expansion is reasonable – we expect in any finite chain that the lowest few eigenvectors are delocalized, and have confirmed this numerically. At O(q^0), the Q matrix is simply the identity, and so we must take(u_0)_x = 1.At O(q^1), we find the equationQ_xz[ (D^𝖳SD Q^-1)_zw (u_0 + iq u_1)_w] = 0.Q is invertible, and hence we conclude that if u_1 is non-trivial, the left-most D must act on a non-trivial vector. Because D has a one-dimensional kernel we concludec(u_0)_z = (SD Q^-1)_zw (u_0 + iq u_1)_wWe may fix the constant c as follows. First left-multiply by S^-1, and keep only first order terms in q, to obtainc S^-1_zw(u_0)_w = (DQ^-1)_zw(u_0)_w + iq D_zw(u_1)_w ≈ -iq Q^-1_zw(u_0)_w + iqD_zw(u_1)_w.In the second equality, we have exploited the fact that Q^-1u_0 is an eigenvector of D of eigenvalue 1-e^iq.Now, we perform a non-rigorous sleight-of-hand:at leading order in q, we may treat q Q^-1_zw(u_0)_w = q (u_0)_w.We may then left-multiply by u_0, to remove only the second term of (<ref>), and obtainc = -iq u_0· u_0/u_0 · S^-1u_0.On physical grounds, this is the statement that the eigenvector looks a lot like a plane wave, and this seems to be true numerically. We now have the O(q) corrections to the eigenvector u_y, and to leading order q use the variational principle to `exactly' compute the effective diffusion constant. We find(u_0+iq u_1)· D^𝖳SD · (u_0 + iq u_1) = q^2 c^2 u_0 · S^-1 u_0 = D_eff q^2 u_0 · u_0which gives us, for a periodic chain of L sites:1/D_eff =1/L∑1/D_x.One way to rigorously obtain (<ref>) is to extend a chain of length L into an infinite chain by tiling the same S_x repeatedly:S_x+L = S_x. Using the discrete translation symmetry and choosing the appropriate definition of q, one can demand u_x = u_x+L, and then sum over only L sites in (<ref>),killing the right-most term. In practice, we have often found this tiling to be unnecessary numerically.unsrt tocsectionReferences
http://arxiv.org/abs/1702.08462v2
{ "authors": [ "Yingfei Gu", "Andrew Lucas", "Xiao-Liang Qi" ], "categories": [ "hep-th", "cond-mat.str-el" ], "primary_category": "hep-th", "published": "20170227190005", "title": "Energy diffusion and the butterfly effect in inhomogeneous Sachdev-Ye-Kitaev chains" }
We study the structure of martingale transports in finite dimensions. We consider the family (μ,ν) ofmartingale measures on × with given marginals μ,ν, and construct a family of relatively open convex sets {C_x:x∈}, which forms a partition of , and such that any martingale transport in (μ,ν) sends mass from x to within C_x, μ(dx)–a.e.Our results extend the analogous one-dimensional results of Beiglböck and Juillet <cit.> and Beiglböck et al. <cit.>. We conjecture that the decomposition is canonical and minimal in the sense that it allows tocharacterise the martingale polar sets, i.e. the sets which have zero mass under all measures in (μ,ν),and offers the martingale analogue of the characterisation of transport polar sets proved in <cit.>.Note. This work is made publicly available simultaneously to, and in mutual recognition of, a parallel and independent work <cit.> which studies the same questions. In due course, we plan to release an amended version proving the conjectured minimality of our convex partition.This research has been generously supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 335421. Jan Obłój is also grateful to St John's College in Oxford for their financial support. Exploring the climate of Proxima B with the Met Office Unified Model Ian A. Boutle1,2,Nathan J. Mayne2, Benjamin Drummond2, James Manners1,2,Jayesh Goyal2, F. Hugo Lambert3, David M. Acreman2 Paul D. Earnshaw1December 30, 2023 =====================================================================================================================================================§ INTRODUCTION Optimal transportation is a classical and influential field in mathematics. Its origins trace back to Gaspard Monge, while its modern incarnation was born from works of Kantorovich. Since then, it has seen tremendous advances, in particular in understanding the geometry of the optimal transport maps, through the works of Brenier, Gangbo, McCann, Otto, Villani and many others, see Villani <cit.> for an extensive account of the theory. More recently, inspired by a number of applications in probability theory as well as financial mathematics, new variants of the problem have been studied under various constraints on the transports maps.The most notable example isthe so-called martingale optimal transport (MOT) problem, where the transport dynamics have to obey the martingale condition. Martingales are used in mathematical finance to model dynamics of price processes and marginal specification can be seen as equivalent to knowing sufficiently many market prices of simple (European) options. If the cost functional being optimised is given by the payoff of another (exotic) option then MOT problem values correspond to the robust (no-arbitrage) bounds for such an exotic option. Such rephrasing of robust pricing as an MOT problem was achieved by Beiglböck et al. <cit.> in discrete time, and Galichon et al. <cit.> in continuous time. The latter worked in aone dimensional setting where a martingale is a time-change of a Brownian motion. In this way, the MOT problem links naturally to the Skorokhod embedding problem, a well studied topic in probability theory, see Obłój <cit.> for an account. Recently, in a beautiful display of how new developments can be achieved when an old theory is re-interpreted using entirely novel methods, Beiglböck et al. 
<cit.> obtained a geometric description of supports of optimal Skorokhod embeddings, akin to the Gangbo and McCann <cit.> characterisation for optimal transportation. Similarly to optimal transport, where the Kantorovich duality was a cornerstone result, it is of paramount interest to understand duality for the MOT problem. Partial results, under suitable continuity of the cost functional, were established in <cit.>, see also <cit.>. However the continuity assumption allowed one to side-step the problem of understanding and describing the polar sets, i.e. null sets under all martingale transport plans. For (all) transports such a description was given in Beiglböck et al. <cit.>, as a corollary of thecomplete description of Kantorovich duality provided by Kellerer <cit.>. Recently, Beiglböck and Juillet <cit.> andBeiglböck et al. <cit.> obtained analogous results for martingale transports in dimension one. Our aim here is to prove similarresults in a (arbitrary) finite dimension. As we shall see, the one dimensional picture is rather special and already in dimension two, the results are much richer and more involved. To highlight the features and state our main result, let us introduce some notation.A transport from μ to ν is a positive measure θ on × whose marginals are μ and ν, and the set of all such transports is denoted by Π(μ,ν).Clearly Π(μ,ν) is non-empty as we may always take θ=μ⊗ν∈Π(μ,ν). In <cit.> it isshown that the only polar sets of Π(μ,ν) are the trivial ones:θ(N)=0∀θ∈Π(μ,ν)⟺ N⊆ (_μ×) ∪ (×_ν) ,for someμ–null set _μ andν–null set _ν. Here, for two probability measures μ,ν onwith finite first moments, we consider the subset of martingale transports(μ,ν) := {θ∈Π(μ,ν): [Y|X]=Xfor (X,Y)∼θ }.Equivalently, θ∈(μ,ν) if and only if θ∈Π(μ,ν) and for any disintegration θ=μ⊗γ, where γ(x,·) is a probability measure with a finite first moment, one has ∫ y γ(x,dy)=x for μ–a.e x. Note that (μ,ν) may often be empty. Jensen's inequality implies that if it is non-empty then μ and ν are in convex order, i.e. ∫_ϕ dν≥∫_ϕ dμ for everyconvexϕ:^N→ ,in which case we write μ≼_c ν; notice thatϕ may fail to be integrable, but its negative part is integrable since a convex function is bounded below by an affine function[Indeed if ϕ is convex then its sub-differential at any point x is non-empty, see <cit.>.], which is integrable under any measure with finite first moment. In a seminal work, Strassen <cit.> showed that the condition μ≼_c νis not only necessary but also sufficient to have (μ,ν)≠∅. In dimension one a full description of MOT polar sets was given by <cit.>, using the domain ={x∈: 0 <u_ν -μ(x)}, where u_λ=|· |∗λ is the potential function associated to λ (see (<ref>) below), and u_ν -μ=u_ν- u_μ≥ 0 since ||· -x || is convex. We will now rephrase their results in the language used inthis paper. They showed that, under any θ=μ⊗γ∈(μ,ν),the mass from x∈ may travel to the closure of theconnected component C_x= of containing x, i.e. γ(x,·) is concentrated onC_x; and the mass from x∉ is not moved, i.e. γ(x,·) is concentrated on C_x=C_x:={x} (i.e. γ(x,·)=δ_x). Moreover, they showed that this is essentially[Notice thedifference between C_x and C_x in our statements. In fact, in <cit.> a slightly stronger statement is shown, as the authors can identify a set J_x such that C_x⊆ J_x ⊆C_x and the mass in x has to stay in J_x and can travel anywhere within J_x.] the only limitation imposed by the martingale constraint, i.e. 
the mass from x can travel anywhere within C_x, meaning that the only martingale polar subsets of the graph (C):=∪_x∈{x}× C_x of the multifunction x ↦ C_x are the trivial ones, similarly to (<ref>). Notice that the potential functions are continuous, so the domain is open and thus the connected components are open disjoint intervals and they are at most countably many.As it turns out, if one wishes to obtain a similar characterization of martingale polar sets in dimension greater than one, the nature of these components is necessarily more complex and intriguing.Using simple examples, we argue that the components can be uncountably many and convexity, not connectedness, is their defining[Of course, a set I⊆ is connected iff it is convex (and iff it is an interval); but in ^N there are connected sets which are not convex.] property; also, different components mayhave different Hausdorff dimensions. Having defined the domainas :=:={x∈: ≠{x}} (in analogy to what holds in the one dimensional case), we show thatcan no longer be defined as the connected component of x in(in fact it just cannot be defined using); indeed, we give an example of measures μ≼_c ν andμ'≼_c ν' with the same domain = but with drastically different components.In consequence, in arbitrary dimension the proper definition of the domain and its components is much more involved.The first results in this direction were obtained by Ghoussoub et al. <cit.>, who studied mainly the geometry of the optimal martingale transports infor the cost functions ± ||x-y||. While theirfocus was different from ours, they considered – as we do – a generally uncountable decomposition of the space intorelatively open convex sets, anda corresponding disintegration of the measures involved. However,their decomposition depends on θ∈(μ,ν) and on a significant measurability assumption, and they work with a topology which is not separable which poses crucial difficulties since Polish structure is needed to consider disintegrations. Instead, we associate to each point x a relatively open convex set C_x ∋ x, which we call its (convex) component, which depends on ν -μ, but not on the choice of θ∈(μ,ν). As we provide a constructive proof of existence of the convex components, we can give explicit examples in special cases of interest, and we plan to prove the measurability of the map C:x↦C_x in the future version of this paper. Moreover, having singled out the Wijsman topology as the appropriate one to consider, we do work with a Polish space, and we can rely on many results in the literature about measurability of multifunctions: indeed, C considered as a set-valued function is Borel measurable (with respect to the Wijsman topology on the family of closed convex sets) if and only if Cconsidered as a multifunction is measurable.Our main contribution is to provide the definition of convex components (see Definition <ref> below), motivate it with examples,and establish its use in describing the possible evolutions of martingale optimal transport plans by proving the following theorem.Let μ,ν be two probability measures onin convex orderμ≼_c ν.Then the family {C_x:x∈} of convex components associated with (μ,ν) forms a partition of : C_x=C_y or C_x∩ C_y=∅, x,y∈ and x∈ C_x.Each C_x is convex and relatively open, and for every θ=μ⊗γ∈(μ,ν), γ(x,·) is concentrated onC_x for μ a.e. x, i.e. 
θ is concentrated on the graph(C):=∪_x∈^N{x}×C_x.We remark that it easily follows from Theorem <ref> thatμ and ν coincide on the complement of(as it happens in dimension one), justifying further our definition of domain; see Corollary <ref>. As a very special sub-case of our theorem above, we will obtain the following corollary, greatly generalising <cit.>, which amounts to the case where ϕ(y)=||y-b||^2 (where b is the common barycenter of μ and ν, i.e. b:=∫ x μ(dx)=∫ x ν(dx)).Ifμ≼_c ν and ∫ϕ dμ= ∫ϕ dν<∞ for a strictly convex ϕ:^N → then ν=μ. We conjecture that the closures of the convex components, as we defined them, are the smallest possible closed convex sets on which all martingale transports are confined, meaning that the only martingale polar subsets of (C) are the trivial ones, i.e. those of the form (<ref>). We state this as a conjecture and we are currently working towards completing its proof. In the setting of Theorem <ref>, N⊆(C) is (μ,ν)–polar, i.e. θ(N)=0 for all θ∈(μ,ν), if and only if it is Π(μ,ν)–polar.Throughout the paper we work with several running examples which we use to illustrate the arising challenges and to motivate our definitions. These examples provide benchmark cases where we can identify what the convex components should be if they are to satisfy Theorem <ref>, and hence they lead us towards developing ageneral theory whichallows us to recover them as special cases. These examples are introduced in Section <ref>. Subsequently, the paper is devoted to the definition of C_x and its motivation and to the proof of the above theorem. We introduce notation and then, in Section <ref>, defineasymptotically affine components A_x((ϕ_n)_n) corresponding to some sequences convex functions (ϕ_n)_n. Then, in Section <ref>, we construct the convex componentsC_x asa certain essential intersection of such asymptotically affine components. To allow for a smooth narrative of the construction, many technical proofs are grouped in the subsequent Section <ref>.Finally we note that we have hoped to make this paper publicly available only after having proved Conjecture <ref>. However we have been recently made aware of a parallel and independent work of De March and Touzi <cit.> who study the same problem and obtain similar results using different techniques.Consequently, we have agreed to simultaneously make our works publicly available. § EXAMPLES For k≥ 2, consider the following probability measures on ^2:μ^k:= 1/2k∑_i=0^k-12δ_x_i , ν^k:= 1/2k∑_i=0^k-1(δ_y_i^+ +δ_y_i^-),where x_i:=(i/k-1,0),y_i^±:=(i/k-1,± 1)∈ [0,1]× [-1,1].Define the kernelγ^±(x,·) to be1/2(δ_(t,-1)+δ_(t,1)) for x=(t,0) ∈(0,1) ×{0} and to be δ_xotherwise. Note that, for every x∈^2, γ^±(x,·) is a probability measure with finite first moments and with barycenter xand that ν^k=∫μ^k(dz) γ^±(z,·). It follows that θ^k:= μ^k ⊗γ^±∈(μ^k,ν^k) and in particular μ^k≼_cν^k, 2≤ k≤∞. Further it is easy to see that (μ^k,ν^k) is a singleton and θ^k is the unique martingale transport connecting μ^k and ν^k. Indeed, the martingale condition implies that the mass from (0,0) may only go up and down – it may not go right since ν^k puts no mass to the left, i.e. the atom in (0,0) has to be distributed to the atoms in {0}×{-1,1}. Iterating, we conclude. It follows Theorem <ref> holds with C_x=C_i:={i/k-1}× (-1,1) for x∈ C_i, i=0,…, k-1, and C_x={x} otherwise. 
Our general definitions have to reproduce this simple example.We consider the limiting case of Example <ref> above.Let μ^∞ be uniform on (0,1)×{0} and ν^∞ be uniform on (0,1)×{-1,1}. In particular,μ^∞=lim_k μ^k and ν^∞=lim_k ν^k. It is easy to see that θ^∞=μ^∞⊗γ^± is the unique element in (μ^∞,ν^∞). It follows that Theorem <ref> holds with C_(t,s)={t }× (-1,1),for all (t,s)∈ [0,1]× (-1,1),and C_x={x} otherwise. In particular there are uncountably many (C_x)_x∈^2. Note also that, given a fixed Lebesgue null set Γ in (0,1), we could arbitrarily redefine C_x for x∈Γ and Theorem <ref> would still hold. More generally, we observe that x→ C_x is only determined μ(dx)–a.e. We come back to this in Section <ref>. Using notation of examples above, let μ̃^k:=1/2(μ^k+δ_(1/2,0)), 2≤ k≤∞. This case requires us to distinguish between even and odd k since μ^k({(1/2,0)})>0 iff k is odd. For even k, or k=∞, we let γ̃^k(x,·) = γ^±(x,·)1_z≠ (1/2,0) + ν^k(·)1_x= (1/2,0). For odd k we let γ̃^k(x,·) = γ^±(x,·)1_x≠ (1/2,0) + (k/k+1ν^k(·)+ 1/2(k+1)(δ_(1/2,-1)+δ_(1/2,1)))1_x= (1/2,0).Observing that the barycentre of ν^k is (1/2,0), it follows instantly that each γ̃^k(x,·)∈_1 and has barycentre equal to x. In consequence, θ̃^k:=μ̃^k⊗γ̃^k∈(μ̃^k,ν^k). In particular, the mass from the centre of the rectangle [0,1]× [-1,1] is spread to its corners. The convex components in Theorem <ref> are the same for all 2≤ k≤∞ and given by * C_(0,s)={0 }× (-1,1) and C_(1,s)={1 }× (-1,1), for s∈ (-1,1),* C_(t,s)=(0,1) × (-1,1) for (t,s)∈ (0,1)× (-1,1), * C_x={x} for all other x. This example showcases two important features. First, the convex components may have different Hausdorff dimension for different x∈.Second, the domains in this and in the previous example coincide: _μ^∞,ν^∞= _μ̃^k,ν^k, 2≤ k≤∞, while the convex components are very different.If μ,ν are Gaussian measureson ^n then μ≼_c ν iff μ,ν have the same mean and their covariance matrices Σ_μ, Σ_ν are such that Σ:=Σ_ν -Σ_μ is positive semidefinite (see <cit.>). By an orthogonal change of coordinates we can assume w.l.o.g. that Σ is diagonal with eigenvalues σ_1≥σ_2 ≥…≥σ_n≥ 0. We will show that if Σ is (strictly) positive definite then μ≼_c ν are irreducible, i.e. C_x= for each x∈. More generally, let k∈{1,…, n} be such that σ_i>0 iff i≤ k; then the convex component C_xof each point x=(x_i)_i∈^n is ^k×{(x_k+1, …, x_n)}. § CONVEX COMPONENTS GOVERNING MARTINGALE TRANSPORTS §.§ Notation We will denote with x,y the usual dot product between x,y∈, and with ||x|| the associated Euclidian norm. Given W⊆, we will denote with (W) its convex hull, with ∘W its interior, with W̅ its closure and with∂ W its border. We will denote with(V)(resp. (V)) the affine hull(resp. the relative interior)of a convex set V⊆, and with I an arbitrary set of indices. We will denote with B_ϵ(x):= { y∈: ||y-x||<ϵ} the open ball of radius ϵ>0 centered in x∈. We will denote with [x,y] (resp. (x,y), [x,y), (x,y)) the set { x+t(y-x): t ∈ A} with A=[0,1] (resp. (0,1) , [0,∞), (-∞, ∞)). GivenK ⊆,f:→ and g:K →, we denote with f_|Kthe restriction of f to K and define[This definition makes sense if K contains at least two points, which holds in the sequel any time we need to consider a constant.](g):=sup_x,y∈ K, x≠ y|g(x)-g(y)|/||x-y||as the constant ofg and set:={ϕ:→ is convex and Lipschitz} , _+:={ϕ∈: ϕ≥ 0}. We will denote withthe family of non-empty, convex and relatively open sets of . 
If C⊆ is closed we define distance from Casd_C(x):=min_y∈ C ||x-y||_, and we recall that if C is convex then d_C∈_+ (see <cit.>) and, as is easily seen, (d_C)=1. If α is a real measure and ϕ a α-integrable function, we often writeα, ϕ for ∫ϕ dα. We denote with() (or simply ) the set ofpositive Borel measures α on which are finiteand have finitefirst moment (i.e. are such that ∫_ (1+||x||)α(dx)<∞), and with _1 the set of probabilities in . Notice that if f:→ isand α∈ then f∈ L^1(α). Throughout the paper we consider a given pair μ,ν∈ assumed to be in convex orderwhich we will write asμ≼_cν.We will use without further notice the fact that if a is affine then ν -μ, ± a≥ 0 and so ν -μ, a=0. Notice that the functionals ν-μ, ϕ and ( ϕ_|K), defined for ϕ∈, take values in [0,∞) and are positively homogenous. We recall that if ϕ:→ is convex, its right derivative ϕ'_+exists and is increasing and right continuous, and its second derivative in the sense of Schwartz distributions is the positive Radon measureϕ”which satisfies ϕ”((c,d])=ϕ'_+(d)-ϕ'_+(c). §.§ Asymptotically affine components In dimension one all information needed to understand the structure of (μ,ν) is contained in the potentials functionu_λ(x):= ∫ |x-y| λ(dy),x∈of λ:=ν -μ. The domain of μ≼_cν, defined as the set {u_ν >u_μ}={u_ν-μ>0 }, being open it is composed of at most countably many disjoint open intervals which are the convex components which delimit the martingale evolutions in (μ,ν). The key idea to generalise the study of martingale polar sets to dimension higher than one is that, instead of consideringu_ν-μ(x)=ν-μ, ϕ_x with ϕ_x=|· -x|, one should consider the wider familyν-μ, ϕ where ϕ∈; restricting to the ϕ∈_+ such that ϕ(x)=x gives in a way an multidimensional equivalent of considering |· -x|.With this in mind, the following remark provides the crucial property which characterizes the convex components in a way that does not make any reference to the potential functions.In dimension one (i.e. if =) the following are equivalent* { u_ν > u_μ}⊇ (c,d), * if ν-μ, ϕ = 0 for ϕ∈ then necessarily ϕ_|(c,d) is affine.* If ν-μ, ϕ^n→ 0 for (ϕ^n)_n ∈⊆ then there exist affine functions (a_n)_n s.t. (ϕ^n - a_n)(x) → 0for all x∈ (c,d). The above remark is easily proven using the identities (where ϕ∈)ν-μ, ϕ =∫_ (u_ν-u_μ) d ϕ” and ϕ”((c,d])=ϕ'_+(d)-ϕ'_+(c)and taking a_n to be an affine function supporting ϕ at x.To obtain higher dimensional analogues of the concept of convex component, it is natural to start with the second property in Remark <ref>; the idea being essentially that the convex component ofx should be the largest convex set on which all convex functions such that ν-μ, ϕ =0 are affine. To make this more precise and prove the existence of such set, notice that if ϕ:→ is convex then there exists disjoint intervals (a_n,b_n) such that ϕ is affine on each [a_n,b_n] and is locally strictly convex on ∖ [a_n,b_n]: indeed [a_n,b_n) are the intervals of constancy of the increasing right continuous function ϕ'_+, or equivalently ∖ (a_n,b_n) is the support of the measure ϕ”. Thus, for any x ∈∖ [a_n,b_n] there exists no open interval containing x on which ϕ is affine, whereas for x∈ (a_n,b_n) there exists the biggest open interval containing x and on which ϕ is affine (it is indeed (a_n,b_n)).Notice that it is crucial that we insist on the intervals being open: if ϕ(x)=|x| there does not exists a biggest interval containing 0 on which ϕ is affine, since ϕ is affine on both (-∞,0] and [0,∞)but not on their union. 
So, one might conjecture that for any ϕ∈ and x∈ there exists a largest set A(ϕ)_x∈ which contains x and the family of such sets forms a partition of . Building on the characterisation in Remark <ref> we would then expect that C_x⊆ A(ϕ)_x for any ϕ such that ν-μ, ϕ=0, and that C_xshould be defined as the intersection of A(ϕ)_x over such ϕ. As we will see, this is `essentially' correct, but figuring out whatexactly is the correct construction turns out to be quite delicate. First, for technical reasons related to the proof of Conjecture <ref>, we will work not with single functions ϕ such thatν-μ, ϕ=0, but ratherwith sequences (ϕ^n)_n such thatν-μ, ϕ^n→ 0. Second, it turns out that one should not take the intersection over all such sequences (ϕ^n)_n, but rather an μ-essential intersection, in a sense which we will motivate and explain later. We say that (ϕ^n)_n∈⊆ is asymptotically affine on V if there exist affine functions (a_n)_n such thatϕ^n - a_n → 0 on V.The intuitive meaning is thatif ν-μ, ϕ^n→ 0 for ϕ^n∈ then ϕ^n_|V is `affine in the limit', meaning notthat ϕ^n converge to an affine function[Indeed the value ofν-μ, ϕ^n does not change if we subtract from ϕ^n an arbitrary affine function a^n, and the (a^n)_n's do not need to converge. Said otherwise, any sequence of affine functions is asymptotically affine, even if it is not converging.], but rather that the ϕ^n are more and more `flat'. The above notion will ultimately allow us to define the multi-dimensional equivalents of the intervals (c,d) of Remark <ref>. Fix x∈ and (ϕ^n)_n∈⊆. Then there exists the biggest, with respect to set inclusion, set inwhich contains x and on which (ϕ^n)_n is asymptotically affine. Further, it is given byA_x((ϕ^n)_n):=((∪{V∈: x∈ V and (ϕ^n_|V)_n is asymptotically affine} )) Observe that (ϕ^n)_n is asymptotically affine on any singleton, and if it is asymptotically affine on a set then it is asymptotically affine on any of its subsets. A_x((ϕ^n)_n) is called the (ϕ^n)_n-asymptotically affine componentof x or, simply, the (ϕ^n)_n-component of x. When (ϕ^n)_n are fixed we write A_x. Indexed by x∈, the family forms a partition ofin the following sense.For a set Γ⊂, a family of sets U_i, i∈ is said to be a convex partition of Γ if for all i∈, U_i is convex, relatively open, ⋃_i∈ U_i = Γ andU_i∩ U_k≠∅⟹ U_i=U_j, fori,j∈. Define=((ϕ^n)_n):=∖{x∈:A_x((ϕ^n)_n)={x}} ,and notice that equals the set of x∈ for which there exist y,z∈, y≠ z such that x∈ (y,z)and(ϕ^n)_n isasymptotically affine on(y,z). It follows readilyfrom Proposition <ref> and its proof that Let (ϕ^n)_n∈⊆.The family of sets {A_x((ϕ^n)_n)} for x∈((ϕ^n)_n) (resp. for x∈) forms a convex partition of ((ϕ^n)_n) (resp. ).We give an example of (ϕ^n)_n-components which shows that, whilein dimension 1 the (ϕ^n)_n-components forma countable partition ofmade of open intervals, even in dimension two and with constant sequenceϕ^n=ϕthe non-trivial (ϕ^n)_n-components can have less than full dimension and can be uncountable. Let Γ be the half disk Γ:={ (x,y)∈^2: x^2+y^2≤ 1, y≥ 0} and define ϕ:=d_Γ. Then the family of all ϕ-componentsforms the following partition of ^2:{∘Γ,(-1,1)×{0}, (-1,1)× (-∞,0) }∪∪∪^+ ∪^- ,where*is the family of all singletons {(x,y)} such that x^2+y^2=1, y≥ 0*is the family of all half lines {t(x,y):t>1} such that x^2+y^2=1, y≥ 0* ^+ is the family of all half lines {(1,0)+t(x,y):t> 0} such that x^2+y^2=1, y< 0 ≤ x* ^- is the family of all half lines {(-1,0)+t(x,y):t> 0} such that x^2+y^2=1, y< 0 , x≤ 0. 
The following key proposition shows that these sets are intimately linked with the components of the domain of (μ,ν). Fix x∈ and (ϕ^n)_n∈⊆. If ν -μ, ϕ^n→ 0 then for any θ∈(μ,ν) and disintegration θ=μ⊗γ, γ(x, ·) is concentrated on A_x((ϕ^n)_n)for μ a.e. x.Proposition <ref> has the following interesting corollary.If ν -μ, ϕ^n→ 0 for (ϕ^n)_n∈⊆ thenμ_|∖((ϕ^n)_n)=ν_|∖((ϕ^n)_n).Recall that by Propositions <ref> and <ref>, the sets A_x form a convex partition such that for θ=μ⊗γ∈(μ,ν), γ(x,A_x)=1 μ(dx)-a.e. In particular, A_x={x} if x∈∖ and A_x⊆ if x∈ so that γ(x, ·)=δ_x for μ a.e. x∈∖ and γ(x,·) is concentrated onfor μ a.e. x∈. Consequently, if B⊆ is Borel we get that ν(B)=∫_μ(dx) γ(x,B)= ∫_μ(dx) γ(x,B∩) +∫_∖μ(dx) δ_x(B) andif moreover B⊆∖ we getν(B)=∫_∖μ(dx) 1_B(x)=μ(B) .We now easily obtain the result about the convex order stated in the introduction. Let ϕ_n be the infimal convolution between ϕ and n|| · ||, then ϕ_n ∈ and ϕ_n ↑ϕ (see <cit.>). Let a be an affine function such that a≤ϕ_1 and λ∈,then (ϕ_n)^- ≤ |a| ∈ L^1(λ) and ϕ_n^+ ↑ϕ^+ and so by dominated convergence and by monotone convergence we get that ∫ϕ_n dλ↑∫ϕ dλ. Since we assumed ∫ϕ d μ=∫ϕ d ν<∞ we get that so ν -μ, ϕ^n→ 0. Since ϕ is strictly convex and ϕ_n →ϕ we get that ((ϕ^n)_n)=∅, thus Corollary <ref> gives the thesis. We close this section with application of Proposition <ref> to our motivating examples. [Examples <ref>–<ref> continued.]We continue the discussion of our motivating examples. Recall the pairs of measures in convex order: μ^k≼_cν^k and μ̃^k≼_cν^k.We study the sets A((ϕ^n))_x for ν -μ, ϕ^n→ 0. In fact, for this example, we restrict out attention to constant sequences ϕ^n=ϕ with ν -μ, ϕ=0. Consider ϕ((t,s))=f(t) for a strictly convex and f so that, in particular, A(ϕ)_(t,s)={t}×.Analogously, consider ψ((t,s))=g(s), where g isLipschitz, convex, equal to 0 on [-1,1] and strictly convex on (∞,-1) and on (1,∞). It follows that A(ψ)_(t,s)=× (-1,1) for s∈ (-1,1) and t∈.It is easy to compute the difference of integrals of ϕ or ψ against our measures.First,ν^k-μ^k, ϕ=∬μ^k(dx) γ^k(x,dy) (ϕ(y)-ϕ(x))=0, where θ^k=μ^k⊗γ^k∈(μ^k,ν^k) was exhibited in Examples <ref>–<ref>, and the above follows since γ^k((t,s),· ) is concentrated on {t}×{-1,1} and ϕ is constant on {t}×.Second, we haveν^k-μ^k, ψ=0 simply since μ^k,ν^k are supported on [0,1]× [-1,1].By Proposition <ref>, for any θ∈(μ^k,ν^k) with disintegration θ=μ^k⊗γ we have γ((t,s),·) is supported on A(ϕ)_(t,s)∩ A(ψ)_(t,s)= {t}× [-1,1]μ(d(t,s))–a.e. Combined with our explicit construction of γ^k this shows that Theorem <ref> holds with C_(t,s)={t}× (-1,1) for s∈ (-1,1) and (t,0) in the support of μ^k and C_(t,s)={(t,s)} otherwise, k≤∞.Similarly to above, ν^k-μ̃^k, ψ=0 and also ν^k-μ̃^k, χ=0, where χ((t,s))=h(t) for a Lipschitz, convex function h equal to 0 on [0,1] and strictly convex on (∞,0) and on (1,∞) so that A(χ)_(t,s)=(0,1)× for t∈ (0,1), whileA(χ)_(0,s)={0}× and A(χ)_(1,s)={1}× since the point x has to belong to the relative interior of A(ϕ)_x, see (<ref>). It follows thatC_(t,s)(μ̃^k,ν^k)⊂ (0,1)× (-1,1) for (t,s)∈ (0,1)× (-1,1) and C_(0,s)(μ̃^k,ν^k)⊂{0}× (-1,1), C_(1,s)(μ̃^k,ν^k)⊂{1}× (-1,1). From our example of θ̃^k∈(μ̃^k,ν^k) we see that the inclusions may not be strict so that the convex components are indeed as asserted in Example <ref>.§.§ Convex components describing support of martingale transports We saw in Proposition <ref> that for any martingale transport θ∈(μ,ν) the mass from x is diffused within A_x((ϕ^n)_n) for μ–a.e. x. 
This holds for any sequence (ϕ^n)_n∈⊆ with ν -μ, ϕ^n→ 0. In consequence, one may be inclined to ask if γ(x,·), where θ=μ⊗γ, is concentrated on the intersection of A_x((ϕ^n)_n) over all such sequences (ϕ^n)_n∈⊆? This, in general, is false. Indeed, for a fixed x, we can typically find a sequence (ϕ^n)_n∈⊆ such that A_x((ϕ^n)_n) is too small.In other words, the union over all (ϕ^n)_n∈⊆ such thatν -μ, ϕ^n→ 0 of theμ-null set _μ((ϕ_n)_n) on which it does not happen that γ(x,·) is concentrated on A_x((ϕ^n)_n) is not a μ-null set. To understand this we come back to Examples <ref>–<ref>.[Examples <ref>–<ref> continued.]We continue the discussion of our motivating examples and their convex components as computed in Example <ref>. We argue that C_z may not be defined as the intersection of A((ϕ^n)) over all sequences with ν^k -μ^k, ϕ^n→ 0. Fix x_0∈ (0,1). Now let ϕ^n((x,y))=(y-n(x-x_0))^+ so that ϕ^n is affine on {x}× [-1,1] for x∉ (x_0-1/n,x_0+1/n). It follows that ν^k -μ^k, ϕ^n→ 0 for any 2≤ k≤∞. However ϕ^n((x_0,y))=y^+ from which we see that A((ϕ^n))_(x_0,0)={(x_0,0)} and hence⋂_(ϕ^n)_n∈⊆: ν^k -μ^k, ϕ^n→ 0A((ϕ^n))_(x,0)={(x,0)}⊊ C_x(μ^k,ν^k), 2≤ k≤∞.To circumvent the above problem, we would need to restrict ourselves to a suitable countable family of sequences of functions in . This can be achieved by considering a suitably defined essential intersection instead of the simple intersection above. To describe our construction we need some additional definitions. Let CL() be the set of non-empty closed subsets of . There is number of well understood topologies one may put on CL(). For our purposes it is most convenient to equip CL() with the Wijsman topology <cit.> which is the weak topology generated by mappings d_·(x):CL()→, x∈. This topology is weaker than the Vietoris topology and stronger than the Fell topology. To us, is has two main advantages. First, it makes CL() into a Polish space as shown by Beer <cit.>. Second, it generates the Effros σ–algebra which implies that weak measurability of closed–valued multifunctions can be treated similarly to regular functions, see <cit.> and the references therein. Finally, letbe the set of closed convex subsets of . Thenis a closed subset of CL() and hence also Polish with its Wijsman topology <cit.>. We equip it with partial ordering given by set inclusion. Then, in a recent work, Larsson <cit.> showed that one can build a strictly increasing measurable map fromto . With such a map, one can follow the usual arguments, to establish existence of essential infimum of a family of –valued random variables, see <cit.> for details. This allows us to give a proper definition of convex component for (μ,ν) which avoids the problems highlighted in Example <ref> above.For a fixed sequence (ϕ^n)_n∈, we see the mapping∋ x→A_x ((ϕ^n)_n)∈as a –valued random variable on (,(),μ). It follows from the structure of Wijsman topology, that the measurability of the above mapping is equivalent to its Borel measurability as a multifunction, see Hess <cit.>. We believe this follows readily but leave the details aside[We plan to add these in the subsequent version of the paper.]. 
We are interested in the collection of such variables over (ϕ^n)_n∈ I:={ (ϕ^n)_n∈ such that ⟨ν-μ, ϕ^n⟩→ 0 }. As explained above, we can take their essential infimum with respect to μ, which exists, is unique μ-a.s., measurable and -valued. Further, it may be obtained as an infimum over a countable family: there exists a sequence (ϕ^k,n)_n∈ I, k≥ 1, such that _x(μ,ν):=⋂_k≥ 1A_x((ϕ^k,n)_n) satisfies _x(μ,ν)= μ-essinf_(ϕ^n)_n∈ IA_x((ϕ^n)_n) μ(dx)-a.e. We now want to define the convex component of x as the largest relatively open convex subset of _x which contains x. This can be achieved using faces of a convex set, which we also exploit in Section <ref> to characterise the asymptotically convex components A_x. We recall here that given a convex set K⊆ and x∈ K, there exists a unique face[While we follow <cit.>, we warn the reader that some other authors simply call `face' what <cit.> call `exposed face', and that this distinction is important since not all faces are exposed (see immediately after <cit.>).] F_x of K that contains x in its relative interior (F_x) (see <cit.>), which <cit.> shows to be the smallest face of K containing x and which, by <cit.>, is also the maximal subset in  included in K and containing x. For x∈ the set C_x=C_x(μ,ν):= (F_x(_x(μ,ν)))∈ is called the convex component of x, and the set =(μ,ν):={x∈: C_x≠{x}} is called the domain. We stress that convex components are defined μ(dx)-a.e. The particular definition in (<ref>) could be modified on a μ-null set as long as the resulting sets are also a convex partition of . Recall that x∈ A_x((ϕ^n)_n) and hence x∈_x(μ,ν), so that Definition <ref> is well posed. We need to show that the convex components C_x, x∈, of Definition <ref> form a convex partition of  with x∈ C_x, and that for any θ∈(μ,ν) and disintegration θ=μ⊗γ, γ(x, C_x)=1 μ(dx)-a.e. Note that, by Propositions <ref> and <ref>, these properties hold for A_x((ϕ^k,n)_n), for each k≥ 1. In particular, since _x=lim_K→∞⋂_k≤ KA_x((ϕ^k,n)_n), we see that γ(x,_x)=1, μ(dx)-a.e. and, by Theorem <ref>, γ(x,C_x)=1, μ(dx)-a.e. As recalled above, for a convex set K and x∈ K, (F_x(K)) is the largest relatively open set which includes x and is contained in K. It follows that for two convex sets K_1⊆ K_2 with x∈ K_1, we have (F_x(K_1))⊆(F_x(K_2)). In particular, for any (ϕ^n)_n∈ I, C_x=(F_x(C_x(μ,ν)))⊆(F_x(A_x((ϕ^n)_n)))=A_x((ϕ^n)_n), μ(dx)-a.e., where the last equality follows from the characterisation in Lemma <ref> below. Note that we may take the above to hold for ((ϕ^k,n)_n), k≥ 1, in (<ref>), μ(dx)-a.e. Now suppose y∈ C_x. Then, by the above inclusion and by Proposition <ref>, A_x((ϕ^k,n)_n)=A_y((ϕ^k,n)_n), so, by (<ref>), we have _x=_y. It now follows that x and y are in the relative interior of the same convex face of this set, and hence their convex components are equal. This concludes the proof. Finally, observe that Theorem <ref>, combined with the general argument given for the proof of Corollary <ref>, readily implies the following result. μ_|∖=ν_|∖. § PROOFS AND FURTHER PROPERTIES OF AFFINE COMPONENTS We turn now to the proofs of the results announced in Section <ref>. We first establish Proposition <ref> and then prove Proposition <ref> by characterising A_x as the convex face containing x. For the latter proof, we establish certain results in convex analysis which are of independent interest, see Theorem <ref> below.
We start, however, with a simple result, exploited in all of the proofs, which asserts that, because of convexity, it is enough to consider very special affine functions to determine whether (ϕ^n)_n∈⊆ is asymptotically affine on a set V∈: namely, affine functions b_n supporting ϕ^n at a fixed[I.e. a point p not dependent on n; otherwise the result is false.] point p∈ V. This fact relies crucially on the assumption that[As is clear from the proof, Lemma <ref> would hold for arbitrary V if we assumed that p∈(co(V)).] V∈ — for a counterexample one may consider V=[0,1), p=0 and ϕ^n(t)=t^+, where ϕ^n_|V is affine and the function b_n:=0 supports ϕ^n at p, yet (ϕ^n-b_n)(t)=t is not identically 0 on V. Given (ϕ^n)_n∈⊆ and p∈ V∈, choose[Such an affine function b_n always exists (i.e. ∂ϕ^n(p)≠∅) since ϕ^n is convex: see <cit.>.] b_n affine s.t. b_n≤ϕ^n and b_n(p)=ϕ^n(p). Then ϕ^n - b_n→ 0 on V iff (ϕ^n)_n is asymptotically affine on V. By definition, if ϕ^n - b_n→ 0 on V then (ϕ^n)_n is asymptotically affine on V. Conversely, let a_n be affine and such that ϕ^n -a_n→ 0 on V. If V is a singleton then trivially ϕ^n - b_n→ 0 on V. If V is not a singleton, by restricting ourselves to its affine hull we can assume w.l.o.g. that V is open (in ^N with N≥ 1). We can then apply <cit.> to f_n=ϕ^n -a_n and get that max{ ||d||_: d∈∂ f_n(p) }→ 0. Since ϕ^n -b_n∈_+ equals 0 at p, it achieves its minimum 0 at p, and so 0∈∂ (ϕ^n-b_n)(p), i.e. ∇ b_n(p)∈∂ϕ^n(p). This gives that ∇ b_n(p) - ∇ a_n(p)∈∂ f_n(p), so we get that ∇ b_n(p) - ∇ a_n(p)→ 0. Since a_n,b_n are affine and b_n(p)- a_n(p) = ϕ^n(p)-a_n(p)→ 0, we get that b_n-a_n→ 0 on , so ϕ^n - b_n=(ϕ^n - a_n) +(a_n - b_n)→ 0+0=0 on V. §.§ Proof of Proposition <ref> Proposition <ref> follows from the series of lemmas below: Lemmas <ref> and <ref> imply that (ϕ^n_|A_x)_n is asymptotically affine and x∈ A_x∈, and Lemma <ref> gives that if x∈ V∈ and (ϕ^n_|V)_n is asymptotically affine then V⊆ A_x, which completes the proof. If (ϕ^n)_n∈⊆ is asymptotically affine on V_i∈ for each i∈ I and ∩_i∈ I V_i≠∅ then (ϕ^n)_n is asymptotically affine on (∪_i∈ I V_i). By applying Lemma <ref> we can choose an affine b_n supporting ϕ^n at p∈∩_i∈ I V_i and get that ϕ^n - b_n→ 0 on each V_i and so on ∪_i∈ I V_i. If {i_1,…, i_n}⊆ I, (t_i_j)_j=1,…, n are non-negative and such that ∑_j t_i_j=1, and x_i_j∈ V_i_j, we get that 0≤ (ϕ^n - b_n)(∑_j=1^n t_i_j x_i_j) ≤∑_j=1^n t_i_j(ϕ^n - b_n)(x_i_j)→ 0. The thesis follows since the set of all points of the form ∑_j=1^n t_i_j x_i_j is (∪_i∈ I V_i): see <cit.>. If V_i∈ for each i∈ I and x∈∩_i∈ I V_i then ⋂_i∈ IV̅_i is the closure of ⋂_i∈ I V_i, and x∈((∪_i∈ I V_i))∈. Trivially ∩_i∈ I V_i is included in the closed set ∩_i∈ IV̅_i, and so is its closure. For the opposite inclusion take y∈∩_i∈ IV̅_i; then <cit.> gives that (y,x]⊆∩_i∈ I V_i, and thus y belongs to the closure of ∩_i∈ I V_i, proving the claim. The set C:=(∪_i∈ I V_i) is convex and x∈ C, so the relative interior (C) of C is convex and relatively open. Now, by restricting our attention to (C)=(∪_i∈ I V_i), we assume w.l.o.g. that C has full dimension N. Assume by contradiction that x∈ C∖(C); then there exists v∈∖{0} such that the closed half space H_x^v:={ y: ⟨y-x,v⟩≥ 0} contains C (see <cit.>). Since C=(∪_i∈ I V_i) has full dimension, it is not contained in the hyperplane ∂ H_x^v, so there exist i̅∈ I and y∈ V_i̅ such that y∉∂ H_x^v. Since [y,x]⊆V̅_i̅ and V_i̅∈, there exists z∈ V_i̅ on the extension of the segment [y,x] beyond x, which is absurd since H_x^v contains no such point and V_i̅⊆ C⊆ H_x^v. Thus x∈(C). Assume that C⊆ is convex and V∈, V⊆ C.
If V∩(C)≠∅ then V⊆(C). Assume by contradiction that there exist y∈ V∩(C) and z∈ V∖(C); then C contains no point of the extension of the segment [y,z] beyond z, since otherwise z would belong to (C) (see <cit.>). However, V∈ gives that V does contain a point of that extension, contradicting V⊆ C. §.§ Characterisation of A_x as a convex face & proof of Proposition <ref> We turn now to the proof of Proposition <ref>, which relies on a characterisation of A_x in terms familiar to the convex analyst. We will repeatedly use, without further notice, the fact that, given a convex set D⊆ and x∈ D, there exists a unique face[While we follow <cit.>, we warn the reader that some other authors simply call `face' what <cit.> call `exposed face', and that this distinction is important since not all faces are exposed (see immediately after <cit.>).] F_x of D that contains x in its relative interior (F_x) (see <cit.>), which <cit.> shows to be the smallest face of D containing x (which exists since trivially any intersection of faces of D is a face of D) and which is given by F_x=F_x(D)={y∈ D : ∃ z∈ D, t∈ (0,1) such that x=ty+(1-t)z}, see[The given reference <cit.> assumes that D is compact at the beginning of Section 2.3 (which contains Corollary 2.67), but clearly this is not needed to show our (<ref>). Moreover <cit.> contains a small typo (clearly t=λ should belong to (0,1), not [0,1)).] <cit.>. Given (ϕ^n)_n∈⊆, let b^n be affine and s.t. b^n≤ϕ^n and b^n(x)=ϕ^n(x). Then the face F_x=F_x((ϕ^n)_n) of the convex set {y:(ϕ^n-b^n)(y)→ 0 } for which x∈(F_x) satisfies A_x=(F_x) and A̅_x=F̅_x. Since b^n≤ϕ^n, the set D:={y:(ϕ^n-b^n)(y)→ 0 } is convex. According to <cit.>, the relative interiors of its non-empty faces constitute the maximal subsets of D in . Thus (F_x)= A_x by Lemma <ref> and by maximality of F_x and of A_x. Since any convex set C has the same closure as (C) (see <cit.>), we get F̅_x=A̅_x. An interesting by-product of Lemma <ref> is the fact that the relative interior of the face of {ϕ^n -b^n→ 0} which contains x does not change if we choose a different supporting function b^n (even if this changes {ϕ^n -b^n→ 0}). To better understand this, consider the following simple example in dimension one and with constant sequences ϕ^n=ϕ and b^n=b. Let x=0 and ϕ(t)=t^+; then b_1:=0 and b_2(t)=t/2 both support ϕ at x, and {ϕ=b_1}=(-∞,0] is very different from {ϕ=b_2}={0}, yet A_0(ϕ)={0} is the face F_x of D which contains x=0 in its relative interior, both for D=(-∞,0] and for D={0}. Let α∈_1 be concentrated on a convex set D⊆. Let b(α):=∫ y α(dy) be the barycenter of α and F_b(α) the face of D that contains b(α) in its relative interior. Then b(α)∈ D and α is concentrated on F_b(α). If D is a compact subset of a locally convex space, the previous theorem is classical and holds in great generality (see for example <cit.>). If D is not compact, even the conclusion x∈ D is generally false in infinite dimension (see for example <cit.>, or see <cit.> for a more detailed study). Notice that the convex set {ϕ^n-b^n→ 0 } in Lemma <ref> is not necessarily even closed, so we are required to study general convex sets. To build intuition for Theorem <ref>, consider a simple two-dimensional case: D=[0,1]× [-1,1] and take x=(0,0), so that F_x={0}× (-1,1). If α has barycenter (0,0) and α((0,∞)×)>0 then, to have barycenter (0,0), α must satisfy α((-∞,0)×)>0. So, if α is concentrated on D then it is actually concentrated on the smaller set F_x.
The general situation, however, is more involved, since not all faces are exposed and we do not assume that D is closed. The fact that b(α)∈ D is proved in <cit.>. By definition of face, C:=D∖ F_x is convex. Now assume by contradiction that α(C)>0. Clearly α is not concentrated on C, since otherwise x:=b(α)∈ C, contradicting x∈ F_x. Thus λ:=α(C)∈ (0,1), so we can define mutually singular probabilities β:=λ^-1α_|C and γ=(1-λ)^-1α_|F_x such that α=λβ+(1-λ)γ, and as stated above the barycenters of β and γ belong to the convex sets on which they are concentrated, so b(β)∈ C and b(γ)∈ F_x. Since F_x∋ b(α)=λ b(β)+(1-λ)b(γ), by definition of face we get that b(β)∈ F_x, contradicting b(β)∈ C. Since ∂ϕ^n is an upper hemicontinuous multifunction (see <cit.>), there exists a Borel measurable selector ϕ̇^n of ∂ϕ^n, i.e. a Borel function such that ϕ̇^n(x)∈∂ϕ^n(x) for all x∈: see for example <cit.>. Fixing such a Borel selector, we define Δ_x ϕ^n(y):=ϕ^n(y)-(ϕ^n(x) + ⟨ϕ̇^n(x),y-x⟩). Notice that, independently of the choice of the kernel γ and of the selector ϕ̇^n, one has Δ_x ϕ^n(y)≥ 0 and 0 ←⟨ν-μ, ϕ^n⟩ = ∫μ(dx) ∫γ(x,dy) Δ_x ϕ^n(y) =∫θ(d(x,y)) Δ_x ϕ^n(y), and so Δ_xϕ^n→ 0 in L^1(θ). Passing to a subsequence (without relabeling), we get that θ-a.e. Δ_xϕ^n→ 0, i.e. for μ-a.e. x we have that γ(x,·) is concentrated on { y:Δ_x ϕ^n(y)→ 0 }. Fix any one such x and apply Theorem <ref> with α=γ(x,·) (so that b(α)=x) and D={ y:Δ_x ϕ^n(y)→ 0 } to obtain that α is concentrated on F_x and thus a fortiori on F̅_x, which by Lemma <ref> equals A̅_x((ϕ^n)_n).
Approximate Inference with Amortised MCMC

We propose a novel approximate inference framework that approximates a target distribution by amortising the dynamics of a user-selected Markov chain Monte Carlo (MCMC) sampler. The idea is to initialise MCMC using samples from an approximation network, apply the MCMC operator to improve these samples, and finally use the samples to update the approximation network, thereby improving its quality. This provides a new generic framework for approximate inference, allowing us to deploy highly complex, or implicitly defined, approximation families with intractable densities, including approximations produced by warping a source of randomness through a deep neural network. Experiments consider Bayesian neural network classification and image modelling with deep generative models. Deep models trained using amortised MCMC are shown to generate realistic looking samples as well as producing diverse imputations for images with regions of missing pixels.

§ INTRODUCTION Probabilistic modelling provides powerful tools for analysing and making future predictions from data. The Bayesian paradigm offers well-calibrated uncertainty estimates on unseen data, by representing the variability of model parameters given the current observations through the posterior distribution. However, Bayesian methods are typically computationally expensive, due to the intractability of evaluating posterior or marginal probabilities. This is especially true for complex models like neural networks, for which a Bayesian would treat all the weight matrices as random variables and integrate them out. Hence approximations have to be applied to overcome this computational intractability in order to make Bayesian methods practical for modern machine learning tasks. This work considers the problem of amortised inference, in which we approximate a given intractable posterior distribution p with a sampler q, a distribution from which we can draw exact samples. Compared with typical Monte Carlo (MC) methods, which approximate p with a fixed set of samples, amortised inference distributes the computational cost over training the sampler q. This allows us to quickly generate a large number of samples at test time, and can significantly save time and storage when inference is required repeatedly as an inner loop of other algorithms, such as in training latent variable models and structured prediction. Variational inference <cit.> and its stochastic variants <cit.> provide a straightforward approach to amortised inference, in which we find an optimal q from a parametric family of distributions 𝒬 by minimising a certain divergence measure (often the KL divergence) D[q||p]. Unfortunately, except for a few very recent attempts <cit.>, most existing variational approaches require the distributions in 𝒬 to have computationally tractable density functions in order to solve the optimisation problem. This forms a major restriction on the choice of the approximation set 𝒬, since exact samplers, in their most general form, are random variables of the form x = f(ϵ), where f is a (non-linear) transform function, and ϵ follows some standard distribution such as a Gaussian. Except in simple cases, e.g. when f is linear, it is difficult to explicitly calculate the density function of such random variables.
Therefore, a key challenge is to develop efficient approximate inference algorithms using generic samplers, which we refer to as wild approximations, without needing to calculate the density functions. Such algorithms would allow us to deploy more flexible families of approximate distributions to obtain better posterior approximations. In this paper we develop a new, simple principle for wild variational inference based on amortising MCMC dynamics. Our method deploys a student-teacher, or actor-critic, framework that turns existing MCMC dynamics into a supervisor for training samplers q, by iterating the following steps: (1) the sampler q (student) generates initial samples which are shown to an MCMC sampler; (2) the MCMC sampler (teacher) improves the samples by running MCMC transitions; (3) the sampler q takes feedback from the teacher and adjusts itself in order to generate the improved samples next time. This framework is highly generic, works for arbitrary sampler families, and can take advantage of any existing MCMC dynamics for efficient amortised inference. Empirical results show that our method works efficiently on Bayesian neural networks and deep generative modelling.

§ BACKGROUND Bayesian inference: Consider a probabilistic model p(x|z, θ) along with a prior distribution p_0(z), where x denotes an observed variable, z an unknown latent variable, and θ a hyper-parameter that is assumed to be given, or will be learned by maximising the marginal likelihood log p(x|θ). The key computational task of interest is to approximate the posterior distribution of z: p(z|x, θ) = 1/p(x|θ) p_0(z) p(x|z, θ), where p(x|θ) = ∫ p_0(z) p(x|z, θ) dz. This includes both drawing samples from p(z|x, θ) in order to estimate related average quantities, as well as estimating the normalisation constant p(x|θ) for hyper-parameter optimisation or model selection. Since both the data x and θ are assumed to be fixed in inference, we may drop the dependency on them when it is clear from the context.

MCMC basics: MCMC provides a powerful, flexible framework for drawing (approximate) samples from given distributions. An MCMC algorithm is typically specified by its transition kernel 𝒦(z'|z) whose unique stationary distribution equals the target distribution p of interest, that is, q = p if and only if q(z) = ∫ q(z') 𝒦(z|z') dz' for all z. This fixed point equation fully characterises the target distribution p, and hence inference regarding p can be framed as (approximately) solving equation (<ref>). In particular, MCMC algorithms can be viewed as stochastic approximations for solving (<ref>), in which we start by drawing z_0 from an initial distribution q_0 and iteratively draw a sample z_t at the t-th iteration from the transition kernel conditioned on the previous state, i.e. z_t | z_t-1 ∼ 𝒦(z_t | z_t-1). In this way, the distribution q_t of z_t can be viewed as obtained by a fixed point update of the form q_t(z) ← 𝒦q_t-1(z), where 𝒦q_t-1(z) := ∫ q_t-1(z') 𝒦(z|z') dz', so that recursively we have q_t = 𝒦_t q_0, where 𝒦_t denotes the t-step transition kernel.
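To make the transition step concrete, the following minimal Python sketch simulates z_t | z_t-1 ∼ 𝒦(z_t | z_t-1) for a MALA-style kernel (a Langevin proposal with a Metropolis correction), applied to K particles in parallel. This is an illustrative sketch rather than any particular implementation; log_p and grad_log_p denote user-supplied evaluations of log p and its gradient, up to an additive constant.

```python
import numpy as np

def mala_step(z, log_p, grad_log_p, eta, rng):
    # Langevin proposal: z' = z + eta * grad log p(z) + sqrt(2 eta) * xi
    prop = z + eta * grad_log_p(z) + np.sqrt(2.0 * eta) * rng.standard_normal(z.shape)

    def log_q(a, b):
        # log density (up to constants) of proposing a from b
        return -np.sum((a - b - eta * grad_log_p(b)) ** 2, axis=-1) / (4.0 * eta)

    # Metropolis-Hastings correction keeps p invariant under the kernel
    log_alpha = log_p(prop) - log_p(z) + log_q(z, prop) - log_q(prop, z)
    accept = np.log(rng.uniform(size=len(z))) < log_alpha
    return np.where(accept[:, None], prop, z)
```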
The standard theory of Markov chains suggests that the Markov transition monotonically decreases the KL divergence (<cit.>, see also Lemma 1 in the appendix), that is, D_KL[q_t || p] ≤ D_KL[q_t-1 || p]. Therefore, q_t converges to the stationary distribution p as t→∞ under proper conditions.

§ AMORTISED MCMC MCMC can be viewed as approximating the fixed point update (<ref>) in a non-parametric fashion, returning a set of fixed samples for approximating p. This motivates amortised MCMC, which uses more general parametric approximations of the fixed point update (<ref>) to train parametric samplers for amortised inference. In the sequel, we introduce our generic framework in Section <ref>, discuss in Section <ref> some particular algorithmic choices, and apply amortised MCMC to approximate maximum likelihood estimation (MLE) in Section <ref>.

§.§ Main idea: learning to distil MCMC Let 𝒬 = {q_ϕ} be a set of candidate samplers parametrised by ϕ. Our goal is to find an optimal q_ϕ to closely approximate the posterior distribution p of interest. We achieve this by approximating the fixed point update (<ref>). Because of the parametrisation, an additional projection step is required to keep q inside 𝒬, motivating the following update rule at the t-th iteration: ϕ_t ← arg min_ϕ D[q_T || q_ϕ], with q_T := 𝒦_T q_ϕ_t-1, where D[·||·] is some divergence measure between distributions whose choice is discussed in Section <ref>. Note that we extended (<ref>) to use the T-step transition kernel 𝒦_T. If 𝒬 is taken to be large enough so that 𝒦_T q_ϕ_t-1 ∈ 𝒬, then the projection update (<ref>) (with T=1) reduces to (<ref>). In practice, a gradient descent method can be used to solve (<ref>): ϕ_t ← ϕ_t-1 - η ∇_ϕ D[𝒦_T q_ϕ_t-1 || q_ϕ]|_ϕ = ϕ_t-1. It is often intractable to evaluate ∇_ϕ D[q_T || q_ϕ], so an approximation is needed. This can be done by approximating q_ϕ_t-1 with samples {z^k_0} drawn from it, and approximating q_T = 𝒦_T q_ϕ_t-1 with samples {z^k_T} drawn by following the Markov transition 𝒦(·|·) for T steps starting at {z^k_0}. These samples are then used to estimate the gradient and update ϕ_t by (<ref>) in order to “move” q_ϕ_t-1 towards q_T = 𝒦_T q_ϕ_t-1, which is closer to the target distribution according to (<ref>).

To summarise, our generic framework requires three main ingredients: (1) a parametric set 𝒬 = {q_ϕ} of sampler distributions (the student); (2) an MCMC dynamics with kernel 𝒦(z_t | z_t-1) (the teacher); (3) a divergence D[·||·] and update rule for ϕ (the feedback). By selecting these components tailored to a specific approximate inference task, the method provides a highly generic framework, applicable to both continuous and discrete distributions, and, as we shall see later, extensible to wild approximations without a tractable density.

Remark: MCMC can be viewed as a special case of our framework, in which the samplers 𝒬 are empirical distributions parametrised by the MCMC samples z_t, that is, q_ϕ_t(z) = δ(z - ϕ_t), ϕ_t = z_t, where the sample z_t is treated as the parameter ϕ_t in our framework, and is the only possible output of the sampler q_ϕ_t. Our framework allows more flexible parametrisations of samplers, which significantly saves running time and storage at test time.

Remark: The same type of projected fixed point updates as (<ref>) have been widely used in reinforcement learning (RL), including deep Q learning (DQN) <cit.>, and temporal difference learning with function approximations in general <cit.>. In this scenario the Q- or V-networks are iteratively adjusted by applying projected fixed point updates of the Bellman equation. This provides an opportunity to strengthen our method with the vast RL literature. For example, similar to the case of RL, the convergence of updates of the form (<ref>) is not theoretically guaranteed in general, especially when the parametric set 𝒬 is complex or non-convex. However, the practical stabilisation tricks developed in the DQN literature <cit.> can potentially be applied to our case, and theoretical analysis developed in RL <cit.> can be borrowed to establish convergence of our method under simple assumptions (e.g., when 𝒬 is linear or convex).
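The following sketch makes one iteration of the projected update (<ref>)–(<ref>) concrete. All callables are placeholders of our own: sampler(n) draws reparameterised samples z_0^k ∼ q_ϕ, kernel runs T steps of the chosen MCMC dynamics, and divergence is any sample-based estimate of D[q_T || q_ϕ], whose possible choices are discussed in the next subsection.

```python
import torch

def amortised_mcmc_step(sampler, kernel, divergence, optimiser, n=64, T=5):
    z0 = sampler(n)                        # student: z_0^k ~ q_phi
    zT = kernel(z0.detach(), T).detach()   # teacher: z_T^k ~ K_T q_phi (no gradient)
    loss = divergence(zT, sampler(n))      # sample-based estimate of D[q_T || q_phi]
    optimiser.zero_grad()
    loss.backward()                        # move q_phi towards q_T
    optimiser.step()
    return float(loss)
```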
§.§ The choice of update rule The choice of the update signal and ways to estimate it play a crucial role in our framework, and both should be carefully selected to ensure strong discrimination power as well as computational tractability with respect to the parametric family of q that we use. We discuss three update rules in the following.

KL divergence minimisation: A simple approach is to use the inclusive KL-divergence in (<ref>): D_KL[q_T || q_ϕ] = 𝔼_q_T[log q_T(z|x) - log q_ϕ(z|x)], and for the purpose of optimising ϕ, it only requires an MC estimate of -𝔼_q_T[log q_ϕ(z|x)]. This gives a simple algorithm that is a hybrid of MCMC and VI, and, to our knowledge, appears to be new. Similar algorithms include the so-called cross entropy method <cit.>, which replaces q_T with an importance weighted distribution, and methods for tuning proposal distributions for sequential Monte Carlo <cit.>.

Adversarially estimated divergences: Unfortunately, the inclusive KL divergence requires the density of the sampler q_ϕ to be evaluated, and cannot be used directly for wild approximations. Instead, we need to estimate the divergences based on samples {z_0^k} ∼ q_ϕ and {z_T^k} ∼ q_T. To address this, we borrow the idea of generative adversarial networks (GAN) <cit.> to construct a sample-based estimator of the selected divergence. As an example, consider the Jensen-Shannon divergence: D_JS[q_T||q] = 1/2 D_KL[q_T || q̃] + 1/2 D_KL[q || q̃], with q̃ = 1/2 q + 1/2 q_T. Since none of the three distributions has a tractable density, a discriminator d_ψ(z|x) is trained to provide a stochastic lower-bound D_JS[q_T||q] ≥ D_adv[{z_T^k} || {z_0^k}] = 1/K ∑_k=1^K log σ(d_ψ(z_T^k | x)) + 1/K ∑_k=1^K log(1 - σ(d_ψ(z_0^k | x))), with σ(·) the sigmoid function and z_0^k, z_T^k samples from q and q_T, respectively. Recent work <cit.> extends adversarial training to f-divergences, of which the two KL-divergences are special cases in that rich family. In such cases D_adv also corresponds to the variational lower-bound to the selected f-divergence, and the discriminator can be defined accordingly. Furthermore, the density ratio estimation literature <cit.> suggests that the discriminator d_ψ in (<ref>) can be used to estimate log(q_T/q_ϕ), i.e. the objective function for q_ϕ can be decoupled from that for the discriminator <cit.>.
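As a minimal illustration, the bound (<ref>) can be estimated in a few lines; d_psi below is an assumed user-defined network returning the logit d_ψ(z|x), with the conditioning on x suppressed for brevity.

```python
import torch
import torch.nn.functional as F

def adv_bound(d_psi, z_T, z_0):
    # 1/K sum log sigma(d(z_T)) + 1/K sum log(1 - sigma(d(z_0)));
    # note log(1 - sigma(d)) = log sigma(-d), which is numerically stable
    return F.logsigmoid(d_psi(z_T)).mean() + F.logsigmoid(-d_psi(z_0)).mean()
```

The discriminator is trained by ascending this quantity, while the sampler q_ϕ descends it (or an estimated log-ratio), with the teacher samples z_T detached from the computation graph.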
Energy matching: An alternative approach to matching q_ϕ with q_T is to match their first M moments. In particular, matching the mean and variance is equivalent to minimising D_KL[q_T || q_ϕ] with q_T fixed and q_ϕ a Gaussian distribution. However, it is difficult to know beforehand which moments are important for accurate approximations. Instead we propose energy matching, which matches the expectation of the log of the joint distribution p(x,z) ∝ p(z|x) under q_ϕ and q_T: D_em[q_T || q_ϕ] = |𝔼_q_T(z_T|x)[log p(x, z_T)] - 𝔼_q_ϕ(z|x)[log p(x, z)]|^β, β > 0. Although D_em[·||·] is not a valid divergence measure, since D_em[q_T || q_ϕ]=0 does not imply q_ϕ = q_T, this construction puts emphasis on the moments that are most important for an accurate approximation of the predictive distribution, which is still good for inference. Furthermore, as (<ref>) can be approximated with Monte Carlo methods which only require samples from q, this energy matching objective can be applied in wild variational inference settings, as it does not require density evaluation of q_ϕ or q_T. Another motivation from contrastive divergence <cit.> is also discussed in the appendix.

§.§ Approximate MLE with amortised MCMC Learning latent variable models has become an important topic with increasing interest. Our amortised inference method can be used to develop more flexible and accurate approximations for learning. Consider the variational auto-encoder (VAE) <cit.>, which approximates maximum likelihood training (MLE) by maximising the following variational lower-bound over the model parameters θ and variational parameters ϕ: max_θ,ϕ { 𝔼_q_ϕ[log p(x | z; θ)] - D_KL[q_ϕ(z|x)||p_0(z)] = log p(x|θ) - D_KL[q_ϕ(z|x) || p(z|x, θ)] }. Our method can be directly used to update ϕ in (<ref>). This can be done using the inclusive KL divergence if the density q_ϕ is tractable, and adversarially estimated divergences or energy matching for wild approximations when the density q_ϕ is intractable. Next we turn to the optimisation of the hyper-parameters θ, where we decouple their objective function from that of ϕ. Because D_KL[q || p] ≥ D_KL[q_T || p] when p is the stationary distribution of the MCMC, the following objective forms a tighter lower-bound to the marginal likelihood: log p(x|θ) - D_KL[q_T(z|x) || p(z|x, θ)] = 𝔼_q_T[log p(x | z, θ)] + const of θ. Empirical evidence <cit.> suggests that tighter lower-bounds often lead to better results. Monte Carlo estimation is applied to estimate the lower-bound (<ref>) with samples {z_T^k} ∼ q_T. The full method when using adversarially estimated divergences is presented in Algorithm <ref>, in which we train a discriminator d_ψ to estimate the selected divergence, and propagate learning signals back through the samples from q_ϕ. Note that the update steps for the discriminator and for q_ϕ could be executed for more iterations in order to achieve better approximations to the current posterior. This strategy turns the algorithm into a stochastic EM with MCMC methods approximating the E-step <cit.>. RKHS-based and energy-based moments <cit.> can also be applied as the discrepancy measure in step 2, but this is not explored here.
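A condensed sketch of the resulting training loop, in the spirit of Algorithm <ref>, is given below; the structure and names (including the decoder's log_prob method and the adv_bound helper sketched above) are our illustrative assumptions, not the exact implementation.

```python
import torch

def train_step(x, encoder, decoder, kernel, d_psi, opts, T=5):
    z0 = encoder(x)                          # z_0 ~ q_phi(z|x)
    zT = kernel(z0.detach(), x, T).detach()  # z_T ~ q_T, e.g. Langevin on log p(x, z)

    # 1) discriminator d_psi: ascend the adversarial bound
    d_loss = -adv_bound(d_psi, zT, z0.detach())
    opts['d'].zero_grad(); d_loss.backward(); opts['d'].step()

    # 2) sampler q_phi: descend the estimated divergence; only opts['q'] steps
    #    here, so gradients accumulated in d_psi are zeroed next iteration
    q_loss = adv_bound(d_psi, zT, encoder(x))
    opts['q'].zero_grad(); q_loss.backward(); opts['q'].step()

    # 3) model theta: maximise E_{q_T}[log p(x|z, theta)], the tighter bound
    theta_loss = -decoder.log_prob(x, zT).mean()
    opts['theta'].zero_grad(); theta_loss.backward(); opts['theta'].step()
    return float(theta_loss)
```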
§ RELATED WORK Since <cit.>, generative adversarial networks (GANs) have attracted considerable attention from both academia and industry. We should distinguish the problem scope of our work from that of GANs: amortised MCMC aims to match an (implicitly defined) q to the posterior distribution p, while GANs aim to match a q to observed samples; here we leverage adversarial training only as an inner loop for the divergence minimisation. Hence our framework could also benefit from the recent advances in this area <cit.>. The amortisation framework is in a similar spirit to <cit.> in that both approaches “distil” an MCMC sampler with a parametric model. Unlike the presented framework, <cit.> and <cit.> used a student model to approximate the predictive likelihood, and that student model is not used to initialise the MCMC transitions. We believe that initialising MCMC with the student model is important in amortising dynamics, as the teacher can “monitor” the student's progress and provide learning signals tailored to the student's needs. Moreover, since the initialisation is improved after each student update, the quality of the teacher's samples also improves. Another related, but different, approach <cit.> considered speeding up Hybrid Monte Carlo <cit.> by approximating the transition kernel using a Gaussian process. Amortised MCMC could benefit from this line of work if the MCMC updates are too expensive. Perhaps the most closely related approaches to our framework (in the sense of using q of flexible form) are operator variational inference (OPVI, <cit.>), amortised SVGD <cit.>, and adversarial variational Bayes (AVB <cit.>, also concurrently proposed by <cit.>). These works assumed the q_ϕ distribution to be represented by a neural network warping input noise. OPVI minimises the Stein discrepancy <cit.> between the exact and approximate posterior, where the optimal test function is determined by optimising a discriminator. Though theoretically appealing, this method still seems impractical for large scale problems. Amortised SVGD can be viewed as a special case of our framework, which specifically uses a deterministic Stein variational gradient dynamic <cit.> and an l_2-norm as the divergence measure. AVB estimates the KL-divergence D_KL[q||p_0] in the variational lower-bound (<ref>) with GAN and density ratio estimation, making it closely related to the adversarial auto-encoder <cit.>. However, we conjecture that the main learning signal of AVB comes from the “reconstruction error” term 𝔼_q[log p(x|z, θ)], and the regularisation power strongly depends on the adversarial estimation of D_KL[q||p_0], which can be weak as the discriminator is non-optimal in almost all cases.

§ EXPERIMENTS We evaluate amortised MCMC on both toy and real-world examples. For simplicity we refer to the proposed framework as AMC. Further experimental settings are presented in the appendix. Code will be released at <https://github.com/FirstAuthor/AmortisedMCMC>.

§.§ Synthetic example: fitting a mixture of Gaussians We first consider fitting a Gaussian mixture p(z) = 1/2 𝒩(z; -3, 1) + 1/2 𝒩(z; 3, 1) with the variational program proposed by <cit.>: ϵ_1, ϵ_2, ϵ_3 ∼ 𝒩(ϵ; 0, 1), z = 1_{ϵ_3 ≥ 0} ReLU(w_1 ϵ_1 + b_1) - 1_{ϵ_3 < 0} ReLU(w_2 ϵ_2 + b_2); a sketch of this sampler is given below. We further tested a small multi-layer perceptron (MLP) model of size [3, 20, 20, 1] which warps ϵ to generate the samples. The Jensen-Shannon divergence is adversarially estimated with an MLP of the same architecture. The MCMC sampler is Langevin dynamics with rejection (MALA <cit.>), and in training 10 parallel chains are used. The fitted approximations are visualised in Figure <ref>. Both models cover both modes; however, the variational program performs better in terms of estimating the variance of each Gaussian component. Thus an intelligent design of the q network can achieve better performance with far fewer parameters.
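For concreteness, the variational program above can be rendered as follows; w1, b1, w2, b2 are the learnable scalar parameters, and the PyTorch rendering is our own sketch.

```python
import torch

def variational_program(n, w1, b1, w2, b2):
    eps = torch.randn(n, 3)                    # eps_1, eps_2, eps_3 ~ N(0, 1)
    pos = torch.relu(w1 * eps[:, 0] + b1)      # branch taken when eps_3 >= 0
    neg = torch.relu(w2 * eps[:, 1] + b2)      # branch taken when eps_3 < 0
    return torch.where(eps[:, 2] >= 0, pos, -neg)
```

Gradients flow to the parameters through whichever branch is active, so the two ReLU units are free to specialise to the two mixture modes.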
We empirically investigate the effect of the chain length T on the approximation quality using the MLP approximation. Time steps T=1, 5, 10 with step-sizes η = 0.1, 0.02, 0.01 are tested (each repeated 10 times), where, by keeping Tη = 1.0 constant, the particles move approximately equal distances during the MCMC transitions. In Figure <ref> we show the Kernel Stein Discrepancy (KSD <cit.>) as a metric of approximation error. With a small chain length the student quickly learns the posterior shape, but running more MCMC transitions results in better approximation accuracy. A potential way to balance this time-accuracy trade-off is to use short Markov chains initially for AMC, but to lengthen them as AMC converges. This strategy has been widely applied to contrastive-divergence-like methods <cit.>. We leave the exploration of this idea to future work.

§.§ Bayesian neural network classification Next we apply amortised MCMC to classification using Bayesian neural networks. Here the random variable z denotes the neural network weights, which can have thousands of dimensions. In such high dimensions a large number of MCMC samples is typically required for good approximation. Instead we consider AMC as an alternative, which allows us to use much fewer samples during training as it decouples the samples from evaluation, leading to massive savings of time and storage. To validate this, we take 7 binary classification datasets from the UCI repository <cit.> and train a 50-unit Bayesian neural network. For comparison we test the mean-field Gaussian approximation trained by VI with 10 samples, and MALA with 100 particles simulated and stored. The approximate posterior for AMC is constructed by first taking a mean-field Gaussian approximation, then normalising the K=10 samples by their empirical mean and variance. This wild approximation is trained with the energy matching objective with β = 2, where we also use MALA as the dynamics with T=1.

The results are reported in Table <ref>. For test log-likelihood MALA is generally better than VI, as expected. More importantly, AMC performs similarly to, and on some datasets even better than, MALA. VI returns the best test error metrics on three of the datasets, with AMC and MALA on top for the rest. In order to demonstrate the speed-accuracy trade-off, we visualise in Figure <ref> the (negative) test-LL and error as a function of CPU time. In this case we further test MALA with 10 samples, which is much faster than the 100-sample version, but pays the price of slightly worse results. However, AMC achieves better accuracy in a smaller amount of time, and in general out-performs the 10-sample MALA. These observations show that AMC with energy matching can be used to train Bayesian neural networks and achieves a balance between computational complexity and performance.

§.§ Deep generative models The final experiment considers training deep generative models on the dynamically binarised MNIST dataset, containing 60,000 training datapoints and 10,000 test images <cit.>. As a benchmark, a convolutional VAE with dim(z) = 32 latent variables is tested. The Gaussian encoder consists of a convolutional network with 5 × 5 filters, stride 2 and [16, 32, 32] feature maps, followed by a fully connected network of size [500, 32 × 2]. The generative model has a symmetric architecture but with strided convolution replaced by deconvolution layers. This generative model architecture is fixed for all the tests. We also test AMC with the inclusive KL divergence on Gaussian encoders, and compare to the naive approach which trains the encoder by maximising the variational lower-bound (MCMC-VI).
We construct two non-Gaussian encoders for AVB and AMC (see appendix). Both encoders start from a CNN followed by a reshaping operation. The first model (CNN-G) splits the CNN's output vector into [h(x), μ(x), logσ(x)], samples a Gaussian noise variable ϵ ∼ 𝒩(ϵ; μ(x), diag[σ^2(x)]), and feeds [h(x), ϵ] to an MLP for generating z. The second encoder (CNN-B) simply applies multiplicative Bernoulli noise with dropout rate 0.5 to the CNN output vector, and uses the same MLP architecture as CNN-G. The discriminator consists of a CNN acting on the input image x only, and an MLP acting on both z and the CNN output. Batch normalisation is applied to the non-Gaussian encoders and the discriminator. The learning rate of Adam <cit.> is tuned on the last 5000 training images. Rejection steps are not used for the Langevin dynamics as we found they slow down the learning.

Test Log-likelihood Results: We report the test log-likelihood (LL) results in Table <ref>. We first follow <cit.> to estimate the test log-likelihood with importance sampling using 5000 samples, and for the non-Gaussian encoders we train another Gaussian encoder with VI as the proposal distribution. VAE appears to be the best method by this metric, and the best AMC model is about 2 nats behind. However, the effective sample size results (IW-ESS) show that the estimates for AMC and AVB are unreliable. Indeed, approximate MLE using the variational lower-bound biases the generative network towards a model whose exact posterior is close to the inference network q <cit.>. As the MCMC-guided approximate MLE trains the generative model with q_T (which can be highly non-Gaussian), the VI-fitted Gaussian proposal employed in the IWAE can under-estimate the true test log-likelihood by a significant amount.

To verify this conjecture, we estimate the test-LL again using Hamiltonian annealed importance sampling (HAIS) as suggested by <cit.>. We randomly select 1,000 test images for evaluation, and run HAIS with 10,000 intermediate steps and 100 parallel chains. The estimation results demonstrate that IW-LL significantly under-estimates the test LL for models trained by wild approximations. By this metric the CNN-G model with T=50 performs the best, and is significantly better than the benchmark VAE. To demonstrate the improvement brought by the wild approximation, we further train a generative model with “persistent MCMC”, by initialising the Markov chain with previous samples and ignoring the posterior changes. The HAIS-LL results show that our best model is about 0.6 nats better, which is a significant improvement on MNIST. Although test log-likelihood is an important measure of model quality, <cit.> has shown that this metric is often largely orthogonal to one that tests visual fidelity when the data is high dimensional. We visualise the generated images in Figure <ref>, and we see that AMC-trained models produce samples of similar quality to VAE samples.

Missing Data Imputation: We also consider missing data imputation with pixels missing from contiguous sections of the image, i.e. not at random. We follow <cit.> in using an approximate Gibbs sampling procedure for imputation. With observed and missing pixels denoted as x_o and x_m, the approximate sampling procedure iteratively applies the following transition steps: (1) sample z ∼ q(z|x_o, x_m) given the imputation x_m, and (2) sample x^* ∼ p(x^*|z, θ) and set x_m ← x_m^*. In other words, the encoder q(z|x) is used to approximately generate samples from the exact posterior.
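A minimal sketch of this procedure is given below; encoder and decoder are assumed stochastic passes through q(z|x) and p(x|z, θ), and mask marks the observed pixels with ones.

```python
import torch

def gibbs_impute(x_init, mask, encoder, decoder, steps=100):
    x = x_init.clone()
    for _ in range(steps):
        z = encoder(x)                        # (1) z ~ q(z | x_o, x_m)
        x_star = decoder(z)                   # (2) x* ~ p(x* | z, theta)
        x = mask * x + (1.0 - mask) * x_star  # keep x_o, replace x_m by x*_m
    return x
```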
As ambiguity exists, the exact conditional distribution p(x_m|x_o, θ) is expected to be multi-modal. Figure <ref> visualises the imputed images, where, starting from the third column, the remaining ones show samples for every 2 Gibbs steps. Clearly the approximate Gibbs sampling for VAE is trapped in local modes due to the uni-modal approximation q to the exact posterior. On the other hand, models trained with AMC return diverse imputations that explore the space of compatible images quickly; for instance, CNN-B returns imputations for digit “9” with answers 2, 0, 9, 3 and 8. To quantify this, we simulate the approximate Gibbs sampling for T=100 steps on the first 100 test images (10 for each class), find the nearest neighbour (in l_1-norm) of the imputations in the training dataset, and compute the entropy of the label distribution over these training images. The entropy values and the average l_1-distance to the nearest neighbours (divided by the number of pixels dim(x) = 784) are presented in Table <ref>. These metrics indicate that AMC-trained models generate more diverse imputations compared to VAE, yet the imputed images are about the same distance from the training data.

§ CONCLUSION AND FUTURE WORK We have proposed an MCMC amortisation algorithm which deploys a student-teacher framework to learn the approximate posterior. By using adversarially estimated divergences and energy matching, the algorithm allows approximations of arbitrary form to be learned. Experiments on Bayesian neural network classification showed that the amortisation method can be used as an alternative to MCMC when computational resources are limited. Application to training deep generative networks returned models that could generate high quality images, and the learned approximation captured multi-modality in generation. Future work should cover both theoretical and practical directions. Convergence of the amortisation algorithm will be studied. Flexible approximations will be designed to capture multi-modality. Efficient MCMC samplers should be applied to speed up the fitting process. Practical algorithms for approximating discrete distributions will be further developed.

§ EXAMPLES OF WILD APPROXIMATIONS We provide several examples of wild approximations in the following. (Deterministic transform) Sampling z ∼ q(z|x) is defined by first sampling some random noise ϵ ∼ p(ϵ), then transforming it with a deterministic mapping z = f(ϵ, x), which might be defined by a (deep) neural network. These distributions are also called variational programs in <cit.>, or implicit models in the generative model context <cit.>. An important note here is that f might not be invertible, which differs from the invertible transform techniques discussed in <cit.>. (Truncated Markov chain) Here the samples z ∼ q(z|x) are defined by finite-step transitions of a Markov chain. Examples include Gibbs sampling in contrastive divergence <cit.>, or finite-step simulation of an SG-MCMC algorithm such as SGLD <cit.>. It has been shown in <cit.> that the trajectory of SGD can be viewed as a variational approximation to the exact posterior. In these examples the variational parameters are the parameters of the transition kernel, e.g. step-sizes and/or preconditioning matrices. Related work includes <cit.>, which integrates MCMC into the VI objective. These methods are more expensive as they require evaluations of ∇_z log p(x, z|θ), but they can be much cheaper than sampling from the exact posterior.
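For instance, a T-step SGLD chain defines such a sampler. In the minimal sketch below, the step size η plays the role of the variational parameter and grad_log_joint is an assumed (possibly stochastic) estimate of ∇_z log p(x, z|θ); the density of z_T is intractable, but sampling remains cheap.

```python
import numpy as np

def sgld_sampler(z0, grad_log_joint, eta, T, rng):
    z = z0
    for _ in range(T):
        noise = np.sqrt(eta) * rng.standard_normal(z.shape)
        z = z + 0.5 * eta * grad_log_joint(z) + noise  # one SGLD step
    return z  # z_T ~ q(z|x), a wild approximation to the posterior
```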
(Stochastic regularisation techniques (SRT)) SRTs for deep neural network training, e.g. dropout <cit.> and related variants <cit.>, have been re-interpreted as a variational inference method for network weights z = {W} <cit.>. The variational parameters ϕ = {M} are the weight matrices of a Bayesian neural network without SRT. The output is computed as h = σ((ϵ ⊙ x) M), with σ(·) the activation function and ϵ some randomness. This is equivalent to setting W = diag(ϵ) M, making SRT a special case of example <ref>. Fast evaluation of q(z|x) during training is intractable, as different noise values ϵ are sampled for different inputs in a mini-batch. This means multiple sets of weights are processed if we were to evaluate the density, which has prohibitive cost, especially when the network is wide and deep.

§ APPROXIMATE MLE WITH MCMC: MATHEMATICAL DETAILS In the main text we stated that D_KL[q||p] ≥ D_KL[q_T||p] if p is the stationary distribution of the kernel 𝒦. This is a direct result of the following lemma, and we provide a proof from <cit.> for completeness. <cit.> Let q and r be two distributions for z_0. Let q_t and r_t be the corresponding distributions of the state z_t at time t, induced by the transition kernel 𝒦. Then D_KL[q_t||r_t] ≥ D_KL[q_t+1||r_t+1] for all t ≥ 0. Indeed, D_KL[q_t||r_t] = 𝔼_q_t(z_t)[log (q_t(z_t)/r_t(z_t))] = 𝔼_q_t(z_t)𝒦(z_t+1|z_t)[log (q_t(z_t)𝒦(z_t+1|z_t) / (r_t(z_t)𝒦(z_t+1|z_t)))] = 𝔼_q_t+1(z_t+1)q_t+1(z_t|z_t+1)[log (q_t+1(z_t+1)q_t+1(z_t|z_t+1) / (r_t+1(z_t+1)r_t+1(z_t|z_t+1)))] = D_KL[q_t+1||r_t+1] + 𝔼_q_t+1[D_KL[q_t+1(z_t|z_t+1) || r_t+1(z_t|z_t+1)]] ≥ D_KL[q_t+1||r_t+1], where the second equality uses the factorisation of the joint distribution of (z_t, z_t+1) in the two directions.

§ ENERGY MATCHING AND CONTRASTIVE DIVERGENCE The energy matching method in Section <ref> can also be roughly motivated by contrastive divergence <cit.> as follows. First define Δ_CD[q_ϕ || p] := D_KL[q_ϕ || p] - D_KL[q_T || p], where q_T = 𝒦_T q_ϕ. From Lemma <ref>, Δ_CD[q_ϕ || p] ≥ 0, and it can be used as a minimisation objective to fit an approximate posterior q. Expanding this contrastive divergence equation, we have Δ_CD[q_ϕ || p] = 𝔼_q_T(z_T|x)[log p(x, z_T)] - 𝔼_q_ϕ(z|x)[log p(x, z)] + R, where R := 𝔼_q_ϕ(z|x)𝒦_T(z_T|z)[log (q_T(z | z_T, x)/𝒦_T(z_T | z))], and q_T(z | z_T, x) = q_ϕ(z|x) 𝒦_T(z_T|z) / q_T(z_T|x) is the “posterior” of z given the sample z_T after T-step MCMC. We ignore the residual term R when T is small, as then z and z_T are highly correlated, which motivates the energy matching objective (<ref>). For large T one can also use density ratio estimation methods to estimate this third term, but we leave the investigation to future work.

§ UNCORRELATED VERSUS CORRELATED SIMULATIONS In our algorithm, we generate {z_T^k} by simulating the Markov transition for T steps starting from {z_0^k} ∼ q, and use these two sets of samples {z_0^k} and {z_T^k} to estimate the divergence D[q_T || q]. However, note that {z_0^k} and {z_T^k} are correlated, and this may introduce bias in the divergence estimation. A way to eliminate this correlation is to simulate another {z_0^k} independently and use it to replace the original sample. In practice, we find that correlated and uncorrelated samples exhibit different behaviour during training. Consider the extreme case K=1 with small T as an example. Using correlated samples would cause the teacher's and the student's samples to remain in the same mode with high probability, thus easily confusing the discriminator, and the student (generator) improves fast.
On the other hand, if z_T is simulated from a Markov chain independent of z_0, then these samples might be far away from each other (especially when q is forced to be multi-modal); hence the discriminator can easily get saturated, providing no learning signal to the student. The above problem could potentially be solved using advanced techniques, e.g. Wasserstein GAN <cit.>, which proposed minimising (an adversarial estimate of) the Wasserstein-1 distance. In that case the gradient of q won't saturate even when the two sets of samples are separated. But minimising the Wasserstein distance would fit the q distribution to the posterior in an “optimal transport” way, which presumably prefers moving the q samples to their nearest modes in the exact posterior.

§ EXPERIMENTAL DETAILS §.§ Bayesian neural networks: settings We use a one-hidden-layer neural network with 50 hidden units and ReLU activation. The mini-batch size is 32 and the learning rate is 0.001. The step-size of MALA is adaptively adjusted to achieve an acceptance rate of 0.99. The experiments are repeated on 20 random splits of the datasets. We time all the tests on a desktop machine with an Intel(R) Core(TM) i7-4930K CPU @ 3.40GHz.

§.§ Deep generative models: settings We construct two non-Gaussian encoders for the tests of AVB and AMC. Both encoders start from a CNN with 3 × 3 filters, stride 2 and [32, 64, 128, 256] feature maps, followed by a reshaping operation. The first model (CNN-G) splits the output vector of the CNN into [h(x), μ(x), logσ(x)], samples a Gaussian noise variable of 32 dimensions ϵ ∼ 𝒩(ϵ; μ(x), diag[σ^2(x)]), and feeds [h(x), ϵ] to an MLP which has hidden layer sizes [500, 500, 32]. The second encoder (CNN-B) simply applies multiplicative Bernoulli noise with dropout rate 0.5 to the CNN output vector, and uses the same MLP architecture as CNN-G. A discriminator is trained for the AVB and AMC methods, with a CNN (of the same architecture as the encoders) acting on the input image x only, and an MLP with layer sizes [32+1024, 500, 500, 1]. All networks use leaky ReLU activation functions with slope parameter 0.2, except for the output of the deconvolution network, which uses a sigmoid activation. Batch normalisation is applied to the non-Gaussian encoders and the discriminator. The Adam optimiser <cit.> is used with learning rates tuned on the last 5000 training images. Rejection steps are not used in the Langevin dynamics as we found this slows down the learning.

§ MORE VISUALISATION RESULTS
Synthesizing Training Data for Object Detection in Indoor Scenes Georgios Georgakis1, Arsalan Mousavian1, Alexander C. Berg2, Jana Košecká1 1George Mason University {ggeorgak,amousavi,kosecka}@gmu.edu 2University of North Carolina at Chapel Hill aberg@cs.unc.edu First version: January 17, 2017. This version: February 29, 2020.

Detection of objects in cluttered indoor environments is one of the key enabling functionalities for service robots. The best performing object detection approaches in computer vision exploit deep Convolutional Neural Networks (CNN) to simultaneously detect and categorize the objects of interest in cluttered scenes. Training of such models typically requires large amounts of annotated training data, which is time consuming and costly to obtain. In this work we explore the ability of using synthetically generated composite images for training state-of-the-art object detectors, especially for object instance detection. We superimpose 2D images of textured object models into images of real environments at a variety of locations and scales. Our experiments evaluate different superimposition strategies, ranging from purely image-based blending all the way to depth- and semantics-informed positioning of the object models into real scenes. We demonstrate the effectiveness of these object detector training strategies on two publicly available datasets, the GMU-Kitchens <cit.> and the Washington RGB-D Scenes v2 <cit.>. As one observation, augmenting some hand-labeled training data with synthetic examples carefully composed onto scenes yields object detectors with comparable performance to using much more hand-labeled data. Broadly, this work charts new opportunities for training detectors for new objects by exploiting existing object model repositories in either a purely automatic fashion or with only a very small number of human-annotated examples.

§ INTRODUCTION The capability of detecting and searching for common household objects in indoor environments is the key component of the `fetch-and-delivery' task, commonly considered one of the main functionalities of service robots. Existing approaches for object detection are dominated by machine learning techniques focusing on learning suitable representations of object instances. This is especially the case when the objects of interest are to be localized in environments with large amounts of clutter, variations in lighting, and a range of poses. While the problem of detecting object instances in simpler table top settings has been tackled previously using local features, these methods are often not effective in the presence of large amounts of clutter or when the scale of the objects is small. Current leading object detectors exploit convolutional neural networks (CNNs) and are either trained end-to-end <cit.> for sliding-window detection or follow the region proposal approach, which is jointly fine-tuned for accurate detection and classification <cit.> <cit.>. In both approaches, the training and evaluation of object detectors requires labeling a large number of training images, with objects in various backgrounds and poses annotated with bounding boxes or even segmented from the background. Often in robotics, object detection is a prerequisite for tasks such as pose estimation, grasping, and manipulation.
Notable efforts have been made to collect 3D models for object instances with and without textures, assuming that the objects of interest are in proximity, typically on a table top. Existing approaches to these challenges often use either 3D CAD models <cit.> or texture-mapped models of object instances obtained using traditional reconstruction pipelines <cit.>. In this work we explore the feasibility of using such existing datasets of standalone objects on uniform backgrounds for training object detectors <cit.> that can be applied in real-world cluttered scenes. We create “synthetic” training images by superimposing the objects into images of real scenes. We investigate the effects of different superimposition strategies, ranging from purely image-based blending all the way to using depth and semantics to inform the positioning of the objects. Toward this end we exploit the geometry and the semantic segmentation of a scene, obtained using the state-of-the-art method of <cit.>, to restrict the locations and size of the superimposed object model. We demonstrate that, in the context of robotics applications in indoor environments, these positioning strategies improve the final performance of the detector. This is in contrast with previous approaches <cit.>, which used large synthetic datasets with mostly randomized placement. In summary, our contributions are the following: * We propose an automated approach to generate synthetic training data for the task of object detection, which takes into consideration the geometry and semantic information of the scene. * Based on our results and observations, we offer insights regarding the superimposition design choices that could potentially affect the way training sets for object detection are generated in the future. * We provide an extensive evaluation of current state-of-the-art object detectors and demonstrate their behavior under different training regimes.

§ RELATED WORK We first briefly review related works in object detection to motivate our choice of detectors, then discuss previous attempts to use synthetic data as well as different datasets and evaluation methodologies. Object Detection: Traditional methods for object detection in cluttered scenes follow the sliding window based pipeline, with hand-designed flat feature representations (e.g. HOG) along with discriminative classifiers, such as linear or latent SVMs. Examples include DPMs <cit.>, which exploit efficient methods for feature computation and classifier evaluation. These models have been used successfully in robotics for detection in the table top setting <cit.>. Other effective strategies for object detection used local features and correspondences between a model reference image and the scene. These approaches <cit.> worked well with textured household objects, taking advantage of the discriminative nature of the local descriptors. In an attempt to reduce the search space of the sliding window techniques, alternative approaches concentrated on generating category-independent object proposals <cit.> using bottom-up segmentation techniques, followed by classification using traditional features. The flat engineered features have recently been superseded by approaches based on Convolutional Neural Networks (CNNs), which learn features with an increased amount of invariance by repeated layering of convolutional and pooling layers.
While these methods were initially introduced for the image classification task <cit.>, extensions to object detection include <cit.> <cit.>. The R-CNN approach <cit.> relied on finding object proposals and extracting features from each crop using a pre-trained network, making the proposal-generating module independent from the classification module. Recent state-of-the-art object detectors such as Faster R-CNN <cit.> and SSD <cit.> are trained jointly in a so-called end-to-end fashion to both find object proposals and also classify them.

Synthetic Data: There are several previous attempts to use synthetic data for training CNNs. The work of <cit.> used existing 3D CAD models, both with and without texture, to generate 2D images by varying the projections and orientations of the objects. The approach was evaluated on 20 categories of the PASCAL VOC2007 dataset. That work used earlier CNN models <cit.> where the proposal generation module was independent from fine-tuning the CNN classifier, making the dependence on the context and background less prominent than in current models. In the work of <cit.>, the authors used rendered models and their 2D projections on varying backgrounds to train a deep CNN for pose estimation. In these representative works, objects typically appeared on simpler backgrounds and were combined with object detection strategies that rely on a proposal generation stage. Our work differs in that we perform informed compositing on the background scenes, instead of placing object-centric synthetic images at random locations. This allows us to train the CNN object detectors to produce higher quality object proposals, rather than relying on unsupervised bottom-up techniques. In <cit.>, a Grand Theft Auto video game engine was used to collect scenes with realistic appearance and their associated pixel-level category labels for the problem of semantic segmentation. The authors showed that using these high-realism renderings can significantly reduce the annotation effort. They used a combination of synthetic data and real images to train models for semantic segmentation. Perhaps the closest work to ours is <cit.>, which also generates a synthetic training set by taking advantage of scene segmentation to create synthetic training examples; however, the task is that of text localization instead of object detection.

§ APPROACH §.§ Synthetic Set Generation CNN-based object detectors require large amounts of annotated data for training, due to the large number of parameters that need to be learned. For object instance detection, the training data should also cover the variations in the object's viewpoint and other nuisance parameters such as lighting, occlusion and clutter. Manually collecting and annotating scenes with the aforementioned properties is time-consuming and costly. Another factor is the sometimes low generalization capability of trained models across different environments and backgrounds. The work of <cit.> addressed this problem by building a map of an environment including objects of interest and using Amazon Mechanical Turk for annotation and subsequent training of object detectors in each particular environment. The authors demonstrated this approach on commonly encountered categories (≈ 20) of household objects. This approach uses human labeling effort for each new scene and object combination, potentially limiting scalability.
Our approach focuses on object instances and their superimposition into real scenes at different positions and scales, while reducing the difference in lighting conditions and exploiting proper context. To this end, we use cropped images from existing object recognition datasets such as BigBird <cit.> rather than 3D CAD models <cit.>. This allows us to keep the real colors and textures of our training instances, as opposed to rendering them with randomly chosen or artificial samples. The BigBird dataset captures 120 azimuth angles from 5 different elevations, for a total of 600 views per object. It contains 125 object instances with a variety of textures and shapes. In our experiments we use the 11 object instances that can be found in the GMU-Kitchens dataset. The process of generating a composite image with superimposed objects can be summarized in the following steps. First, we choose a background scene and estimate the positions of any support surfaces. This is further augmented by a semantic segmentation of the scene, used to verify the support surfaces found by plane fitting. The objects of interest are placed on support surfaces, ensuring their location in areas with appropriate context and backgrounds. The next step is to randomly choose an object and its pose, followed by choosing a position in the image. The object scale is then determined by the depth value at the chosen position, and finally the object is blended into the scene. An example of this process is shown in Figure <ref>. We next describe these steps in more detail.

*Selective Positioning. In natural images, small hand-held objects are usually found on supporting surfaces such as counters, tables, and desks. These planar surfaces are extracted using the method described in <cit.>, which applies RANSAC to fit planes to regions after an initial over-segmentation of the image. Given the extracted planar surfaces' orientations, we select planes with large extent that are aligned with the gravity direction as candidate support surfaces. To ensure that the candidate support surfaces belong to a desired semantic category, a support surface is considered valid if it overlaps in the image with the semantic categories of counters, tables and desks obtained by semantic segmentation of the RGB-D image.

*Semantic Segmentation. To determine the semantic categories in the scene, we use the semantic segmentation CNN of <cit.>, which is pre-trained on the MS-COCO and PASCAL-VOC datasets and fine-tuned on the NYU Depth v2 dataset for 40 semantic categories. The model is jointly trained for semantic segmentation and depth estimation, which allows the scene geometry to be exploited for better discrimination between some of the categories. We do not rely solely on the semantic segmentation for object positioning, since it rarely covers the entire support surface, as can be seen in Figure <ref>(c). The combination of support surface detection and semantic segmentation produces more accurate regions for placing the objects. The regions that belong to valid support surfaces are then randomly chosen for object positioning.
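The support-surface validation just described can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the plane masks and normals are assumed to come precomputed from the RANSAC-based method above, `sem_map` is an assumed per-pixel label image, and all thresholds and category names are illustrative choices.

```python
import numpy as np

SUPPORT_CATEGORIES = {"counter", "table", "desk"}  # assumed NYUv2-style names

def valid_support_surfaces(plane_masks, plane_normals, sem_label_ids, sem_map,
                           gravity=np.array([0.0, -1.0, 0.0]),
                           min_area=5000, angle_tol_deg=10.0, min_overlap=0.2):
    """Keep planes that are large, (anti-)parallel to gravity, and overlap a
    desired semantic category in the image."""
    cos_tol = np.cos(np.deg2rad(angle_tol_deg))
    valid = []
    for mask, normal in zip(plane_masks, plane_normals):
        if mask.sum() < min_area:                    # require large extent
            continue
        if abs(np.dot(normal, gravity)) < cos_tol:   # horizontal surface check
            continue
        wanted = [sem_label_ids[c] for c in SUPPORT_CATEGORIES]
        overlap = np.isin(sem_map[mask], wanted).mean()  # semantic verification
        if overlap >= min_overlap:
            valid.append(mask)
    return valid
```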
Finally, occlusion levels are regulated by allowing a maximum of 40% overlap between positioned object instances in the image.

*Selective Scaling and Blending. The size of the object is determined by the depth of the selected position, scaling the width w and height h accordingly:

ŵ = w z̅ / z,  ĥ = h z̅ / z,

where z̅ is the median depth of the object's training images, z is the depth at the selected position in the background image, and ŵ, ĥ are the scaled width and height, respectively. The last step in our process is to blend the object with the background image in order to mitigate the effects of changes in illumination and contrast. We use the implementation of Fast Seamless Cloning <cit.> with a minor modification: instead of blending a rectangular patch of the object, we provide a masked object to the fast seamless cloning algorithm, which produces a cleaner result. Figure <ref> illustrates examples of scenes with multiple blended objects.
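The scaling equation and the blending step can be combined in a short sketch. Note the hedges: OpenCV's `seamlessClone` (Poisson blending) is used here only as an illustrative stand-in for the Fast Seamless Cloning implementation cited above, and the function assumes the scaled object fits within the background around the chosen position.

```python
import cv2
import numpy as np

def place_object(bg, bg_depth, obj_rgb, obj_mask, z_bar, x, y):
    """Scale the object crop by the depth ratio z_bar / z (equation above)
    and blend it into the background centered at (x, y)."""
    z = float(bg_depth[y, x])
    if z <= 0:                                   # invalid depth reading
        return bg
    s = z_bar / z
    w_hat = max(1, int(round(obj_rgb.shape[1] * s)))
    h_hat = max(1, int(round(obj_rgb.shape[0] * s)))
    obj = cv2.resize(obj_rgb, (w_hat, h_hat))
    mask = cv2.resize(obj_mask, (w_hat, h_hat), interpolation=cv2.INTER_NEAREST)
    # Poisson blending of the masked (not rectangular) object
    return cv2.seamlessClone(obj, bg, mask, (x, y), cv2.NORMAL_CLONE)
```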
§.§ Object Detectors

For our experiments we employ two state-of-the-art object detectors, Faster R-CNN <cit.> and the Single-Shot Multibox Detector (SSD) <cit.>. Both Faster R-CNN and SSD are trained end-to-end, but their architectures are different. Faster R-CNN consists of two modules. The first module is the Region Proposal Network (RPN), a fully convolutional network that outputs object proposals together with an objectness score for each proposal, reflecting the probability of having an object inside the region. The second, detection network module resizes the feature maps corresponding to each object proposal to a fixed size, classifies each proposal into an object category, and refines the location, height and width of the associated bounding box. The advantage of Faster R-CNN is the modularity of the model: one module finds object proposals and the second classifies each of them. The downside of Faster R-CNN is that it uses the same feature map to find objects of different sizes, which causes problems for small objects. SSD tackles this problem by creating feature maps of different resolutions. Each cell of the coarser feature maps captures a larger area of the image for detecting large objects, whereas the finer feature maps detect smaller objects. These multiple feature maps allow higher accuracy for a given input resolution, providing SSD's speed advantage at similar accuracy. Both detectors have difficulties with objects of small pixel size, making input resolution an important factor.

§ EXPERIMENTS

In order to evaluate the object detectors trained on composited images, we conducted three sets of experiments on two publicly available datasets, the GMU-Kitchen Scenes <cit.> and the Washington RGB-D Scenes v2 dataset <cit.>. In the first experiment, training images are generated with different compositing strategies to determine the effect of positioning, scaling, and blending on performance. The object detectors are trained on composited images and evaluated on real scenes. In the second set of experiments we examine the effect of varying the proportion of synthetic/composited and real training images. Finally, we use synthetic data for both training and testing in order to show the reduction of over-fitting to superimposition artifacts during training when the proposed data generation approach is employed. The code for the synthesization process, along with the background scenes and synthetic data, is available at: <http://cs.gmu.edu/ robot/synthesizing.html>.

§.§ Datasets and Backgrounds

For our experiments, we utilized the following datasets.

GMU Kitchen Scenes dataset <cit.>. The GMU-Kitchens dataset includes 9 RGB-D videos of kitchen scenes with 11 object instances from the BigBird dataset. As background images we used all 71 raw kitchen videos from the NYU Depth Dataset V2 <cit.>, with a total of around 7000 frames. For each image we generate four synthetic images that vary in the objects added to the scene, their pose, scale, and location. The object identities and poses are randomly sampled from the BigBird dataset, from 360 examples per object (3 elevations and 120 azimuths). Images where no support surfaces were detected are removed from the training set, leaving an effective set of around 5000 background images. Cropped object images of the 11 BigBird instances contained in GMU-Kitchens were used for superimposition. We refine the provided object masks with GraphCut <cit.> in order to get cleaner object outlines. This helps with the jagged and incomplete boundaries of certain objects (e.g., the coke bottle), which are due to imperfect masks obtained from the depth channel of the RGB-D data caused by reflective materials. Figure <ref> illustrates a comparison between masks from BigBird and masks refined with the GraphCut algorithm.
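As an illustrative stand-in for the GraphCut-based refinement above, the following sketch uses OpenCV's GrabCut, a related graph-cut segmentation; the seeding strategy (probable labels from the depth mask, sure foreground from an eroded mask) and the kernel size are our assumptions, not details from the paper.

```python
import cv2
import numpy as np

def refine_mask(rgb, depth_mask, iters=5):
    """Refine a noisy depth-derived object mask with a graph-cut segmentation."""
    gc_mask = np.where(depth_mask > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    # Seed the eroded interior of the depth mask as definite foreground.
    kernel = np.ones((5, 5), np.uint8)
    sure_fg = cv2.erode((depth_mask > 0).astype(np.uint8), kernel, iterations=3)
    gc_mask[sure_fg > 0] = cv2.GC_FGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(rgb, gc_mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    fg = (gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD)
    return (fg * 255).astype(np.uint8)
```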
For comparison with the rest of the experiments, we also report the performance of the object detectors trained and tested on real data (row 1 of Table <ref>). The train-test split follows the division of the dataset into three different folds; in each fold six scenes are used for training and three for testing, as in <cit.>.

Washington RGB-D Scenes v2 dataset (WRGB-D) <cit.>. The WRGB-D dataset includes 14 RGB-D videos of indoor table-top scenes containing instances of objects from five object categories: bowl, cap, cereal box, coffee mug, and soda can. The synthetic training data is generated using the provided background scenes (around 3000 images) and cropped object images of the present object categories from the WRGB-D v1 dataset <cit.>. For each background image we generate five synthetic images, for a total of around 4600 images. As mentioned earlier, images without a support surface are discarded. The images belonging to seven of these scenes are used for training and the rest for testing. Line 1 in Table <ref> shows the performance of the two object detectors with this split of the real training data.

§.§ Synthetic to Real

In this experiment we use synthetic training sets generated with different combinations of generation parameters for training, and test on real data. The generation parameters that we vary are: Random Positioning (RP) / Selective Positioning (SP), Simple Superimposition (SI) / Blending (BL), and Random Scale (RS) / Selective Scale (SS), where SP, SS, and BL are explained in Section <ref>. For RP we randomly sample the position of the object over the entire image; for RS the scale of the object is randomly sampled from the range 0.2 to 1 with a step of 0.1; and for SI we do not use blending but instead superimpose the masked object directly on the background. The objective of this experiment is to investigate the effect of the generation parameters on the detection accuracy. For example, if a detector is trained on a set generated with selective positioning, blending, and selective scale, how does it compare to a detector trained on a completely randomly generated set with blending? If the former demonstrates higher performance than the latter, we can conclude that selective positioning and scaling are important and superior to random positioning. For each trained detector, a combination of the generation parameters (e.g., SP-BL-SS) is chosen, and the synthetic set is generated using our proposed approach along with bounding box annotations for each object instance. The detector is trained only on the synthetic data and then tested on the real data. The results are shown in lines 2-5 of Table <ref> for the GMU-Kitchens dataset and in Table <ref> for the WRGB-D dataset. Note that for the GMU-Kitchens dataset, all frames from the 9 scene videos were used for testing. We report detection accuracy for four combinations of generation parameters: RP-SI-RS, RP-BL-RS, SP-SI-SS, and SP-BL-SS. Other combinations such as SP-BL-RS and RP-BL-SS were also tried, but we noticed that applying selective positioning without selective scaling, and vice versa, does not yield any significant improvement. For both datasets, we first notice that using only synthetic data for training considerably lowers the detection accuracy compared to using real training data. Nevertheless, when training with synthetic data, the SP-BL-SS generation approach produced an improvement of 10.3% and 9.6% for SSD and Faster R-CNN, respectively, over the randomized generation approach RP-SI-RS on the GMU-Kitchens dataset. This suggests that selective positioning and scaling are important factors when generating the training set. In the case of the WRGB-D dataset, different blending strategies work better for SSD and Faster R-CNN (SP-SI-SS and SP-BL-SS, respectively). The right choice of blending strategy seems to improve Faster R-CNN somewhat more, while the overall performance of the two detectors is comparable. The positioning strategy, SP vs. RP, affects the two detectors differently on this dataset. SSD achieves higher performance with the random positioning RP-SI-RS, while Faster R-CNN shows a large improvement of 26.1% when trained with SP-BL-SS. This can be explained by the fact that Faster R-CNN is trained on proposals from the Region Proposal Network (RPN), which under-performs when objects are placed randomly in the image (as in RP-SI-RS). On the other hand, SSD does not have any prior knowledge about the location of the objects, so it learns to regress the bounding boxes from scratch; the bounding boxes at the beginning of training are generated randomly until SSD learns to localize the objects. This trend is not observed for the GMU-Kitchens dataset, since it has more clutter in the scenes and higher variability of backgrounds, which makes the localization of the objects harder. To support this argument, we performed a side experiment in which we ran the pre-trained RPN on both the WRGB-D and GMU-Kitchens datasets and evaluated it in terms of recall. Results can be seen in Table <ref>: the RPN performs much better on the WRGB-D dataset than on GMU-Kitchens.

§.§ Synthetic+Real to Real

We are interested in how effective our synthetic training set is when combined with real training data.
Toward this end, the two detectors are trained using the synthetic set with selective positioning and blending (SP-BL-SS) together with a certain percentage of the real training data: 1%, 10%, 50%, and 100%. For the real training data, except in the 100% case, the images are chosen randomly. Results are shown in lines 6-9 of Table <ref> for the GMU-Kitchens dataset and in Table <ref> for the WRGB-D dataset. What is surprising in these results is that when the synthetic training data is combined with only 10% of the real training data, we achieve higher or comparable detection accuracy than when the training set comprises only real data (see line 1 in both tables). In the case of SSD on the GMU-Kitchens dataset, we observe an increase of 6%. The only exception is Faster R-CNN on the GMU-Kitchens dataset, which achieves 2.3% lower performance; however, when we use 50% of the real training data we obtain a 1.3% improvement. In all cases, when the synthetic set is combined with 50% or 100% of the real data, it outperforms training with the real training set alone. The results suggest that our synthetic training data can effectively augment existing datasets even when the actual number of real training examples is small. This is particularly useful when only a small subset of the data is annotated. Specifically, in our setting, 10% of the real training data corresponds to around 400 images in the GMU-Kitchens dataset and around 600 in the WRGB-D dataset. Figure <ref> presents examples for which the detectors were unable to detect objects when trained only on real data, but succeeded when the training set was augmented with our synthetic data. We further support our argument by comparing, in Table <ref>, the performance of the detectors trained only on varying percentages of the real data to that obtained when training on real+synthetic data. The synthetic set here is also generated using SP-BL-SS. Note that in most cases the accuracy increases when the detectors are trained with both real and synthetic data, with the largest gain observed for SSD. Finally, in Table <ref> we present results for the GMU-Kitchens dataset when the percentage of synthetic data (SP-BL-SS) is varied while the real training data remains constant. Again, SSD shows a large and continuing improvement as the amount of synthetic data increases, while Faster R-CNN achieves top performance when half of the synthetic data is used for training.

§.§ Synthetic to Synthetic

In this experiment, the object detectors are trained and tested on synthetic sets. The objective is to show the reduction of over-fitting on the training data when using our approach to generate the synthetic images, instead of creating them randomly. We used the synthetic sets RP-SI-RS and SP-BL-SS and split them in half to create the train-test sets. The results are presented in lines 10 and 11 of Table <ref> for the GMU-Kitchens dataset and in Table <ref> for the WRGB-D dataset. For GMU-Kitchens, we observe that RP-SI-RS achieves results of over 90%, and in the case of Faster R-CNN almost 100%, while at the same time it is the worst-performing synthetic set in the synthetic-to-real experiment (see line 2 in Table <ref>) described in Section <ref>. This is because the detectors over-fit on the synthetic data and cannot generalize to an unseen set of real test data.
While the detectors still seem to over-fit on SP-BL-SS, the gap between the accuracy on the synthetic and real test data is much smaller, on the order of 17.3% for SSD and 23.4% for Faster R-CNN (see line 5 in Table <ref>). On the other hand, for the WRGB-D dataset both synthetic training sets achieve similar results on their synthetic test sets. This is not surprising, as the complexity of the scenes is much lower in WRGB-D than in the GMU-Kitchens dataset. Please see Section <ref> for more details.

§.§ Additional Discussion

We have seen in the results of Section <ref> that when a detector is trained on synthetic data and then applied to real data, the performance is consistently lower than when training on real data. While this can be attributed to artifacts introduced during the blending process, another factor is the large difference between the backgrounds of the NYU V2 dataset and GMU-Kitchens. We investigated this through a simple object recognition experiment. We trained the VGG <cit.> network on the BigBird dataset on the cropped images with elevation angles from cameras 1, 3, and 5, tested on the images with elevation angles from cameras 2 and 4, and achieved a recognition accuracy of 98.2%. For comparison, when VGG is trained on all images from BigBird and tested on cropped images from GMU-Kitchens, which contain real background scenes, the accuracy drops to 79.0%.

§ CONCLUSION

One of the advantages of our method is that it scales both with the number of objects of interest and with the set of possible backgrounds, which makes it suitable for robotics applications. For example, the object detectors can be trained with significantly less annotated data using our proposed training data augmentation. We also showed that our method is more effective when the object placements are based on the semantic and geometric context of the scene. This is due to the fact that CNNs implicitly consider the surrounding context of the objects, and when superimposition is informed by semantic and geometric factors, the gain in accuracy is larger. Another related observation is that for SSD, accuracy increases more than for Faster R-CNN when the training data is augmented by synthetic composite images. While we showed that it is possible to train an object detector with fewer annotated images using synthetically generated images, alternative domain adaptation approaches can also be explored toward the goal of reducing the amount of human annotation required. In conclusion, we have presented an automated procedure for generating synthetic training data for deep CNN object detectors. The generation procedure takes into consideration the geometry and semantic segmentation of the scene in order to make informed decisions regarding the positions and scales of the objects. We have employed two state-of-the-art object detectors and demonstrated an increase in their performance when they are trained with an augmented training set. In addition, we have investigated the effect of different generation parameters and provided insights that could prove useful in future attempts to generate synthetic data for training object detectors.

§ ACKNOWLEDGMENTS

We acknowledge support from NSF NRI grant 1527208. Some of the experiments were run on ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University, VA (URL: http://orc.gmu.edu).
{ "authors": [ "Georgios Georgakis", "Arsalan Mousavian", "Alexander C. Berg", "Jana Kosecka" ], "categories": [ "cs.CV", "cs.RO" ], "primary_category": "cs.CV", "published": "20170225060442", "title": "Synthesizing Training Data for Object Detection in Indoor Scenes" }
This paper presents an approach for semantic place categorization using data obtained from RGB cameras. Previous studies on visual place recognition and classification have shown that, by considering features derived from pre-trained Convolutional Neural Networks (CNNs) in combination with part-based classification models, high recognition accuracy can be achieved, even in the presence of occlusions and severe viewpoint changes. Inspired by these works, we propose to exploit local deep representations, representing images as sets of regions, and to apply a Naïve Bayes Nearest Neighbor (NBNN) model for image classification. As opposed to previous methods, where CNNs are merely used as feature extractors, our approach seamlessly integrates the NBNN model into a fully-convolutional neural network. Experimental results show that the proposed algorithm outperforms previous methods based on pre-trained CNN models and that, when employed in challenging robot place recognition tasks, it is robust to occlusions, environmental and sensor changes.

§ INTRODUCTION

Recent years have seen the breakthrough of mobile robotics into the consumer market. Domestic robots have become increasingly common, as have vehicles making use of cameras, radar and other sensors to assist the driver. An important aspect of human-robot interaction is the ability of artificial agents to understand the way humans think and talk about abstract spatial concepts. For example, a domestic robot may be asked to “clean the bathroom”, while a car may be asked to “stop at the parking area”. Hence, a robot's definition of “bathroom” or “parking area” should point to the same set of places that a human would recognize as such. The problem of assigning a semantic spatial label to an image has been extensively studied in the computer and robot vision literature <cit.>. The most important challenges in identifying places come from the complexity of the concepts to be recognized and from the variability of the conditions in which the images are captured. Scenes from the same category may differ significantly, while images corresponding to different places may look similar. The historical take on these issues has been to model the visual appearance of scenes considering a large variety of both global and local descriptors <cit.> and several (shallow) learning models (e.g., SVMs, Random Forests). Since the (re-)emergence of Convolutional Neural Networks (CNNs), approaches based on learning deep representations have become mainstream. Several works exploited deep models for visual-based scene classification and place recognition tasks, showing improved accuracy over traditional methods based on hand-crafted descriptors <cit.>. Some of these studies <cit.> demonstrated the benefit of adopting a region-based approach (i.e.
considering only specific image parts) in combination with descriptors derived from CNNs, so as to obtain models that are robust to viewpoint changes and occlusions. With a similar motivation, several recent works in computer vision have attempted to bring the notion of localities back into deep networks, e.g., by designing appropriate pooling strategies <cit.> or by casting the problem within the Image-2-Class (I2C) recognition framework <cit.>, with a high degree of success. All these works decouple the choice of the significant localities from the learning of deep representations, as the CNN feature extraction and the classifier learning are implemented as two separate modules. This leads to two drawbacks. First, choosing the relevant localities heuristically means concretely cropping parts of the images before feeding them to the chosen feature extractor; this is clearly sub-optimal, and might turn out to be computationally expensive. Second, it would be desirable to fully exploit the power of deep networks by directly learning the best representations for the task at hand, rather than re-using architectures trained on general-purpose databases like ImageNet and passively processing patches from the input images without adapting the network's weights. Ideally, a fully-unified approach would guarantee more discriminative representations, resulting in higher recognition accuracy. This paper contributes to this last research thread by addressing these two issues. We propose an approach for semantic place categorization which exploits local representations within a deep learning framework. Our method is inspired by the recent work <cit.>, which demonstrates that, by dividing images into regions and representing them with CNN-based features, state-of-the-art scene recognition accuracy can be achieved by exploiting an I2C approach, namely a parametric extension of the Naïve Bayes Nearest Neighbor (NBNN) model. Following this intuition, we propose a deep architecture for semantic scene classification which seamlessly integrates the NBNN and CNN frameworks (Fig. <ref>). We automatize the multi-scale patch extraction process by adopting a fully-convolutional network <cit.>, guaranteeing a significant advantage in terms of computational cost over two-step methods. Furthermore, a differentiable counterpart of the traditional NBNN loss is considered to obtain an error that can be back-propagated to the underlying CNN layers, thus enabling end-to-end training. To the best of our knowledge, this is the first attempt to fully unify NBNN and CNN, building a deep version of Naïve Bayes Nearest Neighbor. We extensively evaluate our approach on several publicly-available benchmarks. Our results demonstrate the advantage of the proposed end-to-end learning scheme over previous works based on a two-step pipeline, and the effectiveness of our deep network over state-of-the-art methods on challenging robot place categorization tasks.

§ RELATED WORK

In this section we review previous works on (i) visual-based place recognition and categorization and (ii) Naïve Bayes Nearest Neighbor classification.

§.§ Visual-based Place Recognition and Categorization

In the last decade several works in the robotics community have addressed the problem of developing robust place recognition <cit.> and semantic classification <cit.> approaches using visual data.
In particular, focusing on place categorization from monocular images, earlier works adopted a two-step pipeline: first, hand-crafted features, such as GIST <cit.>, CENTRIST <cit.>, CRFH <cit.> or HOUP <cit.>, are extracted from the query image, and then the image is classified into one of the predefined categories using a previously-trained discriminative model (e.g., Support Vector Machines). Similarly, earlier studies on visual-based place recognition and loop closing also considered hand-crafted feature representations <cit.>. More recently, motivated by the success of deep learning models in visual recognition tasks <cit.>, robotics researchers have started to exploit feature representations derived from CNNs for both place recognition <cit.> and semantic scene categorization <cit.>. Sünderhauf et al. <cit.> analyzed the performance of CNN-based descriptors with respect to viewpoint changes and time variations, presenting the first real-time place recognition system based on convolutional networks. Arroyo et al. <cit.> addressed the problem of topological localization across different seasons and proposed an approach which fuses information derived from multiple convolutional layers of a deep architecture. Gout et al. <cit.> evaluated the representational power of deep features for analyzing images collected by an autonomous surface vessel, studying the effectiveness of CNN descriptors in case of large seasonal and illumination changes. Uršič et al. <cit.> proposed an approach for semantic room categorization: first, images are decomposed into regions and CNN-based descriptors are extracted for each region; then, a part-based classification model is derived for place categorization. Interestingly, they showed that their method outperforms traditional CNN architectures based on global representations <cit.>, as the part-based model guarantees robustness to occlusions and image scaling. Our work develops from a similar idea, but differently from <cit.> the deep network is not merely used as a feature extractor, and a novel CNN architecture, suitable for end-to-end training, is proposed.

§.§ Naïve Bayes Nearest Neighbor Classification

The NBNN approach has been widely adopted in the computer and robot vision community as an effective method to overcome the limitations of local descriptor quantization and Image-2-Image recognition <cit.>. Several previous studies have demonstrated that the I2C paradigm implemented by NBNN models is especially beneficial for generalization and domain adaptation <cit.>, and that performance can be further boosted by adding a learning component to the non-parametric NBNN <cit.>. Recent works have also shown that NBNN can be successfully employed for place recognition and categorization tasks <cit.>. Kanji <cit.> introduced an NBNN scene descriptor for cross-seasonal place recognition. In a later work <cit.>, Kanji extended this approach by integrating CNN-based features and PCA, deriving a PCA-NBNN model to address the problem of self-localization for images with small view overlap. Kuzborskij et al. <cit.> proposed a multi-scale parametric version of the NBNN classifier and demonstrated its effectiveness in combination with precomputed CNN descriptors for scene recognition. Our work is inspired by <cit.>. However, the proposed learning model is based on a fully-convolutional network which can be trained in an end-to-end manner.
Therefore, it is significantly faster and more accurate than <cit.>.

§ FULLY-CONVOLUTIONAL CNN-NBNL

In this section we describe the proposed approach for semantic place categorization. As illustrated in Fig. <ref>, our method develops from the same idea as previous models based on local representations and CNN descriptors <cit.>: images are decomposed into multiple regions (represented with CNN features), and a part-based classifier is used to infer the labels associated with places. However, differently from previous works, our approach unifies the feature extraction and classifier learning phases, and we propose a novel CNN architecture which implements a part-based classification strategy. As demonstrated in our experiments (Sect. <ref>), our deep network guarantees a significant boost in performance, both in terms of accuracy and computational cost. Since our framework is derived from previous works on NBNN-based methods <cit.>, we first provide a brief description of these approaches (Sect. <ref>-<ref>) and then introduce the proposed fully-convolutional NBNN-based network (Sect. <ref>).

§.§ Naïve Bayes Non-Linear Learning

Let X denote the set of possible images and let Y be a finite set of class labels, indicating the different scene categories. The goal is to estimate a classifier f: X → Y from a training set T ⊂ X × Y sampled from the underlying, unknown data distribution. The NBNN method <cit.> works under the assumption that there is an intermediate Euclidean space Z and a set-valued function ϕ that abstracts an input image x ∈ X into a set of descriptors in Z, ϕ(x) ⊂ Z. For instance, the image could be broken into patches and a descriptor in Z computed for each patch. Given a training set T, let Φ_y(T) be the set of descriptors computed from images in T having label y ∈ Y, i.e., Φ_y(T) = {ϕ(x) : x ∈ X, (x,y) ∈ T}. The NBNN classifier f_NBNN is given as follows:

f_NBNN(x; T) = argmin_{y∈Y} ∑_{z∈ϕ(x)} d(z, Φ_y(T))^2,

where d(z, S) = inf{‖z − s‖_2 : s ∈ S} denotes the smallest Euclidean distance between z and an element of S ⊂ Z; in other terms, it is the distance between z and its nearest neighbor in S. Despite its effectiveness in terms of classification performance <cit.>, f_NBNN has the drawback of being expensive at test time, due to the nearest-neighbor search.
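For concreteness, the following brute-force NumPy sketch implements the NBNN decision rule above; it deliberately mirrors the equation rather than optimizing it, which makes the test-time cost of the nearest-neighbor search explicit.

```python
import numpy as np

def nbnn_classify(descriptors, class_descriptors):
    """NBNN rule: pick the class whose descriptor pool minimizes the summed
    squared nearest-neighbor distances to the query descriptors.
    descriptors: (n, d) array for the query image;
    class_descriptors: dict label -> (m_y, d) array of training descriptors."""
    best, best_cost = None, np.inf
    for y, pool in class_descriptors.items():
        # squared Euclidean distances between all query/pool pairs: (n, m_y)
        d2 = ((descriptors[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
        cost = d2.min(axis=1).sum()      # NN distance per descriptor, summed
        if cost < best_cost:
            best, best_cost = y, cost
    return best
```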
A possible way to reduce the complexity of this step consists in learning a small, finite set W_y ⊂ Z of representative prototypes for each class y ∈ Y to replace Φ_y(T). This idea was pursued by Fornoni et al. <cit.> with a method named Naïve Bayes Non-Linear Learning (NBNL). NBNL is developed from Eq. (<ref>) by replacing Φ_y(T) with the set of prototypes W_y and by assuming Z to be restricted to the unit ball. Under the latter assumption the bound d(z, S)^2 ≥ 2 − ω(z, S) can be derived <cit.>, where

ω(z, S) = (∑_{s∈S} |⟨z, s⟩|_+^q)^{1/q}.

Here, ⟨·,·⟩ denotes the dot product, q ∈ [1, +∞] and |x|_+ = max(0, x). The NBNL classifier is finally obtained in the form given below by using the bound as a replacement for d(·)^2 in Eq. (<ref>) (and after simple algebraic manipulations):

f_NBNL(x; W) = argmax_{y∈Y} ∑_{z∈ϕ(x)} ω(z, W_y),

where W = {W_y}_{y∈Y} encompasses all the prototypes. In order to learn the prototypes W_y for each y ∈ Y, Fornoni et al. did not consider f_NBNL as classifier and T as training set, but considered (only at training time) a classifier of the form f(z) = argmax_{y∈Y} ω(z, W_y) and an extended training set {(z, y) : z ∈ Φ_y(T), y ∈ Y}, where each descriptor extracted from an image is promoted to a training sample. In this way they derived the equivalent of a Multiclass Latent Locally Linear SVM (ML3), which is trained using the algorithm in <cit.>.

§.§ CNN-NBNL

Motivated by the robustness of NBNN/NBNL models and by the recent success of deep architectures on challenging visual tasks, Kuzborskij et al. <cit.> introduced an approach, named CNN-NBNL, which combines the NBNL and CNN frameworks. Their method is an implementation of NBNL where ϕ(x) is obtained by dividing an image x ∈ X into patches at different scales and by employing a pre-trained CNN-based feature extractor <cit.> to compute a descriptor for each patch. In formal terms, if g_CNN: X → Z is the CNN-based feature extractor that takes an input image/patch and returns a single descriptor, then ϕ(x) (see Sect. <ref>) is given by

ϕ_CNN(x) = {g_CNN(x̂) : x̂ ∈ patches(x)},

where patches(x) ⊂ X returns a set of patches extracted from x at multiple scales and reshaped to be compatible, in terms of resolution, with the input dimensionality required by the implementation of g_CNN (CaffeNet <cit.> requires 227×227). To learn the prototypes W_y, a training objective similar to <cit.> is adopted in <cit.>, but the optimization is performed using a stochastic version of ML3 (STOML3) that scales better to larger datasets. At test time, f_NBNL defined as in Eq. (<ref>) is used with ϕ replaced by ϕ_CNN.
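The NBNL decision rule of Eqs. (<ref>)-(<ref>) can be made concrete with a minimal NumPy sketch. Assumptions: descriptors and prototypes are ℓ2-normalized, each prototype set W_y is stored as rows of a matrix, and the prototypes are taken as given (learning them via ML3/STOML3 is not shown).

```python
import numpy as np

def omega(z, W_y, q=2.0):
    """Soft similarity of descriptor z (d,) to prototype set W_y (k, d)."""
    s = np.maximum(W_y @ z, 0.0)          # |<z, w>|_+ for each prototype
    return (s ** q).sum() ** (1.0 / q)

def nbnl_classify(descriptors, prototypes, q=2.0):
    """Argmax over classes of the summed omega scores of all descriptors.
    prototypes: dict label -> (k_y, d) prototype matrix."""
    scores = {y: sum(omega(z, W_y, q) for z in descriptors)
              for y, W_y in prototypes.items()}
    return max(scores, key=scores.get)
```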
By moving from hand-crafted features to CNN-based features, the performance of the NBNL classifier improves considerably. Nonetheless, the approach proposed in <cit.> has two limitations: 1) it requires the extraction of patches from each image as a pre-processing step, with CNN features extracted sequentially from each patch; 2) the CNN architecture is used as a mere feature extractor, and the method lacks the advantage of an end-to-end trainable system. The first limitation has a negative impact on the computation time of the method, while the second leaves room for further performance boosts.

§.§ Fully-Convolutional CNN-NBNL

To overcome the two limitations of CNN-NBNL mentioned above, in this work we introduce a fully-convolutional version of CNN-NBNL that is end-to-end trainable (Fig. <ref>).

Fully-convolutional extension. Extracting patches at multiple scales and extracting CNN features independently for each of them is a very costly operation, which severely impacts training and test time. In order to perform a similar operation with a limited impact on computation time, we propose to employ a Fully-Convolutional CNN (FC-CNN) <cit.> to simulate the extraction of descriptors from multiple patches over the entire image. A FC-CNN can be derived from a standard CNN by replacing fully-connected layers with convolutional layers. By doing so, the network is able to map an input image of arbitrary size into a set of spatially-arranged output values (descriptors). To cover multiple scales, we simply aggregate descriptors that are extracted with the FC-CNN from the image at different resolutions. In this way, as the receptive fields of the FC-CNN remain the same, changing the scale of the input image induces an implicit change in the scale of the descriptors. The number of obtained descriptors per image depends on the image resolution and can in general be controlled by properly shaping the convolutional layers: for instance, by increasing the stride of the last convolutional layer it is possible to reduce the number of descriptors that the FC-CNN returns. In the following, we denote by g_FCN(x; θ) ⊂ Z the output of a FC-CNN parametrized by θ applied to an input image x ∈ X. As opposed to g_CNN defined in Sect. <ref>, which returns a single descriptor, g_FCN(x; θ) outputs a set of descriptors, one for each spatial location in the final convolutional layer of the FC-CNN. Each descriptor has a dimensionality that equals the number of output convolutional filters. We also denote by η(x) the number of descriptors that the FC-CNN generates for an input image x. Note that this number does not depend on the actual parametrization of the network, but only on its topology, which is assumed to be fixed, and on the resolution of the input image.

End-to-end architecture. The NBNL classifier that we propose and detail below can be implemented using layers that are commonly found in deep learning frameworks and can thus be easily stacked on top of a FC-CNN (see Fig. <ref>). By doing so, we obtain an architecture that can be trained end-to-end. Given an input image x ∈ X, we create a set of m scaled versions of x, which we denote by scale(x) ⊂ X. Each scaled image x̂ ∈ scale(x) is fed to the FC-CNN described before, yielding a set of descriptors g_FCN(x̂; θ). Instead of aggregating the descriptors from all scales, as done in Eq. (<ref>), we keep them separated because they undergo a normalization step which avoids biasing the classifier towards scales that have a larger number of descriptors. The final form of our NBNL classifier is given by:

f_FCN-NBNL(x; W, θ) = argmax_{y∈Y} h(x; W_y, θ),

where h, defined below, measures the likelihood of x given the prototypes in W_y:

h(x; W_y, θ) = (1/m) ∑_{x̂∈scale(x)} ω̄(x̂; W_y, θ),

and ω̄ is the scale-specific normalized score:

ω̄(x̂; W_y, θ) = (1/η(x̂)) ∑_{z∈g_FCN(x̂;θ)} ω(z, W_y).

This normalization step is necessary to prevent scales that generate many descriptors from biasing the final likelihood. To train our network we define the following regularized empirical risk with respect to both the classifier's parameters W and the FC-CNN's parameters θ:

R(W, θ; T) = (1/|T|) ∑_{(x,y)∈T} ℓ(h(x; W, θ), y) + λ Ω(W, θ).

Here, h(x; W, θ) = {h(x; W_y, θ)}_{y∈Y}, Ω is an ℓ_2-regularizer acting on all the network's parameters, and ℓ(u, y), with u = {u_y}_{y∈Y}, u_y ∈ ℝ, is the following loss function:

ℓ(u, y) = −u_y + log ∑_{y'∈Y} e^{u_{y'}},

obtained from the composition of the log-loss with the soft-max operator. Following <cit.>, we actually do not minimize R(W, θ; T) directly as defined above, but replace the loss terms with the following upper bound, obtained by application of Jensen's inequality:

ℓ(h(x; W, θ), y) ≤ (1/m) ∑_{x̂∈scale(x)} (1/η(x̂)) ∑_{z∈g_FCN(x̂;θ)} ℓ(ω(z, W), y),

with ω(z, W) = {ω(z, W_y)}_{y∈Y}. This is equivalent to promoting descriptors to training samples, as in <cit.>.
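The descriptor-level upper bound above can be sketched numerically as follows. This is a plain NumPy illustration, not the actual Caffe implementation used in the paper: prototypes are assumed to be given as a list of per-class matrices, classes are assumed to be integer-indexed, and the 1/(m·η) weighting makes each scale contribute equally, as in the bound.

```python
import numpy as np

def omega_scores(z, prototypes, q=2.0):
    """Per-class omega scores of one descriptor (Eq. for omega, class-wise)."""
    return np.array([(np.maximum(W_y @ z, 0.0) ** q).sum() ** (1.0 / q)
                     for W_y in prototypes])

def fcn_nbnl_loss(scale_descriptors, prototypes, y_true, q=2.0):
    """Per-image upper bound: every descriptor becomes a training sample with
    a softmax log-loss over its omega scores, weighted by 1 / (m * eta)."""
    m = len(scale_descriptors)
    loss = 0.0
    for Z in scale_descriptors:           # Z: (eta, d) descriptors of one scale
        w = 1.0 / (m * len(Z))
        for z in Z:
            u = omega_scores(z, prototypes, q)
            # numerically stable log-sum-exp minus the true-class score
            loss += w * (np.log(np.exp(u - u.max()).sum()) + u.max() - u[y_true])
    return loss
```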
§ EXPERIMENTAL RESULTS

In this section we evaluate the performance of our approach. In Sect. <ref> we compare against the method in <cit.>, demonstrating the advantages of our end-to-end learning framework. In Sect. <ref> we assess the effectiveness of the proposed approach for the place categorization task, considering images acquired by different robotic platforms in various indoor environments and comparing with state-of-the-art approaches. Finally, we demonstrate the robustness of our model to different environmental conditions and sensors (Sect. <ref>), and to occlusions and image perturbations (Sect. <ref>). Our evaluation was performed on an NVIDIA GeForce GTX 1070 GPU, implementing our approach with the popular Caffe framework <cit.>.

§.§ Comparison with Holistic and Part-based CNN Models

In a first series of experiments we demonstrate the advantages of the proposed part-based model and compare it with (i) its non-end-to-end counterpart (i.e., the CNN-NBNL method in <cit.>) and (ii) traditional CNN-based approaches that do not account for local representations. To implement <cit.>, following the original paper, we split the input image into multiple patches, extracting features from the last fully-connected layer of a pre-trained CNN. The patches were extracted at three different scales (32, 64, 128 pixels) after the original image was rescaled (longest side 200 pixels). We adopted the sparse protocol of <cit.>, in which features from 100 random patches are extracted. The patches are equally distributed among the three scales, and an additional descriptor representing the full image is considered. As representative of deep models based on holistic representations, we chose the successful approach of Zhou et al. <cit.>: they pre-train a CNN on huge datasets (i.e., ImageNet <cit.>, Places <cit.>, or both in the hybrid configuration) and use it as a feature extractor for learning a linear SVM model. Note that this is a strong baseline, widely used in the computer vision community for scene recognition tasks. To demonstrate the generality of our contribution, we tested all models with three different base networks: the Caffe <cit.> version of AlexNet <cit.>, VGG-16 <cit.> and GoogLeNet <cit.>. For AlexNet and VGG-16 we considered the networks pre-trained on both the Places <cit.> and ImageNet <cit.> datasets (i.e., the hybrid configuration). For GoogLeNet no pre-trained hybrid network was available, so we took the model pre-trained on Places365. In order to fairly compare our model with the baseline method in <cit.>, our fully-convolutional network was designed to match the resolution of the local patches adopted in <cit.>. To accomplish this, since a 128x128 patch covers 64% of a 200x200 image, we rescaled the input image so that the receptive fields correspond to approximately 64% of the input (355 pixels for CaffeNet and 350 pixels for VGG and GoogLeNet). The other scales were obtained by upsampling the image twice with a deconvolutional layer. We extracted 25 local features for the largest scale (128x128 pixels), 36 for the medium and 49 for the smallest, for a total of 110 local descriptors. These numbers of features were obtained by regulating the stride of the last layers of the network. As in <cit.>, we extracted features at the last fully-connected layer level, applying batch normalization <cit.> before the classifier. Since the datasets considered in our evaluation are of small/medium size, fine-tuning was performed only on the last two layers of the network. The networks were trained with a fixed learning rate, which was decreased twice by a factor of 0.1. To decide the proper learning rate schedule and number of epochs, we performed parameter tuning on a separate validation set.
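The sparse patch-sampling protocol of the CNN-NBNL baseline described above can be sketched as follows; it assumes the image has already been rescaled so that both sides are at least as large as the biggest patch scale.

```python
import numpy as np

def sample_patches(img, n=100, scales=(32, 64, 128)):
    """Sparse protocol: n random square crops, split evenly across scales,
    plus the full image as one additional 'patch'."""
    H, W = img.shape[:2]
    patches = [img]                        # whole-image descriptor
    per_scale = n // len(scales)
    for s in scales:
        for _ in range(per_scale):
            y = np.random.randint(0, H - s + 1)
            x = np.random.randint(0, W - s + 1)
            patches.append(img[y:y + s, x:x + s])
    return patches
```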
As parameters of the NBNL classifier, we chose k=10 and p=2, applying a weight decay of 10^-5 to the prototypes. Notice that our model considers 110 descriptors, while 100 were used for the baseline method in <cit.>. However, we experimentally verified that a difference of 10 descriptors does not influence performance. This confirms previous findings in <cit.>, where Kuzborskij et al. also tested their approach with a dense configuration employing 400 patches, without significant improvements in accuracy over the sampling protocol. We performed experiments on three different datasets previously used in <cit.>: Sports8 <cit.>, Scene15 <cit.> and MIT67 <cit.>. The Sports8 dataset <cit.> contains 8 different indoor and outdoor sport scenes (rowing, badminton, polo, bocce, snowboarding, croquet, sailing and rock climbing). The number of images per category ranges from 137 to 200. We followed the common experimental setting, taking 70 images per class for training and 60 for testing. The Scene15 dataset <cit.> is composed of different categories of outdoor and indoor scenes. It contains a maximum of 400 gray-scale images per category. We considered the standard protocol, taking 100 images per class for training and 100 for testing. The MIT67 dataset <cit.> is a common benchmark for indoor scene recognition. It contains images of 67 indoor scenes, with at least 100 images per class. We adopted the common experimental setting, using 80 images per class for training and 20 for testing. For each dataset we took 5 random splits, reporting the results as mean and standard deviation. Tab. <ref> shows the results of our evaluation. Mean and standard deviation are provided for our approach and <cit.>, while for the CNN models of <cit.> we report results from the original papers. From the table it is clear that, for all base networks and datasets, our method outperforms the baselines. These results confirm the significant advantage of the proposed part-based approach over traditional CNN architectures which do not consider local representations. Moreover, our results show that our end-to-end training model guarantees an improvement in performance compared to its non-end-to-end counterpart CNN-NBNL. This improvement is mostly due to the proposed end-to-end training strategy: a pre-trained network is able to extract powerful features, but they are not always discriminative when applied to specific tasks. End-to-end training overcomes this limitation by adapting the pre-trained features to the new target task, producing class-discriminative representations. This is shown in Fig. <ref>, where we plot t-SNE visualizations <cit.> of the fc7 features extracted at scale 64x64 pixels with CNN-NBNL (Fig. <ref>.a) and with our approach (Fig. <ref>.b): while a pre-trained network fails at creating discriminative local features, our model is able to learn representations that cluster according to the class labels. To further compare our approach with CNN-NBNL <cit.>, we also analyzed the computational time required during the test phase to process an increasing number of patches. Fig. <ref> reports the results of our analysis: as expected, our fully-convolutional architecture holds a great advantage over the CNN-NBNL model, which extracts local features independently patch-by-patch. We remark that a reduced classification time is fundamental for the adoption of the proposed model on robotic platforms operating in real environments.
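A visualization like the one in the figure above can be produced with a few lines of scikit-learn; this hypothetical helper assumes the local fc7 features and their class labels have already been extracted.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels):
    """2-D t-SNE embedding of local descriptors, colored by class label."""
    emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=4, cmap="tab10")
    plt.show()
```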
§.§ Robot Place Categorization

In this section we show the results of our evaluation when testing the proposed approach on publicly available robot vision datasets. These experiments aim at verifying the effectiveness of our fully-convolutional network and its robustness to varying environmental conditions and occlusions.

§.§.§ COLD dataset

We first tested our method on the COsy Localization Database (COLD) <cit.>. This database contains three datasets of indoor scenes acquired in three different laboratories with different robots. COLD-Freiburg contains 26 image sequences collected in the Autonomous Intelligent Systems Laboratory at the University of Freiburg with a camera mounted on an ActivMedia Pioneer-3 robot. COLD-Ljubljana contains 18 image sequences acquired from the camera of an iRobot ATRV-Mini platform at the Visual Cognitive Systems Laboratory of the University of Ljubljana. For COLD-Saarbrücken, an ActivMedia PeopleBot was employed to gather 29 image sequences inside the Language Technology Laboratory at the German Research Center for Artificial Intelligence in Saarbrücken. In our experiments we followed the protocol described in Rubio et al. <cit.>, considering images of path 2 of each laboratory. These data exhibit significant changes with respect to illumination conditions and time of data acquisition. Using path 2, there are 9 categories for COLD-Freiburg, 6 for COLD-Ljubljana and 8 for COLD-Saarbrücken. We trained and tested on data collected in the same laboratory, considering 5 random splits and reporting the average values. We compared our model with the methods proposed in <cit.>, since this work is one of the most recent studies adopting this dataset. In <cit.>, Rubio et al. proposed to extract HOG features and to apply a dimensionality reduction technique before providing the features as input to different classifiers: linear SVM, Naïve Bayes (NB), Bayesian Network (BN) and Tree Augmented Naïve Bayes (TAN). In our experiments, we trained our model with the same setting described in Sect. <ref>, fine-tuning the last two layers of the network. The results are shown in Tab. <ref>. Our model outperforms all the baselines in <cit.>, confirming the advantage of CNN-based approaches over traditional classifiers and hand-crafted features. The high accuracy of our method also demonstrates that the proposed fully-convolutional network is highly effective at discerning among different rooms, even with significant lighting and environmental changes.

§.§.§ KTH-IDOL dataset

To further assess the ability of the proposed method to generalize across different robotic platforms and illumination conditions, we performed experiments on the KTH Image Database for rObot Localization (KTH-IDOL) <cit.>. This dataset contains 12 image sequences collected by two robots (Dumbo and Minnie) in 5 different rooms. The image sequences were collected over several days under three different illumination conditions: sunny, cloudy and night. Following <cit.>, we considered the first two sequences for each robot and weather condition, performing three different types of tests. First, we trained and tested using the same robot and the same weather condition, with one sequence used for training and the other for testing, and vice-versa. Second, we used the same robot for training and testing, varying the weather conditions of the two sets. In the last experiment we trained the classifier with the same weather condition but tested it on a different robot.
Notice that, differently from Sect. <ref>, in this case the illumination changes are not present in the training set. Our model is trained with the same settings as in Sect. <ref>. In this case, to reduce overfitting and improve the capability of our network, we apply data augmentation to the RGB channels, following the standard procedure introduced in <cit.>. We compared our method with three state-of-the-art approaches: (i) <cit.>, which used high-dimensional histogram global features as input for a χ^2 kernel SVM; (ii) <cit.>, which proposed the CENTRIST descriptor and performed nearest-neighbor classification; and (iii) <cit.>, which used again the nearest-neighbor classifier but with Histograms of Oriented Uniform Patterns (HOUP) as features. Tab. <ref> shows the results of our evaluation. Our method outperforms all the baselines in the first and third series of experiments (same lighting). In particular, the large improvement in performance in the third experiment clearly demonstrates its ability to generalize over different input representations of the same scene, independently of the camera mounted on the robot. These results suggest that it should be possible to train our model offline and apply it on arbitrary robotic platforms. In the second experiment, while the high classification accuracy demonstrates significant robustness to lighting variations, our model achieves performance comparable to previous works, showing a small advantage of CNN representations over traditional methods in case of illumination changes.

§.§.§ Household room dataset

In the last series of experiments we tested the robustness of our model with respect to occlusions. We evaluate the performance of our approach on the recently introduced household room (or MIT8) dataset <cit.>. This dataset is a subset of MIT67 which contains 8 room categories: bathroom, bedroom, children room, closet, corridor, dining room, kitchen and living room. We used the setting provided in <cit.>, with 641 images for training and 155 for testing. The challenge proposed by Uršič et al. <cit.> is to train the model on the original images and test its performance under various noisy conditions. The conditions are: occlusion in the center of the image, occlusion on the right border, occlusion by a person, addition of an outside border, upside-down rotation, and cuts on the top or right part of the image (inducing aspect ratio changes). All the test sets were produced following the protocol in <cit.>, apart from the person-occlusion set, provided directly by the authors. We compare our approach with the part-based model developed by Uršič et al. <cit.> and the global CNN-based model of <cit.>. In <cit.>, selective search is used to extract informative regions inside the image, which are then provided as input to a pre-trained CNN. From these features, exemplar parts are learned for each category and used by a part-based mixture model for the final classification. The standard hybrid CaffeNet <cit.> is employed as the CNN architecture. For a fair comparison we adopted the same base architecture, extracting features at the last fully-connected layer before the classifier. In this case we used images rescaled to 256x256 as input, upsampling them twice to obtain descriptors at multiple scales. We extracted 45 descriptors: 4 for the smallest scale (256x256), 16 for the medium and 25 for the largest. The training procedure is the one described in Sect.
<ref>, and the same parameters are used for the NBNL classifier, with batch normalization applied to the last layer. We trained our model 10 times, computing the average accuracy. The results of the evaluation are reported in Tab. <ref>. As shown in the table, both our approach and the method in <cit.> achieve higher classification accuracy than the CNN model of <cit.>, confirming the benefit of part-based modeling. It is interesting to compare our approach with <cit.>: while our framework guarantees better performance in certain conditions (e.g., original frames, person occlusion), the method in <cit.> is more robust to changes of the aspect ratio (e.g., cuts in the image) and scale (e.g., outside border addition). Interestingly, when the occlusion is not created by artificially obscuring patches (person occluder), our model achieves higher performance than <cit.>. Conversely, in the outside-border experiments almost half of the image is black and the real content reduces to a very small scale; in this (artificial) setting, <cit.> outperforms our model. For the sake of completeness, we also report the confusion matrix associated with our results on the original frames (Fig. <ref>).

§ CONCLUSIONS

We presented a novel deep learning architecture for addressing the semantic place categorization task. By seamlessly integrating the CNN and NBNN frameworks, our approach permits learning local deep representations, enabling robust scene recognition. The effectiveness of the proposed method is demonstrated on various benchmarks. We show that our approach outperforms traditional CNN baselines and previous part-based models which use CNNs purely as feature extractors. In robotics scenarios, our deep network achieves state-of-the-art results on three different benchmarks, demonstrating its robustness to occlusions, environmental changes and different sensors. As future work, we plan to extend this model in order to handle multimodal inputs (e.g., considering range sensors in addition to RGB cameras).
{ "authors": [ "Massimiliano Mancini", "Samuel Rota Bulò", "Elisa Ricci", "Barbara Caputo" ], "categories": [ "cs.RO", "cs.CV" ], "primary_category": "cs.RO", "published": "20170225145043", "title": "Learning Deep NBNN Representations for Robust Place Categorization" }
marcoantonio.amaral@gmail.com Departamento de Física, Universidade Federal de Minas Gerais, Caixa Postal 702, CEP 30161-970, Belo Horizonte - MG, Brazil Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, SI-2000 Maribor, Slovenia CAMTP – Center for Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor, Slovenia Departamento de Física, Universidade Federal de Ouro Preto, Ouro Preto, MG, Brazil Institute of Technical Physics and Materials Science, Centre for Energy Research, Hungarian Academy of Sciences, Post Office Box 49, H-1525 Budapest, Hungary Departamento de Física, Universidade Federal de Minas Gerais, Caixa Postal 702, CEP 30161-970, Belo Horizonte - MG, Brazil Departamento de Física, Universidade Federal de Minas Gerais, Caixa Postal 702, CEP 30161-970, Belo Horizonte - MG, Brazil

“Three is a crowd” is an old proverb that applies as much to social interactions as it does to frustrated configurations in statistical physics models. Accordingly, social relations within a triangle deserve special attention. With this motivation, we explore the impact of topological frustration on the evolutionary dynamics of the snowdrift game on a triangular lattice. This topology provides an irreconcilable frustration, which prevents the anti-coordination of competing strategies that would be needed for an optimal outcome of the game. By using different strategy updating protocols, we observe complex spatial patterns in dependence on payoff values that are reminiscent of a honeycomb-like organization, which helps to minimize the negative consequences of the topological frustration. We relate the emergence of these patterns to the microscopic dynamics of the evolutionary process, both by means of mean-field approximations and Monte Carlo simulations. For comparison, we also consider the same evolutionary dynamics on the square lattice, where of course the topological frustration is absent. However, with the deletion of diagonal links of the triangular lattice, we can gradually bridge the gap to the square lattice. Interestingly, in this case the level of cooperation in the system is a direct indicator of the level of topological frustration, thus providing a method to determine frustration levels in an arbitrary interaction network. 89.75.Fb, 87.23.Ge, 89.65.-s

Role-separating ordering in social dilemmas controlled by topological frustration
Marco A. Amaral, Matjaž Perc, Lucas Wardil, Attila Szolnoki, Elton J. da Silva Júnior, Jafferson K. L. da Silva
December 30, 2023
=================================================================================

§ INTRODUCTION The evolution of cooperation is still a major open problem in the biological and social sciences <cit.>. After all, why should self-interested individuals incur costs to provide benefits to others? This puzzle has traditionally been studied by means of evolutionary game theory, and with remarkable success <cit.>. The prisoner's dilemma game <cit.>, for example, is the classical setup of a social dilemma. The population is best off if everybody cooperates, but the individual does best if it defects, and that regardless of what others choose to do. In classical game theory, the Nash equilibrium of the prisoner's dilemma game, indeed the rational choice, is thus to defect. Nevertheless, cooperation flourishes in nature, and it is in fact much more common than could be anticipated based on the fundamental Darwinian premise that only the fittest survive.
Humans, birds, ants, bees, and even different species among one another, all cooperate to a greater or lesser extent <cit.>. An important step forward in understanding the evolution of cooperation theoretically was to consider spatially structured populations, modeled for example by a square lattice, which was first done by Nowak and May <cit.>, who discovered network reciprocity. In spatially structured populations cooperators may survive because of the formation of compact clusters, where in the interior they are protected against the invasion of defectors. Other prominent mechanisms that support the evolution of cooperation include kin selection <cit.>, mobility and dilution <cit.>, direct and indirect reciprocity <cit.>, network reciprocity <cit.>, group selection <cit.>, and population heterogeneity <cit.>. In particular, research in the realm of statistical physics has shown that properties of the interaction network can have far-reaching consequences for the outcome of evolutionary social dilemmas <cit.> (for reviews see <cit.>), and moreover, that heterogeneity in general, be it introduced in the form of heterogeneous interaction networks, noisy disturbances to payoffs, or other player-specific properties like the teaching activity or the propensity to acquire new links over time, is a strong facilitator of cooperation <cit.>.

However, the impact of a structured population is not always favorable for the evolution of cooperation. If the interaction network links three individuals into a triangle, it may be challenging, or even impossible, to come up with a distribution of strategies that ensures everybody is best off (even if one assumes away the constraints of the evolutionary competition) <cit.>. In the snowdrift game, anti-coordination of the two competing strategies is needed for an optimal outcome of the game. Clearly, in a triangle, if one individual cooperates and the other defects, the third player is frustrated because it is impossible to choose a strategy that would work best with both its neighbors. Similarly frustrated setups occur in traditional statistical physics, and have in fact been studied frequently in solid-state physics <cit.>. In anti-ferromagnetic systems, for example, spins seek the opposite state of their neighbors, and again, it is clearly impossible to achieve this in a triangle. As noted above, the snowdrift game is in this regard conceptually identical, and thus one can draw on methods of statistical physics and on the knowledge from related systems in solid-state physics to successfully study the evolutionary dynamics of cooperation in settings that constitute a social dilemma.

The manifestation of topological frustration in the snowdrift game, however, can depend strongly on how the players update their strategies during the evolutionary process. In the light of recent human experiments <cit.>, we here consider not only the generally used imitation dynamics, but also the so-called logit rule (also known as myopic dynamics) <cit.>. The latter can be considered more innovative, allowing players to choose strategies that are not within their neighborhood if they provide a good response to the strategies of their neighbors. Although the long-term evolution in animals is best described by imitation dynamics, humans tend to be more inventive, and thus their behavior is aptly described also by innovative dynamics <cit.>.
Indeed, the impact of the logit rule and of closely related strategy updating protocols on the outcomes of evolutionary games on the square lattice has been studied extensively <cit.>, but there the topological frustration is absent. In what follows, we fill this gap by studying the snowdrift game on the triangular lattice, as well as the transition from the square to the triangular lattice, both by means of mean-field approximations and Monte Carlo simulations. Our main objective is to reveal how an inherent topological frustration affects the evolutionary outcomes. We observe fascinating honeycomb-like patterns, and we devise an elegant method to determine the level of frustration in an arbitrary interaction network through the stationary level of cooperation. Before presenting the main results, we first describe the mathematical model, and we conclude with a discussion of the wider implications of our findings.

§ MATHEMATICAL MODEL In our model, players have only two possible strategies, namely cooperation (C) and defection (D), and the game is played in a pairwise manner as defined by the interaction network. During each pairwise interaction players receive a payoff according to the payoff matrix <cit.>

\begin{array}{c|cc}
  & C & D \\ \hline
C & R & S \\
D & T & P
\end{array}

where T∈[0,2], S∈[-1,1] and R=1, P=0. This parametrization is useful as it spans four different classes of games, namely the prisoner's dilemma game (PD), the snowdrift game (SD), the stag-hunt game (SH), and the harmony game (HG) <cit.>. After players collect their payoff, they may change their strategies based on a particular strategy updating rule. In this paper, we consider the logit rule and compare it with the classical imitation rule.

The logit rule is based on the kinetic Ising model of magnetism (also known as Glauber dynamics <cit.>). A site will change its strategy with probability

p(Δu_i) = \frac{1}{1+e^{-(u_i^*-u_i)/K}},

where u_i is the site's current payoff and u_i^* is the site's payoff if it changed to the opposite strategy while the states of its neighborhood remained unchanged. Finally, K is a parameter that measures the irrationality of players. In the literature K is usually set within K∈[0.001,0.4] to simulate a small, but non-zero, chance of making mistakes <cit.>; we set K=0.1. Mathematically, the model is equivalent to the statistics used in physics to describe the dynamics of spins following a Fermi-Dirac distribution and is widely used in evolutionary dynamics <cit.>. In the context of game theory, this kind of update rule (also known as myopic best response <cit.>) can be regarded as a player asking himself what the benefits of changing his strategy would be (even when there is no neighbor with a different strategy). This means that the logit rule is an innovative dynamic, since new strategies can spontaneously appear. Recently, the logit rule has been the focus of many works <cit.>, as it leads to very different results compared to imitation models. As we can see, this rule is closely related to a rational analysis of a situation, instead of the reproduction of the “fittest” behavior. Although evolutionary game theory has its basis rooted in biological population dynamics, recent work shows that the modeling of humans playing games can have more in common with innovative dynamics <cit.>. The imitation rule, or imitation dynamics, is one of the most common update rules in iterated evolutionary game theory <cit.>, and is based on the concept of the fittest strategy reproducing to neighboring sites. Here we will use it as a baseline for comparison with our results.
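As a concrete illustration before detailing the imitation rule, the following is a minimal sketch of the logit switching probability from the equation above (the imitation probability given below has the same functional form, with u_i^* replaced by the payoff u_j of a randomly chosen neighbor):

```python
import math

def logit_prob(u_current, u_flipped, K=0.1):
    """Probability that a site adopts the opposite strategy under the logit
    rule, 1 / (1 + exp(-(u* - u)/K)); K = 0.1 as in the text."""
    return 1.0 / (1.0 + math.exp(-(u_flipped - u_current) / K))

print(logit_prob(1.0, 1.5))  # payoff gain of 0.5 -> switch with prob ~0.993
print(logit_prob(1.0, 0.5))  # payoff loss of 0.5 -> switch with prob ~0.007
```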
Site i will update its state by randomly choosing one of its neighbors, j, and then comparing their payoffs. Site i adopts the strategy of j with probability

p(Δu_{ij}) = \frac{1}{1+e^{-(u_j-u_i)/K}},

where u_i and u_j are the total payoffs of sites i and j <cit.>. Note that player i can only change its strategy to one of those available in its neighborhood. This means that strategies can never reappear once extinguished, and players never “explore” new strategies, which can be interpreted as a non-innovative dynamic. This model is associated with biological processes, where each strategy is regarded as a species, and once extinguished it will never reappear <cit.>. We note that this is not always the case when modeling human interactions, since humans can change behaviors depending also on other external, and to a large degree unpredictable, factors. We also note that many works have shown that the strategy updating rule can have a profound influence on the evolution of strategies, even changing the impact of the topology of the interaction network <cit.>.

§.§ Triangular lattice We give a quick review here to clarify some properties of the triangular lattice. This topology has an important property: it contains closed loops comprised of an odd number of steps (the elementary triangles), which gives rise to frustration phenomena <cit.>. The snowdrift game, which is also known as an anti-coordination game since choosing the opposite strategy of the partner is a Nash equilibrium, is strongly affected by the network's inherent frustration. On square lattices, the logit rule yields a population displaying a very stable checkerboard pattern, as everyone can choose to do the opposite of all neighbors <cit.>. In contrast, this spatial ordering is impossible on the triangular lattice, as shown in Fig. <ref>. Every pair of different strategies will share at least one third neighbor that will be frustrated. This phenomenon is well explored in magnetic models, where many interesting “spin-glass” phenomena can arise <cit.>. In spin models we see that the “minimum energy” configuration would be similar to the pattern shown in Fig. <ref>. We wish to analyze this situation in evolutionary game dynamics. One type of player is surrounded by a honeycomb structure of the opposite type, repeated infinitely for a large lattice. Notice that the central site (blue) does not have any frustrated connections, while the other type (red) is frustrated in half of its connections.

§ RESULTS We start by showing that, mathematically, the formal relation between anti-coordination games and anti-ferromagnetic systems <cit.> is not an identity. Let us consider a matrix for the energy of a single spin in a magnet with coupling constant J and an external magnetic field B, similar to the payoff matrix <ref>:

\begin{array}{c|cc}
  & ↑ & ↓ \\ \hline
↑ & -J-B & J-B \\
↓ & J+B & -J+B
\end{array}

The spins in the anti-ferromagnet (J<0) tend to point in the opposite direction to their neighbors, just as in anti-coordination games individuals tend to do the opposite of their neighbors. However, equating the payoff matrix to the energy matrix (-J-B=R, J-B=S, J+B=T, and -J+B=P) and requiring the snowdrift payoff condition (T>R>S>P) yields

J+B>0, J<0, and J>B,

which is a contradiction: J<0 and J>B imply J+B<2J<0. There is no combination of parameters that obeys both the physical symmetry of the magnetic system and the dilemma hierarchy of game theory in the general case. In other words, the magnetic system obeys a diagonal symmetry in the matrix, whereas game theory obeys a linear hierarchy of the parameters in the matrix; both cannot be fulfilled simultaneously.
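The incompatibility of the three inequalities can also be verified numerically; the brute-force scan below is our own sanity check, not part of the original analysis:

```python
import numpy as np

# No (J, B) pair satisfies J + B > 0, J < 0 and J > B simultaneously:
# J < 0 and J > B give J + B < 2J < 0, contradicting J + B > 0.
J, B = np.meshgrid(np.linspace(-5, 5, 1001), np.linspace(-5, 5, 1001))
feasible = (J + B > 0) & (J < 0) & (J > B)
print(feasible.any())  # False
```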
It is important to stress that, although we will see many phenomena in the simulations that are analogous to anti-ferromagnetism, the systems are not formally identical.

§.§ Master equation Let us analyze the logit model using a mean-field approximation at the nearest-neighbor level <cit.>. For simplicity we set S=0 in this section. If T>1, we have the so-called weak prisoner's dilemma. Consider a central site i on a lattice. It interacts only with its four (square lattice) or six (triangular lattice) nearest neighbors (the neighborhood Ω). In this setup, the master equation for the average fraction of cooperators, ρ (note that ρ is a function of t), reads

ρ̇ = (1-ρ) Γ_+(D→C) - ρ Γ_-(C→D),

where Γ_± is the probability for the central player to change its strategy to C (D). There are \binom{N}{n} different neighborhood configurations with n cooperative neighbors, where N is 4 for the square lattice and 6 for the triangular lattice. Therefore:

Γ_± = ∑_{n=0}^{N} \binom{N}{n} ρ^n (1-ρ)^{N-n} P_±(u_i,u_Ω).

Here, the binomial coefficients \binom{N}{n} weight the repetitions of identical configurations. Note that while n varies in the summation, N is fixed for each lattice type. The term ρ^n (1-ρ)^{N-n} weights the probability of such a configuration, and P_±(u_i,u_Ω) is the probability, in a specific configuration, that the central site will turn into a cooperator (P_+) or a defector (P_-). This probability is the only term that depends directly on the chosen update rule (logit or imitation). For the logit rule the focal site changes its state by comparing its current payoff (u_i) with its future payoff if the state were changed (u^*). Calculating P_+(u_i,u_Ω) for the case where the central site is D and changes to C, we have:

P_+(u_i,u_Ω) = \frac{1}{1+e^{-(u^*-u_i)/K}}.

Analytically, one of the advantages of the logit model is that this probability does not depend explicitly on the payoffs of the neighborhood Ω. If the central site is D (C), the payoff difference, for any configuration, will be:

(u^*-u_i)_{D→C} = n(1-T),  (u^*-u_i)_{C→D} = n(T-1).

Using A=(1-T)/K to simplify, we get:

P_±(u_i,u_Ω) = \frac{1}{1+e^{∓nA}}.

Recall that the solution of the master equation for the imitation model can be found in the literature <cit.>. The master equation for the logit model becomes:

ρ̇ = ∑_{n=0}^{N} \binom{N}{n} ρ^n (1-ρ)^{N-n} \left( \frac{1}{1+e^{-n(1-T)/K}} - ρ \right).

This yields a 6th-order polynomial that analytically has at least one root in the region 0<ρ^*<1. This is independent of T, meaning that at the nearest-neighbor level there exists at least some minimum cooperation level independently of the value of the temptation. The existence of a minimum level of cooperation is an interesting result, agreeing with other approaches to innovative dynamics that found similar results using Monte Carlo simulations and experiments with humans <cit.>. To obtain the time-independent solution of the master equation we use a 4th-order Runge-Kutta integrator. As in other models, the system reaches a stable state after some time. In our model the behavior of ρ(t)_{t→∞} is independent of the initial fraction of cooperation. This is an important feature, as not every update rule has an equilibrium state independent of the initial conditions <cit.>. Figure <ref> shows the cooperation level at the stable equilibrium (ρ(t)_{t→∞}) as a function of T. We compare the Monte Carlo simulation (further analyzed below) with the numerical solution of the master equation for both topologies.
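A minimal sketch of this numerical procedure, with our own choices of time step, horizon, and initial condition, is given below; it integrates the logit master equation above with a 4th-order Runge-Kutta scheme and returns the stationary cooperation level for a given temptation T:

```python
from math import comb, exp

def rho_dot(rho, T, N=6, K=0.1):
    """Right-hand side of the logit master equation (S = 0, R = 1, P = 0)."""
    total = 0.0
    for n in range(N + 1):
        p_plus = 1.0 / (1.0 + exp(-n * (1.0 - T) / K))  # P_+ for n C-neighbors
        total += comb(N, n) * rho**n * (1.0 - rho)**(N - n) * (p_plus - rho)
    return total

def steady_state(T, N=6, rho0=0.5, dt=0.01, steps=20000):
    """4th-order Runge-Kutta integration until (approximate) stationarity."""
    rho = rho0
    for _ in range(steps):
        k1 = rho_dot(rho, T, N)
        k2 = rho_dot(rho + 0.5 * dt * k1, T, N)
        k3 = rho_dot(rho + 0.5 * dt * k2, T, N)
        k4 = rho_dot(rho + dt * k3, T, N)
        rho += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return rho

print(steady_state(T=1.5))  # basal cooperation level, N = 6 (triangular lattice)
```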
The mean-field approach agrees with the simulation results and, most importantly, both approaches report a basal cooperation level for any T. The mean-field technique is a good approximation for obtaining insights and confirming the predictions of other methods. Even so, it does not always return the same results as the structured population <cit.>; it is only an approximation. In our case, it is interesting to notice that both methodologies (Monte Carlo and mean field) report a minimal level of cooperation that is independent of the value of the temptation. This kind of basal cooperation level was also found in other studies using innovative dynamics, even with different update rules and topologies <cit.>.

§.§ Monte Carlo simulations We use an asynchronous Monte Carlo procedure to simulate the evolutionary dynamics. First, a player i is chosen at random. The cumulative payoffs of i and of its nearest neighbors are calculated. Then player i changes its strategy based on the update probability defined in Eq. <ref> for logit or in Eq. <ref> for imitation dynamics. One Monte Carlo step (MCS) consists of this process repeated L^2 times, where L is the linear size of the lattice (here we set L=100). For a detailed discussion of Monte Carlo methods in evolutionary dynamics we suggest Refs. <cit.>. We ran the algorithm until the equilibrium state was reached (10^4-10^5 MCS); we then averaged over 1000 MCS for 10-20 different initial conditions. We used periodic boundary conditions and a random, homogeneous initial strategy distribution.

Starting with the weak prisoner's dilemma (S=0), we compare the logit with the imitation dynamics. Figure <ref> shows ρ as a function of T. The logit model has a sharp decay in cooperation, almost at the same point where the imitation model has a transition <cit.>. This is valid for both square and triangular lattices. Also, it is remarkable that for large T a minimal global value of cooperation survives, confirming the prediction of our mean-field approach. Figure <ref> shows the fraction of cooperation in the entire T-S plane in the imitation and logit models, for both triangular and square lattices. Notice how similar the outcomes are in the HG, PD and SH games. The difference appears in the snowdrift game. Imitation dynamics yields similar results on both square and triangular lattices, but logit dynamics yields different results. More specifically, while in the logit model on the square lattice there is a flat plateau of 50% cooperation (studied in depth in <cit.>), on the triangular lattice there are basically two phases separated by a straight diagonal line (S=T-1). Notice that on the square lattice the whole SD region is associated with a static checkerboard pattern, corresponding to the Nash equilibrium, which is the most efficient way of increasing the population payoff, as previously stressed in <cit.>. It is also interesting to notice that for the logit model, cooperation survives independently of T for some range of S (around S≃-0.15) in the PD region. Studying the SD region for the triangular lattice with the logit dynamics, we find a plateau of ρ≃0.35 below the diagonal line and ρ≃0.65 above it, with minor fluctuations of ±0.05. We further refer to Refs. <cit.> for the analysis of the square lattice, where such a plateau is also found with a single phase (ρ=0.5).
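For reference, a minimal sketch of one such asynchronous Monte Carlo step with the logit rule is given below; the triangular lattice is represented, as is common, by a square grid with one set of diagonals added, and the payoff values and random seed are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 100, 0.1
T, S, R, P = 1.5, 0.5, 1.0, 0.0                    # an example snowdrift point
lattice = rng.integers(0, 2, size=(L, L))          # 1 = cooperator, 0 = defector
NEIGH = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]  # 6 neighbors

def payoff(s, i, j):
    """Cumulative payoff of strategy s at site (i, j), periodic boundaries."""
    u = 0.0
    for di, dj in NEIGH:
        n = lattice[(i + di) % L, (j + dj) % L]
        u += (R if n else S) if s else (T if n else P)
    return u

def mcs():
    """One Monte Carlo step: L*L asynchronous logit updates."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        s = int(lattice[i, j])
        du = payoff(1 - s, i, j) - payoff(s, i, j)
        if rng.random() < 1.0 / (1.0 + np.exp(-du / K)):
            lattice[i, j] = 1 - s
```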
In principle, there would be two “ground states” exhibiting a honeycomb pattern: a concatenation of cells with a central D surrounded by C's and a concatenation of cells with a central C surrounded by D's. Let us consider the first “ground state”, where the central site in each cell of the honeycomb configuration is a defector surrounded by 6 cooperators. Each one of these 6 cooperators is shared by 3 distinct cells. The fraction ρ in an infinite lattice is calculated as the fraction of cooperators in the cell, weighting each site by the number of blocks which share it. So we have

ρ = \frac{6/3}{6/3+1} = \frac{2}{3}.

The calculation for the other “ground state” is analogous, yielding ρ=1/3. Most interestingly, the system is driven to one of the two “ground state” configurations depending on the payoff parameters. To make this point clearer, in Fig. <ref> we show the fraction of cooperation for parameters along a straight line orthogonal to the line that divides the plateaus observed in the SD region. We can clearly see the two plateaus and the transition point where the roles of C and D players are exchanged, as shown in the insets with the corresponding patterns.

The logit model seems to drive the system to the maximum attainable global payoff (related to the minimum energy level). To further study this hypothesis, we quantify the frustration, ϕ, defined as the fraction of frustrated links. In SD games the frustrated links are the CC and DD pairs. Note that our definition of frustration is a good measurement of the “homogeneity” and global spatial structure of the lattice: the frustration is 1 for any homogeneous state, regardless of the cooperation level, and can be zero, for example, in the checkerboard configuration of cooperators and defectors on square lattices. In both “ground state” configurations of the triangular lattice, we can easily show that the frustration is equal to 1/3. In Fig. <ref> we compare the lattice frustration of the logit and imitation rules in the SD region (frustration is meaningless outside this parameter range). The imitation model maintains a high frustration, around 60%, whereas the logit model maintains a moderate frustration, around 35%, independently of the payoff values T and S, which is very close to the analytical value for the honeycomb structure. Note that on triangular lattices the minimum achievable frustration is 1/3, as there is an inevitable topological frustration. Also note how the frustration quickly rises to almost 1 at the borders of the diagram, where there is full cooperation or full defection.

To further support our claims, we present snapshots of the lattices in dynamic equilibrium in the SD region. The Monte Carlo method is of course probabilistic, and accurate results depend on sufficiently large averages <cit.>. Even so, it is insightful to see images of the lattice after the system has reached dynamical equilibrium. Figure <ref> shows typical snapshots of the logit and imitation update rules for square and triangular lattices. The differences in spatial organization exhibited by each model are clear. We see that in both topologies the imitation update tends to maintain cooperators in clusters, whereas the logit model tends to distribute strategies more homogeneously. Specifically, the logit model on the square lattice tends to form a checkerboard pattern, a behavior that has been consistently reported for different innovative rules <cit.> and is usually attributed to the population re-arranging itself to receive the highest total payoff achievable. For the triangular lattice we can see that the expected frustrated pattern illustrated in Fig. <ref> indeed emerges.
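The frustration ϕ defined above can be measured with a short routine such as the following sketch, which reuses the grid-plus-diagonals representation of the triangular lattice from the previous snippet and counts every undirected link exactly once:

```python
import numpy as np

def frustration(lattice):
    """Fraction of frustrated (CC or DD) links on the triangular lattice."""
    L = lattice.shape[0]
    frustrated, links = 0, 0
    for di, dj in [(1, 0), (0, 1), (1, 1)]:  # forward directions: each link once
        shifted = np.roll(np.roll(lattice, -di, axis=0), -dj, axis=1)
        frustrated += int(np.sum(lattice == shifted))
        links += L * L
    return frustrated / links  # 1/3 for the honeycomb "ground states"
```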
It is worth mentioning that it is the absence of clustering that makes the mean-field approximation a good one for the logit dynamics. Such phenomena suggest a general behavior exhibited by innovative dynamics that leads to the emergence of specific spatial structures other than cooperation islands. We note that, while clustering has a strong effect on everyday cooperative interactions <cit.>, the emergence of diluted patterns in our model suggests that some role-separating structure may also emerge in human populations, where members take different roles to obtain a higher collective income.

Lastly, we analyze the representative microscopic mechanisms that explain how strategy evolution accommodates topological frustration. In Fig. <ref>(a) we present a local strategy distribution where the central site is highly unlikely to change its strategy, which makes the honeycomb configuration very stable. A conceptually similar stable local distribution can be drawn where a defector is surrounded by cooperators. However, the sites around the central stable site are not fully satisfied, because they have some frustrated bonds. This situation is illustrated in Fig. <ref>(b), where a frustrated node is in the center. Here the central site has a higher chance of changing its strategy depending on the difference between (3T) and (3+3S). The threshold value is at the line S=T-1, which agrees perfectly with the border line we observed in Fig. <ref>. The frustrated sites have a pivotal role in the separation of phases illustrated in Fig. <ref>. For low T values, that is for S>T-1, cooperators fare better than defectors, allowing them to stay in the “frustrated” sites of the honeycomb configuration. This results in a large number of cooperators, as every defector will be surrounded by 6 cooperators (ρ∼2/3 in the infinitely repeated limit). The opposite is also true for S<T-1, namely, the defectors have a high payoff, allowing them to stay in the frustrated positions of the honeycomb patches. As a result, a stable cooperator will be surrounded by 6 defectors, yielding a relatively low cooperation level (ρ∼1/3 in the infinite limit).

We found that frustration can induce two distinct organized patterns on triangular lattices. As we noted, the square lattice topology can be considered the opposite extreme case, where there are no frustrated bonds between players. We wonder how these extreme cases can be bridged by an appropriately modified topology where the frustration level can be tuned gradually. To generate such an intermediate level of inherent frustration we modify the triangular lattice by removing the two diagonal connections of a site. The control parameter is then the fraction X of sites that have their diagonal links removed. Accordingly, X=0 corresponds to the triangular lattice, while at X=1 the resulting topology agrees with the square lattice. Note that the network remains static throughout the evolutionary process, and we study how the strategy evolution may change due to the intermediate level of topological frustration.
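A sketch of how the interpolating topology could be constructed is given below; X = 0 reproduces the triangular lattice and X = 1 the square lattice, with periodic boundaries as in the simulations above:

```python
import numpy as np

def neighbor_lists(L, X, rng):
    """Adjacency sets: triangular lattice (square grid plus (1, 1) diagonals)
    with the two diagonal links of a random fraction X of sites removed."""
    neigh = {(i, j): {((i + 1) % L, j), ((i - 1) % L, j),
                      (i, (j + 1) % L), (i, (j - 1) % L),
                      ((i + 1) % L, (j + 1) % L), ((i - 1) % L, (j - 1) % L)}
             for i in range(L) for j in range(L)}
    for k in rng.choice(L * L, size=int(X * L * L), replace=False):
        i, j = divmod(int(k), L)
        for d in (((i + 1) % L, (j + 1) % L), ((i - 1) % L, (j - 1) % L)):
            neigh[(i, j)].discard(d)   # remove the diagonal link in both
            neigh[d].discard((i, j))   # adjacency sets it appears in
    return neigh

triangular = neighbor_lists(100, 0.0, np.random.default_rng(1))
square = neighbor_lists(100, 1.0, np.random.default_rng(1))
```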
Figure <ref> shows the resulting cooperation level in dependence on the payoff values for differently frustrated topologies, as characterized by the value of X. As we start mitigating the maximal frustration by increasing X, the steep transition point separating the two plateaus vanishes immediately, verifying that the two ordered phases can only be observed when the maximal level of frustration is present in the topology. As we increase X further, the resulting ρ(r) function approaches the ρ=0.5 plateau only in the X→1 limit. This simply means that the long-range anti-ferromagnetic order of competing strategies disappears immediately when we leave the X=1 point and introduce some frustration into the perfectly frustration-free square lattice topology. In between these extreme cases the shape of the ρ(r) function may inform us about the frustration level of an unknown interaction graph.

§ DISCUSSION

In social interactions finding the “best response” is often challenging, especially if the interaction involves a triangle. In general, frustrated situations can arise as a consequence of the type of game played, due to specific interaction topologies, but also because of other external factors. Motivated by this phenomenon, we considered the snowdrift game on a triangular lattice, where the topological frustration inhibits the expected optimal anti-coordination of strategies. By means of master-equation approximations and Monte Carlo simulations, we have studied the logit strategy updating protocol and the classical imitation dynamics, and we have compared the evolutionary outcomes obtained on the triangular lattice, on the square lattice, and on a gradual transition between the two, achieved by randomly deleting diagonal links of the triangular lattice. Our principal interest was to reveal how topological frustration influences strategy ordering in a spatial system.

In stark contrast to the square lattice, where anti-coordination ordering can emerge, the frustrated topology of the triangular lattice generates two ordered phases in the snowdrift quadrant. These states are separated by the S=T-1 line. While for low T values cooperators occupy 2/3 of the available sites and the rest is occupied by defectors, their roles are exchanged for high T values. In both phases the system evolves into a state which is reminiscent of a honeycomb-like pattern that helps to minimize the negative consequences of the topological frustration. We have identified the microscopic mechanisms which compose these patterns, and we have found that such formations are very stable. By comparing them with the outcome of imitation dynamics, we have found that the logit rule allows the whole system to evolve into the least frustrated strategy distribution that is achievable on each lattice, which also provides the highest population payoff. This state is reached via a strategy distribution where cooperators are less clustered compared to the patterns constructed by imitation dynamics.

The striking difference between frustration-free (square) and frustrated (triangular) lattices raises the question of what to expect if the interaction graph is disordered and the level of topological frustration is unknown. What kind of behavior is expected in such a case? To clarify this, we have introduced a method which allowed us to modify the level of topological frustration gradually. Starting from a triangular lattice, we randomly deleted a fraction of diagonal links, which decreased the frustration between neighboring bonds. If all diagonal links are deleted, we arrive at the square lattice. We found that the two ordered phases in the snowdrift quadrant disappear as we mitigate the frustration level.
On the other hand, the well-known anti-ferromagnetic-like checkerboard pattern observed on the square lattice, which is valid throughout the mentioned quadrant of the T-S plane, evaporates immediately as we introduce a tiny frustration into the interaction topology. These phenomena highlight how frustration can drive individuals to form complex global patterns, and more importantly, how innovative dynamics can drive the system to the best, i.e., least frustrated, evolutionary outcome.

As the strategy updating rule can drastically alter population dynamics, it is important to study how different protocols deal with frustration and which kinds of patterns can spontaneously emerge from the applied dynamics. This is even more interesting in the light of the emergence of complexity as individuals interact. The studied logit rule is essential to the emergence of the patterns shown here, and recent research shows the importance of integrating innovative dynamics in game-theoretical models, especially since humans seem to use rules other than simply imitating the best when playing evolutionary games <cit.>. We hope that this paper will motivate further research in this area.

This research was supported by the Brazilian Research Agencies CAPES-PDSE (Proc. BEX 7304/15-3), CNPq and FAPEMIG, by the Slovenian Research Agency (Grants J1-7009 and P5-0027), and by the Hungarian National Research Fund (Grant K-120785).

[Pennisi(2005)] E. Pennisi, Science 309, 93 (2005).
[Maynard Smith(1982)] J. Maynard Smith, Evolution and the Theory of Games (Cambridge University Press, Cambridge, U.K., 1982).
[Weibull(1995)] J. W. Weibull, Evolutionary Game Theory (MIT Press, Cambridge, MA, 1995).
[Hofbauer and Sigmund(1998)] J. Hofbauer and K. Sigmund, Evolutionary Games and Population Dynamics (Cambridge University Press, Cambridge, U.K., 1998).
[Mesterton-Gibbons(2001)] M. Mesterton-Gibbons, An Introduction to Game-Theoretic Modelling, 2nd Edition (American Mathematical Society, Providence, RI, 2001).
[Nowak(2006)] M. A. Nowak, Evolutionary Dynamics (Harvard University Press, Cambridge, MA, 2006).
[Axelrod(1984)] R. Axelrod, The Evolution of Cooperation (Basic Books, New York, 1984).
[Wilson(1971)] E. O. Wilson, The Insect Societies (Harvard Univ. Press, Harvard, 1971).
[Skutch(1961)] A. F. Skutch, Condor 63, 198 (1961).
[Nowak and Highfield(2011)] M. A. Nowak and R. Highfield, SuperCooperators: Altruism, Evolution, and Why We Need Each Other to Succeed (Free Press, New York, 2011).
[Nowak and May(1992)] M. A. Nowak and R. M. May, Nature 359, 826 (1992).
[Hamilton(1964)] W. D. Hamilton, J. Theor. Biol. 7, 1 (1964).
[Alizon and Taylor(2008)] S. Alizon and P. Taylor, Evolution 62, 1335 (2008).
[Sicardi et al.(2009)] E. A. Sicardi, H. Fort, M. H. Vainstein, and J. J. Arenzon, J. Theor. Biol. 256, 240 (2009).
[Trivers(1971)] R. L. Trivers, Q. Rev. Biol. 46, 35 (1971).
[Axelrod and Hamilton(1981)] R. Axelrod and W. D. Hamilton, Science 211, 1390 (1981).
[Santos and Pacheco(2005)] F. C. Santos and J. M. Pacheco, Phys. Rev. Lett. 95, 098104 (2005).
[Santos et al.(2006)] F. C. Santos, J. M. Pacheco, and T. Lenaerts, Proc. Natl. Acad. Sci. USA 103, 3490 (2006).
[Gómez-Gardeñes et al.(2007)] J. Gómez-Gardeñes, M. Campillo, L. M. Floría, and Y. Moreno, Phys. Rev. Lett. 98, 108103 (2007).
[Wilson(1977)] D. S. Wilson, Am. Nat. 111, 157 (1977).
[Szolnoki and Szabó(2007)] A. Szolnoki and G. Szabó, EPL 77, 30004 (2007).
[Perc and Szolnoki(2008)] M. Perc and A. Szolnoki, Phys. Rev. E 77, 011904 (2008).
[Santos et al.(2008)] F. C. Santos, M. D. Santos, and J. M. Pacheco, Nature 454, 213 (2008).
[Santos et al.(2012)] F. C. Santos, F. Pinheiro, T. Lenaerts, and J. M. Pacheco, J. Theor. Biol. 299, 88 (2012).
[Zimmermann et al.(2004)] M. G. Zimmermann, V. M. Eguíluz, and M. San Miguel, Phys. Rev. E 69, 065102(R) (2004).
[Zimmermann and Eguíluz(2005)] M. G. Zimmermann and V. M. Eguíluz, Phys. Rev. E 72, 056118 (2005).
[Fu et al.(2009)] F. Fu, T. Wu, and L. Wang, Phys. Rev. E 79, 036101 (2009).
[Du et al.(2009)] W.-B. Du, X.-B. Cao, M.-B. Hu, and W.-X. Wang, EPL 87, 60004 (2009).
[Lee et al.(2011)] S. Lee, P. Holme, and Z.-X. Wu, Phys. Rev. Lett. 106, 028702 (2011).
[Gómez-Gardeñes et al.(2011)] J. Gómez-Gardeñes, D. Vilone, and A. Sánchez, EPL 95, 68003 (2011).
[Ohdaira and Terano(2011)] T. Ohdaira and T. Terano, Journal of Artificial Societies and Social Simulation 14, 3 (2011).
[Tanimoto et al.(2012)] J. Tanimoto, M. Brede, and A. Yamauchi, Phys. Rev. E 85, 032101 (2012).
[Santos et al.(2014)] M. Santos, S. N. Dorogovtsev, and J. F. F. Mendes, Sci. Rep. 4, 4436 (2014).
[Pavlogiannis et al.(2015)] A. Pavlogiannis, K. Chatterjee, B. Adlam, and M. A. Nowak, Sci. Rep. 5, 17147 (2015).
[Wu et al.(2015)] Z.-X. Wu, Z. Rong, and M. Z. Q. Chen, EPL 110, 30002 (2015).
[Hindersin and Traulsen(2015)] L. Hindersin and A. Traulsen, PLoS Comput. Biol. 11, e1004437 (2015).
[Chen et al.(2016)] W. Chen, T. Wu, Z. Li, and L. Wang, Physica A 443, 192 (2016).
[Szabó and Fáth(2007)] G. Szabó and G. Fáth, Phys. Rep. 446, 97 (2007).
[Roca et al.(2009a)] C. P. Roca, J. A. Cuesta, and A. Sánchez, Phys. Life Rev. 6, 208 (2009a).
[Perc and Szolnoki(2010)] M. Perc and A. Szolnoki, BioSystems 99, 109 (2010).
[Perc et al.(2013)] M. Perc, J. Gómez-Gardeñes, A. Szolnoki, L. M. Floría, and Y. Moreno, J. R. Soc. Interface 10, 20120997 (2013).
[Pacheco et al.(2014)] J. M. Pacheco, V. V. Vasconcelos, and F. C. Santos, Physics of Life Reviews 11, 573 (2014).
[Wang et al.(2015a)] Z. Wang, L. Wang, A. Szolnoki, and M. Perc, Eur. Phys. J. B 88, 124 (2015a).
[Wang et al.(2015b)] Z. Wang, S. Kokubo, M. Jusup, and J. Tanimoto, Phys. Life Rev. 14, 1 (2015b).
[Vukov et al.(2006)] J. Vukov, G. Szabó, and A. Szolnoki, Phys. Rev. E 73, 067103 (2006).
[Perc(2006)] M. Perc, New J. Phys. 8, 22 (2006).
[Tanimoto(2007)] J. Tanimoto, Phys. Rev. E 76, 041130 (2007).
[Szolnoki et al.(2008a)] A. Szolnoki, M. Perc, and Z. Danku, EPL 84, 50007 (2008a).
[Szolnoki et al.(2008b)] A. Szolnoki, M. Perc, and G. Szabó, Eur. Phys. J. B 61, 505 (2008b).
[Jiang et al.(2009)] L.-L. Jiang, M. Zhao, H.-X. Yang, J. Wakeling, B.-H. Wang, and T. Zhou, Phys. Rev. E 80, 031144 (2009).
[Devlin and Treloar(2009)] S. Devlin and T. Treloar, Phys. Rev. E 79, 016107 (2009).
[Shigaki et al.(2012)] K. Shigaki, S. Kokubo, J. Tanimoto, A. Hagishima, and N. Ikegaya, EPL 98, 40008 (2012).
[Hauser et al.(2014)] O. P. Hauser, A. Traulsen, and M. A. Nowak, J. Theor. Biol. 343, 178 (2014).
[Yuan and Xia(2014)] W.-J. Yuan and C.-Y. Xia, PLoS ONE 9, e91012 (2014).
[Iwata and Akiyama(2015)] M. Iwata and E. Akiyama, Physica A 448, 224 (2015).
[Amaral et al.(2015)] M. A. Amaral, L. Wardil, and J. K. L. da Silva, J. Phys. A 48, 445002 (2015).
[Tanimoto and Kishimoto(2015)] J. Tanimoto and N. Kishimoto, Phys. Rev. E 91, 042106 (2015).
[Liu et al.(2015)] R.-R. Liu, C.-X. Jia, and Z. Rong, EPL 112, 48005 (2015).
[Javarone(2016)] M. A. Javarone, Eur. Phys. J. B 89, 42 (2016).
[Amaral et al.(2016)] M. A. Amaral, L. Wardil, M. Perc, and J. K. L. da Silva, Phys. Rev. E 93, 042304 (2016).
[Matsuzawa et al.(2016)] R. Matsuzawa, J. Tanimoto, and E. Fukuda, Phys. Rev. E 94, 022114 (2016).
[Javarone and Battiston(2016)] M. A. Javarone and F. Battiston, J. Stat. Mech. 2016, 073404 (2016).
[Javarone et al.(2016)] M. A. Javarone, A. Antonioni, and F. Caravelli, EPL 114, 38001 (2016).
[Wardil and da Silva(2009)] L. Wardil and J. K. L. da Silva, EPL 86, 38001 (2009).
[Robinson et al.(2011)] M. D. Robinson, D. P. Feldman, and S. R. McKay, Chaos 21, 037114 (2011).
[Binder and Landau(1980)] K. Binder and D. P. Landau, Phys. Rev. B 21, 1941 (1980).
[Gracia-Lázaro et al.(2012a)] C. Gracia-Lázaro, J. Cuesta, A. Sánchez, and Y. Moreno, Sci. Rep. 2, 325 (2012a).
[Gracia-Lázaro et al.(2012b)] C. Gracia-Lázaro, A. Ferrer, G. Ruiz, A. Tarancón, J. Cuesta, A. Sánchez, and Y. Moreno, Proc. Natl. Acad. Sci. USA 109, 12922 (2012b).
[Grujić et al.(2014)] J. Grujić, C. Gracia-Lázaro, M. Milinski, D. Semmann, A. Traulsen, J. A. Cuesta, Y. Moreno, and A. Sánchez, Sci. Rep. 4, 4615 (2014).
[Vukov et al.(2012)] J. Vukov, F. Santos, and J. Pacheco, New J. Phys. 14, 063031 (2012).
[Blume and Gneezy(2010)] A. Blume and U. Gneezy, Games Econ. Behav. (2010).
[Bonawitz et al.(2014)] E. Bonawitz, S. Denison, A. Gopnik, and T. L. Griffiths, Cogn. Psychol. 74, 35 (2014).
[Blume(1995)] L. E. Blume, Games Econ. Behav. 11, 111 (1995).
[Szabó et al.(2013)] G. Szabó, A. Szolnoki, and L. Czakó, J. Theor. Biol. 317, 126 (2013).
[Szabó et al.(2005)] G. Szabó, J. Vukov, and A. Szolnoki, Phys. Rev. E 72, 047107 (2005).
[Wedekind and Milinski(1996)] C. Wedekind and M. Milinski, Proc. Natl. Acad. Sci. USA 93, 2686 (1996).
[Dalton(2010)] P. S. Dalton, SSRN Electron. J. 2010-23, 1 (2010).
[Macy and Flache(2002)] M. W. Macy and A. Flache, Proc. Natl. Acad. Sci. USA 99, 7229 (2002).
[Roca et al.(2009b)] C. P. Roca, J. A. Cuesta, and A. Sánchez, Eur. Phys. J. B 71, 587 (2009b).
[Sysi-Aho et al.(2005)] M. Sysi-Aho, J. Saramäki, J. Kertész, and K. Kaski, Eur. Phys. J. B 44, 129 (2005).
[Grujić et al.(2010)] J. Grujić, C. Fosco, L. Araujo, J. A. Cuesta, and A. Sánchez, PLoS ONE 5, e13749 (2010).
[Szolnoki et al.(2011)] A. Szolnoki, N.-G. Xie, C. Wang, and M. Perc, EPL 96, 38002 (2011).
[Roca et al.(2009c)] C. P. Roca, J. A. Cuesta, and A. Sánchez, Phys. Rev. E 80, 046106 (2009c).
[Szabó et al.(2010)] G. Szabó, A. Szolnoki, M. Varga, and L. Hanusovszky, Phys. Rev. E 82, 026110 (2010).
[Szabó and Szolnoki(2012)] G. Szabó and A. Szolnoki, J. Theor. Biol. 299, 81 (2012).
[Wang et al.(2012)] Z. Wang, A. Szolnoki, and M. Perc, Sci. Rep. 2, 369 (2012).
[Glauber(1963)] R. J. Glauber, J. Math. Phys. 4, 294 (1963).
[Hauert and Szabó(2005)] C. Hauert and G. Szabó, Am. J. Phys. 73, 405 (2005).
[Hauert and Doebeli(2004)] C. Hauert and M. Doebeli, Nature 428, 643 (2004).
[Szabó et al.(2007)] G. Szabó, A. Szolnoki, and G. A. Sznaider, Phys. Rev. E 76, 051921 (2007).
[Nowak and Sigmund(2004)] M. A. Nowak and K. Sigmund, Science 303, 793 (2004).
[Choi et al.(2015)] W. Choi, S.-H. Yook, and Y. Kim, Phys. Rev. E 92, 052140 (2015).
[Weisbuch and Stauffer(2007)] G. Weisbuch and D. Stauffer, Physica A 384, 542 (2007).
[Blume(1993)] L. E. Blume, Games Econ. Behav. 5, 387 (1993).
[Nishimori(2001)] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing: An Introduction (Clarendon Press, Oxford, UK, 2001).
[Galam and Walliser(2010)] S. Galam and B. Walliser, Physica A 389, 481 (2010).
[Matsuda et al.(1992)] H. Matsuda, N. Ogita, A. Sasaki, and K. Sato, Progr. Theor. Phys. 88, 1035 (1992).
[Schuster and Sigmund(1983)] P. Schuster and K. Sigmund, J. Theor. Biol. 100, 533 (1983).
[Szolnoki and Perc(2014)] A. Szolnoki and M. Perc, Phys. Rev. E 89, 022804 (2014).
[Fort and Viola(2005)] H. Fort and S. Viola, J. Stat. Mech. Theor. Exp. 2, P01010 (2005).
[Vainstein and Arenzon(2001)] M. H. Vainstein and J. J. Arenzon, Phys. Rev. E 64, 051905 (2001).
[Arapaki(2009)] E. Arapaki, Physica A 388, 2757 (2009).
[Binder and Hermann(1988)] K. Binder and D. K. Hermann, Monte Carlo Simulations in Statistical Physics (Springer, Heidelberg, 1988).
[Binder(1997)] K. Binder, Rep. Prog. Phys. 60, 487 (1997).
[Huberman and Glance(1993)] B. Huberman and N. Glance, Proc. Natl. Acad. Sci. USA 90, 7716 (1993).
[Buonanno et al.(2009)] P. Buonanno, D. Montolio, and P. Vanin, J. Law Econ. 52, 145 (2009).
[Botzen(2016)] K. Botzen, REGION 3, 1 (2016).
[Rand et al.(2014)] D. G. Rand, A. Peysakhovich, G. T. Kraft-Todd, G. E. Newman, O. Wurzbacher, M. A. Nowak, and J. D. Greene, Nat. Commun. 5, 3677 (2014).
[Capraro et al.(2014)] V. Capraro, J. J. Jordan, and D. G. Rand, Sci. Rep. 4, 6790 (2014).
[Capraro and Cococcioni(2015)] V. Capraro and G. Cococcioni, Proc. R. Soc. B 282, 20150237 (2015).
[Bear and Rand(2016)] A. Bear and D. G. Rand, Proc. Natl. Acad. Sci. 113, 936 (2016).
http://arxiv.org/abs/1702.08542v1
{ "authors": [ "Marco A. Amaral", "Matjaz Perc", "Lucas Wardil", "Attila Szolnoki", "Elton J. da Silva Júnior", "Jafferson K. L. da Silva" ], "categories": [ "physics.soc-ph", "cond-mat.stat-mech", "q-bio.PE" ], "primary_category": "physics.soc-ph", "published": "20170227213720", "title": "Role-separating ordering in social dilemmas controlled by topological frustration" }
Simon King, Sergei Matveev, Vladimir Tarkaev, Vladimir Turaev

Dijkgraaf-Witten Z_2-invariants for Seifert manifolds

Faculty of Mathematics and Natural Science, Institute of Mathematics Education, Gronewaldstr. 2, D-50931 Cologne, Germany simon.king@uni-koeln.de Laboratory of Quantum Topology, Chelyabinsk State University, Brat'ev Kashirinykh street 129, Chelyabinsk 454001, Russia, and Krasovsky Institute of Mathematics and Mechanics of RAS svmatveev@gmail.com Laboratory of Quantum Topology, Chelyabinsk State University, Brat'ev Kashirinykh street 129, Chelyabinsk 454001, Russia, and Krasovsky Institute of Mathematics and Mechanics of RAS v.tarkaev@gmail.com Department of Mathematics, Indiana University, Bloomington IN 47405, USA, and Laboratory of Quantum Topology, Chelyabinsk State University, Brat'ev Kashirinykh street 129, Chelyabinsk 454001, Russia vturaev@yahoo.com

DIJKGRAAF-WITTEN Z_2-INVARIANTS FOR SEIFERT MANIFOLDS

Received 5 February 2016 / Accepted 6 December 2016
=======================================================

In this short paper we compute the values of Dijkgraaf-Witten invariants over Z_2 for all orientable Seifert manifolds with orientable bases. Mathematics Subject Classification 2000: 57M25, 57M27

§ THE DIJKGRAAF-WITTEN INVARIANTS In 1990, Dijkgraaf and Witten <cit.> proposed a new approach to constructing invariants of closed topological manifolds. Each DW-invariant of a closed oriented n-dimensional manifold M is determined by a choice of a finite group G, a subgroup U of the unitary group U(1), and an element h of the cohomology group H^n(B;U), where B=B(G) is the classifying space of G. Let S=S(M,B) be the set of all base point preserving maps M→B considered up to base point preserving homotopy. The set S is finite and can be identified with the set of homomorphisms π_1(M)→G. The Dijkgraaf-Witten invariant of M associated with h is the complex number

Z(M,h) = \frac{1}{|G|} \sum_{f∈S(M,B)} ⟨f^*(h),[M]⟩,

where |G| is the order of G and [M] is the fundamental class of M. In this paper, we consider only the special case where n=3 and both groups G and U have order 2 and are identified with the group ℤ_2. For the classifying space of ℤ_2 we take the infinite-dimensional projective space RP^∞, and for h we take the unique nontrivial element α^3 ∈ H^3(RP^∞;ℤ_2), where α is the generator of H^1(RP^∞;ℤ_2). In this situation, the value ⟨f^*(h),[M]⟩ belongs to ℤ_2, the formula given above takes the form

Z(M,α^3) = \frac{1}{2} \sum_{f∈S(M,B)} (-1)^{⟨f^*(α^3),[M]⟩},

and it becomes applicable to nonorientable manifolds. If H^1(M;ℤ_2)=0, then Z(M,α^3)=1/2. In all other cases Z(M,α^3) is an integer.

§ QUADRATIC FUNCTION AND ARF-INVARIANT Let M be a closed 3-manifold. We define a quadratic function Q_M: H^1(M;ℤ_2) → ℤ_2 by the rule Q_M(x) = ⟨x^3,[M]⟩, where x ∈ H^1(M;ℤ_2), [M] is the fundamental class of M, and x^3 ∈ H^3(M;ℤ_2) is the cube of x in the sense of the multiplication in cohomology. The corresponding pairing ℓ_M: H^1(M;ℤ_2) × H^1(M;ℤ_2) → ℤ_2 defined by ℓ_M(x,y) = Q_M(x+y) - Q_M(x) - Q_M(y) is bilinear. The following relation between the DW-invariant of M and the Arf-invariant of Q_M was discovered in [MT].

Theorem <cit.>. Let M be a closed connected 3-manifold, and let A ⊂ H^1(M;ℤ_2) be the annihilator of ℓ_M. If there exists x ∈ A such that x^3 ≠ 0, then Z(M,α^3) = 0. If there are no such elements, then Z(M,α^3) = 2^{k+m-1}(-1)^{Arf(Q_M)}, where m is the dimension of A and k equals half the dimension of the coset space H^1(M;ℤ_2)/A. Note that this theorem is true for orientable and non-orientable 3-manifolds.
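To illustrate the theorem, here are two sanity checks; these worked examples are ours, using only the statements above and standard cohomology facts:

```latex
% Two sanity checks for the theorem above (our own worked examples).
% (1) If H^1(M;\mathbb{Z}_2)=0, then A=0, so k=m=0 and Q_M\equiv 0; hence
\[
   Z(M,\alpha^3) = 2^{k+m-1}(-1)^{\operatorname{Arf}(Q_M)} = 2^{-1} = \tfrac12,
\]
% recovering the remark made in the first section.
% (2) For M=\mathbb{R}P^3 one has H^*(\mathbb{R}P^3;\mathbb{Z}_2)
% = \mathbb{Z}_2[x]/(x^4), so the generator x satisfies
% \langle x^3,[M]\rangle = 1. Over \mathbb{Z}_2,
\[
   \ell_M(x,x) = Q_M(x+x) - 2\,Q_M(x) = Q_M(0) = 0,
\]
% so A = H^1(M;\mathbb{Z}_2) contains the element x with x^3\neq 0, and
% therefore Z(\mathbb{R}P^3,\alpha^3)=0 by the theorem.
```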
However, it follows from the Postnikov Theorem <cit.> that if M is orientable, then the annihilator A of ℓ_M coincides with H^1(M;ℤ_2). Further on we will consider only orientable 3-manifolds. For brevity we will call an element x ∈ H^1(M;ℤ_2) essential if x^3 ≠ 0.

Corollary. Let M be an orientable closed connected 3-manifold. If there exists an essential x ∈ H^1(M;ℤ_2), then Z(M,α^3) = 0. If there are no such elements, then Z(M,α^3) = 2^{m-1}, where m is the dimension of H^1(M;ℤ_2).

In view of the above corollary, the following question is crucial for computing DW-invariants: given a 3-manifold M, does H^1(M;ℤ_2) contain an essential element? In general, the calculation of products in cohomology and the calculation of DW-invariants are quite cumbersome, see <cit.>. We prefer a very elementary method based on a nice structure of skeletons of Seifert manifolds. For simplicity, we restrict ourselves to Seifert manifolds fibered over S^2, although Theorems 2 – 4 remain true for Seifert manifolds fibered over any closed orientable surface. Proofs are the same.

§ SKELETONS OF SEIFERT MANIFOLDS Let M be a closed 3-manifold. A 2-dimensional polyhedron P ⊂ M is called a skeleton of M if M ∖ P consists of open 3-balls. Let us construct a skeleton of an orientable Seifert manifold M=(S^2; (p_i,q_i), 1≤ i≤ n) fibered over S^2 with exceptional fibers of types (p_i,q_i). Represent S^2 as a union of two discs Δ, D with common boundary. Choose inside Δ disjoint discs δ_i, 1≤ i ≤ n, and remove their interiors. The resulting punctured disc we denote Δ_0. Then we join the circles ∂δ_i with ∂D by disjoint arcs l_i ⊂ Δ_0, 1≤ i ≤ n. The skeleton P ⊂ M we are looking for is the union of the following surfaces, see Fig. <ref>.

* Annuli L_i = l_i × S^1 and tori t_i = ∂δ_i × S^1, 1≤ i ≤ n.
* The torus T = ∂D × S^1, the punctured disc Δ_0, and the disc D.
* Discs d_i attached to t_i along simple closed curves in t_i of types (p_i,q_i).

Note that the surfaces of the first two types lie in S^2 × S^1 while the d_i do not.

§ SEIFERT MANIFOLDS HAVING Z(M,α^3)=0 In this section we describe all orientable Seifert manifolds fibered over S^2 whose first cohomology group contains an essential x. Note that by Poincaré duality x ∈ H^1(M;ℤ_2) is essential if and only if its dual x_∗ ∈ H_2(M;ℤ_2) is essential in the sense that it can be realized by an odd surface, i.e. by a closed surface having an odd Euler characteristic. So instead of looking for essential 1-cocycles we will construct essential 2-cycles.

Theorem. Let M=(S^2; (p_i,q_i), 1≤ i≤ n) be an orientable Seifert manifold fibered over S^2 with exceptional fibers of types (p_i,q_i). Suppose that M contains an exceptional fiber f_i of type (p_i,q_i) and an exceptional fiber f_j of type (p_j,q_j) such that p_i is divisible by 4 while p_j is even but not divisible by 4. Then there is an essential x ∈ H^1(M;ℤ_2).

Proof. Choose disjoint discs D_i, D_j ⊂ S^2 containing the projection points of f_i, f_j, and join their boundaries by a simple arc c ⊂ S^2. Let P be the polyhedron in M consisting of the annulus L = c × S^1, two tori t_i = ∂D_i × S^1, t_j = ∂D_j × S^1, and meridional discs d_i, d_j ⊂ M of the solid tori which replace D_i × S^1 and D_j × S^1 in the standard construction of M. The boundary curves of d_i, d_j are of types (p_i,q_i), (p_j,q_j), respectively. See Fig. <ref>, to the left. Note that d_i can be chosen so that the circles ∂d_i and λ_i = ∂L ∩ t_i decompose t_i into p_i quadrilaterals admitting a black-white chessboard coloring. The same is true for d_j and t_j. Let us remove from P all (p_i+p_j)/2 quadrilaterals having the same (say, white) color. We get a closed surface F ⊂ M.
Since the Euler characteristic χ(P) of P is even and (p_i+p_j)/2 is odd, χ(F) is odd. Therefore F is odd and thus represents an essential 2-cycle.

Let M=(S^2; (p_i,q_i), 1≤ i≤ n) be an orientable Seifert manifold fibered over S^2 with exceptional fibers of types (p_i,q_i) such that (1) all p_i are odd and (2) the number of odd q_i is even. Denote by Q_∗ the sum of all q_i, and by P_∗ any alternating sum of all p_i for which the corresponding q_i are odd. Suppose that the integer ξ(M)=(Q_∗+P_∗)/2 is odd. Then there exists an essential x ∈ H^1(M; ℤ_2). Let us replace two exceptional fibers of types (p_i,q_i), (p_j,q_j) by exceptional fibers of types (p_i,q_i+p_i), (p_j,q_j-p_j). These operations (called parameter trading) preserve the parity of ξ(M) and produce another Seifert presentation of M, which also satisfies assumptions (1) and (2) of Theorem <ref>. Using such operations one can easily make all q_i divisible by 4, except the last one, say, q_n, which must be even in view of assumption (2). For this new presentation of M we have Q_∗ ≡ q_n mod 4 (since all other q_i are divisible by 4), and P_∗ = 0 (since there are no odd q_i). Taking into account that ξ(M) is odd, we may conclude that q_n/2 is odd. Let P be a skeleton of M constructed for this new presentation of M. Consider the union P_0 of the following 2-components of P, see Fig. <ref>, to the right:
* The torus t_n and the disc d_n attached to t_n along a simple closed curve of type (p_n, q_n);
* The disc D = S^2 ∖ Int(δ_n), where δ_n ⊂ S^2 is the meridional disc of t_n.
Then we apply the same trick as in the proof of Theorem <ref>, using the circle ∂d_n and taking the circle ∂δ_n instead of λ_n. Since q_n is even, the circles ∂d_n and ∂δ_n can be chosen so that they decompose t_n into q_n quadrilaterals admitting a black-white chessboard coloring. Let us remove from P_0 all q_n/2 quadrilaterals having the same (say, white) color. We get a closed surface F ⊂ M. Since the Euler characteristic of P_0 is even and q_n/2 (which has the parity of ξ(M)) is odd, χ(F) is odd. Therefore F represents an essential 2-cycle.

Let us introduce two classes 𝒜, ℬ of Seifert manifolds. Class 𝒜 consists of manifolds satisfying the assumptions of Theorem <ref>, class ℬ consists of manifolds satisfying the assumptions of Theorem <ref>. Let M be an orientable Seifert manifold fibered over S^2 with exceptional fibers of types (p_i,q_i), 1≤ i ≤ n. Suppose that M contains an essential 2-cycle x_∗. Then M belongs either to 𝒜 or to ℬ. In the proof of the theorem, we need the following self-evident lemma. Let τ be a torus and 𝒞 a finite collection of simple closed curves in τ such that all their intersection points are transverse. We will consider the union G of these curves as a graph. Suppose that the faces of G have a black-white chess coloring in the sense that any edge of G separates a black face from a white one. Then any general position simple closed curve in τ crosses the edges of G at an even number of points. Let P be the skeleton of M constructed in Section <ref>. Denote by Γ its singular graph consisting of the triple and fourfold lines of P. The remaining part of P consists of 2-components of P, that is, of surfaces which are glued to Γ along their boundary circles. Let B be the carrier of x_∗, i.e. the union of the 2-components of P which are black in the sense that they have coefficient 1 in the 2-chain representing x_∗. All other 2-components are white.

Case 1. Suppose that Δ_0 and hence D are white. Then the polyhedron P_0 = P ∖ Δ_0 is simple in the sense that the set of its singular points consists of triple lines and their crossing points.
It follows that B is a closed surface. Since x_∗ is essential, χ(B) is odd. We may assume that B is connected (if not, then the Euler characteristic of at least one component of B is odd, and we can take it instead of B). It follows that B contains two annuli, say, L_i, L_j, discs d_i, d_j, and one of the two annuli into which the curves L_i ∩ T and L_j ∩ T decompose the torus T. The total Euler characteristic of these 2-components is 0. Just as in the proof of Theorem <ref>, the circles ∂d_i and λ_i = t_i ∩ ∂L_i decompose t_i into p_i quadrilateral 2-components. Since x_∗ is a 2-cycle, they are colored according to the chessboard rule. It follows that p_i and p_j are even. The remaining part of B consists of (p_i+p_j)/2 black quadrilaterals in t_i and t_j. Since χ(B) is odd, we may conclude that one of p_i/2, p_j/2 is even while the other is odd. Therefore M ∈ 𝒜.

Case 2. Suppose that Δ_0 is black. Then each d_i is black. Otherwise the boundary of x_∗ would contain ∂δ_i, which is impossible since x_∗ is a 2-cycle. Let us prove that if L_i is black then q_i is odd. Consider the graph G = ∂d_i ∪ λ_i ⊂ t_i. As above, x_∗ induces a chess black-white coloring of its faces. Then a parallel copy ∂δ'_i of ∂δ_i crosses λ_i at one point and crosses ∂d_i at q_i points. By Lemma <ref> the number q_i+1 must be even, which means that q_i is odd. Similarly, if L_i is white then q_i is even. Let us prove that all p_i are odd. Suppose that L_i is black. Consider the graph G = ∂d_i ∪ ∂δ_i ⊂ t_i. It decomposes t_i into black-white colored faces. The coloring is induced by the 2-cycle x_∗. Let λ'_i ⊂ t_i be a parallel copy of λ_i. Then λ'_i crosses ∂d_i at p_i points and crosses ∂δ_i at one point. By Lemma 1 the number p_i+1 must be even, which means that p_i is odd. Suppose now that L_i is white; then q_i is even. Therefore p_i, being coprime with q_i, is odd. Let us prove that the number of black L_i, and hence the number of odd q_i, is even. This is because the boundary circles of the black L_i decompose T into black-white colored annuli such that any two neighboring annuli have different colors. Therefore any meridional circle of T, for example ∂D, crosses those boundary circles at an even number of points. Let us prove that ξ(M) is odd. Just as in the proof of Theorem <ref>, we transform the given Seifert presentation of M into a new Seifert presentation such that all q_i, 1≤ i≤ n-1, are divisible by 4 and q_n is even. Then the carrier B of x_∗ consists of the following black surfaces:
* Δ_0 and D;
* All discs d_i, 1≤ i ≤ n;
* Black quadrilaterals contained in t_i, 1≤ i≤ n. Each t_i contains q_i/2 black quadrilaterals, where (p_i,q_i) is the type of the corresponding d_i.
Of course B may contain the torus T, but it is homologically trivial and thus can be neglected. Note that B has only triple singularities and thus is a closed surface. Since x_∗ is essential, χ(B) is odd. Now we calculate χ(B) by counting the Euler characteristics of the above black surfaces. We get χ(B) = q_n/2 mod 2. It follows that q_n/2 is odd. Since all q_i are even, P_∗ = 0. Taking into account that all q_i, 1≤ i≤ n-1, are divisible by 4 and that q_n/2 is odd, we may conclude that Q_∗/2 = (∑_{i=1}^n q_i)/2 is odd. Therefore ξ(M) is odd. It follows that M is in class ℬ.

§ ACKNOWLEDGMENTS

S. Matveev, V. Tarkaev and V. Turaev were supported in part by the Laboratory of Quantum Topology, Chelyabinsk State University (contract no. 14.Z50.31.0020). S. Matveev and V.
Tarkaev were supported in part by the Ministry of Education and Science of Russia (state task number 1.1260.2014/K) and the Russian Foundation for Basic Research (project no. 14-01-00441).

[Br] Bryden, J., Hayat-Legrand, C., Zieschang, H., Zvengrowski, P., The cohomology ring of a class of Seifert manifolds, Topology Appl. 105 (2) (2000), 123–156.
[DT] Deloup, F., Turaev, V., On reciprocity, J. Pure Appl. Algebra 208 (2007), no. 1, 153–158.
[DW] Dijkgraaf, R., Witten, E., Topological gauge theories and group cohomology, Comm. Math. Phys. 129 (1990), 393–429.
[Ha] Haimiao Chen, The Dijkgraaf-Witten invariants of Seifert 3-manifolds with orientable bases, arXiv:1307.0364v3 [math.GT], 6 Apr 2014.
[MT] Matveev, S.V., Turaev, V.G., Dijkgraaf-Witten invariants over Z_2 for 3-manifolds, Doklady Mathematics, 2015, Vol. 91, No. 1, pp. 9–11.
[Po] Postnikov, M. M., The structure of the ring of intersections of three-dimensional manifolds (Russian), Doklady Akad. Nauk SSSR (N.S.) 61 (1948), 795–797.
[Wa] Wakui, M., On Dijkgraaf-Witten invariant for 3-manifolds, Osaka J. Math. 29 (1992), no. 4, 675–696.
http://arxiv.org/abs/1702.07882v1
{ "authors": [ "Simon King", "Sergei Matveev", "Vladimir Tarkaev", "Vladimir Turaev" ], "categories": [ "math.AT", "57M25, 57M27" ], "primary_category": "math.AT", "published": "20170225121540", "title": "Dijkgraaf-Witten $Z_2$-invariants for Seifert manifolds" }
DepthSynth: Real-Time Realistic Synthetic Data Generation from CAD Models for 2.5D Recognition

Benjamin Planche^1, Ziyan Wu^2, Kai Ma^3, Shanhui Sun^3, Stefan Kluckner^4, Oliver Lehmann^3, Terrence Chen^3, Andreas Hutter^1, Sergey Zakharov^1, Harald Kosch^5, Jan Ernst^2

^1Siemens Corporate Technology, Germany {benjamin.planche, andreas.hutter, sergey.zakharov}@siemens.com
^2Siemens Corporate Technology, USA {ziyan.wu, jan.ernst}@siemens.com
^3Siemens Healthineers, USA {kai.ma, shanhui.sun, oliver.lehmann, terrence.chen}@siemens.com
^4Siemens Mobility, Germany stefan.kluckner@siemens.com
^5University of Passau, Germany harald.kosch@uni-passau.de

December 30, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================

Recent progress in computer vision has been dominated by deep neural networks trained over large amounts of labeled data. Collecting such datasets is however a tedious, often impossible task; hence a surge in approaches relying solely on synthetic data for their training. For depth images however, discrepancies with real scans still noticeably affect the end performance. We thus propose an end-to-end framework which simulates the whole mechanism of these devices, generating realistic depth data from 3D models by comprehensively modeling vital factors, e.g. sensor noise, material reflectance, surface geometry. Not only does our solution cover a wider range of sensors and achieve more realistic results than previous methods, assessed through extended evaluation, but we go further by measuring the impact on the training of neural networks for various recognition tasks; demonstrating how our pipeline seamlessly integrates with such architectures and consistently enhances their performance.

§ INTRODUCTION

Understanding the 3D shape or spatial layout of a real-world object captured in a 2D image has been a classic computer vision problem for decades <cit.>. However, with the advent of low-cost depth sensors, specifically structured light cameras <cit.> such as Microsoft Kinect and Intel RealSense, its focus has seen a substantial paradigm shift. What in the past revolved around interpretation of raw pixels in 2D projections has now become the analysis of real-valued depth (2.5D) data. This has drastically increased the scope of practical applications, ranging from recovering the 3D geometry of complex surfaces <cit.> to real-time recognition of human actions <cit.>, inspiring research in automatic object detection <cit.>, classification <cit.> and pose estimation <cit.>. While real data is commonly used for comparison and training, a large number of these recent studies reduce the problems to matching acquired depth images of real-world objects to synthetic ones rendered from a database of pre-existing 3D models <cit.>. With no theoretical upper bound on obtaining synthetic images to either train complex models for classification <cit.> or fill large databases for retrieval tasks <cit.>, research continues to gain impetus in this direction.
Despite the simplicity of the above flavor of approaches, their performance is often restrained by the lack of realism (discrepancy with real data) or variability (limited configurability) of their rendering process. As a workaround, some approaches fine-tune their systems on a small set of real scans <cit.>; but in many cases, access to real data is too scarce to bridge the discrepancy gap. Other methods try instead to post-process the real images to clear some of their noise, making them more similar to synthetic data but losing details in the process <cit.>, which can be crucial for tasks such as pose estimation or fine-grained classification. A practical approach to address this problem is thus to generate more data, and in such a way that they mimic captured ones. This is however a non-trivial problem, as it is extremely difficult to exhaustively enumerate all physical variations of a given object—including surface geometry, reflectance, deformation, etc. Addressing those challenges in this paper, our key contributions are as follows: (a) we introduce DepthSynth, an end-to-end pipeline to synthetically generate depth images from 3D models by virtually and comprehensively reproducing the sensors' mechanisms (Figure <ref>), replicating realistic scenarios and thereby facilitating robust 2.5D applications, regardless of the ulterior choice of algorithm or feature space; (b) we systematically evaluate and compare the quality of the resulting images with theoretical models and other modern simulation methods; (c) we demonstrate the effectiveness and flexibility of our tool by pairing it with a state-of-the-art method for two recognition tasks.

The rest of the paper is organized as follows. In Section <ref>, we provide a survey of pertinent work to the interested reader. Next, in Section <ref>, we introduce our framework, detailing each step. In Section <ref>, we elaborate on our experimental protocol; first comparing the sensing errors induced by our tool to experimental data and theoretical models; then demonstrating the usefulness of our method by applying it to the pose estimation and classification tasks used as examples. We finally conclude with insightful discussions in Section <ref>.

§ RELATED WORK

With the popular advocacy of 2.5D/3D sensors for vision applications, depth information is the support of active research within computer vision. We emphasize recent approaches which employ synthetic scans, and present previous methods to generate such 2.5D data from CAD models.

Depth-based Methods and Synthetic Data
Crafting features to efficiently detect objects, discriminate them, evaluate their poses, etc. has long been a tedious task for computer vision researchers. With the rise of machine learning algorithms, these existing models have been complemented <cit.>, before being almost fully replaced by statistically-learned representations. Multiple recent approaches based on deep convolutional neural networks unequivocally outshone previous methods <cit.>, taking advantage of growing image datasets (such as ImageNet <cit.>) for their extensive training.
As a matter of fact, collecting and accurately labeling large amounts of real data is however an extremely tedious task, especially when 3D poses are considered for ground truth. In order to tackle this limitation, and concomitantly with the emergence of 3D model databases, renewed efforts <cit.> were put into the synthetic extension of image or depth scan datasets, by applying various deformations and noise to the original pictures or by rendering images from missing viewpoints. These augmented datasets were then used to train more flexible estimators. Among other deep learning-based methods for class and pose retrieval recently proposed <cit.>, the 3D ShapeNets method of Wu et al. <cit.> and the Render-for-CNN method of Su et al. <cit.> are two great examples of a second trend: using the ModelNet <cit.> and ShapeNet <cit.> 3D model datasets they put together, they let their networks learn features from this purely synthetic data, achieving consistent results in object registration, next-best-view prediction or pose estimation. Diving further into the problem of depth-based object classification and pose estimation chosen as illustration in this paper, Wohlhart and Lepetit <cit.> recently developed a scalable process addressing a two-degree-of-freedom pose estimation problem. Their approach evaluates the similarity between descriptors learned by a Convolutional Neural Network (CNN) with the Euclidean distance, followed by nearest neighbor search. They trained their network with real captured data, but also with simplistic synthetic images rendered from 3D models. In our work, this framework is extended to recognizing 3D pose with six degrees of freedom (6-DOF), and fed only with realistic synthetic images from DepthSynth. This way we achieve a significantly higher flexibility and scalability of the system, as well as a more seamless application to real-world use cases.

Synthetic Depth Image Generation
Early research along this direction involves the work of <cit.>, wherein searches based on 3D representations are introduced. More recently, Rozantsev et al. presented a thorough method for generating synthetic images <cit.>. Instead of focusing on making them look similar to real data for an empirical eye, they worked on a similarity metric based on the features extracted during the machine training. However, their model is tightly bound to properties impairing regular cameras (e.g. lighting and motion blur), which cannot be applied to depth sensors. Su et al. worked concurrently on a similar pipeline <cit.>, optimizing a synthetic RGB image renderer for the training of CNNs. While working on finding the best compromise between quality and scalability, they noticed the ability CNNs have to cheat at learning from too simplistic images (e.g. by using the constant lighting to deduce the models' poses, or by relying too much on contours for pictures rendered without background, etc.). Their pipeline has thus been divided into three steps: the rendering from 3D models, using random lighting parameters; the alpha composition with background images sampled from the SUN397 dataset <cit.>; and randomized cropping. By outperforming state-of-the-art pose estimation methods with their own one trained on synthetic images, they demonstrated the benefits such pipelines can bring to computer vision. Composed of similar steps as the method above, DepthSynth can also be compared to the one by Landau et al. <cit.>, reproducing the Microsoft Kinect's behavior by simulating the infrared capture and stereo-matching process.
Though their latter step inspired our own work, we opted for a less empirical, more exhaustive and generic model for the simulated projection and capture of the pattern(s). Similar simulation processes were also developed to reproduce the results of Time-of-Flight (ToF) sensors <cit.>. While this paper mostly focuses on single- or multi-shot structured-light sensors, DepthSynth's genericity allows it to also simulate ToF sensors, using a subset of its operations (discarding the baseline distance within the device, defining a simpler projector with phase shift, etc.). Such a subset is then comparable to the method developed by Keller <cit.>. For the sake of completeness, tools such as BlenSor <cit.> or pcl::simulation <cit.> should also be mentioned. However, such simulators were implemented to help test vision applications, and rely on a more simplistic modeling of the sensors, e.g. ignoring reflectance effects or using fractal noise for approximations.

§ METHODOLOGY

Our end-to-end pipeline for low-latency generation of realistic depth images from 3D CAD data covers various types of 3D/2.5D sensors, including single-shot/multi-shot structured light sensors, as well as Time-of-Flight (ToF) sensors (relatively simpler than structured-light ones to simulate, using a subset of the pipeline's components, e.g. i.i.d. per-pixel noise based on distance and object surface material, etc.). From here, we will mostly focus on single-shot sensors, e.g. Microsoft Kinect, Occipital Structure and Xtion Pro Live, given their popularity among the research community. The proposed pipeline can be defined as a sequence of procedures directly inspired by the underlying mechanism of the sensors we are simulating; i.e. from pattern projection and capture, followed by pre-processing and depth reconstruction using the acquired image and original pattern, to post-processing; as illustrated in Figure <ref>.

§.§ Understanding the Noise Causes

To realistically generate synthetic depth data, we first need to understand the causes behind the various kinds of noise one can find in the scans generated by real structured light sensors. We thus analyzed the different kinds of noise impairing structured light sensors, and their sources and characteristics. This study highlighted how each step of the sensing process introduces its own artifacts. During the initial step of projection and capture of the pattern(s), noise can be induced by the lighting and material properties of the surfaces (too low or too strong reflection of the pattern), by the composition of the scene (e.g. the pattern's density per unit area drops quadratically with increasing distance, causing axial noise; non-uniformity at edges causes lateral noise; and objects obstructing the path of the emitter, of the camera, or both cause shadow noise), or by the sensor structure itself (structural noise due to its low spatial resolution or the warping of the pattern by the lenses).
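As a toy quantification of the axial noise just mentioned (a sketch of ours, not the paper's model), first-order propagation of a constant disparity uncertainty through the triangulation z = f·b/d, introduced formally in the stereo-matching subsection below, already reproduces the roughly quadratic growth of depth error with distance; the focal length, baseline and disparity-noise values are illustrative Kinect-like assumptions.

```python
def axial_sigma_m(z_m, f_px=580.0, baseline_m=0.075, sigma_disp_px=0.08):
    """First-order error propagation through z = f*b/d:
    |dz/dd| = z**2 / (f*b), so depth noise grows quadratically with range."""
    return (z_m ** 2) / (f_px * baseline_m) * sigma_disp_px

for z in (0.5, 1.0, 2.0, 4.0):  # metres
    print(f"z = {z:.1f} m  ->  sigma_z ~ {1000 * axial_sigma_m(z):.1f} mm")
```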
Further errors and approximations are then introduced during the block-matching and hole-filling operations—such as structural noise caused by the disparity-to-depth transform, band noise caused by the windowing effect during block correlation, or a growing step size as depth increases during quantization. By using the proper rendering parameters and applying the described depth data reconstruction procedure, the proposed synthetic data generation pipeline is able to exhaustively induce the aforementioned types of noise, unlike other state-of-the-art depth data simulation methods, as highlighted by the comparison in Table <ref>.

§.§ Pattern Projection and Capture

In the first part of the presented pipeline, a simulation platform is used to reproduce the pattern projection and capture mechanism. Thanks to an extensive set of parameters, this platform is able to behave like a wide panel of depth sensors. Indeed, any kind of pattern can first be provided as an image asset/spotlight cookie for the projection, in order to adapt to the sensing device one wants to simulate. Moreover, the intrinsic and extrinsic parameters of the camera and projector are configurable. Our procedure covers both the full calibration of structured light sensors and the reconstruction of their projected pattern with the help of an extra camera. Once the original pattern is obtained, our pipeline automatically generates a square version of it (to efficiently use spotlight simulation with cookies, projected patterns need to be padded to a square format for the 3D engine), followed by other different ones later used as reference in the block-matching procedure, according to the camera resolution. Once obtained, these parameters can be handed to the 3D platform to initialize the simulation. The 3D models must then be provided, along with their material(s). Even though not all models come with realistic textures, the quality of the results highly depends on such characteristics—especially their reflectance (physically based rendering model <cit.> or bidirectional reflectance distribution function <cit.>). Given a list of viewpoints, the platform will perform each pattern capture and projection, simulating realistic illumination sources and shadows, taking into account surface and material characteristics. Along with the object, the 3D scene is thus populated with:
* A spot light projector, using the desired high resolution pattern (2000 px by 2000 px) as light cookie;
* A camera model, set up with the intrinsic and extrinsic parameters of the real sensor, separated from the projector by the provided baseline distance in the horizontal plane of the simulated device;
* Optionally additional light sources, to simulate the effect of environmental illumination;
* Optionally other 3D models (e.g. ground, occluding objects, etc.), to ornament the scene.
These settings and procedures allow our method to reproduce complex realistic effects by manipulating camera movement and exposure; e.g.
the rolling shutter effect can be simulated by acquiring one pixel-line per exposure while the camera is moving, or motion blur by averaging several exposures over the movement. Using rendering components implemented by any recent 3D engine with the aforementioned parameters (the virtual light projector provided with the pattern(s) and a virtual camera with the proper optical characteristics), we can simulate the light projection/capture procedures done by the real devices, and obtain a "virtually captured" image with the chosen resolution, similar to the intermediate output of the devices (IR image of the projected pattern).

§.§ Pre-processing of Pattern Captures

This intermediate result, captured in real-time by the virtual camera, is then pre-processed (fed into a compute shader layer), in order to get closer to the original quality, impinged by imaging sensor noise. In this module, noise effects are added, including radial and tangential lens distortion, lens scratch and grain, motion blur, and independent and identically distributed random noise.

§.§ Stereo-matching

Relying on the principles of stereo vision, the rendered picture is then matched with its reference pattern, in order to extract the depth information from their disparity map. The emitted pattern and the resulting capture from the sensor are here used as the stereo stimuli, with these two virtual eyes (the projector and the camera) being separated by the baseline distance b. The depth value z is then a direct function of the disparity d with z = f · b / d, where f is the focal length in pixels of the receiver. The disparity map is computed by applying a block-matching process using small Sum of Absolute Differences (SAD) windows to find the correspondences <cit.>, sliding the window along the epipolar line. The value of the SAD function for the location (x,y) on the captured image is

F_SAD(u,v) = ∑_{j=0}^{w-1} ∑_{i=0}^{w-1} |I_s(x+i, y+j) - I_t(x+u+i, y+v+j)|,

where w is the window size, I_s the image from the camera, and I_t the pattern image. The matched location on the pattern image can be obtained by

(u_m, v_m) = argmin_{(u,v)} F_SAD(u,v).

The disparity value d can then be computed as

d = u_m - x (horizontal stereo) or d = v_m - y (vertical stereo).

Based on pixel offsets, each disparity value is an integer. Refinement is done by interpolating between the closest matching block and its neighbors, achieving sub-pixel accuracy. Given the direct relation between z and d, the possible disparity values are directly bound to the sensor's operational depth range, limiting the search range itself.

§.§ Post-processing of Depth Scans

Finally, another compute shader layer post-processes the depth maps, smoothing and trimming them according to the sensor's specifications. In the case that these specifications are not available, one can obtain a reasonable estimation by feeding real images of captured pattern(s) from the sensor through the reconstruction pipeline and deriving it from the differences between this reconstructed depth image and the one actually output by the sensor. Imitating once more the original systems, a hole-filling step can be performed to reduce the proportion of missing data. Figures <ref> and <ref> show how DepthSynth is able to realistically reproduce the spatial sensitivity of the devices or the impact of surface materials. In the same way, Figure <ref> (c)-(h) reveals how the data quality of simulated multi-shot structured light sensors is highly sensitive to motion—an observation in accordance with our expectations.
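Returning to the stereo-matching stage above, the following sketch (a minimal brute-force implementation of ours, not the production GPU code; parameter values are illustrative) mirrors the SAD equations: it slides a w×w window along the horizontal epipolar line, keeps the offset minimizing the SAD as the integer disparity, and triangulates depth via z = f·b/d.

```python
import numpy as np

def sad_block_match(img_cam, img_pattern, w=9, max_disp=64):
    """Integer-pixel disparity via the SAD window of the equations above,
    for rectified, same-size images and a horizontal epipolar line.
    Brute force for clarity; no sub-pixel refinement or hole filling."""
    cam = img_cam.astype(np.int32)     # avoid uint8 wrap-around in the SAD
    pat = img_pattern.astype(np.int32)
    h, width = cam.shape
    r = w // 2
    disparity = np.zeros((h, width), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, width - r):
            block = cam[y - r:y + r + 1, x - r:x + r + 1]
            best_sad, best_d = np.inf, 0
            for d in range(0, min(max_disp, width - r - x)):
                ref = pat[y - r:y + r + 1, x + d - r:x + d + r + 1]
                sad = np.abs(block - ref).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disparity[y, x] = best_d
    return disparity

def disparity_to_depth(disparity, f_px, baseline_m, eps=1e-6):
    """z = f * b / d; zero disparities map to an arbitrarily large depth,
    to be trimmed by the post-processing stage."""
    return f_px * baseline_m / np.maximum(disparity, eps)
```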
As highlighted in Figures <ref> and <ref> with the visual comparisons between DepthSynth and previous pipelines, the latter ones aren't sensitive to some realistic effects during capture, or preserve fine details which are actually smoothed out up to the window size in block-matching. Our determination to closely reproduce the whole process performed by the devices paid off in terms of noise quality.

§.§ Background Blending

Most of the depth rendering tools chose to ignore background addition by alpha compositing, causing significant discrepancy with real data and biasing the learner. Background modeling is hence another key component of DepthSynth. Added backgrounds can be: (1) from static predefined geometry; (2) from predefined geometry with motion; (3) with large amounts of random primitive shapes; (4) real captured scans (from public datasets). Optimized for GPU operations, the whole process can generate ~10 scans (VGA resolution) and their metadata (viewpoints) per second on a middle-range computer (Intel E5-1620v2, 16GB RAM, NVidia Quadro K4200).

§ EXPERIMENTS AND RESULTS

To demonstrate the accuracy and practicality of our method, we first analyze in Subsection <ref> the depth error it induces when simulating the Kinect device, comparing with other simulation tools, experimental depth images and theoretical models for this device. In Subsection <ref>, we adapt a state-of-the-art algorithm for classification and pose estimation to demonstrate how supervised 2.5D recognition methods benefit from using our data. The pipeline developed for these evaluations makes use of the Unity 3D Game Engine <cit.> (for rendering) and OpenCV <cit.> (for stereo-matching).

§.§ Depth Error Evaluation

To validate the correctness of our simulation pipeline, we first replicate the set of experiments used by Landau et al. <cit.>, to compare the depth error induced by DepthSynth to experimental values, as well as to the results from Landau et al. <cit.>, from BlenSor <cit.> and from three Kinect error models—respectively from Menna et al. <cit.>, Nguyen et al. <cit.> and Choo et al. <cit.>. All datasets consist of scans of a flat surface placed in front of the sensor at various distances and tilt angles to the focal plane. The experimental data was kindly provided by Landau et al. <cit.>. Figure <ref>(A) shows how the distance between the plane and the sensor influences the standard depth error in the resulting scans. The trend in our synthetic data matches well the one observed in experimental scans, and the Choo model recalibrated by Landau et al. on the same experimental data <cit.>. As noted in <cit.>, these models are based on experimental results which are inherently correlated to the characteristics of their environment and sensor. We could expect other data not to perfectly align with such models (as proved by the discrepancies among them). We can still notice that our synthetic images' quality degenerates slightly more for larger distances than the real scans, though our method behaves overall more realistically than the others. In Figure <ref>(B), we evaluate how the synthetic methods fare when gradually tilting the plane from orthogonal to almost parallel to the optical axis. The errors induced by our tool match closely the experimental results for tilt angles below 70^∘, with some overestimation for steeper angles but a similar trend, unlike the other methods.
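For reproducibility of the flat-target protocol described above, here is one possible way (our own reconstruction; the plane-fit estimator and the example numbers are assumptions, not the paper's exact procedure) to extract the standard depth error from a scan of a plane: fit a least-squares plane and take the standard deviation of the point-to-plane residuals.

```python
import numpy as np

def plane_residual_std(points):
    """Standard deviation of point-to-plane residuals for an N x 3 cloud
    of a flat target; the plane is fit by SVD of the centred coordinates."""
    centred = points - points.mean(axis=0)
    # The plane normal is the right singular vector with smallest singular value.
    normal = np.linalg.svd(centred, full_matrices=False)[2][-1]
    return float(np.abs(centred @ normal).std())

# Self-check on a synthetic 1.5 m plane with 4 mm Gaussian depth noise:
rng = np.random.default_rng(0)
xy = rng.uniform(-0.3, 0.3, size=(5000, 2))
z = 1.5 + rng.normal(0.0, 0.004, size=5000)
print(plane_residual_std(np.column_stack([xy, z])))  # ~0.004
```

The same per-scan statistic applies unchanged to the tilt-angle experiment discussed next.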
It should be noted that for such incident angles, both real scans and DepthSynth ones have most of the depth information missing, due to the poor reflection and stretching of the projected pattern(s), heavily impairing the reconstruction. As a final experiment related to the error modeling, we compute the standard depth error as a function of the radial distance to the focal center. Again, Figure <ref>(C) shows us that our pipeline behaves the most realistically, despite inducing slightly more noise for larger distances and thus more distorted pattern(s). DepthSynth even satisfyingly reproduces the oscillating evolution of the noise when increasing the distance and reaching the edges of the scans—a well-documented phenomenon caused by "wiggling" and distortion of the pattern(s) <cit.>.

§.§ Application to 6-DOF Pose Estimation and Classification

Among the applications which can benefit from our pipeline, we formulate a 6-DOF camera pose recognition and classification problem from a single 2.5D scan as an image retrieval problem, supposing no real images can be acquired for the training of the chosen method. Being however in possession of the 3D models, we discretize N_p camera poses, generate the synthetic 2.5D image for each pose and each object using DepthSynth, and encode each picture via a discriminative, low-dimension image representation with its corresponding class and camera pose. We build this way a database for pose and class retrieval problems. Given an unseen image, its representation is thus computed the same way and queried in the database to find the K-nearest neighbor(s) and return the corresponding class and pose. To demonstrate the advantages of using DepthSynth data irrespective of the selected features, we adapt Wohlhart and Lepetit's "triplet method" <cit.>, which uses case-specific computer-crafted image representations generated by a CNN. We thus use a CNN (LeNet structure <cit.> with custom hyper-parameters – two 5×5 convolution layers, each followed by a ReLU layer and a 2×2 max pooling layer, and finally two fully connected layers leading to the output layer, also fully connected, as shown in Figure <ref>) to learn the discriminating features by enforcing a loss function presented in <cit.>, over all the CNN weights w:

L = L_triplet + L_pairwise + λ‖w‖_2^2,

where L_triplet is the triplet loss function, L_pairwise the pairwise one, and λ the regularization factor. A triplet is defined as (p_i, p_i^+, p_i^-), with p_i one class and pose sampling point, p_i^+ a point close to p_i (similar class and/or pose) and p_i^- another one far from p_i (different class and/or pose). A pair is defined as (p_i, p'_i), with p_i one sampling point and p'_i its perturbation in terms of pose and noise, to enforce proximity between the descriptors of similar data. Given a margin m, L_pairwise is defined as the sum of the squared Euclidean distances between f(p_i) and f(p'_i), and L_triplet as

L_triplet = ∑_{(p_i, p_i^+, p_i^-)} max(0, 1 − ‖f(p_i) − f(p_i^-)‖_2 / (‖f(p_i) − f(p_i^+)‖_2 + m)).

Given such a state-of-the-art recognition method, we present the experiments to validate our solution and discuss their results.
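A sketch (ours) of the loss above, assuming the descriptors f(p_i), f(p_i^±), f(p'_i) have already been computed as vectors; the margin and regularization values are illustrative placeholders, not the paper's tuned hyper-parameters.

```python
import numpy as np

def triplet_pair_loss(f_a, f_pos, f_neg, f_pert, weights=None,
                      margin=0.01, lam=1e-3):
    """Loss of the two equations above, given stacked descriptor vectors:
    f_a    -- N x D descriptors f(p_i) of the anchors
    f_pos  -- N x D descriptors f(p_i^+) (similar class/pose)
    f_neg  -- N x D descriptors f(p_i^-) (different class/pose)
    f_pert -- N x D descriptors f(p'_i) of the perturbed anchors
    margin, lam -- placeholder values for m and lambda."""
    d_neg = np.linalg.norm(f_a - f_neg, axis=1)
    d_pos = np.linalg.norm(f_a - f_pos, axis=1)
    l_triplet = np.maximum(0.0, 1.0 - d_neg / (d_pos + margin)).sum()
    l_pairwise = (np.linalg.norm(f_a - f_pert, axis=1) ** 2).sum()
    l_reg = 0.0 if weights is None else lam * float(np.sum(weights ** 2))
    return l_triplet + l_pairwise + l_reg
```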
Data Preparation
As target models for the experiment, we select three similar-looking office chairs, with their CAD models obtained from the manufacturers' websites (Figure <ref>). The following procedure is performed to capture the real 2.5D dataset and its ground-truth pose annotations: AR markers are placed on the floor around each chair, an Occipital Structure sensor is mounted on a tablet, and its infrared camera is calibrated according to the RGB camera of the tablet. Using this tablet, an operator captures sequences of RGBD frames walking around the chairs (Figure <ref>). In a comprehensive and redundant annotation procedure using robust Direct Linear Transform, we manually generate 2D-3D correspondences on chair regions based on visual landmarks, choosing a representative set of approximately 60 frames. These estimated camera poses and the detected 2D locations of markers are used to generate triangulated 3D marker locations in the CAD coordinate system. Given the objects' movable parts, the actual chairs deviate from their models. We thus iteratively reduce the deviation for the final ground-truth sequence by verifying the reprojections, and the consistency of the triangulated marker positions relative to the chair elements. The IR and RGB camera calibration parameters are then used to align the depth scans into a common 3D coordinate system. In a final fine-tuning step, the poses and 3D models are fed into the simulation pipeline to generate the corresponding noiseless depth maps, used by an Iterative Closest Point method <cit.> to be aligned to the real images, optimizing the ground truth for our real test dataset.

Evaluation on Pose Estimation
As a first experiment, we limit the aforementioned approach to pose estimation only, training and testing it over the data of Chair C. For the CNN training, 30k synthetic depth scans, rendered with the CAD model and a floor plane as shown in Figure <ref>, are used to form 100k samples (triplets + pairs). The learned representation is then applied for the indexation of all the 30k images, using FLANN <cit.>. For testing, the representations of the 1024 depth images forming the real testing dataset are extracted and indexed. For each, the nearest neighbor's pose is then rendered and aligned to the input scan to refine the final 3D pose estimation. To demonstrate how the quality of the synthetic training data impacts the estimation, three different datasets are generated, resp. using noiseless rendering, BlenSor, and DepthSynth. Each dataset is used either for both representation-learning and indexing, or only for learning, with the clean dataset used for indexing. We also further apply some fine-tuning (FT) to the CNN training, feeding it with 200 real scans, forming 3k samples (triplets + pairs). Estimated 3D poses are compared to the ground-truth ones, and the Cumulative Distribution Functions of the errors in rotation and translation are shown in Figure <ref>(A-B). It reveals how the method trained over DepthSynth data gives consistently better results on both translation and rotation estimations, furthermore not gaining much in accuracy after fine-tuning with real data.

Evaluation on Classification
In a second experiment, we consider the classification problem for the 3 chairs.
Using the same synthetic training datasets extended to all 3 objects, we evaluate the accuracy of the recognition method over a testing dataset of 1024 real depth images for each chair, taking as final estimation the class of the nearest neighbor in the database for each extracted image representation. Despite the strong similarities among the objects, the recognition method performs quite well, as shown in Figure <ref>(C). Again, it can be seen that it gives consistently better results when trained over our synthetic data; and that, unlike the other training datasets, ours doesn't gain much from the addition of real data, validating its inherent realism.

§ CONCLUSION

We presented DepthSynth, a pipeline to generate large depth image datasets from 3D models, simulating the mechanisms of a wide panel of depth sensors to achieve unique realism with minimum effort. We not only demonstrated the improvements in terms of noise quality compared to state-of-the-art methods, but also went further than these previous works by showcasing how our solution can be used to train recent 2.5D recognition methods, outperforming the original results obtained with lower-quality training data. We thus believe this concept will prove itself greatly useful to the community, leveraging the parallel efforts to gather detailed 3D datasets. The generation of realistic depth data and corresponding ground truth can promote a large number of data-driven algorithms, by providing the training and benchmarking resources they need. We plan to further demonstrate this in the near future, applying our pipeline to tasks of larger scale (e.g. semantic segmentation of the NYU depth dataset <cit.>, using SUNCG models <cit.> as input for our pipeline). We are also curious to compare—and maybe combine—DepthSynth with recent GAN-based methods such as those developed by Shrivastava et al. <cit.> or Bousmalis et al. <cit.>.
http://arxiv.org/abs/1702.08558v2
{ "authors": [ "Benjamin Planche", "Ziyan Wu", "Kai Ma", "Shanhui Sun", "Stefan Kluckner", "Terrence Chen", "Andreas Hutter", "Sergey Zakharov", "Harald Kosch", "Jan Ernst" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170227221225", "title": "DepthSynth: Real-Time Realistic Synthetic Data Generation from CAD Models for 2.5D Recognition" }
Current address: Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland We observe multi-step condensation of sodium atoms with spin F=1, where the different Zeeman components m_F=0,± 1 condense sequentially as the temperature decreases. The precise sequence changes drastically depending on the magnetization m_z and on the quadratic Zeeman energy q (QZE) in an applied magnetic field. For large QZE, the overall structure of the phase diagram is the same as for an ideal spin 1 gas, although the precise locations of the phase boundaries are significantly shifted by interactions. For small QZE, antiferromagnetic interactions qualitatively change the phase diagram with respect to the ideal case, leading for instance to condensation in m_F=± 1, a phenomenon that cannot occur for an ideal gas with q>0.Stepwise Bose-Einstein condensation in a spinor gas F. Gerbier December 30, 2023 ===================================================Multi-component quantum fluids described by a vector or tensor order parameter are often richer than their scalar counterparts. Examples in condensed matter are superfluid ^3He <cit.> or some unconventional superconductors with spin-triplet Cooper pairing <cit.>. In atomic physics, spinor Bose-Einstein condensates (BEC) with several Zeeman components m_F inside a given hyperfine spin F manifold can display non-trivial spin order at low temperatures <cit.>. The macroscopic population of the condensate enhances the role of small energy scales that are negligible for normal gases. This mechanism (sometimes termed Bose-enhanced magnetism <cit.>) highlights the deep connection between Bose-Einstein condensation and magnetism in bosonic gases, and raises the question of the stability of spin order against temperature.In simple cases, magnetic order appears as soon as a BEC forms. Siggia and Ruckenstein <cit.> pointed out for two-component BECs <cit.> that a well-defined relative phase between the two components implies a macroscopic transverse spin. BEC and ferromagnetism then occur simultaneously, provided the relative populations can adjust freely. A recent experiment confirmed this scenario for bosons with spin-orbit coupling <cit.>. This conclusion was later generalized to spin-F bosons without <cit.> or with spin-independent <cit.> interactions. These results indicate that without additional constraints, bosonic statistics favors ferromagnetism.In atomic quantum gases with F>1/2, this type of ferromagnetism competes with spin-exchange interactions, which may favor other spin orders such as spin-nematics <cit.>.Spin-exchange collisions can redistribute populations among the Zeeman states <cit.>, but are also invariant under spin rotations. The allowed redistribution processes are therefore those preserving the total spin, such as 2× (m_F=0) ↔ (m_F=+1)+(m_F=-1). For an isolated system driven to equilibrium only by binary collisions (in contrast with solid-state magnetic materials <cit.>), and where magnetic dipole-dipole interactions are negligible (in contrast with dipolar atoms <cit.>), the longitudinal magnetization m_z is then a conserved quantity. This conservation law has deep consequences on the thermodynamic phase diagram.The thermodynamics of spinor gases with conserved magnetization has been extensively studied theoretically using various assumptions and methods<cit.>. 
A generic conclusion is that Bose-Einstein condensation occurs in steps, where BEC occurs first in one specific component and magnetic order appears at lower temperatures when two or more components condense.Natural questions are the number of steps that can be expected, and the nature of the magnetic phases realized at different temperatures.In this Letter, we report on the observation of multi-step condensation in an antiferromagnetic F=1 condensate of sodium atoms. Fig. <ref> illustrates four situations that occur when lowering the temperature starting from a normal Bose gas. Without loss of generality, we focus in this work on the case of positive magnetization, given that the case of m_z < 0 can be deduced by symmetry. In all cases with m_z≠ 0, we find a sequence of transitions where different Zeeman components condense at different temperatures. Depending on the applied magnetic field B and on the magnetization, we find either two or three condensation temperatures. The purpose of this paper is to explore this rich landscape of transitions in a bosonic spinor system and to elucidate the role of atomic interactions.The present work is to the best of our knowledge the first comprehensive measurement of thermodynamic properties of spinor condensates with conserved magnetization.Previous experimental works exploring finite temperatures in spinor gases mostly studied spin dynamics in thermal gases <cit.>, or demonstrated cooling of a majority Zeeman component by selective evaporation of the minority components <cit.>. The realization of dipolar spinor gases with free magnetization <cit.> was limited to the study of spin-polarized condensed phases in equilibrium due to dipolar relaxation. More recently, a gas of spin excitations in a spin-polarized (m_z ≈ 1) ferromagnetic Bose-Einstein condensate was observed to equilibrate and even condense at sufficiently low temperatures <cit.>.Our experiments are performed with ultracold ^23Na atoms confined in a crossed optical dipole trap (ODT). The longitudinal magnetization m_z=(N_+1-N_-1)/N acts as an external control parameter independent of the externally applied magnetic field B. Here, N_m_F is the reduced population in Zeeman state m_F and N the total atom number. We vary m_z between unmagnetized (m_z≈ 0) and fully magnetized samples (m_z≈ 1) using a preparation sequence performed far above T_c <cit.>. An applied magnetic field B shifts the single-atom energy by Δ E_m_F=p m_F+q(m_F^2-1). The conservation of magnetization makes the linear Zeeman effect ∝ p irrelevant in the equilibrium state. The quadratic Zeeman energy (QZE), which lowers the energy of m_F=0 with respect to m_F=±1, is the relevant term, and is given byq= α_q B^2 with α_q/h ≈ 277Hz/G^2 for sodium atoms. The depth V_0 of the ODT determines the temperature T and total atom number N for a given V_0. We find that the magnetization m_z also varies with V_0 (by up to 15%), a byproduct of evaporative cooling. Once a condensate forms in one of the Zeeman components, evaporation tends to eliminate preferentially atoms in the other Zeeman states. The evaporative cooling dynamics is very slow compared to the microscopic thermalization time on which the gas returns to thermal equilibrium. As a result, the kinetic equilibrium state for the quantum gases studied in this work is still determined by a magnetization-conserving Hamiltonian. 
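For orientation, a small sketch (ours, not from the Letter) of the quadratic Zeeman scale just introduced; it inverts q = α_q B² to show the order of magnitude of the applied fields corresponding to the three QZE regimes studied below.

```python
ALPHA_Q_OVER_H = 277.0  # Hz / G^2 for sodium, as quoted above

def zeeman_shift_hz(m_f, p_hz, B_gauss):
    """Single-atom shift Delta E/h = p*m_F + q*(m_F**2 - 1), with q = alpha_q*B^2.
    The linear coefficient p is irrelevant in equilibrium (conserved m_z)."""
    q = ALPHA_Q_OVER_H * B_gauss ** 2
    return p_hz * m_f + q * (m_f ** 2 - 1)

# Fields corresponding to the three QZE regimes studied in this work:
for q_target_hz in (2.8, 69.0, 8.9e3):
    B = (q_target_hz / ALPHA_Q_OVER_H) ** 0.5
    print(f"q/h = {q_target_hz:g} Hz  ->  B ~ {B:.2f} G")
```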
Furthermore, the ODT is tight enough such that a condensate forms in the so-called single-mode regime <cit.>, where the spatial shape of the condensate wavefunction is independent of the Zeeman state. In the following, we characterize our data for a given value of q by an evaporation “trajectory” (N,T,m_z)_V_0, taking four experimental realizations for each point in the trajectory. Absorption images as shown in Fig. <ref> are recorded after 3ms of expansion in an applied magnetic field gradient <cit.>. We perform a fit to a bimodal distribution for each component to extract the temperature, the populations N_m_F, and the condensed fraction f_c,m_F per component <cit.>. We found that low condensed fractions < 5 % are difficult to detect with the fit algorithm due to a combination of low signal-to-noise ratio and the complexity of fitting the three Zeeman components simultaneously. The signature of BEC, the appearance of a dense, narrow peak near the center of the atomic distribution, can instead be tracked by monitoring the peak optical density (OD) taken as a proxy for the condensed fraction <cit.>. This procedure avoids relying on bimodal fits or other indirect analyses with uncontrolled systematic biases. Fig. <ref> shows such a measurement for a particular evaporation trajectory. The peak OD increases sharply when Bose-Einstein condensation is reached, demonstrating in this particular example a two-step condensation where m_F=+1 condenses first, followed by m_F=0. For a given evaporation trajectory, we identify the critical trap depth V_ 0,c where condensation is reached by a piece-wise linear fit to the data, taking the intercept point as the experimentally determined V_ 0,c (see Fig. <ref>). We interpolate numerically the atom number, magnetization and temperature to obtain the critical values N_ c, T_ c, m_z, c from V_ 0,c.Fig. <ref> summarizes the results of this work. We show the peak optical density for each Zeeman component and each value of q in a (T-m_z) plane (Fig. <ref> a-c, e-g and i-k). In this plot, all data taken at a given QZE q are binned with respect to magnetization and temperature. The domains where condensation occurs appear in light colors. For convenience, the temperature is scaled to the critical temperature of a single-component ideal gas k_B T_c,id=ħω[N/ζ(3)]^1/3, with ω the geometric average of the trap frequencies and ζ the Riemann zeta function <cit.>. The same plot also shows the measured critical temperatures (Fig. <ref> d, h, l)[In one case, m_F=0 when m_z ≈ 0.3 and q/h=2.8Hz, the lowest temperature images do show a condensed component but the critical temperature could not be extracted reliably from the fitting procedure due to sparse sampling. This particular point is not reported in Fig. <ref>l.]. The phenomenon of sequential condensation is always observed for m_z ≠ 0, but the overall behavior changes drastically with q.We first discuss the cases with largest QZE, q/h≈ 8.9kHz (Fig. <ref> a-d) and q/h≈ 69Hz (Fig. <ref> e-h).For q/h≈ 8.9kHz and highly magnetized samples, the majority component m_F=+1 condenses first at a critical temperature T_c,1, followed by the m_F=0 component at a lower temperature T_c,2. For low magnetizations, the condensation sequence is reversed. For q/h≈ 69Hz, we observe only one sequence, a two-step condensation with m_F=+1 first and m_F=0 second. This behavior can be understood qualitatively from ideal gas theory, taking the QZE and the conservation of magnetization into account <cit.>. 
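Before turning to the ideal-gas analysis, the normalization scale used above can be made explicit; the following sketch (ours) evaluates k_B T_c,id = ħω̄[N/ζ(3)]^{1/3} for an illustrative atom number and trap frequency, which are assumptions rather than the experimental values.

```python
import math

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K
ZETA_3 = 1.2020569       # Riemann zeta(3)

def t_c_ideal(N, omega_bar):
    """k_B T_c,id = hbar * omega_bar * (N / zeta(3))**(1/3), harmonic trap,
    single-component ideal Bose gas."""
    return HBAR * omega_bar * (N / ZETA_3) ** (1.0 / 3.0) / KB

# Assumed values: 1e5 atoms, geometric-mean trap frequency 2*pi * 500 Hz,
# giving a critical temperature of order one microkelvin.
print(f"T_c,id ~ {1e6 * t_c_ideal(1e5, 2 * math.pi * 500):.2f} uK")
```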
For ideal gases, BEC occurs when the chemical potential μ equals the energy of the lowest single-particle state <cit.>. The same criterion holds for a spin 1 gas with μ_0=μ and μ_± 1=μ±λ, where λ is a Lagrange multiplier enforcing the conservation of m_z. For m_z =0 (λ=0) and q >0, the QZE lowers the energy of m_F=0, which is therefore the first component to condense when μ=-q. For m_z > 0, λ is positive and increases with m_z. The energetic advantage of m_F=0 is in balance with the statistical trend favoring the most populated component m_F=+1. Eventually, this trend takes over at a “critical” value m_z^∗ (where λ=q). For m_z>m_z^∗, the m_F=+1 component condenses first.Coexisting m_F=0 and m_F=± 1 components with a well-defined phase relation correspond to a non-zero transverse spin ⟨Ŝ_x+iŜ_y ⟩≠ 0 (“transverse magnetized” phase – M_⊥). For large q, the condensate is reduced to an effective two-component system m_F=0,+1 with m_F=-1 mostly spectator. The case m_z=m_z^∗ (μ_0=μ_+1) realizes the Siggia-Ruckenstein (S-R) scenario, where condensation and ferromagnetic behavior appear simultaneously. Away from that point, the S-R picture breaks down (μ_0≠μ_+1) and sequential condensation takes place. Figure <ref> d-h show the critical temperatures and compare them to ideal gas theory. Although the general trends in the theory are the same as in the experiment, we observe a systematic shift of T_c,1 and T_c,2 towards lower temperatures, and an experimental “critical” m_z^∗∼ 0.3 larger than the ideal gas prediction. The behavior for q/h≈ 69Hz (Fig. <ref> e-h) is qualitatively similar to the largest q case, but with a small m_z^∗ that cannot be resolved experimentally (the ideal gas theory predicts ≈ 0.002). Repulsive interactions between the atoms can be expected to lower the critical temperatures as in single-component gases <cit.>, with an enhanced shift of T_ c,2 due to the presence of a condensate.We use a simplified version of Hartree-Fock (HF) theory to make quantitative predictions <cit.>. Our self-consistent calculations include the trap potential in a semi-classical approximation, and treat the interactions as spin-independent. These approximations are valid only above T_ c,2, where at most one component condenses <cit.>. As a result, the HF model cannot make any prediction for the low-temperature behavior below T_ c,2. The results of the HF calculations, performed for atom numbers and trap frequencies matching the experimental values <cit.>, are shown in Figure <ref>. The HF model qualitatively accounts for the experimental data, explaining in particular the strong downwards shift of T_ c,2 for all q and the shift of m_z^∗ to higher values for q/h≈ 8.9kHz.The residual discrepancy around 7-8 % could be partially explained by finite-size and trap anharmonicity effects not included in the Hartree-Fock calculation <cit.>.At the lowest field we studied, q/h≈ 2.8Hz (Fig. <ref> i-l), we observe a change in the nature of T_ c,2. For high values of m_z, T_ c,2 corresponds to condensation into m_F=-1 while m_F=0 remains uncondensed.This phenomenon is incompatible with ideal gas theory <cit.> and with our HF model with spin-independent interactions. It corresponds to a change of the magnetic ordering appearing below T_ c,2. 
While coexisting m_F=0 and m_F=+1 components form a M_⊥ phase with ⟨Ŝ_x+iŜ_y ⟩≠ 0, coexisting m_F=± 1 components correspond to a phase with ⟨Ŝ_x+iŜ_y ⟩ = 0 but where the spin-rotational symmetry around z is broken by a non-zero spin-quadrupole tensor (“quasi-spin nematic” phase -qSN).At T=0 and in the single-mode regime, the M_⊥- qSN transition occurs at a critical magnetization m_z,c=√(1-[1-(q/U_s)]^2), with U_s ≤ q the spin-dependent interaction energy <cit.>. When q>U_s, there is no phase transition and only the M_⊥ phase is present. This explains the qualitative difference between the data for q/h=2.8Hz and the other two values. We estimate U_s/h ≲ 50Hz and m_z,crit≈ 0.3 for a BEC without thermal fraction <cit.>. This agrees well with the lowest temperature measurements reported in Fig. <ref>j-k. In the experimental data in Fig. <ref> i-l, the region of the phase diagram occupied by the M_⊥ phase shrinks with increasing temperature. In fact, we find that m_F=-1 condenses at T_c,2 for all parameters we have explored, with m_F=0 condensing at a third, lower critical temperature (except for m_z ≈ 0, where all components appear to condense together within the accuracy of our measurement). Finally, the dashed line in Fig. <ref>k shows T_ c,2 predicted by the HF model with spin-independent interactions. Although the model incorrectly predicts that m_F=0 should condense below T_ c,2, the predicted transition closely matches the observed boundary between single-component m_F=+1 BEC and qSN m_F= ± 1 BEC. This indicates that the transition line itself (but not the magnetic order below it) is determined by the thermal component alone. In conclusion, we have studied the finite-T phase diagram of a spin-1 Bose gas with antiferromagnetic interactions. For condensates in the single-mode regime, we observed a sequence of transitions, two for high QZE and three for low QZE, with the lower two leading to different magnetic orders. We have found that a simplified HF model reproduces the trends observed in the variations of the critical temperatures T_ c,1 and T_ c,2 with magnetization and QZE. A more complete theoretical analysis accounting for all experimental features –in particular the harmonic trap, which is crucial to stabilize an antiferromagnetic condensate in a single spatial mode <cit.>– and elucidating the exact nature of the low-temperature transitions for low QZE remains open. A natural extension of this work would be to study the critical properties of the observed finite-T transitions, in particular near m_z=m_z^∗ and between the M_⊥ and qSN phases at very low q. Two-dimensional systems provide another intriguing direction to explore. Several Berezinskii-Kosterlitz-Thouless transitions mediated either by vortices or spin textures have been predicted <cit.>. We expect that such topological features will further enrich the already complex phase diagram observed in three dimensions. We acknowledge stimulating discussions with B. Evrard, L. De Sarlo, E. Witkowska, J. Beugnon, L. de Forges de Parny, A. Rançon and T. Roscilde. This work has been supported by ERC (Synergy grant UQUAM). TZ acknowledges funding from the Hamburg Center for Ultrafast Imaging, and KJG from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 701894.
Geometric Manin's Conjecture and rational curves

Brian Lehmann (lehmannb@bc.edu), Department of Mathematics, Boston College, 1400 Commonwealth Ave, Chestnut Hill, MA 02467

Sho Tanimoto (stanimoto@kumamoto-u.ac.jp), Department of Mathematics, Faculty of Science, Kumamoto University, Kurokami 2-39-1, Kumamoto 860-8555, Japan; Priority Organization for Innovation and Excellence, Kumamoto University

MSC: 14H10. Lehmann is supported by NSF grant 1600875. Tanimoto is partially supported by Lars Hesselholt's Niels Bohr professorship, and MEXT Japan, Leading Initiative for Excellent Young Researchers (LEADER).

Abstract. Let X be a smooth projective Fano variety over the complex numbers. We study the moduli space of rational curves on X using the perspective of Manin's Conjecture. In particular, we bound the dimension and number of components of spaces of rational curves on X. We propose a Geometric Manin's Conjecture predicting the growth rate of a counting function associated to the irreducible components of these moduli spaces.

§ INTRODUCTION

A Fano variety over ℂ carries many rational curves due to the positivity of the anticanonical bundle (<cit.>, <cit.>, <cit.>). The precise relationship between curvature and the existence of rational curves is quantified by Manin's Conjecture. For an ample divisor L on a Fano variety X, the constants a(X,L) and b(X,L) of <cit.> compare the positivity of K_X and L. Manin's Conjecture predicts that the asymptotic behavior of rational curves on X as the L-degree increases is controlled by these geometric constants. This point of view injects techniques from the minimal model program into the study of spaces of rational curves.

Batyrev gave a heuristic for Manin's Conjecture over finite fields that depends on three assumptions (see <cit.> or <cit.> for Batyrev's heuristic):

* after removing curves that lie on a closed subset, moduli spaces of rational curves have the expected dimension;
* the number of components of moduli spaces of rational curves whose class is a nef integral 1-cycle is bounded above;
* the étale cohomology of moduli spaces of rational curves enjoys certain homological stability properties. (The idea to use homological stability in Batyrev's heuristic is due to Ellenberg and Venkatesh, see, e.g., <cit.>.)

In this paper, we investigate the plausibility of the first two assumptions for complex varieties. We prove that the first assumption holds for any smooth Fano variety. The second assumption fails in general: the number of components can grow polynomially as the degree of the 1-cycle grows. Thus we proceed in two different directions. First, it is conjectured by Batyrev that there is a polynomial upper bound on the growth in number of components, and we make partial progress toward this conjecture. Second, we explain how to modify the conjecture in order to discount the “extra” components and recover Batyrev's heuristic. Our proposal can be seen as a geometric analogue of Peyre's thin set version of Manin's Conjecture.

§.§ Moduli of rational curves

Let us discuss the contents of our paper in more detail.
Let X be a smooth projective uniruled variety and let Eff^1(X) denote the pseudo-effective cone of divisors. Suppose L is a nef ℚ-Cartier divisor on X. When L is big, define the Fujita invariant (which we will also call the a-invariant) by

a(X,L) := min{ t ∈ ℝ | t[L] + [K_X] ∈ Eff^1(X) }.

When L is not big, we formally set a(X,L) = ∞. When X is singular, we define the a-invariant by pulling L back to a resolution of X.

Let X be a smooth projective weak Fano variety and set L = -K_X. Let V ⊊ X be the proper closed subset which is the Zariski closure of all subvarieties Y such that a(Y,L|_Y) > a(X,L). Then any component of Mor(ℙ^1,X) parametrizing a curve not contained in V will have the expected dimension and will parametrize a dominant family of curves.

Assuming standard conjectures about rational curves, the converse implication is also true: a subvariety with higher a-value will contain families of rational curves with dimension higher than the expected dimension in X. In this way the a-invariant should completely control the expected dimension of components of Mor(ℙ^1,X). Furthermore, an analogous statement holds for any uniruled X and any big and nef L provided we restrict our attention to curves with vanishing intersection against K_X + a(X,L)L.

Theorem <ref> is significant for two reasons. The first is that V is a proper subset of X; this is the main theorem of <cit.>. The second is that Theorem <ref> gives an explicit description of the closed set V. In practice, one can use techniques from adjunction theory or the minimal model program to calculate V.

In Example <ref> we show that if X is any smooth quartic hypersurface of dimension ≥ 5 then the exceptional set V in Theorem <ref> is empty so that every component of Mor(ℙ^1,X) has the expected dimension. The same approach gives a quick proof of a result of <cit.> showing an analogous property for cubic hypersurfaces. Note that for a quartic hypersurface the components of the Kontsevich moduli space of stable maps need not have the expected dimension (see <cit.>), so the method in <cit.> does not apply to this case.

Let X be a smooth Fano threefold with index 2 and Picard rank 1. By <cit.>, the exceptional set V in Theorem <ref> is empty so that every component of Mor(ℙ^1,X) has the expected dimension and parametrizes a dominant family of curves.

The main outstanding question concerning Mor(ℙ^1,X) is the number of components. Batyrev first conjectured that the number of components grows polynomially with the degree of the curve. In fact we expect that the growth is controlled in a precise way by another invariant in Manin's Conjecture: the b-invariant (see Definition <ref>). We prove a polynomial growth bound for components satisfying an additional hypothesis:

Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-Cartier divisor on X. Fix a positive integer q and let M ⊂ M_0,0(X) denote the union of all components which contain a chain of free curves whose components have L-degree at most q. There is a polynomial P(d) which is an upper bound for the number of components of M of L-degree at most d.

Theorem <ref> should be contrasted with bounds on the number of components of the Chow variety which are exponential in d (<cit.>, <cit.>, <cit.>, <cit.>).
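As a quick sanity check on this definition (a standard computation which we include as our own illustration; it is not quoted from elsewhere in the paper), adjunction determines the a-invariant of any smooth Fano hypersurface:

```latex
% X a smooth hypersurface of degree e <= n+1 in P^{n+1}, H the hyperplane class.
% Adjunction gives K_X = (e - n - 2)H, so
%   t[H] + [K_X] = (t + e - n - 2)[H],
% which is pseudo-effective exactly when t >= n + 2 - e.  Hence
\[
  a(X, H) = n + 2 - e .
\]
% E.g. a cubic threefold (n = 3, e = 3) has a(X,H) = 2, matching -K_X = 2H;
% a quartic of dimension n has a(X,H) = n - 2, as used in the examples below.
```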
It is natural to wonder whether free curves of sufficiently high degree can always be deformed (as a stable map) to a chain of free curves of smaller degree. Although this property seems subtle to verify, we are not aware of any Fano variety for which it fails. We are able to verify it in some new situations for Fano varieties of small dimension. By applying general theory, we can then understand the behavior of rational curves by combining an analysis of the a and b invariants with a few computations in low degree.

Let X be a smooth Fano threefold of index 2 with Pic(X) = ℤH. We show that if H^3 ≥ 3, or H^3 = 2 and X is general in its moduli, then Mor(ℙ^1,X) has two components of any anticanonical degree 2d ≥ 4: the family of d-fold covers of lines, and a family of irreducible degree 2d curves.

In fact our proof shows that M_0,0(X) has the same components; this implies, for example, that certain Gromov-Witten invariants on X are enumerative. Previously such results were known for cubic threefolds by work of Starr (see <cit.>) and for complete intersections of two quadrics by <cit.>, and our method significantly simplifies the proofs of these papers using the analysis of a, b invariants.

§.§ Manin-type bound

Using the previous results, we prove an upper bound of Manin-type for the moduli space of rational curves. Suppose that X is a smooth projective uniruled variety and that L is a big and nef divisor. As a first attempt at a counting function, fix a variable q and define

N(X, L, q, d) = ∑_{i=1}^d ∑_{W ∈ 𝒮_i} q^{dim W}

where 𝒮_i denotes the set of components M ⊂ Mor(ℙ^1, X) satisfying:

* M generically parametrizes free curves.
* The curves parametrized by M have L-degree i · r(X,L), where r(X,L) is the minimal positive number of the form L · α for a ℤ-curve class α.
* The curves parametrized by M satisfy (K_X + a(X,L)L) · C = 0.

This is not quite the correct definition; as usual in Manin's Conjecture one must remove the contributions of an “exceptional set.” In the number theoretic setting one must remove a thin set of points to obtain the expected growth rate. An analogous statement is true in our geometric setting as well, and in Section <ref> we give a precise formulation of which components should be included in the definition of N(X,L,q,d). After modifying the counting function in this way, we can prove an asymptotic upper bound. For simplicity we only state a special case:

Let X be a smooth projective Fano variety. Fix ϵ > 0; then for sufficiently large q

N(X,-K_X, q, d) = O(q^{d·r(X,-K_X)·(1+ϵ)}).

In the literature there are several examples of Fano varieties for which the components of Mor(ℙ^1,X) have been classified. In every example we know of the counting function has the asymptotic behavior predicted by Manin's Conjecture.

Let X be a smooth del Pezzo surface of degree ≥ 2 which admits a (-1)-curve. Using <cit.>, Example <ref> shows that

N(X,-K_X,q,d) ∼ (q^2 α(X,-K_X)/(1-q^{-1})) · q^d · d^{ρ(X)-1},

where α(X,-K_X) is the volume of a polytope defined in Definition <ref> and ρ(X) is the Picard rank of X.

We thank Morten Risager for his help regarding height zeta functions and Chen Jiang for suggesting we use the results of Höring. We also would like to thank Tony Várilly-Alvarado for his help to improve the exposition. We thank the anonymous referees for detailed suggestions to improve the exposition of the paper.
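Before proceeding, we record a back-of-the-envelope count explaining the shape of the del Pezzo asymptotic above; this is a heuristic of our own, assuming optimistically that each integral class in the face F(X,L) of degree i·r(X,L) carries exactly one counted component of the expected dimension.

```latex
% Write a = a(X,L), r = r(X,L), b = b(X,L), n = dim X.  A counted component
% of L-degree ir has expected dimension  air + n,  and the slice of F(X,L)
% at degree ir contains roughly alpha(X,L) i^{b-1} integral classes
% (the Ehrhart count).  Hence
\begin{align*}
N(X,L,q,d) &\approx \sum_{i=1}^{d} \alpha(X,L)\, i^{b-1} q^{air + n}
  \sim \alpha(X,L)\, d^{b-1} q^{n} q^{dar} \sum_{j \geq 0} q^{-arj} \\
  &= \frac{q^{n}\, \alpha(X,L)}{1 - q^{-ar}}\, q^{dar}\, d^{b-1},
\end{align*}
% which for a del Pezzo surface (n = 2, a = r = 1, b = rho(X)) reproduces
% the displayed formula above, and agrees with the conjectural asymptotic
% stated in Section <ref>.
```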
§ PRELIMINARIES

Throughout we work over an algebraically closed field of characteristic 0. Varieties are irreducible and reduced. For X a smooth projective variety we let N^1(X) denote the space of ℝ-divisors up to numerical equivalence. It contains a lattice N^1(X)_ℤ consisting of classes of Cartier divisors. We let Eff^1(X) and Nef^1(X) denote the pseudo-effective and nef cones of divisors respectively; their intersections with N^1(X)_ℤ are denoted Eff^1(X)_ℤ and Nef^1(X)_ℤ. Dually, N_1(X) denotes the space of curves up to numerical equivalence with natural lattice N_1(X)_ℤ. Eff_1(X) and Nef_1(X) denote the pseudo-effective and nef cones of curves, containing lattice points Eff_1(X)_ℤ and Nef_1(X)_ℤ.

Suppose that f, g: ℕ → ℝ are two positive real valued functions. We use the symbol f(d) ∼ g(d) to denote “asymptotically equal”:

lim_{d→∞} f(d)/g(d) = 1.

We will also use the standard “big-O” notation when we do not care about constant factors. Certain kinds of morphisms play a special role in Manin's Conjecture: We say that a morphism of projective varieties f: Y → X is a thin morphism if f is generically finite onto its image but is not both dominant and birational.

§ GEOMETRIC INVARIANTS A, B

§.§ Background

We recall the definitions of the a and b invariants studied in <cit.>, <cit.>, <cit.>, <cit.>. These invariants also play a central role in the study of cylinders, see, e.g., <cit.>.

<cit.> Let X be a smooth projective variety and let L be a big and nef ℚ-divisor on X. The Fujita invariant is

a(X, L) := min{ t ∈ ℝ | t[L] + [K_X] ∈ Eff^1(X) }.

If L is not big, we set a(X,L) = ∞.

By <cit.>, a(X, L) does not change when pulling back L by a birational map. Hence, we define the Fujita invariant for a singular projective variety X by pulling back to a smooth resolution β: X̃ → X:

a(X, L) := a(X̃, β^*L).

This definition does not depend on the choice of β. It follows from <cit.> that a(X,L) is positive if and only if X is uniruled.

<cit.> Let X be a smooth projective variety such that K_X is not pseudo-effective. Let L be a big and nef ℚ-divisor on X.
We define b(X, L) to be the codimension of the minimal supported face of Eff^1(X) containing the class a(X, L)[L] + [K_X]. Again, this is a birational invariant (<cit.>), and we define b(X, L) for a singular variety X by taking a smooth resolution β: X̃ → X and setting

b(X, L) := b(X̃, β^*L).

This definition does not depend on the choice of β. It turns out b has a natural geometric interpretation in terms of Picard ranks (see <cit.>).

§.§ Compatibility statements

Let X be a smooth projective variety and let L be a big and nef ℚ-divisor on X. Suppose that f: Y → X is a thin morphism. It will be crucial for us to understand when

(a(Y,f^*L),b(Y,f^*L)) > (a(X,L),b(X,L))

in the lexicographic order. We say that f breaks the weakly balanced condition when such an inequality holds. When f only induces an inequality ≥, we say that f breaks the balanced condition.

The case when f: Y → X is the inclusion of a subvariety is of particular importance. The following theorem of <cit.> describes when the a-invariant causes an inclusion f: Y → X to break the balanced condition. The proof relies upon the recent boundedness statements of Birkar.

[<cit.> Theorem 4.8 and <cit.> Theorem 1.1] Let X be a smooth uniruled projective variety and let L be a big and nef ℚ-divisor on X. Let V denote the union of all subvarieties Y such that a(Y,L|_Y) > a(X,L). Then V is a proper closed subset of X and its components are precisely the maximal elements in the set of subvarieties with higher a-value.

Since <cit.> has settled the Borisov-Alexeev-Borisov Conjecture, <cit.> proves that the closure of V is a proper closed subset of X. In fact the proof gives a little bit more: every component of V is dominated by a family of subvarieties with a-value higher than that of X. By <cit.> every component of V will also have higher a-value than X.

The other important case to consider is when f: Y → X is a dominant map. For convenience we formalize this situation into a definition. Let X be a smooth uniruled projective variety and let L be a big and nef ℚ-divisor on X. We say that a morphism from a smooth projective variety f: Y → X is an a-cover if (i) f is a dominant thin morphism and (ii) a(Y, f^*L) = a(X, L).

§.§ Face contraction

The following definitions encode a slightly more refined version of the b invariant. Let X be a smooth uniruled projective variety and let L be a big and nef ℚ-divisor on X. We let F(X,L) denote the face of Nef_1(X) consisting of those curve classes α satisfying (K_X + a(X,L)L) · α = 0. When f: Y → X is an a-cover, the Riemann-Hurwitz formula implies that the pushforward f_*: N_1(Y) → N_1(X) maps F(Y,f^*L) to F(X,L).

Let X be a smooth uniruled projective variety and let L be a big and nef ℚ-divisor on X. We say that a morphism f: Y → X is face contracting if f is an a-cover and the map f_*: F(Y,f^*L) → F(X,L) is not injective.
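To fix ideas, here is the simplest computation of both invariants (our own illustration, using only the definitions above):

```latex
% Take X Fano and L = -K_X.  Since -K_X is big,
%   t[-K_X] + [K_X] = (t-1)[-K_X]
% is pseudo-effective iff t >= 1, so a(X, -K_X) = 1.  Then
%   a(X,L)[L] + [K_X] = 0,
% and the minimal supported face of Eff^1(X) containing 0 is {0}, of
% codimension dim N^1(X) = rho(X).  Hence
\[
  b(X, -K_X) = \rho(X),
\]
% and F(X, -K_X) = Nef_1(X), since (K_X + a(X,L)L) . alpha = 0 for
% every curve class alpha.  For X = P^1 x P^1: a = 1, b = 2.
```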
Recall that the dimensions of the faces F(Y,f^*L), F(X,L) are respectively b(Y,f^*L), b(X,L). Thus, if f breaks the weakly balanced condition then it is also automatically face contracting. However, F(Y,f^*L) need not surject onto F(X,L) and so not all face contracting morphisms break the weakly balanced condition.

<cit.> identifies a del Pezzo surface X' with canonical singularities which admits a finite cover f': Y' → X' which is étale in codimension 1 and such that ρ(X') = 1 and ρ(Y') = 2. Let f: Y → X be a resolution of this map and set L = -K_X. <cit.> shows that f does not break the weakly balanced condition. Nevertheless, we claim that the pushforward f_* contracts F(Y,f^*L) to a face F of smaller dimension. It suffices to find two different classes in F(Y,f^*L) whose images under f_* are the same. Since f'_*: N_1(Y') → N_1(X') drops the Picard rank by 1, there are ample curve classes β and β' on Y' whose images under f'_* are the same. Let α and α' be their pullbacks in N_1(Y); we show that f_*α = f_*α'. Note that every curve exceptional for the birational morphism X → X' pulls back under f to a union of curves exceptional for the morphism Y → Y'. Applying the projection formula to f, we deduce that f_*α and f_*α' have vanishing intersection against every exceptional curve for X → X'. Since f_*α and f_*α' also push forward to the same class on X' by construction, we conclude that f_*α = f_*α'.

Face contracting morphisms are important for understanding the leading constant in Manin's Conjecture. Fix a number field K, and suppose that f: Y → X is a dominant generically finite morphism of smooth projective varieties over K with equal a,b-values. Manin's conjecture predicts that the growth rate of rational points of bounded height is the same on X and Y. Thus to obtain the correct Peyre's constant for the rate of growth of rational points one must decide whether or not to include f(Y(K)) in the counting function. Face contraction gives us a geometric criterion to distinguish whether we should include the point contributions from Y. When X is a Fano variety with an anticanonical polarization, the key situation to understand is when f: Y → X is Galois and a(X, L)f^*L + K_Y has Iitaka dimension 0. After replacing Y by a birational modification, we may assume that any birational transformation of Y over X is regular. In this situation <cit.> gives a geometric condition determining whether f and its twists give the entire set of rational points (and thus whether or not these contributions must be removed). It turns out that the geometric condition of <cit.> is equivalent to being face contracting.

§.§ Varieties with large a-invariant

The papers <cit.>, <cit.>, <cit.> give a classification of varieties with large a-invariant in the spirit of the Kobayashi-Ochiai classification. The following two results are immediate consequences of <cit.> which classifies the smooth projective varieties and big and nef Cartier divisors satisfying a(X,L) > dim(X) - 1.

Let Y be a smooth projective variety of dimension r and let H be a big and nef divisor on Y. Suppose that a(Y,H) > r. Then H^r = 1.

Let Y be a smooth projective variety of dimension r ≥ 2 and let H be a big and basepoint free divisor on Y. Suppose that a(Y,H) > r-1 and that κ(K_Y + a(Y,H)H) = 0. Then H^r ≤ 4. Furthermore, if H^r = 4 then a surface S defined by a general complete intersection of elements of H admits a birational morphism to ℙ^2 and H|_S is the pullback of 𝒪(2).
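Both lemmas are sharp, as the following boundary examples show (our own sanity checks via adjunction):

```latex
% First lemma: Y = P^r with H the hyperplane class gives
%   a(Y, H) = r + 1 > r   and   H^r = 1.
% Second lemma: Y = P^2 (so r = 2) with H = O(2), big and basepoint free.
% Here K_Y + tH = O(2t - 3) is pseudo-effective iff t >= 3/2, so
%   a(Y, H) = 3/2 > r - 1,   kappa(K_Y + (3/2)H) = kappa(O_Y) = 0,
% and H^2 = 4: the extremal case, with S = Y mapping (isomorphically)
% to P^2 and H|_S = O(2).
```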
§ EXPECTED DIMENSION OF RATIONAL CURVES

We let Mor(ℙ^1,X) denote the quasi-projective scheme parametrizing maps from ℙ^1 to X as constructed by <cit.>. Let X be a smooth projective variety and let α ∈ Eff_1(X)_ℤ. We let Mor(ℙ^1,X,α) denote the set of components of Mor(ℙ^1,X) parametrizing curves of class α. Given an open subset U ⊂ X, Mor_U(ℙ^1,X,α) denotes the sublocus of Mor(ℙ^1,X,α) which parametrizes curves meeting U.

Let W be an irreducible component of Mor(ℙ^1, X, α). The “expected dimension” of W is -K_X·α + dim X. It turns out that we always have an inequality

dim W ≥ -K_X·α + dim X

and when W parametrizes a dominant family of curves then equality is guaranteed (<cit.>).

Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-divisor on X. Let W be an irreducible component of Mor(ℙ^1, X, α) satisfying (K_X + a(X,L)L) · α = 0, and let π: 𝒞 → W be the corresponding family of irreducible rational curves with the evaluation map s: 𝒞 → X. Set Z = s(𝒞).

* Suppose that the dimension of W is greater than the expected dimension, i.e., dim W > -K_X·α + dim X. Then a(Z, L|_Z) > a(X, L).
* Suppose that Z ≠ X. Then a(Z, L|_Z) > a(X, L).

If a(Z,L|_Z) = ∞ then both statements are true so we may suppose otherwise. Let f: Y → Z be a resolution of singularities. By taking strict transforms of curves we obtain a family of curves on Y, 𝒞^∘ → W^∘, where W^∘ is an open subset of the reduced space underlying W, and with an evaluation map s: 𝒞^∘ → Y. Let C denote an irreducible curve parametrized by W^∘. Since W^∘ is contained in an irreducible component of Mor(ℙ^1, Y) parametrizing curves which dominate Y, we have

dim(W^∘) ≤ -K_Y · C + dim Y.

The dimension of W is always at least the expected dimension, so

-K_X · f_*C + dim X ≤ -K_Y · C + dim Y.

By assumption either this inequality is strict or dim Y < dim X, and in either case (K_Y - f^*K_X) · C < 0. Since (K_X + a(X,L)L)|_Z · f_*C = 0, we can equally well write (K_Y + a(X,L)f^*L) · C < 0. Since C deforms to cover Y, K_Y + a(X,L)f^*L is not pseudo-effective. This implies that a(Y, f^*L) > a(X,L), i.e., a(Z, L|_Z) > a(X, L).

There is of course no analogous statement away from the face of curve classes vanishing against K_X + a(X,L)L. Consider for example a K3 surface S containing infinitely many (-2)-curves and let X = ℙ^1 × S. For any big and nef ℚ-divisor L, the divisor K_X + a(X,L)L will be the pullback of a divisor on S. Let C be a (-2)-curve in some fiber over ℙ^1. Then the component of Mor(ℙ^1,X) corresponding to C has dimension 4 > -K_X·C + 3. Note however that C has positive intersection against K_X + a(X,L)L for any big and nef divisor L.

Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-divisor. Let U be the Zariski open subset which is the complement of the closure of all subvarieties Y ⊂ X satisfying a(Y,L|_Y) > a(X,L). Suppose that α ∈ Nef_1(X)_ℤ satisfies (K_X + a(X,L)L) · α = 0. If Mor_U(ℙ^1, X, α) is non-empty, then

dim Mor_U(ℙ^1, X, α) = -K_X·α + dim X

and every component of Mor_U(ℙ^1, X, α) parametrizes a dominant family of rational curves.

By Theorem <ref> there is a closed proper subset V ⊂ X such that for every Z ⊄ V, a(Z,L) ≤ a(X,L). Then apply Proposition <ref>.

Our proof actually shows that for any α ∈ Eff_1(X)_ℤ ∖ Nef_1(X)_ℤ, the space Mor_U(ℙ^1, X, α) is empty. Indeed, suppose that it is not empty. Then there is an irreducible curve C parametrized by Mor_U(ℙ^1, X, α). Let Z be the subvariety covered by deformations of C. Since U ∩ Z ≠ ∅, we have a(Z, L|_Z) ≤ a(X, L). By Proposition <ref>, C must deform to cover X, i.e., X = Z.
This means that α ∈ Nef_1(X)_ℤ, a contradiction.

The most compelling special case is: Let X be a smooth projective weak Fano variety. Let U be the Zariski open subset which is the complement of the closure of all subvarieties Y ⊂ X satisfying a(Y,-K_X|_Y) > a(X,-K_X). Then for any α ∈ Nef_1(X)_ℤ we have

dim Mor_U(ℙ^1, X, α) = -K_X·α + dim X

if it is not empty, and every component of Mor_U(ℙ^1, X, α) parametrizes a dominant family of curves.

If X is not Fano, it is of course possible that non-dominant families of rational curves sweep out a countable collection of proper subvarieties, as in the blow-up of ℙ^2 at nine very general points.

Theorem <ref> gives a new set of tools for understanding families of rational curves via adjunction theory. Hypersurfaces are perhaps the most well-known source of examples of families of rational curves: we have an essentially complete description of the components of the moduli space of rational curves for general Fano hypersurfaces (<cit.>, <cit.>, <cit.>). We briefly illustrate Theorem <ref> by discussing results which hold for all smooth hypersurfaces of a given degree.

[<cit.>] Let X be a smooth cubic hypersurface of dimension n ≥ 3. Let H denote the hyperplane class on X, so that K_X = -(n-1)H and a(X,H) = n-1. We show that X does not contain any subvariety with higher a-value, so that every family of rational curves has the expected dimension. This recovers a result of <cit.>.

Let Y be a resolution of a subvariety of X. <cit.> shows that the largest possible a-invariant for a big and nef divisor on a projective variety Y is dim(Y) + 1. Thus if Y has codimension ≥ 2 then a(Y,H) ≤ n-1. If Y has codimension 1, Lemma <ref> shows that a(Y,H) ≤ n-1 unless the H-degree of Y is 1. But a smooth cubic hypersurface of dimension ≥ 3 cannot contain any codimension 1 linear spaces, showing the claim.

To our knowledge the following example has not been worked out explicitly in the literature. Let X be a smooth quartic hypersurface of dimension n ≥ 5. Let H denote the hyperplane class on X, so that a(X,H) = n-2. We prove that X does not contain any subvariety with higher a-value, so that every family of rational curves has the expected dimension.

Suppose that Y ⊂ X is a subvariety of codimension ≥ 3. Just as in Example <ref>, we can immediately deduce that a(Y,H) ≤ a(X,H). Next suppose that Y ⊂ X has codimension 2. Applying Lemma <ref>, we see that a(Y,H) ≤ a(X,H) unless possibly Y is a codimension 2 linear space. But this is impossible in our dimension range. Finally, suppose there were a divisor Y ⊂ X satisfying a(Y,H) > a(X,H). If κ(K_Y + a(Y,H)H) > 0, then by <cit.> Y is covered by subvarieties of smaller dimension with the same a-value, an impossibility by the argument above. If κ(K_Y + a(Y,H)H) = 0, we may apply Lemma <ref> to see that (H|_Y)^{n-1} ≤ 4. By the Lefschetz hyperplane theorem, the only possibility is that Y is the intersection of X with a hyperplane and (H|_Y)^{n-1} = 4. Let Ỹ denote a resolution of Y and let S be a surface which is a general complete intersection of members of H on Ỹ. Again applying Lemma <ref>, we see that the morphism defined by a sufficiently high multiple of H|_S should define a map to ℙ^2. In our situation it defines a map to a (possibly singular) reduced irreducible quartic surface, a contradiction. (Note that this singular quartic must be normal because of <cit.>.)
Thus a(Y,H) ≤ a(X,H) in every case.

§ NUMBER OF COMPONENTS

In this section we study the following conjecture of Batyrev: Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-divisor on X. For a numerical class α ∈ Nef_1(X)_ℤ, let h(α) denote the number of components of Mor(ℙ^1, X, α) that generically parametrize free curves. There is a polynomial P(d) ∈ ℤ[d] such that h(α) ≤ P(L · α) for all α.

We prove a polynomial upper bound for components satisfying certain extra assumptions. We will also give a conjectural framework for understanding the number of components that is motivated by Manin's Conjecture.

§.§ Conjectural framework

We expect that polynomial growth as in Conjecture <ref> should arise from dominant maps f: Y → X which are face contracting. In this case there will be many nef curve classes on Y which are identified under pushforward to X, yielding many different components of the space of morphisms.

We use an example considered by <cit.> in the number theoretic setting. Set S = ℙ^1×ℙ^1 and let X = Hilb^2(S), a weak Fano variety of dimension 4. Let Y denote the blow up of S × S along the diagonal and let f: Y → X denote the natural 2:1 map. Note that f breaks the weakly balanced condition for -K_X: we have

(a(X,-K_X),b(X,-K_X)) = (1,3) < (1,4) = (a(Y,-f^*K_X),b(Y,-f^*K_X)).

As discussed in Section <ref>, f naturally defines a contraction of faces F(Y,f^*L) → F(X,L) of the nef cone of curves. Here F(Y,f^*L) consists of curves which have vanishing intersection against the exceptional divisor E over the diagonal. This face has dimension 4, since it contains the classes of strict transforms of curves on S × S which do not intersect the diagonal. Its image F(X,L) is the cone spanned by the classes of the curves F_1(1,2) and F_2(1,2). (Here F_1(1,2) denotes the curve parametrizing length two subschemes of S where one point is fixed and the other varies in a fiber of the first projection. F_2(1,2) is defined analogously for the second projection.) Note that f_* decreases the dimension of F(Y,f^*L) by 2.

It is easy to see that a dominant component of rational curves on Y with class β ∈ F(Y,f^*L) is the strict transform of a dominant component of rational curves on S × S. Since S × S is toric, there is exactly one irreducible component of each class β. Suppose that β is the strict transform of a degree (a,b,c,d) curve class on (ℙ^1)^×4. The pushforward identifies all classes with a+c = m and b+d = n to the class mF_1(1,2) + nF_2(1,2). The pushforward of any component of Mor(ℙ^1,Y) with a class in F(Y,f^*L) yields (a dense subset of) a component of Mor(ℙ^1,X) since the expected dimensions coincide. Furthermore, a component of class mF_1(1,2) + nF_2(1,2) will be the image of exactly two different components on Y (given by (a,b,m-a,n-b) and (m-a,n-b,a,b)) except when m and n are both even and a = c and b = d, in which case there is only one component. Thus there are at least ⌈(m+1)(n+1)/2⌉ different components of rational curves of class mF_1(1,2) + nF_2(1,2). Since any rational curve on X avoiding E is the pushforward of a rational curve on Y, this is in fact the exact number.
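As a quick plausibility check on this count (our own verification script, not part of the argument), one can enumerate the classes (a,b,m-a,n-b) directly and count orbits under the swap (a,b) ↦ (m-a,n-b):

```python
from math import ceil

def component_count(m: int, n: int) -> int:
    """Count components of class m*F_1(1,2) + n*F_2(1,2) on X = Hilb^2(S).

    Each component comes from a class (a, b, m-a, n-b) on (P^1)^4, and
    (a, b, m-a, n-b) and (m-a, n-b, a, b) push forward to the same
    component, so we count orbits of the involution (a, b) -> (m-a, n-b).
    """
    orbits = set()
    for a in range(m + 1):
        for b in range(n + 1):
            orbits.add(frozenset({(a, b), (m - a, n - b)}))
    return len(orbits)

# Agrees with the closed form ceil((m+1)(n+1)/2) for small m, n.
for m in range(8):
    for n in range(8):
        assert component_count(m, n) == ceil((m + 1) * (n + 1) / 2)
```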
In the previous example the growth in components was caused by the existence of a dominant map f: Y → X breaking the weakly balanced condition. But even when there is no such map we can still have growth of components due to the existence of face contracting maps.

Let X be the smooth weak del Pezzo surface in Example <ref>; we retain the notation from this example. We claim that there are families of free rational curves representing two linearly independent classes in F(Y,f^*L). Using a gluing argument, one can then deduce that the number of components of free rational curves for classes in F grows at least linearly as the degree increases. For each generator of F(Y,f^*L) we can run an MMP to obtain a Mori fibration π: Ỹ → Z on a birational model of Y which contracts this ray. If dim(Z) = 1, then a general fiber will be in the smooth locus of Ỹ and its pullback on Y will be a free rational curve of the desired numerical class. If dim(Z) = 0, we can apply <cit.> to find a rational curve in the smooth locus of Ỹ whose pullback to Y will be a free rational curve of the desired numerical class.

As in the previous examples, one can expect the degree of the polynomial P(d) in Conjecture <ref> to be controlled by the relative dimension of contracted faces. Let X be a smooth projective weak Fano variety. For a numerical class α ∈ Nef_1(X)_ℤ, let h(α) denote the number of components of Mor(ℙ^1, X, α) that generically parametrize free curves. Then h(mα), considered as a function of m, is bounded above by a polynomial P(m) whose degree is the largest relative dimension of a map f_*: F(Y,f^*L) → F where f is a face contracting morphism f: Y → X, F denotes the image of F(Y,f^*L), and α ∈ F.

§.§ Breaking chains of free curves

In this section we prove some structure theorems for chains of free curves. We will pass from working with the spaces Mor(ℙ^1,X) to the Kontsevich spaces of stable maps ℳ_0,n(X,β). Note that this change in setting drops the expected dimension of spaces of rational curves by 3. We will assume familiarity with these spaces as in <cit.>, <cit.>. In fact, we will work exclusively with the projective coarse moduli space M_0,n(X,β).

Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-divisor on X. By a component of M_0,0(X), we will mean more precisely the reduced variety underlying some component of this projective scheme. For each component M ⊂ M_0,0(X) which generically parametrizes free curves, we denote by M' the unique component of M_0,1(X) parametrizing a point on a curve from M, and by M” the analogous component of M_0,2(X).

A chain of free curves on X of length r is a stable map f: C → X such that C is a chain of rational curves with r components and the restriction of f to any component C_i realizes C_i as a free curve on X. We can parametrize chains of free curves (coming from components M_1,…,M_r) by the product

M'_1 ×_X M”_2 ×_X … ×_X M”_{r-1} ×_X M'_r.

Of course such a product might also have components which do not generically parametrize chains of free curves. We will use the following notation to distinguish between the two types of component. Given a fiber product as above, a “main component” of the product is any component which dominates the parameter spaces M_i”, M_1', and M_r' under each projection map.
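The expected dimension quoted in the next paragraph follows from a routine count, which we record for the reader's convenience (using the standard fact that a component of M_{0,n}(X) generically parametrizing free curves of class C_i has dimension -K_X·C_i + dim X - 3 + n):

```latex
% dim M_i'  = -K_X . C_i + dim X - 2   (one marked point),
% dim M_i'' = -K_X . C_i + dim X - 1   (two marked points),
% and each of the (r-1) fiber products over X imposes dim X conditions.
% For a chain of class C = C_1 + ... + C_r:
\begin{align*}
\dim\big(M_1' \times_X M_2'' \times_X \cdots \times_X M_{r-1}'' \times_X M_r'\big)
 &= \sum_{i=1}^r (-K_X \cdot C_i) + r\dim X - 4 - (r-2) - (r-1)\dim X \\
 &= -K_X \cdot C + \dim X - 2 - r .
\end{align*}
```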
Loosely speaking our goal is to count such main components. Note that any component of M'_1 ×_X … ×_X M'_r which generically parametrizes chains of free curves will have the expected dimension -K_X·C + dim(X) - 2 - r. A chain of free curves is automatically a smooth point of M_0,0(X). For a component M_i which generically parametrizes free curves, we let U_i denote the sublocus of free curves. Analogously, we define U'_i and U”_i for the one or two pointed versions.

Consider an open component of chains of free curves with a marked point on each end, that is, N ⊂ U”_1 ×_X U”_2 ×_X … ×_X U”_r. Then each projection map N → U”_j is dominant and flat. Furthermore, the map N → X induced by the last marked point is dominant and flat.

The proof is by induction on the length of the chain. In the base case of one component, the first statement is obvious. For the second, note that U”' ⊂ M_0,3(X) (the three-pointed analogue of U”) can be identified with an open set in Mor(ℙ^1, X). By <cit.> the evaluation map for the second marked point is flat. This factors through the natural map U”' → U”; since the forgetful map is faithfully flat, we see that the second statement also holds.

We next prove the induction step. The projection from N onto the first r-1 factors maps N to a component Q of U”_1 ×_X … ×_X U”_{r-1}. By induction, Q has the two desired properties. Also, since the space of free curves through a fixed point has the expected dimension, the map U”_r → X induced by the first marked point is dominant and flat. Consider the fiber square

Q ×_X U”_r → U”_r
    ↓           ↓
    Q     →     X

where U”_r → X is the evaluation at the first marked point and Q → X is the evaluation at the last marked point. Both projections from Q ×_X U”_r have equidimensional fibers by base change. Furthermore, every component of Q ×_X U”_r has the same dimension (as it parametrizes chains of free curves). Together, this shows that every component of Q ×_X U”_r will dominate Q so long as there is some component which dominates Q. But it is clear that a general chain of free curves in Q can be attached to a free curve in U_r, so that the map from Q ×_X U”_r to Q must be dominant for at least one component. Noting that N is a component of Q ×_X U”_r for dimension reasons, we obtain from the induction hypothesis the first statement for N since flatness is stable under base change and composition. The last statement follows by the same logic.

Consider a component of chains of free curves with a marked point on each end, that is, a main component N ⊂ M”_1 ×_X M”_2 ×_X … ×_X M”_r. Fix a closed subset Z ⊊ X. Consider the map f: N → X induced by the first marked point. For the fiber F of f over a general point of X, every component of F generically parametrizes a chain of free curves C such that the map g: C → X induced by the last marked point does not have image in Z.

The proof is by induction on the length of the chain. Consider the base case f: M”_1 → X. Since reducible curves form a codimension 1 locus, every component of a general fiber of f must contain irreducible curves. Since there is a closed subset of X containing every non-free irreducible curve in M_1, we see that every component of a general fiber of f must contain free curves. The ability to avoid Z follows from Lemma <ref>.

We now prove the induction step. Via projection N maps into a component Q ⊂ M”_1 ×_X … ×_X M”_{r-1}. By induction Q satisfies the desired property.
By <cit.> there is a proper closed subset Z_0 ⊂ X such that if C_0 is a component of a curve parametrized by M”_r and C_0 ⊄ Z_0 then C_0 is free. Consider the evaluation along the first marked point (of Q) denoted by f̃: Q ×_X M”_r → X. The fibers of this map are products F ×_X M”_r where F is a fiber of Q → X; choosing the fiber F general with respect to Z_0, we see that every component of every fiber will contain a chain of free curves. In particular, this is also true for the map f: N → X which is a restriction of f̃ to a component. The ability to avoid Z via the last marked point follows from Lemma <ref>.

Consider a parameter space of chains of free curves, that is, a main component N ⊂ M'_1 ×_X M”_2 ×_X … ×_X M”_{r-1} ×_X M'_r. Suppose that curves in M_j degenerate into a chain of two free curves in M̃_j' ×_X M̂_j'. Then N contains a main component of M'_1 ×_X M”_2 ×_X … ×_X M”_{j-1} ×_X M̃_j” ×_X M̂_j” ×_X M”_{j+1} ×_X … ×_X M”_{r-1} ×_X M'_r.

By definition the projection N → M”_j is dominant, hence surjective by properness. So we know that N contains a point of M'_1 ×_X M”_2 ×_X … ×_X M”_{j-1} ×_X M̃_j” ×_X M̂_j” ×_X M”_{j+1} ×_X … ×_X M”_{r-1} ×_X M'_r. If we can show that it contains a point which is a chain of free curves, then since such points are smooth in M_0,0(X) we can conclude that N will contain an entire component of chains of length r+1. Since the map N → M”_j is surjective, in particular, for any two-pointed length 2 chain in M̃_j” ×_X M̂_j” there is a curve parametrized by N containing this chain. Since the curves are free, we may choose a chain such that the first and last marked points are general. The fiber of N over this point is a union of components of the product G_1 × G_2, where G_1 ⊂ M'_1 ×_X … ×_X M”_{j-1} and G_2 ⊂ M”_{j+1} ×_X … ×_X M'_r are the fibers of the last and first markings respectively. Applying Lemma <ref>, we see that every component of the fiber over this point contains chains of free curves.

§.§ Toward Batyrev's conjecture

Let C_1 ∪ … ∪ C_r be a chain of free curves, with map f: C → X. Let f^†: {1,…,r} → M_0,0(X) denote the function which assigns to i the unique component of the moduli space containing C_i. We call f^† the combinatorial type of f.
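The counting arguments below repeatedly bound the number of components by the number of unordered multisets of components of 𝒩 with prescribed total degree. The following small script (our own illustration, with hypothetical degree data) shows how such counts can be computed and why they grow polynomially, of degree at most |𝒩| - 1, in the total degree:

```python
from functools import lru_cache

def multiset_count(degrees, d: int) -> int:
    """Number of unordered multisets, chosen with replacement from
    components with the given tuple of degrees, whose total degree is
    exactly d.
    """
    @lru_cache(maxsize=None)
    def count(i: int, remaining: int) -> int:
        if remaining == 0:
            return 1
        if i == len(degrees) or remaining < 0:
            return 0
        # Either never use component i again, or use one more copy of it.
        return count(i + 1, remaining) + count(i, remaining - degrees[i])
    return count(0, d)

# Hypothetical example: two components of degree 1 and 2.  The count is
# floor(d/2) + 1, i.e. linear in d (a polynomial of degree |N| - 1 = 1).
assert [multiset_count((1, 2), d) for d in range(7)] == [1, 1, 2, 2, 3, 3, 4]
```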
Let X be a smooth projective variety and let M be a component of M_0,0(X). Suppose that M contains a point f parametrizing a chain of free curves. For any f̃^† which is a precomposition of f^† with a permutation, M also contains a point representing a chain of free curves with combinatorial type f̃^†.

Suppose that f: C → X denotes our original chain of free curves. It suffices to prove the statement when f̃^† differs from f^† by a transposition of two adjacent elements. Suppose that T_1 and T_2 are two adjacent components of C. Let S_1 denote the rest of the chain which attaches to T_1, and S_2 denote the rest of the chain which attaches to T_2. After deforming f, we may suppose that the intersection of T_1 and T_2 maps to a general point x of X. Suppose we leave T_1 and T_2 fixed, but deform S_1 and S_2, maintaining a point of intersection with T_1 or T_2 respectively, so that the specialized curves S_1' and S_2' contain the point x. By generality of the situation, the deformed S_1' and S_2' are still chains of free curves. The stable curve g: D → X corresponding to this deformation looks like a single rational curve Z contracted by g to the point x with four chains of free curves S'_1, T_1, T_2, S'_2 attached to Z. A tangent space calculation (as in <cit.>) shows that g is a smooth point of M. However, by a similar argument g is the deformation of a chain of rational curves of the type S_1” ∪ T_2 ∪ T_1 ∪ S_2” where S_1” and S_2” are deformations of S_1' and S_2'.

Let X be a smooth projective variety and let L be a big and nef ℚ-divisor on X. Fix a positive integer q. Consider the set 𝒵 of generically finite dominant covers f: Z → X such that there is a component T of M_0,0(Z) which generically parametrizes free curves of f^*L-degree ≤ q and such that the induced map T → M_0,0(X) is dominant birational onto a component. Up to birational equivalence, there are only finitely many elements of 𝒵.

For degree reasons, there are only finitely many components M of M_0,0(X) which can be the closure of the image of such a map. Each such M generically parametrizes free curves. Thus, there is a unique component M' of M_0,1(X) lying over M. Let g: M' → X denote the universal family map, and let h: W → X denote the finite part of the Stein factorization of a resolution of g. If the map g factors rationally through a generically finite dominant map f: Z → X, then so does h. Thus for any given component M there can only be finitely many corresponding elements of 𝒵.
Note that any chain of free curves on X can be smoothed yielding a free curve (<cit.>).Thus any component of M_0,0(X,d) which contains a chain of free curves must generically parametrize free curves.Let { M_β}_β∈Ξ denote the elements of 𝒩.For each component, we have a universal family map ν_β: M_β' → X.We let e_β denote the degree of the Stein factorization of the composition of ν_β with a resolution of singularities of M_β' and set e = sup_β∈Ξ e_β.Note that since we have included a resolution in the definition, if e_β = 1 then the general fiber of ν_β is irreducible.The proof is by induction on e.First suppose that e=1.Let M be a component of M_0,0(X,d) satisfying the desired condition.Since a chain of free curves is a smooth point of M, to count such components M it suffices to count all possible components of the parameter space of chains of free curves from 𝒩 of total degree d.In fact, by applying Lemma <ref>, we may reorder the combinatorial type however we please.Furthermore, for any choice of combinatorial type the parameter space of chains of that typeM'_1×_X M”_2×_X…×_X M”_r-1×_X M'_ris irreducible since by assumption each degree e_i=1.Thus the number of possible M is at most the possible ways of choosing (with replacement and unordered) components of 𝒩 such that the total degree adds up to d.This count is polynomial in d.Before continuing with the proof, we make an observation:Suppose that M_1 and M_2 are components of M_0,0(X) which generically parametrize free curves, and that (a resolution of) the map M'_1→ X has degree 1.Then there is a unique main component of M'_1×_X M'_2 which parametrizes length 2 chains of free curves.Indeed, since the general fiber of M'_1×_X M'_2→ M'_2 is irreducible and M'_2 is irreducible, we see that M_1' ×_X M_2' is irreducible.Now suppose that e > 1.For a dominant generically finite map g: Z → X of degree ≥ 2 with Z smooth, let 𝒩_Z denote the subset of 𝒩 consisting of components M such that the universal map M' → X factors rationally through Z.Note that the locus where the rational map to Z is not defined must miss the general fiber of M' → M.Thus, we obtain a family of free curves on Z parametrized by an open subset of M.A deformation calculation shows that the general curve has vanishing intersection with the ramification divisor; in particular, M is birational to a component of M_0,0(Z).Note that for any component of 𝒩_Z the degree of the rational map from the component to Z is strictly smaller than for the corresponding component in 𝒩.Consider the corresponding families of rational curves on Z measured with respect to the big and nef divisor g^*L.By the induction hypothesis, there is a polynomial P_Z(d) which gives an upper bound for the number of components of M_0,0(Z,d) which arise by gluing chains from 𝒩_Z on Z.Furthermore, by Lemma <ref> there are only finitely many Z for which 𝒩_Z is non-empty.Fix a positive integer r and a dominant generically finite map f: Z → X of degree ≥ 2 with Z smooth.As we vary over possible choices M_i∈𝒩, consider all main components of M'_1×_X…×_X M'_k such that there is an integer b where the component M of M_0,0(X) obtained by gluing the first b curves has L-degree r and has a universal family map which factors rationally through Z, but if we consider the component arising from gluing the first b+1 curves, the Stein factorization of a resolution of the universal family map has degree 1.We see there are at most P_Z(r) possible components M obtained by gluing the first b curves in the chain. 
Next consider adding one more component. By degree considerations, there can be at most e · P_Z(r) components obtained by gluing the first (b+1) curves, and for any such component the universal family has map to X with generically irreducible fibers. Finally, to add on the remaining components, we may use Lemma <ref> to reorder the other components arbitrarily. Applying Observation <ref>, we see that the total number of glued components for this choice of b and Z is bounded above by e · P_Z(r) times the number of ways to choose (with replacement and unordered) (k-b-1) components from 𝒩. In total, the number of components of M_0,0(X) containing chains of curves from 𝒩 of degree d will be bounded above by the sum of the previous bounds as we vary Z and r. Let Q(k) denote the polynomial representing the number of ways to choose k components (with replacement and unordered) from 𝒩. Altogether, the number of components is bounded above by the polynomial in d given by

e · Q(d) · ∑_Z ∑_{r ≤ d} P_Z(r).

§.§ Gluing free curves

In this section we attempt to improve the degree of the polynomial bound constructed in Theorem <ref>. Returning to the proof, we see that the degree of the Stein factorization of 𝒞 → X (where 𝒞 is a universal family of rational curves) plays an important role. The key observation is that we can use the a-invariant to control the properties of this Stein factorization.

Let X be a smooth projective weak Fano variety. Suppose that W is a component of Mor(ℙ^1, X) parametrizing a dominant family of rational curves π: 𝒞 → W with evaluation map s: 𝒞 → X. Let 𝒞̃ be the resolution of a projective compactification of 𝒞 with a morphism s': 𝒞̃ → X extending the evaluation map. Consider the Stein factorization of s', 𝒞̃ → Y →^f X. Then a(Y,-f^*K_X) = a(X,-K_X).

Let Ỹ be a resolution of Y with map f̃: Ỹ → X. By taking the strict transform of the family of rational curves, one obtains a dominant family on Ỹ which is parametrized by an open subset of a component of Mor(ℙ^1, Ỹ). Since the dimension of this component is the same on Ỹ and on X, and equals the expected dimension in both cases, we have

K_X · f̃_*C = K_Ỹ · C, and hence (K_Ỹ - f̃^*K_X) · C = 0.

Thus the divisor K_Ỹ + a(X,-K_X)(-f̃^*K_X) is pseudo-effective but not big.

Suppose now that X is a smooth projective weak Fano variety satisfying:

* X does not admit any a-cover.
* every free curve on X deforms (as a stable map) to a chain of free curves of degree ≤ q.

Since the first condition holds, we can apply Proposition <ref> to see that for every component of Mor(ℙ^1, X) parametrizing a dominant family of rational curves the evaluation morphism has connected fibers. Since the second condition holds, we can apply Theorem <ref> to control the number of components of the parameter space of rational curves. Let 𝒮 be the set of components of Mor(ℙ^1,X) that generically parametrize free curves of degree ≤ q. Consider the abelian group Λ = ⊕_{M ∈ 𝒮} ℤM. For sequences {M_i}_{i=1}^s, {M_j'}_{j=1}^t of elements in 𝒮 we introduce the relation ∑ M_i = ∑ M_j' whenever a chain of free curves parametrized by the M_i lies in the same component of M_0,0(X) as a chain of free curves which lie in the {M_j'}. The argument of Theorem <ref> shows that the total number of components of Mor(ℙ^1,X) parametrizing free curves of degree ≤ m is bounded above by a polynomial in m of degree

rank(Λ/R)

where R ⊂ Λ is the subgroup generated by the relations described above. By analyzing components of Mor(ℙ^1,X) of low degree, one can hope to obtain enough relations to verify Conjecture <ref>. For example:

Let X be a smooth projective Fano variety of Picard rank
1 satisfying:

* X does not admit any a-cover.
* every free curve on X deforms (as a stable map) to a chain of free curves of degree ≤ q.

Suppose that the space of free curves of degree q! is irreducible. Then there is an upper bound on the number of components of Mor(ℙ^1,X,α) parametrizing free curves as we vary the class α ∈ Nef_1(X)_ℤ.

§ GEOMETRIC MANIN'S CONJECTURE

In this section we present a precise version of Manin's Conjecture for rational curves. We will need the following definitions: Let X be a smooth uniruled projective variety and let L be a big and nef ℚ-divisor on X.

* The rationality index r(X,L) is the smallest positive rational number of the form L · α as α varies over all classes in N_1(X)_ℤ.
* Let V be the subspace of N_1(X) spanned by F(X,L). (Note that by <cit.> V is a rational subspace with respect to the lattice of curve classes.) Let Q denote the rational hyperplane in V consisting of all curve classes with vanishing intersection against L; there is a unique measure dΩ on Q normalized by the lattice of integral curve classes. This also induces a measure on the parallel affine plane Q_r := {β ∈ V | L · β = r(X,L)}. We define α(X,L) to be the volume of the polytope Q_r ∩ F(X,L). In other words, α(X,L) is the top coefficient of the Ehrhart polynomial for the polytope obtained by slicing F(X,L) by the codimension 1 plane Q_r.

§.§ Statement of conjecture: rigid case

Manin's Conjecture predicts the growth rate of components of Mor(ℙ^1,X) after removing the rational curves in some “exceptional set.” In the number-theoretic setting, removing points from a closed subset is not sufficient to obtain the expected growth rate; one must remove a thin set of points (see <cit.>, <cit.>, <cit.>, <cit.>). Following the results of <cit.>, we will interpret a “thin set of rational curves” via the geometry of the a and b constants. In this section we will address the situation when κ(K_X + a(X,L)L) = 0. Note that this includes the case when X is weak Fano and L = -K_X. The following definition identifies exactly which components should be counted in this situation; it is identical to the conjectural description of the exceptional set for rational points.

Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-divisor on X such that κ(K_X + a(X,L)L) = 0. Let M ⊂ Mor(ℙ^1,X) be a component, let 𝒞 denote the universal family over M and let s: 𝒞 → X denote the family map. We say that M is a Manin component if:

* The curves parametrized by M have class contained in F(X,L).
* The morphism s does not factor rationally through any thin morphism f: Y → X such that a(Y,f^*L) > a(X,L).
* The morphism s does not factor rationally through any dominant thin morphism f: Y → X such that f is face contracting and (a(Y,f^*L),b(Y,f^*L)) ≥ (a(X,L),b(X,L)) in the lexicographic order.
* The morphism s does not factor rationally through any dominant thin morphism f: Y → X such that a(Y, f^*L) = a(X, L) and κ(K_Y + a(Y,f^*L)f^*L) > 0.

Note that by Theorem <ref> any Manin component will necessarily parametrize a dominant family of curves.
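To make these definitions concrete, here is a worked computation for the simplest non-trivial case (our own illustration): X = ℙ^1 × ℙ^1 with L = -K_X.

```latex
% Curve classes are pairs (d_1, d_2), and L = -K_X has type (2,2), so
%   L . (d_1, d_2) = 2d_1 + 2d_2  and the rationality index is
%   r(X, L) = 2,  attained by the ruling classes (1,0) and (0,1).
% Since a(X,L) = 1 and K_X + L = 0, the face is the full nef cone:
%   F(X, L) = Nef_1(X) = R_{>=0}(1,0) + R_{>=0}(0,1).
% The slice Q_r \cap F(X,L) = { (d_1, d_2) : d_1 + d_2 = 1, d_i >= 0 }
% is a segment of lattice length 1, so alpha(X, L) = 1.  Indeed the
% k-th dilated slice contains k + 1 integral classes, a polynomial with
% top coefficient alpha(X,L) = 1 and degree b(X,L) - 1 = rho(X) - 1 = 1.
```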
Condition (ii) is necessitated by Theorem <ref> and condition (iii) is motivated by Conjecture <ref>, but we have not yet discussed condition (iv). For rational points, such a restriction is necessary to obtain the correct Peyre's constant; see <cit.>. For rational curves, this condition rules out “extraneous” components consisting of curves that are free but not very free. Again such components can modify the leading constant in Manin's Conjecture; see Theorem <ref> for an example. In order to obtain uniqueness in Conjecture <ref> below one must include condition (iv). Proposition <ref> and <cit.> show that the curves parametrized by Manin components will almost always satisfy the weak Lefschetz property.

Our main conjecture concerning Manin components is: Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-divisor such that κ(K_X + a(X,L)L) = 0. For any ℤ-curve class α contained in the relative interior of F(X,L) there is at most one Manin component parametrizing curves of class α.

To obtain the correct growth rate it would be enough to show that the number of Manin components representing a numerical class is bounded above, but uniqueness holds in every example we know about. Since Conjecture <ref> is quite strong, we will formulate a weaker version which emphasizes the relationship with the theory of rational points. Define the counting function

N(X, L, q, d) = ∑_{i=1}^d ∑_{W ∈ Manin_i} q^{dim W}

where Manin_i is the set of Manin components of Mor(ℙ^1, X) parametrizing curves of L-degree i · r(X,L).

Let X be a smooth projective uniruled variety of dimension n and let L be a big and nef ℚ-divisor on X such that κ(K_X + a(X,L)L) = 0. Then

N(X, L, q, d) ∼ (q^n α(X,L) / (1 - q^{-a(X,L)r(X,L)})) · q^{d·a(X,L)r(X,L)} · d^{b(X,L)-1}.

Suppose that X is a smooth projective uniruled variety and that L is a big and nef ℚ-divisor such that κ(K_X + a(X,L)L) = 0. Loosely speaking, we expect a bijection between Manin components on X and components of families of rational curves on some birational model X' of X. In other words, our counting function should actually count curves on some variety and not just curves in some face. More precisely, the argument of <cit.> shows that there is a birational model ϕ: X ⇢ X' such that ϕ is a rational contraction, X' is normal ℚ-factorial with terminal singularities, and the anticanonical divisor on X' is big and nef and satisfies -K_X' ≡ ϕ_*(a(X,L)L). We expect Manin components to correspond to components of the moduli space of rational curves on X'.

Here is a heuristic argument. Fix a class α ∈ F(X,L). Suppose that β ∈ Nef_1(X)_ℤ ∖ F(X,L) is a curve class whose pushforward to the model X' is the same as α. Families of rational curves of class β have lower expected dimension than families of rational curves of class α, so the former should form subfamilies of the latter under pushforward. In particular, families of curves of class β should not contribute to the count of families of rational curves on X'. Thus counting families of rational curves on X' should be the same as counting families of rational curves contained in F(X,L). This heuristic anticipates the existence of many families of rational curves with vanishing intersection against K_X + a(X,L)L. But the existence of even a single such family is a famous open problem in birational geometry (see Conjecture <ref>).

Conjecture <ref> is known for the following Fano varieties (equipped with the anticanonical polarization):

* general hypersurfaces in ℙ^n of degree < n-1 by <cit.>,
* homogeneous varieties by <cit.>, <cit.>,
* Fano toric varieties by work of
Bourqui, e.g. <cit.>,* Del Pezzo surfaces by <cit.>.In the last two cases we need to explain how to derive the result from the cited papers. Let X be a smooth del Pezzo surface of degree ≥ 2.Fix a nef curve class α and consider the space parametrizing dominant families of rational curves of class α. For simplicity we may assume that X has index 1, i.e., it contains a (-1)-curve. Then: * <cit.> shows that the sublocus parametrizing maps birational onto their image is either irreducible or empty.* An easy deformation count shows that if there is a component parametrizing maps which are non-birational onto their image, the image must be a fiber of a map from X to ℙ^1. <cit.> classifies the behavior of a and b constants for subvarieties and covers of del Pezzo surfaces.It shows that: * The only curves C with a(C,-K_X|_C) > a(X,-K_X) are (-1)-curves.* There are no dominant thin maps f: Y → X such that a(Y,-f^*K_X) = a(X,-K_X) and κ(K_Y - a(X,-K_X)f^*K_X) = 0.* Suppose f: Y → X is a dominant thin map such that a(Y,-f^*K_X) = a(X,-K_X) and κ(K_Y - a(X,-K_X)f^*K_X) = 1.The fibers of the Iitaka fibration for Y are mapped under f to the fibers of a map from X to a curve.Based on this analysis, Conjecture <ref> is verified by Testa's results. Let X be a smooth projective toric variety with open torus U.<cit.> shows that every nef curve class which has intersection ≥ 1 against every torus-invariant divisor is represented by a unique dominant family of rational curves. <cit.> analyzes the behavior of the a-invariant for subvarieties and covers of toric varieties.Based on this analysis, Conjecture <ref> is verified by Bourqui's results, i.e., the unique component representing a nef class with the above property is a Manin component. §.§ Outline of conjecture: general caseThe formulation of Manin's Conjecture in the general case should be essentially the same.Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-divisor such that κ(K_X + a(X,L)L) > 0.After replacing X by a birational model, we can assume that the Iitaka fibration for K_X + a(X,L)L is a morphism π.The definition of a Manin component is now a bit more subtle; one can no longer focus on dominant maps but must also account for covers of fibers of π.However, after making this minor change, Conjecture <ref> and the behavior of the counting function N(X,L,q,d) should be formulated in exactly the same way. In the general case, Manin components should be in bijection with families of rational curves on a birational model of a fiber of the Iitaka fibration of K_X + a(X,L)L. 
§.§ Manin-type bounds

Let X be a smooth projective uniruled variety and let L be a big and nef ℚ-divisor on X. Fix ϵ > 0; then for sufficiently large q

N(X,L,q,d) = O(q^{d(a(X,L)r(X,L)+ϵ)}).

<cit.> shows that there is a positive constant C such that the number of components of free curves of degree d against a fixed big and nef divisor is at most C^d. The result follows by combining this bound with Theorem <ref> and a standard counting argument.

It is also interesting to look for lower bounds on the number of components of rational curves. It is conjectured that free rational curves generate the nef cone of curves – this would follow from the existence of rational curves in the smooth locus of mildly singular Fano varieties. <cit.> proves a weaker statement: Let X be a smooth projective rationally connected variety. Then N_1(X) is spanned by the classes of free rational curves.

By <cit.>, N_1(X)_ℤ is spanned by the classes of rational curves {C_i}. Since X is rationally connected, there is a family of very free curves C such that there is a very free member of the family through any point of X. By gluing sufficiently many of these C onto one of the C_i to form a comb, we can deform to get a smooth curve (as in <cit.>). It is then clear that these smoothed curves and the class of C together span N_1(X).

Free curves which meet at a point can be glued to a free curve of larger degree (see <cit.>). Thus one can generate many more dominant components of Mor(ℙ^1,X) starting from this spanning set. Suppose now that X is a Fano variety and that L = -K_X. If all the components of rational curves constructed by gluing are Manin components, then we obtain a lower bound of the form

N(X,-K_X,q,d) ≥ C q^{d·r(X,-K_X)} d^{ρ(X)-1}

for some constant C. However, in general there is no reason for this construction to yield only Manin components.

§.§ Geometric heuristics

In our interpretation of Manin's Conjecture one should discount contributions of f: Y → X with higher a and b-values. In this section, we give a heuristic argument proving that such components must be discounted. Since we are only interested in heuristics, in this subsection we will assume the following difficult conjecture about rational curves.

Let X be a smooth projective variety and let L be a big and nef ℚ-divisor on X. For each element α ∈ Nef_1(X)_ℤ satisfying (K_X + a(X,L)L) · α = 0 and with sufficiently high L-degree, there exists a dominant family of maps from ℙ^1 to X whose images have class α.

Conjecture <ref> would follow quickly from standard conjectures predicting the existence of free rational curves contained in the smooth locus of a log Fano variety. Assuming this conjecture, the following two statements show that thin morphisms f: Y → X such that Y has higher a,b-values would give contributions to the counting function which are higher than the predicted growth rate.

Assume Conjecture <ref>. Let X be a smooth projective weak Fano variety and set L = -K_X. Suppose that f: Y → X is a generically finite morphism such that a(Y,L) > a(X,L). Then there exist components of Mor(ℙ^1,X) of any sufficiently high degree which factor through f(Y) ⊂ X and have dimension higher than the expected dimension.

Choose a dominant family of rational curves C on Y as in Conjecture <ref> such that

L · C > (dim(X) - dim(Y))/(a(Y,L) - a(X,L)).

By computing the expected dimension on X and on Y one concludes the statement.

Assume Conjecture <ref>. Let X be a smooth projective weak Fano variety and set L = -K_X.
Assume Conjecture <ref>. Let X be a smooth projective weak Fano variety and set L = -K_X. Suppose that we have a surjective generically finite map f: Y → X which is face contracting for L. There is a class α ∈ Nef_1(X)_ℤ such that the number of components of Mor(ℙ^1, X, mα) is bounded below by a polynomial of degree in m equal to the relative dimension of the faces.

Let b denote the difference in dimensions between F(Y,f^*L) and its image under f_*. <cit.> shows that the lattice points F(Y,f^*L) ∩ N_1(Y)_ℤ generate a subcone of F(Y,f^*L) of full rank. Thus there is a class α ∈ Nef_1(X)_ℤ and a constant C > 0 such that for sufficiently large integers m there are ≥ Cm^b points of F(Y,f^*L) mapping to mα. For sufficiently large m, Conjecture <ref> guarantees that for each class β that pushes forward to mα there is a component M_β ⊂ Mor(ℙ^1, Y, β) parametrizing a dominant family of rational curves. For each such M_β, by composing with f we get a dominant family of rational curves on X. At most deg f different components on Y can get identified to a single component on X, showing that the number of different components on X has the desired asymptotic growth rate.

We note in passing that if Conjecture <ref> is true it would allow one to use facts about rational curves to deduce results about rational points. Assuming Conjecture <ref>, the results of <cit.> show that a general hypersurface with degree not too large will not admit subvarieties with higher a-values. Switching to the number-theoretic setting, we should then expect Manin's Conjecture to hold for such hypersurfaces with no exceptional set. Indeed, such results are obtained in the seminal work <cit.> using the circle method when the dimension is exponentially larger than the degree. Conversely, <cit.> uses the circle method to prove statements about the behavior of Mor(ℙ^1,X) over ℂ for hypersurfaces X of low degree. <cit.> and <cit.> prove related statements in the function field setting using universal torsors.

§ FANO THREEFOLDS OF PICARD RANK 1 AND INDEX 2

Let X be a smooth Fano threefold such that Pic(X) = ℤH, -K_X = 2H and H^3 ≥ 2. For such varieties the behavior of the a and b constants with respect to subvarieties and covers is understood completely (see <cit.>). By applying the general theory worked out before, we are able to classify all components of Mor(ℙ^1,X) after making only a few computations in low degree. The main result of this section, Theorem <ref>, verifies Conjecture <ref> for Fano threefolds of this type.

For the rest of this section, we let M_0,n(X, d) denote the parameter space of n-pointed stable maps whose image has degree d against the ample generator H of Pic(X). This space admits an evaluation map
ev_n : M_0,n(X, d) → X^n.

First we recall the classification of Fano 3-folds of Picard rank one and index two. <cit.> Let X be a smooth Fano 3-fold with Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2. Then we have 2 ≤ H^3 ≤ 5 and the 3-fold X has the following description:
* when H^3 = 5, X is a section of the Grassmannian 𝔾(1,4) of lines in ℙ^4 by a general linear subspace of codimension 3;
* when H^3 = 4, X is a complete intersection of two quadrics in ℙ^5;
* when H^3 = 3, X is a cubic threefold in ℙ^4;
* when H^3 = 2, X is a double cover of ℙ^3 ramified along a smooth quartic surface.
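Throughout this section it helps to keep track of expected dimensions; the following bookkeeping is our addition, obtained from the usual formula expdim M_{0,0}(X,α) = -K_X·α + dim X - 3 together with -K_X = 2H:

\[
\operatorname{expdim} M_{0,0}(X,d) = 2d + 3 - 3 = 2d,
\]

so lines (d = 1) are expected to move in a 2-dimensional family and conics (d = 2) in a 4-dimensional family, matching the descriptions of the spaces of lines and conics below.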
The starting point is to understand the geometric behavior of the a and b invariants:

Let X be a smooth Fano 3-fold with Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2.
* There is no subvariety Y with a(Y,-K_X|_Y) > a(X,-K_X).
* Let W denote the variety of lines on X and let 𝒰 → W denote its universal family with the evaluation map s: 𝒰 → X. Then a(𝒰,-s^*K_X) = a(X,-K_X) and b(𝒰,-s^*K_X) = b(X,-K_X). Furthermore, any dominant thin map f: Y → X such that a(Y,-f^*K_X) = a(X,-K_X) factors rationally through 𝒰.

The first statement is verified in <cit.>. As for the second statement, it is clear that s: 𝒰 → X satisfies the equality of a and b values, and we only need to prove the final claim. It suffices to consider the case when Y is smooth, and we break into cases based on the Iitaka dimension of the adjoint pair. If κ(K_Y - f^*K_X) = 2, then the fibers of the map to the canonical model for this adjoint pair are curves with a-value 1. Thus their images on X must be lines, and f must factor through 𝒰. If κ(K_Y - f^*K_X) = 1, then the general fiber F of the canonical map would be a surface with a-value 1 and with κ(K_F - f^*K_X|_F) = 0. But by the arguments of <cit.> the adjoint pair restricted to such surfaces must have Iitaka dimension 1, showing that this case is impossible. Finally, by <cit.> there is no a-cover satisfying κ(K_Y - f^*K_X) = 0.

Let α be a curve class on X such that H·α = d. Based on the computations above, the framework of Section <ref> suggests that Mor(ℙ^1, X, α) consists of two irreducible components R_d, N_d such that a general morphism parametrized by R_d is birational and every morphism parametrized by N_d factors through 𝒰. This has been proved for cubic threefolds by Starr (<cit.>) and for complete intersections of two quadrics by Castravet (<cit.>). The goal of this section is to verify this expectation for the other Fano 3-folds of Picard rank one and index two. Even though the cases of cubic threefolds and complete intersections of two quadrics are understood, we will provide proofs of these cases as well for completeness.

We need to understand low degree curves on X in order to start the induction. The next two theorems describe the components of M_0,0(X) parametrizing curves of H-degree 1 and 2.

Let X be a smooth Fano 3-fold such that Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2. The space M_0,0(X, 1) is isomorphic to the variety of lines on X. In particular, it is irreducible and generically parametrizes a free curve.

See <cit.> for the irreducibility and dominance.

Let X be a smooth Fano 3-fold such that Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2. Furthermore, when H^3 = 2, assume that X is general in its moduli. Then the space M_0,0(X, 2) consists of two irreducible components ℛ_2, 𝒩_2. A general element of ℛ_2 is a stable map from an irreducible curve to a smooth conic, and any element of 𝒩_2 is a degree 2 map from ℙ^1 to a line.

We prove this proposition by a case-by-case analysis.

Complete intersections of two quadrics in ℙ^5: Let 𝒩_2 be the union of components M of M_0,0(X, 2) such that for any general element (C, f) of M, there is a component of C such that the restriction of f is not birational onto its image. Then (C, f) is a stable map of degree 2 from ℙ^1 to a line on X. It is clear that the parameter space 𝒩_2 is irreducible because of Theorem <ref>. Let ℛ_2 be the union of components of M_0,0(X,2) not contained in 𝒩_2. By a dimension count, a general element (C, f) of ℛ_2 is a stable map from ℙ^1 to a smooth conic contained in X. Thus, to prove the irreducibility of ℛ_2, we only need to show that the family of smooth conics is irreducible. Let C ⊂ X be a smooth conic. Then there is a unique plane P containing C. We denote the pencil of quadrics containing X by {Q_λ}_λ∈ℙ^1.
Then there exists a unique quadric Q_λ in this family that contains P. Indeed, let q_λ be a quadric form associated to Q_λ and V a 3-dimensional vector space associated to P. Since Q_λ intersects P along C, the restrictions q_0|_V and q_∞|_V are proportional. Thus there exists a unique q_λ vanishing identically on V. All smooth conics in X arise in this way, so we only need to show that the family of planes contained in the quadrics Q_λ is irreducible. Each smooth quadric contains two families of planes, and quadric cones contain one family of planes. Let π: W → ℙ^1 be the relative family of planes for {Q_λ} over ℙ^1. Its Stein factorization is a smooth irreducible genus 2 curve D → ℙ^1. Over D each fiber of π is irreducible. Thus ℛ_2 is irreducible.

Cubic threefolds in ℙ^4: We define 𝒩_2 as before, and it is easy to see that it is irreducible and parametrizes degree 2 stable maps from ℙ^1 to lines. Let ℛ_2 be the union of the remaining components. For a general element (C, f) ∈ ℛ_2, (C, f) is a birational map from an irreducible curve to a smooth conic in ℙ^4. Thus we need to show that the variety of conics is irreducible. Let C be a smooth conic contained in X ⊂ ℙ^4. Then there exists a unique plane P ⊂ ℙ^4 containing C. The intersection of X and P is the union of a smooth conic and a line. Conversely, if we have a line l ⊂ X and a plane P containing l, then the intersection X ∩ P is the union of a conic and a line. Thus the variety of smooth conics has the structure of a ℙ^2-bundle over the variety of lines, showing that it is irreducible.

Double covers of ℙ^3 ramified along smooth quartics: Again we define 𝒩_2 as before, and it is easy to see that it is irreducible and parametrizes degree 2 stable maps from ℙ^1 to lines. Let ℛ_2 be the union of the remaining components. For a general element (C, f) ∈ ℛ_2, (C, f) is a birational map from an irreducible curve to a conic in X. Thus we need to show that the variety of conics is irreducible. Let f: X → ℙ^3 be the double cover ramified along a smooth quartic Y. Let C be a conic in X. Then there are two possibilities for C:
* the image of C via f is a line and C is a double cover of the line;
* the image of C via f is a conic D in ℙ^3 which is not a double line, and D is tangent to Y at each point of intersection.
In the first case, since C is rational, the line must have at least one point of tangency with Y. However, such lines only form a 3-dimensional family, so the corresponding C cannot form a component of the variety of conics. In the second case, for each conic D there exists a unique plane P containing D. For each plane P ⊂ ℙ^3, consider its intersection Γ_P = P ∩ Y, which is a quartic plane curve, and the pullback f^{-1}(P), which is a degree 2 del Pezzo surface if Γ_P is smooth. The conics C corresponding to P are exactly the conics in f^{-1}(P), so the variety of conics is a 1-dimensional family over (ℙ^3)^*:
𝒲 → (ℙ^3)^*.
Let 𝒟 → (ℙ^3)^* be the Stein factorization, which has degree 126. We would like to show that this 𝒟 is irreducible. To see this, we use the monodromy action and the Lefschetz property developed by Kollár in <cit.>. Let ℙ^14 be the space of plane quartic curves in ℙ^2. Let U ⊂ ℙ^14 be the Zariski open set parametrizing smooth curves. Let 𝒟' → U be the degree 126 finite cover parametrizing classes of conics on the double cover of ℙ^2 ramified along the quartic. It is shown in <cit.> that the fundamental group π_1(U) acts transitively on a fiber of 𝒟' → U.
Now consider the space ℙ^34 of quartic surfaces in ℙ^3 and let V' ⊂ ℙ^34 be the Zariski open set parametrizing irreducible quartic surfaces. Let U' ⊂ ℙ^14 be the Zariski open set parametrizing irreducible plane quartics. We consider the evaluation map
V' × PGL_4 ⇢ U', ([f], [A]) ↦ [f((x_0, x_1, x_2, 0)A)].
The open subset C_V' ⊂ V' × PGL_4 where the map to U' is defined satisfies all the assumptions of <cit.>. For example, the map C_V' → U' is smooth since any fiber is a Zariski open subset of an affine space bundle over PGL_4. Furthermore, since ℙ^14 ∖ U' has codimension ≥ 2, the general v ∈ V' will satisfy the Lefschetz property: the map π_1(({v} × PGL_4)^0) → π_1(U) will be surjective for a suitable open set ({v} × PGL_4)^0. Thus our assertion follows when the quartic surface Y is general.

Sections of the Grassmannian 𝔾(1,4): Again we define 𝒩_2 as before, and it is easy to see that it is irreducible and parametrizes degree 2 stable maps from ℙ^1 to lines. Let ℛ_2 be the union of the remaining components. For a general element (C, f) ∈ ℛ_2, (C, f) is a birational map from an irreducible curve to a conic in X. Thus we need to show that the variety of conics is irreducible. The result follows from <cit.>.

We are now ready to describe the induction. Our approach is motivated by <cit.>. We start with an auxiliary lemma:

Let Y be a projective variety of dimension n and let ϕ: Y' → Y be a resolution. Fix a point p ∈ Y and suppose that there is a dominant family of rational curves through p on Y parametrized by a variety W ⊂ M_0,0(Y). Let C' denote the strict transform to Y' of a general curve in the family. If the fiber of ϕ over p has dimension k, then
dim W ≤ -K_{Y'}·C' - 2 + k.

Let W' be the variety parametrizing deformations of the strict transform C'. Since this family dominates Y', a general member is free. Thus the sublocus of W' parametrizing curves through a general fixed point p' in the preimage of p has at most the expected dimension -K_{Y'}·C' - 2. The statement is now clear.

Let X be a smooth Fano 3-fold such that Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2. Furthermore, when H^3 = 2, assume that X is general in its moduli. Let α denote a nef curve class and let d denote its anticanonical degree. Suppose that W is a component of M_0,0(X,α) and let W_p denote the sublocus parametrizing curves through the point p ∈ X. There is a finite set of points S ⊂ X such that
* W_p has the expected dimension d - 2 for points p not in S, and
* W_p has dimension at most d - 1 for points p ∈ S.
Furthermore, for p ∉ S the general curve parametrized by W_p is irreducible.

By <cit.>, there is a proper closed subset Q ⊊ X which contains any non-free component of a member of the family of curves in W. The proof is by induction on d. The base case is when W is the family of lines. By Theorem <ref>, W is irreducible and has dimension 2, so there are only finitely many points in X which are contained in a one-parameter family of lines. We let S denote this finite set.
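For reference in the induction step, let us unpack the "expected dimension" in the statement above (our bookkeeping, from the standard deformation-theoretic estimates): for curves of anticanonical degree d on the threefold X,

\[
\operatorname{expdim} M_{0,0}(X,\alpha) = -K_X\cdot\alpha + \dim X - 3 = d,
\]

and passing through a fixed general point of X imposes dim X - 1 = 2 independent conditions, so

\[
\operatorname{expdim} W_p = d - 2.
\]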
We now prove the induction step. Let W' be a component of W_p. First suppose that the general curve parametrized by W' is irreducible. If the general curve parametrized by W' is free, then W' has the expected dimension, so we may suppose otherwise. We divide into cases based on the dimension of the subvariety swept out by the curves in W'.

Suppose that the curves parametrized by W' all map to a fixed curve Y ⊂ X. If the general curve maps r-to-1, then the dimension of W' is at most 2r - 2. Note that d = r(-K_X·Y). Comparing against the expected dimension -K_X·C - 2, we see that W' can have larger than expected dimension only when -K_X·Y = 1. However, in this case we have a(Y,-K_X|_Y) = 2, violating Lemma <ref>.

Next suppose that the curves parametrized by W' dominate an irreducible projective surface Y ⊂ Q. Let ϕ: Y' → Y be a resolution; applying Lemma <ref> we see that
dim W' ≤ -K_{Y'}·C' - 1.
If dim W' ≥ -K_X·C, then by the equation above we see that (K_{Y'} - ϕ^*K_X)·C' is negative. This implies that a(Y,-K_X) > 1, an impossibility by Lemma <ref>. This proves that dim W_p ≤ -K_X·C - 1. We also need to characterize the equality case. When K_{Y'}·C' = K_X·C, it follows that a(Y',-K_X) = 1. By the earlier classification this means that K_{Y'} - ϕ^*K_X has Iitaka dimension 1. Since C' has vanishing intersection against this divisor, it is a fiber of the Iitaka fibration. We conclude that 2 = -K_{Y'}·C' = -K_X·C. By Theorem <ref>, this means that p ∈ S.

Now suppose that the curves parametrized by W' are reducible. The dimension counting arguments of <cit.> show how to deduce the desired conclusion for curves parametrized by W' from the same properties for their irreducible components. For clarity we will outline the argument here. Assume that our assertions hold for any stable maps of degree less than d. Suppose that p ∉ S and let f: C → X be a general member of W'. We analyze case by case:
* Suppose that a node of C maps to the point p. Let D be a maximal connected subset of C contracted to the point p. Let C_1, ⋯, C_u be the closures of the connected components of C ∖ D. Set d_i = -K_X·C_i. Then the induction hypothesis implies that the dimension of W' is bounded by
∑_{i=1}^u (d_i - 2) + u - 2,
where the term u - 2 accounts for the dimension of the marked point and the points of attachment of D with the C_i. Since d = ∑_i d_i, we conclude that the dimension of W' is bounded by
d - u - 2.
In particular, W' cannot have larger than the expected dimension.
* Suppose that a node of C maps to a point p_i contained in S. Let D be the maximal connected subset of C contracted to the point p_i. Let C_1, ⋯, C_u be the closures of the connected components of C ∖ D and let d_i be the degree of C_i. Suppose that the inverse image of p is contained in C_1. Then by the induction hypothesis, the dimension of W' is bounded by
(d_1 - 2) + ∑_{i=2}^u (d_i - 1) + max{0, u-3} = d - u - 1 + max{0, u-3},
where the term max{0, u-3} accounts for the dimension of the moduli of the points of attachment. In particular, W' cannot have larger than the expected dimension.
* Suppose that a node of C maps to a point q ∉ {p} ∪ S. Let D be the maximal connected subset of C contracted to the point q. Let C_1, ⋯, C_u be the closures of the connected components of C ∖ D and let d_i be the degree of C_i. Suppose that the inverse image of p is contained in C_1. Then by the induction hypothesis, the dimension of W' is bounded by
(d_1 - 2) + ∑_{i=2}^u (d_i - 2) + max{0, u-3} + 1 = d - 2u + 1 + max{0, u-3},
where the term max{0, u-3} + 1 accounts for the dimension of the moduli of the points of attachment.
In particular, W' cannot have larger than the expected dimension.

Together these three cases exhaust all possibilities for the node of C, and our claim follows when p ∉ S. The case of p ∈ S is similar; we refer readers to <cit.> for more details. In particular, the above discussion shows that the general curve parametrized by W' for points p ∉ S is irreducible.

We see as an immediate consequence:

Let X be a smooth Fano 3-fold such that Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2. Furthermore, when H^3 = 2, assume that X is general in its moduli. For any integral curve class α, if M_0,0(X,α) is non-empty then every component generically parametrizes free curves and has the expected dimension.

Applying the arguments of <cit.>, we now obtain:

Let X be a smooth Fano 3-fold such that Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2. Furthermore, when H^3 = 2, assume that X is general in its moduli. Then any free curve on X deforms into a chain of free curves of anticanonical degree ≤ dim X + 1.

By Mori's Bend and Break, any free curve C of anticanonical degree > dim X + 1 can be deformed to a stable map with reducible domain. Furthermore, these deformed maps form a codimension 1 locus of the component of the moduli space containing C. By the classification of components of M_0,0(X), only the union of two free curves can form such a codimension 1 locus. Now Lemma <ref> and the induction argument show our claim.

Let X be a smooth Fano 3-fold such that Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2. Furthermore, when H^3 = 2, assume that X is general in its moduli. Let d ≥ 2. Then M_0,0(X, d) consists of two irreducible components:
M_0,0(X, d) = ℛ_d ∪ 𝒩_d,
such that a general element (C, f) ∈ ℛ_d is a birational stable map from an irreducible curve and any element (C, f) ∈ 𝒩_d is a degree d stable map to a line in X. Moreover, the fiber ev_1^{-1}(x) ∩ ℛ_d' is irreducible for a general x ∈ X.

We denote by 𝒩_d the component parametrizing degree d stable maps to lines. It is clear that this is irreducible. First let M be a dominant component of M_0,0(X) generically parametrizing a birational stable map. We show that the fiber ev_1^{-1}(x) ∩ M is irreducible for a general x ∈ X. Let Y → M be a smooth resolution. We would like to show that Y → X has connected fibers. Suppose not, i.e., the Stein factorization Z → X is nontrivial. Then it is shown in <cit.> that Z factors rationally through ℛ_1'. This means that curves parametrized by M lift to ℛ_1' and have vanishing intersection with the ramification divisor of the morphism ℛ_1' → X. Therefore, these curves are multiple covers of lines, which contradicts the fact that a general curve maps birationally onto its image. Thus our assertion follows.

Now we prove our theorem using induction on d. The case d = 2 is settled by Proposition <ref>. Suppose that d > 2 and assume our assertion for any 2 ≤ d' < d. By gluing free curves of lower degree it is clear that M_0,0(X,d) has at least one component different from 𝒩_d. Let M be one such component. Then any general (C, f) ∈ M is a birational stable map from an irreducible curve by Corollary <ref>. (A dimension count shows that multiple covers of curves cannot form a component of M_0,0(X) unless the curves are lines.) Using Theorem <ref> we see that M contains a chain of free curves of degree at most 2. Furthermore, by Proposition <ref> each component of the parameter space of conics contains a chain of lines. Applying Lemma <ref>, we can conclude that M contains a chain (C, f) of free lines of length d. Note that (C, f) is a smooth point of M_0,0(X).
If the image of f is a single line, then (C, f) is contained in 𝒩_d. Since it is a smooth point, we conclude that M = 𝒩_d, which is a contradiction. So we may assume that (C, f) has a reducible image. This means that (C, f) is a point on the image Δ_{1,d-1} of the main component of ℛ_1' ×_X ℛ_{d-1}', which is unique by the induction hypothesis. Since (C, f) is a smooth point, we conclude that M contains Δ_{1,d-1}, and such an M must be unique. Thus our assertion follows.

Let X be a smooth Fano 3-fold such that Pic(X) = ℤH, -K_X = 2H, and H^3 ≥ 2. Furthermore, when H^3 = 2, assume that X is general in its moduli. For every curve class α satisfying H·α ≥ 2 there is a unique Manin component representing α.
Anisotropic power-law inflation in a two-scalar-field model with a mixed kinetic term

Tuan Q. Do
Faculty of Physics, VNU University of Science, Vietnam National University, Hanoi 120000, Vietnam
tuanqdo@vnu.edu.vn

Sonnet Hung Q. Nguyen
Faculty of Physics, VNU University of Science, Vietnam National University, Hanoi 120000, Vietnam
hungnq_kvl@vnu.edu.vn

December 30, 2023

We examine whether an extended scenario of a two-scalar-field model, in which a mixed kinetic term of canonical and phantom scalar fields is involved, admits the Bianchi type I metric, which is a homogeneous but anisotropic spacetime, as its power-law solutions. Then we analyze the stability of the anisotropic power-law solutions to see whether these solutions respect the cosmic no-hair conjecture during the inflationary phase. In addition, we investigate a special scenario, where the pure kinetic terms of the canonical and phantom fields disappear altogether from the field equations, to test the validity of the cosmic no-hair conjecture once more. As a result, the cosmic no-hair conjecture always holds in both these scenarios due to the instability of the corresponding anisotropic inflationary solutions.

PACS numbers: 98.80.-k; 98.80.Cq; 98.80.Jk

§ INTRODUCTION

An inflationary universe <cit.> has been considered as a leading paradigm in modern cosmology due to its power to solve some classical cosmological problems, such as the horizon, flatness, and magnetic-monopole problems, as well as its consistent predictions for the cosmic microwave background. In particular, many theoretical predictions based on the inflationary mechanism have been shown to be highly consistent with recent high-precision observations of the cosmic microwave background (CMB), such as those of the Wilkinson Microwave Anisotropy Probe (WMAP) <cit.> or Planck <cit.>. However, some exotic features of the CMB temperature, like the hemispherical asymmetry and the Cold Spot, were first observed by the WMAP <cit.> and then confirmed by Planck <cit.>. Hence, the nature of these anomalies requires further investigation, which might involve some additional or unusual interactions of fields, e.g., those that might come from string theories <cit.>. Due to these anomalies, our picture of the early universe, which has been thought of as homogeneous and isotropic, might have to be changed <cit.>. For example, we might think of a scenario in which the state of the early universe is described by the Bianchi spacetimes rather than the Friedmann-Lemaitre-Robertson-Walker (FLRW) one, since it might be not exactly isotropic but slightly anisotropic <cit.>. In cosmology, the Bianchi spacetimes are known as homogeneous but anisotropic metrics, which are classified into nine types numbered from I to IX <cit.> and are regarded as generalizations of the FLRW metric, which is homogeneous and isotropic. Recently, cosmological aspects based on the Bianchi spacetimes have been studied extensively. For example, some early works on the predictions of an anisotropic inflationary era can be found in Ref. Pitrou:2008gk.
In addition, other works within the framework of loop quantum cosmology (gravity) on understanding the issues of the resolution of the initial singularity, the effect of anisotropies on inflation, isotropization, and the stability of inflationary attractors for the Bianchi type I metric have been investigated in Ref. Singh:2011gp. As mentioned above, the data of WMAP and Planck might have shown us the state of the early universe, which might be anisotropic with small spatial anisotropies. Naturally, we can ask about the state of the late time universe. "Is it isotropic or not" is an open question to all of us, which could be answered by theoretical and/or observational approaches. Fortunately, an important theoretical hint to this question might come from the cosmic no-hair conjecture proposed by Hawking and his colleagues a long time ago <cit.>, which states that all classical hairs of the early universe will be removed at late times. It is noted that a complete proof of this conjecture has not been given up to now. However, a partial proof dealing with the dominant energy condition (DEC) and strong energy condition (SEC) for all non-type-IX Bianchi spacetimes has been given by Wald in Ref. wald. As a result, this proof shows that all non-type-IX Bianchi spacetimes will evolve towards the late time isotropic de Sitter spacetime if the DEC and SEC are both fulfilled. For the Bianchi type IX metric, the same behavior follows if the cosmological constant Λ is sufficiently large <cit.>. Recently, some people have tried to extend Wald's proof to a case of inhomogeneous cosmologies <cit.>. Indeed, a complete proof of this conjecture has been one of the great challenges to physicists. In short, if the cosmic no-hair conjecture holds, the late time state of the universe should be isotropic, no matter the early state of the universe. However, the cosmic no-hair conjecture has faced counter-examples coming from a supergravity motivated model proposed by Kanno, Soda, and Watanabe (KSW) <cit.>, where an unusual coupling of the scalar ϕ and U(1) fields, f^2(ϕ)F_μνF^μν, is involved. As a result, the KSW model does admit Bianchi type I metrics as its stable and attractor solutions during the inflationary phase. More interestingly, this result still holds when the canonical scalar field ϕ is replaced by non-canonical ones, e.g., the (supersymmetric-) Dirac-Born-Infeld scalar fields, as shown in Ref. WFK. Hence, the cosmic no-hair conjecture seems to be violated extensively in the context of the KSW model. Consequently, there have been a number of papers investigating possible extensions of the KSW model to seek more counter-examples to the cosmic no-hair conjecture <cit.>. Additionally, some cosmological aspects, such as imprints of anisotropic inflation on the CMB through correlations between T, E, and B modes <cit.>, and primordial gravitational waves <cit.>, have also been discussed in the framework of the KSW model. For recent interesting reviews on this model, see Ref. KSW1.

Besides the above counter-examples, there have existed some papers <cit.> attempting to support the cosmic no-hair conjecture by introducing an unusual scalar field called a phantom field ψ, whose kinetic energy is negative definite <cit.>. Cosmologically, the phantom field has been regarded as one of the alternative solutions to the dark energy problem, which is associated with the acceleration of our current universe <cit.>. For example, one can see the very first confirmations of the cosmological viability of the phantom field model in Ref. Singh:2003vx.
However, the existence of the phantom field has been shown to lead the Universe, once dominated by the phantom energy, to the so-called Big Rip singularity, which is a finite-time future singularity <cit.>. Fortunately, a two-scalar-field model called a quintom model, which includes not only the phantom field but also the quintessence field, has provided alternative solutions not only to the Big Rip singularity but also to other cosmological singularities, which have been discussed extensively in loop quantum cosmology <cit.>, such as the Big Bang singularity <cit.> and the Big Crunch singularity <cit.>. On the other hand, papers in Ref. Singh have pointed out that the Big Rip singularity problem associated with the existence of the phantom field can be resolved once quantum gravitational effects are involved. These facts make the role of the phantom field in cosmology important. Indeed, the existence of the phantom field has also been shown to be potentially necessary for supporting the cosmic no-hair conjecture <cit.>. In particular, the stability analysis done in Ref. WFK has shown that the inclusion of the phantom field does make the corresponding anisotropic Bianchi type I solutions unstable during the inflationary phase, due to the negativity of its kinetic energy, as expected. However, one could ask whether this result would still be valid if additional unusual terms of scalar fields were introduced into the two-scalar-field model <cit.>. In the present paper, we will partially answer this question by examining an extended scenario of the two-scalar-field model <cit.>, in which a mixed kinetic term of canonical and phantom scalar fields, i.e., ∂_μϕ∂^μψ <cit.>, is involved. Note that this mixed kinetic term can also be found in the string motivated models of multiple scalar fields <cit.>. As a result, this mixed term will not make the corresponding Bianchi type I solutions stable during the inflationary phase. Furthermore, we will show that this result is also valid for a special scenario, in which the pure (non-mixed) kinetic terms of canonical and phantom fields are neglected altogether. Indeed, it will be shown that the corresponding anisotropic power-law solutions found in this scenario also turn out to be unstable, as expected.

This paper is organized as follows: A brief introduction of this research has been given in Sec. <ref>. The two-scalar-field model with the mixed term and its anisotropic power-law solutions will be presented in Sec. <ref>. The stability of the anisotropic power-law solutions will be analyzed in Sec. <ref> to see whether the cosmic no-hair conjecture is violated or not. Sec. <ref> will be devoted to investigating a special scenario, in which the pure kinetic terms of canonical and phantom fields are neglected altogether. Finally, concluding remarks will be given in Sec. <ref>.

§ THE MODEL AND ITS ANISOTROPIC POWER-LAW SOLUTIONS

§.§ Basic setup

An action of an extended scenario of the KSW model <cit.>, including the phantom field <cit.> and the mixed kinetic term <cit.>, is given by

S = ∫ d^4x √(-g) [ M_p^2/2 R - a ∂^μϕ∂_μϕ + b ∂^μψ∂_μψ - ω_0/2 ∂^μϕ∂_μψ - V_ϕ(ϕ) - V_ψ(ψ) - 1/4 f^2(ϕ,ψ) F_μν F^μν ],

where a ≥ 0 and b ≥ 0 are the coefficients of the kinetic terms of the canonical scalar field ϕ and the phantom scalar field ψ, respectively. In addition, ω_0 is the coefficient of the mixed kinetic term. It appears that for ω_0 > 0 and ω_0 < 0 we will have quintessence-like and phantom-like mixed terms, respectively.
In addition, M_p is the reduced Planck mass and F_μν ≡ ∂_μ A_ν - ∂_ν A_μ is the field strength of the vector field A_μ used to describe the electromagnetic field. It also appears that F^μν = g^μρ g^νσ F_ρσ. Note that if ω_0 = 0 we will recover the two-scalar-field model studied in Ref. WFK.

As a result, varying the action (<ref>) with respect to the inverse metric g^μν and choosing the canonical coefficients, a = b = 1/2, leads to the following Einstein field equations:

M_p^2( R_μν - 1/2 R g_μν ) - ∂_μϕ∂_νϕ + ∂_μψ∂_νψ - ω_0 ∂_μϕ∂_νψ + 1/2 g_μν( ∂^σϕ∂_σϕ - ∂^σψ∂_σψ + ω_0 ∂^σϕ∂_σψ ) + g_μν[ V_ϕ(ϕ) + V_ψ(ψ) + 1/4 f^2(ϕ,ψ) F^ρσ F_ρσ ] - f^2(ϕ,ψ) F_μγ F_ν^γ = 0.

Additionally, the Euler-Lagrange equations for the scalar fields, ϕ and ψ, and the vector field A_μ read

ϕ̈ + ω_0/2 ψ̈ = -3H (ϕ̇ + ω_0/2 ψ̇) - ∂_ϕ V_ϕ(ϕ) - 1/2 f(ϕ,ψ) ∂_ϕ f(ϕ,ψ) F_μν F^μν,
ψ̈ - ω_0/2 ϕ̈ = -3H (ψ̇ - ω_0/2 ϕ̇) + ∂_ψ V_ψ(ψ) + 1/2 f(ϕ,ψ) ∂_ψ f(ϕ,ψ) F_μν F^μν,
∂/∂x^μ [ √(-g) f^2(ϕ,ψ) F^μν ] = 0,

respectively, where H is the Hubble parameter, which appears through the derivative of √(-g), i.e., ∂_μ(√(-g)).

Given the general forms of the field equations, we would like to seek analytic solutions of the two-scalar-field model with the mixed term, as described in the action (<ref>), by following the previous works in Refs. KSW, WFK. In particular, we are interested in the question of whether the two-scalar-field model involving the mixed term admits the Bianchi type I (BI) metric:

ds^2 = -dt^2 + exp[2α(t) - 4σ(t)] dx^2 + exp[2α(t) + 2σ(t)] (dy^2 + dz^2),

along with the compatible vector field, whose configuration is given by A_μ = (0, A_x(t), 0, 0), as its cosmological solutions. Here, σ stands for a deviation from isotropy and therefore should be much smaller than the isotropic scale factor α in order to be consistent with the recent observational data from the WMAP <cit.> and Planck <cit.>. Note that among the nine Bianchi types, the Bianchi type I seems to be closest to the FLRW metric since its metric is diagonal, similar to the FLRW metric. This is one reason why the Bianchi type I metric has been investigated extensively <cit.>.

Now, we would like to derive the corresponding field equations (<ref>), (<ref>), (<ref>), and (<ref>) for the BI metric shown in Eq. (<ref>). To do this task, we first note that the vector field equation (<ref>) admits the solution

Ȧ_x(t) = f^{-2}(ϕ,ψ) exp[-α - 4σ] p_A,

where p_A is a constant of integration <cit.>. Thanks to this solution, we are able to write down the non-vanishing components of the Einstein equations (<ref>) as follows:

α̇^2 = σ̇^2 + 1/(3M_p^2) [ ϕ̇^2/2 - ψ̇^2/2 + ω_0/2 ϕ̇ψ̇ + V_ϕ + V_ψ + f^{-2}/2 exp[-4α - 4σ] p_A^2 ],
α̈ = -3α̇^2 + 1/M_p^2 (V_ϕ + V_ψ) + f^{-2}/(6M_p^2) exp[-4α - 4σ] p_A^2,
σ̈ = -3α̇σ̇ + f^{-2}/(3M_p^2) exp[-4α - 4σ] p_A^2.

On the other hand, the scalar field equations (<ref>) and (<ref>) reduce to

ϕ̈ + ω_0/2 ψ̈ = -3α̇ (ϕ̇ + ω_0/2 ψ̇) - ∂_ϕ V_ϕ + f^{-3} ∂_ϕ f exp[-4α - 4σ] p_A^2,
ψ̈ - ω_0/2 ϕ̈ = -3α̇ (ψ̇ - ω_0/2 ϕ̇) + ∂_ψ V_ψ - f^{-3} ∂_ψ f exp[-4α - 4σ] p_A^2.

Note again that once we take ω_0 = 0, all the above equations reduce to those investigated in the two-scalar-field model of Ref. WFK. As a result, the following evolution equation associated with the scale factor α, which governs the evolution of the inflationary universe since α is assumed to be much larger than the anisotropic deviation σ, turns out to be

α̈ + α̇^2 = -2σ̇^2 - 1/(3M_p^2) ( ϕ̇^2 + ω_0 ϕ̇ψ̇ - ψ̇^2 - V_ϕ - V_ψ + f^{-2}/2 exp[-4α - 4σ] p_A^2 ).
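As a quick consistency check, which we add here for the reader (it is not part of the original derivation), one can verify directly that the above ansatz for Ȧ_x solves the vector field equation (<ref>): for the BI metric (<ref>) one has √(-g) = e^{3α}, g^{tt} = -1, and g^{xx} = e^{-2α+4σ}, so

\[
\sqrt{-g}\, f^2 F^{tx}
= e^{3\alpha} f^2 \left(-e^{-2\alpha+4\sigma}\dot{A}_x\right)
= -e^{\alpha+4\sigma} f^2 \left(f^{-2} e^{-\alpha-4\sigma} p_A\right)
= -p_A,
\]

which is constant in time; hence ∂_μ[√(-g) f^2 F^{μν}] = 0 holds identically.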
An inflationary universe requires α̈ + α̇^2 > 0. This constraint will be easily fulfilled if the slow-roll approximation, in which the potentials of the scalar fields dominate over the other terms in the field equations, i.e., V_ϕ ≫ ϕ̇^2/2, V_ψ ≫ ψ̇^2/2, and V_ϕ + V_ψ ≫ f^{-2} exp[-4α - 4σ] p_A^2/2, is taken. It is noted again that the anisotropic scale factor σ must be much smaller than the isotropic scale factor α in order to be consistent with the recent observational data from the WMAP <cit.> and Planck <cit.>.

§.§ Anisotropic power-law solutions

Armed with the basic setup for the Bianchi type I metric derived above, we would like to seek power-law solutions of the two-scalar-field model with the mixed term by taking the following ansatz used in Refs. KSW, WFK:

α = ζ log(t),  σ = η log(t),
ϕ/M_p = ξ_ϕ log(t) + ϕ_0,  ψ/M_p = ξ_ψ log(t) + ψ_0,

along with the compatible exponential potentials:

V_ϕ(ϕ) = V_0ϕ exp[λ_ϕ ϕ/M_p],
V_ψ(ψ) = V_0ψ exp[λ_ψ ψ/M_p],
f(ϕ,ψ) = f_0 exp[ρ_ϕ ϕ/M_p + ρ_ψ ψ/M_p],

where V_0ϕ, V_0ψ, f_0, λ_ϕ, λ_ψ, ρ_ϕ, and ρ_ψ are positive field parameters. It is straightforward to check that inserting the ansatz shown in Eq. (<ref>) into Eq. (<ref>) yields power-law scale factors. Note again that σ stands for a deviation from isotropy and therefore should be much smaller than the isotropic scale factor α, i.e., α ≫ σ or, equivalently, ζ ≫ η due to Eq. (<ref>). As a result, the field equations (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), which are differential equations in time, become a set of algebraic equations:

ζ^2 = η^2 + 1/3 ( ξ_ϕ^2/2 - ξ_ψ^2/2 + ω_0/2 ξ_ϕξ_ψ + u_ϕ + u_ψ + v/2 ),
-ζ = -3ζ^2 + u_ϕ + u_ψ + v/6,
-η = -3ζη + v/3,
-ξ_ϕ - ω_0/2 ξ_ψ = -3ζ (ξ_ϕ + ω_0/2 ξ_ψ) - λ_ϕ u_ϕ + ρ_ϕ v,
-ξ_ψ + ω_0/2 ξ_ϕ = -3ζ (ξ_ψ - ω_0/2 ξ_ϕ) + λ_ψ u_ψ - ρ_ψ v.

It is noted that in order to derive the above algebraic equations we have used the following constraints for the field parameters:

λ_ϕξ_ϕ = -2,  λ_ψξ_ψ = -2,  ρ_ϕξ_ϕ + ρ_ψξ_ψ + 2ζ + 2η = 1,

which make all terms in the field equations proportional to t^{-2}. Note also that we have introduced the additional positive variables

u_ϕ = V_0ϕ/M_p^2 exp[λ_ϕϕ_0] > 0,  u_ψ = V_0ψ/M_p^2 exp[λ_ψψ_0] > 0,
v = p_A^2 f_0^{-2}/M_p^2 exp[-2(ρ_ϕϕ_0 + ρ_ψψ_0)] > 0,

for convenience. For slow-roll inflation, which is based on the slow-roll approximation mentioned above, it turns out that

u_ϕ ≫ 2/λ_ϕ^2,  u_ψ ≫ 2/λ_ψ^2,  u_ϕ + u_ψ ≫ v/2.

It is apparent that

v = 3(3ζ - 1)η,

by noting Eq. (<ref>). Hence, the positivity of v leads to the constraint η > 0 during the inflationary phase with ζ ≫ 1. From the constraint equation (<ref>), we obtain

η = -ζ + ρ_ϕ/λ_ϕ + ρ_ψ/λ_ψ + 1/2.

Hence, the constraint η > 0 implies that

ζ < ρ_ϕ/λ_ϕ + ρ_ψ/λ_ψ + 1/2.

It is noted that ζ ≫ 1 for inflationary solutions. This will only be satisfied if

ρ_ϕ ≫ λ_ϕ,  ρ_ψ ≫ λ_ψ,

assuming that all of λ_ϕ,ψ and ρ_ϕ,ψ are positive parameters. Thanks to the relations shown in Eqs. (<ref>) and (<ref>), the other equations (<ref>) and (<ref>) can be further reduced to

2λ_ψλ_ϕ^2 u_ϕ + (3ζ - 1)(6ζλ_ψλ_ϕρ_ϕ - 6λ_ψρ_ϕ^2 - 3λ_ψλ_ϕρ_ϕ - 6λ_ϕρ_ψρ_ϕ - 4λ_ψ - 2ω_0λ_ϕ) = 0,
2λ_ψ^2λ_ϕ u_ψ + (3ζ - 1)(6ζλ_ψλ_ϕρ_ψ - 6λ_ϕρ_ψ^2 - 3λ_ψλ_ϕρ_ψ - 6λ_ψρ_ψρ_ϕ + 4λ_ϕ - 2ω_0λ_ψ) = 0,

respectively, which can be solved to give non-trivial solutions for u_ϕ and u_ψ:

u_ϕ = -1/(2λ_ψλ_ϕ^2) (3ζ - 1)(6ζλ_ψλ_ϕρ_ϕ - 6λ_ψρ_ϕ^2 - 3λ_ψλ_ϕρ_ϕ - 6λ_ϕρ_ψρ_ϕ - 4λ_ψ - 2ω_0λ_ϕ),
u_ψ = -1/(2λ_ψ^2λ_ϕ) (3ζ - 1)(6ζλ_ψλ_ϕρ_ψ - 6λ_ϕρ_ψ^2 - 3λ_ψλ_ϕρ_ψ - 6λ_ψρ_ψρ_ϕ + 4λ_ϕ - 2ω_0λ_ψ).
Given the solutions shown in Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), we can obtain the following non-trivial equation for ζ from either Eq. (<ref>) or Eq. (<ref>):

-6λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ) ζ + 4(λ_ϕρ_ψ + λ_ψρ_ϕ)(2λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) + λ_ϕ^2λ_ψ^2 + 8(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2) = 0,

which admits a non-trivial solution for ζ:

ζ = [4(λ_ϕρ_ψ + λ_ψρ_ϕ)(2λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) + λ_ϕ^2λ_ψ^2 + 8(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)] / [6λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ)].

Thanks to this explicit expression defined in terms of the field parameters λ_ϕ,ψ and ρ_ϕ,ψ, the other field variables η, u_ϕ, u_ψ, and v become

η = [λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ) - 4(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)] / [3λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ)],
u_ϕ = Ω_0 [λ_ψ^2(λ_ϕρ_ϕ + 2ρ_ϕ^2 + 2) + 2λ_ϕλ_ψρ_ϕρ_ψ + 4(λ_ϕρ_ϕ + λ_ψρ_ψ) + ω_0(λ_ϕλ_ψ + 2λ_ϕρ_ψ - 2λ_ψρ_ϕ)],
u_ψ = Ω_0 [λ_ϕ^2(λ_ψρ_ψ + 2ρ_ψ^2 - 2) + 2λ_ϕλ_ψρ_ϕρ_ψ - 4(λ_ϕρ_ϕ + λ_ψρ_ψ) + ω_0(λ_ϕλ_ψ - 2λ_ϕρ_ψ + 2λ_ψρ_ϕ)],
v = Ω_0 [λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ) - 4(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)],

with an additional variable Ω_0, whose value is given by

Ω_0 = [4(λ_ϕρ_ψ + λ_ψρ_ϕ)(λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) - λ_ϕ^2λ_ψ^2 + 8(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)] / {2[λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ)]^2}.

As discussed in Ref. WFK, these variables can be approximated as

ζ ≃ ρ_ϕ/λ_ϕ + ρ_ψ/λ_ψ ≫ 1,  η ≃ 1/3,
u_ϕ ≃ 3 (ρ_ϕ/λ_ϕ)(ρ_ϕ/λ_ϕ + ρ_ψ/λ_ψ) ≃ 3 (ρ_ϕ/λ_ϕ) ζ ≫ 1,
u_ψ ≃ 3 (ρ_ψ/λ_ψ)(ρ_ϕ/λ_ϕ + ρ_ψ/λ_ψ) ≃ 3 (ρ_ψ/λ_ψ) ζ ≫ 1,
v ≃ 3 (ρ_ϕ/λ_ϕ + ρ_ψ/λ_ψ) ≃ 3ζ ≫ 1,

during the inflationary phase, in which

ρ_ϕ ≫ λ_ϕ ∼ O(1);  ρ_ψ ≫ λ_ψ ∼ O(1).

It is straightforward to see that these values satisfy the slow-roll approximation shown in Eq. (<ref>). Hence, the slow-roll parameter, ϵ ≡ -Ḣ/H^2, is given by

ϵ = 6λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ) / [4(λ_ϕρ_ψ + λ_ψρ_ϕ)(2λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) + λ_ϕ^2λ_ψ^2 + 8(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)] ≃ λ_ϕλ_ψ/(λ_ϕρ_ψ + λ_ψρ_ϕ) ≪ 1,

along with the value of the anisotropy parameter Σ/H ≡ σ̇/α̇:

Σ/H = 2[λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ) - 4(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)] / [4(λ_ϕρ_ψ + λ_ψρ_ϕ)(2λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) + λ_ϕ^2λ_ψ^2 + 8(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)] ≃ λ_ϕλ_ψ/[3(λ_ϕρ_ψ + λ_ψρ_ϕ)] ≪ 1.

It is clear that the anisotropy parameter is really small for the Bianchi type I inflationary solutions. This result turns out to be consistent with that of the previous works, in which the mixed term of canonical and phantom scalar fields did not appear <cit.>. It is also consistent with the recent observational data of the WMAP <cit.> and Planck <cit.>. In other words, the mixed term plays only a small role in the field equations during the slow-roll inflationary phase, so it does not affect the obtained solutions, or the anisotropy parameter, very much. Below, we will see whether the mixed term could change the stability of the Bianchi type I solutions during the inflationary phase. Once again, we note that all the above power-law solutions reduce to those found in the two-scalar-field model of Ref. WFK once the limit ω_0 → 0 is taken.
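As a cross-check of the closed-form solution above, which we add here (it is not part of the original paper), the following sympy sketch substitutes the quoted expressions for ζ, η, v, u_ϕ, and u_ψ back into the Friedmann equation (<ref>) and the α̈ equation (<ref>); both residuals should simplify to zero if the coefficients have been transcribed faithfully:

import sympy as sp

# Field parameters, all taken positive as in the text.
lp, ls, rp, rs, w0 = sp.symbols('lambda_phi lambda_psi rho_phi rho_psi omega_0', positive=True)

# Closed forms quoted above (transcribed; any typo would show up as a non-zero residual below).
D = lp*ls*(lp*ls + 2*lp*rs + 2*ls*rp)
S = ls**2 + w0*lp*ls - lp**2
zeta = (4*(lp*rs + ls*rp)*(2*lp*ls + 3*lp*rs + 3*ls*rp) + lp**2*ls**2 + 8*S)/(6*D)
eta = -zeta + rp/lp + rs/ls + sp.Rational(1, 2)   # constraint equation
v = 3*(3*zeta - 1)*eta                            # sigma equation
u_p = -(3*zeta - 1)*(6*zeta*ls*lp*rp - 6*ls*rp**2 - 3*ls*lp*rp
                     - 6*lp*rs*rp - 4*ls - 2*w0*lp)/(2*ls*lp**2)
u_s = -(3*zeta - 1)*(6*zeta*ls*lp*rs - 6*lp*rs**2 - 3*ls*lp*rs
                     - 6*ls*rs*rp + 4*lp - 2*w0*ls)/(2*ls**2*lp)
xp, xs = -2/lp, -2/ls                             # xi_phi, xi_psi from lambda*xi = -2

# Residuals of the two remaining field equations (Friedmann and alpha-acceleration):
friedmann = zeta**2 - eta**2 - (xp**2/2 - xs**2/2 + w0*xp*xs/2 + u_p + u_s + v/2)/3
accel = -zeta + 3*zeta**2 - (u_p + u_s + v/6)
print(sp.simplify(friedmann), sp.simplify(accel))  # both should print 0

Here u_ϕ and u_ψ are taken in the intermediate forms solved for above, so only the two remaining field equations give non-trivial checks.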
§ STABILITY ANALYSIS OF THE INFLATIONARY BIANCHI TYPE I SOLUTIONS

Given the above anisotropic power-law solutions, we now would like to examine their stability during the inflationary phase with ζ ≫ 1 in order to test the validity of the cosmic no-hair conjecture proposed by Hawking and his colleagues <cit.>, which has been proved partially for the Bianchi spacetimes by Wald <cit.>. It is noted that Wald's proof deals only with the dominant and strong energy conditions and therefore cannot provide full information about the stability of inflationary solutions, which can only be obtained by a perturbation analysis, e.g., the dynamical system approach <cit.> or power-law perturbations <cit.>. Note that power-law perturbations were used in Ref. WFK to examine the validity of the cosmic no-hair conjecture since they are compatible with the corresponding anisotropic power-law solutions. Moreover, we have also pointed out in Ref. WFK that the power-law perturbation approach is consistent with the dynamical system approach, at least for one-scalar-field models, e.g., the KSW model <cit.> or its non-canonical extensions <cit.>. In this paper, therefore, we would like to use the dynamical system method to investigate the stability of the anisotropic power-law solutions of the two-scalar-field model with the mixed term by introducing the following dynamical variables <cit.>:

X = σ̇/α̇,  Y_ϕ = ϕ̇/(M_p α̇),  Y_ψ = ψ̇/(M_p α̇),
Z = f^{-1}(ϕ,ψ)/(M_p α̇) exp[-2α - 2σ] p_A,
W_ϕ = √(V_ϕ)/(M_p α̇),  W_ψ = √(V_ψ)/(M_p α̇),

where W_ϕ and W_ψ are auxiliary dynamical variables, which will be useful for further calculations on the autonomous equations <cit.>. As a result, the derivatives of these dynamical variables with respect to α, which acts as a new time coordinate via dα = α̇ dt <cit.>, turn out to be

dX/dα = σ̈/α̇^2 - (α̈/α̇^2) X,
dY_ϕ/dα = ϕ̈/(M_p α̇^2) - (α̈/α̇^2) Y_ϕ,
dY_ψ/dα = ψ̈/(M_p α̇^2) - (α̈/α̇^2) Y_ψ,
dZ/dα = -(ρ_ϕ Y_ϕ + ρ_ψ Y_ψ) Z - 2(X + 1) Z - (α̈/α̇^2) Z,
dW_ϕ/dα = (λ_ϕ/2 Y_ϕ - α̈/α̇^2) W_ϕ,
dW_ψ/dα = (λ_ψ/2 Y_ψ - α̈/α̇^2) W_ψ,

where the exponential forms of V_ϕ, V_ψ, and f(ϕ,ψ) shown in the previous section have been used. Thanks to these explicit expressions, the field equations (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) can be transformed into the following autonomous equations:

dX/dα = [3(X^2 - 1) + 1/2 (Y_ϕ^2 + ω_0 Y_ϕ Y_ψ - Y_ψ^2) + Z^2/3] X + Z^2/3,
dY_ϕ/dα + ω_0/2 dY_ψ/dα = [3(X^2 - 1) + 1/2 (Y_ϕ^2 + ω_0 Y_ϕ Y_ψ - Y_ψ^2) + Z^2/3] (Y_ϕ + ω_0/2 Y_ψ) - λ_ϕ W_ϕ^2 + ρ_ϕ Z^2,
dY_ψ/dα - ω_0/2 dY_ϕ/dα = [3(X^2 - 1) + 1/2 (Y_ϕ^2 + ω_0 Y_ϕ Y_ψ - Y_ψ^2) + Z^2/3] (Y_ψ - ω_0/2 Y_ϕ) + λ_ψ W_ψ^2 - ρ_ψ Z^2,
dZ/dα = Z [3(X^2 - 1) + 1/2 (Y_ϕ^2 + ω_0 Y_ϕ Y_ψ - Y_ψ^2) + Z^2/3 - 2X - (ρ_ϕ Y_ϕ + ρ_ψ Y_ψ) + 1],
dW_ϕ/dα = [3X^2 + 1/2 (Y_ϕ^2 + ω_0 Y_ϕ Y_ψ - Y_ψ^2) + Z^2/3 + λ_ϕ/2 Y_ϕ] W_ϕ,
dW_ψ/dα = [3X^2 + 1/2 (Y_ϕ^2 + ω_0 Y_ϕ Y_ψ - Y_ψ^2) + Z^2/3 + λ_ψ/2 Y_ψ] W_ψ,

where we have used the following result derived from the Friedmann equation (<ref>):

W_ϕ^2 + W_ψ^2 = -3(X^2 - 1) - 1/2 (Y_ϕ^2 + ω_0 Y_ϕ Y_ψ - Y_ψ^2 + Z^2).

Below, we will show that the anisotropic (X ≠ 0) fixed point solutions of the autonomous equations are indeed equivalent to the obtained anisotropic power-law solutions of the Einstein field equations. Hence, the stability of the anisotropic fixed points will tell us that of the corresponding anisotropic power-law solutions. It is known that the anisotropic fixed points of the dynamical system are solutions of the following equations:

dX/dα = dY_ϕ/dα = dY_ψ/dα = dZ/dα = dW_ϕ/dα = dW_ψ/dα = 0.

As a result, from the last two equations in Eq. (<ref>), i.e., dW_ϕ/dα = dW_ψ/dα = 0, we obtain the following relations among the dynamical variables:

3X^2 + 1/2 (Y_ϕ^2 + ω_0 Y_ϕ Y_ψ - Y_ψ^2) + Z^2/3 = -λ_ϕ/2 Y_ϕ = -λ_ψ/2 Y_ψ,

requiring that W_ϕ ≠ 0, W_ψ ≠ 0, and Z ≠ 0 for non-trivial fixed points.
Furthermore, using these relations reduces the remaining fixed-point equations to

(λ_ϕ/2 Y_ϕ + 3) X - Z^2/3 = 0,
-(λ_ϕ/2 Y_ϕ + 3)(1 + ω_0/2 λ_ϕ/λ_ψ) Y_ϕ - λ_ϕ W_ϕ^2 + ρ_ϕ Z^2 = 0,
-(λ_ϕ/2 Y_ϕ + 3)(λ_ϕ/λ_ψ - ω_0/2) Y_ϕ + λ_ψ W_ψ^2 - ρ_ψ Z^2 = 0,
2X + (λ_ϕ/2 + ρ_ϕ + λ_ϕ/λ_ψ ρ_ψ) Y_ϕ + 2 = 0.

Next, we eliminate W_ϕ and W_ψ from Eqs. (<ref>) and (<ref>) with the help of Eq. (<ref>) to obtain the following equation:

-(λ_ϕ Y_ϕ/2 + 3) [(λ_ψ - λ_ϕ^2/λ_ψ + ω_0 λ_ϕ) Y_ϕ + λ_ϕλ_ψ] + (λ_ϕλ_ψ/6 + λ_ψρ_ϕ + λ_ϕρ_ψ) Z^2 = 0.

It is apparent that we now have three independent equations, (<ref>), (<ref>), and (<ref>), for the three dynamical variables X, Y_ϕ, and Z^2. Indeed, solving these non-linear equations gives us two solutions: one is trivial, corresponding to Z = 0, and the other is a non-trivial fixed point given by

X = 2/Q [λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ) - 4(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)],
Y_ϕ = -12/Q λ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ),
Z^2 = 18/Q^2 Ω̂_0 [λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ) - 4(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2)],

with

Ω̂_0 ≡ 4(λ_ϕρ_ψ + λ_ψρ_ϕ)(λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) - λ_ϕ^2λ_ψ^2 + 8(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2),
Q ≡ 4(λ_ϕρ_ψ + λ_ψρ_ϕ)(2λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) + λ_ϕ^2λ_ψ^2 + 8(λ_ψ^2 + ω_0λ_ϕλ_ψ - λ_ϕ^2).

Note that Y_ψ can be defined in terms of Y_ϕ as shown in Eq. (<ref>). It is straightforward to see that these anisotropic fixed point solutions are indeed equivalent to the anisotropic power-law solutions found in the previous section. Indeed, one can easily obtain the above expressions for the dynamical variables by using the following relations:

X = η/ζ,  Y_ϕ = -2/(λ_ϕζ),  Y_ψ = -2/(λ_ψζ),  Z^2 = v/ζ^2,  W_ϕ^2 = u_ϕ/ζ^2,  W_ψ^2 = u_ψ/ζ^2,

where the explicit values of ζ, η, and v in terms of λ_ϕ,ψ and ρ_ϕ,ψ have been derived in the previous section for the anisotropic power-law solutions. Hence, the stability of the anisotropic power-law solutions can be determined by considering that of the corresponding anisotropic fixed points. As a result, during the inflationary phase we have ζ ≃ ρ_ϕ/λ_ϕ + ρ_ψ/λ_ψ ≫ 1, η ≃ 1/3, u_ϕ ≃ 3 (ρ_ϕ/λ_ϕ) ζ ≫ 1, u_ψ ≃ 3 (ρ_ψ/λ_ψ) ζ ≫ 1, and v ≃ 3ζ ≫ 1, assuming that ρ_ϕ ≫ λ_ϕ ∼ O(1) along with ρ_ψ ≫ λ_ψ ∼ O(1). Therefore, the inflationary anisotropic fixed points behave as X, Y_ϕ, Y_ψ, Z^2 ≪ 1 and W_ϕ^2 ∼ W_ψ^2 ∼ 3/2, assuming that ρ_ϕ/λ_ϕ ∼ ρ_ψ/λ_ψ. Given the above results, we now would like to perturb the autonomous equations (<ref>)-(<ref>) around the obtained anisotropic fixed points during the inflationary phase.
As a result, the perturbation equations are approximately given by

dδX/dα ≃ -3δX,
dδY_ϕ/dα + ω_0/2 dδY_ψ/dα ≃ -3(δY_ϕ + ω_0/2 δY_ψ) - 2λ_ϕ W_ϕ δW_ϕ + ρ_ϕ δZ,
dδY_ψ/dα - ω_0/2 dδY_ϕ/dα ≃ -3(δY_ψ - ω_0/2 δY_ϕ) + 2λ_ψ W_ψ δW_ψ - ρ_ψ δZ,
dδZ/dα ≃ -Z (2δX + ρ_ϕ δY_ϕ + ρ_ψ δY_ψ),
dδW_ϕ/dα ≃ λ_ϕ/2 W_ϕ δY_ϕ,
dδW_ψ/dα ≃ λ_ψ/2 W_ψ δY_ψ,

where we have kept only the leading terms for simplicity. Now, taking exponential perturbations of the dynamical variables of the form <cit.>

δX = A_X exp[ωα],  δY_ϕ = A_{Y_ϕ} exp[ωα],  δY_ψ = A_{Y_ψ} exp[ωα],
δZ = A_Z exp[ωα],  δW_ϕ = A_{W_ϕ} exp[ωα],  δW_ψ = A_{W_ψ} exp[ωα],

leads the above perturbation equations to a set of algebraic equations, which can be written as the matrix equation

H( [ A_X; A_{Y_ϕ}; A_{Y_ψ}; A_Z; A_{W_ϕ}; A_{W_ψ} ]) ≡ [ [ -ω-3  0  0  0  0  0;  0  -3-ω  -ω_0/2 (3+ω)  ρ_ϕ  -2λ_ϕ W_ϕ  0;  0  ω_0/2 (3+ω)  -3-ω  -ρ_ψ  0  2λ_ψ W_ψ;  -2Z  -ρ_ϕ Z  -ρ_ψ Z  -ω  0  0;  0  λ_ϕ W_ϕ/2  0  0  -ω  0;  0  0  λ_ψ W_ψ/2  0  0  -ω ] ] ( [ A_X; A_{Y_ϕ}; A_{Y_ψ}; A_Z; A_{W_ϕ}; A_{W_ψ} ]) = 0.

Mathematically, Eq. (<ref>) admits non-trivial solutions if and only if

det H = 0,

which can be evaluated as a polynomial equation in ω,

ω f(ω) ≡ ω (a_6 ω^5 + ... + a_1) = 0,

where

a_6 = ω_0^2/4 + 1 > 0,
a_1 = -3 (λ_ψ^2 λ_ϕ^2 W_ψ^2 W_ϕ^2 + λ_ψ^2 ρ_ϕ^2 W_ψ^2 Z + λ_ϕ^2 ρ_ψ^2 W_ϕ^2 Z) < 0.

Here, we do not show the expressions of the a_i (i = 2-5) because we only need the sign of the highest power coefficient, a_6, and that of the lowest power coefficient, a_1. The reason is based on an observation mentioned in Ref. WFK. In particular, if a_6 > 0 and a_1 < 0 (or, inversely, a_6 < 0 and a_1 > 0), then the polynomial equation f(ω) = 0 shown in Eq. (<ref>) will admit at least one positive root ω > 0, which corresponds to an unstable perturbation mode of the anisotropic fixed points. More specifically, this claim can be understood as follows: f(ω) ∼ a_6 ω^5 > 0 as ω ≫ 1, while f(ω = 0) = a_1 < 0; hence the curve f(ω) must cross the positive horizontal ω-axis at least once, at some ω = ω^∗, and this intersection point ω = ω^∗ is indeed a positive root of the equation f(ω) = 0. It appears that the coefficient a_6 is always positive whatever the sign of ω_0. In addition, the sign of a_1 is independent of that of ω_0. Hence, we can conclude that the inclusion of the extra mixed term, ω_0 ∂_μϕ∂^μψ, does not change the stability of the two-scalar-field model <cit.>.
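The sign argument can also be illustrated numerically; the following toy snippet is our addition, with arbitrary stand-in coefficients rather than the actual a_i of the model, and simply confirms that any quintic with a_6 > 0 and a_1 < 0 has a real positive root:

import numpy as np

# Arbitrary illustrative coefficients [a6, a5, a4, a3, a2, a1] with a6 > 0 and a1 < 0;
# f(0) = a1 < 0 while f(omega) -> +infinity as omega -> +infinity, so f must cross zero
# at some omega > 0 by the intermediate value theorem.
coeffs = [2.0, -1.0, 3.0, 0.5, -2.0, -4.0]
roots = np.roots(coeffs)
positive_real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
print(positive_real)  # non-empty list: the unstable mode omega = omega* > 0 exists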
§ ABSENCE OF PURE KINETIC TERMS OF SCALAR FIELDS

In this section, we would like to discuss a special scenario in which the pure kinetic terms of the scalar fields do not show up, i.e., a = b = 0, leaving only the mixed kinetic term of the scalar fields in the following action:

S = ∫ d^4x √(-g) [ M_p^2/2 R - ω_0/2 ∂^μϕ∂_μψ - V_ϕ(ϕ) - V_ψ(ψ) - 1/4 f^2(ϕ,ψ) F_μν F^μν ].

In this scenario, it turns out that we cannot distinguish ϕ and ψ as canonical or phantom fields, although their mixed kinetic term can be either canonical or phantom depending on the sign of ω_0. In particular, ω_0 > 0 and ω_0 < 0 correspond to the quintessence-like and phantom-like mixed terms, respectively. As a result, the corresponding anisotropic power-law solutions are found to be

ζ = [4(λ_ϕρ_ψ + λ_ψρ_ϕ)(2λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) + λ_ϕ^2λ_ψ^2 + 8ω_0λ_ϕλ_ψ] / [6λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ)],
η = [λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ - 4ω_0] / [3(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ)],
u_ϕ = Ω̅_0 [λ_ψ^2 ρ_ϕ(λ_ϕ + 2ρ_ϕ) + 2λ_ϕλ_ψρ_ϕρ_ψ + ω_0(λ_ϕλ_ψ + 2λ_ϕρ_ψ - 2λ_ψρ_ϕ)],
u_ψ = Ω̅_0 [λ_ϕ^2 ρ_ψ(λ_ψ + 2ρ_ψ) + 2λ_ϕλ_ψρ_ϕρ_ψ + ω_0(λ_ϕλ_ψ - 2λ_ϕρ_ψ + 2λ_ψρ_ϕ)],
v = Ω̅_0 λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ - 4ω_0),

with the value of Ω̅_0 given by

Ω̅_0 = [4(λ_ϕρ_ψ + λ_ψρ_ϕ)(λ_ϕλ_ψ + 3λ_ϕρ_ψ + 3λ_ψρ_ϕ) - λ_ϕ^2λ_ψ^2 + 8ω_0λ_ϕλ_ψ] / {2[λ_ϕλ_ψ(λ_ϕλ_ψ + 2λ_ϕρ_ψ + 2λ_ψρ_ϕ)]^2}.

Here, the initial setup of the fields, the metric g_μν, and the potentials has been retained. It turns out that we still obtain anisotropic power-law solutions with non-vanishing η in this case, although the terms associated with the pure kinetic terms of ϕ and ψ no longer appear in the Einstein field equations. It has been shown that these terms are very small compared to those involving ρ_ϕ and ρ_ψ, due to the requirement for the inflationary phase that ρ_ϕ,ψ ≫ λ_ϕ,ψ ∼ O(1). Hence, the inflationary solutions can be approximated by those shown in Eqs. (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>). Now, we would like to discuss the stability of the anisotropic solutions found in this section. In particular, the stability analysis based on the dynamical system approach shown in the previous section will be used again. As a result, we obtain the new value a_6 = ω_0^2/4 > 0, while a_1 retains its negative value as shown in Eq. (<ref>). This result clearly indicates that the corresponding anisotropic solutions found in this special scenario are also unstable during the inflationary phase, no matter the sign of ω_0.
§ CONCLUSIONS

It is noted that most inflation models have been constructed for isotropic metrics such as the FLRW or de Sitter spacetime. However, some anomalies, like the hemispherical asymmetry and the Cold Spot existing in the CMB temperature <cit.>, suggest necessary modifications to the inflationary models <cit.>. In particular, anisotropic inflation dealing with anisotropic spacetimes, e.g., the Bianchi metrics, might be a better framework for investigating the properties of the early universe, where some exotic features might be explained <cit.>. As mentioned above, if the cosmic no-hair conjecture proposed by Hawking and his colleagues holds, the state of the universe at late times should be isotropic, no matter the initial state of the universe <cit.>. Some people have tried to prove this conjecture for quite general metrics, e.g., the Bianchi spacetimes, which are homogeneous but anisotropic <cit.>, or inhomogeneous and anisotropic ones <cit.>. Others have tried to seek counter-examples to this conjecture in one way or another <cit.>. In particular, the first correct counter-example to the conjecture was found in the supergravity motivated model <cit.>. This model has been proved to admit stable and attractor Bianchi type I solutions during the inflationary phase, even when the scalar field is non-canonical <cit.>. In order to support the Hawking conjecture, some papers dealing with the inclusion of the phantom field, whose kinetic term is negative definite, have appeared <cit.>. However, we do not know whether this conjecture still survives in extended frameworks, where other unusual interaction terms of scalar fields, which might be allowed to appear in the early universe in order to explain the observational data <cit.>, are considered. In this paper, therefore, we have studied a specific extended scenario of the two-scalar-field model <cit.>, in which the mixed kinetic term of the canonical and phantom scalar fields <cit.> is introduced. As a result, we have derived the corresponding Bianchi type I power-law solutions for the studied model. We have found that the anisotropy parameter is really small for the inflationary solutions, consistent with the observational data of the WMAP <cit.> or Planck <cit.>. The stability analysis has been performed to conclude that the obtained anisotropic solutions are indeed unstable during the inflationary phase, meaning that the mixed term does not affect, at least in the framework of the two-scalar-field extension of the KSW model, the validity of the cosmic no-hair conjecture. Additionally, we have also considered the special case in which the pure kinetic terms of ϕ and ψ are ignored altogether in the field equations. As a result, the corresponding power-law solutions of this case also turn out to be unstable during the inflationary phase, as expected. These results, along with those of Ref. WFK, strongly indicate that the unusual kinetic terms, such as the kinetic term of the phantom field and the mixed kinetic term of the canonical and phantom fields, can play a significant role in protecting the cosmic no-hair conjecture from the counter-examples admitted by the supergravity motivated model <cit.>. In other words, the validity of the cosmic no-hair conjecture seems to require the existence of extra fields along with their interaction terms, which might exist at the very high energy scales of the early universe.

We would like to note that this paper has been devoted to investigating only the validity of the cosmic no-hair conjecture in the unusual scenarios associated with the inclusion of the mixed kinetic term of canonical and phantom fields. A detailed comparison with the recent observations like the WMAP <cit.> or Planck <cit.> for the two-scalar-field model studied in this paper, through correlations between T, E, and B modes <cit.>, should be done in further work and presented elsewhere. It is known that loop quantum cosmology (gravity) has been regarded as one of the leading theories for resolving singularity problems <cit.>, such as the Big Bang and Big Rip singularities, which have been famous challenges to classical gravity, where quantum effects are ignored. Hence, the cosmic no-hair conjecture should be tested not only in the classical gravity framework but also in a quantum gravity one, such as loop quantum cosmology <cit.>, in order to improve its cosmological validity. For example, the stability of the Bianchi spacetimes should be investigated in the context of loop quantum cosmology <cit.> during the inflationary phase in order to see whether these spacetimes approach the de Sitter spacetime at late times, as predicted by the cosmic no-hair conjecture, or not. We hope that our present paper will shed more light on the nature of the early universe, especially the nature of the observed anomalies of the CMB temperature like the hemispherical asymmetry and the Cold Spot.
Kao of Institute of Physics in National Chiao Tung University for his useful advice on the cosmic no-hair conjecture and the KSW anisotropic inflation model. We would like to thank an anonymous referee very much for useful comments. This research is supported in part by VNU University of Science, Vietnam National University, Hanoi. 99Guth A. H. Guth,The inflationary universe: A possible solution to the horizon and flatness problems, Phys. Rev. D 23 (1981) 347;A. D. Linde,A new inflationary universe scenario: A possible solution of the horizon, flatness, homogeneity, isotropy and primordial monopole problems, Phys. Lett. B 108 (1982) 389;A. D. Linde,Chaotic inflation, Phys. Lett. B 129 (1983) 177.WMAPE. Komatsu et al. [WMAP Collaboration], Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Cosmological interpretation, Astrophys. J. Suppl. 192 (2011) 18[arXiv:1001.4538]; G. Hinshaw et al. [WMAP Collaboration], Nine-year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Cosmological parameter results, Astrophys. J. Suppl.208 (2013) 19[arXiv:1212.5226].PlanckP. A. R. Ade et al. [Planck Collaboration],Planck 2013 results. XXII. Constraints on inflation,Astron. Astrophys.571 (2014) A22 [arXiv:1303.5082];P. A. R. Ade et al. [Planck Collaboration], Planck 2015 results. XX. Constraints on inflation,Astron. Astrophys. 594 (2016) A20 [arXiv:1502.02114];P. A. R. Ade et al. [Planck Collaboration], Planck 2013 results. XXIII. Isotropy and statistics of the CMB,Astron. Astrophys.571 (2014) A23 [arXiv:1303.5083];P. A. R. Ade et al. [Planck Collaboration],Planck 2015 results. XVI. Isotropy and statistics of the CMB, Astron. Astrophys.594 (2016) A16 [arXiv:1506.07135]. Chernoff:2014cbaD. F. Chernoff and S.-H. H. Tye, Inflation, string theory and cosmic strings, Int. J. Mod. Phys. D 24 (2015) 1530010 [arXiv:1412.0579]. Buchert:2015wwrT. Buchert, A. A. Coley, H. Kleinert, B. F. Roukema and D. L. Wiltshire, Observational challenges for the standard FLRW model, Int. J. Mod. Phys. D 25 (2016) 1630007 [arXiv:1512.03313]. Ade:2013vbwP. A. R. Ade et al. [Planck Collaboration], Planck 2013 results. XXVI. Background geometry and topology of the Universe, Astron. Astrophys.571 (2014) A26 [arXiv:1303.5086];P. A. R. Ade et al. [Planck Collaboration], Planck 2015 results. XVIII. Background geometry & topology,” Astron. Astrophys.594 (2016) A18 [arXiv:1502.01593].Bianchi G. F. R. Ellis and M. A. H. MacCallum, A class of homogeneous cosmological models,Commun. Math. Phys.12 (1969) 108;G. F. R. Ellis, The Bianchi models: Then and now, Gen. Rel. Grav.38 (2006) 1003. Pitrou:2008gkC. Pitrou, T. S. Pereira and J. P. Uzan, Predictions from an anisotropic inflationary era, J. Cosmol. Astropart. Phys.04(2008) 004 [arXiv:0801.3596].Singh:2011gp P. Singh, Curvature invariants, geodesics and the strength of singularities in Bianchi-I loop quantum cosmology, Phys. Rev. D 85 (2012) 104011 [arXiv:1112.6391];B. Gupt and P. Singh, Contrasting features of anisotropic loop quantum cosmologies: The role of spatial curvature, Phys. Rev. D 85 (2012) 044011[arXiv:1109.6636];B. Gupt and P. Singh, A quantum gravitational inflationary scenario in Bianchi-I spacetime, Class. Quant. Grav.30 (2013) 145013 [arXiv:1304.7686].Hawking G. W. Gibbons and S. W. Hawking, Cosmological event horizons, thermodynamics, and particle creation, Phys. Rev. D 15 (1977) 2738; S. W. Hawking and I. G. Moss,Supercooled phase transitions in the very early universe,Phys. Lett. B 110 (1982) 35.wald R. M. 
Wald, Asymptotic behavior of homogeneous cosmological models in the presence of a positive cosmological constant, Phys. Rev. D 28 (1983) 2118.inhomogeneousM. Kleban and L. Senatore, Inhomogeneous anisotropic cosmology, J. Cosmol. Astropart. Phys. 10 (2016) 022 [arXiv:1602.03520]; W. E. East, M. Kleban, A. Linde and L. Senatore, Beginning inflation in an inhomogeneous universe, J. Cosmol. Astropart. Phys.09 (2016) 010 [arXiv:1511.05143].KSWS. Kanno, J. Soda and M. a. Watanabe,Anisotropic power-law inflation,J. Cosmol. Astropart. Phys.12 (2010) 024 [arXiv:1010.5307]; M. a. Watanabe, S. Kanno and J. Soda, Inflationary universe with anisotropic hair,Phys. Rev. Lett.102 (2009) 191302 [arXiv:0902.2833]. KSW1 A. Maleknejad, M. M. Sheikh-Jabbari and J. Soda, Gauge fields and inflation,Phys. Rep.528 (2013) 161 [arXiv:1212.2921];J. Soda, Statistical anisotropy from anisotropic inflation, Class. Quantum Grav.29 (2012) 083001 [arXiv:1201.6434]. WFK T. Q. Do and W. F. Kao, Anisotropic power-law inflation for the Dirac-Born-Infeld theory, Phys. Rev. D 84 (2011) 123009; T. Q. Do, W. F. Kao and I.-C. Lin, Anisotropic power-law inflation for a two scalar fields model,Phys. Rev. D 83 (2011) 123002; T. Q. Do and W. F. Kao, Anisotropic power-law solutions for a supersymmetry Dirac-Born-Infeld theory,Class. Quantum Grav.33 (2016) 085009.extensionsR. Emami, H. Firouzjahi, S. M. Sadegh Movahed and M. Zarei, Anisotropic inflation from charged scalar fields, J. Cosmol. Astropart. Phys. 02 (2011)005 [arXiv:1010.5495];K. Murata and J. Soda, Anisotropic inflation with non-Abelian gauge kinetic function, J. Cosmol. Astropart. Phys. 06 (2011) 037 [arXiv:1103.6164];S. Hervik, D. F. Mota and M. Thorsrud, Inflation with stable anisotropic hair: is it cosmologically viable?, J. High Energy Phys.11 (2011) 146 [arXiv:1109.3456];K. Yamamoto, M. a. Watanabe and J. Soda, Inflation with multi-vector hair: the fate of anisotropy,Class. Quantum Grav.29 (2012) 145008 [arXiv:1201.5309];M. Thorsrud, D. F. Mota and S. Hervik, Cosmology of a scalar field coupled to matter and an isotropy-violating Maxwell field, J. High Energy Phys. 10(2012) 066 [arXiv:1205.6261];K. i. Maeda and K. Yamamoto, Inflationary dynamics with a non-Abelian gauge field, Phys. Rev. D87 (2013) 023528 [arXiv:1210.4054];J. Ohashi, J. Soda and S. Tsujikawa, Anisotropic non-gaussianity from a two-form field,Phys. Rev. D 87 (2013)083520[arXiv:1303.7340];J. Ohashi, J. Soda and S. Tsujikawa, Anisotropic power-law k-inflation,Phys. Rev. D 88 (2013) 103517 [arXiv:1310.3053];A. Ito and J. Soda, Designing anisotropic inflation with form fields, Phys. Rev. D 92 (2015)123533 [arXiv:1506.02450];A. A. Abolhasani, M. Akhshik, R. Emami and H. Firouzjahi, Primordial statistical anisotropies: The effective field theory approach, J. Cosmol. Astropart. Phys. 03 (2016) 020 [arXiv:1511.03218];S. Lahiri,Anisotropic inflation in Gauss-Bonnet gravity, J. Cosmol. Astropart. Phys. 09 (2016) 025 [arXiv:1605.09247];M. Karciauskas, Dynamical analysis of anisotropic inflation, Mod. Phys. Lett. A 31 (2016)1640002[arXiv:1604.00269].implicationM. a. Watanabe, S. Kanno and J. Soda, The nature of primordial fluctuations from anisotropic inflation, Prog. Theor. Phys.123 (2010) 1041 [arXiv:1003.0056]; A. E. Gumrukcuoglu, B. Himmetoglu and M. Peloso, Scalar-scalar, scalar-tensor, and tensor-tensor correlators from anisotropic inflation, Phys. Rev. D 81 (2010) 063528 [arXiv:1001.4088]; M. a. Watanabe, S. Kanno and J. Soda, Imprints of anisotropic inflation on the cosmic microwave background,Mon. Not. 
Roy. Astron. Soc.412 (2011) L83 [arXiv:1011.3604];J. Ohashi, J. Soda and S. Tsujikawa, Observational signatures of anisotropic inflationary models,J. Cosmol. Astropart. Phys. 12 (2013) 009 [arXiv:1308.4488]; N. Bartolo, S. Matarrese, M. Peloso and A. Ricciardone,Anisotropic power spectrum and bispectrum in the f(ϕ)F^2 mechanism, Phys. Rev. D 87 (2013) 023504 [arXiv:1210.3257]; X. Chen, R. Emami, H. Firouzjahi and Y. Wang,The TT, TB, EB and BB correlations in anisotropic inflation,J. Cosmol. Astropart. Phys. 08(2014) 027 [arXiv:1404.4083];R. Emami, H. Firouzjahi and M. Zarei, Anisotropic inflation with the nonvacuum initial state, Phys. Rev. D 90 (2014) 023504 [arXiv:1401.4406]. ito A. Ito and J. Soda, MHz gravitational waves from short-term anisotropic inflation, J. Cosmol. Astropart. Phys. 04(2016) 035[arXiv:1603.00602];R. Emami and H. Firouzjahi, Clustering fossil from primordial gravitational waves in anisotropic inflation, J. Cosmol. Astropart. Phys. 10 (2015) 043 [arXiv:1506.00958].phantomR. R. Caldwell,A phantom menace? Cosmological consequences of a dark energy component with super-negative equation of state, Phys. Lett. B 545 (2002) 23 [astro-ph/9908168].Singh:2003vxP. Singh, M. Sami and N. Dadhich, Cosmological dynamics of phantom field,Phys. Rev. D 68 (2003) 023522[hep-th/0305110];V. B. Johri, Phantom cosmologies,Phys. Rev. D 70 (2004) 041303[astro-ph/0311293];E. Elizalde, S. Nojiri and S. D. Odintsov, Late-time cosmology in (phantom) scalar-tensor theory: Dark energy and the cosmic speed-up, Phys. Rev. D 70 (2004) 043539[hep-th/0405034]. phantom1 R. R. Caldwell, M. Kamionkowski and N. N. Weinberg, Phantom energy and cosmic doomsday,Phys. Rev. Lett. 91 (2003) 071301[astro-ph/0302506];S. Nojiri, S. D. Odintsov and S. Tsujikawa, Properties of singularities in (phantom) dark energy universe, Phys. Rev. D 71 (2005) 063004[hep-th/0501025].quintomY. F. Cai, E. N. Saridakis, M. R. Setare and J. Q. Xia, Quintom cosmology: theoretical implications and observations, Phys. Rep.493 (2010) 1[arXiv:0909.2776].quintom1Z. K. Guo, Y. S. Piao, X. M. Zhang and Y. Z. Zhang, Cosmological evolution of a quintom model of dark energy,Phys. Lett. B 608 (2005) 177 [astro-ph/0410654]. quintom2L. E. Allen and D. Wands, Cosmological perturbations through a simple bounce, Phys. Rev. D 70 (2004) 063515[astro-ph/0404441];Y. F. Cai, T. Qiu, R. Brandenberger, Y. S. Piao and X. Zhang, On perturbations of quintom bounce, J. Cosmol. Astropart. Phys. 03 (2008) 013 [arXiv:0711.2187].quintom3B. Feng, M. Li, Y. S. Piao and X. Zhang, Oscillating quintom and the recurrent universe, Phys. Lett. B 634 (2006) 101 [astro-ph/0407432].Copeland:2006wrE. J. Copeland, M. Sami and S. Tsujikawa, Dynamics of dark energy, Int. J. Mod. Phys. D 15 (2006) 1753 [hep-th/0603057].singularityP. Singh, Loop quantum cosmology and the fate of cosmological singularities,Bull. Astron. Soc. India 42 (2014) 121 [arXiv:1509.09182].Ashtekar:2011ni A. Ashtekar and P. Singh, Loop quantum cosmology: A status report, Class. Quant. Grav.28 (2011) 213001[arXiv:1108.0893].SinghM. Sami, P. Singh and S. Tsujikawa, Avoidance of future singularities in loop quantum cosmology,Phys. Rev. D 74 (2006) 043514 [gr-qc/0605113];P. Singh, Are loop quantum cosmos never singular?,Class. Quantum Grav.26(2009) 125005 [arXiv:0901.2750].mixedL. P. Chimento, M. I. Forte, R. Lazkoz and M. G. Richarte, Internal space structure generalization of the quintom cosmological scenario,Phys. Rev. D 79 (2009) 043502 [arXiv:0811.3643];C. van de Bruck and J. M. 
Weller,Quintessence dynamics with two scalar fields and mixed kinetic terms, Phys. Rev. D 80 (2009) 123014 [arXiv:0910.1934]; E. N. Saridakis and J. M. Weller, A quintom scenario with mixed kinetic terms,Phys. Rev. D 81 (2010) 123523 [arXiv:0912.5304];A. Paliathanasis and M. Tsamparlis, Two scalar field cosmology: Conservation laws and exact solutions, Phys. Rev. D 90 (2014) 043529 [arXiv:1408.1798]. mixed-1D. Langlois and S. Renaux-Petel, Perturbations in generalized multi-field inflation,J. Cosmol. Astropart. Phys. 04 (2008)017 [arXiv:0801.1085]. TQD T. Q. Do and W. F. Kao, Anisotropically expanding universe in massive gravity, Phys. Rev. D 88 (2013) 063006;T. Q. Do, Higher dimensional nonlinear massive gravity, Phys. Rev. D 93 (2016)104003[arXiv:1602.05672]; T. Q. Do, Higher dimensional massive bigravity, Phys. Rev. D 94 (2016) 044022[arXiv:1604.07568].
http://arxiv.org/abs/1702.08308v1
{ "authors": [ "Tuan Q. Do", "Sonnet Hung Q. Nguyen" ], "categories": [ "gr-qc", "astro-ph.CO", "hep-th" ], "primary_category": "gr-qc", "published": "20170227145451", "title": "Anisotropic power-law inflation in a two-scalar-field model with a mixed kinetic term" }
tetsufumi.tanamoto@toshiba.co.jp
Corporate R & D Center, Toshiba Corporation, Saiwai-ku, Kawasaki 212-8582, Japan
Topological error-correcting codes, such as surface codes and color codes, are promising because quantum operations are realized by two-dimensionally (2D) arrayed quantum bits (qubits). However, the physical wiring of electrodes to qubits is complicated, and 3D integration for the wiring requires further development of fabrication technologies. Here, we propose a method to reduce the congestion of wiring to qubits by simply adding a SWAP gate after each controlled-NOT (CNOT) gate. SWAP gates exchange the roles of qubits, so that these roles are shared between different qubits. We found that our method transforms the qubit layout and reduces the number of qubits that cannot be accessed two-dimensionally. We show that fully 2D layouts, including both qubits and control electrodes, can be achieved for surface and color codes of minimum sizes. This method will be beneficial for simplifying the fabrication process of quantum circuits as well as for improving the reliability of the qubit system.
03.67.Lx, 03.67.Mn, 73.21.La

Work-sharing of qubits in topological error corrections
Hayato Goto
December 30, 2023
=======================================================

§ INTRODUCTION
Solid-state quantum computers <cit.> have made significant progress recently in experiments on quantum error correction <cit.>. Topological error-correcting codes, such as surface codes <cit.> and color codes <cit.>, have been intensively investigated because quantum operations are realized through physical interactions between nearest-neighboring qubits. However, the physical wiring of electrodes to qubits is complicated, and 3D integration for the wiring requires further development of fabrication technologies <cit.>. The complexity of wiring is mainly attributable to the connections from data-qubits to syndrome-qubits. Although quantum annealing machines based on superconducting qubits have already been developed <cit.>, accurate control of quantum error correction is necessary in order to realize quantum computation. Here, we propose a method to reduce the congestion of wiring to qubits: we simply add a SWAP gate <cit.> after each controlled-NOT (CNOT) gate. Because SWAP gates exchange the roles of qubits, the roles of syndrome measurements are shared between different qubits. This transforms the qubit layout and reduces the number of qubits that cannot be accessed two-dimensionally. In particular, we show that fully 2D layouts, including both qubits and control electrodes, can be achieved for surface and color codes of minimum sizes. Because the CNOT gate is indispensable to every quantum computer, the work-sharing of qubits by SWAP gates will be applicable to general quantum circuits to balance uneven workloads among qubits. Topological codes are based on the stabilizer formalism <cit.>, where a parity-check process is carried out by summing the `1's of data-qubits using CNOT operations. Errors can be detected by a change of the parity. As an example, when a wave function of four data-qubits is given by |Ψ⟩ = ∑_{i_1,i_2,i_3,i_4=0,1} a_{i_1i_2i_3i_4} |i_4i_3i_2i_1⟩ and a syndrome-qubit is initialized as |0⟩, the measurement process of a Z-type check operator (Z-check) is given by |Ψ⟩|0⟩ → ∑ a_{i_1i_2i_3i_4} |i_4i_3i_2i_1⟩ |i_4⊕i_3⊕i_2⊕i_1⟩ (Fig. 1(a)), where ⊕ denotes summation modulo 2. Similarly, the measurement process of an X-type check operator (X-check) is illustrated in Fig. 1(b) (|+⟩ ≡ [|0⟩+|1⟩]/√2).
When the number of data-qubits increases, the corresponding number of wirings to a single syndrome-qubit increases. Physical wires have a finite width, and when many wires are arranged closely, the crosstalk problem also appears <cit.>. Thus, it is desirable to avoid the concentration of wires in a small space. Using local interactions between nearest-neighbor qubits on a two-dimensional (2D) physical plane, surface codes <cit.> and color codes <cit.> have a high tolerance against errors. However, control gate electrodes are generally not placed in the same physical plane as the qubits, and vertical access to qubits is unavoidable. Figures 1(c) and 1(d) show the Z-checks and X-checks of the 7-qubit color code, respectively. The three syndrome-qubits `A'-`C' are connected to seven data-qubits, where the data-qubit `g' must be accessed from the vertical direction by stacking an additional wiring layer. Figure 1(e) shows a distance-3 surface code, where Z-checks and X-checks are performed simultaneously <cit.>. The code distance d is the measure of a code in which ⌊(d-1)/2⌋ physical errors can be corrected by repeated measurements, and corresponds to the array size in the surface code. In this code, the inner nine qubits must be accessed vertically <cit.>. Thus, many stabilizer codes lead to a concentration of wires to syndrome-qubits, and the complexity of wiring between qubits is unavoidable <cit.>. Because qubits are sensitive to decoherence, the fabrication process of wiring is particularly difficult, even with state-of-the-art techniques.

Our method for reducing wiring congestion is to simply add a SWAP gate after each CNOT gate in stabilizer measurements. SWAP gates are widely used in quantum operations and exchange qubit states without changing the entanglement of the system <cit.>. The combination of a SWAP gate and a CNOT gate (hereafter a `CNOT+SWAP gate') shifts the role of the syndrome-qubit and reduces the congestion of wiring. The replacement of CNOT gates by CNOT+SWAP gates is effective in stabilizer measurements because syndrome measurements are repeated many times. We show that this method changes qubit layouts and, importantly, that for codes of small sizes the wiring can be placed on the same physical plane as the qubit array. The insertion of a SWAP gate after each CNOT gate imposes no overhead; rather, it is advantageous when the physical interactions between qubits are XY interactions <cit.>. The CNOT+SWAP gate can be performed directly by the XY interaction (iSWAP) with single-qubit rotations (Fig. 2(a)) <cit.>. On the other hand, a CNOT gate needs two iSWAP gates, which makes the CNOT operation complicated and more fragile. Thus, in the case of the XY interaction, the replacement of CNOT gates by CNOT+SWAP gates makes quantum operations more reliable and saves operation time <cit.>. Moreover, high-precision iSWAP gates are experimentally feasible <cit.>. This paper is organized as follows: In Sec. II we show an example of the application of the replacement of CNOT gates by CNOT+SWAP gates to a fundamental stabilizer measurement. In Sec. III, we show the layout changes of the minimum 7-qubit color code and the second smallest color code produced by our method. In Sec. IV, we first show the layout changes of standard surface codes. Next, we consider the application of our method to a rotated surface code, and we also show the results of numerical simulations of logical error probabilities. Sec. V is devoted to a conclusion. In the Appendix, we give detailed explanations of the numerical simulation of Sec. IV and a process for the fault-tolerant 7-qubit color code.
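The statement above, that a CNOT+SWAP gate is a single iSWAP up to single-qubit rotations, can be checked numerically by comparing local invariants of the two gates. The short script below is an illustrative sketch (the matrix conventions, the magic-basis form, and the helper name `makhlin` are ours, not from the original references): two two-qubit gates are equivalent up to single-qubit rotations exactly when their Makhlin invariants (G_1, G_2) coincide.

```python
import numpy as np

# Two-qubit gates in the computational basis |00>, |01>, |10>, |11>
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=complex)
SWAP = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], dtype=complex)
ISWAP = np.array([[1,0,0,0],[0,0,1j,0],[0,1j,0,0],[0,0,0,1]], dtype=complex)

# "Magic" basis transformation used to define the Makhlin invariants
Q = np.array([[1,0,0,1j],[0,1j,1,0],[0,1j,-1,0],[1,0,0,-1j]], dtype=complex)/np.sqrt(2)

def makhlin(U):
    """Local invariants (G1, G2); equal invariants <=> equal up to 1-qubit gates."""
    m = Q.conj().T @ U @ Q
    M = m.T @ m
    t, t2, d = np.trace(M), np.trace(M @ M), np.linalg.det(U)
    return t**2/(16*d), (t**2 - t2)/(4*d)

# CNOT followed by SWAP versus a single iSWAP: same local equivalence class
print(np.allclose(makhlin(SWAP @ CNOT), makhlin(ISWAP)))  # -> True
```

The check prints True, consistent with the single-step XY realization of the CNOT+SWAP gate cited above.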
§ STABILIZER CODE
The effect of the insertion of a SWAP gate after each CNOT gate is easily understood when we apply this to the single stabilizer measurement shown in Figs. 1(a) and 1(b); the results are shown in Figs. 2(b) and 2(c), respectively. For the Z-check, we have

|Ψ⟩|0⟩ → ∑_{i_1,i_2,i_3,i_4=0,1} a_{i_1i_2i_3i_4} |i_4 i_3 i_2 i_1⟩ |i_1⟩
→ ∑ a_{i_1i_2i_3i_4} |i_4, i_3, i_2⊕i_1, i_2⟩ |i_1⟩
→ ∑ a_{i_1i_2i_3i_4} |i_4, i_3⊕i_2⊕i_1, i_3, i_2⟩ |i_1⟩
→ ∑ a_{i_1i_2i_3i_4} |i_4⊕i_3⊕i_2⊕i_1, i_4, i_3, i_2⟩ |i_1⟩.

We can see that the SWAP gates shift the roles of the qubits one by one, and finally, if we regard qubit `4' as a new syndrome-qubit, we obtain the same wave function |Ψ⟩ as that of Fig. 1(a). As shown in Fig. 2(b), the qubit layout is now transformed into a one-dimensional qubit array. The same holds for the X-check (Fig. 2(c)):

|Ψ⟩|+⟩ → ∑ a_{i_1i_2i_3i_4} |+, i_4⊕+, i_3⊕+, i_2⊕+⟩ |i_1⊕+⟩.

If we measure qubit `4', we obtain the same wave function |Ψ⟩ as that in Fig. 1(b). Thus, we can avoid the congestion of wiring to one syndrome-qubit by simply inserting SWAP gates. After the Z-checks in Fig. 2(b), when qubit `4' is initialized to a |+⟩ state and the process of Fig. 2(b) is reversed from the green operation to the black operation, we can restore the qubits to their original roles (the closed circle returns to the syndrome-qubit). Thus, serial operations of X-checks following Z-checks can be repeated, and the role of the syndrome-qubit is shared between the two end qubits.

§ COLOR CODE
Let us show the effect of the replacement of CNOT gates by CNOT+SWAP gates in the 7-qubit color code, which is the minimum color code (code distance 3) <cit.>. New connections between qubits are determined such that each step reproduces the same qubit states as those of Figs. 1(c) and 1(d). Figures 2(d) and 2(e) show the results of applying CNOT+SWAP gates, instead of CNOT gates, to the 7-qubit color code. This replacement transforms the qubit layout into a quasi-one-dimensional layout, where 2D access to all qubits is possible (Fig. 2(f)), and consequently no additional 3D layer for wiring is needed. Figure 2(e) is carried out after Fig. 2(d), where the order of the CNOT+SWAP gates is reversed between Figs. 2(d) and 2(e). That is, we can use the same connections for the Z-checks and X-checks, and the syndrome measurements, in which the role of the syndrome-qubits is shared among five qubits (qubits `A', `B', `C', `a' and `b'), can be repeated. The application of our method to the next larger code (code distance 5) also reduces the number of qubits that cannot be accessed two-dimensionally, from 5 to 2 qubits, as shown in Fig. <ref>. So far, the stabilizer measurements have been assumed to be error-free. Otherwise, unexpected errors propagate through the stabilizer measurements for the color codes. The fault-tolerant version <cit.> of the 7-qubit code is described in the Appendix.
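Before moving on, the Z-check chain of Sec. II can be verified directly with a small state-vector simulation. The snippet below is an illustrative sketch (the qubit indexing and the helper `apply_2q` are our own conventions): it applies the four CNOT+SWAP steps of Fig. 2(b) to every computational-basis input and confirms that the parity i_4⊕i_3⊕i_2⊕i_1 ends up on the end qubit of the chain.

```python
import numpy as np

CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], float)
SWAP = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], float)
CS = SWAP @ CNOT          # one CNOT+SWAP step (control = first qubit of the pair)

def apply_2q(state, gate, q1, q2, n=5):
    """Apply a 4x4 gate to qubits (q1, q2) of an n-qubit state vector."""
    psi = np.moveaxis(state.reshape([2]*n), (q1, q2), (0, 1))
    psi = (gate @ psi.reshape(4, -1)).reshape([2, 2] + [2]*(n-2))
    return np.moveaxis(psi, (0, 1), (q1, q2)).ravel()

for bits in np.ndindex(2, 2, 2, 2):               # data bits i1..i4 on qubits 0..3
    idx = np.ravel_multi_index(bits + (0,), [2]*5)
    psi = np.zeros(2**5); psi[idx] = 1.0          # syndrome (qubit 4) starts in |0>
    for pair in [(0, 4), (1, 0), (2, 1), (3, 2)]:  # the CNOT+SWAP chain of Fig. 2(b)
        psi = apply_2q(psi, CS, *pair)
    out = np.unravel_index(np.argmax(psi), [2]*5)
    assert out[3] == sum(bits) % 2                # parity sits on the end data qubit
print("Z-check parity reproduced by the CNOT+SWAP chain")
```

Since every step is a permutation of the computational basis, the check runs over all 16 inputs and passes, matching the final state written in the chain above.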
§ SURFACE CODE
Next, we apply our idea to the standard surface code, as shown in Fig. <ref>. In each step, the quantum state of Fig. <ref> coincides with that of the standard state of Fig. 1(e). Figure <ref>(a) shows the new connections between the qubits of a distance-3 surface code after the replacement of CNOT gates by CNOT+SWAP gates. When the connected qubits are rearranged closely, a new layout of qubits arises, as illustrated in Fig. <ref>(b), where 20 qubits share the role of the syndrome measurement. The number of qubits to which two-dimensional access is impossible is reduced to three (qubits `14', `22' and `42') from nine (=3×3 in Fig. 1(e)). Similarly, the number of qubits that cannot be accessed two-dimensionally for a distance-5 surface code is reduced from 49 (=7×7) to 35 (=7×5), as shown in Fig. <ref>(c) and Fig. <ref>(d), respectively. In general, the number of two-dimensionally inaccessible qubits for a distance-d surface code is reduced from (2d-3)^2 to (2d-3)×(2d-5) by our method. For distance-d codes, a set of syndrome measurements is repeated more than d times. The next processes of Figs. <ref>(b),(d) are carried out by reversing both the order of the connections and the directions of the arrows. That is, the green connections are carried out first and the black ones last, with the directions of the interactions reversed. After the reversing process, the next connections are the same as in Figs. <ref>(a),(c).

Rotated surface codes are known to be efficient surface codes <cit.>. In the case of distance-3 surface codes, the number of qubits is reduced from 25 (Fig. 1(e)) to 17 (Fig. <ref>(a)). The order of operations is crucial for fault tolerance <cit.>. Figure <ref>(b) shows the result of the replacement of CNOT gates by CNOT+SWAP gates. Twelve qubits share the role of the syndrome measurements. Since the central qubit cannot be accessed two-dimensionally as before, we introduce a technique for cutting the connections, as a result of which a fully 2D layout is achieved. The cut technique is carried out by the additional five processes shown in Figs. <ref>(c)-(g) with ancilla qubits. (These operations are performed following the fourth time step shown in Fig. <ref>(b).) The ancilla qubits are initialized to |0⟩ states and used to hold the quantum states of the qubits `0' and `14'. By serial application of CNOT+SWAP gates, the same operations as those without the cut can be realized. To demonstrate the usefulness of the cut technique, we evaluated the logical error probabilities of the distance-3 rotated surface code (Figs. <ref>(h),(i)) by numerical simulations <cit.> (see the Appendix for details). The simulation results shown in Figs. <ref>(h) and (i) not only prove the fault tolerance but also show a substantial reduction of the error probabilities. The new rotated surface code layout with electrodes is shown in Fig. <ref>, where each electrode represents a set of wiring lines, as used in Ref. <cit.>.

§ CONCLUSION
The replacement of CNOT gates by CNOT+SWAP gates, which correspond to iSWAP gates, changes the layouts of quantum error-correcting codes and relaxes the congestion of wiring between qubits. Importantly, we showed that fully 2D layouts, including both qubits and control electrodes, can be achieved for surface and color codes of minimum sizes. Through the work-sharing, the concentration of workloads on specific qubits can be relaxed, and the degradation of the qubits will be mitigated. This will improve the reliability of a quantum circuit. Moreover, simple 2D circuits are important for the initial development phases of experiments. In general, it is not easy to operate circuits as expected, because mistakes are often found in designs after fabrication. Thus, repeated fabrication of chips is necessary. Simple circuit layouts are beneficial for reducing the fabrication period as well as simplifying the fabrication process.

We thank A. Nishiyama, M. Koyama, H. Hieda and S. Yasuda for discussions.
§ SIMULATION METHOD.
To show the usefulness of the cut technique, we performed numerical simulations and evaluated the performance. Figure 4(h) shows the results where the single-qubit error probability (p_1) is comparable to the two-qubit error probability (p_2), with p_1 = p_2/2. Figure 4(i) shows the results where single-qubit errors are rare compared to two-qubit errors, with p_1 = 0. Here, we assume that the two-qubit error is an error during a CNOT+SWAP operation. The curves in Figs. 4(h) and 4(i) are fits to the simulation results with a functional form of α p_2^2, where α is a single fitting parameter. These excellent fits mean that all single-qubit errors have been corrected and therefore prove the fault tolerance of the syndrome measurements with the cut technique. It is also notable that the logical error probabilities are comparable to those in Ref. <cit.>, where the cut technique is not used. Thus, we conclude that the cut technique is useful for realizing fully accessible 2D layouts without spoiling performance. In this simulation, we initially prepare an error-free logical |0⟩_L or |+⟩_L and repeat syndrome measurements. In Fig. <ref>, the next process is carried out by reversing the order of the connections such that the cut is carried out first and the black one last. At the end of each round of syndrome measurements, we perform recovery operations according to the measurement results, where we estimate error positions with the lookup-table decoder designed for the present case (see Tables 1-4). After the recovery, we estimate the logical error probabilities by error-free measurements and by decoding the results. The error model assumed here is the standard depolarizing noise model: one of the three single-qubit Pauli errors occurs with probability p_1/3 on each idle qubit, after each initialization to |0⟩ and each Hadamard gate, and before each computational-basis measurement; one of the fifteen two-qubit Pauli errors occurs with probability p_2/15 after each CNOT+SWAP gate. For this simulation, we used the same stabilizer simulator as Ref. <cit.>. From the simulation results, we estimated the logical error probabilities per round using the results of 40 rounds.

§ FAULT-TOLERANT 7-QUBIT CODE.
One way to meet the condition of fault tolerance <cit.> is to use cat states in the stabilizer measurements <cit.>. Figure <ref>(a) shows a direct application of the four-qubit cat state [|0⟩|0⟩|0⟩|0⟩ + |1⟩|1⟩|1⟩|1⟩]/√2 to the color code in the Z-check. (The creation of the cat state is described in Fig. <ref>.) The connections between qubits change when compared with Fig. 2(d) in the text, because at most four data-qubits can access the cat state simultaneously. Note that the connections A4-A3-B4-C1-C4-A4 constitute a closed loop, and consequently the qubit `g' cannot be accessed in the same physical plane. By cutting the connection between `A4' and `C4', we can realize the fault-tolerant 7-qubit color code, where all qubits can be accessed two-dimensionally. Three ancilla qubits that interact with the qubits `A3', `B4', and `C1' are introduced to store the quantum states of those qubits. The interaction between `A4' and `C4' with the cut technique is carried out by connecting qubit pairs among `A3', `B4', and `C1' one by one. An additional 11 steps are required (see Fig. <ref>).
99Yamamoto T. Yamamoto, Y.A. Pashkin, O. Astafiev, Y. Nakamura, and J.S. Tsai, Nature 425, 941 (2003).Niskanen0 A.O. Niskanen, K. Harrabi, F. Yoshihara, Y. Nakamura, S. Lloyd, and J.S. Tsai, Science 316, 723 (2007).Mooij J.H. Plantenberg, P.C. de Groot, C.J.
Harmans, and J.E. Mooij, Nature 447, 836 (2007).Wallraff A. Wallraff, D.I. Schuster, A. Blais, L. Frunzio, R.S. Huang, J. Majer, S. Kumar, S.M. Girvin, and R.J. Schoelkopf, Nature 431, 162 (2004). Xmon1 R. Barends,J. Kelly, A. Megrant, D. Sank, E. Jeffrey, Y. Chen, Y. Yin, B. Chiaro, J. Mutus, C. Neill,P . ÓMalley, P. Roushan, J. Wenner,T.C. White, A.N. Cleland, and J.M. Martinis, Phys. Rev. Lett.111, 080502 (2013).Xmon2 R. Barends,J. Kelly, A. Megrant, A. Veitia, D. Sank, E. Jeffrey, T.C. White, J. Mutus,A.G. Fowler, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, C. Neill, P. ÓMalley, P. Roushan, A. Vainsencher, J. Wenner, A.N. Korotkov, A. N. Cleland, and J.M. Martinis,Nature 508, 500 (2014). LaddT.D Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J.L. ÓBrien,Nature 464, 45 (2010).Schoelkopf R.J. SchoelkopfandS.M. Girvin, Nature 451, 664 (2008).stabilizer D. Ristè,S. Poletto, M.Z. Huang, A. Bruno, V. Vesterinen, O.P. Saira, andL. DiCarlo,Nat. Commun.6, 6983 (2015). KellyJ. Kelly, R. Barends, A.G. Fowler, A. Megrant, E. Jeffrey, T.C. White, D. Sank, J.Y. Mutus, B. Campbell,Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, I.C. Hoi, C. Neill,P. J. J. ÓMalley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, A.N. Cleland, and J.M. Martinis, Nature 519, 66 (2015).IBMA.D. Córcoles, E. Magesan, S.J. Srinivasan, A.W. Cross, M. Steffen, J.M. Gambetta, and J.M. Chow,Nat. Commun. 6, 6979 (2015). Kitaev1 S.B. Bravyi and A.Y. Kitaev, arXiv:quant-ph/9811052.Kitaev2 E. Dennis, A. Kitaev, Y.A. Landahl, and J. Preskill,J. Math.Phys.43,4452 (2002).Fowler1 A.G. Fowler, M. Mariantoni, J.M. Martinis, and A.N. Cleland, Phys. Rev. A 86, 032324 (2012).Fowler2 A.G. Fowler, A.C. Whiteside, A.L. McInnes, andA. Rabbani, Phys. Rev. X 2 041003 (2012).Hill C.D. Hill,E. Peretz, S.J. Hile, M.G. House, M. Fuechsle, S. Rogge, M.Y. Simmons, and L.C.L. Hollenberg, Sci. Adv.1, e1500707 (2015).Devitt S.J. Devitt, Phys. Rev. A 94, 032329 (2016). Bombin H. Bombin and M.A. Martin-Delgado,Phys. Rev. Lett. 97, 180501 (2006).LandahlA.J. Landahl,J.T. Anderson, and P.R. Rice, arXiv:1108.5738.Jones C. Jones, P. Brooks, and J. Harrington, Phys. Rev. A 93, 052332 (2016). NiggD. Nigg, M. Mueller, E. A. Martinez, P. Schindler, M. Hennrich, T. Monz, M.A. Martin-Delgado, and R. Blatt, Science 345,302 (2014). Brecht T. Brecht, W. Pfaff, C. Wang, Y. Chu, L.Frunzio, M.H. Devoret, and R.J. Schoelkopf,npj Quantum Information 2 16002 (2016).DwaveT. Lanting, A.J. Przybysz, A.Y. Smirnov, F.M. Spedalieri, M.H Amin, A.J. Berkley, R. Harris, F. Altomare, S. Boixo, P. Bunyk, N. Dickson, C. Enderud, J.P. Hilton, E. Hoskinson,M.W. Johnson, E. Ladizinsky, N. Ladizinsky, R. Neufeld, T. Oh, I. Perminov, C. Rich, M.C. Thom,E. Tolkacheva, S. Uchaikin, A.B. Wilson and G. Rose,Phys. Rev. X 4, 021041 (2014). Nielsen M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information(Cambridge Univ. Press, 2000).Gottesman D. Gottesman, quant-ph/9705052.DiV D.P. DiVincenzo, D. Bacon, J. Kempe, G. Burkard, and K.B. Whaley, Nature 408, 339 (2000).DiV2 K.M. Svore, B.M. Terhal, and D.P. DiVincenzo, Phys. Rev. A 72, 022317 (2005).XY The XY model is expressed by the Hamiltonian H_xy=∑_i<j J( σ_i^xσ_j^x+ σ_i^yσ_j^y),where σ_i^α (α=x,y) are thePauli matrices acting on the i-th qubit withbasis |0⟩ =|↓⟩ and|1⟩ =|↑⟩.The iSWAP operation acting onqubits `1' and `2' is given byU_xy^(12)(t=π/(4J))=e^-i(π/4J)H_xy^(12), such as|00⟩→ |00⟩,|11⟩→ |11⟩, |01⟩→ -i|10⟩, and |10⟩→ -i|01⟩.Schuch N. Schuch and J. Siewert, Phys. Rev. A 67, 032301 (2003).iSWAP T. 
Tanamoto, Y.X. Liu, X. Hu, and F. Nori, Phys. Rev. Lett. 102, 100501(2009).iSWAP2 T. Tanamoto, K. Maruyama, Y.X. Liu, X. Hu, and F. Nori, Phys. Rev. A 78, 062313 (2008).Wei L.F. Wei, J.R. Johansson, L.X. Cen, S. Ashhab, and F. Nori, Phys. Rev. Lett. 100, 113601 (2008).BialczakR.C. Bialczak, M. Ansmann, M. Hofheinz, E. Lucero, M. Neeley, A.D. ÓConnell, D. Sank,H. Wang, J. Wenner, M. Steffen, A.N. Cleland, and J.M. Martinis,Nat. Phys 6, 409 (2010).Dewes A. Dewes, F.R. Ong, V. Schmitt, R. Lauro, N. Boulant, P. Bertet, D. Vion, and D. Esteve, Phys. Rev. Lett. 108, 057002 (2012). McKay D.C. McKay, S. Filipp, A. Mezzacapo, E. Magesan, J.M. Chow, and J.M. Gambetta,arXiv:1604.03076. Gottesman2 D. Gottesman,Proc. Sympos. Appl. Math68, 13 (2010).Horsman C. Horsman, A.G. Fowler, S. Devitt, and R. Van Meter, New J. Phys.14, 123011 (2012).Tomita Y. Tomita and K.M. Svore,Phys. Rev. A 90, 062320 (2014).Goto H. Goto and H. Uchikawa,Sci. Rep. 3, 2044 (2013).Shor P.W. Shor,quant-ph/9605011.
http://arxiv.org/abs/1702.08110v1
{ "authors": [ "Tetsufumi Tanamoto", "Hayato Goto" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170226231648", "title": "Work-sharing of qubits in topological error corrections" }
Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
Instituto de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain
Laboratoire AIM, CEA/DRF-CNRS, Université Paris 7 Diderot, IRFU/SAp, Centre de Saclay, 91191 Gif-sur-Yvette, France
Departamento de Astrofísica, Universidad de La Laguna, E-38205 La Laguna, Tenerife, Spain
INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, 95123 Catania, Italy
Royal Observatory of Belgium, Ringlaan 3, Brussels, Belgium
trianas@oma.be
The Kepler space telescope has provided time series of red giants of such unprecedented quality that a detailed asteroseismic analysis becomes possible. For a limited set of about a dozen red giants, the observed oscillation frequencies obtained by peak-bagging, together with the most recent pulsation codes, allowed us to reliably determine the core/envelope rotation ratio. The results so far show that the current models are unable to reproduce the rotation ratios, predicting higher values than what is observed and thus indicating that an efficient angular momentum transport mechanism should be at work. Here we provide an asteroseismic analysis of a sample of 13 low-luminosity low-mass red giant stars observed by Kepler during its first nominal mission. These targets form a subsample of the 19 red giants studied previously <cit.>, which not only have a large number of extracted oscillation frequencies, but also unambiguous mode identifications. We aim to extend the sample of red giants for which internal rotation ratios obtained by theoretical modeling of peak-bagged frequencies are available. We also derive the rotation ratios using different methods, and compare the results of these methods with each other. We built seismic models using a grid search combined with a Nelder-Mead simplex algorithm and obtained rotation averages employing Bayesian inference and inversion methods. We compared these averages with those obtained using a previously developed model-independent method. We find that the cores of the red giants in this sample are rotating 5 to 10 times faster than their envelopes, which is consistent with earlier results. The rotation rates computed from the different methods show good agreement for some targets, while some discrepancies exist for others.
Internal rotation of 13 low-mass low-luminosity red giants in the Kepler field
S. A. Triana <ref>,<ref> E. Corsaro <ref>,<ref>,<ref> J. De Ridder <ref> A. Bonanno <ref> F. Pérez Hernández <ref>,<ref> R. A. García <ref>
Received ; accepted
===========================================================================================================================================================================================================

§ INTRODUCTION
The impact of the Kepler space mission <cit.> on diverse aspects of stellar astrophysics has been enormous and revolutionary by many standards. Our understanding of stellar evolution through asteroseismology has improved dramatically, and with this improvement, new challenges have appeared. Sun-like stars, particularly red giants, exhibit a very rich pulsation pattern <cit.>. Some of these pulsations can be associated with pressure (p) modes, which are excited stochastically by turbulent convection.
These p modes propagate throughout the star with the highest sensitivity to the external convective envelope, as opposed to the internal gravity (g) modes, which propagate only throughout the radiative core and hence are beyond observational reach. The p and g propagation zones generally do not overlap, and the region between them is called the evanescent zone. Modes of mixed character, behaving like g modes in the core and p modes in the envelope, bridge the evanescent zone and have substantial amplitudes in both the core and the envelope <cit.>. These modes are extremely useful for obtaining information about the internal rotation of the core. This is possible because rotation induces frequency splittings in the modes, which would otherwise be degenerate in a spherically symmetric star <cit.>. Rotation induces a preferential axis in the star, and if the rotation rate is much smaller than the pulsation frequencies, then the splitting δ_nlm of a mode with radial, angular, and azimuthal wavenumbers n, l, m can be computed as <cit.>

δ_nlm = m β_nl ∫_0^R_* K_nl(r) Ω(r) dr,

where the kernel K_nl(r) and β_nl are functions of the vertical and horizontal material displacement eigenfunctions ξ_r(r) and ξ_h(r) (see Section 3 for more details). Thus, rotation lifts the azimuthal wavenumber degeneracy of the modes. The milestone work of <cit.> on red giants, using data obtained by the Kepler space telescope, took advantage of the detection of rotationally split mixed modes in KIC 8366239 and concluded that the stellar core spins about ten times faster than the envelope. In the same year, <cit.> presented the core rotation of a sample of 300 red giants, establishing that the cores slow down significantly during the last stages of the red giant branch. A number of other studies followed. Most notably, <cit.> determined a core/envelope rotation rate ratio of about five for a star in the lower giant branch observed by Kepler, <cit.> computed the rotation rate ratios of six subgiants and young red giants, <cit.> obtained rotation rates for seven red giants in the secondary clump, and very recently, the work by <cit.> resolved the core rotation better than previous studies. In all these cases, the rotation rate of the core does not match the expectations of current angular momentum theories. Indeed, according to our current understanding of the evolution of angular momentum in stellar interiors, the core is expected to spin up considerably as it contracts in stars at this stage of evolution. <cit.> explicitly showed the inadequacy of current models in reproducing the observed slow rotation of the core in RGB stars, even after magnetic effects were included. It is then clear that a very effective mechanism for angular momentum transport is at work. Internal gravity waves are capable of transferring considerable amounts of angular momentum <cit.>, which provides a suitable explanation for `anomalous' rotation rates in other types of stars, such as those reported by <cit.>, and <cit.>. However, <cit.> showed that this mechanism falls short of explaining the observed rotation rate ratios in stars on the red giant branch. Mixed modes, though, can also transport angular momentum, as demonstrated recently by <cit.>. According to this study, the mixed-mode wave heat flux has an appreciable effect on the mean angular momentum in the inner regions of the star. The methods used to obtain the internal rotation rates in red giant stars have seen a number of developments well worth mentioning here.
The way to proceed, after the mode detection and identification process <cit.>, is usually to develop a seismic model of the star with oscillation frequencies that are as similar as possible to the observed frequencies. This process is computationally expensive, as a large number of evolutionary tracks with different stellar parameters need to be computed. A seismic model provides oscillation kernels that allow the application of inversion techniques (see Sect. <ref>) to determine the approximate rotation rates in different regions of the star. This is the approach taken by <cit.>, <cit.>, and <cit.>. <cit.> developed a powerful method that allows estimating the rotation rates of both core and envelope, without resorting to any seismic model, by considering the relative amounts of `trapping' of a set of mixed modes, which, as they showed, are linearly related to the rotational splittings δ (see Section <ref>). The method relies solely on observed quantities as inputs and, particularly, on an estimate of the asymptotic period spacing ΔΠ_1 that pure high-order g modes would have in the asymptotic regime <cit.>. This latter approach was taken by <cit.> in their sample of seven core He-burning red giants. In that study, a seismic model was obtained for one of the stars in the sample (KIC 7581399) and was used to compute rotation rates through inversions. Then the authors adapted the method of <cit.>, and the rotation rates thus obtained were in very good agreement with the inversions based on the seismic model. After assessing the validity of the Goupil approach in this way, no further seismic modeling was attempted for the other targets, and the corresponding rotation rates reported come solely from the use of the adapted method. <cit.> provided additional insight into the relationship between the relative amount of trapping of a mode (quantified by the parameter ζ) and the observed period spacing Δ P between consecutive mixed modes. The modification of Goupil's formula introduced by <cit.> is exactly the same expression as was found by <cit.> to represent the ratio Δ P/ΔΠ_1. The model-independent method used by <cit.> has been compared with inversions based on seismic models for only two targets: the core helium-burning red giant KIC 7581399 <cit.> and the early red giant KIC 4448777 <cit.>. Although a wider comparison of the method using targets in different evolutionary stages is desirable, we offer comparisons of the model-independent method against inversions for our 13 targets, which share similar evolutionary stages with KIC 4448777.
Additionally, we use the idea proposed recently by <cit.> that provides a way to localize the differential rotation of a red giant (whether in the radiative core or in the convective envelope of the star), provided that the rotation rate of the envelope is known by other means.

§ ROTATION RATE AVERAGES USING THE TRAPPING PARAMETER ζ
The parameter ζ gives an indication of how strongly a given stellar pulsation mode is localized, or `trapped', inside the radiative core. It is defined as the ratio of the mode inertia computed within the g-mode cavity, I_g <cit.>, to the total mode inertia I:

ζ ≡ I_g/I = ∫_r_1^r_2 ρ r^2 [ξ_r^2 + l(l+1) ξ_h^2] dr / ∫_0^R_* ρ r^2 [ξ_r^2 + l(l+1) ξ_h^2] dr.

In the expression above, ρ is the density, r_1 and r_2 are the turning points of the g-mode cavity, l is the angular wavenumber of the mode, ξ_r and ξ_h are the vertical and horizontal material displacement eigenfunctions, respectively, and R_* is the stellar radius <cit.>. Figure <ref> shows the rotational kernels K_nl of two mixed modes with different ζ using a seismic model for KIC 007619745. Modes with ζ close to one are gravity-dominated mixed modes, while modes with ζ close to 0.5 correspond to pressure-dominated mixed modes. The rotational splittings δ_i are linearly related to ζ_i for each mode i, as shown by <cit.>, which we also refer to for further details. The coefficients of this linear relationship are related directly to the rotational averages across the core and the envelope through

δ = (Ω_g/2 - Ω_p) ζ + Ω_p,

where Ω_p represents the average rotation rate in the envelope (approximated by the p-mode cavity), and Ω_g represents the average rotation rate in the radiative core (approximated by the g-mode cavity). Following <cit.>, they are defined as

Ω_g = ∫_0^r_2 K(r) Ω(r) dr / ∫_0^r_2 K(r) dr

for the core, and

Ω_p = ∫_r_2^R_* K(r) Ω(r) dr / ∫_r_2^R_* K(r) dr

for the envelope, where r_2 is the outer turning point of the g resonant cavity and K(r) is the rotational kernel of a mixed mode. Our tests with the rotational profiles considered in Section <ref> show that while the core averages are essentially independent of the particular mixed mode chosen, the envelope averages differ appreciably across modes with ζ ≳ 0.9 (i.e., gravity-dominated modes). Using kernels from mixed modes with ζ ≲ 0.85 results in envelope averages with minimal variability.

§.§ Estimation of the trapping parameter ζ
The expression for ζ given by Eq. <ref> in principle requires knowing the material displacement eigenfunctions ξ_r,h(r) for each mode, which are only available after the computationally expensive process of deriving a seismic model of the star. However, <cit.> used an asymptotic analysis method based on the work of <cit.> to show that ζ can in principle be estimated using observational data alone. The expression for ζ was later refined by <cit.> and <cit.>. In what follows, we briefly recall the method and the main formulae; we refer to the original works for further details. The method consists of finding approximate JWKB solutions for the material displacement eigenfunctions ξ_r,h(r) in the two separate p and g cavities of the star. Matching the solutions in the evanescent zone requires <cit.>

tan θ_p = q tan θ_g,

where q is the coupling constant between the p- and g-mode cavities, and the phases θ_p,g are defined through

θ_g = ∫_r_1^r_2 k_r dr, θ_p = ∫_r_3^r_4 k_r dr,

where r_3 and r_4 are the inner and outer turning points of the p-mode cavity.
According to <cit.>, asymptotic analysis yields

θ_p = (π/Δν)(ν - ν_p), θ_g = π(1/(ν ΔΠ_1) - ϵ_g),

where ν_p is the frequency of the theoretical l=1 pure p modes, which are related to the radial (l=0) modes ν_n,0 through

ν_p = ν_n,0 + (1/2 - d_01) Δν.

In turn, the radial modes can be expressed as a function of the radial order n, involving the parameters ϵ_p, α, and the large frequency separation Δν, as follows <cit.>:

ν_n,0 = [n + ϵ_p + (α/2)(n - n_max)^2] Δν,

where n_max ≡ ν_max/Δν - ϵ_p. The approximate expression for the trapping parameter, denoted here as ζ_as, reads

ζ_as = [1 + (ν^2 ΔΠ_1)/(q Δν) · cos^2 θ_g / cos^2 θ_p]^{-1}.

A total of seven parameters are required here: the coupling constant q, the offsets ϵ_p,g, the mean large frequency separation Δν, the asymptotic period spacing ΔΠ_1, α, and d_01. In practice, the optimal parameters ϵ_p, α, and Δν are determined first by fitting the observed radial modes to Eq. <ref>. Then, the optimal parameters q, ΔΠ_1, ϵ_g, and d_01 that best reproduce the observed l=1 mode frequencies can be found by a downhill simplex method. This requires solving Eq. <ref> for the mode frequencies ν at each search step with a particular (q, ΔΠ_1, ϵ_g, d_01) combination, using, for example, a Newton method to find the roots of the equation. The observed rotational splittings δ are expected to be linearly related to ζ_as, and a linear fit of δ as a function of ζ_as leads to estimates of the average envelope rotation and the average core rotation through Eq. <ref>. <cit.> obtained a result in their search for an expression for the mixed-mode relative period spacings Δ P/ΔΠ_1 that exactly matched the expression for ζ_as (Eq. <ref>) found by <cit.>. Thus, we have

ζ_as = Δ P/ΔΠ_1.

The above is a reflection of the fact that the rotational splittings follow the same distribution as the period spacings, because both are determined by the coupling between pressure and gravity terms. As a bonus, Equation <ref> provides a simple and direct way to estimate ζ from observations without the need for the optimal parameters mentioned above (which can be slightly time consuming computationally), with the exception of ΔΠ_1. Some care must be taken because the period spacing between two consecutive mixed dipole modes, Δ P(n,n+1) = ν_{n+1}^{-1} - ν_n^{-1}, is defined properly at ν = 2/(ν_{n+1}^{-1} + ν_n^{-1}) ≡ ν_{Δ P}. In order to assign a Δ P to each mode ν_n, we therefore interpolate the two adjacent period spacings Δ P(n,n+1) and Δ P(n-1,n) linearly. Similarly, when performing linear fits of δ vs. ζ_as, it is advisable to also include the interpolated rotational splitting at each location ν_{Δ P}, using the two correspondingly adjacent values of δ, to minimize biases; see Fig. <ref>. In Fig. <ref> we show that the trapping parameter ζ_mod as derived from a known seismic model is indeed well approximated either by ζ_as derived using asymptotic analysis, Eq. <ref>, or by the simpler expression given by Eq. <ref>. Assuming that the errors on the frequencies and the splittings are normally distributed, we can sample randomly from them and proceed to compute interpolated splittings and spacings as explained earlier. A linear fit to these points leads to estimates of Ω_g and Ω_p according to Eq. <ref>. By repeating these steps many times, we can obtain the distributions associated with Ω_g and Ω_p and their associated errors.
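The model-independent procedure just described lends itself to a compact implementation. The sketch below is illustrative (the array names are ours, and, for brevity, only the period spacings are interpolated onto the mode frequencies, whereas the full procedure above also adds interpolated splittings at each ν_{Δ P}): given the dipole-mode frequencies ν, their splittings δ, the corresponding 1σ errors, and an estimate of ΔΠ_1, it returns Monte Carlo distributions of Ω_g and Ω_p from the linear relation δ = (Ω_g/2 - Ω_p) ζ + Ω_p.

```python
import numpy as np

def zeta_obs(nu, delta_pi1):
    """Trapping parameter from observations: zeta ~ DeltaP/DeltaPi_1,
    with the spacings linearly interpolated back onto each mode frequency."""
    nu = np.sort(nu)
    P = 1.0 / nu                            # periods (decreasing with frequency)
    dP = P[:-1] - P[1:]                     # period spacings of consecutive modes
    nu_dP = 2.0 / (P[:-1] + P[1:])          # frequency where each spacing is defined
    return np.interp(nu, nu_dP, dP) / delta_pi1

def core_envelope_rates(nu, delta, sig_nu, sig_delta, delta_pi1, ndraw=10000):
    """Monte Carlo linear fits of delta vs zeta; returns (mean, std) of
    Omega_g and Omega_p, since slope = Og/2 - Op and intercept = Op."""
    rng = np.random.default_rng(1)
    og, op = np.empty(ndraw), np.empty(ndraw)
    for k in range(ndraw):
        z = zeta_obs(rng.normal(nu, sig_nu), delta_pi1)
        d = rng.normal(delta, sig_delta)
        slope, inter = np.polyfit(z, d, 1)
        og[k], op[k] = 2.0 * (slope + inter), inter
    return (og.mean(), og.std()), (op.mean(), op.std())
```

The inversion Ω_g = 2 (slope + intercept) follows directly from Eq. <ref> for the linear relation between δ and ζ.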
§.§ Bayesian inference
If a seismic model providing the oscillation eigenfunctions ξ_r,h is available, the trapping parameter can be computed from Eq. <ref>; we denote it now as ζ_mod. With two sets of inputs, that is, the splittings δ_i and the trapping parameters ζ_mod,i, we can set out to perform a Bayesian fit using Eq. <ref> as the model. To accomplish this, we first compute a Gaussian log-likelihood function defined as <cit.>

Λ(Ω_g, Ω_p) = Λ_0 - (1/2) ∑_{i=1}^N [Δ_i(Ω_g, Ω_p)/ϵ_i]^2,

where N is the total number of rotational splittings δ_i from the observations (one for each m-multiplet of mixed modes), Δ_i(Ω_g, Ω_p) are the residuals given as the difference between the observed and the modeled splittings, ϵ_i the corresponding uncertainty, and

Λ_0 = -∑_{i=1}^N ln √(2π) ϵ_i

a constant term. We multiply the likelihood distribution by the prior distributions (uniform or flat in this case, in the ranges Ω_g ∈ [2, 8] μHz and Ω_p ∈ [0, 3] μHz), obtaining a posterior probability density distribution. By marginalizing the two-dimensional posterior into two one-dimensional probability density distributions, we obtain estimates for Ω_p and Ω_g that are the medians of the two one-dimensional distributions. The corresponding error bars are the Bayesian credible intervals computed as explained in <cit.>. This statistical approach may provide results similar to those of a least-squares fit, but it is conceptually very different. One of the main differences is that it is able to incorporate any a priori knowledge on the estimated parameters that we may have, for instance, in the form of the prior distributions described above. An example of this Bayesian fit is shown in Figure <ref> for the star KIC 007619745 (solid orange line), with 1σ error bars overlaid. The results from this method for all targets are included in Table 1 under the `Bayes' heading.
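Because the model of Eq. <ref> has only two parameters, the posterior can be evaluated on a simple grid. The snippet below is a minimal sketch of this evaluation (the grid resolution and helper names are ours; the actual analysis may use a different sampler): it builds the Gaussian likelihood over the flat prior ranges quoted above, marginalizes, and returns the medians with 68.3% credible intervals.

```python
import numpy as np

def bayes_two_rates(zeta, delta, err, n=200):
    """Grid posterior for (Omega_g, Omega_p) under delta=(Og/2-Op)*zeta+Op,
    with the flat priors Og in [2,8] muHz and Op in [0,3] muHz."""
    og = np.linspace(2.0, 8.0, n)                  # prior range for Omega_g
    op = np.linspace(0.0, 3.0, n)                  # prior range for Omega_p
    OG, OP = np.meshgrid(og, op, indexing="ij")
    model = (OG[..., None]/2 - OP[..., None]) * zeta + OP[..., None]
    lnL = -0.5 * np.sum(((delta - model)/err)**2, axis=-1)
    post = np.exp(lnL - lnL.max())                 # unnormalized posterior

    def summary(x, p):                             # median and 68.3% interval
        p = p / p.sum()
        return np.interp([0.1585, 0.5, 0.8415], np.cumsum(p), x)

    return summary(og, post.sum(axis=1)), summary(op, post.sum(axis=0))
```

A call such as `bayes_two_rates(zeta_mod, delta_obs, eps)` returns, for each parameter, the (lower, median, upper) values that populate the `Bayes' column of Table 1.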
§ INVERSION METHODS
In this section we present the methods we used to obtain rotation rate averages, which are all based on the oscillation kernels provided by the seismic models described in Section <ref>. Our treatment is based on <cit.>, who offered an extended presentation of the methods discussed below. The so-called forward problem states that the rotational splittings δ_nlm of an oscillation mode with radial, angular, and azimuthal wavenumbers n, l, m can be computed through Eq. (<ref>). Explicitly, the kernels are computed from the material displacement eigenfunctions via

K_nl = (1/I) [ξ_r^2 + l(l+1) ξ_h^2 - 2 ξ_r ξ_h - ξ_h^2] r^2 ρ,

where I is the mode inertia (see Eq. <ref>). The constant β_nl is given by

β_nl = (1/I) ∫_0^R_* [ξ_r^2 + l(l+1) ξ_h^2 - 2 ξ_r ξ_h - ξ_h^2] r^2 ρ dr.

The inverse problem consists of determining the unknown inversion coefficients c_i(r) satisfying

Ω̅(r) = ∑_{i=1}^M c_i(r) δ_i/(m β_i),

where Ω̅(r) is the predicted internal rotation rate of the star, M is the number of observed splittings, and i denotes the collective indices (n, l, m). Clearly, the inversion coefficients c_i are not determined by Eq. <ref>, which just states a linear relationship between the observed splittings and the predicted rotation profile. The c_i(r) are determined by minimizing the difference between observed and predicted splittings, by minimizing the resulting uncertainties, or by adjusting the shape of the averaging kernels, as discussed below. The approximate rotational profile Ω̅(r) can be expressed in terms of the true profile Ω(r) by means of the averaging kernels 𝒦(r',r), which are related to the kernels K_i(r) through

𝒦(r',r) = ∑_{i=1}^M c_i(r') K_i(r)

and fulfill

Ω̅(r') = ∫_0^R_* 𝒦(r',r) Ω(r) dr.

The averaging kernels 𝒦(r',r) should be localized around r' as much as possible, ideally resembling a delta function δ(r',r). It is usually assumed that the observational errors ϵ_i are uncorrelated (as in, e.g., <cit.> or <cit.>), so that the variance of the predicted rotation rates can be estimated as

σ^2[Ω̅(r)] = ∑_{i=1}^M c_i^2(r) (ϵ_i/β_i)^2.

The expression above accounts for the errors originating from the observations alone; it does not account for the inherent errors of the inversion process itself.

§.§ Two-zone inversion models
To obtain approximate averages of the core and envelope rotation rates, we can make use of simple two-zone models where we assume an inner zone extending from the stellar center to r/R_* = x_c and an outer zone extending from r/R_* = x_c all the way to the stellar surface at r/R_* = 1, both zones rotating uniformly with rates Ω_g and Ω_p, respectively. Our 13 targets happen to be at approximately the same evolutionary stage, and it is therefore not surprising that our seismic models show all their evanescent zones located at approximately the same radial locations (scaled by stellar radius). We have chosen x_c to coincide with the base of the convection zone for each target; see Figure <ref>. We can determine the inversion coefficients c_i associated with each zone by finding the optimal Ω_g and Ω_p that minimize

χ^2 = ∑_{i=1}^M [(δ̅_i - δ_i)/ϵ_i]^2,

where ϵ_i are the observational errors and δ̅_i are the predicted splittings associated with the two-zone rotation profile composed of Ω_g and Ω_p. The averages are determined by enforcing ∂(χ^2)/∂Ω_g,p = 0 after substituting χ^2 using Eqs. <ref> and <ref>. Results from this method, with kernels from the seismic models discussed below, are presented in Table 1 under the `Two-zone' heading.

§.§ Subtractive optimally localized averaging
One of the differences between the subtractive optimally localized averaging (SOLA) method <cit.> and the method described above is that with SOLA we do not minimize χ^2; instead, the method chooses the optimal linear combination of the inversion coefficients c_i such that the averaging kernels 𝒦(r',r) resemble a given target function T(r',r) as closely as possible while keeping the variance σ^2(Ω̅) low. Thus, we minimize

∫_0^R_* [𝒦(r',r) - T(r',r)]^2 dr + μ ∑_{i=1}^M c_i^2(r') ϵ_i^2

at each r', with the additional constraint

∫_0^R_* 𝒦(r',r) dr = 1.

The target function that we have chosen in this study is a Gaussian with unit norm, centered on r = r', with adjustable width s:

T(r',r) = N e^{-[(r'-r)/s]^2},

N being a normalization factor. In addition to the free parameter μ in Eq. <ref>, we can also adjust the shape of the target function T by adjusting the width s. The problem reduces to solving the linear set of M equations (i = 1, …, M) at each radial location r':

∑_{k=1}^M W_ik c_k(r') = ∫_0^R_* K_i(r) T(r',r) dr,

where

W_ik = ∫_0^R_* K_i(r) K_k(r) dr + μ δ_ik ϵ_i^2,

together with the constraint ∑_k c_k(r') = 1, which is implemented via Lagrange multipliers. Given a set of kernels K_i(r) and the two parameters (s, μ), the inversion coefficients c_i(r') are determined by solving the set of M equations given by Eq. <ref>.
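The two-zone fit reduces to an ordinary two-parameter weighted least squares, while the SOLA system above is a small bordered linear system. The following sketch solves the latter (the function name and the bordered-matrix formulation are our own; the kernels `K`, sampled on a radial grid `r`, are assumed to come from a seismic model, with unit integrals so that the constraint reads ∑_k c_k = 1):

```python
import numpy as np

def sola_coefficients(K, r, eps, r0, s, mu):
    """SOLA inversion coefficients c_i(r0) for kernels K (shape M x len(r)),
    a unit-norm Gaussian target of width s centred on r0, trade-off parameter
    mu, and observational errors eps. Implements the constrained minimization
    above via a Lagrange multiplier (bordered linear system)."""
    M = K.shape[0]
    T = np.exp(-((r - r0) / s)**2)
    T /= np.trapz(T, r)                               # unit-norm target function
    W = np.array([[np.trapz(K[i]*K[k], r) for k in range(M)] for i in range(M)])
    W += mu * np.diag(eps**2)
    b = np.array([np.trapz(K[i]*T, r) for i in range(M)])
    # Stationarity + constraint:  [[W, 1], [1^T, 0]] [c, lam] = [b, 1]
    A = np.block([[W, np.ones((M, 1))], [np.ones((1, M)), np.zeros((1, 1))]])
    return np.linalg.solve(A, np.append(b, 1.0))[:M]

# Omega_bar(r0) then follows from Eq. (11): sum_i c_i * delta_i / (m * beta_i)
```

Sweeping `mu` and `s` reproduces the usual SOLA trade-off between the resolution of the averaging kernel and the propagated error of Eq. <ref>.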
We choose a specific seismic model (of KIC007619745, obtained as explained in the next section) as the `true' model. Then, we consider six different rotation profiles and compute the exact rotational splittings in each case via Eq. <ref>. We also compute the `true' rotational averages for both the core and the envelope for each case using Eqs. <ref> and <ref>. For the first set of rotation profiles we adopted the functional form used by <cit.>: we assume that the inner region of the star rotates uniformly with a rate Ω_c from the center and up to 1.5 times r_H, the outer radius of the hydrogen burning shell. Then, from 1.5 r_H and up to the base of the convective zone (r_rcb), the star follows a uniform rotation rate Ω_m. The remainder of the star rotates according to a power-law profile. Thus Ω(r) = Ω_c for r ≤ 1.5 r_H; Ω(r) = Ω_m for 1.5 r_H < r ≤ r_rcb; and Ω(r) = Ω_e (R_*/r)^α for r > r_rcb, where α=log(Ω_m/Ω_e)/log(R_*/r_rcb). The exponent α is chosen to ensure the continuity of Ω(r) at r=r_rcb. This functional form is useful to adjust the location of the differential rotation. By setting Ω_m=Ω_e, all the differential rotation in the star is localized at r_rcb, inside the radiative region, and if Ω_m=Ω_c, the differential rotation is all contained in the convective envelope. We have kept Ω_c and Ω_e fixed at 0.7 μHz and 0.1 μHz, respectively. We consider three different values for Ω_m: 0.7 μHz, 0.4 μHz, and 0.1 μHz. The other set of rotation profiles are Gaussians of different widths s plus a constant term B: Ω(r)=A e^-((r/R_*)/s)^2/2+B. We set A=0.6 μHz and B=0.1 μHz, and consider three different widths s: 0.01, 0.05, and 0.2. See Fig. <ref>. We compute the rotational splittings of the six test profiles using Eq. <ref> for all the l=1 mode frequencies of the true model within ±3 Δν from the frequency at maximum power ν_max. Then we compute the optimal combination of parameters (q, ΔΠ_1, ϵ_g, d_01) that best reproduces these model frequencies in order to estimate ζ_as. As explained in Section <ref>, we compute interpolated splittings δ_Δ P to correspond with each period spacing Δ P as well as interpolated period spacings Δ P_ν to correspond with each splitting δ, see Fig. <ref>. Now we obtain the estimates of the rotation rates using each of the methods explained earlier. The true averages are obtained following Eqs. <ref> and <ref>. In the case of the SOLA method, we computed the predicted rotation rates in two different locations: one at the surface of the star (r/R_*=1), and the other well within the radiative core (r/R_*=10^-3). These SOLA-predicted rotation rates are sensitive to the width s of the target function. To obtain `calibrated' values for s, we therefore proceed first to compute the optimal two-zone model to determine the averages Ω_g and Ω_p. Then we compute the rotational splittings associated with this two-zone model and use them as inputs for a SOLA inversion, adjusting the widths s as necessary to make the corresponding SOLA predictions at r/R_*=10^-3 and r/R_*=1 exactly match Ω_g and Ω_p, respectively. With the widths s determined in this way, we then proceed to compute the SOLA-predicted averages using the splittings arising from the six test profiles. We assume that the splittings all have the same uncorrelated Gaussian error distribution whose width matches the mean 1σ error from the actual measurements. In the case of the inversions, the corresponding uncertainties on the predicted rotation rates are computed via Eq. <ref>. In the case of the rotation rates estimated via linear fits involving ζ_mod,as (see Fig.
<ref>), the resulting uncertainties are computed through a Monte Carlo simulation sampling randomly and repeatedly (10^5 times) from the normal distributions associated with each splitting δ. For the fits of δ and δ_Δ P vs Δ P_ν/ΔΠ_1 and Δ P/ΔΠ_1, respectively, we followed the same Monte Carlo approach, except that we also sampled randomly from the distributions associated with each mode frequency ν, which we set as having the same standard deviation as the actual observed errors. We note that errors on the trapping parameter or systematic errors incurred by the inversions are not considered. The predicted rotation rates are shown in Fig. <ref>. Although some scatter is present, the predictions are in acceptable agreement with the true averages across all methods. The test we just performed represents an ideal situation: the modes are properly identified, the rotational kernels are based on the true seismic model, and the error distributions associated with the frequencies and splittings are centered exactly on the true values. Any deviation from this ideal situation will bring additional scatter to the predicted rotation averages. The averaging kernels 𝒦(r) from the SOLA inversions at the two radial locations mentioned above, although well resolved with respect to each other, are essentially identical to the averaging kernels of the two-zone models at the corresponding zone, which already suggests that no more than two reasonably well-defined averages can be obtained using SOLA inversions.

Recently, <cit.> proposed a method aiming to determine the region in a red giant star where differential rotation is concentrated. They note that the minimum normalized splitting, min(δ/max(δ)), can be used to distinguish rotation profiles with differential rotation localized in the core from those with differential rotation localized mostly in the envelope. This is precisely the motivation of the functional form of the rotation profiles given by Eq. <ref>. For a fixed ratio Ω_c/Ω_e in these profiles, the quantity min(δ/max(δ)) follows a one-to-one correspondence with Ω_m, which determines the location of the differential rotation either in the core or in the envelope. <cit.> were able to resolve three rotation rates in three distinct radial locations of KIC4448777, two of them within the radiative core, which allowed them to conclude that there is a steep gradient in the rotation there, thus localizing the differential rotation of the star inside the radiative core. The target in that study is very similar to the targets in our sample, sharing similar evolutionary stages; therefore it may be possible in principle to resolve the rotation rate in at least two points inside the core. Unfortunately, the set of rotational kernels for all of our targets is not suitable for obtaining more than an averaged value across the core (see Fig. <ref>). The reason for this is not evident a priori, and although it deserves special attention, it is beyond the scope of the present study. We can use the idea by <cit.> to determine whether we can indeed localize the differential rotation. We consider a rotation profile with some preestablished values for the set (Ω_c,Ω_m,Ω_e) and compute its associated splittings. We consider splittings from all l=1 mixed modes within ±3 Δν of ν_max. With this set of splittings as input and considering errors on them matching the actual observed errors, we proceed to find the optimum combination of predicted (Ω̅_c,Ω̅_m,Ω̅_e) that minimizes the difference between the input and the predicted splittings.
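One way to carry out this minimization, sketched below, is a Nelder-Mead fit followed by Monte Carlo resampling of the input splittings. The sensitivity matrix A, the number of modes, and the iteration counts are placeholder assumptions; in the actual test the splittings follow from integrals of the rotational kernels over the three zones.

import numpy as np
from scipy.optimize import minimize
rng = np.random.default_rng(0)

# Placeholder linear model: delta_i = sum_z A[i, z] * Omega_z, with the
# three columns standing in for the zones of the profile in Eq. <ref>.
A = rng.uniform(0.0, 1.0, size=(20, 3))
A /= A.sum(axis=1, keepdims=True)
truth = np.array([0.7, 0.4, 0.1])      # (Omega_c, Omega_m, Omega_e) in muHz
delta_in = A @ truth
eps = np.full(20, 0.01)                # assumed observational errors

def chi2(p, data):
    return np.sum(((data - A @ p) / eps) ** 2)

samples = []
for _ in range(500):                   # Monte Carlo over the splitting errors
    noisy = rng.normal(delta_in, eps)
    res = minimize(chi2, x0=[0.5, 0.5, 0.5], args=(noisy,),
                   method="Nelder-Mead")
    samples.append(res.x)
samples = np.asarray(samples)
print(samples.mean(axis=0), samples.std(axis=0))   # width of the Omega_m column

A wide, flat distribution in the middle column would signal, as in the text, that the input splittings do not constrain Ω̅_m.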
To minimize and obtain estimates of the three parameters we thus use a downhill simplex method. Then we make use once again of a Monte Carlo simulation sampling randomly from the normal distributions associated with the input splittings to obtain distributions for (Ω̅_c,Ω̅_m,Ω̅_e). If enough information is indeed contained in the set of input splittings to constrain Ω̅_m, then this should be reflected in a sharply peaked distribution for it. Figure <ref> shows the results of this experiment. We set Ω_c=0.7 μHz, Ω_e=0.1 μHz, and vary Ω_m∈{0.1, 0.4, 0.7} μHz. The predicted values for Ω̅_c and Ω̅_e are close to the true values, but the probability density distribution for Ω̅_m is wide and practically flat; thus no reliable prediction for Ω_m is possible and the differential rotation cannot be localized properly. Essentially identical distributions of Ω̅_m are obtained regardless of the choice of Ω_m. This is a consequence of the magnitude of errors in the splittings together with the characteristics of the rotational kernels. Only if we artificially reduce the errors to a tenth of their observed values or less can a reasonable value of Ω̅_m be recovered. All of the 13 targets in our sample exhibit this undesirable behavior.

§ SEISMIC MODELING

From the original 19 young red giants studied by <cit.>, we selected only those stars that exhibit rotationally split dipole modes (triplets). In some cases, depending on the stellar inclination angle, some of the l=1 triplets were missing their central m=0 peaks although the split m=±1 components were clearly visible. In these cases we assumed a central m=0 component in the middle of the observed m=±1 frequencies, and we associated with it an error equal to three times the mean frequency error of the l=0 peaks (usually larger than the error on the m=±1 components). This choice is conservative given that the asymmetry present in the full triplets (i.e., those that show central peaks) is usually smaller than this error. Assuming the presence of a central peak with this frequency uncertainty has virtually no effect on the uncertainty on the inferred rotation rates, given that in these cases the splittings are computed simply as half the distance in frequency of the m=±1 components, without involving the hypothetical central component. Equipped with these sets of pulsation frequencies, we set out to find approximate seismic models for each target making use of the MESA stellar evolution suite <cit.> together with the GYRE pulsation code <cit.>. The MESA suite includes the `astero' module, which implements a downhill simplex search method <cit.> to obtain the best stellar parameters given a set of pulsation and spectroscopic data. To reduce computing time during the search, we opted to include only the observed radial (l=0) modes and the dipole (l=1) modes. Including higher l modes in the search, however desirable, would prohibitively increase the time required to find suitable seismic models for all targets. Our approach consisted of a combination of grid and downhill simplex searches. We set up a grid of initial metallicities [Fe/H]_ini, varying from -0.22 to 0.2 with 0.06 steps (using a reference solar metallicity Z_⊙/X_⊙=0.02293 <cit.>) and initial helium content Y_ini varying from 0.23 to 0.3 with 0.005 steps. For each pair ([Fe/H]_ini, Y_ini) we performed a downhill simplex search optimizing for initial mass and overshoot (f_ov, expressed as a fraction of the pressure scale height H_P), in addition to age. We have kept the mixing length parameter α_MLT fixed at its solar calibrated value of 1.9.
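Schematically, the combined grid plus simplex search has the structure below. The routine evaluate_model is a pure placeholder standing in for a full MESA plus GYRE evaluation returning a χ² against the observed l=0 and l=1 frequencies; the starting simplex values are likewise illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def evaluate_model(feh, y, mass, f_ov, age):
    # Dummy objective in place of a real stellar-model computation,
    # kept only so that the sketch runs end to end.
    return ((feh - 0.02) ** 2 + (y - 0.26) ** 2 + (mass - 1.2) ** 2
            + f_ov ** 2 + (age / 1e10 - 0.3) ** 2)

feh_grid = np.arange(-0.22, 0.2001, 0.06)   # [Fe/H]_ini grid
y_grid = np.arange(0.23, 0.3001, 0.005)     # Y_ini grid

best = (np.inf, None)
for feh in feh_grid:
    for y in y_grid:
        # Inner downhill simplex over (mass [Msun], f_ov, age [yr])
        res = minimize(lambda p: evaluate_model(feh, y, *p),
                       x0=[1.3, 0.01, 4.0e9], method="Nelder-Mead")
        if res.fun < best[0]:
            best = (res.fun, (feh, y, *res.x))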
In addition, we used Eddington-gray atmospheres and adopted the <cit.> mixture together with OPAL opacity tables <cit.>. When computing mode frequencies, we performed atmospheric corrections following the method by <cit.> using a calibrated value of the exponent b=4.81 as reported by <cit.>. The exact value of b is not critical, as the observed modes can still be matched one-to-one with model frequencies using slightly different values. Results are summarized in Table <ref>. We performed a hare-and-hounds exercise to test our grid + downhill simplex approach. For this we extracted mode frequencies from the best model of KIC007619745, added some noise, and used them as inputs to our search algorithm. The resulting model was satisfactorily close to the original, especially regarding the rotational kernels derived from it. We present more details in Appendix <ref>.

§ INTERNAL ROTATION RATES

We estimated the internal rotation rates using two-zone models, Bayesian inference, and SOLA inversions (all based on the kernels provided by the seismic models) in addition to the model-independent method of <cit.> as described in Section <ref>. Figure <ref> illustrates the use of the trapping parameter and associated linear fits as applied to the target KIC 007619745 as an example, as well as to the hare-and-hounds exercise explained in Appendix A. We computed two-zone models with the inter-zone boundary in the middle of the evanescent zone for each target. For the SOLA inversions we set the trade-off parameter μ to zero since it made a negligible difference in the results. In addition, for SOLA, we computed rotation rates at two radial locations, one at r/R_*=10^-3, deep in the radiative cores, and the other at the surface, r/R_*=1. The estimates of the rotation rates through SOLA do depend slightly on the width s of the target functions used. To select the optimal value of s, we took the two-zone rotation rates determined earlier and computed their associated splittings via Eq. <ref>. Then, using these synthetic splittings, we iteratively determined the optimal s values required to exactly reproduce the two-zone rotation rates. With the widths s thus determined, we then computed the rotation rates at r/R_*=10^-3 and at r/R_*=1 using the observed rotational splittings. The rotation rates estimated in this way are presented in Table <ref> and in Figs. <ref> and <ref>. We assume that the observational errors on the splittings and on the mode frequencies are normally distributed. Since in general a given mode frequency has two period spacings associated with it and each period spacing is associated with two mode frequencies (and therefore two rotational splittings), we opted to take interpolated values as explained in Section <ref>. We performed linear fits to the resulting set to estimate the average rotation rates of the core Ω_g and the envelope Ω_p according to Eq. <ref>. A straightforward Monte Carlo approach using the observed rotational splittings and frequencies, together with their normally distributed errors, reveals a correlation between Ω_g and Ω_p that is a reflection of the fact that in Eq. <ref> the slope and the intercept are not independent of each other. The rotation rates using this technique are shown in Figs.
<ref> and <ref> in Appendix B as a cloud of small violet circles; the blue box is centered on the mean, and its size corresponds to the 1σ standard deviation of the points in the cloud. Table <ref> summarizes the rotation rates for all 13 targets obtained by all the methods described in this work.

§ DISCUSSION

The rotation rates for most of the targets show good agreement, while a few present some scatter in the envelope averages. The rotation rates agree with each other within 2σ except for one case (see further below). There are minor differences in the ideal case of many exactly measured splittings and exact seismic models, as described in Section <ref>. The differences in this case are attributed to the different nature of the averages as computed from each method; for example, the averages as defined by Eqs. <ref> and <ref> are not exactly the same as the averages from Eq. <ref>, even if we had many kernels K(r) at our disposal to obtain very well localized averaging kernels 𝒦(r',r). However, considering the scatter of the predicted averages, we can still constrain the rotation rates in our targets with an error of about 0.05 μHz except in a few cases. No more than two spatially well-resolved averages could be obtained using inversions and the seismic models, although we had hoped for more given the previous success reported by <cit.>. We have chosen one of the stars in the sample (KIC 007619745) as an illustrative case to explain why only two values could be obtained. In Fig. <ref> we show the averaging kernels 𝒦(r) for the two-zone model (inner and outer zones) and SOLA (at r/R_*=10^-3 and r/R_*=1). According to Eq. <ref>, these are simply the weight functions involved in the average. They are localized (at least enough to resolve the core and the envelope), with the inner-zone kernel and the r/R_*=10^-3 SOLA kernel concentrating in approximately the same region in the stellar core. In the envelope, the outer-zone kernel and the r/R_*=1 SOLA kernel coincide roughly as well, while they oscillate rapidly around zero as the radius is varied in the core regions. These rapid oscillations around zero do not present a problem since we are only interested in the integrated averages. Indeed, as Fig. <ref> shows, the cumulative integrals of the averaging kernels are fortunately insensitive to the rapid oscillations. The outer-zone and the r/R_*=1 SOLA cumulative kernels start growing only at around r/R_*=0.2, where the inner-zone and the r/R_*=10^-3 cumulative kernels have already reached unity. These properties allow us to estimate rotation rate averages of the core separately from the envelope. The inner-zone averaging kernel in the SOLA inversions is basically the same when we use other radial locations well inside the radiative core; that is to say, we obtain the same average. As we approach the hydrogen burning region near r/R_*=0.01, however, there is significant contamination from the outer regions of the star, as is shown in Fig. <ref>. All of our targets exhibit essentially identical behavior. We note, however, that although the two-zone and SOLA kernels coincide for most targets, there were some instances where they differ slightly. In Fig. <ref> we show an example illustrating this point. The two-zone kernels exhibit some leakage that leads to appreciable differences in the rotation averages when compared with the SOLA kernels, for instance.
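The insensitivity of the cumulative integrals to the rapid oscillations is easy to verify numerically. A minimal sketch, with a synthetic localized bump plus an oscillatory contamination standing in for a real averaging kernel:

import numpy as np

def cumulative_integral(avg_kernel, r):
    # Running trapezoidal integral of an averaging kernel; by the
    # unimodularity constraint the final value should be close to 1,
    # and rapidly oscillating contributions largely cancel in the sum.
    steps = 0.5 * (avg_kernel[1:] + avg_kernel[:-1]) * np.diff(r)
    return np.concatenate(([0.0], np.cumsum(steps)))

r = np.linspace(0.0, 1.0, 2000)
bump = np.exp(-((r - 0.05) / 0.02) ** 2)
bump /= np.trapz(bump, r)                              # unit-norm core kernel
wiggle = 50.0 * np.sin(200.0 * r) * np.exp(-((r - 0.5) / 0.2) ** 2)
print(cumulative_integral(bump + wiggle, r)[-1])       # close to 1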
This leakage is the case for targets KIC 011913545 and KIC 012008916. The larger the number of splittings observed, the better the localization properties of the kernels and the lower the error on the predicted rotation rates, see Fig. <ref>. We note that the predicted errors as computed from Eq. <ref> do not include any systematic errors arising from unmodeled physics, which means that the predicted rotation rates may be precise, but not accurate. This certainly constitutes a source for discrepancies in addition to the variability induced by poorly localized averaging kernels, as discussed above. Accidental mode misidentification can potentially lead to an inadequate seismic model. As noted earlier, we are considering only modes with a detection probability higher than 0.99 <cit.>. In principle, we therefore do not expect complications arising from this issue. Since a misidentified dipole mode might reveal itself as an outlier if its rotational splitting differs considerably from its prediction, we closely examined the rotational splittings of the star KIC 012008916 and compared them with the predicted two-zone model splittings. However, no particular mode stands out in a clear way. Still, we could prune out the modes whose splittings differ the most from the two-zone prediction. This exercise modifies the predicted rotation averages and removes the scatter, bringing the predictions from the six methods into good agreement. This is still not satisfactory, however, since we have no means to a priori justify the adequacy of a two-zone rotation profile, or of any other particular profile, for that matter. To conclude our discussion, we would like to point out that the target KIC 12008916 is coincidentally one of the few in the sample that has a very high inclination angle, that is, no central peaks were detected for it. This could be connected to the scatter in the rotation rates in some way.

§ SUMMARY AND CONCLUSION

Building upon the mode-fitting and identification work of <cit.>, who analyzed the power spectra of 19 red giant targets recorded by the Kepler space telescope, we have estimated the average core and envelope rotation rate of 13 of these targets that showed clear rotationally split mixed modes. We employed the model-independent method developed originally by <cit.> and improved later by <cit.> and <cit.>. This model-independent method aims to provide two rotational averages, one for the g-mode cavity and another for the p-mode cavity. To investigate the possibility of obtaining more detailed information about the rotation profile, we have used the traditional approach of searching for optimal seismic models with the aid of the MESA stellar evolution suite and the GYRE oscillation code. We used a total of six different methods to compute the averages, four of them based on the optimal seismic models: SOLA inversions, two-zone inversions, Bayesian inference on two-zone models, and linear fits of the rotational splittings δ as functions of the trapping parameter ζ_mod. We also used the model-independent method as implemented by <cit.> and a variation of it based on the result by <cit.>. Before we applied these methods to our sample of red giants, we took a particular seismic model (that of KIC007619745) as a `true' reference model, and in conjunction with six synthetic rotation profiles, we proceeded to compute the associated rotational splittings. Using these as inputs, we compared the predictions from all the methods against the `true' synthetic rotation profiles.
We found good agreement in general, with some small differences that can be attributed to the slightly different nature of the averages produced by each method. All the rotational kernels of our 13 targets allow the computation of two distinct rotation rates. We used the functional form of the rotation profiles used in the study by <cit.> to test whether we could determine the approximate location in the star where the differential rotation takes place. The results were negative given the magnitude of the observational errors. The information contained in the splittings is insufficient to localize the region with differential rotation. This is perhaps closely related to the fact that no good averaging kernels could be found for intermediate regions of any of the targets. The averages that we could obtain, however, are in good agreement with the average rotation rates of the g- and p-mode cavities as computed from the model-independent method for most of the targets, while a few targets present more scatter in the predicted averages across the different methods. The rotation rates agree with each other within 2σ with only one exception, that of the star KIC 012008916. We identify the poorly localized two-zone averaging kernels as a contributor to the discrepancy. However, many sources of systematic errors exist that cannot be ruled out. There are indeed many simplifying assumptions involved in the stellar evolution code that directly affect the mode kernels, which means that some seismic models may not be adequate. In this respect, the model-independent methods are more reliable since they are essentially free of such systematic errors. At any rate, the results for the rotation of the radiative cores show better agreement across all methods than the rotation of the envelopes, which is related to the fact that for p-dominated mixed modes the trapping parameter is ζ≈0.5, far from 0. The cores in this target selection are spinning about 5 to 10 times faster than their envelopes, which is consistent with previous studies and still calls for a better understanding of the angular momentum redistribution in stars on the red giant branch.

The authors express their gratitude to the referee, whose comments greatly improved the manuscript. S. T. would like to thank Ehsan Moravveji for enlightening discussions and help with the MESA code. S. T. received partial funding from the ERC Advanced Grant ROTANUT (No. 670874). E. C. acknowledges funding by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 312844 (SPACEINN). R. A. G. acknowledges the support from the CNES GOLF and PLATO grants at CEA as well as the ANR (Agence Nationale de la Recherche, France) program IDEE (no. ANR-12-BS05-0008) "Interaction Des Étoiles et des Exoplanètes". We are grateful to the Kepler team and everybody who has contributed to making this mission possible. Funding for the Kepler Mission was provided by NASA's Science Mission Directorate. The research leading to these results has received funding from the Fund for Scientific Research of Flanders (FWO) under project O6260.

§ HARE-AND-HOUNDS TESTS

The adequacy of the method we used to obtain the seismic models can be assessed by a hare-and-hounds exercise. First, we selected the best seismic model of KIC007619745 as our `true' model, then we assumed a known internal rotation profile and derived `observations' by adding some noise to the `true' mode frequencies. The errors are assumed to be the same as in the actual observations.
Then we used our grid + downhill simplex search strategy to obtain a seismic model that best reproduces these `observations'. We have deliberately used a different mixing length parameter (α_MLT=1.7) and a different atmospheric correction power b during the search (b=4.9) compared to the true model (which has α_MLT=1.9, b=4.81), thus further guaranteeing that the best model found is not identical to the true model. The best model properly recovers both the initial metallicity [Fe/H] and the initial helium content Y. The inferences on the internal rotation averages are also in good agreement with the true profile within 2σ. However, the same cannot be said of other stellar parameters such as mass, f_ov, or radius. An échelle diagram comparing the frequencies of the best model with the noise-added true model is shown in Fig. <ref>. Table <ref> lists other stellar parameters from the two models. The results for the internal rotation rates using different methods based on this best model are displayed in Fig. <ref> (bottom right panel). Here we used a `true' rotation profile according to Eq. <ref> with Ω_c=0.7 μHz, Ω_m=0.4 μHz, and Ω_e=0.1 μHz. Compare also with the top and middle panels in Fig. <ref>.
http://arxiv.org/abs/1702.07910v1
{ "authors": [ "Santiago Andrés Triana", "Enrico Corsaro", "Joris De Ridder", "Alfio Bonanno", "Fernando Pérez Hernández", "Rafael García" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170225160031", "title": "Internal rotation of 13 low-mass low-luminosity red giants in the Kepler field" }
Generative Adversarial Active Learning

Jia-Jie Zhu José Bento

December 30, 2023
=====================================================================================================================

We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN). Different from regular active learning, the resulting algorithm adaptively synthesizes training instances for querying to increase learning speed. We generate queries according to the uncertainty principle, but our idea can work with other active learning principles. We report results from various numerical experiments to demonstrate the effectiveness of the proposed approach. In some settings, the proposed algorithm outperforms traditional pool-based approaches. To the best of our knowledge, this is the first active learning work using GAN.

§ INTRODUCTION

One of the most exciting machine learning breakthroughs in recent years is generative adversarial networks (GAN) <cit.>. It trains a generative model by finding the Nash Equilibrium of a two-player adversarial game. Its ability to generate samples in complex domains enables new possibilities for active learners to synthesize training samples on demand, rather than relying on choosing instances to query from a given pool. In the classification setting, given a pool of unlabeled data samples and a fixed labeling budget, active learning algorithms typically choose training samples strategically from a pool to maximize the accuracy of trained classifiers. The goal of these algorithms is to reduce label complexity. Such approaches are called pool-based active learning. This pool-based active learning approach is illustrated in Figure <ref> (a). In a nutshell, we propose to use GANs to synthesize informative training instances that are adapted to the current learner. We then ask human oracles to label these instances. The labeled data is added back to the training set to update the learner. This protocol is executed iteratively until the label budget is reached. This process is shown in Figure <ref> (b). The main contributions of this work are as follows: * To the best of our knowledge, this is the first active learning framework using deep generative models[The appendix of <cit.> mentioned three active learning attempts but did not report numerical results. Our approach is also different from those attempts.]. * While we do not claim our method is always superior to previous active learners in terms of accuracy, in some cases it yields classification performance not achievable even by a fully supervised learning scheme. With enough capacity from the trained generator, our method allows us to have control over the generated instances, which may not be available to previous active learners. * We conduct experiments to compare our active learning approach with self-taught learning[See the supplementary document.]. The results are promising. * This is the first work to report numerical results in active learning synthesis for image classification. See <cit.>. The proposed framework may inspire future GAN applications in active learning. * The proposed approach should not be understood as a pool-based active learning method. Instead, it is active learning by query synthesis. We show that our approach can perform competitively when compared against pool-based methods.
§ RELATED WORK

Our work is related to two different subjects, active learning and deep generative models. Active learning algorithms can be categorized into stream-based, pool-based, and learning by query synthesis. Historically, stream-based and pool-based are the two popular scenarios of active learning <cit.>. Our method falls into the category of query synthesis. Early active learning by query synthesis achieves good results only in simple domains such as X={0,1}^3, see <cit.>. In <cit.>, the authors synthesized learning queries and used human oracles to train a neural network for classifying handwritten characters. However, they reported poor results due to the images generated by the learner being sometimes unrecognizable to the human oracles. We will report results on similar tasks such as differentiating 5 versus 7, showing the advancement of our active learning scheme. Figure <ref> compares image samples generated by the method in <cit.> and our algorithm. The popular SVM_active algorithm from <cit.> is an efficient pool-based active learning scheme for SVM. Their scheme is a special instance of the uncertainty sampling principle, which we also employ. <cit.> reduces the exhaustive scanning through the database employed by SVM_active. Our algorithm shares the same advantage of not needing to test every sample in the database at each iteration of active learning, although we do so by not using a pool at all rather than by a clever trick. <cit.> proposed active transfer learning, which is reminiscent of our experiments in Section <ref>. However, we do not consider collecting new labeled data in target domains of transfer learning. There have been some applications of generative models in semi-supervised learning and active learning. Previously, <cit.> proposed a semi-supervised learning approach to text classification based on generative models. <cit.> applied Gaussian mixture models to active learning. In that work, the generative model served as a classifier. Compared with these approaches, we apply generative models to directly synthesize training data. This is a more challenging task. One building block of our algorithm is the groundbreaking work of the GAN model in <cit.>. Our approach is an application of GAN in active learning. Our approach is also related to <cit.>, which studied GAN in a semi-supervised setting. However, our task is active learning, which is different from the semi-supervised learning they discussed. Our work shares a common strength with the self-taught learning algorithm in <cit.>, as both methods use the unlabeled data to help with the task. In the supplementary document, we compare our algorithm with a self-taught learning algorithm. In a way, the proposed approach can be viewed as an adversarial training procedure <cit.>, where the classifier is iteratively trained on the adversarial examples generated by the algorithm based on solving an optimization problem. <cit.> focuses on adversarial examples that are generated by perturbing the original datasets within a small epsilon-ball, whereas we seek to produce examples using an active learning criterion. To the best of our knowledge, the only previous mention of using GAN for active learning is in the appendix of <cit.>. The authors discussed therein three attempts to reduce the number of queries. In the third attempt, they generated synthetic samples and sorted them by their information content, whereas we adaptively generate new queries by solving an optimization problem.
There were no reported active learning numerical results in that work.

§ BACKGROUND

We briefly introduce some important concepts in active learning and generative adversarial networks.

§.§ Active Learning

In the PAC learning framework <cit.>, label complexity describes the number of labeled instances needed to find a hypothesis with error ϵ. The label complexity of passive supervised learning, i.e. using all the labeled samples as training data, is 𝒪(d/ϵ) <cit.>, where d is the VC dimension of the hypothesis class ℋ. Active learning aims to reduce the label complexity by choosing the most informative instances for querying while attaining low error rate. For example, <cit.> proved that the active learning algorithm from <cit.> has the label complexity bound 𝒪(θ d log 1/ϵ), where θ is defined therein as the disagreement coefficient, thus reducing the theoretical bound for the number of labeled instances needed from passive supervised learning. Theoretically speaking, the asymptotic accuracy of an active learning algorithm cannot exceed that of a supervised learning algorithm. In practice, as we will demonstrate in the experiments, our algorithm may be able to achieve higher accuracy than passive supervised learning in some cases. Stream-based active learning makes decisions on whether to query the streamed-in instances or not. Typical methods include <cit.>. In this work, we will focus on comparing pool-based and query synthesis methods. In pool-based active learning, the learner selects the unlabeled instances from an existing pool based on a certain criterion. Some pool-based algorithms make selections by using clustering techniques or maximizing a diversity measure, e.g. <cit.>. Another commonly used pool-based active learning principle is uncertainty sampling. It amounts to querying the most uncertain instances. For example, algorithms in <cit.> query the labels of the instances that are closest to the decision boundary of the support vector machine. Figure <ref> (a) illustrates this selection process. Other pool-based works include <cit.>, which proposes a Bayesian active learning by disagreement algorithm in the context of learning user preferences, and <cit.>, which studies the submodular nature of sequential active learning schemes. Mathematically, let P be the pool of unlabeled instances, and f=W ϕ(x)+b be the separating hyperplane. ϕ is the feature map induced by the SVM kernel. The SVM_active algorithm in <cit.> chooses a new instance to query by minimizing the distance (or its proxy) to the hyperplane min_x∈ P |Wϕ(x)+ b|. This formulation can be justified by the version space theory in separable cases <cit.> or by other analyses in non-separable cases, e.g., <cit.>. This simple and effective method is widely applied in many studies, e.g., <cit.>. In the query synthesis scenario, an instance x is synthesized instead of being selected from an existing pool. Previous methods tend to work in simple low-dimensional domains <cit.> but fail in more complicated domains such as images <cit.>. Our approach aims to tackle this challenge. For an introduction to active learning, readers are referred to <cit.>.

§.§ Generative Adversarial Networks

Generative adversarial networks (GAN) is a novel generative model invented by <cit.>. It can be viewed as the following two-player minimax game between the generator G and the discriminator D, min_θ_2 max_θ_1 {𝔼_x∼ p_data log D_θ_1(x) + 𝔼_z log (1-D_θ_1(G_θ_2(z)))}, where p_data is the underlying distribution of the real data and z is a uniformly distributed random variable.
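Returning briefly to the pool-based baseline, the SVM_active selection rule given earlier in this section amounts to a one-line computation over the pool. A minimal sketch, using scikit-learn as an assumed tooling choice rather than the implementation used in the text:

import numpy as np
from sklearn.svm import LinearSVC

def svm_active_query(clf, pool, batch=10):
    # Choose the pool points closest to the hyperplane: argmin |W phi(x) + b|
    dist = np.abs(clf.decision_function(pool))
    return np.argsort(dist)[:batch]      # indices of the next batch to label

# Usage sketch: X_lab, y_lab hold the current labeled set, pool the candidates
#   clf = LinearSVC().fit(X_lab, y_lab)
#   idx = svm_active_query(clf, pool)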
In the minimax game above, D and G each have their own sets of parameters, θ_1 and θ_2, respectively. By solving this game, a generator G is obtained. In the ideal scenario, given random input z, we have G(z)∼ p_data. However, finding this Nash Equilibrium is a difficult problem in practice. There is no theoretical guarantee for finding the Nash Equilibrium due to the non-convexity of D and G. A gradient descent type algorithm is typically used for solving this optimization problem. A few variants of GAN have been proposed since <cit.>. The authors of <cit.> use GAN with deep convolutional neural network structures for applications in computer vision (DCGAN). DCGAN yields good results and is relatively stable. Conditional GAN <cit.> is another variant of GAN in which the generator and discriminator can be conditioned on other variables, e.g., the labels of images. Such generators can be controlled to generate samples from a certain category. <cit.> proposed infoGAN, which learns disentangled representations using unsupervised learning. A few updated GAN models have been proposed. <cit.> proposed a few improved techniques for training GAN. Another potentially important improvement of GAN, Wasserstein GAN, has been proposed by <cit.>. The authors proposed an alternative way of training GAN that can avoid instabilities such as mode collapse, supported by theoretical analysis. They also proposed a metric to evaluate the quality of the generation, which may be useful for future GAN studies. Possible applications of Wasserstein GAN to our active learning framework are left for future work. The invention of GAN triggered various novel applications. <cit.> performed image inpainting using GAN. <cit.> proposed iGAN to turn sketches into realistic images. <cit.> applied GAN to single image super-resolution. <cit.> proposed CycleGAN for image-to-image translation using only unpaired training data. Our study is the first GAN application to active learning. For a comprehensive review of GAN, readers are referred to <cit.>.

§ GENERATIVE ADVERSARIAL ACTIVE LEARNING

In this section, we introduce our active learning approach, which we call Generative Adversarial Active Learning (GAAL). It combines query synthesis with the uncertainty sampling principle. The intuition of our approach is to generate instances which the current learner is uncertain about, i.e. applying the uncertainty sampling principle. One particular choice for the loss function is based on the uncertainty sampling principle explained in Section <ref>. In the setting of a classifier with the decision function f(x) = Wϕ(x) + b, the (proxy) distance to the decision boundary is |Wϕ(x) + b|. Similar to the intuition of (<ref>), given a trained generator function G, we formulate the active learning synthesis as the following optimization problem min_z |W^⊤ϕ(G(z)) + b|, where z is the latent variable and G is obtained by the GAN algorithm. Intuitively, minimizing this loss will push the generated samples toward the decision boundary. Figure <ref> (b) illustrates this idea. Compared with the pool-based active learning in Figure <ref> (a), our hope is that it may be able to generate more informative instances than those available in the existing pool. The solution(s) to this optimization problem, G(z), after being labeled, will be used as new training data for the next iteration. We outline our procedure in Algorithm <ref>. It is possible to use a state-of-the-art classifier, such as a convolutional neural network.
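A minimal sketch of this query-synthesis step is given below. For illustration the generator is taken to be a linear map and ϕ the identity, so the gradient of |w^T G(z) + b| is available in closed form; in the actual algorithm G is the trained GAN generator and the gradient is obtained by back-propagation. All names and hyperparameter values here are our own assumptions.

import numpy as np
rng = np.random.default_rng(0)

def gaal_query(G, w, b, dim_z, steps=200, lr=0.05, beta=0.9, restarts=5):
    # Minimize |w^T G(z) + b| over z via momentum gradient descent,
    # with random restarts to escape poor local minima.
    best_z, best_f = None, np.inf
    for _ in range(restarts):
        z = rng.uniform(-1.0, 1.0, dim_z)
        v = np.zeros(dim_z)
        for _ in range(steps):
            f = w @ (G @ z) + b
            grad = np.sign(f) * (G.T @ w)   # d|f|/dz for linear G, phi = id
            v = beta * v - lr * grad
            z = z + v
        val = abs(w @ (G @ z) + b)
        if val < best_f:
            best_f, best_z = val, z
    return G @ best_z                        # synthesized instance to label

G = rng.normal(size=(10, 4)); w = rng.normal(size=10); b = 0.3
x_query = gaal_query(G, w, b, dim_z=4)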
To use such a classifier, we can replace the feature map ϕ in Equation <ref> with a feed-forward function of a convolutional neural network. In that case, the linear SVM will become the output layer of the network. In step 4 of Algorithm <ref>, one may also use a different active learning criterion. We emphasize that our contribution is the general framework rather than a specific criterion. In training GAN, we follow the procedure detailed in <cit.>. Optimization problem (<ref>) is non-convex with possibly many local minima. One typically aims at finding good local minima rather than the global minimum. We use a gradient descent algorithm with momentum to solve this problem. We also periodically restart the gradient descent to find other solutions. The gradients of D and G are calculated using back-propagation. Alternatively, we can incorporate diversity into our active learning principle. Some active learning approaches rely on maximizing diversity measures, such as the Shannon Entropy. In our case, we can include in the objective function (<ref>) a diversity measure such as the one proposed in <cit.>, thus increasing the diversity of samples. The evaluation of this alternative approach is left for future work.

§ EXPERIMENTS

We perform active learning experiments using the proposed approach. We also compare our approach to self-taught learning, a type of transfer learning method, in the supplementary document. The GAN implementation used in our experiment is a modification of a publicly available TensorFlow DCGAN implementation[https://github.com/carpedm20/DCGAN-tensorflow]. The network architecture of DCGAN is described in <cit.>. In our experiments, we focus on binary image classification, although this can be generalized to multiple classes using a one-vs-one or one-vs-all scheme <cit.>. Recent advancements in GAN study show it could potentially model language as well <cit.>, although those results are preliminary at the current stage. We use a linear SVM as our classifier of choice (with parameter γ=0.001). Even though classifiers with much higher accuracy (e.g., convolutional neural networks) can be used, our purpose is not to achieve high absolute accuracy but to study the relative performance between different active learning schemes. The following schemes are implemented and compared in our experiments. * The proposed generative adversarial active learning (GAAL) algorithm as in Algorithm <ref>. * Using regular GAN to generate training data. We refer to this as simple GAN. * The SVM_active algorithm from <cit.>. * Passive random sampling, which randomly samples instances from the unlabeled pool. * Passive supervised learning, i.e., using all the samples in the pool to train the classifier. * Self-taught learning from <cit.>. We initialize the training set with 50 randomly selected samples. The algorithms proceed in batches of 10 queries at a time. We use two datasets for training, MNIST and CIFAR-10. The MNIST dataset is a well-known image classification dataset with 60000 training samples. The training set and the test set follow the same distribution. We perform the binary classification experiment distinguishing 5 and 7, which is reminiscent of <cit.>. The training set of the CIFAR-10 dataset consists of 50000 32×32 color images from 10 categories. One might speculate about the possibility of distinguishing cats and dogs by training on cat-like dogs or dog-like cats. In practice, our human labelers failed to confidently identify most of the generated cat and dog images. Figure <ref> (Top) shows generated samples.
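For reference, the overall protocol just described can be sketched as the loop below. This is our paraphrase of Algorithm <ref> under stated assumptions, not the paper's code: train_gan, fit_svm, oracle, and synthesize are placeholder callables supplied by the user (the last one playing the role of the gaal_query sketch above).

import numpy as np

def gaal_loop(train_gan, fit_svm, oracle, synthesize,
              X, y, pool, budget, batch=10):
    # Train the GAN once on the unlabeled pool, then repeatedly synthesize
    # a batch of queries near the current boundary, have the oracle label
    # them, and retrain the classifier on the enlarged labeled set.
    G = train_gan(pool)
    while len(y) < budget:
        w, b = fit_svm(X, y)                 # current decision function
        queries = [synthesize(G, w, b) for _ in range(batch)]
        labels = [oracle(q) for q in queries]
        X = np.vstack([X, np.array(queries)])
        y = np.concatenate([y, np.array(labels)])
    return fit_svm(X, y)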
Regarding such animal images, the authors of <cit.> reported attempts to generate high-resolution animal pictures, but with the wrong anatomy. We leave this task for future studies, possibly with improved techniques such as <cit.>. For this reason, we perform binary classification on the automobile and horse categories. It is relatively easy for human labelers to identify car and horse body shapes. Typical generated samples, which are presented to the human labelers, are shown in Figure <ref>.

§.§ Active Learning

We use all the images of 5 and 7 from the MNIST training set as our unlabeled pool to train the generator G. Different from traditional active learning, we do not select new samples from the pool after initialization. Instead, we apply Algorithm <ref> to generate a training query. For the discriminator D and the generator G, we follow the same network architecture as <cit.>. We use a linear SVM as our classifier, although other classifiers can be used, e.g. <cit.>. We first test the trained classifier on a test set that follows a distribution different from the training set. One purpose is to demonstrate the adaptive capability of the GAAL algorithm. In addition, because the MNIST test set and training set follow the same distribution, pool-based active learning methods have a natural advantage over active learning by synthesis since they use real images drawn from the exact same distribution as the test set. It is thus reasonable to test on sets that follow different, albeit similar, distributions. To this end, we use the USPS dataset from <cit.> as the test set with standard preprocessing. In reality, such settings are very common, e.g., training autonomous drivers on simulated datasets and testing on real vehicles; training on handwritten characters and recognizing writings in different styles, etc. This test setting is related to transfer learning, where the distribution of the training domain P_tr(x,y) is different from that of the target domain P_te(x,y). Figure <ref> (Top) shows the results of our first experiment. When using the full training set, with 11000 training images, the fully supervised accuracy is 70.44%. The accuracy of the random sampling scheme steadily approaches that level. On the other hand, GAAL is able to achieve accuracies better than that of the fully supervised scheme. With 350 training samples, its accuracy improves over supervised learning and even SVM_active, an aggressive active learner <cit.>. Obviously, the accuracy of both SVM_active and random sampling will eventually converge to the fully supervised learning accuracy. Note that for the SVM_active algorithm, an exhaustive scan through the training pool is not always practical. In such cases, the common practice is to restrict the selection pool to a small random subset of the original data. For completeness, we also perform the experiments in the settings where the training and test set follow the same distribution. Figure <ref> (Bottom) shows these results. Somewhat surprisingly, in Figure <ref> (Left), GAAL's classification accuracy starts to drop after about 100 samples. One possible explanation is that GAAL may be generating points close to the boundary that are also close to each other. This is more likely to happen if the boundary does not change much from one active learning cycle to the next. This probably happens because the test and train sets are identically distributed and simple, like MNIST. Therefore, after a while, the training set may be filled with many similar points, biasing the classifier and hurting accuracy.
In contrast, because of the finite and discrete nature of pools in the given datasets, a pool-based approach, such as SVM_active, most likely explores points near the boundary that are substantially different. It is also forced to explore further points once these close-by points have already been selected. In a sense, the strength of GAAL might in fact be hurting its classification accuracy. We believe this effect is not so pronounced when the test and train sets are different because the boundary changes more significantly from one cycle to the next, which in turn induces some diversity in the generated samples. To reach competitive accuracy when the training and test set follow the same distribution, we might incorporate a diversity term into our objective function in GAAL. We will address this in future work. In the CIFAR-10 dataset, our human labeler noticed higher chances of bad generated samples, e.g., instances that fail to represent either of the categories. This may be because of the significantly higher dimensionality compared to the MNIST dataset. In such cases, we asked the labelers to only label the samples they could distinguish. We speculate that recent improvements to GAN, e.g., <cit.>, may help mitigate this issue if the cause is the instability of GANs. Addressing this limitation will be left to future studies.

§.§ Balancing exploitation and exploration

The proposed Algorithm <ref> can be understood as an exploitation method, i.e., it focuses on generating the most informative training data based on the current decision boundary. On the other hand, it is often desirable for the algorithm to explore new areas of the data. To achieve this, we modify Algorithm <ref> by simply executing random sampling every once in a while. This is a common practice in active learning <cit.>. We use the same experiment setup as in the previous section. Figure <ref> shows the results of this mixed scheme. A mixed scheme is able to achieve better performance than either using GAAL or random sampling alone. Therefore, it implies that GAAL, as an exploitation scheme, performs even better in combination with an exploration scheme. A detailed analysis of such mixed schemes will be an interesting future topic.

§ DISCUSSION AND FUTURE WORK

In this work, we proposed a new active learning approach, GAAL, that employs generative adversarial networks. One possible explanation for GAAL not outperforming the pool-based approaches in some settings is that, in traditional pool-based learning, the algorithm will eventually exhaust all the points near the decision boundary and thus start exploring further points. However, this is not the case in GAAL, as it can always synthesize points near the boundary. This may in turn cause the generation of similar samples, thus reducing the effectiveness. We suspect that incorporating a diversity measure into the GAAL framework, as discussed at the end of Section <ref>, might mitigate this issue. This issue is related to the exploitation and exploration trade-off, which we explored briefly. The results of this work are enough to inspire future studies of deep generative models in active learning. However, much work remains in establishing theoretical analysis and reaching better performance. We also suspect that GAAL can be modified to generate adversarial examples such as in <cit.>. The comparison of GAAL with transfer learning (see the supplementary document) is particularly interesting and worth further investigation.
We also plan to investigate the possibility of using Wasserstein GAN in our framework.

§ APPENDIX: COMPARISON WITH SELF-TAUGHT LEARNING

One common strength of GAAL and self-taught learning <cit.> is that both utilize the unlabeled data to help with the classification task. As we have seen in the MNIST experiment, our GAAL algorithm seems to be able to adapt to the learner. The results in this experiment are preliminary and not meant to be taken as comprehensive evaluations. In this case, the training domain is mostly unlabeled. Thus the method we compare with is self-taught learning <cit.>. Similar to the algorithm in <cit.>, we use a Reconstruction Independent Component Analysis (RICA) model with a convolutional layer and a pooling layer. RICA is similar to a sparse autoencoder. Following standard self-taught learning procedures, we first train on the unlabeled pool dataset. Then we use the trained RICA as a feature extractor to obtain higher level features from randomly selected MNIST images. We then concatenate the features with the original image data to train the classifier. Finally, we test the trained classifier on the USPS dataset. We test training sizes of 250, 500, 1000, and 5000. The reason for doing so is that deep learning type techniques are known to thrive on an abundance of training data. They may perform relatively poorly with a limited amount of training data, as in the active learning scenarios. We run the experiments 100 times and average the results. We use the same setting for the GAAL algorithm as in Section <ref>. The classifier we use is a linear SVM. Table <ref> shows the classification accuracies of GAAL, self-taught learning, and baseline supervised learning on raw image data. Using GAAL on the raw features achieves a higher accuracy than that of self-taught learning with the same training size of 250. In fact, self-taught learning performs worse than regular supervised learning when labeled data is scarce. This is possible for an autoencoder type algorithm. However, when we increase the training size, self-taught learning starts to perform better. With 5000 training samples, self-taught learning outperforms GAAL with 250 training samples. Based on these results, we suspect that GAAL also has the potential to be used as a self-taught algorithm[At this stage, self-taught learning has the advantage that it can utilize any unlabeled training data, i.e., not necessarily from the categories of interest. GAAL does not have this feature yet.]. In practice, the GAAL algorithm can also be applied on top of the features extracted by a self-taught algorithm. A comprehensive comparison with a more advanced self-taught learning method with a deeper architecture is beyond the scope of this work.
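A schematic of this self-taught pipeline is given below. PCA is used only as a stand-in for the convolutional RICA feature learner, and scikit-learn is an assumed tooling choice; neither appears in the text.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def self_taught_baseline(pool_unlabeled, X_train, y_train, X_test, n_feat=64):
    # Learn features on the unlabeled pool, then concatenate them with the
    # raw pixels before training the linear SVM, as in the appendix above.
    learner = PCA(n_components=n_feat).fit(pool_unlabeled)
    aug = lambda X: np.hstack([X, learner.transform(X)])
    clf = LinearSVC().fit(aug(X_train), y_train)
    return clf.predict(aug(X_test))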
http://arxiv.org/abs/1702.07956v5
{ "authors": [ "Jia-Jie Zhu", "José Bento" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170225224520", "title": "Generative Adversarial Active Learning" }
http://arxiv.org/abs/1702.08158v3
{ "authors": [ "Chih-Hao Fu", "Yi-Jian Du", "Rijun Huang", "Bo Feng" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170227062320", "title": "Expansion of Einstein-Yang-Mills Amplitude" }
cmurphy@quark.phy.bnl.gov Department of Physics, Brookhaven National Laboratory, Upton, N.Y., 11973, U.S.A. The apparent breakdown of unitarity in low order perturbation theory is often used to place bounds on the parameters of a theory. In this work we give an algorithm for approximately computing the next-to-leading order (NLO) perturbativity bounds on the quartic couplings of a renormalizable theory whose scalar sector is ϕ^4-like. By this we mean theories where either there are no cubic scalar interactions, or the cubic couplings are related to the quartic couplings through spontaneous symmetry breaking. The quantity that tests where perturbation theory breaks down itself can be written as a perturbative series, and having the NLO terms allows one to test how well the series converges. We also present a simple example to illustrate the effect of considering these bounds at different orders in perturbation theory. For example, there is a noticeable difference in the viable parameter space when the square of the NLO piece is included versus when it is not.

NLO Perturbativity Bounds on Quartic Couplings in Renormalizable Theories with ϕ^4-like Scalar Sectors Christopher W. Murphy December 30, 2023
======================================================================================================

§ INTRODUCTION

The unitarity of the S-matrix is frequently used to place theoretical constraints on the parameters of a theory. On the one hand, if a 2 → 2 scattering amplitude grows with energy, as is typically the case in non-renormalizable theories, then the condition S^† S = 1 will inevitably be violated at some energy scale. This energy scale then sets an upper limit on where new degrees-of-freedom must appear to unitarize the scattering amplitude. While this is interesting in its own right, it is not the focus of this work. On the other hand, in renormalizable theories, there are relations among the parameters of the theory that cancel this growth with energy <cit.>. Nevertheless the same procedure can be used to place “perturbativity bounds” on the parameters of a renormalizable theory. Most famously, a leading order (LO) analysis of this type yielded an upper bound on the mass of the Higgs boson in the Standard Model (SM), m_h ≲ 1 TeV <cit.>. If some combination of parameters in a renormalizable theory is too large, the amplitude will appear to be non-unitary at some order in perturbation theory. Of course these theories are unitary. The more accurate statement is that perturbation theory is breaking down for this choice of parameters. This method has subsequently been improved and refined. See for instance <cit.> for studies of beyond leading order effects in the SM, including both renormalization group (RG) improvement and higher fixed order contributions. If some choice of parameters violates the perturbativity bound at tree level it may be that perturbativity is restored at one-loop, or perhaps it may be that those parameters are not viable at any (low) order in perturbation theory. The preceding discussion calls attention to the following fact. The quantity that tests where perturbation theory breaks down itself can be written as a perturbative series.
If higher order terms are known, one is then able to test how well the series converges. To date not much work has been done along these lines in theories beyond the SM. The one-loop corrections necessary to compute perturbativity bounds at next-to-leading order (NLO) in the Two-Higgs Doublet Model (2HDM) with a softly broken 𝐙_2 symmetry were computed in Ref. <cit.>. Ref. <cit.> then performed a comprehensive analysis of the viable parameter space in the 2HDM using these NLO perturbativity bounds. Prior to this work, an ansatz inspired by SM results was used to estimate the higher order corrections in the 2HDM <cit.>. Not only did the work of Refs. <cit.> resolve the ambiguity of how to implement perturbativity bounds at NLO in the 2HDM, but they also revealed where the dominant NLO contribution comes from in the SM and the 2HDM. In this work we first derive a formula for the functional form of the partial wave matrix for high energy 2 → 2 scalar scattering in a general renormalizable theory. This allows us to construct an algorithm for approximately computing the NLO perturbativity bounds on the scalar quartic couplings of a general renormalizable theory, (approximately) generalizing the result of <cit.>. Ref. <cit.> showed that this approximation dominates the NLO contribution to the perturbativity bounds in the SM and certain special cases of the 2HDM. Ref. <cit.> went further and showed that this is generally a good approximation in the 2HDM with a (softly broken) 𝐙_2 symmetry. Since the approximation is based on the pattern found in <cit.>, we expect that this approximation should generally be a good estimate of the full NLO contribution in theories whose scalar sector is ϕ^4-like. By this we mean theories where either there are no cubic scalar interactions, or the cubic couplings are related to the quartic couplings through spontaneous symmetry breaking. This approximate NLO result only requires knowledge of the leading order matrix of partial wave amplitudes, and the one-loop scalar contribution to the beta function of each quartic coupling that is to be bounded. The advantage of this approximation is its simplicity, as both of the required quantities are relatively easy to determine. To further ease the calculation of these NLO bounds a package implementing this algorithm, 𝙽𝙻𝙾𝚄𝚗𝚒𝚝𝚊𝚛𝚒𝚝𝚢𝙱𝚘𝚞𝚗𝚍𝚜, is available at <https://github.com/christopher-w-murphy/NLOUnitarityBounds> in both Mathematica and Jupyter Notebook formats. A (likely incomplete) list of other models for which all of the results necessary to implement this algorithm are already known is: the Manohar-Wise Model <cit.>, its extension to include an additional color singlet, SU(2)_L doublet <cit.>, and the Left-Right Symmetric Model <cit.>. As we will show, the natural interpretation of the couplings appearing in the NLO partial wave amplitudes is as RG-improved couplings evaluated at a scale much larger than the typical scales of the theory. As such, these corrections are naturally useful in analyses at high scales. Examples of this include high scale flavor alignment <cit.>, effects of custodial symmetry breaking at high energies <cit.>, or simply investigating the validity of a model up to a high energy scale <cit.>. The rest of this paper starts with a brief review of perturbative unitarity in Sec. <ref>. Following that is a derivation of our main results in Sec. <ref>. We then present an example to illustrate the effect of considering perturbativity bounds at different orders in perturbation theory in Sec. <ref>.
The example is simple, but it serves to highlight how the viable parameter space in a model can change from order-to-order in perturbation theory. In particular, there is a noticeable difference in the viable parameter space when the square of the NLO piece is included versus when it is not. The implementation of this model is included in the example notebook associated with the 𝙽𝙻𝙾𝚄𝚗𝚒𝚝𝚊𝚛𝚒𝚝𝚢𝙱𝚘𝚞𝚗𝚍𝚜 package. Lastly, we discuss our findings in Sec. <ref>. § BRIEF REVIEW OF PERTURBATIVE UNITARITYIn this section we give a brief review of perturbative unitarity. The S-matrix is unitary, S^† S = 1. This condition can be translated into a relation among the various partial wave amplitudes of a given theory, see e.g. <cit.> Im(𝐚_j^2 → 2) = (𝐚_j^2 → 2)^†𝐚_j^2 → 2 + ∑_n > 2(𝐚_j^2 → n)^†𝐚_j^2 → n, where 𝐚_j^2 → 2 is computed in a basis such that it is diagonal, i.e. it is the eigenvalues of 𝐚_j^2 → 2 that satisfy the relation (<ref>). Note also that an integral over the n-body phase space for each term in the sum in the rightmost term in (<ref>) is left understood. This is nothing but the equation for an n-sphere of radius 1/2 centered at Re(𝐚_j^2 → 2) = |𝐚_j^2 → n| = 0, Im(𝐚_j^2 → 2) = 1/2. The imaginary part of 𝐚_j^2 → 2 is fixed by the real part of 𝐚_j^2 → 2 and the 2 → n partial wave amplitudes. Focusing on the eigenvalues, a_j^2 → 2, of the matrix 𝐚_j^2 → 2 we have 2 Im(a_j^2 → 2)_∓ = 1 ∓√(1 - 4 𝒜_j^2), 𝒜_j^2 = [Re(a_j^2 → 2)]^2 + ∑_n > 2 |a_j^2 → n|^2, with the - (+) solution corresponding to the case when the imaginary part of a_j^2 → 2 is less than (greater than) one-half. Assuming a perturbative expansion is viable, the first few orders of Eq. (<ref>) take the form Im(a_j^(0))_- = 0, Im(a_j^(1))_- = (a_j^(0))^2, Im(a_j^(2))_- = 2 a_j^(0) Re(a_j^(1)) + |a_j^2 → 3, (0)|^2, where the superscript (ℓ) is the perturbative order and we have dropped the superscript 2 → 2. For the + solution we instead have Im(a_j^(0))_+ = 1, Im(a_j^(ℓ > 0))_+ = - Im(a_j^(ℓ))_-. Assuming a valid perturbative expansion, the + solution corresponds to a scattering amplitude whose tree level imaginary part is Im(ℳ^(0)) = 16 π. Since this is not a frequently encountered scenario we will not consider this possibility further in this work, drop the subscript -, and set an upper limit of Im(a_j) ≤ 1/2 in the process. Perturbative unitarity bounds are inequalities derived from Eq. (<ref>). From (<ref>) it is clear that the conditions 0 ≤ Im(a_j^2 → 2) ≤ 1/2 and 0 ≤𝒜_j^2 ≤ 1/4 yield equivalent bounds when the imaginary part of a_j^2 → 2 is kept in its exact form. However this is not the case if Im(a_j^2 → 2) is expanded to a finite order in perturbation theory, as in (<ref>). For example, to two-loop order, the corresponding bounds are always weaker than those from 0 ≤𝒜_j^2 ≤ 1/4. Explicitly, to leading order, 0 ≤ Im(a_j^2 → 2) ≈ (a_j^(0))^2 ≤ 1/2 whereas 0 ≤𝒜_j^2 ≈ (a_j^(0))^2 ≤ 1/4. Starting at three-loops in Im(a_j), NNLO for 𝒜_j, the relative strength of the two bounds is no longer fixed. Once the approximate NLO contributions to the eigenvalues of the partial wave matrix are known, which are derived in Sec.
<ref>, perturbative unitarity bounds can be obtained by evaluating one or more of the following LO: (a_0^(0))^2 ≤ 1/4, NLO: 0 ≤ (a_0^(0))^2 + 2 a_0^(0) Re(a_0^(1)) ≤ 1/4, NLO+: [a_0^(0) + Re(a_0^(1))]^2 ≤ 1/4. If perturbation theory is valid it is expected that the bounds obtained from the upper limit (≤ 1/4) will be similar in all three cases since the LO eigenvalue is non-trivial and the NLO(+) piece should represent a small correction, but it is important to test, and confirm or deny, if this is actually the case. The bound obtained from the lower limit (0 ≤) originates from Ref. <cit.>, Eq. (75) in particular.[This perturbativity bound was subsequently investigated in the 2HDM in Ref. <cit.> where it was called R_1 (not R_1^').] In contrast to the ≤ 1/4 bound, the 0 ≤ bound only becomes non-trivial at NLO, and can only be violated when perturbation theory breaks down. While it is expected that the parameter space of a given theory will be more constrained by the NLO perturbativity bounds because of the additional 0 ≤ handle, using this bound goes somewhat against the spirit of the introduction in that there is no similar higher-order term to compare with. In fact, the NLO+ expression shows how the apparent 0 ≤ violation of unitarity is resolved at higher orders in perturbation theory. (Note that the NLO+ expression contains some NNLO terms, but of course is not the full NNLO expression.) To bring things full circle, at NNLO similar apparent 0 ≤ violations of unitarity can occur from the interference between the tree level and two-loop 2 → 2 amplitudes and/or the tree level and one-loop 2 → 3 amplitudes. However, there is no reason to expect the higher-order versions of the 0 ≤ bound to be similar to the NLO version, as was the case for the ≤ 1/4 bound, since there is little and/or no overlap between the potentially problematic terms. § GENERIC PARTIAL WAVE AMPLITUDES FOR HIGH ENERGY SCALAR SCATTERING §.§ Elements of the Partial Wave Matrix Consider a potential of the form V = (1/2) m_α^2 ϕ_αϕ_α + κ_αβγϕ_αϕ_βϕ_γ + λ_αβγδϕ_αϕ_βϕ_γϕ_δ, where the subscript Greek letters are flavor indices. The 2 → 2 scattering of high energy (E ≫ |κ_αβγ|, m_α) scalars can schematically be written as ℳ_i → f = (Z_ϕ^1/2)^4 𝐕[ϕ^4], where (Z_ϕ^1/2)^4 is the product of the four external wavefunction renormalization factors, and 𝐕[ϕ^4] is the four-point function. At tree level the unrenormalized four-point function is simply a linear combination of quartic couplings, 𝐕[ϕ^4]_tree = - c_m λ_m^B = - c_m (λ_m + δλ_m), where δλ_m is the counterterm associated with the renormalized coupling λ_m, c_m is the numeric coefficient associated with a given λ_m, and a subscript Roman letter is shorthand for a set of Greek letters, e.g. m = αβγδ. At high energies the diagrams involving cubic couplings generally do not contribute to Eq. (<ref>). For s-channel processes this is manifestly true simply because we assume E ≫ |κ_αβγ|. To see this is also the case for t- and u-channel processes one should retain the full mass dependence of the diagram until after the partial wave amplitude is computed, at which point it is safe to take the high energy limit. An exception to this occurs when all of the particles, internal and external, in a diagram are massless. Such a situation would occur if the theory contains a neutral, CP-even Goldstone boson since it would then be possible to write down vertices with an odd number of this Goldstone boson.
In this case there is a physical divergence in the forward region, analogous to Rutherford scattering, and this method as it is currently implemented is not applicable. However it is worth pointing out that a careful study of the analytic structure of amplitudes involving the t-channel exchange of massless particles showed that the naïve sum rules for processes such as W_L^+ W_L^- scattering are still correct <cit.>, so perhaps there may still be a way to extract perturbativity bounds from such amplitudes. In any case, henceforth we will neglect the trilinear couplings κ_αβγ. A generic one-loop diagram in D = 4 - 2ε dimensions with four external and two internal scalars, which is the only topology of all scalar, 1PI diagrams that persists in the high energy limit, takes the following form (λ_m λ_n/16π^2)(1/ε + 2 - ln((- p^2 - i 0_+)/μ^2)). As is typically done, the scale μ has been introduced to keep the quartic couplings dimensionless. The sum of all such diagrams leads to the four-point function at the one-loop level 𝐕[ϕ^4] = - c_m λ_m + (λ_m λ_n/16π^2)[(σ_mn + τ_mn + υ_mn)(1/ε + 2 + ln(s/μ^2)) + i πσ_mn - τ_mnln(- t/s) - υ_mnln(- u/s)], where s, t, and u are the usual Mandelstam variables, and the branch cut in the logarithm yields ln(- p^2 - i 0_+) →ln(p^2) - i π for p^2 > 0. The one-loop correction is bilinear in the various couplings with the (model dependent) coefficients σ, τ, and υ parameterizing the s-, t-, and u-channel contributions, respectively. At one-loop the scalar wavefunction renormalization is finite, which allows the beta function of a quartic coupling to be defined simply as β_λ_i = μ∂𝐕[ϕ^4]/∂μ, where the particular four-point function entering the definition of the beta function is such that 𝐕[ϕ^4]_tree = - λ_i. From this we see that c_m β_λ_m = (σ_mn + τ_mn + υ_mn) λ_m λ_n/8π^2, which determines the purely scalar contribution to the one-loop beta function. After renormalization the scattering amplitude takes the form ℳ_i → f = - c_m λ_m - c_m (δλ_m)_fin. - c_m λ_n (δZ_mn)_fin. + c_m β_λ_m[1 + ln(√(s)/μ)] - (λ_m λ_n/16π^2)[- i πσ_mn + τ_mnln(- t/s) + υ_mnln(- u/s)]. The counterterms cancel the divergences arising from the one-loop diagrams, and generically contain finite parts that contribute to the scattering amplitude. The diagonal and off-diagonal wavefunction renormalization constants are δZ_mm and δZ_mn, respectively, both of which are real (except when using a complex-mass scheme).[In the SM there is a relation between the tree level 2 → 3 partial wave amplitudes and the wavefunction renormalization contribution to the one-loop 2 → 2 partial wave amplitudes <cit.>, which in our notation takes the form |a_j^2 → 3, (0)|^2 = δZ_mm |a_j^2 → 2, (0)|^2 (in the SM). This causes a partial cancellation in the NLO expression for 𝒜_j that makes our approximation, discussed after (<ref>), a more accurate representation of 𝒜_j, again at least in the SM. It would be interesting to see if (a generalization of) this relation is true in other theories.] The off-diagonal terms generally involve a different linear combination of the tree level amplitudes than the diagonal terms. The wavefunction counterterms depend on the particular process under consideration, whereas the δλ_m are process independent. The full energy dependence of ℳ can be subsumed into a running coupling using standard renormalization group methods μ∂λ̅_m(μ)/∂μ = β_λ_m, λ̅_m(μ_match.) = (λ_m)_phys., with (λ_m)_phys. being the combination of physical parameters that defines λ_m at the scale μ_match..
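To make the role of the running coupling λ̅_m(μ) concrete, the following is a minimal numerical sketch of integrating a one-loop RGE of the form μ ∂λ̅/∂μ = β_λ. It is written in Python (assuming only numpy is available), and it uses the single-field ϕ^4 beta function β_λ = 3λ^2/(16π^2) purely as a stand-in; in a realistic model one would substitute the full set of coupled beta functions and a vector of couplings.

import numpy as np

def beta_lambda(lam):
    # One-loop beta function of the quartic coupling in single-field
    # phi^4 theory (V = lam*phi^4/4!), used here only as a placeholder.
    return 3.0 * lam**2 / (16.0 * np.pi**2)

def run_coupling(lam_match, mu_match, mu, n_steps=1000):
    # Integrate d(lam)/d(ln mu) = beta(lam) from mu_match up to mu with RK4.
    t0, t1 = np.log(mu_match), np.log(mu)
    h = (t1 - t0) / n_steps
    lam = lam_match
    for _ in range(n_steps):
        k1 = beta_lambda(lam)
        k2 = beta_lambda(lam + 0.5 * h * k1)
        k3 = beta_lambda(lam + 0.5 * h * k2)
        k4 = beta_lambda(lam + h * k3)
        lam += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return lam

# Example: run lam(mu_match) = 1 from mu_match = 1 TeV up to mu = 100 TeV.
print(run_coupling(1.0, 1.0e3, 1.0e5))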
The scattering amplitude now takes the form ℳ_i → f = - c_m λ̅_m - c_m (δλ_m)_fin. - c_m λ̅_n (δZ_mn)_fin. + c_m β_λ̅_m + (λ̅_m λ̅_n/16π^2)[i πσ_mn - τ_mnln(- t/s) - υ_mnln(- u/s)]. Retaining only the first term on the right-hand side of Eq. (<ref>) corresponds to the leading-log (LL) approximation of the NLO contribution. An element of 𝐚_j^2 → 2 is related to the scattering amplitude for the process ℳ_i → f as follows (𝐚_j^2 → 2)_i, f = (1/(16 π s)) ∫_-s^0 dt ℳ_i → f(s, t) P_j(1 + 2t/s), where P_j are the Legendre polynomials. In the high energy limit ℳ is independent of s and t at leading order, allowing us to concentrate on the j = 0 case. Plugging (<ref>) into (<ref>) and simplifying the result using (<ref>) we find 16 π (𝐚_0^2 → 2)_i,f = - c_m λ̅_m + (3/2) c_m β_λ̅_m + ((i π - 1)/(16 π^2)) σ_mnλ̅_m λ̅_n - c_m (δλ_m)_fin. - c_m λ̅_n (δZ_mn)_fin.. Eq. (<ref>) is the exact expression for the functional form of the partial wave matrix of 2 → 2 scattering amplitudes in a general renormalizable theory in the high energy limit and assuming the scalar quartic couplings are parametrically larger than the gauge and Yukawa couplings. To the best of our knowledge this result has not previously been given in the literature. §.§ Eigenvalues of the Partial Wave Matrix In this subsection we give an approximate formula for the NLO corrections to the eigenvalues of the partial wave matrix for high energy scalar scattering. One only needs to know the leading order partial wave matrix and the (scalar one-loop contribution to the) beta functions of the theory under consideration to make use of this approximation. Recall the well known formula for the NLO perturbations of the eigenvalues of an eigensystem that is known completely at LO a_0^(1) = x⃗^⊤_(0)·𝐚_0^(1)·x⃗_(0). It says that the NLO eigenvalues depend only on the NLO correction to the matrix and the LO eigenvectors, which are determined from the LO matrix. Here the exact and leading order eigensystems respectively are 𝐚_0 ·x⃗ = a_0 x⃗, 𝐚_0^(0)·x⃗_(0) = a_0^(0) x⃗_(0), and each object appearing in the first line of (<ref>) is assumed to have an expansion 𝐚_0 = 𝐚_0^(0) + 𝐚_0^(1) + …, x⃗ = x⃗_(0) + x⃗_(1) + …, a_0 = a_0^(0) + a_0^(1) + …. The second term on the right-hand side of Eq. (<ref>) (the β-function contribution) to an element of 𝐚_0 is universal. Therefore the β-function contributions to the eigenvalues of 𝐚_0 can simply be determined using Eq. (<ref>). All one has to do to obtain 𝐚_0,β^(1) from 𝐚_0^(0) is replace each quartic coupling λ that appears in 𝐚_0^(0) with -(3/2)β_λ. Then using the LO eigenvectors x⃗_(0), which are also determined from 𝐚_0^(0), we find the β contribution to a_0^(1): a_0, β^(1) = x⃗_(0)^⊤·𝐚_0, β^(1)·x⃗_(0), 𝐚_0,β^(1) = - (3/2) 𝐚_0^(0)|_λ_m →β_λ_m. Unlike the β-function contribution, the third term on the right-hand side of Eq. (<ref>) (the σ contribution) is not universal. Without doing an explicit calculation, the σ term contribution to the partial wave matrix is not known. However we are interested in knowing the eigenvalues of the matrix rather than the entire matrix itself. Knowing that the imaginary parts of the NLO elements of 𝐚_0 come exclusively from the σ contribution, we can use the second line of (<ref>), i.e. the fact that the theory is unitary, to determine the σ contribution to the NLO eigenvalues. Note that the numerical factor in front of σ_mn in Eq. (<ref>) is complex. This is convenient as it allows us to use the unitarity of the theory to also determine the real part of the sigma contribution to the NLO eigenvalues.
This yields the final result for the sigma contribution a_0, σ^(1) = (i - 1/π)(a_0^(0))^2. To proceed further without doing an explicit calculation we must drop the finite pieces of the counterterms. Thus combining the β contribution and the σ contribution we arrive at our result for the approximate NLO contribution to the eigenvalues of the partial wave matrix a_0^(1) = a_0,β^(1) + a_0,σ^(1). § A SIMPLE EXAMPLEIn this section we present a simple example to sketch the effect of using perturbativity bounds at different orders in perturbation theory. Consider as a toy model the 2HDM with a U(2) symmetry, instead of 𝐙_2, to prevent tree level flavor changing neutral currents <cit.>. The scalar sector of this model has only two unique quartic couplings, and the U(2) symmetry preserves this relation along the RG-flow. The potential is V = (λ_1/2)[(H_1^† H_1) + (H_2^† H_2) - v^2/2]^2 + (λ_1 - λ_3)[(H_1^† H_2)(H_2^† H_1) - (H_1^† H_1)(H_2^† H_2)], where H_1,2 are the two Higgs doublets.[One could simplify the potential in this case by defining λ_4 ≡λ_1 - λ_3, but we stick with λ_1 and λ_3 as they are what is used in <cit.>.] We will take λ_1,3 to be free parameters for the purposes of this demonstration, and neglect the electroweak vev, v ≈ 246 GeV, as it is not important at high energies. Figure <ref> shows the bounds on λ_1 and λ_3 that result when the various perturbativity conditions, (<ref>), are applied. The parameter space shaded blue, orange, green, red, and purple is ruled out by perturbativity bounds from eigenvalues whose LO terms are proportional to 4λ_1 + λ_3, -λ_1 + 2λ_3, 2λ_1 - λ_3, λ_1, and λ_3, respectively. There is a noticeable difference in the viable parameter space when the square of the NLO piece is included versus when it is not, as shown in panels (b) and (c) of Fig. <ref>. This is reminiscent of a result in Ref. <cit.>, where the bounds obtained on the coefficients of some dimension-6 operators in the SMEFT depend noticeably on whether or not the square of the dimension-6 amplitude is included in the calculation of the cross section under consideration. This parameter space could be further constrained theoretically by requiring the potential to be bounded from below, etc. Figure <ref> shows in blue, green, and red the viable parameter space for two individual eigenvalues based on the LO, NLO, and NLO+ perturbativity conditions, respectively. The contours in panels (a) and (b) of Fig. <ref> are colored blue and orange, respectively, to match the color coding of the eigenvalues in Fig. <ref>. The parameter space viable at NLO (green) in Fig. <ref> is determined using both the upper and lower limits on a_0, which, in contrast, are illustrated separately in Fig. <ref> in panels (b) and (d), respectively. The different perturbativity criteria yield different viable parameter spaces. Perhaps the simplest way to test the convergence of a_0 as a perturbative series is to see which choices of parameters are considered viable by multiple perturbativity criteria. § DISCUSSIONIn this section we discuss computing the leading order partial wave matrix and one-loop scalar beta function, and the validity of our approximation, before summarizing our algorithm for finding the NLO perturbativity bounds in a given theory. One of the advantages of this approach is its simplicity in that it only relies on knowledge of the leading order partial wave matrix and one-loop scalar beta function.
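As a concrete illustration of how the three criteria of the previous section are applied in a scan like the one just described, here is a minimal Python sketch; the numerical inputs at the end are hypothetical and merely exercise the logic, and the eigenvalue pair would in practice come from the model under study.

def viable(a0_lo, re_a0_nlo):
    # Check the LO, NLO, and NLO+ perturbativity criteria for a single
    # eigenvalue, given the tree-level value a_0^(0) (real) and the real
    # part of its one-loop correction Re a_0^(1).
    lo = a0_lo**2 <= 0.25
    nlo = 0.0 <= a0_lo**2 + 2.0 * a0_lo * re_a0_nlo <= 0.25
    nlo_plus = (a0_lo + re_a0_nlo)**2 <= 0.25
    return {"LO": lo, "NLO": nlo, "NLO+": nlo_plus}

# Hypothetical point: LO and NLO+ are satisfied, but the NLO lower
# limit (0 <=) is violated, signaling a breakdown of perturbation theory.
print(viable(0.3, -0.2))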
In particular, the renormalization group equations in a general quantum field theory to two-loop order have been known for some time, and software exists to derive them in a specific model <cit.>. In the extreme energy limit s ≫ M_i^2, which is sufficient to consider if one is only interested in bounding dimensionless couplings, the method of Ginzburg and Ivanov can be used to simplify the computation of 𝐚_0^(0) <cit.>. Though they originally considered the 2HDM, their argument can be used for any renormalizable theory where the matter fields have definite representations under the gauge group of the theory. In particular, at high energies masses and mixings are not important, and 𝐚_0^(0) will be block diagonal with blocks of definite representations under the gauge and global symmetries of the theory <cit.>. In gauge theories, scattering amplitudes involving longitudinally polarized vector bosons should be included when determining the bounds on the quartic couplings of the theory. Their inclusion can be greatly simplified through the use of the Goldstone Boson Equivalence Theorem, see e.g. <cit.>. Lastly, we note that in gauge theories (with spontaneous symmetry breaking) there is the additional complication of preserving gauge invariance. In a mixed MS/on-shell scheme, gauge independence can be spoiled unless tadpole diagrams are properly taken into account. This can be done by generalizing the SM results of Fleischer and Jegerlehner <cit.> to the theory under consideration. In fact this has been done in the 2HDM <cit.> and the SM Effective Field Theory (SMEFT) <cit.>. In considering how well our approximation does at capturing the exact NLO results, theories can be placed into one of three categories, broadly speaking. The first class of theories is those without spontaneous symmetry breaking. Here there exists a renormalization scheme such that the approximation is actually exact. This is simply because all of the counterterms for the quartic couplings are independent of the mass counterterms, and the finite parts of the quartic couplings can be chosen to cancel any potential wavefunction counterterm contribution. The second class of theories is those with spontaneous symmetry breaking and a ϕ^4-like scalar sector. This class includes the SM and the 2HDM, theories for which this approximation is known to be a good one <cit.>. In multi-ϕ^4 theory with spontaneous symmetry breaking the wavefunction renormalization is finite at one-loop, but more importantly it is parametrically identical to the 1PI one-loop contribution. Thus, in ϕ^4-like theories there is no qualitative difference between the different NLO contributions, while there is a qualitative difference between the LO and NLO contributions. This suggests that at the very least including some NLO contributions should give a good qualitative estimate of the full NLO contribution, which may be sufficient if one wishes to test the convergence of a_0 in perturbation theory through NLO. Finally, the third class of theories is those with spontaneous symmetry breaking and scalar cubic interactions whose cubic couplings are not related to the quartic couplings. In contrast with the first two classes of theories, it is not necessarily the case that this approximation will give a good description of the exact NLO result. The reason is that the wavefunction renormalization contribution is parametrically different from both the LO and the 1PI NLO contributions to the partial wave amplitude.
On the other hand, if the cubic coupling of interest is much smaller than the internal masses in the wavefunction renormalization diagrams, the approximation may still be a good one as the theory is approaching the ϕ^4 limit in this scenario. Examples of theories of this type include extending the SM with a real scalar singlet without a 𝐙_2 symmetry, or with scalars that are color singlets and SU(2)_L triplets. This includes the original Georgi-Machacek model, but not its generalizations <cit.>. Additionally, the 2HDM plus a pseudoscalar singlet falls into this class, see for example <cit.> and the references therein. Another thing to keep in mind in our approach to finding the NLO perturbativity bounds is that the quartic couplings entering into the partial-wave amplitudes are necessarily running couplings evaluated at an energy scale much larger than the other scales in the problem. A LO analysis is simpler in the sense that expressions could simply involve ordinary quartic couplings. One could always RG-improve the LO bounds by replacing the ordinary couplings with running couplings with no penalty. This is the leading-log approximation. However, to do the opposite, replace running couplings in the NLO expressions with ordinary couplings, would be a further approximation. To summarize, in this work we first derive a formula for the functional form of the partial wave matrix for high energy 2 → 2 scalar scattering in a general renormalizable theory. This allows us to construct an algorithm for approximately computing the NLO perturbativity bounds on the scalar quartic couplings of a general renormalizable theory. We expect the approximation to be a good estimate of the full NLO contribution in theories whose scalar sector is ϕ^4-like. By this we mean theories where either there are no cubic scalar interactions, or the cubic couplings are related to the quartic couplings through spontaneous symmetry breaking. The approximate NLO result only requires knowledge of the leading order matrix of partial wave amplitudes, and the one-loop scalar contribution to the beta function of each quartic coupling that is to be bounded, both of which are quantities that are relatively easy to determine. The algorithm for finding the eigenvalues of the partial wave matrix at approximate next-to-leading order is as follows: * Given the leading order partial wave matrix, 𝐚_0^(0), find its eigenvalues, a_0^(0), and eigenvectors, x⃗_(0). This may need to be done numerically. * The β-function contribution to the NLO eigenvalues, a_0,β^(1), is given by Eq. (<ref>) with 𝐚_0,β^(1) = - (3/2) 𝐚_0^(0)|_λ_m →β_λ_m. * Using the unitarity of the theory, the σ-contribution to the NLO eigenvalues is a_0,σ^(1) = (i - 1/π)(a_0^(0))^2. * The NLO contribution is given by the sum of the two pieces, a_0^(1) = a_0,β^(1) + a_0,σ^(1). (A minimal numerical sketch of these four steps is given below.) In addition, Mathematica and Jupyter Notebook packages implementing this algorithm, 𝙽𝙻𝙾𝚄𝚗𝚒𝚝𝚊𝚛𝚒𝚝𝚢𝙱𝚘𝚞𝚗𝚍𝚜, are available at <https://github.com/christopher-w-murphy/NLOUnitarityBounds>. The natural interpretation of the couplings appearing in the NLO partial wave amplitudes is as RG-improved couplings evaluated at a scale much larger than the typical scales of the theory. Therefore these corrections are naturally useful in analyses at high scales. Once a_0^(1) is known perturbativity bounds on quartic couplings can be obtained at either leading order or next-to-leading order in theories with ϕ^4-like scalar sectors, enabling a test of the convergence of a_0 as a perturbative series among other studies.
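The four steps above translate almost directly into code. The following minimal Python sketch (assuming numpy) is an illustration, not a substitute for the 𝙽𝙻𝙾𝚄𝚗𝚒𝚝𝚊𝚛𝚒𝚝𝚢𝙱𝚘𝚞𝚗𝚍𝚜 package itself; since the substitution λ_m →β_λ_m is model dependent, the substituted matrix must be supplied by the user, and the 2 × 2 matrices at the end are made up purely to exercise the routine.

import numpy as np

def nlo_eigenvalues(a0, a0_sub):
    # a0     : real symmetric LO partial wave matrix a_0^(0)
    # a0_sub : a_0^(0) with every quartic coupling lambda_m replaced by
    #          its one-loop beta function beta_{lambda_m} (user supplied)
    evals, evecs = np.linalg.eigh(a0)           # step 1: LO eigensystem
    a0_beta = -1.5 * a0_sub                     # step 2: beta contribution
    a1_beta = np.einsum('im,ij,jm->m', evecs, a0_beta, evecs)
    a1_sigma = (1j - 1.0 / np.pi) * evals**2    # step 3: sigma contribution
    return evals, a1_beta + a1_sigma            # step 4: a_0^(1)

A0 = np.array([[-0.30, -0.10],
               [-0.10, -0.20]])
A0_sub = np.array([[0.05, 0.01],
                   [0.01, 0.03]])
print(nlo_eigenvalues(A0, A0_sub))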
This is highlighted in the example we present. Specifically, there is a noticeable difference in the viable parameter space when the square of the NLO piece is included versus when it is not. We thank D. Chowdhury, H. Davoudiasl, S. Dawson, O. Eberhardt, B. Grinstein, and P. Uttayarat for useful discussions. This work was supported by the US DOE under grant contract DE-SC0012704. Gunion:1990kf J. F. Gunion, H. E. Haber, and J. Wudka, “Sum rules for Higgs bosons,” http://dx.doi.org/10.1103/PhysRevD.43.904Phys. Rev. D43 (1991) 904–912. Dicus:1992vj D. A. Dicus and V. S. Mathur, “Upper bounds on the values of masses in unified gauge theories,” http://dx.doi.org/10.1103/PhysRevD.7.3111Phys. Rev. D7 (1973) 3111–3114. Lee:1977yc B. W. Lee, C. Quigg, and H. B. Thacker, “The Strength of Weak Interactions at Very High-Energies and the Higgs Boson Mass,” http://dx.doi.org/10.1103/PhysRevLett.38.883Phys. Rev. Lett. 38 (1977) 883–885. Lee:1977eg B. W. Lee, C. Quigg, and H. B. Thacker, “Weak Interactions at Very High-Energies: The Role of the Higgs Boson Mass,” http://dx.doi.org/10.1103/PhysRevD.16.1519Phys. Rev. D16 (1977) 1519. Marciano:1989ns W. J. Marciano, G. Valencia, and S. Willenbrock, “Renormalization Group Improved Unitarity Bounds on the Higgs Boson and Top Quark Masses,” http://dx.doi.org/10.1103/PhysRevD.40.1725Phys. Rev. D40 (1989) 1725. Dawson:1988va S. Dawson and S. Willenbrock, “Unitarity Constraints on Heavy Higgs Bosons,” http://dx.doi.org/10.1103/PhysRevLett.62.1232Phys. Rev. Lett. 62 (1989) 1232. Durand:1992wb L. Durand, J. M. Johnson, and J. L. Lopez, “Perturbative unitarity and high-energy W(L)+-, Z(L), H scattering. One loop corrections and the Higgs boson coupling,” http://dx.doi.org/10.1103/PhysRevD.45.3112Phys. Rev. D45 (1992) 3112–3127. Durand:1993vn L. Durand, P. N. Maher, and K. Riesselmann, “Two loop unitarity constraints on the Higgs boson coupling,” http://dx.doi.org/10.1103/PhysRevD.48.1084Phys. Rev. D48 (1993) 1084–1096, http://arxiv.org/abs/hep-ph/9303234arXiv:hep-ph/9303234 [hep-ph]. Grinstein:2015rtl B. Grinstein, C. W. Murphy, and P. Uttayarat, “One-loop corrections to the perturbative unitarity bounds in the CP-conserving two-Higgs doublet model with a softly broken ℤ_2 symmetry,” http://dx.doi.org/10.1007/JHEP06(2016)070JHEP 06 (2016) 070, http://arxiv.org/abs/1512.04567arXiv:1512.04567 [hep-ph]. Cacchio:2016qyh V. Cacchio, D. Chowdhury, O. Eberhardt, and C. W. Murphy, “Next-to-leading order unitarity fits in Two-Higgs-Doublet models with soft ℤ_2 breaking,” http://dx.doi.org/10.1007/JHEP11(2016)026JHEP 11 (2016) 026, http://arxiv.org/abs/1609.01290arXiv:1609.01290 [hep-ph]. Baglio:2014nea J. Baglio, O. Eberhardt, U. Nierste, and M. Wiebusch, “Benchmarks for Higgs Pair Production and Heavy Higgs boson Searches in the Two-Higgs-Doublet Model of Type II,” http://dx.doi.org/10.1103/PhysRevD.90.015008Phys. Rev. D90 no. 1, (2014) 015008, http://arxiv.org/abs/1403.1264arXiv:1403.1264 [hep-ph]. Chowdhury:2015yja D. Chowdhury and O. Eberhardt, “Global fits of the two-loop renormalized Two-Higgs-Doublet model with soft Z_2 breaking,” http://dx.doi.org/10.1007/JHEP11(2015)052JHEP 11 (2015) 052, http://arxiv.org/abs/1503.08216arXiv:1503.08216 [hep-ph]. He:2013tla X.-G. He, H. Phoon, Y. Tang, and G. Valencia, “Unitarity and vacuum stability constraints on the couplings of color octet scalars,” http://dx.doi.org/10.1007/JHEP05(2013)026JHEP 05 (2013) 026, http://arxiv.org/abs/1303.4848arXiv:1303.4848 [hep-ph]. Cheng:2016tlc L. Cheng and G.
Valencia, “Two Higgs doublet models augmented by a scalar colour octet,” http://dx.doi.org/10.1007/JHEP09(2016)079JHEP 09 (2016) 079, http://arxiv.org/abs/1606.01298arXiv:1606.01298 [hep-ph]. Cheng:2017tbn L. Cheng and G. Valencia, “Validity of two Higgs doublet models with a scalar color octet up to a high energy scale,” http://arxiv.org/abs/1703.03445arXiv:1703.03445 [hep-ph]. Chakrabortty:2016wkl J. Chakrabortty, J. Gluza, T. Jelinski, and T. Srivastava, “Theoretical constraints on masses of heavy particles in Left-Right Symmetric Models,” http://dx.doi.org/10.1016/j.physletb.2016.05.092Phys. Lett. B759 (2016) 361–368, http://arxiv.org/abs/1604.06987arXiv:1604.06987 [hep-ph]. Gori:2017qwg S. Gori, H. E. Haber, and E. Santos, “High scale flavor alignment in two-Higgs doublet models and its phenomenology,” http://arxiv.org/abs/1703.05873arXiv:1703.05873 [hep-ph]. Blasi:2017xmc S. Blasi, S. De Curtis, and K. Yagyu, “Effects of custodial symmetry breaking in the Georgi-Machacek model at high energies,” http://arxiv.org/abs/1704.08512arXiv:1704.08512 [hep-ph]. Grinstein:2013npa B. Grinstein and P. Uttayarat, “Carving Out Parameter Space in Type-II Two Higgs Doublets Model,” http://dx.doi.org/10.1007/JHEP09(2013)110, 10.1007/JHEP06(2013)094JHEP 06 (2013) 094, http://arxiv.org/abs/1304.0028arXiv:1304.0028 [hep-ph]. [Erratum: JHEP09,110(2013)]. Chakrabarty:2014aya N. Chakrabarty, U. K. Dey, and B. Mukhopadhyaya, “High-scale validity of a two-Higgs doublet scenario: a study including LHC data,” http://dx.doi.org/10.1007/JHEP12(2014)166JHEP 12 (2014) 166, http://arxiv.org/abs/1407.2145arXiv:1407.2145 [hep-ph]. Ferreira:2015rha P. Ferreira, H. E. Haber, and E. Santos, “Preserving the validity of the Two-Higgs Doublet Model up to the Planck scale,” http://dx.doi.org/10.1103/PhysRevD.92.033003, 10.1103/PhysRevD.94.059903Phys. Rev. D92 (2015) 033003, http://arxiv.org/abs/1505.04001arXiv:1505.04001 [hep-ph]. [Erratum: Phys. Rev.D94,no.5,059903(2016)]. Itzykson:1980rh C. Itzykson and J. B. Zuber, Quantum Field Theory. International Series In Pure and Applied Physics. McGraw-Hill, New York, 1980. <http://dx.doi.org/10.1063/1.2916419>. Kilian:2014zja W. Kilian, T. Ohl, J. Reuter, and M. Sekulla, “High-Energy Vector Boson Scattering after the Higgs Discovery,” http://dx.doi.org/10.1103/PhysRevD.91.096007Phys. Rev. D91 (2015) 096007, http://arxiv.org/abs/1408.6207arXiv:1408.6207 [hep-ph]. Bellazzini:2014waa B. Bellazzini, L. Martucci, and R. Torre, “Symmetries, Sum Rules and Constraints on Effective Field Theories,” http://dx.doi.org/10.1007/JHEP09(2014)100JHEP 09 (2014) 100, http://arxiv.org/abs/1405.2960arXiv:1405.2960 [hep-th]. Ivanov:2006yq I. P. Ivanov, “Minkowski space structure of the Higgs potential in 2HDM,” http://dx.doi.org/10.1103/PhysRevD.76.039902, 10.1103/PhysRevD.75.035001Phys. Rev. D75 (2007) 035001, http://arxiv.org/abs/hep-ph/0609018arXiv:hep-ph/0609018 [hep-ph]. [Erratum: Phys. Rev.D76,039902(2007)]. Ferreira:2009wh P. M. Ferreira, H. E. Haber, and J. P. Silva, “Generalized CP symmetries and special regions of parameter space in the two-Higgs-doublet model,” http://dx.doi.org/10.1103/PhysRevD.79.116004Phys. Rev. D79 (2009) 116004, http://arxiv.org/abs/0902.1537arXiv:0902.1537 [hep-ph]. Branco:2011iw G. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher, and J. P. Silva, “Theory and phenomenology of two-Higgs-doublet models,” http://dx.doi.org/10.1016/j.physrep.2012.02.002Phys. Rept. 516 (2012) 1–102, http://arxiv.org/abs/1106.0034arXiv:1106.0034 [hep-ph]. Falkowski:2016cxu A. 
Falkowski, M. Gonzalez-Alonso, A. Greljo, D. Marzocca, and M. Son, “Anomalous Triple Gauge Couplings in the Effective Field Theory Approach at the LHC,” http://dx.doi.org/10.1007/JHEP02(2017)115JHEP 02 (2017) 115, http://arxiv.org/abs/1609.06312arXiv:1609.06312 [hep-ph]. Lyonnet:2013dna F. Lyonnet, I. Schienbein, F. Staub, and A. Wingerter, “PyR@TE: Renormalization Group Equations for General Gauge Theories,” http://dx.doi.org/10.1016/j.cpc.2013.12.002Comput. Phys. Commun. 185 (2014) 1130–1152, http://arxiv.org/abs/1309.7030arXiv:1309.7030 [hep-ph]. Ginzburg:2005dt I. F. Ginzburg and I. P. Ivanov, “Tree-level unitarity constraints in the most general 2HDM,” http://dx.doi.org/10.1103/PhysRevD.72.115010Phys. Rev. D72 (2005) 115010, http://arxiv.org/abs/hep-ph/0508020arXiv:hep-ph/0508020 [hep-ph]. Bagger:1989fc J. Bagger and C. Schmidt, “Equivalence Theorem Redux,” http://dx.doi.org/10.1103/PhysRevD.41.264Phys. Rev. D41 (1990) 264. Fleischer:1980ub J. Fleischer and F. Jegerlehner, “Radiative Corrections to Higgs Decays in the Extended Weinberg-Salam Model,” http://dx.doi.org/10.1103/PhysRevD.23.2001Phys. Rev. D23 (1981) 2001–2026. Krause:2016oke M. Krause, R. Lorenz, M. Muhlleitner, R. Santos, and H. Ziesche, “Gauge-independent Renormalization of the 2-Higgs-Doublet Model,” http://dx.doi.org/10.1007/JHEP09(2016)143JHEP 09 (2016) 143, http://arxiv.org/abs/1605.04853arXiv:1605.04853 [hep-ph]. Denner:2016etu A. Denner, L. Jenniches, J.-N. Lang, and C. Sturm, “Gauge-independent MS renormalization in the 2HDM,” http://dx.doi.org/10.1007/JHEP09(2016)115JHEP 09 (2016) 115, http://arxiv.org/abs/1607.07352arXiv:1607.07352 [hep-ph]. Krause:2016xku M. Krause, M. Muhlleitner, R. Santos, and H. Ziesche, “Higgs-to-Higgs boson decays in a 2HDM at next-to-leading order,” http://dx.doi.org/10.1103/PhysRevD.95.075019Phys. Rev. D95 no. 7, (2017) 075019, http://arxiv.org/abs/1609.04185arXiv:1609.04185 [hep-ph]. Hartmann:2016pil C. Hartmann, W. Shepherd, and M. Trott, “The Z decay width in the SMEFT: y_t and λ corrections at one loop,” http://dx.doi.org/10.1007/JHEP03(2017)060JHEP 03 (2017) 060, http://arxiv.org/abs/1611.09879arXiv:1611.09879 [hep-ph]. Logan:2015xpa H. E. Logan and V. Rentala, “All the generalized Georgi-Machacek models,” http://dx.doi.org/10.1103/PhysRevD.92.075011Phys. Rev. D92 no. 7, (2015) 075011, http://arxiv.org/abs/1502.01275arXiv:1502.01275 [hep-ph]. Goncalves:2016iyg D. Goncalves, P. A. N. Machado, and J. M. No, “Simplified Models for Dark Matter Face their Consistent Completions,” http://dx.doi.org/10.1103/PhysRevD.95.055027Phys. Rev. D95 no. 5, (2017) 055027, http://arxiv.org/abs/1611.04593arXiv:1611.04593 [hep-ph].
http://arxiv.org/abs/1702.08511v2
{ "authors": [ "Christopher W. Murphy" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170227201314", "title": "NLO Perturbativity Bounds on Quartic Couplings in Renormalizable Theories with $φ^4$-like Scalar Sectors" }
J.B.S. Haldane Could Have Done Better [CV was supported by the Austrian Science Fund (FWF): DK W1225-B20.] C. Vogl, Institut für Tierzucht und Genetik, Veterinärmedizinische Universität Wien, Veterinärplatz 1, A-1210 Vienna, Austria § COMMENT ON: “J.B.S. HALDANE'S CONTRIBUTION TO THE BAYES FACTOR HYPOTHESIS TEST” BY ETZ AND WAGENMAKERS Etz and Wagenmakers <cit.> (and an earlier version of this paper available at: https://arxiv.org/abs/1511.08180) review the contribution of J.B.S. Haldane to the development of the Bayes factor hypothesis test. They focus particularly on Haldane's proposition of a mixture prior in his first example on genetic linkage mapping in the Chinese primrose (Primula sinensis) <cit.>. As Haldane never followed up on these ideas, it is difficult to gauge his motivation and intentions. Haldane himself states his purpose in the beginning of the article <cit.>: Bayes theorem is based on the assumption that all values in the neighborhood of that observed are equally probable a priori. It is the purpose of this article to examine what more reasonable assumptions could be made, and how it will affect the estimate given the data. Compactly restated: flat priors should be replaced by more reasonable assumptions. But I will argue that in the very same article, in the very first example, Haldane himself uses a flat prior instead of a more reasonable prior. Haldane's primrose example with a flat prior. The data come from a (hypothetical) observation of 400 meioses in the primrose; 160 of them are cross-overs. Let ρ be the recombination rate between the two loci. The likelihood is a binomial p(y=160 |ρ, N=400) = \binom{400}{160}ρ^160(1-ρ)^240. Haldane argues that P. sinensis has twelve chromosomes of about equal length. Recombination between unlinked loci on different chromosomes is free, such that the recombination rate ρ=1/2. This is reflected in Haldane's prior by a point mass of 11/12 on ρ=1/2. With probability 1/12, the two loci reside on the same chromosome, i.e., the two loci are linked. Conditional on linkage, Haldane assumes 0 ≤ρ < 1/2 and a flat prior of p(ρ)=2, such that his marginal posterior distribution becomes p(y=160 | N=400) = (1/6)\binom{400}{160}∫_0^1/2ρ^160(1-ρ)^240 dρ. He continues to approximate by extending the upper integration limit to one p(y=160 | N=400) ≈ (1/6)\binom{400}{160}∫_0^1 ρ^160(1-ρ)^240 dρ = (1/6)\binom{400}{160} (160! 240!)/401! = 1/(6·401). But the flat prior is unreasonable, given Haldane's knowledge of genetic linkage. A better prior. Chromosomes are one-dimensional structures on which loci reside. The recombination rate ρ between two genes is a function of their distance on the chromosome. It would have been reasonable for Haldane to assume that a locus can be located anywhere on a chromosome with equal probability and that the locations of two loci are independent of each other. Then the genetic distance x in units of proportions of the length of the chromosome (denoted with L and measured in cross-over rates, i.e., Morgan) between the two loci would be given by a beta p(x) = (Γ(3)/(Γ(1)Γ(2))) x^1-1(1-x)^2-1. Haldane <cit.> himself derived a bijective function that maps genetic distance x to recombination rate ρ: ρ_x,L = (1-e^-2Lx)/2. The number of cross-overs per meiosis per chromosome is about one, a fact probably known to Haldane, such that I set L=1. Changing variables from x to ρ, the prior distribution of ρ then becomes p(ρ) = (2+log(1-2ρ))/(1-2ρ) with 0 ≤ρ≤ (1-e^-2)/2 (Fig. <ref>).
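To check the algebra, both conditional marginal likelihoods (under Haldane's flat conditional prior and under the beta-derived prior above) can be evaluated numerically. The following Python sketch, assuming scipy is available, is only an illustration of the equations above; it computes the binomial coefficient in log space for numerical stability.

import numpy as np
from scipy import integrate, special

y, N = 160, 400
log_binom = (special.gammaln(N + 1) - special.gammaln(y + 1)
             - special.gammaln(N - y + 1))

def loglik(rho):
    # Log of the binomial likelihood binom(400,160) rho^160 (1-rho)^240.
    return log_binom + y * np.log(rho) + (N - y) * np.log(1.0 - rho)

# Haldane's flat conditional prior p(rho) = 2 on [0, 1/2].
flat, _ = integrate.quad(lambda r: 2.0 * np.exp(loglik(r)), 0.0, 0.5)

# The better prior p(rho) = (2 + log(1 - 2 rho)) / (1 - 2 rho),
# supported on [0, (1 - exp(-2)) / 2].
hi = (1.0 - np.exp(-2.0)) / 2.0
better, _ = integrate.quad(
    lambda r: (2.0 + np.log(1.0 - 2.0 * r)) / (1.0 - 2.0 * r)
              * np.exp(loglik(r)),
    0.0, hi)

print(flat, better)  # conditional marginal likelihoods given linkage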
Note that, for the primrose example, the maximum likelihood estimator of the recombination rate is ρ̂=160/400=0.4. In this parameter region the prior (<ref>) differs considerably from the flat prior p(ρ)=2. Speculations on Haldane's intentions. Haldane most certainly also went through the above considerations; after all, he himself developed a very useful mapping function. Reading the article carefully, I consider its main purpose not the mixture prior in the primrose example, but rather the investigation of different parameter regions of the binomial and its conjugate distribution, the beta. The primrose example is in a parameter region where probabilities of failure and success are about equal. (Realize that the example data are actually closer to equal probabilities than is usually encountered in linkage studies, where sample sizes are often about 50 to 100, rather than Haldane's 400, which would have made detection of linkage unlikely with a true ρ=0.4.) For this, a flat prior is reasonable, i.e., a prior beta with α=β=1. Haldane may actually have been more interested in the approximate distribution (<ref>) than in the exact one (<ref>). The other examples in Haldane's article pertain to parameter regions where success (or failure) probabilities are close to zero or one. Then a flat prior would put too much weight into the middle of the parameter region and a prior with α→0 and β=1 proportional to 1/ρ, or with α=β→0 proportional to 1/(ρ(1-ρ)), would be preferable. In this light, a more complicated prior distribution than the beta and its asymptotes would have been useless, even though Haldane could have derived it easily. I thus believe that, for the sake of generality, Haldane chose to not do better than flat in the primrose example. Furthermore, I agree with Etz and Wagenmakers <cit.>: It was the specific nature of the linkage problem in genetics that caused Haldane to serendipitously adopt a mixture prior comprising a point mass and smooth distribution. A genetic red herring. Modern genetics has shown cross-over rates to be variable along a chromosome, with low rates at the chromosome ends and around the centromere. Since the distribution of genes on chromosomes also follows roughly the same pattern, this complication can be ignored, as long as genetic position is based on mapping distances (in units of Morgan) and not physical distances (in units of basepairs). § ACKNOWLEDGMENTS I thank Alexander Etz and Eric-Jan Wagenmakers for inspiration and encouragement. My research was supported by the Austrian Science Fund (FWF): DK W1225-B20.
http://arxiv.org/abs/1702.08261v1
{ "authors": [ "Claus Vogl" ], "categories": [ "stat.OT" ], "primary_category": "stat.OT", "published": "20170227130258", "title": "J.B.S. Haldane Could Have Done Better" }
Topological Interference Management withDecoded Message Passing Xinping Yi, Member, IEEE and Giuseppe Caire, Fellow, IEEE This work has been presented in part at Proc. IEEE Int. Symp. Information Theory (ISIT'16), Barcelona, Spain, Jul. 2016. X. Yi and G. Caire are with the Communication and Information Theory Chair in the Department of Electrical Engineering and Computer Science at Technische Universität Berlin, 10587 Berlin, Germany. (email: {xinping.yi, caire}@tu-berlin.de)December 30, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================ The topological interference management (TIM) problem studies partially-connected interference networks with no channel state information except for the network topology (i.e., connectivity graph) at the transmitters. In this paper, we consider a similar problem in the uplink cellular networks, while message passing is enabled at the receivers (e.g., base stations), so that the decoded messages can be routed to other receivers via backhaul links to help further improve network performance.For this TIM problem with decoded message passing (TIM-MP), we model the interference pattern by conflict digraphs, connect orthogonal access to the acyclic set coloring on conflict digraphs, and show that one-to-one interference alignment boils down to orthogonal access because of message passing. With the aid of polyhedral combinatorics, we identify the structural properties of certain classes of network topologies where orthogonal access achieves the optimal degrees-of-freedom (DoF) region in the information-theoretic sense. The relation to the conventional index coding with simultaneous decoding is also investigated by formulating a generalized index coding problem with successive decoding as a result of decoded message passing. The properties of reducibility and criticality are also studied, by which we are able to prove the linear optimality of orthogonal access in terms of symmetric DoF for the networks up to four users with all possible network topologies (218 instances). Practical issues of the tradeoff between the overhead of message passing and the achievable symmetric DoF are also discussed, in the hope of facilitating efficient backhaul utilization. § INTRODUCTION As the cellular network becomes larger, denser and more heterogeneous, interference management is increasingly crucial and challenging. The substantial gain promised by sophisticated interference management techniques (e.g., interference alignment <cit.>) usually requires that (almost) perfect and instantaneous channel state information at the transmitters (CSIT) be accessible. Nevertheless, to obtain CSIT perfectly and instantaneously is challenging, if not impossible. Especially when the number of users/antennas is large or the channel changes rapidly, it will be expensive to obtain CSIT in a timely manner with reasonable accuracy. Relaxations of the perfect and instantaneous CSIT requirements have been investigated in various networks (e.g., instantaneous CSIT with limited accuracy <cit.>, perfect but delayed CSIT <cit.>).
However, if only finite-precision CSIT is available, the system degrees of freedom (DoF) value, i.e., roughly speaking the number of non-interfering Gaussian channels that the system is able to support simultaneously, collapses to the situation as if no CSIT were available at all <cit.>. Indeed, with no CSIT, the transmitters cannot distinguish different receivers, and are totally blind. The no-CSIT assumption is somewhat too pessimistic. In fact, certain coarse channel information (e.g., channel fading statistics, strength, and users' locations) is easily obtained even in today's practical systems. For instance, if the fading channels of different users follow some structured patterns, then blind interference alignment could improve DoF beyond the case of absolutely no CSIT <cit.>. In addition, the DoF collapse was observed under the assumption that the wireless network is fully connected, so that interference is everywhere no matter whether it is strong or weak enough to be negligible. Intuitively, it makes no sense for a system designer to take into account the interference from very far away base stations. As the interference power rapidly decays with distance for distances beyond some critical threshold due to shadowing, blocking, and Earth curvature, interference from some sources is inevitably weaker than others, which suggests the use of a partially-connected bipartite graph to model, at least approximately, the network topology. Interference networks with no channel state information (CSI) except for the knowledge of the connectivity graph at the transmitters have been considered under the name of the “topological interference management (TIM)” problem <cit.>. It has been shown that substantial gains in terms of DoF can be obtained with only this topological information for partially-connected interference networks. Surprisingly, one half DoF per user, which is optimal for an interference channel with perfect and global CSIT, can be attained for some partially-connected interference channels with only topological information. Its substantial reduction of the CSIT requirement has attracted a lot of follow-up works addressing various aspects, such as the consideration of fast fading channels <cit.>, alternating connectivity <cit.>, multiple antennas <cit.>, and cellular networks <cit.>. The TIM problem was also nicely bridged to the “index coding” problem <cit.>, where the former offers well-developed interference management techniques (such as interference alignment) to attack the latter, and also serves as an intriguing application of great practical interest in wireless networks for the latter. Recently, the TIM problem under a broadcast setting with distributed transmitter cooperation in the downlink cellular network was considered in <cit.>. It has been shown that, if message sharing is enabled at the base stations, higher rate transmission can be created by allocating messages to the transmitters in a way such that the interference can be perfectly avoided or aligned. As a dual problem, a natural question then to ask is, whether receiver cooperation (or cooperative decoding at the base stations) in the uplink cellular networks also offers us some gains under the TIM setting. Cooperative decoding at the base stations in the uplink cellular networks was widely studied (e.g., <cit.>), where the received signals are shared among base stations via backhaul links so that joint signal processing is enabled.
Nevertheless, joint signal processing and decoding results in a huge amount of backhaul overhead, even if the quantized received signal samples are shared locally in a clustered decoding fashion <cit.>. Most recently, a new type of local base station cooperation framework in uplink cellular networks was studied in <cit.> to boost the overall network performance. Differently from the strategy of sharing quantized received signals, the authors in <cit.> considered a successive decoding policy, in which the message at each receiver is decoded based on the locally received signal as well as the decoded messages passed from neighboring base stations that have already decoded their messages at an earlier stage. It has been shown that the local and single-round (non-iterative) message passing enables interference alignment without requiring symbol extensions or lattice alignment. For these results, it is crucial to exploit the partial connectivity of the interference graph while, as usual, the local interference alignment scheme requires perfect instantaneous CSIT. A natural question is whether the CSIT requirement can be relaxed in the decoded message passing setting. More specifically, with decoded message passing, is it possible to attain a performance gain in partially-connected cellular networks with only topological information? In this work, we formally formulate the TIM problem with decoded message passing at the receivers, referred to as the “TIM-MP” problem. As soon as a receiver decodes its own message, it can pass its message to any other receivers who are interested. Building on this decoded message passing setting, we model the interference pattern by conflict digraphs, and connect orthogonal access to acyclic set coloring on conflict digraphs. With the aid of polyhedral combinatorics, we identify certain classes of network topologies for which orthogonal access achieves the optimal DoF region. The relation to index coding is also studied by formulating a generalized index coding problem with successive decoding. Reducibility and criticality are also discussed in the hope of reducing large-size problems to smaller ones. By reducibility and criticality, the linear optimality of orthogonal access in terms of symmetric DoF is also shown for the small-size networks up to four users with all possible 218 non-isomorphic topologies. Practical issues for TIM-MP problems such as the tradeoff between the achievable symmetric DoF and the overhead of message passing are also discussed in the hope of facilitating the most efficient backhaul utilization. More specifically, our contributions are organized as follows. * In Section <ref>, we model the interference pattern by a conflict directed graph, turning the interference between different transmitter-receiver pairs (i.e., the respectively desired messages) into the directed connectivity between nodes (representing the corresponding messages) in a directed graph. By this graphic modeling, we connect orthogonal access of TIM-MP problems to acyclic set coloring on conflict digraphs, where the latter is well-studied in the graph theory literature. The achievable symmetric DoF due to single-round and multiple-round message passing are connected to two graph theoretic parameters, the dichromatic number and the fractional dichromatic number.
Thanks to the equivalence between local coloring and one-to-one alignment, we also show that one-to-one alignment boils down to orthogonal access as a result of decoded message passing, by proving that local acyclic set coloring is not better than acyclic set coloring. * We establish in Section <ref> the outer bound of the achievable DoF region by cycle and clique inequalities, and further connect it to set packing and covering polytopes. With the aid of polyhedral combinatorics, we identify sufficient conditions for which orthogonal access (i.e., fractional acyclic set coloring) achieves the optimal DoF region. Such conditions ensure the integrality of the outer bound of DoF region polytopes, where the integral extreme points of the polytopes can be achieved by acyclic set coloring. Time sharing among the integral extreme points yields the whole DoF region. * The relation to index coding is also studied in Section <ref>, showing that TIM-MP corresponds to a generalized index coding problem, referred to as successive index coding (SIC). Generalizing conventional index coding, SIC allows successive decoding at the receivers, where as soon as a receiver decodes its desired message, it can declare it and pass it to other receivers as additional side information. The decoding and message passing orders play a crucial role, which makes SIC a more complex combinatorial problem. Coding schemes analogous to (partial) clique covering are also given. The vertex-reducibility and arc-criticality of SIC are also investigated, by which SIC problems with large vertex/arc size can be reduced to ones with smaller size. * The linear optimality of orthogonal access is considered in Section <ref>. We first consider some special network topologies that do not satisfy the sufficient conditions in Section <ref>, and then prove the linear optimality of orthogonal access with respect to symmetric DoF, if restricted to linear schemes. Thanks to the vertex-reducibility and arc-criticality investigated in Section <ref>, the linear optimality of symmetric DoF or broadcast rate for small networks up to 4 users with all possible network topologies is fully characterized by orthogonal access. * The practical issue of the tradeoff between achievable symmetric DoF and the number of passed messages is also discussed in Section <ref>. We identify a sufficient condition under which only one message passing is helpful to improve the DoF region. The tradeoff between achievable symmetric DoF and the overhead of message passing is formulated as a matrix completion problem, which can be solved algorithmically although a closed-form solution remains a challenge. Notations: Throughout this paper, we define 𝒦≜{1,2,…,K}, and [n] ≜{1,2,…,n} for any integer n. Let A, 𝒜, and 𝐀 represent a variable, a set, and a matrix, respectively. In addition, 𝒜^c is the complementary set of 𝒜, and |𝒜| is the cardinality of the set 𝒜. The set x(𝒜) or x_𝒜 represents a set or tuple {x_i, i ∈𝒜} indexed by 𝒜. 𝐀_ij represents the ij-th entry of the matrix 𝐀. Define 𝒜\ a ≜{x | x ∈𝒜, x ≠ a} and 𝒜_1 \𝒜_2 ≜{x | x ∈𝒜_1, x ∉𝒜_2}. 𝟎 and 𝟏 represent respectively the all-zero and all-one vectors. § SYSTEM MODEL§.§ Channel Model We consider the uplink of a cellular network with K user terminals (i.e., transmitters) that want to send messages to K base stations (i.e., receivers), respectively. The base stations are connected with backhaul links, through which one base station could pass its own decoded message to its neighboring ones. Both user terminals and base stations are equipped with a single antenna each.
It is assumed that, due to the scarce channel state feedback resource, the users have no access to the channel realizations and only know the network connectivity graph, i.e., which user is connected to which base station. The received signal in this partially-connected network is modeled, for base station j at time instant t, by

Y_j(t) = ∑_{i ∈ 𝒯_j} h_ji(t) X_i(t) + Z_j(t)

where X_i(t) is the transmitted signal subject to the average power constraint 𝔼(|X_i|²) ≤ P, Z_j(t) is the Gaussian noise with zero mean and unit variance at the base stations, and h_ji(t) is the channel coefficient between user i and base station j, which is not known by the users. Here 𝒯_j represents the transmit set containing the indices of the users connected to base station j, for j ∈ {1,2,…,K}. We point out that the channel coefficients {h_ji(t), ∀ i,j,t} are not available at the users, yet the network topology (i.e., 𝒯_j, ∀ j) is known by both users and base stations. The network topology is assumed to be fixed throughout the duration of communication. Such a setup is referred to as the "Topological Interference Management (TIM)" setting.

§.§ Problem Statement

Similarly to the definition in <cit.>, a decoding order π is a partial order ≺_π such that i ≺_π j indicates that the message W_i should be decoded before W_j. We assume that base station i decodes only its own desired message W_i and then passes it to base station j, even though sometimes the messages desired by other base stations are also decodable. As anticipated in Section <ref>, we refer to this setting combining TIM and decoded message passing as "TIM-MP". In the TIM-MP problem, given a decoding order i ≺_π j, once W_i is decoded, it can be passed to receiver j to help decode W_j. Throughout this paper, we consider unconstrained message passing, that is, a message can be passed to any other interested receivers [In fact, because of the locality of interference in physically motivated network topologies, only the neighboring receivers suffer from the interference caused by message W_i, and therefore messages are passed in a neighbor-propagation fashion.].

Formally, the achievable rate of the TIM-MP problem can be defined as follows. For a network with topology represented by the bipartite graph ℬ, a rate tuple (R_1^π,…,R_K^π) is said to be achievable under a specified decoding order π if there exists a (2^{nR_1^π},…,2^{nR_K^π},n) coding scheme consisting of the following elements:

* K message sets 𝒲_i ≜ [1:2^{nR_i^π}], from which the message W_i is uniformly chosen, ∀ i ∈ 𝒦;

* K encoding functions f_i^{(n,π)}: 𝒲_i × ℬ ↦ ℂ^n, ∀ i ∈ 𝒦, with

X_i^n = f_i^{(n,π)} (W_i, ℬ)

subject to the power constraint 𝔼(|X_i|²) ≤ P, where each transmitter has access only to its own message and the network topology graph ℬ;

* K decoding functions g_j^{(n,π)}: ℂ^n × 𝒮_j^π × ℬ × ℂ^{K × K} ↦ 𝒲_j, ∀ j ∈ 𝒦, with

Ŵ_j = g_j^{(n,π)} (Y_j^n, S_j^π, ℬ, ℋ)

where ℋ = {h_ji(t), ∀ i,j,t}, and S_j^π is the set of decoded messages passed from other receivers through backhaul links, defined as

S_j^π = {Ŵ_k: k ≺_π j};

such that the decoding error P_e^{(n,π)} = max_j Pr(W_j ≠ Ŵ_j) tends to zero as the code block length n tends to infinity.
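Before moving on, the following minimal numpy sketch makes the channel model concrete by generating one time instant of the received signals Y_j for a toy topology. The function name, the 0-indexed transmit sets, and the i.i.d. Gaussian channel draws are illustrative assumptions and not part of the formal model (in particular, the transmitters never observe the drawn coefficients, matching the TIM assumption).

```python
import numpy as np

def received_signals(T, X, rng=None):
    """One time instant of the partially-connected uplink:
    Y_j = sum_{i in T_j} h_ji * X_i + Z_j, with transmit sets T (0-indexed).
    Channel coefficients are drawn i.i.d. purely for illustration."""
    rng = rng or np.random.default_rng(0)
    K = len(T)
    Y = np.zeros(K, dtype=complex)
    for j, Tj in enumerate(T):
        for i in Tj:
            h = rng.standard_normal() + 1j * rng.standard_normal()
            Y[j] += h * X[i]
        # unit-variance circularly-symmetric Gaussian noise
        Y[j] += (rng.standard_normal() + 1j * rng.standard_normal()) / 2**0.5
    return Y

# toy 3-cell topology: receiver j hears user j and user (j+1) mod 3
print(received_signals([[0, 1], [1, 2], [2, 0]], np.ones(3)))
```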
We consider the following message passing policies:

* Single-round message passing: For a decoding order π, for all (i,j) such that i ≺_π j, messages may be passed only from receiver i to receiver j.

* Multiple-round message passing: A sequence of multiple single-round message passings, where different rounds can have distinct decoding orders. For instance, it may happen that i ≺_π j in one round and j ≺_π' i in another round with π ≠ π'.

The achievable rate region with respect to the decoding order π, denoted by ℛ^π, is the set of all achievable rate tuples (R_1^π,…,R_K^π) corresponding to single-round message passing. The capacity region with multiple-round message passing over all possible decoding orders is given by

𝒞 = conv (∪_π ℛ^π),

which can be obtained by time sharing among multiple single-round message passings with different decoding orders allowed in different rounds.

We follow the TIM setting and use the symmetric DoF and the DoF region as our main figures of merit:

d_sym = lim sup_{P → ∞} sup_{(R,…,R) ∈ 𝒞} R / log P,

𝒟 = {(d_k)_{k ∈ 𝒦} ∈ ℝ_+^K : d_k = lim sup_{P → ∞} R_k / log P, ∀ k ∈ 𝒦, s.t. (R_1,…,R_K) ∈ 𝒞}.

§.§ Interference Modeling

We model the mutual interference in the network as a directed message conflict graph. A directed graph (digraph) 𝒢 = (𝒱, ℰ) consists of a set of vertices 𝒱 and a set of arcs ℰ between vertices. We denote by (u,v) the arc (i.e., directed edge) from vertex u to vertex v. More graph-theoretic definitions are presented in Appendix <ref>.

For a network topology, its directed conflict graph (briefly, "conflict digraph") is a digraph 𝒢 = (𝒱, ℰ) such that i ∈ 𝒱 represents the message W_i from transmitter i to receiver i, and (i,j) ∈ ℰ represents the interfering link from transmitter i to receiver j in the interference network. The conflict digraph indicates not only the message conflicts due to mutual interference, but also the source and the sink of each interference. The conflict digraph captures exactly every instance of network topology. We refer to the TIM-MP problem with a specific conflict digraph as a TIM-MP instance.
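As an illustration of this construction, the following sketch builds the conflict digraph from the transmit sets 𝒯_j using the networkx library; the function name and the toy transmit sets are our own choices.

```python
import networkx as nx

def conflict_digraph(T):
    """Build the conflict digraph from transmit sets T_j (0-indexed):
    vertex i is message W_i, and an arc (i, j) is drawn whenever
    transmitter i interferes at receiver j, i.e., i in T_j with i != j."""
    G = nx.DiGraph()
    G.add_nodes_from(range(len(T)))
    for j, Tj in enumerate(T):
        for i in Tj:
            if i != j:
                G.add_edge(i, j)
    return G

G = conflict_digraph([[0, 1], [1, 2], [2, 0]])
print(sorted(G.edges()))  # [(0, 2), (1, 0), (2, 1)] -> a dicycle C_3
```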
§ ORTHOGONAL ACCESS

Orthogonal access is the simplest transmission scheme of practical interest. For the TIM problem, orthogonal access schedules independent sets of the conflict graph across time or frequency <cit.>, because simultaneous transmission of the messages in an independent set and orthogonal transmission of different independent sets across time or frequency avoid mutual interference. By contrast, message passing offers the possibility of interference cancelation for messages that are not in an independent set. This complicates orthogonal access under the TIM-MP setting, as both interference avoidance and cancelation should be taken into account.

In what follows, we first introduce the concept of orthogonal access in the TIM-MP problem and propose an inner bound on the symmetric DoF for the single-round message passing setting, followed by the extension to multiple-round message passing. The uselessness of one-to-one interference alignment is also shown from a graph-theoretic perspective.

§.§ What is Orthogonal Access?

Instead of scheduling independent sets as in the TIM problem, we schedule acyclic sets in the TIM-MP problem, where an acyclic set is an induced sub-digraph that contains no directed cycles (dicycles). More properties of acyclic sets can be found in Appendix <ref>. Thus, we have the following definition.

Orthogonal access in the TIM-MP setting consists of scheduling acyclic sets of the conflict digraph orthogonally across time or frequency. The messages in an acyclic set can be decoded successively via decoded message passing in one time slot. [Note that the propagation of the messages over the backbone is much faster than the transmission over the wireless interface. Therefore, we may treat it, for conceptual simplicity, as one time slot, counting the time to transmit (simultaneously) the codewords and neglecting the message passing propagation time. In practice, a sequence of codewords can be multiplexed in time for the same acyclic set and decoding order, so that the propagation can be done in a pipelined way and the time needed for end-to-end propagation of the decoded messages is effectively much less than the duration of transmission in the state defined by the acyclic set.]

In an acyclic digraph 𝒢 = (𝒱, ℰ), there always exists a topological ordering of 𝒱 such that a vertex u ∈ 𝒱 comes before a vertex v ∈ 𝒱 if there is an arc (u,v) ∈ ℰ. Such a topological order gives the decoding and message passing order. In particular, an acyclic set contains at least one vertex with no incoming arcs. We start by decoding those messages, which are free of interference. Once decoded, they are passed on to the messages interfered only by them, so that this interference can be fully canceled and the next messages become decodable as well; this continues until all messages in the acyclic set have been decoded successively. As such, simultaneous transmission of the messages in an acyclic set leaves no residual interference after interference cancelation with the passed messages.

Let us look at orthogonal access from a graph coloring perspective. If each acyclic set is assigned one color, orthogonal access is equivalent to acyclic set coloring of the conflict digraph. The messages assigned the same color are simultaneously transmitted and successively decoded, whereas different colors are multiplexed over different time slots.

The dichromatic number of a digraph 𝒢, denoted by χ_A(𝒢), is the minimum number of colors required to color the vertices of 𝒢 in such a way that every set of vertices with the same color induces an acyclic sub-digraph in 𝒢. By this definition, we immediately obtain an inner bound on the symmetric DoF.

For the TIM-MP instance with conflict digraph 𝒢, we have the inner bound

d_sym ≥ 1/χ_A(𝒢),

which is achieved by orthogonal access with single-round message passing.

Orthogonal access with single-round message passing can be seen as assigning a standard basis vector to each acyclic set: assigning the i-th column of an identity matrix to an acyclic set is equivalent to scheduling this acyclic set in the i-th time slot.

By Lemma <ref>, we have the following two corollaries, whose proofs are relegated to Appendices <ref> and <ref>.

For TIM-MP instances, the optimal symmetric DoF is d_sym = 1 if and only if the conflict digraph is acyclic.

For TIM-MP instances, if only single-round message passing is allowed, the optimal symmetric DoF is d_sym = 1/2 if the conflict digraph contains either only directed odd cycles or only directed even cycles.

The condition in Corollary <ref> is only sufficient but not necessary; there exists a larger family of conflict digraphs with χ_A = 2. There is a conjecture <cit.> in the graph theory literature claiming that planar digraphs without length-2 dicycles have χ_A = 2. For the conflict digraphs in Fig. <ref>, there are only directed odd cycles in Fig. <ref>(a), only directed even cycles in Fig. <ref>(b), and both odd and even dicycles in Fig. <ref>(c) and (d). So, according to Corollary <ref>, the optimal symmetric DoF value with single-round message passing for both (a) and (b) is 1/2. We cannot expect that DoF 1/2 is achievable in (c); in fact d_sym = 1/3 is optimal, as will be shown later. Nevertheless, the optimal symmetric DoF value of Fig. <ref>(d) is also 1/2, although the conflict digraph contains both even and odd cycles. This shows that the condition in Corollary <ref> is only sufficient but not necessary. ◊
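The decoding and message passing order within a scheduled acyclic set is exactly a topological sort of the induced sub-digraph, as the following sketch illustrates on a toy conflict digraph (the arc list is hypothetical and chosen only for illustration).

```python
import networkx as nx

def decoding_order(G, acyclic_set):
    """Successive decoding inside one scheduled acyclic set: a topological
    order of the induced sub-digraph lists the messages decoded first
    (no incoming interference left) before those that rely on passed
    messages.  Raises if the set is not acyclic, i.e., not schedulable
    in a single slot."""
    H = G.subgraph(acyclic_set)
    return list(nx.topological_sort(H))

# hypothetical 6-message conflict digraph
G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)])
print(decoding_order(G, [1, 2]))  # decode W_1 first, pass it, then W_2
```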
The dichromatic number of a digraph 𝒢 can be represented as the solution of the following linear program:

χ_A(𝒢) = min ∑_{A ∈ 𝒜(𝒢)} g(A)
s.t. ∑_{A ∈ 𝒜(𝒢,v)} g(A) ≥ 1, ∀ v ∈ 𝒱(𝒢),
g(A) ∈ {0,1},

where 𝒜(𝒢) is the collection of all possible acyclic sets, and 𝒜(𝒢,v) is the collection of all possible acyclic sets that involve the vertex v. By relaxing g(A) ∈ {0,1} to g(A) ∈ [0,1], as for the fractionalized versions of other graph-theoretic parameters, the linear program (<ref>) yields the fractional dichromatic number χ_A,f(𝒢), which also serves as an inner bound on the symmetric DoF.

For the TIM-MP instance with conflict digraph 𝒢, we have

d_sym ≥ 1/χ_A,f(𝒢),

which is achieved by orthogonal access with multiple-round message passing.

Fractional coloring, whether by independent or acyclic sets, can be treated as time sharing among a set of proper non-fractional colorings, where g(A) ∈ [0,1] is the portion of time shared by the acyclic set A. Thus, fractional acyclic set coloring of the conflict digraph is equivalent to orthogonal access with multiple-round message passing.

If the conflict digraph has only a symmetric part (i.e., no uni-directed arcs; see Appendix <ref>), both the dichromatic number and its fractional version reduce to their counterparts in the underlying undirected graph, as acyclic sets in the digraph reduce to independent sets in the underlying undirected graph.

The (K,L) regular network <cit.> has a connectivity pattern where each receiver is connected to its paired transmitter and the next L-1 successive ones. By Lemma <ref>, we have the following corollary on the symmetric DoF inner bound for the regular network, whose proof is relegated to Appendix <ref>.

For the (K,L) regular network (K ≥ L) with 𝒯_j = {j, j+1, …, j+L-1} mod K, we have

d_sym ≥ (K-L+1)/K,

which is achieved by orthogonal access with multiple-round message passing.

Unless otherwise specified, orthogonal access in the rest of this paper refers to multiple-round message passing.
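For small instances, the fractional relaxation of the above LP can be solved directly by enumerating all acyclic sets. The following sketch does so with scipy; the exhaustive enumeration is exponential in the number of vertices and is meant only for illustration. For a single dicycle of length 5 it returns 5/4, consistent with the (K,L) = (5,2) case of the corollary above.

```python
from itertools import combinations
import networkx as nx
from scipy.optimize import linprog

def fractional_dichromatic(G):
    """LP relaxation of the acyclic-set covering program: minimize
    sum g(A) over all acyclic vertex sets A subject to covering every
    vertex at least once, with 0 <= g(A) <= 1."""
    V = list(G.nodes())
    sets = [S for r in range(1, len(V) + 1)
            for S in combinations(V, r)
            if nx.is_directed_acyclic_graph(G.subgraph(S))]
    # coverage constraints written as -M g <= -1
    A_ub = [[-1.0 if v in S else 0.0 for S in sets] for v in V]
    res = linprog(c=[1.0] * len(sets), A_ub=A_ub,
                  b_ub=[-1.0] * len(V), bounds=(0, 1))
    return res.fun

C5 = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(fractional_dichromatic(C5))  # 1.25, i.e., chi_{A,f} = 5/4
```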
§.§ Can Interference Alignment Help?

Beyond the achievability through acyclic set coloring, an interesting question is whether more sophisticated achievability schemes, such as interference alignment, can outperform orthogonal access. Roughly speaking, (subspace) interference alignment associates each interference with a subspace such that the superposition of the interferences occupies a subspace of reduced dimension. One-to-one interference alignment is a special case of subspace alignment: interferences are aligned in a one-to-one manner, that is, in a given one-dimensional subspace, two interferences are either completely aligned or disjoint.

Under the TIM setting, orthogonal access is equivalent to fractional vertex coloring on the undirected conflict graph <cit.>, and one-to-one interference alignment is a generalized version of orthogonal access <cit.>. Let us associate each transmitter-receiver pair (i.e., each vertex in the conflict graph) with a transmission scheduling vector of length L, where L is the number of scheduling intervals needed to properly serve all transmitter-receiver pairs without causing mutual interference (i.e., the number of colors of a proper vertex coloring of the conflict graph). Orthogonal access corresponds to assigning the basis vector 𝐞_t (i.e., the t-th column of 𝐈_L) to the transmitter-receiver pairs associated with color t, while one-to-one alignment assigns general linearly independent vectors such that each receiver can recover its desired message by solving a linear system of equations (in the absence of noise). In general, this allows a vector dimension T ≤ L, so that interference alignment may improve over orthogonal access. At a given receiver, only the transmitters that cause interference appear in the linear system. Accordingly, the required vector dimension T depends merely on the number of different colors in the in-neighborhood in the directed conflict graph. Thus, a feasible one-to-one interference alignment scheme under the TIM setting is equivalent to a proper local coloring on the directed conflict graph <cit.>.

This applies analogously to the TIM-MP setting. Given a proper acyclic set coloring of a conflict digraph, what matters for an acyclic set is the maximum number of differently colored acyclic sets in its in-neighborhood (i.e., causing interference): the subspace spanned by the vectors assigned to the acyclic sets in the in-neighborhood should be minimized to keep the interference as aligned as possible. Analogously to <cit.>, we introduce a local version of fractional acyclic set coloring.

The local dichromatic number χ_LA(𝒢) of a digraph 𝒢 is defined as

χ_LA(𝒢) ≜ min_c max_{v ∈ 𝒱} |{c(u): u ∈ 𝒩^-_v}|,

where the minimum is over all proper acyclic set colorings c: 𝒱 ↦ ℕ, and 𝒩_v^- is the closed in-neighborhood of v, defined as

𝒩_v^- ≜ {v} ∪ {u: (u,v) ∈ ℰ(𝒢)}.

The local dichromatic number of a digraph 𝒢 can also be represented as the solution of the following program:

χ_LA(𝒢) = min max_{v ∈ 𝒱} ∑_{A ∈ 𝒜(𝒢): A ∩ 𝒩_v^- ≠ ∅} g(A)
s.t. ∑_{A ∈ 𝒜(𝒢,v)} g(A) ≥ 1, ∀ v ∈ 𝒱(𝒢),
g(A) ∈ {0,1}.

Its fractional version χ_LA,f(𝒢) is defined similarly by replacing g(A) ∈ {0,1} with g(A) ∈ [0,1]. Fractional local coloring is built upon feasible fractional acyclic set colorings, the difference being that the local version counts only the colors appearing in a closed in-neighborhood; hence we always have χ_LA,f(𝒢) ≤ χ_A,f(𝒢).

The linear program formulation of fractional local acyclic set coloring in (<ref>) is a straightforward extension of fractional local independent set coloring, with acyclic sets replacing the independent sets. Similarly to the equivalence between interference alignment and local coloring shown in <cit.>, it follows immediately that one-to-one interference alignment with message passing is equivalent to fractional local acyclic set coloring. Thus, we have a new inner bound on the symmetric DoF due to interference alignment.

For the TIM-MP instance with conflict digraph 𝒢, we have

d_sym ≥ 1/χ_LA,f(𝒢),

which is achieved by one-to-one interference alignment.

By Lemma <ref>, we show that one-to-one interference alignment does not help when decoded message passing is enabled; the proof is presented in Appendix <ref>.

With message passing, one-to-one interference alignment boils down to orthogonal access, due to

χ_LA,f(𝒢) = χ_A,f(𝒢).

Let us consider an instance of the TIM-MP problem with K=5. The conflict digraph is shown in Fig. <ref>(a), where a proper fractional acyclic set coloring is given for the acyclic sets {1,3}, {2,4}, {3,5}, {4,1}, {5,2}. It requires in total 5 colors for these acyclic sets, with each vertex covered by 2 of them, such that the sub-digraph induced by the vertices with the same color is acyclic. Fig. <ref> shows the in-neighborhood of the acyclic set {5,2}, which includes all other vertices. The fractional dichromatic number is 5/2, which agrees with the fractional local dichromatic number. ◊

Although one-to-one interference alignment does not outperform orthogonal access, it remains an interesting and challenging open problem to exploit the potential benefit of subspace interference alignment, which has been shown to provide further gains in problems such as multiple groupcast TIM <cit.>.
§ THE OPTIMALITY OF ORTHOGONAL ACCESS VIA POLYHEDRAL COMBINATORICS

Having established that one-to-one interference alignment under message passing decoding boils down to orthogonal access, a natural question is how powerful orthogonal access is, and under what conditions orthogonal access is DoF-optimal in the information-theoretic sense. Before proceeding further, we introduce some outer bounds. The preliminaries related to polyhedral combinatorics can be found in Appendix <ref>.

§.§ Outer Bounds via Polyhedral Combinatorics

By the nature of message passing, cliques and dicycles are the main obstacles, and we have the following outer bound.

The DoF region 𝒟 of the TIM-MP problem is outer-bounded by

𝒟 ⊆ {(d_k)_{k ∈ 𝒦} : 0 ≤ d_k ≤ 1, ∀ k ∈ 𝒦,
∑_{k ∈ Q} d_k ≤ 1, ∀ Q ∈ 𝒬,
∑_{k ∈ C} d_k ≤ |C| - 1, ∀ C ∈ 𝒞},

where 𝒞 is the collection of all minimal dicycles (i.e., dicycles without chords), and 𝒬 is the collection of all maximal cliques (i.e., cliques that are not sub-digraphs of other cliques).

See Appendix <ref>.

We refer hereafter to the inequalities in (<ref>) as the individual, cycle, and clique inequalities, respectively. Cliques of size 1 are vertices, so the clique inequalities imply the individual ones. A clique of size 2 is also a dicycle, so the clique and cycle inequalities have some inequalities in common. As all sub-digraphs of a clique are still cliques and the clique inequality of the maximal one implies all the others, we only count the clique inequalities of the maximal cliques. Moreover, if a dicycle has a chord, whatever its direction, a subset of its vertices forms a shorter dicycle, rendering the constraint associated with the larger one redundant; hence we only count the cycle inequalities corresponding to the dicycles without chords. ◊

The outer bound with only the individual and clique inequalities can be written, replacing d_k by x_k, as a set packing polytope (see Appendix <ref>):

𝒫(𝒬, x_𝒦) = {(x_k)_{k ∈ 𝒦} : 0 ≤ x_k ≤ 1, ∀ k ∈ 𝒦, ∑_{k ∈ Q} x_k ≤ 1, ∀ Q ∈ 𝒬}.

The cycle inequalities, under the change of variables y_k = 1 - x_k, can be equivalently rewritten as ∑_{k ∈ C} y_k ≥ 1, so the outer bound with only the individual and cycle inequalities can be written, replacing d_k by 1 - y_k, as a set covering polytope (see Appendix <ref>):

𝒫(𝒞, y_𝒦) = {(y_k)_{k ∈ 𝒦} : 0 ≤ y_k ≤ 1, ∀ k ∈ 𝒦, ∑_{k ∈ C} y_k ≥ 1, ∀ C ∈ 𝒞}.

Taking all individual, clique, and cycle inequalities into account, the outer bound can be written, replacing d_k by 1 - y_k and removing redundant inequalities, as the mixed set covering and packing polytope <cit.>:

𝒫(𝒞', 𝒬', y_𝒦) = {(y_k)_{k ∈ 𝒦} : 0 ≤ y_k ≤ 1, ∀ k ∈ 𝒦,
∑_{k ∈ C} y_k ≥ 1, ∀ C ∈ 𝒞',
∑_{k ∈ Q} y_k ≥ |Q| - 1, ∀ Q ∈ 𝒬'},

where

𝒞' = {C ∈ 𝒞 : |C| ≥ 2}, 𝒬' = {Q ∈ 𝒬 : |Q| ≥ 3}, |C ∩ Q| ≤ 1, ∀ C ∈ 𝒞', ∀ Q ∈ 𝒬'.

The conditions |C| ≥ 2 and |Q| ≥ 3 ensure that the redundancy between the clique and cycle inequalities is removed.
For some C ∈ 𝒞 and Q ∈ 𝒬 with |C ∩ Q| ≥ 2, the condition ∑_{k ∈ C} y_k ≥ 1 is redundant, so we impose |C ∩ Q| ≤ 1 to avoid such redundancy.

To rewrite the set packing and covering polytopes in compact form, we introduce two incidence matrices (see the definitions in Appendix <ref>).

Let 𝒬 be the collection of all induced maximal cliques of a digraph 𝒢 = (𝒱, ℰ). The corresponding clique-vertex incidence matrix 𝐐 is a |𝒬| × |𝒱| binary matrix with

𝐐_ij = 1 if v_j ∈ Q^i, and 0 otherwise,

where v_j ∈ 𝒱, and Q^i ∈ 𝒬 is the i-th clique in 𝒬.

Let 𝒞 be the collection of all induced minimal dicycles of a digraph 𝒢 = (𝒱, ℰ). The corresponding dicycle-vertex incidence matrix 𝐂 is a |𝒞| × |𝒱| binary matrix with

𝐂_ij = 1 if v_j ∈ C^i, and 0 otherwise,

where v_j ∈ 𝒱, and C^i ∈ 𝒞 is the i-th dicycle in 𝒞.

As all clique inequalities correspond only to the maximal cliques, there are no dominating rows in the clique-vertex incidence matrix 𝐐. As all cycle inequalities correspond only to the dicycles without chords, there are no dominating rows in the dicycle-vertex incidence matrix 𝐂. Nevertheless, there might be dominating rows in the concatenation of 𝐐 and 𝐂.

With these two incidence matrices, the set packing and covering polytopes can be compactly represented as

𝒫(𝒢, 𝐐) = {x ∈ ℝ^K : 0 ≤ x ≤ 1, 𝐐x ≤ 1},
𝒫(𝒢, 𝐂) = {x ∈ ℝ^K : 0 ≤ x ≤ 1, 𝐂x ≥ 1}.

According to polyhedral combinatorics (see Appendix <ref>), the matrix 𝐐 is perfect if and only if 𝒫(𝒢, 𝐐) has only integral extreme points, and the matrix 𝐂 is ideal if and only if 𝒫(𝒢, 𝐂) has only integral extreme points. The widely studied balanced and totally unimodular matrices (TUM) are special cases of perfect and ideal matrices; Fig. <ref> presents their relations.
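For concreteness, the two incidence matrices can be assembled for small digraphs as in the following sketch. It relies on networkx's maximal clique enumeration and (in version 3.1 or later) its chordless cycle enumeration, and is intended only as an illustration of the definitions, not as an efficient implementation.

```python
import networkx as nx
import numpy as np

def clique_vertex_incidence(G):
    """Clique-vertex incidence matrix Q: rows are the maximal cliques of
    the symmetric part S(G) (bi-directed arcs only), columns are vertices."""
    S = nx.Graph((u, v) for u, v in G.edges() if G.has_edge(v, u))
    S.add_nodes_from(G.nodes())
    V = sorted(G.nodes())
    return np.array([[1 if v in Q else 0 for v in V]
                     for Q in nx.find_cliques(S)])

def dicycle_vertex_incidence(G):
    """Dicycle-vertex incidence matrix C: rows are the induced
    (chordless) dicycles; nx.chordless_cycles needs networkx >= 3.1."""
    V = sorted(G.nodes())
    return np.array([[1 if v in c else 0 for v in V]
                     for c in nx.chordless_cycles(G)])
```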
§.§ The Optimality of Orthogonal Access

By the above outer bounds, we identify three families of network topologies for which orthogonal access achieves the optimal DoF region of the TIM-MP problem. The conditions for the optimality of orthogonal access are summarized in Theorem <ref> and detailed case by case in the ensuing theorems.

For the TIM-MP problem with conflict digraph 𝒢, clique-vertex incidence matrix 𝐐, and dicycle-vertex incidence matrix 𝐂, orthogonal access via fractional acyclic set coloring achieves the optimal DoF region if any one of the following cases applies:

* Case I: The conflict digraph 𝒢 contains no dicycles C_n with n ≥ 3, and 𝐐 is a perfect matrix;

* Case II: The conflict digraph 𝒢 contains no cliques Q_n with n ≥ 3, and 𝐂 is an ideal matrix;

* Case III: The symmetric part S(𝒢) is a perfect graph, and the dicycle-vertex incidence matrix 𝐂' of the conflict digraph remaining after removing the cliques Q_n (n ≥ 3) contains no minimally non-ideal submatrices.

The converse proof relies on the integrality of the set packing and covering polytopes, which has been established in polyhedral combinatorics. The achievability is due to acyclic set coloring: the sub-digraphs of the conflict digraph induced by the supports of the extreme points of these polytopes are acyclic sets. The detailed proofs are given case by case in the ensuing theorems.

For the TIM setting, it has been shown in <cit.> that orthogonal access achieves the all-unicast DoF region of the TIM problem if and only if the network topology is chordal bipartite. Analogously, we have the following theorem for the TIM-MP problem when message passing is enabled.

For the family of networks in which the conflict digraph 𝒢 contains no dicycles C_n with n ≥ 3 and 𝐐 is a perfect matrix, the optimal DoF region achieved by orthogonal access is characterized by the set packing polytope

𝒟 = {(d_k)_{k ∈ 𝒦} : 0 ≤ d_k ≤ 1, ∀ k ∈ 𝒦, ∑_{k ∈ Q} d_k ≤ 1, ∀ Q ∈ 𝒬(S(𝒢))},

where the undirected graph S(𝒢) is the symmetric part of the conflict digraph 𝒢, and 𝒬(S(𝒢)) is the set of all maximal cliques in S(𝒢).

See Appendix <ref>.

The condition that the conflict digraph 𝒢 contains no dicycles C_n with n ≥ 3 and that 𝐐 is a perfect matrix indicates that 𝒢 is a perfect digraph <cit.>. According to the definition of perfect digraphs in Appendix <ref>, Theorem <ref> establishes the optimality of orthogonal access for conflict digraphs 𝒢 excluding the following cases:

* 𝒢 contains a dicycle C_n with length n ≥ 3 as an induced sub-digraph;

* 𝒢 contains a filled odd hole or a filled odd antihole, i.e., its symmetric part S(𝒢) contains an odd hole or an odd antihole.

Similarly to <cit.>, the characterization of the optimal DoF region automatically yields the optimality of traditional metrics such as the sum or symmetric DoF. ◊

For a perfect digraph 𝒢, acyclic set coloring of 𝒢 reduces to vertex coloring of its symmetric part S(𝒢); thus any feasible coloring of the symmetric graph S(𝒢) is also feasible for 𝒢 <cit.>. If the conflict digraph has only a symmetric part, then χ_A(𝒢) = χ(S(𝒢)). In this case, orthogonal access for our problem reduces to that of the TIM problem without message passing, because the interference is mutual and message passing does not help. ◊

Consider the 6-cell network topology shown in Fig. <ref>(a). In the conflict digraph 𝒢 in Fig. <ref>(b), the symmetric part S(𝒢) in Fig. <ref>(c) is perfect, and there exist no dicycles C_n with n ≥ 3 as induced sub-digraphs, although there exist dicycles, for instance on {1,2,6}: as there is an arc (6,2), the sub-digraph induced by {1,2,6} is not a dicycle. Thus, according to Theorem <ref>, we have the optimal DoF region

𝒟 = {(d_1,…,d_6) ∈ ℝ_+^6 : d_1 ≤ 1, d_3 ≤ 1, d_5 ≤ 1, d_2 + d_4 + d_6 ≤ 1}.

It immediately follows that the symmetric and sum DoF are d_sym = 1/3 and d_sum = 4, respectively. To achieve the symmetric DoF of 1/3, we can simply schedule {W_1,W_2}, {W_3,W_4}, and {W_5,W_6} in three time slots, respectively. In each time slot, W_1, W_3, W_5 are free of interference, and W_2, W_4, W_6 can be decoded subsequently after the decoded messages W_1, W_3, W_5 are passed from receivers 1, 3, 5 to receivers 2, 4, 6, respectively. ◊

Note, however, that the condition in Theorem <ref> that the conflict digraph be perfect is only sufficient but not necessary. A counterexample is the dicycle C_3, where fractional acyclic set coloring achieves the DoF region

𝒟 = {(d_1,d_2,d_3) : 0 ≤ d_k ≤ 1, ∀ k, d_1 + d_2 + d_3 ≤ 2},

while the conflict digraph is not perfect. Realizing that Case I focuses only on the integrality of the set packing polytopes, we identify another family of networks focusing on the integrality of the set covering polytopes in the following theorem, where orthogonal access is still DoF-optimal although the conflict digraph is not perfect.
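The achievability side of such results can be sanity-checked by brute force on small instances: every 0/1 point satisfying the clique inequalities should index an acyclic set, so that time sharing over these points attains the whole region. The following is a minimal sketch (exponential in K, for illustration only).

```python
import networkx as nx
from itertools import product

def integral_points_achievable(G, cliques):
    """Check that every 0/1 point of the clique-inequality outer bound
    (at most one unit per clique) indexes an acyclic set of G."""
    V = sorted(G.nodes())
    for x in product([0, 1], repeat=len(V)):
        if all(sum(x[V.index(v)] for v in Q) <= 1 for Q in cliques):
            on = [v for v, xv in zip(V, x) if xv]
            if not nx.is_directed_acyclic_graph(G.subgraph(on)):
                return False
    return True

# a bidirected triangle: one maximal clique {0, 1, 2}
Q3 = nx.DiGraph([(u, v) for u in range(3) for v in range(3) if u != v])
print(integral_points_achievable(Q3, [[0, 1, 2]]))  # True
```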
For the family of networks in which 𝒢 contains no cliques Q_n with n ≥ 3 and 𝐂 is an ideal matrix, the optimal DoF region achieved by orthogonal access is given by the set covering polytope

𝒟 = {(d_k)_{k ∈ 𝒦} : 0 ≤ d_k ≤ 1, ∀ k ∈ 𝒦, ∑_{k ∈ C} d_k ≤ |C| - 1, ∀ C ∈ 𝒞},

where 𝒞 is the collection of all dicycles without chords.

See Appendix <ref>.

As a matter of fact, the condition that 𝐂 is ideal already excludes the existence of cliques of size 3 or more in the conflict digraph: otherwise, a clique of size 3 or more would make 𝐂 contain the non-ideal circulant submatrix 𝐂_3^2, contradicting the fact that any submatrix (minor) of an ideal matrix is also ideal. The definition of circulant matrices can be found in Appendix <ref>.

The circulant matrices that are ideal consist only of 𝐂_n^2 for even n ≥ 4, 𝐂_6^3, 𝐂_9^3, and 𝐂_8^4. ◊

Differently from perfect matrices, it is still an open problem to fully characterize all ideal matrices. It has been shown in <cit.> that a matrix is ideal if and only if it does not contain a minimally non-ideal (MNI) submatrix minor (see the definitions in Appendix <ref>). The MNI matrices are the "smallest" possible matrices that are not ideal <cit.>. A submatrix of 𝐂 is a minor of 𝐂 if it can be obtained from 𝐂 by successively deleting a column j and the rows with a '1' in column j. If a matrix is ideal, then so are all its minors; a matrix is MNI if it is not ideal but all its proper minors are. For instance, 𝐂_3^2 is an MNI matrix, so any matrix that contains 𝐂_3^2 as a minor is not ideal, e.g., the one in Fig. <ref>(c).

As special cases of ideal matrices, balanced and totally unimodular (TU) matrices are completely characterizable. [Note that, although TU and balanced matrices are special cases of perfect matrices, the conflict digraphs with incidence matrices being TU or balanced are not a subclass of those in Theorem <ref>, because two different incidence matrices are considered.] The characterization of balanced and totally unimodular matrices is well understood: a matrix is balanced if and only if it contains no odd hole matrices (i.e., 𝐂_n^2 with odd n ≥ 3) as submatrices, and a polynomial-time recognition algorithm for balanced matrices based on decomposition was given in <cit.>. A full characterization of TU matrices was given in <cit.>, where a matrix is TU if and only if it is a certain natural combination of network matrices and copies of a particular 5-by-5 TU matrix. ◊

First, consider the 4-user TIM-MP instance shown in Fig. <ref>(a). There are two dicycles, {(1,2),(2,3),(3,1)} and {(1,4),(4,3),(3,1)}, and no cliques, so the outer bound is given by the two cycle inequalities d_1 + d_2 + d_3 ≤ 2 and d_1 + d_3 + d_4 ≤ 2 together with the individual inequalities d_k ≤ 1, ∀ k. It is not hard to verify that the dicycle-vertex incidence matrix

𝐂 = [1 1 1 0; 1 0 1 1]

is totally unimodular, and thus ideal. The extreme points of the polytope consist of all binary 4-tuples excluding (1,0,1,1), (1,1,1,0), and (1,1,1,1). It can be checked that all these extreme points are achievable by acyclic set coloring, so the DoF region is achieved by time sharing among them.

For the instance in Fig. <ref>(b), the corresponding dicycle-vertex incidence matrix is ideal, while in Fig. <ref>(c), with an arc (2,4) added, the resulting dicycle-vertex incidence matrix is no longer ideal. In Fig. <ref>(c), the corresponding dicycle-vertex incidence matrix is

𝐂 = [1 1 1 0 0 0; 0 0 1 1 1 0; 0 1 0 1 0 1],

which contains the MNI matrix 𝐂_3^2 as a submatrix. So it is neither a balanced matrix nor an ideal matrix. ◊
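Total unimodularity of such small incidence matrices can be verified by brute force over all square submatrices, as in the following sketch (exponential-time, intended only for matrices of this size).

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(A):
    """Brute-force TU test: every square submatrix must have determinant
    in {-1, 0, 1}."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

C = np.array([[1, 1, 1, 0],
              [1, 0, 1, 1]], dtype=float)
print(is_totally_unimodular(C))  # True -> ideal, as used in the example
```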
In the following corollary, we give some explicit characterizations of conflict digraphs for which orthogonal access is DoF-optimal, according to Theorem <ref>. The proof is relegated to Appendix <ref>.

For the TIM-MP problem, orthogonal access achieves the optimal DoF region given by (<ref>) if any one of the following conditions is satisfied:

* All induced dicycles in the conflict digraph 𝒢 are disjoint (i.e., no two induced dicycles share a vertex).

* There exist at most two chordless dicycles in the conflict digraph 𝒢; this includes the (K,2) regular network, whose conflict digraph is a single dicycle.

By Theorems <ref> and <ref>, we obtain the optimality of orthogonal access for the three-user network with all possible topologies (16 non-isomorphic instances in total), with proof relegated to Appendix <ref>.

For the three-user TIM-MP problem, orthogonal access achieves the optimal DoF region.

Theorem <ref> handles perfect conflict digraphs with neither dicycles of length 3 or more nor odd holes or antiholes in the symmetric part, while Theorem <ref> considers the case with dicycles but without cliques of size 3 or more. The former considers only the tightness of the clique inequalities, whereas the latter focuses only on the cycle inequalities. Beyond Theorems <ref> and <ref>, we present another sufficient condition for the optimality of orthogonal access, where the conflict digraph contains both dicycles and cliques.

For the family of networks in which S(𝒢) is a perfect graph and the dicycle-vertex incidence matrix 𝐂' of the conflict digraph remaining after removing the cliques Q_n (n ≥ 3) contains no MNI submatrices, the optimal DoF region achieved by orthogonal access is given by the mixed set packing and covering polytope

𝒟 = {(d_k)_{k ∈ 𝒦} : 0 ≤ d_k ≤ 1, ∀ k ∈ 𝒦,
∑_{k ∈ C} d_k ≤ |C| - 1, ∀ C ∈ 𝒞',
∑_{k ∈ Q} d_k ≤ 1, ∀ Q ∈ 𝒬'},

where 𝒞' is the collection of all minimal dicycles, 𝒬' is the collection of all maximal cliques of size at least 3, and |C ∩ Q| ≤ 1 for all C ∈ 𝒞', Q ∈ 𝒬'.

See Appendix <ref>.

A special type of network in Case III is one in which the concatenation [𝐐; 𝐂] of the clique- and dicycle-vertex incidence matrices is balanced, so that the corresponding polytope is integral.

It is still an open problem to fully sort out all MNI matrices, although some of their properties have been proven. Lehman <cit.> showed that if 𝐂 is an MNI matrix, then 𝐂 is isomorphic (up to a permutation of rows followed by a permutation of columns) to either (1) the degenerate projective plane 𝐉_n with n ≥ 2, or (2) 𝐂 = [𝐂_1; 𝐂_2], where 𝐂_1 is a square nonsingular matrix with r ≥ 2 ones per row and per column, and each row of 𝐂_2 has at least r+1 ones. The known MNI matrices include the circulant matrices 𝐂_n^2 for odd n ≥ 3, 𝐂_5^3, 𝐂_8^3, 𝐂_11^3, 𝐂_14^3, 𝐂_17^3, 𝐂_7^4, 𝐂_11^4, 𝐂_9^5, 𝐂_11^6, and 𝐂_13^7 <cit.>, the degenerate projective planes 𝐉_n with n ≥ 3, and the Fano plane 𝐅_7. Fig. <ref> presents some conflict digraphs whose dicycle-vertex incidence matrices are MNI matrices.

For the instance in Fig. <ref>(a), the optimal DoF region is given by

𝒟 = {(d_k)_{k ∈ 𝒦} : 0 ≤ d_k ≤ 1, ∀ k, d_1 + d_2 + d_3 ≤ 1, d_3 + d_4 + d_5 ≤ 2},

where the cycle inequalities d_i + d_j ≤ 1 for i ≠ j ∈ {1,2,3} are replaced by a single clique inequality. The extreme points of 𝒟 are the binary 5-tuples apart from (1,1,*,*,*), (1,*,1,*,*), (*,1,1,*,*), and (*,*,1,1,1), where * denotes either 0 or 1. It can easily be checked that all extreme points are achievable by acyclic set coloring.
For the instance in Fig. <ref>(b), the optimal DoF region is given by

𝒟 = {(d_k)_{k ∈ 𝒦} : 0 ≤ d_k ≤ 1, ∀ k,
d_1 + d_5 ≤ 1, d_2 + d_6 ≤ 1, d_3 + d_4 ≤ 1,
d_1 + d_2 + d_3 ≤ 2, d_4 + d_5 + d_6 ≤ 1},

where the cycle inequalities d_i + d_j ≤ 1 for i ≠ j ∈ {4,5,6} are replaced by a single clique inequality. After this replacement, the resulting polytope is integral, and the extreme points are achievable by acyclic set coloring. ◊

§ A GENERALIZED INDEX CODING PROBLEM

Building upon the relation between index coding and TIM <cit.>, we also establish an analogous relation to TIM-MP. As in Appendix <ref>, the goal of index coding is to minimize the number of transmissions such that all receivers are able to decode their own messages simultaneously. As TIM-MP is a generalization of TIM, we introduce a generalization of index coding, referred to as "successive index coding (SIC)", in which message decoding is not necessarily simultaneous.

§.§ Successive Index Coding (SIC)

In the SIC problem, the receivers are allowed to declare their own messages once they decode them. Such declarations offer other receivers additional side information, by which the minimum number of transmissions can be further reduced. The multiple-unicast SIC problem considers a noiseless broadcast channel, where a transmitter wants to send the message W_j to receiver j, which has prior knowledge of the initial side information W_{𝒮_j} (𝒮_j ⊆ 𝒦 \ {j}) as well as the additional side information acquired through successive decoding and message passing. The goal is to find the minimum number of transmissions (i.e., the broadcast rate) over all possible successive decoding and message passing orders such that each receiver can successively decode its desired message.

The additional side information depends on the decoding order. For single-round message passing, given a decoding order π, the partial order i ≺_π j indicates message passing from receiver i to receiver j, which is equivalent to enhancing the side information index set to

𝒮_j^π ≜ 𝒮_j ∪ {i : i ≺_π j}

for the specific decoding order π. In what follows, we formally define the n-receiver SIC problem.

A (t_1,…,t_n,r) successive index coding scheme with side information index sets {𝒮_1,…,𝒮_n} and a given decoding order π consists of the following:

* an encoding function ϕ: ∏_{i=1}^n {0,1}^{t_i} ↦ {0,1}^r at the transmitter that encodes the n-tuple of messages (codewords) x^n into a length-r index code;

* a decoding function at receiver j, ψ_j^π: {0,1}^r × ∏_{k ∈ 𝒮_j^π} {0,1}^{t_k} ↦ {0,1}^{t_j}, ∀ j, that recovers x_j from the received index code, the initial side information x(𝒮_j) held at receiver j, and the passed messages that were decoded earlier.

The initial side information digraph, or conflict digraph 𝒢, of a successive index coding instance is identical to that of the corresponding index coding instance. Thus, for a given decoding order π, a rate tuple (R_1^π,…,R_n^π) is said to be achievable if there exists a successive index code (t_1,…,t_n,r) with

ψ_j^π(ϕ(x^n), x(𝒮_j^π)) = x_j, ∀ j,

such that any rate tuple (R_1^π,…,R_K^π) in the rate region ℛ^π is achievable with

R_j^π ≤ t_j / r, ∀ j.

Similarly to the TIM-MP problem, the capacity region 𝒞 of the SIC problem is the set of all achievable rate tuples when time sharing among multiple single-round message passings is allowed; more specifically, it is the convex hull of the union of the achievable rate regions over all possible decoding orders, i.e., 𝒞 = conv (∪_π ℛ^π).
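The bookkeeping of the enhanced side information sets 𝒮_j^π is simple to express in code; the following sketch (the function name and the toy instance are our own) computes them for a given single-round decoding order under unconstrained message passing.

```python
def enhanced_side_info(side_info, order):
    """Enhanced side-information index sets S_j^pi: under the decoding
    order `order` (a list, earliest first), receiver j additionally holds
    every message decoded before it.  `side_info[j]` is the initial set."""
    pos = {j: t for t, j in enumerate(order)}
    return {j: set(side_info[j]) | {i for i in order if pos[i] < pos[j]}
            for j in side_info}

# 3 receivers, no initial side information, decoding order 0 -> 1 -> 2
print(enhanced_side_info({0: [], 1: [], 2: []}, [0, 1, 2]))
# {0: set(), 1: {0}, 2: {0, 1}}
```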
By the channel enhancement approach in <cit.>, it is readily shown that the DoF region of every TIM-MP instance is outer-bounded by the capacity region of the corresponding SIC instance, and that the two problems are equivalent under linear coding schemes (i.e., with linear encoding/decoding functions). In particular, for each single-round message passing with a given decoding order π, the resulting TIM-MP (SIC) problem can be treated as a modified TIM (index coding) problem with the updated side information sets in (<ref>).

As linear coding schemes are considered in the previous sections, the results obtained for TIM-MP are applicable to SIC. Specifically, the achievable symmetric DoF or DoF region of TIM-MP with conflict digraph 𝒢 is also the achievable symmetric rate or rate region of SIC with the corresponding initial side information digraph. Similarly, the sufficient and necessary conditions seen before for orthogonal access in TIM-MP also apply to the corresponding SIC setup. In the rest of this section, we focus on the broadcast rate, defined as

β^SIC(𝒢) = inf {b : (1/b,…,1/b) ∈ 𝒞},

which is the minimum number of transmissions (i.e., the number of transmitted symbols over the shared link normalized by the total message length) of the SIC problem, also known as the reciprocal of the symmetric capacity.

§.§ Analogy to Index Coding

Analogously to the index coding problem, we define some achievability schemes for the SIC problem. As partial clique covering is a generalized version of clique and cycle covering, we only present a definition of partial cliques in the conflict digraph 𝒢 of a SIC instance, named the weakly degenerate set.

A conflict sub-digraph 𝒢[𝒰] is a weakly m-degenerate set if every induced sub-digraph of it has a vertex of out-degree or in-degree no more than m, i.e., for all 𝒰' ⊆ 𝒰, there exists v_q ∈ 𝒰' such that min{d^-(𝒢[𝒰'], v_q), d^+(𝒢[𝒰'], v_q)} ≤ m.

The weakly degenerate set is a generalization of the partial clique to the SIC problem: an acyclic set is weakly 0-degenerate, a dicycle is weakly 1-degenerate, and the clique Q_k is weakly (k-1)-degenerate.

The broadcast rate of the SIC problem with conflict digraph 𝒢 = (𝒱, ℰ) is upper-bounded via weakly degenerate set covering by

β^SIC(𝒢) ≤ min_{𝒰_1,…,𝒰_s} ∑_{i=1}^s (m_i + 1),

where the minimum is over all partitions {𝒰_1,…,𝒰_s} of 𝒱, and 𝒢[𝒰_i] is a weakly m_i-degenerate set for all i = 1,…,s.

The analogy of achievability between the index coding and successive index coding problems is summarized in Fig. <ref>. Note here that, for index coding, (fractional) clique covering on the side information digraph is equivalent to (fractional) vertex coloring on the underlying undirected conflict graph. As a generalized version of the acyclic set, however, weakly degenerate set covering does not offer any improvement over acyclic set coloring, unlike in the index coding problem, where partial clique covering indeed outperforms clique covering (i.e., independent set coloring on conflict graphs). This is because χ_A(𝒢[𝒰]) ≤ m+1 if 𝒢[𝒰] is a weakly m-degenerate set, meaning that weakly degenerate set covering offers no gain over acyclic set coloring. In other words, message passing renders the (weakly degenerate) set partition of conflict digraphs useless for SIC problems in terms of broadcast rate. Nevertheless, the weakly degenerate set partition has the potential to restrict message passing locally, shorten the decoding latency of the entire network, and facilitate the tradeoff between broadcast rate and message passing overhead.
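The hereditary condition in the definition of a weakly m-degenerate set can be tested by greedy peeling: repeatedly delete a vertex whose in- or out-degree is at most m, which succeeds if and only if every induced sub-digraph contains such a vertex. A small sketch, assuming networkx:

```python
import networkx as nx

def is_weakly_m_degenerate(G, U, m):
    """Peeling test: repeatedly delete a vertex whose in- or out-degree
    in the current induced sub-digraph is at most m; the set qualifies
    iff everything peels off."""
    H = G.subgraph(U).copy()
    while len(H) > 0:
        v = next((v for v in H
                  if min(H.in_degree(v), H.out_degree(v)) <= m), None)
        if v is None:
            return False
        H.remove_node(v)
    return True

# a dicycle is weakly 1-degenerate but not weakly 0-degenerate
C3 = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
print(is_weakly_m_degenerate(C3, [0, 1, 2], 1),
      is_weakly_m_degenerate(C3, [0, 1, 2], 0))  # True False
```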
§.§ Reducibility and Criticality

§.§.§ Reducibility: When is a Vertex/Message Reducible?

The objective of studying vertex-reducibility is to remove vertices from the conflict digraph without changing the broadcast rate, so as to reduce a large SIC instance to a smaller one with fewer vertices.

A vertex v in the conflict digraph 𝒢 is reducible if its removal does not decrease the broadcast rate of the corresponding SIC problem, i.e., β^SIC(𝒢 - v) = β^SIC(𝒢).

By strong component decomposition (see Appendix <ref>), we have the following theorem.

Given the unique strong component decomposition 𝒫 = {𝒱_1,…,𝒱_s}, let 𝒱_s^* be the strong component with the maximal fractional dichromatic number. If 𝒢[𝒱_s^*] falls into the digraph classes in Theorem <ref>, then the vertices in 𝒱(𝒢) \ 𝒱_s^* are reducible.

See Appendix <ref>.

The vertices that are not involved in any dicycle (e.g., those with only incoming or only outgoing arcs) are reducible. In particular, every vertex of a directed acyclic graph is reducible, which agrees with the fact that the broadcast rate of SIC instances with directed acyclic conflict digraphs is 1. ◊

The strong component with the maximal fractional dichromatic number is not necessarily the one of largest size. For the strong component of largest size, denoted 𝒢[𝒱_max], the index coding problem corresponding to 𝒢[𝒱_max] yields an upper bound for the original successive index coding problem, i.e., β^SIC(𝒢) ≤ β(𝒢[𝒱_max]): if a broadcast rate is achievable for that index coding instance, it is also achievable for the original SIC instance. This is because choosing a single vertex from each strong component forms an acyclic set, so the broadcast rate of 𝒢[𝒱_max] in the index coding setting without message passing dominates. ◊

Let us take the two conflict digraphs shown in Fig. <ref> as illustrative examples. The induced sub-digraphs in the shadow are the strong components with the maximal fractional dichromatic numbers: a clique Q_4 (on the left) and a dicycle C_2, both of which are perfect digraphs and fall into the cases of Theorem <ref>. Thus, both SIC instances can be reduced, without loss of broadcast rate, to the shadowed strong components, so that β^SIC(𝒢) = β^SIC(𝒢[𝒱_s^*]) = 4 for Fig. <ref>(a) and β^SIC(𝒢) = β^SIC(𝒢[𝒱_s^*]) = 2 for Fig. <ref>(b). Note also in Fig. <ref>(b) that the dicycle C_3 is the strong component of largest size, but its fractional dichromatic number χ_A,f = 3/2 is not maximal, because the shadowed component has χ_A,f = 2. Thus, we have the upper bound β^SIC(𝒢) ≤ β(𝒢[𝒱_max]) = 2; together with the cycle bound β^SIC(𝒢) ≥ 2, we obtain the optimal broadcast rate β^SIC(𝒢) = 2. ◊
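The reduction suggested by the theorem above is straightforward to prototype: decompose into strong components and keep the one maximizing the fractional dichromatic number (computable, for small digraphs, as in the LP sketch of Section <ref>). The helper below is a hedged sketch, with `score` standing in for any such evaluation routine.

```python
import networkx as nx

def reduce_by_strong_components(G, score):
    """Vertex reduction via strong-component decomposition: keep only the
    strong component maximizing `score` (e.g., fractional_dichromatic);
    all other vertices are reducible when that component falls in the
    classes covered by the optimality theorem."""
    comps = nx.strongly_connected_components(G)
    best = max(comps, key=lambda c: score(G.subgraph(c)))
    return G.subgraph(best).copy()
```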
§.§.§ Criticality: When is an Arc/Interference Critical?

The objective of studying arc-criticality is to remove arcs from the conflict digraph without changing the broadcast rate, so as to reduce a SIC instance with a large arc set to a smaller one with fewer arcs.

An arc e in the conflict digraph 𝒢 is critical if its removal strictly decreases the broadcast rate of the corresponding SIC problem, i.e., β^SIC(𝒢 - e) < β^SIC(𝒢). (The removal of an arc, i.e., an interference link, does not increase the interference in the network, so the broadcast rate cannot increase.) A conflict digraph 𝒢 is said to be critical if every arc in 𝒢 is critical.

In a conflict digraph 𝒢, an arc is critical if it belongs to one of the following:

* the unique minimal dicycle, when the dicycle-vertex incidence matrix of 𝒢 is ideal;

* the unique maximal clique, when 𝒢 is a perfect digraph.

See Appendix <ref>.

In a conflict digraph 𝒢, a critical arc must belong to an induced dicycle; otherwise, it can be removed without affecting the capacity region. Thus, if a conflict digraph is critical, it must be strongly connected. In Fig. <ref>(b), the arcs lying inside the shadow, which belong to the unique and shortest chordless dicycle, are critical. In Fig. <ref>(b), the arcs forming the clique, which is the largest and unique clique, are critical.

§ LINEAR OPTIMALITY

In view of the equivalence between the TIM-MP and SIC problems under linear coding schemes, in this section we restrict ourselves to linear coding schemes and consider the optimality of orthogonal access for some instances in terms of the linear symmetric DoF d_sym,l for TIM-MP (or the linear symmetric rate R_sym,l = 1/β_l^SIC for SIC).

§.§ Linear Optimality of Some MNI Matrices

In what follows, we show that, for two network topologies that are not covered by Theorem <ref>, orthogonal access is linearly optimal for both the TIM-MP and SIC problems.

For the TIM-MP and SIC instances with dicycle-vertex incidence matrices 𝐂_5^2 and 𝐉_3, orthogonal access achieves the optimal linear symmetric DoF/rate, where

d_sym,l(𝒢(𝐂_5^2)) = R_sym,l(𝒢(𝐂_5^2)) = 2/5, d_sym,l(𝒢(𝐉_3)) = R_sym,l(𝒢(𝐉_3)) = 2/5.

See Appendix <ref>.

The conflict digraphs with dicycle-vertex incidence matrices 𝐂_5^2 and 𝐉_3 are shown in Fig. <ref>(a) and Fig. <ref>(b), respectively.

§.§ Small Networks with Reduction

Together with the reducibility and criticality of the TIM-MP/SIC problems, we show the linear optimality of orthogonal access for all 4-user network topologies (in total, 218 non-isomorphic conflict digraphs).

For TIM-MP/SIC problems with up to 4 users, orthogonal access achieves the linearly optimal symmetric DoF/rate for all topologies.

See Appendix <ref>.

Thanks to reducibility and criticality, the network topologies that can be reduced to the 3-user case are already handled by Corollary <ref>, and we only need to focus on the case where every vertex is irreducible and every arc is critical. This substantially reduces the number of non-isomorphic instances that need to be considered from 218 to 6. It can also be checked that, for TIM-MP/SIC instances with up to 5 users, orthogonal access achieves the optimal linear symmetric DoF/rate for almost all topologies, i.e., for all but two of the in total 9608 non-isomorphic ones. These two instances are 𝒢(𝐉_4) and 𝒢(𝐂_5^3), shown in Fig. <ref>(a) and Fig. <ref>(b), respectively. This reduction approach based on reducibility and criticality has great potential for identifying the symmetric DoF/rate of larger networks, although the number of non-isomorphic topologies increases dramatically as the number of users grows.

§ DISCUSSION: MESSAGE PASSING OVERHEAD AND ACHIEVABLE RATE TRADEOFF

From the previous sections, we have seen that decoded message passing is so powerful that it leads to orthogonal access with almost always optimal DoF, if no constraints are imposed on message passing. In practice, however, message passing may incur some cost; for example, in the uplink of a cellular system there may be limitations on the usage of the wired network connecting the base station receivers.
Then, it is meaningful to study the case where only a limited number of messages can be passed along from each receiver. In this section, we first consider the case where a single passed message in the entire network can help improve the DoF region of the TIM problem, followed by the generalization to an arbitrary number of passed messages and the formulation of a matrix completion problem for the tradeoff between the achievable symmetric DoF and the message passing overhead.

§.§ When Does One Message Passing Help?

Given a specific decoding order, the decoded message passing is also determined. After interference cancelation with the passed messages, the TIM-MP problem becomes a modified TIM problem with some interfering links removed. According to the equivalence between the TIM and index coding problems <cit.>, a decoding order corresponds to arc removal in the conflict digraph 𝒢 of the TIM problem, and equivalently to arc addition in the side information digraph of the index coding problem. A natural question is whether a message passing helps in the sense that the corresponding arc removal from 𝒢 increases the DoF region of the TIM problem. The following theorem offers a sufficient condition.

A message passing is helpful if the addition of the corresponding arc in the side information digraph forms new dicycles as induced sub-digraphs.

See Appendix <ref>.

The newly formed dicycle is not necessarily unique; the arc addition may form multiple dicycles. As message passing is never harmful, the DoF region is enlarged as long as a new dicycle is formed.

While the above condition is only sufficient in general, it is also necessary for chordal bipartite networks. We have the following corollary, whose proof is presented in Appendix <ref>.

For chordal bipartite networks, a message passing is helpful if and only if the addition of the corresponding arc in the side information digraph forms new dicycles as induced sub-digraphs.

Let us consider the simple network topology shown in Fig. <ref>(a), which is a chordal bipartite network <cit.>. Its conflict digraph 𝒢 and the side information digraph of the corresponding index coding problem are shown in (b) and (c), respectively. The DoF region of this network topology is d_1 + d_2 + d_3 ≤ 1, and the symmetric DoF is 1/3. In Fig. <ref>(d), the addition of the arc (1,3) forms a new dicycle C_3, which increases the symmetric DoF to d_sym = 1/2. Removing the uni-directed arc (1,2) in 𝒢, as in Fig. <ref>(e), enhances the DoF region to {d_1 + d_3 ≤ 1, d_2 + d_3 ≤ 1}. The DoF region of Fig. <ref>(f) after removing the arc (3,1) in 𝒢 is still d_1 + d_2 + d_3 ≤ 1, because that arc is not uni-directed in 𝒢, nor does its addition in the side information digraph form a new dicycle. ◊

As a side remark, a similar setting was investigated for the index coding problem in <cit.> under the name of "critical index coding". The goal of the critical index coding problem is to determine whether an arc in the side information digraph is critical in the sense that its removal reduces the capacity region of the index coding problem; it can be regarded as the dual of our problem in terms of arc removal/addition.

§.§ The Achievable Rate with Message Passing Constraint

Let p be the total message passing budget. The question to ask is: given p ≥ 2, how should the p passed messages be chosen so that the achievable DoF is maximized? This generalizes the previous subsection, in which p = 1.
Given a conflict digraph, the problem is to choose p arcs of 𝒢 such that, after removing them, the broadcast rate of the index coding problem associated with the resulting conflict digraph is improved.

As shown in <cit.>, the TIM problem can be formulated as a matrix completion problem that minimizes the rank of the binary matrix 𝐌 fitting the conflict digraph 𝒢, where

𝐌_ij = 1 if i = j; 0 if (i,j) ∈ ℰ(𝒢); and * otherwise,

with * being an indeterminate value. The solution to the rank minimization problem gives a realization of 𝐌 with all entries determined. By the matrix decomposition 𝐌 = 𝐔𝐕 with 𝐔 ∈ ℝ^{K × r} and 𝐕 ∈ ℝ^{r × K}, where K and r are respectively the number of users and the minimal rank of 𝐌, we can assign the rows of 𝐔 to the transmitters and the columns of 𝐕 to the receivers as the precoding and decoding vectors over r symbol extensions (i.e., time slots). This gives a feasible coding scheme for the corresponding TIM instance, by which the symmetric DoF 1/r is achievable; the details can be found in <cit.>.

Similarly, we can formulate the TIM-MP problem as a modified matrix completion problem. A message passing from i to j induces a change of 𝐌_ij from zero to an indeterminate value. Hence, the tradeoff between the achievable symmetric DoF and the message passing overhead is to minimize the rank of 𝐌 subject to at most p changes of the zero elements. We introduce a message passing indicator matrix 𝐁 with

𝐁_ij = 1 if there is message passing from i to j, and 0 otherwise,

and define a matrix 𝐌̃ with

𝐌̃_ij = 1 if i = j; 0 if 𝐌_ij = 0 and 𝐁_ij = 0; and * otherwise.

For a given message passing overhead p, the achievable symmetric DoF 1/r can be obtained by solving the following optimization problem:

r = min_𝐁 rank(𝐌̃)
s.t. ‖𝐁‖_0 ≤ p,
{(i,j) : 𝐁_ij = 1} induces an acyclic set in 𝒢,

where the first constraint limits the number of passed messages to p, and the second ensures that the message passing is feasible: it is impossible to pass a message that is to be decoded later to a receiver that must decode earlier, so the sub-digraph induced by the passed messages must be an acyclic set.

In particular, given a sufficiently large message passing budget p, the solution to this optimization problem yields an interference alignment solution to the TIM-MP problem. This solution is not necessarily a one-to-one alignment, and may therefore improve over orthogonal access. On the other hand, when the budget p is constrained, the orthogonal access solution under the TIM-MP setting may not be feasible, and one-to-one alignment becomes useful again.

The optimization problem is non-convex and of a combinatorial nature, and thus hard to solve. Some existing algorithmic methods for matrix completion (see <cit.> and the references therein) can be applied to obtain approximate solutions. Algorithm design and convergence analysis are interesting problems yet beyond the scope of this paper; instead, we present a typical example for illustration.

Let us consider the 4-user triangular network shown in Fig. <ref>(a). The tradeoff between the reciprocal symmetric DoF r = 1/d_sym and the number of passed messages p is illustrated in Fig. <ref>(b). When p = 0, it is a conventional TIM problem without message passing, and thus r = 4. When p = 1 and p = 2, the removal of any one or two cross links enables a vertex coloring with 3 and 2 colors, thus yielding r = 3 and r = 2, respectively. To completely remove all interfering links, we need 6 passed messages. ◊
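As a toy illustration of the rank minimization above, the following brute-force sketch fills the indeterminate entries of a given pattern 𝐌̃ from the small set {-1, 0, 1}; real-valued completions can only do better, so this yields an upper bound on r, and the exponential search is viable only for tiny instances. For the 3-user cyclic pattern below it finds rank 2.

```python
import numpy as np
from itertools import product

def min_rank_completion(M_pattern):
    """Brute-force stand-in for the rank minimization: '*' entries
    (np.nan here) are filled from {-1, 0, 1} only, giving an upper
    bound on the minimal rank over real completions."""
    stars = list(zip(*np.where(np.isnan(M_pattern))))
    best = M_pattern.shape[0]
    for fill in product([-1.0, 0.0, 1.0], repeat=len(stars)):
        M = M_pattern.copy()
        for (i, j), v in zip(stars, fill):
            M[i, j] = v
        best = min(best, np.linalg.matrix_rank(M))
    return best

nan = np.nan  # 3-user cyclic conflict pattern: zeros on the arcs
M = np.array([[1.0, 0.0, nan],
              [nan, 1.0, 0.0],
              [0.0, nan, 1.0]])
print(min_rank_completion(M))  # 2
```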
§ CONCLUSION

The topological interference management with decoded message passing (TIM-MP) problem in partially-connected uplink cellular networks has been considered, where the transmitters have access only to topological information without knowing the channel realizations, and the receivers are able to pass their decoded messages, once decoded, to other receivers. By modeling the interference pattern as a directed conflict graph, we have bridged orthogonal access in this setting to acyclic set coloring on directed conflict graphs. With the aid of polyhedral combinatorics, we have shown that orthogonal access achieves the optimal DoF region for certain classes of networks. The relation to index coding has also been investigated, connecting TIM-MP to a generalized index coding problem in which a successive decoding and message passing policy at the receivers is allowed. Reducibility and criticality were also studied to reduce large-size problems to smaller ones, by which the linear optimality of orthogonal access was shown for small networks of up to four users with all possible topologies. The practical tradeoff between the achievable DoF and message passing was also discussed, from the usefulness of a single message passing to the general case formulated as a matrix completion problem.

Yet, the fundamental limits of decoded message passing in the TIM-MP setting are not fully understood. With message passing, whether or not orthogonal access is sufficient to achieve the (linearly) optimal DoF region for all network topologies is still an intriguing yet challenging problem. Although one-to-one interference alignment offers no benefit beyond orthogonal access in this TIM-MP setting, it is also interesting to see whether subspace alignment has gains. The tradeoff between the network performance (e.g., achievable DoF) and the overhead of the backhaul links (e.g., the number of passed messages) is also a research avenue of great interest.

§.§ Graph Theory

In what follows, definitions and results pertaining to graph theory are briefly recalled for the reader's convenience. Interested readers are referred to the textbooks <cit.> and papers <cit.> for more details.

In this paper, we mainly focus on directed graphs (digraphs), usually denoted by 𝒢 = (𝒱, ℰ) with a vertex set 𝒱 and an arc set ℰ consisting of ordered pairs of vertices. An arc (u,v) ∈ ℰ with u, v ∈ 𝒱 is a directed edge from u to v. The underlying undirected graph 𝒢_u of 𝒢 is created in such a way that any two vertices are joined by an edge in 𝒢_u if and only if there exists at least one arc between them in 𝒢. We denote by S(𝒢) the symmetric part of 𝒢, which is the undirected graph in which any two vertices u, v are joined by an edge if and only if both (u,v) ∈ ℰ and (v,u) ∈ ℰ hold. An arc with either (u,v) ∈ ℰ or (v,u) ∈ ℰ but not both is referred to as uni-directed; otherwise it is bi-directed. The complement of a digraph 𝒢 = (𝒱, ℰ), denoted by 𝒢̄ = (𝒱, ℰ̄), has the same vertex set 𝒱, and (u,v) ∈ ℰ̄ if and only if (u,v) ∉ ℰ. The sub-digraph of 𝒢 induced by a vertex set 𝒰, denoted 𝒢[𝒰], is such that, for all u, v ∈ 𝒰, an arc (u,v) ∈ ℰ(𝒢[𝒰]) if (u,v) ∈ ℰ(𝒢). The in-degree [resp. out-degree] of a vertex v, denoted by d^-(𝒢,v) [resp. d^+(𝒢,v)], is the number of vertices u ∈ 𝒱 such that (u,v) ∈ ℰ [resp. (v,u) ∈ ℰ].
The maximum in-degree, denoted by Δ^-(𝒟), is the maximum value of d^-(𝒟,v) over all vertices v. A directed cycle (dicycle) of length n, denoted by C_n=(v_0,v_1,…,v_{n-1}), refers to the induced sub-digraph with arcs {(v_i, v_{(i+1) mod n}), i ∈ {0,…,n-1}}, beyond which there do not exist any other arcs. C_n is an odd cycle if n is odd, and an even cycle if n is even. A digraph (sub-digraph) is acyclic if it does not contain any dicycles. A directed acyclic sub-digraph is referred to as an acyclic set. Every acyclic set has at least one vertex of in-degree 0 and at least one vertex of out-degree 0. Every acyclic set 𝒮 has an acyclic ordering of its vertices, i.e., there exists an ordering v_1,v_2,…,v_n of the vertices in 𝒮 such that for every arc (v_i,v_j) ∈ 𝒜 we have i < j.

A directed path v_0 → v_n is a set of arcs {(v_0,v_1),…,(v_i,v_{i+1}),…,(v_{n-1},v_n)} connecting v_0 to v_n. A digraph 𝒟=(𝒱,𝒜) is strongly connected (or strong) if for every two distinct vertices v_i,v_j ∈ 𝒱 there exist directed paths v_i → v_j and v_j → v_i in 𝒟. Dicycles and cliques are strongly connected. A strong component of 𝒟 is a maximal induced sub-digraph which is strongly connected. A partition 𝒮={𝒮_1,…,𝒮_s} with 𝒱=∪_{i=1}^s 𝒮_i and 𝒮_i ∩ 𝒮_j=∅ for all i ≠ j is called a strong component decomposition if every sub-digraph 𝒟[𝒮_i] is a strong component.

Formally, the dichromatic number <cit.> of a digraph 𝒟, χ_A(𝒟), is the smallest cardinality |𝒞| of a color set 𝒞 such that it is possible to assign a color from 𝒞 to each vertex of 𝒟 so that, for every color c ∈ 𝒞, the sub-digraph induced by the vertices colored with c is acyclic. The dichromatic number χ_A generalizes the notion of the chromatic number χ from graphs to digraphs: the subgraph induced by the vertices with the same color is an independent set in a graph, while it is an acyclic set in a digraph. χ_A(𝒟) ≤ χ(U(𝒟)) holds for any digraph 𝒟 and its underlying undirected graph U(𝒟), because independent sets in 𝒟 are special acyclic sets, so that any proper coloring of U(𝒟) is a proper acyclic set coloring of 𝒟 <cit.>.

A clique of a digraph is a sub-digraph in which, for any two distinct vertices u and v, both arcs (u,v) and (v,u) exist. A maximal clique of a (di)graph is a clique that is not contained in any larger clique as an induced sub-(di)graph. We denote by Q_n a clique with n vertices. A clique in the digraph 𝒟 is also a clique in the symmetric undirected graph S(𝒟). Each vertex is a clique of size 1. The clique number ω(𝒟) is the size of the largest clique in 𝒟. Obviously, χ_A(𝒟) ≥ ω(𝒟), because no two vertices of a clique can lie in the same acyclic set.

A chordless cycle is a set of vertices and edges that form a closed loop without a chord, where a chord is an edge that connects two non-adjacent vertices of the cycle. The length of a cycle is the number of vertices in the cycle. A hole is a chordless cycle of length greater than 4, and an antihole is the complement of a hole. An odd hole is a hole of odd length, and an odd antihole is its complement. An undirected graph is perfect <cit.> if and only if it contains neither odd holes nor odd antiholes as induced subgraphs. A chordal bipartite network is a network whose bipartite topology graph does not contain induced chordless cycles of length 6 or more. A digraph 𝒟 is perfect if, for every induced sub-digraph 𝒟[𝒮] of 𝒟 induced by a vertex subset 𝒮, χ_A(𝒟[𝒮]) = ω(𝒟[𝒮]). A brute-force illustration of χ_A, ω, and this perfection condition is given below.
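The sketch below evaluates χ_A and ω by exhaustive search, and checks the perfection condition on every induced sub-digraph. It is exponential and meant only for building intuition on tiny digraphs; all names are our own.

```python
from itertools import product, combinations

def is_acyclic(verts, arcs):
    """True iff the digraph (verts, arcs) contains no dicycle (Kahn's algorithm)."""
    verts = list(verts)
    indeg = {v: 0 for v in verts}
    for (_, v) in arcs:
        indeg[v] += 1
    stack = [v for v in verts if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for (a, b) in arcs:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    stack.append(b)
    return seen == len(verts)

def induced(arcs, S):
    return [(u, v) for (u, v) in arcs if u in S and v in S]

def chi_A(verts, arcs):
    """Dichromatic number by exhaustive colouring (tiny digraphs only)."""
    verts = list(verts)
    for k in range(1, len(verts) + 1):
        for col in product(range(k), repeat=len(verts)):
            classes = [{v for v, c in zip(verts, col) if c == i} for i in range(k)]
            if all(is_acyclic(S, induced(arcs, S)) for S in classes):
                return k

def omega(verts, arcs):
    """Clique number: largest S in which every pair is bi-directed."""
    best = 1
    for r in range(2, len(verts) + 1):
        for S in combinations(verts, r):
            if all((u, v) in arcs and (v, u) in arcs for u, v in combinations(S, 2)):
                best = r
    return best

def is_perfect_digraph(verts, arcs):
    """Check chi_A = omega on every induced sub-digraph (the definition above)."""
    verts = list(verts)
    for r in range(1, len(verts) + 1):
        for S in combinations(verts, r):
            if chi_A(S, induced(arcs, S)) != omega(S, induced(arcs, S)):
                return False
    return True

# The dicycle C_3 is imperfect: chi_A = 2 but omega = 1.
C3 = ({0, 1, 2}, {(0, 1), (1, 2), (2, 0)})
print(chi_A(*C3), omega(*C3), is_perfect_digraph(*C3))  # 2 1 False
```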
By the Strong Perfect Digraph Theorem in <cit.>, a digraph 𝒟=(𝒱,𝒜) is perfect if and only if the undirected symmetric part S(𝒟) is perfect and 𝒟 does not contain any dicycle C_n with n ≥ 3 as an induced sub-digraph. Specifically, a digraph is perfect if and only if it contains neither a filled odd hole, nor a filled odd antihole, nor a dicycle C_n with n ≥ 3 as an induced sub-digraph <cit.>. A filled odd hole is a digraph whose symmetric part is an odd hole, and a filled odd antihole is the complement of a filled odd hole.

§.§ Polyhedral Combinatorics

A (convex) polyhedron in ℝ^n can be defined as the solution set of a finite system of linear inequalities with n variables. A polytope is a bounded polyhedron. Given two sets of objects ℛ and 𝒞, their incidence matrix 𝐀 shows the relationship between them, where A_ij = 1 if i ∈ ℛ has a relation to j ∈ 𝒞, and A_ij = 0 otherwise. A polytope is integral if all its extreme points have only integer-valued coordinates.

The set packing and set covering polytopes are among the most important polytopes in polyhedral combinatorics. Given a binary matrix 𝐀, the set packing polytope is given by

𝒫(𝐀) = {𝐱 ∈ [0,1]^n : 𝐀𝐱 ≤ 𝟏},

and the set covering polytope is given by

𝒬(𝐀) = {𝐱 ∈ [0,1]^n : 𝐀𝐱 ≥ 𝟏}.

A row vector 𝐚 of a matrix is said to be dominating if there exists another row 𝐛 such that 𝐚 ≥ 𝐛 for set covering polytopes, or 𝐚 ≤ 𝐛 for set packing polytopes. In other words, the linear inequality constraint associated with a dominating row is redundant and is implied by the others. A submatrix of 𝐀 is a minor of 𝐀 if it can be obtained from 𝐀 by successively deleting a column j and the rows with a '1' in column j. Given a column set 𝒮, the corresponding minor of 𝐀 is the submatrix of 𝐀 that results from removing all columns indexed by 𝒮 and all the dominating rows that may occur.

To study the integrality of set packing and covering polytopes, perfect and ideal matrices were introduced. The definitions and characterizations of perfect, ideal and balanced matrices are summarized as follows <cit.>. A matrix is perfect if its set packing polytope is integral. Let 𝐀 be a binary matrix and 𝒢 an undirected graph. Let the columns of 𝐀 correspond to the vertices of 𝒢 and let the rows of 𝐀 be the incidence vectors of the maximal cliques of 𝒢. Then 𝐀 is a perfect matrix if and only if 𝒢 is a perfect graph <cit.>. The set packing polytope 𝒫(𝐀) is integral if and only if 𝐀 is the maximal clique-node incidence matrix of a perfect graph. A matrix is ideal if its set covering polytope is integral <cit.>. Ideal matrices are also known as width-length matrices <cit.>, or matrices with the (weak) max-flow min-cut property <cit.>. If a matrix is ideal, so are all its minors. It is an open problem to fully characterize all families of ideal matrices. An alternative way is to consider the "smallest" possible matrices that are not ideal, referred to as minimally nonideal (MNI) matrices <cit.>. A matrix is MNI if it is not ideal but all its proper minors are. In other words, a matrix is MNI if (1) it does not contain a dominating row, (2) it is nonideal, and (3) the coordinates of any extreme point of the set covering polytope are either all integral or all fractional (but not both). Thus, an alternative characterization of ideal matrices is that 𝐀 is ideal if and only if 𝐀 does not contain an MNI minor <cit.>.

There are some known MNI matrices, although the complete classification is an open problem. The circulant matrix 𝐂_n^r is an n × n binary matrix with columns indexed by {1,2,…,n} and rows equal to the incidence vectors of {j, j+1, …, j+r-1} mod n, i.e., C_ij = 1 if j ∈ {i, i+1, …, i+r-1} mod n and C_ij = 0 otherwise; a small numerical illustration is given below.
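The following Python sketch constructs 𝐂_n^r and exhibits the integrality gap of the set covering polytope of 𝐂_5^2: the LP relaxation attains the fractional value 5/2 at y = (1/2,…,1/2), while the integer optimum is 3, so 𝒬(𝐂_5^2) has a fractional extreme point and 𝐂_5^2 is not ideal. This is an illustration, not a full idealness test (which would require checking all extreme points and minors).

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def circulant(n, r):
    """C_n^r: row i is the incidence vector of {i, i+1, ..., i+r-1} mod n."""
    C = np.zeros((n, n), dtype=int)
    for i in range(n):
        for k in range(r):
            C[i, (i + k) % n] = 1
    return C

def covering_lp_opt(A):
    """min 1'y  s.t.  A y >= 1,  0 <= y <= 1  (LP relaxation)."""
    m, n = A.shape
    res = linprog(c=np.ones(n), A_ub=-A, b_ub=-np.ones(m), bounds=[(0, 1)] * n)
    return res.fun

def covering_ip_opt(A):
    """Integer optimum by brute force over 0/1 vectors (tiny n only)."""
    m, n = A.shape
    best = n
    for y in product((0, 1), repeat=n):
        y = np.array(y)
        if np.all(A @ y >= 1):
            best = min(best, int(y.sum()))
    return best

C52 = circulant(5, 2)
print(covering_lp_opt(C52))  # 2.5 -- fractional optimum, hence a fractional extreme point
print(covering_ip_opt(C52))  # 3   -- the integrality gap shows C_5^2 is not ideal
```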
The degenerate projective plane 𝒥_n for n ≥ 2 is a square (n+1) × (n+1) binary matrix with columns indexed by {0,1,…,n} and rows equal to the incidence vectors of {1,…,n}, {0,1}, {0,2}, …, {0,n}. It has been shown that 𝐂_n^2 for odd n ≥ 3 and 𝒥_n for n ≥ 2 are MNI matrices <cit.>. Cornuejols and Novick <cit.> proved that there are exactly 10 MNI circulant matrices with k consecutive 1's (k ≥ 3). The Fano matrix 𝐅_7 ∈ {0,1}^{7×7} is a circulant matrix with initial row vector (1,1,0,1,0,0,0). Lehman <cit.> established properties of MNI matrices by proving that their fractional set covering polyhedron has a unique fractional extreme point.

A binary matrix is balanced if and only if, for each submatrix, both the set covering polytope and the set packing polytope are integral <cit.>. Equivalently, a matrix is balanced if and only if it and all its submatrices are perfect or, equivalently, if and only if it and all its submatrices are ideal. An odd hole in a binary matrix is a square submatrix of odd order with exactly two ones per row and per column. A binary matrix is balanced if it does not contain an odd hole as a submatrix. Balanced matrices can be recognized in polynomial time <cit.>. A matrix is totally unimodular (TU) if and only if every square submatrix has determinant equal to 0, +1, or -1. A matrix 𝐀 is TU if and only if the polyhedron {𝐱 ≥ 0 : 𝐀𝐱 ≤ 𝐛} is integral for every integral vector 𝐛. A TU matrix is both perfect and ideal. If each row of a 0/1 matrix (up to permutation) has consecutive 1's, then this matrix is TU. Seymour proved a full characterization of all TU matrices in <cit.>: a matrix is TU if and only if it is a certain natural combination of network matrices and copies of a particular 5-by-5 TU matrix.

§.§ Index Coding

The index coding problem considers transmission over a noiseless broadcast channel, where each receiver wants one message from the transmitter and holds some other receivers' desired messages as side information. The goal is to find the minimum number of transmissions such that all receivers are able to decode their own messages simultaneously. A formal definition is as follows. A (t,r) index code ℂ with side information index sets {𝒦_1,…,𝒦_n} is defined by: (i) an encoding function ϕ: {0,1}^{tn} ↦ {0,1}^r at the transmitter that encodes the n-tuple of messages x^n into a length-r index code; (ii) a decoding function ψ_j: {0,1}^r × {0,1}^{t|𝒦_j|} ↦ {0,1}^t at each receiver j that decodes the received index code back to x_j using the side information x(𝒦_j) held at receiver j.

The side information index sets are usually represented by a side information digraph 𝒢, whose complement is the conflict digraph 𝒟. For the index coding problem with message set 𝒲={W_1,…,W_n} and side information index sets {𝒦_j, ∀ j}, the side information digraph 𝒢=(𝒱,𝒜) is such that 𝒱=𝒲 and (i,j) ∈ 𝒜(𝒢) if and only if W_i ∈ 𝒦_j. A rate β'(𝒢,ℂ)=r/t is said to be achievable if there exists a (t,r) index code such that

ψ_j(ϕ(x^n), x(𝒦_j)) = x_j, ∀ j.

The broadcast rate of the index coding problem is defined as

β'(𝒢) = inf_t inf_ℂ β'(𝒢,ℂ).

We introduce an outer bound for the index coding problem, which will be used frequently in the proofs. (Maximal Acyclic Induced Subgraph (MAIS) Outer Bound <cit.>) For the index coding problem with side information digraph 𝒢, the broadcast rate is lower bounded by

β ≥ α_A(𝒢),

where α_A(𝒢) is the largest size over all acyclic sets in 𝒢; a brute-force evaluation is sketched below.
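A minimal brute-force computation of α_A on a toy side information digraph follows; it is exponential and suited to small instances only, and the example digraph is hypothetical.

```python
from itertools import combinations

def has_cycle(S, arcs):
    """True iff the sub-digraph induced by S contains a dicycle (DFS colouring)."""
    adj = {v: [w for (u, w) in arcs if u == v and w in S] for v in S}
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in S}

    def dfs(v):
        color[v] = GREY
        for w in adj[v]:
            if color[w] == GREY or (color[w] == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in S)

def mais(vertices, arcs):
    """alpha_A: size of the largest acyclic induced sub-digraph."""
    vertices = list(vertices)
    for size in range(len(vertices), 0, -1):
        for S in combinations(vertices, size):
            if not has_cycle(set(S), arcs):
                return size
    return 0

# hypothetical 4-message side information digraph: 1<->2 dicycle plus arc 3->4
G = {(1, 2), (2, 1), (3, 4)}
print(mais({1, 2, 3, 4}, G))  # 3: {1,3,4} induces no dicycle, so beta >= 3
```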
A conflict sub-digraph 𝒟[𝒮] is a k-partial clique if and only if d^-(𝒟[𝒮],v_q) ≤ k for all v_q ∈ 𝒮, and there exists v_q^* ∈ 𝒮 with d^-(𝒟[𝒮],v_q^*) = k. The broadcast rate of the index coding problem with conflict digraph 𝒟=(𝒱,𝒜) is upper bounded via partial clique covering by

β ≤ min_{𝒮_1,…,𝒮_s} ∑_{i=1}^s (k_i + 1),

where the minimum is over all partitions 𝒱={𝒮_1,…,𝒮_s}, and, for all i = 1,…,s, 𝒟[𝒮_i] is a k_i-partial clique with k_i = Δ^-(𝒟[𝒮_i]).

§.§ Proof of Corollary <ref>

The achievability is due to acyclic set coloring. For the "if" part, if the conflict digraph 𝒟 is acyclic, we have χ_A(𝒟)=1. In addition, d_sym ≤ 1 must also hold, because if we allow full receiver cooperation, the enhanced channel is a K-user multiple access channel, for which d_sym ≤ 1 even if full CSI is available at the transmitters. For the "only if" part, we prove it by contraposition. If there exists a cycle in the conflict digraph, no message passing policy can remove the cycle completely, because a message that is decoded later cannot provide its decoded message before its own decoding. As such, interference still exists in the network after message passing, and thus d_sym < 1. By contraposition, if d_sym = 1, the conflict digraph must be acyclic. This completes the proof of both the "if" and "only if" parts.

§.§ Proof of Corollary <ref>

The achievability is due to acyclic set coloring. It was recently proven in <cit.> that, if there exist two integers k (k ≥ 2) and r (1 ≤ r ≤ k) such that a digraph 𝒟 contains no dicycle of length ≡ r (mod k), then χ_A(𝒟) ≤ k. When k=2, we have r=1,2, so the condition becomes: if 𝒟 contains either no directed odd cycle or no directed even cycle, then χ_A(𝒟) ≤ 2. As single-round message passing is considered, χ_A(𝒟) must be an integer. Because χ_A(𝒟)=1 if and only if 𝒟 is a directed acyclic graph, it follows that χ_A(𝒟)=2 if 𝒟 contains only directed odd cycles or only directed even cycles. By Theorem <ref>, d_sym ≥ 1/2. For the converse, because the conflict digraph contains cycles, whether odd or even, it is not a directed acyclic graph, so that d_sym < 1 according to Corollary <ref>. Because the TIM-MP problem with single-round message passing results in an enhanced TIM problem, it follows that d_sym ≤ 1/2. As such, we have d_sym = 1/2. This completes the proof.

§.§ Proof of Corollary <ref>

The achievability is due to fractional acyclic set coloring of the conflict digraph. For the (K,L) regular network, the arcs (j,i) with i = j+1, j+2, …, j+L-1 (mod K) belong to the conflict digraph 𝒟. It can be easily checked that the sub-digraph of the conflict digraph induced by any K-L+1 neighboring vertices is an acyclic set. As such, we are able to color the acyclic sets with K colors in total, and each vertex can be assigned K-L+1 distinct colors. On average, the number of colors allocated to each vertex (i.e., message) is K/(K-L+1). Thus, χ_A,f ≤ K/(K-L+1) and in turn d_sym ≥ (K-L+1)/K. This acyclicity claim is verified numerically below.
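The check below builds the (K,L) regular conflict digraph and confirms that every window of K-L+1 consecutive vertices induces an acyclic set; the cyclic index arithmetic mod K is an assumption implied, but not stated explicitly, by the text.

```python
def regular_conflict_digraph(K, L):
    """Arcs (j, i) for i = j+1, ..., j+L-1, interpreted modulo K (assumed)."""
    return {(j, (j + t) % K) for j in range(K) for t in range(1, L)}

def is_acyclic(S, arcs):
    """Kahn's algorithm on the sub-digraph induced by S."""
    S = set(S)
    sub = [(u, v) for (u, v) in arcs if u in S and v in S]
    indeg = {v: 0 for v in S}
    for (_, v) in sub:
        indeg[v] += 1
    stack = [v for v in S if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for (a, b) in sub:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    stack.append(b)
    return seen == len(S)

K, L = 7, 3
A = regular_conflict_digraph(K, L)
windows = [{(s + t) % K for t in range(K - L + 1)} for s in range(K)]
print(all(is_acyclic(W, A) for W in windows))  # True: every window is an acyclic set
print((K - L + 1) / K)                          # achieved symmetric DoF 5/7
```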
§.§ Proof of Lemma <ref>

For a digraph 𝒟=(𝒱,𝒜) and its optimal acyclic set partition {A_1,…,A_s}, we construct another digraph 𝒟'=(𝒱',𝒜') with vertices v' representing the acyclic sets A_v of 𝒟, where two vertices u' and v' are connected by an arc (u',v') if there exists an arc in 𝒟 from a vertex in A_u to a vertex in A_v, with A_u ≠ A_v. It is clear that χ_A,f(𝒟')=χ_A,f(𝒟), because the total numbers of required colors for 𝒟' and 𝒟 are the same by the construction of 𝒟'. According to Definition <ref>, we conclude that χ_LA,f(𝒟')=χ_LA,f(𝒟). This is because, given the optimal acyclic set partition, the number of colors in the most colorful neighborhood of a vertex equals the number of neighboring acyclic sets of the set to which this vertex belongs. Otherwise, this vertex should join other acyclic sets, which contradicts the optimality of the acyclic set partition. It is clear that for every arc (u',v') ∈ 𝒜' there always exists an arc (v',u') ∈ 𝒜', because otherwise the union of A_u and A_v would still be acyclic and could be merged into one acyclic set, contradicting the assumption that {A_1,…,A_s} is an optimal acyclic set partition. For the digraph 𝒟' with all arcs bi-directed, we have χ_A,f(𝒟')=χ_f(U(𝒟')) and χ_L,f(U(𝒟'))=χ_LA,f(𝒟'), where U(𝒟') is the underlying undirected graph of 𝒟' <cit.>. In addition, for the undirected graph U(𝒟'), we have χ_f(U(𝒟'))=χ_L,f(U(𝒟')) <cit.>. As such, we have χ_A,f(𝒟')=χ_LA,f(𝒟') <cit.>. It follows immediately that χ_A,f(𝒟)=χ_LA,f(𝒟), which completes the proof.

§.§ Proof of Theorem <ref>

Let us first focus on the clique inequalities. A clique Q in the conflict digraph 𝒟 corresponds to a fully connected interference channel in the network topology. Without CSI at the transmitters, the sum DoF is bounded above by 1, even if message passing is allowed. Thus, the achievable DoF tuple must satisfy the following inequalities:

∑_{i ∈ Q} d_i ≤ 1, for each clique Q in 𝒟.

Note that each vertex in 𝒟 is also a clique of size 1, so the clique inequalities also imply the individual DoF inequalities d_k ≤ 1 for all k. The clique inequality of a maximal clique dominates, because it implies the clique inequalities associated with its sub-cliques of smaller size.

Next, for the cycle inequalities, a dicycle without chord does not contain any smaller dicycle as an induced sub-digraph. A dicycle C in the conflict digraph 𝒟 corresponds to a cyclic Wyner-type interference channel. There must exist a receiver that does not obtain any passed message for decoding, so that the interference from its neighbor cannot be canceled out, and thus the sum DoF of these two messages is bounded above by 1. As such, the overall sum DoF of the messages in such a dicycle cannot exceed |C|-1, i.e., ∑_{k ∈ C} d_k ≤ |C|-1 for all dicycles. For dicycles with a chord, there must exist smaller dicycles, so the cycle inequality due to the smaller dicycle, together with the individual inequalities, implies that due to the larger one. So, for the cycle inequalities, we only count the inequalities corresponding to the minimal dicycles. A small LP illustration of the resulting outer bound is given below.
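The following sketch evaluates the clique and cycle inequalities as an LP on a hypothetical 4-message instance. The maximal cliques and minimal dicycles are listed by hand for brevity (consistency with an actual topology is not enforced here), and the symmetric-DoF bound follows directly from the row sums.

```python
from scipy.optimize import linprog

cliques = [{1, 2}, {3, 4}]   # sum_{i in Q} d_i <= 1
dicycles = [{1, 2, 3}]       # sum_{k in C} d_k <= |C| - 1

n = 4
A_ub, b_ub = [], []
for Q in cliques:
    A_ub.append([1 if k + 1 in Q else 0 for k in range(n)])
    b_ub.append(1)
for C in dicycles:
    A_ub.append([1 if k + 1 in C else 0 for k in range(n)])
    b_ub.append(len(C) - 1)

# symmetric DoF: with d_k = d for all k, each row gives d <= b / (row sum)
d_sym = min(b / sum(row) for row, b in zip(A_ub, b_ub))
print(d_sym)  # 1/2 here: the clique inequalities are the bottleneck

# the full outer-bound LP (here maximising sum DoF) can be solved directly
res = linprog(c=[-1] * n, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
print(-res.fun)  # maximum sum DoF under the outer bound
```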
§.§ Proof of Theorem <ref>

The conflict digraph 𝒟 contains no dicycles C_n with n ≥ 3 as induced sub-digraphs and its maximal clique-vertex incidence matrix is a perfect matrix, if and only if 𝒟 is a perfect digraph <cit.>. Given the outer bound of the DoF region in (<ref>), we first show that its extreme points are integral when the conflict digraph is perfect. Then we show that the DoF tuples of the region are achievable by acyclic set coloring (i.e., orthogonal access with single-round message passing), and thus the whole region can be achieved by time sharing, which is the analogue of fractional acyclic set coloring.

If the conflict digraph 𝒟 is a perfect digraph, then its symmetric part S(𝒟) is a perfect graph <cit.>. By <cit.>, we know that the polytope defined by the clique inequalities of a perfect graph has integral extreme points, meaning that each coordinate of the extreme points of the DoF region is either 0 or 1. As shown in <cit.>, the integral extreme points of the polytope defined by the clique inequalities correspond to a set of messages in an independent set that do not interfere with one another. This agrees with the vertex coloring of the undirected graph. It has also been shown in <cit.> that, for a perfect digraph, a feasible vertex coloring of the symmetric part S(𝒟) is also a feasible acyclic set coloring of the digraph 𝒟. As such, each extreme point of the outer bound of the DoF region can be achieved by acyclic set coloring (orthogonal access with single-round message passing), and time sharing between the extreme points achieves the entire DoF region. According to the linear program relaxation in (<ref>), this time sharing of acyclic set colorings is exactly fractional acyclic set coloring of digraphs. This completes the proof.

§.§ Proof of Theorem <ref>

The outer bound is given merely by the cycle inequalities, because there are no cliques in the conflict digraph. As mentioned earlier, by substituting d_k = 1 - y_k, the outer bound of the DoF region in (<ref>) can be written as the set covering polytope

{𝐲: 0 ≤ y_k ≤ 1, ∀ k ∈ 𝒱; ∑_{k ∈ C} y_k ≥ 1, ∀ C ∈ 𝒞}.

This parameter substitution does not change the dicycle-vertex incidence matrix, which is an ideal matrix. So the set covering polytope in the variables {y_k} is integral, and all its extreme points satisfy y_k = 0 or y_k = 1. In turn, the DoF tuple (d_k = 1 - y_k) is also a binary vector. For the achievability, we prove that all extreme points in the outer bound of the DoF region can be achieved by acyclic set coloring. For a DoF tuple corresponding to an extreme point of the DoF region, we switch off the messages k with coordinate d_k = 0 and switch on those with d_k = 1. Because of the cycle inequality constraints, for each dicycle in the conflict digraph there is at least one vertex that is switched off. The cycle inequalities collect all dicycles without chord, so together they ensure that, for each binary DoF tuple, the active vertices in the conflict digraph cannot form any dicycle, and thus the sub-digraph induced by these active vertices is acyclic. These vertices form an acyclic set and can be assigned the same color. Applying this to all extreme points of the DoF region, we obtain exactly a proper acyclic set coloring of all vertices in the conflict digraph. Time sharing among these single acyclic set coloring schemes (fractional acyclic set coloring) yields the whole DoF region. This completes the proof.

§.§ Proof of Corollary <ref>

For the above two cases, the corresponding dicycle-vertex incidence matrices are balanced, as they do not contain submatrices 𝐂_n^2 with odd n ≥ 3. As a result, orthogonal access achieves the optimal DoF region according to Theorem <ref>.

§.§ Proof of Corollary <ref>

The achievability is still due to fractional acyclic set coloring. The conflict digraphs other than the dicycle C_3 are all perfect digraphs. According to Theorem <ref>, orthogonal access achieves the whole capacity region. For the dicycle C_3, the whole DoF region {(d_1,d_2,d_3): 0 ≤ d_k ≤ 1, ∀ k, d_1+d_2+d_3 ≤ 2} is achievable, because all the nontrivial extreme points of the DoF region, (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), are integral and achievable by acyclic set coloring. In particular, the symmetric DoF tuple (2/3,2/3,2/3) is achievable by time sharing among (1,1,0), (1,0,1), (0,1,1).

§.§ Proof of Theorem <ref>

The integrality of the DoF region is due to <cit.>, based on vertex covering on hypergraphs. Let us make the connection. Let ℋ(𝒟)=(𝒱,ℰ) be a hypergraph, where the vertex set 𝒱 corresponds to the collection of all messages or transmitter-receiver pairs, and the hyperedge set ℰ is the collection of dicycles in the conflict digraph 𝒟.
To avoid redundancy among the inequalities, we only count the minimal dicycles without chord. For every dicycle, the cycle inequality becomes the constraint of a hyperedge E ∈ ℰ, i.e., y(E) ≥ 1, ∀ E ∈ ℰ, where y(E) ≜ ∑_{k ∈ E} y_k. Let 𝒢_ℰ be the undirected graph consisting of the hyperedges of size 2 in ℰ. Each edge in 𝒢_ℰ is a pair of bi-directed arcs (i.e., a dicycle of length 2), and 𝒢_ℰ is in fact the symmetric part S(𝒟) of the conflict digraph. To avoid redundancy, we only count the maximal cliques of size no less than 3, because a clique of size 2 is also a dicycle and the corresponding inequality already appears among the cycle inequalities. So, for every clique, the clique inequality becomes the constraint y(Q) ≥ |Q|-1, ∀ Q ∈ 𝒬'(𝒢_ℰ)=𝒬'(S(𝒟)), where 𝒬' is the collection of all cliques Q in 𝒢_ℰ with |Q| ≥ 3.

As said before, we consider the case |C ∩ Q| ≤ 1 to avoid redundancy between the cycle and clique inequalities, because if |C ∩ Q| ≥ 2, the constraint y(C) ≥ 1 is redundant, being implied by y(Q) ≥ |Q|-1. As such, the cycle and clique inequalities can be represented with respect to the hypergraph ℋ and the graph 𝒢_ℰ as

𝒫 = {𝐲: 0 ≤ y_k ≤ 1, ∀ k ∈ 𝒱; y(E) ≥ 1, ∀ E ∈ ℰ; y(Q) ≥ |Q|-1, ∀ Q ∈ 𝒬'(𝒢_ℰ)},

where d_k = 1 - y_k. By <cit.>, we know that the above polyhedron is integral if and only if ℋ has no triangle-free MNI minor and 𝒢_ℰ is perfect. A minor of ℋ is obtained by deletion of a node set 𝒱_1 ⊆ 𝒱 and contraction of a node set 𝒱_2 ⊆ 𝒱, where 𝒱_2 does not contain any dicycles. The deletion operation removes all hyperedges incident to 𝒱_1, that is, all dicycles involving the nodes in 𝒱_1. The contraction operation removes the nodes in 𝒱_2 from all remaining hyperedges, that is, from the cycle inequalities. A minor is called triangle-free if 𝒱_1 covers every triangle of 𝒢_ℰ; because a triangle in an undirected graph is a clique of size 3, it follows that 𝒱_1 covers every clique in S(𝒟) of size no less than 3. A minor is MNI if the hypergraph formed by the inclusion-wise minimal hyperedges after the deletion and contraction operations is MNI, that is, if the hyperedge-vertex incidence matrix associated with the remaining hypergraph is an MNI matrix.

Reflected onto the conflict digraph 𝒟, the condition that ℋ has no triangle-free MNI minor indicates that, after removing all cliques of size no less than 3 from 𝒟 together with the associated vertices (i.e., the deletion and contraction operations), the dicycle-vertex incidence matrix of the resulting induced sub-digraph of 𝒟 contains no MNI submatrices. Together with the condition that S(𝒟) is perfect, the outer bound polytope defined by the clique and cycle inequalities is integral.

The achievability is still due to acyclic set coloring. The coordinates of the extreme points of the outer bound polytope 𝒫 correspond to the on-off states of the messages: if y_k = 0, message k is on, and it is off otherwise. From the clique constraints, at least |Q|-1 messages are off in each clique Q. From the cycle constraints, at least one message is off in each dicycle C. The intersection between a dicycle C and a clique Q is at most one vertex, due to |C ∩ Q| ≤ 1. So, for each extreme point of 𝒫, the active vertices (i.e., those with coordinate y_k = 0) do not contain any dicycle in 𝒟, and thus form an acyclic set. This corresponds to an acyclic set coloring. Fractional acyclic set coloring then corresponds to time sharing among the extreme points of 𝒫, by which the entire region of 𝒫 is achievable. This completes the proof.
§.§ Proof of Theorem <ref>

Let 𝒮={𝒮_1,…,𝒮_s} be the strong component decomposition of 𝒟. This decomposition is unique, because strong connectivity of a digraph is an equivalence relation on its vertex set. Let 𝒮^* denote the strong component with the maximal fractional dichromatic number. As 𝒟[𝒮^*] is an induced sub-digraph of 𝒟, we have β^SIC(𝒟[𝒮^*]) ≤ β^SIC(𝒟), because additional vertices do not reduce the broadcast rate. Moreover, as 𝒟[𝒮^*] falls into the digraph classes in Theorem <ref>, orthogonal access achieves the optimal DoF region, and in turn the optimal symmetric DoF, for the TIM-MP problem. Thus, orthogonal access (fractional acyclic set coloring) also achieves the optimal broadcast rate of the corresponding SIC problem. It follows that β^SIC(𝒟[𝒮^*]) = χ_A,f(𝒟[𝒮^*]). According to the definition of the strong decomposition, the strong component with the maximum fractional dichromatic number dominates, i.e.,

χ_A,f(𝒟) = max_{i=1,…,s} χ_A,f(𝒟[𝒮_i]) = χ_A,f(𝒟[𝒮^*]).

To sum up, we have

χ_A,f(𝒟[𝒮^*]) = β^SIC(𝒟[𝒮^*]) ≤ β^SIC(𝒟) ≤ χ_A,f(𝒟) = χ_A,f(𝒟[𝒮^*]),

where the second inequality is due to the achievability of orthogonal access. Thus, it follows that β^SIC(𝒟[𝒮^*]) = β^SIC(𝒟), and the vertices in 𝒱(𝒟) \ 𝒮^* are reducible without reducing the broadcast rate. This completes the proof.

§.§ Proof of Theorem <ref>

Based on the clique and cycle inequalities, we have the lower bound on the broadcast rate

β^SIC(𝒟) ≥ max { |Q|, 1 + 1/(|C|-1) },

where the maximum is over all cliques Q and dicycles C in 𝒟. So the maximal clique or the minimal dicycle dominates.

Let us first consider the case where the dicycle-vertex incidence matrix of 𝒟 is ideal. Let the induced dicycle C_n be the shortest one and unique, i.e., there does not exist any other induced dicycle of shorter or equal length. As C_n is unique and the dicycle-vertex incidence matrix of 𝒟 is ideal, the optimal broadcast rate is β^SIC(𝒟) = 1 + 1/(n-1), which can be achieved by orthogonal access. The removal of an arc e from C_n either breaks the dicycle C_n or forms a longer dicycle, both of which lead to the smallest induced dicycle being some C_m with m ≥ n+1. The removal of e thus leads to a lower broadcast rate, achieved by orthogonal access, because the resulting dicycle-vertex incidence matrix is still ideal and β^SIC(𝒟-e) ≤ χ_A,f(𝒟-e) = 1 + 1/(m-1) < β^SIC(𝒟).

When 𝒟 is a perfect digraph, the lower bound becomes β^SIC(𝒟) ≥ max_{Q ∈ 𝒟} |Q|. Let the clique Q_n be maximal and unique, i.e., there does not exist any other clique of equal size. So β^SIC(𝒟) = n is achievable and also optimal. The removal of an arc e from Q_n does not break perfectness, and 𝒟-e is also a perfect digraph. The removal of e leads to a lower broadcast rate, achieved by orthogonal access, because β^SIC(𝒟-e) = n-1 < β^SIC(𝒟). This completes the proof.

§.§ Proof of Theorem <ref>

According to the equivalence between TIM-MP and SIC with linear coding schemes, we focus in the following on TIM-MP; the results apply to SIC accordingly. For transmitter i, we assign the message W_i a precoding matrix 𝐕_i. With a slight abuse of notation, we also use 𝐕_i to represent the subspace spanned by the columns of 𝐕_i. Thus, we have the linear symmetric DoF

d_sym,l = max min_k dim(𝐕_k),

where dim(·) is the normalized dimensionality, and the overall dimension satisfies dim(∪_k 𝐕_k) = 1. Without loss of generality, we assume that dim(𝐕_k) = R, ∀ k. The achievability is due to fractional acyclic set coloring. It can be easily checked that χ_A,f(𝒟(𝐂_5^2)) = 5/2 and χ_A,f(𝒟(𝒥_3)) = 5/2. Then, let us proceed to the converse proofs for linear coding schemes.
For the instance 𝒟(𝒥_3), we have dim(𝐕_0 ∩ 𝐕_k) = 0 for all k = 1,2,3, because nodes 0 and k are fully conflicting. And, for any distinct i, j, k ∈ {1,2,3}, we have

dim(𝐕_i ∩ 𝐕_j) + dim(𝐕_i ∩ 𝐕_k) = dim(𝐕_i ∩ (𝐕_j ∪ 𝐕_k)) + dim(𝐕_i ∩ 𝐕_j ∩ 𝐕_k) = dim(𝐕_i ∩ (𝐕_j ∪ 𝐕_k)) ≤ dim(𝐕_i) = R,

because {1,2,3} forms a dicycle and their subspaces should not have any common overlap, i.e., dim(𝐕_i ∩ 𝐕_j ∩ 𝐕_k) = 0. Thus, we have

1 = dim(∪_{k=0}^3 𝐕_k) = ∑_{∅ ≠ S ⊆ {0,1,2,3}} (-1)^{|S|-1} dim(∩_{k ∈ S} 𝐕_k) = ∑_{k=0}^3 dim(𝐕_k) - ∑_{0 ≤ i < j ≤ 3} dim(𝐕_i ∩ 𝐕_j) ≥ 4R - (3/2)R = (5/2)R,

which yields R ≤ 2/5. Together with the achievability, we have the optimal linear symmetric DoF d_sym,l(𝒟(𝒥_3)) = 2/5.

Similarly, for the instance 𝒟(𝐂_5^2), we have

dim(𝐕_i ∩ 𝐕_{i+1}) = 0, ∀ i (indices mod 5),

because adjacent nodes are fully conflicting, and

dim(𝐕_i ∩ 𝐕_j ∩ 𝐕_k) = 0, ∀ distinct i, j, k,

because among any three nodes at least one is adjacent to another, so they conflict with one another. For any i ≠ j, we also have

dim(𝐕_i ∩ 𝐕_j) + dim(𝐕_i ∩ 𝐕_{j+1}) = dim(𝐕_i ∩ (𝐕_j ∪ 𝐕_{j+1})) + dim(𝐕_i ∩ 𝐕_j ∩ 𝐕_{j+1}) ≤ dim(𝐕_i) = R.

Thus, we have

1 = dim(∪_{k=1}^5 𝐕_k) = ∑_{∅ ≠ S ⊆ {1,2,…,5}} (-1)^{|S|-1} dim(∩_{k ∈ S} 𝐕_k) = ∑_{k=1}^5 dim(𝐕_k) - ∑_{i=1}^5 dim(𝐕_i ∩ 𝐕_{i+2}) ≥ 5R - (5/2)R = (5/2)R,

which yields R ≤ 2/5. Together with the achievability, we have d_sym,l(𝒟(𝐂_5^2)) = 2/5.

§.§ Proof of Theorem <ref>

Given the equivalence between TIM-MP and SIC with linear coding schemes, we refer to both of them interchangeably. According to vertex-reducibility, we only have to focus on the topologies whose conflict digraphs are strongly connected (and thus irreducible), because otherwise the 4-user instances can be reduced to 3-user ones, for which it has already been proven that orthogonal access achieves the optimal DoF/capacity region. According to arc-criticality, we only have to consider the arcs that belong to at least one induced dicycle, because otherwise the arcs are not critical and can be removed without changing the optimal DoF/capacity region. For these irreducible topologies, we only have to focus on the imperfect ones, because orthogonal access achieves the optimal DoF/capacity region for perfect digraphs. According to the definition of perfect digraphs, we only have to consider the conflict digraphs with dicycles C_3 or C_4 as induced sub-digraphs. For the case containing C_4 as an induced sub-digraph, there is only one topology, which is exactly C_4 (Fig. <ref>(a)), and it was proven that orthogonal access achieves the optimal DoF/capacity region. For the case containing C_3 as an induced sub-digraph, we can restrict ourselves to a few cases. Assume without loss of generality that vertices 1, 2, and 3 form C_3. Then, in view of the fact that vertex 4 is irreducible and the arcs involving it are critical, there are the following possibilities for the connection between vertex 4 and the vertices of C_3: (1) vertex 4 forms another length-3 dicycle with any two vertices of C_3, as in Fig. <ref>(b); (2) vertex 4 forms another length-3 dicycle with any two vertices of C_3 and a length-2 dicycle with the third vertex of C_3, as in Fig. <ref>(c); (3) vertex 4 forms 1, 2, or 3 length-2 dicycles with some vertices of C_3, respectively, as in Fig. <ref>(d-f). For the digraphs in Fig. <ref>(a,b,d,e), it can be checked that the dicycle-vertex incidence matrices are ideal, and thus orthogonal access achieves the optimal DoF/capacity region, and in turn the symmetric DoF/capacity. For Fig. <ref>(c), the optimal symmetric DoF is d_sym = 1/2, which can be achieved by orthogonal access, although the dicycle-vertex incidence matrix is not ideal. For Fig. <ref>(f), the dicycle-vertex incidence matrix is 𝒥_3 and thus non-ideal.
From Theorem <ref>, we have proved that orthogonal access achieves the linear optimal symmetric DoF of the TIM-MP problem, and also the linear optimal broadcast rate of the corresponding SIC problem. To sum up, we conclude that, for the TIM-MP/SIC problems with up to 4 users, orthogonal access achieves the linear optimal symmetric DoF/rate. This completes the proof.

§.§ Proof of Theorem <ref>

Let (i,j) be the arc corresponding to the message passing i → j, and let 𝒢[𝒮] ∪ (i,j) be the induced new dicycle, where i,j ∈ 𝒮. It immediately follows that the sub-digraph 𝒢[𝒮] is acyclic. Due to the MAIS outer bound (see Appendix <ref>), the achievable DoF tuple before adding (i,j) must satisfy

∑_{k ∈ 𝒮} d_k ≤ 1.

After adding the arc (i,j), the sub-digraph induced by 𝒮 becomes a dicycle. As such, d_k = 1/(|𝒮|-1), ∀ k ∈ 𝒮, is achievable, leading to a larger sum DoF. Since message passing does not reduce the achievable DoF of the messages not in 𝒮, the addition of the arc (i,j) enlarges the DoF region, and hence is helpful.

§.§ Proof of Corollary <ref>

As a special case, the sufficiency follows exactly as in Theorem <ref>. We then focus on the necessity. By contraposition, we show that if the addition of the corresponding arc in 𝒢 does not form any new dicycle, then such an arc addition does not change the DoF region. According to <cit.>, the optimal DoF region is fully characterized by the clique inequalities of the underlying undirected conflict graph U(𝒟). The addition of an arc in 𝒢 is equivalent to the removal of the corresponding arc in 𝒟. As the arc addition does not form new dicycles, it follows that (1) the removed arc in 𝒟 must not be uni-directed, because otherwise it would form a new dicycle in 𝒢, and (2) the removed arc in 𝒟 must not introduce new dicycles in 𝒟 as induced sub-digraphs, because otherwise it would also result in a new dicycle in 𝒢. So we conclude that the arc removal in 𝒟 does not change the underlying undirected conflict graph U(𝒟), the resulting network remains a chordal bipartite network, and thus the DoF region is not changed. By contraposition, this completes the proof of necessity.

§ REFERENCES

S. A. Jafar, Interference Alignment: A New Look at Signal Dimensions in a Communication Network. Now Publishers Inc, 2011.

N. Jindal, "MIMO broadcast channels with finite-rate feedback," IEEE Trans. Inf. Theory, vol. 52, no. 11, pp. 5045–5060, Nov. 2006.

M. Maddah-Ali and D. Tse, "Completely stale transmitter channel state information is still very useful," IEEE Trans. Inf. Theory, vol. 58, no. 7, pp. 4418–4431, Jul. 2012.

A. Lapidoth, S. Shamai, and M. Wigger, "On the capacity of fading MIMO broadcast channels with imperfect transmitter side-information," arXiv preprint cs/0605079, 2006.

A. G. Davoodi and S. A. Jafar, "Aligned image sets under channel uncertainty: Settling conjectures on the collapse of degrees of freedom under finite precision CSIT," IEEE Trans. Inf. Theory, vol. 62, no. 10, pp. 5603–5618, Oct. 2016.

S. Jafar, "Blind interference alignment," IEEE J. Sel. Topics in Signal Processing, vol. 6, no. 3, pp. 216–227, 2012.

S. A. Jafar, "Topological interference management through index coding," IEEE Trans. Inf. Theory, vol. 60, no. 1, pp. 529–568, Jan. 2014.

N. Naderializadeh and A. S. Avestimehr, "Interference networks with no CSIT: Impact of topology," IEEE Trans. Inf. Theory, vol. 61, no. 2, pp. 917–938, Feb. 2015.

N. Naderializadeh, A. El Gamal, and A. Salman Avestimehr, "When does an ensemble of matrices with randomly scaled rows lose rank?" arXiv preprint arXiv:1501.07544, 2015.
H. Sun, C. Geng, and S. A. Jafar, "Topological interference management with alternating connectivity," in IEEE International Symposium on Information Theory Proceedings (ISIT), 2013.

S. Gherekhloo, A. Chaaban, and A. Sezgin, "Topological interference management with alternating connectivity: The Wyner-type three user interference channel," in International Zurich Seminar on Communications (IZS), Feb. 2014.

H. Sun and S. A. Jafar, "Topological interference management with multiple antennas," in IEEE International Symposium on Information Theory Proceedings (ISIT), 2014.

H. Maleki and S. Jafar, "Optimality of orthogonal access for one-dimensional convex cellular networks," IEEE Communications Letters, vol. 17, no. 9, pp. 1770–1773, Sept. 2013.

Y. Gao, G. Wang, and S. Jafar, "Topological interference management for hexagonal cellular networks," IEEE Trans. Wireless Communications, vol. 14, no. 5, pp. 2368–2376, May 2015.

X. Yi, H. Sun, S. A. Jafar, and D. Gesbert, "Fractional coloring (Orthogonal access) achieves all-unicast capacity (DoF) region of index coding (TIM) if and only if network topology is chordal," arXiv:1501.07870, Jan. 2015.

Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, "Index coding with side information," IEEE Trans. Inf. Theory, vol. 57, no. 3, pp. 1479–1494, Mar. 2011.

X. Yi and D. Gesbert, "Topological interference management with transmitter cooperation," IEEE Trans. Inf. Theory, vol. 61, no. 11, pp. 6107–6130, Nov. 2015.

O. Simeone, O. Somekh, H. V. Poor, and S. Shamai, "Local base station cooperation via finite-capacity links for the uplink of linear cellular networks," IEEE Trans. Inf. Theory, vol. 55, no. 1, pp. 190–204, 2009.

A. Lapidoth, N. Levy, S. Shamai Shitz, and M. Wigger, "Cognitive Wyner networks with clustered decoding," IEEE Trans. Inf. Theory, vol. 60, no. 10, pp. 6342–6367, 2014.

V. Ntranos, M. A. Maddah-Ali, and G. Caire, "Cellular interference alignment," IEEE Trans. Inf. Theory, vol. 61, no. 3, pp. 1194–1217, Mar. 2015.

V. Neumann-Lara, "The dichromatic number of a digraph," J. Combinatorial Theory, Series B, vol. 33, no. 3, pp. 265–270, 1982.

D. Bokal, G. Fijavz, M. Juvan, P. M. Kayll, and B. Mohar, "The circular chromatic number of a digraph," J. Graph Theory, vol. 46, no. 3, pp. 227–240, 2004.

K. Shanmugam, A. G. Dimakis, and M. Langberg, "Local graph coloring and index coding," in IEEE International Symposium on Information Theory Proceedings (ISIT), 2013.

T. Király and J. Pap, "An extension of Lehman's theorem and ideal set functions," Discrete Applied Mathematics, 2015.

A. Schrijver, Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003, vol. 24.

S. D. Andres and W. Hochstättler, "Perfect digraphs," J. Graph Theory, vol. 79, no. 1, pp. 21–29, 2015.

M. Padberg, "Lehman's forbidden minor characterization of ideal 0–1 matrices," Discrete Mathematics, vol. 111, no. 1, pp. 409–420, 1993.

A. Lehman, "On the width-length inequality," Mathematical Programming, vol. 16, no. 1, pp. 245–259, 1979.

A. Lehman, "On the width-length inequality and degenerate projective planes," Polyhedral Combinatorics, vol. 1, pp. 101–105, 1990.

M. Conforti, G. Cornuéjols, and M. Rao, "Decomposition of balanced matrices," J. Combinatorial Theory, Series B, vol. 77, no. 2, pp. 292–406, 1999.
P. Seymour, "Decomposition of regular matroids," J. Combinatorial Theory, Series B, vol. 28, no. 3, pp. 305–359, 1980.

G. Cornuéjols and B. Novick, "Ideal 0, 1 matrices," J. Combinatorial Theory, Series B, vol. 60, no. 1, pp. 145–157, 1994.

M. Tahmasbi, A. Shahrasbi, and A. Gohari, "Critical graphs in index coding," IEEE J. Sel. Areas Commun., vol. 33, no. 2, pp. 225–235, 2015.

F. Arbabjolfaei and Y.-H. Kim, "On critical index coding problems," arXiv:1504.06760, 2015.

B. Hassibi, "Topological interference alignment in wireless networks," in Smart Antennas Workshop, 2014.

J. A. Bondy and U. S. R. Murty, Graph Theory with Applications. Macmillan London, 1976, vol. 290.

M. Conforti, G. Cornuéjols, A. Kapoor, and K. Vušković, "Perfect, ideal and balanced matrices," European J. Operational Research, vol. 133, no. 3, pp. 455–461, 2001.

V. Chvátal, "On certain polytopes associated with graphs," J. Combinatorial Theory, Series B, vol. 18, no. 2, pp. 138–154, 1975.

P. D. Seymour, "The matroids with the max-flow min-cut property," J. Combinatorial Theory, Series B, vol. 23, no. 2-3, pp. 189–222, 1977.

C. Berge, "Balanced matrices," Mathematical Programming, vol. 2, no. 1, pp. 19–31, 1972.

G. Zambelli, "A polynomial recognition algorithm for balanced matrices," J. Combinatorial Theory, Series B, vol. 95, no. 1, pp. 49–67, 2005.

Y. Birk and T. Kol, "Informed-source coding-on-demand (ISCOD) over broadcast channels," in Proc. INFOCOM'98, 1998.

Z. Chen, J. Ma, and W. Zang, "Coloring digraphs with forbidden cycles," J. Combinatorial Theory, Series B, vol. 115, pp. 210–223, 2015.

J. Körner, C. Pilotto, and G. Simonyi, "Local chromatic number and Sperner capacity," J. Combinatorial Theory, Series B, vol. 95, no. 1, pp. 101–117, 2005.

G. Simonyi, G. Tardos, and A. Zsbán, "Relations between the local chromatic number and its directed version," J. Graph Theory, vol. 79, no. 4, pp. 318–330, 2015.
Active Learning Using Uncertainty Information

Yazhou Yang1 2, Marco Loog1 3

1Pattern Recognition Laboratory, Delft University of Technology, Delft, The Netherlands

2College of Information System and Management, National University of Defense Technology, Changsha, China

3The Image Section, University of Copenhagen, Copenhagen, Denmark

Email: {y.yang-4, m.loog}@tudelft.nl

December 30, 2023

Many active learning methods belong to the retraining-based approaches, which select one unlabeled instance, add it to the training set with each of its possible labels, retrain the classification model, and evaluate the criterion on which we base our selection. However, since the true label of the selected instance is unknown, these methods resort to calculating the average-case or worst-case performance with respect to the unknown label. In this paper, we propose a different method to solve this problem. In particular, our method aims to make use of the uncertainty information to enhance the performance of retraining-based models. We apply our method to two state-of-the-art algorithms and carry out extensive experiments on a wide variety of real-world datasets. The results clearly demonstrate the effectiveness of the proposed method and indicate that it can reduce human labeling efforts in many real-life applications.

§ INTRODUCTION

Over the past decade, a primary foundation of much progress in machine learning has been the rapid growth of the number and size of available data sets, such as ImageNet <cit.>, which contains over 14 million labeled images for object recognition. In a practical scenario, we frequently encounter the situation where few labeled instances are available along with abundant unlabeled samples. Labeling a large amount of data is, however, very difficult due to the huge amount of time required, or expensive because of the need for human experts <cit.>. Thus, it is very attractive to propose a proper labeling scheme that reduces the number of labels required to train a classifier.

Active learning has been put forward to overcome the above labeling problem. The main assumption behind active learning is that if an active learner can freely select any samples it wants, it can outperform random sampling with less labeling <cit.>. Thus, the main task of active learning is to query as little data as possible while maximizing the learning performance, so as to minimize the annotation cost. Active learning tries to achieve this by selecting the most valuable samples. However, it is difficult to define or measure the value of one instance for the learning problem. We can view it as the amount of information the instance carries, which potentially promotes the learning performance once its true label is known <cit.>. Since we do not have an exact measure of this value, a great number of selection criteria have been proposed from different perspectives on how to estimate the usefulness of each sample. Commonly used criteria in active learning include query-by-committee <cit.>, uncertainty sampling <cit.>, expected error reduction <cit.>, expected model change <cit.>, variance reduction <cit.> and "min-max" view active learning <cit.>.
Query-by-committee puts forward multiple models as a committee and selects the samples which receive the highest level of disagreement from the committee members <cit.>. Uncertainty sampling prefers the instances with maximum uncertainty. Based on the measurement of uncertainty, uncertainty sampling can be roughly divided into two categories: maximum entropy of the estimated label <cit.> and minimum distance from the decision boundary <cit.>. For example, Tong and Koller <cit.> proposed to query the instance closest to the current decision boundary using a support vector machine classifier. Campbell et al. <cit.> shared the same idea as Tong and Koller <cit.>.

Roy and McCallum <cit.> proposed expected error reduction (EER), which is a popular active learning method. EER aims to reduce the generalization error incurred when labeling a new instance. Since we do not have access to the test data, Roy and McCallum suggested computing the "future error" on the unlabeled pool under the assumption that the unlabeled data set is representative of the test distribution. In other words, the unlabeled pool can be viewed as a validation set. Also, since we have no knowledge about the true labels of the unlabeled samples, EER estimates the average-case criterion of the potential loss instead. Expected model change follows the idea of EER, but selects the instance which leads to the maximum change of the current model. The variance reduction methods try to minimize the output variance <cit.>. Schein and Ungar <cit.> extended this approach to an expected variance reduction method on logistic regression by following the idea of EER. "Min-max" view active learning was originally proposed by Hoi et al. <cit.>, where "min-max" indicates that the worst-case criterion is adopted. The key idea is to select the sample which minimizes the gain of the objective function no matter what its assigned label is. Huang et al. <cit.> extended this framework by taking into account all the unlabeled data when calculating the objective function.

Current active learning methods can be split into two classes: retraining-based and retraining-free active learning. Retraining-based active learning comprises methods which measure the information of an unlabeled sample by labeling it (with any possible label) and adding it to the training set to retrain the classification model. Then, an appropriate criterion can be evaluated and used for the sample selection. The second class, retraining-free active learning, contains the remaining methods, which do not need to repeatedly retrain the model for each unlabeled instance during a single selection. For example, uncertainty sampling and query-by-committee belong to this category. However, since the true label of the selected unlabeled instance is unknown, retraining-based methods resort to calculating the average-case or worst-case criteria with respect to the unknown label. In this paper, we propose a different criterion for retraining-based methods. We incorporate the uncertainty information (measured by the posterior probabilities within the min-max framework) into the selection. The proposed criterion can be seen as a trade-off between exploration and exploitation: the uncertainty information plays the role of the exploitation, while the retraining-based models act as the exploration part. We concentrate on the pool-based active learning setting, which assumes a large pool of unlabeled data along with a small set of labeled data already available <cit.>.
We consider myopic active learning, which sequentially and iteratively selects one unlabeled instance at a time.

§.§ Outline

The rest of this paper is organized as follows. Section <ref> first reviews the framework of retraining-based active learning; then two state-of-the-art methods under the retraining framework are briefly described. Section <ref> demonstrates the primary motivation of the proposed method and derives a general algorithm for retraining-based active learning in detail. It also illustrates how to extend the proposed criterion to current methods. Experimental design and results are reported in Section <ref>; Section <ref> concludes this work, followed by some future issues.

§ RETRAINING-BASED ACTIVE LEARNING

In this section, we summarize a general framework of retraining-based active learning. Then we demonstrate two examples under this framework: expected error reduction and minimum loss increase.

§.§ Retraining-based Active Learning

Firstly, let us introduce some preliminaries and notation. Let ℒ = {(x_i, y_i)}_{i=1}^m represent the training data set that consists of m labeled instances, and let 𝒰 be the pool of unlabeled instances {x_i}_{i=m+1}^n. Each x_i ∈ ℝ^d is a d-dimensional feature vector, and y_i ∈ C = {+1,-1} is the class label of x_i. In this paper, we focus on the binary classification problem first; it is easy to extend this work to the multi-class problem by extending C to a multi-label set. We denote by P_ℒ(y|x) the conditional probability of y given x according to a classifier trained on ℒ.

For retraining-based active learning, the framework can be summarized in Algorithm <ref>, where V(x_i, y_i) represents any selection criterion associated with (x_i, y_i). The main procedure contains the loops which check all the points in the unlabeled pool 𝒰 over all the possible labels. That is, we first select one instance from the unlabeled pool and assign it any possible label. Then we update the labeled set (since we acquire a new labeled sample) and retrain the classifier we use. Based on the newly trained classifier, we can measure some kind of selection criterion (e.g., the generalization error in EER <cit.>). However, since the true label information of the selected sample is unknown, we need to calculate some aggregate performance, e.g., the average-case in <cit.>, the worst-case in <cit.>, or even the best-case criterion in <cit.>. Finally, we query the instance which leads to the maximum or minimum value in terms of the criterion we are interested in.

EER is one example of retraining-based active learning, which uses the generalization error as V(x_i, y_i). We get expected model change <cit.> by adopting model change as the criterion. By adopting the variance as the criterion and logistic regression as the classifier, we get expected variance reduction <cit.>. Similarly, if we want to minimize the value of the objective function after labeling a new instance and use the worst-case performance (corresponding to the min-max framework), we get <cit.>. Clearly, the retraining-based approaches may suffer from a high computational cost due to the fact that they need to go over all the unlabeled data and all the possible labels. A minimal sketch of this selection loop is given below.
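The following Python sketch mirrors the generic loop of Algorithm <ref>, assuming a probabilistic scikit-learn classifier as a stand-in for the classifiers used in the paper; the names retraining_based_query, V, and agg are our own, and the criterion V and the aggregation over the unknown label are passed in as functions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def retraining_based_query(X_lab, y_lab, X_pool, V, agg, base=None):
    """Generic retraining-based selection (a sketch of Algorithm 1).

    V(clf_plus, X_plus, y_plus, X_pool, i, y): criterion after adding (x_i, y).
    agg(probs, scores): combines the per-label scores, e.g. average- or worst-case.
    Returns the index (into X_pool) of the instance to query.
    """
    base = base or LogisticRegression()
    clf = clone(base).fit(X_lab, y_lab)
    probs = clf.predict_proba(X_pool)          # columns follow clf.classes_
    best_i, best_val = None, np.inf
    for i in range(len(X_pool)):
        scores = []
        for y in clf.classes_:                 # try every possible label
            X_plus = np.vstack([X_lab, X_pool[i]])
            y_plus = np.append(y_lab, y)
            clf_plus = clone(base).fit(X_plus, y_plus)   # retrain
            scores.append(V(clf_plus, X_plus, y_plus, X_pool, i, y))
        val = agg(probs[i], np.array(scores))
        if val < best_val:
            best_i, best_val = i, val
    return best_i

# average-case and worst-case aggregations over the unknown label
average_case = lambda p, s: float(np.dot(p, s))
worst_case   = lambda p, s: float(np.max(s))
```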
§.§ Expected Error Reduction

Expected error reduction has demonstrated its effectiveness in the text classification domain <cit.>. There is also some follow-up work on EER contributed by other researchers <cit.> <cit.> <cit.>. EER aims to select the sample which will reduce the future generalization error. Since we cannot see the test data, the unlabeled pool can be used as a validation set to predict the future test error. We then encounter a new problem, since we do not know the true labels of the pool. Roy and McCallum <cit.> suggested that, in practice, we can approximately estimate the error using the expected log-loss or 0/1 loss over the pool. For example, if we adopt the log loss, EER can be written as follows:

min_{x ∈ 𝒰} ∑_{y ∈ C} P_ℒ(y|x) ( -∑_{x_i ∈ 𝒰} ∑_{y_i ∈ C} P_{ℒ^+}(y_i|x_i) log P_{ℒ^+}(y_i|x_i) ),

where ℒ^+ = ℒ ∪ (x,y) means that the selected instance x is labeled y and added to ℒ. Note that the first term P_ℒ(y|x) contains the pre-trained label information. The second term is the sum of the potential entropies over the unlabeled data set 𝒰.

§.§ Minimum Loss Increase

We find that EER attempts to reduce the future generalization error; however, this is not easy due to the missing test data and the missing true label information of the unlabeled data. Some researchers have tried to solve this problem from a different perspective. Hoi et al. <cit.> presented a so-called "min-max" view of active learning. It prefers the instance which results in a small value of an objective function regardless of its assigned label. This is because the smaller the value of the objective function, the better the learning model, at least with high probability. Assume G_ℒ is the value of an objective function on the current labeled data ℒ. When we label a new instance and update the training data, ℒ^+ = ℒ ∪ {x_i, y_i}, we get a new value of the objective function, G_ℒ^+. What we want is the minimum increase of the objective function, i.e., G_ℒ^+ - G_ℒ, when adding one more labeled sample. Because the second term, G_ℒ, is independent of the next queried instance, we can ignore it and focus on minimizing G_ℒ^+. Since we expect a minimum value of G_ℒ^+ regardless of the assigned label of x_i, we adopt the worst-case performance, instead of the average-case version:

min_{x_i ∈ 𝒰} max_{y_i ∈ C} G_ℒ^+.

Note that we can view G_ℒ^+ as one choice of V(x_i, y_i) mentioned in Algorithm <ref>. Let us consider an unconstrained optimization problem using an L_2-regularized classifier with an arbitrary loss l(w; x_i, y_i):

g(w) = (1/(2λ)) ||w||^2 + ∑_{x_i ∈ ℒ} l(w; x_i, y_i),

where w is the parameter of the learning classifier. If we adopt the hinge loss l(w; x_i, y_i) = max(0, 1 - y_i w^T x_i), we can derive the same model as the "min-max" view active learning described in <cit.>, but without extending it to the batch-mode setting. If we use the square loss l(w; x_i, y_i) = (y_i - w^T x_i)^2, we get the same model as <cit.>. Note that, as is stated in <cit.>, although <cit.> includes all the unlabeled data when calculating the objective function, the unlabeled examples play no role, since <cit.> relaxes the constraint on the labels of the unlabeled pool in the end. This operation guarantees zero contribution of the unlabeled data to the objective function. Thus, <cit.> is also a special case using the square loss. Moreover, we can conclude that the main idea of min-max view active learning is to minimize the increase of the value of an objective function. In our paper, we consider the logistic loss l(w; x_i, y_i) = log(1 + exp(-y_i w^T x_i)), which results in:

min_{x ∈ 𝒰} max_{y ∈ C} (1/(2λ)) ||ŵ||^2 + ∑_{x_i ∈ ℒ^+} -log P_{ℒ^+}(y_i|x_i),

where ŵ is the estimated parameter of the L_2-regularized logistic regression model. Logistic regression is chosen as the base classifier since it is widely used in many fields and outputs conditional probabilities directly, which can be used in active learning <cit.>. We call this method Minimum Loss Increase (MLI) in this paper. EER tries to minimize the error on the unlabeled data, while MLI aims to minimize the loss on the data already labeled. Concrete instantiations of the two criteria as V functions are sketched below.
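These two criteria can be plugged into the generic loop sketched earlier; the function names are our own, and the MLI variant drops the small ridge term, as the text itself does for UMLI below.

```python
import numpy as np

def V_eer(clf_plus, X_plus, y_plus, X_pool, i, y):
    """EER criterion: total predictive entropy over the unlabeled pool
    after retraining with the hypothesised label y (log-loss variant)."""
    P = np.clip(clf_plus.predict_proba(X_pool), 1e-12, 1.0)  # numerical safety
    return float(-(P * np.log(P)).sum())

def V_mli(clf_plus, X_plus, y_plus, X_pool, i, y):
    """MLI criterion: logistic loss of the retrained model on L^+
    (the regularization term (1/(2*lam))*||w||^2 is ignored here)."""
    P = clf_plus.predict_proba(X_plus)
    idx = np.searchsorted(clf_plus.classes_, y_plus)
    p_true = np.clip(P[np.arange(len(y_plus)), idx], 1e-12, 1.0)
    return float(-np.log(p_true).sum())

# matching the equations above:
# retraining_based_query(X_lab, y_lab, X_pool, V_eer, average_case)  -> EER
# retraining_based_query(X_lab, y_lab, X_pool, V_mli, worst_case)    -> MLI
```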
§ A NEW RETRAINING-BASED ACTIVE LEARNER

In this section, we motivate our proposed method and, subsequently, describe a general adaptation for retraining-based active learning models.

§.§ Motivation

Obviously, not knowing the true labels of the unlabeled data complicates calculating the final score of each instance in step 10 of Algorithm <ref>. One simple possibility is computing the average-case <cit.> or worst-case performance <cit.>, or even the best-case criterion <cit.>. These choices, however, may fail to take into account some potentially valuable information. Firstly, although the average-case criterion makes use of the label distribution information P_ℒ(y_i|x_i) already known, the expectation calculation can hide or underestimate some outstanding samples due to the re-weighting by P_ℒ(y_i|x_i). For example, suppose the true label of instance x_i is +1, the estimated P_ℒ(+1|x_i) = 0.1, and V(x_i,+1) has the maximum value compared with other instances. Then the average-case criterion of x_i, namely ∑_{y_i} P_ℒ(y_i|x_i) V(x_i,y_i), is highly likely to be surpassed by that of other instances. Secondly, the worst-case criterion suffers from not taking the label distribution information into account at all. Worst-case analysis is a safe analysis, since the criterion is never underestimated. However, making no use of the available label information P_ℒ(y_i|x_i) can lose sight of some valuable information. Thus, to overcome the shortcomings mentioned, a new criterion for retraining-based active learning is proposed. The main motivation is that we want to incorporate the uncertainty information (e.g., the known label distribution information) within the min-max framework for retraining-based models. The proposed criterion is therefore as follows:

min_{x_i ∈ 𝒰} max_{y_i ∈ C} P_ℒ(y_i|x_i) V(x_i,y_i),

where P_ℒ(y_i|x_i) contains the pre-trained label information and V(x_i,y_i) represents any criterion we are interested in. Note that for some classifiers, like logistic regression, we can use the estimated posterior probability as P_ℒ(y_i|x_i). For classifiers which do not produce a probabilistic output, e.g., SVMs, we can transform their output into a probability using Platt's <cit.> or Duin and Tax's method <cit.>. For V(x_i,y_i), various choices are possible, such as the test error on the unlabeled pool in EER, the output variance as in <cit.>, or the value of an objective function <cit.>.

The proposed method can be interpreted as follows: it utilizes the pre-trained label information; although this kind of information might be inaccurate due to the limited labeled data we have, it still provides underlying or potentially useful clues which may promote active learning. Firstly, it improves upon the average-case criterion, since it does not compute the expected value; the calculation of the expectation tends to ruin the discriminative information contained in the data due to its averaging manner. Secondly, it outperforms the worst-case criterion, because it takes advantage of the knowledge of the potential label distribution, while worst-case analysis does not use this at all. Thus, it avoids the disadvantages of the average-case and worst-case criteria. The corresponding aggregation is sketched below.
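In the notation of the earlier sketches, the proposed criterion is just one more aggregation function, so UEER and UMLI drop into the same loop; the names are our own.

```python
import numpy as np

# The proposed uncertainty-weighted min-max aggregation:
# compare with average_case and worst_case defined earlier.
proposed = lambda p, s: float(np.max(p * s))

# With constant scores it reduces to uncertainty sampling: the maximum of the
# posteriors is smallest for the instance whose posterior is closest to 0.5.
p_certain, p_uncertain = np.array([0.9, 0.1]), np.array([0.55, 0.45])
s = np.ones(2)
print(proposed(p_certain, s), proposed(p_uncertain, s))   # 0.9 vs 0.55

# plugged into the generic loop (names from the earlier sketches):
# retraining_based_query(X_lab, y_lab, X_pool, V_eer, proposed)  -> UEER
# retraining_based_query(X_lab, y_lab, X_pool, V_mli, proposed)  -> UMLI
```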
The proposed criterion can be seen as a trade-off between the average-case and the worst-case criteria. Lastly, it can be considered as incorporating uncertainty sampling (encoded by the posterior probabilities) into retraining-based models. If V(x_i,y_i) becomes a constant term like 1, or P_ℒ(y_i|x_i) itself, then the proposed method turns into exactly uncertainty sampling. More specifically,

min_x_i ∈𝒰 max_y_i ∈ C P_ℒ(y_i|x_i) or min_x_i ∈𝒰 max_y_i ∈ C [P_ℒ(y_i|x_i)]^2

behave identically to uncertainty sampling, since they select the instance whose posterior probability comes closest to 0.5 in the binary problem. This shows that our proposed method actually fuses uncertainty sampling with retraining-based models.

§.§ Two Examples of the Proposed Method

To provide insight into the underlying characteristics of the proposed method, we apply it to two state-of-the-art retraining-based models, EER and MLI. We also demonstrate its advantage on a synthetic data set in Figure <ref>. Since our method tries to make use of the uncertainty information, the adapted methods below are termed uncertainty retraining-based active learners. It is easy to extend EER to uncertainty-based error reduction by adopting our method in Equation <ref> as follows:

min_x ∈𝒰 max_y ∈ C P_ℒ(y|x) ( -∑_x_i ∈𝒰 ∑_y_i ∈ C P_ℒ^+(y_i|x_i) log P_ℒ^+(y_i|x_i) )

This method is called UEER for short. We can also apply our proposed criterion to MLI. The new approach is called UMLI in this paper. Note that the regularization term (1/2)λ‖ŵ‖^2 in Equation <ref> is usually quite small, so we ignore it in our adapted criterion:

min_x ∈𝒰 max_y ∈ C P_ℒ(y|x) ∑_x_i ∈ℒ^+ -log P_ℒ^+(y_i|x_i)

As shown in Figure <ref>, we construct a synthetic binary data set in which the two colors represent the two classes. We demonstrate the performance of the four retraining-based active learners EER, UEER, MLI, and UMLI in the four corners, respectively. In each corner, one black triangle and one black circle mark the two initial labeled points. When we compare UEER with EER, it is obvious that UEER selects a number of instances near the decision boundary, while EER explores points over a wider range. This is because our method helps UEER make use of the uncertainty information, which makes UEER focus on the region about which the classifier is least certain. Similar results can be found between UMLI and MLI. MLI explores the data space and queries points around the border, while UMLI balances exploration and exploitation: it concentrates on the central part (exploitation) while also searching around the edge. Therefore, we can see that our method enhances retraining-based models by balancing exploration and exploitation.

§ EXPERIMENTS

In this section, we investigate the performance of our proposed methods to examine the effectiveness and robustness of the new criterion. The following experiments are limited to binary classification problems. We first describe the experimental setting, then present extensive experimental results, followed by further discussion and analysis.

§.§ Experimental setting

We compare our proposed methods UEER and UMLI against their original versions, EER and MLI, respectively. Random sampling is also included in this comparison. In all the experiments, we use the L_2-regularized logistic regression included in the LIBLINEAR package <cit.> as the default classifier, with the same regularization parameter, λ = 100, for all methods. Classification accuracy is used as the comparison criterion in our experiments.
However, since active learning is an iterative labeling procedure, we care about the performance during the whole learning process. Thus, it is not reasonable to merely compare the accuracy at a few individual points. Instead, we generate the learning curve of classification accuracy versus the number of labeled instances, and then calculate the area under the learning curve (ALC) as the measure of evaluation.
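For concreteness, one plausible way to compute the ALC (our own normalization convention, not necessarily the exact one used in the original experiments) is the trapezoidal area under the accuracy curve divided by the range of queried labels:

```python
import numpy as np

def alc(n_labeled, accuracy):
    """Normalized area under the accuracy-vs.-labels learning curve."""
    n = np.asarray(n_labeled, dtype=float)
    a = np.asarray(accuracy, dtype=float)
    return np.trapz(a, n) / (n[-1] - n[0])

# e.g. alc([2, 3, 4, 5], [0.55, 0.62, 0.66, 0.70]) -> mean height of the curve
```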
We test on a total of 49 real-world data sets from various real-life applications, including many UCI data sets <cit.>, the MNIST handwritten digit dataset <cit.>, and the 20 Newsgroups dataset <cit.>. There are 39 datasets from the UCI benchmark collection, such as breast, vehicle, and heart. These datasets are pre-processed according to <cit.>. For the wine data set, we use class 2 versus classes 1 and 3 as the binary problem. For the glass data set, we also split it into two groups (classes 1-3 vs. classes 5-7) to build a binary case. We randomly sub-sample 1000 instances from mushroom for computational efficiency. We select six pairs of letters from the Letter Recognition Data Set <cit.>, i.e., D vs. P, E vs. F, I vs. J, M vs. N, V vs. Y, and U vs. V, since these pairs look similar to each other and distinguishing them is somewhat challenging. 3 vs. 5, 5 vs. 8, and 7 vs. 9 are three difficult pairs taken from the MNIST data set [http://yann.lecun.com/exdb/mnist/] and used as binary classification data sets. We randomly sub-sample 1500 instances from these three data sets for computational efficiency. We also test performance on the 20 Newsgroups dataset, a common benchmark for text classification [http://qwone.com/ jason/20Newsgroups/]. Following the work of <cit.>, we evaluate three binary tasks from the 20 Newsgroups dataset: baseball vs. hockey, pc vs. mac, and religion.misc vs. alt.atheism; the three pairs represent easy, moderate, and difficult classification problems, respectively. We apply PCA to reduce the dimensionality of the above three datasets to 500 for computational efficiency. We also use the pre-processed data autos, motorcycles, baseball, and hockey used in <cit.>.

To objectively evaluate performance, each data set is randomly divided into training and test sets of equal size. At the very beginning of active learning, we assume that only two instances randomly picked from the training data are labeled, one from the positive class and the other from the negative class. We run each active learning algorithm 20 times on each real-world dataset. The average performance of each active learning method is reported in the following section.

§.§ Results

Table <ref> shows the experimental results on the 49 data sets. The datasets in Table <ref> are sorted with respect to the performance of random sampling; the comparisons span datasets varying from very difficult problems (e.g., hill) to easy tasks (e.g., acute). To clearly demonstrate the advantage of the proposed method, we perform pairwise comparisons between each original algorithm and its counterpart, i.e., EER vs. UEER and MLI vs. UMLI, respectively. On each data set, a paired t-test at the 95% significance level is used to determine which method has the best performance or provides a comparable outcome; these methods are highlighted in bold face. Over all the experiments, average performances are reported in Table <ref>. “Average Rank” shows the average rank of all the methods with regard to their performances on all the experiments. The lower the value of the average rank, the better the method. The “win/tie/loss counts” give the number of times our proposed methods win against, tie with, or lose to their counterparts over all 49 datasets.

As shown in Table <ref>, our proposed methods UEER and UMLI evidently outperform their counterparts EER and MLI, respectively. UEER surpasses EER in terms of average accuracy, improving its performance from 0.812 to 0.822. UEER also outperforms EER in terms of average rank, which demonstrates the effectiveness of our method. Similar results can be found between UMLI and MLI: UMLI is superior to MLI in overall performance. Moreover, it is interesting to observe that UEER attains the best overall performance among all the active learning methods. Over all the experimental data sets, the win/tie/loss count of UEER versus EER is 29/7/13, meaning that UEER is the preferred active learner in over half the cases. With regard to UMLI and MLI, the win/tie/loss count is 27/11/11, which also shows a clear benefit of our scheme. We also notice that even random sampling can surpass all the other methods, e.g., on the blood data set, indicating that, in general, one should not apply active learners blindly.

To investigate the robustness of our method, we also apply the worst-case criterion to EER and the average-case criterion to MLI, respectively. Due to space limitations, we omit the results on each data set and only report the average performances. The average performance (ALC) of the worst-case criterion on EER is 0.771, while that of the average-case criterion on MLI is 0.710. To our surprise, both show clearly poorer performance in comparison with our method, and even perform worse than random sampling. A possible reason is the following: EER computes the error on the unlabeled data, where none of the true labels are known, so the average-case criterion is the safe choice for EER; MLI estimates the loss on the enlarged labeled set ℒ∪{x_i, y_i}, where only the true label of x_i is unknown, so the worst-case criterion is more appropriate for MLI than the average-case criterion. However, since the proposed method is a trade-off between the two criteria, it can adapt to both settings and shows robust performance across different retraining-based models.

§ CONCLUSIONS

In this paper, we propose a new general method for retraining-based active learning. The proposed method balances the average-case and worst-case criteria by incorporating uncertainty information (carried by the pre-trained posterior probabilities) within a min-max framework. It drives current retraining-based models to pay more attention to exploitation. We apply the new idea to two state-of-the-art methods to investigate its effectiveness. The synthetic data demonstrate that our method prefers to select instances near the decision boundary in comparison with the original retraining-based approaches. Moreover, extensive experiments on 49 real-world datasets also show that the proposed method is a promising approach for improving retraining-based active learners.
1Physics and Astronomy Department, University of California, Los Angeles, CA 90095-1547
2Núcleo de Astronomía de la Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Av. Ejército Libertador 441, Santiago, Chile
3Astronomy Department, University of Cape Town, Private Bag X3, Rondebosch 7701, Republic of South Africa
4NorthWest Research Associates, 4118 148th Ave NE, Redmond, WA 98052-5164
5Department of Physics, University of California, Davis, CA 95616
6Institute of Geophysics and Planetary Physics, Lawrence Livermore National Laboratory, Livermore, CA 94551
7Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109
lake@physics.ucla.edu

The WISE satellite surveyed the entire sky multiple times in four infrared (IR) wavelengths <cit.>. This all-sky IR photometric survey makes it possible to leverage many of the large publicly available spectroscopic redshift surveys to measure galaxy properties in the IR. While characterizing the cross-matching of WISE data to a single survey is a straightforward process, doing it with six different redshift surveys takes a fair amount of space to characterize adequately, because each survey has unique caveats and characteristics that need addressing. This work describes a data set that results from matching five public redshift surveys with the AllWISE data release, along with a reanalysis of the data described in <cit.>. The combined data set has an additional flux limit of 80 μJy (19.14 AB mag) in WISE's W1 filter imposed in order to limit it to targets with high completeness and reliable photometry in the AllWISE data set. Consistent analysis of all of the data is only possible if the color bias discussed in <cit.> is addressed <cit.>. The sample defined herein is used in this paper's companion paper, <cit.>, to measure the luminosity function of galaxies at 2.4 μm rest frame wavelength, and the selection process of the sample is optimized for this purpose.

§ INTRODUCTION

The astronomy community has an embarrassment of riches when it comes to the depth and breadth of its publicly available catalog of data, both photometric (the Sloan Digital Sky Survey [SDSS], the Two Micron All Sky Survey [2MASS], the Galaxy Evolution Explorer [GALEX], and the Wide-field Infrared Survey Explorer [WISE], to name a few) and spectroscopic (for example, the 6dF Galaxy Survey [6dFGS], SDSS, and Galaxy and Mass Assembly [GAMA]). As tempting as it is to combine all of the available data into a unified measurement of the luminosity function (LF) of galaxies, there is no single standard for how targets are selected for measurement of spectroscopic redshifts, or for how the resulting data is characterized. So any effort to analyze a data set that is a synthesis of many data sets requires a careful consideration of whether the target selection processes are sufficiently similar to be modeled in a unified way, and a characterization of the resulting data set after all quality cuts are made.

We first encountered these difficulties when we made the decision to augment the data in our own small survey <cit.> with publicly available spectroscopic redshift surveys to measure the luminosity function of galaxies at a wavelength of 2.4 μm. The plan was to analyze the data from multiple public surveys both separately and together to get a good grasp on systematics, increase sample size, and minimize cosmic variance. To that end, we selected five additional surveys, with the intention that no one survey should be unique in any redshift range, that were as close to our W1-selected survey in sample selection as possible.
Critically, the surveys had to be as close to flux limited in one digital imaging filter as possible, ruling out surveys such as the SDSS Luminous Red Galaxy Sample <cit.>, the 2dF Galaxy Redshift Survey <cit.>, and the Deep Extragalactic Evolutionary Probe 2 survey <cit.>. This paper contains a description of the sample selection process, and a characterization of the same, for spectroscopically measured redshift catalogs of galaxies pulled from six different surveys and crossmatched, wherever possible, to additional photometric information from SDSS data release 10 (SDSS-DR10), the Two Micron All Sky Survey (2MASS), and the AllWISE Source and Reject Catalogs. The spectroscopic galaxy surveys tapped are: the 6dFGS Data Release 3 K_s selected sample <cit.>, the SDSS Main Galaxy Sample <cit.>, GAMA data release 2 <cit.>, the AGN and Galaxy Evolution Survey (AGES) <cit.>, the zCOSMOS 10k-Bright Spectroscopic Sample <cit.>, and a reanalysis of our W1-selected survey. What all of these surveys have in common is that their target selection processes are driven, primarily, by observed flux in one channel: 2MASS K_s, SDSS r, SDSS r, NOAO Deep Wide Field Survey (NDWFS) I, Hubble F814W (approximately I), and WISE W1, respectively. Roughly, the resulting data sets split up into three low-z, high-Ω surveys (6dFGS, SDSS, and GAMA) and three high-z, low-Ω surveys (AGES, zCOSMOS, and the W1-selected survey). This means that no one survey has a monopoly on the information from any redshift range, though the large number of targets in SDSS means that this information is only available if the surveys are analyzed separately before synthesizing them. The simple selection process these surveys share leaves room, computationally, for the imposition of a further flux cut in W1 for the combined catalog without significantly increasing the complexity of the selection process.

A measurement of the LF using the data described herein takes place in a companion paper to this one, <cit.>. The technique needed to account for the biases in such a diverse group of redshift surveys simultaneously is to broaden the concept of the LF to be a density over the galaxies' entire spectral energy distribution (SED), as described in <cit.>. Two necessary components of that process are the mean and spectral covariance of galaxy spectra. This paper and its companion, LW17III, make use of the mean and covariance of galaxy SEDs as measured in <cit.>. The techniques developed allow us to address the SED dependent completeness concerns raised in <cit.>, and make the minimum cuts to the data needed in the process. In this work, the mean SED is used to cut galaxies that are likely low luminosity outliers in luminosity-redshift space, caused by contamination, as well as to compute curves bounding the regions in luminosity-redshift space where SED variety completeness is nearly constant. In spite of our efforts to make the combined data set as homogeneous as possible, the character of each component survey is still distinct enough that each requires separate consideration in any application.
With that in mind, we have structured this paper to reflect that fact, with each survey getting its own separate section for consideration, in spite of the repetition this causes. The layout of this paper is as follows: Section <ref> describes the data sets chosen and all cuts made to them. Each data set has its own subsection where details peculiar to it are described. The effects of the primary cuts on the data are demonstrated in graphs that are not completeness corrected, in order to show which physical parameters, primarily redshift and luminosity, measurements based on the data set will be most sensitive to. Section <ref> contains excerpts from the machine readable tables of the selected data published with this work. Finally, Section <ref> contains concluding remarks.

The cosmology used in this paper is based on the WMAP 9 year ΛCDM cosmology <cit.>[<http://lambda.gsfc.nasa.gov/product/map/dr5/params/lcdm_wmap9.cfm>], with flatness imposed, yielding: Ω_M = 0.2793, Ω_Λ = 1 - Ω_M, and H_0 = 70 km s^-1 Mpc^-1 (giving Hubble time t_H = H_0^-1 = 13.97 Gyr, and Hubble distance D_H = c t_H = 4.283 Gpc). All magnitudes will be in the AB magnitude system, unless otherwise specified. In cases where the source data was in Vega magnitudes and a conversion to the AB system was provided in the documentation, those conversions were used (2MASS[<http://www.ipac.caltech.edu/2mass/releases/allsky/faq.html#jansky>] and AllWISE[<http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4_4h.html#WISEZMA>]). For the surveys without obviously documented Vega/AB magnitude offsets (NDWFS[<http://www.noao.edu/noao/noaodeep/>], SDWFS[<http://irsa.ipac.caltech.edu/data/SPITZER/SDWFS/>]), we performed the conversion using the offsets provided in <cit.>.

§ DATA SELECTION AND CHARACTERIZATION

The defining data set of this paper is the W1 selected survey described in <cit.>, hereafter simply the W1-selected survey. The biggest advantage of this survey is that the target selection function is extremely simple, driven at the faint end entirely by the target's flux at 3.4 μm, the W1 filter. The disadvantage is that the sample size is relatively small (222 sources). The smallness of this sample, and AllWISE's sky coverage, is what drove the decision to leverage the existing catalog of redshift surveys. We imposed a W1 flux limit of 80 μJy (19.14 AB mag) on all surveys to make the results from the disparate surveys as comparable to the W1-selected data set as possible. The complete list of surveys used is found in Table <ref>.

Summary of Spectroscopic Surveys Used
Survey | Release version | Redshifts (min/median/max) | Coverage Ω (deg^2) | Band | m_lim (AB mag) | Reference
6dFGS [http://www.6dfgs.net/] | 3 | 0.01 / 0.05 / 0.20 | 1.37×10^4 a | K_s | 11.25/14.49 | <cit.>
SDSS-DR7 [http://classic.sdss.org/dr7/] | 7 | 0.01 / 0.10 / 0.33 | 7.88×10^3 b | r | 13.0/17.77 | <cit.>
GAMA [http://www.gama-survey.org/] | 2 | 0.01 / 0.18 / 0.43 | 144 | r | 14.0/19.0 | <cit.>
AGES [http://iopscience.iop.org/article/10.1088/0067-0049/200/1/8/meta] | 1 | 0.05 / 0.31 / 1.00 | 7.75 | I | 15.5/18.9/20.4 | <cit.>
zCOSMOS-10k [http://www.eso.org/sci/observing/phase3/data_releases.html#other_programmes] | 2 | 0.05 / 0.61 / 1.00 | 1.7 | I^* | 15.0/22.5 | <cit.>
W1-selected | 2 | 0.05 / 0.38 / 1.00 | 0.190 | W1 | 15.0/18.70/19.14 c | <cit.>
Redshift surveys used to construct the samples here. The selection for zCOSMOS was done using the Hubble Space Telescope's Advanced Camera for Surveys (ACS) filter F814W, which is approximately I-band. The selections for AGES and the W1-selected survey are split into complete bright and sparse faint samples. The Redshifts column contains the median redshift of the survey, and the minimum and maximum redshifts likely to be useful.
The smaller surveys, for example, have a bias against selecting galaxies that are local, large, and resolved, because they would obstruct the field of higher redshift galaxies, so they require a higher minimum redshift cutoff. In the larger surveys, by contrast, high redshift sources are more likely to be redshift blunders, so they require a lower maximum redshift.
a Initial 6dFGS area is 1.7×10^4 deg^2, but all data with δ ≥ -11.5° was removed to eliminate overlap with other surveys.
b Initial SDSS area is 8.04×10^3 deg^2, but the footprints of the smaller, deeper surveys also used here are removed to eliminate overlap.
c The upper limit is in R-band magnitudes, as required in the Keck/DEIMOS documentation, and measured in the USNO's NOMAD catalog <cit.>.

Photometric Surveys Used by Spectroscopic Survey
Spectroscopic Survey | Photometric Survey | Bands | Citation
W1-selected a | GALEX gr7 [http://galex.stsci.edu/GR6/] | FUV, NUV | <cit.>
  | 2MASS [http://www.ipac.caltech.edu/2mass/] | J, H, K_s | <cit.>
  | SDSS-DR10 [http://www.sdss3.org/dr10/] | u, g, r, i, z | <cit.>
  | AllWISE [http://irsa.ipac.caltech.edu/Missions/wise.html] | W1, W2, W3, W4 | <cit.>
6dFGS | GALEX gr7 [http://galex.stsci.edu/GR6/] | FUV, NUV | <cit.>
  | 2MASS [http://www.ipac.caltech.edu/2mass/] | J, H, K_s | <cit.>
  | AllWISE [http://irsa.ipac.caltech.edu/Missions/wise.html] | W1, W2, W3, W4 | <cit.>
SDSS | SDSS-DR10 [http://www.sdss3.org/dr10/] | u, g, r, i, z | <cit.>
  | 2MASS [http://www.ipac.caltech.edu/2mass/] | J, H, K_s | <cit.>
  | AllWISE [http://irsa.ipac.caltech.edu/Missions/wise.html] | W1, W2, W3, W4 | <cit.>
GAMA | GAMA [http://www.gama-survey.org/] | FUV, NUV | <cit.>
  | SDSS-DR7 [http://classic.sdss.org/dr7/] | u, g, r, i, z | <cit.>
  | UKIDSS LAS [http://www.gama-survey.org/] | Y, J, H, K | <cit.>
  | AllWISE [http://irsa.ipac.caltech.edu/Missions/wise.html] | W1, W2, W3, W4 | <cit.>
AGES | SDSS-DR10 [http://www.sdss3.org/dr10/] | u, g, r, i, z | <cit.>
  | NDWFS-DR3 [http://archive.noao.edu/ndwfs/] | B_w, R, I, K | <cit.>
  | SDWFS-DR1.1 [http://irsa.ipac.caltech.edu/data/SPITZER/SDWFS/] | c1, c2, c3, c4 | <cit.>
  | AllWISE [http://irsa.ipac.caltech.edu/Missions/wise.html] | W1, W2, W3, W4 | <cit.>
zCOSMOS | COSMOS [http://irsa.ipac.caltech.edu/Missions/cosmos.html] | FUV, NUV, u^*, B_j, g^+, V_j, r^+, F814W, i^+, i^*, z^+, J, K_s | <cit.>
  | SDSS-DR10 [http://www.sdss3.org/dr10/] | u, g, r, i, z | <cit.>
  | S-COSMOS-DR3 [http://irsa.ipac.caltech.edu/Missions/cosmos.html] | c1, c2, c3, c4 | <cit.>
  | AllWISE [http://irsa.ipac.caltech.edu/Missions/wise.html] | W1, W2, W3, W4 | <cit.>
Photometric surveys used for fitting SEDs to sources, in order of increasing wavelength.
a Not all sources have all data available, whether a question of coverage or depth, so roughly half of the sources were only characterized by WISE data.
We cross matched the surveys to the AllWISE catalog <cit.> and reject table using a spatial match with a 6″ radius, the full width at half maximum (FWHM) of the WISE beam, keeping only the nearest matching source. We then traversed the list again to ensure that each source from AllWISE was associated with only one target from the redshift survey, assigning the AllWISE source to the closest target in cases where multiple targets matched a single source. The reason for this choice is that the target closest to the photo-center of the AllWISE source is likely the one providing the dominant contribution to the detected flux. While an ad-hoc deblending procedure could have been developed, the issue was infrequent enough to make addressing it in this fashion not worthwhile; it affected less than 3% of sources in zCOSMOS-10k, the deepest and narrowest survey in this paper. Performing the initial search out to 6″ allowed us to examine how the match radius affected both the completeness and purity of the sample. In that analysis, we decided that keeping only sources with a match within half of the WISE beam's FWHM (3″) was, subjectively, an adequate compromise among all of the sample completeness and purity factors, since the target likely contributes the majority of the flux in the WISE measurement. Sources with matches between 6″ and 3″ that pass all other tests are regarded as lost to contamination, and are treated as a reduction in completeness for the survey. The same is done for sources which are flagged as having contaminated photometry in W1 in the AllWISE database (w1cc_map ≠ 0). The fractions of points lost to contamination by these criteria are: 9.2% for 6dFGS, 4.1% for SDSS, 3.8% for GAMA, 3.0% for AGES, 11.8% for zCOSMOS, and 1.4% for the W1-selected survey.

The large WISE beam means that the vast majority of galaxies detected in the AllWISE data release are unresolved and well characterized by the point-spread function (PSF) photometry stored in the w?flux columns of the database. While there were not enough resources to perform a full and independent extended source analysis of the WISE survey, the WISE team was able to place elliptical apertures on sources that are already identified in the 2MASS Extended Source Catalog (XSC). Figure <ref> shows the trend in L_ν(2.4 μm) computed by K-correcting SDSS r model fluxes (profile fit flux measurements) versus using WISE W1 fluxes, as a function of the reduced χ^2 of the W1 PSF fit, for sources from the SDSS survey. Panel a shows the trend when using W1 PSF photometry, and panel b shows the trend when using the W1 elliptical apertures. While both sets of data have a trend, the elliptical aperture has a smaller trend at large χ^2.

The hazard of using only PSF fluxes for resolved sources (even for only marginally resolved ones) is twofold: first, L_⋆, the luminosity scale at which there is a “knee” in the luminosity function, will be underestimated for low redshift galaxies; second, and as a consequence, the evolution in L_⋆ will be overestimated. This bias will have further effects on the values observed for other LF parameters. κ_⋆, the normalization of the flux counts, should be slightly decreased, because the bias is a blunting of the flux counts histogram at the bright end. The definition of ϕ_⋆, the value of the luminosity function at L_⋆, makes its value dependent on L_⋆, so both its value and the evolution of that value will be strongly affected. We therefore attempt to minimize this bias by using the elliptical aperture flux when w?rchi2 ≥ 3, if it is not an upper limit and if the source is within 5″ of the XSC source (xscprox ≤ 5″).
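In code, the flux choice reduces to a small conditional. The sketch below is ours; the column names w1rchi2 and xscprox mirror the AllWISE quantities quoted in the text, while w1_ell_flux and w1_ell_is_upper_limit are hypothetical stand-ins for the elliptical aperture flux and its upper-limit flag:

```python
def choose_w1_flux(row):
    """Return the W1 flux (uJy) per the criteria above: elliptical aperture
    photometry for resolved XSC-matched sources, PSF photometry otherwise."""
    use_aperture = (
        row["w1rchi2"] >= 3.0              # poor PSF fit: source likely resolved
        and not row["w1_ell_is_upper_limit"]
        and row["xscprox"] <= 5.0          # within 5 arcsec of a 2MASS XSC source
    )
    if use_aperture:
        return row["w1_ell_flux"]          # aperture fluxes are used as-is
    # PSF fluxes receive the +1.5 uJy background over-subtraction correction
    # discussed later in this section
    return row["w1flux"] + 1.5
```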
In the future, the ideal solution would be to generalize the photometry software used in producing the AllWISE catalog, WPHOT[<http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4_4c.html>], to process the images used to generate each survey's target list at the same time as the WISE images, using a profile model for the galaxies; similar to what was done in <cit.>. We have intentionally chosen not to use <cit.> for the present work because it would introduce an additional systematic difference between the sources for which SDSS photometry is available and those for which it is not, and we want to minimize such differences wherever possible.

Model fitting techniques that correct for model completeness using a smoothly varying selection function, as we intend to apply to this data set in the companion work LW17III, tend to make the model parameters particularly sensitive to outliers that are in low completeness regions. The most prominent examples of this outlier effect in this work are objects which pass all of the selection flux cuts, but sit at a position in the luminosity-redshift plane that the model assigns an extremely low selection probability. The two most common reasons for this are: sources with contaminated optical photometry, and sources that are marginally resolved by WISE but which do not have elliptical aperture photometry available. We calculated L_ν(2.4 μm) from the WISE photometry and, if it isn't contaminated to the same extent as the optical selection photometry, then the source can be an outlier on the low side in a luminosity-redshift graph. Low luminosity outliers are in a region that the completeness model assigns a low probability of having been selected. This causes a large swing in the estimate of the LF model parameters in order to get a finite density at the position of the outlier. An example of a source with contaminated targeting photometry can be found in Figure <ref>, and one that lacks an elliptical aperture in AllWISE but has an incorrect PSF flux is in Figure <ref>.
The apparent radius of a galaxy is correlated with both its luminosity and redshift, and the probability of significant contamination varies inversely with flux and with the area subtended on the sky by the source, leading to selection effects and biases. In principle these effects can be modeled if a radius-luminosity relationship is added onto the overall model, but that would require a set of radius measurements that is consistent across surveys, and that also has an accurate determination of the effect of seeing.

The factors discussed in the previous paragraph make it necessary to cut data with low selection probability and low luminosity as outliers. For that reason, a reduced maximum redshift was applied to 6dFGS, SDSS, and GAMA, as shown in Table <ref>. Further, the minimum selection fluxes were extrapolated into luminosity cuts using the mean SED of all galaxies, measured using the data from <cit.>. The fractions of 2.4 μm luminosity contributed by each of the templates from <cit.>, with the median AGN obscuration, can be found in Table <ref>, and a graph of the mean SED, with a 1-σ type variance band around it, is in Figure <ref>. For AGES and the W1-selected survey, surveys with a tiered target selection strategy, each survey was treated as though it were composed of two fully independent surveys for this cut, with the dividing line set by the intermediate magnitude limit in Table <ref>. Finally, if the survey documentation did not explicitly cite a maximum flux limit, then one was imposed that cut the brightest few sources, in order to ensure an accurate upper flux limit for the survey.

Mean SED Parameters
⟨ f_Ell ⟩ | ⟨ f_Sbc ⟩ | ⟨ f_Irr ⟩ | ⟨ f_AGN ⟩ | τ_B - τ_V a
0.490 | 0.269 | 0.114 | 0.127 | 0.023
Mean of the 2.4 μm luminosity template fractions, alongside the median excess extinction on the AGN. Numbers are given to three decimal places regardless of experimental uncertainty.
a τ_B - τ_V here means the median of τ_B - τ_V.

The WISE All-Sky data release had a known and documented[<http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec6_3c.html#flux_under>] overestimation of the background, leading to an underestimation of the flux for faint sources. The AllWISE release remedied most, but not all, of the problem[<http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_2.html>]. We, therefore, added a small flux to correct for the over-subtraction in the PSF photometry, on average, in W1 and W2. The values added are 1.5 and 7 μJy in W1 and W2, respectively, as shown in Table 6 of the catalog completeness section of the AllWISE Explanatory Supplement[<http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_4a.html>]. The aperture photometry was not affected by this issue, so when elliptical aperture photometry was used in this work it was unaltered.

Targets from each survey were also matched to other photometric surveys using a 1″ spatial match in order to obtain more photometric points to use in modeling the spectral energy distributions (SEDs) of the galaxies. The SED models were made using the four templates from <cit.>. The set of templates contained four basis galaxies, identified as Elliptical, Sbc, Irregular, and Active Galactic Nucleus (AGN). All of the models are normalized to a total luminosity of 10^10 L_⊙ in the wavelength range 0.03–30 μm. The AGN template, additionally, has a dust obscuration model parametrized by E(B-V). The models, normalized to unit luminosity at 2.4 μm with the AGN unobscured, can be found in Figure <ref>.
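Returning to the luminosity cuts above: a sketch of how a survey flux limit translates into a minimum-luminosity curve via the mean SED follows. It is illustrative only; mean_sed_fnu is a hypothetical callable returning the mean-SED flux density shape (arbitrary normalization) at a given rest wavelength in μm, and the cosmology uses the values adopted in the introduction.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.2793)  # the WMAP9-based values adopted above

def lnu_24_floor(z, flux_lim_uJy, mean_sed_fnu):
    """Minimum selectable L_nu(2.4 um) at redshift z for a W1 flux limit."""
    d_l = cosmo.luminosity_distance(z)
    f_lim = flux_lim_uJy * u.uJy
    # mean-SED color term taking the observed W1 band (3.4 um) to rest 2.4 um
    color = mean_sed_fnu(2.4) / mean_sed_fnu(3.4 / (1.0 + z))
    lnu = 4.0 * np.pi * d_l**2 * f_lim / (1.0 + z) * color
    return lnu.to(u.erg / u.s / u.Hz)
```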
The templates were constructed to be fit to fluxes by minimizing χ^2 over linear combinations of the template fluxes with non-negative coefficients, combined with a search of the 1-dimensional parameter space for the best AGN extinction, E(B-V) = (2.5 / ln(10)) · (τ_B - τ_V). That is, the model has the form:

F_ν(ν, z) = a_E F_E(ν, z) + a_S F_S(ν, z) + a_I F_I(ν, z) + a_A F_A(ν, z, τ_B - τ_V),

χ^2 = ∑_i ∈ {filters} ( (F_obs,i - F_mod,i) / σ_i )^2,

with all a_i ≥ 0 and 12 ≥ τ_B - τ_V ≥ 0. The a_i were fit using the SciPy optimize package's routine nnls (quadratic programming for non-negative least squares), and τ_B - τ_V was fit with the routine brent (Brent's algorithm), with fallback to fmin (Nelder-Mead simplex).

There is one modification to that procedure for the fits done for this paper. The templates do not include the possibility of adjustable dust obscuration of the stellar population, so a dusty starburst that has a detection in WISE's 12 μm filter, W3, will often be best fit with a galaxy that is dominated by its Elliptical component (to satisfy optical redness) and a super-obscured AGN (τ_B - τ_V > 12) masquerading as the emission from the stellar dust component. The problem this creates is that it makes the SED fit the data more poorly in the most important range for work on the 2.4 μm luminosity function, where K-corrections from W1 to 2.4 μm are performed. We used two techniques to work around this problem. First, we limited the excess in optical depth to τ_B - τ_V ≤ 12 (equivalently, E(B-V) ≤ 13.03). Second, when the SED was badly modeled (χ^2 > max(N_df, 1) × 100) and the source was unlikely to be an AGN (that is, it failed the W1 - W2 > 0.5 Vega mag AGN selection), we used the best model with an unobscured AGN, E(B-V) = 0. The reduced χ^2 criterion was determined by eye, and the AGN selection was found in <cit.> to select low redshift AGN with 90% completeness.

Limiting the excess optical depth, τ_B - τ_V, to be non-negative introduces a bias to the parameter estimation of the individual galaxies. It is even physically possible for a source to appear bluer than expected if the line of sight is unobscured and dust clouds are reflecting excess blue light into it (that is, the line of sight contains a significant contribution from reflection nebulae in the target galaxy). Even so, applying a negative optical depth excess to dust obscuration models is not likely to produce an accurate spectrum for reflection (since obscuration models both reflection and absorption), and the magnitude of the negative excess doesn't have to be large to cause the estimate of the maximum redshift at which the galaxy would be included in the sample to diverge, outweighing the impact of biases introduced by requiring τ_B - τ_V to be non-negative. Admittedly, the limitation that the a_i be non-negative introduces a source of potential bias to analyses done using them, but for their intended use, predicting SEDs at unobserved wavelengths, allowing components to take negative values produces unphysical outliers of the same sort that allowing τ_B - τ_V < 0 does, and not an insignificant number of them. While the components in Figure <ref> all look very different, and therefore seem unlikely to produce large negative coefficients in the fit, it should be kept in mind that the graph has a logarithmic scale in flux and the fitting is done linearly in flux. Thus, depending on what rest frame wavelengths we have photometry of the target galaxy at, and the signal to noise ratio of the photometry, the Elliptical and Sbc templates are similar (about 0.5–5 μm), or the Sbc and Irregular templates are similar (about 2–5 μm).
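A minimal sketch of this fit (our own illustration; template_fluxes is a hypothetical callable returning the four template fluxes in each filter, at the source redshift, for a given obscuration) looks like:

```python
import numpy as np
from scipy.optimize import nnls, minimize_scalar

def fit_sed(F_obs, sigma, template_fluxes, tau_max=12.0):
    """Non-negative template amplitudes plus a bounded 1-D search over
    the AGN obscuration tau = tau_B - tau_V (the equations above)."""
    def chi2(tau):
        A = template_fluxes(tau) / sigma[:, None]  # (n_filters, 4), 1/sigma weighted
        b = F_obs / sigma
        a, rnorm = nnls(A, b)                      # enforces all a_i >= 0
        return rnorm**2, a

    best = minimize_scalar(lambda t: chi2(t)[0], bounds=(0.0, tau_max),
                           method="bounded")       # Brent-style bounded search
    chi2_min, a = chi2(best.x)
    return a, best.x, chi2_min
```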
Experimentally, we examined the resulting a_i when using ordinary least squares (OLS) in the zCOSMOS data set, where we have the largest amount of auxiliary data and should, therefore, expect the best behavior. Nearly all sources in the sample had at least one negative a_i, and about a quarter of sources had an a_i that was more negative than half the sum of the a_i from the non-negative version of the fit. For some of them, the sum of the OLS a_i, which should correspond to the overall unobscured luminosity of the galaxy, was outright negative.

The AllWISE data release also has an issue where the flux uncertainties in W1 can be overestimated, or even missing, at the ecliptic longitudes covered by the “3-band cryo” portion of the survey[<http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_2.html#w1sat>]. The flux uncertainties are needed to model SEDs using Equation <ref>, so we substitute an uncertainty calculated from w1sigp2, the uncertainty in the mean flux measured from the individual calibrated frames by the WISE photometry system, in magnitudes. This quantity differs from, and is usually less reliable than, the standard flux and uncertainty columns, because the standard uncertainty is calculated by fitting all of the frames simultaneously, while w1sigp2 is calculated by measuring the flux on each frame individually. Therefore the standard columns are to be favored unless there is strong evidence that the standard flux uncertainty is overestimated. Empirically, the substitution was justified when the following inequality is satisfied:

σ_W1 > 2 √( (0.02 F_W1)^2 + σ_W1p2^2 ),

with σ_W1p2 ≡ 0.4 ln(10) · w1sigp2 · F_0,W1 · 10^(-0.4 · w1magp). Here w1magp is the mean flux (in magnitudes) for which w1sigp2 is the uncertainty; w1magp and w1sigp2 are tightly correlated in the same way that the standard W1 flux and uncertainty columns are, which makes w1magp preferable for the conversion from magnitude uncertainties to flux uncertainties.
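A sketch of that substitution rule (ours; F0_W1 is the assumed W1 Vega zero-point flux density, with all fluxes in μJy) is:

```python
import numpy as np

F0_W1 = 309.54e6  # assumed W1 zero-magnitude flux density, in uJy

def w1_uncertainty(w1flux, w1sigflux, w1sigp2, w1magp):
    """Swap in the frame-to-frame scatter when the standard W1 uncertainty
    looks overestimated (3-band cryo longitudes)."""
    sigp2_flux = 0.4 * np.log(10.0) * w1sigp2 * F0_W1 * 10.0 ** (-0.4 * w1magp)
    if w1sigflux > 2.0 * np.hypot(0.02 * w1flux, sigp2_flux):
        return sigp2_flux
    return w1sigflux
```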
The template models were used to generate K-corrections from W1 flux to 2.4 μm rest frame luminosities using the equations from <cit.> and <cit.>. We corrected to 2.4 μm rest frame luminosity in order to minimize the errors associated with K-correction for the overall sample, in the same fashion as was done in <cit.>. In other words, W1 fluxes were K-corrected to the wavelength W1 samples at the median redshift of sources with F_W1 ≥ 80 μJy from the W1-selected survey, z = 0.38. Details of how each survey was processed that are peculiar to each survey, and what auxiliary photometric data was used, can be found in the following subsections, starting with this work's defining survey and then continuing in decreasing order of survey area on the sky.

§.§ W1-Selected Survey Details

The W1-selected survey consisted of a single night of observations performed on the Keck II telescope using the DEIMOS instrument <cit.>, with the resulting data reduced using the DEEP2 spec2d pipeline <cit.> and analyzed using SpecPro <cit.>. It included observations of 10 different slit masks at disparate positions with high galactic latitude (b > 30°). So, while the net area covered by those 10 masks is small, 5.78×10^-5 sr = 0.190 deg^2, the sample is less affected by cosmic variance than one might naively expect, because the fields are non-contiguous. Though we do not estimate cosmic variance here directly, estimates of cosmic variance for larger surveys, for example <cit.> and <cit.>, suggest that it is not small compared to the shot noise level for 222 sources, (222)^-1/2 ≈ 6.7%. Because the source density varies with galactic latitude, the targeting completeness varies from field to field, necessitating the use of a selection function that varies by field. A small number of sources, about 5, in <cit.> had incorrectly measured redshifts, or lacked them entirely. This conclusion is based on a closer reanalysis of the data with more consistent standards for when a redshift is to be assigned, as will be explained below in the discussion of quality codes. Tables <ref> and <ref> contain a short excerpt from the machine readable table published along with this work. This table contains both more rows and more columns than the one published with <cit.>.

Redshift Catalog DR2, Excerpt Part 1
Designation | ID | Ra | Dec | W1targ (Vega mag) | TG | mask | SlitNum | z | z_err | q_z
WISEPC J230334.10+040532.7 | 342 | 345.8921204 | 4.0924325 | 11.483 | 2 | 5 | 2 | 0.0 | nan | 4
WISEPC J230256.33+040511.2 | 351 | 345.7347412 | 4.0864501 | 13.531 | 2 | 5 | 11 | 0.0638 | 0.00017 | 4
WISEPC J230310.28+040517.3 | 352 | 345.7928467 | 4.088161 | 13.938 | 2 | 5 | 12 | 0.1862 | 0.000221 | 4
WISEPC J230325.34+040520.5 | 353 | 345.8556213 | 4.0890427 | 15.178 | 2 | 5 | 13 | 0.1224 | 8.79e-05 | 4
WISEPC J230315.00+040527.7 | 355 | 345.8125 | 4.0910487 | 13.871 | 2 | 5 | 15 | 0.0 | nan | 4
WISEPC J230336.45+040436.2 | 398 | 345.901886 | 4.076745 | 17.018 | 4 | 5 | 56 | 1.7771 | 0.00335 | 4
WISEPC J230257.17+040548.2 | 405 | 345.7382202 | 4.0967259 | 17.928 | 4 | 5 | 62 | nan | nan | -1
Excerpt from the first set of columns from the machine readable table formatted data published with this paper. Designation has “NA” for serendipitous sources. ID is a unique integer assigned to each source in the catalog. Ra and Dec are the J2000 right ascension and declination of the primary target on the slit. W1targ is the W1 Vega magnitude used for target selection (“nan” if unavailable). TG is the target group (explained in the text). mask is the mask number, and it corresponds to the Field Number of Table <ref>. SlitNum is the slit number the source fell on within the mask. z is the Earth-centric redshift of the source (“nan” if no valid redshift could be determined); no correction for the motion of the Sun, Earth's orbital motion, or the CMB dipole was made, but all redshifts were gathered on a single night, Universal Time 2010 September 14, using the Keck II telescope on Mauna Kea, Hawaii. z_err is the uncertainty in the redshift, as ascertained by the template correlation performed by SpecPro (“nan” if the redshift is invalid or the source is a star). q_z is the quality code of the spectrum, explained in the text.
Redshift Catalog DR2 Excerpt, Part 2
Designation | class | SpecProTemplate | R | cont | SpecFeatures
WISEPC J230334.10+040532.7 | Star | M Star | 1 | 1 | NaI TiO BaI Ha TiO
WISEPC J230256.33+040511.2 | Gal | Red Galaxy | 1 | 1 | MgI NaI BaI
WISEPC J230310.28+040517.3 | Gal | Green Galaxy | 1 | 1 | G-band Hb MgI NaI NII Ha NII SII SII
WISEPC J230325.34+040520.5 | Gal | Blue Galaxy | 1 | 1 | Hb NaI Ha NII SII SII
WISEPC J230315.00+040527.7 | Star | G Star | 1 | 1 | MgI NaI BaI Ha
WISEPC J230336.45+040436.2 | QSO | SDSS Quasar | 1 | 1 | AlIII CIII CII MgII
WISEPC J230257.17+040548.2 | Unseen | NA | 0 | 0 |
Excerpt from the second set of columns (Designation repeated) from the machine readable table formatted data published with this paper. class is the spectroscopic classification assigned to the source, one of: “Star” for stars, “Gal” for galaxy, “QSO” for broad line quasar, “Indet” for a source that had an indeterminate spectrum, “Unseen” for sources that didn't produce an observable spectrum, and “Lost” for sources that were lost to instrument constraints. SpecProTemplate is the name of the SpecPro template that matches the spectrum closest (“NA” for invalid sources). R is a flag for whether the source was “real,” that is, whether it corresponds to a non-artifact AllWISE source. SpecFeatures is a list of spectroscopic features listed in the SpecPro software that were identified, in increasing wavelength order (no distinction is made between emission and absorption features).

The reanalyzed data contains four main columns relevant for selecting subsamples. The column TG, short for `Target Group,' contains an integer encoding which group of targets the source was in. The values TG takes are: 1 for the central source of the DEIMOS slit mask, 2 for W1 bright sources (F_W1 ≥ 120 μJy, in the WISE Preliminary Release), 3 for W1 intermediate sources (120 μJy > F_W1 ≥ 80 μJy), 4 for W1 faint or non-detected sources (80 μJy > F_W1), and 5 for targets that serendipitously fell on the slit of a target. For analysis of pseudo-randomly selected galaxies with well known selection completeness, targets with 1 < TG < 5 should be used. In order to have good completeness of the initial detections, we recommend further limiting the sample to TG < 4, as is done in LW17III. The quality of the redshift is encoded in a column of integers named q_z, which takes on the values: -1 for sources with no detected flux in the spectrum, 0 for sources that have a spectrum but for which it was not possible to even estimate a redshift, 1 for targets where a redshift measurement was possible but no spectral features could be identified (blunders could not be ruled out, confidence < 50%), 2 for targets where the redshift is better but still uncertain (confidence < 95%), 3 for targets that have a secure redshift with at least one clearly identifiable spectral feature or more of lesser quality (absorption or emission lines), and 4 for targets with multiple clearly identifiable spectral features.
The analysis of the spectra allowed the targets to be broken up by classification, class. class takes on six possible values: “Gal” for ordinary galaxies, “QSO” for broad-line AGN, “Star” for stellar spectra, “Indet” for spectra of indeterminate type, “Unseen” for sources without any detectable flux in the spectrum, and “Lost” for sources lost to instrument constraints. Naturally, an analysis of extragalactic targets must be limited to “Gal” and “QSO” targets. The last selection-relevant column is R. R stands for “Real” and takes the value 1 if the source produced a spectrum or can be associated with a non-artifact source in the AllWISE database, and 0 otherwise. Only targets with R = 1 are relevant.

The completeness of the W1 faint sample is much lower and more poorly defined compared to the brighter two, as can be seen by comparing the spectroscopy completenesses (f_Q ≥ 3) in Table <ref>, so the combined sample defined in this work is limited to targets with F_W1 ≥ 80 μJy for all surveys. This was done in order to make the results from all the surveys as comparable as possible. The following subsections contain plots showing the distribution of each survey's primary selection flux versus F_W1. They show that the effect of both cuts must be accounted for when analyzing all surveys deeper than SDSS.

Field Completenesses (f_Q ≥ 3 / f_targ / N_tot)
Field Number | {≥ 120 μJy} | {80–120 μJy} | {< 80 μJy}
1 | 0.97/0.70/84 | 1.00/0.29/34 | 0.78/0.21/41
2 | 1.00/0.59/99 | 0.90/0.30/33 | 0.71/0.18/40
3 | 0.98/0.77/74 | 0.92/0.37/38 | 0.83/0.29/21
4 | 1.00/0.72/68 | 0.55/0.42/26 | 0.81/0.27/59
5 | 1.00/0.69/55 | 0.89/0.36/25 | 0.56/0.34/95
6 | 0.91/0.81/54 | 0.85/0.38/34 | 0.58/0.30/88
7 | 1.00/0.77/44 | 1.00/0.52/15 | 0.60/0.24/25
8 | 1.00/0.80/45 | 0.91/0.46/24 | 0.62/0.51/57
9 | 1.00/0.76/46 | 0.75/0.51/39 | 0.56/0.24/75
10 | 0.98/0.74/58 | 0.72/0.49/37 | 0.65/0.35/57
Spectroscopic and targeting completeness of the W1-selected survey, broken down by field. Specifically, the columns contain the fraction of slits that produced high quality spectra / the fraction of targets assigned slits / the total available targets, broken down by W1 flux sample (limits in μJy). This table is adapted from Table 2 in <cit.> with an updated analysis of the spectra and target source types based on the AllWISE data release.
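Expressed as a filter on the published table (a sketch using pandas; the column names follow Tables <ref> and <ref>, and F_W1 is assumed to be a precomputed flux column in μJy):

```python
import pandas as pd

def select_w1_sample(df: pd.DataFrame) -> pd.DataFrame:
    """Recommended cuts for the combined analysis of the W1-selected survey."""
    return df[
        (df["R"] == 1)                       # real, non-artifact AllWISE source
        & df["class"].isin(["Gal", "QSO"])   # extragalactic targets only
        & (df["q_z"] >= 3)                   # secure redshifts
        & (df["TG"] > 1) & (df["TG"] < 4)    # bright + intermediate target groups
        & (df["F_W1"] >= 80.0)               # combined-sample flux limit
    ]
```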
Figure <ref> is a scatter plot of redshift versus luminosity, alongside the marginal histograms in redshift and log-luminosity, for the sources used in the sample defined here. The plots are meant to show the raw quantity of data available at each redshift and luminosity, and thus contain no completeness corrections, and are normalized to the total number of data points. Of particular note, the W1-selected survey contains few redshifts z > 1, and only one with z ≤ 0.05. Given that the slit mask targeting avoided large resolved galaxies, the survey has a selection bias against redshifts lower than this, so we have limited the sources included from all surveys to be both low redshift (z ≤ 1) and to have z > 0.05 for the small area surveys, or z > 0.01 for the large area ones (6dFGS, SDSS, and GAMA).

Figure <ref>, panel a, also contains blue curves bounding regions where the color variety completeness is approximately constant (to within 2% for the light blue curve, and 5% for the dark blue curve). Color variety completeness is defined by:

S_color(L, z) ≡ ⟨ S(F_sel, F_0, x⃗) ⟩_ℒ_SED / max(S(F_sel, F_0, x⃗)),

where S(F_sel, F_0, x⃗) is the selection probability (completeness) for a galaxy at real space position x⃗ (for example, α, δ, and z) with two observer frame fluxes at different wavelengths, F_sel and F_0. The average in the numerator is weighted by the likelihood that a galaxy at redshift z and with (spectral) luminosity L will be observed to have fluxes F_sel and F_0 (called ℒ_SED; see Equations 19 of LW17I), including both color variety and a model for measurement noise. The denominator is the maximum value that S(F_sel, F_0, x⃗) takes, removing factors like intentionally sparse sampling from S_color(L, z). The faint sample is automatically excluded from these regions because it is too narrow to provide a flat selection region. The reason for including the blue curves in the plot is that they show the regions where the likelihood model defined in LW17I can be neglected. In the case of the W1-selected survey, the 95% curve leaves 91 sources (43.8% of the pre-cut data), and the 98% curve leaves 40 (19.2% of pre-cut).

The photometry from outside sources available for the W1-selected survey was non-uniform, as mentioned in Table <ref>. In total, roughly half of the sources have some photometry outside of AllWISE available, which leaves only W1 and W2 photometry for the majority of the other half. This is not a problem for the accuracy of the K-corrections used in LW17III, shown in Figure <ref>, because non-AGN galaxy SEDs are remarkably uniform in the wavelength range of interest (1.7–3.4 μm; see the 1-σ variance band around the mean in Figure <ref>). This is why <cit.> were able to show that this one color is remarkably good at picking out low redshift AGN, and it is therefore sufficient for narrowing the SED model in the wavelength range of interest.
This uniformity also makes it extremely difficult to photometrically split the galaxies into red and blue types, as is done in most works on galaxy luminosity functions. This is a problem because red cluster member galaxies typically have a different LF than bluer field galaxies, not to mention AGN. For that reason, most studies of the LF will remove AGN entirely and perform two analyses on the ordinary galaxy data: one analysis with a single luminosity function, and one where the red and blue galaxies are modeled separately. The W1-selected data could be split by spectroscopic characteristics, but only broadly (for example, by the presence of emission lines), and performing a comparable analysis on the other surveys would have been prohibitively time consuming.

§.§ 6dFGS Details

The 6dF Galaxy Survey (6dFGS) was originally defined in <cit.>, and the final data release used in this paper is described in <cit.>. 6dFGS contains several sub-samples selected using different techniques, but the sample of primary interest to this paper is the one selected from the Two Micron All Sky Survey (2MASS) extended source catalog using the K_s-band flux. This subset is designated as having PROGID = 1, and satisfies K_s < 14.49 AB mag. The reason the K_s selected sample is the most relevant to this work is that K_s is adjacent to W1 in wavelength, and so is subject to less potential selection bias than surveys that were selected optically, as the flux-flux graph in Figure <ref> shows.

Selecting the subset of redshifts with high confidence is relatively straightforward with 6dFGS. Requiring 3 ≤ quality < 6, where `quality' is the name of the column of integers classifying redshifts by quality (QUALITY in the 6dFGS schema), selects targets with science quality redshifts (quality ≥ 3) and removes those that are Milky Way sources (which have quality = 6). Further, the 6dFGS data set contains multiple redshifts for a fraction of the sources. When multiple redshifts are available for a source, the selection of which redshift to use involves two steps. The primary discriminator is quality; when the quality codes differ, the sort order, in decreasing preference, is: 4, 3, 6, 2, and 1. When multiple redshifts have the same quality code, the redshift with the lower measured uncertainty, ZFINALERR in the 6dFGS schema, is preferred, provided one or more of the redshifts has a measured uncertainty (note well: a value of 0 in the uncertainty column means the uncertainty was not measured).

The 6dFGS survey is relatively shallow, as can be seen from the luminosity-redshift graph in Figure <ref> and its marginalized histograms therein, but the coverage is enormous, 1.37×10^4 deg^2 after imposing a δ < -11.5° cut to eliminate overlap with SDSS, so the sample size after all limits are imposed is 27,091. Because of this wide coverage, the sample defined here covers galaxies down to z = 0.01, but the shallow depth requires an upper limit on the redshifts at z = 0.2. The bright limit imposed on this survey is K_s > 11.25 AB mag. The blue curves in Figure <ref> are defined by constant values of the color variety selection function, S_color(L, z) (see Equation <ref>), and bound regions where it is greater than 98% (light blue) and 95% (dark blue). They demarcate the regions where the selection function is close enough to constant that the likelihood model defined in LW17I can be neglected. For 6dFGS, the 95% curve leaves 17,571 sources (64.9% of the pre-cut data), and the 98% curve leaves 15,652 (57.8% of pre-cut).
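The multiple-redshift disambiguation rule above can be written compactly. In this sketch (ours), each measurement is a dict carrying the quality code and ZFINALERR, with 0 meaning no measured uncertainty:

```python
def best_6dfgs_redshift(measurements):
    """Pick one redshift per source: quality order 4, 3, 6, 2, 1, then
    measured uncertainties beat unmeasured ones, then smaller is better."""
    preference = {4: 0, 3: 1, 6: 2, 2: 3, 1: 4}
    def sort_key(m):
        err = m["zfinalerr"]
        measured = err > 0.0
        return (preference[m["quality"]], not measured,
                err if measured else float("inf"))
    return min(measurements, key=sort_key)
```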
The photometric data used to model galaxy SEDs and define K-corrections, shown in Figure <ref>, is summarized in Table <ref>. The shape of the data distribution in Figure <ref> is consistent with Figure 4 from <cit.>, which used the same set of templates for SED fitting, and Figure 4 from <cit.>. The characteristics of the distribution can be explained as the majority of galaxies being fit primarily by the elliptical, Sbc, and irregular templates from Figure <ref>, which all have nearly the same shape in the region between 1.7–3.4 μm, with a long tail of outliers that are dominated by the AGN template.

§.§ SDSS Details

The Sloan Digital Sky Survey (SDSS) data release 7, as described in <cit.>, contained three main extragalactic spectroscopic samples: the main galaxy sample defined in <cit.>, the red luminous galaxy sample defined in <cit.>, and the quasar sample defined in <cit.>. While it would have been nice to be able to use all three samples, the latter two are defined using both flux and color cuts, which the model described in LW17I cannot yet accommodate in a timely fashion. Only the main galaxy sample is defined in terms of flux (r ≤ 17.77 mag, Petrosian) and surface brightness (μ_50 ≤ 24.5 mag arcsec^-2) in one channel, after extinction correction based on the dust maps from <cit.>. The sample defined in this paper is therefore limited to the main galaxy sample, with 476,744 sources after all limits are imposed. These limits included cutting out the survey footprints of the three surveys with significant overlap that were deeper than SDSS, as listed in Table <ref>. There was no overlap with the W1-selected survey. Explicitly, in terms of the columns of the SDSS DR10 CasJobs[<http://skyserver.sdss3.org/casjobs/>] database, the selected galaxies had to have: class set as either “GALAXY” or “QSO,” zwarning = 0, (legacy_target1 & 0x40) ≠ 0 (that is, the Main Galaxy Sample flag is set, detected using & as the `bitwise and' operator), sdssPrimary = 1, and legacyPrimary = 1. The exact tables from which we drew data were: SpecObj for redshifts and flags, SpecDR7 for the magnitudes used for selection, PhotoObj for additional SDSS photometry, and TwoMassXSC for 2MASS extended source photometry.

The flux-flux plot is in Figure <ref>, and it shows that the optical flux limit is the relevant limit for the vast majority of the sources, but not 100% of them. Like 6dFGS, SDSS has a large area on the sky (7.88×10^3 deg^2, after de-overlapping), and so it, too, has a lower redshift limit in this work of 0.01. Likewise, SDSS's shallow depth required an upper redshift limit at z = 0.33 and a bright magnitude limit at r = 13.0 mag. Also in Figure <ref> is a density plot that shows the relationship of the data to the surface brightness limit, defined as the mean surface brightness within a circle that contains half of the source's Petrosian flux. Ideally, any analysis would include the surface brightness limit in the selection function model. Practically, the surface brightness limit is far from the main body of the data, and incorporating it would require an additional measurement of a luminosity-radius relationship that is beyond the scope of the model described in LW17I.

The luminosity-redshift graph is in Figure <ref>, along with its marginalizations into histograms. The K-corrections applied to calculate those luminosities are shown in Figure <ref>, and the auxiliary photometric information used to fit the SED models and calculate K-corrections is outlined in Table <ref>. The shape of the data distribution in Figure <ref> is consistent with
Figure 4 from <cit.>, which used the same set of templates for SED fitting, and Figure 4 from <cit.>. The characteristics of the distribution can be explained as the majority of galaxies being fit primarily by the elliptical, Sbc, and irregular templates from Figure <ref>, which all have nearly the same shape in the region between 1.7–3.4 μm, with a long tail of outliers that are dominated by the AGN template.

The blue curves in Figure <ref> are defined by constant values of the color variety selection function, S_color(L, z) (see Equation <ref>), and bound regions where it is greater than 98% (light blue) and 95% (dark blue). They demarcate the regions where the selection function is close enough to constant that the likelihood model defined in LW17I can be neglected. For SDSS, the 95% curve leaves 162,916 sources (34.2% of the pre-cut data), and the 98% curve leaves 105,835 (22.2% of pre-cut).

SDSS Cutouts for Deeper Surveys
Survey | α > | α < | δ > | δ <
GAMA | 129.00 | 141.00 | -1.00 | +3.00
GAMA | 174.00 | 186.00 | -2.00 | +2.00
GAMA | 211.50 | 223.50 | -2.00 | +2.00
AGES | 216.11 | 219.77 | +32.80 | +35.89
zCOSMOS | 149.45 | 150.78 | +1.60 | +2.86
J2000 right ascension (α) and declination (δ) limits, in degrees, for data removed from SDSS for the data described in this work in order to prevent double counting of any sources.

§.§ GAMA Details

The Galaxy And Mass Assembly (GAMA) survey was originally defined over three fields for the first data release, described in <cit.>, and later expanded to 5 fields, as described in the second data release paper, <cit.>. After the final data release, the depth at which the data is complete will vary depending on the field. For the second data release, all of the fields are complete down to at least r = 19 mag, extinction corrected Petrosian, and so that is the limit used for the selection in this paper. Selecting science quality redshifts from GAMA is relatively straightforward, as the GAMA team supplies a `normalized quality' integer, NQ. The high quality redshifts satisfy NQ > 2.

As can be seen in Figure <ref>, the optical limit is the controlling one for the vast majority of sources, but the W1 flux limit is relevant for a sizable fraction of the galaxies in GAMA. Like SDSS, GAMA imposes surface brightness limits, both high and low. Their relationship to the data can be found in Figure <ref>, and just as for the SDSS subsample, it is beyond the scope of the model described in LW17I to account for these limits. After all limits are imposed, this survey contributes 44,495 sources to the sample.

The luminosity versus redshift density plot, found in Figure <ref> with its marginalizations, shows that GAMA is the shallowest survey to significantly sample galaxies at the median redshift of the W1-selected survey. It is also the narrowest survey for which the selection defined in this paper covers redshifts down to z = 0.01, and for which a low maximum redshift was imposed, at z = 0.43. The bright limit imposed here was at r = 14 mag. The blue curves in Figure <ref> are defined by constant values of the color variety selection function, S_color(L, z) (see Equation <ref>), and bound regions where it is greater than 98% (light blue) and 95% (dark blue). They demarcate the regions where the selection function is close enough to constant that the likelihood model defined in LW17I can be neglected. For GAMA, the 95% curve leaves 15,659 sources (35.2% of the pre-cut data), and the 98% curve leaves 9,641 (21.7% of pre-cut).
The K-corrections applied to calculate the luminosities are shown in Figure <ref>, and the photometry used to fit the SEDs used to calculate the K-corrections is summarized in Table <ref>. The shape of the data distribution in Figure <ref> is consistent with Figure 4 from <cit.>, which used the same set of templates for SED fitting, and Figure 4 from <cit.>. The characteristics of the distribution can be explained as the majority of galaxies being fit primarily by the elliptical, Sbc, and irregular templates from Figure <ref>, which all have nearly the same shape in the region between 1.7 and 3.4 μm, with a long tail of outliers that are dominated by the AGN template.

§.§ AGES Details

The AGN and Galaxy Evolution Survey (AGES), described in <cit.>, is a spectroscopic redshift survey targeted using photometry from the NOAO Deep Wide-Field Survey (NDWFS) and the Spitzer Deep Wide-Field Survey (SDWFS). <cit.> defines a number of subsamples with different flux limits. The sample defined in this paper uses the main I-band selected sample, defined with (code06 & 0x80000) ≠ 0 (0x80000 is a hexadecimal integer equal to 2^19 in base 10) in <cit.>, which is defined by I-band flux limits to be complete brighter than I = 18.9 mag, and 20% complete below that down to I = 20.4 mag. The AGES data with well analyzed completeness is limited to a set of 15 overlapping circular fields, but the released redshifts cover a larger area. Limiting the sample to just those redshifts in the canonical fields requires selecting sources with field > 0.

The relationship of the data to the flux limits is shown in Figure <ref>. This is the survey for which taking into account both the optical and W1 flux limits is most important, because the locus on which most galaxies are found goes into the corner defined by the flux limits. AGES is narrow enough that the sample defined in this paper only includes data with redshifts z > 0.05, and deep enough for redshifts out to z = 1. The bright limit we imposed is at I = 15.5 mag. After all limits are imposed, AGES contributes 6,588 galaxies to the sample defined in this paper.

The NDWFS astrometry has a known astrometric offset relative to other major surveys. So before performing the final 3 distance match cut, we calculated the mean offset of nearest neighbors and used that offset to correct the distance between the AllWISE source and the NDWFS sources. The offsets we found were: -0.29 in right ascension, and -0.14 in declination.

The density plot showing the luminosities versus redshift is in Figure <ref>, alongside its marginalizations. The blue curves in Figure <ref> are defined by constant values of the color variety selection function, S_color(L, z) (see Equation <ref>), and bound regions where it is greater than 98% (light blue) and 95% (dark blue). The faint sample is automatically excluded from these regions because it is too narrow to provide a flat selection region. They demarcate the regions where the selection function is close enough to constant that the likelihood model defined in LW17I can be neglected. For AGES, the 95% curve leaves 2,096 sources (36.5% of the pre-cut data), and the 98% curve leaves 1,664 (29.0% of pre-cut).
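The astrometric correction described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the code used in this work: it uses a flat-sky approximation (no cos δ factor) and a brute-force nearest-neighbor search, both simplifications.

    # Sketch: estimate the mean NDWFS-AllWISE offset from nearest neighbors,
    # then remove it before applying the final distance-match cut.
    import math

    def nearest(ra, dec, catalog):
        """Return the (ra, dec) of the nearest catalog entry (brute force)."""
        return min(catalog, key=lambda s: (s[0] - ra) ** 2 + (s[1] - dec) ** 2)

    def mean_offset(sources, reference):
        """Mean (d_ra, d_dec) of each source relative to its nearest reference."""
        d_ra = d_dec = 0.0
        for ra, dec in sources:
            nra, ndec = nearest(ra, dec, reference)
            d_ra += ra - nra
            d_dec += dec - ndec
        n = len(sources)
        return d_ra / n, d_dec / n

    def corrected_separation(src, ref, off):
        """Separation after removing the mean offset (flat-sky approximation)."""
        d_ra, d_dec = off
        return math.hypot(src[0] - d_ra - ref[0], src[1] - d_dec - ref[1])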
The K-corrections used to calculate luminosities for AGES galaxies are shown in Figure <ref>, and the photometric data used to fit the SEDs for calculating the K-corrections is summarized in Table <ref>.The shape of the data distribution in Figure <ref> is consistent with Figure 4 from <cit.>, which used the same set of templates for SED fitting, and Figure 4 from <cit.>.The characteristics of the distribution can be explained as the majority of galaxies being fit primarily by the elliptical, Sbc, and irregular templates from Figure <ref>, that all have nearly the same shape in the region between 1.7–3.4, with a long tail of outliers that are dominated by the AGN template. The photometry published with the main AGES paper, <cit.>, did not include uncertainties, so we performed a cross-match against NDWFS and SDWFS (the combined epoch IRAC c1 driven extraction stack only).The AGES sources didn't always have a counterpart in the NDWFS and SDWFS catalogs.In the case of SDWFS, that is because the data release used here is newer than the one used for AGES and this work only used the c1 stack catalog.Noise models were, therefore, also fit to the data to produce model uncertainties when a catalog uncertainty was unavailable.The noise model takes the form of a smoothly broken power law:σ(F) = σ_knee(F/F_knee)^α(1/2 + 1/2·[F/F_knee]^|β|s)^sign(β) / s,where F_knee is the location of the break, or knee, in the power law, σ_knee≡σ(F_knee), α is the faint end slope, β is the change in slope at F_knee, and s is a positive parameter setting the sharpness of the break.For s→ 0 the break becomes infinitely wide, and for s→∞ it becomes infinitely sharp (that is, a corner).The noise model parameters found from fitting individual bands, after trimming outliers, are listed in Table <ref>.cccccc 0.4 AGES Error Models Band F_knee σ_knee α β s B_w 0.130.017 0.51 -0.39 6.5 R 0.760.010 0.58 -0.47 8.3 I 1.30.15 0.49 -0.38 14 K 488.3 0.80 -0.56 5.9 c1 5.3 0.99 0.67 -0.59 6.7 c2a11.3 0.10 0 0 c3a1 7.3 0.042 0 0 c4a1 7.8 0.025 0 0 Noise model parameters used to compute flux uncertainties in the absence of uncertainties from NDWFS or SDWFS, as defined in Equation <ref>. 
^a The data for this channel did not exhibit a knee, so a power law fit was used instead.

§.§ zCOSMOS Details

The sample defined in this paper uses the subset of the zCOSMOS survey known as the “10k-Bright Spectroscopic Sample,” which is described in <cit.> and <cit.>. The COSMOS field has been the subject of an intensive campaign of imaging by many groups, as described in <cit.>. zCOSMOS based its targeting on photometry from Hubble Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) imaging with the F814W filter, which is approximately I-band. The 10k (data release 2) subset of the survey is 62% complete for compulsory targets, and 30% complete for the rest.

Selecting high quality redshifts from zCOSMOS is the most involved selection among the surveys used here because of the detailed `confidence class' (cc) system used. The recommendation in <cit.> is to accept all sources with cc equal to: any 3.X, 4.X, 1.5, 2.4, 2.5, 9.3, and 9.5. Based on the description of those classes, the sample defined here accepts sources that fit in the recommended classes, but also those with a leading 1 (10 was added to show broad line AGN), 18.3, and 18.5, and rejects all secondary targets (2 in the tens or hundreds digit). This can be done by accepting sources for which the text string version of cc matches the regular expression “" and doesn't match “". Finally, the targets fell into three selection classes (column named i), and `unintended' sources are rejected by requiring i > 0.

As Figure <ref> shows, even zCOSMOS is affected by the need to use both W1 and I-band limits in the analysis of the data. Like AGES, the narrowness of zCOSMOS means that the sample herein is limited to redshifts z > 0.05. After all limits are imposed, this survey contributes 1,267 galaxies to the sample. The density plot showing the data in luminosity-redshift space is in Figure <ref>, alongside its marginalizations.

The blue curves in Figure <ref> are defined by constant values of the color variety selection function, S_color(L, z) (see Equation <ref>), and bound regions where it is greater than 98% (light blue) and 95% (dark blue). They demarcate the regions where the selection function is close enough to constant that the likelihood model defined in LW17I can be neglected. For zCOSMOS, the 95% curve leaves 890 sources (72.7% of the pre-cut data), and the 98% curve leaves 763 (62.3% of pre-cut).

The K-corrections used to calculate those luminosities are shown in Figure <ref>, and the photometric information used to fit the SEDs used to calculate the K-corrections is summarized in Table <ref>. The shape of the data distribution in Figure <ref> is consistent with Figure 4 from <cit.>, which used the same set of templates for SED fitting, and Figure 4 from <cit.>. The characteristics of the distribution can be explained as the majority of galaxies being fit primarily by the elliptical, Sbc, and irregular templates from Figure <ref>, which all have nearly the same shape in the region between 1.7 and 3.4 μm, with a long tail of outliers that are dominated by the AGN template.
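Before moving on to the data tables, here is a minimal Python sketch of the smoothly broken power law noise model of Equation <ref> from the AGES subsection above. It is an illustration, not the fitting code used in this work; the example parameter values are taken from the I-band row of the AGES error model table.

    # Sketch of the noise model sigma(F) for a smoothly broken power law.
    def sigma(F, F_knee, sigma_knee, alpha, beta, s):
        """alpha: faint-end slope; beta: change in slope at F_knee;
        s > 0: sharpness of the break (s -> infinity gives a sharp corner)."""
        x = F / F_knee
        if beta == 0:
            # rows marked "a" in the table have no knee: pure power law
            return sigma_knee * x ** alpha
        sign = 1.0 if beta > 0 else -1.0
        return (sigma_knee * x ** alpha
                * (0.5 + 0.5 * x ** (abs(beta) * s)) ** (sign / s))

    # I-band parameters from the table: F_knee=1.3, sigma_knee=0.15,
    # alpha=0.49, beta=-0.38, s=14
    print(sigma(10.0, 1.3, 0.15, 0.49, -0.38, 14))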
§ POST SELECTION DATA TABLES Each cross-matched survey has its own layout, but the general layout is as follows: target coordinates in decimal degrees (J2000 right ascension and declination), the unique object identifier from the redshift survey (if provided), the identifiers from the photometric surveys to which the object successfully matched, the parameters from the template fits (including the χ^2 of the fit and the formal number of degrees of freedom; ignoring the impact that the constraints on the parameters have on that number), and the boundary redshifts for inclusion in this data set (including an intermediate redshift, z_mid, for sources inor AGES to account for the intermediate flux cuts of those surveys).In order to keep file size down, that is the extent of the information published with this work.The rest of the information, like the redshift and photometric properties, is available from the various original sources referenced in the tables of Section <ref>.Excerpts from the machine readable tables can be found in: Tables <ref> and <ref> for , Tables <ref> and <ref> for 6dFGS, Tables <ref> and <ref> for SDSS, Tables <ref> and <ref> for GAMA, Tables <ref> and <ref> for AGES, and Tables <ref> and <ref> for zCOSMOS.rrrrrr 0.9 Excerpt fromExtended Data, Column Set 1 Ra Dec WDID GALEX_ID SDSS_ID AllWISE_ID 310.319670 -14.525610 19 null 1237668758314746796 3099m152_ac51-053725 310.367640 -14.514580 23 null 1237668758314746981 3099m152_ac51-053735 310.315280 -14.509710 27 null 1237668758314747310 3099m152_ac51-053770 310.217560 -14.419560 59 null 1237668758314681889 3099m152_ac51-053534 312.385500 -11.708410 102 6379641521644244188 null 3121m122_ac51-047591 312.394260 -11.672530 153 null null 3121m122_ac51-048520 312.344700 -11.667790 155 null null 3121m122_ac51-047825 First set ofextended data columns. The first two columns are the right ascension and declination, in J2000 decimal degrees. The WDID column is an integer index uniquely assigned to targets in thesurvey. GALEX_ID is a uniquely identifying integer assigned in GALEX gr7 to the source (null if not matched). SDSS_ID is an integer assigned to the matched source in SDSS data release 10 (null if not matched). AllWISE_ID is the http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_1a.html#source_idsource_id assigned to the source in the AllWISE survey. r|rrrrrrrrrrr 0.8 Excerpt fromExtended Data, Column Set 2 WDID Ell Sbc Irr AGN AGN_EBmV chisqr Ndf FitMode z_min z_mid z_max 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ mag 19 3.4370e-02 0.0000e+00 9.9800e-01 1.0930e-03 0.1865 5.100e-27 -2 main 0.029 0.114 0.138 23 7.6680e+00 0.0000e+00 9.7870e-01 7.9540e-01 0.0000 1.500e+01 3 main 0.060 0.472 0.579 27 0.0000e+00 4.2510e+01 3.5730e+00 0.0000e+00 0.0000 7.000e+01 1 main 0.090 0.653 0.819 29 0.0000e+00 4.8780e+00 6.8260e+00 1.2160e-01 10.5400 2.700e-29 -2 main 0.077 0.353 0.425 34 0.0000e+00 0.0000e+00 9.9310e+00 3.4680e-01 9.6760 0.000e+00 -3 main 0.087 0.383 0.468 39 0.0000e+00 3.3100e+01 6.1190e+00 1.3270e-01 0.0000 1.300e+00 -1 main 0.093 0.525 0.680 Second set ofextended data columns. The first column, WDID, is not actually repeated in the table but is repeated here for clarity. The columns Ell, Sbc, Irr, and AGN are the template scales, a_E, a_S, a_I, and a_A in Equation <ref>. The units quoted are the overall normalization given for the templates in <cit.>. AGN_EBmV is the excess extinction, E(B-V), applied to the AGN template. 
chisqr is the χ^2 of the model from Equation <ref>, and Ndf is the formal number of degrees of freedom in the model (number of filters minus five). FitMode describes whether AGN_EBmV was allowed to vary (“main") or not (“alt"). z_min is the closest redshift at which the galaxy satisfies upper flux cuts, z_mid is the redshift at which it satisfies the middle flux cut, and z_max is the farthest redshift at which the galaxy satisfies the lower flux cuts. rrrrr 0.72 Excerpt from 6dFGS Extended Data, Column Set 1 Ra Dec 6dFGS_ID GALEX_ID AllWISE_ID 359.499210 -28.958080 7 6380767410811569507 0000m288_ac51-016147 359.369120 -29.047580 11 6380767411885310513 0000m288_ac51-013523 358.042580 -29.079060 19 6380767412959052516 3582m288_ac51-016561 358.872330 -27.883440 37 6380767402221635495 3583m273_ac51-000009 359.041330 -27.466580 39 6380767391484215299 3583m273_ac51-024746 0.059370 -26.730940 52 6380767390412573430 0000m273_ac51-059368 First set of 6dFGS extended data columns. The first two columns are the right ascension and declination, in J2000 decimal degrees. The 6dFGS_ID column is an integer index uniquely assigned to targets in the 6dFGS survey. GALEX_ID is a uniquely identifying integer assigned in GALEX gr7 to the source (null if not matched). AllWISE_ID is the http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_1a.html#source_idsource_id assigned to the source in the AllWISE survey. r|rrrrrrrrrr 0.77 Excerpt from 6dFGS Extended Data, Column Set 2 6dFGS_ID Ell Sbc Irr AGN AGN_EBmV chisqr Ndf FitMode z_min z_max 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ mag 7 2.5270e+01 0.0000e+00 3.5290e-01 0.0000e+00 0.0000 2.40e+02 1 alt 0.053 0.102 11 2.5500e+01 0.0000e+00 2.5950e-01 1.8700e-03 2.0360 1.00e+02 2 main 0.056 0.106 19 8.4310e-01 0.0000e+00 0.0000e+00 0.0000e+00 0.0000 5.40e+04 3 alt 0.045 0.085 37 7.0290e-01 0.0000e+00 0.0000e+00 0.0000e+00 0.0000 2.70e+03 2 alt 0.044 0.085 39 3.0380e-01 0.0000e+00 6.4680e-02 8.6020e-04 0.0000 3.40e+01 2 main 0.007 0.014 52 1.8140e+00 0.0000e+00 0.0000e+00 0.0000e+00 0.0000 1.70e+04 3 alt 0.042 0.080 Second set of 6dFGS extended data columns. The first column, 6dFGS_ID, is not actually repeated in the table but is repeated here for clarity. The columns Ell, Sbc, Irr, and AGN are the template scales, a_E, a_S, a_I, and a_A in Equation <ref>. The units quoted are the overall normalization given for the templates in <cit.>. AGN_EBmV is the excess extinction, E(B-V), applied to the AGN template. chisqr is the χ^2 of the model from Equation <ref>, and Ndf is the formal number of degrees of freedom in the model (number of filters minus five). FitMode describes whether AGN_EBmV was allowed to vary (“main") or not (“alt"). z_min is the closest redshift at which the galaxy satisfies upper flux cuts, and z_max is the farthest redshift at which the galaxy satisfies the lower flux cuts. rrrr 0.72 Excerpt from SDSS Extended Data, Column Set 1 Ra Dec SDSS_ID AllWISE_ID 54.936790 0.216800 468504134002173952 0544p000_ac51-047826 57.025340 0.208850 1398488404370417664 0574p000_ac51-036206 57.296590 0.185310 1398496375829719040 0574p000_ac51-036226 57.442290 0.158840 1398494451684370432 0574p000_ac51-038743 57.452670 0.044340 1398499399486695424 0574p000_ac51-027223 57.490400 0.074350 1398501598509950976 0574p000_ac51-027204 First set of SDSS extended data columns. The first two columns are the right ascension and declination, in J2000 decimal degrees. 
The SDSS_ID column is an integer index uniquely assigned to targets in the SDSS survey (comes from specObjID column of the SpecObj table in the SDSS data release 10 context of CasJobs). AllWISE_ID is the http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_1a.html#source_idsource_id assigned to the source in the AllWISE survey. r|rrrrrrrrrr 0.85 Excerpt from SDSS Extended Data, Column Set 2 SDSS_ID Ell Sbc Irr AGN AGN_EBmV chisqr Ndf FitMode z_min z_max 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ mag 468504134002173952 3.1090e+01 0.0000e+00 9.6020e-01 0.0000e+00 0.0000 5.700e+02 5 main 0.001 0.232 1398488404370417664 1.2430e+00 0.0000e+00 5.3510e-01 0.0000e+00 0.0000 1.400e+03 7 alt 0.001 0.085 1398496375829719040 3.2860e+01 0.0000e+00 4.5820e+00 0.0000e+00 0.0000 1.400e+03 7 alt 0.001 0.244 1398494451684370432 4.4810e+01 0.0000e+00 2.9780e+00 0.0000e+00 0.0000 1.400e+03 5 alt 0.001 0.291 1398499399486695424 7.1240e+00 1.0600e+01 4.5760e-01 3.1490e-01 13.0300 2.000e+02 7 main 0.001 0.154 1398501598509950976 3.2950e+01 0.0000e+00 1.0460e+00 0.0000e+00 0.0000 3.600e+02 5 main 0.001 0.239 Second set of SDSS extended data columns. The first column, SDSS_ID, is not actually repeated in the table but is repeated here for clarity. The columns Ell, Sbc, Irr, and AGN are the template scales, a_E, a_S, a_I, and a_A in Equation <ref>. The units quoted are the overall normalization given for the templates in <cit.>. AGN_EBmV is the excess extinction, E(B-V), applied to the AGN template. chisqr is the χ^2 of the model from Equation <ref>, and Ndf is the formal number of degrees of freedom in the model (number of filters minus five). FitMode describes whether AGN_EBmV was allowed to vary (“main") or not (“alt"). z_min is the closest redshift at which the galaxy satisfies upper flux cuts, and z_max is the farthest redshift at which the galaxy satisfies the lower flux cuts. rrrr 0.48 Excerpt from GAMA Extended Data, Column Set 1 Ra Dec GAMA_ID AllWISE_ID 174.022810 0.705940 6806 1739p000_ac51-051576 174.100730 0.658910 6808 1739p000_ac51-051657 174.184930 0.709040 6826 1739p000_ac51-049388 174.302790 0.789990 6837 1739p015_ac51-002444 174.346900 0.696450 6840 1739p000_ac51-049335 174.396030 0.820770 6844 1739p015_ac51-002401 First set of GAMA extended data columns. The first two columns are the right ascension and declination, in J2000 decimal degrees. The GAMA_ID column is an integer index uniquely assigned to targets in the GAMA survey. AllWISE_ID is the http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_1a.html#source_idsource_id assigned to the source in the AllWISE survey. r|rrrrrrrrrr 0.76 Excerpt from GAMA Extended Data, Column Set 2 GAMA_ID Ell Sbc Irr AGN AGN_EBmV chisqr Ndf FitMode z_min z_max 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ mag 6806 1.1150e+01 2.7000e+01 2.7750e+00 4.8930e-02 0.1746 1.900e+02 8 main 0.052 0.383 6808 1.0400e+01 0.0000e+00 5.2290e-01 3.2660e-03 0.0000 2.300e+02 6 main 0.031 0.246 6826 2.7550e+00 0.0000e+00 3.2140e-01 0.0000e+00 0.0000 3.600e+03 7 alt 0.015 0.130 6837 1.2690e+00 0.0000e+00 3.7570e-01 0.0000e+00 0.0000 1.100e+03 7 alt 0.014 0.122 6840 1.1440e+01 0.0000e+00 3.8100e-01 3.0570e-02 0.2704 5.300e+02 7 main 0.032 0.248 6844 6.3390e+00 0.0000e+00 1.1100e-01 0.0000e+00 0.0000 1.100e+03 7 alt 0.024 0.192 Second set of GAMA extended data columns. The first column, GAMA_ID, is not actually repeated in the table but is repeated here for clarity. The columns Ell, Sbc, Irr, and AGN are the template scales, a_E, a_S, a_I, and a_A in Equation <ref>. 
The units quoted are the overall normalization given for the templates in <cit.>. AGN_EBmV is the excess extinction, E(B-V), applied to the AGN template. chisqr is the χ^2 of the model from Equation <ref>, and Ndf is the formal number of degrees of freedom in the model (number of filters minus five). FitMode describes whether AGN_EBmV was allowed to vary (“main") or not (“alt"). z_min is the closest redshift at which the galaxy satisfies upper flux cuts, and z_max is the farthest redshift at which the galaxy satisfies the lower flux cuts. rrrrrr 0.8 Excerpt from AGES Extended Data, Column Set 1 Ra Dec AGES_ROW NDWFS_ID SDSS_ID AllWISE_ID 216.393850 32.806960 6 346042 1237662684146041043 2159p333_ac51-002610 216.548230 32.807660 11 346533 1237662684146106652 2159p333_ac51-002915 216.818990 32.809900 26 348131 null 2159p333_ac51-000046 216.245250 32.812800 53 350281 1237664852570800769 2159p333_ac51-002806 217.374410 32.814140 64 351325 1237664853108064555 2177p333_ac51-008367 216.179020 32.814450 68 351568 1237664852570800403 2159p333_ac51-005461 First set of AGES extended data columns. The first two columns are the right ascension and declination, in J2000 decimal degrees. AGES does not have an identifier for its sources, but the plain text tables in <cit.> have corresponding rows. AGES_ROW column contains the identity of the row the galaxy was published in, starting from 0. NDWFS_ID is a uniquely identifying integer assigned in NDWFS to the source (null if not matched). SDSS_ID is a uniquely identifying integer assigned in SDSS to the matching source (null if not matched). AllWISE_ID is the http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_1a.html#source_idsource_id assigned to the source in the AllWISE survey. r|rrrrrrrrrrr 0.83 Excerpt from AGES Extended Data, Column Set 2 AGES_ROW Ell Sbc Irr AGN AGN_EBmV chisqr Ndf FitMode z_min z_mid z_max 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ mag 6 1.6020e+00 5.7740e+00 0.0000e+00 0.0000e+00 0.0000 4.70e+03 4 alt 0.006 0.200 0.243 11 3.7850e+00 0.0000e+00 0.0000e+00 9.4590e-02 0.2121 2.00e+02 4 main 0.006 0.190 0.276 26 7.4010e+01 0.0000e+00 0.0000e+00 0.0000e+00 0.0000 4.90e+03 4 alt 0.026 0.580 0.865 53 2.5880e+01 0.0000e+00 2.8970e+00 0.0000e+00 0.0000 4.40e+01 4 main 0.017 0.453 0.745 64 1.0010e+01 2.7350e+00 0.0000e+00 9.0590e-02 0.1298 2.90e+02 4 main 0.010 0.293 0.475 68 1.5140e+01 1.1810e+00 1.1090e-01 0.0000e+00 0.0000 4.20e+02 1 alt 0.012 0.333 0.481 Second set of AGES extended data columns. The first column, AGES_ROW, is not actually repeated in the table but is repeated here for clarity. The columns Ell, Sbc, Irr, and AGN are the template scales, a_E, a_S, a_I, and a_A in Equation <ref>. The units quoted are the overall normalization given for the templates in <cit.>. AGN_EBmV is the excess extinction, E(B-V), applied to the AGN template. chisqr is the χ^2 of the model from Equation <ref>, and Ndf is the formal number of degrees of freedom in the model (number of filters minus five). FitMode describes whether AGN_EBmV was allowed to vary (“main") or not (“alt"). z_min is the closest redshift at which the galaxy satisfies upper flux cuts, z_mid is the redshift at which it satisfies the middle flux cut, and z_max is the farthest redshift at which the galaxy satisfies the lower flux cuts. 
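The z_min, z_mid, and z_max columns in these tables are boundary redshifts at which a source crosses a flux cut. The following hedged sketch shows the idea for a single faint limit, ignoring K-corrections (which the published columns do include), with an assumed cosmology; it is not the computation used to produce the tables.

    # Sketch: solve F(z) = F_lim for z, with F(z) = L / (4 pi d_L(z)^2).
    from astropy.cosmology import FlatLambdaCDM
    from astropy import units as u
    from scipy.optimize import brentq
    import math

    cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # assumed cosmology, illustrative

    def z_max(L_watts, F_lim_wm2, z_lo=1e-4, z_hi=10.0):
        """Farthest z at which a source of luminosity L still clears F_lim."""
        def excess(z):
            d_L = cosmo.luminosity_distance(z).to(u.m).value
            return L_watts / (4.0 * math.pi * d_L ** 2) - F_lim_wm2
        return brentq(excess, z_lo, z_hi)  # assumes a sign change in [z_lo, z_hi]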
rrrrrrr 0.86 Excerpt from zCOSMOS Extended Data, Column Set 1 Ra Dec zCOS_ID COS_ID SCOS_ID SDSS_ID AllWISE_ID 150.502790 1.877650 700137 507130 null null 1497p015_ac51-048699 150.280590 2.021280 700529 null null 1237653664722125224 1497p015_ac51-051213 150.122600 2.108540 700585 768236 null null 1497p015_ac51-054126 150.183040 2.028990 700587 null 128673 null 1497p015_ac51-053799 150.393000 2.342770 701269 1213568 199260 null 1497p030_ac51-000271 150.653290 1.625360 800270 67120 41381 1237653664185385269 1512p015_ac51-036648 First set of zCOSMOS extended data columns. The first two columns are the right ascension and declination, in J2000 decimal degrees. The zCOS_ID column is an integer index uniquely assigned to targets in the zCOSMOS survey. COS_ID is a uniquely identifying integer assigned in <cit.> to the matching source (null if not matched). SCOS_ID is a uniquely identifying integer assigned in SCOSMOS to the matching source (null if not matched). AllWISE_ID is the http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_1a.html#source_idsource_id assigned to the source in the AllWISE survey. r|rrrrrrrrrr 0.8 Excerpt from zCOSMOS Extended Data, Column Set 2 zCOS_ID Ell Sbc Irr AGN AGN_EBmV chisqr Ndf FitMode z_min z_max 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ 10^10L_⊙ mag 700137 1.1050e+01 6.2610e+00 0.0000e+00 3.6830e+00 1.0500 4.900e+02 8 main 0.069 0.885 700529 0.0000e+00 8.1030e+00 0.0000e+00 4.4850e-01 13.0300 1.600e-05 -2 main 0.016 0.283 700585 5.2150e+00 0.0000e+00 0.0000e+00 3.2440e+00 0.6921 1.400e+03 11 main 0.043 0.733 700587 1.2620e+01 0.0000e+00 1.6640e+00 0.0000e+00 0.0000 1.600e+03 2 alt 0.021 0.471 701269 0.0000e+00 2.8730e+01 7.7650e-01 0.0000e+00 0.0000 9.300e+03 17 alt 0.088 0.574 800270 0.0000e+00 2.8730e+01 7.7650e-01 0.0000e+00 0.0000 9.300e+03 17 alt 0.088 0.574 Second set of zCOSMOS extended data columns. The first column, zCOS_ID, is not actually repeated in the table but is repeated here for clarity. The columns Ell, Sbc, Irr, and AGN are the template scales, a_E, a_S, a_I, and a_A in Equation <ref>. The units quoted are the overall normalization given for the templates in <cit.>. AGN_EBmV is the excess extinction, E(B-V), applied to the AGN template. chisqr is the χ^2 of the model from Equation <ref>, and Ndf is the formal number of degrees of freedom in the model (number of filters minus five). FitMode describes whether AGN_EBmV was allowed to vary (“main") or not (“alt"). z_min is the closest redshift at which the galaxy satisfies upper flux cuts, and z_max is the farthest redshift at which the galaxy satisfies the lower flux cuts. § DISCUSSIONThe data gathered and characterized here was collected primarily to use in measuring the 2.4 luminosity function of all galaxies back to a redshift of z = 1, as is done in this work's companion paper LW17III.The main purpose of this work is to describe, in detail, the cuts made to the data and the characteristics of the resulting set.This process is an essential component in evaluating the sensitivity of the measurements carried out in LW17III and in making the data presented here both auditable and extendable. 
The multi-wavelength data sets available for most of the surveys covered here are extensive, and a more sophisticated spectro-luminosity functional analysis than what is in LW17III should be possible if a fast and deterministic high dimension Gaussian integrator can be developed.We would like to thank theteam.This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration.We would like to thank the SDSS team.Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.We would like to thank the GAMA team.GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programs including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/ .Based on observations made with ESO Telescopes at the La Silla or Paranal Observatories under programme ID 175.A-0839.We would like to thank the 2MASS team.This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.We would like to thank the MAST team.Some/all of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. 
Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts.We would like to thank the NDWFS team.This work made use of images and/or data products provided by the NOAO Deep Wide-Field Survey (Jannuzi and Dey 1999; Jannuzi et al. 2005; Dey et al. 2005), which is supported by the National Optical Astronomy Observatory (NOAO). NOAO is operated by AURA, Inc., under a cooperative agreement with the National Science Foundation.We would like to thank the IPAC team.This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.We would like to thank the GALEX team.Based on observations made with the NASA Galaxy Evolution Explorer.GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034.We would also like to thank the teams behind 6dFGS, AGES, zCOSMOS, SDWFS, and COSMOS.RJA was supported by FONDECYT grant number 1151408.
http://arxiv.org/abs/1702.07828v2
{ "authors": [ "S. E. Lake", "E. L. Wright", "R. J. Assef", "T. H. Jarrett", "S. Petty", "S. A. Stanford", "D. Stern", "C. -W. Tsai" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170225033532", "title": "The 2.4 $μ$m Galaxy Luminosity Function as Measured Using WISE. II. Sample Selection" }
Constructing Adjacency Arrays from Incidence Arrays
Hayden Jananthan^1,2    Karia Dibert^2,3    Jeremy Kepner^2,3,4
^1 Vanderbilt University Mathematics Department, ^2 MIT Lincoln Laboratory Supercomputing Center, ^3 MIT Mathematics Department, ^4 MIT Computer Science & AI Laboratory

Graph construction, a fundamental operation in a data processing pipeline, is typically done by multiplying the incidence array representations of a graph, 𝐄_in and 𝐄_out, to produce an adjacency array of the graph, 𝐀, that can be processed with a variety of algorithms. This paper provides the mathematical criteria to determine if the product 𝐀 = 𝐄^ T_out𝐄_in will have the required structure of the adjacency array of the graph. The values in the resulting adjacency array are determined by the corresponding addition ⊕ and multiplication ⊗ operations used to perform the array multiplication. Illustrations of the various results possible from different ⊕ and ⊗ operations are provided using a small collection of popular music metadata.

graph; incidence array; adjacency array; semiring

§ INTRODUCTION

This material is based in part upon work supported by the NSF under grant number DMS-1312831. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

The duality between the canonical representation of graphs as abstract collections of vertices and edges and a matrix representation has been a part of graph theory since its inception <cit.>. Matrix algebra has been recognized as a useful tool in graph theory for nearly as long <cit.>. The modern description of the duality between graph algorithms and matrix mathematics (or sparse linear algebra) has been extensively covered in the recent literature <cit.> and has further spawned the development of the GraphBLAS math library standard (GraphBLAS.org) <cit.> that has been developed in a series of proceedings <cit.> and implementations <cit.>.

Adjacency arrays, typically denoted 𝐀, have much in common with adjacency matrices. Likewise, incidence arrays or edge arrays, typically denoted 𝐄, have much in common with incidence matrices <cit.>, edge matrices <cit.>, adjacency lists <cit.>, and adjacency structures <cit.>. The powerful link between adjacency arrays and incidence arrays via array multiplication is the focus of the first part of this paper.

Incidence arrays are often readily obtained from raw data. In many cases, an associative array representing a spreadsheet or database table is already in the form of an incidence array. However, to analyze a graph, it is often convenient to represent the graph as an adjacency array. Constructing an adjacency array from data stored in an incidence array via array multiplication is one of the most common and important steps in a data processing system.

Given a graph G with vertex set K_out∪ K_in and edge set K, the construction of adjacency arrays for G relies on the assumption that 𝐄^ T_out𝐄_in is an adjacency array of G. This assumption is certainly true in the most common case where the value set is composed of non-negative reals and the operations ⊕ and ⊗ are arithmetic plus (+) and arithmetic times (×) respectively.
However, one hallmark of associative arrays is their ability to contain nontraditional data as values. For these value sets, ⊕ and ⊗ may be redefined to operate on non-numerical values. For example, for the value set of all alphanumeric strings, with

⊕ = max()    ⊗ = min()

it is not immediately apparent whether 𝐄^ T_out𝐄_in is an adjacency array of the graph whose set of vertices is K_out∪ K_in. In the subsequent sections, the criteria on the value set V and the operations ⊕ and ⊗ are presented so that 𝐀 = 𝐄^ T_out𝐄_in always produces an adjacency array <cit.>.

§.§ Definitions

For a directed graph (from here onwards, just `graph') G, K_out will denote the set of vertices which are the sources of edges, K_in will denote the set of vertices which are the targets of edges, and K will denote the set of edges. The vertex set of G will be assumed to be K_out∪ K_in. K_out, K_in, and K are assumed to be finite and totally-ordered. V will denote the set of values that the data can take on, such as non-negative real numbers or the elements of an ordered set. ⊕ and ⊗ are binary operations on V (in particular, V is closed under the operations ⊕ and ⊗), such as ⊕ = + and ⊗ = × or ⊕ = max and ⊗ = +. ⊕ and ⊗ each have identity elements 0 and 1, respectively, i.e.

v ⊕ 0 = 0 ⊕ v = v
v ⊗ 1 = 1 ⊗ v = v

for all v∈ V. For the purposes of understanding what algebraic properties are required for 𝐄^ T_out𝐄_in to be an adjacency array of a graph, ⊕ and ⊗ will not be assumed to be associative or commutative, ⊗ does not necessarily distribute over ⊕, nor is 0 assumed to be an annihilator of ⊗.

An associative array is a map 𝐀: K_1× K_2 → V, where K_1 and K_2 are finite totally-ordered sets, referred to as key sets and whose elements are called keys, and V is the value set. If 𝐀: K_1× K_2 → V is an associative array, then 𝐀^ T: K_2× K_1 → V is the associative array defined as

𝐀^ T(k_2,k_1) = 𝐀(k_1,k_2)

where k_1∈ K_1 and k_2∈ K_2.

Multiplication of associative arrays is defined as

𝐂 = 𝐀⊕.⊗𝐁 = 𝐀𝐁

or more specifically

𝐂(k_1,k_2) = ⊕_k_3 𝐀(k_1,k_3) ⊗ 𝐁(k_3,k_2)

where 𝐀, 𝐁, and 𝐂 are associative arrays

𝐀 : K_1 × K_3 → V
𝐁 : K_3 × K_2 → V
𝐂 : K_1 × K_2 → V

and k_1 ∈ K_1, k_2 ∈ K_2, k_3 ∈ K_3.

If G is a graph with vertex set K_out∪ K_in and edge set K, then 𝐄_out: K× K_out→ V is a source incidence array if 𝐄_out(k,a) ≠ 0 if and only if the edge k∈ K is directed outward from the vertex a∈ K_out, and 𝐄_in: K× K_in→ V is a target incidence array if 𝐄_in(k,a) ≠ 0 if and only if the edge k∈ K is directed into the vertex a∈ K_in.

If G is a graph with vertex set K_out∪ K_in and edge set K, then 𝐀: K_out× K_in→ V is an adjacency array if 𝐀(a,b) ≠ 0 if and only if there is an edge with source a and target b.

§ ADJACENCY ARRAY CONSTRUCTION

If 𝐀 is an adjacency array for a graph G=(K_out∪ K_in,K), then 𝐀(a,b)≠ 0 if and only if there is an edge k with source a and target b, i.e.
so that 𝐄_out(k,a) ≠ 0 and 𝐄_in(k,b) ≠ 0. In the case where the product of two non-zero values is non-zero, this can be restated to say that 𝐀(a,b) ≠ 0 if and only if 𝐄_out(k,a) ⊗ 𝐄_in(k,b) ≠ 0 for some k ∈ K. Writing this as

𝐄_out(k,a) ⊗ 𝐄_in(k,b) = 𝐄^ T_out(a,k) ⊗ 𝐄_in(k,b)

this latter expression looks like a term in the evaluation

(𝐄_out^ T𝐄_in)(a,b) = ⊕_k∈ K 𝐄^ T_out(a,k) ⊗ 𝐄_in(k,b)

but the introduction of more terms means that more assumptions need to be made about the relationships between ⊕, ⊗, and 0.

Let V be a set with closed binary operations ⊕, ⊗ with identities 0, 1 ∈ V. Then the following are equivalent:
* ⊕ and ⊗ satisfy the properties
  * Zero-Sum-Free: a⊕ b = 0 if and only if a=b=0,
  * No Zero Divisors: a⊗ b = 0 if and only if a=0 or b=0, and
  * 0 is Annihilator for ⊗: a⊗ 0 = 0⊗ a = 0.
* If G is a graph with out-vertex and in-vertex incidence arrays 𝐄_out: K× K_out→ V and 𝐄_in: K× K_in→ V, then 𝐄_out^ T𝐄_in is an adjacency array for G.

Let 𝐀 = 𝐄_out^ T𝐄_in. As above, for 𝐀 to be the adjacency array of G, the entry 𝐀(k_out,k_in) must be nonzero if and only if there is an edge from k_out to k_in, which is equivalent to saying that the entry must be nonzero if and only if there is a k ∈ K such that

𝐄^ T_out(k_out,k) ≠ 0 and 𝐄_in(k,k_in) ≠ 0.

Taken altogether, the above pair of conditions imply

⊕_k ∈ K 𝐄^ T_out(k_out,k) ⊗ 𝐄_in(k,k_in) ≠ 0 ⟺ ∃ k∈ K so that 𝐄^ T_out(k_out,k) ≠ 0 and 𝐄_in(k,k_in) ≠ 0.

First, the above condition can be restated in a form that more easily provides the zero-sum-freeness of ⊕, the lack of zero-divisors for ⊗, and the fact that 0 annihilates under ⊗. Equation <ref> is equivalent to

⊕_k ∈ K 𝐄_out(k,x) ⊗ 𝐄_in(k,y) = 0 ⟺ ∄ k ∈ K such that 𝐄_out(k,x) ≠ 0 and 𝐄_in(k,y) ≠ 0,

which in turn is equivalent to

⊕_k ∈ K 𝐄_out(k,x) ⊗ 𝐄_in(k,y) = 0 ⟺ ∀ k ∈ K, 𝐄_out(k,x) = 0 or 𝐄_in(k,y) = 0.

This expression may be split up into two conditional statements

⊕_k ∈ K 𝐄_out(k,x) ⊗ 𝐄_in(k,y) = 0 ⇒ ∀ k ∈ K, 𝐄_out(k,x) = 0 or 𝐄_in(k,y) = 0

and

∀ k ∈ K, 𝐄_out(k,x) = 0 or 𝐄_in(k,y) = 0 ⇒ ⊕_k ∈ K 𝐄_out(k,x) ⊗ 𝐄_in(k,y) = 0.

Equation <ref> implies that V is zero-sum-free. Suppose there exist nonzero v, w ∈ V such that v ⊕ w = 0, i.e. that nontrivial additive inverses exist. Then it is possible to choose a graph G with edge set {k_1,k_2} and vertex set {a,b}, where both k_1, k_2 start from a and end at b. Then defining

𝐄_out(k_1,a)=v,  𝐄_out(k_2,a)=w,  𝐄_in(k_i,b)=1

provides proper out-vertex and in-vertex incidence arrays for G. Moreover, it is the case that

𝐄^ T_out𝐄_in(a,b) = (v⊗ 1) ⊕ (w⊗ 1) = v⊕ w = 0,

which contradicts Equation <ref>. Therefore, no such nonzero v and w may be present in V, meaning it is necessary that V be zero-sum-free.

Equation <ref> implies that V has no zero-divisors. Suppose v⊗ w = 0. Define the graph G to have edge set {k} and vertex set {a} with a single self-loop given by k. Then define

𝐄_out(k,a)=v,  𝐄_in(k,a)=w

to obtain out-vertex and in-vertex incidence arrays for G. Then

𝐄^ T_out𝐄_in(a,a) = 𝐄_out(k,a) ⊗ 𝐄_in(k,a) = v⊗ w = 0.

Thus, Equation <ref> implies that v = 0 or w = 0, and hence V has no zero-divisors.

Equation <ref> implies that 0 annihilates V under ⊗. Suppose v∈ V.
Define the graph G to have edge set {k_1,k_2} and vertex set {a,b}, with self-loops at a and b given by k_1 and k_2, respectively. Defining

𝐄_out(k_1,a) = v = 𝐄_in(k_1,a)  and  𝐄_out(k_2,b) = v = 𝐄_in(k_2,b)

(and all other entries in 𝐄_out and 𝐄_in equal to 0) results in out-vertex and in-vertex incidence arrays of G. Moreover, it is true that

0 = 𝐄^ T_out𝐄_in(a,b) = 𝐄_out(k_1,a)⊗𝐄_in(k_1,b) ⊕ 𝐄_out(k_2,a)⊗𝐄_in(k_2,b) = (v⊗ 0) ⊕ (0⊗ v).

By Lemma <ref>, V is zero-sum-free, so it follows that v⊗ 0 = 0⊗ v = 0. Thus, 0 is an annihilator for ⊗.

Now Theorem <ref>(i) is shown to be sufficient for Theorem <ref>(ii) to hold. Assume that zero is an annihilator, V is zero-sum-free, and V has no zero-divisors. Zero-sum-freeness and the nonexistence of zero divisors give

∃ k ∈ K so that 𝐄_out(k,x) ≠ 0 and 𝐄_in(k,y) ≠ 0 ⇒ ⊕_k ∈ K 𝐄_out(k,x) ⊗ 𝐄_in(k,y) ≠ 0,

which is the contrapositive of Equation <ref>. And, that zero is an annihilator gives

∀ k ∈ K, 𝐄_out(k,x) = 0 or 𝐄_in(k,y) = 0 ⇒ ⊕_k ∈ K 𝐄_out(k,x) ⊗ 𝐄_in(k,y) = 0,

which is (<ref>). As Equation <ref> and Equation <ref> combine to form Equation <ref>, it is established that the conditions are sufficient for Equation <ref>.

§ ADJACENCY ARRAY OF REVERSE GRAPH

The remaining product of the incidence arrays that is defined is 𝐄^ T_in𝐄_out. The above requirements will now be shown to be necessary and sufficient for the remaining product to be the adjacency array of the reverse of the graph. Recall that the reverse of G is the graph G̅ in which all the arrows in G have been reversed.

Let G be a graph with incidence matrices 𝐄_out and 𝐄_in. Condition (i) in Theorem <ref> is necessary and sufficient so that 𝐄^ T_in𝐄_out is an adjacency matrix of the reverse of G.

Let G̅ denote the reverse of G, and let 𝐄̅_out and 𝐄̅_in be out-vertex and in-vertex incidence arrays for G̅, respectively. Recall that G̅ is defined to have the same edge and vertex sets as G but changes the directions of the edges; in other words, if an edge k leaves a vertex a in G, then it enters a in G̅, and vice versa. As such, 𝐄_out(k,a) ≠ 0 if and only if 𝐄̅_in(k,a) ≠ 0, and likewise 𝐄_in(k,a) ≠ 0 if and only if 𝐄̅_out(k,a) ≠ 0. As such, choosing 𝐄_out=𝐄̅_in and 𝐄_in=𝐄̅_out gives valid in-vertex and out-vertex incidence matrices for G̅, respectively. Then by Theorem <ref> it can be shown that

𝐄̅^ T_out𝐄̅_in = 𝐄^ T_in𝐄_out.

It is now straightforward to identify algebraic structures that comply with the established criteria. Notably, all zero-sum-free semirings with no zero-divisors comply, such as ℕ or ℝ_≥ 0 with the standard addition and multiplication. In addition, any linearly ordered set with least and greatest elements complies with ⊕ and ⊗ given by max and min, respectively. Some non-examples, however, include the max-plus algebra or non-trivial Boolean algebras, which do not satisfy the zero-product property, or rings, which except for the zero ring are not zero-sum-free. Furthermore, the value sets of associative arrays need not be defined exclusively as semirings, as several semiring-like structures satisfy the criteria. These structures may lack the properties of additive or multiplicative commutativity, additive or multiplicative associativity, or distributivity of multiplication over addition, which are not necessary to ensure that the product of incidence arrays yields an adjacency array. The criteria guarantee an accurate adjacency array for any dataset that satisfies them, regardless of value distribution in the incidence arrays.
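The theorem can be exercised directly. Below is a minimal Python sketch (toy keys and weights, not from the paper) that multiplies incidence arrays over the value set of non-negative reals with ⊕ = max and ⊗ = min, which satisfies condition (i); the resulting array is nonzero exactly on the vertex pairs joined by an edge.

    # Sketch: A = E_out^T (+).(x) E_in for dict-of-dict associative arrays.
    def array_mult(A, B, plus, times, zero):
        """C(k1,k2) = (+)_k3 A(k1,k3) (x) B(k3,k2)."""
        C = {}
        for k1, row in A.items():
            for k3, a in row.items():
                for k2, b in B.get(k3, {}).items():
                    prev = C.setdefault(k1, {}).get(k2, zero)
                    C[k1][k2] = plus(prev, times(a, b))
        return C

    def transpose(A):
        T = {}
        for k1, row in A.items():
            for k2, v in row.items():
                T.setdefault(k2, {})[k1] = v
        return T

    # two edges e1, e2 from vertex a to vertex b, with made-up weights;
    # rows of E_out and E_in are edges, columns are vertices
    E_out = {"e1": {"a": 2.0}, "e2": {"a": 5.0}}
    E_in = {"e1": {"b": 3.0}, "e2": {"b": 1.0}}

    A = array_mult(transpose(E_out), E_in, plus=max, times=min, zero=0.0)
    print(A)  # {'a': {'b': 2.0}}: nonzero exactly where an edge a -> b exists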
However, if the incidence arrays are known to possess a certain structure, it is possible to circumvent some of the conditions and still always produce adjacency arrays. For example, if each key set of an undirected incidence array 𝐄 is a list of documents and the array entries are sets of words shared by documents, then a word in both 𝐄(i,j) and 𝐄(m,n) has to be in 𝐄(i,n) and 𝐄(m,j). This structure means that when multiplying 𝐄^ T𝐄 using ⊕ = ∪ and ⊗ = ∩, a nonempty set will never be “multiplied” by (intersected with) a disjoint nonempty set. This eliminates the need for the zero-product property to be satisfied, as every multiplication of nonempty sets is already guaranteed to produce a nonempty set. The array produced will contain as entries a list of words shared by those two documents.

Though the criteria ensure that the product of incidence arrays will be an adjacency array, they do not ensure that certain matrix properties hold. For example, the property (𝐀𝐁)^ T=𝐁^ T𝐀^ T may be violated under these criteria, as (𝐄^ T_out𝐄_in)^ T is not necessarily equal to 𝐄^ T_in𝐄_out. (For this matrix transpose property to always hold, the operation ⊗ would have to be commutative.)

§ GRAPH CONSTRUCTION WITH DIFFERENT SEMIRINGS

The ability to change ⊕ and ⊗ operations allows different graph adjacency arrays to be constructed using the same element-wise addition, element-wise multiplication, and array multiplication syntax. Specific pairs of operations are best suited for constructing certain types of adjacency arrays. The pattern of edges resulting from array multiplication of incidence arrays is generally preserved for various semirings. However, the non-zero values assigned to the edges can be very different and enable the construction of different graphs.

For example, constructing an adjacency array of the graph of music writers connected to music genres from Figure <ref> begins with selecting the incidence sub-arrays 𝐄_1 and 𝐄_2 as shown in Figure <ref>. Array multiplication of 𝐄_1^ T with 𝐄_2 produces the desired adjacency array of the graph. Figure <ref> illustrates this array multiplication for different operator pairs ⊕ and ⊗. The pattern of edges among vertices in the adjacency arrays shown in Figure <ref> is the same for the different operator pairs, but the edge weights differ. All the non-zero values in 𝐄_1 and 𝐄_2 are 1. All the ⊗ operators in Figure <ref> have the property

0 ⊗ 1 = 1 ⊗ 0 = 0

for their respective values of zero, be it 0, -∞, or ∞. Likewise, all the ⊗ operators in Figure <ref> also have the property

1 ⊗ 1 = 1

except where ⊗ = +, in which case

1 ⊗ 1 = 2.

The differences in the adjacency array weights are thus less pronounced than if the values of 𝐄_1 and 𝐄_2 were more diverse. The most apparent difference is between the +.× semiring and the other semirings in Figure <ref>. In the case of the +.× semiring, the ⊕ operation + aggregates values from all the edges between two vertices.
Additional positive edges will increase the overall weight in the adjacency array. In the other pairs of operations, the ⊕ operator is either max or min, which effectively selects only one edge weight to use for assigning the overall weight. Additional edges will only impact the edge weight in the adjacency array if the new edge is an appropriate maximum or minimum value. Thus, +.× constructs adjacency arrays that aggregate all the edges. The other semirings construct adjacency arrays that select extremal edges. Each can be useful for constructing graph adjacency arrays in the appropriate context.

The impact of different semirings on the graph adjacency array weights is more pronounced if the values of 𝐄_1 and 𝐄_2 are more diverse. Figure <ref> modifies 𝐄_1 so that a value of 2 is given to the non-zero values in the column Genre|Pop and a value of 3 is given to the non-zero values in the column Genre|Rock. Figure <ref> shows the results of constructing adjacency arrays with 𝐄_1 and 𝐄_2 using different semirings. The impact of changing the values in 𝐄_1 can be seen by comparing Figure <ref> with Figure <ref>.

For the +.× semiring, the values in the adjacency array rows Genre|Pop and Genre|Rock are multiplied by 2 and 3. The increased adjacency array values for these rows are a result of the ⊗ operator being arithmetic multiplication ×, so that

2 ⊗ 1 = 2 × 1 = 2
3 ⊗ 1 = 3 × 1 = 3

For the max.+ and min.+ semirings, the values in the adjacency array rows Genre|Pop and Genre|Rock are larger by 1 and 2. The larger values in the adjacency array of these rows are due to the ⊗ operator being arithmetic addition +, resulting in

2 ⊗ 1 = 2 + 1 = 3
3 ⊗ 1 = 3 + 1 = 4

For the max.min semiring, Figure <ref> and Figure <ref> have the same adjacency array because 𝐄_2 is unchanged. The ⊗ operator corresponding to the minimum value function continues to select the smaller non-zero values from 𝐄_2

2 ⊗ 1 = min(2,1) = 1
3 ⊗ 1 = min(3,1) = 1

In contrast, for the min.max semiring, the values in the adjacency array rows Genre|Pop and Genre|Rock are larger by 1 and 2. The increase in adjacency array values for these rows is a result of the ⊗ operator selecting the larger non-zero values from 𝐄_1

2 ⊗ 1 = max(2,1) = 2
3 ⊗ 1 = max(3,1) = 3

Finally, for the max.× and min.× semirings, the values in the adjacency array rows Genre|Pop and Genre|Rock are increased by 1 and 2. Similar to the +.× semiring, the larger adjacency array values for these rows are a result of the ⊗ operator being arithmetic multiplication ×, resulting in

2 ⊗ 1 = 2 × 1 = 2
3 ⊗ 1 = 3 × 1 = 3

Figures <ref> and <ref> show that a wide range of graph adjacency arrays can be constructed via array multiplication of incidence arrays over different semirings. A synopsis of the graph constructions illustrated in Figures <ref> and <ref> is as follows:
+.× sum of products of edge weights connecting two vertices; computes the strength of all connections between two connected vertices.
max.× maximum of the products of edge weights connecting two vertices; selects the edge with largest weighted product of all the edges connecting two vertices.
min.× minimum of the products of edge weights connecting two vertices; selects the edge with smallest weighted product of all the edges connecting two vertices.
max.+ maximum of the sums of edge weights connecting two vertices; selects the edge with largest weighted sum of all the edges connecting two vertices.
min.+ minimum of the sums of edge weights connecting two vertices; selects the edge with smallest weighted sum of all the edges connecting two vertices.
max.min
maximum of the minimum of weights connecting two vertices; selects the largest of all the shortest connections between two vertices.min.max minimum of the maximum of weights connecting two vertices; selects the smallest of all the largest connections between two vertices. § CONCLUSION Graph construction, a fundamental operation in a data processing pipeline, is typically done by multiplying the incidence array representations of a graph, 𝐄_in and 𝐄_out, to produce an adjacency array of the graph, 𝐀.The mathematical criteria to determine if 𝐀 will have the required structure of the adjacency array of the graph over are as follows.Let V be a set with closed binary operations ⊕,⊗ with identities 0,1∈ V.Then the following are equivalent: * ⊕ and ⊗ satisfy the properties * Zero-Sum-Free: a⊕ b=0 if and only if a=b=0, * No Zero Divisors: a⊗ b = 0 if and only if a=0 or b=0, and * 0 is Annihilator for ⊗: a⊗ 0 = 0⊗ a=0. * If G is a graph with out-vertex and in-vertex incidence arrays 𝐄_out:K× K_out→ V and 𝐄_in: K× K_out→ V, then 𝐄_out^ T𝐄_in is an adjacency array for G.The values in the resulting adjacency array are determined by the corresponding addition ⊕ and multiplication ⊗ operations used to perform the array multiplication.§ ACKNOWLEDGMENT The authors would like to thank Paul Burkhardt, Alan Edelman, Sterling Foster, Vijay Gadepally, Sam Madden, Dave Martinez, Tom Mattson, Albert Reuther, Victor Roytburd, and Michael Stonebraker.99[Anderson et al 2016]Anderson2016 M. Anderson, N. Sundaram, N. Satish, M. Patwary, T. L. Willke, & P. Dubey, GraphPad: Optimized Graph Primitives for Parallel and Distributed Platforms, submitted[Bodin & Kursh 1979]BodinKursh1979 L. Bodin & S. Kursh, A detailed description of a computer system for the routing and scheduling of street sweepers, Computers & Operations Research, 6(4), 181-198, 1979[Brualdi 1967]Brualdi1967 R.A. Brualdi, Kronecker products of fully indecomposable matrices and of ultrastrong digraphs, Journal of Combinatorial Theory, 2:135-139, 1967[Bruck & Ryser 1949]BruckRyser1949 R. Bruck & H. Ryser, The nonexistence of certain finite projective planes, Canadian Journal of Mathematics, 1, 88-93, 1949[Buluç & Gilbert 2011]BulucGilbert2011 A. Buluç & J. Gilbert, The Combinatorial BLAS: Design, implementation, and applications . International Journal of High Performance Computing Applications (IJHPCA), 2011[Buluç 2015]Buluc2015A. Buluç, GraphBLAS Special Session, IEEE HPEC 2015, Waltham, MA[Dibert et al 2015]Dibert2015 K. Dibert, H. Jansen & J. Kepner, Algebraic Conditions for Generating Accurate Adjacency Arrays, IEEE MIT Undergraduate Research Technology Conference, 2015[Dobrjanskyj & Freudenstein 1967]DobrjanskyjFreudenstein1967 L. Dobrjanskyj & F. Freudenstein, Some applications of graph theory to the structural analysis of mechanisms, Journal of Engineering for Industry, 89(1), 153-158, 1967[Ekanadham et al 2014]Ekanadham2014 K. Ekanadham, B. Horn, J. Jann, M. Kumar, J. Moreira, P. Pattnaik, M. Serrano, G. Tanase, H. Yu, Graph Programming Interface: Rationale and Specification, IBM Research Report, RC25508 (WAT1411-052) November 19, 2014[Fisher & Wing 1965]FisherWing1965 G. Fisher & O. Wing, Computer recognition and extraction of planar graphs from the incidence matrix, IEEE Transactions on Circuit Theory, 13(2), 154-163, 1966[Ford & Fulkerson 1962]FordFulkerson1962 L. Ford & D. Fulkerson, Flows in networks, Princeton university press, 1962[Fulkerson & Gross 1965]FulkersonGross1965 D. Fulkerson & O. 
Gross, Incidence matrices and interval graphs, Pacific journal of mathematics, 15(3), 835-855, 1965[Harary & Tauth 1964]HararyTauth1966 F. Harary & C.A. Tauth, Connectedness of products of two directed graphs, SIAM Journal on Applied Mathamatics, 14:250-254, 1966[Harary 1969]Harary1969 F. Harary, Graph Theory, Reading:Addison-Wesley, 1969[Hutchison et al 2015]Hutchison2015 D. Hutchison, J. Kepner, V. Gadepally, & A. Fuchs, Graphulo implementation of server-side sparse matrix multiply in the Accumulo database, IEEE High Performance Extreme Computing (HPEC) Conference, Walham, MA, September 2015.[Kepner & Gilbert 2011]KepnerGilbert2011 J. Kepner & J. Gilbert (editors), Graph Algorithms in the Language of Linear Algebra, SIAM Press, Philadelphia, 2011[Kepner et al 2012]Kepner2012 J. Kepner, W. Arcand, W. Bergeron, N. Bliss, R. Bond, C. Byun, G. Condon, K. Gregson, M. Hubbell, J. Kurz, A. McCabe, P. Michaleas, A. Prout, A. Reuther, A. Rosa & C. Yee, �Dynamic Distributed Dimensional Data Model (D4M) Database and Computation System,� ICASSP (International Conference on Accoustics, Speech, and Signal Processing), 2012, Kyoto, Japan[Konig 1931]Konig1931 D. Konig, Graphen und Matrizen (Graphs and Matrices), Matematikai Lapok, 38:116-119, 1931. [Konig 1936]Konig1936 D. Konig, Theorie der endlichen und unendlichen graphen (Theory of finite and infinite graphs), Leipzig:Akademie Verlag M.B.H., 1936; see Richard McCourt (Birkhauser 1990) for an english translation of this classic work[Mattson et al 2013]Mattson2013 T. Mattson, D. Bader, J. Berry, A. Buluç, J. Dongarra, C. Faloutsos, J. Feo, J. Gilbert, J. Gonzalez, B. Hendrickson, J. Kepner, C. Leiserson, A. Lumsdaine, D. Padua, S. Poole, S. Reinhardt, M. Stonebraker, S. Wallach, & A. Yoo, Standards for Graph Algorithms Primitives, IEEE HPEC 2013, Waltham, MA[Mattson 2014a]Mattson2014a T. Mattson, Workshop on Graph Algorithms Building Blocks, IPDPS 2014, Pheoniz, AZ[Mattson 2014b]Mattson2014b T. Mattson, GraphBLAS Special Session, IEEE HPEC 2014, Waltham, MA[Mattson 2015]Mattson2015 T. Mattson, �Workshop on Graph Algorithms Building Blocks,� IPDPS 2015, Hyderabad, India[Mattson 2016]Mattson2016 T. Mattson, �Workshop on Graph Algorithms Building Blocks,� IPDPS 2016, Chicago, IL[McAndrew 1963]McAndrew1963 M.H. McAndrew, On the product of directed graphs, Proceedings of the American Mathematical Society, 14:600-606, 1963[McAndrew 1965]McAndrew1965 M.H. McAndrew, On the polynomial of a directed graph, Proceedings of the American Mathematical Society, 16:303-309, 1965[Sabadusi 1960]Sabadusi1960 G. Sabadusi, Graph multiplication, Mathematische Zeitschrift, 72:446-457, 1960[Tarjan 1972]Tarjan1972 R. Tarjan, Depth-first search and linear graph algorithms, SIAM journal on computing, 1(2), 146-160, 1972[Teh & Yap 1964]TehYap1964 H.H. Teh & H.D. Yap, Some construction problems of homogeneous graphs, Bulletin of the Mathematical Society of Nanying University, 164-196, 1964[Weischel 1962]Weischel1962 P.M. Weischel. The Kronecker product of graphs. Proceedings of the American Mathematical Society, 13(1):47–52, 1962[Zhang et al 2016]Zhang2016 P. Zhang, M. Zalewski, A. Lumsdaine, S. Misurda, & S. McMillan,GBTL-CUDA: Graph Algorithms and Primitives for GPUs, GABB workshop at IPDPS 2016
http://arxiv.org/abs/1702.07832v1
{ "authors": [ "Hayden Jananthan", "Karia Dibert", "Jeremy Kepner" ], "categories": [ "cs.DS", "cs.DM", "math.CO" ], "primary_category": "cs.DS", "published": "20170225041322", "title": "Constructing Adjacency Arrays from Incidence Arrays" }
Representations of Askey–Wilson algebra

Daniel Gromada^†, Severin Pošta^

^† Department of Physics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Břehová 7, CZ-115 19 Prague, Czech Republic
^† E-mail: d.gromada@seznam.cz
^ Department of Mathematics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Trojanova 13, CZ-120 00 Prague, Czech Republic
^ E-mail: severin.posta@fjfi.cvut.cz

Abstract. We deal with the classification problem of finite-dimensional representations of the so-called Askey–Wilson algebra in the case when q is not a root of unity. We classify all representations satisfying a certain property, which ensures diagonalizability of one of the generating elements.

Keywords: algebra, representation, orthogonal polynomials

Introduction

The relationship between special classes of orthogonal polynomials from the Askey scheme has been known for some time. Such a relationship is very valuable since the basic properties of the orthogonal polynomials can be derived from it very easily. In 1991, Alexei Zhedanov <cit.> constructed a q-commutator algebra corresponding to the most general class of discrete polynomials from the Askey–Wilson scheme—the q-Racah polynomials defined by Askey and Wilson in <cit.> (see <cit.> for a detailed overview of the Askey–Wilson scheme). Zhedanov named the algebra after the authors of the scheme, Richard Askey and James Wilson.

The important concept that emerges in both the Askey–Wilson algebra and finite orthogonal polynomial sequences and connects those structures is the so-called Leonard pair, which is a pair of operators such that both are tridiagonal in the eigenbasis of the other. This correspondence was first discovered by Leonard <cit.>. The pair of such operators was named after him by Terwilliger <cit.>. For a nice introduction to the theory of Leonard pairs, see <cit.>. The Leonard pair corresponding to the q-Racah polynomials is studied in <cit.>.

In the Askey–Wilson algebra, the Leonard pair is made of elements of the algebra in a certain representation. If we want to find the Leonard pair and discover the orthogonal polynomials and their properties without using the properties of the orthogonal polynomials in the first place, we can make use of the classification of representations and certain automorphisms, as we have shown, for example, in <cit.>. Although Zhedanov was able to construct a suitable representation of the Askey–Wilson algebra and show the connection with the q-Racah polynomials, the complete classification was not available until the recent work of Hau-Wen Huang <cit.>. (Only in the case when q is not a root of unity. The case when q is a root of unity is much more complicated and the classification problem is still open; see <cit.>.)

Independently of Huang, we worked on the classification problem for the Askey–Wilson algebra as well. Our approach follows Zhedanov's paper <cit.>, using the technique of shift operators to construct the representations. However, it completes Zhedanov's paper with rigorous mathematical theorems and proofs.
The technique is inspired by classification of representations of algebra U'_q(_3) <cit.> which is a special case of Askey–Wilson algebra.We show that every representation satisfying certain property (certain numbers should not be eigenvalues of one of the generating element) can be constructed using shift operators and that those representations are determined by trace of the generating element (and parameters of the algebra of course). Consequently, we were able to prove that the dual representation constructed by Zhedanov, which provides the relationship with q-Racah polynomials, is indeed equivalent to the original one without need of prior knowledge of the properties of the q-Racah polynomials.We also consider separately special case when certain parameters of the Askey–Wilson algebra are zero, which allows to construct new representations by the shift operators not considered by Zhedanov.Although complete classification of Askey–Wilson algebra representations was already presented, we think that our approach is still interesting since it is very straightforward using the classical technique of shift operators and extends the Zhedanov's paper.In the whole work, we assume that q is not a root of unity.Definition and basic propertiesThe definition of Askey–Wilson algebra was presented by Alexey Zhedanov in <cit.> We will use a _3-symmetric presentation, which was mentioned for example in <cit.> We chose the presentation in a way that the algebra U'_q(_3) is a special case when all parameters equal to zero.D.AW] Askey–Wilson algebra _q(A_1,A_2,A_3) is a complex associative algebra generated by three elements I_1, I_2, I_3 and relationseq.AW1] q^1/2I_1I_2-q^-1/2I_2I_1 =I_3+A_3, eq.AW2] q^1/2I_2I_3-q^-1/2I_3I_2 =I_1+A_1, eq.AW3] q^1/2I_3I_1-q^-1/2I_1I_3 =I_2+A_2,where A_1, A_2 and A_3 are complex parameters. We will often denote the algebra only shortly as .Later, Paul Terwilliger in <cit.>resented so called universal Askey–Wilson algebra, that was also defined by three generating elements satisfying the same commutation relations, but A_1, A_2 and A_3 are not considered as complex parameters, but as elements of the center of the algebra. The set of monomials {I_1^kI_2^mI_3^n| k,m,n∈ N_0} forms a basis of . As in the case of U'_q(_3) <cit.>r universal Askey–Wilson algebra <cit.>e make use of the Diamond lemma <cit.> We transform the generating relation into a form compatible with ordering I_1≤ I_2≤ I_3:I_2I_1 =qI_1I_2-q^1/2(I_3+A_3), I_3I_2 =qI_2I_3-q^1/2(I_1+A_1), I_3I_1 =q^-1I_1I_3+q^-1/2(I_2+A_2).We see that there are no inclusion ambiguities on the left hand side and there is only one overlap ambiguity—a monomial I_3I_2I_1 can be reduced using the first or the second relation. 
We show that this ambiguity is resolvable.Reducing by the first relation we getI_3I_2I_1 =qI_3I_1I_2-q^1/2I_3(I_3+A_3)=I_1I_3I_2+q^1/2(I_2(I_2+A_2)-I_3(I_3+A_3)) =qI_1I_2I_3+q^1/2(-I_1(I_1+A_1)+I_2(I_2+A_2)-I_3(I_3+A_3)),while the second relation leads to =qI_2I_3I_1-q^1/2I_3(I_1+A_1)=I_2I_1I_3+q^1/2(-I_1(I_1+A_1)+I_2(I_2+A_2)) =qI_1I_2I_3+q^1/2(-I_1(I_1+A_1)+I_2(I_2+A_2)-I_3(I_3+A_3)).Both results are the same, so the ambiguity is resolvable.Finally, we can see that {I_1^kI_2^mI_3^n} is indeed the set of all reduced monomials.L.AWiso] There are the following isomorphisms of Askey–Wilson algebrasρ_q(A_1,A_2,A_3)→_q(A_2,A_3,A_1), I_1↦ I_2, I_2↦ I_3, I_3↦ I_1, σ_q(A_1,A_2,A_3)→_q(A_2,A_1,A_3), I_1↦ I_2, I_2↦ I_1, I_3↦ I_3+(I_2I_1-I_1I_2)(q^1/2+q^-1/2), τ_ϵ,ϵ'_q(A_1,A_2,A_3)→_q(ϵ A_1,ϵ' A_2,ϵϵ' A_3), I_1↦ϵ I_1, I_2↦ϵ' I_2, I_3↦ϵϵ' I_3,where ϵ, ϵ'∈{1,-1}.Those isomorphisms can be also interpreted as automorphisms of the universal Askey–Wilson algebra. In fact, the first two form a faithful action of the group PSL_2( Z) <cit.>We are going to use the following definition of q-numbers[α]_q:=q^α-q^-α q-q^-1for α∈. L.qnum] (Lemma 10 <cit.> Let q is not a root of unity. n * If q^2α=-q^l for some l∈ Z, then [α-j]_q=[α-k]_q for j,k∈ Z, j≠ k, if and only if j+k=l. * Conversely, having [α-j]_q=[α-k]_q for some α∈ a j,k∈ Z, j≠ k, it follows that q^2α=-q^j+k. * For every λ∈ there exists α∈ such that λ=[α]_q and numbers [α]_q, [α+1]_q,[α+2]_q,… are mutually different. Classification of finite-dimensional representationsAs we indicated in the introduction, we follow the Zhedanov's construction of the representations. So, the definitions of Casimir element, shift operators, characteristic polynomial or dual representations are all inspired by the original paper <cit.>L.casimir] Casimir element eq.casimir]C:=q^2I_1^2+I_2^2+q^2I_3^2-(q^5/2-q^1/2)I_1I_2I_3+q(q+1)A_1I_1+(q+1)A_2I_2+q(q+1)A_3I_3is a central element of .For suitable complex numbers λ, let O_λ and R_λ be the following linear combinations:eq.Odef] O_λ :=- q^1/2A_1+q^-λA_2 [λ]_q([λ]_q-[λ+1]_q)I_3+ I_2+q^-λ+1/2I_1, eq.Rdef] R_λ :=- q^1/2A_1-q^λA_2 [λ]_q([λ]_q-[λ-1]_q)I_3+ I_2-q^λ+1/2I_1.In case of A_1=A_2=0 we consider this definition without the first term and we assume that λ is an arbitrary complex number. Otherwise, we require the expression to be well defined, so [λ]_q≠[λ+1]_q and [λ]_q≠[λ-1]_q, respectively. In the following text, considering a representation R of the algebraon a vector space V, we will denote the representing linear operators just I_1, I_2, and I_3 instead of R(I_1), R(I_2), and R(I_3).Let R be a representation ofon V. Let x∈(I_3+ [λ]_q). Theneq.Oshift] I_3(O_λ x) =- [λ+1]_qO_λ x, eq.Rshift] I_3(R_λ x) =- [λ-1]_qR_λ x, eq.OR] O_λ-1R_λ x =(C̃_λ-1-C)x, eq.RO] R_λ+1O_λ x =(C̃_λ-C)x,whereC̃_λ=-q[λ]_q[λ+1]_q- q([λ]_q+[λ+1]_q)A_3-qA_1^2+ (q^-λ-1/2-q^λ+1/2)A_1A_2+A_2^2([λ]_q-[λ+1]_q)^2.By inspection using the q-commutation relations <ref>eq.AW1]–<ref>eq.AW3].L.I1I2soust] Take y,z∈ V and λ∈ such that both O_λ and R_λ are well-defined. Then the system of linear equationsO_λ x=y, R_λ x=zfor vectors I_1x and I_2x has a unique solution if and only if q^λ≠ϵ for ϵ∈{-1,1}.From <ref>eq.Odef] and <ref>eq.Rdef] we can easily express (q^-λ+q^λ)I_1x. The factor in the bracket is nonzero if and only if q^λ≠ϵ. The same holds for I_2x.Now, consider a general representation of Askey–Wilson algebra. We construct a basis in which I_3 acts diagonally and we try to express the explicit form of such representation. 
The construction of the basis is an analogy of Theorem 3 from <cit.> where the same procedure is performed for U'_q(_3). To perform such a construction, we need to assume that 2ϵ/(q-q^-1) and ϵ/(q^1/2-q^-1/2), ϵ=± 1 are not an eigenvalues of I_3. Such representations will be called classical (inspired by notation in <cit.>.These conditions allow us to use the shift operators to construct the eigenbasis of I_3. The first condition ensures the existence of solution of the system in lemma <ref>L.I1I2soust] and the second one ensures the shift operators to be well defined. However, in the case when A_1=A_2=0, which will be considered separately, we will be able to construct the eigenbasis of I_3 even for the non-classical representations. secc.obrep] Classical representationsV.AWbaze] Let q is not a root of unity. Let R be an irreducible classical representation (i.e. satisfying(I_3-2ϵ/(q-q^-1))={0} and (I_3-ϵ/(q^1/2-q^-1/2))≠{0} for ϵ=± 1) of _q(A_1,A_2,A_3) on V, V=N+1. Then there exists a complex number μ and a non-zero vector v_0∈(I_3+[μ]_q) such that O_μ v_0=0. If we definev_j+1:=R_μ-jv_jfor j=0,1,… r-1,then the tuple (v_0,…,v_N) forms a basis of V. Let w_0 be an eigenvector corresponding to eigenvalue -[μ̃]_q, where μ̃ is chosen in such a way that the numbers -[μ̃]_q, -[μ̃+1]_q,… are mutually different (see Lemma <ref>L.qnum]). Now, definew_j+1:=O_μ̃+jw_jfor j≥ 0and denote l∈ such that w_0,…,w_l-1 are linearly independent, but w_l is already linearly dependent on them. Such l has to exist since V is finite-dimensional. From the equation <ref>eq.Oshift] it follows that the vectors w_j satisfy the eigenequation I_3w_j=-[μ̃+j]_qw_j, where all the eigenvalues are mutually different. Since eigenvectors corresponding to mutually different eigenvalues are linearly independent, we have w_l=0.Denote v_0:=w_l-1 a μ:=μ̃+l-1 and define the following vectors: eq.vjdef]v_j+1:=R_μ-jv_jfor j≥ 0.From the equation <ref>eq.Rshift] it follows that they also have to satisfy the eigenequations eq.I3v]I_3v_j=-[μ-j]_qv_j.The possibility of [μ-j]_q=[μ-j-1]_q for some j, so R_μ-j would not be well defined, contradicts the assumptions since it would follow that q^μ=ϵ q^j-1/2, so -[μ-j]_q=ϵ/(q^1/2-q^-1/2) would be an eigenvalue of I_3. Now we can denote k∈ such that the tuple v_0,…,v_k-1 is linearly independent, while v_k is already linearly dependent. From <ref>eq.OR] it follows that eq.Ov]O_μ-jv_j=O_μ-jR_μ-j+1v_j-1=(C̃_μ-j-C)v_j-1, O_μ v_0=y_l=0for j=1,2,…,k-1.From the irreducibility of the representation it follows that C is a multiple of identity. This complex number will be also denoted by C. Its value is determined by equality 0=R_μ+1O_μ v_0=(C̃_μ-C)v_0, so C=C̃_μ.The equations <ref>eq.vjdef] and <ref>eq.Ov] define a system of equations eq.I1I2soust]R_μ-jv_j=v_j+1, O_μ-jv_j=(C̃_μ-j-C)v_j-1,which, according to Lemma <ref>L.I1I2soust], has a solution for I_1v_j a I_2v_j if and only if q^μ-j≠ϵ for ϵ∈{-1,1}. This condition is satisfied thanks to our assumptions, otherwise [μ-j]_q=2ϵ/(q-q^-1) would be an eigenvalue of I_3 corresponding to the eigenvector v_j. Therefore, we were able to express the vectors I_1v_j and I_2v_j as a linear combination of v_0,…,v_k-1. Moreover, I_3 acts diagonally, so the span of {v_0,…,v_k-1} is an invariant subspace and thanks to the irreducibility it has to be equal to the whole space V. L.klasvlč] For arbitrary (N+1)-dimensional classical irreducible representation the operator I_3 is diagonalizable and has mutually different eigenvalues. 
Its eigenvalues are -[μ]_q, -[μ-1]_q, …, -[μ-N]_q. Denoting v_0, …, v_N the corresponding eigenvectors, we have O_μ v_0=0 and v_r:=R_μ-Nv_N=0. In addition, it holds that q^2μ≠ q^l for every l∈{-1,0,1,…,2N+1}. The diagonalizability and spectrum of I_3 follows from the previous theorem. From assumptions, we have -[μ-j]_q≠ 2ϵ/(q-q^-1), so q^μ≠ϵ q^j for j=0,…,N. We also assume that -[μ-j]_q≠ϵ/(q^1/2-q^-1/2), which means q^μ≠ϵ q^j± 1/2 for j=0,…,N. Together it follows that q^μ≠ϵ q^l/2, l=-1,…,2N+1.This condition implies that the numbers [μ+1]_q, …, [μ-N-1]_q are mutually different according to Lemma <ref>L.qnum].The vector O_μ v_0 is either zero or an eigenvector corresponding to eigenvalue -[μ+1]_q. The second possibility cannot take place since -[μ+1]_q is not in the spectrum of I_3. The same holds for the vector R_μ-Nv_N.In the following text, we will still consider a representation satisfying the assumptions of Theorem <ref>V.AWbaze].We will often deal with sums and products of consecutive q-numbers. Hence, it will be useful to summarize following relations([λ+1]_q±[λ]_q)(q^1/2∓ q^-1/2)=q^λ+1/2∓ q^-λ-1/2, [λ]_q[λ+1]_q=([λ]_q+[λ+1]_q)^2-1 (q^1/2+q^-1/2)^2, ([λ]_q-[λ+1]_q)^2=([λ]_q+[λ+1]_q)^2(q^1/2-q^-1/2)^2(q^1/2+q^-1/2)^2+4(q^1/2+q^-1/2)^2. We denote D̃_j:=C̃_μ-C̃_μ-j which will play an important role for characterization of the representations. Next, we denote eq.lambdadef]Λ_j:=[μ-j+1]_q+[μ-j]_q=q^μ-j+1/2-q^-μ+j-1/2(q^1/2-q^-1/2), g_j:=[μ-j+1]_q-[μ-j]_q=q^μ-j+1/2+q^-μ+j-1/2(q^1/2+q^-1/2).Using the relations above and the definition of C̃_μ we see that D̃_jg_j^2 is a polynomial of fourth order in variable Λ_j. The roots of this polynomial determine the form of the representation. Thus, we could call it a characteristic polynomial. (Characteristic polynomial is defined in <cit.>n a similar way.) From the definition of D̃_j it is clear that D̃_0=0; thus, one of the roots of the characteristic polynomial is Λ_0.Solving the system <ref>eq.I1I2soust] we get eq.I1v]I_1v_j=-q^-1/2 q^-μ+j+q^μ-j(D̃_jv_j-1+v_j+1)-A_1- [μ-j]_qA_2(q^1/2-q^-1/2) g_jg_j+1v_j,eq.I2v]I_2v_j= q^-μ+j+q^μ-j(q^μ-jD̃_jv_j-1-q^-μ+jv_j+1)-- [μ-j]_qA_1(q^1/2-q^-1/2)+A_2 g_jg_j+1v_j. Denote Λ_j_0,…,Λ_j_3 the roots of the characteristic polynomial and choose j_0=0. Since we know, what is the leading coefficient, we can factor the polynomial eq.Djjk]D̃_j =q(q^1/2-q^-1/2)^2(q^1/2+q^-1/2)^4∏_k=0^3(Λ_j-Λ_j_k) g_j^2 =q∏_k=0^3(q^μ-j+1/2-q^-μ+j-1/2-q^μ-j_k+1/2+q^-μ+j_k-1/2)(q-q^-1)^2(q^μ-j+1/2+q^-μ+j-1/2)^2 =q^2μ-2j+2∏_k=0^3((1-q^j-j_k)(1+q^-2μ+j+j_k-1))(q-q^-1)^2(1+q^-2μ+2j-1)^2 L.chpolkor] For any (N+1)-dimensional classical irreducible representation the numbers Λ_0 and Λ_r are mutually different roots of the characteristic polynomial. For any k∈{1,…,N} the number Λ_k is not a root of the characteristic polynomial. We can see that if D̃_k=0 for k∈{1,…,N}, it would mean that the span of v_k,…,v_N is a non-trivial invariant subspace, which contradicts the irreducibility of the representations.Substituting <ref>eq.I1v], <ref>eq.I2v] and <ref>eq.I3v] to the condition (q^1/2I_1I_2-q^-1/2I_2I_1-I_3-A_3)v_N=0 (from the relation <ref>eq.AW1]) we get the following relation eq.Dcond]- q^-1D̃_N+1 q^μ-N+q^-μ+Nv_N=0,which implies D̃_N+1=0, so Λ_N+1 is a root of the characteristic polynomial as well. Now, we have to show that it is not equal to Λ_0.Assuming Λ_0=Λ_N+1 and using the fact that q is not a root of unity we get q^2μ=-q^N, which is excluded by Lemma <ref>L.klasvlč]. 
The form of the representation of course depends on the parameters of the algebra A_1, A_2, and A_3. Besides that, it also depends on the value of the Casimir element C, which is determined by the number q^μ. The possible values of q^μ are restricted by the assumption of finite dimension D̃_N+1=0. This is a polynomial equation of degree eight in variable q^μ. Excluding the possibility Λ_N+1=Λ_0, we can reduce the degree to six by eliminating the factor Λ_N+1-Λ_0. However, the equation remains very hard to solve and we will not try to express the possible values of q^μ explicitly.Representations with different q^μ may be equivalent. We will show that for fixed parameters of the algebra, the class of (N+1)-dimensional equivalent representations is determined by number [μ-N/2]_q, which is, up to a constant, the trace of I_3 (traces were used to distinguish representations already in the case of U'_q(_3) <cit.>nd they are also used in the Huang's classification <cit.>.L.Rekviv] Let R_1 and R_2 be classical irreducible r-dimensional representations ofconstructed in Theorem <ref>V.AWbaze], {v_j^(1)} and {v_j^(2)} the corresponding bases, and μ_1 and μ_2 the corresponding complex numbers characterizing the representations. Then the representations R_1 and R_2 are equivalent if and only if [μ_1-N/2]_q=[μ_2-N/2]_q. We compute the trace of I_3 eq.tr]I_3=∑_j=0^N-[μ-j]_q=-q^(N+1)/2-q^-(N+1)/2 q^1/2-q^-1/2[μ-N/2]_q.Therefore, representations with different [μ-N/2]_q have to be inequivalent.Conversely, let [μ_1-N/2]_q=[μ_2-N/2]_q. The equation <ref>eq.tr] is quadratic in q^μ-N/2 so it can have two solutions. Generally, it holds that [λ]_q=-[-λ]_q. If q^μ_1-N/2 is one of the solutions, the second one has to be q^μ_2-N/2=-q^-μ_1+N/2. This implies that[μ_1-j]_q=[μ_2-N+j]_q, q^μ_1-j+q^-μ_1+j=-(q^μ_2-N+j+q^-μ_2+N-j),Λ^(1)_j=Λ^(2)_N+1-j,where Λ^(1)_j a Λ^(2)_j correspond to the definition <ref>eq.lambdadef] for μ_1 and μ_2. Similarly, we denote D̃_j^(1) and D̃_j^(2) and we define a basis {w_0,…,w_r-1} by equationv_j^(2)=∏_k=1^jD̃_k^(1)· w_N-j.Using <ref>eq.I1v], <ref>eq.I2v], and <ref>eq.I3v] we can check that the representation R_1 has the same matrix elements in the basis {v_j^(1)} as R_2 in {w_j}. To demonstrate the relationship of the representations with orthogonal polynomials it will be convenient to express the representation in terms of the numbers j_1, j_2, and j_3 instead of the parameters of the algebra A_1, A_2, and A_3. The parameters of the algebra together with the parameter of the representation q^μ determine the form of D̃_j, and hence the numbers Λ_j_1, Λ_j_2 a Λ_j_3. Expressing the numbers q^j_1, q^j_2 a q^j_3 in terms of the parameters of the algebra would lead to a bicubic equation, which would be very complicated and we will not perform it. To express the form of the representation in terms of those numbers, we will need the opposite relationship—to express the parameters of the algebra. This can be achieved using Vièt formulas for polynomial D̃_j in the variable Λ_j. 
For given numbers μ, j_1, j_2, j_3 we obtain the parameters A_1, A_2, A_3 such that Λ_j_1, Λ_j_2 a Λ_j_3 are roots of D̃_j.Using one of the relations, we are able to express eq.A3jk]A_3= (q^1/2+q^-1/2)^2(Λ_0+Λ_j_1+Λ_j_2+Λ_j_3),and using the others we express a system of equations for A_1 and A_2(q^1/2-q^-1/2)^3(Λ_0Λ_j_1Λ_j_2+Λ_j_1Λ_j_2Λ_j_3+Λ_j_2Λ_j_3Λ_0+Λ_j_3Λ_0Λ_j_1)- 4(q^1/2-q^-1/2)(Λ_0+Λ_j_1+Λ_j_2+Λ_j_3)=(q^1/2+q^-1/2)^2(q-q^-1)^2A_1A_2, - 4(q^1/2-q^-1/2)^2(Λ_0(Λ_j_1+Λ_j_2+Λ_j_3)+Λ_j_1(Λ_j_2+Λ_j_3)+Λ_j_2Λ_j_3)+ (q^1/2-q^-1/2)^4Λ_0Λ_j_1Λ_j_2Λ_j_3+16=(q^1/2+q^-1/2)^2(q-q^-1)^2(A_1^2+A_2^2). This system leads to a biquadratic equation, so it has four solutions. From symmetry of the problem it is evident that if we denote (A_1,A_2) one of the solutions, the other solutions are (-A_1, -A_2), (A_2,A_1) a (-A_2,-A_1). The explicit form of one of the solutions is followingeq.A1jk] A_1= (q^1/2+q^-1/2)(q-q^-1) ( q^(j_1+j_2+j_3-2μ-1)/2+q^(-j_1+j_2+j_3-2μ-1)/2+q^(j_1-j_2+j_3-2μ-1)/2+q^(j_1+j_2-j_3-2μ-1)/2 - q^(2μ+1-j_1-j_2+j_3)/2-q^(2μ+1-j_1-j_2-j_3)/2-q^(2μ+1+j_1-j_2-j_3)/2-q^(2μ+1-j_1+j_2-j_3)/2),eq.A2jk] A_2= 1(q^1/2+q^-1/2)(q-q^-1)(q^-2μ-1+(j_1+j_2+j_3)/2-q^(-j_1+j_2+j_3)/2-q^(j_1-j_2+j_3)/2-q^(j_1+j_2-j_3)/2 -q^-(-j_1+j_2+j_3)/2-q^-(j_1-j_2+j_3)/2-q^-(j_1+j_2-j_3)/2+q^2μ+1-(j_1+j_2+j_3)/2). We have found a representation for four isomorphic Askey–Wilson algebras with different parameters. If the necessary conditions for (N+1)-dimensional classical representation given by lemmata <ref>L.klasvlč] and <ref>L.chpolkor] are satisfied, then, by explicit computation, we can check that this representation really satisfies the commutation relations <ref>eq.AW1]–<ref>eq.AW3]. Since I_3 has mutually different eigenvalues, we can show the irreducibility of this representation easily. Taking an invariant subspace and an eigenvector of I_3 lying in this subspace. Using the shift operators O_λ a R_λ we construct the remaining elements of the eigenbasis. Therefore, we have proven the following theorem.V.obecklas] Let q is not a root of unity. For arbitrary classical irreducible representation of Askey–Wilson algebra there exist complex numbers μ, j_1, j_2, and j_3 such that this representation is equivalent to the representation given by equations <ref>eq.I3v], <ref>eq.I1v]–<ref>eq.Djjk]. Conversely, let the complex numbers μ, j_1, j_2, and j_3 satisfy the following assumptions: q^2μ≠- q^l for every l∈{-1,0,1,…,2N+1}, one of the numbers Λ_j_1, Λ_j_2, or Λ_j_3 equals to Λ_N+1, and none of those numbers equals to Λ_k for all k∈{1,…,N}. Then the equations <ref>eq.I3v], <ref>eq.I1v]–<ref>eq.Djjk] define an irreducible representations of algebras _q(A_1,A_2,A_3), _q(-A_1,-A_2,A_3), _q(A_2,A_1,A_3), and _q(-A_2,-A_1,A_3), where the numbers A_1, A_2, and A_3 are determined by equations <ref>eq.A3jk]–<ref>eq.A2jk].P.assumptions] The assumptions can be somehow reformulated and a bit simplified. Firstly, the representation obviously does not depend on the order of the numbers j_1, j_2, j_3, so we can fix, for example Λ_j_3=Λ_r. Note, however, that the q-Racah polynomials also depend on four parameters satisfying certain constraint and symmetry that could be used to eliminate one of them, so we will also keep all four parameters μ, j_1, j_2, and j_3. Secondly, the representation depend only on the roots Λ_j_1, Λ_j_2, and Λ_j_3, not on the numbers j_i nor q^j_i. So, we can fix j_3=r instead of Λ_j_3=Λ_r. 
Finally, it holds that Λ_j=Λ_k if and only if q^j=q^k or q^j=q^2μ-k+1, so the condition Λ_j_i≠Λ_r can be reformulated as q^j_i≠ q^r and q^j_i≠ q^2μ-r+1. Our goal is to show the correspondence with orthogonal polynomials. For simplicity, we will work only with the solution <ref>eq.A1jk], <ref>eq.A2jk] and we will not express the form of I_2 in the following text. It could be computed easily in a similar way, but we will not need it. eq.I1vobfin]I_1v_j= -q^μ-j+j_1+j_2+j_3-1/21+q^-2μ+2j (q-q^-1)^2A_j-1C_jv_j-1+ q^-(2μ+2-j_1-j_2-j_3)/2 (q-q^-1)(A_j+C_j-1+q^2μ+2-j_1-j_2-j_3)v_j-q^-μ+j-1/2 1+q^-2μ+2jv_j+1,where eq.Aj] A_j=(1-q^j-j_1+1)(1-q^j-j_2+1)(1-q^j-j_3+1)(1+q^-2μ+j)(1+q^-2μ+2j)(1+q^-2μ+2j+1),eq.Cj] C_j=-q^2μ+2-j_1-j_2-j_3(1-q^j)(1+q^-2μ-1+j+j_1)(1+q^-2μ-1+j+j_2)(1+q^-2μ-1+j+j_3)(1+q^-2μ+2j-1)(1+q^-2μ+2j). secc.00] Representations for A_1=A_2=0V.AWneklbaze] Let q is not a root of unity. Let R be an irreducible representation of _q(0,0,A_3) on V, V=N+1 satisfying (I_3-2ϵ/(q-q^-1))={0} for ϵ=± 1. Then there exists a complex number μ and a non-zero vector v_0∈(I_3+[μ]_q) such that O_μ v_0=0 and if we definev_j+1:=R_μ-jv_jfor j=0,1,… N,then the tuple (v_0,…,v_N) forms a basis of V. The construction can be performed in the same way as in Theorem <ref>V.AWbaze] (now, we do not have to ensure that [μ-j]_q≠[μ-j-1]_q).The case A_1=A_2=0 is, of course, more similar to U'_q(_3), where all the parameters are zero. Here, the non-zero parameter A_3 causes shift of spectra of the representations.Solving the system <ref>eq.I1I2soust] we get eq.I1v00] I_1v_j=-q^-1/2 q^-μ+j+q^μ-j(D̃_jv_j-1+v_j+1),eq.I2v00] I_2v_j= q^-μ+j+q^μ-j(q^μ-jD̃_jv_j-1-q^-μ+jv_j+1). Here, we make use of the fact that D̃_j itself is a polynomial of degree two in Λ_j and factorize it. We get eq.Dj00]D̃_j=q(q^1/2+q^-1/2)^2(Λ_j-Λ_0)(Λ_j+Λ_0+(q^1/2+q^-1/2)^2A_3). Now, consider a classical representation, we will come back to the non-classical case later. We can, of course, use Lemma <ref>L.klasvlč] also in this particular case, so we have v_N+1=0 and D̃_N+1=0. This holds if Λ_N+1=Λ_0 or Λ_N+1=-Λ_0-(q^1/2+q^-1/2)^2A_3. The first possibility was already excluded in the general case.So, assume Λ_N+1=-Λ_0-(q^1/2+q^-1/2)^2A_3. Rearranging the equality we get eq.nyA3](q^-(N+1)/2+q^(N+1)/2)[μ-N/2]_q=-(q^1/2+q^-1/2)A_3.The bracket on the left-hand side cannot be zero since q is not a root of unity. Thus, it is a quadratic equation in q^μ-N/2. In Lemma <ref>L.Rekviv] we have already proven that the representations corresponding to those two solutions have to be equivalent.Finally, we have to decide when the representation is irreducible. In Lemma <ref>L.klasvlč] we have shown that in the classical case we have q^2μ≠ -q^l, where l∈{-1,0,1,…,2N+1} and the operator I_3 has mutually different eigenvalues. We can show the irreducibility in the same way as in the general case. The condition we mentioned can be rewritten in terms of the parameter A_3 by substituting into the equation <ref>eq.nyA3]: eq.A3ner](q^(N+1)/2+q^-(N+1)/2)(q^(k-N)/2+q^-(k-N)/2) q-q^-1≠ -ϵ(q^1/2+q^-1/2)A_3, k∈{-1,0,1,…,2N+1}. 
The final formulas for the representation can be expressed in the following form: eq.I1vklas00] I_1v_j= -q^μ-j+3/2 (1+q^-2μ+2j)(q-q^-1)(1-q^j)(1-q^j-N-1) (1+q^-2μ+j-1)(1+q^-2μ+j+N)v_j-1+-q^-μ+j-1/2 1+q^-2μ+2jv_j+1,eq.I2vklas00] I_2v_j= - q^2μ-2j+2 (1+q^-2μ+2j)(q-q^-1)(1-q^j)(1-q^j-N-1) (1+q^-2μ+j-1)(1+q^-2μ+j+N)v_j-1+- q^-2μ 1+q^-2μ+2jv_j+1,eq.I3vklas00] I_3v_j=-q^μ-j-q^-μ+j q-q^-1v_j,where μ is an arbitrary number satisfying <ref>eq.nyA3], j∈{0,…,N}, and x_-1=x_N+1=0. We can see that those representations correspond to the general ones analysed in the previous section for q^j_1=q^r, q^j_2=- q^μ+1/2, and q^j_3= q^μ+1/2 (the order is, of course, irrelevant).Now, we move to the case of non-classical representations.Consider the same notation as in Theorem <ref>V.AWneklbaze]. Let R be an (N+1)-dimensional non-classical irreducible representation of _q(0,0,A_3) such that (I_3-2ϵ/(q-q^-1))={0} for ϵ=± 1. Than the operator I_3 is diagonalizable and has mutually different eigenvalues, which correspond to eigenvectors v_0,…,v_N. In addition, we have [μ-N]_q=[μ-N-1]_q, i.e. q^μ=ϵ q^N+1/2 for ϵ∈{-1,1} and v_N+1=av_N, where eq.a]a^2=-q[N+1]_q^2-ϵ q(q^(N+1)/2-q^-(N+1)/2)^2 q-q^-1A_3.From the assumptions it follows that ϵ/(q^1/2-q^-1/2) is an eigenvalue of I_3, so there exists k∈{0,…,N} such that -[μ-k]_q=ϵ/(q^1/2-q^-1/2), so q^μ=ϵ q^k± 1/2, and so [μ-k]_q=[μ-(k± 1)]_q. Equivalently, there exists k∈{0,…,N+1} such that q^μ=ϵ q^k-1/2, i.e. [μ-k]_q=[μ-k+1]_q=ϵ/(q^1/2-q^-1/2). From the way of construction of the basis {v_j} it is clear that we cannot have [μ]_q=[μ+1]_q, so we have k>0. Next, we show that for k<N+1 we have a reducible representation. According to Lemma <ref>L.qnum] the equality k=N+1 means that the numbers [μ]_q,…,[μ-N]_q and hence the eigenvalues of I_3 are mutually different.Consider q^μ=ϵ q^k-1/2, k∈{1,…,N+1}. According to Lemma <ref>L.qnum] we have [μ-N-1]_q=[μ-l]_q if and only if l=N+1 or l=2k-N. Therefore, the vector v_N+1 lies in an eigenspace corresponding to the eigenvalue -[μ-2k+N]_q. This subspace is one-dimensional for k>(N+1)/2, otherwise it is trivial. Thus, we can write v_N+1=av_2k-N, where v_j=0 for j≤ 0 and, for simplicity, we choose a=0 in this case, otherwise we have a∈.Substituting <ref>eq.I1v00], <ref>eq.I2v00], and <ref>eq.I3v] into condition (q^1/2I_1I_2-q^-1/2I_2I_1-I_3-A_3)v_N (relation <ref>eq.AW1]) we get eq.Dcond00nekl]D̃_N+1 q^N-k+1/2+q^-N+k-1/2v_N+a1 q^N-k+1/2-q^-N+k-1/2v_2k-N-1=0.This is satisfied if a=0 or 2k-N-1=N+1. The equality 2k-N-1=N+1, i.e. k=N+1 means v_2k-N-1=v_N+1=av_N and q^μ=ϵ q^N+1/2. Substituting in <ref>eq.Dcond00nekl] we get a^2=-D̃_N+1. Substituting in <ref>eq.Dj00] we get the equality <ref>eq.a].Now, consider k<N+1 and a=0, so k∈{1,…,N} and v_k=0, we show that the representation is reducible. According to Lemma <ref>L.qnum] we have [μ-j]_q=[μ-2k+j+1]_q for all j∈ Z. From the preceding equality it is easy to show that Λ_j=Λ_2k-j, so D̃_j=D̃_2k-j. Definew_j=∏_l=1^jD̃_l^-1/2· v_j+∏_l=1^2k-j-1D̃_l^-1/2· v_2k-j-1,for j∈ Z, where an empty product is equal to one and v_j=0 for j∉{0,…,N}. 
It is a linear combination of vectors corresponding to the same eigenvalue (or zero vectors), soI_3w_j=-[μ-j]_qw_j.We can also expressI_1w_j=-ϵ q^-1/2 q^-k+j+1/2+q^k-j-1/2(D̃_j^1/2w_j-1+D̃_j+1^1/2w_j+1), I_2w_j=- q^-k+j+1/2-q^k-j-1/2(q^k-j-1/2D̃_j^1/2w_j-1+q^-k+j+1/2D̃_j+1^1/2w_j+1).Since we have also w_j=w_2k-j-1 and, in particular, w_k-1=w_k and also w_-1=0 for k≥ (N+1)/2 or w_-N-1=0 for k≤ (N+1)/2, we can see that the span of the vectors w_k-1,…, w_0 for k≥ N or w_k,…,w_N for k≤ N forms an invariant subspace of the representation. Since a is determined by <ref>eq.a] up to sign, we found additional four representations of the algebra. The proof of the irreducibility is the same as in the case of classical representations. The representations with different ϵ have different spectra of I_3. The representations with different a have different traces of I_1. Just in case of A_3=∓(q^(N+1)/2+q^-(N+1)/2)^2 and ϵ=± 1 we have a=0, so there are only three non-classical representations.To write the final formulas we have to expressD̃_j =q(q^j/2-q^-j/2)(q^N+1-j/2-q^-N-1+j/2) q-q^-1((q^j/2+q^-j/2)(q^N+1-j/2+q^-N-1+j/2) q-q^-1+ϵ A_3) =q[j]_q[2N+2-j]_q+qϵ(q^j/2-q^-j/2)(q^N+1-j/2-q^-N-1+j/2) q-q^-1A_3.We are going to use the first row, but the second form illustrates the transition to the representations of U'_q(_3) (i.e. for A_3=0) listed in <cit.> eq.I1vneklas00] I_1v_j= ϵ (1-q^-2N+2j-1)(-q(1-q^j)(1-q^-2N-2+j) q-q^-1 (q^N+1-j(1+q^j)(1+q^-2N-2+j) q-q^-1+ϵ A_3)v_j-1+q^-N-1+jv_j+1),eq.I2vneklas00] I_2v_j=(1-q^-2N+2j-1)(-q^N-j+2(1-q^j)(1-q^-2N-2+j) q-q^-1 (q^N+1-j(1+q^j)(1+q^-2N-2+j) q-q^-1+ϵ A_3)v_j-1-q^-2N+2j-1v_j+1),eq.I3vneklas00] I_3v_j=ϵq^N-j+1/2-q^-N+j-1/2 q-q^-1v_j,where ϵ∈{-1,1}, j∈{0,…,N}, v_-1=0, and v_N+1=av_N, where a satisfies <ref>eq.a].Note that the form of the representation coincide with the classical representations with q^μ=ϵ q^N+1/2, j_1=j_2=r and j_3 determined by the equation Λ_j_3=Λ_0+(q^1/2+q^-1/2)^2A_3. The only differences are following. The equation <ref>eq.Djjk] or the equations <ref>eq.Aj], <ref>eq.Cj] contain an removable singularity for j=N that was caused by expanding the formula for D̃_j by g_j^2. Second difference with the classical representations is the fact that we have D̃_N+1≠ 0 and v_N+1≠ 0, which causes an extra term in matrix representation of I_1 and I_2. In the case when A_1=A_2=0 the matrices are tridiagonal with zero diagonal except the very last entry.This completes the classification of all representations of (0,0,A_3) for which 2ϵ/(q-q^-1) is not an eigenvector of I_3.Let q is not a root of unity. Let R be an irreducible representations of _q(0,0,A_3) such that 2ϵ/(q-q^-1) is not an eigenvalue of I_3. Assuming inequality <ref>eq.A3ner] the representation is equivalent to one of the five non-equivalent representations—the classical one given by equations <ref>eq.I1vklas00]–<ref>eq.I3vklas00] or one of the four non-classical ones given by equations <ref>eq.I1vneklas00]–<ref>eq.I3vneklas00]. Assuming equality in the relation <ref>eq.A3ner] for some k, the representation has to be equivalent to one of the four (or three if k=2N+1) non-classical ones given by equations <ref>eq.I1vneklas00]–<ref>eq.I3vneklas00]. sec.ekvivrep] Dual representationsFirst of all, we define a representation equivalent to the representation constructed in Section <ref>secc.obrep] multiplying the basis vectors by some scalar. The result will be a bit more symmetric. 
We define a basis ={x_k}_k=0^N asx_j=(1+q^-2μ+2j)∏_k=0^j q^-k+(j_1+j_2+j_3+1)/2A_k-11 (q-q^-1)· v_j.The equation <ref>eq.I1vobfin] will change to eq.I1xobfin]I_1x_j=- q^ν q-q^-1C_jx_j-1+ q^ν q-q^-1(A_j+C_j-1+q^-2ν)x_j- q^ν q-q^-1A_jx_j+1,where ν=-μ-1+(j_1+j_2+j_3)/2. The action of I_3 will, of course, not change, so eq.I3xobfin]I_3x_j=-[μ-j]_qx_j. Now, we will try to construct an equivalent representation that would define a Leonard pair. Consider a representation of algebra _q(A_1,A_2,A_3) defined by equations <ref>eq.I1xobfin], <ref>eq.I3xobfin] and numbers μ, j_1, j_2, and j_3. Our goal is to find an equivalent representation, where I_1 is diagonal and I_3 irreducible tridiagonal. We will make use of the representation of algebra _q(A_3,A_2,A_1) defined by numbers ν=-μ-1+(j_1+j_2+j_3)/2, j_1, j_2, j_3. Substituting into <ref>eq.A3jk]–<ref>eq.A2jk] we can check that it is indeed a representation of algebra with parameters A_3,A_2,A_1. Using isomorphism σ̃=ρ^-1σρ mapping I_1↦ I_3 and I_3↦ I_1 we finally get a representation of _q(A_1,A_2,A_3) we are looking for.L.dual] Let μ, j_1, j_2, j_3, ν:=-μ-1+(j_1+j_2+j_3)/2 be complex numbers satisfying the following conditions: q^2μ≠-q^l for every l∈{-1,0,1,…,2N+1}, one of the numbers q^j_1, q^j_2, q^j_3 is equal to q^N+1 and none of the numbers Λ_j_1, Λ_j_2, nor Λ_j_3 is equal to Λ_k for every k∈{1,…,N}. Suppose the same inequalities hold after μ and ν are interchanged. Then the classical irreducible representation of _q(A_1,A_2,A_3), where the parameters A_1, A_2, and A_3 are defined by equations <ref>eq.A3jk]–<ref>eq.A2jk], derived in Section <ref>secc.obrep], is equivalent to a representation defined by the same equations after changingI_1↦ I_3, I_2↦ I_2+I_1I_3-I_3I_1 q^1/2+q^-1/2, I_3↦ I_1,μ↦ν,ν↦μ.Such a representation will be called dual to the original one. It is clear that it does not matter if we use the form <ref>eq.I1vobfin] or <ref>eq.I1xobfin], <ref>eq.I3xobfin]. Let us consider the more symmetric form defined above. Then the dual representation has the following form: eq.I1y] I_1y_j=-[ν-j]_qy_j,eq.I3y] I_3y_j=- q^μ q-q^-1D_jy_j-1+ q^μ q-q^-1(B_j+D_j-1+q^-2μ)y_j- q^μ q-q^-1B_jy_j+1,where eq.Bj] B_j=(1-q^j-j_1+1)(1-q^j-j_2+1)(1-q^j-j_3+1)(1+q^-2ν+j)(1+q^-2ν+2j)(1+q^-2ν+2j+1),eq.Dj] D_j=-q^-2μ(1-q^j)(1+q^-2ν-1+j+j_1)(1+q^-2ν-1+j+j_2)(1+q^-2ν-1+j+j_3)(1+q^-2ν+2j-1)(1+q^-2ν+2j). As we mentioned, by interchanging ν and μ, we get an irreducible representation of algebra _q(A_3,A_2,A_1). Thus, applying the isomorphism σ̃, we indeed obtain an irreducible representation of _q(A_1,A_2,A_3). If it is classical, then, according to Theorem <ref>V.obecklas], it has to be equivalent to one of the representations we have already found. These are, according to Lemma <ref>L.Rekviv], determined by trace. Hence, we only have to show that the dual representation is classical and that I_3 has the same trace as the original one. By means of direct computation we can check that the trace of I_3 in dual representation is indeed -(q^(N+1)/2-q^-(N+1)/2)/(q^1/2-q^-1/2) [μ-N/2]_q. Now, we show that I_3 has the same eigenvalues in dual representation as in the original one, namely -[μ]_q,…,-[μ-N]_q.Firstly, we show that -[μ]_q is an eigenvalue of I_3, which means (I_3+[μ]_q)=0. 
Using the form of the representation <ref>eq.I3y] only, we can show using induction on the dimension r that in the dual representation we have(I_3+[μ]_q)=- q^μ q-q^-1∏_k=1^3 (q^1-j_k-1)(q^2-j_k-1)⋯(q^r-j_k-1)(1+q^-2μ+N+1)(1+q^-2μ+N+2)⋯(1+q^-2μ+2N+1).Using the assumption that q^j_k=q^N+1 for some k we get the zero.Without loss of generality, we can again assume that the numbers [μ-N+1]_q, [μ-N]_q, …, [μ]_q, [μ+1]_q, …are all mutually different. We can, therefore, repeat the construction of the eigenbasis for the dual representation as well. Let w̃_0 be an eigenvector corresponding to the eigenvalue -[μ]_q. We will apply O_μ+j repeatedly until we get a linearly dependent vector. The last linearly independent vector will be denoted ṽ_0 and the corresponding eigenvalue -[μ̃]_q. Then we define ṽ_j+1:=R_μ̃-jṽ_j. As we mentioned, we cannot have [μ̃-j]_q=[μ̃-j-1]_q, so we have again constructed an eigenbasis {ṽ_0,…,ṽ_N}. The corresponding eigenvalues are -[μ̃]_q,…,-[μ̃-N]_q. Computing their sum, we get the trace of I_3 in dual representation. Comparing with the previous computation we get μ̃=μ. P.superneklas] To construct a dual representation, we do not need to assume the existence of the original one. It is sufficient to fulfil assumptions for the existence of classical representation defined by numbers ν, j_1, j_2, and j_3, we do not have to exclude the possibility of q^μ=ϵ q^l/2 for some l∈{0,…,2N+1}. In that case, an eigenbasis for I_3 does not have to exist. Nevertheless, it still holds that the numbers [μ-⌊ l/2⌋]_q,[μ-⌊ l/2⌋+1]_q,… are mutually different, so choosing ṽ_0 an eigenvector corresponding to the eigenvalue -[μ]_q we can define ṽ_j+1=R_μ-jṽ_j until we get 0≠ṽ_⌊ l/2⌋, which is for l even an eigenvector corresponding to the eigenvalue -[μ-l/2]_q=2ϵ/(q-q^-1) and for l odd it is an eigenvector corresponding to -[μ-(l-1)/2]_q=ϵ/(q^1/2-q^-1/2). (The equality v_⌊ l/2⌋=0 would contradict the irreducibility of representation.)This example shows that for certain parameters of Askey–Wilson algebra there exist representations containing both 2ϵ/(q-q^-1) or ϵ/(q^1/2-q^-1/2 as eigenvalues of I_3.Correspondence with q-Racah polynomialsNow we are going to show that a representation of Askey–Wilson algebra together with its dual representation defines a Leonard pair correspondingto q-Racah polynomials. More detailed study of this Leonard pair is available in <cit.>Firstly, let us recall the explicit formula for q-Racah polynomials that were discovered by Askey and Wilson in <cit.> We use the notation from <cit.> where properties of all orthogonal polynomial series of the Askey–Wilson scheme are summarized.eq.racdef]R_n(μ(x);α,β,γ,δ| q)=43(q^-n,αβ q^n+1,q^-x,γδ q^x+1;α q,βδ q,γ q|q;q) n=0,1,2,…,N,whereμ(x)=q^-x+γδ q^x+1andeq.rackondim] α q=q^-Norβδ q=q^-Norγ q=q^-N,where N∈_0.Let μ, j_1, j_2, j_3, and ν:=-μ-1+(j_1+j_2+j_3)/2 be numbers satisfying the same assumptions as in the preceding lemma. Define linear operators A,B on V asAx_k=C_kx_k-1-(A_k+C_k-1+q^-2ν)x_k+A_kx_k+1, Bx_k=(q^-k-q^-2μ+k)x_k.Then there exists a basis , where operators A and B have the following form:Ay_k=(q^-k-q^-2ν+k)y_k, By_k=D_ky_k-1-(B_k+D_k-1+q^-2μ)y_k+B_ky_k+1.Therefore, the operators A and B form a Leonard pair. Consider a representation ofin the form <ref>eq.I1xobfin], <ref>eq.I3xobfin]. Then we can write eq.paircorr] A= q^-ν(q-q^-1)I_1, B= q^-μ(q-q^-1)I_3.The basiscorrespond to dual representation constructed in Lemma <ref>L.dual]. 
Note that this result agrees with <cit.> Theorem 6.2, which essentially says that irreducible representations of Askey–Wilson algebra define a Leonard pair if both the operators have mutually different eigenvalues.Now, we can show the correspondence to q-Racah polynomials. Denote eq.koresp1] α=q^-j_1,β=-q^-2μ-1+j_1,γ=q^-j_3,δ=-q^2μ+1-j_1-j_2.Then the q-Racah polynomials R_n(x), n=0,…,N with parameters α, β, γ, δ are hidden in this Leonard pair in the following way_jk=r_jR_j(μ(k)), r_j=∏_k=0^jA_k-1 C_k,whereis the transition matrix from basisto basisand μ(x)=q^-x+γδ q^x+1.Indeed, the sequences A_n, B_n, C_n, and D_n can be expressed in terms of α, β, γ, and δ aseq.racAn] A_n =(1-α q^n+1)(1-αβ q^n+1)(1-βδ q^n+1)(1-γ q^n+1)(1-αβ q^2n+1)(1-αβ q^2n+2), eq.racBn] B_n =(1-α q^n+1)(1-βδ q^n+1)(1-γ q^n+1)(1-γδ q^n+1)(1-γδ q^2n+1)(1-γδ q^2n+2), eq.racCn] C_n =q(1-q^n)(1-β q^n)(γ-αβ q^n)(δ-α q^n)(1-αβ q^2n)(1-αβ q^2n+1), eq.racDn] D_n =q(1-q^x)(1-δ q^n)(β-γ q^n)(α-γδ q^n)(1-γδ q^2n)(1-γδ q^2n+1).The similarity relations A^= A^ and B^= B^, where A^,A^,B^,B^ denote the matrices of A and B in the basesand , can be expressed in terms of the q-Racah polynomials asA_jR_j+1(μ(k))-(A_j+C_j-1-γδ q)R_j(μ(k))+C_jR_j-1(μ(k))=(q^-k+γδ q^k+1)R_j(μ(k)), (q^-j+αβ q^j+1)R_j(μ(k))=D_kR_j(μ(k-1))-(B_k+D_k-1-αβ q)R_j(μ(k))+B_kR_j(μ(k+1)),which are precisely the three-term recurrence and difference equation for the q-Racah polynomials (cf. <cit.> eqs. (14.2.3) and (14.2.6)). From the three term recurrence we could also easily compute the orthogonality relation. Now we interpret the assumptions of the Lemma <ref>L.dual] in terms of the orthogonal polynomials sequence. The condition that one of the numbers q^j_i equals to q^N+1 ensuring the finite dimension of the representation can be formulated as eq.finpol] α q=q^-Norβδ q=q^-Norγ q=q^-N,which ensures finiteness of the orthogonal polynomials series (<cit.> eq. (14.2.1)). The conditions q^j_i≠ q^k, q^j_i≠ q^2μ-k+1 and q^j_i≠ q^2ν-k+1 for k∈{1,…,N} ensuring irreducibility of the representation and hence irreducibility of the matrices in the Leonard pair can be formulated as follows eq.irpol1] α≠ q^-k,βδ≠ q^-k,γ≠ q^-k,eq.irpol2] β≠ q^-k,α≠δ q^-k,αβ≠γ q^-k,eq.irpol3] γδ≠α q^-k,γ≠β q^-k,δ≠ q^-k.Although those conditions are usually not mentioned in the literature, they are necessary to obtain orthogonal polynomial sequence with respect to quasi-definite moment functional. If they are not satisfied, one of the coefficients A_n, B_n, C_n, or D_n may be zero for certain n. See also the orthogonality relation (14.2.2) in <cit.>The last assumption that q^2μ≠ -q^l and q^2ν≠ -q^l for all l∈{-1,0,1,…,2N+1} was made a priori to ensure that the representation is classical and therefore diagonalizable. In terms of the parameters α, β,γ,δ it means eq.classpol] αβ q≠ q^-landγδ q≠ q^-l.Looking at the formulas <ref>eq.racAn]–<ref>eq.racDn] it seems that those conditions are necessary for the Leonard pair to be well defined since otherwise there may be zero in one of the denominators. Nevertheless if αβ q=q^-l or γδ q=q^-l for l∈{-1,0,2N,2N+1} the singularity is removable. However, for l∈{1,…,2N-1} the condition is indeed necessary, which is also usually not explicitly stated in the literature. 
Note also that, for example, if we had αβ q=q^-l for l∈{1,…,2N-1}, the n-th polynomial R_n would not be of degree n and the N-tuple would not be linearly independent.Non-classical representationsFrom the formulas for the Askey–Wilson polynomials, we can now go backwards and derive the missing non-classical representations. Those shell have the same form as the classical representations except for some changes (cf. Section <ref>secc.00]).First of all I_3 has again the spectrum [μ]_q, [μ-1]_q, …, [μ-N]_q. For q^2μ=-q^l, l∈{-1,0,2N,2N+1} the eigenvalues are pairwise distinct. In the end, we can show that the representation with l=-1 is equivalent to the representation with l=2N+1 and the representation l=0 is equivalent with representation l=2N (as in Lemma <ref>L.Rekviv]). Thus, we will work now only with the cases l∈{2N,2N+1}.Take the non-classical case l=2N+1 and define I_1 by formula <ref>eq.I1xobfin] for j_1, j_2, j_3 satisfying the standard assumptions as in Theorem <ref>V.obecklas] and q^mu=± q^N+1/2. Since C_0=0, we do not have to determine the value of x_-1. Nevertheless, we have A_N≠ 0, so we have to determine x_r. By analogy with Section <ref>secc.00], we can guess that x_N+1=ax_N. So, we can substitute into <ref>eq.I1xobfin]q^-ν(q-q^-1)I_1x_N=C_Nx_N-1-((1-a)A_N+C_N-1+q^-2ν)x_N.Such form would lead to three-term recurrence((1-a)A_N+C_N-1-γδ q)R_N(μ(k))+C_NR_N-1(μ(k))=(q^-k+γδ q^k+1)R_N(μ(k)) We can suppose that this should not contradict the standard form of three-term recurrence. Notice that for αβ q=q^-2N-1 we have R_N=R_N+1, so we can rewrite the three-term recurrence as-(C_N-1-γδ q)R_N(μ(k))+C_NR_N-1(μ(k))=(q^-k+γδ q^k+1)R_N(μ(k)). From this, we can conclude that x_N+1=x_N. Finally, we can check that the formulas for classical representation also define a non-classical representation for q^2μ=-q^2N+1 if we define x_N+1=x_N.In the case l=2N we have αβ q=q^-2N and by similar reasoning we can conclude that x_N+1=x_N-1.Correspondence with Huang's classificationAs we mentioned in the introduction, a complete classification of so-called universal Askey–Wilson algebra appeared in <cit.> The generating elements are represented by two bidiagonal and one tridiagonal matrices. From those explicit formulas we have the following.For any irreducible representation of the Askey–Wilson algebra there exists μ∈ such that spectrum of I_3 is {-[μ-j]}_j=0^N. On the other hand, for any μ∈ there exists such a representation for suitable parameters. The paper also gives the criterion for diagonalizability of the generating elements.(<cit.> Lemma 4.6) Let R be an irreducible representation of . Then all eigenspaces of I_3 are one-dimensionalCorollary(<cit.> Lemma 5.1). Let R be an irreducible representation ofand denote μ such that {-[μ-j]}_j=0^N is the spectrum of I_3. Then the following are equivalent. n * I_3 is diagonalizable, * the numbers [μ], [μ-1], …, [μ-N] are pairwise distinct, * q^2μ≠-q^l, l∈{1,…,2N-1}. From these propostions we can see that the assuption that the representation is classical (i.e. 2ϵ/(q-q^-1) and ϵ/(q^1/2-q^-1/2) are not eigenvalues of I_3) is sufficient to ensure diagonalizability of I_3, but not necessary.Therefore, our classification contains all diagonalizable representations of the Askey–Wilson algebra except the “border cases”, when q^2μ∈{q^-1,q^0,q^2N,q^2N+1}, which we discussed in the previous section.ConclusionWe classified representations of Askey–Wilson algebra satisfying certain conditions allowing us to use the shift operators to construct an eigenbasis of I_3. 
Representations satisfying similar condition for another generating element can be obtained by applying suitable isomorphism of the Askey–Wilson algebra as we indicated in Section <ref>sec.ekvivrep]. Those representations are very important since they define the Leonard pair connected to the q-Racah polynomials. In such way, q-Racah polynomials (and other types of orthogonal polynomials in the Askey scheme) can be then obtained “at no cost” and this is the most valuable side-effect of solving the classification problem of Askey–Wilson algebra.AcknowledgementsThis work was supported by the Grant Agency of the Czech Technical University in Prague, grant numbers SGS15/215/OHK4/3T/14 and SGS16/239/OHK4/3T/14.References eferences
http://arxiv.org/abs/1702.08237v3
{ "authors": [ "Daniel Gromada", "Severin Pošta" ], "categories": [ "math.RT", "math.QA" ], "primary_category": "math.RT", "published": "20170227112009", "title": "Representations of Askey--Wilson algebra" }
1.05 Z R C R F N E Å𝒜 ϵ ℐ 𝒮 𝒳 ℳ op aux 1 0 ø𝕆minimize subject tomaximize ℋ̋ 𝒜 ℱ 1 I·𝐩𝒫 ⊥⊥propProposition lemLemma theoTheorem observation[prop]Observation corollary[prop]Corollary theorem[theo]Theorem definition[prop]Definition definitionlemma[prop]Definition and Lemma defn[prop]Definition lemma[lem]Lemma example[prop]Example proposition[prop]Proposition problem[prop]Problem conj[prop]Conjecture notebox.pdf,.png,.jpg Dipartimento di Fisica - Sapienza Università di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy Dipartimento di Fisica - Sapienza Università di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy Dipartimento di Fisica - Sapienza Università di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy International Institute of Physics, Federal University of Rio Grande do Norte, 59070-405 Natal, Brazil fabio.sciarrino@uniroma1.it Dipartimento di Fisica - Sapienza Università di Roma, P.le Aldo Moro 5, I-00185 Roma, ItalyBell's theorem was a cornerstone for our understanding of quantum theory, and the establishment of Bell non-locality played a crucial role in the development of quantum information. Recently, its extension to complex networks has been attracting a growing attention, but a deep characterization of quantum behaviour is still missing for this novel context. In this work we analyze quantum correlations arising in the bilocality scenario, that is a tripartite quantum network where the correlations between the parties are mediated by two independent sources of states. First, we prove that non-bilocal correlations witnessed through a Bell-state measurement in the central node of the network form a subset of those obtainable by means of a separable measurement. This leads us to derive the maximal violation of the bilocality inequality that can be achieved by arbitrary two-qubit quantum states and arbitrary projective separable measurements. We then analyze in details the relation between the violation of the bilocality inequality and the CHSH inequality. Finally, we show how our method can be extended to n-locality scenario consisting of n two-qubit quantum states distributed among n+1 nodes of a star-shaped network.Maximal violation of n-locality inequalities in a star-shaped quantum networkFabio Sciarrino December 30, 2023 ===============================================================================§ INTRODUCTIONSince its establishment in the early decades of the last century, quantum theory has been elevated to the status of the “most precisely tested and most successful theory in the history of science” <cit.>. And yet, many of its consequences have puzzled – and still do– most of the physicists confronted to it. At the heart of many of the counter-intuitive features of quantum mechanics is quantum entanglement <cit.>, nowadays a crucial resource in quantum information and computation <cit.> but that also plays a central role in the foundations of the theory. For instance, as shown by the celebrated Bell's theorem<cit.>, quantum correlations between distant parts of an entangled system can violate Bell inequalities, thus precluding its explanation by any local hidden variable (LHV) model, the phenomenon known as quantum non-locality.Given its fundamental importance and practical applications in the most varied tasks of quantum information <cit.>, not surprisingly many generalizations of Bell's theorem have been pursued over the years. 
Bell's original scenario involves two distant parties that upon receiving their shares of a joint physical system can measure one out of possible dichotomic observables. Natural generalizations of this simple scenario include more measurements per party <cit.> and sequential measurements <cit.>, more measurement outcomes <cit.>, more parties <cit.> and also stronger notions of quantum non-locality <cit.>. All these different generalizations share the common feature that the correlations between the distant parties are assumed to be mediated by a single common source of states (see, for instance, Fig. <ref>a). However, as it is often in quantum networks <cit.>, the correlations between the distant nodes is not given by a single source but by many independent sources which distribute entanglement in a non-trivial way across the whole network and generate strong correlations among its nodes (Figs. <ref>b-d). Surprisingly, in spite of its clear relevance, such networked scenario is far less explored.The simplest networked scenario is provided by entanglement swapping <cit.>, where two distant parties, Alice and Charlie, share entangled states with a central node Bob (see fig_DAGSb). Upon measuring in an entangled basis and conditioning on his outcomes, Bob can generate entanglement and non-local correlations among the two other distant parties even though they had no direct interactions. To contrast classical and quantum correlation in this scenario, it is natural to consider classical models consisting of two independent hidden variables (Figs. <ref>b), the so-called bilocality assumption <cit.>. The bilocality scenario and generalizations to networks with an increasing number n of independent sources of states (Figs. <ref>d), the so called n-locality scenario <cit.> allow for the emergence of a new kind of non-local correlations. For instance, correlations that appear classical according to usual LHV models can display non-classicality if the independence of the sources is taken into account, a result experimentally demonstrated in <cit.>. However, previous works on the topic have mostly focused on developing new tools for the derivation of inequalities characterizing such scenarios and much less attention has been given to understand what are the quantum correlations that can be achieved in such networks.That is precisely the aim of the present work. We consider in details the bilocality scenario and the bilocality inequality derived in <cit.> and characterize the non-bilocal behavior of general qubit quantum states when the parties perform different kinds of projective measurements. First of all we show that the correlations arising in an entanglement swapping scenario, i.e. when Bob performs a Bell-state measurement (BSM), form a strict subclass of those correlations which can be achieved by performing separable measurements in all stations. Focusing on this wider class of correlations, we derive a theorem characterizing the maximal violation of the bilocality inequality<cit.> that can be achieved from a general two-qubit quantum states shared among the parties. This leads us to obtain a characterization for the violation of the bilocality inequality in relation to the violation of the CHSH inequality <cit.>. 
Finally we show how our maximization method can be extended to the star network case <cit.>, a n-partite generalization of the bilocality scenario, deriving thus the maximum violation of the n-locality inequality that can be extracted from this network.§ SCENARIOIn the following we will mostly consider the bilocality scenario, which classical description in terms of directed acyclic graphs (DAGs) is shown in fig_DAGS-b. It consists of three spatially separated parties (Alice, Bob and Charlie) whose correlations are mediated by two independent sources of states. In the quantum case, Bob shares two pairs of entangled particles, one with Alice and another with Charlie. Upon receiving their particles Alice, Bob and Charlie perform measurements labelled by the random variables X, Y and Z obtaining, respectively, the measurement outcomes A, B and C. The difference between Bob and the other parties is the fact that the first has in his possession two particles and thus can perform a larger set of measurements including, in particular, measurements in an entangled basis.Any probability distribution compatible with the bilocality assumption (i.e. independence of the sources) can be decomposed asp(a,b,c|x,y,z)= ∫dλ_1 dλ_2 p(λ_1) p(λ_2) p(a|x,λ_1)p(b|y, λ_1 , λ_2)p(c|z, λ_2).In particular, if we consider that each party measures two possible dichotomic observables (x,y,z,a,b,c=0,1), it follows that any bilocal hidden variable (BLHV) model described by (<ref>) must fulfill the bilocality inequalityℬ=√(|I|)+√(|J|)≤ 1,with I=14∑_x,z=0,1 ⟨A_xB_0C_z ⟩, J=14∑_x,z=0,1 (-1)^x+z ⟨A_xB_1C_z ⟩, and where ⟨A_xB_yC_z ⟩= ∑_a,b,c=0,1 (-1)^a+b+c p(a,b,c|x,y,z).As shown in <cit.>, if we impose the same causal structure to quantum mechanics (e.g. in an entanglement swapping experiment) we can nonetheless violate the bilocality inequality (even though the data might be compatible with LHV models), thus showing the existence of a new form of quantum non-locality called quantum non-bilocality.To that aim let us consider the entanglement swapping scenario with an overall quantum state |ψ^-⟩_AB⊗|ψ^-⟩_BC, with |ψ^-⟩=(1/√(2))(|01⟩-|10⟩). We can choose the measurements operators for the different parties in the following way. Stations A and C perform single qubit measurements defined by A_x=σ_z+(-1)^xσ_x√(2),C_z=σ_z+(-1)^zσ_x√(2). Station B, instead, performs a complete BSM, assigning to the two bits b_0b_1 the values00for|ϕ^+⟩,01for|ϕ^-⟩, 10for|ψ^+⟩,11for|ψ^-⟩. The binary measurement B_y is then defined such that it returns (-1)^b_y, with respect to the value of y=0,1. This leads to ⟨A_xB_yC_z ⟩= ∑_a,b_0,b_1,c=0,1 (-1)^a+b_y+c p(a,b_0,b_1,c|x,z)= ∑_a,b_y,c=0,1 (-1)^a+b_y+c p(a,b_y,c|x,z) ≡∑_a,b,c=0,1 (-1)^a+b+c p(a,b,c|x,y,z), where, in the last steps, we made explicit use of the marginalization of probability p(a,b_0,b_1,c|x,z) over b_k≠ y.With these state and measurements, the quantum mechanical correlations achieve a value ℬ=√(2)>1, which violates the bilocality inequality and thus proves quantum non-bilocality.§ RESULTS §.§ Non-bilocal correlations with separable measurements As reproduced above, in an entanglement swapping scenario QM can exhibit correlations which cannot be reproduced by any BLHV model. In turn, it was recently proved <cit.> that an equivalent form of the bilocality inequality ((<ref>)), can be violated by QM in the case where all parties only perform single qubit measurements (i.e. σ_x,σ_z, σ_y and linear combinations). 
Here we will prove that, given the bilocality inequality ((<ref>)), the non-bilocal correlations arising in an entanglement swapping scenario are a strict subclass of those obtainable by means of separable measurements.The core of the bilocality parameter ℬ is the evaluation of the expected value A_xB_yC_z ((<ref>)), that in the quantum case is given by <A_xB_yC_z>=[(A_x⊗B_y ⊗C_z)( ρ_AB⊗ρ_BC)]. For the entanglement swapping scenario we can summarize the measurements in stations A and C by A_x=(1-x) A_0+x A_1 x=0,1, C_z=(1-z) C_0+z C_1z=0,1, where A_x and C_z are general single qubit projective measurements with eigenvalues 1 and -1. When dealing with station B, it is suitable to consider its operatorial definition which is implicit in (<ref>). Indeed we can consider that (-1)^b_y is the outcome of our measurement, leading to values shown in tab:B_swap_values.The quantum mechanical description of the operator B_y (in an entanglement swapping scenario) is thus given by B_y= |ϕ^+⟩⟨ϕ^+|+ (1-2y)|ϕ^-⟩⟨ϕ^-| +(2y-1)|ψ^+⟩⟨ψ^+| -|ψ^-⟩⟨ψ^-| which relates each value of y=0, 1 with its correct set of outcomes. This leads to the following theorem.Given the general set of separable measurements B_y=(1-y)∑_ijλ_ij σ_i⊗σ_j+y∑_klδ_kl σ_k⊗σ_l, QM predictions for the bilocality parameter ℬ which arise in an entanglement swapping scenario (where Bob performs the measurement described in (<ref>)) are completely equivalent to those obtainable by performing a strict subclass of (<ref>), i.e. {ℬ}_B.S.M. ⊂{ℬ}_SEP.M..Let us write the Bell basis of a two qubit Hilbert space in terms of the computational basis (|00⟩, |01⟩, |10⟩, |11⟩). From (<ref>), we obtain B_y= |ϕ^+⟩ ⟨ϕ^+|+ (1-2y)|ϕ^-⟩ ⟨ϕ^-|+(2y-1)|ψ^+⟩ ⟨ψ^+| -|ψ^-⟩ ⟨ψ^-| = (1-y) (|00⟩ ⟨00|-|01⟩ ⟨01|-|10⟩ ⟨10|+|11⟩ ⟨11|)+y (|00⟩ ⟨11|+|01⟩ ⟨10|+|10⟩ ⟨01|+|11⟩ ⟨00|)= (1-y) σ_z⊗σ_z+y σ_x⊗σ_x. This shows that the entanglement swapping scenario is equivalent to the one where station B only performs the two separable measurements B_0=σ_z⊗σ_z and B_1=σ_x⊗σ_x,which form a strict subclass of the general set of separable measurements given by (<ref>). Moreover if we consider a rotated Bell basis, then we obtain B_y'=U^†_AB⊗U^†_BC B_y U_AB⊗U_BC=(1-y) U^†_AB σ_x U_AB ⊗U^†_BCσ_x U_BC+yU^†_AB σ_z U_AB ⊗U^†_BCσ_z U_BC=(1-y) a⃗·σ⃗⊗c⃗·σ⃗+ya⃗'·σ⃗ ⊗c⃗'·σ⃗where σ⃗=(σ_x,σ_y,σ_z) and a⃗, a⃗' (c⃗, c⃗') are orthogonal unitary vectors. Due to the constraints a⃗⊥a⃗' and c⃗⊥c⃗', this case still represents a strict subset of (<ref>).As it turns out, this theorem has strong implications in our understanding of the non-bilocal behavior of QM. Indeed, it shows how the entanglement swapping scenario is not capable of exploring the whole set of quantum non-bilocal correlations, since it is totally equivalent to a subclass of Bob's separable measurements. As we will show next, a better characterization of quantum correlations within the bilocality context must thus in principle take into account the general form of Bob's separable measurements, especially when dealing with different types of quantum states. §.§ Non-bilocality maximization criterionWe will now explore the maximization of the bilocality inequality considering that Bob performs the separable measurements described by (<ref>). 
It is convenient to consider station B as a single station composed of two substations, B^A and B^C, which perform single qubit measurements on one of the qubits belonging to the entangled state shared, respectively, with station A or C (see fig_DAGS-c). Let A perform a general single qubit measurement, and similarly for B^A, B^C and C. We can define these measurements as
Station A ⟶ a⃗_x·σ⃗,  Station B ⟶ b⃗^A_y·σ⃗ ⊗ b⃗^C_y·σ⃗,  Station C ⟶ c⃗_z·σ⃗,
where σ⃗ = (σ_x, σ_y, σ_z). Let us now define a general 2-qubit quantum state density matrix as
ρ = (1/4)(𝕀⊗𝕀 + r⃗·σ⃗⊗𝕀 + 𝕀⊗s⃗·σ⃗ + ∑^3_n,m=1 t_nm σ_n ⊗ σ_m).
The coefficients t_nm can be used to define a real matrix T_ρ, which leads to the following result:

Given the set of general separable measurements described in (<ref>), and with the general quantum state ρ_AB ⊗ ρ_BC defined according to (<ref>), the bilocality parameter ℬ is given by
ℬ = (1/2)√(|(a⃗_0+a⃗_1)·T_ρ_AB b⃗^A_0| |b⃗^C_0·T_ρ_BC (c⃗_0+c⃗_1)|) + (1/2)√(|(a⃗_0-a⃗_1)·T_ρ_AB b⃗^A_1| |b⃗^C_1·T_ρ_BC (c⃗_0-c⃗_1)|).

Let us consider two operators O_i of the form O_i = v⃗_i·σ⃗ and a two qubit quantum state ρ described by (<ref>). We can write
⟨O_1⊗O_2⟩_ρ = Tr[(O_1⊗O_2)ρ] = Tr[∑_j,k=1,2,3 (v_1^j v_2^k σ_j ⊗ σ_k) ρ] = ∑_j,k=1,2,3 v_1^j v_2^k t_jk = v⃗_1·(T_ρ v⃗_2),
where we made use of the properties of the Pauli matrices σ_i. Given the set of separable measurements described in (<ref>) and the definitions of I and J (shown in (<ref>)), the proof follows from a direct application of (<ref>) to the quantum mechanical expectation value:
⟨A_x ⊗ B^A_y ⊗ B^C_y ⊗ C_z⟩_ρ_AB⊗ρ_BC = ⟨A_x ⊗ B^A_y⟩_ρ_AB ⟨B^C_y ⊗ C_z⟩_ρ_BC.

Next we proceed with the maximization of the parameter ℬ over all possible measurement choices, that is, the maximum violation of bilocality we can achieve with a given set of quantum states. To that aim, we introduce the following Lemma.

Given a square matrix M, and defining the two symmetric matrices ℳ_1 = M^T M and ℳ_2 = M M^T, each non-null eigenvalue of ℳ_1 is also an eigenvalue of ℳ_2, and vice versa.

Let λ be an eigenvalue of ℳ_1: M^T M v⃗ = λ v⃗. If λ ≠ 0 we must have M v⃗ ≠ 0⃗. We can then apply the operator M from the left, obtaining M M^T (M v⃗) = λ (M v⃗), which shows that M v⃗ is an eigenvector of ℳ_2 with eigenvalue λ. The converse is proved analogously.

We can now state the main result of this section.

Given the set of general separable measurements described in (<ref>), the maximum bilocality parameter that can be extracted from a quantum state ρ_AB ⊗ ρ_BC can be written as
ℬ_max = √(√(t^A_1 t^C_1) + √(t^A_2 t^C_2)),
where t^A_1 and t^A_2 (t^C_1 and t^C_2) are the two largest (non-negative) eigenvalues of the matrix T_ρ_AB^T T_ρ_AB (T_ρ_BC^T T_ρ_BC), with t^A_1 ≥ t^A_2 and t^C_1 ≥ t^C_2.

We will prove theo:B_maximization_separable following a scheme similar to the one used by Horodecki <cit.> for the CHSH inequality. Let us introduce the two pairs of mutually orthogonal vectors
(a⃗_0+a⃗_1) = 2cosα n⃗_A, (a⃗_0-a⃗_1) = 2sinα n⃗'_A,
(c⃗_0+c⃗_1) = 2cosγ n⃗_C, (c⃗_0-c⃗_1) = 2sinγ n⃗'_C,
and let us apply (<ref>) to (<ref>):
ℬ_max = max(√(|(n⃗_A·T_ρ_AB b⃗^A_0)(b⃗^C_0·T_ρ_BC n⃗_C) cosα cosγ|) + √(|(n⃗'_A·T_ρ_AB b⃗^A_1)(b⃗^C_1·T_ρ_BC n⃗'_C) sinα sinγ|))
= max(√(|(b⃗^A_0·T^T_ρ_AB n⃗_A)(b⃗^C_0·T_ρ_BC n⃗_C) cosα cosγ|) + √(|(b⃗^A_1·T^T_ρ_AB n⃗'_A)(b⃗^C_1·T_ρ_BC n⃗'_C) sinα sinγ|)),
where the maximization is over the variables n⃗_A, n⃗'_A, b⃗^A_0, b⃗^A_1, n⃗_C, n⃗'_C, b⃗^C_0, b⃗^C_1, α and γ. We can choose b⃗^A_0, b⃗^A_1, b⃗^C_0, and b⃗^C_1 so that they maximize the scalar products.
Defining ||M v⃗||^2 = M v⃗ · M v⃗ = v⃗ · M^T M v⃗, and remembering that b⃗^A_0, b⃗^A_1, b⃗^C_0, and b⃗^C_1 are unit vectors, we obtain
ℬ_max = max(√(||T^T_ρ_AB n⃗_A|| ||T_ρ_BC n⃗_C|| |cosα cosγ|) + √(||T^T_ρ_AB n⃗'_A|| ||T_ρ_BC n⃗'_C|| |sinα sinγ|)).
Next we have to choose the optimal angles α and γ. This leads to the set of equations
∂ℬ(α,γ)/∂α = (1/2)√(||T^T_ρ_AB n⃗'_A|| ||T_ρ_BC n⃗'_C|| |sin(α)sin(γ)|) cot(α) - (1/2)√(||T^T_ρ_AB n⃗_A|| ||T_ρ_BC n⃗_C|| |cos(α)cos(γ)|) tan(α) = 0,
∂ℬ(α,γ)/∂γ = (1/2)√(||T^T_ρ_AB n⃗'_A|| ||T_ρ_BC n⃗'_C|| |sin(α)sin(γ)|) cot(γ) - (1/2)√(||T^T_ρ_AB n⃗_A|| ||T_ρ_BC n⃗_C|| |cos(α)cos(γ)|) tan(γ) = 0.
This system of equations admits only solutions constrained by
tan(α)^2 = tan(γ)^2 ↔ γ = ±α + mπ, m ∈ ℤ,
leading to
ℬ_max = max(|cosα|√(||T^T_ρ_AB n⃗_A|| ||T_ρ_BC n⃗_C||) + |sinα|√(||T^T_ρ_AB n⃗'_A|| ||T_ρ_BC n⃗'_C||)) = max(√(||T^T_ρ_AB n⃗_A|| ||T_ρ_BC n⃗_C|| + ||T^T_ρ_AB n⃗'_A|| ||T_ρ_BC n⃗'_C||)).
Next, we must take into account the constraints n⃗_A ⊥ n⃗'_A and n⃗_C ⊥ n⃗'_C. Since these two pairs of vectors are, however, independent, we can proceed with a first maximization which deals only with the two sets of variables n⃗_A and n⃗'_A. Since T_ρ_AB T^T_ρ_AB is a symmetric matrix, it is diagonalizable. Let us call λ_1, λ_2 and λ_3 its eigenvalues, and let us write n⃗_A and n⃗'_A in an eigenvector basis. If we define k_1 = ||T_ρ_BC n⃗_C|| > 0 and k_2 = ||T_ρ_BC n⃗'_C|| > 0, our problem can be written in terms of Lagrange multipliers related to the maximization of a function f given the constraints g_i:
f(n⃗_A, n⃗'_A) = k_1 √(∑_i=1,2,3 λ_i (n^i_A)^2) + k_2 √(∑_i=1,2,3 λ_i (n'^i_A)^2),
g_1(n⃗_A) = n⃗_A·n⃗_A - 1, g_2(n⃗'_A) = n⃗'_A·n⃗'_A - 1, g_3(n⃗_A, n⃗'_A) = n⃗_A·n⃗'_A,
where we used the fact that finding the values that maximize √(|f(x)|) is equivalent to finding the values that maximize |f(x)|. Let us now introduce the scaled vectors η⃗_A = k_1 n⃗_A and η⃗'_A = k_2 n⃗'_A. We obtain
f(η⃗_A, η⃗'_A) = √(∑_i=1,2,3 λ_i (η^i_A)^2) + √(∑_i=1,2,3 λ_i (η'^i_A)^2),
g_1(η⃗_A) = η⃗_A·η⃗_A - (k_1)^2, g_2(η⃗'_A) = η⃗'_A·η⃗'_A - (k_2)^2, g_3(η⃗_A, η⃗'_A) = η⃗_A·η⃗'_A,
whose solution is given by vectors with two null components out of three. If we order λ_1 ≥ λ_2 ≥ λ_3 and if k_1 > k_2, the solution related to the maximal value is then given by
f_max = k_1 √(λ_1) + k_2 √(λ_2),
which leads to
ℬ_max = max_n⃗_C, n⃗'_C(√(||T_ρ_BC n⃗_C|| √(t^A_1) + ||T_ρ_BC n⃗'_C|| √(t^A_2))),
where we made use of Lemma <ref>. The maximization over the last two variables leads to an analogous Lagrange multiplier problem with similar solutions, thus proving the theorem.

This theorem generalizes the results of <cit.> (which dealt with some particular classes of quantum states in the entanglement swapping scenario) to the more general case of any quantum state in the separable measurements scenario (which, in a bilocality context, includes the correlations obtained through entanglement swapping). It represents an extension of the Horodecki criterion <cit.> to the bilocality scenario, taking into account the general class of separable measurements which can be performed in station B. Our result thus shows that, as far as the optimal violations of the bilocality inequality provided by given quantum states are concerned, separable measurements and a BSM (in the right basis) are fully equivalent.
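The criterion is straightforward to evaluate numerically. The sketch below (our illustration; the function names are ours) computes T_ρ, the two largest eigenvalues of T_ρ^T T_ρ, and ℬ_max for arbitrary two-qubit density matrices, and recovers ℬ_max = √(2) for two singlets, matching the entanglement swapping value derived in the Scenario section:

```python
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

def correlation_matrix(rho):
    """T_rho with entries t_nm = Tr[rho (sigma_n x sigma_m)]."""
    return np.array([[np.real(np.trace(rho @ np.kron(sn, sm)))
                      for sm in PAULI] for sn in PAULI])

def two_largest_t(rho):
    """Two largest eigenvalues of T^T T (all eigenvalues are non-negative)."""
    T = correlation_matrix(rho)
    t = np.sort(np.linalg.eigvalsh(T.T @ T))[::-1]
    return t[0], t[1]

def B_max(rho_AB, rho_BC):
    """Optimal bilocality parameter: sqrt(sqrt(t1A t1C) + sqrt(t2A t2C))."""
    t1A, t2A = two_largest_t(rho_AB)
    t1C, t2C = two_largest_t(rho_BC)
    return np.sqrt(np.sqrt(t1A * t1C) + np.sqrt(t2A * t2C))

# Example: two singlets give B_max = sqrt(2)
psi_m = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_singlet = np.outer(psi_m, psi_m.conj())
print(B_max(rho_singlet, rho_singlet))  # ~1.41421
```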
§.§ The relation between non-bilocality and the non-locality of the sources

We will now characterize quantum non-bilocal behaviour with respect to the usual non-locality of the states shared between A, B and B, C. Let us start from (<ref>) and separately consider the Bell non-locality of the states ρ_AB and ρ_BC. We can quantify it by evaluating the greatest CHSH inequality violation that can be obtained with these states. Let us define the CHSH inequality as
𝒮^UV ≡ (1/2)|⟨U_0 V_0⟩ + ⟨U_0 V_1⟩ + ⟨U_1 V_0⟩ - ⟨U_1 V_1⟩| ≤ 1.
If we apply the criterion of Horodecki et al. <cit.>, we obtain
𝒮^AB_max = √(t^A_1 + t^A_2),  𝒮^BC_max = √(t^C_1 + t^C_2),
where t^A_1, t^A_2, t^C_1 and t^C_2 are defined according to (<ref>). From a direct comparison of (<ref>) and (<ref>) we can write
𝒮^AB_max ≤ 1 and 𝒮^BC_max ≤ 1 ⟹ ℬ_max ≤ 1.
Indeed, applying the Cauchy-Schwarz inequality we obtain
ℬ^2_max ≤ 𝒮^AB_max 𝒮^BC_max ≤ 1.
This result shows that if the two sources cannot violate the CHSH inequality then they will also not violate the bilocality inequality. Thus, in this sense, if our interest is to check the non-classical behaviour of sources of states, it suffices to check for CHSH violations (at least if Bob performs a BSM or separable measurements). Nevertheless, we highlight that this does not mean that the bilocality inequality is useless, since there are probability distributions that violate the bilocality inequality but are nonetheless local according to a LHV model, and thus cannot violate any usual Bell inequality.

Next we consider the reverse case: is it possible to have quantum states that can violate the CHSH inequality but cannot violate the bilocality inequality? That turns out to be the case. To illustrate this phenomenon, we start by considering two Werner states of the form ρ = v|ψ^-⟩⟨ψ^-| + (1-v)𝕀/4. In this case, indeed, in order to have a non-local behaviour between A and B (B and C) we must have v_AB > 1/√(2) (v_BC > 1/√(2)), while it is sufficient to have √(v_AB v_BC) > 1/√(2) in order to witness non-bilocality. This example shows that, on the one hand, it might be impossible to violate the bilocality inequality although one of ρ_AB or ρ_BC is Bell non-local (for instance v_AB = 1 and v_BC = 0). It also shows that, when one witnesses non-locality for only one of the two states, it can at the same time be possible to have non-bilocality when considering the entire network (for instance v_AB = 1 and 1/2 < v_BC < 1/√(2)).

Another possibility is described by the following Proposition.

Given a tripartite scenario, there exist ρ_AB and ρ_BC such that 𝒮^AB_max > 1, 𝒮^BC_max > 1 and ℬ_max ≤ 1.

We will prove this point with an example. Let us take
ρ_AB = (3/5)|ψ^+⟩⟨ψ^+| + (2/5)|ϕ^+⟩⟨ϕ^+| = ( [ 0.2 0 0 0.2; 0 0.3 0.3 0; 0 0.3 0.3 0; 0.2 0 0 0.2 ]),
ρ_BC = ρ(v = 7/10, λ = 1/3) = ( [ 0.05 0 0 0; 0 0.45 -0.35 0; 0 -0.35 0.45 0; 0 0 0 0.05 ]),
where we defined ρ(v,λ) as
ρ(v,λ) = v|ψ^-⟩⟨ψ^-| + (1-v)[λ(|ψ^-⟩⟨ψ^-| + |ψ^+⟩⟨ψ^+|)/2 + (1-λ)𝕀/4].
For these two quantum states one can check that
t^A_1 = 1, t^A_2 = 0.04, t^C_1 = 0.64, t^C_2 = 0.49,
which leads to
𝒮^AB_max ≃ 1.02, 𝒮^BC_max ≃ 1.06, ℬ_max ≃ 0.97.
This shows how it is possible to have non-local quantum states which nonetheless cannot violate the bilocality inequality (with separable measurements).
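The numbers in this example are easy to verify. A self-contained numerical sketch (ours; helper names are illustrative) that reconstructs ρ_AB and ρ_BC and evaluates the t parameters, 𝒮_max and ℬ_max:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def t_pair(rho):
    # Two largest eigenvalues of T^T T, with t_nm = Tr[rho (sigma_n x sigma_m)]
    P = [sx, sy, sz]
    T = np.array([[np.real(np.trace(rho @ np.kron(a, b))) for b in P] for a in P])
    t = np.sort(np.linalg.eigvalsh(T.T @ T))[::-1]
    return t[0], t[1]

# Bell-state projectors (input vectors are unnormalized, norm^2 = 2)
bell = lambda v: np.outer(v, v.conj()) / 2
P_psip = bell(np.array([0, 1, 1, 0], dtype=complex))
P_psim = bell(np.array([0, 1, -1, 0], dtype=complex))
P_phip = bell(np.array([1, 0, 0, 1], dtype=complex))

rho_AB = 0.6 * P_psip + 0.4 * P_phip
v, lam = 0.7, 1.0 / 3.0
rho_BC = (v * P_psim
          + (1 - v) * (lam * (P_psim + P_psip) / 2 + (1 - lam) * np.eye(4) / 4))

t1A, t2A = t_pair(rho_AB)   # (1.0, 0.04)
t1C, t2C = t_pair(rho_BC)   # (0.64, 0.49)
print(np.sqrt(t1A + t2A), np.sqrt(t1C + t2C))            # ~1.02, ~1.06
print(np.sqrt(np.sqrt(t1A * t1C) + np.sqrt(t2A * t2C)))  # ~0.97
```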
All these statements provide a well-defined picture of the relation between the CHSH inequality and the bilocality inequality with respect to the quantum states ρ_AB ⊗ ρ_BC. We have indeed derived all the possible cases of quantum non-local correlations that may be seen between pairs of nodes, or in the whole network (according to the CHSH and bilocality inequalities). This characterization is shown in fig_Exp2_New_Loc-Biloc in terms of a Venn diagram.

We finally notice that if A and B share a maximally entangled state while B and C share a generic quantum state, then it is easier to obtain a bilocality violation in the tripartite network than a CHSH violation between the nodes B^C and C. Indeed, it is possible to derive
ℬ_max(|Φ^+⟩⟨Φ^+| ⊗ ρ_BC) = √(√(t^C_1) + √(t^C_2)) ≥ √(t^C_1 + t^C_2) = 𝒮^BC_max,
where we made use of the following Lemma.

Given the parameters t^A_1, t^A_2, t^C_1 and t^C_2 defined in (<ref>), it holds that 0 ≤ t^A_1, t^A_2, t^C_1, t^C_2 ≤ 1.

The proof is divided into two main points.

1) ∀ρ, ∃ρ' = U^†ρU such that T_ρ' is diagonal. As discussed in <cit.>, if we apply a local unitary U = U_1 ⊗ U_2 to the initial quantum state ρ, the matrix T_ρ transforms according to T_ρ ⟶ U_1 T_ρ U_2^T. By the singular value decomposition theorem, it is always possible to choose U_1 and U_2 such that U_1 T_ρ U_2^T is diagonal, thus proving point 1. It is important to stress that we can always rotate our Hilbert space so that ρ → U^†ρU, and we can therefore take ρ' without loss of generality.

2) If T_ρ is diagonal, then the eigenvalues of T_ρ^T T_ρ are less than or equal to 1. It was shown in <cit.> that, for every quantum state ρ, we have |t_nm| ≤ 1, t_nm ∈ ℝ, regardless of the basis chosen for our Hilbert space. If T_ρ is diagonal, then T_ρ^T T_ρ = T_ρ^2 and its eigenvalues t_i can be written as t_i = t_ii^2 ≤ 1. Given the definitions of t^A_1 and t^A_2 (t^C_1 and t^C_2) in (<ref>), the lemma is proved.

§.§ Extension to the star network scenario

We now generalize the results of theo:B_maximization_separable to the case of an n-partite star network. This network is the natural extension of the bilocality scenario: it is composed of n sources, each sharing a quantum state between one of the n stations A_i and a central node B (see fig_DAGS-d). The bilocality scenario corresponds to the particular case n = 2. The classical description of correlations in this scenario is characterized by the probability decomposition
p({a_i}_i=1,n, b | {x_i}_i=1,n, y) = ∫(∏_i=1^n dλ_i p(λ_i) p(a_i|x_i,λ_i)) p(b|y,{λ_i}_i=1,n).
As shown in <cit.>, assuming binary inputs and outputs in all the stations, the following n-locality inequality holds:
𝒩_star = |I|^1/n + |J|^1/n ≤ 1,
where
I = (1/2^n) ∑_x_1...x_n ⟨A^1_x_1 ... A^n_x_n B_0⟩,  J = (1/2^n) ∑_x_1...x_n (-1)^∑_i x_i ⟨A^1_x_1 ... A^n_x_n B_1⟩,
⟨A^1_x_1 ... A^n_x_n B_y⟩ = ∑_a_1...a_n,b (-1)^(b + ∑_i a_i) p({a_i}_i=1,n, b | {x_i}_i=1,n, y).
We will now derive a theorem giving the maximal value of the parameter 𝒩_star that can be obtained with separable measurements at the central node, for arbitrary bipartite states shared between the central node and the n parties.

Given single qubit projective measurements, and with the generic quantum state ρ_A_1B ⊗ ... ⊗ ρ_A_nB defined according to (<ref>), the maximal value of 𝒩_star is given by
𝒩_star^max = √((∏_i=1^n t^A_i_1)^1/n + (∏_i=1^n t^A_i_2)^1/n),
where t^A_i_1 and t^A_i_2 are the two largest (non-negative) eigenvalues of the matrix T_ρ_A_iB^T T_ρ_A_iB, with t^A_i_1 ≥ t^A_i_2.

In our single qubit measurement scheme, the operator B can be written as B_y = ⊗_i=1^n B^i_y = ⊗_i=1^n b⃗^i_y·σ⃗.
As pointed out in <cit.>, this allows us to write
𝒩_star = |∏_i=1^n (1/2)(⟨A^i_0 B^i_0⟩ + ⟨A^i_1 B^i_0⟩)|^1/n + |∏_i=1^n (1/2)(⟨A^i_0 B^i_1⟩ - ⟨A^i_1 B^i_1⟩)|^1/n,
which leads to
𝒩_star = |∏_i=1^n (1/2)(a⃗^i_0 + a⃗^i_1)·T_ρ_A_iB b⃗^i_0|^1/n + |∏_i=1^n (1/2)(a⃗^i_0 - a⃗^i_1)·T_ρ_A_iB b⃗^i_1|^1/n.
Introducing the pairs of mutually orthogonal vectors
(a⃗^i_0 + a⃗^i_1) = 2cosα_i n⃗_i, (a⃗^i_0 - a⃗^i_1) = 2sinα_i n⃗'_i,
allows us to write
𝒩_star = |∏_i=1^n cosα_i n⃗_i·T_ρ_A_iB b⃗^i_0|^1/n + |∏_i=1^n sinα_i n⃗'_i·T_ρ_A_iB b⃗^i_1|^1/n.
We can choose the parameters b⃗^i_y so that they maximize the scalar products. We obtain
𝒩^max_star = max(|∏_i=1^n cosα_i ||T^T_ρ_A_iB n⃗_i|||^1/n + |∏_i=1^n sinα_i ||T^T_ρ_A_iB n⃗'_i|||^1/n).
We can now proceed to the maximization over the parameters α_i. Let us define the function
K(α_1,...,α_n) = |λ_1 ∏_i=1^n cosα_i|^1/n + |λ_2 ∏_i=1^n sinα_i|^1/n.
We can write
∂K(α_1,...,α_n)/∂α_j = (1/n)|λ_2 ∏_i=1^n sinα_i|^1/n cotα_j - (1/n)|λ_1 ∏_i=1^n cosα_i|^1/n tanα_j = 0,
which, similarly to (<ref>), admits only solutions constrained by
tan(α_j)^2 = tan(α_k)^2 ↔ α_j = ±α_k + mπ, m ∈ ℤ, ∀ j,k.
This leads to
K(α_1,...,α_n)_max = max_α(|λ_1^1/n cosα| + |λ_2^1/n sinα|) = √(λ_1^2/n + λ_2^2/n),
which allows us to write
𝒩^max_star = max √(|∏_i=1^n ||T^T_ρ_A_iB n⃗_i|||^2/n + |∏_i=1^n ||T^T_ρ_A_iB n⃗'_i|||^2/n).
Let us now define k_1 = |∏_i=2^n ||T^T_ρ_A_iB n⃗_i|||, k_2 = |∏_i=2^n ||T^T_ρ_A_iB n⃗'_i|||; we then have
𝒩^max_star = max √(k_1^2/n ||T^T_ρ_A_1B n⃗_1||^2/n + k_2^2/n ||T^T_ρ_A_1B n⃗'_1||^2/n).
Labelling λ_1, λ_2 and λ_3 the eigenvalues of T_ρ_A_1B T^T_ρ_A_1B (which is real and symmetric) and writing n⃗_1 and n⃗'_1 in an eigenvector basis, we obtain the Lagrange multiplier problem related to the maximization of a function f given the constraints g_i:
f(n⃗_1, n⃗'_1) = √((k_1^2 ∑_i=1,2,3 λ_i (n^i_1)^2)^2/n) + √((k_2^2 ∑_i=1,2,3 λ_i (n'^i_1)^2)^2/n),
g_1(n⃗_1) = n⃗_1·n⃗_1 - 1, g_2(n⃗'_1) = n⃗'_1·n⃗'_1 - 1, g_3(n⃗_1, n⃗'_1) = n⃗_1·n⃗'_1,
where we used the fact that the values which maximize |f(x)| also maximize √(|f(x)|). This Lagrange multiplier problem can be treated similarly to (<ref>), giving the same results. If k_1 > k_2, we obtain
f_max = (k_1 √(λ_1))^2/n + (k_2 √(λ_2))^2/n,
which leads to
𝒩_star^max = max(√((t^A_1_1)^1/n (∏_i=2^n ||T^T_ρ_A_iB n⃗_i||)^2/n + (t^A_1_2)^1/n (∏_i=2^n ||T^T_ρ_A_iB n⃗'_i||)^2/n)).
The proof is concluded by applying this procedure iteratively.

We notice that the bilocality scenario can be seen as the particular case (n = 2) of a star network, with A_2 ≡ C and x_2 ≡ z. Moreover, we emphasize that (<ref>) gives the same result that would be obtained by performing an optimized CHSH test on a 2-qubit state whose t_1 and t_2 are given by the geometric means of the parameters t^A_i_1 and t^A_i_2.
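As in the bilocality case, the star-network criterion reduces to one eigenvalue computation per source. A sketch (ours; it reuses the same T-matrix helper as in the earlier illustration) of 𝒩_star^max for a list of two-qubit states ρ_A_iB:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def t_pair(rho):
    """Two largest eigenvalues of T^T T for a two-qubit state rho."""
    P = [sx, sy, sz]
    T = np.array([[np.real(np.trace(rho @ np.kron(a, b))) for b in P] for a in P])
    t = np.sort(np.linalg.eigvalsh(T.T @ T))[::-1]
    return t[0], t[1]

def N_star_max(rhos):
    """Optimal n-locality parameter for the states rho_{A_i B}, i = 1..n."""
    n = len(rhos)
    t1s, t2s = zip(*(t_pair(r) for r in rhos))
    return np.sqrt(np.prod(t1s) ** (1.0 / n) + np.prod(t2s) ** (1.0 / n))

# Example: n singlets reproduce the bilocality value sqrt(2) for every n
psi_m = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
singlet = np.outer(psi_m, psi_m.conj())
for n in (2, 3, 5):
    print(n, N_star_max([singlet] * n))  # all ~1.41421
```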
§ CONCLUSIONS

Generalizations of Bell's theorem to complex networks offer new theoretical and experimental ground for further understanding quantum correlations and their practical applications in information processing. As in usual Bell scenarios, understanding the set of achievable quantum correlations, and in particular the optimal quantum violations of Bell inequalities, is of primary importance. In this work we have taken a step forward in this direction, deriving the optimal violation of the bilocality inequality proposed in <cit.> and generalized in <cit.> to the case of a star-shaped network with n independent sources. Considering that the central node in the network performs arbitrary projective separable measurements and that the other parties perform projective measurements, we have obtained the optimal value for the violation of the bilocality and n-locality inequalities. Our results can be understood as the generalization to complex networks of the Horodecki criterion <cit.> valid for the CHSH inequality <cit.>. We have analyzed in detail the relation between the bilocality and CHSH inequalities, and in particular showed that if neither of the two quantum states can violate the CHSH inequality, then the bilocality inequality also cannot be violated, thus precluding, in this sense, its use as a way to detect quantum correlations beyond the CHSH case. Moreover, we have shown that some quantum states can separately exhibit Bell non-local correlations, but nevertheless cannot violate the bilocality inequality when considered as a whole in the network, thus proving that not all non-local states can be used to witness non-bilocal correlations (at least according to this specific inequality).

However, all these conclusions are based on the assumption that the central node in the network performs separable measurements (which in this scenario include measurements in the Bell basis as a particular case). This immediately opens a series of interesting questions for future research. Can we achieve better violations by employing more general measurements at the central station, for instance entangled measurements in a different basis, or non-maximally entangled or non-projective measurements? Related to that, it would be highly relevant to derive new classes of network inequalities <cit.>. One of the goals of generalizing Bell's theorem to complex networks is precisely the idea that, since the corresponding classical models are more restrictive, it is reasonable to expect that we can find new Bell inequalities allowing us to probe the non-classical character of correlations that are local according to usual LHV models. Could separable measurements or measurements in the Bell basis allow us to detect such kinds of correlations if new bilocality or n-locality inequalities are considered? What would happen if we considered general POVM measurements in all our stations? Could we witness a whole new regime of quantum states which at the moment admit an n-local classical description? Finally, one can wonder whether quantum states of higher dimensions (qudits) would allow for higher violations of the n-locality inequalities.

Note added: During the preparation of this manuscript, which contains results of a master thesis <cit.>, we became aware of an independent work <cit.> preprinted in February 2017.

This work was supported by the ERC-Starting Grant 3D-QUEST (3D-Quantum Integrated Optical Simulation; grant agreement no. 307783): http://www.3dquest.eu, and by the Brazilian ministries MEC and MCTIC. GC is supported by Becas Chile and Conicyt.

References

D. Kleppner and R. Jackiw, Science 289, 893 (2000).
R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Rev. Mod. Phys. 81, 865 (2009).
M. A. Nielsen and I. Chuang, Quantum Computation and Quantum Information (2002).
J. S. Bell, Physics 1, 195 (1964).
N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, Rev. Mod. Phys. 86, 419 (2014).
D. Collins and N. Gisin, J. Phys. A: Math. Gen. 37, 1775 (2004).
R. Gallego, L. E. Würflinger, R. Chaves, A. Acín, and M. Navascués, New J. Phys. 16, 033037 (2014).
D. Collins, N. Gisin, N. Linden, S. Massar, and S. Popescu, Phys. Rev. Lett. 88, 040404 (2002).
N. D. Mermin, Phys. Rev. Lett. 65, 1838 (1990).
R. F. Werner and M. M. Wolf, Phys. Rev. A 64, 032112 (2001).
G. Svetlichny, Phys. Rev. D 35, 3066 (1987).
R. Gallego, L. E. Würflinger, A. Acín, and M. Navascués, Phys. Rev. Lett. 109, 070401 (2012).
J.-D. Bancal, J. Barrett, N. Gisin, and S. Pironio, Phys. Rev. A 88, 014102 (2013).
R. Chaves, D. Cavalcanti, and L. Aolita, arXiv:1607.07666 (2016).
H. J. Kimble, Nature 453, 1023 (2008).
M. Żukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert, Phys. Rev. Lett. 71, 4287 (1993).
C. Branciard, N. Gisin, and S. Pironio, Phys. Rev. Lett. 104, 170401 (2010).
C. Branciard, D. Rosset, N. Gisin, and S. Pironio, Phys. Rev. A 85, 032119 (2012).
A. Tavakoli, P. Skrzypczyk, D. Cavalcanti, and A. Acín, Phys. Rev. A 90, 062109 (2014).
K. Mukherjee, B. Paul, and D. Sarkar, Quantum Inf. Process. 14, 2025 (2015).
R. Chaves, Phys. Rev. Lett. 116, 010402 (2016).
D. Rosset, C. Branciard, T. J. Barnea, G. Pütz, N. Brunner, and N. Gisin, Phys. Rev. Lett. 116, 010403 (2016).
A. Tavakoli, J. Phys. A: Math. Theor. 49, 145304 (2016).
A. Tavakoli, Phys. Rev. A 93, 030101 (2016).
B. Paul, K. Mukherjee, S. Karmakar, D. Sarkar, A. Mukherjee, A. Roy, and S. S. Bhattacharya, arXiv:1701.04114 (2017).
A. Tavakoli, M. O. Renou, N. Gisin, and N. Brunner, arXiv:1702.03866 (2017).
G. Carvacho, F. Andreoli, L. Santodonato, M. Bentivegna, R. Chaves, and F. Sciarrino, arXiv:1610.03327 (2016).
D. J. Saunders, A. J. Bennet, C. Branciard, and G. J. Pryde, arXiv:1610.08514 (2016).
S. Popescu, Phys. Rev. Lett. 74, 2619 (1995).
M. Ringbauer, C. Giarmatzi, R. Chaves, F. Costa, A. G. White, and A. Fedrizzi, Sci. Adv. 2, e1600162 (2016).
T. Fritz, New J. Phys. 14, 103001 (2012).
E. Wolfe, R. W. Spekkens, and T. Fritz, arXiv:1609.00672 (2016).
B. Hensen et al., Nature 526, 682 (2015).
M. Giustina et al., Phys. Rev. Lett. 115, 250401 (2015).
L. K. Shalm et al., Phys. Rev. Lett. 115, 250402 (2015).
M. J. W. Hall, Phys. Rev. Lett. 105, 250404 (2010).
R. Chaves, R. Kueng, J. B. Brask, and D. Gross, Phys. Rev. Lett. 114, 140403 (2015).
J. Handsteiner et al., Phys. Rev. Lett. 118, 060401 (2017).
The BIG Bell Test, http://thebigbelltest.org (2016).
R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, Machine Learning: An Artificial Intelligence Approach (Springer Science & Business Media, 2013).
J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. 23, 880 (1969).
R. Horodecki, P. Horodecki, and M. Horodecki, Phys. Lett. A 200, 340 (1995).
K. Mukherjee, B. Paul, and D. Sarkar, Quantum Inf. Process. 15, 2895 (2016).
Y. Makhlin, Quantum Inf. Process. 1, 243 (2002).
F. Andreoli, Generalized Non-Local Correlations in a Tripartite Quantum Network, master's thesis, discussed in January 2017 under the supervision of Prof. Fabio Sciarrino.
N. Gisin, Q. Mei, A. Tavakoli, M. O. Renou, and N. Brunner, arXiv:1702.00333 (2017).
http://arxiv.org/abs/1702.08316v1
{ "authors": [ "Francesco Andreoli", "Gonzalo Carvacho", "Luca Santodonato", "Rafael Chaves", "Fabio Sciarrino" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170227151212", "title": "Maximal violation of n-locality inequalities in a star-shaped quantum network" }
In the standard model of non-linear structure formation, a cosmic web of dark-matter dominated filaments connects dark matter halos. In this paper, we stack the weak lensing signal of an ensemble of filaments between groups and clusters of galaxies. Specifically, we detect the weak lensing signal, using CFHTLenS galaxy ellipticities, from stacked filaments between SDSS-III/BOSS luminous red galaxies (LRGs). As a control, we compare the physical LRG pairs with projected LRG pairs that are more widely separated in redshift space. We detect the excess filament mass density in the projected pairs at the 5σ level, finding a mass of (1.6 ± 0.3) × 10^13 M_⊙ for a stacked filament region 7.1 h^-1 Mpc long and 2.5 h^-1 Mpc wide. This filament signal is compared with a model based on the three-point galaxy-galaxy-convergence correlation function, as developed in <cit.>, yielding reasonable agreement.

Keywords: cosmology, gravitational lensing: weak, dark matter, large-scale structure of Universe, galaxies: elliptical and lenticular, cD

§ INTRODUCTION

A key prediction of the cold dark matter (CDM) model is that a network of low-density filaments connects dark matter halos. Measuring the signal from these structures is therefore a key part of understanding the large-scale structure of the universe. The most prominent of these diffuse filaments are expected to thread the most massive dark matter halos in the universe, where galaxy clusters form. The existence of this filamentary structure is widely accepted; however, there is limited direct observational evidence of these dark-matter dominated filaments. One of the best ways to probe the structure of dark matter is by weak gravitational lensing, where the distortion of background galaxies can be used to map out the foreground distribution of mass density.

Several authors have reported the detection of a dark matter filament connecting individual massive clusters using weak lensing. <cit.> found a dark matter filament connecting the two massive (∼ 10^14 M_⊙) clusters Abell 222 and Abell 223. More recently, <cit.> claimed the detection of a filament between the massive galaxy clusters CL0015.9+1609 and RX J0018.3+1618. These individual filament detections rely on somewhat arbitrary parametric filament models that are difficult to interpret. <cit.> studied filaments between clusters in N-body simulations and found that, between clusters of galaxies separated by ≲ 10 h^-1 Mpc, ∼ 90% of pairs are connected by filaments, which have a typical cylindrical radius of ∼ 2 h^-1 Mpc. However, these filaments are not always straight, which complicates their identification even for massive filaments.

The weak lensing signal-to-noise of a single filament between a single pair of galaxy groups is expected to be much less than unity. The approach of this paper will therefore be to stack many thousands of filaments between pairs of Luminous Red Galaxies (LRGs). LRGs inhabit halos with masses of a few times 10^13 M_⊙ and so can be used as a proxy for galaxy groups <cit.>. When stacking filaments, the signal is best understood as the ensemble average of shear (or projected surface mass density) around halo pairs. One way to model the stacked filament is through higher-order perturbation theory, i.e., the three-point correlation function or bispectrum.
The three-point galaxy-galaxy-shear correlation function from weak lensing has been studied by a number of authors <cit.>. Recently, <cit.> used CFHTLenS data and measured three-point statistics of galaxy number density and convergence. From this they extracted the excess surface mass density around stacked lens galaxy pairs, both early-type and late-type. They found an excess surface mass density around early-type lens galaxy pairs, with the excess around late-type pairs being consistent with zero. This analysis made use of photometric redshifts to identify pairs of galaxies. As will be discussed in <ref> below, the disadvantage is that the relatively large error in photometric redshifts (∼ 0.05, or ∼ 150 h^-1 Mpc) will scatter physically connected pairs of galaxies away from each other, and scatter seemingly independent pairs together, which complicates the interpretation of the results.

Clampitt and collaborators <cit.> have investigated the stacked weak lensing signal between SDSS LRGs at various separations, based on SDSS spectroscopy and imaging. <cit.> presented two filament models, one based on the three-point correlation function, and the other a string of Navarro-Frenk-White <cit.> halos. The published version of the same paper <cit.> instead compared the data with stacked filaments from N-body simulations, finding reasonable agreement. The latter paper reported a detection at the 4.5σ level, although no mass was quoted for the filament.

In this work, we describe techniques needed to measure the stacked filament between groups and clusters of galaxies, and apply these to LRG pairs. We also attempt to model the filament using the three-point correlation function. In Section <ref>, we discuss the data: CFHTLenS for galaxy source ellipticities and photometric redshifts, and the Baryon Oscillation Spectroscopic Survey <cit.> for spectroscopic redshifts of LRGs, a proxy for group and cluster centres. The LRG-pair stacking procedure is outlined in Section <ref>, and the results are presented as both shear and convergence maps. We also introduce the technique of subtracting non-physical pairs in order to isolate the filament signal from the shear signal of the individual halos. Finally, we provide an empirical measurement of the stacked filament surface mass density and total mass. In Section <ref>, we describe a model for the stacked filament in the context of perturbation theory, starting from the three-point galaxy-galaxy-convergence correlation function. We compare this model with the data and discuss possible improvements to the model. Section <ref> summarizes our results.

Throughout this work we adopt a cosmology with the following parameters: Ω_m=0.3, Ω_Λ = 0.7, h≡ H_0/(100 km s^-1 Mpc^-1)=0.7, n_s = 0.96, and σ_8 = 0.8.

§ DATA

In order to study the weak lensing signal of filaments one requires two sets of data: a catalogue of galaxy group and cluster lens pairs, and a catalogue of background source galaxies with accurate ellipticity measurements.

§.§ CFHTLenS background source galaxies

The CFHTLenS data were derived from the Wide component of the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS), which was optimized for weak lensing measurements. Observations were taken between March 2003 and November 2008 with the CFHT MegaPrime instrument, which has roughly a 1^∘× 1^∘ field of view.
The CFHTLS Wide data include photometry in five optical passbands (u*, g', r', i', z') and cover ∼ 154 square degrees in four patches on the sky (W1-W4), three of which have substantial overlap with BOSS/SDSS-III as discussed below. The deepest band (i') data yield 17 resolved galaxies per square arcminute <cit.>. Galaxy ellipticity measurements were obtained with the lensfit algorithm <cit.>, modelled with bulge and disk components, ultimately giving the two ellipticity parameters, e_1 and e_2, by Bayesian marginalization over galaxy size, centroid and bulge fraction. A corresponding lensfit weight was assigned to each galaxy given the variance of the ellipticity likelihood surface, defined in equation 8 of <cit.>. After weighting, the effective source density is 11 galaxies per square arcminute <cit.>.

Photometric redshifts (photo-zs) were estimated using the Bayesian Photometric Redshift (BPZ) code outlined in <cit.>, making use of the five-band photometry available from CFHTLS <cit.>, yielding a mean photometric redshift of 0.75, much deeper than the lens sample at ∼ 0.4. The photo-zs are limited to the range 0.2 < z_phot < 1.3, with a scatter of σ_z ∼ 0.04(1+z) and a catastrophic outlier rate of ≲ 4% <cit.>. For a detailed description of the methods used to estimate the photo-zs, see <cit.>.

§.§ Lenses: SDSS LRG pairs

In N-body simulations, filaments connect the high-density nodes where galaxy groups and clusters form. To identify pairs of galaxy groups and clusters that are connected by filaments, one requires an accurate estimate of their location in redshift space. Unfortunately, the uncertainty associated with photometric redshifts will scatter true physical pairs away from each other and scatter false projected pairs to the same redshift. For example, if two physically-associated galaxies are scattered by a photometric redshift error of Δ z_phot = 0.05 (the typical photo-z uncertainty in CFHTLenS), the corresponding scatter in their line-of-sight separation would be ∼ 150 h^-1 Mpc. This is much larger than the physical line-of-sight separation, which is of order ∼ 10 h^-1 Mpc. To mitigate this issue, physical pairs should be identified with spectroscopic redshifts, which have orders of magnitude better redshift accuracy (σ_z_spec ∼ 10^-4, or σ_v ∼ 30 km/s). BOSS has obtained spectroscopic redshifts for a large sample of LRGs, an excellent proxy for the centres of galaxy groups and clusters.

In this study, both the BOSS CMASS and LOWZ sample galaxies were selected using the color-magnitude cuts from <cit.>. The majority of the overlap on the sky between the BOSS and CFHTLenS surveys is in the W1, W3 and W4 patches <cit.>, giving ∼ 24,000 LRGs in total. A catalogue of LRG pairs was constructed by selecting pairs that were separated in redshift by Δ z_spec < 0.002 (corresponding to ∼ 5 h^-1 Mpc comoving if in the Hubble flow), and separated in projection (i.e. on the sky) by 6 h^-1 Mpc ≤ R_sep < 10 h^-1 Mpc. This gave a sample of ∼ 23,000 pairs of LRGs, with a mean physical separation of ⟨ R_sep⟩ ∼ 8.23 h^-1 Mpc, a mean redshift ⟨ z ⟩ ∼ 0.42, and a mean stellar mass of ⟨log_10 M_⋆/M_⊙⟩ ∼ 11.3. According to <cit.>, these LRGs are expected to lie in halos of total mass ⟨log_10 M/M_⊙⟩ = 13.04±0.07, corresponding to galaxy groups.

§ MEASUREMENT OF FILAMENT SIGNAL

In <ref>, we outline the technical details of stacking the shear signal from the lens-source system and describe our method for isolating the filament signal between the LRGs.
In <ref> we present results for stacked LRG pairs.

§.§ Lensing shear signal

Unlike galaxy-galaxy lensing, where one is interested in the circularly averaged tangential shear around individual galaxy centres, measuring the shear signal around pairs of LRGs is more complicated. The main complication arises because the signal is not spherically symmetric, producing a shear signal that is not purely tangential. When stacking the lens-pair-source system, it is necessary to keep track of both components of the source ellipticity, e_1 and e_2. In addition, one must account for the random orientations of LRG pairs and their variable separation lengths. In <ref> below we develop a standardized coordinate system that allows for the stacking of arbitrary orientations and lengths, and in <ref> the actual stacking procedure is outlined.

§.§.§ Standardized coordinates

In galaxy-galaxy lensing, one bins source galaxies in radial annuli around the lens centre. Here, however, we wish to stack LRG pairs which have uniformly random orientations relative to the background galaxies (see Figure <ref>), and varying physical separations. To account for this, we define a standardized coordinate system, normalized by the pair separation, R_sep, and rotated such that the LRG pair coordinates translate to (x_L,y_L)=(-0.5,0) and (x_R,y_R)=(0.5,0). The source galaxies' positions and ellipticities must also be translated into this coordinate system as follows.

* First the galaxy's position is translated such that the central right ascension and declination, (α_c, δ_c), of the LRG pair is at the origin, and then projected into the tangent plane of the central point:
X^'_g = -(α_g - α_c)cosδ_c, Y^'_g = δ_g - δ_c.

* Next the coordinates are rotated such that the LRG pair lies along the x-axis. This is done using the rotation matrix
R = [ cosθ sinθ; -sinθ cosθ ],
where θ is the angle between the individual LRGs about the central point in the tangent plane,
θ = tan^-1((Y^'_R - Y^'_L)/(X^'_R - X^'_L)).
The subscripts L, R represent the "left" and "right" LRGs in the pair.

* Finally the coordinates are rescaled by the separation between the two LRGs in the tangent plane,
s = √((α_R - α_L)^2cos^2δ_c + (δ_R - δ_L)^2).
This is the angular separation that corresponds to a projected physical separation, R_sep.

Putting it all together, the final position of a galaxy in this coordinate system is
x_g = (1/s)[(α_c - α_g)cosδ_c cosθ + (δ_g - δ_c)sinθ],
y_g = (1/s)[-(α_c - α_g)cosδ_c sinθ + (δ_g - δ_c)cosθ].

With the source galaxies in the new coordinate system, their ellipticities must also be transformed. The two components of ellipticity need only to be rotated. The rotation matrix is nearly the same as (<ref>); however, the property that ellipticity is invariant under a 180^∘ rotation requires that the angle be doubled:
e^'_1 = e_1cos2θ + e_2sin2θ,
e^'_2 = -e_1sin2θ + e_2cos2θ.
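For concreteness, a minimal sketch of this transformation (our illustration; the function and argument names are ours, and the same flat-sky approximation as in the equations above is assumed):

```python
import numpy as np

def standardize(ra_L, dec_L, ra_R, dec_R, ra_g, dec_g, e1, e2):
    """Map source galaxies into the standardized LRG-pair frame.

    All angles in radians. Returns (x_g, y_g, e1p, e2p) with the LRGs
    at (-0.5, 0) and (+0.5, 0). Vectorized over the source arrays.
    """
    ra_c, dec_c = 0.5 * (ra_L + ra_R), 0.5 * (dec_L + dec_R)

    # Tangent-plane offsets about the pair centre
    Xp = lambda ra, dec: -(ra - ra_c) * np.cos(dec_c)
    Yp = lambda ra, dec: dec - dec_c

    # Position angle of the pair and its angular separation s
    theta = np.arctan2(Yp(ra_R, dec_R) - Yp(ra_L, dec_L),
                       Xp(ra_R, dec_R) - Xp(ra_L, dec_L))
    s = np.hypot((ra_R - ra_L) * np.cos(dec_c), dec_R - dec_L)

    c, sn = np.cos(theta), np.sin(theta)
    x = ( Xp(ra_g, dec_g) * c + Yp(ra_g, dec_g) * sn) / s
    y = (-Xp(ra_g, dec_g) * sn + Yp(ra_g, dec_g) * c) / s

    # Ellipticities are spin-2: they rotate with twice the angle
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    return x, y, e1 * c2 + e2 * s2, -e1 * s2 + e2 * c2
```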
§.§.§ Stacking

The signal from an individual filament is expected to be very weak, because the filament density is much lower than that of a galaxy or cluster of galaxies, so it is necessary to stack LRG pairs, i.e. to take an ensemble average. To stack the source ellipticities around a pair of LRGs (from here on referred to as the lens), a two-dimensional grid is prepared based on the x-y coordinate system developed in <ref>. For each lens, at all (x,y) cells of the grid, the shear components are computed by averaging the source galaxy ellipticities (e_1 and e_2) according to their lensfit weights, w, with an additional factor of Σ^-2_crit as in <cit.>. The additional factor of Σ^-2_crit is used to down-weight sources that are near the lens in redshift, for which the signal is expected to be very weak. The critical surface density, Σ_crit, is given by
Σ_crit(z_ℓ, z_j) = (c^2/4π G) D(z_j)/[D(z_ℓ)D(z_ℓ,z_j)],
where D(z_ℓ) is the angular diameter distance to the lens, D(z_j) is the angular diameter distance to the source, and D(z_ℓ, z_j) is the angular diameter distance between the lens and the source. To summarize, the ellipticities are stacked to obtain estimates of the shear according to
γ_1(x,y) = ∑_ℓ∑_j∈(x,y) e^'_1,j w_j Σ^-2_crit;ℓ,j / ∑_ℓ∑_j∈(x,y) w_j Σ^-2_crit;ℓ,j,
γ_2(x,y) = ∑_ℓ∑_j∈(x,y) e^'_2,j w_j Σ^-2_crit;ℓ,j / ∑_ℓ∑_j∈(x,y) w_j Σ^-2_crit;ℓ,j,
where the average is over all lenses, ℓ, and background sources, j, that belong to cell (x,y) after the coordinate transformation. An additive correction is applied to the e_2 component (before rotating) when computing the shears, according to equation (19) of <cit.>, which accounts for a bias in the CFHTLenS lensfit ellipticity measurements. Additionally, <cit.> found that a multiplicative correction for noise bias needs to be applied after the ellipticities are stacked, calculated from
1 + K = ∑_ℓ∑_j [1 + m(ν_SNR, r_gal)_j] w_j Σ^-2_crit;ℓ,j / ∑_ℓ∑_j w_j Σ^-2_crit;ℓ,j.
The resulting corrected shears are then γ^cor_1,2(x,y) = γ_1,2(x,y)/(1+K).

§.§.§ Convergence & surface mass density

One problem with examining shear maps directly is that they are difficult to interpret. Unlike the case of galaxy-galaxy lensing, where one can interpret the stacked tangential shear in terms of the mean excess mass density, in the case studied here there is no analogous interpretation of the individual shear components. One solution is to use the method of <cit.> to convert the shear map into a convergence map, which is proportional to the surface mass density in the lens plane. From the definition of convergence, we can easily convert it to the surface mass density Σ = κΣ_crit, where Σ_crit is the ensemble average, calculated using
Σ_crit = ∑_ℓ∑_j Σ_crit;ℓ,j Σ^-2_crit;ℓ,j w_j / ∑_ℓ∑_j w_j Σ^-2_crit;ℓ,j.
The mean Σ_crit was found to be 1640 M_⊙/pc^2 for our sample.

§.§.§ Isolating the filament signal

The goal of this paper is to study the filaments that link groups and clusters. Filaments themselves are difficult to define. For our purposes, we will define the filament as the excess mass present in a pair of LRGs, over and above that expected from the individual haloes of the LRGs themselves. Therefore the contribution from the two LRGs must be removed. One requires a method that will remove any tangential shear produced by the LRG halos, leaving behind a signal only from the filament. <cit.> introduced an elegant nulling method based on combining shear data at four different points, rotated with respect to the two LRGs in such a way as to null the spherically symmetric part of the signal. The disadvantage of their scheme is that the resulting signal combines signal from several locations and so is difficult to visualize and understand.

In this paper, we opt for a simpler approach: compare physical LRG pairs with "non-physical" (projected) LRG pairs. A particular pair of LRGs is likely to be physically connected if their line-of-sight separation is small. In this paper, we have adopted Δ z = 0.002, corresponding to a line-of-sight separation of ∼ 6 h^-1 Mpc, to define physical pairs. By contrast, the same approach can be used to find LRG pairs that have such a large line-of-sight separation that the probability of their being connected by a filament is negligible.
Such pairs only appear to be pairs in projection, and we shall refer to them as "non-physical" pairs. Non-physical pairs of LRGs are selected to have a line-of-sight separation between 100 h^-1 Mpc and 120 h^-1 Mpc, corresponding to a separation in redshift of 0.033 ≲ Δ z ≲ 0.04. For determining background sources, we assume that the lens redshift is the average of the pair. When the ellipticities of sources that are behind the non-physical pairs are stacked, there should only be contributions from the two LRGs. Therefore, by subtracting the stacked map of the non-physical pairs from that of the physical pairs, the remaining signal should be due to the filament. With this method, the data can be compared to the model in terms of shears or in terms of convergence (κ). Since it is easier to interpret the convergence signal, the remainder of the paper will focus on the κ maps.

§.§ Results

The shear map after stacking pairs of LRGs with projected separations between 6 h^-1 Mpc and 10 h^-1 Mpc (average 8.23 h^-1 Mpc) is shown in Figure <ref>. Figure <ref> shows the resulting convergence map, with the upper panel showing the convergence around physical pairs of LRGs. The striking feature in this panel is the clear structure connecting the two physical LRGs. The lower panel of Figure <ref> shows the convergence map from the lensing signal of non-physical LRG pairs in the same projected separation range (6 h^-1 Mpc ≤ R_sep < 10 h^-1 Mpc). A key feature of the lower panel is the lack of the "bridge" between the two LRGs that is seen in the upper panel.

To measure the residual filament signal, we begin by subtracting the convergence map of non-physical pairs (lower panel of Figure <ref>) from the convergence map of physical pairs (upper panel of Figure <ref>). The result is shown in Figure <ref>. The excess surface mass density is clearly visible around the filament midpoint (x,y) = (0,0). To quantify the filament mass, we place a box of dimensions Δ x × Δ y, representing the projected dimensions of the stacked filament (see Figure <ref>), and measure the average excess convergence contained inside the box. After performing the direct subtraction of physical and projected pairs, there may be a small over- or under-subtraction of the convergence in the regions closest to the LRG positions, due to small differences in the mean masses of the physical and non-physical pair LRGs. Moreover, the halos are likely to be elliptical and aligned along the line connecting the LRGs. Therefore, we wish to exclude from our definition of the filament those regions where the elliptical component of the LRG halos dominates the convergence. We note that some studies suggest that r_200 may not be the optimal definition of a dark matter halo's boundary, with accreting matter extending well beyond r_200 <cit.>. To avoid including these LRG halo regions in the filament mass estimate, we only consider the filament to include points farther than 2r_200 from either LRG. The final width Δ x corresponds to 7.1 h^-1 Mpc.

We estimate uncertainties via Monte Carlo simulations of the shape noise. Specifically, we generate 1000 realizations by adding artificial scatter to the galaxy ellipticities, consistent with shape noise. These noisy realizations are propagated to the κ maps generated by the <cit.> method, and through the subtraction of non-physical pairs (the map of which has independent noise). Finally, these uncertainties are propagated to the enclosed masses and mean κ measurements discussed below.
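A minimal sketch of how such shape-noise realizations can be generated (our illustration; σ_e = 0.28 per component is an assumed, typical value rather than the measured CFHTLenS dispersion):

```python
import numpy as np

rng = np.random.default_rng(42)

def shape_noise_maps(weights, n_real=1000, sigma_e=0.28):
    """Yield noise-only ellipticity realizations for a source catalogue.

    Each realization would be pushed through the same stacking +
    Kaiser-Squires + pair-subtraction pipeline as the data; the spread
    of the resulting filament masses then gives the quoted uncertainty.
    """
    n = len(weights)
    for _ in range(n_real):
        yield rng.normal(0.0, sigma_e, n), rng.normal(0.0, sigma_e, n)

# Example: scatter of a weighted mean shear estimator under shape noise
w = np.ones(10000)
g1 = [np.average(e1, weights=w) for e1, e2 in shape_noise_maps(w, n_real=200)]
print(np.std(g1))  # ~ sigma_e / sqrt(N) = 0.0028
```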
Figure <ref> shows the resulting mean convergence within the box as the width of the box, Δ y, is increased. We then convert the convergence to a surface mass density using eq. (<ref>). It is then straightforward to calculate the average mass contained within the filament box, shown in Figure <ref>. From Figure <ref>, we see that the signal-to-noise peaks around Δ y = 0.3, corresponding to a physical width of 2.5 h^-1 Mpc, at a significance of ∼ 5σ. The corresponding mass contained within the filament is M_fil = (1.6 ± 0.3)× 10^13 M_⊙. The filament mass shows no sign of increasing beyond this Δ y, so we adopt 2.5 h^-1 Mpc as the fiducial width. The filament region has a projected length of ∼ 7 h^-1 Mpc on the sky. We estimate that this corresponds to a true length of ∼ 8 h^-1 Mpc when the line-of-sight depth is included. Assuming that the filament is a uniform-density cylinder of length 8 h^-1 Mpc and diameter 2.5 h^-1 Mpc, the corresponding excess density within the cylinder is then δ̅ = (ρ̅ - ρ_b)/ρ_b ∼ 4, where ρ̅ is the mean density within the cylinder and ρ_b is the background matter density.

The filament mass found here is a factor of a few less massive than the one reported by <cit.>, and about one order of magnitude less massive than the one reported by <cit.>. The difference in mass is likely due to the typical masses of the halos that the filaments connect. The average host halo mass here is of order ∼ 10^13 M_⊙, corresponding to a rich group rather than a massive cluster. In contrast, the host halos considered in <cit.> and <cit.> have masses of a few times 10^14 M_⊙, up to ∼ 10^15 M_⊙ for <cit.>, corresponding to rich clusters of galaxies.

The study of <cit.> is similar to this work in the sense that it studies stacked filaments between LRGs. Their sample of LRGs was selected from SDSS-II, similar to the LRGs used in this study. Their paper does not provide a filament mass, perhaps because the way in which the nulled filament is measured in their work makes it difficult to constrain directly. They do analyze a set of stacked N-body filaments, which provide a reasonable fit to their signal. Examination of the convergence map of these filaments in their Figure 5, allowing for the difference in Σ_crit, suggests that the signals are comparable in mass.

§ MODELLING WITH THE 3-POINT CORRELATION FUNCTION

Figure <ref> shows the stacked excess surface mass density around many pairs of LRGs. It therefore does not correspond to an individual filament but to an ensemble average of stacked filaments. To model it, we therefore consider the galaxy-galaxy-convergence (ggκ) three-point correlation function (3PCF) derived from perturbation theory and developed in <cit.>. Here we summarize the key equations from that paper, to which the reader is referred for further details.

We are interested in the projected 3PCF around two dark matter halos at fixed locations x⃗_1 and x⃗_2, relative to some matter at x⃗_3, which is denoted by
ζ_ggκ(x⃗_1, x⃗_2, x⃗_3) = ⟨δ_g(x⃗_1)δ_g(x⃗_2)κ(x⃗_3)⟩,
where δ_g is the projected three-dimensional galaxy overdensity. Following <cit.>, the 3PCF can be derived from the bispectrum, given by perturbation theory as <cit.>:
B(k⃗_1, k⃗_2, k⃗_3) = [10/7 + (k_1/k_2 + k_2/k_1)(k⃗_1·k⃗_2)/(k_1 k_2) + (4/7)(k⃗_1·k⃗_2)^2/(k^2_1 k^2_2)] P^L_m(k_1)P^L_m(k_2) + permutations,
where P^L_m(k) is the linear matter power spectrum.
ζ_ggκ(x⃗_1, x⃗_2, x⃗_3) = (Σ^-1_crit(χ_L, χ_s)/(√(2π)σ_LRG)) ρ_crit,0 Ω_m,0 b^2 (1/(2π)^3) ∫^∞_0 dk_1 ∫^∞_0 dk_2 ∫^2π_0 dϕ k_1 k_2 B(k_1, k_2, -k_12) J_0(√(α^2 + β^2)),
where b is the linear bias of the LRGs and σ_LRG is the typical separation of LRGs along the line of sight, converted to physical units. This integral can be evaluated numerically for a given separation bin, as described in <ref>.

The three-point convergence map generated for projected separations 6 h^-1 Mpc ≤ R_sep < 10 h^-1 Mpc is shown in Figure <ref>. Here we have used a linear bias, b, of 2 <cit.>, and we follow <cit.> to estimate the r.m.s. line-of-sight separation of LRGs, σ_LRG = 8 h^-1 Mpc. It is important to take care to ensure that the resulting convergence map is in physical units; the integral in eq. (<ref>) is performed over comoving coordinates, which introduces an additional factor of (1+z_l)^-2. The factor of Σ_crit was measured from the data according to eq. (<ref>).

§.§ Results

As discussed in <ref>, the filament signal showed no significant increase beyond the fiducial width of Δ y = 0.3 ∼ 2.5 h^-1 Mpc, so we adopt this width to compare the filament data with the 3PCF model. Figure <ref> shows the convergence data binned along the x-axis, as well as the three-point correlation function averaged over the fiducial width. Also shown is the total averaged convergence within the filament box. At a glance, it appears that the three-point function fits the data well; however, the model lies slightly above the best-fitting value. While the model appears to be a good fit to the central filament region (x ∼ 0), the data do not appear to show the excess around the two LRGs (x = ±0.5) that is both predicted by the 3PCF and seen in simulations <cit.>. Neglecting this and simply performing a least-squares fit to the entire x range suggests that the model overestimates the data by a factor of ∼ 1.6.

§.§ Discussion

The required rescaling of the 3PCF model is relatively small, being of the order of the uncertainty in the data (roughly 20%). It is possible that the model is an overestimate due to an underestimate of the effect of LRG peculiar velocities. In calculating the three-point correlation function model, we followed <cit.> and parameterized the line-of-sight separation of the two LRGs by a Gaussian distribution with width σ_LRG = 8 h^-1 Mpc. This separation in redshift space includes both the peculiar velocities of each LRG in the pair and the Hubble flow. The peculiar velocities are difficult to model, since they include contributions from relative infall motions as well as "thermal" motions of the LRGs themselves within their host halos. The model could be improved by using a more physically motivated distribution, using two-point statistics, as well as careful calibration from N-body studies.
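To make the main ingredient of the model concrete, the following sketch (ours) evaluates the tree-level bispectrum of eq. (<ref>) by symmetrizing the standard F_2 mode-coupling kernel over the three pairings; the power-law P^L_m(k) used in the example is a toy placeholder for a proper linear power spectrum (e.g. tabulated from CAMB):

```python
import numpy as np

def F2(k1, k2, mu):
    """Tree-level mode-coupling kernel; mu = cos(angle between k1 and k2)."""
    return 5.0 / 7.0 + 0.5 * mu * (k1 / k2 + k2 / k1) + (2.0 / 7.0) * mu ** 2

def bispectrum(k1, k2, mu, plin):
    """Tree-level matter bispectrum with k3 fixed by k1 + k2 + k3 = 0.

    plin: callable returning the linear power spectrum P^L_m(k);
    a user-supplied interpolation in any real application.
    """
    k3 = np.sqrt(k1 ** 2 + k2 ** 2 + 2.0 * k1 * k2 * mu)
    # Cosines for the two remaining pairings, from momentum conservation
    mu13 = -(k1 + k2 * mu) / k3
    mu23 = -(k2 + k1 * mu) / k3
    return (2.0 * F2(k1, k2, mu) * plin(k1) * plin(k2)
            + 2.0 * F2(k1, k3, mu13) * plin(k1) * plin(k3)
            + 2.0 * F2(k2, k3, mu23) * plin(k2) * plin(k3))

# Toy example with a power-law spectrum (illustration only)
plin = lambda k: 1.0e4 * (k / 0.05) ** -2.0
print(bispectrum(0.1, 0.2, 0.5, plin))
```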
We have shown that the predictions of the three-point correlation function are in reasonable agreement with the data.

The goal of this study was to detect filaments using weak lensing, but also to serve as a foundation for future filament studies. We have developed a simple method of stacking filaments that can be applied to any weak lensing dataset, provided one has obtained redshifts for groups and clusters of galaxies through spectroscopy. Upcoming surveys such as the DES <cit.> will obtain ellipticities over 5000 square degrees to approximately the same depth as CFHTLenS. Presently, however, there is little spectroscopy in the DES footprint. Other surveys such as SuMIRe/Hyper Suprime-Cam[http://sumire.ipmu.jp/en/], 2dFLenS <cit.> and the Canada-France Imaging Survey[www.cfht.hawaii.edu/Science/CFIS] will greatly increase the overlap between spectroscopic foreground lens samples and deep samples of background source galaxies. As well as new ground-based surveys, planned space-based missions such as Euclid <cit.> or WFIRST <cit.> have the potential to measure the ellipticities and photometry of billions of galaxies. With increases in statistical power it will become possible to study the nature of filaments as a function of other properties such as halo mass, separation and redshift.

§ ACKNOWLEDGMENTS

We acknowledge useful discussions with Joseph Clampitt. We also acknowledge the substantial efforts of both the CFHT staff in implementing the Legacy Survey, and of the CFHTLenS team in preparing catalogues of galaxy ellipticities and photometric redshifts. MJH acknowledges support from NSERC. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/IRFU, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at Terapix, available at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.

§ REFERENCES

Benítez N., 2000, ApJ, 536, 571
Bernardeau F., Colombi S., Gaztañaga E., Scoccimarro R., 2002, Phys. Rep., 367, 1
Blake C. et al., 2016, MNRAS, 462, 4240
Clampitt J., Jain B., Takada M., 2014, ArXiv e-prints, 1402.3302
Clampitt J., Miyatake H., Jain B., Takada M., 2016, MNRAS, 457, 2391
Colberg J. M., Krughoff K. S., Connolly A. J., 2005, MNRAS, 359, 272
Dawson K. S. et al., 2013, AJ, 145, 10
Dietrich J. P., Werner N., Clowe D., Finoguenov A., Kitching T., Miller L., Simionescu A., 2012, Nature, 487, 202
Erben T. et al., 2013, MNRAS, 433, 2545
Heymans C. et al., 2012, MNRAS, 427, 146
Higuchi Y., Oguri M., Tanaka M., Sakurai J., 2015, ArXiv e-prints, 1503.06373
Hildebrandt H. et al., 2012, MNRAS, 421, 2355
Hudson M. J. et al., 2015, MNRAS, 447, 298
Kaiser N., Squires G., 1993, ApJ, 404, 441
Kirk D. et al., 2015, Space Science Reviews, 193, 139
Laureijs R. et al., 2011, ArXiv e-prints
Mandelbaum R., Seljak U., Cool R. J., Blanton M., Hirata C. M., Brinkmann J., 2006, MNRAS, 372, 758
Mellier Y., 2012, in Science from the Next Generation Imaging and Spectroscopic Surveys, p. 3
Miller L. et al., 2013, MNRAS, 429, 2858
Miyatake H. et al., 2015, ApJ, 806, 1
More S., Diemer B., Kravtsov A. V., 2015, ApJ, 810, 36
More S., Miyatake H., Mandelbaum R., Takada M., Spergel D. N., Brownstein J. R., Schneider D. P., 2015, ApJ, 806, 2
Navarro J. F., Frenk C. S., White S. D. M., 1997, ApJ, 490, 493
Oman K. A., Hudson M. J., Behroozi P. S., 2013, MNRAS, 431, 2307
Schneider P., Watts P., 2005, A&A, 432, 783
Simon P. et al., 2013, MNRAS, 430, 2476
Simon P., Watts P., Schneider P., Hoekstra H., Gladders M. D., Yee H. K. C., Hsieh B. C., Lin H., 2008, A&A, 479, 655
Spergel D. et al., 2015, ArXiv e-prints, 1503.03757
Takada M., Jain B., 2003, MNRAS, 340, 580
The Dark Energy Survey Collaboration, 2005, ArXiv Astrophysics e-prints, astro-ph/0510346
Tojeiro R. et al., 2014, MNRAS, 440, 2222
{ "authors": [ "Seth D. Epps", "Michael J. Hudson" ], "categories": [ "astro-ph.CO", "astro-ph.GA" ], "primary_category": "astro-ph.CO", "published": "20170227192912", "title": "The Weak Lensing Masses of Filaments between Luminous Red Galaxies" }
Maryland Center for Fundamental Physics and Joint Quantum Institute, University of Maryland, College Park, Maryland, 20742, USA
blhu@umd.edu

We explore in stochastic gravity theory whether non-Gaussian noises from the higher order correlation functions of the stress tensor of quantum matter fields, when back-reacting on the spacetime, may reveal hints of multi-scale structures. Anomalous diffusion may depict what a test particle experiences in a fractal spacetime. The hierarchy of correlations in the quantum matter field induces a hierarchy of correlations in geometric objects via the set of Einstein-Langevin equations for each correlation order. This correlation hierarchy kinetic theory conceptual framework, aided by the characteristics of stochastic processes, may serve as a conduit for connecting the low energy `Bottom-Up' approach with the `Top-Down' theories of quantum gravity which predict the appearance of fractal spacetimes at the Planck scale.

§ INTRODUCTION

There are three levels of inquiry here: 1) stochastic gravity (in the restricted sense) <cit.>: Gaussian noises associated with the second-order correlations in the stress energy tensor of quantum matter fields, back-reacting on the spacetime, induce metric fluctuations via the Einstein-Langevin equation <cit.>. 2) What has not yet been done, but is doable in principle, is to treat the higher order correlations, which generate non-Gaussian noises in the quantum matter field, and their backreaction on the spacetime dynamics. This is stochastic gravity theory (in a broader sense), as originally intended <cit.>. The new quest posed by the title question is: 3) whether their backreaction effects near the Planck scale may, at least in principle, indicate or permit the existence of fractal (multi-scale) spacetimes.

Where would anomalous diffusion fit in? Recall that the original formulation of stochastic gravity was inspired by quantum Brownian motion <cit.> describing normal diffusion, where the Gaussian noise associated with the second-order correlations in the stress energy tensor of the quantum matter field can be defined exactly. Here, for non-Gaussian noises associated with the higher order correlations, we appeal to anomalous diffusion processes to see if spacetimes with multi-scale features may appear. This would entail a generalization of the Laplace-Beltrami operator governing normal wave propagation in a smooth curved manifold to some operator of more complex structure describing wave propagation in a fractal spacetime.

Following this line of inquiry we give a quick description of each component in this conceptual scheme below. We then take a step back to look at the bigger picture and recap the kinetic theory <cit.> `Bottom-Up' approach to quantum gravity <cit.>, where correlation hierarchies both in the matter and the spacetime sectors take center stage.
We then examine dimensional reduction and fractal spacetime near the Planck scale, review the basics of anomalous diffusion and how it can act as a probe for fractal structures, and finally discuss the procedures from stochastic gravity.

1. Backreaction: Semiclassical Gravity and Stochastic Gravity

A. Backreaction of quantum matter fields – the vacuum expectation value of their stress energy tensor – on the background spacetime dynamics is the main theme in semiclassical gravity, begun in the late 70s. The central equation is the semiclassical Einstein equation (SCEq).

B. Backreaction of the fluctuations of quantum matter fields – the vacuum expectation value of the stress energy bitensor T_mn(x)T_rs(x') – on the spacetime dynamics is the main theme in stochastic semiclassical gravity, initiated in 1994. The central equation is the Einstein-Langevin equation (ELEq), with the two-point function of T_mn of quantum fields, now living in the spacetime which is a solution of the SCEq, acting as the source driving the Einstein equation.

2. Noise: Moments of the Stress-Energy Tensor

A. Gaussian noise: For the two-point function of the stress-energy bitensor, one can use the Feynman-Vernon Gaussian functional identity <cit.> to express it as a classical stochastic force: noise. Using it as the source for the Einstein-Langevin equation gives stochastic gravity in the restricted sense (SG2, 2 for second correlation order). For higher-point functions of the stress-energy tensor there is no such identity, thus they cannot be expressed as simple noise. However, their innate qualities of fluctuations and correlations remain, and their importance is unabated; in fact it may increase, as we shall see.

B. Higher moments: As shown by Fewster, Ford and Roman (FFR) <cit.> and others, even at low energy, in Minkowski spacetime, under the test field condition (no backreaction on spacetime), the higher moments of T_mn contribute significantly. They may be expressed as correlation noises <cit.> which are predominantly non-Gaussian, i.e., noises associated with the higher order correlation functions in the correlation hierarchy (e.g., the BBGKY-Boltzmann hierarchy for a classical gas, the Schwinger-Dyson hierarchy for interacting quantum fields). Considering the backreactions of the higher moments of T_mn, extending to the whole correlation hierarchy, is the task of stochastic gravity theory in the broader sense (SGn, n for nth correlation order).

3. Anomalous Diffusion:

A. Non-Gaussian noise: Where does it arise? From nonlinearity. Examples: quantum Brownian motion (QBM) beyond the bilinear coupling order <cit.> and interacting quantum fields <cit.> can be treated by perturbative techniques; both produce multiplicative colored noise.

B. Anomalous diffusion: A great variety of stochastic processes are studied in various fields; see, e.g., <cit.>. They have very different spectral dimensions from normal (Brownian) diffusion and different small and large scale behaviors.

4. Spacetime near the Planck scale:

A. Dimensional Reduction: Spacetime at small length scales becomes effectively 2-dimensional <cit.>. See <cit.> for a lucid account of this feature from different angles. In the causal dynamical triangulation (CDT) program of Ambjørn, Jurkiewicz and Loll (AJL) et al <cit.> this is actually a more ubiquitous situation than the appearance of a large 4D spacetime. An oft-used way to define the dimensionality of space is the spectral dimension from Brownian motion.

B.
Fractal Spacetime: many `Top-Down' theories (meaning, those starting from some assumed microscopic structure of spacetime) contain this feature <cit.>. The two earliest and longest running programs which reported on this behavior are CDT and the asymptotic safety gravity (ASG) <cit.> program of Lauscher, Reuter et al <cit.>. This aspect has been pursued in recent years with rigor by Calcagni and co-workers <cit.>. We shall draw on these works for inspiration in our pursuit. More on this later.

5. `Bottom-Up' view: Does the backreaction of non-Gaussian noise bring forth a modification of the wave operator reflecting possible fractal dimensionality in spacetime? We shall provide some background perspective for pursuing this in the `Bottom-Up' approach (meaning, starting from the known and proven theories of spacetime and matter, namely, general relativity and quantum field theory) to see if this is possible. My present thinking leans towards the affirmative.

§ PERSPECTIVE

I'd like to first recapitulate some statements about 1) the complementary functionality of the `Top-Down' and the `Bottom-Up' approaches, whose central tasks give meaning to the Emergent vs Quantum Gravity theories, respectively <cit.>; 2) the ubiquitous presence and importance of a stochastic regime between the quantum and the semiclassical regimes; 3) why the `Bottom-Up' approach, though admittedly difficult, even seemingly impossible, is necessary. I'll then summarize a conceptual scheme I proposed 15 years ago, known as the "kinetic theory approach to quantum gravity" <cit.>, where the correlation hierarchy plays a determinant role in unraveling the behavior of spacetime near the Planck scale, approached from low energy up. This way of thinking is enriched by interesting new results of Fewster, Ford and Roman on the higher moments of the stress tensor.

1) In <cit.> I made the point that all candidate theories of Quantum Gravity, namely, theories for the microscopic structures of spacetime, should look at their commonalities in the low energy – Planck scale – limit, rather than their differences at the trans-Planckian scale, which is beyond present day experimental or observational verification capabilities. They should examine from their favorite theories of quantum gravity those features they all share, present definitive predictions at sub-Planckian energies, and look for experiments and observations implementable at today's ultra-low energy which can imply, directly or indirectly, their Planck scale behavior. This transition region should serve also as the meeting ground for all `Bottom-Up' approaches to compare their predictions with those from the `Top-Down' theories.

2) The existence and significance of a stochastic regime between the quantum and the semiclassical regimes in many physical systems. Here, fluctuation phenomena play a pivotal role. An example we have seen played out in the last twenty years is (environment-induced) decoherence: a system's transition from quantum to classical is brought about by noise (in the environment). The role played by noise can be rephrased in terms of correlations, as in a variant formulation of the fluctuation-dissipation theorem. Our prediction is that fluctuation phenomena should play an important role in the above-described transition-region meeting ground. It is from this reasoning that we focus on effects such as the induced metric fluctuations (`spacetime foam') from the backreaction of quantum matter field fluctuations in stochastic gravity theory.
3) The `Bottom-Up' approach is admittedly difficult, but not impossible. After all, that is how physics has progressed over the centuries: from conceptual defects in a low energy theory or contradictory phenomena in its observational predictions, we conjecture, test, prove and establish the unknown higher energy theories and structures, which later become established laws and facts. I'll mention a couple of prior examples where a proven low energy theory, when examined closely and critically, can provide insight into some new features of more viable theories at higher energies: a) renormalization (regularization) of the stress energy tensor mandates the appearance of quadratic curvature terms in the effective action; b) inclusion of (Gaussian) quantum field fluctuations mandates the appearance of stochastic components in the curved background spacetime – the induced metric fluctuations. In light of this, the question we are asking, it seems to me, is not unreasonable or totally out of reach, namely: "Does the inclusion of non-Gaussian quantum matter field fluctuations usher in fractal spacetime structures?" Let us recapitulate the structure of stochastic gravity and start from there.

Stochastic gravity in the restricted sense (up to second order, SG2), in relation to classical and semiclassical gravity, can be represented by the following three levels of theoretical structures:

Classical gravity: Einstein equation with classical matter,
G_μν[g] = κ T_μν[g].

Semiclassical gravity (mean field theory):
G_μν[g] = κ (T_μν[g] + ⟨ T_μν^q[g] ⟩),
where q denotes a quantum object and ⟨ ⟩ denotes taking its expectation value.

Stochastic gravity (including quantum fluctuations):
G_μν[g+h] = κ (T_μν[g+h] + ⟨ T_μν^q[g+h] ⟩ + ξ_μν[g])
to linear order in perturbations h, where ξ_μν is the stochastic force induced by the quantum field fluctuations, with the correlation
⟨ ξ_μν(x) ξ_αβ(y) ⟩_s = N_μναβ(x,y),
where ⟨ ⟩_s denotes taking a distribution average over stochastic realizations, and N^μναβ is the noise kernel
N^μναβ(x,y) = (1/2) ⟨ { t^μν(x), t^αβ(y) } ⟩,
where t^μν(x) = T^μν(x) - ⟨ T^μν(x) ⟩.

§.§ Correlation hierarchy and the Kinetic Theory Approach to Quantum Gravity

Thus noise carries information about the correlations of the quantum matter field stress tensor. One can further link correlations in quantum field stress tensors to coherence in quantum gravity. Stochastic gravity brings us closer than semiclassical gravity to quantum gravity in the sense that the correlations in the quantum matter field stress tensor and the correlations in the induced geometric objects (such as the Riemann tensor correlator), which by their theoretical construct are fully present or accessible in quantum gravity, are partially retained in stochastic gravity. Because of the self-consistency condition required of the backreaction equations for the matter and spacetime sectors when solved simultaneously, the background spacetime has a way to tune in to the correlations of the quantum matter fields registered in the noise terms, which manifest through the induced metric fluctuations from solutions of the Einstein-Langevin equations.

Viewed in this broader light, the Einstein-Langevin equation is only a partial (lowest correlation order) representation of the more complete theory for the micro-structures of spacetime and matter. There, the quantum coherence in the geometry sector is locked in and related to the quantum coherence in the matter field, as the quantum description of the combined matter and gravity sectors should be given by a completely coherent wave function of both.
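A toy model may help fix ideas here. The Python sketch below is not the Einstein-Langevin equation itself: a single damped linear mode h(t), with assumed illustrative parameters, stands in for a metric perturbation, and it is driven by Gaussian noise with a prescribed (here exponential, Ornstein-Uhlenbeck-like) correlation kernel, the counterpart of the noise kernel N above. The point is only to display how a stochastic source with a given correlator induces fluctuations in the driven variable.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 200.0, 0.01
n = int(T / dt)

# Colored Gaussian noise via spectral filtering of white noise:
# correlation <xi(t) xi(t')> ~ exp(-|t-t'|/tau_c), i.e. a Lorentzian PSD.
tau_c = 0.5                                   # assumed noise correlation time
freqs = np.fft.rfftfreq(n, d=dt)
psd = 2.0 * tau_c / (1.0 + (2.0 * np.pi * freqs * tau_c)**2)
xi = np.fft.irfft(np.fft.rfft(rng.normal(size=n)) * np.sqrt(psd / dt), n=n)

# Langevin equation for the mode:  h'' + 2*gamma*h' + omega0^2 * h = xi
gamma, omega0 = 0.2, 1.0                      # assumed damping and frequency
h, v = np.zeros(n), 0.0
for i in range(1, n):                         # simple Euler integration
    v += (xi[i-1] - 2.0*gamma*v - omega0**2 * h[i-1]) * dt
    h[i] = h[i-1] + v * dt

print("induced r.m.s. fluctuation of h:", h[n//2:].std())
```

Moving up the correlation hierarchy would amount to replacing the Gaussian ξ here by the predominantly non-Gaussian correlation noises of higher order, for which no such simple construction exists.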
Semiclassical gravity forsakes all the quantum phase information in the gravity or geometry sector. Stochastic gravity captures only partial phase information or quantum coherence in the gravity sector, by way of the correlations in the quantum matter fields. Since the degree of coherence can be represented by correlations (putting aside entanglement issues for the problem at hand), the strategy for the stochastic gravity program (in the broader sense, SGn) is to move up the hierarchy, starting with the second order correlator (the variance) of the matter field stress energy tensor and proceeding to the higher order correlations, and, through their linkage with the gravity sector provided by the Einstein-Langevin equations for each order, retrieve whatever quantum attributes (partial coherence) of the quantum gravity theory at trans-Planckian scales can be recovered. Thus, as remarked in <cit.>, in this vein, focusing on the noise kernel, the stress energy tensor two-point function, is only the first step (beyond the mean field semiclassical gravity theory) towards reconstructing the full theory of quantum gravity. This is the conceptual basis for the so-called `kinetic theory approach to quantum gravity' <cit.>.

The kinetic theory approach to quantum gravity proposes to unravel the microscopic structure of spacetime by examining how the correlation functions of the geometric objects formed by the basic constituents of spacetime are driven by the correlation noises from the higher moments of the quantum matter stress energy tensor. This paradigm is structured around the so-called Boltzmann-Einstein hierarchy of equations. For illustrative purposes, assuming that the micro-structure of spacetime can be represented by some interacting quantum field of micro-constituents, the Schwinger-Dyson hierarchy of n-point correlation functions would be the quantum parallel to the BBGKY hierarchy, the lowest order being the Boltzmann equation. For any nth order correlation, if one can represent all the higher order correlations as noise, called `correlation noise' in <cit.>, this would give rise to a stochastic Boltzmann equation for the nth order correlation.

At the second correlation order, the Einstein tensor correlator with induced metric fluctuations <cit.> has been calculated for Minkowski space, as have, recently, the Weyl and Riemann tensor correlators <cit.> for de Sitter space. The higher order correlations of these geometric objects with induced metric fluctuations can likewise be determined, albeit with much more difficulty, by solving the Einstein-Langevin equations with sources from the higher moments of the stress energy tensor of the corresponding orders. The combined set of correlators of all orders of the geometric objects is given the name of a `Boltzmann-Einstein hierarchy' in <cit.> because it has the structure of a BBGKY hierarchy, whose lowest order is the Boltzmann equation, while in the present spacetime context the lowest order in this hierarchy is the Einstein equation. [This is not the Einstein-Boltzmann equation in classical general relativity and relativistic kinetic theory, which frames the classical matter in the Boltzmann style as the source of the Einstein equation. The Boltzmann-Einstein hierarchy refers to the spacetime sector alone.] Stochastic gravity in the broader sense, SGn, refers to the Boltzmann-Einstein hierarchy of equations in the spacetime sector, together with the Einstein-Langevin equations for each order of the hierarchy (or levels of structure) connecting the spacetime structure correlators to the matter field correlators.
A figurative way suggested in <cit.> to understand the formal structure of these two inter-woven hierarchies of spacetime-matter relations is this: if we assume the `horizontal' dimension in this conceptual chart represents the E-L equations relating, at each correlation order, the spacetime correlators to the matter field correlators, then the B-E hierarchy of spacetime correlators occupies the `vertical' dimension. What stochastic gravity does to reach quantum gravity is to `climb up' the spacetime B-E hierarchy, aided by the E-L equations which tap into the quantum matter sector at each level of structure.

In summary, viewed in the light of mesoscopic physics, stochastic gravity is the theory which enables one to probe into the higher correlations of quantum matter and spacetime. From the excitations of the collective modes in geometro-hydrodynamics one tries to deduce the kinetic theory of spacetime meso-dynamics and eventually the full theory of quantum gravity for spacetime micro-dynamics. [This paradigm was used by Mattingly <cit.> as an example of the emergence of general relativity from quantum gravity, mirroring the `general relativity as geometro-hydrodynamics' perspective <cit.>, in analogy to hydrodynamics being an emergent theory from molecular dynamics.]

§.§ Probability distribution for quantum stress tensor fluctuations

The above is a grand scheme based on the importance of the fluctuations or noise of the quantum matter field manifested in the correlations of the stress energy tensor. The expectation value of the stress energy bitensor is the driving source of stochastic gravity theory (in the restricted sense). Going beyond it, tackling the higher moments, is the start of this new journey. This topic has been explored systematically by Fewster, Ford and Roman (FFR) <cit.> in the past. Their recent findings, it seems to me, add more weight to the correlation hierarchy conceptual framework.

What FFR found was that, for two-dimensional Minkowski space, the probability distribution for individual measurements of the stress-energy tensor for a conformal field in the vacuum state, smeared in time against a Gaussian test function, is a shifted gamma distribution, with the shift given by the optimal quantum inequality bound these authors found earlier. For small values of the central charge it is overwhelmingly likely that individual measurements of the sampled energy density in the vacuum give negative results. For 4D, the probability distribution of the smeared squared field is also a shifted gamma distribution, but the distribution of the energy density is not. There is a lower bound at a finite negative value, but no upper bound. These results show that arbitrarily large positive energy density fluctuations are possible. Since they fall off slower than exponentially, this may even allow for the dominance of vacuum fluctuations over thermal fluctuations. The implication of these findings for gravity and cosmology is that large passive geometry fluctuations are possible. (What Ford calls active and passive correspond to our intrinsic and induced.) These findings testify to the importance of induced metric fluctuations from the backreaction of the stress-energy tensor correlations of higher orders, which is in the realm of stochastic gravity (in the broader sense).
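The qualitative content of the 2D FFR result is easy to visualize numerically. In the Python sketch below the shape, scale and shift of the gamma distribution are illustrative assumptions only (chosen so that the vacuum mean vanishes), not the values FFR derive; it nevertheless displays the characteristic feature that most individual vacuum measurements come out negative while rare, large positive fluctuations restore the zero mean.

```python
import numpy as np
from scipy import stats

# Illustrative shifted-gamma distribution for a smeared stress tensor.
shape, scale = 0.1, 1.0   # small shape mimics a small central charge (assumed)
shift = -shape * scale    # lower bound, set so the vacuum mean is zero
dist = stats.gamma(a=shape, scale=scale, loc=shift)

samples = dist.rvs(size=1_000_000, random_state=0)
print("fraction of negative outcomes:", np.mean(samples < 0.0))   # ~0.8
print("sample mean (should be ~ 0):", samples.mean())
```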
§ SPACETIME NEAR THE PLANCK SCALE

The findings of Fewster, Ford and Roman described above, that the higher moments of the stress energy tensor of quantum matter fields have nontrivial effects in Minkowski spacetime at today's ultra-low energy, also have implications for the ultra-short distance structure of spacetime near the Planck scale. One such effect is dimensional reduction <cit.>.

§.§ Dimensional Reduction

Many theories of quantum gravity contain the ingredients, or predict the occurrence, of a dimensional reduction of spacetime at the Planck scale from 4D to 2D. Even at the classical level, within the theory of general relativity, it was shown in the 60s-70s in the work of Belinsky, Khalatnikov and Lifshitz <cit.> and Misner <cit.> that the most general solutions to the Einstein equation near the cosmological singularity manifest a `velocity-dominated' <cit.> behavior. This refers to the contribution of the extrinsic curvature (Kasner solution) dominating over the intrinsic curvature (mixmaster solution) near the singularity, or spacetime assuming an inhomogeneous Kasner solution at every point in space. Physically this means spatial points decouple, light cones strongly focus and shrink to timelike lines, or, in a more poetic depiction, spacetime becomes `asymptotically silent'. For a description of further evidence for asymptotic silence as a generic Planck scale behavior, see, e.g., <cit.>.

Along the line of reasoning of Planck scale focusing by non-Gaussian vacuum stress-energy fluctuations, Carlip, Mosna and Pitelli <cit.>, using the results of <cit.> for the probability distribution of 2D conformal field theory, have recently shown that vacuum fluctuations of the stress-energy tensor in two-dimensional dilaton gravity lead to a sharp focusing of light cones near the Planck scale. Space is effectively broken into a large number of causally disconnected regions, thus adding to the evidence for spontaneous dimensional reduction at short distances. They also argued that these features should be present, qualitatively, in four dimensions.

It is of interest to see how the dimensionality of spacetime is defined. A common way is to use the spectral dimension from Brownian motion, namely, the dimension of the paths traversed by a random walker. Since going beyond Brownian motion is one of the main themes of this inquiry, it is perhaps worthy of a short description here <cit.>. The spectral dimension for a diffusion process is defined as follows. From the probability density P(x, x'; τ) of a diffusing particle on a background, one can define a return probability P(τ) = V^-1 ∫ dx P(x, x; τ). Here, x and x' denote coordinates on the Euclidean spacetime with volume V, and τ is an external diffusion time – `external' here refers to what pertains only to the diffusion process and is not related to the physical time in spacetime dynamics. The spectral dimension for a background with fixed dimensionality is then defined as

d_S = -2 lim_τ→0 ∂ ln P(τ)/∂ ln τ.

To allow for the spectral dimension to change as the scale length varies, one can generalize this definition to make d_S(τ) dependent on the diffusion `time' τ by not taking the limit τ→0.
The expectation value of the spectral dimension is relatively easy to evaluate numerically, since random walks are simple to model. Intuitively, a random walker with more dimensions to explore will diffuse more slowly from a starting point, and will also take longer to return. Quantitatively, a diffusion process on a d-dimensional manifold is described by a heat equation

(∂/∂ s - Δ_x) K(x,x';s) = 0 with K(x,x';0) = δ(x-x'),

with a short distance solution

K(x,x';s) ∼ (4π s)^-d/2 e^-σ(x,x')/2s (1 + 𝒪(s)),

where σ(x,x') is Synge's world function, essentially the square of the geodesic distance. In particular, the return probability K(x,x;s) is

K(x,x;s) ∼ (4π s)^-d/2.

This relationship extends to any space on which a diffusion process or random walk can occur. The spectral dimension is then defined as the coefficient corresponding to d in (<ref>). As Ambjørn, Jurkiewicz and Loll discovered <cit.>, and as reconfirmed by Kommu <cit.> and others, the spectral dimension found by causal dynamical triangulation is 4 for `long' random walks, but changes to 2 for `short' random walks. The corresponding Green functions are those of four-dimensional fields at large scales, but those of two-dimensional fields at small scales. The crossover scale length is about 15 Planck lengths <cit.>.

Obtained from a Mellin transform of the Green function, the heat kernel contains just as much physical information about the correlation functions of a quantum field. The heat kernel is a useful vehicle to extract the UV behavior of a quantum field in curved spacetime under a small-s Schwinger proper time expansion, as was used for the identification of UV divergences in several stress energy tensor regularization programs (see, e.g., <cit.>); the small-s limit likewise captures the short distance behavior <cit.>. These are examples of how properties of quantum field theory in curved spacetime can provide useful hints about the spacetime structure at the Planck scale. However, beware that the Schwinger proper time s has nothing to do with physical time or Euclidean time. It is a fictitious construct introduced by Schwinger; thus the heat equation for quantum fields in ordinary 4D spacetime is written in 1+4 dimensions. The diffusion `dynamics' measured by this fictitious `time' has nothing to do with the diffusion of a physical particle in real 1+3D spacetime.

§.§ Fractal Spacetime

That spacetime at the Planck scale can assume some fractional structure has been proposed from many independent considerations, such as in causal dynamical triangulation, asymptotically safe gravity, Hořava-Lifshitz gravity and group field theory <cit.>. Take, for example, the results from the long-running causal dynamical triangulation program <cit.>: the prevailing geometries are 2D fractal spacetimes which evolve for a very short time. Long-lived smooth structures are rare, but they do exist; some even evolve to a 4D spacetime of large volume with homogeneity <cit.>, desirable features like those of our universe. In fact, after about 100 Planck times, spacetime acquires semiclassical features. One can get an idea about some properties of a fractal spacetime by watching how a test particle moves in it, much like geodesics in a smooth curved spacetime.
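As a concrete illustration of the definitions above, the following Python sketch estimates the spectral dimension of an ordinary flat lattice from the return probability of simple random walks. On a CDT-like or fractal geometry one would instead move the walker on the triangulated geometry itself; the walker counts and step numbers here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def return_probability(d, tau, walkers=100_000):
    """Fraction of simple random walks on Z^d back at the origin after tau steps."""
    pos = np.zeros((walkers, d), dtype=np.int64)
    for _ in range(tau):
        axis = rng.integers(0, d, size=walkers)      # axis along which to step
        sign = rng.choice([-1, 1], size=walkers)     # step direction
        pos[np.arange(walkers), axis] += sign
    return np.mean(np.all(pos == 0, axis=1))

d = 2
taus = np.array([8, 16, 32, 64])                     # even, so returns are possible
P = np.array([return_probability(d, int(t)) for t in taus])
slope = np.polyfit(np.log(taus), np.log(P), 1)[0]    # P ~ tau^(-d_S/2)
print("estimated d_S =", -2.0 * slope)               # ~ 2 for the flat 2D lattice
```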
Processes like anomalous diffusion (in a classical, flat Euclidean space) are of interest to the quantum gravity community for the following reason: in a certain energy or length scale range, some qualitative features of the quantum structure of spacetime can be gleaned from the diffusive motion of a probe particle. This type of motion is captured by a generalization of the Laplacian operator (Laplace-Beltrami, in curved spacetime) from the more familiar form defined for a smooth manifold to one that includes stochastic features. An important fact to bear in mind is that the effective metric `seen' by a diffusing particle depends on the momentum of the probe. This can be captured by a renormalization group formulation, such as is utilized in the asymptotic safety gravity (ASG) program <cit.>. There <cit.> it is found that in d dimensions the spectral dimension d_S changes depending on the value of a parameter δ, which in turn depends on the probed length scale:

d_S = 2d/(2+δ).

One can identify three characteristic regimes where the spectral dimension is approximately constant over many orders of magnitude. At large distances one reaches the classical regime, where δ = 0 and the spectral dimension agrees nicely with both the Hausdorff and topological dimension of the spacetime, as required. At smaller distances one first encounters a semiclassical regime with δ = d, before entering the fixed-point quantum gravity regime with δ = 2, signifying a dimensional reduction to 2. (In d = 4 these three regimes give d_S = 4, 4/3 and 2, respectively.) In this sense δ provides a measure of the quantum nature of spacetime.

§.§ Might we see these effects from Bottom-Up?

With the occurrence of dimensional reduction and the appearance of fractal spacetime likely near the Planck scale, we now ask the question: might we see these effects from low energy up? By low energy we mean today's 4D spacetime with a smooth manifold structure, and it is believed that this smooth manifold structure will persist down to scales around, but somewhat above, the Planck length, say ≈ 100 ℓ_Pl, reached from below. As mentioned earlier, in many physical systems one can find a stochastic regime in between the semiclassical and the quantum regimes. For spacetime structures we expect the same to be true. Indeed, it is in the transition from the stochastic to the quantum regime where we would anticipate fractal spacetime to appear, possibly accompanying a transition from continuum to discrete structures (e.g., <cit.>). As described earlier, for gravity there is a theory which can capture the essence of this stochastic regime, namely, stochastic gravity. This is the motivation for us to look for possible fractal structures in stochastic gravity as we move up in energy, or probe at shorter scale lengths. The physical quantities of interest for this inquiry are the higher moments of the stress tensor, giving rise to non-Gaussian noise in the quantum matter fields. The stochastic processes to help us visualize the features of such spacetimes are the anomalous diffusion processes, a much broader class than the quantum Brownian motion describing normal diffusion.

Let us separate the contexts of our inquiries into two kinds, one easier to answer than the other. Q1: Can a smooth manifold with a Laplace-Beltrami operator admit non-Gaussian noises from a quantum field? The answer is yes. For example, Ramsey et al, in treating the nonequilibrium inflaton dynamics for reheating, considered fermion production, which, when expressed as noise, takes the form of multiplicative colored noise.
The non-Gaussian nature comes from the nonlinearity of the Yukawa interaction between a scalar (inflaton) and a spinor (fermion) field. There are works on classical stochastic processes with multiplicative noise, such as the generalized Langevin equation of, e.g., <cit.>, and on non-Gaussian noise in quantum stochastic processes, such as the nonlinear Langevin equation of, e.g., <cit.>.

Q2: Consider non-Gaussian noises from a quantum field back-reacting on a spacetime. Does the requirement of self-consistency in the dynamics of spacetime and matter require/permit the possibility of a fractal spacetime? We think it is possible. Note that in the above we are talking about two different stochastic processes: Q1 concerns quantum matter (with non-Gaussian noise) moving in a given background space, whereas Q2 refers to the classical spacetime dynamics driven by quantum noise via the Einstein-Langevin equation. We explore the latter question here.

§ ANOMALOUS DIFFUSION

In this section I'll give a sketch of what anomalous diffusion is and, in the next section, how it can serve as a probe into the quantum nature of spacetime <cit.>. I am not an expert in this topic, so this is just sharing my learning process with you. The following contents are excerpted from the works of practitioners in these two (seemingly) disparate fields, statistical mechanics and quantum gravity, to prepare us for the journey we wish to embark upon.

§.§ Anomalous transport by fractional dynamics

`Anomalous' is in comparison to `normal' Brownian motion. In a classical diffusion process in d (embedding) spatial dimensions, the deviation of the mean squared displacement is given by

⟨(Δr)^2⟩ ≡ ⟨r^2⟩ - ⟨r⟩^2 = 2 d K_β t^β (no summation over β),

where t is the time in classical stochastic dynamics (τ when applied to the analysis of quantum spacetime structure), and K_β is the generalized diffusion constant, with β = 1 being the normal process and β ≠ 1 the anomalous processes. The cases with 0 < β < 1 are called subdiffusive (dispersive, slow), those with β > 1 the superdiffusive (enhanced, fast) processes. Usually the domain 1 < β ≤ 2 is considered, with β = 2 being the ballistic limit described by a wave equation, or its forward and backward components.

These processes are characterized by their PDFs. We mention several force-free processes whose PDFs share the form

P(x,t) = (4π K_β t^β)^-1/2 exp(-x^2/(4 K_β t^β)).

The processes this encompasses are: (a) normal Brownian motion (BM): β = 1; (b) fractional Brownian motion (FBM): 0 < β ≤ 2; (c) the generalized Langevin equation (GLE) with power-law kernel: 0 < β < 2 with β ≠ 1. One can find other commonly encountered processes, such as (d) subdiffusion (SD), (e) Lévy flights (LF), (f) Lévy walks (LW), etc., described in, e.g., Table 1 of the review by <cit.>, which we follow in this subsection. The subdiffusive case below bears special significance for quantum spacetimes.

Note the qualitative differences between the first three (a-c) and the latter three (d-f) types of motion: the fractional dynamical equations corresponding to SD, LFs and LWs are highly non-local and carry far-reaching correlations in time and/or space, represented in the integro-differential nature (with slowly decaying power-law kernels) of these equations. In contrast, FBM and the GLE on the macroscopic level are local in space and time, and carry merely time- or space-dependent coefficients. These generalizations can be obtained from Brownian motion (BM) by using the continuous time random walk (CTRW) model, which we describe below, following <cit.>.
All of these models can be mapped onto the corresponding fractional equations, which we will describe afterwards.

1. In a standard random walk process each step is of a fixed length in a random direction, taken at each tick of a system clock. A process having constant spatial and temporal increments Δx and Δt will give rise to the standard diffusion process in the long-time limit: after a sufficient number of steps N, the position x(t) = Σ_i=1^N Δx_i, where Δx_i is the displacement at the ith step, will be distributed as a Gaussian, by the central limit theorem.

2. In a CTRW process the jump length and the waiting time are distributed according to two PDFs, λ(x) and ψ(t), respectively. The propagator for such a CTRW process in the absence of an external force is given in Fourier (k) - Laplace (u) space by

P(k,u) = (1 - ψ(u)) / (u [1 - ψ(k,u)]).

Subdiffusion is classically described in terms of a CTRW with a long-tailed inverse power-law waiting time:

ψ(t) ≈ τ^β / t^1+β for 0 < β < 1.

A waiting time PDF of this form is obtained under the Laplace expansion ψ(u) ≈ 1 - (uτ)^β for uτ ≪ 1. It includes normal diffusion in the limit β = 1, in which case ψ(u) = e^-uτ ≈ 1 - uτ, and ψ(t) = δ(t - τ). Now combine this with the Fourier transform of the jump length λ(x). After an analogous expansion of a short-range jump length PDF, λ(k) ≈ 1 - μ k^2 (for k → 0), we obtain

P(k,u) = (1/u) / (1 + u^-β K_β k^2),

where K_β ≡ μ/τ^β is the anomalous diffusion constant. For Brownian motion, β = 1; after using the differentiation and integration theorems for the Laplace and Fourier transforms, we obtain the normal diffusion equation:

∂ P(x,t)/∂ t = K_1 ∂^2 P(x,t)/∂ x^2.

The subdiffusive cases (0 < β < 1) have a term of the form u^-β f(u). Their PDFs obey the fractional diffusion equation

∂ P(x,t)/∂ t = _0D_t^1-β K_β ∂^2 P(x,t)/∂ x^2,

where _0D_t^1-β ≡ (∂/∂ t) _0D_t^-β and _0D_t^-β is the Riemann-Liouville fractional integral operator defined by

_0D_t^-β f(t) ≡ (1/Γ(β)) ∫_0^t dt' f(t')/(t-t')^1-β

for any well-behaved function f(t). It is important to keep track of the initial condition in this fractional diffusion equation. Noticing that _0D_t^β 1 = t^-β/Γ(1-β), we can rewrite Eq. (<ref>) in the form

_0D_t^β P(x,t) - t^-β P_0(x)/Γ(1-β) = K_β ∂^2 P(x,t)/∂ x^2.

§ DIFFUSIVE PROCESSES AND FRACTAL SPACETIME

In this line of investigation the probability density function (PDF) contains more information than the spectral dimension, and is thus the focus of interest. The basic properties of the PDF, such as positive semi-definiteness, need to be checked for the diffusion process representative of quantum spacetime, case by case, for different theories of quantum gravity. This is discussed in, e.g., <cit.>, where several ways are shown to construct diffusion equations which capture the quantum properties of spacetime while admitting solutions that are manifestly positive semi-definite. Note again that the diffusion `time' is not a physical time, and thus the motion of the probe particle is not related to how matter moves in, affects, or is affected by the background spacetime. It carries no dynamical meaning itself beyond serving as a tool for `imaging the topography' of the quantum spacetime structure.

One way suggested in <cit.> for obtaining a positive semi-definite PDF is to use nonlinear time. These authors use a renormalization group (RG) improvement scheme where the scale k is related to the diffusion time τ, with large diffusion times corresponding to the IR regime k → 0 and short diffusion times to the UV regime k → ∞.
Assuming a power-law relation between the effective metrics at scale k and at the reference scale k_0, ⟨g^μν⟩_k ∝ k^δ ⟨g^μν⟩_k_0, one gets

(∂_τ - k^δ ⟨g^μν ∇_μ ∇_ν⟩_k_0) P(x,x';τ) = 0,

where k is the RG scale, k_0 is the IR reference scale and g^μν is the fixed IR reference metric, which is taken to be the flat Euclidean metric. To encode the scaling effects in the diffusion time τ, one can multiply the equation by k^-δ so that the diffusion operator becomes a standard second-order Laplacian,

(k^-δ ∂/∂τ - ∇^2_x) P(x,x';τ) = 0.

The relation between k and τ is then fixed on dimensional grounds. Since kx is dimensionless, the modified equation implies that, dimensionally, τ ∼ k^-(δ+2), which suggests the scale identification

k = τ^-1/(δ+2),

where the proportionality constant has been absorbed into the diffusion time τ. By changing the diffusion time variable from τ to τ^β with β = 2/(δ+2), the resulting equation can be cast into a diffusion equation in nonlinear time τ^β:

(∂/∂τ^β - ∇^2_x) P(x,x';τ) = 0.

The probability density resulting from this diffusion equation is given by a Gaussian in r = |x - x'|:

P(r,τ) = (4πτ^β)^-d/2 e^-r^2/(4τ^β),

which is seen to be manifestly positive semi-definite. Moreover, the cutoff identification above implies that P[r,τ(k)] ∝ k^d, the correct scaling behavior of a diffusion probability in d dimensions. Notice that for the classical regime value δ = 0, i.e., β = 1, we obtain the same expression as the heat kernel given earlier, without the curvature correction terms, with (r,τ) here corresponding to (σ,s) there. The spectral dimension resulting from this probability density is independent of τ and is given by the formula d_S = 2d/(2+δ) quoted earlier. It was remarked in <cit.> that the spectral dimension obtained from the diffusion in nonlinear time, and the one measured within CDT, is identical to the one found in <cit.>, thus offering useful grounds for the comparison of different approaches to quantum gravity, here in their stochastic dynamics representations. (See also <cit.>.)

The averaged squared displacement of the test particle implied by this probability density is found to be

⟨r^2⟩_nonlinear time = 2 d τ^β,

where the angular brackets denote the expectation value with respect to the associated probability density function, ⟨f(x)⟩ = ∫ dx P(x,x';τ) f(x). For β = 1 this corresponds to a Wiener process of normal diffusion; the case β < 1 is subdiffusive. There are actually two possible stochastic processes underlying this nonlinear-time diffusion. One is scaled Brownian motion (SBM), i.e., a Wiener process which takes place in nonlinear time. The second is fractional Brownian motion (FBM), which is a stochastic process with correlated increments and thus non-Markovian. (See, e.g., <cit.> for details and references.) When the leading theories of quantum gravity are viewed in the anomalous diffusion light, whether the relevant stochastic processes are Markovian or non-Markovian has special significance, because it may reveal the correlation and memory effects in the kinematics and dynamics of the basic microscopic constituents of spacetime, an essential goal of quantum gravity.

§ FRACTAL SPACETIMES IN STOCHASTIC GRAVITY?

In summary, anomalous diffusion has been used as a probe into the quantum nature of spacetime in several quantum gravity theories <cit.>. Here, what is studied is anomalous diffusion equations on a classical flat space, not even curved spacetime.
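Before leaving the diffusion toolkit, note that the subdiffusive scaling ⟨x^2⟩ ∝ t^β of the previous section is easy to exhibit numerically. The Python sketch below simulates a CTRW with power-law (Pareto) waiting times and Gaussian jump lengths; all parameter values are illustrative assumptions, and the simulation is purely classical, with no quantum gravity input.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, n_walkers, n_jumps = 0.6, 2_000, 2_000

# Waiting times with tail psi(t) ~ t^-(1+beta): shifted Pareto draws.
waits = rng.pareto(beta, size=(n_walkers, n_jumps)) + 1.0
arrival = np.cumsum(waits, axis=1)               # time at which each jump occurs
jumps = rng.normal(size=(n_walkers, n_jumps))    # Gaussian jump lengths

for T in (1e2, 1e3, 1e4):
    # Walker position at time T: sum of all jumps made before T.
    x_T = np.where(arrival <= T, jumps, 0.0).sum(axis=1)
    print(f"T = {T:8.0f}   <x^2> = {np.mean(x_T**2):9.2f}   (expect growth ~ T^{beta})")
```

The printed mean squared displacements grow roughly as T^0.6 rather than linearly in T, the hallmark of subdiffusion.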
The point is to get some idea of how new qualitative features, such as fractal structure, arise. A dynamical dimensional change is captured by a modification of the Laplacian operator appearing in the classical diffusion equation. The effective metric `seen' by the diffusing particle depends on the momentum of the probe. Expressing this metric through a fixed reference scale leads to a modified diffusion equation providing an effective description of the propagation of the probe particle on the quantum gravity background. A multifractal structure with spectral dimension d_S such as given in Eq. (<ref>) depends on the probed length scale, whereby one sees a clear transition from the classical to the semiclassical to the quantum regime. But, as stressed before, the time in such processes is not the physical time in quantum gravity (where time does not exist), and many technical and conceptual issues remain.

These reported activities in recent years are from `Top-Down' theories. We now ask if new features like fractal spacetime may appear from the `Bottom-Up', namely, extrapolating upward in energy from the low energy realm described by general relativity. We know of the existence of the semiclassical and stochastic gravity regimes. Using the structural framework of stochastic gravity to probe at increasingly finer scales, we now ask whether, and how, we can see some signs of fractal structure in spacetime. In the first part of my talk I motivated this feature from some long-running programs of quantum gravity theories, and in the second part I introduced some tools for examining these possibilities, amongst them anomalous diffusion processes driven by non-Gaussian noises. Let us now examine this pathway, try to identify new challenges, and see what new measures we need to take to meet them.

First, calculate the vacuum expectation values of the higher correlation functions of the stress energy tensor of quantum matter fields and solve the corresponding order Einstein-Langevin equations. This is an extension of what has been done quite nicely in (Gaussian) stochastic gravity with Gaussian noises from the two-point function of the stress energy tensor – the noise kernel – acting as the source. The solutions of the ELEq yield the second order correlations in the spacetime sector, as was done in <cit.> for the correlation function of the Einstein tensor in Minkowski spacetime, and recently for the Weyl and Riemann second order correlators <cit.> in de Sitter space, which contain information about the induced metric fluctuations.

Now, for the higher moments of the stress tensor, even though there are no obvious ways to identify them as noise, one can in principle solve the ELEq to obtain the higher order correlators of the geometric objects, e.g., the Riemann tensor correlator (call this the geometric route). One can nonetheless at first a) examine the regime of a weakly non-Gaussian stress tensor, meaning, look at small departures from the second order correlations in the stress tensor, and b) carry out a perturbative calculation using as background spacetime the Gaussian second-moment-induced solutions (Einstein or Riemann tensors) mentioned above. The results of Ford et al and Verdaguer et al will be useful for these steps a) and b) respectively.
The alternative route, of viewing this as a stochastic process, whereby one can conceptualize the physical picture more easily (that is what led me to anticipate what kind of theory should lie beyond semiclassical gravity) and can borrow known techniques toward finding solutions of the stochastic equations (call this the stochastic route), requires identifying the non-Gaussian noises from the higher moments. The challenge here is that there is no clean separation between the real and imaginary parts of the influence action, and there is no non-Gaussian functional integral identity whereby one can interpret a quantum object in terms of a classical stochastic variable. However, one can first consider a case of weak non-Gaussianity (from the results for the third moment of the stress tensor by Fewster et al <cit.>), using perturbative methods off the known second moment results (where noise is well defined via the Feynman-Vernon identity). This is the route currently pursued with H. T. Cho <cit.>.

Carrying out the above-stated tasks is not easy, but a bigger challenge lies ahead. I see three demands, both conceptual and technical. There may be a need to 1) create, figuratively speaking, new positions for newcomers, 2) anticipate the backreaction of non-Gaussian noise bringing forth a modification of the Laplace-Beltrami wave operator which contains the germs of a fractal spacetime, and 3) introduce new dynamical variables at shorter scales for the more fundamental constituents of a more basic theory, and from them construct effective theories for collective variables which can match the more familiar low energy theories.

On 1), recall an example given earlier: renormalization of the stress energy tensor for quantum matter fields requires the introduction of higher-order curvature terms. Thus, in going from a test field theory, namely quantum field theory in curved spacetime, to semiclassical gravity with backreaction, whereby the background spacetime dynamics is determined in a self-consistent manner, one needs to create new positions for the newcomers, namely, three seats for the Δ R, the Ricci-curvature-squared and the Weyl-curvature-squared terms (in 4D only two places are needed, thanks to the Gauss-Bonnet theorem). I would imagine new places need to be created in the spacetime sector (the LHS of the ELEq) to accommodate the induced geometric fluctuations due to the backreaction of the higher moments of the stress energy tensor.

2) may be the critical step, namely, the appearance of new structures like fractals, departing from the familiar smooth manifold structure found at lower energies or when probed with lower resolution. The example we gave from the asymptotic safety quantum gravity program illustrates the change-over of dimensionality from the classical regime to the semiclassical to the quantum. The stochastic regime, lying in between the quantum and the semiclassical, should have a signature in this regard by itself. The challenge is to identify what types of non-Gaussian noise, corresponding to what correlation order of the quantum matter source, induce what kinds of generalized wave operators, permitting what kinds of fractal structures.

3) is a task for all effective field theories, but posed here in the reversed direction: namely, not merely constructing EFTs (nuclear physics, for example) from a proven-valid microscopic theory with known basic constituents (QCD, for the same example), but taking what is known in an effective theory to posit the more basic theories and even predict the more basic constituents and their dynamics.
Phase transitions (e.g., <cit.>) in these interfaces add to the unpredictability but also the richness of possibilities. This is where cooperation between the `Bottom-Up' and `Top-Down' theories becomes necessary. What we have done here is to explore some useful ideas and tools for Task 2).
Acknowledgment
I thank Prof. Thomas Elze and the organizers of DICE2016 for their invitation, Profs. Steve Carlip, Larry Ford, Renate Loll and Enric Verdaguer for correspondences on this topic, and Prof. Hing-Tong Cho for discussions on the perturbative approach in treating non-Gaussian noises. Main themes of this talk were first presented at the Peyresq Physics Meeting in June 2016, supported partially by OLAM, Association pour la Recherche Fondamentale, Bruxelles. This essay contains materials from a part of the last chapter of <cit.>.
HVLivRev B. L. Hu and E. Verdaguer, “Stochastic gravity: Theory and Applications”, Living Reviews in Relativity 11 (2008) 3 [arXiv:0802.0658]
HVBook B. L. Hu and E. Verdaguer, Semiclassical and Stochastic Gravity – Quantum Field Effects on Curved Spacetime (Cambridge University Press, Cambridge, 2019)
ELE E. Calzetta and B. L. Hu, Phys. Rev. D 49, 6636 (1994); B. L. Hu and A. Matacz, Phys. Rev. D 51, 1577 (1995); B. L. Hu and S. Sinha, Phys. Rev. D 51, 1587 (1995); A. Campos and E. Verdaguer, Phys. Rev. D 53, 1927 (1996); F. C. Lombardo and F. D. Mazzitelli, Phys. Rev. D 55, 3889 (1997).
stogra99 B. L. Hu, “Stochastic Gravity”, Int. J. Theor. Phys. 38, 2987 (1999); gr-qc/9902064.
HPZ92 B. L. Hu, J. P. Paz and Y. Zhang, Phys. Rev. D 45, 2843 (1992)
kinQG B. L. Hu, “A Kinetic Theory Approach to Quantum Gravity”, Int. J. Theor. Phys. 41 (2002) 2111-2138 [gr-qc/0204069]
E/QG B. L. Hu, “Emergent/Quantum Gravity: Macro/Micro Structures of Spacetime”, DICE2008, J. Phys. Conf. Ser. 174 (2009) 012015, arXiv:0903.0878
FeyVer R. Feynman and F. Vernon, Ann. Phys. (NY) 24, 118 (1963).
FFR C. Fewster, L. Ford and T. Roman, Probability distributions of smeared quantum stress tensors, Phys. Rev. D 81, 121901(R) (2010); C. Fewster, L. Ford and T. Roman, Probability distributions for quantum stress tensors in four dimensions, Phys. Rev. D 85, 125038 (2012)
CH00 E. Calzetta and B. L. Hu, “Correlations, Decoherence, Dissipation and Noise in Quantum Field Theory”, in Heat Kernel Techniques and Quantum Gravity, ed. S. Fulling (Texas A&M Press, College Station 1995); hep-th/9501040; E. Calzetta and B. L. Hu, Phys. Rev. D 61, 025012 (2000).
HPZ93 B. L. Hu, J. P. Paz and Y. Zhang, Phys. Rev. D 47, 1576 (1993)
RHS98 S. A. Ramsey, B. L. Hu, and A. M. Stylianopoulos, Phys. Rev. D 57, 6003-6021 (1998).
MetKla00 R. Metzler and J. Klafter, Phys. Rep. 339, 1 (2000).
DimRed G. 't Hooft, Dimensional reduction in quantum gravity, in Salamfestschrift, A. Ali, J. Ellis and S. Randjbar-Daemi eds., World Scientific, Singapore (1993) [gr-qc/9310026]; J. Ambjørn, J. Jurkiewicz and R. Loll, Spectral dimension of the universe, Phys. Rev. Lett. 95 (2005) 171301; D. Benedetti and J. Henson, Spectral geometry as a probe of quantum spacetime, Phys. Rev. D 80 (2009) 124036 [arXiv:0911.0401]; S. Carlip, “The small scale structure of spacetime”, in Foundations of Space and Time, G. Ellis, J. Murugan and A. Weltman eds., Cambridge University Press, Cambridge U.K. (2012) [arXiv:1009.1136]; S. Carlip, “Dimensional Reduction in Quantum Gravity”, Talk at the “Shapes of Gravity” workshop, Nijmegen, the Netherlands, April 2016
Carlip S. Carlip, “Spontaneous dimensional reduction in short-distance quantum gravity?”, AIP Conf. Proc. 1196 (2009) 72 [arXiv:0909.3329]
AGJL J. Ambjørn, A. Goerlich, J. Jurkiewicz, R. Loll, “Nonperturbative Quantum Gravity”, Physics Reports 519, 127-210 (2012)
FractalST O. Lauscher and M. Reuter, Fractal spacetime structure in asymptotically safe gravity, JHEP 10 (2005) 050; M. Reuter and F. Saueressig, Asymptotic Safety, Fractals, and Cosmology, Lectures given at the Sixth Aegean Summer School on Quantum Gravity and Quantum Cosmology, Chora, Naxos (Greece), 2011; G. Calcagni, Fractal universe and quantum gravity, Phys. Rev. Lett. 104 (2010) 251301 [arXiv:0912.3142]; L. Modesto, Fractal structure of loop quantum gravity, Classical Quant. Grav. 26 (2009) 242002; D. Benedetti, Fractal properties of quantum spacetime, Phys. Rev. Lett. 102 (2009) 111303; D. Benedetti and J. Henson, Spectral geometry as a probe of quantum spacetime, Phys. Rev. D 80 (2009) 124036; T. P. Sotiriou, M. Visser and S. Weinfurtner, Spectral dimension as a probe of the ultraviolet continuum regime of causal dynamical triangulations, Phys. Rev. Lett. 107 (2011) 131303 [arXiv:1105.5646]; G. Calcagni, A. Eichhorn and F. Saueressig, Probing the quantum nature of spacetime by diffusion, Phys. Rev. D 87 (2013) 124028 [arXiv:1304.7247]; G. Calcagni and G. Nardelli, Spectral dimension and diffusion in multiscale spacetimes, Phys. Rev. D 88 (2013) 124025 [arXiv:1304.2709]; G. Calcagni, D. Oriti and J. Thürigen, Spectral dimension of quantum geometries, Class. Quant. Grav. 31, 135014 (2014)
Weinberg Steven Weinberg, “Ultraviolet divergences in quantum theories of gravitation”, in General Relativity: An Einstein centenary survey, ed. S. W. Hawking and W. Israel (Cambridge University Press 1979).
Reuter O. Lauscher and M. Reuter, “Ultraviolet Fixed Point and Generalized Flow Equation of Quantum Gravity”, Phys. Rev. D 65, 025013 (2002); M. Reuter and F. Saueressig, Lect. Notes Phys. 863, 185 (2013).
AnomDiffProbe G. Calcagni, A. Eichhorn and F. Saueressig, Probing the quantum nature of spacetime by diffusion, Phys. Rev. D 87 (2013) 124028; G. Calcagni and G. Nardelli, Spectral dimension and diffusion in multiscale spacetimes, Phys. Rev. D 88 (2013) 124025; G. Calcagni, D. Oriti, and J. Thürigen, Spectral dimension of quantum geometries, Class. Quantum Grav. 31, 135014 (2014); J. Thürigen, Discrete quantum geometries and their effective dimension, arXiv:1510.08706, PhD thesis, Humboldt-Universität zu Berlin
MarVer R. Martin and E. Verdaguer, Phys. Rev. D 60, 084008 (1999); Phys. Rev. D 61, 124024 (2000).
Frob M. B. Fröb, A. Roura, E. Verdaguer, JCAP 1208, 009 (2012); M. B. Fröb, A. Roura, E. Verdaguer, JCAP 1407, 048 (2014); M. B. Fröb, JCAP 1412, 010 (2014).
Mattingly James Mattingly, “Emergence of spacetime in stochastic gravity”, Studies in History and Philosophy of Modern Physics 44(3), August 2013, pages 329-337
GRhydro B. L. Hu, “General Relativity as Geometro-Hydrodynamics”, Invited talk at the Second Sakharov Conference, Moscow, May 1996; gr-qc/9607070.
BKL Belinskii, V. A.; Khalatnikov, I. M.; Lifshitz, E. M., “Oscillatory Approach to a Singular Point in the Relativistic Cosmology”, Advances in Physics 19(80), 525-573 (1970)
Misner Charles W. Misner, “Mixmaster Universe”, Phys. Rev. Lett. 22, 1071 (1969).
Eardley D. M. Eardley, R. K. Sachs and E. P-T. Liang, J. Math. Phys. 13, 99 (1972).
Carlip2011 S. Carlip, R. A. Mosna and J. P. M. Pitelli, Vacuum Fluctuations and the Small Scale Structure of Spacetime, Phys. Rev.
Lett. 107, 021303 (2011)
CES G. Calcagni, A. Eichhorn and F. Saueressig, Probing the quantum nature of spacetime by diffusion, Phys. Rev. D 87 (2013) 124028
AJL05 J. Ambjørn, J. Jurkiewicz and R. Loll, Spectral dimension of the universe, Phys. Rev. Lett. 95 (2005) 171301; J. Ambjørn, J. Jurkiewicz, and R. Loll, Phys. Rev. D 72, 064014 (2005)
Kommu R. Kommu, Class. Quant. Grav. 29, 105003 (2012)
Cooperman J. Cooperman, “Scale-dependent homogeneity measures for causal dynamical triangulations”, Phys. Rev. D 90, 124053 (2014)
HuOC84 B. L. Hu and D. J. O'Connor, Phys. Rev. D 30, 743 (1984).
SinhaHu S. Sinha and B. L. Hu, Phys. Rev. D 38, 2422 (1988).
EichKosl Astrid Eichhorn and Tim Koslowski, “Towards phase transitions between discrete and continuum quantum spacetime from the renormalization group”, Phys. Rev. D 90, 104039 (2014)
Mankin R. Mankin, K. Laas and A. Sauga, “Generalized Langevin equation with multiplicative noise: Temporal behavior of the autocorrelation functions”, Phys. Rev. E 83, 061131 (2011).
ChoHuNG H. T. Cho and B. L. Hu, “Non-Gaussian noise and nonlinear Langevin Equation in Quantum Brownian Motion”
MetKla04 Ralf Metzler and Joseph Klafter, “The restaurant at the end of the random walk: recent developments in the description of anomalous transport by fractional dynamics”, J. Phys. A: Math. Gen. 37 (2004) R161-R208.
ReuSau M. Reuter and F. Saueressig, “Fractal space-times under the microscope: a renormalization group view on Monte Carlo data”, J. High Energy Phys. 12 (2011) 012.
CounJurk D. N. Coumbe and J. Jurkiewicz, “Evidence for asymptotic safety from dimensional reduction in causal dynamical triangulations”, Journal of High Energy Physics 03 (2015) 151.
CDTpt J. Ambjørn, S. Jordan, J. Jurkiewicz, R. Loll, “A second-order phase transition in CDT”, Phys. Rev. Lett. 107, 211303 (2011)
From the DICE Talk: The mean-squared displacement. For normal diffusion there is a linear dependence on time, ⟨x²(t)⟩ = 2dK₁t; for anomalous diffusion, ⟨x²(t)⟩ ∝ t^α with a generalized diffusion constant K_α. Here, d is the (embedding) spatial dimension, and K₁ and K_α are the normal and generalized diffusion constants, of dimensions cm² s⁻¹ and cm² s⁻α, respectively. The process is categorized as subdiffusive (dispersive, slow) if 0 < α < 1, or superdiffusive (enhanced, fast) if 1 < α. Usually the domain 1 < α ≤ 2 is considered, α = 2 being the ballistic limit described by the wave equation, or its forward and backward modes (Landau and Lifshitz 1984). Exception: unconfined Lévy flights, for which one observes a diverging mean squared displacement (refer to the Table in Metzler and Klafter).
Two groups (from Metzler and Klafter 2004): the fractional dynamical equations corresponding to dispersive transport (SD), Lévy flights (LF), or Lévy walks (LW) are highly non-local, and carry far-reaching correlations in time and/or space, represented in the integro-differential nature (with slowly decaying power-law kernels) of these equations. In contrast, fractional Brownian motion (FBM) or the generalized Langevin equation (GLE) on the macroscopic level are local in space and time, and carry merely time- or space-dependent coefficients. Anomalous diffusion can also be cast in terms of non-linear Fokker–Planck equations based on non-extensive statistical approaches.
Simplest approach to fractional dynamics – via a continuous time random walk (CTRW). In a standard random walk process each step is of a fixed length in a random direction at each tick of a system clock. A process having constant spatial and temporal increments, Δx and Δt, will give rise to the standard diffusion process in the long-time limit, i.e., after a sufficient number of steps the associated random variable (the accumulated position, where x_i is the position after the i-th step) will be distributed as a Gaussian due to the central limit theorem. In a CTRW process both the jump length and the waiting time are distributed according to two PDFs, λ(x) and ψ(t). The propagator for such a CTRW process in the absence of an external force is given in Fourier–Laplace space by the Montroll–Weiss equation, W(k,u) = [(1 − ψ(u))/u] / [1 − λ(k)ψ(u)]. Subdiffusion corresponds to a fractional diffusion equation.
Three characteristic regimes with different spectral dimensions – three regimes where the spectral dimension is approximately constant over many orders of magnitude: 1. At large distances, the classical regime, where the spectral dimension agrees with both the Hausdorff and the topological dimension of the spacetime. 2. At smaller distances one encounters a semiclassical regime with a different spectral dimension, before entering 3. the fixed-point quantum gravity regime with d_s = 2.
§ SUMMARY
1. Higher moments of T_μν in Minkowski spacetime: Fewster, Ford, Roman, ...
2. Near the Planck scale: dimensional reduction. Carlip, Loll et al: spacetime at small length scales becomes effectively 2-dimensional – a UV, not IR, statement.
3. Spectral dimension from Brownian motion, e.g., heat kernel expansion.
4. Non-Gaussian noise / anomalous diffusion: very different spectral dimensions from Gaussian or normal diffusion: small vs large scale.
5. Fractal spacetime, e.g., Reuter / Ambjørn, Loll / Calcagni, Oriti et al.
6. Does backreaction of non-Gaussian noise bring forth a modification of the wave operator reflecting a possible fractal spacetime dimension?
7. Solutions of the Einstein-Langevin equation with non-Gaussian noise source involving higher moments of T_μν. Stay Tuned!
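As a concrete illustration of the subdiffusive scaling ⟨x²(t)⟩ ∝ t^α invoked above, the following is a minimal simulation sketch (not part of the original talk; the exponent α = 0.6 and all other parameter choices are illustrative assumptions): a decoupled CTRW with Gaussian jump lengths and Pareto-distributed waiting times, ψ(t) = α t^(−1−α) for t ≥ 1, whose ensemble-averaged mean-squared displacement should grow as t^α.

```python
import numpy as np

def ctrw_msd(alpha=0.6, n_walkers=1000, n_steps=2000, seed=0):
    """Decoupled 1D CTRW: Gaussian jumps, Pareto waiting times with
    survival P(tau > t) = t**(-alpha), t >= 1 (infinite mean for alpha < 1)."""
    rng = np.random.default_rng(seed)
    t_obs = np.logspace(1, 4, 25)                 # observation times
    # inverse-CDF sampling of the Pareto(alpha) waiting-time law
    waits = (1.0 - rng.random((n_walkers, n_steps))) ** (-1.0 / alpha)
    arrival = np.cumsum(waits, axis=1)            # renewal (jump) times
    paths = np.cumsum(rng.normal(size=(n_walkers, n_steps)), axis=1)
    msd = np.empty_like(t_obs)
    for i, t in enumerate(t_obs):
        n_jumps = np.sum(arrival <= t, axis=1)    # jumps completed by time t
        x = paths[np.arange(n_walkers), np.maximum(n_jumps - 1, 0)]
        msd[i] = np.mean(np.where(n_jumps > 0, x, 0.0) ** 2)
    return t_obs, msd

t, msd = ctrw_msd()
print("fitted MSD exponent:", np.polyfit(np.log(t), np.log(msd), 1)[0])  # ~ alpha
```

A log-log fit of the resulting MSD against time recovers an exponent close to α rather than the value 1 of normal diffusion; this probe-scale-dependent spreading is exactly what the spectral-dimension arguments above exploit.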
http://arxiv.org/abs/1702.08145v1
{ "authors": [ "B. L. Hu" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170227044715", "title": "Fractal Spacetimes in Stochastic Gravity? -- Views from Anomalous Diffusion and the Correlation Hierarchy" }
Least-squares Solutions of Linear Differential Equations
Daniele Mortari (Professor, Aerospace Engineering, Texas A&M University, College Station, TX 77843-3141, USA. E-mail: mortari@tamu.edu), dedicated to John Lee Junkins
This study shows how to obtain least-squares solutions to initial and boundary value problems for nonhomogeneous linear differential equations with nonconstant coefficients of any order. However, without loss of generality, the approach has been applied to second order differential equations. The proposed method has two steps. The first step consists of writing a constrained expression, introduced in Ref. <cit.>, that has embedded the differential equation constraints. These expressions are given in terms of a new unknown function, g(t), and they satisfy the constraints, no matter what g(t) is. The second step consists of expressing g(t) as a linear combination of m independent known basis functions, g(t) = ξ^T h(t). Specifically, Chebyshev orthogonal polynomials of the first kind are adopted for the basis functions. This choice requires rewriting the differential equation and the constraints in terms of a new independent variable, x ∈ [-1, +1]. The procedure leads to a set of linear equations in terms of the unknown coefficients vector, ξ, that is then computed by least-squares. Numerical examples are provided to quantify the solution accuracy for initial and boundary value problems as well as for a control-type problem, where the state is defined in one point and the costate in another point.
Acronyms used throughout this paper:
DE → Differential Equation
IVP → Initial Value Problem
BVP → Boundary Value Problem
LS → Least-Squares
§ INTRODUCTION
The n-th order nonhomogeneous ordinary linear Differential Equation (DE) with nonconstant coefficients is the equation
∑_{i=0}^{n} f_i(t) d^i y(t)/dt^i = f(t),
where f(t) and the n + 1 functions f_i(t) can be any nonlinear continuous functions, and t (often the time) is the independent variable. This kind of equation appears in many problems and in almost all scientific disciplines. Equation (<ref>) can be solved by the method of variation of parameters: using n linearly independent solutions, y_1(t), ⋯, y_n(t), of the homogeneous part. Then, the general solution is just a linear combination of the independent solutions plus the particular solution associated with the nonhomogeneous equation <cit.>. The variation of parameters method relies on the capability of finding the n linearly independent solutions. Unfortunately, there is no general method to find these solutions. Another method, called undetermined coefficients <cit.>, is restricted to the case of constant coefficients only. Finally, in the specific case where f(t) and all the f_i(t) functions are polynomials, an approximate solution can be found by power series <cit.>. However, in this study, f(t) and all the f_i(t) functions can be any nonlinear continuous functions that are nonsingular in the integration time range.
The proposed Least-Squares (LS) method can be applied to solve Eq. (<ref>) for any value of n. However, without loss of generality and for the sake of brevity, the approach is here applied to the second order nonhomogeneous linear DE with nonconstant coefficients,
f_2(t) d²y(t)/dt² + f_1(t) dy(t)/dt + f_0(t) y(t) = f(t).
It is important to outline that, if the functions f_1(t), f_0(t), and f(t) are continuous and nonsingular within the integration range, Initial Value Problems (IVP) always admit solutions, while Boundary Value Problems (BVP) may have a single, multiple, no, or infinite solutions. A final special analysis is dedicated to a particular BVP (typical of optimal control) where the variable is a vector, {x, λ}, and where the state vector is defined at initial time, x(t_0) = x_0, and the costate vector at final time, λ(t_f) = λ_f.
§ THE CONSTRAINED EXPRESSIONS
The key idea of this study is to search the solution of Eq. (<ref>) using constrained expressions, whose theory is presented in Ref. <cit.>. These expressions have embedded all the DE constraints,
d^{d_i} y/dt^{d_i} |_{t = t_i} = y_{t_i}^{(d_i)},
where the n-element vector d contains the constraints' derivative orders and the n-element vector t indicates where the constraints are specified. The constrained equations adopted in this study are expressed as,
y(t) = g(t) + ∑_{i=1}^{n} β_i(t, t) [y_{t_i}^{(d_i)} - g_{t_i}^{(d_i)}], where: β_i^{(d_k)}(t_k, t) = δ_{ik},
expressions that are linear in the unknown function g(t) and in its derivatives, g_{t_i}^{(d_i)}, evaluated at the constraint times, and where δ_{ik} is the Kronecker delta. The β_i(t, t) are special functions of the time and of the constraint times defined by the vector t. The β_i functions given in Eq. (<ref>) are not unique, but they are characterized by β_i^{(d_k)}(t_k, t) = δ_{ik}. Detailed derivations and presentations of these constrained expressions can be found in Ref. <cit.>. However, let's give three constrained expression examples.
Example #1. In the first example consider the function,
y(t) = g(t) + [t(2t_2 - t)/(2(t_2 - t_1))](ẏ_1 - ġ_1) + [t(t - 2t_1)/(2(t_2 - t_1))](ẏ_2 - ġ_2),
where β_1(t, t) = t(2t_2 - t)/(2(t_2 - t_1)) and β_2(t, t) = t(t - 2t_1)/(2(t_2 - t_1)). The first derivative of Eq. (<ref>) is
ẏ(t) = ġ(t) + [(t_2 - t)/(t_2 - t_1)](ẏ_1 - ġ_1) + [(t - t_1)/(t_2 - t_1)](ẏ_2 - ġ_2).
It is easy to verify that, when t = t(1) = t_1 then ẏ(t_1) = ẏ_1, and when t = t(2) = t_2 then ẏ(t_2) = ẏ_2. Therefore, no matter what g(t) is, Eq. (<ref>) can be used as a constrained expression for functions subject to: ẏ(t_1) = ẏ_1 and ẏ(t_2) = ẏ_2.
Example #2. This example is for a function subject to the following n = 4 constraints,
d²y/dt²|_{t_1} = ÿ_{t_1}, y(t_2) = y_{t_2}, y(t_3) = y_{t_3}, and dy/dt|_{t_4} = ẏ_{t_4},
where d = {2, 0, 0, 1}. Let's select the constraint time vector as t = {-1, 0, 2, 2}. A constrained expression with all four constraints embedded is
y(t) = g(t) + [(-4 + 4t - t²)t/14](ÿ_{t_1} - g̈_{t_1}) + [(28 - 24t + 3t² + t³)/28](y_{t_2} - g_{t_2}) + [(24 - 3t - t²)t/28](y_{t_3} - g_{t_3}) + [(-10t + 3t² + t³)/14](ẏ_{t_4} - ġ_{t_4}),
where β_1(t, t) = (-4 + 4t - t²)t/14, β_2(t, t) = (28 - 24t + 3t² + t³)/28, β_3(t, t) = (24 - 3t - t²)t/28, and β_4(t, t) = (-10t + 3t² + t³)/14. It is not difficult to verify that y(t), as defined by Eq. (<ref>), has embedded all four constraints, (ÿ_{t_1}, y_{t_2}, y_{t_3}, ẏ_{t_4}), independently of what g(t) is.
Example #3.
This example shows the constrained expression when the constraints are specified in a relative way, as for
y(t_1) = y(t_2) and ẏ(t_1) = ẏ(t_2).
In this specific case, a constrained expression is
y(t) = g(t) + [t/(t_2 - t_1)](g_1 - g_2) + [t(t - (t_1 + t_2))/(2(t_2 - t_1))](ġ_1 - ġ_2).
It is straightforward to prove that this equation satisfies the two relative constraints, y_1 = y_2 and ẏ_1 = ẏ_2.
These three examples show that the solution, y(t), can be expressed in terms of an unknown function, g(t), such that y(t) always satisfies all DE constraints. This allows us to re-write the original DE in terms of the new function, g(t), thus obtaining a DE with the constraints already embedded in it. This new DE has two interesting properties: it is not subject to external constraints and it is linear in g(t) and its derivatives. In this study simple constrained equations have been provided to solve linear DE with nonconstant coefficients for IVP, in Eqs. (<ref>, <ref>, <ref>), and for BVP, in Eqs. (<ref>, <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, <ref>).
§.§ The least-squares approach
Since the function g(t) is free to be selected, it can be expressed as a linear combination of a set of m linearly independent basis functions, h_k(t),
g(t) = ξ^T h(t) = ∑_{k=0}^{m} ξ_k h_k(t).
This means that two distinct functions, h_i(t) and h_j(t) with i ≠ j, must span different function spaces. In addition, all functions h_k(t) and their derivatives must be continuous and nonsingular within the time range. By doing this, the coefficients ξ_k of Eq. (<ref>) become our unknowns. Once these coefficients are computed, the g(t) function is known and, consequently, Eq. (<ref>) provides the DE solution. Examples of basis functions are polynomials (e.g., Lagrange, Legendre, monomial, Chebyshev, etc.), Fourier series, monomial plus Fourier series, and any combination of continuous and nonsingular functions spanning different function spaces.
By substituting the expression of y(t), given in Eq. (<ref>), in the DE of Eq. (<ref>), along with the expression of g(t), given in Eq. (<ref>), and its derivatives, ġ(t) = ξ^T ḣ(t) and g̈(t) = ξ^T ḧ(t), a linear equation in terms of the unknown coefficients, ξ, is obtained. This equation can then be specialized for a set of N values of t_j (e.g., uniformly distributed in the integration time range), obtaining
∑_{k=0}^{m} ξ_k p_k(t_j) = p(t_j)^T ξ = λ(t_j),
where p(t_j) is an (m+1)-long known vector, j ∈ [1, N], and N ≥ m + 1. This set of N equations in m + 1 unknowns (usually, N ≫ m) can be set in the matrix form
P ξ = [p_0(t_1), p_1(t_1), ⋯, p_m(t_1); p_0(t_2), p_1(t_2), ⋯, p_m(t_2); ⋮; p_0(t_N), p_1(t_N), ⋯, p_m(t_N)] [ξ_0; ξ_1; ⋮; ξ_m] = [λ(t_1); λ(t_2); ⋮; λ(t_N)] = λ,
admitting the LS solution
ξ = (P^T P)^{-1} P^T λ.
This LS solution is computed by scaling the P matrix in order to decrease the condition number of P^T P and, consequently, the numerical errors. This procedure is applied in detail for a second order nonhomogeneous linear DE with nonconstant coefficients for IVP and BVP, respectively.
§.§ The selected basis functions
In all the examples provided in this paper, the Chebyshev Orthogonal Polynomials (COP) of the first kind have been selected as the basis function set. Note that this selection may not be the best solution. In fact, while COP are a versatile basis to describe almost any kind of function, the COP derivatives are affected by a sort of Runge's phenomenon, with a one order of magnitude increase at each subsequent derivative. Figure <ref> shows this effect for the first two derivatives.
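Before specializing further, it is worth verifying the constrained-expression idea numerically. The sketch below (illustrative only, not from the paper; the choice of g(t) and of the constraint values is arbitrary by construction) checks that the expression of Example #1 reproduces the derivative constraints ẏ(t_1) = ẏ_1 and ẏ(t_2) = ẏ_2 for an arbitrarily chosen g(t):

```python
import numpy as np

t1, t2 = 1.0, 4.0
yd1, yd2 = 0.7, -0.3                       # prescribed derivative constraints
g  = lambda t: np.sin(3 * t) + 0.2 * t**3  # arbitrary free function g(t)
gd = lambda t: 3 * np.cos(3 * t) + 0.6 * t**2

def y(t):
    # constrained expression of Example #1
    b1 = t * (2 * t2 - t) / (2 * (t2 - t1))
    b2 = t * (t - 2 * t1) / (2 * (t2 - t1))
    return g(t) + b1 * (yd1 - gd(t1)) + b2 * (yd2 - gd(t2))

eps = 1e-6                                 # central finite differences for ydot
for tc, yd in [(t1, yd1), (t2, yd2)]:
    print((y(tc + eps) - y(tc - eps)) / (2 * eps), "should equal", yd)
```

Whatever g(t) is substituted, the printed derivatives match ẏ_1 and ẏ_2 to finite-difference accuracy, which is the property the least-squares step exploits: the constraints never need to be imposed on the coefficients ξ.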
Since COP are defined in terms of a new variable, x ∈ [-1, +1], we set x linearly related to t ∈ [t_1, t_2], as
x = 2(t - t_1)/(t_2 - t_1) - 1 ⟷ t = t_1 + (x + 1)(t_2 - t_1)/2,
where t_2 is specifically defined in BVP, while it can be considered as the integration upper limit in IVP. Setting δt = t_2 - t_1, the derivatives in terms of the new variable are,
dy/dt = (dy/dx)(dx/dt) = (2/δt)(dy/dx) and d²y/dt² = d/dt(dy/dt) = d/dx(dy/dt)(dx/dt) = d/dx[(2/δt)(dy/dx)](dx/dt) = (4/δt²)(d²y/dx²).
Therefore, Eq. (<ref>) can be re-written as,
(4/δt²) f_2(x) d²y(x)/dx² + (2/δt) f_1(x) dy(x)/dx + f_0(x) y(x) = f(x),
where the functions f_2, f_1, f_0, and f are now expressed in terms of the new variable using Eq. (<ref>). By changing the integration variable, particular attention must be given to the constraints specified in terms of derivatives. In fact, the derivatives dy/dt and d²y/dt² are related to the derivatives dy/dx and d²y/dx² as specified by Eq. (<ref>). Therefore, the constraints provided in terms of the first and/or second derivatives need to comply with the rules given in Eq. (<ref>),
dy/dx|_{x_1} = (dy/dt|_{t_1})(δt/2) = (δt/2) ẏ_1 = ẏ_1x and d²y/dx²|_{x_1} = (d²y/dt²|_{t_1})(δt²/4) = (δt²/4) ÿ_1 = ÿ_1x,
meaning that the constraints on the derivatives in terms of the new x variable now depend on the integration time range.
§ LEAST-SQUARES SOLUTION OF INITIAL VALUE PROBLEMS
Three distinct IVPs can be considered, depending on the kind of DE constraints. Consider first the most classic problem, where the function and its first derivative are specified at one point.
§.§ Initial Value Problems subject to: y(t_1) = y_1 and ẏ(t_1) = ẏ_1
In this case, the constraints written in terms of the new variable (x) are,
y(x_1 = -1) = y_1 and dy/dx|_{x_1 = -1} = ẏ_1x = (δt/2) ẏ_1,
and a simple constrained expression for this IVP is,
y(x) = g(x) + (y_1 - g_1) + (x + 1)(ẏ_1x - ġ_1),
where g(-1) = g_1 and ġ(-1) = ġ_1. Again, the solution y(x), as expressed by Eq. (<ref>), has embedded the constraints, no matter what the function g(x) is. Substituting y(x), as expressed by Eq. (<ref>), in Eq. (<ref>), we obtain,
(4/δt²) f_2 d²g/dx² + (2/δt) f_1 (dg/dx - ġ_1) + f_0 [g - g_1 - ġ_1(x + 1)] = f - (2/δt) ẏ_1x f_1 - f_0 [y_1 + ẏ_1x(x + 1)].
Now, let g(x) be expressed as a linear combination of COPs of the first kind,
g(x) = ∑_{k=0}^{m} ξ_k T_k(x),
which are defined by the recursive function,
T_{k+1} = 2x T_k - T_{k-1}, starting from: T_0 = 1 and T_1 = x.
All derivatives of COP can be computed in a recursive way, starting from
dT_0/dx = 0, dT_1/dx = 1, and d^d T_0/dx^d = d^d T_1/dx^d = 0 (∀ d > 1),
while the subsequent derivatives of Eq. (<ref>) give, for k > 1,
dT_{k+1}/dx = 2 T_k + 2x dT_k/dx - dT_{k-1}/dx,
d²T_{k+1}/dx² = 4 dT_k/dx + 2x d²T_k/dx² - d²T_{k-1}/dx²,
⋮
d^d T_{k+1}/dx^d = 2d d^{d-1}T_k/dx^{d-1} + 2x d^d T_k/dx^d - d^d T_{k-1}/dx^d (∀ d ≥ 1).
In particular, it is easy to show that,
T_k(-1) = (-1)^k, dT_k/dx|_{x=-1} = (-1)^{k+1} k², d²T_k/dx²|_{x=-1} = (-1)^k k²(k² - 1)/3.
Therefore, substituting the expressions given in Eqs. (<ref>-<ref>) in Eq. (<ref>), the following equation
∑_{k=0}^{m} ξ_k {(4/δt²) f_2 d²T_k/dx² + (2/δt) f_1 [dT_k/dx - (-1)^{k+1} k²] + f_0 [T_k - (-1)^k - (-1)^{k+1} k²(x + 1)]} = f - (2/δt) ẏ_1x f_1 - f_0 [y_1 + ẏ_1x(x + 1)]
is obtained. However, particular attention must be given to Eq. (<ref>) because, for k = 0 and k = 1, all three terms multiplying ξ_k vanish,
d²T_k/dx² = dT_k/dx - (-1)^{k+1} k² = T_k - (-1)^k - (-1)^{k+1} k²(x + 1) = 0 for k = 0 and k = 1,
which is equivalent to rewriting Eq. (<ref>) as
∑_{k=2}^{m} ξ_k {(4/δt²) f_2 d²T_k/dx² + (2/δt) f_1 [dT_k/dx - (-1)^{k+1} k²] + f_0 [T_k - (-1)^k - (-1)^{k+1} k²(x + 1)]} = f - (2/δt) ẏ_1x f_1 - f_0 [y_1 + ẏ_1x(x + 1)].
The reason why, in Eq. (<ref>), all three terms vanish for k = 0 and k = 1 derives from the fact that the first two terms of COP are constant and linear in x, and the constrained expression of Eq. (<ref>) is derived using a constant plus a linear expression in x. This means that the basis functions used for g(x) cannot be composed using the same function spaces, namely the constant and the linear expression in x, because these are already adopted to define the constrained expression.
Note that the two derivatives, d²T_k/dx² and dT_k/dx, are specific known polynomials in x, derived using Eqs. (<ref>-<ref>). Therefore, Eq. (<ref>) is a linear equation in terms of the (m - 1) unknown coefficients ξ_k. This allows us to estimate these coefficients by LS, by specializing Eq. (<ref>) for a set of N values x_j, ranging from x_1 = -1 to x_2 = +1. Specifically, we have
p_k(x_j) = (4/δt²) f_2(x_j) d²T_k/dx²|_{x_j} + (2/δt) f_1(x_j) [dT_k/dx|_{x_j} - (-1)^{k+1} k²] + f_0(x_j) [T_k(x_j) - (-1)^k - (-1)^{k+1} k²(x_j + 1)],
λ(x_j) = f(x_j) - (2/δt) ẏ_1x f_1(x_j) - f_0(x_j) [y_1 + ẏ_1x(x_j + 1)].
In the next subsection the proposed approach is applied to a DE with known analytical solution. An accuracy comparison is provided with respect to the solution obtained by a Runge-Kutta-Fehlberg variable-step MATLAB integrator.
§.§ Accuracy tests
Consider integrating the following IVP from t_1 = 1 to t_2 = 4,
t² d²y/dt² - t(t + 2) dy/dt + (t + 2) y = 0 subject to: y(1) = y_1 = 1 and ẏ(1) = ẏ_1 = 0.
This implies ẏ_1x = (3/2) ẏ_1 = 0. The general solution of this equation is y(t) = (2 - e^{t-1}) t. Equation (<ref>) has been solved using the proposed LS approach (with m = 16 and N = 1,000) and integrated using the MATLAB Runge-Kutta-Fehlberg variable-step integrator. The results are shown in Fig. <ref>. In the left plot of Fig. <ref> the absolute values of the mean and standard deviation of the (P ξ - λ) residuals are shown as a function of m. When the residuals' standard deviation reaches its minimum (at m = 17; this value of m implies a 16 × 16 size of the matrix P^T P), the LS approach provides the best accuracy results. The errors with respect to the true solution for the LS approach, and the errors obtained using the MATLAB integrator, are shown in the right plot. For this IVP, the LS method provides about five orders of magnitude accuracy gain with respect to that integrator.
§.§ Initial Value Problems subject to: y(t_1) = y_1 and ÿ(t_1) = ÿ_1 → ÿ_1x = (δt²/4) ÿ_1
Using the constrained equation,
y(x) = g(x) - x(y_1 - g_1) + [(x² + x)/2](ÿ_1x - g̈_1),
Eq. (<ref>) becomes
(4/δt²) f_2 (d²g/dx² - g̈_1) + (2/δt) f_1 [dg/dx + g_1 - g̈_1 (2x + 1)/2] + f_0 [g + g_1 x - g̈_1 (x² + x)/2] = f - (4/δt²) f_2 ÿ_1x - (2/δt) f_1 [-y_1 + ÿ_1x (2x + 1)/2] - f_0 [-y_1 x + ÿ_1x (x² + x)/2].
Note that, if y_1 and ÿ_1x are known, then ẏ_1x can be derived using Eq. (<ref>) evaluated at x_1 = -1,
ẏ_1x = [δt² f(t_1) - 4 f_2(t_1) ÿ_1x - δt² f_0(t_1) y_1] / [2δt f_1(t_1)],
provided that f_1(t_1) ≠ 0. Therefore, the DE given in Eq. (<ref>) with constraints y_1 and ÿ_1x can be solved as in the previous section with constraints y_1 and ẏ_1x, where ẏ_1x is provided by Eq. (<ref>) as a function of ÿ_1x and the integration time range δt.
§.§ Initial Value Problems subject to: ẏ(t_1) = ẏ_1 and ÿ(t_1) = ÿ_1
Using the constrained equation,
y(x) = g(x) + x(ẏ_1x - ġ_1) + (x²/2 + x)(ÿ_1x - g̈_1),
Eq. (<ref>) becomes
(4/δt²) f_2 (d²g/dx² - g̈_1) + (2/δt) f_1 [dg/dx - ġ_1 - g̈_1(x + 1)] + f_0 [g - ġ_1 x - g̈_1(x²/2 + x)] = f - (4/δt²) f_2 ÿ_1x - (2/δt) f_1 [ẏ_1x + ÿ_1x(x + 1)] - f_0 [ẏ_1x x + ÿ_1x(x²/2 + x)].
Again, if ẏ_1x and ÿ_1x are known, then y_1 can also be computed by specializing Eq. (<ref>) at x_1 = -1,
y_1 = [δt² f(t_1) - 4 f_2(t_1) ÿ_1x - 2δt f_1(t_1) ẏ_1x] / [δt² f_0(t_1)], with f_0(t_1) ≠ 0.
Therefore, the DE given in Eq. (<ref>) with constraints ẏ_1x and ÿ_1x can be solved as in the previous section with constraints y_1 and ẏ_1x, where y_1 is provided by Eq. (<ref>).
§ LEAST-SQUARES SOLUTION OF BOUNDARY VALUE PROBLEMS
Boundary Value Problems (BVP) appear in many applications arising in science and engineering. Examples are the modeling of chemical reactions, heat transfer, and diffusion. A thorough survey of the existing solutions to this problem can be found in Ref. <cit.>, describing most of the existing methods with the exception of those using Bézier curves. The use of implicit Bézier functions to obtain approximate solutions of BVP (or IVP) is not a new idea, albeit quite recent (2004). Venkataraman has attacked the problem using optimization techniques <cit.>, while Zheng uses an analytical LS approach <cit.>. Bézier curves have been adopted also to solve specific problems such as singularly perturbed BVP <cit.> as well as integro-differential equations <cit.>. Two-point BVP are usually solved by iterative techniques; the most common approaches are the shooting methods, transforming the BVP into an IVP.
With respect to the analytical LS approach proposed in Ref. <cit.>, this paper has developed a practical, fast, and easy-to-implement numerical LS approach. The proposed method does not require any sophisticated optimization technique to solve BVP applied to linear second-order nonhomogeneous DE. Examples are provided with particular emphasis on the accuracy levels of the approximate solutions.
Let's first consider the most common BVP, whose constraints are y(-1) = y_1 and y(1) = y_2. A constrained function is
y(x) = g(x) + [(1 - x)/2](y_1 - g_1) + [(1 + x)/2](y_2 - g_2).
Substituting in Eq. (<ref>) we obtain
(4/δt²) f_2 d²g/dx² + (2/δt) f_1 [dg/dx + (g_1 - g_2)/2] + f_0 [g - (g_1 + g_2)/2 + (g_1 - g_2)/2 x] = f - (1/δt) f_1 (y_2 - y_1) - f_0 [(y_1 + y_2)/2 - (y_1 - y_2)/2 x].
In particular, using COP to describe g(x), we have
T_k(1) = 1, dT_k/dx|_{x=1} = k², and d²T_k/dx²|_{x=1} = k²(k² - 1)/3.
Using these expressions in Eq. (<ref>) we obtain
∑_{k=0}^{m} ξ_k {(4/δt²) f_2 d²T_k/dx² + (2/δt) f_1 [dT_k/dx + ((-1)^k - 1)/2] + f_0 [T_k - ((-1)^k + 1)/2 + ((-1)^k - 1)/2 x]} = f - (1/δt) f_1 (y_2 - y_1) - f_0 [(y_1 + y_2)/2 - (y_1 - y_2)/2 x].
However, for k = 0 and k = 1, all three terms multiplying ξ_k become zero,
d²T_k/dx² = dT_k/dx + ((-1)^k - 1)/2 = T_k - ((-1)^k + 1)/2 + ((-1)^k - 1)/2 x = 0 for k = 0, 1.
For this reason Eq. (<ref>) must be selected as
∑_{k=2}^{m} ξ_k {(4/δt²) f_2 d²T_k/dx² + (2/δt) f_1 [dT_k/dx + ((-1)^k - 1)/2] + f_0 [T_k - ((-1)^k + 1)/2 + ((-1)^k - 1)/2 x]} = f - (1/δt) f_1 (y_2 - y_1) - f_0 [(y_1 + y_2)/2 - (y_1 - y_2)/2 x].
The (m - 1) coefficients ξ_k of Eq. (<ref>) are then computed by LS using Eq. (<ref>).
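The whole pipeline just derived (constrained expression, COP expansion of g(x) with k = 2, ..., m, least-squares over N sample points) fits in a short sketch. The code below is an illustrative reimplementation for this Dirichlet case, not the author's code; all function and variable names are my own, and it is checked against the known-solution test of the next subsection:

```python
import numpy as np

def cheb(m, x):
    """T_k(x) and its first two derivatives, k = 0..m, via the recursions above."""
    T, dT, d2T = (np.zeros((m + 1, x.size)) for _ in range(3))
    T[0], T[1], dT[1] = 1.0, x, 1.0
    for k in range(1, m):
        T[k+1]   = 2*x*T[k] - T[k-1]
        dT[k+1]  = 2*T[k] + 2*x*dT[k] - dT[k-1]
        d2T[k+1] = 4*dT[k] + 2*x*d2T[k] - d2T[k-1]
    return T, dT, d2T

def bvp_ls(f2, f1, f0, f, t1, t2, y1, y2, m=17, N=1000):
    dt = t2 - t1
    x = np.linspace(-1.0, 1.0, N)
    t = t1 + (x + 1.0) * dt / 2.0
    T, dT, d2T = cheb(m, x)
    sgn = (-1.0) ** np.arange(2, m + 1)[:, None]
    # coefficient of each xi_k (k = 2..m); T_0, T_1 are absorbed by the constraints
    P = ((4/dt**2) * f2(t) * d2T[2:]
         + (2/dt)  * f1(t) * (dT[2:] + (sgn - 1)/2)
         +           f0(t) * (T[2:] - (sgn + 1)/2 + (sgn - 1)/2 * x)).T
    lam = f(t) - (1/dt)*f1(t)*(y2 - y1) - f0(t)*((y1 + y2)/2 - (y1 - y2)/2*x)
    xi = np.linalg.lstsq(P, lam, rcond=None)[0]
    g = xi @ T[2:]
    # constrained expression: y = g + (1-x)/2 (y1 - g(-1)) + (1+x)/2 (y2 - g(1))
    y = g + (1 - x)/2*(y1 - xi @ T[2:, 0]) + (1 + x)/2*(y2 - xi @ T[2:, -1])
    return t, y

# check on y'' + 2 y' + y = 0, y(0) = 1, y(1) = 3, whose exact solution is known
one = lambda t: np.ones_like(t)
t, y = bvp_ls(one, lambda t: 2.0*one(t), one, lambda t: 0.0*one(t), 0.0, 1.0, 1.0, 3.0)
print("max error:", np.max(np.abs(y - (np.exp(-t) + (3*np.e - 1)*t*np.exp(-t)))))
```

With m = 17 and N = 1000 the maximum error against the closed-form solution comes out near machine precision, and the same skeleton covers the other constraint pairs below by swapping in the corresponding constrained expression.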
§.§ Numerical accuracy tests with known solution
Consider the following BVP with constant coefficients,
d²y/dt² + 2 dy/dt + y = 0 subject to: y(0) = 1 and y(1) = 3,
whose solution is y(t) = e^{-t} + (3e - 1) t e^{-t}, with derivatives ẏ(t) = -e^{-t}(3et - t - 3e + 2) and ÿ(t) = e^{-t}(3et - t - 6e + 3).
Figure <ref> shows the LS approach results for this test in terms of the mean and standard deviation of the (P ξ - λ) residuals (top, left) and the condition number of the matrix P^T P (top, center) as a function of the number of COP adopted (m - 1) to solve the LS problem. The residuals of the DE of Eq. (<ref>) are provided (top, right), and the errors of the LS solution, y(t) (bottom, left), the first derivative, ẏ(t) (bottom, center), and the second derivative, ÿ(t) (bottom, right), with respect to the true solution are also provided. Shooting methods transform a BVP into an IVP. Numerical integrations of IVP provide subsequent estimates based on previous estimates; this implies that the error, in general, accumulates. On the contrary, the accuracy provided by the LS solution is “uniformly” distributed within the integration bounds. If more accuracy is desired on a specific range, then by increasing the number of points on that range, or by giving greater weights to the points on that range, the accuracy increase is obtained where desired.
§.§ Tests with unknown solution, with no solution, and with infinite solutions
Consider the DE with unknown solution,
(1 + 2t) d²y/dt² + (cos t² - 3t + 1) dy/dt + (6 sin t² - e^{cos(3t)}) y = 2[1 - sin(3t)](3t - π)/(4 - t),
subject to y(0) = 2 and y(1) = 2. In this case the LS solution results are given in the plots of Fig. <ref>, with the same meaning as those provided in Fig. <ref>. The LS solution accuracy increases up to an m = 21-degree COP. The standard deviation of the residuals reaches about a 10^{-14} accuracy level, while the DE residuals are lower than 4.0 · 10^{-14}.
Consider the following BVP with no solution,
d²y/dt² - 6 dy/dt + 25 y = 0 subject to: y(0) = 1 and y(π) = 2.
In fact, the general solution of Eq. (<ref>) is y(t) = [a cos(4t) + b sin(4t)] e^{3t}, where the constraint y(0) = 1 gives a = 1, while the constraint y(π) = 2 gives 2 = e^{3π}, a wrong identity! Figure <ref> shows the results when trying to solve by LS the problem given in Eq. (<ref>). For this example the number of COP terms has been increased up to the point where the matrix P^T P becomes numerically singular, at m = 22, with a condition number value above 10^{15}. The mean and standard deviation of the (P ξ - λ) residuals show no convergence, while the condition number of P^T P indicates that the problem has no solution. It is possible to show that, even in the no-solution case, the proposed LS approach anyway provides a “solution” complying with the DE constraints!
Finally, consider the BVP with infinite solutions,
d²y/dt² + 4y = 0 subject to: y(0) = -2 and y(2π) = -2.
In fact, the general solution of Eq. (<ref>) is y(t) = a cos(2t) + b sin(2t), consisting of infinite solutions, as b can have any value. Results of the LS approach are given in Fig. <ref>, showing convergence, but not at machine-level accuracy. Note the differences between the two cases of no and infinite solutions: both experience a bad condition number, but convergence is experienced in the infinite-solutions case only.
§.§ Constraints: y(t_1) = y_1 and ẏ(t_2) = ẏ_2 → ẏ_2x = (δt/2) ẏ_2
For this case the constrained equation
y(x) = g(x) + (y_1 - g_1) + (x + 1)(ẏ_2x - ġ_2)
can be used. Substituting this equation in Eq. (<ref>), we obtain
(4/δt²) f_2 d²g/dx² + (2/δt) f_1 (dg/dx - ġ_2) + f_0 [g - g_1 - ġ_2(x + 1)] = f - (2/δt) f_1 ẏ_2x - f_0 [y_1 + ẏ_2x(x + 1)].
Then, using the expressions provided in Eqs. (<ref>-<ref>) in Eq. (<ref>), the LS solution can be obtained using the procedure described in Eqs. (<ref>-<ref>). Equation (<ref>) has been tested, providing excellent results, which are not included for the sake of brevity.
§.§ Constraints: y(t_1) = y_1 and ÿ(t_2) = ÿ_2 → ÿ_2x = (δt²/4) ÿ_2
For this case the constrained equation
y(x) = g(x) - x(y_1 - g_1) + [(x² + x)/2](ÿ_2x - g̈_2)
can be used. Substituting this equation in Eq. (<ref>), we obtain
(4/δt²) f_2 (d²g/dx² - g̈_2) + (2/δt) f_1 [dg/dx + g_1 - g̈_2 (2x + 1)/2] + f_0 [g + g_1 x - g̈_2 (x² + x)/2] = f - (4/δt²) ÿ_2x f_2 - (2/δt) f_1 [-y_1 + ÿ_2x(2x + 1)/2] - f_0 [-y_1 x + ÿ_2x(x² + x)/2].
Then, using the expressions provided in Eqs. (<ref>-<ref>) in Eq. (<ref>), the LS solution can be obtained using the procedure described in Eqs. (<ref>-<ref>). Equation (<ref>) has been tested, providing excellent results, which are not included for the sake of brevity.
§.§ Constraints: ẏ(t_1) = ẏ_1 → ẏ_1x = (δt/2) ẏ_1 and y(t_2) = y_2
For this case the constrained equation
y(x) = g(x) + (y_2 - g_2) + (x - 1)(ẏ_1x - ġ_1)
can be used. Substituting this equation in Eq. (<ref>), we obtain
(4/δt²) f_2 d²g/dx² + (2/δt) f_1 (dg/dx - ġ_1) + f_0 [g - g_2 - ġ_1(x - 1)] = f - (2/δt) ẏ_1x f_1 - f_0 [y_2 + ẏ_1x(x - 1)].
Then, using the expressions provided in Eqs. (<ref>-<ref>) in Eq. (<ref>), the LS solution can be obtained using the procedure described in Eqs. (<ref>-<ref>). Equation (<ref>) has been tested, providing excellent results, which are not included for the sake of brevity.
§.§ Constraints: ẏ(t_1) = ẏ_1 → ẏ_1x = (δt/2) ẏ_1 and ẏ(t_2) = ẏ_2 → ẏ_2x = (δt/2) ẏ_2
For this case the constrained equation
y(x) = g(x) + (x/2)(1 - x/2)(ẏ_1x - ġ_1) + (x/2)(1 + x/2)(ẏ_2x - ġ_2)
can be used. Substituting this equation in Eq. (<ref>), we obtain
(4/δt²) f_2 [d²g/dx² + (ġ_1 - ġ_2)/2] + (2/δt) f_1 {dg/dx - [ġ_1(1 - x) + ġ_2(x + 1)]/2} + f_0 {g - (x/2)[ġ_1(1 - x/2) + ġ_2(x/2 + 1)]} = f - (2/δt²)(ẏ_2x - ẏ_1x) f_2 - (1/δt) f_1 [ẏ_1x(1 - x) + ẏ_2x(x + 1)] - f_0 (x/2)[ẏ_1x(1 - x/2) + ẏ_2x(x/2 + 1)].
Then, using the expressions provided in Eqs. (<ref>-<ref>) in Eq. (<ref>), the LS solution can be obtained using the procedure described in Eqs. (<ref>-<ref>). Equation (<ref>) has been tested for the special case of Mathieu's DE <cit.>.
§.§ Constraints: ẏ(t_1) = ẏ_1 → ẏ_1x = (δt/2) ẏ_1 and ÿ(t_2) = ÿ_2 → ÿ_2x = (δt²/4) ÿ_2
For this case the constrained equation
y(x) = g(x) + x(ẏ_1x - ġ_1) + x(x/2 + 1)(ÿ_2x - g̈_2)
can be used. Substituting this equation in Eq. (<ref>), we obtain
(4/δt²) f_2 (d²g/dx² - g̈_2) + (2/δt) f_1 [dg/dx - ġ_1 - g̈_2(x + 1)] + f_0 [g - ġ_1 x - g̈_2(x/2 + 1)x] = f - (4/δt²) ÿ_2x f_2 - (2/δt) f_1 [ẏ_1x + ÿ_2x(x + 1)] - f_0 [ẏ_1x x + ÿ_2x(x/2 + 1)x].
Then, using the expressions provided in Eqs. (<ref>-<ref>) in Eq. (<ref>), the LS solution can be obtained using the procedure described in Eqs. (<ref>-<ref>). Equation (<ref>) has been tested, providing excellent results, which are not included for the sake of brevity.
§.§ Constraints: ÿ(t_1) = ÿ_1 → ÿ_1x = (δt²/4) ÿ_1 and y(t_2) = y_2
For this case the constrained equation
y(x) = g(x) + x(y_2 - g_2) + (x/2)(x - 1)(ÿ_1x - g̈_1)
can be used. Substituting this equation in Eq. (<ref>), we obtain
(4/δt²) f_2 (d²g/dx² - g̈_1) + (2/δt) f_1 [dg/dx - g_2 - g̈_1(2x - 1)/2] + f_0 [g - g_2 x - g̈_1(x² - x)/2] = f - (4/δt²) ÿ_1x f_2 - (2/δt) f_1 [y_2 + ÿ_1x(2x - 1)/2] - f_0 [y_2 x + ÿ_1x(x² - x)/2].
Then, using the expressions provided in Eqs. (<ref>-<ref>) in Eq. (<ref>), the LS solution can be obtained using the procedure described in Eqs. (<ref>-<ref>). Equation (<ref>) has been tested, providing excellent results, which are not included for the sake of brevity.
§.§ Constraints: ÿ(t_1) = ÿ_1 → ÿ_1x = (δt²/4) ÿ_1 and ẏ(t_2) = ẏ_2 → ẏ_2x = (δt/2) ẏ_2
For this case the constrained equation
y(x) = g(x) + x(ẏ_2x - ġ_2) + (x/2)(x - 2)(ÿ_1x - g̈_1)
can be used. Substituting this equation in Eq. (<ref>), we obtain
(4/δt²) f_2 (d²g/dx² - g̈_1) + (2/δt) f_1 [dg/dx - ġ_2 - g̈_1(x - 1)] + f_0 [g - ġ_2 x - g̈_1(x²/2 - x)] = f - (4/δt²) ÿ_1x f_2 - (2/δt) f_1 [ẏ_2x + ÿ_1x(x - 1)] - f_0 [ẏ_2x x + ÿ_1x(x²/2 - x)].
Then, using the expressions provided in Eqs. (<ref>-<ref>) in Eq. (<ref>), the LS solution can be obtained using the procedure described in Eqs. (<ref>-<ref>). Equation (<ref>) has been tested, providing excellent results, which are not included for the sake of brevity.
§.§ Constraints: ÿ(t_1) = ÿ_1 → ÿ_1x = (δt²/4) ÿ_1 and ÿ(t_2) = ÿ_2 → ÿ_2x = (δt²/4) ÿ_2
For this case the constrained equation
y(x) = g(x) + (x²/12)(3 - x)(ÿ_1x - g̈_1) + (x²/12)(3 + x)(ÿ_2x - g̈_2)
can be used. Substituting this equation in Eq. (<ref>), we obtain
(4/δt²) f_2 {d²g/dx² - [g̈_1(1 - x) + g̈_2(x + 1)]/2} + (2/δt) f_1 {dg/dx - [g̈_1(2x - x²) + g̈_2(x² + 2x)]/4} + f_0 {g - [g̈_1(3x² - x³) + g̈_2(x³ + 3x²)]/12} = f - (2/δt²) f_2 [ÿ_1x(1 - x) + ÿ_2x(x + 1)] - (2/δt) f_1 [ÿ_1x(2x - x²) + ÿ_2x(x² + 2x)]/4 - f_0 [ÿ_1x(3x² - x³) + ÿ_2x(x³ + 3x²)]/12.
Then, using the expressions provided in Eqs. (<ref>-<ref>) in Eq. (<ref>), the LS solution can be obtained using the procedure described in Eqs. (<ref>-<ref>). Equation (<ref>) has been tested, providing excellent results, which are not included for the sake of brevity.
§.§ Optimal control example: state known at initial time and costate at final time
This example consists of the linear DE,
[ẋ; λ̇] = [A_11(t), A_12(t); A_21(t), A_22(t)] [x; λ] subject to: x(t_0) = x_0 and λ(t_f) = λ_f,
with the constrained expressions,
x(t) = g_x(t) + (x_0 - g_x0) and λ(t) = g_λ(t) + (λ_f - g_λf).
Assuming x = {x, ẋ} and λ = {λ_x, λ_ẋ}, then
g_x(t) = [h(t); ḣ(t)] α and g_λ(t) = [β; γ] h(t),
and the constrained expressions become,
x(t) = x_0 + [h - h_0; ḣ - ḣ_0] α and λ(t) = λ_f + [β; γ](h - h_f).
Then, the dynamic equation becomes,
[ḣ; ḧ] α - A_11 [h - h_0; ḣ - ḣ_0] α - A_12 [β; γ](h - h_f) = A_11 x_0 + A_12 λ_f,
[β; γ] ḣ - A_21 [h - h_0; ḣ - ḣ_0] α - A_22 [β; γ](h - h_f) = A_21 x_0 + A_22 λ_f,
which can be written in matrix form,
ℳ [α; β; γ] = [A_11, A_12; A_21, A_22] [x_0; λ_f],
where
ℳ = [ [ḣ; ḧ] - A_11 [h - h_0; ḣ - ḣ_0],  -A_12 [h - h_f; 0],  -A_12 [0; h - h_f] ;  -A_21 [h - h_0; ḣ - ḣ_0],  [ḣ; 0] - A_22 [h - h_f; 0],  [0; ḣ] - A_22 [0; h - h_f] ].
Equation (<ref>) is linear in the unknown coefficient vectors α, β, and γ and, therefore, can be solved by LS as done in the previous numerical examples. Equation (<ref>) can be subject to different constraints. The following are two examples that can be solved by LS using the corresponding constrained expressions:
x(t_0) = x_0 and ẋ(t_0) = ẋ_0
→ x(t) = g_x(t) + (x_0 - g_x0) + (t - t_0)(ẋ_0 - ġ_x0) and λ(t) = g_λ(t);
x(t_0) = x_0 and λ(t_f) = x(t_f) → x(t) = g_x(t) + (x_0 - g_x0) and λ(t) = g_λ(t) + (g_xf + x_0 - g_x0 - g_λf).
§ CONCLUSIONS AND FUTURE WORK
This study presents a new approach to provide least-squares solutions of linear nonhomogeneous differential equations of any order with nonconstant coefficients that are continuous and nonsingular in the independent-variable integration range. For the sake of brevity and without losing generality, the implementation of the proposed method has been applied to second order differential equations. This least-squares approach can be adopted to solve initial and boundary value problems with constraints given in terms of the function and/or its derivatives.
The proposed method is based on searching the solution with a specific expression, called a constrained expression, which is a function with embedded differential equation constraints. This expression is given in terms of a new unknown function, g(t). The original differential equation is rewritten in terms of g(t), thus obtaining a new differential equation where the constraints are embedded in the differential equation itself. Then, the g(t) function is expressed as a linear combination of basis functions, h(t). The coefficients of this linear expansion are then computed by least-squares by specializing the new differential equation for a set of N different values of the independent variable. In this study the Chebyshev orthogonal polynomials of the first kind have been selected as basis functions. This choice may not be a good one, because each subsequent derivative degree increases the range by approximately one order of magnitude (and because polynomials are in general a bad choice to describe potentially periodic solutions).
Numerical tests have been performed for initial value problems with known solution. A direct comparison has been made with the solution provided by the MATLAB Runge-Kutta-Fehlberg variable-step integrator. In this test, the least-squares approach shows a five orders of magnitude accuracy gain. Numerical tests have been performed for boundary value problems for the four cases of known, unknown, no, and infinite solutions. In particular, the condition number and the residual mean of the least-squares approach can be used to discriminate whether a boundary value problem has no solution or infinite solutions.
The proposed method is easy to implement and not iterative and, as opposed to classic numerical approaches, the solution error distribution does not increase along the integration range but is approximately uniformly distributed within it. In addition, the proposed technique is identical for solving initial and boundary value problems. The method can also be used to solve higher order linear differential equations with linear constraints.
This study is not complete, as many investigations should be performed before setting this approach as a standard way to integrate linear differential equations. Many research areas are still open, full of question marks.
A few of these open research areas are:
* Extension to weighted least-squares;
* Nonuniform distribution of points and optimal distributions of points to increase accuracy in specific ranges of interest;
* Comparisons with different function bases and identification of an optimal function base (if it exists!);
* Analysis using Fourier bases;
* Accuracy analysis of number of basis functions versus points distribution;
* Extension to nonlinear differential equations;
* Extension to partial differential equations;
* Extension to nonlinear constraints.
This study does not provide answers to the above questions, but it provides suggestions of important areas of research and basic tools to dig.
Mortari Mortari, D. “The Theory of Connections. Part 1: Connecting Points,” AAS 17-256 of 2017 AAS/AIAA Space Flight Mechanics Meeting Conference, San Antonio, TX, February 5-9, 2017.
Strang Strang, G. Differential Equations and Linear Algebra, Wellesley-Cambridge Press, 2015, ISBN 0980232791, 9780980232790.
Lin Lin, Y., Enszer, J.A., and Stadtherr, M.A. “Enclosing all Solutions of Two-Point Boundary Value Problems for ODEs,” Computers and Chemical Engineering, 2008, pp. 1714-1725.
Venkataraman1 Venkataraman, P. “A New Class of Analytical Solutions to Nonlinear Boundary Value Problems,” DETC2005-84604, 25th Computers and Information in Engineering (CIE) Conference, Long Beach, CA, September 2005.
Venkataraman2 Venkataraman, P. “Explicit Solutions for Linear Boundary Value Problems using Bézier Functions,” DETC2006-99227, 26th Computers and Information in Engineering (CIE) Conference, Philadelphia, PA, September 2006.
Venkataraman3 Venkataraman, P. and Michopoulos, J.G. “Explicit Solutions for Nonlinear Partial Differential Equations,” DETC2007-35439, 27th Computers and Information in Engineering (CIE) Conference, Las Vegas, NV, September 2007.
Zheng Zheng, J.M., Sederberg, T.W., and Johnson, R.W. “Least-Squares Methods for Solving Differential Equations using Bézier Control Points,” Applied Numerical Mathematics, Vol. 48, No. 2, Feb. 2004, pp. 237-252.
Evrenosoglu Evrenosoglu, M., Somali, S. “Least-Squares Methods for Solving Singularly Perturbed Two-Point Boundary Value Problems Using Bézier Control Points,” Applied Mathematics Letters, Vol. 21, No. 10, Oct. 2008, pp. 1029-1032.
Ghomanjani Ghomanjani, F., Kamyad, A.V., and Kiliman, A. “Bézier Curves Method for Fourth-Order Integro-Differential Equations,” Abstract and Applied Analysis, Vol. 2013, Article ID 672058, 5 pages, 2013. doi:10.1155/2013/672058.
Doha Doha, E.H., Bhrawy, A.H., and Saker, M.A. “On the Derivatives of Bernstein Polynomials: An Application for the Solution of High Even-Order Differential Equations,” Hindawi Publishing Corporation, Boundary Value Problems, Vol. 2011, Article ID 829543, 16 pages, doi:10.1155/2011/829543
Mathieu1 Mathieu, E. “Mémoire sur le Mouvement Vibratoire d'une Membrane de Forme Elliptique,” Journal de Mathématiques Pures et Appliquées, 137-203, 1868.
Mathieu2 Meixner, J. and Schäfke, F.W. “Mathieusche Funktionen und Sphäroidfunktionen,” Springer, 1954.
http://arxiv.org/abs/1702.08437v1
{ "authors": [ "Daniele Mortari" ], "categories": [ "math.CA", "34-02", "G.1.7" ], "primary_category": "math.CA", "published": "20170225005019", "title": "Least-squares Solutions of Linear Differential Equations" }
3D Scanning System for Automatic High-Resolution Plant Phenotyping
Chuong V. Nguyen, ARC Centre of Excellence for Robotic Vision, Research School of Engineering, Australian National University, Canberra ACT 2601, Australia. Email: Chuong.Nguyen@anu.edu.au
Jurgen Fripp, CSIRO Health and Biosecurity, Australian eHealth Research Centre, Herston QLD 4029, Australia. Email: Jurgen.Fripp@csiro.au
David R. Lovell, Electrical Eng. & Computer Science, Queensland University of Technology, Brisbane QLD 4001, Australia. Email: David.Lovell@qut.edu.au
Robert Furbank, CoE for Translational Photosynthesis, Australian National University, Canberra ACT 2601, Australia. Email: Robert.Furbank@anu.edu.au
Peter Kuffner, Helen Daily, Xavier Sirault, CSIRO Agriculture and Food, High Resolution Plant Phenomics Centre, Canberra ACT 2601, Australia. Email: Xavier.Sirault@csiro.au
December 30, 2023
Thin leaves, fine stems, self-occlusion, non-rigid and slowly changing structures make plants difficult for three-dimensional (3D) scanning and reconstruction – two critical steps in automated visual phenotyping. Many current solutions such as laser scanning, structured light, and multiview stereo can struggle to acquire usable 3D models because of limitations in scanning resolution and calibration accuracy. In response, we have developed a fast, low-cost, 3D scanning platform to image plants on a rotating stage with two tilting DSLR cameras centred on the plant. This uses new methods of camera calibration and background removal to achieve high-accuracy 3D reconstruction. We assessed the system's accuracy using a 3D visual hull reconstruction algorithm applied on 2 plastic models of dicotyledonous plants, 2 sorghum plants and 2 wheat plants across different sets of tilt angles. Scan times ranged from 3 minutes (to capture 72 images using 2 tilt angles) to 30 minutes (to capture 360 images using 10 tilt angles). The leaf lengths, widths, areas and perimeters of the plastic models were measured manually and compared to measurements from the scanning system: results were within 3-4% of each other. The 3D reconstructions obtained with the scanning system show excellent geometric agreement with all six plant specimens, even plants with thin leaves and fine stems.
§ INTRODUCTION
The structures of different plant species pose a range of challenges for 3D scanning and reconstruction. Various solutions to the issue of digitally imaging plants have been reported. For plants with larger leaves and simple structures (such as maize, sorghum, cereal seedlings) it is possible to capture a small number of digital images from various viewing angles (typically 3), analyse these in 2D, then develop a relationship between these 2D poses and the (destructively measured) leaf area and biomass of the species <cit.>. However, commercial systems using this approach are relatively expensive, with closed and proprietary analysis software; these have generally been deployed in large phenomics centres with conveyor-based systems <cit.>. The 2D approach has difficulty in resolving concavities, leaf overlap and other occlusions; many of the powerful image analysis tools which can be applied to 3D meshes (e.g., volumetric shape recognition and organ tracking) are more difficult in 2D <cit.>.
Laser scanning, e.g., light detection and ranging (LIDAR), has been applied to plant digitisation, but reconstructing a mesh from a point cloud sufficiently dense to capture thin narrow leaves is computationally intensive. While this approach has been applied successfully to forestry <cit.> and to statistical analysis of canopies, it is not well suited to extracting single-plant attributes <cit.>. Full-waveform LIDAR is extremely expensive; simpler machine vision LIDAR systems of sufficient resolution can cost tens of thousands of dollars. Structured light approaches using affordable sensors such as the Kinect gaming sensor or camera-video projector setups do not offer the resolution or spatial repeatability to cope with complex plant structures <cit.>.
Recently, approaches using multiple images from a larger number of viewing angles have yielded promising results <cit.>, either by using a silhouette-based 3D mesh reconstruction method or by patch-based stereo reconstruction transforming the images into point clouds. Silhouette-based reconstruction is prone to errors in camera calibration and silhouette extraction. Image acquisition for 3D reconstruction remains largely manual <cit.>, limiting the speed and accuracy of 3D reconstruction. Existing background removal techniques for silhouette extraction <cit.> are not reliable for complex plant structures. For current patch-based methods, reconstruction quality is usually poor due to both weak textures (patterns on leaves) and thin structures. In both cases, high-accuracy 3D reconstruction requires a very rigid imaging station, and the engineering required for sensor integration is costly. Recent work has included semi-automated approaches to modelling the plants <cit.>, increasing completeness and visual quality at the expense of throughput. To address the shortcomings of existing scanning systems, we describe “PlantScan Lite”, an affordable automated 3D imaging platform to accurately digitise complex plant structures. This can be readily built for less than AU$8000 of components (including DSLR cameras). The system has a number of novel elements including the use of high-resolution digital single-lens reflex (DSLR) cameras synchronised with a turntable; a high-accuracy, easy-to-setup camera calibration; and an accurate background removal method for 3D reconstruction.
§ METHODS
PlantScan Lite uses a four-step process of 3D scanning and reconstruction (Fig. <ref>):
* Image acquisition. Multiple-view images of a calibration target, a plant of interest and the background are captured using a turntable and tilting cameras.
* Camera calibration estimates camera parameters for all captured images.
* Image processing corrects for optical distortion and extracts plant silhouettes (background removal).
* 3D reconstruction. This paper focuses on the visual hull reconstruction method, which uses silhouette images and corresponding camera parameters and poses to create a 3D mesh model. The model is then processed to extract plant geometry information.
§.§ Image acquisition system
PlantScan Lite's image acquisition system consists of (Fig. <ref>):
* Two Canon DSLR cameras (1, 2) at an angle of approximately 40 degrees. The cameras are powered by AC adapter kits and connect to a computer through USB cables for tethered capture.
* Two aluminum frames, one tilted (3) and one fixed to the ground (4). The tilted frame is used to mount the cameras and move them up/down. The frames join together with two hinges, where the axis of rotation crosses that of the turntable near the middle of a plant to be scanned. Two guiding bars (5) are attached to the lower frame (4), both keeping the tilted frame moving in a vertical plane.
* Two Phidget bipolar stepper motors (6, 7); one drives the turntable, the other moves the upper frame and the two cameras via a threaded rod (8). Both stepper motors have a gearbox to increase the rotation resolution (0.018 degree/step for the turntable) and torque (18 kg·cm for the threaded rod). The turntable controller is synchronised with the cameras via software based on the Phidget Python library <cit.> and Piggyphoto <cit.>.
§.§ Camera calibration for turntable
Fig. <ref> shows the schematic description of the system with a calibration target and a plant (notation follows <cit.>). A chessboard target has a local coordinate system {T} with relative pose ^Tξ_W to the world {W}, which is fixed to the turntable. A camera at tilt position j has a local coordinate system {K} and relative pose ^Kξ_W to the world coordinates {W}. Standard mono camera calibration only provides the camera pose ^Kξ_T relative to the target at angles where the chessboard pattern is visible to the camera. The aim of camera calibration for the turntable is to find intrinsic parameters and poses ^Kξ_W for all cameras at all positions. Three constraints are considered: a) the turntable rotation angle is known, b) images captured by the same camera have the same intrinsic parameters, and c) different extrinsic parameters are assigned to different tilt positions, as if these belonged to different cameras that form a vertical arc. The calibration process consists of (1) mono and stereo camera calibration, (2) plane and circle fitting, (3) camera-to-world pose derivation, and (4) optimization of all camera parameters. These steps are detailed below.
§.§.§ Mono and stereo camera calibration
The OpenCV library provides mono and stereo camera calibration routines <cit.>. The calibration procedure starts by taking images of a chessboard target at different angles and distances. Corners of the squares on the chessboard target are detected. To make this detection efficient, images of the chessboard calibration target are repeatedly scaled down 50% in a pyramid fashion to approximately 1K × 1K resolution at the top level. The detected corner positions are obtained at the top level (lowest resolution) using OpenCV's function findChessboardCorners and tracked with subpixel accuracy using the function cornerSubPix on images at lower levels of the pyramid. Fig. <ref> describes transformations between different coordinate systems. Thin arrows represent coordinate vectors.
Fig. <ref> describes the transformations between the different coordinate systems. Thin arrows represent coordinate vectors. A pair of orthogonal arrows denotes a coordinate system. Thick arrows show the relative pose between two coordinate systems. The coordinates (in mm) of a corner p with respect to the chessboard target coordinate system {T} are represented as ^T p = [X, Y, Z=0, 1]^T. The position of p relative to a camera coordinate system {K} is ^K p = [x, y, z]^T. The relationship between ^T p and ^K p is expressed as:

^Kp = ^Kξ_T · ^Tp = [R | t] · [X, Y, Z, 1]^T

where R is a 3 × 3 rotation matrix and t a 3 × 1 translation vector. This rotation-translation matrix [R | t] represents the extrinsic parameters of camera K relative to target T. The rotation matrix can be represented as a 3 × 1 angle-axis rotation vector, so the extrinsic parameters have only 6 independent components [r_0, r_1, r_2, t_0, t_1, t_2]. The “·” operator denotes matrix-to-vector multiplication. Point p forms an image on the camera sensor at coordinates ^Ip = [u, v]^T. An extended pinhole camera model is used to describe this relationship between ^Kp and ^Ip with radial optical distortion:

r = √((x/z)^2 + (y/z)^2)
u = f x/z (1 + d_1 r^2 + d_2 r^4) + c_u
v = f y/z (1 + d_1 r^2 + d_2 r^4) + c_v

where f is the focal length, [c_u, c_v] the optical centre on the image, and [d_1, d_2] the radial distortion coefficients. The vector [f, c_u, c_v, d_1, d_2] represents the intrinsic camera parameters. To calibrate the camera, multiple images of the same target are captured at different pan angles θ_i and tilt angles ϕ_j, where i = 0 to i_max and j = 0 to j_max. Mono camera calibration, using OpenCV's calibrateCamera (based on <cit.>), takes lists of ^Tp and ^Ip and computes the intrinsic parameters [f, c_u, c_v, d_1, d_2]_k for the camera, and the extrinsic parameters [R_ijk | t_ijk] for each of the images. Fig. <ref>B shows the multiple camera setup with an additional camera K' at a different tilt position. The same point p is seen by the camera K' at coordinates ^K'p. The transformation between the two camera coordinate systems gives:

^K'p = ^K'ξ_K · ^Kp = ^K'ξ_K ⊗ ^Kξ_T · ^Tp

The transformation ^K'ξ_K between cameras K and K' is equal to the stereo transformation [R_k,k' | t_k,k'] (obtained using OpenCV's stereoCalibrate). If K and K' are the same camera at different tilt angles ϕ_j and ϕ_j+1, the transformation becomes [R_j,j+1 | t_j,j+1]. The “⊗” operator denotes matrix-to-matrix multiplication. The transformation can be applied repeatedly between successive camera pairs at different tilt positions. Given the stereo transformations between successive cameras and the pose of the first camera, the poses of the other cameras are also obtained.
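A minimal sketch of this projection model is given below; the parameter names mirror the symbols above, and OpenCV is assumed only for the angle-axis conversion:

```python
import numpy as np
import cv2

def project_point(X_target, rvec, tvec, f, cu, cv_, d1, d2):
    """Project a 3D chessboard point into the image (illustrative sketch).

    X_target: 3-vector in the target frame {T}; rvec/tvec: extrinsics of the
    camera; [f, cu, cv_, d1, d2]: intrinsics with two radial distortion terms.
    """
    R, _ = cv2.Rodrigues(rvec)          # angle-axis -> rotation matrix
    x, y, z = R @ X_target + tvec       # point in camera frame {K}
    xn, yn = x / z, y / z               # normalised pinhole coordinates
    r2 = xn**2 + yn**2                  # squared radial distance
    scale = 1.0 + d1 * r2 + d2 * r2**2  # radial distortion factor
    u = f * xn * scale + cu
    v = f * yn * scale + cv_
    return np.array([u, v])
```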
§.§.§ Estimation of axis and centre of rotation
The location of the world coordinate system fixed to the rotation axis is found in two steps (Fig. <ref>): (1) compute the normal vector n of the plane containing the rotation orbit of the target's corners, and (2) fit a circle to find the centre of rotation o. The normal vector and centre of rotation define the world coordinates. First, the rotation axis is estimated from the rotational motion of the chessboard target. From the extrinsic parameters [R_ijk | t_ijk], the same chessboard corner ^Tp is seen as ^Kp = [R_ijk | t_ijk] ^Tp moving on a circular orbit, as shown in Fig. <ref>A. Note that the chessboard pattern and its positions can only be detected in 1/4 to 1/3 of the target images around the circular orbit. The equation of the orbit plane is expressed as:

a(t_x - x_m) + b(t_y - y_m) + c(t_z - z_m) = 0

where [x_m, y_m, z_m] is the centroid (or mean) of all positions ^Kp on the same orbit, and [a, b, c] is the normal vector of the plane. This plane equation can be rewritten in matrix form 𝐁𝐧 = 0, where 𝐧 = [a, b, c] and 𝐁 is a matrix containing the chessboard positions relative to the centroid of all the positions. The vector 𝐧 is the singular vector corresponding to the smallest singular value obtained from a Singular Value Decomposition of the matrix 𝐁. Second, to find the centre of rotation of the calibration target, a different coordinate system {L} is used (Fig. <ref>B). {L} is in fact equivalent to {K} up to a rotation transformation ^Lξ_K such that the y-axis is parallel to 𝐧. In {L}, a 2D circle can be fitted onto the target point orbit and the centre of rotation can be obtained. The relationship between {L} and {K} with respect to {W} is:

^Lξ_W = ^Lξ_K ⊗ ^Kξ_W

The transformation ^Lξ_K has the form [R_ω | 0], where R_ω is a rotation matrix whose angle-axis rotation vector ω𝐰 = [ω w_x, ω w_y, ω w_z]^T can be obtained as:

ω = arctan( |𝐧 × 𝐲| / 𝐧 · 𝐲 )
𝐰 = (𝐧 × 𝐲) / |𝐧 × 𝐲|

where ω is the rotation angle and 𝐰 is the unit vector around which the rotation is applied to turn 𝐧 to the y-axis 𝐲 of {W}. The bar denotes vector normalization. R_ω is computed from ω𝐰 by Rodrigues' formula:

R_ω = cos ω I + (1 - cos ω)𝐰𝐰^T + sin ω [𝐰]_×

After applying the rotation transformation R_ω to the target positions, a circle can be fitted to the [z, x]^T coordinates by a linear least-squares algorithm <cit.>. This fitting gives the centre of the orbit [x_0, y_0, z_0]^T, with y_0 being the average y-component of the target point positions in the {L} coordinate system. Now the world coordinate system is set at the centre of rotation 𝐨 with its axes parallel to those of {L}; the transformation from {L} to {W} is:

^Lξ_W = [I | t_0]

where t_0 = [x_0, y_0, z_0]^T. As a result, the pose of camera K relative to {W} can be expressed as:

^Kξ_W = ^Kξ_L ⊗ ^Lξ_W = ( ^Lξ_K )^-1 ⊗ ^Lξ_W = [R_ω | 0]^-1 [I | t_0] = [R^T_ω | R^T_ω t_0]

§.§.§ Estimation of camera poses relative to the world coordinate system fixed to the turntable axis
Since the pose ^Kξ_W of the camera K at tilt position ϕ_j is obtained, the pose of any additional camera K' relative to the world coordinate system can be obtained from a given stereo transformation ^K'ξ_K, as shown in Fig. <ref>A:

^K'ξ_W = ^K'ξ_K ⊗ ^Kξ_W

Similarly, the pose of the target relative to the world coordinate system is expressed as:

^Tξ_W = ^Tξ_K ⊗ ^Kξ_W = ( ^Kξ_T )^-1 ⊗ ^Kξ_W

We are interested in the reverse transformation to obtain the target point positions in the world coordinate system:

^Wξ_T = ^Wξ_K ⊗ ^Kξ_T = ( ^Kξ_W )^-1 ⊗ ^Kξ_T = [R_ijk | t_ijk]^-1 [R^T_ω | R^T_ω t_0] = [R^T_ijk R^T_ω | R^T_ijk R^T_ω t_0 - R^T_ijk t_ijk]

Since there are multiple estimates of ^Wξ_T for different rotation angles θ_i, ^Wξ_T0 for zero rotation angle is obtained by applying the inverse of the rotation to the corresponding pose:

^Wξ_T0 = [R^-1_θ_i | 0] ^Wξ_T = [R^T_θ_i R^T_ijk R^T_ω | R^T_θ_i (R^T_ijk R^T_ω t_0 - R^T_ijk t_ijk)]

where R_ω is the matrix obtained from the angle-axis rotation vector (equation (<ref>)), and R^-1_θ_i = R^T_θ_i.
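The axis and centre estimation described above can be sketched as follows (an illustrative NumPy implementation of the SVD-based normal estimate, Rodrigues' formula and the alignment of 𝐧 with the y-axis; not the production code of the system):

```python
import numpy as np

def estimate_rotation_axis(points):
    """Fit the orbit plane of a tracked chessboard corner (illustrative).

    points: (N, 3) array of the same corner observed at several turntable
    angles.  Returns the unit plane normal n (the rotation axis direction).
    """
    centroid = points.mean(axis=0)
    B = points - centroid
    # The normal is the singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(B)
    return Vt[-1]

def rodrigues(w, omega):
    """Rotation matrix from unit axis w and angle omega (Rodrigues' formula)."""
    wx = np.array([[0, -w[2], w[1]],
                   [w[2], 0, -w[0]],
                   [-w[1], w[0], 0]])          # cross-product matrix [w]_x
    return (np.cos(omega) * np.eye(3)
            + (1 - np.cos(omega)) * np.outer(w, w)
            + np.sin(omega) * wx)

def align_normal_to_y(n):
    """R_omega turning the fitted normal n onto the y-axis of {W}."""
    y = np.array([0.0, 1.0, 0.0])
    w = np.cross(n, y)
    omega = np.arctan2(np.linalg.norm(w), np.dot(n, y))
    return rodrigues(w / np.linalg.norm(w), omega)
```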
Since the world coordinate system and the target are fixed, the camera coordinate system needs to move in a circle around the y-axis of {W} to represent the correct relative motion seen by the camera, as shown in Fig. <ref>B. The pose of camera K for rotation angle θ_i is:

^Kθ_iξ_W = ^Kξ_W [R_θ_i | 0] = [R^T_ω R_θ_i | R^T_ω t_0]

The pose of camera K' for rotation angle θ_i is:

^K'θ_iξ_W = ^K'ξ_W [R_θ_i | 0] = [R_j,j+1 R^T_ω R_θ_i | R_j,j+1 R^T_ω t_0 + t_j,j+1]

§.§.§ Optimisation to refine camera parameters
A nonlinear least-squares optimisation is applied to refine the estimates of the camera intrinsic parameters [f, c_u, c_v, d_1, d_2], the angle-axis rotation and translation vectors of the camera poses ^Kξ_W and ^K'ξ_W at the different tilt angles, and the target inverse pose ^Wξ_T0. The optimisation seeks to minimise the pixel distance between the target corners projected into the camera and the corners detected on the actual images. The estimated position of a chessboard corner relative to camera K at rotation θ_i is:

^Kp_estimated = ^Kθ_iξ_W ⊗ ^Wξ_T0 · ^Tp

A pinhole camera projection with radial distortion is applied with equations (<ref>), (<ref>) and (<ref>) to obtain the corresponding image point ^Iθ_ip_estimated. This is applied to the other tilt positions and the second camera. The squared distance between the image projections ^Iθ_ip_estimated of the estimated corners and their detected image positions is minimised.

§.§ Background subtraction
Plants with large leaves can cast strong shadows, so a simple image threshold will not completely remove the background. We found that a shadow removal algorithm based on a static background, proposed in <cit.>, performs background removal for thin leaves more accurately and with less computation than other techniques <cit.>. Here, we extend the technique of <cit.> to the LAB color space <cit.>, further improving background removal accuracy. Suppose L(u,v), A(u,v) and B(u,v) are the luminance and two color channels of the current image, and L'(u,v), A'(u,v) and B'(u,v) the corresponding channels of the background image. Three error functions applied to each pixel position [u,v] are defined as follows:

Δ(u,v) = | L(u,v) - L'(u,v) |
Θ(u,v) = | A(u,v) - A'(u,v) | + | B(u,v) - B'(u,v) |
Ψ(u,v) = | L(u,v)/L(u+1,v) - L'(u,v)/L'(u+1,v) | + | L(u,v)/L(u,v+1) - L'(u,v)/L'(u,v+1) |

These error functions Δ(u,v), Θ(u,v) and Ψ(u,v) represent the differences in luminance, color and texture, respectively. An overall score is computed to determine whether a pixel is foreground or background:

Ω(u,v) = ( αΔ(u,v) + βΘ(u,v) + γΨ(u,v) ) / ( α + β + γ )

where the values of α, β, γ are found empirically. A threshold t is applied to Ω(u,v) to separate background and foreground. For our images, α = 0.1, β = 0.5, γ = 0.4 and t = 5 to 10 were found to work well. Unlike <cit.>, our proposed technique allows for the segmentation of dark objects (such as the plant pots), and this is controlled via the coefficient α. Fig. <ref> shows an example of background removal using our proposed algorithm.
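A sketch of this scoring in Python/OpenCV is shown below; the texture term approximates Ψ with horizontal and vertical luminance ratios, and the weights and threshold are the empirical values quoted above:

```python
import cv2
import numpy as np

def foreground_mask(image, background, alpha=0.1, beta=0.5, gamma=0.4, t=7.0):
    """LAB-space background subtraction (illustrative sketch of the equations
    above), combining luminance, colour and texture differences against a
    static background image."""
    lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab_bg = cv2.cvtColor(background, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, A, B = cv2.split(lab)
    Lb, Ab, Bb = cv2.split(lab_bg)

    delta = np.abs(L - Lb)                      # luminance difference
    theta = np.abs(A - Ab) + np.abs(B - Bb)     # colour difference
    eps = 1e-6                                  # avoid division by zero
    rx = L[:, :-1] / (L[:, 1:] + eps) - Lb[:, :-1] / (Lb[:, 1:] + eps)
    ry = L[:-1, :] / (L[1:, :] + eps) - Lb[:-1, :] / (Lb[1:, :] + eps)
    psi = np.zeros_like(L)                      # texture difference
    psi[:, :-1] += np.abs(rx)
    psi[:-1, :] += np.abs(ry)

    score = (alpha * delta + beta * theta + gamma * psi) / (alpha + beta + gamma)
    return score > t                            # boolean foreground mask
```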
§.§ 3D reconstruction
§.§.§ Bounding box estimation
The bounding box of the subject is obtained in two steps:
* An initial 3D bounding box is estimated based on the silhouettes from the most horizontal camera view. These silhouette images are overlapped and combined into a single image, and a 2D bounding box is computed (Fig. <ref>). The Y axis and the origin are projected onto this overlapped image. The crossing points of the projected axis with the 2D bounding box are mapped back to the turntable axis in 3D space to obtain y_min and y_max. The back projection of the rectangle width to the world origin gives a single magnitude for x_min, x_max, z_min and z_max.
* A refined bounding box is calculated from a 3D reconstruction at a low resolution (128^3 voxels) using the initial bounding box. This takes only a few seconds to compute. In particular, we found that no thin parts of the plant are missing when reconstructed at low resolution. As a result, the refined bounding box tightly contains the 3D space of the plant.

§.§.§ Volume reconstruction
In this work, the 3D plant was reconstructed using a visual-hull volume carving approach. This method recovers the sharp and thin structures common to plants (although a major drawback is that it cannot correctly recover concave surfaces, making reconstructions of curved surfaces such as leaves thicker than they should be). There may be plant movements induced by air circulation or mechanical vibration, which can be accounted for by some tuning during reconstruction. The reconstruction method consists of 3 steps:
* A 3D volume equal to the bounding box is generated and split into voxels. Each voxel is repeatedly projected into the silhouettes and its 2D signed distance to the nearest boundary of each silhouette is calculated. If the distance is negative (outside) in any of the silhouettes, the voxel is flagged as empty. To accommodate some uncertainty in the silhouettes and plant movements, a voxel is only removed if it is outside more than a fixed number of silhouettes (3 is chosen in this paper; a sketch of this rule is given after this list). The process repeats until the end, where the remaining voxels form a 3D hull model of the plant. An octree structure is used for voxel removal from the lowest resolution to the highest resolution <cit.>, giving a 3× speedup compared to processing all voxels of a full-resolution 3D volume.
* Removal of the pot and pot carrier. Since we are only interested in the plant, the pot and pot carrier need to be removed to simplify mesh analysis as well as to reduce the mesh size. One method is to subtract the voxels inside a given bounding tapered cylinder of the pot and pot carrier. Fig. <ref> shows a snapshot of the mesh of one of the plastic plants before and after pot subtraction. This needs to be done before the 3D meshing step to produce a clean and watertight mesh.
* 3D meshing by marching cubes from the remaining voxels. Each grid point is checked against its 8 surrounding voxels. The values of the 8 surrounding voxels are matched against a 256-element lookup table to determine whether the grid point is on or close to the surface, so that a polygon can be created from this grid point and nearby grid points.
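The carving rule of the first step can be sketched as follows (illustrative NumPy code; the per-view projection functions and signed distance fields are assumed to be provided, and the octree acceleration is omitted for clarity):

```python
import numpy as np

def carve_voxels(voxel_centers, silhouette_sdfs, cameras, max_outside=3):
    """Silhouette-based voxel carving (illustrative sketch).

    voxel_centers: (N, 3) voxel centre coordinates inside the bounding box.
    silhouette_sdfs: per-view 2D signed distance fields of the silhouettes
    (negative outside the plant).  cameras: per-view projection functions
    mapping 3D points to pixel coordinates.  A voxel is kept unless it falls
    outside more than max_outside silhouettes, which tolerates small plant
    movements and segmentation noise.
    """
    outside_votes = np.zeros(len(voxel_centers), dtype=int)
    for project, sdf in zip(cameras, silhouette_sdfs):
        uv = project(voxel_centers)                      # (N, 2) pixel coords
        u = np.clip(uv[:, 0].astype(int), 0, sdf.shape[1] - 1)
        v = np.clip(uv[:, 1].astype(int), 0, sdf.shape[0] - 1)
        outside_votes += (sdf[v, u] < 0).astype(int)
    return outside_votes <= max_outside                  # boolean keep-mask
```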
§.§ Mesh segmentation and geometry
Mesh segmentation algorithms assign a unique label to all the vertices of the 3D mesh that belong to the same region. This paper uses a simplified version of the “hybrid” segmentation pipeline previously presented in <cit.>. Primarily it is based around a constrained region-growing algorithm. In short, the curvature and normals of the 3D mesh are pre-computed. A user-defined curvature threshold is provided to find large “flat” regions (e.g., broad leaves) for use as seed regions. A curvature-constrained region growing is then performed from each seed region. The geometry (area, width, length and circumference) of each segmented leaf is then extracted using the approach outlined in <cit.>. The result for the large plastic plant is shown in Fig. <ref>.

§ RESULTS
Fig. <ref> shows reconstructions of different plants with different complexity and leaf shapes. 3D meshes of plants with thin and narrow leaves are reconstructed with excellent geometric agreement, although there is a minor discrepancy at the tips of the leaves. For visual comparison the pots are included in this figure; however, they are removed (as in Fig. <ref>) before geometry measurement. The same plastic plants were also reconstructed twice with different numbers of input images to see how this affects the reconstruction quality. Without tilting the cameras, it took 3 minutes for the two cameras to capture a total of 72 images, compared to 30 minutes to capture 360 images where the two cameras were moved to 5 tilt positions (the camera movement taking half of the total scanning time). A visual comparison (not shown in the paper) does not reveal obvious differences between the two meshes of the same plant. The leaves of the large plastic plant (bottom of Fig. <ref>) were dissected and scanned to measure their length, width, perimeter and area. There are 12 leaves grouped into 3 sizes, shown in Fig. <ref> and Tab. <ref>. Since the leaves mostly curve along their length, the length measurement is likely to be affected. For validation of the reconstruction accuracy, the width of the leaves is chosen as this is less affected by the curving. A quantitative comparison between the two cases is shown in Fig. <ref>A and B. The ground truth obtained from the 2D scans of the dissected leaves (Fig. <ref>) was graphed against the measurements obtained from the 3D meshes of the plant. To fit into the plots, the values of the perimeter are scaled down by half and the values of the area are square-rooted. It can be seen that the measurements agree quite well with the ground truth. The average relative error ϵ = 1/N ∑_i=0^N-1 |truth_i - measurement_i| / truth_i of the measurement is 4.0% for 72 images and 3.3% for 360 images.

§ CONCLUSION AND DISCUSSION
We have presented a complete system for automatic high-resolution 3D plant phenotyping. Several technical solutions in camera calibration, image processing and 3D reconstruction have been proposed to achieve highly accurate 3D mesh models. Notably, we proposed a camera calibration procedure that uses a standard chessboard calibration target that is easy to make and use in a production environment. We also proposed an extension of foreground segmentation to the LAB color space for improved segmentation accuracy for plants with the thin leaves commonly found in major crop plants. The system captures high quality images with accurate camera poses for image-based 3D reconstruction algorithms. The quantitative measurements using the 3D visual hull algorithm provided an estimate of the accuracy of the whole system. We showed that useful metrics such as leaf width, length and area can be obtained with high accuracy from the 3D mesh models. Fast scanning takes only 3 minutes (72 images) per plant and still produces a reasonable measurement (4% error). More images (360) per plant are required for better accuracy (3.3% error), especially for complex plant structures, but take 5 to 10 times longer to scan. Future work includes a calibration using both pan and tilt axes so that the camera pose can be obtained for an arbitrary pair of pan-tilt rotation angles. This would enable more flexible scanning trajectories than circular rotation with a fixed number of images per tilt angle.

§ ACKNOWLEDGMENTS
Chuong Nguyen acknowledges the support by ARC DP120103896 and CE140100016 through the ARC Centre of Excellence for Robotic Vision (http://www.roboticvision.org), the CSIRO OCE Postdoctoral Scheme and the National Collaborative Research Infrastructure Strategy (NCRIS) project "Australian Plant Phenomics Centre". Thanks to Dr. Geoff Bull of the High Resolution Plant Phenomics Centre for his valuable feedback on the manuscript.
http://arxiv.org/abs/1702.08112v1
{ "authors": [ "Chuong V Nguyen", "Jurgen Fripp", "David R Lovell", "Robert Furbank", "Peter Kuffner", "Helen Daily", "Xavier Sirault" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170226235200", "title": "3D Scanning System for Automatic High-Resolution Plant Phenotyping" }
Multimodal deep learning approach for joint EEG-EMG data compression and classification

Ahmed Ben Said, Amr Mohamed, Tarek Elfouly
Computer Science and Engineering Department, Qatar University, 2713 Doha, Qatar
Email: {abensaid, amrm, tarekfouly}@qu.edu.qa

Khaled Harras
Computer Science Department, Carnegie Mellon University in Qatar, 24866 Doha, Qatar
Email: kharras@qatar.cmu.edu

Z. Jane Wang
Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
Email: zjanew@ece.ubc.ca

December 30, 2023
=====

In this paper, we present a joint compression and classification approach for EEG and EMG signals using deep learning. Specifically, we build our system on the deep autoencoder architecture, which is designed not only to extract discriminant features from the multimodal data representation but also to reconstruct the data from the latent representation using encoder-decoder layers. Since an autoencoder can be seen as a compression approach, we extend it to handle multimodal data at the encoder layer, reconstructed and retrieved at the decoder layer. We show through experimental results that exploiting both the multimodal data intercorrelation and intracorrelation 1) significantly reduces signal distortion, particularly for high compression levels, and 2) achieves better accuracy in classifying EEG and EMG signals recorded and labeled according to the sentiments of the volunteers.

Keywords: mHealth, deep learning, compression, classification

§ INTRODUCTION
Healthcare has always been considered a strategic priority worldwide. The increasing number of elderly and chronic disease patients has made physical contact between caregivers and patients more and more difficult. Following the fast development of wireless technologies, the interoperability between healthcare entities and mobile devices has grown. The development of complex devices has stimulated the creation of many mobile health or 'mHealth' applications and wearable devices for fitness tracking, sleep monitoring, etc. <cit.> The mHealth industry is predicted to grow to $12 billion by 2018 <cit.>.

Motivated by the myriad of biomedical sensors, mobile phones and applications, the scientific community has standardized the system that focuses on the acquisition of vital signs such as the electroencephalogram (EEG) and electromyogram (EMG) by body area sensor networks (BASN) under IEEE 802.15.6 <cit.>. Thus, a typical mHealth BASN system consists of sensors collecting the data, a Personal Data Aggregator (PDA) and a remote server. However, due to network limitations, data delivery through the network can be hindered. Consequently, we need to optimize every bit of data being sent. One possible pre-processing stage is to encode the data in the PDA, i.e. mapping x_i to compressed data z_i. At the server level, the received data z_i is decoded, i.e. mapped to x̂_i, which approximates the original data x_i. Many successful algorithms have been proposed for time series compression. Srinivasan et al. <cit.> designed a 2-D lossless EEG compression scheme where the signal is arranged in a 2-D matrix as a preprocessing step.
Compression is achieved through a two-stage coder composed of a Set Partitioning In Hierarchical Trees (SPIHT) <cit.> layer and an Arithmetic Coding (AC) layer. Hussein et al. <cit.> proposed a scalable and energy-efficient EEG compression scheme based on the Discrete Wavelet Transform (DWT) and Compressive Sensing (CS) in wireless sensors. Several parameters have been considered to control the total energy consumption of the encoder and transmitter. The optimal configuration of these parameters is chosen based on an optimization scheme where the total power consumption must not exceed a certain threshold. In <cit.>, the authors applied the CS technique to EEG signal compression. Since the multichannel EEG signals have common sparse support in the transform domain, they stack the sparse transform coefficients as columns of a matrix. Thus, the recovery problem becomes row-sparse and is solved through the Bregman algorithm <cit.>. Majumdar et al. <cit.> argued that CS is not efficient for EEG compression because there is no sparsifying basis that fulfills the requirements of incoherence and sparsity. Instead, the authors formulated the problem as a rank-deficiency problem solved by a Bregman-derived algorithm.

Following the development of wireless BASN, vital signs data have become abundant. mHealth systems are now capable of collecting data from different modalities (EEG, EMG, etc.). Although they may seem totally different, these data can describe the same phenomena. For example, in the case of a schizophrenic person, when a stimulus is presented, a peak in the EEG registration is witnessed while the functional Magnetic Resonance Imaging (fMRI) data show activations in the temporal lobe and the middle anterior cingulate region <cit.>. Thus, both modalities are very likely to be correlated. Each modality has its advantages and limitations, but analyzing multiple modalities offers a better understanding of the investigated phenomena. The aforementioned methods, although they exhibit good performance, do not exploit the correlation among multiple modalities. The deep learning approach has emerged as one of the possible techniques to exploit the correlation of data from multiple modalities. Ngiam et al. <cit.> proposed a multimodal deep learning approach for cross-modality feature learning from video and speech data. Srivastava et al. <cit.> built a multimodal deep belief network <cit.> to learn a multimodal representation from image and text data for image annotation and retrieval tasks. In <cit.>, the authors designed a deep Boltzmann machine <cit.> based architecture to extract a meaningful representation from multimodal data for classification and information retrieval tasks. Liu et al. <cit.> proposed a multimodal autoencoder <cit.> approach for video classification based on audio, image and text data, where the intra-modality semantics of each data type are separately learned by a stacked autoencoder. Next, the learned features are concatenated and fed to another deep autoencoder with a softmax layer for classification.

Few research attempts have addressed the possible application of autoencoders to biomedical and mHealth problems. In <cit.>, Yann Ollivier proved that there is a strong relationship between minimizing the code length of the data and minimizing the reconstruction error that an autoencoder seeks. Tan et al. <cit.> used a stacked autoencoder for mammogram image compression. Training is conducted on image patches instead of whole images. In <cit.>, the authors applied the autoencoder to Electrocardiogram (ECG) compression.
Comparison results with various classic compression methods showed that this special type of network is reliable for signal compression. However, the problem of multimodal data compression in the context of mHealth is still not well investigated. We propose in this paper a multimodal approach for data compression and feature learning. The encoding-decoding scheme is achieved through a stack of autoencoders. Our approach exploits the intracorrelation as well as the intercorrelation among multiple modalities to achieve efficient compression and classification.

The rest of the paper is organized as follows: in Section II, we present the autoencoder architecture. Section III is dedicated to presenting our multimodal approach for joint EEG and EMG data compression and classification. Experimental results are illustrated and discussed in Section IV, and we conclude in the last section.

§ BACKGROUND
§.§ Autoencoder
An autoencoder, illustrated in Fig. <ref>, is a special type of neural network consisting of three layers. The data are first fed into the input layer, propagated to a second layer called the hidden or bottleneck layer and then reconstructed at a third layer called the reconstruction layer. The encoder transforms the data vectors x ∈ R^X into the hidden representation h ∈ R^H via an activation function f:

h = f(Wx + b)

The decoder transforms the hidden representation h back into the reconstruction r ∈ R^X via an activation function g:

r = g(W'h + b')

The parameters W ∈ R^{H × X} and W' ∈ R^{X × H} are called the weight matrices; b ∈ R^H and b' ∈ R^X are called the bias vectors. f and g are typically the hyperbolic tangent function tanh(x) = (e^x - e^{-x})/(e^x + e^{-x}) or the sigmoid function sigmoid(x) = 1/(1 + e^{-x}). In practice, we use a tied-weight configuration, i.e. W' = W^T. The autoencoder seeks the optimal set of parameters Θ = {W, b, b'} that minimizes the reconstruction error J_Θ(x,r). This error is generally the squared Euclidean distance L(x,r) = ||x - r||^2 or the cross-entropy loss L(x,r) = -∑_{i=1}^{X} x_i log(r_i) + (1 - x_i)log(1 - r_i). When using an affine activation function and the squared error loss, the autoencoder essentially performs Principal Component Analysis (PCA) [it will find the same subspace as PCA, but the projection directions do not necessarily correspond to the principal component directions]. Minimization is generally carried out via the gradient descent algorithm. In other words, the purpose of this minimization is to obtain r ≈ x, i.e. an approximation of the identity function. But by constraining the system, limiting the number of hidden units at the hidden layer, we force it to learn a compressed version of the data. Furthermore, to prevent overfitting, i.e. just learning the identity function, another constraint is often added: a weight decay term that regularizes J_Θ(x,r). Then we have:

J_Θ(x,r) = L(x,r) + λ ||W||_2^2

where λ is the decay parameter that controls the amount of regularization.

§.§ Stacked autoencoder
A stacked autoencoder (SAE), illustrated in Fig. <ref>, is a neural network which consists of multiple layers of autoencoders. The output of each layer is fed to the next layer. The SAE is trained via greedy layer-wise training <cit.>, i.e. one layer at a time. At each layer, we consider the autoencoder composed of the current layer and the output of the previous one. Once N-1 layers are trained, we can compute their output and train the N-th layer wired to it. This unsupervised stage is followed by a supervised fine-tuning of the parameters, where a softmax layer is added on top of the SAE.
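For illustration, a single tied-weight autoencoder with sigmoid activations, squared error loss and weight decay can be trained with plain gradient descent as in the following sketch (NumPy; the learning rate, decay and epoch count are placeholders, not the values used in our experiments):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.1, lam=1e-4, epochs=100):
    """Tied-weight autoencoder trained by gradient descent (illustrative).

    X: (n_samples, n_features) data scaled to [0, 1].  Minimises the squared
    reconstruction error plus a weight decay term, as in the equations above.
    """
    rng = np.random.default_rng(0)
    n_feat = X.shape[1]
    W = rng.normal(0, 0.01, size=(n_hidden, n_feat))
    b = np.zeros(n_hidden)
    b_prime = np.zeros(n_feat)

    for _ in range(epochs):
        h = sigmoid(X @ W.T + b)            # encode
        r = sigmoid(h @ W + b_prime)        # decode with tied weights W' = W^T
        d_r = (r - X) * r * (1 - r)         # output delta (sigmoid derivative)
        d_h = (d_r @ W.T) * h * (1 - h)     # backprop to the hidden layer
        # W appears in both encoder and decoder, so its gradient has two terms.
        grad_W = h.T @ d_r + d_h.T @ X + 2 * lam * W
        W -= lr * grad_W / len(X)
        b -= lr * d_h.sum(axis=0) / len(X)
        b_prime -= lr * d_r.sum(axis=0) / len(X)
    return W, b, b_prime
```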
§ MULTIMODAL AUTOENCODER FOR EEG-EMG COMPRESSION AND CLASSIFICATION
Fig. <ref> exhibits the multimodal autoencoder architecture. It consists of two pathways, one for EEG and one for EMG. Each pathway represents a unimodal stacked autoencoder dedicated to learning the intra-modality correlation of the data, while the joint layer merges the higher-level features.

§.§ Unimodal data pre-training
An SAE is applied separately to each modality; we use the sigmoid activation function and the squared Euclidean distance as the loss function, regularized by a weight decay term. We also apply the tied-weight configuration. The output of the i-th layer is obtained as follows:

z_1 = sigmoid(W_1 x + b_1), i = 1
z_i = sigmoid(W_i z_{i-1} + b_i), i = 2..N

The SAE is trained using the greedy layer-wise training approach, where the latent representation of the autoencoder below is fed to the current layer. This deep architecture makes the system more scalable and efficient while progressively extracting higher-level features from the high-dimensional data.

§.§ Deep multimodal learning
The unimodal pre-training does not involve the inter-modality correlation, which can contribute to a better representation of the higher-level features. In particular, exploiting it allows encoding the multiple modalities into a single shared representation obtained by the joint layer. The output of this layer encompasses the contribution of each modality to the code, which represents the compressed data. The joint representation is obtained as follows:

z = ∑_{i ∈ {e,m}} sigmoid(W^i_{N+1} z^i_N + b^i_{N+1})

where e and m refer to EEG and EMG, respectively. Furthermore, we train the multimodal autoencoder with an augmented noisy dataset in which additional examples have only a single modality. In practice, we add zero-valued examples for one modality while keeping the original values for the other modality, and vice versa. Thus, one third of the training data is EEG only, another third is EMG only and the rest has both EEG and EMG data. This strategy, inspired by Ngiam et al. <cit.>, follows the denoising autoencoder paradigm <cit.> and is justified twofold:
* Correlation among multiple modalities is very likely to be non-linear.
* This non-linearity often leads to hidden units being activated by one single modality.
Therefore, the original and corrupted inputs are propagated independently to the higher layers, which are then trained progressively to reconstruct the clean representation from both inputs.

§.§ Fine-tuning
The compressed data can be used for a classification task; that is, the layers are fine-tuned with respect to a supervised criterion by plugging the bottleneck layer into a softmax classifier <cit.>:

p̂ = exp(Wy + b) / ∑_{l=1}^{L} exp(W^l y + b^l)

where p̂ is the predicted label, y represents the compressed data and L is the number of classification labels. Therefore, the overall objective function to minimize is:

J(x,r,p,p̂) = J_Θ(x,r) + L(p,p̂)

where p is the true label and L(p,p̂) can be an entropic loss function.
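The joint layer and the modality-dropping augmentation can be sketched as follows (illustrative NumPy code; the weight shapes and the ordering of the augmented blocks are assumptions made for the sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_representation(z_eeg, z_emg, W_e, b_e, W_m, b_m):
    """Shared code of the joint layer, as in the equation above: each
    modality's top-level SAE features pass through their own weights and
    the activations are summed into a single representation."""
    return sigmoid(z_eeg @ W_e.T + b_e) + sigmoid(z_emg @ W_m.T + b_m)

def augment_with_missing_modalities(X_eeg, X_emg):
    """Build the noisy training set described in the text: one third EEG
    only (EMG zeroed), one third with both, one third EMG only."""
    zeros_e, zeros_m = np.zeros_like(X_eeg), np.zeros_like(X_emg)
    eeg = np.vstack([X_eeg, X_eeg, zeros_e])
    emg = np.vstack([zeros_m, X_emg, X_emg])
    return eeg, emg
```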
§ EXPERIMENTAL RESULTS
mHealth systems acquire, process, store, secure and transport medical data. Data delivery should be as efficient and optimized as possible in terms of energy consumption and bandwidth usage. A typical system consists of an mHealth wearable device that senses vital signs. These data are collected by a PDA and should be transmitted to a remote server handled by a medical entity <cit.>. At the server level, a multimodal autoencoder has already been trained and its optimal configuration found. This configuration is also known to the PDA, which applies it to the collected data for compression. We present in this section several experimental results where we compare our compression scheme with some state-of-the-art compression methods. Furthermore, we compare our multimodal strategy with the unimodal one to highlight the importance of exploiting the inter-modality correlation.

§.§ Dataset
We conduct our experiments on the DEAP dataset <cit.>. It consists of EEG, EMG and multiple physiological signals recorded from 32 participants over 63 seconds at 128 Hz. During the experiments, volunteers watched 40 music videos and rated them on a scale from 1 to 9 with respect to four criteria: likeness (dislike, like), valence (ranging from unpleasant to pleasant), arousal (ranging from uninterested or bored to excited) and dominance (ranging from helpless and weak feelings to feeling empowered). Signals are segmented into 6-second segments, whitened and normalized between 0 and 1. For both the EEG and EMG data, we have 23040 samples of dimensionality 896. These data are then divided into training and testing sets.

§.§ Compression tasks
We compare our compression method with the Discrete Wavelet Transform (DWT) <cit.>, Compressed Sensing (CS) <cit.> and the 2D compression approach based on SPIHT and FastICA <cit.>. For the latter algorithm, we use two configurations with 3 and 6 independent components, denoted 2D-SPIHT-3-ICs and 2D-SPIHT-6-ICs. We evaluate performance using the compression ratio (CR) and residual distortion (D):

CR = (1 - m/n) × 100
D = ||r - x|| / ||x|| × 100

where m and n are the lengths of the compressed and original signals (numbers of samples), and D is the percentage root-mean-square difference between the compressed and original signals. For each data pathway, we use a two-layer SAE. Table <ref> presents the number of hidden units for each layer of the multimodal autoencoder as well as the DWT thresholds and their corresponding CRs. We divide the data into 50% training and 50% testing. Fig. <ref> and <ref> exhibit the distortion variation with respect to different CR values for EEG and EMG. The findings show that for higher compression ratios, the multimodal approach performs better than DWT and CS. For example, for CR = 80%, our approach is able to reconstruct EEG and EMG with distortions of 12% and 13.85% respectively, while CS distorts EEG by 22% and EMG by 17.21%. With 2D-SPIHT-3-ICs, the EEG and EMG distortions are 33.7% and 35.7% respectively, while with 2D-SPIHT-6-ICs, EEG and EMG are distorted by 33% and 33.5%. DWT exhibits low performance with 68% and 73.12% for EEG and EMG respectively. Although DWT, CS and the 2D approach perform better at low compression levels, the proposed method presents stable performance across different compression levels. This can be explained by the capacity of the underlying architecture to exploit the statistics of the data to achieve better compression. We further examine the effect of the training/testing data partition on the compression results. Fig. <ref> and Fig. <ref> illustrate the whisker diagrams for the EEG and EMG signals respectively. We can clearly deduce that more training data results in less distortion. This confirms a known deep learning rule of thumb: the more training data we have, the better the results are.
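Both evaluation metrics are straightforward to compute; a minimal sketch:

```python
import numpy as np

def compression_ratio(original, compressed):
    """CR = (1 - m/n) * 100, with m and n the sample counts."""
    return (1.0 - len(compressed) / len(original)) * 100.0

def distortion(original, reconstructed):
    """Percentage root-mean-square difference D = ||r - x|| / ||x|| * 100."""
    return (np.linalg.norm(reconstructed - original)
            / np.linalg.norm(original) * 100.0)
```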
§.§ Classification task
The objective of this experiment is to demonstrate the importance of the multimodal approach. We conduct binary classification of the EEG and EMG with respect to two of the four labels: dominance and arousal. We follow the same approach as in <cit.>: video ratings are thresholded into two classes; on the scale of 1 to 9, we simply place the threshold in the middle. We compare our approach with two-layer SAE and Deep Boltzmann Machine (DBM) architectures <cit.> with a softmax classifier on top of them. For the SAE, we use the sigmoid activation function. We choose a 75% training partition. Figures <ref> and <ref> illustrate the classification results with respect to dominance and arousal respectively. By exploiting the inter-modality correlation, the proposed approach achieves the best result with 78.1%. The single-modality approaches are less accurate. These findings confirm that, when available, multiple modalities can offer a better understanding of the underlying phenomena even if the data exhibit different characteristics.

§.§ Discussion
In a typical mHealth system, a client-server architecture is the common choice, where the system relies on the available networks to deliver the data. The healthcare giver generally relies on multiple vital signs for an accurate diagnosis. The proposed approach is flexible in the sense that an additional modality collected by the PDA via a wearable device can easily be incorporated into the architecture presented in Fig. <ref>, compressed and classified. The deep neural network can be trained offline. Once it achieves good performance, the optimal configuration (weights and biases) is applied at the client side for efficient data delivery. However, it is worth noting that our approach is less efficient for low compression ratios.

§ CONCLUSION
We have presented a deep learning approach for multimodal data compression and classification. Our strategy focuses on exploiting the inter- and intra-correlation among multiple modalities to enhance the compression and classification of data in the context of mHealth applications. The core of the proposed method is based on the classic autoencoder, which was originally designed for encoding and decoding data. For each modality, we dedicate a stacked autoencoder to extracting a high-level abstraction of the data by modeling the intra-correlation. A joint layer is added on top of the encoding part of each stacked autoencoder to model the data intercorrelation. We have conducted compression and classification experiments. Comparisons with DWT and CS have shown that our approach performs better at high compression ratios. We have also demonstrated the effectiveness of the multimodal approach for the classification of EEG and EMG. Comparison with unimodal algorithms, e.g. Deep Boltzmann Machines and stacked autoencoders, shows that the multimodal autoencoder leads to better classification accuracy. In future work, we will investigate the possible application of Convolutional Neural Networks. Furthermore, we intend to make the autoencoder-based compression scheme adaptive by including the network resources in the choice of the neural network architecture.

§ ACKNOWLEDGMENT
This publication was made possible by NPRP grant #7-684-1-127 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
References
mHealth app market sizing 2015-2020: data report to size opportunities in the mHealth app market. [Online]. Available: <http://mhealthintelligence.com/news/the-history-of-mobile-health-from-cell-phones-to-wearables>
IEEE 802.15 WPAN Task Group 6: Body Area Networks, 2012. [Online]. Available: <http://standards.ieee.org/findstds/standard/802.15.6-2012.html>
K. Srinivasan, J. Dauwels, and M. R. Reddy, “A two-dimensional approach for lossless EEG compression,” Biomedical Signal Processing and Control, 2011.
A. Said and W. A. Pearlman, “A new, fast, and efficient image codec based on set partitioning in hierarchical trees,” IEEE Transactions on Circuits and Systems for Video Technology, 1996.
R. Hussein, A. Mohamed, and M. Alghoniemy, “Scalable real-time energy-efficient EEG compression scheme for wireless body area sensor network,” Biomedical Signal Processing and Control, 2015.
A. Shukla and A. Majumdar, “Row-sparse blind compressed sensing for reconstructing multi-channel EEG signals,” Biomedical Signal Processing and Control, 2015.
W. Yin, S. Osher, D. Goldfarb, and J. Darbon, “Bregman iterative algorithms for L1-minimization with applications to compressed sensing,” SIAM Journal on Imaging Sciences, 2008.
A. Majumdar, A. Gogna, and R. Ward, “A low-rank matrix recovery approach for energy efficient EEG acquisition for a wireless body area network,” Sensors, 2014.
N. Correa, Y.-O. Li, T. Adali, and V. D. Calhoun, “Examining associations between fMRI and EEG data using correlation analysis,” in Proceedings of the 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2008.
J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, “Multimodal deep learning,” in Proceedings of the 28th International Conference on Machine Learning, 2011.
N. Srivastava and R. Salakhutdinov, “Learning representations for multimodal data with deep belief nets,” in Proceedings of the 29th International Conference on Machine Learning, 2012.
G. E. Hinton, “Deep belief networks,” 2009. [Online]. Available: <www.scholarpedia.org/article/Deep_belief_networks>
N. Srivastava and R. Salakhutdinov, “Multimodal learning with deep Boltzmann machines,” in Advances in Neural Information Processing Systems, 2012.
R. Salakhutdinov and G. E. Hinton, “Deep Boltzmann machines,” in Proceedings of the International Conference on Artificial Intelligence and Statistics, 2009.
Y. Liu, X. Feng, and Z. Zhou, “Multimodal video classification with stacked contractive autoencoders,” Signal Processing, 2016.
G. E. Hinton and R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, 2006.
Y. Ollivier, “Auto-encoders: reconstruction versus compression,” 2014, working paper or preprint. [Online]. Available: <https://hal.archives-ouvertes.fr/hal-01104268>
C. C. Tan and C. Eswaran, “Using autoencoders for mammogram compression,” Journal of Medical Systems, 2011.
D. D. Testa and M. Rossi, “Lightweight lossy compression of biometric patterns via denoising autoencoders,” IEEE Signal Processing Letters, vol. 22, pp. 2304–2308, 2015.
Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, “Greedy layer-wise training of deep networks,” in Proceedings of Advances in Neural Information Processing Systems, 2006.
P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, “Extracting and composing robust features with denoising autoencoders,” in International Conference on Machine Learning, 2008.
Y. Bengio, “Learning deep architectures for AI,” Foundations and Trends in Machine Learning, vol. 2, pp. 1–127, 2009.
A. Awad, A. Mohamed, and C.-F. Chiasserini, “User-centric network selection in multi-RAT systems,” in 2016 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), April 2016, pp. 97–102.
S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, “DEAP: a database for emotion analysis using physiological signals,” IEEE Transactions on Affective Computing, vol. 3, pp. 18–31, 2012.
S. Mallat, A Wavelet Tour of Signal Processing, Third Edition, Academic Press, 2008.
D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
A. Hyvarinen and E. Oja, “Independent component analysis: algorithms and applications,” Neural Networks, vol. 13, no. 4–5, pp. 411–430, 2000.
http://arxiv.org/abs/1703.08970v1
{ "authors": [ "Ahmed Ben Said", "Amr Mohamed", "Tarek Elfouly", "Khaled Harras", "Z. Jane Wang" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20170327083735", "title": "Multimodal deep learning approach for joint EEG-EMG data compression and classification" }
A Sentence Simplification System for Improving Relation Extraction

Christina Niklaus, Bernhard Bermeitinger, Siegfried Handschuh, André Freitas
Faculty of Computer Science and Mathematics
University of Passau
Innstr. 41, 94032 Passau, Germany
{christina.niklaus, bernhard.bermeitinger, siegfried.handschuh, andre.freitas}@uni-passau.de
=====

In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.

§ INTRODUCTION
This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/

Relation Extraction (RE) is the task of recognizing the assertion of relationships between two or more entities in NL text. Traditional RE systems have concentrated on identifying and extracting relations of interest by taking as input the target relations, along with hand-crafted extraction patterns or patterns learned from hand-labeled training examples <cit.>. Consequently, shifting to a new domain requires first specifying the target relations and then manually creating new extraction rules or annotating new training examples by hand <cit.>. As this manual labor scales linearly with the number of target relations, this supervised approach does not scale to large, heterogeneous corpora, which are likely to contain a variety of unanticipated relations <cit.>. To tackle this issue, Banko et al. (2008) introduced a new extraction paradigm named 'Open RE' that facilitates domain-independent discovery of relations extracted from text by not depending on any relation-specific human input. Generally, state-of-the-art Open RE systems identify relationships between entities in a sentence by matching patterns over either its POS tags, e.g. <cit.>, or its dependency tree, e.g. <cit.>. However, particularly in long and syntactically complex sentences, relevant relations often span several clauses or are presented in a non-canonical form <cit.>, thus posing a challenge for current Open RE approaches, which are prone to make incorrect extractions, while missing others, when operating on sentences with an intricate structure.

To achieve a higher accuracy on Open RE tasks, we have developed a framework for simplifying the linguistic structure of NL sentences. It identifies components of a sentence which usually provide supplementary information and may be easily extracted without losing essential information. By applying a set of hand-crafted grammar rules that have been defined in the course of a rule engineering process based on linguistic features, these constituents are then disembedded.
In this way, sentences that present a complex syntax are converted into a set of more concise sentences that are easier to process for subsequently applied Open RE systems, while still expressing the original meaning.

§ SYSTEM DESCRIPTION
Referring to previous attempts at syntax-based sentence compression <cit.>, the idea of our text simplification framework is to syntactically simplify a complex input sentence by splitting conjoined clauses into separate sentences and by eliminating specific syntactic sub-structures, namely those containing only minor information. However, unlike recent approaches in the field of extractive sentence compression, we do not delete these constituents, which would result in a loss of background information, but rather aim at preserving the full informational content of the original sentence. Thus, on the basis of syntax-driven heuristics, components which typically provide mere secondary information are identified and transformed into simpler stand-alone context sentences with the help of paraphrasing operations adopted from the text simplification area.

Definition of the Simplification Rules. By analyzing the structure of hundreds of sample sentences from the English Wikipedia, we have determined constituents that commonly supply no more than contextual background information. These components comprise the following syntactic elements:
* non-restrictive relative clauses (e.g. "The city's top tourist attraction was the Notre Dame Cathedral, which welcomed 14 million visitors in 2013.")
* non-restrictive (e.g. "He plays basketball, a sport he participated in as a member of his high school's varsity team.") and restrictive appositive phrases (e.g. "He met with former British Prime Minister Tony Blair.")
* participial phrases offset by commas (e.g. "The deal, titled Joint Comprehensive Plan of Action, saw the removal of sanctions.")
* adjective and adverb phrases delimited by punctuation (e.g. "Overall, the economy expanded at a rate of 2.9 percent in 2010.")
* particular prepositional phrases (e.g. "In 2012, Time magazine named Obama as its Person of the Year.")
* lead noun phrases (e.g. "Six weeks later, Alan Keyes accepted the Republican nomination.")
* intra-sentential attributions (e.g. "He said that both movements seek to bring justice and equal rights to historically persecuted peoples.")
* parentheticals (e.g. "He signed the reauthorization of the State Children's Health Insurance Program (SCHIP).")
Besides, both conjoined clauses presenting specific features and sentences incorporating particular punctuation are split into separate ones. After having identified the syntactic phenomena that generally require simplification, we have determined the characteristics of those constituents, using a number of syntactic features (constituency-based parse trees as well as POS tags) that have occasionally been enhanced with the semantic feature of NE tags. For computing them, a number of software tools provided by the Stanford CoreNLP framework have been employed (Stanford Parser, Stanford POS Tagger and Stanford Named Entity Recognizer).[<http://nlp.stanford.edu/software/>] Based upon these properties, we have then specified a set of hand-crafted grammar rules for carrying out the syntactic simplification operations, which are applied one after another to the given input sentence.
In that way, linguistically peripheral material is disembedded, thus producing a more concise core sentence which is augmented by a number of related self-contained contextual sentences (see the example displayed in figure <ref>).

Application of the Simplification Operations. The simplification rules we have specified are applied one after another to the source sentence, following a three-stage approach (see algorithm <ref>). First, clauses or phrases that are to be separated out, including their respective antecedent where required, have to be identified by pattern matching. In case of success, a context sentence is constructed by either linking the extractable component to its antecedent or by inserting a complementary constituent that is required in order to make it a full sentence. Finally, the main sentence has to be reduced by dropping the clause or phrase, respectively, that has been transformed into a stand-alone context sentence.

[Figure: Simplification and extraction pipeline. Input: the NL sentence "A few hours later, Matthias Goerne, a German baritone, offered an all-German program at the Frick Collection." Syntax-based sentence simplification yields the core sentence "Matthias Goerne offered an all-German program." and the context sentences "Matthias Goerne was a German baritone.", "This was a few hours later." and "This was at the Frick Collection." Relation extraction (using the Open IE system from UW) then outputs, in JSON format, the core fact offered(Matthias Goerne; an all-German program) and the contexts was(Matthias Goerne; a German baritone), was(core fact; at the Frick Collection) and was(core fact; a few hours later).]

In this way, a complex source sentence is transformed into a simplified two-layered representation in the form of core facts and accompanying contexts, providing a kind of normalization of the input text. Accordingly, when carrying out the task of extracting semantic relations between entities on the reduced core sentences, the complexity of determining intricate predicate-argument structures with variable arity and nested structures from syntactically complex input sentences is removed. Beyond that, the phrases of the original sentence that convey no more than peripheral information are converted into independent sentences which, too, can be more easily extracted under a binary or ternary predicate-argument structure (see the example illustrated in figure <ref>).
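For illustration, the three stages can be sketched for a single rule type. The following Python fragment assumes NLTK's constituency trees and uses a deliberately simplified "NP , NP ," pattern with a naive copula, rather than the actual hand-crafted rules of the framework:

```python
from nltk.tree import Tree

def simplify_apposition(parse):
    """Illustrative three-stage rule application for one constituent type
    (non-restrictive appositions).  Returns (core_tokens, context_sentences)."""
    contexts = []
    for np in parse.subtrees(lambda t: t.label() == "NP"):
        if len(np) < 4 or not all(isinstance(c, Tree) for c in np):
            continue
        labels = [child.label() for child in np]
        # Stage 1: pattern matching - an NP with a comma-offset appositive NP.
        if labels[:4] == ["NP", ",", "NP", ","]:
            antecedent = " ".join(np[0].leaves())
            apposition = " ".join(np[2].leaves())
            # Stage 2: build a self-contained context sentence (naive copula).
            contexts.append(f"{antecedent} is {apposition}.")
            # Stage 3: prune the apposition and its commas from the core tree.
            for _ in range(3):
                del np[1]
    return parse.leaves(), contexts
```

On a parse of the example sentence above, such a rule would emit a context sentence like "Matthias Goerne is a German baritone." and leave the core clause intact; the real framework additionally handles antecedent agreement, tense and the remaining constituent types listed in the previous section.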
§ EVALUATION

[Figure: Extracted relations when operating on the simplified sentences. "Matthias Goerne" is connected to "an all-German program" via the relation "offered" (the core fact) and to "a German baritone" via "was"; the core fact is in turn connected to "at the Frick Collection" and "a few hours later" via "was".]

The results of an experimental evaluation show that state-of-the-art Open RE approaches obtain a higher accuracy and lower information loss when operating on sentences that have been pre-processed by our simplification framework. In particular when dealing with sentences that contain nested structures, Open RE systems benefit from a prior simplification step (see figures <ref> and <ref> for an example). The full evaluation methodology and detailed results are reported in Niklaus et al. (2016).

§ USAGE
The text simplification framework is publicly available[<https://gitlab.com/nlp-passau/SimpleGraphene>] as both a library and a command line tool whose workflow is depicted in figure <ref>. It takes as input NL text in the form of either a single sentence or a file with line-separated sentences. As described above, each input sentence is first transformed into a structurally simplified version consisting of 1 to n core sentences and 0 to m associated context sentences. In a second step, the relations contained in the input are extracted by applying the Open IE system[<https://github.com/allenai/openie-standalone>] to the simplified sentences. Finally, the results generated in this way are written to the console or a specified output file in JSON format. As an example, the output produced by our simplification system when applied to a full Wikipedia article is provided online.[<https://gitlab.com/nlp-passau/SimpleGraphene/tree/master/examples>]

§ CONCLUSION
We have described a syntax-driven rule-based text simplification framework that simplifies the linguistic structure of input sentences with the objective of improving the coverage of state-of-the-art Open RE systems. As an experimental analysis has shown, the text simplification pre-processing improves the results of current Open RE approaches, leading to a lower information loss and a higher accuracy of the extracted relations.
http://arxiv.org/abs/1703.09013v1
{ "authors": [ "Christina Niklaus", "Bernhard Bermeitinger", "Siegfried Handschuh", "André Freitas" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170327111558", "title": "A Sentence Simplification System for Improving Relation Extraction" }
Amirhossein Sobhani (corresponding author)
School of Mathematics, Iran University of Science and Technology, 16844 Tehran, Iran.
a_sobhani@mathdep.iust.ac.ir, a_sobhani@aut.ac.ir

Mariyan Milev
UFT-Plovdiv, Department of Mathematics and Physics
marianmilev2002@gmail.com

In this article, a fast numerical algorithm for pricing discrete double barrier options is presented. According to the Black-Scholes model, the price of the option at each monitoring date can be evaluated by a recursive formula upon the heat equation solution. These recursive solutions are approximated by using Legendre multiwavelets as orthonormal basis functions and are expressed in operational matrix form. The most important feature of this method is that its CPU time is nearly invariant when the number of monitoring dates increases. Besides, the rate of convergence of the presented algorithm is obtained. The numerical results verify the validity and efficiency of the method.

Keywords: Double and single barrier options; Black-Scholes model; Option pricing; Legendre multiwavelets
MSC [2010]: 65D15; 35E15; 46A32

§ INTRODUCTION
Barrier options play a key role in the price risk management of financial markets. There are two types of barrier options: single and double. In the single case there is one barrier, while in the double case there are two barriers. A barrier option is called knock-out (knock-in) if it is deactivated (activated) when the stock price touches one of the barriers. If the hitting of the barriers by the stock price is checked only at fixed dates, for example weekly or monthly, the barrier option is called discrete. Option pricing, as one of the most interesting topics in mathematical finance, has been investigated extensively in the literature.

Kamrad and Ritchken <cit.>, Boyle and Lau <cit.>, Kwok <cit.>, Heyen and Kat <cit.>, Tian <cit.> and Dai and Lyuu <cit.> used standard lattice techniques, the binomial and trinomial trees, for pricing barrier options. Ahn et al. <cit.> introduced the adaptive mesh model (AMM), which increases the efficiency of trinomial lattices. Monte Carlo simulation methods were implemented in <cit.>. In <cit.>, numerical algorithms based on quadrature methods were proposed. A great variety of semi-analytical methods to price barrier options have recently been developed which are based on integral transforms <cit.>, or on the transition probability density function of the process used to describe the underlying asset price <cit.>. These techniques perform very well for pricing discretely monitored single and double barrier options, and our computational results are in very good agreement with them. We would like to make the following essential remarks. An analytical solution for the single barrier option is derived by Fusai et al. in <cit.>, where the problem of one barrier is reduced to a Wiener-Hopf integral equation and a z-transform solution of it is given. To derive a formula for continuous double barrier knock-out and knock-in options, Pelsser analytically inverts the Laplace transform by a contour integration <cit.>. Broadie et al. have found an explicit correction formula for discretely monitored options with one barrier <cit.>. However, these three well-known methods <cit.> have still not been applied in the presence of two barriers, i.e. to a discrete double barrier option. Farnoosh et al. <cit.> presented numerical algorithms for pricing discrete single and double barrier options with time-dependent parameters.
Also, in our previous work <cit.>, a numerical method for pricing discrete single and double barrier options by projection methods was presented.

This article is organized as follows. In Section 2, the process of finding the price of a discrete double barrier option under the Black-Scholes model by a recursive formula is explained. The definition and some features of Legendre multiwavelets are given in Section 3. In Section 4, the Legendre multiwavelet expansion is implemented for pricing discrete double barrier options. Finally, numerical results are given in Section 5 to confirm the efficiency of the proposed method.

§ THE PRICING MODEL

We assume that the stock price process follows geometric Brownian motion:

dS_t = r̂ S_t dt + σ S_t dB_t

where S_0, r̂ and σ are the initial stock price, the risk-free rate and the volatility, respectively. We consider the problem of pricing a knock-out discrete double barrier call option, i.e. a call option that becomes worthless if the stock price touches either the lower or the upper barrier at the predetermined monitoring dates:

0 = t_0 < t_1 < ⋯ < t_M = T.

If the barriers are not touched at the monitoring dates, the payoff at maturity is max(S_T - E, 0), where E is the exercise price. The price of the option is defined as the discounted expectation of the payoff at maturity. In the Black-Scholes framework, the option price 𝒫(S,t,m-1), as a function of the stock price at time t ∈ (t_m-1, t_m), satisfies the following partial differential equation <cit.>

- ∂𝒫/∂t + r̂ S ∂𝒫/∂S + 1/2 σ^2 S^2 ∂^2𝒫/∂S^2 - r̂ 𝒫 = 0,

subject to the initial conditions:

𝒫(S,t_0,0) = (S - E) 1_(max(E,L) ≤ S ≤ U)
𝒫(S,t_m,m) = 𝒫(S,t_m,m-1) 1_(L ≤ S ≤ U); m = 1,2,...,M-1,

where 𝒫(S,t_m,m-1) := lim_t→t_m 𝒫(S,t,m-1). By the change of variable z = ln(S/L), the partial differential equation <ref> and its initial conditions reduce to:

-C_t + μ C_z + σ^2/2 C_zz = r̂ C
C(z,t_0,0) = L(e^z - e^E^*) 1_(δ ≤ z ≤ θ)
C(z,t_m,m) = C(z,t_m,m-1) 1_(0 ≤ z ≤ θ); m = 1,2,...,M-1,

where C(z,t,m) := 𝒫(S,t,m); E^* = ln(E/L); μ = r̂ - σ^2/2; θ = ln(U/L) and δ = max{E^*, 0}. Next, by considering C(z,t,m) = e^α z + β t h(z,t,m), where

α = -μ/σ^2; c^2 = σ^2/2; β = αμ + α^2σ^2/2 - r̂,

equation <ref> is reduced to the well-known heat equation:

-h_t + c^2 h_zz = 0
h(z,t_0,0) = L e^-α z(e^z - e^E^*) 1_(δ ≤ z ≤ θ); m = 0
h(z,t_m,m) = h(z,t_m,m-1) 1_(0 ≤ z ≤ θ); m = 1,...,M-1,

which can be solved analytically (see e.g. <cit.>) as follows:

h(z,t,m) = { L ∫_δ^θ k(z-ξ,t) e^-αξ(e^ξ - e^E^*) dξ, m = 0;
             ∫_0^θ k(z-ξ,t-t_m) h(ξ,t_m,m-1) dξ, m = 1,2,...,M-1 }

where

k(z,t) = 1/√(4πc^2 t) e^-z^2/4c^2 t.

Assuming that the monitoring dates are equally spaced, i.e. t_m = mτ with τ = T/M, h(z,t_m,m-1) is a function of the two variables z and m. Therefore, defining f_m(z) := h(z,t_m,m-1), we have:

f_1(z) = ∫_0^θ k(z-ξ,τ) f_0(ξ) dξ
f_m(z) = ∫_0^θ k(z-ξ,τ) f_m-1(ξ) dξ; m = 2,3,...,M,

where f_0(z) = L e^-α z(e^z - e^E^*) 1_(δ ≤ z ≤ θ). By defining f̃_m(z) := f_m(θz) and k̃(z,τ) := θ k(θz,τ) = θ/√(4πc^2τ) e^-(θz)^2/4c^2τ, we reach the following relations from <ref>, <ref> and <ref>:

f̃_1(z) = ∫_0^1 k̃(z-ξ,τ) f̃_0(ξ) dξ
f̃_m(z) = ∫_0^1 k̃(z-ξ,τ) f̃_m-1(ξ) dξ; m = 2,3,...,M,

where f̃_0(z) = L e^-αθz(e^θz - e^E^*) 1_(δ/θ ≤ z ≤ 1), which allows us to use Legendre multiwavelets on the interval [0,1].
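Before introducing the multiwavelet machinery, the recursion above can be illustrated with a direct quadrature scheme. The following Python sketch is only an illustration of the recursion itself, not the operational-matrix algorithm developed in Section 4; the trapezoidal rule, the number of nodes and the sample parameter values are our own illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the recursion f_m = K f_{m-1} by direct trapezoidal
# quadrature on [0,1]. Unlike the operational-matrix method of this paper,
# the cost here grows linearly with the number of monitoring dates M.
# All parameter values below are example choices.
r_, sigma, T, M = 0.05, 0.25, 0.5, 125        # rate, volatility, maturity, dates
S0, E, Lo, U = 100.0, 100.0, 90.0, 120.0      # spot, strike, lower/upper barrier

tau   = T / M
mu    = r_ - 0.5 * sigma**2
alpha = -mu / sigma**2
c2    = 0.5 * sigma**2                         # c^2 of the heat equation
beta  = alpha * mu + 0.5 * alpha**2 * sigma**2 - r_
theta = np.log(U / Lo)
delta = max(np.log(E / Lo), 0.0)

n  = 400                                       # trapezoid nodes on [0,1]
xi = np.linspace(0.0, 1.0, n)
w  = np.full(n, 1.0 / (n - 1)); w[0] = w[-1] = 0.5 / (n - 1)

def ktilde(z):                                 # rescaled kernel theta*k(theta*z, tau)
    return theta / np.sqrt(4 * np.pi * c2 * tau) * np.exp(-(theta * z)**2 / (4 * c2 * tau))

# f_0 on [0,1], then M applications of the integral operator give f_M
f = Lo * np.exp(-alpha * theta * xi) * (np.exp(theta * xi) - E / Lo) * (xi >= delta / theta)
K = ktilde(xi[:, None] - xi[None, :]) * w[None, :]
for _ in range(M):
    f = K @ f

z0 = np.log(S0 / Lo)
price = np.exp(alpha * z0 + beta * T) * np.interp(z0 / theta, xi, f)
print(f"approximate price: {price:.4f}")
```

Each monitoring date costs one dense matrix-vector product here; this per-date growth is precisely what the operational-matrix form of Section 4 removes.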
§ LEGENDRE MULTIWAVELET

Let L^2([0,1]) be the Hilbert space of all square-integrable functions on the interval [0,1] with the inner product <f,g> := ∫_0^1 f(x)g(x)dx and the norm ‖f‖ = √(<f,f>). An orthonormal multiresolution analysis (MRA) with multiplicity r of L^2([0,1]) is defined as follows. A chain of closed functional subspaces V_j, j ≥ 0, of L^2([0,1]) is called an orthonormal multiresolution analysis of multiplicity r if:
* V_j ⊂ V_j+1, j ≥ 0.
* ⋃_j ≥ 0 V_j is dense in L^2([0,1]), i.e. the closure of ⋃_j ≥ 0 V_j equals L^2([0,1]).
* There exists a vector of orthonormal functions Φ=[ϕ^0,...,ϕ^r-1]^T in L^2([0,1]), called the multiscale vector, such that {ϕ^l_j,k := 2^j/2 ϕ^l(2^j x - k); 0 ≤ l ≤ r-1, 0 ≤ k ≤ 2^j-1} forms an orthonormal basis for V_j.

Now let the wavelet space W_j be the subspace of V_j+1 such that V_j+1 = V_j ⊕ W_j and V_j ⊥ W_j, i.e. the orthogonal complement of V_j in V_j+1, so that

V_j = V_0 ⊕ W_0 ⊕ W_1 ⊕ ... ⊕ W_j-1
L^2([0,1]) = V_0 ⊕ ⊕_j=0^∞ W_j.

Property <ref> of the MRA shows that dim(V_j) = dim(W_j) = r2^j. Let the function vector Ψ=[ψ^0,...,ψ^r-1] be the vector of orthonormal basis functions of W_0, called the multiwavelet vector; then the structure of the MRA implies that W_j = span{ψ^l_j,k; 0 ≤ l ≤ r-1, 0 ≤ k ≤ 2^j-1}, where ψ^l_j,k := 2^j/2 ψ^l(2^j x - k). According to <ref> and <ref>, for any V_j we have two orthonormal basis sets, as follows:

Φ_j(x)=[ϕ_j,0^0(x),...,ϕ_j,0^r-1(x),...,ϕ_j,2^j-1^0(x),...,ϕ_j,2^j-1^r-1(x)]
Ψ_j(x)=[ϕ_0,0^0(x),...,ϕ_0,0^r-1(x),ψ_0,0^0(x),...,ψ_0,0^r-1(x),..., ψ_j-1,0^0(x),...,ψ_j-1,0^r-1(x),...,ψ_j-1,2^j-1-1^0(x),...,ψ_j-1,2^j-1-1^r-1(x)].

From relation <ref>, for any f ∈ L^2([0,1]) we have

f(x) = ∑_l=0^r-1 c_l ϕ^l(x) + ∑_j=0^∞ ∑_k=0^2^j-1 ∑_l=0^r-1 c^l_j,k ψ_j,k^l(x),

where c_l = ∫_0^1 f(x)ϕ^l(x)dx and c^l_j,k = ∫_0^1 f(x)ψ_j,k^l(x)dx. We now define the orthonormal projection operator P_J : L^2([0,1]) → V_J as follows:

P_J(f) := ∑_l=0^r-1 c_l ϕ^l(x) + ∑_j=0^J-1 ∑_k=0^2^j-1 ∑_l=0^r-1 c^l_j,k ψ_j,k^l(x),

or equivalently

P_J(f) := ∑_k=0^2^J-1 ∑_l=0^r-1 d^l_J,k ϕ_J,k^l(x),

where d^l_J,k = ∫_0^1 f(x)ϕ_J,k^l(x)dx. In order to simplify the notation, we denote the i-th element of Ψ_J(x) by ψ_i(x), so that Ψ_J(x)=[ψ_1(x),ψ_2(x),...,ψ_r2^J(x)], and then we can rewrite <ref> as

P_J(f) := ∑_i=1^r2^J a_i ψ_i(x) = Ψ_J(x)'F,

where a_i = ∫_0^1 f(x)ψ_i(x)dx and F=[a_1,...,a_r2^J]'. From relation <ref>, P_J converges pointwise to the identity operator I, i.e. for every f ∈ L^2([0,1]), lim_J→∞ ‖P_J(f) - f‖ = 0. We use Legendre polynomials to construct the Legendre multiwavelets introduced by Alpert in <cit.>. The Legendre polynomial p_i(x) is defined by p_0(x) = 1, p_1(x) = x, with the recurrence formula

p_i+1(x) = x p_i(x) + (i/(i+1))(x p_i(x) - p_i-1(x)).

The set {p_i(x)}_i=0^∞ is an orthogonal basis for L^2([-1,1]). We define V_j as

V_j := { f | f is a polynomial of degree < r on I_i, 1 ≤ i ≤ 2^j },

where I_i := [2^-j(i-1), 2^-j i). It is obvious that V_j ⊂ V_j+1 and that the closure of ⋃_j ≥ 0 V_j equals L^2([0,1]). Now let ϕ^l be the Legendre multiscaling function, defined as

ϕ^l := { √(2l+1) p_l(2x-1), x ∈ [0,1); 0, otherwise },

and let Φ:=[ϕ^0,...,ϕ^r-1]^T be the multiscale vector. It is easy to verify that {ϕ^l_j,k := 2^j/2 ϕ^l(2^j x - k); 0 ≤ l ≤ r-1, 0 ≤ k ≤ 2^j-1} forms an orthonormal basis for V_j. Now let Ψ=[ψ^0,...,ψ^r-1] be the Legendre multiwavelet vector. Because W_0 ⊂ V_1, each ψ^l can be expanded as

ψ^l = ∑_k=0^r-1 g_l,k^0 ϕ^k(2x) + ∑_k=0^r-1 g_l,k^1 ϕ^k(2x-1), 0 ≤ l ≤ r-1.

In addition, W_0 ⊥ V_0 and 1, x,...,x^r-1 ∈ V_0, so the first r moments of {ψ^l}_l=0^r-1 vanish:

∫_0^1 ψ^l(x) x^i dx = 0, 0 ≤ l,i ≤ r-1.

On the other hand, we have ∫_0^1 ψ^i(x)ψ^j(x)dx = δ_ij, 0 ≤ i,j ≤ r-1, so to find the 2r^2 unknown coefficients g_i,j in <ref>, it is enough to solve the 2r^2 equations <ref> and <ref>.
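The multiscaling part of this construction is easy to verify numerically. The sketch below projects a smooth function onto V_J; the Gauss-Legendre quadrature order and the test function are illustrative choices of ours.

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Minimal sketch of the projection P_J onto V_J, spanned by the scaled and
# translated Legendre multiscaling functions phi^l_{J,k}(x) = 2^{J/2} phi^l(2^J x - k).
r, J = 4, 3

def phi(l, x):
    """phi^l(x) = sqrt(2l+1) p_l(2x-1) on [0,1), zero elsewhere."""
    c = np.zeros(l + 1); c[l] = 1.0
    return np.where((x >= 0) & (x < 1), np.sqrt(2 * l + 1) * legval(2 * x - 1, c), 0.0)

def project(f, r, J, nquad=20):
    """Coefficients d[l,k] = <f, phi^l_{J,k}>, by Gauss-Legendre quadrature."""
    xg, wg = leggauss(nquad)
    d = np.zeros((r, 2**J))
    for k in range(2**J):
        a, b = k / 2**J, (k + 1) / 2**J               # support of phi^l_{J,k}
        x = 0.5 * (b - a) * (xg + 1) + a
        w = 0.5 * (b - a) * wg
        for l in range(r):
            d[l, k] = np.sum(w * f(x) * 2**(J / 2) * phi(l, 2**J * x - k))
    return d

def reconstruct(d, x, r, J):
    out = np.zeros_like(x)
    for k in range(2**J):
        for l in range(r):
            out += d[l, k] * 2**(J / 2) * phi(l, 2**J * x - k)
    return out

f = lambda x: np.exp(-x) * np.sin(6 * x)              # smooth test function
d = project(f, r, J)
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
err = np.max(np.abs(reconstruct(d, x, r, J) - f(x)))
print(f"max |P_J f - f| on the grid: {err:.2e}")      # shrinks by ~2^r per level J
```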
If f ∈ L^2([0,1]) is r times continuously differentiable, the following error bound is obtained <cit.>: suppose that the real function f ∈ C^r([0,1]). Then P_J(f) approximates f with the error bound

‖P_J(f) - f‖ ≤ 2^(-Jr+1)/(4^r r!) sup_x∈[0,1] |f^(r)(x)|.

The Legendre multiscaling and multiwavelet functions for r=4 are as follows <cit.>:

ϕ^0(x) = 1, 0 ≤ x < 1
ϕ^1(x) = √3 (2x-1), 0 ≤ x < 1
ϕ^2(x) = √5 (6x^2-6x+1), 0 ≤ x < 1
ϕ^3(x) = √7 (20x^3-30x^2+12x-1), 0 ≤ x < 1

ψ^0(x) = -√(15/34)(224x^3-216x^2+56x-3), 0 ≤ x < 1/2; √(15/34)(224x^3-456x^2+296x-61), 1/2 ≤ x < 1
ψ^1(x) = -√(1/21)(1680x^3-1320x^2+270x-11), 0 ≤ x < 1/2; √(1/21)(1680x^3-3720x^2+2670x-619), 1/2 ≤ x < 1
ψ^2(x) = -√(35/17)(256x^3-174x^2+30x-1), 0 ≤ x < 1/2; √(35/17)(256x^3-594x^2+450x-111), 1/2 ≤ x < 1
ψ^3(x) = √(5/42)(420x^3-246x^2+36x-1), 0 ≤ x < 1/2; √(5/42)(420x^3-1014x^2+804x-209), 1/2 ≤ x < 1

§ PRICING BY LEGENDRE MULTIWAVELET

Let the operator 𝒦 : L^2([0,1]) → L^2([0,1]) be defined as follows:

𝒦(f)(z) := ∫_0^1 κ(z-ξ,τ) f(ξ) dξ,

where κ is defined in <ref>. Because κ is a continuous function, 𝒦 is a bounded, compact linear operator on L^2([0,1]) <cit.>. According to the definition of the operator 𝒦, equations <ref> and <ref> can be rewritten as

f_1 = 𝒦 f_0
f_m = 𝒦 f_m-1, m = 2,3,...,M.

We denote

f̃_1,J = P_J𝒦(f_0)
f̃_m,J = P_J𝒦(f̃_m-1,J) = (P_J𝒦)^m(f_0), m ≥ 2,

where (P_J𝒦)(f) = P_J(𝒦(f)). Since the continuous projection operators P_J converge pointwise to the identity operator I, the operator P_J𝒦 is also compact and lim_J→∞ ‖P_J𝒦 - 𝒦‖ = 0 (see <cit.>). From the inequality

‖(P_J𝒦)^m - 𝒦^m‖ ≤ ‖P_J𝒦‖ ‖(P_J𝒦)^m-1 - 𝒦^m-1‖ + ‖P_J𝒦 - 𝒦‖ ‖𝒦‖^m-1

and relation <ref>, by induction we get

lim_J→∞ ‖(P_J𝒦)^m - 𝒦^m‖ = 0.

Therefore, the following convergence result is concluded:

‖f̃_m,J - f_m‖ = ‖(P_J𝒦)^m(f_0) - 𝒦^m(f_0)‖ ≤ ‖(P_J𝒦)^m - 𝒦^m‖ ‖f_0‖ → 0 as J → ∞.

From <ref> and <ref>, we infer that the rates of convergence of f̃_m,J to f_m and of P_J𝒦 to 𝒦 are the same. Using relation <ref> and the properties of the integral operator 𝒦, it is easy to confirm that

‖P_J𝒦 - 𝒦‖ ≤ 2^(-Jr+1)/(4^r r!) sup_z,ξ∈[0,1] |∂^r κ(z-ξ,τ)/∂z^r|.

Since f̃_m,J ∈ V_J for m ≥ 1, we can write f̃_m,J = ∑_i=1^r2^J a_mi ψ_i(z) = Ψ'_J(z)F_m, where F_m = [a_m1, a_m2, ⋯, a_m,r2^J]'. From equation <ref> we obtain

f̃_m,J = (P_J𝒦)^m-1(f̃_1,J).

Since V_J is a finite-dimensional linear space, the linear operator P_J𝒦 on V_J can be represented by an r2^J × r2^J matrix K. Consequently, equation <ref> can be written in the matrix operator form

f̃_m,J = Ψ'_J K^m-1 F_1.

To evaluate the option price by <ref>, it is enough to calculate the matrix operator K and the vector F_1. It is easy to check (see <cit.>) that

F_1 = [a_11, a_12, ⋯, a_1,r2^J]'
K = (k_ij)_r2^J × r2^J,

where

a_1i = ∫_0^1 ∫_δ/θ^1 ψ_i(η) κ(η-ξ,τ) f_0(ξ) dξ dη, 1 ≤ i ≤ r2^J,
k_ij = ∫_0^1 ∫_0^1 ψ_i(η) ψ_j(ξ) κ(η-ξ,τ) dξ dη.

Therefore, the price of the knock-out discrete double barrier option can be estimated as

𝒫(S_0,t_M,M-1) ≃ e^α z_0 + β T f̃_M,J(z_0/θ),

where z_0 = log(S_0/L) and f̃_M,J is given by <ref>. The matrix form of relation <ref> implies that the computational time of the presented algorithm remains nearly fixed when the number of monitoring dates increases. In fact, setting N = r2^J, the complexity of our algorithm is 𝒪(N^2), which does not depend on the number of monitoring dates.
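A schematic and deliberately compact illustration of this matrix formulation is sketched below. The Gauss-Legendre quadrature used to assemble F_1 and K, as well as the sample parameters, are our own choices; this is not the Matlab implementation behind the reported tables.

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Schematic sketch of the operational-matrix form f_{m,J} = Psi' K^{m-1} F_1:
# F_1 and K are assembled by Gauss-Legendre quadrature over the supports of
# the piecewise-Legendre basis of V_J; the price follows from one matrix power.
r_, sigma, T, M = 0.05, 0.25, 0.5, 125
S0, E, Lb, U = 100.0, 100.0, 90.0, 120.0
rr, J = 4, 4                                      # polynomial order r, level J

tau   = T / M
mu    = r_ - 0.5 * sigma**2
alpha, c2 = -mu / sigma**2, 0.5 * sigma**2
beta  = alpha * mu + 0.5 * alpha**2 * sigma**2 - r_
theta, delta = np.log(U / Lb), max(np.log(E / Lb), 0.0)

nk, nq = 2**J, 10
xg, wg = leggauss(nq)
xq = (((xg + 1) / 2)[None, :] + np.arange(nk)[:, None]).ravel() / nk
wq = np.tile(wg / (2 * nk), nk)

def basis(x):
    """Values psi_i(x) of the r*2^J orthonormal basis functions of V_J."""
    out = np.zeros((rr * nk, x.size))
    for k in range(nk):
        y = nk * x - k
        m = (y >= 0) & (y < 1)
        for l in range(rr):
            c = np.zeros(l + 1); c[l] = 1.0
            out[k * rr + l, m] = np.sqrt(nk * (2 * l + 1)) * legval(2 * y[m] - 1, c)
    return out

B    = basis(xq)                                  # shape (r*2^J, n_nodes)
kern = theta / np.sqrt(4 * np.pi * c2 * tau) \
       * np.exp(-(theta * (xq[:, None] - xq[None, :]))**2 / (4 * c2 * tau))
f0   = Lb * np.exp(-alpha * theta * xq) * (np.exp(theta * xq) - E / Lb) * (xq >= delta / theta)

F1 = B @ (wq * (kern @ (wq * f0)))                # vector of a_{1i}
K  = B @ ((wq[:, None] * kern * wq[None, :]) @ B.T)

z0 = np.log(S0 / Lb)
fM = basis(np.array([z0 / theta]))[:, 0] @ np.linalg.matrix_power(K, M - 1) @ F1
print(f"matrix-form price: {np.exp(alpha * z0 + beta * T) * fM:.4f}")
```

Note that once K and F_1 are assembled, changing M only changes the matrix power, which is why the CPU time is nearly invariant with the number of monitoring dates.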
§ NUMERICAL RESULTS

In this section, the method presented above for pricing knock-out call discrete double barrier options is compared with several other methods. The numerical results are obtained from relation <ref> with r2^J basis functions. In the following we denote ‖f̃_M,J - f_M‖ by e_2(J) and L^2-error(J). As discussed in the previous section, the rates of convergence of f̃_m,J to f_m and of P_J𝒦 to 𝒦 are the same. Therefore, e_2(J-1)/e_2(J) should be about 2^r by <ref>. In addition, relation <ref> implies that the slope of log(L^2-error(J)) should be about α = -r log(2). The source code has been written in Matlab 2015 on a 3.2 GHz Intel Core i5 PC with 8 GB RAM. In the first example, the pricing of a knock-out call discrete double barrier option is considered with the following parameters: r=0.05, σ=0.25, T=0.5, S_0=100, E=100, U=120 and L=80, 90, 95, 99, 99.5. In Table <ref>, the numerical results of the presented method are compared with the Milev numerical algorithm <cit.>, Crank-Nicolson <cit.>, trinomial, adaptive mesh model (AMM) and the quadrature method QUAD-K200 as benchmark <cit.>, for various numbers of monitoring dates. In addition, it can be seen that the CPU time of the presented method remains fixed as the number of monitoring dates increases. The L^2-error(J) values are shown for L=90 and M=250 in Table <ref>; the results verify the convergence rate of our algorithm. Fig. <ref> shows the plot of log(L^2-error(J)) for r=3,4, and it can be seen that the slope of log(L^2-error(J)) is close to α = -r log(2). In the second example, the parameters of the knock-out call discrete double barrier option are r=0.05, σ=0.25, T=0.5, E=100, U=110 and L=95. In Table <ref>, the option price for different spot prices is evaluated and compared with the Milev numerical algorithm <cit.>, Crank-Nicolson <cit.> and the Monte Carlo (MC) method with 10^7 paths <cit.>. Because the probability of crossing the upper barrier during the option's life when U ≥ 2E is very small, the price of a discrete single down-and-out call option can be estimated by a double barrier one with the upper barrier set greater than 2E (for more details see <cit.>). We therefore consider a discrete single down-and-out call option with the following parameters: r=0.1, σ=0.2, T=0.5, S_0=100, E=100 and L=95, 99.5, 99.9. The price is estimated by the double barrier option with U = 2.5E. The numerical results are shown in Table <ref> and compared with Fusai's analytical formula <cit.>, the Markov chain method (MCh) <cit.> and the Monte Carlo method (MC) with 10^8 paths <cit.>, which confirms the validity of the presented method in this case. Fig. <ref> shows the plot of log(L^2-error(J)) for r=3,4, and again the slope of log(L^2-error(J)) is close to α = -r log(2).

§ CONCLUSION AND REMARKS

In this article, we used Legendre multiwavelets for pricing discrete single and double barrier options. In Section 4 we obtained a matrix relation <ref> for solving this problem. The numerical results confirm that the growth of computational time is negligible when the number of monitoring dates increases. Moreover, the rate of convergence of the presented algorithm has been obtained theoretically and verified numerically.
http://arxiv.org/abs/1703.09129v2
{ "authors": [ "Amirhossein Sobhani", "Mariyan Milev" ], "categories": [ "q-fin.CP", "q-fin.MF", "65D15, 35E15, 46A32" ], "primary_category": "q-fin.CP", "published": "20170327150216", "title": "A Numerical Method for Pricing Discrete Double Barrier Option by Legendre Multiwavelet" }
http://arxiv.org/abs/1703.09246v2
{ "authors": [ "Gert Aarts", "Chris Allton", "Davide De Boni", "Simon Hands", "Benjamin Jäger", "Chrisanthi Praki", "Jon-Ivar Skullerud" ], "categories": [ "hep-lat", "hep-ph", "nucl-th" ], "primary_category": "hep-lat", "published": "20170327181328", "title": "Light baryons below and above the deconfinement transition: medium effects and parity doubling" }
1Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse 1, D-85748 Garching, Germany 2Department of Astronomy, University of Florida, Gainesville, FL 32611, USA 3Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, the Netherlands 4School of Physics and Astronomy, Cardiff University, Queen's Buildings, The Parade, Cardiff, CF24 3AA, UK 5Research Center for Astronomy, Academy of Athens, Soranou Efesiou 4, GR-115 27 Athens, Greece 6Department of Physics, Section of Astrophysics, Astronomy and Mechanics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece 7European Southern Observatory, Headquarters, Karl-Schwarzschild-Strasse 2, D-85748, Garching bei München, Germany 8Raymond and Beverly Sackler School of Physics & Astronomy, Tel Aviv University, Ramat Aviv, 69978, Israel 9Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, EH9 3HJ, UK TGB: tbisbas@ufl.edu We explore the effects of the expected higher cosmic ray (CR) ionization rates ζ_ CR on the abundances of carbon monoxide (CO), atomic carbon (C), and ionized carbon (C^+) in the H_2 clouds of star-forming galaxies. The study of <cit.> is expanded by: a) using realistic inhomogeneous Giant Molecular Cloud (GMC) structures, b) a detailed chemical analysis behind the CR-induced destruction of CO, and c) exploring the thermal state of CR-irradiated molecular gas. CRs permeating the interstellar medium with ζ_ CR≳10×(Galactic) are found to significantly reduce the [CO]/[H_2] abundance ratios throughout the mass of a GMC. CO rotational line imaging will then show much clumpier structures than the actual ones. For ζ_ CR≳100×(Galactic) this bias becomes severe, limiting the utility of CO lines for recovering structural and dynamical characteristics of H_2-rich galaxies throughout the Universe, including many of the so-called Main Sequence (MS) galaxies where the bulk of cosmic star formation occurs. Both C^+ and C abundances increase with rising ζ_ CR, with C remaining the more abundant of the two throughout H_2 clouds when ζ_ CR∼ (1-100)×(Galactic). C^+ starts to dominate for ζ_ CR≳10^3×(Galactic). The thermal state of the gas in the inner and denser regions of GMCs is invariant, with T_ gas∼ 10K, for ζ_ CR∼ (1-10)×(Galactic). For ζ_ CR∼ 10^3×(Galactic) this is no longer the case and T_ gas∼ 30-50K is reached. Finally, we identify OH as the key species whose T_ gas-sensitive abundance could mitigate the destruction of CO at high temperatures. ISM: abundances, (ISM:) cosmic rays, galaxies: ISM, methods: numerical, astrochemistry
§ INTRODUCTION
Molecular hydrogen (H_2) gas and its mass distribution in galaxies is of fundamental importance in determining their structural and dynamical characteristics, as well as the process of star formation in them. It has no permanent dipole moment, and its lowest quadrupole transition, S(2-0) in the far-IR with an upper energy level of ∼510K, cannot trace the bulk of the H_2 molecules, which predominantly lie in the cold (≲100K) phase. The astronomical community has therefore implemented other lines to infer this mass indirectly, typically using CO, the next most abundant molecule after H_2 itself, with its bright rotational transitions in the millimeter/sub-millimeter wavelength regime <cit.>. Unlike H_2, CO has a permanent dipole moment and rotational transitions with Δ J=1 are allowed, e.g.
the CO J=1-0 line at 115GHz, which is the most commonly used H_2 gas tracer, with higher-J transitions becoming accessible at high redshifts in the age of the Atacama Large Millimeter/submillimeter Array (ALMA) at the high altitude of the Llano de Chajnantor plateau in Chile. The goal of this work is to explore to what extent CO remains a good tracer of the molecular gas mass and dynamics in regions with elevated CRs, such as expected in actively star-forming galaxies typical of the early Universe. Once the CO (J=1-0) line emission is detected, a scaling factor is used to convert its velocity-integrated brightness temperature (or the line luminosity) to H_2 column density on scales of molecular clouds or larger. This method is statistically robust for M(H_2)≳10^5M_⊙ <cit.>. This CO-to-H_2 method, calibrated in Galactic conditions <cit.>, is widely used in extragalactic observations <cit.>. If multi-J CO (or other molecules like HCN) line observations reveal average gas densities, temperatures and/or dynamic states of molecular clouds that differ from those in the Milky Way, there exists a theoretical framework to use appropriately modified CO-to-H_2 conversion factors <cit.>. All these techniques work as long as CO and the other molecules used to study its average conditions (e.g. HCN) remain sufficiently abundant in GMCs, typically not much less abundant than in the Galactic GMCs where these techniques have been calibrated. Low-metallicity (Z) molecular gas, especially when irradiated by strong FUV radiation, was the first H_2 gas phase for which early studies showed that the standard techniques actually fail <cit.>. This means that low-Z gas in the outer parts of even ordinary spiral galaxies, like the Milky Way, may then be in a very CO-poor phase and thus impossible to trace using CO lines <cit.>. Atomic carbon (C) line emission is another alternative for deducing the molecular gas distribution in galaxies, and one that can be as reliable as low-J CO lines. This is because of its widespread emission in H_2 clouds, despite what is expected from the classical theory of Photodissociation Regions (PDRs) <cit.>. There are a number of reasons contributing towards C line emission being fully associated with CO line emission and having larger emergent flux densities per H_2 column density than those of the low-J CO rotational lines used as global H_2 gas tracers. This led to an early proposal for using the two C lines, ^3P_1-^3P_0 (W_ CI,1-0) at 492 GHz and ^3P_2-^3P_1 (W_ CI,2-1) at 809 GHz, and especially the lower frequency line, as routine H_2 gas tracers in galaxies for z≳1, when the lines shift into the millimeter band <cit.>. Such a method can now be extended into the local Universe, as imaging at high frequencies can be performed by ALMA <cit.>. In our Galaxy, the Vela Molecular Ridge cloud C shows that atomic carbon can accurately trace the H_2 gas mass <cit.>. For extragalactic studies, <cit.> find that in the centre of the Seyfert galaxy Circinus, the C-traced H_2 mass is consistent with that derived from sub-millimeter dust continuum and multiple-J CO excitation analysis, while C observations have recently been used to trace the H_2 gas mass in distant starbursts at z∼4 <cit.>. The ongoing discussion regarding the widespread C line emission in molecular clouds, and thus their ability to trace H_2 independently of ^12CO and ^13CO lines, took another turn after the recent discovery that cosmic rays (CRs) can very effectively destroy CO throughout H_2 clouds, leaving C (but not much C^+) in their wake <cit.>.
Unlike FUV photons, which only do so at the surface of H_2 clouds and produce C^+ rather than C, CRs destroy CO volumetrically and can render H_2 clouds partly or wholly CO-invisible even in ISM environments with modestly boosted CR ionization rates of ζ_ CR∼(10-50)×Galactic, where ζ_ CR is the cosmic-ray ionization rate (s^-1) <cit.>. The latter values are expected in typical star-forming (SF) galaxies in the Universe <cit.>, currently studied only using CO <cit.>. For example, <cit.> inferred a cosmic-ray ionization rate of ζ_ CR∼3×10^-14s^-1 in their analysis of CO/C^+ emissions in the high-redshift HDF 850.1. B15 found that besides the ability of C lines to trace the CO-rich parts of an H_2 cloud, they also probe the CO-poor regions. This is of particular interest, especially if C lines are to be a viable H_2-tracing alternative to CO lines. In the current work we re-examine these CR-induced effects discussed by B15 in the much more realistic setting of inhomogeneous H_2 clouds, whose structure could affect their `visibility' in CO, C, and C^+ line emission. Furthermore, we discuss in more detail the chemistry behind the CR control of the [CO]/[H_2] abundance ratio and its dependence on the gas temperature, which itself is affected by cosmic rays. The latter proves to be a very important factor that should be taken into account in turbulent-dynamic cloud simulations that explore similar issues. Models of CO destruction in cosmic-ray dominated regions (CRDRs) predict that low-J CO/C line flux ratios are generally low (<1). Recent ALMA observations of the Spiderweb galaxy by <cit.> find that W_ CO(7-6)/W_ CI,2-1∼0.2, which can potentially be explained by the presence of high CR energy densities. Another interesting recent example is the observation of the W_ CO(1-0)/W_ CI,1-0∼0.1-0.4 ratio in the starburst galaxy NGC253 <cit.> which, in association with early W_ CO(7-6) observations indicating warm H_2 gas <cit.>, could be due to high ζ_ CR values. High CR energy densities are expected to maintain higher gas temperatures even in far-UV-shielded environments. B15 estimate a gas temperature of ∼50K when the CR ionization rate, ζ__ CR, is boosted up to ∼10^3 times the mean Galactic value. In this paper we perform astrochemical simulations of the effects of larger-than-Galactic CR energy densities on inhomogeneous molecular clouds, using the 3d-pdr code <cit.> to infer the distributions of the CO, C, C^+ abundances and of the gas temperature. This is a continuation of the B15 work, using much more realistic molecular cloud structures rather than the uniform-density or radially varying density structures explored previously. Moreover, we now also analyze the chemistry involved in the CR-induced destruction of CO, and its conversion to C, in greater detail. In all of our simulations we assume that the bulk of the H_2 gas interacts with CRs throughout the cloud volume (i.e. the H_2 gas `sees' CRs, with the same spectrum, throughout the volume of the cloud). While this is not true for some regions deep inside clouds <cit.>, and can depend on the specifics of magnetic fields <cit.>, it remains a very good approximation for the bulk of H_2 clouds in SF galaxies <cit.>. The paper is organized as follows. In Section <ref> we present the setup of our simulations.
In Section <ref> we present the results of our calculations, and in particular how the probability density functions and the abundance distributions of the above key species, as well as the corresponding heating and cooling functions, vary under the different conditions explored. In Section <ref> we discuss how OH enhances the [CO]/[H_2] abundance ratio at higher temperatures when ζ__ CR increases, and in Section <ref> we refer to the impact of our findings on observations. We conclude in Section <ref>.
§ DESCRIPTION OF SIMULATIONS
We consider a three-dimensional density distribution of a non-uniform giant molecular cloud (GMC) and use the 3d-pdr code <cit.> to perform chemistry and full thermal balance calculations, and to estimate the abundance distribution of chemical species and the gas temperature distribution. §.§ Density distribution The inhomogeneous spherical GMC in our models is rendered by a fractal structure with a fractal dimension of D=2.4, constructed using the method described in <cit.>. It has a radius of R=10pc and a mass of M=1.1×10^5M_⊙. This corresponds to an average H-nucleus number density of ⟨ n⟩≃760cm^-3, typical for Milky Way GMCs. The central part of the cloud contains a dense region with peak density ∼2×10^4cm^-3. The fractal dimension is in accordance with the clumpiness factor observed in evolved Galactic Hii regions <cit.>. On the contrary, for diffuse clouds the fractal dimension is higher (D∼2.8-3.0), meaning that they are more uniform <cit.>. The chosen dimension of D=2.4 corresponds to a GMC containing non-homogeneously distributed high-density clumps, typical of those that eventually undergo star formation. They are therefore expected to be H_2-rich and, for the particular Milky Way conditions, also CO-rich. We do not evolve the cloud hydrodynamically, and in order to resolve its densest parts we use a Smoothed Particle Hydrodynamics setup of the cloud and represent it with 8.33×10^5 particles[The density of each particle is calculated using the SPH code seren <cit.>.]. §.§ 3d-pdr initial conditions We use the 3d-pdr code <cit.> in order to calculate the abundances of chemical species in the above fractal cloud. 3d-pdr obtains the gas temperature and the abundance distribution of any arbitrary three-dimensional density distribution by balancing various heating and cooling functions (see <ref>). For the simulations of this work we use the same chemical network and initial abundances of species as used in the B15 paper. In particular we use a subset of the UMIST 2012 network <cit.> consisting of 6 elements (H, He, C, O, Mg, S), 58 species and more than 600 reactions. Table <ref> shows the initial abundances used, which correspond to undepleted Solar abundances with hydrogen mostly in molecular form <cit.>. We chemically evolve the cloud for t_ chem=10^7yr, at which point the system has reached chemical equilibrium. Chemical equilibrium is typically obtained after t_ chem∼10^5yr for a cloud in which H_2 has already formed <cit.>, which is comparable to turbulent diffusion timescales for GMCs in ULIRG environments <cit.>. For our modelled GMC, we find that the sound crossing time is ∼3Myr. On the other hand, the H_2 formation time is t_ form=1/(Rn_ H)≲5Myr, where R=3×10^-18(T_ gas/ K)^1/2cm^3s^-1. We therefore do not expect turbulence to strongly affect our results (see the short check below), although hydrodynamical simulations exploring this effect are needed in this direction <cit.>. We include H_2 formation on dust grains but we do not model CO freeze-out.
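A quick numerical check of the two timescales quoted above; the crossing speed and length scale are assumptions matching the turbulent heating parameters adopted in Section <ref> (v_ turb=1.5kms^-1 and L=5pc).

```python
import numpy as np

# Sketch: crossing time of the cloud vs the H2 formation time
# t_form = 1 / (R n_H), with R = 3e-18 (T_gas/K)^0.5 cm^3 s^-1 (as in the text).
pc, Myr = 3.086e18, 3.156e13                 # cm, s
L_turb, v_turb = 5 * pc, 1.5e5               # assumed crossing scale (cm) and speed (cm/s)
n_H, T_gas = 760.0, 10.0                     # mean density (cm^-3) and temperature (K)

t_cross = L_turb / v_turb / Myr
R_f     = 3e-18 * np.sqrt(T_gas)             # H2 formation rate coefficient (cm^3 s^-1)
t_form  = 1.0 / (R_f * n_H) / Myr
print(f"t_cross ~ {t_cross:.1f} Myr,  t_form ~ {t_form:.1f} Myr")   # ~3.3 and ~4.4 Myr
```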
The effects of different networks and different elemental abundances are presented in Appendix <ref>, which shows that our trends are robust. In all simulations we consider an isotropic FUV radiation field of strength χ/χ_0=1, normalized to the <cit.> spectral shape, and equivalent to 2.7×10^-3ergcm^-2s^-1 when integrated over the 91.2-240nm wavelength range <cit.>. At the surface of the cloud, the field strength is therefore approximately equal to 1/4Draine <cit.>. We perform a suite of four simulations by varying the cosmic-ray ionization rate, ζ__ CR, from 10^-17s^-1 to 10^-14s^-1, the upper limit of which corresponds to values suggested for the Central Molecular Zone <cit.>. For convenience we normalize ζ__ CR as ζ'≡ζ__ CR/ζ__ MW, where ζ__ MW=10^-17s^-1 is the typically adopted ionization rate of the Milky Way. This latter value is ∼0.1 times that observed in the diffuse ISM <cit.> but close to the Heliospheric value (1.45-1.58×10^-17s^-1) as measured by the Voyager 1 spacecraft <cit.>. Our baseline choice of a value lower than that observed is made under the assumption that cosmic rays in our model do not attenuate as a function of column density; instead, the corresponding ionization rate remains constant everywhere in the cloud. We therefore adopt a baseline value that corresponds to an already attenuated ζ__ CR within denser H_2 gas[A similar approximation has also been made by <cit.>]. §.§ Cosmic-ray ionization rate and UV High cosmic-ray ionization rates, of the order of ζ'=10^3, are expected in starburst environments such as the (ultra-)luminous infrared galaxies <cit.>. In these systems the star formation rate (SFR) density, ρ__ SFR≡ SFR/V (where SFR is in M_⊙/ year and V is the corresponding volume), is enhanced by a factor of up to ∼10^3 compared with the Milky Way. This drives a higher cosmic-ray energy density, as U__ CR∝ρ__ SFR <cit.>. Enhanced FUV fields are also expected in such environments, although dust attenuation in these metal-rich objects will keep the boost of the average FUV field incident on the H_2 clouds lower than proportional to ρ__ SFR <cit.>. In this paper we do not vary the isotropic FUV radiation field in our simulations, wanting to isolate the effects of CRs. We note, however, that chemo-hydrodynamical simulations performed by <cit.> suggest that if both ζ' and χ are increased by two orders of magnitude, clouds with mass M∼10^4M_⊙ might be dispersed by the thermal pressure, which would dominate over the gravitational collapse. The attenuation of the FUV radiation is calculated using the method described in <cit.>, which accounts for the attenuation due to dust, H_2 self-shielding, CO self-shielding, and CO shielding by H_2 lines and by dust.
§ RESULTS
§.§ Dependency of column density and volumetric mass of species on ζ'
Our description begins with analysing the abundance distribution of species in all four different 3d-pdr simulations. <cit.>, <cit.> and <cit.> were the first to divide the gas into `CO-poor' and `CO-rich' populations based on the abundance ratio of [CO]/[H_2]. In this work, we adopt the B15 definition, for which `CO-deficient' refers to gas that fulfills the conditions [CO]/[H_2] < 10^-5 and [Hi]/(2[H_2]) < 0.5. In this case, the abundance of CO averaged over the cloud is ∼10× lower than the average value of ∼10^-4 typically found in molecular clouds, while the gas remains H_2-rich[In B15 the gas fulfilling conditions (<ref>) and (<ref>) was defined as `CO-dark'.]. We define the gas as `CO-rich' when the gas is H_2-rich and [CO]/[H_2]≥10^-5.
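These two conditions amount to a simple three-way classification of each computational element; a minimal sketch, with illustrative sample abundances:

```python
# Sketch of the gas classification used in this work:
# `CO-deficient' gas is H2-rich with [CO]/[H2] < 1e-5; `CO-rich' gas is
# H2-rich with [CO]/[H2] >= 1e-5; otherwise the gas is Hi-dominated.
def classify(x_HI, x_H2, x_CO):
    """x_HI, x_H2, x_CO: abundances relative to total H nuclei."""
    if x_HI / (2.0 * x_H2) >= 0.5:               # mostly atomic hydrogen
        return "Hi-rich"
    return "CO-rich" if x_CO / x_H2 >= 1e-5 else "CO-deficient"

# e.g. a fully molecular cell whose CO has been CR-destroyed (values assumed):
print(classify(x_HI=1e-2, x_H2=0.495, x_CO=1e-7))   # -> CO-deficient
```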
For comparison with observations, it is column density ratios rather than just local abundance ratios that are most relevant, since column densities ultimately control the strength of the velocity-integrated line emission. Figure <ref> shows column density plots of the H_2, Hi, CO, C, and C^+ species, as well as cross-section plots of the gas temperature (T_ gas) at the z=0pc plane, all as a function of ζ'. We map the SPH particle distribution onto a 256^3 grid using the method described in Appendix <ref>. The recovery of T_ gas∼10K for the H_2 gas inside the cloud with χ/χ_0=1 and ζ'=1, obtained using our thermal balance calculations, is in agreement with the typical temperatures of FUV-shielded H_2 observed in our Galaxy <cit.> and found in other simulations <cit.>. At the high end of the average CR energy densities and similar gas densities, we recover T_ gas values similar to those of past calculations for CR-dominated regions (CRDRs) <cit.>. We find that as ζ' increases, the column density of molecular hydrogen, N(H_2), remains remarkably unaffected for ζ' up to 10^3. We note, however, that if we were to evolve the cloud hydrodynamically <cit.>, the higher gas temperature of the cloud would act to reduce the number of high-density clumps, thus affecting the underlying total column density distribution and the chemistry itself. N(Hi) remains low and nearly constant with ζ' up to ζ'=10^3, when H_2 starts being significantly destroyed towards Hi. These trends further reflect the findings of <cit.>, who use the ζ__ CR/n_ H ratio to determine whether the ISM gas is predominantly atomic or molecular. The thin Hi shell seen in Fig. <ref> results from photodissociation by the FUV radiation, and its column density of ∼10^21cm^-2 is in agreement with <cit.> and <cit.>. The most interesting interplay is between CO, C and C^+. As can be seen from Fig. <ref>, N(CO) starts decreasing already from ζ'≃10. For ζ'≳10^2 it is everywhere approximately one order of magnitude lower than at ζ'=1. Note that for N(CO) at ζ'=10^2 and 10^3 the upper limit of the colour bar is already one order of magnitude less than for ζ'=1 and 10. While N(H_2) remains high even at ζ'≃10^3, the large decrease of N(CO) points to a CO-to-H_2 conversion factor well above its Galactic value, and one that may well become uncalibratable (see <ref>). At the same time, as CR particles interact with He, they create He^+ ions which then react with CO, forming C^+. The latter further recombines with free electrons to form neutral carbon. On the other hand, N(C) increases already from ζ'≳10, peaking at ζ'∼10^2. As shown in Fig. <ref> for the particular comparison between ζ'=1 and 10^2, it is remarkable to find that N(H_2)_ζ'=1 ≃ N(H_2)_ζ'=100, N(C^+)_ζ'=1 ≃ N(C^+)_ζ'=100, N(C)_ζ'=1 ≃ N(C)_ζ'=100, N(CO)_ζ'=1 ≃ 30 N(CO)_ζ'=100, suggesting that nearly all CO that has been destroyed by CRs is converted to C^+ and C. It is evident from Fig. <ref> that while at ζ'=1 CO traces the H_2 structure very well, it only traces regions of higher column densities at ζ'=10, whereas at ζ'=10^2 it has almost vanished. It is then replaced primarily by C, showing a much better resemblance to the molecular structure. An insidious aspect of a CR-controlled [CO]/[H_2] abundance ratio inside CR-irradiated clouds revealed by Fig.
<ref> is that if one were to perform typical CO line observations meant to find the H_2 mass and also characterize the average gas density and temperature via CO and ^13CO line ratios, the analysis would consistently indicate dense and warm gas, located in those cloud regions where CO manages to survive in a high-CR environment. Yet these routine observations would be totally oblivious to the CO-poor H_2 gas mass (and its conditions) that surrounds these CO-rich warm and dense gas `peaks'. For ζ'≥100 that would be most of the H_2 gas (see Fig. <ref>), an effect that may have wide-ranging implications for galaxies where most of the SF in the Universe occurs. Apart from dust continuum emission, only C and C^+ line imaging could reveal that extra gas mass. Of these two, only C line imaging offers a practical method using ground-based telescopes, since the very high frequency of the C^+ line makes it inaccessible for imaging over much of the redshift space where star-forming galaxies evolve. Figure <ref> shows the total mass of the above species in the simulated GMC as a function of ζ'. The total mass of H_2 in the GMC remains nearly unchanged for up to ζ'∼10^2. It is expected that for ζ'>10^4 the GMC will be Hi-dominated, with only trace amounts of H_2 even in the densest regions. The mass of atomic carbon appears to have a local maximum at ζ'∼10^2, at which point the mass of CO is two orders of magnitude less than the corresponding value for ζ'=1. On the other hand, the mass of C^+ increases monotonically at all times, while for ζ'=10^3 we find that M(C^+)≃M(C). It is interesting to see that the masses of Hi and C^+ increase monotonically, with the mass of C^+ increasing somewhat faster than that of Hi. Both of these species are products of cosmic rays interacting with H_2, CO and C; hence it is expected that their abundances will also increase with increasing ζ'. The observed trend, however, is likely to be a result of additional volumetric (3D) effects. §.§ Heating and cooling processes The 3d-pdr code performs thermal balance iterations and converges when the total heating rate matches the total cooling rate calculated for each position within the cloud. The heating processes considered include the <cit.> grain and PAH photoelectric heating, with the modifications suggested by <cit.> to account for the revised PAH abundance estimate from the Spitzer data; carbon photoionization heating <cit.>; H_2 formation and photodissociation heating <cit.>; collisional de-excitation of vibrationally excited H_2 following FUV pumping <cit.>; heat deposition per cosmic-ray ionization <cit.>; supersonic turbulent decay heating <cit.>; exothermic chemical reaction heating <cit.>; and gas-grain collisional coupling <cit.>. The particular H_2 formation rate is calculated using the treatment of <cit.>. The turbulent heating included in 3d-pdr is ∝ v_ turb^3/L, where v_ turb=1.5kms^-1 and L=5pc. These values are constant throughout all calculations, giving v_ turb^3/L∼2×10^-4cm^2 s^-3. Our chosen v_ turb is on par with what is expected from the Larson relation and its subsequent observational study by <cit.>. This turbulent heating term assumes that turbulence is driven at the largest scale of the cloud <cit.>. The gas primarily cools due to collisional excitation and the subsequent C^+, C, O fine-structure line emission, as well as emission due to rotational transitions of CO. The cooling rate of each process is estimated using a 3D escape probability routine.
The details are described in <cit.> and the data files are adopted from the Leiden Atomic and Molecular Database <cit.>[http://home.strw.leidenuniv.nl/∼moldata/]. We use a macroturbulent expression to account for the optical depth <cit.>. For densities n__ H≲10^2cm^-3, located mainly at the outer regions of the GMC, heating comes predominantly from photoelectrons which are produced by the isotropic FUV radiation field (see Fig. <ref>). For 10^2≲ n__ H≲10^3cm^-3 and for all ζ', we find that heating results predominantly from contributions due to photoelectrons, dissipation of turbulence, exothermic reactions due to recombinations of HCO^+, H_3^+, H_3O^+ and ion-neutral reactions of He^+ + H_2 (chemical heating), energy deposition due to cosmic-ray reactions, and heating due to H_2 formation. For higher densities and for ζ'=1, heating results from the turbulence, with smaller contributions from cosmic rays and chemical heating. As ζ' increases, however, we find that cosmic rays dominate over all other heating mechanisms. The chemical heating also contributes significantly. The latter results from the abundance increase of all participating ions due to reactions ignited by the high cosmic-ray energy density. This is reflected in the lower panel of Fig. <ref>, where we show the heating functions at ζ'=10^3. Likewise, cooling depends on n__ H and ζ' (see Fig. <ref>). In particular, for all ζ' we find that at low densities cooling results predominantly from C^+ which – along with photoelectric heating – controls the gas temperature at the outer shell of the GMC. The increase of the cosmic-ray ionization rate results in the increase of the C^+ abundance and hence its cooling efficiency, which in turn decreases the gas temperature. This is actually the reason why the gas temperature is lower at low A_ V,eff (see <ref>) with increasing cosmic rays (see <ref>). This result has been further reproduced by 1D calculations, confirming the importance of the [C^+] increase. For ζ'∼10^3, we find that C^+ cooling dominates for densities up to n__ H∼10^3cm^-3. On the other hand, cooling due to C is important for ζ'≲10^2, particularly for densities 10^2≲ n__ H≲10^3.5cm^-3. Finally, for n__ H≳10^3.5cm^-3, CO rotational lines contribute predominantly to the gas cooling for ζ'≲10^2, with O becoming substantially important at high densities (n__ H>10^3.5cm^-3) and high cosmic-ray ionization rates (ζ'∼10^3), although it is not a main coolant in all other cases. Dust temperatures are calculated for each SPH particle using the treatment of <cit.> for the heating due to the incident FUV photons. This approach is further modified to include the attenuation of the IR radiation, as described by <cit.>. Since the UV radiation at the surface of the cloud is approximately 1/4 Draine (see <ref>), the maximum dust temperature we find is T_ dust∼12K, located at large radii. We impose a floor dust temperature of 10K, which is consistent with the average lowest temperatures observed <cit.>. We can therefore assume that the dust temperature in the entire cloud is approximately uniform and equal to T_ dust=10K. In regions with densities exceeding 10^4cm^-3, CO freeze-out onto dust grains may become an important process, and the CO abundance in the gas phase can be sufficiently reduced to affect its emissivity. Cosmic-ray induced (photo-)desorption can then bring a small fraction of this gas back to the gas phase. Our results would not be altered if we were to include this process.
This is because only ∼0.4% of the total mass of the simulated cloud has densities exceeding 10^4cm^-3, and the corresponding CO abundance never exceeds ∼16% of the total CO abundance throughout the cloud (for ζ'=10^2; for all other cases it is well below ∼10%). Moreover, in GMCs typically only small H_2 gas mass fractions reside in regions with n_ H>10^4cm^-3, making CO freeze-out of little importance for the bulk of their mass.§.§ Probability density functionsFigure <ref> shows mass-weighted probability density distribution functions (PDFs) for each simulation. In these plots it can be seen how the effect of CO destruction operates volumetrically, particularly when applying conditions (<ref>) and (<ref>). In all plots, the non-shaded part corresponds to CO-rich densities, the light-shaded part to all H_2-rich but CO-deficient densities, and the dark-shaded part to all Hi-rich densities. It is interesting to compare the CO-rich, CO-deficient and Hi regimes with those predicted by B15 from one-dimensional calculations. For this purpose, we also plot in each case the limits for the CO-deficient (vertical solid) and Hi-rich (vertical dashed) regions as indicated in the B15 parameter plot (their Fig. 1). For ζ'=1, B15 find that for densities n__ H≲25cm^-3 the gas will be CO-deficient. However, in our 3D simulations we find that for densities up to this value the gas will also be in Hi form. The CO-deficient/H_2-rich density range now lies at 25≲ n__ H≲200cm^-3 (Fig. <ref>a). This difference occurs because the additional photodissociation of CO due to the isotropic FUV radiation is more effective at lower densities, which are located at the outer parts of the cloud (larger radii). This radiation also creates some additional amount of Hi at the outer shell of the cloud, on top of the CR interaction, due to photodissociation of H_2. The fact that lower densities are located mostly at the outer parts of the cloud is verified in Fig. <ref>, where we correlate the effective visual extinction <cit.>, A_ V,eff, defined as A_V,eff=-0.4ln(1/ N_ℓ∑_i=1^ N_ℓe^-2.5 A_V[i]), with the n__ H number density (a short numerical sketch of this ray average is given below). This A_ V,eff is different from the observed visual extinction: when looking towards the centre of a spherically symmetric cloud, this expression gives half of the observed A_V, which is calculated from one edge of the cloud to the other. In the above equation, N_ℓ corresponds to the number of HEALPix <cit.> rays used, which is equal[Following the analysis by <cit.> we do not expect our results to depend sensitively on the chosen angular resolution. See also <cit.>] to 12. Indeed, from Fig. <ref> we find that densities of n_ H≲200cm^-3 have a mean visual extinction of A_ V,eff≲0.8mag and are located mainly at the outer shell of the cloud <cit.>. They are therefore affected by the FUV radiation. For ζ'=10 and 10^2 (Fig. <ref>b, c) we find very good agreement with the B15 parameter plot in estimating the density range of the CO-deficient gas. As discussed above, this is the range of cosmic-ray ionization rates for which we obtain high abundances of C while the gas remains almost entirely H_2-rich. As can be seen in both cases, the [CO]/[H_2] ratio is ≳10^-5 only at moderate/high densities <cit.>. As seen in Fig. <ref>d, for ζ'=10^3 we find that the density range dominated by Hi is in agreement with the B15 parameter plot to a remarkable precision. However, the density range of the CO-deficient gas is now wider.
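A minimal sketch of the ray average defining A_ V,eff above, assuming N_ℓ=12 rays with illustrative A_V values:

```python
import numpy as np

# Effective visual extinction A_V,eff = -0.4 ln( <exp(-2.5 A_V)> ), averaged
# over the rays escaping a cell (here N_l = 12 HEALPix directions).
def av_eff(av_rays):
    av_rays = np.asarray(av_rays, dtype=float)
    return -0.4 * np.log(np.mean(np.exp(-2.5 * av_rays)))

# e.g. a cell shielded in most directions but with two nearly clear sightlines
# (sample values assumed for illustration):
rays = [0.2, 0.3, 5.0, 5.0, 6.0, 6.0, 7.0, 7.0, 8.0, 8.0, 9.0, 10.0]
print(f"A_V,eff = {av_eff(rays):.2f} mag")   # ~0.96 mag: set by the lowest-A_V rays
```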
Although for this ζ' B15 predict that the CO-deficient gas will be observed in the 300≲ n_ H≲7×10^3cm^-3 density range, the corresponding upper limit that we find here is ∼1.5×10^4cm^-3. As discussed in B15, the `turnover' point is sensitive to the gas temperature obtained from the thermal balance, whereas the latter is also sensitive to the cooling functions, which depend on the density distribution. We therefore attribute this discrepancy to additional 3D effects that cannot be modelled with corresponding 1D calculations. §.§ Abundance distributions and gas temperatures In Fig. <ref> we show the gas temperature, T_ gas, versus the n__ H number density for all four different ζ' simulations. For all ζ' and for log_10(n__ H)≳3.5 we find very good agreement with the predicted T_ gas of B15 (their Fig. 9). This is because for this range of densities A_ V,eff≳2mag, and thus the isotropic FUV is sufficiently attenuated. Note that the standard-deviation bars of T_ gas at n_ H≳10^3cm^-3 decrease, while for n_ H∼10^4cm^-3 they are negligible. While the FUV has been severely extinguished, this regime is predominantly controlled by the cosmic-ray interaction, which in turn depends only weakly on n_ H, as illustrated in Fig. 9 of B15. We also find that the mean gas temperatures, ⟨ T_ gas⟩_ζ', for each ζ' are ⟨ T_ gas⟩_1≃11K, ⟨ T_ gas⟩_10≃11K, ⟨ T_ gas⟩_100≃22K and ⟨ T_ gas⟩_1000≃40K. The low temperatures obtained for Galactic average CR energy densities are similar to those observed for FUV-shielded dark cores <cit.>. Moreover, the fact that T_ gas remains low and nearly constant for modestly boosted CR energy densities (e.g. ∼(1-10)×Galactic) recovers the result obtained by <cit.> for uniform clouds, further demonstrating the robustness of the initial conditions of star formation set deep inside such FUV-shielded dense gas regions. This robustness is an important starting point for all gravoturbulent theories of star formation inside GMCs <cit.>. Note also that at low densities, i.e. n__ H<10^2cm^-3, found mostly in outer cloud layers and in principle exposed to the isotropic FUV radiation, T_ gas decreases as ζ' increases. This is because the FUV radiation, along with the high CR ionization rate, creates large amounts of C^+, whose line emission is an effective coolant (as discussed in <ref>), driving the decrease of T_ gas. To further understand how the abundance distribution of species changes with ζ', it is convenient to correlate them with A_ V,eff. This is shown in Fig. <ref>, where panels (a)-(e) show the abundances of H_2, Hi, C^+, C and CO, and panel (f) that of T_ gas versus A_ V,eff. As demonstrated earlier, the abundance of H_2 remains remarkably similar for all ζ' values. The differences in H_2 abundance as a function of ζ' are reflected in the abundance of Hi in each case; here we can see that for all ζ', [Hi]≲10^-1 in the interior of the cloud, i.e. where A_ V,eff>7mag. On the contrary, C^+, C and CO depend more sensitively on an increasing ζ', with the CO abundance destroyed even in high-density clumps close to the centre of the GMC when ζ' is high. As expected, C and C^+ follow the reverse trend, increasing in abundance with increasing ζ'. Observe again that for ζ'=10^3 the abundance of C is less than at ζ'=10^2 (as it is also destroyed), indicating that there is a range of cosmic-ray energy densities for which the overall abundance of C peaks and where the C-to-H_2 method will be particularly robust. Note that in both Fig. <ref> and Fig.
<ref>f, the error bars (corresponding to 1σ standard deviation) are much smaller at high n_ H and A_ V,eff respectively, meaning that T_ gas in this regime is approximately uniform and entirely controlled by cosmic-ray heating.
§ THERMAL BALANCE AND THE CRUCIAL ROLE OF OH
The CO molecule can form through various channels <cit.>. An important formation route, especially at moderate-to-high cosmic-ray or X-ray ionization rates, as well as in low-metallicity gas <cit.>, depends on the OH intermediary. In cold gas (T_ gas≲ 100K) ion-molecule chemistry dominates, OH formation is initiated by cosmic-ray ionization of atomic oxygen or its reaction with H_3^+, and the OH abundance increases with ζ__ CR <cit.>. However, as discussed by <cit.>, this trend holds only up to a critical ionization rate of ζ_ CR, crit≈ 10^-14 n_3 Z' s^-1 (where n_3 is the density in units of 10^3cm^-3 and Z' is the metallicity relative to Solar). For higher ζ__ CR the Hi-to-H_2 transition occurs and the abundances of both OH and CO decrease with increasing ζ__ CR. In B15, it was shown how the [CO]/[H_2] abundance ratio changes when varying ζ_ CR and the n_ H number density (their Figures 1 and 7, respectively). The chemical analysis discussed in that work (their Section 4.1) used a gas temperature obtained from full thermal balance calculations. Comparison of our isothermal models at T_ gas=100K with those of <cit.> showed excellent agreement. Here, we additionally consider isothermal simulations at T_ gas=50K and at 20K to explore the T_ gas sensitivity of the [CO]/[H_2] ratio, an issue left unclear in the B15 work. We complement the latter work by examining the chemical network responsible for this behaviour and what determines the [CO]/[H_2] ratio at different temperatures for a given ζ_ CR and n_ H. We use three isothermal models, at gas temperatures T_ gas=100K, 50K and 20K, with ζ'=10^2. Figure <ref> shows the abundances of OH (upper panel) and [CO]/[H_2] (lower panel) for these three temperatures in red, green and blue colours, respectively. As can be seen in the upper panel of Fig. <ref>, at T_ gas=20K (thick blue dashed lines) the abundance of OH increases slightly from ζ_ CR/n_ H≳10^-21cm^3s^-1 until ∼8×10^-19cm^3 s^-1, at which point OH decreases strongly with an increasing ζ_ CR/n_ H ratio. As soon as T_ gas is increased, the abundance of OH also increases, affecting the [CO]/[H_2] ratio. In particular, for T_ gas=50K (thick green dot-dashed lines) the abundance of OH keeps increasing monotonically until ∼2×10^-17cm^3s^-1, where it peaks at an abundance of ≃2.5×10^-7 with respect to hydrogen. For T_ gas=100K (thick red solid lines), the OH abundance peaks at ≃3×10^-6. This trend is reflected in the [CO]/[H_2] abundance ratio shown in the lower panel of the figure. In particular, for T_ gas=20K, [CO]/[H_2] decreases continuously with increasing ζ_ CR/n_ H. For T_ gas=50K and 100K a different behaviour is seen: for ζ_ CR/n_ H≳10^-19cm^3s^-1 a `turnover' appears, with a local minimum at ζ_ CR/n_ H∼10^-18cm^3s^-1 and a local maximum at ζ_ CR/n_ H∼10^-17cm^3s^-1, while for higher ζ_ CR/n_ H, [CO]/[H_2] falls. CO forms through the OH intermediary, and OH formation is initiated by two important reactions: via proton transfer, O + H_3^+ → OH^+ + H_2, or via charge transfer, O + H^+ → O^+ + H, followed by O^+ + H_2 → OH^+ + H <cit.>. A sequence of abstraction reactions with H_2, followed by dissociative recombination, then leads to the formation of OH <cit.>.
For low gas temperatures, Reaction <ref> is substantially inefficient since it is endoergic by 224K, and OH is therefore formed mainly via the H_3^+ route (Reaction <ref>). CO is destroyed by He^+, and the rate of this reaction increases with increasing ζ_ CR/n_ H, implying that [CO]/[H_2] also decreases with increasing ζ_ CR/n_ H. Note that at all times, as we have illustrated above, H_2 remains unaffected, and all changes in the [CO]/[H_2] ratio mostly reflect the behaviour of CO. For high gas temperatures, and as long as ζ_ CR/n_ H≲10^-19cm^3s^-1, the abundance of protons is low, and OH formation is therefore dominated by Reaction <ref>. This makes the abundance of OH at T_ gas=50K and 100K almost identical to that at T_ gas=20K. In this ζ_ CR/n_ H regime, we further find that the removal of OH by C^+ is more efficient at low gas temperatures. Once ζ_ CR/n_ H≳10^-19cm^3s^-1, the abundance of protons increases rapidly and Reaction <ref> becomes very efficient. This is reflected in the sudden increase of the OH abundance (red solid line of Fig. <ref>, upper panel) and the consequent rise of [CO]/[H_2] (red solid line, lower panel). Finally, for ζ_ CR/n_ H∼10^-17cm^3s^-1, the Hi-to-H_2 transition takes place and more Hi is formed. This renders the OH, and consequently the CO, formation inefficient, and thus both abundances fall. We then perform a test to study the contribution of Reaction <ref> in determining the [CO]/[H_2] abundance ratio at different gas temperatures. To do this, we neglect this reaction by setting its rate to a negligible value and re-running the models discussed here. The resultant abundances are plotted as dashed lines in both panels of Fig. <ref>. For T_ gas=20K the abundances of OH and [CO]/[H_2] (blue dashed lines) are identical to the previous case (blue solid lines), indicating that the charge transfer reaction is very inefficient at low temperatures. However, for higher temperatures we see that Reaction <ref> plays the dominant role in OH formation at high ζ_ CR/n_ H, since it is primarily responsible for removing almost all protons; by neglecting it, we obtain the results of the T_ gas=20K test (red dashed). In turn, this is reflected in [CO]/[H_2] (red dashed), as expected. This work considers Reaction <ref> with its temperature dependency, and we find that it becomes important for gas temperatures exceeding T_ gas≳20-30K. Here it is important to consider that even in vigorously SF galaxies, temperatures significantly higher than 50K may not be reached for most of their molecular gas mass. Thus the large CR-induced depressions of the average [CO]/[H_2] abundance ratio are expected to be maintained by the T_ gas-sensitive chemistry of the chemical network controlling the OH abundance. Indeed, as our Fig. <ref> shows, even when ζ_ CR=10^3×Galactic (ULIRG-type ISM), T_ gas≲50K. Furthermore, for metal-rich ISM environments, FUV photons cannot propagate through sufficiently high gas mass fractions to raise the average T_ gas beyond that range either <cit.>, while turbulent heating can only do this for a minute fraction (≲1%) of the molecular gas mass, even in the most turbulent of clouds <cit.>. Exceptions to this will be places, such as the Galactic Center, and possibly some very extreme ULIRGs, such as Arp 220, where T_ gas∼(50-100)K is reached; places that either do not contain much of the total H_2 gas in otherwise SF-quiescent galaxies or represent SF outliers with respect to the major mode of SF in the Universe.
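The temperature switch of the charge-transfer route can be illustrated with its Boltzmann factor alone. In the sketch below, the prefactor k_0 is a placeholder (only the relative values are meaningful), so this is not the actual rate coefficient from the adopted network:

```python
import numpy as np

# Illustrative sketch of why O + H+ -> O+ + H switches on with T_gas: its
# rate carries a Boltzmann factor for the ~224 K endothermicity quoted above.
k0 = 1.0                                     # hypothetical prefactor (units dropped)

def k_charge_transfer(T, dE=224.0):
    """Endothermic rate ~ exp(-dE/T), with dE in K taken from the text."""
    return k0 * np.exp(-dE / T)

for T in (20.0, 50.0, 100.0):
    print(f"T = {T:5.1f} K : k/k0 = {k_charge_transfer(T):.3e}")
# -> ~1.4e-05 at 20 K vs ~1.1e-01 at 100 K: nearly four orders of magnitude,
#    consistent with this route being negligible in cold gas.
```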
§ DISCUSSION
In this work we recover the results of B15 of a CR-induced CO destruction in H_2 clouds, now using the more realistic rendering of inhomogeneous clouds. Our three-dimensional simulations demonstrate that upon increasing the cosmic-ray ionization rate, the abundance of the H_2 molecule remains unaffected even for high cosmic-ray ionization rates of the order of 10^3 times the mean Galactic value. On the other hand, the CO abundance is sensitive to even small boosts of ζ_ CR, and CO is easily destroyed, forming C^+ and consequently C (via recombination with free electrons), as long as gas temperatures remain T_ gas≲50K. Thus low-J CO line emission may become very weak in such ISM environments. Figures 2 and 3 of B15 show that the emissivities of both C lines are stronger than those of low-J CO lines in CO-poor/H_2-rich regimes. This consequently yields a potential advantage of the C lines in tracing the CO-poor H_2 gas around the CO-rich regions of inhomogeneous H_2 clouds (Bisbas et al. in prep.), along with the CO-rich H_2 gas. Secondary effects of a CR-induced and n(H_2)-sensitive CO destruction can make the H_2 gas distributions in SF galaxies traced by low-J CO lines appear clumpier than they actually are. This has been discussed by B15, but here these effects are actually computed for inhomogeneous H_2 clouds irradiated by elevated CR energy backgrounds (see Fig. <ref>). It is worth noting that the CR-induced effects studied in this work mark the warm `end' of the thermal states of potentially CO-invisible H_2 gas, while those imposed by an enhanced Cosmic Microwave Background Radiation on low-J CO line (and dust continuum) brightness distributions at high redshifts mark the cold `end' <cit.>. Both of these regimes may contain large amounts of molecular gas in galaxies at high (z≳3) redshifts, as SF is typically a highly inefficient process, i.e. there will always be large amounts of cold non-SF H_2 gas and dust mass even in SF galaxies. §.§ Main Sequence galaxies: on the CR `firing' line The destruction of the CO molecule by SF-powered CRs in H_2-rich galaxies is of great importance for studying star formation and its modes in the early Universe, where such galaxies still strongly evolve. This is because the H_2 gas mass surface density and H_2 gas velocity dispersions deduced from low-J CO lines are the tools used to evaluate the stability of gas-rich disks via the Q-Toomre criterion <cit.>. Current theoretical views of what drives SF in strongly evolving gaseous galactic disks <cit.> depend on an accurate depiction of the H_2 mass surface density and velocity fields, a picture that may be incomplete because of the CR-induced CO destruction in exactly such systems. In B15 we discussed the possibility that the average [CO]/[H_2] abundance may remain high in the ISM of U/LIRGs because the effect of CRs is countered by higher average molecular gas densities. However, such strong merger/starburst systems are not the main mode of SF in the early Universe. Indeed it is in massive gas-rich galaxies, evolving along a narrow region of the stellar mass (M_*)-SFR plane, the so-called "Main Sequence" (MS) galaxies <cit.>, where ∼90% of the cosmic star formation takes place up to z∼3 <cit.>. It is these systems, with SFR∼(20-300)M_⊙yr^-1 <cit.> and seemingly ordinary GMCs <cit.>, that are expected to be the most affected by a CR-induced destruction of CO. This is apparent from Fig. 1 of B15 (where BzK galaxies are MS systems) as well as Fig.
<ref> of this work, from which it can be seen that for SFR∼(10-100)M_⊙yr^-1, which may correspond to ζ'∼10-10^2, the CO `marking' of a typical molecular cloud is significantly reduced. The metallicity-insensitive CR-induced destruction of CO in MS galaxies can only compound the difficulties already posed by the lower metallicities prevailing in some of these systems <cit.>, making any CO-deduced H_2 gas mass distributions, their scale-lengths, their SFR-controlled gas depletion timescales, dynamical masses and Q-Toomre stability criteria, provisional. In this context it is important to remember that even the well-known effects of strong FUV/low-Z on the [CO]/[H_2] abundance ratio can render entire clouds CO-free <cit.>, boosting their C (and C^+) content. In the Z-χ_0/CR domain where a phase transition to very CO-poor H_2 gas phases happens, it will be highly non-linear, making a practical calibration of the so-called X_ CO factor in MS galaxies <cit.> and local spiral LIRGs <cit.> challenging, even for their CO-marked H_2 gas distributions. Indeed, as we have already discussed in <ref>, the ability of CO to survive only in the densest regions of CR-irradiated clouds (Fig. <ref>) can yield a misleading picture of the actual H_2 gas distribution and its thermal and dynamical state. Moreover, a nearly-Galactic X_ CO factor may still be obtained if such CO-marked sub-regions of the underlying H_2 distribution are used for its calibration, even as they are no longer representative of the actual H_2 clouds. This can be shown if we consider the dense H_2 gas regions where CO survives embedded in columns of CO-free H_2 gas. The latter will exert a non-thermal pressure of P_ e≈(π/2) G Σ( H_2)[Σ( H_2)+(σ_g(V)/σ_*(V))Σ_*], where we assumed that the CO-rich gas regions lie at the mid-plane of a rotating H_2-rich disk, with stars mixed in, at a surface mass density of Σ_* and vertical velocity dispersion of σ_*(V) (with Σ( H_2) and σ_g(V) the corresponding quantities for the H_2 gas). For the Milky Way, P_ e/k_ B∼1.4×10^4 cm^-3 K is the average non-thermal pressure on the boundaries of molecular clouds <cit.>. This, crucially, determines the normalization of the so-called linewidth-size relation for a molecular cloud of radius R: σ(R) = σ_0 [(P_ e/k_ B)/(10^4 K cm^-3)]^1/4 (R/pc)^1/2 <cit.>. Should a CO-invisible H_2 gas mass lie `on top' of CO-rich (and thus observable) cloud regions in the mid-plane of a SF disk, it would exert an `overpressure' on the CO-rich ones. This would appear as a deviation from the Galactic linewidth-size relation, and the CO clouds would seem to be out of virial equilibrium, lowering their corresponding X_ CO factor <cit.>. Nevertheless the very weak dependence of the linewidth-size relation on P_ e allows large amounts of CO-invisible gas to exist without easily discernible observational effects. For a purely H_2 gas disk (Σ_*=0, assumed here for simplicity), a Σ_g(CO-invisible)=5×Σ_g(CO-visible) larger H_2 gas surface density would raise P_ e by a factor of 25, but the corresponding σ(R) of CO-rich clouds embedded inside such overlying columns of CO-invisible gas only by a factor of ∼2.2. The latter is within the observational uncertainty of the σ(R) relation in the Galaxy <cit.>, and thus any X_ CO calibration of such overpressured CO-rich clouds would still give a value consistent with the Galactic one within the uncertainties.
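The weak P_e dependence of the linewidth-size relation quoted above is easy to verify numerically. The short Python sketch below evaluates σ(R) for the Milky Way boundary pressure and for the 25× overpressured case; the normalization σ_0 is an assumed placeholder (its value comes from the cited literature), so only the relative factor of ∼2.2 is meaningful here.

```python
def sigma_R(R_pc, Pe_over_kB, sigma0_kms=1.0):
    """Linewidth-size relation: sigma(R) = sigma0 * (Pe/kB / 1e4)^(1/4) * (R/pc)^(1/2).

    sigma0_kms is an assumed placeholder normalization; the functional
    form is the one quoted in the text."""
    return sigma0_kms * (Pe_over_kB / 1.0e4) ** 0.25 * R_pc ** 0.5

Pe_MW = 1.4e4        # Milky Way cloud-boundary pressure, P_e/k_B [K cm^-3]
boost = 25.0         # Sigma_g -> 5x Sigma_g gives P_e -> 25x (pure gas disk)

print(sigma_R(1.0, Pe_MW))           # fiducial linewidth at R = 1 pc
print(sigma_R(1.0, boost * Pe_MW))   # same cloud, overpressured by CO-dark gas
print(boost ** 0.25)                 # ~2.24: the factor of ~2.2 quoted above
```

The quarter-power dependence is what keeps a five-fold CO-invisible gas column nearly invisible in the observed σ(R) scatter.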
§.§ CO chemistry in SF galaxies: towards a dynamical framework

The physics and chemistry of the CR-induced destruction of CO can now be readily used in galaxy-sized/cosmological evolution models. Doing so will: a) shed light on what happens in the context of galaxy evolution models as the χ_0/CR `boundary' conditions of molecular clouds evolve, and b) help the interface of such models with actual observables (e.g. low-J CO and C images of galaxies with ALMA/JVLA). Past work has already incorporated, in a sub-grid fashion, the effects of FUV destruction of CO in H_2 clouds inside galaxies <cit.>. CR-driven effects will be even easier to implement in such models insofar as full transparency of H_2 clouds to CRs and U_ CR∝ρ_ SFR are assumed. Moreover, C lines as H_2 gas mass tracers in galaxies at high redshifts have already been discussed in a cosmological context <cit.>. Once CR effects are taken into account in galaxy-scale models, one can then: a) evaluate the best method(s) for obtaining the H_2 mass distributions and velocity fields in SF galaxies evolving across cosmic epochs, and b) ascertain whether current theoretical views about the role of unstable giant H_2 clumps in driving the SF of gas-rich early galaxies <cit.> still hold (e.g. a C-imaged Σ( H_2) distribution may be smoother than a CO-imaged one in a CR-irradiated gaseous disk, impacting also the gas velocity fields deduced from these lines). Such models can also shed light on another very important caveat discussed by B15, namely the role of turbulence. Observations of the so-called (U)LIRGs, extreme merger/starburst systems, indicate that regions with high SFR density (and thus ζ_ CR) are also regions of strong turbulence of H_2 clouds, and thus of high M(n_ H>10^4cm^-3)/M( H_2) mass fractions per GMC. With CO remaining abundant in high-density gas (n_ H>10^4cm^-3) even when the average ζ_ CR is high, this can diminish and even counteract the effects of CR-induced CO destruction in such environments, as most of the H_2 gas no longer resides in the low-density regime (∼10^2-10^3cm^-3) as in the MW (densities where CO would be CR-destroyed very effectively) but at high densities. In this regard, the [CO]/[H_2] abundance ratio obtained from our simulations is of particular interest. In Fig. <ref> we plot this ratio versus the total H-nucleus number density for all four different ζ' examined. For ζ'≳10^2 most of the molecular cloud gas has [CO]/[H_2]<10^-5, making it very CO-poor. At higher densities, and according to B15, this ratio would exceed 10^-5 at ζ'=10^3 only for n_ H>10^4cm^-3, assuming no significant freeze-out. Numerical simulations of individual turbulent H_2 clouds study the effects of constant, pre-set FUV radiation fields and CR energy densities on the CO and C distributions and the corresponding line emission <cit.>. Such models, while useful in finding trends of CO and C line emission as H_2 gas mass tracers in GMCs, cannot address the issue of what happens when such clouds are immersed in actual galaxies, where the FUV and CR energy densities around these clouds vary strongly on timescales equal to or shorter than the internal cloud chemical and dynamical timescales <cit.>. This is because in individual cloud simulations the `boundary' conditions of FUV radiation, ζ_ CR, and turbulent energy injection are not tracked.
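As a concrete illustration of the U_ CR∝ρ_ SFR assumption mentioned above, the following hedged Python sketch maps an SFR-density boost ζ' onto a CR ionization rate, normalised to the Galactic value ζ_ CR∼10^-17 s^-1 adopted in this work. The linear scaling and the unit MW normalisation of the SFR density are assumptions of such sub-grid schemes, not outputs of our models.

```python
def zeta_cr(sfr_density, sfr_density_mw=1.0, zeta_gal=1.0e-17):
    """Sub-grid CR ionization rate, assuming U_CR (and hence zeta_CR)
    scales linearly with the local SFR density, normalised to the
    Galactic value zeta_gal ~ 1e-17 s^-1 used in this work."""
    return zeta_gal * sfr_density / sfr_density_mw

for boost in (1, 10, 1e2, 1e3):   # zeta' = 1 ... 10^3, the range explored here
    print(f"zeta' = {boost:6g}  ->  zeta_CR = {zeta_cr(boost):.1e} s^-1")
```

A prescription of this kind is all that a galaxy-sized model needs in order to attach the cloud-scale [CO]/[H_2] results of this work to its evolving SFR-density field.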
Galaxy-sized models <cit.> that include H_2 clouds, along with the appropriate physics and chemistry behind the FUV/CR `drivers', modelled in tandem with the evolving conditions of a SF galaxy, are thus invaluable in examining whether in high SFR-density environments H_2 gas remains mostly CO-rich or not <cit.>. Regardless of any future theoretical `verdict' on whether CO-invisible molecular gas can exist in large quantities in SF galaxies in the early Universe during periods of high SFR densities, observations of low-J CO and C lines in such systems are indispensable. Here we re-iterate that the C column density retains its robustness in tracing the H_2 column near-proportionally for ζ_ CR∼10^-15s^-1 (see Fig. <ref>). In particular we find that for such CR-ionization rates, N( Ci)≃4×10^-4 N( H_2) (for our adopted carbon elemental abundance). This relation depends weakly on ζ_ CR provided that ζ_ CR≳10^-15s^-1, contrary to the corresponding one for CO. Moreover, even for MW-level ζ_ CR values, the W_ CI,1-0 per beam will remain larger than that of CO J=1-0 or J=2-1 (the two CO transitions used to trace the bulk of H_2 mass in galaxies) as long as the same beam is used to image the H_2 gas in all lines[The lowest observed brightness temperature ratio of W_ CI,1-0/W_ CO(1-0) in the Milky Way is ∼0.1 <cit.>. If we were to observe W_ CI,1-0 and W_ CO(1-0) at the same resolution, then in the Rayleigh-Jeans regime the flux-per-beam boost would be (492GHz/115GHz)^2×0.1∼1.83. This can lead to signal-to-noise advantages for the C line observations, depending on the redshift of the object <cit.>. If ζ' is increased, C lines become brighter still, increasing this kind of advantage.]. This, along with the possibility that C line imaging of SF disks at high-z finds a different (smoother and/or possibly more extended) H_2 gas distribution because of large quantities of CO-poor H_2, argues strongly for sensitive C line imaging of gas-rich SF galaxies (Bisbas et al. in preparation). In this work we also identified the exact chemistry behind the large gas temperature sensitivity of CO formation in H_2 clouds (see <ref>), an issue initially discussed by B15. The gas temperature sensitivity of our results elevates the importance of reliable computation of the average thermal state of H_2 gas in the FUV/CR-intensive environments found within SF galaxies. As demonstrated in Fig. <ref>, CRs can provide an important heating source throughout the GMC, and particularly in regions with n_ H>10^3cm^-3. This in turn can increase the gas temperature deep in the cloud (where the FUV has been severely attenuated) to values of the order of ∼50K <cit.>. Such gas temperatures are still too low for CO formation to occur via the O+H^+ charge transfer. In low metallicity galaxies, a higher gas temperature may be expected as the cooling efficiency and shielding are lower than in solar metallicity ones, perhaps moderating the CR-induced destruction of CO in high ζ' environments, as discussed in <ref>. It is thus necessary that the CR effects are studied together with those driven by lower metallicities in order to discern their combined impact on the average [CO]/[H_2] abundance in metal-poor star forming galaxies. Even though we used standard cooling/heating mechanisms of PDR/CRDR physics (see <ref>), turbulence will also heat the molecular gas <cit.>, and do so in a volumetric manner just like the CRs.
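Returning to the [CI] brightness argument in the footnote above, the quoted numbers can be reproduced with a few lines of Python. The Rayleigh-Jeans ν^2 scaling, the minimum W_ CI,1-0/W_ CO(1-0)∼0.1 ratio, and the N( Ci)≃4×10^-4 N( H_2) relation are taken from the text; the example H_2 column is an arbitrary assumed value.

```python
nu_CI, nu_CO = 492.0, 115.0   # GHz: [CI](1-0) and CO(1-0) rest frequencies
T_ratio_min = 0.1             # lowest observed W_CI,1-0/W_CO(1-0) in the MW

# Rayleigh-Jeans regime: flux per (matched) beam scales as nu^2 * T_B
flux_boost = (nu_CI / nu_CO) ** 2 * T_ratio_min
print(f"[CI](1-0) vs CO(1-0) flux-per-beam boost: {flux_boost:.2f}")  # ~1.83

# CI column as an H2 tracer for zeta_CR >~ 1e-15 s^-1 (adopted C abundance);
# N_H2 below is an arbitrary example column, not a model output:
N_H2 = 1.0e22                 # cm^-2
print(f"N(CI) ~ {4e-4 * N_H2:.1e} cm^-2 for N(H2) = {N_H2:.1e} cm^-2")
```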
Turbulent heating has even been argued to be a dominant heating mechanism of galaxy-sized H_2 gas reservoirs in some extreme SF galaxies <cit.>, even as numerical simulations of individual molecular clouds show that turbulent heating typically affects only ∼1% of their mass <cit.>. The so-called `Brick' cloud is a well studied object close to the Galactic Centre. Simulations performed by <cit.> have reproduced its observed gas and dust temperatures when their modelled cloud interacts with a FUV field of strength χ/χ_0∼10^3 and a ζ_ CR∼10^-14s^-1. Early suggestions by <cit.> proposed that the gas heating of the Central Molecular Zone (CMZ) at the Galactic Centre is primarily dominated by cosmic-rays and/or turbulence. Recent observations by <cit.>, however, show that the dominant heating mechanism of the particular `Brick' cloud is turbulence which, in association with an LVG analysis, gives an upper limit of ζ_ CR<10^-14s^-1. However, in places of the CMZ cosmic-rays may still be a very important heating source, particularly in less turbulent sub-regions <cit.>. Our simple treatment of turbulent heating <cit.> leaves unanswered the question of how much it can influence the average thermal states of the typically very turbulent H_2 gas in extreme starbursts with high SFR densities. Numerical simulations of individual H_2 gas clouds, at Mach numbers appropriate for the ISM of galaxies with very high SFR densities (M∼3-10 times that of ordinary spirals), that include turbulent heating along with the chemistry and physics of CR-induced CO destruction are necessary for answering this question. If strong turbulence can elevate the average H_2 gas temperatures and densities of galaxies with high SFR densities (typically merger/starbursts), it may still keep the [CO]/[H_2] abundance ratio high and the H_2 gas traceable via the traditional methods based on CO (see B15 for the relevant discussion).

§ CONCLUSIONS

In this paper, continuing the study of <cit.>, we present results from a suite of three-dimensional astrochemical simulations of inhomogeneous molecular clouds, rendered as fractals and embedded in different cosmic-ray ionization rates spanning three orders of magnitude (ζ_ CR=10^-17-10^-14s^-1), along with a constant isotropic FUV radiation field (χ/χ_0=1). Our study therefore focuses only on the effect of the high cosmic-ray ionization rates expected in SF galaxies in the Universe, and how they affect the abundances of CO, C, C^+, Hi and H_2. We used the 3d-pdr <cit.> code to perform full thermal balance and chemistry calculations. Our results can be summarized as follows:
* The column density and total H_2 mass of a typical inhomogeneous GMC remain nearly constant for increasing ζ', with the total mass of H_2 decreasing by ≲10% for ζ_ CR∼10^-14s^-1 (∼10^3× Galactic). On the other hand, a significant reduction of the [CO]/[H_2] abundance ratio sets in throughout the cloud, even when ζ_ CR∼10^-16s^-1 (∼10× Galactic), a value expected for the ISM of many star-forming galaxies in the Universe.
* When the average ζ_ CR increases further, up to ∼10^-15-10^-14s^-1, the CO molecule is destroyed so thoroughly that only the densest regions of the GMC remain CO-rich. The abundances of C and C^+ on the other hand increase, with the latter becoming particularly abundant for ζ_ CR∼10^-14s^-1. Atomic carbon is the species that proves to be the most abundant, `marking' most of the H_2 mass of the cloud over a wide range of ζ_ CR values.
Using only CO rotational transitions to discern the average state and mass of such CR-irradiated GMCs will only recover their highest density peaks (n_ H≳10^3cm^-3), make the clouds appear clumpier than they truly are, and convey biased information on the molecular gas velocity fields.
* We expect significant effects of CR-induced destruction of CO to occur in the so-called Main Sequence galaxies, the systems where most of the cosmic history of star formation unfolds. This is a result of their high SF rates (implying high CR rates) and seemingly Galactic-type molecular clouds. The widespread CR destruction of CO expected in such systems will make the calibration of their X_ CO factor challenging, even for their CO-bright gas.
* Our computations recover gas temperatures of T_ gas∼10K for the CR-irradiated and FUV-shielded dense regions inside those GMCs. This is indeed typical for such regions in the Galaxy, and it remains robust over ζ_ CR≲10^-16s^-1. This is of particular importance if the initial conditions of SF, and the stellar initial mass function (IMF) mass scale (i.e. the IMF `knee'), are indeed set within such regions. Nevertheless, once ζ_ CR∼10^-15-10^-14s^-1, the temperature of such regions rises up to T_ gas∼30-50K, and the initial conditions of star formation in such galaxies are bound to change.
* The main heating mechanisms in cosmic-ray dominated regions, apart from CRs, are the chemical mechanism (due to the large amounts of ions expected in CRDRs) and the H_2 formation mechanism. Cooling, on the other hand, is mainly due to C^+ and O, with the contribution of CO cooling nearly negligible, as its abundance is at least two orders of magnitude lower than in normal Galactic conditions.
* We find the CR-regulated [CO]/[H_2] abundance ratio to be sensitive to the temperature of the gas once T_gas>50 K. A significant production of the OH molecule, acting as an intermediary, is the T_ gas-sensitive part of the chemical network that determines the [CO]/[H_2] ratio. For warm gas at T_ gas=100K, abundant OH can keep the molecular gas CO-rich (i.e. [CO]/[H_2]∼10^-4), even in high CR energy environments. The severe CR-induced destruction of CO sets in for T_gas≲50 K, a regime which our thermochemical calculations indicate as containing the bulk of the H_2 mass in our inhomogeneous cloud models, and indeed the bulk of the molecular gas in SF galaxies, except perhaps in the most extreme merger/starbursts.
* Our simple treatment of turbulent heating, and the fact that GMCs in the very high SFR density environments of merger/starburst galaxies are much more turbulent and thus denser, necessitate careful considerations of turbulent heating and a dynamic rendering of density inhomogeneities in order to explore our findings in a fully realistic setting.
* Finally, the chemistry and thermal-balance calculations behind the CR-controlled [CO]/[H_2], [C]/[H_2], and [C^+]/[H_2] abundance ratios inside inhomogeneous H_2 clouds can be used in a sub-grid fashion as elements of galaxy-sized numerical simulations of evolving galaxies. This is perhaps a vital ingredient of any realistic galaxy evolution model across cosmic epochs, given the elevated SFR densities – and thus CR energy densities – typically observed in galaxies in the distant Universe.
As a final conclusion we mention that, because of the strong effects of CRs on the CO abundance, combined with the effects of high FUV and/or low metallicity environments in further reducing its abundance, and the impracticality of C^+ imaging in SF galaxies except for the highest redshift objects (z≳4), a concerted effort must be mounted by the extragalactic community towards C line imaging of H_2 gas in the Universe as a viable alternative.

§ ACKNOWLEDGEMENTS

The authors thank an anonymous referee for reviewing the manuscript, whose comments have improved the clarity of this work. We thank Andreas Schruba, Andrew Strong, Rob Ivison, Nick Indriolo, Steffi Walch and Paola Caselli for useful discussions. This work is supported by a Royal Netherlands Academy of Arts and Sciences (KNAW) professor prize, and by the Netherlands Research School for Astronomy (NOVA). The work of PPP was funded by an Ernest Rutherford Fellowship. SB acknowledges support from the DFG via German-Israel Project Cooperation grant STE1869/2-1 GE625/17-1. LSz acknowledges support from the A-ERC grant 108477 PALs. ZYZ acknowledges support from the ERC in the form of the Advanced Investigator Programme, 321302 COSMICISM. [Ao et al.(2013)]Ao13 Ao, Y., Henkel, C., Menten, K. M., et al. 2013, , 550, A135 [Asplund et al.(2009)]Aspl09 Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, , 47, 481 [Bakes & Tielens(1994)]Bake94 Bakes, E. L. O., & Tielens, A. G. G. M. 1994, , 427, 822 [Bell et al.(2005)]Bell05 Bell, E. F., Papovich, C., Wolf, C., et al. 2005, , 625, 23 [Bell et al.(2006)]Bell06 Bell, T. A., Roueff, E., Viti, S., & Williams, D. A. 2006, , 371, 1865 [Bell et al.(2007)]Bell07 Bell, T. A., Viti, S., & Williams, D. A. 2007, , 378, 983 [Bergin & Tafalla(2007)]Berg07 Bergin, E. A., & Tafalla, M. 2007, , 45, 339 [Bialy & Sternberg(2016)]Bial16 Bialy, S., & Sternberg, A. 2016, , 822, 83 [Bialy & Sternberg(2015)]Bial15 Bialy, S., & Sternberg, A. 2015, , 450, 4424 [Bisbas et al.(2015)]Bisb15 Bisbas, T. G., Papadopoulos, P. P., & Viti, S. 2015, , 803, 37 [`B15'] [Bisbas et al.(2012)]Bisb12 Bisbas, T. G., Bell, T. A., Viti, S., Yates, J., & Barlow, M. J. 2012, , 427, 2100 [Black (1987)]Blac87 Black J. H., 1987, in Hollenbach D. J., Thronson H. A. Jr., eds, Astrophys. Space Sci. Libr., Vol. 134, Interstellar processes. Reidel, Dordrecht, p. 731 [Bradford et al.(2003)]Brad03 Bradford, C. M., Nikola, T., Stacey, G. J., et al. 2003, , 586, 891 [Bryant & Scoville(1996)]Brya96 Bryant, P. M., & Scoville, N. Z. 1996, , 457, 678 [Bolatto et al.(1999)]Bola99 Bolatto, A. D., Jackson, J. M., & Ingalls, J. G. 1999, , 513, 275 [Bolatto et al.(2013)]Bola13 Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, , 51, 207 [Bothwell et al.(2017)]Both17 Bothwell, M. S., Aguirre, J. E., Aravena, M., et al. 2017, , 466, 2825 [Bournaud & Elmegreen(2009)]Bour09 Bournaud, F., & Elmegreen, B. G. 2009, , 694, L158 [Bournaud et al.(2011)]Bour11 Bournaud, F., Chapon, D., Teyssier, R., et al. 2011, , 730, 4 [Bournaud et al.(2014)]Bour14 Bournaud, F., Perret, V., Renaud, F., et al. 2014, , 780, 57 [Burke & Hollenbach(1983)]Burk83 Burke, J. R., & Hollenbach, D. J. 1983, , 265, 223 [Cardelli et al.(1996)]Card96 Cardelli, J. A., Meyer, D. M., Jura, M., & Savage, B. D. 1996, , 467, 334 [Carleton et al.(2016)]Carl16 Carleton, T., Cooper, M. C., Bolatto, A. D., et al. 2016, arXiv:1611.04587 [Cartledge et al.(2004)]Cart04 Cartledge, S. I. B., Lauroesch, J. T., Meyer, D. M., & Sofia, U. J. 2004, , 613, 1037 [Cazaux & Spaans(2004)]Caza04 Cazaux, S., & Spaans, M.
2004, , 611, 40[Cazaux & Tielens(2002)]Caza02 Cazaux, S., & Tielens, A. G. G. M. 2002, , 575, L29[Chen et al.(2015)]Chen15 Chen, B.-Q., Liu, X.-W., Yuan, H.-B., Huang, Y., & Xiang, M.-S. 2015, , 448, 2187[Chieze(1987)]Chie87 Chieze, J. P. 1987, , 171, 225[Clark et al.(2012)]Clar12 Clark, P. C., Glover, S. C. O., & Klessen, R. S. 2012, , 420, 745[Clark et al.(2013)]Clar13 Clark, P. C., Glover, S. C. O., Ragan, S. E., Shetty, R., & Klessen, R. S. 2013, , 768, L34[Clavel et al.(1978)]Clav78 Clavel, J., Viala, Y. P., & Bel, N. 1978, , 65, 435[Cummings et al.(2015)]Cumm15 Cummings, A. C., Stone, E. C., Heikkila, B. C., et al., 2015, PoS(ICRC2015)318 [Daddi et al.(2010)]Dadd10 Daddi, E., Bournaud, F., Walter, F., et al. 2010, , 713, 686[Dalgarno(2006)]Dalg06 Dalgarno, A. 2006, Proceedings of the National Academy of Science, 103, 12269[Dickman et al.(1986)]Dick86 Dickman, R. L., Snell, R. L., & Schloerb, F. P. 1986, , 309, 326[Downes & Solomon(1998)]Down98 Downes, D., & Solomon, P. M. 1998, , 507, 615[Draine(1978)]Drai78 Draine, B. T. 1978, , 36, 595 [Elbaz et al.(2007)]Elba07 Elbaz, D., Daddi, E., Le Borgne, D., et al. 2007, , 468, 33[Elmegreen(1989)]Elme89 Elmegreen, B. G. 1989, , 338, 178[Elmegreen et al.(2008a)]Elme08a Elmegreen, B. G., Bournaud, F., & Elmegreen, D. M. 2008a, , 688, 67-77 [Elmegreen et al.(2008b)]Elme08b Elmegreen, B. G., Bournaud, F., & Elmegreen, D. M. 2008b, , 684, 829-834[Genzel et al.(2012)]Genz12 Genzel, R., Tacconi, L. J., Combes, F., et al. 2012, , 746, 69[Genzel et al.(2015)]Genz15 Genzel, R., Tacconi, L. J., Lutz, D., et al. 2015, , 800, 20[Gerin & Phillips(2000)]Geri00 Gerin, M., & Phillips, T. G. 2000, , 537, 644[Ginsburg et al.(2016)]Gins16 Ginsburg, A., Henkel, C., Ao, Y., et al. 2016, , 586, A50[Glover et al.(2010)]Glov10 Glover, S. C. O., Federrath, C., Mac Low, M.-M., & Klessen, R. S. 2010, , 404, 2 [Glover et al.(2015)]Glov15 Glover, S. C. O., Clark, P. C., Micic, M., & Molina, F. 2015, , 448, 1607 [Glover & Clark(2016)]Glov16 Glover, S. C. O., & Clark, P. C. 2016, , 456, 3596[Glover & Clark(2012)]Glov12 Glover, S. C. O., & Clark, P. C. 2012, , 421, 9[Górski et al.(2005)]Gors05 Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, , 622, 759[Gratier et al.(2016)]Grat16 Gratier, P., Braine, J., Schuster, K., et al. 2016, arXiv:1609.03791[Gullberg et al.(2016)]Gull16 Gullberg, B., Lehnert, M. D., De Breuck, C., et al. 2016, , 591, A73[Habing(1968)]Habi68 Habing, H. J. 1968, , 19, 421[Herbst & Klemperer(1973)]Herb73 Herbst, E., & Klemperer, W. 1973, , 185, 505[Heyer & Brunt(2004)]Heye04 Heyer, M. H., & Brunt, C. M. 2004, , 615, L45 [Hodge et al.(2012)]Hodg12 Hodge, J. A., Carilli, C. L., Walter, F., et al. 2012, , 760, 11[Hollenbach et al.(1991)]Holl91 Hollenbach, D. J., Takahashi, T., & Tielens, A. G. G. M. 1991, , 377, 192[Hollenbach & McKee(1979)]Holl79 Hollenbach, D., & McKee, C. F. 1979, , 41, 555[Hopkins & Beacom(2006)]Hopk06 Hopkins, A. M., & Beacom, J. F. 2006, , 651, 142 [Hubber et al.(2011)]Hubb11 Hubber, D. A., Batty, C. P., McLeod, A., & Whitworth, A. P. 2011, , 529, A27[Indriolo & McCall(2012)]Indr12 Indriolo, N., & McCall, B. J. 2012, , 745, 91[Indriolo et al.(2015)]Indr15 Indriolo, N., Neufeld, D. A., Gerin, M., et al. 2015, , 800, 40[Israel & Baas(2001)]Isra01 Israel, F. P., & Baas, F. 2001, , 371, 433 [Krips et al.(2016)]Krip16 Krips, M., Martín, S., Sakamoto, K., et al. 2016, , 592, L3[Lacy et al.(1994)]Lacy94 Lacy, J. H., Knacke, R., Geballe, T. R., & Tokunaga, A. T. 1994, , 428, L69[Lada & Blitz(1988)]Lada88 Lada, E. A., & Blitz, L. 
1988, , 326, L69[Le Petit et al.(2016)]LePe16 Le Petit, F., Ruaud, M., Bron, E., et al. 2016, , 585, A105[Lo et al.(2014)]Lo14 Lo, N., Cunningham, M. R., Jones, P. A., et al. 2014, , 797, L17 [Mashian et al.(2013)]Mash13 Mashian, N., Sternberg, A., & Loeb, A. 2013, , 435, 2407[McCall et al.(2003)]McCa03 McCall, B. J., Huneycutt, A. J., Saykally, R. J., et al. 2003, , 422, 500[McElroy et al.(2013)]McEl13 McElroy, D., Walsh, C., Markwick, A. J., et al. 2013, , 550, A36 [Meijerink et al.(2011)]Meij11 Meijerink, R., Spaans, M., Loenen, A. F., & van der Werf, P. P. 2011, , 525, A119[Monaghan & Lattanzio(1985)]Mona85 Monaghan, J. J., & Lattanzio, J. C. 1985, , 149, 135[Narayanan & Krumholz(2016)]Nara16 Narayanan, D., & Krumholz, M. 2016, arXiv:1601.05803[Neufeld et al.(2010)]Neuf10 Neufeld, D. A., González-Alfonso, E., Melnick, G., et al. 2010, , 521, L5[Nishimura et al.(2015)]Nish15 Nishimura, A., Tokuda, K., Kimura, K., et al. 2015, , 216, 18[Noeske et al.(2007)]Noes07 Noeske, K. G., Weiner, B. J., Faber, S. M., et al. 2007, , 660, L43[Offner et al.(2013)]Offn13 Offner, S. S. R., Bisbas, T. G., Viti, S., & Bell, T. A. 2013, , 770, 49 [Offner et al.(2014)]Offn14 Offner, S. S. R., Bisbas, T. G., Bell, T. A., & Viti, S. 2014, , 440, L81 [Olsen et al.(2015)]Olse15 Olsen, K. P., Greve, T. R., Narayanan, D., et al. 2015, , 814, 76[Padoan et al.(2009)]Pado09 Padoan, P., Juvela, M., Kritsuk, A., & Norman, M. L. 2009, , 707, L153[Padovani et al.(2013)]Pado13 Padovani, M., Hennebelle, P., & Galli, D. 2013, , 560, A114[Pak et al.(1998)]Pak98 Pak, S., Jaffe, D. T., van Dishoeck, E. F., Johansson, L. E. B., & Booth, R. S. 1998, , 498, 735[Pan & Padoan(2009)]Pan09 Pan, L., & Padoan, P. 2009, , 692, 594[Papadopoulos & Seaquist(1999)]Papa99 Papadopoulos, P. P., & Seaquist, E. R. 1999, , 516, 114[Papadopoulos et al.(2002)]Papa02 Papadopoulos, P. P., Thi, W.-F., & Viti, S. 2002, , 579, 270[Papadopoulos et al.(2004)]Papa04 Papadopoulos, P. P., Thi, W.-F., & Viti, S. 2004, , 351, 147 [Papadopoulos & Greve(2004b)]Papa04b Papadopoulos, P. P., & Greve, T. R. 2004b, , 615, L29[Papadopoulos(2010)]Papa10 Papadopoulos, P. P. 2010, , 720, 226 [Papadopoulos et al.(2011)]Papa11 Papadopoulos, P. P., Thi, W.-F., Miniati, F., & Viti, S. 2011, , 414, 1705[Papadopoulos et al.(2012a)]Papa12a Papadopoulos, P. P., van der Werf, P., Xilouris, E., Isaak, K. G., & Gao, Y. 2012a, , 751, 10[Papadopoulos et al.(2012b)]Papa12b Papadopoulos, P. P., van der Werf, P. P., Xilouris, E. M., et al. 2012b, , 426, 2601[Papadopoulos et al.(2014)]Papa14 Papadopoulos, P. P., Zhang, Z.-Y., Xilouris, E. M., et al. 2014, , 788, 153 [Pelupessy et al.(2006)]Pelu06 Pelupessy, F. I., Papadopoulos, P. P., & van der Werf, P. 2006, , 645, 1024[Pelupessy & Papadopoulos(2009)]Pelu09 Pelupessy, F. I., & Papadopoulos, P. P. 2009, , 707, 954[Planck Collaboration et al.(2016)]Plan16 Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, , 594, A28[Polychroni et al.(2012)]Poly12 Polychroni, D., Moore, T. J. T., & Allsopp, J. 2012, , 422, 2992[Pon et al.(2012)]Pon12 Pon, A., Johnstone, D., & Kaufman, M. J. 2012, , 748, 25[Price(2007)]Pric07 Price, D. J. 2007, , 24, 159[Richings & Schaye(2016)]Rich16 Richings, A. J., & Schaye, J. 2016, , 460, 2297[Rimmer et al.(2012)]Rimm12 Rimmer, P. B., Herbst, E., Morata, O., & Roueff, E. 2012, , 537, A7 [Rodighiero et al.(2011)]Rodi11 Rodighiero, G., Daddi, E., Baronchelli, I., et al. 2011, , 739, L40[Rodríguez-Fernández et al.(2001)]Rodr01 Rodríguez-Fernández, N. J., Martín-Pintado, J., Fuente, A., et al. 
2001, , 365, 174[Rowan-Robinson(1980)]Rowa80 Rowan-Robinson, M. 1980, , 44, 403[Sánchez et al.(2010)]Sanc10 Sánchez, N., Añez, N., Alfaro, E. J., & Crone Odekon, M. 2010, , 720, 541 [Sanders et al.(2003)]Sand03 Sanders, D. B., Mazzarella, J. M., Kim, D.-C., Surace, J. A., & Soifer, B. T. 2003, , 126, 1607[Schöier et al.(2005)]Scho05 Schöier, F. L., van der Tak, F. F. S., van Dishoeck, E. F., & Black, J. H. 2005, , 432, 369[Solomon et al.(1997)]Solo97 Solomon, P. M., Downes, D., Radford, S. J. E., & Barrett, J. W. 1997, , 478, 144[Solomon et al.(1987)]Solo87 Solomon, P. M., Rivolo, A. R., Barrett, J., & Yahil, A. 1987, , 319, 730[Smith et al.(2014)]Smit14 Smith, R. J., Glover, S. C. O., Clark, P. C., Klessen, R. S., & Springel, V. 2014, , 441, 1628[Sternberg & Dalgarno(1995)]Ster95 Sternberg, A., & Dalgarno, A. 1995, , 99, 565[Sternberg et al.(2014)]Ster14 Sternberg, A., Le Petit, F., Roueff, E., & Le Bourlot, J. 2014, , 790, 10[Strong et al.(2004a)]Stro04a Strong, A. W., Moskalenko, I. V., Reimer, O., Digel, S., & Diehl, R. 2004, , 422, L47[Strong et al.(2004b)]Stro04b Strong, A. W., Moskalenko, I. V., & Reimer, O. 2004, , 613, 962 [Szűcs et al.(2016)]Szuc16 Szűcs, L., Glover, S. C. O., & Klessen, R. S. 2016, , 460, 82[Tielens(2013)]Tiel13 Tielens, A. G. G. M. 2013, Reviews of Modern Physics, 85, 1021[Tielens & Hollenbach(1985)]Tiel85 Tielens, A. G. G. M., & Hollenbach, D. 1985, , 291, 722 [Tomassetti et al.(2014)]Toma14 Tomassetti, M., Porciani, C., Romano-Díaz, E., Ludlow, A. D., & Papadopoulos, P. P. 2014, , 445, L124[van Dishoeck & Black(1986)]vanD86 van Dishoeck, E. F., & Black, J. H. 1986, , 62, 109[van Dishoeck & Black(1988)]vanD88 van Dishoeck, E. F., & Black, J. H. 1988, , 334, 771 [van Dishoeck(1992)]vanD92 van Dishoeck, E. F. 1992 in IAU Symp. 150, Astrochemistry of Cosmic Phenomena, ed. P. D. Singh (Dordrecht: Kluwer), 143 [Walch et al.(2015)]Walc15 Walch, S., Whitworth, A. P., Bisbas, T. G., Hubber, D. A., Wünsch, R. 2015, , 452, 2794[Wolfire et al.(2003)]Wolf03 Wolfire, M. G., McKee, C. F., Hollenbach, D., & Tielens, A. G. G. M. 2003, , 587, 278[Wolfire et al.(2010)]Wolf10 Wolfire, M. G., Hollenbach, D., & McKee, C. F. 2010, , 716, 1191 [Wu et al.(2015)]Wu15 Wu, B., Van Loo, S., Tan, J. C., & Bruderer, S. 2015, , 811, 56[Xie et al.(1995)]Xie95 Xie, T., Allen, M., & Langer, W. D. 1995, , 440, 674[Zhang et al.(2014)]Zhan14 Zhang, Z.-Y., Henkel, C., Gao, Y., et al. 2014, , 568, A122[Zhang et al.(2016)]Zhan16 Zhang, Z.-Y., Papadopoulos, P. P., Ivison, R. J., et al. 2016, Royal Society Open Science, 3, 160025 § A. CHEMICAL NETWORK AND INITIAL ELEMENTAL ABUNDANCESWe explore the dependence of our results on the choice of chemical network and the choice of initial elemental abundances. To do this, we perform a suite of 0D calculations where we switch off the UV radiation field. We use densities of n_ H=10^2-4cm^-3 interacting with ζ'=10^0-3 cosmic-ray ionization rates (see Eqn. <ref>). We consider two subsets and the full UMIST 2012 <cit.> consisting of 33 species (4 elements: H, He, C, O), 58 species (2 additional elements: Mg, S), and 215 species (4 additional elements: Na, Fe, Si, N) respectively. In addition to Table <ref>, the initial abundances of the last four elements used are Na=1.738×10^-6, Fe=3.162×10^-5, Si=3.236×10^-5 and N=6.76×10^-5 <cit.>. The results of the above tests are shown in red solid (33 species), green solid (58 species) and blue solid (215 species) lines in Fig. <ref>1. 
We further perform additional simulations using the subset of 58 species only, in which we change the initial values of the elemental abundances to those that have been measured via optical/UV absorption lines in diffuse clouds with densities similar to those of our fractal GMCs. We use C=1.4×10^-4 <cit.>, O=2.8×10^-4 <cit.> and Mg=7×10^-9, while keeping S as shown in Table <ref>, as it is observed to remain largely undepleted. The reduction of the Mg abundance by ∼4 orders of magnitude compared to the value shown in Table <ref> is motivated by the fact that such high abundances of Mg may act as a non-negligible source of electrons, which can in turn affect the [CO]/[H_2] ratio. The results of this test are shown in Fig. <ref>1 as green dashed lines. These abundances correspond to environments with metallicities Z≃ Z_⊙. We note that throughout this work we have assumed solar metallicity at all times, and we do not explore the effect of CR-induced CO destruction in sub-solar and super-solar environments. For all reasonable assumptions, C/O∼0.5, which is consistent with diffuse ISM observations, and always <1. Overall, from the above suite of tests we find that the general trend of the [CO]/[H_2] abundance ratio decreasing with increasing cosmic-ray ionization rate remains robust. We therefore demonstrate that the findings presented in this work, and in particular the column density maps shown in Fig. <ref>, do not strongly depend on the complexity of the chemical network used or the choice of initial elemental abundances adopted.

§ B. MAPPING SPH TO GRID

We convert the properties of the cloud (number density distribution, gas temperatures, etc.) from SPH to a uniform grid in order to produce the column density plots of Fig. <ref>. Each SPH particle, p, comprising the cloud has a smoothing length h_p and carries the corresponding PDR information from the 3d-pdr calculations. In order to weight an SPH quantity, A_p, at the centroid, q, of a given cell of the uniform grid, we use the equation A_q=∑_p=1^N n_p A_p W(|r_q-r_p|, h_p), where N=50 is the number of the closest neighbouring SPH particles to the centroid of the cell and W is the <cit.> softening kernel W(ℓ, h_p) = (1/(π h_p^3)) × { 1-(3/2)ℓ^2+(3/4)ℓ^3, if 0≤ℓ<1; (1/4)(2-ℓ)^3, if 1≤ℓ≤2; 0, if ℓ>2 }, where ℓ=|r_q-r_p|/h_p. The number of SPH particles in each grid cell varies from a few tens (highest density regions) to none (outside the cloud). Similar techniques have been discussed by <cit.>.
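For completeness, a direct Python transcription of the SPH-to-grid weighting described above is sketched below (note the ℓ>2 branch, where the cubic spline kernel of Monaghan & Lattanzio has compact support and vanishes). The particle data structure is a hypothetical placeholder used only for illustration.

```python
import numpy as np

def W_kernel(ell, h):
    """Monaghan & Lattanzio (1985) cubic spline kernel; ell = |r_q - r_p| / h_p."""
    norm = 1.0 / (np.pi * h ** 3)
    if 0.0 <= ell < 1.0:
        return norm * (1.0 - 1.5 * ell ** 2 + 0.75 * ell ** 3)
    elif 1.0 <= ell <= 2.0:
        return norm * 0.25 * (2.0 - ell) ** 3
    return 0.0                 # ell > 2: outside the kernel's compact support

def grid_value(r_q, particles, n_neigh=50):
    """Weight an SPH quantity A_p onto a cell centroid r_q (the equation above).

    `particles` is an assumed list of (r_p, h_p, n_p, A_p) tuples; the
    N = 50 nearest neighbours are used, as in the text."""
    nearest = sorted(particles, key=lambda p: np.linalg.norm(r_q - p[0]))
    return sum(n_p * A_p * W_kernel(np.linalg.norm(r_q - r_p) / h_p, h_p)
               for r_p, h_p, n_p, A_p in nearest[:n_neigh])
```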
http://arxiv.org/abs/1703.08598v1
{ "authors": [ "Thomas G. Bisbas", "Ewine F. van Dishoeck", "Padelis P. Papadopoulos", "László Szücs", "Shmuel Bialy", "Zhi-Yu Zhang" ], "categories": [ "astro-ph.GA", "astro-ph.SR" ], "primary_category": "astro-ph.GA", "published": "20170324210922", "title": "Cosmic-ray induced destruction of CO in star-forming galaxies" }
The spectral determinations of the multicone graphs K_w▽ mC_n Ali Zeydi Abdian[Lorestan University, College of Science, Lorestan, Khoramabad, Iran; e-mail: aabdian67@gmail.com; azeydiabdi@gmail.com] The main goal of this paper is to characterize new classes of multicone graphs which are determined by both their adjacency and Laplacian spectra. A multicone graph is defined to be the join of a clique and a regular graph. A wheel graph is obtained from the join of a complete graph on one vertex with a cycle. The question of which wheel graphs are determined by their adjacency spectra is still unsolved, so any progress on the determination of these graphs by their adjacency spectra is an interesting and important problem. In [Y. Zhang, X. Liu, and X. Yong: Which wheel graphs are determined by their Laplacian spectra?. Comput. Math. Appl., 58 (2009) 1887–1890] and [M.-H. Liu: Some graphs determined by their (signless) Laplacian spectra. Czech. Math. J., 62 (2012) 1117–1134] it has been shown that, except for the wheel graph of order seven, all wheel graphs are determined by their Laplacian spectra, and that wheel graphs are determined by their signless Laplacian spectra, respectively. In this study, we present new classes of connected multicone graphs which are a natural generalization of wheel graphs, and we show that these graphs are determined by their adjacency spectra as well as their Laplacian spectra. Also, we show that the complements of some of these graphs are determined by their adjacency spectra. In addition, we give a necessary and sufficient condition for graphs cospectral with the graphs presented in the paper to be perfect. Finally, we pose two problems for further work. MSC(2010): 05C50. Keywords: Adjacency spectrum, Laplacian spectrum, DS graph, Multicone graph, Wheel graph.

§ INTRODUCTION

In this paper, all graphs, except in Section 5, are connected undirected simple graphs (for an answer to the question of why we consider connected graphs, see the paragraph before Theorem <ref> and also Theorem <ref> of the paper). Let G = (V(G),E(G)) be a graph with vertex set V = V(G) = {v_1, . . . , v_n} and edge set E(G). All notions on graphs that are not defined here can be found in <cit.>. A graph consisting of k disjoint copies of an arbitrary graph G will be denoted by kG. The complement of a graph G is denoted by G̅. The join of two graphs G and H is the graph obtained from the disjoint union of G and H by connecting any vertex of G to any vertex of H; it is denoted by G▽ H. We say that a graph G is an r-regular graph if the degree of its regularity is r. Given a graph G, the cone over G is the graph formed by adjoining a vertex adjacent to every vertex of G. Let the matrix A(G) be the (0,1)-adjacency matrix of G and d_k be the degree of the vertex v_k. The matrix L(G) = D(G)-A(G) is called the Laplacian matrix of G, where D(G) is the n× n diagonal matrix with {d_1, . . . , d_n} as diagonal entries (and all other entries 0). Since both matrices A(G) and L(G) are real and symmetric, their eigenvalues are all real numbers. Assume that λ_1≥λ_2 ≥ . . . ≥λ_n and μ_1≥μ_2≥ . . . ≥μ_n (= 0) are respectively the adjacency eigenvalues and the Laplacian eigenvalues of the graph G.
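As a quick illustration of the matrices just defined, a minimal numpy sketch (the choice of C_4 as the test graph is ours) builds A(G), D(G) and L(G)=D(G)-A(G) and confirms that their eigenvalues are real:

```python
import numpy as np

n = 4                                   # build the cycle C_4
A = np.zeros((n, n), dtype=int)
for i in range(n):                      # adjacency matrix A(G)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

D = np.diag(A.sum(axis=1))              # degree matrix D(G)
L = D - A                               # Laplacian matrix L(G) = D(G) - A(G)

print(np.sort(np.linalg.eigvalsh(A)))   # adjacency spectrum of C_4: -2, 0, 0, 2
print(np.sort(np.linalg.eigvalsh(L)))   # Laplacian spectrum of C_4:  0, 2, 2, 4
```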
The adjacency spectrum of the graph G consists of the adjacency eigenvalues (together with their multiplicities), and the Laplacian spectrum of the graph G consists of the Laplacian eigenvalues (together with their multiplicities); we denote them by Spec_A(G) and Spec_L(G), respectively. Two graphs G and H are said to be cospectral if they have an equal spectrum (i.e., an equal characteristic polynomial). If G and H are isomorphic, they are necessarily cospectral. Clearly, if two graphs are cospectral, they must possess an equal number of vertices. We say that a graph G is determined by its adjacency (Laplacian) spectrum (DS, for short) if for any graph H with Spec_A(G)=Spec_A(H) (Spec_L(G)=Spec_L(H)), G is isomorphic to H. So far, numerous examples of cospectral but non-isomorphic graphs have been constructed by interesting techniques such as Seidel switching, Godsil-McKay switching, and the Sunada or Schwenk method. For more information, one may see <cit.> and the references cited therein. Only a few graphs with very special structures have been reported to be determined by their spectra (see <cit.> and the references cited therein). Recently, Wei Wang and Cheng-Xian Xu developed a new method in <cit.> to show that many graphs are determined by their spectrum together with the spectrum of their complement. Van Dam and Haemers <cit.> conjectured that almost all graphs are determined by their spectra. Nevertheless, the set of graphs that are known to be determined by their spectra is still small. So, discovering classes of graphs that are determined by their spectra is an interesting problem. The characterization of DS graphs goes back about half a century and originated in chemistry <cit.>. For the background of the question "Which graphs are determined by their spectrum?", we refer to <cit.>. A spectral characterization of multicone graphs was studied in <cit.>. In <cit.>, Wang, Zhao and Huang investigated the spectral characterization of multicone graphs and claimed that friendship graphs F_n (which are special classes of multicone graphs) are DS with respect to their adjacency spectra. In addition, Wang, Belardo, Huang and Borovićanin <cit.> proposed such a conjecture on the adjacency spectrum of F_n. This conjecture caused some activity on the spectral characterization of F_n. Finally, Cioabă et al. <cit.> proved that if n≠16, then the friendship graphs F_n are DS with respect to their adjacency spectra. Abdian and Mirafzal <cit.> characterized new classes of multicone graphs which are DS with respect to their spectra. Abdian <cit.> characterized two classes of multicone graphs and proved that the join of an arbitrary complete graph and the generalized quadrangle graph GQ(2,1) or GQ(2,2) is DS with respect to its adjacency spectrum as well as its Laplacian spectrum. This author also proposed four conjectures about the adjacency spectrum of the complement and the signless Laplacian spectrum of these multicone graphs. In <cit.>, the author showed that the multicone graphs K_w▽ P_17 and K_w▽ S are DS with respect to their adjacency spectra as well as their Laplacian spectra, where P_17 and S denote the Paley graph of order 17 and the Schläfli graph, respectively. In <cit.>, the author proved that the multicone graphs K_w▽ L(P) are DS with respect to both their adjacency and Laplacian spectra, where L(P) denotes the line graph of the Petersen graph.
He also proposed three conjectures about the signless Laplacian spectrum and the complement spectrum of these multicone graphs. For further information about characterizations of multicone graphs which are DS, see <cit.>. We believe that the proofs in <cit.> contain some gaps. In <cit.>, the authors conjectured that if a graph is cospectral with a friendship graph, then its minimum degree is 2 (see Conjecture 1). In other words, they could not determine the minimum degree of graphs cospectral with a (bidegreed) multicone graph (see Conjecture 1). Hence, their techniques (<cit.>) cannot characterize the new classes of multicone graphs that we want to characterize here. The conjectures (Conjectures 1 and 2) which had been proposed by Wang, Zhao and Huang <cit.> are not true, and there is a counterexample to them (see the first paragraph after Corollary 2 of <cit.>). In Theorem 3 (ii) of <cit.>, the minimum degree of a graph cospectral with a graph belonging to β(n-1, δ) (the classes of bidegreed graphs whose vertex degrees are δ and n-1, where n denotes the number of vertices) must first be determined, since in general the minimum degree of a graph cannot be determined by its spectrum. Therefore, we think that, without knowing the minimum degree of a graph cospectral with one of the graphs in β(n-1, δ), that theorem will not be effective and useful. In this paper, we present some techniques which enable us to characterize graphs that are DS with respect to their adjacency and Laplacian spectra. This paper is organized as follows. In Section 2, we review some basic information and preliminaries. In Section 3, we show that any graph cospectral with one of the presented multicone graphs is either bidegreed or regular. In Section 4, we prove that any graph cospectral with one of these graphs is determined by its adjacency spectrum. In Section 5, we show that the complements of some classes of these graphs are determined by their adjacency spectra. In Section 6, we prove that these graphs are DS with respect to their Laplacian spectrum. In Section 7, we show that any graph cospectral with special classes of these graphs must be perfect. In Section 8, we review what was said in the previous sections, and finally we propose two conjectures for further research.

§ PRELIMINARIES

In this section we present some results which will play an important role throughout this paper. <cit.> Let G be a graph. For the adjacency matrix and the Laplacian matrix of G, the following can be obtained from the spectrum: (i) The number of vertices, (ii) The number of edges. For the adjacency matrix, the following follows from the spectrum: (iii) The number of closed walks of any length, (iv) Whether G is regular, and the common degree, (v) Being bipartite or not. For the Laplacian matrix, the following follows from the spectrum: (vi) The number of spanning trees, (vii) The number of components, (viii) The sum of squares of the degrees of the vertices. <cit.> If G_1 is r_1-regular with n_1 vertices, and G_2 is r_2-regular with n_2 vertices, then the characteristic polynomial of the join G_1▽ G_2 is given by: P_G_1▽ G_2(y) = (P_G_1(y) P_G_2(y) / ((y-r_1)(y-r_2))) · ((y-r_1)(y-r_2)-n_1n_2). The spectral radius of a graph Λ is the largest eigenvalue of the adjacency matrix of Λ, and it is denoted by ϱ(Λ). A graph is called bidegreed if the set of degrees of its vertices consists of exactly two elements. For further information about the following inequality we refer the reader to <cit.> (see the first paragraph after Corollary 2.2 and also the corresponding theorem of <cit.>).
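The join characteristic polynomial stated above can be checked numerically. The following hedged Python sketch (the test case K_3▽ C_5 is our choice) builds the adjacency matrix of a join as a block matrix, computes its spectrum with numpy, and compares the two eigenvalues contributed by the quotient factor (y-r_1)(y-r_2)-n_1n_2 with the extreme eigenvalues of the join:

```python
import numpy as np

def adjacency_join(A1, A2):
    """Adjacency matrix of G1 ▽ G2: block matrix with all-ones off-diagonal."""
    n1, n2 = len(A1), len(A2)
    J = np.ones((n1, n2))
    return np.block([[A1, J], [J.T, A2]])

w, n = 3, 5                                  # K_3 ▽ C_5 as a small test case
A1 = np.ones((w, w)) - np.eye(w)             # K_w, (w-1)-regular
A2 = np.zeros((n, n))
for i in range(n):                           # C_n, 2-regular
    A2[i, (i + 1) % n] = A2[(i + 1) % n, i] = 1

spec = np.sort(np.linalg.eigvalsh(adjacency_join(A1, A2)))

# Quotient factor of the join formula: (y - (w-1))(y - 2) - w*n = 0
r1, r2 = w - 1, 2
roots = np.roots([1, -(r1 + r2), r1 * r2 - w * n])
print(np.round(spec, 4))                     # full spectrum of K_3 ▽ C_5
print(np.round(np.sort(roots), 4))           # ~ -1.873 and 5.873: the two
                                             # extreme eigenvalues of the join
```

The remaining eigenvalues of the join are, as the theorem predicts, those of K_3 and C_5 other than their regularity degrees (here -1 twice and 2cos(2kπ/5) for k=1,...,4).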
It is stated in <cit.> that if G is disconnected, then equality in the following can also occur. However, in this paper we only consider the connected case, and we state the equality condition in this case. <cit.> Let G be a simple graph with n vertices and m edges. Let δ=δ(G) be the minimum degree of the vertices of G and ϱ(G) be the spectral radius of the adjacency matrix of G. Then ϱ(G) ≤ (δ-1)/2 + √(2m - nδ + (δ+1)^2/4). Equality holds if and only if G is either a regular graph or a bidegreed graph in which each vertex is of degree either δ or n-1. <cit.> Let G and H be two graphs with Laplacian spectra λ_1≥λ_2≥...≥λ_n and μ_1≥μ_2≥...≥μ_m, respectively. Then the Laplacian spectra of the complement of G and of the join G ▽ H are n-λ_1, n-λ_2, ..., n-λ_n-1, 0 and n+m, m+λ_1, ..., m+λ_n-1, n+μ_1, ..., n+μ_m-1, 0, respectively. <cit.> Let G be a graph on n vertices. Then n is one of the Laplacian eigenvalues of G if and only if G is the join of two graphs. <cit.> For a graph G, the following statements are equivalent: (i) G is d-regular. (ii) ϱ(G)=d_G, the average vertex degree. (iii) G has v=(1,1,...,1)^T as an eigenvector for ϱ(G). <cit.> Let G-j be the graph obtained from G by deleting the vertex j and all edges containing j. Then P_G-j(y)=P_G(y)∑_i=1^m α^2_ij/(y-μ_i), where m and α_ij are the number of distinct eigenvalues and the main angles (see <cit.>) of the graph G, respectively. <cit.> Let G be a disconnected graph that is determined by its Laplacian spectrum. Then the cone over G, that is, the graph H obtained from G by adding one vertex that is adjacent to all vertices of G, is also determined by its Laplacian spectrum. <cit.> Let Γ be a non-regular graph with three distinct eigenvalues θ_0>θ_1>θ_2. Then the following hold: (i) Γ has diameter two. (ii) If θ_0 is not an integer, then Γ is complete bipartite. (iii) θ_1≥ 0, with equality if and only if Γ is complete bipartite. (iv) θ_2 ≤ -√(2), with equality if and only if Γ is the path of length 2. <cit.> A graph has exactly one positive eigenvalue if and only if its non-isolated vertices form a complete multipartite graph. In the following, we always suppose that w and n≥3 are natural numbers. Also, C_n and K_w denote a cycle of order n and a complete graph on w vertices, respectively. In addition, when Spec_A(G)=Spec_A(K_w▽ mC_n), we always suppose that G is connected, because there are some classes of disconnected graphs that are not determined by their spectra. For example, Spec_A((2C_4▽ (3C_4∪ K_3))∪ 5C_4)=Spec_A(K_3▽ 10C_4) but (2C_4▽ (3C_4∪ K_3))∪ 5C_4 ≇ K_3▽ 10C_4; Spec_A((C_5▽ (6C_5∪ K_3))∪ 4C_5)=Spec_A(K_3▽ 11C_5) but (C_5▽ (6C_5∪ K_3))∪ 4C_5 ≇ K_3▽ 11C_5; and Spec_A((C_6▽ (2C_6∪ K_3))∪ 2C_6)=Spec_A(K_3▽ 5C_6) but (C_6▽ (2C_6∪ K_3))∪ 2C_6 ≇ K_3▽ 5C_6. If graphs cospectral with one of the multicone graphs K_3▽ mC_n are connected, then these graphs are determined by their adjacency spectrum (see Theorem 4.1).

§ MAIN RESULTS

The aim of this section is to show that any graph cospectral with one of the multicone graphs K_w▽ mC_n is either regular or bidegreed.

§.§ Connected graphs cospectral with a multicone graph K_w▽ mC_n with respect to the adjacency spectrum.

Let G be a graph cospectral with a multicone graph K_w▽ mC_n. Then Spec_A(G) is either ⋃_k=1^n/2-1 {[-1]^w-1, [2cos(2kπ/n)]^2m, [2]^m-1, [-2]^m, [(Ω-√(Ω^2-4Γ))/2]^1, [(Ω+√(Ω^2-4Γ))/2]^1} or ⋃_k=1^(n-1)/2 {[-1]^w-1, [2cos(2kπ/n)]^2m, [2]^m-1, [(Ω-√(Ω^2-4Γ))/2]^1, [(Ω+√(Ω^2-4Γ))/2]^1}, where Ω=w+1 and Γ=2(w-1)-mnw. Proof It is well-known that the n-cycle C_n has eigenvalues 2cos(2kπ/n), where k = 0, . . . , n - 1.
All multiplicities are 2, except those of 2 and possibly -2. Now, by Theorem <ref> the proof is straightforward. Let G be a graph cospectral with a multicone graph K_w▽ mC_n. Then δ(G) = w+2. Proof Let δ(G) = w+2+x, where x is an integer. First, it is clear that in this case equality in Theorem <ref> holds if and only if x=0. We claim that x=0. Suppose, to the contrary, that x≠0. Theorem <ref> together with Proposition 3.1 imply that ϱ(G) = (w+1+√(8k-4l(w+2)+(w+3)^2))/2 < (w+1+x+√(8k-4l(w+2)+(w+3)^2+x^2+(2w+6-4l)x))/2, where k and l denote the numbers of edges and vertices of G, respectively. For convenience, we let S=8k-4l(w+2)+(w+3)^2 ≥ 0 and C = w+3-2l, and also let q(x)=x^2+(2w+6-4l)x = x^2+2Cx. Then clearly √(S)-√(S+q(x)) < x. (1) We consider two cases: Case 1. x < 0. It is easy and straightforward to see that |√(S)-√(S+q(x))| > |x|, since x<0. Transposing and squaring yields 2S+q(x)-2√(S(S+q(x))) > x^2. Replacing q(x) by x^2+2Cx, we get S+Cx > √(S(S+x^2+2Cx)). Obviously Cx≥ 0. Squaring again and simplifying yields C^2 > S. (2) Therefore, k < l(l-1)/2. (3) So, if x<0, then G cannot be a complete graph. In other words, if G is a complete graph, then x>0; or, one can say that if G is a complete graph, then δ(G) > w+2 (†). Case 2. x > 0. In the same way as in Case 1, we can conclude that if G is a complete graph, then δ(G) < w+2 (††). But the two inequalities (†) and (††) cannot hold together. So we must have x=0. Therefore, the claim holds. Let G be a graph cospectral with a multicone graph K_w▽ mC_n. Then G is either regular or bidegreed, in which case any vertex of G is of degree w-1+mn or w+2. Proof This follows from Lemma <ref> together with Theorem <ref>.

§ CONNECTED GRAPHS COSPECTRAL WITH THE MULTICONE GRAPH K_1▽ mC_n.

In this section, we show that any graph cospectral with the multicone graph K_1▽ mC_n, the cone over the graph mC_n, is isomorphic to K_1▽ mC_n. Any graph cospectral with the multicone graph K_1▽ mC_n is DS with respect to its adjacency spectrum. Proof Let G be cospectral with the multicone graph K_1▽ mC_n. If m=1 and n=3 there is nothing to prove, since in this case G is regular (see Theorem <ref>). Hence we suppose that m≠1 or n≠3. By Lemma <ref>, it is clear that G has one vertex of degree mn, say j. We consider two cases: Case 1. Let n be even. In this case, it follows from Proposition <ref> that P_G-j(x)=(x-μ_0)^m-2 ∏_r=1^n/2-1 (x-μ_r)^2m-1 (x-μ_n/2)^m-1 ∑_k=0^n/2+2 α^2_kj A_k, where A_k=∏_{i=0, i≠ k}^{n/2+2} (x-μ_i), μ_n/2=-2, μ_n/2+1=1+√(1+mn), μ_n/2+2=1-√(1+mn) and μ_r = 2cos(2πr/n) (0≤ r≤ n/2-1). Case 2. Let n be odd. In the same way as in Case 1, it follows from Proposition <ref> that P_G-j(x)=(x-μ_0)^m-2 ∏_r=1^(n-1)/2 (x-μ_r)^2m-1 ∑_l=0^(n-1)/2+2 α^2_lj B_l, where B_l=∏_{i=0, i≠ l}^{(n-1)/2+2} (x-μ_i), μ_(n-1)/2+1=1+√(1+mn), μ_(n-1)/2+2=1-√(1+mn) and μ_r = 2cos(2πr/n) (0≤ r≤ (n-1)/2). Now, from Lemma <ref> it follows that G-j is a regular graph whose degree of regularity is 2. Also, G-j has mn vertices of degree 2. Therefore, we conclude that Spec_A(G-j) is either ⋃_k=1^n/2-1 {[2cos(2kπ/n)]^2m, [2]^m, [-2]^m} or ⋃_k=1^(n-1)/2 {[2cos(2kπ/n)]^2m, [2]^m}. Hence G-j ≅ mC_n. So G ≅ K_1▽ mC_n. This completes the proof. Up to now, we have shown that the multicone graphs K_1▽ mC_n are DS with respect to their adjacency spectrum. The natural question is: what happens for the multicone graphs K_w▽ mC_n? We respond to this question in the following theorem. Any graph cospectral with a multicone graph K_w▽ mC_n is isomorphic to K_w▽ mC_n. Proof We proceed by induction on w.
If w=1, by Lemma <ref> the proof is clear. We suppose that the claim is valid for w; in other words, if Spec_A(H)=Spec_A(K_w▽ mC_n), then H≅ K_w▽ mC_n, where H is an arbitrary graph cospectral with the multicone graph K_w▽ mC_n. We show that the claim is true for w+1; that is, we show that if Spec_A(K)=Spec_A(K_w+1▽ mC_n), then K≅ K_w+1▽ mC_n, where K denotes a graph cospectral with the multicone graph K_w+1▽ mC_n. The graph K has one vertex and w+mn edges more than H. By Lemma <ref>, H has w vertices of degree w-1+mn and mn vertices of degree w+2. Also, this lemma implies that K has w+1 vertices of degree w+mn and mn vertices of degree w+3. So, we must have K≅ K_1▽ H. Now, the inductive hypothesis completes the proof. In the following, we present an alternative proof of Theorem <ref>. Proof (an alternative proof of Theorem <ref>) Let G be a graph cospectral with a multicone graph K_w▽ mC_n. By Lemma <ref>, G contains a subgraph Γ in which the degree of any vertex is w-1+mn. In other words, G≅ K_w▽ H, where H is a subgraph of G. Now, we remove the vertices of the complete graph K_w and consider the remaining mn vertices, which form the subgraph H. The graph H is regular with degree of regularity 2, and the multiplicity of the eigenvalue 2 is m. In other words, H is a cycle or a disjoint union of several n-cycles. By Theorem <ref>, Spec_A(H) is either ⋃_k=1^n/2-1 {[2cos(2kπ/n)]^2m, [2]^m, [-2]^m} or ⋃_k=1^(n-1)/2 {[2cos(2kπ/n)]^2m, [2]^m}. Hence Spec_A(H)=Spec_A(mC_n). The result follows.

§ THE COMPLEMENTS OF THE GRAPHS K_w▽ mC_3 AND THEIR ADJACENCY SPECTRA

In this section, we show that the complements of the graphs K_w▽ mC_3 are DS with respect to their adjacency spectra. Let G be a graph cospectral with the complement of K_w▽ mC_3. Then G is isomorphic to the complement of K_w▽ mC_3. Proof It is clear that Spec_A(G)={[-3]^m-1, [0]^2m+w, [3m-3]^1}. If m=1, there is nothing to prove. If m=2, Theorem <ref> implies that G≅ G_1∪ sK_1, where s and G_1 are a natural number and a complete multipartite graph, respectively. On the other hand, it follows from Theorem <ref> that G_1 is a complete bipartite graph. Therefore, G_1≅ K_3,3. Hence G≅ wK_1 ∪ K_3,3. Therefore, we can suppose that m⩾3. We know that the regularity and the number of triangles of a graph G can be determined from its adjacency spectrum. So, G is a non-regular graph and it is not a complete bipartite graph. Therefore, from Theorems <ref> and <ref> we conclude that G is not connected and G=G_1∪ sK_1, where s is a natural number. Also, it is clear that G_1 has exactly three distinct eigenvalues. Now, suppose to the contrary that G_1 is a non-regular graph. In this case, it follows from Theorem <ref> that G_1 is a complete bipartite graph, and so G is a bipartite graph. This is a contradiction. Hence G_1 is a regular graph, and so ϱ(G_1)=3m-3. It follows from Theorem <ref> that G_1 is a complete multipartite graph. Hence G_1≅ K_3,3,...,3 (m times), and so G=G_1∪ wK_1 ≅ K_3,3,...,3 (m times) ∪ wK_1, which is the complement of K_w▽ mC_3. This proves the result.

§ CONNECTED GRAPHS COSPECTRAL WITH A MULTICONE GRAPH K_w▽ mC_n WITH RESPECT TO THE LAPLACIAN SPECTRUM

In this section, we show that if w, m≠1 and n≠6 (see [19], Fig. 1), then the multicone graphs K_w▽ mC_n are DS with respect to their Laplacian spectrum. Multicone graphs K_w▽ mC_n are DS with respect to their Laplacian spectrum, where w, m≠1 and n≠6. Proof It is well-known that Spec_L(C_n) consists of the eigenvalues 2-2cos(2kπ/n), where k = 0, . . . , n - 1. All multiplicities are 2, except those of 0 and possibly 4. We perform mathematical induction on w. First, we suppose that n is even. If w=1, by Proposition <ref> the proof is straightforward.
Let the claim be true for w; that is, suppose that Spec_L(G_1)=Spec_L(K_w▽ mC_n)=⋃_k=1^n/2-1 {[w+mn]^w, [w+2-2cos(2πk/n)]^2m, [w]^m-1, [w+4]^m, [0]^1} implies that G_1≅ K_w▽ mC_n. We show that the claim is true for w+1; that is, we show that Spec_L(G)=Spec_L(K_w+1▽ mC_n)=⋃_k=1^n/2-1 {[w+1+mn]^w+1, [w+3-2cos(2πk/n)]^2m, [w+1]^m-1, [w+5]^m, [0]^1} implies that G≅ K_w+1▽ mC_n. It is clear that G has one vertex and w+mn edges more than G_1. On the other hand, Theorem <ref> implies that each of the graphs G and G_1 is the join of two graphs. In addition, Spec_L(K_1▽ G_1)=Spec_L(G). Therefore, we must have G≅ K_1▽ G_1. Now, the induction hypothesis completes the proof. If n is odd, the proof is similar.

§ SOME ALGEBRAIC PROPERTIES OF THE MULTICONE GRAPHS K_w▽ mC_n.

It has been proved that a graph G is perfect if and only if G is Berge; that is, it contains no odd hole or odd antihole as an induced subgraph, where an odd hole and an odd antihole are an odd cycle C_m, for m≥5, and its complement, respectively. Also, in 1972 Lovász proved that a graph is perfect if and only if its complement is perfect (see <cit.>). In this section, we show that if n is either even or equal to 3, then any graph cospectral with K_w▽ mC_n with respect to its adjacency spectrum as well as its Laplacian spectrum must be perfect. Let the graph G be cospectral with a multicone graph K_w▽ mC_n. Then: G and its complement are perfect if and only if n is either even or equal to 3. Proof (⇒) By what was said at the beginning of this section and Theorem <ref>, the proof is straightforward. (⇐) It is quite clear that G cannot contain an odd hole of order greater than or equal to five as an induced subgraph. We show that G contains no odd antihole of order greater than or equal to five as an induced subgraph. Suppose, to the contrary, that G contains the complement of C_k as an induced subgraph, where k is an odd natural number greater than or equal to five. Hence the complement of G, which is wK_1 together with the complement of mC_n, must contain C_k as an induced subgraph. In other words, the complement of mC_n, which is the join of m copies of the complement of C_n, must contain C_k as an induced subgraph. This is obviously a contradiction. This observation completes the proof. Let G be a graph with Spec_L(G)=Spec_L(K_w▽ mC_n). Then: G and its complement are perfect if and only if n is either even or equal to 3. Proof The proof is similar to that of Theorem <ref>.

§ TWO CONJECTURES

In this paper, it was proved that connected multicone graphs K_w▽ mC_n are determined by both their adjacency and Laplacian spectra. Also, we showed that the complements of the graphs K_w▽ mC_3 are determined by their adjacency spectra. By <cit.> (Theorem 5.2), we can deduce that the complements of the graphs K_w▽ C_4 are determined by their adjacency spectra. Now, we pose the following conjectures. The complements of the graphs K_w▽ mC_n are DS with respect to their adjacency spectra. Multicone graphs K_w▽ mC_n are DS with respect to their signless Laplacian spectrum. A A.Z. Abdian and S. M. Mirafzal: On new classes of multicone graph determined by their spectrums. Alg. Struc. Appl. 2 (2015), 23–34. AA A.Z. Abdian: Graphs which are determined by their spectrum. Konuralp J. Math. 4 (2016), 34–41. AAA A.Z. Abdian: Two classes of multicone graphs determined by their spectra. J. Math. Ext. 10 (2016), 111–121. AAAA A.Z. Abdian: Graphs cospectral with multicone graphs K_w▽ L(P). TWMS J. App. and Eng. Math. 7 (2017), 181–187. AAAAA A.Z. Abdian: The spectral determination of the multicone graphs K_w▽ P. arXiv preprint arXiv:1703.08728 (2017). AAAAAA A.Z. Abdian and S. M. Mirafzal: The spectral characterizations of the connected multicone graphs K_w▽ LHS and K_w▽ LGQ(3,9). Discrete Math., Algorithm. and Appl. (DMAA).
10 (2018),1850019. DOI 10.1142/S1793830918500192. AAAAAA1 A.Z. Abdian and S. M. Mirafzal: The spectral characterizations of the connected multicone graphs K_w▽ mP_17 and K_w▽ mS. Czech. Math. J, (accepted in 2018). Ba R. B. Bapat: Graphs and Matrices. Springer-Verlag, New York, (2010).B N. L. Biggs: Algebraic Graph Theory. Cambridge University press, Cambridge, (1933).BJ R. Boulet and B. Jouve: The lollipop graph is determined by its spectrum. Electron. J. Comb.15 (2008), R74.B123 A. Brandstädt, V. B. Le, J. P. Spinrad: Graph classes : a survey, SIAM monographs on discrete mathematics and applications, 1999 BH A. E. Brouwer and W. H. Haemers: Spectra of Graphs, Universitext. Springer, New York, (2012).CHVWS. M. Cioabä, W. H. Haemers, J. R. Vermette and W. Wong: The graphs with all but two eigenvalues equal to ± 1. J. Algebr. Combin. 41 (2013), 887–897.CRS D. Cvetković, P. Rowlinson and S. Simić: An Introduction to the Theory of graph spectra. London Mathematical Society Student Teyts, 75, Cambridge University Press, Cambridge (2010).DK.C. Das:Proof of conjectures on adjacency eigenvalues of graphs. Disceret Math. 31 (2013), 19–25. DHM. Doob and W. H. Haemers: The complement of the path is determined by its spectrum. Linear Algebra Appl. 356 (2002) 57-65. P1 H. H. Günthard and H. Primas: Zusammenhang von Graph theory und Mo-Theorie von Molekeln mit Systemen konjugierter Bindungen, Helv. Chim. Acta. 39 (1925), 1645–1653. HLZ W. H. Haemers, X. G. Liu and Y. P. Zhang: Spectral characterizations of lollipop graphs. Linear Algebra Appl. 428 (2008), 2415-2423.K U. Knauer: Algebraic Graph Theory, Morphism, Monoids and Matrices. de Gruyters Studies in Mathematics, Walter de Gruyters and Co., Berlin and Boston. 41 (2011).LS Y. Liu and Y. Q. Sun: On the second Laplacian spectral moment of a graph. Czech. Math. J. 2 (2010), 401-410.LS2 M.-H. Liu: Some graphs determined by their (signless) Laplacian spectra. Czechoslovak Math.J., 62 (2012), 1117–1134 . MerR. Merris: Laplacian matrices of graphs: a survey. Linear Algebra Appl. 197 (1994) 143–176. M S.M. Mirafzal and A.Z. Abdian: Spectral characterization of new classes of multicone graphs. Stud. Univ. Babes-Bolyai Math. 62 (2017), 275–286. M12S.M. Mirafzal and A.Z. Abdian: The spectral determinations of some of multicone graphs, J. Discrete Math. Sci. and Crypt. (JDMSC), (accepted in 2017). P W. Peisert: All self-complementary symmetric graph. J. Algebra. 240 (2001), 209–229.R P. Rowlinson: The main eigenvalues of a graph: a survey. Appl. Anal. Discrete Math. 1 (2007), 445–471. . VH E. R. van Dam and W. H. Haemers: Which graphs are determined by their spectrum?. Linear Algebra. Appl. 373 (2003), 241–272.VH1 E. R. van Dam and W. H. Haemers: Developments on spectral characterizations of graphs. Discrete Math. 309 (2009), 576–586. WBHB J. Wang, F. Belardo, Q. Huang, B. Borovićanin: On the two largest Q-eigenvalues of graphs. Discrete Math. 310 (2010), 2858–2866.W2 W. Wang and C. X. Xu: A sufficient condition for a family of graphs being determined by their generalized spectra.European J. Combin. 27 (2006), 826-840. W1 J. Wang, H. Zhao, and Q. Huang: Spectral charactrization of multicone graphs. Czech. Math. J. 62 (2012), 117–126.W D. B. West: Introduction to Graph Theory, Upper Saddle River, Prentice hall, (2001).
http://arxiv.org/abs/1703.08728v2
{ "authors": [ "Ali Zeydi Abdian" ], "categories": [ "math.CO", "05C50", "F.2.2" ], "primary_category": "math.CO", "published": "20170325184017", "title": "The spectral determinations of the multicone graphs Kw+mCn" }
This project was partially supported by PAI 79140019 Instituto de Matemáticas Universidad de Valparaíso Gran Bretaña 1091, Valparaíso, Chile marcelo.flores@uv.cl A bt–algebra of type 𝙱 M. Flores December 30, 2023 ====================== We introduce a bt–algebra of type 𝙱. As the original construction of the bt–algebra, we define this bt–algebra of type 𝙱 by building from . Notably we find a basis for it, a faithful tensorial representation, and we prove that it supports a Markov trace, from which we derive invariants of classical links in the solid torus. § INTRODUCTIONThe algebra of braids an ties, known as well as the bt–algebra, was defined originally by Aicardi and Juyumaya in <cit.>, having as goal to construct new representations of the braid group. Later, it was observed that its generators and relations have a diagrammatical interpretation in terms of braids and ties, hence its name, see <cit.>. For a positive integer n, the bt–algebra with parameteris denoted by ℰ_n(), and its definition is obtained by considering abstractly as the subalgebra of the Yokonuma–Hecke algebra Y_d,n:= Y_d,n() generated by the braid generators and the family of idempotents that appear in the quadratic relations of these generators. Thus, there is a natural homomorphism from ℰ_n in Y_d,n, which is injective for d≥ n, see <cit.>, cf. <cit.>.In <cit.> was proved that the bt–algebra has finite dimension, although that the autors couldn't get an explicit basis for it. This problem was solved later by Ryom-Hansen in <cit.>, who constructed a basis for ℰ_n, proving that the dimension of the algebra is b_n n!, where b_n is the n-th bell number. He also constructed a faithful tensorial representation (Jimbo–type) of this algebra, which is used to classify the irreducible representations of ℰ_n, and plays a essential role in the proof of the linear independency of the basis proposed there. Later, in <cit.> is proved that the algebra ℰ_n supports a Markov trace, this is achieved by using the method of relative traces, and the basis provided by Ryom-Hansen; for more examples of the relative traces method see <cit.>, <cit.>, <cit.>. Then, by using this trace as ingredient in the Jones's recipe <cit.>, they define an invariant Δ(,𝖠,𝖡) for classical knots (respectively Γ(,𝖠,𝖡) for singular knots) with parameters , 𝖠 and 𝖡. It's worth to say that for links, the invariant Δ is more powerful than the Homeflypt polynomial, see <cit.>. The bt–algebra have been studied by several researchers lately, which has helped to respond some important questions of its structure. In <cit.> J. Espinoza and Ryom-Hansen found a cellular basis for the bt-algebra, which is used to obtain an isomorphism theorem between ℰ_n and a sum of matrix algebras over certain wreath products. In her Ph.D. tesis <cit.> E. Banjo got an explicit isomorphism between the specialization ℰ_n(1) and the small ramified partition algebra <cit.>, and using it, she determined the complex generic representation of ℰ_n. Finally, in <cit.> I. Marin introduced a generalization of the bt–algebra, more precisely, given any Coxeter system (W,S), he defined an extension of the corresponding Iwahori–Hecke algebra, denoted by C_W, which coincide with the algebra ℰ_n when W is the Coxeter group of type 𝙰. Recently, in <cit.> we introduce a framization of the Hecke algebra of type 𝙱, denoted by , which is some kind of analogous of the Yokonuma–Hecke algebra for the 𝙱-type case, hence its notation. 
As we recalled above, the definition of the algebra ℰ_n() is strongly related with certain subalgebra of Y_d,n. Then, it is natural try to define an analogue of the bt–algebra, this time, as a subalgebra of . Thus, in this paper we introduce a new algebra, denoted by =(), that contains the algebra of braids and ties, and we can say that it is a bt-algebra of type 𝙱. Moreover, we also construct a basis and a tensorial representation for it (adjusting the ideas given by Ryom-Hansen in <cit.>), having as goal to prove thatsupports a Markov trace, which is the main result of this work. It is important to note that the algebra introduced here doesn't coincide with the given by I. Marin considering W as the Coxeter group of 𝙱 type, see Remark <ref>. The article is organized as follows. In Section 2 we fix some notations and recall the results used in the paper. In Section 3, making an analogy with the classical case, we introduce the algebra , which contains the bt–algebra. Also, we give some relations that hold on it (which are, mostly, direct consequence of results from <cit.> and <cit.>), and we propose a diagrammatical interpretation for ℰ_n, in the sense of <cit.>. In Section 4 we construct two linear bases forreadjusting the ideas given in <cit.>. Similarly as in <cit.>, one of these bases has a technical role, and the other is used for define a Markov trace in the last section. Moreover, we also construct a faithful tensorial representation of , which is the natural extension of the representation of the bt–algebra given by Ryom-Hansen, and plays a key role in the proof of the linear independency of one of the bases given here, see Theorem <ref> (cf. <cit.>). In Section 5 we prove thatsupports a Markov trace (Theorem <ref>), we prove that, constructing a family of relative traces using the basis given in the previous section. We use this method since as in the classical case, the basis obtained here cannot be defined in an inductive manner, then it is extremely difficult to define a Markov trace analogously to the Ocneanu's trace <cit.>. Thus, keeping the approach in <cit.>, we split the proof of the main result in several lemmas, which give step by step the necessary conditions for the trace. Finally, using our trace as ingredient in Jones's recipe we define an invariant of classical links in the solid torus, which restricted to classical links (that is, braids of 𝙱–type without the loop generatorinvolved, see Section 2) coincide with Δ, and therefore it is more powerful than the Homflypt polynomial, whenever is evaluated in classical links. Acknowledgements. The results contained in this manuscript were obtained during a research visit to the Maths Section of ICTP, in Trieste, Italy, which I thank its support and hospitality. A particular thank also to F. Aicardi, for her comments about the diagrammatical interpretation of the algebra introduced here, which were essential to develop this work. § PRELIMINARIESIn this section we review known results, necessary for the sequel, and we also fix the following terminology and notations that will be usedalong the article:– The letters , denote indeterminates. Consider K:= C(,). – The term algebra means unital associative algebra overK. – The sets {0,1,…,n} and {1,…,n} will be denoted simply by _0 andrespectively. – As usual, we denote by ℓ the length function associated to the Coxeter groups.§.§ Set n≥ 1. Let us denote by W_n the Coxeter group of type 𝙱_n. 
This is the finite Coxeter groupassociated to the following Dynkin diagram(350,40) (82,20)_1 (120,20)_1 (200,20)_n-2 (240,20)_n-1(85,10)5 (87.5,11)(1,0)35 (87.5,9)(1,0)35 (125,10)5 (127.5,10)(1,0)10(145,10)*2 (165,10)*2 (185,10)*2(205,10)5 (207.5,10)(1,0)35 (245,10)5 (192.5,10)(1,0)10Define _k=_k-1…_1 _1 _1…_k-1 for 2≤ k≤ n. It is known, see <cit.>,that every element w∈ W_n can be written uniquely as w=w_1… w_n with w_k∈𝙽_k, 1≤ k≤ n, where𝙽_k:={ 1, _k, _k-1⋯_i, _k-1⋯_i_i;1≤ i ≤ k-1 }.Moreover, this expression for w is reduced. Hence, we have ℓ(w)=ℓ(w_1)+⋯ +ℓ(w_n). Further, the group W_n can be realized as a subgroup of the permutation group of the setX_n:={-n, … , -2, -1, 1, 2, …, n}. Specifically, the elements of W_n are the permutations w suchthat w(-m) = - w(m), for all m ∈ X_n. Then, the elements of W_n can be parameterizedby the elements of X_n^n:={(m_1,…,m_n) | m_i∈ X_n for all i} (see <cit.>). More precisely, the element w∈ W_n corresponds to the element (m_1, … ,m_n)∈ X_n^n such that m_i= w(i), for details see <cit.>. The corresponding braid group of type 𝙱_n associated to W_n, is defined as the group W_n generatedby ρ_1 , σ_1 ,… ,σ_n-1 subject to the following relations [σ_i σ_j=σ_j σ_ifor| i-j| >1,;σ_i σ_j σ_i=σ_j σ_i σ_j for| i-j| = 1,; ρ_1σ_i= σ_iρ_1 for i>1,; ρ_1 σ_1 ρ_1σ_1=σ_1 ρ_1 σ_1ρ_1.] Geometrically, braids of type 𝙱_n can be viewed as classical braids of type 𝙰_n with n+1 strands, such that the first strand is identically fixed. This is called `the fixed strand'. The 2nd, …, (n+1)st strands are renamed from 1 to n and they are called `the moving strands'. The `loop' generator ρ_1 stands for the looping of the first moving strand around the fixed strand in the right-handed sense, see <cit.>. In Figure <ref> we illustrate a braid of type 𝙱_4. §.§ Recently in <cit.> we introduced a new framization of the Hecke algebra of type 𝙱, denoted by :=(,). This algebra was constructed searching an analogous of the Yokonuma–Hecke algebra for the type 𝙱 case, this, with the final objective to explore their usefulness in knot theory. Thus, in this article we constructed two linear bases, a faithful tensorial representation of Jimbo type for Y_d,n^𝙱(,),and we proved thatsupports a Markov trace. Finally we defined, by using Jones's recipe, a new invariant for framed and classical links in the solid torus. Along this paper we use several properties of this algebra, then we recall some of them. We begin with its definitionLet n≥ 2. The algebra Y_d, n^𝙱 :=Y_d, n^𝙱(, ), is defined as the algebra over 𝕂:=ℂ(,) generated by framing generators t_1,…,t_n, braiding generators g_1,…,g_n-1 and the loop generator b_1, subject to the following relations g_ig_j = g_jg_i for| i-j| > 1,g_i g_j g_i = g_j g_i g_jfor| i-j| = 1, b_1 g_i = g_i b_1 for alli≠ 1, b_1 g_1 b_1 g_1 =g_1 b_1 g_1 b_1, t_i t_j =t_j t_ifor all i, j,t_j g_i =g_i t_s_i(j)for alli, j,t_i b_1 = b_1 t_i for all i,t_i^d = 1 for all i, g_i^2 =1+ (-^-1)e_ig_ifor all i, b_1^2 =1 + (-^-1)f_1b_1.,where e_i:=1/d∑_s=0^d-1 t_i^st_i+1^d-sand f_j:=1/d∑_s=0^d-1 t_j^s;for 1≤ i≤ n-1, and 1≤ j≤ n.For n=1, we define the algebra Y_d,1^ as the algebra generated by 1, b_1 and t_1 satisfying the relations (<ref>), (<ref>) and (<ref>). Notice that the elements f_j's and e_i's are idempotents. Also, It is clear that the element f_1 commutes with b_1ande_i commutes with g_i. These facts imply that the generators b_1 and the g_i's are invertible. Namely, we have:b_1^-1 = b_1 - (-^-1)f_1and g_i^-1 = g_i - ( -^-1)e_i. 
Now we recall the two basis ofgiven in <cit.>, which we will useful for the sequel.Set b_1 := b_1, b_k := g_k-1… g_1 b_1 g_1… g_k-1, and b_k := g_k-1… g_1 b_1 g_1^-1… g_k-1^-1 for all 2≤ k≤ n. For all 1≤ k≤ n, let us define inductivelythe sets N_d,k by N_d,1 := {t_1^m, b_1 t_1^m;0 ≤ m ≤ d-1} andN_d,k := {t_k^m, b_kt_k^m, g_k-1 x ;x ∈ N_d,k-1,0≤ m≤ d-1} Analogously, for all 1≤ k≤ n we define inductively the sets M_d,k exactly like N_d,k's but exchanging b_k by b_k in every case. Finally, consider 𝖣_n={𝔫_1𝔫_2⋯𝔫_n |𝔫_i∈ N_d,i} and 𝖢_n={𝔪_1𝔪_2⋯𝔪_n |𝔪_i∈ M_d,i}. Then, we have that 𝖣_n and 𝖢_n are bases of , for details see <cit.>.§.§ We denote bythe set formed by the set-partitions of 𝐧, recall that the cardinality ofis the n–th Bell number, denoted by b_n. The subsets ofentering a partition are called blocks. For short we shall omit the subset of cardinality 1 (single blocks) in the partition. For example, the partition I=({1,2,3},{4,6},{5},{7}) in 𝖯(7), will be simply written as I=({1,2,3},{4,6}). Moreover, Supp(I) will be denote the union of non–single blocksof I.The symmetric group S_n acts naturally on . More precisely, set I=(I_1,…,I_m)∈ . The action w(I) for a w∈ S_n is given byw(I)=((w(I_1),…,w(I_m)))where w(I_k) is the set obtained of applying w to the set I_k.The pair (, ≼) is a poset. Specifically, given I=(I_1,…,I_m), J=(J_1,…,J_s) ∈, the partial order ≼ is defined by.I≼ Jif and only J is a union of some blocks of IWhen I≼ J, we will say that I refines J.Let I, J ∈, we denote I*J the minimal set partition refined by I and J. Let A a subset of . Along the work we will use for short I*A instead I*(A). Thus, if I=(I_1,…,I_k,I_i_k+1,…,I_i_m), where the first k blocks are the blocks that have intersection with A, and the rest are those that don't have. Then I*A is given by I*A=(A',I_k+1,…,I_m)where A'=A∪ I_1∪…∪ I_k. In particular, I*{j,m} coincides with I if j and m already belong to the same block, otherwise, I*{j,m} coincides with I except for the blocks containing j and m, which merge in a sole block. For short, we will write I*j instead of I*{j,j+1}. For instace, for the set partition I=({1,4},{2,5},{3,6,7}) in 𝖯(8):I*{4,5,8}=({1,2,4,5,8},{3,6,7}), I*2=({1,4},{2,3,5,6,7})and I*6=IAlso, for I∈, we denote I\ n the element in 𝖯(𝐧-1) that is obtained by removing n from I. Let be 𝒫(_0) the set of partitions of _0, note that, 𝖯(_0) is essentially 𝖯(𝐧+1), then all the definitions and notations above are valid for partitions in . Finally, for A⊆_0we define A^*=A\{0}.§ AN ALGEBRA OF BRAIDS AND TIES INSIDE In this section we propose a generalization of the algebra of braids and ties ℰ_n() defined originally in <cit.> and posteriorly studied in <cit.>. As we note previously, the definition of ℰ_n() definition was obtained by considering abstractly as a subalgebra of Y_d,n(). In <cit.> we introduce a framization of the Hecke algebra of type 𝙱, denoted by , which is the some kind of analogous of the Yokonuma–Hecke algebra for the 𝙱-type case. Then, it is natural to define an analogue of the bt–algebra, this time, considering a subalgebra of , which carries us to the next definition.Let n≥ 2. 
We define the bt–algebra of type , denoted by =(,), as the algebra generated by B_1, T_1…, T_n-1 and F_1,… F_n, E_1…, E_n-1, subject to the following relation T_iT_j=T_jT_iT_iT_i+1T_i= T_i+1T_iT_i+1T_i^2 = 1+(-^-1)E_iT_i E_i^2=E_iE_iE_j = E_jE_iE_iT_i = T_iE_i E_iT_j = T_jE_i E_iE_jT_i = T_iE_iE_j= E_j T_i E_j E_iT_jT_i = T_jT_iE_jB_1T_1B_1T_1 = T_1B_1T_1B_1B_1T_i = T_iB_1 B_1^2 = 1+(-^-1)F_1B_1B_1E_i = E_iB_1F_i^2 = F_i B_1F_j = F_jB_1 F_iE_j = E_jF_i F_jT_i = T_iF_s_i(j)where s_i is the transposition (i,i+1)E_iF_i = F_iF_i+1=E_iF_i+1 For n=1 we define the algebra ℰ_1^ as the algebra generated by 1, B_1 and F_1 subject to the relations (<ref>),(<ref>) and (<ref>). More precisely, the definition ofis obtained by considering abstractly the subalgebra ofgenerated by b_1, and the elements g_i's, e_i's and f_i's. Thus, considering g_i as T_i, e_i as E_i, f_i as F_i and b_1 as B_1, the defining relations ofcorrespond to the set of relations derived from the relations (<ref>)-(<ref>) of .There is a natural homomorphism φ_n:→ defined by the mapping T_i↦ g_i, E_i↦ e_i, F_i↦ f_i and B_1↦ b_1.Let be (W,S) a Coxeter system, and C_W the algebra introduced by I. Marin in <cit.>. It is known that C_W=ℰ_n when W is the Coxeter group of type 𝙰_n-1, then, it is natural to think that C_W should coincide withwhen W=W_n, but this doesn't happen. In fact, we will prove in Section 4 that the dimension ofis b_n+1|W_n|, meanwhile C_W has dimension Bell(W)|W|, where the Bell(W) is a entire number called the Bell number of W (by obvious reasons). These numbers are not completely determined for type 𝙱_n, but are known for low dimensions, more precisely, for n≥ 2 the sequence of dimensions is the following: 8, 38, 218, 1430, 10514,…; for details see <cit.>. Thus, we have that these algebras are different, which indicates that the algebrashould be interesting by itself.Note that the quadratic relation for Y_d,n and ℰ_n used in <cit.> is different than the used here. However, it is well known that modifying the set of generators of Y_d,n (respectively ℰ_n), it is possible to get a presentation with the desired quadratic relation . More precisely, if g_i (respectively T_i) denote the original braid generators of the Yokonuma–Hecke algebra (respectively, of the bt–algebra), and u the parameter used in the usual quadratic relation. Then, by taking u=^2 and g_i=g_i+(^-1-1)e_ig_i (respectively T_i=T_i+(^-1-1)e_iT_i), we obtain a presentation including the quadratic relation used by us, cf. <cit.>. Consequently, the bt-algebra ℰ_n can be regarded as the algebra generated by the elements T_i's and E_i's subject to the relations (<ref>)-(<ref>), thus, in particular, we have that ℰ_n⊆. The map φ:→ℰ_n given by φ(T_i)=T_i, φ(E_i)=E_i, φ(B_1)=1 and φ(F_i)=1 define a natural epimorphism.Now, we recall some useful result from<cit.>. For i<j, we define E_i,j byE_i,j={[ E_i for j=i+1; T_i… T_j-2E_j-1T_j-2^-1… T_i^-1 otherwise ].For a nonempty subset J of 𝐧 we define E_J=1 if |J|=1 andE_J:=∏_(i,j)∈ J× J, i<jE_i,jNote that E_{i,j}=E_i,j. Also we have by<cit.> thatE_j=∏_j∈ J, j≠i_0E_i_0,j,where i_0= min(J)Moreover, for I=(I_1,…, I_m)∈𝖯(𝐧) we define E_I byE_I=∏_kE_I_k.In the same fashion, for any subset A of 𝐧 we defineF_A=∏_i∈ AF_iNotice that by (<ref>) we have thatF_i=T_i-1… T_1 F_1 T_1^-1… T_i-1^-1=T_i-1^-1… T_1^-1 F_1 T_1… T_i-1. Then, we can omit F_2,…, F_n from the presentation (changing/adding certain relations simultaneously!), but we prefer include them, since the computations become simpler by using them. 
Also note that, using (<ref>), we can obtain a generalization of (<ref>) by conjugating it. More precisely we have E_i,jF_i=E_i,jF_j=F_iF_jfor all 1≤ i<j≤ n Now, we introduce certain elements that will be used for the construction of a linear basis of . Let be B_1=B_1, and for k≥ 2 we defineB_k:=T_k-1… T_1 B_1 T_1… T_k-1 B_k:=T_k-1… T_1 B_1 T_1^-1… T_k-1^-1 Further, we considere the sets 𝖬_k defined inductively by 𝖬_1 := {1, B_1 } and𝖬_k := {1, B_k, T_k-1x | x∈𝖬_k-1}Analogously, for all 1≤ k≤ n we define inductively the sets 𝖭_k's exactly like 𝖬_k's but exchanging B_k by B_k in each case.Now notice thatevery element of 𝖬_k has the form 𝕋_k,j^+ or𝕋_k,j^- with j≤ k,where𝕋_k,k^+:=1, 𝕋_k,j^+ := T_k-1⋯ T_jfor j<k,and𝕋_k,k^-:=B_k, 𝕋_k,j^- := T_k-1⋯ T_jB_jfor j<k.Similar expressions exist for elements in 𝖭_k exchanging B_k by B_k as well, which will be denoted by 𝕋_k,j^+ and𝕋_k,j^-.The following results are direct consequence of <cit.>, and these will be used frequently in the sequelFor n≥ 2 the following relations hold i)𝕋^-_n,kT_j={[T_j𝕋^-_n,k j<k-1; 𝕋^-_n,k-1+(-^-1 )𝕋^-_n,kE_j j=k-1; 𝕋^-_n,k+1 j=k;T_j-1𝕋^-_n,k j>k ].ii) 𝕋_n,k^+T_j={[ T_j𝕋^+_n,kj<k-1;𝕋_n,k-1^+j=k-1; 𝕋_n,k+1^++(-^-1)𝕋_n,k^+E_jj=k; T_j-1𝕋^+_n,kj>k ].iii) 𝕋^-_n,k B_1={[ 𝕋_n,k^+ + (-^-1)𝕋_n,k^-F_1for k=1; B_1𝕋^-_n,kfor k≠1 ]. iv)𝕋^+_n,k B_1={[𝕋^-_n,kfor k=1; B_1𝕋^+_n,kfor k≠1 ]. In particular, we have that – B_nT_j={[T_jB_nfor j< n-1; T_n-1B_n-1+(-^-1)B_nE_n-1for j= n-1 ]. – B_nT_n-1=T_n-1B_n-1+(-^-1)B_nE_n-1 See <cit.>.For n≥ 2 the following relations hold i)𝕋^±_n,kT_j={[ T_j𝕋^±_n,kj<k-1;𝕋^±_n,k-1j=k-1; 𝕋^±_n,k+1+(-^-1)𝕋^±_n,kE_jj=k; T_j-1𝕋^±_n,kj>k ].ii) 𝕋^-_n,k B_1={[ 𝕋^+_n,k + (-^-1)𝕋^-_n,kF_1for k=1;B_1𝕋^-_n,k+ (-^-1)α_n,kfor k≠1 ]. iii)𝕋^+_n,k B_1={[𝕋^-_n,kfor k=1; B_1𝕋^+_n,kfor k≠1 ]. where α_n,k=(B_1T_1^-1… T_k-2^-1𝕋_n,1^- E_1,k-T_1^-1… T_k-2^-1𝕋_n,1^- B_1E_1,k). In particular, we have– B_nT_n-1=T_n-1B_n-1 –B_nT_j=T_jB_n, for all j<n-1–B_nB_1=B_1B_n+ (-^-1)[B_1T_1^-1… T_n-2^-1𝕋_n,1^- E_1,k-T_1^-1… T_n-2^-1𝕋_n,1^-B_1E_1,k] See <cit.>.The following claims hold in .(i) T_kB_kB_k+1=B_kT_kB_k,for k≥ 1. (ii) 𝕋^-_k,jB_k=B_k-1𝕋^-_k,j,for k≥ 2.See (iv) <cit.> and (i) <cit.> respectively.The defining generators B_1 andT_i's of the algebrasatisfy the same braid relations as the Coxeter generators_1 and _i of the group W_n. Thus, the well–knownMatsumoto's Lemmaimplies thatif w_1… w_m is a reduced expression of w∈ W_n, with w_i∈{_1, _1, …, _n-1}, then the following element T_w is well–defined:T_w := T_w_1⋯T_w_m,where T_w_i = B_1, if w_i=_1 and T_w_i = T_j, ifw_i = _j. Therefore, according to <ref> we have that {T_w | w∈ W_n}={𝔯_1…𝔯_n | 𝔯_i∈𝖭_i}. In the same fashion, we have a natural bijection γ: {𝔪_1…𝔪_n | 𝔪_i∈𝖬_i}→ W_n, induced by the map B_k↦_k, T_i↦_i.Let w∈ W_n, and η:W_n→ S_n the natural projection defined by _1↦ 1 and _i↦ s_i. Since inthe action of B_1 over the elements F_i's and E_i's is trivial, that is, these commute, we have the next result Let w∈ W_n, I∈ and A⊆𝐧, then a) T_wE_IT_w^-1=E_w̅(I)b) T_wF_AT_w^-1=F_w̅(A)where w:=η(w) For a) see <cit.>, and for b) the result follows by applying the defining relations (<ref>) and (<ref>).Let v∈{𝔪_1…𝔪_n | 𝔪_i∈𝖬_i}, I∈ and A⊆𝐧, then a) vE_I v_-1=E_w(I)b) vF_A v_-1=F_w(A)where w=η∘γ(v). §.§ Diagrams for . We associate to each word in the algebra , a tied braid of type 𝙱_n according the following identifications. 
For n≥ 1 we associate the unit ofwith the trivial braid of type 𝙱_n, B_1 with the loopgenerator, and F_j with the braid of type 𝙱_n whose has a tied between the fixed strand and the j–th moving strand. For n≥ 2, as in the classical case we associate T_i with the usual braid generator, and E_i with the 𝙱–type braid whose has a tied between the i-th and i+1-st moving strand.We can see this identification in the following figureWe consider the multiplication of diagrams by concatenation, that is, given two diagrams d_1 and d_2, the multiplication d_1d_2 is the diagram that result from putting the diagram d_2 below to the diagram d_1, obtaining pictures like in Figure <ref>. Thus, we have a diagrammatical interpretation for every word in . The previous identification provide an epimorphism δ, from the algebrato an algebra of diagrams, which will be denoted by . This algebra is generated by the elements in Figure <ref>, and satisfies the defining relations ofrewritten as diagram relations, for instance see Figure <ref>. As we would expected, the ties of the elements ofkeep all the properties explained in <cit.>, that is, the elasticity, transparency and transitivity. For example, the elasticity and transparency properties, for ties involving the moving strands are inherited by the relations in common with the bt-algebra, and for the ties attached to the fixed strand are guaranteed by relation (<ref>), see Figure <ref>. On the other hand, the transitivity property is a consequence of Proposition <ref> proven in the next section.In the same fashion, we can conjecture that the homomorphism δ is, in fact, an isomorphism. However, to give a formal proof of this fact, is in general a problem itself, for example see <cit.>. Therefore we will continue without proving it, since we would deviate of our main goal.§ A LINEAR BASIS FOR In this section we introduce two linear bases for the algebra . One of them will be used to define a Markov trace in the next section, and the other for proving the linear independence of the first one. Additionally we give a faithful tensorial representation of the algebrabased in the constructed forin <cit.>.We begin the section proving a technical result, which will be useful to set properly our base. Let be I,I'∈ and A,A'∈. Note that, the elements E_IF_A, E_I'F_A'∈ can be equal even when I≠I' and A≠A'. For instance, let I=({2,3,5},{4,6}), I'=({4,6})∈ and A={3}, A'={2,3,5}. Then, we haveE_IF_A=E_2,3E_2,5E_4,6F_3 = F_3 E_2,3E_2,5E_4,6= F_3F_2E_2,5E_4,6= F_2F_3F_5E_4,6= E_I'F_A'by (<ref>). Using the same idea we can prove the following resultThe set 𝒜_n={E_IF_A | I∈, A⊆𝐧} is parameterized by . That is, there is a bijection between the setsand 𝒜_n. Let be I=(I_1,…,I_m) a partition in . We define Ψ:→𝒜_n as followsΨ(I)={[E_Iif 0∉I_j for all 1≤ j ≤ m; E_I\ I_k F_I_k^* if 0∈ I_k for some 1≤ k≤ m ]. where I\ I_r denote the partition obtained by removing the block I_k from I. For the other hand, let be I∈, A∈𝐧 we define φ:𝒜_n→ as followsφ(E_IF_A)=I*A^0where A^0=A∪{0} and I is considered as a element of(since 𝒫()⊆𝒫(_0)). It is not difficult to prove that φ∘Ψ=Id_ and Ψ∘φ=Id_𝒜_n. In fact, let say that I=(I_1,…,I_k), without lose generality consider that I_1,…, I_r are the blocks of I that have intersection with A, and I_r+1,…, I_k the blocks that don't have. By definition, we have thatI*A^0=(C,I_r+1,…, I_k)where C=A^0∪ I_1∪…∪ I_r. Thus, Ψ(I*A^0)=F_C^*E_I\ A where I\ A is the partition (I_r+1,…, I_k) in . Then, it is enough to prove thatF_C^*E_I\ A=E_IF_A. 
First, recall that for any block I_r, we have thatE_I_r=∏_j≠i_0, j∈ I_r E_i_0,j,where i_0= minI_rThus, (<ref>) implies thatE_I_rF_j=E_I_rF_i_0for any j∈ A∩ I_r. Therefore, more generally using (<ref>) we haveE_I_rF_A=E_I_rF_i_0F_A\ I_r= ∏_j≠i_0 E_i_0,jF_i_0F_A\ I_r= ∏_j≠i_0F_i_0F_j F_A\ I_r=F_I_r F_A\ I_r=F_A∪ I_rFinally, (<ref>) follows by using this argument for every block that has intersection with A. The converse can be proven from an analogous way. From now on, given a partition I∈ we will denote the element Ψ(I)by EF_I.The set ℬ_n={EF_IT_w | I∈ and w∈ W_n} spans the algebra . We proceed as in <cit.>, that is, we will prove by induction that 𝔅_n, thelinear subspace ofspanned by ℬ_n is equal to . The assertion is clear for n=1. Assume now that ℰ_n-1^𝙱 is spanned by 𝔅_n-1. Notice that 1∈𝔅_n. This fact and proving that 𝔅_n is a right ideal, implies the proposition. Now, we deduce that 𝔅_n is a right ideal from the hypothesis induction andLemma <ref>. Indeed, let EF_IT_w a element in ℬ_n and Y a generator of , and suppose that n∈ Supp(I), we have that EF_IT_wY is equal to E_n,iEF_I\ nx𝕋_n,k^±Y or F_nEF_I\ nx𝕋_n,k^±Y, where x∈⟨ B_1,T_1…,T_n-2⟩, and i= min(I_r), with n∈ I_r.depending if n belongs to I_0 or not. Then, using the relations from Lemma <ref> we can convert this expressions in a linear combination of elements of the formE_n,i(X)𝕋_n,k'^±or F_n(X)𝕋_n,k'^±, where X∈ℰ_n-1^𝙱.thus, the result follows by applying the induction hypothesis. The case when n∉ Supp(I) will be omitted, since it follows by analogous way.Now, we will focus in proving the linear independence of the set ℬ_n, to achieve that, we will readjust the arguments used by Ryom-Hansen in <cit.>. For that, we use a tensorial representation of , which is obtained by restricting the representation ofconstructed in <cit.>, considering d=n+1, to the subalgebra φ_n(). More precisely, let V be aK–vector space with basis 𝔅={v_i^r; i∈ X_n,0≤ r≤ n}.As usual we denote by 𝔅^⊗ k the naturalbasis ofV^⊗ k associated to 𝔅. That is, the elements of 𝔅^⊗ kare of the form:v_i_1^m_1⊗⋯⊗ v_i_k^m_kwhere (i_1, … , i_k)∈ X_n^k and (m_1, … , m_k)∈𝐧^k.We define the endomorphisms 𝐅, 𝐁:V→ V by:(v_i^r)𝐁= {[ v_-i^r for i>0 and r=0,; v_-i^r+(-^-1)v_i^r for i<0 and r=0,; v_-i^r for r≠0. ].and(v_i^r)𝐅={[ 0 r>0; v_i^r r=0 ]. On the other hand we define 𝐓, 𝐄:V⊗ V→ V⊗ V by()𝐓={[for i=j and r=s,; for i< j and r=s,;+ (-^-1)for i>j and r=s,; forr≠s. ]. and(v_i^r⊗ v_j^s)𝐄={[0r≠s; v_i^r⊗ v_j^sr=s ]. For all 1≤ i ≤ n-1, 1≤ j≤ n we extend these endomorphisms to the endomorphisms 𝐄_i, 𝐓_i, 𝐁_1, 𝐅_j of the n–th tensor power V^⊗ n of V, as follows:[𝐄_i:=1_V^⊗(i-1)⊗𝐄⊗1_V^⊗(n-i-1),𝐁_1:=𝐁⊗ 1_V^⊗(n-1),; 𝐓_i:=1_V^⊗(i-1)⊗𝐓⊗ 1_V^⊗(n-i-1) ,𝐅_j:=1_V^⊗(j-1)⊗𝐅⊗1_V^⊗(n-j) ] where 1_V^⊗ k denotes the endomorphism identity of V^⊗ k. The mapping B_1↦𝐁_1, T_i↦𝐓_i, E_i↦𝐄_i and F_i↦𝐅_i defines a representation Φ of in End(V^⊗ n). It is a consequence of <cit.>.Further, we have(See <cit.>) Let w∈ W_n parameterized by (m_1,…,m_n)∈ X_n^n. Then(v_1^r_1⊗…⊗ v_n^r_n)Φ_w=v_m_1^r_| m_1|⊗…⊗ v_m_n^r_| m_n|.where Φ_w denotes Φ(T_w). Let I=(I_1,…,I_m)∈ and A⊆, we will denote Φ(E_I) and Φ(F_A) by 𝐄_I and 𝐅_A respectively. We know by <cit.> that 𝐄_I acts over V^⊗ n as follows()𝐄_I={[0 if there exist i,j,k such that i,j∈ I_k y r_j≠r_j,; otherwise ]. In the same fashion, it is not difficult to prove that()𝐅_A={[ 0 if there exist i∈ A such that r_i≠0; otherwise ].Now, we have all the necessary to prove the main result of this sectionThe set ℬ_n is a basis of . In particular, the dimension ofis b_n+12^nn! 
We only have to prove that ℬ_n is a linear independent set, since it was already proven in Proposition <ref> that it spans . Let be I=(I_0,I_1,…,I_m) a element inconsidering the single blocks in its expression. Without lose of generality set I_0 as the block that contains 0, then, we define v^I∈ V^⊗ n as followsv^I:=v_1^r_1⊗…⊗ v_n^r_n,with r_i=k if i∈ I_kSuppose now that∑_J∈ , w∈ W_nλ_J,wEF_JT_w=0Then, given I∈, if we apply Φ and evaluate v^I in (<ref>), we will obtain∑_J∈ , w∈ W_nλ_J,w(v^I)(𝐄𝐅_IΦ_w)= 0thus, using (<ref>) and (<ref>) we have∑_ w∈ W_nλ_I,w(v^I)Φ_w =0∑_ w∈ W_nλ_I,w(v_1^r_1⊗…⊗ v_n^r_n)Φ_w =0∑_λ_I,wv_m_1^r_| m_1|⊗…⊗ v_m_n^r_| m_n| =0with (m_1,…, m_n) running in X_n^n. But, this elements are L.I in V^⊗ n, then λ_I,w=0 for all w∈ W_n. Finally as I was picked arbitrarily the result follows.The representation Φ is faithful.Sinceℬ_n⊆ℬ_n+1, it follows that⊆ℰ_n+1^𝙱, for all n≥ 1. Thus, by taking ℰ_0^𝙱:=K, we have the following tower of algebras.ℰ_0^⊆ℰ_1^⊆⋯⊆⊆ℰ_n+1^𝙱⊆⋯Recall that in the original definition of Φ in <cit.>, V is considered as the vector space with basis ℬ={v_i^r; i∈ X_n,0≤ r≤ d-1}. Thus, the condition d=n+1 it is essential for proving Theorem <ref>, in fact, if we suppose d≤ n, we could obtain a sum of linear dependent elements in Eq. (<ref>). Moreover, to get a sum of linear independent elements, it is enough to consider d≥ n+1, which is contained in the next result.Suppose that d≥ n+1. Then the homomorphism φ_n:↦ defined in Proposition <ref> is an embedding. It is enough to prove that the set A=φ_n(ℬ_n) is linear independent in . Now, we know that Φ is faithful (<cit.>), and we proved in Theorem <ref> that Φ(A) is L.I, then, the result follows.The set 𝒞_n={EF_I( 𝚖_1…𝚖_n)| I∈, 𝚖_i∈𝖬_i} is also a basis for . We can prove that 𝒞_n spansanalogously to Proposition <ref>, but this time using the relations given in Lemma <ref>. The linear independence is guaranteed by cardinality.§ MARKOV TRACE IN In this section we prove thatsupports a Markov Trace. For that, we use the method of relative traces, cf.<cit.>, which consists in construct a family of linear maps ϑ_n:→ℰ_n-1^, which gives step by step the desired Markov properties. Specifically, these properties are guaranteed by three key results, in our case these are Lemmas <ref>, Lemma <ref> and (ii) Lemma <ref> (ii), which are essential to prove that the trace defined by 𝚝𝚛_n=ϑ_1∘…∘ϑ_n is a Markov trace (Theorem <ref>).§.§From now on, we fix the parameters ,, 𝗒,𝗓∈𝕂 and we consider w∈𝒞_n expressed on the form w= EF_I (which is possible by Corollary <ref>). Let 𝕃:=𝕂(,,,), then, when it is needed, we will consider ,, 𝗒,𝗓 as variables, and work with the algebra ⊗𝕃, which, for simplicity, will be denoted byagain. We set ϑ_1(B_1)=, ϑ_1(F_1)= and ϑ_1(B_1F_1)=. For n≥ 2, we define the linear map fromto ℰ_n-1 on the basis 𝒞_n as follows: ϑ_n(𝚖_1⋯𝚖_n-1𝚖_n EF_I)={[𝚖_1⋯𝚖_n-1EF_I for 𝚖_n=1, n∉ Supp(I),; 𝚖_1⋯𝚖_n-1EF_I\ n for 𝚖_n=1, n∈ Supp(I),; 𝗓𝚖_1⋯𝚖_n-1𝕋_n-1,k^±EF_τ_n,k(I)for 𝚖_n=𝕋_n,k^±; k<n,; 𝗒𝚖_1⋯𝚖_n-1EF_I for 𝚖_n=B_n, n∉ Supp(I),; 𝚖_1⋯𝚖_n-1EF_I\ n for 𝚖_n=B_n, n∈ Supp(I). ]. where τ_n,k(I) denotes the partition (I*{n,k})\ n.We begin proving some partitions properties, which will use frequently in the sequelLet σ∈ S_n and I∈𝒫(_0), then we have that (i) σ(I\{k})=σ(I)\{σ(k)}, for some k∈ Supp(I).(ii) σ(I*{j,k})=σ(I)*{σ(j),σ(k)}, for some k,j∈.(iii) σ(τ_n,k(I))=τ_σ(n),σ(k)(σ(I)) for some k < n.Let I=(I_1,…,I_m) ∈𝒫(_0), and suppose without lose of generality that I_1 contains k. 
Then we have that I\{k}=(I_1',I_2,…, I_m) where I_1'=I_1\{k}. Thereforeσ(I\{k})=(σ(I_1'),σ (I_2),…, σ(I_m)) =(σ(I_1)\{σ(k)},σ (I_2),…, σ(I_m))=σ(I)\{σ(k)}and we have (i). For (ii) we only prove the case when j,k∈ Supp(I), and these are in different blocks. Let I_1 y I_2 the blocks of I that contains j and k respectively. Then, we haveI*{j,k}=(I_1∪ I_2, I_3, …, I_m)thereforeσ(I*{n,k})= (σ(I_1)∪σ(I_2), σ(I_3), …, σ(I_m))=(σ(I_1), σ(I_2), σ(I_3), …, σ(I_m))*{σ(n),σ(k)}=σ(I)*{σ(j),σ(k)}Finally, (iii) is a consequence of (i) and (ii).We would like to prove that (<ref>) holds by considering v∈ℰ_n-1^𝙱 instead 𝚖_1⋯𝚖_n-1 in the formula. Having this in mind, we introduce some notation and we prove a technical lemma. Let j>k∈ we define the element σ_j,k∈ S_n byσ_j,k=s_j-1⋯ s_kwhere the s_i denote the transposition (i,i+1). Note thatσ_j,k(i)={[ jif i=k; i-1 if k<i≤ j; i otherwise ]. andσ_j,k^-1(i)={[k if i=j;i+1 if k≤ i< j;iotherwise ]. For J=(J_1,…,J_r)∈𝒫(_0 \{n}) and I=(I_1,…, I_s)∈ the following equality holds. (σ_n,k^-1(J)*I)*{n,k})\ n = σ_n-1,k^-1(J)*((I*{n,k})\ n) We only prove the case when n,k ∈ Supp(I), and these are in different blocks of I, since the other cases can be verified analogously. Let J=(J_1,…,J_r)∈𝒫(_0 \{n}) andI=(I_1,…, I_s)∈, without lose generality, we can suppose that I_1 and I_2 are the blocks of I that contain k and n respectively. We proceed, distinguish cases.Case: n-1∉Supp(J)First, note that in this case the partitions σ_n,k^-1(J) and σ_n-1,k^-1(J) are the same, and it will be denoted by A=(A_1,…, A_r). Moreover, we have that n,k∉ Supp(A) by (<ref>) and the fact that n-1∉ Supp(J). Then, in these case (<ref>) holds directly, since the operations *{n,k} and \ n just have influence over I.Case: n-1∈ Supp(J)This time σ_n,k^-1(J) and σ_n-1,k^-1(J) are different, we will denote these by A=(A_1,…, A_r) and B=(B_1,…, B_r) respectively. Note that, n∈ Supp(A),k∈ Supp (B),k∉ Supp(A) andn∉ Supp (B). Moreover, if A_1 and B_1 are the blocks of A and B that contain n and k respectively, we have that A_i=B_i for 2≤ i≤ r, and A_1\{n}=B_1\{k}.Let I_3,…,I_t, with t≤ s, the blocks of I that have intersection with A_1\{n}=B_1\{k}, and I_t+1,…,I_s those that don't have. We define the partitionsA'=(A_2,…,A_r), B'=(B_2,…, B_r),and I'=(I_t+1,…,I_s)Then, for one side we haveB*((I*{n,k})\{n})=B*((I_1∪ I_2)\ n, I_3,… ,I_t,I' ) =B*(I_1∪ (I_2\{n}), I_3,… ,I_t,I' ) =(B_1∪ I_1∪ (I_2\{n})∪ I_3∪…∪ I_s, B'*I')= ((B_1\{k})∪ I_1∪ (I_2\{n})∪ I_3∪…∪ I_s, B'*I')On the other hand we have((A*I)*{n,k})=((A_1∪ I_2∪ I_3∪…∪ I_t,I_1,A'*I')*{n,k}) \ n=((A_1∪ I_1 ∪ I_2∪ I_3∪…∪ I_t)\ n,A'*I') =((A_1\{ n})∪ I_1 ∪ (I_2\{n})∪ I_3∪…∪ I_t,A'*I')since A'=B'and A_1\{n}=B_1\{k} the result follows.For every v∈ℰ_n-1^𝙱 we have ϑ_n(v𝚖_n EF_I)={[vEF_I for 𝚖_n=1, n∉ Supp(I),; vEF_I\ n for 𝚖_n=1, n∈ Supp(I),;𝗓 v𝕋_n-1,k^±EF_τ_n,k(I)for 𝚖_n=𝕋_n,k^±; k<n,;𝗒 vEF_I for 𝚖_n=B_n, n∉ Supp(I),; vEF_I\ n for 𝚖_n=B_n, n∈ Supp(I). ].By the linearity of the trace is enough prove the statement for v∈𝒞_n-1. The cases when 𝚖_n=1 can be proven easily.For case 𝚖_n=𝕋_n,k^± with k<n, we have ϑ_n(v𝕋_n,k^±EF_I)= ϑ_n(𝚖_1⋯𝚖_n-1EF_J𝕋_n,k^±EF_I) = ϑ_n(𝚖_1⋯𝚖_n-1𝕋_n,k^±EF_σ_n,k^-1(J) EF_I) = ϑ_n(𝚖_1⋯𝚖_n-1𝕋_n,k^±EF_σ_n,k^-1(J)*I)=𝚖_1⋯𝚖_n-1𝕋_n-1,k^±EF_τ_n,k(σ_n,k^-1(J)*I)On the other hand, we havev𝕋_n-1,k^±EF_τ_n,k(I) = =𝚖_1⋯𝚖_n-1EF_J𝕋_n-1,k^± EF_τ_n,k(I)=𝚖_1⋯𝚖_n-1𝕋_n-1,k^±EF_σ_n-1,k^-1(J) EF_τ_n,k(I)=𝚖_1⋯𝚖_n-1𝕋_n-1,k^±EF_σ_n-1,k^-1(J)*τ_n,k(I) Then the result follows by Lemma <ref>. Finally, we suppose that𝚖_n=B_n. 
We only prove the case when n∈ Supp(I),since the opposite case can be verified by an analogous way. Then, we haveϑ_n(vB_nEF_I)= ϑ_n( EF_J B_nEF_I)= ϑ_n( B_n EF_J EF_I ) = ϑ_n( B_n EF_I*J) =EF_(I*J)\ n=EF_(I\ n)*J=EF_J EF_I\ n=v EF_I\ nThe following lemmas contain several computations analogous to the proved in <cit.> for the bt–algebra. Therefore, although we work with a different quadratic relation, we will omit some computations in the following proofs, since these can be obtained by passing through the automorphism induced by the change of generators given in Remark <ref>. For all X,Z∈ℰ_n-1^𝙱 and Y∈, we have: (i) ϑ_n(YZ)=ϑ_n(Y)Z(ii) ϑ_n(XY)=Xϑ_n(Y)For proving claim (i) notice that, due to the linearityof ϑ_n, we can suppose that Z is adefining generator of ℰ_n-1^𝙱 and Y=𝚖_1𝚖_2…𝚖_n EF_I, with 𝚖_i∈𝖬_i and I∈. To prove the claimwe shall distinguish the Y's according to the possibilities of 𝚖_n, and if n belongs to Supp(I) or not. First, note that for Z=E_i, F_j with 1≤ i≤ n-2 and 1≤ j≤ n-1, claim (i) holds easily for any choice of _n. Also, when 𝚖_n=1 and n∉ Supp(I), since ϑ_n acts like the identity. Now, we proceed to study the remaining cases. Case: 𝚖_n=1, n∈ Supp(I)If we consider Z=T_j, E_j the result follows by <cit.>. And for Z=B_1, F_1 the results follows by Lemma <ref> and the fact that B_1 commutes with EF_I.Case: 𝚖_n=B_n, n∈ Supp(I)First suppose that Z=T_j for j∈{1,…,n-2}ϑ_n(𝚖_1⋯𝚖_n-1B_nEF_IT_j)= ϑ_n(𝚖_1⋯𝚖_n-1 B_n T_j EF_s_j(I))= ϑ_n(𝚖_1⋯𝚖_n-1 T_j B_n EF_s_j(I)) = 𝚖_1⋯𝚖_n-1 T_j EF_s_j(I)\ n)On the other hand, ϑ_n(𝚖_1⋯𝚖_n-1B_nEF_I)T_j=𝚖_1⋯𝚖_n-1 T_j EF_s_j(I\ n). Thus, since s_j doesn't act over n, we have that s_j(I)\ n=s_j(I\ n), and the result follows.For Z=B_1, we haveϑ_n(𝚖_1⋯𝚖_n-1B_nEF_IB_1)= ϑ_n(𝚖_1⋯𝚖_n-1B_n B_1 EF_I)= ϑ_n(𝚖_1⋯𝚖_n-1B_1B_nEF_I+ (-^-1)𝚖_1⋯𝚖_n-1λ_nEF_I)= 𝚖_1⋯𝚖_n-1B_1EF_I\ n+ ϑ_n( (-^-1)𝚖_1⋯𝚖_n-1λ_n EF_I) = 𝚖_1⋯𝚖_n-1EF_I\ nB_1+ (-^-1)ϑ_n(𝚖_1⋯𝚖_n-1λ_n EF_I)by using ii) Lemma <ref>, where λ_n=[B_1T_1^-1… T_n-2^-1 T_n-1… T_1B_1E_1,k-T_1^-1… T_n-2^-1T_n-1… T_1B_1^2E_1,k]. Then, it is enough to prove that 𝙰=ϑ_n( 𝚖_1⋯𝚖_n-1λ_n EF_I)=0. In fact, we have𝙰 = ϑ_n( 𝚖_1⋯𝚖_n-1[B_1T_1^-1… T_n-2^-1𝕋_n,1^-E_1,k-T_1^-1… T_n-2^-1𝕋_n,1^+B_1^2E_1,k]EF_I)= ϑ_n( 𝚖_1⋯𝚖_n-1B_1T_1^-1… T_n-2^-1𝕋_n,1^-E_1,kEF_I)-ϑ_n(𝚖_1⋯𝚖_n-1T_1^-1… T_n-2^-1𝕋_n,1^+B_1^2E_1,kEF_I) = ϑ_n( 𝚖_1⋯𝚖_n-1B_1T_1^-1… T_n-2^-1𝕋_n,1^-EF_I*{1,k})-ϑ_n(𝚖_1⋯𝚖_n-1T_1^-1… T_n-2^-1𝕋_n,1^+EF_I*{1,k})- (-^-1) ϑ_n(𝚖_1⋯𝚖_n-1T_1^-1… T_n-2^-1𝕋_n,1^-EF_I*{0,1,k})= 𝚖_1⋯𝚖_n-1B_1T_1^-1… T_n-2^-1𝕋_n-1,1^-EF_τ_n,1(I*{1,k})- 𝚖_1⋯𝚖_n-1T_1^-1… T_n-2^-1𝕋_n-1,1^+EF_τ_n,1(I*{1,k})-(-^-1) 𝚖_1⋯𝚖_n-1T_1^-1… T_n-2^-1𝕋_n-1,1^-EF_τ_n,1(I*{0,1,k})= 𝚖_1⋯𝚖_n-1B_1^2EF_τ_n,1(I*{1,k})- 𝚖_1⋯𝚖_n-1EF_τ_n,1(I*{1,k})-(-^-1) 𝚖_1⋯𝚖_n-1B_1EF_τ_n,1(I*{0,1,k})Finally, expanding B_1^2, we obtain that ϑ_n(𝚖_1⋯𝚖_n-1λ_n EF_I)=0, since τ_n,1(I*{1,k})*{0,1}=τ_n,1(I*{0,1,k}). For the case𝚖_n=B_n, n∉ Supp(I), we can proceed analogously (we only have to putinstead , and omit the operation \ n in the partition ). Case: 𝚖_n=𝕋_n,k^+, For Z=T_j with j∈{1,…,n-2} the result follows analogously as in <cit.> using the relations of Lemma <ref>. Suppose now, that Z=B_1, if k>1, then B_1 commutes with 𝕋_n,k^+, therefore the result follows easily. If k=1, we haveϑ_n(YZ)=ϑ_n(𝚖_1⋯𝚖_n-1T_n,k^+EF_IB_1)= ϑ_n(𝚖_1⋯𝚖_n-1T_n,1^+B_1 EF_I) = ϑ_n(𝚖_1⋯𝚖_n-1T_n,1^-EF_I) =𝚖_1⋯𝚖_n-1T_n-1,1^-EF_τ_n,1(I)=𝚖_1⋯𝚖_n-1T_n-1,1^+EF_τ_n,1(I)B_1=ϑ_n(Y)ZCase: 𝚖_n=𝕋_n,k^-.For Z=T_j with j∈{1,…, n-2}, the proof follows analogously as in <cit.>, since the formula i) of Lemma <ref> coincideswith (22) of <cit.>. 
Finally, for Z=B_1 we haveϑ_n(YZ)= ϑ_n( EF_I B_1)= ϑ_n( B_1 EF_I)= ϑ_n( B_1𝕋^-_n,k EF_I ) + (-^-1)[ϑ_n ( B_1T_1^-1… T_k-2^-1𝕋_n,1^- E_1,kEF_I) - ϑ_n(T_1^-1… T_k-2^-1𝕋_n,1^- B_1 E_1,kEF_I]= ϑ_n( B_1𝕋^-_n,k EF_I ) + (-^-1)[ϑ_n ( B_1T_1^-1… T_k-2^-1𝕋_n,1^- EF_I*{1,k}) - ϑ_n(T_1^-1… T_k-2^-1𝕋_n,1^+ EF_I*{1,k}))-(-^-1)ϑ_n(T_1^-1… T_k-2^-1𝕋_n,1^- EF_I*{0,1,k})]= B_1𝕋^-_n-1,k EF_τ_n,k(I)+(-^-1)[ B_1T_1^-1… T_k-2^-1𝕋_n-1,1^- EF_τ_n,1(I*{1,k}) -T_1^-1… T_k-2^-1𝕋_n-1,1^+ EF_τ_n,1(I*{1,k})-(-^-1)T_1^-1… T_k-2^-1𝕋_n-1,1^- EF_τ_n,1(I*{0,1,k})] On the other hand,ϑ_n(Y)Z= EF_τ_n,k(I) B_1= B_1 EF_τ_n,k(I)= B_1 EF_τ_n,k(I)= B_1𝕋^-_n-1,k EF_τ_n,k(I)+(-^-1)[ B_1T_1^-1… T_k-2^-1𝕋_n-1,1^- EF_τ_n,k(I)*{1,k} -T_1^-1… T_k-2^-1𝕋_n-1,1^+ EF_τ_n,k(I)*{1,k}-(-^-1)T_1^-1… T_k-2^-1𝕋_n-1,1^- EF_τ_n,k(I)*{0,1,k}]clearly we have that τ_n,k(I)*{1,k}=τ_n,1(I*{1,k}) and τ_n,1(I*{0,1,k})=τ_n,k(I)*{0,1,k}, then, the result follows.Finally (ii) is a direct consequence of Lemma <ref>, since X∈ℰ_n-1^𝙱. For all X,Z∈ℰ_n-1^𝙱 and Y∈, we have:ϑ_n(XYZ)=Xϑ_n(Y)Z.The proof is straightforward using the previous lemmas. For n≥ 2, X∈ℰ_n-1^ and Y∈, we have (i) ϑ_n(E_n-1X T_n-1)=ϑ_n(T_n-1X E_n-1)(ii) ϑ_n-1(ϑ_n(E_n-1Y))=ϑ_n-1(ϑ_n(YE_n-1)) As always, by linearity of the trace, we can consider X and Y in 𝒞_n-1 and 𝒞_n respectively. Let X=𝚖_n-1EF_I, with I∈𝒫(_0\{n}), for proving (i) we will distinguish cases depending of the value of 𝚖_n-1.Case: 𝚖_n-1=1. For one side, we haveϑ_n(E_n-1X T_n-1)= ϑ_n(E_n-1 EF_I T_n-1) = ϑ_n( T_n-1 EF_s_n-1(I)E_n-1 ) = ϑ_n( T_n-1 EF_s_n-1(I)*{n-1,n}= EF_τ_n,n-1(s_n-1(I)*{n-1,n})On the other handϑ_n(T_n-1X E_n-1) = ϑ_n(T_n-1 EF_I E_n-1) = ϑ_n( T_n-1 EF_I*{n-1,n} )= EF_τ_n,n-1(I*{n-1,n})Now, if n-1∉ Supp(I) the equality is clear, since s_n-1(I)=I. On the other hand, if n-1 ∈ Supp(I), it is not difficult to prove that s_n-1(I)*{n-1,n}=I*{n-1,n}. Case: If 𝚖_n-1=𝕋_n-1,k^±, we haveϑ_n(E_n-1X T_n-1) = ϑ_n(𝕋_n-1,k^±E_n,k T_n-1 EF_s_n-1(I)) = ϑ_n(𝕋_n-1,k^± T_n-1 EF_s_n-1(I)*{n-1,k}) =𝕋_n-1,k^± EF_τ_n,n-1(s_n-1(I)*{n-1,k})On the other handϑ_n(T_n-1X E_n-1)= ϑ_n(T_n-1𝕋_n-1,k^± EF_IE_n-1) = ϑ_n(𝕋_n,k^± EF_I*{n-1,n}) =𝕋_n-1,kEF_τ_n,k(I*{n-1,n})and, it is easy to verify that the partitions from both cases are equal. Case: If 𝚖_n-1=B_n-1, we haveϑ_n( E_n-1XT_n-1) = ϑ_n(E_n-1 B_n-1EF_IT_n-1) = ϑ_n( B_n-1T_n-1E_n-1EF_s_n-1(I)) = ϑ_n( B_n-1T_n-1EF_s_n-1(I)*{n-1,n}) = B_n-1EF_τ_n,n-1(s_n-1(I)*{n-1,n})On the other handϑ_n(T_n-1X E_n-1) = ϑ_n(T_n-1 B_n-1 EF_IE_n-1)= ϑ_n( T_n-1B_n-1 EF_I*{n-1,n}) = ϑ_n(𝕋_n,n-1^- EF_I*{n-1,n})= B_n-1EF_τ_n,n-1(I*{n-1,n}))and we know by the first case that the partitions involved are equal, then we have already proved (i). For proving (ii) we need more cases, since we have to apply two levels of the relative trace, then the result depends from the values of 𝚖_n-1 and 𝚖_n of Y=𝚖_n-1𝚖_nEF_I∈𝒞_n. First note that for the cases _n=1, _n-1=1; _n=1, _n-1=B_n-1; _n=B_n, _n-1=1 and _n=B_n, _n-1=B_n-1, the result follows directly, since E_n-1 commute with Y in each case, then we only have to analize five cases. Case: If _n=1 and _n-1=𝕋_n-1,k^±.ϑ_n-1(ϑ_n(YE_n-1))= ϑ_n-1(ϑ_n(𝕋_n-1,k^±EF_I E_n-1)) = ϑ_n-1(ϑ_n(𝕋_n-1,k^±EF_I*{n-1,n}) = ϑ_n-1( 𝕋_n-1,k^±EF_(I*{n-1,n})\ n)) =𝕋_n-2,k^±EF_τ_n-1,k(I_1)where I_1=(I*{n-1,n})\ n. On the other handϑ_n-1(ϑ_n(E_n-1Y))= ϑ_n-1(ϑ_n( E_n-1𝕋_n-1,k^± EF_I)) = ϑ_n-1(ϑ_n(𝕋_n-1,k^± EF_I*{n,k}))= ϑ_n-1( 𝕋_n-1,k^± EF_(I*{n,k})\ n)=𝕋_n-2,k^± EF_τ_n-1,k(I_2)where I_2= I*{n,k})\ n. Further, we know by <cit.> that τ_n-1,k(I_1)=τ_n-1,k(I_2). 
Note that for case _n=B_n and _n-1=𝕋_n-1,k^± we have an analogous proof, since E_n,k commutes with B_n, indeed, the only difference with this case is that when we apply the trace at level n appear the parameterinstead . Case: If _n=𝕋_n,k^± and _n-1=1.ϑ_n-1(ϑ_n(YE_n-1))= ϑ_n-1(ϑ_n(𝕋_n,k^± EF_IE_n-1))= ϑ_n-1(ϑ_n(𝕋_n,k^±EF_I*{n-1,n}))=ϑ_n-1(𝕋_n-1,k^±EF_I_1)where I_1=τ_n,k(I*{n-1,n}). On the other handϑ_n-1(ϑ_n(E_n-1Y)) = ϑ_n-1(ϑ_n(E_n-1𝕋_n,k^± EF_I)) = ϑ_n-1(ϑ_n(𝕋_n,k^± EF_I*{n,k}))=ϑ_n-1(𝕋_n-1,k^± EF_I_2)where I_2=τ_n,k(I*{n,k}). First note that when k=n-1 the result follows directly, and if k<n-1 we obtainϑ_n-1(𝕋_n-1,k^± EF_I_i)=𝕋_n-2,k^±EF_τ_n-1,k(I_i)Again by <cit.> we have that τ_n-1,k(I_1)=τ_n-1,k(I_2) and the result follows.Case: If _n=𝕋_n,k^± and _n-1=B_n-1. ϑ_n-1(ϑ_n(YE_n-1)) = ϑ_n-1(ϑ_n( B_n-1𝕋_n,k^± EF_I E_n-1))= ϑ_n-1(ϑ_n( B_n-1𝕋_n,k^± EF_I*{n-1,n})) =ϑ_n-1( B_n-1𝕋_n-1,k^± EF_I_1)On the other handϑ_n-1(ϑ_n(E_n-1Y)) = ϑ_n-1(ϑ_n(E_n-1 B_n-1𝕋_n,k^± EF_I ))= ϑ_n-1(ϑ_n( B_n-1𝕋_n,k^± E_n,kEF_I ))=ϑ_n-1( B_n-1𝕋_n-1,k^± EF_I_2)where I_1=τ_n,k(I*{n-1,n}) and I_2=τ_n,k(I*{n,k}). Now, note that when k=n-1 the result is direct, then we can suppose that k<n-1, thus we have that B_n-1𝕋_n-1,k^±=B_n-1T_n-2𝕋_n-2,k^±=𝕋_n-1,n-2^-𝕋_n-2,k^±=T_n-2B_n-2𝕋_n-2,k^±Using this we obtain for one sideϑ_n-1( B_n-1𝕋_n-1,k^± EF_I_j)= ϑ_n-1( 𝕋_n-1,n-2^-𝕋_n-2,k^± EF_I_j)= ϑ_n-1( 𝕋_n-1,n-2^- EF_σ(I_j))𝕋_n-2,k^±=B_n-2 EF_τ_n-1,n-2(σ(I_j))𝕋_n-2,k^±where σ=σ_n-2,k. Let see that the partitions are equal. let be A_1,A_2,A_3 the blocks of I that contains k,n-1 and n respectively, consider I' as the partition that result by removing the blocks A_1,A_2 and A_3 from I. Then, we haveI_1 = (I*{n,n-1})*{n,k})\ n= ((I*{n,n-1, k})\ n= I'*(A_1∪ A_2∪ A_3')I_2=((I*{n,k})*{n,k})\ n= ((I*{n,k})\ n = I'*(A_1∪ A_3', A_2)where A_3'=A_3\{n}. Thereforeσ(I_1) = σ(I')*(σ(A_1)∪σ(A_2)∪σ(A_3')) σ (I_2)= σ(I')*(σ(A_1)∪σ(A_3'), σ(A_2))now, note that σ(k)=n-2 and σ(n-1)=n-1, then we haveτ_n-1,n-2(σ(I_1)) =(σ(I_2)*{n-1,n-2})\ n-1= σ(I')*(σ(A_1)∪σ(A_2')∪σ(A_3'))τ_n-1,n-2(σ (I_2))=(σ(I_1)*{n-1,n-2})\ n-1 = σ(I')*(σ(A_1)∪σ(A_3')∪σ(A_2'))where A_2=A_2\{n-1}. Case: If _n=𝕋_n,k^± and _n-1=𝕋_n-1,j^±. Similarly as the last case, we haveϑ_n-1(ϑ_n(YE_n-1)) = ϑ_n-1(ϑ_n(𝕋_n-1,j^±𝕋_n,k^± EF_I*{n-1,n})) =ϑ_n-1(𝕋_n-1,j^±𝕋_n-1,k^± EF_I_1)On the other handϑ_n-1(ϑ_n(E_n-1Y)) = ϑ_n-1(ϑ_n( E_n-1𝕋_n-1,j^±𝕋_n,k^± EF_I)) = ϑ_n-1(ϑ_n(𝕋_n-1,j^±𝕋_n-1,k^± E_a,bEF_I))=ϑ_n-1(𝕋_n-1,j^±𝕋_n-1,k^± EF_I_2)where I_1=τ_n,k(I*{n-1,n}), I_2=τ_n,k(I*{a,b}) and{a,b}={[ {k,j}if j<k; {k,j+1} if j≥ k ].∙ Subcase k<n-1: First, if j=n-2 implies that j≥ k, therefore {a,b}={j+1,k}={n-1,k}, thus I_1=I_2, and the result follows. Then, we can suppose j<n-2, and we obtain𝕋_n-1,j^±𝕋_n-1,k^±=T_n-2T_n-3𝕋_n-3,j^± T_n-2𝕋_n-2,k=T_n-2T_n-3T_n-2𝕋_n-3,j^±𝕋_n-2,k^±=T_n-3T_n-2𝕋_n-2,j^±𝕋_n-2,k^±thusϑ_n-1( 𝕋_n-1,j^±𝕋_n-1,k^±EF_I_1) i = T_n-3ϑ_n-1(T_n-2EF_I_i') 𝕋_n-2,j^±𝕋_n-2,k^±=T_n-3EF_τ_n-1,n-2(I_i')𝕋_n-2,j^±𝕋_n-2,k^±where I_i'=σ(I_i), with σ=σ_n-2,jσ_n-2,k, for i=1,2. And, it is known that τ_n-1,n-2(I_1')=τ_n-1,n-2(I_2') by <cit.>. ∙ Subcase k=n-1: We have that 𝕋_n-1,k^±=1 or B_n-1, for the first case the result is direct. Then, suppose 𝕋_n-1,k^±=B_n-1, we proceed with the positive case first, that is _n-1=𝕋_n-1,j^+. 
Note that𝕋_n-1,j^+B_n-1 =T_n-2⋯ T_j T_n-2⋯ T_1 B_1 T_1^-1⋯ T_n-2^-1=T_n-2^2⋯ T_1 B_1 T_1^-1⋯ T_n-2^-1𝕋_n-2,j^+= T_n-3⋯ T_1 B_1 T_1^-1⋯ T_n-2^-1𝕋_n-2,j^+ + (-^-1)B_n-1E_n-2𝕋_n-2,j^+=B_n-2T_n-2𝕋_n-2,j^+-(-^-1)B_n-2E_n-2𝕋_n-2,j^++(-^-1)B_n-1E_n-2𝕋_n-2,j^+Therefore we obtainϑ_n-1(𝕋_n-1,j^+ B_n-1 EF_I_i)= ϑ_n-1(B_n-2T_n-2𝕋_n-2,j^+ EF_I_i)- (-^-1)ϑ_n-1(B_n-2E_n-2𝕋_n-2,j^+EF_I_i)+ (-^-1)ϑ_n-1(B_n-1E_n-2𝕋_n-2,j^+EF_I_i)=B_n-2ϑ_n-1(T_n-2 EF_I_i')𝕋_n-2,j^+- (-^-1)B_n-2ϑ_n-1(EF_I_i'*{n-1,n-2})𝕋_n-2,j^++ (-^-1)ϑ_n-1(B_n-1EF_I_i'*{n-1,n-2})𝕋_n-2,j^+=B_n-2 EF_τ_n-1,n-2(I_i')𝕋_n-2,j^+- (-^-1)B_n-2EF_(I_i'*{n-1,n-2})\ n-1𝕋_n-2,j^++(-^-1)B_n-1EF_(I_i'*{n-1,n-2})\ n-1))𝕋_n-2,j^+where I_j'=σ(I_j) with σ=σ_n-2,j. First, note that by definition (I_j'*{n-1,n-2})\ n-1=τ_n-1,n-2(I_j'), also we have j<k, since k=n-1. Therefore, we can deduce from the last case that τ_n-1,n-2(I_1')=τ_n-1,n-2(I_2').Finally, suppose that _n-1=𝕋_n-1,j^-. For (i) <cit.> (taking m=0) we have that𝕋_n-1,j^-B_n-1=B_n-2𝕋_n-1,j^-Thereforeϑ_n-1(𝕋_n-1,j^-B_n-1) = ϑ_n-1(B_n-2𝕋_n-1,j^-EF_I_i)=B_n-2ϑ_n-1( 𝕋_n-1,j^-EF_I_i) =B_n-2𝕋_n-2,j^-EF_τ_n-1,j(I_i)for i=1,2. Then, we have to verify that τ_n-1,j(I_1)=τ_n-1,j(I_2). Since we are supposing k=n-1, we have thatI_1= I*{n-1,n}\ n I_2=I*{n,j,n-1}\ nand therefore (I_1*{n-1,j})\ n-1=(I_2*{n-1,j})\ n-1.For n≥ 2 and X∈ℰ_n-1^𝙱. We have (i) ϑ_n(T_n-1XT_n-1^-1)=ϑ_n-1(X)=ϑ_n(T_n-1^-1XT_n-1)Consider X=_n-1EF_I, with I∈𝒫(_0\{n}). We proceed by cases according to the value of _n-1. Case: When _n-1=1 the results follows easily. Indeed, if n-1∉ Supp(I) the result is direct, and when n-1∈ Supp(I) we haveϑ_n(T_n-1XT_n-1^-1) = ϑ_n(T_n-1 EF_IT_n-1^-1) = ϑ_n( EF_s_n-1(I)T_n-1T_n-1^-1)=EF_(s_n-1(I))\ n= EF_I\ n-1= ϑ_n-1(X)and the right side follows analogously. Case: When _n-1=B_n-1, we haveϑ_n(T_n-1 B_n-1EF_IT_n-1^-1)= ϑ_n( B_nEF_s_n-1(I))Then when n-1∈ Supp(I) we obtainϑ_n( B_nEF_s_n-1(I)) =EF_s_n-1(I)\ n=EF_I\ n-1= ϑ_n-1(X)the opposite case follows analogously.Case: When _n-1=𝕋_n-1,k^± with k<n-1, we haveϑ_n(T_n-1XT_n-1^-1)= ϑ_n( T_n-1𝕋_n-1,k^± EF_I T_n-1^-1) = ϑ_n(𝕋_n,k^± T_n-1^-1 EF_s_n-1(I))= ϑ_n( T_n-2^-1𝕋_n,k^± EF_s_n-1(I))= T_n-2^-1𝕋_n-1,k^± EF_τ_n,k(s_n-1(I))=𝕋_n-2,k^± EF_τ_n,k(s_n-1(I))It is not difficult to prove that independently if n-1 belong to Supp(I) or not, we have that τ_n,k(s_n-1(I))=τ_n-1,k(I), thus we obtain 𝕋_n-2,k^± EF_τ_n-1,k(I)=ϑ_n-1(X)Finally we haveT_n-1XT_n-1^-1=T_n-1^-1XT_n-1+(-^-1)(E_n-1XT_n-1-T_n-1XE_n-1)for X∈, for details see <cit.>. Then applying this relation and (i) Lemma <ref> we obtain thatϑ_n(T_n-1XT_n-1^-1)=ϑ_n(T_n-1^-1XT_n-1)and (ii) follows.For all X∈, we haveϑ_n-1(ϑ_n(XT_n-1))=ϑ_n-1(ϑ_n(T_n-1X))First note that the Eq. (<ref>) is equivalent toϑ_n-1(ϑ_n(XT_n-1^-1))=ϑ_n-1(ϑ_n(T_n-1^-1X))which can be obtained using (ii) Lemma <ref> and the formula for the inverse, cf. <cit.>. Then, sometimes we will prove this assertion instead of (<ref>) according to its difficulty.As always we consider X=_nEF_I, and we will distinguish cases according to the possibilities of _n and _n-1. We omit the case _n=_n-1=1 since it is straightforward. 
Case: _n=1, _n-1=𝕋_n-1,k^± with k<n-1ϑ_n-1(ϑ_n(T_n-1X))= ϑ_n-1(ϑ_n(T_n-1𝕋_n-1,k^± EF_I)) = ϑ_n-1(ϑ_n( T_n-1𝕋_n-1,k^± EF_I))= ϑ_n-1(ϑ_n( 𝕋_n,k^± EF_I))= ϑ_n-1(𝕋_n-1,k^± EF_τ_n,k(I))= ^2 𝕋_n-2,k^± EF_τ_n-1,k(τ_n,k(I))On the other handϑ_n-1(ϑ_n(XT_n-1))= ϑ_n-1(ϑ_n(𝕋_n-1,k^± EF_IT_n-1)) = ϑ_n-1(ϑ_n(𝕋_n-1,k^±T_n-1 EF_s_n-1(I))) = ϑ_n-1(𝕋_n-1,k^± EF_τ_n,n-1(s_n-1(I))) = ^2 𝕋_n-2,k^± EF_τ_n-1,k(τ_n,n-1(s_n-1(I)))Now, note that τ_n,n-1(s_n-1(I))=τ_n,n-1(I), then, it is clear that τ_n-1,k(τ_n,n-1(s_n-1(I)))=τ_n-1,k(τ_n,k(I)). Case: _n=1, _n-1=B_n-1ϑ_n-1(ϑ_n(T_n-1X))= ϑ_n-1(ϑ_n(T_n-1 B_n-1 EF_I)) = ϑ_n-1(ϑ_n( T_n-1B_n-1 EF_I))= ϑ_n-1(B_n-1 EF_τ_n,n-1(I))On the other handϑ_n-1(ϑ_n(X T_n-1))= ϑ_n-1(ϑ_n( B_n-1 EF_I T_n-1)) = ϑ_n-1(ϑ_n( B_n-1 T_n-1 EF_s_n-1(I)))= ϑ_n-1( B_n-1 EF_τ_n,n-1(s_n-1(I)))and we know by the last case that the partitions involved are equal. Case: _n=𝕋_n,k^± with k<n. In this case we will prove (<ref>). First suppose that n∉ Supp(I)ϑ_n-1(ϑ_n(T_n-1^-1X)) = ϑ_n-1(ϑ_n(T_n-1^-1𝕋_n,k^±EF_I))= ϑ_n-1(ϑ_n(T_n-1^-1 T_n-1𝕋_n-1,k^±EF_I))= ϑ_n-1(ϑ_n(T_n-1^-1 T_n-1)𝕋_n-1,k^±EF_I) (by Lemma <ref>)= ϑ_n-1(ϑ_n-1()𝕋_n-1,k^±EF_I)=ϑ_n-1() ϑ_n-1(𝕋_n-1,k^±EF_I)On the other hand ϑ_n-1(ϑ_n(X T_n-1^-1)) = ϑ_n-1(ϑ_n(𝕋_n,k^±EF_I T_n-1^-1))= ϑ_n-1(ϑ_n( T_n-1𝕋_n-1,k^±EF_IT_n-1^-1))= ϑ_n-1(ϑ_n( T_n-1(𝕋_n-1,k^±EF_I)T_n-1^-1)(by Lemma <ref>)= ϑ_n-1(ϑ_n-1( 𝕋_n-1,k^±EF_I))=ϑ_n-1() ϑ_n-1(𝕋_n-1,k^±EF_I)From now on we suppose that n∈ Supp(I).∙Subcase: _n-1=1.Fisrt note, that for k=n-1 the result follows easily. Then, we can suppose k<n-1ϑ_n-1(ϑ_n(T_n-1^-1X)) = ϑ_n-1(ϑ_n(T_n-1^-1𝕋_n,k^±EF_I))= ϑ_n-1(ϑ_n(𝕋_n-1,k^±EF_I))=ϑ_n-1( 𝕋_n-1,k^±EF_I\ n)=𝕋_n-2,k^±EF_τ_n-1,k(I\ n)On the other handϑ_n-1(ϑ_n(X T_n-1^-1))= ϑ_n-1(ϑ_n(𝕋_n,k^±EF_IT_n-1^-1)) = ϑ_n-1(ϑ_n(𝕋_n,k^±T_n-1^-1EF_s_n-1(I)))= ϑ_n-1(ϑ_n( T_n-2^-1𝕋_n,k^±EF_s_n-1(I)))= ϑ_n-1( T_n-2^-1ϑ_n(𝕋_n,k^±EF_s_n-1(I))) = ϑ_n-1( T_n-2^-1𝕋_n-1,k^±EF_τ_n,k(s_n-1(I)))= 𝕋_n-2,k^±EF_τ_n,k(s_n-1(I))\ n-1 and it is easy verify that τ_n,k(s_n-1(I))\ n-1=τ_n-1,k(I\ n).∙Subcase: _n-1=B_n-1. First suppose k=n-1 for the negative case, that is _n=T_n-1B_n-1, we haveϑ_n-1(ϑ_n(T_n-1^-1X))= ϑ_n-1(ϑ_n(T_n-1^-1 B_n-1T_n-1B_n-1EF_I))= ϑ_n-1(ϑ_n(T_n-1^-1 T_n-1B_n-1B_n EF_I)) (by (i) Lemma <ref> )= ϑ_n-1(ϑ_n( B_n-1B_n EF_I))On the other handϑ_n-1(ϑ_n(X T_n-1^-1))= ϑ_n-1(ϑ_n( B_n-1T_n-1B_n-1EF_I T_n-1^-1)) = ϑ_n-1(ϑ_n( B_n-1T_n-1B_n-1 T_n-1^-1EF_s_n-1(I))) = ϑ_n-1(ϑ_n( B_n-1B_n EF_s_n-1(I)))If we fix I_1=I and I_2=s_n-1(I),ϑ_n-1(ϑ_n( B_n-1B_n EF_I_i)) = ϑ_n-1( B_n-1EF_I_i\ n))= ϑ_n-1( B_n-1EF_(I_i\ n)\ n-1) = ^2 EF_(I_i\ n)\ n-1the result follows easily by comparing the partitions for i=1,2. In the present case, we are supposing that n-1∈ Supp(I), for the opposite case we can proceed analogously, and we will obtain the same partitions, but this time, it will appear the parameterfor i=1,2 in the final result. We omit the proof for _n=T_n-1 (positive case) since can be verified analogously.For k<n-1 we haveϑ_n-1(ϑ_n(T_n-1^-1X))= ϑ_n-1(ϑ_n(T_n-1^-1 B_n-1𝕋_n,k^±EF_I)) = ϑ_n-1(ϑ_n(T_n-1^-1B_n-1T_n-1T_n-2𝕋_n-2,k^±EF_I))= ϑ_n-1(ϑ_n(T_n-1^-1B_n-1T_n-1T_n-2EF_σ(I)))_𝙰𝕋_n-2,k^±where σ=σ_n-2,k. 
Now, let's compute 𝙰𝙰 = ϑ_n-1(ϑ_n(T_n-1^-1B_n-1T_n-1T_n-2EF_σ(I))) = ϑ_n-1(ϑ_n(T_n-1B_n-1T_n-1T_n-2EF_σ(I)))-(-^-1)ϑ_n-1(ϑ_n(E_n-1B_n-1T_n-1T_n-2EF_σ(I))) = ϑ_n-1(ϑ_n(T_n-1B_n-1T_n-1^-1T_n-2EF_σ(I)))+(-^-1)ϑ_n-1(ϑ_n(T_n-1B_n-1E_n-1T_n-2EF_σ(I))) -(-^-1)ϑ_n-1(ϑ_n(E_n-1B_n-1T_n-1T_n-2EF_σ(I))) = ϑ_n-1(ϑ_n(T_n-2B_nEF_σ(I)))+(-^-1)ϑ_n-1(ϑ_n(T_n-1B_n-1T_n-2EF_σ(I)*{n,n-2})) -(-^-1)ϑ_n-1(ϑ_n(B_n-1T_n-1T_n-2EF_σ(I)*{n,n-2})) = ϑ_n-1(T_n-2EF_σ(I)\ n)+(-^-1)ϑ_n-1(ϑ_n(T_n-1T_n-2B_n-2EF_σ(I)*{n,n-2})) -(-^-1)ϑ_n-1(B_n-1T_n-2EF_τ_n,n-2(σ(I)*{n,n-2})) =EF_τ_n-1,n-2(σ(I)\ n)+(-^-1)ϑ_n-1(T_n-2B_n-2EF_τ_n,n-2(σ(I)*{n,n-2})) -(-^-1)ϑ_n-1(T_n-2B_n-2EF_τ_n,n-2(σ(I)*{n,n-2})) =EF_τ_n-1,n-2(σ(I)\ n) On the other hand ϑ_n-1(ϑ_n(X T_n-1^-1)) = ϑ_n-1(ϑ_n( B_n-1𝕋_n,k^±EF_I T_n-1^-1)) = ϑ_n-1(ϑ_n(B_n-1T_n-2^-1𝕋_n,k^±EF_J))= ϑ_n-1(ϑ_n(B_n-1T_n-2^-1T_n-1T_n-2EF_σ(J)))_𝙳𝕋_n-2,k^± where J=s_n-1(I) and σ=σ_n-2,k. Now, we compute 𝙳𝙳 = ϑ_n-1(ϑ_n(B_n-1T_n-2^-1T_n-1T_n-2EF_σ(J))) = ϑ_n-1(ϑ_n(B_n-1T_n-2T_n-1T_n-2EF_σ(J)))-(-^-1)ϑ_n-1(ϑ_n(B_n-1E_n-2T_n-1T_n-2EF_σ(J)))= ϑ_n-1(ϑ_n(T_n-2B_n-2T_n-1T_n-2EF_σ(J)))-(-^-1)ϑ_n-1(ϑ_n(B_n-1T_n-1T_n-2EF_σ(J)*{n,n-1})) = ϑ_n-1(T_n-2B_n-2T_n-2EF_τ_n,n-2(σ(J)))-(-^-1)ϑ_n-1(B_n-1T_n-2EF_τ_n,n-2(σ(J)*{n,n-1})) = ϑ_n-1(B_n-1EF_τ_n,n-2(σ(J)))+ (-^-1)ϑ_n-1(T_n-2B_n-2EF_(τ_n,n-2(σ(J)))*{n-1,n-2}) -(-^-1)ϑ_n-1(T_n-2B_n-2EF_τ_n,n-2(σ(J)*{n,n-1})) =EF_τ_n,n-2(σ(J))\ n-1and it is not difficult verify that the partitions involved coincide. ∙Subcase: _n-1=𝕋_n-1,j^±. We will distinguish 2 subcases First suppose that k<n-1ϑ_n-1(ϑ_n(T_n-1^-1X))= ϑ_n-1(ϑ_n(T_n-1^-1𝕋_n-1,j^±𝕋_n,k^±EF_I))= ϑ_n-1(ϑ_n(T_n-1^-1T_n-2𝕋_n-2,j^±T_n-1𝕋_n-1,k^±EF_I))= ϑ_n-1(ϑ_n(T_n-1T_n-2T_n-2^-1𝕋_n-2,j^±𝕋_n-1,k^±EF_I))= ϑ_n-1(ϑ_n(T_n-2T_n-1T_n-2^-1𝕋_n-2,j^±𝕋_n-1,k^±EF_I))= ϑ_n-1(T_n-2ϑ_n(T_n-1T_n-2^-1EF_φ(I))𝕋_n-2,j^±𝕋_n-1,k^±)where φ=σ_n-2,jσ_n-1,k. Further it is easy to prove thatϑ_n(T_n-1T_n-2^-1EF_φ(I))= T_n-2^-1EF_τ_n,n-2(φ(I))then we obtain the following. First we consider j<k.ϑ_n-1(ϑ_n(T_n-1^-1X)) = ϑ_n-1(T_n-2T_n-2^-1EF_τ_n,n-2(φ(I))𝕋_n-2,j^±𝕋_n-1,k^±) = ϑ_n-1(𝕋_n-2,j^±𝕋_n-1,k^±EF_τ_n,j(I))= ^2𝕋_n-2,j^±𝕋_n-2,k^±EF_τ_n-1,k(τ_n,j(I)) On the other handϑ_n-1(ϑ_n(X T_n-1^-1)) = ϑ_n-1(ϑ_n(𝕋_n-1,j^±𝕋_n,k^±EF_I T_n-1^-1)) = ϑ_n-1(ϑ_n(𝕋_n-1,j^±𝕋_n,k^±T_n-1^-1EF_J)),where J=s_n-1(I)= ϑ_n-1(𝕋_n-1,j^±T_n-2^-1ϑ_n(𝕋_n,k^±EF_J))= ϑ_n-1(𝕋_n-1,j^±T_n-2^-1𝕋_n-1,k^±EF_τ_n,k(J))= ϑ_n-1(𝕋_n-1,j^±𝕋_n-2,k^±EF_τ_n,k(J)= ϑ_n-1(𝕋_n-1,j^±EF_σ_n-2,k(τ_n,k(J)))𝕋_n-2,k^±= ^2𝕋_n-2,j^±EF_τ_n-1,j(σ_n-2,k(τ_n,k(J)))𝕋_n-2,k^±= ^2 𝕋_n-2,j^±T_n-2^-1𝕋_n-1,k^±EF_τ_n-1,j(τ_n,k(J))and the result follows by comparing the partitions. Note that for j≥ k the proof is the same but will appear the term j+1 instead of j, in both partitions. Finally suppose k=n-1, we only prove the negative case, that is _n=T_n-1B_n-1, since for the positive case we can proceed analogously, and it is easierϑ_n-1(ϑ_n(T_n-1^-1X))= ϑ_n-1(ϑ_n(T_n-1^-1𝕋_n-1,j^±T_n-1B_n-1EF_I)) = ϑ_n-1(ϑ_n(T_n-1^-1T_n-2𝕋_n-2,j^±T_n-1B_n-1EF_I)) = ϑ_n-1(ϑ_n(T_n-1^-1T_n-2T_n-1𝕋_n-2,j^±B_n-1EF_I))= ϑ_n-1(ϑ_n(T_n-2T_n-1T_n-2^-1𝕋_n-2,j^±B_n-1EF_I))= ϑ_n-1(T_n-2ϑ_n(T_n-1T_n-2^-1EF_σ_n-2,j(I))𝕋_n-2,j^±B_n-1 )= ϑ_n-1(T_n-2T_n-2^-1EF_τ_n,n-2(σ_n-2,j(I))𝕋_n-2,j^±B_n-1 )= ϑ_n-1(𝕋_n-2,j^±B_n-1EF_τ_n,j(I) )now, depending if n-1∈ Supp(I) or not, we can obtain𝕋_n-2,j^±EF_τ_n,j(I)\ n-1or𝕋_n-2,j^±EF_τ_n,j(I)respectively. 
On the other hand,

ϑ_n-1(ϑ_n(X T_n-1^-1))= ϑ_n-1(ϑ_n(𝕋_n-1,j^±T_n-1B_n-1EF_I T_n-1^-1)) = ϑ_n-1(ϑ_n(𝕋_n-1,j^±T_n-1B_n-1T_n-1^-1EF_J))= ϑ_n-1(ϑ_n(𝕋_n-1,j^±B_nEF_J))=C

Now, depending on whether n-1∈ Supp(I) or not, we obtain

[ C = ϑ_n-1(𝕋_n-1,j^±EF_J\ n) C =ϑ_n-1(𝕋_n-1,j^±EF_J); = 𝕋_n-2,j^±EF_τ_n-1,j(J\ n) = 𝕋_n-2,j^±EF_τ_n-1,j(J\ n) ]

respectively. It is easy to verify that the partitions are the same; therefore, this case follows.

Case: _n=B_n, _n-1=1.

ϑ_n-1(ϑ_n(X T_n-1))= ϑ_n-1(ϑ_n( B_nEF_I T_n-1))= ϑ_n-1(ϑ_n( B_n T_n-1EF_s_n-1(I)))= ϑ_n-1(ϑ_n( T_n-1B_n-1EF_s_n-1(I))) = ϑ_n-1(B_n-1EF_τ_n,n-1(s_n-1(I)))=EF_τ_n,n-1(s_n-1(I))\ n-1

On the other hand,

ϑ_n-1(ϑ_n(T_n-1X))= ϑ_n-1(ϑ_n(T_n-1 B_nEF_I))= ϑ_n-1(ϑ_n(T_n-1 B_nEF_I)) = ϑ_n-1(ϑ_n(T_n-1^2B_n-1T_n-1^-1EF_I))

Expanding the square and the inverse, we have that

ϑ_n-1(ϑ_n(T_n-1^2B_n-1T_n-1^-1EF_I))=A-(-^-1)B+(-^-1)C

where

A:= ϑ_n-1(ϑ_n(B_n-1T_n-1EF_I)), B:= ϑ_n-1(ϑ_n(B_n-1E_n-1EF_I)), C:= ϑ_n-1(ϑ_n(E_n-1B_nEF_I))

Now, by direct computations we have that

A= ϑ_n-1(B_n-1EF_τ_n,n-1(I)) =EF_τ_n,n-1(I)\ n-1

B= ϑ_n-1(ϑ_n(B_n-1EF_I*{n-1,n})) = ϑ_n-1(B_n-1EF_(I*{n-1,n})\ n) =B_n-1EF_((I*{n-1,n})\ n)\ n-1

and

C= ϑ_n-1(ϑ_n(B_nEF_I*{n-1,n})) = ϑ_n-1(EF_I*{n-1,n}\ n) = ϑ_n-1(EF_(I*{n-1,n}\ n)\ n-1

Clearly, we have that B=C and also that τ_n,n-1(I)\ n-1=τ_n,n-1(s_n-1(I))\ n-1; thus the result follows.

Case: _n=B_n, _n-1=B_n-1.

ϑ_n-1(ϑ_n(X T_n-1)) = ϑ_n-1(ϑ_n( B_n-1B_n EF_I T_n-1))= ϑ_n-1(ϑ_n( B_n-1B_n T_n-1EF_s_n-1(I)))= ϑ_n-1(ϑ_n(B_n-1T_n-1B_n-1EF_s_n-1(I)))

On the other hand,

ϑ_n-1(ϑ_n(T_n-1X)) = ϑ_n-1(ϑ_n(T_n-1 B_n-1B_nEF_I))= ϑ_n-1(ϑ_n(T_n-1 B_n-1B_nEF_I)) = ϑ_n-1(ϑ_n(B_n-1T_n-1B_n-1EF_I))

by using (i) Lemma <ref>. Now denote I_1=I and I_2=s_n-1(I). Then we have

ϑ_n-1(ϑ_n(B_n-1T_n-1B_n-1EF_I_i))= ϑ_n-1(B_n-1ϑ_n(T_n-1EF_I_i)B_n-1) = ϑ_n-1(B_n-1EF_τ_n,n-1(I_i)B_n-1)= ϑ_n-1(B_n-1^2EF_τ_n,n-1(I_i))= [ϑ_n-1(EF_τ_n,n-1(I_i))+ (-^-1) ϑ_n-1(B_n-1EF_τ_n,n-1(I_i)*{0,n-1}) ]= [ EF_τ_n,n-1(I_i)\ n-1+(-^-1)ϑ_n-1(B_n-1EF_(τ_n,n-1(I_i)*{0,n-1})\ n-1) ]

Finally, it is not difficult to verify that they are the same for i=1,2.

Case: _n=B_n, _n-1=𝕋_n-1,k^± with k<n-1. We proceed first with the positive case:

ϑ_n-1(ϑ_n(X T_n-1))= ϑ_n-1(ϑ_n(𝕋_n-1,k^+ B_n EF_I T_n-1)) = ϑ_n-1(𝕋_n-1,k^+ϑ_n( B_n T_n-1EF_s_n-1(I))= ϑ_n-1(𝕋_n-1,k^+ϑ_n( T_n-1B_n-1EF_s_n-1(I))=ϑ_n-1(𝕋_n-1,k^+ B_n-1EF_τ_n,n-1(s_n-1(I))) =ϑ_n-1( T_n-2B_n-1EF_σ(I_1))𝕋_n-2,k^+

where I_1=τ_n,n-1(s_n-1(I)) and σ=σ_n-2,k.
Now, expanding the square and the inverse, we obtain that

ϑ_n-1( T_n-2B_n-1EF_σ(I_1)) = ϑ_n-1( B_n-2T_n-2EF_σ(I_1))-(-^-1)ϑ_n-1( B_n-2E_n-2EF_σ(I_1))+ (-^-1)ϑ_n-1( B_n-1E_n-2EF_σ(I_1))=B_n-2ϑ_n-1(T_n-2EF_σ(I_1))-(-^-1)B_n-2ϑ_n-1(EF_σ(I_1)*{n-1,n-2})+ (-^-1)ϑ_n-1( B_n-1EF_σ(I_1)*{n-1,n-2})=B_n-2EF_τ_n-1,n-2(σ(I_1))-(-^-1)B_n-2EF_(σ(I_1)*{n-1,n-2})\ n-1+ (-^-1) EF_(σ(I_1)*{n-1,n-2})\ n-1

On the other hand,

ϑ_n-1(ϑ_n(T_n-1X)) = ϑ_n-1(ϑ_n(T_n-1𝕋_n-1,k^+B_n EF_I)) = ϑ_n-1(ϑ_n(𝕋_n,k^+B_n EF_I)) = [ϑ_n-1(ϑ_n(B_n-1T_n-1𝕋_n-1,k^+ EF_I))- (-^-1)ϑ_n-1(ϑ_n(B_n-1E_n-1𝕋_n-1,k^+ EF_I)) + (-^-1)ϑ_n-1(ϑ_n(B_nE_n-1𝕋_n-1,k^+ EF_I))]

Let us compute each term separately:

∙ϑ_n-1(ϑ_n(B_n-1T_n-1𝕋_n-1,k^+ EF_I))= ϑ_n-1(B_n-1ϑ_n(T_n-1 EF_φ( I))𝕋_n-1,k^+), with φ=σ_n-1,k,
= ϑ_n-1(B_n-1EF_τ_n,n-1φ(I)𝕋_n-1,k^+) = ϑ_n-1(B_n-1𝕋_n-1,k^+EF_τ_n,k(I) (by (iii) Lemma <ref>)= ϑ_n-1(T_n-2B_n-2𝕋_n-2,k^+EF_τ_n,k(I)) = ϑ_n-1(T_n-2B_n-2EF_σ(τ_n,k(I))) 𝕋_n-2,k^+= ^2 B_n-2EF_τ_n-1,n-2(σ(τ_n,k(I)))𝕋_n-2,k^+

∙ϑ_n-1(ϑ_n(B_n-1E_n-1𝕋_n-1,k^+ EF_I))= ϑ_n-1(ϑ_n(B_n-1𝕋_n-1,k^+E_n,k EF_I)) = ϑ_n-1(ϑ_n(B_n-1𝕋_n-1,k^+EF_I*{n,k})) = ϑ_n-1(B_n-1𝕋_n-1,k^+EF_(I*{n,k})\ n) = ϑ_n-1(T_n-2B_n-2EF_σ(τ_n,k(I)))𝕋_n-2,k^+=B_n-2EF_τ_n-1,n-2(σ(τ_n,k(I)))𝕋_n-2,k^+

∙ϑ_n-1(ϑ_n(B_nE_n-1𝕋_n-1,k^+ EF_I)) = ϑ_n-1(ϑ_n(B_n𝕋_n-1,k^+ E_n,kEF_I))= ϑ_n-1(ϑ_n(𝕋_n-1,k^+B_nEF_I*{n,k})) = ϑ_n-1(𝕋_n-1,k^+EF_(I*{n,k})\ n)) = 𝕋_n-2,k^+EF_τ_n-1,k(τ_n,k(I))=EF_τ_n-1,n-2(σ(τ_n,k(I)))𝕋_n-2,k^+

Finally, by using (iii) Lemma <ref> we have

τ_n-1,n-2(σ(τ_n,k(I)))= τ_n-1,n-2((τ_n,n-2(σ(I)))) =(σ(I)*{n,n-1,n-2})\{n,n-1}

and, on the other hand,

τ_n-1,n-2(σ(I_1))= τ_n-1,n-2(τ_n,n-1(σ(s_n-1(I))))= τ_n-1,n-2(τ_n,n-1(s_n-1(σ(I))))=(s_n-1(σ(I))*{n,n-1,n-2})\{n,n-1}

Since s_n-1 only moves the elements n and n-1, and these are removed by the partition, the equality follows.

For the negative case, we have

ϑ_n-1(ϑ_n(X T_n-1)) = ϑ_n-1(ϑ_n(𝕋_n-1,k^-B_n T_n-1EF_s_n-1(I))) = ϑ_n-1(ϑ_n(𝕋_n-1,k^-T_n-1B_n-1EF_s_n-1(I)))= ϑ_n-1(𝕋_n-1,k^-B_n-1EF_τ_n,n-1(s_n-1(I)))= ϑ_n-1(B_n-2𝕋_n-1,k^-EF_τ_n,n-1(s_n-1(I))) (by (ii) Lemma <ref>)=B_n-2ϑ_n-1(𝕋_n-1,k^-EF_τ_n,n-1(s_n-1(I))) = ^2B_n-2𝕋_n-2,k^-EF_τ_n-1,k(τ_n,n-1(s_n-1(I)))= ^2B_n-2EF_σ(τ_n-1,k(τ_n,n-1(s_n-1(I))))𝕋_n-2,k^-

and for the other side

ϑ_n-1(ϑ_n(T_n-1X))= ϑ_n-1(ϑ_n(T_n-1𝕋_n-1,k^-B_n EF_I)) = ϑ_n-1(ϑ_n(𝕋_n,k^-B_n EF_I))= ϑ_n-1(ϑ_n(B_n-1𝕋_n,k^- EF_I))= ϑ_n-1(B_n-1𝕋_n-1,k^- EF_τ_n,k(I)) = ϑ_n-1(T_n-2B_n-2 EF_σ(τ_n,k(I)))𝕋_n-2,k^-= ^2B_n-2 EF_τ_n-1,n-2(σ(τ_n,k(I)))𝕋_n-2,k^-

Finally, using (iii) Lemma <ref> we have that

σ(τ_n-1,k(τ_n,n-1(s_n-1(I)))) = τ_n-1,n-2(τ_n,n-1(σ (s_n-1(I)))) =[(σ (s_n-1(I))*{n,n-1})\ n]*{n-1,n-2}\ n-1=[(σ (s_n-1(I))*{n,n-1,n-2}]\{n,n-1}= [(σ (I)*{n,n-1,n-2}]\{n,n-1}= τ_n-1,n-2(τ_n,n-2(σ (I))=τ_n-1,n-2(σ(τ_n,k(I)))

Let 𝚝𝚛_n:→𝕃 be the linear map defined inductively by 𝚝𝚛_1=ϑ_1 and

𝚝𝚛_n:=𝚝𝚛_n-1∘ϑ_n,

and let us denote by 𝚝𝚛 the family {𝚝𝚛_n}_n≥ 1. Then, we have the following result.

𝚝𝚛 is a Markov trace on {}_n≥ 1. That is, for every n≥ 1 the linear map 𝚝𝚛_n:→𝕃 satisfies the following properties:

i) 𝚝𝚛_n(1)= 1
ii) 𝚝𝚛_n+1(XT_n) = 𝚝𝚛_n+1(XE_nT_n)= 𝚝𝚛_n(X)
iii) 𝚝𝚛_n+1(XE_n) =𝚝𝚛_n(X)
iv) 𝚝𝚛_n+1(XB_n) =𝚝𝚛_n(X)
v) 𝚝𝚛_n+1(XB_nE_n) =𝚝𝚛_n+1(XB_nF_n+1)=𝚝𝚛_n(X)
vi) 𝚝𝚛_n(XY)=𝚝𝚛_n(YX), where X,Y∈

for all n≥ 1.

Rules (ii)–(v) are direct consequences of Lemma <ref> (ii). We will prove rule (vi) by induction on n. For n=1, the rule holds since ℰ_1^𝙱 is commutative. Suppose now that (vi) is true for all k less than n. For Y∈ℰ_n-1^𝙱 and X∈ the result follows easily by Lemma <ref> and the induction hypothesis, cf. <cit.>.
Thus, 𝚝𝚛_n(XY) =𝚝𝚛_n(YX) for all X∈ and Y∈ℰ_n-1^𝙱. Further, for Y∈{T_n-1, E_n-1} we have

𝚝𝚛_n(XY) = 𝚝𝚛_n-2(𝚝𝚛_n-1(𝚝𝚛_n(XY))) = 𝚝𝚛_n-2(𝚝𝚛_n-1(𝚝𝚛_n(YX)))

by using Lemmas <ref> and <ref>. Therefore, we have

𝚝𝚛_n(XY) = 𝚝𝚛_n(YX)

for all X∈ and Y∈ℰ_n-1^∪{T_n-1, E_n-1}; thus, having in mind the linearity of 𝚝𝚛_n, the result follows.

§.§ Knot invariants from .

In order to define a new invariant of links in the solid torus, we recall some necessary facts. The closure of a braid α in the group W_n (recall Section <ref>) is defined by joining its corresponding endpoints with simple (unknotted and unlinked) arcs, and it is denoted by α̂. The result of the closure, α̂, is a link in the solid torus, denoted ST. This can be seen by viewing the closure of the fixed strand as the complementary solid torus. For an example of a link in the solid torus see Figure <ref>. By the analogue of the Markov theorem for ST (cf. for example <cit.>), isotopy classes of oriented links in ST are in bijection with equivalence classes of ⋃_n W_n, the inductive limit of braid groups of type 𝙱, with respect to the equivalence relation ∼_𝙱:

(i) αβ∼_𝙱βα
(ii) α∼_𝙱ασ_n and α∼_𝙱ασ_n^-1

for all α, β∈ W_n. We set

𝖫 :=- (- ^-1)/ and D:=1/z √(𝖫).

Let us denote by π_𝖫 the representation of W_n in , given by σ_i ↦√(𝖫)T_i and ρ_1↦ B_1. Then, for α∈W_n, we define

Δ_𝙱(α):=(D)^n-1(𝚝𝚛_n∘π_𝖫)(α).

It is well known that the previous expression can be rewritten as follows:

Δ_𝙱(α)=(D)^n-1(√(𝖫))^e(α)(𝚝𝚛_n∘π)(α)

where e(α) is the exponent sum of the σ_i's appearing in the braid α, and π is the natural representation of W_n in .

Let L be a link in ST obtained as the closure of a braid α∈W_n. Then the map L↦Δ_𝙱(α) defines an isotopy invariant of links in ST. The proof follows by using the Markov trace properties and the definition of the normalization element 𝖫.

Note that classical links can be regarded as links in ST; in fact, a classical link can be obtained as the closure of a braid α∈W_n which does not contain ρ_1 in its expression. Thus, the invariant Δ_𝙱 restricted to classical links coincides with the invariant Δ given in <cit.>, and therefore it is more powerful than the Homflypt polynomial in that case.

The Markov trace 𝚝𝚛 from Theorem <ref> was constructed with the aim of defining invariants for tied links in the solid torus, having as reference <cit.>. However, to do that, it is necessary to introduce these new objects from the beginning, which is a problem in itself. We will therefore study this subject in a future work (in progress).
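As a purely arithmetic illustration of the last rewriting of Δ_𝙱, the following minimal Python sketch (our own naming; the trace value 𝚝𝚛_n(π(α)) itself must come from the algebra, and 𝖫, D are taken here as numerical inputs) computes the exponent sum e(α) from a braid word and the resulting topological prefactor D^(n-1)(√𝖫)^e(α):

```python
# Sketch of the normalization in Delta_B(alpha) = D^(n-1) * sqrt(L)^e(alpha) * tr_n(pi(alpha)).
# Only the prefactor is computed; tr_n(pi(alpha)) is not modeled here.
import math

def exponent_sum(word):
    """word: a braid word as signed generator indices, e.g. [1, 2, -1] for
    sigma_1 sigma_2 sigma_1^{-1}; the generator rho_1 (encoded as 0) does not
    contribute to e(alpha), since only the sigma_i's count."""
    return sum(1 if g > 0 else -1 for g in word if g != 0)

def prefactor(word, n, L, D):
    # n: number of strands; L, D: illustrative numerical values
    return D**(n - 1) * math.sqrt(L)**exponent_sum(word)

# Example: a 3-strand braid word rho_1 sigma_1 sigma_2 sigma_1^{-1}, with e(alpha) = 1
print(prefactor([0, 1, 2, -1], n=3, L=2.0, D=0.5))
```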
http://arxiv.org/abs/1703.08850v1
{ "authors": [ "Marcelo Flores" ], "categories": [ "math.RA", "math.QA", "math.RT" ], "primary_category": "math.RA", "published": "20170326170610", "title": "A bt-algebra of type B" }
ramon.lapiedra@uv.es Departament d'Astronomia i Astrofísica, Universitat de València, 46100 Burjassot, València, Spain; Observatori Astronòmic, Universitat de València, E-46980 Paterna, València, Spain.
antonio.morales@uv.es Departament d'Astronomia i Astrofísica, Universitat de València, 46100 Burjassot, València, Spain; Observatori Astronòmic, Universitat de València, E-46980 Paterna, València, Spain.

A physically plausible Lemaître-Tolman-Bondi collapse in the marginally bound case is considered. By “physically plausible” we mean that the corresponding metric is C^1 matched at the collapsing star surface and, further, that its intrinsic energy is, as due, stationary and finite. It is proved for this Lemaître-Tolman-Bondi collapse, for some parameter values, that its intrinsic central singularity is globally naked, thus violating the cosmic censorship conjecture with, for each direction, one photon, or perhaps a pencil of photons, leaving the singularity and reaching null infinity. Our result is discussed in relation to some other cases in the current literature on the subject in which some of the central singularities are globally naked too.

04.20.-q, 04.20.Cv

Cosmic censorship conjecture in some matching spherical collapsing metrics
Ramon Lapiedra and Juan Antonio Morales-Lladosa
December 30, 2023
===========================================================================

§ INTRODUCTION

Spherical inhomogeneous dust collapse has been extensively studied in the past, paying special attention to the final stages of the evolutionary process. Behind these studies there usually exists an extra motivation: to confront the validity of the Penrose conjecture <cit.> (censoring the nakedness of essential space-time singularities) with the singularities developed in specific collapsing situations. For a select set of pioneering works in this context, see, for instance, Refs. <cit.>, which paved the way to delimiting the hypotheses which ensure the validity of the aforementioned Penrose conjecture.

Recently, some spherically symmetric collapsing metrics have been considered in Ref. <cit.> (see also the work in Refs. <cit.> and <cit.>), in order to show that some of their central singularities can violate the Penrose conjecture <cit.>. In other words, these central singularities could be, against the Penrose conjecture, global naked singularities, i.e., they could be seen from future null infinity. See, for instance, Ref. <cit.>, Section 4, for a distinction between local and global naked singularities. Here, we are only concerned with the possible existence of global naked singularities. For some authors (see, for instance, Ref. <cit.>), to elucidate whether there are naked singularities in nature or not is important since, in the affirmative case, observing them could give us some clues about how to change the theory of General Relativity in order to avoid these singularities, leading to a modification of the theory, quantum or not.

In the present paper, we consider the marginally bound case of the dust Lemaître-Tolman-Bondi (LTB) family of solutions of the Einstein equations <cit.> (see also Refs. <cit.>). We will choose a subfamily made of the particular solutions satisfying the Lichnerowicz matching conditions <cit.> with the exterior Schwarzschild metric at the collapsing star[Throughout the paper the term “star” refers to any uncharged, spherical, non-rotating, finite mass cloud.] surface. That is, in an admissible[Following Ref.
<cit.>, the term “admissible” designates a coordinate system of a C^2 class (atlas) manifold structure describing the space-time.] coordinate system, the metric is assumed to be of class C^1; i.e., the metric and its first derivatives are assumed to be continuous across this surface. Perhaps this condition is not always a physically realistic one but, in our opinion, it could be worth exploring its consequences, as we do in the present case.

Furthermore, we impose the physical condition that these metrics have a finite stationary intrinsic energy (see Appendix <ref>), and finally, for the sake of simplicity, we choose a simple metric of this particular subfamily of metrics. Hereafter, we name this chosen metric the ξ-metric, for reasons that will appear later, when we introduce the ξ parameter in Sec. <ref>. Our main result is that for this ξ-metric, and for some parameter values, the intrinsic central singularity is a globally naked singularity; that is, given a 3-space direction, one outgoing radial null geodesic (or perhaps a pencil of such geodesics) leaves this singularity and reaches future null infinity.

On the other hand, in Ref. <cit.>, the authors raise the following question: “Could it be that the initial distributions which lead to naked singularities are not astrophysically realizable?” Thus, our result suggests that such distributions are astrophysically realizable. Some previous results in Refs. <cit.>, <cit.> and <cit.>, for some marginally bound LTB metrics, seem to support the same suggestion although, differently from our case, all but one of the metrics leading to these previous results do not fulfill all the C^1 matching requirements across the star boundary. Thus, we can confirm that the cosmic censorship conjecture would become violated.

The outline of the paper is as follows. In Sec. <ref> we obtain the ξ-metric, an LTB marginally bound solution obeying the C^1 matching conditions with vanishing intrinsic energy. Section <ref> revisits a sufficient condition for the global nakedness of the central singularity. In Sec. <ref>, we prove that this ξ-metric fulfills, for some parameter values, this sufficient condition and, in Sec. <ref>, we analyze numerically this global nakedness with the help of Mathematica. The last section, Sec. <ref>, is devoted to final considerations. Detailed calculations concerning the intrinsic energy of the ξ-metric have been included in Appendix <ref>. The causal character of the apparent horizon of this metric is analyzed in Appendix <ref>.

We take G=c=1 for the gravitational constant and the speed of light.

§ MATCHING THE LTB MARGINALLY BOUND COLLAPSE

As is well known, when referred to Gauss coordinates adapted to the spherical symmetry, in the marginally bound case the metric element of the dust LTB metrics can be written <cit.> (signature +2)

ds^2 =-dτ^2 + A'^2 dρ^2 +A^2 (dθ^2 + sin^2 θdϕ^2),

with A=A(τ, ρ) and A' ≡∂_ρ A. The general expression for A, the solution of the Einstein field equations, is

A(τ, ρ) = (9/2 M)^1/3 (τ - ψ)^2/3,

where M=M(ρ) and ψ=ψ(ρ) are two arbitrary functions of ρ, M(ρ) representing the enclosed partial mass in the sphere of radius ρ and ψ(ρ) representing the singular time τ for the ρ shell. The regular coordinate ranges are -∞ < τ < ψ(ρ), 0 ≤ρ < ∞, 0 < θ < π, and 0 ≤ϕ≤ 2π. The 2-surface τ = ρ = 0, with variable θ and ϕ, will be referred to as the central singularity. We can supplement Eq. (<ref>) with the particular Einstein field equation

4 πμ (τ, ρ) = M'/A^2 A', M' ≡dM/dρ,

relating the energy density source, μ, to the metric.
Let us take the commonly used scale A(0,ρ) = ρ (see, for instance, Refs. <cit.>; <cit.>, p. 245; and <cit.>, p. 17). This leads to

ψ = 2/3ρ^3/2/√(2M).

In this gauge, the C^1 matching conditions with the exterior Schwarzschild metric [see next Eq. (<ref>)] through the star surface, say ρ = λ, M(ρ≥λ) = m = const., are, in the usual Hadamard notation <cit.>,

[M]=[M']=[M''] = 0.

For a detailed proof of this result, see Ref. <cit.>. Notice that in Eq. (<ref>), besides A, its first derivative A' appears. As a result, the C^1 matching conditions involve the second derivative A'' too, which leads finally to the last condition of (<ref>), i.e., [M''] = 0. A simple solution of Eq. (<ref>) is[Solution (<ref>) is an especially simple case inside a large family of LTB metrics satisfying Eq. (<ref>). See next Eq. (<ref>).]

M(ρ) = m - m (1- ρ^2/λ^2)^3, ρ≤λ; m, ρ≥λ.

With this solution we will build, through (<ref>)–(<ref>), what we have called in the Introduction the ξ-metric. Further, this metric has, as due, a stationary and finite intrinsic energy, as shown in Appendix <ref>. Notice that the ξ-metric source is a spherical finite mass, regularly distributed before the eventual collapse. Further, this physical system neither expels nor accretes any mass, and radiates neither electromagnetically nor gravitationally. Then, any meaningful kind of energy we can ascribe to it has to be actually stationary and finite, as we have demanded, irrespective of how much we approach the physical singularity.

However, before arriving at the basic result of the present section, notice, to begin with, that the expression (<ref>) can be written for the Schwarzschild solution like

A = r = (9m/2)^1/3 (τ - ψ)^2/3, ψ = 2/3ρ^3/2/√(2m),

with r the standard static radial coordinate and m the Schwarzschild mass parameter. Then, the generic singularity event, τ = ψ(ρ), will be visible from outside the star if the leaving photon arrives at the star surface, ρ = λ, at a time τ_λ whose corresponding r value given by Eq. (<ref>) is such that r > 2m. This condition gives for τ_λ the inequality

τ_λ < 2/3λ^3/2/√(2m) - 4m/3 = ψ(λ) - 4m/3 = τ_h(λ),

with

τ_h(ρ) = ψ(ρ) - 4M/3,

which is called the apparent horizon of the metric (<ref>) (see Ref. <cit.>) and is implicitly defined by A(τ_h(ρ), ρ) = 2M(ρ).

In other words, a radial outgoing null geodesic leaving the generic singular event τ = ψ(ρ) could only be seen from future null infinity if its corresponding photon actually arrives at the star surface and then if its arrival time, τ_λ, at this surface satisfies the inequality (<ref>), τ_λ < τ_h(λ). Nevertheless, for the metric given by (<ref>) and (<ref>), there is a well-known result (see, for instance, Ref. <cit.>, p. 332, at the beginning of Sec. 18.14, and reference 17 in Ref. <cit.>), according to which, if M' > 0, all these singularities, other than the central one, are not visible from this future null infinity. That is, all these singularities are dressed ones. Thus, we will concentrate on the possible global nakedness of the remaining singularity, the central one, of our ξ-metric, and then, in the final section, we will compare our result with some well-known results of the present literature on the subject.

§ SUFFICIENT CONDITION FOR THE GLOBAL NAKEDNESS OF THE CENTRAL SINGULARITY

In Ref. <cit.>, Eq. (26), a sufficient condition for the global visibility of the central singularity,

ψ'/M' > 1/3(26 + 15 √(3)), M'>0, ∀ρ∈(0, λ),

is given for the case of a marginally bound dust LTB metric.
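As a quick symbolic cross-check of the matching conditions above for the chosen mass function, one can verify that the three Hadamard jumps vanish at ρ=λ; the following is a minimal sketch, assuming Python with sympy (the names are ours):

```python
# Symbolic check of the C^1 matching conditions [M] = [M'] = [M''] = 0
# at the star surface rho = lam, for M(rho) = m - m(1 - rho^2/lam^2)^3.
import sympy as sp

rho, m, lam = sp.symbols('rho m lam', positive=True)

M_in = m - m * (1 - rho**2 / lam**2)**3   # interior mass function
M_out = m                                 # exterior (Schwarzschild) value

for order in range(3):
    jump = sp.diff(M_out - M_in, rho, order).subs(rho, lam)
    print(order, sp.simplify(jump))       # prints 0, 0, 0

# The constant in the sufficient condition: (1/3)(2 + sqrt(3))^3 = (26 + 15*sqrt(3))/3
print(sp.expand((2 + sp.sqrt(3))**3) / 3)
```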
Notice that the present notation is different from the one used in Ref. <cit.>. The justification of the above inequality concerns the behavior of the radial null geodesics across the region A > 2M, outside the apparent horizon. To make our discussion self-contained, we next give our version of this justification.

§.§ Null geodesics from the center

To begin with, the general equation for the radial outgoing null geodesics, (τ_g(ρ), ρ), for the metric (<ref>) is

dτ_g/dρ = A',

where A', having in mind Eq. (<ref>), becomes

A' = 1/3M'/MA + √(2M/A) ψ'.

In the region A > 2M, let us consider the k-lines implicitly defined by the condition

A(τ_k(ρ), ρ) = k M(ρ), k > 2,

which from Eq. (<ref>) is equivalent to

τ_k(ρ) = ψ(ρ) - (k/3)√(2k) M(ρ), with k>2.

The slope of these lines, τ_k'(ρ) = ψ'(ρ) - (k/3)√(2k) M'(ρ), may be compared with the slope of the outgoing radial null geodesics, τ_g' = A', on the events (τ, ρ) where both families of lines, τ_k(ρ) and τ_g(ρ), intersect. Notice that these intersection events could always exist since they can always be considered the initial condition of a corresponding unique outgoing radial geodesic. Thus, taking A=kM in Eq. (<ref>), we have for these intersecting events:

τ_g'(ρ)_|_kM ≡ A'(τ_k(ρ), ρ) = √(2/k) ψ'(ρ) + k/3 M'(ρ).

Then, a sufficient condition for this geodesic to escape to null infinity is that, for all these intersection events of the geodesic lines, τ_g, with some τ_k>2 line, with the ρ values belonging to the (0, λ) interval, we have the following inequality:

τ_g'(ρ)_|_kM < τ_k' (ρ), k > 2.

In fact, from (<ref>), τ'_k>2 < τ'_k=2 = τ'_h, and (<ref>) implies τ'_g (ρ)_|_kM < τ'_h (ρ) in this interval. Then, in particular, the photon arrives at the star surface at a time τ_λ which satisfies (<ref>). Consequently, the photon escapes to null infinity. It remains to prove that such a geodesic starts from the central singularity when Eq. (<ref>) holds.

§.§ Null geodesics from the central singularity

From Eqs. (<ref>) and (<ref>), the sufficient condition (<ref>) is equivalent to

ψ'/M' > (k/3) (1+ √(2k))/(1 - √(2/k)) ≡ f(k), k>2,

which coincides with Eq. (25) in Ref. <cit.>, once the corresponding change in notation is taken into account.

Let us be more precise. Actually, the fulfillment of condition (<ref>) for all ρ∈ (0, λ) implies that the corresponding k-line has to be timelike. One can easily arrive at this conclusion by simply drawing the forward outgoing light cone in the assumed intersection event of τ_g with τ_k>2. These timelike lines actually exist because, from Appendix <ref>, for each k>2, the corresponding k-line is timelike [as it was implied by (<ref>)], provided that Eq. (<ref>) be satisfied for all ρ∈ (0, λ).

For k>2, the function f(k) has a global minimum at k = k_m = 2 + √(3), the value of which is f(k_m)= 1/3(2 + √(3))^3 = 1/3 (26 + 15 √(3)), according to Eq. (26) in Ref. <cit.>. In fact, it is easy to verify that f''(k_m) >0. In particular, the sufficient condition (<ref>) will be minimally demanding for k=k_m. Then, henceforth, we will put f(k_m) in (<ref>); that is, we will demand (<ref>). From this assumption and the above considerations, the following general statement (cf. Ref.
<cit.> and references quoted therein) can be proved: For any marginally bound LTB metric (<ref>) satisfying the inequality (<ref>), with M and M' positive functions in the vicinity of ρ = 0 and M(0) = M'(0) = 0, there could exist a pencil of radial null geodesics which come from the central singularity and escape from the star.

Let us prove this result step by step:

(i) The family of lines (<ref>) intersects the central singularity, τ_k (0) = ψ(0), because M(ρ) goes to zero when ρ→ 0.

(ii) In the vicinity of ρ =0, the slope of every line τ_k (k>2) remains larger than the corresponding slope τ'_g(ρ)_|kM for the outgoing radial null geodesic (compare (<ref>) and (<ref>), keeping in mind that M'(0) = 0).

(iii) Moreover, taking into account Eq. (<ref>), the smoothness of the functions involved in Eq. (<ref>) guarantees that an open elementary interval around k_m, (k_m-ϵ, k_m + ϵ), exists such that Eq. (<ref>) is satisfied, that is,

τ'_g(ρ)_|_l M < τ'_l (ρ) ∀ l ∈ (k_m-ϵ, k_m + ϵ), ∀ρ∈(0, λ).

(iv) Then, let us consider any one of the k-lines, k=l, and any one of the events on it; let us say the event corresponding to ρ = ρ_1. Further, given a direction θ, ϕ, consider the unique virtual null outgoing geodesic, say τ_g(ρ)_|_l, passing through this ρ_1 event. Assume that this virtual geodesic actually exists from ρ = 0. Can this geodesic remain over τ_l(ρ) when ρ goes to zero? No, it cannot, since (τ=0,ρ=0) is the essential central singularity, such that events with ρ=0 and τ > 0 are forbidden. Could then the geodesic run, for ρ going to zero, the opposite way, that is, start from ρ =0 below the l-line, τ_l(ρ)? No, since in order to arrive at Eq. (<ref>) for ρ= ρ_1 we should have, contrary to Eq. (<ref>), τ'_g(ρ)_|_l M > τ'_l (ρ) for some ρ = ρ_2 < ρ_1. But, as remarked above in the present section, referring to Appendix <ref>, the l-line is timelike. Thus, simply drawing the corresponding outgoing light cone for ρ_2, one becomes convinced that the last inequality is impossible. In all, the outgoing radial l-geodesics, τ_g(ρ)_|_l, start from the central essential singularity.

Therefore, a pencil of photons, one photon for each one of the above corresponding l and ρ_1 values, would exist and would be emitted from the central singularity and would remain always out of the apparent horizon A = 2M and, consequently, it could be detected outside the star. On the contrary, no such pencil can be present when we consider the light leaving the central regular events (τ <0, ρ = 0), since, given a direction (θ, ϕ), there is a unique radial null geodesic leaving any regular event. Then, although leaving a door open to the actual existence of that photon pencil leaving the central singularity, we must admit that such a pencil could be the result of having assumed the actual existence of some virtual photons.

Notice that, in mathematical terminology, Eq. (<ref>), together with the algebraic conditions τ_l(0) = τ_h(0) and τ_l(ρ) < τ_h(ρ) for all ρ∈ (0,λ), says that the lines τ_l(ρ) are subhorizon supersolutions of Eq. (<ref>), whose existence is equivalent to the global naked character of the central singularity (see Ref. <cit.>, Theorem 2.5). We have then just proven that the lines τ_l(ρ) form a set of subhorizon supersolutions of Eq. (<ref>).

§ PROVING THAT THE CENTRAL SINGULARITY OF THE Ξ-METRIC IS GLOBALLY NAKED FOR SOME Ξ VALUES

In the present section, we will show, for some parameter values, that the central singularity τ = ψ(ρ = 0)=0 for the ξ-metric (see Sec.
<ref>) is a global naked singularity, in accordance with a similar result from Ref. <cit.>. Our result will be obtained numerically in the next section, and also by applying the sufficient condition (<ref>), according to Ref. <cit.>, in the present section. However, it cannot be obtained from Ref. <cit.> by going to the limiting case where the 3-space curvature vanishes, since this limit does not allow us to recover our ξ-metric.

Using inequality (<ref>), the authors of Ref. <cit.> prove the existence of four metrics with a global naked central singularity for four different functions M(ρ), Eqs. (28), (33), (38), and (43), respectively, of Sec. V of Ref. <cit.>. But these M functions do not fulfill the last condition of (<ref>), [M''] = 0, and then do not fulfill all the corresponding C^1 matching conditions across the star boundary, ρ = λ. Could this non-fulfillment be the reason for the nakedness, and so the reason for the corresponding violation of the cosmic censorship conjecture? The answer is negative, since we are going to see that our ξ-metric, which satisfies all conditions (<ref>), has a central global naked singularity for large enough values of the parameter ξ≡λ / 2m.

Let us have in mind Eqs. (<ref>) and (<ref>) for ρ≤λ. In terms of the dimensionless variable x=ρ/λ∈ [0, 1], the mass function and the singularity time lines are given by

M(x)/m = x^2 (x^4 -3x^2 +3) ≡ x^2 P(x)

and

τ(x)/m = ψ(x)/m = 4/3 ξ^3/2 √(x/P(x)),

respectively, where P(x) ≡ x^4-3x^2+3. Thus, taking into account (<ref>) and (<ref>), the inequality (<ref>) becomes

ξ > [3f(k)]^2/3 F(x),

where the function on the right-hand side is

F(x) ≡ x P(x) ((1-x^2)^2/(1+ x^2 - x^4))^2/3,

which has a maximum value F_max ≈ 0.74 at x ≈ 0.4 (see Fig. <ref>). Thus, Eq. (<ref>) is the expression of the sufficient condition (<ref>) for the ξ-metric. In particular, for k=k_m, from Eqs. (<ref>) and (<ref>) we obtain

ξ > (2 + √(3))^2 F(x).

Then, for any value of ξ larger than (2 + √(3))^2 F_max ≈ 10.33, the corresponding ξ-metric has a central global naked singularity. As discussed at the end of Sec. <ref>, this naked singularity leads, for each central direction, to a unique photon escaping to infinity, or even to a pencil of them. On the other hand, this value F_max ≈ 0.74 provides the threshold value of the ξ parameter from which the apparent horizon of the metric is everywhere spacelike for all x ∈ (0,1). For a detailed proof of this statement see Appendix <ref>.

The above four M functions of Ref. <cit.> can be specified in terms of local expansions in ρ near ρ = 0: the first three M functions go like ρ^3, for small ρ values, and the fourth one goes like ρ, while our M function, Eq. (<ref>), goes like ρ^2. But in Ref. <cit.>, a fifth case for M is considered (see Eq. (47) in Ref. <cit.>), this one also leading to a central global naked singularity and, although it is not mentioned in Ref. <cit.>, the corresponding metric fulfills the C^1 matching conditions. Furthermore, its intrinsic energy is finite and stationary (actually, it vanishes), as it must be according to the comment in the paragraph that follows Eq. (<ref>) in Sec. <ref>. The intrinsic energy of our ξ-metric vanishes too, because in this case, as we have noted, M ∼ρ^2 for ρ→ 0 (see Appendix <ref>). A similar vanishing in the fifth M case of Ref. <cit.> comes a fortiori from the fact that now M ∼ρ^3 for ρ→ 0. Actually, the mass function (<ref>) of the ξ-metric is slightly greater than the mass given by Eq. (47) in Ref. <cit.> (see Fig. <ref>).
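The numerical values quoted here are easy to reproduce; a minimal sketch, assuming Python with numpy (the grid resolution is an arbitrary choice of ours):

```python
# Numerical evaluation of F(x), its maximum F_max, and the sufficient-condition
# threshold (2 + sqrt(3))^2 * F_max ~ 10.33.
import numpy as np

x = np.linspace(1e-6, 1.0 - 1e-6, 200001)
P = x**4 - 3 * x**2 + 3
F = x * P * ((1 - x**2)**2 / (1 + x**2 - x**4))**(2.0 / 3.0)

i = np.argmax(F)
print("F_max ~", F[i], "at x ~", x[i])              # ~0.74 at x ~ 0.4
print("threshold:", (2 + np.sqrt(3))**2 * F[i])      # ~10.33
```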
In all, the Penrose cosmic censorship conjecture becomes violated at least for two plausible metrics (C^1 matched and with a finite, stationary intrinsic energy) belonging to the marginally bound dust LTB family, one of these two metrics having already been proposed in Ref. <cit.>, although the authors had not noticed that the proposed metric was a C^1 matching metric with a finite, stationary, vanishing intrinsic energy.

§ SHOWING BY NUMERICAL CALCULATION THAT THE CENTRAL SINGULARITY OF THE Ξ-METRIC IS GLOBALLY NAKED FOR SOME Ξ VALUES

Inequality (<ref>) is a sufficient condition for central global nakedness, but not a necessary condition. Thus, with the help of Mathematica, we numerically calculate some of the outgoing central null geodesics of our ξ-metric for different values of the ξ parameter. We will show the existence of this nakedness for ξ values lower than the above value (2 + √(3))^2 F_max ≈ 10.33.

The outgoing radial null geodesics (x, y(x)) of the marginally bound LTB metric (<ref>) are the solution of the ordinary differential equation[To perform numerical integration and graphic representation, normalized variables (x, y)=(ρ/λ, τ/m) are used for convenience. Note the irrelevant, but graphically convenient, order change with respect to the starting (τ, ρ) coordinates.]

y'(x) = 1/m A'(y(x), x),

where y(x) ≡τ_g(x)/m and now the prime stands for the derivative of y with respect to x. Here, since we are dealing with the specific case of the ξ-metric, we must use Eq. (<ref>) with M given by Eq. (<ref>) and ψ by Eq. (<ref>), in the interior of the star,[Notice that, inside the star, ψ and M are both increasing functions. Then, Eq. (<ref>) implies that A' is always positive. Consequently, shell-crossing singularities (see Ref. <cit.>, p. 321) will not occur during the collapse. In addition, the proper energy density μ≡μ(τ, ρ), that is, according to Eq. (<ref>), 4 πμ = M'/(A^2 A'), is everywhere regular (except for the essential singularity A=0). These properties could reinforce the belief in the goodness of the ξ-metric.] and then the second member of (<ref>) has the expression

1/m A'(y, x)=(2/P)^2/3[ 3^2/3(1-x^2)^2/x^1/3(4/3ξ^3/2√(x/P)-y)^2/3 +2/√(3)ξ^3/2x^1/6 (1+ x^2 - x^4)/√(P)(4/3ξ^3/2√(x/P)-y)^1/3],

where y ≡ y(x) and P ≡ P(x) ≡ x^4 -3x^2 +3. On the other hand, substitution of Eqs. (<ref>) and (<ref>) in Eq. (<ref>) gives the equation of the k-lines for the ξ-metric:

τ_k(x)/m = 4/3 ξ^3/2 √(x/P) - (k/3) √(2k) x^2 P.

By considering appropriate initial conditions, the integration of Eq. (<ref>), with the second member given by Eq. (<ref>), will be carried out with Mathematica. The figures of this section show, in an (x, y) diagram, the resulting null geodesics coloured in red, and also some representative k-lines: the time singularity (k=0) in black, the apparent horizon (k=2) in green, and the k=2+√(3) line in blue.

§.§ Sufficient condition ξ≥ 10.33 for global nakedness

For the particular value ξ =10.33, some of these geodesics have been drawn in Fig. <ref>, using, as mentioned above, Mathematica. Specifically, we have considered four of them, corresponding to the initial conditions (1, 30), (1, 34), (1, 38), and (1, 42). Then, in accordance with what has been mentioned at the end of Sec. <ref>, there would be an actual or virtual pencil of outgoing radial null geodesics emanating from the central singularity and escaping outside the star to infinity since, in accordance with Eq. (<ref>), the corresponding geodesic times τ_g(λ) are lower than the horizon time τ_h(λ) (see Fig. <ref>).
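This numerical experiment can also be reproduced with standard ODE solvers instead of Mathematica. The following is a minimal sketch assuming Python with scipy (the names are ours); it integrates Eq. (<ref>) backwards from the star surface, using the equivalent form A' = (1/3)(M'/M)A + √(2M/A)ψ' of Eq. (<ref>) in the normalized variables; the tolerances may need tuning near the singular center:

```python
# Backward integration of the outgoing radial null geodesics y'(x) = A'(y, x)/m
# for the xi-metric, in normalized variables x = rho/lambda, y = tau/m.
import numpy as np
from scipy.integrate import solve_ivp

xi = 10.33

def P(x):    return x**4 - 3 * x**2 + 3
def M(x):    return x**2 * P(x)                     # M/m
def dM(x):   return 6 * x * (1 - x**2)**2           # d(M/m)/dx
def psi(x):  return (4.0 / 3.0) * xi**1.5 * np.sqrt(x / P(x))   # psi/m
def dpsi(x): return 2 * xi**1.5 * (1 + x**2 - x**4) / (np.sqrt(x) * P(x)**1.5)

def rhs(x, y):
    A = (4.5 * M(x))**(1.0 / 3.0) * (psi(x) - y)**(2.0 / 3.0)   # A/m
    return dM(x) / (3 * M(x)) * A + np.sqrt(2 * M(x) / A) * dpsi(x)

tau_h = psi(1.0) - 4.0 / 3.0             # horizon time at the surface (~42.9 here)
for y1 in (30.0, 34.0, 38.0, 42.0):      # the initial conditions quoted above
    sol = solve_ivp(rhs, (1.0, 1e-4), [y1], rtol=1e-10, atol=1e-12)
    print(y1, "< tau_h ~", round(tau_h, 2), "; y(1e-4) ~", sol.y[0, -1])
```

A geodesic with y(1) below tau_h that traces back to y → 0 as x → 0 emanates from the central singularity and escapes, reproducing the behavior shown in the figure.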
It is to be noticed that, according to Mathematica, in the overlapping region x ≲ 0.1, the geodesic lines have been actually calculated (without extrapolation) down to at least x ≈ 10^-4. From this figure, looking at the kind of intersection with the null geodesics (crossing from above or the opposite way), it is easily concluded that, for the considered ξ-value (ξ =10.33), the apparent horizon is spacelike and that the k=2 +√(3) line is timelike, in accordance with the results obtained in Appendix <ref>.

§.§ Threshold value ξ≈ 4.5 for global nakedness

In a similar way, with the help of Mathematica, we can find the ξ values for which the central singularity becomes dressed, that is, non-globally naked. For the particular value ξ = 1, one has τ_h(λ) =0 and every null geodesic, if any, starting from the center ρ = 0 at τ = 0 cannot reach the exterior region of the star, and then the central singularity is, indeed, dressed (see Fig. <ref>). Notice how the radial null geodesics leaving x=0 before y=0 finish their run at the intrinsic singularity time, such that the closer the initial value of y gets to zero, the faster the geodesic runs into the singularity time. From Fig. <ref>, the kind of intersection of these geodesics with the apparent horizon line makes it evident, in this case, that this line is spacelike.

The same conclusion follows by taking ξ = 2, 3, 4. Some outgoing radial null geodesics are plotted in Fig. <ref> for ξ =4, from which one sees that, when these null geodesics τ_g(ρ) approach more and more the one leaving ρ = 0 at τ = 0, their corresponding τ_g(λ) values approach the τ_h(λ) value from below until, beyond a certain degree of approach, τ_g(λ) becomes larger than τ_h(λ). As a result, the corresponding photons leaving the central singularity cannot reach the exterior of the star. Further, for all the above cases with ξ < 4, we obtain that the central singularity is dressed too.

Nevertheless, going ahead with the numerical integration of Eq. (<ref>) with Eq. (<ref>), one can see that for ξ = 4.5 and ξ =5, 6, ... the central singularity becomes globally naked. For ξ =4.5 (see Fig. <ref>) four representative geodesics are displayed after numerical integration of Eq. (<ref>), considering the initial conditions (1, 11), (1, 10), (1,9), and (1,8). The upper geodesic is the one that corresponds to the initial condition (1, 11). This geodesic comes from the central singularity and escapes out of the star. Then, for this value ξ=4.5, the central singularity becomes globally naked. A trial-and-error treatment of the cases 4 < ξ <5 leads to the following result: for the ξ-metric, there is a threshold ξ value, say ξ_0 ≈ 4.5, from which the central singularity becomes globally naked.

The ξ-parameter, ξ =λ /2m, is related to the proper time, ψ(λ), at which the collapsing star surface reaches the essential singularity. This follows from Eq. (<ref>) by taking ρ = λ and M(λ) = m,

ψ(λ) = 2/3 λ^3/2/√(2m) = 4/3 m ξ^3/2.

Then, we have ξ = (3 τ_λ/4m)^2/3, where τ_λ = ψ (λ) is the proper time duration of the collapse from the formation, τ=0, of the central singularity.

To end this section, we would remark that our analysis has been concentrated on the behavior of outgoing radial null geodesics. The reason for this self-limitation is that, for marginally bound collapse, a singularity is censored if it is radially censored (see Ref. <cit.>, Proposition 8).
On the other hand, we have just constructed a ξ-metric model with a set of natural physical and mathematical requirements, and we have checked numerically the global naked character of the central singularity of the model. For an analytical and rigorous treatment of the behavior of the radial null geodesics in the vicinity of the singular point attached to a marginally bound dust collapse scenario, see Ref. <cit.>.

§ FINAL CONSIDERATIONS

In accordance with our result of the previous section showing that our ξ-metric has a global naked central singularity, the metrics used in Refs. <cit.>, <cit.>, and <cit.> for the dust spherical collapsing case have global central naked singularities too. But these other metrics, except one, do not satisfy all the C^1 matching requirements (actually, they do not satisfy the condition [M''] = 0 of Eq. (<ref>)), whereas the metric with M given in Ref. <cit.>, Eq. (47), and our ξ-metric do satisfy it. Thus, a certain non-matching character of those metrics in Ref. <cit.> cannot be the reason why they violate the Penrose conjecture, since two other (C^1 class) metrics, the ξ-metric plus the one associated to M given by Eq. (47) in Ref. <cit.>, do violate the conjecture.

Incidentally, instead of Eq. (<ref>) we could have chosen any mass function M(ρ) of the large family

M(ρ) = m + ∑_k=3^∞ M_k (1- ρ/λ)^k, ρ≤λ; m, ρ≥λ,

with the sole restriction on the constant coefficients M_k that ∑_k=3^∞ M_k (1- ρ/λ)^k converges for any ρ≤λ. Actually, any of these M(ρ) functions satisfies all the requirements (<ref>). Furthermore, these coefficients should guarantee the physical condition M > 0, M' ≥ 0, ∀ρ >0, and even more that M ∼ρ^n, n ≥ 2, for ρ≪λ, in order that the intrinsic energy of the corresponding metric vanishes, in accordance with what is explained in Appendix <ref> for the ξ-metric. Future work could confirm that Eq. (<ref>), with the supplementary conditions for the M_k coefficients, leads to marginally bound collapsing LTB metrics with their central singularities being globally naked for some ξ parameter values. For the time being, we have easily proven this statement for the interesting particular case of the ξ-metric.

Further, we remark that the present paper's calculations have been performed in the particular gauge A(0, ρ) = ρ (see Sec. <ref>), largely used in the literature. However, our main result, that a null geodesic, or a pencil of null geodesics, leaving the central singularity of the ξ-metric escapes to future null infinity, is a covariant one, and therefore gauge independent. The same can be said of similar results in the above-cited references.

Finally, there is a line of thinking, which can be traced back to Penrose <cit.>, according to which the naked singularities found in the spherically symmetric dust case, like the ones found in the present paper, would be mere artifacts due to the oversimplified case considered. However, this objection cannot be maintained, since the present literature on the subject shows many cases in which naked singularities persist when pressure is added to the initial dust case, and the same literature shows other cases of this persistence when the spherical symmetry is perturbed (see, for instance, Refs. <cit.> concerning the first cases and Refs. <cit.> concerning the second ones).

This work was supported by the Spanish Ministerio de Economía y Competitividad and the Fondo Europeo de Desarrollo Regional MINECO-FEDER Project No. FIS2015-64552-P.
§ INTRINSIC ENERGY OF THE Ξ-METRIC

Expressed as a 3-volume integral, the ADM energy <cit.> (see also Ref. <cit.>), P^0, becomes

P^0 = 1/8π∫∂/∂ρ_i (∂_j g_ij - ∂_i g) dρ_1 dρ_2 dρ_3,

with i, j = 1, 2, 3, g ≡δ^ij g_ij, G = c = 1, and ρ_i the rectilinear coordinates associated to (ρ, θ, ϕ), g_ij being the 3-space metric components. In accordance with the more general situation considered elsewhere <cit.>, in the particular case of our ξ-metric, P^0 becomes

P^0 = 1/8π∫∂_i [(A - ρ A')^2 n_i/ρ^3] dρ_1 dρ_2 dρ_3, n_i = ρ_i/ρ,

which we call here its intrinsic energy, since the metric is expressed in Gauss comoving coordinates adapted to the spherical symmetry, at rest at spatial infinity, and we call these coordinates intrinsic coordinates <cit.>. Then, since the integrand in (<ref>) is regular enough (it is continuous everywhere, except for ρ = 0), we can apply the Gauss theorem to the corresponding 3-volume integral and express it as a 2-surface integral on the boundary. More specifically, this boundary will be made of two 2-surfaces, ρ = + ∞ and ρ = ϵ >0, where ϵ is a positive infinitesimal quantity. Then, we will take the limit ϵ→ 0. So, we will have

P^0=P^0_∞ + lim_ϵ→ 0 P^0_ϵ,

with

P^0_∞ = lim_ρ→ + ∞ 1/8 π∫_S_ρ Q cosθ d θ d ϕ = 1/2 lim_ρ→ + ∞ Q,

where the double integral is calculated on the 2-sphere of radius ρ, S_ρ, and

P^0_ϵ = - 1/2 Q|_ρ = ϵ,

and where

Q ≡ 1/ρ (A - ρ A')^2.

To calculate the limit (<ref>) easily, notice that for ρ > λ (and so for ρ→∞) our ξ-metric is the Schwarzschild metric, that is, Eq. (<ref>) with Eq. (<ref>) given by Eq. (<ref>). Then an easy calculation gives for ρ > λ

Q = (9m/2)^2/3 τ^2/ρ(τ - 2/3ρ^3/2/√(2m))^2/3,

the limit of which for ρ→∞ and τ fixed vanishes. Notice that we cannot put there τ≥ψ (ρ = λ), since for this value of ρ the outer spherical shell of the star has just reached its own singularity and we no longer have a classical object ruled by General Relativity. The same is partially true for τ≥ψ (ρ = 0). In all, the contribution P^0_∞ to the total P^0 vanishes and we are left with the other contribution, lim_ϵ→ 0 P^0_ϵ. Let us calculate it. First, according to (<ref>) and (<ref>), we can write Q as

Q = ρ(9M/2(τ - ψ))^2/3 [(1/ρ - 1/3 M'/M) (τ - ψ) + 2/3 ψ' ]^2.

Then, we are going to calculate P^0 for τ < ψ(0) since, as already mentioned, for τ = ψ(0) the inner spherical shell of the star reaches the intrinsic singularity and full General Relativity begins to be not completely valid. Thus, in order to calculate lim_ρ→ 0 Q, we only have to study how the functions M, M', and ψ', present in Eq. (<ref>), behave in this limit. But from (<ref>) and (<ref>) it is easy to see that for ρ/λ≪ 1 the function M goes like M ∼ρ^2 and consequently ψ∼ρ^1/2. This entails the vanishing of lim_ρ→ 0 Q. In all, both contributions to P^0, present in Eq. (<ref>), vanish, and then P^0 vanishes too, which means that P^0, the intrinsic energy of the ξ-metric, is stationary and finite, as is physically required.

§ CAUSAL CHARACTER OF THE LINES A=KM

In this appendix, we consider the LTB marginally bound metric (<ref>) and analyze the causal character of the one-parameter family of radial lines (τ_k(ρ), ρ, θ = const, ϕ = const.), implicitly given by A(τ_k(ρ), ρ) = k M(ρ), where k is a positive real parameter. The performed analysis is model independent in the sense that it applies for arbitrary positive increasing functions ψ(ρ) and M(ρ).
For each k-line, the square v_k^2 ≡ g_μν v_k^μ v_k^ν of the tangent vector, v_k^μ = (τ'_k(ρ), 1, 0, 0) (Greek indices running from 0 to 1), is

v_k^2= (2/k-1 ) ψ'^2 + 2/3√(2k) (k+ 1) ψ' M' + k^2/9(1-2k) M'^2

and can be written in the suitable form

v_k^2 = M'^2 P_k(β),

with

P_k(β) = (2/k-1 ) β^2 + 2/3√(2k) (k + 1) β + k^2/9(1-2k),

and where β is a function of ρ∈(0,λ) given by the ratio of the derivatives of the free functions of the metric (<ref>),

β≡β(ρ) ≡ψ'/M'(ρ).

The particular value k=2 corresponds to the apparent horizon line, in which case (<ref>) becomes linear in β, P_2(β) = 4(β -1/3). The detailed analysis of the causal character of the apparent horizon for the ξ-metric is carried out at the end of this Appendix. Thus, we will concentrate here on a generic value k ≠ 2, for which (<ref>) is a quadratic function of β whose discriminant Δ_k is always positive,

Δ_k = 4 k^2,

saying that P_k(β) has two distinct real roots, which can be written

β_ε = (k/3) (ε + √(2k))/(1 - ε√(2/k)), ε = ± 1,

and then

β_+ - β_- = 2k^2/(k -2).

Notice that, for ε = + 1, the root β_+ is the function f(k) defined in Eq. (<ref>), β_+ = f(k); moreover, from Eq. (<ref>), β_+ - β_- is positive (respectively, negative) if k>2 (respectively, k<2). In addition, β_+ is positive (respectively, negative) for k>2 (respectively, k<2), and it becomes β_+ → + ∞ when k → 2^+. On the other hand, for ε = - 1, the root β_- is positive (respectively, negative) for k>1/2 (respectively, k<1/2), and vanishes for k=1/2; it is finite for k=2, becoming for this value the root β = 1/3 of the linear polynomial P_2(β).

According to this analysis, we conclude the following: for each k >2 (respectively, k<2) and for each ρ∈ (0, λ), the line τ_k (ρ) is timelike if, and only if, β >β_+ or β <β_- (respectively, β_+ < β < β_-), with β_± given by Eq. (<ref>). This line is null for β = β_+ or β = β_-, and it becomes spacelike when β_- < β < β_+ (respectively, β > β_- or β < β_+), where β≡β(ρ) is given by Eq. (<ref>).

Notice that the first member of Eq. (<ref>), ψ'/M', is bounded for every fixed ρ∈(0, λ), but the second member, β_+, diverges when k → 2^+. In fact, for k → 2^+, β < β_+ = + ∞ and β_- → 1/3. Then, if β > 1/3 for all ρ∈ (0, λ), the lines τ_2+ϵ (ρ) are spacelike when ϵ→ 0^+.

The apparent horizon line A=2M has to be considered as a special case: for k=2, Eq. (<ref>) reduces to v_2^2 = 4 M' (ψ' - 1/3 M'). Thus, for ρ≠ 0, λ, the apparent horizon is spacelike, null or timelike if β(ρ) is greater than, equal to, or less than 1/3, respectively.

Finally, we consider the ξ-metric. Differentiating Eqs. (<ref>) and (<ref>), the function β given by Eq. (<ref>) becomes

β(x) = 1/3 (ξ/F(x))^3/2,

where F(x) is given by Eq. (<ref>). Then, using Eq. (<ref>), one can express (for each k value) the above results about the causal character of the lines A=kM in terms of the normalized variable x = ρ/λ and the ξ parameter values. In particular, the following statements directly result from the previous analysis. For the ξ-metric:

(i) The line A = k_m M, k_m = 2 + √(3), is timelike for all x ∈ (0, 1) if ξ > (2 + √(3))^2 F_max ≈ 10.33.

(ii) The apparent horizon is spacelike for all x ∈ (0, 1) if, and only if, ξ > F_max ≈ 0.74. Moreover, for ξ = F_max the apparent horizon line is null at the sole point x = x_0 ≈ 0.4 ∈ (0,1), such that F(x_0) = F_max, being spacelike ∀ x ∈ (0, x_0) ∪ (x_0, 1). Otherwise, for each ξ < F_max there always exist two different values, say x_1 and x_2, with x_1 < x_0 < x_2, where the apparent horizon is null; of course, it is spacelike ∀ x ∈ (0, x_1) ∪ (x_2, 1) and timelike ∀ x ∈ (x_1, x_2).
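The two null points x_1, x_2 in statement (ii) can be located numerically from the condition F(x) = ξ (i.e., β = 1/3); a minimal sketch, assuming Python with scipy, where ξ = 0.5 is an arbitrary illustrative value below F_max:

```python
# For xi < F_max the apparent horizon is null where F(x) = xi; locate the
# two roots x1 < x0 < x2 bracketing the maximum of F.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def F(x):
    P = x**4 - 3 * x**2 + 3
    return x * P * ((1 - x**2)**2 / (1 + x**2 - x**4))**(2.0 / 3.0)

x0 = minimize_scalar(lambda x: -F(x), bounds=(1e-6, 1 - 1e-6), method='bounded').x
print("x0, F_max:", x0, F(x0))          # ~0.4, ~0.74

xi = 0.5                                 # any xi < F_max
x1 = brentq(lambda x: F(x) - xi, 1e-9, x0)
x2 = brentq(lambda x: F(x) - xi, x0, 1 - 1e-9)
print("null points:", x1, x2)            # horizon timelike on (x1, x2)
```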
It is tacitly understood that at x =0 and x=1 the apparent horizon line is always null, whatever the value of the parameter ξ may be.

Penrose-79 R. Penrose, Nuovo Cimento 1, 252 (1969); reprinted with historical comments in Gen. Relativ. Gravit. 34, 1141 (2002). See also R. Penrose, “Singularities and time-asymmetry”, in: General Relativity: an Einstein Centenary Survey, S. W. Hawking and W. Israel (eds.) (Cambridge University Press, Cambridge, England, 1979), p. 581.
Eardley-Smarr-1979 D. M. Eardley and L. Smarr, Phys. Rev. D 19, 2239 (1979).
Christodoulou-84 D. Christodoulou, Commun. Math. Phys. 93, 171 (1984).
Hellaby-Lake-1988 C. Hellaby and K. Lake, “The Singularity of Eardley, Smarr and Christodoulou,” Preprint 88/7, Department of Applied Mathematics, University of Cape Town, 1988.
Joshi-Dwivedi-93 P. S. Joshi and I. H. Dwivedi, Phys. Rev. D 47, 5357 (1993).
Singh-Joshi-96 T. P. Singh and P. S. Joshi, Classical Quantum Gravity 13, 559 (1996).
Jhin-Kau-2014 S. Jhingan and S. Kaushik, Phys. Rev. D 90, 024009 (2014).
Singh-99 T. P. Singh, Journal of Astrophysics and Astronomy 20, 221 (1999).
Lemaitre-1933 G. Lemaître, Ann. Soc. Sci. Bruxelles A 53, 51 (1933); English translation, with historical comments, in Gen. Relativ. Gravit. 29, 637 (1997).
Tolman-1934 R. C. Tolman, Proc. Natl. Acad. Sci. U.S.A. 20, 169 (1934); reprinted, with historical comments, in Gen. Relativ. Gravit. 29, 935 (1997).
Bondi-1947 H. Bondi, Mon. Not. Roy. Astr. Soc. 107, 410 (1947); reprinted, with historical comments, in Gen. Relativ. Gravit. 31, 1783 (1999).
PlebKra J. Plebański and A. Krasiński, An Introduction to General Relativity and Cosmology (Cambridge University Press, Cambridge, England, 2006).
Exact-2003 H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers and E. Herlt, Exact Solutions to Einstein's Field Equations (Cambridge University Press, Cambridge, England, 2003).
Lichnerowicz A. Lichnerowicz, Théories Relativistes de la Gravitation et de l'Électromagnétisme (Masson, Paris, 1955).
Joshi-llibre P. S. Joshi, Global Aspects in Gravitation and Cosmology (Oxford University, New York, 1993).
Joshi-Malafarina-2011 P. S. Joshi and D. Malafarina, Int. J. Mod. Phys. D 20, 2641 (2011). See also arXiv:1201.3660.
Hadamard J. Hadamard, Leçons sur la Propagation des Ondes et les Équations de l'Hydrodynamique (Hermann, Paris, 1903).
LaMo-arXiv-2016 R. Lapiedra and J. A. Morales-Lladosa, “Spherical symmetric parabolic dust collapse: C^1 matching metric with zero intrinsic energy”, arXiv:1608.01253v1 [gr-qc].
Lake-2015 K. Lake, Phys. Rev. D 91, 124036 (2015).
Gi-Gi-Ma-Pi-2003 R. Giambò, F. Giannoni, G. Magli and P. Piccione, Classical Quantum Gravity 20, L75 (2003).
Mena-Nolan F. C. Mena and B. Nolan, Classical Quantum Gravity 18, 4531 (2001).
Giambo-Magli R. Giambò and G. Magli, Diff. Geom. Appl. 18, 285 (2003).
Penrose-1999 R. Penrose, Journal of Astrophysics and Astronomy 20, 233 (1999).
MuYoSe-1974 H. Müller zum Hagen, P. Yodzis, and H.-J. Seifert, Commun. Math. Phys. 37, 29 (1974).
Torres-2012 R. Torres, Classical Quantum Gravity 29, 205016 (2012).
OrPi-1987 A. Ori and T. Piran, Phys. Rev. Lett. 59, 2137 (1987).
Joshi-Krolak-1996 P. S. Joshi and A. Królak, Classical Quantum Gravity 13, 3069 (1996).
IgNaHa-1998 H. Iguchi, K. Nakao, and T. Harada, Phys. Rev. D 57, 7262 (1998).
ADM-1962 R. Arnowitt, S. Deser, and C. W. Misner, Gravitation: An Introduction to Current Research, edited by L. Witten (Wiley, New York, 1962), Chap. 7, p. 227; reprinted in Gen. Relativ. Gravit. 40, 1997 (2008).
Weinberg S. Weinberg, Gravitation and Cosmology (Wiley, New York, 1972).
Lapiedra-Morales-2013 R. Lapiedra and J. A. Morales-Lladosa, Gen. Relativ. Gravit. 45, 1145 (2013).
LaMo-2014 R. Lapiedra and J. A. Morales-Lladosa, Phys. Rev. D 89, 064033 (2014).
http://arxiv.org/abs/1703.09133v1
{ "authors": [ "Ramon Lapiedra", "Juan Antonio Morales-Lladosa" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170327150742", "title": "Cosmic censorship conjecture in some matching spherical collapsing metrics" }
1]Dongdong ChenThis work was done when Dongdong Chen was an intern at MSR Asia. 2]Jing Liao 2]Lu Yuan 1]Nenghai Yu 2]Gang Hua [1]University of Science and Technology of China, cd722522@mail.ustc.edu.cn, ynh@ustc.edu.cn [2]Microsoft Research Asia, {jliao,luyuan,ganghua}@microsoft.com

Coherent Online Video Style Transfer
====================================

Training a feed-forward network for fast neural style transfer of images has proven to be successful. However, the naive extension to process video frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real-time. Two key ideas include an efficient network by incorporating short-term coherence, and propagating short-term coherence to long-term, which ensures the consistency over larger periods of time. Our network can incorporate different image stylization networks. We show that the proposed method clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it can achieve visually comparable coherence to optimization-based video style transfer, but is three orders of magnitude faster in runtime.

§ INTRODUCTION

Inspired by the success of the work of Gatys et al. <cit.> on neural style transfer, there has been a surge of recent works <cit.> addressing the problem of style transfer using deep neural networks. In their approaches, style transfer is formulated as an optimization problem, i.e., starting with white noise, searching for a new image presenting similar neural activations as the content image and similar feature correlations as the style image. Notwithstanding their impressive results, these methods are very slow in runtime due to the heavy iterative optimization process. To mitigate this issue, many works have sought to speed up the transfer by training feed-forward networks <cit.>. Such techniques have been successfully applied to a number of popular apps such as Prisma, Pikazo, DeepArt, etc.

Extending neural style transfer from image to video may produce new and impressive effects, whose appeal is especially strong in short video sharing, live-view effects, and movie entertainment. The approaches discussed above, when naively extended to process each frame of the video one-by-one, often lead to flickering and false discontinuities. This is because the solution of the style transfer task is not stable. For optimization-based methods (e.g., <cit.>), the instability stems from the random initialization and local minima of the style loss function. And for those methods based on feed-forward networks (e.g., <cit.>), a small perturbation in the content images, e.g., lighting, noise and motion, may cause large variations in the stylized results, as shown in fg:motivation. Consequently, it is essential to explore temporal consistency in videos for stable outputs.

Anderson et al. <cit.> and Ruder et al. <cit.> address the problem of flickers in the optimization-based method by introducing optical flow to constrain both the initialization and the loss function.
Although very impressive and smooth stylized video sequences are obtained, their runtime is quite slow (usually several minutes per frame), making them less practical in real-world applications.

In search of a fast and yet stable solution to video style transfer, we present the first feed-forward network leveraging temporal information for video style transfer, which is able to produce consistent and stable stylized video sequences in near real-time. Our network architecture is constituted by a series of identical networks, each of which considers two-frame temporal coherence. The basic network incorporates two sub-networks, namely the flow sub-network and the mask sub-network, into a certain intermediate layer of a pre-trained stylization network (e.g., <cit.>). The flow sub-network, which is motivated by <cit.>, estimates dense feature correspondences between consecutive frames. It helps all consistent points along the motion trajectory be aligned in the feature domain. The mask sub-network identifies the occlusion or motion discontinuity regions. It helps adaptively blend feature maps from previous frames and the current frame to avoid ghosting artifacts. The entire architecture is trained end-to-end, and minimizes a new loss function jointly considering stylization and temporal coherence.

There are two kinds of temporal consistency in videos, as mentioned in <cit.>: long-term consistency and short-term consistency. Long-term consistency is more appealing since it produces stable results over larger periods of time, and can even enforce consistency of the synthesized frames before and after occlusions. This constraint can be easily enforced in optimization-based methods <cit.>. Unfortunately, it is quite difficult to incorporate it in feed-forward networks, due to limited batch size, computation time and cache memory. Therefore, short-term consistency seems to be more affordable for feed-forward networks in practice. Hence, our solution is a kind of compromise between consistency and efficiency. Our network is designed to mainly consider short-term relationships (only two frames), but long-term consistency is partially achieved by propagating the short-term ones. Our network may directly leverage the composite features obtained from the previous frame, and combine them with features at the current frame for the propagation. In this way, when a point can be traced along motion trajectories, its feature can be propagated until the track ends.

This approximation may suffer from shifting errors in propagation, and inconsistency before and after occlusions. Nevertheless, in practice, we do not observe obvious ghosting or flickering artifacts with our online method, which is necessary in many real applications. In summary, our proposed video style transfer network is unique in the following aspects:

* Our network is the first network leveraging temporal information that is trained end-to-end for video style transfer, which successfully generates stable results.
* Our feed-forward network is thousands of times faster compared to optimization-based style transfer in videos <cit.>, reaching 15 fps on modern GPUs.
* Our method enables online processing, and is cheap in both learning and inference, since we achieve a good approximation of long-term temporal coherence by propagating the short-term one.
* Our network is general, and can be successfully applied to several existing image stylization networks, including per-style-per-network <cit.> and multiple-style-per-network <cit.>.§ RELATED WORK §.§ Style Transfer for Images and Videos Traditional image stylization work mainly focuses on texture synthesis based on low-level features, using non-parametric sampling of pixels or patches in given source texture images <cit.> or stroke databases <cit.>. Their extension to video mostly uses optical flow to constrain the temporal coherence of the sampling <cit.>. A comprehensive survey can be found in <cit.>.Recently, with the development of deep learning, using neural networks for stylization has become an active topic. Gatys et al. <cit.> first propose a method that uses pre-trained deep Convolutional Neural Networks (CNNs) for image stylization. It generates more impressive results than traditional methods because CNNs provide more semantic representations of styles. To further improve the transfer quality, different complementary schemes have been proposed, including face constraints <cit.>, Markov Random Field (MRF) priors <cit.>, user guidance <cit.> or controls <cit.>. Unfortunately, these methods, based on iterative optimization, are computationally expensive at run-time, which imposes a big limitation in real applications. To make the run-time more efficient, some works directly learn a feed-forward generative network for a specific style <cit.> or multiple styles <cit.>, which is hundreds of times faster than optimization-based methods.Another direction of neural style transfer <cit.> is the extension to videos. A naive solution that independently processes each frame produces flickers and false discontinuities. To preserve temporal consistency, Anderson et al. <cit.> use optical flow to initialize the style transfer optimization, and incorporate flow explicitly into the loss function. To further reduce ghosting artifacts at boundaries and occluded regions, Ruder et al. <cit.> introduce masks to filter out flow with low confidence in the loss function. This makes it possible to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusions. Notwithstanding their demonstrated success in video style transfer, these methods are very slow due to the iterative optimization. Feed-forward networks <cit.> have proven to be efficient in image style transfer. However, we are not aware of any work that trains a feed-forward network explicitly taking temporal coherence into consideration for video style transfer. §.§ Temporal Coherence in Video Filtering Video style transfer can be viewed as applying a kind of artistic filter to videos. Preserving temporal coherence is essential and has been considered in previous video filtering work. One popular solution is to temporally smooth the filter parameters. For instance, Bonneel et al. <cit.> and Wang et al. <cit.> transfer the color grade of one video to another by temporally filtering the color transfer functions.Another solution is to extend the filter from 2D to 3D. Paris et al. <cit.> extend the Gaussian kernel in bilateral filtering and mean-shift clustering to the temporal domain for many video applications. Lang et al. <cit.> also extend the notion of smoothing to the temporal domain by exploiting optical flow, and revisit optimization-based techniques such as motion estimation and colorization.
These temporal smoothing and 3D extension methods are specific to their applications, and do not generalize to others, such as stylization.A more general solution for temporal coherence is to incorporate a post-processing step which is blind to the filter. Dong et al. <cit.> segment each frame into several regions and spatiotemporally adjust the enhancement (produced by unknown image filters) of regions across different frames; Bonneel et al. <cit.> filter videos along motion paths using a temporal edge-preserving filter. Unfortunately, these post-processing methods fracture texture patterns or introduce ghosting artifacts when applied to stylization results, due to their strong reliance on accurate optical flow.As for stylization, previous methods (both traditional ones <cit.> and neural ones <cit.>) rely on optical flow to track motion and keep the color and texture patterns coherent along motion trajectories. Nevertheless, how to add flow constraints to feed-forward stylization networks has not been investigated before. §.§ Flow Estimation Optical flow is known as an essential component in many video tasks. It has been studied for decades and numerous approaches have been proposed <cit.>. These methods are all hand-crafted, which makes them difficult to integrate and jointly train in our end-to-end network.Recently, deep learning has been explored for solving optical flow. FlowNet <cit.> is the first deep CNN designed to directly estimate optical flow, and it achieves good results. Later, its successors focused on accelerating the flow estimation <cit.>, or achieving better quality <cit.>. Zhu et al. <cit.> recently integrate FlowNet <cit.> into image recognition networks and train the network end-to-end for fast video recognition. Our work is inspired by their idea of applying FlowNet to existing networks. However, the stylization task, unlike recognition, requires some new factors to be considered in the network design, such as the loss function and feature composition. § METHOD§.§ Motivation When style transfer is applied to consecutive frames independently (e.g., <cit.>), subtle changes in appearance (e.g., lighting, noise, motion) result in strong flickering, as shown in fg:motivation. By contrast, in still-image style transfer, such small changes in the content image, especially in flat regions, may be necessary to generate spatially rich and varied stylized patterns, making the result more impressive. Thus, how to keep such spatially rich and interesting texture patterns while preserving temporal consistency in videos is worthy of a more careful study.For simplicity, we start by exploring temporal coherence between two frames. Our intuition is to warp the stylized result from the previous frame to the current one, and adaptively fuse the two together. In other words, traceable points/regions from the previous frame are kept unchanged, while untraceable points/regions use the new results computed at the current frame. Such an intuitive strategy kills two birds with one stone: 1) it keeps stylized results along motion paths as stable as possible; 2) it avoids ghosting artifacts at occlusions or motion discontinuities. We show this intuitive idea in fg:intuition.The strategy outlined above only preserves short-term consistency, and can be formulated as a problem of propagation and composition. Propagation relies on good and robust motion estimation.
Instead of optical flow, we are more inclined to estimate flow on deep features, similar to <cit.>; this tends to neglect noise and small appearance variations and hence leads to more stable motion estimation. This is crucial for generating stable stylized videos, since we do not want the appearance of stylized frames to change because of such variations. Composition is also considered in the feature domain instead of the pixel domain, since this further avoids seam artifacts.To further obtain consistency over long periods of time, we seek a new architecture that propagates short-term consistency to long-term. The pipeline is shown in fg:net_overview. At frame t-1, we obtain the composite feature maps F^o_t-1, which are constrained by two-frame consistency. At frame t, we reuse F^o_t-1 for propagation and composition. By doing so, we expect all traceable points to be propagated as far as possible through the entire video. Once a point is occluded or the tracking gets lost, the composite features keep the values independently computed at the current frame. In this way, our network only needs to consider two frames at a time, but still approaches long-term consistency. §.§ Network Architecture In this section, we explain the details of our proposed end-to-end network for video style transfer. Given the input video sequence {I_t|t=1...n}, the task is to obtain the stylized video sequence {O_t|t=1...n}. The overall system pipeline is shown in fg:net_overview. At the first frame I_1, an existing stylization network (e.g., <cit.>), denoted Net_0, produces the stylized result. Meanwhile, it also generates the encoded features F_1 as the input of our proposed network Net_1 at the second frame I_2. The process is iterated over the entire video sequence. Starting from the second frame I_2, we use Net_1 rather than Net_0 for style transfer.The proposed network structure Net_1, which incorporates two-frame temporal coherence, is presented in fg:net_structure. It consists of three main components: the style sub-network, the flow sub-network, and the mask sub-network. Style Sub-network. We adopt the pre-trained image style transfer network of Johnson et al. <cit.> as our default style sub-network, since it is often adopted as the basic network structure in many follow-up works (e.g., <cit.>). This kind of network resembles an auto-encoder architecture, with strided convolution layers as the encoder and fractionally strided convolution layers as the decoder. Such architectures allow us to insert the flow sub-network and the mask sub-network between the encoder and the decoder. In Section <ref>, we provide a detailed analysis of which layer is best for the integration of our sub-networks. Flow Sub-network. To provide temporal coherence, the flow sub-network is designed to estimate the correspondences between two consecutive frames I_t-1 and I_t, and then warp the convolutional features. We adopt FlowNet (the "Simple" version) <cit.> as our flow sub-network by default. It is pre-trained on the synthetic Flying Chairs dataset <cit.> for the optical flow task, and is fine-tuned to produce feature flow suitable for our task.The process is similar to <cit.>, which uses it for video recognition. Two consecutive frames I_t-1, I_t are first encoded into feature maps F_t-1, F_t respectively by the encoder. W_t is the feature flow generated by the flow sub-network and bilinearly resized to the same spatial resolution as F_t-1.
As the values of W_t are in general fractional, we warp F_t-1 to F_t' via bilinear interpolation:F_t' = 𝒲_t-1^t(F_t-1),where 𝒲_t-1^t(·) denotes the function that warps features from t-1 to t using the estimated flow field W_t, namely F_t'(p) = F_t-1(p + W_t(p)), where p denotes the spatial location in the feature map and flow. Mask Sub-network. Given the warped feature F_t' and the original feature F_t, the mask sub-network is employed to regress the composition mask M, which is then adopted to compose the two features F_t' and F_t. The value of M varies from 0 to 1. For points/regions that are traceable by the flow (e.g., static background), the value of the mask M tends to be 1. This indicates that the warped feature F_t' should be reused so as to keep coherence. On the contrary, at occlusions or points/regions of false flow, the value of the mask M is 0, which indicates that F_t should be adopted. The mask sub-network architecture consists of three convolutional layers with stride one. Its input is the absolute difference of the two feature maps,Δ F_t = |F_t - F_t'|,and its output is a single-channel mask M, which means all feature channels share the same mask in the later composition. Here, we obtain the composite features F_t^o by a linear combination of F_t and F_t':F_t^o = (1-M) ⊙ F_t + M ⊙ F_t',where ⊙ represents element-wise multiplication. Summary of Net_1. fg:net_structure summarizes our network Net_1 designed for two frames. Given two input frames I_t-1, I_t, they are fed into the encoder of the fixed style sub-network, generating convolutional feature maps F_t-1, F_t. This first step is different at inference time, where F_t-1 is not computed from I_t-1, but is instead borrowed from the composite features F_t-1^o obtained at t-1. This is illustrated by the dotted lines in fg:net_structure. On the other branch, both frames I_t-1, I_t are fed into the flow sub-network to compute the feature flow W_t, which warps the features F_t-1 (F_t-1^o is used at inference time instead) to F_t'. Next, the difference Δ F_t between F_t and F_t' is fed into the mask sub-network, generating the mask M. The new features F_t^o are obtained by a linear combination of F_t and F_t' weighted by the mask M. Finally, F_t^o is fed into the decoder of the style sub-network, generating the stylized result O_t at frame t. At inference time, F_t^o is also the output passed to the next frame t+1. Since both the flow and mask sub-networks learn the relative flow W_t and mask M_t between any two frames, it is not necessary for training to incorporate historic information (e.g., F_t-1^o) the way inference does. This keeps training simple.
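To make the propagation-and-composition step concrete, the following is a minimal PyTorch-style sketch of the warping and blending equations above. It is our own illustration rather than the authors' released code: the names warp_features, compose and mask_net are hypothetical, the flow is assumed to be given in pixel units at the feature resolution, and the sigmoid bounding the mask to [0,1] is our assumption.

import torch
import torch.nn.functional as F

def warp_features(feat_prev, flow):
    # feat_prev: (B, C, H, W) features F_{t-1}; flow: (B, 2, H, W) feature flow W_t.
    B, C, H, W = feat_prev.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing='ij')
    base = torch.stack((xs, ys), dim=0).to(feat_prev.device)  # pixel grid (2, H, W)
    coords = base.unsqueeze(0) + flow            # F_t'(p) samples F_{t-1}(p + W_t(p))
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0      # normalize to [-1, 1] for grid_sample
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)         # (B, H, W, 2)
    return F.grid_sample(feat_prev, grid, mode='bilinear', align_corners=True)

def compose(feat_cur, feat_warped, mask_net):
    # Delta F_t = |F_t - F_t'| -> single-channel mask M -> blended features F_t^o.
    delta = torch.abs(feat_cur - feat_warped)
    M = torch.sigmoid(mask_net(delta))           # (B, 1, H, W), shared by all channels
    return (1.0 - M) * feat_cur + M * feat_warped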
§.§ The Loss Function To train both the flow and mask sub-networks, we define the loss function by enforcing three terms: the coherence term ℒ_cohe, the occlusion term ℒ_occ, and the flow term ℒ_flow. The coherence term ℒ_cohe penalizes inconsistencies between the stylized results of two consecutive frames,ℒ_cohe(O_t,S_t-1) = M^g ⊙ ||O_t-𝒲_t-1^t(S_t-1)||^2,where S_t-1 is the stylized result produced independently at t-1. The warping function 𝒲_t-1^t(·) here uses the ground-truth flow W_t^g. M^g is the ground-truth mask, where 1 represents consistent points/regions and 0 represents untraceable ones. The term encourages the stylized result O_t to be consistent with S_t-1 in the traceable points/regions.On the contrary, in the untraceable regions (e.g., occlusions), the occlusion term ℒ_occ enforces O_t to be close to the independently stylized result S_t at frame I_t:ℒ_occ(O_t,S_t) = (1-M^g)⊙||O_t- S_t||^2. Besides, we add a term to constrain the feature flow:ℒ_flow = ||W_t - W_t^g↓||^2.Here we use a down-scaled version of the ground-truth optical flow, W_t^g↓, re-scaled to the same size as W_t, to serve as the guidance for feature flow estimation.In summary, our loss function for training the flow and mask sub-networks is the weighted average of the three terms,ℒ = αℒ_cohe + βℒ_occ + λℒ_flow,where α = 1e5, β = 2e4 and λ = 20 by default.Note that our loss function omits the content and style losses used to train the original style network, because the pre-trained style sub-network is fixed while the flow and mask sub-networks are trained. We believe that S_t (or S_t-1) itself provides sufficient style supervision during learning. One extra benefit is that we can directly take other trained still-image style models and apply them to videos. In this sense, our proposed framework is general.§ EXPERIMENTS§.§ Dataset Set-upOur task requires a large video dataset with varied types of motion and ground-truth optical flow. However, existing datasets are quite small; e.g., the synthetic MPI Sintel dataset <cit.> has only 1,064 frames in total. Instead, we collect ten short videos (eight episodes of the animated movie Ice Age, and two real videos from YouTube), about 28,000 frames in total, as our training dataset.To obtain approximate ground-truth flow W^g between every two consecutive frames in these videos, we use DeepFlow2 <cit.> to compute the bidirectional optical flow and use the backward flow as the ground truth.As for the ground truth of the composition mask, M^g, we adopt the methods used in <cit.> to detect occlusions and motion boundaries. We mask out two types of pixels, setting them to 0 in M^g: 1) occluded pixels, obtained by cross-checking the forward and backward flows; 2) pixels at motion boundaries with large flow gradients, which are often less accurate and may result in ghosting artifacts in the composition. All other pixels in M^g are set to 1.We use MPI Sintel <cit.> as the test dataset, which is widely adopted for optical flow evaluation. It contains 23 short videos and is labeled with ground-truth flow and occlusion masks. The dataset covers various types of real scenarios, such as large motions and motion blur. §.§ Implementation details In our experiments, we adopt two types of pre-trained style networks (per-style-per-net <cit.>[In our experiment, we adopt the released model of <cit.>. It uses two stride-2 convolutions to down-scale the input, followed by five residual blocks and then two convolutional layers with stride 1/2 to up-scale, but the number of channels in all convolutional layers is half that of <cit.>.], multiple-style-per-net <cit.>[We slightly modified the StyleBank model <cit.>, whose encoder and decoder sub-networks adopt the same structures as <cit.>, but the stylebank layer is inserted after the third residual block.]) as our fixed style sub-network. We train the flow sub-network and mask sub-network on the video dataset described in sec:data_acq. All videos have a resolution of 640×360. The network is trained with a batch size of 1 (frame pair) for 100k iterations. The Adam optimizer <cit.> is adopted with an initial learning rate of 1e-4, decayed by 0.8 every 5k iterations.
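To summarize the training procedure concretely, the sketch below shows one training iteration under the settings above. It is a schematic of our own, reusing the hypothetical warp_features and compose helpers from Section 3.2; flow_net, mask_net and the mean reduction of the squared errors are likewise our illustrative choices, and the rescaling of the flow magnitude when downsampling W_t^g is only indicated in a comment.

import torch
import torch.nn.functional as F

ALPHA, BETA, LAM = 1e5, 2e4, 20.0                # loss weights from the paper

def train_step(I_prev, I_cur, W_gt, M_gt, encoder, decoder, flow_net, mask_net, opt):
    with torch.no_grad():                        # the style sub-network stays fixed
        F_prev, F_cur = encoder(I_prev), encoder(I_cur)
        S_prev, S_cur = decoder(F_prev), decoder(F_cur)  # per-frame stylized results
    W = flow_net(I_prev, I_cur)                  # trainable feature flow W_t
    F_out = compose(F_cur, warp_features(F_prev, W), mask_net)
    O_cur = decoder(F_out)                       # stylized output O_t
    S_warp = warp_features(S_prev, W_gt)         # warp S_{t-1} with ground-truth flow
    L_cohe = (M_gt * (O_cur - S_warp) ** 2).mean()
    L_occ = ((1.0 - M_gt) * (O_cur - S_cur) ** 2).mean()
    W_gt_ds = F.interpolate(W_gt, size=W.shape[-2:], mode='bilinear',
                            align_corners=True)  # flow-magnitude rescaling omitted
    L_flow = ((W - W_gt_ds) ** 2).mean()
    loss = ALPHA * L_cohe + BETA * L_occ + LAM * L_flow
    opt.zero_grad()
    loss.backward()                              # only flow/mask parameters are in opt
    opt.step()
    return loss.item()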
§.§ Quantitative and Qualitative Comparison For video style transfer, runtime and temporal consistency are two key criteria. Runtime is measured by the inference frame rate. The temporal consistency is measured bye_stab(O_t,O_t-1) = M^g ⊙ ||O_t-𝒲_t-1^t(O_t-1)||^2,where the stability error e_stab(O_t,O_t-1) measures the coherence loss (in eq:loss_cohe) between two results O_t and O_t-1. Here, we only evaluate the stability of results on traceable regions. A lower stability error indicates a more stable result. For an entire video, we use the average error over all consecutive frame pairs.Quantitative Results. To validate the effectiveness of our method, we test and compare using two existing stylization networks <cit.>. The baseline for comparison is to apply their networks to process each frame independently. As shown in tb:quan_eval, for all four styles our method obtains a much lower stability error than the baselines <cit.>. As for the runtime, our method is around 2.5∼2.8× slower than the baseline, because our network requires extra computation in the flow and mask sub-networks. Nevertheless, our method still runs in near real-time (15 fps on a Titan X).As a reference, we also test the optimization method <cit.> with the Candy style on our test database. Ours has a slightly larger temporal coherence error than theirs (0.0067), because our network is trained for all videos while theirs is optimized for one. As for speed, ours is thousands of times faster than theirs (0.0089 fps).Qualitative Results. In fg:qual_eval, we show three examples with representative kinds of motion to visually compare our results with the per-frame processing models <cit.>. These results clearly show that our method successfully reduces the temporal inconsistency artifacts that appear with per-frame models. In the nearly static scene (first row), ours keeps the scene unchanged after stylization while the per-frame models fail. As for the scenes with motion, including both camera motion (second row) and object motion (third row), our method keeps the coherence between two frames except in the occluded regions. (The comparisons in our supplementary video [<http://home.ustc.edu.cn/~cd722522/>] are highly recommended for better visualization.)We further compare our method with a post-processing method <cit.>, applied to the stylized results produced by the per-frame model <cit.>. As shown in fg:blind_cmp, the results produced by the post-processing method <cit.> look less clear than ours and exhibit ghosting artifacts. This is because optimizing temporal coherence after stylization may not reach a global optimum for both temporal coherence and stylization. §.§ Ablation Study Layer Choice for Feature Composition. To study which layer of the style sub-network is best for our feature propagation and composition, we try different layers for the integration. For the basic style network <cit.>, we find 5 intermediate feature layers from input to output (with 1, 1/2, 1/4, 1/2, 1 times the original resolution, respectively) that allow our flow and mask sub-networks to be integrated. The five settings are trained and tested on the same database and with the same style.In this experiment, we measure the sharpness of the stylization results by the Perceptual Sharpness Index (PSI) <cit.>, in addition to the stability error (in eq:stability_error). fg:blur_cmp clearly shows that the stability improves from input to output layers, while the sharpness decreases. This may result from the observation that stylization networks (e.g., <cit.>) amplify image variances, as shown in fg:motivation.
When feature flow estimation and composition happen closer to the input layer, small inconsistencies in the composite features are also amplified, causing incoherent results. When they happen closer to the output layer, blending the already amplified differences becomes more difficult and may introduce strong ghosting artifacts. To strike a balance between stability and image sharpness, we recommend integrating our sub-networks into the middle layer of stylization networks, e.g., r1/4(E). In this layer, the image content is compressed as much as possible, which may be beneficial for robust flow estimation and feature composition. Fixed Flow Sub-network. In our experiments, FlowNet <cit.> is adopted as our flow sub-network. The original FlowNet is trained in the image domain for optical flow. It needs to be fine-tuned for our task, since the flow is further improved by jointly learning stylization and temporal coherence. Here, we compare fixed and fine-tuned flow sub-networks. As shown in tb:quan_eval, the fixed flow sub-network obtains less temporally coherent results than the fine-tuned one.Transferability. To find out whether our trained flow and mask sub-networks can be applied to a new style (one not appearing in training), we conduct two experiments, respectively on per-style-per-net <cit.> and multiple-style-per-net <cit.>. For per-style-per-net <cit.>, we use two different styles, named A and B, for cross experiments. One combination uses the style sub-network learned from A with our flow and mask sub-networks learned from B; the other combination is reversed. As shown in tb:trans_comp (first column), it is hard to preserve the original stability when our sub-networks trained on one style are applied to another. By contrast, for multiple-style-per-net <cit.>, our trained sub-networks can be directly applied to two new styles without re-training, while preserving the original stability, as shown in tb:trans_comp (second column). This observation suggests that our sub-networks learned with multiple-style-per-net <cit.> are largely independent of the styles, which is beneficial in real applications.§ CONCLUSION AND DISCUSSION In this paper, we present the first end-to-end training system incorporating temporal coherence for video style transfer, which speeds up optimization-based video style transfer (<cit.>) by thousands of times and achieves near real-time speed on modern GPUs. Moreover, our network achieves long-term temporal coherence through the propagation of the short-term one, which enables online processing. It can be successfully employed with existing stylization networks <cit.>, and can even be used directly for new styles without re-training. Our method produces stable and visually appealing stylized videos in the presence of camera motion, object motion, and occlusions.There are still some limitations to our method. For instance, limited by the accuracy of the ground-truth optical flow (given by DeepFlow2 <cit.>), our results may suffer from some incoherence where the motion is too large for the flow to track. After propagation over a long period, small flow errors may accumulate, causing blurriness. These open questions are interesting directions for future work.§ ACKNOWLEDGEMENTThis work is partially supported by the National Natural Science Foundation of China (NSFC, No. 61371192).
http://arxiv.org/abs/1703.09211v2
{ "authors": [ "Dongdong Chen", "Jing Liao", "Lu Yuan", "Nenghai Yu", "Gang Hua" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170327175255", "title": "Coherent Online Video Style Transfer" }
Resource-monotonicity and Population-monotonicity in Connected Cake-cutting December 30, 2023 ============================================================================ § INTRODUCTIONThe study of ultraviolet properties of four-dimensional gravity theories has a long history, starting from the seminal work of 't Hooft and Veltman <cit.>. Despite this we do not know the answer to the basic question of at which loop order various gravity theories actually diverge.In addition, when divergences occur in graviton amplitudes we now know that they have unusual properties, including dependence on evanescent effects <cit.> and suspected links to anomalies <cit.>. Even more interesting are indications in certain supergravity theories that the loop order where the first divergence occurs is higher than previous expectations <cit.>.This renews the possibility that certain theories, such as 𝒩 = 8 supergravity, are ultraviolet finite at any order in perturbation theory. No known symmetry is powerful enough to render a four-dimensional quantum gravity theory ultraviolet finite, so if this were true it would be extraordinary.Certain cancellations in gravity theories are different from those in supersymmetric gauge theories in that they cannot be made manifest for ordinary local representations. When such cancellations happen they are dubbed `enhanced cancellations' <cit.>.In simple cases, these enhanced cancellations can be understood through conventional means by constraining the set of available counterterms from symmetry considerations.For example, at one loop, a well-known counterterm argument <cit.> explains that the n-graviton amplitudes are finite even though the diagrams scale poorly in the ultraviolet.On the other hand, recent examples of enhanced cancellations have as yet no standard symmetry explanation, despite attempts <cit.> and insight from string theory <cit.>.These examples include 𝒩 = 5 supergravity at four loops in D=4 <cit.>, 𝒩 = 4 supergravity at three loops in D=4 <cit.>, and half-maximal supergravity at two loops in D=5 <cit.>.In the relatively simple case of half-maximal supergravity at two loops the cancellations have been understood using the double-copy structure that allows the amplitudes to be built from gauge-theory ones <cit.>. Unfortunately, it is not clear how to generalize this understanding to higher loops.In light of the difficulties in trying to develop a comprehensive explanation for enhanced cancellations, we should consider alternative approaches.For instance one could try to mimic diagram-based proofs of finiteness that were successfully carried out for 𝒩 = 4 super-Yang–Mills theory (see for example Refs. <cit.>).These were achieved by finding representations of the integrand where every term is ultraviolet finite by power counting.However, enhanced cancellations are different: by definition they cannot be made manifest diagram by diagram at the integrand level, using only standard Feynman propagators.But one can still wonder if some kind of integrand-level reorganization could be found that makes large loop-momentum cancellations manifest or at least clarifies how the cancellations occur.An obstruction to pursuing these ideas is that we lack a good definition of global variables for all diagrams of a multiloop amplitude including nonplanar diagrams. One way to approach this difficulty is to use unitarity cuts. At one loop, a systematic program was successfully followed for all one-loop (super)gravity amplitudes in Ref.
<cit.> using a formalism <cit.> based on generalized unitarity <cit.>.This was used to demonstrate the existence of nontrivial cancellations between diagrams as the number of external legs increases. However, a general extension of the one-loop analysis to higher loops remains a challenge.In this paper, instead of attempting a general argument, we turn to specific examples in half-maximal supergravity, which we study in some detail.We construct the examples using the BCJ double-copy construction of gravity loop integrands in terms of gauge-theory ones <cit.>.These examples are based on the one- and two-loop 𝒩 = 4 supergravity amplitudes previously obtained in Refs. <cit.>.We first show that at one loop it is not possible to construct integrands where cancellations are manifest in general dimensions. In particular, we identify cancellations in D=4 that require integration identities.At two loops we use unitarity cuts to argue that cancellations cannot be made manifest at the integrand level. To further investigate this case, we use integration-by-parts (IBP) technology <cit.> to reorganize the integrand into pieces that are finite by power counting and pieces that are divergent by power counting, yet integrate to zero.Although this re-arrangement of the complete integrand is successful, it requires detailed knowledge of the specific integrals and their relations, making it difficult to generalize to higher loops.To deal with this, we then turn to a simpler approach by giving up on trying to make the full integrand display the enhanced ultraviolet cancellations. Instead we series expand in large loop momenta in order to focus on the ultraviolet behavior.We show that, at least in the two-loop examples we study, the integral identities necessary for exposing the enhanced cancellations follow from only Lorentz and SL(2) relabeling invariance.These ideas continue to higher loops, and as a nontrivial confirmation we found that these principles generate all required integral identities for exposing the ultraviolet behavior of maximal supergravity at four loops in the critical dimension where the divergences first occur <cit.>.Based on these results, we conjecture that at L loops the IBP identities generated by Lorentz and SL(L) relabeling symmetry are sufficient for revealing the enhanced cancellations, when they exist. These principles are generic and present in all amplitudes in the large loop-momentum limit.This paper is organized as follows.In sec:example, we present one- and two-loop examples showing the lack of integrand-level cancellations.In sec:rearranging we outline how one can arrange complete integrands so that they are manifestly finite by power counting, up to terms that integrate to zero.In sec:ibp we then analyze the large loop-momentum limit, bringing us to a conjecture on the symmetries of the integrals responsible for making enhanced cancellations visible.We give our conclusions in sec:conc.We also include an appendix on subtleties regarding boundary terms in integration-by-parts identities.§ ABSENCE OF ENHANCED CANCELLATIONS IN THE INTEGRANDEnhanced cancellations are a recently identified type of ultraviolet cancellation that can occur in gravity theories <cit.>.
These cancellations are defined as follows: Start with an amplitude organized in terms of diagrams whose denominators are only the usual Feynman propagators i/(p^2 + iϵ).Suppose this amplitude is ultraviolet finite, yet there are terms that are divergent by power counting and cannot be re-assigned to other diagrams without introducing additional spurious denominators in other diagrams.This implies nontrivial cancellations that cannot be manifest in the integrand of each diagram.We would then say there is an enhanced cancellation.This notion is distinct from the question of whether it is possible to exhibit the cancellations at the integrand level; one might imagine that with careful choices of loop variables in each diagram, one might be able to align the loop momenta in just the right way so that the poor behavior cancels algebraically between diagrams prior to integration.Here we show that this does not happen.We present examples of enhanced cancellations to illustrate that it is only after integration that divergences cancel. We focus on the relatively simple cases of 16-supercharge half-maximal supergravity at one and two loops in D=4 and D=5. In D=4 this theory is just 𝒩 = 4 supergravity <cit.>.Even though the one-loop D=4 cancellation is a well-known consequence of supersymmetry <cit.>, it provides a relatively simple concrete example of cancellations that do not arise at the integrand level, but can be exposed using Lorentz invariance. We then turn to the more interesting case of two-loop half-maximal supergravity in D=5. In this case no known standard-symmetry argument invalidates the potential R^4 divergence <cit.>.In order to construct the integrands we use the BCJ double-copy construction <cit.>, which we briefly review.The double-copy construction is useful because it directly gives us gravity loop integrands from corresponding gauge-theory ones.In this construction, one of the two gauge-theory amplitudes is first reorganized into diagrams with only cubic vertices,𝒜_m^L-loop = i^L g^m-2+2L∑_S_m∑_j ∫∏_l=1^L d^D p_l/(2π)^D1/S_jc_j n_j/∏_α_j D_α_j ,where the D_α_j are the propagators of the j-th diagram, L is the number of loops, m is the number of external legs and g is the gauge coupling. The first sum runs over the m! permutations of the external legs, denoted by S_m, while the second sum over j runs over the distinct cubic graphs.The product in the denominator runs over all Feynman propagators.The symmetry factor S_j accounts for any overcounting and internal automorphisms. The c_j are the color factors associated with the diagrams and the n_j are kinematic numerators.The double-copy construction relies on BCJ duality <cit.>, where triplets of diagram numerators satisfy equations in one-to-one correspondence with the Jacobi identities of the color factors of each diagram,c_i + c_j + c_k = 0⇒n_i + n_j + n_k = 0 .The indices i, j, k label the diagrams to which the color factors and numerators belong.If the diagram numerators satisfy the same algebraic properties as the color factors, we can obtain corresponding gravity amplitudes by simply replacing the color factors of a second gauge theory with numerator factors from a gauge theory where the duality holds:c_i → n_i.The gauge-theory coupling constant is also replaced by the gravitational one: g → (κ/2).In this construction the duality (<ref>) needs to be manifest in only one of the two gauge theories <cit.>. The construction also extends to cases where the gauge theory includes fundamental-representation matter particles <cit.>.
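As a small self-contained illustration of the color algebra behind BCJDuality (a toy check of our own, not part of the original discussion), one can verify numerically that four-point color factors built from structure constants obey the Jacobi identity. For concreteness we take gauge group SU(2), whose structure constants are the Levi-Civita symbol.

import numpy as np

# SU(2) structure constants: f^{abc} = epsilon_{abc}.
f = np.zeros((3, 3, 3))
f[0, 1, 2] = f[1, 2, 0] = f[2, 0, 1] = 1.0
f[0, 2, 1] = f[2, 1, 0] = f[1, 0, 2] = -1.0

# A Jacobi triplet of four-point color factors with external indices a1..a4:
c_1 = np.einsum('abx,xcd->abcd', f, f)   # f^{a1 a2 b} f^{b a3 a4}
c_2 = np.einsum('bcx,xad->abcd', f, f)   # f^{a2 a3 b} f^{b a1 a4}
c_3 = np.einsum('cax,xbd->abcd', f, f)   # f^{a3 a1 b} f^{b a2 a4}

# c_1 + c_2 + c_3 = 0; BCJ duality demands the same of the kinematic numerators.
assert np.allclose(c_1 + c_2 + c_3, 0.0)
print("color Jacobi identity verified for all external adjoint indices")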
§.§ One-loop example We start with the one-loop amplitude of pure half-maximal 𝒩 = 4 supergravity in four dimensions <cit.>. This amplitude is well studied and has been computed in Refs. <cit.>. The double-copy construction of this amplitude is particularly simple. We start from the corresponding 𝒩 = 4 super-Yang–Mills and pure Yang–Mills amplitudes.The one-loop four-point 𝒩 = 4 super-Yang–Mills amplitude was first obtained from the low-energy limit of a Type I superstring amplitude <cit.>.This amplitude is particularly simple and the only nonzero kinematic numerators are those of the box diagrams in fig:box,n^box_𝒩=4 = s t A^tree_𝒩=4(1,2,3,4),where s = (k_1+k_2)^2 and t=(k_2+k_3)^2 are the usual Mandelstam invariants and A^tree_𝒩=4(1,2,3,4) is the color-ordered tree superamplitude. The combination s t A^tree_𝒩=4(1,2,3,4) is crossing symmetric, so the three box diagrams have identical numerators.It is easy to check that this representation of the amplitude satisfies the color-kinematics duality (<ref>): each Jacobi triplet relates two box numerators and a triangle numerator, and because all box numerators are identical the triangle numerators consistently vanish.Replacing the color factors in the pure Yang–Mills box contributions given in Ref. <cit.> with the 𝒩 = 4 super-Yang–Mills numerators (<ref>), we obtain the 𝒩 = 4 supergravity amplitude as a sum over box diagrams,M^1-loop_4 = - (κ/2)^4 st A^tree_𝒩=4(1,2,3,4) (I_1234[n_1234,p] + I_1324[n_1324,p] + I_1423[n_1423,p]),whereI_1234[n_1234,p] ≡∫d^D p/(2π)^Dn_1234,p/p^2 (p-k_1)^2 (p-k_1-k_2)^2 (p+k_4)^2is the first box integral in fig:box and n_1234,p is the expression defined in Eq. (3.5) of Ref. <cit.>. The triangle and bubble contributions from the pure Yang–Mills amplitude are simply set to zero because the corresponding 𝒩 = 4 SYM numerators vanish.As dictated by the double-copy construction, the supergravity states are given by the tensor product of pure Yang–Mills gluon states with the states of 𝒩 = 4 super-Yang–Mills theory.The case of D=4 is an example of enhanced cancellations because the three box diagrams are each logarithmically divergent, yet the sum over diagrams is finite.We can see this by finding power-counting divergent terms in each diagram that cannot be moved to other diagrams without introducing nonlocalities in the diagram numerators.An example is the termn_1234,p∼ p_μ_1 p_μ_2 p_μ_3 p_μ_4 ε_1^μ_1 ε_2^μ_2 ε_3^μ_3 ε_4^μ_4 + ⋯ ,where ε_i^μ_i is the gluon polarization of leg i on the pure Yang–Mills side of the double copy.The cancellations between the diagrams are nontrivial. To see the cancellation of the logarithmic divergences, we expand in large loop momentum or, equivalently, small external momenta k_i^μ. Because the integrals are only logarithmically divergent in D=4, this amounts to simply setting all k_i^μ to zero in the integrand (keeping the overall prefactor fixed).In this limit, the propagators of each graph become identical, and the resulting graph effectively becomes a scaleless vacuum integral.Such scaleless integrals vanish in dimensional regularization, but we can introduce a mass for each propagator to separate out the infrared divergences without affecting the ultraviolet divergence.Starting with the pure Yang–Mills numerators, keeping only the leading terms in all three box diagrams results in an integrand proportional to
(In conventional dimensional regularization <cit.> D_s = 4-2ϵ, but in other schemes, such as the four-dimensional helicity scheme <cit.>, D_s = 4.)In the expression above we see explicitly that the amplitude is logarithmically divergent by power counting and that no purely algebraic manipulations can expose the cancellation of the divergence. What makes this case particularly simple is that in the large loop-momentum limit all diagrams degenerate to a single vacuum integral, avoiding loop-momentum labeling ambiguities in different terms that plague higher loops.This example provides a clear demonstration that even after summing over diagrams, enhanced cancellations are not visible prior to using properties of integrals.To expose the ultraviolet cancellation we use Lorentz invariance in the form of integration identities:∫ d^D pp_μ p_ν/(p^2 - m^2)^4 = ∫ d^D p1/Dη_μν p^2/(p^2-m^2)^4 ,∫ d^D pp_μ p_ν p_ρ p_σ/(p^2 -m^2)^4 = ∫ d^D p1/D(D+2)(η_μνη_ρσ + η_μρη_νσ + η_μση_ρν ) (p^2)^2/(p^2-m^2)^4 .With these identities, we find that the integral of OneLoopIntegrand is equal to the integral of-i st A_^ (D_s-2) (p^2)^2/2(p^2 - m^2)^4(D-2)(D-4)/D(D+2) ×_1^μ_1_2^μ_2_3^μ_3_4^μ_4 (η_μ_1μ_2η_μ_3μ_4 +η_μ_1μ_3η_μ_2μ_4 +η_μ_1μ_4η_μ_2μ_3),which vanishes in D=4.While in this case, the cancellation is understood to be a consequence of supersymmetry <cit.>, it does provide a robust example illustrating that enhanced cancellations become visible in the amplitudes only after making use of integral identities. §.§ Two-loop example Enhanced cancellations become more interesting beyond one loop where they correspond to a variety of ultraviolet cancellations for which standard-symmetry explanations are not known <cit.>.We therefore turn to half-maximal supergravity at two loops.In D=4 the cancellations are well understood to be a consequence of supersymmetry <cit.>, but in D=5 no such explanation is known <cit.>.In D=4 we can enormously simplify the integrand by using helicity states.A simple trick that helps us simplify the analysis in higher dimensions as well is to start with the higher-dimensional theory but to restrict the external states and momenta to live in a four-dimensional subspace.In this way we can use four-dimensional helicity methods to enormously simplify higher-dimensional integrands as well.This trick, of course, does not work for all states in the higher-dimensional theory, but is sufficient for our purpose of illustrating the difficulty of exposing enhanced cancellations at the integrand level.Consider the four-point two-loop amplitude ofsupergravity. This amplitude has already been discussed in some detail in Ref. <cit.>.The double-copy construction of the two-loop integrand is rather straightforward. We start from the dimensionally-regularized D=4 all-plus helicity (++++) pure Yang–Mills amplitude in the form given in Ref. <cit.>.(An earlier form of the integrand may be found in Ref. <cit.>.) 
§.§ Two-loop example Enhanced cancellations become more interesting beyond one loop, where they correspond to a variety of ultraviolet cancellations for which standard-symmetry explanations are not known <cit.>.We therefore turn to half-maximal supergravity at two loops.In D=4 the cancellations are well understood to be a consequence of supersymmetry <cit.>, but in D=5 no such explanation is known <cit.>.In D=4 we can enormously simplify the integrand by using helicity states.A simple trick that helps us simplify the analysis in higher dimensions as well is to start with the higher-dimensional theory but to restrict the external states and momenta to live in a four-dimensional subspace.In this way we can use four-dimensional helicity methods to enormously simplify higher-dimensional integrands.This trick, of course, does not work for all states in the higher-dimensional theory, but is sufficient for our purpose of illustrating the difficulty of exposing enhanced cancellations at the integrand level.Consider the four-point two-loop amplitude of half-maximal supergravity. This amplitude has already been discussed in some detail in Ref. <cit.>.The double-copy construction of the two-loop integrand is rather straightforward. We start from the dimensionally-regularized D=4 all-plus helicity (++++) pure Yang–Mills amplitude in the form given in Ref. <cit.>.(An earlier form of the integrand may be found in Ref. <cit.>.) In this representation the kinematic numerators of the planar and nonplanar double-box diagrams shown in fig:diags aren^PYM_1234=T((D_s-2)s(λ_p^2λ_q^2+ λ_p^2λ_p+q^2+λ_q^2λ_p+q^2) + 16s((λ_p·λ_q)^2 -λ_p^2λ_q^2) +1/2(D_s-2)(p+q)^2((D_s-2) λ_p^2λ_q^2+8(λ_p^2+λ_q^2) (λ_p·λ_q)) ) ,n^NPYM_1234 =T((D_s-2)s(λ_p^2λ_q^2+λ_p^2λ_p+q^2+λ_q^2λ_p+q^2)+16 s ((λ_p·λ_q)^2-λ_p^2λ_q^2) ) ,where D_s is a state-counting parameter similar to that at one loop and the subscript `1234' refers to the diagram external leg labeling as in fig:diags.The momenta p and q are the momenta carried by the propagators indicated in fig:diags, while λ_p and λ_q are their (-2ϵ)-dimensional components, where ϵ = (4-D)/2.We use λ_p+q as a shorthand for λ_p + λ_q. The crossing symmetric prefactorT = [12][34]/⟨ 12 ⟩⟨ 34 ⟩is defined in terms of spinor inner products, following the notation of Ref. <cit.>. The remaining planar and nonplanar double-box numerators are given by relabeling these. There are contributions to the Yang–Mills integrand from other types of diagrams as well, but we will not need them for the double-copy procedure.To obtain half-maximal supergravity we then take the pure-Yang–Mills amplitude and replace the color factors with 𝒩 = 4 super-Yang–Mills numerators that satisfy the duality BCJDuality. For the two-loop four-point amplitude of 𝒩 = 4 SYM a representation that satisfies the duality happens to match the original construction <cit.>. The only nonvanishing diagrams are the planar and nonplanar double boxes shown in fig:diags. The substitution (<ref>) is simplyc^P_1234 → n^P_1234 = s^2 t A^tree_𝒩=4(1,2,3,4) , c^NP_1234 → n^NP_1234 = s^2 t A^tree_𝒩=4(1,2,3,4) ,where numerators other than the planar and nonplanar ones vanish.As in the one-loop case, we package the 𝒩 = 4 super-Yang–Mills tree amplitudes for all states into a single superamplitude.The half-maximal supergravity amplitude is then obtained by summing over the planar and nonplanar double boxes in fig:diags, with kinematic numerators given by the product of pure Yang–Mills and 𝒩 = 4 super-Yang–Mills numerators,N^P half-max. sugra_1234 = s^2 t A^tree_𝒩=4(1,2,3,4) × n^PYM_1234 , N^NP half-max. sugra_1234 = s^2 t A^tree_𝒩=4(1,2,3,4) × n^NPYM_1234 .The remaining supergravity planar and nonplanar double-box numerators are given by simple relabelings.
Diagrams other than the planar and nonplanar double boxes vanish.This construction is also valid for the D=5 theory with the external states restricted to a D=4 subspace.We simply take ϵ → -1/2 + ϵ, and accordingly λ_p and λ_q become one-dimensional up to 𝒪(ϵ) corrections.Similarly, the state-counting parameter should be shifted, D_s → D_s + 1.With these modifications, the simple integrand in TwoLoopDoubleCopy is valid for the D=5 theory as well.As terminology for the rest of the paper, when we label an amplitude by its external helicity, we are not referring to the helicities of the supergravity theory, but to the helicities of the pure Yang–Mills theory comprising one side of the double-copy supergravity theory.§.§.§ Cuts and labels for nonplanar amplitudes Enhanced cancellations generally occur between diagrams of different topologies.A difficulty for exposing the cancellations at the integrand level beyond one loop is that there is no unique and well-defined notion of an integrand involving nonplanar diagrams.Nor is it clear in general how one should choose momentum labels in each diagram that would allow cancellations between diagrams of various topologies to occur.For planar diagrams there is a canonical choice of global variables for all diagrams based on dual variables <cit.>, but no analogous notion is known in the nonplanar case.As a simple example, consider the planar and nonplanar double-box diagrams in fig:diags.Fundamentally, the propagator structure is different, making it unclear how one might be able to show the cancellation without integration.A way to sidestep the labeling issue is to focus on unitarity cuts. Generalized unitarity cuts that place at least one line on shell in every loop impose global momentum labels on the cut.We can then ask whether we can find nontrivial cancellations in the cut linked to enhanced cancellations.If such cancellations happen at the level of the integrand, one should expect an improvement in the overall power counting after summing over all contributions to the cut, compared to individual terms.Some care is required because cuts can also obscure cancellations by restricting the diagrams that appear: the more legs that are cut, the fewer diagrams are included, since only diagrams containing propagators corresponding to the cut ones contribute.Because of this, it is best to focus on cuts where only a few legs are placed on shell.§.§.§ Absence of cancellations in a three-particle cut The three-particle cut in fig:3pcut is useful for studying enhanced cancellations.In the following section, using integration-by-parts technology, we describe an arrangement of the integrand where potential divergences are pushed into sunset diagrams, illustrated in fig:sunsets. This suggests that the three-particle cut, where the cut lines correspond to the three propagators of a sunset diagram, is a natural one for studying enhanced cancellations.
In addition, this cut fixes all loop-momentum labels in the amplitude in terms of the momenta of the cut lines.An obvious guess is that if we apply the three-particle cut corresponding to the internal lines of the sunset diagram, we should find improved power counting in the full sum over terms compared to individual contributions.The (++++) amplitude has a number of special features that simplify the analysis of the cut, making it easier to find ultraviolet cancellations if they exist.On the three-particle cut, the terms in the numerator proportional to (p+q)^2 in eq:allPlusPlanar are set to zero because (p+q)^2 corresponds to one of the on-shell inverse propagators ℓ_1^2, ℓ_2^2 or ℓ_3^2, as can be seen in fig:3pcut, making the forms of the planar and nonplanar numerators identical on the three-particle cut. A useful feature of the remaining numerator terms that we exploit is that they are invariant under relabelings: the expression is the same under any mapping of the p and q propagator labels to any two of the three ℓ_1, ℓ_2 and ℓ_3.In addition, up to prefactors depending on external momenta, the dependence of the numerators is only on the components outside the four-dimensional subspace where the external momenta and helicities live.These features enormously simplify the analysis of the cut because most of the numerator factors out and is independent of permutations of external or internal legs.Using these observations, after inserting the numerators into the planar and nonplanar double-box diagrams and taking the three-particle cut shown in fig:3pcut, we obtain the expressionℐ^cut =P(ℓ_1,ℓ_2,ℓ_3) × [ (1/2 t^2/(ℓ_1+k_1)^2 (ℓ_3+k_2)^2 (ℓ_3-k_3)^2(ℓ_1-k_4)^2 + t^2/(ℓ_2+k_1)^2 (ℓ_3+k_2)^2 (ℓ_3-k_3)^2 (ℓ_1-k_4)^2 +s^2/(ℓ_1+ℓ_2)^2 (ℓ_2+ℓ_3)^2 (ℓ_3+k_2)^2 (ℓ_1-k_4)^2+1/2 s^2/(ℓ_2+ℓ_3)^2 (ℓ_3+k_1)^2 (ℓ_2+k_2)^2 (ℓ_1-k_4)^2+1/2 s^2/(ℓ_1+ℓ_2)^2 (ℓ_3+k_2)^2 (ℓ_2-k_3)^2 (ℓ_1-k_4)^2) + perms(ℓ_1, ℓ_2, ℓ_3) + (1 ↔ 2 ) + ( 3 ↔ 4) + (1 ↔ 2 ,3 ↔ 4)] ,where the on-shell conditions ℓ_1^2 = ℓ_2^2 = ℓ_3^2 = 0 are imposed. The prefactor P(ℓ_1,ℓ_2,ℓ_3) isP(ℓ_1,ℓ_2,ℓ_3)= -i (D_s-2) s t A^tree_𝒩=4(1,2,3,4)T×( (λ_ℓ_1^2λ_ℓ_2^2+λ_ℓ_1^2λ_ℓ_3^2+ λ_ℓ_2^2λ_ℓ_3^2)+ 16/(D_s-2) ((λ_ℓ_1·λ_ℓ_2)^2- λ_ℓ_1^2λ_ℓ_2^2) ) ,which is invariant under the permutations of external and internal cut legs indicated in ThreePartCut.We have analyzed ThreePartCut both analytically and numerically, and we find that for ℓ_i →∞ there is no improvement in the large loop-momentum behavior after summing over all terms, compared to the behavior of a single term.In fact, this is no surprise because, other than the overall prefactor (<ref>), this sum over terms is precisely the same one that appears in the three-particle cut of the two-loop four-point amplitude of 𝒩 = 8 supergravity given in Eq. (5.15) of Ref. <cit.>. In 𝒩 = 8 supergravity we know there are no further cancellations arising from the sum over diagrams.This can be seen as follows: the only nonvanishing diagrams in 𝒩 = 8 supergravity are the planar and nonplanar double boxes of fig:diags, but with no loop momenta in the numerators <cit.>.Simple power counting shows that each diagram of 𝒩 = 8 supergravity is ultraviolet divergent in dimensions D≥ 7.This divergence does not cancel in the sum over diagrams, leading to a divergence of the four-point amplitude of 𝒩 = 8 supergravity:
M_4^𝒩=8, D=7-2ϵ|_UV div. = 1/2ϵ (4 π)^7π/3 (s^2 + t^2 + u^2) ×(κ/2)^6stu M^tree_𝒩=8(1,2,3,4) ,where we have stripped an overall coupling and M^tree_𝒩=8 is the supergravity tree amplitude.The fact that there are no further cancellations in 𝒩 = 8 supergravity implies that no integrand-level cancellation is possible in our half-maximal supergravity three-particle cut (<ref>).One might imagine trying to include relabelings ℓ_i → -ℓ_i in the spirit of Ref. <cit.>, or other relabelings, in order to try to expose cancellations.However, because of the link to the 𝒩 = 8 supergravity cut, it is clear there are no further cancellations to be found.In summary, we see no evidence of cancellations at the integrand level.The usual supergraph Feynman rules or amplitudes-based proofs of ultraviolet finiteness in gauge theory (see, for example, Ref. <cit.>) rely on the ability to make the integrand manifestly ultraviolet finite by power counting.The difficulty in finding a standard-symmetry-based explanation for enhanced cancellations <cit.> in gravity theories is presumably tied to our difficulty in identifying the cancellations at the integrand level.This greatly complicates any all-order understanding of the divergence properties of supergravity theories. If we are to unravel enhanced cancellations, we need to turn to the systematics of cancellations from integral identities.§ REARRANGING THE INTEGRAND TO SHOW FINITENESSAs discussed in the previous section, it does not appear possible to expose enhanced cancellations purely at the integrand level. In this section we show how one can rearrange integrands into a form where all terms are manifestly finite by power counting, except for terms that integrate to zero.We do so using modern integration-by-parts (IBP) technology <cit.>.In our discussion we will use the language of integrands and integrals interchangeably. This is because the modern approaches to integration by parts can be used to track terms in the integrand that integrate to zero, in a manner analogous to the one-loop technology of Refs. <cit.>.We first outline how IBP relations can be used to reorganize integrands with enhanced cancellations so that all terms that are naively ultraviolet divergent by power counting integrate to zero.We start from a given integrand with the schematic structureℐ^total=∑_i ℐ_i^fin. + ∑_j ℐ_j^div. .The sums run over the various pieces of the integrand, denoted by ℐ_i^fin., which are finite by power counting, and ℐ_j^div., which are divergent by power counting. After integration, however, the total may be finite. The idea is to reorganize this integrand into the formℐ^total=∑_i ℐ̃_i^fin. + ∑_j ℐ̃_j^van. ,where the ℐ̃_i^fin. are another set of integrands that are finite after integration and the ℐ̃_j^van. can be divergent by power counting but integrate to zero,∫ℐ̃_j^van. = 0 ,thus making the finiteness manifest.The reorganization is accomplished by writing the sum over power-counting divergent integrals as∑_j ℐ_j^div. = ∑_jℐ'_j^fin. +∑_j (ℐ_j^div. - ℐ'_j^fin.),where the terms in parentheses integrate to zero and the finite integrals ℐ'_j^fin.
are included with the finite ones in finiteRep.IBP technology offers a systematic means of accomplishing this, which we briefly review.The IBP method <cit.> takes advantage of the fact that in dimensional regularization a total derivative vanishes:∫∏_i d^Dℓ_i∂/∂ℓ_j^μ(v_j^μ/∏_k D_k) =0 ,where the 1/D_k are propagators and the v^μ_j are arbitrary functions of loop momenta as well as external kinematics or other vectors in the problem.Evaluating the derivatives gives a sum of terms, and the vanishing of the integral therefore implies a relation among the integrals corresponding to each term.By exhausting all such independent relations one can choose a basis of integrals in terms of which to express a given amplitude. The standard basis choice at one loop is a combination of boxes, triangles, and bubbles <cit.>, but at higher loops there is no canonical choice.In general, different bases might be used to manifest different aspects of the amplitude, such as its symmetries and/or its behavior on certain unitarity cuts.
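As a one-loop illustration of vanishingTotalDerivative (a textbook example of our own choosing, not one of the integrals appearing in this paper), take a single massive tadpole propagator and choose v^μ = ℓ^μ. Expanding the total derivative symbolically yields the familiar relation (D-2) I_1 = 2m^2 I_2 between the integrals I_n = ∫ d^Dℓ (ℓ^2-m^2)^-n:

import sympy as sp

D = 4                                    # verify in a fixed integer dimension
l = sp.symbols('l0:4', real=True)        # loop-momentum components
m = sp.symbols('m', positive=True)
l2 = sum(li**2 for li in l)              # Euclidean l^2 for simplicity
f = 1 / (l2 - m**2)                      # tadpole integrand, giving I_1

# d/dl_mu [ l^mu f ] expands to (D-2) f - 2 m^2 f^2; its integral vanishes,
# so the integrals of the two pieces are related.
total_derivative = sum(sp.diff(li * f, li) for li in l)
assert sp.simplify(total_derivative - ((D - 2) * f - 2 * m**2 * f**2)) == 0
print("IBP relation: (D-2) I_1 - 2 m^2 I_2 = 0")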
This choice is convenient because the two integrals with doubled propagators are both ultraviolet finite in D=5.For this simple example, one can solve this system of equations for two of the three ultraviolet-divergent integrals.Plugging in the solution leaves only a single ultraviolet-divergent integral whose coefficient must vanish, if the amplitude is finite.However, the ability to express A in toyAmp in terms of a basis of manifestly finite integrals is a consequence of the simplicity of this example, and for more complicated amplitudes this straightforward approach will not suffice. We will therefore take a more general approach for this example. In particular, we can use IBP to rewrite the crossed box integral as< g r a p h i c s >= α( - 70/su < g r a p h i c s >- 70/tu < g r a p h i c s >+ s t/3 u < g r a p h i c s > ) + (1-α) (70/s^2 < g r a p h i c s > + 1/3 s^2 < g r a p h i c s > ) +d( (1-α) ω_1 + α ω_2 ),where α is a free parameter.In this way we traded one ultraviolet-divergent integral for two ultraviolet-divergent sunset integrals which were already in the basis, plus two other finite integrals and a collection of integrals that vanish (i.e., are total derivatives).Plugging this back into the original expression for A givesA =( 1-α/s^2-α/su-1/2s^2) < g r a p h i c s >-( α/tu+1/2t^2) < g r a p h i c s > +finite +1/70d( (1-α) ω_1 + α ω_2 ) ,where “finite” corresponds to integrals that are manifestly ultraviolet finite with finite coefficients and the term 1/70d(...) vanishes upon integration.For general α this form of A is still not manifestly finite, but since α is arbitrary we can take it to be α=-u/2t, in which case the coefficients to the two sunsets both vanish and A is then manifestly a sum of finite integrals and integrals that vanish. In general, one free parameter will not be enough to tune away two coefficients of ultraviolet-divergent integrals.For more complicated examples one needs to generate more IBP relations and introduce more tunable parameters, and in general each parameter can be used to set one coefficient to an ultraviolet-divergent integral to zero.As a nontrivial example, we carried out this procedure for the (-+++) two-loop amplitude of half-maximal supergravity in D=5. (Recall that the helicity labels refer to the helicities of the pure Yang–Mills side of the double copy, with the external states restricted to live in a four-dimensional subspace.)The structure of this amplitude is much more complicated than the (++++) case and more representative of generic cases.In the first step we reduce the full integrand to a basis of master integrals using Larsen and Zhang's method <cit.>.After this procedure the only contributing ultraviolet-divergent integrals are the three different labels of the sunsets and a few others.We then used these types of over-complete relations to express all of the (non-sunset) ultraviolet-divergent integrals in terms of ultraviolet-divergent sunset integrals, finite integrals and total derivatives that integrate to zero.The tunable parameters are solved so that coefficients of the three sunsets vanish separately, while maintaining finiteness of the coefficients of all finite integrals.Therefore, by allowing for an over-complete basis and tuning the parameters that keep track of this over-completeness, we are able to write the amplitude in the desired form, Eq. (<ref>).We note that unless special care is taken, an IBP identity in general involves doubled propagators, as in overComplete. 
This has the unwanted side effect of introducing spurious infrared singularities even in D=5. With more modern approaches <cit.> we can avoid the appearance of such integrals. This is achieved by imposing∑_j v_j^μ∂/∂ℓ_j^μ D_k = f_k D_k ,on the v_j^μ and where f_k has polynomial dependence on Lorentz-invariant dot products of momenta.We have also applied the more modern approach and find similar results. The procedure sketched above shows that the D=5 two-loop four-point integrand of half-maximal supergravity can be rewritten in a form that is manifestly finite, up to terms that integrate to zero.However, this procedure relies on the specific details of the integrand and corresponding IBP relations.It is also computationally difficult to extend to higher loops.Clearly, we need an approach where the necessary identities can be derived from generic properties of loop integrals.We will describe such an approach in the next section. § VACUUM EXPANSION AND SYSTEMATICS OF ULTRAVIOLET CANCELLATIONSIn this section we describe a systematic approach to understanding enhanced cancellations, in a manner that appears to have an all-loop generalization.We continue to focus on the two-loop amplitudes of half-maximal supergravity. The ultraviolet behavior is determined at the integrand level by large values of loop momenta, or equivalently small external momenta.It is therefore natural to series expand the integrand in this limit. Although this expansion has the unwanted effect of losing contact with the unitarity cuts and introducing spurious singularities, such as doubled propagators, it does have the important advantage of focusing on the term directly relevant for the ultraviolet behavior.In general, we are interested in the logarithmic divergences, so we series expand to the appropriate order where the integrals become logarithmically divergent in ultraviolet <cit.>. (We note that while dimensional regularization does not have direct access to power divergences, such divergences become logarithmic simply by lowering the dimension.)This expansion generates a set of vacuum integrals. For example, at two loops these integrals have the form∫d^D p d^D q 𝒩(p, q, k_i)/(p^2)^A (q^2)^B ((p+q)^2)^C ,where A,B and C denote the powers of the propagators.In addition to being ultraviolet divergent, these vacuum integrals also are infrared divergent. This complicates the extraction of the ultraviolet divergences. For example, in dimensional regularization these integrals are scaleless, and the infrared singularities exactly cancel the ultraviolet ones.This is usually dealt with by introducing a mass regulator or by injecting external momentum into the diagram. (See, for example, Refs. <cit.>.) We will avoid this complication by systematically finding relations between the divergences of the integrals using integration by parts.As noted in the previous section, the simplest example to analyze is the case where the external gluons in the pure Yang–Mills side of the double-copy are restricted to live in four dimensions, and correspond to all-plus helicity (++++). For this helicity configuration on the pure Yang–Mills side of the double copy, we use the spinor-helicity integrands in eq:allPlusPlanareq:allPlusNonPlanar. For the remaining helicity configurations we used the pure Yang–Mills integrand from Ref. <cit.>. The only contributions needed are those whose color structure matches those of the planar and nonplanar double-box diagrams. 
For other helicities we used the gauge-invariant projection method to be described in Ref. <cit.>.In four-dimensions these integrals do not have overall ultraviolet divergences because they are suppressed by the numerators; they are proportional to the (-2)-dimensional components of loop momenta. (They do however contain subdivergences which cancel.)To have a nontrivial example, we turn to the same integrand but with the internal states in D=5. In this case the numerator is not suppressed because λ_p and λ_q are one-dimensional. (In the context of dimensional regularization in D=5-2, they are actually (1-2) dimensional.)Using D=5 properties the integrand simplifies: In D=5 the λ_p and λ_q become one-dimensional so that(λ_p ·λ_q)^2 - λ_p^2 λ_q^2 →𝒪() ,in eq:allPlusPlanareq:allPlusNonPlanar.In the large loop-momentum limit, the logarithmically divergent terms in D=5 are given byI^ P,NP =(D_s-2) s∫ d^D pd^D q( λ_p^2 λ_q^2 + λ_p^2 λ_p+q^2 + λ_q^2 λ_p+q^2 )/(p^2)^A(q^2)^B[(p+q)^2]^C+ UV finite ,where(A,B,C) = .(3,3,1) ,P: planar double box ,(3,2,2) ,NP: nonplanar double box ..In the planar case there are power divergences coming from terms proportional to (p+q)^2, which removes the middle propagator generating a product of one-loop integrals.Such terms do not give rise to logarithmic divergences. (This is consistent with finiteness of such integrals in dimensional regularization, which is sensitive only to logarithmic divergences.)We may then ignore such terms for the purposes of trying to understand overall two-loop logarithmic divergence.One way to evaluate eq:expanded is to consider vacuum integrals with numerators that are polynomial in v_j· p and v_j · q, where the v_j's are a set of orthonormal basis vectors for the five-dimensional momentum space. We havev_5 · p = λ_p ,.7cm v_5 · q = λ_q ,.7cm ∑_j (v_j · p) (v_j · p) = p^2,.7 cm ∑_j (v_j · q) (v_j · q) = q^2 ,.5 cmwith appropriate factors of i inserted for the metric signature. Lorentz invariance then impliesUV finite= ∫ d^D p d^D q v_i^[μ v_j^ν]( p_μ∂/∂ p^ν + q_μ∂/∂ q^ν) 𝒩(v_k· p, v_k · q)/(p^2)^A(q^2)^B[(p+q)^2]^C ,where the Lorentz indices μ and ν are antisymmetrized.By replacing 𝒩 in the above equation by all possible monomials in v_i · p and v_i · q up to degree four, we generate linear relations between vacuum integrals with different numerators, allowing us to reduce eq:expanded to scalar vacuum integrals.The result of this procedure isI^ P, NP = 3/70 (D_s-2) s ∫ d^D p d^D q[ (p^2)^2 + (q^2)^2 +((p+q)^2)^2 ]/(p^2)^A(q^2)^B[(p+q)^2]^C= 3/70 (D_s-2) s (I_A-2,B,C + I_A,B-2,C + I_A,B,C-2),where the scalar vacuum integrals are defined asI_A,B,C = ∫ d^D pd^D q 1/(p^2)^A(q^2)^B[(p+q)^2]^C ,which is invariant under the six permutations of {A,B,C}.One can also obtain this equation by reducing the implicit tensor integrals in eq:expanded, using Lorentz invariance in the more traditional way following for example Eq. (4.18) of Ref. <cit.>. Alternatively, Mastrolia et. al. recently proposed an efficient algorithm to integrate away loop momentum components orthogonal to all external momenta <cit.>. For the particular cases of eq:allPlusReduced we obtainI^ P = 3s/70 (D_s -2 ) ( I_1,3,1 + I_3,1,1 + I_3,3,-1) +UV finite= 3s/70 (D_s -2 ) (2 I_3,1,1 + I_3,3,-1) + UV finite, I^ NP = 3s/70 (D_s -2 ) (I_1,2,2+I_3,0,2+I_3,2,0)+ UV finite,where we used the fact that the integrals are invariant under the exchange of p and q in the second equality in eq:vaccumnum1. 
Summing the planar and nonplanar contributions, we conclude that the logarithmic UV divergence is given by(I^ P +I^ NP) |_ log UV = 3s/70 (D_s -2 ) ( 2 I_3,1,1 + I_1,2,2) |_ log UV.As explained above, the terms with “one-loop squared” propagator structures (e.g., I_3,2,0 or I_3,3,-1 ) do not contain logarithmic UV divergences. Also, it is not surprising that the final result is a linear combination of I_3,1,1 and I_1,2,2, as these are the only two possible logarithmically divergent vacuum integrals in D=5. By explicit evaluation using a uniform internal mass m as an infrared regulator and dimensional regularization in 5-2ϵ dimensions as an ultraviolet regulator, we findI_3,1,1|_ UV div. = -π/192 ϵ , I_1,2,2|_ UV div. = π/96 ϵ ,so the combination of integrals in Eq. (<ref>) is ultraviolet finite in D=5.However, in order to understand the general structure of the cancellations, it is illuminating to instead show this using IBP identities. §.§ Extracting divergences using IBP identitiesWe recall that the fundamental assumption of the IBP method is that the integral of a total derivative vanishes in dimensional regularization, as shown in IBPe. Obviously, integrals of total derivatives only vanish when boundary contributions vanish. In dimensional regularization however, we can consider the integral in a dimension where the boundary contribution is vanishing and then analytically continue the result (zero) to the original dimension. But in an another regularization scheme one has to consider the behavior of boundary terms. In particular, if the boundary term contains ultraviolet or infrared divergences itself, the corresponding IBP identity cannot be used to relate the divergences of the integrals.On the other hand, dimensional regularization is known to regulate the ultraviolet and infrared simultaneously. In general this is very convenient, but this fact might obstruct the use of certain IBPs in this scheme for extracting ultraviolet divergences. The reason for this is that IBP identities in dimensional regularization can mix up ultraviolet and infrared poles. To illustrate this consider the following identity that relates bubble and triangle integrals in D=4:d ω = s ϵ× < g r a p h i c s > +< g r a p h i c s >,where ω is not relevant for the discussion.The internal propagators are all massless.The triangle integral has only an infrared divergence with a 1/^2 pole and the bubble has only an ultraviolet divergence with a 1/ pole.Thedependence in the coefficient of the triangle allows the infrared and ultraviolet divergences to mix.In order to directly extract ultraviolet divergences without introducing an explicit infrared cutoff (such as a mass) we must make sure that the IBPs being used do not mix infrared and ultraviolet poles.These subtleties are pertinent to our discussion since our aim is to extract ultraviolet divergences by focusing on scaleless vacuum integrals, which vanish in dimensional regularization.However, IBP identities that avoid both of the above complications can be directly used to give relations between the ultraviolet divergences of different dimensionally-regularized vacuum integrals without introducing an additional explicit infrared cutoff.In this way we can demonstrate ultraviolet cancellations without explicitly evaluating any integrals.The situation in the presence of subdivergences is more subtle and outside the scope of our present discussion. 
We note that our principal aim is to examine the loop order where ultraviolet divergences might first occur, so subdivergences are not of primary concern.Consider the following identities between two-loop vacuum integralsUV finite = ∫ d^D p d^D q( p^μ∂/∂ p^μ - q^μ∂/∂ q^μ) 1/ (p^2)^A (q^2)^B ((p+q)^2)^C= (-2A+2B) I_A,B,C - 2C I_A-1, B, C+1 + 2C I_A,B-1,C+1 ,UV finite = ∫ d^D p d^D q( p^μ∂/∂ q^μ) 1/ (p^2)^A (q^2)^B ((p+q)^2)^C= (-B+C) I_A,B,C - B I_A-1, B+1, C + B I_A, B+1, C-1 + C I_A-1,B,C+1 - C I_A,B-1,C+1 , UV finite = ∫ d^D p d^D q( q^μ∂/∂ p^μ) 1/ (p^2)^A (q^2)^B ((p+q)^2)^C= (-A+C) I_A,B,C - A I_A+1, B-1, C + A I_A+1, B, C-1 + C I_A,B-1,C+1 - C I_A-1,B,C+1 .In any of the three above identities, we can easily write the integrand as a total derivative because the contributions arising from commuting the loop momenta past the derivatives vanish. As desired there is no explicit dependence on the dimension D.With A+B+C=5, the above IBP identities relate logarithmically divergent integrals in D=5.With dimensional regularization (and a mass as infrared cutoff) there are no boundary terms, but here we allow more general regularization schemes, in which case there may be a ultraviolet finite boundary term on the left hand side of Eqs. (<ref>).As elaborated in the appendix, even in such schemes, boundary terms do not contain divergences and do not modify the relations.We therefore use eq:logIBP5d as a direct relationship between the ultraviolet divergences of the vacuum integrals.With A=1,B=C=2, the first equation in Eqs. (<ref>) provides the following relation between the leading overall divergences of the integrals( I_1,2,2 + 2 I_1,1,3 - 2 I_0,2,3) |_ log UV =( I_1,2,2 + 2 I_1,1,3) |_ log UV =0,where we used the fact that I_0,2,3 is a “one-loop squared” integral with power divergences and no logarithmic divergence. This is consistent with the explicit results in eq:divs, while allowing us to expose cancellations in eq:PandNPdiv without computing divergences of individual integrals or using identities that depend on details of the integrand.In addition, by starting with the Yang–Mills integrand from Ref. <cit.> to construct the half-maximal supergravity integrand via TwoLoopDoubleCopy, we have checked that for any external state, the log divergences in D=5 are always proportional to the same combination as above,(I_1,2,2 + 2I_3,1,1) ,whose leading log divergence vanishes.While dimensional regularization is not sensitive to the potential quadratic divergences in D=5, we can study these divergences by lowering the dimension to D=4.In D=4 one finds that for any helicity configuration h the expanded amplitude is𝒜_h = C_h ( 2 I_3,3,-2 - 11 I_3,2,-1 + 7 I_3,1,0 +5 I_2,2,0)+ UV finite ,for some coefficient C_h depending on the external states and on choices made for reference momenta when choosing external polarizations.We constructed the required integrand by starting from two-loop four-point Feynman diagrams for pure-Yang-Mills and then applied to double-copy procedure to generate the diagrams of half-maximal supergravity.These are then expanded large loop momentum and simplified using Lorentz symmetry to obtainedeq:4dvacResult. We apply the identities (<ref>) to the D=4 case, under the logarithmic power-counting requirement A+B+C=4, with A,B,C chosen to be all possible combinations of integers (some of which may be negative) with some cutoff on their absolute values. Dozens of IBP identities are generated, and the resulting linear system relates all integrals to I_1,2,2. 
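A compact sketch of this bookkeeping for the D=5 case, where it is simplest, is the following. It takes as input only the three identities of eq:logIBP5d, the permutation symmetry of I_A,B,C, and the rule discussed above that any integral with a non-positive index is a "one-loop squared" structure without an overall logarithmic divergence:

import sympy as sp

def canon(idx):
    # I_{A,B,C} is invariant under permutations of (A,B,C)
    return tuple(sorted(idx))

def add(rel, idx, coeff):
    # drop "one-loop squared" integrals (any non-positive index):
    # per the text they carry no overall logarithmic divergence
    if coeff != 0 and min(idx) > 0:
        key = canon(idx)
        rel[key] = rel.get(key, 0) + coeff

def ibp_relations(A, B, C):
    # the three identities of eq:logIBP5d as {label: coefficient} dicts
    r1 = {}                      # p.d/dp - q.d/dq
    add(r1, (A, B, C), -2*A + 2*B)
    add(r1, (A-1, B, C+1), -2*C)
    add(r1, (A, B-1, C+1), 2*C)
    r2 = {}                      # p.d/dq
    add(r2, (A, B, C), -B + C)
    add(r2, (A-1, B+1, C), -B)
    add(r2, (A, B+1, C-1), B)
    add(r2, (A-1, B, C+1), C)
    add(r2, (A, B-1, C+1), -C)
    r3 = {}                      # q.d/dp
    add(r3, (A, B, C), -A + C)
    add(r3, (A+1, B-1, C), -A)
    add(r3, (A+1, B, C-1), A)
    add(r3, (A, B-1, C+1), C)
    add(r3, (A-1, B, C+1), -C)
    return [r for r in (r1, r2, r3) if r]

# In D=5 the log-divergent vacuum integrals have A+B+C=5; with all
# indices positive the only canonical labels are (1,1,3) and (1,2,2).
I113, I122 = sp.symbols('I113 I122')
I = {(1, 1, 3): I113, (1, 2, 2): I122}

eqs = set()
for A in range(6):
    for B in range(6 - A):
        for rel in ibp_relations(A, B, 5 - A - B):
            eqs.add(sp.Add(*[c * I[k] for k, c in rel.items()]))
eqs.discard(0)

print(sp.linsolve(list(eqs), [I122, I113]))
# -> {(-2*I113, I113)}: every identity collapses to I_{1,2,2} = -2 I_{3,1,1},
# so 2 I_{3,1,1} + I_{1,2,2} carries no leading logarithmic divergence,
# consistent with the explicit poles -pi/(192 eps) and pi/(96 eps) quoted above.

For D=4, the same machinery is run over A+B+C=4 with negative indices admitted and with the one-loop-squared integrals treated more carefully, which is what generates the larger system just described.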
In this way, we obtain cancellation of the divergences of eq:4dvacResult for the vacuum expansion of the 𝒩=4 supergravity amplitude.Thus, we see that the two-loop cancellations in D=4 and D=5 can be understood entirely and systematically using IBP identities. §.§ Generalizations and an all-loop conjecture In general, the structure of IBP equations can be rather opaque. Might there be a simple organizing principle that applies to all loop orders?A strong hint is that the subset of IBP identities given in Lorentz follows from Lorentz symmetry. We also saw the key role that Lorentz symmetry played at one loop in sec:example.The obvious L-loop extension isUV finite= ∫( ∏_a=1^L d^D ℓ_a ) v_i^[μ v_j^ν]∑_a=1^L ℓ_aμ∂/∂ℓ_a^ν𝒩(ℓ_a· v_b, ℓ_a ·ℓ_b)/∏_j D_j^A_j ,where the ℓ_a are an independent set of loop momenta to be integrated, the v_a a set of external vectors in the problem and the 1/D_j the propagators in the diagram.As noted earlier, we can equivalently apply Lorentz invariance following the methods in Refs. <cit.>.What about the identities in eq:logIBP5d?These can be understood as belonging to a special class of IBP identities generated by SL(2) transformations of the loop momenta of the form[ p; q ]→ e^ω[ p; q ],with some traceless 2 × 2 matrix ω. Since such an SL(2) transformation leaves the integration measure d^D pd^D q invariant, we haveUV finite = ∫ d^D p d^D q ω_ab ℓ_a^μ∂/∂ℓ_b^μ1/(p^2)^A(q^2)^B[(p+q)^2]^C ,where we used the notation (ℓ_1,ℓ_2)=(p,q).We can rewrite this as an IBP relation,UV finite = ∫ d^D p d^D q ∂/∂ℓ_b^μω_ab ℓ_a^μ/(p^2)^A(q^2)^B[(p+q)^2]^C,due to ω_ab being traceless. This also shows that these relations do not have explicit dependence on the spacetime dimension D.In particular, the IBP identity which come from the first equation in (<ref>) used to exhibit the cancellation of the logarithmic divergence in D=5 is given by the SL(2) generator,ω_ab = [10;0 -1 ] . In fact, the above ideas generalize trivially to the L-loop case by considering generators of SL(L). In more generality, the combination of Lorentz invariance and SL(L) transformations gives rise to some subset of SL(D L) transformations.As a nontrivial check that these ideas provide the key relations between the ultraviolet divergences of vacuum integrals, we have reproduced the relations between ultraviolet divergences of four-loop vacuum integrals in Appendix C of Ref. <cit.> in the context of obtaining the four-loop ultraviolet divergence forsupergravity in the critical dimension, D=11/2. One example of such a relation is given graphically in fig:vacIdentity.This shows that Lorentz and SL(4) symmetry generates a complete set of IBP identities necessary for reducing the vacuum integrals encoding the ultraviolet divergence to an independent set.(We know the set is independent from Eq. (4.15) of Ref. <cit.>.)In this case there were no enhanced cancellations, but had they been present they would have been found after applying the identities. 
This brings us to a conjecture: * Given a loop integrand, homogeneous linear transformations of the loop momentum variables with unit Jacobian are sufficient for revealing enhanced cancellations of potential ultraviolet divergences in gravity theories.Generally, we are interested in the first divergence of a theory in a given dimension so we do not need to concern ourselves with complications due to subdivergences or divergences beyond the logarithmic ones.Even if the cancellation are not complete and an ultraviolet divergence remains we expect these symmetries to generate a complete set of IBP identities for studying logarithmic divergences.If this conjecture were to hold in general, it would shed light on the mysterious enhanced cancellations that have been observed in various supergravity theories.Furthermore, these transformations can be connected to the labeling difficulty of nonplanar integrands. Remarkably, even though there does not seem to be a single “discrete” relabeling of the integration variables for each diagram that allows us to construct an integrand that would manifest the cancellations, the freedom to change integration variables appears to be at the root of the cancellations. § CONCLUSIONSIn this paper we took initial steps towards systematically understanding enhanced ultraviolet cancellations in supergravity theories <cit.>. These cancellations go beyond those presently understood from standard-symmetry argumentation <cit.> and therefore appear to require novel explanations.While a different avenue for understanding enhanced cancellations based on exploiting the double-copy structure of gravity theories has been successful for the special case of half-maximal supergravity in D=5 <cit.>, it is unclear how to extend that argument beyond two loops.In contrast, our large loop-momentum analysis here relies only on generic properties of the integrands and integrals.In nonabelian gauge theories, standard methods including superspace techniques can be used expose ultraviolet cancellations at the integrand level.One might have thought that it is possible to similarly find organizations of multi-loop integrands of supergravity theory.However, as we showed via one- and two-loop examples, it does not seem possible to do this without relying also on integration properties.The simplest example of an enhanced cancellation in a supergravity theory is probably the vanishing of one-loop divergences in puresupergravity in four dimensions. While the cancellation of the divergence in D=4 is well understood as a consequence of supersymmetry <cit.>, the pattern of cancellation amongst the diagrams serves as a prototype for enhanced cancellations. The double-copy construction <cit.> allowed us to obtain thesupergravity integrand very easily from the corresponding ones of pure-Yang–Mills andsuper-Yang–Mills theory. Even in this relatively simple case where there are no labeling ambiguities, we found that the cancellations cannot be exposed at purely the integrand level.After using integral identities that follow from Lorentz invariance, the cancellations become visible.We also investigated the more interesting case of half-maximal supergravity at two loops.In D=5, no standard symmetry explanation is known for the cancellation that removes the logarithmic divergence <cit.>.We showed that the three-particle cuts display no integrand-level cancellations, even though the final integrated expression does display the cancellations. 
Based on our considerations, purely integrand-based proofs of the observed enhanced cancellations do not appear to be possible.In order to systematize ultraviolet cancellations after integration, we used integration-by-parts identities <cit.>. This gives a systematic means for finding all relations between the different integrals.While the machinery of doing so is generally difficult to apply at high loop orders, at two-loops we made use of various advances for controlling the complexity of the identities <cit.>.As an example we showed that one can use these ideas to rearrange the full integrands of amplitudes so that they consist of terms that are manifestly finite as well as terms that integrate to zero.While this construction is a proof of principle and gives some insight into how the cancellations happen, it is too dependent on details of the integrands and the associated identities to be useful for developing an all-orders understanding.To develop such an understanding, we instead focused on the large loop-momentum behavior of the integrands.For the two-loopsupergravity amplitude, by series expanding at large loop momentum, we demonstrated that the only identities needed to expose the cancellation are those that follow from Lorentz and an SL(2) symmetry. Using these principles we also reproduced the necessary four-loop identities <cit.> for extracting the ultraviolet divergence ofin the critical dimension where it first appears, suggesting that we have identified the key identities.This led us to conjecture that at L loop order the integral identities generated by Lorentz and SL(L) symmetry are sufficient for exposing the enhanced cancellations of ultraviolet divergences, when they happen.If generally true, it would point towards a symmetry explanation of enhanced cancellations.There are a number of avenues for further exploration. It would be important to first explicitly confirm our conjecture for the known three- and four-loop examples of enhanced ultraviolet cancellations <cit.>, and to develop an all-loop understanding.It would also be interesting to study whether this set of integral identities is also applicable to more general problems in QCD and other theories that involve extracting ultraviolet divergences.It may also turn out to be helpful for efficiently obtaining the required integration-by-parts identities for analyzing divergences insupergravity at five loops and beyond, once the integrands become available <cit.>.We expect that in the coming years, as new theoretical tools are developed, a complete and satisfactory understanding of enhanced ultraviolet cancellations in gravity theories will follow. §.§ Acknowledgments We thank Lance Dixon, Enrico Hermann, Harald Ita, David Kosower, James Stankowicz, Jaroslav Trnka, and Yang Zhang for many enlightening discussions.This material is based upon work supported by the Department of Energy under Award Number DE-SC0009937.J.P.-M. is supported by the U.S. Department of State through a Fulbright Scholarship.§ BOUNDARY TERMS IN LOGARITHMICALLY DIVERGENT IBPSIn section <ref> we claimed that for logarithmically divergent integrals even in schemes other than dimensional regularization, the boundary contributions of the IBP relations do not alter the relation between the divergences.Here we demonstrate this. 
This is relevant to our discussion because it supports the notion that the required IBP relations to obtain the cancellations of the studied logarithmic divergences are robust and do not depend on details of the scheme.First, recall that the vacuum expansion to logarithmically divergent integrals, the IBPs are of the form,∫∏_i d^D ℓ_i∂/∂ℓ_j^μ( ℓ_k^μ∏_a N_a^B_a/∏_b D_b^A_b),where the powers A_b and B_a of the propagators 1/D_b and irreducible numerators N_a are such that the integrals are logarithmically divergent.Consider ultraviolet regularization after Wick rotation using a physical cut off Λ, under which the right-hand-side of eq:logIBP, as a total divergence, is turned into a boundary integral at the compact cutoff surface by Stokes' theorem. Since the number of propagators makes the integral logarithmically divergent, the boundary integral also has mass dimension 0.In Wilson's floating cutoff picture, a change in the cutoff Λ does not change the boundary integral, which precludes it from having an ultraviolet divergence. Note that the above argument breaks down if we consider, e.g. quadratically divergent IBP relations. This argument is equivalent to the textbook explanation of the finiteness of anomalies in one-loop diagrams given by a boundary term of a linearly divergent integral <cit.>.However, there is an extra subtlety at higher loops that does not arise in the study of anomalies.The argument cannot be trivially extended to the case where there are subdivergences because there is no longer just one UV divergence coefficient to be fixed by a single floating cutoff. However, this is of secondary concern because usually we are interested in studying the very first potential divergence of a supergravity theory.(There are some subtleties with evanescent effects feeding into divergences which require some care <cit.>.)The most interesting cases, such assupergravity at five loops in D=24/5, automatically have no subdivergences because of a lack of lower-loop divergences.It would be nevertheless interesting to understand the behavior of boundary terms in general and study whether the relations generated by Lorentz and SL(L) symmetry can be applied to more general problems of extracting divergences from vacuum integrals in the presence of subdivergences.We also comment on the dimensional regularization, which requires a mass regulator to separate out infrared singularities. One might worry that this mass regulator might interfere with the IBP identities. However, it is easy to argue that when there are no subdivergences the mass regulator does not cause any issues. To prevent IBP identities from mixing up ultraviolet and infrared poles, infrared divergences can be regulated by introducing a uniform mass m to every propagator on the right-hand-side of eq:logIBP. It is best to introduce the mass prior to vacuum expansion to retain cancellations of subdivergences <cit.>. After series expanding in small external momentum, we again obtain a sum of logarithmically divergent vacuum integrals (whose internal propagators are regulated by the uniform mass), but we also obtain additional vacuum integrals multiplied by factors of m^2. To have the correct dimensions, these additional integrals must have negative mass dimension and are power-counting finite in the ultraviolet. Assuming there are no one-loop subdivergences, a naive power counting is sufficient for establishing the lack of ultraviolet divergence. 
Therefore we obtain relations between logarithmic ultraviolet divergences of massive vacuum integrals. Furthermore, there is a smooth limit when the dimension D tends to a fixed integer (or a fractional number in more exotic cases), while the mass m tends to zero, because our special IBP identities have no D dependence and because leading logarithmic ultraviolet divergences are mass-independent. So we end up with relations between logarithmic ultraviolet divergences of massless vacuum integrals.This argument is applicable whenever dimensional regularization rules out lower-loop subdivergences, for example for supergravity calculations in fractional dimensions (see e.g., Ref. <cit.>). We note that Ref. <cit.> also investigated well-defined limits of IBP identities as the dimension tends to an integer, in the different context of studying finite integrals.99 tHooftVeltman G. 't Hooft and M. J. G. Veltman,Ann. Inst. H. Poincare Phys. Theor. A 20, 69 (1974). TwoLoopEvanescent Z. Bern, C. Cheung, H. H. Chi, S. Davies, L. Dixon and J. Nohle, Phys. Rev. Lett.115, 211301 (2015) doi:10.1103/PhysRevLett.115.211301 [arXiv:1507.06118 [hep-th]];Z. Bern, H. H. Chi, L. Dixon and A. Edison,arXiv:1701.02422 [hep-th].CarrascoAnomaly J. J. M. Carrasco, R. Kallosh, R. Roiban and A. A. Tseytlin,JHEP 1307, 029 (2013) doi:10.1007/JHEP07(2013)029 [arXiv:1303.6219 [hep-th]];R. Kallosh,Phys. Rev. D 95, no. 4, 041701 (2017) doi:10.1103/PhysRevD.95.041701 [arXiv:1612.08978 [hep-th]]; D. Z. Freedman, R. Kallosh, D. Murli, A. Van Proeyen and Y. Yamada,arXiv:1703.03879 [hep-th].N4GravFourLoops Z. Bern, S. Davies, T. Dennen, A. V. Smirnov and V. A. Smirnov,Phys. Rev. Lett.111, no. 23, 231302 (2013) doi:10.1103/PhysRevLett.111.231302 [arXiv:1309.2498 [hep-th]].N4GravThreeLoops Z. Bern, S. Davies, T. Dennen and Y.-t. Huang,Phys. Rev. Lett.108, 201301 (2012) [arXiv:1202.3423 [hep-th]].N5GravFourLoops Z. Bern, S. Davies and T. Dennen, Phys. Rev. D 90, 105011 (2014) [arXiv:1409.3089 [hep-th]].TwoLoopHalfMaxD5 Z. Bern, S. Davies, T. Dennen and Y. t. Huang, Phys. Rev. D 86, 105014 (2012) [arXiv:1209.2472 [hep-th]].VanishingVolume G. Bossard, P. S. Howe, K. S. Stelle and P. Vanhove,Class. Quant. Grav.28, 215005 (2011) doi:10.1088/0264-9381/28/21/215005 [arXiv:1105.6087 [hep-th]].KellyAttempt G. Bossard, P. S. Howe and K. S. Stelle,Phys. Lett. B 719, 424 (2013) doi:10.1016/j.physletb.2013.01.021 [arXiv:1212.0841 [hep-th]];G. Bossard, P. S. Howe and K. S. Stelle,JHEP 1307, 117 (2013) doi:10.1007/JHEP07(2013)117 [arXiv:1304.7753 [hep-th]]; HalfMaxMatter Z. Bern, S. Davies and T. Dennen,Phys. Rev. D 88, 065007 (2013), doi:10.1103/PhysRevD.88.065007 [arXiv:1305.4876 [hep-th]].PierreN4 P. Tourkine and P. Vanhove,Class. Quant. Grav.29, 115006 (2012) doi:10.1088/0264-9381/29/11/115006 [arXiv:1202.3692 [hep-th]].Finiteness S. Mandelstam,J. Phys. Colloq.43, no. C3, 331 (1982);doi:10.1051/jphyscol:1982367S. Mandelstam,Nucl. Phys. B 213, 149 (1983);doi:10.1016/0550-3213(83)90179-7L. Brink, O. Lindgren and B. E. W. Nilsson,Phys. Lett.123B, 323 (1983). doi:10.1016/0370-2693(83)91210-8UVProofs P. S. Howe, K. S. Stelle and P. K. Townsend,Nucl. Phys. B 236, 125 (1984). doi:10.1016/0550-3213(84)90528-5.UnexpectedCancellations Z. Bern, J. J. Carrasco, D. Forde, H. Ita and H. Johansson,Phys. Rev. D 77, 025010 (2008) doi:10.1103/PhysRevD.77.025010 [arXiv:0707.1035 [hep-th]].Forde D. Forde,Phys. Rev. D 75, 125019 (2007) doi:10.1103/PhysRevD.75.125019 [arXiv:0704.1835 [hep-ph]].GeneralizedUnitarity Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. 
Kosower,Nucl. Phys. B 425, 217 (1994) doi:10.1016/0550-3213(94)90179-1 [hep-ph/9403226];Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower,Nucl. Phys. B 435, 59 (1995) doi:10.1016/0550-3213(94)00488-Z [hep-ph/9409265]. BCJ Z. Bern, J. J. M. Carrasco and H. Johansson,Phys. Rev. D 78, 085011 (2008) doi:10.1103/PhysRevD.78.085011 [arXiv:0805.3993 [hep-ph]].BCJLoop Z. Bern, J. J. M. Carrasco and H. Johansson,Phys. Rev. Lett.105, 061602 (2010) doi:10.1103/PhysRevLett.105.061602 [arXiv:1004.0476 [hep-th]].DunbarN4 D. C. Dunbar and P. S. Norridge,Nucl. Phys. B 433, 181 (1995) doi:10.1016/0550-3213(94)00385-R [hep-th/9408014];D. C. Dunbar, J. H. Ettle and W. B. Perkins,Phys. Rev. D 83, 065015 (2011) doi:10.1103/PhysRevD.83.065015 [arXiv:1011.5378 [hep-th]].BernOneLoopN4 Z. Bern, C. Boucher-Veronneau and H. Johansson,Phys. Rev. D 84, 105035 (2011) doi:10.1103/PhysRevD.84.105035 [arXiv:1107.1935 [hep-th]].DixonTwoLoopN4 C. Boucher-Veronneau and L. J. Dixon,JHEP 1112, 046 (2011) doi:10.1007/JHEP12(2011)046 [arXiv:1110.1132 [hep-th]].IBP F.V. Tkachov, Phys. Lett. B 100, 65 (1981);K.G. Chetyrkin and F.V. Tkachov, Nucl. Phys. B 192, 159 (1981);K. G. Chetyrkin and F. V. Tkachov,Nucl. Phys. B 192, 159 (1981). doi:10.1016/0550-3213(81)90199-1;S. Laporta,Int. J. Mod. Phys. A 15, 5087 (2000) doi:10.1016/S0217-751X(00)00215-7, 10.1142/S0217751X00002157 [hep-ph/0102033];S. Laporta and E. Remiddi,Phys. Lett. B 379, 283 (1996) doi:10.1016/0370-2693(96)00439-X [hep-ph/9602417];A. V. Smirnov,Comput. Phys. Commun.189, 182 (2015) doi:10.1016/j.cpc.2014.11.024 [arXiv:1408.2372 [hep-ph]];A. von Manteuffel and C. Studerus,arXiv:1201.4330 [hep-ph].KosowerIBP J. Gluza, K. Kajda and D. A. Kosower,Phys. Rev. D 83, 045012 (2011) doi:10.1103/PhysRevD.83.045012 [arXiv:1009.0472 [hep-th]];R. M. Schabinger,JHEP 1201, 077 (2012) doi:10.1007/JHEP01(2012)077 [arXiv:1111.4220 [hep-ph]];G. Chen, J. Liu, R. Xie, H. Zhang and Y. Zhou,JHEP 1609, 075 (2016) doi:10.1007/JHEP09(2016)075 [arXiv:1511.01058 [hep-th]];A. Georgoudis, K. J. Larsen and Y. Zhang,arXiv:1612.04252 [hep-th];H. Ita,PoS LL 2016, 080 (2016) [arXiv:1607.00705 [hep-ph]];Y. Zhang,arXiv:1612.02249 [hep-th].ItaIBP H. Ita,Phys. Rev. D 94, no. 11, 116015 (2016), doi:10.1103/PhysRevD.94.116015 [arXiv:1510.05626 [hep-th]].LarsenZhang K. J. Larsen and Y. Zhang,Phys. Rev. D 93, no. 4, 041701 (2016), doi:10.1103/PhysRevD.93.041701 [arXiv:1511.01071 [hep-th]].Simplifying Z. Bern, J. J. M. Carrasco, L. J. Dixon, H. Johansson and R. Roiban,Phys. Rev. D 85, 105014 (2012) doi:10.1103/PhysRevD.85.105014 [arXiv:1201.5366 [hep-th]].N4Sugra E. Cremmer, J. Scherk and S. Ferrara,Phys. Lett. B 74, 61 (1978).OneLoopSugraDiv M. T. Grisaru, P. van Nieuwenhuizen and J. A. M. Vermaseren,Phys. Rev. Lett.37, 1662 (1976) doi:10.1103/PhysRevLett.37.1662Square Z. Bern, T. Dennen, Y. t. Huang and M. Kiermaier,Phys. Rev. D 82, 065003 (2010) doi:10.1103/PhysRevD.82.065003 [arXiv:1004.0693 [hep-th]].HenrikFundamental H. Johansson and A. Ochirov,JHEP 1511, 046 (2015) doi:10.1007/JHEP11(2015)046 [arXiv:1407.4772 [hep-th]];H. Johansson and A. Ochirov,JHEP 1601, 170 (2016) doi:10.1007/JHEP01(2016)170 [arXiv:1507.00332 [hep-ph]].GSB M. B. Green, J. H. Schwarz and L. Brink,Nucl. Phys. B 198, 474 (1982). doi:10.1016/0550-3213(82)90336-4ColorKinOneTwoLoops Z. Bern, S. Davies, T. Dennen, Y. t. Huang and J. Nohle,Phys. Rev. D 92, no. 4, 045041 (2015) doi:10.1103/PhysRevD.92.045041 [arXiv:1303.6605 [hep-th]].CollinsBook J. C. 
Collins, Renormalization: An Introduction to Renormalization, the Renormalization Group, and the Operator Product Expansion, Cambridge University Press, Cambridge (1985).FDH Z. Bern and D. A. Kosower,Nucl. Phys. B 379, 451 (1992), doi:10.1016/0550-3213(92)90134-W.TwoLoopSugraDiv M. T. Grisaru,Phys. Lett.66B, 75 (1977), doi:10.1016/0370-2693(77)90617-7;E. T. Tomboulis,Phys. Lett.67B, 417 (1977), doi:10.1016/0370-2693(77)90434-8AllPlusTwoLoop Z. Bern, L. J. Dixon and D. A. Kosower,JHEP 0001, 027 (2000), doi:10.1088/1126-6708/2000/01/027 [hep-ph/0001001]. ManganoParke M. L. Mangano and S. J. Parke,Phys. Rept.200, 301 (1991), doi:10.1016/0370-1573(91)90091-Y [hep-th/0509223].BRY Z. Bern, J. S. Rozowsky and B. Yan,Phys. Lett. B 401, 273 (1997), doi:10.1016/S0370-2693(97)00413-9 [hep-ph/9702424].NimaAllLoops N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, S. Caron-Huot and J. Trnka,JHEP 1101, 041 (2011), doi:10.1007/JHEP01(2011)041 [arXiv:1008.2958 [hep-th]].TwoLoopN8 Z. Bern, L. J. Dixon, D. C. Dunbar, M. Perelstein and J. S. Rozowsky,Nucl. Phys. B 530, 401 (1998), doi:10.1016/S0550-3213(98)00420-9 [hep-th/9802162].EnricoJaraIR E. Herrmann and J. Trnka,JHEP 1611, 136 (2016) doi:10.1007/JHEP11(2016)136 [arXiv:1604.03479 [hep-th]].OPP G. Ossola, C. G. Papadopoulos and R. Pittau,Nucl. Phys. B 763, 147 (2007), doi:10.1016/j.nuclphysb.2006.11.012 [hep-ph/0609007].PassarinoVeltman G. Passarino and M. J. G. Veltman,Nucl. Phys. B 160, 151 (1979), doi:10.1016/0550-3213(79)90234-7Vladimirov A. A. Vladimirov, Theor. Math. Phys.43, 417 (1980) [Teor. Mat. Fiz.43, 210 (1980)];N. Marcus and A. Sagnotti,Nuovo Cim. A 87, 1 (1985); DoubleCopyUnitarity Z. Bern, S. Davies and J. Nohle,Phys. Rev. D 93, no. 10, 105015 (2016) doi:10.1103/PhysRevD.93.105015 [arXiv:1510.03448 [hep-th]].SuperGaussBonnet Z. Bern, A. Edison, D. Kosower, J. Parra-Martinez,in preparation. Mastrolia P. Mastrolia, T. Peraro and A. Primo,JHEP 1608, 164 (2016) doi:10.1007/JHEP08(2016)164 [arXiv:1605.03157 [hep-ph]].DoubleCopy Z. Bern, J. J. Carrasco, W. M. Chen, H. Johansson and R. Roiban,arXiv:1701.02519 [hep-th]. AnomalyTextBook T. P. Cheng and L. F. Li,Oxford, UK: Clarendon ( 1984) 536 P. (Oxford Science Publications) SimonJohannesFiniteInts S. Caron-Huot and J. M. Henn,JHEP 1406, 114 (2014) doi:10.1007/JHEP06(2014)114 [arXiv:1404.2922 [hep-th]].
http://arxiv.org/abs/1703.08927v1
{ "authors": [ "Zvi Bern", "Michael Enciso", "Julio Parra-Martinez", "Mao Zeng" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170327044023", "title": "Manifesting enhanced cancellations in supergravity: integrands versus integrals" }
Alexandra Koulouri, Ville Rimpiläinen, Mike Brookes, Jari P Kaipio
December 30, 2023
=====================

In electroencephalography (EEG) source imaging, the inverse source estimates are depth biased in such a way that their maxima are often close to the sensors. This depth bias can be quantified by inspecting the statistics (mean and covariance) of these estimates. In this paper, we find weighting factors within a Bayesian framework for the ℓ_1/ℓ_2 sparsity prior such that the resulting maximum a posteriori (MAP) estimates do not favor any particular source location. Due to the lack of an analytical expression for the MAP estimate when this sparsity prior is used, we solve the weights indirectly. First, we calculate the Gaussian prior variances that lead to depth un-biased maximum a posteriori (MAP) estimates. Subsequently, we approximate the corresponding weighting factors in the sparsity prior based on the solved Gaussian prior variances. Finally, we reconstruct focal source configurations using the sparsity prior with the proposed weights and with two other commonly used choices of weights that can be found in the literature.

Electroencephalography, sparsity prior, Gaussian prior, Bayesian inverse problems, depth bias

§ INTRODUCTION

In EEG focal source imaging, the goal is to estimate the focal neural activity that arises, for example, during an epileptic seizure, using scalp potentials. Based on distributed source modelling <cit.>, the mapping that connects the dipole moments of n potential source locations to m scalp-potential measurements can be written as

v = Kd + ξ,

where v ∈ℝ^m, K ∈ℝ^m × kn (m≪ kn) is the lead field matrix, k is the dimension of the problem (2D or 3D), d∈ℝ^kn is the distributed dipole source configuration and ξ∼𝒩(0,Γ_ξ) is the measurement noise. The ill-posedness of the associated inverse problem requires the use of prior information to obtain stable estimates. One way to solve the problem is to find the estimate of the under-determined linear system that has the minimum norm <cit.>. However, the minimum norm estimate (MNE) has the property that its maxima can lie only close to the sensors, because the measured scalp potentials can be generated from superficial source configurations with less power than from deep source configurations <cit.>. Similar source reconstructions are also obtained with ℓ_2-norm priors, and even when ℓ_1-norm priors are employed, the solution consists of several scattered superficial sources <cit.>. To reduce the depth bias, several (often heuristic) approaches have been suggested <cit.>. The most common approaches are to weight all the sources in the penalty term with the norm of the corresponding column of the lead field matrix <cit.> or with the diagonal elements of the model resolution matrix <cit.>. Another approach is to use Bayesian hierarchical modelling <cit.>.

In this paper, our aim is to find, within a Bayesian framework, weights for our sparsity prior such that the resulting posterior estimates do not favor any particular source location or component. Because there is no analytical expression for the MAP estimate when sparsity priors are employed, we propose to solve the weights indirectly. We first quantify the depth bias of the maximum a posteriori (MAP) estimates when an i.i.d. Gaussian prior is employed by inspecting the statistics of the MAP estimates. Next, we calculate the Gaussian prior variances that ensure depth un-biased solutions by equalizing the variances in the covariance matrix of the MAP estimates.
Finally, we approximate the corresponding weighting factors in the sparsity prior using the solved Gaussian prior variances. We demonstrate the feasibility of our approach by simulating focal brain activity with finite element (FE) simulations. In the reconstructions, we employ the weighted ℓ_1/ℓ_2 sparsity prior and we compare the results obtained using our proposed weights with the reconstructions based on two other commonly used choices of depth weights.§ THEORY§.§ Bayesian Inversion In the Bayesian framework, the inverse solution is the posterior density of the Bayes formulaπ(d|v) ∝π(v|d)π(d),where π(v|d) is the likelihood and π(d) the prior. From Equation (<ref>), the likelihood isπ(v|d) ∝exp(-1/2(v-Kd)^TΓ_ξ^-1(v-Kd)).The MAP estimate of the reconstructions is <cit.>,d̂ :=min_d ∈ ℝ^knL_ξ(Kd-v)_2^2 - 2lnπ(d),where L_ξ comes from the Cholesky factorization of Γ_ξ.§.§ Gaussian priorLet us consider a Gaussian priorπ(d)∝exp(-1/2d^TΓ_d^-1d) that does not have depth weights i.e. the covariance matrix is Γ_d= α^-2 I where I is the identity matrix and α^2 a scaling parameter. In this case, the MAP estimate is <cit.>d̂ = K^T(KK^T+α^2 Γ_ξ)^-1v.From variational point of view, this MAP estimate coincides with Tikhonov regularization and thus yields to a harmonic solution that attains its maximum at the boundary <cit.>. This can also be explained statistically by analyzing the expectation value and covariance of the MAP estimates. Theses values can be estimated by sampling or by using the analytical expressions𝔼[d̂]=0Γ_d̂ = 𝔼[d̂d̂^T]= K^T(KK^T+α^2 Γ_ξ)^-1 K.Figure <ref>-A shows how the values of the diagonal elements (variances) of Γ_d̂ decrease almost quadratically with respect to depth. The zero expectation values and the very low variances associated with the deep locations imply that the deep sources are very unlikely to be reconstructed. Thus, this MAP estimator is biased with respect to depth and favors sources close to the sensors. In this paper, our aim is to determine such prior variances that the resulting MAP estimates do not favor any particular source location or component over other i.e. the variances of the MAP estimates are equal. We start by postulating that this prior covariance matrix is diagonal Γ_d = α^-2diag(γ_d^(i)) for i=1,…,kn. The MAP estimate corresponding to this prior isd̂ = Γ_dK^T(KΓ_dK^T+α^2 Γ_ξ)^-1v,and the covariance of the MAP estimates becomesΓ_d̂ = 𝔼[d̂d̂^T]= Γ_dK^T(KΓ_dK^T+ Γ_ξ)^-1KΓ_d.In a similar way as in <cit.>, we estimate the prior variances by minimizingγ_d := min_γ_ddiag(α^-2I-Γ_d̂)_2^2.This results in solving a set of non-linear equationsα^2 = γ_d^2(i)K^(:,i)TMK^(:,i)i=1,…, kn,where M=(Γ_ξ+KΓ_dK^T)^-1 and K^(:,i) is the i^th column.Figure <ref>-B shows that with these prior variances the diagonal elements of Γ_d̂ will be equal, or in other words, the corresponding MAP estimator is depth unbiased. Moreover, Figure <ref>-C depicts the diagonal elements of the posterior covariance Γ_d|v=(K^TΓ_ξ^-1K+Γ_d^-1)^-1 obtained based on the estimated priorand Figure <ref>-D shows two corresponding marginal posterior densities of two different locations. We can observe that the posterior dipole variances increase with respect to depth. 
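In practice, the non-linear equations above can be solved with a simple fixed-point iteration. A minimal numpy sketch with a hypothetical random lead field (toy sizes, not the FE head model used below) is:

import numpy as np

rng = np.random.default_rng(0)
m, kn, alpha = 32, 200, 1.0                # hypothetical toy sizes
K = rng.standard_normal((m, kn))           # stand-in for the lead field matrix
Gamma_xi = 1e-4 * np.eye(m)                # measurement-noise covariance

gamma = np.ones(kn)                        # prior variances gamma_d
for _ in range(200):
    Gamma_d = np.diag(gamma) / alpha**2
    M = np.linalg.inv(Gamma_xi + K @ Gamma_d @ K.T)
    # alpha^2 = gamma_i^2 K^(:,i)T M K^(:,i)  =>  gamma_i = alpha / sqrt(.)
    quad = np.einsum('ji,jk,ki->i', K, M, K)   # diagonal of K^T M K
    gamma_new = alpha / np.sqrt(quad)
    if np.allclose(gamma_new, gamma, rtol=1e-12):
        gamma = gamma_new
        break
    gamma = gamma_new

# At a fixed point, the diagonal of the MAP-estimate covariance is
# exactly flat at alpha^-2:
Gamma_d = np.diag(gamma) / alpha**2
C = Gamma_d @ K.T @ np.linalg.inv(K @ Gamma_d @ K.T + Gamma_xi) @ K @ Gamma_d
print(np.diag(C).std() / np.diag(C).mean())    # ~ 0 if converged

Whether the iteration converges depends on K and Γ_ξ, but when it does, the resulting γ_d are largest for the source components to which the data are least sensitive, which is what underlies the depth-increasing posterior variances just described.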
Qualitatively, this means that in the estimated source configurations the deep sources are allowed to have higher strengths than the superficial sources, and therefore, the solutions can attain their maximum also deeper in the brain (and not only close to the sensors).§.§ ℓ_1/ℓ_2- norm sparsity prior In this paper, we consider sparse focal source reconstructions and therefore, we employ the ℓ1/ℓ_2-norm prior π(d)∝exp(-α/2∑_i=1^nw^r_id_i_2) where d_i=(d_ix,d_iy,d_iz), d_i_2=√(d_ix^2+d_iy^2+d_iz^2)is the strength of the source at location i and w_i^r are the weights. For short, we denote the dipole strength at location i as r_i=d_i_2 andπ(r_i)∝exp(-α/2 w^r_ir_i).The variance of π(r_i) isγ_r^(i)= c∫_0^∞ (r_i-r_*i)^2exp(-α/2 w_i^r r_i) dr_i=4/α^2(w^r_i)^2where r_*i =c∫_0^∞ r_iexp(-0.5α w_i^rr_i) dr_i=4c/α^2 (w_i^r)^2 and c=0.5α w_i^r because ∫_0^∞ cexp(-0.5α w_i^r r_i) dr_i = 1. We calculate γ_r^(i) at location i with the help of the corresponding Gaussian variancesγ_d^(i+(j-1)n) asγ_r^(i)=kα^-2k+2(∏^k_j=1γ_d^(i+(j-1)n)) (∑^k_j=1γ_d^(i+(j-1)n))^-1,wherej=1,…,k and k is the dimension of the problem. This choice ensures that γ_r^(i) is roughly the average of the dipole component variances when the variances of the components are similar and that γ_r^(i) is close to the lowest dipole component variance when the variances have large differences. Finally, from Equation (<ref>) and (<ref>) we calculate the weightsw_i^r=2√(α^2k-4/k∑_j=1^kγ_d^(i+(j-1)n)/2∏_j=1^kγ_d^(i+(j-1)n))The estimated Gaussian variances and the corresponding weights of the ℓ_1/ℓ_2-norm prior are shown in Figure <ref>.§ MATERIALS AND METHODSWe study the proposed weights by simulating focal deep sources in the gray matter of a 2D FE head model. The head model consisted of five compartments with conductivities (in S/m) equal to 0.33 for the scalp, 0.015 for the skull, 1.76/0.016/0.33 for the cerebral spinal fluid, gray matter and white matter <cit.>, respectively. The potential measurements v were obtained from 32 point sensors equally spaced around the boundary. For the forward and the inverse computations, we use two meshes with 2342 and 1236 nodes, respectively.The MAP estimate of the dipole configuration with sparsity constraint isd̂_MAP :=min_dv-Kd_2^2+∑_i=1^nλ w_id_i_2 wwhere λ is a tuning parameter. The minimization is performed by using the interior point method <cit.> with Bregman iterations <cit.>. The performance of the proposed weights, w_i^r, from Equation (<ref>), was compared with two other commonly used weights: first, the MNE resolution weights given by w_i^MNE = √(1/k ∑_j=1^kR^(i,i+(j-1)n)), where R^(i,i)=diag(K^T (KK^T+Γ_ξ)^-1K) <cit.> and second, the normalized maximum sensor responses w^MSR_i=g_i/max(g_i), where g_i=max_l=1:m(1/k∑_j=1^k K^(l,i+(j-1)n)_2) <cit.>. To access the ground truth, we consider measurements with high signal to noise ratio, SNR=60dB. For the quantitative comparison of the results we employ the earth mover's distance (EMD) <cit.>. § RESULTS AND DISCUSSION We demonstrate the performance of the different weights using three test cases with one and two dipole sources. In Figure <ref>, the small images on the left hand side show the true dipoles, the location is marked with blue circles and the orientations with small blue lines. The remaining images, starting from left, show the reconstruction when w_i^MNE, w_i^MSR and w_i^r are used as weights, respectively. The blue marker x shows the locations of true sources. 
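For reference, the weighted minimization above can also be attacked with a simple proximal-gradient scheme. A minimal numpy sketch (block soft-thresholding; a simpler alternative to the interior-point/Bregman solver that was actually used for the reconstructions shown here) is:

import numpy as np

def block_shrink(d, n, k, thresh):
    # proximal map of sum_i thresh_i ||d_i||_2, with component j of
    # location i stored at index i + j*n (zero-based), matching the
    # i + (j-1)n ordering used for the weights above
    out = d.copy()
    for i in range(n):
        idx = [i + j * n for j in range(k)]
        r = np.linalg.norm(d[idx])
        out[idx] *= max(0.0, 1.0 - thresh[i] / r) if r > 0 else 0.0
    return out

def map_estimate(K, v, w, lam, n, k, steps=5000):
    # proximal-gradient minimization of ||v - K d||_2^2 + lam sum_i w_i ||d_i||_2
    d = np.zeros(K.shape[1])
    tau = 0.5 / np.linalg.norm(K, 2) ** 2   # step 1/L, with L = 2 ||K||_2^2
    for _ in range(steps):
        d = block_shrink(d - tau * 2.0 * K.T @ (K @ d - v), n, k, tau * lam * w)
    return d

Calling map_estimate(K, v, w, lam, n, k) with w set to the proposed weights w_i^r, or to w_i^MNE or w_i^MSR, gives reconstructions of the same kind for any of the weight choices above.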
The MAP estimates were computed by solving Equation (<ref>). All the tested weights give feasible reconstructions. However, we note that the proposed weights w_i^r give the least scattered results and perform best in the single-focal-source cases. For the two-source case, all the weights give roughly similar reconstructions and EMD values.

§ CONCLUSION AND FUTURE WORK

We have demonstrated that the proposed depth weights with the ℓ_1/ℓ_2 sparsity prior give better reconstructions than two commonly used weights when single deep sources are studied. Our proposed approach has the benefit that it does not require hyper-parameter models, which would involve extensive sampling due to the lack of an analytical expression for the MAP estimate when the ℓ_1/ℓ_2 prior is used. In the future, Monte Carlo simulations will be carried out in a 3D head model to analyze the distribution of the MAP estimates reconstructed by using the ℓ_1/ℓ_2 sparsity prior with the proposed weights.

§ CONFLICT OF INTEREST

The authors declare that they have no conflict of interest.
http://arxiv.org/abs/1703.09044v1
{ "authors": [ "Alexandra Koulouri", "Ville Rimpiläinen", "Mike Brookes", "Jari P Kaipio" ], "categories": [ "physics.med-ph", "physics.comp-ph" ], "primary_category": "physics.med-ph", "published": "20170327125338", "title": "Prior Variances and Depth Un-Biased Estimators in EEG Focal Source Imaging" }
Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
Department of Physics, Tsinghua University, Beijing 100084, China
tcli@purdue.edu
Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, USA
Purdue Quantum Center, Purdue University, West Lafayette, IN 47907, USA
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
Birck Nanotechnology Center, Purdue University, West Lafayette, IN 47907, USA
yinzhangqi@mail.tsinghua.edu.cn
Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China

An optically levitated nonspherical nanoparticle can exhibit both librational and translational vibrations due to orientational and translational confinements of the optical tweezer, respectively. Usually, the frequency of its librational mode in a linearly-polarized optical tweezer is much larger than the frequency of its translational mode. Because of the frequency mismatch, the intrinsic coupling between librational and translational modes is very weak in vacuum. Here we propose a scheme to couple its librational and center-of-mass modes with an optical cavity mode. By adiabatically eliminating the cavity mode, the beam splitter Hamiltonian between librational and center-of-mass modes can be realized. We find that high-fidelity quantum state transfer between the librational and translational modes can be achieved with practical parameters. Our work may find applications in sympathetic cooling of multiple modes and quantum information processing.

Coupling librational and translational motion of a levitated nanoparticle in an optical cavity
Zhang-qi Yin
December 30, 2023
==============================================================================================

§ INTRODUCTION

Quantum optomechanics is a rapidly developing field that deals with the interaction between an optical field and the mechanical motion of an object <cit.>. In the last decade, there have been many studies of the interaction between light and the center-of-mass motion of a mechanical oscillator. Quantum ground-state cooling of mechanical oscillators has been realized <cit.>. The study of optomechanics has many applications in macroscopic quantum mechanics <cit.>, precise measurements <cit.>, and quantum information processing <cit.>.

An optically levitated dielectric nanoparticle in vacuum can have an ultra-high mechanical Q >10^9 <cit.>. Therefore, it can be used for ultra-sensitive force detection <cit.>, searching for hypothetical millicharged particles and dark energy interactions <cit.>, and testing the boundary between quantum and classical mechanics <cit.>. A levitated nanoparticle has 6 degrees of freedom: three translational modes and three rotational modes <cit.>. If its orientation is confined by the optical tweezer, it will exhibit libration (such motion was called "torsional vibration" in Ref. <cit.> and "rotation" in Ref. <cit.>; several recent papers called it "libration" <cit.>, which may be a better term as it is similar to the libration of a molecule in an external field). The librational mode of an optically levitated nonspherical nanoparticle has been observed recently <cit.>. Both translational motion and libration of a nanoparticle could be coupled with light and cooled towards the quantum ground state by a cavity mode <cit.>.
The librational mode frequency could be one order of magnitude higher than the frequency of a translational mode <cit.>. The coupling between the librational mode and the cavity mode can also be larger than the coupling between the translational mode and the cavity <cit.>. Therefore, it requires less cooling laser power to cool the librational mode to the quantum regime than to cool the translational mode <cit.>.

In an optical trap in vacuum, the 6 motional degrees of freedom of a nanoparticle are uncoupled from each other when they are near the ground state. It would be interesting to study how to induce strong coupling between them. Such coupling would have several applications. For example, we may use one of these modes to sympathetically cool other modes. It is also useful for quantum information, as we may use all 6 motional modes to store quantum bits, and realize quantum processes such as controlled gates. By dynamically tuning the polarization orientation of a trapping laser, it was found that two different translational modes could be coupled with each other <cit.>. In this way, one translational mode was sympathetically cooled by coupling it to another translational mode, which was feedback cooled. It has been proposed to couple translational and rotational motion of a sphere with a spot painted on its surface by a continuous joint measurement of two motional modes <cit.>. However, a coherent way to couple rotational and translational motion of a nanoparticle is still lacking.

In this paper, we propose a scheme to realize strong coupling between librational and translational modes of a levitated nanoparticle. We consider an optically trapped nanoparticle that resides in an optical cavity. Both its translational and librational modes couple with the cavity mode. We discuss the effects of cavity decay, and find that high-fidelity quantum state transfer could be realized under realistic experimental conditions. We also find that a two-mode squeezing Hamiltonian between the librational and translational modes could be realized by adjusting the detunings of the driving lasers.

§ THE MODEL

As shown in Fig. 1, we consider a system that contains an optical cavity and an ellipsoidal nanoparticle levitated by a trapping laser. The trapping laser is linearly polarized. Therefore, both the location and the orientation of the nanoparticle are fixed <cit.>. The nanoparticle has translational mode b with frequency ω _m and librational mode c with frequency ω _φ. They are both coupled to the cavity mode a. The frequency of the mode c is usually much larger than the frequency of the mode b. The optical mode is driven by two lasers of frequencies ω _L_1 and ω _L_2. The Hamiltonian of the system can be divided into three parts, H_E, H_I and H_D, such that

H = H_E + H_I + H_D

where

H_E = ħω _0a^†a + ħω _mb^†b + ħω _φc^†c

H_I = ħg_aba^†a( b^† + b) + ħg_aca^†a( c^† + c)

H_D = ħΩ _1/2( a^†e^ - iω _L_1t + ae^iω _L_1t) + ħΩ _2/2( a^†e^ - iω _L_2t + ae^iω _L_2t)

Here H_E is the energy term of the translational mode b, the librational mode c and the cavity mode a. H_I describes the couplings between the cavity mode a and the two mechanical modes b and c. The coupling rates g_ab and g_ac are small, but they can be amplified by the driving lasers in H_D. We will discuss how to derive the effective Hamiltonian between a and the b, c modes in the next section.

§ THE EFFECTIVE HAMILTONIAN

We first consider an ideal system without decay.
In order to get the effective Hamiltonian between the cavity mode a and mechanical modes b and c, we first give the Heisenberg equation corresponding to (1).ȧ =- i ω _0a - i g_aba( b^† + b) - ig_aca( c^† + c) - iΩ _1/2e^ - iω _L_1t - iΩ _2/2e^ - iω _L_2t.To deal with it, we make a semi-classical ansatza = a_0( t ) + α _1( t )e^ - iω _L_1t + α _2( t )e^ - iω _L_2twhere α _1 and α _2 are the classical amplitudes of mode a with frequencies ω_L1 and ω_L2, and a_0 is the quantum fluctuation operator.Inserting (<ref>) into (<ref>), we get the equation for the classical amplitudes α _1 and α _2[ - iω _L_1α _1 e^ - iω _L_1t - iω _L_2α _2 e^ - iω _L_2t + α̇_1 e^ - iω _L_1t + α̇_2 e^ - iω _L_2t =;- iω _0α _1 e^ - iω _L_1t - iω _0α _2 e^ - iω _L_2t - iΩ _1/2e^ - iω _L_1t - iΩ _2/2e^ - iω _L_2t. ]As α _1 and α _2 have different frequencies, we have equations for each of themα̇_1 =- i ω _0α _1 + iω _L_1α _1 - iΩ _1/2 α̇_2 =- i ω _0α _2 + iω _L_2α _2 - iΩ _2/2So we can get their classical steady state amplitude (α̇_1= α̇_2= 0): α _1 = Ω _1/2Δ _1 and α _2 = Ω _2/2Δ _2, where Δ _1 = ω _L_1 - ω _0, Δ _2 = ω _L_2 - ω _0. So we geta = a_0e^ - iω _0t + Ω _1/2Δ _1e^ - iω _L_1t + Ω _2/2Δ _2e^ - iω _L_2tWe can derive steady-state displacements β and γ for b and c in the same way.b = b_0 + βc = c_0 + γWhere β=- g_ab( α _1^2 + α _2^2)/.-ω _m, γ=- g_ac( α _1^2 + α _2^2)/.-ω _φ. We substitute (<ref>) and (<ref>) into Hamiltonian and in the rotating frame with U = e^ - iH_0t/ħ, where H_0 = ħω _0a_0^†a_0 + ħω _mb_0^†b_0 + ħω _φc_0^†c_0. The Hamiltonian H_RW = U^†(H - H_0)U. By tuning the lasers detunings, we can get different Hamiltonian between mechanical modes and the cavity mode. The different tasks such as quantum state transfer and entanglement generating can be realized. For example, if the driving lasers fulfill Δ _1 = -ω _m , Δ _2 = -ω _φ, we can neglect fast oscillation terms. The effective Hamiltonian readsH_RW = ħg_abα _1 ( a_0^† b_0 + a_0b_0^†) + ħg_acα _2 ( a_0^† c_0 + a_0c_0^†).The perfect quantum state transfer between the translational and the librational modes requires |g_abα _1 |= |g_acα _2|=G. If we initialize the system as | ψ _a( t = 0)⟩| ψ _b( t = 0)⟩| ψ _c( t = 0)⟩= | 0 ⟩| 0 ⟩| 1 ⟩, we can get| ψ _aψ _bψ _c( t )⟩= 1/2( 1 + cos√(2) Gt)| 001⟩- 1/2( 1 - cos√(2) Gt)| 010⟩- i√(2)/2sin√(2) Gt| 100⟩.If we let t = π/√(2) G, we can transfer a state from librational mode to translation mode (vice versa).If we set Δ _1 + ω _m = Δ _2 + ω _φ = δ, and in the large detuning limit δ≫ |g_abα _1 |, |g_acα _2|, the cavity mode can be adiabatically eliminated <cit.>.Here we including all fast rotating terms, both rotating wave and anti-rotating wave. If the cavity mode a_0 is initially in the vacuum state, the effective Hamiltonian isH_eff = ħG_1b_0^†b_0 + ħG_2c_0^†c_0 + ħG_3( b_0^†c_0 + b_0c_0^†)whereG_1 = α _1^2g_ab^2/Δ _1 + ω _m + α _1^2g_ab^2/Δ _1 - ω _m + α _2^2g_ab^2/Δ _2 + ω _m + α _2^2g_ab^2/Δ _2 - ω _m G_2=α _2^2g_ac^2/Δ _2 + ω _φ + α _2^2g_ac^2/Δ _2 - ω _φ + α _1^2g_ac^2/Δ _1 + ω _φ + α _1^2g_ac^2/Δ _1 - ω _φ . 
If we set Δ_1 + ω_m = Δ_2 + ω_φ = δ and work in the large-detuning limit δ ≫ |g_abα_1|, |g_acα_2|, the cavity mode can be adiabatically eliminated <cit.>. Here we include all fast rotating terms, both rotating-wave and anti-rotating-wave. If the cavity mode a_0 is initially in the vacuum state, the effective Hamiltonian is

H_eff = ħG_1 b_0^†b_0 + ħG_2 c_0^†c_0 + ħG_3 (b_0^†c_0 + b_0c_0^†),

where

G_1 = α_1^2g_ab^2/(Δ_1 + ω_m) + α_1^2g_ab^2/(Δ_1 - ω_m) + α_2^2g_ab^2/(Δ_2 + ω_m) + α_2^2g_ab^2/(Δ_2 - ω_m),
G_2 = α_2^2g_ac^2/(Δ_2 + ω_φ) + α_2^2g_ac^2/(Δ_2 - ω_φ) + α_1^2g_ac^2/(Δ_1 + ω_φ) + α_1^2g_ac^2/(Δ_1 - ω_φ),
G_3 = α_1α_2g_abg_ac/(Δ_1 + ω_m) + α_1α_2g_abg_ac/(Δ_1 - ω_φ).

If G_1 = G_2 (we provide workable parameters in the next section) and we take the initial state as |ψ_b(t=0)⟩|ψ_c(t=0)⟩ = |0⟩|1⟩, we get

|ψ_b(t)⟩|ψ_c(t)⟩ = 1/2(e^-i(G_1+G_3)t + e^-i(G_1-G_3)t)|0⟩|1⟩ + 1/2(e^-i(G_1+G_3)t - e^-i(G_1-G_3)t)|1⟩|0⟩.

In the lab reference frame, we have

|ψ_b(t)⟩|ψ_c(t)⟩ = 1/2 e^-iω_φt(e^-i(G_1+G_3)t + e^-i(G_1-G_3)t)|0⟩|1⟩ + 1/2 e^-iω_mt(e^-i(G_1+G_3)t - e^-i(G_1-G_3)t)|1⟩|0⟩.

If we let t = π/(2G_3), we can transfer a state from the librational mode to the translational mode (and vice versa).

We can also choose Δ_1 - ω_m = Δ_2 - ω_φ = δ. In the limit δ ≫ G, we can adiabatically eliminate the cavity mode and obtain a two-mode-squeezing effective Hamiltonian <cit.>,

H_RW = ħG'_1(b_0^†b_0 + c_0^†c_0) + ħG'_3(b_0^†c_0^† + b_0c_0),

which can be used to generate entanglement between the modes b_0 and c_0.

§ EXPERIMENTAL FEASIBILITY AND DISSIPATION EFFECTS

In this section, we provide feasible experimental parameters and consider the effect of dissipation. In our scheme, the steady-state amplitudes α_1 and α_2 are of the order of 10^4 to 10^5. Therefore, the strength of the linear couplings between the cavity mode and the mechanical modes is enhanced by a factor of 10^4 to 10^5. The photon number fluctuation is of the order of √α_1,2 ∼ 10^2, and is associated with the non-linear coupling between the cavity and the mechanical modes. Therefore, the linear coupling strength is 10^2 times larger than the non-linear coupling strength, and the effect of the photon number fluctuation is negligible in our scheme.

In experiments, dissipation through the decay of the cavity and mechanical modes is inevitable. However, in high vacuum, the mechanical decay rates are much smaller than the cavity decay rate <cit.>. Therefore, we only need to consider the effect of cavity decay. Including dissipation, the steady-state amplitudes change; we can derive them by adding a term -iħκ/2 a^†a to Hamiltonian (<ref>). Following the same procedure as above, we get

α_1 = Ω_1/(2(Δ_1 + iκ/2)),  α_2 = Ω_2/(2(Δ_2 + iκ/2)).

In order to maintain the form of the Hamiltonian, we perform the transformation b_0 → α_1b_0/|α_1|, b_0^† → α_1^*b_0^†/|α_1| and c_0 → α_2c_0/|α_2|, c_0^† → α_2^*c_0^†/|α_2|. Using perturbation theory <cit.>, we can obtain the coupling constants in the same way as Ref. <cit.>. If we restrict the librational motion of the long axis of the nanoparticle to the xOy plane, we get

g_ab = √(ħ/(2Mω_m)) · 32π^2c e^-4π(x^2+z^2)/λL cos ky sin ky/(ε_0λ^3L^2) · (s_2 + cos^2φ(s_1 - s_2)),
g_ac = √(ħ/(2Iω_φ)) · 8πc e^-4π(x^2+z^2)/λL cos^2ky/(ε_0λ^2L^2) · (s_1 - s_2) sin 2φ.

Here L is the length of the cavity, λ and k are the wavelength and wavenumber of the cavity mode, M and I are the mass and the moment of inertia of the nanoparticle, and s_1 and s_2 are the diagonal elements of the susceptibility matrix. (x,y,z,φ) are the parameters describing the position and orientation of the nanoparticle: (x,y,z) are the coordinates of the center of mass (with the origin at the center of the cavity), and φ is the angle between the long axis of the nanoparticle and the x-axis. x, y, z and φ can be changed by adjusting the trapping laser. For example, if we choose the angle between the polarization direction of the trapping laser and the y-axis as 45°, the equilibrium position of the center of mass is (0, π/4k, 0). We then get g_ab/2π = 0.3056 Hz and g_ac/2π = 0.2189 Hz.
(The parameters of the nanoparticle are: density ρ = 3500 kg/m^3, long axis a = 50 nm, short axis b = 25 nm, ε_r = 5.7, waist of the trapping laser W_t = 600 nm, power of the trapping laser 100 mW, wavelength λ_cav = 1540 nm, length of the cavity L = 10 mm.) In this situation, ω_m/2π = 247.7 kHz and ω_φ/2π = 2.6 MHz. If the finesse of our cavity is ℱ = 10^5, we get κ/2π = 75.2 kHz. For example, if we let δ/2π = 200 kHz, Ω_1/2π = 2.66×10^9 Hz and Ω_2/2π = 5.0×10^10 Hz, we get G_3/2π = 25 kHz and a state-transfer time t = 1×10^-5 s, which is not difficult to realize.

§.§ Large detuning scheme

Under the large-detuning condition Δ_1 + ω_m = Δ_2 + ω_φ = δ ≫ G, we transform the system Hamiltonian to the rotating-wave frame and neglect the fast rotating terms in H_RW. In order to deal with cavity loss, we adopt the conditional Hamiltonian <cit.>. We assume that the cavity decay rate is weak; therefore, we only consider the evolution of the system without photon leakage. Under the condition that no photon leaks out, we obtain the conditional Hamiltonian from the quantum trajectory method <cit.>,

H = ħG(a_0^†b_0 e^-iδt + a_0b_0^† e^iδt) + ħG(a_0^†c_0 e^-iδt + a_0c_0^† e^iδt) - iħκ/2 a_0^†a_0,

where κ is the decay rate of the cavity mode a. We can use this conditional Hamiltonian to calculate the probability P that the system evolves without photon leakage. Because the initial state of the system is |0⟩_a|01⟩_bc, the relevant subspace only includes 3 basis states: |0⟩_a|01⟩_bc, |0⟩_a|10⟩_bc and |1⟩_a|00⟩_bc. At any time t, the state of the system is

|ψ_d(t)⟩ = C_d1(t)|0⟩_a|01⟩_bc + C_d2(t)|0⟩_a|10⟩_bc + C_d3(t)|1⟩_a|00⟩_bc,

where

C_d1(t) = 1/2 + (2δ + iκ + χ)/4χ e^-iE_3t/ħ - (2δ + iκ - χ)/4χ e^-iE_2t/ħ,
C_d2(t) = -1/2 + (2δ + iκ + χ)/4χ e^-iE_3t/ħ - (2δ + iκ - χ)/4χ e^-iE_2t/ħ,
C_d3(t) = -e^-iδt (2δ + iκ + χ)(2δ + iκ - χ)/16Gχ · (e^-iE_3t/ħ - e^-iE_2t/ħ),

with χ = √(4δ^2 + 32G^2 + 4iδκ - κ^2), E_2 = 1/4(-2δ - iκ - χ) and E_3 = 1/4(-2δ - iκ + χ). To calculate the fidelity, we first normalize the state |ψ_d⟩, obtaining |ψ_dn⟩ = |ψ_d⟩/√(|C_d1|^2 + |C_d2|^2 + |C_d3|^2). As shown in Fig. <ref>(a), we plot the fidelity F = |⟨ψ_dn(t)|010⟩|, evaluated at the time t = πδ/(2G^2 - κ^2/16) that follows directly from the exact solution of the Schrödinger equation, for κ/2π = 75.2 kHz, as a function of δ and G. The probability that the system evolves without photon leakage is P = |C_d1|^2 + |C_d2|^2 + |C_d3|^2. The fidelity can approach 1 when G is small and δ is large; however, in this regime the effective coupling between the two mechanical modes is also rather small. In Fig. <ref>(b), we plot P as a function of δ and G as well. When we choose δ = 200 kHz and G = 50 kHz, the fidelity is F = 0.95 and the success probability is P = 0.68.
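The closed-form coefficients above are straightforward to evaluate numerically. The sketch below (Python; it is a direct transcription of the formulas, with δ, G and κ set to the values quoted in the text, and the 2π convention for those rates is our assumption) computes the fidelity and the no-leakage probability at the transfer time:

import numpy as np

delta = 2 * np.pi * 200e3    # detuning [rad/s]; 2*pi convention assumed
G     = 2 * np.pi * 50e3
kappa = 2 * np.pi * 75.2e3

chi = np.sqrt(4*delta**2 + 32*G**2 + 4j*delta*kappa - kappa**2 + 0j)
E2 = 0.25 * (-2*delta - 1j*kappa - chi)    # hbar = 1
E3 = 0.25 * (-2*delta - 1j*kappa + chi)
t = np.pi * delta / (2*G**2 - kappa**2/16) # transfer time from the text

plus  = (2*delta + 1j*kappa + chi) / (4*chi)
minus = (2*delta + 1j*kappa - chi) / (4*chi)
Cd1 =  0.5 + plus*np.exp(-1j*E3*t) - minus*np.exp(-1j*E2*t)
Cd2 = -0.5 + plus*np.exp(-1j*E3*t) - minus*np.exp(-1j*E2*t)
Cd3 = (-np.exp(-1j*delta*t) * (2*delta + 1j*kappa + chi) * (2*delta + 1j*kappa - chi)
       / (16*G*chi) * (np.exp(-1j*E3*t) - np.exp(-1j*E2*t)))

P = abs(Cd1)**2 + abs(Cd2)**2 + abs(Cd3)**2   # no-leakage probability
F = abs(Cd2) / np.sqrt(P)                     # overlap with the target |0>_a |10>_bc
print(F, P)    # compare with the quoted F = 0.95, P = 0.68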
§.§ Resonant scheme

In the resonant case, the Hamiltonian reads

H = ħG(a_0^†b_0 + a_0b_0^†) + ħG(a_0^†c_0 + a_0c_0^†) - iħκ/2 a_0^†a_0.

Because the initial state of the system is |0⟩_a|01⟩_bc, the relevant subspace again only includes the 3 basis states |0⟩_a|01⟩_bc, |0⟩_a|10⟩_bc and |1⟩_a|00⟩_bc. At any time t, the state of the system is

|ψ_r(t)⟩ = C_r1(t)|0⟩_a|01⟩_bc + C_r2(t)|0⟩_a|10⟩_bc + C_r3(t)|1⟩_a|00⟩_bc,

with

C_r1(t) = 1/2 + 1/2 e^-κt/4 cos(√(32G^2 - κ^2)t/4) + κ/√(32G^2 - κ^2) e^-κt/4 sin(√(32G^2 - κ^2)t/4),
C_r2(t) = -1/2 + 1/2 e^-κt/4 cos(√(32G^2 - κ^2)t/4) + κ/√(32G^2 - κ^2) e^-κt/4 sin(√(32G^2 - κ^2)t/4),
C_r3(t) = -i 4G/√(32G^2 - κ^2) e^-κt/4 sin(√(32G^2 - κ^2)t/4).

As in the large-detuning case, we plot the fidelity F = |⟨ψ_nr(t)|010⟩| in Fig. <ref>(a) and the probability P = |C_r1|^2 + |C_r2|^2 + |C_r3|^2 at t = 4π/√(32G^2 - κ^2) in Fig. <ref>(b), as functions of G and κ. Here |ψ_nr(t)⟩ is the normalized state of |ψ_r(t)⟩. Both P and F favor larger G and smaller κ. When we choose κ = 75.2 kHz and G = 50 kHz, the fidelity is F = 0.926 and the success probability is P = 0.59. For both the large-detuning and the resonant schemes, the quantum state transfer can thus be realized with high fidelity and success probability; in experiments, either of them can be chosen for convenience.

§ CONCLUSION

In this paper, we propose a scheme to couple the librational and translational modes of a levitated nanoparticle via an optical cavity mode. We discuss how to realize quantum state transfer from the librational mode to the translational mode, and vice versa. We also discuss the effects of cavity decay on the fidelity of the state transfer, and find that high-fidelity state transfer can be realized under practical experimental conditions.

§ FUNDING AND ACKNOWLEDGEMENT

National Natural Science Foundation of China (61435007); Joint Foundation of Ministry of Education of China; National Science Foundation (NSF) (1555035-PHY). We would like to thank Yue Ma for helpful discussions.

99

ASP2014 M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, “Cavity optomechanics,” Rev. Mod. Phys. 86, 1391 (2014).

PZ2012 M. Poot and H. S. J. van der Zant, “Mechanical systems in the quantum regime,” Phys. Rep. 511, 273 (2012).

Connell2010 A. D. O'Connell, M. Hofheinz, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, J. M. Martinis, and A. N. Cleland, “Quantum ground state and single-phonon control of a mechanical resonator,” Nature 464, 697 (2010).

Chan2011 J. Chan, T. P. M. Alegre, A. H. Safavi-Naeini, J. T. Hill, A. Krause, S. Gröblacher, M. Aspelmeyer, and O. Painter, “Laser cooling of a nanomechanical oscillator into its quantum ground state,” Nature 478, 89 (2011).

Chen2013 Y. Chen, “Macroscopic quantum mechanics: theory and experimental concepts of optomechanics,” J. Phys. B: At. Mol. Opt. Phys. 46, 104001 (2013).

Yin2017 Z.-q. Yin and T. Li, “Bringing quantum mechanics to life: from Schrödinger's cat to Schrödinger's microbe,” Contemp. Phys. 58, 119 (2017).

Teufel2009 J. D. Teufel, T. Donner, M. A. Castellanos-Beltran, J. W. Harlow, and K. W. Lehnert, “Nanomechanical motion measured with an imprecision below that at the standard quantum limit,” Nat. Nanotech. 4, 820 (2009).

Yin2015 Z. Q. Yin, W. L. Yang, L. Sun, and L. M. Duan, “Quantum network of superconducting qubits through an optomechanical interface,” Phys. Rev. A 91, 012333 (2015).

Li2013 H.-K. Li, X.-X. Ren, Y.-C. Liu, and Y.-F. Xiao, “Photon-photon interactions in a largely detuned optomechanical cavity,” Phys. Rev. A 88, 053850 (2013).

Li2011 T. Li, S. Kheifets, and M. G. Raizen, “Millikelvin cooling of an optically trapped microsphere in vacuum,” Nature Phys. 7, 527 (2011).

Romero2010 O. Romero-Isart, M. L. Juan, R. Quidant, and J. I. Cirac, “Toward quantum superposition of living organisms,” New J. Phys. 12, 033015 (2010).

Jain2016 V. Jain, J. Gieseler, C. Moritz, C. Dellago, R. Quidant, and L. Novotny, “Direct Measurement of Photon Recoil from a Levitated Nanoparticle,” Phys. Rev. Lett. 116, 243601 (2016).
Chang2010 D. E. Chang, C. A. Regal, S. B. Papp, D. J. Wilson, J. Ye, O. Painter, H. J. Kimble, and P. Zoller, “Cavity opto-mechanics using an optically levitated nanosphere,” Proc. Natl. Acad. Sci. USA 107, 1005 (2010).

Ranjit2016 G. Ranjit, M. Cunningham, K. Casey, and A. A. Geraci, “Zeptonewton force sensing with nanospheres in an optical lattice,” Phys. Rev. A 93, 053801 (2016).

Rider2016 A. D. Rider, D. C. Moore, C. P. Blakemore, M. Louis, M. Lu, and G. Gratta, “Search for screened interactions associated with dark energy below the 100 μm length scale,” Phys. Rev. Lett. 117, 101101 (2016).

Moore2014 D. C. Moore, A. D. Rider, and G. Gratta, “Search for Millicharged Particles Using Optically Levitated Microspheres,” Phys. Rev. Lett. 113, 251801 (2014).

Romero2011 O. Romero-Isart, A. C. Pflanzer, F. Blaser, R. Kaltenbaek, N. Kiesel, M. Aspelmeyer, and J. I. Cirac, “Large Quantum Superpositions and Interference of Massive Nanometer-Sized Objects,” Phys. Rev. Lett. 107, 020405 (2011).

Yin2013 Z. Yin, T. Li, X. Zhang, and L. Duan, “Large quantum superpositions of a levitated nanodiamond through spin-optomechanical coupling,” Phys. Rev. A 88, 033614 (2013).

Shi2016 H. Shi and M. Bhattacharya, “Optomechanics based on angular momentum exchange between light and matter,” J. Phys. B: At. Mol. Opt. Phys. 49, 153001 (2016).

Shi2013 H. Shi and M. Bhattacharya, “Coupling a small torsional oscillator to large optical angular momentum,” J. Mod. Opt. 60, 382 (2013).

Hoang2016 T. M. Hoang, Y. Ma, J. Ahn, J. Bang, F. Robicheaux, Z. Yin, and T. Li, “Torsional Optomechanics of a Levitated Nonspherical Nanoparticle,” Phys. Rev. Lett. 117, 123604 (2016).

Stickler2016 B. A. Stickler, S. Nimmrichter, L. Martinetz, S. Kuhn, M. Arndt, and K. Hornberger, “Ro-translational cavity cooling of dielectric rods and disks,” Phys. Rev. A 94, 033818 (2016).

Kuhn2016 S. Kuhn, A. Kosloff, B. A. Stickler, F. Patolsky, K. Hornberger, M. Arndt, and J. Millen, “Full rotational control of levitated silicon nanorods,” Optica 4, 356 (2017).

Nagornykh2016 P. Nagornykh, J. E. Coppock, J. P. J. Murphy, and B. E. Kane, “Optical and magnetic measurements of gyroscopically stabilized graphene nanoplatelets levitated in an ion trap,” arXiv:1612.05928 (2016).

Zhong2017 C. Zhong and F. Robicheaux, “Shot noise dominant regime of a nanoparticle in a laser beam,” arXiv:1701.04477 (2017).

Marquardt2007 F. Marquardt, J. P. Chen, A. Clerk, and S. Girvin, “Quantum Theory of Cavity-Assisted Sideband Cooling of Mechanical Motion,” Phys. Rev. Lett. 99, 093902 (2007).

Wilson2007 I. Wilson-Rae, N. Nooshi, W. Zwerger, and T. J. Kippenberg, “Theory of Ground State Cooling of a Mechanical Oscillator Using Dynamical Backaction,” Phys. Rev. Lett. 99, 093901 (2007).

Frimmer2016 M. Frimmer, J. Gieseler, and L. Novotny, “Cooling Mechanical Oscillators by Coherent Control,” Phys. Rev. Lett. 117, 163601 (2016).

Ralph2016 J. F. Ralph, K. Jacobs, and J. Coleman, “Coupling rotational and translational motion via a continuous measurement in an optomechanical sphere,” Phys. Rev. A 94, 032108 (2016).

James2007 D. James and J. Jerke, “Effective Hamiltonian theory and its applications in quantum information,” Can. J. Phys. 85, 625 (2007).

Yin2009 Z.-q. Yin and Y.-J. Han, “Generating EPR beams in a cavity optomechanical system,” Phys. Rev. A 79, 024301 (2009).

Stickler2016a B. A. Stickler, B. Papendell, and K. Hornberger, “Spatio-orientational decoherence of nanoparticles,” Phys. Rev. A 94, 033828 (2016).
Zhong2016 C. Zhong and F. Robicheaux, “Decoherence of rotational degrees of freedom,” Phys. Rev. A 94, 052109 (2016).

Buck2003 J. R. Buck Jr., Cavity QED in microsphere and Fabry-Perot cavities, PhD Thesis, California Institute of Technology (2003).

Plenio1998 M. B. Plenio and P. L. Knight, “The quantum-jump approach to dissipative dynamics in quantum optics,” Rev. Mod. Phys. 70, 101 (1998).

Huang2016 Y. Huang, Z.-q. Yin, and W. L. Yang, “Realizing a topological transition in a non-Hermitian quantum walk with circuit QED,” Phys. Rev. A 94, 022302 (2016).
http://arxiv.org/abs/1703.08645v1
{ "authors": [ "Shengyan Liu", "Tongcang Li", "Zhang-qi Yin" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170325040806", "title": "Coupling librational and translational motion of a levitated nanoparticle in an optical cavity" }
http://arxiv.org/abs/1703.08971v3
{ "authors": [ "Chandrasekhar Chatterjee", "Muneto Nitta" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170327083754", "title": "BPS Alice strings" }
Cosmic rays (CRs) govern the energetics of present-day galaxies and might have also played a pivotal role during the Epoch of Reionization. In particular, energy deposition by low-energy (E ≲ 10 MeV) CRs accelerated by the first supernovae might have heated and ionized the neutral intergalactic medium (IGM) well before (z ≈ 20) it was reionized, significantly adding to the similar effect of X-rays or dark matter annihilations. Using a simple but physically motivated reionization model, and a thorough implementation of CR energy losses, we show that CRs contribute negligibly to IGM ionization, but heat it substantially, raising its temperature by ΔT = 10-200 K by z = 10, depending on the CR injection spectrum. Whether this IGM pre-heating is uniform or clustered around the first galaxies depends on CR diffusion, in turn governed by the efficiency of self-confinement due to plasma streaming instabilities, which we discuss in detail. This aspect is crucial for the interpretation of future HI 21 cm observations, which can be used to gain unique information on the strength and structure of early intergalactic magnetic fields, and on the efficiency of CR acceleration by the first supernovae.

§ INTRODUCTION

Sometime during the first billion cosmic years the first stars formed. Ionizing photons emitted by these sources induced a major phase transition from the otherwise cold and neutral state in which the intergalactic medium (IGM) was left after recombination (z ≲ 500) to the warm/ionized state we measure today. The transition phase is known as the Epoch of Reionization (EoR). Understanding the detailed EoR physics is one of the primary goals of present-day cosmology <cit.>. To date, we only have a broad-brush picture of this important event; nevertheless, sizable steps forward have been made in the last decade. High-redshift (z ≲ 6) quasar absorption line experiments, probing the physical state of the neutral IGM, indicate that reionization was complete by z ∼ 5.7 <cit.>. The CMB polarization measurements made by PLANCK <cit.> have set tight bounds on the value of the free-electron scattering optical depth, τ_e = 0.066 ± 0.016. Translating this value into a reionization redshift, z_r, yields uncertain results, as the conversion depends on the (yet unknown) reionization history. In the popular instantaneous reionization model one finds 7.8 < z_r < 8.8, with an upper limit on the EoR duration of Δz < 2.8. Thus, reionization might have been a remarkably fast (≤ 400 Myr) process.

Great advances in the understanding of the EoR history and physics are expected from a number of upcoming observations of the redshifted HI 21-cm line signal from these epochs <cit.>. Several experiments are attempting to measure the 21-cm signal from the EoR using low-frequency radio interferometers. These include GMRT[<http://gmrt.ncra.tifr.res.in>], LOFAR[<http://www.lofar.org>], MWA[<http://www.mwatelescope.org>], PAPER[<http://eor.berkeley.edu>], HERA[<http://reionization.org>] and, in the future, SKA[<http://www.skatelescope.org>]. Measuring such a signal will provide direct information on the pre-reionization ages: the 21-cm signal will inform us of the HI spin temperature as a function of redshift. As such, it will probe the efficiency of the gas heating and ionization mechanisms that might have played a role from recombination to the time at which the first light sources appeared (the so-called "Dark Ages").
The first stars and galaxies are likely to be the main sources of the ionizing UV radiation <cit.>. Both theoretical arguments and numerical studies suggest that the first generation of stars (known as Pop III stars) were more massive than present-day ones and metal-free. These short-lived stars polluted the surrounding gas with metals, inducing a rapid transition to a cosmological star formation rate (SFR) dominated by present-day, Pop II/I, stars <cit.>.

However, sources of higher-energy X-ray photons might also have been present. X-rays have a far longer mean free path than lower-energy UV photons. Therefore, these photons are able to travel significantly larger distances in the neutral IGM and release a significant amount of energy, eventually increasing the temperature of the intergalactic gas. Potential X-ray sources are quasars, supernovae and X-ray binaries. However, very little is known about their abundances, evolution and spectra, especially at these high redshifts <cit.>.

An additional contribution to IGM heating has been proposed by <cit.>. The deposition of kinetic energy into the IGM via plasma instabilities <cit.> triggered by TeV photons from blazars may yield a higher heating rate than photoheating for z ≲ 6. Such an effect is most relevant in intergalactic voids, i.e. in the less dense regions of the ionized medium, and allows Lyman-α forest observables to be better reproduced over the redshift range 2 < z < 3 <cit.>. On the other hand, recent numerical simulations suggest that this mechanism could be fairly ineffective under IGM conditions <cit.>, although we note that there is still no general consensus on this.

Similarly to X-ray photons, cosmic rays (CRs) – accelerated in the shocks created by exploding supernovae and promptly released into the IGM – could also efficiently deposit their energy in the form of gas heating <cit.>. Curiously, this heating source has received relatively little attention in the literature so far, despite the fact that it has been known for decades that low-energy CRs play a key role in the overall energetics of local galaxies and, in particular, in regulating the ionization and thermal state of the interstellar gas.

At higher redshifts, CRs should be able to escape their host galaxies by advection or by diffusion before losing a significant fraction of their energy within the halo <cit.>. As they propagate in the IGM, CRs interact with the surrounding environment mainly via ionization of H/He and Coulomb collisions with free electrons, and by doing so they deposit thermal energy in the gas. Both mechanisms imply that CRs could contribute to the thermal history of the IGM <cit.>. Earlier studies of the impact of CRs on the high-redshift IGM mainly focused on the cosmological CRs originating from Pop III stars <cit.>. Recently, the authors of <cit.> found that low-energy particles (E ≲ 30 MeV) are capable of increasing the IGM temperature to 10-100 K before standard heating sources, such as galaxies and quasars, appear.

In the present work, we reanalyze the role played by CRs from high-z Pop II stars. In fact, it has been shown that in realistic models of galaxy formation, chemical feedback suppresses metal-free star formation in the self-enriched progenitors, and although Pop III star formation can in principle persist down to z ∼ 3-4, Pop II stars dominate the SFR at any redshift <cit.>.
Moreover, Pop II stars are observed in the local Universe and their properties can be much more robustly constrained.

In <ref>, we build a simple reionization model and link it to the cosmic star formation rate. In <ref> we introduce the treatment of CR production, energy losses and propagation. With these ingredients, in <ref> we compute the CR contribution to IGM heating and explore the implications of the spatial dependence of the temperature increment. Results and assumptions are discussed in <ref>. We use the cosmological parameters: h = 0.678, Ω_m = 0.308, Ω_Λ = 0.692, Ω_b h^2 = 0.0223, n_s = 0.968 and τ_e = 0.066 <cit.>.

§ REIONIZATION BY EARLY GALAXIES

In this Section, we assume that galaxies are the primary reionization sources. This scenario is supported by observations of high-redshift galaxies, which would be able to reionize the Universe by z = 6, provided that a substantial fraction of their ionizing emission escapes into the IGM <cit.>. Our model implements the basic physical processes required to properly model the IGM evolution in the presence of ionizing radiation from galaxies. This formalism allows us to track the IGM thermal and ionization history, and to reproduce the available observational EoR data. A more detailed treatment can be obtained by means of numerical simulations, e.g. in <cit.>.

§.§ Star Formation Rate

We assume that the star formation rate inside a DM halo is proportional to its mass, ρ̇_*(M_h) ∝ M_h, and that star formation occurs on a free-fall timescale, t_ff = √(3π/(32 G_N ρ_m)), where G_N is the gravitational constant and ρ_m is the average mass density inside the virial radius of the halo <cit.>. We further assume that stars only form in Lyα cooling halos <cit.>, i.e. those above a mass

M_Lyα(z) ∼ 10^8 M_⊙ (10/(1+z))^3/2.

Radiative feedback is expected to quench star formation in halos with circular velocity at the virial radius, V_c, smaller than a critical value V̅_c. The corresponding minimum mass, M_rf(z), is obtained from the relationship <cit.>

V̅_c = 24 km/s (M_rf/10^8 M_⊙)^1/3 ((1+z)/10)^1/2.

We can then write the cosmological SFR per unit comoving volume at redshift z as

ρ̇_*(z) = f_* Ω_b/Ω_m ∫_M_min(z)^∞ dM_h (M_h/t_ff(M_h)) dN/(dM_h dV),

where f_* is the star-forming efficiency, M_min is the maximum between M_Lyα and M_rf, Ω_b and Ω_m are the density parameters of baryonic and total matter, respectively, and dN/(dM_h dV) gives the number density of haloes within the mass range (M_h, M_h + dM_h). To estimate the DM halo mass function we use the Press-Schechter formalism augmented by the Sheth-Tormen correction for ellipsoidal collapse <cit.>. As we show in Fig. <ref>, a value of f_* = 0.02 and V̅_c = 100 km/s allows us to reproduce the SFR measurements reported by <cit.>.

§.§ Reionization history

We denote the neutral hydrogen (HI) fraction by x_HI = n_HI/n_H and the ionized fraction by x_HII = n_HII/n_H, with x_HI + x_HII = 1, where n_x is the number density of species x. It is further assumed that the ionization fractions of singly ionized helium and hydrogen are equal, x_HeII = x_HII. We can then write the evolution equation for the ionized fraction in terms of the photoionization rate, Γ_ion, and the recombination rate, R, as

dx_HII/dz = dt/dz [Γ_ion x_HI - R].
Since stellar radiation contributes to ionizing the IGM with photons of energy higher than the ionization threshold I_H = hν_0 = 13.6 eV, one can write the ionization rate as <cit.>

Γ_ion(z) = ∫_ν_0^∞ dν λ_HI(z,ν) σ_HI(ν) ṅ_γ(z,ν),

where the ionization cross-section is σ_HI(ν) = σ_0 (ν/ν_0)^-3, with σ_0 = 6.3×10^-18 cm^2. The mean free path of hydrogen-ionizing photons depends on the distribution of absorbing Lyman-limit systems, which can be computed through the distribution of the column density N_HI <cit.>. Assuming the distribution of absorbers to be given by f(N_HI)dN_HI ∝ N_HI^-3/2 dN_HI <cit.>, the mean free path increases with frequency as in <cit.>,

λ_HI(z,ν) = λ_ν_0(z) (ν/ν_0)^3/2,

with λ_ν_0(z) depending on the size and topology of the ionized regions. In <cit.>, λ_ν_0(z) is derived from the column density distribution of the Lyman-limit systems and is found to be a rapidly evolving function of z. We then assume

λ_ν_0(z) ≈ 39 ((1+z)/4)^-5 Mpc.

Finally, in Eq. (<ref>), the proper specific production rate of ionizing photons is given by

ṅ_γ(z,ν) = f_esc dN_γ/(dM dν) ρ̇_*(z) (1+z)^3;

f_esc is the escape fraction of ionizing photons from galaxies, and dN_γ/(dM dν) the specific number of photons produced per unit mass of Pop II stars formed. Here we assume that the UV stellar spectrum is a power law ∝ ν^-β and that, integrated over frequency, dN_γ/dM = 8.05×10^60 M_⊙^-1 <cit.>. Putting it all together, Eq. (<ref>) can then be simplified to

Γ_ion(z) = (β-1)/(β+1/2) σ_0 f_esc dN_γ/dM λ_ν_0(z) ρ̇_*(z) (1+z)^3 ≈ 5×10^-8 s^-1 (f_esc/10^-2) (ρ̇_*(z)/(M_⊙ Mpc^-3 yr^-1)) (1+z)^-2,

where β = 5 is a typical value for Pop II stars. The recombination rate can be expressed as

R(z) = α_A(T_k^i) C n_e(z) x_HII(z) = α_A(T_k^i) C (1+χ_He) n_H x_HII^2,

where α_A is the case-A recombination coefficient, n_e(z) = x_HII(z)(1+χ_He)n_H(z) is the total electron number density and χ_He the cosmic helium fraction (in density). For the clumping factor C ≡ ⟨n_HII^2⟩/⟨n_HII⟩^2 ≈ 2 we take the fiducial average given by <cit.>.

The evolution of the gas temperature in the ionized regions (T_k^i) is given by the combination of all the cooling and heating processes <cit.>,

dT_k^i/dz = dT_k^i/dz|_ex + dT_k^i/dz|_ion + dT_k^i/dz|_ph,

where the cooling due to the expansion of the Universe can be written as

dT_k^i/dz|_ex = 2T_k^i/(1+z),

and the heating due to the change in the internal energy, corresponding to the change in the total number of gas particles due to ionizations (He ionizations are assumed to be negligible), is

dT_k^i/dz|_ion = -T_k^i/(1+x_e) dx_e/dz,

with x_e = n_e/n_H. The last term is the heat gained by the gas particles from the surrounding radiation field,

dT_k^i/dz|_ph = 2ε/(3k_B(1+x_e)) dt/dz,

where k_B is the Boltzmann constant and ε is the photoheating rate per baryon. Analogously to Eq. (<ref>), the latter can be written as

ε(z) = x_HI(z) ∫_ν_0^∞ dν λ_HI(z,ν) σ_HI(ν) ṅ_γ(z,ν) h(ν - ν_0),

which we can integrate over frequency, obtaining

ε(z) = (β-1)/(β^2-1/4) x_HI(z) λ_ν_0(z) σ_0 f_esc hν_0 dN_γ/dM ρ̇_*(z) (1+z)^3 ≈ 3×10^-19 erg s^-1 x_HI(z) (f_esc/10^-2) (ρ̇_*(z)/(M_⊙ Mpc^-3 yr^-1)) (1+z)^-2.

Finally, we choose the value of f_esc by requiring that the total optical depth, defined as

τ_e(z) = ∫_0^z n_e(z') σ_T dl/dz' dz',

does not exceed the 3σ value measured by PLANCK, τ_e = 0.066 ± 0.016 <cit.>.
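As a concrete illustration, the evolution equation for x_HII can be integrated numerically once Γ_ion(z) and R(z) are specified. The following sketch (Python; the analytic SFR fit, the helium fraction and the recombination coefficient value are simplifying assumptions of ours, standing in for the halo-mass-function integral of the previous subsection) implements this machinery:

import numpy as np
from scipy.integrate import solve_ivp

H0 = 0.678 * 3.241e-18          # Hubble constant [1/s]
Om = 0.308
nH0 = 1.9e-7                    # comoving hydrogen number density [cm^-3]
chi_He = 0.08                   # helium fraction; assumed value
alpha_A = 4.2e-13               # case-A recombination coeff. at ~10^4 K [cm^3/s]
C_clump, f_esc = 2.0, 1e-2

def hubble(z):                  # matter-dominated Hubble rate, adequate at z >> 1
    return H0 * np.sqrt(Om) * (1 + z)**1.5

def sfr(z):                     # schematic fit to the model SFR [Msun/yr/Mpc^3]
    return 0.2 * np.exp(-(z - 6.0) / 3.0)

def gamma_ion(z):               # photoionization rate, numerical form above [1/s]
    return 5e-8 * (f_esc / 1e-2) * sfr(z) * (1 + z)**-2

def dxdz(z, y):                 # dx_HII/dz = dt/dz * (Gamma_ion x_HI - R)
    x = min(max(y[0], 0.0), 1.0)
    R = alpha_A * C_clump * x**2 * (1 + chi_He) * nH0 * (1 + z)**3
    dtdz = -1.0 / ((1 + z) * hubble(z))
    return [dtdz * (gamma_ion(z) * (1 - x) - R)]

sol = solve_ivp(dxdz, (30.0, 5.0), [1e-4], method="LSODA", dense_output=True)
for z in (15, 10, 8, 6):
    print(f"z = {z:2d}: x_HII = {sol.sol(z)[0]:.3f}")

The sketch illustrates only the integration of the rate equation; the redshift at which x_HII reaches unity depends entirely on the assumed SFR fit and should not be read as the model's prediction.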
In Fig. <ref> we show the optical depth corresponding to f_esc = 10^-2, compared to the observed value of τ_e. We also show that our model predicts a fully reionized IGM by z ∼ 6, as inferred from the Gunn-Peterson trough detections <cit.>. Finally, we note that a very similar value of f_esc has been found by the authors of <cit.> by means of a more sophisticated approach, tested against more observables.

§ COSMIC RAYS IN THE IGM

CRs accelerated in early galaxies can in principle act as an additional source of non-thermal energy for the IGM. In the Milky Way, most of the CR energy is in protons. They diffuse or advect out of the Galaxy on timescales of about 30 Myr, which can be directly inferred from secondary-over-primary ratios <cit.>, with only a few percent of the energy lost in pion production and ionization <cit.>. Thus, a large fraction of the power injected in CRs in our Galaxy ends up in the surrounding IGM.

In our model, CRs are accelerated by star-forming galaxies with a universal energy spectrum, and their energy is released far beyond the circumgalactic gas, i.e. in the IGM. This conclusion is motivated by the fact that earlier structures are expected to be less confining than present-day galaxies, since they were smaller and had weaker magnetic fields. In fact, <cit.> and <cit.> argued that primary CRs escape from their parent galaxies on a timescale short enough that they do not suffer any energy loss. To follow the propagation of a homogeneous CR population in an expanding universe with a continuous source of CRs, we generalize the classical work of <cit.>, including all the relevant energy loss processes.

§.§ Production in the early galaxies

Star formation pumps energy into CR protons at a rate

Ė_p(z) = ϵ E_SN SNR(z) (1+z)^3,

where E_SN ∼ 10^51 erg is the average explosion energy of a Type II supernova (SN) not going into neutrinos, ϵ ∼ 0.1 is the fraction of the kinetic energy transferred to CRs by a single SN, and SNR is the comoving SN rate. In principle, one should also account for helium nuclei. For simplicity, we assume here that α-particles can be treated as four protons, and hence be absorbed into the proton spectrum efficiency. In addition to the SFR, to derive the SNR we need to know the number of SN explosions per solar mass of forming stars, which is given by

f_SN = ∫_8^50 ϕ(m) dm / ∫_0.1^100 m ϕ(m) dm ∼ 10^-2 M_⊙^-1,

where ϕ(m) is the Initial Mass Function (IMF) of Population II/I stars, for which we assume the form

ϕ(m) ∝ m^-1+x exp(-m_c/m),

with x = -1.35, m_c = 0.35 M_⊙, and m in the range [0.1, 100] M_⊙ <cit.>.

We neglect the contribution from Pop III stars. Their star formation history is much debated and still highly uncertain. In fact, detailed studies exploiting cosmological hydrodynamical simulations that implement chemical feedback effects have shown that Pop II/I stars dominate the global SFR at any redshift <cit.>. Moreover, recent surveys hunting for Pop III stars in the Milky Way have found no metal-free stars so far, implying that they are rare in the Milky Way even if they exist <cit.>.

Combining the above formulae, Eq. (<ref>) becomes

Ė_p(z) ∼ 10^-33 erg cm^-3 s^-1 (ϵ/0.1)(E_SN/10^51 erg)(f_SN/10^-2 M_⊙^-1)(ρ̇_*(z)/(M_⊙ yr^-1 Mpc^-3))(1+z)^3.

Particle acceleration in SN explosions is believed to occur through diffusive shock acceleration, which leads to power-law momentum spectra of the accelerated particles.
With this in mind, we assume that the source function of volume-averaged CR protons injected by SNe (defined as a rate per unit energy and volume) is

q_p(E,z) = C(z)/β(E) ((E^2 + 2E m_p c^2)/(E_0^2 + 2E_0 m_p c^2))^-α/2,

where E is the proton kinetic energy, E_0 = 1 GeV, m_p is the proton mass, β = v/c is the dimensionless velocity of the particle, α ≥ 2 is the slope of the differential spectrum of accelerated particles, and C(z) is a redshift-dependent normalization obtained by imposing that the total kinetic energy rate equals Ė_p(z), i.e.

Ė_p(z) = ∫_E_min^E_max E q_p(E,z) dE.

In Eq. (<ref>) we fix E_max = 10^6 GeV <cit.>, and we verify a posteriori that our conclusions do not depend strongly on our choice of E_min = 10 keV. From Eq. (<ref>) one can easily see that α is the key parameter determining the fraction of the total kinetic energy released that goes into protons with E ≪ 1 GeV. We keep α as the only free parameter of the model. In Fig. <ref> we plot the source function as a function of the kinetic energy for three different values of α. We find that protons with kinetic energies below 10 MeV carry 0.5%, 2.7% and 13% of the total kinetic energy released for α = 2, 2.2 and 2.5, respectively.

§.§ Energy losses in the IGM

CRs can be an efficient heat source, especially for a low-density gas. When a CR proton ionizes an atom, it transfers a fraction of its kinetic energy to the electron, which is either used for further atomic excitation and ionization, or distributed via elastic collisions to the other species of the medium. The latter process increases the kinetic temperature of the gas. Ionization losses can be taken into account using the Bethe-Bloch equation, which for γ ≪ m_p/m_e can be approximated as <cit.>

-dE/dt|_I = 4πe^4/(m_e β c) ∑_Z Z x_HI n_Z [ln(2m_e c^2 P^2/I_Z) - β^2],

where m_e is the electron mass, n_Z is the number density of the elements with atomic number Z, I_Z is the ionization potential (I_H = 13.6 eV and I_He = 24.6 eV), and P = p/(m_p c) = √(γ^2-1) is the dimensionless particle momentum. Losses due to Coulomb interactions, in which the energy lost by protons is directly transferred to the momentum of the plasma electrons (and hence into heating), can be expressed as <cit.>

-dE/dt|_C = 4πe^4 n_e/(m_e β c) [ln(2m_e c^2 βP/(ħω_pl)) - β^2/2],

where ω_pl = (4πe^2 n_e/m_e)^1/2 is the plasma frequency. The number density of free electrons, n_e, is computed from our reionization model described in <ref>, assuming only stellar radiation ionizations; indeed, we verify a posteriori that CR ionizations are a subdominant contribution (see <ref>).

Inverse Compton scattering off CMB photons can be safely neglected, since its timescale is much longer than that of the collisional processes <cit.>. Finally, the adiabatic energy losses caused by the Hubble expansion can be taken into account as <cit.>

-dE/dt|_a = E(E + 2m_p c^2)/(E + m_p c^2) · 1/(1+z) dz/dt.

For CR energy deposition to be effective, its timescale must be shorter than the Hubble time,

t_i = E/(dE_i/dt) ≤ t_H, where t_H(z) = ∫_∞^z dz' dt/dz' ≃ 2(1+z)^-3/2/(3H_0Ω_m^1/2) ≃ 0.2 ((1+z)/21)^-3/2 Gyr.

In Fig. <ref> we plot t_i/t_H as a function of redshift for the loss mechanisms discussed above. Energy losses are efficient for kinetic energies ≲ 10 MeV. Ionization losses dominate over Coulomb losses at earlier epochs, when the IGM was mainly neutral.
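For orientation, the ionization-loss timescale implied by the Bethe-Bloch expression above can be compared against the Hubble time with a few lines of code. The sketch below (Python, cgs units; restricting to hydrogen, a fully neutral IGM and a matter-dominated expansion are our simplifications) reproduces the qualitative statement that losses are efficient below ∼10 MeV:

import numpy as np

e, me, mp, c = 4.803e-10, 9.109e-28, 1.673e-24, 2.998e10   # cgs constants
H0, Om = 2.197e-18, 0.308
nH0 = 1.9e-7                    # comoving hydrogen density [cm^-3]
I_H = 13.6 * 1.602e-12          # H ionization potential [erg]

def t_ionization(E_MeV, z, xHI=1.0):
    """Kinetic energy over the Bethe-Bloch loss rate (hydrogen only) [s]."""
    E = E_MeV * 1.602e-6
    gamma = 1.0 + E / (mp * c**2)
    P = np.sqrt(gamma**2 - 1.0)          # dimensionless momentum p/(m_p c)
    beta = P / gamma
    nH = nH0 * (1 + z)**3
    loss = (4*np.pi*e**4 / (me*beta*c) * xHI * nH
            * (np.log(2*me*c**2*P**2 / I_H) - beta**2))
    return E / loss

def t_hubble(z):
    return 2.0 / (3.0 * H0 * np.sqrt(Om)) * (1 + z)**-1.5

z = 20.0
for E_MeV in (0.1, 1.0, 10.0, 100.0):
    print(f"E = {E_MeV:6.1f} MeV:  t_I/t_H = {t_ionization(E_MeV, z)/t_hubble(z):.2e}")

At z = 20 the ratio is well below unity at 0.1-1 MeV and exceeds unity around ∼100 MeV, consistent with the trend described above.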
§.§ Propagation in the IGM

The evolution equation of the CR proton number density (averaged over the volume), n_p(E,z), can be written as follows <cit.>:

∂N_p/∂t + ∂(bN_p)/∂E + N_p/t_D = Q_p(E,z),

where the number density of protons and the source term are now normalized to n_H(z), i.e. N_p = n_p/n_H and Q_p = q_p/n_H, and t_D is the timescale of catastrophic losses. We also assume that the total energy loss rate b ≡ dE/dt is given by the sum of the loss processes described by Eqs. (<ref>), (<ref>) and (<ref>).

Proton-proton interactions cause CR energy loss on a timescale <cit.> t_pp^-1 = n_b c κσ_pp, where the inelasticity coefficient is κ ≈ 0.45 and σ_pp ≈ 35 mb is the total inelastic cross-section for proton-proton interactions. This results in t_pp ≈ 10^9 (1+z)^-3 Gyr, making this process negligible for our purposes. Eq. (<ref>) is solved numerically using the Crank-Nicolson implicit method described in <cit.>. The results are presented in Fig. <ref>, showing the redshift evolution of the proton spectrum. The effect of energy losses (solid vs. dashed lines) is evident mainly at low energies (E ≲ 10 MeV).

§.§ IGM ionization and heating

We now come to the central question: can CR energy losses appreciably affect the IGM ionization state and/or temperature? The primary ionization rate for H is

Γ_ion^CR = 1/W_H ∫_E_min^∞ |dE/dt|_I n_p(E) dE,

where W_H ≃ 36.3 eV is the mean energy expended by a CR proton to create an ion pair <cit.>. Following <cit.>, we account for all secondary and higher-generation ionizations by multiplying the primary ionization rate in Eq. (<ref>) by a factor ξ(x_e). For this we use a linear interpolation between the extreme cases ξ(1) = 1 and ξ(0) = 5/3, which reads

ξ(x_e) = 5/3 - 2/3 x_e.

In the case of Coulomb losses, we can assume that all the lost energy is entirely converted into background heat. The corresponding heating rate can be calculated by using Eq. (<ref>),

ℋ_C = ∫_E_min^∞ |dE/dt|_C n_p(E) dE.

The contribution to heating by the secondary electrons from ionization can be divided into three regimes according to their energy: for E > I_H, ionization or excitation of HI can occur; for 3I_H/4 < E < I_H, the electron can suffer losses by Coulomb and excitation collisions; for E < 3I_H/4, the energy is transferred directly into heating. An approximate general formula for the heating is given by <cit.>, leading to a total heating rate by CRs of

ℋ^CR = [W_H - ξ(x_e) I_H] Γ_ion^CR + ℋ_C.

It follows that, in a neutral medium, a heat input of ΔE = W_H - 5/3 I_H ∼ 13.6 eV is transferred to the IGM for every ionization of hydrogen by CR protons. We note, however, that the above expression is likely to overestimate the heating rate, as electron energy losses via excitations are not accounted for.
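Once the transport equation has been solved for the proton spectrum, the ionization and heating rates above reduce to quadratures over that spectrum. A schematic post-processing step could look like the following (Python; the trapezoidal helper and the cgs conventions are ours, and the spectrum and loss rates are assumed to be supplied on a common energy grid by the transport solver):

import numpy as np

W_H = 36.3 * 1.602e-12     # mean energy per ion pair [erg]
I_H = 13.6 * 1.602e-12     # H ionization potential [erg]

def trapz(f, x):           # simple trapezoidal rule
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def cr_ionization_and_heating(E, n_p, loss_I, loss_C, x_e):
    """E: energy grid [erg]; n_p: proton spectrum on the grid;
    loss_I, loss_C: |dE/dt| for ionization and Coulomb losses [erg/s]."""
    xi = 5.0/3.0 - 2.0/3.0 * x_e                  # secondary-ionization factor
    gamma_primary = trapz(loss_I * n_p, E) / W_H  # primary ionization rate
    heat_coulomb = trapz(loss_C * n_p, E)         # Coulomb heating rate
    heat_total = (W_H - xi * I_H) * gamma_primary + heat_coulomb
    return xi * gamma_primary, heat_total         # total ionization and heating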
§ RESULTS

We are now ready to discuss the effects of CRs on the IGM ionization fraction and temperature in our model.

§.§ Impact of CRs on reionization

The ionization rate computed with Eq. (<ref>) is shown in Fig. <ref> and compared to the ionization rate by UV photons from Eq. (<ref>). The CR ionization rate is several orders of magnitude smaller than the UV photoionization rate. This justifies the fact that CRs were not included when we described our reionization model in <ref>. This result can be better understood by comparing the stellar and CR emissivities in a more simplified scenario. We recall that the ionizing photon emissivity of galaxies is given by

ϵ_* = f_esc E_γ f_SN ρ̇_*,

where the energy in photons is given by E_γ = Ṅ_γ hν_0 t_*, with Ṅ_γ ∼ 5×10^47 s^-1 the rate of ionizing photons and t_* ∼ 30 Myr the stellar lifetime. In doing so, we assume that the UV emission of galaxies is dominated by the same stars that go supernova, with mass ∼ 10 M_⊙. On the other hand, the CR emissivity can be written as

ϵ_CR = f_d ϵ E_SN f_SN ρ̇_*,

where f_d ∼ 10^-3 is the fraction of energy deposited in the IGM and, as discussed in <ref>, corresponds to the fraction of energy in CR protons with E ≲ 10 MeV. We thus assume that all the deposited energy is used to ionize IGM atoms. The ratio between the corresponding fluxes is then

J_CR/J_* = ϵ_CR λ_CR/(ϵ_* λ_*) ∼ 10^-3 λ_CR/λ_*,

where λ_i designates the corresponding mean free path. Finally, the ratio of the ionization rates, J_i/τ_i, with τ_i the corresponding energy-loss time, can then be roughly estimated as 10^-3 by approximating τ_i ∝ λ_i.

§.§ IGM heating

While there is a consensus that UV stellar radiation is largely responsible for cosmic reionization, its impact on the global IGM temperature is limited to the fully ionized regions. For quasi-neutral IGM regions, X-rays are clearly more relevant due to their larger mean free path. These energetic photons might come from an early population of relatively soft X-ray binaries <cit.>. Alternatively, if sourced by black holes, they could pre-heat the gas up to 10^4 K <cit.>. However, uncertainties related to the nature and abundance of their sources at high redshift make predictions very uncertain <cit.>. For this reason, CRs might represent a competitive, alternative source of thermal input for the neutral IGM.

The IGM temperature increase produced by CR heating (Fig. <ref>) is

ΔT(z) = 2/3 ℋ^CR(z)/(k_B H(z)),

where ℋ^CR is given by Eq. (<ref>). The IGM temperature can be raised up to ∼ 3×10^3 K before reionization is complete, and it exceeds the CMB temperature, T_CMB(z) = 2.725(1+z) K, at z ≲ 9 (12) for α = 2 (2.5). These results imply that the IGM is pre-heated well before being reionized. Depending on the efficiency of the CR diffusion mechanism, pre-heating might be confined to regions around star-forming galaxies or, if diffusion is very efficient, it might give rise to a more distributed, quasi-uniform warm floor (see <ref>).

We can directly compare our predictions with the results presented in Fig. 1 of <cit.>. These authors also study IGM heating from low-energy CRs, finding a temperature increment between 1 and 10^3 K, mostly depending on the minimum halo mass allowed to form stars. The highest temperature was found for a supernova explosion energy E_SN = 10^53 erg (hence corresponding to a Pop III supernova), out of which 5% is pumped into low-energy CRs, and a minimum star-forming halo mass of 3×10^5 M_⊙. Our model predicts a similar ΔT while relying on fairly standard stellar populations (and supernova explosion energies), whose SFR has been calibrated with both the observed cosmic star formation history and the reionization constraints from the Thomson scattering optical depth.
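The temperature-increment formula above is a one-liner to evaluate. A minimal sketch (Python, cgs units; the heating value passed in is purely illustrative, not a model output, and ℋ^CR is taken here to be a rate per baryon) is:

import numpy as np

kB = 1.381e-16                 # Boltzmann constant [erg/K]
H0, Om = 2.197e-18, 0.308

def hubble(z):                 # matter-dominated Hubble rate, adequate at z >> 1
    return H0 * np.sqrt(Om) * (1 + z)**1.5

def delta_T(heat_cr, z):
    """CR heating rate per baryon [erg/s] -> temperature increase [K]."""
    return (2.0 / 3.0) * heat_cr / (kB * hubble(z))

print(f"{delta_T(1e-30, 10.0):.0f} K")   # ~1e2 K for 1e-30 erg/s per baryon at z = 10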
§.§ Diffusion in the IGM

As energetic particles are injected into the IGM, they undergo a random walk through it. The distance travelled depends on the strength and structure of the intergalactic magnetic field, about which very little is known at high redshift. The standard assumption is to treat CR energy deposition as a uniform background <cit.>. The rationale behind this assumption is based on the following argument. The slowest rate at which CRs can diffuse corresponds to so-called Bohm diffusion, which assumes one scattering per gyroradius; the diffusion coefficient is D_B = c r_L/3, where r_L is the particle Larmor radius. The maximum diffusion timescale between haloes is then given by t_B ≃ ⟨d⟩^2/D_B, where ⟨d⟩ is the average proper distance between them, and the Bohm diffusion coefficient for protons (Z = 1) can be estimated as

D_B(p,z) ∼ 1.1 Mpc^2/Gyr (p/(GeV/c)) (B_0/10^-16 G)^-1 ((1+z)/21)^-2,

where B_0 = 10^-16 G is the assumed IGM magnetic field strength at z = 20, following <cit.>. To estimate ⟨d⟩, we consider uniformly distributed haloes. Their average inter-distance is then

4π/3 ⟨d⟩^3 ∼ (M_h dN/dM_h)^-1 (1+z)^-3,

where dN/dM_h is the comoving halo density. At z = 20, the haloes contributing most to the SFR are those with M_h ∼ M_min(z=20), whose mean separation is ⟨d⟩ ∼ 50 kpc. From Eq. (<ref>) and Eq. (<ref>), we deduce that t_B ≲ t_H as long as the CR energy is E ≳ 20 keV. This result would support the idea of a uniform thermal deposition by CRs.

However, the above calculation is incomplete, as CRs may affect the environment in which they propagate. When CRs escape from the halo, they produce an electric current to which the background plasma reacts by generating a return current, which in turn leads to the development of small-scale instabilities. The growth of these instabilities generates large turbulent magnetic fields and thus enhanced particle scattering. In short, CRs may undergo self-confinement <cit.>. In this scenario, the particle diffusion timescale can be significantly larger than found in Eq. (<ref>). In order to estimate the maximum potential effect associated with this mechanism, we generalize the formalism developed by <cit.> to the non-relativistic regime; moreover, we maximize the effect by assuming that all escaping CRs contribute to the self-generated magnetic field.

The differential number density (in momentum) of CRs escaping from a halo, at a distance r from it, can be written as[The injection spectrum assumed here is equivalent to Eq. (<ref>), since dN/d^3p ∝ p^-4 corresponds to dN/dE ∝ p^-2.]

f(p,r) = dN_p/(dV d^3p) = A(r) (p/p_0)^-4,

where A(r) is obtained by imposing that the total pressure exerted on a surface S = 4πr^2 by the CR source, P_s = F_s/S ∼ (L_CR/c)/(4πr^2), is given by the CR pressure at the same distance,

P_CR ∼ ∫_p_min^p_max dp p^3 v(p) f(p,r).

The source luminosity in CRs, L_CR, for a typical star-forming halo at z = 20 is

L_CR = f_* f_SN E_CR (Ω_b/Ω_m) (M_h/t_ff) ∼ 10^38 erg s^-1.

By equating Eqs. (<ref>) and (<ref>), one has

A(r) = L_CR/(4πr^2c) [∫_p_min^p_max dp p^3 v(p) (p/p_0)^-4]^-1.

The electric current j_CR associated with the CRs streaming away from their sources can be written as

j_CR(p) = e ∫_p^p_max 4πp^2 dp v(p) f(p,r) ∼ (eL_CR/(c r^2 p_0)) g(p/p_0),

where we have introduced

g(x) = ∫_x^x_max dx β(xp_0) x^-2 / ∫_x_min^x_max dx β(xp_0) x^-1.

Assuming that the non-resonant modes are able to grow on a timescale much shorter than t_H, the magnetic field saturates at a value δB_s.
The saturation level is set by equipartition between the energy density of the amplified field and the kinetic energy density of the CR current (see Eq. (<ref>)):

δB_s^2/8π ∼ L_CR/(c r^2) [(p/p_0) g(p/p_0)]_max ∼ 0.01 L_CR/(c r^2);

the last approximate equality holds since (p/p_0) g(p/p_0) reaches a maximum at p ∼ GeV/c and remains constant for larger momenta. Numerically, this yields δB_s ≈ 0.01 μG at r = 1 kpc. The corresponding mean free path at a given epoch can finally be computed assuming Bohm diffusion, as in Eq. (<ref>), λ_CR = √(t_B D_B), where the magnetic field is given by Eq. (<ref>) with r now representing the mean free path. This results in

λ_CR = 1 kpc (t_i/Gyr) (L_CR/(10^38 erg s^-1))^-1/2 (p/(GeV/c)).

If this is the case, it would imply that CR heating is far from uniform; rather, it is highly patchy and clustered around the smallest star-forming halos. In practice, a number of neglected effects might reduce the efficiency of CR self-confinement: (a) the presence of neutrals outside the fully ionized bubbles can quickly damp the waves generated through the CR streaming instability; (b) the B-field equipartition value in Eq. (<ref>) might not be attained due to an inefficient conversion of CR into magnetic energy density. Moreover, a sufficiently strong intergalactic B-field and/or a smaller galactic CR luminosity may result in a magnetic-to-CR energy density ratio that is too large for the development of the instability in the non-resonant regime.

We note that observations of the redshifted HI 21 cm line from these high redshifts would be very sensitive to the morphology of the pre-heated, neutral regions. We thus expect the clustered heating scenario to leave unique imprints in the power spectrum of such radiation. Additionally, the analysis of the power spectrum should allow us to discriminate between X-rays and CRs as the heating source. Finally, we could also gain precious information about the strength and structure of early intergalactic magnetic fields and the efficiency of CR acceleration by the first SNe. All these aspects are very hard to investigate by any other means.
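The two order-of-magnitude estimates above are easy to check numerically. The sketch below (Python, cgs units; L_CR, the fiducial radius and the Hubble time at z = 20 are the values quoted in the text) evaluates the saturated field and the self-confined mean free path:

import numpy as np

c = 2.998e10
kpc = 3.086e21
L_CR = 1e38                       # CR luminosity of a z = 20 halo [erg/s]

def dB_sat(r_cm):                 # equipartition saturation of the field [G]
    return np.sqrt(8 * np.pi * 0.01 * L_CR / (c * r_cm**2))

print(f"dB_s(1 kpc) = {dB_sat(kpc) * 1e6:.3f} uG")   # ~0.01 uG, as in the text

# self-confined mean free path for a ~GeV/c proton over a Hubble time at z = 20
t_Gyr, p_GeV = 0.2, 1.0
lam_kpc = 1.0 * t_Gyr * (L_CR / 1e38)**-0.5 * p_GeV
print(f"lambda_CR = {lam_kpc:.1f} kpc  (vs. halo separation ~50 kpc)")

With t_i set to the Hubble time at z = 20, λ_CR remains well below the ∼50 kpc halo separation, which is the quantitative content of the clustered-heating argument.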
§ CONCLUSIONS

In this work we have shown that CRs can influence the temperature and ionization fraction of the IGM, using a self-consistent model for galaxy formation and cosmic reionization. The model was designed to reproduce the observed SFR at redshift z ≲ 10, and a cosmic history consistent with the latest PLANCK results. These data constrain the conversion efficiency of gas into stars (f_* = 0.04) and the population-averaged escape fraction of ionizing photons into the IGM (f_esc = 0.01). From the supernova rate evolution given by the model we further derived the CR energy density. CRs with energies < 1 MeV lose energy predominantly through ionizations at redshifts z > 10, and through Coulomb scattering at lower redshifts (see Fig. <ref>). The energy lost via Coulomb collisions goes directly into heat, increasing the IGM temperature at z ∼ 10 above the standard adiabatic thermal evolution by ΔT ∼ 10-200 K, depending on the slope of the CR injection spectrum in the range 2 < α < 2.5, and on the transport efficiency of CRs out of the first star-forming structures. Such an increase is comparable to or higher than that produced by two other popular heating mechanisms, i.e. X-rays <cit.> and dark matter annihilations <cit.>.

Plasma instabilities induced by blazar TeV photons can additionally heat the IGM above the temperature induced by photo-heating <cit.>. Compared to our results, this mechanism is relevant at lower redshifts, z ≲ 6, and provides a more uniform background. The signal yielded by such a contribution would therefore be easily distinguishable from the one expected from CRs.

Our model for CR injection and transport in the IGM is based on two commonly accepted assumptions: (1) CRs escape from star-forming structures on a timescale much shorter than the energy-loss timescale in the ISM; (2) CRs provide a spatially uniform energy-density floor, i.e. a "background". The first assumption is certainly valid for Milky Way protons with E > 100 MeV. Whether it holds for high-z galaxies depends on poorly known quantities, such as the turbulent magnetic fields in these objects. On general grounds, however, weaker magnetic fields should correspond to a larger diffusion coefficient and a smaller size of the magnetic halo; both factors lead to a shorter diffusion timescale than in the Galaxy.

The second assumption has been carefully investigated in <ref>. We showed that CRs escaping from galaxies trigger streaming instabilities, eventually amplifying the seed magnetic field up to equipartition. Under the most optimistic conditions for the development of the instabilities, such a self-generated magnetic field might efficiently confine GeV particles around haloes for a time largely exceeding the Hubble time at z ∼ 20. If true, this strongly clustered emission is expected to leave a specific imprint on the 21 cm line power spectrum, whose detection would allow us, for the first time, to study the structure and strength of magnetic fields in the Dark Ages. However, a testable prediction of this CR heating signature requires a more detailed model for the ejection and propagation of CRs in the pre-ionized bubbles, and will be investigated in a forthcoming work.

§ ACKNOWLEDGMENTS

N.L. thanks GSSI in L'Aquila for the warm hospitality during the preparation of the paper. We thank A. Mesinger for useful discussions. This work was partially supported by the "Helmholtz Alliance for Astroparticle Physics (HAP)", funded by the Initiative and Networking Fund of the Helmholtz Association, and by the Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Centre SFB 676 "Particles, Strings and the Early Universe".
http://arxiv.org/abs/1703.09337v1
{ "authors": [ "Natacha Leite", "Carmelo Evoli", "Marta D'Angelo", "Benedetta Ciardi", "Günter Sigl", "Andrea Ferrara" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170327231050", "title": "Do Cosmic Rays Heat the Early Intergalactic Medium?" }
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, USA

*lwaller@alum.mit.edu

Optical phase-space functions describe spatial and angular information simultaneously; examples include light fields in ray optics and Wigner functions in wave optics. Measurement of phase space enables digital refocusing, aberration removal and 3D reconstruction. High-resolution capture of 4D phase-space datasets is, however, challenging. Previous scanning approaches are slow, light-inefficient and do not achieve diffraction-limited resolution. Here, we propose a multiplexed method that solves these problems. We use a spatial light modulator (SLM) in the pupil plane of a microscope in order to sequentially pattern multiplexed coded apertures while capturing images in real space. Then, we reconstruct the 3D fluorescence distribution of our sample by solving an inverse problem via regularized least squares with a proximal accelerated gradient descent solver. We experimentally reconstruct a 101 Megavoxel 3D volume (1010×510×500 with NA 0.4), demonstrating improved acquisition time, light throughput and resolution compared to scanning-aperture methods. Our flexible patterning scheme further allows sparsity in the sample to be exploited for reduced data capture.

(100.3010) Image reconstruction techniques, (110.0180) Microscopy, (110.4980) Partial coherence in imaging, (110.1758) Computational imaging

§ INTRODUCTION

3D fluorescence microscopy is a critical tool for bioimaging, since most samples are thick and can be functionally labeled. High-resolution 3D imaging typically uses confocal <cit.>, two-photon <cit.> or light sheet microscopy <cit.>. Because these methods all involve scanning, they are inherently limited in terms of speed or volume. Light field microscopy <cit.>, on the other hand, achieves single-shot 3D capture, but sacrifices resolution. High resolution and single-shot capture are possible with coded aperture microscopy <cit.>; however, this requires an extremely sparse sample. Here, we describe a multi-shot coded aperture microscopy method for high-resolution imaging of large and dense volumes with efficient data capture.

Our work fits into the framework of phase-space optics, which is the general term for any space-angle description of light <cit.>. Light fields, for example, are phase-space functions for ray optics with incoherent light. The light field describes each ray's position and angle at a particular plane, which can be used for digital refocusing <cit.>, 3D imaging, aberration correction <cit.> or imaging in scattering <cit.>. In microscopy, wave-optical effects become prominent, so ray optics no longer suffices if one wishes to achieve diffraction-limited resolution. Hence, we use Wigner functions, which are the wave-optical analog of light fields <cit.>. The Wigner function describes spatial and spatial-frequency (propagation angle) information for a wave-field of arbitrary coherence. It converges to the light field in the limit of incoherent ray optics <cit.>. Capturing Wigner functions is akin to spatial coherence imaging <cit.>, since they contain the same information as the mutual intensity and coherent mode decompositions <cit.>. In the case of fluorescence microscopy, where the object is a set of incoherent emitters, Wigner representations define the incoherent 3D Optical Transfer Function (OTF) <cit.>. The 4D nature of phase space (two spatial and two spatial-frequency dimensions) poses significant measurement challenges.
Single-shot schemes (e.g. lenslet arrays <cit.>) must distribute the 4D function across a 2D sensor, severely restricting resolution. Scanning-aperture methods <cit.> are slow (∼minutes) and light-inefficient. We seek here a flexible trade-off between capture time and resolution, with the ability to exploit sparsity for further data reduction.

Our experimental setup consists of a widefield fluorescence microscope with a spatial light modulator (SLM) in Fourier space (the pupil plane). The SLM implements a series of quasi-random coded aperture patterns, while real-space images are collected for each (Fig. <ref>) <cit.>. The recovered 4D phase space has a very large pixel count (the product of the pixel counts of the SLM and the sensor), ∼10^12. Compared to scanning-aperture methods <cit.>, the new scheme has three major benefits. First, it achieves better resolution by capturing high-frequency interference effects (high-order correlations); this enables diffraction-limited resolution at the microscope's full numerical aperture (NA). Second, we achieve higher light throughput by opening up more of the pupil in each capture, which can be traded for shorter exposure times and faster acquisition. Third, the multiplexed nature of the measurements means that we can employ compressed sensing approaches (when samples are sparse) in order to capture fewer images without sacrificing resolution. This means that the number of images required scales not with the number of reconstructed voxels, but rather with the sparsity of the volume.

Our method can be thought of as a multi-shot coded aperture scheme for diffraction-limited 3D fluorescence microscopy. It is analogous to coded aperture photography <cit.>; however, we use a wave-optical model to account for diffraction effects, so intensity measurements are nonlinear in the complex field. Fluorescence imaging allows a simplification of the forward model, since each fluorophore is spatially coherent with itself but incoherent with all other fluorophores. Our reconstruction algorithm then becomes a large-scale inverse problem akin to multi-image 3D deconvolution, formulated as a convex ℓ_1-regularized least-square error problem and solved by a fast iterative shrinkage-thresholding algorithm (FISTA) <cit.>.

§ THEORY AND METHODS

We use the Wigner function (WF) and its close relative, the Mutual Intensity (MI) function, to describe our multiplexing scheme. The WF describes position and spatial frequency, where spatial frequency can be thought of as the direction of ray propagation in geometrical optics. The concept was introduced by Walther <cit.> to describe the connection between ray intensity and wave theory. Here we assume that the light is quasi-monochromatic, temporally stationary and ergodic, so the Wigner function is <cit.>:

W(𝐫,𝐮) ≜ ∬ ⟨Ẽ^*(𝐮 - Δ𝐮/2) Ẽ(𝐮 + Δ𝐮/2)⟩ e^i2πΔ𝐮·𝐫 d^2(Δ𝐮),

where 𝐫 = (x,y) denotes transverse spatial coordinates, 𝐮 = (u_x,u_y) denotes spatial frequency coordinates and ⟨·⟩ denotes an ensemble average. The quantity contained in the angle brackets,

MI(𝐮,Δ𝐮) ≜ ⟨Ẽ^*(𝐮 - Δ𝐮/2) Ẽ(𝐮 + Δ𝐮/2)⟩,

is, apart from a coordinate transform, the Mutual Intensity. E(𝐫) is a spatially coherent electric field (e.g. from a single fluorophore) and Ẽ(𝐮) is its Fourier transform. The ensemble average allows representation of both coherent and partially (spatially) coherent light. Here, we assume that the object is a 3D volume of incoherent emitters with no occlusions. Thus, the phase-space description of light from the object is a linear sum of that from each fluorophore.
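To build intuition for the definition above, the WF of a known 1D field can be computed directly. The sketch below (Python; the example field and sampling are arbitrary illustrative choices of ours, and we use the equivalent real-space form W(x,u) = ∫E(x+s/2)E^*(x-s/2)e^-i2πus ds) shows how a tilt in the field appears as a shift along the spatial-frequency axis – the ray-optics picture of the Wigner function:

import numpy as np

N = 256
x = np.linspace(-12.8, 12.8, N, endpoint=False)
E = np.exp(-x**2 / 4) * np.exp(1j * np.pi * x)    # tilted Gaussian beam

n = np.arange(N)
# K[j, s] = E(x_j + s) E*(x_j - s); the periodic wrap is acceptable here
# because the field decays to ~0 at the grid edges
K = E[(n[:, None] + n[None, :]) % N] * np.conj(E[(n[:, None] - n[None, :]) % N])
W = np.real(np.fft.fftshift(np.fft.fft(K, axis=1), axes=1))  # FFT over the shift s

marginal = W.sum(axis=0)                 # projecting over x gives ~|E~(u)|^2
print("peak frequency bin (0 = on-axis):", int(np.argmax(marginal)) - N // 2)

The nonzero peak bin reflects the uniform ray angle imparted by the linear phase on the Gaussian field.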
§.§ General forward model Our forward model relates each coded aperture's captured image to the 3D object's intensity. Each image contains information from multiple frequencies and their interference terms (which are key to resolution enhancement). The MI framework facilitates our mathematical analysis, since the projection of a mask in MI space is equivalent to applying the 3D Optical Transfer Function (OTF) to an incoherent source (see Fig. <ref>). MI analysis further reveals the interference effects that may be obscured by looking only at a projected quantity (the OTF). For simplicity, we first describe the forward model of a point source and then generalize to multiple mutually incoherent sources. Consider a complex electric field, specified by some properties α (e.g. location and wavelength of a point source). The field E_s(𝐫;α) acts like a unique coherent mode – it interferes coherently with itself but not with other modes. For a single coherent mode at the front focal plane of the 4f system in Fig. <ref>, the complex-field just before the SLM is the Fourier transform of the input complex-field <cit.>, expressed as Ẽ_s(𝐮_1;α), where 𝐮_1 is in units of spatial frequency that relate to the lateral coordinates 𝐫_1 on the SLM by multiplication with the wavelength λ and the front focal length f, such that 𝐫_1 = λf 𝐮_1. The field Ẽ_s(𝐮_1;α) is then multiplied by the coded aperture, giving
Ẽ_n(𝐮_1;α) = M_n(λf 𝐮_1) Ẽ_s(𝐮_1;α),
where Ẽ_n is the patterned complex-field in Fourier space and M_n represents the n'th binary coded aperture pattern. At the sensor plane (the back focal plane of the 4f system) the intensity I_n is:
I_n(𝐫;α) = ∬ Ẽ_n(𝐮_1;α) e^i2π𝐮_1·𝐫 d^2𝐮_1 ∬ Ẽ_n^*(𝐮_2;α) e^-i2π𝐮_2·𝐫 d^2𝐮_2,
where 𝐮_2 is a duplicate of 𝐮_1. The intensity image can be alternately related to the MI or WF by the simple coordinate transform 𝐮_1 = 𝐮 + Δ𝐮/2, 𝐮_2 = 𝐮 - Δ𝐮/2:
I_n(𝐫;α) = ∬∬ Ẽ_n(𝐮+Δ𝐮/2;α) Ẽ_n^*(𝐮-Δ𝐮/2;α) e^i2πΔ𝐮·𝐫 d^2Δ𝐮 d^2𝐮 = ∬∬ MI_n(𝐮,Δ𝐮;α) e^i2πΔ𝐮·𝐫 d^2Δ𝐮 d^2𝐮 = ∬ W_n(𝐫,𝐮;α) d^2𝐮,
where MI_n and W_n are the MI and WF associated with the field modified by the n'th mask. Describing the intensity measurement in terms of both its WF and MI gives new insights. (<ref>) shows that the intensity image is a projection of the patterned Wigner function W_n across all spatial frequencies. Alternately, looking at the Fourier transform of I_n, denoted by Ĩ_n, we can interpret the intensity measurement as a patterning and projection of the source MI:
Ĩ_n(Δ𝐮;α) = ∬ MI_n(𝐮,Δ𝐮;α) d^2𝐮 = ∬ M_n^*(λf(𝐮-Δ𝐮/2)) M_n(λf(𝐮+Δ𝐮/2)) MI_s(𝐮,Δ𝐮;α) d^2𝐮,
where MI_s is the MI of the source field E_s. This interpretation is illustrated for a 1D complex-field in Fig. <ref>. Each SLM pattern probes different parts of the input MI. By using multiple coded apertures, one may reconstruct the MI from its projections. The extension of our forward model to multiple coherent modes is straightforward. Since each mode is mutually incoherent with the others, we sum over all of them with their weights C(α):
I_n(𝐫) = ∑_α C(α) I_n(𝐫;α).
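Before specializing to incoherent sources, the single-mode forward model above can be simulated with two FFTs and a pupil mask. The following sketch is our illustration (grid scalings such as the λf factor are left implicit), not the authors' code:

import numpy as np

def coded_aperture_image(E_in, mask):
    # Intensity I_n(r) at the sensor for one coherent mode and binary mask M_n.
    E_pupil = np.fft.fftshift(np.fft.fft2(E_in))               # field just before the SLM
    E_sensor = np.fft.ifft2(np.fft.ifftshift(mask * E_pupil))  # second Fourier lens
    return np.abs(E_sensor) ** 2

# Mutually incoherent modes add in intensity with weights C(alpha):
# I_n = sum(C[a] * coded_aperture_image(modes[a], mask) for a in range(len(C)))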
To account for off-focus point sources, the field can be propagated to -z_s using angular spectrum propagation <cit.>:
Ẽ_s(𝐮_1;𝐫_s,z_s,λ) = e^(i2π/λ)(-z_s)√(1-λ^2|𝐮_1|^2) - i2π𝐫_s·𝐮_1 for λ|𝐮_1| ≤ 1, and 0 otherwise.
Using Eqs. (<ref>), (<ref>) and (<ref>), after some algebra the intensity becomes:
I_n(𝐫;𝐫_s,z_s,λ) = ∬ K_M_n,z_s(Δ𝐮;λ) e^i2π(𝐫-𝐫_s)·Δ𝐮 d^2Δ𝐮,
where
K_M_n,z_s(Δ𝐮;λ) = ∬ M_n^*(λf(𝐮-Δ𝐮/2)) M_n(λf(𝐮+Δ𝐮/2)) e^i2π(z_s/λ)(√(1-λ^2|𝐮-Δ𝐮/2|^2) - √(1-λ^2|𝐮+Δ𝐮/2|^2)) d^2𝐮
is the kernel for mask M_n at depth z_s. Plugging (<ref>) into (<ref>) gives the final expression of the forward model. Before doing so, we assume that the fluorescent color spectrum S(λ) is identical and known for all fluorophores. Thus, we can decompose the mode weights in (<ref>) into a product of the spectral weight and the 3D intensity distribution, S(λ)C(𝐫_s,z_s), giving:
I_n(𝐫) = ∭ ( ∬ ∑_λ S(λ) K_M_n,z_s(Δ𝐮;λ) e^i2π(𝐫-𝐫_s)·Δ𝐮 d^2Δ𝐮 ) C(𝐫_s,z_s) d^2𝐫_s dz_s.
Equation (<ref>) describes the forward model for a 3D fluorescent object C(𝐫_s,z_s) with no occlusions. The term in parentheses is a convolution kernel describing the 3D Point Spread Function (PSF) for mask M_n (shown in Fig. <ref>). For simplicity, we assume here no scattering, though incorporating the scattering forward model in <cit.> is straightforward. §.§ Inverse problem Based on the raw data and forward model, the inverse problem is formulated as a nonlinear optimization. Our goal is to reconstruct the 3D intensity distribution C(𝐫_s,z_s) from the measured images. To do so, we aim to minimize the data mismatch, with an ℓ_1 regularizer to mitigate the effects of noise (and promote sparsity where applicable). The mismatch is defined as the least-square error between the measured intensity images and the intensity predicted by our forward model (Eq. (<ref>)). This formulation has a smooth part and a non-smooth part in the objective function and is efficiently solved by a proximal gradient descent solver (FISTA <cit.>). To formulate the inverse problem, we first discretize the forward model in (<ref>) to be
𝐲 = 𝐀𝐱.
Here 𝐲 ∈ ℝ^MP×1 corresponds to the predicted images on the sensor; each small chunk (𝐲_n ∈ ℝ^P×1) of 𝐲 is a vectorized image I_n(𝐫). We discretize 𝐫 into P pixels, and the number of masks is M, so n = 1…M. Similarly, we discretize 𝐫_s and z_s into P' pixels and L samples, respectively, to obtain a vectorized version 𝐱 of C(𝐫_s,z_s) (𝐱 ∈ ℝ^LP'×1). The matrix 𝐀 ∈ ℝ^MP×LP', which is not materialized, represents the summation and convolution in (<ref>) using 2D Fast Fourier Transforms (FFTs) for each subvector (𝐱_l ∈ ℝ^P'×1) of 𝐱, with zero-padding to avoid periodic boundary condition errors. The convolution kernel is precomputed and stored for speed. The inverse problem becomes a data fidelity term plus an ℓ_1 regularization term with parameter μ:
min_𝐱≥0 (1/2)‖𝐀𝐱 - 𝐲_meas‖_2^2 + μ‖𝐱‖_1,
where 𝐲_meas ∈ ℝ^MP×1 is the measured intensity. We also use a diagonal matrix 𝐃 ∈ ℝ^LP'×LP' to lower the weight of point sources near the borders of images whose light falls off the sensor. Each diagonal entry of 𝐃 is obtained by summing the corresponding column in 𝐀. Outside point sources may also contribute to the measured intensity due to defocus; hence, we use an extended field-of-view method <cit.> to solve for more sample points in 𝐱 than in 𝐲 (i.e. P' > P). § DESIGN OF CODED APERTURES In the scanning-aperture scheme <cit.>, smaller apertures give better frequency sampling of the 4D phase space, at a cost of: 1) lower resolution, 2) lower signal-to-noise ratio (SNR) and 3) large data sets. Our multiplexing scheme alleviates all of these problems.
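A minimal FISTA sketch for the ℓ_1-regularized problem of the previous subsection is given below; it is our illustration, not the authors' implementation. The forward operator A and its adjoint At are assumed to be supplied as FFT-based convolution handles, the border-weighting matrix 𝐃 is omitted, and L denotes a Lipschitz constant of the gradient (e.g. estimated by power iteration):

import numpy as np

def fista(A, At, y_meas, mu, L, n_iter=200):
    # Solve min_{x >= 0} 0.5*||A x - y_meas||_2^2 + mu*||x||_1 via FISTA.
    x = np.zeros_like(At(y_meas))
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = At(A(z) - y_meas)                        # gradient of the smooth part
        x_new = np.maximum(z - (grad + mu) / L, 0.0)    # nonnegative soft-threshold prox
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x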
Multiplexing achieves diffraction-limited resolution by additionally capturing interference terms, which cover the full NA-limited bandwidth. This is evident in the Fourier transform of the captured images (Fig. <ref>). The SNR improvement is also visible; the multiplexed image is less noisy. Our masks are chosen by quasi-random selection without replacement. We section the SLM plane into 18×18 square blocks and keep only the 240 blocks that are inside the system NA. For each mask, we open 12 blocks, selected randomly from the blocks remaining after excluding ones that were open in previous sequences. In this scheme, the full NA can be covered by 20 masks. To allow for both diversity and redundancy, we choose to cover the entire pupil 5 times, resulting in 100 multiplexed aperture patterns, one of which is shown in Fig. <ref>. Importantly, the number of multiplexed patterns can be flexibly chosen to trade off accuracy for speed of capture. For instance, by increasing the number of openings in each pattern, we can cover the entire pupil with fewer patterns. This means that we may be able to reconstruct the object from fewer measurements, if the inverse problem is solvable. By using a priori information about the object (such as sparsity in 3D) as a constraint, we can solve under-determined problems with fewer total measured pixels than voxels in the reconstruction. § EXPERIMENTS Our experimental setup consists of the 4f system (f_1=250 mm, f_2=225 mm) shown in Fig. <ref>, with an additional 4f system in front, made of an objective lens (20× NA 0.4) and a tube lens (f=200 mm) to image the sample at the input plane. The SLM (1400×1050 pixels of size 10.3 μm) is a liquid crystal chip from a 3-LCOS projector (Canon SX50) which is reflective and polarization-sensitive, so we fold the optical train with a polarization beam splitter and insert linear polarizers. Our sensor (Hamamatsu ORCA-Flash4.0 V2) captures the multiplexed images and is synchronized with the SLM via computer control. Our sample is a fixed fluorescent brine shrimp (Carolina Biological). It is relatively dense, yet does not fill the entire 3D volume. The reconstructed 3D intensity (Fig. <ref>(a), <ref>(b) and <ref>(f)-<ref>(h)) is stitched from five volume reconstructions, each with 640×640×120 voxels to represent the sample volume of 455×455×600 μm^3. The reconstruction is cropped to the central part of our extended field of view, so the final volume contains 1422×715×100 voxels corresponding to 1010×510×500 μm^3. The dataset size is large (9 GB), and since the size of the 3D array is ∼5×10^7 without the extended field-of-view and the measured data is ∼4×10^7, the number of operations for evaluating Eq. (<ref>) is on the order of 3×10^10. This takes 4 seconds to compute on a computer with 48-core 3.0 GHz CPUs and requires 94 GB of memory to store the kernel ((<ref>)). The reconstructed 3D intensity is shown in Fig. <ref>, alongside images from a confocal microscope and a widefield focus stack for comparison. Both our method and the focus stack use a 0.4 NA objective, while the confocal uses 0.25 NA; hence, the confocal results should have slightly better resolution. Our reconstructed slices appear to have slightly lower resolution than the defocus stack and confocal, possibly due to the missing information in the frequency mutual intensity illustrated in Fig. <ref>(c), which is greatly undersampled. As expected, the depth slices of our reconstruction have better rejection of information from other depths, similar to the confocal images.
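The mask-selection rule described above can be written down in a few lines; the following is our sketch of the stated scheme (the block indexing is hypothetical), not the released code:

import numpy as np

rng = np.random.default_rng(0)

def make_masks(blocks_in_na, n_open=12, n_cover=5):
    # Quasi-random selection without replacement: each pass through the shuffled
    # block list opens every pupil block exactly once (240/12 = 20 masks per cover).
    masks = []
    for _ in range(n_cover):
        remaining = list(blocks_in_na)
        rng.shuffle(remaining)
        for i in range(0, len(remaining), n_open):
            masks.append(sorted(remaining[i:i + n_open]))
    return masks

masks = make_masks(range(240))
assert len(masks) == 100   # 5 covers x 20 masks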
To illustrate the flexible tradeoff between capture time (number of coded apertures used) and quality, we show reconstructions in Fig. <ref> using different numbers of coded aperture images. The case of only 1 image corresponds to a single coded aperture and gives a poor result, since the sample is relatively dense. However, with as few as 10 images we obtain a reasonable result, despite the fact that we are solving a severely under-determined problem. This is possible because the measurements are multiplexed, so the ℓ_1 regularizer acts as a sparsity promoter. § CONCLUSION We demonstrated 3D reconstruction of a large-volume, high-resolution, dense fluorescent object from multiplexed phase-space measurements. An SLM in Fourier space dynamically implements quasi-random coded apertures while intensity images are collected in real space for each coded aperture. Theory is developed in the framework of phase space, with relation to Mutual Intensity functions and 3D OTFs. Reconstruction is formulated as an ℓ_1-regularized least-square problem. This method enables diffraction-limited 3D imaging with high resolution across large volumes, efficient data capture and a flexible acquisition scheme for different types and sizes of samples. § FUNDING The Office of Naval Research (ONR) (Grant N00014-14-1-0083). § ACKNOWLEDGMENTS The authors thank Eric Jonas, Ben Recht, Jingzhao Zhang and the AMP Lab at UC Berkeley for help with computational resources.
http://arxiv.org/abs/1703.09187v2
{ "authors": [ "Hsiou-Yuan Liu", "Jingshan Zhong", "Laura Waller" ], "categories": [ "physics.optics" ], "primary_category": "physics.optics", "published": "20170327170846", "title": "Multiplexed phase-space imaging for 3D fluorescence microscopy" }
yx59@phy.duke.edu Department of Physics, Duke University, Durham, NC 27708, USA Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe Universität, Frankfurt am Main, Germany Institute for Theoretical Physics, Johann Wolfgang Goethe Universität, Frankfurt am Main, Germany GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstrasse 1, 64291 Darmstadt, Germany Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe Universität, Frankfurt am Main, Germany Institute for Theoretical Physics, Johann Wolfgang Goethe Universität, Frankfurt am Main, Germany Institut für Theoretische Physik, Universität Gießen, Heinrich-Buff-Ring 16, 35392 Gießen, Germany Department of Physics, Duke University, Durham, NC 27708, USA SUBATECH UMR 6457 (IMT Atlantique, Université de Nantes, IN2P3/CNRS), 4 rue Alfred Kastler, 44307 Nantes, France Department of Physics, Duke University, Durham, NC 27708, USA Institute for Theoretical Physics, Johann Wolfgang Goethe Universität, Frankfurt am Main, Germany GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstrasse 1, 64291 Darmstadt, Germany The impact of non-equilibrium effects on the dynamics of heavy-ion collisions is investigated by comparing a non-equilibrium transport approach, the Parton-Hadron-String-Dynamics (PHSD), to a 2D+1 viscous hydrodynamical model, which is based on the assumption of local equilibrium and conservation laws. Starting the hydrodynamical model from the same non-equilibrium initial condition as in the PHSD, using an equivalent lQCD Equation of State (EoS) and the same transport coefficients, i.e. the shear viscosity η and the bulk viscosity ζ, in the hydrodynamical model, we compare the time evolution of the system in terms of energy density, Fourier transformed energy density, spatial and momentum eccentricities and ellipticity in order to quantify the traces of non-equilibrium phenomena. In addition, we also investigate the role of initial pre-equilibrium flow on the hydrodynamical evolution and demonstrate its importance for final state observables. We find that due to non-equilibrium effects, the event-by-event transport calculations show large fluctuations in the collective properties, while ensemble averaged observables are close to the hydrodynamical results. 25.75.Nq, 25.75.Ld, 25.75.-q, 24.85.+p, 12.38.Mh Traces of non-equilibrium dynamics in relativistic heavy-ion collisions Elena Bratkovskaya Received .....; accepted .... ======================================================================= § INTRODUCTION Relativistic heavy-ion collisions produce a hot, dense phase of strongly-interacting matter commonly known as the quark-gluon plasma (QGP), which rapidly expands and freezes into discrete particles <cit.>. Since the QGP is not directly observable – only final-state hadrons and electromagnetic probes are detected – present research relies on dynamical models to establish the connection between observable quantities and the physical properties of interest. The output of these dynamical model simulations is analogous to experimental measurements, as they provide final state particle distributions from which a wide variety of observables related to the bulk properties of QCD matter can be obtained. In addition, dynamical models also provide information on the full space-time evolution of the QCD medium.
This information can be utilized for the study of rare probes, such as jets, heavy quarks and electromagnetic radiation, which are sensitive to the properties of the medium either via re-interaction with the QGP constituents or via their production. There exist a variety of different dynamical models, based on different transport concepts, that have been successful in describing current data measured at the Relativistic Heavy-Ion Collider at Brookhaven National Laboratory and at the Large Hadron Collider at CERN. These models may differ in their assumptions on the formation of the deconfined phase of QCD matter, the nature of its dynamical evolution, the mechanism of hadron formation and freeze-out and on many other details. A key question that needs to be addressed is how sensitive current observables are to these differences and what kind of strategy to pursue to ascertain which of these model features reflect the actual physical nature of the hot and dense QCD system. In this paper we perform a comparison of two prominent models for the evolution of bulk QCD matter. The first one is a non-equilibrium transport approach, the Parton-Hadron-String-Dynamics (PHSD) <cit.>, and the second one a 2D+1 viscous hydrodynamical model, VISHNew <cit.>, which is based on the assumption of local equilibrium and conservation laws. Non-equilibrium effects are considered to be strongest during the early phase of the heavy-ion reaction and thus may significantly impact the properties of probes with early production times, such as heavy quarks (charm and bottom hadrons), electromagnetic probes (direct photons and dileptons), and jets. Moreover, some bulk observables, such as correlation functions and higher-order anisotropy coefficients, might also retain traces of non-equilibrium effects <cit.>. In particular, the impact of the event-by-event fluctuations on the collective observables has been studied by Kodama et al. <cit.>. Based on the comparison of the coarse-grained hydrodynamical evolution with the PHSD dynamics, they find that in spite of large fluctuations on an event-by-event basis in the PHSD, the ensemble averages are close to the hydrodynamical limit. A similar behavior has been pointed out before within the PHSD study in Ref. <cit.>, where a linear correlation of the elliptic flow v_2 with the initial spatial eccentricity ε_2 has been obtained for the model study of an expanding partonic fireball (cf. Fig. 7 in Ref. <cit.>). Such correlations of v_2 versus ε_2 are expected in the ideal hydrodynamical case <cit.>. The large event-by-event fluctuations of the charge distributions have been addressed also in another PHSD study <cit.>. In the present paper our focus will be on isolating differences in the dynamical evolution of the system that can be attributed to non-equilibrium dynamics. The groundwork laid in this comparative study will hopefully lead to the development of new observables that have an enhanced sensitivity to the non-equilibrium components of the evolution of bulk QCD matter and that will allow us to quantify how far off equilibrium the system actually evolves. § DESCRIPTION OF THE MODELS §.§ PHSD transport approach The Parton-Hadron-String Dynamics (PHSD) transport approach <cit.> is a microscopic covariant dynamical model for strongly interacting systems formulated on the basis of Kadanoff-Baym equations <cit.> for Green's functions in phase-space representation (in first-order gradient expansion beyond the quasi-particle approximation).
The approach consistently describes the full evolution of a relativistic heavy-ion collision from the initial hard scatterings and string formation through the dynamical deconfinement phase transition to the strongly-interacting quark-gluon plasma (sQGP) as well as hadronization and the subsequent interactions in the expanding hadronic phase, as in the Hadron-String-Dynamics (HSD) transport approach <cit.>. The transport theoretical description of quarks and gluons in the PHSD is based on the Dynamical Quasi-Particle Model (DQPM) for partons that is constructed to reproduce lattice QCD results for the QGP in thermodynamic equilibrium <cit.> on the basis of effective propagators for quarks and gluons. The DQPM is thermodynamically consistent and the effective parton propagators incorporate finite masses (scalar mean-fields) for gluons/quarks as well as a finite width that describes the medium-dependent reaction rate. For fixed thermodynamic parameters (T, μ_q) the partonic widths Γ_i(T,μ_q) fix the effective two-body interactions that are presently implemented in the PHSD <cit.>. The PHSD differs from conventional Boltzmann approaches in a couple of essential aspects: i) it incorporates dynamical quasi-particles due to the finite width of the spectral functions (imaginary part of the propagators); ii) it involves scalar mean-fields that substantially drive the collective flow in the partonic phase; iii) it is based on a realistic equation of state from lattice QCD and thus describes the speed of sound c_s(T) reliably; iv) the hadronization is described by the fusion of off-shell partons to off-shell hadronic states (resonances or strings) and does not violate the second law of thermodynamics; v) all conservation laws (energy-momentum, flavor currents etc.) are fulfilled in the hadronization, contrary to coalescence models; vi) the effective partonic cross sections are not given by pQCD but are self-consistently determined within the DQPM and probed by transport coefficients (correlators) in thermodynamic equilibrium. The latter can be calculated within the DQPM or can be extracted from the PHSD by performing calculations in a finite box with periodic boundary conditions (shear and bulk viscosity, electric conductivity, magnetic susceptibility etc. <cit.>). Both methods show good agreement. In the beginning of relativistic heavy-ion collisions color-neutral strings (described by the LUND model <cit.>) are produced in highly energetic scatterings of nucleons from the impinging nuclei. These strings are dissolved into 'pre-hadrons' with a formation time of ∼0.8 fm/c in the rest frame of the corresponding string, except for the 'leading hadrons'. Those are the fastest residues of the string ends, which can re-interact (practically instantly) with hadrons with reduced cross sections in line with quark counting rules. If, however, the local energy density is larger than the critical value for the phase transition, which is taken to be ∼0.5 GeV/fm^3, the pre-hadrons melt into (colored) effective quarks and antiquarks in their self-generated repulsive mean-field as defined by the DQPM <cit.>. In the DQPM the quarks, antiquarks and gluons are dressed quasi-particles and have temperature-dependent effective masses and widths which have been fitted to lattice thermal quantities such as energy density, pressure and entropy density.
The nonzero width of the quasi-particles implies the off-shellness of partons, which is taken into account in the scattering and propagation of partons in the QGP on the same footing (i.e. propagators and couplings). The transition from the partonic to hadronic degrees-of-freedom (for light quarks/antiquarks) is described by covariant transition rates for the fusion of quark-antiquark pairs to mesonic resonances or three quarks (antiquarks) to baryonic states, i.e. by dynamical hadronization. Note that due to the off-shell nature of both partons and hadrons, the hadronization process described above obeys all conservation laws (i.e. four-momentum conservation and flavor current conservation) in each event, as well as the detailed balance relations and the increase in the total entropy S. In the hadronic phase PHSD is equivalent to the Hadron-String Dynamics (HSD) model <cit.> that has been employed in the past from SchwerIonen-Synchrotron (SIS) to SPS energies. On the other hand, the PHSD approach has been applied to p+p, p+A and relativistic heavy-ion collisions from lower SPS to LHC energies and has been successful in describing a large number of experimental data including single-particle spectra, collective flow as well as electromagnetic probes <cit.>. §.§ 2D+1 viscous hydrodynamics Relativistic hydrodynamical models calculate the space-time evolution of the QGP medium via the conservation equations
∂_μ T^μν = 0
for the energy-momentum tensor
T^μν = e u^μ u^ν - Δ^μν (P + Π) + π^μν,
provided a set of initial conditions for the fluid flow velocity u^μ, energy density e, pressure P, shear stress tensor π^μν, and bulk viscous pressure Π. For our analysis, we use VISH2+1 <cit.>, which is an extensively tested implementation of boost-invariant viscous hydrodynamics that has been updated to handle fluctuating event-by-event initial conditions <cit.>. We use the method from Ref. <cit.> for the calculation of the shear stress tensor π^μν. This particular implementation of viscous hydrodynamics calculates the time evolution of the viscous corrections through the second-order Israel-Stewart equations <cit.> in the 14-moment approximation, which yields a set of relaxation-type equations <cit.>
τ_Π Π̇ + Π = -ζθ - δ_ΠΠ Π θ + ϕ_1 Π^2 + λ_Ππ π^μν σ_μν + ϕ_3 π^μν π_μν,
τ_π π̇^⟨μν⟩ + π^μν = 2η σ^μν + 2π_α^⟨μ ω^ν⟩α - δ_ππ π^μν θ + ϕ_7 π_α^⟨μ π^ν⟩α - τ_ππ π_α^⟨μ σ^ν⟩α + λ_πΠ Π σ^μν + ϕ_6 Π π^μν.
Here, η and ζ are the shear and bulk viscosities. For the remaining transport coefficients, we use analytic results derived for a gas of classical particles in the limit of small but finite masses <cit.>. The hydrodynamical equations of motion must be closed by an equation of state (EoS), P = P(e). We use a modern QCD EoS based on continuum-extrapolated lattice calculations at zero baryon density published by the HotQCD collaboration <cit.>, blended into a hadron resonance gas EoS in the interval 110 ≤ T ≤ 130 MeV using a smooth step interpolation function <cit.>. In order to start the hydrodynamical calculation, an initial condition needs to be specified. Initial condition models provide the outcome of the collision's pre-equilibrium evolution at the hydrodynamical thermalization time, at approximately 0.5 fm/c. This pre-equilibrium stage is the least understood phase of a heavy-ion collision.
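The lattice-to-HRG blending of the EoS mentioned above can be illustrated as follows; the cubic smoothstep form is our assumption for the "smooth step interpolation function", not necessarily the exact function used in VISH2+1:

import numpy as np

def blended_eos(T, p_hrg, p_lat, T_lo=110.0, T_hi=130.0):
    # Blend HRG and lattice pressures (callables of T, in MeV) with a C^1 step
    # that is pure HRG below T_lo and pure lattice above T_hi.
    s = np.clip((T - T_lo) / (T_hi - T_lo), 0.0, 1.0)
    s = s * s * (3.0 - 2.0 * s)   # smoothstep weight, assumed form
    return (1.0 - s) * p_hrg(T) + s * p_lat(T)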
While some hydrodynamical models explicitly incorporate pre-equilibrium dynamics <cit.> starting from a full initial state calculation, others sidestep the uncertainty associated with this early regime by generating parametric initial conditions directly at the materialization time <cit.>. For our study here, we shall initialize the hydrodynamical calculation with an initial condition extracted from PHSD that provides us with a common starting configuration for both models regarding our comparison of the dynamical evolution of the system. § NON-EQUILIBRIUM INITIAL CONDITIONS In this section we describe the construction of the initial condition for the hydrodynamical evolution from the non-equilibrium PHSD evolution. One should note that PHSD starts its calculation ab initio with two colliding nuclei and makes no equilibrium assumptions regarding the nature of the hot and dense system during the course of its evolution from initial nuclear overlap to final hadronic freeze-out. For the purpose of our comparison we have to select the earliest possible time during the PHSD evolution where the system is in a state in which a hydrodynamical evolution is feasible (e.g. the viscous corrections are already small enough) and generate an initial condition for the hydrodynamical calculation at that time (note that this criterion is less stringent than assuming full momentum isotropization or local thermal equilibrium). §.§ Evaluation of the energy-momentum tensor T^μν in PHSD The energy-momentum tensor T^μν(x) of an ideal fluid (obtained by removing the viscous corrections in Eq. <ref>) is given by
T^μν = (e+P) u^μ u^ν - P g^μν,
where e is the energy density, P the thermodynamic pressure expressed in the local rest frame (LRF) and the 4-velocity is u^μ = γ(1,β_x,β_y,β_z). Here β is the (3-)velocity of the considered fluid element and the associated Lorentz factor is given by γ = 1/√(1-β^2). In order to calculate T^μν in PHSD, which fully describes the medium at every space-time coordinate, the space-time is divided into cells of size Δx = 1 fm, Δy = 1 fm (which is comparable to the size of a hadron) and Δz ∝ 0.5 × t/γ_NN, scaled by γ_NN to account for the expansion of the system. We note that choosing a higher resolution has been shown in Ref. <cit.> to lead to very similar results. In each cell, we can obtain T^μν in the computational frame from:
T^μν(x) = ∑_i ∫_0^∞ d^3p_i/(2π)^3 f_i(E_i) p_i^μ p_i^ν/E_i,
where f_i(E) is the distribution function corresponding to particle i, p_i^μ the 4-momentum and E_i = p_i^0 the energy of particle i. In the case of an ideal fluid, if the matter is at rest (u^μ = (1,0,0,0)), T^μν(x) should only have diagonal components and the energy density in the cell can be identified with the T^00 component. However, in heavy-ion collisions the matter is viscous, anisotropic and relativistic, thus the different components of the pressure are not equal and it becomes more difficult to extract the relevant information. This especially holds true for the early reaction time at which the initial conditions for the hydrodynamical model are taken. In order to obtain the needed quantities (e,β) from T^μν for the hydrodynamical evolution, we have to express them in the local rest frame (LRF) of each cell of our space-time grid. In the general case, the energy-momentum tensor can always be diagonalized, i.e. presented as T^μν (x_ν)_i = λ_i (x^μ)_i = λ_i g^μν (x_ν)_i, with i = 0,1,2,3, where its eigenvalues are λ_i and the corresponding eigenvectors (x_ν)_i.
When i = 0, the local energy density e is identified with the eigenvalue of T^μν (Landau matching) and the corresponding time-like eigenvector is defined as the 4-velocity u_ν (multiplying (<ref>) by u_ν): T^μν u_ν = e u^μ = (e g^μν) u_ν, using the normalization condition u^μ u_μ = 1. In order to solve this equation, we have to calculate the determinant of the corresponding matrix, which gives the 4th order characteristic polynomial associated with the eigenvalues λ:
P(λ) = det
| T^00-λ  T^01    T^02    T^03   |
| T^10    T^11+λ  T^12    T^13   |
| T^20    T^21    T^22+λ  T^23   |
| T^30    T^31    T^32    T^33+λ |
Having the four solutions of this polynomial, we can identify the energy density as the largest, positive solution, while the three other solutions are (-P_i), the pressure components expressed in the LRF. To obtain the 4-velocity of the cell, we use (7), which gives us the set of equations:
(T^00-e) + T^01 X + T^02 Y + T^03 Z = 0
T^10 + (T^11+e) X + T^12 Y + T^13 Z = 0
T^20 + T^21 X + (T^22+e) Y + T^23 Z = 0
T^30 + T^31 X + T^32 Y + (T^33+e) Z = 0
Rearranging these equations, we can obtain the solutions, which are actually for the vector u_ν = γ(1,X,Y,Z) = γ(1,-β_x,-β_y,-β_z). To obtain the physical 4-velocity u^μ, we have to multiply by the metric: u^μ = g^μν u_ν. §.§ PHSD initial conditions for hydrodynamics By the Landau matching procedure described above, we can obtain the initial conditions, such as the local energy density e and initial flow β⃗, for the hydrodynamical evolution. In the PHSD simulation the parallel ensemble algorithm is used for the test particle method, which has an impact on the fluctuating initial conditions. For a larger number of parallel ensembles (NUM), the energy density profile is smoother since it is calculated on the mean-field level by averaging over all ensembles. From a hydrodynamical point of view, gradients should not be too large and some smoothing of the initial conditions is therefore required. Here, we choose NUM=30, which provides the same level of smoothing of the initial energy density as in typical PHSD simulations. In Fig. <ref> we show the initial condition at time τ=0.6 fm/c extracted from a single PHSD event averaged over (NUM=30) parallel events (upper panel) and averaged over 100 parallel events (lower panel); the color maps represent the local energy density while the arrows show the initial flow in each of the cells. Even though the initial profiles are averaged over NUM=30 parallel events, the distribution still captures the features of event-by-event initial state fluctuations. In Fig. <ref> we investigate the dependence of the PHSD initial conditions on the equilibration time τ_0, at which the non-equilibrium evolution is switched to a hydrodynamical evolution in local thermal equilibrium. As expected, for larger initial times τ_0, the local initial flow increases and the local energy density decreases. § MEDIUM EVOLUTION: HYDRODYNAMICS VERSUS PHSD In this section we compare the response of the hydrodynamical long-wavelength evolution to the PHSD initial conditions with the microscopic PHSD evolution itself. In order to avoid as many biases as possible we apply the temperature-dependent shear viscosity as determined in PHSD simulations <cit.>, shown in the upper panel of Fig. <ref>: the blue and red symbols correspond to η/s obtained from the Kubo formalism and from the relaxation time approximation method, respectively. The black line in Fig.
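Instead of expanding the characteristic polynomial by hand, the construction of the cell T^μν and the Landau matching T^μν u_ν = e u^μ can be carried out numerically as an eigenproblem of the mixed tensor T^μ_ν = T^μα g_αν. The following is our numerical sketch of the procedure described above (cell-volume and ensemble normalizations are omitted):

import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric with signature (+,-,-,-)

def tmunu_cell(p):
    # T^{mu nu} = sum_i p_i^mu p_i^nu / E_i for particles with four-momenta
    # p of shape (N, 4), ordered (E, px, py, pz); divide by the cell volume
    # to obtain a density.
    return np.einsum('im,in->mn', p, p / p[:, :1])

def landau_match(T):
    # T^{mu nu} u_nu = e u^mu  <=>  (T g) u = e u, with u the contravariant u^mu.
    vals, vecs = np.linalg.eig(T @ g)
    for lam, v in zip(vals, vecs.T):
        lam, v = lam.real, v.real
        norm = v @ g @ v
        if norm > 0.0:                  # time-like eigenvector -> local rest frame
            u = v / np.sqrt(norm)
            return lam, (u if u[0] > 0 else -u)   # energy density e and u^mu
    raise ValueError("no time-like eigenvector: cell too far from equilibrium")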
<ref> shows the parametrization of the PHSD η/s(T), which is used in the viscous hydrodynamics for the present study. We note that the parametrized curve is very similar to the recently determined temperature dependence of η/s via Bayesian analysis of the available experimental data <cit.>. While the effect of shear viscosity on the hydrodynamical evolution has been studied extensively for simulations of heavy-ion collisions, bulk viscosity has not been treated as carefully so far. This is because at higher temperatures the bulk viscosity should be very small, and it vanishes in the conformal limit. Moreover, an enhanced bulk viscosity at the pseudo-critical temperature causes problems for the applicability of hydrodynamics itself. Studies conducted for dynamical quasi-particle models, like the one used in PHSD, show that the magnitude and temperature behavior of the bulk viscosity depend on details of the parametrization of the equation of state and properties of the underlying degrees-of-freedom <cit.>. For the relaxation time approximation in quasi-particle models slightly different values for the bulk viscosity are obtained <cit.>. Given these uncertainties for the values of the bulk viscosity, we decide to use the bulk viscosity that has recently been determined by the Bayesian analysis of experimental data in our hydrodynamical simulations <cit.>. In the lower panel of Fig. <ref> we compare the ratio of bulk viscosity to entropy density ζ/s that is adopted in our hydrodynamical simulations and the one extracted from PHSD simulations. It should be noted that the maximum ζ/s that the hydrodynamical model can handle is much smaller than the bulk viscosity from PHSD simulations, and its effect on the momentum anisotropy will be discussed at the end of this section. §.§ Pressure isotropization In order to justify the choice of initial time τ_0=0.6 fm/c, we first take a look at the evolution of the different pressure components in PHSD. In the pre-equilibrium stage deviations from thermal equilibrium are very large. It has been argued that one can relax the strict requirement of local thermal equilibrium and instead apply hydrodynamics once the pressure is isotropic, which implies that both transverse and longitudinal pressures are about equal. As mentioned in the previous section, the deviations from equilibrium are strongest at the beginning of the heavy-ion collision. In this case the viscous corrections can have a large contribution to the energy-momentum tensor, and the pressure components can differ substantially from the isotropic pressure given by the EoS. This situation is illustrated in Fig. <ref>, which shows the evolution of the transverse and longitudinal pressures divided by the local energy density e in different cells along the x-axis extracted from PHSD as a function of time for a peripheral Au+Au collision at √(s_NN)=200 GeV. These pressure components correspond to the eigenvalues of T^μν(x), where the latter have been averaged in this case over 100 PHSD events in order to get a smooth evolution. As seen from Fig. <ref>, at early reaction times the deviation between the pressure components is large and the longitudinal pressure dominates. The transverse pressure starts from zero but grows with time and approximately reaches the isotropic pressure within a range of 0.3 to 1.0 fm/c. On the other hand, the longitudinal pressure decreases to very low values and remains small for large times. One of the reasons for this behavior is that we took only a few cells on the z-axis, which correspond to a pseudorapidity gap Δη ≈ 𝒪(10^-2).
By taking into account more cells in the longitudinal direction, the longitudinal pressure increases, but the collective expansion cannot be removed properly in this case (as has already been studied in Section 7 of Ref. <cit.>). By looking at more peripheral cells (bottom panel of Fig. <ref>), we can see that the pressure components deviate more from the isotropic pressure given by the EoS compared to more central cells (top panel of Fig. <ref>). We illustrate in Fig. <ref> the (non-)equilibrated regions in the PHSD simulation. We evaluated the relative difference between the transverse pressure P_T extracted from PHSD (see Fig. <ref> for the pressure components) and the pressure P given by the EoS in the full transverse plane. One can see that the central region in grey is rather equilibrated for all times ((P_T-P)/P is around 0). The peripheral cells have a higher pressure when the initial condition for the hydrodynamical model is taken (t = 0.6 fm/c), and then fluctuate around the isotropic pressure as depicted by the red and blue colors. We can therefore conclude that by averaging over the PHSD events, the medium reaches with time a transverse pressure comparable to the isotropic one as given by the lQCD EoS. This statement is of course not valid for a single PHSD event, where the pressure components show a much more chaotic behavior and where the high fluctuations in density and velocity profiles indicate that the medium is in a non-equilibrium state, as we will see in the next section. §.§ Space-time evolution of energy density e and velocity β⃗ Starting with the same initial conditions (as discussed in section <ref>), the evolution of the QGP medium is now simulated by two different models: the non-equilibrium dynamics model – PHSD, and hydrodynamics – (2+1)-dimensional VISHNU. Fig. <ref> shows the time evolution of the local energy density e(x,y,z=0) (from T^μν) (left) and the corresponding temperature T (right) as calculated using the lQCD EoS in the transverse plane from a single PHSD event (NUM=30) at different times for a peripheral (b=6 fm) Au+Au collision at √(s_NN)=200 GeV. As seen in Fig. <ref> for t = 0.6 fm/c, the energy density profile is far from being smooth. Note also that the energy density decreases rapidly as the medium expands in the transverse and longitudinal directions. By converting the energy density to the temperature given by the lQCD EoS, we can see that the variations are less pronounced in that case. Fig. <ref> shows the same quantities for a single event evolved through hydrodynamics. In particular for the energy density at later times one can already observe a significant smoothing compared to the PHSD evolution. Fig. <ref> shows the time evolution of the local energy density e(x,y) in the transverse plane from a single PHSD event (NUM=30) at different proper times for a peripheral Au+Au collision at √(s_NN)=200 GeV, while Fig. <ref> shows the same time evolution of e(x,y) from a hydrodynamical evolution using the same initial condition as the PHSD event above. A comparison of the two medium evolutions shows distinct differences: in PHSD the energy density retains many small hot spots during its evolution due to its spatial non-uniformity. In hydrodynamics, the initial hot spots of energy density quickly dissolve and the medium becomes much smoother with increasing time. Moreover, as a result of the initial spatial anisotropy, the pressure gradient in x-direction is larger than that in y-direction, resulting in a slightly faster expansion in x-direction.
We attribute these differences directly to the non-equilibrium nature of the PHSD evolution. In Fig. <ref> and Fig. <ref> we show the time evolution of the velocity β⃗=(β_x,β_y,β_z) in the transverse plane for the same PHSD initial condition evolved through PHSD and hydrodynamics. The longitudinal velocity β_z shown in the PHSD event remains on average approximately 0 and much smaller than the transverse flow, since we only consider a narrow interval in the z-direction. At τ_0=0.6 fm/c, transverse flow has already developed and the transverse velocity can reach values of 0.5 at the edge of the profile. Even though the velocities increase with time in both the PHSD and hydrodynamical events, it is clearly seen that the development of flow in a hydrodynamical event is much faster than in a PHSD event. In addition, local fluctuations in a single event are more visible in the PHSD event. Moreover, the velocity in x-direction is slightly larger than the one in y-direction in both events, as a result of the initial spatial anisotropy of the energy density, and that spatial anisotropy is converted into momentum anisotropy, which increases with time. §.§ Fourier images of energy density The inhomogeneity of a medium can be quantified by the Fourier transform of the energy density, ẽ(k_x, k_y). For a discrete spatial grid with an energy distribution e(x,y)_m×n, the Fourier coefficients are given by
ẽ(k_x, k_y) = (1/mn) ∑_x=0^m-1 ∑_y=0^n-1 e(x, y) e^2πi (x k_x/m + y k_y/n).
The zero mode ẽ_k_x=0, k_y=0 is, up to the overall normalization, the total sum of the energy density, while higher order coefficients contain information about the correlations of the local energy density on different length scales. For a medium with large wavelength structures the higher-order coefficients should be suppressed and the typical global shape of the event should dominate. Given that our simulations in both PHSD and hydrodynamics are performed for the same centrality classes, we expect these structures to give similar Fourier coefficients for the lower modes. However, if structures are dominated by smaller length scales, the higher Fourier modes are excited as well. In Figs. <ref> and <ref> we present the Fourier transform ẽ(k_x, k_y) for a medium evolved by PHSD and hydrodynamics, respectively, at different stages of the evolution. For the hydrodynamical evolution of the medium only the dominant lower Fourier modes survive in the later stages, and shorter wavelength irregularities are washed out. The microscopic transport evolution of PHSD generates the same level of short wavelength phenomena at all times of the evolution; only the overall dilution of the medium reduces their strength. This difference can be identified more easily in Fig. <ref>, where we plot the distribution of the Fourier coefficients ⟨ẽ(√(k_x^2+k_y^2))⟩ for different evolution times. For the lower-order Fourier modes, which carry the information about the global event scale, the microscopically evolving medium and the hydrodynamical medium are identical. We observe that the strength of the shorter wavelength modes rapidly decreases with respect to the zero mode at the beginning of the hydrodynamical evolution. §.§ Time evolution of the spatial and momentum anisotropy Much interest is given to the medium's response to initial spatial anisotropies.
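The binned Fourier spectrum of the transverse energy density can be obtained with a standard FFT; the following small sketch is ours, with an assumed radial binning in |k| = √(k_x^2+k_y^2):

import numpy as np

def energy_density_spectrum(e, n_bins=20):
    # |e~(kx, ky)| on an m x n grid, averaged in bins of |k|; the zero mode
    # equals the mean energy density with this normalization.
    m, n = e.shape
    e_k = np.abs(np.fft.fft2(e)) / (m * n)
    kx = np.fft.fftfreq(m)[:, None] * m   # integer mode numbers
    ky = np.fft.fftfreq(n)[None, :] * n
    k = np.hypot(kx, ky).ravel()
    edges = np.linspace(0.0, k.max() + 1e-9, n_bins + 1)
    idx = np.digitize(k, edges) - 1
    ek = e_k.ravel()
    spectrum = np.array([ek[idx == b].mean() if np.any(idx == b) else 0.0
                         for b in range(n_bins)])
    return edges, spectrum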
For the hydrodynamical models the spatial anisotropies lead to substantial collective flow, measured by Fourier coefficients of the azimuthal particle spectra. Initial spatial gradients are transformed into momentum anisotropies via hydrodynamical pressure. While experimentally only the final state particle spectra are known, models for the space-time evolution of the medium can give insight into the evolution of the spatial and the momentum anisotropy. For hydrodynamical models the latter is directly related to the elliptic flow v_2. Similar statements apply to the transport models, where the initial spatial anisotropies are converted to momentum anisotropies <cit.>. The spatial anisotropy of the matter distribution is quantified by the eccentricity coefficients ϵ_n, defined as
ϵ_n exp(i n Φ_n) = - ∫ rdr dϕ r^n exp(inϕ) e(r, ϕ) / ∫ rdr dϕ r^n e(r, ϕ),
where e(r, ϕ) is the local energy density in the transverse plane. The second-order coefficient ϵ_2 is also called ellipticity and is, to leading order, the origin of the elliptic flow v_2. It can be simplified to
ϵ_2 = √({r^2 cos(2ϕ)}^2 + {r^2 sin(2ϕ)}^2) / {r^2},
where {...} = ∫ dxdy (...) e(x,y) denotes a spatial average weighted by the local energy density e(x,y) <cit.>. The importance of event-by-event fluctuations in the initial state has been realized in particular for higher-order flow harmonics but also as a contribution to the elliptic flow, and it has been extensively investigated both experimentally and theoretically <cit.>. As shown earlier, the PHSD model naturally produces initial state fluctuations due to its microscopic dynamics. We therefore apply event-by-event hydrodynamics, and all subsequent quantities are averaged over many events. In Fig. <ref> we show the time evolution of the ellipticity ⟨ϵ_2⟩ for both medium descriptions. For the PHSD simulations we observe large oscillations in ⟨ϵ_2⟩ at the beginning of the evolution due to the initialization geometries and formation times. After sufficient overlap of the colliding nuclei at the initial time τ_0 the average ⟨ϵ_2⟩ is stabilized in PHSD. There are, however, still significant event-by-event fluctuations of this quantity at later times and strong variations between individual events. In contrast, in a single hydrodynamical event ϵ_p deviates from the average, but remains a smooth function of time. Due to the faster expansion in x-direction the initial spatial anisotropy decreases during the evolution for both medium descriptions. However, the spatial anisotropy decreases faster when initial pre-equilibrium flow β_i (extracted from the early PHSD evolution) is included in the hydrodynamical evolution. In this case, the time evolution of the event-by-event averaged spatial anisotropy is very similar in PHSD and in hydrodynamics. Initializing with the shear-stress tensor π_i^μν may have slight effects on the spatial eccentricity, but not large enough to be visible. A similar feature is also seen in the evolution of the momentum ellipticity, which is directly related to the integrated elliptic flow v_2 of light hadrons. The total momentum ellipticity is determined from the energy-momentum tensor as <cit.>:
ϵ_p = ∫ dx dy (T^xx - T^yy) / ∫ dx dy (T^xx + T^yy).
Here the energy-momentum tensor includes the viscous corrections from π^μν and Π. In the left panel of Fig. <ref> we show the time evolution of the event-by-event averaged ⟨ϵ_p⟩ for the hydrodynamical medium description with and without pre-equilibrium flow in the initial conditions.
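For reference, the two anisotropy measures defined above translate directly into grid sums; the following is our discretized sketch (centering on the energy-weighted centroid is assumed):

import numpy as np

def eccentricity(e, x, y, n=2):
    # eps_n = |{r^n exp(i n phi)}| / {r^n}, with {..} the e(x,y)-weighted average
    # about the energy-weighted center of the transverse plane.
    X, Y = np.meshgrid(x, y, indexing='ij')
    w = e / e.sum()
    X = X - (w * X).sum()
    Y = Y - (w * Y).sum()
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)
    return np.abs((w * r**n * np.exp(1j * n * phi)).sum() / (w * r**n).sum())

def momentum_ellipticity(Txx, Tyy):
    # eps_p = int(T^xx - T^yy) / int(T^xx + T^yy) over the transverse plane.
    return (Txx - Tyy).sum() / (Txx + Tyy).sum()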
Including the initial flow leads to a finite momentum anisotropy at τ_0, which subsequently increases as the pressure transforms the spatial anisotropy into collective flow. Consequently, ϵ_p is larger than in the scenario without initial flow throughout the entire evolution of the medium, and an enhanced elliptic flow can be expected. Given the unresolved question of bulk viscosity in heavy-ion collisions, we investigate the effect of tuning the bulk viscosity from the standard value discussed at the beginning of this section to four times this value, which comes closer to the bulk viscosity found in different quasi-particle calculations <cit.>. We see that for an enhanced bulk viscosity around T_c the momentum anisotropy develops a bump at later times, which is more pronounced for larger bulk viscosity. In the right panel of Fig. <ref> the hydrodynamical simulation is compared to the results from PHSD, again for event-by-event averaged quantities, with the event-by-event fluctuations indicated by the spread of the cloud. The PHSD momentum eccentricity is constructed by Eq. (<ref>), where T^μν is evaluated from Eq. (<ref>). It can be observed that before τ_0 the averaged momentum anisotropy in PHSD develops continuously during the initial stage, before it reaches the value which is provided in the initial conditions for hydrodynamics. Despite the seemingly large bulk viscosity, as discussed in the beginning of this section, the momentum anisotropy in PHSD does not show any hint of a bump like in the hydrodynamical calculation. The response to intrinsic bulk viscosity in a microscopic transport model does not seem to be as strong as in hydrodynamics. § SUMMARY In this paper, we have compared two commonly used descriptions of the evolution of a QGP medium in heavy-ion collisions, the microscopic off-shell transport approach PHSD and a macroscopic hydrodynamical evolution. Both approaches give an excellent agreement with numerous experimental data, despite the very different assumptions inherent in these models. In PHSD, quasi-particles are treated in off-shell transport with thermal masses and widths which reproduce the lattice QCD equation of state and are determined from parallel event runs in the simulations. Hydrodynamics assumes local equilibrium to be reached in the initial stages of heavy-ion collisions and transports energy-momentum and charge densities according to the lattice QCD equation of state and transport coefficients such as the shear and bulk viscosity. We have tried to match the hydrodynamical evolution as closely as possible to these quantities as obtained within PHSD: * by construction the equation of state in PHSD is compatible with the lQCD equation of state used in the hydrodynamical evolution, * a new Landau-matching procedure was used to determine initial conditions for hydrodynamics from the PHSD simulation, * the hydrodynamical simulations utilize the same η/s(T) as obtained within PHSD, and * different bulk viscosity parameterizations have been introduced in the hydrodynamical simulation that resemble those obtained in (dynamical) quasi-particle models, which are the basis for PHSD simulations. In general we find that the ensemble averages over PHSD events follow closely the hydrodynamical evolution.
The major differences between the macroscopic near-(local)-equilibrium and the microscopic off-equilibrium dynamics can be summarized as: * A strong short-wavelength spatial irregularity in PHSD at all times during the evolution versus a fast smoothing of initial irregularities in the hydrodynamical evolution, such that only global long-wavelength structures survive. These structures have been calculated on the level of the fluid velocity and energy density and quantified in terms of the Fourier modes of the energy density. Due to the QCD equation of state, the irregularities imprinted in the temperature are smaller than in the energy density itself. * The hydrodynamical response to changing transport coefficients, especially the bulk viscosity, has a strong impact on the time evolution of the momentum anisotropy. In PHSD these transport coefficients can be determined but remain intrinsically linked to the interaction cross sections. Although there are indications for a substantial bulk viscosity in PHSD, it does not show the same sensitivity to the momentum space anisotropy as in hydrodynamical simulations. * Event-by-event fluctuations might be of similar magnitude in quantities like the spatial and momentum anisotropy, but while they remain smooth functions of time in hydrodynamics, significant variations are observed within a single event in PHSD as a function of time. After having gained an improved understanding of the similarities and differences in the evolution of bulk QCD matter between the non-equilibrium PHSD and the equilibrium hydrodynamic approach, we plan to utilize our insights in future projects regarding the development of observables sensitive to non-equilibrium effects and the impact these effects may have on hard probe observables. § ACKNOWLEDGEMENTS We appreciate fruitful discussions with J. Aichelin, W. Cassing, P.-B. Gossiaux and T. Kodama. This work was supported in part by the LOEWE center HIC for FAIR as well as by BMBF and DAAD. The computational resources have been provided by the LOEWE-CSC. SAB, MG and YX acknowledge support by the U.S. Department of Energy under grant no. DE-FG02-05ER41367. § APPENDIX §.§ Fourier transform of energy density For a discrete 2D Fourier transform, we have:
X(k, l) = ∑_m=0^M-1 ∑_n=0^N-1 x(m,n) e^-2πi mk/M e^-2πi nl/N
x(m,n) = (1/MN) ∑_k=0^M-1 ∑_l=0^N-1 X(k,l) e^2πi (mk/M + nl/N) = (1/MN) ∑_k=0^M-1 ∑_l=0^N-1 X(k,l) [cos(2π(mk/M + nl/N)) + i sin(2π(mk/M + nl/N))] = (1/MN) ∑_k=0^M-1 ∑_l=0^N-1 [X_real(k,l) cos(2π(mk/M + nl/N)) - X_imag(k,l) sin(2π(mk/M + nl/N))]
X_real and X_imag are the real and imaginary parts of the Fourier coefficients X:
X_real(k,l) = ∑_m=0^M-1 ∑_n=0^N-1 x(m,n) cos(2π(mk/M + nl/N))
X_imag(k,l) = -∑_m=0^M-1 ∑_n=0^N-1 x(m,n) sin(2π(mk/M + nl/N))
Therefore, the squared modulus of the Fourier transform is:
|X(k,l)|^2 = |X_real(k,l)|^2 + |X_imag(k,l)|^2 = ∑_m=0^M-1 ∑_n=0^N-1 x^2(m,n) + ∑_(m,n)≠(m',n') x(m,n) x(m',n') cos(2π((m-m')k/M + (n-n')l/N))
[Arsene:2004fa] I. Arsene et al. [BRAHMS Collaboration], Nucl. Phys. A 757, 1 (2005). [Adcox:2004mh] K. Adcox et al. [PHENIX Collaboration], Nucl. Phys. A 757, 184 (2005). [Back:2004je] B. B. Back et al., Nucl. Phys. A 757, 28 (2005). [Adams:2005dq] J. Adams et al. [STAR Collaboration], Nucl. Phys. A 757, 102 (2005). [Gyulassy:2004zy] M. Gyulassy and L. McLerran, Nucl. Phys. A 750, 30 (2005). [Muller:2006ee] B. Müller and J. L. Nagle, Ann. Rev. Nucl. Part. Sci. 56, 93 (2006). [Muller:2012zq] B. Müller, J. Schukraft and B. Wyslouch, Ann. Rev. Nucl. Part. Sci. 62, 361 (2012). [Cassing:2008sv] W. Cassing and E. L. Bratkovskaya, Phys. Rev. C 78, 034919 (2008). [PHSD] W.
Cassing and E. L. Bratkovskaya, Nucl. Phys. A 831, 215 (2009). [PHSDrhic] E. L. Bratkovskaya, W. Cassing, V. P. Konchakovski and O. Linnyk, Nucl. Phys. A 856, 162 (2011). [Song:2007ux] H. Song and U. W. Heinz, Phys. Rev. C 77, 064901 (2008). [Shen:2014vra] C. Shen, Z. Qiu, H. Song, J. Bernhard, S. Bass and U. Heinz, Comput. Phys. Commun. 199, 61 (2016). [deSouza:2015ena] R. Derradi de Souza, T. Koide and T. Kodama, Prog. Part. Nucl. Phys. 86, 35 (2016). [DerradideSouza:2011rp] R. Derradi de Souza, J. Takahashi, T. Kodama and P. Sorensen, Phys. Rev. C 85, 054909 (2012). [Niemi:2013] H. Niemi et al., Phys. Rev. C 87, 054901 (2013). [Voloshin] S. A. Voloshin et al., J. Phys. G 34, S883 (2007). [Konchakovski:2014wqa] V. P. Konchakovski, W. Cassing and V. D. Toneev, J. Phys. G 41, 105004 (2014). [Cassing:2008nn] W. Cassing, Eur. Phys. J. ST 168, 3 (2009); Nucl. Phys. A 795, 70 (2007). [Kadanoff1] L. P. Kadanoff and G. Baym, Quantum Statistical Mechanics, Benjamin, New York, 1962. [Kadanoff2] S. Juchem, W. Cassing and C. Greiner, Phys. Rev. D 69, 025006 (2004); Nucl. Phys. A 743, 92 (2004). [HSD] W. Cassing and E. L. Bratkovskaya, Phys. Rep. 308, 65 (1999); W. Cassing, E. L. Bratkovskaya and S. Juchem, Nucl. Phys. A 674, 249 (2000). [Berrehrah:2016vzw] H. Berrehrah, E. Bratkovskaya, T. Steinert and W. Cassing, Int. J. Mod. Phys. E 25, 1642003 (2016). [Vitaly] V. Ozvenchuk, O. Linnyk, M. I. Gorenstein, E. L. Bratkovskaya and W. Cassing, Phys. Rev. C 87, 024901 (2013). [Ozvenchuk:2012kh] V. Ozvenchuk, O. Linnyk, M. I. Gorenstein, E. L. Bratkovskaya and W. Cassing, Phys. Rev. C 87, 064903 (2013). [Ca13] W. Cassing, O. Linnyk, T. Steinert and V. Ozvenchuk, Phys. Rev. Lett. 110, 182301 (2013); T. Steinert and W. Cassing, Phys. Rev. C 89, 035203 (2014). [FRITIOF] B. Andersson, G. Gustafson and H. Pi, Z. Phys. C 57, 485 (1993). [Volo] V. P. Konchakovski et al., J. Phys. G 42, 055106 (2015); J. Phys. G 41, 105004 (2014); Phys. Rev. C 85, 044922 (2012); Phys. Rev. C 85, 011902 (2012); Phys. Rev. C 90, 014903 (2014). [Linnyk] O. Linnyk, E. L. Bratkovskaya and W. Cassing, Prog. Part. Nucl. Phys. 87, 50 (2016). [Liu:2015nwa] J. Liu, C. Shen and U. Heinz, Phys. Rev. C 91, no. 6, 064906 (2015); Erratum: Phys. Rev. C 92, no. 4, 049904 (2015). [Israel:1979wp] W. Israel and J. M. Stewart, Annals Phys. 118, 341 (1979). [Israel:1976aa] W. Israel and J. M. Stewart, Phys. Lett. A 58, 4 (1976). [Denicol:2014vaa] G. S. Denicol, S. Jeon and C. Gale, Phys. Rev. C 90, no. 2, 024912 (2014). [Bazavov:2014pvz] A. Bazavov et al. [HotQCD Collaboration], Phys. Rev. D 90, 094503 (2014). [Moreland:2015dvc] J. S. Moreland and R. A. Soltz, Phys. Rev. C 93, 044913 (2016). [Schenke:2012wb] B. Schenke, P. Tribedy and R. Venugopalan, Phys. Rev. Lett. 108, 252301 (2012). [Drescher:2006pi] H. J. Drescher, A. Dumitru, A. Hayashigaki and Y. Nara, Phys. Rev. C 74, 044905 (2006). [Miller:2007ri] M. L. Miller, K. Reygers, S. J. Sanders and P. Steinberg, Ann. Rev. Nucl. Part. Sci. 57, 205 (2007). [Moreland:2014oya] J. S. Moreland, J. E. Bernhard and S. A. Bass, Phys. Rev. C 92, 011901 (2015). [bayesQM2017] J. Bernhard et al., Quark Matter 2017, https://indico.cern.ch/event/433345/contributions/2358284/ [Sasaki:2008fg] C. Sasaki and K. Redlich, Phys. Rev. C 79, 055207 (2009) [arXiv:0806.4745 [hep-ph]]. [Bluhm:2010qf] M. Bluhm, B. Kämpfer and K. Redlich, Phys. Rev. C 84, 025201 (2011). [KSS] G. Policastro, D. T. Son and A. O. Starinets, Phys. Rev. Lett. 87, 081601 (2001); P. K. Kovtun, D. T. Son and A. O. Starinets, Phys. Rev. Lett. 94, 111601 (2005). [Mattiello] S. Mattiello and W. Cassing, Eur. Phys. J. C 70, 243 (2010). [Qiu:2011iv] Z. Qiu and U. W.
Heinz,Phys. Rev. C 84, 024911 (2011). Miller:2003kd M. Miller and R. Snellings,nucl-ex/0312008.Alver:2006wh B. Alver et al. [PHOBOS Collaboration],Phys. Rev. Lett.98, 242302 (2007).Alver:2010gr B. Alver and G. Roland,Phys. Rev. C 81, 054905 (2010)Erratum: [Phys. Rev. C 82, 039903 (2010)]. Kolb:2003dz P. F. Kolb and U. W. Heinz,In *Hwa, R.C. (ed.) et al.: Quark gluon plasma* 634-714 [nucl-th/0305084].
Subsystem eigenstate thermalization hypothesis for entanglement entropy in CFT

Song He^1,2[hesong17@gmail.com], Feng-Li Lin^3[fengli.lin@gmail.com] and Jia-ju Zhang^4,5[jiaju.zhang@mib.infn.it]

^1 Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, 14476 Golm, Germany
^2 CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, 55 Zhong Guan Cun East Road, Beijing 100190, China
^3 Department of Physics, National Taiwan Normal University, No. 88, Sec. 4, Ting-Chou Rd., Taipei 11677, Taiwan
^4 Dipartimento di Fisica, Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy
^5 INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy

We investigate a weak version of the subsystem eigenstate thermalization hypothesis (ETH) for a two-dimensional large central charge conformal field theory by comparing the local equivalence of a high energy state and the thermal state of the canonical ensemble. We evaluate the single-interval Rényi entropy and entanglement entropy for a heavy primary state in the short interval expansion, and verify the results for the Rényi entropy by two different replica methods. We find nontrivial results at the eighth order of the short interval expansion, which include an infinite number of higher order terms in the large central charge expansion. We then evaluate the relative entropy of the reduced density matrices to measure the difference between the heavy primary state and the thermal state of the canonical ensemble, and find that the aforementioned nontrivial eighth order results make the relative entropy unsuppressed in the large central charge limit. Using Pinsker's and the Fannes-Audenaert inequalities, we exploit the results for the relative entropy to obtain lower and upper bounds on the trace distance between the excited-state and thermal-state reduced density matrices. Our results are consistent with subsystem weak ETH, which requires the above trace distance to be power-law suppressed at large central charge. However, we are unable to pin down the exponent of the power-law suppression. As a byproduct we also calculate the relative entropy measuring the difference between the reduced density matrices of two different heavy primary states.

§ INTRODUCTION

According to the eigenstate thermalization hypothesis (ETH) <cit.>, a highly excited state of a chaotic system behaves like a high energy microcanonical ensemble thermal state.
More precisely, it states that (i) the diagonal matrix element 𝒪_αα of a few-body operator 𝒪 with respect to the energy eigenstate |α⟩ changes slowly with the state, in a way that is suppressed by the exponential of the system size; (ii) the off-diagonal element 𝒪_αβ is much smaller than the diagonal element by a factor of the exponential of the system size. This then yields that the expectation value of the few-body observable in a generic state |ϕ⟩ behaves like the one in the microcanonical ensemble
⟨𝒪⟩_ϕ - ⟨𝒪⟩_E ∼ e^-O(S(E)),
where the subscript E denotes the microcanonical ensemble state with energy E and S(E) is the system entropy. Recently, a new scheme called subsystem ETH has been proposed in <cit.>, in contrast to the old one, which is then called local ETH. The subsystem ETH states that the reduced density matrix ρ_A,ϕ of a subregion A for a high energy primary eigenstate |ϕ⟩ is universally close to the reduced density matrix ρ_A,E for the microcanonical ensemble thermal state with some energy E up to exponential suppression by order of the system entropy, i.e.,
t(ρ_A,ϕ, ρ_A,E) ∼ e^-O(S(E)),
where t(ρ_A,ϕ, ρ_A,E) denotes the trace distance between ρ_A,ϕ and ρ_A,E. As the subsystem ETH is a statement regarding the reduced density matrices, derived quantities such as correlation functions, entanglement entropy and Rényi entropy should also satisfy some sort of subsystem ETH. In this sense, the subsystem ETH is the strongest form of ETH, i.e., stronger than local ETH.

The above discussions of ETH are all based on the comparison of the energy eigenstate and the microcanonical ensemble state. In <cit.>, there are discussions of generalizing the ETH to the comparison with the canonical ensemble state, based on the observation of the local equivalence between the canonical and microcanonical states <cit.>. In this case, the energy eigenstate and the canonical ensemble thermal state should also be locally alike in the thermodynamic limit. This was called weak ETH in <cit.>, in which, however, there is no exact bound on weak ETH for general cases. Despite that, some of the results in <cit.> showed that the thermal states of the canonical and microcanonical ensembles are locally equivalent up to power-law suppression in the dimension of the Hilbert space. Based on all the above, we may expect that the weak ETH will at least yield
⟨𝒪⟩_ϕ - ⟨𝒪⟩_β ∼ [S(β)]^-ã,    t(ρ_A,ϕ, ρ_A,β) ∼ [S(β)]^-b̃,
where the subscript β denotes the canonical ensemble with inverse temperature β, S(β) is the canonical ensemble entropy, and ã and b̃ are some positive real numbers of order one. More specifically, it was argued in <cit.> that ã=1, and this was verified in <cit.> by numerical simulation for some integrable model. However, for generic models it was mathematically shown that ã=1/4 <cit.>. We expect b̃ to behave similarly. We will call the second relation in weakETH the subsystem weak ETH, which inherits both the subsystem ETH and the weak ETH.

For a conformal field theory (CFT) there are infinite degrees of freedom, which is in some sense the thermodynamic limit, as required for the local equivalence between canonical and microcanonical states. Moreover, nonlocal quantities like the entanglement entropy and Rényi entropy are not necessarily exponentially suppressed <cit.>.
In fact the primary excited state Rényi entropy in a two-dimensional (2D) CFT is not exponentially suppressed in the large central charge limit <cit.>. All of these motivate us to check the validity of weak ETH for a 2D CFT of large central charge c.

In this paper we investigate the validity of the subsystem weak ETH weakETH for a 2D large c CFT. In this case, the worldsheet description of an excited state |ϕ⟩ for a CFT living on a circle of size L corresponds to an infinitely long cylinder of spatial period L capped by an operator ϕ at each end. On the other hand, the thermal state of a CFT living on a circle with temperature T has its worldsheet description as a torus with temporal circle of size β=1/T. In the high temperature limit with L ≫ β, the torus is approximated by a horizontal cylinder. Naively the vertical and horizontal cylinders should be related by Wick rotation and can be compared after taking care of the capped states. This is indeed what has been done in <cit.> by comparing the two-point functions of two light operators in a large c CFT, and in <cit.> for the single-interval entanglement entropy. These comparisons all show that the subsystem weak ETH holds. However, in <cit.> the one-interval Rényi entropy for a small interval of size ℓ≪ L was compared in an ℓ expansion up to order ℓ^6, and it was found that one cannot find a universal relation between β and L to match the excited-state Rényi entropy with the thermal one in the series expansion of ℓ.[Note that in <cit.> no large c is required, and they just require the excited state to be heavy.] Moreover, in the context of the AdS/CFT correspondence <cit.>, a large N CFT is dual to AdS gravity of large AdS radius, and so the subsystem ETH implies that the geometry backreacted by the massive bulk field is approximately equivalent to the black hole geometry for the subregion observer. Especially, for AdS_3/CFT_2 the CFT has infinite dimensional conformal symmetries as the asymptotic symmetries of AdS space <cit.>; along with ETH this could imply that the infinite varieties of Bañados geometries <cit.> dual to the excited CFT states are universally close to the BTZ black hole <cit.>. Although the Newton constant G_N may get renormalized in the 1/c perturbation theory, and thereby obscure the implication of our results for the above issue, we still hope that our results can serve as a stepping stone for further progress.

As discussed in <cit.>, the validity of subsystem ETH depends on how the operator product expansion (OPE) coefficients scale with the conformal dimension of the eigen-energy operator in the thermodynamic limit. This means that the subsystem ETH could be violated in some circumstances. In this paper we continue to investigate the validity of ETH for a 2D large c CFT by more extensive calculations, and indeed find surprising results. According to Cardy's formula <cit.> the thermal entropy is proportional to the central charge c, and so we just focus on how various quantities behave in the large c limit. We calculate the entanglement entropy and Rényi entropy up to order ℓ^8 in the small ℓ expansion. We find that subleading corrections in the 1/c expansion appear at order ℓ^8. Because the appearance of these subleading corrections at order ℓ^8 is quite unexpected, we solidify the results by adopting two different methods to calculate them. These methods are (i) the OPE of twist operators <cit.> on the cylinder, or equivalently on the complex plane; and (ii) the 2n-point correlation function on the complex plane <cit.>.
Both of the two methods give the same results. Moreover, we turn the comparison of the entanglement entropies into the relative entropy between the reduced density matrices of the excited state and the thermal state, using the modular Hamiltonian argument in <cit.>. The above discrepancy then implies that the relative entropy is of order c^0. Based on these results, we use the Fannes-Audenaert inequality <cit.> and Pinsker's inequality, which relate the trace distance to the entanglement entropy or the relative entropy, to argue how the trace distance of the reduced density matrices of the excited and thermal states scales at large c. Our results are consistent with the subsystem weak ETH. However, we lack further evidence to pin down the exact power-law suppression, i.e., we are unable to obtain the exponent b̃ in weakETH. Finally, using the replica method based on evaluating the multi-point function on a complex plane, as a byproduct of this project we explicitly calculate the relative entropy measuring the difference between the reduced density matrices of two different heavy primary states.

The rest of the paper is organized as follows. In section <ref> we briefly review the known useful results about the Rényi entropy, entanglement entropy and relative entropy, and also evaluate the relative entropy between a chiral vertex state and the thermal state in the 2D free massless scalar theory. In section <ref> we calculate the excited state Rényi entropy by two different replica methods in the short interval expansion. Using these results, in section <ref> we check the subsystem weak ETH. We find that the validity of the subsystem weak ETH depends only on whether a large but finite effective dimension of the reduced density matrix exists and, if yes, how it scales with c. In section <ref> we evaluate the relative entropy of the reduced density matrices to measure the difference of two heavy primary states. Finally, we conclude in section <ref>. Moreover, in appendix <ref> we give some details of the vacuum conformal family; in appendix <ref> we give the results for the OPE of twist operators, including both a review of the formalism and some new calculations; and in appendix <ref> we list some useful summation formulas.

§ RELATIVE ENTROPY AND ETH

In this section we briefly review the basics of relative entropy and then of ETH. In the following sections, the relative entropy between the reduced density matrices of the heavy state and the thermal state will be evaluated for a large central charge 2D CFT to check the validity of ETH. We end this section by calculating the relative entropy in a toy example CFT, namely the 2D massless scalar.

§.§ Relative entropy

Given a quantum state of a system denoted by the density matrix ρ, the reduced density matrix on some region A is given by ρ_A = tr_A^c ρ, where A^c is the complement of A. One can then define the Rényi entropy
S_A,n = -1/(n-1) log tr_A ρ_A^n,
and the entanglement entropy
S_A = -tr_A(ρ_A log ρ_A),
which is formally equivalent to taking the n→1 limit of the Rényi entropy. In this work we focus on the holomorphic sector of a 2D CFT of central charge c. For a region A of size ℓ, i.e., A = [-ℓ/2, ℓ/2], the Rényi entropy in the vacuum state is known to be <cit.>
S_n,L = c(n+1)/(12n) log( (L/π) sin(πℓ/L) ),
where L is the size of the spatial circle on which the CFT lives. Similarly, the Rényi entropy of a thermal state with temperature 1/β for a CFT living on an infinite straight line is
S_n,β = c(n+1)/(12n) log( (β/π) sinh(πℓ/β) ).
Taking the n→1 limit, one gets the corresponding entanglement entropies straightforwardly.
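A quick numerical illustration of the two closed forms (a sketch assuming numpy; c, n, ℓ, L and β below are sample values, and the UV-cutoff-dependent additive constant is dropped, as in the formulas above):

```python
import numpy as np

def S_vacuum(n, ell, L, c):
    """Renyi entropy of an interval of size ell in the vacuum on a circle of size L."""
    return c*(n + 1)/(12*n) * np.log(L/np.pi * np.sin(np.pi*ell/L))

def S_thermal(n, ell, beta, c):
    """Renyi entropy of an interval on the infinite line at temperature 1/beta."""
    return c*(n + 1)/(12*n) * np.log(beta/np.pi * np.sinh(np.pi*ell/beta))

c, ell, L, beta = 1.0, 0.3, 1.0, 0.2
print(S_thermal(2, ell, beta, c))            # a sample Renyi entropy
# the n -> 1 limit smoothly reproduces the entanglement entropy
print(S_vacuum(1 + 1e-9, ell, L, c) - S_vacuum(1.0, ell, L, c))  # ~ 0
```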
For simplicity, in this paper we only consider the contributions from the holomorphic sector of CFT, and the anti-holomorphic sector can be just added for completeness without complication. Also we will not consider the subtlety due to the boundary conditions imposed on the entangling surface <cit.>.For subsystem ETH it is to compare the reduced density matrices between a heavy state and the excited state, and in this paper we consider the Rényi entropy difference, the entanglement entropy difference, and the relative entropy, as well as the trace distance. The relative entropy is defined asS('̊_A_̊A) = _A '̊_Alog'̊_A - _A '̊_Alog_̊A.where _̊A and '̊_A are the reduced density matrixes over region A for state $̊ and'̊, respectively.Note that the relative entropy is not symmetric for its two arguments, i.e.,S('̊_A_̊A)S(_̊A'̊_A). One may define the symmetrized relative entropyS('̊_A,_̊A) =S('̊_A_̊A) + S(_̊A'̊_A),to characterize the difference of the two reduced density matrices, but one should be aware that it is not a “distance”.[We thank the anonymous referee for pointing this out to us.]One can also express the relative entropy as follows:S('̊_A_̊A) =H_A _'̊ -H_A _ -S'_A +S_A.where the modular HamiltonianH_Ais defined byH_A ≡ -logρ_A.The modular Hamiltonian is in general quite nonlocal and known only for some special cases <cit.>. One of these cases fitted for our study in this paper is just the case considered for the Rényi entropy Snb of a thermal state, and the modular Hamiltonian is given by <cit.> H_A, = - π∫_-ℓ/2^ℓ/2 dxsinhπ(ℓ-2x)2sinhπ(ℓ+2x)2sinhπℓT(x)whereT(x)is the holomorphic sector stress tensor of the 2D CFT. In this paper we will check the subsystem weak ETH for a normalized highly and globally excited state[In this paper, we mainly focus on global excited states which are quite different from so called locally excited states studied in, for examples, <cit.>.] created by a (holomorphic) primary operatorϕof conformal weighth_ϕ=c_ϕacting on the vacuum, i.e.,|ϕ =ϕ(0) |0.The first step to proceed the comparison for checking ETH is to make sure the excited state and the thermal state have the same energy, and this then requires ϕ|T|ϕ_L =T _.̱ The right hand side of the above equation is just the Casimir energy of the horizontal worldsheet cylinderT _=̱ -π^2 c/6 β^2.and the left hand side is given by (<ref>). Thus, e41 yields a relation between the inverse temperature$̱ and the conformal weighth_ϕ (or _ϕ)<cit.>=̱L24_ϕ-1.Moreover, the relation e41 and (<ref>) ensure H_A,_ϕ =H_A,_ so that <cit.> S(_̊A,ϕ_̊A,) = -S_A,ϕ + S_A,.§.§ ETHETH states that a highly excited state of a chaotic system behaves thermally. One way to formulate this is to compare the expectation values of few-body operators for high energy eigenstate and the thermal state, as explicitly formulated in localETH. This is called the local ETH in <cit.> in contrast to a stronger statement called subsystem ETH proposed therein, which is formulated as in subETH by comparing the reduced density matrices, i.e., requiring that trace distance between the two reduced states should be exponentially suppressed by the system entropy.The trace distance for two reduced density matrices _̊A', _̊A is defined ast(_̊A',_̊A) = 12 _A | _̊A'-_̊A |,and by definition 0 ≤ t(_̊A',_̊A) ≤ 1.In the paper we compare the energy eigenstate and canonical ensemble state, and so it is about the local weak ETH and subsystem weak ETH. 
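The matching relation e47 above is simple to verify numerically before proceeding: with the cylinder one-point function ⟨T⟩_ϕ = π²(c − 24h_ϕ)/(6L²) quoted in appendix A and the Casimir energy of the thermal cylinder, β = L/√(24ε_ϕ − 1) equates the two energies. A sketch assuming numpy, with illustrative values of ε_ϕ and c:

```python
import numpy as np

def T_excited(eps_phi, c, L=1.0):
    # <T>_phi on the cylinder, with h_phi = c * eps_phi
    return np.pi**2 * c * (1 - 24*eps_phi) / (6 * L**2)

def T_thermal(beta, c):
    # Casimir energy of the thermal (horizontal) cylinder
    return -np.pi**2 * c / (6 * beta**2)

c, eps_phi, L = 100.0, 0.5, 1.0
beta = L / np.sqrt(24*eps_phi - 1)           # relation e47
assert np.isclose(T_excited(eps_phi, c, L), T_thermal(beta, c))
```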
As the subsystem weak ETH is stronger than local weak ETH, it could be violated for the system of infinite number of degrees of freedom. However, we do not directly calculate the trace distance but the Rényi entropies and entanglement entropies for both heavy state and thermal state of canonical ensemble. After doing this, we can then use some inequalities to constrain the trace distance with the difference of the Rényi entropies or relative entropy, thus check the validity of subsystem weak ETH. Here are three such kinds of inequality.First, the Fannes-Audenaert inequality <cit.> relating the difference of entanglement entropy, S_A := S_A,ϕ -S_A, to the trace distance t:=t(_̊A,ϕ,_̊A,) as follows:| S_A| ≤ t log (d-1) +h,with h=-tlog t -(1-t)log(1-t) and d being the dimension ofHilbert space _A for the effective degrees of freedom in subsystem A. On the other hand, there is the Audenaert inequality for the Rényi entropy of order 0<n<1<cit.> | S_n| ≤11-nlog[ (1-t)^n + (d-1)^1-nt^n ],with S_n:=S_n,ϕ -S_n,. Both the right hand sides of FAI and AI are vanishing at t=0, are log(d-1) at t=1, monotonically increase at 0<t<1-1d, monotonically decrease at 1-1d<t<1, and have a maximal value log d at t=1-1d. Since d is very large, the right hand sides of FAI and AI are approximately monotonically increase at 0<t<1. Finally, we also need Pinsker's inequality to give upper bound on trace distance by the square root of relative entropy, i.e.t ≤12 S(_̊A,ϕ_̊A,). By using FAI we see that | S_A| gives the tight lower-bound on the trace distance if the d is finite, thus the validity of subsystem weak ETH can be pin down by the scaling behavior of | S_A| with respect to the system entropy. This is no longer true if d is infinite as one would expect for generic quantum field theories, then both FAI and AI are trivially satisfied and can tell no information about the trace distance <cit.>. However, it is a subtle issue to find out how the effective dimension d of the reduced density matrix scale with the large c and if it is finite once a UV cutoff is introduced.We will discuss in more details in section <ref>. §.§ A toy example We now apply the above formulas to a toy 2D CFT, the massless free scalar. This CFT has central charge c=1 so that it makes less sense to check subsystem weak ETH. Despite that, we will still calculate the relative entropy between excited state and thermal state, and the result can be compared to the large c ones obtained later.Let the massless scalar denoted by , and from it we can construct the chiral vertex operator <cit.> V_(z) = ^(z),with conformal weighth_=^22.Choosing ≫ 1 we can create the highly excited state as follows:|V_ = V_(0)|0.The Rényi entropy for the state |V_ was calculated before in <cit.>, and the result is the same as SnL for the vacuum state, no matter what the valueis.Thus, the relative entropy LDL can be obtained straightforwardly and the result isS(_̊A,_̊A,) = 16logs̱i̱ṉẖπℓLsinπℓL.In Fig. <ref> we have plotted the results as a function of ℓ/L for various /̱L.We see that the relative entropy is overall larger for heavier excited state. Note thatappears in j45 implicitly through =̱L12^2-1. § EXCITED STATE RÉNYI ENTROPY We now consider the 2D CFT with large central charge, which can be also thought as dual CFT of AdS_3. 
We aim to calculate the Rényi entropy S_n,ϕ for a highly excited state |ϕ, i.e., the conformal weight h_ϕ is order c for short interval ℓ≪ L so that we can obtain the results with two different methods based on short interval expansion up to order (ℓ/L)^8.The first method is to use OPE oftwist operators on the cylinder to evaluate the excited state Rényi entropy <cit.>. We have used this method in<cit.> to get the result up to order (ℓ/L)^6 and find that the subsystem ETH is violated for n ≠ 1 but holds for n=1, i.e., the entanglement entropy. In this paper we calculate up to order (ℓ/L)^8 and find nontrivial violation of subsystem ETH at the new order. For consistency check we also use the other two methods to calculate and obtain the same result. The second method is to use the multi-point correlation functions on complex plane <cit.>. As in <cit.>, we focus on the contributions of the holomorphic sector of the vacuum conformal family. Some details of the vacuum conformal family are collected in appendix <ref>.§.§ Method of twist operators By the replica trick for evaluating the single-interval Rényi entropy, we get the one-fold CFT on an n-fold cylinder, or equivalently an n-fold CFT, which we call ^n on the one-fold cylinder. The boundary conditions of the ^n on cylinder can be replaced by twist operators <cit.>. Thus, the partition function of ^n on cylinder capped by state |ϕ can be expressed as the two-point function of twist operators, i.e. _A_̊A,ϕ^n = Φ|(ℓ/2)(-ℓ/2)|Φ_,with the definition Φ≡∏_j=0^n-1ϕ_j and the index j marking different replicas. This is illustrated in figure <ref>. Formally and practically, we can use the OPE of the twist operators to turn the above partition function into a series expansion, and the formal series expansion for the excited state Rényi entropy isS_n,ϕ=c(n+1)12nlogℓ -1n-1log( ∑_K d_K ℓ^h_KΦ_K_Φ).The details about the OPE of twist operators <cit.> is reviewed in appendix <ref>. In arriving the above, we have used (<ref>) and the fact that Φ_K_Φ≡Φ|Φ_K|Φ_ is a constant.Further using the properties for the vacuum conformal family and its OPE in appendix <ref> and<ref>, i.e.,specifically (<ref>), (<ref>), (<ref>) and (<ref>), we can obtain the explicit result of the short interval expansion up to order (ℓ/L)^8 as follows:S_n,ϕ =c(n+1)/12nlogℓ +π^2 c (n+1) (24 ϵ_ϕ-1)ℓ^2/72 n L^2 -π^4 c (n+1) [ 48 (n^2+11)24 ϵ_ϕ^2- 24(n^2+1)ϵ_ϕ +n^2 ] ℓ^4/2160n^3 L^4 S_n,ϕ= -π^6 c (n+1) [96 (n^2-4) (n^2+47) ϵ_ϕ^3+36 (2 n^4+9 n^2+37) ϵ_ϕ^2-24 (n^4+n^2+1)ϵ_ϕ + n^4 ] ℓ^6/34020n^5 L^6 S_n,ϕ= +π^8 c (n+1) ℓ^8/453600 (5 c+22) n^7 L^8{ c [ 64 (13 n^6-1647 n^4+33927 n^2-58213) _ϕ^4S_n,ϕ= -64(n^2+11) (13 n^4+160 n^2-533) _ϕ^3 -48 (9 n^6+29 n^4+71 n^2+251) _ϕ^2 S_n,ϕ= +120 (n^2+1) (n^4+1) _ϕ -5 n^6 ]-5632(n^2-4)(n^2-9)(n^2+119) _ϕ^4S_n,ϕ=-2816 (n^2-4) (n^2+11) (n^2+19) _ϕ^3-128 (15 n^6+50 n^4+134 n^2+539) _ϕ^2 S_n,ϕ=+528 (n^2+1) (n^4+1) _ϕ-22 n^6 } +O((ℓ/L)^9).Note that the result up to order (ℓ/L)^6 is just proportional to c, and agrees with the result obtained previously in <cit.>. At the order (ℓ/L)^8, however, novel property appears. There appears a nontrivial 5c+22 factor in the overall denominator, which yields infinite number of higher order subleading terms in the 1/c expansion for large c. These subleading terms come from the contributions of the quasiprimary operatordefined by Adef at level four of the vacuum family.We will obtain the same result for other method insubsection <ref>. 
Instead of working on the cylinder geometry, we can also work on complex plane by conformal map, as shown in figure <ref>. The cylinder with coordinate w is mapped to a complex plane with coordinate z by a conformal transformation z=^2π w/L. The partition function then becomes a four-point function on complex plane _A_̊A,ϕ^n = ( 2πL)^2h_Φ(inf)(^πℓ/L)(^-πℓ/L)Φ(0)_.Using sts for the OPE of twist operators on complex plane, we get the excited state Rényi entropyS_n,ϕ=c(n+1)12nlog(LπsinπℓL) -1n-1[ ∑_K d_K C_ΦΦ K(1-^2πℓ/L)^h_K_2F_1(h_K,h_K;2h_K;1-^2πℓ/L) ].Using (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) we can reproduce (<ref>).§.§ Method of multi-point function on complex plane In the second method we use the formulism of multi-point function on complex plane, see <cit.>. The idea is illustrated in Fig.<ref>. Using the state/operator correspondence, we map the partition function on the capped n-fold cylinder into the two-point function on the n-fold complex plane ^n, i.e., formally _A _̊A,ϕ^n_A _̊A,0^n = Φ(inf)Φ(0)_^n.We then map each copy of complex plan into a wedge of deficit angle 2π/n by the followingconformal transformationf(z) = ( z-^πℓ/Lz-^-πℓ/L)^1/n.The two boundaries of each wedge correspond to the intervals just right above or below the interval A. Gluing all the n wedges along the boundaries, we then obtain the one-fold complex planeso that the above two-point function on ^n becomes a 2n-point function on a one-fold complex plane , i.e. Φ(inf)Φ(0)_^n = ( 2nsinπℓL)^2nh_ϕ∏_j=0^n-1( ^2π(ℓn L+2jn)ϕ(^2π(ℓn L+jn)) ϕ(^2πjn) )_.Based on the above, it is straightforward to see thatS_n,ϕ(ℓ) = S_n,ϕ(L-ℓ),which is expected for a pure state.Formally, the OPE of a primary operator with itself is given <cit.>ϕ(z)ϕ(w) = 1(z-w)^2h_ϕ_ϕ(z,w).with _ϕ(z,w)=∑_C_ϕϕ_∑_r=0^infa_^rr!(z-w)^h_+r^r(w),    a_^r = C_h_+r-1^rC_2h_+r-1^r,where the summation runs over all the holomorphic quasiprimary operators {} with eachbeing of conformal weight h_, and C_x^y denotes the binomial coefficient.In a unitary CFT, the operator with the lowest conformal weight is the identity operator, and so in z→ w limit _ϕ(z,w) = 1 + ⋯.Putting (<ref>) in (<ref>) and using <cit.>_A _̊A,0^n = (LπsinπℓL)^-2h_,we get the excited state Rényi entropyS_n,ϕ =c(n+1)12nlog( LπsinπℓL)-2nh_ϕn-1logsinπℓLnsinπℓnL-1n-1log∏_j=0^n-1_ϕ( ^2π(ℓn L+jn),^2πjn) _.We now perform the short-interval expansion for Snphi3 up to order (ℓ/L)^8 by considering only the contributions from the vacuum conformal family, i.e.,including its descendants up to level eight, see appendix <ref>. For n=2 there is a compact formulaS_2,ϕ =c8log( LπsinπℓL)-4h_ϕlogsinπℓL2sinπℓ2L-log[∑_ψC_ϕϕψ^2_ψ(sinπℓ2L)^2h_ψ_2F_1(h_ψ,h_ψ;2h_ψ;sin^2πℓ2L)].Using the details in the appendix <ref>, we can obtain the explicit result as follows[SinceC_ϕϕ=0 for a bosonic operatorwith odd integer conformal weight, using results in the appendix <ref> we can get the result up to order (ℓ/L)^19. However higher order results are too complicated to be revealing. We just write down the result up to order (ℓ/L)^11.]S_2,ϕ = c/8logℓ +π^2 c (24 _ϕ -1)ℓ^2/48 L^2 -π^4 c (180 _ϕ^2-30 _ϕ+1)ℓ^4/1440 L^4 -π^6 c (945 _ϕ^2-126 _ϕ+4)ℓ^6/90720 L^6 S_2,ϕ = +π^8 c [ 5c(83160 _ϕ^4-7560 _ϕ^3-1890 _ϕ^2+255 _ϕ-8) -2 (22680 _ϕ^2-2805 _ϕ+88) ] ℓ^8/2419200 (5 c+22) L^8 S_2,ϕ = +π^10 c [5 c (1372140 _ϕ^4-124740 _ϕ^3-14355 _ϕ^2+2046 _ϕ-64) -44 (8595 _ϕ^2-1023 _ϕ+32)]ℓ^10/239500800 (5 c+22) L^10 S_2,ϕ = +O((ℓ/L)^12). 
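As a sanity check of the compact n=2 formula, the ℓ² and ℓ⁴ terms of this expansion are reproduced by keeping only the identity and T in the sum over quasiprimaries (with C_ϕϕT = h_ϕ and α_T = c/2, as in appendix A). A sympy sketch, with the hypergeometric function truncated by hand to the orders that matter here:

```python
import sympy as sp

c, eps, l, L = sp.symbols('c epsilon ell L', positive=True)
h = c * eps
s = sp.sin(sp.pi * l / (2 * L))

# 2F1(2,2;4;s^2) truncated: only the first few terms contribute at this order
F = sum(sp.rf(2, k)**2 / (sp.rf(4, k) * sp.factorial(k)) * s**(2*k)
        for k in range(3))

S2 = (c / sp.Integer(8) * sp.log(L / sp.pi * sp.sin(sp.pi * l / L))
      - 4 * h * sp.log(sp.sin(sp.pi * l / L) / (2 * s))
      - sp.log(1 + h**2 / (c / 2) * s**4 * F))      # identity + T contributions

series = sp.series(S2 - c / sp.Integer(8) * sp.log(l), l, 0, 5).removeO()
expected = (sp.pi**2 * c * (24*eps - 1) * l**2 / (48 * L**2)
            - sp.pi**4 * c * (180*eps**2 - 30*eps + 1) * l**4 / (1440 * L**4))
assert sp.simplify(series - expected) == 0
```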
To perform the short-interval expansion Rényi entropy of general rank n, i.e., (<ref>), to order ℓ^m, we have to calculate a series of j-point correlation functions with j=1,2,⋯,⌊ m/2 ⌋. To order ℓ^8 the number of these multi-point correlation functions can be counted in each order by the following: ∏_k=2^inf1(1-x^k)^n = 1+n x^2+n x^3+n (n+3)/2 x^4 +n (n+1) x^5+n (n+1) (n+11)/6 x^6∏_k=2^inf1(1+x^k)^n =+n (n^2+5n+2)/2 x^7+n (n+3) (n^2+27n+14)/24 x^8+O(x^9).These multi-point correlation functions are listed in table <ref>, and we note that many of them are trivially vanishing. Putting theresults of appendix <ref> in (<ref>), we reproduce the Rényi entropy (<ref>).§ CHECK SUBSYSTEM WEAK ETH We now can use the result in the previous section to check the subsystem weak ETH for the 2D large c CFT. We first take the n→ 1 limit of the excited state Rényi entropy Snphi8 and the thermal one Snb up to order (ℓ/L)^8 to get the corresponding entanglement entropy. We then haveS_A,ϕ = c/6logℓ+π^2c (24 ϵ_ϕ-1)ℓ^2/36 L^2-π^4c (24 ϵ_ϕ-1)^2ℓ^4/1080 L^4+π^6c (24 ϵ_ϕ-1)^3ℓ^6/17010 L^6-π^8 c ℓ^8/226800 (5 c+22) L^8[ 5c (24 _ϕ-1)^4S_A,ϕ = +2 (8110080 _ϕ^4 -1013760 _ϕ^3 +47232 _ϕ^2 -1056 _ϕ +11) ]+O((ℓ/L)^9),andS_A, = c/6logℓ+π^2 c ℓ^2/36 β^2-π^4 c ℓ^4/1080 β^4+π^6 c ℓ^6/17010 β^6-π^8 c ℓ^8/226800 β^8 +O((ℓ/)̱^10).It is straightforward to see that (<ref>) fails to match with (<ref>) at order (ℓ/L)^8 under the identification of inverse temperature and conformal weight by the relation e47, and the discrepancy is S_A, - S_A,ϕ = 128π^8c_ϕ^2(22_ϕ-1)^2ℓ^8/1575(5 c+22) L^8+O((ℓ/L)^9).From LDL we know that this is nothing but the relative entropy S(_̊A,ϕ_̊A,). Note that this discrepancy is of order c^0 in the large c limit, and also there are infinite number of subleading terms in large c expansion.Based on the result DS-1, we then use the inequalities FAI, AI, and PI to estimate the order of the trace distance in large c limit and check the validity of ETH. We have obtained the Rényi entropy, entanglement entropy and relative entropy firstly in expansion of small ℓ, and then in expansion of large c. Focusing on the order of large c, we haveS_A ∼(c^0), S_n ∼(c), S(_̊A,ϕ_̊A,) ∼(c^0),and we assume that these orders still apply when 0<ℓ/L<1 is neither too small nor too large.From FAI, AI, and PI we get respectivelyt ≥(c^0)log d, d ≥^(c), t ≤(c^0).From the first inequality of z58, the lower bound of the trace distance t, which is crucial for the validity of subsystem weak ETH, depends on if the effective dimension d of the subsystem A is strictly infinite or how it scales with c. It is a subtle issue to determine d for generic CFTs. We will raise this as an interesting issue for further study, but now consider some interesting scenarios.[In <cit.>, instead a similar proposal to our first inequality of z58 for subsystem weak ETH, i.e.,t∼ d^-1 2 S(E) is used as the definition for the effective dimension d. We understand this as the working definition because the spectrum of the density matrix for the CFT should be continuous without further coarse-graining.]If d is strictly infinite, from (<ref>) we get0 ≤ t ≤(c^0),which is trivial and gives no useful information. It is also possible there exists a large but finite effective dimension d of the subsystem Athat satisfies FAI, AI and so satisfies z58. The Cardy's formula <cit.> and Boltzmann's entropy formula Ω(E) ∼^S(E) state that the number of states at a specific high energy is ^(c). 
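As a symbolic cross-check of the order-ℓ⁸ discrepancy DS-1 quoted above: substituting 1/β² = (24ε_ϕ − 1)/L² from e47 into S_A,β, the ℓ², ℓ⁴ and ℓ⁶ terms cancel against S_A,ϕ and the remainder is exactly DS-1. A sympy sketch (dropping the common (c/6) log ℓ term):

```python
import sympy as sp

c, eps, l, L = sp.symbols('c epsilon ell L', positive=True)
u = 24*eps - 1                       # = (L/beta)^2 after the energy matching e47

S_phi = (sp.pi**2*c*u*l**2/(36*L**2) - sp.pi**4*c*u**2*l**4/(1080*L**4)
         + sp.pi**6*c*u**3*l**6/(17010*L**6)
         - sp.pi**8*c*l**8/(226800*(5*c + 22)*L**8)
           * (5*c*u**4 + 2*(8110080*eps**4 - 1013760*eps**3
                            + 47232*eps**2 - 1056*eps + 11)))
S_beta = (sp.pi**2*c*u*l**2/(36*L**2) - sp.pi**4*c*u**2*l**4/(1080*L**4)
          + sp.pi**6*c*u**3*l**6/(17010*L**6) - sp.pi**8*c*u**4*l**8/(226800*L**8))

diff = sp.simplify(S_beta - S_phi)
target = 128*sp.pi**8*c*eps**2*(22*eps - 1)**2*l**8/(1575*(5*c + 22)*L**8)
assert sp.simplify(diff - target) == 0   # reproduces DS-1
```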
It is plausible that for both the reduced density matrices _̊A,ϕ and _̊A, only ^(c) components are nontrivial and other components are even smaller than exponential suppression. Then we get the tentative resultd ∼^(c).If this is true, from (<ref>) we get(c^-1) ≤ t ≤(c^0). Both check-1 and check-2 are consistent with the subsystem weak ETH weakETH. However, it lacks further evidence to obtain the power of suppression. § RELATIVE ENTROPY BETWEEN PRIMARY STATES In this section we present some byproducts of this paper obtained by using the same method as the one in subsection <ref><cit.>, which has also been used to calculate the relative entropy <cit.>. We will calculate the relative entropy S(_̊A,ϕ_̊A,ψ), the 2nd symmetrized relative entropy S_2(_̊A,ϕ,_̊A,ψ), and the Schatten 2-norm _̊A,ϕ-_̊A,ψ_2 between the reduced density matrices of two primary states |ϕ and |ψ in the short interval expansion, where ψ is similar to ϕ and is the primary field of conformal weight h_ψ=c_ψ. To calculate the relative entropy S(_̊A,ϕ_̊A,ψ), we first need tocalculate the “n-th relative entropy” S_n(_̊A,ϕ_̊A,ψ) = 1n-1( log_A _̊A,ϕ^n -log_A (_̊A,ϕ_̊A,0^n-1) ).and then take n→ 1 limit.We have already calculated _A _̊A,ϕ^n as illustrated in figure <ref> of in subsection <ref>, and this inspires us to calculate _A (_̊A,ϕ_̊A,ψ^n-1) as illustrated in figure <ref>.Similar to the manipulation in subsection <ref>, in the end we can obtain the formal result _A (_̊A,ϕ_̊A,ψ^n-1)_A _̊A,0^n =Ψ_ϕ(inf)Ψ_ϕ(0) _^n = ( sinπℓLnsinπℓnL)^2(h_ϕ+(n-1)h_ψ)_ϕ( ^2πℓn L,1 ) ∏_j=1^n-1_ψ( ^2π(ℓn L+jn),^2πjn) _,Here Ψ_ϕ≡ϕ_0∏_j=1^n-1ψ_j, with ϕ existing in one copy and ψ existing in the other n-1 copies. The explicit result up to order (ℓ/L)^8 isS_n(_̊A,ϕ_̊A,ψ) = 2c(_ϕ-_ψ)logsinπℓLnsinπℓnL +π^4 c(_ϕ-_ψ) (n+1) (n^2+11) ℓ^4/45 n^4 L^4(n _ϕ+(n-2) _ψ)S_n(_̊A,ϕ_̊A,ψ) = +π^6 c(_ϕ-_ψ)(n+1) ℓ^6/2835 L^6 n^6( 8 n (n^2-4)(n^2+47) _ϕ^2 +8 (n-3)(n^2-4) (n^2+47) _ψ^2 S_n(_̊A,ϕ_̊A,ψ) = +8 n (n^2-4)(n^2+47) _ϕ_ψ +3 n (2 n^4+9 n^2+37) _ϕ +3 (n-2) (2 n^4+9 n^2+37) _ψ)S_n(_̊A,ϕ_̊A,ψ) = -π^8 c(_ϕ-_ψ) (n+1)ℓ^8/28350 (5 c+22) n^8 L^8[c ( 4 n (13 n^6-1647 n^4+33927n^2-58213) _ϕ^3S_n(_̊A,ϕ_̊A,ψ) =+4 (n-2) (13 n^6+40 n^5-1567 n^4+4400 n^3+42727n^2-42840 n-143893) _ψ^3S_n(_̊A,ϕ_̊A,ψ) =+4 n (13 n^6-1647 n^4+33927n^2-58213) _ϕ^2_ψ S_n(_̊A,ϕ_̊A,ψ) =+4 (n-2) (13 n^6-40 n^5-1727 n^4-4400n^3+25127 n^2+42840 n+27467) _ϕ_ψ^2S_n(_̊A,ϕ_̊A,ψ) =-4 n(n^2+11) (13 n^4+160 n^2-533) _ϕ^2S_n(_̊A,ϕ_̊A,ψ) =-4 (n-2) (n^2+11) (13 n^4-10 n^3+140 n^2-190 n-913) _ψ^2S_n(_̊A,ϕ_̊A,ψ) =-4 (n^2+11) (13 n^5-3 n^4+160 n^3-10 n^2-533 n-227)_ϕ_ψ S_n(_̊A,ϕ_̊A,ψ) =-3 n (9 n^6+29 n^4+71 n^2+251) _ϕ-3 (n-2) (9n^6+29 n^4+71 n^2+251) _ψ) S_n(_̊A,ϕ_̊A,ψ) = -352 n (n^2-9) (n^2-4)(n^2+119) _ϕ^3 -352 (n-4)(n^2-9) (n^2-4) (n^2+119) _ψ^3 S_n(_̊A,ϕ_̊A,ψ) = -352 n (n^2-9) (n^2-4) (n^2+119) _ϕ^2_ψ -352 n (n^2-9) (n^2-4) (n^2+119) _ϕ_ψ^2 S_n(_̊A,ϕ_̊A,ψ) = -176 n (n^2-4) (n^2+11)(n^2+19) _ϕ^2 -176 (n-3)(n^2-4) (n^2+11) (n^2+19) _ψ^2 S_n(_̊A,ϕ_̊A,ψ) = -176 n (n^2-4) (n^2+11) (n^2+19) _ϕ_ψ -8 n (15 n^6+50 n^4+134 n^2+539) _ϕ S_n(_̊A,ϕ_̊A,ψ) = -8 (n-2) (15 n^6+50 n^4+134 n^2+539)_ψ] + O((ℓ/L)^9).By taking n→ 1 limit we getS(_̊A,ϕ_̊A,ψ) =8 π^4 c (_ϕ-_ψ)^2ℓ^4/15L^4 -32π^6 c (_ϕ-_ψ)^2(8 (_ϕ+2 _ψ)-1)ℓ^6/315 L^6 S(_̊A,ϕ_̊A,ψ) = +8 π^8 c (_ϕ-_ψ)^2ℓ^8/1575 (5 c+22) L^8(5 c (288 _ϕ^2+1568 _ψ^2+576_ϕ_ψ-48 _ϕ-128 _ψ+3)S(_̊A,ϕ_̊A,ψ) = +2 (7040 _ϕ^2+21120 _ψ^2+14080 _ϕ_ψ-880 _ϕ-1760 _ψ+41) ) + O((ℓ/L)^9).To order ℓ^6 the result is in accord with <cit.>.Using the above result we can obtain S(_̊A,ψ_̊A,ϕ) by swapping _ϕ and _ψ in e515. 
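Summing e515 and its ε_ϕ ↔ ε_ψ swap gives the symmetrized relative entropy quoted next; the ℓ⁴ and ℓ⁶ orders of that combination can be checked with a few lines of sympy:

```python
import sympy as sp

c, ef, es, l, L = sp.symbols('c epsilon_phi epsilon_psi ell L', positive=True)

def S_rel(ea, eb):
    # ell^4 and ell^6 terms of S(rho_{A,a} || rho_{A,b}) from e515
    return (8*sp.pi**4*c*(ea - eb)**2*l**4/(15*L**4)
            - 32*sp.pi**6*c*(ea - eb)**2*(8*(ea + 2*eb) - 1)*l**6/(315*L**6))

symmetrized = sp.expand(S_rel(ef, es) + S_rel(es, ef))
expected = sp.expand(16*sp.pi**4*c*(ef - es)**2*l**4/(15*L**4)
                     - 64*sp.pi**6*c*(ef - es)**2*(12*(ef + es) - 1)*l**6/(315*L**6))
assert symmetrized - expected == 0
```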
After that we can get the symmetrized relative entropyS(_̊A,ϕ,_̊A,ψ) = 16 π^4 c (_ϕ-_ψ)^2ℓ^4/15L^4 -64 π^6 c (_ϕ-_ψ)^2(12 (_ϕ+_ψ)-1) ℓ^6/315L^6d(_̊A,ϕ,_̊A,ψ) = +16 π^8 c(_ϕ-_ψ)^2 ℓ^8/1575(5 c+22) L^8[5c ( 928 (_ϕ^2+_ψ^2)+576 _ϕ_ψ-88 (_ϕ+_ψ)+3)d(_̊A,ϕ,_̊A,ψ) =+2 (14080 (_ϕ^2+_ψ^2)+14080_ϕ_ψ-1320 (_ϕ+_ψ)+41)] + O((ℓ/L)^9).Note that if we take _ψ=0, we can obtain S(_̊A,ϕ_̊A,0), S(_̊A,0_̊A,ϕ) and S(_̊A,ϕ,_̊A,0) which characterize the difference of the excited state |ϕ and the vacuum state |0. Moreover, all the above results show nontrivial subleading 1/c corrections at the order (ℓ/L)^8.The n-th relative entropy for n 1 is not positive definite so that it cannot used as the measure for the difference of two quantum states. However, it turns out the 2nd symmetrized relative entropy S_2(,̊'̊) is positive definite because it can be written asS_2(,̊'̊) = log^̊2 '̊^2[(̊̊')]^2.Thus, S_2(,̊'̊) can be used as a difference measure between two quantum states. In fact it is directly related to the overlap of the two density matrices (,̊'̊) = [(̊̊')]^2^̊2 '̊^2.Note that the 2nd symmetrized relative entropy S_2(,̊'̊) is vanishing if and only if two density matrices are identical =̊'̊ and is infinite for two orthogonal density matrices (̊̊') =0.More general, one can also use Schatten n-norm to measure the difference of two density matrices -̊'̊_n = [ ( |-̊'̊|^n) ]^1/n.For n=1 it is just the trace distance, and for n=2 we have -̊'̊_2 = [ ^̊2 + '̊^2 -2 (̊̊') ]^1/2, Below we will calculate both S_2(_̊A,ϕ,_̊A,ψ) and _̊A,ϕ-_̊A,ψ_2 by following the similar trick used in the previous subsection. In fact, the ingredients needed to carry out the calculations such as _A_̊A,ϕ^2 and _A(_̊A,ϕ_̊A,ψ) have all been done already in previous sections. Packing them up, we then obtain the formal results as follows:S_2(_̊A,ϕ,_̊A,ψ) = log{[∑_C_ϕϕ^2_(sinπℓ2L)^2h__2F_1(h_,h_;2h_;sin^2πℓ2L)]S_2(_̊A,ϕ,_̊A,ψ) =×[∑_C_ψψ^2_(sinπℓ2L)^2h__2F_1(h_,h_;2h_;sin^2πℓ2L)]S_2(_̊A,ϕ,_̊A,ψ) =÷[∑_C_ϕϕC_ψψ_(sinπℓ2L)^2h__2F_1(h_,h_;2h_;sin^2πℓ2L)]^2}, _̊A,ϕ-_̊A,ψ_2 = ( LπsinπℓL)^-c/16{( sinπℓL2sinπℓ2L)^4h_ϕ[∑_C_ϕϕ^2_(sinπℓ2L)^2h__2F_1(h_,h_;2h_;sin^2πℓ2L)]_̊A,ϕ-_̊A,ψ_2 =+( sinπℓL2sinπℓ2L)^4h_ψ[∑_C_ψψ^2_(sinπℓ2L)^2h__2F_1(h_,h_;2h_;sin^2πℓ2L)] _̊A,ϕ-_̊A,ψ_2 =-2( sinπℓL2sinπℓ2L)^2(h_ϕ+h_ψ)[∑_C_ϕϕC_ψψ_(sinπℓ2L)^2h__2F_1(h_,h_;2h_;sin^2πℓ2L)] }^1/2. Then, in the short interval expansion we obtain the explicit results as follows:S_2(_̊A,ϕ,_̊A,ψ) = π^4 c (_ϕ-_ψ)^2ℓ^4/8 L^4[ 1+π^2 ℓ^2/12 L^2+π^4 (5 c+24-20 c(ϵ_ϕ+ϵ_ψ) (11 ϵ_ϕ+11 ϵ_ψ-1))ℓ^4/160 (5 c+22)L^4 S_2(_̊A,ϕ,_̊A,ψ) =+π^6 (145 c+764-1260 c (ϵ_ϕ+ϵ_ψ) (11 ϵ_ϕ+11 ϵ_ψ-1))ℓ^6/60480 (5 c+22)L^6+O((ℓ/L)^8) ], _̊A,ϕ-_̊A,ψ_2 = π^2 √(c(c+2))(_ϕ-_ψ)^2ℓ^24L^2( ℓ)^-c/16[ 1+π^2 (c+4) (c+2-12 c (ϵ_ϕ+ϵ_ψ))ℓ^2/96 (c+2)L^2+O((ℓ/L)^4) ]. We see that both of them have nontrivial large c corrections though with different structures.§ CONCLUSION AND DISCUSSION ETH is a fundamental issue in quantum thermodynamics and its validity for various situation should be scrutinized. An interesting version of ETH was proposed very recently in <cit.>, the so-called subsystem ETH which requires the difference between high energy state and microcanonical ensemble thermal state over all a local region should be exponentially suppressed by the entropy of the total system.This can be further relaxed to the so-called subsystem weak ETH, which compares the high energy state and canonical ensemble thermal state. 
To be precise, the trace distance of the reduced density matrices should be power-law suppressed.In this paper we check the validity of subsystem weak ETH for a 2D large c CFT. We evaluate the Rényi entropy, entanglement entropy, and relative entropy of the reduced density matrices to measure the difference between the heavy primary state and thermal state. We use these results and some information inequalities to get the bounds for the trace distance in large c limit, and find (c^0)log d≤ t ≤(c^0).The upper bound is trivial. The lower bound depends on how the effective dimension d of the subsystem scales with c, which is subtle to determine. Instead of using the relation trace d-1 as a definition of d, see, e.g., <cit.>, we treat it as an open issue and consider the possible interesting scenarios. One of these is that dsatisfies Fannes-Audenaert inequality and at the same time yields nontrivial lower bound of t. It is plausible thatd ∼^(c).If this is true, the trace distance would be power-law suppressed so that it is consistent with the subsystem weak ETH.We have to say we do not have a concrete proof that the result (<ref>) is correct. The validity of the subsystem weak ETH really depends only on whether the large but finite effective dimension exists and if yes how it scales with the large central charge. It is an open question, and it is possible one has to calculate the trace distance explicitly to find the answer.As pointed out in <cit.>, the mismatch of the Rényi entropies of excited state and canonical ensemble thermal state at order ℓ^4 originates from the mismatch of the one-point expectation values of the level 4 quasiprimary operator . The same reason applies to the mismatch of entanglement entropies, and the non-vanishing of relative entropy at order ℓ^8 in this paper. One possible resolution is that the excited state should be compared to the generalized Gibbs ensemble thermal state <cit.>, instead of the ordinary canonical thermal state. In fact, there are an infinite number of commuting conserved charges in the vacuum conformal family <cit.>, and in the generalized Gibbs ensemble one can also use noncommuting charges <cit.>. We will discuss about it in more details in a work that will come out soon <cit.>.[We thank a JHEP referee for discussions about higher order conserved charges in KdV hierarchy and the generalized Gibbs ensemble.]§ ACKNOWLEDGMENTS We thank Huajia Wang for an initial participation of the project. We would like to thank Pallab Basu, Diptarka Das, Shouvik Datta, Sridip Pal for their very helpful discussions. We thank Matthew Headrick for his Mathematica code Virasoro.nb, which can be downloaded at <http://people.brandeis.edu/ headrick/Mathematica/index.html>. JJZ would like to thank the organisers of The String Theory Universe Conference 2017 held in Milan, Italy on 20-24 February 2017 for being given the opportunity to present some of the results as gongshow and poster, and thank the participants, especially Geoffrey Compère, Debajyoti Sarkar and Marika Taylor, for stimulating discussions. SH is supported by Max-Planck fellowship in Germany and the National Natural Science Foundation of China Grant No. 11305235. FLL is supported by Taiwan Ministry of Science and Technology through Grant No. 103-2112-M-003-001-MY3 and No. 103-2811-M-003-024. JJZ is supported by the ERC Starting Grant 637844-HBQFTNCER. § SOME DETAILS OF VACUUM CONFORMAL FAMILY We list the holomorphic quasiprimary operators in vacuum conformal family to level 8. 
In level 2, we have the quasiprimary operator T, with the usual normalization _T=c2. In level 4, we have =(TT)-310^2T,   _ A=c(5c+22)10.In level 6, we have the orthogonalized quasiprimary operators =( T T)-45(^2TT)-142^4T,  =+ 9370c+29 B,with the definition= (T(TT))-910(^2TT)-128^4 T,and the normalization factors are _ B=36c (70 c+29)/175,   _ D=3 c (2 c-1) (5 c+22) (7 c+68)/4 (70 c+29).In level 8 we have the orthogonalized quasiprimary operatorsE= (^2 T^2 T)-109(^3 TT)+1063(^4TT)-1324^6 T, H= F+ 9 (140 c+83)/50 (105 c+11) E,I= G+ 81 (35 c-51)/100 (105 c+11) E + 12(465 c-127)/5 c(210 c+661)-251 H,with the definitionsF=( T( TT))-45(^2 T(T T))+215(^3 TT)-370(^4TT), G=(T(T(TT)))-95(^2 T(TT))+310(^3 TT)-1370(^4TT)-1240^6 T,and the normalization factors are _ E=22880 c (105 c+11)/1323,_ H=26 c (5 c+22) (5 c (210 c+661)-251)/125 (105 c+11), _ I=3 c (2 c-1) (3 c+46) (5 c+3) (5 c+22) (7 c+68)/2 (5 c (210 c+661)-251). In this paper we need the structure constantsC_TTT=c,    C_TT=c(5c+22)/10,and the four-point function on complex planeT(z_1) T(z_2) T(z_3) T(z_4) _ = c^24( 1( z_12z_34)^4+1( z_13z_24)^4+1( z_14z_23)^4) +c (z_12z_34)^2+(z_13z_24)^2+(z_14z_23)^2(z_12z_34z_13z_24 z_14z_23)^2,with the definitions z_ij≡ z_i-z_j.Under a general coordinate transformation z→ f(z), we have the transformation rulesT(z)=f'^2 T(f)+c12s,   (z)=f'^4(f)+5c+2230s ( f'^2 T(f)+c24s ),(z)= f'^6(f)-85sf'^4(f) -70c+291050sf'^4^2T(f) +70c+29420f'^2(f's'-2f”s) T(f)(z)= -11050( 28(5c+22)f'^2s^2+(70c+29)(f'^2s”-5f'f”s'+5f”^2s) )T(f)(z)= -c50400( 744s^3+ (70c+29)(4ss”-5s'^2) ),(z)=f'^6(f)+(2c-1)(7c+68)70c+29s ( 54 f'^4(f)+5c+2248s ( f'^2T(f)+c36s ) ), (z) = f'^8(f) -4510/567s f'^6 (f) +50/567 s f'^6 ^2(f) -25/63 f'^4 (f' s'-2 s f”) (f)(z) = +4/63f'^2 (25 s f”^2+5 f'^2 s”+98 s^2 f'^2-25 f' f” s') (f) +105 c+11/7938 s f'^6^4T(f)(z) = -105 c+11/1134f'^4 (f' s'-2 s f”)^3T(f) +1/5670f'^2 (9 (105 c+11) (f'^2 s”+5 s f”^2-5 f' f” s') (z) = +10 (120 c+77) s^2 f'^2)^2T(f) -1/2268((105 c+11) (2 s^(3) f'^3-30 s f”^3-18 f'^2 f” s”+45 f' f”^2 s') (z) = + 10 (120 c+77) s f'^2 (f' s'-2 s f”)) T(f) +1/79380 f'^2(8 (3570 c+2629) s f'^4 s”(z) = -5 (2940 c+2563) f'^4 s'^2+12 (1225 c+9449) s^3 f'^4+700 (120 c+77) s f'^2 f” (s f”-f' s') (z) = +5 (105 c+11) (105 s f”^4+2 s^(4) f'^4-28 s^(3) f'^3 f”+126 f'^2 f”^2 s”-210 f' f”^3 s'))T(f)(z) = +c/952560((105 c+11) (10 s s^(4)+63 s”^2-70 s^(3) s') +451 s (20 s s”-25 s'^2+52 s^3)),(z) = f'^8(f)-8/5 s f'^6(f)+91 (5 c+22) (5 c (210 c+661)-251)/540 (70 c+29) (105 c+11)s f'^6(f) (z) =-5 c (210 c+661)-251/540 (105 c+11)s f'^6^2(f)+5 c (210 c+661)-251/120 (105 c+11)f'^4 (f' s'-2 s f”)(f) (z) =+1/150 (105 c+11)f'^2 ((5 c (210 c+661)-251) (5 f' f” s'-5 s f”^2-f'^2 s”) (z) =-8 (25 c (21 c+187)-951) s^2 f'^2)(f)-(5 c+22) (5 c (210 c+661)-251)/9000 (105 c+11) s^2 f'^4^2T(f)(z) =+(5 c+22) (5 c (210 c+661)-251)/3600 (105 c+11)s f'^2 (f' s'-2 s f”) T(f)(z) =+5 c+22/108000 (105 c+11)(3 (5 c (210 c+661)-251) (-20 s^2 f”^2-8 s f'^2 s”+5 f'^2 s'^2+20 s f' f” s')(z) =-8 (15 c (210 c+2273)-7357) s^3 f'^2)T(f)-c (5 c+22) s/1296000 (105 c+11)(104 (465 c-127) s^3(z) =+ 3 (5 c (210 c+661)-251) (4 s s”-5 s'^2)), (z)=f'^8(f) +(3 c+46) (5 c+3) (70 c+29)/3 (5 c (210 c+661)-251)s ((f) f'^6+5 (2 c-1) (7 c+68)/8 (70 c+29)s ( f'^4(f) (z)=+5c+22/90 s ( f'^2 T(f) +c/48 s))).In the above equations, we have the definition of Schwarzian derivatives(z)=f”'(z)f'(z)-32 ( f”(z)f'(z))^2,and the shorthand notationsf' ≡ f'(z),    f”≡ f”(z),    s ≡ s(z),    s' ≡ s'(z),    s”≡ s”(z),    s^(3)≡ s^(3)(z),   ⋯. 
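The transformation rule for T together with the Schwarzian defined above fixes the cylinder one-point functions quoted below; a sympy sketch for the exponential map f(w) = e^{2πi w/L}, where the Schwarzian is s = 2π²/L² and ⟨T(z)⟩ = h_ϕ/z² in the primary state:

```python
import sympy as sp

w, L, c, h = sp.symbols('w L c h', positive=True)
f = sp.exp(2*sp.pi*sp.I*w/L)

def schwarzian(f, w):
    # s = f'''/f' - (3/2) (f''/f')^2, as defined above
    return sp.simplify(sp.diff(f, w, 3)/sp.diff(f, w)
                       - sp.Rational(3, 2)*(sp.diff(f, w, 2)/sp.diff(f, w))**2)

s = schwarzian(f, w)
assert sp.simplify(s - 2*sp.pi**2/L**2) == 0

# T(w) = f'^2 T(f) + (c/12) s, with <T(z)> = h/z^2 on the plane
T_cyl = sp.simplify(sp.diff(f, w)**2 * (h/f**2) + c*s/12)
assert sp.simplify(T_cyl - sp.pi**2*(c - 24*h)/(6*L**2)) == 0
```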
For a general primary operator ϕ with conformal weight h_ϕ and normalization factor _ϕ=1, we have the structure constantsC_ϕϕ T=h_ϕ,   C_ϕϕ=h_ϕ(5h_ϕ+1)5,   C_ϕϕ=-2 h_ϕ(14 h_ϕ+1)/35 ,C_ϕϕ=h_ϕ [(70 c+29) h_ϕ^2+(42 c-57) h_ϕ+(8 c-2)]/70 c+29,   C_ϕϕ=4 h_ϕ (27 h_ϕ+1)/63,C_ϕϕ=-2 h_ϕ(10 (105 c+11) h_ϕ^2+(435 c-218) h_ϕ+55 c-4)/25 (105 c+11),C_ϕϕ=h_ϕ/5 c (210 c+661)-251((5 c (210 c+661)-251) h_ϕ^3+6 (c (210 c-83)+153) h_ϕ^2C_ϕϕ= +(c (606 c-701)-829) h_ϕ+6 c (18 c+13)-6). For a general holomorphic quasiprimary operator , the non-vanishing of C_ϕϕ require thatis bosonic and its conformal dimension h_ is an even integer, and this leads to C_ϕϕ=C_ϕϕ. We have the three-point function on complex plane ϕ(inf)^r(z)ϕ(0)_ = (-)^r(h_+r-1)!(h_-1)!C_ϕϕz^h_+r.For a general operator , we denote its expectation value on a cylinder with spatial period L in excited state |ϕ as _ϕ=ϕ||ϕ_. From translation symmetry in both directions of the cylinder, we know that _ϕ is a constant. So for r∈ and r>0, we have ^r_ϕ=0.By mapping the cylinder to a complex plane, using (<ref>) and (<ref>) we get the expectation valuesT _ϕ = π^2 (c-24 h_ϕ)/6 L^2,   _ϕ = π^4(c (5 c+22) -240 (c+2) h_ϕ +2880 h_ϕ^2 )/180 L^4,   _ϕ = -2 π^6(31 c - 504 h_ϕ )/525 L^6,_ϕ = π^6/216 (70 c+29) L^6(c (2 c-1) (5 c+22) (7 c+68) -72 (70 c^3+617c^2+938c-248) h_ϕ _ϕ = +1728 (c+4) (70 c+29) h_ϕ^2 -13824 (70 c+29) h_ϕ^3 ),_ϕ = 572 π^8 (41 c-480 h_ϕ)/59535 L^8, _ϕ = -13 π^8/10125 (105 c+11) L^8(c (5 c+22) (465 c-127)-480 (195 c^2+479c-44) h_ϕ+8640 (105 c+11) h_ϕ^2),    _ϕ = π^8/1296 (1050 c^2+3305 c-251) L^8( c (2 c-1) (3 c+46) (5 c+3) (5 c+22) (7 c+68)_ϕ =-96 (1050 c^5+23465 c^4+153901 c^3+274132 c^2+22388 c-6864) h_ϕ _ϕ =+3456 (1050 c^4+16325 c^3+69963 c^2+65686 c-648) h_ϕ^2 _ϕ =-55296 (c+6) (1050 c^2+3305 c-251) h_ϕ^3+331776 (1050 c^2+3305 c-251) h_ϕ^4 ).§ OPE OF TWIST OPERATORS We review OPE of twist operators in the n-fold CFT that is denoted as ^n<cit.>. We also define and calculate C_ΦΦ K, Φ_K_Φ, and b_K that would be useful to subsection <ref>. Note that in this paper we only consider contributions of the holomorphic sector. The twist operatorsandare primary operators with conformal weights<cit.> h_=h_=c(n^2-1)24n.We have the OPE of twist operators <cit.>(z)(w)=c_n(z-w)^2h_∑_K d_K ∑_r=0^infa_K^rr!(z-w)^h_K+r^rΦ_K(w).Here c_n is the normalization factor. The summation K is over each orthogonalized holomorphic quasiprimary operator Φ_K in ^n, and h_K is the conformal weight of Φ_K. We have definitiona_K^r ≡C_h_K+r-1^rC_2h_K+r-1^r,with C_x^y denoting the binomial coefficient that is also written as xy. To level 8, the ^n holomorphic quasiprimary operators has been constructed in <cit.>, and we just list them in table <ref>. The normalization factors _K and OPE coefficients d_K for all these quasiprimary operators can also be found in <cit.>.From a holomorphic primary operator ϕ with normalization _ϕ=1 in the original CFT, we can define the ^n primary operator Φ = ∏_j=0^n-1ϕ_j.In subsection <ref>, we need structure constant C_ΦΦ K for the quasiprimary operators Φ_K in table <ref>. The results can be written in terms of (<ref>). 
First of all it is easy to seeC_ΦΦ T = C_ϕϕ T,   C_ΦΦ = C_ϕϕ,   C_ΦΦ = C_ϕϕ,   C_ΦΦ = C_ϕϕ, C_ΦΦ = C_ϕϕ,   C_ΦΦ = C_ϕϕ,   C_ΦΦ = C_ϕϕ,   C_ΦΦ TT = C_ϕϕ T^2, C_ΦΦ T = C_ϕϕ TC_ϕϕ,   C_ΦΦ T = C_ϕϕ TC_ϕϕ,   C_ΦΦ T = C_ϕϕ TC_ϕϕ, C_ΦΦ = C_ϕϕ^2,   C_ΦΦ TTT = C_ϕϕ T^3,   C_ΦΦ TT = C_ϕϕ T^2 C_ϕϕ,   C_ΦΦ TTTT = C_ϕϕ T^4.There are vanishing structure constantsC_ΦΦ=C_ΦΦ=C_ΦΦ=C_ΦΦ T=C_ΦΦ=0.For , , andwe haveC_ΦΦ= -4/5C_ϕϕ T^2,    C_ΦΦ= -56/45C_ϕϕ TC_ϕϕ,    C_ΦΦ= 12/7C_ϕϕ T^2.Finally, we haveC_ΦΦ T = C_ϕϕ TC_ϕϕ,    C_ΦΦ = 79 C_ΦΦ T,    C_ΦΦ = 711 C_ΦΦ T. It is easy to get Φ_K_Φ that appear in (<ref>) interms of (<ref>)T _Φ =T _ϕ,   _Φ = _ϕ,   _Φ = _ϕ,   _Φ = _ϕ,   _Φ = _ϕ,   _Φ = _ϕ,_Φ = _ϕ,    TT _Φ =T _ϕ^2,    T_Φ =T _ϕ_ϕ,    T_Φ =T _ϕ_ϕ,    T_Φ =T _ϕ_ϕ,_Φ = _ϕ^2,    TTT _Φ =T _ϕ^3,    TT_Φ =T _ϕ^2 _ϕ,    TTTT _Φ =T _ϕ^4.Because of (<ref>) we have the vanishing results _Φ = _Φ = _Φ = _Φ = _Φ = _Φ = _Φ = _Φ = _Φ =T_Φ =T_Φ =0. From OPE coefficient d_K for quasiprimary operators in table <ref> we may define b_K by summing over the indices of Φ_K b_K = ∑_j_1,⋯ d_K^j_1⋯.For examples, in table <ref>T denotes operators T_j with 0≤ j ≤ n-1, and TT denotes operators T_j_1T_j_2_j_3 with 0≤ j_1,2,3≤ n-1 and the constraints j_1 < j_2, j_1≠ j_3, j_2 ≠ j_3, and so we haveb_T = ∑_j=0^n-1d_T=n d_T,andb_TT = ∑_0≤ j_1,2,3≤ n-1d_TT^j_1j_2j_3 with constraints j_1 < j_2, j_1≠ j_3, j_2 ≠ j_3.Using the results of d_K and the summation formulas in <cit.> we get the b_K we need. In subsection <ref> we needb_T=n^2-1/12 n,   b_=(n^2-1)^2/288 n^3,   b_=-(n^2-1)^2 (70 c n^2+122 n^2-93)/10368 (70 c+29) n^5, b_=(n^2-1)^3/10368 n^5,   b_=(n^2-1)^2 (11340 c n^4+11561 n^4-16236 n^2+5863)65894400 (105 c+11) n^7, b_=-(n^2-1)^3 (3150 c^2 n^2+c (15960 n^2-6045)-2404 n^2+1651)539136 (5 c (210 c+661)-251) n^7,   b_=(n^2-1)^4/497664 n^7, b_TT=(n^2-1) (5 c (n+1) (n-1)^2+2 n^2+22)/1440 c n^3 ,   b_T=(n^2-1)^2 (5 c (n+1) (n-1)^2+4 n^2+44)/17280 c n^5, b_T=-(n^2-1)^2/13063680 c (70 c+29) n^7( 7350 n^2 c^2 (n-1)^2 (n+1) b_T= +35 c (366 n^5-238 n^4-645 n^3+2369 n^2+279 n-403) +2 (6787 n^4+71089 n^2-65348) ), b_T=(n^2-1)^3 (5 c (n+1) (n-1)^2+6 n^2+66)/622080 c n^7, b_=1/5806080 c (5 c+22) n^7( 175 c^2 (n+1)^4 (n-1)^5 b_=+70 c (n^2-1)^3 (11 n^3-7 n^2-11 n+55)+8 (n^2-1)(n^2+11) (157 n^4-298 n^2+381)), b_TTT=(n-2) (n^2-1)/362880 c^2 n^5(35 c^2 (n+1)^2 (n-1)^3+42 c (n^4+10 n^2-11)-16 (n+2) (n^2+47)),b_TT=(n-2) (n^2-1)14515200 c^2 n^7( 175 c^2 (n+1)^3 (n-1)^4+350 c (n^2-1)^2 (n^2+11)b_TT=-128 (n+2) (n^4+50 n^2-111) ), b_TTTT=(n-3) (n-2) (n^2-1)/87091200 c^3 n^7( 175 c^3 (n+1)^3 (n-1)^4+420 c^2 (n^2-1)^2 (n^2+11)b_TTTT=-4 c (59 n^5+121 n^4+3170 n^3+6550 n^2-6829 n-11711)+192 (n+2) (n+3) (n^2+119) ).In subsection <ref>, we also needb_=-(n^2-1)(70 c (n-1)^2 (n+1) n^2-2 n^4+215 n^2-93)/725760 c n^5, b_=-(n^2-1)^2(210 c (n-1)^2 (n+1) n^2+38 n^4+1445 n^2-403)37739520 c n^7,b_=(n^2-1)(11340 c (n-1)^2 (n+1) n^4-1481 n^6+27797 n^4-22099 n^2+5863)6918912000 c n^7,b_T+79 b_+711b_=-(n-2) (n^2-1)/188697600 c^2 n^7( 1050 c^2 (n+1)^2 n^2 (n-1)^3b_T+79 b_+711b_=+5 c (n^2-1)(122 n^4+2369 n^2-403)-4 (n+2) (81 n^4+4600 n^2-2041) ). § USEFUL SUMMATION FORMULAS Most of the summation formulas that are used in this paper can be found in <cit.>. There are two other ones ∑_≠1s_j_1j_2^2 s_j_2j_3^4 s_j_3j_1^4 = 4 n (n^2-4) (n^2-1) (n^2+19) (n^4+19 n^2+628)/467775, ∑_≠c_j_1j_2 c_j_1j_3s_j_1j_2^3 s_j_1j_3^3 s_j_2j_3^4 = 2 n (n^2-25) (n^2-4) (n^2-1) (n^4+30 n^2+419)/467775,and the summations are in the range 0≤ j_1,2,3≤ n-1 with the constraints j_1≠ j_2, j_1≠ j_3 and j_3≠ j_1. 
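Both lemmas can be spot-checked numerically for small n, summing over pairwise-distinct indices (with the shorthand s_j₁j₂ and c_j₁j₂ defined just below, and reading the trigonometric factors as inverse powers, 1/(s_j₁j₂² s_j₂j₃⁴ s_j₃j₁⁴) and c_j₁j₂ c_j₁j₃/(s_j₁j₂³ s_j₁j₃³ s_j₂j₃⁴), which is the reading consistent with the quoted closed forms). A Python sketch:

```python
from itertools import permutations
from math import sin, cos, pi, isclose

def lemma_lhs(n):
    t1 = t2 = 0.0
    for j1, j2, j3 in permutations(range(n), 3):   # pairwise-distinct indices
        s12 = sin(pi*(j1 - j2)/n); s13 = sin(pi*(j1 - j3)/n); s23 = sin(pi*(j2 - j3)/n)
        t1 += 1.0/(s12**2 * s23**4 * s13**4)
        t2 += cos(pi*(j1 - j2)/n)*cos(pi*(j1 - j3)/n)/(s12**3 * s13**3 * s23**4)
    return t1, t2

def lemma_rhs(n):
    r1 = 4*n*(n**2 - 4)*(n**2 - 1)*(n**2 + 19)*(n**4 + 19*n**2 + 628)/467775
    r2 = 2*n*(n**2 - 25)*(n**2 - 4)*(n**2 - 1)*(n**4 + 30*n**2 + 419)/467775
    return r1, r2

for n in range(3, 9):
    lhs, rhs = lemma_lhs(n), lemma_rhs(n)
    assert isclose(lhs[0], rhs[0], rel_tol=1e-8, abs_tol=1e-8)
    assert isclose(lhs[1], rhs[1], rel_tol=1e-8, abs_tol=1e-8)
```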
Here we have used the shorthand s_j_1j_2=sinπ(j_1-j_2)n, c_j_1j_2=cosπ(j_1-j_2)n, and et al.We define the summation of k indices 0≤ j_1,2,⋯,k≤ n-1∑_≠ f(j_1,j_2,⋯,j_k),with the constraints that any two of the indices are not equal and the function f(j_1,j_2,⋯,j_k) is totally symmetric for the k arguments. First we have ∑_≠' f(0,j_2,⋯,j_k) = 1n∑_≠ f(j_1,j_2,⋯,j_k),with the summation ≠' of the left-hand side being over 1≤ j_2,⋯,k≤ n-1 and the constraints that any two of the indices are not equal. Then we have ∑_≠' f(j_1,j_2,⋯,j_k) = n-kn∑_≠ f(j_1,j_2,⋯,j_k),with the summation of the left-hand side being over 1≤ j_1,2,⋯,k≤ n-1.10Deutsch:1991 J. M. Deutsch, “Quantum statistical mechanics in a closed system,”http://dx.doi.org/10.1103/PhysRevA.43.2046 Phys. Rev. A43 (1991) 2046–2049.Srednicki:1994 M. Srednicki, “Chaos and quantum thermalization,”http://dx.doi.org/10.1103/PhysRevE.50.888 Phys. Rev. E50 (1994) 888–901.Lashkari:2016vgj N. Lashkari, A. Dymarsky, and H. Liu, “Eigenstate Thermalization Hypotehsis in Conformal Field Theory,”http://arxiv.org/abs/1610.00302 arXiv:1610.00302 [hep-th]. Dymarsky:2016aqv A. Dymarsky, N. Lashkari, and H. Liu, “Subsystem ETH,”http://arxiv.org/abs/1611.08764 arXiv:1611.08764 [cond-mat.stat-mech]. Iyoda:2016 E. Iyoda, K. Kaneko, and T. Sagawa, “Fluctuation Theorem for Many-Body Pure Quantum States,”http://arxiv.org/abs/1603.07857 arXiv:1603.07857 [cond-mat.stat-mech].Tasaki:2016 H. Tasaki, “On the local equivalence between the canonical and the microcanonical distributions for quantum spin systems,”http://arxiv.org/abs/1609.06983 arXiv:1609.06983 [cond-mat.stat-mech]. Srednicki-1M. Srednicki, “On the local equivalence between the canonical and the microcanonical distributions for quantum spin systems,”The approach to thermal equilibrium in quantized chaotic systems,"J. Phys. A: Math. Gen.32 (1999) 1163-1175.AlbaV. Alba, “ Eigenstate thermalization hypothesis and integrability in quantum spin chains,"Phys. Rev. B91 (2015) 155123, http://arxiv.org/abs/1409.6069 arXiv:1409.6069 [cond-mat.stat-mech]. Lin:2016dxa F.-L. Lin, H. Wang, and J.-j. Zhang, “Thermality and excited state Rényi entropy in two-dimensional CFT,”http://dx.doi.org/10.1007/JHEP11(2016)116 JHEP 1611 (2016) 116, http://arxiv.org/abs/1610.01362 arXiv:1610.01362 [hep-th]. Fitzpatrick:2014vua A. L. Fitzpatrick, J. Kaplan, and M. T. Walters, “Universality of Long-Distance AdS Physics from the CFT Bootstrap,”http://dx.doi.org/10.1007/JHEP08(2014)145 JHEP 1408 (2014) 145, http://arxiv.org/abs/1403.6829 arXiv:1403.6829 [hep-th]. Fitzpatrick:2015zha A. L. Fitzpatrick, J. Kaplan, and M. T. Walters, “Virasoro Conformal Blocks and Thermality from Classical Background Fields,”http://dx.doi.org/10.1007/JHEP11(2015)200 JHEP 1511 (2015) 200, http://arxiv.org/abs/1501.05315 arXiv:1501.05315 [hep-th]. Asplund:2014coa C. T. Asplund, A. Bernamonti, F. Galli, and T. Hartman, “Holographic Entanglement Entropy from 2d CFT: Heavy States and Local Quenches,”http://dx.doi.org/10.1007/JHEP02(2015)171 JHEP 1502 (2015) 171, http://arxiv.org/abs/1410.1392 arXiv:1410.1392 [hep-th]. Caputa:2014eta P. Caputa, J. Simón, A. Štikonas, and T. Takayanagi, “Quantum Entanglement of Localized Excited States at Finite Temperature,”http://dx.doi.org/10.1007/JHEP01(2015)102 JHEP 1501 (2015) 102, http://arxiv.org/abs/1410.2287 arXiv:1410.2287 [hep-th]. Maldacena:1997re J. M. Maldacena, “The Large N limit of superconformal field theories and supergravity,”http://dx.doi.org/10.1023/A:1026654312961 Int. J. Theor. Phys. 
Matrix wreath products of algebras and embedding theorems

Adel Alahmadi[1], Hamed Alsulami[1], S. K. Jain[1]^,[2], Efim Zelmanov[3]^,1

1. To whom correspondence should be addressed. E-mail: ezelmano@math.ucsd.edu. Author Contributions: A.A., H.A., S.K.J., and E.Z. designed research, performed research, and wrote the paper. The authors declare no conflict of interest.

We introduce a new construction of matrix wreath products of algebras that is similar to wreath products of groups. We then use it to prove embedding theorems for Jacobson radical, nil, and primitive algebras. In <ref>, we construct finitely generated nil algebras of arbitrary Gelfand-Kirillov dimension ≥ 8 over a countable field, which answers a question from <cit.>.

§ MAIN RESULTS

G. Higman, H. Neumann, and B. H. Neumann <cit.> proved that every countable group embeds in a finitely generated group. The papers <cit.>, <cit.>, <cit.>, <cit.>, <cit.> show that some important properties can be inherited by these embeddings. Much of this work relies on wreath products of groups. Following <cit.>, A. I. Malcev <cit.> showed that every countable dimensional associative algebra over a field is embeddable in a finitely generated algebra.

In <ref> and <ref>, we introduce matrix wreath products of algebras and study their basic properties. In <ref>, we use matrix wreath products to prove embedding theorems for Jacobson radical algebras. S. Amitsur <cit.> asked if a finitely generated algebra can have a non-nil Jacobson radical. The first examples of such algebras were constructed by K. Beidar <cit.>. J. Bell <cit.> constructed examples having finite Gelfand-Kirillov dimension. Finally, L. Bartholdi and A. Smoktunowicz <cit.> constructed a finitely generated Jacobson radical non-nil algebra of Gelfand-Kirillov dimension 2.

Theorem 4.1. An arbitrary countable dimensional Jacobson radical algebra is embeddable in a finitely generated Jacobson radical algebra.

Theorem 4.2. An arbitrary countable dimensional Jacobson radical algebra of Gelfand-Kirillov dimension d over a countable field is embeddable in a finitely generated Jacobson radical algebra of Gelfand-Kirillov dimension ≤ d+6.

We say that an algebra A is stable nil (resp., stable algebraic) if all matrix algebras M_n(A) are nil (resp., algebraic). The problem of Koethe (<cit.>; see also <cit.>), whether all nil algebras are stable nil, is still open.

Theorem 4.3. An arbitrary countable dimensional stable nil algebra A is embeddable in a finitely generated stable nil algebra. If GK A = d < ∞ and the ground field is countable, then A is embeddable in a finitely generated nil algebra of Gelfand-Kirillov dimension ≤ d+6.
In <ref>, we prove embedding theorems for countable dimensional algebraic primitive algebras. I. Kaplansky <cit.> asked if there exists an infinite dimensional finitely generated algebraic primitive algebra, a particular case of the celebrated Kurosh Problem. Such examples were constructed by J. Bell and L. Small in <cit.>. Then J. Bell, L. Small, and A. Smoktunowicz <cit.> constructed finitely generated algebraic primitive algebras of finite Gelfand-Kirillov dimension provided that the ground field is countable. Our embedding theorems for algebraic primitive algebras have a special feature.

Let A be an associative algebra over a ground field F. Let X be a countable set. Consider the algebra M_∞(A) of X×X matrices over A having finitely many nonzero entries. Clearly, the algebra A is embeddable in M_∞(A) in many ways. We say that an algebra A is M_∞-embeddable in an algebra B if there exists an embedding φ: M_∞(A)→B. We say that A is M_∞-embeddable in B as a (left, right) ideal if the image of φ is a (left, right) ideal of B.

Theorem 5.1. An arbitrary countable dimensional stable algebraic primitive algebra is M_∞-embeddable as a left ideal in a 2-generated algebraic primitive algebra.

In particular, this theorem answers the first part of question 7 from <cit.>.

Theorem 5.2. Let F be a countable field. An arbitrary countable dimensional stable algebraic primitive algebra of Gelfand-Kirillov dimension ≤ d is M_∞-embeddable as a left ideal in a finitely generated algebraic primitive algebra of Gelfand-Kirillov dimension ≤ d+6.

In <ref>, we answer question 1 from <cit.>.

Theorem 6.1. Let F be a countable field. For an arbitrary d ≥ 8, there exists a finitely generated nil F-algebra of Gelfand-Kirillov dimension d.

§ MATRIX WREATH PRODUCTS OF ALGEBRAS

Let F be a field and let A, B be two associative F-algebras. Let Lin(A,B) denote the vector space of all F-linear transformations A→B. We will define a multiplication on Lin(B, B⊗_F A). Let f, g ∈ Lin(B, B⊗_F A). For an arbitrary element b∈B, let g(b)=∑_i b_i⊗a_i, where a_i∈A, b_i∈B. Let f(b_i)=∑_j b_{ij}⊗a_{ij}, where a_{ij}∈A, b_{ij}∈B. Define

(fg)(b)=∑_{i,j} b_{ij}⊗a_{ij}a_i.

In other words, if μ: A⊗A→A is the multiplication on A, then

fg=(1⊗μ)(f⊗1)g.

Choose an arbitrary basis {b_i}_{i∈I} of the algebra B and a linear transformation f: B→B⊗_F A. Let

f(b_j)=∑_i b_i⊗a_{ij}.

Consider the I×I matrix

A_f=(a_{ij})_{I×I}.

Each column of this matrix contains only finitely many nonzero entries a_{ij}. Let f, g ∈ Lin(B, B⊗_F A). Let f(b_j)=∑_i b_i⊗a_{ij} and g(b_j)=∑_i b_i⊗a'_{ij}, so A_f=(a_{ij})_{I×I}, A_g=(a'_{ij})_{I×I}. Then

(fg)(b_j) = (1⊗μ)(f⊗1)(∑_i b_i⊗a'_{ij}) = (1⊗μ)∑_{k,i} b_k⊗a_{ki}⊗a'_{ij} = ∑_k b_k⊗∑_i a_{ki}a'_{ij},

which implies that

A_{fg}=A_f A_g.

Let M_{I×I}(A) denote the algebra of I×I matrices over A having finitely many nonzero entries in each column. We proved that every basis of the algebra B gives rise to an isomorphism

Lin(B, B⊗_F A) ≅ M_{I×I}(A).
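To see the correspondence f↦A_f in action, here is a minimal worked example of our own, with dim_F B = 2 and arbitrary elements a, a', a''∈A.

Let B have basis {b_1, b_2} and define f, g ∈ Lin(B, B⊗_F A) by

\begin{align*}
f(b_1) = b_1\otimes a, \quad f(b_2) = 0, \qquad
g(b_1) = b_2\otimes a', \quad g(b_2) = b_1\otimes a''.
\end{align*}

Then (fg)(b_1) = (1⊗μ)(f⊗1)(b_2⊗a') = (1⊗μ)(f(b_2)⊗a') = 0, while (fg)(b_2) = (1⊗μ)(f(b_1)⊗a'') = b_1⊗aa''. On the matrix side,

\begin{align*}
A_f = \begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix}, \quad
A_g = \begin{pmatrix} 0 & a'' \\ a' & 0 \end{pmatrix}, \quad
A_f A_g = \begin{pmatrix} 0 & aa'' \\ 0 & 0 \end{pmatrix},
\end{align*}

which is exactly A_{fg}.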
Let us now define a structure of a B-bimodule on Lin(B, B⊗_F A). For an arbitrary element b∈B and a linear transformation f: B→B⊗_F A, we define linear transformations fb and bf via

(fb)(b')=f(bb'), b'∈B,

and

(bf)(b')=(b⊗1)f(b').

In other words, if f(b')=∑_i b_i⊗a_i, then (bf)(b')=∑_i bb_i⊗a_i. We will check that this is indeed a B-bimodule. Choose arbitrary elements b_1, b_2∈B. Then

((fb_1)b_2)(b)=(fb_1)(b_2b)=f(b_1b_2b)=(f(b_1b_2))(b),

hence (fb_1)b_2=f(b_1b_2). Also,

(b_1(b_2f))(b)=(b_1⊗1)(b_2f)(b)=(b_1⊗1)(b_2⊗1)f(b)=(b_1b_2⊗1)f(b)=((b_1b_2)f)(b),

hence b_1(b_2f)=(b_1b_2)f. Finally,

((b_1f)b_2)(b)=(b_1f)(b_2b)=(b_1⊗1)f(b_2b).

On the other hand,

(b_1(fb_2))(b)=(b_1⊗1)(fb_2)(b)=(b_1⊗1)f(b_2b).

Hence, (b_1f)b_2=b_1(fb_2). Now, consider the semidirect sum

A≀B = B + Lin(B, B⊗_F A),

with multiplication extending the multiplications on B and on Lin(B, B⊗_F A) together with the bimodule action.

Proposition. A≀B is an associative algebra.

Proof. From the isomorphism Lin(B,B⊗_FA)≅M_{I×I}(A), we conclude that the algebra Lin(B,B⊗_FA) is associative. We checked above that Lin(B,B⊗_FA) is a bimodule over the associative algebra B. Hence, it remains to check that for arbitrary elements b'∈B and f,g∈Lin(B,B⊗_FA), we have (fb')g=f(b'g), f(gb')=(fg)b', and (b'f)g=b'(fg). Indeed, let b∈B and let g(b)=∑_i b_i⊗a_i. Then

(fb'⊗1)g(b)=∑_i (fb')(b_i)⊗a_i=∑_i f(b'b_i)⊗a_i.

Therefore,

((fb')g)(b)=(1⊗μ)∑_i f(b'b_i)⊗a_i.

Now consider the element (f(b'g))(b). We have

(b'g)(b)=(b'⊗1)g(b)=∑_i b'b_i⊗a_i.

Applying f⊗1, we get

(f⊗1)(∑_i b'b_i⊗a_i)=∑_i f(b'b_i)⊗a_i.

Finally,

(f(b'g))(b)=(1⊗μ)(f⊗1)(b'g)(b)=(1⊗μ)∑_i f(b'b_i)⊗a_i.

Hence, (fb')g=f(b'g). Next,

((fg)b')(b)=(fg)(b'b)=(1⊗μ)(f⊗1)g(b'b).

On the other hand,

(f(gb'))(b)=(1⊗μ)(f⊗1)(gb')(b).

Now, g(b'b)=(gb')(b) shows that (fg)b'=f(gb'). We will show that (b'f)g=b'(fg). We have

((b'f)g)(b)=(1⊗μ)(b'f⊗1)g(b)=(1⊗μ)(b'⊗1⊗1)(f⊗1)g(b),

whereas

(b'(fg))(b)=(b'⊗1)(1⊗μ)(f⊗1)g(b).

Now it remains to notice that (1⊗μ)(b'⊗1⊗1)=(b'⊗1)(1⊗μ), which completes the proof of the proposition.

We call A≀B=B+Lin(B,B⊗_FA) the matrix wreath product of the algebras A and B. We remark that the above construction was preceded and inspired by

* constructions of examples in the paper <cit.> by J. Bell, L. Small, and A. Smoktunowicz, and
* a different definition of wreath products by Leavitt path algebras in the paper <cit.> by A. Alahmadi and H. Alsulami.

If _BM is a left module over the algebra B, then we can define

A≀_M B = B + Lin(M, M⊗_F A).

§ PROPERTIES OF MATRIX WREATH PRODUCTS

Fix an element b∈B. For a linear transformation γ: B→A, consider the element c_γ∈Lin(B,B⊗_FA), c_γ(b')=b⊗γ(b') for an arbitrary element b'∈B. Clearly, ρ_b={c_γ | γ∈Lin(B,A)} is a right ideal of the algebra Lin(B,B⊗_FA). Consider the algebra

S(A,B)=∑_{b∈B} ρ_b ⊲_r Lin(B,B⊗_FA).

The algebra S(A,B) consists of the linear transformations φ: B→B⊗_FA such that there exists a finite dimensional subspace V⊂B with φ(B)⊆V⊗_FA. Once we choose a basis {b_i, i∈I} of the algebra B and thus define an isomorphism Lin(B,B⊗_FA)≅M_{I×I}(A), the algebra S(A,B) consists of (infinite) I×I matrices having finitely many nonzero rows. Recall that the algebra M_∞(A) consists of I×I matrices having finitely many nonzero entries.

In this section, we will study ring-theoretic properties of S(A,B) and of subalgebras B+S < A≀B, where M_∞(A)⊆S⊆S(A,B). First, we will determine conditions for B+S to be prime. Recall that an algebra is said to be prime if the product of any two nonzero ideals is not equal to zero. In what follows, we assume that the algebra B does not contain a nonzero element b such that dim_F bB<∞.

Let Â denote the unital hull of the algebra A, i.e., Â=A if A contains 1, and otherwise Â=A+F·1. For an element b∈B, let L_b denote the operator of left multiplication L_b: B→B, x↦bx. The operator L_b can be viewed as a mapping L_b: B→B⊗1, hence L_b∈Lin(B,B⊗_F Â). Denote

L_B={L_b, b∈B} < Lin(B,B⊗_F Â).

Lemma.
* L_B S(A,B)+S(A,B)L_B ⊆ S(A,B),
* L_B ∩ S(A,B)=(0).

Proof. Let φ∈S(A,B), and let V⊂B be a finite dimensional subspace such that φ(B)⊆V⊗A. Let b∈B. Then (L_bφ)(B)⊆bV⊗A and (φL_b)(B)⊆φ(B)⊆V⊗A, which proves (<ref>). If b∈B and L_b∈S(A,B), then dim_F bB<∞. By our assumption, this implies that b=0. This completes the proof of the lemma.
Proposition. Let M_∞(A)⊆S⊆S(A,B) be a subalgebra such that BS+SB⊆S. The algebra B+S is prime if and only if the algebra A is prime.

Proof. Suppose that the algebra B+S is prime. If J_1, J_2 are nonzero ideals of A, then M_∞(J_1), M_∞(J_2) are nonzero left ideals of the algebra B+S. If J_1J_2=(0), then M_∞(J_1)M_∞(J_2)=(0). It is well known that a product of two nonzero left ideals in a prime algebra is not equal to zero. That contradicts the primeness of B+S.

Suppose now that the algebra A is prime. Then the algebra M_∞(A) is prime as well. If K_1, K_2 are nonzero ideals of B+S such that K_1K_2=(0), then K_1∩M_∞(A)=(0) or K_2∩M_∞(A)=(0). Since M_∞(A) is a left ideal of B+S, it follows that K_1M_∞(A)=(0) or K_2M_∞(A)=(0). Hence, M_∞(A) has a nonzero left annihilator in B+S. Let 0≠b∈B, s∈S, and suppose that (b+s)M_∞(A)=(0). Since the algebra M_∞(A) has zero left annihilator in M_{I×I}(A), it follows that M_{I×I}(A)(b+s)=(0). For an arbitrary element f∈Lin(B,B⊗_FA), we have (fb)(b')=f(bb')=(fL_b)(b'). Hence, L_b+s=0. By Lemma <ref> (<ref>), b=0, and it remains to recall again that M_∞(A) has zero left (right) annihilators in M_{I×I}(A). This completes the proof of the proposition.

Next we will find conditions for B+S to be primitive. Recall that an algebra is said to be (left) primitive if it has a faithful irreducible left module <cit.>.

Lemma. Let R be a prime algebra with a nonzero left ideal L ⊲_e R. Suppose that

* {ℓ∈L | Lℓ=(0)}=(0),
* for arbitrary n≥1, arbitrary elements a∈R, and ℓ_1,⋯,ℓ_n∈L, there exists an element ℓ'∈L such that (a-ℓ')ℓ_i=0, 1≤i≤n.

Then the algebra R is primitive if and only if the algebra L is primitive.

Proof. Let M be a faithful irreducible left module over R. Consider the subspace M'={m∈M | Lm=(0)}. Because of the faithfulness of M, we have M'≠M. We will show that the factor space M/M' is a faithful irreducible L-module. Indeed, if ℓ∈L and ℓ(M/M')=(0), then LℓM=(0), which implies that Lℓ=(0). From (<ref>), we conclude that ℓ=0. We will show that the L-module M/M' is irreducible. Let 0≠m+M'∈M/M'. Then Lm=M, which implies L(m+M')=M/M'.

Now suppose that the algebra L is primitive and M is a faithful irreducible left module over L. We will define a structure of an R-module on M. Since LM=M, an arbitrary element of M can be represented as ∑_{i=1}^n ℓ_im_i, ℓ_i∈L, m_i∈M. For an element a∈R, we define a(∑_{i=1}^n ℓ_im_i)=∑_{i=1}^n (aℓ_i)m_i. To check that this action is well defined, we have to show that ∑_{i=1}^n ℓ_im_i=0 implies ∑_{i=1}^n (aℓ_i)m_i=0. By assumption (<ref>), there exists an element ℓ'∈L such that aℓ_i=ℓ'ℓ_i, 1≤i≤n. Hence, ∑_{i=1}^n (aℓ_i)m_i=∑_{i=1}^n ℓ'ℓ_im_i=0. This completes the proof of the lemma.

Proposition. The algebra B+S is primitive if and only if the algebra A is primitive.

Proof. Suppose that the algebra B+S is primitive. Since a nonzero two-sided ideal of a primitive algebra is primitive, we conclude that the algebra S is primitive and therefore prime. The algebra A is also prime by Proposition <ref>.

We will check that the algebra S and its left ideal M_∞(A) satisfy assumptions (<ref>), (<ref>) of Lemma <ref>. Part (<ref>) is trivial. Now we will check assumption (<ref>). Choose elements a∈S and a_1,⋯,a_n∈M_∞(A). Let b_1,⋯,b_m be elements of the basis of the algebra B such that a_1,⋯,a_n have nonzero rows only among those corresponding to b_1,⋯,b_m. In other words, a_1,⋯,a_n ∈ ∑_{i=1}^m ρ_{b_i}. Let a' be the I×I matrix that has the same entries as a in the columns that correspond to b_1,⋯,b_m and zeros everywhere else. Then a'∈M_∞(A) and aa_i=a'a_i, 1≤i≤n.
By Lemma <ref>, the left ideal M_∞(A) of the algebra S is a primitive algebra, which implies the primitivity of the algebra A.

Now suppose that the algebra A is primitive. By Proposition <ref>, the algebras B+S and S are prime. By Lemma <ref>, the algebra S is primitive. It is easy to see that if a nonzero ideal of a prime algebra is primitive, then the full algebra is primitive as well. This finishes the proof of the proposition.

In the rest of this section, we will study the growth of some subalgebras in A≀B. We recall some definitions. Let R be an F-algebra generated by a finite dimensional subspace V. Let

V^n=span_F(v_1⋯v_k | k≤n, v_i∈V, 1≤i≤k).

Then dim_F V^n<∞ and R is the union of the ascending chain V^1⊆V^2⊆⋯. The function g(V,n)=dim_F V^n is called the growth function of the algebra R that corresponds to the generating subspace V.

Given two functions f_1, f_2: ℕ→[1,∞), we say that f_1 is asymptotically less than or equal to f_2 (denoted f_1≼f_2) if there exists c∈ℕ such that f_1(n)≤cf_2(cn) for all n. If f_1≼f_2 and f_2≼f_1, then we say that f_1 and f_2 are asymptotically equivalent (denoted f_1∼f_2). If V_1, V_2 are two finite dimensional generating subspaces of R, then g(V_1,n)∼g(V_2,n). We will denote the class of functions that are equivalent to g(V,n) by g_R(n). If there exists α>0 such that g_R(n)≼n^α, then we say that the growth of R is polynomially bounded. In this case

GK(R)=inf{α>0 | g_R(n)≼n^α}

is called the Gelfand-Kirillov dimension of R. If R does not have polynomially bounded growth, then GK(R)=∞. For a not necessarily finitely generated algebra R, we let

GK(R)=sup GK(R'),

where R' runs over all finitely generated subalgebras of R.

Coming back to the algebras A and B, we say that a linear transformation γ: B→A is a generating linear transformation if γ(B) generates A. Now suppose that the algebra B contains 1. Let γ: B→A be a generating linear transformation. As above, we consider the element c_γ: b↦1⊗γ(b)∈B⊗_FA. If a∈A, then we denote ac_γ=c_{γ'}, where γ'(b)=aγ(b). Consider the subalgebra C=⟨B,c_γ⟩ generated in A≀B by B and the element c_γ. If V is a generating subspace of the algebra B, then U=V+Fc_γ is a generating subspace of the algebra C. For n≥1, consider the vector space

W_n=∑_{i_1+⋯+i_r≤n} γ(V^{i_1})⋯γ(V^{i_r}) ⊆ A.

Clearly, W_1⊆W_2⊆⋯ and A=⋃_{n≥1}W_n.

Lemma. U^n ⊆ ∑_{i+j+k≤n} V^i(W_jc_γ)V^k + V^n for any n≥1 (with the convention W_0c_γ=Fc_γ).

Proof. For n=1, the assertion is obvious. Denote the right-hand side of the inclusion above by Σ(n). We need to show that UΣ(n-1)⊆Σ(n). Clearly, VΣ(n-1)⊆Σ(n). Let v∈V^i. Then c_γ v=c_{γ'}, where γ'(b)=γ(vb). We have γ'(1)=γ(v)∈W_i. Now,

c_γ v(W_jc_γ) = c_{γ'}(W_jc_γ) = (γ'(1)W_j)c_γ ⊆ W_{i+j}c_γ.

Therefore,

c_γ V^i(W_jc_γ)V^k ⊆ (W_{i+j}c_γ)V^k ⊆ Σ(n-1) ⊆ Σ(n).

This completes the proof of the lemma.

Denote w_γ(n)=dim_F W_n.

Corollary. g_C(n) ≼ g_B(n)^2 w_γ(n).

If A∋1, then along with the algebra C we will consider the bigger algebra C'=⟨B, c_γ, e_{11}(1)⟩ and its generating subspace

U'=V+Fc_γ+Fe_{11}(1).

Lemma. U'^n ⊆ ∑_{i+j+k≤n} V^i(W_jc_γ)V^k + V^n + ∑_{i+j+k≤n} V^i e_{11}(W_j)V^k.

Proof. Again, denote the right-hand side of the inclusion by Σ'(n). We need to check that c_γ ∑_{i+j+k≤n-1} V^i e_{11}(W_j)V^k ⊆ Σ'(n) and e_{11}(1)Σ'(n-1) ⊆ Σ'(n). The subspace e_{11}(W_j) lies in ρ_1. Hence,

c_γ V^i e_{11}(W_j) ⊆ e_{11}(γ(V^i)W_j) ⊆ e_{11}(W_{i+j}),

and therefore e_{11}(W_{i+j})V^k ⊆ Σ'(n-1). Furthermore,

e_{11}(1)V^i(W_jc_γ)V^k = e_{11}(1)V^i e_{11}(1)(W_jc_γ)V^k,

and it remains to notice that e_{11}(1)V^i e_{11}(1)=Fe_{11}(1). This completes the proof of the lemma.

Corollary. g_{C'}(n) ≼ g_B(n)^2 w_γ(n).
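For later use it is worth recording the numerical consequence of these corollaries; the following back-of-the-envelope computation is ours. If g_B(n)≼n^β and w_γ(n)≼n^δ, then

g_{C'}(n) ≼ g_B(n)^2 w_γ(n) ≼ n^{2β+δ}, so GK C' ≤ 2β+δ.

In Sections 4 and 5 this is applied with B a nil algebra of Gelfand-Kirillov dimension β≤3 and with a generating linear transformation γ chosen so that δ=d+ε for every ε>0, which yields the bound GK C' ≤ d+6 appearing in Theorems 4.2, 4.3, and 5.2.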
We say that a linear transformation γ: B→A is dense if for arbitrary linearly independent elements b_1,⋯,b_n∈B and an arbitrary nonzero element a∈A, there exists an element b∈B such that γ(b_ib)=0 for 1≤i≤n-1 and aγ(b_nb)≠0.

Proposition. If γ: B→A is a dense generating linear transformation, then g_C(n) ∼ g_B(n)^2 w_γ(n).

Proof. It is easy to see that

V^n(W_nc_γ)V^n ⊆ U^{3n}.

We will show that dim_F V^n(W_nc_γ)V^n = (dim_F V^n)^2 w_γ(n). Let b_1,⋯,b_r be a basis of V^n and let a_1,⋯,a_t be a basis of W_n. We need to verify that the elements b_i(a_jc_γ)b_k are linearly independent. For an arbitrary element b∈B and arbitrary coefficients γ_{ijk}∈F, we have

(∑γ_{ijk}b_i(a_jc_γ)b_k)(b)=∑γ_{ijk}b_i⊗a_jγ(b_kb).

Since the elements b_i are linearly independent, it follows that if this expression vanishes for all b, then for every i, ∑_{j,k}γ_{ijk}a_jγ(b_kb)=0. Suppose γ_{i_0j_0k_0}≠0. By the density of γ, there exists an element b∈B such that γ(b_ℓb)=0 for ℓ≠k_0 and (∑_jγ_{i_0jk_0}a_j)γ(b_{k_0}b)≠0, a contradiction.

Lemma. Suppose that the algebra B has a basis b_1, b_2, ⋯ that consists of invertible elements, and suppose that A∋1. The basis {b_i}_{i∈I} defines the isomorphism Lin(B,B⊗_FA)≅M_{I×I}(A). Let γ: B→A be a generating linear transformation. Then the algebra C'=⟨B,c_γ,e_{11}(1)⟩ contains M_∞(A).

Proof. We have e_{ij}(1)=b_ie_{11}(1)b_j^{-1}, hence C'⊇M_∞(F). If γ(b_i)=a, then c_γ b_i e_{11}(1)=e_{11}(a). Since γ is a generating linear transformation, it follows that C'⊇e_{11}(A). Now it remains to notice that M_∞(F) and e_{11}(A) generate M_∞(A).

§ RADICAL ALGEBRAS

In this section, we will prove the embedding theorems <ref>-<ref> for Jacobson radical algebras.

Lemma. For an arbitrary Jacobson radical algebra A, there exists a Jacobson radical algebra Ã and an element u∈Ã, u^3=0, such that A is embeddable in the right ideal uÃ (resp., in the left ideal Ãu).

Proof. Consider the two-dimensional nilpotent algebra B with basis b_1=b, b_2=b^2, b^3=0. Let A be a Jacobson radical algebra. Consider the matrix wreath product A≀B=B+M_2(A). Clearly, A≀B is a Jacobson radical algebra. For 1≤i,j≤2 and an element a∈A, consider the linear transformation e_{ij}(a) that maps a basis element b_k to δ_{ik}b_j⊗a. Then be_{21}(a)=e_{22}(a). Hence e_{22}(A)⊆b(A≀B), which completes the proof of the lemma.

Let A be a countable dimensional Jacobson radical algebra. By Lemma <ref>, there exists a countable dimensional Jacobson radical algebra Ã and an element u∈Ã, u^3=0, such that A embeds in Ãu. Let B be a finitely generated infinite dimensional nil algebra of E. S. Golod <cit.>. Let B̂=B+F·1 be its unital hull. Let γ: B̂→Ã be a generating linear transformation. In the matrix wreath product Ã≀B̂, consider the element c_γ: B̂→B̂⊗_FÃ, c_γ(b)=1⊗γ(b). Choose a basis {b_i}_{i∈I} of the algebra B̂ with b_1=1. It gives rise to an isomorphism Lin(B̂,B̂⊗_FÃ)→M_{I×I}(Ã). Consider the element e_{11}(u)∈Lin(B̂,B̂⊗_FÃ) that sends b_1=1 to 1⊗u and sends the other basis elements to zero. Consider the subalgebra C of Ã≀B̂ generated by B, c_γ, and e_{11}(u).

Consider also the right ideal

ρ_1={c_α | α∈Lin(B̂,Ã), c_α(b)=1⊗α(b)}.

For any α,β∈Lin(B̂,Ã), we have c_αc_β=α(1)c_β; indeed, (c_αc_β)(b)=(1⊗μ)(c_α⊗1)(1⊗β(b))=(1⊗μ)(1⊗α(1)⊗β(b))=1⊗α(1)β(b). Hence, the mapping π: ρ_1→Ã, π(c_α)=α(1), is a homomorphism. The subalgebra ⟨c_γB̂⟩ generated by c_γB̂ lies in ρ_1. For any basis element b_i, we have π(c_γb_i)=γ(b_i). Since γ is a generating linear transformation, it follows that the restriction of π to ⟨c_γB̂⟩ is surjective. Hence

C ⊇ ⟨c_γB̂⟩e_{11}(u) = e_{11}(Ãu) ⊇ e_{11}(A).

It remains to show that the subalgebra C is Jacobson radical. We will start by showing that the right ideal Fc_γ+c_γC is Jacobson radical. The right ideal Fc_γ+c_γC is contained in ρ_1 and contains ⟨c_γB̂⟩.
Hence, the restriction of the homomorphism π to Fc_γ+c_γC is surjective. The kernel of this homomorphism lies in

ρ_1'={c_α | α(1)=0},

with (ρ_1')^2=(0). This proves that the right ideal Fc_γ+c_γC of the algebra C is Jacobson radical. Hence, c_γ lies in the Jacobson radical Jac(C) of the algebra C. The ideal generated by e_{11}(u) in the subalgebra ⟨B,e_{11}(u)⟩ lies in M_{I×I}(uF[u]), hence this ideal is nilpotent. Hence e_{11}(u)∈Jac(C). Finally, it follows that C/Jac(C)=(B+Jac(C))/Jac(C), a nil algebra, which implies that C=Jac(C). The algebra C is finitely generated. This completes the proof of Theorem <ref>.

Now we turn to Theorem <ref>. Let A be a countable dimensional algebra of Gelfand-Kirillov dimension ≤ d. Let the algebra B be generated by a finite dimensional subspace V. Recall that for a linear transformation γ: B→A, we denote

W_n=∑_{i_1+⋯+i_r≤n} γ(V^{i_1})⋯γ(V^{i_r}), w_γ(n)=dim_F W_n.

Lemma. There exists a generating linear transformation γ: B→A such that w_γ(n) ≤ n^{d+ε_n}, where ε_n>0 and ε_n→0 as n→∞.

Proof. Let a_1, a_2, ⋯ be a basis of the algebra A. Let

g_k(n)=dim_F span(a_{i_1}⋯a_{i_r}; 1≤r≤n, 1≤i_1,⋯,i_r≤k).

From GK A≤d, it follows that there exists an increasing sequence n_k, k≥1, such that g_k(n)≤n^{d+1/k} as soon as n≥n_k. For n≥n_1, choose k such that n_k≤n<n_{k+1} and let ε_n=1/k. It is clear that ε_n→0 as n→∞. Choose a subspace V_k'⊂V^{n_k} and an element v_k∈V^{n_k} such that V^{n_k}=V^{n_{k-1}}⊕V_k'⊕Fv_k is a direct sum of subspaces. Then B=V_1'⊕Fv_1⊕V_2'⊕Fv_2⊕⋯. Define a linear transformation γ: B→A via γ(V_i')=0, i≥1, and γ(v_i)=a_i.

The subspace W_n is spanned by the products γ(V^{i_1})⋯γ(V^{i_r}) with i_1+⋯+i_r≤n. Hence, for n_k≤n<n_{k+1}, γ(V^{i_1})⋯γ(V^{i_r})⊆span_F(a_{j_1}⋯a_{j_r}; 1≤r≤n, 1≤j_1,⋯,j_r≤k). Now we get w_γ(n)≤g_k(n)≤n^{d+1/k} for n≥n_k. This completes the proof of the lemma.

Let A be a countable dimensional Jacobson radical algebra of Gelfand-Kirillov dimension ≤ d. Let the ground field F be countable. In <cit.>, T. Lenagan and A. Smoktunowicz constructed a finitely generated nil F-algebra of finite Gelfand-Kirillov dimension. In <cit.>, T. Lenagan, A. Smoktunowicz, and A. Young refined the argument of <cit.> to construct a finitely generated nil algebra B of Gelfand-Kirillov dimension ≤3. By Lemma <ref>, there exists a generating linear transformation γ: B̂→Ã such that w_γ(n)≤n^{d+ε_n}, ε_n→0 as n→∞. As shown above, the algebra A embeds in the finitely generated algebra C'=⟨B,c_γ,e_{11}(u)⟩. By Corollary <ref>, g_{C'}(n)≼g_B(n)^2 w_γ(n). This implies GK C'≤d+6. This completes the proof of Theorem <ref>.

For the proof of Theorem <ref>, we need to recall more details about the Golod-Shafarevich inequality (see <cit.>) and Golod's construction <cit.>. Let F⟨x_1,⋯,x_m⟩ be the free associative algebra on m free generators, m≥2. We consider the free algebra without 1, i.e., it consists of formal linear combinations of nonempty words in x_1,⋯,x_m. Assigning degree 1 to all variables x_1,⋯,x_m, we make F⟨x_1,⋯,x_m⟩ a graded algebra. The degree deg(a) of an arbitrary element a∈F⟨x_1,⋯,x_m⟩ is defined as the minimal degree of a nonzero homogeneous component of a. Let R⊂F⟨x_1,⋯,x_m⟩ be a subset containing finitely many elements of each degree.

Golod-Shafarevich Condition: If there exists a number 0<t_0<1 such that

∑_{a∈R} t_0^{deg(a)}<∞ and 1-mt_0+∑_{a∈R} t_0^{deg(a)}<0,

then the algebra ⟨x_1,⋯,x_m | R=0⟩ presented by the set of generators x_1,⋯,x_m and the set of relations R is infinite dimensional.
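As a quick sanity check of the inequality (this numerical instance is ours, not from the paper), take m=3, t_0=1/2, and let R contain exactly one relation of each degree ≥3. Then

∑_{a∈R} t_0^{deg(a)} = ∑_{d≥3} 2^{-d} = \frac{1}{4} < ∞ and 1-mt_0+∑_{a∈R} t_0^{deg(a)} = 1-\frac{3}{2}+\frac{1}{4} = -\frac{1}{4} < 0,

so the Golod-Shafarevich condition holds and the algebra ⟨x_1,x_2,x_3 | R=0⟩ is infinite dimensional for any such choice of R.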
Recall that a function g: ℕ→[1,∞) is said to be subexponential if lim_{n→∞} g(n)/e^{αn}=0 for any α>0. A finitely generated algebra A has subexponential growth if its growth function g_A(n) is subexponential; equivalently, g_A(n)≼e^n but g_A(n)≁e^n. A (not necessarily finitely generated) algebra A is of locally subexponential growth if every finitely generated subalgebra of A is of subexponential growth.

Lemma. Let F be a countable field and let A be a countable dimensional F-algebra of locally subexponential growth. Then there exists a subset R⊂F⟨x_1,⋯,x_m⟩ satisfying the Golod-Shafarevich condition and such that the algebra F⟨x_1,⋯,x_m | R=0⟩⊗_F A is nil.

Proof. The algebra F⟨x_1,⋯,x_m⟩⊗_F A is countable. Let

F⟨x_1,⋯,x_m⟩⊗_F A={f_1,f_2,⋯}.

Choose 1/m<t_0<1 and a sequence ε_1,ε_2,⋯>0 such that ∑_{i=1}^∞ ε_i<∞ and 1-mt_0+∑_{i=1}^∞ ε_i<0. Choose i≥1. Let f_i∈∑_{j=1}^k F⟨x_1,⋯,x_m⟩⊗a_j, where a_j∈A, and let V_i=∑_j Fa_j. Then for an arbitrary n≥1, we have f_i^n∈F⟨x_1,⋯,x_m⟩^n⊗V_i^n. Let g_{V_i}(n)=dim_F V_i^n. Since the function g_{V_i}(n) is subexponential, there exists n_i≥1 such that for all n≥n_i we have g_{V_i}(n)t_0^n≤ε_i. Let r=g_{V_i}(n_i), let v_{i1},⋯,v_{ir} be a basis of V_i^{n_i}, and let f_i^{n_i}=∑_{j=1}^r f_{ij}⊗v_{ij}, with deg f_{ij}≥n_i.

Let R={f_{ij} | i≥1, 1≤j≤g_{V_i}(n_i)}. The image of the element f_i in F⟨x_1,⋯,x_m | R=0⟩⊗_F A is nilpotent of index ≤n_i. Besides,

∑_{i,j} t_0^{deg(f_{ij})} ≤ ∑_i g_{V_i}(n_i)t_0^{n_i} ≤ ∑_i ε_i.

Hence, R satisfies the Golod-Shafarevich condition and therefore the algebra F⟨x_1,⋯,x_m | R=0⟩⊗_F A is an infinite dimensional nil algebra. This completes the proof of the lemma.

Lemma. Let F be an arbitrary field. There exists an infinite dimensional finitely generated stable nil F-algebra.

Proof. Let F_0 be the prime subfield of F. We will apply Lemma <ref> to the countable dimensional F_0-algebra A=F_0[t_i, i≥1]⊗_{F_0}M_∞(F_0) of locally subexponential growth. By Lemma <ref>, there exists a subset R⊂F_0⟨x_1,⋯,x_m⟩ satisfying the Golod-Shafarevich condition such that the F_0-algebra F_0⟨x_1,⋯,x_m | R=0⟩⊗_{F_0}A is nil. We will show that the F-algebra F⟨x_1,⋯,x_m | R=0⟩ is stable nil. Indeed, we only need to check that for arbitrary elements a_1,⋯,a_k∈F_0⟨x_1,⋯,x_m | R=0⟩, arbitrary elements α_1,⋯,α_k∈F, and arbitrary matrices y_1,⋯,y_k∈M_n(F_0), the tensor ∑_{i=1}^k a_i⊗α_i⊗y_i is nilpotent. This follows from the nilpotency of the element ∑_{i=1}^k a_i⊗t_i⊗y_i of the algebra F_0⟨x_1,⋯,x_m | R=0⟩⊗_{F_0}A: specializing t_i↦α_i yields an F_0-algebra homomorphism carrying the latter element to the former. This completes the proof of the lemma.

Lemma. Let A be a stable nil (resp., algebraic) algebra. Then for an arbitrary algebra B, the algebra S(A,B) is nil (resp., algebraic).

Proof. Recall that S(A,B) consists of I×I matrices over A with finitely many nonzero rows. For a finite subset T⊂I, let S_T(A,B) consist of those matrices whose i-th row is zero for every i∈I∖T. The algebra S(A,B) is the union of the subalgebras S_T(A,B). Consider the mapping φ: S_T(A,B)→M_{T×T}(A) that sends an arbitrary matrix Y∈S_T(A,B) to the part of Y at the intersection of the rows and columns indexed by T. Clearly, φ is a homomorphism and (ker φ)^2=(0). This implies that the algebra S_T(A,B) is nil (resp., algebraic) and completes the proof of the lemma.
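The assertion (ker φ)^2=(0) can be verified in one line; the following computation is our own elaboration. Let Y, Y'∈ker φ, so that Y_{it}=0 unless i∈T, Y'_{tj}=0 unless t∈T, and the T×T corners of Y and Y' vanish. Then

(YY')_{ij} = ∑_t Y_{it}Y'_{tj},

and a summand can be nonzero only if t∈T (so that Y'_{tj}≠0 is possible) and i∈T (so that Y_{it}≠0 is possible); but then Y_{it} with i,t∈T lies in the vanishing T×T corner. Hence YY'=0.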
Let B be an infinite dimensional finitely generated stable nil algebra as in Lemma <ref>. If A is a countable dimensional stable nil algebra, then the algebra Ã=A≀F[u | u^3=0], A↪Ãu, is also countable dimensional and stable nil due to Lemma <ref>. In the proof of Theorem <ref>, we embedded the algebra A in a finitely generated subalgebra C=⟨B,c_γ,e_{11}(u)⟩ of Ã≀B̂. Now we need to show that for an arbitrary n≥1, the matrix algebra M_n(C) is nil. It is easy to see that M_n(C) is a subalgebra of M_n(Ã)≀M_n(B̂). Moreover,

M_n(C) ⊆ M_n(B)+S(M_n(Ã),M_n(B̂)).

By Lemmas <ref> (<ref>) and <ref>, S(M_n(Ã),M_n(B̂)) is a nil ideal of the algebra M_n(B)+S(M_n(Ã),M_n(B̂)). The algebra M_n(B) is nil because the algebra B is stable nil. We proved that the algebra C is stable nil.

Suppose now that the algebra A is stable nil, the ground field F is countable, and GK A≤d. Let B be the Lenagan-Smoktunowicz-Young (<cit.>, <cit.>) nil algebra, GK B≤3. Arguing as above, we see that the finitely generated algebra C, in which the algebra A is embedded, is nil. By Corollary <ref>, the growth of C is bounded by n^6 w_γ(n). By Lemma <ref>, a generating linear transformation γ can be chosen so that w_γ(n)≤n^{d+ε_n}, where ε_n→0 as n→∞. This implies that GK C≤d+6 and finishes the proof of the theorem.

If we knew that there exists a Lenagan-Smoktunowicz algebra that is stable nil, then we could embed a countable dimensional stable nil algebra of finite Gelfand-Kirillov dimension in a finitely generated stable nil algebra of finite Gelfand-Kirillov dimension.

§ ALGEBRAIC PRIMITIVE ALGEBRAS

The purpose of this section is to prove Theorems <ref> and <ref>. Let A be a countable dimensional stable algebraic primitive algebra. Without loss of generality, we will assume that A∋1. Let B be an infinite dimensional finitely generated stable nil algebra as in Lemma <ref>. Without loss of generality, we will also assume that {b∈B | dim_F bB<∞}=(0).

Consider the matrix wreath product A≀B̂ and an element c_γ∈Lin(B̂,B̂⊗_FA), c_γ(b)=1⊗γ(b), where γ: B̂→A is a generating linear transformation. Choose a basis {b_i}_{i∈ℕ} of the algebra B̂ that consists of invertible elements. This basis defines an isomorphism

Lin(B̂,B̂⊗_FA) ≅ M_{ℕ×ℕ}(A).

As above, we consider the finitely generated algebra C'=⟨B̂,c_γ,e_{11}(1)⟩. By Lemma <ref>, M_∞(A)⊆C'. Since M_∞(A) is a left ideal in M_{ℕ×ℕ}(A) and BM_∞(A)⊆M_∞(A), it follows that M_∞(A) is a left ideal in C'. Arguing precisely as in the proof of Theorem <ref> and using Lemma <ref>, we can show that the algebra C' is stable algebraic. By Proposition <ref>, the algebra C' is primitive. By a theorem of V. T. Markov <cit.>, there exists n≥1 such that the matrix algebra M_n(C') is 2-generated. Since M_n(M_∞(A))≅M_∞(A), the algebra A is still M_∞-embedded in M_n(C') as a left ideal. The algebra M_n(C') is still stable algebraic and primitive. This finishes the proof of Theorem <ref>.

Assume now that the ground field F is countable, the algebra A is stable algebraic and primitive, and GK A≤d. For the algebra B, we now take the Lenagan-Smoktunowicz-Young finitely generated nil algebra of Gelfand-Kirillov dimension ≤3 (see <cit.>, <cit.>). Then the algebra C' in our construction above is finitely generated and algebraic (though not necessarily stable algebraic), and M_∞(A)⊲_ℓ C'. By Lemma <ref>, we can choose a generating linear transformation γ so that w_γ(n)≤n^{d+ε_n}, ε_n→0 as n→∞. Then by Corollary <ref>, GK C'≤d+6, which finishes the proof of Theorem <ref>.

If we knew that an infinite dimensional stable nil algebra of finite Gelfand-Kirillov dimension exists, then we could embed an arbitrary countable dimensional stable algebraic primitive algebra of finite Gelfand-Kirillov dimension in a 2-generated stable algebraic primitive algebra of finite Gelfand-Kirillov dimension, thus answering the second part of question 7 in <cit.>.

§ EXAMPLES OF FINITELY GENERATED NIL ALGEBRAS OF ARBITRARY GELFAND-KIRILLOV DIMENSION d≥8

Everywhere in this section, we assume that the ground field F is countable.
Let B be an infinite dimensional graded finitely generated Lenagan-Smoktunowicz-Young nil algebra of Gelfand-Kirillov dimension ≤3 (<cit.>, <cit.>). Without loss of generality, we will assume that ℓ(B)={b∈B | bB=(0)}=(0).

Lemma. For arbitrary linearly independent elements b_1,⋯,b_n∈B and arbitrary s≥1, there exists an element b∈B^s such that the elements b_1b,⋯,b_nb are still linearly independent.

Proof. We will induct on n. For n=1, the assertion of the lemma means that b_1B^s≠(0), which follows from the assumption on the left annihilator of B.

Suppose that the assertion is true for n-1. Choose an element b∈B^s such that the elements b_1b,⋯,b_{n-1}b are linearly independent. Since ⋂_{i≥1}B^i=(0), it follows that there exists t≥s such that

span_F(b_1b,⋯,b_{n-1}b) ∩ B^t=(0).

Again, by the inductive assumption, we can choose an element b'∈B^t such that b_1b',⋯,b_{n-1}b' are linearly independent. Assuming that the assertion of the lemma is wrong, there exist scalars α_1,⋯,α_{n-1},β_1,⋯,β_{n-1}∈F such that

b_nb=∑_{i=1}^{n-1}α_ib_ib, b_nb'=∑_{i=1}^{n-1}β_ib_ib'.

The elements b_1(b+b'),⋯,b_{n-1}(b+b') are linearly independent (indeed, if ∑c_ib_i(b+b')=0, then ∑c_ib_ib=-∑c_ib_ib'∈span_F(b_1b,⋯,b_{n-1}b)∩B^t=(0), whence all c_i=0), so there exist scalars γ_1,⋯,γ_{n-1}∈F such that

b_n(b+b')=∑_{i=1}^{n-1}γ_ib_i(b+b').

Subtracting the first two equations from the third, we get

∑_{i=1}^{n-1}(γ_i-α_i)b_ib+∑_{i=1}^{n-1}(γ_i-β_i)b_ib'=0.

This implies that α_i=β_i=γ_i for 1≤i≤n-1.

We will use the following elementary statement from linear algebra: let V be a vector space over an infinite field, let v_1,⋯,v_n∈V be arbitrary elements, and let w_1,⋯,w_n∈V be linearly independent elements; then there exists a scalar ξ∈F such that the elements v_i+ξw_i, 1≤i≤n, are linearly independent.

This implies that for an arbitrary element b''∈B^t, there exists a scalar ξ∈F such that the elements b_i(b''+ξb'), 1≤i≤n-1, are linearly independent. Taking b''+ξb' instead of b', we get

(b_n-∑_{i=1}^{n-1}α_ib_i)(b''+ξb')=0,

and therefore

(b_n-∑_{i=1}^{n-1}α_ib_i)b''=0, (b_n-∑_{i=1}^{n-1}α_ib_i)B^t=(0).

This contradicts the assumption that the left annihilator of B is zero and completes the proof of the lemma.

Since the ground field F is countable, it follows that the algebra B is countable. Let 𝒰 be the set of all nonempty finite sequences of linearly independent elements of B; card 𝒰=ℵ_0. Let u_1,u_2,⋯ be a sequence of elements of 𝒰 such that each element of 𝒰 occurs in this sequence infinitely many times. The algebra B is generated by its homogeneous component of degree 1, V=B_1, and V^n=∑_{i=1}^n B_i.

We will construct an increasing sequence of integers 0=n_0<n_1<n_2<⋯. Suppose that k≥1 and n_0,n_1,⋯,n_{k-1} have already been constructed. Let u_k=(b_1,⋯,b_m)∈𝒰, where the elements b_1,⋯,b_m are linearly independent. By Lemma <ref>, there exists an element b∈B^{n_{k-1}+1} such that the elements b_1b,⋯,b_mb are linearly independent. Choose n_k such that n_k>e^{n_{k-1}}, n_k>e^{e^k}, and b_1b,⋯,b_mb∈V^{n_k-1}. This completes the construction of the sequence 0=n_0<n_1<n_2<⋯.

For an arbitrary α≥2, W. Borho and H. P. Kraft <cit.> constructed a graded F-algebra R=∑_{i=1}^∞ R_i generated by two elements x,y∈R_1 such that for any ε>0 we have

n^{α-ε} ≤ dim_F ∑_{i=1}^n R_i ≤ n^{α+ε}

for all sufficiently large n. Let f(n)=dim_F ∑_{i=1}^n R_i. Let J be a graded ideal of the free associative algebra F⟨x,y⟩ such that F⟨x,y⟩/J≅R.

Now we are ready to introduce a countable dimensional locally nilpotent algebra A (local nilpotency is sketched after the list of relations below). Let X={x_1,x_2,⋯} and Y={y_1,y_2,⋯}. Consider the algebra A presented by the set of generators X∪Y and the following set of relations:

* x_ix_jx_k=0, where i,j,k are arbitrary distinct integers;
* J(x_i,x_j)=(0), i≠j, where J(x_i,x_j) is the image of the ideal J under the homomorphism F⟨x,y⟩→F⟨X,Y⟩, x↦x_i, y↦x_j;
* id_{F⟨X,Y⟩}(x_i)^{n_i+3}=(0);
* [X,y_i]=[Y,y_i]=(0), i≥1;
* y_i^2=0, i≥1.
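The claim that A is locally nilpotent is not spelled out above; here is our own sketch of why the relations force it. Fix finitely many generators x_i, i∈S, and y_j, j∈T, with S and T finite. By relation (4), the y_j are central, and by relation (5) every word can contain each y_j at most once; so every word in the chosen generators is a word in the x_i, i∈S, multiplied by a square-free product of the y_j, j∈T. By relation (3), a word in which some x_i occurs at least n_i+3 times lies in id(x_i)^{n_i+3}=(0). Hence, by the pigeonhole principle, every word in the chosen generators of length greater than

∑_{i∈S}(n_i+2)+|T|

vanishes, so the subalgebra generated by these elements is nilpotent.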
Let g_k(n)=dim_F ∑_{μ=1}^n (∑_{i=1}^k Fx_i)^μ.

Lemma. Suppose that n_k≤n<n_{k+2}. Then

f(n) ≤ g_k(n) ≤ \binom{k}{2} f(n).

Proof. Let Ĵ be the ideal of the free algebra F⟨X,Y⟩ generated by the relations (<ref>)-(<ref>). We have

⟨x_{k-1},x_k⟩ ∩ Ĵ ⊆ J(x_{k-1},x_k)+∑_{i=n_{k+2}}^∞ F⟨X,Y⟩_i.

Hence,

g_k(n) ≥ dim_F ∑_{i=1}^n (⟨x_{k-1},x_k⟩/J(x_{k-1},x_k))_i = f(n).

On the other hand, relation (<ref>) implies

F⟨x_1,⋯,x_k⟩ ⊆ ∑_{1≤i≠j≤k} ⟨x_i,x_j⟩+Ĵ.

This implies that g_k(n)≤\binom{k}{2}f(n) and completes the proof of the lemma.

Now we will define a generating dense linear transformation γ: B̂→A. Recall that V=B_1. We let V^0=F·1 and γ(1)=0. Suppose that γ: V^{n_{k-1}}→A has already been defined. Let u_k=(b_1,⋯,b_m)∈𝒰. Recall that in the course of choosing the numbers n_k, we first chose an element b∈B^{n_{k-1}+1} such that b_1b,⋯,b_mb are linearly independent and then chose n_k large enough so that b_1b,⋯,b_mb∈V^{n_k-1}. Choose an element v_k∈V^{n_k}∖V^{n_k-1} and a subspace T_k⊂V^{n_k} so that

V^{n_k}=V^{n_{k-1}}⊕T_k⊕Fb_1b⊕⋯⊕Fb_mb⊕Fv_k

is a direct sum of subspaces. Define γ(T_k)=0, γ(b_1b)=⋯=γ(b_{m-1}b)=0, γ(b_mb)=y_k, and γ(v_k)=x_k.

It is clear that γ is a generating linear transformation. We will show that γ is dense. Choose an element u=(b_1,⋯,b_m)∈𝒰 and a nonzero element a∈A. The linearly independent sequence u occurs infinitely many times in the sequence u_1,u_2,⋯. Choose k≥1 such that u_k=(b_1,⋯,b_m) and y_k does not occur in a. Then there exists an element b∈B such that γ(b_1b)=⋯=γ(b_{m-1}b)=0, γ(b_mb)=y_k, and ay_k≠0.

As above, consider the element c_γ∈Lin(B̂,B̂⊗_FA), c_γ(b)=1⊗γ(b) for b∈B. Let C=⟨B,c_γ⟩. We proved in <ref> that C is a nil algebra. We will show that GK C=2GK(B)+α. Indeed, by Proposition <ref>, g_C(n)∼g_B(n)^2 w_γ(n). We will estimate w_γ(n). Let n_k≤n<n_{k+1}. Then

W_n=span(γ(V^{i_1})⋯γ(V^{i_r}), i_1+⋯+i_r≤n).

Since γ(V^{n_i})⊆γ(V^{n_{i-1}})+Fx_i+Fy_i for i≥1, and all of i_1,⋯,i_r are smaller than n_{k+1}, it follows that

γ(V^{i_1})+⋯+γ(V^{i_r}) ⊆ ∑_{i=1}^k Fx_i+∑_{i=1}^{k+1} Fy_i.

Then w_γ(n)≤g_k(n)·2^{k+1}. By Lemma <ref>, for any ε>0 and all sufficiently large n, we have g_k(n)≤n^{α+ε}\binom{k}{2}. Since n≥n_k>e^{e^k}, it follows that k<ln(ln n). Hence, for any ε' with 0<ε<ε', we have w_γ(n)≼n^{α+ε'}.

On the other hand, x_1,⋯,x_{k-1}∈γ(V^{n_{k-1}}). Hence, w_γ(n)≥g_{k-1}([n/n_{k-1}]). We have n_{k-1}≤[n/n_{k-1}]<n_{k+1}. Hence, by Lemma <ref>,

g_{k-1}([n/n_{k-1}]) ≥ f([n/n_{k-1}]).

From n≥n_k>e^{n_{k-1}}, we conclude that n_{k-1}≤ln n and therefore [n/n_{k-1}]≥n/ln n. Hence, for an arbitrary ε>0 and all sufficiently large n, we have

w_γ(n) ≥ (n/ln n)^{α-ε}.

This implies that for any ε'>ε, we have w_γ(n)≥n^{α-ε'}. This implies that GK C=2GK(B)+α and completes the proof of the theorem.

§ ACKNOWLEDGEMENT

The fourth author gratefully acknowledges the support from the NSF.

[1] Department of Mathematics, King Abdulaziz University, Jeddah, SA. E-mail addresses: analahmadi@kau.edu.sa; hhaalsalmi@kau.edu.sa
[2] Department of Mathematics, Ohio University, Athens, USA. E-mail address: jain@ohio.edu
[3] Department of Mathematics, University of California, San Diego, USA. E-mail address: ezelmano@math.ucsd.edu