--- title: 'Une correspondance de Jacquet-Langlands $p$-adique' --- Gaëtan CHENEVIER Ecole normale supérieure, DMA [^1] Université Paris 7 [*Abstract:*]{} In this paper, we extend the Jacquet-Langlands correspondence, between Hecke modules of usual modular forms and quaternionic modular forms, to overconvergent $p$-adic forms of finite slope. We show that this correspondence respects $p$-adic families and is induced by an isomorphism between some associated eigencurves. AMS classification: 11F85 (11F12, 11F72, 14G22) Introduction ============ Let $D/{\mathbb{Q}}$ be the definite quaternion algebra of discriminant $d$, $p$ a prime number and $N$ an integer such that $(Np,d)=(N,p)=1$. In this text we establish a bijective transfer "à la Jacquet-Langlands" between the overconvergent $p$-adic modular forms of finite slope ([@K], [@Col2]) that are eigenforms, cuspidal, and new at $d$, and the quaternionic $p$-adic modular forms for $D$ of finite slope ([@Buz2]), in tame level $N$ and arbitrary weight-character (Theorem \[jlp\]). This transfer is Hecke-equivariant and coincides with the usual Jacquet-Langlands correspondence when restricted to "classical" modular forms on either side. Better, it respects $p$-adic families, and we prove that it comes from a rigid-analytic isomorphism between the corresponding Hecke curves ("the eigencurves") (Theorem \[jlpfamille\]). Our proof is based on the existence of systems of Banach modules on either side (\[banachsystem\]), and on their comparison. We in fact prove a general statement (Theorem \[general\]) on the comparison of such systems. Arguments of Zariski-density of "classical" points on certain Fredholm hypersurfaces and on the Hecke varieties play an important role there. Our correspondence is then deduced using the usual Jacquet-Langlands correspondence, together with the "classicity in small slope" assertions on either side.
One of the charms of this correspondence is that the objects it compares are, to a large extent, of rather different natures. Usual overconvergent $p$-adic modular forms are sections of overconvergent bundles on the ordinary locus of modular curves, whereas quaternionic $p$-adic modular forms are pure products of group theory, specifically of the $p$-adic representation theory of $GL_2({\mathbb{Q}}_p)$. We see several points of interest in this, one of them being the possibility of introducing methods from the $p$-adic representation theory of $p$-adic Lie groups into the theory of overconvergent modular forms. This is, for instance, in accordance with a purely $p$-adic Langlands philosophy (not yet formulated, to our knowledge). Another interest comes from the fact that a multitude of unresolved questions remain concerning "the eigencurve" (see the introduction of [@eigen]); the more combinatorial origin of quaternionic forms may make it possible, if not to solve them, at least to facilitate numerical experiments. We discuss some consequences of our correspondence, as well as still-open problems, in the last part of the text. In [@ch], Hecke varieties are constructed for all algebraic groups $G$ over ${\mathbb{Q}}$ such that $G({\mathbb{R}})=U_n({\mathbb{C}})$, $G({\mathbb{Q}}_p)=GL_n({\mathbb{Q}}_p)$, and one may ask in general whether, and how, the usual transfers extend to overconvergent modular forms for these groups. As far as possible Jacquet-Langlands correspondences between such groups are concerned, our results apply to them without modification, provided the classical correspondence between the two groups in question is known for sufficiently many weights (see Theorem \[general\]). This work rests substantially on that of Buzzard, Coleman, and Mazur, whom we thank.
The question of the existence of a morphism between the two Hecke curves had notably been raised by K. Buzzard. [**Notations**]{}: $p$ is a prime number. ${\mathbb{C}}_p$ is the completion of an algebraic closure of ${\mathbb{Q}}_p$, equipped with the norm such that $|p|=1/p$; $v(.)$ is the associated valuation. If $X/{\mathbb{C}}_p$ is a rigid space, $A(X)$ is the ring of rigid analytic functions on $X$; if moreover $X$ is a reduced affinoid, the sup norm on $X$ endows $A(X)$ with a structure of Banach ${\mathbb{C}}_p$-algebra. ${\mathbb{A}}^1$ denotes the rigid affine line over ${\mathbb{C}}_p$. ${\mathbb{A}}$ (resp. ${\mathbb{A}}_f$, resp. ${\mathbb{A}}_f^{(d)}$) is the ring of adeles (resp. finite adeles, resp. adeles away from $\{d,\infty\}$) of ${\mathbb{Q}}$. If $d$ and $n$ are integers, we write $d{\, |\!| \,}n$ if $d$ divides $n$ and $(d,n/d)=1$. If $B$ is a ring, $B^*$ denotes the multiplicative group of its units. ${\textrm{diag}}(a,b)$ denotes the matrix $\left( \begin{array}{cc} a & 0 \\ 0 & b \end{array} \right)$. Systems of Banach modules ============================= Banach modules {#banachsystem} ----------------- The notion of systems of orthonormalizable Banach modules is introduced in [@eigen] §4.3. We call a [*system of Banach spaces*]{} the data of a set $E=\{ E_n, i_n, n \in {\mathbb{N}}\}$ of orthonormalizable ${\mathbb{C}}_p$-Banach spaces $E_n$, together with compact ${\mathbb{C}}_p$-linear maps $i_{n}: E_n \rightarrow E_{n+1}$; we write $E^{\dagger}$ for the inductive limit of the $E_n$ along the $i_n$.
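As a quick illustration of the normalization $|p|=1/p$ fixed in the notations above (a sketch that is not part of the paper; the function names `v` and `norm` are ours), the valuation and norm of an integer can be computed exactly:

```python
from fractions import Fraction

def v(x: int, p: int) -> int:
    """p-adic valuation of a nonzero integer: the exponent of p dividing x."""
    assert x != 0
    n = 0
    while x % p == 0:
        x //= p
        n += 1
    return n

def norm(x: int, p: int) -> Fraction:
    """p-adic norm normalized so that |p| = 1/p, i.e. |x| = p^(-v(x))."""
    return Fraction(1, p ** v(x, p))

# |p| = 1/p, and the norm is multiplicative: |xy| = |x| * |y|.
assert norm(5, 5) == Fraction(1, 5)
assert norm(50, 5) == norm(2, 5) * norm(25, 5) == Fraction(1, 25)
assert norm(6, 5) == 1
```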
If ${\mathcal{W}}/{\mathbb{C}}_p$ is a reduced rigid space, we call a [*sheaf of orthonormalizable Banach modules on ${\mathcal{W}}$*]{} the data of a sheaf $B$ on ${\mathcal{W}}$ such that for every affinoid open $V \subset {\mathcal{W}}$, $B(V)$ has a structure of orthonormalizable Banach $A(V)$-module, and such that if $V \subset V'$ are affinoid opens of ${\mathcal{W}}$, the canonical map $B(V')\widehat{\otimes}_{A(V')}A(V) \rightarrow B(V)$ is an isomorphism of Banach $A(V)$-modules. A [*system of Banach modules on ${\mathcal{W}}$*]{} is the data of a set $E=\{E_n,i_n, n\in {\mathbb{N}}\}$ where $E_n$ is a sheaf of orthonormalizable Banach modules on ${\mathcal{W}}$, and $i_n: E_n \rightarrow E_{n+1}$ is a morphism of sheaves such that for every affinoid open $V$ of ${\mathcal{W}}$, $i_n(V): E_n(V) \rightarrow E_{n+1}(V)$ is $A(V)$-linear and compact. We then write $E(V,n):=E_n(V)$, and abbreviate the whole to $E=(E(V,n))$. Finally, there is an obvious notion of a subsystem of Banach modules on ${\mathcal{W}}$. [*Example:*]{} If $E=\{ E_n, i_n, n \in {\mathbb{N}}\}$ is a system of Banach spaces, one can associate to it a system of Banach modules on ${\mathcal{W}}$ as follows. Let $E_{{\mathcal{W}},n}$ be the sheaf of orthonormalizable Banach modules associated to $E_n$ in the sense of [@ch] 3.7.2. Recall that if $V$ is an affinoid open, $E_{{\mathcal{W}},n}(V):=A(V)\widehat{\otimes}_{{\mathbb{C}}_p} E_n$, which we still denote $E(V,n)$. Moreover, $i_n$ provides, by extension of scalars, a compact $A(V)$-linear map $i_n(V): E(V,n) \rightarrow E(V,n+1)$. We call $(E(V,n))$ [*the system of Banach modules on ${\mathcal{W}}$ associated to $E$*]{}. Fix a system of Banach modules $E=(E(V,n))$ on ${\mathcal{W}}$.
If $x \in {\mathcal{W}}({\mathbb{C}}_p)$, we write $E_{x}$ for the system of Banach spaces deduced from $E$ by evaluation at $x$: $E_{x,n}:=E(V,n)\widehat{\otimes}_{A(V)}{\mathbb{C}}_p$ for any $V$ such that $x \in V({\mathbb{C}}_p)$ ($A(V) \rightarrow {\mathbb{C}}_p$ being evaluation at $x$), with $i_{x,n}: E_{x,n} \rightarrow E_{x,n+1}$ the induced compact map. In our applications, the $i_{x,n}$ will always be injective. This is for instance the case if $E_{{\mathcal{W}}}$ is the system of Banach modules on ${\mathcal{W}}$ associated to a system of Banach spaces $\{E_n, \, i_n, \, n \in {\mathbb{N}}\}$ with injective $i_n$. We call an endomorphism of $E$ the data, for each affinoid open $V \subset {\mathcal{W}}$ and every $n$ large enough ($V$ being fixed), of a continuous $A(V)$-linear endomorphism $U(V,n)$ of $E(V,n)$, such that the $U(V,n)$ commute with the maps $E(V,n) \rightarrow E(V,n+1)$ and with the affinoid open base changes $E(V,n) \rightarrow E(V',n)$ whenever these are defined. We say that $U:=(U(V,n))$ is an endomorphism of $E$, and we identify two endomorphisms $U$ and $U'$ if for every $V$ and all $n$ large enough, $U(V,n)=U'(V,n)$. The set ${\textrm{End}}(E)$ of these endomorphisms is then a ${\mathbb{C}}_p$-algebra in a natural way. If $H$ is a ring (resp. $G$ a monoid), a representation of $H$ (resp. of $G$) on $E$ is the data of a morphism of ${\mathbb{C}}_p$-algebras $H \rightarrow {\textrm{End}}(E)$ (resp. ${\mathbb{C}}_p[G] \rightarrow {\textrm{End}}(E)$). We then speak of a system of Banach $H$-modules on ${\mathcal{W}}$.
We write ${\textrm{Comp}}(E)$ for the two-sided ideal of ${\textrm{End}}(E)$ consisting of the elements with the following property: for $V \subset {\mathcal{W}}$ a fixed affinoid open, there exists for every $n$ large enough a continuous $A(V)$-linear map $T(V,n): E(V,n+1) \rightarrow E(V,n)$ making the following diagram commute: $$\xymatrix{ E(V,n) \ar@{->}[d]_{i_{n}(V)} \ar@{->}[rr]^{U(V,n)} & & E(V,n) \ar@{->}[d]^{i_{n}(V)} \\ E(V,n+1) \ar@{->}[rr]_{U(V,n+1)} \ar@{->}[urr]^{T(V,n)} & & E(V,n+1) }$$ In particular, $U(V,n)$ is compact, and [@BMF] A.2.3 ensures that $\det(1-U(V,n)_{|E(V,n)}) \in 1+TA(V)\{\{T\}\}$ is independent of $n$ large enough. Since the formation of Fredholm series commutes with affinoid open base change, all these series come by restriction from a unique Fredholm series of $U$ on ${\mathcal{W}}$, denoted $P_E(U) \in 1+TA({\mathcal{W}})\{\{T\}\}$. The weight-character space {#poids} ----------------------------- ${}^{}$ We fix $p>2$ in what follows; $\Lambda$ is the complete local ring ${\mathbb{Z}}_p[[(1+p{\mathbb{Z}}_p)]]$, and ${\mathcal{W}}$ is the open rigid ball of center $1$ and radius $1$, ${\mathcal{W}}({\mathbb{C}}_p):=\{z \in {\mathbb{C}}_p, \, \, |z-1|<1\}$. The map $\Lambda \rightarrow A({\mathcal{W}})$ defined by $[1+p] \mapsto Z$ identifies $\Lambda$ topologically with the ${\mathbb{Q}}_p$-valued analytic functions on ${\mathcal{W}}({\mathbb{Q}}_p)$ bounded by $1$ on all of ${\mathcal{W}}({\mathbb{C}}_p)$. The map $\kappa \mapsto \kappa(1+p)$ induces a bijection: $$\textrm{Hom}_{gr-cont}(1+p{\mathbb{Z}}_p,{\mathbb{C}}_p^*) \simeq {\mathcal{W}}({\mathbb{C}}_p)$$ There is a "universal" continuous character determined by $\kappa^{univ}(1+p)=Z$: $$\kappa^{univ}: 1+p{\mathbb{Z}}_p \rightarrow A({\mathcal{W}})^*$$ We set $\mu_{p^{\infty}}:=\{\zeta \in {\mathbb{C}}_p^*, \, \, \exists r \in {\mathbb{N}}, \zeta^{p^r}=1\}$, $\mu_{p^{\infty}} \subset {\mathcal{W}}({\mathbb{C}}_p)$.
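The bijection $\kappa \mapsto \kappa(1+p)$ rests on the fact that $1+p$ is a topological generator of $1+p{\mathbb{Z}}_p$ (for $p$ odd), so a continuous character is determined by its value there. A finite-level numerical check of this (our own illustration, not taken from the paper; the values $p=5$, $N=4$ are arbitrary choices):

```python
p, N = 5, 4
mod = p ** N
# The quotient (1 + pZ_p)/(1 + p^N Z_p) has order p^(N-1); the powers of
# 1+p must sweep out all of it, i.e. every residue class = 1 mod p.
orbit = set()
x = 1
for _ in range(p ** (N - 1)):
    orbit.add(x)
    x = (x * (1 + p)) % mod
assert orbit == {a for a in range(mod) if a % p == 1}
```

The same computation at higher $N$ illustrates the coherence of these finite quotients, which is what the completed group ring $\Lambda$ packages.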
If $\zeta \in {\mathbb{C}}_p^*$ satisfies $\zeta^{p^r}=1$, then $\zeta.(1+p)^k \in {\mathcal{W}}({\mathbb{C}}_p)$ corresponds to the character $x \mapsto x^k \chi(x)$, where $\chi$ is the finite-order character of $1+p{\mathbb{Z}}_p$ such that $\chi(1+p)=\zeta$; we denote this character by $(k,\chi)$. In this case, we call the conductor of $\kappa$, denoted ${\textrm{cond}}(\kappa)$ (or ${\textrm{cond}}(\chi)$), the smallest natural number $r$ such that $\chi$ is trivial on $1+p^r{\mathbb{Z}}_p$. We will need in \[quat\] to introduce, for $r \geq 1$, the rigid ball ${\mathcal{W}}_r \subset {\mathcal{W}}$: $${\mathcal{W}}_r({\mathbb{C}}_p):=\{ \kappa \in {\mathcal{W}}({\mathbb{C}}_p), \, \, |\kappa-1|< p^{-\frac{1}{p^{r-1}(p-1)}} \}$$ For instance $(1+p)^k\zeta \in {\mathcal{W}}_r$ when $\zeta^{p^{r-1}}=1$. Every $\kappa$ in ${\mathcal{W}}_r({\mathbb{C}}_p)$ is a character of $1+p{\mathbb{Z}}_p$ whose restriction to $1+p^r{\mathbb{Z}}_p$ is analytic. Better, if $V \subset {\mathcal{W}}_r$ is an affinoid open, the restriction of $\kappa^{univ}$ to $A(V)$ is an $A(V)$-valued character of $1+p{\mathbb{Z}}_p$ whose restriction to $1+p^r{\mathbb{Z}}_p$ is analytic. We write $\tau: ({\mathbb{Z}}/p{\mathbb{Z}})^* \rightarrow {\mathbb{Z}}_p^*$ for the Teichmüller character. We will generally view characters of $1+p{\mathbb{Z}}_p$ as characters of ${\mathbb{Z}}_p^*$, by extending them trivially on the roots of unity. $p$-adic modular forms {#formes} ============================= In what follows we recall the main actors of this paper. We fix a prime number $p\geq 5$ and integers $N$ and $d$ with $(N,p)=(N,d)=(p,d)=1$, $d$ squarefree. Let $\mathcal{H}$ be the commutative polynomial ${\mathbb{Z}}$-algebra on the letters $S_l$, $T_l$ for $l$ prime with $(l,Ndp)=1$, and $U_l$ for $l$ prime dividing $Ndp$. We fix a character $\varepsilon=\varepsilon_p\varepsilon_N$ of $({\mathbb{Z}}/p{\mathbb{Z}}\times {\mathbb{Z}}/N{\mathbb{Z}})^*$.
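The radius defining ${\mathcal{W}}_r$ reflects the valuation of $\zeta-1$ for $\zeta$ of $p$-power order: since $\Phi_{p^m}(1)=p$ and the primitive $p^m$-th roots of unity are conjugate, each satisfies $v(\zeta-1)=1/(p^{m-1}(p-1))$; for $m\leq r-1$ this exceeds $1/(p^{r-1}(p-1))$, which is why $(1+p)^k\zeta \in {\mathcal{W}}_r$. A self-contained check of the cyclotomic identity (our illustration; helper names are ours):

```python
def polymul(a, b):
    """Multiply two integer polynomials given as coefficient lists (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def phi_prime_power(p, r):
    """Coefficients of Phi_{p^r}(X) = 1 + X^m + X^(2m) + ... + X^((p-1)m), m = p^(r-1)."""
    m = p ** (r - 1)
    coeffs = [0] * ((p - 1) * m + 1)
    for i in range(p):
        coeffs[i * m] = 1
    return coeffs

p, r = 5, 3
phi = phi_prime_power(p, r)
m = p ** (r - 1)
# (X^m - 1) * Phi_{p^r}(X) = X^(p^r) - 1, so phi really is the p^r-th cyclotomic polynomial:
assert polymul([-1] + [0] * (m - 1) + [1], phi) == [-1] + [0] * (p ** r - 1) + [1]
# Hence Phi_{p^r}(1) = sum of coefficients = p, giving v(zeta - 1) = 1/(p^(r-1)(p-1)).
assert sum(phi) == p
```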
We also write, for $n \geq 1$, $T_n$ for the element of ${\mathcal{H}}$ given by the usual formulas: $$\sum_{n\geq 1} T_n n^{-s} =\prod_{l|Npd} (1-U_ll^{-s})^{-1}\prod_{l \nmid Npd} (1-T_ll^{-s}+l S_ll^{-2s})^{-1}$$ Overconvergent modular forms --------------------------------- ${}^{}$ We recall some constructions made in [@eigen] §2.1, [@BMF] §4.3. Let $X_1(Np,d)$ be the proper flat curve over ${\mathbb{Z}}_p$ parametrizing generalized elliptic curves with level structure of type $\Gamma_1(Np)\cap\Gamma_0(d)$, and $\omega$ the usual invertible sheaf on $X_1(Np,d)$. For $p>3$, recall that the normalized Eisenstein series of level $1$, $E_{p-1}$, is a global section of $\omega^{p-1}$ over $X_1(Np,d)/{\mathbb{Z}}_p$ lifting the Hasse invariant. For $0\leq v<1$, $v \in {\mathbb{Q}}$, we define $X_1(Np,d)(v)$ as the affinoid open of $X_1(Np,d)^{rig}$ on which $|E_{p-1}|\geq p^{-v}$, and $Z_1(Np,d)(v)$ as the affinoid connected component of $\infty$ in $X_1(Np,d)(v)$ ([@eigen] §2.1, §4.3). For every $m \geq 1$, we will more generally be interested ([@eigen] §2) in the (smooth, irreducible) rigid analytic curve $Z_1(Np^m,d)(v)$, with $0\leq v < 1$, the affinoid connected component of $\infty$ in $X_1(Np^m,d)(v)$. $$M_k(p^m,d,v):=H^0(Z_1(Np^m,d)(v),\omega^k)$$ is the ${\mathbb{C}}_p$-Banach space of $v$-overconvergent modular forms of weight $k$ and level $\Gamma_1(Np^m)\cap \Gamma_0(d)$; it carries a natural action of ${\mathcal{H}}$ ([@eigen] §3.2, [@BMF] B5). We fix here, once and for all, a decreasing real sequence $(v_n)_{n\in {\mathbb{N}}}$ such that $\forall n \in {\mathbb{N}}, \, v_n=p^{-n}v_0 \in [\frac{p}{p^{n+1}(p+1)},\frac{p}{p^{n}(p+1)}[\cap {\mathbb{Q}}$. The construction of our systems of Banach modules depends on $v_0$ in a way that will matter little to us; to lighten notation we omit this dependence in what follows.
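The Euler product above encodes, at a good prime $l \nmid Npd$, the recursion $T_{l^i}=T_l T_{l^{i-1}}-lS_lT_{l^{i-2}}$ together with multiplicativity of $n \mapsto T_n$ across coprime factors. A symbolic sketch (ours, using sympy; only the good-prime local factor is implemented, the $U_l$ factor for $l \mid Npd$ being the simpler geometric series):

```python
from sympy import symbols, expand, factorint

def T(n):
    """T_n as a polynomial in the symbols T_l, S_l, from the good-prime
    local factors (1 - T_l x + l S_l x^2)^(-1) of the Euler product."""
    result = 1
    for l, e in factorint(n).items():
        Tl, Sl = symbols(f"T_{l} S_{l}")
        a = [1, Tl]                                   # coefficients of the local factor
        for i in range(2, e + 1):
            a.append(expand(Tl * a[-1] - l * Sl * a[-2]))
        result = expand(result * a[e])
    return result

T2, S2 = symbols("T_2 S_2")
# T_{l^2} = T_l^2 - l S_l, and T is multiplicative on coprime arguments:
assert expand(T(4) - (T2**2 - 2 * S2)) == 0
assert expand(T(12) - T(4) * T(3)) == 0
```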
One could fix $v_0=\frac{1}{p+1}$. If $\kappa=(k,\chi) \in {\mathcal{W}}({\mathbb{C}}_p)$ has conductor $m$, there is an Eisenstein series $E_{\kappa}$, a modular form of weight $k$ on $X_1(Np^m,d)(v)$ with trivial character away from $p$ and character $\chi \tau^{-k}$ at $p$. If $0\leq v<\frac{p}{p^{m-1}(p+1)}$, $Z_1(Np,d)(v)$ is canonically identified with the quotient of $Z_1(Np^m,d)(v)$ by the action of the diamond operators of $(1+p{\mathbb{Z}}_p)/(1+p^m{\mathbb{Z}}_p)$, allowing us to view $M_0(p,d,v)$ as the subspace of $M_0(p^m,d,v)$ with trivial character under $(1+p{\mathbb{Z}}_p)/(1+p^m{\mathbb{Z}}_p)$. Multiplication by $E_{\kappa}$ is an isomorphism of ${\mathbb{C}}_p$-Banach spaces from $M_0(p^m,d,v)$ onto $M_k(p^m,d,v)$, multiplying the character at $p$ by $\chi \tau^{-k}$. The $q$-expansion of $E_{\kappa}$ at infinity is the specialization at $\kappa$ of an "abstract" $q$-expansion ${\mathbb{E}}(q) \in 1+q\Lambda[[q]]$ called the restricted Eisenstein family ([@eigen], §2.2). We set $F:=(F(V,n))$, the system of Banach modules on ${\mathcal{W}}$ associated to the system of Banach spaces $\{ A(Z_1(Np,d)(v_n)), {\textrm{res}}_n, n\in {\mathbb{N}}\}$, the maps $${\textrm{res}}_n: A(Z_1(Np,d)(v_n)) \rightarrow A(Z_1(Np,d)(v_{n+1}))$$ being the natural compact restrictions. By definition, if $V \subset {\mathcal{W}}$ is an affinoid open, $F(V,n)=A(V \times Z_1(Np,d)(v_n))$. $F$ is naturally a representation of ${\mathcal{H}}$, and $U_p \in {\textrm{Comp}}(F)$ ([@eigen] 3.2, [@BMF] B5). If $\kappa=(k,\chi) \in {\mathcal{W}}({\mathbb{C}}_p)$ has conductor $m$ and $n\geq m-1$, $F_{\kappa,n}$ is identified, as a representation of ${\mathcal{H}}$, with the subspace of $M_k(p^m,d,v_n)$ of character $\chi \tau^{-k}$ under $({\mathbb{Z}}/p^m{\mathbb{Z}})^*$. At each cusp $c$ in $Z_1(Np,d)(0)$, the theory of Tate curves (see [@K] A1.2) provides a $q_c$-expansion: $F(V,n) \hookrightarrow A(V)[[q_c]]$.
Moreover, $({\mathbb{Z}}/Np{\mathbb{Z}})^*$ acts by natural automorphisms on $X_1(Np,d)$, preserving $Z_1(Np,d)(v)$ and hence its set of cusps. Let $A(Z_1(Np^m,d)(v_n))^{0,\varepsilon}$ denote the subspace of $A(Z_1(Np^m,d)(v_n))$ consisting of the functions with vanishing $q_c$-expansion at every $c$ in $Z_1(Np,d)(0)$, and on which $({\mathbb{Z}}/Np{\mathbb{Z}})^*$ acts through the character $\varepsilon$ fixed above. We are interested in the system of Banach modules $F^{0,\varepsilon}$ associated to the system of Banach spaces $$\{ A(Z_1(Np,d)(v_n))^{0,\varepsilon}, {{\textrm{res}}_n}_{|A(Z_1(Np,d)(v_n))^{0,\varepsilon}}, n \in {\mathbb{N}}\}$$ It is a subsystem of Banach modules of $F$, preserved by ${\mathcal{H}}$. Its evaluation $F^{0,\varepsilon}_{\kappa,n}$ at $\kappa=(k,\chi) \in {\mathcal{W}}({\mathbb{C}}_p)$ of conductor $m$ with $n \geq m-1$ is identified with the subspace of $M_k(p^m,d,v_n)$ consisting of the forms of character $\varepsilon \tau^{-k} \chi$ vanishing at the cusps of $Z_1(Np^m,d)(0)$ (note that $E_\kappa$ is nonzero at each of these cusps). In what follows we define the Banach submodule of $F^{0,\varepsilon}$ consisting of the families of overconvergent forms that are "new at $d$" (see also [@BMF] B5). For $l$ a prime dividing $d$, $ld'=d$, there are canonical finite flat morphisms $$\pi_l: X_1(Np^m,d) \rightarrow X_1(Np^m,d')$$ forgetting the subgroup of order $l$. These morphisms induce rigid analytic morphisms $\pi_{l}: Z_1(Np^m,d)(v) \rightarrow Z_1(Np^m,d')(v)$, finite flat of degree $l+1$ (étale away from the cusps).
The canonical isomorphism $\pi_l^*(\omega)\simeq \omega$ lets one consider its trace, ${\pi_l}_*\omega \rightarrow \omega$, hence in particular on sections over $Z_1(Np^m,d')(v)$: $$tr(\pi_l)_k: M_k(p^m,d,v) \rightarrow M_k(p^m,d',v)$$ We extend $tr(\pi_l)_0: A(Z_1(Np,d)(v)) \rightarrow A(Z_1(Np,d')(v))$ linearly into an $A(V)$-morphism $${\textrm{Tr}}_l: A(V\times Z_1(Np,d)(v)) \rightarrow A(V \times Z_1(Np,d')(v))$$ ${\textrm{Tr}}_l$ thus defines an endomorphism of the Banach module $F^{0,\varepsilon}$; it is well defined for all pairs $(V,n)$. Let $\kappa=(k,\chi) \in {\mathcal{W}}({\mathbb{C}}_p)$, $m=\textrm{cond}(\kappa)$, $n\geq m-1$; the following diagram commutes: $$\xymatrix{ F_{\kappa,n} \ar@{->}[d]_{{\textrm{Tr}}_l} \ar@{->}[r]^{\hspace{- 1 cm}\times E_{\kappa}} & M_k(p^m,d,v_n) \ar@{->}[d]^{tr(\pi_l)_k} \\ F_{\kappa,n} \ar@{->}[r]^{\hspace{-1 cm} \times E_{\kappa}} & M_k(p^m,d',v_n) }$$ [*Proof:*]{} We must show that if $f \in M_0(p^m,d,v)$, then $tr(\pi_l)_k(E_{\kappa}f)=E_{\kappa}tr(\pi_l)_0(f)$, which follows immediately from the fact that $E_{\kappa}$ has level prime to $l$. $\square$ One checks easily that ${\textrm{Tr}}_l$ commutes with the Hecke correspondences away from $l$ and with the diamond operators, and that ${\textrm{Tr}}_l^2=(l+1){\textrm{Tr}}_l$ (cf. [@BMF] B5.1). Let $W_l$ denote the Atkin involution on $X_1(Np^m,d)/{\mathbb{Z}}_p$ defined modularly by $(E,H,\alpha) \mapsto (E/H,E[l]/H,\pi\cdot \alpha)$, where $H$ is a subgroup of order $l$ of $E$, $\pi$ the isogeny $E \rightarrow E/H$, and $\alpha$ a level structure of type $\Gamma_1(Np^m)\cap \Gamma_0(d')$. It preserves the $Z_1(Np^m,d)(v)$ and induces by extension of scalars an involution, still denoted $W_l$, on the $A(Z_1(Np,d)(v)\times V)$. [**Definition:**]{} Let $\kappa \in {\mathcal{W}}({\mathbb{C}}_p)$; $f \in F_{\kappa,n}^{0,\varepsilon}$ is said to be new at $d$ if for every $l$ dividing $d$, ${\textrm{Tr}}_l(f)={\textrm{Tr}}_l(W_l(f))=0$.
We write $F^{0,\varepsilon,d}$ for the system of Banach modules on ${\mathcal{W}}$ associated to the system of Banach spaces $ \{ A(Z_1(Np,d)(v_n))^{0,\varepsilon,d}, res_n , n \in {\mathbb{N}}\}$, where $$A(Z_1(Np,d)(v_n))^{0,\varepsilon,d}:=A(Z_1(Np,d)(v_n))^{0,\varepsilon}\cap (\bigcap_{l|d} {\textrm{Ker}}({\textrm{Tr}}_l)\cap {\textrm{Ker}}({\textrm{Tr}}_l \cdot W_l))$$ $F^{0,\varepsilon,d}$ is a subsystem of Banach modules of $F^{0,\varepsilon}$, stable under the action of ${\mathcal{H}}$. [*Proof*]{}: This is not completely obvious for the $U_l$ with $l|d$. It suffices to check that for $\kappa=(k,\chi) \in {\mathcal{W}}({\mathbb{C}}_p)$, $m={\textrm{cond}}(\kappa)$, $n \geq m-1$, $U_l$ preserves $F_{\kappa,n}^{0,\varepsilon,d}$ inside $F_{\kappa,n}^{0,\varepsilon}$. On $M_k(p^m,d,v_n)$ there is an endomorphism $W_{l,k}$ defined by $W_{l,k}(f)(E,H,\alpha,\omega)=f(E/H,E[l]/H,\pi.\alpha, (\pi^{\vee})^*(\omega) )$, with the obvious notations. One checks that on the subspace of character $\varepsilon\chi\tau^{-k}$, one has $W_{l,k}^2=\varepsilon(l)\kappa(l)$ and $$tr(\pi_l)_k(f)=f+l\varepsilon(l)^{-1}\kappa(l)^{-1}U_l(W_{l,k}(f)), \, \, tr(\pi_l)_k(W_{l,k}(f))=f+l\cdot U_l(f)$$ Moreover, if $g \in F_{\kappa,n}^{0,\varepsilon}$ and $e_l$ denotes the invertible function on $X_1(Np,d)(v_n)$ with $q$-expansion $\frac{E_{\kappa}(q^l)}{E_{\kappa}(q)}$, then a computation shows that $$E_{\kappa}^{-1}W_{l,k}(gE_{\kappa})=e_lW_l(g)$$ In particular, multiplication by $E_{\kappa}$ maps $F_{\kappa,n}^{0,\varepsilon,d}$ isomorphically onto the intersection of the ${\textrm{Ker}}({\textrm{tr}}(\pi_l)_k)\cap {\textrm{Ker}}({\textrm{tr}}(\pi_l)_k\cdot W_{l,k})$ inside the subspace of $M_k(p^m,d,v_n)$ of character $\varepsilon\chi\tau^{-k}$. The relations above show that this subspace is stable under $W_{l,k}$, hence under $U_l$, which coincides with $-l^{-1}W_{l,k}$ on it.
$\square$ If $\kappa=(k,\chi) \in {\mathcal{W}}({\mathbb{C}}_p)$, $m={\textrm{cond}}(\chi)$, $n\geq m-1$, then $F^{0,\varepsilon,d}_{\kappa,n}$ is ${\mathbb{C}}_p\otimes_{{\mathbb{Z}}}{\mathcal{H}}$-isomorphic to the Banach subspace of $M_k(p^m,d,v_n)$ consisting of the forms vanishing at the cusps of $Z_1(Np^m,d)(0)$, of character $\varepsilon \chi \tau^{-k}$, and annihilated by the ${\textrm{tr}}(\pi_l)_k$ and ${\textrm{tr}}(\pi_l)_k\cdot W_{l,k}$. These forms will be called [*new at $d$*]{}. On the subspace of $M_k(p^m,d,v_n)$ consisting of the restrictions to $Z_1(Np^m,d)(v_n)$ of forms convergent on all of $X_1(Np^m,d)$, this condition of being new at $d$ is precisely the usual one. [*Remarks*]{}: i) Let $\kappa=(k,\chi) \in {\mathcal{W}}({\mathbb{C}}_p)$ of conductor $m$, $r \in {\mathbb{N}}$, and $f \in F_{\kappa,r}^{0,\varepsilon,d}$ an eigenform for all of ${\mathcal{H}}$; write $\chi_f: {\mathcal{H}}\rightarrow {\mathbb{C}}_p$ for the character defined by $T_n(f):=\chi_f(T_n)f$. $f$ has a $q$-expansion at the cusp $\infty$ of the form $\sum_{n\geq 1} a_n q^n$ with $a_n=\chi_f(T_n)a_1$. The $q$-expansion principle (i.e. the irreducibility of $Z_1(Np^m,d)(0)$) then shows that $a_1 \neq 0$, so that one may assume $a_1=1$. This therefore defines a bijection between characters of ${\mathcal{H}}$ occurring in $F_{\kappa,r}^{0,\varepsilon,d}$ and the elements of the latter that are eigenforms with $q$-expansion normalized to $1$ ("weak multiplicity one"). ii\) The ${\mathbb{C}}_p$-Banach spaces $A(Z_1(Np,d)(v))$, as well as all their subspaces considered in this paragraph, are orthonormalizable. Indeed, by [@Ser] 1.1, every Banach space obtained by extension of scalars from a Banach space over a local field is orthonormalizable.
Moreover, by [@Ser] 1.2, the inclusions $A(Z_1(Np,d)(v_n)) \supset A(Z_1(Np,d)(v_n))^{0,\varepsilon} \supset A(Z_1(Np,d)(v_n))^{0,\varepsilon,d}$ are split in the category of ${\mathbb{C}}_p$-Banach spaces. Quaternionic $p$-adic modular forms --------------------------------------------- ${}^{}$ In what follows, we refer to [@Buz2]. ### $p$-adic principal series of the Iwahori {#rep} Let us first introduce some group-theoretic notations: $$L \textrm{ denotes the lower Borel of } GL_2({\mathbb{Q}}_p)$$ $$N \textrm{ the upper unipotents of } GL_2({\mathbb{Z}}_p)$$ $$I(m) \textrm{ the subgroup of $GL_2({\mathbb{Z}}_p)$ of elements that are upper triangular modulo $p^{m}$ }$$ $$I:=I(1) \textrm{ the Iwahori, }\, \, \, u:={\textrm{diag}}(1,p)$$ $${\mathbb{M}}(m) \textrm{ is the submonoid of $GL_2({\mathbb{Q}}_p)\cap M_2({\mathbb{Z}}_p)$ generated by $I(m)$ and $u$}, \, \, \, {\mathbb{M}}:={\mathbb{M}}(1)$$ The Iwahori decomposition reads $I=(L\cap I)\times N$. We identify $N$ with ${\mathbb{Z}}_p$ via $$\left( \begin{array}{cc} 1 & t \\ 0 & 1 \end{array} \right) \mapsto t$$ The big Bruhat cell $L\backslash LI \subset L\backslash GL_2({\mathbb{Q}}_p)={\mathbb{P}}^1({\mathbb{Q}}_p)$ is stable under right multiplication by ${\mathbb{M}}$. Writing $T$ for the coordinate on ${\mathbb{Z}}_p$, the action of ${\mathbb{M}}$ by right translation on the functions on $L\backslash LI ={\mathbb{Z}}_p$ then reads $$\left( \begin{array}{cc} a & b \\ c & d \end{array} \right).T = \frac{b+dT}{a+cT}$$ For $n \in {\mathbb{N}}$, we write ${\mathcal{C}}_{n}$ for the Banach ${\mathbb{C}}_p$-algebra of functions on ${\mathbb{Z}}_p$ whose restriction to each $a+p^n{\mathbb{Z}}_p$ is analytic; ${\mathcal{C}}_n$ is stable under the action of ${\mathbb{M}}$, and ${\mathcal{C}}_0$ is identified with the Tate algebra ${\mathbb{C}}_p\!\!<\!T\!>$.
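The fractional-linear formula above does define a right action: viewing $t$ as the point $[1:t]$, one has $[1,t]\,g=[a+ct,\,b+dt]$, so acting by $gh$ is acting by $g$, then by $h$. An exact check modulo $p^8$ (our own sketch; the matrices are arbitrary elements of the Iwahori, with $a \in {\mathbb{Z}}_p^*$ and $c \in p{\mathbb{Z}}_p$ so that $a+ct$ is invertible):

```python
p, prec = 5, 5 ** 8

def inv(x):
    """Inverse modulo p^8, i.e. the truncation of the p-adic inverse."""
    return pow(x, -1, prec)

def act(g, t):
    """T -> (b + dT)/(a + cT) for g = (a, b, c, d), computed mod p^8."""
    a, b, c, d = g
    return (b + d * t) * inv(a + c * t) % prec

def mul(g, h):
    """Matrix product of g and h, entries reduced mod p^8."""
    a, b, c, d = g
    e, f, gg, hh = h
    return ((a * e + b * gg) % prec, (a * f + b * hh) % prec,
            (c * e + d * gg) % prec, (c * f + d * hh) % prec)

# Right action: acting by the product gh equals acting by g, then by h.
g, h, t = (7, 3, 5, 11), (2, 9, 10, 1), 4
assert act(mul(g, h), t) == act(h, act(g, t))
```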
The natural restrictions $i_n: {\mathcal{C}}_n \rightarrow {\mathcal{C}}_{n+1}$ are compact and injective, which makes ${\mathcal{C}}=\{{\mathcal{C}}_n,i_n, n\in {\mathbb{N}}\}$ a system of Banach spaces such that $u \in {\textrm{Comp}}({\mathcal{C}})$. We write ${\mathcal{C}}_{{\mathcal{W}}}$ for the system of Banach modules on ${\mathcal{W}}$ associated to ${\mathcal{C}}$. It will be convenient to set ${\mathcal{C}}_{-n}:={\mathcal{C}}_0$ and ${\mathcal{C}}(V,-n):={\mathcal{C}}(V,0)$ for $n \in {\mathbb{N}}$ and $V \subset {\mathcal{W}}$ an affinoid open. Let $j: I \rightarrow {\mathcal{C}}_0^*$ be the 1-cocycle defined by $$j( \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) ):= \tau^{-1}(a)(a+cT) \in 1\!+\!p{\mathbb{Z}}_p\!<\!T\!> \, \subset \, {\mathcal{C}}_0^*$$ It extends to ${\mathbb{M}}$ by taking it trivial on $u$. If $\kappa \in {\mathcal{W}}({\mathbb{C}}_p)$, $\gamma \in {\mathbb{M}}$, we define a cocycle $\kappa(j): {\mathbb{M}}\rightarrow \bigcup_{n \in {\mathbb{N}}} {\mathcal{C}}_{n}^*$ by $$(\kappa(j)(\gamma))(t):=\kappa(j(\gamma)(t)), \, \, \, t \in {\mathbb{Z}}_p$$ If $\gamma \in {\mathbb{M}}(m)$ and $\kappa \in {\mathcal{W}}_r({\mathbb{C}}_p)$, then $\kappa(j)(\gamma) \in {\mathcal{C}}_{r-m}^*$. We write $\rho_{\kappa}$ for the representation of ${\mathbb{M}}$ on the space $\bigcup_{n \in {\mathbb{N}}}{\mathcal{C}}_n$ twisted by $\kappa(j)$, i.e. $\rho_{\kappa}(\gamma)(v):=\kappa(j)(\gamma)\,\gamma.v$. This twist does not affect $u$, and if $\kappa \in {\mathcal{W}}_r({\mathbb{C}}_p)$, then ${\mathbb{M}}(m)$ preserves ${\mathcal{C}}_n$ as soon as $n\geq r-m$. More generally, if $V \subset {\mathcal{W}}$ is an affinoid open, there is a $1$-cocycle $\kappa^{univ}(j): {\mathbb{M}}\rightarrow \bigcup_{n \in {\mathbb{N}}}{\mathcal{C}}(V,n)^*$ such that $\kappa^{univ}(j)_{\kappa}=\kappa(j)$ if $\kappa \in {\mathcal{W}}({\mathbb{C}}_p)$, and $\kappa^{univ}(j)(\gamma) \in {\mathcal{C}}(V,r-m)^*$ if $\gamma \in {\mathbb{M}}(m)$ and $V \subset {\mathcal{W}}_r$.
We thus obtain a representation $\rho^{univ}$ of ${\mathbb{M}}$ on $\bigcup_{n \in {\mathbb{N}}} {\mathcal{C}}(V,n)$, by twisting by $\kappa^{univ}(j)$ the natural representation obtained by completed extension of scalars to $A(V)$ of that on the ${\mathcal{C}}_n$. Thus the system of Banach modules ${\mathcal{C}}_{{\mathcal{W}}}$ on ${\mathcal{W}}$ carries a representation $\rho^{univ}: {\mathbb{M}}\rightarrow {\textrm{End}}({\mathcal{C}}_{{\mathcal{W}}})$. If $V \subset {\mathcal{W}}_r$, then ${\mathbb{M}}(m)$ preserves ${\mathcal{C}}(V,n)$ as soon as $n \geq r-m$, and $u \in {\textrm{Comp}}({\mathcal{C}}_{{\mathcal{W}}})$. We call ${\mathcal{C}}_{{\mathcal{W}}}$ [*the analytic family of $p$-adic principal series of $I$*]{}. [*Remarks:*]{} i) ${\mathcal{C}}^{\dagger}=\bigcup_{n \in {\mathbb{N}}}{\mathcal{C}}_n$ is the space of locally analytic functions on ${\mathbb{Z}}_p$; equipped with its locally convex topology it is a space of compact type in the sense of [@ST] §1 [^2]. The representation $\rho_{\kappa}$ of $I$ on ${\mathcal{C}}^{\dagger}$ is the locally analytic principal series of $I$ of character $\kappa$ ([@ST] §5; note that their $B$ is the Iwahori opposite to $I$). ii\) The assertions of this paragraph are detailed in [@Buz2] §4, §7; note that he uses right actions, not left ones. Up to this modification, in his notations we have $\mathcal{A}_{\kappa,p^{-n}}={\mathcal{C}}_{\kappa,n}$ and $M_{m} \supset {\mathbb{M}}(m)$; moreover, if $\kappa \in {\mathcal{W}}_r({\mathbb{C}}_p)\backslash {\mathcal{W}}_{r-1}({\mathbb{C}}_p)$, "$m$ is good for $(\kappa,p^{-n})$" is equivalent to $n \geq r-m$.
### Modular forms {#quat} Let $D({\mathbb{Q}})$ be a quaternion algebra over ${\mathbb{Q}}$; we fix a maximal order $D({\mathbb{Z}})$ of $D({\mathbb{Q}})$ and write $D$ for the associated ring scheme over ${\mathbb{Z}}$, $D^*$ for its group of units. We assume that $D$ is definite. Let $d={\textrm{disc}}(D)$ be the product of the ramified primes; we fix an isomorphism over ${\mathbb{Z}}[1/d]$, $$\varphi: D/{\mathbb{Z}}[1/d] \simeq \mathbb{M}_2/{\mathbb{Z}}[1/d]$$ If $M$ is an integer prime to $d$, we write $U_1(M)$ (resp. $U_0(M)$) for the compact open subgroup of $D^*({\mathbb{A}}_f) \simeq D^*({\mathbb{A}}_{f}^{(d)}) \times \prod_{l | d} D^*({\mathbb{Q}}_l)$, decomposed according to this product, equal to the maximal compact $D^*({\mathbb{Z}}_l)$ at the primes $l$ dividing $d$, and to $\varphi^{-1}(\Gamma_1(M))$ (resp. $\varphi^{-1}(\Gamma_0(M))$) on the other factor, with: $$\Gamma_1(M)=\{ g \in GL_2(\widehat{{\mathbb{Z}}[1/d]}), \, \, g \equiv \left( \begin{array}{cc} 1 & * \\ 0 & * \end{array} \right)\,\bmod M\},$$ $$\Gamma_0(M)=\{ g \in GL_2(\widehat{{\mathbb{Z}}[1/d]}), \, \, g \equiv \left( \begin{array}{cc} * & * \\ 0 & * \end{array} \right)\,\bmod M \}$$ We view characters of $({\mathbb{Z}}/M{\mathbb{Z}})^*$ as characters of $U_0(M)$ through the upper-left star. If $M\geq 5$, $D^*({\mathbb{Q}}) \times U_1(M)$ acts freely on $D^*({\mathbb{A}}_f)$, and $D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}}_f)/U_1(M)$ is finite, of cardinality denoted $h_1(M)$. We choose $M=Np$ as in \[formes\].
We write ${F}^{D}$ for the system of Banach modules on ${\mathcal{W}}$ such that if $V \subset {\mathcal{W}}$ is an affinoid open and $n \in {\mathbb{N}}$, $F^{D}(V,n)$ is the Banach $A(V)$-module of functions $f: D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}}_f)\rightarrow {\mathcal{C}}(V,n)$ satisfying $$f(xu)=j(u_p^{-1})^{-2}\rho^{univ}(u_p^{-1})f(x), \, \, \forall (x,u) \in D^*({\mathbb{A}}_f) \times U_1(Np)$$ and $i_n$ is the map deduced from the canonical restriction ${\mathcal{C}}(V,n) \rightarrow {\mathcal{C}}(V,n+1)$. Orthonormalizability comes from the fact that $F^D(V,n)$ is identified with ${\mathcal{C}}(V,n)^{h_1(Np)}$, since $Np\geq 5$ (see also [@Buz2] §7, §4). There is a natural representation on $F^D$ of $U_0(Np)/U_1(Np)\simeq ({\mathbb{Z}}/Np{\mathbb{Z}})^*$, defined by $(<\gamma>.f)(x)=j(\gamma)^{-2}\rho^{univ}(\gamma)f(x\gamma)$. We write $F^{D,\varepsilon}$ for the subsystem of Banach modules of $F^D$ on which $({\mathbb{Z}}/Np{\mathbb{Z}})^*$ acts by $\varepsilon^{-1}$. There is a natural representation ${\mathcal{H}}\rightarrow {\textrm{End}}(F^{D,\varepsilon})$ such that $U_p \in {\textrm{Comp}}(F^{D,\varepsilon})$. The double cosets considered here for the Hecke operators are (with the obvious abuses) ${\textrm{diag}}(1,l)$ for $T_l$, and for $U_l$ if $l\nmid d$; ${\textrm{diag}}(l,l)$ for $S_l$ if $l \nmid Npd$; and as in \[JLclass\] for $U_l$ with $l|d$. If $\kappa \in {\mathcal{W}}({\mathbb{C}}_p)$, $F^{D,\varepsilon}_{\kappa,n}$ is identified with the space of functions $D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}}_f) \rightarrow {\mathcal{C}}_n$ such that $$f(xu)=\varepsilon^{-1}(u)j(u_p^{-1})^{-2}\rho_{\kappa}(u_p^{-1})f(x), \, \,\, \forall \, (x,u) \in D^*({\mathbb{A}}_f) \times U_0(Np)$$ $(F^{D,\varepsilon}_{\kappa})^{\dagger}$, with its ${\mathcal{H}}$-module structure, is [*the space of quaternionic $p$-adic modular forms of weight-character $\kappa$, tame level $N$, and character $\varepsilon$*]{}.
Soient $n,\, m,\, r$ des entiers tels que $n\geq r-1 \geq 0$, $n\geq m-1 \geq 0$, et $\kappa \in {\mathcal{W}}_r({\mathbb{C}}_p)$, considérons l’espace annexe $F_{\kappa,n}^{D,\varepsilon}[m]$ des fonctions $f: D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}}_f)\rightarrow {\mathcal{C}}_{\kappa,n}$ satisfaisant $$f(xu)=\varepsilon^{-1}(u)j(u_p^{-1})^{-2}\rho_{\kappa}(u_p^{-1})f(x), \, \, \forall (x,u) \in D^*({\mathbb{A}}_f) \times U_0(Np^m)$$ C’est un ${\mathcal{H}}$-module, en prenant cette fois-ci des doubles classes par rapport à $U_0(Np^m)$ (ce qui ne change que $U_p$, encore défini par la double classe de ${\textrm{diag}}(1,p)$). D’après [@Buz2] (§5 lemme 3,iv), $F_{\kappa,n-m+1}^{D,\varepsilon}[m]$ et $F_{\kappa,n}^{D,\varepsilon}$ sont isomorphes comme ${\mathcal{H}}\otimes_{{\mathbb{Z}}}{\mathbb{C}}_p$-modules. Si de plus $\kappa=(k,\chi)$ est de conducteur $m$, et $k\geq 2$, $F_{\kappa,0}^{D,\varepsilon}[m]$ contient comme sous-${\mathcal{H}}\otimes_{{\mathbb{Z}}}{\mathbb{C}}_p$-module l’espace des fonctions à valeurs polynomiales en $T$ de degré $\leq k-2$, ce dernier étant isomorphe comme ${\mathcal{H}}\otimes_{{\mathbb{Z}}} {\mathbb{C}}_p$-module à l’espace des fonctions $f: D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}}_f)\rightarrow \textrm{Sym}^{k-2}({\mathbb{C}}_p^2) $ telles que $$f(xu)=\varepsilon(u)^{-1}\chi(u_p)^{-1}\tau^{k}(u_p)u_p^{-1}f(x), \, \, \forall \, \, (x,u) \, \, \in \, \, D^*({\mathbb{A}}_f) \times U_0(Np^m)$$ Ce ${\mathcal{H}}\otimes_{{\mathbb{Z}}} {\mathbb{C}}_p$-module est $S_{k}^D(Np^m,\varepsilon\chi\tau^{-k},{\mathbb{C}}_p)$ dans la notation du §\[pstructure\]. Préliminaires de théorie spectrale ================================== Semi-simplification en dimension infinie {#semi} ---------------------------------------- Soit $K$ un corps valué complet non archimédien (non discret), $V$ un $K$-espace de Banach orthonormalisable, $U$ un endomorphisme compact de $V$.
La série caractéristique $P(T)=\det(1-TU)$ de $U$ se décompose sous la forme $$P(T)=\prod_{i \geq 0} P_i(T)^{n_i}$$ où les $P_i(T)$ sont des irréductibles de $1+TK[T]$ deux à deux distincts tels que $|P_i(T)-1| \underset{i \rightarrow \infty}{\rightarrow} 0$ pour la norme $|.|$ sur $K[T]$ du sup des coefficients. Par [@Ser], on sait que $V$ est somme directe topologique de $Ker(U)$ et des espaces de dimension finie $V(P_i):={\textrm{Ker}}(P^*_i(U)^{n_i})$, $Q^*(T)$ désignant le polynôme réciproque de $Q(T)$, sur lesquels $U$ a pour polynôme caractéristique $P^*_i(T)^{n_i}$. Soit $H$ une $K$-algèbre, $\rho: H \rightarrow {\textrm{End}}_K(V)$ une représentation telle que $\rho(H)$ contienne $U$ et lui commute, alors $\rho(H)$ stabilise les $V(P_i)$, que l’on peut semi-simplifier. [**Définition:**]{} On notera $\mathcal{X}_U(V)$ l’ensemble des représentations irréductibles de $H$ apparaissant dans la réunion des semi-simplifications des $V(P_i)$, comptées avec multiplicités (qui sont nécessairement finies). Notons que $\mathcal{X}_U(V)$ dépend de $U$, mais que $\mathcal{X}_U(V)=\mathcal{X}_{U'}(V)$ si $U' \in \rho(H)$ est un autre endomorphisme compact de $V$ commutant à $\rho(H)$ et à $U$ tel que ${\textrm{Ker}}(U')={\textrm{Ker}}(U)$. Si $\mathcal{X}$ est un ensemble de représentations d’une algèbre, on notera $|\mathcal{X}|$ l’ensemble des classes d’isomorphie de représentations apparaissant dans $\mathcal{X}$ (autrement dit, on oublie les multiplicités). \[semisimpl\] Soient $(\rho_1,V_1)$ et $(\rho_2,V_2)$, des représentations d’une $K$-algèbre $H$ dans les endomorphismes continus de $V_1$ et $V_2$, telles que $V_i$ ($i=1,2$) soit muni d’un endomorphisme compact $U_i$ commutant à $\rho_i(H)$. On suppose de plus que pour tout $h \in H$, $\det(1-T\rho_1(h)U_1)=\det(1-T\rho_2(h)U_2) \in K\{\{T\}\}$. Alors $\mathcal{X}_{U_1}(V_1)=\mathcal{X}_{U_2}(V_2)$.
[*Preuve:*]{} Soit $\alpha \in {\mathbb{R}}$, $i \in \{1,2\}$, $V_i^{\alpha}$ le plus grand sous-espace de dimension finie de $V_i$ stable par $U_i$ sur lequel le polygone de Newton du polynôme caractéristique de $U_i$ est de pente $\alpha$. Soit $h \in H$, $\rho_i(h)$ stabilise les $V_i^{\alpha}$, et ses valeurs propres sur ces derniers sont toutes bornées par $||\rho_i(h)||< \infty$, $||.||$ désignant la norme d’opérateur sur $V_i$. Soit $x \in K^*$ tel que $|x|<1$, on peut trouver par conséquent un $N \in {\mathbb{N}}$ tel que $\rho_i(1+x^Nh)$ ait toutes ses valeurs propres de norme $1$ sur les $V_i^{\alpha}$ ($i=1,2$). On pose $h'=1+x^Nh \in H$. En co-trigonalisant $U_i$ et $\rho_i(h')$ sur $V_i^{\alpha}$, on en déduit: $$\det(1-TU_i\rho_i(h'))^{\alpha}=\det(1-TU_i\rho_i(h')_{|V_i^{\alpha}})$$ Si $Q \in 1+TK\{\{T\}\}$, $Q^{\alpha}$ désigne le polynôme $\in 1+TK[T]$ divisant $Q$ tel que le polygone de Newton de $Q/Q^{\alpha}$ n’a pas la pente $\alpha$. Ainsi, $\det(1-T\rho_1(h')U_1)=\det(1-T\rho_2(h')U_2)$ donne $$\det(1-T\rho_1(h'){U_1}_{|V_1^{\alpha}})= \det(1-T\rho_2(h'){U_2}_{|V_2^{\alpha}})$$ $\rho_i(h')U_i$ étant injectif sur $V_i^{\alpha}$, notons que cela implique que $\dim(V_1^{\alpha})=\dim(V_2^{\alpha})$, et que $\rho_1(h'){U_1}_{|V_1^{\alpha}}$ et $\rho_2(h'){U_2}_{|V_2^{\alpha}}$ ont même polynôme caractéristique. Ceci reste vrai pour les mêmes raisons en remplaçant $h'$ par $h'+\lambda$ pour (une infinité de) $\lambda \in K$ assez petit. Si $A$ et $B$ sont deux endomorphismes qui commutent d’un $K$-espace vectoriel de dimension finie $r$, avec $A$ inversible, la donnée de $$\det(X.1-A(B+Y.1))=\prod_{i=1}^r(X-a_iY-b_i) \in \overline{K}[X,Y]$$ permet de retrouver $\det(X.1-B)=\prod_{i=1}^r(X-(b_i/a_i))$ (ici $\overline{K}$ est une clôture algébrique de $K$). On en déduit $\det(T-\rho_1(h')_{|V_1^{\alpha}})=\det(T-\rho_2(h')_{|V_2^{\alpha}})$, puis la même chose en remplaçant $h'$ par $h$.
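Le lemme d’algèbre linéaire utilisé ci-dessus se vérifie en co-trigonalisant $A$ et $B$, de diagonales respectives $(\alpha_i)$ et $(\beta_i)$ (simple esquisse de calcul):

$$\det(X.1-A(B+Y.1))=\prod_{i=1}^r(X-\alpha_iY-\alpha_i\beta_i),$$

de sorte que $a_i=\alpha_i$, $b_i=\alpha_i\beta_i$, et l’on retrouve bien $\det(X.1-B)=\prod_{i=1}^r(X-\beta_i)=\prod_{i=1}^r(X-(b_i/a_i))$.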
Ainsi, $V_1^{\alpha}$ et $V_2^{\alpha}$ sont deux représentations de $H$ ayant les mêmes polynômes caractéristiques, on sait alors que leurs semi-simplifications sont isomorphes, ce qui conclut. $\square$ Terminons cette partie par une légère amélioration de \[semisimpl\]. Soient $E^i$, $i=1,2$, deux systèmes d’espaces de Banach, munis de représentations $\rho_i: H \rightarrow \textrm{End}(E^i)$ et d’endomorphismes compacts $U_i \in \textrm{Comp}(E^i)$, $U_i \in \rho_i(H)$ commutant à $\rho_i(H)$. $E^{i,\dagger}$ est muni d’une opération de $\rho_i(H)$ et $U_i$. Si $\alpha \in {\mathbb{R}}$ est fixé et $E_n^{i,\alpha}(U_i)$ désigne le sous-espace de $E^i_n$ sur lequel $U_i$ est de pente $\alpha$, $i_n$ induit pour tout $n$ assez grand une bijection $E_n^{i,\alpha}(U_i) \rightarrow E_{n+1}^{i,\alpha}(U_i)$ qui commute à $U_i$. Cet espace définit donc un sous-espace de dimension finie $E^{i,\alpha}(U_i)$ de $E^{i,\dagger}$ qui hérite d’une représentation de $H$. Ceci permet de définir à nouveau $\mathcal{X}_{U_i}(E^i)$ comme étant l’ensemble des représentations (comptées avec multiplicités) de $H$ apparaissant dans les semi-simplifications des $E^{i,\alpha}(U_i)$, $\alpha$ variant dans ${\mathbb{R}}$. \[corsemisimpl\] Sous ces hypothèses, supposons que pour tout $h \in H$, $\det(1-T\rho_1(h)U_1)=\det(1-T\rho_2(h)U_2) \in K\{\{T\}\}$. Alors $\mathcal{X}_{U_1}(E^1)=\mathcal{X}_{U_2}(E^2)$. [*Preuve:*]{} Fixons $\alpha \in {\mathbb{R}}$, $E^{i,\alpha}(U_i)$ est de dimension finie, $H$ agit donc sur $E^{1,\alpha}(U_1)\oplus E^{2,\alpha}(U_2)$ à travers un quotient de dimension finie. Considérons une sous-$K$-algèbre $H_0$ de $H$ de type fini sur $K$ engendrant toute l’image de $H$ dans ce quotient; pour $n$ assez grand, $H_0$ et $U_i$ agissent alors par endomorphismes sur $E^i_n$, $i=1,2$. On applique la proposition \[semisimpl\] avec $V_i=E^i_n$, il vient $\mathcal{X}_{U_1}(E^{1,\alpha}(U_1))=\mathcal{X}_{U_2}(E^{2,\alpha}(U_2))$, ce qui conclut.
$\square$ Un critère de densité {#critere} --------------------- On fixe $M$ un système de modules de Banach comme en §\[banachsystem\], $\textrm{dim}({\mathcal{W}})>0$, muni d’une action d’une ${\mathbb{C}}_p$-algèbre $H$, et d’un endomorphisme compact $U$ lui commutant, on notera $(M,H,U)$ une telle donnée. On rappelle que si $T/{\mathbb{C}}_p$ est un espace rigide, un sous-ensemble $X\subset T({\mathbb{C}}_p)$ est dit Zariski-dense si pour tout fermé analytique $F$ de $T$ tel que $X \subset F({\mathbb{C}}_p)$, alors $F({\mathbb{C}}_p)=T({\mathbb{C}}_p)$. Soit $X \subset {\mathcal{W}}({\mathbb{C}}_p)$ un sous-ensemble Zariski-dense, tel que pour tout $x \in X$ et tout voisinage ouvert affinoide irréductible $V$ de $x$ dans ${\mathcal{W}}$, $V({\mathbb{C}}_p) \cap X$ est Zariski-dense dans $V$. On dira alors que $X$ est [**très Zariski-dense dans ${\mathcal{W}}$**]{}. C’est par exemple le cas des points de la forme $\zeta(1+p)^k$, avec $\zeta^{p^m}=1$ et $k,m \in {\mathbb{N}}$, dans la boule ouverte de centre $1$ de rayon $1$ de ${\mathbb{C}}_p$. On se fixe de plus une “structure classique sur $X$”, on entendra par là la donnée pour tout $x \in X$ d’un sous-espace vectoriel de dimension finie $M_x^{cl}$ de $M_x^{\dagger}$ stable par l’action de $H$. Soit $\alpha \in {\mathbb{R}}$, on notera $M_{x}^{\leq \alpha}$ (resp. $M_x^{\alpha}$) le sous-espace de dimension finie de $M_x^{\dagger}$ sur lequel $U$ est de pente au plus $\alpha$ (resp. exactement $\alpha$). On fera de plus l’hypothèse de “contrôle” suivante: \[Cl\] $$(Cl) \, : \, \, \textrm{pour tout } \alpha \in {\mathbb{R}}, \, \textrm{ on a } M_x^{\leq \alpha} \subset M_x^{cl} \, \textrm{ pour presque tout } x \in X$$ \[det\] Soient $(M_1,H,U)$ et $(M_2,H,U)$ deux systèmes de modules de Banach sur ${\mathcal{W}}$ relativement factoriel. On se donne $X \subset {\mathcal{W}}({\mathbb{C}}_p)$ un ensemble très Zariski-dense, et une structure classique sur $X$ pour $M_1$ et $M_2$, chacune de ces structures satisfaisant (Cl).
Soit $h \in H$, supposons $$\forall x \in X, \, \, \, \, \, \det(1-ThU_{|M_{1,x}^{cl}})=\det(1-ThU_{|M_{2,x}^{cl}})\in {\mathbb{C}}_p[T]$$ Alors ${\textrm{Fred}}_{M_1}(hU)={\textrm{Fred}}_{M_2}(hU)$. [*Preuve:*]{} Soit $Z_i \subset {\mathcal{W}}\times {\mathbb{A}}^{1}$ l’hypersurface de Fredholm de $P_i:={\textrm{Fred}}_{M_i}(hU)$, $p_i$ la première projection $Z_i \rightarrow {\mathcal{W}}$. On dira que $z \in Z_i({\mathbb{C}}_p)$ est classique si $z=(x,\lambda)$ avec $x \in X$ et si $\lambda^{-1}$ est valeur propre de $hU$ sur $M_{i,x}^{cl}$. On montrera plus bas que les points classiques sont Zariski-denses dans $Z_i({\mathbb{C}}_p)$, admettons-le pour l’instant. Par hypothèse, $P_1$ s’annule sur les points classiques de $Z_2({\mathbb{C}}_p)$, donc sur $Z_2({\mathbb{C}}_p)$ par Zariski-densité. Par symétrie, il vient $Z_1^{{\textrm{red}}}=Z_2^{{\textrm{red}}}$ et on déduit de [@Con] 4.3.2 que $P_1$ et $P_2$ ont les mêmes facteurs irréductibles, il reste à prouver que ces derniers ont les mêmes multiplicités. Soit $\Pi$ un de ces facteurs irréductibles, de multiplicité $n_i$ dans $P_i$, $Z(\Pi)_i \subset Z_i$ la composante irréductible associée. Soit $W_i$ l’ouvert de $Z(\Pi)_i$ dont le complémentaire est l’ensemble des points de $Z(\Pi)_i$ qui sont dans au moins deux composantes irréductibles de $Z_i$. Admettons pour l’instant que l’on puisse trouver $x \in X$ et $z=(x,\lambda) \in W_1({\mathbb{C}}_p)=W_2({\mathbb{C}}_p)$ tels que $\lambda$ soit une racine de $\det(1-ThU_{|M_{i,x}^{cl}})$ mais pas de $P_i(x,T)/\det(1-ThU_{|M_{i,x}^{cl}})$. Par le choix de $W_i$, $\lambda$ est une racine de $P_i(x,T)$ qui est en fait une racine de $\Pi(x,T)$ mais pas des autres facteurs irréductibles. La multiplicité de $\lambda$ comme racine de $P_i(x,T)$ est donc de la forme $n_in$ où $n$ est la multiplicité de $\lambda$ comme racine de $\Pi(x,T)$. Mais, par le choix de $z$, $nn_i$ est aussi la multiplicité de $\lambda$ comme racine de $\det(1-ThU_{|M_{i,x}^{cl}})$, qui ne dépend pas de $i$. Ainsi, $n_1n=n_2n$, puis $n_1=n_2$.
Il reste à trouver un tel $z$ et à prouver que les points classiques sont Zariski-denses dans $Z_i({\mathbb{C}}_p)$. Par hypothèse sur ${\mathcal{W}}$ et [@Con] 4.3.2, les composantes irréductibles de $Z_i$ sont des hypersurfaces de Fredholm. Ces dernières étant d’image Zariski-ouverte dans ${\mathcal{W}}$, elles contiennent toutes des points d’image (par $p_i$) dans $X$. Soit $z_i$ un de ces points, appartenant à une composante irréductible $T_i$ de $Z_i$, et soit $\Omega_i \in \mathcal{C}(Z_i)$ contenant $z_i$. $\mathcal{C}(Z_i)$ désigne le recouvrement canonique de l’hypersurface de Fredholm $Z_i$ (voir la discussion au début de \[unicite\]). $\Omega_i$ étant fini et plat sur son image $V_i \subset {\mathcal{W}}$ (que l’on peut supposer irréductible), chacune de ses composantes irréductibles se surjecte sur $V_i$. $hU$ et $U$ sont des endomorphismes de $M_i(V_i,n)$ pour un $n$ assez grand que l’on fixe, et $P_i(T)_{|V_i}=\det(1-T{hU}_{|M_i(V_i,n)})\in 1+A(V_i)T\{\{T\}\}$. Par choix de $\Omega_i \in \mathcal{C}(Z_i)$ et un théorème de Coleman ([@eigen] §7.1, [@BMF] A.4.3), on peut trouver un sous-$A(V_i)$-module $N_i$ de $M_i(V_i,n)$ localement libre de rang fini, stable par $U$ et $hU$, tel que $\Omega_i$ soit le fermé des zéros de $\det(1-{ThU}_{|N_i})$ dans $V_i \times {\mathbb{A}}^1$. $hU$ est un endomorphisme inversible de $N_i$, ainsi donc que $U$, et ils sont automatiquement continus ([@BGR] 3.7.3 proposition 2). En particulier, les valeurs propres de $U^{-1}$ sur les $N_{i,x}$, $x \in V_i({\mathbb{C}}_p)$, sont bornées par une constante ne dépendant que des $V_i$. Ceci et $(Cl)$ impliquent que pour tout $x \in X \cap V_i({\mathbb{C}}_p)$ sauf peut-être un nombre fini d’entre eux, $\Omega_i({\mathbb{C}}_p) \cap p_i^{-1}(\{x\})$ est composé de points classiques.
$X\cap V_i({\mathbb{C}}_p)$ étant Zariski-dense dans $V_i({\mathbb{C}}_p)$, et $\Omega_i \rightarrow V_i$ étant fini et plat, on en déduit que les points classiques sont Zariski-denses dans chaque composante irréductible de $\Omega_i$, et en particulier dans $Z_i({\mathbb{C}}_p)$ et $T_i({\mathbb{C}}_p)$ ([@Con] 2.2.3). Plus exactement, on en déduit que l’ensemble des points $z=(x,\lambda)$ tels que $M_{i,x}^{hU=\lambda^{-1}} \subset M_{i,x}^{cl}$ est Zariski-dense dans $T_i({\mathbb{C}}_p)$. $M_{i,x}^{hU=\lambda^{-1}}$ désigne ici le sous-espace de $M_{i,x}$ qui est l’espace caractéristique pour la valeur propre $\lambda^{-1}$ de l’endomorphisme compact $hU$. Appliquons cela à $T_i:=Z(\Pi)_i$: $\Omega_i \cap W_i$ est un ouvert de $\Omega_i$ et contient donc des points du type précédent. Si $(x,\lambda)$ en est un, $\lambda$ est une racine de $\det(1-ThU_{|M_{i,x}^{cl}})$ mais pas de $P_i(x,T)/\det(1-ThU_{|M_{i,x}^{cl}})$, c’est juste ce qui nous manquait pour conclure. $\square$ La correspondance de Jacquet-Langlands “$p$-adique” =================================================== Rappels sur la correspondance classique {#JLclass} --------------------------------------- Si $M \in {\mathbb{N}}$, on pose : $$K_1(M)=\{ g \in GL_2(\widehat{{\mathbb{Z}}}), \, \, g \equiv \left( \begin{array}{cc} * & * \\ 0 & 1 \end{array} \right)\,\bmod M\},$$ $$K_0(M)=\{ g \in GL_2(\widehat{{\mathbb{Z}}}), \, \, g \equiv \left( \begin{array}{cc} * & * \\ 0 & * \end{array} \right)\,\bmod M \}$$ On pourra voir, par l’étoile inférieure droite, les caractères complexes de $({\mathbb{Z}}/M{\mathbb{Z}})^*$ comme des caractères de $K_0(M)$. On fixe $\varepsilon$ un tel caractère, ainsi qu’un entier $k \geq 2$.
Le ${\mathbb{C}}$-espace vectoriel $S_k(M,\varepsilon)$ des formes modulaires paraboliques de poids $k$, de niveau $M$, de caractère $\varepsilon: ({\mathbb{Z}}/M{\mathbb{Z}})^* \rightarrow {\mathbb{C}}$ s’identifie à l’espace des fonctions complexes sur $GL_2({\mathbb{Q}})\backslash GL_2({\mathbb{A}})/K_1(M)$, de caractère central $|.|^{2-k}\varepsilon^{-1}$, de caractère $\varepsilon^{-1}$ sous $K_0(M)$, satisfaisant les conditions usuelles de croissance aux pointes et holomorphie, en associant à $f$ (voir par exemple [@hida] §3.1 pour des détails) [^3] $$g=(g_{\infty},g_f) \in GL_2^{+}({\mathbb{R}}) \times K_0(M) \mapsto |\det(g)|^{1-k/2}f_{|_k g_{\infty}}(i)\varepsilon^{-1}(g_f)$$ Cette identification préserve l’action des opérateurs de Hecke usuels (non renormalisés du côté adélique). Soit $l$ premier, l’opérateur de Hecke $T_l$ si $(l,M)=1$, $U_l$ sinon, est donné par la double classe $K_0(M){\textrm{diag}}(l,1)K_0(M)$; si $(l,M)=1$, $S_l=l^{k-2}\varepsilon(l)$ est l’action de ${\textrm{diag}}(l,l)$ (ici, on voit ${\textrm{diag}}(a,b) \in GL_2({\mathbb{A}})$ partout trivial sauf en $l$ où il vaut effectivement ${\textrm{diag}}(a,b)$). On notera $S_k(M,\varepsilon)^{d-new}$ le sous-espace de $S_k(M,\varepsilon)$ composé des formes $d$-nouvelles au sens usuel. On fixe un entier $d$ sans facteur carré, ayant un nombre impair de diviseurs premiers, tel que $d{\, |\!| \,}M$ et que $\varepsilon$ soit trivial sur $({\mathbb{Z}}/d{\mathbb{Z}})^*$. On reconsidère $D({\mathbb{Q}})$ l’algèbre de quaternions de discriminant $d$ introduite en \[quat\]. $K_1(M)$ (resp. $K_0(M)$) est le compact ouvert de $D^*({\mathbb{A}}_f)$ décomposé place par place, qui vaut $D({\mathbb{Z}}_l)^*$ aux places $l|d$, $\varphi^{-1}(K_1(M))$ (resp. $\varphi^{-1}(K_0(M))$) hors de $d$ (cf. \[quat\]), noter que $K_1(M) \neq U_1(M)$.
On note $S_k^{*,D}(M,\varepsilon)$ le ${\mathbb{C}}$-espace vectoriel des fonctions complexes sur $D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}})/K_1(M)$, de caractère $\varepsilon^{-1}$ sous $K_0(M)$, de caractère central $|\det(g)|^{2-k} \varepsilon^{-1}$, engendrant sous $D^*({\mathbb{R}})$ un multiple du dual de la représentation $\textrm{Sym}^{k-2}({\mathbb{C}}^2)$. Si $k=2$ et $\varepsilon=1$, on note $S$ la droite des fonctions constantes sur $D^*({\mathbb{A}})$ dans $S_2^{*,D}(M,1)$, elle est stable par $D^*({\mathbb{A}})$. (Arthur, Jacquet, Langlands) \[JL\] Si $k>2$ ou $\varepsilon\neq 1$, les ${\mathbb{C}}$-espaces vectoriels $S_k^{*,D}(M,\varepsilon)$ et $S_k(M,\varepsilon)^{d-new}$ sont isomorphes en tant que modules sous l’algèbre de Hecke de $GL_2({\mathbb{A}}_f^{(d)})$. Si $k=2$, $\varepsilon=1$, c’est encore vrai en remplaçant $S_2^{*,D}(M,1)$ par $S_2^{*,D}(M,1)/S$. L’action des opérateurs de Hecke dans cette correspondance se précise de plus en $l|d$, en faisant correspondre à $U_l$, l’opérateur de Hecke de $S_k^{*,D}(M,\varepsilon)$ donné par la double classe $K_0(M)\pi_lK_0(M)$, où $\pi_l \in D^*({\mathbb{A}}_f)$ est partout trivial sauf en $l$ où il vaut une (quelconque) uniformisante de $D({\mathbb{Z}}_l)$. On notera encore $U_l$ cet opérateur de Hecke pour $D^*$. Il se trouve que nous n’allons pas considérer exactement l’espace $S_k^{*,D}(M,\varepsilon)$, mais un autre légèrement différent (ce qui explique la notation $*$ provisoire), qui lui est isomorphe comme module sous l’algèbre de Hecke. Soit $$\omega_M:= \left( \begin{array}{cc} 0 & 1 \\ M & 0 \end{array} \right) \in GL_2({\mathbb{Q}})$$ on le voit comme un élément de $D^*({\mathbb{A}})$ trivial aux places divisant $d$ et à l’infini, diagonalement $\omega_M$ dans $D^*({\mathbb{A}}_f^{(d)})=GL_2({\mathbb{A}}_f^{(d)})$. $\omega_M$ normalise $K_0(M)$ et agit par ${\textrm{diag}}(a,b) \mapsto {\textrm{diag}}(b,a)$ sur le tore diagonal de $GL_2({\mathbb{A}}_f^{(d)})$. 
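Explicitons ce dernier point par le calcul matriciel suivant (vérification immédiate):

$$\omega_M \left( \begin{array}{cc} a & 0 \\ 0 & b \end{array} \right) \omega_M^{-1}= \left( \begin{array}{cc} 0 & 1 \\ M & 0 \end{array} \right)\left( \begin{array}{cc} a & 0 \\ 0 & b \end{array} \right)\left( \begin{array}{cc} 0 & 1/M \\ 1 & 0 \end{array} \right)=\left( \begin{array}{cc} b & 0 \\ 0 & a \end{array} \right),$$

ce qui échange en particulier les doubles classes de ${\textrm{diag}}(l,1)$ et de ${\textrm{diag}}(1,l)$.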
L’application $f \mapsto \omega_M.f $ induit un isomorphisme ${\mathbb{C}}$-linéaire de $S_k^{*,D}(M,\varepsilon)$ sur l’espace des fonctions complexes sur $D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}})/U_1(M)$ de caractère $\varepsilon^{-1}$ sous $U_0(M)$, de caractère central $|\det(g)|^{2-k}\varepsilon^{-1}$, engendrant sous $D^*({\mathbb{R}})$ un multiple du dual de $\textrm{Sym}^{k-2}({\mathbb{C}}^2)$. On note $S^{D}_k(M,\varepsilon)$ ce ${\mathbb{C}}$-espace vectoriel muni de l’action de l’algèbre de Hecke obtenue par transport. Explicitement, si $l$ est premier ne divisant pas $d$, l’opérateur de Hecke $T_l$ si $(l,M)=1$, $U_l$ sinon, est donné par la double classe $U_0(M){\textrm{diag}}(1,l)U_0(M)$; si $(l,M)=1$, $S_l=l^{k-2}\varepsilon(l)$ est l’action de ${\textrm{diag}}(l,l)$, $U_l$ est inchangé si $l|d$. Comme en \[formes\], ${\mathcal{H}}$ désigne la ${\mathbb{Z}}$-algèbre de polynômes sur les lettres $T_l$, $S_l$ si $l$ premier ne divise pas $M$, $U_l$ si $l|M$. Par ce que l’on vient de dire plus haut, $S_k^D(M,\varepsilon)$ et $S_k(M,\varepsilon)^{d-new}$ sont deux ${\mathcal{H}}$-modules isomorphes, avec la même exception pour $k=2$ que dans \[JL\]. Structures $p$-adiques des espaces de formes classiques {#pstructure} ------------------------------------------------------- ${}^{}$ Nous allons introduire une ${\mathbb{Q}}_p$-structure sur les espaces $S_k^{D}(M,\varepsilon)$ et $S_k(M,\varepsilon)$. On fixe pour cela $p$ premier, $\iota: {\mathbb{C}}_p \rightarrow {\mathbb{C}}$ un isomorphisme de corps, $K \subset {\mathbb{C}}_p$ un sous-corps complet, et $M=Npd$ avec $N,p$ et $d$ comme en §\[formes\]. La courbe modulaire $X_1(M)$ a une structure naturelle sur ${\mathbb{Q}}_p$, préservée par les correspondances de Hecke. Soit $S_k(M,\varepsilon,K)$ le sous $K$-espace vectoriel de $H^0(X_1(M)/K,\omega^k)$ composé des formes paraboliques de caractère $\varepsilon$. 
Notons que par “$GAGA$ rigide analytique”, ce ${\mathcal{H}}$-module est canoniquement isomorphe à son analogue sur $X_1(M)^{rig}/K$. La formation du ${\mathcal{H}}$-module $S_k(M,\varepsilon,K)$ commute à l’extension des scalaires sur $K$ et la donnée de $\iota$ identifie $S_k(M,\varepsilon,{\mathbb{C}}_p)$ et $S_k(M,\varepsilon)$. Soit $S^{D}_k(M,\varepsilon,K)$ le $K$-espace vectoriel des fonctions $f: D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}}_f) \rightarrow \textrm{Sym}^{k-2}(K^2)$ satisfaisant $$f(xu)=\varepsilon(u)^{-1}u_p^{-1}f(x), \, \, \, \, \forall (x,u) \in D^*({\mathbb{A}}_f)\times U_0(M)$$ c’est un ${\mathcal{H}}$-module de manière naturelle. La formation du ${\mathcal{H}}$-module $S_k^D(M,\varepsilon,K)$ commute à l’extension des scalaires sur $K$ et la donnée de $\iota$ identifie $S_k^D(M,\varepsilon,{\mathbb{C}}_p)$ et $S_k^D(M,\varepsilon)$, on rappelle comment dans ce qui suit. A une fonction $f \in S_k^{D}(M,\varepsilon)$ est associé par définition un morphisme $D^*({\mathbb{R}})$-équivariant $\varphi_f$ de $\textrm{Sym}^{k-2}({\mathbb{C}}^2)^*$ vers l’espace des fonctions complexes sur $D^*({\mathbb{Q}})\backslash D^*({\mathbb{A}})/U_1(M)$. Si $x \in D^*({\mathbb{A}})$, $$v \mapsto \varphi_f(v)(x)$$ définit par dualité un unique élément $F_f(x) \in \textrm{Sym}^{k-2}({\mathbb{C}}^2)$. On considère alors l’application qui à $f$ associe l’élément de $S_k^{D}(M,\varepsilon,{\mathbb{C}}_p)$ défini par $$x_f \mapsto x_p^{-1}\iota^{-1}(F_f(1 \times x_f))$$ c’est l’isomorphisme cherché comme on le vérifie immédiatement. Si $S_k(M,\varepsilon,{\mathbb{Q}}_p)^{d-new}$ désigne le sous-espace de $S_k(M,\varepsilon,{\mathbb{Q}}_p)$ composé des formes nouvelles en $d$ de $S_k(M,\varepsilon,{\mathbb{Q}}_p)$, il vient: Si $k>2$ ou $\varepsilon \neq 1$, les ${\mathcal{H}}\otimes_{{\mathbb{Z}}}{\mathbb{Q}}_p$-modules $S_k(M,\varepsilon,{\mathbb{Q}}_p)^{d-new}$ et $S_k^{D}(M,\varepsilon,{\mathbb{Q}}_p)$ sont isomorphes.
$S_2(M,1,{\mathbb{Q}}_p)$ est ${\mathcal{H}}$-isomorphe au quotient de $S_2^{D}(M,1,{\mathbb{Q}}_p)$ par la droite des fonctions constantes. La correspondance $p$-adique à poids-caractère fixé {#cla} --------------------------------------------------- On reprend les notations du §\[formes\], et on s’intéresse aux systèmes de modules de Banach sur ${\mathcal{W}}$, $$M^1:={F^{0,\varepsilon,d}}\, \, \, \, et \, \, \, \, M^2:={F^{D,\varepsilon}}$$ Comme on l’a vu, ils sont munis d’une action de l’algèbre $H:=\mathcal{H}$, et $U_i:=\rho_i(U_p) \in \textrm{Comp}(M^i)$ commute à $\rho_i(H)$. On pose $X:=\{ \zeta(1+p)^k, k\geq 2, \zeta \in \mu_{p^{\infty}} \} \subset {\mathcal{W}}({\mathbb{C}}_p)$, il est très Zariski-dense dans ${\mathcal{W}}$. On munit $M^1$ et $M^2$ de structures classiques sur $X$ en posant pour $M^{i,cl}_{\zeta (1+p)^k}$, $k\geq 3$, $\zeta^{p^m}=1$, - le sous-espace de $F_{\zeta(1+p)^k,n}^{0,\varepsilon,d}$, avec $n$ quelconque tel que $n \geq m-1$, des restrictions à $X_1(Np^m,d)(v_n)$ des sections de $\omega^k$ convergentes sur tout $X_1(Np^m,d)/{\mathbb{C}}_p$, s’annulant à [*toutes*]{} les pointes de $X_1(Np^m,d)/{\mathbb{C}}_p$, si $i=1$. - le sous-espace $S_k^D(Np^m,\varepsilon\chi\tau^{-k},{\mathbb{C}}_p)$ de $F_{\zeta (1+p)^k,n}^{D,\varepsilon}$, $n \geq m-1$ quelconque, défini à la fin du paragraphe \[quat\]. Ce sont bien des structures classiques, satisfaisant $(Cl)$ en $(1+p)^k\zeta$ dès que $\alpha < k-1 $ par les assertions connues de classicité en pente petite devant le poids ([@Col2] §8, [@Col3] 1.1,[@Buz2] §4). Il faut noter qu’une forme modulaire de poids $k$ sur $X_1(Np^m,d)$ qui est propre pour $U_p$ de pente strictement inférieure à $k\!-\!1$, et qui s’annule en toutes les pointes de $Z_1(Np^m,d)(0)$, est en fait parabolique. On est alors dans les hypothèses de la proposition \[det\] par le théorème \[JL\]. 
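Précisons, sous forme d’esquisse, le critère de classicité invoqué ci-dessus ([@Col2] §8, [@Col3] 1.1, [@Buz2] §4): pour $f$ propre sous $U_p$ dans l’un ou l’autre des deux espaces,

$$U_pf=\lambda f \quad \textrm{et} \quad v_p(\lambda)<k-1 \, \, \Longrightarrow \, \, f \in M^{i,cl}_{\zeta(1+p)^k},$$

ce qui fournit la propriété $(Cl)$ au point $\zeta(1+p)^k$ pour toute pente $\alpha<k-1$.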
On en déduit le \[seriecaracteristique\] Soit $h \in {\mathcal{H}}$, alors ${\textrm{Fred}}_{F^{0,\varepsilon,d}}(hU_p)={\textrm{Fred}}_{F^{D,\varepsilon}}(hU_p)$. [*Remarques:*]{} - Il est aisé de voir que le membre de droite de cette égalité est en fait dans $1+T\Lambda\{\{T\}\}$, ainsi donc que le premier. - On aurait pu se restreindre, dans notre choix de l’ensemble $X$, à celui de $\{(1+p)^k, k\geq 3\}$ pour obtenir le même résultat \[seriecaracteristique\]. Via le théorème \[jlpfamille\] (qui n’utilise que le résultat de \[seriecaracteristique\]), on peut alors déduire de la propriété $(Cl)$ pour les formes quaternioniques, et de [@Col2], une nouvelle preuve du théorème de contrôle de [@Col3]. Soit $\kappa \in {\mathcal{W}}({\mathbb{C}}_p)$, on dispose de deux systèmes d’espaces de Banach $F^{0,\varepsilon,d}_{\kappa}$ et $F^{D,\varepsilon}_{\kappa}$, munis de représentations de ${\mathcal{H}}$. Le corollaire \[corsemisimpl\] implique le \[jlp\] $\mathcal{X}_{U_p}({F^{0,\varepsilon,d}})=\mathcal{X}_{U_p}({F^{D,\varepsilon}})$ Autrement dit, l’espace des formes modulaires paraboliques surconvergentes de pente finie, de poids-caractère $\kappa$, de niveau modéré $Nd$, nouvelles en $d$, de caractère $\varepsilon$ et l’espace des formes modulaires quaternioniques $p$-adiques de pente finie, poids-caractère $\kappa$, de niveau modéré $N$, de caractère $\varepsilon$, ont même semi-simplification comme ${\mathcal{H}}$-module. La correspondance en familles $p$-adiques ========================================= Une propriété d’unicité pour les variétés spectrales {#unicite} ---------------------------------------------------- Soit ${\mathcal{W}}/{\mathbb{C}}_p$ un espace rigide réduit, soit $M$ la donnée d’un système de modules de Banach orthonormalisables sur ${\mathcal{W}}$, muni d’une représentation d’une ${\mathbb{C}}_p$-algèbre commutative $\rho: H \rightarrow \textrm{End}(M)$, et d’un $U \in H$ tel que $\rho(U) \in \textrm{Comp}(M)$.
A une telle donnée $(M,U,H)$, on peut attacher la série de Fredholm ${\textrm{Fred}}_M(U) \in 1+TA({\mathcal{W}})\{\{T\}\}$, et l’hypersurface de Fredholm associée $Z \subset {\mathcal{W}}\times {\mathbb{A}}^1$ définie par ${\textrm{Fred}}_M(U)=0$, munie de ses deux projections $(pr_1,pr_2): Z \rightarrow {\mathcal{W}}\times {\mathbb{A}}^1$. On sait alors que $Z$ a un recouvrement admissible canonique $\mathcal{C}:=\mathcal{C}({\textrm{Fred}}_M(U))$, composé des ouverts affinoides $\Omega \subset Z$ finis et plats sur leur image $pr_1(\Omega)$, et ouverts fermés dans $pr_1^{-1}(pr_1(\Omega))$ ([@BMF] A5.8, ce qui suffit pour nos applications, [@Buz1] §4 pour le cas général). On sait de plus construire par recollement à l’aide de $\mathcal{C}$, un espace rigide $D$ ([@eigen] §7 [^4] puis [@ch] §6), la “variété spectrale attachée à $(M,U,H)$”, muni d’un morphisme fini $\pi: D \rightarrow Z$. On dispose de plus d’un morphisme d’anneaux $a: H \rightarrow A(D)$, ainsi que d’un diagramme commutatif: $$\xymatrix{ D \ar@{->}^{\kappa}[dd] \ar@{->}[dr]_{\pi} \ar@{->}[drr]^{a(U)^{-1}} \\ & Z \ar@{->}_{pr_2}[r] \ar@{->}^{pr_1}[dl] & {\mathbb{A}}_{rig}^1 \\ {\mathcal{W}}}$$ $H$, $U \in H$ et ${\mathcal{W}}$ étant fixés, on note $\mathcal{E}$ la catégorie dont les objets sont les couples $(\pi,a)$ formés d’un morphisme d’espaces rigides $\pi: D \rightarrow Z$ au dessus de ${\mathcal{W}}$, ainsi que d’un morphisme d’anneaux $a: H \rightarrow A(D)$ tel que $a(U)^{-1}=pr_2\circ \pi$. Les morphismes $\textrm{Hom}((\pi_1,a_1),(\pi_2,a_2))$ sont ceux $(\varphi_D, \varphi_Z): \pi_1 \rightarrow \pi_2$ au dessus de ${\mathcal{W}}$ tels que $\forall h \in H, a_2(h) \circ \varphi_D= a_1(h)$. Si $X=(\pi,a)$ est un objet de $\mathcal{E}$, on notera $D(X)$ (resp. $Z(X)$) l’espace rigide $D$ source (resp. $Z$ but) de $\pi$, $a(X):=a$, $\pi(X):=\pi$. Si $(M,U,H)$ est comme plus haut, on notera $\mathcal{E}(M)$ l’objet de $\mathcal{E}$ associé à $M$ par la construction précédente.
On notera de plus $\mathcal{E}(M)^{{\textrm{red}}}$, la réduction de $\mathcal{E}(M)$, l’objet $(\pi^{{\textrm{red}}},a^{{\textrm{red}}})$ de $\mathcal{E}$, défini par $$\pi^{{\textrm{red}}}: D^{{\textrm{red}}} \overset{can}{\hookrightarrow} D \overset{\pi}{\longrightarrow} Z , \, \, \, a^{{\textrm{red}}}: H \overset{a}{\longrightarrow}A(D) \overset{can}{\longrightarrow} A(D)/\textrm{Nilrad}(A(D))$$ \[spectral\]Soient $(M^1,U,H)$ et $(M^2,U,H)$ comme plus haut, on suppose $$\forall h \in H, {\textrm{Fred}}_{M^1}(hU)={\textrm{Fred}}_{M^2}(hU)$$ Alors $\mathcal{E}(M^1)^{{\textrm{red}}}$ et $\mathcal{E}(M^2)^{{\textrm{red}}}$ sont canoniquement isomorphes. [*Preuve:*]{} Par hypothèse, $Z(\mathcal{E}(M^1))=Z(\mathcal{E}(M^2))$, on la note $Z$, elle est munie de sa première projection $pr_1: Z \rightarrow {\mathcal{W}}$, on pose de plus $D_i=D(\mathcal{E}(M^i))$, $\mathcal{E}(M^i)=(\pi_i,a_i)$. Soit $\mathcal{C}$ le recouvrement canonique de $Z$, $\Omega \in \mathcal{C}$, $V:=pr_1(\Omega)$ est un ouvert affinoide de ${\mathcal{W}}$, et par hypothèse $\Omega$ est un ouvert fermé de $pr_1^{-1}(V)$, fini et plat sur $V$. Pour $n$ assez grand, $M^i(V,n)$ contient alors un sous-$A(V)$-module localement libre de rang fini, “indépendant de $n$”, dont on note $M^i(\Omega)$ l’image dans $M^i(V)^{\dagger}$. $M^i(\Omega)$ hérite d’une action de $H$, $\rho_i(U)$ ayant pour polynôme caractéristique le polynôme associé à la donnée de $\Omega$ (cf. par exemple [@ch] 6.3.3). Soit $H_V:=H \otimes_{{\mathbb{C}}_p} A(V)$, par construction $D_i(\Omega)$ est l’affinoide d’algèbre l’image de $H_V$ dans ${\textrm{End}}_{A(V)}(M^i(\Omega))$. Soit $h \in H$, montrons que si $P_{i,h}(X):=\det(h_{|M^i(\Omega)}-X.1)$, alors $P_{1,h}(X)=P_{2,h}(X)$. $V \subset {\mathcal{W}}$ étant réduit, il suffit de le faire après évaluation en tout $x \in V({\mathbb{C}}_p)$.
Soit $x \in V({\mathbb{C}}_p)$, alors le corollaire \[corsemisimpl\] s’applique à $(H,U)$ agissant sur les systèmes de modules de Banach $M^i_x$, on en conclut que les mêmes caractères de $H$ apparaissent dans les semi-simplifications de ces deux espaces (avec multiplicités). $M^i(\Omega)_x$ est par définition le sous-espace de $\bigcap_{n\in {\mathbb{N}}} M^i_{x,n}$ sur lequel $\rho_i(U)$ a ses valeurs propres d’inverse dans $pr_1^{-1}(\{x\})\cap \Omega$, et ces dernières ne dépendent pas de $i$, la série caractéristique de $U$ n’en dépendant pas. Il vient donc $P_{1,h}(X)(x)=P_{2,h}(X)(x)$, puis $P_{1,h}(X)=P_{2,h}(X)$. Si plus généralement $h \in H_V$, on a encore $P_{1,h}(X)=P_{2,h}(X)$ car par l’argument précédent on a cette égalité après évaluation en tout $x \in V({\mathbb{C}}_p)$. Soit $I_i$ l’idéal de $H_V$ noyau du morphisme $H_V \rightarrow {\textrm{End}}_{A(V)}(M^i(\Omega))$, prouvons que $\sqrt{I_1}$=$\sqrt{I_2}$. Soit $h \in I_1$, alors $P_{1,h}(X)=X^d$, $d$ étant le rang de $M^1(\Omega)$ (égal au rang de $M^2(\Omega)$), on en déduit que $P_{2,h}(X)=X^d$, puis que $h^d \in I_2$ par Cayley-Hamilton. Il vient $\sqrt{I_1} \subset \sqrt{I_2}$, puis par symétrie $\sqrt{I_1}=\sqrt{I_2}$, ce que l’on voulait. On en déduit l’existence d’un isomorphisme d’anneaux $H_V$-linéaire: $\varphi^*(\Omega): A(D_2(\Omega)^{{\textrm{red}}}) \rightarrow A(D_1(\Omega)^{{\textrm{red}}})$. Un tel morphisme $H_V$-linéaire est nécessairement unique s’il existe, il est au dessus de $A(\Omega)$. Soit alors $\varphi(\Omega) : D_1(\Omega)^{{\textrm{red}}} \rightarrow D_2(\Omega)^{{\textrm{red}}}$ l’isomorphisme induit au dessus de $\Omega$. Vérifions que si $\Omega' \subset \Omega \in \mathcal{C}$, $\varphi(\Omega)$ envoie $D_1(\Omega')^{{\textrm{red}}}$ dans $D_2(\Omega')^{{\textrm{red}}}$.
Let $V=pr_1(\Omega) \subset {\mathcal{W}}$ and $V'=pr_1(\Omega')$; then $\Omega_{V'}:=\Omega \cap pr_1^{-1}(V') \in \mathcal{C}$, and $D_i(\Omega_{V'})^{{\textrm{red}}}$ is the open subset $D_i(\Omega)^{{\textrm{red}}} \times_V V'$ of $D_i(\Omega)^{{\textrm{red}}}$. The map $\varphi(\Omega)$ induces an $H_{V'}$-isomorphism $D_1(\Omega_{V'})^{{\textrm{red}}}=D_1(\Omega)^{{\textrm{red}}}\times_V V' \rightarrow D_2(\Omega)^{{\textrm{red}}}\times_V V'=D_2(\Omega_{V'})^{{\textrm{red}}}$, which settles the case $\Omega'=\Omega_{V'}$. It remains to treat the case $V=V'$. We have $D_i(\Omega)^{{\textrm{red}}}=D_i(\Omega')^{{\textrm{red}}}\coprod D_i(\Omega \backslash \Omega')^{{\textrm{red}}}$. Since $\varphi(\Omega)$ is $A(\Omega)$- and $H_V$-linear on functions, it maps $D_1(\Omega')^{{\textrm{red}}}$ isomorphically onto $D_2(\Omega')^{{\textrm{red}}}$, over $H_V$ and $A(\Omega')$. By uniqueness of such a morphism, the $\varphi(\Omega)$ glue to an isomorphism $D_1^{{\textrm{red}}} \rightarrow D_2^{{\textrm{red}}}$ over $Z$, by [@BGR] 9.3.3/1. $\square$

[*Remark:*]{} It is of course false in general that, under the hypotheses of Proposition \[spectral\], $M^1$ and $M^2$ are isomorphic as $H$-modules; \[spectral\] is the natural generalization of \[semisimpl\]. We can now state a general result combining \[det\], \[spectral\] and \[corsemisimpl\].
Fix ${\mathcal{W}}$ reduced, of dimension $>0$ and relatively factorial, and let $(M^1,H,U)$ and $(M^2,H,U)$ be systems of Banach $H$-modules over ${\mathcal{W}}$, equipped with classical structures on a very Zariski-dense subset $X \subset {\mathcal{W}}({\mathbb{C}}_p)$ as in \[critere\], satisfying (Cl):

\[general\] Suppose that for every $h \in H$ and every $x \in X$, $$\det(1-ThU_{|M_x^{1,cl}})=\det(1-ThU_{|M_x^{2,cl}}) \in {\mathbb{C}}_p[T].$$ Then,

- ${\textrm{Fred}}_{M^1}(hU)={\textrm{Fred}}_{M^2}(hU) \in 1+TA({\mathcal{W}})\{\{T\}\}$,

- $\mathcal{E}(M^1)$ is canonically isomorphic to $\mathcal{E}(M^2)$,

- for every $x \in {\mathcal{W}}({\mathbb{C}}_p)$, $\mathcal{X}_U(M^1_x)=\mathcal{X}_U(M^2_x)$.

To close this paragraph, we state a criterion on $(M,U,H)$ ensuring that $D(\mathcal{E}(M))$ is reduced. This part may be skipped on first reading; it appears here for lack of a satisfactory reference. We make the following hypotheses on ${\mathcal{W}}$: ${\mathcal{W}}$ is relatively factorial ([@Con] §4), of dimension $>0$, and for every $x \in {\mathcal{W}}({\mathbb{C}}_p)$ the completed local ring $\widehat{{\mathcal{O}}_{{\mathcal{W}},x}}$ is a domain. We assume moreover given a very Zariski-dense subset $X \subset {\mathcal{W}}({\mathbb{C}}_p)$ and a classical structure on $X$ in the sense of §\[critere\], satisfying $(Cl)$. We finally make the following "multiplicity one" type hypothesis: $$\textrm{"Let $\alpha \in {\mathbb{R}}$; for almost every $x \in X$, $H$ acts semi-simply on $M_x^{cl}\cap M_x^{\leq \alpha}$."}$$

\[reduit\] Under these hypotheses, $D(\mathcal{E}(M))$ is reduced.

[*Recollections:*]{} If $A$ is an affinoid algebra and $x \in \textrm{Specmax}(A)$, we write $A_x$ (resp. $A^{rig}_x$) for the Zariski (resp. rigid) local ring at $x$ ([@BGR] 7.3.2). Both are local noetherian, their completions are canonically isomorphic, and $A_x$, $A_x^{rig}$ and $\widehat{A_x}$ are simultaneously reduced ([@BGR] 7.3.2/8).
Note that if $A \rightarrow A'$ is a flat morphism of noetherian rings and $A'$ has no embedded associated primes, then the same holds for $A$. This applies in particular to $A_x \rightarrow A_x^{rig} \rightarrow \widehat{A_x^{rig}}=\widehat{A_x}$. For example, if all the $A^{rig}_x$ have no embedded associated components, then $A$ has no embedded associated component. Finally, for a noetherian ring $A$ without embedded associated components to be reduced, it suffices that, for some set $\{x_1,...,x_n\} \subset \textrm{Specmax}(A)$ such that every irreducible component of $\textrm{Spec}(A)$ contains one of the $x_i$, each of the $A_{x_i}$ be reduced. Indeed, under these hypotheses the canonical map $A \rightarrow \oplus_{i=1}^n A_{x_i}$ is injective. In particular, if such a ring $A$ has irreducible spectrum, then either it is reduced, or none of the $A_x$, $x \in \textrm{Specmax}(A)$, is reduced.

[*Proof:*]{} We keep the notation of the beginning of §\[unicite\]. A point $z \in D({\mathbb{C}}_p)$ will be called classical if $\kappa(z) \in X$ and if the character of $H$ obtained by evaluation at $z$ on $D({\mathbb{C}}_p)$ occurs in the semi-simplification of the $H$-module $M_{\kappa(z)}^{cl}$. The classical points are then very Zariski-dense, because ${\mathcal{W}}$ is relatively factorial of dimension $>0$, and by (Cl). Let $\Omega \in \mathcal{C}$ be such that $D(\Omega)$ contains a classical point; it then contains a Zariski-dense set of them. Let $M(\Omega)$ be the finite projective $A(V)$-module attached to $\Omega$ as in the proof of \[spectral\]. Let $u \in A(D(V))\subset {\textrm{End}}_{A(V)}(M(\Omega))$ be nilpotent; by the multiplicity one hypothesis, the evaluations of $u$ vanish at almost every $x$ in $X\cap V({\mathbb{C}}_p)$, hence $u$ is zero; $D(V)$ is therefore reduced as soon as it contains a classical point.
Now let $\Omega \in \mathcal{C}$ be arbitrary; let us show that the completed local rings at the closed points of $D(\Omega)$ have no embedded associated components. This is in fact a general property of sub-$A$-algebras $B$ of ${\textrm{End}}_A(P)$, where $A$ is a noetherian domain such that the $\widehat{A_m}$, $m \in \textrm{Max}(A)$, are domains, and $P$ is finite projective over $A$. Indeed, fix $m \in \textrm{Max}(A)$ and set $A':=\widehat{A_m}$. The flatness of $A \rightarrow A'$ implies that $B_m:=B \otimes_A A'$ is canonically isomorphic to its image in ${\textrm{End}}_{A'}(P\otimes_A A')\simeq M_r(A')$, where $r:=\textrm{rg}_A(P)$. Since $A'$ is henselian, $B_m$ is a product of local algebras $B_m^i$, finite over $A'$ and $A'$-torsion-free, being contained in $M_r(A')$. In particular, if $Q$ is an associated prime of $B_m^i$, then $Q\cap A'$ is a prime of $A'$ of the same height as $Q$ which annihilates an element of $B_m^i$, so $Q\cap A'=0$ and this common height is zero, as desired. Since the $D(\Omega)$ cover $D$ admissibly, all the ${\mathcal{O}}_{D,x}^{rig}$, $x \in D$, have no embedded associated components, and we conclude by the following lemma. $\square$

\[reduc\] Let $X/{\mathbb{C}}_p$ be a rigid space whose local rings have no embedded associated components and which admits a Zariski-dense set of points $x$ such that ${\mathcal{O}}_{X,x}^{rig}$ is reduced; then $X$ is reduced.

[*Proof:*]{} Let $X^0$ be the set of points $x$ of $X$ lying on a single irreducible component of $X$; $X^0$ is an admissible open of $X$. Let ${\textrm{Red}}(X):=\{x \in X,\ {\mathcal{O}}_{X,x}^{rig} \textrm{ is reduced}\}$; it is (with no hypothesis on $X$) an admissible open of $X$, and ${\textrm{Red}}(X^0)$ is defined likewise. ${\textrm{Red}}(X)$ is Zariski-dense in $X$; being open, it is in fact very Zariski-dense there. It follows that ${\textrm{Red}}(X^0)$ is Zariski-dense in $X^0$.
The recollections above ensure that ${\textrm{Red}}(X^0)$ is an admissible open and closed subset of $X^0$, hence $X^0={\textrm{Red}}(X^0)$. If $V$ is an affinoid open of $X$, then $V\cap X^0$ is Zariski-dense in $V$, and the recollections show that $V \subset {\textrm{Red}}(X)$. $\square$

The rigid analytic isomorphism ${\textrm{JL}}_p$
------------------------------------------------

Let ${\mathcal{W}},N,p,d$ be as in §\[formes\], and set $${D}^{0,\varepsilon,d}:=D(\mathcal{E}(F^{0,\varepsilon,d}))^{{\textrm{red}}}, \, \, \, \, {D}^{D,\varepsilon}:=D(\mathcal{E}(F^{D,\varepsilon}))^{{\textrm{red}}}.$$ We write $Z \subset {\mathcal{W}}\times {\mathbb{A}}^1$ for the Fredholm hypersurface attached to ${\textrm{Fred}}_{F^{0,\varepsilon,d}}(U_p)={\textrm{Fred}}_{F^{D,\varepsilon}}(U_p)$ (Theorem \[seriecaracteristique\]), and set $a:=a(\mathcal{E}(F^{0,\varepsilon,d}))$, $a_D:=a(\mathcal{E}(F^{D,\varepsilon}))$. We also have natural morphisms ${D^{0,\varepsilon,d}}\rightarrow {\mathcal{W}}$ and ${D^{D,\varepsilon}}\rightarrow {\mathcal{W}}$, both of which we denote by $\bf{\kappa}$.

\[jlpfamille\] There exists a unique rigid analytic isomorphism ${\textrm{JL}}_p: D^{D,\varepsilon} \rightarrow D^{0,\varepsilon,d}$ over ${\mathcal{W}}$ coinciding with the Jacquet-Langlands correspondence on the classical points other than the special point. It satisfies $\forall h \in {\mathcal{H}}, \, \, a(h).{\textrm{JL}}_p=a_D(h)$.

Before proving this theorem, recall that a point $x$ of $D^{0,\varepsilon,d}({\mathbb{C}}_p)$ (resp. $D^{D,\varepsilon}({\mathbb{C}}_p)$) is said to be [**classical**]{} if:

i\) $\kappa(x)=(1+p)^k\zeta$, where $k \geq 2$ is an integer and $\zeta \in \mu_{p^{\infty}}$;

ii\) the character ${\mathcal{H}}\rightarrow {\mathbb{C}}_p$ obtained by evaluation at $x$ occurs in the semi-simplification of the ${\mathcal{H}}$-module $F_{(1+p)^k\zeta}^{0,\varepsilon,d,cl}$ (resp. $F_{(1+p)^k\zeta}^{D,\varepsilon,cl}$).
We write $x_0$ for the point of $D^{D,1}({\mathbb{C}}_p)$ corresponding to the character of ${\mathcal{H}}$ on the line of constant functions in $F_{(1+p)^2}^{D,1,cl}$; we call it the [**special point**]{}.

[*Proof:*]{} The uniqueness assertion of \[jlpfamille\] follows from the Zariski-density of the classical points of $D^{D,\varepsilon}$. Theorem \[seriecaracteristique\], combined with Proposition \[spectral\], ensures the existence of a canonical isomorphism $\phi: \mathcal{E}(F^{D,\varepsilon})^{{\textrm{red}}} \rightarrow \mathcal{E}(F^{0,\varepsilon,d})^{{\textrm{red}}}$. We then define $${\textrm{JL}}_p: {D^{D,\varepsilon}}\rightarrow {D^{0,\varepsilon,d}}$$ as the isomorphism induced by $\phi$. By construction it lies over ${\mathcal{W}}$ and satisfies $a.{\textrm{JL}}_p=a_D$. Let us prove that ${\textrm{JL}}_p$ induces the Jacquet-Langlands correspondence on the non-special classical points. Recall (see for instance [@ch] 6.2.4, 6.2.5) that for $w \in {\mathcal{W}}({\mathbb{C}}_p)$, the map which to a point $x \in {D^{0,\varepsilon,d}}({\mathbb{C}}_p)$ (resp. ${D^{D,\varepsilon}}({\mathbb{C}}_p)$) with ${\bf \kappa}(x)=w$ associates the ${\mathbb{C}}_p$-valued character of ${\mathcal{H}}$ given by evaluation at $x$ induces a bijection between ${\bf \kappa}^{-1}(w)$ and $|\mathcal{X}_{U_p}(F_{w}^{0,\varepsilon,d})|$ (resp. $|\mathcal{X}_{U_p}(F_{w}^{D,\varepsilon})|$; see §\[semi\] for the notation $|.|$). In particular, if $x \in {D^{D,\varepsilon}}({\mathbb{C}}_p)$ is a non-special classical point, the relation $a.{\textrm{JL}}_p=a_D$ ensures that ${\textrm{JL}}_p(x)$ corresponds to the same character of ${\mathcal{H}}$ as $x$.
Now the usual Jacquet-Langlands correspondence guarantees the existence of a classical form of weight $\kappa(x)$ with this character under ${\mathcal{H}}$; by uniqueness (i.e. by the bijection recalled above), the corresponding point of ${D^{0,\varepsilon,d}}({\mathbb{C}}_p)$ is necessarily ${\textrm{JL}}_p(x)$, which proves the theorem. $\square$

[*Remarks:*]{} i) By standard techniques one can show that the closure $\overline{{\mathcal{H}}}$ of the $\Lambda$-algebra generated by the image of ${\mathcal{H}}$ in $A(D^{D,\varepsilon})$ is compact. From this and from the existence of the Galois representations attached to classical modular forms, one easily deduces (see for instance [@ch] §7) the existence of a unique continuous pseudo-character of dimension $2$ $$T: \textrm{Gal}(\overline{{\mathbb{Q}}}/{\mathbb{Q}})_{Npd} \longrightarrow \overline{{\mathcal{H}}} \subset A(D^{0,\varepsilon,d})$$ such that for every prime $l$ with $(l,Npd)=1$, $T(\textrm{Frob}_l)=a(T_l)$.

ii\) ${\textrm{JL}}_p(x_0)$ corresponds to the overconvergent cuspidal modular form of weight $2$, new at $d$, with $q$-expansion $q+\sum_{n\geq 2}a_n q^n$ where $a_l=l+1$ if $(l,pd)=1$, $a_l=1$ if $l|d$, and $a_p=p$. It is not classical in the strict sense above, but it nevertheless converges on all of $X_1(Np,d)$, though no longer as a cuspidal form there. It is "critical", being of slope $1$ and weight $2$. Taking $X:=\{(1+p)^k,\ k\geq 2\} \subset {\mathcal{W}}({\mathbb{C}}_p)$, the classical structures on $X$ as above, and $N=1$, the multiplicity one hypothesis for ${\mathcal{H}}$ acting on $M_{(1+p)^k}^{cl}\cap M_{(1+p)^k}^{\leq \alpha}$ holds as soon as $\frac{k-1}{2} > \alpha$, and \[reduit\] yields the following:

If $N=1$, then $D(\mathcal{E}(F^{0,\varepsilon,d}))$ and $D(\mathcal{E}(F^{D,\varepsilon}))$ are reduced.
Some consequences, remarks and questions
========================================

Theta operators
---------------

Let $\kappa=(k,\chi)$ be of conductor $m$, with $k \geq 2$. There is a surjective intertwining operator $${\mathcal{C}}_{(1+p)^{-2}\kappa}^{\dagger} \longrightarrow ({\mathcal{C}}_{(1+p)^{-2}{\kappa^*}}^{\dagger})\otimes {\det}^{k-1}, \, \, \, f \mapsto (\frac{d}{dT})^{k-1}(f), \, \, \, \, \kappa^*:=(2-k,\chi)$$ as representations of ${\mathbb{M}}$ (cf. [@Buz2] §6; see also [@ST] 5.5). Its kernel is the space of locally polynomial functions of degree $\leq k-2$. It induces an operator $\Theta^{1-k}: F^{D,\varepsilon}_{\kappa,0}[m] \rightarrow F^{D,\varepsilon}_{\kappa^*,0}[m]$, with the notation of \[quat\], such that $\forall n \in {\mathbb{N}}, \, \, T_n(\Theta^{1-k}(f))=n^{1-k}\Theta^{1-k}(T_n(f))$. The kernel of $\Theta^{1-k}$ is the ${\mathcal{H}}$-module $S_k^D(Np^m,\varepsilon\chi\tau^{-k},{\mathbb{C}}_p)$. On the side of modular curves, one can define via the Kodaira-Spencer map ([@Col2] §4, [@Col3]) an operator $M_{2-k}(Np^m,d)^{\dagger} \rightarrow M_{k}(Np^m,d)^{\dagger}$, denoted $\Theta^{k-1}$, which acts on $q$-expansions by $(q\frac{d}{dq})^{k-1}$, and which therefore satisfies $\forall n \in {\mathbb{N}}, \, \, T_n(\Theta^{k-1}(f))=n^{k-1}\Theta^{k-1}(T_n(f))$. The classical eigenforms of finite slope, of weight $k$ and level $Np^md$, are not in the image of $\Theta^{k-1}$. If $x \in D^{D,\varepsilon}({\mathbb{C}}_p)$ is non-classical but of weight $\kappa$, there exists a non-classical eigenform $f_x \in F_{\kappa}^{D,\varepsilon}[m]$ whose system of eigenvalues is the one attached to $x$. Then $\Theta^{1-k}(f_x) \in F_{\kappa^*}^{D,\varepsilon}[m]$ is a nonzero eigenform, and therefore corresponds to a point which we denote $\Theta^{1-k}(x) \in D^{D,\varepsilon}({\mathbb{C}}_p)$.
One defines $\Theta^{k-1}(x)$ in the same way for every $x \in D^{0,\varepsilon,d}({\mathbb{C}}_p)$ of weight $\kappa^*$, and \[jlpfamille\] implies the following:

Let $x \in D^{D,\varepsilon}({\mathbb{C}}_p)$ be non-classical of weight $\kappa(x)=(k,\chi)$; then $$\Theta^{k-1}({\textrm{JL}}_p(\Theta^{1-k}(x)))={\textrm{JL}}_p(x)$$

Questions
---------

(Q1) We have not shown that the systems of Banach modules $F^{0,\varepsilon,d}$ and $F^{D,\varepsilon}$ are isomorphic. Is it true, for instance, that for $\kappa \in {\mathcal{W}}({\mathbb{C}}_p)$, the subspaces of $(F_{\kappa}^{D,\varepsilon})^{\dagger}$ and of $(F_{\kappa}^{0,\varepsilon,d})^{\dagger}$ consisting of the finite-slope vectors are isomorphic as ${\mathcal{H}}$-modules? This "non-semi-simplified" version of our correspondence would be of interest, for instance, for multiplicity one questions, which are better understood on the $GL_2$ side (essentially because of the presence of the $q$-expansion).

(Q2) Is there a geometric realization of the correspondence obtained here? We hope to return to this point in a later work.

(Q3) Several other natural spaces of "$p$-adic modular forms" can be defined, both at the quaternionic level and for $GL_2$. For instance, in our definition of quaternionic $p$-adic forms one can replace the principal series of $I$ by restrictions of cuspidal series of $GL_2({\mathbb{Q}}_p)$, which also occur in families over ${\mathcal{W}}$ and contain the usual finite-dimensional representations at arithmetic weight-characters: what do they correspond to on the $GL_2$ side? We also hope to return to the study of another family of Banach spaces (built this time on the elliptic side), obtained by considering sections of $\omega^k$ on the finite union of the supersingular disks of $X_1(N)/{\mathbb{C}}_p$ (with $N$ prime to $p$, say).
These spaces are likewise related to spaces of quaternionic modular forms for $D$ ramified at $p$ and at infinity this time (cf. a letter from Serre to Tate [@Ser2] for a correspondence modulo $p$).

\
In preparation (2002) \
Available at http://www.ma.ic.ac.uk/$\thicksim$kbuzzard/maths/research/papers/index.html \
Springer Verlag, Grundlehren der mathematischen Wissenschaften [**261**]{} \
Annales de l'institut Fourier, [**49**]{}, 905-919 (1999). \
Inventiones math. [**124**]{}, 214-241 (1996) \
Inventiones math. [**127**]{}, 417-479 (1997) \
Journal de théorie des nombres de Bordeaux [**9**]{}, 395-403 (1997) \
Available at http://www.dma.ens.fr/$\thicksim$chenevie/. \
Proc. Durham, 1996. London Math. Soc. Lecture Note Ser. [**254**]{}, (1998) \
Cambridge University Press [**69**]{}. \
Springer Lecture Notes [**114**]{}, (1970) \
Modular functions of one variable 3, Springer Lecture Notes [**350**]{}, (1972) \
Preprint. \
Publications Math. I.H.E.S. [**12**]{} (1962) \
Oeuvres complètes, $IV$.\
Journal AMS [**15**]{}, 443-468 (2002)

[^1]: chenevie@dma.ens.fr

[^2]: Strictly speaking, one should rather take $K$-valued functions with $K$ spherically complete.

[^3]: $f_{|k g}(z):=f(g(z))j(g,z)^{-k}\det(g)^{k/2}, \, \, g \in GL_2({\mathbb{R}})^+$; in particular, if $g \in Z({\mathbb{R}})$, $f_{|k g}=f$.

[^4]: This is construction "D" of the Hecke curve in [@eigen], based solely on spectral theory; we do not deal in this text with construction "C".
Drug dosage adjustment of patients with impaired renal function at hospital discharge in a teaching hospital. Inappropriate dosing and the risk of toxicities are common with the patients with impaired renal function. Therefore, appropriate dosing is obligatory to prevent the drug toxicities. The present study was performed to investigate the appropriateness of dosage adjustment of the drugs that are toxic to kidney and/or metabolized or eliminated (TEM) by kidney. A retrospective study was performed at the time of hospital discharge in the patients with impaired renal function. All patients with renal clearance ≤50 ml/min/1.73 m² were included for the analysis. Data with respect to patient's clinical, medications and their dosages, laboratory findings were extracted from medical record section. At discharge, there were a total of 848 prescribed drugs in 116 impaired renal function patients. Of them 404 were classified as TEM medication. Dose adjustment according to renal function was judged as necessary in 135 TEM medications and 28 were deemed to be used with caution. Among these, 108 (80% of 135) medications were considered appropriate in dosing, whereas 27 (20%) were inappropriate. Total 14 (10.37%) and 13 (9.63%) times of inappropriate dosing were found in those with moderate and severe renal impairment, respectively. The frequency of inappropriate dosing was not significantly different from moderate than that of the severe renal impairment (p > 0.05). The results of the study demonstrated that dosage adjustment of TEM drugs in patients with impaired renal function is less than optimum in a considerable number of patients at hospital discharge. Awareness raising and monitoring system for inappropriate dosing is critical to improve the quality of care in patients with renal dysfunction.
{ "pile_set_name": "PubMed Abstracts" }
Q: Can't connect to Pi Zero over USB I followed the guides here and here, both times freshly flashing Raspbian Buster Lite to the SD and modifying the config files. I leave the Pi around 3 minutes to boot, with it connected to my PC's USB port, anfd the light is solidly on. However, I keep getting ssh: Could not resolve hostname raspberrypi.local: No such host is known.. Bonjour is definitely installed on my Windows machine. Is there a way to get it working? A: I solved it by switching off my VPN and using a different USB cable; either one of these could have fixed it. I hope this solves any issues others may have.
{ "pile_set_name": "StackExchange" }
/* i7080_defs.h: IBM 7080 simulator definitions Copyright (c) 2006-2016, Richard Cornwell Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL RICHARD CORNWELL BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ #include "sim_defs.h" /* simulator defns */ #include "i7000_defs.h" /* Memory */ #define MEM_ADDR_OK(x) ((x) < MEMSIZE) extern uint8 M[MAXMEMSIZE]; extern uint32 EMEMSIZE; /* Size of emulated memory */ /* Issue a command to a channel */ int chan_cmd(uint16 dev, uint16 cmd, uint32 addr); /* Map device to channel */ int chan_mapdev(uint16 dev); /* Process the CHR 3 13 command and abort all channel activity */ void chan_chr_13(); uint32 load_addr(int loc); void store_addr(uint32 addr, int loc); /* Opcode definitions. 
*/ #define OP_TR CHR_1 #define OP_SEL CHR_2 #define OP_CTL CHR_3 #define OP_CMP CHR_4 #define OP_SPR CHR_5 #define OP_ADM CHR_6 #define OP_UNL CHR_7 #define OP_LOD CHR_8 #define OP_TMT CHR_9 #define OP_TRS CHR_O #define OP_NOP CHR_A #define OP_SET CHR_B #define OP_SHR CHR_C #define OP_LEN CHR_D #define OP_RND CHR_E #define OP_ST CHR_F #define OP_ADD CHR_G #define OP_RAD CHR_H #define OP_TRA CHR_I #define OP_HLT CHR_J #define OP_TRH CHR_K #define OP_TRE CHR_L #define OP_TRP CHR_M #define OP_TRZ CHR_N #define OP_SUB CHR_P #define OP_RSU CHR_Q #define OP_WR CHR_R #define OP_RWW CHR_S #define OP_SGN CHR_T #define OP_RCV CHR_U #define OP_MPY CHR_V #define OP_DIV CHR_W #define OP_NTR CHR_X #define OP_RD CHR_Y #define OP_WRE CHR_Z #define OP_AAM CHR_QUOT #define OP_CTL2 CHR_COM #define OP_LDA CHR_EQ #define OP_ULA CHR_STAR #define OP_SND CHR_SLSH #define OP_BLM CHR_DOL #define OP_SBZ CHR_RPARN #define OP_TZB CHR_DOT #define OP_CTL3 CHR_LPARN #define OP_SMT CHR_TRM /* Channel options */ #define CHAN_NOREC 0001 /* Don't stop at record */ #define CHAN_8BIT 0002 /* Send 8 bit data */ #define CHAN_SNS 0010 /* Issue sense command */ #define CHAN_CTL 0020 /* Issue control command */ #define CHAN_ZERO 0040 /* Zero memory after write */ #define CHAN_SKIP 0040 /* Don't read data */ #define CHAN_END 0100 /* Last location */ #define CHAN_RECCNT 0200 /* Last was set record counter */ #define CHAN_CMD 0400 /* Opcode in high order bits */ #define CHAN_AFULL 01000 /* A buffer has data */ #define CHAN_BFULL 02000 /* B buffer has data */ #define CHAN_BFLAG 04000 /* Write/read B buffer */
{ "pile_set_name": "Github" }
SoMo Tickets Under the stage name SoMo, Joseph Somers-Morales first took the Internet by storm in 2011 when he released a stunning medley of Drake covers on YouTube. Thanks to that initial video, the R&B artist now boasts thousands of fans across the nation and has embarked on multiple headlining tours equipped with a catalog of original songs that accentuate his sleek, polished vocal style. SoMo Ticket Information Hailing from a small town in Texas, this independent R&B artist has been making waves with his smooth and sultry vocal style. See SoMo on a stage near you by first clicking the “Tickets” button next to the concert date you’d like to attend. With the show selected, sort through the available seats on the next page by toggling the price, section, or row filters on the sorting tool, and then hit “Buy” when you’re ready to complete the purchase. Log into your Vivid Seats account, provide your billing information, and complete the transaction by clicking “Place Order.” Streamlined Purchasing From the moment you begin browsing to order completion, the purchasing process at Vivid Seats has been designed with your convenience in mind, including easy-to-follow steps and clearly displayed ticket and event information. Secure Transactions We use the most trusted online security systems to make sure that all your private information is 100% protected every time you place and order. Start-to-Finish Customer Support For assistance finding the right seats or arranging delivery, call our customer service center at 866.848.8499. You can also chat online by clicking the Live Chat link at the top of the page. Concert News Wide Awake SoMo Tour 2014 to Visit More Than 30 Cities In support of his first official studio album that dropped earlier in 2014, Texas-based R&B artist SoMo will return to the road starting in October with a slew of more than two dozen shows that will fill his schedule for the remainder of the year. 
The young rising star will launch the venture, titled the Wide Awake Tour, from Nashville’s Marathon Music Works on Oct. 4, with later dates to follow in cities like New Orleans (Oct. 8), New York (Oct. 23), Minneapolis (Nov. 5), and San Francisco (Nov. 18), plus many more. Near the end of the trek, SoMo will also perform a hometown show at Emo’s in Austin, Texas, on Nov. 25. The Wide Awake Tour will wrap up days later on Nov. 29 with a final stop in Dallas. SoMo dropped his debut studio album in April 2014, and the long-awaited release made it all the way to the second slot on Billboard’s Top R&B Albums chart. In the same month, SoMo also earned his first award from the RIAA for his single “Ride.” The track earned Gold certification for selling more than 500,000 copies.
{ "pile_set_name": "Pile-CC" }
Just another WordPress.com site BDN reports ” Former Maine Attorney General James Tierney told a Portland audience Tuesday evening that the state’s economy will depend on its ability to attract — and accommodate — newcomers from foreign countries. “We are so old that we have got to attract people to come to Maine from someplace else. I don’t care what color they are, I just want them to come here,” Tierney said. “We’re not talking about affirmative action, we’re not talking about doing people favors. We’re talking about doing ourselves a favor if we can figure out this diversity issue.” Tierney joined Eva Millona, a former Albanian judge and executive director of the Massachusetts Immigrant and Refugee Advocacy Coalition, as keynote speakers in a panel discussion about economic growth and immigration. The talk was the first in a series of five such public discussions scheduled for Portland through June, and comes against a backdrop of an ongoing dispute between city officials and Gov. Paul LePage over the distribution of aid money to undocumented immigrants…. immigrants are 30 percent more likely to start their own businesses than native-born U.S. citizens. “If immigrants succeed, we all benefit,” said opening speaker Tim Honey, president of the World Affairs Council of Maine. “If immigrants don’t, we all pay the price.” Tierney said that Maine’s aging population represents an economic crisis, and the state’s only chance to avoid economic ruin will be to welcome immigrants to replenish its population. “We are no longer a state with people looking for jobs, we’re a state with jobs looking for people,” Tierney said. 
“We have jobs in this state, which we’re losing because we do not have people to fill them.” The former attorney general said Mainers must reject the “politics of fear” and embrace programs that create opportunities for immigrants.” “If immigrants succeed, we all benefit.” “If immigrants don’t, we all pay the price.” Hard working Mainers have “paid the price” through the “theft” of their small businesses and loss of livelihood through the failure of accountability within the State of Maine. Can we do “ourselves a favor” and put Mainers first….let’s start with accountability and restitution of the losses of livelihoods!! Then, perhaps, people will have trust that their hard work will not be “ripped out” from under them, decide to stay in Maine and start their small businesses. Brushing crimes “under the rug” does not move Maine forward…wake up! Isn’t it high time that our elected officials listen to some real problems facing Maine’s economy? They can only learn by speaking with the people who have been “tossed out” of their system, rather than those who “control” their system. Like this: “Attorneys general are now the object of aggressive pursuit by lobbyists and lawyers who use campaign contributions, personal appeals at lavish corporate-sponsored conferences and other means to push them to drop investigations, change policies, negotiate favorable settlements or pressure federal regulators, an investigation by The New York Times has found. A result is that the routine lobbying and deal-making occur largely out of view. But the extent of the cause and effect is laid bare in The Times’s review of more than 6,000 emails obtained through open records laws in more than two dozen states, interviews with dozens of participants in cases and attendance at several conferences where corporate representatives had easy access to attorneys general. 
Often, the corporate representative is a former colleague.” “The current and increasing level of the lobbying of attorneys general creates, at the minimum, the appearance of undue influence, and is therefore unseemly,” said James E. Tierney, a former attorney general of Maine, who now runs a program at Columbia University that studies state attorneys general. “It is undermining the credibility of the office of attorney general.” “Mr. Tierney, the former Maine attorney general, said that lobbyists were entitled to set up a meeting with the attorneys general in their offices. But to write a check, for as much as $125,000, to gain days’ worth of private time with the attorneys general is another matter, he said. When you start to connect the actual access to money, and the access involves law enforcement officials, you have clearly crossed a line,” he said. “What is going on is shocking, terrible.” “In an effort to make allies rather than adversaries, Bernard Nash, the head of the attorney general practice at Dickstein and the self-proclaimed “godfather” of the field, tells clients that it is essential to build a personal relationship with important attorneys general, part of what his firm boasts as “connections that count.” “Through their interaction with A.G.s, these individuals will become the ‘face’ of the company to A.G.s, who are less likely to demagogue companies they know and respect,” said a confidential memo that Dickstein sent late last year to one prospective client, Caesars Entertainment. 
Executing this strategy means targeting the attorneys general “front office,” a reference to the handful of important decision makers.” “For the attorneys general, there is a personal benefit, too: Their airfare, meals and hotel bills at these elite resorts are generally covered, either by the corporate sponsors or state taxpayers.” “The schedule of attorney general conferences for the coming year is laid out — after a pause for the elections — with events set for the Fontainebleau resort in Miami Beach, the Four Seasons Hotel at Mandalay Bay in Las Vegas and the Grand Wailea resort on Maui, among many others. The invitations for corporate sponsorships are already being sent.” Read more HERE. James Tierney, a former attorney general of Maine, states “What is going on is shocking, terrible. It is undermining the credibility of the office of attorney general.” Oh really? Credibility? Evidence proves a pattern of official corruption within Maine’s Attorney General’s Office since the days of Jim Tierney…nothing has changed! How about some accountability within Maine’s government? With the upcoming election, will the elected officials finally do the right thing….or will history repeat itself? Will credibility not only lack with the Attorney General? “You might think the following is an aberration in Maine Courts, sadly it happens on a fairly regular basis. On Wednesday we were in Farmington Superior Court watching Judge Donald Marden perform, er preside over a case that’s been in the courts since 1994. First we were denied the right as a reporter to video or audio record the hearing, you can see why as you read further. At 09:30 J. Marden told the Defendant in Farry v. Lavigne CV-94-61, “I don’t chew my cabbage twice.” Then Marden proceeded to repeat his previous statement. Who wouldn’t want to see that on YouTube.com? At 09:35 Marden told the Defendant, “If you’ll be quiet for a moment I’ll tell you what it is.” This seems to be the polite way of saying, just shut up. 
Not too condescending if Judge Marden had been talking to a second grader. The best quote, at 11:12, was the Defendant requesting for the SECOND TIME to invoke her rights under Rule 76H to have her own recording of the Hearing made. Marden said, “You have a right to a recording and that’s it,” as he indicated by pointing to the official transcription made by the court reporter sitting in front of the witness stand. In Rule 76H there’s no place that allows any judge to override the rule at his own whim. It states that the Rule SHALL NOT BE DEFEATED. I guess when this Rule was dreamed up, the authors hadn’t heard about Judge Donald Marden’s veto power over it.” As the swill turns, connect the dots. If you have not viewed Tom Dunn’s Most Powerful, Revealing Video, please do so. You will learn of the official corruption that Mainers are facing today. The pattern has not changed… only the positions of the players have changed. Tom’s video is a draft of what was to be a full documentary. He was called into the Attorney General’s Office to conduct an investigation for them. He apparently came up with more evidence than the A.G.’s Office wanted. Arthur Brennan (who was elevated to a judgeship) brushed Tom’s evidence under the rug and Tom was off the case! Maine Exposed, hosted by Leon Bard, will broadcast a series of programs on public corruption. A video, documented by Tom Dunn, has recently been put out on the internet and prompted this series. These broadcasts will educate and inform the public that the pattern of public corruption depicted on Tom’s video continues today. Officials named in Tom’s tape continue to sit in their official capacity or were elevated to higher positions, including appointment to a judgeship. This pattern of public corruption will take us into New Hampshire, New Jersey and the Department of Justice, Washington DC. Tom’s impressive resume includes his recruitment into N.S.A.’s (National Security Agency’s) A.S.A. 
(Army Security Agency), Far East, where he served as a field agent with the 330th A.S.A. and as head of Top-Secret-Codeword operations as Non-Commissioned Officer in Charge; membership in the Baltimore P.D.; and his role as president/investigator for L.A.W., Inc., a citizens’ advocate organization that exposed corruption at all levels and was extremely successful. Dottie Lafortune, who produced and hosted a public-access t.v. talk show, The Maine Forum, in Biddeford, Maine, and who is a personal friend of Tom’s, will join us in these discussions. Tom had been a guest several times on her program, as was Philip Castora, a licensed Private Investigator, who gave a report containing his conclusions with respect to various public documents relative to his investigation into fraudulent confiscations of property through the concerted actions of city officials, bankers and the courts. This resulted in the pulling of the “What Price Justice?” program from the air and the blackout of public access to all producers. A copy of “What Price Justice?” was mailed out to many of you across the country, and you know the contents of that broadcast. We hope that this invite reaches you and that you join us on this program. These broadcasts will be held on Wednesday evenings, with the series beginning on October 12, 2011, and will be most interesting and revealing. We hope to see you there. 
“Assistant Attorney General Leanne Robbin said during Violette’s court hearing that the state can prove beyond a reasonable doubt that the former turnpike head used approximately $155,000 in authority money on personal expenses, including gift cards to hotels and resorts in Canada, France, Italy, Bermuda and Puerto Rico.” “These are people who never imagined themselves in jail,” she said, calling the Violette case “the biggest public robbery case I’ve seen in my 28 years of public service.” Robbin told reporters after the hearing Thursday that Violette exhibited a “significant abuse of power.” The news reports that Violette is accused of theft by unauthorized taking. Is the pot calling the kettle black? I “can prove beyond a reasonable doubt” that Asst. A.G. Leanne Robbin, in concert with Deputy Secretary of State Julie Flynn, “exhibited a significant abuse of power.” Aren’t these two government employees doing the same thing that Asst. A.G. Robbin is prosecuting Violette for? Who will prosecute them? I inquired about the status of my correspondence to Secretary of State Charles Summers regarding possible election law violations dating back to 2004. (This was also brought to former Sec. of State Matthew Dunlap during his tenure.) Secretary Summers did not find that there was significant evidence to support an investigation. PPH reports, “The former executive director of the Maine Turnpike Authority has been charged with felony theft and faces time in prison as part of a plea agreement with prosecutors. The state Attorney General’s Office said Thursday that Paul Violette was charged with unauthorized use of the turnpike authority’s gift cards and credit cards for personal travel, meals and other expenses exceeding $10,000 in value. 
The charge carries a maximum penalty of 10 years in prison.” ******** Debate began in the late 1970s to decide whether the Maine Turnpike should continue as a toll highway or become a “freeway.” Some citizens wanted the responsibility to maintain the roadway and construct new projects on the Turnpike placed under the jurisdiction of the Maine Department of Transportation. The Legislature also gave the Authority a directive to study the need for new interchanges in urban regions to promote economic development and increased commercial activity. By allowing tolls to remain the Turnpike’s revenue source, and the Turnpike Authority to manage the highway, valuable and increasingly limited state and federal transportation funding, generated by state and federal gas taxes, could be used to maintain the rest of Maine’s roads, bridges and highways. PPH reports on “the resignation of the head of the Maine Turnpike Authority,” and lawmakers are “pushing forward with their investigation of the authority’s spending and lobbying practices.” Former state Sen. Peter Mills “will replace Paul Violette, who was executive director for 23 years until he resigned last week amid questions about the authority’s spending practices. The Legislature’s Government Oversight Committee wants Violette and other turnpike authority executives and board members to appear at a hearing April 15. 
The committee has said it may issue subpoenas if they do not appear voluntarily, and has said people should be prepared to testify under oath.” The Sun Journal reports, “Mills, an attorney, is noted for his open and direct manner of speaking and his enthusiasm for tackling difficult subjects.” “Although the Legislature oversees the MTA’s operating budget, it has little oversight of the agency’s repair-maintenance budget, a $33 million fund in which a watchdog group discovered questionable expenditures.” The Legislature’s Government Oversight Committee will “examine the legality of the turnpike authority’s practices of withholding some budget information from the Legislature, lobbying state officials and giving money to dozens of organizations and trade groups.” The committee is also concerned about “$157,000 worth of gift cards that the authority donated to organizations but could not explain with any documentation.” If Peter Mills is open to tackling difficult subjects (and won’t look the other way as he’s done with bank fraud), he should look into the MTA’s relationship with the MDOT and “friends,” which began during the tenure of former Governor Joseph Brennan, per Tom Dunn’s investigation. Tom’s investigation was brushed under the rug by then Asst. A.G. Arthur Brennan, who sits on the bench in York County Superior Court. Isn’t it interesting that the MTA board hired Roger Mallar as a consultant? Mallar was Maine’s transportation commissioner under Governor Joseph Brennan. Has anything changed? Will Governor LePage be in for another surprise? Will Peter Mills uphold his “integrity, experience and commitment to public service”?

DISCLAIMER: Any ad videos following any posts on this blog are posted by WordPress and are beyond this blog/editor’s control. These ad videos are not the responsibility of, nor endorsed by, this blog/editor. This blog/editor does NOT receive any compensation or monies from these ad videos.

What Price Justice? 
The main purpose of this blog is to bring the truth to the people of Maine and across this country about the corrupt state, judicial and federal officials who are influenced by special interests, where our citizens are getting abused and where the perpetrators find shelter under the state and federal Attorneys General’s do-nothing umbrella of authority. The dots will be connected to show a pattern of cooperation and obstruction of justice under color of legal authority between all levels of local, county, state and federal governments to sock it to us, intimidate us and deny us due process. We are sitting ducks for official harassment and are getting wrongfully harmed, scammed, beaten, drugged or otherwise deprived of our life, liberty and property by a whole system of administrative terror which has grown up throughout the country. Feel free to comment with any information you may have of corruption or abuse by the people or agencies you see listed here.
Aspartic acid conjugates of 2-(3,4-dichlorophenyl)-N-methyl-N-[(1S)-1-(3-aminophenyl)-2-(1-pyrrolidinyl)ethyl]acetamide: kappa opioid receptor agonists with limited access to the central nervous system. Aspartic acid conjugates of 2-(3,4-dichlorophenyl)-N-methyl-N-[(1S)-1-(3-aminophenyl)-2-(1-pyrrolidinyl)ethyl]acetamide (5) were synthesized and evaluated in mice for antinociceptive activity by intravenous and intracerebroventricular routes of administration. The intravenously-administered alpha-conjugate of L-Asp (2), its D-Asp diastereomer (3), and the beta-conjugate of L-Asp (4) were found to be 11-, 31-, and 40-fold, respectively, less effective than the parent ligand 1 (ICI 199,441) in producing central nervous system mediated antinociception in the mouse abdominal stretch assay. In addition, iv-administered 2 and 3 were found to also produce potent antinociception in the tonic phase of the mouse formalin assay, which is a model of tonic rather than acute pain. This study suggests that the attachment of a zwitterionic moiety to a position in the molecule that exhibits bulk tolerance is a viable strategy for the design of peripherally-selective and peripherally-active opioids.
Association of Alumni Announces 2017-2018 Timeline Thursday, October 26, 2017 The Association of Alumni Executive Committee will announce its nominated slate of candidates for officers and members of the Executive Committee by November 22, 2017. Petition candidates have until December 8, 2017, to submit a petition with 250 Dartmouth alumni signatures for inclusion on a ballot. The executive committee or any one percent of the voting members of the Association may file a proposed amendment to the Association of Alumni constitution. The deadline to submit signed petitions to place amendments to the association constitution on a ballot is November 15, 2017. If the nominated Association of Alumni slate is uncontested, then they will assume office, without the need for an election, on the day of the annual meeting of the Association of Alumni, March 14, 2018. If the slate is contested and/or there are Association of Alumni constitution amendments to be voted on, then the annual election will be held from February 9 through March 9, 2018. The results of the election will be announced at the Association of Alumni annual meeting on March 14, 2018. Ballots in the election may be cast by mail or electronic transmission. Ballots will be sent electronically to each eligible voter, unless a voter has asked the secretary or the Office of Alumni Relations to send the voter a paper ballot by mail, in which case the secretary will send the voter a paper ballot by mail, and will send ballots electronically to all voters who do not request a paper ballot by mail. For more information, please review the election guidelines, or contact Liz Nunez at (603) 646-3929. November 15, 2017 Deadline to submit petitions to the Association of Alumni Executive Committee for association constitution amendments
My Big Brother Jake By the time I turned eighteen, my brother Jake had been off at college for almost five years. Not that he was anywhere near graduating or anything like that. I would hear my parents complain that he kept changing his major and that was why it was taking him so long. I didn't really know Jake all that well. He was four years older than me and was often distant seeming, but I had always thought he was handsome with his dark hair and green eyes. I'm not the only one who thought he was cute. Lots of girls liked him, and they would always come over whenever he was visiting from school. Despite how overprotective they were with me, my parents didn't care if Jake had girls in his room. Some evenings I would hear them, the bed creaking, the girl whimpering and my brother breathing hard, through the thin wall that separated our bedrooms. Those sounds would make me feel funny and I would slip my hand inside my panties and rub. It was late December during my senior year, when Jake caught me sucking my boyfriend's dick. I thought I had the house to myself. My parents had left me at home alone (they thought), while they attended a late night holiday party. They hadn't bothered to tell me that Jake would be coming into town that evening. So there I was on my bed, stripped down to nothing but my panties. I was on my hands and knees, ass up in the air, sucking a guy's dick with the door to my bedroom wide open, when I hear Jake clear his throat behind me. I had no idea how long he'd been there, but I totally freaked out when I saw him. As fast as I jumped out of bed, my boyfriend jumped faster and before I knew it he was pulling his pants on and leaping out my bedroom window. I leaned out the window and watched him running down the street and through the snow with no shirt or shoes. I laughed thinking it was lucky for him that he only lived a couple of blocks away. "So little sister this is what you've been up to since I went away." Jake's voice was very close. 
I felt his hot breath on the back of my neck. "Slut," he hissed into my ear. I pushed him away. "What do you mean? I hear you all the time with your girlfriends in there." I pointed to the wall with his bedroom on the other side of it. "They're sluts too." He smirked at me. "That isn't fair. That means you're the biggest slut of them all, Jake." I picked up my t-shirt off the bed and held it over my bare chest. "You're right about that, little sister. I am the biggest slut of them all." He reached out and gently pulled at my shirt. I let him pull it away from me and I dropped my arms to my sides. He began to unbutton his shirt as he moved in towards me. He grabbed the back of my head and pulled me towards him. His mouth mashed roughly against mine. When his tongue started to push inside I opened my lips to him. His big hand began to knead my breast and then it slid down my belly and into my panties. His fingers penetrated me. He pulled my panties down and kneeled before me, spreading my pussy lips and dipping his tongue inside. I let out a whimper of delight; it felt so good. He stood, lowered me to the bed, pushed my legs apart and began kissing and slurping at my cunt. My fingers twisted in his dark curls as I began to shake. I came all over his pretty face. When I stopped shuddering he stood up. His face was slick with me and he gave me the biggest smile ever. He began to unbutton his pants. I couldn't believe what I was about to let my brother do to me, but the last thing I wanted was to stop him. His cock was surprisingly big and then it dawned on me that it wasn't just his looks that made him so popular with the girls. He stroked it a couple of times, as if it wasn't already big and hard enough. He grabbed me by the thighs and pulled me towards him. I let out a yelp, but it didn't faze him. Spreading my legs really wide, he leaned over me. I felt his heat pressing against my slick slit and my body jerked up towards him. My pussy was hungry for his cock. 
"Slow down baby sister. I'm gonna give it to you, don't you worry about that." He winked at me, which made my pussy quake. As he began to push between my legs, he watched it go in. I liked that he was looking at me down there. I wished I could see as well as he could. It hurt a lot as he pressed his huge cock into me, but I tried not to let on. I didn't want him to stop. "Man, you are one tight slut, little sis." He smiled and continued his forward progress. There was one thing I had forgotten to mention to Jake. It was that what he had caught me doing with my boyfriend was the furthest I had ever gone. We'd been dating for a few months and sure he was pressuring me. I had finally agreed to blow him and yes let him cum in my mouth. It was a compromise I was willing to make, because I was saving myself for someone special. I just hadn't realized it was Jake. Once Jake got all the way into me, he started pumping. It was painful, but I tried to suffer with dignity. I clung to him. I wrapped my arms around his shoulders and my legs around his waist and let him pound me into the bed. After that night, we fucked as often as we could the whole three weeks he was home from school. He would sneak into my bedroom in the middle of the night or he would follow me into the bathroom, bending me over the sink for a fast fuck. He even caught me a couple of times in the middle of the day, when our parents were only a few feet away in the next room. He covered my mouth and fucked me up against the wall, knowing any minute they might come around the corner. One afternoon he had me crawl under his desk and blow him while he was studying. When Dad knocked, Jake let him come in and proceeded to carry on a conversation with him while his dick was still hard in my mouth. He made me ride him in his desk chair and fucked me hard while I was lying on my back on his desk. He convinced me to take a walk with him in the snow. Then he dragged me into the woods. 
He forced me onto the ground, reached up under my skirt, tore a hole in my new black tights and fucked me there in the cold whiteness. I had never had an orgasm before that first evening with Jake in my bedroom and after that it seemed not a day went by without at least four or five. Needless to say, I was devastated when Jake had to return to college, but he promised me that I could come stay with him soon. He said his roommates were going to love me.
In disk-based storage systems, there is usually a clear separation between the primary storage function—which deals with providing rapid and efficient access to active data—and secondary storage mechanisms which deal with less active data, with long term data protection, and with maintaining archives of historical storage contents. These secondary functions have, for the most part, traditionally been handled using magnetic tape storage. Reasons for this include the fact that tape has been much cheaper than disk storage (and other alternatives), and tape cartridges are easily transported to provide offsite copies of data to protect against loss due to localized disasters. For a number of years, the cost per byte of disk hardware has been dropping at a much faster rate than that of tape hardware, making disk increasingly attractive as an alternative to tape as a medium for secondary storage. Some of the properties of disk, such as low-latency random access, clearly make it superior to tape as a secondary storage medium. If, however, the superior properties of disk are exploited in a secondary storage system, then new challenges arise which did not previously exist with tape. For example, since every hard disk drive includes the mechanism for reading and writing the media that it contains, in a disk-based secondary storage system it becomes attractive to keep all data online at all times. This means that traditional mechanisms for protecting archival data, based on physically isolating and protecting the storage media, become inapplicable. One could simply turn the disks into write-once media by disallowing deletions in hardware, but then deletion of old data that are no longer needed would also be prohibited. Moreover, for low cost safe disk storage it may be attractive to use an object storage scheme, such as is described in Margolus et al., “A Data Repository and Method for Promoting Network Storage of Data,” US 2002/0038296 A1 (Mar. 28, 2002). 
An object storage system is like a file system without a built-in mechanism for organizing the files (“objects”) into a hierarchy. The clients of the object storage system must define and implement any such mechanism, for example by storing directory information in objects. This lack of built-in hierarchy separates out a complicated issue from the implementation of the storage system itself. In the example of Margolus et al. US 2002/0038296, security and privacy considerations are addressed by assuming that the storage system has little or no access to information about the structure or nature of the data that it stores. This constraint adds an extra dimension to the problem of safely allowing deletion of unnecessary data, while protecting necessary data from malicious or accidental deletion. If deletion of unnecessary data is to be allowed, mechanisms are of course required for determining which data has become unnecessary. Traditional backup schemes maintain “snapshots” of storage system contents at predefined moments, discarding some snapshots as unnecessary after some period of time. File servers often use an on-disk snapshotting mechanism for short-term protection of files from data corruption or accidental deletion. Commonly, this is implemented by simply avoiding overwriting data that is needed for some existing snapshot, and instead writing the new data to a new location (and maintaining appropriate indexing information for finding the different versions of files). A snapshot is created by declaring at some point in time that no data that exists at that point will be overwritten. A snapshot is discarded by freeing storage resources that are not needed by any other snapshot, and are not currently in use. Thus one definition of unnecessary data is data that is only needed by discarded historical snapshots. 
The challenge of deleting only unnecessary data then requires reconciling this definition with the constraints and structure of a distributed, private and secure storage system. For example, it may not be possible, in general, for a storage server to determine which stored data is part of a given historical version, or even which historical versions exist. This problem is compounded if some pieces of data are shared: different historical versions of the same object, or even different objects, may all share common pieces of data, for storage efficiency. These pieces may only be deleted when they are no longer needed by any version of any object. Finally, there may be more sophisticated needs for the protection of historical information than are provided by simple snapshotting.
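The snapshot life cycle described above — write new data to a new location rather than overwriting, declare a snapshot by freezing the current state, and reclaim only storage that no snapshot and no current use requires — can be illustrated with a toy model. This is a minimal sketch for intuition only, not the mechanism of the claimed invention; the `SnapshotStore` class and all of its method names are invented here:

```python
class SnapshotStore:
    def __init__(self):
        self.blocks = {}     # block_id -> immutable data
        self.live = {}       # logical address -> block_id (current contents)
        self.snapshots = []  # frozen address maps; None once discarded
        self._next_id = 0

    def write(self, addr, data):
        # Copy-on-write: new data always goes to a fresh block, so blocks
        # needed by an existing snapshot are never overwritten in place.
        bid = self._next_id
        self._next_id += 1
        self.blocks[bid] = data
        self.live[addr] = bid

    def snapshot(self):
        # Declare that no block referenced at this point may be reclaimed.
        self.snapshots.append(dict(self.live))
        return len(self.snapshots) - 1

    def discard_snapshot(self, sid):
        self.snapshots[sid] = None
        self._gc()

    def _gc(self):
        # "Unnecessary data": blocks referenced by no remaining snapshot
        # and not part of the current (live) contents.
        needed = set(self.live.values())
        for snap in self.snapshots:
            if snap is not None:
                needed.update(snap.values())
        for bid in list(self.blocks):
            if bid not in needed:
                del self.blocks[bid]

    def read(self, addr, sid=None):
        mapping = self.live if sid is None else self.snapshots[sid]
        return self.blocks[mapping[addr]]

# Overwriting "a" does not destroy the version a snapshot still needs.
store = SnapshotStore()
store.write("a", b"v1")
sid = store.snapshot()
store.write("a", b"v2")
assert store.read("a", sid) == b"v1"  # the snapshot still sees v1
```

Discarding the snapshot turns v1's block into "unnecessary data" in the sense defined above, and only then is it reclaimed; shared blocks survive as long as any version references them.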
Vista 64-Bit, HP 820CSE and Linksys PPSX1 Print Server Problem
microsoft.public.windows.vista.print fax scan

I have an ancient HP 820CSE attached to an equally ancient Linksys PPSX1 version 1 on my home network. The printer and print server work fine from my other computers running XP on the network, but when I try to print from Vista, the printer starts feeding the paper, hangs, and then eventually shuts down. It does not print, and I do not get a print error. I have the printer set up on the network as instructed here:
Group anal porn in the orgy of two heifers on blathat with muzhiks – hq porn Since the heifers are specifically tied up with a bunch and hanjubas, they are frequent guests of this blathat, where the baby-bander allows the nykes to lick and dry up in anal with the self-made ones brought with them. But the girls do not have time to properly raskochegaritsya, as there are men and cheerfully girls in a group anal porn. It seems, after all, some kind of cook mistress of the brothel has on both sides of this love rectangle!
Q: Is there any way to provide simple html dom parser with two optional classes?

I'm curious if there is any way to provide two optional classes to simple html dom parser:

    <div class='a'> <!-- some data --> </div>
    <div class='b'> <!-- some data --> </div>

    <?php foreach ($html->find('.a | .b') as $class) {} ?>

Is it possible?

A: Yes, it's:

    $html->find('.a,.b')
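The accepted answer relies on CSS selector grouping, where a comma means "match either selector." The same either-of-two-classes check can be sketched with Python's standard-library html.parser — an illustrative analogue only, not the PHP simple_html_dom library; the `ClassFinder` name and its API are invented for this sketch:

```python
from html.parser import HTMLParser

class ClassFinder(HTMLParser):
    """Collect tags whose class attribute contains any of the wanted classes."""
    def __init__(self, wanted):
        super().__init__()
        self.wanted = set(wanted)
        self.hits = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; class may hold several names.
        classes = (dict(attrs).get("class") or "").split()
        if self.wanted & set(classes):
            self.hits.append((tag, classes))

finder = ClassFinder({"a", "b"})
finder.feed("<div class='a'>x</div><div class='b'>y</div><div class='c'>z</div>")
# finder.hits is now [('div', ['a']), ('div', ['b'])]
```

Grouped selectors behave the same way in most DOM libraries: the result is the union of the matches for each selector, returned in document order.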
Aztrank

Aztrank is the operator of wireless digital CDMA 2000 (Code Division Multiple Access) telecommunication services in Azerbaijan. The AZTRANK Company was established in 1996 and, until 2005, its major service was trunk communications support. From the beginning of 2005 the company started providing CDMA 2000 wireless communications services. The main activity of the company is the development and improvement of wireless telecommunication services. AZTRANK offers its customers CDMA technology phones for both stationary and vehicle use. These are provided by Huawei, one of the world's telecommunications equipment manufacturers. Current devices work in the standard phone mode with landline phone numbers. At present, AZTRANK is offering its CDMA technology wireless services based on the British-made “GPT” station, which has been used by the JV “Azeurotel” during recent years. Its current network coverage areas are Baku and Absheron Rayon.

See also
Telecommunications in Azerbaijan
Ministry of Communications and Information Technologies (Azerbaijan)
List of Azerbaijani companies

External links
Official website

Category:Telecommunications companies of Azerbaijan
Q: On form input focus, show div

I have a form with an input (#test). Does anyone know how, when the input is in focus, to display another div (#tool-tip) with jQuery?

    <form>
        <div id="tool-tip"></div>
        <input id="test" />
        <input type="submit">
    </form>

A:

    $("#test").focusin(function () {
        $("#tool-tip").show();
    }).focusout(function () {
        $("#tool-tip").hide();
    });

Using .focusin() and its opposite, .focusout().
Q: How to remap alphanumerical keys?

I have a buggy keyboard on which the w, f, b, 6 and 2 keys are not working. I want to remap those keys to some other combination of keys, such that when I press the combination I get the corresponding alphanumerical character in text. For example:

    w -> Ctrl+t
    6 -> Ctrl+9

How can I do it?

A: As far as I know there is no way to do it quite as easily as you suggested. One way to handle your situation might be to define a compose key and create an ~/.XCompose file. These are some example lines in such a file:

    <Multi_key> <v> <v> : "w" U0077
    <Multi_key> <V> <V> : "W" U0057
    <Multi_key> <3> <3> : "6" U0036

With that you can do, for instance:

    Compose followed by Shift+V followed by Shift+V => W
    Compose followed by 3 followed by 3 => 6

HTH
116 N.H. 167 (1976) MARTIN G. BATTCOCK & a. v. TOWN OF RYE & a. No. 7224. Supreme Court of New Hampshire. March 31, 1976. *168 Casassa, Mulherrin & Ryan (Mr. John J. Ryan orally) for the plaintiffs. Taylor & Gray and Mr. William J. Hurley (Mr. Hurley orally) for the defendants. GRIFFITH, J. This is an appeal from the decision of the town of Rye building inspector and Board of Adjustment denying the plaintiffs a building permit. The Trial Court (Cann, J.) ruled upon an agreed statement of facts that the plaintiffs were entitled to a building permit under the provisions of section 13-B of the Rye zoning ordinance as a matter of law. To this order, the defendants seasonably objected, and all questions of law raised thereby were reserved and transferred. The plaintiffs are the owners of two lots with continuous frontage located in a subdivision in Rye, New Hampshire. Although the subject lots do not meet the minimum area requirement of the zoning ordinance, the plaintiffs take the position that they are nonetheless entitled to a building permit under the so-called "grandfather clause" of the ordinance, section 13-B. This provision in effect allows the owner of an undersized lot the right to commence construction thereon if his is a "lot in a subdivision, the plan of which is lawfully recorded in the Rockingham County Registry of Deeds at the time [the] ordinance takes effect." The parties are agreed that the subdivision in which the plaintiffs' lots are located is so recorded. The defendants dispute the trial court's finding that the above-quoted language compels a favorable disposition of the plaintiffs' claim, arguing that the definition of "lot" given in section 2 of the zoning ordinance must be applied to the term as used in section 13-B. The section 2 definition provides that a "lot" is a parcel of land "having its principal frontage upon a street" (Emphasis added.) 
*169 "Street" is in relevant part defined in the same section as "an officially approved private road of not less than forty feet in width...." The defendants contend that inasmuch as the property in issue fronts on a road which is only thirty feet in width, it cannot be characterized as a "lot" so as to fall within the coverage of section 13-B. The first issue raised by this appeal is whether the trial court should be upheld in its ruling that plaintiffs are entitled to a building permit as a matter of law. Since we hold herein that the trial court did rule correctly, we need not reach the second issue, whether the plaintiffs are entitled to a variance. In seeking to impose the definitions in section 2 on the language of section 13-B, the defendants misread the ordinance. Section 2 specifies that the definitions provided therein are to be applied throughout the ordinance "unless otherwise expressly stated." (Emphasis added.) Section 13-B does so expressly state, for it provides that dwellings may be erected on nonconforming lots of record "notwithstanding limitations imposed by other provisions of this ordinance," as long as various other requirements found in the regulations have been met. (Emphasis added.) Since the agreed statement of facts indicates that all such requirements have been satisfied, section 13-B entitles the plaintiffs to a building permit. In any case, the term used in section 13-B is not simply "lot" but rather "lot of record." The ordinance does not define "lot of record" in section 2. However, a reasonable reading of section 13-B itself shows the term to be used in its preordinance sense to refer to all properties designated as "lots" on subdivision plans recorded in the Rockingham County Registry of Deeds prior to the time the zoning ordinance took effect. 
To interpret the term otherwise would be contrary to the plain meaning and intent of section 13-B, for such a construction would render that section meaningless and defeat the "grandfather" provisions which seek specifically to except nonconforming lots of record in existing subdivisions. The evident intention of the promulgating authority as revealed by the ordinance itself cannot be defeated by giving a single word an unnecessary meaning. North Hampton &c. Ass'n v. Commission, 94 N.H. 156, 158, 48 A.2d 472, 474 (1946). See also Hackett v. Gale, 104 N.H. 90, 92, 179 A.2d 451, 453 (1962). Defendants' argument that the issuance of a building permit to the plaintiff is further prohibited by section 3.7 of the building code is without merit, for that section deals with "new streets," not roadways shown on a plot plan recorded in the registry of deeds *170 prior to the adoption of the zoning ordinance. Section 13-B of the ordinance specifically exempts roads of the latter description. Similarly, RSA 36:26 is not a bar, for that section provides that building permits may be issued where the street giving access to the lot on which the building is proposed to be placed "corresponds in its location and lines with a street shown on the official map or with a street on a subdivision plat approved by the planning board...." The enactment of section 13-B by the town must be interpreted as the town's grant of approval to all subdivisions preexisting the ordinance, for otherwise section 13-B would be robbed of much of the meaning it was clearly intended to have. The use of the word "street" in RSA 36:26 poses no problem in terms of the width of the right-of-way in question, for the word is defined for purposes of that statute by RSA 36:1 VII to include all ways, regardless of their dimensions. Accordingly, the ruling of the superior court is upheld. Exceptions overruled. All concurred.
Lethrinus microdon

Lethrinus microdon is a species of emperor fish. It is a marine fish, bluish-grey or brown in colour with pale or somewhat orange fins. This species is reef-associated and is often found in small schools, occasionally with Lethrinus olivaceus at depths of 10 to 80 metres. It is widespread in the Indo-West Pacific and other waters. This species is caught commercially and is considered to be an excellent food fish.

Common names

Common names include the following, or variants thereof:
Smalltooth emperor
Longface emperor
Longnosed emperor
Pigface bream

Description

This species is bluish-grey or brown in colour with pale or somewhat orange fins, and has a moderately long snout. It commonly has dark, scattered, irregular blotches on its sides. Some specimens have three streaks of dark colouration radiating away from the eye toward the snout. It is a relatively elongate fish and grows to a maximum length of approximately 70 cm, but is commonly recorded at between 30 and 50 cm in length.

Distribution

Lethrinus microdon is a widespread species. It has been recorded in the Red Sea, Persian Gulf, Arabian Sea, from East Africa to Sri Lanka, in the Ryukyu Islands as well as Papua New Guinea.

Habitat

This fish is non-migratory and is found over sandy bottoms near reefs. It forms small schools, occasionally with Lethrinus olivaceus, and has a maximum depth range of approximately 10 to 80 metres.

Diet

Lethrinus microdon feeds in the day and at night, and is known to feed mainly on other fishes, cephalopods, crustaceans, and polychaetes.

Human uses

This species is fished commercially and is considered to be an excellent food fish. It is usually marketed fresh and not frozen. It is known to be caught using gill nets, trawls, handlines, and fish traps.

References

External links

Category:Lethrinidae
Category:Fish described in 1830
Posts

It has been about a week since my OrdBot's first successful calibration print and it seems like a good time to write down some of my initial problems, experiences and solutions. However, before I get into that, a review of my 3D printer stack.

Electronics
AzteegX1 v1.0 (644P processor)
Lava heatbed, I have only used PLA at this point so this is still disconnected.

Software
Mac OSX
Marlin firmware
Slic3r v0.9.9
Pronterface (March 2012 release)
Repetier-host Mac (0.56, lower version than linux/windows releases for some reason)

Now for the problems encountered during the past week, approximately in the order in which I ran into them.

Pololu A4988 - I have used these before on my ShapeOko CNC machine. They have been so reliable that I completely forgot how to set them up, as such I blew up 3 drivers while setting up the electronics. So b…

I've been a fan of MakerSlide ever since building my ShapeOko CNC Mill, and have been interested in 3D printing since first hearing about the RepRap project in 2007. So when I first saw the OrdBot, I knew that it would be the printer I build. It didn't hurt that I had 10 feet of extra MakerSlide and a whole bunch of the special bearings and eccentric spacers left over from my ShapeOko build. One of my goals was to build all the custom parts myself, the blue and black pieces in the photo above. Cutting aluminum on my new CNC machine pushed it to the limit, but worked out in the end. One of the larger OrdBot pieces is the handle, here is a shot of the ShapeOko making short work of it:

There are a couple parts that I modified or upgraded during the build. Most notably the Z axis. After reading about bent Z-rods I decided to get some ACME rods, and I wanted to use some spare NEMA-23 motors that I had on hand. The NEMA-23 motors for the Z axis were easy, I slightly modified the sto…

April of 2012 I signed up for the first batch of ShapeOko kits from inventables.com.
Unsure of how popular the kit would be, inventables had a kickstarter-style order of 150 (or so) kits. That number was reached handily and several more batches followed. Since then the ShapeOko has become a standard item in their store.

I've wanted a CNC mill for a long time, but could never justify the expense. Now there are products like the MakerSlide linear rail system that made it possible for low cost machines. The first round of kits were only $200 for the entire mechanical platform - add your electronics and a dremel tool and the machine can start cutting. So that's what I did. The stock kit plus motors after assembly:

One of the nice things about the ShapeOko is how hackable it is. For instance, if you want to make the cutting area larger you can just replace the MakerSlide with longer rails. So I added longer rails, a second Y-axis motor, a torsion box to mount everything on, a bigger rout…

Will Winder is a software developer. In his four years of study at UNH he took a variety of advanced Computer Science courses including Object Oriented Design, Computer Networks, Artificial Intelligence and Compiler Design. He has been working professionally using C, C++ and Java since graduating in 2006. In his free time he continues to expand his skills by involving himself in many projects, some of which can be seen on this blog.
Cholecystokinin (CCK) is a gastrointestinal hormone which is produced by and released from duodenal and jejunal mucous membranes, and is known to have actions such as secretion of pancreatic juice, gallbladder constriction, and stimulation of insulin secretion. CCK is also known to be present at high concentration in the cerebral cortex, hypothalamus, and hippocampus. CCK is also known to exhibit various actions, including inhibition of eating and hunger, augmentation of memory, and generation of anxiety. Meanwhile, gastrin is a gastrointestinal hormone which is produced by and released from G-cells distributed in the pylorus. Gastrin is also known to exhibit actions such as secretion of gastric acid and constriction of the pylorus and gallbladder. CCK and gastrin, having the same five amino acids in their C-terminals, exert the aforementioned actions via receptors. The receptors of CCK are classified into CCK-A receptors, which are of the peripheral-type and are distributed in the pancreas, the gallbladder, and the intestines; and CCK-B receptors, which are of the central-type and are distributed within the brain. Since gastrin receptors and CCK-B receptors show similar properties in receptor-binding experiments, and thus are proven to have high homology, they are often called CCK-B/gastrin receptors. 
Compounds having antagonism to these receptors, i.e., gastrin or CCK-B receptor, are expected to be useful for prevention and treatment of the following diseases and disorders: gastric ulcer, duodenal ulcer, gastritis, reflux esophagitis, pancreatitis, Zollinger-Ellison syndrome, vacuolating G-cell hyperplasia, basal-mucous-membrane hyperplasia, inflammation of the gallbladder, attack of biliary colic, motor disorders of alimentary canal, irritable bowel syndrome, certain types of tumors, eating disorders, anxiety, panic disorder, depression, schizophrenia, Parkinson's disease, tardive dyskinesia, Gilles de la Tourette syndrome, drug dependence, and drug-withdrawal symptoms. Moreover, the compounds are expected to induce pain relief or to augment the pain-relieving effect of opioid analgesics (Folia Pharmacologica Japonica, Vol. 106, 171-180 (1995), Drugs of the Future, Vol. 18. 919-931 (1993), American Journal of Physiology, Vol. 269, G628-G646 (1995), American Journal of Physiology, Vol. 259, G184-G190 (1990), European Journal of Pharmacology, 261, 257-263 (1994), Trends in Pharmacological Science, Vol. 15, 65-66 (1994)). Proglumide, which is a drug having gastrin receptor antagonism, has conventionally been known as a remedy for gastric ulcer and gastritis. However, proglumide has a very weak affinity with gastrin or CCK-B receptors, and has a low curative effect. It is described that some 1,4-benzodiazepine derivatives--such as L-364,718 (Dibazepaido, Japanese Patent Application Laid-Open (kokai) No. 63666/1986) and L-365,260 (Japanese Patent Application Laid-Open (kokai) No. 238069/1988)--exhibit CCK-A receptor antagonism or CCK-B receptor antagonism. It is also known that compounds having strong CCK-B receptor antagonism suppress secretion of gastric acid stimulated by pentagastrin (WO 94/438 and WO 95/18110). However, these compounds do not provide satisfactory effects when administered in vivo. 
Drugs which exhibit gastrin or CCK-B receptor antagonism and are clinically useful have not yet been provided. Compounds that bind strongly to gastrin or cholecystokinin receptors are expected to be useful as remedies for, and in the prevention of, diseases of the alimentary canal and the central nervous system associated with the respective receptors.
Q: MultiSelectable ListView - ItemSelected is not called

I have created a simple MultiSelectListView as shown below, but somehow the ItemSelected event is not fired. Can somebody please tell me what is wrong, as I think I have tried everything?

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using Xamarin.Forms;

namespace <myProject>.Views
{
    class MultiSelectListView<T> : ContentView
    {
        private ListView _lstView;
        private IEnumerable<WrappedSelection<T>> _wrappedItems;

        public IEnumerable<T> Items
        {
            get { return _wrappedItems?.Select(i => i.Item) ?? new T[0]; }
            set
            {
                _wrappedItems = value?.Select(item => new WrappedSelection<T> { Item = item });
                _lstView.ItemsSource = _wrappedItems;
            }
        }

        public MultiSelectListView(string bindingProperty, IEnumerable<T> items = null)
        {
            if (string.IsNullOrWhiteSpace(bindingProperty))
                throw new ArgumentNullException(nameof(bindingProperty));
            WrappedItemSelectionTemplate.BindingProperty = bindingProperty;
            Content = _lstView = new ListView(ListViewCachingStrategy.RecycleElement)
            {
                ItemTemplate = new DataTemplate(typeof(WrappedItemSelectionTemplate))
            };
            _lstView.ItemSelected += (sndr, e) =>
            {
                if (e.SelectedItem == null)
                    return;
                var wrappedSelection = e.SelectedItem as WrappedSelection<T>;
                if (wrappedSelection != null)
                    wrappedSelection.IsSelected = !wrappedSelection.IsSelected;
                var lstView = sndr as ListView;
                if (lstView != null)
                    lstView.SelectedItem = null;
            };
            Items = items;
        }
    }

    class WrappedItemSelectionTemplate : ViewCell
    {
        public static string BindingProperty { get; internal set; }

        public WrappedItemSelectionTemplate()
        {
            var name = new Label { LineBreakMode = LineBreakMode.WordWrap, Style = Styles.GetTextStyle() };
            name.SetBinding(Label.TextProperty,
                new Binding($"Item{(!string.IsNullOrEmpty(BindingProperty) ? $".{BindingProperty}" : "")}"));
            var grid = new Grid
            {
                Children = { name },
                ColumnDefinitions = { new ColumnDefinition { Width = new GridLength(1, GridUnitType.Star) } }
            };
            View = grid;
        }
    }

    class WrappedSelection<T> : INotifyPropertyChanged
    {
        private bool _isSelected;
        public event PropertyChangedEventHandler PropertyChanged = delegate { };

        public bool IsSelected
        {
            get { return _isSelected; }
            set
            {
                if (_isSelected != value)
                {
                    _isSelected = value;
                    PropertyChanged(this, new PropertyChangedEventArgs(nameof(IsSelected)));
                }
            }
        }

        public T Item { get; set; }
    }
}

A: "I have created a simple MultiSelectListView as shown below, but somehow the ItemSelected event is not fired."

I have tested your code and reproduced your issue. I resolved the issue by converting _wrappedItems to a List<>. Inside the MultiSelectListView class, modify the code like below:

public IEnumerable<T> Items
{
    get { return _wrappedItems?.Select(i => i.Item) ?? new T[0]; }
    set
    {
        _wrappedItems = value?.Select(item => new WrappedSelection<T> { Item = item }).ToList();
        _lstView.ItemsSource = _wrappedItems;
    }
}

When the ListView.SelectedItem changes, the ItemSelected event will be triggered. But when the ItemsSource is a lazily evaluated IEnumerable returned from LINQ, the SelectedItem of the ListView stays null, so the event is not triggered correctly. Changing the ItemsSource of the ListView (_wrappedItems) to a list resolves the problem.
Q: Can I safely connect audio out to audio in with an aux cable?

This question might be really dumb. But can I safely connect the audio output from one PC to the audio/microphone input on another PC, using an aux cable? I want to record audio from one PC on another PC using Audacity. What I mean by "safe" is: will it short out or anything crazy? Also, can I connect the audio out to the audio in on the same PC without problems? I know sound is just a wave, but I don't know if that applies here. Forgive me if this is the wrong forum for this question, or if this question is really stupid. This is just a random image I found online. However, just in case I don't know the proper names, these are the ports I'm talking about.

A: The only stupid question is the one you don't ask. There. You're off THAT hook. ;-)

Audio Out on PC1 to Audio In on PC2 is a perfectly reasonable thing to do and won't harm anything. If the PCs have Line In and Line Out connections, those will likely provide better results. But in many cases you can record from PC1's sound output directly to Audacity running on PC1. No need for a second computer.

https://manual.audacityteam.org/man/tutorial_recording_computer_playback_on_windows.html
Giorgio Grilz

Giorgio Grilz (30 July 1930 – 3 December 2018) was an Italian swimmer. He competed in the men's 200 metre breaststroke at the 1952 Summer Olympics.

References

Category:1930 births
Category:2018 deaths
Category:Italian male swimmers
Category:Olympic swimmers of Italy
Category:Swimmers at the 1952 Summer Olympics
Category:Sportspeople from Trieste
Kaolinite is a clay mineral, part of the group of industrial minerals, with the chemical composition Al2Si2O5(OH)4. It is a layered silicate mineral, with one tetrahedral sheet linked through oxygen atoms to one octahedral sheet of alumina octahedra. Rocks that are rich in kaolinite are known as kaolin or china clay. The name is derived from Chinese Kao-Ling, a village near Jingdezhen, Jiangxi province, China. The name entered English in 1727 from the French version of the word, kaolin, following Francois Xavier d'Entrecolles's reports from Jingdezhen. In Africa, kaolin is sometimes known as kalaba (in Gabon and Cameroon), calaba, and calabachop (in Equatorial Guinea).

Mecca for Export and Supply is one of the companies working in the field of import and export of phosphate, mining ores, and phosphate fertilizers to its customers in the Middle East, Near East, and Asia.
In magnetic recording media, as used in hard disk storage devices, information is written to and read from magnetic elements that represent digital bits. In order to increase the amount of information that can be stored within a given area, the size and distance between these magnetic elements may be reduced so that they may be more densely positioned. At the same time, in order to increase production volume and decrease production cost, the speed at which disks are written to and read from when preparing the disks for use by an end-user may be increased. Thus, accurate location information as a function of time of the spin axis of the disks is useful. One way to increase disk production volume and decrease production cost is by increasing the speed at which the disks rotate. Accordingly, more magnetic elements may be accessed within a certain amount of time, thereby yielding more completed disks within the same amount of time. Another way to increase disk production volume and decrease production cost is by performing the same operations on more disks simultaneously, thereby requiring less manufacturing equipment.
Q: Extraction of elevation data from ICESat-2 dataset

I need ICESat-2 elevation data for a DEM validation process. I have been trying to extract only the elevation data from an ICESat-2 dataset that I downloaded from Earthdata, but I am unable to extract it successfully in ArcGIS, Global Mapper, QGIS or ERDAS Imagine, as it is in HDF5 (.h5) format. I have downloaded a couple of tools, e.g. an HDF data viewer, but still couldn't successfully extract the elevation values into .xlsx format. If someone can please guide me through the process, it would be highly appreciated.

A: You could also check out icepyx, a Python library that was created specifically for obtaining and working with ICESat-2 data in a straightforward and easy way. It's still in development to expand the available features (one of which will be reading the HDF5 files into other data formats), but it works to download data from the NSIDC. During the download process you can select which variables you want, your spatial and temporal extent, and have the data delivered in a few different file types (such as a GeoTIFF, ASCII, or other geospatial file that is easily opened with one of the software programs you mentioned). This might be an option if you're still needing help opening ICESat-2 data in the short term. Please check out our example Jupyter Notebooks for getting and subsetting data. Information/tutorials available from the University of Washington eScience Institute's ICESat-2 Hackweeks also contain examples for opening and working with ICESat-2 data (both in HDF5 and other geospatial formats). Here are last year's tutorials (they're currently being updated for this year's event).

Full Disclosure: I am the lead developer for icepyx.
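In the short term, the HDF5 files can also be read directly with h5py and dumped to a CSV that any of the GIS packages mentioned will open as delimited text. The sketch below shows that approach; note that the group and dataset names used here (gt1l/land_ice_segments with h_li, latitude, longitude, following the ATL06 land-ice product layout) are assumptions - inspect your own file first and adjust the paths. A tiny synthetic file is created so the example is self-contained.

```python
# Sketch: pull elevation values out of an ICESat-2-style HDF5 file into CSV.
# The group/dataset paths below follow the assumed ATL06 land-ice layout;
# list the groups in your own file (e.g. with f.visit(print)) and adapt.
import csv
import h5py

def write_demo_file(path):
    # Create a tiny synthetic file mimicking the assumed layout, so the
    # sketch can run without downloading real ICESat-2 data.
    with h5py.File(path, "w") as f:
        g = f.create_group("gt1l/land_ice_segments")
        g.create_dataset("latitude", data=[70.1, 70.2, 70.3])
        g.create_dataset("longitude", data=[-45.0, -45.1, -45.2])
        g.create_dataset("h_li", data=[512.3, 514.8, 509.9])

def extract_elevations(h5_path, csv_path, beam="gt1l"):
    # Read one beam's segments and write lat/lon/elevation rows to CSV.
    with h5py.File(h5_path, "r") as f:
        seg = f[f"{beam}/land_ice_segments"]
        lat = seg["latitude"][:]
        lon = seg["longitude"][:]
        elev = seg["h_li"][:]  # land-ice height in the assumed layout
    with open(csv_path, "w", newline="") as out:
        w = csv.writer(out)
        w.writerow(["latitude", "longitude", "elevation_m"])
        w.writerows(zip(lat, lon, elev))
    return len(elev)

if __name__ == "__main__":
    write_demo_file("demo_atl06.h5")
    n = extract_elevations("demo_atl06.h5", "elevations.csv")
    print(f"wrote {n} points")
```

The resulting CSV can be loaded in ArcGIS or QGIS as a delimited-text layer, which sidesteps the HDF5 driver issues entirely.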
Circular RNA circZFR contributes to papillary thyroid cancer cell proliferation and invasion by sponging miR-1261 and facilitating C8orf4 expression. In recent years, more and more circular RNAs (circRNAs) have been identified in multiple tissues and cells. Increasing evidence shows that circRNAs play important roles in human cancers. However, the role of circRNAs in papillary thyroid carcinoma (PTC) remains largely unknown. In this study, we identified a new circRNA, circZFR, that was significantly upregulated in PTC tissues compared to adjacent normal tissues. Furthermore, the circZFR expression level was negatively correlated with clinical severity. We found that circZFR knockdown dramatically inhibited the proliferation, migration and invasion of PTC cells in vitro. Mechanistically, we found circZFR could promote C8orf4 expression by serving as a competing endogenous RNA (ceRNA) of miR-1261 in PTC cells. Rescue assays indicated that restoration of C8orf4 significantly attenuated the inhibitory effects of circZFR knockdown on PTC cell proliferation, migration and invasion. In summary, our findings demonstrated that the circRNA circZFR exerts oncogenic roles via regulating the miR-1261/C8orf4 axis in PTC, which suggests circZFR might be a potential therapeutic target.
Khara, Iran

Khara (, also Romanized as Khārā; also known as Kharaw and Khareh) is a village in Jarqavieh Olya Rural District, Jarqavieh Olya District, Isfahan County, Isfahan Province, Iran. At the 2006 census, its population was 699, in 183 families.

References

Category:Populated places in Isfahan County
Q: In R, why does example() produce non-examples?

If I type example(hist) in R, I get the following output:

hist> op <- par(mfrow = c(2, 2))

hist> hist(islands)
Hit <Return> to see next plot:

The first line in the output doesn't even contain "hist". So how is it an example of how to use "hist"? Maybe I'm not understanding this, but all I wanted to see was examples of "hist" usage. Please help me interpret the output.

A: example(hist) produces these three images:

And the following text:

hist> op <- par(mfrow=c(2, 2))

hist> hist(islands)
Waiting to confirm page change...

hist> utils::str(hist(islands, col="gray", labels = TRUE))
List of 7
 $ breaks : num [1:10] 0 2000 4000 6000 8000 10000 12000 14000 16000 18000
 $ counts : int [1:9] 41 2 1 1 1 1 0 0 1
 $ intensities: num [1:9] 4.27e-04 2.08e-05 1.04e-05 1.04e-05 1.04e-05 ...
 $ density : num [1:9] 4.27e-04 2.08e-05 1.04e-05 1.04e-05 1.04e-05 ...
 $ mids : num [1:9] 1000 3000 5000 7000 9000 11000 13000 15000 17000
 $ xname : chr "islands"
 $ equidist : logi TRUE
 - attr(*, "class")= chr "histogram"

hist> hist(sqrt(islands), breaks = 12, col="lightblue", border="pink")

hist> ##-- For non-equidistant breaks, counts should NOT be graphed unscaled:
hist> r <- hist(sqrt(islands), breaks = c(4*0:5, 10*3:5, 70, 100, 140),
hist+ col='blue1')

hist> text(r$mids, r$density, r$counts, adj=c(.5, -.5), col='blue3')

hist> sapply(r[2:3], sum)
 counts intensities
48.000000 0.215625

hist> sum(r$density * diff(r$breaks)) # == 1
[1] 1

hist> lines(r, lty = 3, border = "purple") # -> lines.histogram(*)

hist> par(op)

hist> require(utils) # for str
hist> str(hist(islands, breaks=12, plot= FALSE)) #-> 10 (~= 12) breaks
List of 7
 $ breaks : num [1:10] 0 2000 4000 6000 8000 10000 12000 14000 16000 18000
 $ counts : int [1:9] 41 2 1 1 1 1 0 0 1
 $ intensities: num [1:9] 4.27e-04 2.08e-05 1.04e-05 1.04e-05 1.04e-05 ...
 $ density : num [1:9] 4.27e-04 2.08e-05 1.04e-05 1.04e-05 1.04e-05 ...
 $ mids : num [1:9] 1000 3000 5000 7000 9000 11000 13000 15000 17000
 $ xname : chr "islands"
 $ equidist : logi TRUE
 - attr(*, "class")= chr "histogram"

hist> str(hist(islands, breaks=c(12,20,36,80,200,1000,17000), plot = FALSE))
List of 7
 $ breaks : num [1:7] 12 20 36 80 200 1000 17000
 $ counts : int [1:6] 12 11 8 6 4 7
 $ intensities: num [1:6] 0.03125 0.014323 0.003788 0.001042 0.000104 ...
 $ density : num [1:6] 0.03125 0.014323 0.003788 0.001042 0.000104 ...
 $ mids : num [1:6] 16 28 58 140 600 9000
 $ xname : chr "islands"
 $ equidist : logi FALSE
 - attr(*, "class")= chr "histogram"

hist> hist(islands, breaks=c(12,20,36,80,200,1000,17000), freq = TRUE,
hist+ main = "WRONG histogram") # and warning
Waiting to confirm page change...

hist> require(stats)
hist> set.seed(14)
hist> x <- rchisq(100, df = 4)

hist> ## Don't show:
hist> op <- par(mfrow = 2:1, mgp = c(1.5, 0.6, 0), mar = .1 + c(3,3:1))
hist> ## End Don't show

hist> ## Comparing data with a model distribution should be done with qqplot()!
hist> qqplot(x, qchisq(ppoints(x), df = 4)); abline(0,1, col = 2, lty = 2)
Waiting to confirm page change...

hist> ## if you really insist on using hist() ... :
hist> hist(x, freq = FALSE, ylim = c(0, 0.2))
hist> curve(dchisq(x, df = 4), col = 2, lty = 2, lwd = 2, add = TRUE)

hist> ## Don't show:
hist> par(op)
hist> ## End Don't show

hist>
hist>
hist>

If you don't hit Enter/Return, you just get what you posted, which is not the full example. Hitting Enter/Return advances the plot so you can see each image in order rather than all at once.
[Oxygen affinity of mouflon blood]. Hemoglobin phenotypes of European mouflon and sheep were analyzed by isoelectric focusing; the results show that the sheep HbA migrated faster than that of other animals, while the mouflon HbA was more anodic than the sheep HbB. The oxygen dissociation curve of blood from mouflon is similar to that of sheep with HbA. The authors suggest that the mouflon should not be considered a direct ancestor of the Sardinian breed of sheep.
Q: Coinvariant Subalgebras of Hopf Comodules and Quotients

For $H$ a Hopf algebra, let $V$ be a right $H$-comodule with coaction $\Delta_R$. Moreover, let $W$ be a subspace of $V$ such that $\Delta_R(W) \subseteq W \otimes H$, and note that this implies that $\Delta_R$ restricts to a coaction $V/W \to V/W \otimes H$. If we denote $$ V^H := \lbrace v \in V ~ | ~ \Delta_R(v) = v \otimes 1 \rbrace, $$ and analogously $$ (V/W)^H := \lbrace [v] \in V/W ~ | ~ \Delta_R([v]) = [v] \otimes 1 \rbrace, $$ where $[v]$ denotes the coset of $v$, and let $\pi:V \to V/W$ be the canonical projection, then when do we have $$ \pi(V^H) = (V/W)^H? $$

A: Let $\newcommand\Com{\mathsf{Com}^H}\newcommand\Vect{\mathsf{Vect}}\Com$ be the category of right $H$-comodules. This category has enough injectives, so we can compute the right derived functors of the left exact functor $F=\hom_{\Com}(k,\mathord-):\Com\to\Vect$, where $k$ denotes the trivial, $1$-dimensional comodule. We can write $\newcommand\Ext{\mathrm{Ext}}\Ext_{\Com}^p(k,V)=R^pF(V)$. Notice that $F(V)=V^H$ for all $V\in\Com$. If $$\tag{$\star$}0\to W\to V\to U\to0$$ is a short exact sequence in $\Com$, then we have a long exact sequence for the derived functors $R^pF$, which starts with $$0\to W^H\to V^H\to U^H\to\Ext^1_{\Com}(k,W)\to\cdots$$ We can conclude, then, as usual, that the map $V^H\to U^H$ is surjective if, for example, $\Ext^1_{\Com}(k,W)=0$. This can happen for various reasons: one obvious one is that $W$ be an injective comodule. A draconian version of this is the condition that $H$ be cosemisimple. To say something more intelligent, one would probably need to know more details about your concrete situation, though. Dually, we can use the functor $\newcommand\box{\mathbin{\Box^H}}G=(\mathord-)\box k$, the cotensor product with the trivial module $k$. Here the traditional notation for its derived functors is $\newcommand\Cotor{\mathrm{Cotor}^H}\Cotor_p(\mathord-,k)=R^pG(\mathord-)$.
The long exact sequence for the derived functors of $G$ applied to the short exact sequence $(\star)$ is now $$0\to W^H\to V^H\to U^H\to\Cotor_1(W,k)\to\cdots$$ and we see that for $V^H\to U^H$ to be surjective it is enough that $W$ be coflat.
---
abstract: 'Nowadays there is compelling evidence for the existence of dark matter in the Universe. A general consensus has been expressed on the need for a direction-sensitive detector to confirm, with a complementary approach, the candidates found in conventional searches and to finally extend their sensitivity beyond the limit of the neutrino-induced background. We propose here the use of a detector based on nuclear emulsions to measure the direction of WIMP-induced nuclear recoils. The production of nuclear emulsion films with nanometric grains is established. Several measurement campaigns have demonstrated the capability of detecting sub-micrometric tracks left by low energy ions in such emulsion films. Innovative analysis technologies with fully automated optical microscopes have made it possible to achieve the track reconstruction for path lengths down to one hundred nanometers, and there are good prospects to further exceed this limit. The detector concept we propose foresees the use of a bulk of nuclear emulsion films surrounded by a shield from environmental radioactivity, to be placed on an equatorial telescope in order to cancel out the effect of the Earth''s rotation, thus keeping the detector at a fixed orientation toward the expected direction of galactic WIMPs. We report the schedule and cost estimate for a one-kilogram mass pilot experiment, aiming at delivering the first results on the time scale of six years.'
address:
- 'INFN Sezione di Bari, Bari, Italy'
- 'INFN Sezione di Napoli, Napoli, Italy'
- 'INFN Sezione di Padova, Padova, Italy'
- 'INFN Sezione di Roma, Roma, Italy'
- 'INFN-Laboratori Nazionali del Gran Sasso, Assergi (L’Aquila), Italy'
- 'INFN-Laboratori Nazionali di Frascati, Frascati (Roma), Italy'
- 'Dipartimento di Fisica dell’Università di Bari, Italy'
- 'Dipartimento di Fisica dell’Università Federico II di Napoli, Napoli, Italy'
- 'Dipartimento di Fisica e Astronomia dell’Università di Padova, Padova, Italy'
- 'Dipartimento di Fisica dell’Università di Roma, Rome, Italy'
- 'Nagoya University and KM Institute, Nagoya, Japan'
- 'Chiba University, Chiba, Japan'
- 'JINR-Joint Institute for Nuclear Research, Dubna, Russia'
- 'SINP MSU-Skobeltsyn Institute of Nuclear Physics of Moscow State University, Russia'
- 'LPI-Lebedev Physical Institute of the Russian Academy of Sciences, Moscow, Russia'
- 'METU-Middle East Technical University, Ankara, Turkey'
author:
- 'A. Aleksandrov'
- 'A. Anokhina'
- 'T. Asada'
- 'D. Bender'
- 'I. Bodnarchuk'
- 'A. Buonaura'
- 'S. Buontempo'
- 'M. Chernyavskii'
- 'A. Chukanov'
- 'L. Consiglio'
- 'N. D’Ambrosio'
- 'G. De Lellis'
- 'M. De Serio'
- 'A. Di Crescenzo'
- 'N. Di Marco'
- 'S. Dmitrievski'
- 'T. Dzhatdoev'
- 'R. A. Fini'
- 'S. Furuya'
- 'G. Galati'
- 'V. Gentile'
- 'S. Gorbunov'
- 'Y. Gornushkin'
- 'A. M. Guler'
- 'H. Ichiki'
- 'C. Kamiscioglu'
- 'M. Kamiscioglu'
- 'T. Katsuragawa'
- 'M. Kimura'
- 'N. Konovalova'
- 'K. Kuge'
- 'A. Lauria'
- 'P. Loverre'
- 'S. Machii'
- 'A. Managadze'
- 'P. Monacelli'
- 'M. C. Montesi'
- 'T. Naka'
- 'M. Nakamura'
- 'T. Nakano'
- 'A. Pastore'
- 'D. Podgrudkov'
- 'N. Polukhina'
- 'F. Pupilli'
- 'T. Roganova'
- 'G. Rosa'
- 'O. Sato'
- 'T. Shchedrina'
- 'S. Simone'
- 'C. Sirignano'
- 'A. Sotnikov'
- 'N. Starkov'
- 'P. Strolin'
- 'Y. Tawara'
- 'V. Tioukov'
- 'A. Umemoto'
- 'M. Vladymyrov'
- 'M. Yoshimoto'
- 'S. Zemskova'
title: '[ LNGS-LOI 48/15]{} NEWS: Nuclear Emulsions for WIMP Search (NEWS Collaboration)'
---

Introduction {#sec:Intro}
============

Compelling evidence for an abundant, non-baryonic, non-luminous (dark) matter component has been collected over the last decades [@PdG]. Yet, the nature of the dark matter (DM) remains totally unknown, and the quest for an answer ranks as one of the main issues of experimental particle physics, astrophysics and cosmology. Weakly Interacting Massive Particles (WIMPs) [@ref1; @ref2] are credible, theoretically appealing DM candidates. If these massive relics of the early universe do exist, they are expected to be gravitationally bound to the baryonic visible matter. A direct search for WIMPs in the mass range from a few GeV/c$^2$ to a few TeV/c$^2$ could be based on the detection of nuclear recoils induced by WIMP elastic scattering. Cross-sections are not expected to exceed those of weak processes. The kinetic energy of scattered nuclei and consequently their range in dense matter would be determined by the WIMP mass and by its velocity relative to a terrestrial target. In the Standard Halo Model the WIMP speed in the galaxy is supposed to follow a Maxwellian distribution, showing null average values of all the velocity components. The motion of the Solar System through the galaxy, however, creates an apparent wind of dark matter particles, blowing opposite to the direction of the Sun’s motion toward the Cygnus constellation. The intensity of this wind, i.e. the WIMP flux, is expected to be time-modulated due to the Earth’s motion in the Solar System, with an annual period and a maximum rate in summer [@Spergel]. The speed of the Earth in the Solar System is, however, small compared to the speed of the Sun in the Milky Way, so the amplitude of the annual modulation is of the order of a few percent.
The DAMA experiment [@DAMA] at LNGS has indeed reported a signal with very clear evidence of annual modulation, as a possible indication of a DM-induced signal. This signal, although statistically extremely significant ($>8$ standard deviations), is controversial because many experiments have already partially or totally excluded the region allowed by DAMA. Therefore the DAMA results remain an intriguing puzzle. Figure \[fig:StateOfArt\] shows the upper limits and contour regions for the WIMP spin-independent cross sections, normalized to the scattering on a single nucleon, as a function of the WIMP mass. The constraints from SUSY models with the inclusion of LHC results are also shown. The figure was made with the `dmtools` web page [@dmtool].

![WIMP cross sections (normalized to a single nucleon) for spin-independent couplings versus mass. The DAMA/LIBRA [@DAMA] and CoGeNT [@COGENT] contour regions indicate possible signal events. The 90$\%$ C.L. upper limits for the CRESST-II [@CRESST], CDMS+EDELWEISS [@EDELWEISS], XENON100 [@Xenon100] and LUX [@LUX] experiments are shown as solid curves. The green region indicates the predictions from the Minimal Supersymmetrized Standard Model (MSSM) integrated with constraints set by LHC experiments [@MSSM]. []{data-label="fig:StateOfArt"}](figs/StateOfArt_dmtools){width="0.65\linewidth"}
In the WIMP scenario sketched above, the key inputs for the design of a directional DM search are the expected event rate and the expected angular and energy distributions of the recoiling nuclei. The expected event rate does not exceed 1 event/kg/year; such extremely low rates require strong background suppression. The WIMP mean velocity inside our galaxy is a few hundred kilometers per second at the location of our Solar System. At these velocities, WIMPs interact with ordinary matter mainly via elastic scattering on nuclei. With expected WIMP masses in the range from 10 GeV to 10 TeV, typical nuclear recoil energies are of the order of 1 $\div$ 100 keV. The expected nuclear recoil energy spectrum decreases almost exponentially with energy. To exploit directionality with light and medium-mass scattered nuclei, the required spatial accuracy is in the sub-mm domain for gaseous detectors and in the sub-$\mu$m range for solid detectors. In the first case the low event rate calls for very large volumes, while in the second case an extremely high resolution is required to cope with the very short range of the recoiling nuclei. Dark matter experiments based on solid or liquid targets are not able to measure the direction of nuclear recoils: they search for a WIMP signal as an excess of events over the expected background, possibly with an annual modulation of the event rate if sensitive enough. Gaseous detectors, on the other hand, are capable of reconstructing the three-dimensional tracks of nuclear recoils, but their mass and the corresponding sensitivity are rather limited. Current gas-based detectors such as DRIFT [@DRIFT], NEWAGE [@NEWAGE], DMTPC [@DMTPC] and MIMAC [@MIMAC] make use of low-pressure CF$_4$ with a fiducial mass ranging from 3 to 140 g [@DRIFT], thus providing limits only on the spin-dependent WIMP-proton cross-section.
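The quoted recoil-energy scale can be illustrated with standard elastic-scattering kinematics; the formula and the numerical inputs below are our own assumptions, not values from this document:

```python
# Kinematics sketch: for a WIMP of mass m_chi elastically scattering off a
# nucleus of mass m_N at relative speed v, the maximum recoil energy is
# E_R = 2 mu^2 v^2 / m_N, with mu the reduced mass (standard formula,
# assumed here; inputs are illustrative).

AMU = 0.9315  # GeV/c^2 per atomic mass unit
C = 3.0e5     # km/s, speed of light

def max_recoil_keV(m_chi_GeV, A, v_km_s=230.0):
    m_N = A * AMU                              # nuclear mass in GeV/c^2
    mu = m_chi_GeV * m_N / (m_chi_GeV + m_N)   # reduced mass
    beta = v_km_s / C
    return 2.0 * mu**2 * beta**2 / m_N * 1e6   # GeV -> keV

# A 100 GeV/c^2 WIMP on Br (A=80) vs a 10 GeV/c^2 WIMP on C (A=12):
for m_chi, A, name in [(100, 80, "Br"), (10, 12, "C")]:
    print(f"{name}: E_R_max ~ {max_recoil_keV(m_chi, A):.0f} keV")
```

Both cases land in the 1 $\div$ 100 keV window quoted above; higher energies arise only from the tail of the WIMP velocity distribution.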
The use of a solid target would make it possible to explore lower cross sections in the phase space indicated by the recent limits from direct search experiments, the challenge being the shorter track length, $O$(100 nm), resulting from the WIMP-nucleus scattering. The Nuclear Emulsions for WIMP Search (NEWS) project presented here aims at the direct detection of dark matter candidates by measuring the direction of WIMP-induced nuclear recoils. For this challenge, the detector exploits new-generation nuclear emulsions with nanometric grains. An R$\&$D programme conducted by the Nagoya University in collaboration with the Fujifilm Company has established the production of films with nanometric grains for an ultra-high spatial resolution. We report the results of this R$\&$D and the corresponding development of new, fully automated scanning systems capable of detecting such short tracks, with improved optical technologies overcoming the diffraction limit of conventional systems. We have studied the detection efficiency for nanometric tracks, using ion implantation systems to reproduce nuclear tracks of the same length as expected from WIMP-induced nuclear recoils. A section of this document is devoted to the measurements performed on the neutron yield from intrinsic film radioactivity and, more generally, to the discussion of potential background sources. Since nuclear emulsions are time insensitive, the detector will be placed on a standard equatorial telescope to keep its orientation fixed toward the Cygnus constellation. The choice of appropriate shielding materials and of the detector layout is also discussed. Finally, we propose the design and construction of a one-kilogram detector for a pilot experiment, acting as a demonstrator of the technology and aiming at its scale-up to a larger experiment. The construction, run and data analysis are planned on a time scale of about six years.
This experiment will demonstrate the potential of the technique and will start constraining the parameter space outlined by the DAMA experiment.

NIT: Nano Imaging Tracker {#sec:NIT}
=========================

After decades of remarkable experimental applications, nuclear emulsions still maintain their appeal as ionizing-particle detectors of unmatched spatial and angular resolution. The first application of fully automated scanning systems to a large-scale experiment was in the CHORUS experiment [@CHORUS]. Impressive achievements with new-generation systems, more than one order of magnitude faster, allowed the design of the OPERA experiment [@OPERA], while current developments of the technology inspire the design of high-statistics neutrino experiments with large active targets [@SHIP]. Nuclear emulsions are made of silver halide crystals embedded in a gelatine matrix. When light falls on the emulsion, or ionizing particles pass through it, some of the halide crystals are modified in such a way that they are turned into grains of silver when the emulsion is immersed in a reducing bath (the so-called *developer*). The modifications in the grains caused by the action of light or radiation are invisible, and the effect is referred to as the *formation of a latent image*. After development, a silver halide emulsion is placed in a second bath, called *fixer*, which dissolves the unaffected silver halide grains but leaves the small black granules of silver. Finally, the plate is washed and dried [@Emulsion1; @Emulsion2; @Chap2HandbookOf]. The primary function of the gelatine is to provide a three-dimensional framework which serves to locate the small halide crystals and to prevent them from migrating during development and fixation. The three-dimensional trajectory of a traversing particle can be reconstructed with an optical microscope by connecting all the silver grains produced after development.
The size of silver halide crystals in standard emulsions ranges from 0.1 $\mu$m to 1 $\mu$m. The sensitivity of the emulsion strongly depends on the size of the crystals: the larger the grain, the higher the emulsion sensitivity to ionising radiation. Due to the low recoil energy of a WIMP-scattered nucleus, the expected track length is of the order of a few hundred nanometers. State-of-the-art emulsions produced by the Fuji Co. [@OPERAemulsion] for the OPERA experiment, with a crystal size of 200 nm, are therefore not suitable for Dark Matter searches. The R$\&$D performed at Nagoya University, in collaboration with Fuji Co. experts, led to the production of novel emulsion films with grain diameters down to a few tens of nm, one order of magnitude smaller than conventional ones. The so-called Nano Imaging Trackers (NIT) and Ultra-Nano Imaging Trackers (U-NIT) have grains of 44.2 and 24.8 nm diameter, respectively (see Figure \[fig:grains\]). NIT films have a linear crystal density of about 11 crystals/$\mu$m [@NIT], while U-NIT reach 29 crystals/$\mu$m [@U-NIT]. They make the reconstruction of trajectories with path lengths shorter than 100 nm possible, if analyzed by means of microscopes with sufficient resolution.\

![Distribution of the crystal diameter measured with an electron microscope for NIT (left) and U-NIT (right) emulsions. The measurements refer to three different batches.[]{data-label="fig:grains"}](figs/NIT_grain_distribution_2){width="1.0\linewidth"}

![NIT gel production machine.[]{data-label="fig:production-machine"}](figs/production-machine_old){width="0.8\linewidth"}

NIT are produced in three steps using a dedicated machine (see Figure \[fig:production-machine\]).
First, the AgBr crystal growth is obtained by mixing AgNO$_3$ and NaBr in a thermostatic bath, exploiting the following reaction: $$\mbox{AgNO}_3 + \mbox{NaBr} \rightarrow \mbox{AgBr} + \mbox{Na}^+ + \mbox{NO}_3^-$$ Polyvinyl alcohol (PVA) is then added to ensure the uniformity of the crystal grain size. NaI, with a concentration of 4% mol, is also used in order to increase the quantum efficiency in the activation of the crystals. Next, in the desalination phase, the AgBr crystals are mixed with the gelatin while the residual extra ions (Na$^+$, NO$_3^-$) are extracted by means of a reduction process. A homogeneous crystal distribution is obtained with a centrifugation process at 1000 rpm and $50^\circ$ C. Finally, the emulsion gel obtained with this procedure (see Figure \[fig:gel\], left) is mixed with ultra-pure water and poured on a rigid support (usually plastic or glass), as shown in the right picture of Figure \[fig:gel\]. The production machine is able to produce up to 3 kg of NIT emulsion gel per week. The mass fractions of the NIT constituents and the chemical composition of NIT emulsions are reported in Tables \[tab:composition\] and \[tab:constituents\], respectively. The emulsion composition has been carefully determined for light elements by an elemental analyser (YANACO MT-6) with an uncertainty of 0.3 %. The mass fraction of silver and bromine has been measured by an energy dispersive X-ray analysis with an uncertainty of 2%. The density amounts to 3.43 g/cm$^{3}$. ![Left: emulsion gel.
Right: emulsion gel poured on a glass support.[]{data-label="fig:gel"}](figs/gel1){width="1.0\linewidth"}

  Constituent   Mass Fraction
  ------------- ---------------
  AgBr-I        0.78
  Gelatin       0.17
  PVA           0.05

  : Constituents of NIT emulsions[]{data-label="tab:composition"}

  Element   Mass Fraction   Atomic Fraction
  --------- --------------- -----------------
  Ag        0.44            0.10
  Br        0.32            0.10
  I         0.019           0.004
  C         0.101           0.214
  O         0.074           0.118
  N         0.027           0.049
  H         0.016           0.410
  S         0.003           0.003

  : Elemental composition of NIT emulsions.[]{data-label="tab:constituents"}

During the whole lifetime of the emulsion, and before development, sensitive crystals can be randomly activated by thermal excitation, resulting in the production of random dark grains: the so-called *fog* (of the order of $1 \div 10 \slash(10 \mu$m$)^3$ for OPERA emulsions) represents a potentially dangerous source of background when looking for very short tracks ($O$(100 nm)) made of only two consecutive dark grains. Indeed, if the fog density is too high, the probability that two fog grains lie close enough to mimic a signal track is not negligible. A recent R$\&$D programme led to a new chemical development procedure that suppresses the fog density by one order of magnitude: using a low-temperature ($5^\circ$C) developer based on MAA-1, a fog density of $\sim0.1 \slash(10 \mu$m$)^3$ has been achieved. Moreover, fog grains show a rather different contrast and shape with respect to radiation-sensitized grains. These important features can be exploited to enhance the signal-to-background ratio, as explained in Section \[sec:read-out\].

Experimental concept {#sec:expConcept}
====================

NEWS is a very innovative approach for a high-sensitivity experiment aiming at the directional detection of WIMPs: the detector is based on recent developments of the nuclear emulsion technology, which allow an extremely high spatial resolution to be reached.
The detector is conceived as a bulk of nuclear emulsions acting both as target and as tracking device, surrounded by a shield (see Section \[sec:set-up\]) to reduce the external background. The detector will be placed on an equatorial telescope in order to compensate for the Earth's rotation, thus keeping the orientation towards the Cygnus constellation fixed. The emulsion films will lie with their surface permanently parallel to the expected average WIMP wind direction. Figure \[fig:wimp\_direction\] shows the distribution of the WIMP incoming angle, in the laboratory frame, projected on a plane containing the average WIMP wind direction. The majority of WIMPs are directed forward, with a peak at zero. The superimposed red curve shows the same angle when one is not sensitive to the forward/backward direction. The angular distribution of the trajectories of WIMP-scattered nuclei is therefore expected to be anisotropic.

![WIMP 2-dim angle distribution on a plane containing the average WIMP wind direction (blue curve). The red curve shows the same angle if one is not sensitive to the forward/backward direction.[]{data-label="fig:wimp_direction"}](figs/wimp_angle_2D){width="0.6\linewidth"}

The presence in the emulsion gel of lighter nuclei such as carbon, oxygen and nitrogen, in addition to the heavier silver and bromine nuclei, is a key feature of the NEWS project, resulting in a good sensitivity to WIMPs of both light and heavy masses. The sensitivity indeed strongly depends on the minimum detectable track length. The path length of the recoil track depends in turn on the kinetic energy of the scattered nucleus, the kinematics being determined both by the mass of the incident WIMP and by that of the target nucleus. The correlation between the track length of the recoiling nucleus and its kinetic energy is shown in Figure \[fig:correlation\] for the different target nuclei. A WIMP with a mass of about 100 GeV/c$^2$ favours Ag and Br as targets, producing e.g.
Br recoils with an average kinetic energy of about 50 keV. Although Ag and Br are the most effective targets for WIMP masses in this range, the detection capability is reduced since their ranges are shorter than those of lighter elements at the same energy. For a WIMP with a mass around 10 GeV/c$^2$, instead, the kinematics favours lighter nuclei which, for a given kinetic energy, have a longer range. Therefore, the contribution of the C, N and O ions is essential for WIMP masses around 10 GeV/c$^2$.

![Correlation between the track length of the recoiled nuclei and their kinetic energy, for different target nuclei in NIT emulsions.[]{data-label="fig:correlation"}](figs/correlation){width="0.6\linewidth"}

The estimated WIMP rates are of the order of 1 event$\slash$kg$\slash$year, much lower than the usual radioactive backgrounds. For this reason, the detector has to be placed underground, to be protected from the cosmic-ray induced background. Moreover, a careful control of the radioactive contamination of the materials used for the detector construction and a precise estimation of the corresponding induced background are needed. We will discuss in detail the most relevant background sources for a WIMP search with an emulsion-based detector on the mass scale of a few kilograms. After the exposure, the emulsion films composing the target will be developed and the whole detector volume will be analyzed using fully automated scanning systems. The read-out (see Section \[sec:read-out\]) is performed in two phases. In the first phase a fast scanning is performed (see Section \[sec:optical-read-out\]) by means of an improved version of the optical microscope used for the scanning of the OPERA films ([@ESS; @S-UTS]). This step provides a fast pre-selection of the candidate signal tracks with a relatively low spatial resolution (200 nm).
In order to resolve the nanometric grains belonging to a signal track and to enhance the signal-to-background ratio, a further scanning of the pre-selected tracks with an ultra-high resolution scanning system is foreseen (see Section \[sec:plasmon\]). The final resolution for the reconstruction of nuclear recoil tracks is estimated to be between $10$ and $20$ nm in position and better than $15^\circ$ in angle.

Read-out technique {#sec:read-out}
==================

In the NEWS experiment the expected WIMP signal consists of short-path, anisotropically distributed nuclear recoils over an isotropically distributed background. The search for signal candidates requires the scanning of the whole emulsion volume. The read-out system therefore has to fulfill two main requirements: a fast, completely automated scanning system is needed to analyse the target volume over a time scale comparable with the exposure, and the spatial resolution has to be improved by more than one order of magnitude compared to that achieved with standard emulsion films, reaching the challenging value of a few tens of nanometers, in order to ensure high efficiency and purity in the selection of signal candidates. The analysis of NIT emulsions is performed with a two-step approach: a fast scanning with state-of-the-art resolution for the signal preselection, followed by a pin-point check of the preselected candidates with unprecedented nanometric resolution to further enhance the signal-to-noise ratio and perform very accurate measurements of the range and the recoil direction. These two steps are discussed in the next subsections.

Optical microscopy for candidate selection {#sec:optical-read-out}
------------------------------------------

[![Optical scanning systems modified for the analysis of NIT. \[fig:mic\_Nagoya\] Prototype installed at Nagoya University. \[fig:mic\_LNGS\] Prototype installed at the LNGS and Naples scanning laboratories.
\[fig:mics\]](figs/mic_Nagoya.jpg "fig:"){width="0.5\linewidth"}]{}

The members of the NEWS Collaboration own state-of-the-art experience of large-scale fast automated scanning with a spatial resolution of about $1\mu$m and an angular resolution of about 1 mrad, as currently applied in the OPERA experiment [@OPERAhowTo]: the European Scanning System (ESS [@ESS]) in Europe and the Super-Ultra Track Selector (S-UTS [@S-UTS]) in Japan. In recent years an R&D program aimed at improving the ESS performance was carried out by INFN groups, leading to prototypes with a resolution improved by one order of magnitude and a scanning speed of almost 200 cm$^2$/h [@ESS-new]. A new system is being developed in Japan, aiming at increasing the scanning speed up to 5000 cm$^2$/h. Stepping into the nano-imaging domain requires substantial upgrades of the OPERA-style scanning systems. New prototypes (see Figure \[fig:mics\]) have already been set up both in Japan and in Italy, featuring:

-   higher magnification of the objective lenses, from 50x to 100x

-   higher numerical aperture, from 0.8 to 1.45

-   higher optical contrast (illumination by reflected instead of transmitted light)

-   green or blue light to improve the resolution

-   a high pixel-to-micron ratio ($\sim$ 28 nm/pixel), one order of magnitude better than the systems used in OPERA

-   a high-resolution (4 Mpx), high-speed (563 fps) CMOS camera.

In parallel with the hardware improvements, a new acquisition software and a new tracking algorithm have been developed: the high data rate (1.7 GB/s), a factor 4 higher than that of the ESS due to the improved sensor resolution, has required the use of latest-generation acquisition boards (Matrox Radient eCL SFCL/DFCL). As a consequence, a more powerful computing system, exploiting a GPU (Graphics Processing Unit) based architecture, has been implemented.
In order to evaluate the performance of the new scanning systems, extensive tests were performed with exposures of NIT to slow ions and neutron beams. The results are discussed below. The starting point of the emulsion scanning is the image analysis, collecting clusters made of dark grains at several depths across the emulsion plate thickness. Given the intrinsic resolution of the optical microscope ($\sim$ 200 nm), a sequence of several grains making a track of a few hundred nanometers appears as a single cluster. Therefore, the key element to distinguish clusters made of several grains from clusters made of a single grain produced by thermal excitation (fog) is the analysis of their shape: a cluster made of several grains tends to have an elliptical shape with the major axis along the direction of the trajectory, while a cluster produced by a single grain tends to have a spherical shape. In order to simulate the effect of a WIMP-induced nuclear recoil and to measure the efficiency and resolution of the new optical prototype, a test beam with low-velocity ions was performed. We used both a Kr ion beam with energies of 200 and 400 keV [@ShapeAnalysis] and a C ion beam with energies of 60, 80 and 100 keV. Kr and C ions of such energies produce in emulsion tracks with lengths in the range 100$\div$300 nm. These ions were implanted in the emulsions using a low-speed ion implantation facility at Nagoya University. When analysed with the optical microscope, the submicrometric tracks produced by Kr and C ions appear as shown in Figure \[fig:KrIon-ShapeAnalysis\]: although the silver grains belonging to the tracks are not distinguishable and appear as a single cluster, the elongated shape of the cluster is clearly visible [@ShapeAnalysis2].
An elliptical fit of the cluster shape allows a clear separation between fog grains and signal tracks: the latter are expected to have an ellipticity larger than a given threshold, typically 1.25 or higher (see the left plots of Figures \[fig:shapeAnalysis60\] and \[fig:shapeAnalysis80\]). The angular distributions of 60 and 80 keV C ions are reported in the right plots of Figures \[fig:shapeAnalysis60\] and \[fig:shapeAnalysis80\], respectively. A peak corresponding to the direction of the implanted ions is clearly visible; the width of the distribution corresponds to the angular resolution, amounting to 360 mrad. The angular resolution is given by the convolution of the intrinsic resolution with the angular deviations caused by scattering in the material. For low-energy (below 100 keV) tracks, the scattering cannot be neglected. In order to evaluate the intrinsic angular resolution of the scanning system we analysed an emulsion sample exposed to a 2.8 MeV neutron beam at the Fusion Neutronics Source (FNS) of the Japan Atomic Energy Agency (JAEA). In this case the track-length distribution of the neutron-induced proton recoils extends over a wider range, up to a few hundred micrometers. A sample of tracks with lengths of the order of a few tens of micrometers, made of a sequence of several elliptical clusters, was selected, the scattering effect being negligible for them. The same ellipticity cut applied in the previous analysis was used for the selection of the clusters. For each cluster, the angular difference $\Delta \theta$ between its major axis and the fitted track was evaluated (see Figure \[fig:angularResMethod\]). The distribution of $\Delta \theta$ has a Gaussian shape, as shown in Figure \[fig:angularResAndrey\], with a width corresponding to the intrinsic angular resolution and amounting to 230 mrad.
This value represents the intrinsic angular resolution achieved with fully automated scanning systems, by far the best resolution achieved with direction-sensitive detectors in this energy range. The simulation shows that this result is compatible with the measurement reported above once the scattering contribution is included.

![Kr ions implanted in NIT films. The image is taken with an optical microscope. The selection of candidate tracks is based on the elliptical fit of the clusters.[]{data-label="fig:KrIon-ShapeAnalysis"}](figs/KrIon-ShapeAnalysis_2){width="0.6\linewidth"}

![Left: scatter plot of major and minor axes for clusters analysed with an elliptical fit in a 60 keV C ion test beam. Signal tracks are shown as red dots, fog grains in blue. Right: angular distribution of 60 keV C ion tracks selected by the ellipticity cut.[]{data-label="fig:shapeAnalysis60"}](figs/shapeAnalysis60){width="1.0\linewidth"}

![Left: scatter plot of major and minor axes for clusters analysed with an elliptical fit in an 80 keV C ion test beam. Signal tracks are shown as red dots, fog grains in blue. Right: angular distribution of 80 keV C ion tracks selected by the ellipticity cut.[]{data-label="fig:shapeAnalysis80"}](figs/shapeAnalysis80){width="1.0\linewidth"}

Tracks selected with the shape analysis were validated using the X-ray microscope [@NakaX-ray]. This technique features a higher resolution (of the order of 60 nm) but a slower scanning speed compared with optical microscopy: the analysis of a few hundred $\mu$m$^2$ takes about 100 s. X-ray microscopy can therefore only be used to check a sample of already selected candidate tracks: it was used to demonstrate the principle of selection by elliptical shape analysis and to measure the efficiency achievable with optical microscopy.
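A minimal sketch of the ellipticity-based preselection described above; the moment-based fit, the helper names and the toy pixel clusters are our own illustration, with only the 1.25 threshold taken from the text:

```python
# Sketch of the cluster-shape selection: compute the ellipticity of a pixel
# cluster from its second moments and cut at a threshold (1.25 in the text).
import math

def ellipticity(points):
    """Major/minor axis ratio from the second moments of a pixel cluster."""
    n = len(points)
    cx = sum(x for x, y in points) / n
    cy = sum(y for x, y in points) / n
    sxx = sum((x - cx) ** 2 for x, y in points) / n
    syy = sum((y - cy) ** 2 for x, y in points) / n
    sxy = sum((x - cx) * (y - cy) for x, y in points) / n
    # eigenvalues of the 2x2 covariance matrix give the axis lengths squared
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l_max, l_min = tr / 2 + disc, tr / 2 - disc
    return math.sqrt(l_max / l_min) if l_min > 0 else float("inf")

def is_track_candidate(points, threshold=1.25):
    return ellipticity(points) >= threshold

# An elongated cluster passes the cut, a round (fog-like) one does not:
elongated = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0)]
round_ish = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]
print(is_track_candidate(elongated), is_track_candidate(round_ish))  # True False
```

In a real analysis the ellipse would be fitted to grey-level-weighted pixels, but the moment-based version above captures the selection logic.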
The comparison of optical and X-ray images of candidate tracks is reported in Figure \[fig:x-ray\_confirmation\]: the high resolution of the X-ray microscope makes it possible to resolve the grains belonging to submicrometric tracks, thus providing the final discrimination between signal and background. Figure \[fig:eff\_vs\_length\] shows the detection efficiency of the optical system as a function of the track length: the efficiency is obtained by first selecting a set of multi-grain tracks with the X-ray microscope, then scanning them with the optical one and applying the shape analysis. In this test an optical microscope with a pixel-to-micron ratio of 55 nm/pixel was used. The results show that the efficiency reaches 100$\%$ above 180-200 nm. Figure \[fig:eff\_vs\_energy\] shows the efficiency as a function of the recoil energy for C ions of 60, 80 and 100 keV: the MC simulation (red line) describes the data (blue points) well. It is worth noting that the capability of reconstructing low-energy tracks (E $<$ 40 keV), corresponding to shorter path lengths, although with a lower efficiency, could significantly enhance the sensitivity in the low WIMP mass region. The scanning speed of the prototype currently used for the shape analysis is about 25 mm$^2$/h.

![Comparison between reconstructed tracks of a few hundred nanometers length with the optical microscope and with the X-ray microscope.[]{data-label="fig:x-ray_confirmation"}](figs/x-ray_confirmation_2){width="0.8\linewidth"}

![Efficiency of the elliptical fit analysis versus the track length when an ellipticity of 1.25 is used as a threshold. []{data-label="fig:eff_vs_length"}](figs/eff){width="0.6\linewidth"}

![Efficiency of the elliptical fit analysis versus the C ion energy when an ellipticity of 1.25 is used as a threshold.
The MC simulation (red line) describes the data (blue points) well.[]{data-label="fig:eff_vs_energy"}](figs/Eff-vs-Energy){width="0.8\linewidth"}

Beyond the limits of the optical scanning for candidate validation {#sec:plasmon}
------------------------------------------------------------------

The use of optical microscopes allows the reconstruction of tracks down to 200 nm. X-ray microscopy can overcome this limit, though it is extremely slow compared with automated optical systems. Since speed is an issue in the analysis of a large-mass detector, NEWS aims at improving the spatial resolution of optical microscopy itself, without resorting to X-ray microscopes. The basic idea is to exploit the resonance effect occurring when nanometric metal grains are dispersed in a dielectric medium [@ResonantLightScattering]. The polarization dependence of the resonance frequencies strongly reflects the shape anisotropy and can be used to infer the presence of non-spherical nanometric silver grains. Figure \[fig:resonantLight\] shows the results of resonant light scattering from individual Ag nanoparticles [@ResonantLightScattering]: spherical particles do not show any change of response as a function of the incident polarization, while a deformed sphere is sensitive to the polarization.

![Scattered-light spectra from individual Ag particles with spherical (left) and spheroidal (right) shape [@ResonantLightScattering]. The inset shows the 300 $\times$ 300 nm$^2$ SEM image of the particle. Arrows indicate the polarization of the incident light. A dependence of the response on the light polarization is observed for particles with ellipsoidal shape.[]{data-label="fig:resonantLight"}](figs/resonantLight){width="1.0\linewidth"}

NEWS will use this technology to retrieve track information in NIT emulsions beyond the optical resolution. Images of the same cluster taken with different polarization angles will show a displacement of the position of its barycenter.
The analysis of the displacements makes it possible to distinguish clusters made of a single grain from those made of two (or more) grains.

![Schematic view of the optical path instrumented with a polarizer to obtain a nanometric resolution with optical microscopes. []{data-label="fig:plasmon_prototype"}](figs/plasmon_prototype){width="1.0\linewidth"}

![Application of resonant light scattering to an elliptical cluster with ellipticity 1.27. Left plot: $dx$ and $dy$ are the displacements of the cluster barycenter for a given polarization in pixel units (1 pixel = 55 nm). Right plot: track slope fit and its length of about 90 nm.[]{data-label="fig:plasmon_analysis1"}](figs/plasmon_analysis1){width="1.0\linewidth"}

![Position accuracy of about 10 nm in the $x$ (left) and $y$ (right) coordinates with resonant light scattering.[]{data-label="fig:plasmon_resolution"}](figs/plasmon_resolution){width="1.0\linewidth"}

In order to study the polarized-light effect, several tests have been performed on NIT samples exposed to 100 keV C ions. The optical microscopes have been equipped with a polarization filter, as shown in Figure \[fig:plasmon\_prototype\]. The polarization direction can be changed by rotating the polarizer. The rotation is currently done by hand, while its automation is being designed. Images of the same clusters were taken by rotating the polarizer by 180$^\circ$ in steps of 10$^\circ$. An example of the analysis performed on a cluster with ellipticity 1.27 is reported in Figure \[fig:plasmon\_analysis1\]. For each image, the displacement ($dx$, $dy$) of the cluster barycenter in the $x$ and $y$ coordinates is measured in pixel units (1 pixel $=$ 55 nm). A displacement exceeding the position accuracy of a single grain is evidence for a cluster made of two consecutive grains, and therefore produced by a signal track. From the analysis of $dy$ versus $dx$ it is possible to retrieve the track length and slope.
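A minimal sketch of the $dy$-versus-$dx$ analysis; the displacement values and helper names below are hypothetical, with only the 55 nm/pixel scale taken from the text:

```python
# Illustrative fit of the cluster-barycenter displacements (dx, dy) measured
# at different polarizer angles: for a two-grain cluster the barycenters line
# up, so a straight-line fit of dy vs dx gives the track slope, and the
# excursion of the points gives its length. All numbers are hypothetical.
import math

# hypothetical displacements in pixel units (1 pixel = 55 nm)
dxy = [(0.0, 0.0), (0.3, 0.2), (0.6, 0.4), (0.9, 0.6), (1.2, 0.8)]

n = len(dxy)
mx = sum(x for x, _ in dxy) / n
my = sum(y for _, y in dxy) / n
sxy = sum((x - mx) * (y - my) for x, y in dxy)
sxx = sum((x - mx) ** 2 for x, _ in dxy)
slope = sxy / sxx                      # track slope in the film plane

# length: extent of the barycenter excursion along the fitted direction
proj = [(x - mx) + slope * (y - my) for x, y in dxy]
length_px = (max(proj) - min(proj)) / math.sqrt(1 + slope ** 2)
print(f"slope = {slope:.2f}, length = {length_px * 55:.0f} nm")
```

A displacement excursion well above the ~10 nm single-grain position accuracy would tag the cluster as a two-grain track candidate.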
For the cluster of Figure \[fig:plasmon\_analysis1\], the measured track length is 1.5 pixels, corresponding to about 90 nm. The position accuracy was evaluated by analysing images of single grains: an unprecedented accuracy of about 10 nm is achieved in both coordinates, as shown in Figure \[fig:plasmon\_resolution\]. The tests performed demonstrate that this technology is very promising and that it can replace the X-ray microscope. Resonant light scattering has, in fact, the big advantage of achieving a nanometric resolution with optical microscopes. The validation of the candidates identified by the shape analysis can thus be performed in the same scanning laboratory, without moving the samples to a dedicated laboratory for the X-ray analysis. Moreover, optical microscopes are characterized by a much faster scanning speed than X-ray microscopes, since they profit from all the R$\&$D performed over the last decades for both the OPERA and the NEWS experiments.

Expected Background {#sec:expected-bkg}
===================

The final sensitivity of low-energy rare event searches is strongly limited by the background induced by radioactivity. Two main categories have to be taken into account: the environmental (external) background and the intrinsic one. The flux of the former can be significantly reduced by placing the detector underground, to absorb the cosmic radiation, and by designing an appropriate shield against the natural radioactivity. The latter is an irreducible source of radiation: it is therefore crucial to control the radioactivity of the materials used for the construction of the detector, of the shield and of the structure of the apparatus. Background sources for dark matter searches are $\alpha$ and $\beta$ particles, $\gamma$-rays and neutron-induced recoils, while NIT are essentially insensitive to minimum ionizing particles (MIP). The main sources of $\alpha$-particles are the U and Th radioactive chains and Radon.
The $\alpha$-particles produced in those processes have energies of the order of MeV and their range in emulsion is of the order of tens of microns, far longer than that of WIMP-induced nuclear recoils. $\alpha$-particles can therefore be identified and discarded in the emulsions by an upper cut on the track length. However, the Radon progeny $^{214}$Pb, $^{214}$Bi and $^{210}$Bi emit energetic $\beta$ and $\gamma$ radiation. To prevent Radon contamination, the detector has to be kept sealed from the air and continuously flushed with boil-off nitrogen. The $\gamma$ radiation due to environmental radioactivity constitutes a non-negligible contribution to the total background budget. Figure \[fig:gamma-bkg\] shows the measured $\gamma$ flux in the LNGS underground halls [@BrunoPhDThesis; @arneodo; @wulandari]. Passive or active shielding (usually water, copper or lead) can be used to suppress the external $\gamma$-radiation down to the level of ppb or ppt. The thickness $l$ required to reduce the external flux by a factor $f > 1$ can be estimated assuming exponential damping: $l = \lambda(E_\gamma) \times \log f$, where $\lambda(E_\gamma)$ is the energy-dependent attenuation length and $E_\gamma$ is the $\gamma$-ray energy. A relevant source of background is represented by the $\beta$-rays produced in $^{14}$C decay. Given the carbon content of the emulsions and the $^{14}$C activity, a rejection power R$_{\beta}\leq10^{-8}$ is required in order to make it negligible (i.e. less than one background track/kg/year). The current rejection power for tracks made of two crystals is R$_{\beta}=10^{-6}$. In order to further improve the rejection, three possible improvements are under investigation. The first one is based on the different energy deposition per path length of WIMP-induced recoils and electrons [@gamma-response]: the response of the emulsions can be tuned by dedicated chemical treatments (e.g. tetrazolium compounds [@tetraz]).
The second possibility is to exploit the response of $\beta$-rays to polarized light scattering: grains induced by $\beta$-rays might indeed be less sensitive to polarized light. Finally, a reduction of the background can be achieved by performing a cryogenic exposure and by exploiting the phonon effect. Preliminary tests at $\sim 100$ K show an upper limit of R$_{\beta}<10^{-7}$ for tracks made by two crystals. ![$\gamma$-flux measured in the underground LNGS halls [@BrunoPhDThesis; @arneodo; @wulandari].[]{data-label="fig:gamma-bkg"}](figs/gamma-flux){width="0.7\linewidth"} Neutron-induced recoils rank as the main background source because they are not distinguishable from the expected WIMP signal, except for their isotropic angular distribution and their typical track length, which largely exceeds the range expected for WIMP-induced recoils. Indeed, while neutron-induced proton recoils can be as long as a few hundred microns, the maximum length of a WIMP-induced nuclear recoil is smaller than $1\mu$m even for large ($O$(TeV)) WIMP masses. Three types of neutron sources affect underground experiments:

- radiogenic neutrons in the MeV range, produced in ($\alpha$, n) and spontaneous fission reactions in the detector by its intrinsic radioactive contaminants;

- cosmogenic neutrons, with an energy spectrum extending to GeV energies, induced by muons penetrating underground through the rock;

- neutrons induced by environmental radioactivity.

In Figure \[fig:neutron-flux\] the measured neutron flux in the LNGS underground halls is shown [@BrunoPhDThesis]: for neutron energies of the order of a few MeV (*fast* neutrons) the flux ranges from $10^{-6}$ to $10^{-10}$ cm$^{-2}$ s$^{-1}$ MeV$^{-1}$. Light materials are effective moderators for fast neutrons: polyethylene (PE, C$_2$H$_4$) is commonly used to reduce the external neutron flux.
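As a rough numerical illustration of the exponential damping estimate introduced above, the sketch below sizes a shield thickness as $l = \lambda \ln f$. The attenuation lengths used (roughly 2 cm for MeV $\gamma$-rays in lead, 5 cm for fast neutrons in polyethylene) are assumed illustrative values, not figures taken from this text:

```python
import math

def shield_thickness_cm(atten_length_cm: float, reduction_factor: float) -> float:
    """Thickness needed to damp a flux by `reduction_factor`,
    assuming pure exponential attenuation: l = lambda * ln(f)."""
    if reduction_factor <= 1:
        raise ValueError("reduction factor must be > 1")
    return atten_length_cm * math.log(reduction_factor)

# Assumed attenuation lengths, for illustration only.
print(f"lead, 10^6 gamma reduction: {shield_thickness_cm(2.0, 1e6):.0f} cm")
print(f"PE, 10^4 neutron reduction: {shield_thickness_cm(5.0, 1e4):.0f} cm")
```

With these inputs the estimate returns a few tens of centimetres in both cases, i.e. the same scale as the 50 cm polyethylene layer considered later for the neutron shield.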
![The neutron flux measured in the underground LNGS halls [@BrunoPhDThesis].[]{data-label="fig:neutron-flux"}](figs/neutron-flux){width="0.7\linewidth"} While the external neutron flux can be reduced to a reasonable level with an appropriate shielding, the intrinsic emulsion radioactivity would be responsible for an irreducible neutron yield through ($\alpha$, n) and $^{238}$U spontaneous fission reactions. In order to estimate this contribution, a sample of each component of the nuclear emulsion (AgBr, Gelatin and PVA) has been analysed by the Chemistry Service of the Laboratori Nazionali del Gran Sasso (LNGS, Italy), with the Inductively Coupled Plasma Mass Spectrometry (ICP-MS) technique [@ICP-MS] and at the low background facility STELLA (SubTErranean Low Level Assay) of the LNGS [@STELLA] with germanium detectors. The complementary use of these techniques makes it possible to determine both the Uranium and Thorium activities and to verify the secular equilibrium hypothesis. The measured activities are reported in Table \[tab:activities-MS\] for all the constituents. The upper limits on PVA are evaluated at 95$\%$ CL. By weighting the measured activity of each constituent by its mass fraction, the total activity of the nuclear emulsion can be calculated. Using the contamination measured with mass spectrometry, the $^{238}$U activity amounts to $23\pm 7$ mBq kg$^{-1}$, while the $^{232}$Th one is $5.1\pm 1.5$ mBq kg$^{-1}$. The reported errors are dominated by the 30$\%$ uncertainty in the radioactive contamination measurements. Assuming a null contribution from PVA, the previous contaminations are reduced by $\sim 2\%$. The $\gamma$ spectrometry gives comparable results for the AgBr sample. For the gelatin the measurements provide comparable results for the $^{232}$Th chain, while the measured concentration of $^{226}$Ra in the $^{238}$U chain is about 20 times smaller than that of the parent isotope, with a measured value of $2.4\pm 0.6$ mBq kg$^{-1}$.
This measurement suggests a break in the secular equilibrium of the decay chain at this point. Therefore secular equilibrium is assumed for the upper part of this chain, using the activity measured by mass spectrometry, while, for the lower part, nuclides are considered in equilibrium with $^{226}$Ra and the activity measured with $\gamma$-spectroscopy is used. The nuclear emulsion activity for nuclides of the $^{226}$Ra sub-chain is therefore $15\pm 5$ mBq kg$^{-1}$ [@intrisicBkgPaper].

  ------------ ----------------------------------------- ----------------------------
  Nuclide      Contamination \[10$^{-9}$ g g$^{-1}$\]    Activity \[mBq kg$^{-1}$\]
  *AgBr*                                                 
  $^{232}$Th   1.0                                       4.1
  $^{238}$U    1.5                                       18.5
  *Gelatin*                                              
  $^{232}$Th   2.7                                       11.0
  $^{238}$U    3.9                                       48.1
  *PVA*                                                  
  $^{232}$Th   $< 0.5$                                   $< 2.0$
  $^{238}$U    $< 0.7$                                   $< 8.6$
  ------------ ----------------------------------------- ----------------------------

  : Measured contaminations and activities of the nuclear emulsion components: AgBr, Gelatin and PVA.[]{data-label="tab:activities-MS"}

The measured activities were used to determine the neutron yield both through a semi-analytical calculation [@refCalcFabio1; @refCalcFabio2] and through an MC simulation based on the SOURCES code [@SOURCES]. Results are reported in Table \[tab:resNeutronYield\]. The two approaches give comparable results and the flux due to the intrinsic radioactive contamination is expected to be of the order of $1.2 \pm 0.4$ neutrons per year per kilogram of nuclear emulsion. The energy spectrum of the produced neutrons, as calculated with SOURCES, is reported in Figure \[fig:SOURCES-spectrum\]. ![Total neutron energy spectrum (black line); in red the contribution from $^{238}$U spontaneous fission is shown, while in blue and green the contributions from ($\alpha$,n) reactions due to nuclides in the $^{238}$U and $^{232}$Th chains respectively are displayed [@intrisicBkgPaper].[]{data-label="fig:SOURCES-spectrum"}](figs/neutron_spectrum){width="0.7\linewidth"} In order to estimate the detectable background due to radiogenic neutrons produced by the intrinsic radioactive contamination of the nuclear emulsions, a GEANT4-based simulation was performed.
Simulated neutrons have an isotropic angular distribution and are uniformly distributed in a target where the emulsions are arranged in a stack with a surface of $25 \times 25$ cm$^2$ and a thickness of 0.5 cm; their energy spectrum was generated according to Figure \[fig:SOURCES-spectrum\]. The fraction of interacting neutrons is 20.4$\%$: they can produce either a proton or a nuclear recoil. In the former case the track length in emulsion extends up to several hundred $\mu$m (see Figure \[fig:proton\_recoils\]) while nuclear recoils show shorter track lengths, not exceeding 3 $\mu$m for light nuclei (C, N, O) and 1 $\mu$m for heavy nuclei (Ag, Br, I) (see Figure \[fig:nuclear\_recoils\]). The overall fraction of neutron-induced recoils contributing to the background is computed by accounting for recoil tracks with lengths above the read-out threshold. Moreover, an upper limit on the track length can be introduced, since the signal is expected to be below 1 $\mu$m even for large ($O$(TeV)) WIMP masses (see Figure \[fig:maximumRange-vs-WIMPmass\]). The fractions of neutron-induced recoils below this cut, as a function of the read-out threshold, are reported in Table \[tab:recoils1\]: only a fraction, ranging from 5% to 10%, contributes to the background. A further reduction of $\sim 70\%$ of the neutron-induced background can be achieved by exploiting the directionality information with the cut $-1 < \phi < 1$. Under these assumptions, the detectable neutron-induced background would be 0.02 $\div$ 0.03 events per year per kilogram.
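One plausible way to combine the numbers quoted above is to multiply the intrinsic neutron yield by the detectable fraction at each threshold and by the $\sim$30% survival of the directionality cut. This is a sketch only: whether the 20.4% interaction probability is already folded into the tabulated fractions is an assumption here, not something the text states explicitly:

```python
neutron_yield = 1.2   # intrinsic radiogenic neutrons [kg^-1 yr^-1]
cut_survival = 0.30   # fraction surviving the -1 < phi < 1 directionality cut

# Detectable fractions vs read-out threshold, from Table [tab:recoils1]
fractions = {50: 0.100, 100: 0.075, 150: 0.060, 200: 0.052}

for thr_nm, frac in fractions.items():
    rate = neutron_yield * frac * cut_survival
    print(f"threshold {thr_nm:3d} nm -> {rate:.3f} events / kg / yr")
```

With these inputs the rates span roughly 0.02 to 0.04 events/kg/yr, consistent with the 0.02 $\div$ 0.03 range quoted above.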
![Track length (left) and energy spectrum (right) for proton recoils produced by elastic (blue curve) and inelastic (red curve) processes.[]{data-label="fig:proton_recoils"}](figs/proton_recoils_G4){width="1.0\linewidth"} ![Track length (left) and energy spectrum (right) for heavy (blue curve) and light (red curve) nuclei.[]{data-label="fig:nuclear_recoils"}](figs/nuclear_recoils_G4){width="1.0\linewidth"} ![Maximum range expected for nuclear recoils as a function of the WIMP mass for the various nuclei.[]{data-label="fig:maximumRange-vs-WIMPmass"}](figs/maximum_range_vs_WIMPmass){width="0.7\linewidth"}

  ------------------------------------- ------------------------ -----------------------------
  Process                               SOURCES simulation       Semi-analytical calculation
                                        \[kg$^{-1}$ y$^{-1}$\]   \[kg$^{-1}$ y$^{-1}$\]
  ($\alpha$, n) from $^{232}$Th chain   0.12                     $0.10 \pm 0.03$
  ($\alpha$, n) from $^{238}$U chain    0.27                     $0.26 \pm 0.08$
  Spontaneous fission                   0.79                     $0.8 \pm 0.3$
  Total flux                            1.18                     $1.2 \pm 0.4$
  ------------------------------------- ------------------------ -----------------------------

  : Neutrons per kilogram per year due to ($\alpha$, n) and spontaneous fission reactions in the nuclear emulsion, evaluated with the SOURCES code and with the semi-analytical calculation, using the measured $^{238}$U and $^{232}$Th contaminations as input.[]{data-label="tab:resNeutronYield"}

The neutron-induced background due to the intrinsic radioactive contamination is low enough to allow the design of an emulsion detector with an exposure of about 10 kg$\cdot$year. A careful selection of the emulsion components and a better control of their production could further increase the radiopurity, thus extending the detector mass and exposure time. In particular, since the activity of the gelatin is higher than that of the other emulsion components (see Table \[tab:activities-MS\]) and since PVA shows a very low radioactivity level, we are studying a possible replacement of the gelatin with PVA.
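The mass-fraction weighting described above can be sketched as follows. The component activities are those of Table \[tab:activities-MS\] (PVA upper limits taken at face value), while the mass fractions are hypothetical placeholders chosen only for illustration; the actual NIT composition is not quoted in this text:

```python
# name: (ASSUMED mass fraction, U-238 activity, Th-232 activity [mBq/kg])
components = {
    "AgBr":    (0.80, 18.5,  4.1),
    "Gelatin": (0.15, 48.1, 11.0),
    "PVA":     (0.05,  8.6,  2.0),  # upper limits used as values
}

# Total emulsion activity = sum of component activities weighted by mass fraction.
u238  = sum(f * u for f, u, t in components.values())
th232 = sum(f * t for f, u, t in components.values())
print(f"U-238: {u238:.1f} mBq/kg, Th-232: {th232:.1f} mBq/kg")
```

Even with these placeholder fractions, the gelatin term dominates the $^{238}$U budget, which is why replacing it with low-activity PVA is attractive.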
In nuclear emulsion-based detectors the instrumental background is due to the so-called *fog* grains, i.e. dark grains produced by thermal excitation. The fog density determines the probability of random coincidences of two or more fog grains mimicking a WIMP-induced nuclear recoil. The measured value of the fog density for current NIT samples is about 0.1 grains/(10$\mu$m)$^3$. The number of background tracks due to random coincidences of fog grains depends on the minimum number of grains required to build a track and increases with the track length, as shown in the left plot of Figure \[fig:combinatorial\_bkg\], where the instrumental background for a 1 kg emulsion target is reported. In NIT (U-NIT) emulsions a track made of 2 grains has an average length of about 100 nm (50 nm). The number of background tracks corresponding to this track length amounts to 10$^4$ (10$^3$), as outlined by the red arrows on the plot.\
In order to make the combinatorial background smaller than one, the coincidence of at least 3 grains has to be required. In NIT (U-NIT) emulsions a track made of a sequence of 3 grains has on average a path length of about 200 nm (100 nm): the corresponding background level is 0.3 tracks ($4\times10^{-3}$ tracks).\
The right plot in Figure \[fig:combinatorial\_bkg\] shows the number of background tracks as a function of the fog density in NIT emulsions, if 2-grain tracks are accepted: the background can be considered negligible only by reducing the fog density from the current value to 10$^{-3}$ grains/(10$\mu$m)$^3$. Preliminary tests show that a value of 0.03 grains/(10$\mu$m)$^3$ can be obtained using purified gelatine. Further purification might lead to lower fog values. This research line will be followed in collaboration with the firm producing the gelatine.\
In order to further reduce the fog density, two possible improvements are under study.
The first one exploits the response of fog grains to the polarized light scanning: fog grains indeed show both a different image contrast and a different size with respect to the grains sensitized by a nuclear recoil. This effect is essentially due to the different $dE/dx$ of the two processes and offers a powerful discrimination against this kind of background. Moreover, a reduction of the fog density can be achieved by operating the detector at low temperature (from simple refrigeration down to a cryogenic regime of $\sim 80$ K) or by applying dedicated chemical treatments. ![Left: number of background tracks in 1 kg of NIT emulsions as a function of the track length for tracks made by two (continuous red line) and three fog grains (dashed blue line). Right: number of background tracks in 1 kg of NIT emulsions as a function of the fog density for 50 nm (continuous green line), 100 nm (dashed black line) and 200 nm (dotted-dashed magenta line) thresholds in the track length. Only tracks made by two grains are considered here. []{data-label="fig:combinatorial_bkg"}](figs/combinatorial_bkg_arrow){width="1.0\linewidth"} Finally, the requirement of a background-free experiment makes it necessary to operate in a clean environment in order to avoid surface contamination. Moreover, in order to reduce the activation risk of detector materials, an underground location for the emulsion production and handling facilities is required. The construction of a (dark) clean room in the Gran Sasso underground laboratory is therefore needed.

  Threshold \[nm\]   Fraction
  ------------------ ----------
  50                 0.100
  100                0.075
  150                0.060
  200                0.052

  : Fraction of detectable neutron-induced recoils as a function of the read-out threshold.[]{data-label="tab:recoils1"}

Experimental set-up {#sec:set-up}
===================

As a first phase of the project, we plan to perform a pilot experiment with a detector of 1 kg exposed for one year. Details of the related schedule will be examined in Section \[sec:schedule\].
A detector with a one kg mass of NIT can be made of 50 $\mu$m thick films assembled in a stack of 100 planes with a surface of $25 \times 25$ cm$^2$. We are considering the option of embedding OPERA-like films between two consecutive NIT planes: OPERA-like films would act as a monitoring system to register, with micrometric accuracy and high sensitivity, all the radiation integrated by the detector during the exposure. Being composed of the same raw materials, the OPERA-like films would have an intrinsic radioactivity of the same order of magnitude as that of NIT, and therefore tolerable for a 1 kg detector. The emulsion planes are placed with their surface parallel to the expected WIMP wind direction. We might consider placing an equivalent amount of emulsion films in an orthogonal plane. These films would act as a control sample. Should a signal be found in the first sample, and only in this case, the scanning of these films would be performed to demonstrate that the signal found is not an artefact. To maintain the detector with a fixed orientation towards the Cygnus constellation, it will be installed on an Equatorial Telescope (see Figure \[fig:detector\]), making it possible to cancel out the effect of the Earth's rotation and thus keeping the detector pointed at a fixed position in the sky. An equatorial telescope has two axes: the so-called Polar Axis, parallel to the rotation axis of the Earth and pointed to the North celestial pole, and the Declination Axis, perpendicular to the polar one. The motion of the Earth can be canceled out by driving the Polar Axis at a constant speed, synchronised with the apparent daily motion of the sky. The Polar Axis will be motorized and both axes will be equipped with precise encoders to constantly check the position of the mechanics with high accuracy. The detector will therefore be pointed towards the Cygnus constellation and kept in that direction with an accuracy better than 1 degree.
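The constant-speed drive of the Polar Axis is set by the sidereal rate, and the 1-degree pointing budget constrains how accurate that rate must be. A minimal sketch (the 0.1% rate error is an arbitrary example, not a specification from the text):

```python
SIDEREAL_DAY_S = 86164.0905  # one sidereal day [s]

# Drive rate needed to cancel the apparent daily motion of the sky.
rate_arcsec_per_s = 360.0 * 3600 / SIDEREAL_DAY_S
print(f"sidereal drive rate: {rate_arcsec_per_s:.3f} arcsec/s")

def hours_to_drift(fractional_rate_error: float, budget_deg: float = 1.0) -> float:
    """Time for an uncorrected fractional drive-rate error to consume the pointing budget."""
    drift_arcsec_per_s = fractional_rate_error * rate_arcsec_per_s
    return budget_deg * 3600 / drift_arcsec_per_s / 3600

print(f"0.1% rate error -> 1 degree drift in about {hours_to_drift(1e-3):.0f} h")
```

At this level even a persistent 0.1% rate error would stay within the budget for over two days, so the encoders mainly need to catch slow, systematic drifts.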
A calibration procedure of the telescope will be performed before the installation in the underground laboratory. To ensure a precise synchronization of the mount with the apparent daily motion of the sky, it is necessary to tune the response of the mechanics and to correct for any possible periodic error. The calibration procedure foresees several steps. The mount will at first be tested in the external laboratory using an optical telescope mounted on it and aligned with the Polar Axis: the telescope will be used, during the night, to point at a star in the Cygnus constellation. Using dedicated software and an imaging CCD camera attached to the prime focus of the telescope, the mount will be guided to keep the star centered in the field of view of the CCD camera. The software will record all the guiding parameters, such as the position of the star and the corrections applied to the Polar and Declination Axes. This procedure will be repeated during several nights and all the data collected will be analyzed in order to derive and apply the necessary corrections to the mechanics and to the electronic system. In a second phase the mount will be used throughout the whole day to compensate the apparent daily motion: the position during the night will then be measured in order to evaluate the pointing accuracy, given by the difference between the nominal and measured positions. This measurement will provide a fine tuning of the position of both the Polar and the Declination axes. Finally the mount will be moved underground to its final position: profiting from the presence, in the underground halls, of already existing high precision reference points, the mount will be aligned with high accuracy in the north-south direction in order to set the Polar Axis parallel to the rotation axis of the Earth. The design and construction of the equatorial telescope will be carried out in collaboration with specialized firms.
A screening of all the materials used in the construction of the telescope is foreseen in order to evaluate their intrinsic radioactivity. A detailed simulation of all the components of the telescope is planned. A large telescope supporting both the target and the surrounding shield is considered (see Figure \[fig:detector\]). This configuration makes it possible to build a light shield while keeping the background originating from the telescope itself low. ![Schematic view of the detector structure.[]{data-label="fig:detector"}](figs/Mount_1){width="0.7\linewidth"} In Figure \[fig:detector\] a schematic view of the detector structure is shown: a stack of NIT films is placed at the center of a plexiglass sphere with a diameter of 30 cm. A sphere of polyethylene will act as a shield against the external neutron background. The target and the shielding are installed on the equatorial telescope. The target emulsions are arranged in such a way as to have the film surface parallel to the WIMP wind. From a preliminary evaluation, a layer of 50 cm of polyethylene will reduce the external neutron flux by a factor of the order of $10^{4}$: considering an integrated flux of the order of $\phi_n \sim 2 \times 10^{-6}$ cm$^{-2}$ s$^{-1}$, for a target with an exposed surface of $25 \times 25$ cm$^2$ and a thickness of 0.5 cm this corresponds to a residual flux of the order of 1 neutron/kg/year, the same order of magnitude as the intrinsic neutron contamination. More accurate evaluations of the polyethylene thickness sufficient to provide the required background rejection power are under study. The addition of a thin ($1\div 2$ cm) layer of Cadmium to capture thermalised neutrons is also being considered. As explained in Section \[sec:expected-bkg\], NIT have a high electron rejection power: a proper chemical treatment makes it possible to reach a reduction factor of the order of 10$^{-6}$ in the sensitivity to electrons.
For this reason the use of high-Z shielding materials (Pb and Cu) against the external $\gamma$ flux is not foreseen at the moment. Both the passive shield and the emulsion target will be enclosed in a sealed plexiglass box kept in a High Purity (HP) Nitrogen atmosphere in slight overpressure with respect to the external environment, to prevent radon contamination. The shape of the shield surrounding the detector will be optimized in order to obtain the lightest and most efficient structure. Two solutions are under study: either a parallelepiped box containing the shielding and the emulsion target, or a spherical one. In the first case the weight of the PE layer is of the order of 1.8 ton, while in the spherical option the weight is $\sim$1.14 ton. Even though the latter option ensures a lighter and symmetrical shielding, the final choice will depend on the cost and on the technical implementation of the design. Nevertheless, we are also investigating a completely different approach, based on the use of water as shielding material against the external background. A preliminary layout is shown in Figure \[fig:WaterOption\]: the emulsion detector is hermetically enclosed inside a spherical container made of low-Z material (teflon or polyethylene) with a diameter of 55 cm. The inner volume is flushed with N$_2$. The container is mounted on a long shaft and positioned in the centre of a tank (diameter 5 m, height 5 m) filled with ultra-pure water. The shaft is made of a light, low-radioactivity material (e.g. aluminum) and aligned with the Earth's rotation axis. The constant orientation of the target with respect to the Cygnus constellation is kept thanks to the slow rotation of the shaft, with a period of one sidereal day. All the mechanics needed to keep the orientation and the rotation of the shaft is mounted outside the water tank. The immersed part can be constructed in such a way as to keep the mean density close to 1 g/cm$^3$.
In this way the mechanical load becomes negligible, thus simplifying the design and providing great flexibility in the selection of materials. This solution can be more flexible and cheaper, making it possible to hold much larger masses without changing either the mechanics of the telescope or the shielding. A detailed simulation of the shielding and a study of the mechanics requirements, together with an estimation of the costs, are ongoing. ![Schematic view of the detector structure for the water shielding: the detector holder is placed in the centre of the tank. Its orientation towards the Cygnus constellation is kept by the rotating pivot, mounted with one edge above the water surface. Only pure, low-Z materials are used for the immersed part.[]{data-label="fig:WaterOption"}](figs/Mount_2){width="1.0\linewidth"} ![Sketch of the planimetry of the NIT production and development facility.[]{data-label="fig:emulsion-facility"}](figs/layout_CleanRoom_100m2_v3){width="0.85\linewidth"} ![A picture of the existing OPERA CS facility in hall B.[]{data-label="fig:CSemulsion-facility"}](figs/CSfacility){width="0.7\linewidth"} ![Planimetry of the existing OPERA CS facility in hall B.[]{data-label="fig:CSemulsion-facility-planimetry"}](figs/planimetria_csFacility){width="0.7\linewidth"}

Emulsion production and development facility
--------------------------------------------

The layout of the facility we intend to build is shown in Figure \[fig:emulsion-facility\]. The total surface is about 100 m$^2$ and it is divided into four parts: emulsion gel production, emulsion gel pouring, film development and chemical solution preparation.\
Once produced, the gel will be sealed in an envelope flushed and filled with N$_2$ and moved to the pouring station, where a glove box flushed with HP Nitrogen will be installed.
After the pouring, the films will be sealed in an envelope flushed and filled with N$_2$ and stored underground until the exposure.\
All the operations involving the emulsion production and development require a dark room environment.\
In order to minimize the surface contamination and the activation risk, the facility will be hosted in a clean room located underground. A class 1'000 clean room is required for the emulsion production, pouring and chemical solution preparation; a class 100'000 one will be installed for the area devoted to the development.\
An air conditioning system will be installed in order to stabilize and monitor the temperature, ($20 \pm 1)^\circ$C, and the humidity, ($60\pm 5)\%$, of the clean room. A demineralized water treatment plant and a chemical waste system are also required. For the film development and the pouring activity foreseen in the first year of the project, an excellent starting point is the existing OPERA emulsion handling facility shown in Figure \[fig:CSemulsion-facility\]. The facility, currently hosted in Hall B, is made of three rooms: a control room, a handling room and a development room, for a total surface of $\sim$ 50 m$^2$ (see Figure \[fig:CSemulsion-facility-planimetry\]). The handling room will be equipped with a pouring station and a development station. The installation of two systems for the temperature control is also foreseen. The scanning of the exposed films will be performed in the existing OPERA scanning facilities in Italy, Russia, Turkey and Japan. In Italy the scanning laboratories are located at LNGS, Naples and Bari, with 13, 5 and 3 OPERA microscopes respectively. A few more microscopes are currently located in the Russian and Turkish scanning laboratories. Moreover, at LNGS and Naples two additional microscopes, partially upgraded for the scanning of NIT and the polarized light analysis, are available. An equivalent scanning power is hosted at Nagoya University.

Physics reach
=============

The 90$\%$ C.L.
upper limit in case of null observation is shown in Figure \[fig:sensitivity1Kg\] for an exposure of 1 kg$\cdot$year of NIT emulsions, with a minimum detectable track length ranging from 200 nm down to 50 nm, in the hypothesis of zero background. Even without including the directional discrimination of the signal, and assuming that a negligible background level is reached, such an experiment would cover a large part of the parameter space indicated by the DAMA/LIBRA results with a small (1 kg) detector mass, using a powerful and complementary approach. It is worth noting that the sensitivity strongly depends on the final detection threshold: as explained in Section \[sec:expected-bkg\], the current threshold value is limited to 200 nm only by the fog density. A reduction of the fog density, or its discrimination through the use of the optical microscope with polarized light, would make it possible to lower the threshold to 100 nm. In order to lower the threshold down to 50 nm, the use of the U-NIT technology is needed. Moreover, we are conservatively assuming zero efficiency below the threshold value while, as shown in Figure \[fig:eff\_vs\_length\], the efficiency is not negligible even for shorter tracks. This would enhance the sensitivity to low WIMP masses and will be taken into account. ![The 90$\%$ C.L. upper limits for a NIT detector with an exposure of 1 kg $\times$ year, a threshold ranging from 200 nm down to 50 nm, in the zero background hypothesis. The directionality information is not included.[]{data-label="fig:sensitivity1Kg"}](figs/NEWS_sensitivity1Kg_JP){width="0.6\linewidth"}

Schedule, Cost Estimate, Organization
=====================================

Time schedule {#sec:schedule}
-------------

![image](figs/gantt_v7){width="1.2\linewidth"}

On a time scale of six years we intend to perform the first exposure with a target mass of 1 kg and the corresponding analysis of the data taken.
In Figure \[fig:gantt\] a detailed plan of all the phases of the project is reported. At the beginning of 2016 we plan to construct a prototype shield and the equipment for the emulsion pouring. The above mentioned phases have to be completed within nine months in order to perform a first test to benchmark the level of intrinsic radioactivity of the emulsions. For this measurement, we will use the gelatine produced at Nagoya University and perform the pouring underground. We will perform an exposure of a 10 g detector surrounded by the prototype shield. The detector exposure, together with the analysis of the emulsion films, will last nine months. The results of this test will provide a measurement of the background, intended to cross-check the estimates based on simulations and on measurements of the intrinsic radioactivity. In parallel, tests with radioactive sources are foreseen to characterize the response to external radioactivity. We are considering the possibility of obtaining the raw materials for the emulsion production within European countries, provided that their intrinsic radioactivity does not exceed the level measured in the Japanese samples. This would allow a reduction of the activation processes induced during transportation. This activity will take place in 2016. The measurement of the intrinsic radioactivity of the different emulsion components and of the prototype shield materials will be performed from June 2016 to the end of 2017. Tests of the activation due to cosmic rays during transportation will be performed by bringing samples back and forth between Italy and Japan. The design of the gel production machine will start in September 2016, while the design of the clean room will be carried out with the help of specialized firms, starting from January 2017. The construction of the clean room, the pouring facility and the gel production machine will start in January 2018 and will last six months.
As soon as the film production machine is operational in the underground laboratory and the gelatine is produced, the measurement of the intrinsic radioactivity will be performed. If it satisfies the required radioactivity level, the pouring of the gelatine will be performed. The design of the equatorial telescope and the choice of the materials are expected to start in early 2016 and last 18 months. The construction of the telescope will start at the beginning of 2018 and last six months. In the second part of 2018 the surface calibration measurements and the underground telemetry will be carried out. The construction of the shield and of the target holding will start in 2018. Once the equatorial telescope installation is finalised, the detector commissioning will start. We plan to finalize the upgrade of the read-out system on a prototype microscope, exploiting in particular the resonant light scattering technique. This activity will start at the beginning of 2016. In June 2017 a Technical Design Report will be submitted. The upgrade of all the available OPERA systems will start in the second half of 2018 and last 27 months. By March 2020 we plan to have the final equipment installed on a number of microscopes adequate for the analysis of 1 kg of NIT emulsion in one year. Once the whole film production is completed, the run with the 1 kg detector will start. The data taking will last one year: from October 2019 to October 2020. The emulsion films will be developed soon after the exposure. The scanning and the analysis of the emulsion films will start once the upgrade of all the read-out systems is completed, and is expected to be finished by the end of 2021.

Costs
-----

The cost for the construction of the clean room (75 m$^2$ class 1'000 and 25 m$^2$ class 100'000) is estimated to be around 200 k€.
As explained in Section \[sec:set-up\], the clean room will host the production machines, the pouring and the development facilities.\
The cost of the production machine is of the order of 200 k€. The pouring and the development facilities will cost about 18 k€ and 50 k€, respectively. The above mentioned costs will be shared according to a MoU to be signed between the parties. In case the production is carried out at Nagoya University, Japan will cover the corresponding costs. Japan will also cover the costs for all the emulsion components.\
A first estimate of the cost of the equatorial telescope is 15 k€ for the design and 240 k€ for the construction; the cost for the construction of the shielding amounts to about 15 k€.\
Finally, the upgrade of the read-out systems will be needed. Japan will cover the cost for the realization of its own scanning systems. The construction of the microscope prototype in Europe costs about 300 k€; the hardware and computing upgrade of each OPERA microscope amounts to about 30 k€. Depending on the final scanning speed, from 10 to 14 systems will be modified for the high resolution and high speed scanning of NIT, for a total cost ranging from $\sim$ 300 k€ to $\sim$ 420 k€.\
An expense of 80 k€ is expected for the maintenance of the microscopes and 120 k€ for the consumables. In Table \[tab:costs\] a summary of the expected costs is reported.

  Category                    Cost \[k€\]   Assignment
  --------------------------- ------------- ------------
  Clean Room                  200           EU
  NIT production machine      200           JP
  Pouring facility            18            EU
  Development facility        50            EU
  Equatorial Telescope        255           EU
  Shielding                   15            EU
  EU Prototype Microscope     30            EU
  EU Microscopes Upgrade      300           EU
  EU Microscope Maintenance   80            EU
  JP Microscopes Upgrade      300           JP
  Consumables                 120           EU
  TOTAL                       1468          
  --------------------------- ------------- ------------

Collaboration
-------------

NEWS is at present a collaboration among Italy, Japan, Russia and Turkey. The involved groups are:

- University and INFN Bari, Italy

- Lab. Naz.
Gran Sasso, Italy - University and INFN Naples, Italy - University and INFN Rome, Italy - Nagoya University and KM Institute, Japan - Chiba University, Japan - JINR Dubna, Russia - Moscow State University, Moscow, Russia - Lebedev Physical Institute, Moscow, Russia - METU, Ankara, Turkey All the above mentioned groups are leaders in the emulsion scanning having gathered the experience of the emulsion analysis in in the OPERA experiment. The scanning and the analysis of the exposed emulsions will be shared according to the available scanning power of each group. The development of the prototype, both for hardware and software, of the new read-out system is shared between LNGS and Naples while it is entirely carried out at Nagoya University for the Japanese one. The LNGS and Napoli groups are in charge of the design of the telescope, the construction of the prototype and the calibration measurements. The same groups will perform the intrinsic background measurements, the studies about the environmental background and the design of the detector shielding and structure. The Russian groups will perform radioactive studies. The design and the realization of the local underground facilities will be shared between LNGS and Japan. The simulation of the detector response, efficiency and resolution as well as and the expected sensitivity is shared between LNGS, Naples and Nagoya. The responsibility about the emulsion production, development and handling is currently assigned to the Nagoya group. Conclusions and outlook ======================= ![Sensitivity at 90$\%$ C.L, in the zero background hypothesis for an experiment with a mass of 10 kg (green) and 100 kg (blue) for two value of detection threshold: 100 nm (dashed lines) and 50 nm (solid line). 
[]{data-label="fig:NEWSsensitivity_10-100Kg_50-100nm_JP"}](figs/NEWSsensitivity_10-100Kg_50-100nm_JP_2){width="0.8\linewidth"}

NEWS is meant to be the first experiment with a solid target for directional dark matter searches: the use of a nuclear emulsion based detector, acting both as target and tracking device, would make it possible to explore the low cross-section region in the phase space indicated by DAMA. The novel emulsion technology, based on the use of nuclear emulsions with nanometric AgBr crystals (NIT), makes it possible to record the sub-micrometric tracks produced by WIMP scattering off a target nucleus. The presence of light and heavy nuclei among the emulsion components results in an enhanced sensitivity to both light and heavy WIMP masses. The read-out of tracks with lengths of the order of 100 nm is possible thanks to an R$\&$D programme carried out on the scanning systems currently used for the analysis of the OPERA emulsions. The use of improved optics and mechanics made it possible to reach a spatial and angular resolution of the order of 100 nm and 235 mrad, respectively, with a tracking efficiency approaching 100$\%$ for tracks longer than 180 nm. The new optical microscope has a scanning speed of about 25 mm$^2$/h, allowing a fast preselection of the candidate signal tracks with the shape analysis method. The final signal confirmation is obtained with a powerful optical microscope equipped with a light polarizer: exploiting the different response of non-spherical grain clusters to different polarization angles, an unprecedented spatial resolution of 10 nm is obtained. This resolution makes it possible to resolve grains belonging to tracks a few hundred nanometers long, thus providing the final signal confirmation with a very high signal-to-noise ratio.
The intrinsic radioactivity of nuclear emulsions has been measured and a detailed MC simulation has been performed: the estimated neutron yield makes it possible to design an experiment with masses of the order of 10 kg while keeping this background negligible. A careful evaluation of the external background sources has been performed, allowing a proper shielding to be designed. The final experimental set-up foresees the use of an equatorial telescope holding both the emulsion target and the shielding. We plan to perform a pilot experiment with a 1 kg mass target on a time scale of six years: even using a rather small detector mass, we would be able to explore the region indicated by the DAMA experiment with a powerful and complementary approach (see Figure \[fig:sensitivity1Kg\]). The present intrinsic radioactivity level allows the target mass and exposure time to be scaled up by one order of magnitude. A careful selection of the emulsion components and a better control of their production could further increase the radiopurity, thus allowing a larger detector mass. The reduction of the fog density and further developments of the optical microscopy with polarized light would make it possible to reduce the detection threshold down to 50 nm. Improvements both in the mechanics (use of a piezoelectric-driven objective) and in the image acquisition (use of multiple image sensors) already envisage the possibility of analysing a volume of 100 kg or larger with such a resolution. Moreover, further improvements both in the microscope hardware and in the analysis software will permit the intrinsic emulsion capability of recording 3D tracks to be fully exploited. In Figure \[fig:NEWSsensitivity\_10-100Kg\_50-100nm\_JP\] the upper limit in case of null observation for an experiment with a mass of 10 (green) and 100 (blue) kg and for a detection threshold of 50 (dashed lines) and 100 (solid lines) nm is shown at 90 $\%$ C.L. in the zero background hypothesis.
The proposed program would open a new window in the DM search. The developments achieved will likely have an impact on nano-imaging applications in physics, biology and medicine. [00]{} K.A. Olive et al. (Particle Data Group), Chin. Phys. C, **38** (2014) 090001,\ Planck Collaboration, *Planck 2015 results. XI. CMB power spectrum, likelihoods, and robustness of parameters*, arXiv:1507.02704. G. Bertone, Particle Dark Matter, Cambridge University Press, 2010. M. W. Goodman and E. Witten, Phys. Rev. **D31** (1985) 3059. D. N. Spergel, Phys. Rev. **D37** (1988) 1353. R. Bernabei et al., Eur. Phys. J. **C56** (2008) 333. DMTOOLS site: `http://dmtools.brown.edu:8080/`. C. E. Aalseth et al., CoGeNT Collaboration, *CoGeNT: A search for low-mass dark matter using p-type point contact germanium detectors*, Phys. Rev. **D88** (2013) 012002. G. Angloher et al., CRESST-II Collaboration, *Results on low mass WIMPs using an upgraded CRESST-II detector*, Eur. Phys. J. **C74** 12 (2014) 3184. Z. Ahamed et al., *Combined limits on WIMPs from the CDMS and EDELWEISS Experiments*, Phys. Rev. D 84 (2011) 011102. E. Aprile et al., XENON100 Collaboration, *Dark matter results from 225 live days of XENON100 data*, Phys. Rev. Lett. **109** (2012) 181301. D. S. Akerib et al., *First results from the LUX dark matter experiment at the Sanford Underground Research Facility*, Phys. Rev. Lett. **112** (2014) 091303. O. Buchmueller et al., *Implications of initial LHC searches for Supersymmetry*, Eur. Phys. J. C71 (2011) 1634. S.P. Ahlen et al., *Time-projection-chambers with optical readout for dark matter, double beta decay, and neutron measurements*, Int. J. Mod. Phys. **A25** (2010) 4525. K. Miuchi, H. Nishimura et al., *First underground results with NEWAGE-$0.3$a direction-sensitive dark matter detector*, Phys. Lett. **B686(1)** (2010) 11. S. Ahlen, J. Battat et al., *First dark matter search results from a surface run of the 10-L DMTPC directional dark matter detector*, Phys. Lett.
**B695(1-4)** (2011) 124. J. Billard, F. Mayet et al., *Directional detection of dark matter with MIMAC: WIMP identification and track reconstruction*, J. Phys. Conf. Ser. **309** (2011) 012015. E. Eskut et al., *The CHORUS experiment to search for muon-neutrino $\to$ tau-neutrino oscillation*, Nucl. Instrum. Meth. **A401** (1997) 7.\ S. Aoki, E. Barbuto, C. Bozza, J. Fabre, W. Flegel, et al., *Nuclear emulsions in a large, hybrid experiment (CHORUS) to search for $\nu_\mu \to \nu_\tau$*, Nucl. Instrum. Meth. **A447** (2000) 361. OPERA Collaboration, *The OPERA experiment in the CERN to Gran Sasso neutrino beam*, JINST [**4**]{} (2009) P04018. SHiP Collaboration, *A facility to Search for Hidden Particles (SHiP) at the CERN SPS*, arXiv:1504.04956 \[physics.ins-det\]. P.H. Fowler, D.H. Perkins and C.F. Powell, *The study of elementary particles by the photographic method*, Pergamon Press (1959). W.H. Barkas, *Nuclear research emulsion*, Academic Press, New York, 1973. G. De Lellis, A. Ereditato and K. Niwa, Nuclear Emulsions, in Handbook of Physics, Vol. 2; C.W. Fabjan and H. Schopper (Eds.), (2011) Springer Publishers. T. Nakamura, A. Ariga, T. Ban, T. Fukuda et al., *The OPERA film: New nuclear emulsion for large-scale, high-precision experiments*, Nucl.Instrum.Meth. **A556** (2006) 80-86. M. Natsume et al., *Low-velocity ion tracks in fine grain emulsion*, Nucl. Instr. Meth. **A575** (2007) 439. T. Naka et al., *Fine grained nuclear emulsion for higher resolution tracking detector*, Nucl. Instrum. Meth. **A718** (2013) 519. N. Armenise et al., Nucl. Instr. Meth. **A551** (2005) 261\ L. Arrabito et al., Nucl. Instr. Meth. **A568** (2006) 578\ M. De Serio et al., Nucl. Instr. Meth. **A554** (2005) 247\ L. Arrabito et al., *JINST* **2** (2010) P05004\ C. Bozza et al., *Nucl. Instrum. Meth. A* **703** (2013) 204 K. Morishima and T. 
Nakano, *Development of a new automatic nuclear emulsion scanning system, S-UTS, with continuous 3D tomographic image read-out*, *JINST* **5** (2010) P04011.\ S. Aoki et al., *The fully automated emulsion analysis system*, *Nucl. Instrum. Meth. B* **51** (1990) 466\ T. Nakano, *Automatic analysis of nuclear emulsion*, Ph.D. thesis, Nagoya University, Japan (1997).\ OPERA Collaboration, JINST [**4**]{} (2009) P06020. A. Alexandrov, V. Tioukov, M. Vladymyrov, *Further progress for a fast scanning of nuclear emulsions with Large Angle Scanning System*, JINST **9** (2014) C02034. T. Naka, et al., *R$\&$D Status of Nuclear Emulsion For Directional Dark Matter Search*, EAS Publ. Ser. **53** (2012) 51-58. M. Kimura and T. Naka, *Submicron track readout in fine grained nuclear emulsions using optical microscopy*, Nucl. Instrum. Meth. **A680** (2012) 12-17 Naka et al., Rev. Sci. Instrum. **86** (2015) 073701 H. Tamaru et al., *Resonant light scattering from individual Ag nanoparticles and particle pairs*, Applied Phys. Lett. **80** (2002) 1826. G. Bruno, *Neutron Background studies for direct dark matter searches in the Gran Sasso Underground Laboratory*, PhD Thesis, L’Aquila University (2012). Arneodo et al., *Neutron background measurements in the Hall C of the Gran Sasso Laboratory*, Nuovo Cim. **A112** (1999) 819. Wulandari et al., *Neutron flux underground revisited*, Astropart. Phys. **22** (2004) 313. K. I. Nagao and T. Naka, Prog. Theor. Exp. Phys. (2012) 043B02. T. Habu, N. Mii, K. Kuge, H. Manto, Y. Takamuki, J. Imaging Sci. **35** (1991) 202. J. S. Becker, Inorganic Mass Spectrometry - Principles and Applications (Wiley, 2007), ISBN 9780470012000. M. Laubenstein et al., Appl. Radiat. Isot. **61** (2004) 167. F. Pupilli et al., *Intrinsic neutron background of nuclear emulsions for directional Dark Matter searches*, submitted to Astrophys. J., arXiv:1507.03532 (astro-ph). J. K. Shultis, R. E. Faw, Fundamentals of Nuclear Science and Engineering (CRC press, 2007), p.
141, ISBN 1420051369. R. Heaton, H. Lee, P. Skensved and B. C. Robertson, Nucl. Instrum. Meth. **A276** (1989) 529. W. B. Wilson et al., *SOURCES 4A: A Code for Calculating ($\alpha$,n), Spontaneous Fission, and Delayed Neutron Sources and Spectra*, LA-13639-MS, Los Alamos (1999).
Nike Presto insole together with personalized art

One of the first colorways on deck is an OG release featuring a palette of blue, grey, orange and black, with tie-dye-like graphics running throughout its neoprene-constructed upper. Beneath sits a silver midsole and an Air Max 97-inspired Max Air unit in orange. This is one of the Air Max colorways expected to hit shelves this holiday season. It is also being reported that Skepta will release a collaboration in red and black in July. After adding his touch to the Reebok DMX Run 10, rap star Cam'ron is taking on another classic Reebok silhouette: Allen Iverson's Question. Leaked photos show a suede-based version of the retro hoops shoe, accented by a Bape-like red camo print on the toe and heel. Dipset markings appear along the eyelets, on the tongue and on the lace tips, while Cam's famous flip phone is rendered in silver in logo form on the heel. Insoles feature personalized art, as well as graffiti-style 'Killa Cam' lettering on the right. Years ago, one of Nike's most popular sneakers was the Air More Uptempo. Every weekend it appeared in a different colorway of the loud basketball retro with bold "AIR" branding. However, one pair that never made its way to retail was the "Snakeskin" sample pictured here. From a distance it may look like an ordinary triple black Uptempo, but up close you can see a luxurious snakeskin print on the aforementioned "AIR" branding. Carmelo Anthony's history with the Jordan Brand rivals nearly any other NBA star's in terms of longevity. Where Melo is lacking is in the appeal of his Jumpman-labeled products.
Q: ASIC timing constraints via SDC: How to correctly specify a multiplexed clock?

Introduction

Having found multiple, sometimes conflicting or incomplete pieces of information on the internet and in some training classes about how to create timing constraints in SDC format correctly, I'd like to ask the EE community for help with some general clock generating structures I have encountered. I know that there are differences in how one would implement a certain functionality on an ASIC or FPGA (I have worked with both), but I think there should be a general, correct way to constrain the timing of a given structure, independent of the underlying technology - please let me know if I'm wrong on that. There are also some differences between different tools for implementation and timing analysis of different vendors (despite Synopsys offering an SDC parser source code), but I hope that they are mainly a syntax issue which can be looked up in the documentation.

Question

This is about the following clock multiplexer structure, which is part of the clkgen module, which is again part of a larger design: While the ext_clk input is assumed to be generated externally to the design (entering through an input pin), the clk0 and clk4 signals are also generated and used by the clkgen module (see my related ripple clock question for details) and have associated clock constraints named baseclk and div4clk, respectively. The question is how to specify the constraints such that the timing analyser

1. treats cpu_clk as a multiplexed clock which can be either one of the source clocks (fast_clk or slow_clk or ext_clk), taking the delays through the different AND and OR gates into account,
2. while at the same time not cutting the paths between the source clocks which are used elsewhere in the design.
While the simplest case of an on-chip clock multiplexer seems to require just the set_clock_groups SDC statement:

    set_clock_groups -logically_exclusive -group {baseclk} -group {div4clk} -group {ext_clk}

...in the given structure, this is complicated by the fact that clk0 (via the fast_clk output) and clk4 (via slow_clk) are still used in the design, even if cpu_clk is configured to be ext_clk when only use_ext is asserted. As described here, the set_clock_groups command as above would cause the following:

    This command is equivalent to calling set_false_path from each clock in every group to each clock in every other group and vice versa

...which would be incorrect, since the other clocks are still used elsewhere.

Additional Information

- The use_clk0, use_clk4 and use_ext inputs are generated in such a way that only one of them is high at any given time. While this could be used to stop all clocks if all use_* inputs are low, the focus of this question is on the clock multiplexing property of this structure.
- The X2 instance (a simple buffer) in the schematic is just a place-holder to highlight the issue of automatic place&route tools being usually free to place buffers anywhere (such as between the and_cpu_1/z and or_cpu1/in2 pins). Ideally, the timing constraints should be unaffected by that.

A: Define divide-by-1 clocks on the and_* nets and declare them to be physically exclusive. Cadence RTL Compiler handles the situation correctly by generating 3 timing paths for registers clocked by cpu_clk (one path for each clock). Registers directly driven by clk0, clk4 and clk_ext have their own timing arcs.
    create_generated_clock -source [get_ports clk0] \
        -divide_by 1 -name and_clk0 [get_pins and_cpu_1/Y]
    create_generated_clock -source [get_ports clk4] \
        -divide_by 1 -name and_clk4 [get_pins and_cpu_2/Y]
    create_generated_clock -source [get_ports clk_ext] \
        -divide_by 1 -name and_clk_ext [get_pins and_cpu_ext1/Y]

    set_clock_groups \
        -physically_exclusive \
        -group [get_clocks and_clk0] \
        -group [get_clocks and_clk4] \
        -group [get_clocks and_clk_ext]
Q: Slowing the animation towards the end in jQuery

I use a counter that counts up linearly through an animation at a constant rate. I want to know if it's possible to slow down the animation of the counter closer to the end.

    <h1 class="counter" data-count="2200">0</h1>

    $('.counter').each(function() {
        var $this = $(this),
            countTo = $this.attr('data-count');
        $({ countNum: $this.text() }).animate({ countNum: countTo }, {
            duration: 4000,
            easing: 'linear',
            step: function() {
                $this.text(Math.floor(this.countNum));
            },
            complete: function() {
                $this.text(this.countNum);
            }
        });
    });

For example: if the animation of the counter is more than half complete, the animation should slow by 50%.

A: You are looking for an easing effect.

    Easing: The remaining parameter of .animate() is a string naming an easing function to use. An easing function specifies the speed at which the animation progresses at different points within the animation. The only easing implementations in the jQuery library are the default, called swing, and one that progresses at a constant pace, called linear. More easing functions are available with the use of plug-ins, most notably the jQuery UI suite.

By default jQuery has two easing functions. You can use swing to get something close to what you expect, but it slows down the animation both at the beginning and at the end.

    $('.counter').each(function() {
        var $this = $(this),
            countTo = $this.attr('data-count');
        $({ countNum: $this.text() }).animate({ countNum: countTo }, {
            duration: 4000,
            easing: 'swing',
            step: function() {
                $this.text(Math.floor(this.countNum));
            },
            complete: function() {
                $this.text(this.countNum);
            }
        });
    });

==========================================================

Edit: Answer to the comment:

I think you can do the same using the step function. You can check if the animation is completed halfway, and then animate it again from that point with a new duration.
Here's an example:

    $('.counter').each(function () {
        var $this = $(this),
            countTo = $this.attr('data-count');
        var animation = { countNum: countTo };
        $({ countNum: $this.text() }).animate(animation, {
            duration: 4000,
            step: function (now, fx) {
                $this.text(Math.floor(this.countNum));
                if (fx.pos > 0.5) {
                    $(this).stop();
                    $(this).animate(animation, {
                        duration: 5000,
                        step: function () {
                            $this.text(Math.floor(this.countNum));
                        },
                        complete: function () {
                            $this.text(this.countNum);
                        }
                    });
                }
            }
        });
    });

Fiddle: https://jsfiddle.net/nimeshka/z86pf27q/19/

Hope it helps!!
Q: Double-matching in a bipartite graph

I've encountered the following problem studying for my Algorithm test, with no answer published to it:

Maximum double matching problem: given a bipartite graph G=(V=(L∪R),E), describe an algorithm that returns a group of edges M in E s.t. for each vertex v in V there are at most 2 edges in M that include v, of maximum size.

Definition: a "strong double matching" is a double matching s.t. for each vertex v in V there is at least one edge in M that includes v. Given a bipartite graph G=(V=(L∪R),E) and a strong double matching M, describe an algorithm that returns a strong double matching M' of maximum size. Prove your answer.

I've already managed to solve 1) using a reduction to max-flow: adding vertices s and t, edges from s to L and edges from R to t each with capacity 2, and giving each edge between L and R infinite capacity; then finding a max flow using Dinic's algorithm and returning all edges with positive flow between L and R. About 2), I thought about somehow manipulating the network so that there is positive flow from each vertex, then using the algorithm from 1) to construct a maximum solution. Any thoughts? The runtime restriction is O(V^2 E) (Dinic's runtime).

A: Here is a solution in O(n^3) using minimum cost flow. Recall how we make a network for a standard bipartite matching:

- For each vertex u from L, add a unit-capacity edge from S to u;
- For each edge u-v, where u is from L and v is from R, add an edge from u to v. Note that its capacity does not matter as long as it is at least one;
- For each vertex v from R, add a unit-capacity edge from v to T.

Now we keep the central part the same and change the left and right parts a bit:

- For each vertex u from L, add two unit-capacity edges from S to u, one of them having cost -1 and the other having cost 0;
- Same for the edges from R to T.

Ignoring cost, this is the same network you built yourself.
The maximum flow here corresponds to the maximum double matching. Now let's find the minimum cost flow of size k. It corresponds to some double matching, and among those it corresponds to the matching that touches the maximum possible number of vertices, because touching a vertex (that is, pushing at least unit flow through it) decreases the cost by 1. Moreover, touching a vertex for the second time doesn't decrease the cost, because the second edge has cost 0. Now we have the solution: for each k = 1, ..., 2n iteratively find the min-cost flow and take the value which corresponds to the minimum cost. Using Johnson's algorithm (also called Dijkstra's with potentials) gives O(n^2) per iteration, which is O(n^3) overall.

P.S. The runtime of Dinic's algorithm on unit graphs is better, reaching O(E sqrt(V)) on bipartite graphs.
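The max-flow reduction for part 1) can be sketched directly in code. The following Python sketch is mine, not from the answer: it uses Edmonds-Karp instead of Dinic's (simpler to write, same result), and gives the L-R edges capacity 1 rather than "infinite" so that each edge can enter M at most once, which is what a *set* of edges requires.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp on a nested-dict capacity map; cap is mutated in place."""
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        # Reconstruct the path and push the bottleneck amount along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
        total += aug

def double_matching(left, right, edges):
    """Maximum double matching: each vertex touched by at most 2 edges of M."""
    s, t = object(), object()   # fresh sentinels, cannot collide with vertices
    cap = defaultdict(lambda: defaultdict(int))
    for u in left:
        cap[s][u] = 2           # each left vertex in at most 2 matching edges
    for v in right:
        cap[v][t] = 2           # same bound on the right side
    for u, v in edges:
        cap[u][v] = 1           # unit capacity: each edge picked at most once
    max_flow(cap, s, t)
    # An edge carries flow iff its residual reverse capacity became positive.
    return [(u, v) for u, v in edges if cap[v][u] > 0]

M = double_matching(['a', 'b'], ['x', 'y'],
                    [('a', 'x'), ('a', 'y'), ('b', 'x'), ('b', 'y')])
print(len(M))  # 4: every vertex is matched exactly twice
```

The answer's part 2) would extend this by splitting the capacity-2 edges at s and t into two unit edges with costs -1 and 0 and running min-cost flow for each flow value k.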
The present invention relates to an apparatus for splicing spun yarns. An inventor of the present application has invented and proposed a pneumatic yarn splicing apparatus in which a splicing member having a splicing hole and a jet nozzle for jetting a compressed fluid into the splicing hole are provided, and control nozzles are arranged on both outer sides of the splicing hole to suck and untwist the yarn ends to be spliced. (See, for example, Japanese Patent Application Number 134986/80 and U.S. patent application Ser. No. 360,062 claiming priority of Japanese Patent Application Number 44967/81 filed Mar. 26, 1981.)
[Do enkephalins and other endogenous opioids participate in regulation of cancer growth?]. Attempts to define precisely the relations between the endogenous opioid system and the development of neoplastic processes are an interesting exploratory trend. The mechanism by which enkephalins and other endogenous opioids could influence cancer growth is not clear. Several hypotheses were put forward and are presented in the paper.
New York, NY – April 24, 2008 Botaniculture Records says that it planned to release “50 Bullets”, about the killing of Sean Bell, one month after his murder. The artist known as “Yardmon50” states: “I started writing the song the day after the shooting and within a week I had the song down.” He says that he performed the song for his brother, who said “it was going to be a hit!” The song was first recorded in December of 2007, around the same time as Papoose’s “50 Shots.” Alas, due to problems with production the song was never circulated until recently, according to Danny Pella, an A&R rep for the record company. The song follows last year’s controversial “50 Shots” by the artist known as “Papoose.” Though Papoose’s song and Yardmon50’s song are similar, they are also different. “Yardmon50 is more of a crossover reggae dance music artist while Papoose is a rapper,” says Pella. This issue seems more relevant today, since the judge in the Sean Bell manslaughter trial will give his verdict Friday the 25th. Yardmon50 states: “The only connection is psychic,” since they were both rapid-fire responses to clear cases of injustice. “Intellectually the songs are different, but in spirit they are the same,” he says. Yardmon50 denies any beef with Papoose because of the similarity of the titles, but he plans on proposing a business idea to Papoose based on this shared theme. “One thing’s for sure,” says Danny Pella, “we don’t want nobody to think Sean Bell lived or died in vain.” The public is invited to visit the Myspace page (myspace.com/yardmon50) and download the song, with a portion of the proceeds going to a special fund for the environment set up in Sean Bell’s name.
fileFormatVersion: 2
guid: e4f6ef78bad62ed40a24952b4fc66b85
timeCreated: 1448271443
licenseType: Free
DefaultImporter:
  userData:
  assetBundleName:
  assetBundleVariant:
Background
==========

Heart disease, defined as myocardial infarction, hypertensive and ischemic heart disease, and heart failure, is the leading cause of mortality and morbidity in the United States \[[@B1]\]. Increased left ventricular mass (LVM) is a well-known, independent risk factor for heart disease incidence, mortality, and all-cause mortality \[[@B2]-[@B4]\]. LVM can be measured non-invasively via echocardiography, and risk factors associated with increases in LVM include high blood pressure, high dietary salt intake, increased age, male gender, diabetes, and increased body mass index (BMI) \[[@B5]-[@B8]\]. African-Americans experience higher mean values of LVM and have almost twice the amount of left ventricular hypertrophy (the clinical threshold for high LVM) compared to a non-Hispanic white population \[[@B5]\]. Family and twin studies have demonstrated that genetic factors significantly contribute to the inter-individual variation in LVM in numerous racial/ethnic groups. Heritability estimates range between 0.2 and 0.6 depending on the population being studied and the risk factors adjusted for \[[@B9]-[@B12]\]. In an African-American population, the heritability of LVM, after adjustment for known risk factors, was estimated to be 0.46 \[[@B11]\]. As a follow-up to heritability studies, candidate gene association studies have attempted to test for associations with genetic variants in pathways involved in LVM. While some of the candidate gene results are promising, they have been limited by the lack of replication and the failure to consider the full spectrum of genetic effects involved in complex traits (i.e. interactions). LVM is a complex, quantitative trait and by definition is the result of environmental factors, genetic factors, and interactions between them. However, to date, most genetic association studies (candidate gene and genome wide) have inappropriately simplified genetic architecture by focusing on single SNP effects.
Issues of failed replication are not surprising given that true genetic effects may not replicate in different study populations because they are specific to a given population in a given environment, or because the true architecture involves unaccounted interactions \[[@B13]-[@B16]\]. In order to fully understand the genetic architecture of complex traits such as LVM, single candidate gene SNP associations must be considered in the context of, and in conjunction with, environmental factors and other genetic variants. The goal of this research was to explore the genetic architecture of LVM by identifying robust, replicated single SNP effects, SNP-environment interactions, and SNP-SNP interactions associated with LVM after adjusting for population stratification and relevant risk factors. In achieving this goal, we implemented a multi-stage approach that focuses on reducing the number of false-positive results and shows replication of effects within the study sample.

Methods
=======

Study population
----------------

The National Heart Lung and Blood Institute established the Family Blood Pressure Program (FBPP) in 1996, joining established research networks investigating hypertension and cardiac diseases. One of the four networks in FBPP is the Genetic Epidemiology Network of Arteriopathy (GENOA), which recruited hypertensive African-American and non-Hispanic white sibships for linkage and association studies to investigate genetic contributions to hypertension and hypertensive target organ damage. Subjects for this particular GENOA sub-study were African-Americans recruited from Jackson, Mississippi. GENOA recruited sibships containing at least two individuals with clinically diagnosed essential hypertension before age 60.
Hypertension was defined by a previous clinical diagnosis of hypertension by a physician with current anti-hypertensive treatment, or an average systolic blood pressure (SBP) ≥140 mmHg or diastolic blood pressure (DBP) ≥90 mmHg on the second and third clinic visits \[[@B17]\]. After each hypertensive sibship was identified, all members of the sibship were invited to participate regardless of their hypertension status. Exclusion criteria included secondary hypertension, alcoholism or drug abuse, pregnancy, Type I diabetes, and active malignancy. A total of 1,481 individuals were enrolled in GENOA. Informed consent was obtained from all subjects, and approval was granted by the institutional review board at the University of Mississippi Medical Center.

Phenotype measurement
---------------------

Data collection consisted of demographic information, medical history, clinical characteristics, lifestyle factors, and blood samples for genotyping and biomarker assays. Study visits were conducted in the morning after an overnight fast of at least eight hours. Blood pressure was measured with random zero sphygmomanometers and cuffs appropriate for arm size. Three readings were taken in the right arm after the participant rested in the sitting position for at least five minutes; the last two readings were averaged for the analysis. Height was measured by stadiometer, weight by electronic balance, and BMI was obtained by the standard calculation of weight (kg) divided by height squared (m^2^). Diabetes was considered present if the subject was being treated with insulin or oral agents or had a fasting glucose level ≥126 mg/dL. Smoking status was defined as self-described smoking within the past year. Use of anti-hypertensive medication was based on self-report during the clinical exam. The outcome of interest, LVM, was derived using phased-array echocardiographs with M-mode, two-dimensional and pulsed, continuous wave, and color-flow Doppler capabilities.
Standardized methods, along with training and certification, were used by field-center technicians to achieve high-quality recordings. Readings were performed at the New York Presbyterian Hospital-Weill Cornell Medical Center and verified by a single highly experienced investigator. The parasternal acoustic window was used to record at least 10 consecutive beats of two-dimensional and M-mode recordings of the left ventricular internal diameter (LVID) and wall thicknesses at, or just below, the tips of the anterior mitral leaflet in long- and short-axis views. Correct orientation of planes for imaging and Doppler recordings was verified using standardized protocols. Measurements were made using a computerized review station equipped with a digitizing tablet and monitor screen overlay for calibration and performance of each measurement. LVID and interventricular septal and posterior wall thicknesses were measured using the two-dimensional view at end-diastole and end-systole according to the recommendations of the American Society of Echocardiography in up to three cardiac cycles \[[@B18]\]. Calculations of LVM were made using a necropsy-validated formula \[[@B19]\]. LVM has excellent reliability when measured through echocardiography; the correlation between repeated measures of LVM was 0.93 between paired echocardiograms in hypertensive adults \[[@B20]\]. LVM was measured on a total of 1,440 African-American participants of GENOA.

SNP selection and genotyping
----------------------------

One thousand nine hundred fifty-six SNPs from 268 genes known or hypothesized to be involved in blood pressure regulation, lipoprotein metabolism, inflammation, oxidative stress, vascular wall biology, obesity, and diabetes were identified from the genetic association literature and positional candidate gene studies \[[@B21]\] to be genotyped in the entire GENOA population.
SNPs were chosen based on a number of criteria, including the published literature, non-synonymous SNPs with a minor allele frequency (MAF) \> 0.02, and tagSNPs identified using public databases such as dbSNP \[[@B22]\] and the SeattleSNPs database \[[@B23]\]. DNA was isolated using the PureGene DNA Isolation Kit from Gentra Systems (Minneapolis, MN). Genotyping, based on polymerase chain reaction amplification techniques, was conducted at the University of Texas-Health Sciences Center at Houston using the TaqMan assay and ABI Prism^®^Sequence Detection System (Applied Biosystems, Foster City, CA). Quality control measures for genotyping assays included robotic liquid handling, separate pre- and post-PCR areas, standard protocols, and quality control analyses including 5% duplicates, positive and negative controls, computerized sample tracking, and data validity checks. After these quality control procedures and removal of monomorphic SNPs, 1,878 SNPs from 234 genes were available for analysis in the African-American cohort of GENOA (see Additional file [1](#S1){ref-type="supplementary-material"}). Primers and probes are available from the authors upon request. Furthermore, FBPP data (including GENOA) are freely available to researchers upon request: <http://public.nhlbi.nih.gov/GeneticsGenomics/home/fbpp.aspx>.

Population substructure
-----------------------

The presence of population substructure is a concern for genetic epidemiological studies because the distribution of admixture proportions within a study sample can be a source of confounding, resulting in spurious SNP-disease associations \[[@B24]-[@B26]\]. Based on seventy-six microsatellite markers that were measured in both the GENOA cohort and the Human Genome Diversity Project (HGDP) \[[@B27]\], we used Structure to test for substructure in the GENOA African-American sample \[[@B28]\].
The populations that served as \"parents\" to the African-American cohort of GENOA in the Structure analysis were the HGDP African Yoruba and Mandenka populations and the Caucasian GENOA population from Rochester, MN. After testing three possible numbers of underlying clusters in our data (K = 1, 2, or 3), Structure indicated that K = 2 clusters had the highest posterior probability. This indicates that, given our data and the assumed ancestral populations, there were no distinct underlying subgroups in our dataset, only admixture between African and European ancestors. The underlying admixture within the African-American GENOA sample can be accounted for through principal component analysis (PCA) \[[@B29]\]. There were 453 microsatellite markers previously genotyped in GENOA for genome-wide linkage analysis; these markers were used to run PCA in R. Prior research has shown that association tests are not sensitive to the number of principal components included, as long as a sufficient number of components are included in the model \[[@B30]\]. The first 20 principal components described approximately 20% of the underlying genetic variation and were used to adjust LVM using least-squares linear regression.

Statistical analysis
--------------------

Data analyses were conducted using the statistical language R (version 2.6) \[[@B31]\]. LVM was transformed using the natural logarithm in order to best approximate the distributional assumptions of linear regression. Allele and genotype frequencies were calculated using standard gene counting methods. Hardy-Weinberg equilibrium (HWE) was assessed using a chi-square test, or Fisher\'s exact test if a genotype class had fewer than five individuals \[[@B32]\]. LVM was adjusted for risk factors including age, sex, SBP, height, weight, and admixture using least-squares linear regression.
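The gene-counting and HWE checks above can be sketched as follows. This is a minimal Python illustration (the paper's analyses were run in R); the genotype counts are hypothetical, and the Fisher's exact branch used when a genotype class has fewer than five individuals is omitted.

```python
from math import erfc, sqrt

def hwe_chisq(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium.

    Genotype counts are for a biallelic SNP; the allele frequency comes
    from standard gene counting. Returns (statistic, p-value) with 1 df.
    """
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # allele frequency by gene counting
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    stat = sum((o - e) ** 2 / e
               for o, e in zip((n_aa, n_ab, n_bb), expected))
    # Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
    return stat, erfc(sqrt(stat / 2))

# Hypothetical counts exactly at HWE: statistic 0, p-value 1
stat, pval = hwe_chisq(n_aa=250, n_ab=500, n_bb=250)
```

The closed-form `erfc` survival function avoids a SciPy dependency; it is exact only for the 1-df case used here.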
The residuals from the adjustment model were normally distributed, centered around zero, and used as the dependent variable for association tests. Tests for single SNP effects and SNP-SNP interactions utilized these residuals. For tests of SNP-covariate interactions, the respective variable was left out of the adjustment model and instead included in the model for interaction. For example, when SNP-SBP interactions were tested, the LVM residuals were obtained by adjusting for age, sex, height, weight, and admixture. Of the 1,440 African-Americans in GENOA with LVM measures, the final sample size for association analyses was 1,326, because a small number of individuals were missing risk factor adjustment data, microsatellite data for PCA, or SNP data. We used a multi-stage approach to identify both main and interactive genetic effects associated with adjusted logLVM. The first stage was dedicated to conducting association analyses for SNP effects, SNP-covariate interactions, and SNP-SNP interactions. The second stage focused on reducing the possibility of false-positive association results and on replication of results within our GENOA sample. Finally, we conducted multivariable SNP modeling with the associations passing the second stage of analysis. This analysis approach has been previously described by Kardia et al. \[[@B33]\] and Smith et al. \[[@B34]\].

Stage I: Association analyses
-----------------------------

In the first stage of analysis, we tested each of the 1,878 SNPs for association with adjusted logLVM using least-squares linear regression methods in the full sample \[[@B32],[@B35]\]. The SNPs were modeled with two degrees of freedom, thereby assuming no underlying genetic model, and statistical significance for the main effect of each SNP was determined based on a likelihood ratio statistic.
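This two-step procedure — covariate adjustment by least squares, then a 2-degree-of-freedom likelihood ratio test per SNP — can be sketched as follows. A Python sketch on simulated stand-in data (the paper used R); all variable names and effect sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Stand-ins for the adjustment covariates (age, sex, SBP, height, weight)
covars = rng.normal(size=(n, 5))
log_lvm = covars @ np.array([0.2, 0.1, 0.3, 0.1, 0.2]) + rng.normal(size=n)
genotype = rng.integers(0, 3, size=n)        # 0/1/2 copies of the minor allele

def residuals(y, X):
    """Residuals of y regressed on X plus an intercept (least squares)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

adj = residuals(log_lvm, covars)             # risk-factor-adjusted logLVM

# 2-df test: code the SNP as two dummy variables (no genetic model assumed)
dummies = np.column_stack([genotype == 1, genotype == 2]).astype(float)
rss_reduced = np.sum((adj - adj.mean()) ** 2)          # intercept-only model
rss_full = np.sum(residuals(adj, dummies) ** 2)        # intercept + 2 dummies
lr_stat = n * np.log(rss_reduced / rss_full)  # ~ chi-square, 2 df under H0
```

For Gaussian linear models the likelihood ratio statistic reduces to `n * log(RSS_reduced / RSS_full)`, which is why no explicit likelihood evaluation is needed.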
Based on the 1,878 SNPs and 15 chosen covariates, all possible SNP-covariate interactions were assessed for association with adjusted logLVM using least-squares linear regression. The covariates considered in the interactions included age, sex, SBP, DBP, height, weight, diabetes status (0/1), hypertension status (0/1), use of anti-hypertensive medication (0/1), duration of hypertension, smoking status (0/1), myocardial infarction (0/1), total cholesterol, low density lipoprotein cholesterol (LDL), and triglycerides. Age, sex, SBP, height, and weight were left out of the adjustment model in order to include each of these main effects in the respective test for interaction. We determined significance of the SNP-covariate interaction with a likelihood ratio test statistic comparing a full model (including interaction terms and main effects of the variables in the interaction term) to a reduced model containing only the main effects of the covariate and SNP being tested. All possible pairwise SNP-SNP interactions were tested with SNPs coded as two dummy variables to allow testing for all possible statistical epistatic effects \[[@B36]\]. The statistical significance of the SNP-SNP interaction was based on a likelihood ratio test comparing the full model including all interaction terms to a reduced model with only the main effects of each SNP (up to four degrees of freedom, depending on the presence of all genotypic combinations) \[[@B36]\].

Stage II: Reduction of false positive associations
--------------------------------------------------

The second stage of analysis focused on reducing the possibility of false-positive association results and on replication of results within our GENOA sample. We did this by implementing three analytic approaches: 1) the False Discovery Rate (FDR) \[[@B37]\], 2) four-fold cross-validation (CV) \[[@B38]\], and 3) internal replication of results between two subsets of the data.
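The SNP-SNP test can be sketched the same way: each SNP contributes two dummy columns, the full model adds their pairwise products, and the likelihood ratio statistic is referred to a chi-square with as many degrees of freedom as interaction columns present. A Python sketch on simulated data; names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
y = rng.normal(size=n)                       # adjusted logLVM residuals (toy)
g1 = rng.integers(0, 3, size=n)              # genotypes of SNP 1
g2 = rng.integers(0, 3, size=n)              # genotypes of SNP 2

def dummies(g):
    """Two dummy columns per SNP: no genetic model assumed."""
    return np.column_stack([g == 1, g == 2]).astype(float)

def rss(y, X):
    """Residual sum of squares of y on X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r = y - X1 @ beta
    return r @ r

d1, d2 = dummies(g1), dummies(g2)
main = np.column_stack([d1, d2])
# All pairwise products of the dummy columns: up to 4 interaction terms
inter = np.column_stack([d1[:, i] * d2[:, j]
                         for i in range(2) for j in range(2)])
full = np.column_stack([main, inter])
lr = n * np.log(rss(y, main) / rss(y, full))
# Refer lr to a chi-square whose df equals the number of non-empty
# genotype-combination columns actually present in the sample.
```

In real data some genotype combinations may be absent; those product columns are all zero and drop out of the fit, which is what reduces the test's degrees of freedom below four.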
Only associations passing the pre-determined thresholds for all three approaches were considered positive associations. The first step for reducing the probability of false positive results was to calculate the FDR q-value for all association tests \[[@B37]\]. FDR is a method that controls the proportion of \"rejected hypotheses\" that are rejected falsely. For the single SNP associations, the vector of model p-values was used to calculate the q-values, while for the SNP-covariate and SNP-SNP interactions, the vectors of partial F-test p-values were used. An FDR q-value threshold \<0.30 was used to determine significance. The second approach for minimizing false positive results was four-fold CV, a method that reduces false positives by eliminating associations that lack predictive ability in independent test samples. We performed CV by dividing the full sample into four equally sized groups. Three of the four groups were combined into a training dataset, and the modeling strategy outlined above was carried out to estimate model coefficients. These coefficients were then applied to the fourth group, the testing dataset, to predict the value of the outcome variable for each individual in this independent test sample. This process was repeated for each of the four testing sets. Because random variation in the sampling of the four mutually exclusive test groups can affect the estimates of CV R^2^, this procedure was repeated ten times and the CV R^2^ values were averaged \[[@B38]\]. Single SNP associations were considered cross-validated if the average percent variation predicted in independent test samples (CV R^2^) was greater than 0.5%; interactions were considered cross-validated if the difference in average percent variation predicted between the full model containing the interaction terms and the reduced model containing only main effect terms was greater than 0.5%.
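The first two Stage II filters can be sketched as follows. The q-value routine is a plain Benjamini-Hochberg step-up, assumed here as a stand-in for the q-value method of \[[@B37]\] (which estimates the proportion of true nulls and can differ), and the CV R^2^ is taken as the squared correlation between held-out predictions and observations — one common definition; the paper does not spell out its exact estimator. Data and p-values are illustrative.

```python
import numpy as np

def bh_qvalues(pvals):
    """Step-up FDR q-values (Benjamini-Hochberg-style, monotone)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    q = np.empty(m)
    running_min = 1.0
    for rank in range(m, 0, -1):             # walk from largest p to smallest
        i = order[rank - 1]
        running_min = min(running_min, p[i] * m / rank)
        q[i] = running_min
    return q

def cv_r2(y, x, n_folds=4, n_repeats=10, seed=0):
    """Average cross-validated R^2: fit on training folds, predict the
    held-out fold, square the prediction-observation correlation, and
    average over repeated random fold assignments."""
    rng = np.random.default_rng(seed)
    r2s = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(y))
        folds = np.array_split(idx, n_folds)
        preds = np.empty_like(y)
        for test in folds:
            train = np.setdiff1d(idx, test)
            slope, intercept = np.polyfit(x[train], y[train], 1)
            preds[test] = intercept + slope * x[test]
        r2s.append(np.corrcoef(preds, y)[0, 1] ** 2)
    return float(np.mean(r2s))

qs = bh_qvalues([0.001, 0.008, 0.039, 0.041, 0.042, 0.60])
keep = qs < 0.30                             # the paper's q-value threshold

rng = np.random.default_rng(7)
x = rng.normal(size=400)                     # stand-in predictor columns
y = 0.2 * x + rng.normal(size=400)           # stand-in adjusted residuals
avg_r2 = cv_r2(y, x)                         # compare to the 0.5% threshold
```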
This threshold of 0.5% was chosen because, in permutation tests on the models investigated in this paper, we found that the probability of observing a CV R^2^ × 100 greater than 0.5% by chance alone was less than 5% (results not shown). That is, Pr(CV R^2^ × 100 \> 0.5%) \<0.05 under the null hypothesis of no association. The third and final step to reduce false positive results was to demonstrate replication of effects within our GENOA sample. The first replication subset was created by randomly sampling, without replacement, one sibling from each sibship in the entire sample of African-Americans. From the remaining people, we randomly sampled a second sibling from each sibship to establish the second subset. Association analyses were then conducted in both subset samples. If a SNP, SNP-covariate, or SNP-SNP association replicated across these two samples (α = 0.10) and passed the FDR and CV criteria in the full sample, it was tested for homogeneity of direction and magnitude of effect across the two samples.

Multivariable SNP modeling
--------------------------

Based on the association tests that passed all three of the above criteria (FDR q-value \< 0.30, replication in both subsets with α = 0.10, and CV R^2^ \> 0.005), we built a multivariable linear regression model using forward selection in the full sample of GENOA African-Americans. Residuals from the age-, sex-, SBP-, height-, weight-, and admixture-adjusted logLVM were used as the dependent variable for this multivariable model. The increase in percent variation of adjusted LVM explained was then calculated, as was the increased predictive ability of the model, based on the full model CV R^2^ with the addition of each term.
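The forward-selection step can be sketched as follows. A Python sketch on toy data: each candidate block stands in for the main-effect and interaction columns of one SNP-SNP pair, and selection here maximizes the likelihood ratio statistic, which for blocks of equal degrees of freedom matches picking the smallest p-value (an assumption; the paper selected on p-values directly).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
y = rng.normal(size=n)                       # adjusted logLVM residuals (toy)
# Toy candidate blocks; in the paper each block would hold one pair's
# SNP dummy columns plus their interaction products.
blocks = {f"pair{i}": rng.normal(size=(n, 3)) for i in range(6)}

def rss(y, cols):
    """Residual sum of squares of y on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def forward_select(y, blocks, n_pick):
    """Greedy forward selection: at each step add the block of terms that
    most improves the fit relative to the current model."""
    chosen, cols = [], []
    for _ in range(n_pick):
        base = rss(y, cols)
        best = max((b for b in blocks if b not in chosen),
                   key=lambda b: base / rss(y, cols + [blocks[b]]))
        chosen.append(best)
        cols.append(blocks[best])
    return chosen

model_terms = forward_select(y, blocks, n_pick=4)
```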
Because the full sample of individuals contains siblings, the associations that were included in the final multivariable model were also tested using a linear mixed effects model, to account for the familial correlation and to ensure that the results were not dependent upon the underlying familial correlation in the data.

Results
=======

To examine the genetic architecture of LVM in African-American individuals, we used data from the GENOA study for association analysis. In general, this is an older (mean age 63), hypertensive cohort (79% hypertensive) with an average BMI of 31 and 29% diabetic (Table [1](#T1){ref-type="table"}). The average LVM is 160.8 grams.

###### Descriptive statistics for the full African-American cohort of GENOA and two internal replication subset samples.

|                                  | N    | Full Sample   | N   | Subset 1      | N   | Subset 2      |
|----------------------------------|------|---------------|-----|---------------|-----|---------------|
| Age, years                       | 1328 | 62.7 ± 9.5    | 491 | 62.99 ± 9.63  | 496 | 63.09 ± 9.62  |
| BMI, kg/m^2^                     | 1326 | 31.5 ± 6.6    | 488 | 31.67 ± 7.01  | 494 | 31.5 ± 6.88   |
| SBP, mmHg                        | 1328 | 138.3 ± 21.1  | 491 | 139.3 ± 21.49 | 496 | 138.5 ± 20.77 |
| DBP, mmHg                        | 1328 | 79.6 ± 10.8   | 491 | 80.28 ± 10.76 | 496 | 79.92 ± 11.35 |
| Height, cm                       | 1326 | 168.4 ± 8.8   | 488 | 169.4 ± 9.15  | 494 | 169.2 ± 9.08  |
| Weight, kg                       | 1326 | 89.3 ± 19     | 488 | 90.66 ± 19.83 | 494 | 90.01 ± 19.5  |
| Duration of hypertension, years  | 1046 | 16.5 ± 12.8   | 404 | 16.79 ± 13.24 | 396 | 16.17 ± 12.49 |
| LV Mass, g                       | 1328 | 160.8 ± 47.1  | 477 | 167.4 ± 51.66 | 477 | 163.5 ± 46.42 |
| Sex, male                        | 1328 | 393 (29.6%)   | 491 | 187 (38.1%)   | 496 | 175 (35.3%)   |
| Smoker                           | 1328 | 188 (14.2%)   | 491 | 78 (15.9%)    | 496 | 79 (15.9%)    |
| Diabetic                         | 1328 | 387 (29.1%)   | 491 | 150 (30.5%)   | 496 | 144 (29.0%)   |
| LV Hypertrophy                   | 1328 | 210 (15.8%)   | 491 | 90 (18.3%)    | 496 | 81 (16.3%)    |
| Hypertensive                     | 1328 | 1,046 (78.8%) | 491 | 400 (81.5%)   | 496 | 391 (78.8%)   |
| Use anti-hypertensive medication | 1328 | 930 (70.0%)   | 491 | 357 (72.7%)   | 496 | 344 (69.4%)   |

BMI = body mass index, SBP = systolic blood pressure, DBP = diastolic blood pressure, LV = left ventricular

Smoker: self-reported smoker within the past year.
Diabetic: current treatment with insulin or oral agents OR a fasting glucose ≥126 mg/dL.

LV Hypertrophy: sex-specific thresholds; LVMI ≥51 g/m^2.7^ for males, LVMI ≥49 g/m^2.7^ for females.

Hypertensive: previous clinical diagnosis by a physician with current anti-hypertensive treatment, OR an average SBP ≥140 mmHg or DBP ≥90 mmHg on the second and third clinic visits.

Stage I and II results
----------------------

1,878 SNPs were tested for association with adjusted logLVM. Of these, 221 had a p-value \< 0.10 in the full sample; the minimum p-value was 9.24 × 10^-4^ (SNP: rs12460421, FDR q-value = 0.738, CV R^2^ = 0.0033). None of these single SNP associations had an FDR q-value \< 0.30, and only one had a CV R^2^ \> 0.005 (SNP: rs2182833). Table [2](#T2){ref-type="table"} summarizes the number of results passing each of the three pre-determined multiple testing criteria for the SNP main effects, the SNP-covariate interactions, and the SNP-SNP interactions.

###### Summary of the number of associations passing each of the three multiple testing criteria.

|                                      | SNP Main Effects | SNP-Covariate Interactions | SNP-SNP Interactions |
|--------------------------------------|------------------|----------------------------|----------------------|
| Total \# of Tests                    | 1878             | 28075                      | 1740614              |
| P-value \< 0.10\*                    | 221              | 3217                       | 192202               |
| FDR q-value \<0.30                   | 0                | 10                         | 3083                 |
| Cross-Validation R^2^ \>0.005        | 1                | 112                        | 5007                 |
| Replication (P \< 0.10 both groups)  | 14               | 303                        | 17593                |
| FDR + CV + Replication               | 0                | 0                          | 409                  |

This table outlines the number of associations (single SNP, SNP-covariate interactions, and SNP-SNP interactions) passing each level of multiple testing criteria (False Discovery Rate (FDR), 4-fold cross-validation (CV) repeated and averaged 10 times, and internal replication in two subsets of the full dataset). The intersection of associations passing all three criteria reveals little overlap.

\*P-values for SNP main effects are from a 2 degree of freedom likelihood ratio test statistic.
The SNP-covariate and SNP-SNP interaction p-values were determined from a likelihood ratio test comparing a full model (including all interactions and main effects) to a reduced model containing only the main effects of the covariates and/or SNPs. There were a total of 28,075 SNP-covariate interactions tested. Ten of those had an FDR q-value \< 0.30 (p-values ranging from 1.95 × 10^-6^ to 9.59 × 10^-5^), 303 replicated across sample subsets, and 112 had a CV R^2^ \> 0.005. However, none of the SNP-covariate interactions passed all three criteria. Based on the 1,878 SNPs, all possible SNP-SNP interactions were tested, for a total of 1,740,614 associations. Of these, 409 passed all three criteria: an FDR q-value \< 0.30, replication in both subsets of the data, and a CV R^2^ \> 0.005. The interaction with the lowest partial F-test p-value in the full sample was rs17876148\*rs12971616 (p-value = 4.35 × 10^-8^, FDR q-value = 0.0139, CV R^2^ = 0.0219).

Multivariable modeling results
------------------------------

A multivariable model was built to determine whether a significant proportion of the variation in LVM could be explained by the joint effect of these SNPs and their interactions. To avoid over-parameterizing the model, only four SNP-SNP interactions were chosen for the final multivariable model. The model building process began with the interaction with the most significant likelihood ratio test statistic p-value in the full sample (rs35314437\*rs7552841) (first row of Table [3](#T3){ref-type="table"}). A forward selection process was implemented with the remaining top nine SNP-SNP interaction models. At each decision point, the SNP-SNP interaction resulting in the lowest likelihood ratio test statistic p-value for including main and interaction SNP effects was added to the model. Table [3](#T3){ref-type="table"} shows the detailed association results for the ten most significant SNP-SNP interaction models considered in the forward selection process.
Ultimately, the following four interactions, and their main effects, were included in the final multivariable model in the order listed: rs35314437\*rs7552841, rs257376\*rs5267, rs17876148\*rs12971616, and rs6745660\*rs12460421 (bold rows in Table [3](#T3){ref-type="table"}). Combined, these interactions explained 11.3% of the variation in logLVM after adjustment for age, sex, SBP, height, weight, and admixture. Table [4](#T4){ref-type="table"} outlines the variation in LVM explained by the addition of the main and interactive effects of each SNP-SNP interaction. The predictive ability of the model increased steadily with the addition of each interaction term, as indicated by the increase in CV R^2^; CV R^2^ was 5.56% when the full model included all four SNP-SNP interactions. A detailed mathematical description of the final model, including main and interactive effects, is provided as an additional file (see Additional file [2](#S2){ref-type="supplementary-material"}). Finally, these inferences are robust to family structure: when each of the four SNP-SNP interactions was tested using linear mixed effects models to account for familial correlation, the p-values from the least-squares linear regression and linear mixed effects models had a Pearson correlation coefficient \>0.99.

###### Detailed results for the ten most significant SNP-SNP interaction models.
| SNP 1      | SNP 2      | DF\* for Interaction Test | Interaction P-value in full sample | Model P-value in full sample | Interaction q-value in full sample | CV\* R^2^ in full sample | Interaction P-value (Sample 1) | Interaction P-value (Sample 2) |
|------------|------------|---------------------------|------------------------------------|------------------------------|------------------------------------|--------------------------|--------------------------------|--------------------------------|
| rs35314437 | rs7552841  | 2                         | 1.78 × 10^-7^                      | 3.88 × 10^-8^                | 0.0142                             | 0.0165                   | 0.0202                         | 4.21 × 10^-6^                  |
| rs257376   | rs5267     | 3                         | 1.33 × 10^-6^                      | 9.11 × 10^-8^                | 0.0218                             | 0.0031                   | 0.0965                         | 0.0442                         |
| rs2229169  | rs6664855  | 4                         | 2.45 × 10^-7^                      | 1.19 × 10^-7^                | 0.0142                             | 0.0094                   | 0.0004                         | 0.0028                         |
| rs10482839 | rs7552841  | 3                         | 2.13 × 10^-6^                      | 2.78 × 10^-7^                | 0.0256                             | 0.0143                   | 0.0276                         | 3.96 × 10^-5^                  |
| rs17876148 | rs12971616 | 4                         | 4.35 × 10^-8^                      | 3.14 × 10^-7^                | 0.0139                             | 0.0151                   | 1.85 × 10^-7^                  | 0.0389                         |
| rs936211   | rs521898   | 2                         | 1.17 × 10^-6^                      | 1.07 × 10^-6^                | 0.0211                             | 0.0115                   | 0.0663                         | 0.0023                         |
| rs6745660  | rs12460421 | 4                         | 0.0002                             | 1.09 × 10^-6^                | 0.2247                             | 0.0158                   | 0.0856                         | 0.0054                         |
| rs945032   | rs12028945 | 4                         | 6.45 × 10^-6^                      | 1.11 × 10^-6^                | 0.0385                             | 0.0103                   | 0.0028                         | 0.0011                         |
| rs17876144 | rs12971616 | 4                         | 7.29 × 10^-8^                      | 1.14 × 10^-6^                | 0.0139                             | 0.012                    | 2.73 × 10^-6^                  | 0.0341                         |
| rs35314437 | rs4846052  | 1                         | 1.73 × 10^-7^                      | 1.15 × 10^-6^                | 0.0142                             | 0.0177                   | 0.0021                         | 4.14 × 10^-5^                  |

Table 3 outlines the detailed association and multiple testing results for the top ten SNP-SNP interactions passing all three multiple testing criteria.
\"Interaction p-values\" are from a likelihood ratio test with up to 4 degrees of freedom (depending on the number of genotype classes represented in the GENOA sample); the \"model p-value\" column is from the likelihood ratio test for the model including main effects and interactions compared to a null model (up to 8 degrees of freedom); the q-value was assessed from the \"interaction p-value\"; CV R^2^ is the difference in CV R^2^ when the interaction terms are included in the CV process compared to only main effects of SNPs; and the final two columns are the interaction p-values for the internal replication subset samples. These ten models were used in the multivariable model building process, with the bold rows indicating interactions included in the final model.

\*DF = degrees of freedom, CV = cross-validation

###### Outline of model improvement with addition of each SNP-SNP interaction included in final multivariable model.

| Model | Interaction Terms in Model           | Total \# of Terms in Model | R^2^  | Adjusted R^2^ | LR\* p-value for Additional Terms | Full Model CV\* R^2^ |
|-------|--------------------------------------|----------------------------|-------|---------------|-----------------------------------|----------------------|
| 1     | (rs35314437 \* rs7552841)            | 5                          | 0.034 | 0.03          | n/a                               | 0.0165               |
| 2     | Model 1 + (rs257376 \* rs5267)       | 12                         | 0.073 | 0.064         | 2.094 × 10^-8^ (df = 7)           | 0.0332               |
| 3     | Model 2 + (rs17876148 \* rs12971616) | 20                         | 0.108 | 0.093         | 2.208 × 10^-7^ (df = 8)           | 0.046                |
| 4     | Model 3 + (rs6745660 \* rs12460421)  | 28                         | 0.133 | 0.113         | 3.631 × 10^-5^ (df = 8)           | 0.0556               |

A multivariable model including a total of four SNP-SNP interactions was built in the African-American cohort of GENOA using forward selection. With the addition of each SNP-SNP interaction, along with each SNP\'s respective main effect, the variability in adjusted logLVM explained increased (assessed by adjusted R^2^), as did the predictive ability of the model in cross-validation test sets (assessed by full model CV R^2^).
The final multivariable model explained 11.3% of the observed inter-individual variation in adjusted logLVM in GENOA and increased the predictive ability of the model by 5.6%.

\*LR = likelihood ratio, CV = cross-validation

Discussion
==========

LVM is a complex, quantitative trait highly predictive of incident heart disease. While many studies have investigated candidate gene associations with LVM, to our knowledge none has investigated the full spectrum of candidate gene effects on LVM, including SNP main effects, SNP-covariate interactions, and SNP-SNP interactions. Our motivating hypothesis was that variations within positional and functional candidate genes for hypertension and heart disease are associated with LVM via interactive effects, in addition to single SNP effects. In examining this hypothesis, we demonstrated that SNP-SNP interactions dominate the genetic architecture of LVM in the African-American cohort of GENOA. One notable aspect of these results is the overwhelming presence of statistically significant epistasis in the absence of marginal SNP effects. There has been debate in the literature about the best way to test for interactions while minimizing computational burden and the possibility of false positives \[[@B39],[@B40]\]. One strategy is to condition tests for SNP-SNP interactions on at least weakly significant marginal SNP effects (e.g., p-value \< 0.10) \[[@B39]\]. While this method reduces the number of tests conducted, not all SNP-SNP interactions are expected to demonstrate marginal effects \[[@B40]\]. Many previous studies have identified epistasis in the absence of main effects. One example was found in dyslipidemia: individually, none of the three SNPs within the USF1 gene tested for association with various lipid measures showed any significance \[[@B41]\].
However, significant interactions between SNPs in USF1 and SNPs in HSL and APOC3 were identified as significantly associated with triglyceride and apoE levels \[[@B41]\]. Additional examples of epistasis in the absence of main effects in heart disease traits are found in atrial fibrillation \[[@B42]\] and coronary artery disease \[[@B43]\]. An additional case against conditioning searches for interaction on initially significant main effects is the possible bias from the \"winner\'s curse\", a type of ascertainment bias in which the first positive report of a genetic variant overestimates the true effect size. Follow-up searches for interaction based on this overestimated effect tend to be underpowered \[[@B44],[@B45]\]. Likewise, our results do not support conditioning searches for interaction on main effects. Of the eight SNPs included in the multivariable model, the range of main effect SNP p-values was 9.24 × 10^-4^ (rs12460421) to 0.415 (rs12971616) (Table [5](#T5){ref-type="table"}). Conditioning searches for interaction on main effects would have precluded investigation of two of the four robust interactions included in the final multivariable model. This conclusion directly parallels a recent study demonstrating the feasibility and justification of genome-wide interaction searches without conditioning on main effects \[[@B46]\].

###### Positional and functional details of SNPs included in the final multivariable model.

| SNP        | Gene    | Chromosome | Position  | Minor Allele | Minor Allele Freq. | Type           | Biological Processes\*                        | HWE p-value | P-value for Main Effect of SNP |
|------------|---------|------------|-----------|--------------|--------------------|----------------|-----------------------------------------------|-------------|--------------------------------|
| rs35314437 | MPO     | 17q        | 53704206  | G            | 0.015              | Synonymous     | Response to oxidative stress, anti-apoptosis  | 1           | 0.044                          |
| rs7552841  | PCSK9   | 1p         | 55291340  | A            | 0.237              | Intron         | Cholesterol homeostasis & metabolic processes | 0.6532      | 0.0215                         |
| rs257376   | PRKAR2B | 7q         | 106393948 | A            | 0.486              | Synonymous     | Intra-cellular signaling cascade              | 0.5951      | 0.0512                         |
| rs5267     | NPPC    | 2q         | 232615776 | T            | 0.196              | Non-synonymous | Regulation of BP & vasoconstriction           | 0.7323      | 0.0067                         |
| rs17876148 | PON2    | 7q         | 94877484  | A            | 0.093              | Intron         | None reported                                 | 0.0017      | 0.1564                         |
| rs12971616 | CARM1   | 19p        | 10875937  | A            | 0.13               | Intron         | Transcription regulation, histone methylation | 0.1464      | 0.4149                         |
| rs6745660  | HSPD1   | 2q         | 198057781 | G            | 0.351              | 3\' near gene  | Protein folding, response to stress           | 0.0625      | 0.0188                         |
| rs12460421 | CARM1   | 19p        | 10842352  | G            | 0.438              | 5\' near gene  | Transcription regulation, histone methylation | 0.6796      | 9.24 × 10^-4^                  |

\* Biological processes of each gene are a subset of those reported in the Michigan Molecular Interactions website: <http://mimi.ncibi.org/MimiWeb/main-page.jsp>

Concern about the occurrence of type I errors in the face of so many hypothesis tests is substantial and valid. Genetic association studies in the literature have suffered from a great lack of replicability. This lack of replication can be attributed to various causes: population-specific effects resulting from differing allelic and environmental distributions in various geographical regions, false positive reports, or overestimated initial effects (the \"winner\'s curse\"). Recognizing that replication in an independent cohort might not be possible because of various sources of heterogeneity, we sought to find genetic associations that replicated within our study sample and were robust across numerous multiple testing adjustment methods.
The relatively low level of agreement between results filtered through FDR, internal replication, and CV supports the conservative nature of our strategy for determining which results are robust and significant. Furthermore, a similar analysis approach applied to two different phenotypes, ankle-brachial index \[[@B33]\] and leukoaraiosis \[[@B34]\], identified different patterns of genetic architecture, with less emphasis on SNP-SNP interactions. Therefore, we believe this analysis approach is useful for the reduction of type I errors and may provide a tool for identifying unique patterns of genetic architecture, which are likely to vary with the phenotype of study. A natural question arising from our study results is how these SNPs interact biologically. As these SNPs were selected from \"candidate genes\", biological plausibility can be argued for any individual SNP. Table [5](#T5){ref-type="table"} outlines positional and functional information for each SNP. Inferences about protein-protein interactions are more difficult to make from this research because statistical tests for SNP-SNP interactions will not necessarily mirror tests for biological interactions \[[@B36]\]. We searched the Michigan Molecular Interactions database \[[@B47]\] and PubMed \[[@B48]\] for any previously reported protein interactions between the four pairwise gene interactions in the multivariable model. No protein interactions were identified in those databases for the gene combinations reported in Table [5](#T5){ref-type="table"}. This is not surprising, as making the connection between statistical epistasis and biological epistasis is difficult and arguably not permissible \[[@B36],[@B49]\]. Furthermore, since association testing relies on the concept of linkage disequilibrium, it is always possible that at least one of the \"causal\" SNPs is in a different gene than the reported gene, in which case we would not expect to see a biological interaction between the reported genes.
Despite these caveats, the strength and concordance of the associations detected in both traditional hypothesis testing methods (i.e., FDR and internal replication) and prediction testing methods (i.e., CV) give us confidence in the effects these SNP-SNP interactions have on LVM. Of particular potential biological relevance is the MPO SNP (rs35314437) that was identified in the first interaction term included in our multivariable model. Work done by Vasilyev et al. found that MPO-generated oxidants have a profound, adverse effect on left ventricular remodeling and function \[[@B50]\]. Further, Ng et al. concluded that MPO biomarkers increased the specificity of N-terminal pro-B-type natriuretic peptide as a screening tool for identifying undiagnosed left ventricular systolic dysfunction \[[@B51]\]. An interesting future direction for research would be to further pursue how the effects of MPO on left ventricular structure and function may be modified by other genes such as PCSK9.

Conclusions
===========

There is much yet to be understood about LVM and why it is so highly predictive of heart disease and all-cause mortality, independent of other cardiovascular risk factors \[[@B4]\]. The results of this research underscore the biological complexity underlying LVM and suggest that context-dependent effects, specifically SNP-SNP interactions, may dominate the genetic architecture of LVM. In this study we focused on main and interactive genetic effects of SNPs within candidate genes. Given the complexity of LVM and the replication issues inherent to heterogeneous traits, we demonstrate a conservative approach for identifying robust associations within a given population. Future examinations of the genetic architecture of LVM should include replication efforts of the reported interactions in independent populations, with detailed consideration of sources of heterogeneity such as differing allele frequencies and population characteristics.
Competing interests
===================

The authors declare that they have no competing interests.

Authors\' contributions
=======================

SLRK and KJM conceived of and designed the study. KJM and JC conducted statistical analysis. THM was responsible for recruiting and data collection at the Jackson field center of GENOA. KJM prepared and wrote the manuscript. All authors were involved in critical revisions of the manuscript and have agreed upon its final content.

Pre-publication history
=======================

The pre-publication history for this paper can be accessed here: <http://www.biomedcentral.com/1471-2350/11/160/prepub>

Supplementary Material
======================

###### Additional file 1

**Contains a list of all SNPs (and their respective gene) investigated**.

###### Click here for file

###### Additional file 2

**Contains the R output for the multivariable model including 4 SNP-SNP interactions**.

###### Click here for file

Acknowledgements
================

This work and the authors were supported by the National Institutes of Health (NIH) grants RO1 HL087660 and P60 MD002249. KJM received additional manuscript preparation support through 1Ul1RR025011 from the Clinical and Translational Science Award (CTSA) program of the National Center for Research Resources (NCRR), NIH. The authors would also like to thank Jennifer Smith and Yan Sun for their scientific feedback throughout the journey of this manuscript.
100 Easy Talk Thoughts for LDS Youth. Vol.2 Whether you need to give a talk or you just need to know the answer to a gospel question, this book is a priceless help. Its sequential arrangement of ideas lets you customize your thoughts to the time allotted. And with a broad range of topics, it's perfect for youth and leaders alike. Use it for talks, lessons, or as a missionary resource.
Q: Offline with Bots?

Is there any way for me to directly modify how many players are on each team for CS:GO? I don't have a very good computer, and the default 5v5 makes running CS:GO a real pain. I'd really like to only start out with 3v3s, realistically, but I don't know how to modify server settings. What should I do to make offline with bots a lot easier on my computer?

A: There are a couple of ways to accomplish this, namely editing config files. However, the easiest way is to use console commands once you are in a game to fill it however you like. You will start with a standard 5 vs 5 setup, and by using the following commands you can then clear the server and fill it with the number of bots you desire.

bot_kick - This command will remove all bots from the game.

bot_quota # - By replacing the "#" with a number you will spawn in that number of bots. Take into account that YOU are included in that number. For instance, if you type "bot_quota 2" it will simply add 1 bot in addition to yourself.

You can find additional bot-related console commands HERE.
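For a persistent setup, the same console commands can go in a user config file so they run every session. A minimal sketch — the file name and `bot_quota_mode` value follow standard Source-engine conventions, but the exact `csgo/cfg` path varies by install:

```
// autoexec.cfg (sketch) — start offline matches as 3v3 instead of 5v5
bot_kick             // remove the default bots
bot_quota 6          // 6 players total, including you -> 3v3
bot_quota_mode fill  // keep the server filled up to the quota
```

With `bot_quota_mode fill`, the engine tops the match back up to the quota if a bot is kicked, so the 3v3 arrangement survives round transitions.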
9 Simple Acts of Self-Care for the Sandwich Generation The Sandwich Generation — those who are both caring for their aging parents and raising their own children — face unique stressors that most others do not. If you're in this generation, then you know what it feels like to constantly have other people need you, to the point that you feel like you never have a moment to yourself. Even though you feel squashed between the needs and expectations of others, you have to take care of yourself, too. Otherwise, you won't be able to stay healthy and strong enough to continue caring for everyone who loves you and needs you. Here are some simple, free ways to start caring for yourself today. 1. Track accomplishments When there are always more demands on your time, it's easy to forget the things you do get done. So track your accomplishments every day. Even keeping a running list of the things you do can help you feel better about the ways you're spending your time and energy. Remember to include relationships you invest in and people you actively care for, because those things matter, too! 2. Get more sleep Choose sleep even when there's more to do. Give yourself a bedtime, and find a way to stick to it. Make sleep nonnegotiable, so you have the energy and the alertness to deal with whatever tomorrow throws at you. (See also: Treat Yourself With These 7 Free Self-Care Routines) 3. Allow yourself to feel Both children and aging parents tend to come with a lot of feelings that you may tend to prioritize. Instead of letting their feelings dominate yours, set aside some time for your feelings. Give yourself 10-15 minutes every morning or night to cry, be angry, or whatever. Enter into your feelings, give them their own time and space, and you will process them better and maintain better emotional health through a difficult time. 4. Get goofy Are your kids being silly? Join them. 
Giving yourself time to play means letting loose, being free from constant demands, and remembering what it's like to have fun. When you're running from task to task, it's easy to forget what that feels like. But giving yourself some time to be a goofball will make you feel better when you return to your regularly scheduled adulting. 5. Breathe deeply Deep breaths are good for you. They help you focus, fully oxygenate your body, and help you be intentional about the pace you're setting. Stopping every so often for three to five deep breaths will help you feel more in control, and like you can handle all of the things on your plate. 6. Talk to yourself kindly Stop the critical voices in your head by becoming your own best friend. What would you say to a best friend facing stress like this? I can almost guarantee it's not the same things you're saying to yourself. Change the messages you're sending yourself, and you will feel better about the important work you're doing. 7. Exercise If hitting the gym is your thing, make sure you go. But exercise doesn't have to be hard, time-consuming, or even cost money. It can be as simple as taking a walk. The point is to get your body moving and get some endorphins flowing. Plus, exercise helps you stay stronger and healthier, so you can continue to care for those who need you. 8. Give yourself a gift Give yourself something that you need or want. This can be as simple as a couple extra hours of sleep or a morning off. Or maybe you need to buy yourself something, as opposed to spending your money on everyone else first. Being intentional about gifting yourself what you need — without going overboard, of course — will make you feel more valued and cared for. (See also: 8 Stress Relief Items You Need in Your Life That Are Under $20) 9. Ask for help Whether you need to hire a baby sitter, get some help caring for your parents, or have someone else clean the house, do it. 
If you can't afford it, ask some friends or family members for help. When you stop expecting to do it all on your own, you'll feel better about the things that you do get done, and you'll feel free to set the boundaries you need to maintain your physical and emotional health. Disclaimer: The links and mentions on this site may be affiliate links. But they do not affect the actual opinions and recommendations of the authors. Wise Bread is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com.
Q: Android shared element transition bug

As you can see in this video, I am trying to animate the opening of a new activity. The images are animated properly when opening the activity, but fail miserably when going back (I'm using supportFinishAfterTransition()). I've tried all sorts of methods that I've found here or on Google, but nothing worked, i.e.:

Defining a transition under res/transition:

    <transitionSet xmlns:android="http://schemas.android.com/apk/res/android">
        <changeBounds/>
        <changeImageTransform/>
    </transitionSet>

and then using it in the styles of both the main activity and the one that I'm opening:

    <!-- enable window content transitions -->
    <item name="android:windowActivityTransitions">true</item>
    <!-- specify shared element transitions -->
    <item name="android:windowSharedElementEnterTransition">@transition/change_image_transform</item>
    <item name="android:windowSharedElementExitTransition">@transition/change_image_transform</item>

I've also tried the same in my Java code:

    getWindow().requestFeature(Window.FEATURE_CONTENT_TRANSITIONS);
    getWindow().setSharedElementExitTransition(...);
    getWindow().setSharedElementEnterTransition(...);

From my view holder I'm starting the new activity like so:

    private void openNewActivity() {
        String transitionName = "details";
        Intent intent = new Intent(mContext, ActivityCardDetails.class);
        ViewCompat.setTransitionName(mImageView, transitionName);
        //noinspection unchecked
        ActivityOptionsCompat options = ActivityOptionsCompat.makeSceneTransitionAnimation(
                (MainActivity) mContext,
                mImageView,      // The view which starts the transition
                transitionName   // The transitionName of the view we're transitioning to
        );
        ActivityCompat.startActivity((MainActivity) mContext, intent, options.toBundle());
    }

This is the ImageView that starts the animation:

    <ImageView
        android:layout_width="match_parent"
        android:layout_height="180dp"
        android:id="@+id/business_card_image"
        android:adjustViewBounds="true"
        android:scaleType="centerCrop" />

and
this is the second ImageView that should animate it back:

    <ImageView
        android:id="@+id/business_card_image"
        android:layout_width="match_parent"
        android:layout_height="180dp"
        android:scaleType="centerCrop"
        android:fitsSystemWindows="@bool/isFitSystemWindows"
        android:transitionName="@string/transition_card_details"
        app:layout_collapseMode="parallax" />

PS: I'm using Glide to load the images, as this is user-created content and I have no control over the aspect ratio of the images.

    Glide.with(mContext)
        .load(toLoad)
        .fitCenter()
        .centerCrop()
        .crossFade()
        .into(mImageView);

I've been struggling with this issue for weeks now and I just can't seem to overcome it. Why can't it animate its bounds back? Thank you!

UPDATE: Apparently the bug is related to the image width. If I set the width to a constant (say 200dp) the animation runs fine in both directions, but if I set the image width to match_parent the return animation is broken.

UPDATE2: I can pretty much say that this bug is related to the image loading lib (Glide and Picasso both have this issue)

A: Apparently the issue has been described before here and here. The issue comes indeed from the image loader. The trick was to use .dontTransform() together with

    ActivityCompat.postponeEnterTransition(this)
    ActivityCompat.startPostponedEnterTransition(ActivityCardDetails.this)

to only play the animation when the image is properly loaded.
Q: Line integral: Circulation to the square

P: Find the circulation around the square defined by $-\frac{\pi}{2}\leq x\leq \frac{\pi}{2}$ and $-\frac{\pi}{2} \leq y \leq \frac{\pi}{2}$, where $\hat{v}=v_x\hat{i}+v_y\hat{j}=\cos(x)\sin(y)\hat{i}-\sin(x)\cos(y)\hat{j}$.

Approach: $$\Delta\hat{v}=\oint{\hat{v}}\cdot d\hat{r}=I+II+III+IV$$ $$I(i)=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} {v(x,y)}\cdot dx$$ I would do this for $II(j), III(i), IV(j)$, where I would replace $dx$ with $dy$ whenever I give $x$ a value: $$II(j)=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} {v(\frac{\pi}{2},y)}\cdot dy$$ etc... I would end up with $$\therefore\Delta\hat{v}=-2-2-2-2=-8$$

Question: I'm curious to know if there is any other possible way to solve this? Thanks in advance.

A: You can also apply Stokes' theorem $$\oint\hat{v}\cdot d\hat{r}=\int\!\!\!\int_S(\nabla\times\hat{v})\cdot dS,$$ where $$(\nabla\times\hat{v})\cdot dS=\left(\frac{\partial v_y}{\partial x}-\frac{\partial v_x}{\partial y}\right)dx\,dy=-2\cos x\cos y\,dx\,dy.$$ Then $$\Delta\hat{v}=\int_{-\pi/2}^{\pi/2}\int_{-\pi/2}^{\pi/2}(-2\cos x\cos y)dx\,dy=-2\left(\int_{-\pi/2}^{\pi/2}\cos x\,dx\right)^2=-8.$$
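As a quick numerical sanity check (not part of the original answer), the Stokes'-theorem double integral can be approximated with a plain midpoint rule in Python; the sum should approach the analytic value of -8:

```python
import math

def circulation(n=400):
    """Midpoint-rule approximation of the double integral of
    -2*cos(x)*cos(y) over [-pi/2, pi/2] x [-pi/2, pi/2]."""
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        for j in range(n):
            y = a + (j + 0.5) * h
            total += -2.0 * math.cos(x) * math.cos(y)
    return total * h * h

print(circulation())  # ≈ -8
```

The midpoint rule converges quadratically here, so even modest grids land within a fraction of a percent of the exact answer.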
Game Crashes/Black Screen when exiting world/map editor

As the title of this thread indicates, when I exit the world editor, more often than not, the game either crashes or it goes into a black screen. This is extremely irritating, especially when I'm in the middle of putting a video together. When the game crashes, it goes back to my desktop; however, when the screen goes black as I exit the world editor, the computer has to be restarted, which closes all of my open tabs and potentially other applications running in the background. If anyone is able to help me out with this issue, it would be greatly appreciated.
{ "pile_set_name": "Pile-CC" }
942 - 31726. Suppose 0 = 2*c + b - 55616. Round c to the nearest 10000. 20000 Let k = -547875 - -547877.19943. Let m = 26.2 - 24. Let d = m - k. Round d to four decimal places. 0.0006 Let q = 1.0176842 - 22.0176772. Let f = q + 21. What is f rounded to 5 decimal places? 0.00001 Let n = 58.623 + -58.9. What is n rounded to 2 dps? -0.28 Let d = -90 - -60. Let b be (-45)/d*(-15000 - 0). What is b rounded to the nearest one thousand? -23000 Let m be (-12)/(3 - (-4275345)/(-1425117)). Let y = m + 14315679. Let c = y + -17165445. Round c to the nearest one million. -6000000 Let t = 1 - -1. Let p be (45/4)/(-7 - (-42690)/6096). Suppose f - t*r = -1270, 3*f + p = 4*r - 3*r. What is f rounded to the nearest 100? -1300 Let n = -452.999665 + 453. What is n rounded to five decimal places? 0.00034 Let a = -143.699219 + 143.7. Round a to five decimal places. 0.00078 Let v = -0.05741 + 0.03194. What is v rounded to 3 dps? -0.025 Suppose -2*s - 12050080 = -5*p, -2*s - 11 = -1. Suppose 0 = -4*k - p - 3509986. Round k to the nearest 100000. -1500000 Let a be 15/(-1) + (6 + -5)*-2. Let w(i) = -5*i - 19. Let b be w(a). Round b to the nearest 10. 70 Let i = 78.08 + -3.88. What is i rounded to the nearest integer? 74 Suppose -4*f = 3*w - 4 - 24, -2*f + 2*w = 0. Suppose -3*p = 4*n - 9*n - 15106, n + f*p = -3012. What is n rounded to the nearest 1000? -3000 Let v = 0.001 + 0.419. Let l = 2.5288 + -2.102. Let k = l - v. Round k to 3 dps. 0.007 Let t = -113674 - -113676.90011. Let q = -6.4 + 9.3. Let k = q - t. What is k rounded to four decimal places? -0.0001 Let j = 653 + -713.9. Let w = j - -2.9. Let a = w + 58.033. What is a rounded to 2 decimal places? 0.03 Let n = -31.2574 + 0.1074. Let i = -59.41 - n. Let d = -28 - i. What is d rounded to 1 decimal place? 0.3 Let z(c) = -1548727*c**3 - c**2 - 3*c - 3. Let m be z(-1). Suppose -m = 2*t + 96251274. Round t to the nearest one million. -49000000 Let h = -119 - -118.93593. Round h to 3 dps. -0.064 Let g = 1007 - 1006.9999077. 
Round g to 6 dps. 0.000092 Let f = -2108.9586 - 2247964.9414. Let v = -2250080.9000015 - f. Let y = -7 - v. What is y rounded to 6 decimal places? 0.000002 Let q = 103 + -103.5. Round q to 1 dp. -0.5 Let f(m) be the first derivative of -5 - 4/3*m**3 + 4*m - 5*m**2 + 1/4*m**4. Let y be f(8). Round y to the nearest 100. 200 Let c be (-8)/12 - 2/6. Let l be c/((-296)/300 - -1). Let z = l - -30. Round z to the nearest 10. -50 Let r = -26 - -25.973. Let s = -63.973 + r. Let a = s + 63.99999949. What is a rounded to seven dps? -0.0000005 Let i be (160*-25)/((-7)/66850). Round i to the nearest 1000000. 38000000 Let o(y) = -y**2 + 32*y + 25. Let a be o(17). Let h be ((-11)/(22/a))/(1/2). Round h to the nearest one hundred. -300 Suppose 2*b + 8014000 = 3*b. What is b rounded to the nearest one hundred thousand? 8000000 Let d = -5769831 + 5742068.816. Let g = d - -27793.183926. Let q = g - 31. Round q to 5 dps. -0.00007 Let z = 70 - -35. Let t = z - 104.9999488. Round t to 6 decimal places. 0.000051 Suppose 3*o = 12 - 0. Suppose 0 + 12 = o*i. Suppose -14000000 = -i*j - j. What is j rounded to the nearest 1000000? 4000000 Let u = -11.5432 + 41.5367. Let v = u - 30. What is v rounded to 3 decimal places? -0.007 Let q be (-6)/4 - 4/(-8). Let p be ((-5154 - -4)/(-5))/q. Round p to the nearest one hundred. -1000 Let d = -44 + 36. Let w be ((-354)/d)/((-3)/(-8)). What is w rounded to the nearest ten? 120 Suppose 3 = -3*b, b = 4*s - 2*s - 200607. Let x = s - 53277. Let p be 4/(-14) - x/49. Round p to the nearest 100. -1000 Let c = 2.786 + 0.479. Let v = c - -0.035. What is v rounded to the nearest integer? 3 Let m = -119.1 - -108. Let u = m + 50.5. What is u rounded to the nearest integer? 39 Let m = 607 - 647.7. Let k = 39 + m. Round k to the nearest integer. -2 Suppose 0 = -0*k + 3*k + 4506. Suppose 10*i + 4186 = 3*i. Let f = i + k. What is f rounded to the nearest one hundred? -2100 Let x = 181333 + -181333.0399985. Let l = x + 0.04. What is l rounded to 6 decimal places? 
0.000002 Let d = -1411.168 - -1405. Round d to the nearest integer. -6 Let x = -2936.364 + 2930. Round x to the nearest integer. -6 Suppose 2*w + 128253035 = 5*v - 0*w, 10 = -2*w. Let p = v - 15850605. What is p rounded to the nearest 1000000? 10000000 Let n be (-1)/5 + (-36650756)/(-5). Suppose -n = -g + 19469849. Suppose -s + 4*r = -3*s + 13400000, -4*s + 3*r = -g. Round s to the nearest 1000000. 7000000 Let c = -11985.0216 - -11979. Let p = c - 0.0504. Let q = p + 6. Round q to two decimal places. -0.07 Let s(j) = 40742*j**2 + 22*j + 96. Let o be s(-9). What is o rounded to the nearest 1000000? 3000000 Let a = 34.4 + -28. Let b = 6 - a. Let m = b + -1.1. Round m to the nearest integer. -2 Let s be 116/(-638) - 8448002/(-11). What is s rounded to the nearest 100000? 800000 Let j = 6.9 - 6.953. Let h = 0.023 + j. What is h rounded to one decimal place? 0 Let k = 9 + -74.8. What is k rounded to 0 dps? -66 Let r = -272214393127114.42120306 + 272214396439495. Let f = 3312021.5788 - r. Let q = -359 - f. Round q to 7 dps. -0.0000031 Suppose -5*k + 31 - 11 = 0. Let n be (-63)/(-36) + (-3)/k. Let j(c) = 110*c**3 - c**2 + c. Let p be j(n). Round p to the nearest one hundred. 100 Let h = -0.001 - 0. Let s = h - 0.269. Let q = s + 0.33. Round q to 2 decimal places. 0.06 Let g = -222.519 + -0.481. Let u = 222.999741 + g. Round u to five decimal places. -0.00026 Let c = -347990 - -1619990. What is c rounded to the nearest 10000? 1270000 Let r = 45.4 - 52.75. What is r rounded to 0 dps? -7 Let w = 54.8737 - 0.6137. Let j = w + -54. Round j to 1 decimal place. 0.3 Let z(a) be the first derivative of 549999*a**4/4 - 5*a**3/3 - 2*a**2 - 11. Let g be z(-4). What is g rounded to the nearest one million? -35000000 Let q = 5387.0855 + -114.2755. Let x = 5272.6099868 - q. Let k = -0.2 - x. Round k to six dps. 0.000013 Let l = 16.76 - 39. What is l rounded to the nearest integer? -22 Let j = -39 + -2. Let k = -41.085 - j. Let g = -0.085093 - k. What is g rounded to 5 dps? 
-0.00009 Let y = -2905 + 2904.999998481. Round y to 7 dps. -0.0000015 Suppose 2*n - 84808 = -4*a, 0*n = n + a - 42402. Round n to the nearest 10000. 40000 Let j = 0.6 + 0. Let z = j + -4.1. Let d = 3.5000029 + z. Round d to six dps. 0.000003 Let m = -12 + 6. Let z = -994089.00001 - -994095. Let u = m + z. Round u to four dps. 0 Let w = -1.03533 - -1.042. What is w rounded to 3 decimal places? 0.007 Let m(b) = -b**3 + 14*b**2 + 6*b - 30. Suppose 5*n = -4*z + 23, -n + 2*z = -0*z - 13. Let i be m(n). Round i to the nearest 100. 400 Let t be 32/14 - 4/14. Let v = 9419027 + 2580978. Suppose -x + 0*x - b = v, 2*b = t*x + 23999990. What is x rounded to the nearest one million? -12000000 Let o = 104 - 52. Let p = o - 51.99916. Round p to four decimal places. 0.0008 Let t = -3416.076 - -3421.0760009. Let p = 6 - 1. Let o = t - p. Round o to 7 decimal places. 0.0000009 Let q = 1317.0169 + -1317. Round q to 3 decimal places. 0.017 Suppose -3*y + 1824962 = -5*f, -y - 4*y - f = -3041650. Let p = -215329 + y. What is p rounded to the nearest ten thousand? 390000 Suppose 4*r = 5*r. Suppose r*i = 3*i + 1650000. What is i rounded to the nearest 100000? -600000 Let x be (-2)/(-3 + 4) - 4. Let f(p) = p - 3. Let k be f(x). Let i(c) = -16173*c**2 - c + 4. Let b be i(k). What is b rounded to the nearest 100000? -1300000 Suppose 5*t - a = -2030000, 2*t + 10*a = 14*a - 812000. Round t to the nearest 100000. -400000 Let b be ((-5)/(-15))/(2/90). Suppose -7*x + b = -2*x. Suppose 2117 + 9283 = -x*a. Round a to the nearest 1000. -4000 Let z = -0.382 + 85.682. What is z rounded to the nearest 10? 90 Let g = 3.39492 - 3.4. What is g rounded to three dps? -0.005 Let o = -6.775 + 0.075. Let p = o - -6.6641. Round p to two decimal places. -0.04 Let o(x) = 3*x - 7. Let a be o(5). Let r be (12/8)/((-4)/a). Let w be (-3)/(r/1600)*-625. What is w rounded to the nearest one million? -1000000 Suppose v - 6 = -v, 28479994 = 5*b - 2*v. What is b rounded to the nearest one hundred thousand? 
5700000 Let x = -0.0217 - -0.021700767. What is x rounded to 7 dps? 0.0000008 Let b be 2992795/(-4) - 565/452. What is b rounded to the nearest 100000? -700000 Let f = 863.002445 - 863. Round f to 5 decimal places. 0.00245 Let q = 3498392.072998
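A few of the rounding answers above can be spot-checked mechanically. A short Python sketch using only the built-in round, whose negative-precision argument handles the "nearest 10,000"-style questions (Python rounds exact .5 ties to the nearest even digit, but none of these examples land on a tie):

```python
# Spot-checks of "round to ..." answers from the problems above.
print(round(78.08 + -3.88))           # 74.2 -> 74 (nearest integer)
print(round(9 + -74.8))               # -65.8 -> -66 (0 dps)
print(round(-347990 - -1619990, -4))  # 1272000 -> 1270000 (nearest 10000)
print(round(58.623 + -58.9, 2))       # -0.277 -> -0.28 (2 dps)
```

Each printed value matches the answer given in the corresponding problem.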
Obviously, 100 roof top panels is 100 times as big as 1 roof top panel. Any comparison to non roof top panel system is obviously invalid, since those systems aren’t supposed to exist in a perfect world. (Do I really need to include the sarc tag?) I think you just hit the nail on the head. Follow the money. Do any of the decision makers on the government side own SunPower stocks? How about the lobbyists who strong armed the government decision makers? Its intent is not to generate useable, economical electricity. The intent is an offering to the Climate Change deity, Gaia. Or you could also be viewed as a religious shrine evoke a favor from Gaia by preventing some amount of the evil “carbon pollution” gas into the atmosphere. Because, as the belief goes, if you don’t at least do something, Gaia is going to fry us. Take it on faith. And open your wallets. Meanwhile, the solar panel crony capitalists continue to reap the rewards behind the scenes and funnel political $upport to Democrats. Once this mess is up and running watch the way that it will be described … the system “can” produce … it “could” replace … has the “potential” to … they will never say what it actually achieves. Watch the Greenies and the Libs, they always talk about something that might be, not about what really “is,” which really depends on what the meaning of “is” is. The “monster-sized” labeling has far more to do with the physical connections needed to link 300 locations to one storage source rather than the actual electric output or storage capability. These connections will likely require (1) adding additional conductors to existing or new distribution feeders and/or (2) committing “x” amount of load to an existing conductor. Either way, this results in a monster-sized inefficiency within an electrical grid built specifically and historically for reliability. 
if you were to evaluate this project from a global warming potential, I believe it represents a negative for the environment (at least initially) when compared to more traditional (i.e., fossil fuel) generation sources. More troubling, though, is the deliberate introduction of multiple points of failure on an otherwise reliable system – that’s just dumb from any engineering perspective. James Schrumpf, your “rain tax” sounds a lot like the “road tax” that we pay on motor fuel. What the proponents of electric vehicles never mention is where the money is going to come from to maintain the highways that the fuel tax is now supposed to. Ian M @Ian – don’t worry, both the federal government and state governments have a plan for that. However, a total switchover will not happen, anyway. The grid system and the United States government will both collapse long before that fantasy happens. To put this in context regarding baseload power generation see our article “Going Solar-System Requirements For 100% U.S. Solar Generated Utility Baseload Electricity” which goes through the science, math, and economics of solar power. http://fusion4freedom.us/going-solar/ Fossil fuels and nuclear (fission today and fusion for the grandkids) are the only realistic alternatives. Did you notice these points: The idea behind the Con Edison venture is to create a virtual solar power plant with the specific aim of not having to build an extra “peaker” plant to supply extra power to the grid during peak use periods and: In case of a widespread power outage, the home owner can draw electricity from their own battery This battery design is I believe smart enough to provide power at points where grid electricity cost is higher (I don’t know if NY has differential pricing). Its big for a virtual solar plant – this is in first half dozen rolled out by conventional power companies (one other equivalent in suburban Oz). 
This is the future for (conventional) power companies – it means they don’t need to build more plant/string more power lines to meet increased demand I just love it when you Green bedwetters wax ecstatic over the merits of batteries – one of the most ecologically destructive artefacts that technology has managed to create. Solar panels – particularly the inverters and control circuitry – aren’t particularly brilliant either. But hey Grifter, who cares about a few hundred square miles of utter devastation, along with massive damage to the health of those unlucky to be dwelling there, on the other side of the planet, when you little bedwetters are ‘Saving the World™’? You saving the world much? Off topic, I know you were a DT poster… you know how/where AlecM is? If you post on any of same forums, please pass on my regards: though we could not be further apart in viewpoint, he has a great sense of humour and I loved his jokes. “You saving the world much?” A great deal more than you, I suspect. The family fortune (such as it is) is based on over half a century of recycling of waste amounting to many millions of tons, starting decades before it became fashionable. I have been using low energy bulbs ever since they appeared decades ago, my house is fully insulated and for daily transport I run an elderly turbo diesel Mercedes that does around 45MPG mostly on rapeseed oil. All of the above are purely a matter of economics, with no ulterior motive, of course. I have sat on the committee of a number of charitable trusts, on one in particular I was able through my engineering experience to save the trust many thousands of pounds per annum in electricity and gas bills, and with the exception of my disagreement with the CAGW brigade I would consider myself a convinced and effective environmentalist. Alec is alive and well over at Breitbart. New York is no stranger to whackadoodle ventures. In 1869 Scientific American promoted the construction of a pneumatic subway in New York. 
It lasted from 1870-1873. "The Beach Pneumatic Transit was the first attempt to build an underground public transit system in New York City. It was developed by Alfred Ely Beach in 1869 as a demonstration subway line running on pneumatic power. As the subway line had one stop and a one-car shuttle going back and forth, it was merely a novelty and not a regular mode of transport. It lasted from 1870 until 1873." I'm sure they claimed it would be the model for subways worldwide. What went wrong? The leather seals didn't last, them not having access to petroleum based plastics. For the solar project, I'll put my money on the bird kaka and the cost of union kakacleaners. Oh wait, there's the issue of networking all these electrical generators and getting it to work right. I suggest going back to the wisdom of 1865 – connect all the treadmills in NY together with a system of shafts and pulleys, running down to and under the streets, pumping water to rooftop pumped storage units and finally driving a mighty 100 Watt Edison generator that he promises will be ready by 1873 and a yet to be invented light bulb. Substitute iPhone for light bulb.

Monster Sized?
Physically, yes: panels and batteries all over the place.
Output, no: less than 20% capacity factor from the panels, and the piddly storage (would power only 1000 households using 4 kWh for one hour) doesn't help that at all – Greens don't understand that storage doesn't generate any electricity on its own. Greens hate maths because being able to do and understand maths destroys their hopeful fantasies.

Recruit? With what bait? What rate subsidies will NYC promise the homeowners? "…connect more than 1.8 megawatts of solar power…" And there are how many fully self solar powered homes out there? leaving just how many dregs of solar power per homeowner that just might be available during the sunniest of days?
Couple that with what must be a very expensive contractor devised plan, yet to be developed connection costs, some sort of central lame battery storage idea, not forgetting the homeowner lure they haven't told us about. So what about the cost for this? First assumption is that all of the installed equipment is meticulously maintained, with snow and ice and bird droppings and leaves and dust removed when necessary by the homeowners so that output capacity is close to predictions. With 1.8 MW capacity and 300 homes, that comes to 6 kW AC capacity for each home. With 4 MWhr usable storage and 300 homes, that comes to 13.3 kWhr of storage per home. That means the Sunverge battery system for each home is probably the SIS 19.4, with 6 kW maximum output and 16.5 kWhr of usable storage for 7000 cycles. Price is $20,000. Or $1200 per kWhr of usable storage when new. That needs to be tied to a solar PV system with 5 kW AC or 6 kWDC capacity (https://www.californiasolarstatistics.ca.gov/). At the CA average installed cost of $5.28/WDC, that adds another $31,700 to the bill. The article also says homeowners will be able to operate certain items during a power outage. This likely requires additional circuits to be added in the home that are fed only by the Sunverge battery/inverter system. Add another $3,000 for that. Total cost per household is about $55,000. $16.5 Million for 300 homes. Now add in the costs on Con Edison's side for integrating the SIS data and control into the existing distribution control system. This could easily be a $20 Million project, especially if they are designing for future expansion. Interesting, Chris – let's take the math a bit further, shall we? 1.8 mw = 1800 kw; NREL says NY gets between 4.0 and 4.5 kwh/m^2 annualized so: (I make the assumption that the 1.8 mw figure is based on 1m^2 panel = 1 kw power) 1800 X 4.5 = 8100 kwh/day 8100 X 365 (days/yr) X 20 years = 59,130,000 kwh over 20 years.
$16,500,000 / 59,130,000 = $0.279/kWh over the 20-year lifespan of the panels. According to Forbes, the current electricity rate in NY is about $0.25/kWh, though it's expected to go up. The rest of the US averages about $0.14, so NY is already paying extra for electricity. All this assumes ZERO additional costs for maintenance, breakage/failure, cleaning, etc. (I also didn't include ConEd's costs.) I am also more than a bit leery of NREL's 4.5 kWh/m^2/day annualized figure – that seems high to me; perhaps I'm not reading their map correctly? I wonder if that figure is for sunny days only or if it includes weather days? I got it at this page: http://www.nrel.gov/gis/solar.html

This is a good website for insolation data collected by NREL over multiple years for various solar panel installation types: http://rredc.nrel.gov/solar/old_data/nsrdb/1961-1990/redbook/sum2/state.html The Central Park data for fixed tilt at latitude is 4.6 kWh/m^2/day, which basically agrees with what you found. I think it is a pretty reliable number as it is based on measured data in-situ (it is marked as a primary station site by NREL). But it assumes the surface is never encumbered with snow, debris, dust, shadows, etc.
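The arithmetic traded in the comments above can be checked with a short script. The figures are the thread's own rough assumptions (project cost, 20-year life, NREL's 4.5 kWh/m^2/day average, no degradation), not measured data:

```python
# Figures quoted in the thread (commenters' assumptions, not measured data)
capacity_kw = 1800          # 1.8 MW of panels
homes = 300
storage_kwh = 4000          # 4 MWh of usable storage
project_cost = 16_500_000   # rough per-thread estimate, USD
insolation = 4.5            # kWh/m^2/day, NREL annualized average for NY
years = 20

per_home_kw = capacity_kw / homes            # 6.0 kW per home
per_home_storage = storage_kwh / homes       # ~13.3 kWh per home

# Thread's simplifying assumption: 1 kW of panel capacity produces
# `insolation` kWh per day, so daily output = capacity * insolation
daily_kwh = capacity_kw * insolation         # 8100 kWh/day
lifetime_kwh = daily_kwh * 365 * years       # 59,130,000 kWh over 20 years
cost_per_kwh = project_cost / lifetime_kwh   # ~$0.279/kWh

print(per_home_kw, round(per_home_storage, 1),
      int(lifetime_kwh), round(cost_per_kwh, 3))
```

This reproduces the $0.279/kWh figure, which is a best case: any maintenance, failures, or output below the annualized insolation average only pushes it higher.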
Arturo Macapagal

Arturo dela Rosa Macapagal (14 September 1942 – 11 August 2015) was the son of Philippine President Diosdado Macapagal. He was a Filipino sport shooter who competed at the 1972 and 1976 Summer Olympics in the free pistol event.

Early life

Macapagal was the son of President Diosdado Macapagal and Purita dela Rosa, sister of Rogelio and Jaime. Purita was Diosdado Macapagal's first wife. Arturo was the second child and eldest son in the Macapagal family; Cielo was his elder sister. His mother died when he was just 1 year old, and his father married Evangeline Macaraeg Macapagal when he was five years old. His father had two children with Evangeline: Gloria Macapagal-Arroyo and Diosdado Macapagal Jr.

Education

Macapagal entered San Beda College for a bachelor's degree in business management and graduated cum laude in 1968. During his last year at San Beda, he was president of the student council. He later entered the Asian Institute of Management (AIM) and attained his master's degree in business management from the institution in 1971. He became the first president of AIM's student association and the first chairman of its alumni association.

Sporting career

As a shooter

Macapagal represented the Philippines in several shooting competitions, including the 1972 and 1976 Summer Olympics, where he participated in the free pistol event. During his 1972 Olympic stint he established a national record for free pistol, a record which would not be broken for 21 years, the longest in the country's shooting history. He was chosen as the "Most Outstanding Shooter of the Decade" by the Philippine Olympic Committee in 1980. In 1973 and 1974, he was named All-around Filipino Sports Awardee by the Philippine Sportswriters Association.

As an official

Macapagal also led the Philippine National Shooting Association for many years and served as president of the Philippine Olympians Association.
In 2008 he ran for president of the Philippine Olympic Committee, challenging incumbent Jose Cojuangco, who had been president of the association since 2004 and was seeking a second term. Macapagal lost to Cojuangco by 2 votes, receiving 19 votes to Cojuangco's 21 out of 40 votes cast. Cojuangco went on to win a third term in 2012.

Business career

Macapagal became the president and CEO of Toyota Pasong Tamo, one of Toyota's largest dealers, and also became the chair of Majal Properties Inc. and Melandrex Holdings Inc. He was an active member of several business organizations such as the Financial Executives Institute and the Management Association of the Philippines. He was also given awards by the alumni associations of his former colleges. In 1979, the AIM Alumni Association gave Macapagal the triple-award for Outstanding Achievements in Management, and in 1995 he was given the Centennial Award for Outstanding Achievement in Business and Management by the San Beda Alumni Association, on the occasion of the centennial of the Benedictine monks in the Philippines.

Politics

Macapagal received many offers to enter politics. In 1971, Governor Jose Lingad asked him to run for governor of Pampanga; in 1987, Governor Bren Guiao asked him to run for congressman; and in 1992, Guiao asked him to run as his vice governor. He declined to enter politics.

Social involvement

In 1963, Macapagal and some of his friends established the Scholarship Foundation of the Filipino Youth, and he served as its chairman. The foundation grants college scholarships to high school students with financial difficulties whom it deems talented. He was also a trustee of the St. Anthony College of Technology in Mabalacat, Pampanga, and a member of Habitat Philippines, the latter being an organization that provides housing for the poor.

Personal life

Macapagal was married to Maria Therese Jalandoni, who came from Iloilo, with whom he had three children.
Death

Macapagal died on 11 August 2015 at age 72. He had been hospitalized at the Makati Medical Center for prostate cancer.

References

Category:1942 births
Category:2015 deaths
Category:Filipino male sport shooters
Category:Olympic shooters of the Philippines
Category:Shooters at the 1972 Summer Olympics
Category:Shooters at the 1976 Summer Olympics
Category:San Beda University alumni
Category:Filipino businesspeople
Category:Filipino chief executives
Category:Sportspeople from Pampanga
Category:Deaths from prostate cancer
Category:Deaths from cancer in the Philippines
Category:Shooters at the 1974 Asian Games
Category:Children of Presidents of the Philippines
Category:Asian Institute of Management alumni
Category:Asian Games competitors for the Philippines
Ride on, cowgirlDOVER — Leslie Dicus is a wonderful athlete with dedication, according to Denise Campbell, Dicus’ dressage coach and certified riding instructor. Dicus, 13, of Dover was named the Arkansas Valley Private Riding Academy (AVRA) rider of the year. The daughter of Dwight and Susan Dicus competes in jumping, dressage, cross country and attends three-day eventing shows across the country. Dicus has traveled with Natalie Smith, her cross country coac... Local basketball roundup (Jan. 13, 2012)RJHS East 9G 29, Greenbrier 15 The Russellville Junior High Eastside Lady Whirlwinds beat Greenbrier 29-15 Wednesday in River Valley Junior High Conference play. RJHS East led 9-2 after the first quarter, 13-3 at halftime and 25-5 after three. “Our defense was a huge factor in (Wednesday night’s) win,” head coach Nina Chiolino said. “We were able to hold them to three points at halftime.” Alyssa Owens led the way with 11 points for the Lady Wh... Pottsville teams down Booneville in basketballPOTTSVILLE — The Apaches started fast and never let up in a 62-35 win over Booneville on Friday at George Jones Gymnasium. Pottsville (8-3, 2-1 4A-4 Conference) jumped out to a 15-4 lead after the first quarter and led 37-10 at halftime. The Bearcats finally were able to get some scoring done in the third, and trailed 50-23 entering the fourth. “We came out with a lot of energy tonight on defense, and that led to some transition baskets,” head... Local basketball roundup (Dec. 13, 2011)Waldron SB 65, Dardanelle 58 WALDRON — Dardanelle kept it close throughout Friday, but the Bulldogs pulled away at the end for a 65-58 win in 4A-4 Conference play. Despite 23 points from Mark Gathright, the Sand Lizards faltered down the stretch. “This was a tough loss for us on the road,” head coach Russell Sturdivant said. “We battled all night and got even with about five minutes to go in the fourth quarter. We weren’t ever able to get over... Local basketball roundup (Dec. 
10, 2011)RJHS West JG 35, Greenbrier 31, 2OT The Russellville Junior High Westside Lady Whirlwinds fought a battle Thursday night as they beat the Greenbrier Lady Panthers 35-31 in two overtimes. “I am excited about this win for our girls,” head coach Jason Martin said. “They fought until the end. There were many times throughout this game that things didn’t go our way but the girls hung in there and never gave up. They executed well at key times in ... Local basketball roundup (Dec. 7, 2011)Dover SG 45, Dardanelle 29 DARDANELLE — The Lady Sand Lizards struggled in their 4A-4 Conference opener Tuesday as the Dover Lady Pirates earned a 45-29 win at Sand Lizard Gym. “Dover did a good job tonight on both ends of the court against us,” Dardanelle head coach Kenny McCoy said. “I thought we played hard defensively for most of the first half, but we struggled scoring inside and from the perimeter for the first time this year and it real... Local basketball roundup (Nov. 30, 2011)Pottsville SG 81, Brinkley 56 CONWAY — The Lady Apaches made quick work of the Brinkley Lady Tigers on Monday in the first round of the St. Joseph Tournament in Conway with a convincing 81-56 win. Pottsville pulled ahead 14-6 in the first quarter and led 36-28 at halftime. The Lady Apaches extended the lead to 60-49 in the third and put the game away in the fourth as they outscored Brinkley 21-7 down the stretch. Callie Cox led the way with 22 poin... Sports briefs (Nov. 23, 2011)Bigelow SG 47, Dover 44 DOVER — The Lady Pirates struggled in the second quarter and couldn’t claw their way back in a 47-44 loss to Bigelow on Tuesday. Dover (4-2) held a 15-14 edge after the first quarter, but the Lady Panthers surged ahead in the second and took a 27-20 lead into halftime. The Lady Pirates closed it to 34-32 after three, but Bigelow was able to close it out in the fourth. Kaitlyn Meador led the way for Dover with 12 points.... Sports briefs (Nov. 
22, 2011)RJHS East JG 33, Harrison 22 The Russellville Junior High Eastside Lady Whirlwinds beat Harrison 33-22 in the ninth-grade game, and 21-19 in the eighth-grade game Monday at Whirlwind Gymnasium. In ninth-grade action, the East Lady Whirlwinds (4-2) jumped out to a 12-3 lead in the first quarter and extended that lead to 25-11 at halftime. The Junior Lady Golden Goblins narrowed the gap to 27-20 as the teams entered the fourth, but RJHS East hel... West Whirlwinds win Whirlwind InvitationalAll four Russellville Junior High basketball teams were in action Saturday in either championship games or third-place games at the Whirlwind Invitational in Whirlwind Gymnasium. The West Whirlwinds (3-0) were the lone team from the host school to win the tournament championship, beating Cabot South 48-35 in the final game of the day. Russellville led all the way, but the Junior Panthers made it competitive early. After jumping out to a 12-4 l... Dardanelle basketball sweeps HectorDARDANELLE — The Sand Lizards used a 23-11 run in the third quarter Friday to put away Hector 68-41 at Dardanelle Gym in nonconference play. “We are very happy to start the season with two wins,” Head Coach Russell Sturdivant said. “(Friday’s) win was a total team effort. We got big contributions from our seniors, led by Alec Pyburn, all the way to our sophomores. The sophomore group was led by Austin Steen and Montarious Grimes tonight on the... Local basketball roundup (Nov. 18, 2011)Clarksville JG 39, RJHS East 38 The Clarksville Junior Lady Panthers upset Russellville Junior High Eastside 39-38 Thursday in the semifinals of the Whirlwind Invitational at Whirlwind Gymnasium. “I was proud of them,” Eastside Head Coach Nina Chiolino said. “Aside from free throws, we played right to the game plan.” Russellville exploded out of the gates and built an 18-4 lead in the first quarter with a suffocating defense, but Clarksville e... 
RJHS West picks up win in Whirlwind InvitationalRussellville Junior High School’s Westside boys defeated Conway Blue 55-52 in the annual Whirlwind Invitational basketball tournament. Westside was led by Andy Campbell with 26 points, followed by Co-Chese Temple Laws with eight and Mark Moyer and Tony Jones with six points apiece. Westside will play Alma in the second round at 5:30 p.m. today at Russellville Jr. High. The Eastside Whirlwinds will play Cabot South at 8 p.m. today. Dardanelle 7... Dover Junior Pirates win Pope County TournamentBy Kevin Hill sports@couriernews.com POTTSVILLE — Playing on their opponents home court, the Dover Junior Pirates won the Pope County Tournament championship game 46-29 over the Junior Apaches at George Jones Gymnasium on Saturday. It was erroneously reported that Pottsville won the tournament championship over Atkins in Sunday’s edition of The Courier. That game was a semifinals matchup. The Junior Pirates (2-0) fell behind early, 11-8 after ... Pottsville, Atkins win Pope County tournamentPOTTSVILLE — It was all Pottsville and Atkins in the championship basketball game Saturday as the Junior Apaches and Junior Lady Red Devils netted wins in the finals of the Pope County Tournament at Pottsville’s George Jones Gymnasium. The Junior Apaches (2-0) beat Atkins 30-27, while the Atkins junior girls beat Pottsville 31-23. In the boys game, Jake White and Caleb Moore scored nine and eight points, respectively, as the Junior Apaches rac... Lady Whirlwinds drop Lady Sand LizardsThe Russellville Junior High Eastside Lady Whirlwinds opened the 2011-12 season Tuesday with a 36-17 win over Dardanelle at Whirlwind Gym, while the eighth-grade Lady Sand Lizards beat Russellville 28-16. In the ninth-grade game, Jocelyn Brown led the way for the Lady Whirlwinds (1-0) and led all scorers with 14 points. Russellville put a stranglehold on the Lady Sand Lizards (0-1) in the first half and took a 24-3 lead into the break. Dardane... 
Russellville Middle School record breakers are honored2011 was an exceptional track year at Russellville Middle School as four long-standing track records were broken and the East Boys completed a perfect season winning meets at Greenbrier, Booneville, Heber Springs, Vilonia, Conway and Russellville. “It is an exciting time at RMS,” Coach Joey Fisher said. “With the Cyclone boys winning the state championship last year, there is a renewed interest in track and field. And our middle school kids st... West beats East 41-16 on Colors DayThe Russellville Junior High Westside Whirlwinds were too much for RJHS Eastside and pulled away for a 41-16 win Tuesday on Colors Day at RHS Cyclone Stadium. Tony Jones carried the ball three times for 70 yards and a pair of touchdowns, all in the first half, and the West Whirlwinds built a 35-8 halftime lead. “It’s just an honor for me to watch them play football,” West Head Coach Josh Edgin said. “I told the boys that I’ve had the best seat... Bragging rights are on the lineThere may not be as much on the line as has been the case in previous years, but the annual Russellville Junior High East-West showdown is still one of the biggest games of the year for these kids. Though they play for different teams, the players attend the same school and many have common classes. The rivalry that develops can become intense, at least on game week. Thankfully, it’s a short week for both teams. “I think it can be tough,” West... RJHS Volleyball: Utter dominationIn any sport, 63-8 is an amazing record. The Russellville Junior High Eastside (35-1) and Westside (28-7) Lady Whirlwinds dominated the River Valley Junior High Conference teams so thoroughly, other conference teams allegedly said they don’t want RJHS in the conference anymore. The two teams teamed up to go 18-2 in conference play, with the two losses falling on West’s shoulders against the East. Cindy Williams and the Eastside Lady Whirlwinds...
<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Pagination Language Lines
    |--------------------------------------------------------------------------
    |
    | The following language lines are used by the paginator library to build
    | the simple pagination links. You are free to change them to anything
    | you want to customize your views to better match your application.
    |
    */

    'previous' => '« הקודם',
    'next' => 'הבא »',

];
1. Field of the Invention The present invention relates to a quadrature detecting apparatus for demodulating an angle-modulated signal such as a phase-modulated signal and a frequency-modulated signal, which is adapted to prevent the phase of a demodulated signal from changing even if the amplitude of a balanced angle-modulated input signal is varied. 2. Description of the Prior Art Conventionally, this type of quadrature detecting apparatus, as disclosed in U.S. Pat. No. 3,667,060 (hereinafter described in connection with FIG. 1), comprises a phase detecting means C (multiplying means), a phase shifting means 2, and first and second coupling means A and B for supplying the input terminals of the phase detecting means C and the phase shifting means 2 with angle-modulated signals, respectively, and is constructed such that when a frequency-modulated signal is demodulated, variations in the amplitude of the frequency-modulated signal will not badly affect the demodulated signal. Specifically, since the first coupling means A has the phase relationship between input and output signals thereof changing in dependence on the amplitude of an input signal, due to the circuit configuration, an angle-modulated signal, when passing through the first coupling means A, is subjected to phase modulation in accordance with fluctuations in the amplitude thereof, which results in generating components other than the modulated signal in the demodulated signal. However, such a change in phase produced in the first coupling means A is cancelled by providing the second coupling means B constructed identically to the first coupling means A prior to the phase shifting means 2, thereby preventing the influence of fluctuations in amplitude of the frequency-modulated signal on the demodulated output signal. FIG. 1 illustrates the configuration of a conventional balanced quadrature detecting apparatus. In FIG. 
1, reference numeral 1 designates a semi-conductor integrated circuit which includes active elements constituting the quadrature detecting apparatus. Reference letter A designates a first coupling means which is composed of a pair of transistors 107, 108 and a current source 30. Reference letter B designates a second coupling means which is composed of a pair of transistors 101, 102 and the common current source 30. A transistor 603 is a common-base type connection. Reference letter C designates a phase detecting means which is balanced by two pairs of transistors 109 and 110; 111 and 112. The commonly connected emitters of the respective pairs respectively serve as inputs to the phase detecting means C, while a base-to-base voltage of the respective pairs is used as a control input. Also, the sum of collector currents of the transistors 109, 111 and the sum of collector currents of the transistors 110, 112 are balanced phase detecting output currents. Reference numeral 2 designates a phase shifting means wherein a frequency-modulated signal is supplied from a phase shift input terminal 6, then a phase proportional to the frequency-modulated signal generated by a resistor 31, inductors 3, 4 and a capacitor 5 is added to the frequency-modulated signal, and the resultant signal is outputted to a phase shift output terminal 7. Reference numeral 8 designates an alternate current ground terminal. Reference numeral 25 designates a first voltage source, 26 a second voltage source which is set to generate a voltage value smaller than that of the first voltage source 25. Reference numerals 27 and 28 designate balanced (push-pull type) frequency modulated signal sources which are mixed with a bias voltage source 29. Transistors 19, 21 and 23 constitute emitter followers together with current sources 20, 22 and 24. Pairs of transistors 13 and 14, 15 and 16, 17 and 18 respectively constitute a current mirror. 
Reference numeral 10 designates a demodulated signal output terminal which is connected with a resistor 11, a voltage source 12 and a capacitor 9 as loads. The operation of the above-mentioned conventional apparatus will be next described. Referring to FIG. 1, the pairs of transistors 107 and 108; 101 and 102, respectively constituting the first and second coupling means A and B, have their respective emitters commonly connected to the current source 30. Between the bases of the respective pairs, the identical balanced frequency-modulated signal sources 27 and 28 and the bias voltage source 29 are connected so that the two pairs of transistors operate as two sets of amplitude limiting amplifiers. Respective collector currents of the transistors 107 and 108 forming a pair, which are balanced outputs of the amplitude limiting amplifier of the first coupling means A, respectively flow into the common emitters which serve as inputs to the pairs of transistors 109 and 110; 111 and 112 which constitute the phase detecting means C. One of the collector currents of the transistors 101, 102 forming a pair, which are balanced outputs of the amplitude limiting amplifier of the second coupling means B, is supplied to the phase shift input terminal 6 of the phase shifting means 2 through the transistor 603 which is a common-base type connection, while the other one flows into the second voltage source 26. A signal delivered from the phase shift output terminal 7 through the emitter follower formed by the transistor 19 is connected to one of the base-to-base connections of the pairs of transistors 109 and 110; 111 and 112, while a direct current voltage at the alternate current ground terminal 8 is connected through the emitter follower formed by the transistor 21 to the other one of the base-to-base connections. 
In the above-mentioned conventional example, when the amplitude of frequency modulation of the frequency-modulated signals generated from the signal sources 27, 28 are changed, two sets of balanced amplitude limited outputs generated from the two sets of amplitude limiting amplifiers respectively composed of the pairs of transistors 107 and 108; 101 and 102, which also constitute the first and second coupling means, also changes correspondingly. The respective balanced amplitude limited outputs change completely in the same manner. Then, the collector currents of the transistors 107 and 108 constituting the first coupling means A are inputted to the common emitters of the pairs of transistors 109 and 110; 111 and 112, respectively. Also, the collector current of the transistor 101, a component of the second coupling means B, is supplied to the phase shift input terminal 6 through the common-base type transistor 603, whereby an emitter voltage of the transistor 603 hardly changes since the transistor 603 is a common-base type connection. The phase shifting means 2 adds a phase proportional to the amplitude of frequency modulation of a signal passing therethrough to an output signal, so that the phase of a signal at the phase shift output terminal 7 changes in proportion to the amplitude of frequency modulation of a signal supplied to the phase shift input terminal 6. Therefore, a balanced amplitude-limited output in phase with the signal at the phase shift input terminal 6 is connected to the input, so that balanced modulated output currents proportional to the amplitudes of frequency modulation of the frequency-modulated signals generated from sources 27, 28 are derived at outputs of the phase detecting means C, the control input of which is supplied with a signal at the phase shift output terminal 7 through the emitter follower of the transistor 19. 
Then, one of the demodulated output currents balanced by the three pairs of transistors 17 and 18; 13 and 14; 15 and 16 has its direction changed, whereby the demodulated output currents are eventually converted to a balanced demodulated signal current proportional to the amplitudes of frequency modulation of the frequency-modulated signals generated from the sources 27, 28 at the demodulated signal output terminal 10. The balanced demodulated signal current can be taken out as a demodulated signal voltage by the resistor 11, the voltage source 12 and the capacitor 9. Even if the amplitudes of the frequency-modulated signals generated from the frequency-modulated signal sources 27, 28 change to cause fluctuations in the phase relationship between the input and output of the amplitude limiting amplifier composed of the pair of transistors 107 and 108 which also constitute the first coupling means A, the phase of the output from the amplitude limiting amplifier composed of the pair of transistors 101 and 102 also constituting the second coupling means B in a configuration substantially equal to the first coupling means A also fluctuates in a similar manner. The fluctuation in phase between the input and the control input of the phase detecting means C is thus cancelled, which results in eliminating the phase fluctuations therebetween and accordingly preventing the demodulated output from being influenced by such fluctuations. As described above, the conventional quadrature detecting apparatus can also prevent a demodulated output from being influenced by fluctuations in amplitude of frequency-modulated signals generated from signal sources. 
However in the above-mentioned conventional quadrature detecting apparatus, if the amplitude of a driving voltage fed to the second input of the phase detecting means C is increased to make larger the amplitude of the control input supplied to the phase detecting means C, for the purpose of enhancing the modulation sensitivity and reducing noise possibly produced during demodulation, fluctuations in the input and output of the phase detecting means C caused by fluctuations in the amplitudes of the frequency-modulated signals generated by the frequency-modulated signal sources 27, 28 connected to the amplitude limiting amplifiers constituting the first and second coupling means A, B do not become equal to each other, whereby such fluctuations appear in a demodulated output. More specifically, as long as the amplitudes of the base-to-base voltages in the phase detecting means C composed of the pairs of transistors 109 and 110; 111 and 112, respectively having their emitters commonly connected, are small, there are also small changes in the voltages at the respective common emitters due to changes in the amplitudes. However, if the amplitudes of the base-to-base voltages are made larger, changes in voltages at the common emitters also become larger, which results in largely changing the base-to-collector voltages of the pair of transistors 107 and 108 of the amplitude limiting amplifier also constituting the first coupling means A. On the other hand, the base-to-collector voltages of the pair of transistors 101 and 102 of the amplitude limiting amplifier also constituting the second coupling means B hardly change because of the circuit configuration. 
Therefore, fluctuations in the amplitudes of signals generated from the frequency-modulated signal sources 27, 28 cause fluctuations in the amplitude of the base-to-base voltage, whereby the relative phases of the two sets of balanced amplitude limited outputs from the amplitude limiting amplifiers constituting the first and second coupling means A and B which are respectively composed of transistors so as not to present the same base-to-collector voltages are subjected to fluctuations, which results in failing to achieve the original object.
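The quadrature detection principle that the background above assumes — multiply the amplitude-limited FM signal by a copy shifted 90° plus a phase proportional to the frequency offset, then low-pass the product — can be illustrated with an idealized numerical model. This is a sketch of the general technique, not of the patent's transistor circuit; the shift-network gain `k`, center frequency, and sample counts are arbitrary assumptions:

```python
import math

def quadrature_demod(f, fc=10.7e6, k=2.0e-6, n=20000, fs=200e6):
    """Ideal quadrature detector output for a carrier at frequency f.

    The phase shifter adds 90 degrees plus k*(f - fc) radians, so the
    time-averaged product is -0.5*sin(k*(f - fc)), i.e. approximately
    proportional to the frequency deviation f - fc for small deviations.
    """
    extra = k * (f - fc)  # frequency-dependent phase from the shift network
    acc = 0.0
    for i in range(n):
        t = i / fs
        s = math.cos(2 * math.pi * f * t)          # amplitude-limited carrier
        s_shift = math.cos(2 * math.pi * f * t + math.pi / 2 + extra)
        acc += s * s_shift                          # phase detector (multiplier)
    return acc / n                                  # low-pass: average the product

# Output is ~0 at the center frequency and antisymmetric in the deviation:
center = quadrature_demod(10.7e6)
above = quadrature_demod(10.7e6 + 75e3)
below = quadrature_demod(10.7e6 - 75e3)
print(center, above, below)
```

In this ideal model the demodulated output depends only on the relative phase of the two paths, which is why the patent is concerned with keeping amplitude-induced phase shifts identical in both coupling means: any differential phase error appears directly in the averaged product.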
Testing for bioequivalence of highly variable drugs from TR-RT crossover designs with heterogeneous residual variances. Traditional bioavailability studies assess average bioequivalence (ABE) between the test (T) and reference (R) products under the crossover design with TR and RT sequences. With highly variable (HV) drugs whose intrasubject coefficient of variation in pharmacokinetic measures is 30% or greater, assertion of ABE becomes difficult due to the large sample sizes needed to achieve adequate power. In 2011, the FDA adopted a more relaxed, yet complex, ABE criterion and supplied a procedure to assess this criterion exclusively under TRR-RTR-RRT and TRTR-RTRT designs. However, designs with more than 2 periods are not always feasible. This present work investigates how to evaluate HV drugs under TR-RT designs. A mixed model with heterogeneous residual variances is used to fit data from TR-RT designs. Under the assumption of zero subject-by-formulation interaction, this basic model is comparable to the FDA-recommended model for TRR-RTR-RRT and TRTR-RTRT designs, suggesting the conceptual plausibility of our approach. To overcome the distributional dependency among summary statistics of model parameters, we develop statistical tests via the generalized pivotal quantity (GPQ). A real-world data example is given to illustrate the utility of the resulting procedures. Our simulation study identifies a GPQ-based testing procedure that evaluates HV drugs under practical TR-RT designs with desirable type I error rate and reasonable power. In comparison to the FDA's approach, this GPQ-based procedure gives similar performance when the product's intersubject standard deviation is low (≤0.4) and is most useful when practical considerations restrict the crossover design to 2 periods.
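The relaxed FDA criterion for HV drugs effectively widens the conventional 0.80–1.25 limits on the geometric mean ratio in proportion to the reference product's within-subject variability. The widening can be sketched as follows. This is a simplified illustration of reference-scaling using the conventional regulatory constant ln(1.25)/0.25, not the paper's GPQ procedure; the actual FDA method also requires a 95% upper confidence bound and applies scaling only above a switching variability of about s_WR = 0.294, details glossed over here:

```python
import math

def abe_limits(s_wr, switching_sd=0.25, regulatory_point=1.25):
    """Reference-scaled ABE limits for the geometric mean ratio (GMR).

    At or below the switching variability the conventional 0.80-1.25
    limits apply; above it, the limits widen in proportion to s_WR.
    """
    theta = math.log(regulatory_point) / switching_sd  # regulatory constant
    if s_wr <= switching_sd:
        lo, hi = 1 / regulatory_point, regulatory_point  # unscaled ABE
    else:
        hi = math.exp(theta * s_wr)                      # widened upper limit
        lo = 1 / hi                                      # symmetric on log scale
    return lo, hi

# A highly variable reference (s_WR = 0.4, intrasubject CV ~ 42%)
# gets noticeably wider limits than the conventional (0.8, 1.25):
print(abe_limits(0.25))  # conventional limits
print(abe_limits(0.4))   # scaled limits
```

The practical effect motivating the abstract: for an HV drug, asserting equivalence against these wider limits needs far fewer subjects than against the fixed 0.80–1.25 window.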
Q: How to jump curbs? Putting aside the question should one ever ride on the sidewalk, I would like to jump curbs safely and seamlessly. Assume a hardtail mountain bike with high saddle (we are in the city), at a speed of 30 km/h. I have heard of several techniques: Move weight to the rear. Pull on the handlebars, the tire climbs the curb, without touching it. Move the weight to the front, in order to unload the rear tire. The rear tire impacts the curb, but is not carrying much weight. Stand up. Push on the handlebar. The fork compresses, then jumps over the curb. Let the rear roll over the curb, like in the previous case. Stand up. Push on the handlebars, crouch as much as possible. As the fork begins to decompress, jump up. Push with feet against pedals (pedals vertical) in order to pull the bike up. Pull up on the handlebar. After the front tire has passed the curb, push down on it, to gain height in the rear. Which one is best, or is there a better way? A: The last one. As already mentioned, you're describing a bunny hop. Allowing the rear to hit the curb - even if there is relatively little weight over it - will increase the risk of pinch punctures, potential rim damage, and it will slow you down considerably more than a clean bunny hop. Hops are weird. Once you can do them you will never understand why you couldn't. They're much easier if you use clipless pedals, but learning to do it on flat pedals will be hugely beneficial to your technique.
Q: IdentityServer4 IsValidReturnUrl returns false for valid url

I've a test project with this client configuration:

public class Clients : IClientStore
{
    public Task<Client> FindClientByIdAsync(string clientId)
    {
        return Task.FromResult(new Client
        {
            ClientId = "client.webforms",
            ClientName = "WebForms Client",
            AllowedGrantTypes = GrantTypes.Hybrid,
            AllowAccessTokensViaBrowser = false,
            ClientSecrets = { new Secret("1234".Sha256()) },
            RedirectUris = { "http://localhost:9869/signin-oidc" },
            PostLogoutRedirectUris = { "http://localhost:9869/" },
            AllowedScopes =
            {
                IdentityServerConstants.StandardScopes.OpenId,
                CstIdSrvScopeTypes.TestWebForms
            },
            AllowOfflineAccess = false,
            RequireConsent = false,
            AlwaysIncludeUserClaimsInIdToken = true
        });
    }
}

When I try to validate it in LoginController I'm getting a false result (this is from the Immediate Window):

returnUrl
"http://localhost:9869/signin-oidc"
this.identityServer.IsValidReturnUrl(returnUrl)
false

Also the this.identityServer.GetAuthorizationContextAsync(returnUrl) result is null. Am I doing something wrong?

A: Yes - you need to add a single RedirectUri when you configure your client that is one of the RedirectUris in the list you have above. Something like:

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    SignInAsAuthenticationType = Settings.AuthenticationType,
    Authority = config.Authority,
    RedirectUri = config.RedirectUri
});
{ "pile_set_name": "StackExchange" }
Maria Olga de Moraes Sarmento da Silveira Olga Moraes Sarmento da Silveira (née Maria Olga de Moraes Sarmento da Silveira; also known as Olga Morais Sarmento; 26 May 1881 - 17 October 1948) was a Portuguese writer and feminist. Early years Maria Olga de Moraes Sarmento da Silveira was born in Setúbal on 26 May 1881. She was a daughter and granddaughter of military men, spending part of her childhood in Elvas, where she became a friend of Virgínia Quaresma. At the age of 16 she married a Navy physician, who died shortly afterwards in combat in Cuamato, Angola. Career Moraes associated with a group of Portuguese intellectuals who, at the beginning of the 20th century, fought for civil rights, as well as women's legal and political rights. She succeeded Ana de Castro Osório as editor-in-chief of Sociedade Futura (founded 1902). She affiliated with the Liga Portuguesa da Paz (Portuguese League of Peace), cofounding the organization and serving as president of its Feminist Section in 1906. On May 18, 1906, Moraes delivered a lecture on "Problema Feminista" (Feminist Problem) at the Sociedad de Geografía de Lisboa. She also traveled as a lecturer to South America, visiting Brazil, Uruguay, and Argentina. In Brazil, she met and became friends with the writer Júlia Lopes de Almeida. Personal life Moraes lived in Paris during the First World War. For more than thirty years, she was a companion and partner of Baroness Hélène van Zuylen, of the Rothschild banking family of France, whom she saved from the Holocaust by taking her to Lisbon and then to New York City. She also devoted herself to writing the Baroness' memoirs. Moraes was closely linked to her city of birth, Setúbal, leaving all her assets to the Municipal Chamber, including her personal library and a vast collection of autographs of personalities from art, music and literature in postcards, letters, and books. This legacy is part of the collection of the Museo de Setúbal/Convento de Jesús. 
She died in Lisbon, 17 October 1948. Selected works Problema Feminista (1906) Mulheres illustres: A Marqueza de Alorna (sua influencia na sociedade portuguesa, 1750-1839) (1907) Arte, Literatura & Viagens (1909) A Infanta Dona Maria e a Corte Portuguesa (1909) La Patrie Brésilienne (1912) Sa Majesté la Reine Amélie de Portugal, Princesse de France (1924) Teófilo Braga: Notas e Comentários (1925) As Minhas Memórias: Tempo Passado, Tempo Ausente (1948) Honors Legión de Honor Orden de Cristo Orden de Santiago de la Espada References Attribution Bibliography Category:1881 births Category:1948 deaths Category:Chevaliers of the Légion d'honneur Category:Lesbian writers Category:Portuguese women writers Category:Portuguese-language writers Category:Portuguese feminists Category:20th-century Portuguese poets Category:Portuguese republicans Category:Portuguese suffragists Category:Portuguese women poets Category:20th-century women writers
{ "pile_set_name": "Wikipedia (en)" }
Two subdomains of negative symptoms in psychotic disorders: established and confirmed in two large cohorts. Negative symptoms of schizophrenia are normally grouped into a single category. However, the diversity of such symptoms suggests that they are actually made up of more than one dimension. The DSM-V proposes two negative symptom domains, namely expressive deficits and avolition/asociality. We investigated whether the negative symptoms do indeed have two dimensions. An exploratory factor analysis was carried out based on interviews with the PANSS (664 patients). We restricted our analysis to items that had been described as negative symptoms in previous factor analyses. The symptom structure was then tested for stability by performing a confirmatory factor analysis on PANSS interviews from a separate cohort (2172 patients). Exploratory factor analysis yielded a two-factor structure of negative symptoms. The first factor consisted of PANSS items Flat affect, Poor rapport, Lack of spontaneity, Mannerisms and posturing, Motor retardation, and Avolition. The second factor consisted of Emotional withdrawal, Passive/apathetic social withdrawal, and Active social avoidance. The first factor could be related to expressive deficits, reflecting a loss of initiative, and the second factor to social amotivation, related to community interaction. This factor structure supports the DSM-V classification and may be relevant for pathophysiology and treatment of schizophrenia and other psychotic disorders.
{ "pile_set_name": "PubMed Abstracts" }
This invention relates generally to heating and/or air conditioning systems, and is specifically directed to a heating and air conditioning system in which heat exchange is accomplished through the use of geothermal ground coils which are vertically inserted into the ground. In air conditioning systems commonly in use, and in heat pump systems in particular, heat exchange between a refrigerant contained within the system and the environment is required. Most commonly, this heat exchange has been accomplished by means of ambient air, wherein the refrigerant is directed to an outdoor coil and heat exchange takes place between the refrigerant contained within the coil and the outside air. The problem associated with heat exchange with outside air is the inconsistency of the temperature of the outside air. Particularly with heat pumps, since heat for the heating cycle is obtained from the outside air, the system loses its efficacy and efficiency as the outside temperature drops, since there is less heat in the air which can be extracted for the purpose of indoor heating. This problem is compounded by the fact that as the temperature drops, additional heat is needed to heat the building. To overcome the problems associated with heat exchange with the outside air, water and geothermal means have been employed for heat exchange. In the water system, heat exchange with the refrigerant contained within the system is accomplished by exposing the refrigerant contained within the coil to quantities of water, which is generally passed in a dynamic fashion across the coils. This system requires large quantities of water, and ground water is usually employed. Limitations of this system include the availability of ground water which can be efficiently and cost-effectively obtained in sufficient quantities to achieve the desired and required heat exchange. 
It has previously been recognized that geothermal heat exchange is potentially an efficient and effective way of achieving heat exchange in heating and air conditioning systems, and especially heat pump type systems. Since the ground temperature is relatively constant at about 68 degrees F. at a depth below the frost line, the available heat is constant. However, a problem which has been associated with such systems is the means and manner in which the heat exchange coils, or outdoor coils, are placed into the ground to achieve geothermal heat exchange. It is preferred to place the geothermal outdoor coils into the ground in a vertical fashion. Installation may be easily accomplished by drilling or boring holes into the ground, into which the vertical geothermal outdoor coils may be placed. The coils may quickly and easily be placed into the ground to a depth which is sufficient to overcome ground freezing problems associated with colder climates. Heretofore, the reason that placing coils into the ground in a vertical fashion has not been workable is that when sufficient refrigerant is placed into the system to achieve maximum efficiency on both the heating and cooling cycles, the refrigerant, as it condenses in the ground coils, causes a liquid refrigerant build-up. The compressor is unable to properly move the refrigerant through the system when the liquid refrigerant settles within the ground coils, making the system unworkable. Damage to the compressor can occur when liquid refrigerant is drawn into the intake of the compressor, since compressors for such systems are designed for receiving and compressing gases. In the prior art, to overcome the problem associated with vertical outdoor geothermal coils, the coils have been placed into the ground in a horizontal fashion. 
Placing the coils into the ground in a horizontal fashion alleviates the problem of liquid refrigerant build-up, since there is no low point toward which the refrigerant settles, but it requires a vast amount of available ground to achieve the proper heat exchange, and requires the excavation of sufficient land to place enough ground coils to achieve sufficient heat exchange. In colder climates, this excavation must also be to a sufficient depth to place the coils for proper heat exchange. In short, placing the geothermal coils in a horizontal fashion is more difficult and expensive, and requires much more available ground, than placing the coils into vertical holes.
{ "pile_set_name": "USPTO Backgrounds" }
Q: Requesting values of key from Postman form-data I'm trying to write if-statement validation code for my Flask API and I want to check whether the values of a key are int or not, but it seems like values stored in Postman form-data are of string type. Is there any way that values can be stored in their original type? Here is my code: if type(request.form['user_id']) is not int and (request.form['passport_url']) is not str: abort(make_response(jsonify(Error="UserId or PassportUrl type is not valid", Code="500"), 500)) A: Form data is always transmitted as strings, so every value in request.form is a str. You can use the str.isnumeric method to check if all characters in user_id are numeric characters. if not request.form['user_id'].isnumeric(): abort(make_response(jsonify(Error="UserId or PassportUrl type is not valid", Code="500"), 500)) Then you can parse the user_id to int using the int function. user_id = int(request.form['user_id']) # use user_id as int Note that the check (request.form['passport_url']) is not str in your code compares a value against the type object str, which is always true for a string value, so it can be dropped.
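As a hedged illustration of the point above - every form value arrives as a string, so validation has to happen on the string before conversion - here is a minimal, self-contained Python sketch. The field names (user_id, passport_url) follow the question; the validate_user_fields helper and its URL check are illustrative assumptions, not part of the original API.

```python
def validate_user_fields(form):
    """Validate raw form fields, which always arrive as strings.

    Returns the parsed user_id (int) on success, or None on failure.
    """
    user_id = form.get('user_id', '')
    passport_url = form.get('passport_url', '')
    # str.isnumeric() is False for '', '-1' and '1.5', so this
    # accepts only non-negative integer strings.
    if not user_id.isnumeric():
        return None
    # Hypothetical sanity check on the URL field; adjust as needed.
    if not passport_url.startswith(('http://', 'https://')):
        return None
    return int(user_id)
```

In the Flask view you would call validate_user_fields(request.form) and issue the abort(...) from the question whenever it returns None.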
{ "pile_set_name": "StackExchange" }
Dan Kihlström Dan Kihlström (born 1957) is a Swedish Christian Democratic politician, member of the Riksdag since 1998. References Category:Christian Democrats (Sweden) politicians Category:Living people Category:1957 births Category:Members of the Riksdag 2002–2006
{ "pile_set_name": "Wikipedia (en)" }
Q: CreateProcessAsUser from c++ service creates process but no console I am developing a C++ service which uses the CreateProcessAsUser function. I have tested this on Windows 7 and it was working excellently. But I am now testing my code on Windows 10 and it doesn't work: the process is created and visible in Task Manager, but no window/console is created. I am trying for now a code snippet with cmd only, without any parameters. I will really appreciate any kind of help. I can see this with Task Manager, so my process is created. task manager PROCESS_INFORMATION pi; STARTUPINFO si; BOOL bResult = FALSE; DWORD dwSessionId; HANDLE hUserToken; // Log the client on to the local computer. dwSessionId = WTSGetActiveConsoleSessionId(); WTSQueryUserToken(dwSessionId,&hUserToken); ZeroMemory(&si, sizeof(STARTUPINFO)); si.cb= sizeof(STARTUPINFO); si.lpDesktop = L"winsta0\\default"; ZeroMemory(&pi, sizeof(pi)); LPVOID pEnv =NULL; if(CreateEnvironmentBlock(&pEnv,hUserToken,TRUE)){ } else pEnv=NULL; bResult = CreateProcessAsUser( hUserToken, // client's access token L"C:\\Windows\\System32\\cmd.exe", // file to execute L"", // command line NULL, // pointer to process SECURITY_ATTRIBUTES NULL, // pointer to thread SECURITY_ATTRIBUTES FALSE, // handles are not inheritable CREATE_UNICODE_ENVIRONMENT|HIGH_PRIORITY_CLASS, // creation flags pEnv, // pointer to new environment block NULL, // name of current directory &si, // pointer to STARTUPINFO structure &pi // receives information about new process ); //Perform All the Close Handles tasks DestroyEnvironmentBlock(pEnv); CloseHandle(pi.hThread); CloseHandle(pi.hProcess); CloseHandle(hUserToken); A: #define LAA(se) {{se},SE_PRIVILEGE_ENABLED|SE_PRIVILEGE_ENABLED_BY_DEFAULT} #define BEGIN_PRIVILEGES(tp, n) static const struct {ULONG PrivilegeCount;LUID_AND_ATTRIBUTES Privileges[n];} tp = {n,{ #define END_PRIVILEGES }}; ULONG adjustPrivileges() { HANDLE hToken; ULONG err; if (OpenProcessToken(NtCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, 
&hToken)) { BEGIN_PRIVILEGES(tp, 2) LAA(SE_ASSIGNPRIMARYTOKEN_PRIVILEGE), LAA(SE_INCREASE_QUOTA_PRIVILEGE), END_PRIVILEGES AdjustTokenPrivileges(hToken, FALSE, (::PTOKEN_PRIVILEGES)&tp, 0, 0, 0); err = GetLastError(); CloseHandle(hToken); } else { err = GetLastError(); } return err; } ULONG cup() { HANDLE hUserToken; DWORD dwSessionId = WTSGetActiveConsoleSessionId(); if (dwSessionId == MAXDWORD) { return ERROR_GEN_FAILURE; } ULONG err = adjustPrivileges(); if (err) { return err; } if (WTSQueryUserToken(dwSessionId,&hUserToken)) { PVOID pEnv; if (CreateEnvironmentBlock(&pEnv,hUserToken,TRUE)) { PROCESS_INFORMATION pi; STARTUPINFO si = { sizeof(STARTUPINFO) }; si.lpDesktop = L"winsta0\\default"; if (CreateProcessAsUser( hUserToken, // client's access token L"C:\\Windows\\System32\\cmd.exe", // file to execute NULL, // command line NULL, // pointer to process SECURITY_ATTRIBUTES NULL, // pointer to thread SECURITY_ATTRIBUTES FALSE, // handles are not inheritable CREATE_UNICODE_ENVIRONMENT|HIGH_PRIORITY_CLASS, // creation flags pEnv, // pointer to new environment block NULL, // name of current directory &si, // pointer to STARTUPINFO structure &pi // receives information about new process )) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); } else { err = GetLastError(); } DestroyEnvironmentBlock(pEnv); } else { err = GetLastError(); } CloseHandle(hUserToken); } else { err = GetLastError(); } return err; }
{ "pile_set_name": "StackExchange" }
#### Via The Synchronous API To get the device's current IP: _With Kotlin_ ```kotlin val ip = wisefy.getIP() ``` _With Java_ ```java String ip = wisefy.getIP(); ``` #### Via The Asynchronous API To get the device's current IP: _With Kotlin_ ```kotlin wisefy.getIP(object: GetIPCallbacks { override fun retrievedIP(ip: String) { } override fun failureRetrievingIP() { } override fun wisefyFailure(wisefyFailureCode: Int) { } }) ``` _With Java_ ```java wisefy.getIP(new GetIPCallbacks() { @Override public void retrievedIP(String ip) { } @Override public void failureRetrievingIP() { } @Override public void wisefyFailure(int wisefyFailureCode) { } }); ```
{ "pile_set_name": "Github" }
Q: The global object is an object of which class? I want to know what the global object in JavaScript is and to which class this object belongs to. And how are Infinity, NaN and undefined part of the global object? A: Variable scope is defined in JavaScript by a function, and functions can be nested inside other functions. function foo() { // new variable scope in here var a = "a"; function bar() { // another nested variable scope var b = "b"; } bar(); } foo(); EXCEPT there is a default "global" variable scope that is defined when your program runs. It is the base variable scope in which all function created scopes are nested. So what? Well, every variable scope has a variable object (or more accurately, a "binding" object). It's an internal object to which all the local variables you create are bound. This variable object is not directly accessible. You can only add properties to it by declaring a local variable (or function parameter, or function declaration). And you can only access properties via the variable names. Again, so what? Well the "global" variable scope is unique. It exposes this internal variable object by automatically defining a property on the object that refers back to the object itself. In a browser, the property is named window. Because a property is placed on the object that refers back to the object, and because properties on the object become variables, we now have a direct access to the global variable object. You can test this by observing that the window.window property is an equal reference to the window variable. alert(window.window === window); // true As a result, we can add a property to the object window.foo = "bar";, and it show up as a global variable alert(foo); // "bar". Note that the only variable scope that exposes this internal object is the global scope. None of the function scopes expose it. Also note that the ECMAScript specification does not require that the global variable object be exposed. 
It is up to the implementation to decide.
{ "pile_set_name": "StackExchange" }
A series of startling events in November revealed the abysmal state of affairs in the Arab world. The Lebanese prime minister announced his resignation abroad, but reversed the statement later. A missile was launched from Yemen toward Saudi Arabia’s capital, Riyadh. Saudi Arabia’s leadership carried out a major anti-corruption campaign that affected dozens of high-profile personalities. Egypt, meanwhile, experienced its worst terrorist attack in living memory, with more than 300 civilians killed and injured. Video footage of alleged slave auctions in Libya underscored the continuing chaos there amid the complete breakdown of the Libyan state. Military victories against the Islamic State and a rapprochement between Palestinian factions in Gaza and the West Bank have done little to ease a collective sense of anxiety in the region. Nor have these positive developments inspired much confidence that the Arab world will somehow pull itself back from the edge of the abyss. Foreign interference has become routine in Syria, Lebanon, Iraq, and Yemen. And ongoing debates over identity politics and borders in the Levant are a prelude to the grave, fundamental challenges ahead. In fact, the situation in the Middle East is not surprising, given that in recent years no Arab country has led attempts to resolve the ongoing conflicts in Libya, Syria, and Yemen, let alone address the Palestine-Israel issue. In many of these conflicts, foreigners have had far more influence than Arabs. Historically, the Middle East has been the target of numerous foreign invasions, from the Crusades to European colonialism. Its natural resources have been greedily usurped, and it was a theater for proxy wars during the Cold War. Even today, Arab territories remain under occupation. But while there are many reasons to blame foreign powers for the region’s parlous state of affairs, blaming others – or even one another – will solve nothing. 
After all, the Arab world has many homegrown problems, too, including inefficient and ineffective governance, unholy alliances, and undeveloped national capacities. Disaster awaits any region that is helpless to shape its own future, in which a majority of citizens feel disenfranchised. Though the Arab world is traditionally conservative, almost 70 percent of its citizens are below the age of 35, and young people suffer from the highest rates of unemployment in most countries. This constitutes not only a tremendous waste of resources, but also a serious long-term sociopolitical problem. And yet it is just one of the many domestic challenges facing the region. Arabs must take charge of their own agenda, and become the primary force defining their future and that of their countries. They should, of course, continue to engage with the outside world and strengthen their strategic relationships and alliances. But they also must become less dependent on others. For starters, the region’s governments need to develop their own national-security capacities, to defend against non-existential threats and hegemonic expansionism. This, in turn, will enhance their political influence, and give them more diplomatic tools for addressing regional problems and preventing military conflicts. Moreover, Arabs must defend their national identities. The Middle East’s nation-state system is not perfect, but it is far better than religious or ethnic sectarianism, which threaten to destabilize the region even further. To avoid that outcome, the region’s existing nation-states will need strong institutions to provide for efficient governance and social inclusion. Unfortunately, most Arab countries’ institutions are nowhere near being able to meet this imperative. Looking ahead, Arabs should recognize that domestic reform is the best way to prevent foreign interference and defend national interests. 
The Arab awakenings over the last few years revealed a centrist middle-class yearning for change. Opportunistic parties tried to benefit from the turbulent environment created by sudden change. But this does not negate the fact that these movements were a response to perpetually bad governance and a failure on the part of Arab leaders to pursue gradual reforms. Arabs also need to give themselves a larger variety of economic, political, and security options, so that they can adapt to changing circumstances. The world is no longer bipolar or Eurocentric. In fact, it is the Westphalian state system itself, not just the postwar geopolitical paradigm, which is being tested by rapid technological, economic, and social changes. Lastly, the Arab world needs to confront regional hegemonic attitudes and the illegitimate occupation of Arab lands. Solutions to current problems must respect people’s aspirations for statehood and sovereignty, while going beyond tactical or transactional approaches that provide only short-term relief. Ultimately, any policy that fails to protect basic rights will not succeed. Arab countries, individually and collectively, will need a fully formed strategy to confront existential foreign and domestic threats to their sovereignty and security in the coming years. It is high time for Arab leaders to outline a vision for the future of inter-Arab relations, and a plan for engaging with their non-Arab neighbors on regional opportunities and challenges. Last but not least, Arab leaders must also explain how they will provide better domestic governance for their people. If the Arab world wants to have a say in shaping its own future, it cannot remain complacent in the present. Its leaders and people must start planning now. Nabil Fahmy, a former foreign minister of Egypt, is the dean of the School of Global Affairs and Public Policy at the American University in Cairo. 
He served as Egypt’s ambassador to the United States from 1999–2008, and as envoy to Japan between 1997 and 1999. On Twitter: @DeanNabilFahmy.
{ "pile_set_name": "Pile-CC" }
Effects of argan oil on the mitochondrial function, antioxidant system and the activity of NADPH-generating enzymes in acrylamide-treated rat brain. Argan oil (AO) is rich in minor compounds such as polyphenols and tocopherols, which are powerful antioxidants. Acrylamide (ACR) has been classified as a neurotoxic agent in animals and humans. Mitochondrial oxidative stress and dysfunction is one of the most probable molecular mechanisms of neurodegenerative diseases. Female Sprague Dawley rats were exposed to ACR (50 mg/kg, i.p., three times a week), AO (6 ml/kg, p.o., per day), or both together for 30 days. The activities of cytosolic enzymes such as xanthine oxidase (XO), glucose 6-phosphate dehydrogenase (G6PDH) and glutathione-S-transferase (GST), mitochondrial oxidative stress, oxidative phosphorylation (OXPHOS) and tricarboxylic acid cycle (TCA) enzymes, mitochondrial metabolic function, adenosine triphosphate (ATP) level and acetylcholinesterase (AChE) activity were assessed in rat brain. Cytosolic and mitochondrial antioxidant enzymes were significantly diminished in the brains of rats treated with ACR compared to those in controls. In addition, ACR treatment resulted in a significant reduction in brain ATP level, mitochondrial metabolic function, and OXPHOS and TCA enzymes. Administration of AO restored both the cytosolic and mitochondrial oxidative stress by normalizing the nicotinamide adenine dinucleotide phosphate (NADPH)-generating enzymes. AO also improved mitochondrial function, primarily by enhancing the activities of nicotinamide adenine dinucleotide (NADH)-generating enzymes and the ATP level in the mitochondria. The apparent beneficial effects of AO in this study may be due to synergistic effects of its different bioactive compounds, which are especially effective on mitochondria. Modulation of the brain mitochondrial functions and antioxidant systems by AO may lead to the development of new mitochondria-targeted antioxidants in the future.
{ "pile_set_name": "PubMed Abstracts" }
Q: How to 'web enable' a legacy C++ application I am working on a system that splits users by organization. Each user belongs to an organization. Each organization stores its data in its own database which resides on a database server machine. A db server may manage databases for 1 or more organizations. The existing (legacy) system assumes there is only one organization, however I want to 'scale' the application by running an 'instance' of it (tied to one organization), and run several instances on the server machine (i.e. run multiple instances of the 'single organization' application - one instance for each organization). I will provide a RESTful API for each instance that is running on the server, so that a thin client can be used to access the services provided by the instance running on the server machine. Here is a simple schematic that demonstrates the relationships: Server 1 -> N database (each organization has one database) organization 1 -> N users My question relates to how to 'direct' RESTful requests from a client, to the appropriate instance that is handling requests from users for that organization. More specifically, when I receive a RESTful request, it will be from a user (who belongs to an organization), how (or indeed, what is the best way) to 'route' the request to the appropriate application instance running on the server? A: From what I can gather, this is essentially a sharding problem. Regardless of how you split the instances at a hardware level (using VMs, multiple servers, all on one powerful server, etc), you need a central registry and brokering layer in your overall architecture that maps given users to the correct destination instance per request. There are many ways to implement this of course, so just choose one that you know and is fast, and will scale, as all requests will come through it. 
I would suggest a lightweight stateless web application backed by a simple read only database that does the appropriate client identifier -> instance mapping, which you would load into memory/cache. To add flexibility on hardware and instance location, use (assuming Java) JNDI to store the hardware/port/etc information for each instance, and in your identifier mapping map the client identifier to the appropriate JNDI lookup key.
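The brokering layer described above can be sketched in a few lines. This is a hedged illustration in Python rather than the Java/JNDI stack the answer mentions; the Instance record and the register/route method names are assumptions made for the example, not an established API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Instance:
    """Location of one single-organization application instance."""
    host: str
    port: int

class InstanceRegistry:
    """Central client-identifier -> instance mapping: the read-only
    registry the routing layer would load into memory or a cache."""

    def __init__(self):
        self._routes = {}

    def register(self, org_id, host, port):
        self._routes[org_id] = Instance(host, port)

    def route(self, org_id):
        # Every incoming request is brokered through this lookup;
        # unknown organizations are rejected rather than guessed.
        if org_id not in self._routes:
            raise KeyError(f"no instance registered for {org_id!r}")
        return self._routes[org_id]
```

A reverse proxy or the RESTful front end would call route(org_id) once per request and forward the request to the returned host and port.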
{ "pile_set_name": "StackExchange" }
Identification of the NH2-terminal blocking group of NADH-cytochrome b5 reductase as myristic acid and the complete amino acid sequence of the membrane-binding domain. The NH2-terminal blocking group of the membrane-binding domain of NADH-cytochrome b5 reductase has been deduced as myristic (n-tetradecanoyl) acid. This fatty acid was identified by gas chromatography of the digest of the NH2-terminal tetrapeptide of cytochrome b5 reductase. Fast atom bombardment and direct chemical ionization mass spectroscopy of the underivatized NH2-terminal tetrapeptide confirmed the presence of myristic acid, identified its linkage to the NH2 terminus and established CH3(CH2)12-CO-Gly-Ala-Gln-Leu as the NH2-terminal sequence. In addition, the complete amino acid sequence of the membrane-binding domain of cytochrome b5 reductase is also reported. The finding of a myristic acyl chain on the NH2-terminal segment, comprised of hydrophobic amino acid residues, implies that the function of the myristate group may be other than simply to anchor the reductase to the microsomal membrane. This post-translational modification, presumably in the endoplasmic reticulum, may selectively stabilize a particular membrane structure and orientation that optimally facilitates electron transport on the cytosolic surface of this membrane organelle.
{ "pile_set_name": "PubMed Abstracts" }
Accountants and the gig economy It’s becoming more common for workers to take a freelance “gig” rather than a permanent job. What does this mean for accounting? The nature of work is changing and the accounting profession is going to have to change with it, according to experts in workplace evolution. Australian Bureau of Statistics data show that 9 per cent of people work as freelancers, and 30 per cent of all workers engage in some form of freelance work, or take on “gigs”. “Freelancers have always been a part of the workforce but the strong growth we have seen in the past decade is underpinned by advances in technology that make it not only possible but inevitable,” says Steve Sammartino, an author and futurist who will speak on the subject at CPA Congress in October. “For employers, it can be much more cost-effective to hire skills on an as-needed basis rather than recruit full-time employees, with all the on-costs that that entails,” he adds. The gig economy means rethinking work The shift to the so-called “gig economy” has taken on a new momentum with the rise of intermediaries such as Airtasker, Upwork and Freelancer.com. These link freelancers seeking work with employers, even if the employers are small companies or individuals who need a job done. High-profile organisations such as ride-service Uber have given another dimension to the freelancing idea. Airtasker advertises that it can provide freelancers for everything from installing software to doing housework. Accountants and Airtasker Will this model work for the high-value advisory or business planning work that accountants do? Airtasker includes accountants in its list of available people, although most offer services in areas such as bookkeeping, tax preparation and routine compliance. This does not surprise Sammartino. “Generally, what we have seen is that it starts with fairly straightforward activities,” he says. “But it doesn’t stay there for long. 
You soon see more sophisticated services being offered, and freelancers marketing themselves on their reputation. In fact, reputation is an important part of this.” A freelancer can easily provide recommendations and testimonials from satisfied clients, and they can link to their website to show their credentials and expertise. Will gig workers replace permanent staff? Sammartino says that employers can pay freelancers more on a per-task basis because they offer a lower cost structure than permanent employees. “It can even work out better for them to hire an accountant to perform tasks that come up only occasionally than buy software and train their own staff on how to use it.” Other commentators are more wary of the risks that freelance gigs can involve. Associate professor Sarah Kaine of the University of Technology Sydney points to her research showing that most freelancers struggle to make an income equivalent to a full-time wage. “There is a much higher degree of negotiation involved here,” she says. That can work for a professional offering high-value services, but someone lower down the scale, in a very competitive market, can find themselves in a race to the bottom. “Freelancers in this position will have to find ways to leverage their assets,” says Kaine. Some clients, for example, might want to work with someone who is local and understands the issues of the community. “That can be a valuable selling point,” she says. Freelancing for retired accountants Kaine notes that freelancing often means insecurity of income, but it can also provide flexibility. This can be important to someone who has family commitments, and gig workers often arbitrate between the costs and benefits of the work. Gigs can also be a stopgap measure until a full-time permanent job comes up. 
“People who have retired – and this is especially relevant to accountants – but want to continue to utilise their skills for some extra income, might find that occasional freelance work suits them,” says Kaine. Can gig workers be exploited? There are also dangers. The nature of freelance work means that legal recourse is difficult if payment for a task performed is not made as agreed. Another risk may be that the freelancer is actually an employee, says Mark Scully, Deputy Fair Work Ombudsman. “While independent contracting may be an entirely legitimate way to get work done, businesses need to think about whether the true nature of the relationship is one of employment,” he says. When employees are incorrectly classified as independent contractors, they miss out on a number of legislated protections and benefits, such as workers’ compensation and annual and personal leave. “If a business is deliberately disguising what is an employment relationship as a freelancer or an independent contractor to save running costs, they run a risk of being pursued through the courts for sham contracting, which carries significant civil remedy penalties,” warns Scully. In many cases, a freelancer might never meet the client in person. “The legal system hasn’t really caught up with these changes yet,” says Kaine. “So there is the possibility for exploitation.” There are also questions of liability. If a provider like Airtasker organises a freelancer and the job is done badly, or even creates a risk, where does responsibility lie? These issues are not clear. The gig economy will continue to grow Nevertheless, Kaine believes that the gig economy will continue to grow. “We will probably see hybrid forms developing,” she says. Employers will increasingly use a mix of permanent employees and freelance workers. Freelancers, especially professionals, will mix independent work with full-time employment over the course of their careers. It’s not new, but it will become common, says Kaine. 
Steve Sammartino agrees, and adds that professional organisations might have to consider a new form of designation for gig workers to recognise the changing reality. Accounting and legal bodies, for their part, may need to review the requirements of their members to hold practice certificates. “Reputation, certification and qualifications are going to be increasingly important for freelancers, as well as for the people who engage them,” says Sammartino. “Whether the existing system will continue to be appropriate is something to think about.”
Q: iOS Safari Vertical Scrolling Feels Sticky (With No Momentum) I've got a problem with vertical scrolling in iOS Safari on a web page: while being scrolled, the page moves very slowly, with high resistance (such behavior is not usual for iOS browsers). My attempt to locate the problem:

<!-- piece of HTML listing -->
<body>
  <div id="wrapper">
    (here goes some content)
  </div>
</body>

I traced the problem to the overflow-x:hidden; rule for div#wrapper: the stickiness disappears when I change it to 'overflow:hidden;' or remove it dynamically in the web debugging panel. Is there any chance to fix it without changing the page layout? Reproduces on Safari / iOS 6.1.4 and 7 (both iPad and iPhone), and also in the iOS Simulator on OS X.

A: You can try adding this WebKit-specific CSS rule to your div:

#wrapper {
    -webkit-overflow-scrolling: touch;
}

Read more about momentum and iOS scrolling here: http://css-tricks.com/snippets/css/momentum-scrolling-on-ios-overflow-elements/
Mesenchymal stem cell implantation in osteoarthritic knees: is fibrin glue effective as a scaffold? The cell-based tissue engineering approach that uses mesenchymal stem cells (MSCs) has addressed the issue of articular cartilage repair in osteoarthritic (OA) knees. However, to improve outcomes, an advanced surgical procedure with tissue-engineered scaffolds may be needed to treat patients with large cartilage lesions. To investigate the clinical and second-look arthroscopic outcomes of the implantation of MSCs loaded in fibrin glue as a scaffold in patients with OA knees and to compare these outcomes with those of MSC implantation without a scaffold. Cohort study; Level of evidence, 3. This study retrospectively evaluated 54 patients (56 knees) who were examined with second-look arthroscopy after MSC implantation for cartilage lesions in their OA knees. Patients were divided into 2 groups: 37 patients (39 knees) were treated with MSC implantation without a scaffold (group 1), and 17 patients (17 knees) underwent implantation of MSCs loaded in fibrin glue as a scaffold (group 2). Clinical outcomes were evaluated according to the International Knee Documentation Committee (IKDC) score and the Tegner activity scale, and cartilage repair was assessed with the International Cartilage Repair Society (ICRS) grade. Statistical analyses were performed to identify various prognostic factors associated with the clinical and second-look arthroscopic outcomes. At final follow-up (mean, 28.6 months; range, 24-34 months), the mean IKDC score and Tegner activity scale in each group significantly improved: group 1, from 38.1±7.7 to 62.0±11.7 (IKDC) and from 2.5±0.9 to 3.5±0.8 (Tegner); group 2, from 36.1±6.2 to 64.4±11.5 (IKDC) and from 2.2±0.8 to 3.8±0.8 (Tegner) (P<.001 for all). According to the overall ICRS cartilage repair grades, 9 of the 39 lesions (23%) in group 1 and 12 of the 17 lesions (58%) in group 2 achieved a grade of I or II. 
There was a significant difference in ICRS grades between the groups (P=.028). Overweight (body mass index≥27.5 kg/m2) and large lesion size (≥5.7 cm2) were significant predictors of poor clinical and arthroscopic outcomes in group 1 (P<.05 for both). There was a similar trend in group 2, but the differences were not significant, possibly owing to the smaller sample size. Clinical and arthroscopic outcomes of MSC implantation were encouraging for OA knees in both groups, although there were no significant differences in outcome scores between groups. However, at second-look arthroscopy, there were better ICRS grades in group 2.
Description: Native to moist places in partial sun to shade, horsetails make attractive accents in a container or around a pond or stream. An aggressive spreader, it is best planted in a pot without drainage holes unless there is a large area for it to cover. Stalks may rise to 6 feet, and are a dramatic addition to floral arrangements. Deer resistant. This plant IS LISTED on our most current availability list. Call (650) 851-1668 to inquire about stock quantities.
Q: right align text to edge of embedded media I need some text to appear below embedded media (in this case a video), and I would like the text to right-align to the video. I'm not sure how to do this, since the layout needs to be fluid. Some videos will be wider than others. Currently the text is right-aligning to the wrapping div. Here's a fiddle with what I have so far http://jsfiddle.net/thwackukulele/2N6a9/ I would like the text "Watch more videos on our YouTube Channel" to align to the video's right edge. Thanks for any help! A: Videos can be of varying width, but they are never wider than your 960px set on the content-unit? If content-unit is centering everything, set framewrap to inline-block. This will make it shrink-wrap to its contents (here, the video), and if content-unit has text-align: center set on it, then the inline-block framewrap will be centered (at whatever width it turns out to be). Now, because its width is constrained to the widest content (the iframe here, an inline element), the h4 can be set to text-align: right and the text should be restricted to the right side of framewrap, since the block h4 will still expand to 100% of whatever framewrap is. Ah, you have a lot of positioning and margins in there that I can't see what they are doing, so I don't expect this code will really fix your problem but... looking at JSFiddle is like viewing through a screen magnifier. Fun stuff http://jsfiddle.net/2N6a9/2/ I made content-unit blue and framewrap red so you could see how the caption gets wrapped. You could add a bit of right padding on the h4 if you want to nudge the text a bit more to the left. Oh I didn't add code for IE6,7 etc. If you are supporting them, remember to set framewrap to display: inline after the inline-block declaration. Edit 2: I didn't move the facebook stuff back left
Q: Java codingbat help - withoutString I'm using codingbat.com to get some Java practice in. One of the String problems, 'withoutString', is as follows: Given two strings, base and remove, return a version of the base string where all instances of the remove string have been removed (not case sensitive). You may assume that the remove string is length 1 or more. Remove only non-overlapping instances, so with "xxx" removing "xx" leaves "x". This problem can be found at: http://codingbat.com/prob/p192570 As you can see from the dropbox-linked screenshot below, all of the runs pass except for three and a final one called "other tests." The thing is, even though they are marked as incorrect, my output matches exactly the expected output for the correct answer. Here's a screenshot of my output: And here's the code I'm using:

public String withoutString(String base, String remove) {
  String result = "";
  int i = 0;
  for (; i < base.length() - remove.length();) {
    if (!(base.substring(i, i + remove.length()).equalsIgnoreCase(remove))) {
      result = result + base.substring(i, i + 1);
      i++;
    } else {
      i = i + remove.length();
    }
    if (result.startsWith(" "))
      result = result.substring(1);
    if (result.endsWith(" ") && base.substring(i, i + 1).equals(" "))
      result = result.substring(0, result.length() - 1);
  }
  if (base.length() - i <= remove.length() && !(base.substring(i).equalsIgnoreCase(remove))) {
    result = result + base.substring(i);
  }
  return result;
}

A: Your solution IS failing AND there is a display bug in coding bat. The correct output should be:

withoutString("This is a FISH", "IS") -> "Th  a FH"

Yours is:

withoutString("This is a FISH", "IS") -> "Th a FH"

Yours fails because it is removing spaces, but also, coding bat does not display the correct expected and run output string due to HTML removing extra spaces.
This recursive solution passes all tests:

public String withoutString(String base, String remove) {
  int remIdx = base.toLowerCase().indexOf(remove.toLowerCase());
  if (remIdx == -1)
    return base;
  return base.substring(0, remIdx)
      + withoutString(base.substring(remIdx + remove.length()), remove);
}

Here is an example of an optimal iterative solution. It has more code than the recursive solution but is faster since far fewer function calls are made.

public String withoutString(String base, String remove) {
  int remIdx = 0;
  int remLen = remove.length();
  remove = remove.toLowerCase();
  while (true) {
    remIdx = base.toLowerCase().indexOf(remove);
    if (remIdx == -1)
      break;
    base = base.substring(0, remIdx) + base.substring(remIdx + remLen);
  }
  return base;
}

A: I just ran your code in an IDE. It compiles correctly and matches all tests shown on codingbat. There must be some bug with codingbat's test cases. If you are curious, this problem can be solved with a single line of code:

public String withoutString(String base, String remove) {
  return base.replaceAll("(?i)" + remove, ""); // String#replaceAll(String, String) with case-insensitive regex.
}

Regex explanation: The first argument taken by String#replaceAll(String, String) is what is known as a Regular Expression, or "regex" for short. Regex is a powerful tool to perform pattern matching within Strings. In this case, the regular expression being used is (assuming that remove is equal to IS): (?i)IS This particular expression has two parts: (?i) and IS. IS matches the string "IS" exactly, nothing more, nothing less. (?i) is simply a flag to tell the regex engine to ignore case. With (?i)IS, all of: IS, Is, iS and is will be matched. As an addition, this is (almost) equivalent to the regular expressions: (IS|Is|iS|is), (I|i)(S|s) and [Ii][Ss]. EDIT Turns out that your output is not correct and is failing as expected. See: dansalmo's answer. A: @Daemon your code works. Thanks for the regex explanation.
Though dansalmo pointed out that codingbat is displaying the intended output incorrectly, I threw in some extra lines to your code to account for the double spaces (unnecessarily, as it turns out) with the following:

public String withoutString(String base, String remove) {
  String result = base.replaceAll("(?i)" + remove, "");
  for (int i = 0; i < result.length() - 1;) {
    if (result.substring(i, i + 2).equals("  ")) { // two spaces
      result = result.replace(result.substring(i, i + 2), " ");
    } else
      i++;
  }
  if (result.startsWith(" "))
    result = result.substring(1);
  return result;
}
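For comparison outside Java, the same case-insensitive removal can be sketched in Python. This is a hypothetical translation of Daemon's regex one-liner, not code from the thread; re.escape is added so that any metacharacters in remove are treated literally:

```python
import re

def without_string(base: str, remove: str) -> str:
    # (?i) is the same embedded case-insensitivity flag used in the Java
    # base.replaceAll("(?i)" + remove, "") answer; re.sub removes every
    # non-overlapping match, scanning left to right.
    return re.sub("(?i)" + re.escape(remove), "", base)

print(without_string("This is a FISH", "IS"))  # Th  a FH  (double space kept)
print(without_string("xxx", "xx"))             # x
```

As in the Java version, spaces are left untouched, so the "This is a FISH" case keeps its double space.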
Pressure sore carcinoma: a late but fulminant complication of pressure sores in spinal cord injury patients: case reports. The development of a pressure sore carcinoma in scars of spinal cord injury patients is a rare event (less than 0.5%) and occurs late (more than 30 years after the spine injury) but the prognosis is very poor. Five cases are reported and different aspects are reviewed: anamnesis, clinical features, and follow-up studies. The association of surgery and radiotherapy is usual but is not very successful. Local-regional chemotherapy and a better approach concerning immunological mechanisms may improve survival. Scar prevention and surgical management of chronic scars treated unsuccessfully by medical methods are the best means to prevent malignant changes in chronic pressure sores. Biopsy should be mandatory for all pressure sores after the first decade.
Intrameatal aneurysm of the anterior inferior cerebellar artery. Aneurysms of the distal part of the anterior-inferior cerebellar artery (AICA) are rare, with an incidence of 0.1% to 0.5%. We report a 55-year-old woman suffering from a subarachnoid hemorrhage resulting from a ruptured intrameatal aneurysm of the AICA. A left retrosigmoid craniotomy was performed and the aneurysm was clipped without post-operative deficits. Follow-up angiography demonstrated exclusion of the aneurysm, confirming preservation of the distal AICA. We review the pertinent literature and discuss clinical presentation, radiological findings and surgical management of this patient.
<#
.SYNOPSIS
Installs hasher from the Internet via Chocolatey.
.DESCRIPTION
Author: Dane Stuckey (@cryps1s)
License: MIT
Performs a standard chocolatey installation of the most recent stable version of hasher.
.NOTES
#>
Set-StrictMode -Version Latest

# Load the Install-ChocolateyPackage function
. "$($PSScriptRoot)\Install-ChocolateyPackage.ps1"

$PackageName = "hasher-erz"

Try {
    Install-ChocolateyPackage -PackageName $PackageName
}
Catch {
    Write-Host "Fatal error installing package $PackageName. Exiting."
    Exit 1
}
Multivariable evaluation of term birth weight: a comparison between ultrasound biometry and symphysis-fundal height. To derive a birth weight predictive equation and to compare its diagnostic value with that of ultrasound. A longitudinal observational cohort study, including singleton pregnancies at term, was performed at St. Orsola-Malpighi Hospital, University of Bologna (Italy). A birth weight prediction formula, including symphysis-fundal height (SFH), BMI, maternal abdominal circumference (mAC) and parity, was derived from a general linear model (GLM) (retrospective study). Moreover, on a new series of patients, the fetal weight was estimated by using both the GLM and ultrasound with the Hadlock formula (prospective study). The residual analysis and the intraclass correlation coefficient (ICC) were used to test the accuracy of the methods in predicting birth weight. Between January and November 2012, 1034 patients were included in the retrospective study and 44 in the prospective one. The following GLM was derived: estimated birth weight (g) = 1485.61 + (SFH (cm) × 23.37) + (11.62 × mAC (cm)) + [BMI × (−6.81)] + (parity (0 = nulliparous, 1 = multiparous) × 72.25). When prospectively applied, the GLM and ultrasound provided a percentage of prediction within ±10% of the actual weight of 73% and 84%, respectively. Ultrasound estimation, as opposed to the GLM one, was significantly associated with neonatal weight (R^2 = 0.388, F = 26.607, p value <0.001, ICC = 0.767). Although ultrasound biometry has provided the best values in fetal weight estimation, the predictive performance of both methods is limited.
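The reported GLM can be expressed as a small calculator. A minimal sketch in Python; the function name is ours, and the example values in the comment are illustrative, not from the study:

```python
def estimated_birth_weight(sfh_cm: float, mac_cm: float, bmi: float,
                           multiparous: bool) -> float:
    """Estimated birth weight (g) from the GLM reported in the abstract:

    EBW = 1485.61 + 23.37*SFH + 11.62*mAC - 6.81*BMI + 72.25*parity

    where SFH and mAC are in cm and parity is 0 (nulliparous) or 1 (multiparous).
    """
    parity = 1 if multiparous else 0
    return 1485.61 + 23.37 * sfh_cm + 11.62 * mac_cm - 6.81 * bmi + 72.25 * parity


# Illustrative inputs (not from the paper): SFH 34 cm, mAC 100 cm, BMI 25, nulliparous.
print(round(estimated_birth_weight(34, 100, 25, False), 2))  # 3271.94
```

Being multiparous simply adds the fixed 72.25 g term to the estimate.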
Second Women’s Fight Added To UFC 212 Another women’s fight has been added to this June’s UFC 212 card as Viviane Pereira is in action against Jamie Moyle. Pereira (12-0-0) has already competed one time under the UFC banner, defeating Valerie Letourneau by split decision at UFC 206. Pereira is also a veteran of the Xtreme Fighting Championships International promotion and she won all four fights during her time there. Pereira has seen her last two fights go the distance and it has been well over a year since she earned a finish in one of her fights. Moyle (4-1-0) is currently undefeated in UFC competition as well, having defeated Kailin Curran by unanimous decision at the UFC Ultimate Fighter 24 Finale. Moyle has spent the majority of her career in the Invicta Fighting Championships, where she holds victories over Amy Caldwell Montenegro, JJ Aldrich and Jenny Liou. Moyle also competed on the UFC Ultimate Fighter 23, losing to Amanda Bobby Cooper in the quarterfinals of the reality show. UFC 212 takes place on Saturday, June 3 from the HSBC Arena in Rio De Janeiro, Brazil with Jose Aldo and Max Holloway headlining. Fightful is providing live coverage of the event, with a post-show podcast to follow.
import module namespace cdml = "http://zorba.io/modules/store/dynamic/collections/dml"; for $e in cdml:collection(xs:QName("earthquakes")) let $r := $e/column[last()] (: last column contains region name :) where xs:double($e/column[7]) > 3 (: 7th column contains magnitude :) group by $r2 := $r let $r := $r[1] where $r contains text ("California") return <region name="{$r}">{ $e }</region>
Q: Logparser error when used with PowerShell I'm trying to use Log Parser within PowerShell to export a Windows Evtx log file to CSV:

$logparser = "c:\program files (x86)\Log Parser 2.2\logparser.exe"
$allArgs = ("SELECT * INTO c:\logs\logs.csv FROM c:\logs\logs.evtx", "-i:evt", "-o:csv")
$ps = Start-Process -FilePath $logparser -ArgumentList $allArgs -Wait -Passthru -NoNewWindow
$ps.WaitForExit()
$ps.ExitCode

But when I run this I get an error:

Error: detected extra argument "*" after query

The error code is 13. I tried putting the paths in single quotes and running it from the same directory as the logs but it keeps returning the same error.
{ "pile_set_name": "StackExchange" }
Extragenital manifestations of Neisseria gonorrhoeae. Neisseria gonorrhoeae is a common cause of genitourinary sexually transmitted infections. N. gonorrhoeae is an obligate human pathogen that has evidence of tissue-specific host interactions and diverse extragenital manifestations of infection both in adult and pediatric populations. The clinical presentation of extragenital gonorrhea, diagnostic methods, treatment and preventive measures are reviewed.
Q: Is there a convenient way to set default values for ApplicationData.Current.LocalSettings using a settings file? I'm using the built-in settings infrastructure in my Windows Phone 8.1 application to store my settings key-value pairs. For instance:

ApplicationDataContainer settings = ApplicationData.Current.LocalSettings;
object value = settings.Values["DailyReminderOnOff"];

I'm trying to find a way to supply default values that come canned with the app at installation. Is there some recommended and convenient way of doing that? I could implement my own system by placing a dirty bit and reading from a file if it's unset, or provide defaults through if-null checks within the getter; but I'd rather avoid the hassle of writing and maintaining that code if the system provides something I've missed. Thanks! A: There is no default way to get the value. Why don't you use a simple fallback like this:

const string DefaultValue1 = "value123";
object value = settings.Values["DailyReminderOnOff"] ?? DefaultValue1;
Javier Hernández Carrera Javier "Javi" Hernández Carrera (born 2 May 1998) is a Spanish footballer who plays for Real Madrid Castilla as either a central defender or a left back. Club career Born in Jerez de la Frontera, Cádiz, Andalusia, Hernández joined Real Madrid's youth setup in 2013, from Sevilla FC. On 17 July 2017, after finishing his formation, he was loaned to Segunda División B side CD El Ejido, for one year. Hernández made his senior debut on 27 August 2017, starting and scoring his team's first in a 3–3 home draw against FC Cartagena. He finished the campaign as an undisputed starter, contributing with two goals in 33 matches. On 13 July 2018, Hernández was loaned to Real Oviedo Vetusta also in the third division, until the end of the season. He made his first-team debut on 11 September, starting in a 0–1 away loss against RCD Mallorca for the season's Copa del Rey. Hernández scored his first professional goal on 7 January 2019, netting the opener in a 3–2 away win against CD Numancia for the Segunda División championship. References External links Real Madrid profile Category:1998 births Category:Living people Category:Sportspeople from Jerez de la Frontera Category:Spanish footballers Category:Andalusian footballers Category:Association football defenders Category:Segunda División players Category:Segunda División B players Category:Real Madrid Castilla footballers Category:Real Oviedo Vetusta players Category:Real Oviedo players
Q: Save any output a function gives, into a variable Being new at programming in general, and new with Python in particular, I'm having some beginner's troubles. I'm trying out a function from NLTK called generate:

string.generate()

It returns what seems like a string. However, if I write:

stringvariable = string.generate()

or

stringvariable = str(string.generate())

… stringvariable is always empty. So I guess I'm missing something here. Could the text output I see on the screen be something other than a string? And if so, is there any way for me to grab that output and put it into a variable? Briefly put, how do I get what comes out of string.generate() into stringvariable, if not as described above? A: You can rewrite generate. The only disadvantage is that it can change and your code might not be updated to reflect these changes:

from nltk.util import tokenwrap

def generate_no_stdout(self, length=100):
    if '_trigram_model' not in self.__dict__:
        estimator = lambda fdist, bins: LidstoneProbDist(fdist, 0.2)
        self._trigram_model = NgramModel(3, self, estimator=estimator)
    text = self._trigram_model.generate(length)
    return tokenwrap(text)

then "a.generate()" becomes "generate_no_stdout(a)"
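A more general alternative (a sketch, not from the answer above): in the NLTK of that era, Text.generate() printed its result and returned None, which is why stringvariable stays empty. Anything a function prints to stdout can be captured into a variable with contextlib.redirect_stdout; the helper and demo function names below are ours:

```python
import io
from contextlib import redirect_stdout

def capture_output(func, *args, **kwargs) -> str:
    """Call func and return everything it printed to stdout as one string."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        func(*args, **kwargs)
    return buf.getvalue()

# Stand-in for a print-only function such as the old Text.generate():
def noisy_generate():
    print("some generated text")

stringvariable = capture_output(noisy_generate)  # "some generated text\n"
```

This works for any callable, including methods of third-party objects, without having to rewrite their internals.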
1. Introduction {#sec1} =============== Hepatitis B virus (HBV) and hepatitis C virus (HCV) infections are a serious health concern due to their global distribution and direct relationship with liver cirrhosis and hepatocellular carcinoma (HCC) development. On average, 57 and 78% of cirrhosis and HCC cases are attributable to these hepatotropic viruses \[[@B1], [@B2]\]. HCC is the most common primary liver tumor and presents a heterogeneous prevalence among different ethnic groups and geographic regions. More than 80% of HCC cases occur in Asia and Africa: Japan, China, and Niger have incidences of 20--500 cases/100.000 inhabitants; South American countries present a lower incidence (5 cases/100.000 inhabitants), particularly in Colombia, where an incidence of 2 cases/100.000 inhabitants is estimated \[[@B3]\]. Although the clinical significance of the HBV and HCV genotypes has not been completely elucidated, increasing epidemiological data suggests that some genotypes could be related to higher risk of HCC development \[[@B4]--[@B8]\]. For example, the HCV subtype 1b (HCV/1b) is associated with more severe clinical outcome and poor antiviral response. Moreover, patients infected with HBV genotype C (HBV/C) present a higher frequency of HCC than HBV/B-infected patients \[[@B9]--[@B13]\]; patients with HBV/F seem to follow a similar tendency to HBV/C-infected patients \[[@B14], [@B15]\]. In the same way, a double mutation (A1762T/G1764A) in the HBV basal core promoter (BCP) region has been associated with severe clinical outcome and poor response to nucleoside analogues \[[@B4], [@B16]--[@B20]\]. Considering that the Latin American region is extremely diverse in culture, ethnicity, socioeconomic status, and health systems, the research findings of neighbouring countries are not totally comparable. 
Until now, the epidemiological pattern of end-stage liver diseases in Colombia has been unknown, and no report has described the molecular characterization of HBV and HCV in this group of patients \[[@B21], [@B22]\]. In the present study, aetiology and viral genotypes, subgenotypes/subtypes, and HBV pre-C/C mutants were analyzed in cirrhosis and/or HCC cases attended at the Pablo Tobon Uribe Hospital (HPTU) during the period 2005--2007 in Medellin, the second largest city in Colombia. 2. Materials and Methods {#sec2} ======================== 2.1. Patients {#sec2.1} ------------- From February 2005 to February 2007, 131 patients with end-stage liver diseases (cirrhosis and/or HCC) were enrolled in this study; all had previously signed a voluntary informed consent form. All patients were recruited at the Hepatology Unit of HPTU in Medellin city, Colombia. Diagnosis of liver cirrhosis was established according to the following findings: hepatic encephalopathy, ascites, digestive bleeding due to esophageal varices, coagulopathy, spontaneous bacterial peritonitis, hepatorenal syndrome, imaging criteria (ultrasonography, magnetic resonance, and tomography), and/or liver biopsy; HCC diagnosis was performed following the guidelines of the European Association for the Study of the Liver (EASL). 2.2. Samples {#sec2.2} ------------ Serum and liver tissue samples were obtained from patients who underwent liver transplantation. Both types of material were kept at −70°C until processing; the following serological markers were assessed in samples: HBsAg, total anti-HBc, and anti-HCV (Roche). 2.3. Viral Genome Detection {#sec2.3} --------------------------- Total RNA and DNA were isolated from samples with serological markers for HBV and/or HCV, using Trizol reagent (Invitrogen, USA). HBV DNA was amplified by PCR, using S-gene-specific primers \[[@B23]\]. 
The HCV genome was assessed by nested RT-PCR, using flanking primers for 5′UTR, following a protocol previously published \[[@B24], [@B25]\]. As a positive control, samples with HBV or HCV genome detection were used; liver tissue from a patient with diagnosis of cirrhosis associated to alcohol intake abuse, without HBV or HCV infection, was used as negative controls. All assays were performed in duplicate. 2.4. Molecular Characterization of HBV and HCV {#sec2.4} ---------------------------------------------- In order to know the viral genotype, different PCR products were sequenced to perform a phylogenetic analysis; HCV genotyping was conducted with the 5′ conserved region (5′UTR), using the primers\' set described before for HCV detection \[[@B24], [@B25]\]. To amplify the full HBV genome (3200 nts), a first round of PCR was performed with P1 and P2 primers \[[@B26]\]. A second round of PCR was carried out with some isolates using primers 58p-1450n, 1860p-2853n, 2440p-58n, 1101p-P2, P1-2440n, and 1450p-P2, to obtain the total genome sequences by subregions\' amplification \[[@B27]\]. When total HBV genome was not amplified with the primers mentioned above, the small S gene fragment was amplified using 58P-1101N in the first round and s3-s3as (319 nt) in a second round \[[@B27], [@B28]\], or hep3-hep33 as a unique round of PCR \[[@B29]\]. All sequences obtained were compared with GenBank-available sequences of known genotypes, including subgenotypes/subtypes. Phylogenetic analyses by neighbour joining, maximum parsimony, and maximum likelihood were conducted with PAUP 4.0, MEGA 4.1; Treeview program was used for tree representation. Recombination events were studied by bootscanning and similarity analysis (Simplot). To evaluate mutations in HBV BCP (A1762T/G1764A) and pre-C/C (G1896A), analysis of the sequences was carried out by comparison with other GenBank sequences of different HBV genotypes considering mutants and wild type. 
BioEdit program was used for this purpose. The accession numbers of sequences included are as follows: HBV: FJ589065; FJ589066; FJ589067; FJ589068; FJ589069; FJ589070; HCV: JF693486, JF693487, JF693488, JF693489. 3. Results {#sec3} ========== 3.1. Demographic and Clinical Characteristics of Patients {#sec3.1} --------------------------------------------------------- The mean age of the 131 individuals was 58.1 years (range: 17--85 years); most of patients recruited were males (65.6%). According to the followed clinical guidelines, 71% evidenced cirrhosis, 12.2% HCC, and 16.8% cirrhosis and HCC (HCC/Ci\*). Interestingly, when risk factors were analyzed alcohol intake abuse was the most frequent risk factor (37.4%), followed by viral etiology (17.6%), autoimmunity (9.9), NASH (7.6%), and other causes such as metabolic disorders and biliary disease (16.8%); 10.7% of the cases were not associated to any risk factor assessed ([Table 1](#tab1){ref-type="table"}). The most frequent clinical manifestations were esophageal varices (64%), ascites (61.8%), coagulopathy (46%), and hepatic encephalopathy (38.2%); most of patients were scored at the Child B and C (\>75%), indicating an advanced chronic liver disease in patients enrolled. From 131 patients included in the present study, 14 were positive for the HBsAg serological marker (10.7%) and 9 for anti-HCV (6.9%); most of patients infected by HBV and HCV were males (60.8%). The mean age of these 23 patients was 56.6 (range 34--74 years). Among them, 22 had diagnosis of liver cirrhosis and 7 had HCC in addition (HCC/Ci\*); just one patient had diagnosis of HCC without cirrhosis. According to the phenotype, 20 patients corresponded to non-Amerindian individuals; besides, the three others were patients from El Salvador, Venezuela, and Israel ([Table 2](#tab2){ref-type="table"}). 3.2. 
Phylogenetic Analysis {#sec3.2} -------------------------- In 4 out of 8 tissue samples from patients infected by HCV, it was possible to successfully sequence the 5′UTR. The expected grouping was observed among Genbank sequences after phylogenetic analysis conduction, with minor method-dependent changes (data not shown). All four isolates belonged to HCV genotype 1. Three HCV strains corresponded to HCV subtype 1b and one to subtype 1a ([Figure 1](#fig1){ref-type="fig"}). In the case of HBV, seven strains were sequenced; four strains from liver tissues and three were serum sample derived. In one additional isolate (UdeA-072) the HBV genome was detected by PCR, although it was not possible to obtain a clear electropherogram after several assays. HBV S gene sequence analysis showed that all isolates belonged to genotype HBV/F, validated by the HBV GenBank sequence grouping results and high bootstrap values observed in most tree branches. Similar topology was observed among trees generated by the different inference methods (data not shown). Five isolates grouped into the clade of South American strains (Codes UdeA-009, UdeA-054, UdeA-056, UdeA-083, and UdeA-089); this clade included the first Colombian isolate characterized by Norder et al. \[[@B29]\]. Interestingly, one of the sequences analyzed (Code UdeA-024) was less related to this clade ([Figure 2](#fig2){ref-type="fig"}). The complete HBV genome was sequenced in four strains, two Colombian isolates (Codes UdeA-083 and UdeA-089) one from Venezuela (Code UdeA-054), corresponding to subgenotype F3 (HBV/F3), and the sequence from El Salvador (Code UdeA-024), grouped in a different clade from the HBV/F3 ([Figure 3](#fig3){ref-type="fig"}). Indeed, UdeA-024, belonging to subgenotype F1a (HBV/F1a), was more closely related to strains from Central America countries (El Salvador, Costa Rica, and Nicaragua), in agreement with the origin of the patient. 
In the same phylogenetic analysis, partial sequences (S gene) of strains UdeA-009 and UdeA-056 were added. On the basis of the strict consensus tree generated by maximum parsimony, these isolates corresponded to HBV/F3; this grouping was supported by a bootstrap value higher than 80. The results obtained in the present phylogenetic analysis are in agreement with the HBV and HCV genotype geographic distribution in Latin America. Additionally, this report corresponds to the first description of HBV/F in Colombian patients with severe liver disease. 3.3. Characterization of G1896A and A1762T/G1764A Mutants {#sec3.3} --------------------------------------------------------- To establish the presence of G1896A and A1762T/G1764A mutants, pre-C/C sequences were aligned with HBV wild-type and mutant prototypes available in GenBank. The BCP analysis showed that isolates UdeA-083 and UdeA-089 carried the double mutation A1762T/G1764A. These isolates were recovered from the Colombian patients with diagnosis of HCC/Ci\* ([Table 3](#tab3){ref-type="table"}). In addition, T at nucleotide 1858 (T^1858^) was detected in isolates UdeA-024 and UdeA-054 and C^1858^ in samples UdeA-083 and UdeA-089. In UdeA-054, mutant G1896A was also identified in addition to T^1858^. The G1896A mutation correlated with the absence of HBeAg detection by ELISA in the corresponding serum sample ([Table 3](#tab3){ref-type="table"}). 4. Discussion {#sec4} ============= This paper corresponds to the first study describing the aetiology in Colombian patients with end-stage liver diseases and the molecular characterization of the HBV and HCV strains detected in this group of patients. One of forty deaths around the world is due to end-stage liver disease. In the present study, 71% of the patients correspond to cirrhosis cases and 29% to HCC, a similar result to other descriptions reported in countries of the region \[[@B1], [@B30], [@B31]\]. 
Among these 131 patients, alcohol abuse was the most frequent risk factor observed (37.4%), followed by viral infections (17.6%). Although this epidemiological pattern is usually found in developed countries, the high proportion of males (in general, males have a higher alcohol intake than females) in the present study and the HBV vaccination status in Colombia could be contributing to the risk factor pattern of the study population \[[@B32]\]. On the other hand, the prevalence of cryptogenic cirrhosis and autoimmune liver disease is consistent with previous reports \[[@B1], [@B30]\]. As previously mentioned, the frequency of cirrhosis and HCC cases associated with a viral etiology in the present study was low (17.6%; 23/131). Only 10.7% (14/131) and 6.9% (9/131) of these patients were positive for serological markers of HBV (HBsAg) or HCV (anti-HCV) infection, respectively. HCV-related HCC has increased in several countries; 80% of patients infected with HCV progress to chronic infection, while 20% of them develop cirrhosis, and at least 5% of these evolve to HCC \[[@B33]\]. In Latin America, the World Health Organization (WHO) estimates an intermediate prevalence of HCV infection (1--2.5%); moreover, a low prevalence (0.5--1%) has been reported among the Colombian blood donor population \[[@B34]--[@B36]\]. As previously mentioned, Medellin is the second largest city in the country and the capital of Antioquia State (Department of Antioquia). Health authorities in Antioquia have reported a similar HCV prevalence since 2004 (0.2--0.3/100,000 inhabitants) (Indicadores Básicos 2004--2007). Contrary to the general population, studies in some Latin American countries show a high HCV prevalence in severe liver disease. Indeed, HCV infection is the predominant HCC risk factor in Argentina, Chile, and the southeastern states of Brazil.
Furthermore, in a recent prospective multicenter study of HCC cases from 9 Latin American countries, the main HCC risk factor was HCV infection (30.8%), followed by alcohol (20.4%), HBV infection (10.8%), and then HCV plus alcohol (5.8%) \[[@B37]--[@B39]\]. A similar tendency was observed in Mexican cirrhotic patients, of whom 39.5% presented alcohol abuse, followed by HCV infection (36.6%) \[[@B40]\]. Regarding the HCV sequences included in the phylogenetic analysis, the prototypes clustered according to the genotypes described in the literature (HCV/1-6), with similar results across all the methods conducted. Within the main cluster of genotype HCV/1, clades assigned to subtypes HCV/1a and HCV/1b were clearly observed; the Colombian strains grouped into these subtypes. Indeed, one isolate belongs to HCV/1a and three strains to HCV/1b. This result is consistent with previous reports of the HCV geographic distribution in Latin American countries, where different genotypes are present (HCV/1, HCV/2, HCV/3, and HCV/4); however, genotype HCV/1 is the most prevalent in most countries of the region, including Colombia \[[@B41]\]. Indeed, HCV/1 has been described in several studies performed by different approaches in Colombian multitransfused patients, individuals with elevated aminotransferases, the general population, and kidney transplant patients \[[@B42]--[@B45]\]. This is the first report based on sequence analysis carried out on samples of patients with severe liver disease in Colombia. Secondly, most research findings agree that HCV/1b is associated with a higher risk of severe liver disease \[[@B23], [@B46]--[@B48]\]. It is important for health authorities in Colombia and other Latin American countries to develop studies that help establish the impact of genotype HCV/1b on the natural history of hepatitis C in the region.
According to the WHO, Colombia has a moderate endemicity for hepatitis B, although there are several epidemiological patterns given the geographic, ethnic, cultural, and socioeconomic status of the population. Indeed, the Sierra Nevada de Santa Marta, the Orinoco and Amazon basins, and the southeastern part of the country correspond to high-prevalence regions for hepatitis B infection in this country, whereas Antioquia State shows a different pattern \[[@B49]\]; in fact, the general incidence of HBV infection in this state in recent years was in the range of 2.7--4.4 per 100,000 inhabitants, while the prevalence in blood donors was 0.3% \[[@B49], [@B50]\]. As mentioned above, HBV infection was observed in 10.7% of the patients analyzed. The study population was recruited in Medellin, the second largest city in Colombia, in one of the most important hepatology units in the country; even though this hospital may receive patients from rural areas of Antioquia State and from other Colombian states, most of the cases corresponded to people living in urban areas and not to people from high-prevalence regions of hepatitis B infection. This heterogeneity of the hepatitis B situation is also described in Brazil; indeed, a higher frequency of HBV infection than of other risk factors has been described in HCC patients from states of the northeastern and northern regions of Brazil, but not in patients from the southeastern states \[[@B51], [@B52]\]. The low HBV prevalence described in the present study contrasts with some studies conducted in Peru and Brazil, where HBV was reported in 42--63% of end-stage liver disease cases \[[@B51]--[@B56]\]. Contrary to these reports, a low HBV prevalence has been described among HCC patients from Chile (6.8%) and Puerto Rico (4%) \[[@B57], [@B58]\], similar to other works carried out in the United States, Japan, and western Europe \[[@B1]\].
On the other hand, in Ecuador a study conducted in 770 cirrhotic patients linked a viral etiology to 2.8% of the cases, while alcohol intake was the most frequently associated risk factor (48.3%) \[[@B59]\]. While differences among the studied populations (gender, age, origin), diagnoses, and viral markers are described above, additional studies will be necessary to clarify the real role of hepatotropic viruses in cases of cirrhosis and HCC/Ci\* in Colombia and the region. On the other hand, the phylogenetic analysis of HBV showed that genotype HBV/F and subgenotype HBV/F3 were present in serum and tissue samples of the Colombian population analyzed; subgenotypes HBV/F1a and HBV/F3 were also detected in two cases from El Salvador and Venezuela, respectively. These results are consistent with previously published reports on the molecular diversity of these hepatotropic viruses, their geographic distribution in Latin America, and their prevalence in severe forms of hepatic disease. In addition, the A1762T/G1764A double mutant, associated according to some authors with a poor clinical outcome, was described in strains isolated from Colombian patients diagnosed with HCC/Ci\*. As mentioned before, all the HBV isolates sequenced belonged to HBV/F. It has been proposed that HBV/F is autochthonous to America due to its predominance in different ethnic groups, in particular Amerindians \[[@B60]\]. In Colombia, few studies on HBV molecular characterization have been published. Two of them included samples from blood donor populations, showing a predominance of genotype HBV/F (77--87.23%) \[[@B21], [@B22]\]. This result is in agreement with our study and with previous founder-population genetics studies carried out in the State of Antioquia \[[@B61], [@B62]\], which revealed that 90% of the genetic pool (mitochondrial DNA) corresponded to an Amerindian origin.
More recently, genotype HBV/E was detected in Colombia in nine pregnant women, the first description of an exclusively African HBV genotype circulating in South America \[[@B63]\]; this result also coincides with the high frequency of African haplotypes in the population of Choco State, on the Pacific coast of Colombia \[[@B64]\]. The subgenotypes HBV/F1-F4 have a specific geographic distribution in America. Indeed, subgenotype HBV/F1a is predominant in Alaska, Nicaragua, Costa Rica, and El Salvador, while subgenotype HBV/F1b predominates in Peru and Argentina. Subgenotype HBV/F2 is prevalent in Venezuela and Brazil, and subgenotype HBV/F3 in Panama, Venezuela, and Colombia. Finally, subgenotype HBV/F4 is present in Bolivia and Argentina \[[@B20], [@B65]\]. Devesa et al. and Alvarado et al. have characterized the HBV subgenotypes in samples from Colombian blood donors; most of the isolates corresponded to HBV/F3. Genotype HBV/F was also recently characterized in samples from the blood donor population of Medellin by our group (unpublished data). In the present study, the complete genome analysis of 4 out of 6 HBV isolates from patients with cirrhosis or HCC/Ci\* made it possible to classify 3 of them into subgenotype HBV/F3 and one strain into subgenotype HBV/F1a, while partial analysis (small S gene sequence) of the two others showed grouping with HBV/F3 prototypes. The BCP and pre-C/C mutations have been associated with clinical outcome severity. One frequent mutation corresponds to G1896A in the pre-C/C region, which leads to a premature stop codon preventing HBeAg synthesis \[[@B4], [@B8], [@B20], [@B66]\]. When this region was analyzed, the G1896A mutation was characterized in only one isolate (UdeA-054), which in addition presented T^1858^, while isolates UdeA-083 and UdeA-089 carry C^1858^. According to several studies, G1896A is frequent in genotype HBV/F isolates that carry T^1858^.
A hypothesis for this coevolution pattern is that hydrogen bonding between nucleotides 1858 and 1896 is necessary to maintain the lower stem of the secondary structure of the *ε* signal \[[@B67]\]. The presence of G1896A in isolate UdeA-054 correlated with the absence of HBeAg detection by ELISA in the serum sample ([Table 3](#tab3){ref-type="table"}). When the BCP was analyzed, it was demonstrated that isolates UdeA-083 and UdeA-089 carried the double mutation A1762T/G1764A; these isolates corresponded to two Colombian patients diagnosed with HCC/Ci\*. Regarding the prevalence of the BCP double mutation in isolates of genotype HBV/F, there are differing results. In fact, in Brazilian patients with chronic infection, the BCP double mutation was described in 90% of HBV/F isolates, while it was not identified in any other samples of those studies \[[@B68], [@B69]\]. Several authors have proposed that the HBV genotype and BCP mutants could be related to liver disease severity. Although both HBV/B and HBV/C circulate in Asia, patients with a diagnosis of HBV-related HCC present a higher prevalence of HBV/C infection \[[@B9], [@B12], [@B13]\]. Similar findings have been reported for HBV/F in a prospective study of 258 patients with chronic HBV infection; after a mean follow-up of 94 months, the mortality rate related to liver disease was higher in cases of genotype HBV/F than in HBV/A and HBV/D infection \[[@B14]\]. Livingston et al. also described an association of HBV/F with liver disease severity, in particular HCC risk. They compared the frequency of HBV/F in Alaska natives with chronic hepatitis B infection with or without HCC; the frequency of genotype HBV/F was 68% and 18%, respectively \[[@B15]\]. This finding suggested a higher risk of HCC development in HBV/F cases \[[@B15], [@B60]\]. Sanchez-Tapias et al. and Livingston et al.
described that genotype HBV/F, autochthonous to America, could be related to a poor clinical outcome and a higher risk of HCC development; however, a larger number of studies would be needed to support these findings more strongly. In addition, the present study is the first report of A1762T/G1764A in Colombian HBV isolates. Recent studies assign a more important role in hepatocarcinogenesis to A1762T/G1764A than to the HBV genotype itself. It has been demonstrated that the BCP double mutation generates a new binding site for the transcription factor HNF1, regulating pgRNA transcription and promoting an enhancement of HBV replicative activity \[[@B70]\]. In patients with HCC due to HBV infection, isolates belonging to genotype HBV/C carry a higher frequency of A1762T/G1764A than HBV/B strains \[[@B4], [@B9], [@B71], [@B72]\]. In our study, the presence of the BCP double mutation correlates with the HCC diagnosis in those patients. This study corresponds to the first description of end-stage liver diseases and the molecular characterization of HBV and HCV in cirrhosis and HCC/Ci\* cases in Colombia. Genotype HCV/1 and genotype HBV/F (subgenotype F3) were detected in samples belonging to Colombian patients. This result agrees with previous studies and, in the case of HBV, with the founder populations described in Colombia. Additionally, HBV/F3 and HBV/F1a were characterized in isolates from patients from Venezuela and El Salvador, respectively. The HBV and HCV subgenotyping/subtyping results obtained in the present study are consistent with the geographic pattern and predominance described for these hepatotropic viruses, especially subgenotypes HBV/F1a and HBV/F3 in Central and South America, respectively. On the other hand, the A1762T/G1764A mutation was characterized in isolates from patients with HCC.
Although the double mutant has been related to a higher risk of HCC development, the descriptive design of our study and the limited sample size do not allow us to assess any statistical association between BCP mutants, genotype, and clinical outcome. Additional studies will be necessary to determine whether HCV/1b, HBV/F, and pre-C/C variants are associated with a higher risk of cirrhosis and HCC development. Moreover, the role of viral etiology and alcohol abuse in end-stage liver disease cases in Colombia and Latin America should be explored in further case-control studies. The authors would like to thank Dr. Francisco Javier Diaz for his contribution to the phylogenetic analysis and Dr. Anne-Lise Haenni for reviewing the manuscript. This study was supported by the Departamento Nacional de Ciencia Innovación y Tecnologia (Grant: 115 041 6445) and the University of Antioquia (Codi Grant: E01157-CIM 2431). ![Unrooted tree generated with the MEGA software, using 5\'UTR HCV sequences. The solid green circle indicates the position of the isolate characterized in the present study. The accession numbers of the sequences are shown.](HEPRT2011-363205.001){#fig1} ![Phylogenetic tree of HBV genotypes A to H generated by the Neighbour Joining method (PAUP), using HBV S gene sequences. The hepatitis woolly monkey virus (WM) sequence was used as outgroup. Red arrow: isolate characterized from cirrhosis and/or HCC cases. The accession number followed by the genotype identity is indicated. Bootstrap values are shown (1000 repetitions). HKY was used to estimate distances.](HEPRT2011-363205.002){#fig2} ![HBV subgenotyping (F1-F4), based on the complete genome analysis using the PAUP program. The sequence of genotype G was used as outgroup (AB056513). 
The accession number and subgenotype are indicated, followed by the isolate origin letter code (Sal: El Salvador, CR: Costa Rica, Nic: Nicaragua, Jap: Japan, Arg: Argentina, Ven: Venezuela, Col: Colombia, Pan: Panama, Bol: Bolivia, and two Amerindian tribes from Venezuela, War: Warao tribe, Japre: Japreira tribe). Red arrow: sequences belonging to the present study. Bootstrap values are shown (1000 replications).](HEPRT2011-363205.003){#fig3}

###### Description of end-stage liver disease cases recruited.

  Characteristics                      Proportion (%)
  ------------------------------------ ----------------
  Risk factors                         
   Alcohol intake                      37,4
   Viral etiology                      17,6
   Cryptogenic                         10,7
   Autoimmunity                        9,9
   NASH                                7,6
   Others\*                            16,8
  Clinical findings                    
   Esophageal varices                  64
   Ascites                             61,8
   Coagulopathy                        46
   Hepatic encephalopathy              38,2
   Spontaneous bacterial peritonitis   15,7
   Hepatorenal syndrome                8,9

\*Metabolic disorders, biliary disease, or both.

###### Clinical and demographic characteristics of patients with positive serological markers for HBV and HCV.
  Code       Diagnosis   Origin        Age   Gender   Alcohol\*   Type of sample   Serological marker
  ---------- ----------- ------------- ----- -------- ----------- ---------------- -------------------- ----- -----
  UdeA-001   Ci          Colombia      68    F        4           NA               [Yes]{.ul}           Neg   Pos
  UdeA-002   Ci          Colombia      59    M        4           NA               [Yes]{.ul}           Neg   Pos
  UdeA-003   Ci          Colombia      68    M        4           NA               [Yes]{.ul}           Neg   Pos
  UdeA-004   Ci          Colombia      47    M        4           Yes              NA                   Pos   Neg
  UdeA-006   HCC + Ci    Colombia      48    M        4           NA               Yes                  Neg   Pos
  UdeA-009   Ci          Colombia      69    M        1           [Yes]{.ul}       NA                   Pos   Neg
  UdeA-015   HCC + Ci    Colombia      68    F        9           NA               Yes                  Neg   Pos
  UdeA-024   Ci          El Salvador   60    M        9           NA               [Yes]{.ul}           Pos   Neg
  UdeA-054   Ci          Venezuela     47    M        3           Yes              [Yes]{.ul}           Pos   Neg
  UdeA-056   HCC + Ci    Colombia      56    F        4           [Yes]{.ul}       NA                   Pos   Neg
  UdeA-058   HCC + Ci    Colombia      53    M        9           Yes              NA                   Pos   Neg
  UdeA-061   Ci          Colombia      47    F        4           Yes              NA                   Pos   Neg
  UdeA-065   Ci          Colombia      58    F        1           NA               [Yes]{.ul}           Neg   Pos
  UdeA-069   HCC         Colombia      64    F        4           NA               Yes                  Neg   Pos
  UdeA-070   Ci          Colombia      34    M        4           NA               Yes                  Neg   Pos
  UdeA-072   Ci          Israel        49    M        3           Yes              [Yes]{.ul}           Pos   Neg
  UdeA-077   Ci          Colombia      57    M        1           Yes              NA                   Pos   Neg
  UdeA-083   HCC + Ci    Colombia      67    F        4           Yes              [Yes]{.ul}           Pos   Neg
  UdeA-087   Ci          Colombia      48    M        2           Yes              NA                   Pos   Neg
  UdeA-089   HCC + Ci    Colombia      47    F        4           [Yes]{.ul}       [Yes]{.ul}           Pos   Neg
  UdeA-099   Ci          Colombia      56    M        4           Yes              NA                   Pos   Neg
  UdeA-101   Ci          Colombia      74    F        4           Yes              NA                   Pos   Neg
  UdeA-124   HCC + Ci    Colombia      57    M        2           Yes              NA                   Pos   Neg

Ci: cirrhosis, HCC: hepatocellular carcinoma, M: male, F: female, \*Alcohol intake: for male/(female) 1: \>80 g/day (40 g/day); 2: 50--80 g/day (20--40 g/day); 3: \<50 g/day (20 g/day); 4: no intake; 9: no data; Yes: available; NA: not available; [Yes]{.ul}: HBV/HCV-positive sample by molecular analysis; Pos: positive serological result; Neg: negative serological result.

###### Molecular characterization of HBV isolates corresponding to end-stage liver disease cases: genotype, subgenotype, and precore/core mutants.
  Code            Diagnosis   Genotype   Subgenotype   Mutation                  HBV serological markers
  --------------- ----------- ---------- ------------- ---------- ------------------------- ----- ----- ----- ----- -----
  UdeA-009        Ci          F          F3^*μ*^       ---        ---       ---     ---     Pos   Pos   Neg
  UdeA-024        Ci          F          F1a^∞^        T          G         A       G       Pos   Pos   Neg
  UdeA-054        Ci          F          F3^∞^         T          A^*α*^    A       G       Pos   Pos   Neg
  UdeA-056        HCC/Ci\*    F          F3^*μ*^       ---        ---       ---     ---     Pos   Pos   Neg
  UdeA-083        HCC/Ci\*    F          F3^∞^         C          G         T\*     A\*     Pos   Pos   Neg
  UdeA-089^*β*^   HCC/Ci\*    F          F3^∞^         C          G         T\*     A\*     Pos   Pos   Neg

*β*: both tissue and serum samples, Ci: cirrhosis, HCC/Ci\*: cirrhosis and hepatocellular carcinoma, ∞: based on complete genome analysis, *μ*: based on S gene sequence analysis, ---: no data, *α*: nonsense mutation, \*: double mutant, Pos: positive, Neg: negative. Strains isolated from Colombian patients: UdeA-009, UdeA-024, UdeA-056, UdeA-083, and UdeA-089. Strain isolated from a Venezuelan patient: UdeA-054.

[^1]: Academic Editor: Isabelle Chemin
Effects of background noise on click-evoked otoacoustic emissions.

To investigate the effect of increased levels of background noise on click-evoked otoacoustic emission (CEOAE) recordings and to compare the effectiveness of the default CEOAE program with the QuickScreen CEOAE program in increased levels of noise, using an Otodynamics ILO88 recording device. The right ears of 40 young adult women with normal hearing were assessed using CEOAEs under four different noise conditions and with two different methods of data collection. The noise conditions were in quiet, 50 dB A, 55 dB A, and 60 dB A of white noise. Data were collected at each noise level in the default mode and also using the ILO88 QuickScreen program. There was a significant change in a number of important CEOAE output parameters with increased noise. In the default mode, mean whole wave reproducibility was 89.2% in quiet but declined to 85% with 50 dB A of white noise, 65% at 55 dB A, and 20% at 60 dB A. The QuickScreen program proved more robust to the effects of noise than the default. In that mode, mean whole wave reproducibility was 91.7% in quiet, 92.5% with 50 dB A of white noise, 82.5% at 55 dB A, and 45% at 60 dB A. The findings of the study indicate that ambient noise levels for accurate CEOAE recording should not exceed 50 to 55 dB A, and that alternatives to the default program should be considered in non-sound-treated situations.
define("ace/mode/logtalk_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules; var LogtalkHighlightRules = function() { this.$rules = { start: [ { token: 'punctuation.definition.comment.logtalk', regex: '/\\*', push: [ { token: 'punctuation.definition.comment.logtalk', regex: '\\*/', next: 'pop' }, { defaultToken: 'comment.block.logtalk' } ] }, { todo: 'fix grouping', token: [ 'comment.line.percentage.logtalk', 'punctuation.definition.comment.logtalk' ], regex: '%.*$\\n?' }, { todo: 'fix grouping', token: [ 'storage.type.opening.logtalk', 'punctuation.definition.storage.type.logtalk' ], regex: ':-\\s(?:object|protocol|category|module)(?=[(])' }, { todo: 'fix grouping', token: [ 'storage.type.closing.logtalk', 'punctuation.definition.storage.type.logtalk' ], regex: ':-\\send_(?:object|protocol|category)(?=[.])' }, { caseInsensitive: false, token: 'storage.type.relations.logtalk', regex: '\\b(?:complements|extends|i(?:nstantiates|mp(?:orts|lements))|specializes)(?=[(])' }, { caseInsensitive: false, todo: 'fix grouping', token: [ 'storage.modifier.others.logtalk', 'punctuation.definition.storage.modifier.logtalk' ], regex: ':-\\s(?:e(?:lse|ndif)|built_in|dynamic|synchronized|threaded)(?=[.])' }, { caseInsensitive: false, todo: 'fix grouping', token: [ 'storage.modifier.others.logtalk', 'punctuation.definition.storage.modifier.logtalk' ], regex: ':-\\s(?:c(?:alls|oinductive)|e(?:lif|n(?:coding|sure_loaded)|xport)|i(?:f|n(?:clude|itialization|fo))|reexport|set_(?:logtalk|prolog)_flag|uses)(?=[(])' }, { caseInsensitive: false, todo: 'fix grouping', token: [ 'storage.modifier.others.logtalk', 'punctuation.definition.storage.modifier.logtalk' ], regex: 
':-\\s(?:alias|info|d(?:ynamic|iscontiguous)|m(?:eta_(?:non_terminal|predicate)|ode|ultifile)|p(?:ublic|r(?:otected|ivate))|op|use(?:s|_module)|synchronized)(?=[(])' }, { token: 'keyword.operator.message-sending.logtalk', regex: '(:|::|\\^\\^)' }, { token: 'keyword.operator.external-call.logtalk', regex: '([{}])' }, { token: 'keyword.operator.mode.logtalk', regex: '(\\?|@)' }, { token: 'keyword.operator.comparison.term.logtalk', regex: '(@=<|@<|@>|@>=|==|\\\\==)' }, { token: 'keyword.operator.comparison.arithmetic.logtalk', regex: '(=<|<|>|>=|=:=|=\\\\=)' }, { token: 'keyword.operator.bitwise.logtalk', regex: '(<<|>>|/\\\\|\\\\/|\\\\)' }, { token: 'keyword.operator.evaluable.logtalk', regex: '\\b(?:e|pi|div|mod|rem)\\b(?![-!(^~])' }, { token: 'keyword.operator.evaluable.logtalk', regex: '(\\*\\*|\\+|-|\\*|/|//)' }, { token: 'keyword.operator.misc.logtalk', regex: '(:-|!|\\\\+|,|;|-->|->|=|\\=|\\.|=\\.\\.|\\^|\\bas\\b|\\bis\\b)' }, { caseInsensitive: false, token: 'support.function.evaluable.logtalk', regex: '\\b(a(bs|cos|sin|tan|tan2)|c(eiling|os)|div|exp|flo(at(_(integer|fractional)_part)?|or)|log|m(ax|in|od)|r(em|ound)|s(i(n|gn)|qrt)|t(an|runcate)|xor)(?=[(])' }, { token: 'support.function.control.logtalk', regex: '\\b(?:true|fa(?:il|lse)|repeat|(?:instantiation|system)_error)\\b(?![-!(^~])' }, { token: 'support.function.control.logtalk', regex: '\\b((?:type|domain|existence|permission|representation|evaluation|resource|syntax)_error)(?=[(])' }, { token: 'support.function.control.logtalk', regex: '\\b(?:ca(?:ll|tch)|ignore|throw|once)(?=[(])' }, { token: 'support.function.chars-and-bytes-io.logtalk', regex: '\\b(?:(?:get|p(?:eek|ut))_(c(?:har|ode)|byte)|nl)(?=[(])' }, { token: 'support.function.chars-and-bytes-io.logtalk', regex: '\\bnl\\b' }, { token: 'support.function.atom-term-processing.logtalk', regex: '\\b(?:atom_(?:length|c(?:hars|o(?:ncat|des)))|sub_atom|char_code|number_c(?:har|ode)s)(?=[(])' }, { caseInsensitive: false, token: 
'support.function.term-testing.logtalk', regex: '\\b(?:var|atom(ic)?|integer|float|c(?:allable|ompound)|n(?:onvar|umber)|ground|acyclic_term)(?=[(])' }, { token: 'support.function.term-comparison.logtalk', regex: '\\b(compare)(?=[(])' }, { token: 'support.function.term-io.logtalk', regex: '\\b(?:read(_term)?|write(?:q|_(?:canonical|term))?|(current_)?(?:char_conversion|op))(?=[(])' }, { caseInsensitive: false, token: 'support.function.term-creation-and-decomposition.logtalk', regex: '\\b(arg|copy_term|functor|numbervars|term_variables)(?=[(])' }, { caseInsensitive: false, token: 'support.function.term-unification.logtalk', regex: '\\b(subsumes_term|unify_with_occurs_check)(?=[(])' }, { caseInsensitive: false, token: 'support.function.stream-selection-and-control.logtalk', regex: '\\b(?:(?:se|curren)t_(?:in|out)put|open|close|flush_output|stream_property|at_end_of_stream|set_stream_position)(?=[(])' }, { token: 'support.function.stream-selection-and-control.logtalk', regex: '\\b(?:flush_output|at_end_of_stream)\\b' }, { token: 'support.function.prolog-flags.logtalk', regex: '\\b((?:se|curren)t_prolog_flag)(?=[(])' }, { token: 'support.function.compiling-and-loading.logtalk', regex: '\\b(logtalk_(?:compile|l(?:ibrary_path|oad|oad_context)|make(_target_action)?))(?=[(])' }, { token: 'support.function.compiling-and-loading.logtalk', regex: '\\b(logtalk_make)\\b' }, { caseInsensitive: false, token: 'support.function.event-handling.logtalk', regex: '\\b(?:(?:abolish|define)_events|current_event)(?=[(])' }, { token: 'support.function.implementation-defined-hooks.logtalk', regex: '\\b(?:(?:create|current|set)_logtalk_flag|halt)(?=[(])' }, { token: 'support.function.implementation-defined-hooks.logtalk', regex: '\\b(halt)\\b' }, { token: 'support.function.sorting.logtalk', regex: '\\b((key)?(sort))(?=[(])' }, { caseInsensitive: false, token: 'support.function.entity-creation-and-abolishing.logtalk', regex: 
'\\b((c(?:reate|urrent)|abolish)_(?:object|protocol|category))(?=[(])' }, { caseInsensitive: false, token: 'support.function.reflection.logtalk', regex: '\\b((object|protocol|category)_property|co(mplements_object|nforms_to_protocol)|extends_(object|protocol|category)|imp(orts_category|lements_protocol)|(instantiat|specializ)es_class)(?=[(])' }, { token: 'support.function.logtalk', regex: '\\b((?:for|retract)all)(?=[(])' }, { caseInsensitive: false, token: 'support.function.execution-context.logtalk', regex: '\\b(?:context|parameter|se(?:lf|nder)|this)(?=[(])' }, { token: 'support.function.database.logtalk', regex: '\\b(?:a(?:bolish|ssert(?:a|z))|clause|retract(all)?)(?=[(])' }, { token: 'support.function.all-solutions.logtalk', regex: '\\b((?:bag|set)of|f(?:ind|or)all)(?=[(])' }, { caseInsensitive: false, token: 'support.function.multi-threading.logtalk', regex: '\\b(threaded(_(call|once|ignore|exit|peek|wait|notify))?)(?=[(])' }, { caseInsensitive: false, token: 'support.function.engines.logtalk', regex: '\\b(threaded_engine(_(create|destroy|self|next(?:_reified)?|yield|post|fetch))?)(?=[(])' }, { caseInsensitive: false, token: 'support.function.reflection.logtalk', regex: '\\b(?:current_predicate|predicate_property)(?=[(])' }, { token: 'support.function.event-handler.logtalk', regex: '\\b(?:before|after)(?=[(])' }, { token: 'support.function.message-forwarding-handler.logtalk', regex: '\\b(forward)(?=[(])' }, { token: 'support.function.grammar-rule.logtalk', regex: '\\b(?:expand_(?:goal|term)|(?:goal|term)_expansion|phrase)(?=[(])' }, { token: 'punctuation.definition.string.begin.logtalk', regex: '\'', push: [ { token: 'constant.character.escape.logtalk', regex: '\\\\([\\\\abfnrtv"\']|(x[a-fA-F0-9]+|[0-7]+)\\\\)' }, { token: 'punctuation.definition.string.end.logtalk', regex: '\'', next: 'pop' }, { defaultToken: 'string.quoted.single.logtalk' } ] }, { token: 'punctuation.definition.string.begin.logtalk', regex: '"', push: [ { token: 
'constant.character.escape.logtalk', regex: '\\\\.' }, { token: 'punctuation.definition.string.end.logtalk', regex: '"', next: 'pop' }, { defaultToken: 'string.quoted.double.logtalk' } ] }, { token: 'constant.numeric.logtalk', regex: '\\b(0b[0-1]+|0o[0-7]+|0x[0-9a-fA-F]+)\\b' }, { token: 'constant.numeric.logtalk', regex: '\\b(0\'\\\\.|0\'.|0\'\'|0\'")' }, { token: 'constant.numeric.logtalk', regex: '\\b(\\d+\\.?\\d*((e|E)(\\+|-)?\\d+)?)\\b' }, { token: 'variable.other.logtalk', regex: '\\b([A-Z_][A-Za-z0-9_]*)\\b' } ] }; this.normalizeRules(); }; oop.inherits(LogtalkHighlightRules, TextHighlightRules); exports.LogtalkHighlightRules = LogtalkHighlightRules; }); define("ace/mode/folding/cstyle",["require","exports","module","ace/lib/oop","ace/range","ace/mode/folding/fold_mode"], function(require, exports, module) { "use strict"; var oop = require("../../lib/oop"); var Range = require("../../range").Range; var BaseFoldMode = require("./fold_mode").FoldMode; var FoldMode = exports.FoldMode = function(commentRegex) { if (commentRegex) { this.foldingStartMarker = new RegExp( this.foldingStartMarker.source.replace(/\|[^|]*?$/, "|" + commentRegex.start) ); this.foldingStopMarker = new RegExp( this.foldingStopMarker.source.replace(/\|[^|]*?$/, "|" + commentRegex.end) ); } }; oop.inherits(FoldMode, BaseFoldMode); (function() { this.foldingStartMarker = /([\{\[\(])[^\}\]\)]*$|^\s*(\/\*)/; this.foldingStopMarker = /^[^\[\{\(]*([\}\]\)])|^[\s\*]*(\*\/)/; this.singleLineBlockCommentRe= /^\s*(\/\*).*\*\/\s*$/; this.tripleStarBlockCommentRe = /^\s*(\/\*\*\*).*\*\/\s*$/; this.startRegionRe = /^\s*(\/\*|\/\/)#?region\b/; this._getFoldWidgetBase = this.getFoldWidget; this.getFoldWidget = function(session, foldStyle, row) { var line = session.getLine(row); if (this.singleLineBlockCommentRe.test(line)) { if (!this.startRegionRe.test(line) && !this.tripleStarBlockCommentRe.test(line)) return ""; } var fw = this._getFoldWidgetBase(session, foldStyle, row); if (!fw && 
this.startRegionRe.test(line)) return "start"; // lineCommentRegionStart return fw; }; this.getFoldWidgetRange = function(session, foldStyle, row, forceMultiline) { var line = session.getLine(row); if (this.startRegionRe.test(line)) return this.getCommentRegionBlock(session, line, row); var match = line.match(this.foldingStartMarker); if (match) { var i = match.index; if (match[1]) return this.openingBracketBlock(session, match[1], row, i); var range = session.getCommentFoldRange(row, i + match[0].length, 1); if (range && !range.isMultiLine()) { if (forceMultiline) { range = this.getSectionRange(session, row); } else if (foldStyle != "all") range = null; } return range; } if (foldStyle === "markbegin") return; var match = line.match(this.foldingStopMarker); if (match) { var i = match.index + match[0].length; if (match[1]) return this.closingBracketBlock(session, match[1], row, i); return session.getCommentFoldRange(row, i, -1); } }; this.getSectionRange = function(session, row) { var line = session.getLine(row); var startIndent = line.search(/\S/); var startRow = row; var startColumn = line.length; row = row + 1; var endRow = row; var maxRow = session.getLength(); while (++row < maxRow) { line = session.getLine(row); var indent = line.search(/\S/); if (indent === -1) continue; if (startIndent > indent) break; var subRange = this.getFoldWidgetRange(session, "all", row); if (subRange) { if (subRange.start.row <= startRow) { break; } else if (subRange.isMultiLine()) { row = subRange.end.row; } else if (startIndent == indent) { break; } } endRow = row; } return new Range(startRow, startColumn, endRow, session.getLine(endRow).length); }; this.getCommentRegionBlock = function(session, line, row) { var startColumn = line.search(/\s*$/); var maxRow = session.getLength(); var startRow = row; var re = /^\s*(?:\/\*|\/\/|--)#?(end)?region\b/; var depth = 1; while (++row < maxRow) { line = session.getLine(row); var m = re.exec(line); if (!m) continue; if (m[1]) depth--; else 
depth++; if (!depth) break; } var endRow = row; if (endRow > startRow) { return new Range(startRow, startColumn, endRow, line.length); } }; }).call(FoldMode.prototype); }); define("ace/mode/logtalk",["require","exports","module","ace/lib/oop","ace/mode/text","ace/tokenizer","ace/mode/logtalk_highlight_rules","ace/mode/folding/cstyle"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var TextMode = require("./text").Mode; var Tokenizer = require("../tokenizer").Tokenizer; var LogtalkHighlightRules = require("./logtalk_highlight_rules").LogtalkHighlightRules; var FoldMode = require("./folding/cstyle").FoldMode; var Mode = function() { this.HighlightRules = LogtalkHighlightRules; this.foldingRules = new FoldMode(); this.$behaviour = this.$defaultBehaviour; }; oop.inherits(Mode, TextMode); (function() { this.lineCommentStart = "%"; this.blockComment = {start: "/*", end: "*/"}; this.$id = "ace/mode/logtalk"; }).call(Mode.prototype); exports.Mode = Mode; }); (function() { window.require(["ace/mode/logtalk"], function(m) { if (typeof module == "object" && typeof exports == "object" && module) { module.exports = m; } }); })();
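As a standalone illustration (independent of Ace's tokenizer machinery, and not part of the mode file itself), two of the token regexes defined in `LogtalkHighlightRules` can be exercised directly. The patterns are copied from the rules above; the `^...$` anchors are added only for this demo, since Ace applies the patterns positionally while scanning each line.

```javascript
// Standalone sketch: exercising two token patterns from the rules above.
const variableRe = /^[A-Z_][A-Za-z0-9_]*$/; // variable.other.logtalk
const binaryRe = /^0b[0-1]+$/;              // binary form of constant.numeric.logtalk

console.log(variableRe.test("Result")); // true  (Logtalk variables start with an uppercase letter or "_")
console.log(variableRe.test("result")); // false (lowercase-initial names are atoms, not variables)
console.log(binaryRe.test("0b1011"));   // true
```

This is the same convention Prolog-family highlighters rely on: the case of the first character alone distinguishes variables from atoms, so a single regex per class suffices.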
Generally speaking, in order to fix a corrugated tube for protecting the outer periphery of a wire harness to a vehicle body or the like, a band-type clip (hereinafter referred to as “band clip”) is often used, considering the cost and the installation area size. A band clip is configured to have a loop shape when a belt portion is inserted into a buckle portion, and, in this state, to be able to clamp the outer periphery of a corrugated tube when the belt portion is fastened. It is possible to fix the corrugated tube to a vehicle body or the like by inserting a clip portion of the band clip, which is clamping the corrugated tube, into a fixing hole formed in the vehicle body or the like. However, since the inner circumferential surface of the belt portion of such a band clip is likely to slip along the outer circumferential surface of the corrugated tube, the band clip might unintentionally rotate in the circumferential direction relative to the corrugated tube, and workability when attaching the corrugated tube to the vehicle body or the like is poor. In order to eliminate such a problem, there is a conventionally-known band clip attaching structure in which a plurality of protruding portions are provided on the inner circumferential surface of the band clip, particularly on the inner circumferential surface of the belt portion, along the longitudinal direction such that the plurality of protruding portions dig into the outer circumferential surface of the corrugated tube when the belt portion is fastened to the corrugated tube, and the band clip is thus prevented from rotating (see JP2013-46505A, for example).
Q: How to print national characters in list representation? I'm writing JSON data with special characters (å, ä, ö) to file and then reading it back in. Then I use this data in a subprocess command. When using the read data I cannot make special characters get translated back to å, ä and ö respectively. When running the python script below, the list "command" is printed as:

['cmd.exe', '-Name=M\xc3\xb6tley', '-Bike=H\xc3\xa4rley', '-Chef=B\xc3\xb6rk']

But I want it to be printed like this:

['cmd.exe', '-Name=Mötley', '-Bike=Härley', '-Chef=Börk']

Python Script:

# -*- coding: utf-8 -*-
import os, json, codecs, subprocess, sys

def loadJson(filename):
    with open(filename, 'r') as input:
        data = json.load(input)
    print 'Read json from: ' + filename
    return data

def writeJson(filename, data):
    with open(filename, 'w') as output:
        json.dump(data, output, sort_keys=True, indent=4, separators=(',', ': '))
    print 'Wrote json to: ' + filename

# Write JSON file
filename = os.path.join( os.path.dirname(__file__) , 'test.json' )
data = { "Name" : "Mötley", "Bike" : "Härley", "Chef" : "Börk" }
writeJson(filename, data)

# Load JSON data
loadedData = loadJson(filename)

# Build command
command = [ 'cmd.exe' ]

# Append arguments to command
arguments = []
arguments.append('-Name=' + loadedData['Name'] )
arguments.append('-Bike=' + loadedData['Bike'] )
arguments.append('-Chef=' + loadedData['Chef'] )
for arg in arguments:
    command.append(arg.encode('utf-8'))

# Print command (my problem; these do not contain the special characters)
print command

# Execute command
p = subprocess.Popen( command , stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

# Read stdout and print each new line
sys.stdout.flush()
for line in iter(p.stdout.readline, b''):
    sys.stdout.flush()
    print(">>> " + line.rstrip())

A: This is the canonical representation of string constants in Python which is designed to eliminate encoding issues. Actually, it's what repr() on a string returns.
The str() implementation for lists, which is called when a list is printed, calls repr() on its members to represent them. The only way to output a string with non-ASCII characters as they are is to print it or otherwise write it to a stream. See Why does Python print unicode characters when the default encoding is ASCII? on how character conversion is done on printing. Also note that for non-ASCII 8-bit characters, the output will be different for terminals set up for different codepages. Regarding the solution: the simplest one will be to make an alternative str(list) implementation that calls str() instead of repr() - noting the warnings above.

def list_nativechars(l):
    assert isinstance(l, list)
    return "[" + ", ".join('"' + str(i) + '"' for i in l) + "]"

Now (in cp866 console encoding):

>>> l = ["йцукен"]
>>> print list_nativechars(l)
["йцукен"]

With data in a foreign encoding:

# encoding: cp858
<...>
l = ['cmd.exe', '-Name=Mötley', '-Bike=Härley', '-Chef=Börk']
print list_nativechars(l)

c:\>python t.py
["cmd.exe", "-Name=MФtley", "-Bike=HДrley", "-Chef=BФrk"]
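Worth noting as an aside (our addition, not part of the original thread): in Python 3 this particular symptom largely disappears, because `str` is Unicode throughout and `repr()` leaves printable non-ASCII characters unescaped, so the default list printing already shows the characters:

```python
# Python 3: printing a list still calls repr() on each element,
# but repr() no longer escapes printable non-ASCII characters.
command = ['cmd.exe', '-Name=Mötley', '-Bike=Härley', '-Chef=Börk']
print(command)   # ['cmd.exe', '-Name=Mötley', '-Bike=Härley', '-Chef=Börk']
```

The console still needs an encoding that can actually represent the characters, which is the other half of the original problem.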
The WP system sends a lot of mails to authors. There are also plugins that use email notifications for authors. But let's say I have a user that is managed by two persons (because it's a company, an entity that is more than one person) - I want all the emails to be sent also to a second email address, which I will enter in the profile page of that WP user. What is the best way to achieve this? I know how to add a field to the profile page, but I don't know how to send every notification that the user gets to the second email address. please help :) If you have a lousy host it's probably time to move hosts, not route around their lousiness ;-). This is certainly easier than a code based solution (for non technical users). – Refiner Feb 27 '12 at 22:36 Yes, I know. but just for the challenge. what would you do? in the end I've changed hosting and forwarded the email as you suggested, and as I also thought would be the best solution. But what if? – Asaf Chertkoff Mar 2 '12 at 10:56 A suggestion, albeit a sorta hack, I'd like to make is to use a mailing list. You can add an infinite number of emails to a mailing list. An alternative you could do is to use the publish_{$posttype} hook to send email notifications through wp_mail. The wp_mail function's $to parameter takes either a string or an array so you could pass in multiple email addresses. I attempted a sample code block (above). A suggestion if you plan on using this on a production site: use a cron job if you have a lot of registered users, otherwise I'm pretty sure this will cause a timeout in PHP.
Emergency preparedness curriculum in nursing schools in the United States. With concern about bioterrorism and inadequacies in responding to mass casualty events, health care professionals have been placed in the category of first responders. The International Nursing Coalition for Mass Casualty Education (INCMCE) was established to plan strategically to address the educational needs of the nation's nurses. This study sought to determine the types and levels of disaster preparedness curricula being delivered or in development in nursing programs at all levels. INCMCE surveyed 2,013 deans or directors of nursing schools as to curricula for emergency preparedness prior to September 11, 2001, and during the two following academic years. Initial requests were sent via email and the US postal service. Respondents were invited to answer the online survey so data could be directly entered into a database for purposes of data analysis. Responses were received from 348 schools of nursing. Curriculum plans, followed by competency lists, were selected as most helpful for teaching content in disaster preparedness. The survey results validated the general assumption that nursing programs provide limited curricula in this area. The mean number of hours of disaster preparedness content provided, approximately four hours, did not change significantly over three academic years. The study also showed that 75 percent of respondents thought that nurse faculty were inadequately prepared in the area of disaster management. The study established a baseline for future curricular growth.
[Theory and Phenomenology of $\mu$ in $M$ theory\ ]{} Bobby Samir Acharya$^{1,2}$, Gordon Kane$^1$, Eric Kuflik$^1$, Ran Lu $^1$\ ^1^[*Michigan Center for Theoretical Physics, University of Michigan, Ann Arbor, MI 48109*]{}\ ^2^[*Abdus Salam International Centre for Theoretical Physics, Trieste, Italy* ]{} We consider a solution to the $\mu$-problem within $M$ theory on a $G_2$-manifold. Our study is based upon the discrete symmetry proposed by Witten that forbids the $\mu$-term and solves the doublet-triplet splitting problem. We point out that the symmetry must be broken by moduli stabilization, describing in detail how this can occur. The $\mu$-term is generated via Kahler interactions after strong dynamics in the hidden sector generate a potential which stabilizes all moduli and breaks supersymmetry with $m_{3/2} \sim 20 - 30 \operatorname{TeV}$. We show that $\mu$ is suppressed relative to the gravitino mass, by higher dimensional operators, $\mu \sim 0.1 m_{3/2} \sim 2-3 \operatorname{TeV}$. This necessarily gives a Higgsino component to the (mostly Wino) LSP, and a small but non-negligible LSP-nucleon scattering cross-section. The maximum, spin-independent cross-sections are not within reach of the current XENON100 experiment, but are within reach of upcoming runs and upgrades. Introduction ============ In the Minimal Supersymmetric Standard Model (MSSM) [@Martin:1997ns], the only low energy supersymmetric parameter with mass dimension one is the $\mu$ parameter. Through the $\mu$-term, $W \supset \mu H_u H_d $, it gives mass to Higgsinos and also generates scalar potential couplings for Higgs fields. The size of $\mu$ plays an important role in phenomenology. In particular, it affects properties of potential dark matter particles. LEP searches for the charged Higgsino require $\mu \gtrsim 100$ GeV, while arguments against fine tuning of the mass of the Z-boson suggest that $\mu$ should not be too large. 
On the other hand, one might expect, with ignorance of the high scale theory, that $\mu \sim m_{GUT}$, the natural UV cutoff. Solving the $\mu$-problem [@Kim:1983dt] presumably requires an understanding of the fundamental theory that generates the scale of the $\mu$ parameter. Thus the $\mu$-problem is exceptionally important – a high scale theory cannot be qualitatively complete without addressing it, and its solution will have significant implications for dark matter, Higgs physics, and fine-tuning issues. The most promising framework for a complete fundamental theory that incorporates low energy supersymmetry is string theory. Within string theory, many explanations for the small value of $\mu$ have been proposed. In most scenarios the $\mu$-term is forbidden at the high scale. Then, it is somehow dynamically generated at a lower scale. In many cases, the $\mu$-term is forbidden by a continuous or discrete symmetry, which is spontaneously broken at a smaller, dynamically generated scale ($\ll m_{GUT}$), and perhaps related to supersymmetry breaking [@Antoniadis:1994hg; @Nath:2002nb]. Some examples of the above include NMSSM scenarios [@Suematsu:1994qm; @Cvetic:1995rj; @Cvetic:1997ky; @Lebedev:2009ag; @RamosSanchez:2010sc; @Ratz:2010zz] and approximate $R$-symmetric models [@Casas:1992mk; @Kappl:2008ie]. In other scenarios the $\mu$-term is forbidden by stringy selection rules, which are violated by non-perturbative instanton effects that produce exponentially suppressed mass scales [@Ibanez:2007tu; @Ibanez:2008my; @Green:2009mx; @Cvetic:2009yh]. It has long been suspected that the MSSM unifies the strong and electroweak forces [@Langacker:1991an] into a single $SU(5)$ grand unified group. Each family of quarks and leptons is organized into a $\bf{10} \oplus \bf{\bar{5}}$ representation of $SU(5)$. The remaining MSSM fields, the Higgs doublets, do not form a complete $SU(5)$ representation.
Minimally, the Higgs doublets can be assigned to a $\bf{5}\oplus \bf{\bar{5}}$ representation, but this requires the introduction of a pair of Higgs color triplets. The Higgs triplets can mediate baryon and lepton violating processes, and thus should be very heavy, $m_{T} \gtrsim 10^{14}$ GeV, to avoid rapid proton decay [@Murayama:2001ur]. Additionally, they should be heavy to ensure gauge coupling unification in the minimal model. If the Higgs triplet masses are very heavy, then an $SU(5)$ symmetric theory would require that the Higgs doublet masses be the same as the triplet mass, $\mu = m_T \sim M_{GUT} $, but it was just argued that this is a factor $10^{13}$ too large. A string theoretic solution to the $\mu$-problem is inevitably related to the solution of the doublet-triplet problem of grand unified theories. Therefore, it is paramount that the symmetry that protects the $\mu$-term not forbid the triplet masses if both problems are to be solved. This restriction leads to an elegant, perhaps unique solution to the $\mu$-problem in $M$ theory; the symmetry which protects $\mu$ from being generated at the unification scale was originally proposed by Witten [@Witten:2001bf]. Although Witten did not discuss how this symmetry would be broken, we argue that the symmetry would – indeed must – be broken by moduli stabilization. Then by including the mechanism for stabilizing the moduli proposed in [@Acharya:2006ia], we will show that $\mu \sim 0.1 m_{3/2}$. Finally, the implications for dark matter discovery are discussed, where we conclude that the XENON100 experiment should not observe a dark matter signal, but may do so in its next upgrade (Figure \[fig:xsecuniversal\]).

$M$ theory
==========

Matter and Gauge Theory
-----------------------

In $M$ theory compactified on a $G_2$ manifold, $ADE$ gauge symmetries ($SU(n)$, $SO(2n)$ and $E_6$, $E_7$, $E_8$) are localized along three dimensional submanifolds of orbifold singularities [@Acharya:2000gb; @Acharya:1998pm].
Chiral matter, charged under the $ADE$ gauge theory, is localized at conical singularities in the seven dimensional $G_2$ manifold, at points where the $ADE$ singularity is enhanced [@Witten:2001uq; @Acharya:2001gy; @Acharya:2004qe]. Matter will additionally be charged under the $U(1)$ symmetry corresponding to the vanishing $2$-cycle that enhances the singularity. Hence, all chiral matter will be charged under at least one $U(1)$ symmetry. Bi-fundamental matter, charged under two non-Abelian gauge groups, is also possible, but will not be considered here. As argued by [@Pantev:2009de], the additional $U(1)$ symmetries are never anomalous. Therefore, there is no Green-Schwarz mechanism [@Green:1984sg] needed for anomaly cancellation, and GUT-scale FI $D$-terms are not present in the theory. This will be important later, since it removes a possibility for generating large scalar vacuum expectation values (vevs) for charged matter fields. Two gauge theories will generically have gauge couplings of precisely the same size only if they arise from the same orbifold singularities. Therefore, if gauge coupling unification is to be motivated theoretically, and not an approximation or accident, the gauge group of the $ADE$ singularity should be a simple group containing the Standard Model gauge group, which we will take (for simplicity) to be $SU(5)$. Any larger group containing $SU(5)$ will give results similar to those we find below. To obtain the Standard Model gauge group, $SU(5)$ needs to be broken. Perhaps the 4D gauge symmetry can be broken spontaneously, but only representations smaller than the adjoint are realizable in $M$ theory – the $\bf{10}$ and $\bf{5}$ representations (and their conjugates) in $SU(5)$. This leaves only “flipped $SU(5)$” [@Barr:1981qv; @Derendinger:1983aj; @Antoniadis:1987dx] as a possible mechanism to break the GUT group and solve doublet-triplet splitting.
Given the difficulty in constructing a realistic flipped $SU(5)$ model [@Kuflik:2010dg], it will not be considered here. The remaining possibility is to break the higher dimensional gauge theory by Wilson lines and will be discussed below.

Moduli Stabilization
--------------------

In the mid-80’s it was realized that, classically, string vacua contain a plethora of moduli fields. The standard lore was that, after supersymmetry breaking, the moduli fields would obtain masses and appropriate vacuum expectation values. Part of this lore was also the idea that strong dynamics in a hidden sector would be responsible for breaking supersymmetry at, or around, the TeV scale. Though some progress was made, it was not until recently that it was clearly demonstrated that these ideas can be completely realized in string/$M$ theory: in $M$ theory compactified on a $G_2$-manifold (without fluxes) strong gauge dynamics can generate a potential which stabilizes all moduli and breaks supersymmetry at a hierarchically small scale [@Acharya:2006ia; @Acharya:2007rc]. These vacua will be the starting point for our considerations. In these vacua, the gravitino mass (and therefore also the moduli masses [@Acharya:2010af]) $m_{3/2} \sim {\Lambda^3 \over m_{pl}^2}$, where $\Lambda$ is the strong coupling scale of the hidden sector gauge interaction. This is parametrically of order $\Lambda \sim e^{ -2\pi /( \alpha_{h}b)} m_{pl}$, where $\alpha_h$ is the coupling constant of the hidden sector and $b$ is a beta-function coefficient. The vacuum expectation values of the moduli fields are also determined in terms of $\alpha_h$. Roughly speaking, one has $s^A \sim 1/\alpha_h$, where the modulus here is dimensionless and not yet canonically normalized. The physical meaning of the vevs of $s^A$ is that they characterize the volumes, in eleven dimensional units, of 3-cycles in the extra dimensions, e.g., the 3-cycle that supports the hidden sector gauge group.
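The hierarchy generated by dimensional transmutation is easy to see numerically. A small sketch (the values of $\alpha_h b$ and the reduced Planck mass are illustrative assumptions, not taken from the text):

```python
import math

m_pl = 2.4e18          # reduced Planck mass in GeV (assumed value)
alpha_h_b = 0.6        # illustrative value of the product alpha_h * b

# Lambda ~ exp(-2*pi/(alpha_h*b)) * m_pl  and  m_{3/2} ~ Lambda^3 / m_pl^2
Lam = math.exp(-2 * math.pi / alpha_h_b) * m_pl
m_32 = Lam**3 / m_pl**2

print(Lam, m_32)   # Lambda ~ 1e14 GeV, m_{3/2} in the tens-of-TeV range
```

An O(1) change in $\alpha_h b$ moves $m_{3/2}$ by many orders of magnitude, which is the point: a weakly coupled hidden sector in the UV self-consistently produces a hierarchically small gravitino mass.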
Thus, self-consistently, when the hidden sector is weakly coupled in the UV, the moduli are stabilized at large enough volumes to trust the supergravity potential, which only makes sense in this regime. In general, the rough formula exhibits the scaling with $\alpha_h$ and, numerically, the moduli vevs in the vacua considered thus far range from about $1 \leq s^A \leq 5/\alpha_h$. In order to incorporate the moduli vevs into the effective field theory in an $M$ theory vacuum, we have to consider the normalized dimensionful vevs which appear in the Einstein frame supergravity Lagrangian. For obtaining the normalization it suffices to consider the moduli kinetic terms alone: $m_{pl}^2 \, \frac{1}{2}\, g_{AB}\, \partial_\mu s^{A}\, \partial^\mu s^{B}$ \[modulikin\] where $s^A$ are the dimensionless moduli described above and $g_{AB}$ is the (Kahler) metric on the moduli space. From the fact that the extra dimensions have holonomy $G_2$, it follows that each component of $g_{AB}$ is homogeneous of degree [*minus*]{} two in the moduli fields, $g_{AB} = \partial_A \partial_B K = \partial_A \partial_B \left( -3 \ln V_7 + \dots \right)$, because the volume of $X$, $V_7$, is homogeneous of degree $7/3$. For isotropic $G_2$-manifolds, i.e. those which receive similar order contributions to their volume from each of the $N$ moduli, studying examples shows that, not only is the metric of order ${1 \over s^2}$, but also of order $1/N$: $g \sim {1 \over N s^2}$. Therefore in a given vacuum the order of magnitude of the entries of $g_{AB}$ are $g \sim {\alpha_h^2 \over N}$. Therefore a dimensionless modulus vev of order $1/\alpha_h$ translates into a properly normalized dimensionful vev $\langle \hat{s} \rangle \sim \sqrt{g}\, \langle s \rangle\, m_{pl} \sim {1 \over \sqrt{N}}\, m_{pl} \sim 0.1\, m_{pl}$ for $N\sim 100$, which is a typical expectation for the number of moduli [@joyce][^1].
Clearly, however, a $G_2$-manifold with fewer than ten or so moduli will not have suppressed, normalized moduli vevs; such cases are presumably unlikely candidates for $G_2$-manifolds with realistic particle spectra and will not be considered further. We also briefly discuss the spectrum of Beyond Standard Model (BSM) particles which arise from the $M$ theory vacuum. Classically, it is well known that string/$M$ theory has no vacuum with a positive cosmological constant (de Sitter minimum). From the effective field theory point of view, this is the statement that moduli fields tend to have potentials which, in the classical limit, have no de Sitter minimum. If we now consider quantum corrections to the moduli potential, which [*only*]{} involve the moduli fields – if they are computed in a perturbative regime – they tend to be small and hence are unlikely to generate de Sitter vacua. Positive, larger sources of vacuum energy must therefore arise from other, non-moduli fields. This is indeed the case in the $M$ theory vacua described in [@Acharya:2007rc]. Here the dominant contribution to the vacuum energy arises from a [*matter*]{} field in the hidden sector (where it can be shown that, without the matter field, no de Sitter vacuum exists). This is important for the following reasons. Adopting supersymmetric terminology, this suggests that the fields with the dominant $F$-terms are not moduli. Hence, the moduli $F$-terms are suppressed relative to the dominant contribution (in fact, in $M$ theory the suppression is of order $\alpha_h$). This affects the spectrum of BSM particles. In string/$M$ theory, gaugino masses are generated through the $F$-terms of the moduli (because the gauge coupling function is a superfield containing volume moduli). Hence, at leading order these will be suppressed relative to, say, scalar masses which receive order $m_{3/2}$ contributions from all $F$-terms in the absence of accidental symmetries.
Therefore, in the $G_2$-MSSM (and presumably other classes of string vacua) the scalar superpartners and moduli fields will have masses of order $m_{3/2}$ whereas the gauginos will have masses which are suppressed; in fact in the $G_2$-MSSM the gaugino masses at the GUT scale are at least two orders of magnitude below $m_{3/2}$. This is what makes the anomaly mediated contributions to gaugino masses relevant to the $G_2$-MSSM and also why the models often contain a Wino LSP [@Acharya:2008zi]. Important for our considerations below will be the fact that the suppression of the gaugino masses is greater than the suppression of moduli vevs discussed above by one order of magnitude (at the GUT scale), at least for $G_2$-manifolds with less than O($10^4$) moduli.

Geometric Symmetries and Moduli Transformations
-----------------------------------------------

Compact, Ricci-flat manifolds with finite fundamental groups, such as manifolds with holonomy $G_2$ or $SU(3)$, cannot have continuous symmetries. They can, however, have [*discrete*]{} symmetries. Witten was considering just such a discrete symmetry ($G$) of a $G_2$-manifold when he proposed the symmetry which prevents $\mu$. Assuming the simplest possibility of an Abelian discrete symmetry, let us consider $G = {\bf Z_N}$, which acts on $X$ as $\rho : X \rightarrow X$. As a result of this, it will also act naturally on the fields on $X$. In particular ${\bf Z_N}$ will act on the set of harmonic forms on $X$. Our interest here is $H^3 (X, {\bf R})$, the set of harmonic 3-forms on $X$, since this locally represents the moduli space of $G_2$-manifolds. A $G_2$-manifold with moduli at a point $\langle s^A \rangle = s_0^A$ is determined by a (locally) $G_2$ invariant harmonic 3-form $\varphi$ as $\varphi = s_0^A\, \beta_A$, where $\beta_A$ are a basis for $H^3 (X, {\bf R})$.
If the point $s^A_0$ is such that ${\bf Z_N}$ is a symmetry, then $\varphi$ will be invariant under ${\bf Z_N}$, because invariance of $\varphi$ is equivalent to invariance of the metric. The three-forms $\beta_A$ transform in a representation of ${\bf Z_N}$, which is a real representation because the 3-forms are real on a $G_2$-manifold. Hence, under the action of ${\bf Z_N}$: $\beta_A \mapsto M_A^{\ B}\, \beta_B$, where $M$ is defined by this equation. The fact that the particular $G_2$-manifold, characterized by the particular point in moduli space $s_0^A$, is ${\bf Z_N}$-invariant is simply the statement that $s_0^B\, M_B^{\ A} = s_0^A$, i.e., the $s_0^A$ are an eigenvector of $M$ with unit eigenvalue. Clearly, this will not be true for a generic vector $s^A$; hence, for a generic point in the moduli space, the entire ${\bf Z_N}$ symmetry will be broken. Since the representation of ${\bf Z_N}$ defined by the matrix $M$ is real, it must be the sum of a complex representation plus its conjugate. Thus, the basis $\beta_B$ can be chosen such that the complex representation is spanned by [*complex*]{} linear combinations of moduli fields. For instance, there might be a linear combination $S = \hat{s}^1 + i\, \hat{s}^2$ \[complexmoduli\] which we choose to write in terms of the dimensionful fields ($\hat{s}$), that transforms as $S \rightarrow e^{2\pi i/N}\, S$. Since we usually consider complex representations of discrete symmetries acting on the matter fields in effective field theories, it will be precisely the linear combinations of moduli (those in the form (\[complexmoduli\])) which span ${\bf r_C}$ which will appear in the “symmetry breaking sector” of the effective Lagrangian. In other words, the moduli will appear in complex linear combinations such as (\[complexmoduli\]) in the Kahler potential operators containing other fields that transform under the ${\bf Z_N}$. Note that in (\[complexmoduli\]) we are abusing notation in the sense that the “$i$” which appears is in general an $N$-by-$N$ matrix whose square is minus the identity.
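The statement that a real representation pairs moduli into complex combinations transforming by a phase can be made concrete with a toy two-dimensional example: a ${\bf Z_N}$ generator acting on a pair $(\hat{s}^1, \hat{s}^2)$ as a rotation by $2\pi/N$ acts on $S = \hat{s}^1 + i\, \hat{s}^2$ as multiplication by $e^{2\pi i/N}$. A quick numerical check (purely illustrative, with $N=4$):

```python
import cmath
import math

N = 4
theta = 2 * math.pi / N

def act(s1, s2):
    """Z_N generator as a real 2x2 rotation on the pair (s1, s2)."""
    return (math.cos(theta) * s1 - math.sin(theta) * s2,
            math.sin(theta) * s1 + math.cos(theta) * s2)

s1, s2 = 0.3, 0.7
t1, t2 = act(s1, s2)

# The complex combination S = s1 + i s2 picks up exactly the phase e^{i theta}
S_before = complex(s1, s2)
S_after = complex(t1, t2)
assert abs(S_after - cmath.exp(1j * theta) * S_before) < 1e-12
```

The real 2-dimensional representation is the sum of the phase representation and its conjugate, exactly as in the text; the "$i$" here is the honest complex unit because the toy example is two-dimensional.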
Witten’s Solution
=================

In heterotic and type-II string theories doublet-triplet splitting is often solved via orbifold compactifications [@Hosotani:1983vn; @Witten:1985xc]. In these theories, higher (space-time) dimensional gauge symmetries are broken by the Wilson lines in an orbifold compactification, while the Kaluza-Klein zero mode Higgs triplets are absent due to non-trivial transformations under the orbifold symmetry. In contrast, matter fields in $M$ theory are co-dimension 7, that is, the fields live only in four dimensions, and are not zero modes of a KK tower of fields, so this solution to the $\mu$-problem will not work. Other possibilities, such as NMSSM realizations or string instanton effects, will also not work, since the symmetry that forbids $\mu$ (a $U(1)$ or stringy selection rules) would also forbid the triplet mass, thus spoiling doublet-triplet splitting. One may also consider the possibility that a discrete $R$-symmetry can forbid the $\mu$-term while solving doublet-triplet splitting. Requiring the symmetry to be anomaly free and to commute with the gauge theory can lead to a unique symmetry [@Lee:2010gv]. However, this symmetry will also forbid the triplet mass and spoil doublet-triplet splitting unless the triplets are absent from the four dimensional theory. For most string theories, this can be accomplished by a Wilson line in the higher dimensional theory, but in $M$ theory, this is not possible since matter only exists in four dimensions. Therefore, an alternative approach is needed to solve doublet-triplet splitting in $M$ theory. The only known possibility, originally discussed by Witten, is to construct a discrete ${\bf Z_N}$ symmetry of the geometry that will act on both matter fields and moduli fields.
When combined with a discrete Wilson line that breaks $SU(5)$, this symmetry need not commute with the $SU(5)$, thus allowing components of a single $SU(5)$ representation to have different ${\bf Z_N}$ charges. The above arguments demonstrate that there must be a symmetry that acts differently on doublets and triplets; so far this is the only approach known to work, and it may be the only solution. The minimal $SU(5)$ matter content contains three generations of matter descending from three copies of $\bf{10}_M \oplus \bf{\bar{5}}_M$. There is also a $\bf{5}_H \oplus \bf{\bar{5}}_{H}$ pair containing the MSSM Higgs doublets, $H_u \oplus H_d$, and a vector-like pair of Higgs triplets, $T_u \oplus T_d$. Here a doublet and a triplet from one of the Higgs representations can transform differently under the ${\bf Z_N}$ symmetry group. Without loss of generality or phenomenology, this field is taken to be the $\bf{\bar{5}}_{H}$ field, with the following charges for the fields $$\begin{array}{rc|c} \multicolumn{2}{l}{\mbox{Field}} & {\bf Z_N} \\ \hline \multicolumn{2}{c|}{ \mathbf{10}_M } & \eta^\sigma \\ \multicolumn{2}{c|}{ \mathbf{\overline{5}}_M } & \eta^\tau \\ \multirow{2}{*}{$\mathbf{5}_H $} & T_u & \eta^\alpha \\ & H_u & \eta^\alpha \\ \multirow{2}{*}{$\mathbf{\overline{5}}_{\overline{H}} $} & T_d & \eta^\gamma \\ & H_d & \eta^\delta \end{array}$$ where $\eta\equiv e^{2\pi i / N}$.
These charges are constrained by the requirement that the $\bf Z_N$-symmetry does not forbid necessary terms in the superpotential, such as Yukawa couplings, Majorana neutrino masses, and the Higgs triplet masses $$\begin{array}{rc|c} \multicolumn{2}{c}{\mbox{Coupling}\;\;\;\;\;} & \mbox{Constraint }\\ \hline \mbox{Up Yukawa Coupling} & \mathbf{10}_M \mathbf{10}_M H_u & 2 \sigma + \alpha = 0 \mod N\\ \mbox{Down Yukawa Coupling} & \mathbf{10}_M \mathbf{\overline{5}}_M H_d & \sigma + \tau + \delta = 0 \mod N\\ \mbox{Majorana Neutrino Masses} & H_u H_u \mathbf{\overline{5}}_M \mathbf{\overline{5}}_M & 2 \alpha + 2 \tau = 0 \mod N\\ \mbox{Triplet Masses} & T_u T_d & \alpha + \gamma= 0 \mod N.\\ \end{array}$$ The solution to these equations is $$\begin{array}{ccl} \alpha &=& -2 \sigma \\ \gamma &=& 2 \sigma \\ \delta &=& -3 \sigma + N/2 \\ \tau &=& 2 \sigma + N/2 \\ \sigma &=& \sigma. \end{array}$$ By design, the $\bf Z_N$ should forbid the $\mu$-term and, if possible, other dangerous terms, such as dimension-5 proton decay operators and dimension-3 and -4 R-parity violation. $$\begin{array}{rc|c} \multicolumn{2}{c}{\mbox{Coupling}} & \mbox{Constraint }\\ \hline \mu-\mbox{term} & H_d H_u & -5 \sigma + N/2 \ne 0 \mod N\\ \mbox{D-5 Proton Decay} & \mathbf{10}_M \mathbf{10}_M \mathbf{10}_M \mathbf{\overline{5}}_M & 5\sigma - N/2 \ne 0 \mod N\\ \mbox{D-3 R-Parity} & \mathbf{5}_H \mathbf{\overline{5}}_M & N/2 \ne 0 \mod N\\ \mbox{D-4 R-Parity} & \mathbf{10}_M \mathbf{\overline{5}}_M \mathbf{\overline{5}}_M & 5\sigma \ne 0 \mod N.\\ \end{array}$$ Doublet-triplet splitting occurs if $5 \sigma \ne N/2 \mod N $. If one only wants to solve doublet-triplet splitting while forbidding the $\mu$-term, then there is a solution for $N=2$ and $\sigma=1$. Forbidding all the dangerous operators above can be accomplished with a ${\bf Z_4}$ symmetry.
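The modular arithmetic above is mechanical and easy to verify. The following check (our own bookkeeping script, not from the paper) confirms that for the ${\bf Z_4}$ case with $\sigma=1$ the displayed solution allows all four required couplings and forbids the four dangerous ones:

```python
N, sigma = 4, 1   # the Z_4 case quoted in the text

# Charges from the displayed solution, reduced mod N
alpha = (-2 * sigma) % N            # H_u (and T_u)
gamma = (2 * sigma) % N             # T_d
delta = (-3 * sigma + N // 2) % N   # H_d
tau   = (2 * sigma + N // 2) % N    # 5bar_M

# Total Z_N charge of each operator; 0 mod N means the operator is invariant
allowed = {
    "up Yukawa 10 10 H_u":       (2 * sigma + alpha) % N,
    "down Yukawa 10 5bar H_d":   (sigma + tau + delta) % N,
    "Majorana neutrino masses":  (2 * alpha + 2 * tau) % N,
    "triplet mass T_u T_d":      (alpha + gamma) % N,
}
forbidden = {
    "mu-term H_d H_u":               (delta + alpha) % N,
    "D-5 proton decay 10 10 10 5bar": (3 * sigma + tau) % N,
    "D-3 R-parity 5_H 5bar_M":       (alpha + tau) % N,
    "D-4 R-parity 10 5bar 5bar":     (sigma + 2 * tau) % N,
}

assert all(q == 0 for q in allowed.values())
assert all(q != 0 for q in forbidden.values())
```

The $\mu$-term charge computed from the solution, $\delta + \alpha$, indeed reduces to the $-5\sigma + N/2$ quoted in the table.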
An essential point is that the existing bounds coming from the LEP experiments assert that the masses of charged Higgsinos are at least 100 GeV, hence an effective $\mu$-term must be generated. In our context here this implies that the ${\bf Z_N}$ symmetry must be broken, an aspect not discussed in [@Witten:2001bf]. This symmetry breaking is the subject of the next section.

Generating $\mu$
================

As discussed above, the ${\bf Z_N}$ symmetry is a geometric symmetry of the internal $G_2$ manifold, under which the moduli are charged. The $G_2$ moduli [@Acharya:2006ia] reside in chiral supermultiplets whose complex scalar components, $z^j = t^j + i\, s^j$, are formed from the geometric moduli of the manifold[^2], $s_i$, and axionic components of the three-form $C$-field, $t_i$. We expect the moduli to break the discrete symmetry just below the Planck scale when their vevs are stabilized [@Acharya:2007rc; @Acharya:2008zi] (see Section (2.2)), $\langle \hat{s} \rangle \sim 0.1\, m_{pl}$ \[modulivev\]. Likewise, the moduli $F$-terms are expected to give gaugino masses in the usual way, so that $\langle F \rangle \sim m_{1/2}\, m_{pl}$ \[moduliFvev\], where $m_{1/2}$ is the tree level gaugino mass at the GUT scale. The axion shift symmetries $t_i \rightarrow t_i + a_i $ require that only imaginary parts of the moduli appear in perturbative interactions. The superpotential, being holomorphic in the fields, will not contain polynomial terms that explicitly depend on the moduli. The $\mu$-term can then only be generated via Kahler interactions when supersymmetry is broken, via a Giudice-Masiero-like mechanism [@Giudice:1988yz], i.e., from Kahler potential couplings quadratic in the Higgs fields.
To understand the size of $\mu$ (and $B\!\mu$) we first find a combination of moduli fields (or product of moduli fields), invariant under the axion symmetries, that transforms under (a complex representation of) $\bf Z_N$ with charge $5 \sigma - N/2$, $$S^1 = \tilde{s}^{\,i} + i\, \tilde{s}^{\,j},$$ \[S1\] and another with charge $-5 \sigma - N/2$, $$S^2 = \tilde{s}^{\,m} + i\, \tilde{s}^{\,n}.$$ \[S2\] These fields have the correct charge to break the symmetry and generate the $\mu$-term, which has total ${\bf Z_N}$ charge $-5 \sigma - N/2$. In a general supergravity theory [@Wess:1992cp; @Brignole:1997dp] the fermion mass matrix is $$m^{\psi}_{ij} = m_{pl}^3\, e^{G/2} \left( \nabla_i G_j + \frac{1}{3}\, G_i G_j \right)$$ \[fermmasses\] and the holomorphic components of the scalar mass matrix are $$m^{\phi\,2}_{ij} = m_{pl}^4\, e^{G} \left( \nabla_i G_j + G^k\, \nabla_i \nabla_j G_k \right)$$ \[scalarmasses\] where $G = m_{pl}^{-2} K + \ln (m_{pl}^{-6} |W|^2)$ and subscripts on $G$ denote derivatives with respect to the scalar fields $\phi_i$ or their conjugates $\phi^{*}_{\bar{i}}$. Respectively, (\[fermmasses\]) and (\[scalarmasses\]) can be used to find $\mu$, $$\mu = m_{3/2}\, \tilde{K} - F^{\bar{k}}\, \tilde{K}_{\bar{k}},$$ and $B\mu$, $$\begin{array}{ccl} B\mu &=& 2\, m_{3/2}^2\, \tilde{K} - m_{3/2} F^{\bar{k}}\, \tilde{K}_{\bar{k}} + m_{3/2} F^m\, \tilde{K}_{m}\\ &-& \left( m_{3/2} F^m K^{n \bar{p}} K_{m \bar{p}}\, \tilde{K}_{n} + (\mbox{h.c.})\right)\\ &-& F^n F^{\bar{m}} \left( \tilde{K}_{n \bar{m}} - K^{j \bar{p}} K_{n \bar{p}}\, \tilde{K}_{j \bar{m}} + (\mbox{h.c.})\right) \end{array}$$ where $\tilde{K}$ denotes the coefficient of $H_u H_d$ in the Kahler potential, the indices run over the moduli fields, and we have used that the superpotential does not contribute to either mass. Leading contributions come from Kahler potential terms $$K \supset \alpha\, \frac{S^1}{m_{pl}}\, H_u H_d + \beta\, \frac{S^{2 \dagger}}{m_{pl}}\, H_u H_d + h.c.$$ \[Kahlermu\] where the coefficients $\alpha,\beta$ are expected to be $\mathcal{O}(1)$. Plugging the Kahler potential (\[Kahlermu\]) into the formulas for $\mu$ and $B\!\mu$ gives $$\begin{array}{ccl} \mu &=& m_{3/2}\, \frac{\alpha \langle S^1 \rangle + \beta \langle S^{2 \dagger} \rangle}{m_{pl}} - \frac{\beta \langle F^{\bar{S}^2} \rangle}{m_{pl}}\\ B\mu &=& 2\, m_{3/2}^2\, \frac{\alpha \langle S^1 \rangle + \beta \langle S^{2 \dagger} \rangle}{m_{pl}} + m_{3/2}\, \frac{\alpha \langle F^{S^1} \rangle}{m_{pl}} - m_{3/2}\, \frac{\beta \langle F^{\bar{S}^2} \rangle}{m_{pl}}. \end{array}$$
\[muterm\] However, as a result of (\[modulivev\]), (\[moduliFvev\]) and the suppression of $m_{1/2}$ by about two orders of magnitude in the $G_2$-MSSM, $\langle S^i \rangle m_{3/2} \simeq 10 \;\langle F^{S^i} \rangle$, the contribution to the masses coming from the $F$-terms is sub-dominant, at least if we assume that $N\ll10^4$. Therefore, to a good approximation, $$B \simeq 2\, m_{3/2},$$ a fact which will have significant phenomenological consequences[^3]. The coefficients of the operators in (\[Kahlermu\]) are in principle determined from $M$ theory, but it is not yet known how to calculate them precisely. It is natural to assume that the coupling coefficients are of $\mathcal{O}(1)$. When combined with a model of moduli stabilization, such as in the $G_2$-MSSM described in [@Acharya:2006ia; @Acharya:2007rc; @Acharya:2008zi] and briefly reviewed in Section (2.2), $\mu$ and $B\!\mu$ can be approximately determined. Since the real and imaginary components of the complex fields $S^1$ (\[S1\]) and $S^2$ (\[S2\]) are expected to have similar, but not necessarily identical, vevs, $\mu$ will generically have a phase that is unrelated to the phases that enter the gaugino masses. But $B\mu$ and $\mu$ will have the same phase, since both are proportional to $S^1$ and the same coupling constant. Before moving on to the next section we discuss the possibility that other matter fields may be charged under the ${\bf Z_N}$ symmetry, spontaneously break the ${\bf Z_N}$ symmetry, and generate $\mu$. Consider an $SU(5)$-singlet matter field $X$ that generates the $\mu$-term via the superpotential coupling $X H_u H_d$. Since $X$ is a matter field, $M$ theory requires that it is charged under at least one $U(1)$ symmetry. Then $H_u H_d$ is not invariant under the $U(1)$, and consequently the triplet mass term $T_d T_u$ is not invariant either, spoiling doublet-triplet splitting. Thus, such contributions should not occur.
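The hierarchy between the gravitino-mass and $F$-term contributions quoted above follows from simple bookkeeping. A rough numerical sketch (the input numbers are the ballpark values quoted in the text, not a precise $G_2$-MSSM output):

```python
# Order-of-magnitude sketch: with <S>/m_pl ~ 0.1 (modulivev) and gaugino
# masses suppressed to m_1/2 ~ 10^-2 m_3/2, i.e. <F> ~ 10^-2 m_3/2 m_pl
# (moduliFvev), compare the two contributions to mu and form B ~ 2 m_3/2.
m_pl = 2.4e18        # reduced Planck mass, GeV
m_32 = 2.0e4         # gravitino mass ~ 20 TeV
S_vev = 0.1 * m_pl   # moduli vev
F_S = 1e-2 * m_32 * m_pl   # F-term inferred from the suppressed m_1/2

grav_term = S_vev * m_32   # m_3/2 <S> piece of mu (before dividing by m_pl)
ratio = grav_term / F_S    # expected ~ 10, as quoted in the text

mu = grav_term / m_pl      # ~ 0.1 m_3/2 ~ 2 TeV
B = 2 * m_32               # so B mu ~ 2 m_3/2 mu
```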
Alternatively, the $\mu$-term may be generated by a $U(1)$ invariant combination of two fields, for example by the operator $$\frac{X_1 X_2}{\Lambda}\, H_u H_d.$$ Requiring $\mu \gtrsim 10^3$ GeV, and taking $\Lambda \sim M_{GUT}$, this would require $\sqrt{ \langle X_1 X_2 \rangle } \gtrsim 10^{10}$ GeV. Radiative symmetry breaking will generally give a vev $\sim m_{3/2}$; usually, larger vevs are associated with FI $D$-terms. But since FI $D$-terms are absent in $M$ theory, it may be difficult for such large vevs to arise in this framework. The recent results of [@Acharya:2011kz] do suggest that the $F$-term potential can generate large matter field vevs, however in that case the vevs are too large to be relevant for the $\mu$ problem. Therefore, we very tentatively conclude that a matter field spurion is not responsible for breaking the ${\bf Z_N}$ symmetry and giving a physically relevant $\mu$-term. Finally, we comment on a potential domain wall problem. The moduli are stabilized away from a ${\bf Z_N}$ symmetric point, which implies that the ${\bf Z_N}$ symmetry was really only an approximate symmetry of the $G_2$-manifold; the stabilized moduli vevs parameterize the amount by which the $G_2$-manifold differs from a ${\bf Z_N}$ symmetric manifold. Therefore, since the ${\bf Z_N}$ symmetry is not an exact symmetry of the $G_2$ manifold, the Lagrangian will explicitly break the ${\bf Z_N}$ symmetry, and domain walls would not have formed in the early universe. Origin of $R$-Parity in $M$ theory ================================== In the Standard Model, the Yukawa couplings and Higgs potential form the most general set of renormalizable couplings consistent with the gauge symmetries. In this sense, baryon (B) and lepton (L) number are accidental symmetries of the theory. However, this is not the case in supersymmetric theories, which allow for the B and L violating renormalizable couplings[^4] $$W_{\not{R}} = \lambda^{\prime}\, L L e^c + \lambda^{\prime\prime}\, L Q d^c + \lambda^{\prime\prime\prime}\, u^c d^c d^c + \kappa\, L h_u.$$
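The $\sqrt{\langle X_1 X_2 \rangle}$ estimate above is simple arithmetic; a one-line check (our own sketch, with the standard ballpark value $M_{GUT} \sim 2\times 10^{16}$ GeV assumed):

```python
# mu ~ <X1 X2>/Lambda with Lambda ~ M_GUT requires sqrt(<X1 X2>) of order
# 10^10 GeV for mu ~ 10^3 GeV. Input numbers are generic ballpark values.
from math import sqrt

mu = 1e3          # GeV
M_GUT = 2e16      # GeV
vev_scale = sqrt(mu * M_GUT)   # ~ 4.5e9 GeV, i.e. of order 10^10 GeV
```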
\[WRbreaking\] If the squark masses are not of order the GUT scale (which presumably they are not), these operators can lead to too-rapid proton decay unless they are heavily suppressed. Hence one usually introduces $R$-parity, where the Standard Model fields have $R$-parity $+1$, while their supersymmetric partners have $R$-parity $-1$. This forbids all the couplings in (\[WRbreaking\]). Additionally, $R$-parity invariance ensures the stability of the LSP, and the absence of an $R$-parity can eliminate the LSP as a dark matter candidate. Therefore, in this section we will discuss the origin of $R$-parity in $M$ theory, or at least of an approximate $R$-parity that leaves the proton and LSP very long lived. Of course, from a theoretical point of view an $R$-parity or equivalent symmetry should emerge from the theory and not be put in by hand. The ${\bf Z_N}$ symmetry constructed in Section 3 contains $R$-parity, but for generic moduli charges the complete ${\bf Z_N}$ symmetry, including any $R$-parity subgroup, will be spontaneously broken. Although the ${\bf Z_N}$ symmetry will prevent the superpotential couplings in (\[WRbreaking\]) from being invariant, supersymmetry breaking will revitalize the operators just as in the case of the $\mu$-term, from Kahler potential operators $$K_{\not{R}} \supset \frac{\tilde{S}^{\dagger}}{m_{pl}^2}\, L L e^c + \frac{\tilde{S}^{\dagger}}{m_{pl}^2}\, L Q d^c + \frac{\tilde{S}^{\dagger}}{m_{pl}^2}\, u^c d^c d^c + \frac{\tilde{S}^{\dagger}}{m_{pl}}\, L h_u$$ \[kahlerbad1\] where the $\tilde{S}^{ \dagger}$’s symbolically represent the moduli and need not all be the same. Just as the $\mu$-term was generated from the Kahler potential as a result of moduli stabilization, the effective superpotential couplings generated by the supersymmetry breaking contribution from (\[kahlerbad1\]) can be estimated as $$\begin{array}{ccl} \lambda_{i j k} &\sim& m_{pl}^{-2}\, \left( m_{3/2}\, \langle \tilde{S}^{\dagger} \rangle + \langle F_{\tilde{S}}^{\dagger} \rangle \right)\\ \kappa &\sim& m_{pl}^{-1}\, \left( m_{3/2}\, \langle \tilde{S}^{\dagger} \rangle + \langle F_{\tilde{S}}^{\dagger} \rangle \right) \end{array}$$ \[kappalambda\] for $\lambda = \lambda^{\prime}, \lambda^{\prime\prime}, \lambda^{\prime\prime\prime}$ and where $i,j,k$ run over the matter fields.
Comparing (\[kappalambda\]) to (\[muterm\]), one easily sees that $\kappa \sim \mu$, since both are generated the same way. Then, using $\kappa \sim \mu$, the superpotential can be rewritten as $$W_{\not{R}} \sim \frac{\mu}{m_{pl}} \left( L L e^c + L Q d^c + u^c d^c d^c \right) + \mu\, L h_u.$$ The trilinear couplings are suppressed, but the lepton number violating bilinear coupling is large and of order the $\mu$-term; this is simply a consequence of $\kappa$ not being suppressed. After rotating away the $L h_u$ term using the approximation (\[muyukawa\]), the superpotential simplifies to $$W_{\not{R}} \sim y_e\, L L e^c + y_d\, L Q d^c + \frac{\mu}{m_{pl}}\, u^c d^c d^c$$ \[WRbad1\] where smaller terms in $\lambda^{\prime},\lambda^{\prime\prime},\lambda^{\prime\prime\prime}$ have been dropped. Thus the lepton number violating trilinears pick up large contributions from the bilinear term, even if they were originally suppressed. The proton lifetime for the decay mode $p \rightarrow e^{+} \pi^0$ is estimated to be $$\tau_{p \rightarrow e^{+} \pi^0} \sim \frac{ m_{\tilde{q}}^4 }{ \lambda^{\prime\prime\,2}\, \lambda^{\prime\prime\prime\,2}\, m_{proton}^5 }.$$ The current bound on this partial decay width is $\tau_{p \rightarrow e^{+} \pi^0} > 1.6 \times 10^{33}$ years [@Amsler:2008zzb]. For scalar masses in the $G_2$-MSSM ($\sim 10 \operatorname{TeV}$, see [@Acharya:2008zi]) this gives the experimental bound $$\lambda^{\prime\prime}\, \lambda^{\prime\prime\prime} \lesssim 10^{-24}$$ which clearly excludes the superpotential (\[WRbad1\]), since $ \lambda^{\prime\prime } \sim y_e \sim 10^{-5}$ and $ \lambda^{\prime\prime \prime} \sim \mu / m_{pl} \sim 10^{-14}$. Therefore, proton stability requires an additional form of $R$-parity invariance beyond the discrete symmetry proposed. One possible way to preserve $R$-parity is to simply assume that the $G_2$-manifold in the vacuum is $R$-parity invariant, though not ${\bf Z_N}$ invariant, i.e. the vacuum partially breaks ${\bf Z_N}$ to an $R$-parity subgroup. For example, take $N=6$; then $$\begin{array}{rc|c} \multicolumn{2}{c}{\mbox{Coupling}} & {\bf Z_6}\\ \hline \mu-\mbox{term} & H_d H_u & \eta^4\\ \mbox{trilinears} & \mathbf{10}_M \mathbf{\overline{5}}_M \mathbf{\overline{5}}_M & \eta^5\\ \mbox{bilinear} & \mathbf{\overline{5}}_M H_u & \eta^3\\ \end{array}$$ for $\eta \equiv e^{i 2 \pi /6}$.
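The phases in the ${\bf Z_6}$ table follow from the general charge solution with $\sigma = 1$, and they make the claim of the next paragraph easy to check. A small sketch (our own illustration; the identification of the ${\bf Z_3}$ subgroup with the charge set $\{0,2,4\}$ is our assumption about which charges moduli vevs can supply):

```python
# Z_6 example with sigma = 1: alpha = -2s, delta = -3s + N/2, tau = 2s + N/2.
N, s = 6, 1
alpha = (-2 * s) % N           # H_u
delta = (-3 * s + N // 2) % N  # H_d
tau = (2 * s + N // 2) % N     # 5bar_M
sigma = s % N                  # 10_M

mu_term = (alpha + delta) % N        # H_d H_u        -> eta^4
trilinear = (sigma + 2 * tau) % N    # 10 5bar 5bar   -> eta^5
bilinear = (tau + alpha) % N         # 5bar_M H_u     -> eta^3

# If all moduli carry only Z_3-subgroup charges {0, 2, 4}, products of their
# vevs can only compensate operators whose charge lies in that set:
z3_charges = {0, 2, 4}
mu_generated = mu_term in z3_charges                   # mu-term can arise
rpv_generated = {trilinear, bilinear} & z3_charges     # empty: R-parity kept
```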
If all the moduli transform under the ${\bf Z_3}$ subgroup of ${\bf Z_6}$, then ${\bf Z_6}$ is broken to a ${\bf Z_2}$ $R$-parity, since none of the $R$-parity violating couplings can be generated. This is technically satisfactory, but is presumably “non-generic”. It could certainly emerge from $M$ theory, but we will not consider it further here. Alternatively, $R$-parity may manifest itself as matter parity, a conserved remnant of a local, continuous $U(1)$ symmetry. As is well known, matter parity arises naturally in $SO(10)$ theories. When embedded into an $SO(10)$ unified theory, the Standard Model matter fields belong to a different representation than the Higgs fields: a generation of matter is contained in a $\bf{16}$ of $SO(10)$, while a pair of Higgs doublets comes from a $\bf{10}$ of $SO(10)$. When $SO(10)$ is broken to $SU(5)\times U(1)_{\chi}$, for example by a discrete Wilson line, the Higgs fields and matter fields are charged differently under $U(1)_{\chi}$: $$\begin{array}{ccl} SO(10) & \rightarrow & SU(5)\times U(1)_{\chi}\\ \mathbf{16} & \rightarrow & \mathbf{10}_{-1} \oplus \mathbf{\overline{5}}_{3} \oplus \mathbf{1}_{-5}\\ \mathbf{10} & \rightarrow & \mathbf{5}_{2} \oplus \mathbf{\overline{5}}_{-2} \end{array}$$ where the subscript is the $U(1)_{\chi}$ charge. The vacuum expectation values of the Higgses, which are contained in the $\bf{5}_{2}$ and $\bf{\bar{5}}_{-2}$ multiplets, will break the $U(1)_{\chi}$ symmetry to a discrete ${\bf Z_2}$ subgroup. This is because the Lagrangian is no longer invariant under the full local transformation $\Phi \rightarrow e^{i \alpha(x) q_{\chi} } \Phi$, but only under the subgroup of transformations given by $\alpha(x)=\pi$. In terms of the $U(1)_{\chi}$ charges $q_{\chi}$, the chiral multiplets have ${\bf Z_2}$-parity $e^{i \pi q_{\chi}}$. Thus chiral superfields with even $U(1)_{\chi}$ charge will have parity $+1$ and fields with odd $U(1)_{\chi}$ charge will have parity $-1$. This ${\bf Z_2}$ symmetry is exactly $R$-parity.
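The parity assignment $e^{i\pi q_\chi}$ can be tabulated directly from the branching rules above; the following sketch (our own check, not code from the text) shows that all matter fields from the $\bf{16}$ come out odd while the Higgs doublets from the $\bf{10}$ come out even:

```python
# U(1)_chi charges read off the SO(10) -> SU(5) x U(1)_chi branching.
charges = {
    "10_M": -1, "5bar_M": 3, "nu^c (1_-5)": -5,  # matter, from the 16
    "5_H": 2, "5bar_H": -2,                      # Higgs, from the 10
}
# Z_2 parity e^{i pi q}: +1 for even charge, -1 for odd charge.
parity = {field: (-1) ** (q % 2) for field, q in charges.items()}
```

Since the Higgs vevs carry even charge, the $\bf Z_2$ they leave unbroken acts as $-1$ precisely on matter superfields, which is the defining property of matter parity.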
The only $SU(5)$ singlet with $U(1)_{\chi}$ charge is the $\bf{1}_{-5}$ field (and its conjugate), and thus this is the only field that can break $U(1)_{\chi}$ without breaking the SM gauge group. But since it has odd $U(1)_{\chi}$ charge, its vev will break $R$-parity. Therefore an $SO(10)$ completion of $U(1)_{\chi}$ will not contain an unbroken $R$-parity, but perhaps, when combined with the ${\bf Z_N}$ symmetry, $R$-parity violating operators may be sufficiently suppressed to allow a long lived proton and LSP. Next we estimate these lifetimes. The singlet field $\bf{1}_{-5}$ can be considered to be the right-handed neutrino, $\nu^c$, since it has the right quantum numbers to make the operator $\nu^c h_u L$ invariant under $U(1)_{\chi}$. However, if $\langle \nu^c \rangle \ne 0$, all the baryon and lepton number violating operators in (\[WRbreaking\]) will be generated via the superpotential $$W_{\not{R}} \sim \frac{\nu^c}{m_{pl}}\, L L e^c + \frac{\nu^c}{m_{pl}}\, L Q d^c + \frac{\nu^c}{m_{pl}}\, u^c d^c d^c + \nu^c h_u L.$$ \[WRbad2\] The operators in (\[WRbad2\]) should be suppressed, and can be forbidden by the ${\bf Z_N}$ symmetry. The story will be the same as above and the Kahler potential operators will generate (\[WRbad2\]), but with additional suppression coming from $U(1)_{\chi}$ breaking, $$W_{\not{R}} = \left(\frac{\mu}{m_{pl}}\right) \left(\frac{\langle \nu^c \rangle}{m_{pl}}\right) \left( L L e^c + L Q d^c + u^c d^c d^c \right) + \left(\mu\right) \left(\frac{\langle \nu^c \rangle}{m_{pl}}\right) L h_u.$$ Diagonalizing away the $L h_u$ term and using (\[muterm\]) gives $$W_{\not{R}} \sim \frac{\langle \nu^c \rangle}{m_{pl}} \left( y_e\, L L e^c + y_d\, L Q d^c + \frac{\mu}{m_{pl}}\, u^c d^c d^c \right)$$ \[WRbadc\] where again large lepton number violating trilinear terms are induced by the rotation. ![Decays of the LSP. Only the lepton number violating diagrams are shown, since the lepton number violating couplings, $\lambda^{\prime}$ (in the first line) and $\lambda^{\prime\prime}$ (in the second line), receive large contributions (compared to the baryon number violating couplings) when the bilinear $R$-parity violating term, $h_u L$, is rotated away. Primes on the $L$ indicate that the lepton flavor is different from the slepton flavor.
Figures from [@Martin:1997ns]. \[fig:rparityviolation\]](LSP.png){width="100.00000%"} To be conservative in our estimates, we can take $\langle \nu^c \rangle \sim$ TeV, which may be expected from radiative symmetry breaking [@Ambroso:2010pe]. In this limit, proton decay constraints are safe from $R$-parity violation, but there are more stringent constraints coming from the LSP lifetime. Current bounds on the LSP lifetime are slightly model dependent, but for the most part require [@Dreiner:1997uz] $$\tau_{LSP} \lesssim 1\ \mbox{s} \qquad \mbox{or} \qquad \tau_{LSP} \gtrsim 10^{25}\ \mbox{s}.$$ The first bound excludes the region where the LSP decays would ruin the successful predictions of big bang nucleosynthesis on light nuclei abundances [@Reno:1987qw; @Ellis:1990nb]. The other region is excluded by indirect dark matter detection experiments that search for energetic positrons and anti-protons coming from decaying or annihilating relics [@Berezinsky:1991sp; @Baltz:1997ar; @Arvanitaki:2009yb; @Shirai:2009fq]. The LSP lifetime can be calculated in terms of the general $R$-parity violating superpotential couplings (\[WRbreaking\]). The diagrams in Figure 1 lead to an LSP lifetime $$\tau_{LSP} \sim \frac{1}{\lambda^2} \left( \frac{m_0}{m_{LSP}} \right)^4 \left( \frac{1}{m_{LSP}} \right)$$ where $\lambda = \lambda^{\prime} ,\lambda^{\prime\prime}, \lambda^{\prime\prime\prime}$ and $m_0$ is the mass of the sfermion mediating the decay. Taking $\lambda = \frac{\langle \nu^c \rangle}{m_{pl}} \sim 10^{-15}$, $m_0 \sim 10$ TeV, and $m_{LSP} \sim 100 $ GeV gives $$\tau_{LSP} \sim 10^{17}\ \mbox{s},$$ about the age of the universe. The $R$-parity violating couplings still need to be about $10^{-4}\sim 10^{-5}$ smaller to give an LSP lifetime greater than $10^{25}$ seconds. There are several ways additional suppressions might arise. We have not yet discussed the possibility of there being a horizontal family structure to the couplings. This could appear as a Froggatt-Nielsen symmetry, or a symmetry relating the locations of the matter singularities on the $G_2$ manifold, and would be responsible for forging the quark and lepton hierarchy.
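The $\tau_{LSP}$ scaling used above can be evaluated numerically. The sketch below is our own estimate: we assume a three-body decay width $\Gamma \approx \lambda^2 m_{LSP}^5/(128\pi^3 m_0^4)$ through a heavy virtual sfermion, so the phase-space prefactor is an assumption and the answer is only good to a couple of orders of magnitude around the $\sim 10^{17}$ s quoted in the text:

```python
# Crude LSP lifetime from a trilinear R-parity violating coupling lambda,
# decaying through a sfermion of mass m_0 (assumed 3-body phase space).
from math import pi

hbar = 6.58e-25    # GeV * s
lam = 1e-15        # ~ <nu^c>/m_pl for a TeV-scale vev
m_LSP = 100.0      # GeV
m_0 = 1e4          # sfermion mass ~ 10 TeV, GeV

gamma = lam**2 * m_LSP**5 / (128 * pi**3 * m_0**4)  # decay width, GeV
tau_s = hbar / gamma                                # lifetime, seconds
```

With these inputs the lifetime lands a couple of orders of magnitude below the $10^{25}$ s indirect-detection bound, which is the gap the family-symmetry suppressions discussed next would have to close.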
It may also suppress the LSP decay width past the astrophysical bounds. Family symmetries arise naturally from the $E_8$ structure [@King:2010mq], which can also explain why the Standard Model has three generations, and this may hint towards a larger gauge theory. We leave this issue to future work. If the family symmetry is not the answer, then it may be the case that the resolution of the $E_8$ singularity to $SU(5)$ preserves a $U(1)$ symmetry–whose charges are necessarily given as a linear combination of four $U(1)$s belonging to the coset group $E_8/SU(5)$–that is broken to an exactly conserved $R$-parity. There are two well known examples, $U(1)_\chi$ and $U(1)_\psi$, defined as the symmetries coming from the breakings $SO(10)\rightarrow SU(5) \times U(1)_\chi$ and $E_6\rightarrow SO(10) \times U(1)_\psi$. However, $U(1)_\chi$ does not contain a field that can break $U(1)_\chi$ to $R$-parity, and $U(1)_\psi$ forbids Higgs triplet masses, spoiling doublet-triplet splitting, so neither of these choices gives a conserved $R$-parity. However, there is the possibility of a $U(1)$ symmetry similar to $U(1)_\chi$, in that the MSSM fields and right handed neutrinos have the same charge assignments as under $U(1)_\chi$, but with additional $SU(5)$ singlet fields with even charges[^5]. These theories can then be broken to a [*conserved*]{} $R$-parity when the additional singlets get vevs. It is easy to construct such a linear combination, though it is unclear from a purely theoretical perspective why $G_2$ compactifications would favor this $U(1)$ symmetry. For instance, if $U(1)_a \times U(1)_b$ is the Cartan subgroup of the $SU(3)$ in the breaking pattern $E_8\rightarrow E_6\times SU(3)$, then the $U(1)$ given by the linear combination of charges $$q_\chi + 5(q_a-q_b)$$ allows for conical singularities that give rise to MSSM and right handed neutrino fields with $U(1)_\chi$ charges, but with additional $SU(5)$ singlets with charges $q=\pm 10$.
The vevs of the additional singlets will break the $U(1)$ to a $Z_{10}$ symmetry that contains a $Z_2$ $R$-parity. Finally, we note (for the non string duality oriented reader) that $E_8 \times E_8$ is well motivated theoretically if the $G_2$-manifold is a $K3$ fibration. This is because the intersection matrix of 2-cycles inside $K3$ contains the Cartan matrix of $E_8 \times E_8$. It is in this case, where the gauge theory of $M$ theory matches the gauge theory of the $E_8 \times E_8$ heterotic string, that $M$ theory on a $K3$-fibered $G_2$-manifold and the heterotic string theory on a $T^3$-fibered Calabi-Yau threefold are dual. To summarize, we find that incorporating the $\mu$ parameter into the structure of $M$ theory compactified on a $G_2$-manifold, with stabilized moduli, can lead to a broken discrete symmetry allowing $\mu$ to be non-zero. $R$-parity is slightly broken, giving an LSP lifetime long enough for the LSP to be the dark matter, but not quite long enough to evade satellite detector constraints. The theoretical structure allows for family symmetries, or an embedding of $R$-parity into $E_8$, both of which stabilize the LSP lifetime to be consistent with the experimental constraints. An example of the latter case is given above, so this is indeed a possibility. Either case will lead to the same dark matter phenomenology. The $R$-parity completion of this story is an interesting avenue for further investigation. Phenomenology ============= The $M$ theory framework, along with moduli stabilization in the $G_2$-MSSM, allows one to estimate the high-scale SUSY breaking masses and $\mu$ to within a factor of a few. This allows $M$ theory to make many phenomenological predictions. In some cases even small variations in the high-scale theory can have significant phenomenological consequences.
In particular, the low-scale values of $\mu$ and $\tan\!\beta$ have significant implications for dark matter properties, and thus it is crucial to have a good understanding of their low-scale values when considering the $M$ theory predictions of the high-scale masses. Electroweak Symmetry Breaking ----------------------------- The first and foremost phenomenological constraint is that the theory accurately produce electroweak symmetry breaking (EWSB). That is, the theory must give a stable potential (bounded from below), break the electroweak symmetry, and allow for the correct Z-boson mass. Respectively, these three conditions can be quantified by the following tree level constraints at the EWSB scale: $$\begin{array}{ccc} | B\mu| & \le & \frac{1}{2}\left( m_{H_u}^2 + m_{H_d}^2 \right) + |\mu|^2\\ | B\mu|^2 & \ge & \left( m_{H_u}^2 + |\mu|^2\right)\left( m_{H_d}^2 + |\mu|^2\right)\\ M_Z^2 & = & -2 |\mu|^2 + 2\, \displaystyle\frac{ m_{H_d}^2 - m_{H_u}^2 \tan^2\!\beta }{ \tan^2\!\beta - 1 } \end{array}$$ \[EWSBconstraints\] where $\tan\!\beta$ is not an independent parameter, but is determined by $$\sin 2\beta = \frac{2\, B\mu}{m_A^2}$$ \[bmutanbeta\] with $$m_A^2 = m_{H_u}^2 + m_{H_d}^2 + 2 |\mu|^2$$ \[mA\] where $A$ is the pseudoscalar Higgs boson. To get a feeling for $\tan\!\beta$, we plug the expected values (at the unification scale and with degenerate scalars) of $B\!\mu \simeq 0.2 m_{3/2}^2$ and $m_A^2 \simeq 2 m_{3/2}^2$ into ($\ref{bmutanbeta}$), which gives $\tan\!\beta \simeq 10$. On the other hand, RGE flow will lower the values of both $B\!\mu$ and $m_A^2$, resulting in variations around $\tan\!\beta \simeq 10$. In Section $7.1.1$, a numerical scan will show a lower bound of $\tan\!\beta \gtrsim 5$ when the scalars are taken to be degenerate at the unification scale. The lowest values of $\tan\!\beta$ occur for the smallest values of $m_A^2$. The EW scale value of this mass depends on the running of the Higgs scalar masses, and in turn is very sensitive to the values of the squark masses.
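The tree-level estimate $\tan\!\beta \simeq 10$ can be checked in a couple of lines; this is our own numerical sketch of the relation $\sin 2\beta = 2 B\mu/m_A^2$ with the quoted high-scale inputs, where we take the $\beta \rightarrow \pi/2$ branch (the one with $\tan\!\beta > 1$):

```python
# Tree-level tan(beta) from sin(2 beta) = 2 B mu / m_A^2, with the inputs
# quoted in the text: B mu ~ 0.2 m_3/2^2 and m_A^2 ~ 2 m_3/2^2.
from math import asin, pi, tan

m32 = 1.0                     # work in units of m_3/2
Bmu = 0.2 * m32**2
mA2 = 2.0 * m32**2

sin2b = 2 * Bmu / mA2         # = 0.2
beta = (pi - asin(sin2b)) / 2 # branch with tan(beta) > 1
tanb = tan(beta)              # ~ 10
```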
For specific non-degenerate values of the scalar masses at the unification scale, $m_A^2$ can be of order $B\!\mu$ at the EW scale, resulting in values of $\tan\!\beta < 5$. We will consider this situation in Section $7.1.2$. At tree level the mass of the Z-boson is determined by the four Higgs parameters $$M_Z (m_{H_u}^2, \; m_{H_d}^2, \; |\mu |^2, \; \tan\!\beta).$$ These parameters not only depend on their respective values at the high scale, but also on other masses as a result of RGE-flow. Assuming that the scalar masses are much larger than the gaugino masses, $M_Z$ has the strongest dependence on the Higgs mass parameters and stop masses, $$M_Z( \hat{m}_{H_u}^2, \; \hat{m}_{H_d}^2, \; \hat{B}\!\mu, \; |\hat{\mu} |^2, \; \hat{m}_{Q_3}^2, \; \hat{m}_{U_3}^2 ),$$ where hatted ($~\hat{}~$) masses refer to GUT scale values. Interestingly, the cancellation between the soft scalar masses contributing to $M_Z$ can be significant, even in the case in which the scalar masses are unified at the GUT scale, $$\hat{m}_{H_u}^2 = \hat{m}_{H_d}^2 = \hat{m}_{Q_3}^2 = \hat{m}_{U_3}^2.$$ Naively, what one would have thought was a large fine-tuning between the Higgs soft masses and $\mu$ in eq. (\[EWSBconstraints\]) for $M_Z$ is in fact smaller. This is evident (see Figure \[fig:unifiedmutanbeta\]) from the fact that the scalar masses can be of order the gravitino mass at unification while $\mu$ is an order of magnitude smaller, and the cancellation in eq. (\[EWSBconstraints\]) for $M_Z$ still occurs. In this sense, the ratio $\mu / m_{3/2}$, shown in Figure \[fig:unifiedmutanbeta\], might be considered a measure of the fine-tuning involved in EWSB. In other words, the smaller the ratio, the less fine-tuning there will be of $\mu$ against the scalar masses in order to obtain the correct value of $M_Z$. ### Degenerate Scalars A numerical scan was performed over the $M$ theory parameter space described in [@Acharya:2008zi] using SOFTSUSY [@Allanach:2001kg][^6].
We allow for the following variation in the $G_2$-MSSM parameters: - $10 \operatorname{TeV}\le m_{3/2} \le 20 \operatorname{TeV}$ – the gravitino mass - $10\le V_7 \le 40$ – the volume of the $G_2$-manifold in units of the eleven-dimensional Planck length - $-10 \le \delta \le 0$ – the size of the threshold corrections to the (unified) gauge coupling, $\alpha^{-1}_{\text{GUT}}$. [^7] An interested reader is referred to Section V of [@Acharya:2008zi] for variations in the spectra of $G_2$-MSSM models. In addition, order one variations are allowed for the coefficients in (\[muterm\]), the formula for $\mu$, while it is imposed that $B\!\mu$ lies in the range $$1\, \mu\, m_{3/2} < B\!\mu < 3\, \mu\, m_{3/2}.$$ \[bmubound\] The results are shown in Figure \[fig:unifiedmutanbeta\]. As is evident from the plot, values of $\mu$ much smaller than the gravitino mass are allowed under all the constraints, signaling a non-imposed cancellation among the scalars contributing to $M_Z$. Of note is the fact that $\tan\!\beta$ and $\mu$ are inversely correlated, which will play a significant role in limiting the maximum spin-independent scattering cross-section when the scalar masses are unified at the high scale. ![$\mu/m_{3/2}$ vs. $\tan\!\beta$. The upper band scans over the $G_2$-MSSM parameter space with [*degenerate scalars*]{} at the unification scale. The lower region on the left (low $\tan\!\beta$) scans over the $G_2$-MSSM parameter space where the scalar mass ratio $\hat{m}^2_{H_u}\!:\! \hat{m}^2_{U_3} \!:\! \hat{m}^2_{Q_3} = 3\!:\! 2\!:\! 1$ is required to be accurate within $20\%$. The black points show models that correctly break the EW symmetry, but are inconsistent with the constraint $1\,\mu m_{3/2} < B\!\mu < 3\,\mu m_{3/2}$, so we expect them to not be valid solutions. The red points satisfy the constraint on $B\!\mu$ as given in the legend.
The empty space on the plot, between the two regions, is expected to be filled in by a complete scan over the possible non-degenerate scalar mass parameter space. All points have EWSB.[]{data-label="fig:unifiedmutanbeta"}](mutanb2.png){width="100.00000%"} ### Non-Degenerate Scalars and Low $\tan\!\beta$ We also consider the possibility that $M$ theory allows for scalar unification to be somewhat perturbed (at the factor of two to three level). Since we will eventually be interested in calculating the largest possible spin-independent scattering cross sections, we will only consider high-scale scalar masses that give $\tan\!\beta\lesssim 3$, since the scattering cross sections decrease with increasing $\tan\!\beta$. Consider the 1-loop RGE equations, where only terms proportional to $\lambda_t$ are kept and the running of $\lambda_t$ is neglected. The RGE equations of the relevant scalars are $$\begin{array}{ccl} 8\pi^2\, \frac{d m_{H_u}^2}{dt} &=& 3\, | \lambda_t |^2 \left( m_{H_u}^2 + m_{Q_3}^2 + m_{U_3}^2 + | A_t |^2\right)\\ 8\pi^2\, \frac{d m_{U_3}^2}{dt} &=& 2\, | \lambda_t |^2 \left( m_{H_u}^2 + m_{Q_3}^2 + m_{U_3}^2 + | A_t |^2\right)\\ 8\pi^2\, \frac{d m_{Q_3}^2}{dt} &=& 1\, | \lambda_t |^2 \left( m_{H_u}^2 + m_{Q_3}^2 + m_{U_3}^2 + | A_t |^2\right)\\ 8\pi^2\, \frac{d A_t}{dt} &=& 6\, \lambda_t^2\, A_t \end{array}$$ \[eqn:rge\] whose solution is $$\begin{array}{ccl} m^2_{H_u} &=& \frac{1}{2}\left( \hat{m}^2_{H_u} -\hat{m}^2_{U_3} - \hat{m}^2_{Q_3} + e^{6I}\left( |\hat{A}_t|^2 (-1+ e^{6I}) + \hat{m}^2_{H_u} + \hat{m}^2_{U_3}+ \hat{m}^2_{Q_3}\right) \right)\\ m^2_{U_3} &=& \frac{1}{3}\left(- \hat{m}^2_{H_u}+ 2\hat{m}^2_{U_3} - \hat{m}^2_{Q_3} + e^{6I}\left(|\hat{A}_t|^2 (-1+ e^{6I}) + \hat{m}^2_{H_u} + \hat{m}^2_{U_3}+ \hat{m}^2_{Q_3}\right) \right)\\ m^2_{Q_3} &=& \frac{1}{6}\left(- \hat{m}^2_{H_u} - \hat{m}^2_{U_3} + 5 \hat{m}^2_{Q_3} + e^{6I}\left(|\hat{A}_t|^2 (-1+ e^{6I}) + \hat{m}^2_{H_u} + \hat{m}^2_{U_3}+ \hat{m}^2_{Q_3} \right) \right)\\ A_t^2 &=& \hat{A}_t^2\, e^{12I}\\ \end{array}$$ \[rgesolutions\] where hatted ($\hat{}$) masses indicate GUT scale masses and $I \equiv \frac{|\lambda_t|^2}{8\pi^2} \ln (Q/M_{GUT})$.
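Two properties of (\[rgesolutions\]) are worth checking explicitly: the combinations $m^2_{H_u} - 3 m^2_{Q_3}$ and $m^2_{U_3} - 2 m^2_{Q_3}$ are RG invariants of (\[eqn:rge\]), and at the $3\!:\!2\!:\!1$ boundary condition the non-exponential terms vanish so all three masses stay positive. A numerical sketch (our own; the value chosen for the RG factor $I$ is arbitrary):

```python
# Closed-form running of the scalar masses, following the structure of the
# solutions in the text; I = (|lambda_t|^2 / 8 pi^2) log(Q / M_GUT) < 0 when
# running down from the GUT scale.
from math import exp

def run(mhu2, mu32, mq32, at2, I):
    S_hat = mhu2 + mu32 + mq32
    common = exp(6 * I) * (at2 * (-1 + exp(6 * I)) + S_hat)
    return ((mhu2 - mu32 - mq32 + common) / 2,
            (-mhu2 + 2 * mu32 - mq32 + common) / 3,
            (-mhu2 - mu32 + 5 * mq32 + common) / 6,
            at2 * exp(12 * I))

# 3:2:1 fixed-point boundary condition, in units of the gravitino mass^2.
hu, u3, q3, at2 = run(3.0, 2.0, 1.0, 0.0, -0.05)
```

Because the ratio $3\!:\!2\!:\!1$ is preserved along the flow, the cancellation structure in $M_Z$ is stable under the running, which is what makes this boundary condition the analogue of a focus point.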
Since $m^2_{H_d}$ barely runs for low $\tan\!\beta$, and since $\hat{\mu}^2$ is predicted to be over an order of magnitude smaller than $m^2_{H_d}$, the cancellation in $M_Z$ (\[EWSBconstraints\]) should occur between $m^2_{H_u}$ and $m^2_{H_d}$. Therefore, $m^2_{H_u}$ needs to stay positive at the EWSB scale. Ignoring the exponentially suppressed terms in (\[rgesolutions\]), we see that there are no generic choices of $\{ \hat{m}^2_{H_u}, \hat{m}^2_{Q_3},\hat{m}^2_{U_3} \}$ that leave all the low-scale masses positive. On the other hand, there is a fixed point solution to the above RGEs, $$\hat{m}^2_{H_u}: \hat{m}^2_{U_3} : \hat{m}^2_{Q_3} = 3 : 2 : 1$$ \[fixedpoint\] where the non-exponentially suppressed terms are identically zero, ensuring that, if the trilinears are of order the scalars as expected in the $G_2$-MSSM, all three masses will stay positive. This fixed point is analogous to the focus point solution in minimal supergravity (mSUGRA) theories [@Feng:1999zg; @Chan:1997bi], as it minimizes the fine-tuning of EWSB. However, unlike the focus point region of mSUGRA, where the Higgs scalars run small due to RGE flow, here the scalars remain heavy, close to the gravitino mass. Near this region, low $\tan\!\beta$ parameter space with EWSB can be realized. Results of the numerical scan can be seen in Figure \[fig:unifiedmutanbeta\]. The Nature of the LSP --------------------- As explained in detail in [@Acharya:2008zi], the $G_2$-MSSM framework gives rise to mostly Wino LSPs (as opposed to Bino LSPs). The tree level gaugino masses are degenerate at the GUT scale, but are suppressed by the $F$-terms of the moduli relative to the gravitino mass, so as to be of order the gaugino masses from the anomaly mediation contribution. The additional contribution from the anomaly lifts $M_1$ over $M_2$, leading to mostly Wino LSP models.
In the original $G_2$-MSSM scenario, where it was simply assumed that $\mu \sim m_{3/2}$, there were additional contributions to the gaugino masses from supersymmetric Higgs loops, proportional to $\mu$ [@Pierce:1996zz], that for some choices of high scale parameters could re-lift $M_2$ over $M_1$. These models are disfavored by precision gauge coupling unification [@Acharya:2008zi], and occur less frequently in the parameter space here than in the original models, since $\mu \not{\!\!\!\sim}\, m_{3/2}$. However, a smaller $\mu$ will tend to introduce a small Higgsino admixture into the mostly Wino LSP - a fact which has significant implications for dark matter discovery (Section 6.3). All these considerations combine to strongly suggest that a Wino-like LSP with mass $\sim 140 - 200$ GeV constitutes a significant fraction of the dark matter. As emphasized in [@Acharya:2008bk; @Acharya:2009zt], in order to obtain about the right relic density from the moduli decays, the LSP must be a Wino-like particle, with a large annihilation cross section of about $3 \times 10^{-24} \operatorname{cm}^3/\mbox{s}$. A non-thermal history dominated by moduli and a Wino LSP give a consistent picture for dark matter from the compactified string theory. Also encouraging is the fact that the PAMELA satellite data on positrons and antiprotons can be consistently described by a Wino LSP [@Grajek:2008jb; @Hisano:2008ti; @Kane:2009if; @Feldman:2009wv; @Chen:2010yi; @Chen:2010kq]. More recently, by also considering Wino annihilations into photons and Z-bosons, one finds a cross-section of about $10^{-26} \operatorname{cm}^3/\mbox{s}$ – a fact relevant for future Fermi data. Direct Detection of Dark Matter ------------------------------- In December 2009, CDMS reported at most two possible WIMP candidate events, with a high likelihood of being background [@Ahmed:2009zw].
Combined with their previous data, this amounts to a bound on the spin-independent scattering cross-section of $\sigma_{si} \lesssim 6 \times 10^{-44} \text{ cm}^2$ for a WIMP of mass around $200$ GeV. More recently, the XENON100 experiment [@Aprile:2010um] reported observing no events after their first 11 days of running, slightly strengthening the CDMS bound. In the near future, XENON100 is expected to report results that will probe much smaller scattering cross sections, $\sigma_{SI} \sim 2\times 10^{-45} \text{cm}^2$. We will see that even this region is out of reach given the $M$ theory predictions we calculate. In the decoupling limit, defined by the pseudoscalar mass being much larger than the Z-boson mass, $m_{A^0} \gg M_{Z}$, the charged and heavy CP-even Higgses are also heavy, $m_{H^{\pm}} \simeq m_{H^{0}} \simeq m_{A^{0}}$. The other Higgs boson, $h^0$, remains light and behaves in the same way as the SM Higgs boson. The lower bound on its mass corresponds to the bound on the SM Higgs boson, namely 114 GeV[^8][@Barate:2003sz]. All the models consistent with all the theoretical and phenomenological constraints have a light Higgs mass close to this LEP limit. Since the squarks are also heavy in the $G_2$-MSSM, light Higgs boson exchange gives the only substantial contribution to the spin-independent scattering cross sections. The scattering of the LSP off nuclei is via its Higgsino component. While the LSP will be mostly Wino-like, the prediction that $\mu$ is of order the TeV scale implies that the LSP wavefunction can have non-trivial Higgsino mixing.
Following [@Cohen:2010gj] we estimate the size of the direct detection cross section in the decoupling limit to be $$\sigma_{\rm SI} \left( \chi N \rightarrow \chi N \right) \approx 5 \times 10^{-45} \text{cm}^2 \left( \frac{115\operatorname{GeV}}{m_h} \right)^4 \left(\frac{Z_{H_u}\sin\beta - Z_{H_d}\cos\beta}{0.1}\right)^2 \left( Z_W - \tan\theta_W Z_B \right)^2 \label{eqn:tim}$$ where the $Z$’s give the composition of the LSP, $$\chi = Z_B\, \tilde{B} + Z_W\, \tilde{W} + Z_{H_d}\, \tilde{H}_d + Z_{H_u}\, \tilde{H}_u.$$ This gives us an estimate of the largest direct detection scattering cross sections, which naively, for $Z_{H_u} \sim 0.1$, may seem to be very close to the reach of XENON. Eq. (\[eqn:tim\]) can be further simplified with the aid of analytical expressions for the neutralino mass matrix eigenvalues and eigenvectors [@ElKheishen:1992yv; @Barger:1993gh; @Bertone:2004pz]. Taking the limit $ M_1 = M_2 $, which maximizes the scattering cross section for fixed $\mu$ and $\tan\!\beta$, (\[eqn:tim\]) becomes $$\sigma^{\rm MSSM}_{\rm SI} \left( \chi N \rightarrow \chi N \right) \approx 6 \times 10^{-45} \text{cm}^2 \left( \frac{115 \operatorname{GeV}}{m_h} \right)^4 \left(\frac{1 \operatorname{TeV}}{\mu}\right)^2 \left( \frac{\sin2\beta+M_{2}/\mu}{1 - (M_2/\mu)^2} \right)^2 \label{eqn:upperlimit}$$ which falls off with both $\tan\!\beta$ and $\mu$. Allowing for the variation in $M_1$ and $M_2$ in the $G_2$-MSSM will only decrease this fraction. The value of $M_2/\mu$ is typically around $0.1\sim 0.2$. The parameters for three different models, along with their scattering cross sections, can be seen in Table 1 and are appropriately labeled in Figure 3. However, as shown in the previous section, when considering degenerate scalar masses at the unification scale, EWSB imposes that small $\mu$ corresponds to large $\tan\!\beta$, and small $\tan\!\beta$ corresponds to large $\mu$. Hence, large cross-sections, of order the XENON100 reach, are not attainable in this region.
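As a spot check, evaluating (\[eqn:upperlimit\]) at the Model 3 point of Table 1 ($\mu = 1.77$ TeV, $\tan\!\beta = 2.87$, $M_2 = 138$ GeV, $m_h = 115$ GeV) reproduces the quoted $\sim 1\times 10^{-45} \operatorname{cm}^2$. This is our own numerical sketch of the formula, not a DarkSUSY computation:

```python
# Simplified spin-independent cross-section estimate (eqn:upperlimit),
# evaluated at the Table 1 Model 3 inputs.
sigma0 = 6e-45   # cm^2, overall normalization from the text
m_h, mu, tanb, M2 = 115.0, 1770.0, 2.87, 138.0   # GeV, GeV, -, GeV

sin2b = 2 * tanb / (1 + tanb**2)
r = M2 / mu
sigma_SI = (sigma0 * (115.0 / m_h)**4 * (1000.0 / mu)**2
            * ((sin2b + r) / (1 - r**2))**2)   # ~ 9.5e-46 cm^2
```

The same expression also makes the inverse correlation visible: raising $\tan\!\beta$ lowers $\sin 2\beta$, and raising $\mu$ lowers the prefactor, so the two EWSB-allowed branches both cap the cross-section near $10^{-45} \operatorname{cm}^2$.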
To verify this we perform a scan of parameter space, using DarkSUSY [@Gondolo:2004sc]. The results are shown in Figure \[fig:xsecuniversal\], where it is seen that the largest scattering cross-sections are $\sim 1 \times 10^{-45} \text{cm}^2$, close to, but slightly beyond, the reach of XENON100. In Figure \[fig:xsecuniversal\] we also scan over the $G_2$-MSSM parameter space, while requiring that the ratio $ \hat{m}^2_{H_u}: \hat{m}^2_{U_3} : \hat{m}^2_{Q_3} = 3 : 2: 1$ be accurate to within $20\%$. The spin-independent scattering cross-section reaches an upper limit of $1 \times 10^{-45}\operatorname{cm}^2$, just beyond the XENON100 reach. Since this is the region where the largest cross-sections appear, we can conclude that if the proposed solution of the $\mu$-problem, along with moduli stabilization in the $G_2$-MSSM, is the model of nature, the XENON100 experiment will not observe a dark matter signal soon, but its next run and upgraded detectors may do so.

![Spin-independent scattering cross-sections vs $\tan\!\beta$. The region shown scans over the $G_2$-MSSM parameter space where the scalar mass ratio $\hat{m}^2_{H_u}\!:\! \hat{m}^2_{U_3} \!:\! \hat{m}^2_{Q_3} = 3\!:\! 2\!:\! 1$ is required to be accurate within $20\%$. All points satisfy the constraint $\mu m_{3/2} < B\!\mu < 3\,\mu m_{3/2}$, have a SM-like Higgs with mass $m_h \ge 110$ GeV, and have EWSB. We also list the parameters for the 3 models in Table 1. In the region where EWSB, supergravity, and phenomenological constraints are satisfied, the upper limit on $\sigma_{SI}$ is robust, but the lower limit can decrease if the sign of $\mu$ is reversed.[]{data-label="fig:xsecuniversal"}](mu_new.pdf){width="100.00000%"}

                                          Model 1               Model 2               Model 3
  --------------------------------------- --------------------- --------------------- ---------------------
  $M_{\text{3/2}}$                        17.8 TeV              18.1 TeV              17.9 TeV
  $\sqrt{B\mu_{\text{GUT}}}$              9.75 TeV              10.4 TeV              9.29 TeV
  $\mu_{\text{GUT}}$                      3.79 TeV              2.10 TeV              1.69 TeV
  $M_1$                                   151 GeV               153 GeV               150 GeV
  $M_2$                                   145 GeV               143 GeV               138 GeV
  $\mu$                                   3.89 TeV              2.15 TeV              1.77 TeV
  $M_A$                                   18.8 TeV              19.0 TeV              18.2 TeV
  $m_h$                                   110 GeV               110 GeV               115 GeV
  $M_{\chi_1}$                            141 GeV               143 GeV               141 GeV
  $M_{\chi_2}$                            143 GeV               147 GeV               145 GeV
  $M_{\chi_1^\pm}$                        141 GeV               144 GeV               142 GeV
  $Z_{\tilde{W}}$                         0.94                  0.91                  0.91
  $Z_{\tilde{B}}$                         $-0.35$               $-0.41$               $-0.41$
  $Z_{\tilde{H}_d}$                       $-0.02$               $-0.04$               $-0.05$
  $Z_{\tilde{H}_u}$                       0.01                  0.02                  0.02
  $\tan\beta$                             2.53                  2.37                  2.87
  $\sigma_{\text{SI}}\ [\text{cm}^2]$     $3\times 10^{-46}$    $9\times 10^{-46}$    $1\times 10^{-45}$
  $\sigma_{\text{SD}}\ [\text{cm}^2]$     $5\times 10^{-45}$    $5\times 10^{-44}$    $1\times 10^{-43}$

  : High-scale and low-scale parameters for 3 models with larger spin-independent scattering cross sections. All models shown belong to the parameter space where the scalar mass ratio $\hat{m}^2_{H_u}\!:\! \hat{m}^2_{U_3} \!:\! \hat{m}^2_{Q_3} = 3\!:\! 2\!:\! 1$ is accurate within $20\%$. We assume that details of the calculation and software outputs are sufficiently uncertain to allow $m_h \gtrsim 110$ GeV to be consistent with LEP bounds.

Conclusions
===========

We have argued that if our universe is described by $M$ theory compactified on a manifold of $G_2$ holonomy, with doublet-triplet splitting solved in the way originally proposed by Witten [@Witten:2001bf], then there is a simple solution to the $\mu$-problem: strong coupling dynamics in the hidden sector will generate a non-perturbative potential for the moduli, which stabilizes all the moduli vevs and breaks the symmetry forbidding $\mu$. Then, following the numerical analysis done in the $G_2$-MSSM [@Acharya:2008zi], the breaking will generate $\mu \sim \langle \frac{S}{m_{pl}} \rangle m_{3/2} \sim 0.1 ~m_{3/2} \sim 2 \operatorname{TeV}$. 
This then implies a non-zero Higgsino component in the mostly-Wino LSP, which in turn gives an upper limit of about $1 \times 10^{-45}\operatorname{cm}^2$ on the spin-independent scattering cross-section, somewhat below the reach of the XENON100 experiment, as well as a lower limit of about $10^{-46} \operatorname{cm}^2$. The Wino-like LSP can also account for the PAMELA positron and antiproton excesses [@Adriani:2008zr; @Kane:2009if], and gives about the desired relic density for a non-thermal cosmological history [@Acharya:2010af], as expected in theories with moduli. Since the scalars are of order $m_{3/2} \gtrsim 20 $ TeV, the Higgs sector is decoupled, and the light Higgs boson behaves like a Standard Model one. Its mass is predicted to be of order 110-120 GeV. If we insist on a good description of the PAMELA data plus a consistent compactification, we find an LSP mass from about 140-155 GeV, and an annihilation cross section of $2-3.5 \times 10^{-24} \text{cm}^3/\text{s}$. The annihilation cross section to $\gamma/Z$ ranges from $(0.7-1.2) \times 10^{-26}\ \text{cm}^3/\text{s}$. Additionally, we noted that an exact $R$-parity could arise through ‘partial symmetry breaking’, though this isn’t obviously motivated by the theory itself. An alternative is that $R$-parity is either an exact remnant of a broken continuous gauge symmetry, or only an approximate symmetry of larger broken discrete and continuous groups. In either case, this requires the inclusion of additional $U(1)$ gauge symmetries, suggesting that the GUT group is larger than $SU(5)$, and may originate from an $E_8$ singularity.

Acknowledgments {#acknowledgments .unnumbered}
===============

We appreciate helpful conversations with Tim Cohen, Daniel Feldman, Piyush Kumar, Paul Langacker, Joseph Marsano, Aaron Pierce, and Lian-Tao Wang. B.A. is grateful to the University of Michigan Physics Department and MCTP for support, and E.K. 
is grateful for a String Vacuum Project Graduate Fellowship funded through NSF grant PHY/0917807. This work was supported by the DOE Grant \#DE-FG02-95ER40899.

Appendix A: Largest Spin Independent Cross Sections {#appendix-a-largest-spin-independent-cross-sections .unnumbered}
===================================================

Following [@Cohen:2010gj], the spin-independent cross section for the LSP scattering off a nucleon is given in the decoupling limit ($M_Z \ll M_A$) by the approximation $$\sigma_{\rm SI} \left( \chi N \rightarrow \chi N \right) \approx 5 \times 10^{-45} \text{cm}^2 \left( \frac{115\operatorname{GeV}}{m_h} \right)^4 \left(\frac{Z_{H_u}\sin\beta - Z_{H_d}\cos\beta}{0.1}\right)^2 \left( Z_W - \tan\theta_W Z_B \right)^2 \label{eqn:tim}$$ where the $Z$’s give the composition of the LSP $$\chi \equiv Z_B\,\tilde{B}+Z_W\,\tilde{W}\,+Z_{H_d}\,\tilde{H}_d+Z_{H_u}\,\tilde{H}_u.$$ Consider the neutralino mass matrix [@Frere:1983dd]: $$\mathcal{M} = \begin{pmatrix} M_1 & 0 & -M_Z \cos\beta\sin\theta_W & M_Z\sin\beta\sin\theta_W \\ 0 & M_2 & M_Z \cos\beta\cos\theta_W & -M_Z\sin\beta\cos\theta_W \\ -M_Z\cos\beta\sin\theta_W & M_Z\cos\beta\cos\theta_W & 0 & -\mu \\ M_Z\sin\beta\sin\theta_W & -M_Z\sin\beta\cos\theta_W & -\mu & 0 \end{pmatrix} \label{eqn:massmatrix}$$ in the $\{\tilde{B},\tilde{W},\tilde{H}_d,\tilde{H}_u\}$ basis. The analytical expressions [@ElKheishen:1992yv; @Barger:1993gh; @Bertone:2004pz] for the components of the LSP can be written as: $$\begin{aligned} \alpha Z_{B} &= z_B = - \sin\theta_W \nonumber\\ \alpha Z_{W} &= z_W = \cos\theta_W \frac{M_1-M_\chi}{M_2-M_\chi} = \cos\theta_W \frac{\left( M_1- M_\chi \right)^2}{\Delta} \nonumber \\ \alpha Z_{H_d} &= z_ {H_d} = \frac{\mu\left( M_1 - M_\chi \right)\left( M_2 - M_\chi \right) + M_Z^2\sin\beta\cos\beta\left( \left( M_1 - M_2 \right)\cos^2\theta_W +M_2 - M_\chi \right)}{M_Z\left( M_2 - M_\chi \right)\left( -\mu\cos\beta + M_\chi\sin\beta \right)} \nonumber\\ \alpha Z_{H_u} &= z_{H_u} = 
\frac{M_\chi \left( M_1 - M_\chi \right)\left( M_2 - M_\chi \right) + M_Z^2\cos^2\beta\left( \left( M_1 - M_2 \right)\cos^2\theta_W +M_2 - M_\chi \right)}{M_Z\left( M_2 - M_\chi \right)\left( -\mu\cos\beta + M_\chi\sin\beta \right)} \label{eqn:components}\end{aligned}$$ where $ \alpha = \sqrt{ z_{B}^2+z_{W}^2+z_{H_d}^2+ z_{H_u}^2}$ is an overall normalization factor and $\Delta\equiv(M_\chi -M_1)(M_\chi-M_2)$. The combination $Z_{H_u}\sin\beta-Z_{H_d}\cos\beta$, which appears in the scattering cross section, takes an especially simple form: $$Z_{H_u}\sin\beta-Z_{H_d}\cos\beta = \frac{\left( M_\chi\sin\beta - \mu\cos\beta \right)\left( M_1-M_\chi \right)\left( M_2-M_\chi \right)}{M_Z\left( M_2 - M_\chi \right)\left( -\mu\cos\beta+M_\chi\sin\beta \right)} = \frac{M_1-M_\chi}{M_Z}. \label{eqn:components2}$$ It is clear from (\[eqn:components2\]) that as $M_1 - M_\chi$ increases, $Z_{H_u}\sin\beta-Z_{H_d}\cos\beta$ grows more slowly than the $Z_{W}$ component. Thus after normalization both the $\tilde{H}_u$ and the $\tilde{H}_d$ components will decrease. So the maximum of $Z_{H_u}\sin\beta-Z_{H_d}\cos\beta$ is realized when $M_1 - M_\chi$ is minimal. The eigenvalues of the neutralino mass matrix (\[eqn:massmatrix\]) are given by the solutions to: $$\left( x - M_1 \right)\left( x - M_2 \right)\left( x - \mu \right)\left( x + \mu \right)+\left(M_1\cos^2\theta_W +M_2\sin^2\theta_W\right)M_Z^2\mu\sin2\beta = 0. \label{eqn:eigenvalue}$$ The LSP mass, corresponding to $M_{\chi} \equiv x$, can then be found by taking the limit $M_\chi \ll \mu$, so that (\[eqn:eigenvalue\]) is simply a quadratic equation. It is then easy to see that the minimal value of $M_1 - M_\chi$, which maximizes $Z_{H_u}\sin\beta-Z_{H_d}\cos\beta$, corresponds to the situation where $M_1 - M_2$ is also minimized. Additionally, when $M_1 = M_2$, the term $Z_W - \tan\theta_W Z_B$ also reaches its maximum. Thus the maximum scattering cross sections will occur when $M_1 = M_2$. 
To normalize the expressions in (\[eqn:components\]) (i.e. finding $\alpha$) is tedious. Instead, a new basis is defined where $\tilde{\gamma} = \cos\theta_W \tilde{B} + \sin\theta_W \tilde{W}$ and $\tilde{Z} = -\sin\theta_W \tilde{B} + \cos\theta_W \tilde{W}$; in the supersymmetric limit, these are the superpartners of the photon and $Z$-boson, respectively. The new mass matrix, in the $\{ \tilde{\gamma},\tilde{Z},\tilde{H}_d,\tilde{H}_u \}$ basis, is $$\mathcal{M} = \begin{pmatrix} M_1{\cos\theta_W}^2 +M_2{\sin\theta_W}^2 & (M_2-M_1)\sin\theta_W\cos\theta_W & 0 & 0 \\ (M_2-M_1)\sin\theta_W\cos\theta_W & M_2{\cos\theta_W}^2 +M_1{\sin\theta_W}^2 & M_Z \cos\beta & -M_Z\sin\beta \\ 0 & M_Z\cos\beta & 0 & -\mu \\ 0 & -M_Z\sin\beta & -\mu & 0 \end{pmatrix} \label{eqn:massmatrix2}$$ Taking the limit $M_1 = M_2 \equiv M$, one immediately finds that $\tilde{\gamma}$ is an eigenvector with mass eigenvalue $M$. The next lightest eigenvector of the remaining $3\times 3$ sub-matrix will be mostly $\tilde{Z}$, and to leading order in $M_Z/\mu$ its mass is $$M_\chi \simeq M - \frac{M_Z^2}{\mu}\left( \frac{\sin 2\beta + M/\mu}{1 - M^2/\mu^2} \right).$$ Next we will assume that the phases of $M$ and $\mu$ are such that the absolute value of $ M_\chi$ is smaller than $|M|$, so that it is indeed the LSP. The other scenario, in which the LSP is mostly $\tilde{\gamma}$, will have a negligible scattering cross-section. 
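Both statements above — that $\tilde{\gamma}$ decouples exactly for $M_1 = M_2$, and that the mostly-$\tilde{Z}$ state has leading-order mass $M_\chi \approx M - (M_Z^2/\mu)\,(\sin 2\beta + M/\mu)/(1 - M^2/\mu^2)$ — can be checked by diagonalizing (\[eqn:massmatrix\]) numerically. The sketch below uses an illustrative, hypothetical parameter point (not one of the Table 1 models):

```python
import numpy as np

# Illustrative point (hypothetical, not a Table 1 model), with M1 = M2 = M
M, mu, tan_beta = 150.0, 2000.0, 2.4   # GeV
MZ, sin2_thw = 91.19, 0.231            # Z mass, sin^2(theta_W)
sw, cw = np.sqrt(sin2_thw), np.sqrt(1.0 - sin2_thw)
b = np.arctan(tan_beta)
sb, cb = np.sin(b), np.cos(b)

# Neutralino mass matrix in the {B~, W~, H~d, H~u} basis, Eq. (massmatrix)
Mmat = np.array([
    [M,          0.0,        -MZ*cb*sw,  MZ*sb*sw],
    [0.0,        M,           MZ*cb*cw, -MZ*sb*cw],
    [-MZ*cb*sw,  MZ*cb*cw,    0.0,      -mu      ],
    [ MZ*sb*sw, -MZ*sb*cw,   -mu,        0.0     ],
])
evals = np.linalg.eigvalsh(Mmat)

# For M1 = M2 the photino decouples, so M is an exact eigenvalue
assert np.any(np.isclose(evals, M))

# Leading-order mass of the mostly-Z~ state
mchi = M - (MZ**2 / mu) * (np.sin(2*b) + M/mu) / (1.0 - (M/mu)**2)
print(mchi)  # ~146.7 GeV for these inputs, close to an exact eigenvalue
```

The higher-order corrections are suppressed by additional powers of $M_Z/\mu$, so the formula agrees with the exact eigenvalue to well below a GeV at this point.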
Diagonalizing the remaining $3\times 3$ sub-matrix, the coefficients of the $\tilde{H}_u$ and $\tilde{H}_d$ components to leading order are $$\begin{aligned} Z_{H_d} &= \frac{1}{2}\left( \frac{\left(\sin\beta - \cos\beta\right)M_Z}{\mu+M} + \frac{\left(\sin\beta + \cos\beta\right)M_Z}{\mu - M} \right) = \frac{M_Z\left( \mu\sin\beta+M\cos\beta \right)}{\mu^2 - M^2} \nonumber \\ Z_{H_u} &= \frac{1}{2}\left( \frac{\left(\sin\beta - \cos\beta\right)M_Z}{\mu+M} -\frac{\left(\sin\beta + \cos\beta\right)M_Z}{\mu - M} \right) = -\frac{M_Z\left( \mu\cos\beta+M\sin\beta \right)}{\mu^2 - M^2} \label{eqn:hu}\end{aligned}$$ and, from the definition of $\tilde{Z}$, $$Z_W - \tan\theta_W Z_B = {\cos\theta_W}^{-1}. \label{eqn:zu}$$ Finally, using (\[eqn:hu\]) and (\[eqn:zu\]) as inputs to (\[eqn:tim\]), the upper limit for the cross section is $$\sigma_{\rm SI} \left( \chi N \rightarrow \chi N \right) \approx 6 \times 10^{-45} \text{cm}^2 \left( \frac{115 \operatorname{GeV}}{m_h} \right)^4 \left(\frac{1 \operatorname{TeV}}{\mu}\right)^2 \left( \frac{\sin2\beta+M_{2}/\mu}{1 - (M_2/\mu)^2} \right)^2 \label{eqn:upperlimit}$$ From the discussion in the text we expect $M_2 /\mu \lesssim 0.2$, $\sin{2\beta} \lesssim 0.8$ and $\mu \gtrsim 1 \operatorname{TeV}$, giving largest scattering cross-sections around $\sigma_{\rm SI} \lesssim 6 \times 10^{-45} \text{cm}^2$. However, as discussed in Section 6.3, the constraints cannot all be satisfied simultaneously, so in practice only a cross section of about $10^{-45} \operatorname{cm}^2$ could be achieved.

[10]{} S. P. Martin, [*[A Supersymmetry Primer]{}*]{}, [[hep-ph/9709356]{}](http://xxx.lanl.gov/abs/hep-ph/9709356). J. E. Kim and H. P. Nilles, [*[The mu Problem and the Strong CP Problem]{}*]{}, [*Phys.Lett.*]{} [**B138**]{} (1984) 150. I. Antoniadis, E. Gava, K. Narain, and T. Taylor, [*[Effective mu term in superstring theory]{}*]{}, [*Nucl.Phys.*]{} [**B432**]{} (1994) 187–204, \[[[hep-th/9405024]{}](http://xxx.lanl.gov/abs/hep-th/9405024)\]. P. Nath and T. R. 
Taylor, [*[Modular invariance, soft breaking, mu and tan beta in superstring models]{}*]{}, [*Phys.Lett.*]{} [**B548**]{} (2002) 77–87, \[[[hep-ph/0209282]{}](http://xxx.lanl.gov/abs/hep-ph/0209282)\]. D. Suematsu and Y. Yamagishi, [*[Radiative symmetry breaking in a supersymmetric model with an extra U(1)]{}*]{}, [*Int.J.Mod.Phys.*]{} [**A10**]{} (1995) 4521–4536, \[[[ hep-ph/9411239]{}](http://xxx.lanl.gov/abs/hep-ph/9411239)\]. M. Cvetic and P. Langacker, [*[Implications of Abelian extended gauge structures from string models]{}*]{}, [*Phys.Rev.*]{} [**D54**]{} (1996) 3570–3579, \[[[ hep-ph/9511378]{}](http://xxx.lanl.gov/abs/hep-ph/9511378)\]. M. Cvetic, D. A. Demir, J. Espinosa, L. Everett, and P. Langacker, [ *[Electroweak breaking and the mu problem in supergravity models with an additional U(1)]{}*]{}, [*Phys.Rev.*]{} [**D56**]{} (1997) 2861, \[[[hep-ph/9703317]{}](http://xxx.lanl.gov/abs/hep-ph/9703317)\]. O. Lebedev and S. Ramos-Sanchez, [*[The NMSSM and String Theory]{}*]{}, [ *Phys.Lett.*]{} [**B684**]{} (2010) 48–51, \[[[arXiv:0912.0477]{}](http://xxx.lanl.gov/abs/arXiv:0912.0477)\]. S. Ramos-Sanchez, [*[The mu-problem, the NMSSM and string theory]{}*]{}, [[arXiv:1003.1307]{}](http://xxx.lanl.gov/abs/arXiv:1003.1307). M. Ratz, [*[Stringy Surprises]{}*]{}, [*Prog.Theor.Phys.Suppl.*]{} [**180**]{} (2010) 96–111, \[[[ arXiv:1003.0549]{}](http://xxx.lanl.gov/abs/arXiv:1003.0549)\]. J. Casas and C. Munoz, [*[A Natural solution to the mu problem]{}*]{}, [ *Phys.Lett.*]{} [**B306**]{} (1993) 288–294, \[[[hep-ph/9302227]{}](http://xxx.lanl.gov/abs/hep-ph/9302227)\]. R. Kappl, H. P. Nilles, S. Ramos-Sanchez, M. Ratz, K. Schmidt-Hoberg, [ *et. al.*]{}, [*[Large hierarchies from approximate R symmetries]{}*]{}, [ *Phys.Rev.Lett.*]{} [**102**]{} (2009) 121602, \[[[arXiv:0812.2120]{}](http://xxx.lanl.gov/abs/arXiv:0812.2120)\]. L. Ibanez and A. 
Uranga, [*[Instanton induced open string superpotentials and branes at singularities]{}*]{}, [*JHEP*]{} [**0802**]{} (2008) 103, \[[[arXiv:0711.1316]{}](http://xxx.lanl.gov/abs/arXiv:0711.1316)\]. L. Ibanez and R. Richter, [*[Stringy Instantons and Yukawa Couplings in MSSM-like Orientifold Models]{}*]{}, [*JHEP*]{} [**0903**]{} (2009) 090, \[[[arXiv:0811.1583]{}](http://xxx.lanl.gov/abs/arXiv:0811.1583)\]. D. Green and T. Weigand, [*[Retrofitting and the mu Problem]{}*]{}, [[arXiv:0906.0595]{}](http://xxx.lanl.gov/abs/arXiv:0906.0595). M. Cvetic, J. Halverson, and R. Richter, [*[Realistic Yukawa structures from orientifold compactifications]{}*]{}, [*JHEP*]{} [**0912**]{} (2009) 063, \[[[ arXiv:0905.3379]{}](http://xxx.lanl.gov/abs/arXiv:0905.3379)\]. P. Langacker and M.-x. Luo, [*[Implications of precision electroweak experiments for M(t), rho(0), sin\*\*2-Theta(W) and grand unification]{}*]{}, [ *Phys.Rev.*]{} [**D44**]{} (1991) 817–822. H. Murayama and A. Pierce, [*[Not even decoupling can save minimal supersymmetric SU(5)]{}*]{}, [*Phys.Rev.*]{} [**D65**]{} (2002) 055009, \[[[hep-ph/0108104]{}](http://xxx.lanl.gov/abs/hep-ph/0108104)\]. E. Witten, [*[Deconstruction, G(2) holonomy, and doublet triplet splitting]{}*]{}, [[ hep-ph/0201018]{}](http://xxx.lanl.gov/abs/hep-ph/0201018). B. S. Acharya, K. Bobkov, G. Kane, P. Kumar, and D. Vaman, [*[An M theory Solution to the Hierarchy Problem]{}*]{}, [*Phys.Rev.Lett.*]{} [**97**]{} (2006) 191601, \[[[ hep-th/0606262]{}](http://xxx.lanl.gov/abs/hep-th/0606262)\]. B. S. Acharya, [*[On Realizing N=1 superYang-Mills in M theory]{}*]{}, [[hep-th/0011089]{}](http://xxx.lanl.gov/abs/hep-th/0011089). B. S. Acharya, [*[M theory, Joyce orbifolds and super Yang-Mills]{}*]{}, [ *Adv. Theor. Math. Phys.*]{} [**3**]{} (1999) 227–248, \[[[hep-th/9812205]{}](http://xxx.lanl.gov/abs/hep-th/9812205)\]. E. 
Witten, [*[Anomaly cancellation on G(2) manifolds]{}*]{}, [[hep-th/0108165]{}](http://xxx.lanl.gov/abs/hep-th/0108165). B. S. Acharya and E. Witten, [*[Chiral fermions from manifolds of G(2) holonomy]{}*]{}, [[ hep-th/0109152]{}](http://xxx.lanl.gov/abs/hep-th/0109152). B. S. Acharya and S. Gukov, [*[M theory and singularities of exceptional holonomy manifolds]{}*]{}, [*Phys.Rept.*]{} [**392**]{} (2004) 121–189, \[[[hep-th/0409191]{}](http://xxx.lanl.gov/abs/hep-th/0409191)\]. T. Pantev and M. Wijnholt, [*[Hitchin’s Equations and M-Theory Phenomenology]{}*]{}, [[ arXiv:0905.1968]{}](http://xxx.lanl.gov/abs/arXiv:0905.1968). M. B. Green and J. H. Schwarz, [*[Anomaly Cancellation in Supersymmetric D=10 Gauge Theory and Superstring Theory]{}*]{}, [*Phys.Lett.*]{} [**B149**]{} (1984) 117–122. S. M. Barr, [*[A New Symmetry Breaking Pattern for SO(10) and Proton Decay]{}*]{}, [*Phys.Lett.*]{} [**B112**]{} (1982) 219. J. Derendinger, J. E. Kim, and D. V. Nanopoulos, [*[Anti-SU(5)]{}*]{}, [ *Phys.Lett.*]{} [**B139**]{} (1984) 170. I. Antoniadis, J. R. Ellis, J. Hagelin, and D. V. Nanopoulos, [ *[Supersymmetric Flipped SU(5) Revitalized]{}*]{}, [*Phys.Lett.*]{} [**B194**]{} (1987) 231. E. Kuflik and J. Marsano, [*[Comments on Flipped SU(5) (and F-theory)]{}*]{}, [[1009.2510]{}](http://xxx.lanl.gov/abs/1009.2510). B. S. Acharya, K. Bobkov, G. L. Kane, P. Kumar, and J. Shao, [*[Explaining the Electroweak Scale and Stabilizing Moduli in M Theory]{}*]{}, [*Phys.Rev.*]{} [**D76**]{} (2007) 126010, \[[[ hep-th/0701034]{}](http://xxx.lanl.gov/abs/hep-th/0701034)\]. B. S. Acharya, G. Kane, and E. Kuflik, [*[String Moduli Phenomenology, Cosmological History, Supersymmetry Breaking, and Dark Matter]{}*]{}, [[arXiv:1006.3272]{}](http://xxx.lanl.gov/abs/arXiv:1006.3272). D. D. Joyce, [*Compact Manifolds with Special Holonomy*]{}. Oxford University Press, 2000. B. S. Acharya, K. Bobkov, G. L. Kane, J. Shao, and P. 
Kumar, [*[The G(2)-MSSM: An M Theory motivated model of Particle Physics]{}*]{}, [ *Phys.Rev.*]{} [**D78**]{} (2008) 065038, \[[[arXiv:0801.0478]{}](http://xxx.lanl.gov/abs/arXiv:0801.0478)\]. Y. Hosotani, [*[Dynamical Gauge Symmetry Breaking as the Casimir Effect]{}*]{}, [*Phys.Lett.*]{} [**B129**]{} (1983) 193. E. Witten, [*[Symmetry Breaking Patterns in Superstring Models]{}*]{}, [ *Nucl.Phys.*]{} [**B258**]{} (1985) 75. H. M. Lee [*et. al.*]{}, [*[A unique $Z_4^R$ symmetry for the MSSM]{}*]{}, [[1009.0905]{}](http://xxx.lanl.gov/abs/1009.0905). G. Giudice and A. Masiero, [*[A Natural Solution to the mu Problem in Supergravity Theories]{}*]{}, [*Phys.Lett.*]{} [**B206**]{} (1988) 480–484. J. Wess and J. Bagger, [*[Supersymmetry and supergravity]{}*]{}. A. Brignole, L. E. Ibanez, and C. Munoz, [*[Soft supersymmetry-breaking terms from supergravity and superstring models]{}*]{}, [[hep-ph/9707209]{}](http://xxx.lanl.gov/abs/hep-ph/9707209). B. S. Acharya and M. Torabian, [*[Supersymmetry Breaking, Moduli Stabilization and Hidden U(1) Breaking in M-Theory]{}*]{}, [[1101.0108]{}](http://xxx.lanl.gov/abs/1101.0108). Collaboration, C. Amsler [*et. al.*]{}, [*[Review of Particle Physics]{}*]{}, [*Phys.Lett.*]{} [**B667**]{} (2008) 1. M. Ambroso and B. A. Ovrut, [*[The Mass Spectra, Hierarchy and Cosmology of B-L MSSM Heterotic Compactifications]{}*]{}, [[1005.5392]{}](http://xxx.lanl.gov/abs/1005.5392). H. K. Dreiner, [*[An Introduction to explicit R-parity violation]{}*]{}, [[hep-ph/9707435]{}](http://xxx.lanl.gov/abs/hep-ph/9707435). To be published in ’Perspectives on Supersymmetry’, Ed. by G.L. Kane, World Scientific. M. Reno and D. Seckel, [*[Primordial Nucleosynthesis: The Effects of Injecting Hadrons]{}*]{}, [*Phys.Rev.*]{} [**D37**]{} (1988) 3441. J. R. Ellis, G. Gelmini, J. L. Lopez, D. V. Nanopoulos, and S. 
Sarkar, [ *[Astrophysical constraints on massive unstable neutral relic particles]{}*]{}, [*Nucl.Phys.*]{} [**B373**]{} (1992) 399–437. V. Berezinsky, A. Masiero, and J. Valle, [*[Cosmological signatures of supersymmetry with spontaneously broken R-parity]{}*]{}, [*Phys.Lett.*]{} [ **B266**]{} (1991) 382–388. E. A. Baltz and P. Gondolo, [*[Limits on R-parity violation from cosmic ray antiprotons]{}*]{}, [*Phys. Rev.*]{} [**D57**]{} (1998) 7601–7606, \[[[hep-ph/9704411]{}](http://xxx.lanl.gov/abs/hep-ph/9704411)\]. A. Arvanitaki [*et. al.*]{}, [*[Decaying Dark Matter as a Probe of Unification and TeV Spectroscopy]{}*]{}, [*Phys. Rev.*]{} [**D80**]{} (2009) 055011, \[[[0904.2789]{}](http://xxx.lanl.gov/abs/0904.2789)\]. S. Shirai, F. Takahashi, and T. T. Yanagida, [*[R-violating Decay of Wino Dark Matter and electron/positron Excesses in the PAMELA/Fermi Experiments]{}*]{}, [*Phys. Lett.*]{} [**B680**]{} (2009) 485–488, \[[[0905.0388]{}](http://xxx.lanl.gov/abs/0905.0388)\]. S. King, G. Leontaris, and G. Ross, [*[Family symmetries in F-theory GUTs]{}*]{}, [*Nucl.Phys.*]{} [**B838**]{} (2010) 119–135, \[[[arXiv:1005.1025]{}](http://xxx.lanl.gov/abs/arXiv:1005.1025)\]. B. Allanach, [*[SOFTSUSY: a program for calculating supersymmetric spectra]{}*]{}, [*Comput.Phys.Commun.*]{} [**143**]{} (2002) 305–331, \[[[hep-ph/0104145]{}](http://xxx.lanl.gov/abs/hep-ph/0104145)\]. D. Feldman, G. Kane, R. Lu, and B. D. Nelson, [*[Dark Matter as a Guide Toward a Light Gluino at the LHC]{}*]{}, [*Phys.Lett.*]{} [**B687**]{} (2010) 363–370, \[[[1002.2430]{}](http://xxx.lanl.gov/abs/1002.2430)\]. J. L. Feng, K. T. Matchev, and T. Moroi, [*[Focus points and naturalness in supersymmetry]{}*]{}, [*Phys. Rev.*]{} [**D61**]{} (2000) 075005, \[[[hep-ph/9909334]{}](http://xxx.lanl.gov/abs/hep-ph/9909334)\]. K. L. Chan, U. Chattopadhyay, and P. 
Nath, [*[Naturalness, weak scale supersymmetry and the prospect for the observation of supersymmetry at the Tevatron and at the CERN LHC]{}*]{}, [*Phys.Rev.*]{} [**D58**]{} (1998) 096004, \[[[hep-ph/9710473]{}](http://xxx.lanl.gov/abs/hep-ph/9710473)\]. D. M. Pierce, J. A. Bagger, K. T. Matchev, and R.-j. Zhang, [*[Precision corrections in the minimal supersymmetric standard model]{}*]{}, [*Nucl. Phys.*]{} [**B491**]{} (1997) 3–67, \[[[hep-ph/9606211]{}](http://xxx.lanl.gov/abs/hep-ph/9606211)\]. B. S. Acharya, P. Kumar, K. Bobkov, G. Kane, J. Shao, [*et. al.*]{}, [ *[Non-thermal Dark Matter and the Moduli Problem in String Frameworks]{}*]{}, [ *JHEP*]{} [**0806**]{} (2008) 064, \[[[arXiv:0804.0863]{}](http://xxx.lanl.gov/abs/arXiv:0804.0863)\]. B. S. Acharya, G. Kane, S. Watson, and P. Kumar, [*[A Non-thermal WIMP Miracle]{}*]{}, [*Phys.Rev.*]{} [**D80**]{} (2009) 083529, \[[[0908.2430]{}](http://xxx.lanl.gov/abs/0908.2430)\]. P. Grajek, G. Kane, D. J. Phalen, A. Pierce, and S. Watson, [*[Neutralino Dark Matter from Indirect Detection Revisited]{}*]{}, [[0807.1508]{}](http://xxx.lanl.gov/abs/0807.1508). J. Hisano, M. Kawasaki, K. Kohri, and K. Nakayama, [*[Positron/Gamma-Ray Signatures of Dark Matter Annihilation and Big-Bang Nucleosynthesis]{}*]{}, [ *Phys.Rev.*]{} [**D79**]{} (2009) 063514, \[[[arXiv:0810.1892]{}](http://xxx.lanl.gov/abs/arXiv:0810.1892)\]. G. Kane, R. Lu, and S. Watson, [*[PAMELA Satellite Data as a Signal of Non-Thermal Wino LSP Dark Matter]{}*]{}, [*Phys.Lett.*]{} [**B681**]{} (2009) 151–160, \[[[ arXiv:0906.4765]{}](http://xxx.lanl.gov/abs/arXiv:0906.4765)\]. D. Feldman, Z. Liu, P. Nath, and B. D. Nelson, [*[Explaining PAMELA and WMAP data through Coannihilations in Extended SUGRA with Collider Implications]{}*]{}, [*Phys.Rev.*]{} [**D80**]{} (2009) 075001, \[[[arXiv:0907.5392]{}](http://xxx.lanl.gov/abs/arXiv:0907.5392)\]. N. Chen, D. Feldman, Z. Liu, P. Nath, and G. 
Peim, [*[Positron and Photon Compliant Higgsino Dark Matter and LHC-7]{}*]{}, [[1010.0939]{}](http://xxx.lanl.gov/abs/1010.0939). N. Chen, D. Feldman, Z. Liu, P. Nath, and G. Peim, [*[Low Mass Gluino within the Sparticle Landscape, Implications for Dark Matter, and Early Discovery Prospects at LHC-7]{}*]{}, [[ 1011.1246]{}](http://xxx.lanl.gov/abs/1011.1246). Collaboration, Z. Ahmed [*et. al.*]{}, [*[Dark Matter Search Results from the CDMS II Experiment]{}*]{}, [*Science*]{} [**327**]{} (2010) 1619–1621, \[[[0912.3592]{}](http://xxx.lanl.gov/abs/0912.3592)\]. Collaboration, E. Aprile [*et. al.*]{}, [*[First Dark Matter Results from the XENON100 Experiment]{}*]{}, [[1005.0380]{}](http://xxx.lanl.gov/abs/1005.0380). Collaboration, R. Barate [ *et. al.*]{}, [*[Search for the standard model Higgs boson at LEP]{}*]{}, [ *Phys. Lett.*]{} [**B565**]{} (2003) 61–75, \[[[hep-ex/0306033]{}](http://xxx.lanl.gov/abs/hep-ex/0306033)\]. T. Cohen, D. J. Phalen, and A. Pierce, [*[On the Correlation Between the Spin-Independent and Spin-Dependent Direct Detection of Dark Matter]{}*]{}, [ *Phys. Rev.*]{} [**D81**]{} (2010) 116001, \[[[1001.3408]{}](http://xxx.lanl.gov/abs/1001.3408)\]. M. El Kheishen, A. Aboshousha, and A. Shafik, [*[Analytic formulas for the neutralino masses and the neutralino mixing matrix]{}*]{}, [*Phys.Rev.*]{} [ **D45**]{} (1992) 4345–4348. V. D. Barger, M. Berger, and P. Ohmann, [*[The Supersymmetric particle spectrum]{}*]{}, [*Phys.Rev.*]{} [**D49**]{} (1994) 4908–4930, \[[[hep-ph/9311269]{}](http://xxx.lanl.gov/abs/hep-ph/9311269)\]. G. Bertone, D. Hooper, and J. Silk, [*[Particle dark matter: Evidence, candidates and constraints]{}*]{}, [*Phys. Rept.*]{} [**405**]{} (2005) 279–390, \[[[hep-ph/0404175]{}](http://xxx.lanl.gov/abs/hep-ph/0404175)\]. P. Gondolo, J. Edsjo, P. Ullio, L. Bergstrom, M. Schelke, [*et. 
al.*]{}, [ *[DarkSUSY: Computing supersymmetric dark matter properties numerically]{}*]{}, [*JCAP*]{} [**0407**]{} (2004) 008, \[[[astro-ph/0406204]{}](http://xxx.lanl.gov/abs/astro-ph/0406204)\]. Collaboration, O. Adriani [*et. al.*]{}, [*[An anomalous positron abundance in cosmic rays with energies 1.5-100 GeV]{}*]{}, [*Nature*]{} [**458**]{} (2009) 607–609, \[[[arXiv:0810.4995]{}](http://xxx.lanl.gov/abs/arXiv:0810.4995)\]. G. L. Kane and J. M. Frere, [*[On The Possibility Of Finding Light Uncolored Supersymmetric Partners At Present And Future Machines]{}*]{}, [*Nucl. Phys.*]{} [**B223**]{} (1983) 331.

[^1]: Presumably, $N$ is of the same order as the number of renormalizable coupling constants of the effective low energy theory.

[^2]: Note section 2. The “$i$”’s are not the same in $S$ and $z$.

[^3]: We leave the case of $N \geq 10^4$ for further study.

[^4]: The final term in (\[WRbreaking\]) can be rotated away in the superpotential by a unitary transformation on $(h_d, L)$. This rotation will induce additional contributions to the lepton violating coupling constants $\lambda^{\prime}$ and $\lambda^{\prime\prime}$ that are proportional to the Yukawa couplings. Assuming that $\mu \gtrsim \kappa $, their sizes are approximately $$\lambda^{\prime} \sim y_e\,\frac{\kappa}{\mu}, \qquad \lambda^{\prime\prime} \sim y_d\,\frac{\kappa}{\mu}. \label{muyukawa}$$

[^5]: If this $U(1)$ symmetry is to be broken to $R$-parity, then requiring the symmetry to be flavor blind, allowing for Higgs triplet masses, and allowing an explanation for neutrino masses, basically constrains the charges of the MSSM and right handed neutrino fields to be the $U(1)_\chi$ charges.

[^6]: See [@Feldman:2010uv] for general phenomenological discussions.

[^7]: See Section IV of [@Acharya:2008zi] for the precise definition of $\delta$.

[^8]: Since there are theoretical and calculational uncertainties with calculating the Higgs mass, we will consider models with $m_h \ge 110$ GeV.
Organizational and expressional uniqueness of a testis-specific mRNA transcript of protooncogene c-kit receptor in water buffalo Bubalus bubalis. Protooncogene c-kit receptor is implicated in spermatogenesis, melanogenesis, and hematopoiesis, and undergoes tissue/stage-specific alternative splicing. We have isolated the 2973-bp full-length cDNA sequence (CDS) of this gene from testis and other tissues of water buffalo Bubalus bubalis. Upon comparison, the c-kit sequences showed tissue-specific nucleotide changes resulting in novel truncated peptides. These peptides lacked intracellular and/or transmembrane domains in all the tissues except testis. Other alternatively spliced tissue-specific transcripts were also detected, which are integral parts of the open reading frame and have been reported in other mammals. Phylogenetic analysis of the sequences revealed a unique tyrosine kinase domain in buffalo. Copy number calculation and expression analysis of c-kit using real-time PCR established its single-copy status and highest expression (137-177 fold) in testis compared to that (least) in liver. c-kit expression was detected in semen samples, although 10 times lower than in testis. The highest expression of c-kit in testis and the presence of the mRNA transcript in sperm substantiate its predominant role in spermatogenesis. This study establishes the unequivocal involvement of the autosomal gene c-kit receptor in testicular function.
Q: JavaMail : FolderClosedException coming frequently

I'm using JavaMail to connect with Gmail and I'm keeping one store for all the actions (the store is set to static). The IMAPFolder instances are attached with IMAP listeners, so the folders are always kept open (folder close is never called). But after running for a few minutes I'm getting a FolderClosedException. After that exception the folder can be reopened, but the idle() command cannot be issued again, which results in a NullPointerException. Is there anything wrong with keeping folders open all the time? Thanks in advance.

===================================================================

[Edit] Here I'm pasting the actual code I'm doing the POC with. The NullPointerException comes when I check .isConnected() after reconnecting the store. Below is the run method of the thread which sends the idle() command to the store.

    public void run() {
        while (true) {
            try {
                System.out.println("Checking connectivity...");
                if (store.isConnected()) {
                    store.idle();
                    System.out.println("IDLE send...");
                } else {
                    Thread.sleep(5000);
                    System.out.println("Tring to connect...");
                    // Trying to reconnect to the store.
                    store.connect();
                    System.out.println("Previous store connected again");
                }
            } catch (InterruptedException ex) {
                System.out.println("InterruptedException...");
            } catch (StoreClosedException ex) {
                System.out.println("StoreClosedException...");
            } catch (MessagingException ex) {
                System.out.println("MessagingException...");
            }
        }
    }

Here is the stack trace:

    Exception in thread "Thread-1" java.lang.NullPointerException
        at com.sun.mail.imap.IMAPStore.waitIfIdle(IMAPStore.java:1881)
        at com.sun.mail.imap.IMAPStore.getStoreProtocol(IMAPStore.java:946)
        at com.sun.mail.imap.IMAPStore.isConnected(IMAPStore.java:1347)
        at pocworks.POCWorks1$IDLEThread.run(POCWorks1.java:125)

A: Generally, mail servers don't like you to keep connections open when you're not using them. 
Typical IMAP servers will give you 30 minutes before they time out an unused connection; Gmail may be more aggressive.
About Book

Volume 2 opens at the outbreak of the First World War and at the time of Janácek's lowest ebb. Within two years, however, his fortunes were transformed by the Prague production of Jenufa. This led to international fame and fortune and to the magnificent creative flowering in which the elderly composer wrote most of his best-known works. His personal life was affected by his public affair with Gabriela Horvátová and his friendship with Kamila Stösslová, whom he saw as the inspiration for many of his late works.

About John Tyrrell

John Tyrrell is Professor of Music at Cardiff University. His books include Czech Opera, Janácek's Operas and editions of the memoirs of Janácek's widow and Janácek's correspondence with Kamila Stösslová. He is co-author of the standard catalogue of Janácek's works and, with Sir Charles Mackerras, he edited Janácek's opera Jenufa. In 2002 he was awarded an honorary doctorate by the Masaryk University of Brno for his work on Janácek and Czech music.
Q: Reference Office Object Library for older versions... missing mso.dll

I do my development in Access 2016. I've created a custom shortcut menu (right-click menu) with VBA. For this VBA code to run, I have to enable the Microsoft Office 16.0 Object Library reference. Whenever I deploy this database to PCs with older versions of Access (2013 and 2010), the database looks for the Microsoft Office 16.0 Object Library, which does not exist there. I had hoped that Access would be smart enough to automatically select the appropriate Object Library for the version of Office that is installed. However, it does not, and the code will not run until I manually set the appropriate object library. Is there a better way to automate this? Is there some VBA code I can implement that will find the correct library? The only solution I've come up with is setting the Object Library in an older version of Access before deploying the database to other PCs. (There doesn't seem to be a problem finding newer Object Libraries, only older ones.) Thanks guys.

A: It seems you've early-bound your dependency to the 16.0 type library; early-bound references are always version-specific, and you can only early-bind to one specific version. Because you need to support multiple versions, you have to switch everything over to late binding. You haven't provided any code, so I'll give you a hypothetical example. Instead of this:

    Dim foo As Library.SomeType
    Set foo = New Library.SomeType
    foo.DoSomething Library.SomeEnumValue

You need to do this (and remove the early-bound reference from your project):

    Const SomeEnumValue As Long = 42 'Library.SomeEnumValue
    Dim foo As Object
    Set foo = CreateObject("Library.SomeType")
    foo.DoSomething SomeEnumValue

There's no automated way to do this that I know of; however, you may want to keep an eye on Rubberduck issue #1184, which aims specifically to make a refactoring tool exactly for this (full disclosure: I manage that open-source project).
Spatial Distribution and Number Density of Scatterers in the Upper Uranian Atmosphere

C. M. Walter, M. S. Marley (NMSU), H. B. Hammel (MIT)

Since the visit by Voyager 2 in 1986, the amount of haze in the stratosphere and upper troposphere of Uranus has increased by an order of magnitude. The full-disk albedo of the planet in narrowband near-infrared images taken at Apache Point Observatory from August 1995 to October 1996 is larger than can be accounted for by the Voyager-era haze. In an effort to confirm this and to determine the spatial distribution of the haze, we have examined high-resolution images (FWHM = 0.3-0.5) taken at the IRTF on Mauna Kea, HI, in late August 1995. The data were modeled using center-to-limb fits of the photometric quantity I/F along specified latitudes. The model I/F values were computed using a multi-layer adding-doubling code. Our dataset consisted of several images in a broadband K filter spread over six nights. The K filter is particularly useful because it encompasses strong CH4 and H2 absorption bands, making it highly sensitive to the atmosphere above the CH4 cloud. At these wavelengths the contribution to the reflected light from Rayleigh scattering is smaller than that from scattering by the haze, which allows the haze's effect to be effectively isolated and measured. Preliminary results confirm the order-of-magnitude increase and show that the distribution of haze is uniform across the disk.
[Axial computerised tomography - answered and unanswered questions for the ophthalmologist (author's transl)]. The results of 67 examinations with computerized axial tomography are presented with reference to the indications: 15 examinations were performed for exophthalmos, 11 for unexplained visual field defects, 9 for optic atrophy of unknown origin, and 8 each for optic neuritis, papilledema, and unclear visual disturbance. 8 further examinations were performed in various other cases. Axial computerized tomography proved to be an efficient complementary examination technique when limited to well-defined indications. The best results were obtained in examining exophthalmos and visual field defects.
Zayn Just Cancelled a Concert Because of His Anxiety He is very sorry. Zayn Malik has cancelled a UK concert this evening. He made the announcement on Instagram, posting a note to fans in which he apologized and explained that anxiety is preventing him from performing. "Unfortunately, my anxiety that has haunted me throughout the last few months has gotten the better of me," he wrote. "With the magnitude of the live event, I have suffered the worst anxiety of my career."